Today — June 23, 2024 — Actualités numériques

Tuesday SpaceX Launches a NOAA Satellite to Improve Weather Forecasts for Earth and Space

By: EditorDavid
June 23, 2024 at 17:59
On Tuesday a SpaceX Falcon Heavy rocket will launch a special satellite — a state-of-the-art weather-watcher from America's National Oceanic and Atmospheric Administration. It will complete a series of four GOES-R satellite launches that began in 2016. Space.com drills down into how these satellites have changed weather forecasts: More than seven years later, with three of the four satellites in the series orbiting the Earth, scientists and researchers say they are pleased with the results and how the advanced technology has been a game changer. "I think it has really lived up to its hype in thunderstorm forecasting. Meteorologists can see the convection evolve in near real-time and this gives them enhanced insight on storm development and severity, making for better warnings," John Cintineo, a researcher from NOAA's National Severe Storms Laboratory, told Space.com in an email. "Not only does the GOES-R series provide observations where radar coverage is lacking, but it often provides a robust signal before radar, such as when a storm is strengthening or weakening. I'm sure there have been many other improvements in forecasts and environmental monitoring over the last decade, but this is where I have most clearly seen improvement," Cintineo said. In addition to helping predict severe thunderstorms, each satellite has collected images and data on heavy rain events that could trigger flooding, detected low clouds and fog as they form, and made significant improvements to forecasts and services used during hurricane season. "GOES provides our hurricane forecasters with faster, more accurate and detailed data that is critical for estimating a storm's intensity, including cloud top cooling, convective structures, specific features of a hurricane's eye, upper-level wind speeds, and lightning activity," Ken Graham, director of NOAA's National Weather Service, told Space.com in an email.
Instruments such as the Advanced Baseline Imager have three times the spectral channels, four times the image quality, and five times the imaging speed of the previous GOES satellites. The Geostationary Lightning Mapper, flown for the first time on the GOES-R series, allows scientists to view lightning 24/7, both strikes that make contact with the ground and those that travel from cloud to cloud. "GOES-U and the GOES-R series of satellites provides scientists and forecasters weather surveillance of the entire western hemisphere, at unprecedented spatial and temporal scales," Cintineo said. "Data from these satellites are helping researchers develop new tools and methods to address problems such as lightning prediction, sea-spray identification (sea-spray is dangerous for mariners), severe weather warnings, and accurate cloud motion estimation. The instruments from GOES-R also help improve forecasts from global and regional numerical weather models, through improved data assimilation." The final satellite, launching Tuesday, includes a new sensor — the Compact Coronagraph — "that will monitor weather outside of Earth's atmosphere, keeping an eye on what space weather events are happening that could impact our planet," according to the article. "It will be the first near real time operational coronagraph that we have access to," Rob Steenburgh, a space scientist at NOAA's Space Weather Prediction Center, told Space.com on the phone. "That's a huge leap for us because up until now, we've always depended on a research coronagraph instrument on a spacecraft that was launched quite a long time ago."

Read more of this story at Slashdot.

Foundation Honoring 'Star Trek' Creator Offers $1M Prize for AI Startup Benefiting Humanity

By: EditorDavid
June 23, 2024 at 16:34
The Roddenberry Foundation — named for Star Trek creator Gene Roddenberry — "announced Tuesday that this year's biennial award would focus on artificial intelligence that benefits humanity," reports the Los Angeles Times: Lior Ipp, chief executive of the foundation, told The Times there's a growing recognition that AI is becoming more ubiquitous and will affect all aspects of our lives. "We are trying to ... catalyze folks to think about what AI looks like if it's used for good," Ipp said, "and what it means to use AI responsibly, ethically and toward solving some of the thorny global challenges that exist in the world...." Ipp said the foundation shares the broad concern about AI and sees the award as a means to potentially contribute to creating those guardrails... Inspiration for the theme was also borne out of the applications the foundation received last time around. Ipp said the prize, which is "issue-agnostic" but focused on early-stage tech, produced compelling uses of AI and machine learning in agriculture, healthcare, biotech and education. "So," he said, "we sort of decided to double down this year on specifically AI and machine learning...." Though the foundation isn't prioritizing a particular issue, the application states that it is looking for ideas that have the potential to push the needle on one or more of the United Nations' 17 sustainable development goals, which include eliminating poverty and hunger as well as boosting climate action and protecting life on land and underwater. The Foundation's most recent winner was Sweden-based Elypta, according to the article, "which Ipp said is using liquid biopsies, such as a blood test, to detect cancer early." "We believe that building a better future requires a spirit of curiosity, a willingness to push boundaries, and the courage to think big," said Rod Roddenberry, co-founder of the Roddenberry Foundation. "The Prize will provide a significant boost to AI pioneers leading these efforts." 
According to the Foundation's announcement, the Prize "embodies the Roddenberry philosophy's promise of a future in which technology and human ingenuity enable everyone — regardless of background — to thrive." "By empowering entrepreneurs to dream bigger and innovate valiantly, the Roddenberry Prize seeks to catalyze the development of AI solutions that promote abundance and well-being for all."

Read more of this story at Slashdot.

EFF: New License Plate Reader Vulnerabilities Prove the Tech Itself Is a Public Safety Threat

By: EditorDavid
June 23, 2024 at 15:34
Automated license plate readers "pose risks to public safety," argues the EFF, "that may outweigh the crimes they are attempting to address in the first place." When law enforcement uses automated license plate readers (ALPRs) to document the comings and goings of every driver on the road, regardless of a nexus to a crime, it results in gargantuan databases of sensitive information, and few agencies are equipped, staffed, or trained to harden their systems against quickly evolving cybersecurity threats. The Cybersecurity and Infrastructure Security Agency (CISA), a component of the U.S. Department of Homeland Security, released an advisory last week that should be a wake-up call to the thousands of local government agencies around the country that use ALPRs to surveil the travel patterns of their residents by scanning their license plates and "fingerprinting" their vehicles. The bulletin outlines seven vulnerabilities in Motorola Solutions' Vigilant ALPRs, including missing encryption and insufficiently protected credentials... Unlike location data a person shares with, say, the GPS-based navigation app Waze, ALPRs collect and store this information without consent, and there is very little a person can do to have this information purged from these systems... Because drivers don't have control over ALPR data, the onus for protecting the data lies with the police and sheriffs who operate the surveillance and the vendors that provide the technology. It's a general tenet of cybersecurity that you should not collect and retain more personal data than you are capable of protecting. Perhaps ironically, a Motorola Solutions cybersecurity specialist wrote in Police Chief magazine this month that public safety agencies "are often challenged when it comes to recruiting and retaining experienced cybersecurity personnel," even though "the potential for harm from external factors is substantial."
That partially explains why more than 125 law enforcement agencies reported a data breach or cyberattack between 2012 and 2020, according to research by former EFF intern Madison Vialpando. The Motorola Solutions article claims that ransomware attacks "targeting U.S. public safety organizations increased by 142 percent" in 2023. Yet the temptation to "collect it all" continues to overshadow the responsibility to "protect it all." What makes the latest CISA disclosure even more outrageous is that it is at least the third time in the last decade that major security vulnerabilities have been found in ALPRs... If there's one positive thing we can say about the latest Vigilant vulnerability disclosures, it's that for once a government agency identified and reported the vulnerabilities before they could do damage... The Michigan Cyber Command Center found a total of seven vulnerabilities in Vigilant devices, two of medium severity and five of high severity... But a data breach isn't the only way that ALPR data can be leaked or abused. In 2022, an officer in the Kechi (Kansas) Police Department accessed ALPR data shared with his department by the Wichita Police Department to stalk his wife. The article concludes that public safety agencies should "collect only the data they need for actual criminal investigations." "They must never store more data than they adequately protect within their limited resources — or they must keep the public safe from data breaches by not collecting the data at all."

Read more of this story at Slashdot.

Our Brains React Differently to Deepfake Voices, Researchers Find

By: EditorDavid
June 23, 2024 at 14:34
"University of Zurich researchers have discovered that our brains process natural human voices and 'deepfake' voices differently," writes Slashdot reader jenningsthecat. From the university's announcement: The researchers first used psychoacoustical methods to test how well human voice identity is preserved in deepfake voices. To do this, they recorded the voices of four male speakers and then used a conversion algorithm to generate deepfake voices. In the main experiment, 25 participants listened to multiple voices and were asked to decide whether or not the identities of two voices were the same. Participants either had to match the identity of two natural voices, or of one natural and one deepfake voice. The deepfakes were correctly identified in two-thirds of cases. "This illustrates that current deepfake voices might not perfectly mimic an identity, but do have the potential to deceive people," says Claudia Roswandowitz, first author and a postdoc at the Department of Computational Linguistics. The researchers then used imaging techniques to examine which brain regions responded differently to deepfake voices compared to natural voices. They successfully identified two regions that were able to recognize the fake voices: the nucleus accumbens and the auditory cortex. "The nucleus accumbens is a crucial part of the brain's reward system. It was less active when participants were tasked with matching the identity between deepfakes and natural voices," says Roswandowitz. In contrast, the nucleus accumbens showed much more activity when it came to comparing two natural voices. The complete paper appears in Nature.

Read more of this story at Slashdot.

Multiple AI Companies Ignore Robots.txt Files, Scrape Web Content, Says Licensing Firm

By: EditorDavid
June 23, 2024 at 11:34
Multiple AI companies are ignoring Robots.txt files meant to block the scraping of web content for generative AI systems, reports Reuters — citing a warning sent to publishers by content licensing startup TollBit. TollBit, an early-stage startup, is positioning itself as a matchmaker between content-hungry AI companies and publishers open to striking licensing deals with them. The company tracks AI traffic to the publishers' websites and uses analytics to help both sides settle on fees to be paid for the use of different types of content... It says it had 50 websites live as of May, though it has not named them. According to the TollBit letter, Perplexity is not the only offender that appears to be ignoring robots.txt. TollBit said its analytics indicate "numerous" AI agents are bypassing the protocol, a standard tool used by publishers to indicate which parts of their sites can be crawled. "What this means in practical terms is that AI agents from multiple sources (not just one company) are opting to bypass the robots.txt protocol to retrieve content from sites," TollBit wrote. "The more publisher logs we ingest, the more this pattern emerges." The article includes this quote from the president of the News Media Alliance (a trade group representing over 2,200 U.S.-based publishers). "Without the ability to opt out of massive scraping, we cannot monetize our valuable content and pay journalists. This could seriously harm our industry." Reuters also notes another threat facing news sites: Publishers have been raising the alarm about news summaries in particular since Google rolled out a product last year that uses AI to create summaries in response to some search queries. If publishers want to prevent their content from being used by Google's AI to help generate those summaries, they must use the same tool that would also prevent them from appearing in Google search results, rendering them virtually invisible on the web.
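The key point about robots.txt is that it is purely advisory: a well-behaved crawler fetches the file and consults it before requesting pages, and nothing enforces compliance beyond the crawler itself. A minimal sketch using Python's standard library shows what that check looks like (the bot names and rules here are illustrative, not any real company's):

```python
from urllib.robotparser import RobotFileParser

# A compliant crawler parses the site's robots.txt and asks permission
# before fetching any URL; ignoring this step is what TollBit describes.
parser = RobotFileParser()
# Illustrative rules a publisher might serve to block one AI crawler:
parser.parse([
    "User-agent: ExampleAIBot",   # hypothetical AI crawler name
    "Disallow: /",                # blocked from the entire site
    "User-agent: *",
    "Disallow: /private/",        # everyone else: only /private/ is off-limits
])

print(parser.can_fetch("ExampleAIBot", "https://example.com/article"))  # False
print(parser.can_fetch("OtherBot", "https://example.com/article"))      # True
```

The same mechanism underlies the Google dilemma the article describes: blocking Google's crawler in robots.txt blocks it for both AI summaries and ordinary search indexing.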

Read more of this story at Slashdot.

America's Used EV Price Crash Keeps Getting Deeper

By: EditorDavid
June 23, 2024 at 07:34
Long-time Slashdot reader schwit1 shares CNBC's report on the U.S. car market: Back in February, used electric vehicle prices dipped below used gasoline-powered vehicle prices for the first time ever, and the pricing cliff keeps getting steeper as car buyers reject any "premium" tag formerly associated with EVs. The decline has been dramatic over the past year. In June 2023, average used EV prices were over 25% higher than used gas car prices, but by May, used EVs were on average 8% lower than the average price for a used gasoline-powered car in the U.S. In dollar terms, the gap widened from $265 in February to $2,657 in May, according to an analysis of 2.2 million one- to five-year-old used cars conducted by iSeeCars. Over the past year, gasoline-powered used vehicle prices have declined between 3% and 7%, while electric vehicle prices have decreased 30% to 39%. "It's clear used car shoppers will no longer pay a premium for electric vehicles," iSeeCars executive analyst Karl Brauer stated in an iSeeCars report published last week. Electric power is now a detractor in the consumer's mind, with EVs "less desirable" and therefore less valuable than traditional cars, he said. The article notes there's been a price war among EV manufacturers — and that newer EV models might be more attractive due to "longer ranges and improved battery life with temperature control for charging." But CNBC also notes a silver lining. "As more EVs enter the used market at lower prices, the EV market does become available to a wider market of potential first-time EV owners."

Read more of this story at Slashdot.

Launch of Chinese-French Satellite Scattered Debris Over Populated Area

By: EditorDavid
June 23, 2024 at 04:34
"A Chinese launch of the joint Sino-French SVOM mission to study gamma-ray bursts early Saturday saw toxic rocket debris fall over a populated area..." writes Space News: SVOM is a collaboration between the China National Space Administration (CNSA) and France's Centre national d'études spatiales (CNES). The mission will look for high-energy electromagnetic radiation from these events in the X-ray and gamma-ray ranges using two French and two Chinese-developed science payloads... Studying gamma-ray bursts, thought to be caused by the death of massive stars or collisions between stars, could provide answers to key questions in astrophysics. This includes the death of stars and the creation of black holes. However, the launch of SVOM also created an explosion of its own closer to home. A video posted on Chinese social media site Sina Weibo appears to show a rocket booster falling on a populated area with people running for cover. The booster fell to Earth near Guiding County, Qiandongnan Prefecture in Guizhou province, according to another post... A number of comments on the video noted the danger posed by the hypergolic propellant from the Long March rocket... The Long March 2C uses a toxic, hypergolic mix of nitrogen tetroxide and unsymmetrical dimethylhydrazine (UDMH). Reddish-brown gas or smoke from the booster could be indicative of nitrogen tetroxide, while a yellowish gas could be caused by hydrazine fuel mixing with air. Contact with either remaining fuel or oxidizer from the rocket stage could be very harmful to individuals. "Falling rocket debris is a common issue with China's launches from its three inland launch sites..." the article points out. "Authorities are understood to issue warnings and evacuation notices for areas calculated to be at risk from launch debris, reducing the risk of injuries."

Read more of this story at Slashdot.

Open Source ChatGPT Clone 'LibreChat' Lets You Use Multiple AI Services - While Owning Your Data

By: EditorDavid
June 22, 2024 at 14:34
Slashdot reader DevNull127 writes: A free and open source ChatGPT clone — named LibreChat — lets its users choose which AI model to use, "to harness the capabilities of cutting-edge language models from multiple providers in a unified interface". This means LibreChat includes OpenAI's models, but also others — both open-source and closed-source — and its website promises "seamless integration" with AI services from OpenAI, Azure, Anthropic, and Google — as well as GPT-4, Gemini Vision, and many others. ("Every AI in one place," explains LibreChat's home page.) Plugins even let you make requests to DALL-E or Stable Diffusion for image generation. (LibreChat also offers a database that tracks "conversation state" — making it possible to switch to a different AI model in mid-conversation...) Released under the MIT License, LibreChat has become "an open source success story," according to this article, representing "the passionate community that's actively creating an ecosystem of open source AI tools." And its creator, Danny Avila, says in some cases it finally lets users own their own data, "which is a dying human right, a luxury in the internet age and even more so with the age of LLM's." Avila says he was inspired by the day ChatGPT leaked the chat history of some of its users back in March of 2023 — and LibreChat is "inherently completely private". From the article: With locally-hosted LLMs, Avila sees users finally getting "an opportunity to withhold training data from Big Tech, which many trade at the cost of convenience." In this world, LibreChat "is naturally attractive as it can run exclusively on open-source technologies, database and all, completely 'air-gapped.'" Even with remote AI services insisting they won't use transient data for training, "local models are already quite capable," Avila notes, "and will become more capable in general over time." And they're also compatible with LibreChat...
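The design the article describes, conversation state stored in the app's own database rather than with any one provider, is what makes mid-conversation model switching possible. A minimal illustrative sketch of that pattern (all class and method names are hypothetical, not LibreChat's actual code):

```python
# Hypothetical sketch of a provider-agnostic chat layer, as described in
# the article; EchoProvider stands in for a real backend (OpenAI, a local
# model, etc.) so the example is self-contained.
class EchoProvider:
    def __init__(self, name):
        self.name = name

    def complete(self, messages):
        # A real provider would call its API with the message history.
        return f"[{self.name}] reply to: {messages[-1]['content']}"

class Conversation:
    """Keeps the message history itself, independent of any provider,
    so the backing model can be swapped mid-conversation."""
    def __init__(self, provider):
        self.provider = provider
        self.messages = []          # persisted conversation state

    def send(self, text):
        self.messages.append({"role": "user", "content": text})
        reply = self.provider.complete(self.messages)
        self.messages.append({"role": "assistant", "content": reply})
        return reply

    def switch_provider(self, provider):
        self.provider = provider    # history carries over untouched

chat = Conversation(EchoProvider("model-a"))
chat.send("hello")
chat.switch_provider(EchoProvider("model-b"))
print(chat.send("continue"))        # the new model sees the full history
```

Because the history lives with the application, not the provider, the second model receives the entire conversation so far, which is the "owning your data" property the article emphasizes.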

Read more of this story at Slashdot.

Walmart Announces Electronic Shelf Labels They Can Change Remotely

By: EditorDavid
June 23, 2024 at 01:34
Walmart "became the latest retailer to announce it's replacing the price stickers in its aisles with electronic shelf labels," reports NPR. "The new labels allow employees to change prices as often as every ten seconds." "If it's hot outside, we can raise the price of water and ice cream. If there's something that's close to the expiration date, we can lower the price — that's the good news," said Phil Lempert, a grocery industry analyst... The ability to easily change prices wasn't mentioned in Walmart's announcement that 2,300 stores will have the digitized shelf labels by 2026. Daniela Boscan, who participated in Walmart's pilot of the labels in Texas, said the label's key benefits are "increased productivity and reduced walking time," plus quicker restocking of shelves... As higher wages make labor more expensive, retailers big and small can benefit from the increased productivity that digitized shelf labels enable, said Santiago Gallino, a professor specializing in retail management at the University of Pennsylvania's Wharton School. "The bottom line, at least when I talk to retailers, is the calculation of the amount of labor that they're going to save by incorporating this. And in that sense, I don't think that this is something that only large corporations like Walmart or Target can benefit from," Gallino said. "I think that smaller chains can also see the potential benefit of it." Indeed, Walmart's announcement calls the tech "a win" for both customers and their workers, arguing that updating prices with a mobile app means "reducing the need to walk around the store to change paper tags by hand and giving us more time to support customers in the store." Professor Gallino tells NPR he doesn't think Walmart will suddenly change prices — though he does think Walmart will use it to keep their offline and online prices identical. 
The article also points out you can already find electronic shelf labels at other major grocers including Amazon Fresh stores and Whole Foods — and that digitized shelf labels "are even more common in stores across Europe." Another feature of electronic shelf labels is the product information they can carry. [Grocery analyst] Lempert notes that barcodes on the new labels can provide useful details other than the price. "They can actually be used where you take your mobile device and you scan it and it can give you more information about the product — whether it's the sourcing of the product, whether it's gluten free, whether it's keto friendly. That's really the promise of what these shelf tags can do," Lempert said. Thanks to long-time Slashdot reader loveandpeace for sharing the article.

Read more of this story at Slashdot.

Yesterday — June 22, 2024 — Actualités numériques

Data Dump of Patient Records Possible After UK Hospital Breach

By: EditorDavid
June 22, 2024 at 22:34
An anonymous reader shared this report from the Associated Press: An investigation into a ransomware attack earlier this month on London hospitals by the Russian group Qilin could take weeks to complete, the country's state-run National Health Service said Friday, as concerns grow over a reported data dump of patient records. Hundreds of operations and appointments are still being canceled more than two weeks after the June 3 attack on NHS provider Synnovis, which provides pathology services primarily in southeast London... NHS England said Friday that it has been "made aware" that data connected to the attack have been published online. According to the BBC, Qilin shared almost 400GB of data, including patient names, dates of birth and descriptions of blood tests, on their darknet site and Telegram channel... According to Saturday's edition of the Guardian newspaper, records covering 300 million patient interactions, including the results of blood tests for HIV and cancer, were stolen during the attack. A website and helpline have been set up for patients affected.

Read more of this story at Slashdot.

Red Hat's RHEL-Based In-Vehicle OS Attains Milestone Safety Certification

By: EditorDavid
June 22, 2024 at 21:37
In 2022, Red Hat announced plans to extend RHEL to the automotive industry through Red Hat In-Vehicle Operating System (providing automakers with an open and functionally-safe platform). And this week Red Hat announced it achieved ISO 26262 ASIL-B certification from exida for the Linux math library (libm.so glibc) — a fundamental component of that Red Hat In-Vehicle Operating System. From Red Hat's announcement: This milestone underscores Red Hat's pioneering role in obtaining continuous and comprehensive Safety Element out of Context certification for Linux in automotive... This certification demonstrates that the engineering of the math library components individually and as a whole meet or exceed stringent functional safety standards, ensuring substantial reliability and performance for the automotive industry. The certification of the math library is a significant milestone that strengthens the confidence in Linux as a viable platform of choice for safety related automotive applications of the future... By working with the broader open source community, Red Hat can make use of the rigorous testing and analysis performed by Linux maintainers, collaborating across upstream communities to deliver open standards-based solutions. This approach enhances long-term maintainability and limits vendor lock-in, providing greater transparency and performance. Red Hat In-Vehicle Operating System is poised to offer a safety certified Linux-based operating system capable of concurrently supporting multiple safety and non-safety related applications in a single instance. These applications include advanced driver-assistance systems (ADAS), digital cockpit, infotainment, body control, telematics, artificial intelligence (AI) models and more. Red Hat is also working with key industry leaders to deliver pre-tested, pre-integrated software solutions, accelerating the route to market for SDV concepts. 
"Red Hat is fully committed to attaining continuous and comprehensive safety certification of Linux natively for automotive applications," according to the announcement, "and has the industry's largest pool of Linux maintainers and contributors committed to this initiative..." Or, as Network World puts it, "The phrase 'open source for the open road' is now being used to describe the inevitable fit between the character of Linux and the need for highly customizable code in all sorts of automotive equipment."

Read more of this story at Slashdot.

Linux Foundation's 'Open Source Security Foundation' Launches New Threat Intelligence Mailing List

By: EditorDavid
June 22, 2024 at 20:34
The Linux Foundation's "Open Source Security Foundation" (or OpenSSF) is a cross-industry forum to "secure the development, maintenance, and consumption of the open source software". And now the OpenSSF has launched a new mailing list "which aims to monitor the threat landscape of open-source project vulnerabilities," reports I Programmer, "in order to provide real time alerts to anyone subscribed." The Record explains its origins: OpenSSF General Manager Omkhar Arasaratnam said that at a recent open source event, members of the community ran a tabletop exercise where they simulated a security incident involving the discovery of a zero-day vulnerability. They worked their way through the open source ecosystem — from cloud providers to maintainers to end users — clearly defining how the discovery of a vulnerability would be dealt with from top to bottom. But one of the places where they found a gap is in the dissemination of information widely. "What we lack within the open source community is a place in which we can convene to distribute indicators of compromise (IOCs) and tactics, techniques and procedures (TTPs) in a way that will allow the community to identify threats when our packages are under attack," Arasaratnam said... "[W]e're going to be standing up a mailing list for which we can share this information throughout the community and there can be discussion of things that are being seen. And that's one of the ways that we're responding to this gap that we saw...." The Siren mailing list will encourage public discussions on security flaws, concepts, and practices in the open source community with individuals who are not typically engaged in traditional upstream communication channels... Members of the Siren email list will get real-time updates about emerging threats that may be relevant to their projects... OpenSSF has created a signup page for those interested and urged others to share the email list with other open source community members...
OpenSSF ecosystem strategist Christopher Robinson (also security communications director for Intel) told the site he expects government agencies and security researchers to be involved in the effort. And he issued this joint statement with OpenSSF ecosystem strategist Bennett Pursell: By leveraging the collective knowledge and expertise of the open source community and other security experts, the OpenSSF Siren empowers projects of all sizes to bolster their cybersecurity defenses and increase their overall awareness of malicious activities. Whether you're a developer, maintainer, or security enthusiast, your participation is vital in safeguarding the integrity of open source software. In less than a month, the mailing list has already grown to over 800 members...

Read more of this story at Slashdot.

Microsoft Admits No Guarantee of Sovereignty For UK Policing Data

By: EditorDavid
June 22, 2024 at 19:34
An anonymous reader shared this report from Computer Weekly: Microsoft has admitted to Scottish policing bodies that it cannot guarantee the sovereignty of UK policing data hosted on its hyperscale public cloud infrastructure, despite its systems being deployed throughout the criminal justice sector. According to correspondence released by the Scottish Police Authority (SPA) under freedom of information (FOI) rules, Microsoft is unable to guarantee that data uploaded to a key Police Scotland IT system — the Digital Evidence Sharing Capability (DESC) — will remain in the UK as required by law. While the correspondence has not been released in full, the disclosure reveals that data hosted in Microsoft's hyperscale public cloud infrastructure is regularly transferred and processed overseas; that the data processing agreement in place for the DESC did not cover UK-specific data protection requirements; and that while the company has the ability to make technical changes to ensure data protection compliance, it is only making these changes for DESC partners and not other policing bodies because "no one else had asked". The correspondence also contains acknowledgements from Microsoft that international data transfers are inherent to its public cloud architecture. As a result, the issues identified with the Scottish Police will equally apply to all UK government users, many of whom face similar regulatory limitations on the offshoring of data. The recipient of the FOI disclosures, Owen Sayers — an independent security consultant and enterprise architect with over 20 years' experience in delivering national policing systems — concluded it is now clear that UK policing data has been travelling overseas and "the statements from Microsoft make clear that they 100% cannot comply with UK data protection law".

Read more of this story at Slashdot.

Big Tech's AI Datacenters Demand Electricity. Are They Increasing Use of Fossil Fuels?

By: EditorDavid
June 22, 2024 at 18:34
The artificial intelligence revolution will demand more electricity, warns the Washington Post. "Much more..." They warn that the "voracious" electricity consumption of AI is driving an expansion of fossil fuel use in America — "including delaying the retirement of some coal-fired plants." As the tech giants compete in a global AI arms race, a frenzy of data center construction is sweeping the country. Some computing campuses require as much energy as a modest-sized city, turning tech firms that promised to lead the way into a clean energy future into some of the world's most insatiable guzzlers of power. Their projected energy needs are so huge, some worry whether there will be enough electricity to meet them from any source... A ChatGPT-powered search, according to the International Energy Agency, consumes almost 10 times the amount of electricity as a search on Google. One large data center complex in Iowa owned by Meta burns the annual equivalent amount of power as 7 million laptops running eight hours every day, based on data shared publicly by the company... [Tech companies] argue advancing AI now could prove more beneficial to the environment than curbing electricity consumption. They say AI is already being harnessed to make the power grid smarter, speed up innovation of new nuclear technologies and track emissions.... "If we work together, we can unlock AI's game-changing abilities to help create the net zero, climate resilient and nature positive works that we so urgently need," Microsoft said in a statement. The tech giants say they buy enough wind, solar or geothermal power every time a big data center comes online to cancel out its emissions. But critics see a shell game with these contracts: The companies are operating off the same power grid as everyone else, while claiming for themselves much of the finite amount of green energy. Utilities are then backfilling those purchases with fossil fuel expansions, regulatory filings show... 
These are heavily polluting fossil fuel plants that become necessary to stabilize the power grid, making sure everyone has enough electricity. The article quotes a project director at the nonprofit Data & Society, which tracks the effect of AI and accuses the tech industry of using "fuzzy math" in its climate claims. "Coal plants are being reinvigorated because of the AI boom," they tell the Washington Post. "This should be alarming to anyone who cares about the environment." The article also summarizes a recent Goldman Sachs analysis, which predicted data centers would use 8% of America's total electricity by 2030, with 60% of that usage coming "from a vast expansion in the burning of natural gas. The new emissions created would be comparable to that of putting 15.7 million additional gas-powered cars on the road." "We all want to be cleaner," Brian Bird, president of NorthWestern Energy, a utility serving Montana, South Dakota and Nebraska, told a recent gathering of data center executives in Washington, D.C. "But you guys aren't going to wait 10 years ... My only choice today, other than keeping coal plants open longer than all of us want, is natural gas. And so you're going to see a lot of natural gas build-out in this country." Big Tech responded by "going all in on experimental clean-energy projects that have long odds of success anytime soon," the article concludes. "In addition to fusion, they are hoping to generate power through such futuristic schemes as small nuclear reactors hooked to individual computing centers and machinery that taps geothermal energy by boring 10,000 feet into the Earth's crust..." Some experts point to these developments in arguing that the tech companies' electricity needs will speed up the energy transition away from fossil fuels rather than undermine it.
"Companies like this that make aggressive climate commitments have historically accelerated deployment of clean electricity," said Melissa Lott, a professor at the Climate School at Columbia University.
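The "7 million laptops" comparison is easy to sanity-check with back-of-the-envelope arithmetic. A minimal sketch, assuming an average laptop draws about 50 W (an assumption — the article does not state the wattage its comparison is based on):

```python
# Rough annual energy of 7 million laptops running 8 hours a day,
# assuming a 50 W average draw (assumed, not from the article).
LAPTOPS = 7_000_000
WATTS_PER_LAPTOP = 50
HOURS_PER_DAY = 8

daily_kwh = LAPTOPS * WATTS_PER_LAPTOP * HOURS_PER_DAY / 1000  # Wh -> kWh
annual_twh = daily_kwh * 365 / 1e9                             # kWh -> TWh

print(f"{daily_kwh / 1e6:.1f} GWh per day")   # prints "2.8 GWh per day"
print(f"{annual_twh:.2f} TWh per year")       # prints "1.02 TWh per year"
```

Under that assumption the complex would draw on the order of a terawatt-hour per year — roughly the annual consumption of a city of a few hundred thousand people, which is consistent with the article's "modest-sized city" framing.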

Read more of this story at Slashdot.

Systemd 256.1 Addresses Complaint That 'systemd-tmpfiles' Could Unexpectedly Delete Your /home Directory

Par : EditorDavid
22 juin 2024 à 17:34
"A good portion of my home directory got deleted," complained a bug report for systemd filed last week. It requested an update to a flag for the systemd-tmpfiles tool which cleans up files and directories: "a huge warning next to --purge. This option is dangerous, so it should be made clear that it's dangerous." The Register explains: As long as five years ago, systemd-tmpfiles had moved on past managing only temporary files — as its name might suggest to the unwary. Now it manages all sorts of files created on the fly ... such as users' home directories. If you invoke the systemd-tmpfiles --purge command without specifying that very important config file which tells it which files to handle, version 256 will merrily purge your entire home directory. The bug report first drew a cool response from systemd developer Luca Boccassi of Microsoft: So an option that is literally documented as saying "all files and directories created by a tmpfiles.d/ entry will be deleted", that you knew nothing about, sounded like a "good idea"? Did you even go and look what tmpfiles.d entries you had beforehand? Maybe don't just run random commands that you know nothing about, while ignoring what the documentation tells you? Just a thought eh But the report then triggered "much discussion," reports Phoronix. Some excerpts: Lennart Poettering: "I think we should fail --purge if no config file is specified on the command line. I see no world where an invocation without one would make sense, and it would have caught the problem here." Red Hat open source developer Zbigniew Jędrzejewski-Szmek: "We need to rethink how --purge works. The principle of not ever destroying user data is paramount. There can be commands which do remove user data, but they need to be minimized and guarded."
Systemd contributor Betonhaus: "Having a function that declares irreplaceable files — such as the contents of a home directory — to be temporary files that can be easily purged, is at best poor user interfacing design and at worst a severe design flaw." But in the end, Phoronix writes, systemd-tmpfiles behavior "is now improved upon." "Merged Wednesday was this patch that now makes systemd-tmpfiles accept a configuration file when running purge. That way the user must knowingly supply the configuration file(s) specifying which files they would ultimately like removed. The documentation has also been improved upon to make the behavior more clear." Thanks to long-time Slashdot reader slack_justyb for sharing the news.
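To see how a "temporary files" tool ends up covering home directories at all: tmpfiles.d entries describe files and directories systemd should create and manage, and on many distributions systemd itself ships an entry covering /home. A minimal sketch of the before/after behavior (the entry below mirrors the kind shipped in systemd's home.conf; the exact invocations are illustrative):

```
# Sketch of a tmpfiles.d fragment covering /home (similar entries
# ship with systemd on many distributions):
#Type  Path   Mode  User  Group  Age
q      /home  0755  -     -      -

# Before the fix: with no config file named, --purge walked every
# installed tmpfiles.d entry, the one above included:
#   systemd-tmpfiles --purge
#
# After the patch merged for 256.1, the config file(s) must be
# passed explicitly, so the deletion has to be deliberate:
#   systemd-tmpfiles --purge home.conf
```

The `q` entry type tells systemd to create the path as a (quota-enabled) directory if missing — which is exactly why a blanket purge of "everything created by a tmpfiles.d entry" could reach into /home.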

Read more of this story at Slashdot.

Gilead's Twice-Yearly Shot to Prevent HIV Succeeds in Late-Stage Trial

Par : EditorDavid
22 juin 2024 à 16:34
An anonymous reader shared this report from CNBC: Gilead's experimental twice-yearly medicine to prevent HIV was 100% effective in a late-stage trial, the company said Thursday. None of the roughly 2,000 women in the trial who received the lenacapavir shot had contracted HIV by an interim analysis, prompting the independent data monitoring committee to recommend Gilead unblind the Phase 3 trial and offer the treatment to everyone in the study. Other participants had received standard daily pills. The company expects to share more data by early next year, the article adds, and if its results are positive, the company could bring its drug to the market as soon as late 2025. (By Friday, the company's stock price had risen nearly 12%.) There are already other HIV-prevention options, the article points out, but they're taken by "only a little more than one-third of people in the U.S. who could benefit...according to data from the Centers for Disease Control and Prevention." Part of the problem? "Daily pills dominate the market, but drugmakers are now focusing on developing longer-acting shots... Health policymakers and advocates hope longer-acting options could reach people who can't or don't want to take a daily pill and better prevent the spread of a virus that caused about 1 million new infections globally in 2022."

Read more of this story at Slashdot.

Dark Matter Found? New Study Furthers Stephen Hawking's Predictions About 'Primordial' Black Holes

Par : EditorDavid
22 juin 2024 à 15:34
Where is dark matter, the invisible mass that must exist to bind galaxies together? Stephen Hawking postulated it could be hiding in "primordial" black holes formed during the big bang, writes CNN. "Now, a new study by researchers with the Massachusetts Institute of Technology has brought the theory back into the spotlight, revealing what these primordial black holes were made of and potentially discovering an entirely new type of exotic black hole in the process." Other recent studies have supported the plausibility of Hawking's hypothesis, but the work of [MIT graduate student Elba] Alonso-Monsalve and [study co-author David] Kaiser, a professor of physics and the Germeshausen Professor of the History of Science at MIT, goes one step further and looks into exactly what happened when primordial black holes first formed. The study, published June 6 in the journal Physical Review Letters, reveals that these black holes must have appeared in the first quintillionth of a second of the big bang: "That is really early, and a lot earlier than the moment when protons and neutrons, the particles everything is made of, were formed," Alonso-Monsalve said... "You cannot find quarks and gluons alone and free in the universe now, because it is too cold," Alonso-Monsalve added. "But early in the big bang, when it was very hot, they could be found alone and free. So the primordial black holes formed by absorbing free quarks and gluons." Such a formation would make them fundamentally different from the astrophysical black holes that scientists normally observe in the universe, which are the result of collapsing stars. Also, a primordial black hole would be much smaller — only the mass of an asteroid, on average, condensed into the volume of a single atom. But if a sufficient number of these primordial black holes did not evaporate in the early big bang and survived to this day, they could account for all or most dark matter.
During the making of the primordial black holes, another type of previously unseen black hole must have formed as a kind of byproduct, according to the study. These would have been even smaller — just the mass of a rhino, condensed into less than the volume of a single proton... "It's inevitable that these even smaller black holes would have also formed, as a byproduct (of primordial black holes' formation)," Alonso-Monsalve said, "but they would not be around today anymore, as they would have evaporated already." However, if they were still around just ten millionths of a second into the big bang, when protons and neutrons formed, they could have left observable signatures by altering the balance between the two particle types. Professor Kaiser told CNN the next generation of gravitational detectors "could catch a glimpse of the small-mass black holes — an exotic state of matter that was an unexpected byproduct of the more mundane black holes that could explain dark matter today." Nico Cappelluti, an assistant professor in the physics department of the University of Miami (who was not involved with the study) confirmed to CNN that "This work is an interesting, viable option for explaining the elusive dark matter."
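Why rhino-mass black holes "would have evaporated already" while asteroid-mass ones could survive follows from Hawking's evaporation timescale, which grows with the cube of the mass. A rough sketch using the standard formula (the masses plugged in below are illustrative order-of-magnitude choices, not figures from the study):

```latex
% Hawking evaporation time for a black hole of mass M
t_{\mathrm{ev}} \approx \frac{5120\,\pi\,G^{2}M^{3}}{\hbar c^{4}}
```

For $M \sim 3\times 10^{3}$ kg (a rhino), this gives $t_{\mathrm{ev}}$ of order $10^{-6}$ s — consistent with such holes vanishing within the first microseconds of the big bang, right around when protons and neutrons formed. For $M \sim 10^{12}$ kg (a small asteroid), it gives $t_{\mathrm{ev}}$ of order $10^{12}$ years, far longer than the universe's roughly $1.4\times 10^{10}$-year age, which is what lets asteroid-mass primordial black holes remain a dark matter candidate today.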

Read more of this story at Slashdot.

Open Source ChatGPT Clone 'LibreChat' Lets You Use Every AI Service - While Owning Your Data

Par : EditorDavid
22 juin 2024 à 14:34
Slashdot reader DevNull127 writes: A free and open source ChatGPT clone — named LibreChat — is also letting its users choose which AI model to use, "to harness the capabilities of cutting-edge language models from multiple providers in a unified interface". This means LibreChat includes OpenAI's models, but also others — both open-source and closed-source — and its website promises "seamless integration" with AI services from OpenAI, Azure, Anthropic, and Google — as well as GPT-4, Gemini Vision, and many others. ("Every AI in one place," explains LibreChat's home page.) Plugins even let you make requests to DALL-E or Stable Diffusion for image generation. (LibreChat also offers a database that tracks "conversation state" — making it possible to switch to a different AI model in mid-conversation...) Released under the MIT License, LibreChat has become "an open source success story," according to this article, representing "the passionate community that's actively creating an ecosystem of open source AI tools." Its creator, Danny Avila, says it finally lets users own their own data, "which is a dying human right, a luxury in the internet age and even more so with the age of LLMs." Avila says he was inspired by the day ChatGPT leaked the chat history of some of its users back in March of 2023 — and LibreChat is "inherently completely private". From the article: With locally-hosted LLMs, Avila sees users finally getting "an opportunity to withhold training data from Big Tech, which many trade at the cost of convenience." In this world, LibreChat "is naturally attractive as it can run exclusively on open-source technologies, database and all, completely 'air-gapped.'" Even with remote AI services insisting they won't use transient data for training, "local models are already quite capable," Avila notes, "and will become more capable in general over time." And they're also compatible with LibreChat...

Read more of this story at Slashdot.

À partir d’avant-hierActualités numériques

ASUS Releases Firmware Update for Critical Remote Authentication Bypass Affecting Seven Routers

Par : EditorDavid
17 juin 2024 à 11:34
A report from BleepingComputer notes that ASUS "has released a new firmware update that addresses a vulnerability impacting seven router models" that allows remote attackers to log in to devices. But there's more bad news: Taiwan's CERT has also informed the public about CVE-2024-3912 in a post yesterday, which is a critical (9.8) arbitrary firmware upload vulnerability allowing unauthenticated, remote attackers to execute system commands on the device. The flaw impacts multiple ASUS router models, but not all will be getting security updates because they have reached end-of-life (EoL). Finally, ASUS announced an update to Download Master, a utility used on ASUS routers that enables users to manage and download files directly to a connected USB storage device via torrent, HTTP, or FTP. The newly released Download Master version 3.1.0.114 addresses five medium- to high-severity issues concerning arbitrary file upload, OS command injection, buffer overflow, reflected XSS, and stored XSS problems.

Read more of this story at Slashdot.
