Vue normale

Il y a de nouveaux articles disponibles, cliquez pour rafraîchir la page.
Aujourd’hui — 19 mai 2024Slashdot

America Takes Its Biggest Step Yet to End Coal Mining

Par : EditorDavid
19 mai 2024 à 07:34
The Washington Post reports that America took "one of its biggest steps yet to keep fossil fuels in the ground," announcing Thursday that it will end new coal leasing in the Powder River Basin, "which produces nearly half the coal in the United States... "It could prevent billions of tons of coal from being extracted from more than 13 million acres across Montana and Wyoming, with major implications for U.S. climate goals." A significant share of the nation's fossil fuels come from federal lands and waters. The extraction and combustion of these fuels accounted for nearly a quarter of U.S. carbon dioxide emissions between 2005 and 2014, according to a study by the U.S. Geological Survey. In a final environmental impact statement released Thursday, Interior's Bureau of Land Management found that continued coal leasing in the Powder River Basin would harm the climate and public health. The bureau determined that no future coal leasing should happen in the basin, and it estimated that coal mining in the Wyoming portion of the region would end by 2041. Last year, the Powder River Basin generated 251.9 million tons of coal, accounting for nearly 44 percent of all coal produced in the United States. Under the bureau's determination, the 14 active coal mines in the Powder River Basin can continue operating on lands they have leased, but they cannot expand onto other public lands in the region... "This means that billions of tons of coal won't be burned, compared to business as usual," said Shiloh Hernandez, a senior attorney at the environmental law firm Earthjustice. "It's good news, and it's really the only defensible decision the BLM could have made, given the current climate crisis...." The United States is moving away from coal, which has struggled to compete economically with cheaper gas and renewable energy. U.S. coal output tumbled 36 percent from 2015 to 2023, according to the Energy Information Administration. The Sierra Club's Beyond Coal campaign estimates that 382 coal-fired power plants have closed down or proposed to retire, with 148 remaining. In addition, the Environmental Protection Agency finalized an ambitious set of rules in April aimed at slashing air pollution, water pollution and planet-warming emissions spewing from the nation's power plants. One of the most significant rules will push all existing coal plants by 2039 to either close or capture 90 percent of their carbon dioxide emissions at the smokestack. "The nation's electricity generation needs are being met increasingly by wind, solar and natural gas," said Tom Sanzillo, director of financial analysis at the Institute for Energy Economics and Financial Analysis, an energy think tank. "The nation doesn't need any increase in the amount of coal under lease out of the Powder River Basin."

Read more of this story at Slashdot.

Robot Dogs Armed With AI-aimed Rifles Undergo US Marines Special Ops Evaluation

Par : EditorDavid
19 mai 2024 à 03:59
Long-time Slashdot reader SonicSpike shared this report from Ars Technica: The United States Marine Forces Special Operations Command (MARSOC) is currently evaluating a new generation of robotic "dogs" developed by Ghost Robotics, with the potential to be equipped with gun systems from defense tech company Onyx Industries, reports The War Zone. While MARSOC is testing Ghost Robotics' quadrupedal unmanned ground vehicles (called "Q-UGVs" for short) for various applications, including reconnaissance and surveillance, it's the possibility of arming them with weapons for remote engagement that may draw the most attention. But it's not unprecedented: The US Marine Corps has also tested robotic dogs armed with rocket launchers in the past. MARSOC is currently in possession of two armed Q-UGVs undergoing testing, as confirmed by Onyx Industries staff, and their gun systems are based on Onyx's SENTRY remote weapon system (RWS), which features an AI-enabled digital imaging system and can automatically detect and track people, drones, or vehicles, reporting potential targets to a remote human operator that could be located anywhere in the world. The system maintains a human-in-the-loop control for fire decisions, and it cannot decide to fire autonomously. On LinkedIn, Onyx Industries shared a video of a similar system in action. In a statement to The War Zone, MARSOC states that weaponized payloads are just one of many use cases being evaluated. MARSOC also clarifies that comments made by Onyx Industries to The War Zone regarding the capabilities and deployment of these armed robot dogs "should not be construed as a capability or a singular interest in one of many use cases during an evaluation."

Read more of this story at Slashdot.

Why a 'Frozen' Distribution Linux Kernel Isn't the Safest Choice for Security

Par : EditorDavid
19 mai 2024 à 01:04
Jeremy Allison — Sam (Slashdot reader #8,157) is a Distinguished Engineer at Rocky Linux creator CIQ. This week he published a blog post responding to promises of Linux distros "carefully selecting only the most polished and pristine open source patches from the raw upstream open source Linux kernel in order to create the secure distribution kernel you depend on in your business." But do carefully curated software patches (applied to a known "frozen" Linux kernel) really bring greater security? "After a lot of hard work and data analysis by my CIQ kernel engineering colleagues Ronnie Sahlberg and Jonathan Maple, we finally have an answer to this question. It's no." The data shows that "frozen" vendor Linux kernels, created by branching off a release point and then using a team of engineers to select specific patches to back-port to that branch, are buggier than the upstream "stable" Linux kernel created by Greg Kroah-Hartman. How can this be? If you want the full details the link to the white paper is here. But the results of the analysis couldn't be clearer. - A "frozen" vendor kernel is an insecure kernel. A vendor kernel released later in the release schedule is doubly so. - The number of known bugs in a "frozen" vendor kernel grows over time. The growth in the number of bugs even accelerates over time. - There are too many open bugs in these kernels for it to be feasible to analyze or even classify them.... [T]hinking that you're making a more secure choice by using a "frozen" vendor kernel isn't a luxury we can still afford to believe. As Greg Kroah-Hartman explicitly said in his talk "Demystifying the Linux Kernel Security Process": "If you are not using the latest stable / longterm kernel, your system is insecure." CIQ describes its report as "a count of all the known bugs from an upstream kernel that were introduced, but never fixed in RHEL 8." For the most recent RHEL 8 kernels, at the time of writing, these counts are: RHEL 8.6 : 5034 RHEL 8.7 : 4767 RHEL 8.8 : 4594 In RHEL 8.8 we have a total of 4594 known bugs with fixes that exist upstream, but for which known fixes have not been back-ported to RHEL 8.8. The situation is worse for RHEL 8.6 and RHEL 8.7 as they cut off back-porting earlier than RHEL 8.8 but of course that did not prevent new bugs from being discovered and fixed upstream.... This whitepaper is not meant as a criticism of the engineers working at any Linux vendors who are dedicated to producing high quality work in their products on behalf of their customers. This problem is extremely difficult to solve. We know this is an open secret amongst many in the industry and would like to put concrete numbers describing the problem to encourage discussion. Our hope is for Linux vendors and the community as a whole to rally behind the kernel.org stable kernels as the best long term supported solution. As engineers, we would prefer this to allow us to spend more time fixing customer specific bugs and submitting feature improvements upstream, rather than the endless grind of backporting upstream changes into vendor kernels, a practice which can introduce more bugs than it fixes. ZDNet calls it "an open secret in the Linux community." It's not enough to use a long-term support release. You must use the most up-to-date release to be as secure as possible. Unfortunately, almost no one does that. Nevertheless, as Google Linux kernel engineer Kees Cook explained, "So what is a vendor to do? The answer is simple: if painful: Continuously update to the latest kernel release, either major or stable." Why? As Kroah-Hartman explained, "Any bug has the potential of being a security issue at the kernel level...." Although [CIQ's] programmers examined RHEL 8.8 specifically, this is a general problem. They would have found the same results if they had examined SUSE, Ubuntu, or Debian Linux. Rolling-release Linux distros such as Arch, Gentoo, and OpenSUSE Tumbleweed constantly release the latest updates, but they're not used in businesses. Jeremy Allison's post points out that "the Linux kernel used by Android devices is based on the upstream kernel and also has a stable internal kernel ABI, so this isn't an insurmountable problem..."

Read more of this story at Slashdot.

Hier — 18 mai 2024Slashdot

Are AI-Generated Search Results Still Protected by Section 230?

Par : EditorDavid
18 mai 2024 à 22:34
Starting this week millions will see AI-generated answers in Google's search results by default. But the announcement Tuesday at Google's annual developer conference suggests a future that's "not without its risks, both to users and to Google itself," argues the Washington Post: For years, Google has been shielded for liability for linking users to bad, harmful or illegal information by Section 230 of the Communications Decency Act. But legal experts say that shield probably won't apply when its AI answers search questions directly. "As we all know, generative AIs hallucinate," said James Grimmelmann, professor of digital and information law at Cornell Law School and Cornell Tech. "So when Google uses a generative AI to summarize what webpages say, and the AI gets it wrong, Google is now the source of the harmful information," rather than just the distributor of it... Adam Thierer, senior fellow at the nonprofit free-market think tank R Street, worries that innovation could be throttled if Congress doesn't extend Section 230 to cover AI tools. "As AI is integrated into more consumer-facing products, the ambiguity about liability will haunt developers and investors," he predicted. "It is particularly problematic for small AI firms and open-source AI developers, who could be decimated as frivolous legal claims accumulate." But John Bergmayer, legal director for the digital rights nonprofit Public Knowledge, said there are real concerns that AI answers could spell doom for many of the publishers and creators that rely on search traffic to survive — and which AI, in turn, relies on for credible information. From that standpoint, he said, a liability regime that incentivizes search engines to continue sending users to third-party websites might be "a really good outcome." Meanwhile, some lawmakers are looking to ditch Section 230 altogether. [Last] Sunday, the top Democrat and Republican on the House Energy and Commerce Committee released a draft of a bill that would sunset the statute within 18 months, giving Congress time to craft a new liability framework in its place. In a Wall Street Journal op-ed, Reps. Cathy McMorris Rodgers (R-Wash.) and Frank Pallone Jr. (D-N.J.) argued that the law, which helped pave the way for social media and the modern internet, has "outlived its usefulness." The tech industry trade group NetChoice [which includes Google, Meta, X, and Amazon] fired back on Monday that scrapping Section 230 would "decimate small tech" and "discourage free speech online." The digital law professor points out Google has traditionally escaped legal liability by attributing its answers to specific sources — but it's not just Google that has to worry about the issue. The article notes that Microsoft's Bing search engine also supplies AI-generated answers (from Microsoft's Copilot). "And Meta recently replaced the search bar in Facebook, Instagram and WhatsApp with its own AI chatbot." The article also note sthat several U.S. Congressional committees are considering "a bevy" of AI bills...

Read more of this story at Slashdot.

How an 'Unprecedented' Google Cloud Event Wiped Out a Major Customer's Account

Par : EditorDavid
18 mai 2024 à 21:34
Ars Technica looks at what happened after Google's answer to Amazon's cloud service "accidentally deleted a giant customer account for no reason..." "[A]ccording to UniSuper's incident log, downtime started May 2, and a full restoration of services didn't happen until May 15." UniSuper, an Australian pension fund that manages $135 billion worth of funds and has 647,000 members, had its entire account wiped out at Google Cloud, including all its backups that were stored on the service... UniSuper's website is now full of must-read admin nightmare fuel about how this all happened. First is a wild page posted on May 8 titled "A joint statement from UniSuper CEO Peter Chun, and Google Cloud CEO, Thomas Kurian...." Google Cloud is supposed to have safeguards that don't allow account deletion, but none of them worked apparently, and the only option was a restore from a separate cloud provider (shoutout to the hero at UniSuper who chose a multi-cloud solution)... The many stakeholders in the service meant service restoration wasn't just about restoring backups but also processing all the requests and payments that still needed to happen during the two weeks of downtime. The second must-read document in this whole saga is the outage update page, which contains 12 statements as the cloud devs worked through this catastrophe. The first update is May 2 with the ominous statement, "You may be aware of a service disruption affecting UniSuper's systems...." Seven days after the outage, on May 9, we saw the first signs of life again for UniSuper. Logins started working for "online UniSuper accounts" (I think that only means the website), but the outage page noted that "account balances shown may not reflect transactions which have not yet been processed due to the outage...." May 13 is the first mention of the mobile app beginning to work again. This update noted that balances still weren't up to date and that "We are processing transactions as quickly as we can." The last update, on May 15, states, "UniSuper can confirm that all member-facing services have been fully restored, with our retirement calculators now available again." The joint statement and the outage updates are still not a technical post-mortem of what happened, and it's unclear if we'll get one. Google PR confirmed in multiple places it signed off on the statement, but a great breakdown from software developer Daniel Compton points out that the statement is not just vague, it's also full of terminology that doesn't align with Google Cloud products. The imprecise language makes it seem like the statement was written entirely by UniSuper. Thanks to long-time Slashdot reader swm for sharing the news.

Read more of this story at Slashdot.

Eight Automakers Grilled by US Lawmakers Over Sharing of Connected Car Data With Police

Par : EditorDavid
18 mai 2024 à 20:34
An anonymous reader shared this report from Automotive News: Automotive News recently reported that eight automakers sent vehicle location data to police without a court order or warrant. The eight companies told senators that they provide police with data when subpoenaed, getting a rise from several officials. BMW, Kia, Mazda, Mercedes-Benz, Nissan, Subaru, Toyota, and Volkswagen presented their responses to lawmakers. Senators Ron Wyden from Oregon and Ed Markey from Massachusetts penned a letter to the Federal Trade Commission, urging investigative action. "Automakers have not only kept consumers in the dark regarding their actual practices, but multiple companies misled consumers for over a decade by failing to honor the industry's own voluntary privacy principles," they wrote. Ten years ago, all of those companies agreed to the Consumer Privacy Protection Principles, a voluntary code that said automakers would only provide data with a warrant or order issued by a court. Subpoenas, on the other hand, only require approval from law enforcement. Though it wasn't part of the eight automakers' response, General Motors has a class-action suit on its hands, claiming that it shared data with LexisNexis Risk Solutions, a company that provides insurers with information to set rates. The article notes that the lawmakers praised Honda, Ford, GM, Tesla, and Stellantis for requiring warrants, "except in the case of emergencies or with customer consent."

Read more of this story at Slashdot.

Study Confirms Einstein Prediction: Black Holes Have a 'Plunging Region'

Par : EditorDavid
18 mai 2024 à 19:34
"Albert Einstein was right," reports CNN. "There is an area at the edge of black holes where matter can no longer stay in orbit and instead falls in, as predicted by his theory of gravity." The proof came by combining NASA's earth-orbiting NuSTAR telescope with the NICER telescope on the International Space Station to detect X-rays: A team of astronomers has for the first time observed this area — called the "plunging region" — in a black hole about 10,000 light-years from Earth. "We've been ignoring this region, because we didn't have the data," said research scientist Andrew Mummery, lead author of the study published Thursday in the journal Monthly Notices of the Royal Astronomical Society. "But now that we do, we couldn't explain it any other way." Mummery — also a Fellow in Oxford's physics department — told CNN, "We went out searching for this one specifically — that was always the plan. We've argued about whether we'd ever be able to find it for a really long time. People said it would be impossible, so confirming it's there is really exciting." Mummery described the plunging region as "like the edge of a waterfall." Unlike the event horizon, which is closer to the center of the black hole and doesn't let anything escape, including light and radiation, in the "plunging region" light can still escape, but matter is doomed by the powerful gravitational pull, Mummery explained. The study's findings could help astronomers better understand the formation and evolution of black holes. "We can really learn about them by studying this region, because it's right at the edge, so it gives us the most information," Mummery said... According to Christopher Reynolds, a professor of astronomy at the University of Maryland, College Park, finding actual evidence for the "plunging region" is an important step that will let scientists significantly refine models for how matter behaves around a black hole. "For example, it can be used to measure the rotation rate of the black hole," said Reynolds, who was not involved in the study.

Read more of this story at Slashdot.

'Google Domains' Starts Migrating to Squarespace

Par : EditorDavid
18 mai 2024 à 18:34
"We're migrating domains in batches..." announced web-hosting company Squarespace earlier this month. "Squarespace has entered into an agreement to become the new home for Google Domains customers. When your domain transitions from Google to Squarespace, you'll become a Squarespace customer and manage your domain through an account with us." Slashdot reader shortyadamk shares an email sent today to a Google Domains customer: "Today your domain, xyz.com, migrated from Google Domains to Squarespace Domains. "Your WHOIS contact details and billing information (if applicable) were migrated to Squarespace. Your DNS configuration remains unchanged. "Your migrated domain will continue to work with Google Services such as Google Search Console. To support this, your account now has a domain verification record — one corresponding to each Google account that currently has access to the domain."

Read more of this story at Slashdot.

Is America's Defense Department 'Rushing to Expand' Its Space War Capabilities?

Par : EditorDavid
18 mai 2024 à 17:34
America's Defense Department "is rushing to expand its capacity to wage war in space," reports the New York Times, "convinced that rapid advances by China and Russia in space-based operations pose a growing threat to U.S. troops and other military assets on the ground and U.S. satellites in orbit." [T]he Defense Department is looking to acquire a new generation of ground- and space-based tools that will allow it to defend its satellite network from attack and, if necessary, to disrupt or disable enemy spacecraft in orbit, Pentagon officials have said in a series of interviews, speeches and recent statements... [T]he move to enhance warfighting capacity in space is driven mostly by China's expanding fleet of military tools in space... [U.S. officials are] moving ahead with an effort they are calling "responsible counterspace campaigning," an intentionally ambiguous term that avoids directly confirming that the United States intends to put its own weapons in space. But it also is meant to reflect this commitment by the United States to pursue its interest in space without creating massive debris fields that would result if an explosive device or missile were used to blow up an enemy satellite. That is what happened in 2007, when China used a missile to blow up a satellite in orbit. The United States, China, India and Russia all have tested such missiles. But the United States vowed in 2022 not to do any such antisatellite tests again. The United States has also long had ground-based systems that allow it to jam radio signals, disrupting the ability of an enemy to communicate with its satellites, and is taking steps to modernize these systems. But under its new approach, the Pentagon is moving to take on an even more ambitious task: broadly suppress enemy threats in orbit in a fashion similar to what the Navy does in the oceans and the Air Force in the skies. The article notes a recent report drafted by a former Space Force colonel cited three ways to disable enemy satellite networks: cyberattacks, ground or space-based lasers, and high-powered microwaves. "John Shaw, a recently retired Space Force lieutenant general who helped run the Space Command, agreed that directed-energy devices based on the ground or in space would probably be a part of any future system. 'It does minimize debris; it works at the speed of light,' he said. 'Those are probably going to be the tools of choice to achieve our objective." The Pentagon is separately working to launch a new generation of military satellites that can maneuver, be refueled while in space or have robotic arms that could reach out and grab — and potentially disrupt — an enemy satellite. Another early focus is on protecting missile defense satellites. The Defense Department recently started to require that a new generation of these space-based monitoring systems have built-in tools to evade or respond to possible attack. "Resiliency feature to protect against directed energy attack mechanisms" is how one recent missile defense contract described it. Last month the Pentagon also awarded contracts to two companies — Rocket Lab and True Anomaly — to launch two spacecraft by late next year, one acting as a mock enemy and the other equipped with cameras, to pull up close and observe the threat. The intercept satellite will not have any weapons, but it has a cargo hold that could carry them. The article notes that Space Force's chief of space operations has told Senate appropriators that about $2.4 billion of the $29.4 billion in Space Force's proposed 2025 budget was set aside for "space domain awareness." And it adds that the Pentagon "is working to coordinate its so-called counterspace efforts with major allies, including Britain, Canada and Australia, through a multinational operation called Operation Olympic Defender. France has been particularly aggressive, announcing its intent to build and launch by 2030 a satellite equipped with a high-powered laser." [W]hat is clear is that a certain threshold has now been passed: Space has effectively become part of the military fighting domain, current and former Pentagon officials said. "By no means do we want to see war extend into space," Lt. Gen. DeAnna Burt, deputy chief of space operations, said at a Mitchell Institute event this year. "But if it does, we have to be prepared to fight and win."

Read more of this story at Slashdot.

Cruise Reached an $8M+ Settlement With the Person Dragged Under Its Robotaxi

Par : EditorDavid
18 mai 2024 à 16:34
Bloomberg reports that self-driving car company Cruise "reached an $8 million to $12 million settlement with a pedestrian who was dragged by one of its self-driving vehicles in San Francisco, according to a person familiar with the situation." The settlement was struck earlier this year and the woman is out of the hospital, said the person, who declined to be identified discussing a private matter. In the October incident, the pedestrian crossing the road was struck by another vehicle before landing in front of one of GM's Cruise vehicles. The robotaxi braked hard but ran over the person. It then pulled over for safety, driving 20 feet at a speed of up to seven miles per hour with the pedestrian still under the car. The incident "contributed to the company being blocked from operating in San Francisco and halting its operations around the country for months," reports the Washington Post: The company initially told reporters that the car had stopped just after rolling over the pedestrian, but the California Public Utilities Commission, which regulates permits for self-driving cars, later said Cruise had covered up the truth that its car actually kept going and dragged the woman. The crash and the questions about what Cruise knew and disclosed to investigators led to a firestorm of scrutiny on the company. Cruise pulled its vehicles off roads countrywide, laid off a quarter of its staff and in November its CEO Kyle Vogt stepped down. The Department of Justice and the Securities and Exchange Commission are investigating the company, adding to a probe from the National Highway Traffic Safety Administration. In Cruise's absence, Google's Waymo self-driving cars have become the only robotaxis operating in San Francisco. in June, the company's president and chief technology officer Mohamed Elshenawy is slated to speak at a conference on artificial-intelligence quality in San Francisco. Dow Jones news services published this quote from a Cruise spokesperson. "The hearts of all Cruise employees continue to be with the pedestrian, and we hope for her continued recovery."

Read more of this story at Slashdot.

Bruce Schneier Reminds LLM Engineers About the Risks of Prompt Injection Vulnerabilities

Par : EditorDavid
18 mai 2024 à 15:34
Security professional Bruce Schneier argues that large language models have the same vulnerability as phones in the 1970s exploited by John Draper. "Data and control used the same channel," Schneier writes in Communications of the ACM. "That is, the commands that told the phone switch what to do were sent along the same path as voices." Other forms of prompt injection involve the LLM receiving malicious instructions in its training data. Another example hides secret commands in Web pages. Any LLM application that processes emails or Web pages is vulnerable. Attackers can embed malicious commands in images and videos, so any system that processes those is vulnerable. Any LLM application that interacts with untrusted users — think of a chatbot embedded in a website — will be vulnerable to attack. It's hard to think of an LLM application that isn't vulnerable in some way. Individual attacks are easy to prevent once discovered and publicized, but there are an infinite number of them and no way to block them as a class. The real problem here is the same one that plagued the pre-SS7 phone network: the commingling of data and commands. As long as the data — whether it be training data, text prompts, or other input into the LLM — is mixed up with the commands that tell the LLM what to do, the system will be vulnerable. But unlike the phone system, we can't separate an LLM's data from its commands. One of the enormously powerful features of an LLM is that the data affects the code. We want the system to modify its operation when it gets new training data. We want it to change the way it works based on the commands we give it. The fact that LLMs self-modify based on their input data is a feature, not a bug. And it's the very thing that enables prompt injection. Like the old phone system, defenses are likely to be piecemeal. We're getting better at creating LLMs that are resistant to these attacks. We're building systems that clean up inputs, both by recognizing known prompt-injection attacks and training other LLMs to try to recognize what those attacks look like. (Although now you have to secure that other LLM from prompt-injection attacks.) In some cases, we can use access-control mechanisms and other Internet security systems to limit who can access the LLM and what the LLM can do. This will limit how much we can trust them. Can you ever trust an LLM email assistant if it can be tricked into doing something it shouldn't do? Can you ever trust a generative-AI traffic-detection video system if someone can hold up a carefully worded sign and convince it to not notice a particular license plate — and then forget that it ever saw the sign...? Someday, some AI researcher will figure out how to separate the data and control paths. Until then, though, we're going to have to think carefully about using LLMs in potentially adversarial situations...like, say, on the Internet. Schneier urges engineers to balance the risks of generative AI with the powers it brings. "Using them for everything is easier than taking the time to figure out what sort of specialized AI is optimized for the task. "But generative AI comes with a lot of security baggage — in the form of prompt-injection attacks and other security risks. We need to take a more nuanced view of AI systems, their uses, their own particular risks, and their costs vs. benefits."

Read more of this story at Slashdot.

Facing Angry Users, Sonos Promises to Fix Flaws and Restore Removed Features

Par : EditorDavid
18 mai 2024 à 14:34
A blind worker for the National Federation of the Blind said Sonos had a reputation for making products usable for people with disabilities, but that "Overnight they broke that trust," according to the Washington Post. They're not the only angry customers about the latest update to Sonos's wireless speaker system. The newspaper notes that nonprofit worker Charles Knight is "among the Sonos die-hards who are furious at the new app that crippled their options to stream music, listen to an album all the way through or set a morning alarm clock." After Sonos updated its app last week, Knight could no longer set or change his wake-up music alarm. Timers to turn off music were also missing. "Something as basic as an alarm is part of the feature set that users have had for 15 years," said Knight, who has spent thousands of dollars on six Sonos speakers for his bedroom, home office and kitchen. "It was just really badly thought out from start to finish." Some people who are blind also complained that the app omitted voice-control features they need. What's happening to Sonos speaker owners is a cautionary tale. As more of your possessions rely on software — including your car, phone, TV, home thermostat or tractor — the manufacturer can ruin them with one shoddy update... Sonos now says it's fixing problems and adding back missing features within days or weeks. Sonos CEO Patrick Spence acknowledged the company made some mistakes and said Sonos plans to earn back people's trust. "There are clearly people who are having an experience that is subpar," Spence said. "I would ask them to give us a chance to deliver the actions to address the concerns they've raised." Spence said that for years, customers' top complaint was the Sonos app was clunky and slow to connect to their speakers. Spence said the new app is zippier and easier for Sonos to update. (Some customers disputed that the new app is faster.) He said some problems like Knight's missing alarms were flaws that Sonos found only once the app was about to roll out. (Sonos updated the alarm feature this week.) Sonos did remove but planned to add back some lesser-used features. Spence said the company should have told people upfront about the planned timeline to return any missing functions. In a blog post Sonos thanked customers for "valuable feedback," saying they're "working to address them as quickly as possible" and promising to reintroduce features, fix bugs, and address performance issues. ("Adding and editing alarms" is available now, as well as VoiceOver fixes for the home screen on iOS.) The Washington Post adds that Sonos "said it initially missed some software flaws and will restore more voice-reader functions next week."

Read more of this story at Slashdot.

À partir d’avant-hierSlashdot

How Microsoft Employees Pressured the Company Over Its Oil Industry Ties

Par : EditorDavid
13 mai 2024 à 11:34
The non-profit environmental site Grist reports on "an internal, employee-led effort to raise ethical concerns about Microsoft's work helping oil and gas producers boost their profits by providing them with cloud computing resources and AI software tools." There's been some disappointments — but also some successes, starting with the founding of an internal sustainability group within Microsoft that grew to nearly 10,000 employees: Former Microsoft employees and sources familiar with tech industry advocacy say that, broadly speaking, employee pressure has had an enormous impact on sustainability at Microsoft, encouraging it to announce industry-leading climate goals in 2020 and support key federal climate policies. But convincing the world's most valuable company to forgo lucrative oil industry contracts proved far more difficult... Over the past seven years, Microsoft has announced dozens of new deals with oil and gas producers and oil field services companies, many explicitly aimed at unlocking new reserves, increasing production, and driving up oil industry profits... As concerns over the company's fossil fuel work mounted, Microsoft was gearing up to make a big sustainability announcement. In January 2020, the company pledged to become "carbon negative" by 2030, meaning that in 10 years, the tech giant would pull more carbon out of the air than it emitted on an annual basis... For nearly two years, employees watched and waited. Following its carbon negative announcement, Microsoft quickly expanded its internal carbon tax, which charges the company's business groups a fee for the carbon they emit via electricity use, employee travel, and more. It also invested in new technologies like direct air capture and purchased carbon removal contracts from dozens of projects worldwide. But Microsoft's work with the oil industry continued unabated, with the company announcing a slew of new partnerships in 2020 and 2021 aimed at cutting fossil fuel producers' costs and boosting production. The last straw for one technical account manager was a 2023 LinkedIn post by a Microsoft technical architect about the company's work on oil and gas industry automation. The post said Microsoft's cloud service was "unlocking previously inaccessible reserves" for the fossil fuel industry, promising that with Microsoft's Azure service, "the future of oil and gas exploration and production is brighter than ever." The technical account manager resigned from the position they'd held for nearly a decade, citing the blog post in a resignation letter which accused Microsoft of "extending the age of fossil fuels, and enabling untold emissions." Thanks to Slashdot reader joshuark for sharing the news.

Read more of this story at Slashdot.

OpenAI's Sam Altman Wants AI in the Hands of the People - and Universal Basic Compute?

Par : EditorDavid
13 mai 2024 à 07:34
OpenAI CEO Sam Altman gave an hour-long interview to the "All-In" podcast (hosted by Chamath Palihapitiya, Jason Calacanis, David Sacks and David Friedberg). And when asked about this summer's launch of the next version of ChatGPT, Altman said they hoped to "be thoughtful about how we do it, like we may release it in a different way than we've released previous models... Altman: One of the things that we really want to do is figure out how to make more advanced technology available to free users too. I think that's a super-important part of our mission, and this idea that we build AI tools and make them super-widely available — free or, you know, not-that-expensive, whatever that is — so that people can use them to go kind of invent the future, rather than the magic AGI in the sky inventing the future, and showering it down upon us. That seems like a much better path. It seems like a more inspiring path. I also think it's where things are actually heading. So it makes me sad that we have not figured out how to make GPT4-level technology available to free users. It's something we >really want to do... Q: It's just very expensive, I take it? Altman: It's very expensive. But Altman said later he's confident they'll be able to reduce cost. Altman: I don't know, like, when we get to intelligence too cheap to meter, and so fast that it feels instantaneous to us, and everything else, but I do believe we can get there for, you know, a pretty high level of intelligence. It's important to us, it's clearly important to users, and it'll unlock a lot of stuff. Altman also thinks there's "great roles for both" open-source and closed-source models, saying "We've open-sourced some stuff, we'll open-source more stuff in the future. "But really, our mission is to build toward AGI, and to figure out how to broadly distribute its benefits... " Altman even said later that "A huge part of what we try to do is put the technology in the hands of people..." Altman: The fact that we have so many people using a free version of ChatGPT that we don't — you know, we don't run ads on, we don't try to make money on it, we just put it out there because we want people to have these tools — I think has done a lot to provide a lot of value... But also to get the world really thoughtful about what's happening here. It feels to me like we just stumbled on a new fact of nature or science or whatever you want to call it... I am sure, like any other industry, I would expect there to be multiple approaches and different peoiple like different ones. Later Altman said he was "super-excited" about the possibility of an AI tutor that could reinvent how people learn, and "doing faster and better scientific discovery... that will be a triumph." But at some point the discussion led him to where the power of AI intersects with the concept of a universal basic income: Altman: Giving people money is not going to go solve all the problems. It is certainly not going to make people happy. But it might solve some problems, and it might give people a better horizon with which to help themselves. Now that we see some of the ways that AI is developing, I wonder if there's better things to do than the traditional conceptualization of UBI. Like, I wonder — I wonder if the future looks something more like Universal Basic Compute than Universal Basic Income, and everybody gets like a slice of GPT-7's compute, and they can use it, they can re-sell it, they can donate it to somebody to use for cancer research. But what you get is not dollars but this like slice — you own part of the the productivity. Altman was also asked about the "ouster" period where he was briefly fired from OpenAI — to which he gave a careful response: Altman: I think there's always been culture clashes at — look, obviously not all of those board members are my favorite people in the world. But I have serious respect for the gravity with which they treat AGI and the importance of getting AI safety right. And even if I stringently disagree with their decision-making and actions, which I do, I have never once doubted their integrity or commitment to the sort of shared mission of safe and beneficial AGI... I think a lot of the world is, understandably, very afraid of AGI, or very afraid of even current AI, and very excited about it — and even more afraid, and even more excited about where it's going. And we wrestle with that, but I think it is unavoidable that this is going to happen. I also think it's going to be tremendously beneficial. But we do have to navigate how to get there in a reasonable way. And, like a lot of stuff is going to change. And change is pretty uncomfortable for people. So there's a lot of pieces that we've got to get right... I really care about AGI and think this is like the most interesting work in the world.

Read more of this story at Slashdot.

Will Smarter Cars Bring 'Optimized' Traffic Lights?

Par : EditorDavid
13 mai 2024 à 03:54
"Researchers are exploring ways to use features in modern cars, such as GPS, to make traffic safer and more efficient," reports the Associated Press. "Eventually, the upgrades could do away entirely with the red, yellow and green lights of today, ceding control to driverless cars." Among those reimagining traffic flows is a team at North Carolina State University led by Ali Hajbabaie, an associate engineering professor. Rather than doing away with today's traffic signals, Hajbabaie suggests adding a fourth light, perhaps a white one, to indicate when there are enough autonomous vehicles on the road to take charge and lead the way. "When we get to the intersection, we stop if it's red and we go if it's green," said Hajbabaie, whose team used model cars small enough to hold. "But if the white light is active, you just follow the vehicle in front of you." He points out that this approach could be years aways, since it requires self-driving capability in 40% to 50% of the cars on the road. But the article notes another approach which could happen sooner, talking to Henry Liu, a civil engineering professor who is leading ">a study through the University of Michigan: They conducted a pilot program in the Detroit suburb of Birmingham using insights from the speed and location data found in General Motors vehicles to alter the timing of that city's traffic lights. The researchers recently landed a U.S. Department of Transportation grant under the bipartisan infrastructure law to test how to make the changes in real time... Liu, who has been leading the Michigan research, said even with as little as 6% of the vehicles on Birmingham's streets connected to the GM system, they provide enough data to adjust the timing of the traffic lights to smooth the flow... "The beauty of this is you don't have to do anything to the infrastructure," Liu said. "The data is not coming from the infrastructure. It's coming from the car companies." Danielle Deneau, director of traffic safety at the Road Commission in Oakland County, Michigan, said the initial data in Birmingham only adjusted the timing of green lights by a few seconds, but it was still enough to reduce congestion. "Even bigger changes could be in store under the new grant-funded research, which would automate the traffic lights in a yet-to-be announced location in the county."

Read more of this story at Slashdot.

Australia Criticized For Ramping Up Gas Extraction Through '2050 and Beyond'

Par : EditorDavid
13 mai 2024 à 01:34
Slashdot reader sonlas shared this report from the BBC: Australia has announced it will ramp up its extraction and use of gas until "2050 and beyond", despite global calls to phase out fossil fuels. Prime Minister Anthony Albanese's government says the move is needed to shore up domestic energy supply while supporting a transition to net zero... Australia — one of the world's largest exporters of liquefied natural gas — has also said the policy is based on "its commitment to being a reliable trading partner". Released on Thursday, the strategy outlines the government's plans to work with industry and state leaders to increase both the production and exploration of the fossil fuel. The government will also continue to support the expansion of the country's existing gas projects, the largest of which are run by Chevron and Woodside Energy Group in Western Australia... The policy has sparked fierce backlash from environmental groups and critics — who say it puts the interest of powerful fossil fuel companies before people. "Fossil gas is not a transition fuel. It's one of the main contributors to global warming and has been the largest source of increases of CO2 [emissions] over the last decade," Prof Bill Hare, chief executive of Climate Analytics and author of numerous UN climate change reports told the BBC... Successive Australian governments have touted gas as a key "bridging fuel", arguing that turning it off too soon could have "significant adverse impacts" on Australia's economy and energy needs. But Prof Hare and other scientists have warned that building a net zero policy around gas will "contribute to locking in 2.7-3C global warming, which will have catastrophic consequences".

Read more of this story at Slashdot.

Linux Kernel 6.9 Officially Released

Par : EditorDavid
12 mai 2024 à 22:34
"6.9 is now out," Linus Torvalds posted on the Linux kernel mailing list, "and last week has looked quite stable (and the whole release has felt pretty normal)." Phoronix writes that Linux 6.9 "has a number of exciting features and improvements for those habitually updating to the newest version." And Slashdot reader prisoninmate shared this report from 9to5Linux: Highlights of Linux kernel 6.9 include Rust support on AArch64 (ARM64) architectures, support for the Intel FRED (Flexible Return and Event Delivery) mechanism for improved low-level event delivery, support for AMD SNP (Secure Nested Paging) guests, and a new dm-vdo (virtual data optimizer) target in device mapper for inline deduplication, compression, zero-block elimination, and thin provisioning. Linux kernel 6.9 also supports the Named Address Spaces feature in GCC (GNU Compiler Collection) that allows the compiler to better optimize per-CPU data access, adds initial support for FUSE passthrough to allow the kernel to serve files from a user-space FUSE server directly, adds support for the Energy Model to be updated dynamically at run time, and introduces a new LPA2 mode for ARM 64-bit processors... Linux kernel 6.9 will be a short-lived branch supported for only a couple of months. It will be succeeded by Linux kernel 6.10, whose merge window has now been officially opened by Linus Torvalds. Linux kernel 6.10 is expected to be released in mid or late September 2024. "Rust language has been updated to version 1.76.0 in Linux 6.9," according to the article. And Linus Torvalds shared one more details on the Linux kernel mailing list. "I now have a more powerful arm64 machine (thanks to Ampere), so the last week I've been doing almost as many arm64 builds as I have x86-64, and that should obviously continue during the upcoming merge window too."

Read more of this story at Slashdot.

Reddit Grows, Seeks More AI Deals, Plans 'Award' Shops, and Gets Sued

Par : EditorDavid
12 mai 2024 à 21:34
Reddit reported its first results since going public in late March. Yahoo Finance reports: Daily active users increased 37% year over year to 82.7 million. Weekly active unique users rose 40% from the prior year. Total revenue improved 48% to $243 million, nearly doubling the growth rate from the prior quarter, due to strength in advertising. The company delivered adjusted operating profits of $10 million, versus a $50.2 million loss a year ago. [Reddit CEO Steve] Huffman declined to say when the company would be profitable on a net income basis, noting it's a focus for the management team. Other areas of focus include rolling out a new user interface this year, introducing shopping capabilities, and searching for another artificial intelligence content licensing deal like the one with Google. Bloomberg notes that already Reddit "has signed licensing agreements worth $203 million in total, with terms ranging from two to three years. The company generated about $20 million from AI content deals last quarter, and expects to bring in more than $60 million by the end of the year." And elsewhere Bloomberg writes that Reddit "plans to expand its revenue streams outside of advertising into what Huffman calls the 'user economy' — users making money from others on the platform... " In the coming months Reddit plans to launch new versions of awards, which are digital gifts users can give to each other, along with other products... Reddit also plans to continue striking data licensing deals with artificial intelligence companies, expanding into international markets and evaluating potential acquisition targets in areas such as search, he said. Meanwhile, ZDNet notes that this week a Reddit announcement "introduced a new public content policy that lays out a framework for how partners and third parties can access user-posted content on its site." The post explains that more and more companies are using unsavory means to access user data in bulk, including Reddit posts. Once a company gets this data, there's no limit to what it can do with it. Reddit will continue to block "bad actors" that use unauthorized methods to get data, the company says, but it's taking additional steps to keep users safe from the site's partners.... Reddit still supports using its data for research: It's creating a new subreddit — r/reddit4researchers — to support these initiatives, and partnering with OpenMined to help improve research. Private data is, however, going to stay private. If a company wants to use Reddit data for commercial purposes, including advertising or training AI, it will have to pay. Reddit made this clear by saying, "If you're interested in using Reddit data to power, augment, or enhance your product or service for any commercial purposes, we require a contract." To be clear, Reddit is still selling users' data — it's just making sure that unscrupulous actors have a tougher time accessing that data for free and researchers have an easier time finding what they need. And finally, there's some court action, according to the Register. Reddit "was sued by an unhappy advertiser who claims that internet giga-forum sold ads but provided no way to verify that real people were responsible for clicking on them." The complaint [PDF] was filed this week in a U.S. federal court in northern California on behalf of LevelFields, a Virginia-based investment research platform that relies on AI. It says the biz booked pay-per-click ads on the discussion site starting September 2022... That arrangement called for Reddit to use reasonable means to ensure that LevelField's ads were delivered to and clicked on by actual people rather than bots and the like. But according to the complaint, Reddit broke that contract... LevelFields argues that Reddit is in a particularly good position to track click fraud because it's serving ads on its own site, as opposed to third-party properties where it may have less visibility into network traffic... Nonetheless, LevelFields's effort to obtain IP address data to verify the ads it was billed for went unfulfilled. The social media site "provided click logs without IP addresses," the complaint says. "Reddit represented that it was not able to provide IP addresses." "The plaintiffs aspire to have their claim certified as a class action," the article adds — along with an interesting statistic. "According to Juniper Research, 22 percent of ad spending last year was lost to click fraud, amounting to $84 billion."

Read more of this story at Slashdot.

OpenAI's Sam Altman on iPhones, Music, Training Data, and Apple's Controversial iPad Ad

Par : EditorDavid
12 mai 2024 à 20:34
OpenAI CEO Sam Altman gave an hour-long interview to the "All-In" podcast (hosted by Chamath Palihapitiya, Jason Calacanis, David Sacks and David Friedberg). And speaking on technology's advance, Altman said "Phones are unbelievably good.... I personally think the iPhone is like the greatest piece of technology humanity has ever made. It's really a wonderful product." Q: What comes after it? Altman: I don't know. I mean, that was what I was saying. It's so good, that to get beyond it, I think the bar is quite high. Q: You've been working with Jony Ive on something, right? Altman: We've been discussing ideas, but I don't — like, if I knew... Altman said later he thought voice interaction "feels like a different way to use a computer." But the conversation turned to Apple in another way. It happened in a larger conversation where Altman said OpenAI has "currently made the decision not to do music, and partly because exactly these questions of where you draw the lines..." Altman: Even the world in which — if we went and, let's say we paid 10,000 musicians to create a bunch of music, just to make a great training set, where the music model could learn everything about song structure and what makes a good, catchy beat and everything else, and only trained on that — let's say we could still make a great music model, which maybe we could. I was posing that as a thought experiment to musicians, and they were like, "Well, I can't object to that on any principle basis at that point — and yet there's still something I don't like about it." Now, that's not a reason not to do it, um, necessarily, but it is — did you see that ad that Apple put out... of like squishing all of human creativity down into one really iPad...? There's something about — I'm obviously hugely positive on AI — but there is something that I think is beautiful about human creativity and human artistic expression. And, you know, for an AI that just does better science, like, "Great. Bring that on." But an AI that is going to do this deeply beautiful human creative expression? I think we should figure out — it's going to happen. It's going to be a tool that will lead us to greater creative heights. But I think we should figure out how to do it in a way that preserves the spirit of what we all care about here. What about creators whose copyrighted materials are used for training data? Altman had a ready answer — but also some predictions for the future. "On fair use, I think we have a very reasonable position under the current law. But I think AI is so different that for things like art, we'll need to think about them in different ways..." Altman:I think the conversation has been historically very caught up on training data, but it will increasingly become more about what happens at inference time, as training data becomes less valuable and what the system does accessing information in context, in real-time... what happens at inference time will become more debated, and what the new economic model is there. Altman gave the example of an AI which was never trained on any Taylor Swift songs — but could still respond to a prompt requesting a song in her style. Altman: And then the question is, should that model, even if it were never trained on any Taylor Swift song whatsoever, be allowed to do that? And if so, how should Taylor get paid? So I think there's an opt-in, opt-out in that case, first of all — and then there's an economic model. Altman also wondered if there's lessons in the history and economics of music sampling...

Read more of this story at Slashdot.

❌
❌