Vue normale

Il y a de nouveaux articles disponibles, cliquez pour rafraîchir la page.
Aujourd’hui — 18 mai 2024Actualités numériques

Are AI-Generated Search Results Still Protected by Section 230?

Par : EditorDavid
18 mai 2024 à 22:34
Starting this week millions will see AI-generated answers in Google's search results by default. But the announcement Tuesday at Google's annual developer conference suggests a future that's "not without its risks, both to users and to Google itself," argues the Washington Post: For years, Google has been shielded for liability for linking users to bad, harmful or illegal information by Section 230 of the Communications Decency Act. But legal experts say that shield probably won't apply when its AI answers search questions directly. "As we all know, generative AIs hallucinate," said James Grimmelmann, professor of digital and information law at Cornell Law School and Cornell Tech. "So when Google uses a generative AI to summarize what webpages say, and the AI gets it wrong, Google is now the source of the harmful information," rather than just the distributor of it... Adam Thierer, senior fellow at the nonprofit free-market think tank R Street, worries that innovation could be throttled if Congress doesn't extend Section 230 to cover AI tools. "As AI is integrated into more consumer-facing products, the ambiguity about liability will haunt developers and investors," he predicted. "It is particularly problematic for small AI firms and open-source AI developers, who could be decimated as frivolous legal claims accumulate." But John Bergmayer, legal director for the digital rights nonprofit Public Knowledge, said there are real concerns that AI answers could spell doom for many of the publishers and creators that rely on search traffic to survive — and which AI, in turn, relies on for credible information. From that standpoint, he said, a liability regime that incentivizes search engines to continue sending users to third-party websites might be "a really good outcome." Meanwhile, some lawmakers are looking to ditch Section 230 altogether. [Last] Sunday, the top Democrat and Republican on the House Energy and Commerce Committee released a draft of a bill that would sunset the statute within 18 months, giving Congress time to craft a new liability framework in its place. In a Wall Street Journal op-ed, Reps. Cathy McMorris Rodgers (R-Wash.) and Frank Pallone Jr. (D-N.J.) argued that the law, which helped pave the way for social media and the modern internet, has "outlived its usefulness." The tech industry trade group NetChoice [which includes Google, Meta, X, and Amazon] fired back on Monday that scrapping Section 230 would "decimate small tech" and "discourage free speech online." The digital law professor points out Google has traditionally escaped legal liability by attributing its answers to specific sources — but it's not just Google that has to worry about the issue. The article notes that Microsoft's Bing search engine also supplies AI-generated answers (from Microsoft's Copilot). "And Meta recently replaced the search bar in Facebook, Instagram and WhatsApp with its own AI chatbot." The article also note sthat several U.S. Congressional committees are considering "a bevy" of AI bills...

Read more of this story at Slashdot.

666e édition des LIDD : Liens Intelligents Du Dimanche

18 mai 2024 à 22:00
Le voilà, le temps attendu 666e !

Comme tous les dimanches (après une petite pause ces dernières semaines), voici notre sélection des liens les plus intéressants de ces derniers jours. Ils proviennent des commentaires les plus intéressants, utiles et/ou originaux de la semaine, mais aussi de nos recherches.

Le floutage de demain passe dès aujourd’hui par l’IA

Dans sa revue des médias, l’INA propose un article très intéressant sur la nécessité d’inventer « le floutage de demain » : « Pendant longtemps, la télé a flouté les personnes qui souhaitaient témoigner de façon anonyme. Ces floutages sont désormais contournables grâce à l’intelligence artificielle. Pour trouver des solutions qui permettent de continuer à assurer la protection de ces témoins, une course contre la montre s’est engagée à France Télévisions ».

Nos confrères se sont posés la question de savoir si les méthodes d’anonymisation des visages étaient suffisantes, d’autant que cela soulève des questions déontologiques sur la nécessaire et obligatoire protection des sources. Résultat des courses : « le floutage tel qu’il existe aujourd’hui risque d’être compromis ». Une trentaine de sujets ont déjà été dépubliés.

Et si la solution venait de l’intelligence artificielle qui placerait un nouveau visage à la place de l’ancien, avec un autre intérêt que l’anonymisation : « L’IA permet d’envisager l’émotion qu’on nous partage, mais sans révéler le témoin. Le visage, c’est souvent la part d’humanité qui reste dans les zones du monde où il y a des conflits ou de la répression ».

Il y a déjà une utilisation concrète avec le documentaire Nous, jeunesse(s) d’Iran, cité par la Revue des médias. Il a été diffusé sur France 5 : « six récits de jeunes de moins de 25 ans donnent vie aux transformations en cours dans la société iranienne ».

Pour ce film, les réalisateurs ont « fait le choix de générer des visages via intelligence artificielle pour anonymiser ses témoins (au centre). Créer une simple mosaïque présente dorénavant trop de risques. Quant au carré noir, difficile de l’utiliser durant tout un film », explique l’INA.

Neurone et IA : la bonne connexion ?

France Culture propose un podcast sur l’histoire de l’IA : « Aux origines de l’intelligence artificielle, il y a la volonté de plusieurs chercheurs de décrypter les mécanismes de la pensée humaine. Quel rôle ont joué les premiers réseaux de neurones artificiels dans cette histoire à rebondissements ? ». L’invité du jour est Alban Leveau-Vallier, docteur en philosophie à Sciences Po Paris.

C’est quoi être « identifiable » au sens du RGPD ?

C’est la question à laquelle l’avocat spécialiste en droit de la création et de l’innovation Etienne Wery tente de répondre en s’appuyant sur une décision de justice de la Cour européenne.

La version courte : « Un communiqué de presse qui n’identifie pas la personne visée, mais contient des informations qui rendent l’identification possible sans « effort démesuré en termes de temps, de coût et de main-d’œuvre », de sorte que le risque d’identification n’est pas « insignifiant », est une donnée à caractère personnel ». La version longue est intéressante à lire :

Patch, Jeux olympiques et « biologie augmentée »

G Milgram revient avec une nouvelle vidéo sur le « patch bidon de Sanofi et…des Jeux Olympiques ». Le youtubeur explique avoir été contacté par plusieurs pharmaciens, ce qui l’a convaincu de se lancer dans l’analyse d’Initiv : « On va parler « « « biologie augmentée » » » et on va laisser la parole aux spécialistes ! ».

L’histoire de la soupe primordiale et du jeu de la vie

Il y a quelques semaines, EGO a proposé une vidéo sur le jeu de la vie. Il est extrêmement simple avec seulement deux règles et zéro joueur : placez-vous devant l’écran et regardez le « monde » évoluer. Changez quelques paramètres et modifiez complétement le résultat.

« Le Jeu de la Vie de Conway est un automate cellulaire où des cellules sur une grille bidimensionnelle évoluent à chaque tour selon des règles précises basées sur l’état de leurs voisines. Une cellule peut être “vivante” ou “morte”, changeant d’état en fonction du nombre de voisins vivants. Ce jeu sans joueur illustre la complexité émergente et l’auto-organisation à partir de règles simples ».

Si vous voulez en profiter pour retracer ce qui est probablement l’étape la plus importante de notre univers, sa première seconde de « vie », alors c’est le moment d’écouter ce podcast de France Culture. On y retrouve Julien Grain (cosmologiste, chargé de recherche à l’Institut d’Astrophysique spatiale d’Orsay) et Sébastien Renaux-Petel (physicien théoricien, spécialiste de cosmologie, chargé de recherche CNRS).

Combo perdant : masculinismes, identitaires et réseaux sociaux

France TV propose un documentaire dont le nom donne le ton : « Mascus, les hommes qui détestent les femmes ». Nos confrères reviennent sur des vidéos provocatrices sur les réseaux sociaux, qui cumulent des centaines de milliers de vues.

« Sous la forme d’une cyber-enquête, ce documentaire vise à décrypter la manosphère et à en montrer les dangers. Notre journaliste, Pierre Gault, s’est « infiltré » dans les forums, groupes Telegram ou WhatsApp et conversations privées. Banalisation des agressions sexuelles, appels au viol, propos misogynes mais aussi racistes, harcèlements… sa plongée au cœur des communautés masculinistes est vertigineuse et révèle une culture de la haine des femmes ».

France Culture propose de son côté un podcast intitulé « Masculinismes, identitaires : les réseaux au service d’une nouvelle vague réactionnaire », réalisé en partenariat avec Numerama.

« Alors que le sentiment de déclin de la population française ne cesse d’augmenter et que le pays semble traversé par une “fracture identitaire”, une nouvelle vague d’influenceurs d’extrême droite émerge sur les réseaux sociaux ».

Le débarras des LIDD

La règle d’or « corrélation n’est pas causalité » est parfaitement illustré par ce site sur les fausses corrélations (en anglais). Saviez-vous, par exemple, que la qualité de l’air au Tennessee était quasi parfaitement aligné avec le nombre de films dans lesquels Orlando Bloom apparait ? Même chose pour les recherches « cat memes » sur Google et les rappels automobiles à cause de problèmes sur les Airbags. Coïncidences ? La fonction random permet de bien s’amuser et de tomber parfois sur quelques pépites.

Doom dans Htop

Dans la pure lignée des choses les plus inutiles, il y a la fameuse question « Can it run Doom ? » dès que l’on parle d’un nouveau matériel, d’une application ou du moindre appareil avec un bout d’écran. C’est au tour du moniteur système Linux Htop de sauter le pas (GitHub). Je vous l’accorde, on a connu plus lisible…

Moins inutile (et donc moins indispensable ? Pas sur…), The Nature of Code (en anglais) est un site proposant des exemples d’utilisation de la bibliothèque JavaScript p5.js avec des concepts physiques comme les vecteurs, les forces, les fractales. etc. L’Université de Genève propose une introduction à P5.js en français, qui permet de dessiner et de réaliser des animations.

Brouteurs broutés

Vous êtes encore là ? Alors allez lire un « petit » thread de Méta-Brouteur sur l’histoire du brouteur brouté. Attention toutefois, ces arnaques continuent de faire des victimes, ne vous laissez pas avoir ! Arnaquemoisitupeux aussi s’intéresse sujet et raconte « absolument n’importe quoi » à un brouteur depuis deux mois… et c’est peu de le dire.

Plus sérieusement, France Culture propose un podcast sur ce sujet avec deux chasseurs d’escrocs. L’un d’eux « s’invente des faux profils, hack les ordinateurs des arnaqueurs aux sentiments et vient en aide à leurs victimes ». L’autre, « modifie sa voix pour débusquer les escrocs. Et parfois, il réussit à les mettre hors d’état de nuire définitivement ».

Quoi, ce n’est toujours pas suffisant ? Alors voici de quoi terminer la journée : un rapport de plus de 500 pages (en anglais) de l’université de Stanford sur l’intelligence artificielle (si vous avez un résumé, je suis preneur !).

Meow ?

Et si vraiment cela ne suffit pas à vous endormir, alors on a la solution ultime avec meow.camera (en chinois, mais c’est pas grave). Oui oui, il s’agit bien de caméras installées dans des distributeurs de croquettes pour chat. Une croquette, deux croquettes, trois croquettes… Une sorte de CatRoulette.

How an 'Unprecedented' Google Cloud Event Wiped Out a Major Customer's Account

Par : EditorDavid
18 mai 2024 à 21:34
Ars Technica looks at what happened after Google's answer to Amazon's cloud service "accidentally deleted a giant customer account for no reason..." "[A]ccording to UniSuper's incident log, downtime started May 2, and a full restoration of services didn't happen until May 15." UniSuper, an Australian pension fund that manages $135 billion worth of funds and has 647,000 members, had its entire account wiped out at Google Cloud, including all its backups that were stored on the service... UniSuper's website is now full of must-read admin nightmare fuel about how this all happened. First is a wild page posted on May 8 titled "A joint statement from UniSuper CEO Peter Chun, and Google Cloud CEO, Thomas Kurian...." Google Cloud is supposed to have safeguards that don't allow account deletion, but none of them worked apparently, and the only option was a restore from a separate cloud provider (shoutout to the hero at UniSuper who chose a multi-cloud solution)... The many stakeholders in the service meant service restoration wasn't just about restoring backups but also processing all the requests and payments that still needed to happen during the two weeks of downtime. The second must-read document in this whole saga is the outage update page, which contains 12 statements as the cloud devs worked through this catastrophe. The first update is May 2 with the ominous statement, "You may be aware of a service disruption affecting UniSuper's systems...." Seven days after the outage, on May 9, we saw the first signs of life again for UniSuper. Logins started working for "online UniSuper accounts" (I think that only means the website), but the outage page noted that "account balances shown may not reflect transactions which have not yet been processed due to the outage...." May 13 is the first mention of the mobile app beginning to work again. This update noted that balances still weren't up to date and that "We are processing transactions as quickly as we can." The last update, on May 15, states, "UniSuper can confirm that all member-facing services have been fully restored, with our retirement calculators now available again." The joint statement and the outage updates are still not a technical post-mortem of what happened, and it's unclear if we'll get one. Google PR confirmed in multiple places it signed off on the statement, but a great breakdown from software developer Daniel Compton points out that the statement is not just vague, it's also full of terminology that doesn't align with Google Cloud products. The imprecise language makes it seem like the statement was written entirely by UniSuper. Thanks to long-time Slashdot reader swm for sharing the news.

Read more of this story at Slashdot.

Eight Automakers Grilled by US Lawmakers Over Sharing of Connected Car Data With Police

Par : EditorDavid
18 mai 2024 à 20:34
An anonymous reader shared this report from Automotive News: Automotive News recently reported that eight automakers sent vehicle location data to police without a court order or warrant. The eight companies told senators that they provide police with data when subpoenaed, getting a rise from several officials. BMW, Kia, Mazda, Mercedes-Benz, Nissan, Subaru, Toyota, and Volkswagen presented their responses to lawmakers. Senators Ron Wyden from Oregon and Ed Markey from Massachusetts penned a letter to the Federal Trade Commission, urging investigative action. "Automakers have not only kept consumers in the dark regarding their actual practices, but multiple companies misled consumers for over a decade by failing to honor the industry's own voluntary privacy principles," they wrote. Ten years ago, all of those companies agreed to the Consumer Privacy Protection Principles, a voluntary code that said automakers would only provide data with a warrant or order issued by a court. Subpoenas, on the other hand, only require approval from law enforcement. Though it wasn't part of the eight automakers' response, General Motors has a class-action suit on its hands, claiming that it shared data with LexisNexis Risk Solutions, a company that provides insurers with information to set rates. The article notes that the lawmakers praised Honda, Ford, GM, Tesla, and Stellantis for requiring warrants, "except in the case of emergencies or with customer consent."

Read more of this story at Slashdot.

Study Confirms Einstein Prediction: Black Holes Have a 'Plunging Region'

Par : EditorDavid
18 mai 2024 à 19:34
"Albert Einstein was right," reports CNN. "There is an area at the edge of black holes where matter can no longer stay in orbit and instead falls in, as predicted by his theory of gravity." The proof came by combining NASA's earth-orbiting NuSTAR telescope with the NICER telescope on the International Space Station to detect X-rays: A team of astronomers has for the first time observed this area — called the "plunging region" — in a black hole about 10,000 light-years from Earth. "We've been ignoring this region, because we didn't have the data," said research scientist Andrew Mummery, lead author of the study published Thursday in the journal Monthly Notices of the Royal Astronomical Society. "But now that we do, we couldn't explain it any other way." Mummery — also a Fellow in Oxford's physics department — told CNN, "We went out searching for this one specifically — that was always the plan. We've argued about whether we'd ever be able to find it for a really long time. People said it would be impossible, so confirming it's there is really exciting." Mummery described the plunging region as "like the edge of a waterfall." Unlike the event horizon, which is closer to the center of the black hole and doesn't let anything escape, including light and radiation, in the "plunging region" light can still escape, but matter is doomed by the powerful gravitational pull, Mummery explained. The study's findings could help astronomers better understand the formation and evolution of black holes. "We can really learn about them by studying this region, because it's right at the edge, so it gives us the most information," Mummery said... According to Christopher Reynolds, a professor of astronomy at the University of Maryland, College Park, finding actual evidence for the "plunging region" is an important step that will let scientists significantly refine models for how matter behaves around a black hole. "For example, it can be used to measure the rotation rate of the black hole," said Reynolds, who was not involved in the study.

Read more of this story at Slashdot.

'Google Domains' Starts Migrating to Squarespace

Par : EditorDavid
18 mai 2024 à 18:34
"We're migrating domains in batches..." announced web-hosting company Squarespace earlier this month. "Squarespace has entered into an agreement to become the new home for Google Domains customers. When your domain transitions from Google to Squarespace, you'll become a Squarespace customer and manage your domain through an account with us." Slashdot reader shortyadamk shares an email sent today to a Google Domains customer: "Today your domain, xyz.com, migrated from Google Domains to Squarespace Domains. "Your WHOIS contact details and billing information (if applicable) were migrated to Squarespace. Your DNS configuration remains unchanged. "Your migrated domain will continue to work with Google Services such as Google Search Console. To support this, your account now has a domain verification record — one corresponding to each Google account that currently has access to the domain."

Read more of this story at Slashdot.

Is America's Defense Department 'Rushing to Expand' Its Space War Capabilities?

Par : EditorDavid
18 mai 2024 à 17:34
America's Defense Department "is rushing to expand its capacity to wage war in space," reports the New York Times, "convinced that rapid advances by China and Russia in space-based operations pose a growing threat to U.S. troops and other military assets on the ground and U.S. satellites in orbit." [T]he Defense Department is looking to acquire a new generation of ground- and space-based tools that will allow it to defend its satellite network from attack and, if necessary, to disrupt or disable enemy spacecraft in orbit, Pentagon officials have said in a series of interviews, speeches and recent statements... [T]he move to enhance warfighting capacity in space is driven mostly by China's expanding fleet of military tools in space... [U.S. officials are] moving ahead with an effort they are calling "responsible counterspace campaigning," an intentionally ambiguous term that avoids directly confirming that the United States intends to put its own weapons in space. But it also is meant to reflect this commitment by the United States to pursue its interest in space without creating massive debris fields that would result if an explosive device or missile were used to blow up an enemy satellite. That is what happened in 2007, when China used a missile to blow up a satellite in orbit. The United States, China, India and Russia all have tested such missiles. But the United States vowed in 2022 not to do any such antisatellite tests again. The United States has also long had ground-based systems that allow it to jam radio signals, disrupting the ability of an enemy to communicate with its satellites, and is taking steps to modernize these systems. But under its new approach, the Pentagon is moving to take on an even more ambitious task: broadly suppress enemy threats in orbit in a fashion similar to what the Navy does in the oceans and the Air Force in the skies. The article notes a recent report drafted by a former Space Force colonel cited three ways to disable enemy satellite networks: cyberattacks, ground or space-based lasers, and high-powered microwaves. "John Shaw, a recently retired Space Force lieutenant general who helped run the Space Command, agreed that directed-energy devices based on the ground or in space would probably be a part of any future system. 'It does minimize debris; it works at the speed of light,' he said. 'Those are probably going to be the tools of choice to achieve our objective." The Pentagon is separately working to launch a new generation of military satellites that can maneuver, be refueled while in space or have robotic arms that could reach out and grab — and potentially disrupt — an enemy satellite. Another early focus is on protecting missile defense satellites. The Defense Department recently started to require that a new generation of these space-based monitoring systems have built-in tools to evade or respond to possible attack. "Resiliency feature to protect against directed energy attack mechanisms" is how one recent missile defense contract described it. Last month the Pentagon also awarded contracts to two companies — Rocket Lab and True Anomaly — to launch two spacecraft by late next year, one acting as a mock enemy and the other equipped with cameras, to pull up close and observe the threat. The intercept satellite will not have any weapons, but it has a cargo hold that could carry them. The article notes that Space Force's chief of space operations has told Senate appropriators that about $2.4 billion of the $29.4 billion in Space Force's proposed 2025 budget was set aside for "space domain awareness." And it adds that the Pentagon "is working to coordinate its so-called counterspace efforts with major allies, including Britain, Canada and Australia, through a multinational operation called Operation Olympic Defender. France has been particularly aggressive, announcing its intent to build and launch by 2030 a satellite equipped with a high-powered laser." [W]hat is clear is that a certain threshold has now been passed: Space has effectively become part of the military fighting domain, current and former Pentagon officials said. "By no means do we want to see war extend into space," Lt. Gen. DeAnna Burt, deputy chief of space operations, said at a Mitchell Institute event this year. "But if it does, we have to be prepared to fight and win."

Read more of this story at Slashdot.

Cruise Reached an $8M+ Settlement With the Person Dragged Under Its Robotaxi

Par : EditorDavid
18 mai 2024 à 16:34
Bloomberg reports that self-driving car company Cruise "reached an $8 million to $12 million settlement with a pedestrian who was dragged by one of its self-driving vehicles in San Francisco, according to a person familiar with the situation." The settlement was struck earlier this year and the woman is out of the hospital, said the person, who declined to be identified discussing a private matter. In the October incident, the pedestrian crossing the road was struck by another vehicle before landing in front of one of GM's Cruise vehicles. The robotaxi braked hard but ran over the person. It then pulled over for safety, driving 20 feet at a speed of up to seven miles per hour with the pedestrian still under the car. The incident "contributed to the company being blocked from operating in San Francisco and halting its operations around the country for months," reports the Washington Post: The company initially told reporters that the car had stopped just after rolling over the pedestrian, but the California Public Utilities Commission, which regulates permits for self-driving cars, later said Cruise had covered up the truth that its car actually kept going and dragged the woman. The crash and the questions about what Cruise knew and disclosed to investigators led to a firestorm of scrutiny on the company. Cruise pulled its vehicles off roads countrywide, laid off a quarter of its staff and in November its CEO Kyle Vogt stepped down. The Department of Justice and the Securities and Exchange Commission are investigating the company, adding to a probe from the National Highway Traffic Safety Administration. In Cruise's absence, Google's Waymo self-driving cars have become the only robotaxis operating in San Francisco. in June, the company's president and chief technology officer Mohamed Elshenawy is slated to speak at a conference on artificial-intelligence quality in San Francisco. Dow Jones news services published this quote from a Cruise spokesperson. "The hearts of all Cruise employees continue to be with the pedestrian, and we hope for her continued recovery."

Read more of this story at Slashdot.

Bruce Schneier Reminds LLM Engineers About the Risks of Prompt Injection Vulnerabilities

Par : EditorDavid
18 mai 2024 à 15:34
Security professional Bruce Schneier argues that large language models have the same vulnerability as phones in the 1970s exploited by John Draper. "Data and control used the same channel," Schneier writes in Communications of the ACM. "That is, the commands that told the phone switch what to do were sent along the same path as voices." Other forms of prompt injection involve the LLM receiving malicious instructions in its training data. Another example hides secret commands in Web pages. Any LLM application that processes emails or Web pages is vulnerable. Attackers can embed malicious commands in images and videos, so any system that processes those is vulnerable. Any LLM application that interacts with untrusted users — think of a chatbot embedded in a website — will be vulnerable to attack. It's hard to think of an LLM application that isn't vulnerable in some way. Individual attacks are easy to prevent once discovered and publicized, but there are an infinite number of them and no way to block them as a class. The real problem here is the same one that plagued the pre-SS7 phone network: the commingling of data and commands. As long as the data — whether it be training data, text prompts, or other input into the LLM — is mixed up with the commands that tell the LLM what to do, the system will be vulnerable. But unlike the phone system, we can't separate an LLM's data from its commands. One of the enormously powerful features of an LLM is that the data affects the code. We want the system to modify its operation when it gets new training data. We want it to change the way it works based on the commands we give it. The fact that LLMs self-modify based on their input data is a feature, not a bug. And it's the very thing that enables prompt injection. Like the old phone system, defenses are likely to be piecemeal. We're getting better at creating LLMs that are resistant to these attacks. We're building systems that clean up inputs, both by recognizing known prompt-injection attacks and training other LLMs to try to recognize what those attacks look like. (Although now you have to secure that other LLM from prompt-injection attacks.) In some cases, we can use access-control mechanisms and other Internet security systems to limit who can access the LLM and what the LLM can do. This will limit how much we can trust them. Can you ever trust an LLM email assistant if it can be tricked into doing something it shouldn't do? Can you ever trust a generative-AI traffic-detection video system if someone can hold up a carefully worded sign and convince it to not notice a particular license plate — and then forget that it ever saw the sign...? Someday, some AI researcher will figure out how to separate the data and control paths. Until then, though, we're going to have to think carefully about using LLMs in potentially adversarial situations...like, say, on the Internet. Schneier urges engineers to balance the risks of generative AI with the powers it brings. "Using them for everything is easier than taking the time to figure out what sort of specialized AI is optimized for the task. "But generative AI comes with a lot of security baggage — in the form of prompt-injection attacks and other security risks. We need to take a more nuanced view of AI systems, their uses, their own particular risks, and their costs vs. benefits."

Read more of this story at Slashdot.

Facing Angry Users, Sonos Promises to Fix Flaws and Restore Removed Features

Par : EditorDavid
18 mai 2024 à 14:34
A blind worker for the National Federation of the Blind said Sonos had a reputation for making products usable for people with disabilities, but that "Overnight they broke that trust," according to the Washington Post. They're not the only angry customers about the latest update to Sonos's wireless speaker system. The newspaper notes that nonprofit worker Charles Knight is "among the Sonos die-hards who are furious at the new app that crippled their options to stream music, listen to an album all the way through or set a morning alarm clock." After Sonos updated its app last week, Knight could no longer set or change his wake-up music alarm. Timers to turn off music were also missing. "Something as basic as an alarm is part of the feature set that users have had for 15 years," said Knight, who has spent thousands of dollars on six Sonos speakers for his bedroom, home office and kitchen. "It was just really badly thought out from start to finish." Some people who are blind also complained that the app omitted voice-control features they need. What's happening to Sonos speaker owners is a cautionary tale. As more of your possessions rely on software — including your car, phone, TV, home thermostat or tractor — the manufacturer can ruin them with one shoddy update... Sonos now says it's fixing problems and adding back missing features within days or weeks. Sonos CEO Patrick Spence acknowledged the company made some mistakes and said Sonos plans to earn back people's trust. "There are clearly people who are having an experience that is subpar," Spence said. "I would ask them to give us a chance to deliver the actions to address the concerns they've raised." Spence said that for years, customers' top complaint was the Sonos app was clunky and slow to connect to their speakers. Spence said the new app is zippier and easier for Sonos to update. (Some customers disputed that the new app is faster.) He said some problems like Knight's missing alarms were flaws that Sonos found only once the app was about to roll out. (Sonos updated the alarm feature this week.) Sonos did remove but planned to add back some lesser-used features. Spence said the company should have told people upfront about the planned timeline to return any missing functions. In a blog post Sonos thanked customers for "valuable feedback," saying they're "working to address them as quickly as possible" and promising to reintroduce features, fix bugs, and address performance issues. ("Adding and editing alarms" is available now, as well as VoiceOver fixes for the home screen on iOS.) The Washington Post adds that Sonos "said it initially missed some software flaws and will restore more voice-reader functions next week."

Read more of this story at Slashdot.

'Openwashing'

Par : BeauHD
18 mai 2024 à 13:00
An anonymous reader quotes a report from The New York Times: There's a big debate in the tech world over whether artificial intelligence models should be "open source." Elon Musk, who helped found OpenAI in 2015, sued the startup and its chief executive, Sam Altman, on claims that the company had diverged from its mission of openness. The Biden administration is investigating the risks and benefits of open source models. Proponents of open source A.I. models say they're more equitable and safer for society, while detractors say they are more likely to be abused for malicious intent. One big hiccup in the debate? There's no agreed-upon definition of what open source A.I. actually means. And some are accusing A.I. companies of "openwashing" -- using the "open source" term disingenuously to make themselves look good. (Accusations of openwashing have previously been aimed at coding projects that used the open source label too loosely.) In a blog post on Open Future, a European think tank supporting open sourcing, Alek Tarkowski wrote, "As the rules get written, one challenge is building sufficient guardrails against corporations' attempts at 'openwashing.'" Last month the Linux Foundation, a nonprofit that supports open-source software projects, cautioned that "this 'openwashing' trend threatens to undermine the very premise of openness -- the free sharing of knowledge to enable inspection, replication and collective advancement." Organizations that apply the label to their models may be taking very different approaches to openness. [...] The main reason is that while open source software allows anyone to replicate or modify it, building an A.I. model requires much more than code. Only a handful of companies can fund the computing power and data curation required. That's why some experts say labeling any A.I. as "open source" is at best misleading and at worst a marketing tool. "Even maximally open A.I. systems do not allow open access to the resources necessary to 'democratize' access to A.I., or enable full scrutiny," said David Gray Widder, a postdoctoral fellow at Cornell Tech who has studied use of the "open source" label by A.I. companies.

Read more of this story at Slashdot.

#Flock tease son strip

Par : Flock
18 mai 2024 à 11:37
VRAAAAA en surround onomatopéerama 7.1

Nous voilà bien.

J’avais dit que je teasais mon strip, mais je suis nul en pub et j’ai surtout bien d’autres choses à vous raconter cette semaine, notamment ces histoires de “Choose France“,
de parts de gâteaux culturels, de poisse de fusée, ou même encore de self-control.

En parlant de self-control, vu que j’en ai manqué, faut quand même que je vous raconte pour ce strip. J’étais au carrefour de la fin de semaine avec le weekend, quand voilà que déboule comme une furie un strip, sans crier gare, ni même claxonner. Il me semblait évident de devoir lui laisser la priorité. Du coup j’ai dû accélérer. Pardonnez les couleurs de signalisation et les traits en pointillé, au moins je n’ai quasiment pas dépassé les lignes blanches.

Mais pas d’inquiétude, j’étais en règle, ce strip est réglo, j’avais bien mes papiers : les voici, ici et ici.

Merci m’sieur l’agent !

Cette chronique est financée grâce au soutien de nos abonnés. Vous pouvez retrouver comme toutes les précédentes publications de Flock dans nos colonnes.


Vous devez être abonné•e pour lire la suite de cet article.
Déjà abonné•e ? Générez une clé RSS dans votre profil.

GNOME OS Working On A New Installer & Other Enhancements To Make It More Practical

18 mai 2024 à 10:56
Germany's Sovereign Tech Fund continues providing the resources for various new GNOME desktop development initiatives. There are various efforts underway for new features and refinements with GNOME 47 in September and a renewed emphasis around GNOME OS...
❌
❌