Reader view

GOG Will Start Deleting Cloud Saves This Summer

GOG, a popular Poland-based gaming platform, has announced plans to enforce a 200MB limit on cloud save files per game. This move may adversely affect players of open-world titles like Cyberpunk 2077, where save folders can reach several gigabytes. A report adds: The company will begin deleting game saves that exceed the limit on August 31. When the deadline rolls around, GOG will delete saves for each game, starting with the oldest, until that game's total is below the 200MB threshold. That means your newest saves will survive.
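
The trimming rule described above is simple enough to sketch. Here is a minimal, hypothetical Python illustration of that policy (not GOG's actual implementation): a game's save files are considered oldest first and marked for deletion until the per-game total falls back under the 200MB quota, so the newest saves survive.

from pathlib import Path

QUOTA_BYTES = 200 * 1024 * 1024  # 200MB per-game cloud save quota

def saves_to_delete(save_dir: Path, quota: int = QUOTA_BYTES) -> list[Path]:
    """Return the save files that would be removed, oldest first,
    so that the remaining (newest) saves fit under the quota."""
    files = sorted(save_dir.iterdir(), key=lambda f: f.stat().st_mtime)
    total = sum(f.stat().st_size for f in files)
    doomed = []
    for f in files:             # oldest first
        if total <= quota:      # already under the limit; newest saves survive
            break
        total -= f.stat().st_size
        doomed.append(f)
    return doomed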

Read more of this story at Slashdot.

Artists Are Deleting Instagram For New App Cara In Protest of Meta AI Scraping

Some artists are jumping ship for the anti-AI portfolio app Cara after Meta began using Instagram content to train its AI models. Fast Company explains: The portfolio app bills itself as a platform that protects artists' images from being used to train AI and only allows AI content to be posted if it's clearly labeled. Based on the number of new users the Cara app has garnered over the past few days, there seems to be a need. Between May 31 and June 2, Cara's user base tripled from less than 100,000 to more than 300,000 profiles, skyrocketing to the top of the app store. [...] Cara is a social networking app for creatives, in which users can post images of their artwork, memes, or just their own text-based musings. It shares similarities with major social platforms like X (formerly Twitter) and Instagram on a few fronts. Users can access Cara through a mobile app or on a browser. Both options are free to use. The UI itself is like an arts-centric combination of X and Instagram. In fact, some UI elements seem like they were pulled directly from other social media sites. (It's not the most innovative approach, but it is strategic: as a new app, any barriers to potential adoption need to be low.) Cara doesn't train any AI models on its content, nor does it allow third parties to do so. According to Cara's FAQ page, the app aims to protect its users from AI scraping by automatically applying "NoAI" tags to all of its posts. The website says these tags "are intended to tell AI scrapers not to scrape from Cara." Ultimately, they appear to be HTML metadata tags that politely ask bad actors not to get up to any funny business, and it's pretty unlikely that they hold any actual legal weight. Cara admits as much, too, warning its users that the tags aren't a "fully comprehensive solution and won't completely prevent dedicated scrapers." With that in mind, Cara assesses the "NoAI" tagging system as "a necessary first step in building a space that is actually welcoming to artists -- one that respects them as creators and doesn't opt their work into unethical AI scraping without their consent." In December, Cara launched another tool called Cara Glaze to defend its artists' work against scrapers. (Users can only use it a select number of times.) Glaze, developed by the SAND Lab at the University of Chicago, makes it much more difficult for AI models to accurately understand and mimic an artist's personal style. The tool works by learning how AI bots perceive artwork, then making a set of minimal changes that are invisible to the human eye but confusing to the AI model. The AI bot then has trouble "translating" the art style and generates warped recreations. In the future, Cara also plans to implement Nightshade, another University of Chicago tool that helps protect artwork against AI scrapers. Nightshade "poisons" AI training data by adding invisible pixels to artwork that can cause AI software to completely misunderstand the image. Beyond establishing shields against data mining, Cara also uses a third-party service to detect and moderate any AI artwork that's posted to the site. Non-human artwork is forbidden, unless it's been properly labeled by the poster.
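
Cara's FAQ doesn't publish the exact markup, but "NoAI" tags of this kind are typically expressed as robots meta directives such as noai and noimageai. As a hedged illustration (the directive names and the idea that crawlers check them voluntarily are conventions, not a standard, and this is not Cara's code), here is a small Python sketch of how a well-behaved crawler could look for that signal before ingesting a page:

from html.parser import HTMLParser
from urllib.request import urlopen

class RobotsMetaFinder(HTMLParser):
    """Collects directives found in <meta name="robots" content="..."> tags."""
    def __init__(self):
        super().__init__()
        self.directives = set()
    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and (attrs.get("name") or "").lower() == "robots":
            content = (attrs.get("content") or "").lower()
            self.directives |= {d.strip() for d in content.split(",")}

def page_opts_out_of_ai(url: str) -> bool:
    """Return True if the page declares a 'noai' or 'noimageai' directive."""
    parser = RobotsMetaFinder()
    parser.feed(urlopen(url).read().decode("utf-8", errors="replace"))
    return bool({"noai", "noimageai"} & parser.directives)

As the article notes, nothing technically enforces this: a scraper that ignores the tag simply never runs such a check.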

Read more of this story at Slashdot.

Adobe Responds To Vocal Uproar Over New Terms of Service Language

Adobe is facing backlash over new Terms of Service language amid its embrace of generative AI in products like Photoshop and its customer experience software. The ToS, sent to Creative Cloud Suite users, doesn't mention AI explicitly but includes a reference to machine learning and a clause prohibiting AI model training on Adobe software. From a report: In particular, users have objected to Adobe's claims that it "may access, view, or listen to your Content through both automated and manual methods -- using techniques such as machine learning in order to improve our Services and Software and the user experience," which many took to be a tacit admission both of surveilling them and of training AI on their content, even confidential client content protected under non-disclosure agreements or confidentiality clauses between those Adobe users and their clients. A spokesperson for Adobe provided the following statement in response to VentureBeat's questions about the new ToS and vocal backlash: "This policy has been in place for many years. As part of our commitment to being transparent with our customers, we added clarifying examples earlier this year to our Terms of Use regarding when Adobe may access user content. Adobe accesses user content for a number of reasons, including the ability to deliver some of our most innovative cloud-based features, such as Photoshop Neural Filters and Remove Background in Adobe Express, as well as to take action against prohibited content. Adobe does not access, view or listen to content that is stored locally on any user's device."

Read more of this story at Slashdot.

Google Is Working On a Recall-Like Feature For Chromebooks, Too

In an interview with PCWorld's Mark Hachman, Google's ChromeOS chief said the company is cautiously exploring a Recall-like feature for Chromebooks, dubbed "memory." Microsoft's AI-powered Recall feature for Windows 11 was unveiled at the company's Build 2024 conference last month. The feature aims to improve local searches by making them as efficient as web searches, allowing users to quickly retrieve anything they've seen on their PC. Using voice commands and contextual clues, Recall can find specific emails, documents, chat threads, and even PowerPoint slides. Given the obvious privacy and security concerns, many users have denounced the feature, describing it as "literal spyware or malware." PCWorld reports: I sat down with John Solomon, the vice president at Google responsible for ChromeOS, for a lengthy interview around what it means for Google's low-cost platform as the PC industry moves to AI PCs. Microsoft, of course, is launching Copilot+ PCs alongside Qualcomm's Snapdragon X Elite -- an Arm chip. And Chromebooks, of course, have a long history with Arm. But it's Recall that we eventually landed upon -- or, more precisely, how Google sidles into the same space. (Recall is great in theory, but in practice may be more problematic.) Recall the Project Astra demo that Google showed off at its Google I/O conference. One of the key though understated aspects of it was how Astra "remembered" where the user's glasses were. Astra didn't appear to be an experience that could be replicated on the Chromebook. Most users aren't going to carry a Chromebook around (a device which typically lacks a rear camera) visually identifying things. Solomon respectfully disagreed. "I think there's a piece of it which is very relevant, which is this notion of having some kind of context and memory of what's been happening on the device," Solomon said. "So think of something that's like, maybe viewing your screen and then you walk away, you get distracted, you chat to someone at the watercooler and you come back. You could have some kind of rewind function, you could have some kind of recorder function that would kind of bring you back to that. So I think that there is a crossover there. "We're actually talking to that team about where the use case could be," Solomon added of the "memory" concept. "But I think there's something there in terms of screen capture in a way that obviously doesn't feel creepy and feels like the user's in control." That sounds a lot like Recall! But Solomon was quick to point out that one of the things that has turned off users to Recall was the lack of user control: deciding when, where, and if to turn it on. "I'm not going to talk about Recall, but I think the reason that some people feel it's creepy is when it doesn't feel useful, and it doesn't feel like something they initiated or that they get a clear benefit from it," Solomon said. "If the user says like -- let's say we're having a meeting, and discussing complex topics. There's a benefit of running a recorded function if at the end of it it can be useful for creating notes and the action items. But you as a user need to put that on and decide where you want to have that."

Read more of this story at Slashdot.

FBI Recovers 7,000 LockBit Keys, Urges Ransomware Victims To Reach Out

An anonymous reader quotes a report from BleepingComputer: The FBI urges past victims of LockBit ransomware attacks to come forward after revealing that it has obtained over 7,000 LockBit decryption keys that they can use to recover encrypted data for free. FBI Cyber Division Assistant Director Bryan Vorndran announced this on Wednesday at the 2024 Boston Conference on Cyber Security. "From our ongoing disruption of LockBit, we now have over 7,000 decryption keys and can help victims reclaim their data and get back online," the FBI Cyber Lead said in a keynote. "We are reaching out to known LockBit victims and encouraging anyone who suspects they were a victim to visit our Internet Crime Complaint Center at ic3.gov." This call to action comes after law enforcement took down LockBit's infrastructure in February 2024 in an international operation dubbed "Operation Cronos." At the time, police seized 34 servers containing over 2,500 decryption keys, which helped create a free LockBit 3.0 Black ransomware decryptor. After analyzing the seized data, the U.K.'s National Crime Agency and the U.S. Justice Department estimate the gang and its affiliates have raked in up to $1 billion in ransoms following 7,000 attacks targeting organizations worldwide between June 2022 and February 2024. However, despite law enforcement efforts to shut down its operations, LockBit is still active and has since switched to new servers and dark web domains. Following the February disruption of LockBit, the U.S. State Department said it is offering a reward of up to $15 million for information leading to the identification or location of the leaders of the ransomware group.

Read more of this story at Slashdot.

Apple Commits To At Least Five Years of iPhone Security Updates

When buying a new smartphone, it's important to consider the duration of software updates, as it impacts security and longevity. In a rare public commitment on Monday, thanks to the UK's new Product Security and Telecommunications Infrastructure (PSTI) regulations, Apple said it guarantees a minimum of five years of security updates for the iPhone 15 Pro Max. "In other words, the iPhone 15 is officially guaranteed to receive security updates until September 22, 2028," reports Android Authority. From the report: This, as VP of Engineering for Android Security & Privacy at Google Dave Kleidermacher points out, means that Apple is no longer offering the best security update policy in the industry. Both Samsung and Google guarantee seven years of not just security updates but also Android OS updates for their respective flagship devices, which is two years longer than what Apple guarantees. To Apple's credit, though, it has long provided more than five years of security updates for its various iPhone devices. Some iPhones have received security updates six or more years after the initial release, which is far more support than the vast majority of Android devices receive. So, while Samsung and Google currently beat Apple in terms of how long they're guaranteeing software support, that doesn't mean iPhone users can't keep their phones for just as long, if not longer. They'll just need to hope Apple doesn't cut off support after the five-year minimum.

Read more of this story at Slashdot.

Sony Removes 8K Claim From PlayStation 5 Boxes

Fans have noticed that, over the last few months, Sony quietly removed any mention of 8K on the PlayStation 5 boxes. "I have been endlessly bitching since the PS5 released about that 8k Badge," writes X user @DeathlyPrice. "It is false Advertising and Sony should be sued for it." Others shared their grievances via PlayStation Lifestyle and a Reddit thread. GameSpot reports: A FAQ on Sony's official site in 2020 stated that "PS5 is compatible with 8K displays at launch, and after a future system software update will be able to output resolutions up to 8K when content is available, with supported software." But to date, the only game that offers 8K resolution on PS5 is The Touryst, which looks more like Minecraft than a game with advanced visuals. The reality is that 8K has not been widely adopted by video game developers, or even by filmmakers at this point. There are 8K televisions on the market, but it may be quite some time, if ever, before it becomes the standard for either gaming or entertainment.

Read more of this story at Slashdot.

DuckDuckGo Offers 'Anonymous' Access To AI Chatbots Through New Service

An anonymous reader quotes a report from Ars Technica: On Thursday, DuckDuckGo unveiled a new "AI Chat" service that allows users to converse with four mid-range large language models (LLMs) from OpenAI, Anthropic, Meta, and Mistral in an interface similar to ChatGPT while attempting to preserve privacy and anonymity. While the AI models involved can readily output inaccurate information, the site allows users to test different mid-range LLMs without having to install anything or sign up for an account. DuckDuckGo's AI Chat currently features access to OpenAI's GPT-3.5 Turbo, Anthropic's Claude 3 Haiku, and two open source models, Meta's Llama 3 and Mistral's Mixtral 8x7B. The service is currently free to use within daily limits. Users can access AI Chat through the DuckDuckGo search engine, direct links to the site, or by using "!ai" or "!chat" shortcuts in the search field. AI Chat can also be disabled in the site's settings for users with accounts. According to DuckDuckGo, chats on the service are anonymized, with metadata and IP address removed to prevent tracing back to individuals. The company states that chats are not used for AI model training, citing its privacy policy and terms of use. "We have agreements in place with all model providers to ensure that any saved chats are completely deleted by the providers within 30 days," says DuckDuckGo, "and that none of the chats made on our platform can be used to train or improve the models." However, the privacy experience is not bulletproof because, in the case of GPT-3.5 and Claude Haiku, DuckDuckGo is required to send a user's inputs to remote servers for processing over the Internet. Given certain inputs (e.g., "Hey, GPT, my name is Bob, and I live on Main Street, and I just murdered Bill"), a user could still potentially be identified if such an extreme need arose. In regard to hallucination concerns, DuckDuckGo states in its privacy policy: "By its very nature, AI Chat generates text with limited information. As such, Outputs that appear complete or accurate because of their detail or specificity may not be. For example, AI Chat cannot dynamically retrieve information and so Outputs may be outdated. You should not rely on any Output without verifying its contents using other sources, especially for professional advice (like medical, financial, or legal advice)."
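
DuckDuckGo hasn't published its proxy code, so the following is only a minimal Python sketch of the general pattern described above (the endpoint URL and key are placeholders, not DuckDuckGo's or any provider's real API): the service forwards just the chat messages upstream under its own credentials, so the model provider never sees the user's IP address or other identifying metadata.

import json
from urllib import request

UPSTREAM_URL = "https://api.example-llm-provider.test/v1/chat"  # placeholder endpoint
PROXY_API_KEY = "key-owned-by-the-proxy"  # the provider only ever sees the proxy

def forward_anonymously(messages: list[dict]) -> dict:
    """Relay chat messages upstream without any user-identifying headers."""
    payload = json.dumps({"messages": messages}).encode("utf-8")
    req = request.Request(
        UPSTREAM_URL,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {PROXY_API_KEY}",
            # Deliberately no cookies, no X-Forwarded-For, no client user agent:
            # the upstream request originates from the proxy's own IP.
        },
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

As the article points out, this only anonymizes the transport; anything identifying that the user types into the prompt itself still reaches the provider.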

Read more of this story at Slashdot.

Google To Start Permanently Deleting Users' Location History

Google will delete everything it knows about users' previously visited locations, the company has said, a year after it committed to reducing the amount of personal data it stores about users. From a report: The company's "timeline" feature -- previously known as Location History -- will still work for those who choose to use it, letting them scroll back through potentially decades of travel history to check where they were at a specific time. But all the data required to make the feature work will be saved locally, to their own phones or tablets, with none of it being stored on the company's servers. In an email sent by the company to Maps users, seen by the Guardian, Google said users have until 1 December to save all their old journeys before the data is deleted for ever. Users will still be able to back up their data if they're worried about losing it or want to sync it across devices, but that will no longer happen by default. The company is also reducing the default amount of time that location history is stored for. Now, it will begin to delete past locations after just three months, down from a previous default of a year and a half. In a blogpost announcing the changes, Google didn't cite a specific reason for the updates, beyond suggesting that users may want to delete information from their location history if they are "planning a surprise birthday party."

Read more of this story at Slashdot.

Humane Said To Be Seeking a $1 Billion Buyout After Only 10,000 Orders of Its AI Pin

An anonymous reader writes: It emerged recently that Humane was trying to sell itself for as much as $1 billion after its confuddling, expensive and ultimately pretty useless AI Pin flopped. A New York Times report that dropped on Thursday shed a little more light on the company's sales figures and, like the wearable AI assistant itself, the details are not good. By early April, around the time that many devastating reviews of the AI Pin were published, Humane is said to have received around 10,000 orders for the device. That's a far cry from the 100,000 it was hoping to ship this year, and about 9,000 more than I thought it might get. It's hard to think it picked up many more orders beyond those initial 10,000 after critics slaughtered the AI Pin. One of the companies that Humane has engaged with for the sale is HP, the Times reported.

Read more of this story at Slashdot.

Intel Ditches Hyperthreading For Lunar Lake CPUs

An anonymous reader shares a report: Intel's fastest processors have included hyperthreading, a technique that lets more than one thread run on a single CPU core, for over 20 years -- and it's used by AMD (which calls it "simultaneous multi-threading") as well. But you won't see a little "HT" on the Intel sticker for any Lunar Lake laptops, because none of them use it. Hyperthreading will be disabled on all Lunar Lake CPU cores, including both performance and efficiency cores. Why? The reason is complicated, but basically it's no longer needed. The performance cores (P-cores) in the new Lunar Lake series are 14 percent faster than the same cores in the previous-gen Meteor Lake CPUs, even with hyperthreading disabled. Turning the feature on would come at too high a power cost, and Lunar Lake is all about boosting performance while keeping laptops in this generation thin, light, and long-lasting. That means prioritizing single-thread performance -- the most relevant to laptop users, who typically focus on one task at a time -- and getting the best performance per watt and per unit of die area. Getting rid of the physical components needed for hyperthreading just makes sense in that context.
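
On Linux you can check for yourself whether a CPU exposes SMT/Hyper-Threading. The short Python sketch below is for illustration only (not an Intel tool); the sysfs paths it reads exist on current mainline kernels, but not on every system or on other operating systems.

from pathlib import Path

def smt_active():
    """Return True/False if the kernel reports SMT state, or None if unavailable."""
    node = Path("/sys/devices/system/cpu/smt/active")
    return node.read_text().strip() == "1" if node.exists() else None

def thread_siblings(core: int = 0) -> str:
    """List the hardware threads sharing a physical core; a single ID means
    that core exposes only one thread, as on Lunar Lake."""
    path = Path(f"/sys/devices/system/cpu/cpu{core}/topology/thread_siblings_list")
    return path.read_text().strip()

print("SMT active:", smt_active())
print("cpu0 siblings:", thread_siblings())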

Read more of this story at Slashdot.

'Microsoft Has Lost Trust With Its Users and Windows Recall is the Straw That Broke the Camel's Back'

In a column at Windows Central, a blog that focuses on Microsoft news, senior editor Zac Bowden discusses the backlash against Windows Recall, a new AI feature in Microsoft's Copilot+ PCs. While the feature is impressive, allowing users to search their entire Windows history, many are concerned about privacy and security. Bowden argues that Microsoft's history of questionable practices, such as ads and bloatware, has eroded user trust, making people skeptical of Recall's intentions. Additionally, the reported lack of encryption for Recall's data raises concerns about third-party access. Bowden argues that Microsoft could have averted the situation by testing the feature openly to address these issues early on and build trust with users. He adds: Users are describing the feature as literal spyware or malware, and droves of people are proclaiming they will proudly switch to Linux or Mac in the wake of it. Microsoft simply doesn't enjoy the same benefit of the doubt that other tech giants like Apple may have. Had Apple announced a feature like Recall, there would have been much less backlash, as Apple has done a great job building loyalty and trust with its users, prioritizing polished software experiences, and positioning privacy as a high-level concern for the company.

Read more of this story at Slashdot.

AMD EPYC 4364P & 4564P @ DDR5-4800 / DDR5-5200 vs. Intel Xeon E-2488

With the AMD EPYC 4004 series announced in May, we have delivered benchmarks of the entire EPYC 4004 stack, from the 4-core SKU up through the 16-core model with 3D V-Cache, and there are many advantages over Intel's Xeon E-2400 series competition. In addition to going up to 16 cores versus 8 with the Xeon E-2400 series, more competitive pricing, the 3D V-Cache SKUs, and 28 PCIe lanes rather than 20, the AMD EPYC 4004 models also support DDR5-5200 memory, whereas the Intel Raptor Lake E-2400 models are limited to DDR5-4800. This follow-up testing looks at AMD EPYC 4004 performance at both DDR5-4800 and DDR5-5200 speeds to show the performance difference.

☕️ Starship: the 4th test flight was the charm, with two "splashdowns"

Starship's first flight was already more than a year ago and ended in an explosion after three minutes of flight. On the second flight last November, stage separation went well, but the test was then cut short by an explosion. On the third attempt, Starship managed to reach orbit, but the rocket's return still wasn't quite right.

With its fourth flight, SpaceX has pulled off a near-perfect run. In any case, the two main objectives were met: the return of the first stage after a little more than seven minutes, then of the second stage after about an hour, without exploding and in the correct attitude in both cases.

The launch video shows that one of the 33 Raptor engines did not fire, which did not stop the rocket from lifting off.

Separation of the two stages went as planned. Super Heavy (the first stage) then came down to "land" on the surface of the water, using three engines to slow its descent. No barge this time, but that was the plan.

SpaceX says it took advantage of this launch to run a few stress tests on the Starship upper stage. Two heat-shield tiles were removed, for example, to measure the temperature at those spots.

"Despite the loss of many tiles and a damaged flap, Starship managed a soft landing in the ocean!" Elon Musk rejoiced. He adds that an attempt to recover the booster will take place on the next launch. NASA Administrator Bill Nelson also congratulated SpaceX on the test.

Watch Starship's fourth flight test https://t.co/SjpjscHoUB

— SpaceX (@SpaceX) June 4, 2024

DNA storage: how Biomemory wants to speed things up and fit into datacenters

It's the vase that's going to make the drop of water overflow
DNA-based storage

By 2030, Biomemory plans to offer 42U storage racks with a capacity of one EB (1,000 PB, or 1,000,000 TB). The company is betting on DNA, which is inexpensive and offers record density and longevity. We are not there yet: write throughput will have to go from 1 KB per day today to 1 PB per day within a few years.

Using DNA to store information is not a new idea. The American physicist Richard Feynman (Nobel laureate in 1965) suggested it as early as 1959, the CNRS recalls. It nevertheless took until 2012 for the first significant demonstration to be carried out, at Harvard.

In France, the Parliamentary Office for the Evaluation of Scientific and Technological Choices (OPECST) has published a detailed "note" on this form of storage, covering the expectations, the pitfalls and how the technology works.

Biomemory at the "crossroads of biotech and computing"

Several companies have since taken up the challenge, including Biomemory. The company was present at Vivatech, where we were able to talk with two of its representatives: Olivier Lauvray (advisory board member and part-time CTO) and Hela Ammar (team lead).

The company was founded by two researchers and a computer scientist: Erfane Arwani, Stéphane Lemaire and Pierre Crozet. It describes itself as sitting at the "crossroads of biotech and computing". It is a spin-off of the CNRS and Sorbonne Université, based in the 14th arrondissement of Paris. It launched in 2021 and currently employs 18 people.

Stéphane Lemaire and Pierre Crozet already made a name for themselves a few years ago with DNA Drive, two capsules containing symbolic texts stored on DNA: the 1789 Declaration of the Rights of Man and of the Citizen, and the 1791 Declaration of the Rights of Woman and of the Female Citizen.

Storing ever more data, a never-ending quest

DNA storage answers a real need: storing ever more information in a limited volume. In terms of density, DNA's possibilities are in a different league from today's other solutions. A few years ago, the CNRS explained that a "single gram can theoretically hold up to 455 exabits of information, or 455 billion billion bits. All the data in the world would then fit in a shoebox."

"Today we manage to store 30% of humanity's data, and by 2030 we will only be able to store 3%," Biomemory tells us. With DNA it will become possible to record ever more data, and at a time when (generative) AI is an extremely heavy consumer of it, this approach has the wind in its sails.

1 GB per day as early as next year

That is, in fact, one of the use cases being put forward: letting companies keep their raw data, whereas "today they are forced to aggregate it, and that loses value". DNA storage could also let companies keep their data without handing it over to hosting providers (foreign or otherwise).

There is still work to do, though. "Today we manage to store one kilobyte per day," Biomemory acknowledges. But the prospects for progress are there: "The idea is to reach one GB per day in 2025, and to keep improving to reach on the order of one PB per day in 2030." That would work out to 11.6 GB/s.
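
Those figures are easy to sanity-check. A quick Python conversion (assuming decimal units, i.e. 1 PB = 10^15 bytes) shows the jump Biomemory is describing:

SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

def daily_volume_to_gb_per_s(bytes_per_day: float) -> float:
    """Convert a daily write volume into sustained throughput, in GB/s."""
    return bytes_per_day / SECONDS_PER_DAY / 1e9

for label, volume in [("1 KB/day (today)", 1e3),
                      ("1 GB/day (2025 target)", 1e9),
                      ("1 PB/day (2030 target)", 1e15)]:
    print(f"{label}: {daily_volume_to_gb_per_s(volume):.3g} GB/s")

The last line prints roughly 11.6 GB/s, matching the figure above.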

Reusing everything that is standard


