
US Threatens Anthropic with 'Supply-Chain Risk' Designation. OpenAI Signs New War Department Deal

February 28, 2026, 20:34
It started Friday when all U.S. federal agencies were ordered to "immediately cease" using Anthropic's AI technology after contract negotiations stalled over Anthropic's request for prohibitions against mass domestic surveillance and fully autonomous weapons. But later Friday there were even more repercussions... In a post to his 1.1 million followers on X.com, U.S. Secretary of War Pete Hegseth criticized Anthropic for what he called "a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon."

"Our position has never wavered and will never waver: the Department of War must have full, unrestricted access to Anthropic's models for every LAWFUL purpose in defense of the Republic... Cloaked in the sanctimonious rhetoric of 'effective altruism,' [Anthropic and CEO Dario Amodei] have attempted to strong-arm the United States military into submission — a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives. The Terms of Service of Anthropic's defective altruism will never outweigh the safety, the readiness, or the lives of American troops on the battlefield. Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable... In conjunction with the President's directive for the Federal Government to cease all use of Anthropic's technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic... America's warfighters will never be held hostage by the ideological whims of Big Tech. This decision is final."

Meanwhile, Anthropic said on Friday that "no amount of intimidation or punishment from the Department of War will change our position" and that "We will challenge any supply chain risk designation in court." From the company's statement: "Designating Anthropic as a supply chain risk would be an unprecedented action — one historically reserved for US adversaries, never before publicly applied to an American company. We are deeply saddened by these developments. As the first frontier AI company to deploy models in the US government's classified networks, Anthropic has supported American warfighters since June 2024 and has every intention of continuing to do so. We believe this designation would both be legally unsound and set a dangerous precedent for any American company that negotiates with the government... Secretary Hegseth has implied this designation would restrict anyone who does business with the military from doing business with Anthropic. The Secretary does not have the statutory authority to back up this statement."

Anthropic also defended the two exceptions it had requested that stalled contract negotiations: "[W]e do not believe that today's frontier AI models are reliable enough to be used in fully autonomous weapons. Allowing current models to be used in this way would endanger America's warfighters and civilians. Second, we believe that mass domestic surveillance of Americans constitutes a violation of fundamental rights."

Also Friday, OpenAI announced that "we reached an agreement with the Department of War to deploy our models in their classified network." OpenAI CEO Sam Altman emphasized that the agreement retains and confirms OpenAI's own prohibitions against using its products for domestic mass surveillance, and requires "human responsibility" for the use of force, including for autonomous weapon systems. "The Department of War agrees with these principles, reflects them in law and policy, and we put them into our agreement. We also will build technical safeguards to ensure our models behave as they should, which the Department of War also wanted. We are asking the Department of War to offer these same terms to all AI companies, which we think everyone should be willing to accept. We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements. We remain committed to serve all of humanity as best we can. The world is a complicated, messy, and sometimes dangerous place."

Read more of this story at Slashdot.

America's Teenagers Say AI Cheating Has Become a Regular Feature of Student Life

February 28, 2026, 16:34
Tuesday Pew Research announced their newest findings: that 54% of America's teens use AI to help with schoolwork. "One-in-five teens living in households making less than $30,000 a year say they do all or most of their schoolwork with AI chatbots' help. A similar share of those in households making $30,000 to just under $75,000 annually say this. Fewer teens living in higher-earning households (7%) say the same." "The survey did not ask students whether they had used chatbots to write essays or generate other assignments..." notes the New York Times. "But nearly 60% of teenagers told Pew that students at their school used chatbots to cheat 'very often' or 'somewhat often.'" The Pew researchers agree: "Our survey shows that many teens think cheating with AI has become a regular feature of student life." One worried teenager told the researchers that AI "makes people lazy and takes away jobs." But another teenager said that "Everyone's going to have to know how to use AI or they'll be left behind." Thanks to long-time Slashdot reader theodp for sharing the article.


They Say Software Dies in Silence: So What Is the SaaSpocalypse?

February 28, 2026, 14:30

It's a word and a forecast gaining traction in the press and the tech ecosystem: the SaaSpocalypse, or Software-mageddon, is said to be imminent. What is it, exactly?

Southern California Air Board Rejects Pollution Rules After AI-Generated Flood of Comments

By: BeauHD
February 28, 2026, 07:00
Southern California's air quality board rejected proposed rules to phase out gas-powered appliances after receiving more than 20,000 opposition comments generated through CiviClick, "the first and best AI-powered grassroots advocacy platform." Phys.org reports: A Southern California-based public affairs consultant, Matt Klink, has taken credit for using CiviClick to wage the opposition campaign, including in a sponsored article on the website Campaigns and Elections. The campaign "left the staff of the Southern California Air Quality Management District (SCAQMD) reeling," the article says. It is not clear how AI was deployed in the campaign, and officials at CiviClick did not respond to repeated requests for comment. But their website boasts several tools, including "state of the art technology and artificial intelligence message assistance" that can be used to create custom advocacy letters, as opposed to repetitive form letters or petitions often used in similar campaigns. When staffers at the air district reached out to a small sample of people to verify their comments, at least three said they had not written to the agency and were not aware of any such messages, records show. But the email onslaught almost certainly influenced the board's June decision, according to agency insiders, who noted that the number of public comments typically submitted on agenda items can be counted on one hand. The proposed rules were nearly two years in the making and would have placed a fee on natural gas-powered water heaters and furnaces, favoring electric ones, in an effort to reduce air pollution in the district, which includes Orange County and large swaths of Los Angeles, Riverside and San Bernardino counties. Gas appliances emit nitrogen oxides, or NOx -- key pollutants for forming smog. The implications are troubling, experts said, and go beyond the use of natural gas furnaces and heaters in the second-largest metropolitan area in the country.


Perplexity Announces 'Computer,' an AI Agent That Assigns Work To Other AI Agents

By: BeauHD
February 28, 2026, 00:02
joshuark shares a report from Ars Technica: Perplexity has introduced "Computer," a new tool that allows users to assign tasks and see them carried out by a system that coordinates multiple agents running various models. The company claims that Computer, currently available to Perplexity Max subscribers, is "a system that creates and executes entire workflows" and "capable of running for hours or even months." The idea is that the user describes a specific outcome -- something like "plan and execute a local digital marketing campaign for my restaurant" or "build me an Android app that helps me do a specific kind of research for my job." Computer then ideates subtasks and assigns them to multiple agents as needed, running the models Perplexity deems best for those tasks. The core reasoning engine currently runs Anthropic's Claude Opus 4.6, while Gemini is used for deep research, Nano Banana for image generation, Veo 3.1 for video production, Grok for lightweight tasks where speed is a consideration, and ChatGPT 5.2 for "long-context recall and wide search." This kind of best-model-for-the-task approach differs from some competing products like Claude Cowork, which only uses Anthropic's models. All this happens in the cloud, with prebuilt integrations. "Every task runs in an isolated compute environment with access to a real filesystem, a real browser, and real tool integrations," Perplexity says. The idea is partly that this workflow was what some power users were already doing, and this aims to make that possible for a wider range of people who don't want to deal with all that setup. People were already using multiple models and tailoring them to specific tasks based on perceived capabilities, while, for example, using MCP (Model Context Protocol) to give those models access to data and applications on their local machines. 
Perplexity Computer takes a different approach, but the goal is the same: have AI agents running tailor-picked models to perform tasks involving your own files, services, and applications. Then there is OpenClaw, which you could perceive as the immediate predecessor to this concept.
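Perplexity has not published Computer's routing logic, but the best-model-for-the-task dispatch the article describes can be sketched as a simple rule-based router. The routing table below mirrors the model assignments reported in the article; the task categories, function names, and dispatch logic are assumptions for illustration only.

```python
# Illustrative sketch of best-model-for-the-task routing, as described
# in the article. The model assignments come from the report; the
# planner and dispatch code are hypothetical.

# Routing table: task category -> model, per the article's description.
ROUTES = {
    "reasoning": "claude-opus-4.6",   # core reasoning engine
    "deep_research": "gemini",        # deep research
    "image": "nano-banana",           # image generation
    "video": "veo-3.1",               # video production
    "lightweight": "grok",            # speed-sensitive tasks
    "long_context": "chatgpt-5.2",    # long-context recall / wide search
}

def route(subtask: dict) -> str:
    """Pick a model for a subtask; fall back to the reasoning engine."""
    return ROUTES.get(subtask.get("category"), ROUTES["reasoning"])

def plan(outcome: str) -> list:
    """Stand-in for the planner that 'ideates subtasks' from an outcome."""
    return [
        {"category": "deep_research", "goal": f"research for: {outcome}"},
        {"category": "image", "goal": "draft campaign visuals"},
        {"category": "reasoning", "goal": "assemble the final plan"},
    ]

if __name__ == "__main__":
    for sub in plan("local digital marketing campaign for my restaurant"):
        print(sub["goal"], "->", route(sub))
```

The point of the sketch is the design choice, not the table contents: a single planner decomposes the outcome, and each subtask is dispatched to whichever model the operator deems best, with the reasoning engine as the default.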



Trump Orders Federal Agencies To Stop Using Anthropic AI Tech 'Immediately'

By: BeauHD
February 27, 2026, 22:40
President Donald Trump has ordered all U.S. federal agencies to "immediately cease" using Anthropic's AI technology, escalating a standoff after the company sought limits on Pentagon use of its models. CNBC reports: The company, which in July signed a $200 million contract with the Pentagon, wants assurances that its AI models will not be used for fully autonomous weapons or mass domestic surveillance of Americans. The Pentagon had set a deadline of 5:01 p.m. ET Friday for Anthropic to agree to its demands to allow the Pentagon to use the technology for all lawful purposes. If Anthropic did not meet that deadline, Secretary of War Pete Hegseth threatened to label the company a "supply chain risk" or force it to comply by invoking the Defense Production Act. "The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution," Trump said in a post on Truth Social. "Their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY." "Therefore, I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic's technology," Trump wrote. "We don't need it, we don't want it, and will not do business with them again! There will be a Six Month phase out period for Agencies like the Department of War who are using Anthropic's products, at various levels," Trump said. On Friday, OpenAI said it would also draw the same red lines as Anthropic: no AI for mass surveillance or autonomous lethal weapons.


AI Mistakes Are Infuriating Gamers as Developers Seek Savings

By: msmash
February 27, 2026, 20:10
The $200 billion video game industry is caught between studios eager to cut ballooning development costs through AI and a player base that has grown openly hostile to the technology after a string of visible blunders. As Bloomberg News reports, Arc Raiders, a surprise hit from Stockholm-based Embark Studios that sold 12 million copies in three months, was briefly vilified online for its robotic-sounding auto-generated voices -- even as CEO Patrick Soderlund insists AI was only used for non-essential elements. EA's Battlefield 6 and Activision's Call of Duty: Black Ops 7 both drew gamer anger this winter over thematically mismatched or poorly generated graphics, and Valve's Steam has added labels to flag games made using AI. Some 47% of developers polled by research house Omdia said they expect generative AI to reduce game quality, and PC gamers -- now facing inflated hardware prices from AI-driven demand for graphics chips -- have turned reflexively antagonistic.


Metacritic Will Kick Out Media Attempting To Submit AI Generated Reviews

By: msmash
February 27, 2026, 17:32
An anonymous reader shares a report: While some see AI as a tool to be used, how it should be deployed responsibly is being heavily debated online across a wide range of industries. In terms of journalistic content, and in this particular instance reviews, review aggregator Metacritic has taken a firm stance on content published and submitted to its platform that has been generated by artificial intelligence in some way. In a statement sent to Gamereactor, co-founder Marc Doyle says this: "Metacritic has been a reputable review source for a quarter century and has maintained a rigorous vetting process when adding new publications to our slate of critics. However, in certain instances such as a publication being sold or a writing staff having turned over, problems can arise such as plagiarism, theft, or other forms of fraud including AI-generated reviews. Metacritic's policy is to never include an AI-generated critic review on Metacritic and if we discover that one has been posted, we'll remove it immediately and sever ties with that publication indefinitely pending a thorough investigation." So, what is this about specifically? It's probably a sound guess that this pertains to Videogamer's review of Resident Evil 9: Requiem, which was removed from the platform after a barrage of comments accusing the review of being AI-written and the author of being made up.


Sam Altman Says OpenAI Shares Anthropic's Red Lines in Pentagon Fight

By: msmash
February 27, 2026, 16:40
An anonymous reader shares a report: OpenAI CEO Sam Altman wrote in a memo to staff that he will draw the same red lines that sparked a high-stakes fight between rival Anthropic and the Pentagon: no AI for mass surveillance or autonomous lethal weapons. If other leading firms like Google follow suit, this could massively complicate the Pentagon's efforts to replace Anthropic's Claude, which was the first model integrated into the military's most sensitive work. It would also be the first time the nation's top AI leaders have taken a collective stand about how the U.S. government can and can't use their technology. Altman made clear he still wants to strike a deal with the Pentagon that would allow ChatGPT to be used for sensitive military contexts. Despite the show of solidarity, such a deal could see OpenAI replace Anthropic if the Pentagon follows through with its plan to declare the latter a "supply chain risk."


OpenAI Raises $110 Billion in the Largest Private Funding Round Ever

By: msmash
February 27, 2026, 14:00
OpenAI has closed what is now the largest private financing in history -- a $110 billion round at a $730 billion pre-money valuation that more than doubles the $40 billion raise it completed just a year ago, itself a record for a private tech company at the time. Amazon invested $50 billion, SoftBank put in $30 billion, and Nvidia committed $30 billion, and additional investors are expected to join as the round progresses. The valuation is a sharp jump from the $500 billion OpenAI commanded in a secondary financing in October, and the round dwarfs recent raises by rivals Anthropic ($30 billion) and xAI ($20 billion). The company has been telling investors it is now targeting roughly $600 billion in total compute spend by 2030, a more measured figure than the $1.4 trillion in infrastructure commitments CEO Sam Altman had touted months earlier. OpenAI is projecting more than $280 billion in total revenue by 2030, split roughly equally between consumer and enterprise. ChatGPT now has over 900 million weekly active users and more than 50 million paying subscribers.


Memory Price Hikes Will Kill Off Budget PCs and Smartphones, Analyst Warns

By: BeauHD
February 27, 2026, 13:00
An anonymous reader quotes a report from The Register: Ballooning memory prices are forecast to kill off entry-level PCs, leading to a decline in global shipments this year -- and a similar effect is going to hit smartphones. Analyst biz Gartner is projecting a drop in PC shipments of more than 10 percent during 2026, and a decline of around 8 percent for smartphones, all due to the AI-driven memory shortage. Some types of memory have doubled or quadrupled in price since last year, and Gartner believes the DRAM and NAND flash used in PCs and phones are set for a further 130 percent rise by the end of 2026. The upshot of this is that the budget PC will disappear, simply because vendors won't be able to build them at a price that will satisfy cost-conscious buyers, according to Gartner research director Ranjit Atwal. "Because the price of memory is increasing so much, vendors lose the ability to provide entry-level PCs -- those below about $500," he told The Register. PC makers could just raise the price of their cheap and cheerful boxes above that level to compensate for the memory hike; however, price-sensitive buyers simply won't bite, he added. Another factor expected to add to the declining fortunes of the PC industry this year is AI devices -- systems equipped with special hardware for accelerating AI tasks, typically via a neural processing unit (NPU) embedded in the CPU. These systems were predicted to take the market by storm, but they require more memory to support AI processing and vendors like to mark them up to a premium price. "Historically, downgrading specifications was the way to go when prices were being squeezed, but that's difficult here," Atwal said. "The thinking was that the average price [of AI PCs] would fall this year, and lead to more adoption," said Atwal, "but that's not happening." The lack of killer applications isn't helping either.
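For scale, note that Gartner's figures compound: a part that has already doubled and then rises a further 130 percent ends up several times its original price. A quick sketch, where the $60 baseline is an assumed example figure, not a number from the report:

```python
# Compounding the reported memory price moves. The $60 starting price
# is an assumed example; the 2x and +130% factors are from the article.
base = 60.0                        # assumed 2025 price of a DRAM kit, USD
doubled = base * 2                 # "doubled ... since last year"
projected = doubled * (1 + 1.30)   # further 130% rise by end of 2026

print(projected)                   # 276.0, i.e. 4.6x the assumed baseline
```

Under those assumptions a single memory component ends up at 4.6 times its starting price, which is how a sub-$500 PC loses its entire cost envelope to one line item.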


Madman, Calculating Hawk, and Dr. Jekyll and Mr. Hyde: The Terrifying Profiles of AIs Given Nuclear Weapons

February 27, 2026, 08:25


In the cult film WarGames, a supercomputer threatened to start a nuclear war. In 2026, reality paints an equally worrying picture: placed in command of geopolitical simulations, cutting-edge artificial intelligences such as GPT-5.2 and Gemini 3 Flash choose atomic escalation in 95% of cases.

The AI Case Against Indian IT Ignores What Indian IT Actually Does

By: msmash
February 26, 2026, 17:20
A fictional memo set in June 2028, published by short seller Citrini Research, wiped roughly $10 billion off Indian IT stocks in a single trading session on February 24 and sent the Nifty IT index down as much as 5.3% -- its worst single-day fall since August 2023 -- on the argument that AI coding agents have collapsed the cost advantage of Indian developers to the price of electricity. The index has shed more than $68 billion in market value in February alone, its worst month since 2003. But the core claim that India's entire $205 billion software export industry rests on cheap labor is roughly 15 years out of date, an analysis argues: custom application maintenance alone accounts for about 35% of a typical Indian IT firm's revenue, per HSBC, and enterprise platforms require deterministic outputs that probabilistic AI systems cannot wholesale replace. HSBC estimates gross AI-led revenue deflation for the sector at 14-16%, a measured headwind rather than an extinction event. The story adds: "24 years of software export data that has never posted a decline, $200 billion in annual revenue, partnerships with the very AI labs whose products are supposed to be the instrument of the sector's destruction, possibly a new $1.5 trillion market category emerging at the intersection of services and software, and the largest U.S. corporates in the middle of mapping their entire workforces into process architectures that require technology partners to modernise. I think India's IT is going to be fine."


Burger King Will Use AI To Check If Employees Say 'Please' and 'Thank You'

By: msmash
February 26, 2026, 16:03
An anonymous reader shares a report: Burger King is launching an AI chatbot that will live in the headsets used by employees. The voice-enabled chatbot, called "Patty," is part of an overarching BK Assistant platform that will not only assist employees with meal preparation but also evaluate their interactions with customers for "friendliness." Thibault Roux, Burger King's chief digital officer, tells The Verge that the company compiled information from franchisees and guests on how to measure friendliness, resulting in the fast food chain training its AI system to recognize certain words and phrases, such as "welcome to Burger King," "please," and "thank you." Managers can then ask the AI assistant how their location is performing on friendliness. "This is all meant to be a coaching tool," Roux says, adding that the company is "iterating" on capturing the tone of conversations as well.
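Burger King has not detailed how Patty actually scores an interaction, but the phrase recognition the article describes could be sketched as a minimal keyword counter. The phrase list comes from the article; the scoring scheme, function, and sample transcript are assumptions for illustration.

```python
# Minimal keyword-based "friendliness" scorer, sketching the phrase
# recognition described in the article. The scoring is hypothetical;
# only the phrase list is taken from the report.
FRIENDLY_PHRASES = ["welcome to burger king", "please", "thank you"]

def friendliness(transcript: str) -> int:
    """Count friendly phrases heard in one customer interaction."""
    text = transcript.lower()
    return sum(text.count(phrase) for phrase in FRIENDLY_PHRASES)

greeting = "Welcome to Burger King! Please pull forward. Thank you!"
print(friendliness(greeting))  # 3
```

A real system would presumably aggregate such per-interaction scores so a manager can ask the assistant how the location is performing on friendliness, and, as Roux notes, would need tone analysis on top of bare keyword spotting.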


We Simulated Nuclear Wars Run by AIs: The Results Are Chilling

February 26, 2026, 14:02


In the cult film WarGames, a supercomputer threatened to start a nuclear war. In 2026, reality paints an equally worrying picture: placed in command of geopolitical simulations, cutting-edge artificial intelligences such as GPT-5.2 and Gemini 3 Flash choose atomic escalation in 95% of cases.

Jony Ive's AI Revolution Is a Smart Speaker…

February 26, 2026, 12:54

The illustration image is a Dieter Rams radio; see below.

The old fantasy of a Jarvis-style assistant, as in Iron Man, dies hard. The revolutionary project announced by IO, the company Jony Ive created after his exit from Apple, will be a plain smart speaker connected to an OpenAI AI. A cut-rate revolution, then.

Jony Ive had presented his project with grand superlatives, calling it an object more revolutionary than Apple's first smartphone. In the end, it will be yet another interface between an LLM and a human: an interface that can record your questions and offer answers, with a small facial-recognition feature via a camera thrown in.

The difference between this product and those already shown by other startups of the kind, such as Rabbit or Humane, is that OpenAI has its own AI, its own algorithms, and its own servers. It therefore does not rely on a third party to do that work. For the rest, on paper it is much the same.

Jony Ive and Sam Altman

OpenAI spent $6.5 billion to acquire IO, a design studio that will offer more or less the same thing as everyone else: a new interface to its service. Among the device's selling points is a camera that can analyze its surroundings, an approach similar to the gadgets of earlier startups and already present in... every smartphone. A microphone will let it understand a conversation or a command; again, a function already built into the device in your pocket. A small bonus? The promise of facial recognition to validate purchases... once more, a biometric function very similar to what modern smartphones already offer.

It is hard to see the appeal of the product, announced for 2027, except that it will sit in your home and thus be even more intrusive than an Amazon or Google smart speaker. OpenAI sees the product as a way to offer continuous interaction with its LLM: you could converse with ChatGPT naturally, without a PC or smartphone. All the better to sink a little deeper into the illusion of friendship or of genuinely being listened to.

OpenAI makes no secret of it and sees this gadget as a permanent presence in the home, an object that will quickly become the go-to answer for every everyday question. A presence that will soon raise concerns, because it removes what still somewhat protects users of AIs of this type: the ability to check what they are told. With no screen or keyboard to verify the answer offered by the interface, the device forces you to trust the AI, even when it tells you nonsense, as it very often does.

Since I have no photos of Jony Ive's speaker, here is Dieter Rams and his little radio instead. A *major* source of inspiration for modern devices...

The fact remains that winning over the public will be difficult, even with Jony Ive

Jony Ive's speaker is expected to sell for between $200 and $300... on top of a subscription to the service. It is impossible for OpenAI not to bundle a subscription with the device; otherwise it would end up in the same dead end as the Rabbit R1. Imagine that each request made on the device costs OpenAI's data centers a few dollars in compute: even sold at $300 apiece, the speaker would burn through its margin within a few hours of testing. Even with a paid subscription, it is hard to see this kind of device being profitable. The price of each request to an LLM like OpenAI's is secret and hard to estimate, but it exists, and a device of this type therefore cannot survive without a subscription.
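The back-of-the-envelope argument can be made concrete. Every figure below is an assumption for illustration: OpenAI's real hardware costs and per-query inference costs are not public, and the "few dollars per query" is the article's own hypothesis.

```python
# Illustrative break-even: how many queries erase the hardware margin?
# All numbers are assumptions; OpenAI's actual costs are not public.
price = 300.0          # assumed retail price, USD
build_cost = 200.0     # assumed bill of materials + logistics, USD
cost_per_query = 2.0   # assumed inference cost per request, USD

margin = price - build_cost
breakeven_queries = margin / cost_per_query
print(int(breakeven_queries))  # 50 queries and the margin is gone
```

Under those assumptions, a household asking a few dozen questions a day wipes out the hardware margin in the first week, which is why a recurring subscription is the only plausible model.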

And that is before the environmental debate over the cost of analyzing questions and computing answers. The energy and water cost of a mass deployment of this type of device already looks monstrous, and it could get worse, as OpenAI is reportedly also working on other tools based on the same principle, in particular smart glasses. That is on top of other AI players said to be considering the same kinds of devices, such as Apple, Jony Ive's former employer.

Source: 9to5mac

Jony Ive's AI Revolution Is a Smart Speaker… © MiniMachines.net. 2026

Perplexity Launches an AI "Computer" That Does the Work for You

February 26, 2026, 10:02

Launched on February 26, 2026, the "Perplexity Computer" platform aims to turn AI into a true autonomous computer in the cloud, capable of orchestrating multiple models to execute complex tasks end to end.

Hacker Used Anthropic's Claude To Steal Sensitive Mexican Data

By: msmash
February 25, 2026, 19:00
A hacker exploited Anthropic's AI chatbot to carry out a series of attacks against Mexican government agencies, resulting in the theft of a huge trove of sensitive tax and voter information, according to cybersecurity researchers. From a report: The unknown Claude user wrote Spanish-language prompts for the chatbot to act as an elite hacker, finding vulnerabilities in government networks, writing computer scripts to exploit them and determining ways to automate data theft, Israeli cybersecurity startup Gambit Security said in research published Wednesday. The activity started in December and continued for roughly a month. In all, 150 gigabytes of Mexican government data was stolen, including documents related to 195 million taxpayer records as well as voter records, government employee credentials and civil registry files, according to the researchers.


Samsung Unveils the Galaxy S26 and S26 Ultra: A Summary of What's New

February 25, 2026, 18:03

Samsung has just unveiled its new range of high-end smartphones: the Galaxy S26, S26+, and S26 Ultra. The design changes little from the previous generation (it is unified for more consistency across the three models), but Samsung impresses with one major new feature: the Privacy Display.
