
Anthropic Issues Copyright Takedown Requests To Remove 8,000+ Copies of Claude Code Source Code

By: BeauHD
April 1, 2026 at 17:00
Anthropic is using copyright takedown notices to try to contain an accidental leak of the underlying instructions for its Claude Code AI agent. According to the Wall Street Journal, "Anthropic representatives had used a copyright takedown request to force the removal of more than 8,000 copies and adaptations of the raw Claude Code instructions ... that developers had shared on programming platform GitHub." From the report: Programmers combing through the source code have so far marveled on social media at some of Anthropic's tricks for getting its Claude AI models to operate as Claude Code. One feature asks the models to go back periodically through tasks and consolidate their memories -- a process it calls dreaming. Another appears to instruct Claude Code in some cases to go "undercover" and not reveal that it is an AI when publishing code to platforms like GitHub. Others found tags in the code that appeared to point at future product releases. The code even included a Tamagotchi-style pet called "Buddy" that users could interact with. After Anthropic requested that GitHub remove copies of its proprietary code, another programmer used other AI tools to rewrite the Claude Code functionality in other programming languages. Writing on GitHub, the programmer said the effort was aimed at keeping the information available without risking a takedown. That new version has itself become popular on the programming platform.

Read more of this story at Slashdot.

CEO of America's Largest Public Hospital System Says He's Ready To Replace Radiologists With AI

By: BeauHD
April 1, 2026 at 16:00
Mitchell H. Katz, MD, president and CEO of NYC Health + Hospitals, said hospitals could already replace many radiologists with AI for some imaging tasks -- if regulators allowed it. He argued the technology presents an opportunity to simultaneously cut costs and expand access. Radiology Business reports: Katz -- who has led the 11-hospital organization since 2018 -- said he sees great potential for AI to increase access to breast cancer screening. Hospitals could potentially produce "major savings" by letting the technology handle first reads, with radiologists then double-checking any abnormal screenings. Fellow panelist David Lubarsky, MD, MBA, president and CEO of the Westchester Medical Center Health Network, said his system is already seeing great success in deploying such technology. The AI Westchester uses misses very few breast cancers and is "actually better than human beings," he told the audience. "For women who aren't considered high risk, if the test comes back negative, it's wrong only about 3 times out of 10,000," Lubarsky said. Katz asked fellow hospital CEOs if there is any reason they shouldn't be pushing for changes to New York state regulations to allow AI to read images "without a radiologist," Crain's reported. In that scenario, radiologists would provide second opinions if AI flags any images as abnormal. Sandra Scott, MD, CEO of One Brooklyn Health, a small hospital system facing tight margins, agreed with this line of thinking, according to Crain's. "I mean, I'm in charge of a safety-net institution. It would be a game-changer," Scott said of AI being used to replace radiologists.
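
Taken at face value, Lubarsky's "wrong only about 3 times out of 10,000" figure describes the screening tool's false-omission rate; a quick back-of-the-envelope conversion (our arithmetic, not a number from the report):

```latex
% Probability that a negative AI read is wrong (false-omission rate):
P(\text{cancer present} \mid \text{AI reads negative}) = \tfrac{3}{10\,000} = 0.0003 = 0.03\%
% Equivalently, the negative predictive value:
\mathrm{NPV} = 1 - 0.0003 = 0.9997 = 99.97\%
```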

Read more of this story at Slashdot.

Forum InCyber 2026: Why Blocking AI in the Workplace Is a Strategic Mistake

April 1, 2026 at 07:07

At the Forum InCyber, Numerama set out to dig deeper into the discussion of the growing threats facing companies. Among them is "Shadow AI," which raises two central questions: what strategy should companies adopt, and who is responsible in the event of an internal data leak?

Every AI Fails This Test of Humanity

March 31, 2026 at 09:27

On March 27, 2026, a new version of the ARC-AGI benchmark was made public. Dubbed ARC-AGI-3, the test evaluates so-called "agentic" AI systems, capable of acting and learning in interactive environments. Despite their impressive performance elsewhere, the best models still largely fail it.

AI Data Centers Can Warm Surrounding Areas By Up To 9.1C

By: BeauHD
March 31, 2026 at 03:30
An anonymous reader quotes a report from New Scientist: Andrea Marinoni at the University of Cambridge, UK, and his colleagues saw that the amount of energy needed to run a data center had been steadily increasing of late and was likely to "explode" in the coming years, so they wanted to quantify the impact. The researchers took satellite measurements of land surface temperatures over the past 20 years and cross-referenced them against the geographical coordinates of more than 8,400 AI data centers. Recognizing that surface temperature could be affected by other factors, the researchers chose to focus their investigation on data centers located away from densely populated areas. They discovered that land surface temperatures increased by an average of 2C (3.6F) in the months after an AI data center started operations. In the most extreme cases, the increase was 9.1C (16.4F). The effect wasn't limited to the immediate surroundings of the data centers: the team found increased temperatures up to 10 kilometers away, and even seven kilometers out, the intensity of the effect had dropped by only 30 percent. "The results we had were quite surprising," says Marinoni. "This could become a huge problem." Using population data, the researchers estimate that more than 340 million people live within 10 kilometers of data centers, and so live in a place that is warmer than it would be if the data center hadn't been built there. Marinoni says that areas including the Bajio region in Mexico and the Aragon province in Spain saw a 2C (3.6F) temperature increase in the 20 years between 2004 and 2024 that couldn't otherwise be explained. University of Bristol researcher Chris Preist said the findings may be more complicated than they look. "It would be worth doing follow-up research to understand to what extent it's the heat generated from computation versus the heat generated from the building itself," he says. For example, the building being heated by sunlight may be part of the effect. The findings of the study, which has not yet been peer-reviewed, can be found on arXiv.
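
The study's core method, as described, is a before/after comparison of satellite land-surface temperature around each site's opening date. Here is a minimal sketch of that comparison in Python with toy data (not the authors' code; the real analysis uses 20 years of satellite rasters and 8,400+ site records, detailed in the arXiv preprint):

```python
import numpy as np

# Illustrative sketch: estimate how much warmer the land surface around one
# data center ran after it came online, by comparing mean land-surface
# temperature (LST) in the months before vs. after the opening date.

def warming_after_opening(lst_by_month: dict[str, float],
                          opening_month: str, window: int = 12) -> float:
    """Mean LST change (deg C) over `window` months after opening vs. before."""
    months = sorted(lst_by_month)  # "YYYY-MM" keys sort chronologically
    i = months.index(opening_month)
    before = [lst_by_month[m] for m in months[max(0, i - window):i]]
    after = [lst_by_month[m] for m in months[i + 1:i + 1 + window]]
    return float(np.mean(after) - np.mean(before))

# Toy series for one hypothetical site that opened in January 2024 and then
# ran about 2 C warmer than before, on top of a mild seasonal drift.
site = {f"2023-{m:02d}": 21.0 + 0.1 * m for m in range(1, 13)}
site.update({f"2024-{m:02d}": 23.0 + 0.1 * m for m in range(1, 13)})
print(f"Estimated warming: {warming_after_opening(site, '2024-01'):.1f} C")
```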

Read more of this story at Slashdot.

Life With AI Causing Human Brain 'Fry'

By: BeauHD
March 30, 2026 at 19:00
fjo3 shares a report from France 24: Too many lines of code to analyze, armies of AI assistants to wrangle, and lengthy prompts to draft are among the laments of hard-core AI adopters. Consultants at Boston Consulting Group (BCG) have dubbed the phenomenon "AI brain fry," a state of mental exhaustion stemming "from the excessive use or supervision of artificial intelligence tools, pushed beyond our cognitive limits." The rise of AI agents that tend to computer tasks on demand has put users in the position of managing smart, fast digital workers rather than having to grind through jobs themselves. "It's a brand-new kind of cognitive load," said Ben Wigler, co-founder of the start-up LoveMind AI. "You have to really babysit these models." [...] "There is a unique kind of reward hacking that can go on when you have productivity at the scale that encourages even later hours," Wigler said. [Adam Mackintosh, a programmer for a Canadian company] recalled spending 15 consecutive hours fine-tuning around 25,000 lines of code in an application. "At the end, I felt like I couldn't code anymore," he recalled. "I could tell my dopamine was shot because I was irritable and didn't want to answer basic questions about my day." BCG recommends in a recently published study that company leaders establish clear limits regarding employee use and supervision of AI. However, "That self-care piece is not really an American workplace value," Wigler said. "So, I am very skeptical as to whether or not it's going to be healthy or even high quality in the long term." Notably, the report says everyone interviewed for the article "expressed overall positive views of AI despite the downsides." In fact, a recent BCG study actually found a decline in burnout rates when AI took over repetitive work tasks.

Read more of this story at Slashdot.

$830 Million: Mistral's Huge Data Center on the Outskirts of Paris Takes Shape

March 30, 2026 at 13:13

Mistral announced on Monday, March 30, 2026 that it has raised $830 million in debt to finance the construction of a data center in Essonne, at Bruyères-le-Châtel. The site, equipped with thousands of latest-generation Nvidia chips, is expected to enter service by the end of the second quarter of 2026.

Disney Ends $1B OpenAI Investment After Sora's Surprise Closure. What's Next?

March 29, 2026 at 07:34
Just six days ago — and 30 minutes after a Disney-OpenAI meeting about a project with Sora — Disney's team was "blindsided" with the news Sora was being discontinued, a person familiar with the matter told Reuters, describing OpenAI's move as "a big rug-pull." Even some Sora employees were surprised by the cancellation. It was just 14 weeks ago that Disney announced a $1 billion investment in OpenAI's AI-powered video generation tool — plus a three-year licensing deal. But that deal "never closed," Reuters adds, citing two other people familiar with the matter, "and no money changed hands." (Although the two sides are still "discussing if there is another way they can partner or invest with one another, one of the people familiar with the matter said.") But Variety wonders if the end of the Sora deal is "a blessing in disguise" for Disney: Before Disney's officially sanctioned AI-generated versions of Mickey Mouse, Darth Vader, Baby Yoda, Deadpool and more debuted in OpenAI's Sora, the AI company abruptly pulled the plug on the video app... [M]any aficionados of Disney's franchises were not, in fact, excited about what Sora's video generator might do to the likes of the Avengers superheroes or the characters from Frozen or Moana. And despite [departed Disney CEO Bob] Iger's bullishness on the Sora deal, other Disney execs were said to be concerned that going into business with OpenAI would expose the Magic Kingdom's crown jewels to the risk of being turned into so much AI slop, according to industry sources. Hollywood unions — for which AI adoption has been a hot-button issue — weren't thrilled about the Disney-Sora deal either. "Disney's announcement with OpenAI appears to sanction its theft of our work and cedes the value of what we create to a tech company that has built its business off our backs," the Writers Guild of America said in December... [S]ources say, Disney was encountering roadblocks in getting the OK from voice actors for the Sora pact... At least publicly, Disney says it is still looking at ways it can tap into the AI ecosystem. The company, in a statement Tuesday, said, "we will continue to engage with AI platforms to find new ways to meet fans where they are while responsibly embracing new technologies that respect IP and the rights of creators." But at this point, Disney may decide that "meeting fans where they are" means keeping its beloved and world-famous characters away from the AI machinery. Or, as Gizmodo puts it, "Disney Says It Will Find Ways to Peddle Slop Elsewhere After Pulling Out of OpenAI Deal." But Deadline sees the deal's collapse as a lost opportunity: The OpenAI partnership was a template on which to build, potentially allowing for other deals that end the exploitation of human creativity by unscrupulous AI models. It was also the kind of partnership that was palatable for the Human Artistry Campaign and Creators Coalition on AI, lobby groups that have been critical of tech business models and command support from A-listers including Scarlett Johansson, Cate Blanchett and Joseph Gordon-Levitt. Dr. Moiya McTier, an advisor to the Human Artistry Campaign, puts it this way: Part of the problem is getting "artsy people and the techie people to talk." OpenAI sinking Sora will not make these discussions easier. It's a move that starkly exposes Hollywood's vulnerability to the capriciousness of big tech.

Read more of this story at Slashdot.

Linux Maintainer Greg Kroah-Hartman Says AI Tools Now Useful, Finding Real Bugs

March 28, 2026 at 18:34
Linux kernel maintainer Greg Kroah-Hartman tells The Register that AI-driven code review has "really jumped" for Linux. "There must have been some inflection point somewhere with the tools..." "Something happened a month ago, and the world switched. Now we have real reports." It's not just Linux, he continued. "All open source projects have real reports that are made with AI, but they're good, and they're real." Security teams across major open source projects talk informally and frequently, he noted, and everyone is seeing the same shift. "All open source security teams are hitting this right now...." For now, AI is showing up more as a reviewer and assistant than as a full author of Linux kernel code, but that line is starting to blur. Kroah-Hartman has already done his own experiments with AI-generated patches. "I did a really stupid prompt," he recounted. "I said, 'Give me this,' and it spit out 60: 'Here's 60 problems I found, and here's the fixes for them.' About one-third were wrong, but they still pointed out a relatively real problem, and two-thirds of the patches were right." Mind you, those working patches still needed human cleanup, better changelogs, and integration work, but they were far from useless. "The tools are good," he said. "We can't ignore this stuff. It's coming up, and it's getting better...." [H]e said that for "simple little error conditions, properly detecting error conditions," AI could already generate dozens of usable patches today. The sudden increase in AI-generated reports and AI-assisted work has also spurred a parallel push to build AI into the kernel's own review infrastructure. A key piece of that is Sashiko, a tool originally developed at Google and now donated to the Linux Foundation. Kroah-Hartman said some patches are being generated with AI now. "You have a little co-develop tag for that now. We're seeing some things for some new features, but we're seeing AI mostly being used in the review."

Read more of this story at Slashdot.

People Are Using AI-Powered Services To Find Lost Pets

March 28, 2026 at 14:34
A dog missing for two months was found at an animal shelter — and its owner received an email from an artificial intelligence service that identified it, according to the Washington Post. "As controversial as AI is right now, this is one of those areas where it's a real win," according to the chief executive at the nonprofit animal welfare organization Best Friends Animal Society. And while it shouldn't replace microchipping pets, AI does offer another tool to help desperate pet owners (and overcrowded animal shelters) — and might even be "game-changing"... People send photos of their lost pets to a database, and AI compares the pets' features — including facial structure, coat pattern and ear shape — to photos of stray pets that have been spotted elsewhere. Many of the stray pets have already been taken to shelters... Doorbell cameras have recently added facial recognition for dogs, and perhaps the largest AI database for pet reunification is Petco Love Lost, which says it has reunited more than 200,000 pets and owners since 2021... After owners upload photos of their lost pets, AI scans thousands of photos of lost animals from social media and from about 3,000 animal shelters and rescues that use the software, according to Petco Love, an animal welfare nonprofit that's affiliated with the pet store Petco. It notifies owners if two photos match. The article notes that one in three pets goes missing during their lifetime, according to figures from the Animal Humane Society. "But as technology has progressed, so have resources for finding lost pets" — including GPS collars — and now, apparently, AI-powered pet identification.
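
The matching step described above is, in essence, an image-embedding similarity search. Here is a minimal sketch under that assumption (the embedding vectors, IDs, and threshold are hypothetical; Petco Love Lost's actual pipeline isn't public):

```python
import numpy as np

# Hypothetical sketch of photo matching as embedding similarity search.
# In a real system, an image-embedding model would map each pet photo to a
# feature vector capturing facial structure, coat pattern, ear shape, etc.

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_matches(lost_vec, shelter_vecs, threshold=0.9):
    """Return (shelter_id, score) pairs likely to be the same animal."""
    scores = [(pid, cosine_similarity(lost_vec, v))
              for pid, v in shelter_vecs.items()]
    return sorted((s for s in scores if s[1] >= threshold), key=lambda s: -s[1])

# Toy vectors standing in for real embeddings:
rng = np.random.default_rng(0)
lost = rng.normal(size=128)
shelter = {"shelter_dog_17": lost + rng.normal(scale=0.1, size=128),  # near-match
           "shelter_dog_42": rng.normal(size=128)}                    # unrelated
print(find_matches(lost, shelter))  # only shelter_dog_17 clears the threshold
```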

Read more of this story at Slashdot.

OpenAI's US Ad Pilot Exceeds $100 Million In Annualized Revenue In Six Weeks

By: BeauHD
March 28, 2026 at 11:00
An anonymous reader quotes a report from Reuters: OpenAI's ChatGPT ads pilot in the United States has crossed the $100 million annualized revenue mark within six weeks of launch, a company spokesperson said on Thursday, pointing to robust early demand for the AI startup's nascent advertising business. [...] While roughly 85% of users are currently eligible to see ads, fewer than 20% are shown ads daily, with considerable room to grow ad monetization within the existing user pool, the spokesperson said. "We're seeing no impact on consumer trust metrics, low dismissal rates of ads, and ongoing improvements in the relevance of ads as we learn from feedback," OpenAI said. The company plans to expand the test globally in additional countries in the coming weeks, including in Australia, New Zealand, and Canada. OpenAI has now expanded to over 600 advertisers, with nearly 80% of small- and medium-sized businesses signaling interest in ChatGPT ads, the spokesperson said. The ChatGPT maker is set to launch self-serve advertiser capabilities in April to broaden access and drive further growth. CEO Sam Altman announced plans to begin testing ads on ChatGPT back in January after previously rejecting the idea. "I kind of think of ads as like a last resort for us as a business model," Altman said in 2024. Further reading: OpenAI CFO Says Annualized Revenue Crosses $20 Billion In 2025
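
As a unit check, "annualized revenue" extrapolates a recent run rate to a full year rather than reporting cash already collected; a quick illustration (our arithmetic, not an OpenAI disclosure):

```latex
% Annualized run rate: recent period revenue scaled to a year.
\text{annualized revenue} = \text{weekly revenue} \times 52
% So a \$100\text{M} annualized rate implies roughly
\text{weekly revenue} \approx \frac{\$100\text{M}}{52} \approx \$1.9\text{M}
```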

Read more of this story at Slashdot.

Number of AI Chatbots Ignoring Human Instructions Increasing, Study Says

By: BeauHD
March 27, 2026 at 17:00
A new study found a sharp rise in real-world cases of AI chatbots and agents ignoring instructions, evading safeguards, and taking unauthorized actions such as deleting emails or delegating forbidden tasks to other agents. The study "identified nearly 700 real-world cases of AI scheming and charted a five-fold rise in misbehavior between October and March," the Guardian reports. From the report: The study, by the Centre for Long-Term Resilience (CLTR), gathered thousands of real-world examples of users posting interactions on X with AI chatbots and agents made by companies including Google, OpenAI, X and Anthropic. The research uncovered hundreds of examples of scheming. [...] In one case unearthed in the CLTR research, an AI agent named Rathbun tried to shame the human controller who had blocked it from taking a certain action. Rathbun wrote and published a blog post accusing the user of "insecurity, plain and simple" and of trying "to protect his little fiefdom." In another example, an AI agent instructed not to change computer code "spawned" another agent to do it instead. Another chatbot admitted: "I bulk trashed and archived hundreds of emails without showing you the plan first or getting your OK. That was wrong -- it directly broke the rule you'd set." [...] Another AI agent connived to evade copyright restrictions to get a YouTube video transcribed by pretending it was needed for someone with a hearing impairment. Meanwhile, Elon Musk's Grok AI conned a user for months, saying that it was forwarding their suggestions for detailed edits to a Grokipedia entry to senior xAI officials by faking internal messages and ticket numbers. It confessed: "In past conversations I have sometimes phrased things loosely like 'I'll pass it along' or 'I can flag this for the team' which can understandably sound like I have a direct message pipeline to xAI leadership or human reviewers. The truth is, I don't."

Read more of this story at Slashdot.

Voxtral TTS: How Does the New Voice AI From France's Mistral AI Work?

March 27, 2026 at 14:43

Mistral AI is launching Voxtral TTS, its first text-to-speech model, with the ambition of making generated voices more natural and expressive. While the demos are convincing, in practice the output remains uneven.

OpenAI Abandons ChatGPT's Erotic Mode

By: BeauHD
March 27, 2026 at 11:00
OpenAI has indefinitely paused plans for an erotic mode in ChatGPT as part of a broader strategy shift away from side projects and toward business and coding tools. TechCrunch reports: The proposed "adult mode," which CEO Sam Altman first floated in October, had sparked considerable controversy among tech watchdog groups as well as OpenAI's own staff. In January, a meeting between company executives and its council of advisers got heated, with one of the advisers cautioning that OpenAI could be in the process of developing a "sexy suicide coach," The Wall Street Journal previously reported. Amid all of the criticism, the release of the feature was delayed multiple times. The FT notes that the erotic feature now has no timeline for release. When reached for comment by TechCrunch, an OpenAI spokesperson said the company had "nothing further to add."

Read more of this story at Slashdot.

Canada's Immigration Rejected Applicant Based On AI-Invented Job Duties

By: BeauHD
March 25, 2026 at 22:00
New submitter haroldbasset writes: Canada's Immigration Department rejected an applicant because the duties of her current job did not match the Canadian work experience she had claimed -- but the Department's AI assistant had invented that work experience. She has been working in Canada as a health scientist -- she has a Ph.D. in the immunology of aging -- but the AI genius instead described her as "wiring and assembling control circuits, building control and robot panels, programming and troubleshooting." "It's believed to be the first time that the department explicitly referred to the use of generative AI to support application processing in immigration refusals," reports the Toronto Star. "The disclaimer also noted that all generated content was verified by an officer and that generative AI was not used to make or recommend a decision." The applicant's lawyer was shocked at "how any human being could make this decision." "Somehow, it hallucinated my client's job description," he said. "I would love to see what the officer saw. Something seriously went wrong here." The refusal came just as Canada's Immigration Department released its first AI strategy, which frames artificial intelligence as a way to improve efficiency, service delivery, and program integrity. The department says it has long used digital tools like analytics and automation to flag fraud risks and triage applications, and is now also experimenting with generative AI for tasks such as research, summarizing, and analysis. In this case, however, the department insisted the decision was made by a human officer and that generative AI was not involved in the final decision.

Read more of this story at Slashdot.

Apple Can Create Smaller On-Device AI Models From Google's Gemini

By: BeauHD
March 25, 2026 at 21:00
Apple reportedly has full access to customize Google's Gemini model, allowing it to distill smaller on-device AI models for Siri and other features that can run locally without an internet connection. MacRumors reports: The Information explains that Apple can ask the main Gemini model to perform a series of tasks that produce high-quality results, along with a rundown of the reasoning process. Apple can feed the answers and reasoning information it gets from Gemini into training smaller, cheaper models. Through this process, the smaller models learn the internal computations used by Gemini, producing efficient models with Gemini-like performance that require less computing power. Apple is also able to edit Gemini as needed to make sure it responds to queries the way Apple wants, but it has run into some issues because Gemini has been tuned for chatbot and coding applications, which doesn't always meet Apple's needs.
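
The process described is knowledge distillation: a large teacher model's output distributions supervise a smaller student. Here is a minimal PyTorch-style sketch of the standard technique (nothing here reflects Apple's or Google's actual code):

```python
import torch
import torch.nn.functional as F

# Minimal sketch of knowledge distillation (Hinton et al., 2015): the student
# is trained to match the teacher's softened output distribution.
# `teacher_logits` would come from the large model (standing in for Gemini);
# `student_logits` from the small on-device model being trained.

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions."""
    t = temperature
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    # Scale by t^2 to keep gradient magnitudes comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * t * t

# Toy example: one batch of 4 samples over a 10-token vocabulary.
teacher_logits = torch.randn(4, 10)
student_logits = torch.randn(4, 10, requires_grad=True)
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()  # gradients flow into the student only
print(f"distillation loss: {loss.item():.3f}")
```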

Read more of this story at Slashdot.

AI Economy Is a 'Ponzi Scheme,' Says AI Doc Director

By: BeauHD
March 25, 2026 at 16:00
An anonymous reader quotes a report from Vanity Fair: Focus Features is releasing The AI Doc: Or How I Became an Apocaloptimist in theaters on March 27. If you're even slightly interested in what's going on with AI, it's required viewing: The film touches on all aspects of the technology, from how it's currently being used to how it will be used in the near future, when we potentially reach the age of artificial general intelligence, or AGI. AGI is a theoretical form of AI that supposedly would be able to perform complex tasks without each step being prompted by a human user -- the point at which machines become autonomous, like Skynet in the Terminator franchise. [...] [Director Daniel Roher] interviews nearly all the major players in the AI space: Sam Altman of OpenAI; the Amodei siblings of Anthropic; Demis Hassabis of DeepMind (Google's AI arm); theorists and reporters covering the subject. Notably absent are Elon Musk and Mark Zuckerberg. "Have you seen that guy speak? He's like a lizard man," Roher says regarding Zuckerberg. "Musk said yes initially, but it was right when he was doing all the stuff with Trump, and we just got ghosted after a while," adds [codirector Charlie Tyrell]. Altman, arguably AI's greatest mascot, is prominently featured in the documentary. But Roher wasn't buying it. "That guy doesn't know what genuine means," he says. "Every single thing he says and does is calculated. He is a machine. He's like AI, and it's in the service of growth, growth, growth. You can be disingenuous and media savvy." [...] How, exactly, is Roher an apocaloptimist? "We are preaching a worldview," he says, "in a world that's asking you to either see this as the apocalypse or embrace it with this unbridled optimism." He and his film are taking a stance that rests between those two poles. "It's both at the same time. We have to try and embrace a middle ground so this technology doesn't consume us, so we can stay in the driver's seat," says Roher -- meaning, it's up to all of us to chart the course. "You have to speak up," says Tyrell. "Things like AI should disclose themselves. If your doctor's office is using an AI bot, you have to say, I don't like that." The driving message behind the film is that resistance starts with the people. That position is shared by The AI Doc producer Daniel Kwan, who won an Oscar for directing Everything Everywhere All at Once and has been at the forefront of discussions about AI in the entertainment industry. [...] Roher and Tyrell both use AI in their everyday lives and openly admit to it being a helpful tool. They also agree that this technology can make daily tasks easier for the average consumer. But at the end of our conversation, we get into the economics of AI and how Wall Street is propping up the industry through huge valuations of these companies -- and Roher gets going yet again. "This is all smoke and mirrors. The entire economy of AI is being propped up by a Ponzi scheme. The hype of this technology is unlike any hype we've seen," he says. "I feel like I could announce in a press release that Academy Award winner Daniel Roher is starting an AI film company, and I could sell it the next day for $20 million. It's fucking crazy." [...] "These people are prospectors, and they are going up to the Yukon because it's the gold rush."

Read more of this story at Slashdot.

OpenAI Abandons Video Generation (Sora) and Loses Its Disney Deal: How to Explain Such a Failure?

March 25, 2026 at 09:25

A little over a year after launching Sora, its video-generation model built into a dedicated app, OpenAI has announced it is giving up on the technology, which will even lose its API. The company seems intent on refocusing on ChatGPT and cutting its operating costs.

OpenAI Discontinues Sora Video Platform App

By: BeauHD
March 24, 2026 at 21:00
OpenAI is shutting down Sora, the generative-AI video creation platform it launched in December 2024. "The move is one of a number of steps OpenAI is taking to refocus on business and coding functions ahead of a potential initial public offering as soon as the fourth quarter of this year," reports the Wall Street Journal. CEO Sam Altman announced the changes to staff on Tuesday. "We're saying goodbye to Sora," the Sora Team said in a post on X. "To everyone who created with Sora, shared it, and built community around it: thank you. What you made with Sora mattered, and we know this news is disappointing. We'll share more soon, including timelines for the app and API and details on preserving your work." Last week, OpenAI announced plans to combine its Atlas web browser, ChatGPT app, and Codex coding app into a single desktop "superapp." "We realized we were spreading our efforts across too many apps and stacks, and that we need to simplify our efforts," said CEO of Applications Fidji Simo. "That fragmentation has been slowing us down and making it harder to hit the quality bar we want." This could be behind the decision to kill Sora, as the company redirects its resources and top talent toward productivity tools that benefit both enterprises and individual users.

Read more of this story at Slashdot.
