
Which RJ45 Ethernet Cable Should You Choose To Improve Your Home Internet Connection?

April 8, 2026 at 10:14

Having fiber is good; actually benefiting from it is better. If your connection stagnates despite a recent box, the bottleneck is often your RJ45 Ethernet cable. Here is our complete guide to choosing the right network cable and unlocking the full potential of your internet plan.
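The heart of any such buying decision is the cable category, which caps the link speed you can get out of a fiber plan. A minimal lookup sketch using the standard nominal ratings (figures assume 100 m runs unless noted):

```typescript
// Nominal Ethernet cable category ratings (standard figures, 100 m runs
// unless noted). Useful for matching a cable to a fiber plan's speed.
const CABLE_RATINGS: Record<string, { maxGbps: number; bandwidthMHz: number }> = {
  Cat5e: { maxGbps: 1, bandwidthMHz: 100 },
  Cat6: { maxGbps: 1, bandwidthMHz: 250 }, // 10 Gbps possible up to ~55 m
  Cat6a: { maxGbps: 10, bandwidthMHz: 500 },
  Cat7: { maxGbps: 10, bandwidthMHz: 600 },
  Cat8: { maxGbps: 40, bandwidthMHz: 2000 }, // rated for 30 m runs
};

// Example: a 2 Gbps fiber plan needs at least Cat6a to avoid a cable bottleneck.
const planGbps = 2;
const adequate = Object.entries(CABLE_RATINGS)
  .filter(([, r]) => r.maxGbps >= planGbps)
  .map(([name]) => name);
console.log(adequate); // ["Cat6a", "Cat7", "Cat8"]
```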

Anthropic Unveils 'Claude Mythos', Powerful AI With Major Cyber Implications

By: BeauHD
April 7, 2026 at 22:00
"Anthropic has unveiled Claude Mythos, a new AI model capable of discovering critical vulnerabilities at scale," writes Slashdot reader wiredmikey. "It's already powering Project Glasswing, a joint effort with major tech firms to secure critical software. But the same capabilities could also accelerate offensive cyber operations." SecurityWeek reports: Mythos is not an incremental improvement but a step change in performance over Anthropic's current range of frontier models: Haiku (smallest), Sonnet (middle ground), and Opus (most powerful). Mythos sits in a fourth tier named Copybara, and Anthropic describes it as superior to any other existing AI frontier model. It incorporates the current trend in the use of AI: the modern use of agentic AI. "The powerful cyber capabilities of Claude Mythos Preview are a result of its strong agentic coding and reasoning skills... the model has the highest scores of any model yet developed on a variety of software coding tasks," notes Anthropic in a blog titled Project Glasswing -- Securing critical software for the AI era. In the last few weeks, Mythos Preview has identified thousands of zero-day vulnerabilities with many classified as critical. Several are ten or 20 years old -- the oldest found so far is a 27-years old bug in OpenBSD. Elsewhere, a 16-years old vulnerability found in video software has survived five million hits from other automated testing tools without ever being discovered. And it autonomously found and chained together several in the Linux kernel allowing an attacker to escalate from ordinary user access to complete control of the machine. [...] Anthropic is concerned that Mythos' capabilities could unleash cyberattacks too fast and too sophisticated for defenders to block. It hopes that Mythos can be used to improve cybersecurity generally before malicious actors can get access to it. To this end, the firm has announced the next stage of this preparation as Project Glasswing, powered by Mythos Preview. Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely. "Project Glasswing is a starting point. No one organization can solve these cybersecurity problems alone: frontier AI developers, other software companies, security researchers, open-source maintainers, and governments across the world all have essential roles to play." Claude Mythos Preview is described as a general-purpose, unreleased frontier model from Anthropic that has nevertheless completed its training phase. The firm does not plan to make Mythos Preview generally available. The implication is that 'Preview' is a term used solely to describe the current state of Mythos and the market's readiness to receive it, and will be dropped when the firm gets closer to general release.

Read more of this story at Slashdot.

Testing Suggests Google's AI Overviews Tells Millions of Lies Per Hour

By: BeauHD
April 7, 2026 at 19:00
A New York Times analysis found Google's AI Overviews now answer questions correctly about 90% of the time, which might sound impressive until you realize that roughly 1 in 10 answers is wrong. "[F]or Google, that means hundreds of thousands of lies going out every minute of the day," reports Ars Technica. From the report: The Times conducted this analysis with the help of a startup called Oumi, which is itself deeply involved in developing AI models. The company used AI tools to probe AI Overviews with the SimpleQA evaluation, a common test to rank the factuality of generative models like Gemini. Released by OpenAI in 2024, SimpleQA is essentially a list of more than 4,000 questions with verifiable answers that can be fed into an AI. Oumi began running its test last year, when Gemini 2.5 was still Google's best model. At the time, the benchmark showed an 85 percent accuracy rate. When the test was rerun following the Gemini 3 update, AI Overviews answered 91 percent of the questions correctly. If you extrapolate this miss rate out to all Google searches, AI Overviews is generating tens of millions of incorrect answers per day. The report includes several examples of where AI Overviews went wrong. When asked for the date on which Bob Marley's former home became a museum, AI Overviews cited three pages, two of which didn't discuss the date at all. The final one, Wikipedia, listed two contradictory years, and AI Overviews confidently chose the wrong one. The benchmark also prompts models to produce the date on which Yo-Yo Ma was inducted into the Classical Music Hall of Fame. While AI Overviews cited the organization's website listing Ma's induction, it claimed there's no such thing as the Classical Music Hall of Fame. "This study has serious holes," said Google spokesperson Ned Adriance. "It doesn't reflect what people are actually searching on Google." The search giant prefers a test called SimpleQA Verified, which uses a smaller set of questions that have been more thoroughly vetted.
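Mechanically, a SimpleQA-style evaluation is just a loop over question/answer pairs: ask the system under test each question and count matches against the verifiable answer. A stripped-down sketch of that harness (the real benchmark grades responses with a model-based judge rather than the exact string match used here):

```typescript
// Minimal SimpleQA-style accuracy harness. In the real benchmark,
// grading is done by an LLM judge; exact string matching here is a
// simplification for illustration.
interface QaItem {
  question: string;
  answer: string; // single verifiable ground-truth answer
}

async function accuracy(
  items: QaItem[],
  ask: (q: string) => Promise<string> // wraps the system under test
): Promise<number> {
  let correct = 0;
  for (const item of items) {
    const response = await ask(item.question);
    if (response.trim().toLowerCase() === item.answer.trim().toLowerCase()) {
      correct++;
    }
  }
  return correct / items.length; // e.g. 0.91 for 91% accuracy
}
```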

Read more of this story at Slashdot.

OpenAI Calls For Robot Taxes, Public Wealth Fund, and 4-Day Workweek To Tackle AI Disruption

By: BeauHD
April 6, 2026 at 23:00
OpenAI is proposing (PDF) sweeping policy changes to help manage the societal disruption caused by advanced AI, including taxes on automated labor, a public wealth fund, and experiments with a four-day workweek. The company said the policy document offered a series of "initial ideas" to address the risk of "jobs and entire industries being disrupted" by the adoption of AI tools. Business Insider reports: Among the core policy suggestions is a public wealth fund, which would see lawmakers and AI companies work together to invest in long-term assets linked to the AI boom, with returns distributed directly to citizens. Another is that the government should encourage and incentivize employers to experiment with four-day workweeks with no loss in pay and offer "benefits bonuses" tied to productivity gains from new AI tools. The policy document also suggests lawmakers modernize the tax system and shift the tax base to corporate income and capital gains, rather than relying on labor income and payroll taxes that could be hit by a wave of AI-powered job losses. It also recommends taxes related to automated labor. OpenAI also called for the accelerated expansion of the US's electricity grid, which is already feeling the strain from a wave of data center construction and energy demand for training ever more powerful AI models.

Read more of this story at Slashdot.

Copilot Is 'For Entertainment Purposes Only,' According To Microsoft's ToS

By: BeauHD
April 6, 2026 at 15:00
An anonymous reader quotes a report from TechCrunch: AI skeptics aren't the only ones warning users not to unthinkingly trust models' outputs -- the AI companies say so themselves in their terms of service. Take Microsoft, which is currently focused on getting corporate customers to pay for Copilot. But it's also been getting dinged on social media over Copilot's terms of use, which appear to have been last updated on October 24, 2025. "Copilot is for entertainment purposes only," the company warned. "It can make mistakes, and it may not work as intended. Don't rely on Copilot for important advice. Use Copilot at your own risk." Microsoft described the terms of service as "legacy language," saying the wording will be updated. Tom's Hardware notes that similar AI warnings remain common across the industry, with companies like OpenAI and xAI also cautioning users not to treat chatbot output as "the truth" or as "a sole source of truth or factual information."

Read more of this story at Slashdot.

Internet Bug Bounty Pauses Payouts, Citing 'Expanding Discovery' From AI-Assisted Research

April 6, 2026 at 01:34
The Internet Bug Bounty program "has been paused for new submissions," its organizers announced last week. Running since 2012, the program is funded by "a number of leading software companies," reports InfoWorld, and has awarded more than $1.5m to researchers who have reported bugs. Up to now, 80% of its payouts have been for discoveries of new flaws, and 20% to support remediation efforts. But as artificial intelligence makes it easier to find bugs, that balance needs to change, HackerOne said in a statement. "AI-assisted research is expanding vulnerability discovery across the ecosystem, increasing both coverage and speed. The balance between findings and remediation capacity in open source has substantively shifted," said HackerOne. Among the first programs to be affected is the Node.js project, a server-side JavaScript platform for web applications known for its extensive ecosystem. While the project team will continue to accept and triage bug reports through HackerOne, without funding from the Internet Bug Bounty program it will no longer pay out rewards, according to an announcement on its website... [J]ust last month, Google also put a halt to AI-generated submissions provided to its Open Source Software Vulnerability Reward Program. The Internet Bug Bounty stressed: "We have a responsibility to the community to ensure this program effectively accomplishes its ambitious dual purpose: discovery and remediation. Accordingly, we are pausing submissions while we consider the structure and incentives needed to further these goals..." The program added: "We remain committed to strengthening open source security. Working with project maintainers and researchers, we're actively evaluating solutions to better align incentives with open source ecosystem realities and ensure vulnerability discoveries translate into durable remediation outcomes."

Read more of this story at Slashdot.

Claude Code Leak Reveals a 'Stealth' Mode for GenAI Code Contributions - and a 'Frustration Words' Regex

April 5, 2026 at 23:41
That leak of Claude Code's source code "revealed all kinds of juicy details," writes PCWorld. The more than 500,000 lines of code included:
- An 'undercover mode' for Claude that allows it to make 'stealth' contributions to public code bases
- An 'always-on' agent for Claude Code
- A Tamagotchi-style 'Buddy' for Claude
"But one of the stranger bits discovered in the leak is that Claude Code is actively watching our chat messages for words and phrases — including f-bombs and other curses — that serve as signs of user frustration." Specifically, Claude Code includes a file called "userPromptKeywords.ts" containing a simple regex (pattern-matching) check that sweeps each and every message submitted to Claude for certain text matches. In this particular case, the regex pattern is watching for "wtf," "wth," "omfg," "dumbass," "horrible," "awful," "piece of — -" (insert your favorite four-letter word for that one), "f — you," "screw this," "this sucks," and several other colorful metaphors... While the Claude Code leak revealed the existence of the "frustration words" regex, it doesn't give any indication of why Claude Code is scouring messages for these words or what it's doing with them.
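Based only on the phrases PCWorld lists, such a check could be as small as one case-insensitive regex applied to each prompt. A speculative sketch, not the leaked file itself; the function name and exact pattern here are illustrative:

```typescript
// Hypothetical reconstruction of a frustration-keyword matcher; the
// pattern and names are illustrative, not copied from the leaked source.
const FRUSTRATION_PATTERN: RegExp =
  /\b(wtf|wth|omfg|dumbass|horrible|awful|screw this|this sucks)\b/i;

function containsFrustration(message: string): boolean {
  // Case-insensitive scan of a user prompt for any listed phrase.
  return FRUSTRATION_PATTERN.test(message);
}

console.log(containsFrustration("omfg this sucks")); // true
console.log(containsFrustration("please fix the test")); // false
```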

Read more of this story at Slashdot.

Will 'AI-Assisted' Journalists Bring Errors and Retractions?

April 5, 2026 at 21:22
Meet the "journalist" who "uploads press releases or analyst notes into AI tools and prompts them to spit out articles that he can edit and publish quickly," according to the Wall Street Journal. "AI-assisted stories accounted for nearly 20% of Fortune's web traffic in the second half of 2025." And most were written by 42-year-old Nick Lichtenberg, who has now written over 600 AI-assisted stories, producing "more stories in six months than any of his colleagues at Fortune delivered in a year." One Wednesday in February, he cranked out seven. "I'm a bit of a freak," Lichtenberg said... A story by Lichtenberg sometimes starts with a prompt entered into Perplexity or Google's NotebookLM, asking it to write something based on a headline he comes up with. He moves the AI tools' initial drafts into a content-management system and edits the stories before publishing them for Fortune's readers... A piece from earlier that morning about Josh D'Amaro being named Disney CEO took 10 minutes to get online, he said... Like other journalists, Lichtenberg vets his stories. He refers back to the original documents to confirm the information he's reporting is correct. He reaches out to companies for comment. But he admits his process isn't as thorough as that of magazine fact-checkers. While Lichtenberg started out saying his stories were co-authored with "Fortune Intelligence", he now typically signs his own name, according to the article, "because he feels the work is mostly his own." (Though his stories "sometimes" disclose generative AI was used as a research tool...) The article asks whether he could be "a bellwether for where much of the media business is headed..." "Much of the content people now consume online is generated by artificial intelligence, with some 9% of newly published newspaper articles either partially or fully AI-generated, according to a 2025 study led by the University of Maryland. The number of AI-generated articles on the web surpassed human-written ones in late 2024, according to research and marketing agency Graphite." Some executives have made full-throated declarations about the threat posed by AI. New York Times publisher A.G. Sulzberger said AI "is almost certainly going to usher in an unprecedented torrent of crap," referencing deepfakes as an example. The NewsGuild of New York, the union representing Fortune employees and journalists at other media outlets, said that people are what make journalism so powerful. "You simply can't replicate lived experiences, human judgment and expertise," said president Susan DeCarava. For Chris Quinn, the editor of local publications Cleveland.com and the Plain Dealer, AI tools have helped tame other torrents facing the industry. AI has allowed the outlets to cover counties in Ohio that might otherwise go ignored, by scraping information from local websites and sending "tips" to reporters, he said. It has also edited stories and written first drafts so the newsrooms' journalists can focus on the calls, research and reporting needed for their stories... Newsrooms from the New York Times to The Wall Street Journal are deploying AI in various ways to help reporters and editors work more efficiently... Not all newsrooms disclose their use of AI, and some have rolled out new tools that resulted in errors or PR gaffes.
An October study from the European Broadcasting Union and the BBC, which relied on professional journalists to evaluate the news integrity of more than 3,000 AI responses, found that almost half of all AI responses had at least one significant issue. Last week the New York Times even issued a correction when a freelance book reviewer using an AI tool unknowingly included "language and details similar to those in a review of the same book published in The Guardian." But it was actually "the second time in a few days that the Times was called out for potential AI plagiarism," according to the American journalist writing The Handbasket newsletter. We must stem the idea being pushed by tech companies and their billionaire funders who've sunk too much into their products to admit defeat that the infiltration of AI into journalism is inevitable; because from my perch as an independent journalist, it simply is not... Some AI-loving journalists appear to believe that if they're clear enough with the AI program they're using, it will truly understand what they're seeking and not just do what it's made to do: steal shit... If you want to work with machines, get a job that requires it. There are a whole lot more of those than there are writing jobs, so free up space for people who actually want to do the work. You're not doing the world a favor by gifting it your human/AI hybrid. Journalism will not miss you if you leave... But meanwhile, USA Today recently tried hiring for a new position: AI-Assisted reporter. (The lucky reporter will "support the launch and scaling of AI-assisted local journalism in a major U.S. metro," working with tools including Copilot and Perplexity, pioneering possible future expansions and "AI-enabled newsroom operations that support and augment human-led journalism.") And Google is already sponsoring a "publishing innovation award"...

Read more of this story at Slashdot.

Top NPM Maintainers Targeted with AI Deepfakes in Massive Supply-Chain Attack, Axios Briefly Compromised

April 5, 2026 at 03:34
"Hackers briefly turned a widely trusted developer tool into a vehicle for credential-stealing malware that could give attackers ongoing access to infected systems," the news site Axios.com reported Tuesday, citing security researchers at Google. The compromised package — also named axios — simplifies HTTP requests, and reportedly receives millions of downloads each day: The malicious versions were removed within roughly three hours of being published, but Google warned the incident could have "far-reaching impacts" given the package's widespread use, according to John Hultquist, chief analyst at Google Threat Intelligence Group. Wiz estimates Axios is downloaded roughly 100 million times per week and is present in about 80% of cloud and code environments. So far, Wiz has observed the malicious versions in roughly 3% of the environments it has scanned. Friday PCMag notes the maintainer's compromised account had two-factor authentication enabled, with the breach ultimately traced "to an elaborate AI deepfake from suspected North Korean hackers that was convincing enough to trick a developer into installing malware," according to a post-mortem published Thursday by lead developer Jason Saayman: [Saayman] fell for a scheme from a North Korean hacking group, dubbed UNC1069, which involves sending out phishing messages and then hosting virtual meetings that use AI deepfakes to clone the face and voices of real executives. The virtual meetings will then create the impression of an audio problem, which can only be "solved" if the victim installs some software or runs a troubleshooting command. In reality, it's an effort to execute malware. The North Koreans have been using the tactic repeatedly, whether it be to phish cryptocurrency firms or to secure jobs from IT companies. Saayman said he faced a similar playbook. "They reached out masquerading as the founder of a company, they had cloned the company's founders likeness as well as the company itself," he wrote. "They then invited me to a real Slack workspace. This workspace was branded... The Slack was thought out very well, they had channels where they were sharing LinkedIn posts. The LinkedIn posts I presume just went to the real company's account, but it was super convincing etc." The hackers then invited him to a virtual meeting on Microsoft Teams. "The meeting had what seemed to be a group of people that were involved. The meeting said something on my system was out of date. I installed the missing item as I presumed it was something to do with Teams, and this was the remote access Trojan," he added. "Everything was extremely well coordinated, looked legit and was done in a professional manner." Friday developer security platform Socket wrote that several more maintainers in the Node.js ecosystem "have come out of the woodwork to report that they were targeted by the same social engineering campaign." The accounts now span some of the most widely depended-upon packages in the npm registry and Node.js core itself, and together they confirm that axios was not a one-off target. It was part of a coordinated, scalable attack pattern aimed at high-trust, high-impact open source maintainers. Attackers also targeted several Socket engineers, including CEO Feross Aboukhadijeh. Feross is the creator of WebTorrent, StandardJS, buffer, and dozens of widely used npm packages with billions of downloads... Commenting on the axios post-mortem thread, he noted that this type of targeting [against individual maintainers] is no longer unusual... 
"We're seeing them across the ecosystem and they're only accelerating." Jordan Harband, John-David Dalton, and other Socket engineers also confirmed they were targeted. Harband, a TC39 member, maintains hundreds of ECMAScript polyfills and shims that are foundational to the JavaScript ecosystem. Dalton is the creator of Lodash, which sees more than 137 million weekly downloads on npm. Between them, the packages they maintain are downloaded billions of times each month. Wes Todd, an Express TC member and member of the Node Package Maintenance Working Group, also confirmed he was targeted. Matteo Collina, co-founder and CTO of Platformatic, Node.js Technical Steering Committee Chair, and lead maintainer of Fastify, Pino, and Undici, disclosed on April 2 that he was also targeted. His packages also see billion downloads per year... Scott Motte, creator of dotenv, the package used by virtually every Node.js project that handles environment variables, with more than 114 million weekly downloads, also confirmed he was targeted using the same Openfort persona. Socket reports that another maintainer was targetted with an invitation to appear on a podcast. (During the recording a suspicious technical issue appeared which required a software fix to resolve....) Even just technical implementation, "This is among the most operationally sophisticated supply chain attacks ever documented against a top-10 npm package," the CI/CD security company StepSecurity wrote Tuesday The dropper contacts a live command-and-control server, delivers separate second-stage payloads for macOS, Windows, and Linux, then erases itself and replaces its own package.json with a clean decoy... Three payloads were pre-built for three operating systems. Both release branches were poisoned within 39 minutes of each other. Every artifact was designed to self-destruct. Within two seconds of npm install, the malware was already calling home to the attacker's server before npm had even finished resolving dependencies... Both versions were published using the compromised npm credentials of a lead axios maintainer, bypassing the project's normal GitHub Actions CI/CD pipeline. "As preventive steps, Saayman has now outlined several changes," reports The Hacker News, "including resetting all devices and credentials, setting up immutable releases, adopting OIDC flow for publishing, and updating GitHub Actions to adopt best practices." The Wall Street Journal called it "the latest in a string of incidents exposing risks in the systems that underpin how modern software is built."

Read more of this story at Slashdot.

Anthropic Announces Claude Subscribers Must Now Pay Extra to Use OpenClaw

April 4, 2026 at 19:34
Anthropic's making a big and sudden change — and connecting its Claude AI to third-party agentic tools "is about to get a lot more expensive," writes the Verge: Beginning April 4th at 3PM ET, users will "no longer be able to use your Claude subscription limits for third-party harnesses including OpenClaw," according to an email sent to users on Friday evening. Instead, if users want to use OpenClaw with Claude, they'll have to use a "pay-as-you-go option" that will be billed separately from their Claude subscription. Anthropic's announcement added these extra usage bundles are "now available at a discount." Users can also try Anthropic's API, notes VentureBeat, "which charges for every token of usage rather than allowing for open-ended usage up to certain limits, as the Pro and Max plans have allowed so far." The technical reality, according to Anthropic, is that its first-party tools like Claude Code, its AI vibe coding harness, and Claude Cowork, its business app interfacing and control tool, are built to maximize "prompt cache hit rates" — reusing previously processed text to save on compute. Third-party harnesses like OpenClaw often bypass these efficiencies... [Claude Code creator Boris Cherny explained on X that "I did put up a few PRs to improve prompt cache hit rate for OpenClaw in particular, which should help for folks using it with Claude via API/overages."] Growth marketer Aakash Gupta observed on X that the "all-you-can-eat buffet just closed," noting that a single OpenClaw agent running for one day could burn $1,000 to $5,000 in API costs. "Anthropic was eating that difference on every user who routed through a third-party harness," Gupta wrote. "That's the pace of a company watching its margin evaporate in real time." However, Peter Steinberger, the creator of OpenClaw who was recently hired by OpenAI, took a more skeptical view of the "capacity" argument. "Funny how timings match up," Steinberger posted on X. "First they copy some popular features into their closed harness, then they lock out open source." Indeed, Anthropic recently added some of the same capabilities that helped OpenClaw catch on — such as the ability to message agents through external services like Discord and Telegram — to Claude Code... User @ashen_one, founder of Telaga Charity, voiced a concern likely shared by other small-scale builders: "If I switch both [OpenClaw instances] to an API key or the extra usage you're recommending here, it's going to be far too expensive to make it worth using. I'll probably have to switch over to a different model at this point." "I know it sucks," Cherny replied. "Fundamentally engineering is about tradeoffs, and one of the things we do to serve a lot of customers is optimize the way subscriptions work to serve as many people as possible with the best mode..." OpenAI appears to be positioning itself as a more "harness-friendly" alternative, potentially using this moment as a customer acquisition channel for disgruntled Claude power users. By restricting subscription limits to their own "closed harness," Anthropic is asserting control over the UI/UX layer. This allows them to collect telemetry and manage rate limits more granularly, but it risks alienating the power-user community that built the "agentic" ecosystem in the first place. Anthropic's decision is a cold calculation of margins versus growth. As Cherny noted, "Capacity is a resource we manage thoughtfully." In the 2026 AI landscape, the era of subsidized, unlimited compute for third-party automation is over.
For the average user on Claude.ai, the experience remains unchanged; for the power users running autonomous offices, the bell has tolled.
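The "prompt cache hit rate" at the center of Anthropic's explanation refers to reusing the processed form of a large, stable prompt prefix across calls instead of re-ingesting it every turn. A rough sketch of how a harness opts into that with the Anthropic Messages API; the field names follow the public docs as best understood and should be verified, and the model id is a placeholder:

```typescript
// Sketch: mark a large, stable system prefix as cacheable so repeated
// agent turns hit the prompt cache instead of reprocessing the prefix.
// Verify field names against current Anthropic API docs.
const SYSTEM_PREFIX =
  "You are a coding agent. [tool definitions, project context...]";

const res = await fetch("https://api.anthropic.com/v1/messages", {
  method: "POST",
  headers: {
    "content-type": "application/json",
    "x-api-key": process.env.ANTHROPIC_API_KEY ?? "",
    "anthropic-version": "2023-06-01",
  },
  body: JSON.stringify({
    model: "claude-sonnet-example", // placeholder model id
    max_tokens: 1024,
    system: [
      {
        type: "text",
        text: SYSTEM_PREFIX,
        cache_control: { type: "ephemeral" }, // request caching of this prefix
      },
    ],
    messages: [{ role: "user", content: "Run the failing test and summarize." }],
  }),
});
console.log(await res.json());
```

A harness that rewrites this prefix on every call, as Anthropic says third-party tools often do, gets no cache hits and pays full input-token cost on every turn.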

Read more of this story at Slashdot.

Anthropic Bans Using OpenClaw With Claude: "Our Subscriptions Were Not Designed for These Third-Party Tools"

April 4, 2026 at 07:42

A victim of its own success, Anthropic no longer has the server capacity to properly serve free users, paying subscribers, and third-party services like OpenClaw, which many people associate with Claude. The company has announced that it is no longer possible to link a Claude subscription to OpenClaw: users must go through the API and pay per token.

'AI' Is Coming For Your Online Gaming Servers Next

By: BeauHD
April 4, 2026 at 03:30
"Consumer PC parts aren't the only things being gobbled up by the 'AI' industry," writes PCWorld's Michael Crider. "A Starcraft-inspired strategy game is shutting down its multiplayer servers because the hosting company got bought out for 'AI.'" The game will still be playable offline for now, but the shutdown highlights the ripple effects of the AI boom on the gaming industry. Amid the ongoing hardware shortages, AI companies are basically gobbling up as much infrastructure as they can to repurpose it for AI workloads. From the report: The game in question is Stormgate, a crowdfunded revival of the real-time strategy genre that has languished in the last decade or so. The developer Frost Giant Studios told its players on Discord (spotted by PC Gamer) that it would be unable to continue multiplayer access past the end of this month. The "game server orchestration partner" was bought by an AI company -- the developer's words, not mine -- which means that the multiplayer aspects of the game will have a "planned outage." The devs say the game will be patched for offline play, presumably including its single-player campaign mode and co-op modes, but "online modes will not be available at that point." They're hoping to bring back online play in a later update, but that'll depend on "finding a partner to support ongoing operations." That sounds like old-fashioned player-hosted games with lobbies aren't in the cards, at least not yet. Frost Giant's server provider is Hathora, which was bought by a company called Fireworks AI last month. Fireworks describes its offerings as "open-source AI models at blazing speed, optimized for your use case, scaled globally with the Fireworks Inference Cloud." So, yeah, Hathora's infrastructure will likely be used for yet more generative "AI." And according to GamesBeat, it's planning to shut down the game service aspect of its company completely. That means Stormgate probably isn't going to be the last game affected. Hathora also provides online services for Splitgate 2, among others. I'm contacting Hathora for comment and will update this story if I receive a response.

Read more of this story at Slashdot.

And Now, OpenAI Is Buying a Podcast

April 3, 2026 at 14:38

Amid a restructuring to focus on ChatGPT, OpenAI has dropped several side projects, such as Sora video generation and hardware development. Yet, to everyone's surprise, Sam Altman's company has announced the acquisition of TPBN, currently the most popular tech talk show in the United States.

Google Announces Gemma 4 Open AI Models, Switches To Apache 2.0 License

By: BeauHD
April 2, 2026 at 18:00
An anonymous reader quotes a report from Ars Technica: Google's Gemini AI models have improved by leaps and bounds over the past year, but you can only use Gemini on Google's terms. The company's Gemma open-weight models have provided more freedom, but Gemma 3, which launched over a year ago, is getting a bit long in the tooth. Starting today, developers can begin working with Gemma 4, which comes in four sizes optimized for local usage. Google has also acknowledged developer frustrations with AI licensing, so it's dumping the custom Gemma license. Like past versions of its open-weight models, Google has designed Gemma 4 to be usable on local machines. That can mean plenty of things, of course. The two large Gemma variants, 26B Mixture of Experts and 31B Dense, are designed to run unquantized in bfloat16 format on a single 80GB Nvidia H100 GPU. Granted, that's a $20,000 AI accelerator, but it's still local hardware. If quantized to run at lower precision, these big models will fit on consumer GPUs. Google also claims it has focused on reducing latency to really take advantage of Gemma's local processing. The 26B Mixture of Experts model activates only 3.8 billion of its 26 billion parameters during inference, giving it much higher tokens-per-second throughput than similarly sized dense models. Meanwhile, 31B Dense is more about quality than speed, but Google expects developers to fine-tune it for specific uses. The other two Gemma 4 models, Effective 2B (E2B) and Effective 4B (E4B), are aimed at mobile devices. These options were designed to maintain low memory usage during inference, running at an effective 2 billion or 4 billion parameters. Google says the Pixel team worked closely with Qualcomm and MediaTek to optimize these models for devices like smartphones, Raspberry Pi, and Jetson Nano. Not only do they use less memory and battery than Gemma 3, but Google also touts "near-zero latency" this time around. The Apache 2.0 license is far more permissive about commercial use, "granting you complete control over your data, infrastructure, and models," says Google. Clement Delangue, co-founder and CEO of Hugging Face, called it "a huge milestone" that will help developers use Gemma for more projects and expand what Google calls the "Gemmaverse."
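The "fits on consumer GPUs when quantized" claim is easy to sanity-check, since weight memory scales linearly with bits per parameter. A back-of-the-envelope sketch (this ignores activation memory and KV cache, which add real overhead):

```typescript
// Rough VRAM needed just for model weights, by precision.
function weightMemoryGB(params: number, bitsPerParam: number): number {
  return (params * bitsPerParam) / 8 / 1e9;
}

const GEMMA4_DENSE_PARAMS = 31e9; // the 31B Dense variant
console.log(weightMemoryGB(GEMMA4_DENSE_PARAMS, 16).toFixed(1)); // bf16: ~62 GB
console.log(weightMemoryGB(GEMMA4_DENSE_PARAMS, 4).toFixed(1)); // 4-bit: ~15.5 GB
```

The bf16 figure is consistent with the single 80GB H100 claim; at 4-bit precision the same weights drop to roughly 15.5 GB, within reach of 16 to 24 GB consumer cards.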

Read more of this story at Slashdot.

Group Pushing Age Verification Requirements For AI Sneakily Backed By OpenAI

By: BeauHD
April 2, 2026 at 11:00
An anonymous reader quotes a report from Gizmodo: OpenAI hasn't been shy about spending money lobbying for favorable laws and regulations. But when it comes to its involvement with child safety advocacy groups, the company has apparently decided it's best to stay in the shadows -- even if it means hiding from the people actually pushing for policy changes. According to a report from the San Francisco Standard, a number of people involved in the California-based Parents and Kids Safe AI Coalition were blindsided to learn their efforts were secretly being funded by OpenAI. Per the Standard, the Parents and Kids Safe AI Coalition was a group formed to push the Parents and Kids Safe AI Act, a piece of California legislation proposed earlier this year that would require AI firms to implement age verification and additional safeguards for users under the age of 18. That bill was backed by OpenAI in partnership with Common Sense Media, which proposed the legislation as a compromise after the two groups had pushed dueling ballot initiatives last year. But when the coalition started to reach out to child safety groups and other advocacy organizations to try to get them to lend support to the bill, OpenAI was apparently conveniently left off the messaging. The AI giant was also left out of the marketing on the coalition's website, according to the Standard. That reportedly led to a number of groups and individuals lending their support to the Parents and Kids Safe AI Coalition without realizing that they were aligning themselves with OpenAI. As it turns out, OpenAI isn't just one of the members of the coalition; it is the group's biggest funder. In fact, the Standard characterized the Parents and Kids Safe AI Coalition as being "entirely funded" by OpenAI. While it's not clear exactly how much the company has funneled to this particular group, a Wall Street Journal report from January said OpenAI pledged $10 million to push the Parents and Kids Safe AI Act. Gizmodo notes that OpenAI's backing of the Parents and Kids Safe AI Act "could be self-serving for CEO Sam Altman," who just so happens to head a company called World that provides age verification services.

Read more of this story at Slashdot.

Anthropic Issues Copyright Takedown Requests To Remove 8,000+ Copies of Claude Code Source Code

By: BeauHD
April 1, 2026 at 17:00
Anthropic is using copyright takedown notices to try to contain an accidental leak of the underlying instructions for its Claude Code AI agent. According to the Wall Street Journal, "Anthropic representatives had used a copyright takedown request to force the removal of more than 8,000 copies and adaptations of the raw Claude Code instructions ... that developers had shared on programming platform GitHub." From the report: Programmers combing through the source code so far have marveled on social media at some of Anthropic's tricks for getting its Claude AI models to operate as Claude Code. One feature asks the models to go back periodically through tasks and consolidate their memories -- a process it calls dreaming. Another appears to instruct Claude Code in some cases to go "undercover" and not reveal that it is an AI when publishing code to platforms like GitHub. Others found tags in the code that appeared pointed at future product releases. The code even included a Tamagotchi-style pet called "Buddy" that users could interact with. After Anthropic requested that GitHub remove copies of its proprietary code, another programmer used other AI tools to rewrite the Claude Code functionality in other programming languages. Writing on GitHub, the programmer said the effort was aimed at keeping the information available without risking a takedown. That new version has itself become popular on the programming platform.

Read more of this story at Slashdot.

CEO of America's Largest Public Hospital System Says He's Ready To Replace Radiologists With AI

By: BeauHD
April 1, 2026 at 16:00
Mitchell H. Katz, MD, president and CEO of NYC Health + Hospitals, said hospitals could already replace many radiologists with AI for some imaging tasks -- if regulators allowed it. He argued the technology presents an opportunity to simultaneously cut costs and expand access. Radiology Business reports: Katz -- who has led the 11-hospital organization since 2018 -- said he sees great potential for AI to increase access to breast cancer screening. Hospitals could potentially produce "major savings" by letting the technology handle first reads, with radiologists then double-checking any abnormal screenings. Fellow panelist David Lubarsky, MD, MBA, president and CEO of the Westchester Medical Center Health Network, said his system is already seeing great success in deploying such technology. The AI that Westchester uses misses very few breast cancers and is "actually better than human beings," he told the audience. "For women who aren't considered high risk, if the test comes back negative, it's wrong only about 3 times out of 10,000," Lubarsky said. Katz asked fellow hospital CEOs if there is any reason why they shouldn't be pushing for changes to New York state regulations to allow AI to read images "without a radiologist," Crain's reported. In this scenario, radiologists could then provide second opinions if the AI flags any images as abnormal. Sandra Scott, MD, CEO of One Brooklyn Health, a small hospital system facing tight margins, agreed with this line of thinking, according to Crain's. "I mean, I'm in charge of a safety-net institution. It would be a game-changer," Scott said about AI being used to replace radiologists.
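For context, Lubarsky's "3 times out of 10,000" describes how often a negative screen is wrong, and at screening-level prevalence such a number is plausible even for an imperfect model, since negatives are overwhelmingly true negatives. A quick sketch with hypothetical inputs (none of these numbers come from the article):

```typescript
// Probability a negative screen is wrong = P(cancer | negative result).
// All inputs here are hypothetical, for illustration only.
function falseNegativeRateAmongNegatives(
  prevalence: number, // P(cancer) in the screened population
  sensitivity: number, // P(positive | cancer)
  specificity: number // P(negative | no cancer)
): number {
  const missed = prevalence * (1 - sensitivity); // cancers screened negative
  const trueNeg = (1 - prevalence) * specificity; // healthy screened negative
  return missed / (missed + trueNeg);
}

// e.g. 0.5% prevalence, 94% sensitivity, 90% specificity
console.log(falseNegativeRateAmongNegatives(0.005, 0.94, 0.9)); // ~0.00033, i.e. ~3 in 10,000
```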

Read more of this story at Slashdot.

Forum InCyber 2026: Why Blocking AI in the Enterprise Is a Strategic Mistake

April 1, 2026 at 07:07

At the Forum InCyber, Numerama set out to dig deeper into the discussions around the growing threats facing companies. Among them is "Shadow AI," which raises two central questions: what strategy should companies adopt, and who is liable in the event of a leak of internal data?

All AIs Fail This Humanity Test

March 31, 2026 at 09:27

On March 27, 2026, a new version of the ARC-AGI benchmark was made public. Dubbed ARC-AGI-3, the test evaluates so-called "agentic" AI systems, which can act and learn in interactive environments. Despite their impressive performance elsewhere, the best models still largely fail it.

AI Data Centers Can Warm Surrounding Areas By Up To 9.1C

By: BeauHD
March 31, 2026 at 03:30
An anonymous reader quotes a report from New Scientist: Andrea Marinoni at the University of Cambridge, UK, and his colleagues saw that the amount of energy needed to run a data center had been steadily increasing of late and was likely to "explode" in the coming years, so they wanted to quantify the impact. The researchers took satellite measurements of land surface temperatures over the past 20 years and cross-referenced them against the geographical coordinates of more than 8,400 AI data centers. Recognizing that surface temperature could be affected by other factors, the researchers chose to focus their investigation on data centers located away from densely populated areas. They discovered that land surface temperatures increased by an average of 2C (3.6F) in the months after an AI data center started operations. In the most extreme cases, the increase in temperature was 9.1C (16.4F). The effect wasn't limited to the immediate surroundings of the data centers: the team found increased temperatures up to 10 kilometers away. Even 7 kilometers out, the intensity of the effect had dropped by only 30 percent. "The results we had were quite surprising," says Marinoni. "This could become a huge problem." Using population data, the researchers estimate that more than 340 million people live within 10 kilometers of a data center, and so live in a place that is warmer than it would be if the data center hadn't been built there. Marinoni says that areas including the Bajio region in Mexico and the Aragon province in Spain saw a 2C (3.6F) temperature increase between 2004 and 2024 that couldn't otherwise be explained. University of Bristol researcher Chris Preist said the findings may be more complicated than they look. "It would be worth doing follow-up research to understand to what extent it's the heat generated from computation versus the heat generated from the building itself," he says. For example, the building being heated by sunlight may be part of the effect. The findings of the study, which has not yet been peer-reviewed, can be found on arXiv.
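The study's core measurement reduces to a before/after comparison of land surface temperature near each site. A simplified sketch of that computation; the data layout and the flat-earth distance approximation are assumptions for illustration, not the paper's actual pipeline:

```typescript
// Simplified before/after land-surface-temperature (LST) comparison
// around a data center site. Types and inputs are assumptions; the
// actual study (on arXiv, not yet peer-reviewed) controls for more factors.
interface LstSample {
  lat: number;
  lon: number;
  dateMs: number; // observation time, Unix ms
  tempC: number; // land surface temperature
}

type Site = { lat: number; lon: number };

function meanTempNear(
  samples: LstSample[],
  site: Site,
  radiusKm: number,
  fromMs: number,
  toMs: number
): number {
  const kmPerDegLat = 111; // flat-earth approximation, fine at 10 km scale
  const inWindow = samples.filter((s) => {
    const dLat = (s.lat - site.lat) * kmPerDegLat;
    const dLon =
      (s.lon - site.lon) * kmPerDegLat * Math.cos((site.lat * Math.PI) / 180);
    return (
      Math.hypot(dLat, dLon) <= radiusKm && s.dateMs >= fromMs && s.dateMs < toMs
    );
  });
  return inWindow.reduce((sum, s) => sum + s.tempC, 0) / inWindow.length;
}

// Warming signal: mean LST in the year after operations start,
// minus the mean in the year before, within a 10 km radius.
function warmingDeltaC(samples: LstSample[], site: Site, startMs: number): number {
  const yearMs = 365 * 24 * 3600 * 1000;
  return (
    meanTempNear(samples, site, 10, startMs, startMs + yearMs) -
    meanTempNear(samples, site, 10, startMs - yearMs, startMs)
  );
}
```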

Read more of this story at Slashdot.
