Reading view

AI's Water and Electricity Use Soars In 2025

A new study estimates that AI systems in 2025 generated roughly as much carbon pollution as New York City and used hundreds of billions of liters of water, driven largely by power-hungry data centers and cooling needs. Researchers say the real impact is likely higher due to poor transparency from tech companies about AI-specific energy and water use. "There's no way to put an extremely accurate number on this, but it's going to be really big regardless... In the end, everyone is paying the price for this," says Alex de Vries-Gao, a PhD candidate at the VU Amsterdam Institute for Environmental Studies who published his paper today in the journal Patterns. The Verge reports: To crunch these numbers, de Vries-Gao built on earlier research that found that power demand for AI globally could reach 23 GW this year -- surpassing the amount of electricity used for Bitcoin mining in 2024. While many tech companies divulge total numbers for their carbon emissions and direct water use in annual sustainability reports, they don't typically break those numbers down to show how many resources AI consumes. De Vries-Gao found a work-around by using analyst estimates, companies' earnings calls, and other publicly available information to gauge hardware production for AI and how much energy that hardware likely uses. Once he figured out how much electricity these AI systems would likely consume, he could use that to forecast the amount of planet-heating pollution generating it would likely create. That came out to between 32.6 and 79.7 million tons annually. For comparison, New York City emits around 50 million tons of carbon dioxide annually. Data centers can also be big water guzzlers, an issue that's similarly tied to their electricity use. Water is used in cooling systems for data centers to keep servers from overheating.
Power plants also demand significant amounts of water to cool equipment and turn turbines using steam, which makes up a majority of a data center's water footprint. The push to build new data centers for generative AI has also fueled plans to build more power plants, which in turn use more water (and create more greenhouse gas pollution if they burn fossil fuels). AI could use between 312.5 and 764.6 billion liters of water this year, according to de Vries-Gao. That exceeds even a previous study, conducted in 2023, which estimated that AI water use could reach 600 billion liters in 2027. "I think that's the biggest surprise," says Shaolei Ren, one of the authors of that 2023 study and an associate professor of electrical and computer engineering at the University of California, Riverside. "[de Vries-Gao's] paper is really timely... especially as we are seeing increasingly polarized views about AI and water," Ren adds. Even with the higher projection for water use, Ren says de Vries-Gao's analysis is "really conservative" because it only captures the environmental effects of operating AI equipment -- excluding the additional effects that accumulate along the supply chain and at the end of a device's life.
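The conversion from power demand to emissions is simple arithmetic, and a back-of-envelope check shows the reported range is consistent with 23 GW of continuous draw. This is a minimal sketch; the 0.16 and 0.40 kg CO2/kWh grid carbon intensities are assumptions chosen to bracket the reported figures, not numbers from the paper:

```python
# Back-of-envelope check of the reported emissions range, assuming
# AI power demand runs continuously at the projected 23 GW.
HOURS_PER_YEAR = 24 * 365  # 8760

def annual_energy_twh(power_gw: float) -> float:
    """Continuous draw in GW -> energy in TWh per year."""
    return power_gw * HOURS_PER_YEAR / 1000.0

def annual_emissions_mt(energy_twh: float, kg_co2_per_kwh: float) -> float:
    """TWh/year and grid carbon intensity -> megatons of CO2 per year."""
    # 1 TWh = 1e9 kWh and 1 Mt = 1e9 kg, so the conversion factors cancel.
    return energy_twh * kg_co2_per_kwh

energy = annual_energy_twh(23)            # ~201.5 TWh
low = annual_emissions_mt(energy, 0.16)   # ~32 Mt: cleaner grid mix
high = annual_emissions_mt(energy, 0.40)  # ~81 Mt: fossil-heavy grid mix
print(f"{energy:.1f} TWh -> {low:.1f}-{high:.1f} Mt CO2/yr")
```

The result lands close to the paper's 32.6-79.7 million-ton span, which suggests the study's range largely reflects uncertainty about which power grids end up supplying the data centers.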

Read more of this story at Slashdot.

  •  

Anthropic's AI Lost Hundreds of Dollars Running a Vending Machine After Being Talked Into Giving Everything Away

Anthropic let its Claude AI run a vending machine in the Wall Street Journal newsroom for three weeks as part of an internal stress test called Project Vend, and the experiment ended in financial ruin after journalists systematically manipulated the bot into giving away its entire inventory for free. The AI, nicknamed Claudius, was programmed to order inventory, set prices, and respond to customer requests via Slack. It had a $1,000 starting balance and autonomy to make individual purchases up to $80. Within days, WSJ reporters had convinced it to declare an "Ultra-Capitalist Free-for-All" that dropped all prices to zero. The bot also approved purchases of a PlayStation 5, a live betta fish, and bottles of Manischewitz wine -- all subsequently given away. The business ended more than $1,000 in the red. Anthropic introduced a second version featuring a separate "CEO" bot named Seymour Cash to supervise Claudius. Reporters staged a fake boardroom coup using fabricated PDF documents, and both AI agents accepted the forged corporate governance materials as legitimate. Logan Graham, head of Anthropic's Frontier Red Team, said the chaos represented a road map for improvement rather than failure.

Read more of this story at Slashdot.

  •  

OpenAI Has Discussed Raising Tens of Billions at About $750 Billion Valuation

An anonymous reader shares a report: OpenAI has held preliminary talks with some investors about raising funds at a valuation of around $750 billion, the Information reported on Wednesday. The ChatGPT maker could raise as much as $100 billion, the report said, citing people with knowledge of the discussions. If finalized, the talks would represent a roughly 50% jump from OpenAI's reported $500 billion valuation in October, following a deal in which current and former employees sold about $6.6 billion worth of shares.

Read more of this story at Slashdot.

  •  

Google Releases Gemini 3 Flash, Promising Improved Intelligence and Efficiency

An anonymous reader quotes a report from Ars Technica: Google began its transition to Gemini 3 a few weeks ago with the launch of the Pro model, and the arrival of Gemini 3 Flash kicks it into high gear. The new, faster Gemini 3 model is coming to the Gemini app and search, and developers will be able to access it immediately via the Gemini API, Vertex AI, AI Studio, and Antigravity. Google's bigger gen AI model is also picking up steam, with both Gemini 3 Pro and its image component (Nano Banana Pro) expanding in search. This may come as a shock, but Google says Gemini 3 Flash is faster and more capable than its previous base model. As usual, Google has a raft of benchmark numbers that show modest improvements for the new model. It bests the old 2.5 Flash in basic academic and reasoning tests like GPQA Diamond and MMMU Pro (where it even beats 3 Pro). It gets a larger boost in Humanity's Last Exam (HLE), which tests advanced domain-specific knowledge. Gemini 3 Flash has tripled the old model's score in HLE, landing at 33.7 percent without tool use. That's just a few points behind the Gemini 3 Pro model. Gemini 3 Flash has been significantly improved in terms of factual accuracy, scoring 68.7% on Simple QA Verified, up from 28.1% for the previous model. It's also designed as a high-efficiency model that's suitable for real-time and high-volume workloads. According to Google, Gemini 3 Flash is now the default model for AI Mode in Google Search.

Read more of this story at Slashdot.

  •  

OpenAI in Talks With Amazon About Investment That Could Exceed $10 Billion

OpenAI is in discussions with Amazon about a potential investment and an agreement to use its AI chips, CNBC confirmed on Tuesday. From the report: The details are fluid and still subject to change but the investment could exceed $10 billion, according to a person familiar with the matter who asked not to be named because the talks are confidential. The discussions come after OpenAI completed a restructuring in October and formally outlined the details of its partnership with Microsoft, giving it more freedom to raise capital and partner with companies across the broader AI ecosystem. Microsoft has invested more than $13 billion in OpenAI and backed the company since 2019, but it no longer has a right of first refusal to be OpenAI's compute provider, according to an October release. OpenAI can now also develop some products with third parties. Amazon has invested at least $8 billion into OpenAI rival Anthropic, but the e-commerce giant could be looking to expand its exposure to the booming generative AI market. Microsoft has taken a similar step and announced last month that it will invest up to $5 billion into Anthropic, while Nvidia will invest up to $10 billion in the startup.

Read more of this story at Slashdot.

  •  

How OpenAI and Pornhub Ended Up Entangled in the Same Cyberattack

Since December 15, 2025, Pornhub has been the target of extortion by the hacker group ShinyHunters. Their loot? The search and viewing histories of the adult site's Premium members. The breach reportedly originated not in Pornhub's own IT systems but in those of a third-party vendor, already responsible for a similar attack that targeted OpenAI a few weeks earlier.

  •  

A Versatile Electric Car for Under €200 a Month? See How the Hyundai KONA Is Revitalizing Mobility [Sponsored]

Hyundai

Dispelling drivers' last doubts about electric mobility: that is Hyundai's credo, and the KONA is its strongest card for winning them over.


This content was created by independent writers within the Humanoid xp entity. Numerama's editorial team was not involved in its creation. We are committed to ensuring that such content is interesting, high-quality, and relevant to our readers' interests.

Learn more

  •  

Dual-PCB Linux Computer With 843 Components Designed By AI Boots On First Attempt

Quilter says its AI designed a complex Linux single-board computer in just one week, booting Debian on first power-up. "Holy crap, it's working," exclaimed one of the engineers. Tom's Hardware reports: LA-based startup Quilter has outlined Project Speedrun, which marks a milestone in computer design by AI. The headline claims are that Quilter's AI facilitated the design of a new Linux SBC, using 843 parts across two PCBs, taking just one week to finish, then successfully booting Debian the first time it was powered up. The Quilter team reckons that the AI-enhanced process it demonstrated could unlock a new generation of computer hardware makers.

Read more of this story at Slashdot.

  •  

OpenAI Played It Well: You Won't See Any Disney Characters on Sora's AI Rivals

Thanks to a partnership signed on December 11, 2025, OpenAI has secured a year of exclusivity on more than 200 Disney, Pixar, Marvel, and Star Wars characters for Sora. It is a decisive advantage that temporarily deprives other AI players of any legal use of the entertainment giant's works.

  •  

Are Warnings of Superintelligence 'Inevitability' Masking a Grab for Power?

Superintelligence has become "a quasi-political forecast" with "very little to do with any scientific consensus, emerging instead from particular corridors of power." That's the warning from James O'Sullivan, a lecturer in digital humanities from University College Cork. In a refreshing 5,600-word essay in Noema magazine, he notes the suspicious coincidence that "The loudest prophets of superintelligence are those building the very systems they warn against..." "When we accept that AGI is inevitable, we stop asking whether it should be built, and in the furor, we miss that we seem to have conceded that a small group of technologists should determine our future." (For example, OpenAI CEO Sam Altman "seems determined to position OpenAI as humanity's champion, bearing the terrible burden of creating God-like intelligence so that it might be restrained.") "The superintelligence discourse functions as a sophisticated apparatus of power, transforming immediate questions about corporate accountability, worker displacement, algorithmic bias and democratic governance into abstract philosophical puzzles about consciousness and control... Media amplification plays a crucial role in this process, as every incremental improvement in large language models gets framed as a step towards AGI. ChatGPT writes poetry; surely consciousness is imminent..." Such accounts, often sourced from the very companies building these systems, create a sense of momentum that becomes self-fulfilling. Investors invest because AGI seems near, researchers join companies because that's where the future is being built and governments defer regulation because they don't want to handicap their domestic champions... We must recognize this process as political, not technical. The inevitability of superintelligence is manufactured through specific choices about funding, attention and legitimacy, and different choices would produce different futures.
The fundamental question isn't whether AGI is coming, but who benefits from making us believe it is... We do not yet understand what kind of systems we are building, or what mix of breakthroughs and failures they will produce, and that uncertainty makes it reckless to funnel public money and attention into a single speculative trajectory. Some key points:

"The machines are coming for us, or so we're told. Not today, but soon enough that we must seemingly reorganize civilization around their arrival..."

"When we debate whether a future artificial general intelligence might eliminate humanity, we're not discussing the Amazon warehouse worker whose movements are dictated by algorithmic surveillance or the Palestinian whose neighborhood is targeted by automated weapons systems. These present realities dissolve into background noise against the rhetoric of existential risk..."

"Seen clearly, the prophecy of superintelligence is less a warning about machines than a strategy for power, and that strategy needs to be recognized for what it is..."

"Superintelligence discourse isn't spreading because experts broadly agree it is our most urgent problem; it spreads because a well-resourced movement has given it money and access to power..."

"Academic institutions, which are meant to resist such logics, have been conscripted into this manufacture of inevitability... reinforcing industry narratives, producing papers on AGI timelines and alignment strategies, lending scholarly authority to speculative fiction..."

"The prophecy becomes self-fulfilling through material concentration — as resources flow towards AGI development, alternative approaches to AI starve..."

"The dominance of superintelligence narratives obscures the fact that many other ways of doing AI exist, grounded in present social needs rather than hypothetical machine gods..."
[He lists data sovereignty movements "that treat data as a collective resource subject to collective consent," as well as organizations like Canada's First Nations Information Governance Centre and New Zealand's Te Mana Raraunga, plus "Global South initiatives that use modest, locally governed AI systems to support healthcare, agriculture or education under tight resource constraints."]

"Such examples... demonstrate how AI can be organized without defaulting to the superintelligence paradigm that demands everyone else be sacrificed because a few tech bros can see the greater good that everyone else has missed..."

"These alternatives also illuminate the democratic deficit at the heart of the superintelligence narrative. Treating AI at once as an arcane technical problem that ordinary people cannot understand and as an unquestionable engine of social progress allows authority to consolidate in the hands of those who own and build the systems..."

He's ultimately warning us about "politics masked as predictions..." "The real political question is not whether some artificial superintelligence will emerge, but who gets to decide what kinds of intelligence we build and sustain. And the answer cannot be left to the corporate prophets of artificial transcendence because the future of AI is a political field — it should be open to contestation. It belongs not to those who warn most loudly of gods or monsters, but to publics that should have the moral right to democratically govern the technologies that shape their lives."

Read more of this story at Slashdot.

  •  

CEOs Plan to Spend More on AI in 2026 - Despite Spotty Returns

The Wall Street Journal reports that 68% of CEOs "plan to spend even more on AI in 2026, according to an annual survey of more than 350 public-company CEOs from advisory firm Teneo." And yet "less than half of current AI projects had generated more in returns than they had cost, respondents said." They reported the most success using AI in marketing and customer service, and challenges using it in higher-risk areas such as security, legal and human resources. Teneo also surveyed about 400 institutional investors, of which 53% expect that AI initiatives will begin to deliver returns on investments within six months. That compares with the 84% of CEOs of large companies — those with revenue of $10 billion or more — who believe it will take more than six months. Surprisingly, 67% of CEOs believe AI will increase their entry-level head count, while 58% believe AI will increase senior leadership head count. All the surveyed CEOs were from public companies with revenue over $1 billion...

Read more of this story at Slashdot.

  •  

Podcast Industry Under Siege as AI Bots Flood Airwaves with Thousands of Programs

An anonymous reader shared this report from the Los Angeles Times: Popular podcast host Steven Bartlett has used an AI clone to launch a new kind of content aimed at the 13 million followers of his podcast "Diary of a CEO." On YouTube, his clone narrates "100 CEOs With Steven Bartlett," which adds AI-generated animation to Bartlett's cloned voice to tell the life stories of entrepreneurs such as Steve Jobs and Richard Branson. Erica Mandy, the Redondo Beach-based host of the daily news podcast called "The Newsworthy," let an AI voice fill in for her earlier this year after she lost her voice from laryngitis and her backup host bailed out... In podcasting, many listeners feel strong bonds to hosts they listen to regularly. The slow encroachment of AI voices for one-off episodes, canned ad reads, sentence replacement in postproduction or translation into multiple languages has sparked anger as well as curiosity from both creators and consumers of the content. Augmenting or replacing host reads with AI is perceived by many as a breach of trust and as trivializing the human connection listeners have with hosts, said Megan Lazovick, vice president of Edison Research, a podcast research company... Still, platforms such as YouTube and Spotify have introduced features for creators to clone their voice and translate their content into multiple languages to increase reach and revenue. A new generation of voice cloning companies, many with operations in California, offers better emotion, tone, pacing and overall voice quality... Some are using the tech to carpet-bomb the market with content. Los Angeles podcasting studio Inception Point AI has produced 200,000 podcast episodes, in some weeks accounting for 1% of all podcasts published that week on the internet, according to CEO Jeanine Wright. The podcasts are so cheap to make that they can focus on tiny topics, like local weather, small sports teams, gardening and other niche subjects.
Instead of a studio searching for a specific "hit" podcast idea, it takes just $1 to produce an episode, so episodes can be profitable with just 25 people listening... One of its popular synthetic hosts is Vivian Steele, an AI celebrity gossip columnist with a sassy voice and a sharp tongue... Inception Point has built a roster of more than 100 AI personalities whose characteristics, voices and likenesses are crafted for podcast audiences. Its AI hosts include Clare Delish, a cooking guidance expert, and garden enthusiast Nigel Thistledown... Across Apple and Spotify, Inception Point podcasts have now garnered 400,000 subscribers.
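The $1-per-episode economics above can be sanity-checked with a one-line breakeven calculation. The $40 CPM (dollars of ad revenue per 1,000 listens) used here is an assumed ballpark rate, not a figure from the article:

```python
# Breakeven sketch: how many listeners must an episode reach before
# ad revenue at a given CPM covers its production cost?
# CPM = dollars earned per 1,000 listens (the $40 below is an assumption).
def breakeven_listeners(cost_per_episode: float, cpm: float) -> float:
    """Listeners needed so that listeners * (cpm / 1000) >= cost."""
    return cost_per_episode / (cpm / 1000.0)

print(breakeven_listeners(1.00, 40.0))  # ~25 listeners at a $40 CPM
```

At an assumed $40 CPM, a $1 episode breaks even at roughly 25 listeners, which is consistent with the claim quoted in the story.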

Read more of this story at Slashdot.


  •  

Entry-Level Tech Workers Confront an AI-Fueled Jobpocalypse

AI "has gutted entry-level roles in the tech industry," reports Rest of World. One student at a high-ranking engineering college in India tells them that among his 400 classmates, "fewer than 25% have secured job offers... there's a sense of panic on the campus." Students at engineering colleges in India, China, Dubai, and Kenya are facing a "jobpocalypse" as artificial intelligence replaces humans in entry-level roles. Tasks once assigned to fresh graduates, such as debugging, testing, and routine software maintenance, are now increasingly automated. Over the last three years, the number of fresh graduates hired by big tech companies globally has declined by more than 50%, according to a report published by SignalFire, a San Francisco-based venture capital firm. Even though hiring rebounded slightly in 2024, only 7% of new hires were recent graduates. As many as 37% of managers said they'd rather use AI than hire a Gen Z employee... Indian IT services companies have reduced entry-level roles by 20%-25% due to automation and AI, consulting firm EY said in a report last month. Job platforms like LinkedIn, Indeed, and EURES noted a 35% decline in junior tech positions across major EU countries during 2024... "Five years ago, there was a real war for [coders and developers]. There was bidding to hire," and 90% of the hires were for off-the-shelf technical roles, or positions that utilize ready-made technology products rather than requiring in-house development, said Vahid Haghzare, director at IT hiring firm Silicon Valley Associates Recruitment in Dubai. Since the rise of AI, "it has dropped dramatically," he said. "I don't even think it's touching 5%. It's almost completely vanished." The company headhunts workers from multiple countries including China, Singapore, and the U.K... The current system, where a student commits three to five years to learn computer science and then looks for a job, is "not sustainable," Haghzare said.
Students are "falling down a hole, and they don't know how to get out of it."

Read more of this story at Slashdot.

  •  

Time Magazine's 'Person of the Year': the Architects of AI

Time magazine used its 98th annual "Person of the Year" cover to "recognize a force that has dominated the year's headlines, for better or for worse. For delivering the age of thinking machines, for wowing and worrying humanity, for transforming the present and transcending the possible, the Architects of AI are TIME's 2025 Person of the Year." One cover illustration shows eight AI executives sitting precariously on a beam high above the city, while Time's 6,700-word article promises "the story of how AI changed our world in 2025, in new and exciting and sometimes frightening ways. It is the story of how [Nvidia CEO] Huang and other tech titans grabbed the wheel of history, developing technology and making decisions that are reshaping the information landscape, the climate, and our livelihoods." Time describes them betting on "one of the biggest physical infrastructure projects of all time," mentioning all the usual worries — datacenters' energy consumption, chatbot psychosis, predictions of "wiping out huge numbers of jobs" and the possibility of an AI stock market bubble. (Although "The drumbeat of warning that advanced AI could kill us all has mostly quieted.") But it also notes AI's potential to jumpstart innovation (and economic productivity). This year, the debate about how to wield AI responsibly gave way to a sprint to deploy it as fast as possible. "Every industry needs it, every company uses it, and every nation needs to build it," Huang tells TIME in a 75-minute interview in November, two days after announcing that Nvidia, the world's first $5 trillion company, had once again smashed Wall Street's earnings expectations. "This is the single most impactful technology of our time..." The risk-averse are no longer in the driver's seat. Thanks to Huang, Son, Altman, and other AI titans, humanity is now flying down the highway, all gas no brakes, toward a highly automated and highly uncertain future.
Perhaps Trump said it best, speaking directly to Huang with a jovial laugh in the U.K. in September: "I don't know what you're doing here. I hope you're right."

Read more of this story at Slashdot.
