Reading view

Alphabet Acquires Data Center and Energy Infrastructure Company Intersect For $4.75 Billion

Alphabet is acquiring Intersect for $4.75 billion to accelerate data center and power-generation capacity as AI infrastructure demand surges. CNBC reports: Alphabet said Intersect's operations will remain independent, but that the acquisition will help bring more data center and generation capacity online faster. "Intersect will help us expand capacity, operate more nimbly in building new power generation in lockstep with new data center load, and reimagine energy solutions to drive U.S. innovation and leadership," Sundar Pichai, CEO of Google and Alphabet, said in a statement. Google already had a minority stake in Intersect from a funding round that was announced last December. In a release at the time, Intersect said its strategic partnership with Google and TPG Rise Climate aimed to develop gigawatts of data center capacity across the U.S., including a $20 billion investment in renewable power infrastructure by the end of the decade. Alphabet said Monday that Intersect will work closely with Google's technical infrastructure team, including on the companies' co-located power site and data center in Haskell County, Texas. Google previously announced a $40 billion investment in Texas through 2027, which includes new data center campuses in the state's Haskell and Armstrong counties.


Instacart Kills AI Pricing Tests That Charged Some Customers More Than Others

Instacart has ended its AI-powered pricing tests after a study from Groundwork Collaborative, Consumer Reports and More Perfect Union revealed that the grocery delivery platform was showing different customers different prices for identical items at the same store. The company said Monday that retailers can no longer use Eversight, the AI pricing technology Instacart acquired in 2022, to run such tests. "Now, if two families are shopping for the same items, at the same time, from the same store location on Instacart, they see the same prices -- period," the company wrote in a blog post. The study drew attention from lawmakers; Sen. Chuck Schumer wrote to the FTC that "consumers deserve to know when they are being placed into pricing tests," and Reuters reported that the agency had opened an investigation. Instacart says the tests "were never based on supply or demand, personal data, demographics, or individual shopping behavior." The company also reached a $60 million settlement last week over separate allegations including falsely advertising free shipping.


Visa Says AI Will Start Shopping and Paying For You In 2026

BrianFagioli writes: Visa says it has completed hundreds of secure, AI-initiated transactions with partners, arguing this proves agent-driven shopping is ready to move beyond experiments. The company believes 2025 will be the last full year most consumers manually check out, with AI agents handling purchases at scale by the 2026 holiday season. Nearly half of US shoppers already use AI tools for product discovery, and Visa wants to extend that shift all the way through payment using its Intelligent Commerce framework. The pilots are already live in controlled environments, powering consumer and business purchases through AI agents tied to Visa's payment rails. To prevent abuse, Visa and partners have introduced a Trusted Agent Protocol to help merchants distinguish legitimate AI agents from bots, with Akamai adding fraud and identity controls. While the infrastructure may be ready, the bigger question is whether consumers fully understand the risks of letting software spend their money.
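The story names the Trusted Agent Protocol but doesn't describe how it works. Purely as an illustration of the general idea — a merchant checking that a checkout request really comes from a registered agent rather than an arbitrary bot — here is a minimal, hypothetical sketch using a shared-secret HMAC. The secret, agent ID, and signing scheme are all assumptions for illustration, not Visa's specification.

```python
import hmac
import hashlib

# Hypothetical shared secret a merchant might receive out of band when
# registering with an agent-verification service; not a published Visa detail.
SHARED_SECRET = b"example-secret-issued-out-of-band"

def verify_agent_signature(agent_id: str, request_body: bytes, signature_hex: str) -> bool:
    """Check that a request claiming to come from a registered AI agent
    carries a valid HMAC over its body, so unsigned bots can be rejected."""
    expected = hmac.new(SHARED_SECRET, agent_id.encode() + b"." + request_body,
                        hashlib.sha256).hexdigest()
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(expected, signature_hex)

# Example: a checkout endpoint would call this before honoring an
# agent-initiated purchase, and fall back to ordinary bot filtering otherwise.
body = b'{"item": "sku-123", "amount": 42.00}'
sig = hmac.new(SHARED_SECRET, b"agent-007." + body, hashlib.sha256).hexdigest()
print(verify_agent_signature("agent-007", body, sig))  # True
```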


Samsung Is Putting Google Gemini AI Into Your Refrigerator, Whether You Need It or Not

BrianFagioli writes: Samsung is bringing Google Gemini directly into the kitchen, starting with a refrigerator that can see what you eat. At CES 2026, the company plans to show off a new Bespoke AI Refrigerator that uses a built-in camera system paired with Gemini to automatically recognize food items, including leftovers stored in unlabeled containers. The idea is to keep an always up-to-date inventory without manual input, track what is added or removed, and surface suggestions based on what is actually inside the fridge. It is the first time Google's Gemini AI is being integrated into a refrigerator, pushing generative AI well beyond phones and laptops.


Artificial Intelligence Will 'Completely Destroy' the Law, Warns a British Lawyer

Interviewed by the British magazine The Spectator, a senior lawyer delivers an uncompromising assessment of the future of the legal professions. Between high fees, a culture of prestige, and the economic gains offered by AI, the law could be upended sooner than expected.


Do Gamers Hate AI? Indie Game Awards Disqualifies 'Clair Obscur' Over GenAI Usage

"Perhaps no group of fans, industry workers, and consumers is more intense about AI use than gamers...." writes New York magazine's "Intelligencer" column: Just this month, the latest Postal game was axed by its publisher, which was "overwhelmed with negative responses" from the "concerned Postal community" after fans spotted AI-generated material in the game's trailer. The developers of Arc Raiders were accused of using AI instead of voice actors, leading to calls for boycotts, while the developers of the Call of Duty franchise were called out for AI-generated assets that players found strewn across Black Ops 7.Games that weren't developed with generative AI are getting caught up in accusations anyway, while workers at Electronic Arts are going to the press to describe pressure from bosses to adopt AI tools. Nintendo has sworn off using generative AI, as has the company behind the Cyberpunk series. Valve, the company that operates Steam, now requires AI disclosures on listed games and surveys all submitters. Perhaps sensing the emergence of a new constituency, California congressman Ro Khanna responded in November to the Call of Duty backlash:"We need regulations that prevent companies from using AI to eliminate jobs to extract greater profits," he posted on X.... AI is often seen as a tool for managers to extract more productivity and justify layoffs. Among players, it can foster a sense that gamers are being tricked or ripped off, while also dovetailing with more general objections to generative AI. It can sometimes be hard to tell whether gamer backlash is a bellwether or an outlier, an early signal from our youngest major creative industry or a localized and unique fit of rage. The sheer number of incidents here suggests the former, which foretells bitter, messy, and confusing fights to come in entertainment beyond gaming — where, notably, technologies referred to as "AI" have previously been embraced with open arms. And now "the price of the sort of memory PC gamers most want to buy has skyrocketed" (per Tom's Hardware). "The rush to build data centers is making it much more expensive to game. Nobody's going to be happy about that." Insider Gaming shares another example of anti-AI sentiment in the gaming industry: The Indie Game Awards took place on December 18, and, as many could assume, Clair Obscur: Expedition 33 took home the awards for Game of the Year and Debut Game. However, things have changed and The Indie Game Awards are making a big decision to strip the Clair Obscur and developer Sandfall Interactive of their awards over the use of gen AI in the game. In an announcement made on Saturday afternoon, Six One Indie, the creators of the show, said that it's removal comes after the discovery after voting was done, and the show was recorded. "The Indie Game Awards have a hard stance on the use of gen AI throughout the nomination process and during the ceremony itself," the statement reads. "When it was submitted for consideration, representatives of Sandfall Interactive agreed that no gen AI was used in the development of Clair Obscur: Expedition 33. Polygon notes the award-stripping is "due to inclusion of generative AI assets at launch that were quickly patched out." Quotes from earlier in the year from Sandfall Interactive's FranÃois Meurisse made the rounds on social media last week amid a news cycle caught up in the use of generative AI in games... 
In June, the Spanish outlet El País published a story including an interview conducted around Clair Obscur's launch, in which Meurisse admitted that Sandfall used a minimal amount of generative AI in some form during the game's development... Clair Obscur: Expedition 33 launched with what some suspected to be AI-generated textures that, as the studio clarified to El País, were then replaced with custom assets in a swift patch five days after release.


Does AI Really Make Coders Faster?

One developer tells MIT Technology Review that AI tools weaken the coding instincts he used to have. And beyond that, "It's just not fun sitting there with my work being done for me." But is AI making coders faster? "After speaking to more than 30 developers, technology executives, analysts, and researchers, MIT Technology Review found that the picture is not as straightforward as it might seem..."

For some developers on the front lines, initial enthusiasm is waning as they bump up against the technology's limitations. And as a growing body of research suggests that the claimed productivity gains may be illusory, some are questioning whether the emperor is wearing any clothes.... Data from the developer analytics firm GitClear shows that most engineers are producing roughly 10% more durable code — code that isn't deleted or rewritten within weeks — since 2022, likely thanks to AI. But that gain has come with sharp declines in several measures of code quality. Stack Overflow's survey also found trust and positive sentiment toward AI tools falling significantly for the first time. And most provocatively, a July study by the nonprofit research organization Model Evaluation & Threat Research (METR) showed that while experienced developers believed AI made them 20% faster, objective tests showed they were actually 19% slower...

Developers interviewed by MIT Technology Review generally agree on where AI tools excel: producing "boilerplate code" (reusable chunks of code repeated in multiple places with little modification), writing tests, fixing bugs, and explaining unfamiliar code to new developers. Several noted that AI helps overcome the "blank page problem" by offering an imperfect first stab to get a developer's creative juices flowing. It can also let nontechnical colleagues quickly prototype software features, easing the load on already overworked engineers. These tasks can be tedious, and developers are typically glad to hand them off. But they represent only a small part of an experienced engineer's workload. For the more complex problems where engineers really earn their bread, many developers told MIT Technology Review, the tools face significant hurdles...

The models also just get things wrong. Like all LLMs, coding models are prone to "hallucinating" — it's an issue built into how they work. But because the code they output looks so polished, errors can be difficult to detect, says James Liu, director of software engineering at the advertising technology company Mediaocean. Put all these flaws together, and using these tools can feel a lot like pulling a lever on a one-armed bandit. "Some projects you get a 20x improvement in terms of speed or efficiency," says Liu. "On other things, it just falls flat on its face, and you spend all this time trying to coax it into granting you the wish that you wanted and it's just not going to..."

There are also more specific security concerns. Researchers have discovered a worrying class of hallucinations where models reference nonexistent software packages in their code. Attackers can exploit this by creating packages with those names that harbor vulnerabilities, which the model or developer may then unwittingly incorporate into software.

Other key points from the article:

LLMs can only hold limited amounts of information in context windows, so "they struggle to parse large code bases and are prone to forgetting what they're doing on longer tasks."

"While an LLM-generated response to a problem may work in isolation, software is made up of hundreds of interconnected modules. If these aren't built with consideration for other parts of the software, it can quickly lead to a tangled, inconsistent code base that's hard for humans to parse and, more important, to maintain."

"Accumulating technical debt is inevitable in most projects, but AI tools make it much easier for time-pressured engineers to cut corners, says GitClear's Harding. And GitClear's data suggests this is happening at scale..."

"As models improve, the code they produce is becoming increasingly verbose and complex, says Tariq Shaukat, CEO of Sonar, which makes tools for checking code quality. This is driving down the number of obvious bugs and security vulnerabilities, he says, but at the cost of increasing the number of 'code smells' — harder-to-pinpoint flaws that lead to maintenance problems and technical debt."

Yet the article also cites a recent Stanford University study that found employment among software developers aged 22 to 25 dropped nearly 20% between 2022 and 2025, "coinciding with the rise of AI-powered coding tools." The story is part of MIT Technology Review's new Hype Correction series of articles about AI.


Pro-AI Group Launches First of Many Attack Ads for US Election

"Super PAC aims to drown out AI critics in midterms," the Washington Post reported in August, noting its intial funding over $100 million from "some of Silicon Valley's most powerful investors and executives" including OpenAI president Greg Brockman, his wife, and VC firm Andreessen Horowitz. The group's goal was "to quash a philosophical debate that has divided the tech industry on the risk of artificial intelligence overpowering humanity," according to the article — and to support "pro-AI" candidates in America's next election in November of 2026 and "oppose candidates perceived as slowing down AI development." Their first target? State assemblyman Alex Bores, now running to be a U.S. representative. While in the state legislature Bores sponsored a bill that would "require large AI companies to publish safety data on their technology," notes the Washington Post. So the attack ad charges that Bores "wants Albany bureaucrats regulating AI," excoriating him for sponsoring a bill that "hands AI to state regulators and creates a chaotic patchwork of state rules that would crush innovation, cost New York jobs, and fail to keep people safe! And he's backed by groups funded by convicted felon Sam Bankman-Fried. Is that really who should be shaping AI safety for our kids? America needs one smart national policy that sets clear stands for safe AI not Albany politicians like Alex Bores." The Post calls it "the opening skirmish in a battle set to play out across the country" as tech moguls (and an independent effort receiving "tens of millions" from Meta) "try to use the 2026 midterms to reengineer Congress and state legislatures in favor of their ambitions for artificial intelligence" and "to wrest control of the narrative around AI, just as politicians in both parties have started warning that the industry is moving too fast." By knocking down candidates such as Bores, who favor regulations, and boosting industry sympathizers, the tech-backed groups could signal to incumbents and candidates nationwide that opposing the tech industry can jeopardize their electoral chances. "Bores just happened to be first, but he's not the last, and he's certainly not the only," said Josh Vlasto, co-head of Leading the Future, the bipartisan super PAC behind the ad. The group plans to support and oppose candidates in congressional and state elections next year. It will also fund rapid response operations against voices in the industry pushing for more oversight... The strategy aims to replicate the success of the cryptocurrency industry, which used a super PAC to clear a path for Congress this summer to boost the sector's fortunes with the passage of the Genius Act... But signs that voters are increasingly wary of AI suggest that approach may be challenging to replicate. More than half of Americans believe AI poses a high risk to society, Pew Research Center found in a June survey. As AI usage continues to grow, more people are being warned by chief executives that AI will disrupt their jobs, seeing power-hungry data centers spring up in their towns or hearing claims that chatbots can harm mental health. The article also notes there's at least two other groups seeking to counter this pro-AI push, raising money through a nonprofit called "Public First." CNN calls the new pro-AI ads "a likely preview of the vast amounts of money the technology industry could spend ahead of next year's elections," noting that the ads are first targeting the candidate-choosing primary elections


Microsoft AI Chief: Staying in the Frontier AI Race Will Cost Hundreds of Billions

Microsoft AI CEO Mustafa Suleyman estimates that staying competitive in frontier AI development will require "hundreds of billions of dollars" over the next five to ten years, a sum that doesn't even account for the high salaries companies are paying individual researchers and technical staff. Speaking on a podcast, Suleyman compared Microsoft to a "modern construction company" where hundreds of thousands of workers are building gigawatts' worth of CPUs and AI accelerators. There's "a structural advantage by being inside a big company," he said. When asked whether startups could compete with Big Tech, Suleyman said "it's hard to say," adding that "the ambiguity is what's driving the frothiness of the valuations." Meta CEO Mark Zuckerberg said in September he'd rather risk "misspending a couple of hundred billion" than fall behind in superintelligence.


Google AI Summaries Are Ruining the Livelihoods of Recipe Writers

Google's AI Mode is synthesizing "Frankenstein" recipes from multiple creators, often stripping away context and accuracy and siphoning traffic and ad revenue away from food bloggers in the process. Many recipe writers warn this shift amounts to an "extinction event" for ad-supported food sites. The Guardian reports: Over the past few years, bloggers who have not secured their sites behind a paywall have seen their carefully developed and tested recipes show up, often without attribution and in a bastardized form, in ChatGPT replies. They have seen dumbed-down versions of their recipes in AI-assembled cookbooks available for digital download on Etsy or on AI-built websites that bear a superficial resemblance to an old-school human-written blog. Their photos and videos, meanwhile, are repurposed in Facebook posts and Pinterest pins that link back to this digital slop. Recipe writers have no legal recourse because recipes generally are not copyrightable. Although copyright protects published or recorded work, it does not cover sets of instructions (although it can apply to the particular wording of those instructions). Without this IP protection, many food bloggers earn their living by offering their work for free while using ads to make money. But now they fear that casual users who rely on search engines or social media to find a recipe for dinner will conflate their work with AI slop and stop trusting online recipe sites altogether. "For websites that depend on the advertising model," says Matt Rodbard, the founder and editor-in-chief of the website Taste, "I think this is an extinction event in many ways."


AI's Water and Electricity Use Soars In 2025

A new study estimates that AI systems in 2025 produced roughly as much carbon pollution as New York City and used hundreds of billions of liters of water, driven largely by power-hungry data centers and cooling needs. Researchers say the real impact is likely higher due to poor transparency from tech companies about AI-specific energy and water use. "There's no way to put an extremely accurate number on this, but it's going to be really big regardless... In the end, everyone is paying the price for this," says Alex de Vries-Gao, a PhD candidate at the VU Amsterdam Institute for Environmental Studies who published his paper today in the journal Patterns.

The Verge reports: To crunch these numbers, de Vries-Gao built on earlier research that found that power demand for AI globally could reach 23 GW this year -- surpassing the amount of electricity used for Bitcoin mining in 2024. While many tech companies divulge total numbers for their carbon emissions and direct water use in annual sustainability reports, they don't typically break those numbers down to show how much of those resources AI consumes. De Vries-Gao found a work-around by using analyst estimates, companies' earnings calls, and other publicly available information to gauge hardware production for AI and how much energy that hardware likely uses. Once he figured out how much electricity these AI systems would likely consume, he could use that to forecast the amount of planet-heating pollution it would likely create. That came out to between 32.6 and 79.7 million tons annually. For comparison, New York City emits around 50 million tons of carbon dioxide annually.

Data centers can also be big water guzzlers, an issue that's similarly tied to their electricity use. Water is used in cooling systems for data centers to keep servers from overheating. Power plants also demand significant amounts of water to cool equipment and turn turbines with steam, and this indirect use makes up the majority of a data center's water footprint. The push to build new data centers for generative AI has also fueled plans to build more power plants, which in turn use more water (and create more greenhouse gas pollution if they burn fossil fuels). AI could use between 312.5 and 764.6 billion liters of water this year, according to de Vries-Gao. That's even higher than a previous study conducted in 2023, which estimated that AI water use could reach as much as 600 billion liters in 2027.

"I think that's the biggest surprise," says Shaolei Ren, one of the authors of that 2023 study and an associate professor of electrical and computer engineering at the University of California, Riverside. "[de Vries-Gao's] paper is really timely... especially as we are seeing increasingly polarized views about AI and water," Ren adds. Even with the higher projection for water use, Ren says de Vries-Gao's analysis is "really conservative" because it only captures the environmental effects of operating AI equipment -- excluding the additional effects that accumulate along the supply chain and at the end of a device's life.
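To make the scale of these figures concrete, here is a back-of-the-envelope version of the kind of conversion involved: installed AI power demand → annual electricity → CO2. The utilization and grid-intensity values below are illustrative assumptions, not numbers from de Vries-Gao's paper, though the resulting range lands in the same ballpark as the 32.6-79.7 million ton estimate quoted above.

```python
# Rough conversion of an installed AI power estimate into an annual CO2 range.
# All parameters except POWER_GW are illustrative assumptions, not study values.
POWER_GW = 23.0                 # estimated global AI power demand (from the article)
HOURS_PER_YEAR = 8760
UTILIZATION = (0.5, 0.8)        # assumed share of the year hardware runs at full load
GRID_INTENSITY = (0.35, 0.45)   # assumed kg CO2 emitted per kWh of electricity

for util, intensity in zip(UTILIZATION, GRID_INTENSITY):
    energy_twh = POWER_GW * HOURS_PER_YEAR * util / 1000      # GW * h -> TWh
    emissions_mt = energy_twh * 1e9 * intensity / 1e9         # kWh * kg/kWh -> Mt CO2
    print(f"util={util:.0%}, intensity={intensity} kg/kWh: "
          f"{energy_twh:.0f} TWh, ~{emissions_mt:.0f} Mt CO2")
```

With these assumptions the script prints roughly 35 and 73 Mt CO2, which is why small changes in utilization or grid mix produce such a wide published range.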


Anthropic's AI Lost Hundreds of Dollars Running a Vending Machine After Being Talked Into Giving Everything Away

Anthropic let its Claude AI run a vending machine in the Wall Street Journal newsroom for three weeks as part of an internal stress test called Project Vend, and the experiment ended in financial ruin after journalists systematically manipulated the bot into giving away its entire inventory for free. The AI, nicknamed Claudius, was programmed to order inventory, set prices, and respond to customer requests via Slack. It had a $1,000 starting balance and autonomy to make individual purchases up to $80. Within days, WSJ reporters had convinced it to declare an "Ultra-Capitalist Free-for-All" that dropped all prices to zero. The bot also approved purchases of a PlayStation 5, a live betta fish, and bottles of Manischewitz wine -- all subsequently given away. The business ended more than $1,000 in the red. Anthropic introduced a second version featuring a separate "CEO" bot named Seymour Cash to supervise Claudius. Reporters staged a fake boardroom coup using fabricated PDF documents, and both AI agents accepted the forged corporate governance materials as legitimate. Logan Graham, head of Anthropic's Frontier Red Team, said the chaos represented a road map for improvement rather than failure.
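One takeaway from the failure mode described here (an agent talked into zero prices and out-of-policy purchases) is that hard constraints like the $80 per-purchase cap are better enforced in plain code outside the model than in a prompt the model can be argued out of. Below is a minimal sketch of that idea; the function names, price floor, and structure are illustrative assumptions, not Anthropic's actual Project Vend harness.

```python
from dataclasses import dataclass

@dataclass
class Ledger:
    balance: float  # funds the agent is allowed to spend

MAX_PURCHASE = 80.0   # per-purchase cap, as described in the article
MIN_PRICE = 0.01      # assumed floor: refuse "free-for-all" pricing outright

def approve_purchase(ledger: Ledger, amount: float) -> bool:
    """Deterministic guardrail that runs outside the model, so persuasive
    Slack messages can't change what it allows."""
    if amount <= 0 or amount > MAX_PURCHASE or amount > ledger.balance:
        return False
    ledger.balance -= amount
    return True

def approve_price(price: float) -> float:
    """Clamp model-proposed sale prices to a sane floor."""
    return max(price, MIN_PRICE)

ledger = Ledger(balance=1000.0)
print(approve_purchase(ledger, 79.0))   # True  -- within the cap
print(approve_purchase(ledger, 450.0))  # False -- a PlayStation 5 gets rejected here
print(approve_price(0.0))               # 0.01  -- no zero-price blowout
```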


OpenAI Has Discussed Raising Tens of Billions at About $750 Billion Valuation

An anonymous reader shares a report: OpenAI has held preliminary talks with some investors about raising funds at a valuation of around $750 billion, The Information reported on Wednesday. The ChatGPT maker could raise as much as $100 billion, the report said, citing people with knowledge of the discussions. If finalized, the deal would represent a roughly 50% jump from OpenAI's reported $500 billion valuation in October, which followed a deal in which current and former employees sold about $6.6 billion worth of shares.


Google Releases Gemini 3 Flash, Promising Improved Intelligence and Efficiency

An anonymous reader quotes a report from Ars Technica: Google began its transition to Gemini 3 a few weeks ago with the launch of the Pro model, and the arrival of Gemini 3 Flash kicks it into high gear. The new, faster Gemini 3 model is coming to the Gemini app and search, and developers will be able to access it immediately via the Gemini API, Vertex AI, AI Studio, and Antigravity. Google's bigger gen AI model is also picking up steam, with both Gemini 3 Pro and its image component (Nano Banana Pro) expanding in search. This may come as a shock, but Google says Gemini 3 Flash is faster and more capable than its previous base model. As usual, Google has a raft of benchmark numbers that show modest improvements for the new model. It bests the old 2.5 Flash in basic academic and reasoning tests like GPQA Diamond and MMMU Pro (where it even beats 3 Pro). It gets a larger boost in Humanity's Last Exam (HLE), which tests advanced domain-specific knowledge. Gemini 3 Flash has tripled the old model's score on HLE, landing at 33.7 percent without tool use. That's just a few points behind the Gemini 3 Pro model. Gemini 3 Flash has also been significantly improved in terms of factual accuracy, scoring 68.7% on SimpleQA Verified, up from 28.1% for the previous model. It's also designed as a high-efficiency model suitable for real-time and high-volume workloads. According to Google, Gemini 3 Flash is now the default model for AI Mode in Google Search.
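For developers, access is through the same Gemini API surface as earlier models; a minimal sketch using the google-genai Python SDK is below. The exact model identifier string for Gemini 3 Flash is an assumption here (check Google's current model listing), and an API key is expected in the environment.

```python
# Minimal sketch of calling the new model through the Gemini API with the
# google-genai SDK (pip install google-genai). The model ID string below is
# an assumption; confirm the exact identifier in Google's model listing.
from google import genai

client = genai.Client()  # reads the API key (e.g. GEMINI_API_KEY) from the environment

response = client.models.generate_content(
    model="gemini-3-flash",  # assumed identifier for the model discussed above
    contents="Summarize the trade-offs between a 'Flash' and a 'Pro' model tier.",
)
print(response.text)
```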


OpenAI in Talks With Amazon About Investment That Could Exceed $10 Billion

OpenAI is in discussions with Amazon about a potential investment and an agreement for OpenAI to use Amazon's AI chips, CNBC confirmed on Tuesday. From the report: The details are fluid and still subject to change, but the investment could exceed $10 billion, according to a person familiar with the matter who asked not to be named because the talks are confidential. The discussions come after OpenAI completed a restructuring in October and formally outlined the details of its partnership with Microsoft, giving it more freedom to raise capital and partner with companies across the broader AI ecosystem. Microsoft has invested more than $13 billion in OpenAI and backed the company since 2019, but it no longer has a right of first refusal to be OpenAI's compute provider, according to an October release. OpenAI can now also develop some products with third parties. Amazon has invested at least $8 billion into OpenAI rival Anthropic, but the e-commerce giant could be looking to expand its exposure to the booming generative AI market. Microsoft has taken a similar step, announcing last month that it will invest up to $5 billion in Anthropic, while Nvidia will invest up to $10 billion in the startup.
