Did Tim Cook Post AI Slop in His Christmas Message Promoting 'Pluribus'?

Artist Keith Thomson is a modern (and whimsical) Edward Hopper. And Apple TV says he created the "festive artwork" shared on X by Apple CEO Tim Cook on Christmas Eve, "made on MacBook Pro." Its intentionally-off picture of milk and cookies was meant to tease the season finale of Pluribus. ("Merry Christmas Eve, Carol..." Cook had posted.) But others were convinced that the weird image was AI-generated. Tech blogger John Gruber was blunt. "Tim Cook posts AI Slop in Christmas message on Twitter/X, ostensibly to promote 'Pluribus'." As for sloppy details, the carton is labeled both "Whole Milk" and "Lowfat Milk", and the "Cow Fun Puzzle" maze is just goofily wrong. (I can't recall ever seeing a puzzle of any kind on a milk carton, because they're waxy and hard to write on. It's like a conflation of milk cartons and cereal boxes.) Tech author Ben Kamens — who just days earlier had blogged about generating mazes with AI — said the image showed the "specific quirks" of generative AI mazes (including the way the maze couldn't be solved, except by going around the maze altogether). Former Google Ventures partner M.G. Siegler even wondered if AI use intentionally echoed the themes of Pluribus — e.g., the creepiness of a collective intelligence — since otherwise "this seems far too obvious to be a mistake/blunder on Apple's part." (Someone on Reddit pointed out that in Pluribus's dystopian world, milk plays a key role — and the open spout of the "natural" milk's carton does touch a suspiciously-shining light on the Christmas tree...) Slashdot contacted artist Keith Thomson to try to ascertain what happened...
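The unsolvable-maze tell that Kamens describes can be checked mechanically: a breadth-first search either finds a path inside the grid or proves none exists. A minimal sketch (the grid layout and start/goal coordinates here are hypothetical, purely for illustration):

```python
from collections import deque

def maze_solvable(grid, start, goal):
    """BFS over a grid maze: '#' is a wall, '.' is open.
    Returns True only if goal is reachable without leaving the grid,
    i.e. without 'going around the maze altogether'."""
    rows, cols = len(grid), len(grid[0])
    seen = {start}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == "." and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False

# A toy maze whose start and goal are separated by a solid wall --
# the kind of quirk generative models often produce.
broken = ["..#..",
          "..#..",
          "..#.."]
print(maze_solvable(broken, (0, 0), (0, 4)))  # False: no path inside the grid
```

Maze generators built on graph algorithms guarantee solvability by construction; image models that merely draw maze-like textures offer no such guarantee, which is exactly the quirk Kamens flagged.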

Read more of this story at Slashdot.

  •  

Google's 'AI Overview' Wrongly Accused a Musician of Being a Sex Offender

An anonymous reader shared this report from the CBC: Cape Breton fiddler Ashley MacIsaac says he may have been defamed by Google after it recently produced an AI-generated summary falsely identifying him as a sex offender. The Juno Award-winning musician said he learned of the online misinformation last week after a First Nation north of Halifax confronted him with the summary and cancelled a concert planned for Dec. 19. "You are being put into a less secure situation because of a media company — that's what defamation is," MacIsaac said in a telephone interview with The Canadian Press, adding he was worried about what might have happened had the erroneous content surfaced while he was trying to cross an international border... The 50-year-old virtuoso fiddler said he later learned the inaccurate claims were taken from online articles regarding a man in Atlantic Canada with the same last name... [W]hen CBC News reached him by phone on Christmas Eve, he said he'd already received queries from law firms across the country interested in taking it on pro bono.

Read more of this story at Slashdot.

  •  

Sal Khan: Companies Should Give 1% of Profits To Retrain Workers Displaced By AI

"I believe artificial intelligence will displace workers at a scale many people don't yet realize," says Sal Khan (founder/CEO of the nonprofit Khan Academy). But in an op-ed in the New York Times he also proposes a solution that "could change the trajectory of the lives of millions who will be displaced..." "I believe that every company benefiting from automation — which is most American companies — should... dedicate 1 percent of its profits to help retrain the people who are being displaced." This isn't charity. It is in the best interest of these companies. If the public sees corporate profits skyrocketing while livelihoods evaporate, backlash will follow — through regulation, taxes or outright bans on automation. Helping retrain workers is common sense, and such a small ask that these companies would barely feel it, while the public benefits could be enormous... Roughly a dozen of the world's largest corporations now have a combined profit of over a trillion dollars each year. One percent of that would create a $10 billion annual fund that, in part, could create a centralized skill training platform on steroids: online learning, ways to verify skills gained and apprenticeships, coaching and mentorship for tens of millions of people. The fund could be run by an independent nonprofit that would coordinate with corporations to ensure that the skills being developed are exactly what are needed. This is a big task, but it is doable; over the past 15 years, online learning platforms have shown that it can be done for academic learning, and many of the same principles apply for skill training. "The problem isn't that people can't work," Khan writes in the essay. "It's that we haven't built systems to help them continue learning and connect them to new opportunities as the world changes rapidly." To meet the challenges, we don't need to send millions back to college. 
We need to create flexible, free paths to hiring, many of which would start in high school and extend through life. Our economy needs low-cost online mechanisms for letting people demonstrate what they know. Imagine a model where capability, not how many hours students sit in class, is what matters; where demonstrated skills earn them credit and where employers recognize those credits as evidence of readiness to enter an apprenticeship program in the trades, health care, hospitality or new categories of white-collar jobs that might emerge... There is no shortage of meaningful work — only a shortage of pathways into it. Thanks to long-time Slashdot reader destinyland for sharing the article.

Read more of this story at Slashdot.

  •  

OpenAI is Hiring a New 'Head of Preparedness' to Predict/Mitigate AI's Harms

An anonymous reader shared this report from Engadget: OpenAI is looking for a new Head of Preparedness who can help it anticipate the potential harms of its models and how they can be abused, in order to guide the company's safety strategy. It comes at the end of a year that's seen OpenAI hit with numerous accusations about ChatGPT's impacts on users' mental health, including a few wrongful death lawsuits. In a post on X about the position, OpenAI CEO Sam Altman acknowledged that the "potential impact of models on mental health was something we saw a preview of in 2025," along with other "real challenges" that have arisen alongside models' capabilities. The Head of Preparedness "is a critical role at an important time," he said. Per the job listing, the Head of Preparedness (who will make $555K, plus equity), "will lead the technical strategy and execution of OpenAI's Preparedness framework, our framework explaining OpenAI's approach to tracking and preparing for frontier capabilities that create new risks of severe harm." "These questions are hard," Altman posted on X.com, "and there is little precedent; a lot of ideas that sound good have some real edge cases... This will be a stressful job and you'll jump into the deep end pretty much immediately." The listing says OpenAI's Head of Preparedness "will lead a small, high-impact team to drive core Preparedness research, while partnering broadly across Safety Systems and OpenAI for end-to-end adoption and execution of the framework." They're looking for someone "comfortable making clear, high-stakes technical judgments under uncertainty."

Read more of this story at Slashdot.

  •  

Waymo Updates Vehicles to Better Handle Power Outages - But Still Faces Criticism

Waymo explained this week that its self-driving car technology is already "designed to handle dark traffic signals," and successfully handled over 7,000 last Saturday during San Francisco's long power outage, properly treating those intersections as four-way stops. But during the long outage its cars sometimes experienced a "backlog" while waiting for confirmation checks (leading them to freeze in intersections), so Waymo said Tuesday it's implementing "fleet-wide updates" to provide its self-driving cars "specific power outage context, allowing it to navigate more decisively." Ironically, two days later Waymo paused its service again in San Francisco. But this time it was due to a warning from the National Weather Service about a powerful storm bringing the possibility of flash flooding and power outages, reports CNBC. They add that Waymo "didn't immediately respond to a request for comment, or say whether regulators required its service pause on Thursday given the flash flood warnings." And they also note Waymo still faces criticism over last Saturday's incident: The former director of San Francisco's Municipal Transportation Agency, Jeffrey Tumlin, told CNBC that regulators and robotaxi companies can take valuable lessons away from the chaos that arose with Waymo vehicles during the PG&E power outages last week. "I think we need to be asking 'what is a reasonable number of [autonomous vehicles] to have on city streets, by time of day, by geography and weather?'" Tumlin said. He also suggested regulators may want to set up a staged system that will allow autonomous vehicle companies to rapidly scale their operations, provided they meet specific tests. One of those tests, he said, would be how quickly a company can get their autonomous vehicles safely out of the way of traffic if they encounter something that is confusing like a four-way intersection with no functioning traffic lights. 
Cities and regulators should also seek more data from robotaxi companies about the planned or actual performance of their vehicles during expected emergencies such as blackouts, floods or earthquakes, Tumlin said.

Read more of this story at Slashdot.

  •  

Bitcoin Miners' Pivot To AI Has Lifted Bitcoin-Mining ETF By About 90% This Year

An anonymous reader quotes a report from the Wall Street Journal: It's harder than ever to mine bitcoin. And less profitable, too. But mining-company stocks are still flying, even with cryptocurrency prices in retreat. That's because these firms have something in common with the hottest investment theme on the planet: the massive, electricity-hungry data centers expected to power the artificial-intelligence boom. Some companies are figuring out how to remake themselves as vital suppliers to Alphabet, Amazon, Meta, Microsoft and other "hyperscalers" bent on AI dominance. Bitcoin-mining -- using vast computer power to solve equations to unlock the digital currency -- has been a lucrative and cutting-edge pursuit in its own right. Lately, however, increased competition and other challenges have eroded profit margins. But just as the bitcoin-mining business began to cool, the AI build-out turned white hot. The AI arms race has created an insatiable demand for some assets the miners already have: data centers, cooling systems, land and hard-to-obtain contracts for electrical power -- all of which can be repurposed to train and power AI models. It's not a seamless process. Miners often have to build new, specialized facilities, because running AI requires more-advanced cooling and network systems, as well as replacing bitcoin-mining computers with AI-focused graphics processing units. But signing deals with miners allows AI giants to expand faster and cheaper than starting new facilities from scratch. These companies still mine some bitcoin, but the transition gives miners a new source of deep-pocketed customers willing to commit to longer-term leases for their data centers. "The opportunity for miners to convert to AI is one of the greatest opportunities I could possibly imagine," said Adam Sullivan, chief executive of Core Scientific, which has pivoted to AI data centers. The shift has boosted miners' stocks. 
The CoinShares Bitcoin Mining ETF has surged about 90% this year, a rally that has accelerated even as bitcoin erased its gains for 2025. The ETF holds shares of miners including Cipher Mining and IREN, both of which have surged following long-term deals with companies such as Amazon and Microsoft. Shares of Core Scientific quadrupled in 2024 after the company signed its first AI contract that February. The stock has gained 10% this year. The company now expects to exit bitcoin mining entirely by 2028.
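The WSJ's "solving equations" shorthand refers to proof-of-work: miners repeatedly hash a candidate block header with different nonces until the digest falls below a difficulty target. A toy sketch of the idea (the header bytes and difficulty here are illustrative; real Bitcoin double-SHA-256-hashes an 80-byte header against a vastly harder target):

```python
import hashlib

def mine(header: bytes, difficulty_bits: int) -> int:
    """Find a nonce such that SHA-256(header + nonce) starts with
    `difficulty_bits` zero bits. Expected work: ~2**difficulty_bits hashes."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

nonce = mine(b"toy-block", 16)  # ~65,536 hash attempts on average
digest = hashlib.sha256(b"toy-block" + nonce.to_bytes(8, "big")).hexdigest()
print(digest[:4])  # leading 16 bits are zero, so the first 4 hex chars are 0000
```

The economics in the article follow from this loop: the work is embarrassingly parallel and bounded only by hash rate and electricity, which is why miners accumulated exactly the power contracts and cooled data-center shells that AI training now demands.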

Read more of this story at Slashdot.

  •  

Fake Video Claiming 'Coup In France' Goes Viral

alternative_right shares a report from Euronews: France's President Emmanuel Macron discovered news of his own supposed overthrow, after he received a message of concern, along with a link to a Facebook video. "On Sunday (14 December) one of my African counterparts got in touch, writing 'Dear president, what's happening to you? I'm very worried,'" Macron told readers of French local newspaper La Provence on December 16. Alongside the message was a compelling video showing a swirling helicopter, military personnel, crowds and -- what appears to be -- a news anchor delivering a piece to camera. "Unofficial reports suggest that there has been a coup in France, led by a colonel whose identity has not been revealed, along with the possible fall of Emmanuel Macron. However, the authorities have not issued a clear statement," she says. Except, nothing about this video is authentic: it was created with AI. After discovering the video, Macron asked Pharos -- France's official portal for reporting illicit online content -- to call Facebook's parent company Meta, to get the fake video removed. But that request was turned down, as the platform claimed it did not violate its "rules of use." [...] The original video ... racked up more than 12 million views [...]. The teenager running the account is based in Burkina Faso and makes money running courses focusing on how to monetize AI. He eventually took the video down more than a week after its initial publication, due to political -- and public -- controversy. "I tend to think that I have more power to apply pressure than other people," Macron said. "Or rather, that it's easier to say something is serious if I am the one calling, but it doesn't work." "These people are mocking us," he added. "They don't care about the serenity of public debates, they don't care about democracy, and therefore they are putting us in danger."

Read more of this story at Slashdot.

  •  

Italy Tells Meta To Suspend Its Policy That Bans Rival AI Chatbots From WhatsApp

Italy's antitrust regulator, the Italian Competition Authority (AGCM), ordered Meta to suspend a policy that blocks rival AI chatbots from using WhatsApp's business APIs, citing potential abuse of market dominance. "Meta's conduct appears to constitute an abuse, since it may limit production, market access, or technical developments in the AI Chatbot services market, to the detriment of consumers," the Authority wrote. "Moreover, while the investigation is ongoing, Meta's conduct may cause serious and irreparable harm to competition in the affected market, undermining contestability." TechCrunch reports: The AGCM in November had broadened the scope of an existing investigation into Meta, after the company changed its business API policy in October to ban general-purpose chatbots from being offered on the chat app via the API. Meta has argued that its API isn't designed to be a platform for the distribution of chatbots and that people have more avenues beyond WhatsApp to use AI bots from other companies. The policy change, which goes into effect in January, would affect the availability of AI chatbots from the likes of OpenAI, Perplexity, and Poke on the app.

Read more of this story at Slashdot.

  •  

Fix Grainy, Blurry, Low-Resolution Videos Effortlessly with Aiarty Video Enhancer (Lowest Price for Christmas)

Many photographers today shoot video alongside stills, but video quality doesn’t always meet expectations. Low-light footage often suffers from visible grain, while clips from older cameras and early DSLRs can look soft and outdated on modern displays.

Improving this kind of footage traditionally means complex workflows and inconsistent results. Aiarty Video Enhancer aims to simplify that process.

Designed for photographers and editors, Aiarty Video Enhancer is an all-in-one solution for cleaning up and improving video quality. It combines intelligent upscaling, noise reduction, deblurring, restoration, color correction, and frame interpolation into a streamlined workflow, while still giving users enough control to maintain a natural, photographic look.

Christmas Deal: Aiarty Video Enhancer at the Lowest Price of the Year

To coincide with the holiday season, Aiarty is currently running a Christmas promotion that may be of interest to photographers who want to improve the quality of their videos without committing to a subscription-based tool.

What the Christmas offer includes:

  • 36% off the regular price (lifetime license)
  • Extra $5 coupon: use code XMASSAVE at checkout

The full lifetime license provides full access to all features, includes lifetime free updates, and can be installed on up to 3 Windows or Mac computers.

For users who prefer a one-time purchase with no recurring fees, this seasonal offer makes Aiarty Video Enhancer a relatively low-risk option to try, especially with a 30-day money-back guarantee in place.

How Aiarty Video Enhancer Fits into Real-World Video Workflows

At its core, Aiarty Video Enhancer is built for photographers who want to improve video quality without turning their footage into something artificial or over-processed.

Instead of relying on a single “one-click” AI approach, it combines multiple optimized AI models with practical user controls, allowing creators to balance quality, speed, and visual realism based on real-world needs.

Optimized AI Models with Performance and Control in Mind

Aiarty uses three specialized AI models, each optimized for different scenarios such as fine-detail restoration and extreme low-light denoising. These models are deeply optimized for modern GPUs, pushing utilization as high as 95%, which translates into noticeably faster processing compared to many similar tools that leave much of the GPU idle.

Equally important, Aiarty does not force users to fully surrender creative control to AI. Features like the Strength slider, Turbo Mode, and Step Mode give photographers the flexibility to decide whether they prioritize speed, maximum quality, or a natural, film-like result.

This balance—powerful automation with meaningful control—is what makes Aiarty particularly well-suited to photographers who occasionally work with video but still care deeply about image integrity.

All processing is done locally and offline, which means better privacy and no cloud uploads or data reuse.

Upscale: Making Camera Footage Fit Modern Workflows

Upscaling is not only about improving old videos. For photographers and editors, it often solves very practical problems in real-world workflows.

In mixed timelines, footage may come from different cameras—for example, a main 4K camera combined with 1080p clips from drones, action cameras, or older DSLRs. High-frame-rate slow-motion footage is also often limited to lower resolutions. Reframing or post-stabilization also inevitably reduces resolution.

Aiarty allows users to upscale videos using common targets such as 1080p, 2K, or 4K, as well as fixed scaling options like 2× or 4×. Instead of simply enlarging pixels, its AI models analyze edges, textures, and patterns to reconstruct missing detail, helping low-resolution clips appear cleaner and more consistent with today’s viewing standards.

Denoise: Reducing Grain in Video and Cleaning Up Audio

Noise is one of the most common problems in camera video, particularly when shooting in low light or at high ISO. Traditional noise reduction often requires adjusting multiple technical parameters and can come at the cost of lost detail.

Aiarty integrates video denoising directly into its enhancement process, automatically reducing noise while preserving edge detail and texture. If the auto enhancement removes too much grain, users can adjust the strength slider to find the optimal balance between cleaner footage and a natural, film-like look.

This approach avoids the hassle of traditional noise reduction while also preventing the overly smooth, plastic appearance that some AI tools often introduce.

In addition to video denoising, Aiarty also includes basic audio noise reduction, helping reduce background hiss or ambient noise in casual recordings and older clips. While not intended to replace dedicated audio software, it provides a practical improvement that makes clips easier to use and more presentable overall.

Deblur and Restore: Improving Clarity When Reshooting Isn’t an Option

Slight blur and softness are common in real-world video, especially with early-generation sensors or less-than-ideal shutter speeds. When reshooting is impossible, restoration becomes the only option.

Aiarty’s deblurring and restoration capabilities focus on recovering perceived clarity rather than aggressively sharpening. By reconstructing edge definition and fine detail where possible, it improves overall sharpness while avoiding halos or harsh artifacts.

The results won’t turn heavily blurred footage into perfectly sharp video, but they can noticeably improve clarity in many real-world scenarios.

Strength Slider: Why Control Matters for Photographers

One of the more important design choices in Aiarty Video Enhancer is the inclusion of a strength slider that controls how strongly the AI enhancement is applied. This may seem like a minor feature, but it plays a significant role in achieving natural-looking results.

AI enhancement is not always a case of “more is better”. Applying too much sharpening or denoising can lead to an artificial look, something photographers are particularly sensitive to. The ability to dial back the effect allows users to find a balance between improved clarity and visual realism.

Color Correction: Fine-Tuning After Enhancement

Basic color correction tools, such as controls for exposure, contrast, highlights, shadows, and color temperature, are included to help refine the final output after AI enhancement.

These tools are not meant to replace full-fledged color grading software. Instead, they allow photographers to make subtle adjustments to ensure that enhanced footage looks balanced and consistent, especially after noise reduction or restoration has altered the image slightly.

When used conservatively, these controls help maintain a natural photographic look while polishing the final result.

Frame Interpolation and Slow Motion: Smoother Motion for Creative Control

Aiarty includes frame interpolation, allowing users to increase frame rates up to 120 fps, and a slow-motion option with adjustable speeds such as 1/2 or 1/4. While not every photographer will need this daily, it can be extremely useful when working with action footage, fast-moving subjects, or older clips originally captured at low frame rates.

By smoothing motion and creating slow-motion effects, this feature gives photographers more creative flexibility, whether for short films, travel videos, or simply making casual clips look more polished and professional.
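Aiarty's interpolation models are proprietary, but the basic idea of synthesizing intermediate frames can be sketched with simple linear blending (a deliberately naive assumption; production tools estimate motion-compensated or learned optical flow, which handles moving subjects far better than a cross-fade):

```python
import numpy as np

def interpolate_frames(frames, factor=2):
    """Insert (factor - 1) linearly blended frames between each
    consecutive pair, roughly multiplying the frame rate by `factor`.
    Naive cross-fade only: no motion estimation."""
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        for i in range(1, factor):
            t = i / factor
            out.append(((1 - t) * a + t * b).astype(a.dtype))
    out.append(frames[-1])
    return out

# Two tiny grayscale "frames"; the synthesized middle frame is their average.
clip = [np.full((2, 2), v, dtype=np.float32) for v in (0.0, 100.0)]
smooth = interpolate_frames(clip, factor=2)
print(len(smooth), float(smooth[1][0, 0]))  # 3 frames; middle frame value 50.0
```

Playing the interpolated sequence at the original frame rate is also how software slow motion works, which is why the 1/2 and 1/4 speed options and the 120 fps option are two faces of the same feature.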

SDR to HDR: Giving Footage More Depth

For older 8-bit SDR footage, Aiarty offers an optional SDR-to-HDR conversion that outputs 10-bit HDR video. This process can improve color transitions and reduce banding, resulting in smoother gradients and a more refined visual appearance.

When combined with basic color adjustments, this feature can add depth and richness to older clips. It can noticeably enhance the viewing experience on modern HDR-capable displays.

Final Words: Get the Holiday Treat Before It Goes Away!

If you’ve been thinking about improving your video quality, it’s worth trying out Aiarty Video Enhancer to see the results for yourself.

And if you like what you see, take advantage of the Christmas offer and grab Aiarty Video Enhancer and more powerful tools at the lowest price. All licenses work on 3 computers permanently and include lifetime free updates. Act fast—this seasonal promotion is only available for a limited time.

The post Fix Grainy, Blurry, Low-Resolution Videos Effortlessly with Aiarty Video Enhancer (Lowest Price for Christmas) appeared first on Photo Rumors.

  •  

China Is Worried AI Threatens Party Rule

An anonymous reader quotes a report from the Wall Street Journal: Concerned that artificial intelligence could threaten Communist Party rule, Beijing is taking extraordinary steps to keep it under control. Although China's government sees AI as crucial to the country's economic and military future, regulations and recent purges of online content show it also fears AI could destabilize society. Chatbots pose a particular problem: Their ability to think for themselves could generate responses that spur people to question party rule. In November, Beijing formalized rules it has been working on with AI companies to ensure their chatbots are trained on data filtered for politically sensitive content, and that they can pass an ideological test before going public. All AI-generated texts, videos and images must be explicitly labeled and traceable, making it easier to track and punish anyone spreading undesirable content. Authorities recently said they removed 960,000 pieces of what they regarded as illegal or harmful AI-generated content during three months of an enforcement campaign. Authorities have officially classified AI as a major potential threat, adding it alongside earthquakes and epidemics to its National Emergency Response Plan. Chinese authorities don't want to regulate too much, people familiar with the government's thinking said. Doing so could extinguish innovation and condemn China to second-tier status in the global AI race behind the U.S., which is taking a more hands-off approach toward policing AI. But Beijing also can't afford to let AI run amok. Chinese leader Xi Jinping said earlier this year that AI brought "unprecedented risks," according to state media. One of his lieutenants likened AI without safety measures to driving on a highway without brakes. There are signs that China is, for now, finding a way to thread the needle. 
Chinese models are scoring well in international rankings, both overall and in specific areas such as computer coding, even as they censor responses about the Tiananmen Square massacre, human-rights concerns and other sensitive topics. Major American AI models are for the most part unavailable in China. It could become harder for DeepSeek and other Chinese models to keep up with U.S. models as AI systems become more sophisticated. Researchers outside of China who have reviewed both Chinese and American models also say that China's regulatory approach has some benefits: Its chatbots are often safer by some metrics, with less violence and pornography, and are less likely to steer people toward self-harm. "The Communist Party's top priority has always been regulating political content, but there are people in the system who deeply care about the other social impacts of AI, especially on children," said Matt Sheehan, who studies Chinese AI at the Carnegie Endowment for International Peace, a think tank. "That may lead models to produce less dangerous content on certain dimensions."

Read more of this story at Slashdot.

  •  

2015 Radio Interview Frames AI As 'High-Level Algebra'

Longtime Slashdot reader MrFreak shares a public radio interview from 2015 discussing artificial intelligence as inference over abstract inputs, along with scaling limits, automation, and governance models, where for-profit engines are constrained by nonprofit oversight: Recorded months before OpenAI was founded, the conversation treats intelligence as math plus incentives rather than something mystical, touching on architectural bottlenecks, why "reasoning" may not simply emerge from brute force, labor displacement, and institutional design for advanced AI systems. Many of the themes align closely with current debates around large language models and AI governance. The recording was revisited following recent remarks by Sergey Brin at Stanford, where he acknowledged that despite Google's early work on Transformers, institutional hesitation and incentive structures limited how aggressively the technology was pursued. The interview provides an earlier, first-principles perspective on how abstraction, scaling, and organizational design might interact once AI systems begin to compound.

Read more of this story at Slashdot.

  •  

Alphabet Acquires Data Center and Energy Infrastructure Company Intersect For $4.75 Billion

Alphabet is acquiring Intersect for $4.75 billion to accelerate data center and power-generation capacity as AI infrastructure demand surges. CNBC reports: Alphabet said Intersect's operations will remain independent, but that the acquisition will help bring more data center and generation capacity online faster. "Intersect will help us expand capacity, operate more nimbly in building new power generation in lockstep with new data center load, and reimagine energy solutions to drive U.S. innovation and leadership," Sundar Pichai, CEO of Google and Alphabet, said in a statement. Google already had a minority stake in Intersect from a funding round that was announced last December. In a release at the time, Intersect said its strategic partnership with Google and TPG Rise Climate aimed to develop gigawatts of data center capacity across the U.S., including a $20 billion investment in renewable power infrastructure by the end of the decade. Alphabet said Monday that Intersect will work closely with Google's technical infrastructure team, including on the companies' co-located power site and data center in Haskell County, Texas. Google previously announced a $40 billion investment in Texas through 2027, which includes new data center campuses in the state's Haskell and Armstrong counties.

Read more of this story at Slashdot.

  •  

Instacart Kills AI Pricing Tests That Charged Some Customers More Than Others

Instacart has ended its AI-powered pricing tests after a study from Groundwork Collaborative, Consumer Reports and More Perfect Union revealed that the grocery delivery platform was showing different customers different prices for identical items at the same store. The company said Monday that retailers can no longer use Eversight, the AI pricing technology Instacart acquired in 2022, to run such tests. "Now, if two families are shopping for the same items, at the same time, from the same store location on Instacart, they see the same prices -- period," the company wrote in a blog post. The study drew attention from lawmakers; Sen. Chuck Schumer wrote to the FTC that "consumers deserve to know when they are being placed into pricing tests," and Reuters reported that the agency had opened an investigation. Instacart says the tests "were never based on supply or demand, personal data, demographics, or individual shopping behavior." The company also reached a $60 million settlement last week over separate allegations including falsely advertising free shipping.

Read more of this story at Slashdot.

  •  

Visa Says AI Will Start Shopping and Paying For You In 2026

BrianFagioli writes: Visa says it has completed hundreds of secure, AI-initiated transactions with partners, arguing this proves agent-driven shopping is ready to move beyond experiments. The company believes 2025 will be the last full year most consumers manually check out, with AI agents handling purchases at scale by the 2026 holiday season. Nearly half of US shoppers already use AI tools for product discovery, and Visa wants to extend that shift all the way through payment using its Intelligent Commerce framework. The pilots are already live in controlled environments, powering consumer and business purchases through AI agents tied to Visa's payment rails. To prevent abuse, Visa and partners have introduced a Trusted Agent Protocol to help merchants distinguish legitimate AI agents from bots, with Akamai adding fraud and identity controls. While the infrastructure may be ready, the bigger question is whether consumers fully understand the risks of letting software spend their money.

Read more of this story at Slashdot.

  •  

Samsung Is Putting Google Gemini AI Into Your Refrigerator, Whether You Need It or Not

BrianFagioli writes: Samsung is bringing Google Gemini directly into the kitchen, starting with a refrigerator that can see what you eat. At CES 2026, the company plans to show off a new Bespoke AI Refrigerator that uses a built-in camera system paired with Gemini to automatically recognize food items, including leftovers stored in unlabeled containers. The idea is to keep an always-up-to-date inventory without manual input, track what is added or removed, and surface suggestions based on what is actually inside the fridge. It is the first time Google's Gemini AI is being integrated into a refrigerator, pushing generative AI well beyond phones and laptops.

Read more of this story at Slashdot.

  •  

AI Will 'Completely Destroy' the Law, Warns a British Lawyer

Interviewed by the British magazine The Spectator, a senior lawyer offers an unsparing assessment of the future of the legal professions. Between high fees, a culture of prestige, and the economic gains offered by AI, the law could be upended faster than expected.
