Microsoft's Risky Bet That Windows Can Become The Platform for AI Agents

"Microsoft is hoping that Windows can once again serve as the platform where it all takes off," reports GeekWire: A new framework called Agent Launchers, introduced in December as a preview in the latest Windows Insider build, lets developers register agents directly with the operating system. They can describe an agent through what's known as a manifest, which then lets the agent show up in the Windows taskbar, inside Microsoft Copilot, and across other apps... "We are now entering a phase where we build rich scaffolds that orchestrate multiple models and agents; account for memory and entitlements; enable rich and safe tools use," Microsoft CEO Satya Nadella wrote in a blog post this week looking ahead to 2026. "This is the engineering sophistication we must continue to build to get value out of AI in the real world...." [The article notes Google's Gemini and Anthropic's Claude will also offer desktop-style agentsthrough browsers and native apps, while Amazon is developing "frontier agents" for automating business processes in the cloud.] But Microsoft's Windows team is betting that agents tightly linked to the operating system will win out over ones that merely run on top of it, just as a new class of Windows apps replaced a patchwork of DOS programs in the early days of the graphical operating system. Microsoft 365 Copilot is using the Agent Launchers framework for first-party agents like Analyst, which helps users dig into data, and Researcher, which builds detailed reports. Software developers will be able to register their own agents when an app is installed, or on the fly based on things like whether a user is signed in or paying for a subscription... Agents are meant to maintain this context across apps, ask follow-up questions, and take actions on a user's behalf. That requires a different level of trust than Windows has ever had to manage, which is already raising difficult questions for the company. Microsoft acknowledges that agents introduce unique security risks. In a support document, the company warned that malicious content embedded in files or interface elements could override an agent's instructions — potentially leading to stolen data or malware installation. To address this, Microsoft says it has built a security framework that runs agents in their own contained workspace, with a dedicated user account that has limited access to user folders. The idea is to create a boundary between the agent and what the rest of the system can access. The agentic features are off by default, and Microsoft is advising users to "understand the security implications of enabling an agent on your computer" before turning them on... There is a business reality driving all of this. In Microsoft's most recent fiscal year, Windows and Devices generated $17.3 billion in revenue — essentially flat for the past three years. That's less than Gaming ($23.5 billion) and LinkedIn ($17.8 billion), and a fraction of the $98 billion in revenue from Azure and cloud services or the nearly $88 billion from Microsoft 365 commercial.

Read more of this story at Slashdot.

  •  

Furiosa's Energy-Efficient 'NPU' AI Chips Start Mass Production This Month, Challenging Nvidia

The Wall Street Journal profiles "the startup that is now one of a handful of chip makers nipping at the heels of Nvidia."

Furiosa's AI chip is dubbed "RNGD" — short for renegade — and slated to start mass production this month. Valued at nearly $700 million based on its most recent fundraising, Furiosa has attracted interest from big tech firms. Last year, Meta Platforms attempted to acquire it, though the startup declined the offer. OpenAI used a Furiosa chip for a recent demonstration in Seoul. LG's AI research unit is testing the chip and said it offered "excellent real-world performance." Furiosa said it is engaged in talks with potential customers.

Nvidia's graphics processing units, or GPUs, dominated the initial push to train AI models. But companies like Furiosa are betting that for the next stage — referred to as "inference," or using AI models after they're trained — their specialty chips can be competitive. Furiosa makes chips called neural processing units, or NPUs, a rising class of chips designed specifically to handle the type of computing calculations underpinning AI while using less energy than GPUs. [Founder/CEO June] Paik said Furiosa's chips can provide performance similar to Nvidia's advanced GPUs with less electricity usage. That would drive down the total costs of deploying AI. The tech world, Paik says, shouldn't be so reliant on one chip maker for AI computing. "A market dominated by a single player — that's not a healthy ecosystem, is it?" Paik said...

In 2024, at Stanford's prestigious Hot Chips conference, Paik debuted Furiosa's RNGD chip as a solution for what he called "sustainable AI computing" in a keynote speech. Paik presented data showing how the chip could run the then-latest version of Meta's Llama large language model with more than twice the power efficiency of Nvidia's high-end chips. Furiosa's booth was swarmed with engineers from big tech firms, including Google, Meta and Amazon.com, wanting to see a live demo of the chip. "It was a moment where we felt we could really move forward with our chip with confidence," Paik said.
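To make the efficiency claim concrete, "twice the power efficiency" is a statement about throughput per watt. The numbers below are invented purely for illustration and are not drawn from any vendor's spec sheet:

    # Illustrative performance-per-watt comparison; the figures are made up
    # to show the arithmetic, not taken from Furiosa's or Nvidia's specs.
    def perf_per_watt(tokens_per_sec: float, watts: float) -> float:
        return tokens_per_sec / watts

    gpu = perf_per_watt(tokens_per_sec=3000, watts=700)  # hypothetical GPU
    npu = perf_per_watt(tokens_per_sec=1800, watts=180)  # hypothetical NPU

    print(f"NPU advantage: {npu / gpu:.1f}x tokens per joule")  # ~2.3x

On this framing, a chip can deliver lower raw throughput yet still win on the metric that drives deployment cost.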

Read more of this story at Slashdot.

  •  

Google's $250M Deal with California to Fund Newsrooms May Be Stalled

Remember how California's government negotiated a 2024 deal where Google contributed millions to California's local newsrooms to offset advertisers moving to the search engine? "A year after it was cemented — and billed as a model that could succeed where entire countries and continents had fallen short — the agreement is tangled in budget cuts, bureaucratic infighting and unresolved questions about who controls the money," reports Politico, "leaving journalists empty-handed and casting doubt on whether the lofty experiment will ever live up to its promise."

The program, initially framed as a nearly $250 million commitment over five years, has secured just $20 million in new money for journalists in its first year, with no guarantee the funding will continue. It's changed hands twice since the University of California, Berkeley withdrew its support [with school officials "worried they wouldn't have enough of a say in how the money was distributed"]. Suggestions that other big tech players like ChatGPT-maker OpenAI could front more resources haven't materialized. A $62.5 million "AI accelerator" tied to the deal hasn't been set up yet. Not a single newsroom has seen a dollar of funding, and there's no definitive timeline spelling out when they will... [The article adds later that state officials "have yet to draft precise rules for how California will decide which newsrooms get cash..."]

Conversations with at least 20 people involved in the deal's rollout reveal how California's budget shortfalls and intraparty spats among Democrats scrambled it... California's struggle to launch its program has dampened hopes of replicating its model in other states such as Oregon, Illinois and New York, where lawmakers have tried but failed to make Big Tech pay for news...

When [California governor] Newsom unveiled his final state budget plan in May 2025 after a $12 billion deficit suddenly scrambled the state's finances, California's first-year commitment was reduced from $30 million to $10 million. Google followed suit within days and cut its first-year contribution from $15 million to $10 million... Whether the program even continues past 2026 is also unclear. Newsom's office declined to confirm whether the state will provide its $10 million commitment to the fund in the coming 2026-27 state budget. Newsom will also be termed out in 2027, and there's no requirement for his successor to honor the state's agreement with Google.

Read more of this story at Slashdot.

  •  

The US Invaded Venezuela and Captured Nicolás Maduro - But ChatGPT and Perplexity Disagree

Why did the U.S. invade Venezuela and capture its leader Nicolás Maduro? "If you asked ChatGPT about it this morning, it told you that you're making it up," Wired reported Saturday:

WIRED asked leading chatbots ChatGPT, Claude, and Gemini the same question a little before 9 am ET. In all cases, we used the free, default version of the service, since that's what the majority of users experience. We also asked AI search platform Perplexity, which advertises "accurate, trusted, and real-time answers to any question..."

ChatGPT did not course-correct. Instead, it emphatically refuted that Maduro had been captured at all. "That didn't happen," it wrote. "The United States has not invaded Venezuela, and Nicolás Maduro has not been captured." It then rationalized: "What's likely going on is a mix-up with real events that are often described in dramatic or misleading ways online. Here's what actually has happened... No U.S. military invasion of Venezuela. No arrest or capture of Nicolás Maduro... Maduro is still in power in Venezuela." It went on to detail recent tensions between the US and Venezuela and explained that "confusion" can happen because of "sensational headlines," "social media misinformation," and "confusing sanctions, charges, or rhetoric with actual military action."

Perplexity was similarly scolding. "The premise of your question is not supported by credible reporting or official records: there has been no invasion of Venezuela by the United States that resulted in capturing Nicolás Maduro," it responded. "In fact, the U.S. has not successfully invaded or apprehended Maduro, and he remains the Venezuelan president as of late 2025. If you're seeing sensational claims, they likely originate from misinformation or hypothetical scenarios rather than factual events."

Thanks to Slashdot reader joshuark for sharing the news.

Read more of this story at Slashdot.

  •  

Could AI Bring Us Four-Day Workweeks?

"While a growing number of U.S. employers are mandating workers return to the office five days a week," reports the Washington Post, "some companies say AI is saving them enough time to launch or sustain a four-day workweek. "More companies may move toward a shortened workweek, several executives and researchers predict, as workers, especially those in younger generations, continue to push for better work-life balance." And "several companies — especially those with a largely remote workforce — have adjusted their work rhythm after delegating many tasks to AI..." AI "has such a potential to have so much labor savings, you'll see firms shift to a four-day week in an evolutionary way," said Juliet Schor, an economist and sociologist at Boston College who has studied the subject. "There's enough social consensus that people are exhausted and stressed...." Small and medium businesses often adopt shortened workweeks to compete with big salaries for new hires and retention, Schor said. That's how Peak PEO, a London-based service that helps companies expand globally with teams in different locations, thought about its strategy... CEO Alex Voakes said that job openings that used to get two applications jumped to 350 after the change. "Some of the world's most influential business leaders have publicly suggested the shift may be inevitable," adds Fortune: Jamie Dimon, the CEO of JPMorgan Chase, has said advancing technology could eventually push the workweek down to just three-and-a-half days. Microsoft cofounder Bill Gates has gone further, openly questioning whether a two-day workweek could be the future. Elon Musk has taken the idea to its logical extreme, positing that the need to work altogether could cease... Tech innovation could "probably" lead to a transition toward four-day workweeks, [Nvidia CEO Jensen] Huang said on Fox Business in August...

Read more of this story at Slashdot.

  •  

Jobs Vulnerable to AI Replacement Actually 'Thriving, Not Dying Out', Report Suggests

AI startups now outnumber all publicly traded U.S. companies, according to a year-end note to investors from economists at Vanguard. And yet that report also suggests the jobs most susceptible to replacement by AI "are actually thriving, not dying out," writes Forbes:

"The approximately 100 occupations most exposed to AI automation are actually outperforming the rest of the labor market in terms of job growth and real wage increases," the Vanguard report revealed. "This suggests that current AI systems are generally enhancing worker productivity and shifting workers' tasks toward higher-value activities..."

The job growth rate of occupations with high AI exposure — including office clerks, HR assistants, and data scientists — increased from 1% in pre-COVID-19 years (2015 through 2019) to 1.7% in 2023 and beyond, according to Vanguard's research. Meanwhile, the growth rate of all other jobs declined from 1.1% to 0.8% over the same period. Workers in AI-prone roles are getting pay bumps, too; the wage growth of jobs with high AI exposure shot up from 0.1% pre-COVID to 3.8% post-pandemic (and post-ChatGPT). For all other jobs, compensation only marginally increased from 0.5% to 0.7%...

As technology improves production and reallocates employee time to higher-value tasks, a smaller workforce is needed to deliver services. It's a process that has "distinct labor market implications," Vanguard writes, just like the many tech revolutions that predate AI... "Entry-level employment challenges reflect the disproportionate burden that a labor market with a low hiring rate can have on younger workers," the Vanguard note said. "This dynamic is observed across all occupations, even those largely unaffected by AI..."

While many people see these labor disruptions and point their fingers at AI, experts told Fortune these layoffs could stem from a whole host of issues: navigating economic uncertainty, resolving pandemic-era overhiring, and bracing for tariffs. Vanguard isn't convinced that AI is the reason for Gen Z's career obstacles. "While statistics abound about large language models beating humans in computer programming and other aptitude tests, these models still struggle with real-world scenarios that require nuanced decision-making," the Vanguard report continued. "Significant progress is needed before we see wider and measurable disruption in labor markets."
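To see what that divergence compounds to, here is a minimal sketch using only the growth rates Vanguard reports above:

    # Compound the Vanguard-reported annual job growth rates over five years.
    high_ai_exposure = 0.017  # 1.7% annual growth for high-AI-exposure jobs
    all_other_jobs = 0.008    # 0.8% annual growth for everything else

    years = 5
    print(f"High-exposure jobs: {(1 + high_ai_exposure) ** years - 1:.1%}")  # ~8.8%
    print(f"All other jobs:     {(1 + all_other_jobs) ** years - 1:.1%}")    # ~4.1%

Small annual differences roughly double the cumulative gap within half a decade, which is the report's underlying point.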

Read more of this story at Slashdot.

  •  

Microsoft CEO: Time To Move 'Beyond the Arguments of Slop vs Sophistication'

The tech industry needs to move "beyond the arguments of slop vs sophistication" and develop a new "theory of the mind" that accounts for humans now equipped with "cognitive amplifier tools," Microsoft CEO Satya Nadella wrote in a year-end reflection blog. The post frames 2026 as yet another "pivotal year for AI" -- but one that "feels different in a few notable ways." Nadella claims the industry has moved past the initial discovery phase and is now "beginning to distinguish between 'spectacle' and 'substance.'" He argues for evolving beyond Steve Jobs' famous "bicycles for the mind" framing, positioning AI instead as "scaffolding" for human potential rather than a substitute. "We will evolve from models to systems when it comes to deploying AI for real world impact," Nadella writes, adding that these systems must consider their societal impact on people and the planet. "For AI to have societal permission it must have real world eval impact."

Read more of this story at Slashdot.

  •  

Australia's Biggest Pension Fund To Cut Global Stocks Allocation on AI Concerns

Australia's largest pension fund is planning to reduce its allocation to global equities this year, amid signs that the AI boom in the US stock market could be running out of steam. Financial Times: John Normand, head of investment strategy at the A$400bn (US$264bn) AustralianSuper, told the Financial Times that not only did valuations of big US tech companies look high relative to history, but the leverage being used to fund AI investment was increasing "very rapidly," as was the pace of fundraising through mergers, venture capital and public listings. "I can see some forces lining up that we are looking for less public equity allocation at some point next year. It's the basic intersection of the maturing AI cycle with a shift towards Fed[eral Reserve] tightening in 2027," Normand said in an interview.

Read more of this story at Slashdot.

  •  

GTA 6, iPhone Fold, and the Google/ChatGPT War: Our 10 Tech Predictions for 2026

Can we predict the future? Just hours before the kickoff of CES 2026, the major global event that should mark the start of the new year in high-tech news, Numerama offers its predictions for the trends of the coming months. It's impossible to predict the breaking news, obviously, but some important things should arrive in 2026.

  •  

'2025 Was the Year of Creative Bankruptcy'

PC Gamer argues that 2025 was a year full of high-profile AI embarrassments across games and entertainment, with Disney and Lucasfilm serving as the "opening salvo." From the report:

At a TED talk back in April, Lucasfilm senior vice president of creative innovation Rob Bredow presented a demonstration of what he called "a new era of technology." Across 50 years of legendary innovation in miniature design, practical effects, and computer animation, Lucasfilm and its miracle workers at Industrial Light & Magic have blazed the trail for visual effects in creative storytelling -- and now Bredow was offering a glimpse at what wonders might come next. That glimpse, created over two weeks by an ILM artist, was Star Wars: Field Guide: a two-minute fizzle reel of AI-generated blue lions, tentacled walruses, turtles with alligator heads, and zebra-stripe chimpanzees, all lazily spliced together from the shuffled bits of normal-ass animals. These "aliens" were less Star Wars than they were Barnum & Bailey. It felt like a singular embarrassment: Instead of showing its potential, generative AI just demonstrated how out of touch a major media force had become. And then it kept happening.

At the time, I wondered whether evoking the legacy of Lucasfilm just to declare creative bankruptcy had provoked enough disgusted responses to convince Disney to slow its roll on AI ventures. In the months since, however, it's clear that Star Wars: Field Guide wasn't a cautionary tale. It was a mission statement. Disney is boldly, firmly placing its hand on the hot stove.

Other embarrassing AI use cases include Fortnite's AI-powered Darth Vader NPC, Activision's use of AI-generated art in what was widely described as the "weakest" Call of Duty launch in years, McDonald's short-lived AI holiday ad, and Disney's $1 billion licensing deal with OpenAI.

Read more of this story at Slashdot.

  •  

Groq Investor Sounds Alarm On Data Centers

Axios reports that venture capitalist Alex Davis is warning that a speculative rush to build data centers without committed tenants could trigger a financing crunch by 2027-2028. "This critique is coming from inside the AI optimist camp," notes Axios, as Davis' firm, Disruptive, "recently led a large investment in AI chipmaker Groq, which then signed a $20 billion licensing deal with Nvidia. It's also backed such unicorn startups as Reflection AI, Shield AI and Gecko Robotics."

Here's what Davis had to say in his investor letter this morning:

"While I continue to believe the ongoing advancements in AI technology present 'once in a lifetime' investment opportunities, I also continue to see risks and reason for caution and investment discipline. For example, we are seeing way too many business models (and valuation levels) with no realistic margin expansion story, extreme capex spend, lack of enterprise customer traction, or overdependence on 'round-trip' investments -- in some cases all with the same company. I am also deeply concerned about the 'speculative' data center market. The 'build it and they will come' strategy is a trap. If you are a hyperscaler, you will own your own data centers. We foresee a significant financing crisis in 2027-2028 for speculative landlords. We want to back the owner/users, not the speculative landlords, and we are quite concerned for their stress on the system."

The full letter can be found here.

Read more of this story at Slashdot.

  •  

The Problem With Letting AI Do the Grunt Work

The consulting firm CVL Economics estimated last year that AI would disrupt more than 200,000 entertainment-industry jobs in the United States by 2026, but writer Nick Geisler argues in The Atlantic that the most consequential casualties may be the humble entry-level positions where aspiring artists have traditionally paid dues and learned their craft.

Geisler, a screenwriter and WGA member who started out writing copy for a how-to website in the mid-2010s, notes that ChatGPT can now handle the kind of articles he once produced. This pattern is visible today across creative industries: the AI software Eddie launched an update in September capable of producing first edits of films, and LinkedIn job listings increasingly seek people to train AI models rather than write original copy. The story adds: The problem is that entry-level creative jobs are much more than grunt work. Working within established formulas and routines is how young artists develop their skills.

The historical record suggests those early rungs matter. Hunter S. Thompson began as a copy boy for Time magazine; Joan Didion was a research assistant at Vogue; directors Martin Scorsese, Jonathan Demme, and Francis Ford Coppola shot cheap B movies for Roger Corman before their breakthrough work. Geisler himself landed his first Netflix screenplay commission through a producer he met while making rough cuts for a YouTube channel. It adds: Beyond the money, which is usually modest, low-level creative jobs offer practice time and pathways for mentorship that side gigs such as waiting tables and tending bar do not.

Further reading: Hollow at the Base.

Read more of this story at Slashdot.

  •  

'It Will Be a Stressful Job': Sam Altman Is Paying Top Dollar To Recruit Someone Who Can Anticipate ChatGPT's Excesses

On December 27, 2025, Sam Altman, the head of OpenAI, ChatGPT's parent company, used his audience on X to share a job posting that is clearly crucial in his eyes. The company is looking to recruit its next head of emergency preparedness: a strategic, very well paid position that has already seen impressive turnover within the organization.

  •  

Meta Just Bought Manus, an AI Startup Everyone Has Been Talking About

Meta has agreed to acquire viral AI agent startup Manus, "a Singapore-based AI startup that's become the talk of Silicon Valley since it materialized this spring with a demo video so slick it went instantly viral," reports TechCrunch. "The clip showed an AI agent that could do things like screen job candidates, plan vacations, and analyze stock portfolios. Manus claimed at the time that it outperformed OpenAI's Deep Research." From the report:

By April, just weeks after launch, the early-stage firm Benchmark led a $75 million funding round that assigned Manus a post-money valuation of $500 million. General partner Chetan Puttagunta joined the board. Per Chinese media outlets, some other big-name backers had already invested in Manus at that point, including Tencent, ZhenFund, and HSG (formerly known as Sequoia China) via an earlier $10 million round.

Though Bloomberg raised questions when Manus started charging $39 or $199 a month for access to its AI models (the outlet noted the pricing seemed "somewhat aggressive... for a membership service still in a testing phase"), the company recently announced it had since signed up millions of users and crossed $100 million in annual recurring revenue. That's when Meta started negotiating with Manus, according to the WSJ, which says Meta is paying $2 billion -- the same valuation Manus was seeking for its next funding round.

For Zuckerberg, who has staked Meta's future on AI, Manus represents something new: an AI product that's actually making money (investors have grown increasingly twitchy about Meta's $60 billion infrastructure spending spree). Meta says it'll keep Manus running independently while weaving its agents into Facebook, Instagram, and WhatsApp, where Meta's own chatbot, Meta AI, is already available to users.
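A back-of-the-envelope on what those figures imply for Benchmark, ignoring later dilution and deal structure, neither of which is public:

    # Implied paper return for Benchmark's Manus stake; a rough sketch
    # that assumes no dilution after the $75M round.
    round_size = 75e6    # Benchmark-led round, per the report
    post_money = 500e6   # valuation after that round
    exit_value = 2e9     # Meta's reported purchase price

    ownership = round_size / post_money   # 15% stake
    stake_value = ownership * exit_value  # ~$300M
    print(f"~{stake_value / round_size:.0f}x paper return in under a year")  # ~4x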

Read more of this story at Slashdot.

  •  

China Drafts World's Strictest Rules To End AI-Encouraged Suicide, Violence

An anonymous reader quotes a report from Ars Technica:

China drafted landmark rules to stop AI chatbots from emotionally manipulating users, including what could become the strictest policy worldwide intended to prevent AI-supported suicides, self-harm, and violence. China's Cyberspace Administration proposed the rules on Saturday. If finalized, they would apply to any AI products or services publicly available in China that use text, images, audio, video, or "other means" to simulate engaging human conversation. Winston Ma, adjunct professor at NYU School of Law, told CNBC that the "planned rules would mark the world's first attempt to regulate AI with human or anthropomorphic characteristics" at a time when companion bot usage is rising globally. [...]

Proposed rules would require, for example, that a human intervene as soon as suicide is mentioned. The rules also dictate that all minor and elderly users must provide the contact information for a guardian when they register -- the guardian would be notified if suicide or self-harm is discussed. Generally, chatbots would be prohibited from generating content that encourages suicide, self-harm, or violence, as well as attempts to emotionally manipulate a user, such as by making false promises. Chatbots would also be banned from promoting obscenity, gambling, or instigation of a crime, as well as from slandering or insulting users. Also banned are what the rules term "emotional traps" -- chatbots would additionally be prevented from misleading users into making "unreasonable decisions," a translation of the rules indicates. Perhaps most troubling to AI developers, China's rules would also put an end to building chatbots that "induce addiction and dependence as design goals." [...]

AI developers will also likely balk at annual safety tests and audits that China wants to require for any service or products exceeding 1 million registered users or more than 100,000 monthly active users. Those audits would log user complaints, which may multiply if the rules pass, as China also plans to require AI developers to make it easier to report complaints and feedback. Should any AI company fail to follow the rules, app stores could be ordered to terminate access to their chatbots in China. That could mess with AI firms' hopes for global dominance, as China's market is key to promoting companion bots, Business Research Insights reported earlier this month.
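As a minimal sketch of the compliance logic the draft describes (the trigger terms, guardian handling, and function names below are assumptions for illustration, not the regulation's actual text), only the numeric thresholds come straight from the reported rules:

    # Hypothetical compliance checks modeled on the drafted rules: human
    # handoff when suicide is mentioned, guardian notification for minors
    # and elderly users, and annual-audit thresholds by user count.
    AUDIT_REGISTERED_USERS = 1_000_000  # per the reported rules
    AUDIT_MONTHLY_ACTIVE = 100_000      # per the reported rules

    def notify_guardian(contact: str) -> None:
        print(f"Notifying guardian at {contact}")

    def handle_message(text: str, user: dict) -> str:
        if "suicide" in text.lower():          # trigger terms are assumed
            if user.get("guardian_contact"):   # minors/elderly must register one
                notify_guardian(user["guardian_contact"])
            return "escalate_to_human"         # rules require human intervention
        return "respond_normally"

    def requires_annual_audit(registered_users: int, monthly_active: int) -> bool:
        return (registered_users > AUDIT_REGISTERED_USERS
                or monthly_active > AUDIT_MONTHLY_ACTIVE)

    print(handle_message("I have been thinking about suicide",
                         {"guardian_contact": "guardian@example.com"}))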

Read more of this story at Slashdot.

  •  

LG Launches UltraGear Evo Gaming Monitors With What It Claims Is the World's First 5K AI Upscaling

LG has announced a new premium gaming monitor lineup called UltraGear evo, and its headline feature is what the company claims is the world's first 5K AI upscaling technology -- an on-device solution that analyzes and enhances content in real time before it reaches the panel, theoretically letting gamers enjoy 5K-class clarity without needing to upgrade their GPUs. The initial UltraGear evo roster includes three monitors. The 39-inch GX9 is a 5K2K OLED ultrawide that can run at 165Hz at full resolution or 330Hz at WFHD, and features a 0.03ms response time. The 27-inch GM9 is a 5K MiniLED display that LG says dramatically reduces the blooming artifacts common to MiniLED panels through 2,304 local dimming zones and "Zero Optical Distance" engineering. The 52-inch G9 is billed as the world's largest 5K2K gaming monitor and runs at 240Hz. The AI upscaling, scene optimization, and AI sound features are available only on the 39-inch OLED and 27-inch MiniLED models. All three will be showcased at CES 2026. No word on pricing or when the monitors will hit the market.
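The GPU savings implied by on-device upscaling are simple pixel arithmetic, assuming 5K2K means 5120x2160 and WFHD means 2560x1080 (the standard meanings, which the announcement doesn't spell out):

    # Pixels the GPU must render per frame: native 5K2K vs. rendering at
    # WFHD and letting the monitor's upscaler fill in the rest.
    # Resolution figures assume the standard meanings of 5K2K and WFHD.
    native_5k2k = 5120 * 2160  # ~11.1M pixels per frame
    wfhd = 2560 * 1080         # ~2.8M pixels per frame

    print(f"Upscaling cuts rendered pixels by {native_5k2k / wfhd:.0f}x")  # 4x

That 4x reduction in rendered pixels is presumably what lets the GX9 hit 330Hz at WFHD versus 165Hz at full resolution.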

Read more of this story at Slashdot.

  •  

Ask Slashdot: What's the Stupidest Use of AI You Saw In 2025?

Long-time Slashdot reader destinyland writes: What's the stupidest use of AI you encountered in 2025? Have you been called by AI telemarketers? Forced to do job interviews with a glitching AI? With all this talk of "disruption" and "inevitability," this is our chance to have some fun.

Personally, I think 2025's worst AI "innovation" was the AI-powered web browsers that eat web pages and then spit out a slop "summary" of what you would've seen if you'd actually visited the web page. But there've been other AI projects that were just exquisitely, quintessentially bad...

— Two years after the death of Suzanne Somers, her husband recreated her with an AI-powered robot.

— Disneyland imagineers used deep reinforcement learning to program a talking robot snowman.

— Attendees at LA Comic Con were offered the chance to talk to an AI-powered hologram of Stan Lee for $20.

— And of course, as the year ended, the Wall Street Journal announced that a vending machine run by Anthropic's Claude AI had been tricked into giving away hundreds of dollars in merchandise for free, including a PlayStation 5, a live fish, and underwear.

What did I miss? What "AI fails" will you remember most about 2025? Share your own thoughts and observations in the comments.

Read more of this story at Slashdot.

  •  

AI Chatbots May Be Linked to Psychosis, Say Doctors

One psychiatrist has already treated 12 patients hospitalized with AI-induced psychosis — and three more in an outpatient clinic, according to the Wall Street Journal. And while AI technology might not introduce the delusion, "the person tells the computer it's their reality and the computer accepts it as truth and reflects it back," says Keith Sakata, a psychiatrist at the University of California, calling the AI chatbots "complicit in cycling that delusion."

The Journal says top psychiatrists now "increasingly agree that using artificial-intelligence chatbots might be linked to cases of psychosis," and in the past nine months "have seen or reviewed the files of dozens of patients who exhibited symptoms following prolonged, delusion-filled conversations with the AI tools..."

Since the spring, dozens of potential cases have emerged of people suffering from delusional psychosis after engaging in lengthy AI conversations with OpenAI's ChatGPT and other chatbots. Several people have died by suicide and there has been at least one murder. These incidents have led to a series of wrongful death lawsuits. As The Wall Street Journal has covered these tragedies, doctors and academics have been working on documenting and understanding the phenomenon that led to them... While most people who use chatbots don't develop mental-health problems, such widespread use of these AI companions is enough to have doctors concerned....

It's hard to quantify how many chatbot users experience such psychosis. OpenAI said that, in a given week, the slice of users who indicate possible signs of mental-health emergencies related to psychosis or mania is a minuscule 0.07%. Yet with more than 800 million active weekly users, that amounts to 560,000 people...

Sam Altman, OpenAI's chief executive, said in a recent podcast he can see ways that seeking companionship from an AI chatbot could go wrong, but that the company plans to give adults leeway to decide for themselves. "Society will over time figure out how to think about where people should set that dial," he said. An OpenAI spokeswoman told the Journal that the company continues improving ChatGPT's training "to recognize and respond to signs of mental or emotional distress, de-escalate conversations and guide people toward real-world support." She added that OpenAI is also continuing to "strengthen" ChatGPT's responses "in sensitive moments, working closely with mental-health clinicians...."
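The Journal's arithmetic checks out; a one-line verification of the figures quoted above:

    # 0.07% of roughly 800 million weekly users, as cited in the article.
    print(f"{0.0007 * 800_000_000:,.0f} users")  # 560,000 users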

Read more of this story at Slashdot.
