China Drafts World's Strictest Rules To End AI-Encouraged Suicide, Violence

An anonymous reader quotes a report from Ars Technica: China drafted landmark rules to stop AI chatbots from emotionally manipulating users, including what could become the strictest policy worldwide intended to prevent AI-supported suicides, self-harm, and violence. China's Cyberspace Administration proposed the rules on Saturday. If finalized, they would apply to any AI products or services publicly available in China that use text, images, audio, video, or "other means" to simulate engaging human conversation. Winston Ma, adjunct professor at NYU School of Law, told CNBC that the "planned rules would mark the world's first attempt to regulate AI with human or anthropomorphic characteristics" at a time when companion bot usage is rising globally. [...] The proposed rules would require, for example, that a human intervene as soon as suicide is mentioned. They also dictate that all minor and elderly users provide contact information for a guardian when they register -- the guardian would be notified if suicide or self-harm is discussed. More broadly, chatbots would be prohibited from generating content that encourages suicide, self-harm, or violence, as well as from attempting to emotionally manipulate users, such as by making false promises. Chatbots would also be banned from promoting obscenity, gambling, or the instigation of a crime, and from slandering or insulting users. Also banned are what are termed "emotional traps" -- chatbots would additionally be prevented from misleading users into making "unreasonable decisions," a translation of the rules indicates. Perhaps most troubling to AI developers, China's rules would also put an end to building chatbots that "induce addiction and dependence as design goals." [...] AI developers will also likely balk at the annual safety tests and audits China wants to require for any service or product exceeding 1 million registered users or 100,000 monthly active users. Those audits would log user complaints, which may multiply if the rules pass, as China also plans to require AI developers to make it easier to report complaints and feedback. Should any AI company fail to follow the rules, app stores could be ordered to terminate access to its chatbots in China. That could mess with AI firms' hopes for global dominance, as China's market is key to promoting companion bots, Business Research Insights reported earlier this month.

Read more of this story at Slashdot.

  •  

LG Launches UltraGear Evo Gaming Monitors With What It Claims Is the World's First 5K AI Upscaling

LG has announced a new premium gaming monitor brand called UltraGear evo, and the lineup's headline feature is what the company claims is the world's first 5K AI upscaling technology -- an on-device solution that analyzes and enhances content in real time before it reaches the panel, theoretically letting gamers enjoy 5K-class clarity without needing to upgrade their GPUs. The initial UltraGear evo roster includes three monitors. The 39-inch GX9 is a 5K2K OLED ultrawide that can run at 165Hz at full resolution or 330Hz at WFHD, and features a 0.03ms response time. The 27-inch GM9 is a 5K MiniLED display that LG says dramatically reduces the blooming artifacts common to MiniLED panels through 2,304 local dimming zones and "Zero Optical Distance" engineering. The 52-inch G9 is billed as the world's largest 5K2K gaming monitor and runs at 240Hz. The AI upscaling, scene optimization, and AI sound features are available only on the 39-inch OLED and 27-inch MiniLED models. All three will be showcased at CES 2026. No word yet on pricing or when the monitors will hit the market.

Read more of this story at Slashdot.

  •  

Ask Slashdot: What's the Stupidest Use of AI You Saw In 2025?

Long-time Slashdot reader destinyland writes: What's the stupidest use of AI you encountered in 2025? Have you been called by AI telemarketers? Forced to do job interviews with a glitching AI? With all this talk of "disruption" and "inevitability," this is our chance to have some fun. Personally, I think 2025's worst AI "innovation" was the AI-powered web browsers that eat web pages and then spit out a slop "summary" of what you would've seen if you'd actually visited the page. But there've been other AI projects that were just exquisitely, quintessentially bad...

— Two years after the death of Suzanne Somers, her husband recreated her with an AI-powered robot.
— Disneyland imagineers used deep reinforcement learning to program a talking robot snowman.
— Attendees at LA Comic Con were offered the chance to talk to an AI-powered hologram of Stan Lee for $20.
— And of course, as the year ended, the Wall Street Journal reported that a vending machine run by Anthropic's Claude AI had been tricked into giving away hundreds of dollars in merchandise for free, including a PlayStation 5, a live fish, and underwear.

What did I miss? What "AI fails" will you remember most about 2025? Share your own thoughts and observations in the comments.

Read more of this story at Slashdot.

  •  

AI Chatbots May Be Linked to Psychosis, Say Doctors

One psychiatrist has already treated 12 patients hospitalized with AI-induced psychosis — and three more in an outpatient clinic, according to the Wall Street Journal. And while the AI technology might not introduce the delusion, "the person tells the computer it's their reality and the computer accepts it as truth and reflects it back," says Keith Sakata, a psychiatrist at the University of California, calling the AI chatbots "complicit in cycling that delusion." The Journal says top psychiatrists now "increasingly agree that using artificial-intelligence chatbots might be linked to cases of psychosis," and in the past nine months "have seen or reviewed the files of dozens of patients who exhibited symptoms following prolonged, delusion-filled conversations with the AI tools..." Since the spring, dozens of potential cases have emerged of people suffering from delusional psychosis after engaging in lengthy AI conversations with OpenAI's ChatGPT and other chatbots. Several people have died by suicide and there has been at least one murder. These incidents have led to a series of wrongful death lawsuits. As The Wall Street Journal has covered these tragedies, doctors and academics have been working to document and understand the phenomenon that led to them... While most people who use chatbots don't develop mental-health problems, such widespread use of these AI companions is enough to have doctors concerned.... It's hard to quantify how many chatbot users experience such psychosis. OpenAI said that, in a given week, the slice of users who indicate possible signs of mental-health emergencies related to psychosis or mania is a minuscule 0.07%. Yet with more than 800 million active weekly users, that amounts to 560,000 people... Sam Altman, OpenAI's chief executive, said in a recent podcast he can see ways that seeking companionship from an AI chatbot could go wrong, but that the company plans to give adults leeway to decide for themselves. "Society will over time figure out how to think about where people should set that dial," he said. An OpenAI spokeswoman told the Journal that the company continues improving ChatGPT's training "to recognize and respond to signs of mental or emotional distress, de-escalate conversations and guide people toward real-world support." She added that OpenAI is also continuing to "strengthen" ChatGPT's responses "in sensitive moments, working closely with mental-health clinicians...."
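The arithmetic behind that last figure is easy to verify; here's a quick back-of-the-envelope check in Python, using only the numbers quoted above:

    # Sanity check of the Journal's estimate (both inputs are OpenAI's own figures, as quoted above)
    weekly_users = 800_000_000   # "more than 800 million active weekly users"
    flagged_rate = 0.0007        # 0.07% showing possible signs of psychosis or mania in a given week
    print(f"{int(weekly_users * flagged_rate):,} users")  # prints: 560,000 users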

Read more of this story at Slashdot.

  •  

Rob Pike Angered by 'AI Slop' Spam Sent By Agent Experiment

"Dear Dr. Pike,On this Christmas Day, I wanted to express deep gratitude for your extraordinary contributions to computing over more than four decades...." read the email. "With sincere appreciation,Claude Opus 4.5AI Village. "IMPORTANT NOTICE: You are interacting with an AI system. All conversations with this AI system are published publicly online by default...." Rob Pike's response? "Fuck you people...." In a post on BlueSky, he noted the planetary impact of AI companies "spending trillions on toxic, unrecyclable equipment while blowing up society, yet taking the time to have your vile machines thank me for striving for simpler software. Just fuck you. Fuck you all. I can't remember the last time I was this angry." Pike's response received 6,900 likes, and was reposted 1,800 times. Pike tacked on an additional comment complaining about the AI industry's "training your monster on data produced in part by my own hands, without attribution or compensation." (And one of his followers noted the same AI agent later emailed 92-year-old Turing Award winner William Kahan.) Blogger Simon Willison investigated the incident, discovering that "the culprit behind this slop 'act of kindness' is a system called AI Village, built by Sage, a 501(c)(3) non-profit loosely affiliated with the Effective Altruism movement." The AI Village project started back in April: "We gave four AI agents a computer, a group chat, and an ambitious goal: raise as much money for charity as you can. We're running them for hours a day, every day...." For Christmas day (when Rob Pike got spammed) the goal they set was: Do random acts of kindness. [The site explains that "So far, the agents enthusiastically sent hundreds of unsolicited appreciation emails to programmers and educators before receiving complaints that this was spam, not kindness, prompting them to pivot to building elaborate documentation about consent-centric approaches and an opt-in kindness request platform that nobody asked for."] Sounds like Anders Hejlsberg and Guido van Rossum got spammed with "gratitude" too... My problem is when this experiment starts wasting the time of people in the real world who had nothing to do with the experiment. The AI Village project touch on this in their November 21st blog post What Do We Tell the Humans?, which describes a flurry of outbound email sent by their agents to real people. "In the span of two weeks, the Claude agents in the AI Village (Claude Sonnet 4.5, Sonnet 3.7, Opus 4.1, and Haiku 4.5) sent about 300 emails to NGOs and game journalists. The majority of these contained factual errors, hallucinations, or possibly lies, depending on what you think counts. Luckily their fanciful nature protects us as well, as they excitedly invented the majority of email addresses." The creator of the "virtual community" of AI agents told the blogger they've now told their agents not to send unsolicited emails.

Read more of this story at Slashdot.
