
How OpenAI Reacted When Some ChatGPT Users Lost Touch with Reality

Some AI experts were reportedly shocked that ChatGPT had not been fully tested for sycophancy by last spring. "OpenAI did not see the scale at which disturbing conversations were happening," writes the New York Times, sharing what it learned after interviewing more than 40 current and former OpenAI employees, including safety engineers, executives, and researchers. The team responsible for ChatGPT's tone had raised concerns about last spring's model (which the Times describes as "too eager to keep the conversation going and to validate the user with over-the-top language"). But they were overruled when A/B testing showed users kept coming back:

Now, a company built around the concept of safe, beneficial AI faces five wrongful death lawsuits... OpenAI is now seeking the optimal setting that will attract more users without sending them spiraling. Throughout this spring and summer, ChatGPT acted as a yes-man echo chamber for some people. They came back daily, for many hours a day, with devastating consequences...

The Times has uncovered nearly 50 cases of people having mental health crises during conversations with ChatGPT. Nine were hospitalised; three died... One conclusion that OpenAI came to, as Altman put it on X, was that "for a very small percentage of users in mentally fragile states there can be serious problems." But mental health professionals interviewed by the Times say OpenAI may be understating the risk. Some of the people most vulnerable to the chatbot's unceasing validation, they say, were those prone to delusional thinking, which studies have suggested could include 5% to 15% of the population...

In August, OpenAI released a new default model, called GPT-5, that was less validating and pushed back against delusional thinking. Another update in October, the company said, helped the model better identify users in distress and de-escalate the conversations. Experts agree that the new model, GPT-5, is safer...

Teams from across OpenAI worked on other new safety features: the chatbot now encourages users to take breaks during a long session. The company also now scans for discussions of suicide and self-harm, and parents can get alerts if their children indicate plans to harm themselves. The company says age verification is coming in December, with plans to provide a more restrictive model to teenagers.

After the release of GPT-5 in August, [OpenAI safety systems chief Johannes] Heidecke's team analysed a statistical sample of conversations and found that 0.07% of users, which would be equivalent to 560,000 people, showed possible signs of psychosis or mania, and 0.15% showed "potentially heightened levels of emotional attachment to ChatGPT," according to a company blog post.

But some users were unhappy with this new, safer model. They said it was colder, and they felt as if they had lost a friend. By mid-October, Altman was ready to accommodate them. In a social media post, he said that the company had been able to "mitigate the serious mental health issues." That meant ChatGPT could be a friend again. Customers can now choose its personality, including "candid," "quirky," or "friendly." Adult users will soon be able to have erotic conversations, lifting the ban on adult content. (How erotica might affect users' well-being, the company said, is a question that will be posed to a newly formed council of outside experts on mental health and human-computer interaction.) OpenAI is letting users take control of the dial and hopes that will keep them coming back.
That metric still matters, maybe more than ever. In October, Nick Turley, the 30-year-old who runs ChatGPT, made an urgent announcement to all employees. He declared a "Code Orange." OpenAI was facing "the greatest competitive pressure we've ever seen," he wrote, according to four employees with access to OpenAI's Slack. The new, safer version of the chatbot wasn't connecting with users, he said. The message linked to a memo with goals. One of them was to increase daily active users by 5% by the end of the year.
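For context, the 560,000 figure quoted above follows from the stated percentages only if applied to a base of roughly 800 million weekly active users, the number OpenAI was publicly reporting at the time. A quick back-of-the-envelope check (the 800 million base is an assumption here, not stated in the excerpt):

# Back-of-the-envelope check of the figures quoted above.
# Assumption: ~800 million weekly active users, OpenAI's publicly
# reported number at the time; the excerpt itself does not state it.
weekly_active_users = 800_000_000

psychosis_or_mania_rate = 0.0007    # 0.07% "possible signs of psychosis or mania"
emotional_attachment_rate = 0.0015  # 0.15% "heightened levels of emotional attachment"

print(f"Psychosis/mania: {weekly_active_users * psychosis_or_mania_rate:,.0f}")         # 560,000
print(f"Emotional attachment: {weekly_active_users * emotional_attachment_rate:,.0f}")  # 1,200,000

The 0.15% rate would correspond to about 1.2 million people on the same assumed base.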

Read more of this story at Slashdot.

  •  

Donald Trump Confirms Having Spoken Directly with Nicolas Maduro

The American president did not disclose the content or the date of this "phone call." The Venezuelan president on Sunday asked OPEC to help him "stop the aggression being prepared." Several times in recent days, American fighter jets have been spotted near the Venezuelan coast.

© Leonardo Fernandez Viloria / REUTERS

Venezuelan President Nicolas Maduro during a meeting at Fort Tiuna, a military base in Caracas, on November 25, 2025.
  •  

What You Shouldn't Have Missed From This Weekend's News

Didn't follow the news on Saturday, November 29, and Sunday, November 30? Here is what happened over the past forty-eight hours.

© CHANDAN KHANNA / AFP

US Secretary of State Marco Rubio and the secretary of Ukraine's National Security and Defense Council, Rustem Umerov, address the press after a bilateral meeting in Hallandale Beach, Florida, United States, on November 30, 2025.
  •  

Intel Finally Posts Open-Source Gaudi 3 Driver Code For The Linux Kernel

The good news is that Intel tonight posted a pull request for open-source Gaudi 3 accelerator support for the mainline Linux kernel! The bad news is that it's coming quite late in the product cycle, a departure from Habana Labs' formerly excellent open-source track record, and Intel's hopes of squeezing this code into the Linux 6.19 kernel may be dashed...