
Valve Has 'Significantly' Rewritten Steam's Rules For How Developers Must Disclose AI Use

By: msmash
January 19, 2026 at 17:35
Valve has substantially overhauled its guidelines for how game developers must disclose the use of generative AI on Steam, making explicit that tools like code assistants and other development aids do not fall under the disclosure requirement. The updated rules clarify that Valve's focus is not on "efficiency gains through the use of AI-powered dev tools." Developers must still disclose two specific categories: AI used to pre-generate in-game content, store page assets, or marketing materials, and AI that generates content such as images, audio, or text live during gameplay. Steam has required AI disclosures since 2024, and an analysis from July 2025 found nearly 8,000 titles released in the first half of that year had disclosed generative AI use, compared to roughly 1,000 for all of 2024. The disclosures are self-reported, however, so actual usage is likely higher.

Read more of this story at Slashdot.

This Hyundai Concept Wants To Be the Most Futuristic Camper Van on the Market

January 19, 2026 at 17:24

Hyundai's all-electric Staria van transforms into a futuristic camper. Still at the concept stage for now, it genuinely resembles a production model, with all the equipment needed for camping.

Elon Musk Wants To Drain the Coffers of OpenAI and Microsoft: He Is Demanding $134 Billion

January 19, 2026 at 15:57

Elon Musk is now seeking $134 billion from OpenAI and its partner Microsoft, denouncing the transformation of his originally humanist project into a lucrative commercial structure. A jury trial is officially expected in April 2026.

IMF Warns Global Economic Resilience at Risk if AI Falters

By: msmash
January 19, 2026 at 14:40
The "surprisingly resilient" global economy is at risk of being disrupted by a sharp reversal in the AI boom, the IMF warned on Monday, as world leaders prepared for talks in the Swiss resort of Davos. From a report: Risks to global economic expansion were "tilted to the downside," the fund said in an update to its World Economic Outlook, arguing that growth was reliant on a narrow range of drivers, notably the US technology sector and the associated equity boom. Nonetheless, it predicted US growth would strongly outpace the rest of the G7 this year, forecasting an expansion of 2.4 per cent in 2026 and 2 per cent in 2027. Tech investment had surged to its highest share of US economic output since 2001, helping drive growth, the IMF found. "There is a risk of a correction, a market correction, if expectations about AI gains in productivity and profitability are not realised," said Pierre-Olivier Gourinchas, IMF chief economist. "We're not yet at the levels of market frothiness, if you want, that we saw in the dotcom period," he added. "But nevertheless there are reasons to be somewhat concerned."

Read more of this story at Slashdot.

Is the Possibility of Conscious AI a Dangerous Myth?

January 19, 2026 at 05:45
This week Noema magazine published a 7,000-word exploration of our modern "Mythology Of Conscious AI," written by a neuroscience professor who directs the University of Sussex Centre for Consciousness Science:

The very idea of conscious AI rests on the assumption that consciousness is a matter of computation. More specifically, that implementing the right kind of computation, or information processing, is sufficient for consciousness to arise. This assumption, which philosophers call computational functionalism, is so deeply ingrained that it can be difficult to recognize it as an assumption at all. But that is what it is. And if it's wrong, as I think it may be, then real artificial consciousness is fully off the table, at least for the kinds of AI we're familiar with.

He makes detailed arguments against a computation-based consciousness (including "Simulation is not instantiation... If we simulate a living creature, we have not created life."). While a computer may seem like the perfect metaphor for a brain, the cognitive science of "dynamical systems" (among other approaches) rejects the idea that minds can be entirely accounted for algorithmically. And maybe actual life needs to be present before something can be declared conscious. He also warns that "many social and psychological factors, including some well-understood cognitive biases, predispose us to overattribute consciousness to machines." But then his essay reaches a surprising conclusion:

As redundant as it may sound, nobody should be deliberately setting out to create conscious AI, whether in the service of some poorly thought-through techno-rapture, or for any other reason. Creating conscious machines would be an ethical disaster. We would be introducing into the world new moral subjects, and with them the potential for new forms of suffering, at (potentially) an exponential pace. And if we give these systems rights, as arguably we should if they really are conscious, we will hamper our ability to control them, or to shut them down if we need to.

Even if I'm right that standard digital computers aren't up to the job, other emerging technologies might yet be, whether alternative forms of computation (analogue, neuromorphic, biological and so on) or rapidly developing methods in synthetic biology. For my money, we ought to be more worried about the accidental emergence of consciousness in cerebral organoids (brain-like structures typically grown from human embryonic stem cells) than in any new wave of LLMs.

But our worries don't stop there. When it comes to the impact of AI in society, it is essential to draw a distinction between AI systems that are actually conscious and those that persuasively seem to be conscious but are, in fact, not. While there is inevitable uncertainty about the former, conscious-seeming systems are much, much closer... Machines that seem conscious pose serious ethical issues distinct from those posed by actually conscious machines. For example, we might give AI systems "rights" that they don't actually need, since they would not actually be conscious, restricting our ability to control them for no good reason. More generally, either we decide to care about conscious-seeming AI, distorting our circles of moral concern, or we decide not to, and risk brutalizing our minds. As Immanuel Kant argued long ago in his lectures on ethics, treating conscious-seeming things as if they lack consciousness is a psychologically unhealthy place to be...

One overlooked factor here is that even if we know, or believe, that an AI is not conscious, we still might be unable to resist feeling that it is. Illusions of artificial consciousness might be as impenetrable to our minds as some visual illusions... What's more, because there's no consensus over the necessary or sufficient conditions for consciousness, there aren't any definitive tests for deciding whether an AI is actually conscious... Illusions of conscious AI are dangerous in their own distinctive ways, especially if we are constantly distracted and fascinated by the lure of truly sentient machines... If we conflate the richness of biological brains and human experience with the information-processing machinations of deepfake-boosted chatbots, or whatever the latest AI wizardry might be, we do our minds, brains and bodies a grave injustice. If we sell ourselves too cheaply to our machine creations, we overestimate them, and we underestimate ourselves...

The sociologist Sherry Turkle once said that technology can make us forget what we know about life. It's about time we started to remember.

Read more of this story at Slashdot.
