
Valve Has 'Significantly' Rewritten Steam's Rules For How Developers Must Disclose AI Use

By: msmash
January 19, 2026 at 17:35
Valve has substantially overhauled its guidelines for how game developers must disclose the use of generative AI on Steam, making explicit that tools like code assistants and other development aids do not fall under the disclosure requirement. The updated rules clarify that Valve's focus is not on "efficiency gains through the use of AI-powered dev tools." Developers must still disclose two specific categories: AI used to generate in-game content, store page assets, or marketing materials, and AI that creates content like images, audio, or text during gameplay itself. Steam has required AI disclosures since 2024, and an analysis from July 2025 found nearly 8,000 titles released in the first half of that year had disclosed generative AI use, compared to roughly 1,000 for all of 2024. The disclosures remain voluntary, so actual usage is likely higher.

Read more of this story at Slashdot.

This Hyundai Concept Wants To Be the Most Futuristic Camper Van on the Market

January 19, 2026 at 17:24

Hyundai's all-electric Staria van transforms into a futuristic camper. Still a concept for now, it looks very much like a production model, with all the equipment needed for an overnight stay in the wild.

Elon Musk Wants To Empty OpenAI's and Microsoft's Coffers: He Is Demanding $134 Billion

January 19, 2026 at 15:57

Elon Musk is now demanding $134 billion from OpenAI and its partner Microsoft, denouncing the transformation of his original humanist project into a lucrative commercial venture. A jury trial is officially expected in April 2026.

IMF Warns Global Economic Resilience at Risk if AI Falters

By: msmash
January 19, 2026 at 14:40
The "surprisingly resilient" global economy is at risk of being disrupted by a sharp reversal in the AI boom, the IMF warned on Monday, as world leaders prepared for talks in the Swiss resort of Davos. From a report: Risks to global economic expansion were "tilted to the downside," the fund said in an update to its World Economic Outlook, arguing that growth was reliant on a narrow range of drivers, notably the US technology sector and the associated equity boom. Nonetheless, it predicted US growth would strongly outpace the rest of the G7 this year, forecasting an expansion of 2.4 per cent in 2026 and 2 per cent in 2027. Tech investment had surged to its highest share of US economic output since 2001, helping drive growth, the IMF found. "There is a risk of a correction, a market correction, if expectations about AI gains in productivity and profitability are not realised," said Pierre-Olivier Gourinchas, IMF chief economist. "We're not yet at the levels of market frothiness, if you want, that we saw in the dotcom period," he added. "But nevertheless there are reasons to be somewhat concerned."

Read more of this story at Slashdot.

Is the Possibility of Conscious AI a Dangerous Myth?

January 19, 2026 at 05:45
This week Noema magazine published a 7,000-word exploration of our modern "Mythology Of Conscious AI" written by a neuroscience professor who directs the University of Sussex Centre for Consciousness Science: The very idea of conscious AI rests on the assumption that consciousness is a matter of computation. More specifically, that implementing the right kind of computation, or information processing, is sufficient for consciousness to arise. This assumption, which philosophers call computational functionalism, is so deeply ingrained that it can be difficult to recognize it as an assumption at all. But that is what it is. And if it's wrong, as I think it may be, then real artificial consciousness is fully off the table, at least for the kinds of AI we're familiar with. He makes detailed arguments against a computation-based consciousness (including "Simulation is not instantiation... If we simulate a living creature, we have not created life.") While a computer may seem like the perfect metaphor for a brain, the cognitive science of "dynamical systems" (and other approaches) reject the idea that minds can be entirely accounted for algorithmically. And maybe actual life needs to be present before something can be declared conscious. He also warns that "Many social and psychological factors, including some well-understood cognitive biases, predispose us to overattribute consciousness to machines." But then his essay reaches a surprising conclusion: As redundant as it may sound, nobody should be deliberately setting out to create conscious AI, whether in the service of some poorly thought-through techno-rapture, or for any other reason. Creating conscious machines would be an ethical disaster. We would be introducing into the world new moral subjects, and with them the potential for new forms of suffering, at (potentially) an exponential pace. 
And if we give these systems rights, as arguably we should if they really are conscious, we will hamper our ability to control them, or to shut them down if we need to. Even if I'm right that standard digital computers aren't up to the job, other emerging technologies might yet be, whether alternative forms of computation (analogue, neuromorphic, biological and so on) or rapidly developing methods in synthetic biology. For my money, we ought to be more worried about the accidental emergence of consciousness in cerebral organoids (brain-like structures typically grown from human embryonic stem cells) than in any new wave of LLM. But our worries don't stop there. When it comes to the impact of AI in society, it is essential to draw a distinction between AI systems that are actually conscious and those that persuasively seem to be conscious but are, in fact, not. While there is inevitable uncertainty about the former, conscious-seeming systems are much, much closer... Machines that seem conscious pose serious ethical issues distinct from those posed by actually conscious machines. For example, we might give AI systems "rights" that they don't actually need, since they would not actually be conscious, restricting our ability to control them for no good reason. More generally, either we decide to care about conscious-seeming AI, distorting our circles of moral concern, or we decide not to, and risk brutalizing our minds. As Immanuel Kant argued long ago in his lectures on ethics, treating conscious-seeming things as if they lack consciousness is a psychologically unhealthy place to be... One overlooked factor here is that even if we know, or believe, that an AI is not conscious, we still might be unable to resist feeling that it is. Illusions of artificial consciousness might be as impenetrable to our minds as some visual illusions... 
What's more, because there's no consensus over the necessary or sufficient conditions for consciousness, there aren't any definitive tests for deciding whether an AI is actually conscious.... Illusions of conscious AI are dangerous in their own distinctive ways, especially if we are constantly distracted and fascinated by the lure of truly sentient machines... If we conflate the richness of biological brains and human experience with the information-processing machinations of deepfake-boosted chatbots, or whatever the latest AI wizardry might be, we do our minds, brains and bodies a grave injustice. If we sell ourselves too cheaply to our machine creations, we overestimate them, and we underestimate ourselves... The sociologist Sherry Turkle once said that technology can make us forget what we know about life. It's about time we started to remember.

Read more of this story at Slashdot.

Retailers Rush to Implement AI-Assisted Shopping and Orders

January 18, 2026 at 08:54
This week Google "unveiled a set of tools for retailers that helps them roll out AI agents," reports the Wall Street Journal: The new retail AI agents, which help shoppers find their desired items, provide customer support and let people order food at restaurants, are part of what Alphabet-owned Google calls Gemini Enterprise for Customer Experience. Major retailers, including home improvement giant Lowe's, the grocer Kroger and pizza chain Papa Johns say they are already using Google's tools to help prepare for the incoming wave of AI-assisted shopping and ordering... Kicking off the race among tech giants to get ahead of this shift, OpenAI released its Instant Checkout feature last fall, which lets users buy stuff directly through its chatbot ChatGPT. In January, Microsoft announced a similar checkout feature for its Copilot chatbot. Soon after OpenAI's release last year, Walmart said it would partner with OpenAI to let shoppers buy its products within ChatGPT. But that's just the beginning, reports the New York Times, with hundreds of start-ups also vying for the attention of retailers: There are A.I. start-ups that offer in-store cameras that can detect a customer's age or gender, robots that manage shelves on their own and headsets that give store workers access to product information in real time... The scramble to exploit artificial intelligence is happening across the retail spectrum, from the highest echelons of luxury goods to the most pragmatic of convenience stores. 7-Eleven said it was using conversational A.I. to hire staff at its convenience stores through an agent named Rita (Recruiting Individuals Through Automation). Executives said that they no longer had to worry about whether applicants would show up to interviews and that the system had reduced hiring time, which had taken two weeks, to less than three days. 
The article notes that at the National Retail Federation conference, other companies showing their AI advancements included Applebee's, IHOP, the Vitamin Shoppe, Urban Outfitters, Rag & Bone, Kendra Scott, Michael Kors and Philip Morris.

Read more of this story at Slashdot.

How Much Do AI Models Resemble a Brain?

January 18, 2026 at 02:34
At the AI safety site Foom, science journalist Mordechai Rorvig explores a paper presented at November's Empirical Methods in Natural Language Processing conference: [R]esearchers at the Swiss Federal Institute of Technology (EPFL), the Massachusetts Institute of Technology (MIT), and Georgia Tech revisited earlier findings that showed that language models, the engines of commercial AI chatbots, show strong signal correlations with the human language network, the region of the brain responsible for processing language... The results lend clarity to the surprising picture that has been emerging from the last decade of neuroscience research: That AI programs can show strong resemblances to large-scale brain regions — performing similar functions, and doing so using highly similar signal patterns. Such resemblances have been exploited by neuroscientists to make much better models of cortical regions. Perhaps more importantly, the links between AI and cortex provide an interpretation of commercial AI technology as being profoundly brain-like, validating both its capabilities as well as the risks it might pose for society as the first synthetic braintech. "It is something we, as a community, need to think about a lot more," said Badr AlKhamissi, doctoral student in computer science at EPFL and first author of the preprint, in an interview with Foom. "These models are getting better and better every day. And their similarity to the brain [or brain regions] is also getting better — probably. We're not 100% sure about it...." There are many known limitations with seeing AI programs as models of brain regions, even those that have high signal correlations. For example, such models lack any direct implementations of biochemical signalling, which is known to be important for the functioning of nervous systems. However, if such comparisons are valid, then they would suggest, somewhat dramatically, that we are increasingly surrounded by a synthetic braintech. 
A technology not just as capable as the human brain, in some ways, but actually made up of similar components. Thanks to Slashdot reader Gazelle Bay for sharing the article.

Read more of this story at Slashdot.

OpenAI Prepares To Test Advertising in ChatGPT in the United States

January 17, 2026 at 10:30

OpenAI will soon integrate advertising into ChatGPT for users in the United States. In the coming weeks, the free and Go tiers of the AI will test the addition of ad placements.

Partly AI-Generated Folk-Pop Hit Barred From Sweden's Official Charts

By: msmash
January 16, 2026 at 19:25
An anonymous reader shares a report: A hit song has been excluded from Sweden's official chart after it emerged the "artist" behind it was an AI creation. I Know, You're Not Mine -- or Jag Vet, Du Är Inte Min in Swedish -- by a singer called Jacub has been a streaming success in Sweden, topping the Spotify rankings. However, the Swedish music trade body has excluded the song from the official chart after learning it was AI-generated. "Jacub's track has been excluded from Sweden's official chart, Sverigetopplistan, which is compiled by IFPI Sweden. While the song appears on Spotify's own charts, it does not qualify for inclusion on the official chart under the current rules," said an IFPI Sweden spokesperson. Ludvig Werber, IFPI Sweden's chief executive, said: "Our rule is that if it is a song that is mainly AI-generated, it does not have the right to be on the top list."

Read more of this story at Slashdot.

Ads Are Coming To ChatGPT in the Coming Weeks

By: msmash
January 16, 2026 at 18:45
OpenAI said Friday that it will begin testing ads on ChatGPT in the coming weeks, as the $500 billion startup seeks new revenue streams to fund its continued expansion and compete against rivals Google and Anthropic. The company had previously resisted embedding ads into its chatbot, citing concerns that doing so could undermine the trustworthiness and objectivity of responses. The ads will appear at the bottom of ChatGPT answers on the free tier and the $8-per-month ChatGPT Go subscription in the U.S., showing only when relevant to the user's query. Pro, Business, and Enterprise subscriptions will remain ad-free. OpenAI expects to generate "low billions" of dollars from advertising in 2026, FT reported, and more in subsequent years. The revenue is intended to help fund roughly $1.4 trillion in computing commitments over the next decade. The company said it will not show ads to users under 18 or near sensitive topics like health, mental health, or politics.

Read more of this story at Slashdot.

Anthropic's Index Shows Job Evolution Over Replacement

By: msmash
January 15, 2026 at 14:40
Anthropic's fourth installment of its Economic Index, drawing on an anonymized sample of two million Claude conversations from November 2025, finds that AI is changing how people work rather than whether they work at all. The study tracked usage across the company's consumer-facing Claude.ai platform and its API, categorizing interactions as either automation (where AI completes tasks entirely) or augmentation (where humans and AI collaborate). The split came out to 52% augmentation and 45% automation on Claude.ai, a slight shift from January 2025 when augmentation led 55% to 41%. The share of jobs using AI for at least a quarter of their tasks has risen from 36% in January to 49% across pooled data from multiple reports. Anthropic's researchers also found that AI delivers its largest productivity gains on complex work requiring college-level education, speeding up those tasks by a factor of 12 compared to 9 for high-school-level work. Claude completes college-degree tasks successfully 66% of the time versus 70% for simpler work. Computer and mathematical tasks continue to dominate usage, accounting for roughly a third of Claude.ai conversations and nearly half of API traffic.

Read more of this story at Slashdot.

ChatGPT Translate: OpenAI Launches Its Google Translate Competitor

January 15, 2026 at 14:18

Without warning, OpenAI launched its in-house translation tool, dubbed "ChatGPT Translate," on January 15, 2026. Already accessible, the feature remains quite rudimentary at this stage.

Warhammer Maker Games Workshop Bans Its Staff From Using AI In Its Content or Designs

By: BeauHD
January 15, 2026 at 10:00
Games Workshop, the owner and operator of a number of hugely popular tabletop war games, including Warhammer 40,000 and Age of Sigmar, has banned the use of generative AI in its content and design processes. IGN reports: Delivering the UK company's impressive financial results, CEO Kevin Rountree addressed the issue of AI and how Games Workshop is handling it. He said GW staff are barred from using it to actually produce anything, but admitted a "few" senior managers are experimenting with it. Rountree said AI was "a very broad topic and to be honest I'm not an expert on it," then went on to lay down the company line: "We do have a few senior managers that are [experts on AI]: none are that excited about it yet. We have agreed an internal policy to guide us all, which is currently very cautious e.g. we do not allow AI generated content or AI to be used in our design processes or its unauthorized use outside of GW including in any of our competitions. We also have to monitor and protect ourselves from a data compliance, security and governance perspective, the AI or machine learning engines seem to be automatically included on our phones or laptops whether we like it or not. We are allowing those few senior managers to continue to be inquisitive about the technology. We have also agreed we will be maintaining a strong commitment to protect our intellectual property and respect our human creators. In the period reported, we continued to invest in our Warhammer Studio -- hiring more creatives in multiple disciplines from concepting and art to writing and sculpting. Talented and passionate individuals that make Warhammer the rich, evocative IP that our hobbyists and we all love."

Read more of this story at Slashdot.

Internet in Iran: Starlink Activates Its Links Quietly, Unlike in Venezuela

January 15, 2026 at 09:35


As Iran imposes a total digital blackout to stifle protests, Starlink has made its service free in the country. It is a breath of oxygen for the clandestine terminals already in place, but the move stands out for its discretion: unlike in Venezuela, Elon Musk has this time chosen to act without fanfare.

Cerebras Scores OpenAI Deal Worth Over $10 Billion

By: BeauHD
January 15, 2026 at 00:45
Cerebras Systems landed a more than $10 billion deal to supply up to 750 megawatts of compute to OpenAI through 2028, according to a blog post by OpenAI. CNBC reports: The deal will help diversify Cerebras away from the United Arab Emirates' G42, which accounted for 87% of revenue in the first half of 2024. "The way you have three very large customers is start with one very large customer, and you keep them happy, and then you win the second one," Cerebras' co-founder and CEO Andrew Feldman told CNBC in an interview. Cerebras has built a large processor that can train and run generative artificial intelligence models. [...] "Cerebras adds a dedicated low-latency inference solution to our platform," Sachin Katti, who works on compute infrastructure at OpenAI, wrote in the blog. "That means faster responses, more natural interactions, and a stronger foundation to scale real-time AI to many more people." The deal comes months after OpenAI worked with Cerebras to ensure that its gpt-oss open-weight models would work smoothly on Cerebras silicon, alongside chips from Nvidia and Advanced Micro Devices. OpenAI's gpt-oss collaboration led to technical conversations with Cerebras, and the two companies signed a term sheet just before Thanksgiving, Feldman said in an interview with CNBC. The report notes that this deal helps strengthen Cerebras' IPO prospects. The $10+ billion OpenAI deal materially improves revenue visibility, customer diversification, and strategic credibility, addressing key concerns from its withdrawn filing and setting the stage for a more compelling refile with updated financials and narrative.

Read more of this story at Slashdot.

Bandcamp Bans AI Music

By: BeauHD
January 14, 2026 at 22:40
Bandcamp has announced a ban on music made wholly or substantially by generative AI, aiming to protect human creativity and prohibit AI impersonation of artists. Here's what the music platform had to say: ... Something that always strikes us as we put together a roundup like this is the sheer quantity of human creativity and passion that artists express on Bandcamp every single day. The fact that Bandcamp is home to such a vibrant community of real people making incredible music is something we want to protect and maintain. Today, in line with that goal, we're articulating our policy on generative AI. We want musicians to keep making music, and for fans to have confidence that the music they find on Bandcamp was created by humans. Our guidelines for generative AI in music and audio are as follows: - Music and audio that is generated wholly or in substantial part by AI is not permitted on Bandcamp. - Any use of AI tools to impersonate other artists or styles is strictly prohibited in accordance with our existing policies prohibiting impersonation and intellectual property infringement. If you encounter music or audio that appears to be made entirely or with heavy reliance on generative AI, please use our reporting tools to flag the content for review by our team. We reserve the right to remove any music on suspicion of being AI generated. We will be sure to communicate any updates to the policy as the rapidly changing generative AI space develops. Given the response around this to our previous posts, we hope this news is welcomed. We wish you all an amazing 2026. [...]

Read more of this story at Slashdot.

Matthew McConaughey Trademarks Himself To Fight AI Misuse

By: msmash
January 14, 2026 at 16:02
Matthew McConaughey is taking a novel legal approach to combat unauthorized AI fakes: trademarking himself. From a report: Over the past several months, the "Interstellar" and "Magic Mike" star has had eight trademark applications approved by the U.S. Patent and Trademark Office featuring him staring, smiling and talking. His attorneys said the trademarks are meant to stop AI apps or users from simulating McConaughey's voice or likeness without permission -- an increasingly common concern of performers. The trademarks include a seven-second clip of the Oscar-winner standing on a porch, a three-second clip of him sitting in front of a Christmas tree, and audio of him saying "Alright, alright, alright," his famous line from the 1993 movie "Dazed and Confused," according to the approved applications. "My team and I want to know that when my voice or likeness is ever used, it's because I approved and signed off on it," the actor said in an email. "We want to create a clear perimeter around ownership with consent and attribution the norm in an AI world."

Read more of this story at Slashdot.

Signal Creator Marlinspike Wants To Do For AI What He Did For Messaging

By: msmash
January 13, 2026 at 15:29
Moxie Marlinspike, the engineer who created Signal Messenger and set a new standard for private communications, is now trialing Confer, an open source AI assistant designed to make user data unreadable to platform operators, hackers, and law enforcement alike. Confer relies on two core technologies: passkeys that generate a 32-byte encryption keypair stored only on user devices, and trusted execution environments on servers that prevent even administrators from accessing data. The code is open source and cryptographically verifiable through remote attestation and transparency logs. Marlinspike likens current AI interactions to confessing into a "data lake." A court order last May required OpenAI to preserve all ChatGPT user logs including deleted chats, and CEO Sam Altman has acknowledged that even psychotherapy sessions on the platform may not stay private.

Read more of this story at Slashdot.

Goodbye AirPods? OpenAI and Jony Ive's Secret Project Comes Into View

January 13, 2026 at 14:35

According to a post published on X on January 12, 2026 by the insider zhihuipikachu, OpenAI and Jony Ive are reportedly prioritizing work on a "novel" audio device intended to compete with the AirPods. Here is what we know so far.
