
Do Gamers Hate AI? Indie Game Awards Disqualifies 'Clair Obscur' Over GenAI Usage

"Perhaps no group of fans, industry workers, and consumers is more intense about AI use than gamers...." writes New York magazine's "Intelligencer" column:

Just this month, the latest Postal game was axed by its publisher, which was "overwhelmed with negative responses" from the "concerned Postal community" after fans spotted AI-generated material in the game's trailer. The developers of Arc Raiders were accused of using AI instead of voice actors, leading to calls for boycotts, while the developers of the Call of Duty franchise were called out for AI-generated assets that players found strewn across Black Ops 7. Games that weren't developed with generative AI are getting caught up in accusations anyway, while workers at Electronic Arts are going to the press to describe pressure from bosses to adopt AI tools. Nintendo has sworn off using generative AI, as has the company behind the Cyberpunk series. Valve, the company that operates Steam, now requires AI disclosures on listed games and surveys all submitters.

Perhaps sensing the emergence of a new constituency, California congressman Ro Khanna responded in November to the Call of Duty backlash: "We need regulations that prevent companies from using AI to eliminate jobs to extract greater profits," he posted on X....

AI is often seen as a tool for managers to extract more productivity and justify layoffs. Among players, it can foster a sense that gamers are being tricked or ripped off, while also dovetailing with more general objections to generative AI. It can sometimes be hard to tell whether gamer backlash is a bellwether or an outlier, an early signal from our youngest major creative industry or a localized and unique fit of rage. The sheer number of incidents here suggests the former, which foretells bitter, messy, and confusing fights to come in entertainment beyond gaming — where, notably, technologies referred to as "AI" have previously been embraced with open arms.
And now "the price of the sort of memory PC gamers most want to buy has skyrocketed" (per Tom's Hardware). "The rush to build data centers is making it much more expensive to game. Nobody's going to be happy about that."

Insider Gaming shares another example of anti-AI sentiment in the gaming industry:

The Indie Game Awards took place on December 18, and, as many could assume, Clair Obscur: Expedition 33 took home the awards for Game of the Year and Debut Game. However, things have changed, and The Indie Game Awards are making a big decision to strip Clair Obscur and developer Sandfall Interactive of their awards over the use of gen AI in the game. In an announcement made on Saturday afternoon, Six One Indie, the creators of the show, said that its removal comes after a discovery made after voting was done and the show was recorded. "The Indie Game Awards have a hard stance on the use of gen AI throughout the nomination process and during the ceremony itself," the statement reads. "When it was submitted for consideration, representatives of Sandfall Interactive agreed that no gen AI was used in the development of Clair Obscur: Expedition 33."

Polygon notes the award-stripping is "due to inclusion of generative AI assets at launch that were quickly patched out."

Quotes from earlier in the year from Sandfall Interactive's François Meurisse made the rounds on social media last week amid a news cycle caught up in the use of generative AI in games... In June, the Spanish outlet El País published a story including an interview conducted around Clair Obscur's launch, in which Meurisse admitted that Sandfall used a minimal amount of generative AI in some form during the game's development... Clair Obscur: Expedition 33 launched with what some suspected to be AI-generated textures that, as the studio clarified to El País, were then replaced with custom assets in a swift patch five days after release.

Read more of this story at Slashdot.


Does AI Really Make Coders Faster?

One developer tells MIT Technology Review that AI tools weaken the coding instincts he used to have. And beyond that, "It's just not fun sitting there with my work being done for me." But is AI making coders faster? "After speaking to more than 30 developers, technology executives, analysts, and researchers, MIT Technology Review found that the picture is not as straightforward as it might seem..."

For some developers on the front lines, initial enthusiasm is waning as they bump up against the technology's limitations. And as a growing body of research suggests that the claimed productivity gains may be illusory, some are questioning whether the emperor is wearing any clothes....

Data from the developer analytics firm GitClear shows that most engineers are producing roughly 10% more durable code — code that isn't deleted or rewritten within weeks — since 2022, likely thanks to AI. But that gain has come with sharp declines in several measures of code quality. Stack Overflow's survey also found trust and positive sentiment toward AI tools falling significantly for the first time. And most provocatively, a July study by the nonprofit research organization Model Evaluation & Threat Research (METR) showed that while experienced developers believed AI made them 20% faster, objective tests showed they were actually 19% slower...

Developers interviewed by MIT Technology Review generally agree on where AI tools excel: producing "boilerplate code" (reusable chunks of code repeated in multiple places with little modification), writing tests, fixing bugs, and explaining unfamiliar code to new developers. Several noted that AI helps overcome the "blank page problem" by offering an imperfect first stab to get a developer's creative juices flowing. It can also let nontechnical colleagues quickly prototype software features, easing the load on already overworked engineers. These tasks can be tedious, and developers are typically glad to hand them off.
But they represent only a small part of an experienced engineer's workload. For the more complex problems where engineers really earn their bread, many developers told MIT Technology Review, the tools face significant hurdles... The models also just get things wrong. Like all LLMs, coding models are prone to "hallucinating" — it's an issue built into how they work. But because the code they output looks so polished, errors can be difficult to detect, says James Liu, director of software engineering at the advertising technology company Mediaocean.

Put all these flaws together, and using these tools can feel a lot like pulling a lever on a one-armed bandit. "Some projects you get a 20x improvement in terms of speed or efficiency," says Liu. "On other things, it just falls flat on its face, and you spend all this time trying to coax it into granting you the wish that you wanted and it's just not going to..."

There are also more specific security concerns, she says. Researchers have discovered a worrying class of hallucinations where models reference nonexistent software packages in their code. Attackers can exploit this by creating packages with those names that harbor vulnerabilities, which the model or developer may then unwittingly incorporate into software.

Other key points from the article:

- LLMs can only hold limited amounts of information in context windows, so "they struggle to parse large code bases and are prone to forgetting what they're doing on longer tasks."

- "While an LLM-generated response to a problem may work in isolation, software is made up of hundreds of interconnected modules. If these aren't built with consideration for other parts of the software, it can quickly lead to a tangled, inconsistent code base that's hard for humans to parse and, more important, to maintain."

- "Accumulating technical debt is inevitable in most projects, but AI tools make it much easier for time-pressured engineers to cut corners, says GitClear's Harding. And GitClear's data suggests this is happening at scale..."

- "As models improve, the code they produce is becoming increasingly verbose and complex, says Tariq Shaukat, CEO of Sonar, which makes tools for checking code quality. This is driving down the number of obvious bugs and security vulnerabilities, he says, but at the cost of increasing the number of 'code smells' — harder-to-pinpoint flaws that lead to maintenance problems and technical debt."

Yet the article cites a recent Stanford University study that found employment among software developers aged 22 to 25 dropped nearly 20% between 2022 and 2025, "coinciding with the rise of AI-powered coding tools." The story is part of MIT Technology Review's new Hype Correction series of articles about AI.
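One practical defense against the hallucinated-package problem the article describes is to reject any import that isn't on a team's vetted dependency list before anything gets installed. Here's a minimal sketch in Python using only the standard library's `ast` module; the allowlist and the package name `fast_json_utils` are hypothetical examples, not taken from the article:

```python
import ast

# Hypothetical allowlist of dependencies a team has actually vetted.
VETTED_PACKAGES = {"requests", "numpy", "flask"}
# Abbreviated stdlib set for illustration; a real check would use
# sys.stdlib_module_names (available in Python 3.10+).
STDLIB = {"os", "sys", "json", "math", "ast"}

def unvetted_imports(source: str) -> set[str]:
    """Return top-level module names imported in `source` that are
    neither standard library nor on the vetted allowlist."""
    tree = ast.parse(source)
    names = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            names.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
            names.add(node.module.split(".")[0])
    return names - VETTED_PACKAGES - STDLIB

# An LLM-generated snippet importing a plausible-sounding but
# nonexistent (possibly attacker-registered) package.
snippet = "import requests\nimport fast_json_utils\n"
print(sorted(unvetted_imports(snippet)))  # ['fast_json_utils']
```

A check like this flags the hallucinated name before `pip install` ever runs, which is the point at which a squatted malicious package would otherwise enter the build.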

Read more of this story at Slashdot.
