Reading view

Partly AI-Generated Folk-Pop Hit Barred From Sweden's Official Charts

An anonymous reader shares a report: A hit song has been excluded from Sweden's official chart after it emerged the "artist" behind it was an AI creation. I Know, You're Not Mine -- or Jag Vet, Du Är Inte Min in Swedish -- by a singer called Jacub has been a streaming success in Sweden, topping the Spotify rankings. However, the Swedish music trade body has excluded the song from the official chart after learning it was AI-generated. "Jacub's track has been excluded from Sweden's official chart, Sverigetopplistan, which is compiled by IFPI Sweden. While the song appears on Spotify's own charts, it does not qualify for inclusion on the official chart under the current rules," said an IFPI Sweden spokesperson. Ludvig Werber, IFPI Sweden's chief executive, said: "Our rule is that if it is a song that is mainly AI-generated, it does not have the right to be on the top list."

Read more of this story at Slashdot.

  •  

Ads Are Coming To ChatGPT in the Coming Weeks

OpenAI said Friday that it will begin testing ads on ChatGPT in the coming weeks, as the $500 billion startup seeks new revenue streams to fund its continued expansion and compete against rivals Google and Anthropic. The company had previously resisted embedding ads into its chatbot, citing concerns that doing so could undermine the trustworthiness and objectivity of responses. The ads will appear at the bottom of ChatGPT answers on the free tier and the $8-per-month ChatGPT Go subscription in the U.S., showing only when relevant to the user's query. Pro, Business, and Enterprise subscriptions will remain ad-free. OpenAI expects to generate "low billions" of dollars from advertising in 2026, FT reported, and more in subsequent years. The revenue is intended to help fund roughly $1.4 trillion in computing commitments over the next decade. The company said it will not show ads to users under 18 or near sensitive topics like health, mental health, or politics.

Read more of this story at Slashdot.

  •  

Anthropic's Index Shows Job Evolution Over Replacement

Anthropic's fourth installment of its Economic Index, drawing on an anonymized sample of two million Claude conversations from November 2025, finds that AI is changing how people work rather than whether they work at all. The study tracked usage across the company's consumer-facing Claude.ai platform and its API, categorizing interactions as either automation (where AI completes tasks entirely) or augmentation (where humans and AI collaborate). The split came out to 52% augmentation and 45% automation on Claude.ai, a slight shift from January 2025 when augmentation led 55% to 41%. The share of jobs using AI for at least a quarter of their tasks has risen from 36% in January to 49% across pooled data from multiple reports. Anthropic's researchers also found that AI delivers its largest productivity gains on complex work requiring college-level education, speeding up those tasks by a factor of 12 compared to 9 for high-school-level work. Claude completes college-degree tasks successfully 66% of the time versus 70% for simpler work. Computer and mathematical tasks continue to dominate usage, accounting for roughly a third of Claude.ai conversations and nearly half of API traffic.
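
As a rough illustration of how such a split is computed, the toy tally below uses made-up labels, not Anthropic's actual pipeline: each sampled conversation is tagged as automation, augmentation, or neither, and the reported percentages are simply label shares. Presumably the two figures do not sum to 100% because some conversations go unclassified; that reading is an assumption on our part.

from collections import Counter

# Hypothetical labels for a handful of sampled conversations; None stands in
# for a conversation that could not be assigned to either category.
labels = ["augmentation", "automation", "augmentation", None, "automation"]

counts = Counter(label for label in labels if label is not None)
total = len(labels)  # shares are taken over all conversations, so they
                     # need not sum to 100% when some are unclassified
for kind, n in counts.items():
    print(f"{kind}: {n / total:.0%}")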

Read more of this story at Slashdot.

  •  

Warhammer Maker Games Workshop Bans Its Staff From Using AI In Its Content or Designs

Games Workshop, the owner and operator of a number of hugely popular tabletop war games, including Warhammer 40,000 and Age of Sigmar, has banned the use of generative AI in its content and design processes. IGN reports: Delivering the UK company's impressive financial results, CEO Kevin Rountree addressed the issue of AI and how Games Workshop is handling it. He said GW staff are barred from using it to actually produce anything, but admitted a "few" senior managers are experimenting with it. Rountree said AI was "a very broad topic and to be honest I'm not an expert on it," then went on to lay down the company line: "We do have a few senior managers that are [experts on AI]: none are that excited about it yet. We have agreed an internal policy to guide us all, which is currently very cautious e.g. we do not allow AI generated content or AI to be used in our design processes or its unauthorized use outside of GW including in any of our competitions. We also have to monitor and protect ourselves from a data compliance, security and governance perspective, the AI or machine learning engines seem to be automatically included on our phones or laptops whether we like it or not. We are allowing those few senior managers to continue to be inquisitive about the technology. We have also agreed we will be maintaining a strong commitment to protect our intellectual property and respect our human creators. In the period reported, we continued to invest in our Warhammer Studio -- hiring more creatives in multiple disciplines from concepting and art to writing and sculpting. Talented and passionate individuals that make Warhammer the rich, evocative IP that our hobbyists and we all love."

Read more of this story at Slashdot.

  •  

Internet in Iran: Starlink Quietly Activates Its Links, Unlike in Venezuela


As Iran imposes a total digital blackout to stifle protests, Starlink has made its service free in the country. It is a lifeline for the clandestine terminals already on the ground, but the operation stands out for its discretion: unlike in Venezuela, Elon Musk has this time chosen to act without fanfare.

  •  

Cerebras Scores OpenAI Deal Worth Over $10 Billion

Cerebras Systems landed a more than $10 billion deal to supply up to 750 megawatts of compute to OpenAI through 2028, according to a blog post by OpenAI. CNBC reports: The deal will help diversify Cerebras away from the United Arab Emirates' G42, which accounted for 87% of revenue in the first half of 2024. "The way you have three very large customers is start with one very large customer, and you keep them happy, and then you win the second one," Cerebras' co-founder and CEO Andrew Feldman told CNBC in an interview. Cerebras has built a large processor that can train and run generative artificial intelligence models. [...] "Cerebras adds a dedicated low-latency inference solution to our platform," Sachin Katti, who works on compute infrastructure at OpenAI, wrote in the blog. "That means faster responses, more natural interactions, and a stronger foundation to scale real-time AI to many more people." The deal comes months after OpenAI worked with Cerebras to ensure that its gpt-oss open-weight models would work smoothly on Cerebras silicon, alongside chips from Nvidia and Advanced Micro Devices. OpenAI's gpt-oss collaboration led to technical conversations with Cerebras, and the two companies signed a term sheet just before Thanksgiving, Feldman said in an interview with CNBC. The report notes that this deal helps strengthen Cerebras' IPO prospects. The $10+ billion OpenAI deal materially improves revenue visibility, customer diversification, and strategic credibility, addressing key concerns from its withdrawn filing and setting the stage for a more compelling refile with updated financials and narrative.

Read more of this story at Slashdot.

  •  

Bandcamp Bans AI Music

Bandcamp has announced a ban on music made wholly or substantially by generative AI, aiming to protect human creativity and prohibit AI impersonation of artists. Here's what the music platform had to say: ... Something that always strikes us as we put together a roundup like this is the sheer quantity of human creativity and passion that artists express on Bandcamp every single day. The fact that Bandcamp is home to such a vibrant community of real people making incredible music is something we want to protect and maintain. Today, in line with that goal, we're articulating our policy on generative AI. We want musicians to keep making music, and for fans to have confidence that the music they find on Bandcamp was created by humans. Our guidelines for generative AI in music and audio are as follows:

- Music and audio that is generated wholly or in substantial part by AI is not permitted on Bandcamp.
- Any use of AI tools to impersonate other artists or styles is strictly prohibited in accordance with our existing policies prohibiting impersonation and intellectual property infringement.

If you encounter music or audio that appears to be made entirely or with heavy reliance on generative AI, please use our reporting tools to flag the content for review by our team. We reserve the right to remove any music on suspicion of being AI generated. We will be sure to communicate any updates to the policy as the rapidly changing generative AI space develops. Given the response around this to our previous posts, we hope this news is welcomed. We wish you all an amazing 2026. [...]

Read more of this story at Slashdot.

  •  

Matthew McConaughey Trademarks Himself To Fight AI Misuse

Matthew McConaughey is taking a novel legal approach to combat unauthorized AI fakes: trademarking himself. From a report: Over the past several months, the "Interstellar" and "Magic Mike" star has had eight trademark applications approved by the U.S. Patent and Trademark Office featuring him staring, smiling and talking. His attorneys said the trademarks are meant to stop AI apps or users from simulating McConaughey's voice or likeness without permission -- an increasingly common concern of performers. The trademarks include a seven-second clip of the Oscar-winner standing on a porch, a three-second clip of him sitting in front of a Christmas tree, and audio of him saying "Alright, alright, alright," his famous line from the 1993 movie "Dazed and Confused," according to the approved applications. "My team and I want to know that when my voice or likeness is ever used, it's because I approved and signed off on it," the actor said in an email. "We want to create a clear perimeter around ownership with consent and attribution the norm in an AI world."

Read more of this story at Slashdot.

  •  

Signal Creator Marlinspike Wants To Do For AI What He Did For Messaging

Moxie Marlinspike, the engineer who created Signal Messenger and set a new standard for private communications, is now trialing Confer, an open source AI assistant designed to make user data unreadable to platform operators, hackers, and law enforcement alike. Confer relies on two core technologies: passkeys that generate a 32-byte encryption keypair stored only on user devices, and trusted execution environments on servers that prevent even administrators from accessing data. The code is open source and cryptographically verifiable through remote attestation and transparency logs. Marlinspike likens current AI interactions to confessing into a "data lake." A court order last May required OpenAI to preserve all ChatGPT user logs including deleted chats, and CEO Sam Altman has acknowledged that even psychotherapy sessions on the platform may not stay private.
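
The article describes the architecture only at a high level, but the client-side half of the pattern (a secret that never leaves the user's device, so the operator stores only ciphertext) can be sketched in a few lines. This is a hypothetical illustration, not Confer's code; it uses the third-party cryptography package, and the server-side trusted execution environment and remote attestation pieces are not modeled here.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

device_key = AESGCM.generate_key(bit_length=256)  # 32 bytes, kept only on-device
aead = AESGCM(device_key)

nonce = os.urandom(12)  # unique per message
prompt = b"a private question for the assistant"
ciphertext = aead.encrypt(nonce, prompt, None)

# Only (nonce, ciphertext) would ever leave the device; without device_key,
# the platform operator cannot recover the plaintext.
assert aead.decrypt(nonce, ciphertext, None) == prompt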

Read more of this story at Slashdot.

  •  

The Pentagon's AI "Will Not Be Woke": Grok Officially Joins the US Military's Network

On January 12, 2026, Defense Secretary Pete Hegseth announced the imminent integration of Grok into GenAI.mil, the Pentagon's internal generative AI platform. The decision follows through on a $200 million agreement between the Department of Defense and xAI, the company founded by Elon Musk and the maker of the chatbot.

  •  

Even Linus Torvalds Is Vibe Coding Now

Linus Torvalds has started experimenting with vibe coding, using Google's Antigravity AI to generate parts of a small hobby project called AudioNoise. "In doing so, he has become the highest-profile programmer yet to adopt this rapidly spreading, and often mocked, style of AI-driven programming," writes ZDNet's Steven Vaughan-Nichols. From the report: [I]t's a trivial program called AudioNoise -- a recent side project focused on digital audio effects and signal processing. He started it after building physical guitar pedals, GuitarPedal, to learn about audio circuits. He now gives them as gifts to kernel developers and, recently, to Bill Gates. While Torvalds hand-coded the C components, he turned to Antigravity for a Python-based audio sample visualizer. He openly acknowledges that he leans on online snippets when working in languages he knows less well. Who doesn't? [...] In the project's README file, Torvalds wrote that "the Python visualizer tool has been basically written by vibe-coding," describing how he "cut out the middle-man -- me -- and just used Google Antigravity to do the audio sample visualiser." The remark underlines that the AI-generated code met his expectations well enough that he did not feel the need to manually re-implement it. Further reading: Linus Torvalds Says Vibe Coding is Fine For Getting Started, 'Horrible Idea' For Maintenance
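
For readers curious what such a tool involves, the sketch below is a minimal, hypothetical audio sample visualizer in Python, not Torvalds's actual code: it loads a 16-bit mono WAV file and plots its waveform, roughly the class of small utility the README describes handing off to the AI.

import sys
import wave

import numpy as np
import matplotlib.pyplot as plt

def plot_samples(path):
    # Read raw PCM frames; this sketch assumes 16-bit mono audio.
    with wave.open(path, "rb") as wav:
        rate = wav.getframerate()
        frames = wav.readframes(wav.getnframes())
    samples = np.frombuffer(frames, dtype=np.int16)
    t = np.arange(len(samples)) / rate  # time axis in seconds
    plt.plot(t, samples, linewidth=0.5)
    plt.xlabel("time (s)")
    plt.ylabel("amplitude")
    plt.show()

if __name__ == "__main__":
    plot_samples(sys.argv[1])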

Read more of this story at Slashdot.

  •  

Should AI Agents Be Classified As People?

New submitter sziring writes: Harvard Business Review's IdeaCast podcast interviewed McKinsey CEO Bob Sternfels, who counted AI agents as people when describing the firm's size. "I often get asked, 'How big is McKinsey? How many people do you employ?' I now update this almost every month, but my latest answer to you would be 60,000, but it's 40,000 humans and 20,000 agents." This statement looks like one of the opening shots in how we as a society will need to classify AI agents and whether they will replace human jobs. Did those agents take roles that previously would have been filled by a full-time human? By classifying them as people, did the company break protocols or laws by not interviewing candidates for those jobs, not providing benefits or breaks, and so on? Yes, it all sounds silly, but words matter. What happens when a jobs report comes out claiming we just added 20,000 jobs in Q1? That line of thinking leads directly to Bill Gates' point that agents taking on human roles might need to be taxed.

Read more of this story at Slashdot.

  •  

Meta Plans To Cut Around 10% of Employees In Reality Labs Division

Meta plans to cut roughly 10% of staff in its Reality Labs division, with layoffs hitting metaverse-focused teams hardest. Reuters reports: The cuts to Reality Labs, which has roughly 15,000 employees, could be announced as soon as Tuesday and are set to disproportionately affect those in the metaverse unit who work on virtual reality headsets and virtual social networks, the report said. [...] Meta Chief Technology Officer Andrew Bosworth, who oversees Reality Labs, has called a meeting on Wednesday and has urged staff to attend in person, the NYT reported, citing a memo. [...] The metaverse had been a massive project spearheaded by CEO Mark Zuckerberg, who prioritized and spent heavily on the venture, only for the business to burn more than $60 billion since 2020. [...] The report comes as the Facebook parent scrambles to stay relevant in Silicon Valley's artificial intelligence race after its Llama 4 model met with a poor reception.

Read more of this story at Slashdot.

  •  

This AI Solved a Math Problem That Had Been Open for 45 Years

The GPT-5.2 Pro AI model has solved several mathematics problems, one of which, solved on January 11, 2026, had remained open for 45 years. More than the result itself, it is the method, which pairs humans with the Lean proof assistant and the Aristotle AI system, that could transform how mathematical proofs are done.
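
To give a sense of what the Lean half of that workflow looks like, here is a deliberately trivial, generic machine-checked proof (it has nothing to do with the 45-year-old problem itself): because Lean verifies every step, reviewers can trust AI-proposed proof fragments without re-deriving them by hand.

-- A trivial Lean 4 theorem, shown only to illustrate the machine-checked
-- format; the actual result is far more involved.
theorem sum_comm (a b : Nat) : a + b = b + a := by
  exact Nat.add_comm a b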
