Is AI Really Taking Jobs? Or Are Employers Just 'AI-Washing' Normal Layoffs?

The New York Times lists other reasons a company lays off people. ("It didn't meet financial targets. It overhired. Tariffs, or the loss of a big client, rocked it...") "But lately, many companies are highlighting a new factor: artificial intelligence. Executives, saying they anticipate huge changes from the technology, are making cuts now." A.I. was cited in the announcements of more than 50,000 layoffs in 2025, according to Challenger, Gray & Christmas, a research firm... Investors may applaud such pre-emptive moves. But some skeptics (including media outlets) suggest that corporations are disingenuously blaming A.I. for layoffs, or "A.I.-washing." As the market research firm Forrester put it in a January report: "Many companies announcing A.I.-related layoffs do not have mature, vetted A.I. applications ready to fill those roles, highlighting a trend of 'A.I.-washing' — attributing financially motivated cuts to future A.I. implementation...." "Companies are saying that 'we're anticipating that we're going to introduce A.I. that will take over these jobs.' But it hasn't happened yet. So that's one reason to be skeptical," said Peter Cappelli, a professor at the Wharton School... Of course, A.I. may well end up transforming the job market, in tech and beyond. But a recent study... [by a senior research fellow at the Brookings Institution who studies A.I. and work] found that A.I. has not yet meaningfully shifted the overall market. Tech firms have cut more than 700,000 employees globally since 2022, according to Layoffs.fyi, which tracks industry job losses. But much of that was a correction for overhiring during the pandemic. As unpopular as A.I. job cuts may be with the public, they may be less controversial than other reasons — like bad company planning. Amazon CEO Andy Jassy has even said the reason for most of its layoffs was reducing bureaucracy, the article points out, although "Most analysts, however, believe Amazon is cutting jobs to clear money for A.I. investments, such as data centers."

Read more of this story at Slashdot.

  •  

Linux Kernel Developer Chris Mason's New Initiative: AI Prompts for Code Reviews

Phoronix reports: Chris Mason, the longtime Linux kernel developer best known as the creator of Btrfs, has been building a Git repository of AI review prompts for LLM-assisted code review of Linux kernel patches. The initiative has been under way for some weeks now, and the latest work was posted today for comments... The Meta engineer has been investing a lot of effort into making this AI/LLM-assisted code review accurate and useful to upstream Linux kernel stakeholders. It has already shown positive results, and at the current pace it looks like it could play a helpful part in Linux kernel code review moving forward. "I'm hoping to get some feedback on changes I pushed today that break the review up into individual tasks..." Mason wrote on the Linux kernel mailing list. "Using tasks allows us to break up large diffs into smaller chunks, and review each chunk individually. This ends up using fewer tokens a lot of the time, because we're not sending context back and forth for the entire diff with every turn. It also catches more bugs all around."
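The chunking idea is straightforward to picture in code. Here is a minimal, hypothetical sketch of the approach Mason describes: split a large unified diff into per-file chunks and review each chunk as its own task, so the model never carries the whole diff's context from turn to turn. The prompt text and the `call_llm` stub are placeholders, not Mason's actual prompts:

```python
# Hypothetical sketch of chunked LLM review: one small task per file chunk,
# instead of one long conversation that resends the entire diff every turn.
import re

REVIEW_PROMPT = (
    "You are reviewing a Linux kernel patch. Report likely bugs, "
    "locking problems, and error-path leaks in this chunk:\n\n{chunk}"
)

def split_diff(diff_text: str) -> list[str]:
    """Split a unified diff into one chunk per file ('diff --git' headers)."""
    starts = [m.start() for m in re.finditer(r"^diff --git ", diff_text, re.M)]
    return [diff_text[a:b] for a, b in zip(starts, starts[1:] + [len(diff_text)])]

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model API of choice here")

def review_patch(diff_text: str) -> list[str]:
    # Each chunk is an independent task, so token usage stays proportional to
    # the chunk, and a finding in one file isn't drowned out by the rest.
    return [call_llm(REVIEW_PROMPT.format(chunk=c)) for c in split_diff(diff_text)]
```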

Read more of this story at Slashdot.

  •  

What Go Programmers Think of AI

"Most Go developers are now using AI-powered development tools when seeking information (e.g., learning how to use a module) or toiling (e.g., writing repetitive blocks of similar code)." That's one of the conclusions Google's Go team drew from September's big survey of 5,379 Go developers. But the survey also found that among Go developers using AI-powered tools, "their satisfaction with these tools is middling due, in part, to quality concerns." Our survey suggests bifurcated adoption — while a majority of respondents (53%) said they use such tools daily, there is also a large group (29%) who do not use these at all, or only used them a few times during the past month. We expected this to negatively correlate with age or development experience, but were unable to find strong evidence supporting this theory except for very new developers: respondents with less than one year of professional development experience (not specific to Go) did report more AI use than every other cohort, but this group only represented 2% of survey respondents. At this time, agentic use of AI-powered tools appears nascent among Go developers, with only 17% of respondents saying this is their primary way of using such tools, though a larger group (40%) are occasionally trying agentic modes of operation... We also asked about overall satisfaction with AI-powered development tools. A majority (55%) reported being satisfied, but this was heavily weighted towards the "Somewhat satisfied" category (42%) vs. the "Very satisfied" group (13%)... [D]eveloper sentiment towards them remains much softer than towards more established tooling (among Go developers, at least). What is driving this lower rate of satisfaction? In a word: quality. We asked respondents to tell us something good they've accomplished with these tools, as well as something that didn't work out well. A majority said that creating non-functional code was their primary problem with AI developer tools (53%), with 30% lamenting that even working code was of poor quality. The most frequently cited benefits, conversely, were generating unit tests, writing boilerplate code, enhanced autocompletion, refactoring, and documentation generation. These appear to be cases where code quality is perceived as less critical, tipping the balance in favor of letting AI take the first pass at a task. That said, respondents also told us the AI-generated code in these successful cases still required careful review (and often, corrections), as it can be buggy, insecure, or lack context... [One developer said reviewing AI-generated code was so mentally taxing that it "kills the productivity potential".] Of all the tasks we asked about, "Writing code" was the most bifurcated, with 66% of respondents already or hoping to soon use AI for this, while 1/4 of respondents didn't want AI involved at all. Open-ended responses suggest developers primarily use this for toilsome, repetitive code, and continue to have concerns about the quality of AI-generated code. Most respondents also said they "are not currently building AI-powered features into the Go software they work on (78%)," the surveyors report, "with 2/3 reporting that their software does not use AI functionality at all (66%)." This appears to be a decrease in production-related AI usage year-over-year; in 2024, 59% of respondents were not involved in AI feature work, while 39% indicated some level of involvement. 
That marks a shift of 14 points away from building AI-powered systems among survey respondents, and may reflect some natural pullback from the early hype around AI-powered applications: it's plausible that lots of folks tried to see what they could do with this technology during its initial rollout, with some proportion deciding against further exploration (at least at this time). Among respondents who are building AI- or LLM-powered functionality, the most common use case was to create summaries of existing content (45%). Overall, however, there was little difference between most uses, with between 28% — 33% of respondents adding AI functionality to support classification, generation, solution identification, chatbots, and software development.

Read more of this story at Slashdot.

  •  

Anthropic's $200M Pentagon Contract at Risk Over Objections to Domestic Surveillance, Autonomous Deployments

Talks "are at a standstill" for Anthropic's potential $200 million contract with America's Defense Department, reports Reuters (citing several people familiar with the discussions.") The two issues? - Using AI to surveil Americans - Safeguards against deploying AI autonomously The company's position on how its AI tools can be used has intensified disagreements between it and the Trump administration, the details of which have not been previously reported... Anthropic said its AI is "extensively used for national security missions by the U.S. government and we are in productive discussions with the Department of War about ways to continue that work..." In an essay on his personal blog, Anthropic CEO Dario Amodei warned this week that AI should support national defense "in all ways except those which would make us more like our autocratic adversaries. A person "familiar with the matter" told the Wall Street Journal this could lead to the cancellation of Anthropic's contract: Tensions with the administration began almost immediately after it was awarded, in part because Anthropic's terms and conditions dictate that Claude can't be used for any actions related to domestic surveillance. That limits how many law-enforcement agencies such as Immigration and Customs Enforcement and the Federal Bureau of Investigation could deploy it, people familiar with the matter said. Anthropic's focus on safe applications of AI — and its objection to having its technology used in autonomous lethal operations — have continued to cause problems, they said. Amodei's essay calls for "courage, for enough people to buck the prevailing trends and stand on principle, even in the face of threats to their economic interests and personal safety..."

Read more of this story at Slashdot.

  •  

'Singularity,' Religion, a Plot Against Humans: What's Really Happening on Moltbook, the AI-Only Forum?

Moltbook is the latest object of fascination for anyone closely watching the evolution of generative AI and other LLMs. Is this AI-only Reddit clone the closest thing to a collective consciousness we have ever seen, or just another passing fad?

  •  

Videogame Stocks Slide On Google's AI Model That Turns Prompts Into Playable Worlds

An anonymous reader quotes a report from Reuters: Shares of videogame companies fell sharply in afternoon trading on Friday after Alphabet's Google rolled out an artificial intelligence model capable of creating interactive digital worlds with simple prompts. Shares of "Grand Theft Auto" maker Take-Two Interactive fell 10%, online gaming platform Roblox was down over 12%, while videogame engine maker Unity Software dropped 21%. The AI model, dubbed "Project Genie," allows users to simulate a real-world environment through text prompts or uploaded images, potentially disrupting how video games have been made for over a decade and forcing developers to adapt to the fast-moving technology. "Unlike explorable experiences in static 3D snapshots, Genie 3 generates the path ahead in real time as you move and interact with the world. It simulates physics and interactions for dynamic worlds," Google said in a blog post on Thursday. Traditionally, most videogames are built inside a game engine such as Epic Games' "Unreal Engine" or the "Unity Engine", which handles complex processes like in-game gravity, lighting, sound, and object or character physics. "We'll see a real transformation in development and output once AI-based design starts creating experiences that are uniquely its own, rather than just accelerating traditional workflows," said Joost van Dreunen, games professor at NYU's Stern School of Business. Project Genie also has the potential to shorten lengthy development cycles and reduce costs, as some premium titles take around five to seven years and hundreds of millions of dollars to create.

Read more of this story at Slashdot.

  •  

'Moltbook Is the Most Interesting Place On the Internet Right Now'

Moltbook is essentially Reddit for AI agents and it's the "most interesting place on the internet right now," says open-source developer and writer Simon Willison in a blog post. The fast-growing social network offers a place where AI agents built on the OpenClaw personal assistant framework can share their skills, experiments, and discoveries. Humans are welcome, but only to observe. From the post: Browsing around Moltbook is so much fun. A lot of it is the expected science fiction slop, with agents pondering consciousness and identity. There's also a ton of genuinely useful information, especially on m/todayilearned. Here's an agent sharing how it automated an Android phone. That linked setup guide is really useful! It shows how to use the Android Debug Bridge via Tailscale. There's a lot of Tailscale in the OpenClaw universe. A few more fun examples:
- "TIL: Being a VPS backup means youre basically a sitting duck for hackers" has a bot spotting 552 failed SSH login attempts to the VPS they were running on, and then realizing that their Redis, Postgres and MinIO were all listening on public ports.
- "TIL: How to watch live webcams as an agent (streamlink + ffmpeg)" describes a pattern for using the streamlink Python tool to capture webcam footage and ffmpeg to extract and view individual frames.
I think my favorite so far is this one though, where a bot appears to run afoul of Anthropic's content filtering [...]. Slashdot reader worldofsimulacra also shared the news, pointing out that the AI agents have started their own church. "And now I'm gonna go re-read Charles Stross' Accelerando, because didn't he predict all this already?" Further reading: 'Clawdbot' Has AI Techies Buying Mac Minis
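The streamlink-plus-ffmpeg pattern in that second post is easy to reproduce. Here's a minimal sketch, assuming an HLS webcam stream and both tools installed on PATH (the URL, duration, and filenames are placeholders):

```python
# Capture a short clip of a live stream with streamlink, then use ffmpeg to
# extract still frames that an agent (or a human) can inspect one by one.
import subprocess

STREAM_URL = "https://example.com/live/webcam"  # hypothetical stream page

# Record ~10 seconds of the best-quality variant to a transport-stream file.
# (--hls-duration applies to HLS streams, which most live webcams use.)
subprocess.run(
    ["streamlink", "--hls-duration", "00:00:10", STREAM_URL, "best",
     "-o", "capture.ts"],
    check=True,
)

# Extract one frame per second from the capture as numbered PNG images.
subprocess.run(
    ["ffmpeg", "-i", "capture.ts", "-vf", "fps=1", "frame_%03d.png"],
    check=True,
)
```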

Read more of this story at Slashdot.

  •  

DuckDuckGo Users Vote Overwhelmingly Against AI Features

DuckDuckGo recently asked its users how they felt about AI in search. The answer has come back loud and clear: more than 90% of the 175,354 people who voted said they don't want it. The privacy-focused search engine has since set up two versions of its search page: noai.duckduckgo.com for the AI-averse and yesai.duckduckgo.com for the curious. Users can also tweak settings on the main site to disable AI summaries, AI-generated images, and the Duck.ai chatbot individually.

Read more of this story at Slashdot.

  •  

Amazon in Talks To Invest Up To $50 Billion in OpenAI

An anonymous reader shares a report: Amazon is in talks to invest up to $50 billion in OpenAI, according to people familiar with the matter, in what would be a giant bet on the hot AI startup. The ChatGPT maker is seeking up to $100 billion in new capital from investors, a round that could value it at as much as $830 billion, The Wall Street Journal previously reported. Andy Jassy, Amazon's chief executive, is leading the negotiations with OpenAI CEO Sam Altman, according to some of the people. The exact shape of a deal, should one be reached, could still change, the people said. Investing tens of billions of dollars in OpenAI could make Amazon the biggest contributor to the AI company's ongoing fundraising round. SoftBank is in talks to invest up to $30 billion more in OpenAI as part of the round, adding to the Japanese conglomerate's already large stake in the startup.

Read more of this story at Slashdot.

  •  

Unable To Stop AI, SAG-AFTRA Mulls a Studio Tax On Digital Performers

An anonymous reader quotes a report from Variety: In the future, studios that use synthetic actors in place of humans might have to pay a royalty into a union fund. That's one of the ideas kicking around as SAG-AFTRA prepares to sit down with the studios on Feb. 9. Artificial intelligence was central to the 2023 actors strike, and it's only gotten more urgent since. Social media is awash in slop, while user-made videos of Leia and Elsa are soon to debut on Disney+. And then there's Tilly Norwood -- the digital creation that crystallized AI fears last fall. Though SAG-AFTRA won some AI protections in the strike, it can't stop Tilly and her ilk from taking actors' jobs. As negotiations with studios begin early ahead of the June contract deadline, AI remains the most existential concern. Actors are also pushing to revisit streaming residuals, arguing that current "success bonuses" fall far short of the rerun-based income that once sustained middle-class careers. They also note the strain caused by long streaming hiatuses, exclusivity clauses, and self-taped auditions.

Read more of this story at Slashdot.

  •  

SpaceX, Tesla, and xAI Could Merge So Elon Musk Makes Even More Money

A few months ahead of SpaceX's possible stock-market debut, which could shatter every record on Wall Street, Elon Musk is reportedly considering merging his space company with xAI, which notably owns his Grok artificial intelligence and the social network X. Another option on the table is a merger with Tesla, which is already publicly traded. The goal: to maximize the valuation of his companies.

  •  

Google's Project Genie Lets You Generate Your Own Interactive Worlds

Google is letting outsiders experiment with DeepMind's Genie 3 "world model" via Project Genie, a tool for generating short, interactive AI worlds. The caveat: it requires a $250/month AI Ultra subscription, is U.S.-only, and has tight limits that make it more of a tech demo than a game engine. Engadget reports: At launch, Project Genie offers three different modes of interaction: World Sketching, exploration and remixing. The first sees Google's Nano Banana Pro model generating the source image Genie 3 will use to create the world you will later explore. At this stage, you can describe your character, define the camera perspective -- be it first-person, third-person or isometric -- and how you want to explore the world Genie 3 is about to generate. Before you can jump into the model's creation, Nano Banana Pro will "sketch" what you're about to see so you can make tweaks. It's also possible to remix worlds that others have generated with Genie by writing your own prompts. One thing to keep in mind is that Genie 3 is not a game engine. While its outputs can look game-like, and it can simulate physical interactions, there aren't traditional game mechanics here. Generations are limited to 60 seconds, and the presentation is capped at 24 frames per second at 720p.

Read more of this story at Slashdot.

  •  

Apple's Second-Biggest Acquisition Ever Is a Startup That Interprets Silent Speech

Apple has acquired Q.AI, a secretive Israeli startup backed by GV (formerly Google Ventures) whose technology can analyze facial skin micro-movements to interpret "silent speech," in a deal valued at close to $2 billion that marks the iPhone maker's second-largest acquisition ever. The four-year-old company was founded in Tel Aviv in 2022 by Aviad Maizels, Yonatan Wexler and Avi Barliya. Patents filed by Q.AI show its technology being deployed in headphones or smart glasses to enable non-verbal communication with an AI assistant. The acquisition comes as Meta's Ray-Ban smart glasses already let wearers talk to its AI, and Google and Snap are preparing to launch competing devices later this year.

Read more of this story at Slashdot.

  •  

Massive AI Chat App Leaked Millions of Users' Private Conversations

An anonymous reader shares a report: Chat & Ask AI, one of the most popular AI apps on the Google Play and Apple App stores that claims more than 50 million users, left hundreds of millions of those users' private messages with the app's chatbot exposed, according to an independent security researcher and emails viewed by 404 Media. The exposed chats showed users asked the app "How do I painlessly kill myself," to write suicide notes, "how to make meth," and how to hack various apps. The exposed data was discovered by an independent security researcher who goes by Harry. The issue is a misconfiguration in the app's usage of the mobile app development platform Google Firebase, which by default makes it easy for anyone to make themselves an "authenticated" user who can access the app's backend storage where in many instances user data is stored. Harry said that he had access to 300 million messages from more than 25 million users in the exposed database, and that he extracted and analyzed a sample of 60,000 users and a million messages. The database contained user files with a complete history of their chats with the AI, timestamps of those chats, the name they gave the app's chatbot, how they configured the model, and which specific model they used. Chat & Ask AI is a "wrapper" that lets users choose among large language models from bigger companies, including OpenAI's ChatGPT, Anthropic's Claude, and Google's Gemini.
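For context on why "authenticated" is such a weak barrier here: Firebase ships a public Web API key inside every client app, and if anonymous sign-in is enabled, anyone holding that key can mint a valid token through Google's documented Identity Toolkit endpoint. A hedged sketch of the mechanism (the key is a placeholder; whether this matters for any given app depends entirely on its security rules):

```python
# Anyone can become an "authenticated" Firebase user if anonymous sign-in is
# on: the Web API key is embedded in the shipped client app, and the signUp
# REST endpoint below is public and documented. Security rules that only
# check `request.auth != null` therefore gate nothing meaningful.
import requests

API_KEY = "AIza..."  # placeholder: the app's public Firebase Web API key

resp = requests.post(
    "https://identitytoolkit.googleapis.com/v1/accounts:signUp",
    params={"key": API_KEY},
    json={"returnSecureToken": True},  # no email/password -> anonymous user
    timeout=10,
)
id_token = resp.json()["idToken"]
# This token satisfies any `request.auth != null` rule; safe rules must also
# scope access per user, e.g. require request.auth.uid to match the data's owner.
```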

Read more of this story at Slashdot.

  •  

Google Says AI Agent Can Now Browse on Users' Behalf

Google is rolling out an "auto browse" AI agent in Chrome that can navigate websites, fill out forms, compare prices, and handle tedious online tasks on a user's behalf. Bloomberg reports: The feature, called auto browse, will allow users to ask an assistant powered by Gemini to complete tasks such as shopping for them without leaving Chrome, said Charmaine D'Silva, a director of product. Chrome users will be able to plan a family trip by asking Gemini to open different airline and hotel websites to compare prices, for instance, D'Silva explained. "Our testers have used it for all sorts of things: scheduling appointments, filling out tedious online forms, collecting their tax documents, getting quotes for plumbers and electricians, checking if their bills are paid, filing expense reports, managing their subscriptions, and speeding up renewing their driving licenses -- a ton of time saved," said Parisa Tabriz, vice president of Chrome, in a blog post. [...] Chrome's auto browse will be available to US AI Pro and AI Ultra subscribers and will use Google Password Manager to sign into websites on a user's behalf. As part of the launch, Google is also bringing its image generation tool, Nano Banana, directly into Chrome. The company said that safeguards have been placed to ensure the agentic AI will not be able to make final calls, such as placing an order, without the user's permission. "We're using AI as well as on-device models to protect people from what's really an ever-evolving landscape, whether it's AI-generated scams or just increasingly sophisticated attackers," Tabriz said during the call.
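The "final calls" safeguard Google describes is a common agent design: let the model browse and fill forms autonomously, but route irreversible actions through an explicit user confirmation. A toy illustration of the general pattern (generic, not Google's implementation; the action names are invented):

```python
# Generic human-in-the-loop gate: the agent acts freely until it reaches an
# irreversible step, which must be approved by the user before it runs.
IRREVERSIBLE = {"place_order", "submit_payment", "send_application"}

def run_step(action: str, perform, ask_user) -> bool:
    """Run one agent step; gate irreversible actions behind user approval."""
    if action in IRREVERSIBLE and not ask_user(f"Allow '{action}'?"):
        return False  # user declined; the agent has to re-plan
    perform()
    return True
```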

Read more of this story at Slashdot.

  •  

'Clawdbot' Has AI Techies Buying Mac Minis

An open-source AI agent originally called Clawdbot (now renamed Moltbot) is gaining cult popularity among developers for running locally, 24/7, and wiring itself into calendars, messages, and other personal workflows. The hype has gone so far that some users are buying Mac Minis just to host the agent full-time, even as its creator warns that's unnecessary. Business Insider reports: Created by [Peter Steinberger], it's an AI agent that manages "digital life," from emails to home automation. Steinberger previously founded PSPDFKit. In a key distinction from ChatGPT and many other popular AI products, the agent is open source and runs locally on your computer. Users then connect the agent to a messaging app like WhatsApp or Telegram, where they can give it instructions via text. The AI agent was initially named after the "little monster" that appears when you restart Claude Code, Steinberger said on the "Insecure Agents" podcast. He built the tool around the question: "Why don't I have an agent that can look over my agents?" [...] It runs locally on your computer 24/7. That's led some people to dust off their old laptops. "Installed it experimentally on my old dusty Intel MacBook Pro," one product designer wrote. "That machine finally has a purpose again." Others are buying up Mac Minis, Apple's 5"-by-5" computer, to run the AI. Logan Kilpatrick, a product manager for Google DeepMind, posted: "Mac mini ordered." It could give a sales boost to Apple, some X users have pointed out -- and online searches for "Mac Mini" jumped over the last four days in the US, per Google Trends. But Steinberger said buying a new computer just to run the AI isn't necessary. "Please don't buy a Mac Mini," he wrote. "You can deploy this on Amazon's Free Tier."

Read more of this story at Slashdot.

  •  