Google Gemini Plans a Feature To Import Your ChatGPT Conversations

February 2, 2026 at 17:12

Switching AIs the way you switch smartphones, without losing your memories? That's Google's new bet for 2026. A feature spotted in Gemini's code points to the imminent arrival of a history-import tool to ease the transition from ChatGPT.
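
Nothing about Gemini's importer has been published beyond the strings found in its code, so the following is a rough illustration only: a minimal Go sketch that reads a ChatGPT-style export file and re-emits a summary, the first step any migration tool would take. The file name and the simplified schema here are assumptions, not the real export format.

    // Hypothetical sketch: Gemini's actual import format is unknown, and
    // this schema is a simplified stand-in for a ChatGPT-style export.
    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os"
    )

    // Conversation is an assumed, simplified shape for one exported chat;
    // the real export format is more complex and may differ.
    type Conversation struct {
        Title    string    `json:"title"`
        Messages []Message `json:"messages"`
    }

    type Message struct {
        Role    string `json:"role"` // e.g. "user" or "assistant"
        Content string `json:"content"`
    }

    func main() {
        raw, err := os.ReadFile("conversations.json") // hypothetical export file
        if err != nil {
            log.Fatal(err)
        }
        var convs []Conversation
        if err := json.Unmarshal(raw, &convs); err != nil {
            log.Fatal(err)
        }
        // Re-emit each conversation in whatever shape the importing
        // service expects; here we just print a summary per chat.
        for _, c := range convs {
            fmt.Printf("%s (%d messages)\n", c.Title, len(c.Messages))
        }
    }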

Is AI Really Taking Jobs? Or Are Employers Just 'AI-Washing' Normal Layoffs?

February 2, 2026 at 12:34
The New York Times lists other reasons a company lays off people. ("It didn't meet financial targets. It overhired. Tariffs, or the loss of a big client, rocked it...") "But lately, many companies are highlighting a new factor: artificial intelligence. Executives, saying they anticipate huge changes from the technology, are making cuts now."

A.I. was cited in the announcements of more than 50,000 layoffs in 2025, according to Challenger, Gray & Christmas, a research firm. Investors may applaud such pre-emptive moves. But some skeptics (including media outlets) suggest that corporations are disingenuously blaming A.I. for layoffs, or "A.I.-washing." As the market research firm Forrester put it in a January report: "Many companies announcing A.I.-related layoffs do not have mature, vetted A.I. applications ready to fill those roles, highlighting a trend of 'A.I.-washing' — attributing financially motivated cuts to future A.I. implementation..."

"Companies are saying that 'we're anticipating that we're going to introduce A.I. that will take over these jobs.' But it hasn't happened yet. So that's one reason to be skeptical," said Peter Cappelli, a professor at the Wharton School.

Of course, A.I. may well end up transforming the job market, in tech and beyond. But a recent study... [by a senior research fellow at the Brookings Institution who studies A.I. and work] found that A.I. has not yet meaningfully shifted the overall market. Tech firms have cut more than 700,000 employees globally since 2022, according to Layoffs.fyi, which tracks industry job losses. But much of that was a correction for overhiring during the pandemic.

As unpopular as A.I. job cuts may be with the public, they may be less controversial than other explanations, like bad company planning. Amazon CEO Andy Jassy has even said the reason for most of the company's layoffs was reducing bureaucracy, the article points out, although "Most analysts, however, believe Amazon is cutting jobs to clear money for A.I. investments, such as data centers."

Read more of this story at Slashdot.

Linux Kernel Developer Chris Mason's New Initiative: AI Prompts for Code Reviews

February 2, 2026 at 09:34
Phoronix reports:

Chris Mason, the longtime Linux kernel developer best known as the creator of Btrfs, has been maintaining a Git repository of AI review prompts for LLM-assisted code review of Linux kernel patches. The initiative has been underway for some weeks now, and the latest work was posted today for comments... The Meta engineer has been investing a lot of effort into making this AI/LLM-assisted code review accurate and useful to upstream Linux kernel stakeholders. It has already shown positive results, and at the current pace it looks like it could play a helpful part in Linux kernel code review moving forward.

"I'm hoping to get some feedback on changes I pushed today that break the review up into individual tasks..." Mason wrote on the Linux kernel mailing list. "Using tasks allows us to break up large diffs into smaller chunks, and review each chunk individually. This ends up using fewer tokens a lot of the time, because we're not sending context back and forth for the entire diff with every turn. It also catches more bugs all around."
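
Mason's actual prompts live in his repository; purely as an illustration of the chunking idea he describes, here is a minimal Go sketch that splits one large unified diff into per-file chunks, each of which could be reviewed as its own task so that no single model turn carries the whole diff's context. The file name and chunking granularity are assumptions for the example.

    // Illustrative only: splits a unified diff into per-file chunks by
    // cutting at each "diff --git" header, so each chunk can be sent to
    // an LLM as a separate review task with less context per turn.
    package main

    import (
        "fmt"
        "log"
        "os"
        "strings"
    )

    // splitDiff breaks a unified diff into per-file chunks.
    func splitDiff(diff string) []string {
        var chunks []string
        var cur strings.Builder
        for _, line := range strings.Split(diff, "\n") {
            if strings.HasPrefix(line, "diff --git ") && cur.Len() > 0 {
                chunks = append(chunks, cur.String())
                cur.Reset()
            }
            cur.WriteString(line)
            cur.WriteString("\n")
        }
        if cur.Len() > 0 {
            chunks = append(chunks, cur.String())
        }
        return chunks
    }

    func main() {
        raw, err := os.ReadFile("patch.diff") // hypothetical input patch
        if err != nil {
            log.Fatal(err)
        }
        for i, chunk := range splitDiff(string(raw)) {
            // Each chunk would become its own review task.
            fmt.Printf("--- task %d (%d bytes) ---\n%s", i+1, len(chunk), chunk)
        }
    }

Finer-grained splitting (per hunk rather than per file) would trade even smaller contexts against the risk of losing cross-hunk information; where Mason's tooling draws that line is documented in his repository, not here.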

Read more of this story at Slashdot.

What Go Programmers Think of AI

February 2, 2026 at 01:13
"Most Go developers are now using AI-powered development tools when seeking information (e.g., learning how to use a module) or toiling (e.g., writing repetitive blocks of similar code)." That's one of the conclusions Google's Go team drew from September's big survey of 5,379 Go developers. But the survey also found that among Go developers using AI-powered tools, "their satisfaction with these tools is middling due, in part, to quality concerns." Our survey suggests bifurcated adoption — while a majority of respondents (53%) said they use such tools daily, there is also a large group (29%) who do not use these at all, or only used them a few times during the past month. We expected this to negatively correlate with age or development experience, but were unable to find strong evidence supporting this theory except for very new developers: respondents with less than one year of professional development experience (not specific to Go) did report more AI use than every other cohort, but this group only represented 2% of survey respondents. At this time, agentic use of AI-powered tools appears nascent among Go developers, with only 17% of respondents saying this is their primary way of using such tools, though a larger group (40%) are occasionally trying agentic modes of operation... We also asked about overall satisfaction with AI-powered development tools. A majority (55%) reported being satisfied, but this was heavily weighted towards the "Somewhat satisfied" category (42%) vs. the "Very satisfied" group (13%)... [D]eveloper sentiment towards them remains much softer than towards more established tooling (among Go developers, at least). What is driving this lower rate of satisfaction? In a word: quality. We asked respondents to tell us something good they've accomplished with these tools, as well as something that didn't work out well. A majority said that creating non-functional code was their primary problem with AI developer tools (53%), with 30% lamenting that even working code was of poor quality. The most frequently cited benefits, conversely, were generating unit tests, writing boilerplate code, enhanced autocompletion, refactoring, and documentation generation. These appear to be cases where code quality is perceived as less critical, tipping the balance in favor of letting AI take the first pass at a task. That said, respondents also told us the AI-generated code in these successful cases still required careful review (and often, corrections), as it can be buggy, insecure, or lack context... [One developer said reviewing AI-generated code was so mentally taxing that it "kills the productivity potential".] Of all the tasks we asked about, "Writing code" was the most bifurcated, with 66% of respondents already or hoping to soon use AI for this, while 1/4 of respondents didn't want AI involved at all. Open-ended responses suggest developers primarily use this for toilsome, repetitive code, and continue to have concerns about the quality of AI-generated code. Most respondents also said they "are not currently building AI-powered features into the Go software they work on (78%)," the surveyors report, "with 2/3 reporting that their software does not use AI functionality at all (66%)." This appears to be a decrease in production-related AI usage year-over-year; in 2024, 59% of respondents were not involved in AI feature work, while 39% indicated some level of involvement. 
That marks a shift of 14 points away from building AI-powered systems among survey respondents, and may reflect some natural pullback from the early hype around AI-powered applications: it's plausible that lots of folks tried to see what they could do with this technology during its initial rollout, with some proportion deciding against further exploration (at least at this time). Among respondents who are building AI- or LLM-powered functionality, the most common use case was to create summaries of existing content (45%). Overall, however, there was little difference between most uses, with between 28% — 33% of respondents adding AI functionality to support classification, generation, solution identification, chatbots, and software development.
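
For concreteness, the respondents' most-cited win, letting an assistant take the first pass at unit tests, tends to look like the table-driven Go test sketched below. The function and test cases are invented for this illustration; as respondents stress, generated tests like this still need careful human review for missing edge cases.

    // Hypothetical example of AI-drafted boilerplate: a table-driven test
    // for a small invented helper. Run with: go test
    package clamp

    import "testing"

    // Clamp limits v to the range [lo, hi].
    func Clamp(v, lo, hi int) int {
        if v < lo {
            return lo
        }
        if v > hi {
            return hi
        }
        return v
    }

    func TestClamp(t *testing.T) {
        tests := []struct {
            name            string
            v, lo, hi, want int
        }{
            {"below range", -5, 0, 10, 0},
            {"in range", 5, 0, 10, 5},
            {"above range", 15, 0, 10, 10},
            {"at lower bound", 0, 0, 10, 0},
        }
        for _, tt := range tests {
            t.Run(tt.name, func(t *testing.T) {
                if got := Clamp(tt.v, tt.lo, tt.hi); got != tt.want {
                    t.Errorf("Clamp(%d, %d, %d) = %d, want %d",
                        tt.v, tt.lo, tt.hi, got, tt.want)
                }
            })
        }
    }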

Read more of this story at Slashdot.

Anthropic's $200M Pentagon Contract at Risk Over Objections to Domestic Surveillance, Autonomous Deployments

February 1, 2026 at 23:59
Talks "are at a standstill" for Anthropic's potential $200 million contract with America's Defense Department, reports Reuters (citing several people familiar with the discussions.") The two issues? - Using AI to surveil Americans - Safeguards against deploying AI autonomously The company's position on how its AI tools can be used has intensified disagreements between it and the Trump administration, the details of which have not been previously reported... Anthropic said its AI is "extensively used for national security missions by the U.S. government and we are in productive discussions with the Department of War about ways to continue that work..." In an essay on his personal blog, Anthropic CEO Dario Amodei warned this week that AI should support national defense "in all ways except those which would make us more like our autocratic adversaries. A person "familiar with the matter" told the Wall Street Journal this could lead to the cancellation of Anthropic's contract: Tensions with the administration began almost immediately after it was awarded, in part because Anthropic's terms and conditions dictate that Claude can't be used for any actions related to domestic surveillance. That limits how many law-enforcement agencies such as Immigration and Customs Enforcement and the Federal Bureau of Investigation could deploy it, people familiar with the matter said. Anthropic's focus on safe applications of AI — and its objection to having its technology used in autonomous lethal operations — have continued to cause problems, they said. Amodei's essay calls for "courage, for enough people to buck the prevailing trends and stand on principle, even in the face of threats to their economic interests and personal safety..."

Read more of this story at Slashdot.
