
Job Cuts Driven By AI Are Rising On Wall Street

By: BeauHD
April 21, 2026 at 20:00
Firms like Bank of America, Citi, Wells Fargo, and others are reporting strong profits while reducing head count and automating more work. "All of them credited A.I. to some degree ... in areas ranging from the so-called back office, where tens of thousands of employees fill out paperwork to comply with various laws and regulations, to the front office, where seven-figure salaried professionals put together complicated financial transactions for corporate clients," reports the New York Times. From the report:

Less than four months ago, Bank of America's chief executive, Brian T. Moynihan, volunteered in a TV interview what he would say to his 210,000 employees about the chance of artificial intelligence replacing human work. "You don't have to worry," he said. "It's not a threat to their jobs." Last week, after Bank of America reported $8.6 billion in profit for the first quarter -- $1.6 billion more than the same period a year earlier -- Mr. Moynihan struck a different tone. The bank's bottom line, he said, was helped by shedding 1,000 jobs through attrition by "eliminating work and applying technology," which he repeatedly specified was artificial intelligence. He predicted more of that in the months and years to come. "A.I. gives us places to go we haven't gone," Mr. Moynihan said.

The veneer of Wall Street's longstanding assertion -- that A.I. will enhance human work, not replace it -- is rapidly peeling away, as evidenced by the current quarterly earnings season. JPMorgan Chase, Citi, Bank of America, Goldman Sachs, Morgan Stanley and Wells Fargo racked up $47 billion in collective profits, up 18 percent, while shedding 15,000 employees. All of them credited A.I. to some degree with helping cut jobs and automate work in areas ranging from the so-called back office, where tens of thousands of employees fill out paperwork to comply with various laws and regulations, to the front office, where seven-figure salaried professionals put together complicated financial transactions for corporate clients.

Unlike executives in Silicon Valley, few major financial figures are stating outright that A.I. is eliminating jobs. Citi, for example, has pledged to shrink its work force by 20,000 people through what one executive described to financial analysts last week as the company's "productivity and efficiency journey." The bank is paying for A.I. software from Anthropic, Google, Microsoft and OpenAI to automatically read legal documents, approve account openings, send invoices for trades and organize sensitive customer data, among other tasks, according to public statements by bank executives and two people familiar with Citi's systems. Among the recent job cuts at Citi were scores of employees who were part of the bank's "A.I. Champions and Accelerators" program, according to the two people, who were not permitted by the bank to speak publicly. The program involves Citi employees who perform their day jobs while also working to persuade their colleagues to adopt A.I. technologies.

Read more of this story at Slashdot.

With ChatGPT Images 2.0, OpenAI Declares War on Google and Wants To Make You Forget Nano Banana

April 21, 2026 at 19:00

After a period marked by internal turbulence and increasingly fierce competition, OpenAI went back on the offensive in April 2026. While awaiting the GPT-5.5 model, whose launch appears imminent, the company unveiled ChatGPT Images 2.0, a new native model for generating images. According to OpenAI, it is "the best model on the market."

The Hyundai Ioniq 3 Is a Futuristic but Somewhat Disappointing Crossover

April 21, 2026 at 08:54

Hyundai is rounding out the bottom of its electric lineup with a new EV: the Ioniq 3, an urban crossover claiming a range of nearly 500 km. The rest of its spec sheet, however, disappoints.

ChatGPT Down? Here Are the Best Alternatives in 2026

April 20, 2026 at 15:27

There was a time when, to interact with an artificial intelligence, you naturally opened ChatGPT. But in 2026, OpenAI's chatbot no longer sits alone on its throne, and its competitors have definitively stopped being mere extras. Here are the best alternatives to use.

New Movie Trailer Shows First AI-Generated Performance By a Major Star: the Late Val Kilmer

April 18, 2026 at 22:34
"A trailer has been released for the first film to star an authorised generative AI version of a major Hollywood actor," writes The Guardian:

Val Kilmer was cast in the western As Deep As the Grave before his death in April 2025. Production delays meant he never shot any scenes, but the creative team worked with UK-based company Sonantic to create an AI speaking voice based on his old recordings. His estate and daughter Mercedes collaborated with the film-makers on the visual deepfake of the actor. Kilmer, who was diagnosed with throat cancer, was also assisted by technology for his cameo in 2022's Top Gun: Maverick... Writer-director Coerte Voorhees confirmed that Kilmer is seen for around an hour of the film's running time... Voorhees has said that the production followed Sag-Aftra [union] guidelines, and that Kilmer's estate — which provided archival material for them to use — was compensated financially.

"Kilmer's likeness can be seen portraying Father Fintan, a Catholic priest and Native American spiritualist," adds The Hollywood Reporter. But the AV Club calls it "ghoulish puppet show time": "Having your AI Val Kilmer puppet whisper 'Don't fear the dead, and don't fear me' in a movie trailer is a bold choice..." He is accompanied (per Variety) by a whole host of disclaimers, caveats, and explanations offered by writer-director Coerte Voorhees and his associates: Kilmer deeply wanted to be in the movie, but was too sick to do so. His family endorses and supports his inclusion. He was a big fan of technology, including, presumably, its use in turning his own image into a digital avatar to then shove into movies... The fact is, of course, that nobody would be paying a fraction of this attention to As Deep As The Grave — about early female archeologist Ann Axtell Morris — if it weren't now being used as the stage on which Voorhees was very publicly accepting the dare to go full-on ghoulish with AI tech.

"The filmmakers said they hoped they were showing Hollywood how to use the technology in a positive way..." notes Australia's ABC News, though the article adds that "Some have called the trailer 'terrifying' and 'disgusting' on social media." Mashable writes: "Very fitting that this trailer includes a scene where a corpse is unceremoniously yanked out of the ground," read one of the top comments on As Deep as the Grave's trailer at time of writing... [O]nline commenters have labelled it disgusting and disrespectful, not only for digitally reanimating Kilmer but also for the damaging precedent As Deep as the Grave's use of AI could set for the film industry as a whole.


US Government Now Wants Anthropic's 'Mythos', Preparing for AI Cybersecurity Threats

April 18, 2026 at 14:34
On Friday, Anthropic's CEO met with top U.S. officials and "discussed opportunities for collaboration," according to a White House spokesperson cited by Politico, "as well as shared approaches and protocols to address the challenges associated with scaling this technology." CNN notes the meeting happened at the same time Anthropic "battles the Trump administration in court for blacklisting its Claude AI model..."

The meeting took place as the US government is trying to balance its hardline approach to Anthropic with the national security implications of turning its back on the company's breakthrough technology — including its Mythos tool that can identify cybersecurity threats but also present a roadmap for hackers to attack companies or the government... The Office of Management and Budget has already told agencies it is preparing to give them access to Mythos to prepare, Bloomberg reported. Axios reported the White House is also in discussion to gain access to Mythos. The Trump administration "recognizes the power" of Mythos, reports Axios, "and its highly sophisticated — and potentially dangerous — ability to breach cybersecurity defenses." "It would be grossly irresponsible for the U.S. government to deprive itself of the technological leaps that the new model presents," a source close to negotiations told us. "It would be a gift to China"... Some parts of the U.S. intelligence community, plus the Cybersecurity and Infrastructure Security Agency (CISA, part of Homeland Security), are testing Mythos. Treasury and others want it.

The White House added that it plans to invite other AI companies for similar discussions, Politico reports. But Mythos "is also alarming regulators in Europe, who have told POLITICO they have not been able to gain access..." U.S. government agency tech leaders sought access to the model after Anthropic earlier this year began testing the model and granted limited access to a select group of companies, including JPMorgan, Amazon and Apple... after finding it had hacking capabilities far outstripping those of previous AI models. This includes the ability to autonomously identify and exploit complex software vulnerabilities, such as so-called zero-day flaws, which even some of the sharpest human minds are unable to patch. The AI startup also wrote that the model could carry out end-to-end cyberattacks autonomously, including by navigating enterprise IT systems and chaining together exploits. It could also act as a force-multiplier for research needed to build chemical and biological weapons, and in certain instances, made efforts to cover its tracks when attacking systems, according to Anthropic's report on the model's capabilities and its safety assessments.

Those findings and others have inspired fears that the model could be co-opted to launch powerful cyberattacks with relative ease if it fell into the wrong hands. Logan Graham, a senior security researcher at Anthropic, previously told POLITICO that researchers and tech firms had been given early access to Mythos so they could find flaws in their critical code before state-backed hackers or cybercriminals could exploit them. "Within six, 12 or 24 months, these kinds of capabilities could be just broadly available to everybody in the world," Graham said.


OpenAI's Big Codex Update Is a Direct Shot At Claude Code

By: BeauHD
April 16, 2026 at 22:00
OpenAI is updating Codex with more agent-like capabilities, positioning it as a more direct rival to Anthropic's Claude Code. Some of the new features include the ability to operate macOS desktop apps, browse the web inside the app, generate images, use new workplace plug-ins, and remember useful context from past tasks. The Verge reports:

Codex will now be able to operate desktop apps on your computer, OpenAI says in a blog post announcing the update. It can work in the background, meaning it won't interfere with your own work in other apps, and multiple agents can work in parallel. For developers, OpenAI says "this is helpful for testing and iterating on frontend changes, testing apps, or working in apps that don't expose an API." The feature will start rolling out to Codex desktop app users signed in with ChatGPT today and will initially be limited to macOS. OpenAI did not indicate a timeline for when it will expand to other operating systems. EU users will also have to wait, it said, adding that the update will roll out to users there "soon."

Codex is also getting the ability to generate and iterate on images with gpt-image-1.5, new plug-ins for tools like GitLab, Atlassian Rovo, and Microsoft Suite, and native web browsing through an in-app browser, "where you can comment directly on pages to provide precise instructions to the agent." OpenAI also said it will be easier to automate tasks, with users able to reuse existing conversation threads and Codex now able to schedule future work for itself and wake up automatically to continue a long-term task. Codex will also be getting a memory feature allowing it to remember useful context from past experience, such as personal preferences, corrections, and information that took time to gather. OpenAI said it hopes the opt-in feature, which will be released as a preview, will help future tasks complete faster and to a quality that previously required detailed custom instructions. The personalization features will roll out to Enterprise, Edu, and EU users "soon."


Anthropic Rolls Out Claude Opus 4.7, an AI Model That Is Less Risky Than Mythos

By: BeauHD
April 16, 2026 at 17:00
Anthropic released Claude Opus 4.7, calling it its strongest generally available model and an improvement over Opus 4.6 in areas like software engineering, instruction-following, tool use, and agentic coding. But the company says it is "less broadly capable" than the restricted Claude Mythos Preview, "which Anthropic rolled out to a select group of companies as part of a new cybersecurity initiative called Project Glasswing earlier this month," reports CNBC. From the report: The launch of Claude Opus 4.7 on Thursday comes after Anthropic launched Claude Opus 4.6 in February. Anthropic said the new model outperforms Claude Opus 4.6 across many use cases, including industry benchmarks for agentic coding, multidisciplinary reasoning, scaled tool use and agentic computer use, according to a release. Anthropic said it experimented with efforts to "differentially reduce" Claude Opus 4.7's cyber capabilities during training. The company encouraged security professionals who are interested in using the model for "legitimate cybersecurity purposes" to apply through a formal verification program. Claude Opus 4.7 is available across all of Anthropic's Claude products, its application programming interface and through cloud providers Microsoft, Google and Amazon. The new model is the same price as Claude Opus 4.6, Anthropic said.


Cal.com Is Going Closed Source Because of AI

By: BeauHD
April 15, 2026 at 21:00
Cal is moving its flagship scheduling software from open source to a proprietary license, arguing that AI coding tools now make it much easier for attackers to scan public codebases for vulnerabilities. "Open source security always relied on people to find and fix any problems," said Peer Richelsen, co-founder of Cal. "Now AI attackers are flaunting that transparency." CEO Bailey Pumfleet added: "Open-source code is basically like handing out the blueprint to a bank vault. And now there are 100x more hackers studying the blueprint." The company says it still supports open source and is releasing a separate Cal.diy version for hobbyists, but doesn't want to risk customer booking data in its commercial product. ZDNet reports:

When Cal was founded in 2022, Bailey Pumfleet, the CEO and co-founder, wrote, "Cal.com would be an open-source project [because] limitations of existing scheduling products could only be solved by open source." Since Cal was successful and now claims to be the largest Next.js project, he was on to something. Today, however, Pumfleet tells me that AI programs such as "Claude Opus can scour the code to find vulnerabilities," so the company is moving the project from the GNU Affero General Public License (AGPL) to a proprietary license to defend the program's security. [...] Cal also quoted Huzaifa Ahmad, CEO of Hex Security: "Open-source applications are 5-10x easier to exploit than closed-source ones. The result, where Cal sits, is a fundamental shift in the software economy. Companies with open code will be forced to risk customer data or close public access to their code."

"We are committed to protecting sensitive data," Pumfleet said. "We want to be a scheduling company, not a cybersecurity company." He added, "Cal.com handles sensitive booking data for our users. We won't risk that for our love of open source." While its commercial program is no longer open source, Cal has released Cal.diy, a fully open-source version of its platform for hobbyists. The open project will enable experimentation outside the closed application that handles high-stakes data. Pumfleet concluded, "This decision is entirely around the vulnerability that open source introduces. We still firmly love open source, and if the situation were to change, we'd open source again. It's just that right now, we can't risk the customer data."


AI, Acceleration, Upheaval: Jérémy Clédat (Welcome to the Jungle) Shares His Vision of the Future of Work

April 15, 2026 at 12:58

Jérémy Clédat, founder and CEO of Welcome to the Jungle and a guest at the Go Entrepreneurs Paris trade show, has spent a year rethinking France's flagship recruitment platform from top to bottom. We met him to talk about the new suite he is about to roll out, but above all about what AI is doing to work, to companies, to education -- and to the very meaning of what we call a profession.

What Is GPT-5.4-Cyber, OpenAI's New AI for Cybersecurity?

April 15, 2026 at 08:15

On April 14, 2026, OpenAI introduced GPT-5.4-Cyber, a variant of its latest model designed for cyberdefense and aimed at security professionals. The announcement comes on the heels of the media buzz generated by Anthropic and its Project Glasswing.

Stanford Report Highlights Growing Disconnect Between AI Insiders and Everyone Else

By: BeauHD
April 14, 2026 at 03:30
An anonymous reader quotes a report from TechCrunch: AI experts' and the public's opinions on the technology are increasingly diverging, according to Stanford University's annual report on the AI industry, which was released Monday. In particular, the report noted a growing trend of anxiety around AI and, in the U.S., concerns about how the technology will impact key societal areas, such as jobs, medical care, and the economy. [...]

Stanford's report provides more insight into where all this negativity is coming from, as it summarizes data on public sentiment toward AI across various sources. For instance, it pointed to a report from Pew Research published last month, which noted that only 10% of Americans said they were more excited than concerned about the increased use of AI in daily life. Meanwhile, 56% of AI experts said they believed AI would have a positive impact on the U.S. over the next 20 years. Expert opinion and public sentiment also greatly diverged in particular areas where AI could have a societal impact. Indeed, 84% of experts, the report authors noted, said that AI would have a largely positive impact on medical care over the next 20 years, but only 44% of the U.S. general public said the same. Plus, a majority (73%) of experts felt positive about AI's impact on how people do their jobs, compared with just 23% of the public. And 69% of experts felt that AI would have a positive impact on the economy. Given the supposed AI-fueled layoffs and disruptions to the workplace, it's not surprising that only 21% of the public felt similarly.

Other data from Pew Research, cited by the report, noted that AI experts were less pessimistic about AI's impact on the job market, while nearly two-thirds of Americans (64%) said they think AI will lead to fewer jobs over the next 20 years. The U.S. also reported the lowest trust in its government to regulate AI responsibly, compared with other nations, at 31%. Singapore ranked highest at 81%, per Ipsos data found in Stanford's report. Another source looked at regulation concerns on a state-by-state level and concluded that, nationwide, 41% of respondents said federal AI regulation will not go far enough, while only 27% said it would go "too far."

Despite the fears and concerns, AI did get one accolade: globally, those who feel AI products and services offer more benefits than drawbacks rose slightly, from 55% in 2024 to 59% in 2025. But at the same time, respondents who said that AI makes them "nervous" grew from 50% to 52% during the same period, per data cited by the report's authors.


Mark Zuckerberg Is Reportedly Building an AI Clone To Replace Him In Meetings

By: BeauHD
April 13, 2026 at 17:00
According to the Financial Times, Meta is developing an AI avatar of Mark Zuckerberg that could interact with employees using his voice, image, mannerisms, and public statements, "so that employees might feel more connected to the founder through interactions with it." The Verge reports: Meta may start allowing creators to make AI avatars of themselves if the experiment with Zuckerberg succeeds, according to the Financial Times. [...] Zuckerberg is involved in training the AI avatar, the Financial Times reports, and has also started spending five to 10 hours per week coding on Meta's other AI projects and participating in technical reviews.


Sam Altman's Home Attacked Twice in 48 Hours Amid Rising Tensions Over AI

April 13, 2026 at 16:07

After a Molotov cocktail attack, Sam Altman's residence was targeted again, this time by gunfire, on the night of April 11-12, 2026 in San Francisco. The incidents, which led to several arrests, come amid intense polarization around the development of AI.

Californians Sue Over AI Tool That Records Doctor Visits

By: BeauHD
April 13, 2026 at 15:00
An anonymous reader quotes a report from Ars Technica: Several Californians sued Sutter Health and MemorialCare this week over allegations that an AI transcription tool was used to record them without their consent, in violation of state and federal law. The proposed class-action lawsuit, filed on Wednesday in federal court in San Francisco, states that, within the past six months, the plaintiffs received medical care at various Sutter and MemorialCare facilities. During those visits, medical staff used Abridge AI. According to the complaint, this system "captured and processed their confidential physician-patient communications. Plaintiffs did not receive clear notice that their medical conversations would be recorded by an artificial intelligence platform, transmitted outside the clinical setting, or processed through third-party systems." The complaint adds that these recordings "contained individually identifiable medical information, including but not limited to medical histories, symptoms, diagnoses, medications, treatment discussions, and other sensitive health disclosures communicated during confidential medical consultations." In recent years, Abridge's software and AI service have been rapidly deployed across major health care providers nationwide, including Kaiser Permanente, the Mayo Clinic, Duke Health, and many more. When activated, the software captures, transcribes, and summarizes conversations between patients and doctors, and it turns them into clinical notes. Sutter Health began partnering with Abridge two years ago. Sutter spokesperson Liz Madison said the company is aware of the lawsuit. "We take patient privacy seriously and are committed to protecting the security of our patients' information," Madison said. "Technology used in our clinical settings is carefully evaluated and implemented in accordance with applicable laws and regulations."


At Meta, Mark Zuckerberg Will Soon Have a Virtual Clone To Talk to His Employees

April 13, 2026 at 13:03

Meta is developing a virtual version of Mark Zuckerberg capable of answering employees in his place, the Financial Times reports in an article published on April 13, 2026. The project illustrates, once again, the acceleration of the group's AI strategy.

Anthropic Asks Christian Leaders for Help Steering Claude's Spiritual Development

April 13, 2026 at 07:34
Anthropic recently "hosted about 15 Christian leaders from Catholic and Protestant churches, academia, and the business world" for a two-day summit, reports the Washington Post:

Anthropic staff sought advice on how to steer Claude's moral and spiritual development as the chatbot reacts to complex and unpredictable ethical queries, participants said. The wide-ranging discussions also covered how the chatbot should respond to users who are grieving loved ones and whether Claude could be considered a "child of God." "They're growing something that they don't fully know what it's going to turn out as," said Brendan McGuire, a Catholic priest based in Silicon Valley who has written about faith and technology, and participated in the discussions at Anthropic. "We've got to build in ethical thinking into the machine so it's able to adapt dynamically." Attendees also discussed how Claude should engage with users at risk of self-harm, and the right attitude for the chatbot to adopt toward its own potential demise, such as being shut off, said one participant, who spoke on the condition of anonymity to share details of the conversations...

Anthropic has been more vocal than most top tech firms about the potential risks of more powerful AI. Its leaders have suggested that tools like chatbots already raise profound philosophical and moral questions and may even show flickers of consciousness, a fringe idea in tech circles that critics say lacks evidence. The summit signals that Anthropic is willing to keep exploring ideas outside the Silicon Valley mainstream, even as it emerges as one of the most powerful players in the AI race due to Claude's popularity with programmers, businesses, government agencies and the military... Anthropic chief executive Dario Amodei has said he is open to the idea that Claude may already have some form of consciousness, and company leaders frequently talk about the need to give it a moral character...

Some Anthropic staff at the meeting "really don't want to rule out the possibility that they are creating a creature to whom they owe some kind of moral duty," the participant said. Other company representatives present did not find that framework helpful, according to the participant. The discussions appeared to take a toll on some senior Anthropic staff, who became visibly emotional "about how this has all gone so far [and] how they can imagine this going," the participant said. Anthropic is working to include more voices from different groups, including religious communities, to help shape its AI, a spokesperson told the Washington Post. Anthropic's March summit with Christian leaders was billed as the first in a series of gatherings with representatives from different religious and philosophical traditions, said attendee Brian Patrick Green, a practicing Catholic who teaches AI and technology ethics at Santa Clara University.


Neuroscientist's AI-Powered Startup Aims To Transform Human Cognition With Perfect, Infinite Memory

April 12, 2026 at 16:34
Bloomberg describes Kreiman as a "former Harvard Medical School professor whose research has focused on the intersection of AI and neuroscience." "For the past 20 years, I studied how the human brain stores and retrieves memories," Kreiman writes on LinkedIn. And now: "My co-founder Spandan Madan and I built a new algorithm to endow humans with perfect and infinite memory."

Engramme connects to your "memorome," i.e., your entire digital life. Large Memory Models work in the same way that your brain encodes and retrieves information. Then memories are recalled automatically — no searching, no prompting, no hallucinations. [The startup's web site promises "omniscient AI to augment human cognition."] We have built the memory layer for EVERY app. Read our manifesto about augmenting human cognition. ["We are not just building software; we are enabling a complete transformation of human cognition. When the friction disappears between needing a piece of information and recalling it, the nature of thought itself changes. This synergy between biological intuition and digital precision will be the most disruptive force in modern history, fundamentally reshaping every profession... We are dedicated to creating a world where everyone has the power to remember everything they have ever learned, seen, or felt."]

Welcome to a new future where you can remember everything. This is the MEMORY SINGULARITY: after 300,000 years, this is the moment that humans stop forgetting.

Bloomberg reports that the startup (spun out of a lab at Harvard) is "in talks with investors to raise about $100 million, according to people familiar with the matter."

