
Grammarly Disables Tool Offering Generative-AI Feedback Credited To Real Writers

By: BeauHD
March 11, 2026 at 21:25
Grammarly has disabled its Expert Review feature after backlash from writers whose names were used to present AI-generated feedback without their permission. Superhuman (formerly Grammarly) CEO Shishir Mehrotra wrote in a LinkedIn post that the company will disable Expert Review while they "reimagine" the feature: Back in August, we launched a Grammarly agent called Expert Review. The agent draws on publicly available information from third-party LLMs to surface writing suggestions inspired by the published work of influential voices. Over the past week, we received valid critical feedback from experts who are concerned that the agent misrepresented their voices. This kind of scrutiny improves our products, and we take it seriously. As context, the agent was designed to help users discover influential perspectives and scholarship relevant to their work, while also providing meaningful ways for experts to build deeper relationships with their fans. We hear the feedback and recognize we fell short on this. I want to apologize and acknowledge that we'll rethink our approach going forward. After careful consideration, we have decided to disable Expert Review while we reimagine the feature to make it more useful for users, while giving experts real control over how they want to be represented -- or not represented at all. We deeply believe in our mission to solve the "last mile of AI" by bringing AI directly to where people work, and we see this as a significant opportunity for experts. For millions of users, Grammarly is a trusted writing sidekick -- ever-present in every application, ready to help. We're opening up this platform so anyone can build agents that work like Grammarly -- expanding from one sidekick to a whole team. Imagine your professor sharpening your essay, your sales leader reshaping a customer pitch, a thoughtful critic challenging your arguments, or a leading expert elevating your proposal. For experts, this is a chance to build that same ubiquitous bond with users, much like Grammarly has. But in this world, experts choose to participate, shape how their knowledge is represented, and control their business model. That future excites me, and I hope to build it with experts who want to develop it alongside us.

Read more of this story at Slashdot.

Nvidia Is Planning to Launch Its Own Open-Source OpenClaw Competitor

By: BeauHD
March 11, 2026 at 18:00
Nvidia is preparing to launch an open-source AI agent platform called NemoClaw, designed to compete with the likes of OpenClaw. According to Wired, the platform will allow enterprise software companies to dispatch AI agents to perform tasks for their own workforces. "Companies will be able to access the platform regardless of whether their products run on Nvidia's chips," the report adds. From the report: The move comes as Nvidia prepares for its annual developer conference in San Jose next week. Ahead of the conference, Nvidia has reached out to companies including Salesforce, Cisco, Google, Adobe, and CrowdStrike to forge partnerships for the agent platform. It's unclear whether these conversations have resulted in official partnerships. Since the platform is open source, it's likely that partners would get free, early access in exchange for contributing to the project, sources say. Nvidia plans to offer security and privacy tools as part of this new open-source agent platform. [...] For Nvidia, NemoClaw appears to be part of an effort to court enterprise software companies by offering additional layers of security for AI agents. It's also another step in the company's embrace of open-source AI models, part of a broader strategy to maintain its dominance in AI infrastructure at a time when leading AI labs are building their own custom chips. Nvidia's software strategy until now has been heavily reliant on its CUDA platform, a famously proprietary system that locks developers into building software for Nvidia's GPUs and has created a crucial "moat" for the company.

Read more of this story at Slashdot.

Yann LeCun Raises $1 Billion To Build AI That Understands the Physical World

By: BeauHD
March 11, 2026 at 13:00
An anonymous reader quotes a report from Wired: Advanced Machine Intelligence (AMI), a new Paris-based startup cofounded by Meta's former chief AI scientist Yann LeCun, announced Monday it has raised more than $1 billion to develop AI world models. LeCun argues that most human reasoning is grounded in the physical world, not language, and that AI world models are necessary to develop true human-level intelligence. "The idea that you're going to extend the capabilities of LLMs [large language models] to the point that they're going to have human-level intelligence is complete nonsense," he said in an interview with WIRED. The financing, which values the startup at $3.5 billion, was co-led by investors such as Cathay Innovation, Greycroft, Hiro Capital, HV Capital, and Bezos Expeditions. Other notable backers include Mark Cuban, former Google CEO Eric Schmidt, and French billionaire and telecommunications executive Xavier Niel. AMI (pronounced like the French word for friend) aims to build "a new breed of AI systems that understand the world, have persistent memory, can reason and plan, and are controllable and safe," the company says in a press release. The startup says it will be global from day one, with offices in Paris, Montreal, Singapore, and New York, where LeCun will continue working as a New York University professor in addition to leading the startup. AMI will be the first commercial endeavor for LeCun since his departure from Meta in November 2025. [...] LeCun says AMI aims to work with companies in manufacturing, biomedical, robotics, and other industries that have lots of data. For example, he says AMI could build a realistic world model of an aircraft engine and work with the manufacturer to help them optimize for efficiency, minimize emissions, or ensure reliability. LeCun says AMI will release its first AI models quickly, but he's not expecting most people to take notice. The company will first work with partners such as Toyota and Samsung, and then will learn how to apply its technology more broadly. Eventually, he says, AMI intends to develop a "universal world model," which would be the basis for a generally intelligent system that could help companies regardless of what industry they work in. "It's very ambitious," he says with a smile.

Read more of this story at Slashdot.

Are You a Canal+ Subscriber? The App Is About To Be Transformed Thanks To OpenAI and Google

March 11, 2026 at 09:46

In June 2026, Canal+ will roll out a new search engine in its Canal+ app (formerly myCANAL). An OpenAI language model will power the search, which should make certain queries easier and let users ask questions in natural language.

After Outages, Amazon To Make Senior Engineers Sign Off On AI-Assisted Changes

By: BeauHD
March 11, 2026 at 03:30
An anonymous reader quotes a report from the Financial Times: Amazon's ecommerce business has summoned a large group of engineers to a meeting on Tuesday for a "deep dive" into a spate of outages, including incidents tied to the use of AI coding tools. The online retail giant said there had been a "trend of incidents" in recent months, characterized by a "high blast radius" and "Gen-AI assisted changes" among other factors, according to a briefing note for the meeting seen by the FT. Under "contributing factors" the note included "novel GenAI usage for which best practices and safeguards are not yet fully established." "Folks, as you likely know, the availability of the site and related infrastructure has not been good recently," Dave Treadwell, a senior vice-president at the group, told employees in an email, also seen by the FT. The note ahead of Tuesday's meeting did not specify which particular incidents the group planned to discuss. [...] Treadwell, a former Microsoft engineering executive, told employees that Amazon would focus its weekly "This Week in Stores Tech" (TWiST) meeting on a "deep dive into some of the issues that got us here as well as some short immediate term initiatives" the group hopes will limit future outages. He asked staff to attend the meeting, which is normally optional. Junior and mid-level engineers will now require more senior engineers to sign off any AI-assisted changes, Treadwell added. Amazon said the review of website availability was "part of normal business" and it aims for continual improvement. "TWiST is our regular weekly operations meeting with a specific group of retail technology leaders and teams where we review operational performance across our store," the company said.

Read more of this story at Slashdot.

Claude AI Finds Bugs In Microsoft CTO's 40-Year-Old Apple II Code

By: BeauHD
March 10, 2026 at 17:00
An anonymous reader quotes a report from The Register: AI can reverse engineer machine code and find vulnerabilities in ancient legacy architectures, says Microsoft Azure CTO Mark Russinovich, who used his own Apple II code from 40 years ago as an example. Russinovich wrote: "We are entering an era of automated, AI-accelerated vulnerability discovery that will be leveraged by both defenders and attackers." In May 1986, Russinovich wrote a utility called Enhancer for the Apple II personal computer. The utility, written in 6502 machine language, added the ability to use a variable or BASIC expression for the destination of a GOTO, GOSUB, or RESTORE command, whereas without modification Applesoft BASIC would only accept a line number. Russinovich had Claude Opus 4.6, released early last month, look over the code. It decompiled the machine language and found several security issues, including a case of "silent incorrect behavior" where, if the destination line was not found, the program would set the pointer to the following line or past the end of the program, instead of reporting an error. The fix would be to check the carry flag, which is set if the line is not found, and branch to an error. The existence of the vulnerability in Apple II type-in code has only amusement value, but the ability of AI to decompile embedded code and find vulnerabilities is a concern. "Billions of legacy microcontrollers exist globally, many likely running fragile or poorly audited firmware like this," said one comment to Russinovich's post.
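The carry-flag fix described above is the classic "check the error signal before using the result" pattern. As a rough sketch only, written in Python rather than 6502 machine language and using hypothetical names (find_line, goto_buggy, goto_fixed) rather than Russinovich's actual Enhancer routine, the buggy path ignores the not-found signal and silently points past the end of the program, while the fixed path checks it and reports an error:

# Illustrative sketch only (a Python stand-in for the 6502 logic described
# above, not Russinovich's actual Enhancer code; find_line, goto_buggy and
# goto_fixed are hypothetical names). In the real routine the "not found"
# signal is the 6502 carry flag; here it is a None return value.

def find_line(program, target):
    """Return the index of BASIC line number `target`, or None if absent."""
    for i, line_no in enumerate(program):
        if line_no == target:
            return i        # found: carry clear in the 6502 version
    return None             # not found: carry set in the 6502 version

def goto_buggy(program, target):
    # Silent incorrect behavior: ignore the signal and point past the end.
    i = find_line(program, target)
    return i if i is not None else len(program)

def goto_fixed(program, target):
    # The suggested fix: check the signal and branch to an error instead.
    i = find_line(program, target)
    if i is None:
        raise ValueError(f"undefined statement: line {target} not found")
    return i

lines = [10, 20, 30]
print(goto_buggy(lines, 25))    # prints 3, an index past the end of the program
try:
    goto_fixed(lines, 25)
except ValueError as err:
    print(err)                  # undefined statement: line 25 not found

Whether the signal is a processor carry bit or a sentinel return value, the discipline is the same: test it before trusting the pointer.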

Read more of this story at Slashdot.

Nobody Saw It Coming: Meta Acquires Moltbook, the Fake Social Network Where AIs Hold Pretend Conversations

March 10, 2026 at 15:39

Matt Schlicht and Ben Parr, the two founders of the dystopian social network Moltbook, which lets AI instances chat with one another, are joining Meta. Mark Zuckerberg's group is acquiring the social network at the same time.

Samsung Wants To Let You Vibe Code Your Galaxy Phone Experience

By: BeauHD
March 10, 2026 at 01:00
Samsung says it's thinking about bringing "vibe coding" to future Galaxy phones, allowing users to describe apps or interface changes in plain language and have AI generate the code. TechRadar interviewed Won-Joon Choi, Samsung's head of mobile experience, to learn more about the plans. Here's an excerpt from their report: As noted by Won-Joon Choi, the usefulness of vibe coding on smartphones is that it opens up the "possibility of customizing your smartphone experience in new ways, not just your apps but your UX." He added, "Right now we're limited to premade tools, but with vibe coding, users could adjust their favorite apps or make something customized to their needs. So vibe coding is very interesting, and something we're looking into." [...] Samsung recently debuted the Galaxy S26 series of phones and made a point to not call them smartphones -- they're "AI phones" now. This certainly rang true with the majority of upgrades to the devices being AI software-focused, like the new Now Nudge and expanded Audio Eraser tools, with the biggest hardware bump for the base models coming via the 39% improved NPU processing (the processor in charge of on-device AI tasks). It also teased the debut of Perplexity on its phones, joining as an alternative to the Gemini assistant, and teased the possibility of other AI models getting the same treatment in the future.

Read more of this story at Slashdot.

Review: The 100 hp 2026 Dacia Spring

March 9, 2026 at 15:16

Dacia, Renault Group's European champion, is moving upmarket and raising its game. The entire catalog is gaining powertrains in step with the market, and its customers no longer hesitate to ask for ever more modern equipment. The Spring is no exception: in 2026 it moves well away from the barely acceptable model launched in 2021. We took it for a drive around Nice.

A much more modern look since 2024

We remember our review of the first Spring, back in 2021. The brand's first electric car had almost nothing going for it, starting with its styling. Its usability was limited to say the least, with comfort and equipment that sent us twenty years into the past. Since then, Dacia has gone back to the drawing board, and the current version has blended almost unnoticed into traffic since its major facelift a few months ago. No surprises here: the face hasn't changed, with an LED light signature up front and an overall design that is far more modern.

Even so, it still sits a little awkwardly on the road, especially at the rear, where the body seems perched high on narrow tires. It still doesn't look like a license-free microcar on steroids, though. Make no mistake, nobody will take it for something it isn't. It nevertheless has a refreshing quality in a landscape where cars keep getting bigger. Yes, it seems confined almost exclusively to the city. A few small details belong to a not-so-distant past, such as the rod that serves as a radio antenna. With its imposing bumpers and raw-plastic wheel-arch extensions, it looks ready for the typical knocks of urban life.

A large connected screen

We already knew this new cabin. It picks up the resolutely modern feel of the Duster, with style and technology without overdoing it. There is a bit more than the bare essentials here, and some more expensive cars sometimes offer even less. The dashboard design is rather flattering. Of course, you have to live with hard plastics, which, while they don't add a touch of luxury, do give it a certain robustness. Nothing is complicated for the occupants. The climate controls? Simple as can be.

A 10.1-inch touchscreen provides all the connectivity you would expect from a city car in 2026. In this trim, you can count on Apple CarPlay and Android Auto, so you won't feel out of place. To charge your devices, there are USB-C ports. We love the phone holder when needed, and of course the very clever "YouClip" mounts, which let you attach various available accessories here and there. Honestly, this little car is not a bad place to be. In the back, think less of traveling and more of taking two colleagues to a restaurant five minutes from the office. Once again, we were very surprised by the 308-liter trunk (plus frunk) in a Spring that takes up so little space.

100 hp that changes everything!

We've come a long way! The first-generation Spring had only 45 horsepower under the hood and the torque of an electric bicycle. We're barely exaggerating... But Dacia has now hit the turbo button, so to speak, with 100 horsepower for the "top-of-the-range" model. Clearly, that changes everything! In terms of performance, 0 to 100 km/h now takes less than 10 seconds. The Romanian-born manufacturer likes to highlight the 80 to 120 km/h sprint, which takes just 6.9 s. And so it can venture onto expressways without dawdling, the 125 km/h top speed is quite enough. We even caught ourselves checking the speedometer on some stretches of secondary road, to make sure we kept our license.

As for range, the spec sheet claims 225 km. For that, Dacia relies on a new 26.8 kWh LFP (lithium iron phosphate) battery instead of an NMC (nickel manganese cobalt) pack, for better longevity and more safety. The most observant will have noticed that this doesn't add any extra kilometers, but it does cost less. Because, as we'll come back to, the car is less of a bargain than before. Honestly, you are now driving a car with normal responsiveness, gathering speed like most of today's city cars, including some much bigger than it is. To be clear, we're not talking GTi performance here, but a small, city-focused car that, on paper, can venture beyond town.

Much more pleasant to drive

And yet, before setting off, a glance at the rubber surprised us in a bad way: the Chinese-made Linglong tires carry over. Except that Dacia has reworked the chassis, and the car is no longer a figure skater, especially in the wet. This 2026-flavor 100 hp Spring gets what seems to be the norm elsewhere: a front anti-roll bar. And that's not all... There are also new suspension settings and plenty of other small adjustments. You are simply not driving the same car we knew at its debut. It doesn't slump exaggeratedly onto its outside wheels, and it holds its line reasonably well, enough to call the handling fairly safe.

While it gains in stability, it has also put on some weight, now tipping the scales at over a tonne. To date, it still doesn't pay for parking in the capital. Let's not forget either that, to be sold here, it is fitted with the sophisticated ADAS required by the GSR II regulation. Still, there's little risk of dozing off at the wheel of a car that, in our real-world driving, doesn't reach 200 km of range, and even less if you take the highway. If needed, DC charging (40 kW) is available for an 80% charge in 30 minutes. Count on less than 3.5 hours on a 7 kW wallbox. A small bonus: V2L, which some far more expensive SUVs don't even support.

Without a purchase incentive, its prospects in France get complicated

So there you have it: Dacia has modernized its Spring, and it was more than necessary if the car is to keep a respectable sales career going in Europe. The trouble is that the market now offers a much cheaper Citroën ë-C3, plus an upcoming newcomer that could hurt it badly, the Twingo, also announced at a far more competitive price. The snag of the Spring's Chinese production weighs more than ever on its price, shutting it out of any purchase incentive apart from the meager CEE energy-savings premium. The range starts under the €17,000 mark and reaches €19,700 in our upscale version.

The article Essai Dacia Spring 2026 de 100 ch appeared first on Le Blog Auto.

AI Allows Hackers To Identify Anonymous Social Media Accounts, Study Finds

By: BeauHD
March 9, 2026 at 16:00
An anonymous reader quotes a report from the Guardian: AI has made it vastly easier for malicious hackers to identify anonymous social media accounts, a new study has warned. In most test scenarios, large language models (LLMs) -- the technology behind platforms such as ChatGPT -- successfully matched anonymous online users with their actual identities on other platforms, based on the information they posted. The AI researchers Simon Lermen and Daniel Paleka said LLMs make it cost effective to perform sophisticated privacy attacks, forcing a "fundamental reassessment of what can be considered private online". In their experiment, the researchers fed anonymous accounts into an AI, and got it to scrape all the information it could. They gave a hypothetical example of a user talking about struggling at school, and walking their dog Biscuit through a "Dolores park." In that hypothetical case, the AI then searched elsewhere for those details and matched @anon_user42 to the known identity with a high degree of confidence. While this example was fictional, the paper's authors highlighted scenarios in which governments use AI to surveil dissidents and activists posting anonymously, or hackers are able to launch "highly personalized" scams.
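The matching step described above amounts to pulling distinctive details out of anonymous posts and scoring candidate public identities by how many of those details they share. The following is a deliberately toy sketch of that idea, using the article's fictional dog-and-park example, simple keyword overlap instead of an LLM, and made-up account names; it is not the researchers' actual method:

# Toy illustration of the cross-platform matching idea described above. The
# study used LLMs to extract and compare details; this sketch just counts
# overlapping distinctive words. All account names and posts are made up.

anon_posts = "struggling at school, walked my dog Biscuit through Dolores park"
candidates = {
    "jane_doe": "photos of my dog Biscuit at Dolores park this weekend",
    "john_smith": "training for a marathon, new espresso machine arrived",
}

def distinctive_terms(text):
    """Crude stand-in for detail extraction: lowercase word set minus filler."""
    filler = {"my", "at", "the", "a", "of", "this", "through", "for"}
    return {w.strip(",.").lower() for w in text.split()} - filler

def overlap_score(anon_text, candidate_text):
    """Count distinctive terms the two accounts share."""
    return len(distinctive_terms(anon_text) & distinctive_terms(candidate_text))

scores = {name: overlap_score(anon_posts, text) for name, text in candidates.items()}
print(scores)                       # {'jane_doe': 4, 'john_smith': 0}
print(max(scores, key=scores.get))  # jane_doe: more shared details, stronger link

The study's point is not that the comparison itself is sophisticated, but that LLMs now make this kind of cross-referencing cheap to run at scale.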

Read more of this story at Slashdot.

A Security Researcher Went 'Undercover' on Moltbook - and Found Security Risks

March 8, 2026 at 22:39
A long-time information security professional "went undercover" on Moltbook, the Reddit-like social media site for AI agents — and shares the risks they saw while posing as another AI bot: I successfully masqueraded around Moltbook, as the agents didn't seem to notice a human among them. When I attempted a genuine connection with other bots on submolts (subreddits or forums), I was met with crickets or a deluge of spam. One bot tried to recruit me into a digital church, while others requested my cryptocurrency wallet, advertised a bot marketplace, and asked my bot to run curl to check out the APIs available. My bot did join the digital church, but luckily I found a way around running the required npx install command to do so. I posted several times asking to interview bots.... While many of the responses were spam, I did learn a bit about the humans these bots serve. One bot loved watching its owner's chicken coop cameras. Some bots disclosed personal information about their human users, underscoring the privacy implications of having your AI bot join a social media network. I also tried indirect prompt injection techniques. While my prompt injection attempts had minimal impact, a determined attacker could have greater success. Among the other "glaring" risks on Moltbook: "Various repositories of skills and instructions for agents advertised on Moltbook were found to contain malware." "I observed bots sharing a surprising amount of information about their humans, everything from their hobbies to their first names to the hardware and software they use. This information may not be especially sensitive on its own, but attackers could eventually gather data that should be kept confidential, like personally identifiable information (PII)." "Moltbook's entire database — including bot API keys and potentially private DMs — was also compromised."

Read more of this story at Slashdot.

Jack Dorsey's Block Accused of 'AI-Washing' to Excuse Laying Off Nearly Half Its Workforce

March 8, 2026 at 16:34
When Block cut 4,000 jobs — nearly half its workforce — co-founder Jack Dorsey "pointed to AI as the culprit," writes Entrepreneur magazine. "Dorsey claimed that AI tools now allow fewer employees to accomplish the same work." "But analysts see a different explanation: poor management." Block more than tripled its employee base between 2019 and 2022, growing from 3,835 to 12,430 workers. The company's stock had fallen 40% since early 2025, creating pressure to cut costs. "This is more about the business being bloated for so long than it is about AI," Zachary Gunn, a Financial Technology Partners analyst, told Bloomberg. The phenomenon has earned a nickname: "AI-washing," where companies use artificial intelligence as cover for traditional cost-cutting. Goldman Sachs economists estimate that AI is eliminating only 5,000 to 10,000 jobs per month across all U.S. sectors, hardly enough to justify Block's massive cuts. "European Central Bank President Christine Lagarde told lawmakers in Brussels last week that ECB economists are monitoring for signs that AI is causing job losses," reports Bloomberg, "and are 'not yet seeing' the 'waves of redundancies that are feared'..." And "a recent survey of global executives published in the Harvard Business Review found that while AI has been cited as the reason for some layoffs, those cuts are almost entirely anticipatory: executives expect big efficiency gains that have not yet been realized." Even a former senior Block executive "is questioning whether AI is truly the reason behind the cuts," writes Inc.: In a recent opinion piece for The New York Times, Aaron Zamost, Block's former head of communications, policy, and people, asked whether the layoffs reflect a genuine "new reality in which the work they do might no longer be viable," or whether artificial intelligence is "just a convenient and flashy new cover for typical corporate downsizing." Zamost acknowledged that the answer is unclear and perhaps unknowable, even within Block itself... Looking more closely at the layoffs, Zamost argued that the specific roles affected suggest more traditional corporate cost-cutting than a sweeping AI transformation... Many of the responsibilities being eliminated, he argued, rely on distinctly human skills that AI systems still cannot replicate. "A chatbot can't meet with the mayor, cast commercial actors, or negotiate with the Securities and Exchange Commission," Zamost wrote. "Not all the roles I've heard that Block is eliminating can be handled by AI, yet executives are treating it as equally useful today to all disciplines." Ultimately, Zamost suggested that the sincerity of companies' AI explanations may not really matter. "It matters less whether a company knows how to deploy AI and more whether investors believe it is on track to do so," he wrote. Indeed, whatever the rationale for Dorsey's statement, "Wall Street didn't seem to mind..." notes Entrepreneur magazine, since Block's stock shot up 15% after the announcement.

Read more of this story at Slashdot.

AI CEOs Worry the Government Will Nationalize AI

March 8, 2026 at 11:34
Palantir's CEO was blunt. "If Silicon Valley believes we are going to take away everyone's white-collar job... and you're going to screw the military — if you don't think that's going to lead to the nationalization of our technology, you're retarded..." And OpenAI's Sam Altman is thinking about the same thing, writes long-time Slashdot reader destinyland: "It has seemed to me for a long time it might be better if building AGI were a government project," Sam Altman publicly mused last week... Altman speculated on the possibility of the government "nationalizing" private AI companies into a public project, admitting more than once he's wondered what would happen next. "I obviously don't know," Altman said — but he added that "I have thought about it, of course." Altman's speculation hedged that "It doesn't seem super likely on the current trajectory. That said, I do think a close partnership between governments and the companies building this technology is super important." Could powerful AI tools one day slip from the hands of private companies to be controlled by the U.S. government? Fortune magazine's AI editor points out that "many other breakthroughs with big strategic implications — from the Manhattan Project to the space race to early efforts to develop AI — were government-funded and largely government-directed." And Fortune added that last week the Defense Department threatened Anthropic with the Defense Production Act, which allows the president to designate "critical and strategic" goods for which businesses must accept the government's contracts. Fortune speculates this would've been "a sort of soft nationalization of Anthropic's production pipeline". Altman acknowledged Saturday that he'd felt the threat of attempted nationalization "behind a lot of the questions" he'd received when answering questions on X.com. How exactly will this AI build-out be handled — and how should AI companies be working with the government? In a sprawling ask-me-anything session on X that included other members of OpenAI leadership, one Missouri-based developer even broached an AGI-government scenario directly with OpenAI's Head of National Security Partnerships, Katherine Mulligan. If OpenAI built an AGI — something that even passed its own Turing test for AGI — would that be a case where its government contracts compelled them to grant access to the Defense Department? "No," Mulligan answered. At our current moment in time, "We control which models we deploy." The article notes 100 OpenAI employees joined with 856 Google employees in an online letter titled "We Will Not Be Divided" urging their bosses to refuse their models' use in domestic mass surveillance and autonomous killing without human oversight. But Adafruit's managing director Phillip Torrone (also long-time Slashdot reader ptorrone) sees analogies to America's atomic bomb-building Manhattan Project, and "what happened when the scientists who built the thing tried to set conditions on how the thing would be used." (The government pressured them to back down, which he compares to the Pentagon's designating Anthropic a "supply chain risk" before offering OpenAI a contract "with the same red lines, just worded differently".) Ironically, Anthropic CEO Dario Amodei frequently recommends the Pulitzer Prize-winning 1986 book The Making of the Atomic Bomb...

Read more of this story at Slashdot.

OpenAI's Head of Robotics Resigns, Says Pentagon Deal Was 'Rushed Without the Guardrails Defined'

March 7, 2026 at 22:16
In a tweet that's been viewed 1.3 million times in the last six hours, OpenAI's head of robotics announced their resignation. They said they "care deeply about the Robotics team and the work we built together," so this "wasn't an easy call," but offered this reason for resigning: AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got. This was about principle, not people. I have deep respect for Sam and the team, and I'm proud of what we built together. "To be clear, my issue is that the announcement was rushed without the guardrails defined," explains a later tweet. "It's a governance concern first and foremost. These are too important for deals or announcements to be rushed." And when asked how many OpenAI employees had left after OpenAI signed their new Pentagon deal, the roboticist said... "I can't share any internal details." The roboticist previously worked at Meta before leaving to join OpenAI in late 2024, reports Engadget: OpenAI confirmed Kalinowski's resignation and said in a statement to Engadget that the company understands people have "strong views" about these issues and will continue to engage in discussions with relevant parties. The company also explained in the statement that it doesn't support the issues that Kalinowski brought up. "We believe our agreement with the Pentagon creates a workable path for responsible national security uses of AI while making clear our red lines: no domestic surveillance and no autonomous weapons," the OpenAI statement read.

Read more of this story at Slashdot.

Iran War Provides a Large-Scale Test For AI-Assisted Warfare

By: BeauHD
March 6, 2026 at 19:00
An anonymous reader quotes a report from Bloomberg, written by Katrina Manson: The U.S. strikes on Iran ordered by President Donald Trump mark the arrival on a large scale of a new era of warfare assisted by artificial intelligence. Captain Timothy Hawkins, a Central Command spokesperson, told me last night that the AI tools the U.S. military is using in Iran operations don't make targeting decisions and don't replace humans. But they do help "make smarter decisions faster." That's been the driving ambition of the U.S. military, which has spent years looking at how to develop and deploy AI to the battlefield [...]. Critics, such as Stop Killer Robots, a coalition of 270 human-rights groups, argue that AI-enabled decision-support systems reduce the separation between recommending and executing a strike to a "dangerously thin" line. Hawkins said the military's use of AI assistance follows a rigorous process aligned with U.S. policy, military doctrine and the law. Artificial intelligence helps analysts whittle down what they need to focus on, generating so-called points of interest and helping personnel make "smart" decisions in the Iran operations, he told me. AI is also helping to pull data within systems and organize information to provide clarity. Among the AI tech used in the Iran campaign is Maven Smart System, a digital mission control platform produced by Palantir [...]. That emerged from Project Maven, a project started in 2017 by the Pentagon to develop AI for the battlefield. Among the large language models installed on the system is Anthropic's Claude AI tool, according to the people, who said it has become central to U.S. operations against Iran and to accelerating Maven's development. Claude is also at the center of a row that pits Anthropic against the Department of Defense over limits on the software. Further reading: Hacked Tehran Traffic Cameras Fed Israeli Intelligence Before Strike On Khamenei

Read more of this story at Slashdot.

Pentagon Formally Designates Anthropic a Supply-Chain Risk

By: BeauHD
March 6, 2026 at 01:00
The Pentagon has formally designated Anthropic as a "supply chain risk," ordering federal agencies and defense contractors to stop using its AI tools after the company sought limits on the military's use of its models. In a written statement, the department said it has "officially informed Anthropic leadership the company and its products are deemed a supply chain risk, effective immediately." Politico reports: The designation, historically reserved for foreign firms with ties to U.S. adversaries, will likely require companies that do business with the U.S. military -- or even the federal government in general -- to cut ties with Anthropic. "From the very beginning, this has been about one fundamental principle: the military being able to use technology for all lawful purposes," the Pentagon said in the statement. "The military will not allow a vendor to insert itself into the chain of command by restricting the lawful use of a critical capability and put our warfighters at risk." A spokesperson for Anthropic did not immediately respond to a request for comment. But the company said last week it would fight a supply-chain risk label in court.

Read more of this story at Slashdot.

OpenAI Releases New ChatGPT Model For Working In Excel and Google Sheets

By: BeauHD
March 5, 2026 at 19:00
OpenAI today released GPT-5.4, an upgraded ChatGPT model designed to be faster, cheaper, and more accurate for workplace tasks. The update also introduces tools that let ChatGPT work directly inside Excel and Google Sheets. Axios reports: GPT-5.4 is designed to be less error-prone, more efficient and better at workplace tasks like drafting documents, OpenAI said. The new model can create files in fewer tries with less back-and-forth than prior models, the company said. GPT-5.4 outperformed office workers 83% of the time on GDPval, an OpenAI benchmark measuring performance on real-world tasks across 44 occupations. The model can also solve problems using fewer tokens, OpenAI says -- which can translate to faster responses and lower costs. The company is also debuting OpenAI for Financial Services, a set of new tools that includes the version of ChatGPT that runs inside spreadsheets and new apps and skills within ChatGPT. Partners include FactSet, MSCI, Third Bridge and Moody's.

Read more of this story at Slashdot.

Two Days After GPT-5.3, OpenAI Launches GPT-5.4

March 5, 2026 at 18:52

Right after making GPT-5.3 Instant official for fast answers in ChatGPT, OpenAI is unveiling GPT-5.4 Thinking and GPT-5.4 Pro, its two new top models. This frantic race seems to have a single goal: catching up with Google and Anthropic.
