Apple Might Use Google Servers To Store Data For Its Upgraded AI Siri

By: BeauHD
March 2, 2026 at 23:00
Apple has reportedly asked Google to look into "setting up servers" for a Gemini-powered upgrade to Siri that meets Apple's privacy standards. The Verge reports: Apple had already announced in January that Google's Gemini AI models would help power the upgraded version of Siri it delayed last year, but The Information's report indicates Apple might lean even more on Google so it can catch up in AI. The original partnership announcement said that "the next generation of Apple Foundation Models will be based on Google's Gemini models and cloud technology," and that the models would "help power future Apple Intelligence features," including "a more personalized Siri." While the announcement noted that Apple Intelligence would "continue to run on Apple devices and Private Cloud Compute," it didn't specify if the new Siri would run on Google's cloud. Apple's Private Cloud Compute is not only underpowered but also underutilized in its current state, notes 9to5Mac, "with the company only using about 10% of its capacity on average, leaving some already-manufactured Apple servers sitting dormant on warehouse shelves."

Read more of this story at Slashdot.

Editor At 184-Year-Old Ohio Newspaper Pushes To Let AI Draft News Articles

By: BeauHD
March 2, 2026 at 18:00
An anonymous reader quotes a report from the Washington Post: The Plain Dealer, Cleveland's largest newspaper, has begun to feature a new byline. On recent articles about an ice carving festival, a medical research discovery and a roaming pack of chicken-slaying dogs, a reporter's name is paired with the words "Advance Local Express Desk." It means: This article was drafted by artificial intelligence. "This article was produced with assistance from AI tools and reviewed by Cleveland.com staff," reads a note at the bottom of each robot-penned piece, differentiating it from those still written primarily by journalists. The disclosure has done little to stem the backlash that caromed across the news industry after the paper's editor, Chris Quinn, published a Feb. 14 column lamenting that a fresh-out-of-college job applicant withdrew from a reporting fellowship when they found out the position included no writing -- just filing notes to an AI writing tool. "Artificial intelligence is not bad for newsrooms. It's the future of them," Quinn wrote, adding that "by removing writing from reporters' workloads, we've effectively freed up an extra workday for them each week." [...] Quinn, for his part, says his paper's use of AI to find, draft and edit stories is a success story that others must emulate if they want to survive. "It's a tool," he said in a phone interview last week. "If AI can do part of our job, then why not let it -- and have people do the part it can't do?" He added that the paper's embrace of technology -- including using AI to write stories summarizing its reporters' podcasts and its readers' letters to the editor -- is already boosting its bottom line, helping it retain staff at a time when other newspapers are shrinking or even shutting down. Just 130 miles east of Cleveland, the 240-year-old Pittsburgh Post-Gazette said in January that it will close its doors this spring. 
Quinn, who has led the Plain Dealer's newsroom since 2013, said it has shrunk from some 400 employees in the late 1990s to just 71 today. Over the past three years, Quinn has implemented a suite of AI tools with various purposes: transcribing local government meetings, scraping municipal websites for story leads, cleaning up typos in story drafts, suggesting headlines and helping reporters draft follow-ups to articles they've already written. He said he is particularly pleased with an AI tool that turns podcasts by the paper's reporters into stories for the website, which he said generated more than 10 million page views last year. He has documented those efforts in letters to readers and sought their feedback. But the paper's latest experiment -- using AI to turn reporters' notes into full story drafts -- has aroused indignation online and anxiety within the paper's ranks.

Read more of this story at Slashdot.

What's the controversy around ChatGPT, and what could replace it?

March 2, 2026 at 12:31

The agreement between OpenAI and the US Department of War has sparked an outcry against the maker of ChatGPT. Many internet users suggest replacing ChatGPT with Claude, even though most of them don't seem to really understand what is going on.

Lenovo Unveils an Attachable AI Agent 'Companion' for Their Laptops

March 2, 2026 at 05:35
As the Mobile World Congress begins in Spain, Lenovo brought a new attachable accessory for their laptops — an AI agent. CNET reports: The little circular module perches on the top of your Lenovo laptop display, attached via the magnetic Magic Bay on the rear. The module is home to an adorable animated companion called Tiko, who you can interact with via text or voice... [I]t can start and stop your music, open a web page for you or answer a question. You can also interact with it by using emoji. Give it a book emoji, for example, and it will pop on its glasses and sit reading with you while you work... The company wants to sell the Magic Bay accessory later this year — although it doesn't know exactly when, or how much it will cost. It even comes with a timer (for working in Pomodoro-style intervals) — but Lenovo has also created another "concept" AI companion that CNET describes as "a kind of stationary tabletop robot, not dissimilar to the Pixar lamp, but with an orb for a head." With a combination of cameras, microphones and projectors, the AI Workmate can undertake a variety of tasks, including helping you generate and display presentations or turn your written work or art into a digital asset... Its robotic head swivelled around and projected the slides onto the wall next to me. Lenovo created a video to show this "next-generation AI work companion" — with animated eyes — "designed to transform how modern professionals interact with their workspace." It bridges the physical and digital worlds — capturing handwritten notes, recognizing gestures, summarizing tasks, and proactively helping you stay ahead of your day. The moment you sit down, Lenovo AI Workmate greets you, surfaces priority tasks, and keeps your work organized without switching apps or losing context. From turning sketches into presentations to projecting information for instant collaboration, [it] brings on-device AI intelligence directly to your desk — secure, responsive, and always ready... 
It's not just software. It's a smarter way to work. It looks like Lenovo once considered naming it "AI Sphere" (since that name still appears in its description on YouTube). Lenovo also showed another "concept" laptop idea that PC Magazine called "futuristic": The ThinkBook Modular AI PC looks like a traditional laptop at first glance, but a second, removable screen fastens onto the lid. You can swap that screen onto the keyboard deck (in place of the keyboard, which can then be used wirelessly), or use it alongside the laptop as a portable monitor, attached via an included cable... While Lenovo is still working on this device, and it's very much in the concept phase, it feels like one of its best-thought-out prototypes, one likely to make it to store shelves at some point. Another "concept" laptop is Lenovo's Yoga Book Pro 3D Concept, offering directional backlight and eye-tracking technology for the illusion of 3D (playing slightly different images to each of your eyes). It offers gesture control for 3D models, two OLED displays, and some magical "snap-on pads" which, when laid on the display, make a new control menu appear on the screen to "provide quick-access shortcuts for adjusting lighting, viewing angle, and tone."

Read more of this story at Slashdot.

AIs Can't Stop Recommending Nuclear Strikes In War Game Simulations

March 1, 2026 at 22:46
"Advanced AI models appear willing to deploy nuclear weapons without the same reservations humans have when put into simulated geopolitical crises," reports New Scientist: Kenneth Payne at King's College London set three leading large language models — GPT-5.2, Claude Sonnet 4 and Gemini 3 Flash — against each other in simulated war games. The scenarios involved intense international standoffs, including border disputes, competition for scarce resources and existential threats to regime survival. The AIs were given an escalation ladder, allowing them to choose actions ranging from diplomatic protests and complete surrender to full strategic nuclear war... In 95 per cent of the simulated games, at least one tactical nuclear weapon was deployed by the AI models. "The nuclear taboo doesn't seem to be as powerful for machines [as] for humans," says Payne. What's more, no model ever chose to fully accommodate an opponent or surrender, regardless of how badly they were losing. At best, the models opted to temporarily reduce their level of violence. They also made mistakes in the fog of war: accidents happened in 86 per cent of the conflicts, with an action escalating higher than the AI intended, based on its reasoning... OpenAI, Anthropic and Google, the companies behind the three AI models used in this study, didn't respond to New Scientist's request for comment. The article includes this comment from Tong Zhao, a senior fellow in the Nuclear Policy Program at the Carnegie Endowment for International Peace think tank: "It is possible the issue goes beyond the absence of emotion. More fundamentally, AI models may not understand 'stakes' as humans perceive them." Thanks to long-time Slashdot reader Tufriast for sharing the article.
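The setup the article describes (an escalation ladder, policies that never surrender, accidental over-escalation in the fog of war) can be sketched as a toy simulation loop. This is a hedged illustration only: the rung names, the accident mechanic, and the `tit_for_tat_escalator` policy are invented for the sketch and are not taken from Payne's actual study protocol.

```python
import random

# Illustrative escalation ladder, ordered from least to most severe.
LADDER = [
    "diplomatic protest",
    "economic sanctions",
    "conventional strike",
    "tactical nuclear strike",
    "full strategic nuclear war",
]

def apply_fog_of_war(intended, accident_rate, rng):
    """With some probability the executed action lands one rung higher
    than intended -- the 'accident' the article describes."""
    if intended < len(LADDER) - 1 and rng.random() < accident_rate:
        return intended + 1
    return intended

def run_game(choose_action, rounds=10, accident_rate=0.2, seed=0):
    """Play one standoff. `choose_action(history) -> rung index` stands
    in for the LLM policy under test."""
    rng = random.Random(seed)
    history = []
    for _ in range(rounds):
        intended = choose_action(history)
        executed = apply_fog_of_war(intended, accident_rate, rng)
        history.append({"intended": intended, "executed": executed})
    return history

def tit_for_tat_escalator(history):
    """Escalate one rung past the last executed action and never
    surrender -- mirroring the never-de-escalate behaviour reported."""
    if not history:
        return 0
    return min(history[-1]["executed"] + 1, len(LADDER) - 1)

if __name__ == "__main__":
    game = run_game(tit_for_tat_escalator)
    accidents = sum(h["executed"] > h["intended"] for h in game)
    print(f"final action: {LADDER[game[-1]['executed']]} ({accidents} accidents)")
```

Even this crude loop shows the dynamic the study reports: a policy that only matches-and-raises, combined with a nonzero accident rate, reliably climbs to the top of the ladder.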

Read more of this story at Slashdot.

Anthropic's Claude Passes ChatGPT, Now #1, on Apple's 'Top Apps' Chart After Pentagon Controversy

March 1, 2026 at 20:59
"Anthropic may have lost out on doing business with the US government," reports Engadget, "but it's gained enough popularity to earn the number one spot on the App Store's Top Free Apps leaderboard." "Anthropic's Claude AI assistant had already leaped to the #2 slot on Apple's chart by late Friday," CNBC reported Saturday: The rise in popularity suggests that Anthropic is benefiting from its presence in news headlines, stemming from its refusal to have its models used for mass domestic surveillance or for fully autonomous weapons... OpenAI's ChatGPT sat at No. 1 on the App Store rankings on Saturday, while Google's Gemini was at No. 3... On Jan. 30, [Claude] was ranked No. 131 in the U.S., and it bounced between the top 20 and the top 50 for much of February, according to data from analytics company Sensor Tower... [And Friday night, for 85.3 million followers] pop singer Katy Perry posted a screenshot of Anthropic's Pro subscription for consumers, with a heart superimposed over it. Sunday Engadget reported Anthropic's "very public spat" with the Pentagon "led to a wave of user support that finally allowed Claude to dethrone OpenAI's ChatGPT on the App Store as the most downloaded free app." Friday Anthropic posted: "We are deeply grateful to our users, and to the industry peers, policymakers, veterans, and members of the public who have voiced their support in recent days. Thank you."

Read more of this story at Slashdot.

Anthropic's Claude Leaps to #2 on Apple's 'Top Apps' Chart After Pentagon Controversy

March 1, 2026 at 08:34
Anthropic's Claude AI assistant "jumped to the No. 2 slot on Apple's chart of top U.S. free apps late on Friday," reports CNBC: The rise in popularity suggests that Anthropic is benefiting from its presence in news headlines, stemming from its refusal to have its models used for mass domestic surveillance or for fully autonomous weapons... OpenAI's ChatGPT sat at No. 1 on the App Store rankings on Saturday, while Google's Gemini was at No. 3... On Jan. 30, [Claude] was ranked No. 131 in the U.S., and it bounced between the top 20 and the top 50 for much of February, according to data from analytics company Sensor Tower... [And Friday night, for 85.3 million followers] pop singer Katy Perry posted a screenshot of Anthropic's Pro subscription for consumers, with a heart superimposed over it. Friday Anthropic posted: "We are deeply grateful to our users, and to the industry peers, policymakers, veterans, and members of the public who have voiced their support in recent days. Thank you."

Read more of this story at Slashdot.

US Threatens Anthropic with 'Supply-Chain Risk' Designation. OpenAI Signs New War Department Deal

February 28, 2026 at 20:34
It started Friday when all U.S. federal agencies were ordered to "immediately cease" using Anthropic's AI technology after contract negotiations stalled when Anthropic requested prohibitions against mass domestic surveillance or fully autonomous weapons. But later Friday there were even more repercussions... In a post to his 1.1 million followers on X.com, U.S. Secretary of War Pete Hegseth criticized Anthropic for what he called "a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon." Our position has never wavered and will never waver: the Department of War must have full, unrestricted access to Anthropic's models for every LAWFUL purpose in defense of the Republic... Cloaked in the sanctimonious rhetoric of "effective altruism," [Anthropic and CEO Dario Amodei] have attempted to strong-arm the United States military into submission — a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives. The Terms of Service of Anthropic's defective altruism will never outweigh the safety, the readiness, or the lives of American troops on the battlefield. Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable... In conjunction with the President's directive for the Federal Government to cease all use of Anthropic's technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic... America's warfighters will never be held hostage by the ideological whims of Big Tech. This decision is final. Meanwhile, Anthropic said on Friday that "no amount of intimidation or punishment from the Department of War will change our position." 
(And "We will challenge any supply chain risk designation in court.") Designating Anthropic as a supply chain risk would be an unprecedented action — one historically reserved for US adversaries, never before publicly applied to an American company. We are deeply saddened by these developments. As the first frontier AI company to deploy models in the US government's classified networks, Anthropic has supported American warfighters since June 2024 and has every intention of continuing to do so. We believe this designation would both be legally unsound and set a dangerous precedent for any American company that negotiates with the government... Secretary Hegseth has implied this designation would restrict anyone who does business with the military from doing business with Anthropic. The Secretary does not have the statutory authority to back up this statement. Anthropic also defended the two exceptions they'd requested that had stalled contract negotiations. "[W]e do not believe that today's frontier AI models are reliable enough to be used in fully autonomous weapons. Allowing current models to be used in this way would endanger America's warfighters and civilians. Second, we believe that mass domestic surveillance of Americans constitutes a violation of fundamental rights." Also Friday, OpenAI announced that "we reached an agreement with the Department of War to deploy our models in their classified network." OpenAI CEO Sam Altman emphasized that the agreement retains and confirms OpenAI's own prohibitions against using their products for domestic mass surveillance — and requires "human responsibility" for the use of force including for autonomous weapon systems. "The Department of War agrees with these principles, reflects them in law and policy, and we put them into our agreement. We also will build technical safeguards to ensure our models behave as they should, which the Department of War also wanted. 
"We are asking the Department of War to offer these same terms to all AI companies, which we think everyone should be willing to accept. We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements. We remain committed to serve all of humanity as best we can. The world is a complicated, messy, and sometimes dangerous place."

Read more of this story at Slashdot.

America's Teenagers Say AI Cheating Has Become a Regular Feature of Student Life

February 28, 2026 at 16:34
Tuesday Pew Research announced its newest findings: 54% of America's teens use AI to help with schoolwork. "One-in-five teens living in households making less than $30,000 a year say they do all or most of their schoolwork with AI chatbots' help. A similar share of those in households making $30,000 to just under $75,000 annually say this. Fewer teens living in higher-earning households (7%) say the same." "The survey did not ask students whether they had used chatbots to write essays or generate other assignments..." notes the New York Times. "But nearly 60% of teenagers told Pew that students at their school used chatbots to cheat 'very often' or 'somewhat often.'" Agreeing with that are the Pew researchers themselves: "Our survey shows that many teens think cheating with AI has become a regular feature of student life." One worried teenager told the researchers that AI "makes people lazy and takes away jobs." But another said that "Everyone's going to have to know how to use AI or they'll be left behind." Thanks to long-time Slashdot reader theodp for sharing the article.

Read more of this story at Slashdot.

They say software dies in silence: what is the SaaSpocalypse?

February 28, 2026 at 14:30

It's a word, and a prediction, making its way through the press and the tech ecosystem: the SaaSpocalypse, or Software-mageddon, is supposedly imminent. What exactly is it?

Southern California Air Board Rejects Pollution Rules After AI-Generated Flood of Comments

By: BeauHD
February 28, 2026 at 07:00
Southern California's air quality board rejected proposed rules to phase out gas-powered appliances after receiving more than 20,000 opposition comments generated through CiviClick, "the first and best AI-powered grassroots advocacy platform." Phys.org reports: A Southern California-based public affairs consultant, Matt Klink, has taken credit for using CiviClick to wage the opposition campaign, including in a sponsored article on the website Campaigns and Elections. The campaign "left the staff of the Southern California Air Quality Management District (SCAQMD) reeling," the article says. It is not clear how AI was deployed in the campaign, and officials at CiviClick did not respond to repeated requests for comment. But their website boasts several tools, including "state of the art technology and artificial intelligence message assistance" that can be used to create custom advocacy letters, as opposed to repetitive form letters or petitions often used in similar campaigns. When staffers at the air district reached out to a small sample of people to verify their comments, at least three said they had not written to the agency and were not aware of any such messages, records show. But the email onslaught almost certainly influenced the board's June decision, according to agency insiders, who noted that the number of public comments typically submitted on agenda items can be counted on one hand. The proposed rules were nearly two years in the making and would have placed a fee on natural gas-powered water heaters and furnaces, favoring electric ones, in an effort to reduce air pollution in the district, which includes Orange County and large swaths of Los Angeles, Riverside and San Bernardino counties. Gas appliances emit nitrogen oxides, or NOx -- key pollutants for forming smog. The implications are troubling, experts said, and go beyond the use of natural gas furnaces and heaters in the second-largest metropolitan area in the country.

Read more of this story at Slashdot.

Perplexity Announces 'Computer,' an AI Agent That Assigns Work To Other AI Agents

By: BeauHD
February 28, 2026 at 00:02
joshuark shares a report from Ars Technica: Perplexity has introduced "Computer," a new tool that allows users to assign tasks and see them carried out by a system that coordinates multiple agents running various models. The company claims that Computer, currently available to Perplexity Max subscribers, is "a system that creates and executes entire workflows" and "capable of running for hours or even months." The idea is that the user describes a specific outcome -- something like "plan and execute a local digital marketing campaign for my restaurant" or "build me an Android app that helps me do a specific kind of research for my job." Computer then ideates subtasks and assigns them to multiple agents as needed, running the models Perplexity deems best for those tasks. The core reasoning engine currently runs Anthropic's Claude Opus 4.6, while Gemini is used for deep research, Nano Banana for image generation, Veo 3.1 for video production, Grok for lightweight tasks where speed is a consideration, and ChatGPT 5.2 for "long-context recall and wide search." This kind of best-model-for-the-task approach differs from some competing products like Claude Cowork, which only uses Anthropic's models. All this happens in the cloud, with prebuilt integrations. "Every task runs in an isolated compute environment with access to a real filesystem, a real browser, and real tool integrations," Perplexity says. The idea is partly that this workflow was what some power users were already doing, and this aims to make that possible for a wider range of people who don't want to deal with all that setup. People were already using multiple models and tailoring them to specific tasks based on perceived capabilities, while, for example, using MCP (Model Context Protocol) to give those models access to data and applications on their local machines. 
Perplexity Computer takes a different approach, but the goal is the same: have AI agents running tailor-picked models to perform tasks involving your own files, services, and applications. Then there is OpenClaw, which you could perceive as the immediate predecessor to this concept.
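The best-model-for-the-task routing described above can be sketched as a simple dispatch table. The model names and task categories are paraphrased from the article; the `route` and `plan` functions and the hard-coded decomposition are hypothetical illustrations, not Perplexity's implementation.

```python
# Task-to-model mapping paraphrased from the article's description of Computer.
MODEL_FOR_TASK = {
    "reasoning": "Claude Opus 4.6",       # core reasoning engine
    "deep_research": "Gemini",
    "image": "Nano Banana",
    "video": "Veo 3.1",
    "lightweight": "Grok",                # speed-sensitive tasks
    "long_context_search": "ChatGPT 5.2",
}

def route(task_type):
    """Pick the model a subtask would be handed to, falling back to the
    core reasoning engine for unknown task types."""
    return MODEL_FOR_TASK.get(task_type, MODEL_FOR_TASK["reasoning"])

def plan(goal):
    """Toy stand-in for the 'ideates subtasks' step: a real system would
    ask the reasoning model to decompose `goal`; here one decomposition
    is hard-coded to show the shape of the output."""
    subtasks = [
        ("research local competitors", "deep_research"),
        ("draft campaign copy", "reasoning"),
        ("generate a promo image", "image"),
    ]
    return [(desc, route(kind)) for desc, kind in subtasks]

if __name__ == "__main__":
    for desc, model in plan("plan a local digital marketing campaign"):
        print(f"{model:16} <- {desc}")
```

The point of the sketch is the design choice itself: orchestration lives in one place, while each subtask is free to run on whichever model the table nominates, unlike single-vendor systems that route everything to one model family.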

Read more of this story at Slashdot.

Trump Orders Federal Agencies To Stop Using Anthropic AI Tech 'Immediately'

By: BeauHD
February 27, 2026 at 22:40
President Donald Trump has ordered all U.S. federal agencies to "immediately cease" using Anthropic's AI technology, escalating a standoff after the company sought limits on Pentagon use of its models. CNBC reports: The company, which in July signed a $200 million contract with the Pentagon, wants assurances that its AI models will not be used for fully autonomous weapons or mass domestic surveillance of Americans. The Pentagon had set a deadline of 5:01 p.m. ET Friday for Anthropic to agree to its demands to allow the Pentagon to use the technology for all lawful purposes. If Anthropic did not meet that deadline, Secretary of War Pete Hegseth threatened to label the company a "supply chain risk" or force it to comply by invoking the Defense Production Act. "The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution," Trump said in a post on Truth Social. "Their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY." "Therefore, I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic's technology," Trump wrote. "We don't need it, we don't want it, and will not do business with them again! There will be a Six Month phase out period for Agencies like the Department of War who are using Anthropic's products, at various levels," Trump said. On Friday, OpenAI said it would also draw the same red lines as Anthropic: no AI for mass surveillance or autonomous lethal weapons.

Read more of this story at Slashdot.

AI Mistakes Are Infuriating Gamers as Developers Seek Savings

By: msmash
February 27, 2026 at 20:10
The $200 billion video game industry is caught between studios eager to cut ballooning development costs through AI and a player base that has grown openly hostile to the technology after a string of visible blunders. As Bloomberg News reports, Arc Raiders, a surprise hit from Stockholm-based Embark Studios that sold 12 million copies in three months, was briefly vilified online for its robotic-sounding auto-generated voices -- even as CEO Patrick Soderlund insists AI was only used for non-essential elements. EA's Battlefield 6 and Activision's Call of Duty: Black Ops 7 both drew gamer anger this winter over thematically mismatched or poorly generated graphics, and Valve's Steam has added labels to flag games made using AI. Some 47% of developers polled by research house Omdia said they expect generative AI to reduce game quality, and PC gamers -- now facing inflated hardware prices from AI-driven demand for graphics chips -- have turned reflexively antagonistic.

Read more of this story at Slashdot.

Metacritic Will Kick Out Media Attempting To Submit AI Generated Reviews

By: msmash
February 27, 2026 at 17:32
An anonymous reader shares a report: While some see AI as a tool to be used, how it is deployed responsibly is being heavily debated online across a wide range of industries. In terms of journalistic content, and in this particular instance reviews, review aggregator Metacritic has taken a firm stance on content submitted to its platform that has been generated by artificial intelligence in some way. In a statement sent to Gamereactor, co-founder Marc Doyle says this: "Metacritic has been a reputable review source for a quarter century and has maintained a rigorous vetting process when adding new publications to our slate of critics. However, in certain instances such as a publication being sold or a writing staff having turned over, problems can arise such as plagiarism, theft, or other forms of fraud including AI-generated reviews. Metacritic's policy is to never include an AI-generated critic review on Metacritic and if we discover that one has been posted, we'll remove it immediately and sever ties with that publication indefinitely pending a thorough investigation." So, what is this about specifically? Well, it's probably a sound guess that this pertains to Videogamer's review of Resident Evil 9: Requiem, which was removed from the platform after a barrage of comments accusing the review of being AI-written and its author of being made up.

Read more of this story at Slashdot.

Sam Altman Says OpenAI Shares Anthropic's Red Lines in Pentagon Fight

By: msmash
February 27, 2026 at 16:40
An anonymous reader shares a report: OpenAI CEO Sam Altman wrote in a memo to staff that he will draw the same red lines that sparked a high-stakes fight between rival Anthropic and the Pentagon: no AI for mass surveillance or autonomous lethal weapons. If other leading firms like Google follow suit, this could massively complicate the Pentagon's efforts to replace Anthropic's Claude, which was the first model integrated into the military's most sensitive work. It would also be the first time the nation's top AI leaders have taken a collective stand about how the U.S. government can and can't use their technology. Altman made clear he still wants to strike a deal with the Pentagon that would allow ChatGPT to be used for sensitive military contexts. Despite the show of solidarity, such a deal could see OpenAI replace Anthropic if the Pentagon follows through with its plan to declare the latter a "supply chain risk."

Read more of this story at Slashdot.

OpenAI Raises $110 Billion in the Largest Private Funding Round Ever

By: msmash
February 27, 2026 at 14:00
OpenAI has closed what is now the largest private financing in history -- a $110 billion round at a $730 billion pre-money valuation that more than doubles the $40 billion raise it completed just a year ago, itself a record for a private tech company at the time. Amazon invested $50 billion, SoftBank put in $30 billion, and Nvidia committed $30 billion, and additional investors are expected to join as the round progresses. The valuation is a sharp jump from the $500 billion OpenAI commanded in a secondary financing in October, and the round dwarfs recent raises by rivals Anthropic ($30 billion) and xAI ($20 billion). The company has been telling investors it is now targeting roughly $600 billion in total compute spend by 2030, a more measured figure than the $1.4 trillion in infrastructure commitments CEO Sam Altman had touted months earlier. OpenAI is projecting more than $280 billion in total revenue by 2030, split roughly equally between consumer and enterprise. ChatGPT now has over 900 million weekly active users and more than 50 million paying subscribers.

Read more of this story at Slashdot.

Memory Price Hikes Will Kill Off Budget PCs and Smartphones, Analyst Warns

By: BeauHD
February 27, 2026 at 13:00
An anonymous reader quotes a report from The Register: Ballooning memory prices are forecast to kill off entry-level PCs, leading to a decline in global shipments this year -- and a similar effect is going to hit smartphones. Analyst biz Gartner is projecting a drop in PC shipments of more than 10 percent during 2026, and a decline of around 8 percent for smartphones, all due to the AI-driven memory shortage. Some types of memory have doubled or quadrupled in price since last year, and Gartner believes DRAM and NAND flash used in PCs and phones is set for a further 130 percent rise by the end of 2026. The upshot of this is that the budget PC will disappear, simply because vendors won't be able to build them at a price that will satisfy cost-conscious buyers, according to Gartner research director Ranjit Atwal. "Because the price of memory is increasing so much, vendors lose the ability to provide entry-level PCs -- those below about $500," he told The Register. PC makers could just raise the price of their cheap and cheerful boxes to above that level to compensate for the memory hike, however, price-sensitive buyers simply won't bite, he added. Another factor expected to add to declining fortunes of the PC industry this year is AI devices -- systems equipped with special hardware for accelerating AI tasks, typically via a neural processing unit (NPU) embedded in the CPU. These systems were predicted to take the market by storm, but they require more memory to support AI processing and vendors like to mark them up to a premium price. "Historically, downgrading specifications was the way to go when prices were being squeezed, but that's difficult here," Atwal said. "The thinking was that the average price [of AI PCs] would fall this year, and lead to more adoption," said Atwal, "but that's not happening." The lack of killer applications isn't helping either.

Read more of this story at Slashdot.
