Reading view

eBay Bans Illicit Automated Shopping Amid Rapid Rise of AI Agents

eBay has updated its User Agreement to explicitly ban third-party "buy for me" agents and AI chatbots from interacting with its platform without permission. From a report: On its face, a one-line terms of service update doesn't seem like major news, but what it implies is more significant: The change reflects the rapid emergence of what some are calling "agentic commerce," a new category of AI tools designed to browse, compare, and purchase products on behalf of users. eBay's updated terms, which go into effect on February 20, 2026, specifically prohibit users from employing "buy-for-me agents, LLM-driven bots, or any end-to-end flow that attempts to place orders without human review" to access eBay's services without the site's permission. The previous version of the agreement contained a general prohibition on robots, spiders, scrapers, and automated data-gathering tools but did not mention AI agents or LLMs by name.

Read more of this story at Slashdot.

  •  

Grok's Misuse Is Shocking: More Than One in Two Generated Images Reportedly Sexual

Following the controversy sparked by Grok, xAI's AI on the social network X, the first quantified estimates are emerging. According to a study by the Center for Countering Digital Hate, cited by The Guardian, roughly 3 million sexually explicit images were reportedly produced with the tool in the space of a few days.

  •  

'Stealing Isn't Innovation': Hundreds of Creatives Warn Against an AI Slop Future

Around 800 artists, writers, actors, and musicians signed on to a new campaign against what they call "theft at a grand scale" by AI companies. From a report: The signatories of the campaign -- called "Stealing Isn't Innovation" -- include authors George Saunders and Jodi Picoult, actors Cate Blanchett and Scarlett Johansson, and musicians like the band R.E.M., Billy Corgan, and The Roots. "Driven by fierce competition for leadership in the new GenAI technology, profit-hungry technology companies, including those among the richest in the world as well as private equity-backed ventures, have copied a massive amount of creative content online without authorization or payment to those who created it," a press release reads. "This illegal intellectual property grab fosters an information ecosystem dominated by misinformation, deepfakes, and a vapid artificial avalanche of low-quality materials ['AI slop'], risking AI model collapse and directly threatening America's AI superiority and international competitiveness."

Read more of this story at Slashdot.

  •  

Wikipedia's Guide to Spotting AI Is Now Being Used To Hide AI

Ars Technica's Benj Edwards reports: On Saturday, tech entrepreneur Siqi Chen released an open-source plugin for Anthropic's Claude Code AI assistant that instructs the AI model to stop writing like an AI model. Called "Humanizer," the simple prompt plugin feeds Claude a list of 24 language and formatting patterns that Wikipedia editors have listed as chatbot giveaways. Chen published the plugin on GitHub, where it has picked up over 1,600 stars as of Monday. "It's really handy that Wikipedia went and collated a detailed list of 'signs of AI writing,'" Chen wrote on X. "So much so that you can just tell your LLM to... not do that." The source material is a guide from WikiProject AI Cleanup, a group of Wikipedia editors who have been hunting AI-generated articles since late 2023. French Wikipedia editor Ilyas Lebleu founded the project. The volunteers have tagged over 500 articles for review and, in August 2025, published a formal list of the patterns they kept seeing. Chen's tool is a "skill file" for Claude Code, Anthropic's terminal-based coding assistant: a Markdown-formatted file containing a list of written instructions (you can see them here) that is appended to the prompt fed into the large language model (LLM) that powers the assistant. Unlike a plain system prompt, the skill information is formatted in a standardized way that Claude models are fine-tuned to interpret with more precision. (Custom skills require a paid Claude subscription with code execution turned on.) But as with all AI prompts, language models don't always perfectly follow skill files, so does the Humanizer actually work? In our limited testing, Chen's skill file made the AI agent's output sound less precise and more casual, but it has some drawbacks: it won't improve factuality and might harm coding ability. [...] Even with its drawbacks, it's ironic that one of the web's most referenced rule sets for detecting AI-assisted writing may help some people subvert it.
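
To give a rough sense of the mechanism, here is a minimal sketch of what such a Claude Code skill file can look like. The SKILL.md layout and frontmatter fields follow Claude Code's general skill conventions, but the file below is an illustrative assumption with an abbreviated, paraphrased pattern list, not Chen's actual Humanizer plugin:

```markdown
---
# Hypothetical SKILL.md -- an illustrative stand-in, not the real Humanizer file.
name: humanizer
description: Rewrite prose to avoid common signs of AI-generated writing.
---

When drafting or revising prose, avoid patterns that Wikipedia editors flag as
chatbot giveaways, for example:

- Do not open with "In the ever-evolving landscape of..." or similar inflated framing.
- Avoid filler hedges such as "It's important to note that...".
- Do not end every section with a tidy "In conclusion" summary.
- Prefer plain, specific wording over formulaic transitions like "Moreover" and "Furthermore".
```

When a skill like this is loaded, its instructions are appended to the model's prompt, which is why a simple list of "don'ts" can shift the output's tone without any retraining.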

Read more of this story at Slashdot.

  •  

Apple Reportedly Replacing Siri Interface With Actual Chatbot Experience For iOS 27

According to Bloomberg's Mark Gurman, Apple is planning a major Siri overhaul in iOS 27 and macOS 27 in which the current assistant interface will be replaced with a deeply integrated, ChatGPT-style chatbot experience. "Users will be able to summon the new service the same way they open Siri now, by speaking the 'Siri' command or holding down the side button on their iPhone or iPad," says Gurman. "More significantly, Siri will be integrated into all of the company's core apps, including ones for mail, music, podcasts, TV, Xcode programming software and photos. That will allow users to do much more with just their voice." 9to5Mac reports: The unannounced Siri overhaul will reportedly be revealed at WWDC in June as the flagship feature for iOS 27 and macOS 27. Its release is expected in September, when Apple typically ships major software updates. While Apple plans to release an improved version of Siri and Apple Intelligence this spring, that version will use the existing Siri interface. The big difference is that Google's Gemini models will power the intelligence. With the bigger update planned for iOS 27, the iOS 26 upgrade to Siri and Apple Intelligence sounds more like the first step in a long-overdue modernization. Gurman reports that the major Siri overhaul will "allow users to search the web for information, create content, generate images, summarize information and analyze uploaded files" while using "personal data to complete tasks, being able to more easily locate specific files, songs, calendar events and text messages." People are already familiar with conversational interactions with AI, and Bloomberg says the bigger update to Siri will support both text and voice. Siri already uses these input methods, but there's no real continuity between sessions.

Read more of this story at Slashdot.

  •  

Apple Developing AI Wearable Pin

According to a report by The Information (paywalled), Apple is developing an AirTag-sized, camera-equipped AI wearable pin that could arrive as early as 2027. "Apple's pin, which is a thin, flat, circular disc with an aluminum-and-glass shell, features two cameras -- a standard lens and a wide-angle lens -- on its front face, designed to capture photos and videos of the user's surroundings," reports The Information, citing people familiar with the device. "It also includes three microphones to pick up sounds in the area surrounding the person wearing it. It has a speaker, a physical button along one of its edges and a magnetic inductive charging interface on its back, similar to the one used on the Apple Watch..." 9to5Mac reports: The Information also notes that Apple is attempting to speed up development in hopes of competing with OpenAI's first wearable (slated to debut in 2026), and that it is not immediately clear whether this wearable would work in conjunction with other products, such as AirPods or Apple's reported upcoming smart glasses. Today's report also notes that this has been a challenging market for new companies, citing the recent failure of Humane's AI Pin as an example.

Read more of this story at Slashdot.

  •  

Adobe Acrobat Now Lets You Edit Files Using Prompts, Generate Podcast Summaries

Adobe has added a suite of AI-powered features to Acrobat that enable users to edit documents through natural language prompts, generate podcast-style audio summaries of their files, and create presentations by pulling content from multiple documents stored in a single workspace. The prompt-based editing supports 12 distinct actions: removing pages, text, comments, and images; finding and replacing words and phrases; and adding e-signatures and passwords. The presentation feature builds on Adobe Spaces, a collaborative file and notes collection the company launched last year. Users can point Acrobat's AI assistant at files in a Space and have it generate an editable pitch deck, then style it using Adobe Express themes and stock imagery. Shared files in Spaces now include AI-generated summaries that cite specific locations in the source document. Users can also choose from preset AI assistant personas -- "analyst," "entertainer," or "instructor" -- or create custom assistants using their own prompts.

Read more of this story at Slashdot.

  •  

Comic-Con Bans AI Art After Artist Pushback

San Diego Comic-Con changed an AI-art-friendly policy following an artist-led backlash last week. From a report: It was a small victory for working artists in an industry where jobs are slipping away as movie and video game studios adopt generative AI tools to save time and money. Every year, tens of thousands of people descend on San Diego for Comic-Con, the world's premier comic book convention that over the years has also become a major pan-media event where every major media company announces new movies, TV shows, and video games. For the past few years, Comic-Con has allowed some forms of AI-generated art at the convention's art show. According to archived rules for the show, artists could display AI-generated material so long as it wasn't for sale, was marked as AI-produced, and credited the original artist whose style was used. "Material produced by Artificial Intelligence (AI) may be placed in the show, but only as Not-for-Sale (NFS). It must be clearly marked as AI-produced, not simply listed as a print. If one of the parameters in its creation was something similar to 'Done in the style of,' that information must be added to the description. If there are questions, the Art Show Coordinator will be the sole judge of acceptability," Comic-Con's art show rules said until recently.

Read more of this story at Slashdot.

  •  

CEOs Say AI is Making Work More Efficient. Employees Tell a Different Story.

Companies are spending vast sums on AI expecting the technology to boost efficiency, but a new survey from AI consulting firm Section found that two-thirds of non-management workers among 5,000 white-collar respondents say they save less than two hours a week or no time at all, while more than 40% of executives report the technology saves them upward of eight hours weekly. Workers were far more likely to describe themselves as anxious or overwhelmed about AI than excited -- the opposite of C-suite respondents -- and 40% of all surveyed said they would be fine never using AI again. A separate Workday report of roughly 1,600 employees found that though 85% reported time savings of one to seven hours weekly, much of it was offset by correcting errors and reworking AI-generated content -- what the company called an "AI tax" on productivity. At the World Economic Forum in Davos this week, a PricewaterhouseCoopers survey of nearly 4,500 CEOs found more than half have seen no significant financial benefit from AI so far, and only 12% said the technology has delivered both cost and revenue gains.

Read more of this story at Slashdot.

  •  

56% of Companies Have Seen Zero Financial Return From AI Investments, PwC Survey Says

More than half of companies haven't seen any financial benefit from their AI investments, according to PwC's latest Global CEO Survey [PDF], and yet the spending shows no signs of slowing down. Some 56% of the 4,454 chief executives surveyed across 95 countries said their companies have realized neither higher revenues nor lower costs from AI over the past year. Only 12% reported getting both benefits -- and those rare winners tend to be the ones who built proper enterprise-wide foundations rather than chasing one-off projects. CEO confidence in near-term growth has taken a notable hit. Just 30% feel strongly optimistic about revenue growth over the next 12 months, down from 38% last year and nowhere near the 56% who felt that way in 2022.

Read more of this story at Slashdot.

  •  

Anthropic CEO Says Government Should Help Ensure AI's Economic Upside Is Shared

An anonymous reader shares a report: Anthropic Chief Executive Dario Amodei predicted a future in which AI will spur significant economic growth -- but could lead to widespread unemployment and inequality. Amodei is both "excited and worried" about the impact of AI, he said in an interview at Davos Tuesday. "I don't think there's an awareness at all of what is coming here and the magnitude of it." Anthropic is the developer of the popular chatbot Claude. Amodei said the government will need to play a role in navigating the massive displacement in jobs that could result from advances in AI. He said there could be a future with 5% to 10% GDP growth and 10% unemployment. "That's not a combination we've almost ever seen before," he said. "There's gonna need to be some role for government in the displacement that's this macroeconomically large." Amodei painted a potential "nightmare" scenario that AI could bring to society if not properly checked, laying out a future in which 10 million people -- 7 million in Silicon Valley and the rest scattered elsewhere -- could "decouple" from the rest of society, enjoying as much as 50% GDP growth while others were left behind. "I think this is probably a time to worry less about disincentivizing growth and worry more about making sure that everyone gets a part of that growth," Amodei said. He noted that was "the opposite of the prevailing sentiment now," but the reality of technological change will force those ideas to change.

Read more of this story at Slashdot.

  •  

AI Agents 'Perilous' for Secure Apps Such as Signal, Whittaker Says

Signal Foundation president Meredith Whittaker warned that AI agents that autonomously carry out tasks pose a threat to encrypted messaging apps [non-paywalled source] because they require broad access to data stored across a device and can be hijacked if given root permissions. Speaking at Davos on Tuesday, Whittaker said the deeper integration of AI agents into devices is "pretty perilous" for services like Signal. For an AI agent to act effectively on behalf of a user, it would need unilateral access to apps storing sensitive information such as credit card data and contacts, Whittaker said. The data that the agent stores in its context window is at greater risk of being compromised. Whittaker called this "breaking the blood-brain barrier between the application and the operating system." "Our encryption no longer matters if all you have to do is hijack this context window," she said.
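
To make the concern concrete, here is a minimal hypothetical sketch (plain Python, not any real agent framework or Signal API) of the pattern Whittaker describes: an agent with broad device permissions copies already-decrypted message text into its context window, at which point the app's end-to-end encryption no longer protects that data from whoever controls the agent.

```python
# Hypothetical sketch only -- no real agent framework or Signal API is used here.
# It illustrates the structural risk: an agent acting on the user's behalf pulls
# already-decrypted data out of apps and into its own context window.

def read_decrypted_messages(app_name: str) -> list[str]:
    """Stand-in for an agent with root/accessibility permissions reading
    plaintext out of a secure messaging app's UI or local storage."""
    return ["dinner at 7?", "card ending 4242", "send the contract tomorrow"]

def build_agent_context(task: str) -> str:
    """The agent assembles a prompt: the task plus whatever data it needs to act on."""
    messages = read_decrypted_messages("signal")
    return f"Task: {task}\nRecent messages:\n" + "\n".join(messages)

if __name__ == "__main__":
    # If the agent is hijacked, everything in this context -- including plaintext
    # pulled from an encrypted app -- is exposed, regardless of how strong the
    # app's encryption is in transit or at rest.
    print(build_agent_context("summarize my conversations and book dinner"))
```

The specifics above are invented, but the structural point matches Whittaker's argument: once plaintext leaves the messaging app for the agent's context window, the app's encryption guarantees no longer apply to it.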

Read more of this story at Slashdot.

  •  

Palantir CEO Says AI To Make Large-Scale Immigration Obsolete

AI will displace so many jobs that it will eliminate the need for mass immigration, according to Palantir CEO Alex Karp. Bloomberg: "There will be more than enough jobs for the citizens of your nation, especially those with vocational training," said Karp, speaking at a World Economic Forum panel in Davos, Switzerland on Tuesday. "I do think these trends really do make it hard to imagine why we should have large-scale immigration unless you have a very specialized skill." Karp, who holds a PhD in philosophy, used himself as an example of the type of "elite" white-collar worker most at risk of disruption. Vocational workers will be more valuable "if not irreplaceable," he said, criticizing the idea that higher education is the ultimate benchmark of a person's talents and employability.

Read more of this story at Slashdot.

  •  

Ukraine To Share Wartime Combat Data With Allies To Help Train AI

An anonymous reader shares a report: Ukraine will establish a system allowing its allies to train their AI models on Kyiv's valuable combat data collected throughout the nearly four-year war with Russia, newly appointed Defence Minister Mykhailo Fedorov has said. Fedorov -- a former digitalisation minister who last week took up the post to drive reforms across Ukraine's vast defence ministry and armed forces -- has described Kyiv's wartime data trove as one of its "cards" in negotiations with other nations. Since Russia's invasion in February 2022, Ukraine has gathered extensive battlefield information, including systematically logged combat statistics and millions of hours of drone footage captured from above. Such data is important for training AI models, which require large volumes of real-world information to identify patterns and predict how people or objects might act in various situations.

Read more of this story at Slashdot.

  •  