Vue lecture

Adobe Acrobat Now Lets You Edit Files Using Prompts, Generate Podcast Summaries

Adobe has added a suite of AI-powered features to Acrobat that enable users to edit documents through natural language prompts, generate podcast-style audio summaries of their files, and create presentations by pulling content from multiple documents stored in a single workspace. The prompt-based editing supports 12 distinct actions: removing pages, text, comments, and images; finding and replacing words and phrases; and adding e-signatures and passwords. The presentation feature builds on Adobe Spaces, a collaborative file and notes collection the company launched last year. Users can point Acrobat's AI assistant at files in a Space and have it generate an editable pitch deck, then style it using Adobe Express themes and stock imagery. Shared files in Spaces now include AI-generated summaries that cite specific locations in the source document. Users can also choose from preset AI assistant personas -- "analyst," "entertainer," or "instructor" -- or create custom assistants using their own prompts.

Read more of this story at Slashdot.

  •  

Comic-Con Bans AI Art After Artist Pushback

San Diego Comic-Con changed an AI art friendly policy following an artist-led backlash last week. From a report: It was a small victory for working artists in an industry where jobs are slipping away as movie and video game studios adopt generative AI tools to save time and money. Every year, tens of thousands of people descend on San Diego for Comic-Con, the world's premier comic book convention that over the years has also become a major pan-media event where every major media company announces new movies, TV shows, and video games. For the past few years, Comic-Con has allowed some forms of AI-generated art at this art show at the convention. According to archived rules for the show, artists could display AI-generated material so long as it wasn't for sale, was marked as AI-produced, and credited the original artist whose style was used. "Material produced by Artificial Intelligence (AI) may be placed in the show, but only as Not-for-Sale (NFS). It must be clearly marked as AI-produced, not simply listed as a print. If one of the parameters in its creation was something similar to 'Done in the style of,' that information must be added to the description. If there are questions, the Art Show Coordinator will be the sole judge of acceptability," Comic-Con's art show rules said until recently.

Read more of this story at Slashdot.

  •  

CEOs Say AI is Making Work More Efficient. Employees Tell a Different Story.

Companies are spending vast sums on AI expecting the technology to boost efficiency, but a new survey from AI consulting firm Section found that two-thirds of non-management workers among 5,000 white-collar respondents say they save less than two hours a week or no time at all, while more than 40% of executives report the technology saves them upward of eight hours weekly. Workers were far more likely to describe themselves as anxious or overwhelmed about AI than excited -- the opposite of C-suite respondents -- and 40% of all surveyed said they would be fine never using AI again. A separate Workday report of roughly 1,600 employees found that though 85% reported time savings of one to seven hours weekly, much of it was offset by correcting errors and reworking AI-generated content -- what the company called an "AI tax" on productivity. At the World Economic Forum in Davos this week, a PricewaterhouseCoopers survey of nearly 4,500 CEOs found more than half have seen no significant financial benefit from AI so far, and only 12% said the technology has delivered both cost and revenue gains.

Read more of this story at Slashdot.

  •  

56% of Companies Have Seen Zero Financial Return From AI Investments, PwC Survey Says

More than half of companies haven't seen any financial benefit from their AI investments, according to PwC's latest Global CEO Survey [PDF], and yet the spending shows no signs of slowing down. Some 56% of the 4,454 chief executives surveyed across 95 countries said their companies have realized neither higher revenues nor lower costs from AI over the past year. Only 12% reported getting both benefits -- and those rare winners tend to be the ones who built proper enterprise-wide foundations rather than chasing one-off projects. CEO confidence in near-term growth has taken a notable hit. Just 30% feel strongly optimistic about revenue growth over the next 12 months, down from 38% last year and nowhere near the 56% who felt that way in 2022.

Read more of this story at Slashdot.

  •  

Anthropic CEO Says Government Should Help Ensure AI's Economic Upside Is Shared

An anonymous reader shares a report: Anthropic Chief Executive Dario Amodei predicted a future in which AI will spur significant economic growth -- but could lead to widespread unemployment and inequality. Amodei is both "excited and worried" about the impact of AI, he said in an interview at Davos Tuesday. "I don't think there's an awareness at all of what is coming here and the magnitude of it." Anthropic is the developer of the popular chatbot Claude. Amodei said the government will need to play a role in navigating the massive displacement in jobs that could result from advances in AI. He said there could be a future with 5% to 10% GDP growth and 10% unemployment. "That's not a combination we've almost ever seen before," he said. "There's gonna need to be some role for government in the displacement that's this macroeconomically large." Amodei painted a potential "nightmare" scenario that AI could bring to society if not properly checked, laying out a future in which 10 million people -- 7 million in Silicon Valley and the rest scattered elsewhere -- could "decouple" from the rest of society, enjoying as much as 50% GPD growth while others were left behind. "I think this is probably a time to worry less about disincentivizing growth and worry more about making sure that everyone gets a part of that growth," Amodei said. He noted that was "the opposite of the prevailing sentiment now," but the reality of technological change will force those ideas to change.

Read more of this story at Slashdot.

  •  

AI Agents 'Perilous' for Secure Apps Such as Signal, Whittaker Says

Signal Foundation president Meredith Whittaker warned that AI agents that autonomously carry out tasks pose a threat to encrypted messaging apps [non-paywalled source] because they require broad access to data stored across a device and can be hijacked if given root permissions. Speaking at Davos on Tuesday, Whittaker said the deeper integration of AI agents into devices is "pretty perilous" for services like Signal. For an AI agent to act effectively on behalf of a user, it would need unilateral access to apps storing sensitive information such as credit card data and contacts, Whittaker said. The data that the agent stores in its context window is at greater risk of being compromised. Whittaker called this "breaking the blood-brain barrier between the application and the operating system." "Our encryption no longer matters if all you have to do is hijack this context window," she said.

Read more of this story at Slashdot.

  •  

Palantir CEO Says AI To Make Large-Scale Immigration Obsolete

AI will displace so many jobs that it will eliminate the need for mass immigration, according to Palantir CEO Alex Karp. Bloomberg: "There will be more than enough jobs for the citizens of your nation, especially those with vocational training," said Karp, speaking at a World Economic Forum panel in Davos, Switzerland on Tuesday. "I do think these trends really do make it hard to imagine why we should have large-scale immigration unless you have a very specialized skill." Karp, who holds a PhD in philosophy, used himself as an example of the type of "elite" white-collar worker most at risk of disruption. Vocational workers will be more valuable "if not irreplaceable," he said, criticizing the idea that higher education is the ultimate benchmark of a person's talents and employability.

Read more of this story at Slashdot.

  •  

Ukraine To Share Wartime Combat Data With Allies To Help Train AI

An anonymous reader shares a report: Ukraine will establish a system allowing its allies to train their AI models on Kyiv's valuable combat data collected throughout the nearly four-year war with Russia, newly appointed Defence Minister Mykhailo Fedorov has said. Fedorov -- a former digitalisation minister who last week took up the post to drive reforms across Ukraine's vast defence ministry and armed forces -- has described Kyiv's wartime data trove as one of its "cards" in negotiations with other nations. Since Russia's invasion in February 2022, Ukraine has gathered extensive battlefield information, including systematically logged combat statistics and millions of hours of drone footage captured from above. Such data is important for training AI models, which require large volumes of real-world information to identify patterns and predict how people or objects might act in various situations.

Read more of this story at Slashdot.

  •  

Energy Costs Will Decide Which Countries Win the AI Race, Microsoft's Nadella Says

Energy costs will be key to deciding which country wins the AI race, Microsoft CEO Satya Nadella has said. CNBC: As countries race to build AI infrastructure to capitalize on the technology's promise of huge efficiency gains, Nadella told the World Economic Forum (WEF) on Tuesday that "GDP growth in any place will be directly correlated" to the cost of energy in using AI. He pointed to a new global commodity in "tokens" -- basic units of processing that are bought by users of AI models, allowing them to run tasks. "The job of every economy and every firm in the economy is to translate these tokens into economic growth, then if you have a cheaper commodity, it's better." "I would say we will quickly lose even the social permission to actually take something like energy, which is a scarce resource, and use it to generate these tokens, if these tokens are not improving health outcomes, education outcomes, public sector efficiency, private sector competitiveness across all sectors," Nadella said.

Read more of this story at Slashdot.

  •  

OpenAI CFO Says Annualized Revenue Crosses $20 Billion In 2025

According to CFO Sarah Friar, OpenAI's annualized revenue surpassed $20 billion in 2025, up from $6 billion a year earlier with growth closely tracking an expansion in computing capacity. Reuters reports: OpenAI's computing capacity rose to 1.9 gigawatts (GW) in 2025 from 0.6 GW in 2024, Friar said in the blog, adding that Microsoft-backed OpenAI's weekly and daily active users figures continue to produce all-time highs. OpenAI last week said it would start showing ads in ChatGPT to some U.S. users, ramping up efforts to generate revenue from the AI chatbot to fund the high costs of developing the technology. Separately, Axios reported on Monday that OpenAI's policy chief Chris Lehane said that the company is "on track" to unveil its first device in the second half of 2026. Friar said OpenAI's platform spans text, images, voice, code and APIs, and the next phase will focus on agents and workflow automation that run continuously, carry context over time, and take action across tools. For 2026, the company will prioritize "practical adoption," particularly in health, science and enterprise, she said. Friar said the company is keeping a "light" balance sheet by partnering rather than owning and structuring contracts with flexibility across providers and hardware types.

Read more of this story at Slashdot.

  •  

Valve Has 'Significantly' Rewritten Steam's Rules For How Developers Must Disclose AI Use

Valve has substantially overhauled its guidelines for how game developers must disclose the use of generative AI on Steam, making explicit that tools like code assistants and other development aids do not fall under the disclosure requirement. The updated rules clarify that Valve's focus is not on "efficiency gains through the use of AI-powered dev tools." Developers must still disclose two specific categories: AI used to generate in-game content, store page assets, or marketing materials, and AI that creates content like images, audio, or text during gameplay itself. Steam has required AI disclosures since 2024, and an analysis from July 2025 found nearly 8,000 titles released in the first half of that year had disclosed generative AI use, compared to roughly 1,000 for all of 2024. The disclosures remain voluntary, so actual usage is likely higher.

Read more of this story at Slashdot.

  •  

IMF Warns Global Economic Resilience at Risk if AI Falters

The "surprisingly resilient" global economy is at risk of being disrupted by a sharp reversal in the AI boom, the IMF warned on Monday, as world leaders prepared for talks in the Swiss resort of Davos. From a report: Risks to global economic expansion were "tilted to the downside," the fund said in an update to its World Economic Outlook, arguing that growth was reliant on a narrow range of drivers, notably the US technology sector and the associated equity boom. Nonetheless, it predicted US growth would strongly outpace the rest of the G7 this year, forecasting an expansion of 2.4 per cent in 2026 and 2 per cent in 2027. Tech investment had surged to its highest share of US economic output since 2001, helping drive growth, the IMF found. "There is a risk of a correction, a market correction, if expectations about AI gains in productivity and profitability are not realised," said Pierre-Olivier Gourinchas, IMF chief economist. "We're not yet at the levels of market frothiness, if you want, that we saw in the dotcom period," he added. "But nevertheless there are reasons to be somewhat concerned."

Read more of this story at Slashdot.

  •  

Is the Possibility of Conscious AI a Dangerous Myth?

This week Noema magazine published a 7,000-word exploration of our modern "Mythology Of Conscious AI" written by a neuroscience professor who directs the University of Sussex Centre for Consciousness Science: The very idea of conscious AI rests on the assumption that consciousness is a matter of computation. More specifically, that implementing the right kind of computation, or information processing, is sufficient for consciousness to arise. This assumption, which philosophers call computational functionalism, is so deeply ingrained that it can be difficult to recognize it as an assumption at all. But that is what it is. And if it's wrong, as I think it may be, then real artificial consciousness is fully off the table, at least for the kinds of AI we're familiar with. He makes detailed arguments against a computation-based consciousness (including "Simulation is not instantiation... If we simulate a living creature, we have not created life.") While a computer may seem like the perfect metaphor for a brain, the cognitive science of "dynamical systems" (and other approaches) reject the idea that minds can be entirely accounted for algorithmically. And maybe actual life needs to be present before something can be declared conscious. He also warns that "Many social and psychological factors, including some well-understood cognitive biases, predispose us to overattribute consciousness to machines." But then his essay reaches a surprising conclusion: As redundant as it may sound, nobody should be deliberately setting out to create conscious AI, whether in the service of some poorly thought-through techno-rapture, or for any other reason. Creating conscious machines would be an ethical disaster. We would be introducing into the world new moral subjects, and with them the potential for new forms of suffering, at (potentially) an exponential pace. And if we give these systems rights, as arguably we should if they really are conscious, we will hamper our ability to control them, or to shut them down if we need to. Even if I'm right that standard digital computers aren't up to the job, other emerging technologies might yet be, whether alternative forms of computation (analogue, neuromorphic, biological and so on) or rapidly developing methods in synthetic biology. For my money, we ought to be more worried about the accidental emergence of consciousness in cerebral organoids (brain-like structures typically grown from human embryonic stem cells) than in any new wave of LLM. But our worries don't stop there. When it comes to the impact of AI in society, it is essential to draw a distinction between AI systems that are actually conscious and those that persuasively seem to be conscious but are, in fact, not. While there is inevitable uncertainty about the former, conscious-seeming systems are much, much closer... Machines that seem conscious pose serious ethical issues distinct from those posed by actually conscious machines. For example, we might give AI systems "rights" that they don't actually need, since they would not actually be conscious, restricting our ability to control them for no good reason. More generally, either we decide to care about conscious-seeming AI, distorting our circles of moral concern, or we decide not to, and risk brutalizing our minds. As Immanuel Kant argued long ago in his lectures on ethics, treating conscious-seeming things as if they lack consciousness is a psychologically unhealthy place to be... One overlooked factor here is that even if we know, or believe, that an AI is not conscious, we still might be unable to resist feeling that it is. Illusions of artificial consciousness might be as impenetrable to our minds as some visual illusions... What's more, because there's no consensus over the necessary or sufficient conditions for consciousness, there aren't any definitive tests for deciding whether an AI is actually conscious.... Illusions of conscious AI are dangerous in their own distinctive ways, especially if we are constantly distracted and fascinated by the lure of truly sentient machines... If we conflate the richness of biological brains and human experience with the information-processing machinations of deepfake-boosted chatbots, or whatever the latest AI wizardry might be, we do our minds, brains and bodies a grave injustice. If we sell ourselves too cheaply to our machine creations, we overestimate them, and we underestimate ourselves... The sociologist Sherry Turkle once said that technology can make us forget what we know about life. It's about time we started to remember.

Read more of this story at Slashdot.

  •  
❌