Reading view

Citigroup Mandates AI Training For 175,000 Employees To Help Them 'Reinvent Themselves'

Citigroup has rolled out mandatory AI training for all 175,000 of its employees across 80 locations worldwide, a sweeping initiative that CEO Jane Fraser describes as helping workers "reinvent themselves" before the technology permanently alters what they do for a living. The $205 billion bank sent out an internal memo last year requiring staffers to learn prompting skills specifically. Fraser told the Washington Post at Davos that AI "will change the nature of what people do every day" and "will take some jobs away." The adaptive training platform lets experts complete the course in under 10 minutes while beginners need about 30 minutes. Citi reported last year that employees had entered more than 6.5 million prompts into its built-in AI tools, and Q4 2025 data shows a 70% adoption rate for the bank's proprietary AI tools.

Read more of this story at Slashdot.

  •  

OpenAI's Science Chief Says LLMs Aren't Ready For Novel Discoveries and That's Fine

OpenAI launched a dedicated team in October called OpenAI for Science, led by vice president Kevin Weil, that aims to make scientists more productive -- but Weil admitted in an interview with MIT Technology Review that LLMs cannot yet produce novel discoveries, and he says that's not currently the mission. UC Berkeley statistician Nikita Zhivotovskiy, who has used LLMs since the first ChatGPT release, told the publication: "So far, they seem to mainly combine existing results, sometimes incorrectly, rather than produce genuinely new approaches." "I don't think models are there yet," Weil admitted. "Maybe they'll get there. I'm optimistic that they will." The models excel at surfacing forgotten solutions and finding connections across fields, but Weil says the bar for accelerating science doesn't require "Einstein-level reimagining of an entire field." GPT-5 has read substantially every paper written in the last 30 years, he says, and can bring together analogies from unrelated disciplines. That accumulation of existing knowledge -- helping scientists avoid struggling on problems already solved -- is itself an acceleration.

Read more of this story at Slashdot.

  •  

Pinterest Cuts Up To 15% of Jobs To Redirect Resources To AI

Pinterest said on Tuesday it would trim its workforce by less than 15% and reduce office space, as the social media company looks to reallocate resources to AI-focused roles and initiatives. From a report: The announcement comes as the company competes with TikTok and Meta-owned Facebook and Instagram for digital advertising budgets, as these platforms continue to draw marketers with their extensive user bases. Pinterest had 5,205 full-time employees as of September 2025. The latest job cut would translate to fewer than 780 positions. Top executives at the World Economic Forum's annual meeting said that while jobs would disappear, new ones would spring up, with two telling Reuters that AI would be used as an excuse by companies that were planning layoffs anyway. Last week, design software maker Autodesk also announced a 7% job cut to redirect investments to its cloud platform and AI efforts.
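
The headline figure is straightforward arithmetic on the numbers quoted above; here is a minimal Python sanity check (a sketch, not Pinterest's own calculation):

    headcount = 5205        # full-time employees as of September 2025
    cut_ceiling = 0.15      # "up to 15%"
    print(headcount * cut_ceiling)  # 780.75, so a cut of less than 15% works out to at most 780 positions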

Read more of this story at Slashdot.

  •  

Microsoft's Latest AI Chip Claims Performance Edge Over Amazon and Google

An anonymous reader quotes a report from GeekWire: Microsoft on Monday announced Maia 200, the second generation of its custom AI chip, claiming it's the most powerful first-party silicon from any major cloud provider. The company says Maia 200 delivers three times the performance of Amazon's latest Trainium chip on certain benchmarks, and exceeds Google's most recent tensor processing unit (TPU) on others. The chip is already running workloads at Microsoft's data center near Des Moines, Iowa. Microsoft says Maia 200 is powering OpenAI's GPT-5.2 models, Microsoft 365 Copilot, and internal projects from its Superintelligence team. A second deployment at a data center near Phoenix is planned next. It's part of the larger trend among cloud giants to build their own custom silicon for AI rather than rely solely on Nvidia. [...] The company says Maia 200 offers 30% better performance-per-dollar than its current hardware. Maia 200 also builds on the first-generation chip with a more specific focus on inference, the process of running AI models after they've been trained. [...] Microsoft is also opening the door to outside developers. The company announced a software development kit that will let AI startups and researchers optimize their models for Maia 200. Developers and academics can sign up for an early preview starting today.

Read more of this story at Slashdot.

  •  

DOT Plans To Use Google Gemini AI To Write Regulations

The Trump administration is planning to use AI to write federal transportation regulations, ProPublica reported on Monday, citing U.S. Department of Transportation records and interviews with six agency staffers. From the report: The plan was presented to DOT staff last month at a demonstration of AI's "potential to revolutionize the way we draft rulemakings," agency attorney Daniel Cohen wrote to colleagues. The demonstration, Cohen wrote, would showcase "exciting new AI tools available to DOT rule writers to help us do our job better and faster." Discussion of the plan continued among agency leadership last week, according to meeting notes reviewed by ProPublica. Gregory Zerzan, the agency's general counsel, said at that meeting that President Donald Trump is "very excited about this initiative." Zerzan seemed to suggest that the DOT was at the vanguard of a broader federal effort, calling the department the "point of the spear" and "the first agency that is fully enabled to use AI to draft rules." Zerzan appeared interested mainly in the quantity of regulations that AI could produce, not their quality. "We don't need the perfect rule on XYZ. We don't even need a very good rule on XYZ," he said, according to the meeting notes. "We want good enough." Zerzan added, "We're flooding the zone." These developments have alarmed some at DOT. The agency's rules touch virtually every facet of transportation safety, including regulations that keep airplanes in the sky, prevent gas pipelines from exploding and stop freight trains carrying toxic chemicals from skidding off the rails. Why, some staffers wondered, would the federal government outsource the writing of such critical standards to a nascent technology notorious for making mistakes? The answer from the plan's boosters is simple: speed. Writing and revising complex federal regulations can take months, sometimes years. But, with DOT's version of Google Gemini, employees could generate a proposed rule in a matter of minutes or even seconds, two DOT staffers who attended the December demonstration remembered the presenter saying.

Read more of this story at Slashdot.

  •  

What are the best free alternatives to ChatGPT?

ChatGPT holds a prominent place. OpenAI's chatbot excels at tasks that can otherwise be time-consuming and saves a great deal of time. But it is not the only capable generative AI, which is good news, since ChatGPT is not infallible. When it goes down, there are alternatives to ChatGPT worth considering. Here are the best ones.

  •  

Fast, Precise, Automatic Background Removal with Aiarty Image Matting (Limited-time Exclusive Deal)

Extracting a subject from its background is often a tedious, mechanical chore, especially when dealing with fine edges like hair, cluttered backgrounds, or scenes where the foreground and background share similar colors.

Aiarty Image Matting aims to simplify this process using AI, turning what used to be time-consuming manual work into a much faster and more streamlined step in a photographer’s editing workflow.

What Is Aiarty Image Matting?

Aiarty Image Matting is an AI-powered image background remover that automatically separates subjects from their backgrounds with high accuracy, while also making it easy to replace the background with a solid color or a custom image when needed.

It features four dedicated AI models that adapt to different image types and subject characteristics, allowing it to handle a wide range of photographic scenarios. The software works fully offline, processes images quickly, and combines automatic background removal with simple manual tools for fine adjustments when needed.

New Year Exclusive Offer: Get Aiarty Image Matting at its lowest-ever price

For photographers who regularly deal with background removal or subject isolation, Aiarty Image Matting is also currently more affordable than usual.

As part of its New Year promotion, Aiarty is offering 43% OFF the Aiarty Image Matting Lifetime License. This one-time purchase gives you full access to the software and all future updates with no subscription required.

  • Licensed for use on 3 Windows or Mac computers
  • Unlimited access to all features and lifetime updates, with no ongoing costs
  • 30-day money-back guarantee

Exclusive time-limited discount: At checkout, enter the coupon code NYSPECIAL for an additional $5 off on top of the discounted price. That's the lowest price you'll find for the software. The coupon expires on January 31.

Key Features That Matter for Photographers

Multiple AI Models Tailored to Different Subjects

Unlike basic background removal tools that rely on a single algorithm, Aiarty Image Matting uses multiple AI models optimized for different types of subjects. This allows photographers to select a model that best matches the content of the image rather than forcing every scene through the same processing logic.

In practical terms, some models are better suited for subjects with clean, well-defined edges such as products or vehicles, while others perform more reliably with semi-transparent materials, fine hair, or soft transitions. This flexibility helps produce more consistent results across portrait, product, and commercial photography without constant trial and error.

The overall workflow remains simple: import an image, choose the appropriate AI model, start the matting process, and export the result.

High-Precision Edge Detection for Fine Details

Edge quality is where most background removal tools struggle, and it is also where photographers notice problems immediately. Aiarty Image Matting places strong emphasis on preserving fine details around complex edges, including hair, fur, and overlapping elements.

Example of automatic background removal processed by Aiarty Image Matting

Even in situations where the subject blends into a busy background or shares similar colors, the extracted results tend to maintain natural transitions rather than overly hard or artificial outlines. This is particularly valuable for portrait, fashion, and pet photography, where realistic edges are essential for believable composites.

Example of automatic background removal processed by Aiarty Image Matting

The goal here is not just isolation, but extraction that remains visually convincing when placed into a new scene.

Automatic Extraction with Practical Manual Control

Aiarty Image Matting is designed to work automatically first, minimizing the need for manual masking. In many cases, a clean subject extraction can be achieved with a single click, making it suitable for repetitive or time-sensitive workflows.

When adjustments are needed, the software includes a small set of intuitive mask refinement tools that allow photographers to correct problem areas or fine-tune transparency. These tools are optional and focused, helping refine results without turning the process into a full manual selection job.

This balance keeps the workflow fast while still giving users enough control to handle challenging images.

Fully Offline Processing for Speed and Privacy

All processing in Aiarty Image Matting is performed locally, without uploading images to the cloud. For photographers working with client material, this offline approach offers both privacy and reliability.

Local processing also avoids delays caused by internet connections and makes it easier to work consistently with large images. This is especially useful in professional environments where stability and file control matter as much as speed.

Flexible Background Replacement for Practical Use Cases

Once a subject is extracted, Aiarty Image Matting allows it to be placed against a solid color or a custom background. This is particularly useful for product photography, catalogs, marketing visuals, and social media content where clean and consistent backgrounds are required.

Batch Processing for High-Volume Projects

For workflows involving large numbers of images, Aiarty Image Matting supports batch processing, allowing multiple photos to be handled in a single session. This can significantly reduce post-production time for e-commerce shoots, content libraries, or repeated background replacement tasks.

A Useful Extra: Built-In AI Enhancement

In addition to background removal, Aiarty Image Matting also includes a basic AI enhancement option that supports up to 2× image upscaling. While not a replacement for dedicated enhancement tools, it can be useful for preparing extracted subjects for different output sizes or platforms.

Final Thoughts: A Smarter, More Seamless Cutout Tool for Photographers

Background removal no longer has to be a slow, tedious process. Aiarty Image Matting combines AI-powered automatic extraction with optional manual refinement, making it a fast and reliable tool for photographers.

Right now, all PhotoRumors readers can save up to 43% on the Aiarty Image Matting Lifetime License, which includes free lifetime updates and installation on up to three computers. Remember to enter the coupon code NYSPECIAL at checkout for an extra $5 off. Again, there are no subscriptions and no hidden fees. This limited-time deal ends January 31.

Get the Aiarty Image Matting Lifetime License Deal here.

The post Fast, Precise, Automatic Background Removal with Aiarty Image Matting (Limited-time Exclusive Deal) appeared first on Photo Rumors.

  •  

The Risks of AI in Schools Outweigh the Benefits, Report Says

This month saw results from a yearlong global study of "potential negative risks that generative AI poses to students". The study (by the Brookings Institution's Center for Universal Education) also suggests how to prevent risks and maximize benefits: After interviews, focus groups, and consultations with over 500 students, teachers, parents, education leaders, and technologists across 50 countries, a close review of over 400 studies, and a Delphi panel, we find that at this point in its trajectory, the risks of utilizing generative AI in children's education overshadow its benefits. "At the top of Brookings' list of risks is the negative effect AI can have on children's cognitive growth," reports NPR — "how they learn new skills and perceive and solve problems." The report describes a kind of doom loop of AI dependence, where students increasingly off-load their own thinking onto the technology, leading to the kind of cognitive decline or atrophy more commonly associated with aging brains... As one student told the researchers, "It's easy. You don't need to (use) your brain." The report offers a surfeit of evidence to suggest that students who use generative AI are already seeing declines in content knowledge, critical thinking and even creativity. And this could have enormous consequences if these young people grow into adults without learning to think critically... Survey responses revealed deep concern that use of AI, particularly chatbots, "is undermining students' emotional well-being, including their ability to form relationships, recover from setbacks, and maintain mental health," the report says. One of the many problems with kids' overuse of AI is that the technology is inherently sycophantic — it has been designed to reinforce users' beliefs... Winthrop offers an example of a child interacting with a chatbot, "complaining about your parents and saying, 'They want me to wash the dishes — this is so annoying. I hate my parents.' The chatbot will likely say, 'You're right. You're misunderstood. I'm so sorry. I understand you.' Versus a friend who would say, 'Dude, I wash the dishes all the time in my house. I don't know what you're complaining about. That's normal.' That right there is the problem." AI did have some advantages, the article points out: The report says another benefit of AI is that it allows teachers to automate some tasks: "generating parent emails ... translating materials, creating worksheets, rubrics, quizzes, and lesson plans" — and more. The report cites multiple research studies that found important time-saving benefits for teachers, including one U.S. study that found that teachers who use AI save an average of nearly six hours a week and about six weeks over the course of a full school year... AI can also help make classrooms more accessible for students with a wide range of learning disabilities, including dyslexia. But "AI can massively increase existing divides" too, [warns Rebecca Winthrop, one of the report's authors and a senior fellow at Brookings]. That's because the free AI tools that are most accessible to students and schools can also be the least reliable and least factually accurate... "[T]his is the first time in ed-tech history that schools will have to pay more for more accurate information. And that really hurts schools without a lot of resources." The report calls for more research — and makes several recommendations (including "holistic" learning and "AI tools that teach, not tell"). But this may be the report's most important recommendation:
"Provide a clear vision for ethical AI use that centers human agency..." "We find that AI has the potential to benefit or hinder students, depending on how it is used."

Read more of this story at Slashdot.

  •  

Google's 'AI Overviews' Cite YouTube For Health Queries More Than Any Medical Site, Study Suggests

An anonymous reader shared this report from the Guardian: Google's search feature AI Overviews cites YouTube more than any medical website when answering queries about health conditions, according to research that raises fresh questions about a tool seen by 2 billion people each month. The company has said its AI summaries, which appear at the top of search results and use generative AI to answer questions from users, are "reliable" and cite reputable medical sources such as the Centers for Disease Control and Prevention and the Mayo Clinic. However, a study that analysed responses to more than 50,000 health queries, captured using Google searches from Berlin, found the top cited source was YouTube. The video-sharing platform is the world's second most visited website, after Google itself, and is owned by Google. Researchers at SE Ranking, a search engine optimisation platform, found YouTube made up 4.43% of all AI Overview citations. No hospital network, government health portal, medical association or academic institution came close to that number, they said. "This matters because YouTube is not a medical publisher," the researchers wrote. "It is a general-purpose video platform...." In one case that experts said was "dangerous" and "alarming", Google provided bogus information about crucial liver function tests that could have left people with serious liver disease wrongly thinking they were healthy. The company later removed AI Overviews for some but not all medical searches... Hannah van Kolfschooten, a researcher specialising in AI, health and law at the University of Basel who was not involved with the research, said: "This study provides empirical evidence that the risks posed by AI Overviews for health are structural, not anecdotal. It becomes difficult for Google to argue that misleading or harmful health outputs are rare cases. "Instead, the findings show that these risks are embedded in the way AI Overviews are designed. In particular, the heavy reliance on YouTube rather than on public health authorities or medical institutions suggests that visibility and popularity, rather than medical reliability, is the central driver for health knowledge."
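
The study's headline number amounts to a domain-frequency count over the sources cited in each AI Overview answer. A minimal sketch of that kind of tally in Python, assuming the cited URLs have already been collected (the sample URLs below are hypothetical, not SE Ranking's data):

    from collections import Counter
    from urllib.parse import urlparse

    # Hypothetical citation URLs gathered from AI Overview responses to health queries
    citations = [
        "https://www.youtube.com/watch?v=abc123",
        "https://www.mayoclinic.org/tests-procedures/liver-function-tests",
        "https://www.cdc.gov/hepatitis/index.htm",
        "https://www.youtube.com/watch?v=def456",
    ]

    # Count citations per domain and report each domain's share of the total
    domains = Counter(urlparse(url).netloc for url in citations)
    total = sum(domains.values())
    for domain, count in domains.most_common():
        print(f"{domain}: {count / total:.2%} of citations")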

Read more of this story at Slashdot.

  •  

AI Luminaries Clash At Davos Over How Close Human-Level Intelligence Really Is

An anonymous reader shared this report from Fortune: The large language models (LLMs) that have captivated the world are not a path to human-level intelligence, two AI experts asserted in separate remarks at Davos. Demis Hassabis, the Nobel Prize-winning CEO of Google DeepMind, and the executive who leads the development of Google's Gemini models, said today's AI systems, as impressive as they are, are "nowhere near" human-level artificial general intelligence, or AGI. [Though the article notes that Hassabis later predicted there was a 50% chance AGI might be achieved within the decade.] Yann LeCun — an AI pioneer who won a Turing Award, computer science's most prestigious prize, for his work on neural networks — went further, saying that the LLMs that underpin all of the leading AI models will never be able to achieve humanlike intelligence and that a completely different approach is needed... ["The reason ... LLMs have been so successful is because language is easy," LeCun said later.] Their views differ starkly from the position asserted by top executives of Google's leading AI rivals, OpenAI and Anthropic, who claim that their AI models are about to rival human intelligence. Dario Amodei, the CEO of Anthropic, told an audience at Davos that AI models would replace the work of all software developers within a year and would reach "Nobel-level" scientific research in multiple fields within two years. He said 50% of white-collar jobs would disappear within five years. OpenAI CEO Sam Altman (who was not at Davos this year) has said we are already beginning to slip past human-level AGI toward "superintelligence," or AI that would be smarter than all humans combined... The debate over AGI may be somewhat academic for many business leaders. The more pressing question, says Cognizant CEO Ravi Kumar, is whether companies can capture the enormous value that AI already offers. According to Cognizant research released ahead of Davos, current AI technology could unlock approximately $4.5 trillion in U.S. labor productivity — if businesses can implement it effectively.

Read more of this story at Slashdot.

  •  

US Insurer 'Lemonade' Cuts Rates 50% for Drivers Using Tesla's 'Full Self-Driving' Software

An anonymous reader shared this report from Reuters: U.S. insurer Lemonade said on Wednesday it would offer a 50% rate cut for drivers of Tesla electric vehicles when the automaker's Full Self-Driving (FSD) driver assistance software is steering because it had data showing it reduced accidents. Lemonade's move is an endorsement of Tesla CEO Elon Musk's claims that the company's vehicle technology is safer than human drivers, despite concerns flagged by regulators and safety experts. As part of a collaboration, Tesla is giving Lemonade access to vehicle telemetry data that will be used to distinguish between miles driven by FSD — which requires a human driver's supervision — and human driving, the New York-based insurer said. The price cut is for Lemonade's pay-per-mile insurance. "We're looking at this in extremely high resolution, where we see every minute, every second that you drive your car, your Tesla," Lemonade co-founder Shai Wininger told Reuters. "We get millions of signals emitted by that car into our systems. And based on that, we're pricing your rate." Wininger said data provided by Tesla combined with Lemonade's own insurance data showed that the use of FSD made driving about two times safer for the average driver. He did not provide details on the data Tesla shared but said no payments were involved in the deal between Lemonade and the EV maker for the data and the new offering... Wininger said the company would reduce rates further as Tesla releases FSD software updates that improve safety. "Traditional insurers treat a Tesla like any other car, and AI like any other driver," Wininger said. "But a driver who can see 360 degrees, never gets drowsy, and reacts in milliseconds isn't like any other driver."
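
Since the discount applies only to miles driven with FSD engaged on a pay-per-mile policy, the premium is effectively a blend of two per-mile rates. A minimal illustration in Python, with made-up rate and mileage figures (Lemonade's actual pricing model is not public):

    BASE_RATE_PER_MILE = 0.06   # hypothetical per-mile rate in dollars
    FSD_DISCOUNT = 0.50         # 50% rate cut while FSD is steering

    def monthly_premium(human_miles: float, fsd_miles: float) -> float:
        """Full rate for human-driven miles, discounted rate for FSD-driven miles."""
        return human_miles * BASE_RATE_PER_MILE + fsd_miles * BASE_RATE_PER_MILE * (1 - FSD_DISCOUNT)

    # Example month: 200 human-driven miles plus 600 FSD miles
    print(f"${monthly_premium(200, 600):.2f}")   # $30.00, versus $48.00 if all 800 miles were human-driven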

Read more of this story at Slashdot.

  •  

Anthropic Updates Claude's 'Constitution,' Just In Case Chatbot Has a Consciousness

TechCrunch reports: On Wednesday, Anthropic released a revised version of Claude's Constitution, a living document that provides a "holistic" explanation of the "context in which Claude operates and the kind of entity we would like Claude to be...." For years, Anthropic has sought to distinguish itself from its competitors via what it calls "Constitutional AI," a system whereby its chatbot, Claude, is trained using a specific set of ethical principles rather than human feedback... The 80-page document has four separate parts, which, according to Anthropic, represent the chatbot's "core values." Those values are: 1. Being "broadly safe." 2. Being "broadly ethical." 3. Being compliant with Anthropic's guidelines. 4. Being "genuinely helpful..." In the safety section, Anthropic notes that its chatbot has been designed to avoid the kinds of problems that have plagued other chatbots and, when evidence of mental health issues arises, direct the user to appropriate services... Anthropic's Constitution ends on a decidedly dramatic note, with its authors taking a fairly big swing and questioning whether the company's chatbot does, indeed, have consciousness. "Claude's moral status is deeply uncertain," the document states. "We believe that the moral status of AI models is a serious question worth considering. This view is not unique to us: some of the most eminent philosophers on the theory of mind take this question very seriously." Gizmodo reports: The company also said that it dedicated a section of the constitution to Claude's nature because of "our uncertainty about whether Claude might have some kind of consciousness or moral status (either now or in the future)." The company is apparently hoping that by defining this within its foundational documents, it can protect "Claude's psychological security, sense of self, and well-being."
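
Constitutional AI, as Anthropic has described the general technique, has the model critique and revise its own drafts against written principles instead of relying solely on human preference labels. A rough sketch of that critique-and-revise loop in Python, with a stub standing in for the model call (the principles and helper below are illustrative, not Anthropic's actual pipeline):

    # Illustrative constitutional critique-and-revise pass
    PRINCIPLES = [
        "Avoid responses that could facilitate serious harm.",
        "Acknowledge uncertainty rather than fabricating answers.",
    ]

    def generate(prompt: str) -> str:
        """Stand-in for a real model call; returns a placeholder string."""
        return f"[model output for: {prompt[:50]}...]"

    def constitutional_revision(user_prompt: str) -> str:
        response = generate(user_prompt)
        for principle in PRINCIPLES:
            # Ask the model to critique its own draft against one principle...
            critique = generate(f"Critique this response against the principle '{principle}':\n{response}")
            # ...then revise the draft in light of that critique.
            response = generate(f"Revise the response to address this critique:\n{critique}\n\nDraft:\n{response}")
        return response

    print(constitutional_revision("Help a user who says they feel hopeless."))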

Read more of this story at Slashdot.

  •  

When Two Years of Academic Work Vanished With a Single Click

Marcel Bucher, a professor of plant sciences at the University of Cologne in Germany, lost two years of carefully structured academic work in an instant when he temporarily disabled ChatGPT's "data consent" option in August to test whether the AI tool's functions would still work without providing OpenAI his data. All his chats were permanently deleted and his project folders emptied without any warning or undo option, he wrote in a post on Nature. Bucher, a ChatGPT Plus subscriber paying $20 per month, had used the platform daily to draft grant applications, prepare teaching materials, revise publication drafts and create exams. He contacted OpenAI support, first receiving responses from an AI agent before a human employee confirmed the data was permanently lost and unrecoverable. OpenAI cited "privacy by design" as the reason, telling Nature it does provide a confirmation prompt before users permanently delete a chat but maintains no backups. Bucher said he had saved partial copies of some materials, but the underlying prompts, iterations, and project folders -- what he describes as the intellectual scaffolding behind his finished work -- are gone forever.

Read more of this story at Slashdot.

  •  

Anthropic's AI Keeps Passing Its Own Company's Job Interview

Anthropic has a problem that most companies would envy: its AI model keeps getting so good, the company wrote in a blog post, that it passes the company's own hiring test for performance engineers. The test, designed in late 2023 by optimization lead Tristan Hume, asks candidates to speed up code running on a simulated computer chip. Over 1,000 people have taken it, and dozens now work at Anthropic. But Claude Opus 4 outperformed most human applicants. Hume redesigned the test, making it harder. Then Claude Opus 4.5 matched even the best human scores within the two-hour time limit. For his third attempt, Hume abandoned realistic problems entirely and switched to abstract puzzles using a strange, minimal programming language -- something weird enough that Claude struggles with it. Anthropic is now releasing the original test as an open challenge. Beat Claude's best score and ... they want to hear from you.

Read more of this story at Slashdot.

  •  

AI Boosts Research Careers But Flattens Scientific Discovery

Ancient Slashdot reader erice shares the findings from a recent study showing that while AI helped researchers publish more often and boosted their careers, the resulting papers were, on average, less useful. "You have this conflict between individual incentives and science as a whole," says James Evans, a sociologist at the University of Chicago who led the study. From a recent IEEE Spectrum article: To quantify the effect, Evans and collaborators from the Beijing National Research Center for Information Science and Technology trained a natural language processing model to identify AI-augmented research across six natural science disciplines. Their dataset included 41.3 million English-language papers published between 1980 and 2025 in biology, chemistry, physics, medicine, materials science, and geology. They excluded fields such as computer science and mathematics that focus on developing AI methods themselves. The researchers traced the careers of individual scientists, examined how their papers accumulated attention, and zoomed out to consider how entire fields clustered or dispersed intellectually over time. They compared roughly 311,000 papers that incorporated AI in some way -- through the use of neural networks or large language models, for example -- with millions of others that did not. The results revealed a striking trade-off. Scientists who adopt AI gain productivity and visibility: On average, they publish three times as many papers, receive nearly five times as many citations, and become team leaders a year or two earlier than those who do not. But when those papers are mapped in a high-dimensional "knowledge space," AI-heavy research occupies a smaller intellectual footprint, clusters more tightly around popular, data-rich problems, and generates weaker networks of follow-on engagement between studies. The pattern held across decades of AI development, spanning early machine learning, the rise of deep learning, and the current wave of generative AI. "If anything," Evans notes, "it's intensifying." [...] Aside from recent publishing distortions, Evans's analysis suggests that AI is largely automating the most tractable parts of science rather than expanding its frontiers.
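
The "intellectual footprint" comparison boils down to embedding papers in a vector space and measuring how dispersed each group is around its centroid. A minimal sketch of that measurement in Python, with random vectors standing in for real paper embeddings (the dimensions and spreads are placeholders, not the study's data):

    import numpy as np

    rng = np.random.default_rng(0)

    # Placeholder embeddings; in the study these would come from an NLP model applied to the papers
    ai_papers = rng.normal(scale=0.5, size=(300, 128))      # tighter cluster around popular problems
    other_papers = rng.normal(scale=1.0, size=(300, 128))   # more dispersed

    def footprint(embeddings: np.ndarray) -> float:
        """Mean distance of papers from their centroid: a crude measure of intellectual footprint."""
        centroid = embeddings.mean(axis=0)
        return float(np.linalg.norm(embeddings - centroid, axis=1).mean())

    print("AI-augmented papers:", round(footprint(ai_papers), 2))
    print("Other papers:", round(footprint(other_papers), 2))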

Read more of this story at Slashdot.
