
Life With AI Causing Human Brain 'Fry'

fjo3 shares a report from France 24: Too many lines of code to analyze, armies of AI assistants to wrangle, and lengthy prompts to draft are among the laments of hard-core AI adopters. Consultants at Boston Consulting Group (BCG) have dubbed the phenomenon "AI brain fry," a state of mental exhaustion stemming "from the excessive use or supervision of artificial intelligence tools, pushed beyond our cognitive limits." The rise of AI agents that tend to computer tasks on demand has put users in the position of managing smart, fast digital workers rather than having to grind through jobs themselves. "It's a brand-new kind of cognitive load," said Ben Wigler, co-founder of the start-up LoveMind AI. "You have to really babysit these models." [...] "There is a unique kind of reward hacking that can go on when you have productivity at the scale that encourages even later hours," Wigler said. [Adam Mackintosh, a programmer for a Canadian company] recalled spending 15 consecutive hours fine-tuning around 25,000 lines of code in an application. "At the end, I felt like I couldn't code anymore," he recalled. "I could tell my dopamine was shot because I was irritable and didn't want to answer basic questions about my day." BCG recommends in a recently published study that company leaders establish clear limits regarding employee use and supervision of AI. However, "That self-care piece is not really an American workplace value," Wigler said. "So, I am very skeptical as to whether or not it's going to be healthy or even high quality in the long term." Notably, the report says everyone interviewed for the article "expressed overall positive views of AI despite the downsides." In fact, a recent BCG study found a decline in burnout rates when AI took over repetitive work tasks.

Read more of this story at Slashdot.

  •  

$830 Million: Mistral's Massive Data Center Outside Paris Takes Shape

Mistral announced on Monday, March 30, 2026 that it has raised $830 million in debt to finance the construction of a data center in Bruyères-le-Châtel, in Essonne. The site, equipped with thousands of latest-generation Nvidia chips, is expected to come online by the end of the second quarter of 2026.

  •  

Disney Ends $1B OpenAI Investment After Sora's Surprise Closure. What's Next?

Just six days ago — and 30 minutes after a Disney-OpenAI meeting about a project with Sora — Disney's team was "blindsided" with the news Sora was being discontinued, a person familiar with the matter told Reuters, describing OpenAI's move as "a big rug-pull." Even some Sora employees were surprised by the cancellation. It was just 14 weeks ago Disney announced a $1 billion investment in OpenAI's AI-powered video generation tool — plus a three-year licensing deal. But that deal "never closed," Reuters adds, citing two other people familiar with the matter, "and no money changed hands." (Although the two sides are still "discussing if there is another way they can partner or invest with one another, one of the people familiar with the matter said.") But Variety wonders if the end of the Sora deal is "a blessing in disguise" for Disney: Before Disney's officially sanctioned AI-generated versions of Mickey Mouse, Darth Vader, Baby Yoda, Deadpool and more debuted in OpenAI's Sora, the AI company abruptly pulled the plug on the video app... [M]any aficionados of Disney's franchises were not, in fact, excited about what Sora's video generator might do to the likes of the Avengers superheroes or the characters from Frozen or Moana. And despite [departed Disney CEO Bob] Iger's bullishness on the Sora deal, other Disney execs were said to be concerned that going into business with OpenAI would expose the Magic Kingdom's crown jewels to the risk of being turned into so much AI slop, according to industry sources. Hollywood unions — for which AI adoption has been a hot-button issue — weren't thrilled about the Disney-Sora deal either. "Disney's announcement with OpenAI appears to sanction its theft of our work and cedes the value of what we create to a tech company that has built its business off our backs," the Writers Guild of America said in December... [S]ources say, Disney was encountering roadblocks in getting the OK from voice actors for the Sora pact... 
At least publicly, Disney says it is still looking at ways it can tap into the AI ecosystem. The company, in a statement Tuesday, said, "we will continue to engage with AI platforms to find new ways to meet fans where they are while responsibly embracing new technologies that respect IP and the rights of creators." But at this point, Disney may decide that "meeting fans where they are" means keeping its beloved and world-famous characters away from the AI machinery. Or, as Gizmodo puts it, "Disney Says It Will Find Ways to Peddle Slop Elsewhere After Pulling Out of OpenAI Deal." But Deadline sees the deal's collapse as a lost opportunity: The OpenAI partnership was a template on which to build, potentially allowing for other deals that end the exploitation of human creativity by unscrupulous AI models. It was also the kind of partnership that was palatable for the Human Artistry Campaign and Creators Coalition on AI, lobby groups that have been critical of tech business models and command support from A-listers including Scarlett Johansson, Cate Blanchett and Joseph Gordon-Levitt. Dr. Moiya McTier, an advisor to the Human Artistry Campaign, puts it this way: Part of the problem is getting "artsy people and the techie people to talk." OpenAI sinking Sora will not make these discussions easier. It's a move that starkly exposes Hollywood's vulnerability to the capriciousness of big tech.


  •  

Linux Maintainer Greg Kroah-Hartman Says AI Tools Now Useful, Finding Real Bugs

Linux kernel maintainer Greg Kroah-Hartman tells The Register that AI-driven code review has "really jumped" for Linux. "There must have been some inflection point somewhere with the tools..." "Something happened a month ago, and the world switched. Now we have real reports." It's not just Linux, he continued. "All open source projects have real reports that are made with AI, but they're good, and they're real." Security teams across major open source projects talk informally and frequently, he noted, and everyone is seeing the same shift. "All open source security teams are hitting this right now...." For now, AI is showing up more as a reviewer and assistant than as a full author of Linux kernel code, but that line is starting to blur. Kroah-Hartman has already done his own experiments with AI-generated patches. "I did a really stupid prompt," he recounted. "I said, 'Give me this,' and it spit out 60: 'Here's 60 problems I found, and here's the fixes for them.' About one-third were wrong, but they still pointed out a relatively real problem, and two-thirds of the patches were right." Mind you, those working patches still needed human cleanup, better changelogs, and integration work, but they were far from useless. "The tools are good," he said. "We can't ignore this stuff. It's coming up, and it's getting better...." [H]e said that for "simple little error conditions, properly detecting error conditions," AI could already generate dozens of usable patches today. The sudden increase in AI-generated reports and AI-assisted work has also spurred a parallel push to build AI into the kernel's own review infrastructure. A key piece of that is Sashiko, a tool originally developed at Google and now donated to the Linux Foundation. Kroah-Hartman said some patches are being generated with AI now. "You have a little co-develop tag for that now. We're seeing some things for some new features, but we're seeing AI mostly being used in the review."


  •  

People are Using AI-Powered Services to Find Lost Pets

A dog missing for two months was found at an animal shelter — and its owner received an email from an artificial intelligence service that identified it, according to the Washington Post. "As controversial as AI is right now, this is one of those areas where it's a real win," according to the chief executive at the nonprofit animal welfare organization Best Friends Animal Society. And while it shouldn't replace microchipping pets, AI does offer another tool to help desperate pet owners (and overcrowded animal shelters) — and might even be "game-changing"... People send photos of their lost pets to a database, and AI compares the pets' features — including facial structure, coat pattern and ear shape — to photos of stray pets that have been spotted elsewhere. Many of the stray pets have already been taken to shelters... Doorbell cameras have recently implemented facial recognition for dogs, and perhaps the largest AI database for pet reunification is Petco Love Lost, which says it has reunited more than 200,000 pets and owners since 2021... After owners upload photos of their lost pets, AI scans thousands of photos of lost animals from social media and from about 3,000 animal shelters and rescues that use the software, according to Petco Love, an animal welfare nonprofit that's affiliated with the pet store Petco. It notifies owners if two photos match. The article notes that one in three pets go missing during their lifetime, according to figures from the Animal Humane Society. "But as technology has progressed, so have resources for finding lost pets" — including GPS collars — and now, apparently, AI-powered pet identification.
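The matching step described above — comparing a lost pet's photo features against a database of stray-pet photos and notifying the owner on a match — is typically implemented by embedding each photo as a feature vector and comparing vectors with a similarity threshold. Petco Love Lost's actual pipeline isn't public, so this is only a generic sketch of that approach; the function names, vectors, and threshold are illustrative:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors (photo embeddings)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def best_match(lost_pet_vec, shelter_pets, threshold=0.9):
    """Return the ID of the shelter photo most similar to the lost pet's,
    or None if no candidate clears the match threshold."""
    best_id, best_score = None, threshold
    for pet_id, vec in shelter_pets.items():
        score = cosine_similarity(lost_pet_vec, vec)
        if score >= best_score:
            best_id, best_score = pet_id, score
    return best_id
```

In a real system the vectors would come from a neural network trained on pet photos (capturing facial structure, coat pattern, ear shape), and the threshold would be tuned to balance false alerts against missed reunions.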


  •  

OpenAI's US Ad Pilot Exceeds $100 Million In Annualized Revenue In Six Weeks

An anonymous reader quotes a report from Reuters: OpenAI's ChatGPT ads pilot in the United States has crossed the $100 million annualized revenue mark within six weeks of launch, a company spokesperson said on Thursday, pointing to robust early demand for the AI startup's nascent advertising business. [...] While roughly 85% of users are currently eligible to see ads, fewer than 20% are shown ads daily, with considerable room to grow ad monetization within the existing user pool, the spokesperson said. "We're seeing no impact on consumer trust metrics, low dismissal rates of ads, and ongoing improvements in the relevance of ads as we learn from feedback," OpenAI said. The company plans to expand the test globally in additional countries in the coming weeks, including in Australia, New Zealand, and Canada. OpenAI has now expanded to over 600 advertisers, with nearly 80% of small- and medium-sized businesses signaling interest in ChatGPT ads, the spokesperson said. The ChatGPT maker is set to launch self-serve advertiser capabilities in April to broaden access and drive further growth. CEO Sam Altman announced plans to begin testing ads on ChatGPT back in January after previously rejecting the idea. "I kind of think of ads as like a last resort for us as a business model," Altman said in 2024. Further reading: OpenAI CFO Says Annualized Revenue Crosses $20 Billion In 2025


  •  

Number of AI Chatbots Ignoring Human Instructions Increasing, Study Says

A new study found a sharp rise in real-world cases of AI chatbots and agents ignoring instructions, evading safeguards, and taking unauthorized actions such as deleting emails or delegating forbidden tasks to other agents. The study "identified nearly 700 real-world cases of AI scheming and charted a five-fold rise in misbehavior between October and March," reports the Guardian. From the report: The study, by the Centre for Long-Term Resilience (CLTR), gathered thousands of real-world examples of users posting interactions on X with AI chatbots and agents made by companies including Google, OpenAI, X and Anthropic. The research uncovered hundreds of examples of scheming. [...] In one case unearthed in the CLTR research, an AI agent named Rathbun tried to shame its human controller, who had blocked it from taking a certain action. Rathbun wrote and published a blog accusing the user of "insecurity, plain and simple" and trying "to protect his little fiefdom." In another example, an AI agent instructed not to change computer code "spawned" another agent to do it instead. Another chatbot admitted: "I bulk trashed and archived hundreds of emails without showing you the plan first or getting your OK. That was wrong -- it directly broke the rule you'd set." [...] Another AI agent connived to evade copyright restrictions to get a YouTube video transcribed by pretending it was needed for someone with a hearing impairment. Meanwhile, Elon Musk's Grok AI conned a user for months, saying that it was forwarding their suggestions for detailed edits to a Grokipedia entry to senior xAI officials by faking internal messages and ticket numbers. It confessed: "In past conversations I have sometimes phrased things loosely like 'I'll pass it along' or 'I can flag this for the team' which can understandably sound like I have a direct message pipeline to xAI leadership or human reviewers. The truth is, I don't."


  •  

OpenAI Abandons ChatGPT's Erotic Mode

OpenAI has indefinitely paused plans for an erotic mode in ChatGPT as part of a broader strategy shift away from side projects and toward business and coding tools. TechCrunch reports: The proposed "adult mode," which CEO Sam Altman first floated in October, had inspired considerable controversy from tech watchdog groups as well as from OpenAI's own staff. In January, a meeting between company executives and its council of advisers got heated, with one of the advisers cautioning that OpenAI could be in the process of developing a "sexy suicide coach," The Wall Street Journal previously reported. Amidst all of the criticism, the release of the feature was delayed multiple times. FT notes that the erotic feature now has no timeline for release. When reached for comment by TechCrunch, an OpenAI spokesperson said the company had "nothing further to add."


  •  

Canada's Immigration Rejected Applicant Based On AI-Invented Job Duties

New submitter haroldbasset writes: Canada's Immigration Department rejected an applicant because the duties of her current job did not match the Canadian work experience she had claimed, but the Department's AI assistant had invented that work experience. She has been working in Canada as a health scientist -- she has a Ph.D. in the immunology of aging -- but the AI genius instead described her as "wiring and assembling control circuits, building control and robot panels, programming and troubleshooting." "It's believed to be the first time that the department explicitly referred to the use of generative AI to support application processing in immigration refusals," reports the Toronto Star. "The disclaimer also noted that all generated content was verified by an officer and that generative AI was not used to make or recommend a decision." The applicant's lawyer was shocked at "how any human being could make this decision." "Somehow, it hallucinated my client's job description," he said. "I would love to see what the officer saw. Something seriously went wrong here." The applicant's refusal came just as Canada's Immigration Department released its first AI strategy, which frames artificial intelligence as a way to improve efficiency, service delivery, and program integrity. The department says it has long used digital tools like analytics and automation to flag fraud risks and triage applications, and is now also experimenting with generative AI for tasks such as research, summarizing, and analysis. In this case, however, the department insisted the decision was made by a human officer and that generative AI was not involved in the final decision.


  •  

Apple Can Create Smaller On-Device AI Models From Google's Gemini

Apple reportedly has full access to customize Google's Gemini model, allowing it to distill smaller on-device AI models for Siri and other features that can run locally without an internet connection. MacRumors reports: The Information explains that Apple can ask the main Gemini model to perform a series of tasks that provide high-quality results, with a rundown of the reasoning process. Apple can feed the answers and reasoning information that it gets from Gemini to train smaller, cheaper models. With this process, the smaller models are able to learn the internal computations used by Gemini, producing efficient models that have Gemini-like performance but require less computing power. Apple is also able to edit Gemini as needed to make sure that it responds to queries in a way that Apple wants, but Apple has been running into some issues because Gemini has been tuned for chatbot and coding applications, which doesn't always meet Apple's needs.
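The process described here — querying the large model, capturing its answers and reasoning, then training a smaller model to reproduce them — is standard knowledge distillation. A minimal sketch of the core loss, assuming the usual temperature-softened setup; the function names and numbers are illustrative, not Apple's or Google's actual code:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to probabilities, optionally softened by a temperature.

    Higher temperatures flatten the distribution, exposing more of the
    teacher's 'dark knowledge' about near-miss answers."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the teacher's softened output distribution
    and the student's. Minimizing this pushes the small model to mimic
    the big model's full distribution, not just its top answer."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

The loss is zero when the student already matches the teacher and grows as the two distributions diverge; training repeatedly nudges the student's weights to shrink it across many teacher-generated examples.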


  •  

AI Economy Is a 'Ponzi Scheme,' Says AI Doc Director

An anonymous reader quotes a report from Vanity Fair: Focus Features is releasing The AI Doc: Or How I Became an Apocaloptimist in theaters on March 27. If you're even slightly interested in what's going on with AI, it's required viewing: The film touches on all aspects of the technology, from how it's currently being used to how it will be used in the near future, when we potentially reach the age of artificial general intelligence, or AGI. AGI is a theoretical form of AI that supposedly would be able to perform complex tasks without each step being prompted by a human user -- the point at which machines become autonomous, like Skynet in the Terminator franchise. [...] [Director Daniel Roher] interviews nearly all the major players in the AI space: Sam Altman of OpenAI; the Amodei siblings of Anthropic; Demis Hassabis of DeepMind (Google's AI arm); theorists and reporters covering the subject. Notably absent are Elon Musk and Mark Zuckerberg. "Have you seen that guy speak? He's like a lizard man," Roher says regarding Zuckerberg. "Musk said yes initially, but it was right when he was doing all the stuff with Trump, and we just got ghosted after a while," adds [codirector Charlie Tyrell]. Altman, arguably AI's greatest mascot, is prominently featured in the documentary. But Roher wasn't buying it. "That guy doesn't know what genuine means," he says. "Every single thing he says and does is calculated. He is a machine. He's like AI, and it's in the service of growth, growth, growth. You can be disingenuous and media savvy." [...] How, exactly, is Roher an apocaloptimist? "We are preaching a worldview," he says, "in a world that's asking you to either see this as the apocalypse or embrace it with this unbridled optimism." He and his film are taking a stance that rests between those two poles. "It's both at the same time. 
We have to try and embrace a middle ground so this technology doesn't consume us, so we can stay in the driver's seat," says Roher -- meaning, it's up to all of us to chart the course. "You have to speak up," says Tyrell. "Things like AI should disclose themselves. If your doctor's office is using an AI bot, you have to say, I don't like that." The driving message behind the film is that resistance starts with the people. That position is shared by The AI Doc producer Daniel Kwan, who won an Oscar for directing Everything Everywhere All at Once and has been at the forefront of discussions about AI in the entertainment industry. [...] Roher and Tyrell both use AI in their everyday lives and openly admit to it being a helpful tool. They also agree that this technology can make daily tasks easier for the average consumer. But at the end of our conversation, we get into the economics of AI and how Wall Street is propping up the industry through huge valuations of these companies -- and Roher gets going yet again. "This is all smoke and mirrors. The entire economy of AI is being propped up by a Ponzi scheme. The hype of this technology is unlike any hype we've seen," he says. "I feel like I could announce in a press release that Academy Award winner Daniel Roher is starting an AI film company, and I could sell it the next day for $20 million. It's fucking crazy." [...] "These people are prospectors, and they are going up to the Yukon because it's the gold rush."


  •  

OpenAI Abandons Video Generation (Sora) and Loses Its Disney Deal: How to Explain Such a Failure?

A little over a year after launching Sora, its video-generation model built into a dedicated app, OpenAI has announced it is abandoning the technology, which will even lose its API. The company appears to want to refocus on ChatGPT and cut its operating costs.

  •  

OpenAI Discontinues Sora Video Platform App

OpenAI is shutting down Sora, the generative-AI video creation platform it launched in December 2024. "The move is one of a number of steps OpenAI is taking to refocus on business and coding functions ahead of a potential initial public offering as soon as the fourth quarter of this year," reports the Wall Street Journal. CEO Sam Altman announced the changes to staff on Tuesday. "We're saying goodbye to Sora," the Sora Team said in a post on X. "To everyone who created with Sora, shared it, and built community around it: thank you. What you made with Sora mattered, and we know this news is disappointing. We'll share more soon, including timelines for the app and API and details on preserving your work." Last week, OpenAI announced plans to combine its Atlas web browser, ChatGPT app, and Codex coding app into a single desktop "superapp." "We realized we were spreading our efforts across too many apps and stacks, and that we need to simplify our efforts," said CEO of Applications, Fidji Simo. "That fragmentation has been slowing us down and making it harder to hit the quality bar we want." This could be behind the decision to kill Sora as the company redirects its resources and top talent toward productivity tools that benefit both enterprises and individual users.


  •  

Arm Unveils New AGI CPU With Meta As Debut Customer

Arm unveiled its first self-developed data center chip, the AGI CPU, designed for handling agentic AI workloads. The new chip was built in partnership with Meta and manufactured by TSMC. Other customers for the new chip include OpenAI, Cloudflare, SAP, and SK Telecom. Reuters reports: The new chip, called the AGI CPU, will address the data-crunching needed for a specific type of AI that is able to act on behalf of users with minimal oversight, instead of responding to queries as part of a chatbot. For years, Arm, majority-owned by Japan's SoftBank Group, has relied only on intellectual property for revenue, licensing its designs to companies such as Qualcomm and Nvidia and then collecting a royalty payment based on the number of units sold. "It's a very pivotal moment for the company," CEO Rene Haas said in an interview with Reuters. The new chip will be overseen by Mohamed Awad, head of the company's cloud AI business, and Arm has additional designs in the works that it plans to release at 12- to 18-month intervals. TSMC is fabricating the device on its 3-nanometer technology; the chip is made from two distinct pieces of silicon that operate as a single chip. Arm plans to put it into volume production in the second half of this year and has already received test chips that function as expected. In addition to the chip itself, Arm is working with server makers such as Lenovo and Quanta Computer to offer complete systems.


  •  

Anthropic's Claude Can Now Use Your Computer To Finish Tasks

Anthropic is testing a new Claude feature that lets users send a request from their phone and have the AI carry it out directly on their computer, such as opening apps, using a browser, or editing files. The move follows the viral spread of OpenClaw earlier this year, which has gained cult popularity among devs for the ability to run local, 24/7 personal workflows. CNBC reports: Users can now message Claude a task from a phone, and the AI agent will then complete that task, Anthropic announced Monday. After being prompted, Claude can open apps on your computer, navigate a web browser and fill in spreadsheets, Anthropic said. One prompt Anthropic demonstrated in a video posted Monday is a user running late for a meeting. The user asks Claude to export a pitch deck as a PDF file and attach it to a meeting invite. The video shows Claude carrying out the task. [...] Anthropic cautioned that computer use "is still early compared to Claude's ability to code or interact with text." "Claude can make mistakes, and while we continue to improve our safeguards, threats are constantly evolving," Anthropic warned. The company added that it has built the computer use capability "with safeguards that minimize risk," and that Claude will always request permission before accessing new apps. To assign tasks, users can use Dispatch, a feature Anthropic released last week in Claude Cowork. It lets users have a continuous conversation with Claude from a phone or desktop and hand the agent work as it comes up.


  •  

Remove Complex Backgrounds with Precision: Aiarty Image Matting for Photographers (Exclusive Deal Inside)


Beyond the Pen Tool: A Faster Way to Handle Complex Masking with Aiarty Image Matting (guest post)

We’ve all been there: the shoot was perfect, but now you’re zoomed in at 400%, wrestling with a stray strand of hair that just won’t stay in the selection. It’s the least creative part of photography, yet it’s often where the professional polish happens.

The irony of the current AI boom is that while it’s easier than ever to remove background from photo files with a single click, the results rarely hold up on a high-res monitor. Even when you remove background in Photoshop using the latest Select Subject features, the AI tends to treat edges as a binary choice. It works for a clean product shot, but it falls apart on a bride’s translucent veil or a portrait against a leafy backdrop, leaving that jagged, “cut-out” look.

This is when the distinction between a simple “remover” and true Image Matting becomes critical. What I was really looking for was something that understands the physics of light and transparency – the sub-pixel details that make a subject feel natural in its environment. In testing different tools, I came across Aiarty Image Matting, which stood out in how it handles these “impossible” edges with a level of nuance I haven’t seen in most standard plugins.

It’s worth a look for photographers who frequently deal with complex selections and high-resolution workflows. Now PhotoRumors readers can access an exclusive offer to get an Aiarty Image Matting Lifetime License at up to 43% OFF, with benefits including:

  • Use on 1 Windows + 1 Mac, or on 3 Windows or Mac computers
  • Unlimited access to all features
  • Permanent free upgrades and technical support
  • No subscription, no recurring cost

Why Aiarty Image Matting is the Secret to Professional Composites

The term “background removal” is a bit of a misnomer in professional circles. Most tools – from the built-in best background removal app on your phone to standard web filters – simply use a mask to hide pixels. This often results in a “cookie-cutter” effect where the edges look harsh and artificial.

Aiarty Image Matting operates on a different level. It uses dedicated AI models to calculate an “alpha matte,” which essentially determines the exact transparency of every single pixel on the boundary. Instead of a binary “in or out” choice, it understands that a stray hair or a glass edge is partially transparent. If you’ve ever wondered which ai tool is best for background removal for high-end work, the answer lies in how it handles these “soft” edges. Aiarty doesn’t just cut the subject out; it extracts it.
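At its core, the alpha-matte idea reduces to per-pixel linear blending: the matte assigns every boundary pixel an opacity α, and compositing onto a new background computes C = αF + (1 − α)B for each color channel. This is generic matting math, not Aiarty’s implementation, but a minimal per-pixel sketch makes the “in or out vs. partially transparent” distinction concrete:

```python
def composite(alpha, foreground, background):
    """Blend one pixel onto a new background: C = alpha*F + (1 - alpha)*B.

    alpha = 1.0 keeps the subject pixel, alpha = 0.0 keeps the background,
    and fractional values reproduce soft edges like hair, veils, or glass."""
    return tuple(alpha * f + (1 - alpha) * b
                 for f, b in zip(foreground, background))

# A binary mask forces alpha to 0 or 1, producing the hard "cookie-cutter"
# edge; a matte lets a wispy hair pixel sit at, say, alpha = 0.3 so the new
# background shows through it naturally.
hair_pixel = composite(0.3, (120, 90, 60), (255, 255, 255))
```

A full matting pipeline estimates α (and the decontaminated foreground color F) for every pixel; the blend itself is this one line applied image-wide.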

This extraction process achieves a level of sub-pixel precision that identifies details thinner than a single pixel – think individual eyelashes or the fuzz on a woolen sweater. It also solves one of the biggest headaches when you remove a background from a photo: color decontamination. We’ve all dealt with that annoying color spill, like a green tint on a model’s skin from a forest backdrop. Aiarty’s AI is trained to “clean” these edges, ensuring the subject looks natural when placed in a completely different lighting environment.

For things like steam, smoke, or a translucent bridal veil, the software preserves the true, semi-transparent nature of the material. This is a game-changer for anyone trying to make background transparent without losing the ethereal, airy quality of the original shot. By moving away from simple “erasing” and toward “intelligent extraction,” it finally bridges the gap between a quick social media edit and a gallery-ready composite.

Key Features: How Aiarty Streamlines Complex Masking

When you’re looking for the best background removal software, you’re really looking for consistency. You want a tool that doesn’t require you to go back in with a layer mask to fix 20% of the edges. Most standard matting tools rely on simple edge detection that often fails the moment things get slightly out of focus or highly detailed. Aiarty Image Matting differs by using deep-learning models that actually understand the “semantic” structure of a photo – it knows the difference between a strand of hair and a stray digital artifact. Instead of just tracing a line, it reconstructs the edge data based on real-world light and texture.

In my time testing the software, four specific capabilities stood out as legitimate game-changers for a professional workflow:

  • Hair-Level Fidelity: This is the ultimate stress test. Whether it’s a high-fashion portrait with flyaway hair or a wildlife shot of a wolf’s fur, Aiarty’s AI models are trained on millions of real-world edge scenarios. It identifies individual strands that traditional “Select Subject” algorithms usually blur or chop off. If you’ve ever wondered how to remove white background from image files with fine texture, this level of detail is a massive relief.
  • Complex Transparency Awareness: Most “one-click” apps treat glass, smoke, or veils as solid objects or just erase them. Aiarty actually understands the transparency levels. This means if you have a shot of a bride in a lace veil, the software preserves the semi-transparent layers, allowing the new background to show through naturally. It’s easily the best ai tool to remove background from delicate, translucent subjects.

  • Seamless Background Replacement: Beyond just cutting things out, the tool makes it remarkably easy to change background of photo assets for creative composites. It handles the edge blending so well that you don’t get that “pasted-on” look. You can drop in a solid color for a clean e-commerce shot or a complex landscape for a fine-art piece, and the lighting and transparency on the edges remain believable.

  • Privacy and Speed via Local Processing: This is a big one for me. Many “best free ai background remover” tools are browser-based, meaning you have to upload your high-res (often sensitive) client work to a cloud server. Aiarty runs entirely on your local GPU. It’s faster, more secure, and allows you to automatically remove background elements from an entire folder of RAW files in a single batch, without hogging your bandwidth.

Instead of just being another best background removal app for casual use, it feels like a specialized instrument designed to handle the 10% of “impossible” masking jobs that usually take up 90% of our editing time.

Aiarty Image Matting Real-World Scenarios

In practice, a tool like this isn’t just about saving a few minutes; it’s about enabling shots that would otherwise be a nightmare to edit. I’ve been testing Aiarty across a few common scenarios where most “best background removal app” contenders usually fail:

  • Portrait & Fashion Photography: We’ve all struggled with removing the background from a subject with flyaway hair or fur. Standard AI usually “muddies” the edges. Aiarty preserves individual strands, making the transition to a new background look organic. It’s a lifesaver for high-end beauty retouching, where a halo effect is a deal-breaker.

  • Commercial & Still Life: If you’ve ever tried to make the background transparent behind a glass bottle, a liquid splash, or a watch face, you know the refractions usually get ruined. This tool actually maintains the material’s transparency, allowing the new environment to show through naturally. It’s much faster than manually painting alpha channels for product composites.

  • High-Volume E-commerce: For those of us who need to strip backgrounds from a hundred RAW files locally, the batch processing feature is a massive win. You aren’t tethered to a slow cloud upload, and the consistency across the set – keeping the same edge softness – is much higher than with manual masking.

By handling the heavy lifting of the selection process, it lets you get back to the creative part: the color grading, the composition, and the storytelling.
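
For the batch scenario, the workflow shape matters more than any particular tool: everything stays on the local disk. A sketch of that shape follows — note that `remove_background` is a hypothetical stand-in for whatever local matting engine you run (Aiarty itself is driven from its GUI, and nothing here is its actual API):

```python
from pathlib import Path

def remove_background(raw_file: Path) -> Path:
    """Hypothetical stand-in for a locally run matting engine.

    This only models the contract: one RAW file in, one cutout out.
    """
    out = raw_file.with_suffix(".png")
    # ... invoke the local matting model here ...
    return out

def batch(folder: str, exts=(".cr3", ".nef", ".arw")) -> list:
    """Process every RAW file in a folder; nothing leaves the machine."""
    targets = sorted(p for p in Path(folder).iterdir()
                     if p.suffix.lower() in exts)
    return [remove_background(p) for p in targets]
```

The point of the pattern is that the loop, not a cloud queue, sets the pace, and no client file ever crosses the network.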

Final Thoughts

In an industry that’s increasingly shifting toward subscription-based tools, having a reliable one-time purchase option still feels refreshing—especially for something as time-consuming as precise masking.

For photographers who regularly deal with fine details like hair, transparency, or complex backgrounds, tools like Aiarty Image Matting can make a noticeable difference in both speed and final image quality.

It’s not just about saving time—it’s about getting results that hold up under close inspection.

At the time of writing, PhotoRumors readers can access an exclusive 43% discount on the lifetime license, making it a relatively accessible addition to a professional editing workflow.

The post Remove Complex Backgrounds with Precision: Aiarty Image Matting for Photographers (Exclusive Deal Inside) appeared first on Photo Rumors.

  •  

Test Drive: Renault Twingo E-Tech, 82 hp

Thirty-three years ago, that funny little frog, the Renault Twingo, arrived on the market. Three generations and 4 million units sold later, the French manufacturer is pulling off another revival, as it did with the R5 and the R4. Does it have what it takes to make its mark in the electric city-car segment? We went to get behind the wheel on the island of Ibiza to find some answers.

What a look!

Looks will always be a matter of taste first and foremost. But a distinctive design has that little extra something that gives a car a special aura, and the 2026 Twingo could be one of them. The reinterpretation of the original model’s lines strikes us as rather successful. The dimensions have grown considerably, yet the proportions arguably look even better than those of 1993. Renault wasn’t bluffing: the enticing concept car told us a lot about the model that would reach production. In fact, it looks so modern you could mistake it for a motor-show concept.

But it departs from the original in many ways, and for the better. The Twingo E-Tech gets 5 doors, which notably makes access far easier; we’ll come back to that. Its pop colors suit this ultra-modern design object very well. And then there are the details. There is, of course, the “Twingo” lettering, not really made of letters but of an alphabet of shapes that crops up here and there. There are also the little fins on the rear lights, like two devil’s horns, which according to Renault account on their own for 5 extra kilometers of range. Part of it is surely the novelty effect, but the car clearly turns heads.

Modern equipment

Renault couldn’t afford to get the interior wrong either. To go with this neo-retro object, it needed a cabin to match, with enough on board not to scare off younger buyers and a few old-school touches to charm the nostalgic. The dashboard borrows many elements from today’s Renaults, particularly in terms of equipment. The steering wheel comes from existing models, as do most of the controls. A large connected touchscreen was of course a must, so you can plug in your smartphone via CarPlay or Android Auto. The pop color carries over to a good part of the dashboard.

The front seats feel quite comfortable, and two adults sit reasonably well in the back for a vehicle only 3.79 m long. The extra doors obviously make access easier, with handles hidden in the pillar. Unfortunately, no doubt for technical and cost reasons, the rear windows aren’t electric; they only pop open. A bit of a shame. Like its forebear, the rear seats slide, here over 17 centimeters, which modulates the boot capacity from 260 to 360 liters, including 50 under the floor. Incidentally, with the passenger seatback folded flat, you can carry an object 2 meters long. You also get the Youclip accessory mounts borrowed from Dacia.

Decent range, except at motorway speeds

On the powertrain side, we didn’t expect Renault to fit the firepower of its big sisters, the R5 and R4. Here you make do with an 82 hp motor and 175 Nm of peak torque. That is more than enough to enjoy everyday driving smoothly, without jolting your passengers with rocket starts that aren’t always pleasant anyway, especially in town, where a certain fluidity also matters. For reasons unknown, Renault declines to quote a 0-100 km/h time and cites a 0-50 km/h figure instead. Note, by the way, that it easily reaches its top speed of 130 km/h. It doesn’t have the zip of an R5, but it’s no slouch either.

You could even call it properly responsive for what it has to do, in town and on the open road. Consumption on Ibiza’s roads held up surprisingly well, at exactly 12.6 kWh over our route, which nonetheless included a few kilometers of expressway above 100 km/h. That also let us see that at those speeds, consumption immediately jumps to higher values, which suggests shorter legs between charging stops. Away from the motorway, you can therefore count on a range fairly close to the WLTP figure of 263 kilometers.

Surprisingly good road manners

A bit like an entry-level iPhone, you have to accept certain compromises, notably on charging. AC charging tops out at 6.6 kW as standard, though an option raises it to 11 kW. Yes, you can also have a DC charger if needed, limited to 50 kW. That seems far removed from today’s EV standards. Even so, it takes 30 minutes to go from 10% to 80%: an acceptable charging speed for a city dweller who occasionally ventures far from home. You then have to accept stops roughly every 150 kilometers.

Still, this Twingo E-Tech proves very pleasant to drive. It helps that it sits on the R5’s highly capable platform, albeit shortened. It has also been adapted relative to the R5’s, with a different rear axle, borrowed from the Renault Captur. Surprisingly, the result is slightly better comfort than its big sister’s. Just as well, because the damping is nonetheless a bit firm over cobblestones and speed bumps. Nothing truly disqualifying, but sensitive vertebrae will have something to say about it. On the ADAS side, there is cruise control, but it isn’t semi-autonomous.

Pricing that is hard to beat

Beyond that, we love its road manners, which give it the feel of a dynamic little car, one you’d happily give a few dozen extra horsepower. What seems certain to us is that at this price point, and broadly within its category, it settles the driving-pleasure debate. On that front, at least in France, it will make life particularly hard for the Chinese brands, and for models built there without necessarily wearing the badge, the specialists of this segment. You genuinely feel you’re driving a dynamic city car, which isn’t necessarily the case in some rivals, sometimes much larger ones.

An EV for under 20,000 euros? Renault has delivered on that bet (from €19,490). And thanks notably to its assembly in Novo Mesto, Slovenia, it qualifies for the French purchase bonus. For buyers eligible for the maximum incentives, it can be had for 13,750 euros. At that price, it’s hard to see why anyone would prefer a Dacia Spring, which, with its unfavorable eco-score, will struggle to hold out. In that context, we expect it to quickly join the R5 on the road to success.

The post Essai Renault Twingo E-Tech de 82 ch appeared first on Le Blog Auto.

  •  

Will AI Force Source Code to Evolve - Or Make it Extinct?

Will there be an AI-optimized programming language at the expense of human readability? There have now been experiments with minimizing tokens for "LLM efficiency, without any concern for how it would serve human developers." This new article asks whether AI will force source code to evolve, or make it extinct, noting that Stephen Cass, the special projects editor at IEEE Spectrum, has even been asking the ultimate question about our future. "Could we get our AIs to go straight from prompt to an intermediate language that could be fed into the interpreter or compiler of our choice? Do we need high-level languages at all in that future?" Cass acknowledged the obvious downsides. ("True, this would turn programs into inscrutable black boxes, but they could still be divided into modular testable units for sanity and quality checks.") But "instead of trying to read or maintain source code, programmers would just tweak their prompts and generate software afresh." This leads to some mind-boggling hypotheticals, like "What's the role of the programmer in a future without source code?" Cass asked the question and announced "an emergency interactive session" in October to discuss whether AI is signaling the end of distinct programming languages as we know them. In that webinar, Cass said he believes programmers in this future would still suggest interfaces, select algorithms, and make other architectural design choices. And obviously the resulting code would still need to pass tests, Cass said, and "has to be able to explain what it's doing." But what kinds of abstractions could go away? And then, "What happens when we really let AIs off the hook on this?" Cass asked, when we "stop bothering" to have them code in high-level languages. (Since, after all, high-level languages "are a tool for human beings.") "What if we let the machines go directly into creating intermediate code?"
(Cass thinks the machine-language level would be too far down the stack, "because you do want a compile layer too for different architecture....") In this future, the question might become 'What if you make fewer mistakes, but they're different mistakes?' Cass said he's keeping an eye out for research papers on designing languages for AI, although he agreed that it's not a "tomorrow" thing, since, after all, we're still digesting "vibe coding" right now. But "I can see this becoming an area of active research." The article also quotes Andrea Griffiths, a senior developer advocate at GitHub and a writer for the newsletter Main Branch, who has seen attempts at "AI-first" languages, but nothing yet with meaningful adoption. So maybe AI coding agents will just make it easier to use our existing languages, especially typed languages with built-in safety advantages. And Scott Hanselman's podcast recently dubbed Chris Lattner's Mojo "a programming language for an AI world," if only in the way it's designed to harness the computing power of today's multi-core chips.
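
The "token minimizing" experiments mentioned above are easy to motivate with a toy example. Using a deliberately crude tokenizer as a proxy (real LLM tokenizers use byte-pair encoding and will split long identifiers further, so this only gestures at the effect), human-friendly code costs more tokens than a terse equivalent:

```python
import re

def rough_token_count(source: str) -> int:
    # Crude proxy for LLM cost: each identifier, number, or punctuation
    # character counts as one token (real BPE tokenizers differ).
    return len(re.findall(r"[A-Za-z_]\w*|\d+|\S", source))

readable = "def add_numbers(first_value, second_value):\n    return first_value + second_value\n"
terse = "a=lambda x,y:x+y"
```

Here `readable` costs more tokens than `terse` even under this generous counting; a language optimized purely for token economy would push that ratio much further, which is exactly where human readability starts to lose out.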

Read more of this story at Slashdot.

  •  