Canada's Immigration Rejected Applicant Based On AI-Invented Job Duties

New submitter haroldbasset writes: Canada's Immigration Department rejected an applicant because the duties of her current job did not match the Canadian work experience she had claimed, but the Department's AI assistant had invented that work experience. She has been working in Canada as a health scientist -- she has a Ph.D. in the immunology of aging -- but the AI genius instead described her as "wiring and assembling control circuits, building control and robot panels, programming and troubleshooting." "It's believed to be the first time that the department explicitly referred to the use of generative AI to support application processing in immigration refusals," reports the Toronto Star. "The disclaimer also noted that all generated content was verified by an officer and that generative AI was not used to make or recommend a decision." The applicant's lawyer was shocked at "how any human being could make this decision." "Somehow, it hallucinated my client's job description," he said. "I would love to see what the officer saw. Something seriously went wrong here." The applicant's refusal came just as Canada's Immigration Department released its first AI strategy, which frames artificial intelligence as a way to improve efficiency, service delivery, and program integrity. The department says it has long used digital tools like analytics and automation to flag fraud risks and triage applications, and is now also experimenting with generative AI for tasks such as research, summarizing, and analysis. In this case, however, the department insisted the decision was made by a human officer and that generative AI was not involved in the final decision.

Read more of this story at Slashdot.

  •  

Apple Can Create Smaller On-Device AI Models From Google's Gemini

Apple reportedly has full access to customize Google's Gemini model, allowing it to distill smaller on-device AI models for Siri and other features that can run locally without an internet connection. MacRumors reports: The Information explains that Apple can ask the main Gemini model to perform a series of tasks that provide high-quality results, with a rundown of the reasoning process. Apple can feed the answers and reasoning information that it gets from Gemini to train smaller, cheaper models. With this process, the smaller models are able to learn the internal computations used by Gemini, producing efficient models that have Gemini-like performance but require less computing power. Apple is also able to edit Gemini as needed to make sure that it responds to queries in a way that Apple wants, but Apple has been running into some issues because Gemini has been tuned for chatbot and coding applications, which doesn't always meet Apple's needs.
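The distillation step described here has a simple core: the student model is trained to match the teacher's temperature-softened output distribution instead of hard labels, so it inherits the teacher's "reasoning" about relative class likelihoods. A minimal sketch of that loss (illustrative, made-up logits; this is the generic technique, not Apple's or Google's actual pipeline):

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax: higher T yields a softer distribution
    z = np.asarray(logits, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, T=4.0):
    # KL divergence between the softened teacher and student distributions,
    # scaled by T^2 as in the standard knowledge-distillation formulation
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(T * T * np.sum(p * np.log(p / q)))

teacher = [8.0, 2.0, 1.0]   # hypothetical "teacher" logits for one example
aligned = [7.5, 2.2, 0.9]   # a student that mimics the teacher's ranking
wrong   = [1.0, 8.0, 2.0]   # a student that disagrees with the teacher

print(distillation_loss(teacher, aligned))  # small loss
print(distillation_loss(teacher, wrong))    # much larger loss
```

Minimizing this loss over many teacher-labeled examples is what lets a small model approximate the larger model's behavior at a fraction of the compute.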

  •  

AI Economy Is a 'Ponzi Scheme,' Says AI Doc Director

An anonymous reader quotes a report from Vanity Fair: Focus Features is releasing The AI Doc: Or How I Became an Apocaloptimist in theaters on March 27. If you're even slightly interested in what's going on with AI, it's required viewing: The film touches on all aspects of the technology, from how it's currently being used to how it will be used in the near future, when we potentially reach the age of artificial general intelligence, or AGI. AGI is a theoretical form of AI that supposedly would be able to perform complex tasks without each step being prompted by a human user -- the point at which machines become autonomous, like Skynet in the Terminator franchise. [...] [Director Daniel Roher] interviews nearly all the major players in the AI space: Sam Altman of OpenAI; the Amodei siblings of Anthropic; Demis Hassabis of DeepMind (Google's AI arm); theorists and reporters covering the subject. Notably absent are Elon Musk and Mark Zuckerberg. "Have you seen that guy speak? He's like a lizard man," Roher says regarding Zuckerberg. "Musk said yes initially, but it was right when he was doing all the stuff with Trump, and we just got ghosted after a while," adds [codirector Charlie Tyrell]. Altman, arguably AI's greatest mascot, is prominently featured in the documentary. But Roher wasn't buying it. "That guy doesn't know what genuine means," he says. "Every single thing he says and does is calculated. He is a machine. He's like AI, and it's in the service of growth, growth, growth. You can be disingenuous and media savvy." [...] How, exactly, is Roher an apocaloptimist? "We are preaching a worldview," he says, "in a world that's asking you to either see this as the apocalypse or embrace it with this unbridled optimism." He and his film are taking a stance that rests between those two poles. "It's both at the same time. 
We have to try and embrace a middle ground so this technology doesn't consume us, so we can stay in the driver's seat," says Roher -- meaning, it's up to all of us to chart the course. "You have to speak up," says Tyrell. "Things like AI should disclose themselves. If your doctor's office is using an AI bot, you have to say, I don't like that." The driving message behind the film is that resistance starts with the people. That position is shared by The AI Doc producer Daniel Kwan, who won an Oscar for directing Everything Everywhere All at Once and has been at the forefront of discussions about AI in the entertainment industry. [...] Roher and Tyrell both use AI in their everyday lives and openly admit to it being a helpful tool. They also agree that this technology can make daily tasks easier for the average consumer. But at the end of our conversation, we get into the economics of AI and how Wall Street is propping up the industry through huge valuations of these companies -- and Roher gets going yet again. "This is all smoke and mirrors. The entire economy of AI is being propped up by a Ponzi scheme. The hype of this technology is unlike any hype we've seen," he says. "I feel like I could announce in a press release that Academy Award winner Daniel Roher is starting an AI film company, and I could sell it the next day for $20 million. It's fucking crazy." [...] "These people are prospectors, and they are going up to the Yukon because it's the gold rush."

  •  

OpenAI Abandons Video Generation (Sora) and Loses Its Disney Deal: How to Explain Such a Failure?

A little over a year after launching Sora, its video-generation model built into a dedicated app, OpenAI has announced it is abandoning the technology, which will even lose its API. The company appears to want to refocus on ChatGPT and cut its operating costs.

  •  

OpenAI Discontinues Sora Video Platform App

OpenAI is shutting down Sora, the generative-AI video creation platform it launched in December 2024. "The move is one of a number of steps OpenAI is taking to refocus on business and coding functions ahead of a potential initial public offering as soon as the fourth quarter of this year," reports the Wall Street Journal. CEO Sam Altman announced the changes to staff on Tuesday. "We're saying goodbye to Sora," the Sora Team said in a post on X. "To everyone who created with Sora, shared it, and built community around it: thank you. What you made with Sora mattered, and we know this news is disappointing. We'll share more soon, including timelines for the app and API and details on preserving your work." Last week, OpenAI announced plans to combine its Atlas web browser, ChatGPT app, and Codex coding app into a singular desktop "superapp." "We realized we were spreading our efforts across too many apps and stacks, and that we need to simplify our efforts," said CEO of Applications, Fidji Simo. "That fragmentation has been slowing us down and making it harder to hit the quality bar we want." This could be behind the decision to kill Sora as the company redirects its resources and top talent towards productivity tools that benefit both enterprises and individual users.

  •  

Arm Unveils New AGI CPU With Meta As Debut Customer

Arm unveiled its first self-developed data center chip, the AGI CPU, designed for handling agentic AI workloads. The new chip was built in partnership with Meta and manufactured by TSMC. Other customers for the new chip include OpenAI, Cloudflare, SAP, and SK Telecom. Reuters reports: The new chip, called the AGI CPU, will address data-crunching needed for a specific type of AI that is able to act on behalf of users with minimal oversight, instead of responding to queries as part of a chatbot. For years, Arm, majority-owned by Japan's SoftBank Group, has relied only on intellectual property for revenue, licensing its designs to companies such as Qualcomm and Nvidia and then collecting a royalty payment based on the number of units sold. "It's a very pivotal moment for the company," CEO Rene Haas said in an interview with Reuters. The new chip will be overseen by Mohamed Awad, head of the company's cloud AI business, and Arm has additional designs in the works that it plans to release at 12- to 18-month intervals. TSMC is fabricating the device on its 3-nanometer technology; it is made from two distinct pieces of silicon that operate as a single chip. Arm plans to put it into volume production in the second half of this year and has already received test chips that function as expected. In addition to the chip itself, Arm is working with server makers such as Lenovo and Quanta Computer to offer complete systems.

  •  

Anthropic's Claude Can Now Use Your Computer To Finish Tasks

Anthropic is testing a new Claude feature that lets users send a request from their phone and have the AI carry it out directly on their computer, such as opening apps, using a browser, or editing files. The move follows the viral spread of OpenClaw earlier this year, which has gained cult popularity among devs for the ability to run local, 24/7 personal workflows. CNBC reports: Users can now message Claude a task from a phone, and the AI agent will then complete that task, Anthropic announced Monday. After being prompted, Claude can open apps on your computer, navigate a web browser and fill in spreadsheets, Anthropic said. One prompt Anthropic demonstrated in a video posted Monday is a user running late for a meeting. The user asks Claude to export a pitch deck as a PDF file and attach it to a meeting invite. The video shows Claude carrying out the task. [...] Anthropic cautioned that computer use "is still early compared to Claude's ability to code or interact with text." "Claude can make mistakes, and while we continue to improve our safeguards, threats are constantly evolving," Anthropic warned. The company added that it has built the computer use capability "with safeguards that minimize risk," and that Claude will always request permission before accessing new apps. Users can use Dispatch, a feature Anthropic released last week in Claude Cowork. That lets users have a continuous conversation with Claude from a phone or desktop and assign the agent tasks.

  •  

Remove Complex Backgrounds with Precision: Aiarty Image Matting for Photographers (Exclusive Deal Inside)


Beyond the Pen Tool: A Faster Way to Handle Complex Masking with Aiarty Image Matting (guest post)

We’ve all been there: the shoot was perfect, but now you’re zoomed in at 400%, wrestling with a stray strand of hair that just won’t stay in the selection. It’s the least creative part of photography, yet it’s often where the professional polish happens.

The irony of the current AI boom is that while it’s easier than ever to remove background from photo files with a single click, the results rarely hold up on a high-res monitor. Even when you remove background in Photoshop using the latest Select Subject features, the AI tends to treat edges as a binary choice. It works for a clean product shot, but it falls apart on a bride’s translucent veil or a portrait against a leafy backdrop, leaving that jagged, “cut-out” look.

This is when the distinction between a simple “remover” and true Image Matting becomes critical. What I was really looking for was something that understands the physics of light and transparency – the sub-pixel details that make a subject feel natural in its environment. In testing different tools, I came across Aiarty Image Matting, which stood out in how it handles these “impossible” edges with a level of nuance I haven’t seen in most standard plugins.

It’s worth a look for photographers who frequently deal with complex selections and high-resolution workflow. Now PhotoRumors readers can access an exclusive offer to get Aiarty Image Matting Lifetime License at up to 43% OFF, with benefits including:

  • Use on 1 Windows + 1 Mac, or on 3 Windows or Mac computers
  • Unlimited access to all features
  • Permanent free upgrades and technical support
  • No subscription, no recurring cost

Why Aiarty Image Matting is the Secret to Professional Composites

The term “background removal” is a bit of a misnomer in professional circles. Most tools – from the built-in best background removal app on your phone to standard web filters – simply use a mask to hide pixels. This often results in a “cookie-cutter” effect where the edges look harsh and artificial.

Aiarty Image Matting operates on a different level. It uses dedicated AI models to calculate an “alpha matte,” which essentially determines the exact transparency of every single pixel on the boundary. Instead of a binary “in or out” choice, it understands that a stray hair or a glass edge is partially transparent. If you’ve ever wondered which AI tool is best for background removal for high-end work, the answer lies in how it handles these “soft” edges. Aiarty doesn’t just cut the subject out; it extracts it.

This extraction process achieves a level of sub-pixel precision that identifies details thinner than a single pixel – think individual eyelashes or the fuzz on a woolen sweater. It also solves one of the biggest headaches when you remove background from photo: color decontamination. We’ve all dealt with that annoying color spill, like a green tint on a model’s skin from a forest backdrop. Aiarty’s AI is trained to “clean” these edges, ensuring the subject looks natural when placed in a completely different lighting environment.

For things like steam, smoke, or a translucent bridal veil, the software preserves the true, semi-transparent nature of the material. This is a game-changer for anyone trying to make background transparent without losing the ethereal, airy quality of the original shot. By moving away from simple “erasing” and toward “intelligent extraction,” it finally bridges the gap between a quick social media edit and a gallery-ready composite.
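In code terms, the alpha matte described above is just a per-pixel weight in the standard compositing equation C = alpha*F + (1 - alpha)*B: a binary cutout forces alpha to 0 or 1, while matting keeps fractional values on hair, veils, and glass. A minimal sketch with made-up pixel values (the generic compositing math, not Aiarty's implementation):

```python
import numpy as np

def composite(fg, bg, alpha):
    """Blend foreground over background per pixel: C = alpha*F + (1 - alpha)*B."""
    fg = np.asarray(fg, dtype=float)
    bg = np.asarray(bg, dtype=float)
    a = np.asarray(alpha, dtype=float)[..., None]  # broadcast alpha over RGB channels
    return a * fg + (1.0 - a) * bg

# Three pixels: solid subject, a semi-transparent hair edge, pure background
fg    = [[200, 180, 160], [200, 180, 160], [200, 180, 160]]
bg    = [[ 30,  90,  40], [ 30,  90,  40], [ 30,  90,  40]]
alpha = [1.0, 0.35, 0.0]   # a binary mask would round the 0.35 edge to 0 or 1

print(composite(fg, bg, alpha))
# The middle pixel is a true blend, so the hair edge picks up the new background
```

The fractional middle pixel is exactly the "soft edge" that makes a translucent veil look natural over a replaced background, and that a hard mask destroys.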

Key Features: How Aiarty Streamlines Complex Masking

When you’re looking for the best background removal software, you’re really looking for consistency. You want a tool that doesn’t require you to go back in with a layer mask to fix 20% of the edges. Most standard matting tools rely on simple edge detection that often fails the moment things get slightly out of focus or highly detailed. Aiarty Image Matting differs by using deep-learning models that actually understand the “semantic” structure of a photo – it knows the difference between a strand of hair and a stray digital artifact. Instead of just tracing a line, it reconstructs the edge data based on real-world light and texture.

In my time testing the software, four specific capabilities stood out as legitimate game-changers for a professional workflow:

  • Hair-Level Fidelity: This is the ultimate stress test. Whether it’s a high-fashion portrait with flyaway hair or a wildlife shot of a wolf’s fur, Aiarty’s AI models are trained on millions of real-world edge scenarios. It identifies individual strands that traditional “Select Subject” algorithms usually blur or chop off. If you’ve ever wondered how to remove a white background from image files with fine texture, this level of detail is a massive relief.

  • Complex Transparency Awareness: Most “one-click” apps treat glass, smoke, or veils as solid objects or just erase them. Aiarty actually understands the transparency levels. This means if you have a shot of a bride in a lace veil, the software preserves the semi-transparent layers, allowing the new background to show through naturally. It’s easily the best AI tool for removing backgrounds from delicate, translucent subjects.

  • Seamless Background Replacement: Beyond just cutting things out, the tool makes it remarkably easy to change background of photo assets for creative composites. It handles the edge blending so well that you don’t get that “pasted-on” look. You can drop in a solid color for a clean e-commerce shot or a complex landscape for a fine-art piece, and the lighting and transparency on the edges remain believable.

  • Privacy and Speed via Local Processing: This is a big one for me. Many “best free ai background remover” tools are browser-based, meaning you have to upload your high-res (often sensitive) client work to a cloud server. Aiarty runs entirely on your local GPU. It’s faster, more secure, and allows you to automatically remove background elements from an entire folder of RAW files in a single batch, without hogging your bandwidth.

Instead of just being another best background removal app for casual use, it feels like a specialized instrument designed to handle the 10% of “impossible” masking jobs that usually take up 90% of our editing time.

Aiarty Image Matting Real-World Scenarios

In practice, a tool like this isn’t just about saving a few minutes; it’s about enabling shots that would otherwise be a nightmare to edit. I’ve been testing Aiarty across a few common scenarios where most “best background removal app” contenders usually fail:

  • Portrait & Fashion Photography: We’ve all struggled with how to remove the background from a subject with flyaway hair or fur. Standard AI usually “muddies” the edges. Aiarty preserves individual strands, making the transition to a new background look organic. It’s a lifesaver for high-end beauty retouches where the halo effect is a deal-breaker.

  • Commercial & Still Life: If you’ve ever tried to make background transparent for a glass bottle, a liquid splash, or a watch face, you know the refraction usually gets ruined. This tool actually maintains the transparency of the material, allowing the new environment to show through naturally. It’s much faster than manually painting alpha channels for product composites.

  • High-Volume E-commerce: For those of us who need to remove background elements across a hundred RAW files locally, the batch processing feature is a massive win. You aren’t tethered to a slow cloud upload, and the consistency across the set – keeping the same edge softness – is much higher than manual masking.

By handling the heavy lifting of the selection process, it lets you get back to the creative part: the color grading, the composition, and the storytelling.

Final Thoughts

In an industry that’s increasingly shifting toward subscription-based tools, having a reliable one-time purchase option still feels refreshing—especially for something as time-consuming as precise masking.

For photographers who regularly deal with fine details like hair, transparency, or complex backgrounds, tools like Aiarty Image Matting can make a noticeable difference in both speed and final image quality.

It’s not just about saving time—it’s about getting results that hold up under close inspection.

At the time of writing, PhotoRumors readers can access an exclusive 43% discount on the lifetime license, making it a relatively accessible addition to a professional editing workflow.

The post Remove Complex Backgrounds with Precision: Aiarty Image Matting for Photographers (Exclusive Deal Inside) appeared first on Photo Rumors.

  •  

Review: the 82-hp Renault Twingo E-Tech

Thirty-three years ago, that odd little frog, the Renault Twingo, arrived on the market. Three generations and 4 million units sold later, the French manufacturer is pulling the revival trick again, as it did with the R5 and the R4. Does it have what it takes to win in the electric city-car market? We went to take the wheel on the island of Ibiza to find some answers.

What a look!

Looks will always be a matter of taste first and foremost. But a distinctive design always has that little something extra that gives a car a special aura. The 2026 Twingo could be one of those cars. The reinterpretation of the original model's lines strikes us as quite successful. The dimensions have grown considerably, but the proportions seem even better than those of 1993. Renault wasn't bluffing: the tantalizing concept car said a lot about the model that would reach showrooms. In fact, it looks so modern you could mistake it for a motor-show concept.

But it departs from the original in many ways, and for the better. The Twingo E-Tech gets 5 doors, which makes access much easier, as we'll see. Its pop colors suit this ultra-modern design object very well. And then there are the details. There is, of course, the "Twingo" lettering, which isn't really letters but an alphabet of shapes, found here and there. There are also the little fins on the taillights, like two devil's horns, which according to Renault account on their own for 5 extra kilometers of range. The novelty effect plays a part, of course, but the car clearly turns heads.

Modern equipment

Renault couldn't afford to get the interior wrong either. To go with this neo-retro object, it needed a cabin to match, with enough modernity not to scare off younger buyers and enough throwback touches to charm the nostalgic. The dashboard borrows many elements from today's Renaults, particularly in terms of equipment. The steering wheel comes from existing models, as do most of the controls. A large, connected touchscreen was of course a must, so you can hook up your smartphone via CarPlay or Android Auto. The pop color carries over to much of the dashboard.

You feel quite comfortable in the front seats, and two adults sit reasonably well in the back for a vehicle only 3.79 m long. The extra doors naturally make access easier, with handles concealed in the pillar. Unfortunately, no doubt for technical and cost reasons, the rear windows are not electric; they only pop open. A bit of a shame. As on its ancestor, the rear seats slide, here over 17 centimeters, which lets you vary the trunk capacity from 260 to 360 liters, including 50 under the floor. Incidentally, by folding the front passenger seatback flat, you can carry an object 2 meters long. You also find the Youclip accessory mounts borrowed from Dacia.

Decent range, except at highway speeds

On the powertrain side, we didn't expect Renault to fit the firepower of its big sisters, the R5 and R4. Here you make do with an 82-hp motor with a maximum torque of 175 Nm. That is plenty to enjoy daily driving smoothly, without jolting your passengers with rocket starts that aren't always pleasant in the end, especially in town, where a certain fluidity is also needed. For reasons unknown, Renault declines to publish a 0-100 km/h time and quotes only a 0-50 km/h figure. Note that it easily reaches its top speed of 130 km/h. It doesn't have the verve of an R5, but it's no slouch either.

You could even call it nicely responsive for what it has to do, in town and on the open road. Consumption on the roads of Ibiza held up surprisingly well, at exactly 12.6 kWh per 100 km on our route, which even included a few kilometers of expressway above 100 km/h. That also let us see that at such speeds the figures immediately climb, which suggests shorter legs between charges. Without highway driving, you can therefore count on a range fairly close to the WLTP figure of 263 kilometers.

Surprisingly good road manners

A bit like an entry-level iPhone, you have to accept certain compromises, notably on charging. While the base car tops out at 6.6 kW in AC, an option raises that to 11 kW. And yes, if needed, you can also get DC charging limited to 50 kW. That seems far removed from today's EV standards. Even so, it takes 30 minutes to go from 10% to 80%, an acceptable charging speed for a city dweller who occasionally ventures far from home. You then have to accept stops roughly every 150 kilometers.

Still, this Twingo E-Tech proves very pleasant to drive. It helps that it sits on the R5's very capable platform, albeit shortened. It has also been adapted relative to the R5's, with a different rear axle, which originates from the Renault Captur. The surprising result is comfort slightly better than its big sister's. And a good thing too, because the damping is nonetheless a little firm over cobblestones or speed bumps. Nothing truly off-putting, but those with sensitive vertebrae will find something to complain about. On the ADAS side, there is a cruise control, but it is not semi-autonomous.

Pricing that's hard to beat

Beyond that, we love its road manners, which give it the feel of a dynamic small car, one we'd happily grant a few dozen extra horsepower. What seems certain is that at this price point, and broadly within the category, it settles the question of driving pleasure. On that front, at least in France, it will make life particularly hard for the Chinese brands, and for models built there that don't necessarily wear a Chinese badge, the specialists of this segment. You really feel you're driving a dynamic city car, which isn't necessarily the case with some rivals, sometimes far bigger than it.

An electric car for under 20,000 euros? Renault has delivered on that promise (from €19,490). And thanks in particular to its assembly in Novo Mesto, Slovenia, it qualifies for the French purchase bonus. For buyers eligible for the maximum subsidies, it can be had for 13,750 euros. At that price, it's hard to see why anyone would prefer a Dacia Spring, which, with its unfavorable eco-score, will struggle to compete. In this context, we expect it to quickly join the R5 on the road to success.

The post Essai Renault Twingo E-Tech de 82 ch appeared first on Le Blog Auto.

  •  

Will AI Force Source Code to Evolve - Or Make it Extinct?

Will there be an AI-optimized programming language at the expense of human readability? There have now been experiments with minimizing tokens for "LLM efficiency, without any concern for how it would serve human developers." This new article asks if AI will force source code to evolve — or make it extinct, noting that Stephen Cass, the special projects editor at IEEE Spectrum, has even been asking the ultimate question about our future. "Could we get our AIs to go straight from prompt to an intermediate language that could be fed into the interpreter or compiler of our choice? Do we need high-level languages at all in that future?" Cass acknowledged the obvious downsides. ("True, this would turn programs into inscrutable black boxes, but they could still be divided into modular testable units for sanity and quality checks.") But "instead of trying to read or maintain source code, programmers would just tweak their prompts and generate software afresh." This leads to some mind-boggling hypotheticals, like "What's the role of the programmer in a future without source code?" Cass asked the question and announced "an emergency interactive session" in October to discuss whether AI is signaling the end of distinct programming languages as we know them. In that webinar, Cass said he believes programmers in this future would still suggest interfaces, select algorithms, and make other architecture design choices. And obviously the resulting code would need to pass tests, Cass said, and "has to be able to explain what it's doing." But what kind of abstractions could go away? And then "What happens when we really let AIs off the hook on this?" Cass asked — when we "stop bothering" to have them code in high-level languages. (Since, after all, high-level languages "are a tool for human beings.") "What if we let the machines go directly into creating intermediate code?"
(Cass thinks the machine-language level would be too far down the stack, "because you do want a compile layer too for different architecture....") In this future, the question might become "What if you make fewer mistakes, but they're different mistakes?" Cass said he's keeping an eye out for research papers on designing languages for AI, although he agreed that it's not a "tomorrow" thing — since, after all, we're still digesting "vibe coding" right now. But "I can see this becoming an area of active research." The article also quotes Andrea Griffiths, a senior developer advocate at GitHub and a writer for the newsletter Main Branch, who's seen the attempts at "AI-first" languages, but nothing yet with meaningful adoption. So maybe AI coding agents will just make it easier to use our existing languages — especially typed languages with built-in safety advantages. And Scott Hanselman's podcast recently dubbed Chris Lattner's Mojo "a programming language for an AI world," just in the way it's designed to harness the computing power of today's multi-core chips.
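For a concrete picture of the "intermediate language" Cass is talking about: CPython already compiles every function to a stack-machine bytecode before interpreting it, and that is the kind of layer a hypothetical code-generating AI could target directly, skipping human-readable source. The standard library's dis module makes the layer visible:

```python
import dis

def area(width, height):
    # High-level source: one line a human reads
    return width * height

# The intermediate form the interpreter actually executes
dis.dis(area)

# Opcode names, without the per-version layout details
ops = [ins.opname for ins in dis.Bytecode(area)]
print(ops)  # loads of the two arguments, a binary multiply, a return
```

The exact opcodes vary between Python versions, which hints at Cass's caveat: even an AI that skips source code still wants a compile layer to absorb differences between target architectures.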

  •  

A CNN Producer Explores the 'Magic AI' Workout Mirror

CNN looks at "the Magic AI fitness mirror," a new product "watching you, and giving you feedback automatically," while sometimes playing footage of a recorded personal trainer. Long-time Slashdot reader destinyland describes CNN's video report: CNN says the device "tracks form, counts reps, and corrects technique in real-time — and it doesn't go easy on you." (Although the company's CEO/cofounder, Varun Bhanot, says "we're not trying to completely replace personal trainers. What we are providing is a more accessible alternative.") CNN calls the company "more a computer-vision firm than a fitness company, building the tech for this mirror from the ground up." CEO Bhanot tells CNN he'd hired a personal trainer in his 20s to get fit, but "Going through that journey, I realized how old-fashioned personal training was. Dumbbells were still dumb. There was no data or augmentation for the whole process!" "The AI fitness and wellness market is already huge — and it's growing," CNN adds. "In 2025 the global market was worth $11 billion, according to [market research firm] Insightace Analytic. By 2035, this market is expected to reach just shy of $58 billion. And Magic AI is far from alone. Form, Total, Speediance, and Echelon, to name a few, are all brands vying for a slice of this market." Even the most purely physical of activities — exercising your body — now gets "enhanced" with AI accessories...

  •  

50% of Consumers Prefer Brands That Avoid GenAI Content

Slashdot reader BrianFagioli writes: According to the research firm Gartner, 50% of U.S. consumers say they would prefer to do business with brands that avoid using GenAI in consumer-facing content such as advertising and promotional messaging. The survey of 1,539 Americans, conducted in October 2025, also found growing skepticism about the reliability of online information, with 61% saying they frequently question whether information they use for everyday decisions is trustworthy... Gartner found that 68% of consumers often wonder whether the content they see online is real, while fewer people now rely on intuition alone to judge credibility [only 27%]. Instead, more consumers are actively verifying information and checking sources. Gartner's senior principal analyst suggested discretion for brands trying to use AI: "The brands that win will be the ones that use AI in ways customers can immediately recognize as helpful, while being transparent about when AI is used, what it's doing, and giving customers a clear choice to opt out."

Read more of this story at Slashdot.

  •  

OpenAI Plans Launch of Desktop 'Superapp'

joshuark shares a report from Neowin: OpenAI is planning to combine its Atlas web browser, ChatGPT app, and Codex coding app into a singular desktop "superapp." CEO of Applications, Fidji Simo, said the company was doubling down on its successful products. By taking this move, the AI company aims to streamline the user experience and reduce fragmentation. Simo said in an internal memo: "We realized we were spreading our efforts across too many apps and stacks, and that we need to simplify our efforts. That fragmentation has been slowing us down and making it harder to hit the quality bar we want."

Read more of this story at Slashdot.

  •  

As OpenClaw Enthusiasm Grips China, Kids and Retirees Alike Raise 'Lobsters'

An anonymous reader quotes a report from Reuters: Fan Xinquan, a retired electronics worker in Beijing, has recently started raising a "lobster," hoping that the AI agent he has been training can help organize his specialized industry knowledge better than chatbots like DeepSeek. "OpenClaw can actually help you accomplish many practical things," the 60-year-old said at a recent event hosted by AI startup Zhipu to teach people how to use and train the AI agent, which has gone viral in China, with its various local versions earning the "lobster" nickname. In the past month, OpenClaw, which can connect several hardware and software tools and learn from the data produced with much less human intervention than a chatbot, has captured the imaginations of many in China, from retirees looking for side income to AI firms hoping to generate new revenue streams. [...] Huang Rongsheng, chief architect at Baidu's smart device unit Xiaodu, said at an event on Tuesday that parent group chats for his daughter's primary school class have become overwhelmed by OpenClaw discussions. "My daughter came to me and asked: Dad, I see you raising a lobster every day," he said. "Can I have one too?" Bai Yiyun, another attendee at the Zhipu event, said she hopes to use the agent to start a side hustle during her retirement. "If DeepSeek marked a milestone for open-source large language models, then OpenClaw represents a similar turning point for open-source agents," said Wei Sun, chief AI analyst at Counterpoint Research.

Read more of this story at Slashdot.

  •  

Google Is Trying To Make 'Vibe Design' Happen

With today's latest Stitch updates, Google is trying to make "vibe design" happen, reports The Verge's Jay Peters. The AI-native design platform encourages users to describe goals, feelings, or inspiration in "natural language," rather than starting with traditional blueprints. In a blog post, Google Labs Product Manager Rustin Banks says that Stitch can turn those inputs into interactive prototypes, automatically map user flows, and support real-time iteration. It introduces voice capabilities that allow users to "speak directly to [the] canvas" for feedback or changes. Tools like DESIGN.md also help users create reusable design systems across various projects.

Read more of this story at Slashdot.

  •  

Nvidia Announces Vera Rubin Space-1 Chip System For Orbital AI Data Centers

Nvidia unveiled its Vera Rubin Space-1 system for powering AI workloads in orbital data centers. "Space computing, the final frontier, has arrived," said CEO Jensen Huang. "As we deploy satellite constellations and explore deeper into space, intelligence must live wherever data is generated." CNBC reports: In a press release, the company said that its Vera Rubin Space-1 Module, which includes the IGX Thor and Jetson Orin, will be used on space missions led by multiple companies. The chips are specifically "engineered for size-, weight- and power-constrained environments." Partners include Axiom Space, Starcloud and Planet. Huang said Nvidia is working with partners on a new computer for orbital data centers, but there are still engineering hurdles to overcome. "In space, there's no convection, there's just radiation," Huang said during his GTC keynote, "and so we have to figure out how to cool these systems out in space, but we've got lots of great engineers working on it."

Read more of this story at Slashdot.

  •  

AI Job Loss Research Ignores How AI Is Utterly Destroying the Internet

An anonymous reader quotes a report from 404 Media, written by Jason Koebler: Over the last few months, various academics and AI companies have attempted to predict how artificial intelligence is going to impact the labor market. These studies, including a high-profile paper published by Anthropic earlier this month, largely try to take the things AI is good at, or could be good at, and match them to existing job categories and job tasks. But the papers ignore some of the most impactful and most common uses of AI today: AI porn and AI slop. Anthropic's paper, called "Labor market impacts of AI: A new measure and early evidence," essentially attempts to find 1:1 correlations between tasks that people do today at their jobs and things people are using Claude for. The researchers also try to predict if a job's tasks "are theoretically possible with AI," which resulted in this chart, which has gone somewhat viral and was included in a newsletter by MSNOW's Phillip Bump and threaded about by tech journalist Christopher Mims. (Because everything is terrible, the research is now also feeding into a gambling website where you can see the apparent odds of having your job replaced by AI.) In his thread, Mims makes the case that the "theoretical capability" of AI to do different jobs in different sectors is totally made up, and that this chart basically means nothing. Mims makes a good and fair observation: The nature of the many, many studies that attempt to predict which people are going to lose their jobs to AI are all flawed because the inputs must be guessed, to some degree. But I believe most of these studies are flawed in a deeper way: They do not take into account how people are actually using AI, though Anthropic claims that that is exactly what it is doing. 
"We introduce a new measure of AI displacement risk, observed exposure, that combines theoretical LLM capability and real-world usage data, weighting automated (rather than augmentative) and work-related uses more heavily," the researchers write. This is based in part on the "Anthropic Economic Index," which was introduced in an extremely long paper published in January that tries to catalog all the high-minded uses of AI in specific work-related contexts. These uses include "Complete humanities and social science academic assignments across multiple disciplines," "Draft and revise professional workplace correspondence and business communications," and "Build, debug, and customize web applications and websites." Not included in any of Anthropic's research are extremely popular uses of AI such as "create AI porn" and "create AI slop and spam." These uses are destroying discoverability on the internet and causing cascading societal and economic harms. "Anthropic's research continues a time-honored tradition by AI companies who want to highlight the 'good' uses of AI that show up in their marketing materials while ignoring the world-destroying applications that people actually use it for," argues Koebler. "Meanwhile, as we have repeatedly shown, huge parts of social media websites and Google search results have been overtaken by AI slop. Chatbots themselves have killed traffic to lots of websites that were once able to rely on ad revenue to employ people, so on and so forth..." "This is all to say that these studies about the economic impacts of AI are ignoring a hugely important piece of context: AI is eating and breaking the internet and social media," writes Koebler, in closing. "We are moving from a many-to-many publishing environment that created untold millions of jobs and businesses towards a system where AI tools can easily overwhelm human-created websites, businesses, art, writing, videos, and human activity on the internet. What's happening may be too chaotic, messy, and unpleasant for AI companies to want to reckon with, but to ignore it entirely is malpractice."
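As a rough illustration of the kind of measure the researchers describe — a score combining a theoretical-capability estimate with real-world usage shares, weighting automated and work-related uses more heavily — here is a minimal sketch. The function name, weight values, and usage categories are all hypothetical for illustration; Anthropic's actual formula is not public in this summary.

```python
# Hypothetical sketch of an "observed exposure"-style score.
# All weights and category names are illustrative assumptions,
# not Anthropic's actual methodology.

def observed_exposure(capability: float, usage: dict,
                      automation_weight: float = 2.0,
                      work_weight: float = 1.5) -> float:
    """capability: 0..1 estimate that an LLM can perform the job's tasks.
    usage: observed share of conversations by category, e.g.
    {"automated_work": 0.2, "augmentative_work": 0.3, "personal": 0.5}.
    Automated work-related uses get the heaviest weight, augmentative
    work-related uses an intermediate one, personal uses the baseline."""
    weighted_usage = (
        automation_weight * work_weight * usage.get("automated_work", 0.0)
        + work_weight * usage.get("augmentative_work", 0.0)
        + usage.get("personal", 0.0)
    )
    return capability * weighted_usage

score = observed_exposure(0.8, {"automated_work": 0.2,
                                "augmentative_work": 0.3,
                                "personal": 0.5})
```

The point of such a weighting is that a job whose tasks show up in *automated* AI usage logs scores higher than one where AI merely assists — which is also where Koebler's critique bites, since slop and porn generation never appear as a usage category at all.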

Read more of this story at Slashdot.

  •  