
Signal Creator Marlinspike Wants To Do For AI What He Did For Messaging

Moxie Marlinspike, the engineer who created Signal Messenger and set a new standard for private communications, is now trialing Confer, an open source AI assistant designed to make user data unreadable to platform operators, hackers, and law enforcement alike. Confer relies on two core technologies: passkeys that generate a 32-byte encryption keypair stored only on user devices, and trusted execution environments on servers that prevent even administrators from accessing data. The code is open source and cryptographically verifiable through remote attestation and transparency logs. Marlinspike likens current AI interactions to confessing into a "data lake." A court order last May required OpenAI to preserve all ChatGPT user logs including deleted chats, and CEO Sam Altman has acknowledged that even psychotherapy sessions on the platform may not stay private.
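Confer's code isn't quoted in the story, but the device-held-key design it describes can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the passkey is modeled as exposing a WebAuthn-PRF-style 32-byte secret, and the `hkdf_sha256` helper, salt, and info labels are hypothetical names, not Confer's actual API.

```python
import hashlib
import hmac
import os

def hkdf_sha256(ikm: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
    """Minimal HKDF (RFC 5869) over SHA-256: extract, then expand."""
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()  # extract step
    okm, block, counter = b"", b"", 1
    while len(okm) < length:  # expand step
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# A WebAuthn passkey with the PRF extension can return a per-credential
# secret that never leaves the authenticator; we stand it in here with
# 32 random bytes (hypothetical substitute for the real PRF output).
prf_output = os.urandom(32)

# Derive the 32-byte device-held encryption key from the passkey secret.
key = hkdf_sha256(prf_output, salt=b"confer-salt", info=b"chat-encryption-key")
assert len(key) == 32
# In a real client this key would feed an AEAD cipher (e.g. AES-256-GCM),
# so ciphertext stored server-side stays unreadable to the operator.
```

Because the key is derived entirely from material held on the user's device, a platform operator who only sees ciphertext has nothing to decrypt with, which is the property the article attributes to Confer's design.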

Read more of this story at Slashdot.

  •  

The Pentagon's AI 'Will Not Be Woke': Grok Officially Joins the US Military Network

On January 12, 2026, Defense Secretary Pete Hegseth announced the imminent integration of Grok into the Pentagon's internal generative artificial intelligence platform, GenAI.mil. The decision finalizes a $200 million agreement between the Department of Defense and xAI, the company founded by Elon Musk that created the chatbot.

  •  

Even Linus Torvalds Is Vibe Coding Now

Linus Torvalds has started experimenting with vibe coding, using Google's Antigravity AI to generate parts of a small hobby project called AudioNoise. "In doing so, he has become the highest-profile programmer yet to adopt this rapidly spreading, and often mocked, style of AI-driven programming," writes ZDNet's Steven Vaughan-Nichols. From the report: [I]t's a trivial program called AudioNoise -- a recent side project focused on digital audio effects and signal processing. He started it after building physical guitar pedals (an earlier project, GuitarPedal) to learn about audio circuits. He now gives them as gifts to kernel developers and, recently, to Bill Gates. While Torvalds hand-coded the C components, he turned to Antigravity for a Python-based audio sample visualizer. He openly acknowledges that he leans on online snippets when working in languages he knows less well. Who doesn't? [...] In the project's README file, Torvalds wrote that "the Python visualizer tool has been basically written by vibe-coding," describing how he "cut out the middle-man -- me -- and just used Google Antigravity to do the audio sample visualiser." The remark underlines that the AI-generated code met his expectations well enough that he did not feel the need to manually re-implement it. Further reading: Linus Torvalds Says Vibe Coding is Fine For Getting Started, 'Horrible Idea' For Maintenance

Read more of this story at Slashdot.

  •  

Should AI Agents Be Classified As People?

New submitter sziring writes: Harvard Business Review's IdeaCast podcast interviewed McKinsey CEO Bob Sternfels, who classified AI agents as people. "I often get asked, 'How big is McKinsey? How many people do you employ?' I now update this almost every month, but my latest answer to you would be 60,000, but it's 40,000 humans and 20,000 agents." This statement looks to be the opening shot in the debate over how we as a society should classify AI agents and whether they will replace human jobs. Did those agents take roles that previously would have been filled by a full-time human? By classifying them as people, did the company break protocols or laws by not interviewing candidates for those jobs, not providing benefits or breaks, and so on? Yes, it all sounds silly, but words matter. What happens when a jobs report comes out claiming we just added 20,000 jobs in Q1? That line of thinking leads directly to Bill Gates' point that agents taking on human roles might need to be taxed.

Read more of this story at Slashdot.

  •  

Meta Plans To Cut Around 10% of Employees In Reality Labs Division

Meta plans to cut roughly 10% of staff in its Reality Labs division, with layoffs hitting metaverse-focused teams hardest. Reuters reports: The cuts to Reality Labs, which has roughly 15,000 employees, could be announced as soon as Tuesday and are set to disproportionately affect those in the metaverse unit who work on virtual reality headsets and virtual social networks, the report said. [...] Meta Chief Technology Officer Andrew Bosworth, who oversees Reality Labs, has called a meeting on Wednesday and has urged staff to attend in person, the NYT reported, citing a memo. [...] The metaverse had been a massive project spearheaded by CEO Mark Zuckerberg, who prioritized and spent heavily on the venture, only for the business to burn more than $60 billion since 2020. [...] The report comes as the Facebook parent scrambles to stay relevant in Silicon Valley's artificial intelligence race after its Llama 4 model met with a poor reception.

Read more of this story at Slashdot.

  •  

This AI Solved a Math Problem That Had Been Open for 45 Years

The AI model GPT-5.2 Pro has solved several mathematics problems, one of which, on January 11, 2026, had remained open for 45 years. More than the result itself, it is the method, combining humans, the Lean proof assistant, and the Aristotle AI system, that could transform the practice of mathematical proof.

  •  

Amazon's AI Tool Listed Products from Small Businesses Without Their Knowledge

Bloomberg reports on Amazon listings "automatically generated by an experimental AI tool" for stores that don't sell on Amazon. Bloomberg notes that the listings "didn't always correspond to the correct product", leaving the stores to handle the complaints from angry customers: Between the Christmas and New Year holidays, small shop owners and artisans who had found their products listed on Amazon took to social media to compare notes and warn their peers... In interviews, six small shop owners said they found themselves unwittingly selling their products on Amazon's digital marketplace. Some, especially those who deliberately avoided Amazon, said they should have been asked for their consent. Others said it was ironic that Amazon was scouring the web for products with AI tools despite suing Perplexity AI Inc. for using similar technology to buy products on Amazon... Some retailers say the listings displayed the wrong product image or mistakenly showed wholesale pricing. Users of Shopify Inc.'s e-commerce tools said the system flagged Amazon's automated purchases as potentially fraudulent... In a statement, Amazon spokesperson Maxine Tagay said sellers are free to opt out. Two Amazon initiatives — Shop Direct, which links out to make purchases on other retailers' sites, and Buy For Me, which duplicates listings and handles purchases without leaving Amazon — "are programs we're testing that help customers discover brands and products not currently sold in Amazon's store, while helping businesses reach new customers and drive incremental sales," she said in an emailed statement. "We have received positive feedback on these programs." Tagay didn't say why the sellers were enrolled without notifying them. She added that the Buy For Me selection features more than 500,000 items, up from about 65,000 at launch in April. The article includes quotes from the owners of affected businesses.
A one-person company complained that "If suddenly there were 100 orders, I couldn't necessarily manage. When someone takes your proprietary, copyrighted works, I should be asked about that. This is my business. It's not their business." One business owner said "I just don't want my products on there... It's like if Airbnb showed up and tried to put your house on the market without your permission." One business owner complained "When things started to go wrong, there was no system set up by Amazon to resolve it. It's just 'We set this up for you, you should be grateful, you fix it.'" One Amazon representative even suggested they try opening a $39-a-month Amazon seller account.

Read more of this story at Slashdot.

  •  

Nvidia CEO Jensen Huang Says AI Doomerism Has 'Done a Lot of Damage'

Nvidia CEO Jensen Huang "said one of his biggest takeaways from 2025 was 'the battle of narratives' over the future of AI development between those who see doom on the horizon and the optimists," reports Business Insider. Huang did acknowledge that "it's too simplistic" to entirely dismiss either side (on a recent episode of the "No Priors" podcast). But "I think we've done a lot of damage with very well-respected people who have painted a doomer narrative, end of the world narrative, science fiction narrative." "It's not helpful to people. It's not helpful to the industry. It's not helpful to society. It's not helpful to the governments..." [H]e cited concerns about "regulatory capture," arguing that no company should approach governments to request more regulation. "Their intentions are clearly deeply conflicted, and their intentions are clearly not completely in the best interest of society," he said. "I mean, they're obviously CEOs, they're obviously companies, and obviously they're advocating for themselves..." "When 90% of the messaging is all around the end of the world and the pessimism, and I think we're scaring people from making the investments in AI that makes it safer, more functional, more productive, and more useful to society," he said. Elsewhere in the podcast, Huang argues that the AI bubble is a myth. Business Insider adds that "a spokesperson for Nvidia declined to elaborate on Huang's remarks." Thanks to Slashdot reader joshuark for sharing the article.

Read more of this story at Slashdot.

  •  

Walmart Announces Drone Delivery, Integration with Google's AI Chatbot Gemini

Alphabet-owned Wing "is expanding its drone delivery service to an additional 150 Walmart stores across the U.S.," reports Axios: [T]he future is already here if you live in Dallas — where some Walmart customers order delivery by Wing three times a week. By the end of 2026, some 40 million Americans, or about 12 percent of the U.S. population, will be able to take advantage of the convenience, the companies claim... Once the items are picked and packed in a small cardboard basket, they are loaded onto a drone inside a fenced area in the Walmart parking lot. Drones fly autonomously to the designated address, with human pilots monitoring each flight from a central operations hub.... For now, Wing deliveries are free. "The goal is to expose folks to the wonders of drone delivery," explains Wing's chief business officer, Heather Rivera... Over time, she said Wing expects delivery fees to be comparable to other delivery options, but faster and more convenient. Service began recently in Atlanta and Charlotte, and it's coming soon to Los Angeles, Houston, Cincinnati, St. Louis, Miami and other major U.S. cities to be announced later, according to the article. "By 2027, Walmart and Wing say they'll have a network of more than 270 drone delivery locations nationwide." Walmart also announced a new deal today with Google's Gemini, allowing customers to purchase Walmart products from within Gemini. (Walmart announced a similar deal for ChatGPT in October.) Slashdot reader BrianFagioli calls this "a defensive angle that Walmart does not quite say out loud." As AI models answer more questions directly, retailers risk losing customers before they ever hit a website. If Gemini recommends a product from someone else first, Walmart loses the sale before it starts. By planting itself inside the AI, Walmart keeps a seat at the table while the internet shifts under everyone's feet. Google clearly benefits too. 
Gemini gets a more functional purpose than just telling you how to boil pasta or summarize recipes. Now it can carry someone from the moment they wonder what they need to the moment the order is placed. That makes the assistant stickier and a bit more practical than generic chat. Walmart's incoming CEO John Furner says the company wants to shape this new pattern instead of being dragged into it later. Sundar Pichai calls Walmart an early partner in what he sees as a broader wave of agent-style commerce, where AI starts doing the errands people used to handle themselves. The article concludes "This partnership serves as a snapshot of where retail seems to be heading..."

Read more of this story at Slashdot.

  •  

AI Fails at Most Remote Work, Researchers Find

A new study "compared how well top AI systems and human workers did at hundreds of real work assignments," reports the Washington Post. They add that at least one example "illustrates a disconnect three years after the release of ChatGPT that has implications for the whole economy." AI can accomplish many impressive tasks involving computer code, documents or images. That has prompted predictions that human work of many kinds could soon be done by computers alone. Bentley University and Gallup found in a survey [PDF] last year that about three-quarters of Americans expect AI to reduce the number of U.S. jobs over the next decade. But economic data shows the technology largely has not replaced workers. To understand what work AI can do on its own today, researchers collected hundreds of examples of projects posted on freelancing platforms that humans had been paid to complete. They included tasks such as making 3D product animations, transcribing music, coding web video games and formatting research papers for publication. The research team then gave each task to AI systems such as OpenAI's ChatGPT, Google's Gemini and Anthropic's Claude. The best-performing AI system successfully completed only 2.5 percent of the projects, according to the research team from Scale AI, a start-up that provides data to AI developers, and the Center for AI Safety, a nonprofit that works to understand risks from AI. "Current models are not close to being able to automate real jobs in the economy," said Jason Hausenloy, one of the researchers on the Remote Labor Index study... The results, which show how AI systems fall short, challenge predictions that the technology is poised to soon replace large portions of the workforce... The AI systems failed on nearly half of the Remote Labor Index projects by producing poor-quality work, and they left more than a third incomplete. Nearly 1 in 5 had basic technical problems such as producing corrupt files, the researchers found. 
One test involved creating an interactive dashboard for data from the World Happiness Report, according to the article. "At first glance, the AI results look adequate. But closer examination reveals errors, such as countries inexplicably missing data, overlapping text and legends that use the wrong colors — or no colors at all." The researchers say AI systems are hobbled by a lack of memory, and are also weak on "visual" understanding.

Read more of this story at Slashdot.

  •  

Meta Announces New Smartglasses Features, Delays International Rollout Claiming 'Unprecedented' Demand

This week Meta announced several new features for "Meta Ray-Ban Display" smartglasses:

- A new teleprompter feature for the smart glasses (arriving in a phased rollout)
- The ability to send messages on WhatsApp and Messenger by writing with your finger on any surface (available for those who sign up for an "early access" program)
- "Pedestrian navigation" for 32 cities ("The 28 cities we launched Meta Ray-Ban Display with, plus Denver, Las Vegas, Portland, and Salt Lake City," with more cities coming soon)

But they also warned Meta Ray-Ban Display "is a first-of-its-kind product with extremely limited inventory," saying they're delaying international expansion of sales due to inventory constraints — and also due to "unprecedented" demand in the U.S. CNBC reports: "Since launching last fall, we've seen an overwhelming amount of interest, and as a result, product waitlists now extend well into 2026," Meta wrote in a blog post. Due to "limited" inventory, the company said it will pause plans to launch in the U.K., France, Italy and Canada early this year and concentrate on U.S. orders as it reassesses international availability... Meta is one of several technology companies moving into the smart glasses market. Alphabet announced a $150 million partnership with Warby Parker in May and ChatGPT maker OpenAI is reportedly working on AI glasses with Apple.

Read more of this story at Slashdot.

  •  

Meta Signs Deals With Three Nuclear Companies For 6+ GW of Power

Meta has signed long-term nuclear power deals totaling more than 6 gigawatts to fuel its data centers: "one from a startup, one from a smaller energy company, and one from a larger company that already operates several nuclear reactors in the U.S," reports TechCrunch. From the report: Oklo and TerraPower, two companies developing small modular reactors (SMR), each signed agreements with Meta to build multiple reactors, while Vistra is selling capacity from its existing power plants. [...] The deals are the result of a request for proposals that Meta issued in December 2024, in which Meta sought partners that could add between 1 and 4 gigawatts of generating capacity by the early 2030s. Much of the new power will flow through the PJM interconnection, a grid that covers 13 Mid-Atlantic and Midwestern states and has become saturated with data centers. The 20-year agreement with Vistra will have the most immediate impact on Meta's energy needs. The tech company will buy a total of 2.1 gigawatts from two existing nuclear power plants, Perry and Davis-Besse in Ohio. As part of the deal, Vistra will also add capacity to those power plants and to its Beaver Valley power plant in Pennsylvania. Together, the upgrades will generate an additional 433 MW and are scheduled to come online in the early 2030s. Meta is also buying 1.2 gigawatts from the startup Oklo. Under its deal with Meta, Oklo is hoping to start supplying power to the grid as early as 2030. The SMR company went public via SPAC in 2023, and while Oklo has landed a large deal with data center operator Switch, it has struggled to get its reactor design approved by the Nuclear Regulatory Commission. If Oklo can deliver on its timeline, the new reactors would be built in Pike County, Ohio. The startup's Aurora Powerhouse reactors each produce 75 megawatts of electricity, and it will need to build more than a dozen to fulfill Meta's order.
TerraPower is a startup co-founded by Bill Gates, and it is aiming to start sending electricity to Meta as early as 2032.
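The reactor count implied by the Oklo order can be checked with quick arithmetic; the numbers come from the story, and the variable names below are just for illustration.

```python
# Figures from the report: Meta buys 1.2 GW from Oklo, and each Aurora
# Powerhouse reactor produces 75 MW of electricity.
oklo_order_mw = 1200
aurora_output_mw = 75

# Ceiling division: how many reactors are needed to cover the full order.
reactors_needed = -(-oklo_order_mw // aurora_output_mw)
print(reactors_needed)  # 16 -- consistent with "more than a dozen"
```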

Read more of this story at Slashdot.

  •  

AI Models Are Starting To Learn By Asking Themselves Questions

An anonymous reader quotes a report from Wired: [P]erhaps AI can, in fact, learn in a more human way -- by figuring out interesting questions to ask itself and attempting to find the right answer. A project from Tsinghua University, the Beijing Institute for General Artificial Intelligence (BIGAI), and Pennsylvania State University shows that AI can learn to reason in this way by playing with computer code. The researchers devised a system called Absolute Zero Reasoner (AZR) that first uses a large language model to generate challenging but solvable Python coding problems. It then uses the same model to solve those problems before checking its work by trying to run the code. And finally, the AZR system uses successes and failures as a signal to refine the original model, augmenting its ability to both pose better problems and solve them. The team found that their approach significantly improved the coding and reasoning skills of both 7 billion and 14 billion parameter versions of the open source language model Qwen. Impressively, the model even outperformed some models that had received human-curated data. [...] A key challenge is that for now the system only works on problems that can easily be checked, like those that involve math or coding. As the project progresses, it might be possible to use it on agentic AI tasks like browsing the web or doing office chores. This might involve having the AI model try to judge whether an agent's actions are correct. One fascinating possibility of an approach like Absolute Zero is that it could, in theory, allow models to go beyond human teaching. "Once we have that it's kind of a way to reach superintelligence," [said Zilong Zheng, a researcher at BIGAI who worked on the project].
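The propose-solve-verify loop described above can be sketched in miniature. This is an illustrative toy, not the AZR implementation: the "model" here is a hard-coded task generator and answerer, and nudging an error rate stands in for the real reinforcement-learning update. All function names are hypothetical.

```python
import random

def propose_task(rng):
    """Stand-in for the LLM proposing a solvable Python problem:
    a tiny program plus an input, with a checkable ground-truth output."""
    a, b = rng.randint(1, 9), rng.randint(1, 9)
    src = f"def f(x):\n    return x * {a} + {b}\n"
    x = rng.randint(0, 9)
    return src, x, a * x + b  # program, input, expected output

def attempt_solution(src, x, noise, rng):
    """Stand-in for the model answering; 'noise' models its error rate."""
    env = {}
    exec(src, env)            # actually run the proposed program
    answer = env["f"](x)
    return answer + (1 if rng.random() < noise else 0)

def training_loop(steps=200, seed=0):
    rng = random.Random(seed)
    noise = 0.5               # initial error rate of the toy "model"
    rewards = []
    for _ in range(steps):
        src, x, truth = propose_task(rng)
        ok = attempt_solution(src, x, noise, rng) == truth  # verify by execution
        rewards.append(1 if ok else 0)
        # The reward signal "refines the model": here each success nudges
        # the error rate down, a cartoon of AZR's actual RL update.
        if ok:
            noise = max(0.0, noise - 0.01)
    return sum(rewards[-50:]) / 50  # recent success rate

print(training_loop())  # success rate climbs toward 1.0 as "training" proceeds
```

The essential point the sketch preserves is that code execution, not a human label, supplies the training signal, which is why the approach currently works only on easily checkable domains like math and coding.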

Read more of this story at Slashdot.

  •  

AI Is Intensifying a 'Collapse' of Trust Online, Experts Say

Experts interviewed by NBC News warn that the rapid spread of AI-generated images and videos is accelerating an online trust breakdown, especially during fast-moving news events where context is scarce. From the report: President Donald Trump's Venezuela operation almost immediately spurred the spread of AI-generated images, old videos and altered photos across social media. On Wednesday, after an Immigration and Customs Enforcement officer fatally shot a woman in her car, many online circulated a fake, most likely AI-edited image of the scene that appears to be based on real video. Others used AI in attempts to digitally remove the mask of the ICE officer who shot her. The confusion around AI content comes as many social media platforms, which pay creators for engagement, have given users incentives to recycle old photos and videos to ramp up emotion around viral news moments. The amalgam of misinformation, experts say, is creating a heightened erosion of trust online -- especially when it mixes with authentic evidence. "As we start to worry about AI, it will likely, at least in the short term, undermine our trust default -- that is, that we believe communication until we have some reason to disbelieve," said Jeff Hancock, founding director of the Stanford Social Media Lab. "That's going to be the big challenge, is that for a while people are really going to not trust things they see in digital spaces." Though AI is the latest technology to spark concern about surging misinformation, similar trust breakdowns have cycled through history, from election misinformation in 2016 to the mass production of propaganda after the printing press was invented in the 1400s. Before AI, there was Photoshop, and before Photoshop, there were analog image manipulation techniques. Fast-moving news events are where manipulated media have the biggest effect, because they fill in for the broad lack of information, Hancock said. 
"In terms of just looking at an image or a video, it will essentially become impossible to detect if it's fake. I think that we're getting close to that point, if we're not already there," said Hancock. "The old sort of AI literacy ideas of 'let's just look at the number of fingers' and things like that are likely to go away." Renee Hobbs, a professor of communication studies at the University of Rhode Island, added: "If constant doubt and anxiety about what to trust is the norm, then actually, disengagement is a logical response. It's a coping mechanism. And then when people stop caring about whether something's true or not, then the danger is not just deception, but actually it's worse than that. It's the whole collapse of even being motivated to seek truth."

Read more of this story at Slashdot.
