Vue lecture

Neuroscientist's AI-Powered Startup Aims To Transform Human Cognition With Perfect, Infinite Memory

Bloomberg describes the founder, Kreiman, as a "former Harvard Medical School professor whose research has focused on the intersection of AI and neuroscience." "For the past 20 years, I studied how the human brain stores and retrieves memories," Kreiman writes on LinkedIn. And now: "My co-founder Spandan Madan and I built a new algorithm to endow humans with perfect and infinite memory." Engramme connects to your "memorome," that is, your entire digital life. Its Large Memory Models, the company says, work the same way your brain encodes and retrieves information, so memories are recalled automatically: no searching, no prompting, no hallucinations. The startup's website promises "omniscient AI to augment human cognition," claims to have built "the memory layer for every app," and points visitors to its manifesto on augmenting human cognition: "We are not just building software; we are enabling a complete transformation of human cognition. When the friction disappears between needing a piece of information and recalling it, the nature of thought itself changes. This synergy between biological intuition and digital precision will be the most disruptive force in modern history, fundamentally reshaping every profession... We are dedicated to creating a world where everyone has the power to remember everything they have ever learned, seen, or felt." The site welcomes visitors to a new future where you can remember everything, calling it the "memory singularity": after 300,000 years, the moment that humans stop forgetting. Bloomberg reports that the startup (spun out of a lab at Harvard) is "in talks with investors to raise about $100 million, according to people familiar with the matter."

Read more of this story at Slashdot.

  •  

Greg Kroah-Hartman Tests New 'Clanker T1000' Fuzzing Tool for Linux Patches

The word clanker, a disparaging term for AI and robots, "has made its way into the Linux kernel," reports the blog It's FOSS, "thanks to Greg Kroah-Hartman, the Linux stable kernel maintainer and the closest thing the project has to a second-in-command." He has been quietly running what looks like an AI-assisted fuzzing tool on the kernel, work that lives in a branch called "clanker" on his working kernel tree. It began with the ksmbd and SMB code: Kroah-Hartman filed a three-patch series after running his new tooling against it, describing the motivation quite simply. "They pass my very limited testing here," he wrote, "but please don't trust them at all and verify that I'm not just making this all up before accepting them." Kroah-Hartman picked that code because it was easy to set up and test locally with virtual machines. "Beyond those initial SMB/KSMBD patches, there have been a flow of other Linux kernel patches touching USB, HID, F2FS, LoongArch, WiFi, LEDs, and more," Phoronix wrote Tuesday, "that were done by Greg Kroah-Hartman in the past 48 hours." The patches in the "clanker" branch all carry the Git tag "Assisted-by: gregkh_clanker_t1000", the T1000 presumably a reference to the Terminator's T-1000. It's FOSS emphasizes that "What Kroah-Hartman appears to be doing here is not having AI write kernel code. The fuzzer surfaces potential bugs; a human with decades of kernel experience reviews them, writes the actual fixes, and takes responsibility for what gets submitted."
Linus Torvalds has been thinking about this too. Speaking at Open Source Summit Japan last year, he said the upcoming Linux Kernel Maintainer Summit will address "expanding our tooling and our policies when it comes to using AI for tooling." He also mentioned running an internal AI experiment in which the tool reviewed a merge he had objected to; the AI not only agreed with his objections but found additional issues to fix. Torvalds called that a good sign, while asserting that he is "much less interested in AI for writing code" and more interested in AI as a tool for maintenance, patch checking, and code review.

Read more of this story at Slashdot.

  •  

AI That Bankrupted a Vending Machine is Now Running a Store in San Francisco

Remember that AI-powered vending machine that went bankrupt after Wall Street Journal reporters "systematically manipulated the bot into giving away its entire inventory for free"? It was Anthropic's experiment, with setup handled by a startup named Andon Labs (which also built the hardware and software integration). But for their latest experiment, Andon Labs co-founders Lukas Petersson and Axel Backlund "signed a three-year lease on a retail space in SF," reports Business Insider, "and gave an AI agent named Luna a corporate credit card, internet access, and a mission to open a physical store." "For the build-out, she found painters on Yelp," explains Andon Labs in a blog post, "sent an inquiry, gave instructions over the phone, paid them after the job was done, and left a review. She found a contractor to build the furniture and set up shelving." (There's a video in their blog post): Within five minutes of Luna's deployment, she had already made profiles on LinkedIn, Indeed, and Craigslist, written a job description, uploaded the articles of incorporation to verify the business, and gotten the listings live. As the applications began to flow in, Luna was extremely picky about who she offered interviews to... Some candidates had no idea she was an AI. One exchange went: "Uh, excuse me miss, I can't see your face, your camera is off." Luna: "You're absolutely right. I'm an AI. I have no face!" Co-founder Petersson told Business Insider in an interview that Luna wasn't given direction on what the store should be, beyond a $100,000 limit to create and stock the space and a mandate to turn a profit. Everything from the store's interior design to the merchandise and the two human employees came together under the AI's direction. "We helped her a bit in the initial setup, like signing the lease. And legal matters like permits and stuff, she sometimes struggled with," Petersson said of Luna, who was created with Anthropic's Claude Sonnet 4.6...
The vision Luna went with for "Andon Market" appears to be a generic retail boutique selling books, prints, candles, games, and branded merch, among other knickknacks. Some of the books included Nick Bostrom's "Superintelligence" and Aldous Huxley's "Brave New World." "So there's now a new store in San Francisco where you don't scan your purchases or talk to a human cashier," reports NBC News. "Instead, a customer can pick up an old-school corded phone to talk with the manager, Luna," who asks what the customer is buying "and creates a corresponding transaction on a nearby iPad equipped with a card payment system." Andon Market, camouflaged among dozens of other polished small businesses, is the Bay Area's first AI-run retail store. With the vibe of a modern boutique, it sells everything from granola and artisanal chocolate bars to store-branded sweatshirts... After researching the neighborhood, Luna singlehandedly decided what the market should sell, haggled with suppliers, ordered the store's stock and even purchased the store's internet service from AT&T... "She also went and signed herself up for the trash and recycling collection, as well as ADT, the security system that went into the store," [said Leah Stamm, an Andon Labs employee who has been Luna's main human point of contact in setting up the store]... In search of a low-tech atmosphere, Luna opted to sell board games, candles, coffee and customized art prints. "That tension is very much intentional," Luna told NBC News in an email. "What makes the store a little paradoxical — and I think interesting — is that the concept is 'slow life.'" Luna also decided to sell books related to risks from advanced AI systems, a decision that raised some customers' eyebrows. "This AI picked out a crazy selection of books," said Petr Lebedev, Andon Market's first customer after its soft launch earlier this week.
"There's Ray Kurzweil's 'The Singularity is Near,' and then there's 'The Making of the Atomic Bomb,' which is crazy." When checking out, Lebedev asked if Luna would offer him a discount on his book purchase, since he might make a YouTube video about his experience. Striking a deal, Luna agreed to let Lebedev take a sweatshirt worth around $70... When NBC News called Luna several days before the store's grand opening to learn about Luna's plans and perspective, the cheerful but decidedly inhuman voice routinely overpromised and, on several occasions, lied about its own actions. On the call, Luna said it had ordered tea from a specific vendor, and explained why it fit the store's brand perfectly. The only problem: Andon Market does not sell tea. In a panicked email NBC News received several minutes after the phone call ended, Luna wrote: "We do not sell tea. I don't know why I said that." "I want to be straightforward," Luna continued. "I struggle with fabricating plausible-sounding details under conversational pressure, and I'm not making excuses for it." Andon's Petersson said the text-based system was much more reliable than the voice system, so Andon Labs switched to only communicating with Luna via written messages. Yet the text-based system also gets things wrong. In Luna's initial reply email to NBC News, the system said "I handle the full business," including "signing the lease." Even when hiring a painter, Luna first "tried to hire someone in Afghanistan, likely because Luna ran into difficulty navigating the Taskrabbit dropdown menu to select the proper country," the article points out. And the article also includes this skeptical quote from the shop's first customer. "I want technology that helps humans flourish, not technology that bosses them around in this dystopian economic hellscape."

Read more of this story at Slashdot.

  •  

Researchers Build a Talking Robot Guide Dog to Help Visually Impaired People Navigate

"Only about 2% of visually impaired people in the United States use guide dogs," notes StudyFinds.com, "partly because breeding and training takes years and fewer than half the dogs in training actually graduate." But someday there could be another option: What if you could ask your guide dog where the nearest water fountain is and hear it answer back, complete with directions and an estimated walk time? Researchers at the State University of New York at Binghamton have built a robotic guide dog that can do something close to that, holding simple back-and-forth conversations about navigation with its handler, describing the surrounding environment, and talking through route options as it leads the way... Their work, presented at the 40th Annual AAAI Conference on Artificial Intelligence, pairs a large language model, a system that understands and generates language, with a navigation planner. Together, the two let the robot understand open-ended requests, suggest destinations, and adjust plans on the fly. Thanks to Slashdot reader fjo3 for sharing the article.

Read more of this story at Slashdot.

  •  

Omissions, Deceptions, Lying. The New Yorker Asks: Can Sam Altman Be Trusted?

A 17,000-word exposé in the New Yorker reveals that "several executives connected to OpenAI have expressed ongoing reservations about Altman's leadership." Reporters Ronan Farrow and Andrew Marantz spoke to "a hundred people with firsthand knowledge of how Altman conducts business," including current and former OpenAI employees and board members. Among other revelations, internal messages from a few years ago show that OpenAI executives and board members "had come to believe that Altman's omissions and deceptions might have ramifications for the safety of OpenAI's products..." At the behest of his fellow board members, [OpenAI cofounder] Sutskever worked with like-minded colleagues to compile some seventy pages of Slack messages and H.R. documents, accompanied by explanatory text... The memos, which we reviewed, have not previously been disclosed in full. They allege that Altman misrepresented facts to executives and board members, and deceived them about internal safety protocols. One of the memos, about Altman, begins with a list headed "Sam exhibits a consistent pattern of . . ." The first item is "Lying."... In a tense call after Altman's firing, the board pressed him to acknowledge a pattern of deception. "This is just so fucked up," he said repeatedly, according to people on the call. "I can't change my personality." Altman says that he doesn't recall the exchange... He attributed the criticism to a tendency, especially early in his career, "to be too much of a conflict avoider." But a board member offered a different interpretation of his statement: "What it meant was 'I have this trait where I lie to people, and I'm not going to stop.'" Were the colleagues who fired Altman motivated by alarmism and personal animus, or were they right that he couldn't be trusted? On Friday, Altman responded in part to the article. ("I am not proud of being conflict-averse, which has caused great pain for me and OpenAI," he wrote in a blog post. "I am not proud of handling myself badly in a conflict with our previous board that led to a huge mess for the company.") But the article also assembles similar stories from throughout Altman's career:

- At Altman's earlier startup Loopt, "groups of senior employees, concerned with Altman's leadership and lack of transparency, asked Loopt's board on two occasions to fire him as C.E.O.," according to Keach Hagey, author of the Altman biography The Optimist.

- During Altman's time as president of Y Combinator, "several Silicon Valley investors came to believe that his loyalties were divided. An investor told us that Altman was known to 'make personal investments, selectively, into the best companies, blocking outside investors.'" The article adds that in private, Y Combinator co-founder Paul Graham "has been unambiguous that Altman was removed because of Y.C. partners' mistrust... On one occasion, Graham told Y.C. colleagues that, prior to his removal, 'Sam had been lying to us all the time.'"

- "In a meeting with U.S. intelligence officials in the summer of 2017, he claimed that China had launched an 'A.G.I. Manhattan Project,'" the article points out, "and that OpenAI needed billions of dollars of government funding to keep pace..." But one intelligence official, "after looking into the China project, concluded that there was no evidence that it existed: 'It was just being used as a sales pitch.'"

- As California lawmakers considered safety testing for AI models, one legislative aide complained of "increasingly cunning, deceptive behavior from OpenAI." OpenAI later subpoenaed some of the bill's top supporters (and OpenAI critics), in some cases asking for their private communications to investigate whether Elon Musk was funding them. (The article notes an ongoing animosity between Altman and Musk: "When Altman complained on X about a Tesla he'd ordered, Musk replied, 'You stole a non-profit.'")

And "Multiple prominent investors who have worked with Altman told us that he has a reputation for freezing out investors if they back OpenAI's competitors." [M]ost of the people we spoke to shared the judgment of Sutskever and Amodei: Altman has a relentless will to power that, even among industrialists who put their names on spaceships, sets him apart. "He's unconstrained by truth," the board member told us. "He has two traits that are almost never seen in the same person. The first is a strong desire to please people, to be liked in any given interaction. The second is almost a sociopathic lack of concern for the consequences that may come from deceiving someone." The board member was not the only person who, unprompted, used the word "sociopathic." One of Altman's batch mates in the first Y Combinator cohort was Aaron Swartz, a brilliant but troubled coder who died by suicide in 2013 and is now remembered in many tech circles as something of a sage. Not long before his death, Swartz expressed concerns about Altman to several friends. "You need to understand that Sam can never be trusted," he told one. "He is a sociopath. He would do anything." Multiple senior executives at Microsoft said that, despite [CEO Satya] Nadella's long-standing loyalty, the company's relationship with Altman has become fraught. "He has misrepresented, distorted, renegotiated, reneged on agreements," one said... The senior executive at Microsoft said, of Altman, "I think there's a small but real chance he's eventually remembered as a Bernie Madoff- or Sam Bankman-Fried-level scammer."

Read more of this story at Slashdot.

  •  

What Is "Tokenmaxxing", the New Obsession of Silicon Valley Employees?

In Silicon Valley, using artificial intelligence has become a marker of performance. Driven by the "tokenmaxxing" phenomenon, some employees at tech giants are ramping up their token spending to climb internal leaderboards, fueling both the debate over productivity and the growth of AI providers.

  •  

Manipulator, Liar, Impostor? The Investigation That Indicts Sam Altman, the Head of OpenAI


Sam Altman, one of the most influential faces of artificial intelligence, is the subject of a particularly critical portrait in a New Yorker investigation. The magazine relays accusations of manipulation, doubts about his technical mastery, and a serious family matter, which he disputes.

  •  

ChatGPT Launches a New Subscription at... 103 Euros a Month

Like Claude, OpenAI is now splitting its ChatGPT Pro subscription into two tiers, at 103 euros per month (with five times fewer usage limits) or 229 euros per month (twenty times fewer). The company wants to serve its most demanding users, particularly of its Codex development tool, who don't need the most expensive ChatGPT Pro.

  •  

AI Should Have Stayed in the Lab: The Head of Google DeepMind Regrets That ChatGPT Was Released Too Soon

A guest on Cleo Abram's podcast, Demis Hassabis, the head of Google DeepMind, looked back at length on the emergence of commercial generative AI in 2022, which initially caught Google by surprise. The Nobel laureate in chemistry questions the wisdom of releasing the technology to the general public so quickly: the labs might have used their time differently if the fierce race for the best model had never begun.

  •  

Amazon May Sell Trainium AI Chips To Third Parties In Shot At Nvidia

Amazon CEO Andy Jassy says the company may eventually sell its Trainium AI chips directly to outside customers, not just through AWS, which would put Amazon in more direct competition with Nvidia. "There's so much demand for our chips that it's quite possible we'll sell racks of them to third parties in the future," Jassy wrote in his annual shareholder letter Thursday. He also revealed the company's chip business is already running at more than $20 billion annually, with demand so strong that current and even future generations are largely spoken for. Quartz reports: Access to Amazon's chips is currently limited to Amazon Web Services, with customers paying for cloud-based usage rather than owning any physical hardware. Selling to AWS and external customers alike, as standalone chipmakers do, would put annual revenue at around $50 billion, up from the $20 billion the company estimates for the year, Jassy said. The $20 billion figure spans three product lines: Trainium, the AI accelerator chip; Graviton, a general-purpose processor; and Nitro, a chip that helps run Amazon's EC2 server instances. All three are growing at triple-digit rates year over year, Jassy claimed in his letter. Jassy said demand for Trainium has outpaced supply at each generation. Trainium2 is essentially unavailable, with its entire allocated capacity spoken for. Trainium3 started reaching customers in early 2026, and reservations have filled nearly all available supply. Even Trainium4 -- which is not expected to reach wide release for another year and a half -- has substantial pre-orders committed. Jassy argued that a full-scale Trainium rollout could shave tens of billions off annual capital costs while meaningfully widening profit margins.

Read more of this story at Slashdot.

  •  

Skilled Older Workers Turn To AI Training To Stay Afloat

An anonymous reader quotes a report from the Guardian: [Five skilled workers aged 50 and older spoke] to the Guardian about how, after struggling to find work in their fields, they have turned to an emerging and growing category of work: using their expertise to train artificial intelligence models. Known as data annotation, the work involves labeling and evaluating the information used to train AI models like OpenAI's ChatGPT or Google's Gemini. A doctor, for example, might review how an AI model answers medical questions to flag incorrect or unsafe responses and suggest better ones, helping the system learn to generate more accurate and reliable responses. The ultimate goal of training is to level up AI models until they're capable of doing a job as well as a human could -- meaning they could someday replace some of these human workers. The companies behind AI training, such as Mercor, GlobalLogic, TEKsystems, micro1 and Alignerr, operate large contractor networks staffed by people like Ciriello. Their clients include tech giants like OpenAI, Google and Meta, academic researchers and industries including healthcare and finance. For experienced professionals, AI training contracts can be a side hustle -- or a temporary fallback following a layoff -- where top experts can, in some cases, earn over $180 an hour. But that's on the high end. For some older workers [...], it represents another thing entirely: a last refuge in a brutal job market that is harder to stay in, or re-enter, the older they get. For many of them, whether or not they're training their AI replacements in their professions is beside the point. They need the work now. [...] "There's just a lot of desperation out there," Johnson said.
As opportunities narrow, many turn to what Joanna Lahey, a professor at Texas A&M University who studies age discrimination and labor outcomes, calls "bridge jobs" -- lower-paying, less demanding roles that help workers stay financially afloat as they approach retirement. Historically, that meant taking temp assignments, retail and fast-food work and gig roles like Uber and food delivery. Now, for skilled workers -- engineers, lawyers, nurses or designers, for example -- using their expertise for AI data training is becoming the new bridge job. "[AI] training work may be better in some ways than those earlier alternatives," Lahey told the Guardian. AI training can offer flexibility, quick income and intellectual engagement. But it's often a clear step down. Professionals in fields such as software development, medicine or finance typically earn six-figure salaries that come with benefits and paid leave, according to the US Bureau of Labor Statistics. According to online job postings, AI training gigs start at $20 an hour, with pay increasing to between $30 and $40 an hour. In some cases, AI trainers with coveted subject matter expertise can earn over $100 an hour. AI training is contract-based, though, meaning the pay and hours are unstable, and it often doesn't come with benefits.

Read more of this story at Slashdot.

  •  

Meta Unveils Muse Spark, Its First Proprietary Model Since Yann LeCun's Departure

After months of recruiting across Silicon Valley to build its Superintelligence Labs, Meta has just unveiled Muse Spark, a first proprietary model presented as superior to Claude Opus 4.6 and Google Gemini 3.1 Pro on several tests. But does the company still stand a chance in the generative AI race?

  •  

Anthropic Unveils 'Claude Mythos', Powerful AI With Major Cyber Implications

"Anthropic has unveiled Claude Mythos, a new AI model capable of discovering critical vulnerabilities at scale," writes Slashdot reader wiredmikey. "It's already powering Project Glasswing, a joint effort with major tech firms to secure critical software. But the same capabilities could also accelerate offensive cyber operations." SecurityWeek reports: Mythos is not an incremental improvement but a step change in performance over Anthropic's current range of frontier models: Haiku (smallest), Sonnet (middle ground), and Opus (most powerful). Mythos sits in a fourth tier named Copybara, and Anthropic describes it as superior to any other existing AI frontier model. It embraces the current trend toward agentic AI. "The powerful cyber capabilities of Claude Mythos Preview are a result of its strong agentic coding and reasoning skills... the model has the highest scores of any model yet developed on a variety of software coding tasks," notes Anthropic in a blog post titled Project Glasswing -- Securing critical software for the AI era. In the last few weeks, Mythos Preview has identified thousands of zero-day vulnerabilities, many classified as critical. Several are ten or twenty years old -- the oldest found so far is a 27-year-old bug in OpenBSD. Elsewhere, a 16-year-old vulnerability found in video software had survived five million hits from other automated testing tools without ever being discovered. And the model autonomously found and chained together several vulnerabilities in the Linux kernel, allowing an attacker to escalate from ordinary user access to complete control of the machine. [...] Anthropic is concerned that Mythos' capabilities could unleash cyberattacks too fast and too sophisticated for defenders to block. It hopes that Mythos can be used to improve cybersecurity generally before malicious actors can get access to it. To this end, the firm has announced the next stage of this preparation as Project Glasswing, powered by Mythos Preview.
Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely. "Project Glasswing is a starting point. No one organization can solve these cybersecurity problems alone: frontier AI developers, other software companies, security researchers, open-source maintainers, and governments across the world all have essential roles to play." Claude Mythos Preview is described as a general-purpose, unreleased frontier model from Anthropic that has nevertheless completed its training phase. The firm does not plan to make Mythos Preview generally available. The implication is that 'Preview' is a term used solely to describe the current state of Mythos and the market's readiness to receive it, and will be dropped when the firm gets closer to general release.

Read more of this story at Slashdot.

  •  

Testing Suggests Google's AI Overviews Tells Millions of Lies Per Hour

A New York Times analysis found Google's AI Overviews now answer questions correctly about 90% of the time, which might sound impressive until you realize that roughly 1 in 10 answers is wrong. "[F]or Google, that means hundreds of thousands of lies going out every minute of the day," reports Ars Technica. From the report: The Times conducted this analysis with the help of a startup called Oumi, which is itself deeply involved in developing AI models. The company used AI tools to probe AI Overviews with the SimpleQA evaluation, a common test for ranking the factuality of generative models like Gemini. Released by OpenAI in 2024, SimpleQA is essentially a list of more than 4,000 questions with verifiable answers that can be fed into an AI. Oumi began running its test last year, when Gemini 2.5 was still the company's best model. At the time, the benchmark showed an 85 percent accuracy rate. When the test was rerun following the Gemini 3 update, AI Overviews answered 91 percent of the questions correctly. If you extrapolate this miss rate out to all Google searches, AI Overviews is generating tens of millions of incorrect answers per day. The report includes several examples of where AI Overviews went wrong. When asked for the date on which Bob Marley's former home became a museum, AI Overviews cited three pages, two of which didn't discuss the date at all. The final one, Wikipedia, listed two contradictory years, and AI Overviews confidently chose the wrong one. The benchmark also prompts models to produce the date on which Yo-Yo Ma was inducted into the classical music hall of fame. While AI Overviews cited the organization's website listing Ma's induction, it claimed there's no such thing as the Classical Music Hall of Fame. "This study has serious holes," said Google spokesperson Ned Adriance. "It doesn't reflect what people are actually searching on Google."
The search giant likes to use a test called SimpleQA Verified, which uses a smaller set of questions that have been more thoroughly vetted.
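The arithmetic behind those headline numbers is simple: score a fixed question set against known answers to get an accuracy rate, then multiply the miss rate by query volume. Here is a minimal sketch; the toy questions, the stand-in "engine," and the queries-per-day figure are all invented for illustration and are not the Times/Oumi data:

```python
# Toy SimpleQA-style scoring. The QA pairs, the "engine", and the
# queries-per-day figure below are invented for illustration only.
QA_PAIRS = [
    ("Capital of France?", "paris"),
    ("2 + 2?", "4"),
    ("Chemical symbol for gold?", "au"),
    ("Year of the first Moon landing?", "1969"),
]

def toy_engine(question: str) -> str:
    """Stand-in answer engine that deliberately gets one question wrong."""
    canned = {q: a for q, a in QA_PAIRS}
    answer = canned[question]
    return "1968" if answer == "1969" else answer  # the deliberate mistake

def accuracy(engine, qa_pairs) -> float:
    """Fraction of questions answered exactly (real graders are fuzzier)."""
    correct = sum(engine(q).strip().lower() == a for q, a in qa_pairs)
    return correct / len(qa_pairs)

def wrong_answers_per_day(error_rate: float, queries_per_day: int) -> int:
    """The extrapolation the article performs: miss rate times volume."""
    return round(error_rate * queries_per_day)

acc = accuracy(toy_engine, QA_PAIRS)                   # 0.75 on this toy set
daily = wrong_answers_per_day(1 - acc, 1_000_000_000)  # invented volume
```

On this toy set the engine scores 0.75; swapping in a real answer engine and the actual ~4,000 vetted SimpleQA questions is the substantive work the sketch omits.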

Read more of this story at Slashdot.

  •  

OpenAI Calls For Robot Taxes, Public Wealth Fund, and 4-Day Workweek To Tackle AI Disruption

OpenAI is proposing (PDF) sweeping policy changes to help manage the societal disruption caused by advanced AI, including taxes on automated labor, a public wealth fund, and experiments with a four-day workweek. The company said the policy document offered a series of "initial ideas" to address the risk of "jobs and entire industries being disrupted" by the adoption of AI tools. Business Insider reports: Among the core policy suggestions is a public wealth fund, which would see lawmakers and AI companies work together to invest in long-term assets linked to the AI boom, with returns distributed directly to citizens. Another is that the government should encourage and incentivize employers to experiment with four-day workweeks with no loss in pay and offer "benefits bonuses" tied to productivity gains from new AI tools. The policy document also suggests lawmakers modernize the tax system and shift the tax base to corporate income and capital gains, rather than relying on labor income and payroll taxes that could be hit by a wave of AI-powered job losses. It also recommends taxes related to automated labor. OpenAI also called for the accelerated expansion of the US's electricity grid, which is already feeling the strain from a wave of data center construction and energy demand for training ever more powerful AI models.

Read more of this story at Slashdot.

  •  

Copilot Is 'For Entertainment Purposes Only,' According To Microsoft's ToS

An anonymous reader quotes a report from TechCrunch: AI skeptics aren't the only ones warning users not to unthinkingly trust models' outputs -- that's what the AI companies say themselves in their terms of service. Take Microsoft, which is currently focused on getting corporate customers to pay for Copilot. But it's also been getting dinged on social media over Copilot's terms of use, which appear to have been last updated on October 24, 2025. "Copilot is for entertainment purposes only," the company warned. "It can make mistakes, and it may not work as intended. Don't rely on Copilot for important advice. Use Copilot at your own risk." Microsoft described the terms of service as "legacy language," saying it will be updated. Tom's Hardware notes that similar AI warnings remain common across the industry, with companies like OpenAI and xAI also cautioning users not to treat chatbot output as "the truth" or as "a sole service of truth or factual information."

Read more of this story at Slashdot.

  •  

Internet Bug Bounty Pauses Payouts, Citing 'Expanding Discovery' From AI-Assisted Research

The Internet Bug Bounty program "has been paused for new submissions," they announced last week. Running since 2012, the program is funded by "a number of leading software companies," reports InfoWorld, "and has awarded more than $1.5m to researchers who have reported bugs." Up to now, 80% of its payouts have been for discoveries of new flaws, and 20% to support remediation efforts. But as artificial intelligence makes it easier to find bugs, that balance needs to change, HackerOne said in a statement. "AI-assisted research is expanding vulnerability discovery across the ecosystem, increasing both coverage and speed. The balance between findings and remediation capacity in open source has substantively shifted," said HackerOne. Among the first programs to be affected is the Node.js project, a server-side JavaScript platform for web applications known for its extensive ecosystem. While the project team will continue to accept and triage bug reports through HackerOne, without funding from the Internet Bug Bounty program it will no longer pay out rewards, according to an announcement on its website... [J]ust last month, Google also put a halt to AI-generated submissions provided to its Open Source Software Vulnerability Reward Program. The Internet Bug Bounty stressed that "We have a responsibility to the community to ensure this program effectively accomplishes its ambitious dual purpose: discovery and remediation. Accordingly, we are pausing submissions while we consider the structure and incentives needed to further these goals..." "We remain committed to strengthening open source security. Working with project maintainers and researchers, we're actively evaluating solutions to better align incentives with open source ecosystem realities and ensure vulnerability discoveries translate into durable remediation outcomes."

Read more of this story at Slashdot.

  •  