Is the Altruistic OpenAI Gone?

"The altruistic OpenAI is gone, if it ever existed," argues a new article in the Atlantic, based on interviews with more than 90 current and former employees, including executives. It notes that shortly before Altman's ouster (and rehiring) he was "seemingly trying to circumvent safety processes for expediency," with OpenAI co-founder/chief scientist Ilya telling three board members "I don't think Sam is the guy who should have the finger on the button for AGI." (The board had already discovered Altman "had not been forthcoming with them about a range of issues" including a breach in the Deployment Safety Board's protocols.) Adapted from the upcoming book, Empire of AI, the article first revisits the summer of 2023, when Sutskever ("the brain behind the large language models that helped build ChatGPT") met with a group of new researchers: Sutskever had long believed that artificial general intelligence, or AGI, was inevitable — now, as things accelerated in the generative-AI industry, he believed AGI's arrival was imminent, according to Geoff Hinton, an AI pioneer who was his Ph.D. adviser and mentor, and another person familiar with Sutskever's thinking.... To people around him, Sutskever seemed consumed by thoughts of this impending civilizational transformation. What would the world look like when a supreme AGI emerged and surpassed humanity? And what responsibility did OpenAI have to ensure an end state of extraordinary prosperity, not extraordinary suffering? By then, Sutskever, who had previously dedicated most of his time to advancing AI capabilities, had started to focus half of his time on AI safety. He appeared to people around him as both boomer and doomer: more excited and afraid than ever before of what was to come. That day, during the meeting with the new researchers, he laid out a plan. "Once we all get into the bunker — " he began, according to a researcher who was present. "I'm sorry," the researcher interrupted, "the bunker?" "We're definitely going to build a bunker before we release AGI," Sutskever replied. Such a powerful technology would surely become an object of intense desire for governments globally. The core scientists working on the technology would need to be protected. "Of course," he added, "it's going to be optional whether you want to get into the bunker." Two other sources I spoke with confirmed that Sutskever commonly mentioned such a bunker. "There is a group of people — Ilya being one of them — who believe that building AGI will bring about a rapture," the researcher told me. "Literally, a rapture...." But by the middle of 2023 — around the time he began speaking more regularly about the idea of a bunker — Sutskever was no longer just preoccupied by the possible cataclysmic shifts of AGI and superintelligence, according to sources familiar with his thinking. He was consumed by another anxiety: the erosion of his faith that OpenAI could even keep up its technical advancements to reach AGI, or bear that responsibility with Altman as its leader. Sutskever felt Altman's pattern of behavior was undermining the two pillars of OpenAI's mission, the sources said: It was slowing down research progress and eroding any chance at making sound AI-safety decisions. "For a brief moment, OpenAI's future was an open question. It might have taken a path away from aggressive commercialization and Altman. But this is not what happened," the article concludes. Instead there was "a lack of clarity from the board about their reasons for firing Altman." 
There was fear about a failure to realize their potential (and some employees feared losing a chance to sell millions of dollars' worth of their equity). "Faced with the possibility of OpenAI falling apart, Sutskever's resolve immediately started to crack... He began to plead with his fellow board members to reconsider their position on Altman." And in the end "Altman would come back; there was no other way to save OpenAI." To me, the drama highlighted one of the most urgent questions of our generation: How do we govern artificial intelligence? With AI on track to rewire a great many other crucial functions in society, that question is really asking: How do we ensure that we'll make our future better, not worse? The events of November 2023 illustrated in the clearest terms just how much a power struggle among a tiny handful of Silicon Valley elites is currently shaping the future of this technology. And the scorecard of this centralized approach to AI development is deeply troubling. OpenAI today has become everything that it said it would not be.... The author believes OpenAI "has grown ever more secretive, not only cutting off access to its own research but shifting norms across the industry to no longer share meaningful technical details about AI models..." "At the same time, more and more doubts have risen about the true economic value of generative AI, including a growing body of studies that have shown that the technology is not translating into productivity gains for most workers, while it's also eroding their critical thinking."

Read more of this story at Slashdot.

  •  

Walmart Prepares for a Future Where AI Shops for Consumers

Walmart is preparing for a future where AI agents shop on behalf of consumers by adapting its systems to serve both humans and autonomous bots. As major players like Visa and PayPal also invest in agentic commerce, Walmart is positioning itself as a leader by developing its own AI agents and supporting broader industry integration. PYMNTS reports: Instead of scrolling through ads or comparing product reviews, future consumers may rely on digital assistants, like OpenAI's Operator, to manage their shopping lists, from replenishing household essentials to selecting the best TV based on personal preferences, according to the report (paywalled). "It will be different," Walmart U.S. Chief Technology Officer Hari Vasudev said, per the report. "Advertising will have to evolve." The emergence of AI-generated summaries in search results has already altered the way consumers gather product information, the report said. However, autonomous shopping agents represent a bigger transformation. These bots could not only find products but also finalize purchases, including payments, without the user ever lifting a finger. [...] Retail experts say agentic commerce will require companies to overhaul how they market and present their products online, the WSJ report said. They may need to redesign product pages and pricing strategies to cater to algorithmic buyers. The customer relationship could shift away from retailers if purchases are completed through third-party agents. [...] To prepare, Walmart is developing its own AI shopping agents, accessible through its website and app, according to the WSJ report. These bots can already handle basic tasks like reordering groceries, and they're being trained to respond to broader prompts, such as planning a themed birthday party. Walmart is working toward a future in which outside agents can seamlessly communicate with the retailer's own systems -- something Vasudev told the WSJ he expects to be governed by industry-wide protocols that are still under development. [...] Third-party shopping bots may also act independently, crawling retailers' websites much like consumers browse stores without engaging sales associates, the WSJ report said. In those cases, the retailer has little control over how its products are evaluated. Whether consumers instruct their AI to shop specifically at Walmart or ask for the best deal available, the outcomes will increasingly be shaped by algorithms, per the report. Operator, for example, considers search ranking, sponsored content and user preferences when making recommendations. That's a far cry from how humans shop. Bots don't respond to eye-catching visuals or emotionally driven branding in the same way people do. This means retailers must optimize their content not just for people but for machine readers as well, the report said. Pricing strategies could also shift as companies may need to make rapid pricing decisions and determine whether it's worth offering AI agents exclusive discounts to keep them from choosing a competitor's lower-priced item, according to the report.
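
The report's point that agents weigh machine-readable signals such as search ranking, sponsored placement, and stated preferences, rather than visuals or branding, can be made concrete with a small sketch. The field names, weights, and scoring below are hypothetical illustrations only; this is not how Operator or Walmart's agents actually rank products.

```python
# Hypothetical sketch of how a shopping agent might score listings on
# machine-readable signals. Fields and weights are invented for illustration;
# this is not Operator's or any retailer's actual ranking logic.
from dataclasses import dataclass

@dataclass
class Listing:
    price: float
    search_rank: int          # position in the retailer's results (1 = top)
    sponsored: bool
    preference_match: float   # 0..1 fit with the shopper's stated preferences

def score(item: Listing, budget: float) -> float:
    """Higher is better: reward preference fit and rank, penalize price."""
    price_term = max(0.0, 1.0 - item.price / budget)
    rank_term = 1.0 / item.search_rank
    sponsor_term = 0.1 if item.sponsored else 0.0
    return 0.5 * item.preference_match + 0.3 * price_term + 0.15 * rank_term + sponsor_term

# An agent asked for "the best TV under $500" would simply pick the top score.
best = max(
    [Listing(449.0, 2, False, 0.9), Listing(399.0, 1, True, 0.7)],
    key=lambda item: score(item, budget=500.0),
)
```

In a setup like this, product-page content only matters insofar as it feeds the structured fields the agent scores, which is the shift retailers are being told to optimize for.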

Read more of this story at Slashdot.

  •  

MIT Asks arXiv To Take Down Preprint Paper On AI and Scientific Discovery

MIT has formally requested the withdrawal of a preprint paper on AI and scientific discovery due to serious concerns about the integrity and validity of its data and findings. It didn't provide specific details on what it believes is wrong with the paper. From a post: "Earlier this year, the COD [MIT's Committee on Discipline] conducted a confidential internal review based upon allegations it received regarding certain aspects of this paper. While student privacy laws and MIT policy prohibit the disclosure of the outcome of this review, we are writing to inform you that MIT has no confidence in the provenance, reliability or validity of the data and has no confidence in the veracity of the research contained in the paper. Based upon this finding, we also believe that the inclusion of this paper in arXiv may violate arXiv's Code of Conduct. "Our understanding is that only authors of papers appearing on arXiv can submit withdrawal requests. We have directed the author to submit such a request, but to date, the author has not done so. Therefore, in an effort to clarify the research record, MIT respectfully requests that the paper be marked as withdrawn from arXiv as soon as possible." Preprints, by definition, have not yet undergone peer review. MIT took this step in light of the publication's prominence in the research conversation and because it was a formal step it could take to mitigate the effects of misconduct. The author is no longer at MIT. [...] "We are making this information public because we are concerned that, even in its non-published form, the paper is having an impact on discussions and projections about the effects of AI on science. Ensuring an accurate research record is important to MIT. We therefore would like to set the record straight and share our view that at this point the findings reported in this paper should not be relied on in academic or public discussions of these topics." The paper in question, titled "Artificial Intelligence, Scientific Discovery, and Product Innovation" and authored by Aidan Toner-Rodgers, investigated the effects of introducing an AI-driven materials discovery tool to 1,018 scientists in a U.S. R&D lab. The study reported that AI-assisted researchers discovered 44% more materials, filed 39% more patents, and achieved a 17% increase in product innovation. These gains were primarily attributed to AI automating 57% of idea-generation tasks, allowing top-performing scientists to focus on evaluating AI-generated suggestions effectively. However, the benefits were unevenly distributed; lower-performing scientists saw minimal improvements, and 82% of participants reported decreased job satisfaction due to reduced creativity and skill utilization. The Wall Street Journal reported on MIT's statement.

Read more of this story at Slashdot.

  •  

OpenAI Launches Codex, an AI Coding Agent, In ChatGPT

OpenAI has launched Codex, a powerful AI coding agent in ChatGPT that autonomously handles tasks like writing features, fixing bugs, and testing code in a cloud-based environment. TechCrunch reports: Codex is powered by codex-1, a version of the company's o3 AI reasoning model optimized for software engineering tasks. OpenAI says codex-1 produces "cleaner" code than o3, adheres more precisely to instructions, and will iteratively run tests on its code until passing results are achieved. The Codex agent runs in a sandboxed virtual computer in the cloud. By connecting with GitHub, Codex's environment can come preloaded with your code repositories. OpenAI says the AI coding agent will take anywhere from one to 30 minutes to write simple features, fix bugs, answer questions about your codebase, and run tests, among other tasks. Codex can handle multiple software engineering tasks simultaneously, says OpenAI, and running it doesn't prevent users from accessing their computer and browser. Codex is rolling out starting today to ChatGPT Pro, Enterprise, and Team subscribers. OpenAI says users will have "generous access" to Codex to start, but in the coming weeks, the company will implement rate limits for the tool. Users will then have the option to purchase additional credits to use Codex, an OpenAI spokesperson tells TechCrunch. OpenAI plans to expand Codex access to ChatGPT Plus and Edu users soon.
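
The "iteratively run tests until passing results are achieved" behavior can be pictured as a simple loop. This is a minimal sketch under the assumption of a pytest-based repository already checked out in the sandbox; `propose_patch` is a hypothetical stand-in for the model call and is not a real OpenAI API.

```python
# Minimal sketch of an "iterate until the tests pass" loop like the one the
# article attributes to codex-1. `propose_patch` is a hypothetical stand-in
# for the model call; this is not OpenAI's implementation or API.
import subprocess
from typing import Callable

def fix_until_green(task: str,
                    propose_patch: Callable[[str, str], None],
                    max_attempts: int = 5) -> bool:
    """Ask the model for a patch, rerun the sandboxed test suite, repeat."""
    test_output = ""
    for _ in range(max_attempts):
        propose_patch(task, test_output)              # model edits the checked-out repo
        result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
        if result.returncode == 0:
            return True                               # tests pass; stop iterating
        test_output = result.stdout + result.stderr   # feed failures back to the model
    return False
```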

Read more of this story at Slashdot.

  •  

US, UAE Unveil Plan For New 5GW AI Campus In Abu Dhabi

An anonymous reader quotes a report from Patently Apple: It's being reported in the Gulf region that a new 5GW UAE-US AI Campus in Abu Dhabi was unveiled on Thursday at Qasr Al Watan in the presence of UAE President His Highness Sheikh Mohamed bin Zayed Al Nahyan and US President Donald Trump, who is on a state visit to the UAE. The new AI campus -- the largest of its kind outside the United States -- will host US hyperscalers and large enterprises, enabling them to leverage regional compute resources with the capability to serve the Global South. The UAE-US AI Campus will feature 5GW of capacity for AI data centers in Abu Dhabi, offering a regional platform through which US hyperscalers can provide low-latency services to nearly half of the global population. Upon completion, the facility will utilize nuclear, solar, and gas power to minimize carbon emissions. It will also house a science park focused on advancing innovation in artificial intelligence. The campus will be built by G42 and operated in partnership with several US companies including NVIDIA, OpenAI, SoftBank, Cisco and others. The initiative is part of the newly established US-UAE AI Acceleration Partnership, a bilateral framework designed to deepen collaboration on artificial intelligence and advanced technologies. The UAE and US will jointly regulate access to the compute resources, which are reserved for US hyperscalers and approved cloud service providers. An official press release from the White House can be found here.

Read more of this story at Slashdot.

  •  

Anthropic's Lawyer Forced To Apologize After Claude Hallucinated Legal Citation

An anonymous reader quotes a report from TechCrunch: A lawyer representing Anthropic admitted to using an erroneous citation created by the company's Claude AI chatbot in its ongoing legal battle with music publishers, according to a filing made in a Northern California court on Thursday. Claude hallucinated the citation with "an inaccurate title and inaccurate authors," Anthropic says in the filing, first reported by Bloomberg. Anthropic's lawyers explain that their "manual citation check" did not catch it, nor several other errors that were caused by Claude's hallucinations. Anthropic apologized for the error and called it "an honest citation mistake and not a fabrication of authority." Earlier this week, lawyers representing Universal Music Group and other music publishers accused Anthropic's expert witness -- one of the company's employees, Olivia Chen -- of using Claude to cite fake articles in her testimony. Federal judge Susan van Keulen then ordered Anthropic to respond to these allegations. Last week, a California judge slammed a pair of law firms for the undisclosed use of AI after he received a supplemental brief with "numerous false, inaccurate, and misleading legal citations and quotations." The judge imposed $31,000 in sanctions against the law firms and said "no reasonably competent attorney should out-source research and writing" to AI.

Read more of this story at Slashdot.

  •  

Meta Delays 'Behemoth' AI Model Release

According to the Wall Street Journal (paywalled), Meta is delaying the release of its largest Llama 4 AI model, known as "Behemoth," over concerns that it may not be enough of an advance on previous models. "It's another indicator that the AI industry's scaling strategy -- 'just make everything bigger' -- could be hitting a wall," notes Axios. From the report: The Journal says that Behemoth is now expected to be released in the fall or even later. It was originally scheduled to coincide with Meta's LlamaCon event last month, then postponed until June. It's also possible the company could speed up a more limited Behemoth release.

Read more of this story at Slashdot.

  •  

ChatGPT Diminishes Idea Diversity in Brainstorming, Study Finds

A new study published in Nature Human Behaviour reveals that ChatGPT diminishes the diversity of ideas generated during brainstorming sessions. Researchers from the University of Pennsylvania's Wharton School found [PDF] that while generative AI tools may enhance individual creativity, they simultaneously reduce the collective diversity of novel content. The investigation responds to previous research that examined ChatGPT's impact on creativity. Their findings align with separate research published in Science Advances suggesting AI-generated content tends toward homogeneity. This phenomenon mirrors what researchers call the "fourth grade slump in creativity," referencing earlier studies on how structured approaches can limit innovative thinking.

Read more of this story at Slashdot.

  •  

Klarna Pivots Back To Humans After AI Experiment Fails

Fintech startup Klarna is now recruiting humans after its AI customer service agents underperformed. The buy-now-pay-later company, which eliminated its marketing contracts in 2023 and customer service team in 2024, now plans an "Uber-type setup" with remote gig workers. This marks a stark reversal from CEO Sebastian Siemiatkowski's 2024 claim that "AI can already do all of the jobs that we, as humans, do." Siemiatkowski told Bloomberg: "From a brand perspective, I just think it's so critical that you are clear to your customer that there will be always a human if you want." He added that "cost unfortunately seems to have been a too predominant evaluation factor" leading to "lower quality."

Read more of this story at Slashdot.

  •  

Google DeepMind Creates Super-Advanced AI That Can Invent New Algorithms

An anonymous reader quotes a report from Ars Technica: Google's DeepMind research division claims its newest AI agent marks a significant step toward using the technology to tackle big problems in math and science. The system, known as AlphaEvolve, is based on the company's Gemini large language models (LLMs), with the addition of an "evolutionary" approach that evaluates and improves algorithms across a range of use cases. AlphaEvolve is essentially an AI coding agent, but it goes deeper than a standard Gemini chatbot. When you talk to Gemini, there is always a risk of hallucination, where the AI makes up details due to the non-deterministic nature of the underlying technology. AlphaEvolve uses an interesting approach to increase its accuracy when handling complex algorithmic problems. According to DeepMind, this AI uses an automatic evaluation system. When a researcher interacts with AlphaEvolve, they input a problem along with possible solutions and avenues to explore. The model generates multiple possible solutions, using the efficient Gemini Flash and the more detail-oriented Gemini Pro, and then each solution is analyzed by the evaluator. An evolutionary framework allows AlphaEvolve to focus on the best solution and improve upon it. Many of the company's past AI systems, for example, the protein-folding AlphaFold, were trained extensively on a single domain of knowledge. AlphaEvolve, however, is more dynamic. DeepMind says AlphaEvolve is a general-purpose AI that can aid research in any programming or algorithmic problem. And Google has already started to deploy it across its sprawling business with positive results. DeepMind's AlphaEvolve AI has optimized Google's Borg cluster scheduler, reducing global computing resource usage by 0.7% -- a significant cost saving at Google's scale. It also outperformed specialized AI like AlphaTensor by discovering a more efficient algorithm for multiplying complex-valued matrices. Additionally, AlphaEvolve proposed hardware-level optimizations for Google's next-gen Tensor chips. The AI remains too complex for public release but that may change in the future as it gets integrated into smaller research tools.
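
The generate-evaluate-select loop described above can be sketched in a few lines. This is a rough illustration of the general evolutionary pattern, assuming the caller supplies the LLM-backed mutation step and an automatic evaluator; it is not DeepMind's actual AlphaEvolve implementation.

```python
# Rough sketch of a generate-evaluate-select loop in the spirit of the
# article's description. `mutate` (an LLM asked to improve a candidate) and
# `evaluate` (the automatic evaluator) are supplied by the caller; nothing
# here is DeepMind's actual code.
import random
from typing import Callable, List

def evolve(seeds: List[str],
           mutate: Callable[[str], str],
           evaluate: Callable[[str], float],
           generations: int = 10,
           population_size: int = 8) -> str:
    """Keep the best-scoring candidates and ask the model to improve on them."""
    population = list(seeds)
    for _ in range(generations):
        parents = sorted(population, key=evaluate, reverse=True)[:max(1, population_size // 2)]
        children = [mutate(random.choice(parents)) for _ in range(population_size)]
        # The evaluator, not the model, decides which candidates survive.
        population = sorted(parents + children, key=evaluate, reverse=True)[:population_size]
    return max(population, key=evaluate)
```

The automatic evaluator is what keeps hallucinated "improvements" from surviving: a candidate only propagates if it actually scores better on the metric the researcher supplied.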

Read more of this story at Slashdot.

  •  

Palantir CEO Slams Europe's AI Ambitions

Palantir CEO Alex Karp criticized Europe's AI adoption while praising Saudi Arabia's engineering talent at Tuesday's Saudi-US Investment Forum in Riyadh. "It's like people have given up," Karp said of Europe, while commending Saudi engineers for their "meritocracy and patriotism" and "deep tradition in engineering excellence."

Read more of this story at Slashdot.

  •  

Asking Chatbots For Short Answers Can Increase Hallucinations, Study Finds

Requesting concise answers from AI chatbots significantly increases their tendency to hallucinate, according to new research from Paris-based AI testing company Giskard. The study found that leading models -- including OpenAI's GPT-4o, Mistral Large, and Anthropic's Claude 3.7 Sonnet -- sacrifice factual accuracy when instructed to keep responses short. "When forced to keep it short, models consistently choose brevity over accuracy," Giskard researchers noted, explaining that models lack sufficient "space" to acknowledge false premises and offer proper rebuttals. Even seemingly innocuous prompts like "be concise" can undermine a model's ability to debunk misinformation.
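
The kind of comparison behind this finding can be sketched as a small A/B harness: the same questions asked with and without a brevity instruction, then graded against references. `ask_model` and `is_correct` below are hypothetical placeholders provided by the caller, not Giskard's actual benchmark code.

```python
# Hypothetical harness for comparing accuracy with and without a brevity
# instruction, in the spirit of the Giskard finding. `ask_model` and
# `is_correct` are placeholders the caller provides; no real API is assumed.
from typing import Callable, List, Tuple

def brevity_effect(questions: List[Tuple[str, str]],
                   ask_model: Callable[[str, str], str],
                   is_correct: Callable[[str, str], bool]) -> Tuple[float, float]:
    """Return (accuracy_normal, accuracy_concise) over (question, reference) pairs."""
    normal_hits = concise_hits = 0
    for question, reference in questions:
        normal_hits += is_correct(ask_model("Answer carefully.", question), reference)
        concise_hits += is_correct(ask_model("Be concise.", question), reference)
    n = len(questions)
    return normal_hits / n, concise_hits / n
```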

Read more of this story at Slashdot.

  •  

Google Launches New Initiative To Back Startups Building AI

Google has launched the AI Futures Fund, a new initiative to invest in AI startups that are building with the latest tools from Google DeepMind. TechCrunch reports: The fund will back startups from seed to late stage and will offer varying degrees of support, including allowing founders to have early access to Google AI models from DeepMind, the ability to work with Google experts from DeepMind and Google Labs, and Google Cloud credits. Some startups will also have the opportunity to receive direct investment from Google. "The AI Futures Fund doesn't follow a batch or cohort model," a Google spokesperson told TechCrunch. "Instead, we consider opportunities on a rolling basis -- there's no fixed application window or deadline. When we come across companies that align with the fund's thesis, we may choose to invest. We're not announcing a specific fund size at this time, and check sizes vary based on the company's stage and needs -- typically early to mid-stage, with flexibility for later-stage opportunities as well." Startups can apply here.

Read more of this story at Slashdot.

  •  

New Pope Chose His Name Based On AI's Threats To 'Human Dignity'

An anonymous reader quotes a report from Ars Technica: Last Thursday, white smoke emerged from a chimney at the Sistine Chapel, signaling that cardinals had elected a new pope. That's a rare event in itself, but one of the many unprecedented aspects of the election of Chicago-born Robert Prevost as Pope Leo XIV is one of the main reasons he chose his papal name: artificial intelligence. On Saturday, the new pope gave his first address to the College of Cardinals, explaining his name choice as a continuation of Pope Francis' concerns about technological transformation. "Sensing myself called to continue in this same path, I chose to take the name Leo XIV," he said during the address. "There are different reasons for this, but mainly because Pope Leo XIII in his historic Encyclical Rerum Novarum addressed the social question in the context of the first great industrial revolution." In his address, Leo XIV explicitly described "artificial intelligence" developments as "another industrial revolution," positioning himself to address this technological shift as his namesake had done over a century ago. Coming from the head of an ancient religious organization that spans millennia, the pope's talk about AI creates a somewhat head-spinning juxtaposition, but Leo XIV isn't the first pope to focus on defending human dignity in the age of AI. Pope Francis, who died in April, first established AI as a Vatican priority, as we reported in August 2023 when he warned during his 2023 World Day of Peace message that AI should not allow "violence and discrimination to take root." In January of this year, Francis further elaborated on his warnings about AI with reference to a "shadow of evil" that potentially looms over the field in a document called "Antiqua et Nova" (meaning "the old and the new"). "Like any product of human creativity, AI can be directed toward positive or negative ends," Francis said in January. "When used in ways that respect human dignity and promote the well-being of individuals and communities, it can contribute positively to the human vocation. Yet, as in all areas where humans are called to make decisions, the shadow of evil also looms here. Where human freedom allows for the possibility of choosing what is wrong, the moral evaluation of this technology will need to take into account how it is directed and used." [...] Just as mechanization disrupted traditional labor in the 1890s, artificial intelligence now potentially threatens employment patterns and human dignity in ways that Pope Leo XIV believes demand similar moral leadership from the church. "In our own day," Leo XIV concluded in his formal address on Saturday, "the Church offers to everyone the treasury of her social teaching in response to another industrial revolution and to developments in the field of artificial intelligence that pose new challenges for the defense of human dignity, justice, and labor."

Read more of this story at Slashdot.

  •  

Chegg To Lay Off 22% of Workforce as AI Tools Shake Up Edtech Industry

Chegg said on Monday it would lay off about 22% of its workforce, or 248 employees, to cut costs and streamline its operations as students increasingly turn to AI-powered tools such as ChatGPT over traditional edtech platforms. From a report: The company, an online education firm that offers textbook rentals, homework help and tutoring, has been grappling with a decline in web traffic for months and warned that the trend would likely worsen before improving. Google's expansion of AI Overviews is keeping web traffic confined within its search ecosystem while gradually shifting searches to its Gemini AI platform, Chegg said, adding that other AI companies including OpenAI and Anthropic were courting academics with free access to subscriptions. As part of the restructuring announced on Monday, Chegg will also shut its U.S. and Canada offices by the end of the year and aim to reduce its marketing, product development efforts and general and administrative expenses.

Read more of this story at Slashdot.
