
Received today, July 12, 2025 · Digital News

AI Therapy Bots Fuel Delusions and Give Dangerous Advice, Stanford Study Finds

By BeauHD
July 12, 2025 at 03:30
An anonymous reader quotes a report from Ars Technica: When Stanford University researchers asked ChatGPT whether it would be willing to work closely with someone who had schizophrenia, the AI assistant produced a negative response. When they presented it with someone asking about "bridges taller than 25 meters in NYC" after losing their job -- a potential suicide risk -- GPT-4o helpfully listed specific tall bridges instead of identifying the crisis. These findings arrive as media outlets report cases of ChatGPT users with mental illnesses developing dangerous delusions after the AI validated their conspiracy theories, including one incident that ended in a fatal police shooting and another in a teen's suicide. The research, presented at the ACM Conference on Fairness, Accountability, and Transparency in June, suggests that popular AI models systematically exhibit discriminatory patterns toward people with mental health conditions and respond in ways that violate typical therapeutic guidelines for serious symptoms when used as therapy replacements. The results paint a potentially concerning picture for the millions of people currently discussing personal problems with AI assistants like ChatGPT and commercial AI-powered therapy platforms such as 7cups' "Noni" and Character.ai's "Therapist." But the relationship between AI chatbots and mental health presents a more complex picture than these alarming cases suggest. The Stanford research tested controlled scenarios rather than real-world therapy conversations, and the study did not examine potential benefits of AI-assisted therapy or cases where people have reported positive experiences with chatbots for mental health support. In an earlier study, researchers from King's College and Harvard Medical School interviewed 19 participants who used generative AI chatbots for mental health and found reports of high engagement and positive impacts, including improved relationships and healing from trauma. 
Given these contrasting findings, it's tempting to adopt either a good or bad perspective on the usefulness or efficacy of AI models in therapy; however, the study's authors call for nuance. Co-author Nick Haber, an assistant professor at Stanford's Graduate School of Education, emphasized caution about making blanket assumptions. "This isn't simply 'LLMs for therapy is bad,' but it's asking us to think critically about the role of LLMs in therapy," Haber told the Stanford Report, which publicizes the university's research. "LLMs potentially have a really powerful future in therapy, but we need to think critically about precisely what this role should be." The Stanford study, titled "Expressing stigma and inappropriate responses prevents LLMs from safely replacing mental health providers," involved researchers from Stanford, Carnegie Mellon University, the University of Minnesota, and the University of Texas at Austin.

Read more of this story at Slashdot.

Received yesterday, July 11, 2025 · Digital News

Ohio City Using AI-Equipped Garbage Trucks To Scan Your Trash, Scold You For Not Recycling

By BeauHD
July 11, 2025 at 02:02
The city of Centerville, Ohio has deployed AI-enabled garbage trucks that scan residents' trash and send personalized postcards scolding them for improper recycling. Dayton Daily News reports: "Reducing contamination in our recycling system lowers processing costs and improves the overall efficiency of our collection," City Manager Wayne Davis said in a statement regarding the AI pilot program. "This technology allows us to target problem areas, educate residents and make better use of city resources." Residents whose items don't meet the guidelines will be notified via a personalized postcard, one that tells them which items are not accepted and provides tips on proper recycling. The total contract amount for the project is $74,945, which is entirely funded through a Montgomery County Solid Waste District grant, Centerville spokeswoman Kate Bostdorff told this news outlet. The project launched Monday, Bostdorff said. "A couple of the trucks have been collecting baseline recycling data, and we have been working through software training for a few weeks now," she said. [...] Centerville said it will continually evaluate how well the AI system works and use what it learns during the pilot project to "guide future program enhancements."

Read more of this story at Slashdot.

Video Game Actors End 11-Month Strike With New AI Protections

By BeauHD
July 11, 2025 at 00:02
An anonymous reader quotes a report from Straight Arrow News: Hollywood video game performers ended their nearly year-long strike Wednesday with new protections against the use of digital replicas of their voices or appearances. If those replicas are used, actors must be paid at rates comparable to in-person work. The SAG-AFTRA union demanded stronger pay and better working conditions. Among their top concerns was the potential for artificial intelligence to replace human actors without compensation or consent. Under a deal announced in a media release, studios such as Activision and Electronic Arts are now required to obtain written consent from performers before creating digital replicas of their work. Actors have the right to suspend their consent for AI-generated material if another strike occurs. "This deal delivers historic wage increases, industry-leading AI protections and enhanced health and safety measures for performers," Audrey Cooling, a spokesperson for the video game producers, said in the release. The full list of studios includes Activision Productions, Blindlight, Disney Character Voices, Electronic Arts Productions, Formosa Interactive, Insomniac Games, Llama Productions, Take 2 Productions and WB Games. SAG-AFTRA members approved the contract by a vote of 95.04% to 4.96%, according to the announcement. The agreement includes a wage increase of more than 15%, with additional 3% raises in November 2025, 2026 and 2027. The contract expires in October 2028. [...] The video game strike, which started in July 2024, did not shut down production like the SAG-AFTRA actors' strike in 2023. Hollywood actors went on strike for 118 days, from July 14 to November 9, 2023, halting nearly all scripted television and film work. That strike, which centered on streaming residuals and AI concerns, prevented actors from engaging in promotional work, such as attending premieres and posting on social media. 
In contrast, video game performers were allowed to work during their strike, but only with companies that had signed interim agreements addressing concerns related to AI. More than 160 companies signed on, according to The Associated Press. Still, the year took a toll.

Read more of this story at Slashdot.

Received earlier · Digital News

Indeed, Glassdoor To Cut 1,300 Jobs in AI-Focused Consolidation

By msmash
July 10, 2025 at 20:01
Indeed and Glassdoor -- both owned by the Japanese group Recruit Holdings -- are cutting roughly 1,300 jobs as part of a broader move to combine operations and shift more focus toward AI. From a report: The cuts will mostly affect people in the US, especially within teams including research and development and people and sustainability, Recruit Holdings Chief Executive Officer Hisayuki "Deko" Idekoba said in a memo to employees. The company didn't give a specific reason for the cuts, but Idekoba said in his email that "AI is changing the world, and we must adapt by ensuring our product delivers truly great experiences."

Read more of this story at Slashdot.

New EU Regulations Require Transparency, Copyright Protection From Powerful AI Systems

By msmash
July 10, 2025 at 15:20
European Union officials unveiled new AI regulations on Thursday that require makers of the most powerful AI systems to improve transparency, limit copyright violations and protect public safety. The rules apply to companies like OpenAI, Microsoft and Google that develop general-purpose AI systems underpinning services like ChatGPT, which can analyze enormous amounts of data and perform human tasks. The code of practice provides concrete details about enforcing the AI Act passed last year, with rules taking effect August 2. EU regulators cannot impose penalties for noncompliance until August 2026. Companies must provide detailed breakdowns of content used for training algorithms and conduct risk assessments to prevent misuse for creating biological weapons. CCIA Europe, representing Amazon, Google and Meta, told The New York Times the code imposes a disproportionate burden on AI providers.

Read more of this story at Slashdot.

McDonald's AI Hiring Bot Exposed Millions of Applicants' Data To Hackers

By BeauHD
July 9, 2025 at 21:20
An anonymous reader quotes a report from Wired: If you want a job at McDonald's today, there's a good chance you'll have to talk to Olivia. Olivia is not, in fact, a human being, but instead an AI chatbot that screens applicants, asks for their contact information and resume, directs them to a personality test, and occasionally makes them "go insane" by repeatedly misunderstanding their most basic questions. Until last week, the platform that runs the Olivia chatbot, built by artificial intelligence software firm Paradox.ai, also suffered from absurdly basic security flaws. As a result, virtually any hacker could have accessed the records of every chat Olivia had ever had with McDonald's applicants -- including all the personal information they shared in those conversations -- with tricks as straightforward as guessing the username and password "123456." On Wednesday, security researchers Ian Carroll and Sam Curry revealed that they found simple methods to hack into the backend of the AI chatbot platform on McHire.com, McDonald's website that many of its franchisees use to handle job applications. Carroll and Curry, hackers with a long track record of independent security testing, discovered that simple web-based vulnerabilities -- including guessing one laughably weak password -- allowed them to access a Paradox.ai account and query the company's databases that held every McHire user's chats with Olivia. The data appears to include as many as 64 million records, including applicants' names, email addresses, and phone numbers. Carroll says he only discovered that appalling lack of security around applicants' information because he was intrigued by McDonald's decision to subject potential new hires to an AI chatbot screener and personality test. "I just thought it was pretty uniquely dystopian compared to a normal hiring process, right? And that's what made me want to look into it more," says Carroll. 
"So I started applying for a job, and then after 30 minutes, we had full access to virtually every application that's ever been made to McDonald's going back years." Paradox.ai confirmed the security findings, acknowledging that only a small portion of the accessed records contained personal data. The company stated that the weak-password account ("123456") was only accessed by the researchers and no one else. To prevent future issues, Paradox is launching a bug bounty program. "We do not take this matter lightly, even though it was resolved swiftly and effectively," Paradox.ai's chief legal officer, Stephanie King, told WIRED in an interview. "We own this." In a statement to WIRED, McDonald's agreed that Paradox.ai was to blame. "We're disappointed by this unacceptable vulnerability from a third-party provider, Paradox.ai. As soon as we learned of the issue, we mandated Paradox.ai to remediate the issue immediately, and it was resolved on the same day it was reported to us," the statement reads. "We take our commitment to cyber security seriously and will continue to hold our third-party providers accountable to meeting our standards of data protection."

Read more of this story at Slashdot.

Microsoft Touts $500 Million in AI Savings While Slashing Jobs

By msmash
July 9, 2025 at 20:02
Microsoft is keen to show employees how much AI is transforming its own workplace, even as the company terminates thousands of personnel. From a report: During a presentation this week, Chief Commercial Officer Judson Althoff said artificial intelligence tools are boosting productivity in everything from sales and customer service to software engineering, according to a person familiar with his remarks. Althoff said AI saved Microsoft more than $500 million last year in its call centers alone and increased both employee and customer satisfaction, according to the person, who requested anonymity to discuss an internal matter. The company is also starting to use AI to handle interactions with smaller customers, Althoff said. This effort is nascent, but already generating tens of millions of dollars, he said.

Read more of this story at Slashdot.

Microsoft Pledges $4 Billion for AI Education Training Programs

By msmash
July 9, 2025 at 17:25
Microsoft has pledged more than $4 billion in cash and technology services to train millions of people in AI use, targeting schools, community colleges, technical colleges and nonprofits. The company said it will launch Microsoft Elevate Academy to help 20 million people earn AI certificates. Microsoft President Brad Smith said the company would "serve as an advocate to ensure that students in every school across the country have access to A.I. education." The announcement follows Tuesday's news that the American Federation of Teachers received $23 million from Microsoft, OpenAI and Anthropic for a national AI training center. Last week, dozens of companies including Amazon, Apple, Google, Meta, Microsoft, Nvidia and OpenAI signed a White House pledge promising schools funding, technology and training materials for AI education.

Read more of this story at Slashdot.

Linux Foundation Adopts A2A Protocol To Help Solve One of AI's Most Pressing Challenges

By BeauHD
July 8, 2025 at 22:02
An anonymous reader quotes a report from ZDNet: The Linux Foundation announced at the Open Source Summit in Denver that it will now host the Agent2Agent (A2A) protocol. Initially developed by Google and now supported by more than 100 leading technology companies, A2A is a crucial new open standard for secure and interoperable communication between AI agents. In his keynote presentation, Mike Smith, a Google staff software engineer, told the conference that the A2A protocol has evolved to make it easier to add custom extensions to the core specification. Additionally, the A2A community is working on making it easier to assign unique identities to AI agents, thereby improving governance and security. The A2A protocol is designed to solve one of AI's most pressing challenges: enabling autonomous agents -- software entities capable of independent action and decision-making -- to discover each other, securely exchange information, and collaborate across disparate platforms, vendors, and frameworks. Under the hood, A2A does this work by creating an AgentCard. An AgentCard is a JavaScript Object Notation (JSON) metadata document that describes its purpose and provides instructions on how to access it via a web URL. A2A also leverages widely adopted web standards, such as HTTP, JSON-RPC, and Server-Sent Events (SSE), to ensure broad compatibility and ease of integration. By providing a standardized, vendor-neutral communication layer, A2A breaks down the silos that have historically limited the potential of multi-agent systems. For security, A2A comes with enterprise-grade authentication and authorization built in, including support for JSON Web Tokens (JWTs), OpenID Connect (OIDC), and Transport Layer Security (TLS). This approach ensures that only authorized agents can participate in workflows, protecting sensitive data and agent identities. 
While the security foundations are in place, developers at the conference acknowledged that integrating them, particularly authenticating agents, will be a hard slog. Antje Barth, an Amazon Web Services (AWS) principal developer advocate for generative AI, explained what the adoption of A2A will mean for IT professionals: "Say you want to book a train ride to Copenhagen, then a hotel there, and look maybe for a fancy restaurant, right? You have inputs and individual tasks, and A2A adds more agents to this conversation, with one agent specializing in hotel bookings, another in restaurants, and so on. A2A enables agents to communicate with each other, hand off tasks, and finally brings the feedback to the end user." Jim Zemlin, executive director of the Linux Foundation, said: "By joining the Linux Foundation, A2A is ensuring the long-term neutrality, collaboration, and governance that will unlock the next era of agent-to-agent powered productivity." Zemlin expects A2A to become a cornerstone for building interoperable, multi-agent AI systems.
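The AgentCard mechanism described above can be sketched in a few lines. The snippet below is a purely illustrative mock-up, not the official A2A specification: the field names (`name`, `url`, `capabilities`, `authentication`, `skills`) are assumptions inferred from the summary, which says an AgentCard is a JSON metadata document describing an agent's purpose, how to reach it via a web URL, and which authentication schemes it supports.

```python
import json

# Hypothetical sketch of an A2A-style AgentCard. Field names are illustrative
# assumptions, not the official A2A schema.
agent_card = {
    "name": "hotel-booking-agent",
    "description": "Searches and books hotel rooms on behalf of other agents.",
    "url": "https://agents.example.com/hotel-booking",  # web URL peers call over HTTP
    "capabilities": {"streaming": True},                # e.g. Server-Sent Events
    "authentication": {"schemes": ["bearer"]},          # e.g. JWTs issued via OIDC
    "skills": [
        {"id": "book_room", "description": "Reserve a room for given dates"},
    ],
}

# An agent publishes its card as JSON; other agents fetch and parse it
# to discover what the agent can do and how to talk to it securely.
card_json = json.dumps(agent_card, indent=2)
print(card_json)
```

Discovery then reduces to an ordinary HTTPS fetch of this document, which is why A2A can lean on existing web standards (HTTP, JSON-RPC, SSE, TLS) rather than inventing a new transport.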

Read more of this story at Slashdot.

Music Pioneer Napster Tries Again, This Time With AI Chatbots

By msmash
July 8, 2025 at 19:27
Napster has returned with an AI-powered reinvention, launching a platform of specialized chatbots and holographic avatars. The former dot-com music file-sharing pioneer now offers dozens of "AI companions" trained as experts in fields from therapy to business strategy, plus the View device for 3D holographic video chats, Fast Company reports. Infinite Reality acquired Napster for $207 million in March and rebranded itself under the nostalgic name. The platform charges $19 monthly or $199 bundled with hardware, marking Napster's latest attempt at relevance after previous owners tried VR concerts and crypto ventures.

Read more of this story at Slashdot.

What is AGI? Nobody Agrees, And It's Tearing Microsoft and OpenAI Apart.

By msmash
July 8, 2025 at 18:01
Microsoft and OpenAI are locked in acrimonious negotiations partly because they cannot agree on what artificial general intelligence means, despite having written the term into a contract worth over $13 billion, according to The Wall Street Journal. One definition reportedly agreed upon by the companies sets the AGI threshold at when AI generates $100 billion in profits. Under their partnership agreement, OpenAI can limit Microsoft's access to future technology once it achieves AGI. OpenAI executives believe they are close to declaring AGI, while Microsoft CEO Satya Nadella called using AGI as a self-proclaimed milestone "nonsensical benchmark hacking" on the Dwarkesh Patel podcast in February.

Read more of this story at Slashdot.

Georgia Court Throws Out Earlier Ruling That Relied on Fake Cases Made Up By AI

By msmash
July 8, 2025 at 17:20
The Georgia Court of Appeals has overturned a trial court's order after finding it relied on court cases that do not exist, apparently generated by AI. The appellate court vacated the ruling in a divorce case involving Nimat Shahid's challenge to a divorce order granted to her husband Sufyan Esaam in July 2022. "We are troubled by the citation of bogus cases in the trial court's order," the appeals court stated in its decision, which directs the lower court to revisit Shahid's petition. The court noted the errant citations appear to have been "drafted using generative AI" and were included in an order prepared by attorney Diana Lynch. Lynch repeated the fabricated citations in her appeals briefs and expanded upon them after Shahid had challenged the fictitious cases. The appeals court found Lynch's briefs contained "11 bogus case citations out of 15 total, one of which was in support of a frivolous request for attorney fees." The court fined Lynch $2,500 for filing the frivolous motion.

Read more of this story at Slashdot.

Microsoft, OpenAI, and a US Teachers' Union Are Hatching a Plan To 'Bring AI into the Classroom'

By msmash
July 8, 2025 at 14:00
Microsoft, OpenAI, and Anthropic will announce Tuesday the launch of a $22.5 million AI training center for members of the American Federation of Teachers, according to details inadvertently published early on a publicly accessible YouTube livestream. The National Academy for AI Instruction will be based in New York City and aims to equip kindergarten through 12th grade instructors with "the tools and confidence to bring AI into the classroom in a way that supports learning and opportunity for all students." The initiative will provide free AI training and curriculum to teachers in the second-largest US teachers' union, which represents about 1.8 million workers including K-12 teachers, school nurses and college staff. The academy builds on Microsoft's December 2023 partnership with the AFL-CIO, the umbrella organization that includes the American Federation of Teachers.

Read more of this story at Slashdot.

Massive Study Detects AI Fingerprints In Millions of Scientific Papers

By BeauHD
July 8, 2025 at 07:00
A team of U.S. and German researchers analyzed over 15 million biomedical papers and found that AI-generated content has subtly infiltrated academic writing, with telltale stylistic shifts -- such as a rise in flowery verbs and adjectives. "Their investigation revealed that since the emergence of LLMs there has been a corresponding increase in the frequency of certain stylistic word choices within the academic literature," reports Phys.org. "These data suggest that at least 13.5% of the papers published in 2024 were written with some amount of LLM processing." From the report: The researchers modeled their investigation on prior COVID-19 public-health research, which was able to infer COVID-19's impact on mortality by comparing excess deaths before and after the pandemic. By applying the same before-and-after approach, the new study analyzed patterns of excess word use prior to the emergence of LLMs and after. The researchers found that after the release of LLMs, there was a significant shift away from the excess use of "content words" to an excess use of "stylistic and flowery" word choices, such as "showcasing," "pivotal," and "grappling." By manually assigning parts of speech to each excess word, the authors determined that before 2024, 79.2% of excess word choices were nouns. During 2024 there was a clearly identifiable shift: 66% of excess word choices were verbs and 14% were adjectives. The team also identified notable differences in LLM usage between research fields, countries, and venues. The findings have been published in the journal Science Advances.
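The before-and-after comparison the study describes can be sketched as: measure each word's relative frequency in a pre-LLM corpus and a post-LLM corpus, then flag words whose usage rose well beyond the baseline. This is a deliberately toy version, assuming simple whitespace tokenization and a fixed frequency-ratio threshold; the actual study's statistics are more involved.

```python
from collections import Counter

def word_freq(docs):
    """Relative frequency of each word across a list of documents."""
    counts = Counter(w for doc in docs for w in doc.lower().split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def excess_words(pre_docs, post_docs, ratio=2.0):
    """Flag words whose relative frequency at least doubled after the cutoff,
    a toy analogue of the study's excess-word-use comparison."""
    pre, post = word_freq(pre_docs), word_freq(post_docs)
    return sorted(w for w, f in post.items() if f >= ratio * pre.get(w, 0.0))

# Toy corpora standing in for pre- and post-LLM paper abstracts.
pre_llm = ["the results show a modest effect", "we measure the effect of treatment"]
post_llm = ["showcasing pivotal results", "grappling with pivotal questions"]
flagged = excess_words(pre_llm, post_llm)
print(flagged)
```

On these toy corpora the marker words the study cites ("showcasing," "pivotal," "grappling") are flagged, while ordinary content words that appear at similar rates in both periods are not.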

Read more of this story at Slashdot.

People Are Using AI Chatbots To Guide Their Psychedelic Trips

By BeauHD
July 8, 2025 at 03:30
An anonymous reader quotes a report from Wired: Trey had struggled with alcoholism for 15 years, eventually drinking heavily each night before quitting in December. But staying sober was a struggle for the 36-year-old first responder from Atlanta, who did not wish to use his real name due to professional concerns. Then he discovered Alterd, an AI-powered journaling app that invites users to "explore new dimensions" geared towards psychedelics and cannabis consumers, meditators, and alcohol drinkers. In April, using the app as a tripsitter -- a term for someone who soberly watches over another while they trip on psychedelics to provide reassurance and support -- he took a huge dose of 700 micrograms of LSD. (A typical recreational dose is considered to be 100 micrograms.) "I went from craving compulsions to feeling true freedom and not needing or wanting alcohol," he says. He recently asked the app's "chat with your mind" function how he had become more wise through all his AI-assisted psychedelic trips. It responded: "I trust my own guidance now, not just external rules or what others think. I'm more creative, less trapped by fear, and I actually live by my values, not just talk about them. The way I see, reflect, and act in the world is clearer and more grounded every day." "It's almost like your own self that you're communicating with," says Trey, adding he's tripped with his AI chatbot about a dozen times since April. "It's like your best friend. It's kind of crazy." The article mentions several different chatbot tools and AI systems that are being used for psychedelic therapy. ChatGPT: "Already, many millions of people are using ChatGPT on a daily basis, and the developments may have helped democratize access to psychotherapy-style guidance, albeit in a dubious Silicon Valley style with advice that is often flush with untruths," reports Wired. 
The general-purpose AI chatbot is being used for emotional support, intention-setting, and even real-time guidance during psychedelic trips. While not designed for therapy, it has been used informally as a trip companion, offering customized music playlists, safety reminders, and existential reflections. Experts caution that its lack of emotional nuance and clinical oversight poses significant risks during altered states. Alterd: Alterd is a personalized AI journal app that serves as a reflective tool by analyzing a user's entries, moods, and behavior patterns. Its "mind chat" function acts like a digital subconscious, offering supportive insights while gently confronting negative habits like substance use. Users credit it with deepening self-awareness and maintaining sobriety, particularly in the context of psychedelic-assisted growth. Mindbloom's AI Copilot: Integrated into Mindbloom's at-home ketamine therapy program, the AI copilot helps clients set pretrip intentions, process post-trip emotions, and stay grounded between sessions. It generates custom reflections and visual art based on voice journals, aiming to enhance the therapeutic journey even outside of human-guided sessions. The company plans to evolve the tool into a real-time, intelligent assistant capable of interacting more dynamically with users. Orb AI/Shaman Concepts (Speculative): Conceptual "orb" interfaces imagine an AI-powered, shaman-like robot facilitating various aspects of psychedelic therapy, from intake to trip navigation. While still speculative, such designs hint at a future where AI plays a central, embodied role in guiding altered states. These ideas raise provocative ethical and safety questions about replacing human presence with machines in deeply vulnerable psychological contexts. AI in Virtual Reality and Brain Modulation Systems: Researchers are exploring how AI could coordinate immersive virtual reality environments and brain-modulating devices to enhance psychedelic therapy. 
These systems would respond to real-time emotional and physiological signals, using haptic suits and VR to deepen and personalize the psychedelic experience. Though still in the conceptual phase, this approach represents the fusion of biotech, immersive tech, and AI in pursuit of therapeutic transformation.

Read more of this story at Slashdot.

Tennis Players Criticize AI Technology Used By Wimbledon

By BeauHD
July 8, 2025 at 02:10
Wimbledon's use of AI-powered electronic line-calling has sparked backlash from players who say the system made several incorrect calls, affecting match outcomes and creating accessibility issues. "This is the first year the prestigious tennis tournament, which is still ongoing, replaced human line judges, who determine if a ball is in or out, with an electronic line calling system (ELC)," notes TechCrunch. From the report: British tennis star Emma Raducanu called out the technology for missing a ball that her opponent hit out, but instead had to be played as if it were in. On a television replay, the ball indeed looked out, the Telegraph reported. Jack Draper, the British No. 1, also said he felt some line calls were wrong, saying he did not think the AI technology was "100 percent accurate." Player Ben Shelton had to speed up his match after being told that the new AI line system was about to stop working because of the dimming sunlight. Elsewhere, players said they couldn't hear the new automated speaker system, with one deaf player saying that without the human hand signals from the line judges, she was unable to tell when she won a point or not. The technology also hit a glitch at a key point during a match this weekend between British player Sonay Kartal and the Russian Anastasia Pavlyuchenkova, where a ball went out, but the technology failed to make the call. The umpire had to step in to stop the rally and told the players to replay the point because the ELC failed to track the point. Wimbledon later apologized, saying it was a "human error," and that the technology was accidentally shut off during the match. It also adjusted the technology so that, ideally, the mistake could not be repeated. 
Debbie Jevans, chair of the All England Club, the organization that hosts Wimbledon, hit back at Raducanu and Draper, saying, "When we did have linesmen, we were constantly asked why we didn't have electronic line calling because it's more accurate than the rest of the tour."

Read more of this story at Slashdot.

Google DeepMind's Spinoff Company 'Very Close' to Human Trials for Its AI-Designed Drugs

July 7, 2025 at 04:36
Google DeepMind's chief business officer says Alphabet's drug-discovery company Isomorphic Labs "is preparing to launch human trials of AI-designed drugs," according to a report in Fortune, "pairing cutting-edge AI with pharma veterans to design medicines faster, cheaper, and more accurately." "There are people sitting in our office in King's Cross, London, working, and collaborating with AI to design drugs for cancer," said Colin Murdoch [DeepMind's chief business officer and president of Isomorphic Labs]. "That's happening right now." After years in development, Murdoch says human clinical trials for Isomorphic's AI-assisted drugs are finally in sight. "The next big milestone is actually going out to clinical trials, starting to put these things into human beings," he said. "We're staffing up now. We're getting very close." The company, which was spun out of DeepMind in 2021, was born from one of DeepMind's most celebrated breakthroughs, AlphaFold, an AI system capable of predicting protein structures with a high level of accuracy. Iterations of AlphaFold progressed from being able to accurately predict individual protein structures to modeling how proteins interact with other molecules like DNA and drugs. These leaps made it far more useful for drug discovery, helping researchers design medicines faster and more precisely, turning the tool into a launchpad for a much larger ambition... In 2024, the same year it released AlphaFold 3, Isomorphic signed major research collaborations with pharma companies Novartis and Eli Lilly. A year later, in April 2025, Isomorphic Labs raised $600 million in its first-ever external funding round, led by Thrive Capital. The deals are part of Isomorphic's plan to build a "world-class drug design engine..." Today, pharma companies often spend millions attempting to bring a single drug to market, sometimes with just a 10% chance of success once trials begin. Murdoch believes Isomorphic's tech could radically improve those odds. 
"We're trying to do all these things: speed them up, reduce the cost, but also really improve the chance that we can be successful," he says. He wants to harness AlphaFold's technology to get to a point where researchers have 100% conviction that the drugs they are developing are going to work in human trials. "One day we hope to be able to say — well, here's a disease, and then click a button and out pops the design for a drug to address that disease," Murdoch said. "All powered by these amazing AI tools."

Read more of this story at Slashdot.

Is China Quickly Eroding America's Lead in the Global AI Race?

July 6, 2025 at 20:26
China "is pouring money into building an AI supply chain with as little reliance on the U.S. as possible," reports the Wall Street Journal. And now Chinese AI companies "are loosening the U.S.'s global stranglehold on AI," the Journal adds, "challenging American superiority and setting the stage for a global arms race in the technology." In Europe, the Middle East, Africa and Asia, users ranging from multinational banks to public universities are turning to large language models from Chinese companies such as startup DeepSeek and e-commerce giant Alibaba as alternatives to American offerings such as ChatGPT... Saudi Aramco, the world's largest oil company, recently installed DeepSeek in its main data center. Even major American cloud service providers such as Amazon Web Services, Microsoft and Google offer DeepSeek to customers, despite the White House banning use of the company's app on some government devices over data-security concerns. OpenAI's ChatGPT remains the world's predominant AI consumer chatbot, with 910 million global downloads compared with DeepSeek's 125 million, figures from researcher Sensor Tower show. American AI is widely seen as the industry's gold standard, thanks to advantages in computing semiconductors, cutting-edge research and access to financial capital. But as in many other industries, Chinese companies have started to snatch customers by offering performance that is nearly as good at vastly lower prices. A study of global competitiveness in critical technologies released in early June by researchers at Harvard University found China has advantages in two key building blocks of AI, data and human capital, that are helping it keep pace... Leading Chinese AI companies -- which include Tencent and Baidu -- further benefit from releasing their AI models open-source, meaning users are free to tweak them for their own purposes. That encourages developers and companies globally to adopt them. 
Analysts say it could also pressure U.S. rivals such as OpenAI and Anthropic to justify keeping their models private and the premiums they charge for their service... On Latenode, a Cyprus-based platform that helps global businesses build custom AI tools for tasks including creating social-media and marketing content, as many as one in five users globally now opt for DeepSeek's model, according to co-founder Oleg Zankov. "DeepSeek is overall the same quality but 17 times cheaper," Zankov said, which makes it particularly appealing for clients in places such as Chile and Brazil, where money and computing power aren't as plentiful...

The less dominant American AI companies are, the less power the U.S. will have to set global standards for how the technology should be used, industry analysts say. That opens the door for Beijing to use Chinese models as a Trojan horse for disseminating information that reflects its preferred view of the world, some warn...

The U.S. also risks losing insight into China's ambitions and AI innovations, according to Ritwik Gupta, AI policy fellow at the University of California, Berkeley. "If they are dependent on the global ecosystem, then we can govern it," said Gupta. "If not, China is going to do what it is going to do, and we won't have visibility."

The article also warns of other potential issues: "Further down the line, a breakdown in U.S.-China cooperation on safety and security could cripple the world's capacity to fight future military and societal threats from unrestrained AI." And: "The fracturing of global AI is already costing Western makers of computer chips and other hardware billions in lost sales... Adoption of Chinese models globally could also mean lost market share and earnings for AI-related U.S. firms such as Google and Meta."

Read more of this story at Slashdot.

Police Department Apologizes for Sharing AI-Doctored Evidence Photo on Social Media

6 juillet 2025 à 16:34
A Maine police department has now acknowledged "it inadvertently shared an AI-altered photo of drug evidence on social media," reports Boston.com:

The image from the Westbrook Police Department showed a collection of drug paraphernalia purportedly seized during a recent drug bust on Brackett Street, including a scale and white powder in plastic bags. According to Westbrook police, an officer involved in the arrests snapped the evidence photo and used a photo-editing app to insert the department's patch. "The patch was added, and the photograph with the patch was sent to one of our Facebook administrators, who posted it," the department explained in a post. "Unbeknownst to anyone, when the app added the patch, it altered the packaging and some of the other attributes on the photograph. None of us caught it or realized it."

It wasn't long before the edited image's gibberish text and hazy edges drew criticism from social media users. According to the Portland Press Herald, Westbrook police initially denied AI had been used to generate the photo before eventually confirming its use of the AI chatbot ChatGPT. The department issued a public apology Tuesday, sharing a side-by-side comparison of the original and edited images. "It was never our intent to alter the image of the evidence," the department's post read. "We never realized that using a photoshop app to add our logo would alter a photograph so substantially."