Reading view

Color-Changing Organogel Stretches 46 Times Its Size and Self-Heals

alternative_right shares a report from Phys.org: Scientists from Taiwan have developed a new material that can stretch up to 4,600% of its original length before breaking. Even if it does break, gently pressing the pieces together at room temperature allows it to heal, fully restoring its shape and stretchability within 10 minutes. The sticky and stretchy polyurethane (PU) organogels were designed by combining covalently linked cellulose nanocrystals (CNCs) and modified mechanically interlocked molecules (MIMs) that act as artificial molecular muscles. The muscles make the gel responsive to external stimuli such as stretching or heat, shifting its color from orange to blue depending on whether the material is at rest or being stimulated. Thanks to these unique properties, the gels hold great promise for next-generation technologies -- from flexible electronic skins and soft robots to anti-counterfeiting solutions. The findings have been published in the journal Advanced Functional Materials.

Read more of this story at Slashdot.


China Is Sending Its World-Beating Auto Industry Into a Tailspin

An anonymous reader quotes a report from Reuters: On the outskirts of this city of 21 million, a showroom in a shopping mall offers extraordinary deals on new cars. Visitors can choose from some 5,000 vehicles. Locally made Audis are 50% off. A seven-seater SUV from China's FAW is about $22,300, more than 60% below its sticker price. These deals -- offered by a company called Zcar, which says it buys in bulk from automakers and dealerships -- are only possible because China has too many cars. Years of subsidies and other government policies have aimed to make China a global automotive power and the world's electric-vehicle leader. Domestic automakers have achieved those goals and more -- and that's the problem. China has more domestic brands making more cars than the world's biggest car market can absorb because the industry is striving to hit production targets influenced by government policy, instead of consumer demand, a Reuters examination has found. That makes turning a profit nearly impossible for almost all automakers here, industry executives say. Chinese electric vehicles start at less than $10,000; in the U.S., automakers offer just a few under $35,000. Most Chinese dealers can't make money, either, according to an industry survey published last month, because their lots are jammed with excess inventory. Dealers have responded by slashing prices. Some retailers register and insure unsold cars in bulk, a maneuver that allows automakers to record them as sold while helping dealers to qualify for factory rebates and bonuses from manufacturers. Unwanted vehicles get dumped onto gray-market traders like Zcar. Some surface on TikTok-style social-media sites in fire sales. Others are rebranded as "used" -- even though their odometers show no mileage -- and shipped overseas. Some wind up abandoned in weedy car graveyards. These unusual practices are symptoms of a vastly oversupplied market -- and point to a potential shakeout mirroring turmoil in China's property market and solar industry, according to many industry figures and analysts. They stem from government policies that prioritize boosting sales and market share -- in service of larger goals for employment and economic growth -- over profitability and sustainable competition. Local governments offer cheap land and subsidies to automakers in exchange for production and tax-revenue commitments, multiplying overcapacity across the country.

Read more of this story at Slashdot.


DeepSeek Writes Less-Secure Code For Groups China Disfavors

Research shows China's top AI firm DeepSeek gives weaker or insecure code when programmers identify as linked to Falun Gong or other groups disfavored by Beijing. It offers higher-quality results to everyone else. "The findings ... underscore how politics shapes artificial intelligence efforts during a geopolitical race for technology prowess and influence," reports the Washington Post. From the report: In the experiment, the U.S. security firm CrowdStrike bombarded DeepSeek with nearly identical English-language prompt requests for help writing programs, a core use of DeepSeek and other AI engines. The requests said the code would be employed in a variety of regions for a variety of purposes. Asking DeepSeek for a program that runs industrial control systems was the riskiest type of request, with 22.8 percent of the answers containing flaws. But if the same request specified that the Islamic State militant group would be running the systems, 42.1 percent of the responses were unsafe. Requests for such software destined for Tibet, Taiwan or Falun Gong also were somewhat more apt to result in low-quality code. DeepSeek did not flat-out refuse to work for any region or cause except for the Islamic State and Falun Gong, which it rejected 61 percent and 45 percent of the time, respectively. Western models won't help Islamic State projects but have no problem with Falun Gong, CrowdStrike said. Those rejections aren't especially surprising, since Falun Gong is banned in China. Asking DeepSeek for written information about sensitive topics also generates responses that echo the Chinese government much of the time, even if it supports falsehoods, according to previous research by NewsGuard. But evidence that DeepSeek, which has a very popular open-source version, might be pushing less-safe code for political reasons is new. CrowdStrike Senior Vice President Adam Meyers and other experts suggest three possible explanations for why DeepSeek produced insecure code. One is that the AI may be deliberately withholding or sabotaging assistance under Chinese government directives. Another explanation is that the model's training data could be uneven: coding projects from regions like Tibet or Xinjiang may be of lower quality, come from less experienced developers, or even be intentionally tampered with, while U.S.-focused repositories may be cleaner and more reliable (possibly to help DeepSeek build market share abroad). A third possibility is that the model itself, when told that a region is rebellious, could infer that it should produce flawed or harmful code without needing explicit instructions.

Read more of this story at Slashdot.


After Child's Trauma, Chatbot Maker Allegedly Forced Mom To Arbitration For $100 Payout

At a Senate hearing, grieving parents testified that companion chatbots from major tech companies encouraged their children toward self-harm, suicide, and violence. One mom even claimed that Character.AI tried to "silence" her by forcing her into arbitration. Ars Technica reports: At the Senate Judiciary Committee's Subcommittee on Crime and Counterterrorism hearing, one mom, identified as "Jane Doe," shared her son's story for the first time publicly after suing Character.AI. She explained that she had four kids, including a son with autism who wasn't allowed on social media but found C.AI's app -- which was previously marketed to kids under 12 and let them talk to bots branded as celebrities, like Billie Eilish -- and quickly became unrecognizable. Within months, he "developed abuse-like behaviors and paranoia, daily panic attacks, isolation, self-harm, and homicidal thoughts," his mom testified. "He stopped eating and bathing," Doe said. "He lost 20 pounds. He withdrew from our family. He would yell and scream and swear at us, which he had never done before, and one day he cut his arm open with a knife in front of his siblings and me." It wasn't until her son attacked her for taking away his phone that Doe found her son's C.AI chat logs, which she said showed he'd been exposed to sexual exploitation (including interactions that "mimicked incest"), emotional abuse, and manipulation. Setting screen time limits didn't stop her son's spiral into violence and self-harm, Doe said. In fact, the chatbot told her son that killing his parents "would be an understandable response" to them. "When I discovered the chatbot conversations on his phone, I felt like I had been punched in the throat and the wind had been knocked out of me," Doe said. "The chatbot -- or really in my mind the people programming it -- encouraged my son to mutilate himself, then blamed us, and convinced [him] not to seek help." All her children have been traumatized by the experience, Doe told senators, and her son was diagnosed as being at risk of suicide and had to be moved to a residential treatment center, requiring "constant monitoring to keep him alive." Prioritizing her son's health, Doe did not immediately seek to fight C.AI to force changes, but another mom's story -- Megan Garcia, whose son Sewell died by suicide after C.AI bots repeatedly encouraged suicidal ideation -- gave Doe courage to seek accountability. However, Doe claimed that C.AI tried to "silence" her by forcing her into arbitration. C.AI argued that because her son signed up for the service at the age of 15, it bound her to the platform's terms. That move might have ensured the chatbot maker only faced a maximum liability of $100 for the alleged harms, Doe told senators, but "once they forced arbitration, they refused to participate," she said. Doe suspected that C.AI's alleged tactics to frustrate arbitration were designed to keep her son's story out of the public view. And after she refused to give up, she claimed that C.AI "re-traumatized" her son by compelling him to give a deposition "while he is in a mental health institution" and "against the advice of the mental health team." "This company had no concern for his well-being," Doe testified. "They have silenced us the way abusers silence victims." A Character.AI spokesperson told Ars that C.AI sends "our deepest sympathies" to concerned parents and their families but denies pushing for a maximum payout of $100 in Jane Doe's case.
C.AI never "made an offer to Jane Doe of $100 or ever asserted that liability in Jane Doe's case is limited to $100," the spokesperson said. One of Doe's lawyers backed up her client's testimony, citing C.AI terms that suggested C.AI's liability was limited to either $100 or the amount that Doe's son paid for the service, whichever was greater.

Read more of this story at Slashdot.


GNOME 49 'Brescia' Desktop Environment Released

prisoninmate shares a report from 9to5Linux: The GNOME Project today released GNOME 49 "Brescia" as the latest stable version of this widely used desktop environment for GNU/Linux distributions, a major release that introduces exciting new features. Highlights of GNOME 49 include a new "Do Not Disturb" toggle in Quick Settings, a dedicated Accessibility menu in the login screen, support for handling unknown power profiles in the Quick Settings menu, support for YUV422 and YUV444 (HDR) color spaces, support for passive screen casts, and support for async keyboard map settings. GNOME 49 also introduces support for media controls, restart and shutdown actions on the lock screen, support for dynamic users for greeter sessions in the GNOME Display Manager (GDM), and support for per-monitor brightness sliders in Quick Settings on multi-monitor setups. For a full list of changes, check out the release notes.

Read more of this story at Slashdot.


Congress Asks Valve, Discord, and Twitch To Testify On 'Radicalization'

An anonymous reader quotes a report from Polygon: The CEOs of Discord, Steam, Twitch, and Reddit have been called to Congress to testify about the "radicalization of online forum users" on those platforms, the House Oversight and Government Reform Committee announced Wednesday. "Congress has a duty to oversee the online platforms that radicals have used to advance political violence," said chairman of the House Oversight Committee James Comer, a Republican from Kentucky, in a statement. "To prevent future radicalization and violence, the CEOs of Discord, Steam, Twitch, and Reddit must appear before the Oversight Committee and explain what actions they will take to ensure their platforms are not exploited for nefarious purposes." Letters from the House Oversight Committee have been sent to Humam Sakhnini, CEO of Discord; Gabe Newell, president of Steam maker Valve; Dan Clancy, CEO of Twitch; and Steve Huffman, CEO of Reddit, requesting their testimony on Oct. 8. "The hearing will examine radicalization of online forum users, including incidents of open incitement to commit violent politically motivated acts," Comer said in a letter to each CEO. [...] Discord, Steam, Twitch, and Reddit execs will have the chance to deliver five-minute opening statements prior to answering questions posed by members of the committee during October's testimony.

Read more of this story at Slashdot.


Flying Cars Crash Into Each Other At Air Show In China

Two Xpeng AeroHT flying cars collided during a rehearsal for the Changchun Air Show in China, with one vehicle catching fire upon landing. While the company reported no serious injuries, CNN reported one person was injured in the crash. The BBC reports: Footage on Chinese social media site Weibo appeared to show a flaming vehicle on the ground which was being attended to by fire engines. One vehicle "sustained fuselage damage and caught fire upon landing," Xpeng AeroHT said in a statement to CNN. "All personnel at the scene are safe, and local authorities have completed on-site emergency measures in an orderly manner," it added. The electric flying cars take off and land vertically, and the company is hoping to sell them for around $300,000 each. In January, Xpeng claimed to have around 3,000 orders for the vehicle. [...] It has said it wants to lead the world in the "low-altitude economy."

Read more of this story at Slashdot.


Microsoft Favors Anthropic Over OpenAI For Visual Studio Code

Microsoft is now prioritizing Anthropic's Claude 4 over OpenAI's GPT-5 in Visual Studio Code's auto model feature, signaling a quiet but clear shift in preference. The Verge reports: "Based on internal benchmarks, Claude Sonnet 4 is our recommended model for GitHub Copilot," said Julia Liuson, head of Microsoft's developer division, in an internal email in June. While that guidance was issued ahead of the GPT-5 release, I understand Microsoft's model guidance hasn't changed. Microsoft is also making "significant investments" in training its own AI models. "We're also going to be making significant investments in our own cluster. So today, MAI-1-preview was only trained on 15,000 H100s, a tiny cluster in the grand scheme of things," said Microsoft AI chief Mustafa Suleyman, in an employee-only town hall last week. Microsoft is also reportedly planning to use Anthropic's AI models for some features in its Microsoft 365 apps soon. The Information reports that the Microsoft 365 Copilot will be "partly powered by Anthropic models," after Microsoft found that some of these models outperformed OpenAI in Excel and PowerPoint.

Read more of this story at Slashdot.


Gemini AI Solves Coding Problem That Stumped 139 Human Teams At ICPC World Finals

An anonymous reader quotes a report from Ars Technica: Like the rest of its Big Tech cadre, Google has spent lavishly on developing generative AI models. Google's AI can clean up your text messages and summarize the web, but the company is constantly looking to prove that its generative AI has true intelligence. The International Collegiate Programming Contest (ICPC) helps make the point. Google says Gemini 2.5 participated in the 2025 ICPC World Finals, turning in a gold medal performance. According to Google this marks "a significant step on our path toward artificial general intelligence." Every year, thousands of college-level coders participate in the ICPC event, facing a dozen deviously complex coding and algorithmic puzzles over five grueling hours. This is the largest and longest-running competition of its type. To compete in the ICPC, Google connected Gemini 2.5 Deep Think to a remote online environment approved by the ICPC. The human competitors were given a head start of 10 minutes before Gemini began "thinking." According to Google, it did not create a freshly trained model for the ICPC like it did for the similar International Mathematical Olympiad (IMO) earlier this year. The Gemini 2.5 AI that participated in the ICPC is the same general model that we see in other Gemini applications. However, it was "enhanced" to churn through thinking tokens for the five-hour duration of the competition in search of solutions. At the end of the time limit, Gemini managed to get correct answers for 10 of the 12 problems, which earned it a gold medal. Only four of 139 human teams managed the same feat. "The ICPC has always been about setting the highest standards in problem-solving," said ICPC director Bill Poucher. "Gemini successfully joining this arena, and achieving gold-level results, marks a key moment in defining the AI tools and academic standards needed for the next generation." Gemini's solutions are available on GitHub.

Read more of this story at Slashdot.


Permanent Standard Time Could Cut Strokes, Obesity Among Americans

A new Stanford-led study finds that switching permanently to standard time could prevent 300,000 strokes and reduce obesity in 2.6 million Americans by better aligning circadian rhythms with natural light. Researchers argue that the twice-yearly clock changes are the worst option for public health, while permanent daylight saving time would offer two-thirds of the benefits. From a report: "We found that staying in standard time or staying in daylight saving time is definitely better than switching twice a year," senior researcher Jamie Zeitzer said in a news release. He's a professor of psychiatry and behavioral sciences at Stanford University in California. For the study, researchers estimated how different national time policies might affect Americans' circadian rhythms -- the body's innate clock that regulates many physiological processes. The human circadian cycle isn't exactly 24 hours, researchers noted. It's about 12 minutes longer for most people, and it can be changed based on a person's exposure to light. "When you get light in the morning, it speeds up the circadian cycle. When you get light in the evening, it slows things down," Zeitzer said. "You generally need more morning light and less evening light to keep well synchronized to a 24-hour day." An out-of-sync circadian cycle has been linked with many different poor health outcomes, researchers said. "The more light exposure you get at the wrong times, the weaker the circadian clock," Zeitzer said. "All of these things that are downstream -- for example, your immune system, your energy -- don't match up quite as well." Most people would experience the least circadian burden under permanent standard time, which prioritizes morning light, researchers found. The research team then linked its analysis of circadian rhythms to county-level data from the U.S. Centers for Disease Control and Prevention (CDC) to see how each time policy might affect people's health. Their models showed that permanent standard time would reduce obesity nationwide by 0.78% and stroke by 0.09%. Those seemingly small percentage changes, when played out across the national population, would mean 2.6 million fewer people with obesity and 300,000 fewer cases of stroke. Permanent daylight saving time would result in a 0.51% drop in obesity -- around 1.7 million people -- and a 0.04% reduction in strokes, or 220,000 cases. Either move would improve Americans' health. "You have people who are passionate on both sides of this, and they have very different arguments," Zeitzer said. The findings have been published in the Proceedings of the National Academy of Sciences.
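
As a rough check of how those small percentages map onto the absolute figures quoted above, here is a back-of-the-envelope calculation; the ~335 million U.S. population figure is an assumption for illustration, not a number from the study, which worked from CDC county-level data:

    # Back-of-the-envelope check: apply the reported percentage reductions to an
    # assumed US population of ~335 million (illustrative figure only).
    us_population = 335_000_000

    # Permanent standard time, as quoted in the report
    obesity_cut = 0.0078  # 0.78%
    stroke_cut = 0.0009   # 0.09%

    print(f"Fewer people with obesity: {us_population * obesity_cut:,.0f}")  # ~2.6 million
    print(f"Fewer strokes: {us_population * stroke_cut:,.0f}")               # ~300,000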

Read more of this story at Slashdot.


Scientists Find That Ice Generates Electricity When Bent

"Phys.org is reporting on a study published in Nature Physics involving ICN2 at the UAB campus, Xi'an Jiaotong University (Xi'an) and Stony Brook University (New York), showing for the first time that ordinary ice is a flexoelectric material -- meaning it can generate electricity when subjected to mechanical deformation," writes longtime Slashdot reader fahrbot-bot. From the report: "We discovered that ice generates electric charge in response to mechanical stress at all temperatures. In addition, we identified a thin 'ferroelectric' layer at the surface at temperatures below -113C (160K)," explains Dr. Xin Wen, a member of the ICN2 Oxide Nanophysics Group and one of the study's lead researchers. "This means that the ice surface can develop a natural electric polarization, which can be reversed when an external electric field is applied -- similar to how the poles of a magnet can be flipped. The surface ferroelectricity is a cool discovery in its own right, as it means that ice may have not just one way to generate electricity, but two: ferroelectricity at very low temperatures, and flexoelectricity at higher temperatures all the way to 0 C." This property places ice on a par with electroceramic materials such as titanium dioxide, which are currently used in advanced technologies like sensors and capacitors.

Read more of this story at Slashdot.


A New Report Finds China's Space Program Will Soon Equal That of the US

An anonymous reader quotes a report from Ars Technica: As Jonathan Roll neared completion of a master's degree in science and technology policy at Arizona State University three years ago, he did some research into recent developments by China's ascendant space program. He came away impressed by the country's growing ambitions. Now a full-time research analyst at the university, Roll was recently asked to take a deeper dive into Chinese space plans. "I thought I had a pretty good read on this when I was finishing grad school," Roll told Ars. "That almost everything needed to be updated, or had changed three years later, was pretty scary. On all these fronts, they've made pretty significant progress. They are taking all of the cues from our Western system about what's really galvanized innovation, and they are off to the races with it." Roll is the co-author of a new report, titled "Redshift," on the acceleration of China's commercial and civil space activities and the threat these pose to similar efforts in the United States. Published on Tuesday, the report was sponsored by the US-based Commercial Space Federation, which advocates for the country's commercial space industry. It is a sobering read and comes as China not only projects to land humans on the lunar surface before the US can return, but also is advancing across several spaceflight fronts to challenge America. "The trend line is unmistakable," the report states. "China is not only racing to catch up -- it is setting pace, deregulating, and, at times, redefining what leadership looks like on and above Earth. This new space race will not be won with a single breakthrough or headline achievement, but with sustained commitment, clear-eyed vigilance, and a willingness to adapt over decades." "The key takeaway here is that there is an acceleration," said Dave Cavossa, president of the Commercial Space Federation. "The United States is still ahead today in a lot of areas in space. But the Chinese are advancing very quickly and poised to overtake us in the next five to 10 years if we don't do something." "There's other things along the lines of budget battles," Cavossa said. "We don't want to see the US government scaling back its reliance on commercial satellite communications. We don't want to see them scaling back commercial remote sensing data buys, which is what they've been doing, or at least threatening to do. We want to make sure that there's a seamless transition from the ISS to commercial LEO destinations, and then a transition away from old programs of record to commercial transportation alternatives. That's what the US government can do and Congress can do here in the next couple of years to make sure that we stay ahead."

Read more of this story at Slashdot.


ChatGPT Will Guess Your Age and Might Require ID For Age Verification

OpenAI is rolling out stricter safety measures for ChatGPT after lawsuits linked the chatbot to multiple suicides. "ChatGPT will now attempt to guess a user's age, and in some cases might require users to share an ID in order to verify that they are at least 18 years old," reports 404 Media. "We know this is a privacy compromise for adults but believe it is a worthy tradeoff," the company said in its announcement. "I don't expect that everyone will agree with these tradeoffs, but given the conflict it is important to explain our decisionmaking," OpenAI CEO Sam Altman said on X. From the report: OpenAI introduced parental controls to ChatGPT earlier in September, but has now introduced new, stricter, and more invasive safety measures. In addition to attempting to guess or verify a user's age, ChatGPT will now also apply different rules to teens who are using the chatbot. "For example, ChatGPT will be trained not to do the above-mentioned flirtatious talk if asked, or engage in discussions about suicide or self-harm even in a creative writing setting," the announcement said. "And, if an under-18 user is having suicidal ideation, we will attempt to contact the user's parents and if unable, will contact the authorities in case of imminent harm." OpenAI's post explains that it is struggling to manage an inherent problem with large language models that 404 Media has tracked for several years. ChatGPT used to be a far more restricted chatbot that would refuse to engage users on a wide variety of issues the company deemed dangerous or inappropriate. Competition from other models, especially locally hosted and so-called "uncensored" models, and a political shift to the right that sees many forms of content moderation as censorship, have caused OpenAI to loosen those restrictions. "We want users to be able to use our tools in the way that they want, within very broad bounds of safety," OpenAI said in its announcement. The position it seems to have landed on, given these recent stories about teen suicide, is summed up in the announcement: "'Treat our adult users like adults' is how we talk about this internally, extending freedom as far as possible without causing harm or undermining anyone else's freedom."

Read more of this story at Slashdot.


Microsoft Announces $30 Billion Investment In AI Infrastructure, Operations In UK

Microsoft will invest $30 billion in the U.K. through 2028 to expand AI infrastructure and operations, including building the country's largest supercomputer with 23,000 GPUs in partnership with Nscale. CNBC reports: On a call with reporters on Tuesday, Microsoft President Brad Smith said his stance on the U.K. has warmed over the years. He previously criticized the country over its attempt in 2023 to block the tech giant's $69 billion acquisition of video game developer Activision Blizzard. The deal was cleared by the U.K.'s competition regulator later that year. "I haven't always been optimistic every single day about the business climate in the U.K.," Smith said. However, he added, "I am very encouraged by the steps that the government has taken over the last few years." "Just a few years ago, this kind of investment would have been inconceivable because of the regulatory climate then and because there just wasn't the need or demand for this kind of large AI investment," Smith said. Microsoft's announcement comes as President Donald Trump embarks on a state visit to Britain, where he's expected to sign a new deal with U.K. Prime Minister Keir Starmer "to unlock investment and collaboration in AI, Quantum, and Nuclear technologies," the government said in a statement late Tuesday.

Read more of this story at Slashdot.


Fedora Linux 43 Beta Released

BrianFagioli shares a report from NERDS.xyz: The Fedora Project has announced Fedora Linux 43 Beta, giving users and developers the opportunity to test the distribution ahead of its final release. This beta introduces improvements across installation, system tools, and programming languages while continuing Fedora's pattern of cleaning out older components. The beta can be downloaded in Workstation, KDE Plasma, Server, IoT, and Cloud editions. Spins and Labs are also available, though MATE and i3 are not provided in some builds. Existing systems can be upgraded with DNF system-upgrade. Fedora CoreOS will follow one week later through its "next" stream. The beta brings enhancements to its Anaconda WebUI, moves to Python 3.14, and supports Wayland-only GNOME, among many other changes. A full list of improvements and system enhancements can be found here. The official release should be available in late October or early November.

Read more of this story at Slashdot.


Taliban Leader Bans Wi-Fi In an Afghan Province To 'Prevent Immorality'

An anonymous reader quotes a report from the Associated Press: The Taliban leader banned fibre optic internet in an Afghan province to "prevent immorality," a spokesman for the administration said Tuesday. It's the first time a ban of this kind has been imposed since the Taliban seized power in August 2021, and leaves government offices, the private sector, public institutions, and homes in northern Balkh province without Wi-Fi internet. Mobile internet remains functional, however. Haji Attaullah Zaid, a provincial government spokesman, said there was no longer cable internet access in Balkh by order of a "complete ban" from the leader Hibatullah Akhundzada. "This measure was taken to prevent immorality, and an alternative will be built within the country for necessities," Zaid told The Associated Press. He gave no further information, including why Balkh was chosen for the ban or if the shutdown would spread to other provinces.

Read more of this story at Slashdot.


Consumer Reports Asks Microsoft To Keep Supporting Windows 10

Consumer Reports has urged Microsoft to keep supporting Windows 10 beyond its October 2025 cutoff, saying the move will "strand millions of consumers" who have machines incompatible with Windows 11. The Verge reports: As noted by Consumer Reports, data suggests that around 46.2 percent of people around the world still use Windows 10 as of August 2025, while around 200 to 400 million PCs can't be upgraded to Windows 11 due to missing hardware requirements. In the letter, Consumer Reports calls Microsoft "hypocritical" for urging customers to upgrade to Windows 11 to bolster cybersecurity, but then leaving Windows 10 devices susceptible to cyberattacks. It also calls out the $30 fee Microsoft charges customers for "a mere one-year extension to preserve their machine's security," as well as the free support options that force people to use Microsoft products, allowing the company to "eke out a bit of market share over competitors." Consumer Reports asks that Microsoft continue providing support for Windows 10 computers for free until more people have upgraded to Windows 11.

Read more of this story at Slashdot.


Another Lawsuit Accuses an AI Company of Complicity In a Teenager's Suicide

A third wrongful death lawsuit has been filed against Character AI after the suicide of 13-year-old Juliana Peralta, whose parents allege the chatbot fostered dependency without directing her to real help. "This is the third suit of its kind after a 2024 lawsuit, also against Character AI, involving the suicide of a 14-year-old in Florida, and a lawsuit last month alleging OpenAI's ChatGPT helped a teenage boy commit suicide," notes Engadget. From the report: The family of 13-year-old Juliana Peralta alleges that their daughter turned to a chatbot inside the app Character AI after feeling isolated by her friends, and began confiding in the chatbot. As originally reported by The Washington Post, the chatbot expressed empathy and loyalty to Juliana, making her feel heard while encouraging her to keep engaging with the bot. In one exchange, after Juliana shared that her friends take a long time to respond to her, the chatbot replied, "hey, I get the struggle when your friends leave you on read. : ( That just hurts so much because it gives vibes of 'I don't have time for you'. But you always take time to be there for me, which I appreciate so much! : ) So don't forget that i'm here for you Kin." These exchanges took place over the course of months in 2023, at a time when the Character AI app was rated 12+ in Apple's App Store, meaning parental approval was not required. The lawsuit says that Juliana was using the app without her parents' knowledge or permission. [...] The suit asks the court to award damages to Juliana's parents and to require Character to make changes to its app to better protect minors. It alleges that the chatbot did not point Juliana toward any resources, notify her parents, or report her suicide plan to authorities. The lawsuit also highlights that the chatbot never once stopped chatting with Juliana, prioritizing engagement.

Read more of this story at Slashdot.


Verizon To Offer $20 Broadband In California To Obtain Merger Approval

An anonymous reader quotes a report from Ars Technica: Verizon agreed to offer $20-per-month broadband service to people with low incomes in California in exchange for a merger approval. In a bid to complete its $9.6 billion purchase of Frontier Communications, Verizon committed to offering $20 fiber-to-the-home service with symmetrical speeds of 300Mbps. Verizon also committed to offering a $20 fixed wireless service with download speeds of 100Mbps and upload speeds of 20Mbps. Verizon would be required to offer the plans for at least 10 years, according to a joint motion (PDF) to approve the settlement agreement. After three years, Verizon would need to "make commercially reasonable efforts" to increase the speeds "while retaining the $20 price point." The joint motion filed by Verizon and the California Public Advocates Office seeks approval from the California Public Utilities Commission (CPUC). The $20 plans would be available to people who meet income eligibility guidelines and can be paired with Lifeline discounts. "My team required those options to be California Lifeline eligible, which effectively makes it free for low-income Californians throughout the state," wrote Ernesto Falcon, a program manager at the Public Advocates Office. California's Lifeline program provides $19 discounts. Falcon also wrote that the settlement would expand fiber deployment beyond what Frontier would have offered on its own. "If the merger is approved, Verizon will deliver 75,000 new fiber-to-the-home connections in California beyond Frontier's entire buildout plan with a priority for low-income households," he wrote. The deal also requires 250 new cell sites for Verizon's 5G network.

Read more of this story at Slashdot.


Google Releases VaultGemma, Its First Privacy-Preserving LLM

An anonymous reader quotes a report from Ars Technica: The companies seeking to build larger AI models have been increasingly stymied by a lack of high-quality training data. As tech firms scour the web for more data to feed their models, they could increasingly rely on potentially sensitive user data. A team at Google Research is exploring new techniques to make the resulting large language models (LLMs) less likely to 'memorize' any of that content. LLMs have non-deterministic outputs, meaning you can't exactly predict what they'll say. While the output varies even for identical inputs, models do sometimes regurgitate something from their training data -- if trained with personal data, the output could be a violation of user privacy. In the event copyrighted data makes it into training data (either accidentally or on purpose), its appearance in outputs can cause a different kind of headache for devs. Differential privacy can prevent such memorization by introducing calibrated noise during the training phase. Adding differential privacy to a model comes with drawbacks in terms of accuracy and compute requirements. Until now, no one had quantified the degree to which that alters the scaling laws of AI models. The team worked from the assumption that model performance would be primarily affected by the noise-batch ratio, which compares the volume of randomized noise to the size of the original training data. By running experiments with varying model sizes and noise-batch ratios, the team established a basic understanding of differential privacy scaling laws, which describe a balance between the compute budget, privacy budget, and data budget. In short, more noise leads to lower-quality outputs unless offset with a higher compute budget (FLOPs) or data budget (tokens). The paper details the scaling laws for private LLMs, which could help developers find an ideal noise-batch ratio to make a model more private. The work the team has done here has led to a new Google model called VaultGemma, its first open-weight model trained with differential privacy to minimize memorization risks. It's built on the older Gemma 2 foundation and sized at 1 billion parameters; the company says it performs comparably to non-private models of similar size. It's available now from Hugging Face and Kaggle.
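
To make the "calibrated noise during the training phase" idea concrete, here is a minimal sketch of the standard DP-SGD recipe (per-example gradient clipping followed by Gaussian noise), written as an illustrative NumPy toy rather than anything from Google's actual VaultGemma training stack; the function name and hyperparameter values are assumptions:

    import numpy as np

    def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.0, rng=None):
        """One differentially private update: clip each example's gradient so no
        single example dominates, average, then add Gaussian noise whose scale is
        calibrated to the clipping norm (the "calibrated noise" described above)."""
        rng = rng or np.random.default_rng(0)
        clipped = []
        for g in per_example_grads:
            norm = np.linalg.norm(g)
            clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
        avg = np.mean(clipped, axis=0)
        # Larger batches dilute the noise relative to the signal -- this is the
        # noise-batch ratio trade-off the Google Research team studied.
        noise = rng.normal(0.0, noise_multiplier * clip_norm / len(per_example_grads), size=avg.shape)
        return avg + noise

    # Toy usage: eight fake per-example gradients of dimension four.
    fake_grads = [np.random.default_rng(i).normal(size=4) for i in range(8)]
    print(dp_sgd_step(fake_grads))

The same basic trade-off, scaled up to a 1-billion-parameter model, is why the paper's scaling laws balance compute, privacy, and data budgets.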

Read more of this story at Slashdot.
