Cheap TVs' Incessant Advertising Reaches Troubling New Lows

An anonymous reader quotes an op-ed from Ars Technica's Scharon Harding: TVs offer us an escape from the real world. After a long day, sometimes there's nothing more relaxing than turning on your TV, tuning into your favorite program, and unplugging from the realities around you. But what happens when divisive, potentially offensive messaging infiltrates that escape? Even with streaming services making it easy to watch TV commercial-free, it can still be difficult for TV viewers to avoid ads with these sorts of messages. That's especially the case with budget brands, which may even force controversial ads onto TVs when they're idle, making users pay for low-priced TVs in unexpected, and sometimes troubling, ways. [...] Buying a budget TV means accepting some trade-offs. Those trade-offs have historically been around things like image quality and feature sets. But companies like Vizio are also asking customers to accept questionable advertising decisions as they look to create new paths to ad revenue. Numerous factors are pushing TV OS operators deeper into advertising. Brands are struggling to grow profits as people buy new TVs less frequently. As the TV market gets more competitive, hardware is also selling for cheaper, with some companies selling TVs at a loss with hopes of making up for it with ad sales. There's concern that these market realities could detract from real TV innovation. And as the Secretary Noem ad reportedly shown to Vizio TV owners has highlighted, another concern is the lack of care around which ads are being shown to TV owners -- especially when all they want is simple "ambient background" noise. Today, people can disable ambient mode settings that show ads. But with some TV brands showing poor judgment around where they sell and place ads, we wouldn't bank on companies maintaining these boundaries forever. If the industry can't find a way to balance corporate needs with appropriate advertising, people might turn off not only their TVs more often, but also unplug from those brands completely. Some of the worst offenders highlighted in the article include Vizio TVs' "Scenic Mode," which activates when the TV is idle and displays "relaxing, ambient content" accompanied by ads. Roku City takes a similar approach with its animated cityscape screensaver, saturated with brand logos and advertisements. Even Amazon Fire TV and premium brands like LG have adopted screensaver ads, showing that this intrusive trend isn't limited to budget models.

Read more of this story at Slashdot.

Nuclear Is Now 'Clean Energy' In Colorado

With the signing of HB25-1040 on Monday, Colorado now defines nuclear as a "clean energy resource" since it doesn't release large amounts of climate-warming emissions. "The category was previously reserved for renewables like wind, solar and geothermal, which don't carry the radioactive stigma that's hobbled fission power plants following disasters like Chernobyl and Fukushima," notes Colorado Public Radio. From the report: In an emailed statement, Ally Sullivan, a spokesperson for the governor's office, said the law doesn't advance any specific nuclear energy project, and no utility has proposed building a nuclear power plant in Colorado. It does, however, allow nuclear energy to potentially serve as one piece of the state's plan to tackle climate change. "If nuclear energy becomes sufficiently cost-competitive, it could potentially become part of Colorado's clean energy future. However, it must be conducted safely, without harming communities, depleting other natural resources or replacing other clean energy sources," Sullivan said. By redefining nuclear energy as "clean," the law would let future fission-based power plants obtain local grants previously reserved for other carbon-free energy sources, and it would allow those projects to contribute to Colorado's renewable energy goals. It also aligns state law with a push to reshape public opinion of nuclear energy. Nuclear energy proponents promise new reactor designs are smaller and safer than hulking power plants built in the 20th century. By embracing those systems, bill supporters claimed Colorado could meet rising energy demand without abandoning its ambitious climate goals.

Read more of this story at Slashdot.

Substack Says It'll Legally Defend Writers 'Targeted By the Government'

Substack has announced it will legally support foreign writers lawfully residing in the U.S. who face government targeting over their published work, partnering with the nonprofit FIRE to expand its existing Defender program. The Verge reports: In their announcement, Substack and FIRE mention the international student at Tufts University who was arrested by federal agents last week. Her legal team links her arrest to an opinion piece she co-wrote for the school's newspaper last year, which criticized Tufts for failing to comply with requests to divest from companies with connections to Israel. "If true, this represents a chilling escalation in the government's effort to target critics of American foreign policy," Substack and FIRE write. The initiative builds on Substack's Defender program, which already offers legal assistance for independent journalists and creators on the platform. The company says it has supported "dozens" of Substack writers facing claims of defamation and trademark infringement since it launched the program in the US in 2020. It has since brought Substack Defender to writers in Canada and the UK.

Read more of this story at Slashdot.

Stablecoin Issuer Circle Files For IPO

Circle, the issuer of the USDC stablecoin, has filed for an IPO aiming for a $5 billion valuation. It marks the company's second attempt at going public amid renewed momentum in the crypto sector and signs of recovery in tech IPO markets. CNBC reports: A prior merger with a special purpose acquisition company (SPAC) collapsed in late 2022 amid regulatory challenges. Since then, Circle has made strategic moves to position itself closer to the heart of global finance, including the announcement last year that it would relocate its headquarters from Boston to One World Trade Center in New York. Circle reported $1.68 billion in revenue and reserve income in 2024, up from $1.45 billion in 2023 and $772 million in 2022. The company reported net income last year of about $156 million, down from $268 million a year earlier. A successful IPO would make Circle one of the most prominent pure-play crypto companies to list on a U.S. exchange. Coinbase went public through a direct listing in 2021 and has a market cap of about $44 billion.

Read more of this story at Slashdot.

Donkey Kong Champion Wins Defamation Case Against Australian YouTuber Karl Jobst

An anonymous reader quotes a report from The Guardian: A professional YouTuber in Queensland has been ordered to pay $350,000 plus interest and costs to the former world record score holder for Donkey Kong, after the Brisbane district court found the YouTuber had defamed him "recklessly" with false claims of a link between a lawsuit and another YouTuber's suicide. William "Billy" Mitchell, an American gamer who had held world records in Donkey Kong and Pac-Man going back to 1982, as recognized by the Guinness World Records and the video game database Twin Galaxies, brought the case against Karl Jobst, seeking $400,000 in general damages and $50,000 in aggravated damages. Jobst, who makes videos about "speed running" (finishing games as fast as possible), as well as gaming records and cheating in games, made a number of allegations against Mitchell in a 2021 YouTube video. He accused Mitchell of cheating, and "pursuing unmeritorious litigation" against others who had also accused him of cheating, the court judgment stated. The court heard Mitchell was accused in 2017 of cheating in his Donkey Kong world records by using emulation software instead of original arcade hardware. Twin Galaxies investigated the allegation, and subsequently removed Mitchell's scores and banned him from participating in its competitions. The Guinness World Records disqualified Mitchell as a holder of all his records -- in both Donkey Kong and Pac-Man -- after the Twin Galaxies decision. The judgment stated that Jobst's 2021 video also linked the December 2020 suicide of another YouTuber, Apollo Legend, to "stress arising from [his] settlement" with Mitchell, and wrongly asserted that Apollo Legend had to pay Mitchell "a large sum of money."

Read more of this story at Slashdot.

FTC Says 23andMe Purchaser Must Uphold Existing Privacy Policy For Data Handling

The FTC has warned that any buyer of 23andMe must honor the company's current privacy policy, which ensures consumers retain control over their genetic data and can delete it at will. FTC Chair Andrew Ferguson emphasized that such promises must be upheld, given the uniquely sensitive and immutable nature of genetic information. The Record reports: The letter, sent to the DOJ's United States Trustee Program, highlights several assurances 23andMe makes in its privacy policy, including that users are in control of their data and can determine how and for what purposes it is used. The company also gives users the ability to delete their data at will, the letter says, arguing that 23andMe has made "direct representations" to consumers about how it uses, shares and safeguards their personal information, including in the case of bankruptcy. Pointing to statements that the company's leadership has made asserting that user data should be considered an asset, Ferguson highlighted that 23andMe's privacy statement tells users it does not share their data with insurers, employers, public databases or law enforcement without a court order, search warrant or subpoena. It also promises consumers that it only shares their personal data in cases where it is needed to provide services, Ferguson added. The genetic testing and ancestry company is explicit that its data protection guidelines apply to new entities it may be sold or transferred to, Ferguson said.

Read more of this story at Slashdot.

Arkansas Social Media Age Verification Law Blocked By Federal Judge

A federal judge struck down Arkansas' Social Media Safety Act, ruling it unconstitutional for broadly restricting both adult and minor speech and imposing vague requirements on platforms. Engadget reports: In a ruling (PDF), Judge Timothy Brooks said that the law, known as Act 689 (PDF), was overly broad. "Act 689 is a content-based restriction on speech, and it is not targeted to address the harms the State has identified," Brooks wrote in his decision. "Arkansas takes a hatchet to adults' and minors' protected speech alike though the Constitution demands it use a scalpel." Brooks also highlighted the "unconstitutionally vague" applicability of the law, which seemingly created obligations for some online services, but may have exempted services which had the "predominant or exclusive function [of]... direct messaging" like Snapchat. "The court confirms what we have been arguing from the start: laws restricting access to protected speech violate the First Amendment," NetChoice's Chris Marchese said in a statement. "This ruling protects Americans from having to hand over their IDs or biometric data just to access constitutionally protected speech online." It's not clear if state officials in Arkansas will appeal the ruling. "I respect the court's decision, and we are evaluating our options," Arkansas Attorney General Tim Griffin said in a statement.

Read more of this story at Slashdot.

MCP: the New 'USB-C For AI' That's Bringing Fierce Rivals Together

An anonymous reader quotes a report from Ars Technica: What does it take to get OpenAI and Anthropic -- two competitors in the AI assistant market -- to get along? Despite a fundamental difference in direction that led Anthropic's founders to quit OpenAI in 2020 and later create the Claude AI assistant, a shared technical hurdle has now brought them together: How to easily connect their AI models to external data sources. The solution comes from Anthropic, which developed and released an open specification called Model Context Protocol (MCP) in November 2024. MCP establishes a royalty-free protocol that allows AI models to connect with outside data sources and services without requiring unique integrations for each service. "Think of MCP as a USB-C port for AI applications," wrote Anthropic in MCP's documentation. The analogy is imperfect, but it represents the idea that, similar to how USB-C unified various cables and ports (with admittedly a debatable level of success), MCP aims to standardize how AI models connect to the infoscape around them. So far, MCP has also garnered interest from multiple tech companies in a rare show of cross-platform collaboration. For example, Microsoft has integrated MCP into its Azure OpenAI service, and as we mentioned above, Anthropic competitor OpenAI is on board. Last week, OpenAI acknowledged MCP in its Agents API documentation, with vocal support from the boss upstairs. "People love MCP and we are excited to add support across our products," wrote OpenAI CEO Sam Altman on X last Wednesday. MCP has also rapidly begun to gain community support in recent months. For example, just browsing this list of over 300 open source servers shared on GitHub reveals growing interest in standardizing AI-to-tool connections. The collection spans diverse domains, including database connectors like PostgreSQL, MySQL, and vector databases; development tools that integrate with Git repositories and code editors; file system access for various storage platforms; knowledge retrieval systems for documents and websites; and specialized tools for finance, health care, and creative applications. Other notable examples include servers that connect AI models to home automation systems, real-time weather data, e-commerce platforms, and music streaming services. Some implementations allow AI assistants to interact with gaming engines, 3D modeling software, and IoT devices.
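To make the "USB-C port" idea concrete, here is a minimal sketch of what an MCP server can look like, written against the official Python SDK (the 'mcp' package on PyPI). Treat it as an illustration rather than a definitive implementation: the module path, the FastMCP helper, and the stdio transport are assumptions drawn from the SDK's published quickstart and may differ between versions. The point is that any MCP-capable client can discover and call the tool below without a bespoke, per-service integration.

    # weather_server.py -- minimal MCP server sketch.
    # Assumes the official Python SDK is installed: pip install "mcp[cli]"
    # Module path and decorator names follow the SDK quickstart; verify them
    # against the SDK version you actually install.
    from mcp.server.fastmcp import FastMCP

    # The server name is what MCP clients display when they list this server.
    mcp = FastMCP("weather-demo")

    @mcp.tool()
    def get_forecast(city: str) -> str:
        """Return a (canned) forecast for a city.

        A real server would call a weather API here. What matters is that an
        MCP client can discover this tool's name, parameters, and description
        over the protocol, with no service-specific glue code.
        """
        return f"Forecast for {city}: sunny, 22 C"

    if __name__ == "__main__":
        # Serve over stdio, the transport most desktop MCP clients expect.
        mcp.run(transport="stdio")

On the client side, a host application (Claude Desktop, an IDE agent, and so on) launches or connects to this process, enumerates its tools, and routes the model's tool calls to it -- the "one connector, many devices" behavior the USB-C analogy is reaching for.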

Read more of this story at Slashdot.

Xiaomi EV Involved in First Fatal Autopilot Crash

An anonymous reader quotes a report from Reuters: China's Xiaomi said on Tuesday that it was actively cooperating with police after a fatal accident involving an SU7 electric vehicle on March 29 and that it had handed over driving and system data. The incident marks the first major accident involving the SU7 sedan, which Xiaomi launched in March last year and which has outsold Tesla's Model 3 on a monthly basis since December. Xiaomi's shares, which had risen by 34.8% year to date, closed down 5.5% on Wednesday, underperforming a 0.2% gain in the Hang Seng Tech index. Xiaomi did not disclose the number of casualties but said initial information showed the car was in the Navigate on Autopilot intelligent-assisted driving mode before the accident and was moving at 116 kph (72 mph). The driver took over and tried to slow the car, but it collided with a cement pole at a speed of 97 kph, Xiaomi said. The accident in Tongling, in the eastern Chinese province of Anhui, killed the driver and two passengers, Chinese financial publication Caixin reported on Tuesday, citing friends of the victims. In a rundown of the data submitted to local police, posted on a company Weibo account, Xiaomi said NOA issued a risk warning about obstacles ahead and that the driver's subsequent takeover came only seconds before the collision. Local media reported that the car caught fire after the collision. Xiaomi did not mention the fire in the statement. The report notes that the car was a "so-called standard version of the SU7, which has the less-advanced smart driving technology without LiDAR."

Read more of this story at Slashdot.

First Flight of Isar Aerospace's Spectrum Rocket Lasted Just 40 Seconds

An anonymous reader quotes a report from Ars Technica: The first flight of Isar Aerospace's Spectrum rocket didn't last long on Sunday. The booster's nine engines switched off as the rocket cartwheeled upside-down and fell a short distance from its Arctic launch pad in Norway, punctuating the abbreviated test flight with a spectacular fiery crash into the sea. If officials at Isar Aerospace were able to pick the outcome of their first test flight, it wouldn't be this. However, the result has precedent. The first launch of SpaceX's Falcon 1 rocket in 2006 ended in similar fashion. "Today, we know twice as much about our launch system as yesterday before launch," Daniel Metzler, Isar's co-founder and CEO, wrote on X early Monday. "Can't beat flight testing. Ploughing through lots of data now." Isar Aerospace, based in Germany, is the first in a crop of new European rocket companies to attempt an orbital launch. If all went according to plan, Isar's Spectrum rocket would have arced to the north from Andoya Spaceport in Norway and reached a polar orbit. But officials knew there was only a low chance of reaching orbit on the first flight. For this reason, Isar did not fly any customer payloads on the Spectrum rocket, designed to deliver up to 2,200 pounds (1,000 kilograms) of payload mass to low-Earth orbit. [...] Isar declared the launch a success in its public statements, but was it? [...] Metzler, Isar's chief executive, was asked last year what he would consider a successful inaugural flight of Spectrum. "For me, the first flight will be a success if we don't blow up the launch site," he said at the Handelsblatt innovation conference. "That would probably be the thing that would set us back the most in terms of technology and time." This tempering of expectations sounds remarkably similar to statements made by Elon Musk about SpaceX's first flight of the Starship rocket in 2023. By this measure, Isar officials can be content with Sunday's result. The company is modeling its test strategy on SpaceX's iterative development cycle, where engineers test early, make fixes, and fly again. This is in stark contrast to the way Europe has traditionally developed rockets. The alternative to Isar's approach could be to "spend 15 years researching, doing simulations, and then getting it right the first time," Metzler said. With the first launch of Spectrum, Isar has tested the rocket. Now, it's time to make fixes and fly again. That, Isar's leaders argue, will be the real measure of success. "We're super happy," Metzler said in a press call after Sunday's flight. "It's a time for people to be proud of, and for Europe, frankly, also to be proud of." You can watch a replay of the live launch webcast on YouTube.

Read more of this story at Slashdot.

UK's GCHQ Intern Transferred Top Secret Files To His Phone

Bruce66423 shares a report from the BBC: A former GCHQ intern has admitted risking national security by taking top secret data home with him on his mobile phone. Hasaan Arshad, 25, pleaded guilty to an offence under the Computer Misuse Act on what would have been the first day of his trial at the Old Bailey in London. The charge related to committing an unauthorised act which risked damaging national security. Arshad, from Rochdale in Greater Manchester, is said to have transferred sensitive data from a secure computer to his phone, which he had taken into a top secret area of GCHQ on 24 August 2022. [...] The court heard that Arshad took his work mobile into a top secret GCHQ area and connected it to a workstation. He then transferred sensitive data from a secure, top secret computer to the phone before taking it home, it was claimed. Arshad then transferred the data from the phone to a hard drive connected to his personal home computer. "Seriously? What on earth was the UK's equivalent of the NSA doing allowing its hardware to carry out such a transfer?" questions Bruce66423.

Read more of this story at Slashdot.

Intel and Microsoft Staff Allegedly Lured To Work For Fake Chinese Company In Taiwan

Taiwanese authorities have accused 11 Chinese companies, including SMIC, of secretly setting up disguised entities in Taiwan to illegally recruit tech talent from firms like Intel and Microsoft. The Register reports: One of those companies is apparently called Yunhe Zhiwang (Shanghai) Technology Co., Ltd and develops high-end network chips. The Bureau claims its chips are used in China's "Data East, Compute West" strategy that, as we reported when it was announced in 2022, calls for five million racks full of kit to be moved from China's big cities in the east to new datacenters located near renewable energy sources in the country's west. Datacenters in China's east will be used for latency-sensitive applications, while heavy lifting takes place in the west. Staff from Intel and Microsoft were apparently lured to work for Yunhe Zhiwang, which disguised its true ownership by working through a Singaporean company. The Investigation Bureau also alleged that China's largest chipmaker, Semiconductor Manufacturing International Corporation (SMIC), used a Samoan company to establish a presence in Taiwan and then hired local talent. That's a concerning scenario as SMIC is on the USA's "entity list" of organizations felt to represent a national security risk. The US gets tetchy when its friends and allies work with companies on the entity list. A third Chinese entity, Shenzhen Tongrui Microelectronics Technology, disguised itself so well Taiwan's Ministry of Industry and Information Technology lauded it as an important innovator and growth company. As a result of the Bureau's work, prosecutors' offices in seven Taiwanese cities are now looking into 11 Chinese companies thought to have hidden their ties to Beijing.

Read more of this story at Slashdot.

OpenAI Plans To Release a New 'Open' AI Language Model In the Coming Months

OpenAI plans to release a new open-weight language model -- its first since GPT-2 -- in the coming months and is seeking community feedback to shape its development. "That's according to a feedback form the company published on its website Monday," reports TechCrunch. "The form, which OpenAI is inviting 'developers, researchers, and [members of] the broader community' to fill out, includes questions like 'What would you like to see in an open-weight model from OpenAI?' and 'What open models have you used in the past?'" From the report: "We're excited to collaborate with developers, researchers, and the broader community to gather inputs and make this model as useful as possible," OpenAI wrote on its website. "If you're interested in joining a feedback session with the OpenAI team, please let us know [in the form] below." OpenAI plans to host developer events to gather feedback and, in the future, demo prototypes of the model. The first will take place in San Francisco within a few weeks, followed by sessions in Europe and Asia-Pacific regions. OpenAI is facing increasing pressure from rivals such as Chinese AI lab DeepSeek, which have adopted an "open" approach to launching models. In contrast to OpenAI's strategy, these "open" competitors make their models available to the AI community for experimentation and, in some cases, commercialization.

Read more of this story at Slashdot.

Google To Pay $100 Million To Settle 14-Year-Old Advertising Lawsuit

An anonymous reader quotes a report from Reuters: Google has agreed to pay $100 million in cash to settle a long-running lawsuit claiming it overcharged advertisers by failing to provide promised discounts and charged for clicks on ads outside the geographic areas the advertisers targeted. A preliminary settlement of the 14-year-old class action, which began in March 2011, was filed late Thursday in the San Jose, California, federal court, and requires a judge's approval. Advertisers who participated in Google's AdWords program, now known as Google Ads, accused the search engine operator of breaching its contract by manipulating its Smart Pricing formula to artificially reduce discounts. The advertisers also said Google, a unit of Mountain View, California-based Alphabet, misled them by failing to limit ad distribution to locations they designated, violating California's unfair competition law. Thursday's settlement covers advertisers who used AdWords between January 1, 2004, and December 13, 2012. Google denied wrongdoing in agreeing to settle. "This case was about ad product features we changed over a decade ago and we're pleased it's resolved," spokesman Jose Castaneda said in an emailed statement. Lawyers for the plaintiffs may seek fees of up to 33% of the settlement fund, plus $4.2 million for expenses. According to court papers, the case took a long time as the parties produced extensive evidence, including more than 910,000 pages of documents and multiple terabytes of click data from Google, and participated in six mediation sessions before four different mediators.

Read more of this story at Slashdot.

Honey Lost 4 Million Chrome Users After Shady Tactics Were Revealed

The Chrome extension Honey has lost over 4 million users after a viral video exposed it for hijacking affiliate codes and misleading users about finding the best coupon deals. 9to5Google reports: As we reported in early January, Honey had lost around 3 million users immediately after the video went viral, but ended up gaining back around 1 million later on. Now, as of March 2025, Honey is down to 16 million users on Chrome, down from its peak of 20 million. This drop comes after a new Chrome policy took effect that prevents Honey, and extensions like it, from practices including taking over affiliate codes without disclosure or without benefit to the extension's users. Honey has since updated its extension listing with disclosure, and we found that the behavior shown in the December video no longer occurs.

Read more of this story at Slashdot.

ChatGPT 'Added One Million Users In the Last Hour'

OpenAI is having another viral moment after releasing Images for ChatGPT last week, with millions of people creating Studio Ghibli-inspired AI art. In a post on X today, CEO Sam Altman said the company has "added one million users in the last hour" alone. A few days prior he begged users to stop generating images because he said "our GPUs are melting."

Read more of this story at Slashdot.

Open Source Genetic Database Shuts Down To Protect Users From 'Authoritarian Governments'

An anonymous reader quotes a report from 404 Media: The creator of an open source genetic database is shutting it down and deleting all of its data because he has come to believe that its existence is dangerous with "a rise in far-right and other authoritarian governments" in the United States and elsewhere. "The largest use case for DTC genetic data was not biomedical research or research in big pharma," Bastian Greshake Tzovaras, the founder of OpenSNP, wrote in a blog post. "Instead, the transformative impact of the data came to fruition among law enforcement agencies, who have put the genealogical properties of genetic data to use." OpenSNP has collected roughly 7,500 genomes over the last 14 years, primarily by allowing people to voluntarily submit their own genetic information they have downloaded from 23andMe. With the bankruptcy of 23andMe, increased interest in genetic data by law enforcement, and the return of Donald Trump and rise of authoritarian governments worldwide, Greshake Tzovaras told 404 Media he no longer believes it is ethical to run the database. "I've been thinking about it since 23andMe was on the verge of bankruptcy and been really considering it since the U.S. election. It definitely is really bad over there [in the United States]," Greshake Tzovaras told 404 Media. "I am quite relieved to have made the decision and come to a conclusion. It's been weighing on my mind for a long time." Greshake Tzovaras said that he is proud of the OpenSNP project, but that, in a world where scientific data is being censored and deleted and where the Trump administration has focused on criminalizing immigrants and trans people, he now believes that the most responsible thing to do is to delete the data and shut down the project. "Most people in OpenSNP may not be at particular risk right now, but there are people from vulnerable populations in here as well," Greshake Tzovaras said. "Thinking about gender representation, minorities, sexual orientation -- 23andMe has been working on the whole 'gay gene' thing, it's conceivable that this would at some point in the future become an issue." "Across the globe there is a rise in far-right and other authoritarian governments. While they are cracking down on free and open societies, they are also dedicated to replacing scientific thought and reasoning with pseudoscience across disciplines," Greshake Tzovaras wrote. "The risk/benefit calculus of providing free & open access to individual genetic data in 2025 is very different compared to 14 years ago. And so, sunsetting openSNP -- along with deleting the data stored within it -- feels like it is the most responsible act of stewardship for these data today." "The interesting thing to me is there are data preservation efforts in the U.S. because the government is deleting scientific data that they don't like. This is approaching that same problem from a different direction," he added. "We need to protect the people in this database. I am supportive of preserving scientific data and knowledge, but the data comes second -- the people come first. We prefer deleting the data."

Read more of this story at Slashdot.

First Trial of Generative AI Therapy Shows It Might Help With Depression

An anonymous reader quotes a report from MIT Technology Review: The first clinical trial of a therapy bot that uses generative AI suggests it was as effective as human therapy for participants with depression, anxiety, or risk for developing eating disorders. Even so, it doesn't give a go-ahead to the dozens of companies hyping such technologies while operating in a regulatory gray area. A team led by psychiatric researchers and psychologists at the Geisel School of Medicine at Dartmouth College built the tool, called Therabot, and the results were published on March 27 in the New England Journal of Medicine. Many tech companies are building AI therapy bots to address the mental health care gap, offering more frequent and affordable access than traditional therapy. However, challenges persist: poorly worded bot responses can cause harm, and forming meaningful therapeutic relationships is hard to replicate in software. While many bots rely on general internet data, researchers at Dartmouth developed "Therabot" using custom, evidence-based datasets. Here's what they found: To test the bot, the researchers ran an eight-week clinical trial with 210 participants who had symptoms of depression or generalized anxiety disorder or were at high risk for eating disorders. About half had access to Therabot, and a control group did not. Participants responded to prompts from the AI and initiated conversations, averaging about 10 messages per day. Participants with depression experienced a 51% reduction in symptoms, the best result in the study. Those with anxiety experienced a 31% reduction, and those at risk for eating disorders saw a 19% reduction in concerns about body image and weight. These measurements are based on self-reporting through surveys, a method that's not perfect but remains one of the best tools researchers have. These results ... are about what one finds in randomized control trials of psychotherapy with 16 hours of human-provided treatment, but the Therabot trial accomplished it in about half the time. "I've been working in digital therapeutics for a long time, and I've never seen levels of engagement that are prolonged and sustained at this level," says [Michael Heinz, a research psychiatrist at Dartmouth College and Dartmouth Health and first author of the study].

Read more of this story at Slashdot.

NASA Adds SpaceX's Starship To Launch Services Program Fleet

Despite recent test failures, NASA has added SpaceX's Starship to its Launch Services Program contract, allowing it to compete for future science missions once it achieves a successful orbital flight. Florida Today reports: NASA announced the addition Friday to its current launch provider contract with SpaceX, which covers the Falcon 9 and Falcon Heavy. This opens the possibility of Starship flying future NASA science missions -- that is, once Starship reaches a successful orbital flight. "NASA has awarded SpaceX of Starbase, Texas, a modification under the NASA Launch Services (NLS) II contract to add Starship to their existing Falcon 9 and Falcon Heavy launch service offerings," NASA's statement reads. The announcement is simply an onboarding of Starship as an option, as the contract runs through 2032. However, SpaceX is under pressure to get Starship operational by next year: the company plans to send an uncrewed Starship to Mars by late 2026, and NASA's Artemis III moon landing is fast approaching. Should it remain the plan with the current administration, Starship will act as a human lander for NASA's Artemis III crew. "The NLS II contracts are multiple award, indefinite-delivery/indefinite-quantity, with an ordering period through June 2030 and an overall period of performance through December 2032. The contracts include an on-ramp provision that provides an opportunity annually for new launch service providers to add their launch service on an NLS II contract and compete for future missions and allows existing contractors to introduce launch services not currently on their NLS II contracts," NASA's statement reads.

Read more of this story at Slashdot.

Martian Dust May Pose Health Risk To Humans Exploring Red Planet, Study Finds

A new study warns that toxic Martian dust contains fine particles and harmful substances like silica and metals that pose serious health risks to astronauts, making missions to Mars more dangerous than previously thought. The Guardian reports: During Apollo missions to the moon, astronauts suffered from exposure to lunar dust. It clung to spacesuits and seeped into the lunar landers, causing coughing, runny eyes and irritated throats. Studies showed that chronic health effects would result from prolonged exposure. Martian dust isn't as sharp and abrasive as lunar dust, but it does have the same tendency to stick to everything, and the fine particles (about 4% the width of a human hair) can penetrate deep into lungs and enter the bloodstream. Toxic substances in the dust include silica, gypsum and various metals. "A mission to Mars does not have the luxury of rapid return to Earth for treatment," the researchers write in the journal GeoHealth. And the 40-minute communication delay will limit the usefulness of remote medical support from Earth. Instead, the researchers stress that limiting exposure to dust is essential, requiring air filters, self-cleaning space suits and electrostatic repulsion devices, for example.
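For a sense of scale (a rough calculation based on the 4% figure above, assuming a typical human hair width of about 75 micrometers): 4% of 75 micrometers works out to roughly 3 micrometers, which puts Martian dust in the same size class as the fine particulate matter that can slip past the upper airways and reach the deep lung.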

Read more of this story at Slashdot.
