
Received today — November 9, 2025 (Slashdot)

Genetically Engineered Babies Are Banned in the US. But Tech Titans Are Trying to Make One Anyway

November 9, 2025 at 21:43
"For months, a small company in San Francisco has been pursuing a secretive project: the birth of a genetically engineered baby," reports the Wall Street Journal: Backed by OpenAI chief executive Sam Altman and his husband, along with Coinbase co-founder and CEO Brian Armstrong, the startup — called Preventive — has been quietly preparing what would amount to a biological first. They are working toward creating a child born from an embryo edited to prevent a hereditary disease.... Editing genes in embryos with the intention of creating babies from them is banned in the U.S. and many countries. Preventive has been searching for places to experiment where embryo editing is allowed, including the United Arab Emirates, according to correspondence reviewed by The Wall Street Journal... Preventive is in the vanguard of a growing number of startups, funded by some of the most powerful people in Silicon Valley, that are pushing the boundaries of fertility and working to commercialize reproductive genetic technologies. Some are working on embryo editing, while others are already selling genetic screening tools that seek to account for the influence of dozens or hundreds of genes on a trait. They say their ultimate goal is to produce babies who are free of genetic disease and resilient against illnesses. Some say they can also give parents the ability to choose embryos that will have higher IQs and preferred traits such as height and eye color. Armstrong, the cryptocurrency billionaire, is leading the charge to make embryo editing a reality. He has told people that gene-editing technology could produce children who are less prone to heart disease, with lower cholesterol and stronger bones to prevent osteoporosis. According to documents and people briefed on his plans, he is already an investor or in talks with embryo editing ventures... After the Journal approached people close to the company last month to ask about its work, Preventive announced on its website that it had raised $30 million in investment to explore embryo editing. The statement pledged not to advance to human trials "if safety cannot be established through extensive research..." Other embryo editing startups are Manhattan Genomics, co-founded by Thiel Fellow Cathy Tie, and Bootstrap Bio, which plans to conduct tests in Honduras. Both companies are in early stages. The article notes the only known instance of children born from edited embryos was in 2018, when Chinese scientist He Jiankui "shocked the world with news that he had produced three children genetically altered as embryos to be immune to HIV. He was sentenced to prison in China for three years for the illegal practice of medicine. "He hasn't publicly shared the children's identities but says they are healthy.

Read more of this story at Slashdot.

Blue Origin Postpones Attempt to Launch Unique 'EscaPADE' Orbiters to Mars

November 9, 2025 at 19:43
UPDATE (1:16 PST): Today's launch has been scrubbed due to weather, and Blue Origin is now reviewing opportunities for new launch windows. Sunday morning, Blue Origin livestreamed the planned launch of its New Glenn rocket, which will carry a unique mission for NASA. "Twin spacecraft are set to take off on an unprecedented, winding journey to Mars," reports CNN, "where they will investigate why the barren red planet began to lose its atmosphere billions of years ago." By observing two Mars locations simultaneously, this mission can measure how Mars responds to space weather in real time — and how the Martian magnetosphere changes... Called EscaPADE, the mission will aim for an orbital trajectory that has never been attempted before, according to aerospace company Advanced Space, which is supporting the project. If successful, it could be a crucial case study that can allow extraordinary flexibility for planetary science missions down the road. The robotic mission plans to spend a year idling in an orbital backroad before heading to its target destination... [R]ather than turning toward Mars, the two orbiters will instead aim for Lagrange Point 2, or L2 — a cosmic balance point about 1.5 million kilometers (930,000 miles) from Earth. Lagrange points are special because they act as gravitational wells in which the pull of the sun and Earth are in perfect balance. The conditions can allow spacecraft to linger without being dragged away... The spacecraft will then loop endlessly in a kidney bean-shaped orbit around L2 until next year's Mars transfer window opens. This "launch and loiter" project is part of NASA's SIMPLEx [Small, Innovative Missions for Planetary Exploration] program, which seeks high-value missions for less money, notes CNN. "EscaPADE's cost was less than $100 million, compared with the roughly $300 million to $600 million price tags of other NASA satellites orbiting Mars." "Blue Origin is also attempting to land and recover New Glenn's first-stage booster," notes another CNN article.
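The roughly 1.5 million kilometer figure quoted for L2 can be sanity-checked with the standard restricted three-body approximation r ≈ R * (m / 3M)^(1/3). Below is a minimal Python sketch of that back-of-the-envelope calculation, with rounded constants; it is illustrative only, not mission math.

# Rough check of the ~1.5 million km Sun-Earth L2 distance using the
# approximation r ≈ R * (m / (3 * M))**(1/3).
M_SUN = 1.989e30      # mass of the sun, kg
M_EARTH = 5.972e24    # mass of the Earth, kg
AU_KM = 1.496e8       # mean Sun-Earth distance, km

r_l2 = AU_KM * (M_EARTH / (3 * M_SUN)) ** (1 / 3)
print(f"Approximate Earth-L2 distance: {r_l2:,.0f} km")   # prints roughly 1,500,000 km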

Read more of this story at Slashdot.

Python Foundation Donations Surge After Rejecting Grant - But Sponsorships Still Needed

November 9, 2025 at 20:43
After the Python Software Foundation rejected a $1.5 million grant because it restricted DEI activity, "a flood of new donations followed," according to a new report. By Friday they'd raised over $157,000, including 295 new Supporting Members paying an annual $99 membership fee, says PSF executive director Deb Nicholson. "It doesn't quite bridge the gap of $1.5 million, but it's incredibly impactful for us, both financially and in terms of feeling this strong groundswell of support from the community." Could that same security project still happen if new funding materializes? The PSF hasn't entirely given up. "The PSF is always looking for new opportunities to fund work benefiting the Python community," Nicholson told me in an email last week, adding pointedly that "we have received some helpful suggestions in response to our announcement that we will be pursuing." And even as things stand, the PSF sees itself as "always developing or implementing the latest technologies for protecting PyPI project maintainers and users from current threats," and it plans to continue with that commitment. The Python Software Foundation was "astounded and deeply appreciative at the outpouring of solidarity in both words and actions," their executive director wrote in a new blog post this week, saying the show of support "reminds us of the community's strength." But that post also acknowledges the reality that the Python Software Foundation's yearly revenue and assets (including contributions from major donors) "have declined, and costs have increased,..." Historically, PyCon US has been a source of revenue for the PSF, enabling us to fund programs like our currently paused Grants Program... Unfortunately, PyCon US has run at a loss for three years — and not from a lack of effort from our staff and volunteers! Everyone has been working very hard to find areas where we can trim costs, but even with those efforts, inflation continues to surge, and changing U.S. and economic conditions have reduced our attendance... Because we have so few expense categories (the vast majority of our spending goes to running PyCon US, the Grants Program, and our small 13-member staff), we have limited "levers to pull" when it comes to budgeting and long-term sustainability... While Python usage continues to surge, "corporate investment back into the language and the community has declined overall. The PSF has longstanding sponsors and partners that we are ever grateful for, but signing on new corporate sponsors has slowed." (They're asking employees at Python-using companies to encourage sponsorships.) We have been seeking out alternate revenue channels to diversify our income, with some success and some challenges. PyPI Organizations offers paid features to companies (PyPI features are always free to community groups) and has begun bringing in monthly income. We've also been seeking out grant opportunities where we find good fits with our mission.... We currently have more than six months of runway (as opposed to our preferred 12 months+ of runway), so the PSF is not at immediate risk of having to make more dramatic changes, but we are on track to face difficult decisions if the situation doesn't shift in the next year. Based on all of this, the PSF has been making changes and working on multiple fronts to combat losses and work to ensure financial sustainability, in order to continue protecting and serving the community in the long term. 
Some of these changes and efforts include:

- Pursuing new sponsors, specifically in the AI industry and the security sector
- Increasing sponsorship package pricing to match inflation
- Making adjustments to reduce PyCon US expenses
- Pursuing funding opportunities in the US and Europe
- Working with other organizations to raise awareness
- Strategic planning, to ensure we are maximizing our impact for the community while cultivating mission-aligned revenue channels

The PSF's end-of-year fundraiser effort is usually run by staff based on their capacity, but this year we have assembled a fundraising team that includes Board members to put some more "oomph" behind the campaign. We'll be doing our regular fundraising activities; we'll also be creating a unique webpage, piloting temporary and VERY visible pop-ups to python.org and PyPI.org, and telling more stories from our Grants Program recipients... Keep your eyes on the PSF Blog, the PSF category on Discuss, and our social media accounts for updates and information as we kick off the fundraiser this month. Your boosts of our posts and your personal shares of "why I support the PSF" stories will make all the difference in our end-of-year fundraiser. If this post has you all fired up to personally support the future of Python and the PSF right now, we always welcome new PSF Supporting Members and donations.

Read more of this story at Slashdot.

'AI Slop' in Court Filings: Lawyers Keep Citing Fake AI-Hallucinated Cases

November 9, 2025 at 19:04
"According to court filings and interviews with lawyers and scholars, the legal profession in recent months has increasingly become a hotbed for AI blunders," reports the New York Times: Earlier this year, a lawyer filed a motion in a Texas bankruptcy court that cited a 1985 case called Brasher v. Stewart. Only the case doesn't exist. Artificial intelligence had concocted that citation, along with 31 others. A judge blasted the lawyer in an opinion, referring him to the state bar's disciplinary committee and mandating six hours of A.I. training. That filing was spotted by Robert Freund, a Los Angeles-based lawyer, who fed it to an online database that tracks legal A.I. misuse globally. Mr. Freund is part of a growing network of lawyers who track down A.I. abuses committed by their peers, collecting the most egregious examples and posting them online. The group hopes that by tracking down the A.I. slop, it can help draw attention to the problem and put an end to it... [C]ourts are starting to map out punishments of small fines and other discipline. The problem, though, keeps getting worse. That's why Damien Charlotin, a lawyer and researcher in France, started an online database in April to track it. Initially he found three or four examples a month. Now he often receives that many in a day. Many lawyers... have helped him document 509 cases so far. They use legal tools like LexisNexis for notifications on keywords like "artificial intelligence," "fabricated cases" and "nonexistent cases." Some of the filings include fake quotes from real cases, or cite real cases that are irrelevant to their arguments. The legal vigilantes uncover them by finding judges' opinions scolding lawyers... Court-ordered penalties "are not having a deterrent effect," said Freund, who has publicly flagged more than four dozen examples this year. "The proof is that it continues to happen."

Read more of this story at Slashdot.

Lost Unix v4 Possibly Recovered on a Forgotten Bell Labs Tape From 1973

November 9, 2025 at 18:04
"A tape-based piece of unique Unix history may have been lying quietly in storage at the University of Utah for 50+ years," reports The Register. And the software librarian at Silicon Valley's Computer History Museum, Al Kossow of Bitsavers, believes the tape "has a pretty good chance of being recoverable." Long-time Slashdot reader bobdevine says the tape will be analyzed at the Computer History Museum. More from The Register: The news was posted to Mastodon by Professor Robert Ricci of the University of Utah's Kahlert School of Computing [along with a picture. "While cleaning a storage room, our staff found this tape containing #UNIX v4 from Bell Labs, circa 1973..." Ricci posted on Mastodon. "We have arranged to deliver it to the Computer History Museum."] The nine-track tape reel bears a handwritten label reading: UNIX Original From Bell Labs V4 (See Manual for format)... If it's what it says on the label, this is a notable discovery because little of UNIX V4 remains. That's unfortunate as this specific version is especially interesting: it's the first version of UNIX in which the kernel and some of the core utilities were rewritten in the new C programming language. Until now, the only surviving parts known were the source code to a slightly older version of the kernel and a few man pages — plus the Programmer's Manual [PDF], from November 1973. The Unix Heritage Society hosts those surviving parts — and apparently some other items of interest, according to a comment posted on Mastodon. "While going through the tapes from Dennis Ritchie earlier this year, I found some UNIX V4 distribution documents," posted Mastodon user "Broken Pipe," linking to tuhs.org/Archive/Applications/Dennis_Tapes/Gao_Analysis/v4_dist/. There's a file called license ("The program and information transmitted herewith is and shall remain the property of Bell Lab%oratories...") and coldboot ("Mount good tape on drive 0..."), plus a six-page "Setup" document that ends with these words... We expect to have a UNIX seminar early in 1974. Good luck. Ken Thompson Dennis Ritchie Bell Telephone Labs Murray Hill, NJ 07974

Read more of this story at Slashdot.

Neurodiverse Professionals 25% More Satisfied With AI Tools and Agents

November 9, 2025 at 17:04
An anonymous reader shared this report from CNBC: Neurodiverse professionals may see unique benefits from artificial intelligence tools and agents, research suggests. With AI agent creation booming in 2025, people with conditions like ADHD, autism, dyslexia and more report a more level playing field in the workplace thanks to generative AI. A recent study from the UK's Department for Business and Trade found that neurodiverse workers were 25% more satisfied with AI assistants and were more likely to recommend the tool than neurotypical respondents. [The study involved 1,000 users of Microsoft 365 Copilot from October through December of 2024.] "Standing up and walking around during a meeting means that I'm not taking notes, but now AI can come in and synthesize the entire meeting into a transcript and pick out the top-level themes," said Tara DeZao, senior director of product marketing at enterprise low-code platform provider Pega. DeZao, who was diagnosed with ADHD as an adult, has combination-type ADHD, which includes both inattentive symptoms (time management and executive function issues) and hyperactive symptoms (increased movement). "I've white-knuckled my way through the business world," DeZao said. "But these tools help so much...." Generative AI happens to be particularly adept at skills like communication, time management and executive functioning, creating a built-in benefit for neurodiverse workers who've previously had to find ways to fit in among a work culture not built with them in mind. Because of the skills that neurodiverse individuals can bring to the workplace — hyperfocus, creativity, empathy and niche expertise, just to name a few — some research suggests that organizations prioritizing inclusivity in this space generate nearly one-fifth higher revenue. "Investing in ethical guardrails, like those that protect and aid neurodivergent workers, is not just the right thing to do," said Kristi Boyd, an AI specialist with the SAS data ethics practice. "It's a smart way to make good on your organization's AI investments."

Read more of this story at Slashdot.

Rust Is Coming To Debian's APT Package Manager

November 9, 2025 at 15:34
A maintainer of Debian's Advanced Package Tool (APT) "has announced plans to introduce hard Rust dependencies into APT starting May 2026," reports the blog It's FOSS. The integration targets critical areas like parsing .deb, .ar, and tar files plus HTTP signature verification using Sequoia. [APT maintainer Julian Andres Klode] said these components "would strongly benefit from memory safe languages and a stronger approach to unit testing." He also gave a firm message to maintainers of Debian ports: "If you maintain a port without a working Rust toolchain, please ensure it has one within the next 6 months, or sunset the port." The reasoning is straightforward. Debian wants to move forward with modern tools rather than being held back by legacy architecture... Debian ports running on CPU architectures without Rust compiler support have six months to add proper toolchains. If they can't meet this deadline, those ports will need to be discontinued. As a result, some obscure or legacy platforms may lose official support. For most users on mainstream architectures like x86_64 and ARM, nothing changes. Your APT will simply become more secure and reliable under the hood. It's FOSS argues that "If done right, this could significantly strengthen APT's security and code quality." And the blog Linuxiac also supports the move. "By embedding Rust into APT, the distro joins a growing number of major open-source projects, such as the Linux kernel, Firefox, and systemd, that are gradually adopting Rust. And if I had to guess, I'd say this is just one of the first steps toward even deeper Rust integration in this legendary distribution, which is a good thing."
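By way of illustration, the "parsing .deb, .ar, and tar files" work concerns the container format itself: a .deb package is an ar archive holding a debian-binary marker plus control and data tarballs. The Python sketch below walks that container and lists its members; it is only an illustration of the format, not APT's actual parser (which is C++ today and would gain Rust components under this plan).

# A .deb package is an "ar" archive whose members are debian-binary,
# control.tar.*, and data.tar.*. This sketch lists member names and sizes
# by walking the 60-byte ar member headers.
import sys

def list_deb_members(path):
    with open(path, "rb") as f:
        if f.read(8) != b"!<arch>\n":
            raise ValueError("not an ar archive (so not a .deb)")
        while True:
            header = f.read(60)
            if len(header) < 60:
                break
            name = header[0:16].decode("ascii", "replace").strip()
            size = int(header[48:58].decode("ascii").strip())
            yield name, size
            f.seek(size + (size % 2), 1)   # member data is 2-byte aligned

if __name__ == "__main__":
    for name, size in list_deb_members(sys.argv[1]):
        print(f"{name:20s} {size:>12d} bytes")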

Read more of this story at Slashdot.

America's FAA Grounds MD-11s After Tuesday's Crash in Kentucky

November 9, 2025 at 16:04
UPDATE (11/9): America's Federal Aviation Administration has now grounded all U.S. MD-11 and MD-11F aircraft after Tuesday's crash "because the agency has determined the unsafe condition is likely to exist or develop in other products of the same type design," according to an emergency airworthiness directive obtained by CBS News. American multinational freight company UPS had already "grounded its fleet of MD-11 aircraft," reported the Guardian, "days after a cargo plane crash that killed at least 13 people in Kentucky. The grounded MD-11s are the same type of plane involved in Tuesday's crash in Louisville. They were originally built by McDonnell Douglas until it was taken over by Boeing." More details from NBC News: UPS said the move to temporarily ground its MD-11 fleet was made "out of an abundance of caution and in the interest of safety." MD-11s make up 9% of the company's air fleet, it said. "We made this decision proactively at the recommendation of the aircraft manufacturer. Nothing is more important to us than the safety of our employees and the communities we serve," UPS spokesman Jim Mayer said... FedEx said early Saturday that it was also grounding its MD-11s. The UPS rival has 28 such planes in operation, out of a fleet of around 700, FedEx said. Video shows that the left engine of the plane caught fire during takeoff and immediately detached, National Transportation Safety Board member Todd Inman said Wednesday. The National Transportation Safety Board is the lead agency in the investigation. Thanks to long-time Slashdot reader echo123 for suggesting the article.

Read more of this story at Slashdot.

Hilarious Unused Audio From 2003 Baseball Game Rediscovered by Video Game History Foundation

November 9, 2025 at 12:34
After popular arcade games like Mortal Kombat and Spy Hunter, Midway Games jumped into the home console market, and in 2003 launched their baseball game franchise "MLB Slugfest" for Xbox, PS2, and GameCube. But at times it was almost a parody of baseball, including announcers filling the long hours of airtime with bizarre, rambling conversations. ("I read today that kitchen utensils are gonna hurt more people tonight than lifting heavy objects during the day...") Now former Midway Games producer Mark Flitman has revealed the even weirder conversations rejected by Major League Baseball. ("Ah, baseball on a sunny afternoon. Is there anything better? We've been talking about breaking pop bottles with rocks. I guess that is...") The nonprofit Video Game History Foundation published the text in their digital archive — and shared 79 seconds of sound clips that were actually recorded but never used in the final game. ("Enjoying some smoked whale meat up here in the booth today...") Their BlueSky post with the audio drew over 5,500 likes and 2,400 reposts, with one commenter wondering if the bizarre (and unapproved) conversations were "part of the tactic where you include overtly inappropriate content to make the stuff you actually want to keep seem more appropriate." But the Foundation's library director thinks the voice actors were just going wild. "We talked with Mark on our podcast and it sounds like they just did a lot of improv and got carried away." He added later that the game's producer "would give them prompts and they'd run with it. The voice actors (Kevin Matthews and Tim Kitzrow) have backgrounds in sports radio and comedy, so they came up with wild nonsense like this." The gaming site Aftermath notes the Foundation also has an archive page for all the other sound files on the CD. Maybe it's the ultimate tribute to the craziness that was MLB Slugfest. Years ago some fans of the game shared their memories on Reddit... "The first time my friend tried to bean me and my hitter caught the ball was so hype, we were freaking out. Every game quickly evolved into trying to get our hitters to charge the mound." "I just remembered you could also kick the shit out of the fielder near your base if he got too close. Man that game was awesome." "You could do jump kicks into the catcher like Richie from The Benchwarmers." "Every time someone got on base we would run the ball over to them and beat their asses for 30 seconds. Good times." Six years after the launch of the franchise, Midway Games declared bankruptcy.

Read more of this story at Slashdot.

Did ChatGPT Conversations Leak... Into Google Search Console Results?

November 9, 2025 at 08:34
"For months, extremely personal and sensitive ChatGPT conversations have been leaking into an unexpected destination," reports Ars Technica: the search-traffic tool for webmasters , Google Search Console. Though it normally shows the short phrases or keywords typed into Google which led someone to their site, "starting this September, odd queries, sometimes more than 300 characters long, could also be found" in Google Search Console. And the chats "appeared to be from unwitting people prompting a chatbot to help solve relationship or business problems, who likely expected those conversations would remain private." Jason Packer, owner of analytics consulting firm Quantable, flagged the issue in a detailed blog post last month, telling Ars Technica he'd seen 200 odd queries — including "some pretty crazy ones." (Web optimization consultant Slobodan ManiÄ helped Packer investigate...) Packer points out "nobody clicked share" or were given an option to prevent their chats from being exposed. Packer suspected that these queries were connected to reporting from The Information in August that cited sources claiming OpenAI was scraping Google search results to power ChatGPT responses. Sources claimed that OpenAI was leaning on Google to answer prompts to ChatGPT seeking information about current events, like news or sports... "Did OpenAI go so fast that they didn't consider the privacy implications of this, or did they just not care?" Packer posited in his blog... Clearly some of those searches relied on Google, Packer's blog said, mistakenly sending to GSC "whatever" the user says in the prompt box... This means "that OpenAI is sharing any prompt that requires a Google Search with both Google and whoever is doing their scraping," Packer alleged. "And then also with whoever's site shows up in the search results! Yikes." To Packer, it appeared that "ALL ChatGPT prompts" that used Google Search risked being leaked during the past two months. OpenAI claimed only a small number of queries were leaked but declined to provide a more precise estimate. So, it remains unclear how many of the 700 million people who use ChatGPT each week had prompts routed to Google Search Console. "Perhaps most troubling to some users — whose identities are not linked in chats unless their prompts perhaps share identifying information — there does not seem to be any way to remove the leaked chats from Google Search Console.."

Read more of this story at Slashdot.

'Breaking Bad' Creator Hates AI, Promises New Show 'Pluribus' Was 'Made By Humans'

November 9, 2025 at 04:34
The new series from Breaking Bad creator Vince Gilligan, Pluribus, was emphatically made by humans, not AI, reports TechCrunch: If you watched all the way to the end of the new Apple TV show "Pluribus," you may have noticed an unusual disclaimer in the credits: "This show was made by humans." That terse message — placed right below a note that "animal wranglers were on set to ensure animal safety" — could potentially provide a model for other filmmakers seeking to highlight that their work was made without the use of generative AI. In fact, yesterday the former X-Files writer told Variety "I hate AI. AI is the world's most expensive and energy-intensive plagiarism machine...." He goes on, about how AI-generated content is "like a cow chewing its cud — an endlessly regurgitated loop of nonsense," and how the U.S. will fail to regulate the technology because of an arms race with China. He works himself up until he's laughing again, proclaiming: "Thank you, Silicon Valley! Yet again, you've fucked up the world." He also says "there's a very high possibility that this is all a bunch of horseshit," according to the article. "It's basically a bunch of centibillionaires whose greatest life goal is to become the world's first trillionaires. I think they're selling a bag of vapor." And earlier this week he told Polygon that he hasn't used ChatGPT "because, as of yet, no one has held a shotgun to my head and made me do it." (Adding "I will never use it.") Time magazine called Thursday's two-episode premiere "bonkers." Though ironically, that premiere hit its own dystopian glitch. "After months of buildup and an omnipresent advertising campaign, Apple's much-anticipated new show Pluribus made its debut..." reports Macworld. "And the service promptly suffered a major outage across the U.S. and Canada." As reported by Bloomberg and others, users started to report that the service had crashed at around 10:30 p.m. ET, shortly after Apple made the first two episodes of the show available to stream. There were almost 13,000 reports on Downdetector before Apple acknowledged the problem on its System Status page. Reports say the outage was brief, lasting less than an hour... [T]here remains a Resolved Outage note on Apple TV (simply saying "Some users were affected; users experienced a problem with Apple TV" between 10:29 and 11:38 p.m.), as well as on Apple Music and Apple Arcade, which also went down at the same time. Social media reports indicated that the outage was widespread.

Read more of this story at Slashdot.

New Firefox Mascot 'Kit' Unveiled On New Web Page

November 9, 2025 at 02:34
"The Firefox brand is getting a refresh and you get the first look," says a new web page at Firefox.com. "Kit's our new mascot and your new companion through an internet that's private, open and actually yours." Slashdot reader BrianFagioli believes the new mascot "is meant to communicate that message in a warmer, more relatable way." And Firefox is already selling shirts with Kit over the pocket (as well as stickers)...

Read more of this story at Slashdot.

Common Crawl Criticized for 'Quietly Funneling Paywalled Articles to AI Developers'

November 8, 2025 at 23:34
For more than a decade, the nonprofit Common Crawl "has been scraping billions of webpages to build a massive archive of the internet," notes the Atlantic, making it freely available for research. "In recent years, however, this archive has been put to a controversial purpose: AI companies including OpenAI, Google, Anthropic, Nvidia, Meta, and Amazon have used it to train large language models." "In the process, my reporting has found, Common Crawl has opened a back door for AI companies to train their models with paywalled articles from major news websites. And the foundation appears to be lying to publishers about this — as well as masking the actual contents of its archives..." Common Crawl's website states that it scrapes the internet for "freely available content" without "going behind any 'paywalls.'" Yet the organization has taken articles from major news websites that people normally have to pay for — allowing AI companies to train their LLMs on high-quality journalism for free. Meanwhile, Common Crawl's executive director, Rich Skrenta, has publicly made the case that AI models should be able to access anything on the internet. "The robots are people too," he told me, and should therefore be allowed to "read the books" for free. Multiple news publishers have requested that Common Crawl remove their articles to prevent exactly this use. Common Crawl says it complies with these requests. But my research shows that it does not. I've discovered that pages downloaded by Common Crawl have appeared in the training data of thousands of AI models. As Stefan Baack, a researcher formerly at Mozilla, has written, "Generative AI in its current form would probably not be possible without Common Crawl." In 2020, OpenAI used Common Crawl's archives to train GPT-3. OpenAI claimed that the program could generate "news articles which human evaluators have difficulty distinguishing from articles written by humans," and in 2022, an iteration on that model, GPT-3.5, became the basis for ChatGPT, kicking off the ongoing generative-AI boom. Many different AI companies are now using publishers' articles to train models that summarize and paraphrase the news, and are deploying those models in ways that steal readers from writers and publishers. Common Crawl maintains that it is doing nothing wrong. I spoke with Skrenta twice while reporting this story. During the second conversation, I asked him about the foundation archiving news articles even after publishers have asked it to stop. Skrenta told me that these publishers are making a mistake by excluding themselves from "Search 2.0" — referring to the generative-AI products now widely being used to find information online — and said that, anyway, it is the publishers that made their work available in the first place. "You shouldn't have put your content on the internet if you didn't want it to be on the internet," he said. Common Crawl doesn't log in to the websites it scrapes, but its scraper is immune to some of the paywall mechanisms used by news publishers. For example, on many news websites, you can briefly see the full text of any article before your web browser executes the paywall code that checks whether you're a subscriber and hides the content if you're not. Common Crawl's scraper never executes that code, so it gets the full articles.
Thus, by my estimate, the foundation's archives contain millions of articles from news organizations around the world, including The Economist, the Los Angeles Times, The Wall Street Journal, The New York Times, The New Yorker, Harper's, and The Atlantic.... A search for nytimes.com in any crawl from 2013 through 2022 shows a "no captures" result, when in fact there are articles from NYTimes.com in most of these crawls. "In the past year, Common Crawl's CCBot has become the scraper most widely blocked by the top 1,000 websites," the article points out...
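To make that last mechanism concrete: a crawler that fetches raw HTML and never runs JavaScript keeps whatever article text the server put in the markup, because a client-side paywall only hides content after its script executes in a browser. The Python sketch below (using the third-party requests library and a placeholder URL) illustrates the general idea; it is not Common Crawl's actual crawler, which runs its own CCBot infrastructure.

# Fetch raw HTML without executing any JavaScript and print the text that
# shipped in the markup; the paywall script in the page never runs here.
import requests
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.skip = False
        self.chunks = []
    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip = True
    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self.skip = False
    def handle_data(self, data):
        if not self.skip and data.strip():
            self.chunks.append(data.strip())

html = requests.get("https://example.com/some-article", timeout=30).text
extractor = TextExtractor()
extractor.feed(html)
print("\n".join(extractor.chunks)[:2000])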

Read more of this story at Slashdot.

Received yesterday — November 8, 2025 (Slashdot)

Scientists Edit Gene in 15 Patients That May Permanently Reduce High Cholesterol

November 8, 2025 at 22:34
A CRISPR-based drug given to study participants by infusion is raising hopes for a much easier way to lower cholesterol, reports CNN: With a snip of a gene, doctors may one day permanently lower dangerously high cholesterol, possibly removing the need for medication, according to a new pilot study published Saturday in the New England Journal of Medicine. The study was extremely small — only 15 patients with severe disease — and was meant to test the safety of a new medication delivered by CRISPR-Cas9, a biological sort of scissor which cuts a targeted gene to modify or turn it on or off. Preliminary results, however, showed nearly a 50% reduction in low-density lipoprotein, or LDL, the "bad" cholesterol which plays a major role in heart disease — the No.1 killer of adults in the United States and worldwide. The study, which will be presented Saturday at the American Heart Association Scientific Sessions in New Orleans, also found an average 55% reduction in triglycerides, a different type of fat in the blood that is also linked to an increased risk of cardiovascular disease. "We hope this is a permanent solution, where younger people with severe disease can undergo a 'one and done' gene therapy and have reduced LDL and triglycerides for the rest of their lives," said senior study author Dr. Steven Nissen, chief academic officer of the Sydell and Arnold Miller Family Heart, Vascular & Thoracic Institute at Cleveland Clinic in Ohio.... Today, cardiologists want people with existing heart disease or those born with a predisposition for hard-to-control cholesterol to lower their LDL well below 100, which is the average in the US, said Dr. Pradeep Natarajan, director of preventive cardiology at Massachusetts General Hospital and associate professor of medicine at Harvard Medical School in Boston... People with a nonfunctioning ANGPTL3 gene — which Natarajan says applies to about 1 in 250 people in the US — have lifelong levels of low LDL cholesterol and triglycerides without any apparent negative consequences. They also have exceedingly low or no risk for cardiovascular disease. "It's a naturally occurring mutation that's protective against cardiovascular disease," said Nissen, who holds the Lewis and Patricia Dickey Chair in Cardiovascular Medicine at Cleveland Clinic. "And now that CRISPR is here, we have the ability to change other people's genes so they too can have this protection." "Phase 2 clinical trials will begin soon, quickly followed by Phase 3 trials, which are designed to show the effect of the drug on a larger population, Nissen said." And CNN quotes Nissen as saying "We hope to do all this by the end of next year. We're moving very fast because this is a huge unmet medical need — millions of people have these disorders and many of them are not on treatment or have stopped treatment for whatever reason."

Read more of this story at Slashdot.

Bank of America Faces Lawsuit Over Alleged Unpaid Time for Windows Bootup, Logins, and Security Token Requests

November 8, 2025 at 21:34
A former Business Analyst reportedly filed a class action lawsuit claiming that for years, hundreds of remote employees at Bank of America first had to boot up complex computer systems before their paid work began, reports Human Resources Director magazine: Tava Martin, who worked both remotely and at the company's Jacksonville facility, says the financial institution required her and fellow hourly workers to log into multiple security systems, download spreadsheets, and connect to virtual private networks — all before the clock started ticking on their workday. The process wasn't quick. According to the filing in the United States District Court for the Western District of North Carolina, employees needed 15 to 30 minutes each morning just to get their systems running. When technical problems occurred, it took even longer... Workers turned on their computers, waited for Windows to load, grabbed their cell phones to request a security token for the company's VPN, waited for that token to arrive, logged into the network, opened required web applications with separate passwords, and downloaded the Excel files they needed for the day. Only then could they start taking calls from business customers about regulatory reporting requirements... The unpaid work didn't stop at startup. During unpaid lunch breaks, many systems would automatically disconnect or otherwise lose connection, forcing employees to repeat portions of the login process — approximately three to five minutes of uncompensated time on most days, sometimes longer when a complete reboot was required. After shifts ended, workers had to log out of all programs and shut down their computers securely, adding another two to three minutes. Thanks to Slashdot reader Joe_Dragon for sharing the article.
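Taken together, the filing's numbers add up quickly; a rough Python estimate using the per-day ranges quoted above (the 250 workdays per year figure is an assumption for illustration, not something from the complaint):

# Back-of-the-envelope total of the unpaid time alleged in the filing.
WORKDAYS_PER_YEAR = 250

low_minutes_per_day = 15 + 3 + 2    # startup + lunch re-login + shutdown, low end
high_minutes_per_day = 30 + 5 + 3   # high end of the same ranges

low_hours = low_minutes_per_day * WORKDAYS_PER_YEAR / 60
high_hours = high_minutes_per_day * WORKDAYS_PER_YEAR / 60
print(f"Estimated unpaid time: {low_hours:.0f} to {high_hours:.0f} hours per employee per year")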

Read more of this story at Slashdot.

Chan Zuckerberg Initiative Shifts Bulk of Philanthropy, 'Going All In on AI-Powered Biology'

November 8, 2025 at 20:34
The Associated Press reports that "For the past decade, Dr. Priscilla Chan and her husband Mark Zuckerberg have focused part of their philanthropy on a lofty goal — 'to cure, prevent or manage all disease' — if not in their lifetime, then in their children's." During that decade they also funded other initiatives (including underprivileged schools and immigration reform), according to the article. But there's a change coming: Now, the billionaire couple is shifting the bulk of their philanthropic resources to Biohub, the pair's science organization, and focusing on using artificial intelligence to accelerate scientific discovery. The idea is to develop virtual, AI-based cell models to understand how they work in the human body, study inflammation and use AI to "harness the immune system" for disease detection, prevention and treatment. "I feel like the science work that we've done, the Biohub model in particular, has been the most impactful thing that we have done. So we want to really double down on that. Biohub is going to be the main focus of our philanthropy going forward," Zuckerberg said Wednesday evening at an event at the Biohub Imaging Institute in Redwood City, California.... Chan and Zuckerberg have pledged 99% of their lifetime wealth — from shares of Meta Platforms, where Zuckerberg is CEO — toward these efforts... On Thursday, Chan and Zuckerberg also announced that Biohub has hired the team at EvolutionaryScale, an AI research lab that has created large-scale AI systems for the life sciences... Biohub's ambition for the next years and decades is to create virtual cell systems that would not have been possible without recent advances in AI. Similar to how large language models learn from vast databases of digital books, online writings and other media, its researchers and scientists are working toward building virtual systems that serve as digital representations of human physiology on all levels, such as molecular, cellular or genome. As it is open source — free and publicly available — scientists can then conduct virtual experiments on a scale not possible in physical laboratories. "We will continue the model we've pioneered of bringing together scientists and engineers in our own state-of-the-art labs to build tools that advance the field," according to Thursday's blog post. "We'll then use those tools to generate new data sets for training new biological AI models to create virtual cells and immune systems and engineer our cells to detect and treat disease.... "We have also established the first large-scale GPU cluster for biological research, as well as the largest datasets around human cell types. This collection of resources does not exist anywhere else."

Read more of this story at Slashdot.

World's Largest Cargo Sailboat Completes Historic First Atlantic Crossing

November 8, 2025 at 19:34
Long-time Slashdot reader AmiMoJo shared this report from Marine Insight: The world's largest cargo sailboat, Neoliner Origin, completed its first transatlantic voyage on 30 October despite damage to one of its sails during the journey. The 136-metre-long vessel had to rely partly on its auxiliary motor and its remaining sail after the aft sail was damaged in a storm shortly after departure... Neoline, the company behind the project, said the damage reduced the vessel's ability to perform fully on wind power... The Neoliner Origin is designed to reduce greenhouse gas emissions by 80 to 90 percent compared to conventional diesel-powered cargo ships. According to the United Nations Conference on Trade and Development (UNCTAD), global shipping produces about 3 percent of worldwide greenhouse gas emissions... The ship can carry up to 5,300 tonnes of cargo, including containers, vehicles, machinery, and specialised goods. It arrived in Baltimore carrying Renault vehicles, French liqueurs, machinery, and other products. The Neoliner Origin is scheduled to make monthly voyages between Europe and North America, maintaining a commercial cruising speed of around 11 knots.

Read more of this story at Slashdot.

Bombshell Report Exposes How Meta Relied On Scam Ad Profits To Fund AI

November 8, 2025 at 18:34
"Internal documents have revealed that Meta has projected it earns billions from ignoring scam ads that its platforms then targeted to users most likely to click on them," writes Ars Technica, citing a lengthy report from Reuters. Reuters reports that Meta "for at least three years failed to identify and stop an avalanche of ads that exposed Facebook, Instagram and WhatsApp's billions of users to fraudulent e-commerce and investment schemes, illegal online casinos, and the sale of banned medical products..." On average, one December 2024 document notes, the company shows its platforms' users an estimated 15 billion "higher risk" scam advertisements — those that show clear signs of being fraudulent — every day. Meta earns about $7 billion in annualized revenue from this category of scam ads each year, another late 2024 document states. Much of the fraud came from marketers acting suspiciously enough to be flagged by Meta's internal warning systems. But the company only bans advertisers if its automated systems predict the marketers are at least 95% certain to be committing fraud, the documents show. If the company is less certain — but still believes the advertiser is a likely scammer — Meta charges higher ad rates as a penalty, according to the documents. The idea is to dissuade suspect advertisers from placing ads. The documents further note that users who click on scam ads are likely to see more of them because of Meta's ad-personalization system, which tries to deliver ads based on a user's interests... The documents indicate that Meta's own research suggests its products have become a pillar of the global fraud economy. A May 2025 presentation by its safety staff estimated that the company's platforms were involved in a third of all successful scams in the U.S. Meta also acknowledged in other internal documents that some of its main competitors were doing a better job at weeding out fraud on their platforms... The documents note that Meta plans to try to cut the share of Facebook and Instagram revenue derived from scam ads. In the meantime, Meta has internally acknowledged that regulatory fines for scam ads are certain, and anticipates penalties of up to $1 billion, according to one internal document. But those fines would be much smaller than Meta's revenue from scam ads, a separate document from November 2024 states. Every six months, Meta earns $3.5 billion from just the portion of scam ads that "present higher legal risk," the document says, such as those falsely claiming to represent a consumer brand or public figure or demonstrating other signs of deceit. That figure almost certainly exceeds "the cost of any regulatory settlement involving scam ads...." A planning document for the first half of 2023 notes that everyone who worked on the team handling advertiser concerns about brand-rights issues had been laid off. The company was also devoting resources so heavily to virtual reality and AI that safety staffers were ordered to restrict their use of Meta's computing resources. They were instructed merely to "keep the lights on...." Meta also was ignoring the vast majority of user reports of scams, a document from 2023 indicates. By that year, safety staffers estimated that Facebook and Instagram users each week were filing about 100,000 valid reports of fraudsters messaging them, the document says. But Meta ignored or incorrectly rejected 96% of them. Meta's safety staff resolved to do better. 
In the future, the company hoped to dismiss no more than 75% of valid scam reports, according to another 2023 document. A small advertiser would have to get flagged for promoting financial fraud at least eight times before Meta blocked it, a 2024 document states. Some bigger spenders — known as "High Value Accounts" — could accrue more than 500 strikes without Meta shutting them down, other documents say. Thanks to long-time Slashdot reader schwit1 for sharing the article.

Read more of this story at Slashdot.

Japanese Volunteer Translators Quit After Mozilla Begins Using Translation Bot

November 8, 2025 at 17:34
Long-time Slashdot reader AmiMoJo shared this report from Linuxiac: The Japanese branch of Mozilla's Support Mozilla (SUMO) community — made up of Japanese native speakers and responsible for localizing and maintaining Japanese-language support documentation for Firefox and other Mozilla products — has officially disbanded after more than two decades of voluntary work... SUMO, short for Support Mozilla, is the umbrella project for Mozilla's user support platform, support.mozilla.org, that brings together volunteers and contributors worldwide who translate, maintain, and update documentation, tutorials, and troubleshooting guides for Firefox, Thunderbird, and other Mozilla products... According to marsf, the long-time locale leader of the Japanese SUMO team, the decision to disband was triggered by the recent introduction of an automated translation system known as Sumobot. Deployed on October 22, the bot began editing and approving Japanese Knowledge Base articles without community oversight. The article notes marsf's complaints in a post to the SUMO discussion forum, including the fact that the new automated system automatically approved machine-translated content with only a 72-hour window for human review. As a result, more than 300 Knowledge Base articles were overwritten on the production server, which marsf called "mass destruction of our work."

Read more of this story at Slashdot.
