Reading view

Blue Origin Livestreams Attempt to Launch Unique 'EscaPADE' Mission to Mars

Blue Origin is livestreaming the launch of its New Glenn rocket, which will carry a unique mission for NASA. "Twin spacecraft are set to take off on an unprecedented, winding journey to Mars," reports CNN, "where they will investigate why the barren red planet began to lose its atmosphere billions of years ago." By observing two Mars locations simultaneously, this mission can measure how Mars responds to space weather in real time — and how the Martian magnetosphere changes... Called EscaPADE, the mission will aim for an orbital trajectory that has never been attempted before, according to aerospace company Advanced Space, which is supporting the project. If successful, it could be a crucial case study that can allow extraordinary flexibility for planetary science missions down the road. The robotic mission plans to spend a year idling in an orbital backroad before heading to its target destination... [R]ather than turning toward Mars, the two orbiters will instead aim for Lagrange Point 2, or L2 — a cosmic balance point about 1.5 million kilometers (930,000 miles) from Earth. Lagrange points are special because they act as gravitational wells in which the pull of the sun and Earth are in perfect balance. The conditions can allow spacecraft to linger without being dragged away... The spacecraft will then loop endlessly in a kidney bean-shaped orbit around L2 until next year's Mars transfer window opens. This "launch and loiter" project is part of NASA's SIMPLEx [Small, Innovative Missions for Planetary Exploration] program, which seeks high-value missions for less money, notes CNN. "EscaPADE's cost was less than $100 million, compared with the roughly $300 million to $600 million price tags of other NASA satellites orbiting Mars." "Blue Origin is also attempting to land and recover New Glenn's first-stage booster," notes another CNN article.
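
That 1.5-million-kilometer figure is the standard distance to the Sun-Earth L2 point, and it can be sanity-checked with a back-of-the-envelope Hill-radius approximation (my own check, not from CNN):

    d \approx r \left(\frac{M_\oplus}{3 M_\odot}\right)^{1/3} \approx 1.496 \times 10^{8}\,\mathrm{km} \times \left(\frac{3.0 \times 10^{-6}}{3}\right)^{1/3} \approx 1.5 \times 10^{6}\,\mathrm{km}

where r is the mean Earth-Sun distance (1 AU) and M⊕/M☉ ≈ 3.0 × 10⁻⁶ is the Earth-to-Sun mass ratio. Strictly, the "balance" at L2 is between the combined pull of the Sun and Earth and the centripetal acceleration needed to co-orbit with Earth, which is what lets a spacecraft loiter there cheaply.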

Read more of this story at Slashdot.

  •  

'AI Slop' in Court Filings: Lawyers Keep Citing Fake AI-Hallucinated Cases

"According to court filings and interviews with lawyers and scholars, the legal profession in recent months has increasingly become a hotbed for AI blunders," reports the New York Times: Earlier this year, a lawyer filed a motion in a Texas bankruptcy court that cited a 1985 case called Brasher v. Stewart. Only the case doesn't exist. Artificial intelligence had concocted that citation, along with 31 others. A judge blasted the lawyer in an opinion, referring him to the state bar's disciplinary committee and mandating six hours of A.I. training. That filing was spotted by Robert Freund, a Los Angeles-based lawyer, who fed it to an online database that tracks legal A.I. misuse globally. Mr. Freund is part of a growing network of lawyers who track down A.I. abuses committed by their peers, collecting the most egregious examples and posting them online. The group hopes that by tracking down the A.I. slop, it can help draw attention to the problem and put an end to it... [C]ourts are starting to map out punishments of small fines and other discipline. The problem, though, keeps getting worse. That's why Damien Charlotin, a lawyer and researcher in France, started an online database in April to track it. Initially he found three or four examples a month. Now he often receives that many in a day. Many lawyers... have helped him document 509 cases so far. They use legal tools like LexisNexis for notifications on keywords like "artificial intelligence," "fabricated cases" and "nonexistent cases." Some of the filings include fake quotes from real cases, or cite real cases that are irrelevant to their arguments. The legal vigilantes uncover them by finding judges' opinions scolding lawyers... Court-ordered penalties "are not having a deterrent effect," said Freund, who has publicly flagged more than four dozen examples this year. "The proof is that it continues to happen."

Read more of this story at Slashdot.

  •  

Lost Unix v4 Possibly Recovered on a Forgotten Bell Labs Tape From 1973

"A tape-based piece of unique Unix history may have been lying quietly in storage at the University of Utah for 50+ years," reports The Register. And the software librarian at Silicon Valley's Computer History Museum, Al Kossow of Bitsavers, believes the tape "has a pretty good chance of being recoverable." Long-time Slashdot reader bobdevine says the tape will be analyzed at the Computer History Museum. More from The Register: The news was posted to Mastodon by Professor Robert Ricci of the University of Utah's Kahlert School of Computing [along with a picture. "While cleaning a storage room, our staff found this tape containing #UNIX v4 from Bell Labs, circa 1973..." Ricci posted on Mastodon. "We have arranged to deliver it to the Computer History Museum."] The nine-track tape reel bears a handwritten label reading: UNIX Original From Bell Labs V4 (See Manual for format)... If it's what it says on the label, this is a notable discovery because little of UNIX V4 remains. That's unfortunate as this specific version is especially interesting: it's the first version of UNIX in which the kernel and some of the core utilities were rewritten in the new C programming language. Until now, the only surviving parts known were the source code to a slightly older version of the kernel and a few man pages — plus the Programmer's Manual [PDF], from November 1973. The Unix Heritage Society hosts those surviving parts — and apparently some other items of interest, according to a comment posted on Mastodon. "While going through the tapes from Dennis Ritchie earlier this year, I found some UNIX V4 distribution documents," posted Mastodon user "Broken Pipe," linking to tuhs.org/Archive/Applications/Dennis_Tapes/Gao_Analysis/v4_dist/. There's a file called license ("The program and information transmitted herewith is and shall remain the property of Bell Lab%oratories...") and coldboot ("Mount good tape on drive 0..."), plus a six-page "Setup" document that ends with these words... We expect to have a UNIX seminar early in 1974. Good luck. Ken Thompson Dennis Ritchie Bell Telephone Labs Murray Hill, NJ 07974

Read more of this story at Slashdot.

  •  

Neurodiverse Professionals 25% More Satisfied With AI Tools and Agents

An anonymous reader shared this report from CNBC: Neurodiverse professionals may see unique benefits from artificial intelligence tools and agents, research suggests. With AI agent creation booming in 2025, people with conditions like ADHD, autism, dyslexia and more report a more level playing field in the workplace thanks to generative AI. A recent study from the UK's Department for Business and Trade found that neurodiverse workers were 25% more satisfied with AI assistants and were more likely to recommend the tool than neurotypical respondents. [The study involved 1,000 users of Microsoft 365 Copilot from October through December of 2024.] "Standing up and walking around during a meeting means that I'm not taking notes, but now AI can come in and synthesize the entire meeting into a transcript and pick out the top-level themes," said Tara DeZao, senior director of product marketing at enterprise low-code platform provider Pega. DeZao, who was diagnosed with ADHD as an adult, has combination-type ADHD, which includes both inattentive symptoms (time management and executive function issues) and hyperactive symptoms (increased movement). "I've white-knuckled my way through the business world," DeZao said. "But these tools help so much...." Generative AI happens to be particularly adept at skills like communication, time management and executive functioning, creating a built-in benefit for neurodiverse workers who've previously had to find ways to fit in among a work culture not built with them in mind. Because of the skills that neurodiverse individuals can bring to the workplace — hyperfocus, creativity, empathy and niche expertise, just to name a few — some research suggests that organizations prioritizing inclusivity in this space generate nearly one-fifth higher revenue. "Investing in ethical guardrails, like those that protect and aid neurodivergent workers, is not just the right thing to do," said Kristi Boyd, an AI specialist with the SAS data ethics practice. "It's a smart way to make good on your organization's AI investments."

Read more of this story at Slashdot.

  •  

Rust Is Coming To Debian's APT Package Manager

A maintainer of Debian's Advanced Package Tool (APT) "has announced plans to introduce hard Rust dependencies into APT starting May 2026," reports the blog It's FOSS. The integration targets critical areas like parsing .deb, .ar, and .tar files, plus HTTP signature verification using Sequoia. [APT maintainer Julian Andres Klode] said these components "would strongly benefit from memory safe languages and a stronger approach to unit testing." He also gave a firm message to maintainers of Debian ports: "If you maintain a port without a working Rust toolchain, please ensure it has one within the next 6 months, or sunset the port." The reasoning is straightforward. Debian wants to move forward with modern tools rather than being held back by legacy architecture... Debian ports running on CPU architectures without Rust compiler support have six months to add proper toolchains. If they can't meet this deadline, those ports will need to be discontinued. As a result, some obscure or legacy platforms may lose official support. For most users on mainstream architectures like x86_64 and ARM, nothing changes. Your APT will simply become more secure and reliable under the hood. It's FOSS argues that "If done right, this could significantly strengthen APT's security and code quality." And the blog Linuxiac also supports the move. "By embedding Rust into APT, the distro joins a growing number of major open-source projects, such as the Linux kernel, Firefox, and systemd, that are gradually adopting Rust. And if I had to guess, I'd say this is just one of the first steps toward even deeper Rust integration in this legendary distribution, which is a good thing."
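
For context on what is being rewritten: a .deb package is an ar archive whose members (debian-binary, control.tar.*, data.tar.*) are exactly the untrusted input APT has to parse. A minimal std-only sketch of reading the ar magic and first member header (my own illustration of the format, not APT code; the input file name is hypothetical):

    use std::fs::File;
    use std::io::{self, Read};

    fn main() -> io::Result<()> {
        let mut f = File::open("example.deb")?; // hypothetical input

        // Every ar archive starts with the 8-byte magic "!<arch>\n".
        let mut magic = [0u8; 8];
        f.read_exact(&mut magic)?;
        assert_eq!(&magic, b"!<arch>\n", "not an ar archive");

        // Each member has a fixed 60-byte ASCII header:
        // name(16) mtime(12) uid(6) gid(6) mode(8) size(10) end(2).
        let mut hdr = [0u8; 60];
        f.read_exact(&mut hdr)?;

        let name = String::from_utf8_lossy(&hdr[0..16]).trim_end().to_owned();
        let size: u64 = String::from_utf8_lossy(&hdr[48..58])
            .trim()
            .parse()
            .expect("malformed size field");

        // In a .deb, the first member is "debian-binary", containing "2.0\n".
        println!("first member: {name} ({size} bytes)");
        Ok(())
    }

The appeal of Rust here is visible even in a toy: a short read or a non-numeric size field fails loudly instead of quietly corrupting memory the way an off-by-one in hand-rolled C parsing can.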

Read more of this story at Slashdot.

  •  

America's FAA Grounds MD-11s After Tuesday's Crash in Kentucky

UPDATE (11/9): America's Federal Aviation Administration has now grounded all U.S. MD-11 and MD-11F aircraft after Tuesday's crash "because the agency has determined the unsafe condition is likely to exist or develop in other products of the same type design," according to an emergency airworthiness directive obtained by CBS News. American multinational freight company UPS had already "grounded its fleet of MD-11 aircraft," reported the Guardian, "days after a cargo plane crash that killed at least 13 people in Kentucky. The grounded MD-11s are the same type of plane involved in Tuesday's crash in Louisville. They were originally built by McDonnell Douglas until it was taken over by Boeing." More details from NBC News: UPS said the move to temporarily ground its MD-11 fleet was made "out of an abundance of caution and in the interest of safety." MD-11s make up 9% of the company's air fleet, it said. "We made this decision proactively at the recommendation of the aircraft manufacturer. Nothing is more important to us than the safety of our employees and the communities we serve," UPS spokesman Jim Mayer said... FedEx said early Saturday that it was also grounding its MD-11s. The UPS rival has 28 such planes in operation, out of a fleet of around 700, FedEx said. Video shows that the left engine of the plane caught fire during takeoff and immediately detached, National Transportation Safety Board member Todd Inman said Wednesday. The National Transportation Safety Board is the lead agency in the investigation. Thanks to long-time Slashdot reader echo123 for suggesting the article.

Read more of this story at Slashdot.

  •  

The Linux Kernel Looks To "Bite The Bullet" In Enabling Microsoft C Extensions

Two patches queued into the Linux kernel's build system development tree, kbuild-next, would enable the -fms-extensions compiler flag everywhere, allowing GCC and LLVM/Clang to use the Microsoft C extensions when compiling the Linux kernel. Being in kbuild-next, these patches will likely be submitted for the Linux 6.19 kernel merge window next month, but it remains to be seen whether there will be any last-minute objections to this change...
  •  

Hilarious Unused Audio From 2003 Baseball Game Rediscovered by Video Game History Foundation

After popular arcade games like Mortal Kombat and Spy Hunter, Midway Games jumped into the home console market, and in 2003 launched their baseball game franchise "MLB Slugfest" for Xbox, PS2, and GameCube. But at times it was almost a parody of baseball, including announcers filling the long hours of airtime with bizarre, rambling conversations. ("I read today that kitchen utensils are gonna hurt more people tonight than lifting heavy objects during the day...") Now former Midway Games producer Mark Flitman has revealed the even weirder conversations rejected by Major League Baseball. ("Ah, baseball on a sunny afternoon. Is there anything better? We've been talking about breaking pop bottles with rocks. I guess that is...") The nonprofit Video Game History Foundation published the text in their digital archive — and shared 79 seconds of sound clips that were actually recorded but never used in the final game. ("Enjoying some smoked whale meat up here in the booth today...") Their Bluesky post with the audio drew over 5,500 likes and 2,400 reposts, with one commenter wondering if the bizarre (and unapproved) conversations were "part of the tactic where you include overtly inappropriate content to make the stuff you actually want to keep seem more appropriate." But the Foundation's library director thinks the voice actors were just going wild. "We talked with Mark on our podcast and it sounds like they just did a lot of improv and got carried away." He added later that the game's producer "would give them prompts and they'd run with it. The voice actors (Kevin Matthews and Tim Kitzrow) have backgrounds in sports radio and comedy, so they came up with wild nonsense like this." The gaming site Aftermath notes the Foundation also has an archive page for all the other sound files on the CD. Maybe it's the ultimate tribute to the craziness that was MLB Slugfest. Years ago some fans of the game shared their memories on Reddit... "The first time my friend tried to bean me and my hitter caught the ball was so hype, we were freaking out. Every game quickly evolved into trying to get our hitters to charge the mound." "I just remembered you could also kick the shit out of the fielder near your base if he got too close. Man that game was awesome." "You could do jump kicks into the catcher like Richie from The Benchwarmers." "Every time someone got on base we would run the ball over to them and beat their asses for 30 seconds. Good times." Six years after the launch of the franchise, Midway Games declared bankruptcy.

Read more of this story at Slashdot.

  •  

Cloud Hypervisor 49 Released With AArch64 + Microsoft Hyper-V Improvements

What began as an Intel open-source project focused on delivering a modern VMM for cloud workloads, written in Rust, is seeing increasing exposure on AArch64 and Microsoft Windows platforms. Intel itself is now largely inactive in Cloud Hypervisor: its lead maintainer left the company last year, and it has now been a year since the project saw any significant contributions from Intel...
  •  

Did ChatGPT Conversations Leak... Into Google Search Console Results?

"For months, extremely personal and sensitive ChatGPT conversations have been leaking into an unexpected destination," reports Ars Technica: the search-traffic tool for webmasters , Google Search Console. Though it normally shows the short phrases or keywords typed into Google which led someone to their site, "starting this September, odd queries, sometimes more than 300 characters long, could also be found" in Google Search Console. And the chats "appeared to be from unwitting people prompting a chatbot to help solve relationship or business problems, who likely expected those conversations would remain private." Jason Packer, owner of analytics consulting firm Quantable, flagged the issue in a detailed blog post last month, telling Ars Technica he'd seen 200 odd queries — including "some pretty crazy ones." (Web optimization consultant Slobodan ManiÄ helped Packer investigate...) Packer points out "nobody clicked share" or were given an option to prevent their chats from being exposed. Packer suspected that these queries were connected to reporting from The Information in August that cited sources claiming OpenAI was scraping Google search results to power ChatGPT responses. Sources claimed that OpenAI was leaning on Google to answer prompts to ChatGPT seeking information about current events, like news or sports... "Did OpenAI go so fast that they didn't consider the privacy implications of this, or did they just not care?" Packer posited in his blog... Clearly some of those searches relied on Google, Packer's blog said, mistakenly sending to GSC "whatever" the user says in the prompt box... This means "that OpenAI is sharing any prompt that requires a Google Search with both Google and whoever is doing their scraping," Packer alleged. "And then also with whoever's site shows up in the search results! Yikes." To Packer, it appeared that "ALL ChatGPT prompts" that used Google Search risked being leaked during the past two months. OpenAI claimed only a small number of queries were leaked but declined to provide a more precise estimate. So, it remains unclear how many of the 700 million people who use ChatGPT each week had prompts routed to Google Search Console. "Perhaps most troubling to some users — whose identities are not linked in chats unless their prompts perhaps share identifying information — there does not seem to be any way to remove the leaked chats from Google Search Console.."

Read more of this story at Slashdot.

  •  

GPU-Tweak III Updates and Will Monitor the Tilt of the Upcoming ASUS ROG Matrix GeForce RTX 5090 30th Anniversary Limited Edition

The GPU-Tweak III software has been updated and promises to take care of the upcoming ASUS ROG Matrix GeForce RTX 5090 30th Anniversary Limited Edition by monitoring its tilt angle. It must be said that with a price estimated at more than 4,000 euros, the card is worth looking after! It also promises to be quite a slab: with dimensions of 370.3 x 150.5 x 77.3 mm and a 3.9-slot footprint, it is easy to imagine it putting a considerable load on the PCB. […]

Read more
  •  

AMD and Intel CPU Prices, Week 45-2025: The Ryzen 7 7800X3D at a Bargain Price!!!

Let's talk CPU prices, and this week there is quite a bit of movement. Starting with Intel, the 14600K is at its lowest price yet, shedding 27 euros to land at 179 euros. The 14700K then loses 9 euros, the 245K drops 7 euros and the 265K 5 euros. On the AMD side, the 7800X3D falls sharply: it loses 42 euros and is also at its cheapest at our partner 1FODISCOUNT, priced at 315.90 euros, an excellent deal. The 9700X then drops 9 euros, the 9800X3D rises 26 euros and, finally, the 9950X3D loses 6 euros. […]

Read more
  •  

Sunday Builds, Season 2: the Cougar Omnyx by GGF

And we're off again for a second season of builds and watercooling of all kinds. As last year, it's not necessarily about what's fashionable, but above all about beautiful builds. This year we will only run one per week, as the pace was sometimes tricky to keep. As always, don't hesitate to send your builds to Lucas. This Sunday, we invite you to discover the Cougar Omnyx by GGF: […]

Read more
  •  

'Breaking Bad' Creator Hates AI, Promises New Show 'Pluribus' Was 'Made By Humans'

The new series from Breaking Bad creator Vince Gilligan, Pluribus, was emphatically made by humans, not AI, reports TechCrunch: If you watched all the way to the end of the new Apple TV show "Pluribus," you may have noticed an unusual disclaimer in the credits: "This show was made by humans." That terse message — placed right below a note that "animal wranglers were on set to ensure animal safety" — could potentially provide a model for other filmmakers seeking to highlight that their work was made without the use of generative AI. In fact, yesterday the former X-Files writer told Variety "I hate AI. AI is the world's most expensive and energy-intensive plagiarism machine...." He goes on, about how AI-generated content is "like a cow chewing its cud — an endlessly regurgitated loop of nonsense," and how the U.S. will fail to regulate the technology because of an arms race with China. He works himself up until he's laughing again, proclaiming: "Thank you, Silicon Valley! Yet again, you've fucked up the world." He also says "there's a very high possibility that this is all a bunch of horseshit," according to the article. "It's basically a bunch of centibillionaires whose greatest life goal is to become the world's first trillionaires. I think they're selling a bag of vapor." And earlier this week he told Polygon that he hasn't used ChatGPT "because, as of yet, no one has held a shotgun to my head and made me do it." (Adding "I will never use it.") Time magazine called Thursday's two-episode premiere "bonkers." Though ironically, that premiere hit its own dystopian glitch. "After months of buildup and an omnipresent advertising campaign, Apple's much-anticipated new show Pluribus made its debut..." reports Macworld. "And the service promptly suffered a major outage across the U.S. and Canada." As reported by Bloomberg and others, users started to report that the service had crashed at around 10:30 p.m. ET, shortly after Apple made the first two episodes of the show available to stream. There were almost 13,000 reports on Downdetector before Apple acknowledged the problem on its System Status page. Reports say the outage was brief, lasting less than an hour... [T]here remains a Resolved Outage note on Apple TV (simply saying "Some users were affected; users experienced a problem with Apple TV" between 10:29 and 11:38 p.m.), as well as on Apple Music and Apple Arcade, which also went down at the same time. Social media reports indicated that the outage was widespread.

Read more of this story at Slashdot.

  •  

New Firefox Mascot 'Kit' Unveiled On New Web Page

"The Firefox brand is getting a refresh and you get the first look," says a new web page at Firefox.com. "Kit's our new mascot and your new companion through an internet that's private, open and actually yours." Slashdot reader BrianFagioli believes the new mascot "is meant to communicate that message in a warmer, more relatable way." And Firefox is already selling shirts with Kit over the pocket (as well as stickers)...

Read more of this story at Slashdot.

  •  

Common Crawl Criticized for 'Quietly Funneling Paywalled Articles to AI Developers'

For more than a decade, the nonprofit Common Crawl "has been scraping billions of webpages to build a massive archive of the internet," notes the Atlantic, making it freely available for research. "In recent years, however, this archive has been put to a controversial purpose: AI companies including OpenAI, Google, Anthropic, Nvidia, Meta, and Amazon have used it to train large language models. "In the process, my reporting has found, Common Crawl has opened a back door for AI companies to train their models with paywalled articles from major news websites. And the foundation appears to be lying to publishers about this — as well as masking the actual contents of its archives..." Common Crawl's website states that it scrapes the internet for "freely available content" without "going behind any 'paywalls.'" Yet the organization has taken articles from major news websites that people normally have to pay for — allowing AI companies to train their LLMs on high-quality journalism for free. Meanwhile, Common Crawl's executive director, Rich Skrenta, has publicly made the case that AI models should be able to access anything on the internet. "The robots are people too," he told me, and should therefore be allowed to "read the books" for free. Multiple news publishers have requested that Common Crawl remove their articles to prevent exactly this use. Common Crawl says it complies with these requests. But my research shows that it does not. I've discovered that pages downloaded by Common Crawl have appeared in the training data of thousands of AI models. As Stefan Baack, a researcher formerly at Mozilla, has written, "Generative AI in its current form would probably not be possible without Common Crawl." In 2020, OpenAI used Common Crawl's archives to train GPT-3. OpenAI claimed that the program could generate "news articles which human evaluators have difficulty distinguishing from articles written by humans," and in 2022, an iteration on that model, GPT-3.5, became the basis for ChatGPT, kicking off the ongoing generative-AI boom. Many different AI companies are now using publishers' articles to train models that summarize and paraphrase the news, and are deploying those models in ways that steal readers from writers and publishers. Common Crawl maintains that it is doing nothing wrong. I spoke with Skrenta twice while reporting this story. During the second conversation, I asked him about the foundation archiving news articles even after publishers have asked it to stop. Skrenta told me that these publishers are making a mistake by excluding themselves from "Search 2.0" — referring to the generative-AI products now widely being used to find information online — and said that, anyway, it is the publishers that made their work available in the first place. "You shouldn't have put your content on the internet if you didn't want it to be on the internet," he said. Common Crawl doesn't log in to the websites it scrapes, but its scraper is immune to some of the paywall mechanisms used by news publishers. For example, on many news websites, you can briefly see the full text of any article before your web browser executes the paywall code that checks whether you're a subscriber and hides the content if you're not. Common Crawl's scraper never executes that code, so it gets the full articles. 
Thus, by my estimate, the foundation's archives contain millions of articles from news organizations around the world, including The Economist, the Los Angeles Times, The Wall Street Journal, The New York Times, The New Yorker, Harper's, and The Atlantic.... A search for nytimes.com in any crawl from 2013 through 2022 shows a "no captures" result, when in fact there are articles from NYTimes.com in most of these crawls. "In the past year, Common Crawl's CCBot has become the scraper most widely blocked by the top 1,000 websites," the article points out...
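
The blocking the article mentions happens in robots.txt: publishers add a rule group for Common Crawl's user agent. A minimal sketch of such a rule and a crawler-side check of it (the sample file contents are my own; real robots.txt parsing has more rules around group boundaries and path patterns):

    fn main() {
        // Sample robots.txt from a publisher opting out of Common Crawl's bot.
        let robots = "User-agent: CCBot\nDisallow: /\n\nUser-agent: *\nAllow: /\n";

        // Scan the groups and see whether CCBot is disallowed site-wide.
        let mut in_ccbot_group = false;
        let mut blocked = false;
        for line in robots.lines() {
            let line = line.trim();
            if let Some(agent) = line.strip_prefix("User-agent:") {
                in_ccbot_group = agent.trim().eq_ignore_ascii_case("CCBot");
            } else if in_ccbot_group && line.eq_ignore_ascii_case("Disallow: /") {
                blocked = true;
            }
        }
        println!("CCBot blocked site-wide: {blocked}");
    }

As the reporting stresses, though, a Disallow rule only stops future crawls by a compliant bot; it does nothing about pages already sitting in the archive, which is why publishers' removal requests matter.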

Read more of this story at Slashdot.

  •