Are We Living in a Golden Age of Stupidity?

Test scores across OECD countries peaked around 2012 and have declined since. IQ scores in many developed countries appear to be falling after rising throughout the twentieth century. Nataliya Kosmyna at MIT's Media Lab began noticing changes around two years ago when strangers started emailing her to ask if using ChatGPT could alter their brains. She posted a study in June tracking brain activity in 54 students writing essays. Those using ChatGPT showed significantly less activity in networks tied to cognitive processing and attention compared to students who wrote without digital help or used only internet search engines. Almost none could recall what they had written immediately after submitting their work. She received more than 4,000 emails afterward. Many came from teachers who reported students producing passable assignments without understanding the material. A British survey found that 92% of university students now use AI and roughly 20% have used it to write all or part of an assignment. Independent research has found that more screen time in schools correlates with worse results. Technology companies have designed products to be frictionless, removing the cognitive challenges brains need to learn. AI now allows users to outsource thinking itself.

Read more of this story at Slashdot.

  •  

AWS Outage Takes Thousands of Websites Offline for Three Hours

AWS experienced a three-hour outage early Monday morning that disrupted thousands of websites and applications across the globe. The cloud computing provider reported DNS problems with DynamoDB in its US-EAST-1 region in northern Virginia starting at 12:11 a.m. Pacific time. Over 4 million users reported issues, according to Downdetector. Snapchat outage reports fell from more than 22,000 to around 4,000 as systems recovered. Roblox dropped from over 12,600 complaints to fewer than 500. Reddit and the financial platform Chime remained affected longer. Perplexity, Coinbase and Robinhood attributed their platform disruptions directly to AWS. Gaming platforms including Fortnite, Clash Royale and Clash of Clans went offline. Signal confirmed the messaging app was down. In Britain, Lloyds Bank, Bank of Scotland, Vodafone, BT, and the HMRC website faced problems. United Airlines reported disrupted access to its app and website overnight. Some internal systems were temporarily affected. Delta experienced a small number of minor flight delays. By 3:35 a.m. Pacific time, AWS said the issue had been fully mitigated. Most service operations were succeeding normally though some requests faced throttling during final resolution. AWS holds roughly one-third of the cloud infrastructure market ahead of Microsoft and Google.

Read more of this story at Slashdot.

  •  

Should We Edit Nature to Help It Survive Climate Change?

A recent article in Noema magazine explores the issues in "editing nature to fix our failures." "It turns out playing God is neither difficult nor expensive," the article points out. "For about $2,000, I can go online and order a decent microscope, a precision injection rig, and a vial of enough CRISPR-Cas9 — an enzyme-based genome-editing tool — to genetically edit a few thousand fish embryos..." So when going beyond the kept-in-captivity Dire Wolf to the possibility of bringing back forests of the American chestnut tree, "The process is deceptively simple; the implications are anything but..."

If scientists could use CRISPR to engineer a more heat-tolerant coral, it would give coral a better chance of surviving a marine environment made warmer by climate change. It would also keep the human industries that rely on reefs afloat. But should we edit nature to fix our failures? And if we do, is it still natural...? Evolution is not keeping pace with climate change, so it is up to us to give it an assist [according to Christopher Preston, an environmental philosopher from the University of Montana, who wrote a book on CRISPR called "The Synthetic Age" (https://mitpress.mit.edu/9780262537094/the-synthetic-age/).] In some cases, the urgency is so great that we may not have time to waste. "There's no doubt there are times when you have to act," Preston continued. "Corals are a case where the benefits of reefs are just so enormous that keeping some alive, even if they're genetically altered, makes the risks worth it." Kate Quigley, a molecular ecologist and a principal research scientist at Australia's Minderoo Foundation, says "Engineering the ocean, or the atmosphere, or coral is not something to be taken lightly. Science is incredible. But that doesn't mean we know everything and what the unintended consequences might be."
Phillip Cleves, a principal investigator at the Carnegie Institution for Science's embryology department, is already researching whether coral could be bioengineered to be more tolerant to heat. But both of them have concerns: For all the research Quigley and Cleves have dedicated to climate-proofing coral, neither wants to see the results of their work move from experimentation in the lab to actual use in the open ocean. Needing to do so would represent an even greater failure by humankind to protect the environment that we already have. And while genetic editing and selective breeding offer concrete solutions for helping some organisms adapt, they will never be powerful enough to replace everything lost to rising water temperatures. "I will try to prepare for it, but the most important thing we can do to save coral is take strong action on climate change," Quigley told me. "We could pour billions and billions of dollars — in fact, we already have — into restoration, and even if, by some miracle, we manage to recreate the reef, there'd be other ecosystems that would need the same thing. So why can't we just get at the root issue?"

And then there's the blue-green algae dilemma: George Church, the Harvard Medical School professor of genetics behind Colossal's dire wolf project, was part of a team that successfully used CRISPR to change the genome of blue-green algae so that it could absorb up to 20% more carbon dioxide via photosynthesis. Silicon Valley tech incubator Y Combinator seized on the advance to call for scaled-up proposals, estimating that seeding less than 1% of the ocean's surface with genetically engineered phytoplankton would sequester approximately 47 gigatons of CO2 a year, more than enough to reverse all of last year's worldwide emissions. But moving from deploying CRISPR for species protection to providing a planetary service flips the ethical calculus. Restoring a chestnut forest or a coral reef preserves nature, or at least something close to it.
Genetically manipulating phytoplankton and plants to clean up after our mistakes raises the risk of a moral hazard. Do we have the right to rewrite nature so we can perpetuate our nature-killing ways?

Read more of this story at Slashdot.

  •  

'The AI Revolution's Next Casualty Could Be the Gig Economy'

"The gig economy is facing a reckoning," argues Business Insider's BI Today newsletter. Two stories this past week caught my eye. Uber unveiled a new way for its drivers to earn money. No, not by giving rides, but by helping train the ride-sharing company's AI models instead. On the same day, Waymo announced a partnership with DoorDash to test driverless grocery and meal deliveries. Both moves point toward the same future: one where the very workers who built the gig economy may soon find themselves training the technology that replaces them. Uber's new program allows drivers to earn cash by completing microtasks, such as taking photos and uploading audio clips, that aim to improve the company's AI systems. For drivers, it's a way to diversify income. For Uber, it's a way to accelerate its automated future. There's an irony here. By helping Uber strengthen its AI, drivers could be accelerating the very driverless world they fear... Uber already offers autonomous rides in Waymo vehicles in Atlanta and Austin, and plans to expand. Meanwhile, Waymo is rolling out its pilot partnership with DoorDash [for driverless grocery/meal deliveries] starting in Phoenix.

Read more of this story at Slashdot.

  •  

Windows 11 Update Breaks Recovery Environment, Making USB Keyboards and Mice Unusable

"Windows Recovery Environment (RE), as the name suggests, is a built-in set of tools inside Windows that allow you to troubleshoot your computer, including booting into the BIOS, or starting the computer in safe mode," writes Tom's Hardware. "It's a crucial piece of software that has now, unfortunately, been rendered useless (for many) as part of the latest Windows update." A new bug discovered in Windows 11's October build, KB5066835, makes it so that your USB keyboard and mouse stop working entirely, so you cannot interact with the recovery UI at all. This problem has already been recognized and highlighted by Microsoft, which clarified that a fix is on its way to address this issue. Any plugged-in peripherals will continue to work just fine inside the actual operating system, but as soon as you go into Windows RE, your USB keyboard and mouse will become unresponsive. It's important to note that if your PC fails to start up for any reason, it defaults to the recovery environment to, you know, recover and diagnose any issues that might've been preventing it from booting normally. Note that those hanging onto old PS/2-connector equipped keyboards and mice seem to be unaffected by this latest Windows software gaffe.

Read more of this story at Slashdot.

  •  

Was the Web More Creative and Human 20 Years Ago?

Readers in 2025 "may struggle to remember the optimism of the aughts, when the internet seemed to offer endless possibilities for virtual art and writing that was free..." argues a new review at Bookforum. "The content we do create online, if we still create, often feels unreflectively automatic: predictable quote-tweet dunks, prefabricated poses on Instagram, TikTok dances that hit their beats like clockwork, to say nothing of what's literally thoughtlessly churned out by LLM-powered bots." They write that author Joanna Walsh "wants us to remember how truly creative, and human, the internet once was," in the golden age of user-generated content — and funny cat picture sites like I Can Has Cheezburger:

I Can Has Cheezburger... was an amateur project, an outlet for tech professionals who wanted an easier way to exchange cute cat pics after a hard day at work. In Amateurs!: How We Built Internet Culture and Why It Matters, Walsh documents how unpaid creative labor is the basis for almost everything that's good (and much that's bad) online, including the open-source code Linux, developed by Linus Torvalds when he was still in school ("just as a hobby, won't be big and professional"), and even, in Walsh's account, the World Wide Web itself. The platforms that emerged in the 2000s as "Web 2.0," including Facebook, YouTube, Reddit, and Twitter, allowed anyone to experiment in a space that had been reserved for coders and hackers, making the internet interactive even for the inexpert and virtually unlimited in potential audience. The explosion in amateur creativity that followed took many forms, from memes to tweeted one-liners to diaristic blogs to durational digital performances to sloppy Photoshops to the formal and informal taxonomic structures — wikis, neologisms, digitally native dialects... [U]ser-generated content was also, at bottom, about the bottom line, a business model sold to us under the guise of artistic empowerment.
Even referring to an anonymous amateur as a "user," Walsh argues, cedes ground: these platforms are populated by producers, but their owners see us as, and turn us into, "helpless addicts." For some, online amateurism translated to professional success, a viral post earning an author a book deal, or a reputation as a top commenter leading to a staff writing job on a web publication... But for most, these days, participation in the online attention economy feels like a tax, or maybe a trickle of revenue, rather than free fun or a ticket to fame. The few remaining professionals in the arts and letters have felt pressured to supplement their full-time jobs with social media self-promotion, subscription newsletters, podcasts, and short-form video. On what was once called Twitter, users can pay, and sometimes get paid, to post with greater reach... The chapters are bookended by an introduction on the early promise of 2004 and a coda on the defeat of 2025 and supplemented by an appendix with a straightforward timeline of the major events and publications that serve as the book's touchstones... The online spaces where amateur content creators once "created and steered online culture" have been hollowed out and replaced by slop, but what really hurts is that the slop is being produced by bots trained on precisely that amateur content.

Read more of this story at Slashdot.

  •  

A Plan for Improving JavaScript's Trustworthiness on the Web

On Cloudflare's blog, a senior research engineer shares a plan for "improving the trustworthiness of JavaScript on the web." "It is as true today as it was in 2011 that Javascript cryptography is Considered Harmful." The main problem is code distribution. Consider an end-to-end-encrypted messaging web application. The application generates cryptographic keys in the client's browser that let users view and send end-to-end encrypted messages to each other. If the application is compromised, what would stop the malicious actor from simply modifying their Javascript to exfiltrate messages? It is interesting to note that smartphone apps don't have this issue. This is because app stores do a lot of heavy lifting to provide security for the app ecosystem. Specifically, they provide integrity, ensuring that apps being delivered are not tampered with, consistency, ensuring all users get the same app, and transparency, ensuring that the record of versions of an app is truthful and publicly visible. It would be nice if we could get these properties for our end-to-end encrypted web application, and the web as a whole, without requiring a single central authority like an app store. Further, such a system would benefit all in-browser uses of cryptography, not just end-to-end-encrypted apps. For example, many web-based confidential LLMs, cryptocurrency wallets, and voting systems use in-browser Javascript cryptography for the last step of their verification chains. In this post, we will provide an early look at such a system, called Web Application Integrity, Consistency, and Transparency (WAICT), that we have helped author. WAICT is a W3C-backed effort among browser vendors, cloud providers, and encrypted communication developers to bring stronger security guarantees to the entire web... We hope to build even wider consensus on the solution design in the near future.... We would like to have a way of enforcing integrity on an entire site, i.e., every asset under a domain.
For this, WAICT defines an integrity manifest, a configuration file that websites can provide to clients. One important item in the manifest is the asset hashes dictionary, mapping a hash belonging to an asset that the browser might load from that domain, to the path of that asset. The blog post points out that the WEBCAT protocol (created by the Freedom of the Press Foundation) "allows site owners to announce the identities of the developers that have signed the site's integrity manifest, i.e., have signed all the code and other assets that the site is serving to the user... We've made WAICT extensible enough to fit WEBCAT inside and benefit from the transparency components." The proposal also envisions a service storing metadata for transparency-enabled sites on the web (along with "witnesses" who verify the prefix tree holding the hashes for domain manifests). "We are still very early in the standardization process," with hopes to soon "begin standardizing the integrity manifest format. And then after that we can start standardizing all the other features. We intend to work on this specification hand-in-hand with browsers and the IETF, and we hope to have some exciting betas soon. In the meantime, you can follow along with our transparency specification draft, check out the open problems, and share your ideas."
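Since the manifest format hasn't been standardized yet, no real schema exists; as a purely hypothetical sketch of the asset-hashes idea described above (the function names and dictionary layout here are illustrative assumptions, not the WAICT spec), a client-side check might look like:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Hex-encoded SHA-256 digest of an asset body."""
    return hashlib.sha256(data).hexdigest()

def build_manifest(assets: dict[str, bytes]) -> dict[str, str]:
    # As the post describes: a dictionary mapping each asset's hash
    # to the path of that asset under the domain.
    return {sha256_hex(body): path for path, body in assets.items()}

def verify_asset(manifest: dict[str, str], path: str, body: bytes) -> bool:
    # A fetched asset is accepted only if its hash appears in the
    # manifest and is bound to the path it was loaded from.
    return manifest.get(sha256_hex(body)) == path

# A site publishes a manifest covering every asset it serves...
assets = {"/app.js": b"console.log('hello')", "/style.css": b"body { margin: 0 }"}
manifest = build_manifest(assets)

# ...and the browser rejects anything the manifest doesn't vouch for.
assert verify_asset(manifest, "/app.js", b"console.log('hello')")
assert not verify_asset(manifest, "/app.js", b"console.log('exfiltrate')")
```

In the real proposal the manifest itself would additionally be signed (as WEBCAT does) and logged in a transparency tree, so that a compromised server can't simply ship a new manifest alongside tampered code.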

Read more of this story at Slashdot.

  •  

Should Workers Start Learning to Work With AI?

"My boss thinks AI will solve every problem and is wildly enthusiastic about it," complains a mid-level worker at a Fortune 500 company, who considers the technology "unproven and wildly erratic." So how should they navigate the next 10 years until retirement, they ask the Washington Post's "Work Advice" columnist. The columnist first notes that "Despite promises that AI will eliminate tedious, 'low-value' tasks from our workload, many consumers and companies seem to be using it primarily as a cheap shortcut to avoid hiring professional actors, writers or artists — whose work, in some cases, was stolen to train the tools usurping them..." Kevin Cantera, a reader from Las Cruces, New Mexico [a writer for an education-tech company], willingly embraced AI for work. But as it turns out, he was training his replacement... Even without the "AI will take our jobs" specter, there's much to be wary of in the AI hype. Faster isn't always better. Parroting and predicting linguistic patterns isn't the same as creativity and innovation... There are concerns about hallucinations, faulty data models, and intentional misuse for purposes of deception. And that's not even addressing the environmental impact of all the power- and water-hogging data centers needed to support this innovation. And yet, it seems, resistance may be futile. The AI genie is out of the bottle and granting wishes. And at the rate it's evolving, you won't have 10 years to weigh the merits and get comfortable with it. Even if you move on to another workplace, odds are AI will show up there before long. Speaking as one grumpy old Luddite to another, it might be time to get a little curious about this technology just so you can separate helpfulness from hype. It might help to think of AI as just another software tool that you have to get familiar with to do your job. Learn what it's good for — and what it's bad at — so you can recommend guidelines for ethical and beneficial use.
Learn how to word your wishes to get accurate results. Become the "human in the loop" managing the virtual intern. You can test the bathwater without drinking it. Focus on the little ways AI can accommodate and support you and your colleagues. Maybe it could handle small tasks in your workflow that you wish you could hand off to an assistant. Automated transcriptions and meeting notes could be a life-changer for a colleague with auditory processing issues. I can't guarantee that dabbling in AI will protect your job. But refusing to engage definitely won't help. And if you decide it's time to change jobs, having some extra AI knowledge and experience under your belt will make you a more attractive candidate, even if you never end up having to use it.

Read more of this story at Slashdot.

  •  

To Fight Business 'Enshittification', Cory Doctorow Urges Tech Workers: Join Unions

Cory Doctorow has always warned that companies "enshittify" their services — shifting "as much as they can from users, workers, suppliers, and business customers to themselves." But this week Doctorow writes in Communications of the ACM that enshittification "would be much, much worse if not for tech workers," who have "the power to tell their bosses to go to hell..." When your skills are in such high demand that you can quit your job, walk across the street, and get a better one later that same day, your boss has a real incentive to make you feel like you are their social equal, empowered to say and do whatever feels technically right... The per-worker revenue for successful tech companies is unfathomable — tens or even hundreds of times their wages and stock compensation packages. "No wonder tech bosses are so excited about AI coding tools," Doctorow adds, "which promise to turn skilled programmers from creative problem-solvers to mere code reviewers for AI as it produces tech debt at scale. Code reviewers never tell their bosses to go to hell, and they are a lot easier to replace." So how should tech workers respond in a world where tech workers are now "as disposable as Amazon warehouse workers and drivers...?" Throughout the entire history of human civilization, there has only ever been one way to guarantee fair wages and decent conditions for workers: unions. Even non-union workers benefit from unions, because strong unions are the force that causes labor protection laws to be passed, which protect all workers. Tech workers have historically been monumentally uninterested in unionization, and it's not hard to see why. Why go to all those meetings and pay those dues when you could tell your boss to go to hell on Tuesday and have a new job by Wednesday? That's not the case anymore. It will likely never be the case again. Interest in tech unions is at an all-time high. 
Groups such as Tech Solidarity and the Tech Workers Coalition are doing a land-office business, and copies of Ethan Marcotte's You Deserve a Tech Union are flying off the shelves. Now is the time to get organized. Your boss has made it clear how you'd be treated if they had their way. They're about to get it. Thanks to long-time Slashdot reader theodp for sharing the article.

Read more of this story at Slashdot.

  •  

GIMP Now Offers an Official Snap Package For Linux Users

Slashdot reader BrianFagioli writes: GIMP has officially launched its own Snap package for Linux, finally taking over from the community-maintained Snapcrafters project. The move means all future GIMP releases will now be built directly from the team's CI pipeline, ensuring faster, more consistent updates across distributions. The developers also introduced a new "gimp-plugins" interface to support external plugins while maintaining Snap's security confinement, with G'MIC and OpenVINO already supported. This marks another major step in GIMP's cross-platform packaging efforts, joining Flatpak and MSIX distribution options. The first officially maintained version, GIMP 3.0.6, is available now on the "latest/stable" Snap channel, with preview builds rolling out for testers.

Read more of this story at Slashdot.

  •  

Desperate to Stop Waymo's Dead-End Detours, a San Francisco Resident Tried an Orange Cone with a Sign

"This is an attempt to stop Waymo cars from driving into the dead end," complains a home-made sign in San Francisco, "where they are forced to reverse and adversely affect the lives of the residents." On an orange traffic post, the home-made sign declares "NO WAYMO — 8:00 p.m. to 8:00 a.m.," with an explanation for the rest of the neighborhood. "Waymo comes at all hours of the night and up to 7 times per hour with flashing lights and screaming reverse sounds, waking people up and destroying the quality of life." SFGate reports that 1,400 people on Reddit upvoted a photo of the sign's text: It delves into the bureaucratic mess — multiple requests to Waymo, conversations with engineers, and 311 [municipal services] tickets, which had all apparently gone ignored — before finally providing instructions for human drivers. "Please move [the cones] back after you have entered so we can continue to try to block the Waymo cars from entering and disrupting the lives of residents." This isn't the first time Waymo's autonomous vehicles have disrupted San Francisco residents' peace. Last year, a fleet of the robotaxis created another sleepless fiasco in the city's SoMa neighborhood, honking at each other for hours throughout the night for two and a half weeks. Others on Reddit shared the concern. "I live at an dead end street in Noe Valley, and these Waymos always stuck there," another commenter posted. "It's been bad for more than a year," agreed another comment. "People on the Internet think you're just a hater but it's a real issue with Waymos." On Thursday "the sign remained at the corner of Lake Street and Second Avenue," notes SFGate. And yet something appeared to have shifted: Waymo vehicles weren't allowing drop-offs or pickups on the street, though whether this was due to the home-printed plea, the cone blockage, or simply updated routes remains unclear.

Read more of this story at Slashdot.

  •  

Sony Applies to Establish National Crypto Bank, Issue Stablecoin for US Dollar

An anonymous reader shared this report from Cryptonews: Sony has taken Wall Street by surprise after its banking division, Sony Bank, filed an application with the U.S. Office of the Comptroller of the Currency (OCC) to establish a national crypto bank under its subsidiary "Connectia Trust." The move positions the Japanese tech giant to become one of the first major global corporations to issue a U.S. dollar-backed stablecoin through a federally regulated institution. The application outlines plans to issue a U.S. dollar-pegged stablecoin, maintain the reserve assets backing it, and provide digital asset custody and management services. The filing places Sony alongside an elite list of firms, including Coinbase, Circle, Paxos, Stripe, and Ripple, currently awaiting OCC approval to operate as national digital banks. If approved, Sony would become the first major global technology company to receive a U.S. bank charter specifically tied to stablecoin issuance.... The Office of the Comptroller of the Currency "has received over 15 applications from fintech and crypto entities seeking trust charters," according to the article, calling it "a sign of renewed regulatory openness" under the office's new chief, a former blockchain executive. Meanwhile, the United States has also "conditionally given the nod to a new cryptocurrency-focused national bank launched by California tech billionaire Palmer Luckey," reports SFGate: To bring the bank to life, Luckey joined forces with Joe Lonsdale, co-founder of Palantir and venture firm 8VC, and financial backer and fellow Palantir co-founder Peter Thiel, according to the Financial Times. Luckey conceived the idea for Erebor following the collapse of Silicon Valley Bank in 2023, the Financial Times reported. The bank's name draws inspiration from J.R.R. Tolkien's "The Hobbit," referring to another name for the Lonely Mountain in the novel...
The OCC said it applied the "same rigorous review and standards" used in all charter applications. The ["preliminary"] approval was granted in just four months; however, compliance and security checks are expected to take several more months before the new bank can open. "I am committed to a dynamic and diverse federal banking system," America's Comptroller of the Currency said Wednesday, "and our decision today is a first but important step in living up to that commitment." "Permissible digital asset activities, like any other legally permissible banking activity, have a place in the federal banking system if conducted in a safe and sound manner. The OCC will continue to provide a path for innovative approaches to financial services to ensure a strong, diverse financial system that remains relevant over time."

Read more of this story at Slashdot.

  •  

Why Signal's Post-Quantum Makeover Is An Amazing Engineering Achievement

"Eleven days ago, the nonprofit entity that develops the protocol, Signal Messenger LLC, published a 5,900-word write-up describing its latest updates that bring Signal a significant step toward being fully quantum-resistant," writes Ars Technica: The mechanism that has made this constant key evolution possible over the past decade is what protocol developers call a "double ratchet." Just as a traditional ratchet allows a gear to rotate in one direction but not in the other, the Signal ratchets allow messaging parties to create new keys based on a combination of preceding and newly agreed-upon secrets. The ratchets work in a single direction, the sending and receiving of future messages. Even if an adversary compromises a newly created secret, messages encrypted using older secrets can't be decrypted... [Signal developers describe a "ping-pong" behavior as parties take turns replacing ratchet key pairs one at a time.] Even though the ping-ponging keys are vulnerable to future quantum attacks, they are broadly believed to be secure against today's attacks from classical computers. The Signal Protocol developers didn't want to remove them or the battle-tested code that produces them. That led to their decision to add quantum resistance by adding a third ratchet. This one uses a quantum-safe Key-Encapsulation Mechanism (KEM) to produce new secrets much like the Diffie-Hellman ratchet did before, ensuring quantum-safe, post-compromise security... The technical challenges were anything but easy. Elliptic curve keys generated in the X25519 implementation are about 32 bytes long, small enough to be added to each message without creating a burden on already constrained bandwidths or computing resources. An ML-KEM-768 key, by contrast, is 1,000 bytes. Additionally, Signal's design requires sending both an encryption key and a ciphertext, making the total size 2,272 bytes...
To manage the asynchrony challenges, the developers turned to "erasure codes," a method of breaking up larger data into smaller pieces such that the original can be reconstructed using any sufficiently sized subset of chunks... The Signal engineers have given this third ratchet the formal name: Sparse Post Quantum Ratchet, or SPQR for short. The third ratchet was designed in collaboration with PQShield, AIST, and New York University. The developers presented the erasure-code-based chunking and the high-level Triple Ratchet design at the Eurocrypt 2025 conference. Outside researchers are applauding the work. "If the normal encrypted messages we use are cats, then post-quantum ciphertexts are elephants," Matt Green, a cryptography expert at Johns Hopkins University, wrote in an interview. "So the problem here is to sneak an elephant through a tunnel designed for cats. And that's an amazing engineering achievement. But it also makes me wish we didn't have to deal with elephants." Thanks to long-time Slashdot reader mspohr for sharing the article.
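Signal's actual SPQR chunking uses production-grade erasure codes; as a toy illustration of the principle only (a simple single-parity scheme assumed here for brevity, not Signal's construction), a message split into k chunks plus one XOR parity chunk can be rebuilt from any k of the k+1 pieces:

```python
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length chunks."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data: bytes, k: int) -> list[bytes]:
    # Pad to a multiple of k, split into k data chunks, and append
    # one parity chunk (the XOR of all data chunks) at index k.
    size = -(-len(data) // k)  # ceiling division
    padded = data.ljust(k * size, b"\0")
    chunks = [padded[i * size:(i + 1) * size] for i in range(k)]
    return chunks + [reduce(xor, chunks)]

def decode(pieces: dict[int, bytes], k: int, length: int) -> bytes:
    # Reconstruct the original from any k of the k+1 pieces.
    if len(pieces) < k:
        raise ValueError("need at least k pieces")
    pieces = dict(pieces)  # don't mutate the caller's dict
    missing = [i for i in range(k) if i not in pieces]
    if missing:
        # The one lost data chunk equals the XOR of every piece we
        # do have, because parity = XOR of all data chunks.
        pieces[missing[0]] = reduce(xor, list(pieces.values()))
    return b"".join(pieces[i] for i in range(k))[:length]

# A large post-quantum key split into 3 chunks + 1 parity piece;
# piece 1 is lost in transit, yet the original is recovered.
secret = b"post-quantum ratchet key material"
pieces = encode(secret, 3)
subset = {0: pieces[0], 2: pieces[2], 3: pieces[3]}
assert decode(subset, 3, len(secret)) == secret
```

Real erasure codes (e.g. Reed-Solomon) tolerate multiple lost pieces, but the payoff is the same one the article describes: a 2,272-byte post-quantum payload can ride along in small per-message pieces without any single dropped or out-of-order message stalling the ratchet.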

Read more of this story at Slashdot.

  •  

Are Supershear Earthquakes Even More Dangerous Than We Thought?

Long-time Slashdot reader Bruce66423 shared this article from the Los Angeles Times: Scientists have increasingly observed how the rupturing of a fault during an earthquake can be even faster than the speed of another type of damaging seismic wave, theoretically generating energy on the level of a sonic boom. These shock waves — created during "supershear" earthquakes — can worsen how bad the ground shakes both side to side and up and down along an affected fault area, scientists at USC, Caltech and the University of Illinois Urbana-Champaign wrote in a recent opinion article for the journal Seismological Research Letters. Although not everyone agrees that supershear earthquakes are inherently more destructive than other types, the potential implications are massive and need to be accounted for in seismic forecasts, the scientists contend... In just the last 15 years, 14 of 39 large strike-slip earthquakes have exhibited features of supershear ruptures, the opinion article said.... In California, supershear earthquakes would be expected on the straightest of "strike-slip" faults — in which one block of earth slides past another — like the San Andreas... There are a number of communities directly on top of the San Andreas fault. Among them are Coachella, Indio, Cathedral City, Palm Springs, Desert Hot Springs, Banning, Yucaipa, Highland, San Bernardino, Wrightwood, Palmdale, Gorman, Frazier Park, San Juan Bautista, Palo Alto, Portola Valley, Woodside, San Bruno, South San Francisco, Pacifica, Daly City and Bodega Bay. One earthquake scientist suggests building codes need to be more strict, according to the article. But it also cites a U.S. Geological Survey research geophysicist who isn't convinced by the new opinion article. "I don't think we know yet whether supershear ruptures really are more destructive."

Read more of this story at Slashdot.

  •  

FSF Reminds Consumers That Truly Free OS's Exist

"Microsoft does everything in its power to keep Windows users under its control," warns the Free Software Foundation in a new blog post this week. They argue that the lack of freedom that comes with proprietary code "forces users to surrender to decisions made by Microsoft to maximize its profits and further lock users into its product ecosystem" — describing both the problem and one possible solution: [IT management company Lansweeper] found that of the 30 million enterprise systems they manage, over 40% are incompatible with Windows 11. This is due to hardware requirements like Treacherous Platform Module version 2.0 — a proprietary chip that uses cryptography that users can't influence or audit to restrict their control over the system. The end of Windows 10 support is the perfect opportunity to break free from this cycle and switch to a GNU/Linux operating system (GNU/Linux OS), a system that respects your freedom... The endless, freedom-restricting cycle of planned obsolescence is not inevitable. Instead of paying Microsoft for continued updates or buying new hardware, Windows users left behind by Microsoft should install GNU/Linux. Free Software Foundation-certified GNU/Linux distributions respect the user's freedom to run their computer as they wish, to study and modify its source code, and to redistribute copies. They don't require update contracts, often run faster on older hardware, and, most importantly, put you in control. "If you're already a GNU/Linux user, you have an important role to play. Help your friends and family make the switch by sharing your knowledge and helping them install a free-as-in-freedom OS. Show them what it means to have real control over their computing!"

Read more of this story at Slashdot.

  •  

Extortion and Ransomware Drive Over Half of Cyberattacks — Sometimes Using AI, Microsoft Finds

Microsoft said in a blog post this week that "over half of cyberattacks with known motives were driven by extortion or ransomware... while attacks focused solely on espionage made up just 4%." And Microsoft's annual digital threats report found operations expanding even more through AI, with cybercriminals "accelerating malware development and creating more realistic synthetic content, enhancing the efficiency of activities such as phishing and ransomware attacks." [L]egacy security measures are no longer enough; we need modern defenses leveraging AI and strong collaboration across industries and governments to keep pace with the threat... Over the past year, both attackers and defenders harnessed the power of generative AI. Threat actors are using AI to boost their attacks by automating phishing, scaling social engineering, creating synthetic media, finding vulnerabilities faster, and creating malware that can adapt itself... For defenders, AI is also proving to be a valuable tool. Microsoft, for example, uses AI to spot threats, close detection gaps, catch phishing attempts, and protect vulnerable users. As both the risks and opportunities of AI rapidly evolve, organizations must prioritize securing their AI tools and training their teams... Amid the growing sophistication of cyber threats, one statistic stands out: more than 97% of identity attacks are password attacks. In the first half of 2025 alone, identity-based attacks surged by 32%. That means the vast majority of malicious sign-in attempts an organization might receive are via large-scale password guessing attempts. Attackers get usernames and passwords ("credentials") for these bulk attacks largely from credential leaks. However, credential leaks aren't the only place where attackers can obtain credentials. This year, we saw a surge in the use of infostealer malware by cybercriminals... Luckily, the solution to identity compromise is simple. 
The implementation of phishing-resistant multifactor authentication (MFA) can stop over 99% of this type of attack even if the attacker has the correct username and password combination. "Security is not only a technical challenge but a governance imperative..." Microsoft adds in its blog post. "Governments must build frameworks that signal credible and proportionate consequences for malicious activity that violates international rules." (The report also found that America is the #1 most-targeted country — and that many U.S. companies have outdated cyber defenses.) But while "most of the immediate attacks organizations face today come from opportunistic criminals looking to make a profit," Microsoft writes that nation-state threats "remain a serious and persistent threat." More details from the Associated Press: Russia, China, Iran and North Korea have sharply increased their use of artificial intelligence to deceive people online and mount cyberattacks against the United States, according to new research from Microsoft. This July, the company identified more than 200 instances of foreign adversaries using AI to create fake content online, more than double the number from July 2024 and more than ten times the number seen in 2023. Examples of foreign espionage cited by the article: China is continuing its broad push across industries to conduct espionage and steal sensitive data... Iran is going after a wider range of targets than ever before, from the Middle East to North America, as part of broadening espionage operations... "[O]utside of Ukraine, the top ten countries most affected by Russian cyber activity all belong to the North Atlantic Treaty Organization (NATO) — a 25% increase compared to last year." North Korea remains focused on revenue generation and espionage... There was one especially worrying finding. 
The report found that critical public services are often targeted, partly because their tight budgets limit their incident response capabilities, "often resulting in outdated software.... Ransomware actors in particular focus on these critical sectors because of the targets' limited options. For example, a hospital must quickly resolve its encrypted systems, or patients could die, potentially leaving no other recourse but to pay."

Read more of this story at Slashdot.

  •  

New Data Shows Record CO2 Levels in 2024. Are Carbon Sinks Failing?

The Guardian reports that atmospheric carbon dioxide "soared by a record amount in 2024 to hit another high, UN data shows." But what's more troubling is why: Several factors contributed to the leap in CO2, including another year of unrelenting fossil fuel burning despite a pledge by the world's countries in 2023 to "transition away" from coal, oil and gas. Another factor was an upsurge in wildfires in conditions made hotter and drier by global heating. Wildfire emissions in the Americas reached historic levels in 2024, which was the hottest year yet recorded. However, scientists are concerned about a third factor: the possibility that the planet's carbon sinks are beginning to fail. About half of all CO2 emissions every year are taken back out of the atmosphere by being dissolved in the ocean or being sucked up by growing trees and plants. But the oceans are getting hotter and can therefore absorb less CO2 while on land hotter and drier conditions and more wildfires mean less plant growth... Atmospheric concentrations of methane and nitrous oxide — the second and third most important greenhouse gases related to human activities — also rose to record levels in 2024. About 40% of methane emissions come from natural sources. But scientists are concerned that global heating is leading to more methane production in wetlands, another potential feedback loop. Thanks to long-time Slashdot reader mspohr for sharing the article.

Read more of this story at Slashdot.

  •  

OpenAI Cofounder Builds New Open Source LLM 'Nanochat' - and Doesn't Use Vibe Coding

An anonymous reader shared this report from Gizmodo: It's been over a year since OpenAI cofounder Andrej Karpathy exited the company. In the time since he's been gone, he coined and popularized the term "vibe coding" to describe the practice of farming out coding projects to AI tools. But earlier this week, when he released his own open source model called nanochat, he admitted that he wrote the whole thing by hand, vibes be damned. Nanochat, according to Karpathy, is a "minimal, from scratch, full-stack training/inference pipeline" that is designed to let anyone build a large language model with a ChatGPT-style chatbot interface in a matter of hours and for as little as $100. Karpathy said the project contains about 8,000 lines of "quite clean code," which he wrote by hand — not necessarily by choice, but because he found AI tools couldn't do what he needed. "It's basically entirely hand-written (with tab autocomplete)," he wrote. "I tried to use claude/codex agents a few times but they just didn't work well enough at all and net unhelpful."

Read more of this story at Slashdot.

  •  

Repair Plan Underway to Restore Power at Ukrainian Nuclear Plant

The Associated Press reports: Work has begun to repair the damaged power supply to Ukraine's Zaporizhzhia nuclear power plant, the head of the U.N.'s nuclear watchdog said Saturday. The repairs are expected to end a precarious four-week outage that left the plant dependent on backup generators. Russian and Ukrainian forces established special ceasefire zones for repairs to be safely carried out, said the head of the International Atomic Energy Agency, Rafael Grossi... "Both sides engaged constructively with the IAEA to enable the complex repair plan to proceed," Grossi said in a statement... The Zaporizhzhia plant, Europe's largest nuclear power station, has been operating on diesel back-up generators since Sept. 23, when its last remaining external power line was severed in attacks that Russia and Ukraine each blamed on the other. The plant has been in an area under Russian control since early in Moscow's full-scale invasion of Ukraine and is not in service, but it needs reliable power to cool its six shutdown reactors and spent fuel to avoid any catastrophic nuclear incidents.

Read more of this story at Slashdot.

  •  

Protein Powders and Shakes Contain High Levels of Lead

Long-time Slashdot reader fjo3 shares an announcement from the U.S.-based nonprofit Consumer Reports: Protein powders still carry troubling levels of toxic heavy metals, according to a new Consumer Reports (CR) investigation. Our latest tests of 23 protein powders and ready-to-drink shakes from popular brands found that heavy metal contamination has become even more common among protein products, raising concerns that the risks are growing right alongside the industry itself. For more than two-thirds of the products we analyzed, a single serving contained more lead than CR's food safety experts say is safe to consume in a day — some by more than 10 times... [I]n addition to the average level of lead being higher than what we found 15 years ago, there were also fewer products with undetectable amounts of it. The outliers also packed a heavier punch. Naked Nutrition's Vegan Mass Gainer powder, the product with the highest lead levels, had nearly twice as much lead per serving as the worst product we analyzed in 2010. Nearly all the plant-based products CR tested had elevated lead levels, but some were particularly concerning. Two had so much lead that CR's experts caution against using them at all... Dairy-based protein powders and shakes generally had the lowest amounts of lead, but half of the products we tested still had high enough levels of contamination that CR's experts advise against daily use... Unlike prescription and over-the-counter drugs, supplements like protein powders aren't reviewed, approved, or tested by the Food and Drug Administration before they are sold. Federal regulations also don't generally require supplement makers to prove their products are safe, and there are no federal limits for the amount of heavy metals they can contain. The article acknowledges that "Many of these powders are fine to have occasionally, and even those with the highest lead levels are far below the concentration needed to cause immediate harm. 
That said, because most people don't actually need protein supplements — nutrition experts say the average American already gets plenty — it makes sense to ask whether these products are worth the added exposure."

Read more of this story at Slashdot.

  •