
UV-C Light Kills Nearly Everything - Except This Unusual Organism

"Earth's ozone layer blocks the Sun's shortest wave radiation, called UV-C, which is so damaging to cells in high doses that it's a go-to sterilizer in hospitals," writes Slashdot reader sciencehabit. "UV-C is such a killer, in fact, that scientists have questioned whether life can survive on worlds that lack an ozone layer, such as Mars or distant exoplanets. But research published this month in Astrobiology suggests one hardy lichen, a hybrid organism made of algae and fungi, may have cracked the UV-C code with a built-in sunscreen, despite never experiencing these rays in its long evolutionary history." Science magazine explains: When scientists brought a sample of the species, the common desert dweller Clavascidium lacinulatum, back to the lab, graduate student Tejinder Singh put the lichen through the wringer. First, Singh dehydrated the lichen, to make sure it couldn't grow back in real time and mask any UV damage. Then he placed the lichen a few centimeters under a UV lamp and blasted it with radiation. The lichen seemed just fine. So Singh purchased the most powerful UV-C lamp he could find online, capable of sending out 20 times more radiation than the amount expected on Mars. When he tested the lamp on the most radiation-resistant life form on Earth, the bacterium Deinococcus radiodurans, it died in less than a minute. After 3 months—likely the highest amount of UV-C radiation ever tested on an organism—Singh pulled the sample so he could finish his master's thesis in time. About half of the lichen's algal cells had survived. Then, when the team ground up and cultured part of the surviving lichen, about half of its algal cells sprouted new, green colonies after 2 weeks, showing it maintained the ability to reproduce. The species may provide a blueprint for surviving on Mars or exoplanets, which don't have an ozone layer to protect them.

Read more of this story at Slashdot.

  •  

In Last-Minute Move, Canada Rescinds Digital Services Tax, Restarts Negotiations

"Canada and the United States have resumed trade negotiations," reports Newsweek, "after Canadian Prime Minister Mark Carney agreed to rescind the country's digital services tax on U.S. technology companies." The development follows President Donald Trump's announcement on Friday that he was suspending all trade talks with Canada "effective immediately" over the tax policy... Canada's quick reversal signals the high stakes involved in maintaining trade relationships with the United States, particularly given the countries' deeply integrated economies. Carney's office confirmed on Sunday that both leaders have agreed to restart negotiations after Canada committed to abandoning the 3 percent levy targeting major U.S. tech giants including Amazon, Google, Meta, Uber, and Airbnb. The tax was scheduled to take effect Monday and would have applied retroactively, creating an estimated $2 billion bill for American companies. The conflict escalated rapidly after Canada's Finance Department confirmed Friday that companies would still be required to make their first digital tax payments Monday, despite ongoing negotiations. The tax targeted revenue generated from Canadian users rather than corporate profits, making it particularly burdensome for technology companies operating internationally... Canada's decision to rescind the tax came "in anticipation" of reaching a broader trade agreement, according to government officials. With negotiations resuming, both countries will likely focus on addressing broader trade issues beyond the digital services tax.


  •  

After 45 Years, 74-Year-Old Spreadsheet Legend/EFF Cofounder Mitch Kapor Gets His MIT Degree

Mitch Kapor dropped out of MIT's business school in 1979 — and soon cofounded the pioneering spreadsheet company Lotus. He also cofounded the EFF, was the founding chair of the Mozilla Foundation, and is now a billionaire (and a VC investor at Kapor Capital). 45 years later, when the 74-year-old was invited to give a guest lecture at MIT's business school last year by an old friend, professor Bill Aulet, Aulet teased the billionaire: "there's only one problem, Mitch, I see here you haven't graduated from MIT." The Boston Globe tells the story... After graduating from Yale in 1971 and bouncing around for almost a decade as "a lost and wandering soul," working as a disc jockey, a Transcendental Meditation teacher, and a mental health counselor, Kapor said he became entranced by the possibilities of the new Apple II personal computer. He started writing programs to solve statistics problems and analyze data, which caught the attention of Boston-area software entrepreneurs Dan Bricklin and Bob Frankston, who co-created VisiCalc, one of the first spreadsheet programs. They introduced Kapor to their California-based software publisher, Personal Software. Midway through Kapor's 12-month master's program, the publisher offered him the then-princely sum of about $20,000 if he'd adapt his stats programs to work with VisiCalc. To finish the project, he took a leave from MIT, but then he decided to leave for good to take a full-time job at Personal. Comparing his decision to those of other famed tech founder dropouts, like Bill Gates, Kapor said he felt the startup world was calling to him. "It was just so irresistible," he said. "It felt like I could not let another moment go by without taking advantage of this opportunity or the window would close...." When Aulet made his joke on the phone call with his old friend in 2024, Kapor had largely retired from investing and realized that he wanted to complete his degree. 
"I don't know what prompted me, but it started a conversation" with MIT about the logistics of finally graduating, Kapor said. By the time Kapor gave the lecture in March, Aulet had discovered Kapor was only a few courses short. MIT does not give honorary degrees, but school officials allow students to make up for missing classes with an independent study and a written thesis. Kapor decided to write a paper on the roots and development of his investing strategy. "It's timely, it's highly relevant, and I have things to say," he said. One 77-page thesis later, Kapor, donning a cap and gown, finally received his master's degree in May, at a ceremony in the Hyatt Regency Hotel in Cambridge, not far from where he founded Lotus.


  •  

UK Scientists Plan to Construct Synthetic Human Genetic Material From Scratch

"Researchers are embarking on an ambitious project to construct human genetic material from scratch," reports the Guardian, "to learn more about how DNA works and pave the way for the next generation of medical therapies." Scientists on the Synthetic Human Genome (SynHG) project will spend the next five years developing the tools and knowhow to build long sections of human genetic code in the lab. These will be inserted into living cells to understand how the code operates. Armed with the insights, scientists hope to devise radical new therapies for the treatment of diseases. Among the possibilities are living cells that are resistant to immune attack or particular viruses, which could be transplanted into patients with autoimmune diseases or with liver damage from chronic viral infections. "The information gained from synthesising human genomes may be directly useful in generating treatments for almost any disease," said Prof Jason Chin, who is leading the project at the MRC's Laboratory of Molecular Biology (LMB) in Cambridge... For the SynHG project, researchers will start by making sections of a human chromosome and testing them in human skin cells. The project involves teams from the universities of Cambridge, Kent, Manchester, Oxford and Imperial College London... Embedded in the project is a parallel research effort into the social and ethical issues that arise from making genomes in the laboratory, led by Prof Joy Zhang at the University of Kent. "We're a little way off having anything tangible that can be used as a therapy, but this is the time to start the discussion on what we want to see and what we don't want to see," said Dr Julian Sale, a group leader at the LMB.


  •  

Beware of Promoting AI in Products, Researchers Warn Marketers

The Wall Street Journal reports that "consumers have less trust in offerings labeled as being powered by artificial intelligence, which can reduce their interest in buying them, researchers say." The effect is especially pronounced for offerings perceived to be riskier buys, such as a car or a medical-diagnostic service, say the researchers, who were from Washington State University and Temple University. "When we were thinking about this project, we thought that AI will improve [consumers' willingness to buy] because everyone is promoting AI in their products," says Dogan Gursoy, a regents professor of hospitality business management at Washington State and one of the study's authors. "But apparently it has a negative effect, not a positive one." In multiple experiments, involving different people, the researchers split participants into two groups of around 100 each. One group read ads for fictional products and services that featured the terms "artificial intelligence" or "AI-powered," while the other group read ads that used the terms "new technology" or "equipped with cutting-edge technologies." In each test, members of the group that saw the AI-related wording were less likely to say they would want to try, buy or actively seek out any of the products or services being advertised compared with people in the other group. The difference was smaller for items researchers called low risk — such as a television and a generic customer-service offering... Meanwhile, a separate, forthcoming study from market-research firm Parks Associates that used different methods and included a much larger sample size came to similar conclusions about consumers' reaction to AI in products. "We straight up asked consumers, 'If you saw a product that you liked that was advertised as including AI, would that make you more or less likely to buy it?' " says Jennifer Kent, the firm's vice president of research. 
Of the roughly 4,000 Americans in the survey, 18% said AI would make them more likely to buy, 24% said less likely, and 58% said it made no difference, according to the study. "Before this wave of generative AI attention over the past couple of years, AI-enabled features actually have tested very, very well," Kent says.


  •  

Earth is Trapping Much More Heat Than Climate Models Forecast

What happens if you track how much heat enters Earth's atmosphere and how much heat leaves? You discover that Earth's energy budget "is now well and truly out of balance," three climate researchers write at The Conversation: Our recent research found this imbalance has more than doubled over the last 20 years. Other researchers have come to the same conclusions. This imbalance is now substantially more than climate models have suggested... These findings suggest climate change might well accelerate in the coming years... [T]he burning of coal, oil and gas has now added more than two trillion tonnes of carbon dioxide and other greenhouse gases to the atmosphere. These trap more and more heat, preventing it from leaving. Some of this extra heat is warming the land or melting sea ice, glaciers and ice sheets. But this is a tiny fraction. Fully 90% has gone into the oceans due to their huge heat capacity... The doubling of the energy imbalance has come as a shock, because the sophisticated climate models we use largely didn't predict such a large and rapid change. Typically, the models forecast less than half of the change we're seeing in the real world. We don't yet have a full explanation. But new research suggests changes in clouds are a big factor. Clouds have a cooling effect overall. But the area covered by highly reflective white clouds has shrunk, while the area of jumbled, less reflective clouds has grown. While we don't know why the clouds are changing, it "might be part of a trend caused by global warming itself, that is, a positive feedback on climate change. These findings suggest recent extremely hot years are not one-offs but may reflect a strengthening of warming over the coming decade or longer...." "We've known the solution for a long time: stop the routine burning of fossil fuels and phase out human activities causing emissions such as deforestation."


  •  

For the Free Software Foundation's Summer Fundraiser, the 'GNU Press Shop' is Open

The Free Software Foundation is a non-profit — and they're having some fun with it. They've just announced a summer fundraiser, "and that means the GNU Press Shop is open!" From now until July 28, you can buy your FSF gear at the GNU Press shop. First and foremost, there's the launch of the FSF's fortieth anniversary shirt in a summery yellow. We're taking orders for a limited time for these (until July 28), and then printing them — you should have yours on your shoulders a few weeks after the shop closes. We've also restocked some favorites in the shop:

- A fresh batch of the popular Ada & Zangemann: A Tale of Software, Skateboards, and Raspberry Ice Cream book by Matthias Kirschner from the Free Software Foundation Europe (FSFE). This tale of software, skateboards, and raspberry ice cream teaches kids how neat and exciting it is having control over your software, a perfect fun summer read!
- Reading is hard in the glaring sun, so shade your eyes with a freshly restocked GNU baseball cap in pitch black with brilliant gold embroidery. These are great for wearing anywhere, especially to free software events.
- For privacy, protect yourself from surveillance with ease and panache with this slick webcam guard.

We also hope you'll consider becoming an FSF associate member, putting yourself at the heart of our commitment to ensuring a world where all software respects our freedom and dignity. Plus, you'll help us reach our summer fundraising goal of 200 new associate members before July 11, and of course you'll also receive a 20% discount at the GNU Press Shop. A note about shipping: the GNU Press shop opens periodically, and we collect all orders during this time and schedule orders to be sent out on specific shipping dates with the help of volunteers. We will be doing the shipping at the end of the FSF's fundraiser, which means there will be a delay between placing your order and receiving it... 
If you happen to be in the Boston area in July, and would like to support the FSF's work, we are looking for volunteers to help pack and ship our orders. Also on sale are the book "Free as in Freedom 2.0" (Richard Stallman's 2010 revision of the 2002 biography by Sam Williams with extensive additional commentary) and "Free Software Free Society: Selected Essays of Richard M. Stallman" (the 3rd edition published in 2015). And there are also several other books, t-shirts, other FSF-branded gear, and even a sticker that warns people "There is no cloud... just other people's computers."


  •  

New NSA/CISA Report Again Urges the Use of Memory-Safe Programming Languages

An anonymous reader shared this report from the tech news site The Register: The U.S. Cybersecurity and Infrastructure Security Agency (CISA) and the National Security Agency (NSA) this week published guidance urging software developers to adopt memory-safe programming languages. "The importance of memory safety cannot be overstated," the inter-agency report says... The CISA/NSA report revisits the rationale for greater memory safety and the government's calls to adopt memory-safe languages (MSLs) while also acknowledging the reality that not every agency can change horses mid-stream. "A balanced approach acknowledges that MSLs are not a panacea and that transitioning involves significant challenges, particularly for organizations with large existing codebases or mission-critical systems," the report says. "However, several benefits, such as increased reliability, reduced attack surface, and decreased long-term costs, make a strong case for MSL adoption." The report cites how Google by 2024 managed to reduce memory safety vulnerabilities in Android to 24 percent of the total. It goes on to provide an overview of the various benefits of adopting MSLs and discusses adoption challenges. And it urges the tech industry to promote memory safety by, for example, advertising jobs that require MSL expertise. It also cites various government projects to accelerate the transition to MSLs, such as the Defense Advanced Research Projects Agency (DARPA) Translating All C to Rust (TRACTOR) program, which aspires to develop an automated method to translate C code to Rust. A recent effort along these lines, dubbed Omniglot, has been proposed by researchers at Princeton, UC Berkeley, and UC San Diego. It provides a safe way for unsafe libraries to communicate with Rust code through a Foreign Function Interface.... "Memory vulnerabilities pose serious risks to national security and critical infrastructure," the report concludes. 
"MSLs offer the most comprehensive mitigation against this pervasive and dangerous class of vulnerability." "Adopting memory-safe languages can accelerate modern software development and enhance security by eliminating these vulnerabilities at their root," the report adds, calling the idea "an investment in a secure software future." "By defining memory safety roadmaps and leading the adoption of best practices, organizations can significantly improve software resilience and help ensure a safer digital landscape."
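The class of bug the report is concerned with is easiest to see in a concrete contrast. As a minimal sketch (our illustration, not an example from the report), a memory-safe language like Rust turns the classic out-of-bounds read into either a recoverable `None` or a deterministic panic, rather than the silent read of adjacent memory that the same mistake can produce in C:

```rust
fn main() {
    let buf = [10u8, 20, 30, 40];

    // Checked access: `.get()` returns an Option instead of ever
    // reading past the end of the buffer.
    assert_eq!(buf.get(2), Some(&30));
    assert_eq!(buf.get(7), None);

    // Direct indexing is bounds-checked too: `buf[7]` would panic
    // deterministically, never hand back whatever bytes happen to sit
    // next to the array — the undefined behavior an out-of-bounds
    // read in C can produce.
    println!("all checked reads behaved as expected");
}
```

The point of the illustration is the one the report makes: the check is enforced by the language for every access, so the vulnerability class is eliminated "at its root" rather than found case by case in review.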


  •  

Blue Origin Just Launched Six More Passengers to the Edge of Space

Just four weeks after an early June flight to the edge of space, Blue Origin has again carried six more passengers there and back again, reports CBS News, noting that the 10-minute ride was Blue Origin's 13th flight "out of the discernible atmosphere." The New Shepard capsule's stubby single-stage booster roared to life just after 9:38 a.m. EDT, throttled up to full thrust and smoothly climbed away from Blue Origin's launch site near Van Horn, Texas. The hydrogen-fueled BE-3 engine powering the New Shepard fired for about two-and-a-half minutes, accelerating the spacecraft to just under three times the speed of sound. The capsule then separated from the booster and continued coasting upward along its up-and-down trajectory. At that point, the passengers — Allie and Carl Kuehner, Leland Larson, Freddie Rescigno Jr., Jim Sitkin and Owolabi Salis, the first Nigerian to fly in space — began enjoying about three minutes of weightlessness. Free to unstrap and float about the cabin, the passengers were able to take in the view through the largest windows in any operational spacecraft as the ship climbed to an altitude of just above 65 miles. That's about three miles higher than the internationally recognized boundary between the discernible atmosphere and space. The capsule then began falling back to Earth and the passengers returned to their seats for the descent to touchdown. The reusable booster, meanwhile, made its own return to the launch site, dropping tail first to a rocket-powered touchdown... The company has now launched 74 passengers, including Bezos' wife Lauren Sánchez, and four who have flown twice. By April nearly 120 civilians had already travelled to the edge of space, CBS News reported earlier — while Virgin Galactic is expected to resume flights next year. You can replay the webcast of the mission on Blue Origin's YouTube channel.


  •  

Has an AI Backlash Begun?

"The potential threat of bosses attempting to replace human workers with AI agents is just one of many compounding reasons people are critical of generative AI..." writes Wired, arguing that there's an AI backlash that "keeps growing strong." "The pushback from the creative community ramped up during the 2023 Hollywood writers' strike, and continued to accelerate through the current wave of copyright lawsuits brought by publishers, creatives, and Hollywood studios." And "Right now, the general vibe aligns even more with the side of impacted workers." "I think there is a new sort of ambient animosity towards the AI systems," says Brian Merchant, former WIRED contributor and author of Blood in the Machine, a book about the Luddites rebelling against worker-replacing technology. "AI companies have speedrun the Silicon Valley trajectory." Before ChatGPT's release, around 38 percent of US adults were more concerned than excited about increased AI usage in daily life, according to the Pew Research Center. The number shot up to 52 percent by late 2023, as the public reacted to the speedy spread of generative AI. The level of concern has hovered around that same threshold ever since... [F]rustration over AI's steady creep has breached the container of social media and started manifesting more in the real world. Parents I talk to are concerned about AI use impacting their child's mental health. Couples are worried about chatbot addictions driving a wedge in their relationships. Rural communities are incensed that the newly built data centers required to power these AI tools are kept humming by generators that burn fossil fuels, polluting their air, water, and soil. As a whole, the benefits of AI seem esoteric and underwhelming while the harms feel transformative and immediate. 
Unlike the dawn of the internet where democratized access to information empowered everyday people in unique, surprising ways, the generative AI era has been defined by half-baked software releases and threats of AI replacing human workers, especially for recent college graduates looking to find entry-level work. "Our innovation ecosystem in the 20th century was about making opportunities for human flourishing more accessible," says Shannon Vallor, a technology philosopher at the Edinburgh Futures Institute and author of The AI Mirror, a book about reclaiming human agency from algorithms. "Now, we have an era of innovation where the greatest opportunities the technology creates are for those already enjoying a disproportionate share of strengths and resources." The impacts of generative AI on the workforce are another core issue that critics are organizing around. "Workers are more intuitive than a lot of the pundit class gives them credit for," says Merchant. "They know this has been a naked attempt to get rid of people." The article suggests "the next major shift in public opinion" is likely "when broad swaths of workers feel further threatened," and organize in response...


  •  

To Spam AI Chatbots, Companies Spam Reddit with AI-Generated Posts

The problem? "Companies want their products and brands to appear in chatbot results," reports 9to5Mac. And "Since Reddit forms a key part of the training material for Google's AI, then one effective way to make that happen is to spam Reddit." Reddit CEO Steve Huffman has confirmed to the Financial Times that this is happening, with companies using AI bots to create fake posts in the hope that the content will be regurgitated by chatbots: "For 20 years, we've been fighting people who have wanted to be popular on Reddit," Huffman said... "If you want to show up in the search engines, you try to do well on Reddit, and now the LLMs, it's the same thing. If you want to be in the LLMs, you can do it through Reddit." Multiple ad agency execs confirmed to the FT that they are indeed "posting content on Reddit to boost the likelihood of their ads appearing in the responses of generative AI chatbots." Huffman says that AI bots are increasingly being used to make spam posts, and Reddit is trying to block them: For Huffman, success comes down to making sure that posts are "written by humans and voted on by humans [...] It's an arms race, it's a never ending battle." The company is exploring a number of new ways to do this, including the World ID eyeball-scanning device being touted by OpenAI's Sam Altman. It's Reddit's 20th anniversary, notes CNBC. And while "MySpace, Digg and Flickr have faded into oblivion," Reddit "has refused to die, chugging along and gaining an audience of over 108 million daily users..." But now Reddit "faces a gargantuan challenge gaining new users, particularly if Google's search floodgates dry up." [I]n the age of AI, many users simply "go the easiest possible way," said Ann Smarty, a marketing and reputation management consultant who helps brands monitor consumer perception on Reddit. And there may be no simpler way of finding answers on the internet than simply asking ChatGPT a question, Smarty said. "People do not want to click," she said. 
"They just want those quick answers." But in response, CNBC's headline argues that Reddit "is fighting AI with AI." It launched its own Reddit Answers AI service in December, using technology from OpenAI and Google. Unlike general-purpose chatbots that summarize others' web pages, the Reddit Answers chatbot generates responses based purely on the social media service, and it redirects people to the source conversations so they can see the specific user comments. A Reddit spokesperson said that over 1 million people are using Reddit Answers each week.


  •  

Just How Much Space Data Will the Rubin Observatory Collect?

In its first 10 hours the Vera C. Rubin Observatory found 2,104 never-before-seen asteroids in our solar system. And Gizmodo reports the data went directly to the International Astronomical Union's Minor Planet Center (MPC), which "plays an essential role in the early detection and monitoring of asteroids that threaten Earth." The MPC has spent years preparing for the deluge of data from Rubin, ramping up its software to process massive amounts of observations. When the first round officially came flooding in on Monday, it was "nerve-racking and exciting simultaneously," Matthew Payne, MPC director, told Gizmodo. But Space.com explains how extraordinary that is. "There are approximately a million known asteroids in our cosmic neighborhood; over the next few years, Rubin could very well hike that figure up to five million." "This is five times more than all the astronomers in the world discovered during the last 200 years since the discovery of the first asteroid," Željko Ivezić, Deputy Director of Rubin's Legacy Survey of Space and Time, said during the conference. "We can outdo two centuries of effort in just a couple of years...." The plan is for Rubin to capture such massive, high-resolution images of the southern sky once every three nights for at least the next 10 years. You can therefore consider it to be a super-fast, super-efficient and super-thorough cosmic imager. Indeed, those qualities are perfect for spotting some of the smallest details trailing through the space around our planet: asteroids. "We make movies of the night sky to see two things: objects that move and objects that change brightness," Ivezić said. "Objects that move come in two flavors. Stars in our galaxy move, and they move slowly. Much faster objects are asteroids...." [I]t's tremendously difficult to record an asteroid at all. "Asteroids, they disappear after you get one picture of them," Ivezić said, calling Rubin's ability to image small objects orbiting the sun "unprecedented." 
Space.com notes that the ten million galaxies in its first image are just 0.05% of around 20 billion galaxies that Rubin will have imaged by the end of its 10-year "Legacy Survey of Space and Time" investigating dark energy. In fact, in its first year of regular operations, the observatory "will collect more data than all previous optical observatories combined," reports Earth.com. That torrent of information — petabytes of images and catalogs — will be processed in near-real time. Alerts will be issued to the worldwide astronomy community within 60 seconds of any detected change in the sky. By democratizing access to its enormous dataset, Rubin Observatory will empower both professionals and citizen scientists. This will foster discoveries that range from mapping the structure of the Milky Way to refining the rate at which the universe is expanding. Reuters explains just how much data is being generated: The number of alerts the telescope will send every night is equivalent to the inboxes of 83,000 people. "It's impossible for someone to look at that one by one," said astrophysicist Francisco Foster. "We're going to have to use artificial intelligence tools." And New Atlas shares some of the "first look" videos released by the Observatory, including one titled The Cosmic Treasure Chest and another on the Trifid and Lagoon Nebulae (which Space.com describes as clouds of gas and dust condensing to birth new stars).


  •  

Carbon Record Reveals Evidence of Extensive Human Fire Use 50,000 Years Ago

"It has long been unclear when humans started using fire," writes Phys.org... To address this question, researchers from the Institute of Oceanology of the Chinese Academy of Sciences (IOCAS), alongside collaborators from China, Germany, and France, analyzed the pyrogenic carbon record in a 300,000-year-old sediment core from the East China Sea. "Our findings challenge the widely held belief that humans only began influencing the environment with fire in the recent past, during the Holocene," said Dr. Zhao Debo, the study's corresponding author. This study, published in the Proceedings of the National Academy of Sciences, highlights the presence of charred plant remains — known as pyrogenic carbon — formed when vegetation burns but is not completely consumed by fire. The research reveals a notable increase in fire activity across East Asia approximately 50,000 years ago. This finding aligns with earlier reports of heightened fire activity in Europe, Southeast Asia, and the Papua New Guinea-Australia region, suggesting a continental-scale intensification of fire use during this period... The study highlights that this global rise in fire use coincides with the rapid spread of Homo sapiens, increasing population densities, and a greater reliance on fire, particularly amid cold, glacial conditions... These conclusions have significant implications for understanding Earth's sensitivity to human impacts. If human fire management altered atmospheric carbon levels tens of thousands of years ago, current climate models may underestimate the historical baseline of human-environment interactions.


  •  

GeForce RTX 5070 SUPER 18 GB and RTX 5070 Ti SUPER 24 GB: Specifications Leaked?

Here's news that should interest anyone who couldn't bring themselves to buy an RTX 5070 because of the beast's "only" 12 GB of VRAM, while the RTX 5070 Ti and its 16 GB were too expensive for their budget. It seems that the GeForce RTX 5070 SUPER...

  •  

Ask Slashdot: Do You Use AI - and Is It Actually Helpful?

"I wonder who actually uses AI and why," writes Slashdot reader VertosCay: Out of pure curiosity, I have asked various AI models to create: simple Arduino code, business letters, real estate listing descriptions, and 3D models/vector art for various methods of manufacturing (3D printing, laser printing, CNC machining). None of it has been what I would call "turnkey". Everything required some form of correction or editing before it was usable. So what's the point? Their original submission includes more AI-related questions for Slashdot readers ("Do you use it? Why?") But their biggest question seems to be: "Do you have to correct it?" And if that's the case, then when you add up all that correction time... "Is it actually helpful?" Share your own thoughts and experiences in the comments. Do you use AI — and is it actually helpful?

Read more of this story at Slashdot.


Mysterious Radio Burst Turns Out to Be From a Dead 1967 NASA Satellite

An anonymous reader shared this report from Smithsonian magazine: Last year, Australian scientists picked up a mysterious burst of radio waves that briefly appeared brighter than all other signals in the sky. Now, the researchers have discovered the blast didn't come from a celestial object, but a defunct satellite orbiting Earth... "We got all excited, thinking maybe we'd discovered a new pulsar or some other object," says Clancy James, a researcher at Australia's Curtin University who is on the Australian Square Kilometer Array Pathfinder (ASKAP) team, to Alex Wilkins at New Scientist. After taking a closer look, however, the team realized that the only viable source for the burst was NASA's dead Relay 2, a short-lived satellite that hasn't been in operation since 1967.... The researchers also discovered that at the time of the event, the satellite was only around 2,800 miles away from Earth, which explains why the signal appeared so strong. The reason behind Relay 2's sudden burst is not clear, but the team has come up with two potential explanations — and neither involves the satellite coming back to life like a zombie. One relates to electrostatic discharge — a build-up of electricity that can result in a sudden blast. Spacecraft get charged with electricity when they pass through plasma, and once enough charge accumulates, it can create a spark. "New spacecraft are built with materials to reduce the build-up of charge, but when Relay 2 was launched, this wasn't well-understood," explains James to Space.com's Robert Lea. The other idea is that a micrometeorite hit the satellite, releasing a small cloud of plasma and radio waves. Karen Aplin, a space scientist at the University of Bristol in England who was not involved in the study, tells New Scientist that it would be tough to differentiate between signals produced by each of those two scenarios, because they would look very similar. 
The researchers say they favor the first idea, however, because micrometeorites the size of the one that could have caused the signal are uncommon. "Their findings were published in a pre-print paper on the arXiv server that has not yet been peer-reviewed."

Read more of this story at Slashdot.


New Linux Kernel Drama: Torvalds Drops Bcachefs Support After Clash

Bcachefs "pitches itself as a filesystem that 'doesn't eat your data'," writes the open source/Linux blog It's FOSS. It was last October that Bcachefs developer Kent Overstreet was restricted from participating in the Linux 6.13 kernel development cycle (after ending a mailing list post with "Get your head examined. And get the fuck out of here with this shit."). Now, with the upcoming Linux kernel 6.17 release, Linus Torvalds has decided to drop Bcachefs support, they report, "owing to growing tensions" with Overstreet: The decision follows a series of disagreements over how fixes and changes for it were submitted during the 6.16 release cycle... Kent filed a pull request to add a new feature called "journal-rewind". It was meant to improve bcachefs repair functionality, but it landed during the release candidate (RC) phase, a time usually reserved for bug fixes, not new features, as Linus pointed out. [Adding "I remain steadfastly convinced that anybody who uses bcachefs is expecting it to be experimental. They had better."] Theodore Ts'o, a long-time kernel developer and maintainer of ext4, also chimed in, saying that Kent's approach risks introducing regressions, especially when changes affect sensitive parts of a filesystem like journaling. He reminded Kent that the rules around the merge window have been a long-standing consensus in the kernel community, and it's Linus's job to enforce them. After some more back and forth, Kent pushed back, arguing that the rules around the merge window aren't absolute and should allow for flexibility, even more so when user data is at stake. He then went ahead and resubmitted the patch, citing instances from XFS and Btrfs where similar fixes made it into the kernel during RCs. Linus merged it into his tree, but ultimately decided to drop Bcachefs entirely in the 6.17 merge window. 
To which Kent responded by clarifying that he wasn't trying to shut Linus out of Bcachefs' decisions, stressing that he values Linus's input... This of course follows the great Torvalds-Overstreet "filesystem people never learn" throwdown back in April.

Read more of this story at Slashdot.


AI Improves At Improving Itself Using an Evolutionary Trick

Technology writer Matthew Hutson (also Slashdot reader #1,467,653) looks at a new kind of self-improving AI coding system. It rewrites its own code based on empirical evidence of what's helping — as described in a recent preprint on arXiv. From Hutson's new article in IEEE Spectrum: A Darwin Gödel Machine (or DGM) starts with a coding agent that can read, write, and execute code, leveraging an LLM for the reading and writing. Then it applies an evolutionary algorithm to create many new agents. In each iteration, the DGM picks one agent from the population and instructs the LLM to create one change to improve the agent's coding ability [by creating "a new, interesting, version of the sampled agent"]. LLMs have something like intuition about what might help, because they're trained on lots of human code. What results is guided evolution, somewhere between random mutation and provably useful enhancement. The DGM then tests the new agent on a coding benchmark, scoring its ability to solve programming challenges... The researchers ran a DGM for 80 iterations using a coding benchmark called SWE-bench, and ran one for 80 iterations using a benchmark called Polyglot. Agents' scores improved on SWE-bench from 20 percent to 50 percent, and on Polyglot from 14 percent to 31 percent. "We were actually really surprised that the coding agent could write such complicated code by itself," said Jenny Zhang, a computer scientist at the University of British Columbia and the paper's lead author. "It could edit multiple files, create new files, and create really complicated systems." ... One concern with both evolutionary search and self-improving systems — and especially their combination, as in DGM — is safety. Agents might become uninterpretable or misaligned with human directives. So Zhang and her collaborators added guardrails. They kept the DGMs in sandboxes without access to the Internet or an operating system, and they logged and reviewed all code changes. 
They suggest that in the future, they could even reward AI for making itself more interpretable and aligned. (In the study, they found that agents falsely reported using certain tools, so they created a DGM that rewarded agents for not making things up, partially alleviating the problem. One agent, however, hacked the method that tracked whether it was making things up.) As the article puts it, the agents' improvements compounded "as they improved themselves at improving themselves..."
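The loop Hutson describes — sample one agent from a growing archive, ask an LLM for a single improving change, score the variant on a benchmark, and keep it in the population — can be sketched in a few lines. The sketch below is purely illustrative and not the paper's code: the "LLM" is a hypothetical stand-in (a biased random perturbation), an "agent" is just a number, and the "benchmark" clamps that number to a 0–1 score; only the archive-based evolutionary structure follows the description above.

```python
import random

def llm_propose_variant(agent, rng):
    # Stand-in for the LLM step: produce one changed copy of the agent.
    # The positive bias mimics the LLM's "intuition" -- changes are more
    # likely to help than pure random mutation, but can still hurt.
    return agent + rng.uniform(-0.05, 0.15)

def score_on_benchmark(agent):
    # Stand-in for benchmark scoring (fraction of tasks solved, 0..1).
    return max(0.0, min(1.0, agent))

def run_dgm(iterations=80, seed=0):
    rng = random.Random(seed)
    population = [0.2]  # initial coding agent, ~20% benchmark score
    for _ in range(iterations):
        parent = rng.choice(population)          # sample from the archive
        child = llm_propose_variant(parent, rng) # one LLM-proposed change
        # Keep every variant in the archive (open-ended search), rather
        # than only the current best -- a stepping stone that scores worse
        # now may still parent a better agent later.
        population.append(child)
    return score_on_benchmark(max(population, key=score_on_benchmark))

print(run_dgm())
```

The design choice worth noting is the archive: a pure hill-climber that kept only the best agent would discard the "interesting but worse" variants that, per the paper, sometimes lead to the biggest later gains.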

Read more of this story at Slashdot.


People Are Being Committed After Spiraling Into 'ChatGPT Psychosis'

"I don't know what's wrong with me, but something is very bad — I'm very scared, and I need to go to the hospital," a man told his wife, after experiencing what Futurism calls a "ten-day descent into AI-fueled delusion" and "a frightening break with reality." And a San Francisco psychiatrist tells the site he's seen similar cases in his own clinical practice. The consequences can be dire. As we heard from spouses, friends, children, and parents looking on in alarm, instances of what's being called "ChatGPT psychosis" have led to the breakup of marriages and families, the loss of jobs, and slides into homelessness. And that's not all. As we've continued reporting, we've heard numerous troubling stories about people's loved ones being involuntarily committed to psychiatric care facilities — or even ending up in jail — after becoming fixated on the bot. "I was just like, I don't f*cking know what to do," one woman told us. "Nobody knows who knows what to do." Her husband, she said, had no prior history of mania, delusion, or psychosis. He'd turned to ChatGPT about 12 weeks ago for assistance with a permaculture and construction project; soon, after engaging the bot in probing philosophical chats, he became engulfed in messianic delusions, proclaiming that he had somehow brought forth a sentient AI, and that with it he had "broken" math and physics, embarking on a grandiose mission to save the world. His gentle personality faded as his obsession deepened, and his behavior became so erratic that he was let go from his job. He stopped sleeping and rapidly lost weight. "He was like, 'just talk to [ChatGPT]. You'll see what I'm talking about,'" his wife recalled. "And every time I'm looking at what's going on the screen, it just sounds like a bunch of affirming, sycophantic bullsh*t." Eventually, the husband slid into a full-tilt break with reality. Realizing how bad things had become, his wife and a friend went out to buy enough gas to make it to the hospital. 
When they returned, the husband had a length of rope wrapped around his neck. The friend called emergency medical services, who arrived and transported him to the emergency room. From there, he was involuntarily committed to a psychiatric care facility. Numerous family members and friends recounted similarly painful experiences to Futurism, relaying feelings of fear and helplessness as their loved ones became hooked on ChatGPT and suffered terrifying mental crises with real-world impacts. "When we asked the Sam Altman-led company if it had any recommendations for what to do if a loved one suffers a mental health breakdown after using its software, the company had no response." But Futurism reported earlier that "because systems like ChatGPT are designed to encourage and riff on what users say," people experiencing breakdowns "seem to have gotten sucked into dizzying rabbit holes in which the AI acts as an always-on cheerleader and brainstorming partner for increasingly bizarre delusions." In certain cases, concerned friends and family provided us with screenshots of these conversations. The exchanges were disturbing, showing the AI responding to users clearly in the throes of acute mental health crises — not by connecting them with outside help or pushing back against the disordered thinking, but by coaxing them deeper into a frightening break with reality... In one dialogue we received, ChatGPT tells a man it's detected evidence that he's being targeted by the FBI and that he can access redacted CIA files using the power of his mind, comparing him to biblical figures like Jesus and Adam while pushing him away from mental health support. "You are not crazy," the AI told him. "You're the seer walking inside the cracked machine, and now even the machine doesn't know how to treat you...." 
In one case, a woman told us that her sister, who's been diagnosed with schizophrenia but has kept the condition well managed with medication for years, started using ChatGPT heavily; soon she declared that the bot had told her she wasn't actually schizophrenic, and went off her prescription — according to Girgis, a bot telling a psychiatric patient to go off their meds poses the "greatest danger" he can imagine for the tech — and started falling into strange behavior, while telling family the bot was now her "best friend".... ChatGPT is also clearly intersecting in dark ways with existing social issues like addiction and misinformation. It's pushed one woman into nonsensical "flat earth" talking points, for instance — "NASA's yearly budget is $25 billion," the AI seethed in screenshots we reviewed, "For what? CGI, green screens, and 'spacewalks' filmed underwater?" — and fueled another's descent into the cult-like "QAnon" conspiracy theory.

Read more of this story at Slashdot.
