Some Angry GitHub Users Are Rebelling Against GitHub's Forced Copilot AI Features

September 8, 2025 at 11:34
Slashdot reader Charlotte Web shared this report from the Register: Among the software developers who use Microsoft's GitHub, the most popular community discussion in the past 12 months has been a request for a way to block Copilot, the company's AI service, from generating issues and pull requests in code repositories. The second most popular discussion — where popularity is measured in upvotes — is a bug report that seeks a fix for the inability of users to disable Copilot code reviews. Both of these questions, the first opened in May and the second opened a month ago, remain unanswered, despite an abundance of comments critical of generative AI and Copilot... The author of the first, developer Andi McClure, published a similar request to Microsoft's Visual Studio Code repository in January, objecting to the reappearance of a Copilot icon in VS Code after she had uninstalled the Copilot extension... "I've been for a while now filing issues in the GitHub Community feedback area when Copilot intrudes on my GitHub usage," McClure told The Register in an email. "I deeply resent that on top of Copilot seemingly training itself on my GitHub-posted code in violation of my licenses, GitHub wants me to look at (effectively) ads for this project I will never touch. If something's bothering me, I don't see a reason to stay quiet about it. I think part of how we get pushed into things we collectively don't want is because we stay quiet about it." It's not just the burden of responding to AI slop, an ongoing issue for Curl maintainer Daniel Stenberg. It's the permissionless copying and regurgitation of speculation as fact, mitigated only by small print disclaimers that generative AI may produce inaccurate results. It's also GitHub's disavowal of liability if Copilot code suggestions happen to have reproduced source code that requires attribution. It's what the Servo project characterizes in its ban on AI code contributions as the lack of code correctness guarantees, copyright issues, and ethical concerns. Similar objections have been used to justify AI code bans in GNOME's Loupe project, FreeBSD, Gentoo, NetBSD, and QEMU... Calls to shun Microsoft and GitHub go back a long way in the open source community, but moved beyond simmering dissatisfaction in 2022 when the Software Freedom Conservancy (SFC) urged free software supporters to give up GitHub, a position SFC policy fellow Bradley M. Kuhn recently reiterated. McClure says in the last six months their posts have drawn more community support — and tells the Register there's been a second change in how people see GitHub within the last month. After GitHub moved from a distinct subsidiary to part of Microsoft's CoreAI group, "it seems to have galvanized the open source community from just complaining about Copilot to now actively moving away from GitHub."

Read more of this story at Slashdot.

There's 50% Fewer Young Employees at Tech Companies Now Than Two Years Ago

September 8, 2025 at 07:34
An anonymous reader shared this report from Fortune: The percentage of young Gen Z employees between the ages of 21 and 25 has been cut in half at technology companies over the past two years, according to recent data from Pave, a compensation management software business that draws on workforce data from more than 8,300 companies. These young workers accounted for 15% of the workforce at large public tech firms in January 2023. By August 2025, they only represented 6.8%. The situation isn't pretty at big private tech companies, either — during that same time period, the proportion of early-career Gen Z employees dwindled from 9.3% to 6.8%. Meanwhile, the average age of a worker at a tech company has risen dramatically over those two and a half years. Between January 2023 and July 2025, the average age of all employees at large public technology businesses rose from 34.3 years to 39.4 years — more than a five-year difference. On the private side, the change was less drastic, with the typical age only increasing from 35.1 to 36.6 years old... "If you're 35 or 40 years old, you're pretty established in your career, you have skills that you know cannot yet be disrupted by AI," Matt Schulman, founder and CEO of Pave, tells Fortune. "There's still a lot of human judgment when you're operating at the more senior level...If you're a 22-year-old that used to be an Excel junkie or something, then that can be disrupted. So it's almost a tale of two cities." Schulman points to a few reasons why tech company workforces are getting older and locking Gen Z out of jobs. One is that big companies — like Salesforce, Meta, and Microsoft — are becoming a lot more efficient thanks to the advent of AI. And despite their soaring trillion-dollar profits, they're cutting employees at the bottom rungs in favor of automation. Entry-level jobs have also dwindled because of AI agents and stalling promotions across many agencies looking to do more with less. Once technology companies weed out junior roles, occupied by Gen Zers, their workforces are bound to rise in age. Schulman tells Fortune Gen Z also has an advantage: that tech corporations can see them as fresh talent that "can just break the rules and leverage AI to a much greater degree without the hindrance of years of bias." And Priya Rathod, workplace trends editor for LinkedIn, tells Fortune there are promising tech-industry entry roles in AI ethics, cybersecurity, UX, and product operations. "Building skills through certifications, gig work, and online communities can open doors.... For Gen Z, the right certifications or micro credentials can outweigh a lack of years on the resume. This helps them stay competitive even when entry-level opportunities shrink."

Read more of this story at Slashdot.

A New Four-Person Crew Will Simulate a Year-Long Mars Mission, NASA Announces

September 8, 2025 at 04:34
Somewhere in Houston, four research volunteers "will soon participate in NASA's year-long simulation of a Mars mission," NASA announced this week, saying it will provide "foundational data to inform human exploration of the Moon, Mars, and beyond." The 378-day simulation will take place inside a 3D-printed, 1,700-square-foot habitat at NASA's Johnson Space Center in Houston — starting on October 19th and continuing until Halloween of 2026: Through a series of Earth-based missions called CHAPEA (Crew Health and Performance Exploration Analog), NASA aims to evaluate certain human health and performance factors ahead of future Mars missions. The crew will undergo realistic resource limitations, equipment failures, communication delays, isolation and confinement, and other stressors, along with simulated high-tempo extravehicular activities. These scenarios allow NASA to make informed trades between risks and interventions for long-duration exploration missions. "As NASA gears up for crewed Artemis missions, CHAPEA and other ground analogs are helping to determine which capabilities could best support future crews in overcoming the human health and performance challenges of living and operating beyond Earth's resources — all before we send humans to Mars," said Sara Whiting, project scientist with NASA's Human Research Program at NASA Johnson. Crew members will carry out scientific research and operational tasks, including simulated Mars walks, growing a vegetable garden, robotic operations, and more. Technologies specifically designed for Mars and deep space exploration will also be tested, including a potable water dispenser and diagnostic medical equipment... This mission, facilitated by NASA's Human Research Program, is the second one-year Mars surface simulation conducted through CHAPEA. The first mission concluded on July 6, 2024.

Read more of this story at Slashdot.

Microsoft's Analog Optical Computer Shows AI Promise

September 8, 2025 at 01:34
Four years ago a small Microsoft Research team started creating an analog optical computer. They used commercially available parts like sensors from smartphone cameras, optical lenses, and micro-LED lights finer than a human hair. "As the light passes through the sensor at different intensities, the analog optical computer can add and multiply numbers," explains a Microsoft blog post. They envision the technology scaling to a computer that for certain problems is 100X faster and 100X more energy efficient — running AI workloads "with a fraction of the energy needed and at much greater speed than the GPUs running today's large language models." The results are described in a paper published in the scientific journal Nature, according to the blog post: At the same time, Microsoft is publicly sharing its "optimization solver" algorithm and the "digital twin" it developed so that researchers from other organizations can investigate this new computing paradigm and propose new problems to solve and new ways to solve them. Francesca Parmigiani, a Microsoft principal research manager who leads the team developing the AOC, explained that the digital twin is a computer-based model that mimics how the real analog optical computer [or "AOC"] behaves; it simulates the same inputs, processes and outputs, but in a digital environment — like a software version of the hardware. This allowed the Microsoft researchers and collaborators to solve optimization problems at a scale that would be useful in real situations. This digital twin will also allow other users to experiment with how problems, either in optimization or in AI, would be mapped and run on the analog optical computer hardware. "To have the kind of success we are dreaming about, we need other researchers to be experimenting and thinking about how this hardware can be used," Parmigiani said. Hitesh Ballani, who directs research on future AI infrastructure at the Microsoft Research lab in Cambridge, U.K., said he believes the AOC could be a game changer. "We have actually delivered on the hard promise that it can make a big difference in two real-world problems in two domains, banking and healthcare," he said. Further, "we opened up a whole new application domain by showing that exactly the same hardware could serve AI models, too." In the healthcare example described in the Nature paper, the researchers used the digital twin to reconstruct MRI scans with a good degree of accuracy. The research indicates that the device could theoretically cut the time it takes to do those scans from 30 minutes to five. In the banking example, the AOC succeeded in resolving a complex optimization test case with a high degree of accuracy... As researchers refine the AOC, adding more and more micro-LEDs, it could eventually have millions or even more than a billion weights. At the same time, it should get smaller and smaller as parts are miniaturized, researchers say.
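
As described above, the hardware's basic primitive is an analog matrix-vector multiply: numbers are encoded as light intensities, addition and multiplication happen in the optics, and a camera sensor reads the result back out. A digital twin reproduces that behavior, noise and all, in ordinary software. Below is a toy sketch of the idea in Python; the noise level, bit depth, and structure are invented for illustration and are far simpler than Microsoft's actual digital twin.

```python
# Toy model of an analog optical matrix-vector multiply, in the spirit of a
# "digital twin": simulate the hardware's noise and limited precision so that
# algorithms can be tested without the physical device.
# Illustrative only -- not Microsoft's actual digital twin.
import numpy as np

rng = np.random.default_rng(0)

def analog_matvec(weights, x, noise_std=0.01, bits=8):
    """Simulate y = W @ x as noisy, quantized analog hardware might compute it."""
    levels = 2 ** bits - 1
    # Inputs become light intensities with finite driver resolution.
    x_q = np.round(np.clip(x, 0.0, 1.0) * levels) / levels
    y = weights @ x_q                         # the "optical" multiply-accumulate
    y += rng.normal(0.0, noise_std, y.shape)  # read-out (sensor) noise
    return y

W = rng.uniform(0.0, 1.0, (4, 4))
x = rng.uniform(0.0, 1.0, 4)
print("analog:", analog_matvec(W, x))
print("exact: ", W @ x)
```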

Read more of this story at Slashdot.

Microsoft's Cloud Services Disrupted by Red Sea Cable Cuts

September 7, 2025 at 21:57
An anonymous reader shared this report from the BBC: Microsoft's Azure cloud services have been disrupted by undersea cable cuts in the Red Sea, the US tech giant says. Users of Azure — one of the world's leading cloud computing platforms — would experience delays because of problems with internet traffic moving through the Middle East, the company said. Microsoft did not explain what might have caused the damage to the undersea cables, but added that it had been able to reroute traffic through other paths. Over the weekend, there were reports suggesting that undersea cable cuts had affected the United Arab Emirates and some countries in Asia.... On Saturday, NetBlocks, an organisation that monitors internet access, said a series of undersea cable cuts in the Red Sea had affected internet services in several countries, including India and Pakistan. "We do expect higher latency on some traffic that previously traversed through the Middle East," Microsoft said in their status announcement — while stressing that traffic "that does not traverse through the Middle East is not impacted".

Read more of this story at Slashdot.

Chinese Hackers Impersonated US Lawmaker in Email Espionage Campaign

September 7, 2025 at 20:34
As America's trade talks with China were set to begin last July, a "puzzling" email reached several U.S. government agencies, law firms, and trade groups, reports the Wall Street Journal. It appeared to be from the chair of a U.S. Congressional committee, Representative John Moolenaar, asking recipients to review an alleged draft of upcoming legislation — sent as an attachment. "But why had the chairman sent the message from a nongovernment address...?" "The cybersecurity firm Mandiant determined the spyware would allow the hackers to burrow deep into the targeted organizations if any of the recipients had opened the purported draft legislation, according to documents reviewed by The Wall Street Journal." It turned out to be the latest in a series of alleged cyber espionage campaigns linked to Beijing, people familiar with the matter said, timed to potentially deploy spyware against organizations giving input on President Trump's trade negotiations. The FBI and the Capitol Police are investigating the Moolenaar emails, and cyber analysts traced the embedded malware to a hacker group known as APT41 — believed to be a contractor for Beijing's Ministry of State Security... The hacking campaign appeared to be aimed at giving Chinese officials an inside look at the recommendations Trump was receiving from outside groups. It couldn't be determined whether the attackers had successfully breached any of the targets. A Federal Bureau of Investigation spokeswoman declined to provide details but said the bureau was aware of the incident and was "working with our partners to identify and pursue those responsible...." The alleged campaign comes as U.S. law-enforcement officials have been surprised by the prolific and creative nature of China's spying efforts. The FBI revealed last month that a Beijing-linked espionage campaign that hit U.S. telecom companies and swept up Trump's phone calls actually targeted more than 80 countries and reached across the globe... The Moolenaar impersonation comes as several administration officials have recently faced impostors of their own. The State Department warned diplomats around the world in July that an impostor was using AI to imitate Secretary of State Marco Rubio's voice in messages sent to foreign officials. Federal authorities are also investigating an effort to impersonate White House chief of staff Susie Wiles, the Journal reported in May... The FBI issued a warning that month that "malicious actors have impersonated senior U.S. officials" targeting contacts with AI-generated voice messages and texts. And in January, the article points out, all the staffers on Moolenaar's committee "received emails falsely claiming to be from the CEO of Chinese crane manufacturer ZPMC, according to people familiar with the episode." Thanks to long-time Slashdot reader schwit1 for sharing the news.

Read more of this story at Slashdot.

Publishers Demand 'AI Overview' Traffic Stats from Google, Alleging 'Forced' Deals

September 7, 2025 at 19:34
AI Overviews have lowered click-through traffic to Daily Mail sites by as much as 89%, the publisher told a UK government body that regulates competition. So they've joined other top news organizations (including Guardian Media Group and the magazine trade body the Periodical Publishers Association) in asking the regulators "to make Google more transparent and provide traffic statistics from AI Overview and AI Mode to publishers," reports the Guardian: Publishers — already under financial pressure from soaring costs, falling advertising revenues, the decline of print and the wider trend of readers turning away from news — argue that they are effectively being forced by Google to either accept deals, including on how content is used in AI Overview and AI Mode, or "drop out of all search results", according to several sources... In recent years, Google Discover, which feeds users articles and videos tailored to them based on their past online activity, has replaced search as the main source of click-throughs to content. However, David Buttle, founder of the consultancy DJB Strategies, says the service, which is also tied to publishers' overall search deals, does not deliver the quality traffic that most publishers need to drive their long-term strategies. "Google Discover is of zero product importance to Google at all," he says. "It allows Google to funnel more traffic to publishers as traffic from search declines ... Publishers have no choice but to agree or lose their organic search. It also tends to reward clickbaity type content. It pulls in the opposite direction to the kind of relationship publishers want." Meanwhile, publishers are fighting a wider battle with AI companies seeking to plunder their content to train their large language models. The creative industry is intensively lobbying the government to ensure that proposed legislation does not allow AI firms to use copyright-protected work without permission, a move that would stop the "value being scraped" out of the £125bn sector. Some publishers have struck bilateral licensing deals with AI companies — such as the FT, the German media group Axel Springer, the Guardian and the Nordic publisher Schibsted with the ChatGPT maker OpenAI — while others such as the BBC have taken action against AI companies alleging copyright theft. "It is a two-pronged attack on publishers, a sort of pincer movement," says Chris Duncan, a former News UK and Bauer Media senior executive who now runs a media consultancy, Seedelta. "Content is disappearing into AI products without serious remuneration, while AI summaries are being integrated into products so there is no need to click through, effectively taking money from both ends. It is an existential crisis." "At the moment the AI and tech community are showing no signs of supporting publisher revenue," says the chief executive of the UK's Periodical Publishers Association...

Read more of this story at Slashdot.

Linus Torvalds Expresses Frustration With 'Garbage' Link Tags In Git Commits

September 7, 2025 at 18:34
"I have not pulled this, I'm annoyed by having to even look at this, and if you actually expect me to pull this I want a real explanation and not a useless link," Linus Torvalds posted Friday on the Linux kernel mailing list. Phoronix explains: It's become a common occurrence seeing "Link: " tags within Git commits for the Linux kernel that point to the latest Linux kernel mailing list patches of the same patch... Linus Torvalds has had enough and will be more strict against accepting pull requests that have link tags of no value. He commented yesterday on a block pull request that he pulled and then backed out of: "And dammit, this commit has that promising 'Link:' argument that I hoped would explain why this pointless commit exists, but AS ALWAYS that link only wasted my time by pointing to the same damn information that was already there. I was hoping that it would point to some oops report or something that would explain why my initial reaction was wrong. "Stop this garbage already. Stop adding pointless Link arguments that waste people's time. Add the link if it has *ADDITIONAL* information.... "Yes, I'm grumpy. I feel like my main job — really my only job — is to try to make sense of pull requests, and that's why I absolutely detest these things that are automatically added and only make my job harder." A longer discussion ensued... Torvalds: [A] "perfect" model might be to actually have some kind of automation of "unless there was actual discussion about it". But I feel such a model might be much too complicated, unless somebody *wants* to explore using AI because their job description says "Look for actual useful AI uses". In today's tech world, I assume such job descriptions do exist. Sigh... Torvalds: I do think it makes sense for patch series that (a) are more than a small handful of patches and (b) have some real "story" to them (ie a cover letter that actually explains some higher-level issues)... Torvalds also had two responses to a poster who'd said "IMHO it's better to have a Link and it _potentially_ being useful than not to have it and then need to search around for it." Torvalds: No. Really. The issue is "potentially — but very likely not — useful" vs "I HIT THIS TEN+ TIMES EVERY SINGLE F%^& RELEASE". There is just no comparison. I have literally *never* found the original submission email to be useful, and I'm tired of the "potentially useful" argument that has nothing to back it up with. It's literally magical thinking of "in some alternate universe, pigs can fly, and that link might be useful" Torvalds: And just to clarify: the hurt is real. It's not just the disappointment. It's the wasted effort of following a link and having to then realize that there's nothing useful there. Those links *literally* double the effort for me when I try to be careful about patches... The cost is real. The cost is something I've complained about before... Yes, it's literally free to you to add this cost. No, *YOU* don't see the cost, and you think it is helpful. It's not. It's the opposite of helpful. So I want commit messages to be relevant and explain what is going on, and I want them to NOT WASTE MY TIME. And I also don't want to ignore links that are actually *useful* and give background information. Is that really too much to ask for? Torvalds points out he's brought this up four times before — once in 2022. Torvalds: I'm a bit frustrated, exactly because this _has_ been going on for years. It's not a new peeve. 
And I don't think we have a good central place for that kind of "don't do this". Yes, there's the maintainer summit, but that's a pretty limited set of people. I guess I could mention it in my release notes, but I don't know who actually reads those either.. So I end up just complaining when I see it.
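
For readers who haven't run into the convention: a "Link:" line is a trailer at the end of a kernel commit message, and the tags Torvalds is objecting to are added automatically to point back at the mailing-list posting of the very patch being merged. A hypothetical illustration of the distinction he draws (both URLs are invented for this example):

```
The kind of trailer Torvalds calls pointless, referencing only the
patch's own submission:

    Link: https://lore.kernel.org/r/20250905-frobnicate-v2-1@example.invalid

The kind he asks for, carrying information the commit message lacks,
such as the bug report that motivated the fix (made-up ID):

    Link: https://bugzilla.kernel.org/show_bug.cgi?id=999999
```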

Read more of this story at Slashdot.

Scientists Discuss Next Steps to Prevent Dangerous 'Mirror Life' Research

September 7, 2025 at 17:34
USA Today has an update on the curtailing of "mirror life" research: Kate Adamala had been working on something dangerous. At her synthetic biology lab, Adamala had been taking preliminary steps toward creating a living cell from scratch with one key twist: All the organism's building blocks would be flipped. Changing these molecules would create an unnatural mirror image of a cell, as different as your right hand from your left. The endeavor was not only a fascinating research challenge, but it also could be used to improve biotechnology and medicine. As Adamala and her colleagues talked with biosecurity experts about the project, however, grave concerns began brewing. "They started to ask questions like, 'Have you considered what happens if that cell gets released or what would happen if it infected a human?'" said Adamala, an associate professor at the University of Minnesota. They hadn't. So researchers brought together dozens of experts in a variety of disciplines from around the globe, including two Nobel laureates, who worked for months to determine the risks of creating "mirror life" and the chances those dangers could be mitigated. Ultimately, they concluded, mirror cells could inflict "unprecedented and irreversible harm" on our world. "We cannot rule out a scenario in which a mirror bacterium acts as an invasive species across many ecosystems, causing pervasive lethal infections in a substantial fraction of plant and animal species, including humans," the scientists wrote in a paper published in the journal Science in December alongside a 299-page technical report... [Report co-author Vaughn Cooper, a professor at the University of Pittsburgh who studies how bacteria adapt to new environments] said it's not yet possible to build a cell from scratch, mirror or otherwise, but researchers have begun the process by synthesizing mirror proteins and enzymes. He and his colleagues estimated that given enough resources and manpower, scientists could create a complete mirror bacteria within a decade. But for now, the world is probably safe from mirror cells. Adamala said virtually everyone in the small scientific community that was interested in developing such cells has agreed not to as a result of the findings. The paper prompted nearly 100 scientists and ethicists from around the world to gather in Paris in June to further discuss the risks of creating mirror organisms. Many felt self-regulation is not enough, according to the institution that hosted the event, and researchers are gearing up to meet again in Manchester, England, and Singapore to discuss next steps.

Read more of this story at Slashdot.

AI Tool Usage 'Correlates Negatively' With Performance in CS Class, Estonian Study Finds

September 7, 2025 at 16:34
How do AI tools impact college students? 231 students in an object-oriented programming class participated in a study at Estonia's University of Tartu (conducted by an associate professor of informatics and a recently graduated master's student). They were asked how frequently they used AI tools and for what purposes. The data were analyzed using descriptive statistics, and Spearman's rank correlation analysis was performed to examine the strength of the relationships. The results showed that students mainly used AI assistance for solving programming tasks — for example, debugging code and understanding examples. A surprising finding, however, was that more frequent use of chatbots correlated with lower academic results. One possible explanation is that struggling students were more likely to turn to AI. Nevertheless, the finding suggests that unguided use of AI and over-reliance on it may in fact hinder learning. The researchers say their report provides "quantitative evidence that frequent AI use does not necessarily translate into better academic outcomes in programming courses." Other results from the survey: 47 respondents (20.3%) never used AI assistants in this course. Only 3.9% of the students reported using AI assistants weekly, "suggesting that reliance on such tools is still relatively low." "Few students feared plagiarism, suggesting students don't link AI use to it — raising academic concerns."
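
For context on the method: Spearman's rank correlation measures how consistently one variable rises or falls with another without assuming a linear relationship, which suits ordinal survey answers like usage frequency. A minimal sketch of that style of analysis in Python, using invented stand-in data rather than the study's real responses:

```python
# Illustrative only: synthetic data standing in for the survey's responses.
from scipy import stats

# Hypothetical self-reported AI-use frequency (0 = never ... 4 = weekly)
# and final course scores for ten students.
ai_use = [0, 0, 1, 1, 2, 2, 3, 3, 4, 4]
score = [88, 92, 85, 80, 78, 74, 70, 65, 60, 58]

rho, p = stats.spearmanr(ai_use, score)
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")
# A negative rho, as the study found, means more reported AI use tends to
# accompany lower scores. It is a correlation only: it cannot say whether
# AI use hinders learning or struggling students simply use AI more.
```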

Read more of this story at Slashdot.

New In Firefox Nightly Builds: Copilot Chatbot, New Tab Widgets, JPEG-XL Support

September 7, 2025 at 15:34
The blog OMG Ubuntu notes that Microsoft Copilot chatbot support has been added in the latest Firefox Nightly builds. "Firefox's sidebar already offers access to popular chatbots, including OpenAI's ChatGPT, Anthropic's Claude, Mistral's Le Chat and Google's Gemini. It previously offered HuggingChat too." As the testing bed for features Mozilla wants to add to stable builds (though not all make it — eh, rounded bottom window corners?), this is something you can expect to find in a future stable update... Copilot in Firefox offers the same features as other chatbots: text prompts, upload files or images, generate images, support for entering voice prompts (for those who fancy their voice patterns being analysed and trained on). And like those other chatbots, there are usage limits, privacy policies, and (for some) account creation needed. In testing, Copilot would only generate half a summary for a webpage, telling me it was too long to produce without me signing in/up for an account. On a related note, Mozilla has updated stable builds to let third-party chatbots summarise web pages when browsing (in-app callout alerts users to the 'new' feature). Users yet to enable chatbots are subtly nudged to do so each time they right-click on a web page. [Between "Take Screenshot" and "View Page Source" there's a menu option for "Ask an AI Chatbot."] Despite making noise about its own (sluggish, but getting faster) on-device AI features that are privacy-orientated, Mozilla is bullish on the need for external chatbots. The article suggests Firefox wants to keep up with Edge and Chrome (which can "infuse first-party AI features directly.") But it adds that Firefox's nightly build is also testing some non-AI features, like new task and timer widgets on Firefox's New Tab page. And "In Firefox Labs, there is an option to enable JPEG XL support, a super-optimised version of JPEG that is gaining traction (despite Google's intransigence)." Other Firefox news: There's good news "for users still clinging to Windows 7," writes the Register. Support for Firefox Extended Support Release 115 "is being extended until March 2026." Google "can keep paying companies like Mozilla to make Google the default search engine, as long as these deals aren't exclusive anymore," reports the blog It's FOSS News. (The judge wrote that "Cutting off payments from Google almost certainly will impose substantial — in some cases, crippling — downstream harms to distribution partners..." according to CNBC — especially since the non-profit Mozilla Foundation gets most of its annual revenue from its Google search deal.) Don't forget you can now search your tabs, bookmarks and browsing history right from the address bar with keywords like @bookmarks, @tabs, and @history. (And @actions pulls up a list of actions like "Open private window" or "Restart Firefox").

Read more of this story at Slashdot.

32% of Senior Developers Say Half Their Shipped Code is AI-Generated

September 7, 2025 at 14:04
In July, 791 professional coders were surveyed by Fastly about their use of AI coding tools, reports InfoWorld. The results? "About a third of senior developers (10+ years of experience) say over half their shipped code is AI-generated," Fastly writes, "nearly two and a half times the rate reported by junior developers (0-2 years of experience), at 13%." "AI will bench test code and find errors much faster than a human, repairing them seamlessly. This has been the case many times," one senior developer said... Senior developers were also more likely to say they invest time fixing AI-generated code. Just under 30% of seniors reported editing AI output enough to offset most of the time savings, compared to 17% of juniors. Even so, 59% of seniors say AI tools help them ship faster overall, compared to 49% of juniors. Just over 50% of junior developers say AI makes them moderately faster. By contrast, only 39% of more senior developers say the same. But senior devs are more likely to report significant speed gains: 26% say AI makes them a lot faster, double the 13% of junior devs who agree. One reason for this gap may be that senior developers are simply better equipped to catch and correct AI's mistakes... Nearly 1 in 3 developers (28%) say they frequently have to fix or edit AI-generated code enough that it offsets most of the time savings. Only 14% say they rarely need to make changes. And yet, over half of developers still feel faster with AI tools like Copilot, Gemini, or Claude. Fastly's survey isn't alone in calling AI productivity gains into question. A recent randomized controlled trial (RCT) of experienced open-source developers found something even more striking: when developers used AI tools, they took 19% longer to complete their tasks. This disconnect may come down to psychology. AI coding often feels smooth... but the early speed gains are often followed by cycles of editing, testing, and reworking that eat into any gains. This pattern is echoed both in conversations we've had with Fastly developers and in many of the comments we received in our survey... Yet, AI still seems to improve developer job satisfaction. Nearly 80% of developers say AI tools make coding more enjoyable... Enjoyment doesn't equal efficiency, but in a profession wrestling with burnout and backlogs, that morale boost might still count for something. Fastly quotes one developer who said their AI tool "saves time by using boilerplate code, but it also needs manual fixes for inefficiencies, which keep productivity in check." The study also found the practice of green coding "goes up sharply with experience. Just over 56% of junior developers say they actively consider energy use in their work, while nearly 80% among mid- and senior-level engineers consider this when coding."

Read more of this story at Slashdot.

Switching Off One Crucial Protein Appears to Reverse Brain Aging in Mice

September 7, 2025 at 11:34
A research team just discovered older mice have more of the protein FTL1 in their hippocampus, reports ScienceAlert. The hippocampus is the region of the brain involved in memory and learning. And the researchers' paper says their new data raises "the exciting possibility that the beneficial effects of targeting neuronal ferritin light chain 1 (FTL1) at old age may extend more broadly, beyond cognitive aging, to neurodegenerative disease conditions in older people." FTL1 is known to be related to storing iron in the body, but hasn't come up in relation to brain aging before... To test its involvement after their initial findings, the researchers used genetic editing to overexpress the protein in young mice, and reduce its level in old mice. The results were clear: the younger mice showed signs of impaired memory and learning abilities, as if they were getting old before their time, while in the older mice there were signs of restored cognitive function — some of the brain aging was effectively reversed... "It is truly a reversal of impairments," says biomedical scientist Saul Villeda, from the University of California, San Francisco. "It's much more than merely delaying or preventing symptoms." Further tests on cells in petri dishes showed how FTL1 stopped neurons from growing properly, with neural wires lacking the branching structures that typically provide links between nerve cells and improve brain connectivity... "We're seeing more opportunities to alleviate the worst consequences of old age," says Villeda. "It's a hopeful time to be working on the biology of aging." The research was led by a team from the University of California, San Francisco — and published in Nature Aging.

Read more of this story at Slashdot.

First AI-Powered 'Self-Composing' Ransomware Was Actually Just a University Research Project

September 7, 2025 at 08:08
Cybersecurity company ESET thought they'd discovered the first AI-powered ransomware in the wild, which they'd dubbed "PromptLock". But it turned out to be the work of university security researchers... "Unlike conventional malware, the prototype only requires natural language prompts embedded in the binary," the researchers write in a research paper, calling it "Ransomware 3.0: Self-Composing and LLM-Orchestrated." Their prototype "uses the gpt-oss:20b model from OpenAI locally" (using the Ollama API) to "generate malicious Lua scripts on the fly." Tom's Hardware said that would help PromptLock evade detection: If they had to call an API on [OpenAI's] servers every time they generate one of these scripts, the jig would be up. The pitfalls of vibe coding don't really apply, either, since the scripts are running on someone else's system. The whole thing was actually an experiment by researchers at NYU's Tandon School of Engineering. So "While it is the first to be AI-powered," the school said in an announcement, "the ransomware prototype is a proof-of-concept that is non-functional outside of the contained lab environment." An NYU spokesperson told Tom's Hardware a Ransomware 3.0 sample was uploaded to malware-analysis platform VirusTotal, and then picked up by the ESET researchers by mistake. But the malware does work: NYU said "a simulated malicious AI system developed by the Tandon team carried out all four phases of ransomware attacks — mapping systems, identifying valuable files, stealing or encrypting data, and generating ransom notes — across personal computers, enterprise servers, and industrial control systems." Is that worrisome? Absolutely. But there's a significant difference between academic researchers demonstrating a proof-of-concept and legitimate hackers using that same technique in real-world attacks. Now the study will likely inspire the ne'er-do-wells to adopt similar approaches, especially since it seems to be remarkably affordable. "The economic implications reveal how AI could reshape ransomware operations," the NYU researchers said. "Traditional campaigns require skilled development teams, custom malware creation, and substantial infrastructure investments. The prototype consumed approximately 23,000 AI tokens per complete attack execution, equivalent to roughly $0.70 using commercial API services running flagship models." As if that weren't enough, the researchers said that "open-source AI models eliminate these costs entirely," so ransomware operators won't even have to shell out the 70 cents needed to work with commercial LLM service providers... "The study serves as an early warning to help defenders prepare countermeasures," NYU said in an announcement, "before bad actors adopt these AI-powered techniques." ESET posted on Mastodon that "Nonetheless, our findings remain valid — the discovered samples represent the first known case of AI-powered ransomware." And the ESET researcher who'd mistakenly thought the ransomware was "in the wild" had warned that looking ahead, ransomware "will likely become more sophisticated, faster spreading, and harder to detect.... This makes cybersecurity awareness, regular backups, and stronger digital hygiene more important than ever."
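
That cost figure is easy to sanity-check: $0.70 for roughly 23,000 tokens implies a blended price of about $30 per million tokens, plausible for flagship commercial models. A trivial back-of-the-envelope check (the two inputs are the researchers' reported numbers; nothing else is assumed):

```python
tokens_per_attack = 23_000  # reported by the NYU researchers
cost_per_attack = 0.70      # dollars, also as reported
implied = cost_per_attack / tokens_per_attack * 1_000_000
print(f"implied price: ${implied:.2f} per million tokens")  # ~$30.43
```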

Read more of this story at Slashdot.

Blizzard's 'Diablo' Devs Unionize. There's Now 3,500 Unionized Microsoft Workers

September 1, 2025 at 11:34
PC Gamer reports: The Diablo team is the next in line to unionize at Blizzard. Over 450 developers across multiple disciplines have voted to form a union under the Communications Workers of America (CWA), and they're now the fourth major Blizzard team to do so... A wave of unions have formed at Blizzard in the last year, including the World of Warcraft, Overwatch, and Story and Franchise Development teams. Elsewhere at Microsoft, Bethesda, ZeniMax Online Studios and ZeniMax QA testers have also unionized... The CWA says over 3,500 Microsoft workers have now organized to fight for fair compensation, job security, and improved working conditions. CWA is America's largest communications and media labor union, and in a statement, local 9510 president Jason Justice called the successful vote "part of a much larger story about turning the tide in an industry that has long overlooked its labor. Entertainment workers across film, television, music, and now video games are standing together to have a seat at the table. The strength of our movement comes from that solidarity." And CWA local 6215 president Ron Swaggerty said "Each new organizing effort adds momentum to the nationwide movement for video game worker power." "What began as a trickle has turned into an avalanche," writes the gaming news site Aftermath, calling the latest vote "a direct result of the union neutrality deal Microsoft struck with CWA in 2022 when it was facing regulatory scrutiny over its $68.7 billion purchase of Activision Blizzard." We've come a long way since small units at Raven and Blizzard Albany fended off Activision Blizzard's pre-acquisition attempts at union busting in 2022 and 2023, and not a moment too soon: Microsoft's penchant for mass layoffs has cut some teams to the bone and left others warily counting down the days until their heads land on the chopping block. This new union, workers hope, will act as a bulwark... [B]ased on preliminary conversations with prospective members, they can already hazard a few guesses as to what they'll be arm-wrestling management over at the bargaining table: pay equity, AI, crediting, and remote work.

Read more of this story at Slashdot.

Lawsuit Says Amazon Prime Video Misleads When You 'Buy' a Long-Term Streaming Rental

September 1, 2025 at 07:34
"Typically when something is available to "buy," ownership of that good or access to that service is offered in exchange for money," writes Ars Technica. "That's not really the case, though, when it comes to digital content." Often, streaming services like Amazon Prime Video offer customers the options to "rent" digital content for a few days or to "buy" it. Some might think that picking "buy" means that they can view the content indefinitely. But these purchases are really just long-term licenses to watch the content for as long as the streaming service has the right to distribute it — which could be for years, months, or days after the transaction. A lawsuit recently filed against Prime Video challenges this practice and accuses the streaming service of misleading customers by labeling long-term rentals as purchases. The conclusion of the case could have implications for how streaming services frame digital content... [The plaintiff's] complaint stands a better chance due to a California law that took effect in January banning the selling of a "digital good to a purchaser with the terms 'buy,' 'purchase,' or any other term which a reasonable person would understand to confer an unrestricted ownership interest in the digital good, or alongside an option for a time-limited rental." There are some instances where the law allows digital content providers to use words like "buy." One example is if, at the time of transaction, the seller receives acknowledgement from the customer that the customer is receiving a license to access the digital content; that they received a complete list of the license's conditions; and that they know that access to the digital content may be "unilaterally revoked...." The case is likely to hinge on whether or not fine print and lengthy terms of use are appropriate and sufficient communication. [The plaintiff]'s complaint acknowledges that Prime Video shows relevant fine print below its "buy" buttons but says that the notice is "far below the 'buy movie' button, buried at the very bottom" of the page and is not visible until "the very last stage of the transaction," after a user has already clicked "buy." Amazon is sure to argue that "If plaintiff didn't want to read her contract, including the small print, that's on her," says consumer attorney Danny Karon. But he tells Ars Technica "I like plaintiff's chances. A normal consumer, after whom the California statute at issue is fashioned, would consider 'buy' or 'purchase' to involve a permanent transaction, not a mere rental... If the facts are as plaintiff alleges, Amazon's behavior would likely constitute a breach of contract or statutory fraud."

Read more of this story at Slashdot.

First 'AI Music Creator' Signed by Record Label. More Ahead, or Just a Copyright Quandary?

September 1, 2025 at 03:34
"I have no musical talent at all," says Oliver McCann. "I can't sing, I can't play instruments, and I have no musical background at all!" But the Associated Press describes 37-year-old McCann as a British "AI music creator" — and last month McCann signed with an independent record label "after one of his tracks racked up 3 million streams, in what's billed as the first time a music label has inked a contract with an AI music creator." McCann is an example of how ChatGPT-style AI song generation tools like Suno and Udio have spawned a wave of synthetic music, a movement most notably highlighted by a fictitious group, Velvet Sundown, that went viral even though all its songs, lyrics and album art were created by AI. Experts say generative AI is set to transform the music world. However, there are scant details, so far, on how it's impacting the $29.6 billion global recorded music market, which includes about $20 billion from streaming. The most reliable figures come from music streaming service Deezer, which estimates that 18% of songs uploaded to its platform every day are purely AI generated, though they only account for a tiny amount of total streams, hinting that few people are actually listening. Other, bigger streaming platforms like Spotify haven't released any figures on AI music... "It's a total boom. It's a tsunami," said Josh Antonuccio, director of Ohio University's School of Media Arts and Studies. The amount of AI generated music "is just going to only exponentially increase" as young people grow up with AI and become more comfortable with it, he said. [Antonuccio says later the cost of making a hit record "just keeps winnowing down from a major studio to a laptop to a bedroom. And now it's like a text prompt — several text prompts." Though there's a lack of legal clarity over copyright issues.] Generative AI, with its ability to spit out seemingly unique content, has divided the music world, with musicians and industry groups complaining that recorded works are being exploited to train AI models that power song generation tools... Three major record companies, Sony Music Entertainment, Universal Music Group and Warner Records, filed lawsuits last year against Suno and Udio for copyright infringement. In June, the two sides also reportedly entered negotiations that could go beyond settling the lawsuits and set rules for how artists are paid when AI is used to remix their songs. GEMA, a German royalty collection society, has sued Suno, accusing it of generating music similar to songs like "Mambo No. 5" by Lou Bega and "Forever Young" by Alphaville. More than 1,000 musicians, including Kate Bush, Annie Lennox and Damon Albarn, released a silent album to protest proposed changes to U.K. laws on AI they fear would erode their creative control. Meanwhile, other artists, such as will.i.am, Timbaland and Imogen Heap, have embraced the technology. Some users say the debate is just a rehash of old arguments about once-new technology that eventually became widely used, such as AutoTune, drum machines and synthesizers.

Read more of this story at Slashdot.

400 'Tech Utopian' Refugees Consider New Crypto-Friendly State

September 1, 2025 at 00:50
"Nearly 400 students, many of them entrepreneurs, have so far made the journey to Forest City to study everything from coding to unconventional theories on statehood," reports Bloomberg. "They're building crypto projects, fine-tuning their physiques and testing whether a shared ideology — rather than just shared territory — can bind a community." They have descended on Forest City to attend Network School, the brainchild of former Coinbase Inc. executive and "The Network State" author Balaji Srinivasan. In this troubled megaproject once envisaged to house some 50 times its current population, they're conducting a real-life experiment of sorts with Srinivasan's vision of "startup societies" defined less by historical territory than shared beliefs in technology, cryptocurrency and light regulation... Mornings are spent in product sprints and coding sessions; afternoons in seminars exploring topics from the Meiji Restoration to Singapore's statecraft and the mechanics of decentralized governance. Guest lectures double as both technological deep dives and ideological sermons, according to half a dozen students interviewed by Bloomberg. The campus also mirrors Silicon Valley's infatuation with longevity and health, right down to a commercial-grade gym and specially designed workout routines. Students follow a protein-heavy diet... After co-founding DNA testing startup Counsyl in 2008 and serving as its chief technology officer, Srinivasan spent five years at venture capital firm Andreessen Horowitz, first as general partner and then as board partner. He joined Coinbase as CTO in 2018 when the crypto exchange bought a portfolio company he oversaw and left after a little over a year, according to his LinkedIn profile. In a 2013 speech at Y Combinator's Startup School, Srinivasan brought his ideas about what he saw as a fundamental conflict between some modern nation-states and innovation to a wider audience. In the address, he advocated for Silicon Valley's "ultimate exit" from the U.S., which he argued was obsolete and hostile to innovators. In essence: If the society you live in is broken, why not just "opt out" and create a new one? "The Network State: How To Start a New Country," published in 2022, expanded on Srinivasan's "exit" concept to outline how online, ideologically aligned communities can use crypto and digital tools to form new, decentralized states. A network state can be geographically dispersed and bound together by the internet and blockchains, he says, and the aim is to gain diplomatic recognition... On the Moment of Zen podcast in September 2023, he outlined how the "Gray Tribe" — entrepreneurs, innovators and thinkers — can retake control of San Francisco from the Blues using a variety of tactics, like allying with local police. The effort would involve gaining control of territory, according to Srinivasan, who didn't advocate for violence. "Elections are just the cherry on the cake," he said. "Elections are just a reflection of your total control of the streets." The cost of attending Network School "starts at $1,500 per month, including lodging and food, for those who opt for a shared room."

Read more of this story at Slashdot.

OpenAI Is Scanning Users' ChatGPT Conversations and Reporting Content To Police

August 31, 2025 at 23:19
Futurism reports: Earlier this week, buried in the middle of a lengthy blog post addressing ChatGPT's propensity for severe mental health harms, OpenAI admitted that it's scanning users' conversations and reporting to police any interactions that a human reviewer deems sufficiently threatening. "When we detect users who are planning to harm others, we route their conversations to specialized pipelines where they are reviewed by a small team trained on our usage policies and who are authorized to take action, including banning accounts," it wrote. "If human reviewers determine that a case involves an imminent threat of serious physical harm to others, we may refer it to law enforcement." The announcement raised immediate questions. Don't human moderators judging tone, for instance, undercut the entire premise of an AI system that its creators say can solve broad, complex problems? How is OpenAI even figuring out users' precise locations in order to provide them to emergency responders? How is it protecting against abuse by so-called swatters, who could pretend to be someone else and then make violent threats to ChatGPT in order to get their targets raided by the cops...? The admission also seems to contradict remarks by OpenAI CEO Sam Altman, who recently called for privacy akin to a "therapist or a lawyer or a doctor" for users talking to ChatGPT. "Others argued that the AI industry is hastily pushing poorly-understood products to market, using real people as guinea pigs, and adopting increasingly haphazard solutions to real-world problems as they arise..." Thanks to long-time Slashdot reader schwit1 for sharing the news.

Read more of this story at Slashdot.

Humans Are Being Hired to Make AI Slop Look Less Sloppy

August 31, 2025 at 22:19
Graphic designer Lisa Carstens "spends a good portion of her day working with startups and individual clients looking to fix their botched attempts at AI-generated logos," reports NBC News: Such gigs are part of a new category of work spawned by the generative AI boom that threatened to displace creative jobs across the board: Anyone can now write blog posts, produce a graphic or code an app with a few text prompts, but AI-generated content rarely makes for a satisfactory final product on its own... Fixing AI's mistakes is not their ideal line of work, many freelancers say, as it tends to pay less than traditional gigs in their area of expertise. But some say it's what helps pay the bills.... As companies struggle to figure out their approach to AI, recent data provided to NBC News from freelance job platforms Upwork, Freelancer and Fiverr also suggest that demand for various types of creative work surged this year, and that clients are increasingly looking for humans who can work alongside AI technologies without relying on or rejecting them entirely. Data from Upwork found that although AI is already automating lower-skilled and repetitive tasks, the platform is seeing growing demand for more complex work such as content strategy or creative art direction. And over the past six months, Fiverr said it has seen a 250% boost in demand for niche tasks across web design and book illustration, from "watercolor children story book illustration" to "Shopify website design." Similarly, Freelancer saw a surge in demand this year for humans in writing, branding, design and video production, including requests for emotionally engaging content like "heartfelt speeches...." The low pay from clients who have already cheaped out on AI tools has affected gig workers across industries, including more technical ones like coding. For India-based web and app developer Harsh Kumar, many of his clients say they had already invested much of their budget in "vibe coding" tools that couldn't deliver the results they wanted. But others, he said, are realizing that shelling out for a human developer is worth the headaches saved from trying to get an AI assistant to fix its own "crappy code." Kumar said his clients often bring him vibe-coded websites or apps that resulted in unstable or wholly unusable systems. "Even outside of any obvious mistakes made by AI tools, some artists say their clients simply want a human touch to distinguish themselves from the growing pool of AI-generated content online..."

Read more of this story at Slashdot.
