Vue lecture

Could Heart Attacks Be Triggered By Infections?

Finland's second-largest university has announced new research suggesting that heart attacks could be an infectious disease. [T]he research found that, in coronary artery disease, atherosclerotic plaques containing cholesterol may harbor a gelatinous, asymptomatic biofilm formed by bacteria over years or even decades. Dormant bacteria within the biofilm remain shielded from both the patient's immune system and antibiotics, neither of which can penetrate the biofilm matrix. A viral infection or another external trigger may activate the biofilm, leading to the proliferation of bacteria and an inflammatory response. The inflammation can cause a rupture in the fibrous cap of the plaque, resulting in thrombus [blood clot] formation and ultimately myocardial infarction... "Bacterial involvement in coronary artery disease has long been suspected, but direct and convincing evidence has been lacking," explains professor Pekka Karhunen [who led the study with researchers from the UK and Finland]. "Our study demonstrated the presence of genetic material — DNA — from several oral bacteria inside atherosclerotic plaques." The findings were validated by developing an antibody targeted at the discovered bacteria, which unexpectedly revealed biofilm structures in arterial tissue. Bacteria released from the biofilm were observed in cases of myocardial infarction. The body's immune system had responded to these bacteria, triggering inflammation which ruptured the cholesterol-laden plaque. The observations pave the way for the development of novel diagnostic and therapeutic strategies for myocardial infarction. Furthermore, they advance the possibility of preventing coronary artery disease and myocardial infarction by vaccination. "The research is part of an extensive EU-funded cardiovascular research project involving 11 countries..."

Read more of this story at Slashdot.

  •  

Myanmar's 'Cyber-Slavery Compounds' May Hold 100,000 Trafficked People

It was "little more than empty fields" five years ago — but it's now "a vast, heavily guarded complex stretching for 210 hectares (520 acres)," reports the Guardian, "the frontline of a multibillion-dollar criminal fraud industry fuelled by human trafficking and brutal violence." Myanmar, Cambodia and Laos have in recent years become havens for transnational crime syndicates running scam centres such as KK Park, which use enslaved workers to run complex online fraud and scamming schemes that generate huge profits. There have been some attempts to crack down on the centres and rescue the workers, who can be subjected to torture and trapped inside. But drone images and new research shared exclusively with the Guardian reveal that the number of such centres operating along the Thai-Myanmar border has more than doubled since Myanmar's military seized power in 2021, with construction continuing to this day. Data from the Australian Strategic Policy Institute (Aspi), a defence thinktank in Canberra, shows that the number of Myanmar scam centres on the Thai border has increased from 11 to 27, and they have expanded in size by an average of 5.5 hectares a month. Drone images and photographs of KK Park and other Myanmar scam centres, Tai Chang and Shwe Kokko, taken by the Guardian in August show new features and active building work... Myanmar's military junta has allowed the spread of scam centres inside the country as these criminal enterprises have become an essential part of the country's conflict economy since the coup, helping it rise to the top of the global list of countries harbouring organised crime. According to Aspi's analysis, Myanmar's military, which has lost huge swathes of territory since the coup and is struggling to retain its grip on power, cannot take meaningful measures against the scam compounds without endangering its precarious relations with the crucial armed militias who are profiting from them. 
While 7,000 people were freed from the compounds earlier this year, "Thai police estimated earlier this year that as many as 100,000 people were held inside Myanmar scam centres," the article notes. Elsewhere the Guardian reports that "The centres are run by Chinese criminal gangs," and describes people who unwittingly came to Thailand for customer service jobs, only to be trafficked to Myanmar's guarded "cyberslavery compounds" and "forced to send thousands of messages from fake social-media profiles, posing as a rich American investor to swindle US real estate agents into cryptocurrency scams." Since 2020, south-east Asia's cyber-slavery industry has entrapped hundreds of thousands of people and forced them to perform "pig butchering" — the brutal term for building trust with a fraud target before scamming them. At first, the industry mostly captured Chinese and Taiwanese people, then it moved on to south-east Asians and Indians — and now Africans. Criminal syndicates have been shifting towards scamming victims in the US and Europe after Chinese efforts to prevent its citizens being targeted, experts told the Guardian. That has led some trafficking networks to seek recruits with English-language and tech skills — including east Africans, thousands of whom are now estimated to be trapped inside south-east Asian compounds, says Benedikt Hofmann, the UN Office on Drugs and Crime's representative for south-east Asia and the Pacific. Thanks to long-time Slashdot reader mspohr for sharing the article.

Read more of this story at Slashdot.

  •  

UAE Lab Releases Open-Source Model to Rival China's DeepSeek

"The United Arab Emirates wants to compete with the U.S. and China in AI," writes Gizmodo, "and a new open source model may be its strongest contender yet." "An Emirati AI lab called the Institute of Foundation Models (IFM) released K2 Think on Tuesday, a model that researchers say rivals OpenAI's ChatGPT and China's DeepSeek in standard benchmark tests." "With just 32 billion parameters, it outperforms flagship reasoning models that are 20x larger," the lab wrote in a press release on Tuesday. DeepSeek's R1 has 671 billion parameters, though only 37 billion are active. Meta's latest Llama 4 models range from 17 billion to 288 billion active parameters. OpenAI doesn't share parameter information. Researchers also claim that K2 Think leads "all open-source models in math performance" across several benchmarks. The model is intended to be more focused on math, coding, and scientific research than most other AI chatbots. The Emirati lab's selling point for the model is similar to DeepSeek's strategy that disrupted the AI market earlier this year: optimized efficiency that promises the same or better computing power at a lower cost... The lab is also aiming to be transparent in everything, "open-sourcing not just models but entire development processes" that provide "researchers with complete materials including training code, datasets, and model checkpoints," IFM said in a press release from May. The UAE and other Arab countries are investing in AI to try to reduce their economic dependence on fossil fuels, the article points out.

Read more of this story at Slashdot.

  •  

A Single Exercise Session May Slow Cancer Cell Growth, Study Finds

The Washington Post notes that past research "indicates that exercise helps some cancer survivors avoid recurrence of their disease." But a new study "offers an explanation of how, showing that exercise changes the inner workings of our muscles and cells, although more study is still needed..." The study, published last month, involved 32 women who'd survived breast cancer. After a single session of interval training or weightlifting, their blood contained higher levels of certain molecules, and those factors helped put the brakes on laboratory-grown breast cancer cells. "Our work shows that exercise can directly influence cancer biology, suppressing tumor growth through powerful molecular signals," said Robert Newton, the deputy director of the Exercise Medicine Research Institute at Edith Cowan University in Perth, Australia, and senior author of the new study. His group's experiment adds to mounting evidence that exercise upends the risks of not only developing but also surviving cancer... Scientists know contracting muscles release a slew of hormones and biochemicals, known as myokines, into our bloodstreams and have long suspected these myokines fight cancer. In some past studies with mice and healthy people, blood drawn after exercise and added to live cancer cells killed or suppressed the cancer's growth... [The new study tested cancer cells in high-tech petri dishes with blood drawn from cancer survivors.] Drenched in plasma from either the interval trainers or the lifters, many cancer cells quit growing. Quite a few died. (The blood drawn before exercise had no effects.) The cancer-fighting impacts were greatest with the blood drawn after interval training. Why? Additional testing showed this blood contained the highest concentrations of certain, beneficial myokines, especially IL-6, a protein that affects immune responses and inflammation... 
What these results mean, Newton said, is that "exercise doesn't just improve fitness and well-being" in people who've had cancer. "It also orchestrates a complex biological response that includes direct anticancer signals from muscles..." Questions remain, of course. Can any type of exercise fight cancer? Newton and other researchers have doubts. The exercise in this study was strenuous, by design. "Earlier studies suggested that the stronger the exercise stimulus, the greater the release of anticancer myokines," Newton said... Even the weight training in this study was less potent than the intense intervals. But Newton believes weight training remains key to cancer fighting. "People with cancer who increase their muscle mass through resistance training also experience greater rises in circulating myokines," he said. More muscle means more myokines.

Read more of this story at Slashdot.

  •  

The Software Engineers Paid To Fix Vibe Coded Messes

"Freelance developers and entire companies are making a business out of fixing shoddy vibe coded software," writes 404 Media, interviewing one of the "dozens of people on Fiverr... now offering services specifically catering to people with shoddy vibe coded projects." Hamid Siddiqi, who offers to "review, fix your vibe code" on Fiverr, told 404 Media that "Currently, I work with around 15-20 clients regularly, with additional one-off projects throughout the year." ("Siddiqi said common issues he fixes in vibe coded projects include inconsistent UI/UX design in AI-generated frontends, poorly optimized code that impacts performance, misaligned branding elements, and features that function but feel clunky or unintuitive," as well as work on color schemes, animations, and layouts.) And other coders are also pursuing the "vibe coded mess" market: Swatantra Sohni, who started VibeCodeFixers.com, a site for people with vibe coded projects who need help from experienced developers to fix or finish their projects, says that almost 300 experienced developers have posted their profiles to the site. He said so far VibeCodeFixers.com has connected only 30-40 vibe code projects with fixers, but that he hasn't done anything to promote the service and at the moment is focused on adding as many software developers to the platform as possible... "Most of these vibe coders, either they are product managers or they are sales guys, or they are small business owners, and they think that they can build something," Sohni told me. "So for them it's more for prototyping..." Another big issue Sohni identified is "credit burn," meaning the money vibe coders waste on AI usage fees in the final 10-20 percent stage of developing the app, when adding new features breaks existing features. Sohni told me he thinks vibe coding is not going anywhere, but neither are human developers.
"I feel like the role [of human developers] would be slightly limited, but we will still need humans to keep this AI on the leash," he said. The article also notes that established software development companies like Ulam Labs now say "we clean up after vibe coding. Literally." "Built something fast? Now it's time to make it solid," Ulam Labs pitches on its site, suggesting that for their potential customers "the tech debt is holding you back: no tests, shaky architecture, CI/CD is a dream, and every change feels like defusing a bomb. That's where we come in."

Read more of this story at Slashdot.

  •  

Megaupload Founder Kim Dotcom Loses Latest Bid to Avoid US Extradition

In 2015 Kim Dotcom answered questions from Slashdot's readers. Now CBS News reports on "the latest chapter in a protracted 13-year battle by the U.S. government" to extradite Finnish-German millionaire Kim Dotcom from New Zealand: A New Zealand court has rejected the latest bid by internet entrepreneur Kim Dotcom to halt his deportation to the U.S. on charges related to his file-sharing website Megaupload. Dotcom had asked the High Court to review the legality of an official's August 2024 decision that he should be surrendered to the U.S. to face trial on charges of copyright infringement, money laundering and racketeering... The Megaupload founder had applied for what in New Zealand is called a judicial review, in which a judge is asked to evaluate whether an official's decision was lawful. A judge on Wednesday dismissed Dotcom's arguments that the decision to deport him was politically motivated and that he would face grossly disproportionate treatment in the U.S... New Zealand's government hasn't disclosed what will happen next in the extradition process or divulged an expected timeline for Dotcom to be surrendered to the United States. Dotcom "has been free on bail in New Zealand since February 2012," the article points out — and "One of his lawyers, Ron Mansfield, told Radio New Zealand that Dotcom's team had 'much fight left in us as we seek to secure a fair outcome,' but he didn't elaborate..." The article notes that the latest decision "could be challenged in the Court of Appeal, where a deadline for filing is October 8."

Read more of this story at Slashdot.

  •  

'Forever Chemicals' Found In 95% of Beers Tested In the U.S.

ScienceDaily reports: Forever chemicals known as PFAS have turned up in an unexpected place: beer. Researchers tested 23 different beers from across the U.S. and found that 95% contained PFAS, with the highest concentrations showing up in regions with known water contamination. The findings reveal how pollution in municipal water supplies can infiltrate popular products, raising concerns for both consumers and brewers... [PFAS] have been found in surface water, groundwater and municipal water supplies across the U.S. and the world. Although breweries typically have water filtration and treatment systems, they are not designed to remove PFAS... [T]he researchers call for greater awareness among brewers, consumers and regulators to limit overall PFAS exposure. These results also highlight the possible need for water treatment upgrades at brewing facilities as PFAS regulations in drinking water change or updates to municipal water system treatment are implemented. "I hope these findings inspire water treatment strategies and policies that help reduce the likelihood of PFAS in future pours," research lead Jennifer Hoponick Redmon said in a May announcement about their research.

Read more of this story at Slashdot.

  •  

Some Angry GitHub Users Are Rebelling Against GitHub's Forced Copilot AI Features

Slashdot reader Charlotte Web shared this report from the Register: Among the software developers who use Microsoft's GitHub, the most popular community discussion in the past 12 months has been a request for a way to block Copilot, the company's AI service, from generating issues and pull requests in code repositories. The second most popular discussion — where popularity is measured in upvotes — is a bug report that seeks a fix for the inability of users to disable Copilot code reviews. Both of these questions, the first opened in May and the second opened a month ago, remain unanswered, despite an abundance of comments critical of generative AI and Copilot... The author of the first, developer Andi McClure, published a similar request to Microsoft's Visual Studio Code repository in January, objecting to the reappearance of a Copilot icon in VS Code after she had uninstalled the Copilot extension... "I've been for a while now filing issues in the GitHub Community feedback area when Copilot intrudes on my GitHub usage," McClure told The Register in an email. "I deeply resent that on top of Copilot seemingly training itself on my GitHub-posted code in violation of my licenses, GitHub wants me to look at (effectively) ads for this project I will never touch. If something's bothering me, I don't see a reason to stay quiet about it. I think part of how we get pushed into things we collectively don't want is because we stay quiet about it." It's not just the burden of responding to AI slop, an ongoing issue for Curl maintainer Daniel Stenberg. It's the permissionless copying and regurgitation of speculation as fact, mitigated only by small print disclaimers that generative AI may produce inaccurate results. It's also GitHub's disavowal of liability if Copilot code suggestions happen to have reproduced source code that requires attribution. 
It's what the Servo project characterizes in its ban on AI code contributions as the lack of code correctness guarantees, copyright issues, and ethical concerns. Similar objections have been used to justify AI code bans in GNOME's Loupe project, FreeBSD, Gentoo, NetBSD, and QEMU... Calls to shun Microsoft and GitHub go back a long way in the open source community, but moved beyond simmering dissatisfaction in 2022 when the Software Freedom Conservancy (SFC) urged free software supporters to give up GitHub, a position SFC policy fellow Bradley M. Kuhn recently reiterated. McClure says that in the last six months her posts have drawn more community support, and tells The Register that within the last month there's been a second change in how people see GitHub. After GitHub moved from a distinct subsidiary to part of Microsoft's CoreAI group, "it seems to have galvanized the open source community from just complaining about Copilot to now actively moving away from GitHub."

Read more of this story at Slashdot.

  •  

There's 50% Fewer Young Employees at Tech Companies Now Than Two Years Ago

An anonymous reader shared this report from Fortune: The percentage of young Gen Z employees between the ages of 21 and 25 has been cut in half at technology companies over the past two years, according to recent data from compensation management software business Pave with workforce data from more than 8,300 companies. These young workers accounted for 15% of the workforce at large public tech firms in January 2023. By August 2025, they only represented 6.8%. The situation isn't pretty at big private tech companies, either — during that same time period, the proportion of early-career Gen Z employees dwindled from 9.3% to 6.8%. Meanwhile, the average age of a worker at a tech company has risen dramatically over those two and a half years. Between January 2023 and July 2025, the average age of all employees at large public technology businesses rose from 34.3 years to 39.4 years — a difference of more than five years. On the private side, the change was less drastic, with the typical age only increasing from 35.1 to 36.6 years old... "If you're 35 or 40 years old, you're pretty established in your career, you have skills that you know cannot yet be disrupted by AI," Matt Schulman, founder and CEO of Pave, tells Fortune. "There's still a lot of human judgment when you're operating at the more senior level...If you're a 22-year-old that used to be an Excel junkie or something, then that can be disrupted. So it's almost a tale of two cities." Schulman points to a few reasons why tech company workforces are getting older and locking Gen Z out of jobs. One is that big companies — like Salesforce, Meta, and Microsoft — are becoming a lot more efficient thanks to the advent of AI. And despite their soaring trillion-dollar profits, they're cutting employees at the bottom rungs in favor of automation. Entry-level jobs have also dwindled because of AI agents, and promotions have stalled as many companies look to do more with less.
Once technology companies weed out junior roles, occupied by Gen Zers, their workforces are bound to rise in age. Schulman tells Fortune that Gen Z also has an advantage: tech corporations can see them as fresh talent that "can just break the rules and leverage AI to a much greater degree without the hindrance of years of bias." And Priya Rathod, workplace trends editor for LinkedIn, tells Fortune there are promising tech-industry entry-level roles in AI ethics, cybersecurity, UX, and product operations. "Building skills through certifications, gig work, and online communities can open doors... For Gen Z, the right certifications or micro credentials can outweigh a lack of years on the resume. This helps them stay competitive even when entry-level opportunities shrink."

Read more of this story at Slashdot.

  •  

A New Four-Person Crew Will Simulate a Year-Long Mars Mission, NASA Announces

Somewhere in Houston, four research volunteers "will soon participate in NASA's year-long simulation of a Mars mission," NASA announced this week, saying it will provide "foundational data to inform human exploration of the Moon, Mars, and beyond." The 378-day simulation will take place inside a 3D-printed, 1,700-square-foot habitat at NASA's Johnson Space Center in Houston — starting on October 19th and continuing until Halloween of 2026: Through a series of Earth-based missions called CHAPEA (Crew Health and Performance Exploration Analog), NASA aims to evaluate certain human health and performance factors ahead of future Mars missions. The crew will undergo realistic resource limitations, equipment failures, communication delays, isolation and confinement, and other stressors, along with simulated high-tempo extravehicular activities. These scenarios allow NASA to make informed trades between risks and interventions for long-duration exploration missions. "As NASA gears up for crewed Artemis missions, CHAPEA and other ground analogs are helping to determine which capabilities could best support future crews in overcoming the human health and performance challenges of living and operating beyond Earth's resources — all before we send humans to Mars," said Sara Whiting, project scientist with NASA's Human Research Program at NASA Johnson. Crew members will carry out scientific research and operational tasks, including simulated Mars walks, growing a vegetable garden, robotic operations, and more. Technologies specifically designed for Mars and deep space exploration will also be tested, including a potable water dispenser and diagnostic medical equipment... This mission, facilitated by NASA's Human Research Program, is the second one-year Mars surface simulation conducted through CHAPEA. The first mission concluded on July 6, 2024.

Read more of this story at Slashdot.

  •  

Microsoft's Analog Optical Computer Shows AI Promise

Four years ago a small Microsoft Research team started creating an analog optical computer. They used commercially available parts like sensors from smartphone cameras, optical lenses, and micro-LED lights finer than a human hair. "As the light passes through the sensor at different intensities, the analog optical computer can add and multiply numbers," explains a Microsoft blog post. They envision the technology scaling to a computer that for certain problems is 100X faster and 100X more energy efficient — running AI workloads "with a fraction of the energy needed and at much greater speed than the GPUs running today's large language models." The results are described in a paper published in the scientific journal Nature, according to the blog post: At the same time, Microsoft is publicly sharing its "optimization solver" algorithm and the "digital twin" it developed so that researchers from other organizations can investigate this new computing paradigm and propose new problems to solve and new ways to solve them. Francesca Parmigiani, a Microsoft principal research manager who leads the team developing the AOC, explained that the digital twin is a computer-based model that mimics how the real analog optical computer [or "AOC"] behaves; it simulates the same inputs, processes and outputs, but in a digital environment — like a software version of the hardware. This allowed the Microsoft researchers and collaborators to solve optimization problems at a scale that would be useful in real situations. This digital twin will also allow other users to experiment with how problems, either in optimization or in AI, would be mapped and run on the analog optical computer hardware. "To have the kind of success we are dreaming about, we need other researchers to be experimenting and thinking about how this hardware can be used," Parmigiani said.

Hitesh Ballani, who directs research on future AI infrastructure at the Microsoft Research lab in Cambridge, U.K., said he believes the AOC could be a game changer. "We have actually delivered on the hard promise that it can make a big difference in two real-world problems in two domains, banking and healthcare," he said. Further, "we opened up a whole new application domain by showing that exactly the same hardware could serve AI models, too." In the healthcare example described in the Nature paper, the researchers used the digital twin to reconstruct MRI scans with a good degree of accuracy. The research indicates that the device could theoretically cut the time it takes to do those scans from 30 minutes to five. In the banking example, the AOC succeeded in resolving a complex optimization test case with a high degree of accuracy... As researchers refine the AOC, adding more and more micro-LEDs, it could eventually have millions or even more than a billion weights. At the same time, it should get smaller and smaller as parts are miniaturized, researchers say.
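
The "digital twin" idea can be pictured with a toy sketch (an illustration under stated assumptions, not Microsoft's actual model; the function name, noise level, and bit depth are all invented here): simulate an analog matrix-vector multiply digitally by adding the imperfections real optics would introduce, such as limited weight precision and sensor readout noise.

```python
import numpy as np

def analog_matvec(weights, x, noise_std=0.01, bits=8):
    """Toy 'digital twin' of an analog optical matrix-vector multiply.

    Models two analog effects on top of the ideal product:
    quantized weights (the LED/sensor array has finite resolution)
    and additive Gaussian readout noise from the camera sensor.
    """
    levels = 2 ** bits - 1
    # Quantize weights to the resolution the optics might offer.
    w_q = np.round(weights * levels) / levels
    ideal = w_q @ x
    # Additive noise stands in for sensor readout noise (seeded for repeatability).
    return ideal + np.random.default_rng(0).normal(0.0, noise_std, ideal.shape)

rng = np.random.default_rng(42)
W = rng.uniform(0, 1, (4, 3))   # optical transmission weights in [0, 1]
x = rng.uniform(0, 1, 3)        # input light intensities

approx = analog_matvec(W, x)
print(np.abs(approx - W @ x).max())  # small, noise-bounded error
```

Software models like this let researchers ask how much precision or noise a workload can tolerate before ever touching the optical hardware, which is roughly the role the article describes for the released digital twin.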

Read more of this story at Slashdot.

  •  

Microsoft's Cloud Services Disrupted by Red Sea Cable Cuts

An anonymous reader shared this report from the BBC: Microsoft's Azure cloud services have been disrupted by undersea cable cuts in the Red Sea, the US tech giant says. Users of Azure — one of the world's leading cloud computing platforms — would experience delays because of problems with internet traffic moving through the Middle East, the company said. Microsoft did not explain what might have caused the damage to the undersea cables, but added that it had been able to reroute traffic through other paths. Over the weekend, there were reports suggesting that undersea cable cuts had affected the United Arab Emirates and some countries in Asia.... On Saturday, NetBlocks, an organisation that monitors internet access, said a series of undersea cable cuts in the Red Sea had affected internet services in several countries, including India and Pakistan. "We do expect higher latency on some traffic that previously traversed through the Middle East," Microsoft said in its status announcement — while stressing that traffic "that does not traverse through the Middle East is not impacted".

Read more of this story at Slashdot.

  •  

Chinese Hackers Impersonated US Lawmaker in Email Espionage Campaign

As America's trade talks with China were set to begin last July, a "puzzling" email reached several U.S. government agencies, law firms, and trade groups, reports the Wall Street Journal. It appeared to be from the chair of a U.S. Congressional committee, Representative John Moolenaar, asking recipients to review an alleged draft of upcoming legislation — sent as an attachment. "But why had the chairman sent the message from a nongovernment address...?" "The cybersecurity firm Mandiant determined the spyware would allow the hackers to burrow deep into the targeted organizations if any of the recipients had opened the purported draft legislation, according to documents reviewed by The Wall Street Journal." It turned out to be the latest in a series of alleged cyber espionage campaigns linked to Beijing, people familiar with the matter said, timed to potentially deploy spyware against organizations giving input on President Trump's trade negotiations. The FBI and the Capitol Police are investigating the Moolenaar emails, and cyber analysts traced the embedded malware to a hacker group known as APT41 — believed to be a contractor for Beijing's Ministry of State Security... The hacking campaign appeared to be aimed at giving Chinese officials an inside look at the recommendations Trump was receiving from outside groups. It couldn't be determined whether the attackers had successfully breached any of the targets. A Federal Bureau of Investigation spokeswoman declined to provide details but said the bureau was aware of the incident and was "working with our partners to identify and pursue those responsible...." The alleged campaign comes as U.S. law-enforcement officials have been surprised by the prolific and creative nature of China's spying efforts. The FBI revealed last month that a Beijing-linked espionage campaign that hit U.S. telecom companies and swept up Trump's phone calls actually targeted more than 80 countries and reached across the globe... 
The Moolenaar impersonation comes as several administration officials have recently faced impostors of their own. The State Department warned diplomats around the world in July that an impostor was using AI to imitate Secretary of State Marco Rubio's voice in messages sent to foreign officials. Federal authorities are also investigating an effort to impersonate White House chief of staff Susie Wiles, the Journal reported in May... The FBI issued a warning that month that "malicious actors have impersonated senior U.S. officials" targeting contacts with AI-generated voice messages and texts. And in January, the article points out, all the staffers on Moolenaar's committee "received emails falsely claiming to be from the CEO of Chinese crane manufacturer ZPMC, according to people familiar with the episode." Thanks to long-time Slashdot reader schwit1 for sharing the news.

Read more of this story at Slashdot.

  •  

Publishers Demand 'AI Overview' Traffic Stats from Google, Alleging 'Forced' Deals

AI Overviews have lowered click-through traffic to Daily Mail sites by as much as 89%, the publisher told a UK government body that regulates competition. So they've joined other top news organizations (including Guardian Media Group and the magazine trade body the Periodical Publishers Association) in asking the regulators "to make Google more transparent and provide traffic statistics from AI Overview and AI Mode to publishers," reports the Guardian: Publishers — already under financial pressure from soaring costs, falling advertising revenues, the decline of print and the wider trend of readers turning away from news — argue that they are effectively being forced by Google to either accept deals, including on how content is used in AI Overview and AI Mode, or "drop out of all search results", according to several sources... In recent years, Google Discover, which feeds users articles and videos tailored to them based on their past online activity, has replaced search as the main source of click-throughs to content. However, David Buttle, founder of the consultancy DJB Strategies, says the service, which is also tied to publishers' overall search deals, does not deliver the quality traffic that most publishers need to drive their long-term strategies. "Google Discover is of zero product importance to Google at all," he says. "It allows Google to funnel more traffic to publishers as traffic from search declines ... Publishers have no choice but to agree or lose their organic search. It also tends to reward clickbaity type content. It pulls in the opposite direction to the kind of relationship publishers want." Meanwhile, publishers are fighting a wider battle with AI companies seeking to plunder their content to train their large language models. 
The creative industry is intensively lobbying the government to ensure that proposed legislation does not allow AI firms to use copyright-protected work without permission, a move that would stop the "value being scraped" out of the £125bn sector. Some publishers have struck bilateral licensing deals with AI companies — such as the FT, the German media group Axel Springer, the Guardian and the Nordic publisher Schibsted with the ChatGPT maker OpenAI — while others such as the BBC have taken action against AI companies alleging copyright theft. "It is a two-pronged attack on publishers, a sort of pincer movement," says Chris Duncan, a former News UK and Bauer Media senior executive who now runs a media consultancy, Seedelta. "Content is disappearing into AI products without serious remuneration, while AI summaries are being integrated into products so there is no need to click through, effectively taking money from both ends. It is an existential crisis." "At the moment the AI and tech community are showing no signs of supporting publisher revenue," says the chief executive of the UK's Periodical Publishers Association...

Read more of this story at Slashdot.

  •  

Linus Torvalds Expresses Frustration With 'Garbage' Link Tags In Git Commits

"I have not pulled this, I'm annoyed by having to even look at this, and if you actually expect me to pull this I want a real explanation and not a useless link," Linus Torvalds posted Friday on the Linux kernel mailing list. Phoronix explains: It's become a common occurrence seeing "Link: " tags within Git commits for the Linux kernel that point to the latest Linux kernel mailing list patches of the same patch... Linus Torvalds has had enough and will be more strict against accepting pull requests that have link tags of no value. He commented yesterday on a block pull request that he pulled and then backed out of: "And dammit, this commit has that promising 'Link:' argument that I hoped would explain why this pointless commit exists, but AS ALWAYS that link only wasted my time by pointing to the same damn information that was already there. I was hoping that it would point to some oops report or something that would explain why my initial reaction was wrong. "Stop this garbage already. Stop adding pointless Link arguments that waste people's time. Add the link if it has *ADDITIONAL* information.... "Yes, I'm grumpy. I feel like my main job — really my only job — is to try to make sense of pull requests, and that's why I absolutely detest these things that are automatically added and only make my job harder." A longer discussion ensued... Torvalds: [A] "perfect" model might be to actually have some kind of automation of "unless there was actual discussion about it". But I feel such a model might be much too complicated, unless somebody *wants* to explore using AI because their job description says "Look for actual useful AI uses". In today's tech world, I assume such job descriptions do exist. Sigh... Torvalds: I do think it makes sense for patch series that (a) are more than a small handful of patches and (b) have some real "story" to them (ie a cover letter that actually explains some higher-level issues)... 
Torvalds also had two responses to a poster who'd said "IMHO it's better to have a Link and it _potentially_ being useful than not to have it and then need to search around for it." Torvalds: No. Really. The issue is "potentially — but very likely not — useful" vs "I HIT THIS TEN+ TIMES EVERY SINGLE F%^& RELEASE". There is just no comparison. I have literally *never* found the original submission email to be useful, and I'm tired of the "potentially useful" argument that has nothing to back it up with. It's literally magical thinking of "in some alternate universe, pigs can fly, and that link might be useful" Torvalds: And just to clarify: the hurt is real. It's not just the disappointment. It's the wasted effort of following a link and having to then realize that there's nothing useful there. Those links *literally* double the effort for me when I try to be careful about patches... The cost is real. The cost is something I've complained about before... Yes, it's literally free to you to add this cost. No, *YOU* don't see the cost, and you think it is helpful. It's not. It's the opposite of helpful. So I want commit messages to be relevant and explain what is going on, and I want them to NOT WASTE MY TIME. And I also don't want to ignore links that are actually *useful* and give background information. Is that really too much to ask for? Torvalds points out he's brought this up four times before — once in 2022. Torvalds: I'm a bit frustrated, exactly because this _has_ been going on for years. It's not a new peeve. And I don't think we have a good central place for that kind of "don't do this". Yes, there's the maintainer summit, but that's a pretty limited set of people. I guess I could mention it in my release notes, but I don't know who actually reads those either.. So I end up just complaining when I see it.
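For readers unfamiliar with the mechanism being argued about: "Link:" lines are git trailers, key/value pairs in the final paragraph of a commit message, which kernel maintainer tooling appends automatically to point back at the mailing-list submission. A minimal sketch of extracting them in Python, using an invented commit message with a placeholder URL (not a real lore.kernel.org post):

```python
def parse_trailers(message: str) -> list[tuple[str, str]]:
    """Extract "Key: value" trailer lines from a commit message's last paragraph."""
    paragraphs = [p for p in message.strip().split("\n\n") if p.strip()]
    trailers = []
    for line in paragraphs[-1].splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            trailers.append((key.strip(), value.strip()))
    return trailers

# Hypothetical kernel-style commit message with an auto-added Link: trailer.
msg = """block: example fix with no real story

Example body text.

Link: https://lore.kernel.org/r/example-message-id
Signed-off-by: Example Maintainer <maintainer@example.org>
"""
links = [v for k, v in parse_trailers(msg) if k == "Link"]
```

This is the shape of trailer Torvalds objects to when the URL merely repeats the commit's own content: the trailer syntax is valid, but it carries no additional information.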

Read more of this story at Slashdot.

  •  

Scientists Discuss Next Steps to Prevent Dangerous 'Mirror Life' Research

USA Today has an update on the curtailing of "mirror life" research: Kate Adamala had been working on something dangerous. At her synthetic biology lab, Adamala had been taking preliminary steps toward creating a living cell from scratch with one key twist: All the organism's building blocks would be flipped. Changing these molecules would create an unnatural mirror image of a cell, as different as your right hand from your left. The endeavor was not only a fascinating research challenge, but it also could be used to improve biotechnology and medicine. As Adamala and her colleagues talked with biosecurity experts about the project, however, grave concerns began brewing. "They started to ask questions like, 'Have you considered what happens if that cell gets released or what would happen if it infected a human?'" said Adamala, an associate professor at the University of Minnesota. They hadn't. So researchers brought together dozens of experts in a variety of disciplines from around the globe, including two Nobel laureates, who worked for months to determine the risks of creating "mirror life" and the chances those dangers could be mitigated. Ultimately, they concluded, mirror cells could inflict "unprecedented and irreversible harm" on our world. "We cannot rule out a scenario in which a mirror bacterium acts as an invasive species across many ecosystems, causing pervasive lethal infections in a substantial fraction of plant and animal species, including humans," the scientists wrote in a paper published in the journal Science in December alongside a 299-page technical report... [Report co-author Vaughn Cooper, a professor at the University of Pittsburgh who studies how bacteria adapt to new environments] said it's not yet possible to build a cell from scratch, mirror or otherwise, but researchers have begun the process by synthesizing mirror proteins and enzymes. 
He and his colleagues estimated that given enough resources and manpower, scientists could create a complete mirror bacteria within a decade. But for now, the world is probably safe from mirror cells. Adamala said virtually everyone in the small scientific community that was interested in developing such cells has agreed not to as a result of the findings. The paper prompted nearly 100 scientists and ethicists from around the world to gather in Paris in June to further discuss the risks of creating mirror organisms. Many felt self-regulation is not enough, according to the institution that hosted the event, and researchers are gearing up to meet again in Manchester, England, and Singapore to discuss next steps.

Read more of this story at Slashdot.

  •  

AI Tool Usage 'Correlates Negatively' With Performance in CS Class, Estonian Study Finds

How do AI tools impact college students? 231 students in an object-oriented programming class participated in a study at Estonia's University of Tartu (conducted by an associate professor of informatics and a recently graduated master's student). They were asked how frequently they used AI tools and for what purposes. The data were analyzed using descriptive statistics, and Spearman's rank correlation analysis was performed to examine the strength of the relationships. The results showed that students mainly used AI assistance for solving programming tasks — for example, debugging code and understanding examples. A surprising finding, however, was that more frequent use of chatbots correlated with lower academic results. One possible explanation is that struggling students were more likely to turn to AI. Nevertheless, the finding suggests that unguided use of AI and over-reliance on it may in fact hinder learning. The researchers say their report provides "quantitative evidence that frequent AI use does not necessarily translate into better academic outcomes in programming courses." Other results from the survey: 47 respondents (20.3%) never used AI assistants in this course. Only 3.9% of the students reported using AI assistants weekly, "suggesting that reliance on such tools is still relatively low." "Few students feared plagiarism, suggesting students don't link AI use to it — raising academic concerns."
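For readers unfamiliar with the method the study used, Spearman's rank correlation measures monotonic association between two variables by correlating their ranks rather than their raw values, which suits ordinal survey data like usage frequency. A minimal pure-Python sketch, using entirely invented usage-frequency and grade numbers (not the study's dataset):

```python
from statistics import mean

def ranks(values):
    """Assign 1-based ranks, averaging the rank over tied values."""
    indexed = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(indexed):
        j = i
        # Group tied values and give each the average of their positions.
        while j + 1 < len(indexed) and values[indexed[j + 1]] == values[indexed[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # positions are 0-based; ranks are 1-based
        for k in range(i, j + 1):
            r[indexed[k]] = avg_rank
        i = j + 1
    return r

def spearman_rho(x, y):
    """Spearman's rho = Pearson correlation of the two rank vectors."""
    rx, ry = ranks(x), ranks(y)
    mx, my = mean(rx), mean(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    var_x = sum((a - mx) ** 2 for a in rx)
    var_y = sum((b - my) ** 2 for b in ry)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical data: self-reported AI-use frequency (0=never .. 3=weekly)
# and course grade, chosen purely for illustration.
ai_use = [0, 0, 1, 1, 2, 2, 3, 3]
grade = [91, 85, 88, 80, 78, 75, 70, 72]
rho = spearman_rho(ai_use, grade)  # negative here: more use, lower grades
```

A negative rho, as the study reports, only establishes association, not direction of cause, which is why the researchers flag the alternative explanation that struggling students reach for AI more often.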

Read more of this story at Slashdot.

  •  

New In Firefox Nightly Builds: Copilot Chatbot, New Tab Widgets, JPEG-XL Support

The blog OMG Ubuntu notes that Microsoft Copilot chatbot support has been added in the latest Firefox Nightly builds. "Firefox's sidebar already offers access to popular chatbots, including OpenAI's ChatGPT, Anthropic's Claude, Mistral's Le Chat and Google's Gemini. It previously offered HuggingChat too." As the testing bed for features Mozilla wants to add to stable builds (though not all make it — eh, rounded bottom window corners?), this is something you can expect to find in a future stable update... Copilot in Firefox offers the same features as other chatbots: text prompts, file and image uploads, image generation, and voice input (for those who fancy their voice patterns being analysed and trained on). And like those other chatbots, there are usage limits, privacy policies, and (for some) account creation needed. In testing, Copilot would only generate half a summary for a webpage, telling me it was too long to produce without me signing in/up for an account. On a related note, Mozilla has updated stable builds to let third-party chatbots summarise web pages when browsing (an in-app callout alerts users to the 'new' feature). Users yet to enable chatbots are subtly nudged to do so each time they right-click on a web page. [Between "Take Screenshot" and "View Page Source" there's a menu option for "Ask an AI Chatbot."] Despite making noise about its own (sluggish, but getting faster) on-device AI features that are privacy-orientated, Mozilla is bullish on the need for external chatbots. The article suggests Firefox wants to keep up with Edge and Chrome (which can "infuse first-party AI features directly.") But it adds that Firefox's nightly build is also testing some non-AI features, like new task and timer widgets on Firefox's New Tab page. And "In Firefox Labs, there is an option to enable JPEG XL support, a super-optimised version of JPEG that is gaining traction (despite Google's intransigence)."
Other Firefox news: There's good news "for users still clinging to Windows 7," writes the Register. Support for Firefox Extended Support Release 115 "is being extended until March 2026." Google "can keep paying companies like Mozilla to make Google the default search engine, as long as these deals aren't exclusive anymore," reports the blog It's FOSS News. (The judge wrote that "Cutting off payments from Google almost certainly will impose substantial — in some cases, crippling — downstream harms to distribution partners..." according to CNBC — especially since the non-profit Mozilla Foundation gets most of its annual revenue from its Google search deal.) Don't forget you can now search your tabs, bookmarks and browsing history right from the address bar with keywords like @bookmarks, @tabs, and @history. (And @actions pulls up a list of actions like "Open private window" or "Restart Firefox").

Read more of this story at Slashdot.

  •  

32% of Senior Developers Say Half Their Shipped Code is AI-Generated

In July, 791 professional coders were surveyed by Fastly about their use of AI coding tools, reports InfoWorld. The results? "About a third of senior developers (10+ years of experience) say over half their shipped code is AI-generated," Fastly writes, "nearly two and a half times the rate reported by junior developers (0-2 years of experience), at 13%." "AI will bench test code and find errors much faster than a human, repairing them seamlessly. This has been the case many times," one senior developer said... Senior developers were also more likely to say they invest time fixing AI-generated code. Just under 30% of seniors reported editing AI output enough to offset most of the time savings, compared to 17% of juniors. Even so, 59% of seniors say AI tools help them ship faster overall, compared to 49% of juniors. Just over 50% of junior developers say AI makes them moderately faster. By contrast, only 39% of more senior developers say the same. But senior devs are more likely to report significant speed gains: 26% say AI makes them a lot faster, double the 13% of junior devs who agree. One reason for this gap may be that senior developers are simply better equipped to catch and correct AI's mistakes... Nearly 1 in 3 developers (28%) say they frequently have to fix or edit AI-generated code enough that it offsets most of the time savings. Only 14% say they rarely need to make changes. And yet, over half of developers still feel faster with AI tools like Copilot, Gemini, or Claude. Fastly's survey isn't alone in calling AI productivity gains into question. A recent randomized controlled trial (RCT) of experienced open-source developers found something even more striking: when developers used AI tools, they took 19% longer to complete their tasks. This disconnect may come down to psychology. AI coding often feels smooth... but the early speed gains are often followed by cycles of editing, testing, and reworking that eat into any gains. 
This pattern is echoed both in conversations we've had with Fastly developers and in many of the comments we received in our survey... Yet, AI still seems to improve developer job satisfaction. Nearly 80% of developers say AI tools make coding more enjoyable... Enjoyment doesn't equal efficiency, but in a profession wrestling with burnout and backlogs, that morale boost might still count for something. Fastly quotes one developer who said their AI tool "saves time by using boilerplate code, but it also needs manual fixes for inefficiencies, which keep productivity in check." The study also found the practice of green coding "goes up sharply with experience. Just over 56% of junior developers say they actively consider energy use in their work, while nearly 80% of mid- and senior-level engineers consider this when coding."

Read more of this story at Slashdot.

  •  

Switching Off One Crucial Protein Appears to Reverse Brain Aging in Mice

A research team just discovered older mice have more of the protein FTL1 in their hippocampus, reports ScienceAlert. The hippocampus is the region of the brain involved in memory and learning. And the researchers' paper says their new data raises "the exciting possibility that the beneficial effects of targeting neuronal ferritin light chain 1 (FTL1) at old age may extend more broadly, beyond cognitive aging, to neurodegenerative disease conditions in older people." FTL1 is known to be related to storing iron in the body, but hasn't come up in relation to brain aging before... To test its involvement after their initial findings, the researchers used genetic editing to overexpress the protein in young mice, and reduce its level in old mice. The results were clear: the younger mice showed signs of impaired memory and learning abilities, as if they were getting old before their time, while in the older mice there were signs of restored cognitive function — some of the brain aging was effectively reversed... "It is truly a reversal of impairments," says biomedical scientist Saul Villeda, from the University of California, San Francisco. "It's much more than merely delaying or preventing symptoms." Further tests on cells in petri dishes showed how FTL1 stopped neurons from growing properly, with neural wires lacking the branching structures that typically provide links between nerve cells and improve brain connectivity... "We're seeing more opportunities to alleviate the worst consequences of old age," says Villeda. "It's a hopeful time to be working on the biology of aging." The research was led by a team from the University of California, San Francisco — and published in Nature Aging.

Read more of this story at Slashdot.

  •