Rust in Linux's Kernel 'is No Longer Experimental'

Steven J. Vaughan-Nichols files this report from Tokyo: At the invitation-only Linux Kernel Maintainers Summit here, the top Linux maintainers decided, as Jonathan Corbet, Linux kernel developer, put it, "The consensus among the assembled developers is that Rust in the kernel is no longer experimental — it is now a core part of the kernel and is here to stay. So the 'experimental' tag will be coming off." As Linux kernel maintainer Steven Rostedt told me, "There was zero pushback." This has been a long time coming. This shift caps five years of sometimes-fierce debate over whether the memory-safe language belonged alongside C at the heart of the world's most widely deployed open source operating system... It all began when Alex Gaynor and Geoffrey Thomas at the 2019 Linux Security Summit said that about two-thirds of Linux kernel vulnerabilities come from memory safety issues. Rust, in theory, could avoid these through its inherently safer application programming interfaces (APIs)... In those early days, the plan was not to rewrite Linux in Rust (it still isn't), but to adopt it selectively where it can provide the most security benefit without destabilizing mature C code. In short, new drivers, subsystems, and helper libraries would be the first targets... Despite the fuss, more and more code was ported to Rust. By April 2025, the Linux kernel contained about 34 million lines of C code, with only about 25,000 lines written in Rust. At the same time, more and more drivers and higher-level utilities were being written in Rust. For instance, the Debian Linux distro's developers announced that, going forward, Rust would be a required dependency of its foundational Advanced Package Tool (APT). This change doesn't mean everyone will need to use Rust. C is not going anywhere. Still, as several maintainers told me, they expect to see many more drivers being written in Rust. In particular, Rust looks especially attractive for "leaf" drivers (network, storage, NVMe, etc.), where the Rust-for-Linux bindings expose safe wrappers over kernel C APIs. Nevertheless, for would-be kernel and systems programmers, Rust's new status in Linux hints at a career path that blends a deep understanding of C with fluency in Rust's safety guarantees. This combination may define the next generation of low-level development work.
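To make the "safe wrappers over kernel C APIs" point concrete, here is a minimal sketch of what a Rust kernel module looks like, closely modeled on the kernel's own samples/rust/rust_minimal.rs. The module name, author string, and printed messages are placeholders, and the exact macro fields and helper APIs shift between kernel versions, so treat this as illustrative rather than copy-paste-ready; it only builds inside a kernel tree with Rust support enabled.

```rust
// SPDX-License-Identifier: GPL-2.0
//! Illustrative sketch of a Rust kernel module, patterned after
//! samples/rust/rust_minimal.rs. Names and strings here are placeholders,
//! and field names in module! vary somewhat between kernel versions.

use kernel::prelude::*;

module! {
    type: HelloRust,
    name: "hello_rust",                      // hypothetical module name
    author: "Example Author",                // placeholder metadata
    description: "Minimal Rust kernel module sketch",
    license: "GPL",
}

struct HelloRust;

impl kernel::Module for HelloRust {
    // Called on module load; pr_info! is the safe wrapper over printk.
    fn init(_module: &'static ThisModule) -> Result<Self> {
        pr_info!("hello from Rust in the kernel\n");
        Ok(HelloRust)
    }
}

impl Drop for HelloRust {
    // Called on module unload.
    fn drop(&mut self) {
        pr_info!("goodbye from Rust\n");
    }
}
```

The interesting part is what is absent: no manual reference counting or error-prone cleanup paths, since the `kernel` crate's wrappers and Rust's ownership rules handle those, which is exactly why "leaf" drivers are seen as the natural first targets.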

Read more of this story at Slashdot.

  •  

Microsoft and GitHub Preview New Tool That Identifies, Prioritizes, and Fixes Vulnerabilities With AI

"Security, development, and AI now move as one," says Microsoft's director of cloud/AI security product marketing. Microsoft and GitHub "have launched a native integration between Microsoft Defender for Cloud and GitHub Advanced Security that aims to address what one executive calls decades of accumulated security debt in enterprise codebases..." according to The New Stack: The integration, announced this week in San Francisco at the Microsoft Ignite 2025 conference and now available in public preview, connects runtime intelligence from production environments directly into developer workflows. The goal is to help organizations prioritize which vulnerabilities actually matter and use AI to fix them faster. "Throughout my career, I've seen vulnerability trends going up into the right. It didn't matter how good of a detection engine and how accurate our detection engine was, people just couldn't fix things fast enough," said Marcelo Oliveira, VP of product management at GitHub, who has spent nearly a decade in application security. "That basically resulted in decades of accumulation of security debt into enterprise code bases." According to industry data, critical and high-severity vulnerabilities constitute 17.4% of security backlogs, with a mean time to remediation of 116 days, said Andrew Flick, senior director of developer services, languages and tools at Microsoft, in a blog post. Meanwhile, applications face attacks as frequently as once every three minutes, Oliveira said. The integration represents the first native link between runtime intelligence and developer workflows, said Elif Algedik, director of product marketing for cloud and AI security at Microsoft, in a blog post... The problem, according to Flick, comes down to three challenges: security teams drowning in alert fatigue while AI rapidly introduces new threat vectors that they have little time to understand; developers lacking clear prioritization while remediation takes too long; and both teams relying on separate, nonintegrated tools that make collaboration slow and frustrating... The new integration works bidirectionally. When Defender for Cloud detects a vulnerability in a running workload, that runtime context flows into GitHub, showing developers whether the vulnerability is internet-facing, handling sensitive data or actually exposed in production. This is powered by what GitHub calls the Virtual Registry, which creates code-to-runtime mapping, Flick said... In the past, this alert would age in a dashboard while developers worked on unrelated fixes because they didn't know this was the critical one, he said. Now, a security campaign can be created in GitHub, filtering for runtime risk like internet exposure or sensitive data, notifying the developer to prioritize this issue. GitHub Copilot "now automatically checks dependencies, scans for first-party code vulnerabilities and catches hardcoded secrets before code reaches developers," the article points out — but GitHub's VP of product management says this takes things even further. "We're not only helping you fix existing vulnerabilities, we're also reducing the number of vulnerabilities that come into the system when the level of throughput of new code being created is increasing dramatically with all these agentic coding agent platforms."

Read more of this story at Slashdot.

  •  

Amazon's AI-Powered IDE Kiro Helps Vibe Coders with 'Spec Mode'

A promotional video for Amazon's Kiro software development system took a unique approach, writes GeekWire. "Instead of product diagrams or keynote slides, a crew from Seattle's Packrat creative studio used action figures on a miniature set to create a stop-motion sequence..." "Can the software development hero conquer the 'AI Slop Monster' to uncover the gleaming, fully functional robot buried beneath the coding chaos?" Kiro (pronounced KEE-ro) is Amazon's effort to rethink how developers use AI. It's an integrated development environment that attempts to tame the wild world of vibe coding... But rather than simply generating code from prompts [in "vibe mode"], Kiro breaks down requests into formal specifications, design documents, and task lists [in "spec mode"]. This spec-driven development approach aims to solve a fundamental problem with vibe coding: AI can quickly generate prototypes, but without structure or documentation, that code becomes unmaintainable... The market for AI-powered development tools is booming. Gartner expects AI code assistants to become ubiquitous, forecasting that 90% of enterprise software engineers will use them by 2028, up from less than 14% in early 2024... Amazon launched Kiro in preview in July, to a strong response. Positive early reviews were tempered by frustration from users unable to gain access. Capacity constraints have since been resolved, and Amazon says more than 250,000 developers used Kiro in the first three months... Now, the company is taking Kiro out of preview into general availability, rolling out new features and opening the tool more broadly to development teams and companies... During the preview period, Kiro handled more than 300 million requests and processed trillions of tokens as developers explored its capabilities, according to stats provided by the company. Rackspace used Kiro to complete what they estimated as 52 weeks of software modernization in three weeks, according to Amazon executives. SmugMug and Flickr are among other companies espousing the virtues of Kiro's spec-driven development approach. Early users are posting in glowing terms about the efficiencies they're seeing from adopting the tool... startups in most countries can apply for up to 100 free Pro+ seats for a year's worth of Kiro credits. Kiro offers property-based testing "to verify that generated code actually does what developers specified," according to the article — plus a checkpointing system that "lets developers roll back changes or retrace an agent's steps when an idea goes sideways..." "And yes, they've been using Kiro to build Kiro, which has allowed them to move much faster."

Read more of this story at Slashdot.

  •  

Linus Torvalds Says Vibe Coding is Fine For Getting Started, 'Horrible Idea' For Maintenance

Linus Torvalds is "fairly positive" about vibe coding as a way for people to get computers to do things they otherwise could not. The Linux kernel maintainer made the comments during an interview at the Linux Foundation Open Source Summit in Seoul earlier this month. But he cautioned that vibe coding would be a "horrible, horrible idea from a maintenance standpoint" for production code. Torvalds told Dirk Hohndel, head of open source at Verizon, that computers have become more complicated than when he learned to code by typing in programs from computer magazines. He said vibe coding offers a path into computing for newcomers. The kernel maintainer is not using AI-assisted coding himself. He said his role has shifted from rejecting new ideas to sometimes pushing for them against opposition from longstanding maintainers who "kind of get stuck in a rut." Rust is "actually becoming a real part of the kernel instead of being this experimental thing," he said. Torvalds said AI crawlers have been "very disruptive to a lot of our infrastructure" because they gather data from kernel.org source code. Kernel maintainers receive bugs and security notices that are "made up by people who misuse AI," though the problem is smaller than for other projects such as curl.

Read more of this story at Slashdot.

  •  

Security Researchers Spot 150,000 Function-less npm Packages in Automated 'Token Farming' Scheme

An anonymous reader shared this report from The Register: Yet another supply chain attack has hit the npm registry in what Amazon describes as "one of the largest package flooding incidents in open source registry history" — but with a twist. Instead of injecting credential-stealing code or ransomware into the packages, this one is a token farming campaign. Amazon Inspector security researchers, using a new detection rule and AI assistance, originally spotted the suspicious npm packages in late October, and, by November 7, the team had flagged thousands. By November 12, they had uncovered more than 150,000 malicious packages across "multiple" developer accounts. These were all linked to a coordinated tea.xyz token farming campaign, we're told. tea.xyz is a decentralized protocol designed to reward open-source developers for their contributions using the TEA token, a utility asset used within the tea ecosystem for incentives, staking, and governance. Unlike the spate of package poisoning incidents over recent months, this one didn't inject traditional malware into the open source code. Instead, the miscreants created a self-replicating attack, infecting the packages with code that automatically generates and publishes new packages, thus earning cryptocurrency rewards on the backs of legitimate open source developers. The code also included tea.yaml files that linked these packages to attacker-controlled blockchain wallet addresses. At the moment, Tea tokens have no value, points out CSO Online. "But it is suspected that the threat actors are positioning themselves to receive real cryptocurrency tokens when the Tea Protocol launches its Mainnet, where Tea tokens will have actual monetary value and can be traded..." In an interview on Friday, an executive at software supply chain management provider Sonatype, which wrote about the campaign in April 2024, told CSO that the number of packages has now grown to 153,000. "It's unfortunate that the worm isn't under control yet," said Sonatype CTO Brian Fox. And while this payload merely steals tokens, other threat actors are paying attention, he predicted. "I'm sure somebody out there in the world is looking at this massively replicating worm and wondering if they can ride that, not just to get the Tea tokens but to put some actual malware in there, because if it's replicating that fast, why wouldn't you?" When Sonatype wrote about the campaign just over a year ago, it found a mere 15,000 packages that appeared to come from a single person. With the swollen numbers reported this week, Amazon researchers wrote that it's "one of the largest package flooding incidents in open source registry history, and represents a defining moment in supply chain security...." For now, says Sonatype's Fox, the scheme wastes the time of npm administrators, who are trying to expel over 100,000 packages. But Fox and Amazon point out the scheme could inspire others to take advantage of other reward-based systems for financial gain, or to deliver malware. After deploying a new detection rule "paired with AI," Amazon's security researchers write, "within days, the system began flagging packages linked to the tea.xyz protocol... By November 7, the researchers flagged thousands of packages and began investigating what appeared to be a coordinated campaign. The next day, after validating the evaluation results and analyzing the patterns, they reached out to OpenSSF to share their findings and coordinate a response."
Their blog post thanks the Open Source Security Foundation (OpenSSF) for rapid collaboration, while calling the incident "a defining moment in supply chain security..."
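Amazon has not published its actual detection rule, but the signals described above (packages that expose no real functionality yet ship a tea.yaml tied to a reward wallet) are easy to picture. The sketch below is a toy heuristic built only on those described signals; the struct, fields, package name, and threshold are all invented for illustration, and this is not Amazon Inspector's logic.

```rust
/// Toy heuristic inspired by the signals described above: a package that ships
/// a tea.yaml manifest but exports no functionality is worth flagging for
/// review. Every field and name here is hypothetical.
struct PackageListing {
    name: String,
    files: Vec<String>,
    exported_functions: usize,
}

fn looks_like_token_farming(pkg: &PackageListing) -> bool {
    let has_tea_manifest = pkg.files.iter().any(|f| f == "tea.yaml");
    has_tea_manifest && pkg.exported_functions == 0
}

fn main() {
    let pkg = PackageListing {
        name: "hypothetical-utils".into(), // made-up package name
        files: vec!["package.json".into(), "tea.yaml".into(), "index.js".into()],
        exported_functions: 0,
    };
    if looks_like_token_farming(&pkg) {
        println!("flagging {} for manual review", pkg.name);
    }
}
```

A rule this crude would obviously produce false positives on its own, which is presumably why Amazon pairs its detection rules with additional analysis before flagging a campaign.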

Read more of this story at Slashdot.

  •  

Could C# Overtake Java in TIOBE's Programming Language Popularity Rankings?

TIOBE has been trying to measure the popularity of programming languages since 2000, using metrics like the number of engineers, courses, and third-party vendors. And "The November 2025 TIOBE Index brings another twist below Python's familiar lead," writes TechRepublic. "C solidifies its position as runner-up, C++ and Java lose some ground, and C# moves sharply upward, narrowing the gap with Java to less than a percentage point..." TIOBE CEO Paul Jansen said this month that "Instead of Python, programming language C# is now the fastest rising language." How did C# achieve this? Java and C# have been battling for a long time in the same areas. Right now it seems like C# has removed every reason not to use C# instead of Java: it is cross-platform nowadays, it is open source, and it contains all the new language features a developer wants. While the financial world is still dominated by Java, all other terrains show equal shares between Java and C#. Besides this, Microsoft is going strong and C# is still its most-backed programming language. Interesting note: C# has never been higher than Java in the TIOBE index. Currently the difference between the two rivals is less than 1%. There are exciting times ahead of us. Is C# going to surpass Java for the first time in the TIOBE index's history? "The fact that C# has been in the news for the successive betas and pre-release candidates prior to the release of C# 14 may have bumped up its percentage share in the last few months," notes a post on the site i-Programmer. But they also point out that by TIOBE's reckoning, Java — having been overtaken by Python in 2021 — "has been in decline ever since." TechRepublic summarizes the rest of the Top Ten: JavaScript stays in sixth place at 3.42%, and Visual Basic edges up to seventh with 3.31%. Delphi/Object Pascal nudges upward to eighth at 2.06%, while Perl returns to the top 10 in ninth at 1.84% after a sharp year-over-year climb. SQL rounds out the list at tenth with 1.80%, maintaining a foothold that shows the enduring centrality of relational databases. Go, which held eighth place in October, slips out of the top 10 entirely. Here's how TIOBE's methodology ranks programming language popularity in November:

1. Python
2. C
3. C++
4. Java
5. C#
6. JavaScript
7. Visual Basic
8. Delphi/Object Pascal
9. Perl
10. SQL

Read more of this story at Slashdot.

  •  

Visual Studio 2026 Released

Dave Knott writes: Microsoft has released Visual Studio 2026, the first major version of their flagship IDE in almost four years. Release notes are available here. The compiler has also been updated, including improved (but not yet 100%) C++23 core language and standard library implementations.

Read more of this story at Slashdot.

  •  

The Linux Kernel Looks To 'Bite the Bullet' In Enabling Microsoft C Extensions

Linux kernel developers are moving toward enabling Microsoft C Extensions (-fms-extensions) by default in Linux 6.19, with Linus Torvalds signaling no objection. While some dislike relying on Microsoft-style behavior, the patches in kbuild-next suggest the project is ready to "bite the bullet" and adopt the extensions system-wide. Phoronix reports: Rasmus Villemoes argued in the patch "kbuild: enable -fms-extensions" that the flag would allow for "prettier code," and others have noted in the past its potential for saving stack space and the general benefit of being able to leverage the Microsoft C behavior: "Once in a while, it turns out that enabling -fms-extensions could allow some slightly prettier code. But every time it has come up, the code that had to be used instead has been deemed 'not too awful' and not worth introducing another compiler flag for. That's probably true for each individual case, but then it's somewhat of a chicken/egg situation. If we just 'bite the bullet' as Linus says and enable it once and for all, it is available whenever a use case turns up, and no individual case has to justify it..." The second patch, "kbuild: Add '-fms-extensions' to areas with dedicated CFLAGS," ensures -fms-extensions is passed for the CPU architectures that rely on their own CFLAGS being set rather than the main KBUILD_CFLAGS. Linus Torvalds chimed in on the prior mailing list discussion and doesn't appear to be against enabling -fms-extensions beginning with the Linux 6.19 kernel.

Read more of this story at Slashdot.

  •  

Rust Foundation Announces 'Maintainers Fund' to Ensure Continuity and Support Long-Term Roles

The Rust Foundation has a responsibility to "shed light on the impact of supporting the often unseen work" that keeps the Rust Project running. So this week they announced a new initiative "to provide consistent, transparent, and long-term support for the developers who make the Rust programming language possible." It's the Rust Foundation Maintainers Fund, "an initiative we'll shape in close collaboration with the Rust Project Leadership Council and Project Directors to ensure funding decisions are made openly and with accountability." In the months ahead, we'll define the fund's structure, secure contributions, and work with the Rust Project and community to bring it to life. This work will build on lessons from earlier iterations of our grants and fellowships to create a lasting framework for supporting Rust's maintainers... Over the past several months, through ongoing board discussions and input from the Leadership Council, this initiative has taken shape as a way to help maintainers continue their vital development and review work, and plan for the future... This initiative reflects our commitment to Rust being shaped by its people, guided by open collaboration, and backed by a global network of contributors and partners. The Rust Foundation Maintainers Fund will operate within the governance framework shared between the Rust Project and the Rust Foundation, ensuring alignment and oversight at every level... The Rust Foundation's approach to this initiative will be guided by our structure: as a 501(c)(6) nonprofit, we operate under a mandate for transparency and accountability to the Rust Project, language community, and our members. That means we must develop this fund in coordination with the Rust Project's priorities, ensuring shared governance and long-term viability... Our goal is simple: to help the people building Rust continue their essential work with the support they deserve. That means creating the conditions for long-term maintainer roles and ensuring continuity for those whose efforts keep the language stable and evolving. Through the Rust Foundation Maintainers Fund, we aim to address these needs directly. "The more companies using Rust can contribute to the Rust Foundation Maintainers Fund, the more we can keep the language and tooling evolving for the benefit of everyone," says Rust Foundation project director Carol Nichols.

Read more of this story at Slashdot.

  •  

GitHub Announces 'Agent HQ', Letting Copilot Subscribers Run and Manage Coding Agents from Multiple Vendors

"AI isn't just a tool anymore; it's an integral part of the development experience," argues GitHub's blog. So "Agents shouldn't be bolted on. They should work the way you already work..." So this week GitHub announced "Agent HQ," which CNBC describes as a "mission control" interface "that will allow software developers to manage coding agents from multiple vendors on a single platform." Developers have a range of new capabilities at their fingertips because of these agents, but it can require a lot of effort to keep track of them all individually, said GitHub COO Kyle Daigle. Developers will now be able to manage agents from GitHub, OpenAI, Google, Anthropic, xAI and Cognition in one place with Agent HQ. "We want to bring a little bit of order to the chaos of innovation," Daigle told CNBC in an interview. "With so many different agents, there's so many different ways of kicking off these asynchronous tasks, and so our big opportunity here is to bring this all together." Agent HQ users will be able to access a command center where they can assign, steer and monitor the work of multiple agents... The third-party agents will begin rolling out to GitHub Copilot subscribers in the coming months, but Copilot Pro+ users will be able to access OpenAI Codex in VS Code Insiders this week, the company said. "We're into this wave two era," GitHub's COO Mario Rodriguez told VentureBeat, an era that's "going to be multimodal, it's going to be agentic and it's going to have these new experiences that will feel AI native...." Or, as VentureBeat sees it, GitHub "is positioning itself as the essential orchestration layer beneath them all..." Just as the company transformed Git, pull requests and CI/CD into collaborative workflows, it's now trying to do the same with a fragmented AI coding landscape... The technical architecture addresses a critical enterprise concern: Security. Unlike standalone agent implementations where users must grant broad repository access, GitHub's Agent HQ implements granular controls at the platform level... Agents operating through Agent HQ can only commit to designated branches. They run within sandboxed GitHub Actions environments with firewall protections. They operate under strict identity controls. [GitHub COO] Rodriguez explained that even if an agent goes rogue, the firewall prevents it from accessing external networks or exfiltrating data unless those protections are explicitly disabled. Beyond managing third-party agents, GitHub is introducing two technical capabilities that set Agent HQ apart from alternative approaches like Cursor's standalone editor or Anthropic's Claude integration. Custom agents via AGENTS.md files: Enterprises can now create source-controlled configuration files that define specific rules, tools and guardrails for how Copilot behaves. For example, a company could specify "prefer this logger" or "use table-driven tests for all handlers." This permanently encodes organizational standards without requiring developers to re-prompt every time... Native Model Context Protocol (MCP) support: VS Code now includes a GitHub MCP Registry. Developers can discover, install and enable MCP servers with a single click. They can then create custom agents that combine these tools with specific system prompts. This positions GitHub as the integration point between the emerging MCP ecosystem and actual developer workflows. MCP, introduced by Anthropic but rapidly gaining industry support, is becoming a de facto standard for agent-to-tool communication. 
By supporting the full specification, GitHub can orchestrate agents that need access to external services without each agent implementing its own integration logic. GitHub is also shipping new capabilities within VS Code itself. Plan Mode allows developers to collaborate with Copilot on building step-by-step project approaches. The AI asks clarifying questions before any code is written. Once approved, the plan can be executed either locally in VS Code or by cloud-based agents. The feature addresses a common failure mode in AI coding: Beginning implementation before requirements are fully understood. By forcing an explicit planning phase, GitHub aims to reduce wasted effort and improve output quality. More significantly, GitHub's code review feature is becoming agentic. The new implementation will use GitHub's CodeQL engine, which previously focused largely on security vulnerabilities, to also identify bugs and maintainability issues. The code review agent will automatically scan agent-generated pull requests before human review. This creates a two-stage quality gate. "Don't let this little bit of news float past you like all those self-satisfied marketing pitches we semi-hear and ignore," writes ZDNet: If it works and remains reliable, this is actually a very big deal... Tech companies, especially the giant ones, often like to talk "open" but then do their level best to engineer lock-in to their solution and their solution alone. Sure, most of them offer some sort of export tool, but the barrier to moving from one tool to another is often huge... [T]he idea that you can continue to use your favorite agent or agents in GitHub, fully integrated into the GitHub tool path, is powerful. It means there's a chance developers might not have to suffer the walled garden effect that so many companies have strived for to lock in their customers.

Read more of this story at Slashdot.

  •  

Cloudflare Raves About Performance Gains After Rust Rewrite

"We've spent the last year rebuilding major components of our system," Cloudflare announced this week, "and we've just slashed the latency of traffic passing through our network for millions of our customers," (There's a 10ms cut in the median time to respond, plus a 25% performance boost as measured by CDN performance tests.) They replaced a 15-year-old system named FL (where they run security and performance features), and "At the same time, we've made our system more secure, and we've reduced the time it takes for us to build and release new products." And yes, Rust was involved: We write a lot of Rust, and we've gotten pretty good at it... We built FL2 in Rust, on Oxy [Cloudflare's Rust-based next generation proxy framework], and built a strict module framework to structure all the logic in FL2... Built in Rust, [Oxy] eliminates entire classes of bugs that plagued our Nginx/LuaJIT-based FL1, like memory safety issues and data races, while delivering C-level performance. At Cloudflare's scale, those guarantees aren't nice-to-haves, they're essential. Every microsecond saved per request translates into tangible improvements in user experience, and every crash or edge case avoided keeps the Internet running smoothly. Rust's strict compile-time guarantees also pair perfectly with FL2's modular architecture, where we enforce clear contracts between product modules and their inputs and outputs... It's a big enough distraction from shipping products to customers to rebuild product logic in Rust. Asking all our teams to maintain two versions of their product logic, and reimplement every change a second time until we finished our migration was too much. So, we implemented a layer in our old NGINX and OpenResty based FL which allowed the new modules to be run. Instead of maintaining a parallel implementation, teams could implement their logic in Rust, and replace their old Lua logic with that, without waiting for the full replacement of the old system. Over 100 engineers worked on FL2 — and there was extensive testing, plus a fallback-to-FL1 procedure. But "We started running customer traffic through FL2 early in 2025, and have been progressively increasing the amount of traffic served throughout the year...." As we described at the start of this post, FL2 is substantially faster than FL1. The biggest reason for this is simply that FL2 performs less work [thanks to filters controlling whether modules need to run]... Another huge reason for better performance is that FL2 is a single codebase, implemented in a performance focussed language. In comparison, FL1 was based on NGINX (which is written in C), combined with LuaJIT (Lua, and C interface layers), and also contained plenty of Rust modules. In FL1, we spent a lot of time and memory converting data from the representation needed by one language, to the representation needed by another. As a result, our internal measures show that FL2 uses less than half the CPU of FL1, and much less than half the memory. That's a huge bonus — we can spend the CPU on delivering more and more features for our customers! Using our own tools and independent benchmarks like CDNPerf, we measured the impact of FL2 as we rolled it out across the network. The results are clear: websites are responding 10 ms faster at the median, a 25% performance boost. FL2 is also more secure by design than FL1. No software system is perfect, but the Rust language brings us huge benefits over LuaJIT. 
Rust has strong compile-time memory checks and a type system that avoids large classes of errors. Combine that with our rigid module system, and we can make most changes with high confidence... We have long followed a policy that any unexplained crash of our systems needs to be investigated as a high priority. We won't be relaxing that policy, though the main cause of novel crashes in FL2 so far has been hardware failure. The massively reduced rates of such crashes will give us time to do a good job of such investigations. We're spending the rest of 2025 completing the migration from FL1 to FL2, and will turn off FL1 in early 2026. We're already seeing the benefits in terms of customer performance and speed of development, and we're looking forward to giving these to all our customers. After that, when everything is modular, in Rust and tested and scaled, we can really start to optimize...! Thanks to long-time Slashdot reader Beeftopia for sharing the article.
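Cloudflare's post doesn't show FL2's internals, but the shape it describes (modules with explicit input/output contracts, gated by cheap filters so inapplicable modules never run) is easy to sketch. The trait, types, and names below are entirely hypothetical stand-ins, not Cloudflare's actual Oxy or FL2 API; they only illustrate why "FL2 performs less work."

```rust
// Hypothetical sketch of a strict module contract in the spirit of FL2 as
// described in the post. None of these types or names come from Cloudflare's
// code; they are placeholders to illustrate the design.

struct RequestCtx {
    path: String,
    waf_enabled: bool,
}

struct Verdict {
    blocked: bool,
}

trait ProxyModule {
    /// Cheap predicate: modules that cannot apply are skipped entirely,
    /// which is the "performs less work" point from the post.
    fn filter(&self, ctx: &RequestCtx) -> bool;
    /// The module's actual work, with inputs and outputs made explicit.
    fn run(&self, ctx: &RequestCtx) -> Verdict;
}

struct WafModule;

impl ProxyModule for WafModule {
    fn filter(&self, ctx: &RequestCtx) -> bool {
        ctx.waf_enabled
    }
    fn run(&self, ctx: &RequestCtx) -> Verdict {
        // Toy rule standing in for real security logic.
        Verdict { blocked: ctx.path.contains("../") }
    }
}

fn handle(ctx: &RequestCtx, modules: &[&dyn ProxyModule]) -> bool {
    modules
        .iter()
        .filter(|m| m.filter(ctx)) // skip modules whose filter says "not applicable"
        .map(|m| m.run(ctx))
        .any(|v| v.blocked)
}

fn main() {
    let ctx = RequestCtx { path: "/static/../etc/passwd".into(), waf_enabled: true };
    println!("blocked = {}", handle(&ctx, &[&WafModule]));
}
```

The appeal of this style is that the compiler enforces the contract: a module can only read the inputs its signature declares, so skipping it or reordering it cannot silently break another module's assumptions.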

Read more of this story at Slashdot.

  •  

TypeScript Overtakes Python and JavaScript To Claim Top Spot on GitHub

TypeScript overtook Python and JavaScript in August 2025 to become the most-used language on GitHub. The shift marked the most significant language change in more than a decade. The language grew by over 1 million contributors in 2025, a 66% increase year over year, and finished August with 2,636,006 monthly contributors. Nearly every major frontend framework now scaffolds projects in TypeScript by default. Next.js 15, Astro 3, SvelteKit 2, Qwik, SolidStart, Angular 18, and Remix all generate TypeScript codebases when developers create new projects. Type systems reduce ambiguity and catch errors from large language models before production. A 2025 academic study found 94% of LLM-generated compilation errors were type-check failures. Tooling like Vite, ts-node, Bun, and IDE auto-configuration hides boilerplate setup. Among new repositories created in the past twelve months, TypeScript accounted for 5,394,256 projects. That represented a 78% increase from the prior year.

Read more of this story at Slashdot.

  •  

Does Generative AI Threaten the Open Source Ecosystem?

"Snippets of proprietary or copyleft reciprocal code can enter AI-generated outputs, contaminating codebases with material that developers can't realistically audit or license properly." That's the warning from Sean O'Brien, who founded the Yale Privacy Lab at Yale Law School. ZDNet reports: Open software has always counted on its code being regularly replenished. As part of the process of using it, users modify it to improve it. They add features and help to guarantee usability across generations of technology. At the same time, users improve security and patch holes that might put everyone at risk. But O'Brien says, "When generative AI systems ingest thousands of FOSS projects and regurgitate fragments without any provenance, the cycle of reciprocity collapses. The generated snippet appears originless, stripped of its license, author, and context." This means the developer downstream can't meaningfully comply with reciprocal licensing terms because the output cuts the human link between coder and code. Even if an engineer suspects that a block of AI-generated code originated under an open source license, there's no feasible way to identify the source project. The training data has been abstracted into billions of statistical weights, the legal equivalent of a black hole. The result is what O'Brien calls "license amnesia." He says, "Code floats free of its social contract and developers can't give back because they don't know where to send their contributions...." "Once AI training sets subsume the collective work of decades of open collaboration, the global commons idea, substantiated into repos and code all over the world, risks becoming a nonrenewable resource, mined and never replenished," says O'Brien. "The damage isn't limited to legal uncertainty. If FOSS projects can't rely upon the energy and labor of contributors to help them fix and improve their code, let alone patch security issues, fundamentally important components of the software the world relies upon are at risk." O'Brien says, "The commons was never just about free code. It was about freedom to build together." That freedom, and the critical infrastructure that underlies almost all of modern society, is at risk because attribution, ownership, and reciprocity are blurred when AIs siphon up everything on the Internet and launder it (the analogy of money laundering is apt), so that all that code's provenance is obscured.

Read more of this story at Slashdot.

  •  

A Plan for Improving JavaScript's Trustworthiness on the Web

On Cloudflare's blog, a senior research engineer shares a plan for "improving the trustworthiness of JavaScript on the web." "It is as true today as it was in 2011 that Javascript cryptography is Considered Harmful." The main problem is code distribution. Consider an end-to-end-encrypted messaging web application. The application generates cryptographic keys in the client's browser that let users view and send end-to-end encrypted messages to each other. If the application is compromised, what would stop the malicious actor from simply modifying their Javascript to exfiltrate messages? It is interesting to note that smartphone apps don't have this issue. This is because app stores do a lot of heavy lifting to provide security for the app ecosystem. Specifically, they provide integrity, ensuring that apps being delivered are not tampered with, consistency, ensuring all users get the same app, and transparency, ensuring that the record of versions of an app is truthful and publicly visible. It would be nice if we could get these properties for our end-to-end encrypted web application, and the web as a whole, without requiring a single central authority like an app store. Further, such a system would benefit all in-browser uses of cryptography, not just end-to-end-encrypted apps. For example, many web-based confidential LLMs, cryptocurrency wallets, and voting systems use in-browser Javascript cryptography for the last step of their verification chains. In this post, we will provide an early look at such a system, called Web Application Integrity, Consistency, and Transparency (WAICT) that we have helped author. WAICT is a W3C-backed effort among browser vendors, cloud providers, and encrypted communication developers to bring stronger security guarantees to the entire web... We hope to build even wider consensus on the solution design in the near future.... We would like to have a way of enforcing integrity on an entire site, i.e., every asset under a domain. For this, WAICT defines an integrity manifest, a configuration file that websites can provide to clients. One important item in the manifest is the asset hashes dictionary, mapping a hash belonging to an asset that the browser might load from that domain, to the path of that asset. The blog post points out that the WEBCAT protocol (created by the Freedom of the Press Foundation) "allows site owners to announce the identities of the developers that have signed the site's integrity manifest, i.e., have signed all the code and other assets that the site is serving to the user... We've made WAICT extensible enough to fit WEBCAT inside and benefit from the transparency components." The proposal also envisions a service storing metadata for transparency-enabled sites on the web (along with "witnesses" who verify the prefix tree holding the hashes for domain manifests). "We are still very early in the standardization process," with hopes to soon "begin standardizing the integrity manifest format. And then after that we can start standardizing all the other features. We intend to work on this specification hand-in-hand with browsers and the IETF, and we hope to have some exciting betas soon. In the meantime, you can follow along with our transparency specification draft, check out the open problems, and share your ideas."
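The manifest format is still being standardized, so the sketch below is purely illustrative of the basic check such a manifest enables: map a hash to an asset path and refuse any fetched bytes that do not hash to the entry recorded for that path. The struct, field names, and the choice of hex-encoded SHA-256 are assumptions rather than the WAICT draft's actual format (the sketch uses the sha2 and hex crates).

```rust
// Illustrative only: a toy, WAICT-flavored integrity check. The real manifest
// format is still being standardized; field names and hashing details here are
// assumptions. Dependencies: sha2 = "0.10", hex = "0.4".
use std::collections::HashMap;

use sha2::{Digest, Sha256};

/// Hypothetical in-memory form of an integrity manifest: hex-encoded SHA-256
/// hash of an asset, mapped to the path the browser would load it from.
struct IntegrityManifest {
    asset_hashes: HashMap<String, String>,
}

impl IntegrityManifest {
    /// Accept the fetched bytes only if their hash is listed for this path.
    fn verify(&self, path: &str, bytes: &[u8]) -> bool {
        let digest = hex::encode(Sha256::digest(bytes));
        self.asset_hashes.get(&digest).map_or(false, |p| p.as_str() == path)
    }
}

fn main() {
    let script = b"console.log('hello');";
    let mut asset_hashes = HashMap::new();
    asset_hashes.insert(hex::encode(Sha256::digest(script)), "/app.js".to_string());
    let manifest = IntegrityManifest { asset_hashes };

    assert!(manifest.verify("/app.js", script));        // untampered asset passes
    assert!(!manifest.verify("/app.js", b"tampered"));  // modified asset is rejected
    println!("integrity checks behaved as expected");
}
```

The transparency and consistency pieces of WAICT sit on top of a check like this: the same manifest hashes would also be logged publicly and witnessed, so a site cannot quietly serve different code to different users.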

Read more of this story at Slashdot.

  •  

OpenAI Cofounder Builds New Open Source LLM 'Nanochat' - and Doesn't Use Vibe Coding

An anonymous reader shared this report from Gizmodo: It's been over a year since OpenAI cofounder Andrej Karpathy exited the company. In the time since he's been gone, he coined and popularized the term "vibe coding" to describe the practice of farming out coding projects to AI tools. But earlier this week, when he released his own open source model called nanochat, he admitted that he wrote the whole thing by hand, vibes be damned. Nanochat, according to Karpathy, is a "minimal, from scratch, full-stack training/inference pipeline" that is designed to let anyone build a large language model with a ChatGPT-style chatbot interface in a matter of hours and for as little as $100. Karpathy said the project contains about 8,000 lines of "quite clean code," which he wrote by hand — not necessarily by choice, but because he found AI tools couldn't do what he needed. "It's basically entirely hand-written (with tab autocomplete)," he wrote. "I tried to use claude/codex agents a few times but they just didn't work well enough at all and net unhelpful."

Read more of this story at Slashdot.

  •  

GitHub Will Prioritize Migrating To Azure Over Feature Development

An anonymous reader shares a report: After acquiring GitHub in 2018, Microsoft mostly let the developer platform run autonomously. But in recent months, that's changed. With GitHub CEO Thomas Dohmke leaving the company this August, and GitHub being folded more deeply into Microsoft's organizational structure, GitHub lost that independence. Now, according to internal GitHub documents The New Stack has seen, the next step of this deeper integration into the Microsoft structure is moving all of GitHub's infrastructure to Azure, even at the cost of delaying work on new features. [...] While GitHub had previously started work on migrating parts of its service to Azure, our understanding is that these migrations have been halting and sometimes failed. There are some projects, like its data residency initiative (internally referred to as Project Proxima) that will allow GitHub's enterprise users to store all of their code in Europe, that already solely use Azure's local cloud regions.

Read more of this story at Slashdot.

  •  

The Great Software Quality Collapse

Engineer Denis Stetskov, writing in a blog: The Apple Calculator leaked 32GB of RAM. Not used. Not allocated. Leaked. A basic calculator app is hemorrhaging more memory than most computers had a decade ago. Twenty years ago, this would have triggered emergency patches and post-mortems. Today, it's just another bug report in the queue. We've normalized software catastrophes to the point where a Calculator leaking 32GB of RAM barely makes the news. This isn't about AI. The quality crisis started years before ChatGPT existed. AI just weaponized existing incompetence. [...] Here's what engineering leaders don't want to acknowledge: software has physical constraints, and we're hitting all of them simultaneously. Modern software is built on towers of abstractions, each one making development "easier" while adding overhead: Today's real chain: React > Electron > Chromium > Docker > Kubernetes > VM > managed DB > API gateways. Each layer adds "only 20-30%." Compound a handful and you're at 2-6x overhead for the same behavior. That's how a Calculator ends up leaking 32GB. Not because someone wanted it to -- but because nobody noticed the cumulative cost until users started complaining. [...] We're living through the greatest software quality crisis in computing history. A Calculator leaks 32GB of RAM. AI assistants delete production databases. Companies spend $364 billion to avoid fixing fundamental problems. This isn't sustainable. Physics doesn't negotiate. Energy is finite. Hardware has limits. The companies that survive won't be those who can outspend the crisis. They'll be those who remember how to engineer.
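The "2-6x" figure follows directly from compounding: if each of n layers multiplies cost by roughly 1.2 to 1.3, total overhead is (1 + overhead)^n. A quick sanity check of that arithmetic (the per-layer percentages are the essay's; the layer counts are just example values):

```rust
// Sanity-checking the essay's compounding claim: n layers that each add a
// fixed fractional overhead multiply total cost by (1 + overhead)^n.
fn compounded(overhead_per_layer: f64, layers: i32) -> f64 {
    (1.0 + overhead_per_layer).powi(layers)
}

fn main() {
    // 20% per layer across 4 layers is about 2.1x; 30% across 7 layers is
    // about 6.3x, which brackets the "2-6x overhead" range quoted above.
    println!("{:.2}x", compounded(0.20, 4)); // prints 2.07x
    println!("{:.2}x", compounded(0.30, 7)); // prints 6.27x
}
```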

Read more of this story at Slashdot.

  •