
'Coding is Dead': University of Washington CS Program Rethinks Curriculum For the AI Era

The University of Washington's Paul G. Allen School of Computer Science & Engineering is overhauling its approach to computer science education as AI reshapes the tech industry. Director Magdalena Balazinska has declared that "coding, or the translation of a precise design into software instructions, is dead" because AI can now handle that work. The Pacific Northwest's premier tech program now allows students to use GPT tools in assignments, requiring them to cite AI as a collaborator just as they would credit input from a fellow student. The school is considering "coordinated changes to our curriculum" after encouraging professors to experiment with AI integration.

Read more of this story at Slashdot.

  •  

Diffusion + Coding = DiffuCode. How Apple Released a Weirdly Interesting Coding Language Model

"Apple quietly dropped a new AI model on Hugging Face with an interesting twist," writes 9to5Mac. "Instead of writing code like traditional LLMs generate text (left to right, top to bottom), it can also write out of order, and improve multiple chunks at once." "The result is faster code generation, at a performance that rivals top open-source coding models." Traditionally, most LLMs have been autoregressive. This means that when you ask them something, they process your entire question, predict the first token of the answer, reprocess the entire question with the first token, predict the second token, and so on. This makes them generate text like most of us read: left to right, top to bottom... An alternative to autoregressive models is diffusion models, which have been more often used by image models like Stable Diffusion. In a nutshell, the model starts with a fuzzy, noisy image, and it iteratively removes the noise while keeping the user request in mind, steering it towards something that looks more and more like what the user requested... Lately, some large language models have looked to the diffusion architecture to generate text, and the results have been pretty promising... This behavior is especially useful for programming, where global structure matters more than linear token prediction... [Apple] released an open-source model called DiffuCode-7B-cpGRPO, that builds on top of a paper called DiffuCoder: Understanding and Improving Masked Diffusion Models for Code Generation, released just last month... [W]ith an extra training step called coupled-GRPO, it learned to generate higher-quality code with fewer passes. The result? Code that's faster to generate, globally coherent, and competitive with some of the best open-source programming models out there. Even more interestingly, Apple's model is built on top of Qwen2.5-7B, an open-source foundation model from Alibaba. Alibaba first fine-tuned that model for better code generation (as Qwen2.5-Coder-7B), then Apple took it and made its own adjustments. They turned it into a new model with a diffusion-based decoder, as described in the DiffuCoder paper, and then adjusted it again to better follow instructions. Once that was done, they trained yet another version of it using more than 20,000 carefully picked coding examples. "Although DiffuCoder did better than many diffusion-based coding models (and that was before the 4.4% bump from DiffuCoder-7B-cpGRPO), it still doesn't quite reach the level of GPT-4 or Gemini Diffusion..." the article points out. But "the bigger point is this: little by little, Apple has been laying the groundwork for its generative AI efforts with some pretty interesting and novel ideas."

Read more of this story at Slashdot.

  •  

How Do You Teach Computer Science in the Age of AI?

"A computer science degree used to be a golden ticket to the promised land of jobs," a college senior tells the New York Times. But "That's no longer the case." The article notes that in the last three years there's been a 65% drop from companies seeking workers with two years of experience or less (according to an analysis by technology research/education organization CompTIA), with tech companies "relying more on AI for some aspects of coding, eliminating some entry-level work." So what do college professors teach when AI "is coming fastest and most forcefully to computer science"? Computer science programs at universities across the country are now scrambling to understand the implications of the technological transformation, grappling with what to keep teaching in the AI era. Ideas range from less emphasis on mastering programming languages to focusing on hybrid courses designed to inject computing into every profession, as educators ponder what the tech jobs of the future will look like in an AI economy... Some educators now believe the discipline could broaden to become more like a liberal arts degree, with a greater emphasis on critical thinking and communication skills. The National Science Foundation is funding a program, Level Up AI, to bring together university and community college educators and researchers to move toward a shared vision of the essentials of AI education. The 18-month project, run by the Computing Research Association, a research and education nonprofit, in partnership with New Mexico State University, is organising conferences and roundtables and producing white papers to share resources and best practices. The NSF-backed initiative was created because of "a sense of urgency that we need a lot more computing students — and more people — who know about AI in the workforce," said Mary Lou Maher, a computer scientist and a director of the Computing Research Association. The future of computer science education, Maher said, is likely to focus less on coding and more on computational thinking and AI literacy. Computational thinking involves breaking down problems into smaller tasks, developing step-by-step solutions and using data to reach evidence-based conclusions. AI literacy is an understanding — at varying depths for students at different levels — of how AI works, how to use it responsibly and how it is affecting society. Nurturing informed skepticism, she said, should be a goal. The article raises other possibilities. Experts also suggest the possibility of "a burst of technology democratization as chatbot-style tools are used by people in fields from medicine to marketing to create their own programs, tailored for their industry, fed by industry-specific data sets." Stanford CS professor Alex Aiken even argues that "The growth in software engineering jobs may decline, but the total number of people involved in programming will increase." Last year, Carnegie Mellon actually endorsed using AI for its introductory CS courses. The dean of the school's undergraduate programs believes that coursework "should include instruction in the traditional basics of computing and AI principles, followed by plenty of hands-on experience designing software using the new tools."

Read more of this story at Slashdot.

  •  

Microsoft Open Sources Copilot Chat for VS Code on GitHub

"Microsoft has released the source code for the GitHub Copilot Chat extension for VS Code under the MIT license," reports BleepingComputer. This provides the community access to the full implementation of the chat-based coding assistant, including the implementation of "agent mode," what contextual data is sent to large language models (LLMs), and the design of system prompts. The GitHub repository hosting the code also details telemetry collection mechanisms, addressing long-standing questions about data transparency in AI-assisted coding tools... As the VS Code team explained previously, shifts in AI tooling landscape like the rapid growth of the open-source AI ecosystem and a more level playing field for all have reduced the need for secrecy around prompt engineering and UI design. At the same time, increased targeting of development tools by malicious actors has increased the need for crowdsourcing contributions to rapidly pinpoint problems and develop effective fixes. Essentially, openness is now considered superior from a security perspective. "If you've been hesitant to adopt AI tools because you don't trust the black box behind them, this move opensources-github-copilot-chat-vscode/offers something rare these days: transparency," writes Slashdot reader BrianFagioli" Now that the extension is open source, developers can audit how agent mode actually works. You can also dig into how it manages your data, customize its behavior, or build entirely new tools on top of it. This could be especially useful in enterprise environments where compliance and control are non negotiable. It is worth pointing out that the backend models powering Copilot remain closed source. So no, you won't be able to self host the whole experience or train your own Copilot. But everything running locally in VS Code is now fair game. Microsoft says it is planning to eventually merge inline code completions into the same open source package too, which would make Copilot Chat the new hub for both chat and suggestions.

Read more of this story at Slashdot.

  •  

'The Computer-Science Bubble Is Bursting'

theodp writes: "The job of the future might already be past its prime," writes The Atlantic's Rose Horowitch in The Computer-Science Bubble Is Bursting. "For years, young people seeking a lucrative career were urged to go all in on computer science. From 2005 to 2023, the number of comp-sci majors in the United States quadrupled. All of which makes the latest batch of numbers so startling. This year, enrollment grew by only 0.2 percent nationally, and at many programs, it appears to already be in decline, according to interviews with professors and department chairs. At Stanford, widely considered one of the country's top programs, the number of comp-sci majors has stalled after years of blistering growth. Szymon Rusinkiewicz, the chair of Princeton's computer-science department, told me that, if current trends hold, the cohort of graduating comp-sci majors at Princeton is set to be 25 percent smaller in two years than it is today. The number of Duke students enrolled in introductory computer-science courses has dropped about 20 percent over the past year." "But if the decline is surprising, the reason for it is fairly straightforward: Young people are responding to a grim job outlook for entry-level coders. In recent years, the tech industry has been roiled by layoffs and hiring freezes. The leading culprit for the slowdown is technology itself. Artificial intelligence has proved to be even more valuable as a writer of computer code than as a writer of words. This means it is ideally suited to replacing the very type of person who built it. A recent Pew study found that Americans think software engineers will be most affected by generative AI. Many young people aren't waiting to find out whether that's true." Meanwhile, writing in the Communications of the ACM, Orit Hazzan and Avi Salmon ask: Should Universities Raise or Lower Admission Requirements for CS Programs in the Age of GenAI? "This debate raises a key dilemma: should universities raise admission standards for computer science programs to ensure that only highly skilled problem-solvers enter the field, lower them to fill the gaps left by those who now see computer science as obsolete due to GenAI, or restructure them to attract excellent candidates with diverse skill sets who may not have considered computer science prior to the rise of GenAI, but who now, with the intensive GenAI and vibe coding tools supporting programming tasks, may consider entering the field?"

Read more of this story at Slashdot.

  •  

Apple Migrates Its Password Monitoring Service to Swift from Java, Gains 40% Performance Uplift

"Meta and AWS have used Rust, and Netflix uses Go," reports the programming news site InfoQ. But using another language, Apple recently "migrated its global Password Monitoring service from Java to Swift, achieving a 40% increase in throughput, and significantly reducing memory usage." This freed up nearly 50% of their previously allocated Kubernetes capacity, according to the article, and even "improved startup time, and simplified concurrency." In a recent post, Apple engineers detailed how the rewrite helped the service scale to billions of requests per day while improving responsiveness and maintainability... "Swift allowed us to write smaller, less verbose, and more expressive codebases (close to 85% reduction in lines of code) that are highly readable while prioritizing safety and efficiency." Apple's Password Monitoring service, part of the broader Password app's ecosystem, is responsible for securely checking whether a user's saved credentials have appeared in known data breaches, without revealing any private information to Apple. It handles billions of requests daily, performing cryptographic comparisons using privacy-preserving protocols. This workload demands high computational throughput, tight latency bounds, and elastic scaling across regions... Apple's previous Java implementation struggled to meet the service's growing performance and scalability needs. Garbage collection caused unpredictable pause times under load, degrading latency consistency. Startup overhead — from JVM initialization, class loading, and just-in-time compilation — slowed the system's ability to scale in real time. Additionally, the service's memory footprint, often reaching tens of gigabytes per instance, reduced infrastructure efficiency and raised operational costs. Originally developed as a client-side language for Apple platforms, Swift has since expanded into server-side use cases... Swift's deterministic memory management, based on reference counting rather than garbage collection (GC), eliminated latency spikes caused by GC pauses. This consistency proved critical for a low-latency system at scale. After tuning, Apple reported sub-millisecond 99.9th percentile latencies and a dramatic drop in memory usage: Swift instances consumed hundreds of megabytes, compared to tens of gigabytes with Java. "While this isn't a sign that Java and similar languages are in decline," concludes InfoQ's article, "there is growing evidence that at the uppermost end of performance requirements, some are finding that general-purpose runtimes no longer suffice."
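As a rough, back-of-the-envelope reading of those numbers (my own arithmetic, not from Apple's post or the InfoQ article): a 40% per-instance throughput gain alone lets the same traffic run on about 71% of the original fleet, and the drop from tens of gigabytes to hundreds of megabytes per instance allows much denser packing, which together is consistent with freeing roughly half the allocated Kubernetes capacity.

```python
# Back-of-the-envelope check (my arithmetic, not Apple's): how much of a fleet
# a per-instance throughput gain frees, assuming the service is throughput-bound
# and instances are interchangeable.
def fleet_fraction_freed(throughput_gain: float) -> float:
    """throughput_gain=0.40 means each instance now serves 40% more requests."""
    return 1.0 - 1.0 / (1.0 + throughput_gain)

print(f"{fleet_fraction_freed(0.40):.1%}")  # ~28.6% freed from throughput alone
# The remaining gap to the reported ~50% plausibly comes from the memory
# reduction (tens of GB -> hundreds of MB), which permits denser scheduling.
```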

Read more of this story at Slashdot.

  •  

Bill Atkinson, Hypercard Creator and Original Mac Team Member, Dies at Age 74

AppleInsider reports: The engineer behind much of the Mac's early graphical user interfaces, QuickDraw, MacPaint, Hypercard and much more, William D. "Bill" Atkinson, died on June 5 of complications from pancreatic cancer... Atkinson, who built a post-Apple career as a noted nature photographer, worked at Apple from 1978 to 1990. Among his lasting contributions to Apple's computers were the invention of the menubar, the selection lasso, the "marching ants" item selection animation, and the discovery of a midpoint circle algorithm that enabled the rapid drawing of circles on-screen. He was Apple Employee No. 51, recruited by Steve Jobs. Atkinson was one of the 30 team members to develop the first Macintosh, but was also the principal designer of the Lisa's graphical user interface (GUI), a novelty in computers at the time. He was fascinated by the concept of dithering, by which computers using dots could create nearly photographic images similar to the way newspapers printed photos. He is also credited (alongside Jobs) for the invention of RoundRects, the rounded rectangles still used in Apple's system messages, application windows, and other graphical elements on Apple products. Hypercard was Atkinson's main claim to fame. He built a hypermedia approach to building applications that he once described as a "software erector set." The Hypercard technology debuted in 1987, and greatly opened up Macintosh software development. In 2012, some video clips of Atkinson appeared in rediscovered archival footage. (Original Macintosh team developer Andy Hertzfeld uploaded "snippets from interviews with members of the original Macintosh design team, recorded in October 1983 for projected TV commercials that were never used.") Blogger John Gruber calls Atkinson "One of the great heroes in not just Apple history, but computer history." Gruber writes: If you want to cheer yourself up, go to Andy Hertzfeld's Folklore.org site and (re-)read all the entries about Atkinson. Here's just one, with Steve Jobs inspiring Atkinson to invent the roundrect. Here's another (surely near and dear to my friend Brent Simmons's heart) with this kicker of a closing line: "I'm not sure how the managers reacted to that, but I do know that after a couple more weeks, they stopped asking Bill to fill out the form, and he gladly complied." Some of his code and algorithms are among the most efficient and elegant ever devised. The original Macintosh team was chock full of geniuses, but Atkinson might have been the most essential to making the impossible possible under the extraordinary technical limitations of that hardware... In addition to his low-level contributions like QuickDraw, Atkinson was also the creator of MacPaint (which to this day stands as the model for bitmap image editors — Photoshop, I would argue, was conceptually derived directly from MacPaint) and HyperCard ("inspired by a mind-expanding LSD journey in 1985"), the influence of which cannot be overstated. I say this with no hyperbole: Bill Atkinson may well have been the best computer programmer who ever lived. Without question, he's on the short list. What a man, what a mind, what gifts to the world he left us.

Read more of this story at Slashdot.

  •  

Ask Slashdot: How Important Is It For Programmers to Learn Touch Typing?

Once upon a time, long-time Slashdot reader tgibson learned how to type on a manual typewriter, back in an 8th grade classroom. And to this day, they write, "my bias is to nod approvingly at touch typists and roll my eyes at those who need to stare at the keyboard while typing..." But how true is that for computer professionals today? After 15 years I left industry and became a post-secondary computer science educator. Occasionally I rant to my students about the importance of touch-typing as a skill to have as a software engineer. But I've been out of the game for some time now. Those of you hiring or working with freshly-minted software engineers, what's your take? One anonymous Slashdot reader responded: Oh, you mean the kid in the next cubicle that has said "Hey Siri" 297 times this morning? I'll let you know when he starts typing. A minor suggestion to office managers... please purchase a very quiet keyboard. Fellow cube-mates who are accomplished typists would consider that struggling audibly to be akin to nails on a blackboard... Share your own thoughts in the comments. How important is it for programmers to learn touch typing?

Read more of this story at Slashdot.

  •  

'For Algorithms, a Little Memory Outweighs a Lot of Time'

MIT comp-sci professor Ryan Williams suspected that a small amount of memory "would be as helpful as a lot of time in all conceivable computations..." writes Quanta magazine. "In February, he finally posted his proof online, to widespread acclaim..." Every algorithm takes some time to run, and requires some space to store data while it's running. Until now, the only known algorithms for accomplishing certain tasks required an amount of space roughly proportional to their runtime, and researchers had long assumed there's no way to do better. Williams' proof established a mathematical procedure for transforming any algorithm — no matter what it does — into a form that uses much less space. What's more, this result — a statement about what you can compute given a certain amount of space — also implies a second result, about what you cannot compute in a certain amount of time. This second result isn't surprising in itself: Researchers expected it to be true, but they had no idea how to prove it. Williams' solution, based on his sweeping first result, feels almost cartoonishly excessive, akin to proving a suspected murderer guilty by establishing an ironclad alibi for everyone else on the planet. It could also offer a new way to attack one of the oldest open problems in computer science. "It's a pretty stunning result, and a massive advance," said Paul Beame, a computer scientist at the University of Washington. Thanks to long-time Slashdot reader mspohr for sharing the article.
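For readers who want the formal statement behind the prose, the headline theorem (paraphrased from Williams' paper "Simulating Time With Square-Root Space"; consult the paper for the precise machine model and constants) says roughly the following:

```latex
% Paraphrase of the simulation theorem; see the paper for exact conditions.
\[
  \mathsf{TIME}[t(n)] \subseteq \mathsf{SPACE}\!\left[O\!\left(\sqrt{t(n)\,\log t(n)}\right)\right]
\]
% That is, any multitape computation running in time t(n) can be simulated
% using only about sqrt(t(n) log t(n)) memory, far below the previous best
% bound of roughly t(n)/log t(n) due to Hopcroft, Paul, and Valiant (1977).
```

The "second result" the article mentions follows from this: a space-versus-time lower bound showing there are problems solvable in a given amount of space that cannot be solved in a comparably small amount of time.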

Read more of this story at Slashdot.

  •  

Andrew Ng Says Vibe Coding is a Bad Name For a Very Real and Exhausting Job

An anonymous reader shares a report: Vibe coding might sound chill, but Andrew Ng thinks the name is unfortunate. The Stanford professor and former Google Brain scientist said the term misleads people into imagining engineers just "go with the vibes" when using AI tools to write code. "It's unfortunate that that's called vibe coding," Ng said at a fireside chat in May at the LangChain Interrupt conference. "It's misleading a lot of people into thinking, just go with the vibes, you know -- accept this, reject that." In reality, coding with AI is "a deeply intellectual exercise," he said. "When I'm coding for a day with AI coding assistance, I'm frankly exhausted by the end of the day." Despite his gripe with the name, Ng is bullish on AI-assisted coding. He said it's "fantastic" that developers can now write software faster with these tools, sometimes while "barely looking at the code."

Read more of this story at Slashdot.

  •  

Morgan Stanley Says Its AI Tool Processed 9 Million Lines of Legacy Code This Year And Saved 280,000 Developer Hours

Morgan Stanley has deployed an in-house AI tool called DevGen.AI that has reviewed nine million lines of legacy code this year, saving the investment bank's developers an estimated 280,000 hours by translating outdated programming languages into plain English specifications that can be rewritten in modern code. The tool, built on OpenAI's GPT models and launched in January, addresses what Mike Pizzi, the company's global head of technology and operations, calls one of enterprise software's biggest pain points -- modernizing decades-old code that weakens security and slows new technology adoption. While commercial AI coding tools excel at writing new code, they lack expertise in older or company-specific programming languages like Cobol, prompting Morgan Stanley to train its own system on its proprietary codebase. The tool's primary strength, the bank said, lies in creating English specifications that map what legacy code does, enabling any of the company's 15,000 developers worldwide to rewrite it in modern programming languages rather than relying on a dwindling pool of specialists familiar with antiquated coding systems.
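DevGen.AI itself is an in-house system, but the general "legacy code to English spec" pattern the article describes can be sketched with the public OpenAI Python SDK. The model name, prompt wording, and chunking strategy below are illustrative assumptions, not Morgan Stanley's actual tool:

```python
# Illustrative sketch of the "legacy code -> English spec" pattern described
# above, using the public OpenAI SDK. This is not DevGen.AI; model choice and
# prompt wording are assumptions for the example.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def legacy_code_to_spec(source: str, language: str = "COBOL") -> str:
    prompt = (
        f"Read the following {language} program and write a plain-English "
        "specification of what it does: inputs, outputs, business rules, "
        "and edge cases. Do not rewrite the code.\n\n" + source
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed; any capable chat model works for the sketch
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    cobol = "ADD 1 TO COUNTER.\nIF COUNTER > 10 DISPLAY 'LIMIT REACHED'."
    print(legacy_code_to_spec(cobol))
```

In practice a real pipeline would also have to split multi-thousand-line programs into chunks that fit a model's context window and stitch the resulting specifications back together, which is presumably where much of the engineering effort in a tool like DevGen.AI goes.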

Read more of this story at Slashdot.

  •  

AI Startups Revolutionize Coding Industry, Leading To Sky-High Valuations

Code generation startups are attracting extraordinary investor interest two years after ChatGPT's launch, with companies like Cursor raising $900 million at a $10 billion valuation despite operating with negative gross margins. OpenAI is reportedly in talks to acquire Windsurf, maker of the Codeium coding tool, for $3 billion, while the startup generates $50 million in annualized revenue from a product launched just seven months ago. These "vibe coding" platforms allow users to write software using plain English commands, attempting to fundamentally change how code gets written. Cursor went from zero to $100 million in recurring revenue in under two years with just 60 employees, though both major startups spend more money than they generate, Reuters reports, citing investor sources familiar with their operations. The surge comes as major technology giants report significant portions of their code now being AI-generated -- Google claims over 30% while Microsoft reports 20-30%. Meanwhile, entry-level programming positions have declined 24% as companies increasingly rely on AI tools to handle basic coding tasks previously assigned to junior developers.

Read more of this story at Slashdot.

  •  

How Stack Overflow's Reputation System Led To Its Own Downfall

A new analysis argues that Stack Overflow's decline began years before AI tools delivered the "final blow" to the once-dominant programming forum. The site's monthly questions have fallen from a peak of 200,000, with the steep collapse beginning in earnest in 2023 following ChatGPT's launch, but usage had been declining since 2014, according to data cited in the InfoWorld analysis. The platform's remarkable reputation system initially elevated it above competitors by allowing users to earn points and badges for helpful contributions, but that same system eventually became its downfall, the piece argues. As Stack Overflow evolved into a self-governing platform where high-reputation users gained moderation powers, the community transformed from a welcoming space for developer interaction into what the author compares to a "Stanford Prison Experiment" where moderators systematically culled interactions they deemed irrelevant.

Read more of this story at Slashdot.

  •  

Amid Turmoil, Stack Overflow Asks About AI, Salary, Remote Work in 15th Annual Developer Survey

Stack Overflow remains in the midst of big changes to counter an AI-fueled drop in engagement. So "We're wondering what kind of online communities Stack Overflow users continue to support in the age of AI," writes their senior analyst, "and whether AI is becoming a closer companion than ever before." For the 15th year of their annual reader survey, this means "we're not just collecting data; we're reflecting on the last year of questions, answers, hallucinations, job changes, tech stacks, memory allocations, models, systems and agents — together..." Is it an AI agent revolution yet? Are you building or utilizing AI agents? We want to know how these intelligent assistants are changing your daily workflow and if developers are really using them as much as these keynote speeches assume. We're asking if you are using these tools and where humans are still needed for common developer tasks. Career shifts: We're keen to understand if you've considered a career change or transitioned roles and if AI is impacting your approach to learning or using existing tools. Did we make up the difference in salaries globally for tech workers...? They're also re-visiting "a key finding from recent surveys highlighted a significant statistic: 80% of developers reported being unhappy or complacent in their jobs." This raised questions about changing office (and return-to-office) culture and the pressures of the industry, along with whether there were any insights into what could help developers feel more satisfied at work. Prior research confirmed that flexibility at work used to contribute more than salary to job satisfaction, but 2024's results show us that remote work is not more impactful than salary when it comes to overall satisfaction... [For some positions job satisfaction stayed consistent regardless of salary, though it increased with salary for other positions. And embedded developers said their happiness increased when they worked with top-quality hardware, while desktop developers cited "contributing to open source" and engineering managers were happier when "driving strategy".] In 2024, our data showed that many developers experienced a pay cut in various roles and programming specialties. In an industry often seen as highly lucrative, this was a notable shift of around 7% lower salaries across the top ten reporting countries for the same roles. This year, we're interested in whether this trend has continued, reversed, or stabilized. Salary dynamics are an indicator of job satisfaction in recent surveys of Stack Overflow users, and understanding trends for these roles can perhaps improve the process for finding the most useful factors contributing to role satisfaction outside of salary. And of course they're asking about AI — while noting last year's survey uncovered this paradox: "While AI usage is growing (70% in 2023 vs. 76% in 2024 planning to or currently using AI tools), developer sentiment isn't necessarily following suit, as 77% of all respondents in 2023 are favorable or very favorable of AI tools for development compared to 72% of all respondents in 2024." Concerns about accuracy and misinformation were prevalent among some key groups. More developers learning to code are using or are interested in using AI tools than professional developers (84% vs. 77%)... Developers with 10-19 years of experience were most likely (84%) to name "increase in productivity" as a benefit of AI tools, higher than developers with less experience (<80%)...

Read more of this story at Slashdot.

  •  

Is AI Turning Coders Into Bystanders in Their Own Jobs?

"AI's downside for software engineers for now seems to be a change in the quality of their work," reports the New York Times. "Some say it is becoming more routine, less thoughtful and, crucially, much faster paced... The new approach to coding at many companies has, in effect, eliminated much of the time the developer spends reflecting on his or her work." And Amazon CEO Andy Jassy even recently told shareholders Amazon would "change the norms" for programming by how they used AI. Those changing norms have not always been eagerly embraced. Three Amazon engineers said managers had increasingly pushed them to use AI in their work over the past year. The engineers said the company had raised output goals [which affect performance reviews] and had become less forgiving about deadlines. It has even encouraged coders to gin up new AI productivity tools at an upcoming hackathon, an internal coding competition. One Amazon engineer said his team was roughly half the size it was last year, but it was expected to produce roughly the same amount of code by using AI. Other tech companies are moving in the same direction. In a memo to employees in April, the CEO of Shopify, a company that helps entrepreneurs build and manage e-commerce websites, announced that "AI usage is now a baseline expectation" and that the company would "add AI usage questions" to performance reviews. Google recently told employees that it would soon hold a companywide hackathon in which one category would be creating AI tools that could "enhance their overall daily productivity," according to an internal announcement. Winning teams will receive $10,000. The shift has not been all negative for workers. At Amazon and other companies, managers argue that AI can relieve employees of tedious tasks and enable them to perform more interesting work. Jassy wrote last year that the company had saved "the equivalent of 4,500 developer-years" by using AI to do the thankless work of upgrading old software... As at Microsoft, many Amazon engineers use an AI assistant that suggests lines of code. But the company has more recently rolled out AI tools that can generate large portions of a program on their own. One engineer called the tools "scarily good." The engineers said that many colleagues have been reluctant to use these new tools because they require a lot of double-checking and because the engineers want more control. "It's more fun to write code than to read code," said Simon Willison, an AI fan who is a longtime programmer and blogger, channeling the objections of other programmers. "If you're told you have to do a code review, it's never a fun part of the job. When you're working with these tools, it's most of the job." "This shift from writing to reading code can make engineers feel like bystanders in their own jobs," the article points out, adding: "The automation of coding has special resonance for Amazon engineers, who have watched their blue-collar counterparts undergo a similar transition... While there is no rush to form a union for coders at Amazon, such a move would not be unheard of. When General Motors workers went on strike in 1936 to demand recognition of their union, the United Auto Workers, it was the dreaded speedup that spurred them on."

Read more of this story at Slashdot.

  •  

Python Can Now Call Code Written in Chris Lattner's Mojo

Mojo (the programming language) reached a milestone today. The story so far... Chris Lattner created the Swift programming language (and answered questions from Slashdot readers in 2017 on his way to new jobs at Tesla, Google, and SiFive). But in 2023, he'd created a new programming language called Mojo — a superset of Python with added functionality for high performance code that takes advantage of modern accelerators — as part of his work at AI infrastructure company Modular.AI. And today Modular's product manager Brad Larson announced Python users can now call Mojo code from Python. (Watch for it in Mojo's latest nightly builds...) The Python interoperability section of the Mojo manual has been expanded and now includes a dedicated document on calling Mojo from Python. We've also added a couple of new examples to the modular GitHub repository: a "hello world" that shows how to round-trip from Python to Mojo and back, and one that shows how even Mojo code that uses the GPU can be called from Python. This is usable through any of the ways of installing MAX [their Modular Accelerated Xecution platform, an integrated suite of AI compute tools] and the Mojo compiler: via pip install modular / pip install max, or with Conda via Magic / Pixi. One of our goals has been the progressive introduction of MAX and Mojo into the massive Python codebases out in the world today. We feel that enabling selective migration of performance bottlenecks in Python code to fast Mojo (especially Mojo running on accelerators) will unlock entirely new applications. I'm really excited for how this will expand the reach of the Mojo code many of you have been writing... It has taken months of deep technical work to get to this point, and this is just the first step in the roll-out of this new language feature. I strongly recommend reading the list of current known limitations to understand what may not work just yet, both to avoid potential frustration and to prevent the filing of duplicate issues for known areas that we're working on. "We are really interested in what you'll build with this new functionality, as well as hearing your feedback about how this could be made even better," the post concludes. Mojo's licensing makes it free on any device, for any research, hobby or learning project, as well as on x86 or ARM CPUs and NVIDIA GPUs.
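For a sense of what that "selective migration" workflow might look like from the Python side, here is a minimal, hypothetical sketch: it assumes a Mojo-built extension module named `fib_mojo` exposing a `fib` function (the real module layout and API are documented in the Mojo manual's Python interoperability section), and it falls back to pure Python so the file still runs without Mojo installed.

```python
# Hypothetical sketch of calling a Mojo-accelerated function from Python.
# `fib_mojo` is an assumed module name for illustration; consult the Mojo
# manual's "calling Mojo from Python" document for the actual workflow.
try:
    import fib_mojo  # assumed name of a Mojo-built extension module
except ImportError:
    fib_mojo = None  # Mojo not installed: use the pure-Python path below

def fib_py(n: int) -> int:
    """Pure-Python fallback so the sketch runs as-is."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Selectively migrate the hot path: use the Mojo version when available.
fib = fib_mojo.fib if fib_mojo is not None else fib_py

if __name__ == "__main__":
    print(fib(30))  # 832040
```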

Read more of this story at Slashdot.

  •  

'Rust is So Good You Can Get Paid $20K to Make It as Fast as C'

The Prossimo project (funded by the nonprofit Internet Security Research Group) seeks to "move the Internet's security-sensitive software infrastructure to memory safe code." Two years ago the Prossimo project announced it had begun work on rav1d, a safer high-performance AV1 decoder written in Rust. According to a new update: We partnered with Immunant to do the engineering work. By September of 2024 rav1d was basically complete and we learned a lot during the process. Today rav1d works well — it passes all the same tests as the dav1d decoder it is based on, which is written in C. It's possible to build and run Chromium with it. There's just one problem — it's not quite as fast as the C version... Our Rust-based rav1d decoder is currently about 5% slower than the C-based dav1d decoder (the exact amount differs a bit depending on the benchmark, input, and platform). This is enough of a difference to be a problem for potential adopters, and, frankly, it just bothers us. The development team worked hard to get it to performance parity. We brought in a couple of other contractors who have experience with optimizing things like this. We wrote about the optimization work we did. However, we were still unable to get to performance parity and, to be frank again, we aren't really sure what to do next. After racking our brains for options, we decided to offer a bounty pool of $20,000 for getting rav1d to performance parity with dav1d. Hopefully folks out there can help get rav1d performance advanced to where it needs to be, and ideally we and the Rust community will also learn something about how Rust performance stacks up against C. This drew a snarky response from FFmpeg, the framework that powers audio and video processing for everyone from VLC to Twitch. "Rust is so good you can get paid $20k to make it as fast as C," they posted to their 68,300 followers on X.com. Thanks to the It's FOSS blog for spotting the announcement.

Read more of this story at Slashdot.

  •  

Ask Slashdot: Would You Consider a Low-Latency JavaScript Runtime For Your Workflow?

Amazon's AWS Labs has created LLRT, an experimental, lightweight JavaScript runtime designed to address the growing demand for fast and efficient serverless applications. Slashdot reader BitterEpic wants to know what you think of it: Traditional JavaScript runtimes like Node.js rely on garbage collection, which can introduce unpredictable pauses and slow down performance, especially during cold starts in serverless environments like AWS Lambda. LLRT's manual memory management, courtesy of Rust, eliminates this issue, leading to smoother, more predictable performance. LLRT also has a runtime under 2MB, a huge reduction compared to the 100MB+ typically required by Node.js. This lightweight design means lower memory usage, better scalability, and reduced operational costs. Without the overhead of garbage collection, LLRT has faster cold start times and can initialize in milliseconds — perfect for latency-sensitive applications where every millisecond counts. For JavaScript developers, LLRT offers the best of both worlds: rapid development with JavaScript's flexibility, combined with Rust's performance. This means faster, more scalable applications without the usual memory bloat and cold start issues. Still in beta, LLRT promises to be a major step forward for serverless JavaScript applications. By combining Rust's performance with JavaScript's flexibility, it opens new possibilities for building high-performance, low-latency applications. If it continues to evolve, LLRT could become a core offering in AWS Lambda, potentially changing how we approach serverless JavaScript development. Would you consider JavaScript as the core of your future workflow? Or maybe you would prefer to go lower-level with QuickJS?

Read more of this story at Slashdot.

  •  

Stack Overflow Seeks Realignment 'To Support the Builders of the Future in an AI World'

"The world has changed," writes Stack Overflow's blog. "Fast. Artificial intelligence is reshaping how we build, learn, and solve problems. Software development looks dramatically different than it did even a few years ago — and the pace of change is only accelerating." And they believe their brand "at times" lost "fidelity and clarity. It's very much been always added to and not been thought of holistically. So, it's time for our brand to evolve too," they write, hoping to articulate a perspective "forged in the fires of community, powered by collaboration, shaped by AI, and driven by people." The developer news site DevClass notes the change happens "as the number of posts to its site continues a dramatic decline thanks to AI-driven alternatives." According to a quick query on the official data explorer, the sum of questions and answers posted in April 2025 was down by over 64 percent from the same month in 2024, and plunged more than 90 percent from April 2020, when traffic was near its peak... Although declining traffic is a sign of Stack Overflow's reduced significance in the developer community, the company's business is not equally affected so far. Stack Exchange is a business owned by investment company Prosus, and the Stack Exchange products include private versions of its site (Stack Overflow for Teams) as well as advertising and recruitment. According to the Prosus financial results, in the six months ended September 2024, Stack Overflow increased its revenue and reduced its losses. The company's search for a new direction though confirms that the fast-disappearing developer engagement with Stack Overflow poses an existential challenge to the organization. DevClass says Stack Overflow's parent company "is casting about for new ways to provide value (and drive business) in this context..." The company has already experimented with various new services, via its Labs research department, including an AI Answer Assistant and Question Assistant, as well as a revamped jobs site in association with recruitment site Indeed, Discussions for technical debate, and extensions for GitHub Copilot, Slack, and Visual Studio Code. From the official announcement on Stack Overflow's blog: This rebrand isn't just a fresh coat of paint. It's a realignment with our purpose: to support the builders of the future in an AI world — with clarity, speed, and humanity. It's about showing up in a way that reflects who we are today, and where we're headed tomorrow. "We have appointed an internal steering group and we have engaged with an external expert partner in this area to help bring about the required change," notes a post in Stack Exchange's "meta" area. This isn't just about a visual update or marketing exercise — it's going to bring about a shift in how we present ourselves to the world which you will feel everywhere from the design to the copywriting, so that we can better achieve our goals and shared mission. As the emergence of AI has called into question the role of Stack Overflow and the Stack Exchange Network, one of the desired outputs of the rebrand process is to clarify our place in the world. We've done work toward this already — our recent community AMA is an example of this — but we want to ensure that this comes across in our brand and identity as well. We want the community to be involved and have a strong voice in the process of renewing and refreshing our brand. Remember, Stack Overflow started with a public discussion about what to name it! 
And in another post two months ago, Stack Exchange said it is exploring early ideas for expanding beyond the "single lane" Q&A highway: Our goal right now is to better understand the problems, opportunities, and needs before deciding on any specific changes... The vision is to potentially enable:
- A slower lane, with high-quality durable knowledge that takes time to create and curate, like questions and answers.
- A medium lane, for more flexible engagement, with features like Discussions or more flexible Stack Exchanges, where users can explore ideas or share opinions.
- A fast lane for quick, real-time interaction, with features like Chat that can bring the community together to discuss topics instantly.
With this in mind, we're seeking your feedback on the current state of Chat, what's most important to you, and how you see Chat fitting into the future. In a post in Stack Exchange's "meta" area, brand design director David Longworth says the "tension mentioned between Stack Overflow and Stack Exchange" is probably the most relevant to the rebranding. But he posted later that "There's a lot of people behind the scenes on this who care deeply about getting this right! Thank you on behalf of myself and the team."

Read more of this story at Slashdot.

  •  

Curl Warns GitHub About 'Malicious Unicode' Security Issue

A Curl contributor replaced an ASCII letter with a Unicode alternative in a pull request, writes Curl lead developer/founder Daniel Stenberg. And not a single human reviewer on the team (or any of their CI jobs) noticed. The change "looked identical to the ASCII version, so it was not possible to visually spot this..." The impact of changing one or more letters in a URL can of course be devastating depending on conditions... [W]e have implemented checks to help us poor humans spot things like this. To detect malicious Unicode. We have added a CI job that scans all files and validates every UTF-8 sequence in the git repository. In the curl git repository most files and most content are plain old ASCII so we can "easily" whitelist a small set of UTF-8 sequences and some specific files, the rest of the files are simply not allowed to use UTF-8 at all as they will then fail the CI job and turn up red. In order to drive this change home, we went through all the test files in the curl repository and made sure that all the UTF-8 occurrences were instead replaced by other kind of escape sequences and similar. Some of them were also used more or less by mistake and could easily be replaced by their ASCII counterparts. The next time someone tries this stunt on us it could be someone with less good intentions, but now ideally our CI will tell us... We want and strive to be proactive and tighten everything before malicious people exploit some weakness somewhere but security remains this never-ending race where we can only do the best we can and while the other side is working in silence and might at some future point attack us in new creative ways we had not anticipated. That future unknown attack is a tricky thing. In the original blog post Stenberg complained he got "barely no responses" from GitHub (joking "perhaps they are all just too busy implementing the next AI feature we don't want.") But hours later he posted an update. "GitHub has told me they have raised this as a security issue internally and they are working on a fix."
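Curl's actual CI job is specific to its repository, but the core idea (treat any non-ASCII or invalid UTF-8 sequence outside an explicit allowlist as a failure) is easy to sketch. The allowlist, file patterns, and paths below are placeholders, not curl's real configuration:

```python
# Minimal sketch of a "malicious Unicode" scan in the spirit of curl's new CI
# job: walk a source tree and report invalid UTF-8 or any non-ASCII character
# in files that are not explicitly allowlisted. Allowlist entries are placeholders.
import sys
import unicodedata
from pathlib import Path

ALLOWLIST = {"docs/THANKS"}  # placeholder: files permitted to contain UTF-8

def scan(root: str) -> int:
    findings = 0
    for path in Path(root).rglob("*"):
        if not path.is_file() or str(path.relative_to(root)) in ALLOWLIST:
            continue
        data = path.read_bytes()
        try:
            text = data.decode("utf-8")
        except UnicodeDecodeError:
            print(f"{path}: invalid UTF-8 sequence")
            findings += 1
            continue
        for lineno, line in enumerate(text.splitlines(), start=1):
            for ch in line:
                if ord(ch) > 127:
                    name = unicodedata.name(ch, "UNKNOWN")
                    print(f"{path}:{lineno}: U+{ord(ch):04X} {name}")
                    findings += 1
    return findings

if __name__ == "__main__":
    sys.exit(1 if scan(sys.argv[1] if len(sys.argv) > 1 else ".") else 0)
```

Reporting the Unicode code point and character name, rather than printing the character itself, is the part that matters for review: a Cyrillic "а" (U+0430) and a Latin "a" (U+0061) are indistinguishable on screen but impossible to confuse in this output.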

Read more of this story at Slashdot.

  •