Reading view

☕️ Round 2 of the .brand (brandTLD) extensions: ICANN has approved the Applicant Guidebook

The second round of custom domain name extensions, the .brand TLDs, will take place in April 2026. Several hundred entities took part in the first round, including extensions such as .leclerc, .bnpparibas, .lancaster, .sncf and .google, as well as geographic extensions like .paris, .bzh and .alsace.

Earlier this month, ICANN's Board of Directors officially adopted "the Applicant Guidebook, paving the way for the 2026 round", as reported by Abondance. In its announcement, ICANN explains that the guidebook "defines the requirements and procedures applicable to any entity applying for a gTLD".

The timeline is now tightening: "the Applicant Guidebook must be made available at least four months before the application window opens. The Board's adoption at the ICANN84 meeting means that the ICANN organization (ICANN org) is required to publish the guidebook no later than 30 December 2025".

At the OVHcloud Summit, we asked the teams in charge of domain names whether any support was planned for customers wanting to launch their own .brand TLD. The topic is being discussed internally, but nothing has been decided yet.

In the meantime, a non-final version of the guidebook is available at this address (a 440-page PDF). Afnic notes that the costs "certainly represent an upfront investment, but they must be weighed against the savings and benefits they generate": a $227,500 application fee for the initial filing in the first year, then from $25,000 per year, plus variable technical costs. Interested entities can contact Afnic for support.
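As a rough projection of the ICANN fees alone, here is a minimal sketch treating the $25,000 figure as a flat annual minimum from year two onward; this is an assumption for illustration only, and Afnic stresses that variable technical and registry-operation costs come on top:

    # Rough sketch of the ICANN fee schedule quoted by Afnic (illustrative only;
    # variable technical/registry costs are NOT included).
    APPLICATION_FEE_USD = 227_500   # one-time application fee, first year
    YEARLY_FEE_USD = 25_000         # "from $25,000 per year" afterwards (lower bound)

    def brand_tld_fees(years: int) -> int:
        """Cumulative ICANN fees after `years` years, using the quoted minimums."""
        if years < 1:
            return 0
        return APPLICATION_FEE_USD + YEARLY_FEE_USD * (years - 1)

    for y in (1, 3, 5):
        print(f"{y} year(s): ${brand_tld_fees(y):,}")
    # 1 year(s): $227,500
    # 3 year(s): $277,500
    # 5 year(s): $327,500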

  •  

☕️ Memtest86+ reaches version 8.00, with a dark mode

Memtest86+ is a name that will surely bring back memories for the less young among us. This small utility thoroughly tests a computer's memory. It returned to the spotlight in 2022 with a completely rewritten release: version 6.00, published in October 2022, followed by 7.00 in January 2024.

Now it is Memtest86+ 8.00's turn to arrive. The release notes are fairly sparse, simply mentioning support for the "latest Intel and AMD processors".

Fixes for DDR5 are included, along with temperature reporting. An optional dark mode is also available. All the details and downloads are available in the project's GitHub repository, or via the official site.

  •  

The world's fastest gaming graphics card, tested. Is it worth its €4,399?

In August 2025, ASUS made official a "30th anniversary" card of every extreme: the ASUS GeForce RTX 5090 ROG Matrix Platinum 32GB GDDR7 30th Anniversary Edition, to give it its full name. Equipped with 4 fans and a power limit pushed to 800 W thanks to a dual external power supply via...

  •  

'We Could've Asked ChatGPT': UK Students Fight Back Over Course Taught By AI

An anonymous reader shared this report from the Guardian: James and Owen were among 41 students who took a coding module at the University of Staffordshire last year, hoping to change careers through a government-funded apprenticeship programme designed to help them become cybersecurity experts or software engineers. But after a term of AI-generated slides being read, at times, by an AI voiceover, James said he had lost faith in the programme and the people running it, worrying he had "used up two years" of his life on a course that had been done "in the cheapest way possible".

"If we handed in stuff that was AI-generated, we would be kicked out of the uni, but we're being taught by an AI," said James during a confrontation with his lecturer recorded as a part of the course in October 2024. James and other students confronted university officials multiple times about the AI materials. But the university appears to still be using AI-generated materials to teach the course. This year, the university uploaded a policy statement to the course website appearing to justify the use of AI, laying out "a framework for academic professionals leveraging AI automation" in scholarly work and teaching...

For students, AI teaching appears to be less transformative than it is demoralising. In the US, students post negative online reviews about professors who use AI. In the UK, undergraduates have taken to Reddit to complain about their lecturers copying and pasting feedback from ChatGPT or using AI-generated images in courses. "I feel like a bit of my life was stolen," James told the Guardian (which also quotes an unidentified student saying they felt "robbed of knowledge and enjoyment"). But the article also points out that a survey last year of 3,287 higher-education teaching staff by edtech firm Jisc found that nearly a quarter were using AI tools in their teaching.

Read more of this story at Slashdot.

  •  

Napster Said It Raised $3 Billion From a Mystery Investor. But Now the 'Investor' and 'Money' Are Gone

An anonymous reader shared this report from Forbes: On November 20, at approximately 4 p.m. Eastern time, Napster held an online meeting for its shareholders; an estimated 700 of roughly 1,500 including employees, former employees and individual investors tuned in. That's when its CEO John Acunto told everyone he believed that the never-identified big investor — who the company had insisted put in $3.36 billion at a $12 billion valuation in January, which would have made it one of the year's biggest fundraises — was not going to come through. In an email sent out shortly after, it told existing investors that some would get a bigger percentage of the company, due to the canceled shares, and went on to describe itself as a "victim of misconduct," adding that it was "assisting law enforcement with their ongoing investigations." As for the promised tender offer, which would have allowed shareholders to cash out, that too was called off. "Since that investor was also behind the potential tender, we also no longer believe that will occur," the company wrote in the email.

At this point it seems unlikely that getting bigger stakes in the business will make any of the investors too happy. The company had been stringing its employees and investors along for nearly a year with ever-changing promises of an impending cash infusion and chances to sell their shares in a tender offer that would change everything. In fact, it was the fourth time since 2022 they've been told they could soon cash out via a tender offer, and the fourth time the potential deal fell through. Napster spokesperson Gillian Sheldon said certain statements about the fundraise "were made in good faith based on what we understood at the time. We have since uncovered indications of misconduct that suggest the information provided to us then was not accurate."

The article notes America's Department of Justice has launched an investigation (in which Napster is not a target), while the Securities and Exchange Commission has a separate ongoing investigation from 2022 into Napster's scrapped reverse merger. While Napster announced they'd been acquired for $207 million by a tech company named Infinite Reality, Forbes says that company faced "a string of lawsuits from creditors alleging unpaid bills, a federal lawsuit to enforce compliance with an SEC subpoena (now dismissed) and exaggerated claims about the extent of their partnerships with Manchester City Football Club and Google. The company also touted 'top-tier' investors who never directly invested in the firm," and its anonymous $3 billion investment that its spokesperson told Forbes in March was in "an Infinite Reality account and is available to us" and that they were "actively leveraging" it...

And by the end, "Napster appears to have been scrambling to raise cash to keep the lights on, working with brokers and investment advisors including a few who had previously gotten into trouble with regulators.... If it turns out that Napster knew the fundraise wasn't happening and it benefited from misrepresenting itself to investors or acquirees, it could face much bigger problems. That's because doing so could be considered securities fraud."

Read more of this story at Slashdot.

  •  

New Research Finds America's Top Social Media Sites: YouTube (84%), Facebook (71%), Instagram (50%)

Pew Research surveyed 5,022 Americans this year (between February 5 and June 18), asking them "do you ever use" YouTube, Facebook, and nine of the other top social media platforms. The results?

YouTube 84%
Facebook 71%
Instagram 50%
TikTok 37%
WhatsApp 32%
Reddit 26%
Snapchat 25%
X.com (formerly Twitter) 21%
Threads 8%
Bluesky 4%
Truth Social 3%

An announcement from Pew Research adds some trends and demographics: The Center has long tracked use of many of these platforms. Over the past few years, four of them have grown in overall use among U.S. adults — TikTok, Instagram, WhatsApp and Reddit. 37% of U.S. adults report using TikTok, which is slightly up from last year and up from 21% in 2021. Half of U.S. adults now report using Instagram, which is on par with last year but up from 40% in 2021. About a third say they use WhatsApp, up from 23% in 2021. And 26% today report using Reddit, compared with 18% four years ago. While YouTube and Facebook continue to sit at the top, the shares of Americans who report using them have remained relatively stable in recent years... YouTube and Facebook are the only sites asked about that a majority in all age groups use, though for YouTube, the youngest adults are still the most likely to do so. This differs from Facebook, where 30- to 49-year-olds most commonly say they use it (80%).

Other interesting statistics:

"More than half of women report using Instagram (55%), compared with under half of men (44%). Alternatively, men are more likely to report using platforms such as X and Reddit."

"Democrats and Democratic-leaning independents are more likely to report using WhatsApp, Reddit, TikTok, Bluesky and Threads."

Read more of this story at Slashdot.

  •  

Was the Moon-Forming Protoplanet 'Theia' a Neighbor of Earth?

Theia crashed into Earth and formed the Moon, the theory goes. But then where did Theia come from? The lead author on a new study says "The most convincing scenario is that most of the building blocks of Earth and Theia originated in the inner Solar System. Earth and Theia are likely to have been neighbors."

Though Theia was completely destroyed in the collision, scientists from the Max Planck Institute for Solar System Research led a team that was able to measure the ratio of tell-tale isotopes in Earth and Moon rocks, Euronews explains: The research team used rocks collected on Earth and samples brought back from the lunar surface by Apollo astronauts to examine their isotopes. These isotopes act like chemical fingerprints. Scientists already knew that Earth and Moon rocks are almost identical in their metal isotope ratios. That similarity, however, has made it hard to learn much about Theia, because it has been difficult to separate material from early Earth and material from the impactor.

The new research attempts a kind of planetary reverse engineering. By examining isotopes of iron, chromium, zirconium and molybdenum, the team modelled hundreds of possible scenarios for the early Earth and Theia, testing which combinations could produce the isotope signatures seen today. Because materials closer to the Sun formed under different temperatures and conditions than those further out, those isotopes exist in slightly different patterns in different regions of the Solar System. By comparing these patterns, researchers concluded that Theia most likely originated in the inner Solar System, even closer to the Sun than the early Earth.

The team published their findings in the journal Science. Its title? "The Moon-forming impactor Theia originated from the inner Solar System."
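As a toy illustration of the mass-balance reasoning behind this kind of modelling (the numbers below are made up for the example and are not values from the study), a two-endmember mixing model asks which combinations of proto-Earth and Theia isotope anomalies, weighted by Theia's mass fraction, could reproduce the signature measured in Earth and Moon rocks today:

    # Toy two-endmember isotope mixing sketch (illustrative values only, not from the paper).
    def mixed_anomaly(proto_earth: float, theia: float, theia_fraction: float) -> float:
        """Isotope anomaly of the mixture when Theia contributes `theia_fraction` of the mass."""
        return (1 - theia_fraction) * proto_earth + theia_fraction * theia

    # Hypothetical epsilon-unit anomalies for one isotope system:
    proto_earth, theia, f_theia = 0.10, -0.05, 0.4
    print(f"post-impact Earth anomaly = {mixed_anomaly(proto_earth, theia, f_theia):+.3f}")
    # Scanning many (proto_earth, theia, f_theia) combinations across several elements and
    # keeping those that match the measured Earth/Moon values is, in spirit, what the study did.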

Read more of this story at Slashdot.

  •  

Cryptologist DJB Criticizes Push to Finalize Non-Hybrid Security for Post-Quantum Cryptography

In October cryptologist/CS professor Daniel J. Bernstein alleged that America's National Security Agency (and its UK counterpart GCHQ) were attempting to influence NIST to adopt weaker post-quantum cryptography standards without a "hybrid" approach that would've also included pre-quantum ECC. Bernstein is of the opinion that "Given how many post-quantum proposals have been broken and the continuing flood of side-channel attacks, any competent engineering evaluation will conclude that the best way to deploy post-quantum [PQ] encryption for TLS, and for the Internet more broadly, is as double encryption: post-quantum cryptography on top of ECC."

But he says he's seen it playing out differently: By 2013, NSA had a quarter-billion-dollar-a-year budget to "covertly influence and/or overtly leverage" systems to "make the systems in question exploitable"; in particular, to "influence policies, standards and specification for commercial public key technologies". NSA is quietly using stronger cryptography for the data it cares about, but meanwhile is spending money to promote a market for weakened cryptography, the same way that it successfully created decades of security failures by building up the market for, e.g., 40-bit RC4 and 512-bit RSA and Dual EC.

I looked concretely at what was happening in IETF's TLS working group, compared to the consensus requirements for standards-development organizations. I reviewed how a call for "adoption" of an NSA-driven specification produced a variety of objections that weren't handled properly. ("Adoption" is a preliminary step before IETF standardization....) On 5 November 2025, the chairs issued "last call" for objections to publication of the document. The deadline for input is "2025-11-26", this coming Wednesday.

Bernstein also shares concerns about how the Internet Engineering Task Force is handling the discussion, and argues that the document is even "out of scope" for the IETF TLS working group: This document doesn't serve any of the official goals in the TLS working group charter. Most importantly, this document is directly contrary to the "improve security" goal, so it would violate the charter even if it contributed to another goal... Half of the PQ proposals submitted to NIST in 2017 have been broken already... often with attacks having sufficiently low cost to demonstrate on readily available computer equipment. Further PQ software has been broken by implementation issues such as side-channel attacks.

He's also concerned about how that discussion is being handled: On 17 October 2025, they posted a "Notice of Moderation for Postings by D. J. Bernstein" saying that they would "moderate the postings of D. J. Bernstein for 30 days due to disruptive behavior effective immediately" and specifically that my postings "will be held for moderation and after confirmation by the TLS Chairs of being on topic and not disruptive, will be released to the list"... I didn't send anything to the IETF TLS mailing list for 30 days after that. Yesterday [November 22nd] I finished writing up my new objection and sent that in. And, gee, after more than 24 hours it still hasn't appeared... Presumably the chairs "forgot" to flip the censorship button off after 30 days.

Thanks to alanw (Slashdot reader #1,822) for spotting the blog posts.
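To make the "double encryption" idea concrete: a hybrid key exchange derives the session secret from both an ECC exchange and a post-quantum KEM, so an attacker has to break both. Here is a minimal sketch, assuming the Python `cryptography` package for X25519 and HKDF and using a placeholder in place of a real ML-KEM implementation; it is an illustration of the principle, not the actual TLS mechanism under discussion:

    # Minimal hybrid (ECC + post-quantum) shared-secret sketch.
    # Assumes the 'cryptography' package; the ML-KEM step below is a stand-in, NOT a real KEM.
    import os
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives import hashes

    def mlkem_encapsulate_placeholder(public_key: bytes) -> tuple[bytes, bytes]:
        """Stand-in for ML-KEM encapsulation; a real deployment would use an actual PQ KEM."""
        shared = os.urandom(32)          # pretend shared secret
        ciphertext = os.urandom(1088)    # pretend ciphertext
        return shared, ciphertext

    # Classical ECC part: X25519 key agreement.
    client_ecc = X25519PrivateKey.generate()
    server_ecc = X25519PrivateKey.generate()
    ecc_secret = client_ecc.exchange(server_ecc.public_key())

    # Post-quantum part (placeholder).
    pq_secret, pq_ciphertext = mlkem_encapsulate_placeholder(b"server-pq-public-key")

    # Hybrid: both secrets feed one KDF, so breaking only one of them is not enough.
    session_key = HKDF(
        algorithm=hashes.SHA256(), length=32, salt=None, info=b"hybrid tls sketch",
    ).derive(ecc_secret + pq_secret)
    print(session_key.hex())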

Read more of this story at Slashdot.

  •  

Google Revisits JPEG XL in Chromium After Earlier Removal

"Three years ago, Google removed JPEG XL support from Chrome, stating there wasn't enough interest at the time," writes the blog Windows Report. "That position has now changed." In a recent note to developers, a Chrome team representative confirmed that work has restarted to bring JPEG XL to Chromium and said Google "would ship it in Chrome" once long-term maintenance and the usual launch requirements are met. The team explained that other platforms moved ahead. Safari supports JPEG XL, and Windows 11 users can add native support through an image extension from Microsoft Store. The format is also confirmed for use in PDF documents. There has been continuous demand from developers and users who ask for its return. Before Google ships the feature in Chrome, the company wants the integration to be secure and supported over time. A developer has submitted new code that reintroduces JPEG XL to Chromium. This version is marked as feature complete. The developer said it also "includes animation support," which earlier implementations did not offer.

Read more of this story at Slashdot.

  •  

Mozilla Announces 'TABS API' For Developers Building AI Agents

"Fresh from announcing it is building an AI browsing mode in Firefox and laying the groundwork for agentic interactions in the Firefox 145 release, the corp arm of Mozilla is now flexing its AI muscles in the direction of those more likely to care," writes the blog OMG Ubuntu: If you're a developer building AI agents, you can sign up to get early access to Mozilla's TABS API, a "powerful web content extraction and transformation toolkit designed specifically for AI agent builders"... The TABS API enables devs to create agents to automate web interactions, like clicking, scrolling, searching, and submitting forms "just like a human". Real-time feedback and adaptive behaviours will, Mozilla say, offer "full control of the web, without the complexity." As TABS is not powered by a Mozilla-backed LLM you'll need to connect it to your choice of third-party LLM for any relevant processing... Developers get 1,000 requests monthly on the free tier, which seems reasonable for prototyping personal projects. Complex agentic workloads may require more. Though pricing is yet to be locked in, the TABS API website suggests it'll cost ~$5 per 1000 requests. Paid plans will offer additional features too, like lower latency and, somewhat ironically, CAPTCHA solving so AI can 'prove' it's not a robot on pages gated to prevent automated activities. Google, OpenAI, and other major AI vendors offer their own agentic APIs. Mozilla is pitching up late, but it plans to play differently. It touts a "strong focus on data minimisation and security", with scraped data treated ephemerally — i.e., not kept. As a distinction, that matters. AI agents can be given complex online tasks that involve all sorts of personal or sensitive data being fetched and worked with.... If you're minded to make one, perhaps without a motivation to asset-strip the common good, Mozilla's TABS API look like a solid place to start.

Read more of this story at Slashdot.

  •  

One Company's Plan to Sink Nuclear Reactors Deep Underground

Long-time Slashdot reader jenningsthecat shared this article from IEEE Spectrum: By dropping a nuclear reactor 1.6 kilometers (1 mile) underground, Deep Fission aims to use the weight of a billion tons of rock and water as a natural containment system comparable to concrete domes and cooling towers. With the fission reaction occurring far below the surface, steam can safely circulate in a closed loop to generate power. The California-based startup announced in October that prospective customers had signed non-binding letters of intent for 12.5 gigawatts of power involving data center developers, industrial parks, and other (mostly undisclosed) strategic partners, with initial sites under consideration in Kansas, Texas, and Utah... The company says its modular approach allows multiple 15-megawatt reactors to be clustered on a single site: A block of 10 would total 150 MW, and Deep Fission claims that larger groupings could scale to 1.5 GW.

Deep Fission claims that using geological depth as containment could make nuclear energy cheaper, safer, and deployable in months at a fraction of a conventional plant's footprint... The company aims to finalize its reactor design and confirm the pilot site in the coming months. [Company founder Liz] Muller says the plan is to drill the borehole, lower the canister, load the fuel, and bring the reactor to criticality underground in 2026. Sites in Utah, Texas, and Kansas are among the leading candidates for the first commercial-scale projects, which could begin construction in 2027 or 2028, depending on the speed of DOE and NRC approvals. Deep Fission expects to start manufacturing components for the first unit in 2026 and does not anticipate major bottlenecks aside from typical long-lead items.

In short "The same oil and gas drilling techniques that reliably reach kilometer-deep wells can be adapted to host nuclear reactors..." the article points out. Their design would also streamline construction, since "Locating the reactors under a deep water column subjects them to roughly 160 atmospheres of pressure — the same conditions maintained inside a conventional nuclear reactor — which forms a natural seal to keep any radioactive coolant or steam contained at depth, preventing leaks from reaching the surface."

Other interesting points from the article:

They plan on operating and controlling the reactor remotely from the surface.
Company founder Muller says if an earthquake ever disrupted the site, "you seal it off at the bottom of the borehole, plug up the borehole, and you have your waste in safe disposal."
For waste management, the company "is eyeing deep geological disposal in the very borehole systems they deploy for their reactors."
"The company claims it can cut overall costs by 70 to 80 percent compared with full-scale nuclear plants."

"Among its competition are projects like TerraPower's Natrium," notes the tech news site Hackaday, saying TerraPower's fast neutron reactors "are already under construction and offer much more power per reactor, along with Natrium in particular also providing built-in grid-level storage."

"One thing is definitely for certain..." they add. "The commercial power sector in the US has stopped being mind-numbingly boring."
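The "roughly 160 atmospheres" figure quoted above follows from simple hydrostatics: pressure under a water column grows by about one atmosphere every 10 meters. A quick sanity check for a 1.6 km column, assuming fresh water at standard density (the actual column composition and geometry are the company's engineering detail):

    # Hydrostatic pressure at the bottom of a ~1.6 km water column (sanity check).
    RHO_WATER = 1000.0    # kg/m^3, fresh water (assumption)
    G = 9.81              # m/s^2
    DEPTH_M = 1600.0      # ~1 mile
    ATM_PA = 101_325.0    # 1 atmosphere in pascals

    pressure_pa = RHO_WATER * G * DEPTH_M + ATM_PA   # column weight plus surface atmosphere
    print(f"{pressure_pa / 1e6:.1f} MPa = {pressure_pa / ATM_PA:.0f} atm")
    # ~15.8 MPa, i.e. about 156 atm, in line with the "roughly 160 atmospheres" quoted above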

Read more of this story at Slashdot.

  •  

Could High-Speed Trains Shorten US Travel Times While Reducing Emissions?

With some animated graphics, CNN "reimagined" what three of America's busiest air and road travel routes would look like with high-speed trains, for "a glimpse into a faster, more connected future."

The journey from New York City to Chicago could take just over six hours by high-speed train at an average speed of 160 mph, cutting travel time by more than 13 hours compared with the current Amtrak route... The journey from San Francisco to Los Angeles could be completed in under three hours by high-speed train... The journey from Atlanta to Orlando could be completed in under three hours by high-speed train that reaches 160 mph, cutting travel time by over half compared with driving...

While high-speed rail remains a fantasy in the United States, it is already hugely successful across the globe. Passengers take 3 billion trips annually on more than 40,000 miles of modern high-speed railway across the globe, according to the International Union of Railways. China is home to the world's largest high-speed rail network. The 809-mile train journey from Beijing to Shanghai takes just four and a half hours... In Europe, France's Train à Grande Vitesse (TGV) is recognized as a pioneer of high-speed rail technology. Spain soon followed France's success and now hosts Europe's most extensive high-speed rail network...

[T]rain travel contributes relatively less pollution of every type, said Jacob Mason of the Institute for Transportation and Development Policy, from burning less gasoline to making less noise than cars and taking up less space than freeways. The reduction in greenhouse gas emissions is staggering: Per kilometer traveled, the average car or a short-haul flight each emit more than 30 times the CO2 equivalent of Eurostar high-speed trains, according to data from the UK government.
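The quoted times are consistent with simple distance-over-speed arithmetic. A small sketch of that check; the route lengths below are rough assumptions chosen for illustration, not CNN's exact figures:

    # time = distance / average speed, for the three routes CNN reimagined.
    # Route lengths are rough assumptions for illustration, not CNN's figures.
    AVG_SPEED_MPH = 160
    routes_miles = {
        "New York City - Chicago": 1000,      # ~1,000 rail miles -> just over 6 hours
        "San Francisco - Los Angeles": 440,   # ~440 miles -> under 3 hours
        "Atlanta - Orlando": 450,             # ~450 miles -> under 3 hours
    }
    for route, miles in routes_miles.items():
        hours = miles / AVG_SPEED_MPH
        print(f"{route}: {miles} mi at {AVG_SPEED_MPH} mph = {hours:.1f} h")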

Read more of this story at Slashdot.

  •  

Microsoft and GitHub Preview New Tool That Identifies, Prioritizes, and Fixes Vulnerabilities With AI

"Security, development, and AI now move as one," says Microsoft's director of cloud/AI security product marketing. Microsoft and GitHub "have launched a native integration between Microsoft Defender for Cloud and GitHub Advanced Security that aims to address what one executive calls decades of accumulated security debt in enterprise codebases..." according to The New Stack: The integration, announced this week in San Francisco at the Microsoft Ignite 2025 conference and now available in public preview, connects runtime intelligence from production environments directly into developer workflows. The goal is to help organizations prioritize which vulnerabilities actually matter and use AI to fix them faster. "Throughout my career, I've seen vulnerability trends going up into the right. It didn't matter how good of a detection engine and how accurate our detection engine was, people just couldn't fix things fast enough," said Marcelo Oliveira, VP of product management at GitHub, who has spent nearly a decade in application security. "That basically resulted in decades of accumulation of security debt into enterprise code bases." According to industry data, critical and high-severity vulnerabilities constitute 17.4% of security backlogs, with a mean time to remediation of 116 days, said Andrew Flick, senior director of developer services, languages and tools at Microsoft, in a blog post. Meanwhile, applications face attacks as frequently as once every three minutes, Oliveira said. The integration represents the first native link between runtime intelligence and developer workflows, said Elif Algedik, director of product marketing for cloud and AI security at Microsoft, in a blog post... The problem, according to Flick, comes down to three challenges: security teams drowning in alert fatigue while AI rapidly introduces new threat vectors that they have little time to understand; developers lacking clear prioritization while remediation takes too long; and both teams relying on separate, nonintegrated tools that make collaboration slow and frustrating... The new integration works bidirectionally. When Defender for Cloud detects a vulnerability in a running workload, that runtime context flows into GitHub, showing developers whether the vulnerability is internet-facing, handling sensitive data or actually exposed in production. This is powered by what GitHub calls the Virtual Registry, which creates code-to-runtime mapping, Flick said... In the past, this alert would age in a dashboard while developers worked on unrelated fixes because they didn't know this was the critical one, he said. Now, a security campaign can be created in GitHub, filtering for runtime risk like internet exposure or sensitive data, notifying the developer to prioritize this issue. GitHub Copilot "now automatically checks dependencies, scans for first-party code vulnerabilities and catches hardcoded secrets before code reaches developers," the article points out — but GitHub's VP of product management says this takes things even further. "We're not only helping you fix existing vulnerabilities, we're also reducing the number of vulnerabilities that come into the system when the level of throughput of new code being created is increasing dramatically with all these agentic coding agent platforms."

Read more of this story at Slashdot.

  •