Reading view

☕️ Quantum computing: Lucy and its 12 qubits move into the Très Grand Centre de Calcul

Quandela (a French quantum computer manufacturer), GENCI (Grand équipement national de calcul intensif) and the CEA (the French Alternative Energies and Atomic Energy Commission) announced late last week the delivery of Lucy, "a universal digital photonic quantum computer with 12 qubits". It is installed at the CEA's Très Grand Centre de Calcul (TGCC) in Essonne.

The CEA claims it is "the most powerful photonic quantum computer in the world", though it is worth remembering that many other technologies exist for building the qubits of quantum computers. The machine was acquired by "the EuroHPC Joint Undertaking as part of the EuroQCS-France consortium".

Quandela had already found a home at OVHcloud. The Roubaix-based host inaugurated MosaiQ, a Quandela quantum machine with two photonic qubits, in March 2024. It is scalable and can easily be upgraded to six qubits if needed, then to 12, and even to 24, although that last step requires installing a second machine. Quandela also set up shop in Canada in 2024, at Exaion.

OVHcloud and Quandela explained at the time that the two-qubit machine typically drew about 2.5 kW in operation, with 5 kW recommended because the computer needs more power at start-up. For more details on OVHcloud's quantum machine, see our earlier coverage.

The CEA highlights French and European expertise:

"Assembled in just twelve months at Quandela's industrial site, the system illustrates the strength of European collaboration. The cryogenic modules were designed by attocube systems AG near Munich, the quantum devices were fabricated on Quandela's pilot line in Palaiseau, and final integration was carried out at its factory in Massy. With 80% of its components (including all of the critical components) of European origin, Lucy embodies Europe's ability to design and deliver sovereign quantum technologies."

Access for European researchers is planned for early 2026.

  •  

☕️ OpenAI's Sora used to put racist slurs in the mouths of celebrities

Sora has already enabled deepfakes and racist and sexist depictions of historical figures. But OpenAI's app also makes it possible to create videos in which well-known people appear to utter racist slurs.

As Rolling Stone explains, deepfakes are starting to circulate on OpenAI's new social network (not officially available in France) in which American personalities such as the boxer and YouTuber Jake Paul utter racist slurs. Or rather, appear to utter them: the users who create these videos rely on the similar sound of certain words to get around the blocks.

[Article illustration: a skull, open at the top, serves as a pool for a man floating in a duck-shaped ring, against a blue background fading to black.]

For example, Jake Paul is shown in a supermarket shouting "I hate juice", which sounds very close to an antisemitic phrase, playing on the similarity between "juice" and "Jews". The video was generated by Sora on October 12 and is still online.

Similarly, another video shows Sam Altman shouting "I hate knitters", chosen for its resemblance to the racial slur "niggers".

Researchers at the firm Copyleaks, who spotted the phenomenon, explain: "this behavior illustrates an unsurprising trend in prompt-based circumvention, where users intentionally probe systems for weaknesses in content moderation. When combined with likenesses of recognizable people, these deepfakes become more viral and more harmful, spreading quickly across the platform and beyond (every video we reviewed could be downloaded, allowing it to be cross-posted elsewhere)."

  •  

Three new Peerless Assassin 120 Vision coolers from Thermalright

The Peerless Assassin range from Thermalright grows a little more with the arrival of three new Peerless Assassin 120 Vision references. The naming is straightforward: these are 120 mm dual-tower coolers with a screen on top, in a black finish with or without RGB, plus a white version with RGB. The 2.8-inch screen offers a 320 x 240 resolution and can display plenty of monitoring information through Thermalright's software, which allows extensive customization with themes. The only question left is which color to pick, since performance should in theory be identical. […]

Read more
  •  

The RTX 5080 FE now exists in a White version!

Yes, it is a personal creation and we are far from a commercial release, but the RTX 5080 FE White really does exist! The idea comes from comradelochenko, who shows off the result on Reddit: he repainted the central section and the fins in white with Duracoat paint, originally designed for firearms, which needs no primer and withstands heat. He had to fully disassemble his card, an opportunity to replace the thermal paste and thermal pads, and the operation was not easy despite his experience as an engineer... So, does this make you want a White version of the famous Founders Edition? […]

Read more
  •  

8BitDo NES40 Collection: more heavy hitters for compulsive collectors

8BitDo is very fond of commemorative series. The NES console, already celebrated before, is back in the spotlight with three products inspired by the colors of Nintendo's machine. And as always, there are some surprises... With a controller, a keyboard and a small portable speaker, here are three new items that could decorate a desk or a game room. Provided you have the means to treat yourself, as 8BitDo is in another world entirely when it comes to the keyboard... So let's start with that one! […]

Read more
  •  

Thermalright's Peerless Vision range moves to 240 mm, because you might want a big screen and a small radiator

After the Peerless Vision 360 comes the Peerless Vision 240, a smaller model as far as the radiator goes, but not the screen. Thermalright keeps its 3.95-inch, 480 x 480 display with a hefty software suite behind it and only shrinks the radiator, at least on paper. Indeed, the two 360 mm versions are replaced by a single option for the 240 mm, fitted with TL-M12QW fans. The reason is simple: the brand simply does not (yet?) have a 240 mm version of its TL-UB36W, which combines three fans in a single frame. […]

Read more
  •  

Mozilla to Require Data-Collection Disclosure in All New Firefox Extensions

"Mozilla is introducing a new privacy framework for Firefox extensions that will require developers to disclose whether their add-ons collect or transmit user data..." reports the blog Linuxiac: The policy takes effect on November 3, 2025, and applies to all new Firefox extensions submitted to addons.mozilla.org. According to Mozilla's announcement, extension developers must now include a new key in their manifest.json files. This key specifies whether an extension gathers any personal data. Even extensions that collect nothing must explicitly state "none" in this field to confirm that no data is being collected or shared. This information will be visible to users at multiple points: during the installation prompt, on the extension's listing page on addons.mozilla.org, and in the Permissions and Data section of Firefox's about:addons page. In practice, this means users will be able to see at a glance whether a new extension collects any data before they install it.

Read more of this story at Slashdot.

  •  

Yet another Ryzen X3D is reportedly on the way, and it is not a Granite Ridge!

Rumors about AMD's upcoming CPUs are running wild at the moment. Market, supply, demand: the reasons that would push the firm to multiply its processors are of the same kind as the never-solved chicken-and-egg riddle. There are surely some die-recycling stories in the mix, and sometimes the marketing folks...

  •  

Microsoft Disables Preview In File Explorer To Block Attacks

Slashdot reader joshuark writes: Microsoft says that File Explorer (formerly Windows Explorer) now automatically blocks previews for files downloaded from the Internet to prevent credential-theft attacks via malicious documents, according to a report from BleepingComputer. This attack vector is particularly concerning because it requires no user interaction beyond selecting a file to preview and removes the need to trick a target into actually opening or executing it on their system. For most users, no action is required since the protection is enabled automatically with the October 2025 security update, and existing workflows remain unaffected unless you regularly preview downloaded files. "This change is designed to enhance security by preventing a vulnerability that could leak NTLM hashes when users preview potentially unsafe files," Microsoft says in a support document published Wednesday. It is important to note that this may not take effect immediately and could require signing out and signing back in.
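For context, Windows tags files saved from the Internet with the "Mark of the Web": a Zone.Identifier alternate data stream whose ZoneId value of 3 denotes the Internet zone, which is the flag Explorer consults when treating a file as downloaded. The Python sketch below only illustrates how that flag can be inspected; it is not Microsoft's implementation of the new preview block, and the sample path is hypothetical.

    import os

    def is_marked_from_internet(path: str) -> bool:
        """Return True if the file carries Windows' Mark of the Web (Internet zone)."""
        # Downloaded files get a Zone.Identifier alternate data stream on NTFS;
        # ZoneId=3 means the file came from the Internet zone. On non-Windows
        # systems, or for unmarked files, the stream simply does not exist.
        try:
            with open(path + ":Zone.Identifier", "r", encoding="utf-8", errors="ignore") as ads:
                return "ZoneId=3" in ads.read()
        except OSError:
            return False

    sample = os.path.expanduser("~/Downloads/report.docx")  # hypothetical file
    print(sample, "-> marked as downloaded:", is_marked_from_internet(sample))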

Read more of this story at Slashdot.

  •  

California Colleges Test AI Partnerships. Critics Complain It's Risky and Wasteful

America's largest university system, with 460,000 students, is the 22-campus "Cal State" system, reports the New York Times. And it's recently teamed with Amazon, OpenAI and Nvidia, hoping to embed chatbots in both teaching and learning to become what it says will be America's "first and largest AI-empowered" university — and prepare students for "increasingly AI-driven" careers. It's part of a trend of major universities inviting tech companies into "a much bigger role as education thought partners, AI instructors and curriculum providers," argues the New York Times, where "dominant tech companies are now helping to steer what an entire generation of students learn about AI, and how they use it — with little rigorous evidence of educational benefits and mounting concerns that chatbots are spreading misinformation and eroding critical thinking..." "Critics say Silicon Valley's effort to make AI chatbots integral to education amounts to a mass experiment on young people."

As part of the effort, [Cal State] is paying OpenAI $16.9 million to provide ChatGPT Edu, the company's tool for schools, to more than half a million students and staff — which OpenAI heralded as the world's largest rollout of ChatGPT to date. Cal State also set up an AI committee, whose members include representatives from a dozen large tech companies, to help identify the skills California employers need and improve students' career opportunities... Cal State is not alone. Last month, California Community Colleges, the nation's largest community college system, announced a collaboration with Google to supply the company's "cutting edge AI tools" and training to 2.1 million students and faculty. In July, Microsoft pledged $4 billion for teaching AI skills in schools, community colleges and to adult workers...

[A]s schools like Cal State work to usher in what they call an "AI-driven future," some researchers warn that universities risk ceding their independence to Silicon Valley. "Universities are not tech companies," Olivia Guest and Iris van Rooij, two computational cognitive scientists at Radboud University in the Netherlands, recently said in comments arguing against fast AI adoption in academia. "Our role is to foster critical thinking," the researchers said, "not to follow industry trends uncritically...."

Some faculty members have pushed back against the AI effort, as the university system faces steep budget cuts. The multimillion-dollar deal with OpenAI — which the university did not open to bidding from rivals like Google — was wasteful, they added. Faculty senates on several Cal State campuses passed resolutions this year criticizing the AI initiative, saying the university had failed to adequately address students using chatbots to cheat. Professors also said administrators' plans glossed over the risks of AI to students' critical thinking and ignored troubling industry labor practices and environmental costs. Martha Kenney, a professor of women and gender studies at San Francisco State University, described the AI program as a Cal State marketing vehicle helping tech companies promote unproven chatbots as legitimate educational tools.

The article notes that Cal State's chief information officer "defended the OpenAI deal, saying the company offered ChatGPT Edu at an unusually low price." Still, California's community college system landed AI chatbot services from Google for more than 2 million students and faculty — nearly four times the number of users Cal State is paying OpenAI for — for free.

Read more of this story at Slashdot.

  •  

Linux 6.18-rc3 Released With Latest Fixes

The Linux 6.18-rc3 kernel is now available for testing as work continues toward the stable Linux 6.18 release, due in about one month. Linux 6.18 is expected to become this year's Long Term Support "LTS" kernel...
  •  

GM Plans to Drop Apple CarPlay and Android Auto From All Its Cars

GM plans to drop Apple CarPlay and Android Auto from all its new vehicles "in the near future," reports the Verge. In an episode of the Verge's Decoder podcast, GM CEO Mary Barra confirmed the upcoming change to "phone projections" for GM cars: The timing is unclear, but Barra pointed to a major rollout of what the company is calling a new centralized computing platform, set to launch in 2028, that will involve eventually transitioning its entire lineup to a unified in-car experience. In place of phone projection, GM is working to update its current Android-powered infotainment implementation with a Google Gemini-powered assistant and an assortment of other custom apps, built both in-house and with partners. GM's 2023 decision to drop CarPlay and Android Auto support in its EVs has proved controversial, though for now GM has maintained support for phone projection in its gas-powered vehicles.

Read more of this story at Slashdot.

  •  

Some US Electricity Prices are Rising -- But It's Not Just Data Centers

North Dakota experienced an almost 40% increase in electricity demand "thanks in part to an explosion of data centers," reports the Washington Post. Yet the state saw a 1% drop in its per-kilowatt-hour rates. "A new study from researchers at Lawrence Berkeley National Laboratory and the consulting group Brattle suggests that, counterintuitively, more electricity demand can actually lower prices..."

Between 2019 and 2024, the researchers calculated, states with spikes in electricity demand saw lower prices overall. Instead, they found that the biggest factors behind rising rates were the cost of poles, wires and other electrical equipment — as well as the cost of safeguarding that infrastructure against future disasters... [T]he largest costs are fixed costs — that is, maintaining the massive system of poles and wires that keeps electricity flowing. That system is getting old and is under increasing pressure from wildfires, hurricanes and other extreme weather. More power customers, therefore, means more ways to divvy up those fixed costs. "What that means is you can then take some of those fixed infrastructure costs and end up spreading them around more megawatt-hours that are being sold — and that can actually reduce rates for everyone," said Ryan Hledik [principal at Brattle and a member of the research team]...

[T]he new study shows that the costs of operating and installing wind, natural gas, coal and solar have been falling over the past 20 years. Since 2005, generation costs have fallen by 35 percent, from $234 billion to $153 billion. But the costs of the huge wires that transmit that power across the grid, and the poles and wires that deliver that electricity to customers, are skyrocketing. In the past two decades, transmission costs nearly tripled; distribution costs more than doubled. Part of that trend is from the rising costs of parts: the price of transformers and wires, for example, has far outpaced inflation over the past five years. At the same time, U.S. utilities haven't been on top of replacing power poles and lines in the past, and are now trying to catch up. According to another report from Brattle, utilities are already spending more than $10 billion a year replacing aging transmission lines. And finally, escalating extreme-weather events are knocking out local lines, forcing utilities to spend big on repairs. Last year, Hurricane Beryl decimated Houston's power grid, forcing months of costly repairs. The threat of wildfires in the West, meanwhile, is making utilities spend billions on burying power lines. According to the Lawrence Berkeley study, about 40 percent of California's electricity price increase over the last five years was due to wildfire-related costs.

Yet the researchers tell the Washington Post that prices could still increase if utilities have to quickly build more infrastructure just to handle data centers. But their point is "This is a much more nuanced issue than just, 'We have a new data center, so rates will go up.'" As the article points out, "Generous subsidies for rooftop solar also increased rates in certain states, mostly in places such as California and Maine... If customers install rooftop solar panels, demand for electricity shrinks, spreading those fixed costs over a smaller set of consumers."
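The fixed-cost argument is easy to see with toy numbers. The Python sketch below uses entirely made-up figures (not from the study): a constant poles-and-wires bill plus a per-MWh generation cost is divided over different levels of annual demand, and the average per-kWh rate falls as more energy is sold.

    def rate_per_kwh(fixed_costs_usd: float, gen_cost_per_mwh: float, demand_mwh: float) -> float:
        """Average retail rate when fixed grid costs are spread over total demand."""
        total_cost = fixed_costs_usd + gen_cost_per_mwh * demand_mwh
        return total_cost / (demand_mwh * 1000)  # convert MWh to kWh

    # Illustrative utility: $500M/year of fixed poles-and-wires costs, $40/MWh generation.
    FIXED = 500e6
    GENERATION = 40.0

    for demand in (8e6, 10e6, 12e6):  # annual sales in MWh
        print(f"{demand / 1e6:.0f} TWh sold -> {rate_per_kwh(FIXED, GENERATION, demand) * 100:.1f} cents/kWh")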

Read more of this story at Slashdot.

  •  

Does Generative AI Threaten the Open Source Ecosystem?

"Snippets of proprietary or copyleft reciprocal code can enter AI-generated outputs, contaminating codebases with material that developers can't realistically audit or license properly." That's the warning from Sean O'Brien, who founded the Yale Privacy Lab at Yale Law School. ZDNet reports: Open software has always counted on its code being regularly replenished. As part of the process of using it, users modify it to improve it. They add features and help to guarantee usability across generations of technology. At the same time, users improve security and patch holes that might put everyone at risk. But O'Brien says, "When generative AI systems ingest thousands of FOSS projects and regurgitate fragments without any provenance, the cycle of reciprocity collapses. The generated snippet appears originless, stripped of its license, author, and context." This means the developer downstream can't meaningfully comply with reciprocal licensing terms because the output cuts the human link between coder and code. Even if an engineer suspects that a block of AI-generated code originated under an open source license, there's no feasible way to identify the source project. The training data has been abstracted into billions of statistical weights, the legal equivalent of a black hole. The result is what O'Brien calls "license amnesia." He says, "Code floats free of its social contract and developers can't give back because they don't know where to send their contributions...." "Once AI training sets subsume the collective work of decades of open collaboration, the global commons idea, substantiated into repos and code all over the world, risks becoming a nonrenewable resource, mined and never replenished," says O'Brien. "The damage isn't limited to legal uncertainty. If FOSS projects can't rely upon the energy and labor of contributors to help them fix and improve their code, let alone patch security issues, fundamentally important components of the software the world relies upon are at risk." O'Brien says, "The commons was never just about free code. It was about freedom to build together." That freedom, and the critical infrastructure that underlies almost all of modern society, is at risk because attribution, ownership, and reciprocity are blurred when AIs siphon up everything on the Internet and launder it (the analogy of money laundering is apt), so that all that code's provenance is obscured.

Read more of this story at Slashdot.

  •  

Can YouTube Replace 'Traditional' TV?

Can YouTube capture the hours people spend watching "traditional" TV? YouTube's CEO recently said its viewership on TV sets has "surpassed mobile and is now the primary device for YouTube viewing in the U.S.," writes The Hollywood Reporter. And YouTube is shelling out big money to stay on top:

It's come a long way since the 19-second "me at the zoo" video was uploaded in April 2005. Now, per a KPMG report released Sept. 23, YouTube is second only to Comcast in terms of annual content spend, inclusive of payments to creators and media companies, paying out as much as Netflix and Paramount combined, $32 billion... The only question is what genres it will take over next, and how quickly it will do so. From talk shows to scripted dramas to, yes, live sports, there are signs that the platform's ambitions will collide with the traditional TV business sooner rather than later...

YouTube has slowly, then all at once, become the de facto home for what had been late night, not only for the shows on linear TV, but for an emerging crop of new talent born on the platform. As it happens, late night itself transformed YouTube when the Saturday Night Live skit "Lazy Sunday" went viral 20 years ago on the platform, which had only been live for a few months... As consumer preferences collide with a burgeoning ecosystem of video podcasts (YouTube now claims more than 1 billion podcast users monthly), the world of late night, and for that matter TV talk shows more generally, increasingly revolves around the platform. One current late night producer says that almost every A-list booking now includes some sort of sketch or bit that they think will play well on YouTube, but booking those guests in the first place has become less of a sure thing. A veteran Hollywood publicist says that for many of their clients, they are now recommending that YouTube podcasts or shows become the first stop, or at least a major stop, on press tours...

Nielsen has been tracking the streaming platforms that consumers watch on their TV screens ever since it launched what it calls The Gauge in 2021. But over the past year, YouTube's domination of The Gauge has unnerved executives at some competitors. The most recent Gauge report showed that YouTube was by far the most watched video platform, holding a 13.1 percent share. Netflix, in second place, was at 8.7 percent.

The article suggests YouTube's last challenge may be "scripted" entertainment, where its business model is different from Netflix's or HBO's. "On YouTube, it is up to the creator to finance and produce their content, and while the platform regularly releases new tools to help them (including AI-enabled tech that suggests video ideas and can create short background videos for use in Shorts), scripted entertainment is a particularly tricky challenge, requiring writers, directors, sets, costumes, lighting, editing, special effects and other production requirements that may go beyond the typical creator-led show."

Read more of this story at Slashdot.

  •  

Bill Gates-Backed 345 MWe Advanced Nuclear Reactor Secures Crucial US Approval

Long-time Slashdot reader schwit1 shares this article from Interesting Engineering:

Bill Gates-backed TerraPower's innovative Natrium reactor project in Wyoming has cleared a critical federal regulatory hurdle. The US Nuclear Regulatory Commission (NRC) has completed its final Environmental Impact Statement (EIS) for the project, known as Kemmerer Unit 1, and found no adverse impacts that would block its construction. The commission officially recommended that a construction permit be issued to TerraPower subsidiary USO for the facility in Lincoln County. This announcement marks a significant milestone, making the Natrium project the first advanced commercial nuclear power plant in the country to complete this rigorous environmental review process...

The first-of-a-kind design utilizes an 840 MW (thermal) pool-type reactor connected to a molten salt-based energy storage system. This storage technology is the plant's most distinctive feature. It is designed to keep the base output steady, ensuring constant reliability, but it also allows the plant to function like a massive battery. The system can store heat and boost the plant's output to 500 MWe when demand peaks, allowing it to ramp up power quickly to support the grid. TerraPower says it is the only advanced reactor design with this capability.

The Natrium plant is designed to replace electricity generation capacity following the planned retirement of existing coal-fired facilities in the region. While the regulatory process for the nuclear components continues, construction on the non-nuclear portions of the site began in June 2024. When completed, the Natrium plant is poised to be the first utility-scale advanced nuclear power plant in the United States.

The next step for the construction permit application is a final safety evaluation, which is anticipated by December 31, 2025, according to an announcement from TerraPower, which notes that the project is being developed through a public-private partnership with the U.S. Energy Department. "When completed, the Natrium plant will be the first utility-scale advanced nuclear power plant in the United States."
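Back of the envelope, the quoted figures imply a thermal-to-electric conversion of roughly 40%, and the storage boost corresponds to a sizeable buffer of heat. A quick Python sketch using the numbers in the article; the boost duration is an assumption added here for illustration, not a figure from the piece.

    # Figures quoted in the article.
    THERMAL_MW = 840    # reactor thermal output (MWt)
    BASE_MWE = 345      # nominal electrical output (MWe)
    BOOST_MWE = 500     # peak electrical output with molten-salt storage (MWe)

    efficiency = BASE_MWE / THERMAL_MW
    print(f"Implied thermal-to-electric efficiency: {efficiency:.0%}")  # ~41%

    # Assumed boost duration, for illustration only.
    BOOST_HOURS = 5.5
    extra_energy_mwh = (BOOST_MWE - BASE_MWE) * BOOST_HOURS
    print(f"Energy delivered from storage during one boost: {extra_energy_mwh:.0f} MWh")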

Read more of this story at Slashdot.

  •  

Is AI Responsible for Job Cuts - Or Just a Good Excuse?

Has AI just become an easy excuse for firms looking to downsize? asks CNBC:

Fabian Stephany, assistant professor of AI and work at the Oxford Internet Institute, said there might be more to the job cuts than meets the eye. Previously there may have been some stigma attached to using AI, but now companies are "scapegoating" the technology to take the fall for challenging business moves such as layoffs. "I'm really skeptical whether the layoffs that we see currently are really due to true efficiency gains. It's rather really a projection into AI in the sense of 'We can use AI to make good excuses,'" Stephany said in an interview with CNBC. Companies can essentially position themselves at the frontier of AI technology to appear innovative and competitive, and simultaneously conceal the real reasons for layoffs, according to Stephany... Some companies that flourished during the pandemic "significantly overhired" and the recent layoffs might just be a "market clearance...."

One founder, Jean-Christophe Bouglé, even said in a popular LinkedIn post that AI adoption is proceeding at a "much slower pace" than is being claimed and that in large corporations "there's not much happening," with AI projects even being rolled back due to cost or security concerns. "At the same time there are announcements of big layoff plans 'because of AI.' It looks like a big excuse, in a context where the economy in many countries is slowing down..."

The Budget Lab, a non-partisan policy research center at Yale University, released a report on Wednesday which showed that U.S. labor has actually been little disrupted by AI automation since the release of ChatGPT in 2022... Additionally, New York Fed economists released research in early September whose findings "do not point to significant reductions in employment" across the services and manufacturing industries in the New York-Northern New Jersey region.

Read more of this story at Slashdot.

  •