Linux Foundation Announces Launch of 'High Performance Software Foundation'

By: EditorDavid
May 19, 2024 at 23:59
This week the nonprofit Linux Foundation announced the launch of the High Performance Software Foundation, which "aims to build, promote, and advance a portable core software stack for high performance computing" (or HPC) by "increasing adoption, lowering barriers to contribution, and supporting development efforts." It promises initiatives focused on "continuously built, turnkey software stacks," as well as other initiatives including architecture support and performance regression testing.

Its first open source technical projects are:

- Spack: the HPC package manager.
- Kokkos: a performance-portable programming model for writing modern C++ applications in a hardware-agnostic way.
- Viskores (formerly VTK-m): a toolkit of scientific visualization algorithms for accelerator architectures.
- HPCToolkit: performance measurement and analysis tools for computers ranging from desktop systems to GPU-accelerated supercomputers.
- Apptainer: formerly known as Singularity, a Linux Foundation project providing a high performance, full featured HPC and computing optimized container subsystem.
- E4S: a curated, hardened distribution of scientific software packages.

As use of HPC becomes ubiquitous in scientific computing and digital engineering, and AI use cases multiply, more and more data centers deploy GPUs and other compute accelerators. The High Performance Software Foundation will provide a neutral space for pivotal projects in the high performance computing ecosystem, enabling industry, academia, and government entities to collaborate on scientific software.

The High Performance Software Foundation benefits from strong support across the HPC landscape, including Premier Members Amazon Web Services (AWS), Hewlett Packard Enterprise, Lawrence Livermore National Laboratory, and Sandia National Laboratories; General Members AMD, Argonne National Laboratory, Intel, Kitware, Los Alamos National Laboratory, NVIDIA, and Oak Ridge National Laboratory; and Associate Members University of Maryland, University of Oregon, and Centre for Development of Advanced Computing.

In a statement, an AMD vice president said that by joining, "we are using our collective hardware and software expertise to help develop a portable, open-source software stack for high-performance computing across industry, academia, and government." And an AWS executive said the high-performance computing community "has a long history of innovation being driven by open source projects. AWS is thrilled to join the High Performance Software Foundation to build on this work. In particular, AWS has been deeply involved in contributing upstream to Spack, and we're looking forward to working with the HPSF to sustain and accelerate the growth of key HPC projects so everyone can benefit."

The new foundation will "set up a technical advisory committee to manage working groups tackling a variety of HPC topics," according to the announcement, following a governance model based on the Cloud Native Computing Foundation.
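Of the projects above, Kokkos is the one that is itself a programming model, so a short example may make "performance-portable" concrete. The sketch below is an illustration of mine, not code from the announcement: the same C++ source runs on CPU threads or GPUs depending on which backend Kokkos was built against.

```cpp
// Minimal Kokkos sketch: a hardware-agnostic parallel dot product.
// The same source targets CPU threads or a GPU, chosen when Kokkos is built.
#include <Kokkos_Core.hpp>
#include <cstdio>

int main(int argc, char* argv[]) {
  Kokkos::initialize(argc, argv);  // selects the backend configured at build time
  {
    const int n = 1'000'000;
    Kokkos::View<double*> x("x", n), y("y", n);

    // Fill both vectors in parallel on whatever device Kokkos targets.
    Kokkos::parallel_for("fill", n, KOKKOS_LAMBDA(const int i) {
      x(i) = 1.0;
      y(i) = 2.0;
    });

    // Reduce on the same device; Kokkos handles host threads or GPUs alike.
    double dot = 0.0;
    Kokkos::parallel_reduce("dot", n,
        KOKKOS_LAMBDA(const int i, double& partial) { partial += x(i) * y(i); },
        dot);

    std::printf("dot = %f\n", dot);  // expected: 2000000.0
  }
  Kokkos::finalize();
  return 0;
}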

Read more of this story at Slashdot.

Intel Aurora Supercomputer Breaks Exascale Barrier

By: BeauHD
May 14, 2024 at 02:02
Josh Norem reports via ExtremeTech: At the recent ISC 2024 international supercomputing conference, Intel's newest Aurora supercomputer, installed at Argonne National Laboratory, raised a few eyebrows by finally surpassing the exascale barrier. Before this, only AMD's Frontier system had been able to achieve this level of performance. Intel also achieved what it says is the world's best performance for AI, at 10.61 "AI exaflops." Intel reported the news on its blog, stating Aurora was now officially the fastest supercomputer for AI in the world. It shares the distinction with Argonne National Laboratory and Hewlett Packard Enterprise (HPE), which together built and house the system; Intel says Aurora was running at 87% functionality for the recent tests.

In the all-important LINPACK (HPL) test, Aurora hit 1.012 exaflops, meaning it has almost doubled the performance on tap since its initial "partial run" in late 2023, when it hit just 585.34 petaflops. The company said then that it expected Aurora to cross the exascale barrier eventually, and now it has. Intel says that for the ISC 2024 tests, Aurora was operating with 9,234 nodes. The company notes it ranked second overall in LINPACK, meaning it's still unable to dethrone AMD's Frontier system, which is also an HPE supercomputer. AMD's Frontier was the first supercomputer to break the exascale barrier, in June 2022. Frontier sits at around 1.2 exaflops in LINPACK, so Intel is knocking on its door but still has a way to go before it can topple it. However, Intel says Aurora came in first in the LINPACK-mixed benchmark, reportedly highlighting its unparalleled AI performance.

Aurora uses Intel's latest CPU and GPU hardware, with 21,248 Sapphire Rapids Xeon CPUs and 63,744 Ponte Vecchio GPUs. When it's fully operational later this year, Intel believes the system will eventually be capable of crossing the 2-exaflop barrier.
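For context on where a figure like "1.012 exaflops" comes from: HPL times the solution of a single dense n-by-n linear system and credits it with the conventional operation count of 2n^3/3 + 2n^2 floating-point operations, so the score follows directly from the problem size and the wall-clock time. The sketch below shows that arithmetic; the problem size and runtime are invented for illustration, not Aurora's actual run parameters.

```cpp
// Illustrative HPL arithmetic: deriving a LINPACK score from a timed solve.
// The 2n^3/3 + 2n^2 operation count is the standard HPL convention; the
// matrix dimension and runtime below are made up for this example.
#include <cstdio>

int main() {
  const double n = 2.0e7;          // hypothetical HPL matrix dimension
  const double seconds = 5300.0;   // hypothetical wall-clock solve time
  const double flops = (2.0 / 3.0) * n * n * n + 2.0 * n * n;
  const double exaflops = flops / seconds / 1.0e18;
  std::printf("HPL score: %.3f exaflops\n", exaflops);  // ~1.006 here
  return 0;
}
```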

Read more of this story at Slashdot.

Defense Think Tank MITRE To Build AI Supercomputer With Nvidia

By: BeauHD
May 8, 2024 at 03:30
An anonymous reader quotes a report from the Washington Post: A key supplier to the Pentagon and U.S. intelligence agencies is building a $20 million supercomputer with buzzy chipmaker Nvidia to speed deployment of artificial intelligence capabilities across the U.S. federal government, the MITRE think tank said Tuesday. MITRE, a federally funded, not-for-profit research organization that has supplied U.S. soldiers and spies with exotic technical products since the 1950s, says the project could improve everything from Medicare to taxes. "There's huge opportunities for AI to make government more efficient," said Charles Clancy, senior vice president of MITRE. "Government is inefficient, it's bureaucratic, it takes forever to get stuff done. ... That's the grand vision, is how do we do everything from making Medicare sustainable to filing your taxes easier?" [...]

The MITRE supercomputer will be based in Ashburn, Va., and should be up and running late this year. [...] Clancy said the planned supercomputer will run 256 Nvidia graphics processing units, or GPUs, at a cost of $20 million. This counts as a small supercomputer: The world's fastest supercomputer, Frontier in Tennessee, boasts 37,888 GPUs, and Meta is seeking to build one with 350,000 GPUs. But MITRE's computer will still eclipse the 68 GPUs of Stanford's Natural Language Processing Group, and will be large enough to train large language models to perform AI tasks tailored for government agencies. Clancy said all federal agencies funding MITRE will be able to use this AI "sandbox."

"AI is the tool that is solving a wide range of problems," Clancy said. "The U.S. military needs to figure out how to do command and control. We need to understand how cryptocurrency markets impact the traditional banking sector. ... Those are the sorts of problems we want to solve."

Read more of this story at Slashdot.

Europe Plans To Build 100-Qubit Quantum Computer By 2026

By: BeauHD
April 27, 2024 at 03:30
An anonymous reader quotes a report published last week by Physics World: Researchers at the Dutch quantum institute QuTech in Delft have announced plans to build Europe's first 100-quantum bit (qubit) quantum computer. When complete in 2026, the device will be made publicly available, providing scientists with a tool for quantum calculations and simulations. The project is funded by the Dutch umbrella organization Quantum Delta NL via the European OpenSuperQPlus initiative, which has 28 partners from 10 countries. Part of the 10-year, €1 billion European Quantum Flagship program, OpenSuperQPlus aims to build a 100-qubit superconducting quantum processor as a stepping stone to an eventual 1000-qubit European quantum computer.

Quantum Delta NL says the 100-qubit quantum computer will be made publicly available via a cloud platform, as an extension of the existing Quantum Inspire platform that first came online in 2020. It currently includes a two-qubit processor of spin qubits in silicon, as well as a five-qubit processor based on superconducting qubits. Quantum Inspire is currently focused on training and education, but the upgrade to 100 qubits is expected to allow research into quantum computing. QuTech lead researcher Leonardo DiCarlo believes the R&D cycle has "come full circle": academic research first enabled spin-off companies to grow, and now their products are being used to accelerate academic research.

Read more of this story at Slashdot.

New Advances Promise Secure Quantum Computing At Home

By: BeauHD
April 12, 2024 at 10:00
Scientists from Oxford University Physics have demonstrated a breakthrough in cloud-based quantum computing that could allow it to be harnessed by millions of individuals and companies. The findings have been published in the journal Physical Review Letters. Phys.Org reports: In the new study, the researchers use an approach dubbed "blind quantum computing," which connects two totally separate quantum computing entities -- potentially an individual at home or in an office accessing a cloud server -- in a completely secure way. Importantly, their new methods could be scaled up to large quantum computations.

"Using blind quantum computing, clients can access remote quantum computers to process confidential data with secret algorithms and even verify the results are correct, without revealing any useful information. Realizing this concept is a big step forward in both quantum computing and keeping our information safe online," said study lead Dr. Peter Drmota of Oxford University Physics.

The researchers created a system comprising a fiber network link between a quantum computing server and a simple device detecting photons, or particles of light, at an independent computer remotely accessing its cloud services. This allows so-called blind quantum computing over a network. Each computation incurs a correction that must be applied to all subsequent ones, which requires real-time information to keep pace with the algorithm; the researchers used a unique combination of quantum memory and photons to achieve this. The results could ultimately lead to commercial development of devices that plug into laptops to safeguard data when people are using quantum cloud computing services.

"We have shown for the first time that quantum computing in the cloud can be accessed in a scalable, practical way which will also give people complete security and privacy of data, plus the ability to verify its authenticity," said Professor David Lucas, who co-heads the Oxford University Physics research team and is lead scientist at the UK Quantum Computing and Simulation Hub, led from Oxford University Physics.
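The article doesn't spell out how the hiding works. In the standard construction this field builds on (universal blind quantum computation, due to Broadbent, Fitzsimons, and Kashefi), the client masks each measurement angle with secret random offsets so that the instructions the server sees are uniformly random; in the full protocol the angles are additionally adjusted in real time based on earlier measurement outcomes, which is the correction the article mentions. The toy sketch below illustrates only the classical masking bookkeeping under those assumptions: it simulates no qubits and is not the Oxford group's protocol.

```cpp
// Toy illustration of the angle-masking idea behind blind quantum computing
// (in the spirit of the Broadbent-Fitzsimons-Kashefi protocol). Classical
// bookkeeping only -- no qubits are simulated.
#include <cstdio>
#include <random>

int main() {
  std::mt19937 rng(std::random_device{}());
  std::uniform_int_distribution<int> eighth(0, 7);  // angles in units of pi/4
  std::uniform_int_distribution<int> bit(0, 1);

  const int steps = 5;
  const int phi[steps] = {0, 2, 4, 6, 1};  // client's secret measurement angles

  for (int j = 0; j < steps; ++j) {
    const int theta = eighth(rng);  // secret rotation baked into the sent qubit
    const int r = bit(rng);         // secret flip of the reported outcome
    // The server only ever sees delta = phi + theta + r*pi (mod 2*pi).
    // Because theta and r are secret and uniform, delta is uniform too,
    // so the instruction reveals nothing about phi (a one-time pad on angles).
    const int delta = (phi[j] + theta + 4 * r) % 8;
    std::printf("step %d: secret phi = %d*pi/4, server sees delta = %d*pi/4\n",
                j, phi[j], delta);
  }
  return 0;
}
```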

Read more of this story at Slashdot.
