
Qualcomm Intros Snapdragon X Plus, Details Complete Snapdragon X Launch Day Chip Stack

April 24, 2024 at 13:15

As Qualcomm prepares for the mid-year launch of their forthcoming Snapdragon X SoCs for PCs, and the eagerly anticipated Oryon CPU cores within, the company is finally shoring up their official product plans, and releasing some additional technical details in the process. Thus far the company has been demonstrating their Snapdragon X Elite SoC in its highest-performing, fully-enabled configuration. But the retail Snapdragon X Elite will not be a single part; instead, Qualcomm is preparing a whole range of chip configurations for various price/performance tiers in the market. Altogether, there will be 3 Snapdragon X Elite SKUs that differ in CPU and GPU performance.

As well, the company is introducing a second Snapdragon X tier, Snapdragon X Plus, for those SKUs positioned below the Elite performance tier. As of today, this will be a single configuration. But if the Snapdragon X lineup is successful and demand warrants it, I would not be surprised to see Qualcomm expand it further – as they have certainly left themselves the room for it in their product stack. In the meantime, with Qualcomm’s expected launch competition now shipping (Intel Core Ultra Meteor Lake and AMD Ryzen Mobile 8040 Hawk Point), the company is also very confident that even these reduced performance Snapdragon X Plus chips will be able to beat Intel and AMD in multithreaded performance – never mind the top-tier Snapdragon X Elite chips.

Qualcomm will be launching this expanded four-chip stack all at once, so both Snapdragon X Elite and Snapdragon X Plus tier devices should be available at the same time. The goal is still to have devices on the shelf “mid-year”, although the company isn’t providing any more precise guidance than that. With Qualcomm’s CEO, Cristiano Amon, set to deliver a Computex keynote in June, I expect we’ll get more specific details on timing then, along with the company and its partners using the event to announce and showcase some retail laptop designs. So this is very much looking like a summer launch at the moment.

In the meantime, Qualcomm is already showing off what their Snapdragon X Plus chips can do with a fresh set of live benchmarks, akin to their Snapdragon X Elite performance previews from October 2023. We’ll dive into those in a bit, but suffice it to say, Qualcomm knows the score, and they want to make sure the entire world knows when they’re winning.

AMD Announces Ryzen Pro 8000 and Ryzen Pro 8040 Series CPUs: Commercial Desktop Gets AI

April 19, 2024 at 14:00

AMD is looking to drive the AI PC market with options across multiple product lines, which aren't limited to consumer processors. While primarily designed for the commercial sector, AMD has announced the Ryzen Pro 8000 'Phoenix' series of APUs for desktops, which AMD claims is the first professional-grade CPU to include an NPU designed to provide on-chip AI neural processing capabilities. AMD has also announced the Ryzen Pro 8040 'Hawk Point' series of mobile processors designed for commercial laptops and notebooks.

AMD's Ryzen Pro 8000 and Ryzen Pro 8040 series processors come with support from AMD's Pro Manageability and AMD Pro Business Ready suites and are built with AMD's current generation Zen 4 cores. The Ryzen Pro 8000 and Ryzen Pro 8040 series processors are similar to their consumer-level counterparts. However, they have additional security features such as AMD Memory Guard, AMD Secure Processor, and Microsoft Pluton.

Touching on the differentiating factors between the non-Pro consumer chips and the Ryzen Pro series, there is plenty for the commercial and enterprise market regarding security. Notably, the Ryzen Pro 8000 series is the first desktop platform to integrate Microsoft Pluton security features, which are designed to protect data when connecting to the cloud. Other features include AMD Memory Guard, which encrypts login credentials, keys, and text files stored in DRAM. AMD Pro Security ties the AMD Zen 4 shadow stack and other layers directly into the software stack, which, in this case, is Microsoft Windows 11 OS security.

Another notable feature that AMD is hammering home is the on-chip AI capability of the included Ryzen AI neural processing unit (NPU), which allows enterprises to run AI workloads locally to mitigate the privacy concerns of transferring data to and from the cloud. Although the current generation of NPUs embedded into processors is limited in what it can do, Ryzen AI is a driving factor within the AI PC, as manufacturers and ISVs look to utilize AI-accelerated features built into software, such as Microsoft with their AI-powered Copilot tool.

Although there are requirements that must now be met for a PC to be considered an 'AI PC,' Microsoft has announced that its AI PC requirement is 45 TOPS of performance from the NPU alone, which none of the current generation of chips from AMD and Intel meet. In the desktop space, AMD currently has the lead, as Intel presently has no desktop offerings with an NPU, although in the mobile space, AMD with their Ryzen 8040 (Hawk Point) and Intel with their Meteor Lake processors provide plenty of choice for users.

AMD Ryzen Pro 8000 Series (Zen 4)
AnandTech | Cores / Threads | Base Freq (MHz) | Boost Freq (MHz) | L3 Cache | iGPU | TDP
Ryzen 7 Pro 8700G | 8C / 16T | 4200 | 5100 | 16 MB | R780M (12 CUs) | 45-65 W
Ryzen 7 Pro 8700GE | 8C / 16T | 3650 | 5100 | 16 MB | R780M (12 CUs) | 35 W
Ryzen 5 Pro 8600G | 6C / 12T | 4350 | 5000 | 16 MB | R760M (8 CUs) | 45-65 W
Ryzen 5 Pro 8500G | 6C / 12T | 3550 | 5000 | 16 MB | R740M (4 CUs) | 45-65 W
Ryzen 5 Pro 8600GE | 6C / 12T | 3900 | 5000 | 16 MB | R760M (8 CUs) | 35 W
Ryzen 5 Pro 8500GE | 6C / 12T | 3400 | 5000 | 16 MB | R740M (4 CUs) | 35 W
Ryzen 3 Pro 8300G | 4C / 8T | 3450 | 4900 | 8 MB | R740M (4 CUs) | 45-65 W
Ryzen 3 Pro 8300GE | 4C / 8T | 3500 | 4900 | 8 MB | R740M (4 CUs) | 35 W

Looking at the AMD Ryzen Pro 8000 series, AMD has announced eight new processors that include the same specifications as the non-Pro Ryzen 8000G APU counterparts. Two primary types of Ryzen Pro 8000 processors are set to be available: four with a configurable TDP of between 45 and 65 W and four with a flat TDP of 35 W for lower-powered environments. Leading the line-up is the Ryzen 7 Pro 8700G, which is identical in core specifications to the Ryzen 7 8700G APU, and has an 8C/16T (Zen 4) configuration with a base frequency of 4.2 GHz and a boost frequency of up to 5.1 GHz.

Even the Ryzen 7 Pro 8700GE, the 35 W version, has a 5.1 GHz boost frequency, although it has a slower base clock of 3.65 GHz. Both models have 16 MB of L3 cache, along with AMD's integrated Radeon 780M (12 CUs) mobile graphics. The eight Ryzen Pro 8000 series models range from 4C/8T parts with 8 MB of L3 cache and 4.9 GHz boost clocks, through 6C/12T models with 16 MB of L3 cache and 5.0 GHz boost clocks, up to the aforementioned 8C/16T 8700G/8700GE.

While we take all performance figures given by manufacturers and vendors with a pinch of salt, AMD claims their Ryzen Pro 8000 series offers up to 19% better performance than Intel's 14th-gen Core series processors. AMD's match-up is the Ryzen 7 Pro 8700G vs. the Intel Core i7-14700, with AMD claiming a 47% victory in the Passmark 11 benchmark and 3X the graphics performance in 3D Mark Time Spy. This isn't entirely surprising because the Ryzen 7 Pro 8700G benefits from integrated RDNA3 graphics and AMD's Zen 4 cores.

AMD Ryzen Pro 8040 Series (Zen 4)
AnandTech | Cores / Threads | Base Freq (MHz) | Boost Freq (MHz) | L3 Cache | iGPU (CUs) | TDP
Ryzen 9 Pro 8945HS | 8C / 16T | 4000 | 5200 | 16 MB | 12 | 35-54 W
Ryzen 7 Pro 8845HS | 8C / 16T | 3800 | 5100 | 16 MB | 12 | 35-54 W
Ryzen 7 Pro 8840HS | 8C / 16T | 3300 | 5100 | 16 MB | 12 | 20-28 W
Ryzen 5 Pro 8645HS | 6C / 12T | 4300 | 5000 | 16 MB | 8 | 35-54 W
Ryzen 5 Pro 8640HS | 6C / 12T | 3500 | 4900 | 16 MB | 8 | 20-28 W
Ryzen 7 Pro 8840U | 8C / 16T | 3300 | 5100 | 16 MB | 12 | 15-28 W
Ryzen 5 Pro 8640U | 6C / 12T | 3500 | 4900 | 16 MB | 8 | 15-28 W
Ryzen 5 Pro 8540U* | 6C / 12T | 3200 | 4900 | 16 MB | 4 | 15-28 W
*The Ryzen 5 Pro 8540U is the only chip without AMD's Ryzen AI NPU

Moving on to AMD's latest Ryzen Pro 8040 processors for the mobile market, AMD has refreshed its Hawk Point family for the enterprise market with eight new processors, segmented into two families: the HS series and the U series. The HS series has five new chips, ranging from 6C/12T up to 8C/16T, all with varying clock speeds and TDPs. At the top of the line-up is the Ryzen 9 Pro 8945HS, which is a direct replacement for the Ryzen 9 Pro 7940HS, and as such, it comes with the same 4.0 GHz base clock and 5.2 GHz boost clock.

Pivoting to TDP, AMD offers the Ryzen 9 Pro 8945HS, Ryzen 7 Pro 8845HS, and Ryzen 5 Pro 8645HS with a configurable TDP of between 35 and 54 W. In contrast, the Ryzen 7 Pro 8840HS and the Ryzen 5 Pro 8640HS are designed for lower-powered laptops with a cTDP of 20-28 W. Regarding cache, all of the announced Ryzen Pro 8040 series models come with 16 MB of L3 cache, while specifications such as the integrated graphics and clock speeds all correspond to the consumer line-up, the Ryzen 8040 series.

AMD's in-house performance figures show the Ryzen 7 Pro 8840U at 15 W performing better than Intel's Core Ultra 7 165H at 28 W. Still, as we always do with performance figures provided by vendors, take these with a pinch of salt. AMD claims a 30% combined increase in performance across workloads including Geekbench v6, Blender, PCMark 10, PCMark Night Raid, and UL Procyon. While there are plenty of different areas where performance gains and losses can be achieved, AMD does claim that their Ryzen 9 Pro 8945HS at 45 W is 50% faster than the Intel Core Ultra 9 185H at 45 W in Topaz Labs Video AI (Gaia 4X); according to AMD's slide deck, both systems used discrete graphics in this test.

The other notable thing is that all of the Ryzen Pro 8040 series processors, except the bottom SKU, the Ryzen 5 Pro 8540U, come with AMD's Ryzen AI NPU integrated into the silicon. While the AI PC ecosystem is still growing, AMD and over 150 ISVs look to continue the trend of AI powering more software features than we've seen so far. The ecosystem is still in its infancy despite much of the marketing targeting AI functionality, and until higher-performing NPUs arrive in the next generation of chips, at least ones that can meet Microsoft's 45 TOPS NPU requirement to run Copilot locally, much of the NPU's benefit comes down to how much power can be saved.

The introduction of the Ryzen Pro 8000/8040 series completes AMD's commercial client platform, alongside the readily available Ryzen Threadripper Pro 7000-WX series for commercial and professional workstations. What sets these AMD Ryzen Pro series processors apart from the consumer (non-Pro) variants is support for the AMD Pro Manageability toolkit, which includes features such as cloud-based remote manageability, enabling off-site IT technicians to access devices remotely, as well as WPA3 SAE encryption, which provides client-to-cloud protection for enterprises over shared networks.

AMD has not announced when the Ryzen Pro 8000 series APUs or the Ryzen Pro 8040 mobile chips will be available for purchase. However, we expect a wide array of OEMs, such as HP and Lenovo, to be already in the process of readying solutions that should hit the market soon.

Intel and Sandia National Labs Roll Out 1.15B Neuron “Hala Point” Neuromorphic Research System

April 17, 2024 at 15:00

While neuromorphic computing remains under research for the time being, efforts into the field have continued to grow over the years, as have the capabilities of the specialty chips that have been developed for this research. Following those lines, this morning Intel and Sandia National Laboratories are celebrating the deployment of the Hala Point neuromorphic system, which the two believe is the highest capacity system in the world. With 1.15 billion neurons overall, Hala Point is the largest deployment yet for Intel’s Loihi 2 neuromorphic chip, which was first announced at the tail-end of 2021.

The Hala Point system incorporates 1152 Loihi 2 processors, each of which is capable of simulating a million neurons. As noted back at the time of Loihi 2’s launch, these chips are actually rather small – just 31 mm2 per chip with 2.3 billion transistors each, as they’re built on the Intel 4 process (one of the only Intel chips besides Meteor Lake to use that node). As a result, the complete system is similarly petite, taking up just 6 rack units of space (or, as Sandia likes to put it, about the size of a microwave), with a power consumption of 2.6 kW. Now that it’s online, Hala Point has dethroned the SpiNNaker system as the largest disclosed neuromorphic system, offering an admittedly only slightly larger number of neurons at less than 3% of the power consumption of the 100 kW British system.
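Those headline figures are easy to sanity-check. Below is a minimal sketch (in Python, using only the numbers quoted above; the variable names are ours) that recomputes the total neuron count and the power comparison against SpiNNaker.

```python
# Sanity-check of Hala Point's headline figures, using the numbers quoted above.
loihi2_chips = 1152              # Loihi 2 processors in Hala Point
neurons_per_chip = 1_000_000     # each chip can simulate roughly a million neurons

total_neurons = loihi2_chips * neurons_per_chip
print(f"Total simulated neurons: {total_neurons / 1e9:.2f} billion")   # ~1.15 billion

hala_point_power_kw = 2.6        # quoted system power
spinnaker_power_kw = 100         # quoted power of the SpiNNaker system

ratio = hala_point_power_kw / spinnaker_power_kw
print(f"Hala Point draws {ratio:.1%} of SpiNNaker's power")            # ~2.6%, i.e. less than 3%
```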


A Single Loihi 2 Chip (31 mm2)

Hala Point will be replacing an older Intel neuromorphic system at Sandia, Pohoiki Springs, which is based on Intel’s first-generation Loihi chips. By comparison, Hala Point offers ten times as many neurons, and upwards of 12x the performance overall.

Both neuromorphic systems have been procured by Sandia in order to advance the national lab’s research into neuromorphic computing, a computing paradigm that behaves like a brain. The central thought (if you’ll excuse the pun) is that by mimicking the wetware writing this article, neuromorphic chips can be used to solve problems that conventional processors cannot solve today, and that they can do so more efficiently as well.

Sandia, for its part, has said that it will be using the system to look at large-scale neuromorphic computing, with work operating on a scale well beyond Pohoiki Springs. With Hala Point offering a simulated neuron count very roughly on the level of complexity of an owl brain, the lab believes that a larger-scale system will finally enable them to properly exploit the properties of neuromorphic computing to solve real problems in fields such as device physics, computer architecture, computer science and informatics, moving well beyond the simple demonstrations initially achieved at a smaller scale.

One new focus from the lab, which in turn has caught Intel’s attention, is the applicability of neuromorphic computing towards AI inference. Because the neural networks themselves behind the current wave of AI systems are attempting to emulate the human brain, in a sense, there is an obvious degree of synergy with the brain-mimicking neuromorphic chips, even if the algorithms differ in some key respects. Still, with energy efficiency being one of the major benefits of neuromorphic computing, it’s pushed Intel to look into the matter further – and even build a second, Hala Point-sized system of their own.

According to Intel, in their research on Hala Point, the system has reached efficiencies as high as 15 TOPS-per-Watt at 8-bit precision, albeit while using 10:1 sparsity, making it more than competitive with current-generation commercial chips. As an added bonus to that efficiency, the neuromorphic systems don’t require extensive data processing and batching in advance, which is normally necessary to make efficient use of the high density ALU arrays in GPUs and GPU-like processors.

Perhaps the most interesting use case of all, however, is the potential for being able to use neuromorphic computing to enable augmenting neural networks with additional data on the fly. The idea behind this being to avoid re-training, as current LLMs require, which is extremely costly due to the extensive computing resources required. In essence, this is taking another page from how brains operate, allowing for continuous learning and dataset augmentation.

But for the moment, at least, this remains a subject of academic study. Eventually, Intel and Sandia want systems like Hala Point to lead to the development of commercial systems – and presumably, at even larger scales. But to get there, researchers at Sandia and elsewhere will first need to use the current crop of systems to better refine their algorithms, as well as better figure out how to map larger workloads to this style of computing in order to prove their utility at larger scales.

AMD Quietly Launches Ryzen 7 8700F and Ryzen 5 8400F Processors

April 11, 2024 at 19:30

AMD has recently expanded its Ryzen 8000 series by introducing the Ryzen 7 8700F and Ryzen 5 8400F processors. Initially launched in China, these chips have since been added to AMD's global website, signaling that they are available worldwide, apparently from April 1st. Built from the same Zen 4-based Phoenix silicon on TSMC's 4nm node as AMD's Zen 4 mobile chips, these new CPUs lack integrated graphics. However, the Ryzen 7 8700F does include the integrated Ryzen AI NPU, adding on-chip AI capability in a market increasingly focused on bringing AI directly into the PC.

The company's decision to announce these chips in China aligns with its strategy to offer Ryzen solutions at every price point in the market. Although AMD didn't initially disclose the full specifications of these F-series models (and the company declined to discuss them when we reached out), the listings on its website have since been updated with a complete set of specifications and features, with everything but the price mentioned.

AMD Ryzen 8000G vs. Ryzen 8000F Series (Desktop)
Zen 4 (Phoenix)
AnandTech | Cores/Threads | Base Freq (MHz) | Turbo Freq (MHz) | GPU | GPU Freq (MHz) | Ryzen AI (NPU) | L3 Cache (MB) | TDP | MSRP
Ryzen 7 8700G | 8/16 | 4200 | 5100 | R780M (12 CUs) | 2900 | Y | 16 | 65 W | $329
Ryzen 7 8700F | 8/16 | 4100 | 5000 | - | - | Y | 16 | 65 W | ?
Ryzen 5 8600G | 6/12 | 4300 | 5000 | R760M (8 CUs) | 2800 | Y | 16 | 65 W | $229
Ryzen 5 8400F | 6/12 | 4200 | 4700 | - | - | N | 16 | 65 W | ?

The Ryzen 7 8700F features an 8C/16T design, with 16MB of L3 cache and the same 65W TDP as the Ryzen 7 8700G. Although the base clock speed is 4.1 GHz, it boosts to 5.0 GHz; this is 100 MHz less than the 8700G on both base and boost clocks. Meanwhile, the Ryzen 5 8400F is a slightly scaled-down version of the Ryzen 5 8600G APU, with 6C/12T, 16MB of L3 cache, and again a 100 MHz reduction to base clocks compared to the 8600G. Unlike the Ryzen 5 8400F, the Ryzen 7 8700F keeps AMD's Ryzen AI NPU, adding additional capability for generative AI.

The Ryzen 5 8400F can boost up to 4.7 GHz, 300 MHz slower than the Ryzen 5 8600G. AMD also allows overclocking for these new F-series chips, which means users could potentially boost the performance of these processors to match their G-series equivalents.

Pricing details are still pending, but to remain competitive, AMD will likely need to price these CPUs below the 8700G and 8600G, as well as the Ryzen 7 7700 and Ryzen 5 7600. The latter two offer integrated graphics (albeit very limited), double the L3 cache capacity, and higher boost clocks than the 8000F series chips, so they are something to weigh up whenever pricing becomes available.

Intel Teases Lunar Lake At Intel Vision 2024: 100+ TOPS Overall, 45 TOPS From NPU Alone

April 11, 2024 at 17:00

During the main keynote at Intel Vision 2024, Intel CEO Pat Gelsinger showed off a completed Lunar Lake chip, much like EVP and General Manager of Intel's Client Computing Group (CCG) Michelle Johnston Holthaus did back at CES 2024. The difference this time is that Gelsinger gave us something juicier than just a photo op: he put numbers on the level of AI performance we can expect to see when Lunar Lake launches.

According to Gelsinger, Lunar Lake, scheduled to launch towards the end of this year, is set to raise the bar even further for on-chip AI capabilities and performance. During his presentation at the aptly named Intel Vision event, he stated that Lunar Lake will be the 'flagship SoC' for the next generation of AI PCs. Intel claims that Lunar Lake will have 3X the AI performance of their current Meteor Lake SoC, which is impressive, as Meteor Lake is estimated to deliver around 34 TOPS combined across the NPU, GPU, and CPU.

Factoring in the NPU within Meteor Lake, 11 of those 34 TOPS come solely from the NPU. Intel claims that the NPU on Lunar Lake will hit a hefty 45 TOPS, akin to the Hailo-10 add-in card and similar to Qualcomm's Snapdragon X Elite processor. Factoring in the integrated graphics and the compute cores, Intel is claiming a combined total of over 100 TOPS, and with Microsoft's self-imposed guideline of what constitutes an 'AI PC' coming in at 40 TOPS, Intel's NPU fits the bill.
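As a quick sanity check, the claimed figures can be worked through directly. The following is a minimal sketch (in Python, using only the numbers quoted above; the 3X multiplier is Intel's own claim, and the variable names are ours) of that TOPS arithmetic.

```python
# Working through Intel's quoted TOPS figures for Meteor Lake and Lunar Lake.
meteor_lake_total_tops = 34      # estimated combined NPU + GPU + CPU
meteor_lake_npu_tops = 11        # portion attributed to the NPU alone

lunar_lake_total_tops = 3 * meteor_lake_total_tops   # Intel's "3X the AI performance" claim
lunar_lake_npu_tops = 45                             # Intel's claimed NPU figure

microsoft_ai_pc_npu_tops = 40    # Microsoft's stated NPU guideline for an 'AI PC'

print(f"Lunar Lake combined (claimed): ~{lunar_lake_total_tops} TOPS")          # ~102, i.e. 100+
print(f"NPU meets Microsoft's bar: {lunar_lake_npu_tops >= microsoft_ai_pc_npu_tops}")
```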

Intel hasn't said exactly how it is gaining so much TOPS performance from the NPU, whether that be through new technologies or otherwise; the NPU will likely be built on a more advanced node, perhaps Intel 18A. Another thing Intel didn't highlight was how it is measuring TOPS performance, whether at INT8 or INT4.

Still, one thing is clear: Intel wants to increase on-chip AI capabilities in desktop PCs and notebooks with each generation. Intel is also attempting to leverage more AI performance to help reach its goal of shipping 100 million AI PCs by the end of 2025. Intel has already announced that it has shipped 5 million thus far and plans to sell another 40 million units by the end of the year.

The Intel Core Ultra 7 155H Review: Meteor Lake Marks A Fresh Start To Mobile CPUs

April 11, 2024 at 12:30

One of the most significant talking points of the last six months in mobile computing has been Intel and its disaggregated Meteor Lake SoC architecture. Meteor Lake, along with the new Core and Core Ultra naming scheme, also heralds the dawn of Intel's first tiled architecture for the mobile landscape, built on the latest Intel 4 node with Foveros packaging. In December last year, Intel unveiled its premier Meteor Lake-based Core Ultra H series, with five SKUs: two 4P+8E+2LP/18T models and three 6P+8E+2LP/22T models. Since then, many vendors and manufacturers have launched notebooks built around Intel's latest multi-tiled Meteor Lake SoC as the heart of their 2024 line-ups.

Today, we are focusing on an attractive ultrabook, the ASUS Zenbook 14 OLED (UX3405MA), which features a thin and light design and is powered by Intel's latest Meteor Lake Core Ultra 7 155H processor. While much of the attention will focus on how the Intel Core Ultra 7 155H, with its 6P+8E+2LP/22T configuration and 8 Arc Xe integrated graphics cores, performs, the ASUS Zenbook 14 OLED UX3405MA has plenty of features within its sleek Ponder Blue shell to make it very interesting. These include a 14" 3K (2880 x 1800) touchscreen OLED panel with a 120 Hz refresh rate, 32 GB of LPDDR5X memory (soldered), and a 1 TB NVMe M.2 SSD for storage.

Intel To Discontinue Boxed 13th Gen Core CPUs for Enthusiasts

April 11, 2024 at 00:00

In an unexpected move, Intel has announced plans to phase out the boxed versions of its enthusiast-class 13th Generation Core 'Raptor Lake' processors. According to a product change notification (PCN) published by the company last month, Intel plans to stop shipping these desktop CPUs by late June. In their place will remain Intel's existing lineup of boxed 14th Generation Core processors, which are based on the same 'Raptor Lake' silicon and typically offer higher performance for similar prices.

Intel customers and distributors interested in getting boxed versions of the 13th Generation Core i5-13600K/KF, Core i7-13700K/KF, and Core i9-13900K/KF/KS 'Raptor Lake' processors with unlocked multipliers should place their orders by May 24, 2024. The company will ship these units by June 28, 2024. Meanwhile, the PCN does not mention any change to the availability of tray versions of these CPUs, which are sold to OEMs and wholesalers.

The impending discontinuation of Intel's boxed 13th Generation Core processors comes as the company's current 14th Generation product line, 'Raptor Lake Refresh', is largely a rehash of the same silicon at slightly higher clockspeeds. Case in point: all of the discontinued SKUs are based on Intel's B0 Raptor Lake silicon, which is still being used for their 14th Gen counterparts. So Intel is not discontinuing production of any Raptor Lake silicon; only the number of retail SKUs is getting cut down.

As outlined in our 14th Generation Core/Raptor Lake Refresh review, the 14th Gen chips largely make their 13th Gen counterparts redundant, offering better performance at every tier for the same list price. And with virtually all current-generation motherboards supporting both generations of chips, Intel apparently feels there's little reason to keep around what are essentially older, slower SKUs of the same silicon.

Interestingly, the retirement of the enthusiast-class 13th Generation Core chips is coming before Intel discontinues their even older 12th Generation Core 'Alder Lake' processors. 12th Gen chips are still available to this day in both boxed and tray versions, and the Alder Lake silicon itself is still widely in use in multiple product families. So even though Alder Lake shares the same platform as Raptor Lake, the chips based on that silicon haven't been rendered redundant in the same way that 13th Gen Core chips have.

Ultimately, it would seem that Intel is intent on consolidating and simplifying its boxed retail chip offerings by retiring their near-duplicate SKUs. Which for PC buyers could present a minor opportunity for a deal, as retailers work to sell off their remaining 13th Gen enthusiast chips.

Google Develops In-House Arm 'Axion' CPU for Datacenters

April 9, 2024 at 22:00

Google was among the first hyperscalers to build custom silicon for its services, starting with tensor processing units (TPUs) for its AI initiatives, and then video transcoding units (VCUs) for the YouTube service. But unlike its industry peers, the company has been slower to adopt custom CPU designs, preferring to stick to off-the-shelf chips from the major CPU vendors. This is finally changing at Google, with the announcement that the company has developed its own in-house datacenter CPU, the Axion.

Google's Axion processor is based on the Arm Neoverse V2 (Arm v9) platform, which is Arm's current-generation design for high-performance server CPUs, and is already employed in other chips such as NVIDIA's Grace and Amazon's Graviton4. Within Google, Axion is aimed at a wide variety of workloads, including web and app servers, data analytics, microservices, and AI training. Google claims that the Axion processors boast up to 50% higher performance and up to 60% better energy efficiency compared to current-generation x86-based processors, as well as offer 30% higher performance compared to competing Arm-based CPUs for datacenters. Though, as is increasingly common for the cryptic cloud side of Google's business, at least for now the company isn't specifying what processors it's comparing Axion to in these metrics.

While Google is not disclosing core counts or the full specifications of its Axion CPUs, the company is revealing that they incorporate its own secret sauce in the form of the company's Titanium purpose-built microcontrollers. These microcontrollers are designed to handle basic operations like networking and security, as well as offload storage I/O processing to the Hyperdisk block storage service. As a result of this offloading, virtually all of the CPU core resources should be available to actual workloads. As for the chip's memory subsystem, Axion uses conventional dual-rank DDR5 memory modules.

"Google's announcement of the new Axion CPU marks a significant milestone in delivering custom silicon that is optimized for Google's infrastructure, and built on our high-performance Arm Neoverse V2 platform," said Rene Haas, CEO of Arm. "Decades of ecosystem investment, combined with Google's ongoing innovation and open-source software contributions ensure the best experience for the workloads that matter most to customers running on Arm everywhere." 

Google has previously deployed Arm-based processors for its own services, including BigTable, Spanner, BigQuery, and YouTube Ads, and is ready to offer instances based on its Armv9-based Axion CPUs to customers who can use software developed for Arm architectures.

Sources: Google, Wall Street Journal

Intel Unveils New Branding For 6th Generation Xeon Processors: Intel Xeon 6

April 9, 2024 at 15:35

Intel's Vision 2024 event, being held this week in Phoenix, AZ, has seen several key announcements. On the datacenter CPU front, Intel is using the show to unveil the newest branding for its venerable family of Xeon processors. Beginning with this year's sixth generation of processors, Intel is "evolving" the Xeon brand by retiring the "Xeon Scalable" branding in favor of the new and simplified "Xeon 6" brand.

The Xeon 6 family is set to launch later this year with two primary variants: an all-performance (P) core chip codenamed Granite Rapids, and an all-efficiency (E) core chip codenamed Sierra Forest. Both of these chips will be sold under the Xeon 6 brand and sit on top of the same motherboard platform, with the Xeon 6 branding intended in part to underscore this shared platform. Though speaking of the chips themselves, at this time Intel isn't illustrating how the two sub-series of chips will be differentiated in terms of product numbers.

Over the last year, we've extensively covered Intel's Granite Rapids and Sierra Forest in several key pieces.

Intel debuted their Xeon Scalable branding in 2017 with the launch of the Xeon Platinum 8100 series, which was built using their Skylake microarchitecture. At the time Xeon Scalable replaced Intel's older Xeon E/EP/EX vX branding, resetting the generation count in the process.

Moving forward to 2024, Intel is looking to build an ecosystem befitting the current demands of technologies within key areas such as data centers, Edge, and the PC. Intel is laying the foundations for what it calls 'Intel Enterprise AI.' Using a vast array of frameworks and accelerators and working closely with partners, ISVs, and GSIs to create a large and open ecosystem, the newly branded Intel Xeon 6 platforms will be key in the enterprise market as we advance.

Intel has adopted a newer and simpler nomenclature for Granite Rapids and Sierra Forest, starting with the Intel Xeon 6 processors. The Sierra Forest Xeon 6 processors, which include a chip featuring 288 E-cores, are set to launch in Q2 2024 and will be the first products to adopt the new branding, which is designed to ease customer navigation between models. Meanwhile, the Xeon 6 P-core Granite Rapids processors will come later.

Ultimately, the Xeon brand itself and what it entails (enterprise, workstation, server, and data center) isn't going anywhere. Instead, Intel is putting an increased focus on the generation number of the platform by moving it front and center, to more clearly highlight what generation of technology a part belongs to.

As mentioned, Intel's Xeon 6 processors, based on their Sierra Forest architecture, are set to launch in Q2 2024, while the Granite Rapids Xeon 6 platform is expected to come sometime in the second half of 2024.

PCIe 7.0 Draft 0.5 Spec Available: 512 GB/s over PCIe x16 On Track For 2025

April 4, 2024 at 12:00

PCI-SIG this week released version 0.5 of the PCI-Express 7.0 specification to its members. This is the second draft of the spec and the final call for PCI-SIG members to submit their new features to the standard. The latest update on the development of the specification comes a couple of months shy of a year after the PCI-SIG published the initial Draft 0.3 specification, with the PCI-SIG using the latest update to reiterate that development of the new standard remains on track for a final release in 2025.

PCIe 7.0 is the next-generation interconnect technology for computers, set to increase data transfer speeds to 128 GT/s per pin, doubling the 64 GT/s of PCIe 6.0 and quadrupling the 32 GT/s of PCIe 5.0. This would allow a 16-lane (x16) connection to support 256 GB/sec of bandwidth in each direction simultaneously, excluding encoding overhead. Such speeds will be handy for future datacenters as well as artificial intelligence and high-performance computing applications that will need even faster data transfer rates, including network data transfer rates.
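The bandwidth math behind those figures is straightforward to reproduce. Here is a minimal sketch (in Python; the function name is ours, and encoding overhead is ignored as in the paragraph above) that converts per-pin transfer rates into aggregate x16 bandwidth.

```python
# PCIe per-direction bandwidth from per-pin transfer rate, ignoring encoding overhead.
def pcie_bandwidth_gbytes(transfer_rate_gt_s: float, lanes: int) -> float:
    # Each lane carries roughly 1 bit per transfer, so GT/s ~ Gb/s per lane per direction.
    return transfer_rate_gt_s * lanes / 8   # convert Gb/s to GB/s

for gen, rate in [("PCIe 5.0", 32), ("PCIe 6.0", 64), ("PCIe 7.0", 128)]:
    per_dir = pcie_bandwidth_gbytes(rate, lanes=16)
    print(f"{gen}: {per_dir:.0f} GB/s per direction over x16 ({2 * per_dir:.0f} GB/s bidirectional)")
```

For PCIe 7.0 this works out to 256 GB/s per direction over x16, or 512 GB/s bidirectional, matching the headline figure.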

To achieve its impressive data transfer rates, PCIe 7.0 doubles the bus frequency at the physical layer compared to PCIe 5.0 and 6.0. Beyond that, the standard retains the pulse amplitude modulation with four-level signaling (PAM4), 1b/1b FLIT mode encoding, and the forward error correction (FEC) technologies that are already used for PCIe 6.0. Otherwise, PCI-SIG says that the PCIe 7.0 specification also focuses on enhanced channel parameters and reach as well as improved power efficiency.

Overall, the engineers behind the standard have their work cut out for them, given that PCIe 7.0 requires doubling the bus frequency at the physical layer, a major development that PCIe 6.0 sidestepped with PAM4 signaling. Nothing comes for free in regards to improving data signaling, and with PCIe 7.0, the PCI-SIG is arguably back to hard-mode development by needing to improve the physical layer once more – this time to enable it to run at around 30GHz. Though how much of this heavy lifting will be accomplished through smart signaling (and retimers) and how much will be accomplished through sheer materials improvements, such as thicker printed circuit boards (PCBs) and low-loss materials, remains to be seen.

The next major step for PCIe 7.0 is finalization of version 0.7 of the specification, which is considered the Complete Draft, where all aspects must be fully defined and electrical specifications must be validated through test chips. After this iteration of the specification is released, no new features can be added. PCIe 6.0 eventually went through 4 major drafts – 0.3, 0.5, 0.7, and 0.9 – before finally being finalized, so PCIe 7.0 is likely on the same track.

Once finalized in 2025, it should take a few years for the first PCIe 7.0 hardware to hit the shelves. Although development work on controller IP and initial hardware is already underway, that process extends well beyond the release of the final PCIe specification.

Report: China to Pivot from AMD & Intel CPUs To Domestic Chips in Government PCs

March 26, 2024 at 20:00

China has initiated a policy shift to eliminate American processors from government computers and servers, reports the Financial Times. The decision aims to gradually eliminate processors from AMD and Intel from systems used by China's government agencies, which will mean lower sales for U.S.-based chipmakers and higher sales of China's own CPUs.

The new procurement guidelines, introduced quietly at the end of 2023, mandate that government entities prioritize 'safe and reliable' processors and operating systems in their purchases. This directive is part of a concerted effort to bolster domestic technology and parallels a similar push within state-owned enterprises to embrace technology designed in China.

The list of approved processors and operating systems, published by China's Information Technology Security Evaluation Center, exclusively features Chinese companies. There are 18 approved processors that use a mix of architectures, including x86 and ARM, while the operating systems are based on open-source Linux software. Notably, the list includes chips from Huawei and Phytium, both of which are on the U.S. export blacklist.

This shift towards domestic technology is a cornerstone of China's national strategy for technological autonomy in the military, government, and state sectors. The guidelines provide clear and detailed instructions for exclusively using Chinese processors, marking a significant step in China's quest for self-reliance in technology.

State-owned enterprises have been instructed to complete their transition to domestic CPUs by 2027. Meanwhile, Chinese government entities have to submit quarterly progress reports on their IT system overhauls. Although some foreign technology will still be permitted, the emphasis is clearly on adopting local alternatives.

The move away from foreign hardware is expected to have a measurable impact on American tech companies. China is a major market for AMD (accounting for 15% of its sales last year) and Intel (27% of its revenue). Additionally, Microsoft, while not disclosing specific figures, has acknowledged that China accounts for a small percentage of its revenues. And while government sales are only a fraction of overall China sales (as compared to the larger commercial PC business), the Chinese government is by no means a small customer.

Analysts questioned by the Financial Times predict that the transition to domestic processors will advance more swiftly for server processors than for client PCs, due to the less complex software ecosystem needing replacement. They estimate that China will need to invest approximately $91 billion from 2023 to 2027 to overhaul the IT infrastructure in government and adjacent industries.

Intel Announces Core i9-14900KS: Raptor Lake-R Hits Up To 6.2 GHz

March 14, 2024 at 15:00

For the last several generations of desktop processors from Intel, the company has released a higher-clocked, special-edition SKU under the KS moniker, which the company positions as its no-holds-barred performance part for that generation. For the 14th Generation Core family, Intel is keeping that tradition alive and well with the announcement of the Core i9-14900KS, which has been eagerly anticipated for months and is finally being unveiled for launch today. The Intel Core i9-14900KS is a special edition processor with P-Core turbo clock speeds of up to 6.2 GHz, which makes it the fastest desktop processor in the world... at least in terms of the advertised frequencies it can achieve.

With their latest KS processor, Intel is looking to further push the envelope on what can be achieved with the company's now venerable Raptor Lake 8+16 silicon. With a further 200 MHz increase in clockspeeds at the top end, Intel is looking to deliver unrivaled desktop performance for enthusiasts. At the same time, as this is the 4th iteration of the "flagship" configuration of the RPL 8+16 die, Intel is looking to squeeze out one more speed boost from the Alder/Raptor family in order to go out on a high note before the entire architecture starts to ride off into the sunset later this year. To get there, Intel will need quite a bit of electricity, and $689 of your savings.

SiPearl's Rhea-2 CPU Added to Roadmap: Second-Gen European CPU for HPC

March 8, 2024 at 21:00

SiPearl, a processor designer supported by the European Processor Initiative, is about to start shipments of its very first Rhea processor for high-performance computing workloads. But the company is already working on its successor, currently known as Rhea-2, which is set to arrive sometime in 2026 in exascale supercomputers.

SiPearl's Rhea-1 datacenter-grade system-on-chip packs 72 off-the-shelf Arm Neoverse V1 cores designed for HPC and connected using a mesh network. The CPU has a hybrid memory subsystem that supports both HBM2E and DDR5 memory to get both high memory bandwidth and decent memory capacity, and it supports PCIe interconnects with the CXL protocol on top. The CPU was designed by a contract chip designer and is made by TSMC on its N6 (6 nm-class) process technology.

The original Rhea is to a large degree a product aimed at proving that SiPearl, a European company, can deliver a datacenter-grade processor. This CPU now powers Jupiter, Europe's first exascale system, which uses nodes powered by four Rhea CPUs and NVIDIA's H200 AI and HPC GPUs. Given that Rhea is SiPearl's first processor, the project can be considered fruitful.

With its 2nd generation Rhea processors, SiPearl will have to develop something that is considerably more competitive. This is perhaps why Rhea-2 will use a dual-chiplet implementation. Such a design will enable SiPearl to pack more processing cores and therefore offer higher performance. Of course, it remains to be seen how many cores SiPearl plans to integrate into Rhea 2, but at least the CPU company is set to adopt the same design methodologies as AMD and Intel.

Given the timing for SiPearl's Rhea 2 and the company's natural wish to preserve software compatibility with Rhea 1, it is reasonable to expect the company to adopt Arm's Neoverse V3 cores for its second processor. Arm's Neoverse V3 offers quite a significant uplift compared to Neoverse V2 (and V1) and can scale to up to 128 cores per socket, which should be quite decent for HPC applications in 2025 – 2026.

While SiPearl will continue developing CPUs, it remains to be seen whether EPI will manage to deliver AI and HPC accelerators that are competitive against those from NVIDIA, AMD, and Intel.

Intel CEO Pat Gelsinger to Deliver Computex Keynote, Showcasing Next-Gen Products

March 8, 2024 at 11:00

Taiwan External Trade Development Council (TAITRA), the organizer of Computex, has announced that Pat Gelsinger, chief executive of Intel, will deliver a keynote at Computex 2024 on June 4, 2024. Focusing on the trade show's theme of artificial intelligence, he will showcase Intel's next-generation AI-enhanced products for client and datacenter computers.

According to TAITRA's press release, Pat Gelsinger will discuss how Intel's product lineup, including the AI-accelerated Intel Xeon, Intel Gaudi, and Intel Core Ultra processor families, opens up new opportunities for client PCs, cloud computing, datacenters, and network and edge applications. He will also discuss superior performance-per-watt and lower cost of ownership of Intel's Xeon processors, which enhance server capacity for AI workloads.

The most intriguing part of Intel's Computex keynote will of course be the company's next-generation AI-enhanced products for client and datacenter computers. At this point, Intel is prepping numerous products of significant interest, including the following:

  • Arrow Lake and Lunar Lake processors made on next-generation process technologies for desktop and mobile PCs and featuring all-new microarchitectures;
  • Granite Rapids CPUs for datacenters based on a high-performance microarchitecture;
  • Sierra Forest processors with up to 288 cores for cloud workloads based on energy-efficient cores codenamed Crestmont;
  • Gaudi 3 processors for AI workloads that promise to quadruple BF16 performance compared to Gaudi 2;
  • Battlemage graphics processing units.

All of these products are due to be released in 2024-2025, so Intel could well demonstrate them and showcase their performance advantages, or even formally launch some of them, at Computex. What remains to be seen is whether Intel will also give a glimpse at products that are further away, such as Clearwater Forest and Falcon Shores.

Tenstorrent Licenses RISC-V CPU IP to Build 2nm AI Accelerator for Edge

February 28, 2024 at 20:30

Tenstorrent this week announced that it had signed a deal to license out its RISC-V CPU and AI processor IP to Japan's Leading-edge Semiconductor Technology Center (LSTC), which will use the technology to build its edge-focused AI accelerator. The most curious part of the announcement is that this accelerator will rely on a multi-chiplet design and the chiplets will be made by Japan's Rapidus on its 2nm fabrication process, and then will be packaged by the same company.

Under the terms of the agreement, Tenstorrent will license its datacenter-grade Ascalon general-purpose processor IP to LSTC and will help to implement the chiplet using Rapidus's 2nm fabrication process. Tenstorrent's Ascalon is a high-performance, out-of-order RISC-V CPU design featuring eight-wide decode. The Ascalon core packs six ALUs, two FPUs, and two 256-bit vector units, and when combined with a 2nm-class process technology, it promises to offer quite formidable performance.

The Ascalon was developed by a team led by legendary CPU designer Jim Keller, the current chief executive of Tenstorrent, who used to work on successful projects by AMD, Apple, Intel, and Tesla.

In addition to general-purpose CPU IP licensing, Tenstorrent will co-design 'the chip that will redefine AI performance in Japan.' This apparently means that Tenstorrent does not plan to license its proprietary Tensix cores, tailored for neural network inference and training, to LSTC, but will instead help design a proprietary AI accelerator aimed primarily at inference workloads.

"The joint effort by Tenstorrent and LSTC to create a chiplet-based edge AI accelerator represents a groundbreaking venture into the first cross-organizational chiplet development in semiconductor industry," said Wei-Han Lien, Chief Architect of Tenstorrent's RISC-V products. "The edge AI accelerator will incorporate LSTC's AI chiplet along with Tenstorrent's RISC-V and peripheral chiplet technology. This pioneering strategy harnesses the collective capabilities of both organizations to use the adaptable and efficient nature of chiplet technology to meet the increasing needs of AI applications at the edge."

Rapidus aims to start production of chips on its 2nm fabrication process, which is currently under development, sometime in 2027, at least a year behind TSMC and a couple of years behind Intel. Yet if it starts high-volume 2nm manufacturing in 2027, it will be a major breakthrough for Japan, which is trying hard to return to the ranks of the global semiconductor leaders.

Building an edge AI accelerator based on Tenstorrent's IP and Rapidus's 2nm-class production node is a big deal for LSTC, Tenstorrent, and Rapidus, as it is a testament to the technologies developed by these three companies.

"I am very pleased that this collaboration started as an actual project from the MOC conclusion with Tenstorrent last November," said Atsuyoshi Koike, president and CEO of Rapidus Corporation. "We will cooperate not only in the front-end process but also in the chiplet (back-end process), and work on as a leading example of our business model that realizes everything from design to back-end process in a shorter period of time ever."

Intel Brings vPro to 14th Gen Desktop and Core Ultra Mobile Platforms for Enterprise

February 27, 2024 at 16:00

As part of this week's MWC 2024 conference, Intel is announcing that it is adding support for its vPro security technologies to select 14th Generation Core series processors (Raptor Lake-R) and their latest Meteor Lake-based Core Ultra-H and U series mobile processors. As we've seen from more launches than we care to count of Intel's desktop and mobile platforms, they typically roll out their vPro platforms sometime after they've released their full stack of processors, including overclockable K series SKUs and lower-powered T series SKUs, and this year is no exception. Altogether, Intel is announcing vPro Essential and vPro Enterprise support for several 14th Gen Core series SKUs and Intel Core Ultra mobile SKUs.

Intel's vPro security features are something we've covered previously – and on that note, Intel has a new Silicon Security Engine, giving the chips the ability to authenticate the system's firmware. Intel also states that Intel Threat Detection within vPro has been enhanced and adds an additional layer for the NPU, with an xPU model (CPU/GPU/NPU) to help detect a variety of attacks, while also enabling 3rd party software to run faster; Intel claims this is the only AI-based security deployment within a Windows PC to date. Both the full vPro Enterprise security suite and the cut-down vPro Essentials are coming to select 14th Gen Core series processors, as well as Intel's latest mobile-focused Meteor Lake processors with Arc graphics launched last year.

Intel 14th Gen vPro: Raptor Lake-R Gets Secured

As we've seen over the last few years with the global shift towards remote work due to the Coronavirus pandemic, the need for up-to-date security in both small and large enterprises is as critical as it has ever been. Remote and in-office employees alike must have access to the latest software and hardware frameworks to ensure the security of vital data, and that's where Intel vPro comes in.

To quickly recap the current state of affairs, let's take a look at the two levels of Intel vPro security available, vPro Essentials and vPro Enterprise, and how they differ.

Intel's vPro Essentials was first launched back in 2022 and is a subset of Intel's complete vPro package, which is now commonly known as vPro Enterprise. The Intel vPro Essentials security package is, as the name suggests, tailored for small businesses, providing a solid foundation in security without penalizing performance. It integrates hardware-enhanced security features, ensuring hardware-level protection against emerging threats right from installation. It also utilizes real-time intelligence for workload optimization and Intel's Threat Detection Technology, adding an additional layer below the operating system that uses AI-based threat detection to mitigate OS-level threats and attacks.

Pivoting to Intel's vPro Enterprise security features, this tier is designed to meet the high demands of large-scale business environments. It offers advanced security features and remote management capabilities, which are crucial for businesses operating with sensitive data and requiring high levels of cybersecurity. Additionally, the platform provides enhanced performance and reliability, making it suitable for intensive workloads and multitasking in a professional setting. Integrating these features from the vPro Enterprise platform ensures that large enterprises can maintain high productivity levels while ensuring data security and efficient IT management with the latest generations of processors, such as the Intel Core 14th Gen family.

Much like we saw when Intel announced their vPro for the 13th Gen Core series, it's worth noting that both the 14th and 13th Gen Core series are based on the same Raptor Lake architecture and, as such, are identical in every aspect bar base and turbo core frequencies.

Intel 14th Gen Core with vPro for Desktop
(Raptor Lake-R)
AnandTech | Cores (P+E/T) | P-Core Base/Turbo (MHz) | E-Core Base/Turbo (MHz) | L3 Cache (MB) | Base W | Turbo W | vPro Support (Ent/Ess) | Price ($)
i9-14900K | 8+16/32 | 3200 / 6000 | 2400 / 4400 | 36 | 125 | 253 | Enterprise | $589
i9-14900 | 8+16/32 | 2000 / 5600 | 1500 / 4300 | 36 | 65 | 219 | Both | $549
i9-14900T | 8+16/32 | 1100 / 5500 | 800 / 4000 | 36 | 35 | 106 | Both | $549
i7-14700K | 8+12/28 | 3400 / 5600 | 2500 / 4300 | 33 | 125 | 253 | Enterprise | $409
i7-14700 | 8+12/28 | 2100 / 5400 | 1500 / 4200 | 33 | 65 | 219 | Both | $384
i7-14700T | 8+12/28 | 1300 / 5000 | 900 / 3700 | 33 | 35 | 106 | Both | $384
i5-14600K | 6+8/20 | 3500 / 5300 | 2600 / 4000 | 24 | 125 | 181 | Enterprise | $319
i5-14600 | 6+8/20 | 2700 / 5200 | 2000 / 3900 | 24 | 65 | 154 | Both | $255
i5-14500 | 6+8/20 | 2600 / 5000 | 1900 / 3700 | 24 | 65 | 154 | Both | $232
i5-14600T | 6+8/20 | 1800 / 5100 | 1200 / 3600 | 24 | 35 | 92 | Both | $255
i5-14500T | 6+8/20 | 1700 / 4800 | 1200 / 3400 | 24 | 35 | 92 | Both | $232

While Intel isn't technically launching any new chip SKUs (either desktop or mobile) with vPro support, the vPro desktop platform features are enabled through the use of specific motherboard chipsets, with the Q670 and W680 chipsets being the only ones to support vPro on 14th Gen Core. Unless one of the specific chips listed above is installed in a Q670 or W680 motherboard, neither vPro Essentials nor vPro Enterprise will be enabled.

As with the previous 13th Gen Core series family (Raptor Lake), the 14th Gen, which is a direct refresh of it, follows a similar pattern. Specific SKUs from the 14th Gen family support only the full-fledged vPro Enterprise: the Core i5-14600K, the Core i7-14700K, and the flagship Core i9-14900K. Intel's vPro Enterprise security features are supported on both Q670 and W680 motherboards, giving users more choice in which board they opt for.

The rest of the above Intel 14th Gen Core series stack, including the non-monikered chips, e.g., the Core i5-14600, as well as the T series, which is optimized for efficient workloads with a lower TDP than the rest of the stack, supports both vPro Enterprise and vPro Essentials. This includes two processors from the Core i9 family, the Core i9-14900 and Core i9-14900T, two from the i7 series, the Core i7-14700 and Core i7-14700T, and four from the i5 series, the Core i5-14600, Core i5-14500, Core i5-14600T, and Core i5-14500T.


The ASRock Industrial IMB-X1231 W680 mini-ITX motherboard supports vPro Enterprise and Essentials

For the non-K processors mentioned above, different levels of vPro support are offered depending on the motherboard chipset: on a Q670 motherboard, users can specifically opt for Intel's cut-down vPro Essentials security features. Intel states that users with either a Q670 or W680 board can use the full vPro Enterprise security features with the Core i9-14900K, the Core i7-14700K, and the Core i5-14600K. Outside of this, none of the 14th Gen SKUs with the KF (unlocked with no iGPU) or F (no iGPU) monikers are listed with support for vPro.
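To make the pairing rules above concrete, here is a small illustrative sketch (in Python; the SKU lists are summarized from the table and paragraphs above, and the function and variable names are ours, not Intel's terminology) of how chipset and SKU combine to determine the available vPro tier.

```python
# Illustrative sketch of the 14th Gen desktop vPro support rules described above.
# SKU lists are summarized from the article; names are our own, not Intel terminology.

VPRO_CHIPSETS = {"Q670", "W680"}                      # only these chipsets enable vPro on 14th Gen
ENTERPRISE_ONLY = {"i9-14900K", "i7-14700K", "i5-14600K"}
ENTERPRISE_OR_ESSENTIALS = {
    "i9-14900", "i9-14900T", "i7-14700", "i7-14700T",
    "i5-14600", "i5-14500", "i5-14600T", "i5-14500T",
}

def vpro_support(cpu: str, chipset: str) -> str:
    """Return the vPro tier(s) available for a given CPU/chipset pairing."""
    if chipset not in VPRO_CHIPSETS:
        return "None (vPro requires a Q670 or W680 board)"
    if cpu in ENTERPRISE_ONLY:
        return "vPro Enterprise"
    if cpu in ENTERPRISE_OR_ESSENTIALS:
        return "vPro Enterprise or vPro Essentials"
    return "None (KF/F parts and other SKUs are not listed with vPro support)"

print(vpro_support("i9-14900K", "W680"))   # vPro Enterprise
print(vpro_support("i5-14500", "Q670"))    # vPro Enterprise or vPro Essentials
print(vpro_support("i7-14700F", "Z790"))   # None
```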

Intel Meteor Lake with vPro: Core Ultra H and U Series get Varied vPro Support

Further to the Intel 14th Gen Core series for desktops, Intel has also enabled vPro support for their latest Meteor Lake-based Core Ultra H and U series mobile processors. Unlike the desktop platform for vPro, things are a little different in the mobile space, as Intel offers vPro on their mobile SKUs, either with vPro Enterprise or vPro Essentials, not both.

Intel Core Ultra H and U-Series Processors with vPro
(Meteor Lake)
AnandTech | Cores (P+E+LP/T) | P-Core Turbo Freq (MHz) | E-Core Turbo Freq (MHz) | GPU | GPU Freq (MHz) | L3 Cache (MB) | vPro Support (Ent/Ess) | Base TDP | Turbo TDP
Core Ultra 9 185H | 6+8+2/22 | 5100 | 3800 | Arc Xe (8) | 2350 | 24 | Enterprise | 45 W | 115 W
Core Ultra 7 165H | 6+8+2/22 | 5000 | 3800 | Arc Xe (8) | 2300 | 24 | Enterprise | 28 W | 64/115 W
Core Ultra 7 155H | 6+8+2/22 | 4800 | 3800 | Arc Xe (8) | 2250 | 24 | Essentials | 28 W | 64/115 W
Core Ultra 7 165U | 2+8+2/14 | 4900 | 3800 | Arc Xe (4) | 2000 | 12 | Enterprise | 15 W | 57 W
Core Ultra 7 164U | 2+8+2/14 | 4800 | 3800 | Arc Xe (4) | 1800 | 12 | Enterprise | 9 W | 30 W
Core Ultra 7 155U | 2+8+2/14 | 4800 | 3800 | Arc Xe (4) | 1950 | 12 | Essentials | 15 W | 57 W
Core Ultra 5 135H | 4+8+2/18 | 4600 | 3600 | Arc Xe (7) | 2200 | 18 | Enterprise | 28 W | 64/115 W
Core Ultra 5 125H | 4+8+2/18 | 4500 | 3600 | Arc Xe (7) | 2200 | 18 | Essentials | 28 W | 64/115 W
Core Ultra 5 135U | 2+8+2/14 | 4400 | 3600 | Arc Xe (4) | 1900 | 12 | Enterprise | 15 W | 57 W
Core Ultra 5 134U | 2+8+2/14 | 4400 | 3800 | Arc Xe (4) | 1750 | 12 | Enterprise | 9 W | 30 W
Core Ultra 5 125U | 2+8+2/14 | 4300 | 3600 | Arc Xe (4) | 1850 | 12 | Essentials | 15 W | 57 W

The above table highlights not just the specifications of each Core Ultra 9, 7, and 5 SKU, but also denotes which model gets what level of vPro support. Starting at the top, the Core Ultra 9 185H, the current mobile flagship Meteor Lake chip, supports vPro Enterprise, as do the top-tier SKUs from the Core Ultra 7 and 5 families, the Core Ultra 7 165H and the Core Ultra 5 135H. Other chips with vPro Enterprise support include the Core Ultra 7 165U and Core Ultra 7 164U, as well as the Core Ultra 5 135U and Core Ultra 5 134U.

Intel's other Meteor Lake chips, including the Core Ultra 7 155H, the Core Ultra 7 155U, the Core Ultra 5 125H, and the Core Ultra 5 125U, only come with support for Intel's vPro Essentials features, not vPro Enterprise. This represents a slight 'dropping of the ball' from Intel, something we also highlighted in our Intel 13th Gen Core gets vPro piece last year.

Intel vPro Support Announcement With No New Hardware, Why Announce Later?

It is worth noting that Intel's announcement of vPro support for its first batch of Meteor Lake Core Ultra SKUs isn't entirely new; Intel did highlight that Meteor Lake would support vPro last year within their Series 1 Product Brief dated 12/20/2023. Intel's formal announcement of vPro support for Meteor Lake is more about which SKU has which level of support, and we feel this could pose problems for users who have already purchased Core Ultra series notebooks for business and enterprise use. Multiple retail listings, including those at Newegg and directly from HP, make no mention of vPro whatsoever.

This could mean that a user, or an SME making a bulk purchase, has already bought a notebook with, say, a Core Ultra 5 125H (vPro Essentials) without realizing that the chip doesn't support vPro Enterprise and the additional security features from which the business could benefit. We reached out to Intel, and they sent us the following statement.

"Since we are launching vPro powered by Intel Core Ultra & Intel Core 14th Gen this week, prospective buyers will begin seeing the relevant system information on OEM and enterprise retail partner (eg. CDW) websites in the weeks ahead. This will include information on whether a system is equipped with vPro Enterprise or Essentials so that they can purchase the right system for their compute needs."

Intel Previews Sierra Forest with 288 E-Cores, Announces Granite Rapids-D for 2025 Launch at MWC 2024

26 février 2024 à 12:25

At MWC 2024, Intel confirmed that Granite Rapids-D, the successor to the Ice Lake-D processors, will come to market sometime in 2025. Furthermore, Intel also provided an update on the 6th Gen Xeon family, codenamed Sierra Forest, which is set to launch later this year and will feature up to 288 cores, aimed at giving vRAN network operators a boost in per-rack performance for 5G workloads.

These chips are designed for handling infrastructure, applications, and AI workloads, and aim to capitalize on current and future AI and automation opportunities, improving operational efficiency and lowering ownership costs in next-gen applications, reflecting Intel's vision of integrating 'AI Everywhere' across various infrastructures.

Intel Sierra Forest: Up to 288 Efficiency Cores, Set for 1H 2024

The first of Intel's announcements at MWC 2024 focuses on the upcoming Sierra Forest platform, which is scheduled for the first half of 2024. First announced in February 2022 during Intel's Investor Meeting, Sierra Forest marks the point where Intel splits its server roadmap into solutions featuring only performance (P) cores or only efficiency (E) cores. We already know that Sierra Forest features a full E-core architecture designed for maximum efficiency in scale-out, cloud-native, and containerized environments.

These chips utilize CPU chiplets built on the Intel 3 process alongside twin I/O chiplets based on the Intel 7 node. This combination allows for a scalable architecture, which can accommodate increasing core counts by adding more chiplets, optimizing performance for complex computing environments.

Sierra Forest, Intel's all-E-core Xeon processor family, is anticipated to significantly enhance power efficiency, offering up to 288 E-cores per socket. Intel also claims that Sierra Forest is expected to deliver 2.7 times the performance-per-rack of an unspecified platform from 2021; this could be either Ice Lake or Cascade Lake, but Intel didn't say which.

Additionally, Intel is promising power savings of up to 30% with Sierra Forest now that its Infrastructure Power Manager (IPM) application is commercially available for 5G cores. Power manageability and efficiency are growing challenges for network operators, so IPM is designed to let operators optimize energy efficiency and TCO savings.

Intel is also focusing on vRAN, which is vital for modern mobile networks: many operators are forgoing dedicated hardware and instead leaning towards virtualized radio access networks (vRANs). Using vRAN Boost, an accelerator integrated within Xeon processors, Intel states that the 4th Gen Xeon should be able to reduce power consumption by around 20% while doubling the available network capacity.

Intel's push for 'AI Everywhere' is also a constant focus here, with AI's role in vRAN management becoming more crucial. Intel has announced the vRAN AI Developer Kit, which is available to select partners. This allows partners and 5G network providers to develop AI models to optimize for vRAN applications, tailor their vRAN-based functions to more use cases, and adapt to changes within those scenarios.

Intel Granite Rapids-D: Coming in 2025 For Edge Solutions

Intel's Granite Rapids-D, designed for edge solutions, is set to bolster Intel's role in virtualized radio access network (vRAN) workloads in 2025. Intel also promises marked efficiency enhancements, along with vRAN Boost optimizations similar to those expected on Sierra Forest. Set to follow on from the current Ice Lake-D parts for the edge, Granite Rapids-D is expected to use the same performance (P) cores as the Granite Rapids server parts, with the V/F curve optimized for the lower-powered edge platform. As outlined by Intel, the previous 4th Gen Xeon platform effectively doubled vRAN capacity, enhancing network capabilities while reducing power consumption by up to 20%.

Granite Rapids-D aims to further these advancements, utilizing Intel AVX for vRAN and integrated Intel vRAN Boost acceleration, thereby offering substantial cost and performance benefits on a global scale. While Intel hasn't provided a specific date (or month) of when we can expect to see Granite Rapids-D in 2025, Intel is currently in the process of sampling these next-gen Xeon-D processors with partners, aiming to ensure a market-ready platform at launch.


AMD Fixed the STAPM Throttling Issue, So We Retested The Ryzen 7 8700G and Ryzen 5 8600G

23 février 2024 à 13:00

When we initially reviewed the latest Ryzen 8000G APUs from AMD last month, the Ryzen 7 8700G and Ryzen 5 8600G, we became aware of an issue that caused the APUs to throttle after a few minutes. This posed a problem for a couple of reasons: first, it meant our data didn't reflect the true capabilities of the processors, and second, it highlighted a mobile-oriented feature that AMD forgot to disable when bringing their Phoenix silicon (Ryzen 7040) over to the desktop.

We updated the data in our review of the Ryzen 7 8700G and Ryzen 5 8600G to reflect performance both with STAPM active on the initial firmware and with STAPM removed on the latest firmware. Our full, updated review can be accessed by clicking the link below:

As we highlighted in our Ryzen 8000G APU STAPM Throttling article, AMD, through AM5 motherboard vendors such as ASUS, has rolled out updated firmware that removes the STAPM limitation. To quickly recap Skin Temperature-Aware Power Management (STAPM): AMD introduced the feature in 2014 for its mobile processors. It extends on-die power management by considering not only the processor's internal temperatures, taken from on-chip thermal diodes, but also the laptop's surface temperature (i.e., the skin temperature).

The aim of STAPM is to prevent laptops from becoming uncomfortably warm for users, allowing the processor to actively throttle back its heat generation based on the thermal parameters between the chassis and the processor itself. The fundamental issue with STAPM in the case of the Ryzen 8000G APUs, including the Ryzen 7 8700G and Ryzen 5 8600G, is that these are mobile processors packaged into a format for use with the AM5 desktop platform. As a desktop platform is built into a chassis that isn't placed on a user's lap, the STAPM feature becomes irrelevant.
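For illustration only, here is a minimal sketch of what a skin-temperature-aware power limiter could look like. AMD's actual STAPM algorithm lives in firmware and is not public, so the thresholds, limits, and function names below are entirely hypothetical; the 84 W and 65 W figures simply echo the power levels we measured on the Ryzen 7 8700G below.

```python
# Minimal sketch of a skin-temperature-aware power limiter, illustrating the
# general STAPM concept described above. All thresholds, limits, and names are
# hypothetical; AMD's actual algorithm lives in firmware and is not public.

def stapm_power_limit(die_temp_c: float, skin_temp_c: float,
                      boost_limit_w: float = 84.0,       # short-term package power
                      sustained_limit_w: float = 65.0,   # long-term (throttled) power
                      skin_limit_c: float = 45.0,        # max comfortable chassis temp
                      die_limit_c: float = 95.0) -> float:
    """Return the package power limit for the next control interval."""
    if skin_temp_c >= skin_limit_c or die_temp_c >= die_limit_c:
        # Chassis (or die) is too warm: pull back to the sustained limit.
        return sustained_limit_w
    # Otherwise the short-term boost limit is allowed.
    return boost_limit_w

# A cool chassis permits boost power; a warm one forces throttling.
print(stapm_power_limit(die_temp_c=70.0, skin_temp_c=38.0))  # 84.0
print(stapm_power_limit(die_temp_c=70.0, skin_temp_c=47.0))  # 65.0
```

On a desktop, of course, there is no skin temperature worth tracking, which is exactly why the limit should never have been active on the 8000G parts.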

As we saw when we ran a gaming load over a prolonged period on the Ryzen 7 8700G with the launch firmware, we hit power throttling (STAPM) after around 3 minutes. As the above chart shows, power dropped from a sustained 83-84 W down to around 65 W, a drop of around 22%. While Zen 4 is a very efficient architecture at lower power levels, overall performance will still drop once this limit is hit. Unfortunately, AMD forgot to remove the STAPM limits when transitioning Phoenix to the AM5 platform.
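As a quick sanity check on that figure, using the rounded values above (~84 W sustained falling to ~65 W once throttled):

```python
# Percentage power drop from the ~84 W boost level to the ~65 W
# STAPM-throttled level, using the rounded figures quoted above.
boost_power_w = 84.0
throttled_power_w = 65.0

drop_pct = (boost_power_w - throttled_power_w) / boost_power_w * 100
print(f"{drop_pct:.1f}% lower package power")  # ~22.6%, in line with the ~22% quoted
```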

Retesting the same game (F1 2023) at the same settings (720p High) with the updated firmware in which STAPM has been removed, we aren't experiencing any of the power throttling we initially saw. Power is sustained for over 10 minutes of testing (we did test for double this), and we saw no drops in package power, at least not from anything related to STAPM. This means that for users on the latest firmware, whichever AM5 motherboard is being used, power and, ultimately, performance remain consistent with what the Ryzen 7 8700G should have delivered at launch.

The key question is: does removing the STAPM limit change our initial results for the Ryzen 7 8700G and Ryzen 5 8600G, and if so, by how much? We added the new data to our review of the Ryzen 7 8700G and Ryzen 5 8600G but kept the initial results so that readers can see any differences in performance. Ultimately, benchmark runs are limited to the time it takes to complete them, but real-world tasks such as video rendering and other long, sustained loads are more likely to show gains. After all, a 22% drop in power is considerable, especially over a task that could take an hour.
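For readers who want to run the same before/after comparison on their own data, a minimal helper along these lines computes the percentage deltas; the benchmark names and scores in the example are placeholders, not our measured results.

```python
# Minimal helper for comparing benchmark scores before and after a firmware
# update. The example values are placeholders, not our measured results.

def percent_gain(before: float, after: float) -> float:
    """Return the percentage improvement of 'after' relative to 'before'."""
    return (after - before) / before * 100

results = {
    # benchmark: (score with STAPM active, score with STAPM removed)
    "Blender 3.6 (samples/min)": (100.0, 107.5),   # placeholder values
    "World of Tanks 768p (FPS)": (200.0, 202.0),   # placeholder values
}

for name, (before, after) in results.items():
    print(f"{name}: {percent_gain(before, after):+.1f}%")
```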

(4-1d) Blender 3.6: Pabellon Barcelona (CPU Only)

Using one of our longer benchmarks, Blender 3.6, to highlight where performance gains are notable on the latest firmware with the STAPM limitation removed, we saw an increase of around 7.5% on the Ryzen 7 8700G. In the same benchmark, we saw an increase of around 4% on the Ryzen 5 8600G.

Across all of the Blender 3.6 tests in the rendering section of our CPU performance suite, performance gains hovered between 2 and 4.4% on the Ryzen 5 8600G, and between 5 and 7.5% on the Ryzen 7 8700G. This isn't really free performance; it's the performance that should have been there to begin with at launch.

IGP World of Tanks - 768p Min - Average FPS

Looking at how STAPM affected our initial data, the difference in World of Tanks at 768p Minimum settings was marginal at best, around 1%. Given how CPU-intensive World of Tanks is, and combining this with integrated graphics, the AMD Ryzen APUs (5000G and 8000G) both shine compared to Intel's integrated UHD Graphics in gaming. Because gaming benchmarks are typically time-limited runs, it's harder to identify performance gains. The key takeaway here is that with the STAPM limitation removed, performance shouldn't drop over sustained periods, so our figures above and our updated review data aren't compromised.

(i-3) Total War Warhammer 3 - 1440p Ultra - Average FPS

Regarding gaming with a discrete graphics card, we saw no drastic changes in performance, as highlighted by our Total War Warhammer 3 at 1440p Ultra benchmark. Across the board, in our discrete graphics results with both the Ryzen 7 8700G and the Ryzen 5 8600G, we saw nothing but marginal differences in performance (less than 1%). As we've mentioned, removing the STAPM limitations doesn't necessarily improve performance. Still, it allows the APUs to keep the same performance level for sustained periods, which is how it should have been at launch. With STAPM applied as with the initial firmware at launch on AM5 motherboards, power would drop by around 22%, limiting the full performance capability over prolonged periods.

As we've mentioned, we have updated our full review of the AMD Ryzen 7 8700G and Ryzen 5 8600G APUs to reflect the latest data gathered on the updated firmware, and we can confirm that the STAPM issue has been fixed and that performance is as it should be on both chips.

You can access all of our updated data in our review of the Ryzen 7 8700G and Ryzen 5 8600G by clicking the link below.

AMD CEO Dr. Lisa Su to Deliver Opening Keynote at Computex 2024

22 février 2024 à 20:00

Taiwan External Trade Development Council (TAITRA), the organizer of Computex, announced today that Dr. Lisa Su, AMD's chief executive officer, will give the trade show's Opening Keynote. Su's speech is set for the morning of June 3, 2024, shortly before the formal start of the show. According to AMD, the keynote talk will be "highlighting the next generation of AMD products enabling new experiences and breakthrough AI capabilities from the cloud to the edge, PCs and intelligent end devices."

This year's Computex is focused on six key areas: AI computing, Advanced Connectivity, Future Mobility, Immersive Reality, Sustainability, and Innovations. As a leading developer of CPUs, AI and HPC GPUs, consumer GPUs, and DPUs, AMD can speak to most of these topics quite directly.

As AMD is already mid-cycle on most of their product architectures, the company's most recent public roadmaps have them set to deliver major new CPU and GPU architectures before the end of 2024 with Zen 5 CPUs and RDNA 4 GPUs, respectively. AMD has not previously given any finer guidance on when in the year to expect this hardware, though AMD's overall plans for 2024 are notably more aggressive than the start of their last architecture cycle in 2022. Of note, the company has previously indicated that it intends to launch all 3 flavors of the Zen 5 architecture this year – not just the basic core, but also Zen 5c and Zen 5 with V-Cache – as well as a new mobile SoC (Strix Point). By comparison, it took AMD well into 2023 to do the same with Zen 4 after starting with a fall 2022 launch for those first products.


AMD 2022 Financial Analyst Day CPU Core Roadmap

This upcoming keynote will be Lisa Su's third Computex keynote after her speeches at Computex 2019 and Computex 2022. In both cases she also announced upcoming AMD products.

In 2019, she showcased performance improvements of then upcoming 3rd Generation Ryzen desktop processors and 7nm EPYC datacenter processors. Lisa Su also highlighted AMD's advancements in 7nm process technology, showcasing the world's first 7nm gaming GPU, the Radeon VII, and the first 7nm datacenter GPU, the Radeon Instinct MI60.

In 2022, the head of AMD offered a sneak peek at the then-upcoming Ryzen 7000-series desktop processors based on the Zen 4 architecture, promising significant performance improvements. She also teased the next generation of Radeon RX 7000-series GPUs with the RDNA 3 architecture.
