
Update on Intel's Panther Lake at Computex 2024: Powering Up First Intel 18A Wafer Next Week

During his Computex keynote, Intel CEO Pat Gelsinger gave the world a glimpse into the Intel client roadmap through 2026. That roadmap includes Meteor Lake, which launched last year, and Lunar Lake, which we dove into yesterday as Intel disclosed technical details about the upcoming platform. Pat also presented a Panther Lake wafer on stage and shared some additional information about Intel's forthcoming Panther Lake platform, which is expected in 2025.

We covered Intel's initial announcement about the Panther Lake platform last year. It is set to be Intel's first client platform built on the Intel 18A node. Aside from once again affirming that things are on track for a 2025 launch, Pat Gelsinger, Intel's CEO, also confirmed that they will be powering on the first 18A wafer for Panther Lake as early as next week.

Intel CPU Architecture Generations

                        | Alder/Raptor Lake       | Meteor Lake                 | Lunar Lake         | Arrow Lake          | Panther Lake
P-Core Architecture     | Golden Cove/Raptor Cove | Redwood Cove                | Lion Cove          | Lion Cove           | Cougar Cove?
E-Core Architecture     | Gracemont               | Crestmont                   | Skymont            | Crestmont?          | Darkmont?
GPU Architecture        | Xe-LP                   | Xe-LPG                      | Xe2                | Xe2?                | ?
NPU Architecture        | N/A                     | NPU 3720                    | NPU 4              | ?                   | ?
Active Tiles            | 1 (Monolithic)          | 4                           | 2                  | 4?                  | ?
Manufacturing Processes | Intel 7                 | Intel 4 + TSMC N6 + TSMC N5 | TSMC N3B + TSMC N6 | Intel 20A + More    | Intel 18A + ?
Segment                 | Mobile + Desktop        | Mobile                      | LP Mobile          | HP Mobile + Desktop | Mobile?
Release Date (OEM)      | Q4'2021                 | Q4'2023                     | Q3'2024            | Q4'2024             | 2025

One element to consider from last year is that Lunar Lake is built at TSMC, with the Lunar Lake compute tile with Xe2-LPG graphics on TSMC N3B, and the I/O tile on TSMC N6. Pat confirmed on stage that Panther Lake will be on Intel 18A. Still, he didn't confirm whether the chip will be made purely at Intel, or be a mix between Intel and external foundries (à la Meteor Lake). Intel has also yet to confirm the CPU cores to be used, but from what our sources tell us, it sounds like it will be the new Cougar Cove and Darkmont cores.

As we head into the second half of 2024 and after Lunar Lake launches, Intel may divulge more information, including the architectural advancements Panther Lake is expected to bring. Until then, we will have to wait and see.

AMD Announces Zen 5-based EPYC “Turin” Processors: Up to 192 Cores, Coming in H2’2024

With AMD’s Zen 5 CPU architecture only a month away from its first product releases, the new CPU architecture was placed front and center for AMD’s prime Computex 2024 keynote. Outlining how Zen 5 will lead to improved products across AMD’s entire portfolio, the company laid out their product plans for the full triad: mobile, desktop, and servers. And while server chips will be the last parts to be released, AMD also saved the best for last by showcasing a 192 core EPYC “Turin” chip.

Turin is the catch-all codename for AMD’s Zen 5-based EPYC server processors – what will presumably be the EPYC 9005 series. The company has previously disclosed the name in earnings calls and other investor functions, outlining that the chip was already sampling to customers and that the silicon was “looking great.”

The Computex reveal, in turn, is the first time that the silicon has been shown off to the public. And with it, we’ve received the first official confirmation of the chip’s specifications. With SKUs up to 192 CPU cores, it’s going to be a monster of an x86 CPU.

AMD EPYC CPU Generations

AnandTech             | EPYC 5th Gen (Turin, Z5c)  | EPYC 9704 (Bergamo)        | EPYC 9004 (Genoa)          | EPYC 7003 (Milan)
CPU Architecture      | Zen 5c                     | Zen 4c                     | Zen 4                      | Zen 3
Max CPU Cores         | 192                        | 128                        | 96                         | 64
Memory Channels       | 12 x DDR5                  | 12 x DDR5                  | 12 x DDR5                  | 8 x DDR4
PCIe Lanes            | 128 x 5.0                  | 128 x 5.0                  | 128 x 5.0                  | 128 x 4.0
L3 Cache              | ?                          | 256MB                      | 384MB                      | 256MB
Max TDP               | 360W?                      | 360W                       | 400W                       | 280W
Socket                | SP5                        | SP5                        | SP5                        | SP3
Manufacturing Process | CCD: TSMC N3, IOD: TSMC N6 | CCD: TSMC N5, IOD: TSMC N6 | CCD: TSMC N5, IOD: TSMC N6 | CCD: TSMC N7, IOD: GloFo 14nm
Release Date          | H2'2024                    | 06/2023                    | 11/2022                    | 03/2021

Though only a brief tease, AMD’s Turin showcase did confirm a few long-suspected details about the platform. AMD will once again be using their socket SP5 platform for Turin processors, which means the chips are drop-in compatible with EPYC 9004 Genoa (and Bergamo). The reuse of SP5 means that customers and server vendors can immediately swap out chips without having to build/deploy whole new systems. It also means that Turin will have the same base memory and I/O options as the EPYC 9004 series: 12 channels of DDR5 memory, and 128 PCIe 5.0 lanes.

In terms of power consumption, existing SP5 processors top out at 400 Watts, and we’d expect the same for these new, socket-compatible chips.

As for the Turin chip itself, while AMD is not going into further detail on its configuration, all signs point to this being a Zen 5c configuration – that is, built using CCDs designed around AMD’s compact Zen 5 core configuration. This would make the Turin chip on display the successor to Bergamo (EPYC 9704), which was AMD’s first compact core server processor, using Zen 4c cores. AMD’s compact CPU cores generally trade off per-core performance in favor of allowing more CPU cores overall, with lower clockspeed limits (by design) and less cache memory throughout the chip.

According to AMD, the CCDs on this chip were fabbed on a 3nm process (undoubtedly TSMC’s), with AMD apparently looking to take advantage of the densest process available in order to maximize the number of CPU cores they can place on a single chip. Even then, the CCDs featured here are quite sizable, and while we’re waiting for official die size numbers, it would come as no surprise if Zen 5’s higher transistor count more than offset the space savings of moving to 3nm. Still, AMD has been able to squeeze 12 CCDs onto the chip – 4 more than Bergamo – which is what’s allowing them to offer 192 CPU cores instead of 128 as in the last generation.

Meanwhile, the IOD is confirmed to be produced on 6nm. Judging from that fact, the pictures, and what AMD’s doing with their Zen 5 desktop products, there is a very good chance that AMD is using either the same or a very similar IOD as on Genoa/Bergamo. Which goes hand-in-hand with the socket/platform at the other end of the chip staying the same.

AMD’s brief teaser did not discuss at all any other Turin configurations. So there is nothing else official to share about Turin chips built using full-sized Zen 5 CPU cores. With that said, we know that the full-fat cores going into the Ryzen 9000 desktop series pack 8 cores to a CCD and are being fabbed on a 4nm process – not 3nm – so that strongly implies that EPYC Zen 5 CCDs will be the same. Which, if that pans out, means that Turin chips using high performance cores will max out at 96 cores, the same as Genoa.
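The core-count math in the preceding paragraphs can be sanity-checked directly from the CCD figures given: 16 cores per compact Zen 5c/Zen 4c CCD, and 8 cores per full-sized Zen 5 CCD as in Ryzen 9000. A minimal sketch (the helper name is ours, not AMD's):

```python
# Back-of-the-envelope check of EPYC core counts from CCD configurations,
# using the figures reported in the article.

def max_cores(ccds: int, cores_per_ccd: int) -> int:
    """Total CPU cores for a given chiplet configuration."""
    return ccds * cores_per_ccd

# Zen 5c Turin: 12 compact CCDs x 16 cores each
print(max_cores(12, 16))  # 192
# Bergamo (Zen 4c): 8 compact CCDs x 16 cores each
print(max_cores(8, 16))   # 128
# Implied full-core Turin: 12 CCDs x 8 full-sized Zen 5 cores
print(max_cores(12, 8))   # 96, matching Genoa's max
```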

Hardware configurations aside, AMD also showcased a couple of benchmarks, pitting the new EPYC chips against Intel’s Xeons. As you’d expect in a keynote teaser, AMD was winning handily. Though it is interesting to note that the chips benchmarked were all 128 core Turins, rather than the 192 core model being shown off today.

AMD will be shipping EPYC Turin in the second half of this year. More details on the chips and configurations will follow once AMD gets closer to the EPYC launch.

AMD Launching New CPUs for AM4: Ryzen 5000XT Series Coming in July

During their opening keynote at Computex 2024, AMD announced their intention to launch a pair of new Ryzen 5000 processors for their legacy AM4 platform. The new chips, both carrying the XT suffix, are the Ryzen 9 5900XT, a 16 core Zen 3 part, and the Ryzen 7 5800XT, an 8 core Zen 3 part.

The new chips are intended to underscore AMD's ongoing commitment to supporting their consumer platforms over several years. And while the specification changes are rather minor overall – the Zen 3 CPU architecture has long since been taken as far as it can reasonably go – it does give AMD a chance to refresh the platform by slinging hardware at new price points. AMD did something very similar for the Ryzen 3000 generation with the late-model Ryzen 3000XT chips.

AMD Ryzen 5000XT Series Processors (Zen 3)

AnandTech      | Cores / Threads | Base Freq | Turbo Freq | L2 Cache | L3 Cache | TDP
Ryzen 9 5950X  | 16C / 32T       | 3.4 GHz   | 4.9 GHz    | 8 MB     | 64 MB    | 105 W
Ryzen 9 5900XT | 16C / 32T       | 3.3 GHz   | 4.8 GHz    | 8 MB     | 64 MB    | 105 W
Ryzen 9 5900X  | 12C / 24T       | 3.7 GHz   | 4.8 GHz    | 6 MB     | 64 MB    | 105 W
Ryzen 7 5800XT | 8C / 16T        | 3.8 GHz   | 4.8 GHz    | 4 MB     | 32 MB    | 105 W
Ryzen 7 5800X  | 8C / 16T        | 3.8 GHz   | 4.7 GHz    | 4 MB     | 32 MB    | 105 W

We've dedicated many column inches to covering Zen 3 and the Ryzen 5000 series since they launched in late 2020, so there isn't anything new to add here. Zen 3 is no longer AMD's latest and greatest, but the platform as a whole is quite cheap to produce, making it a viable budget offering for new builds, or one last upgrade for old builds.

The Ryzen 9 5900XT is a 16 core part, and isn't to be confused with the Ryzen 9 5900X, which is a 12 core part. It ships with a peak turbo clockspeed of 4.8GHz, 100 MHz lower than the top-tier Ryzen 9 5950X. This makes its XT designation somewhat of a misnomer compared to previous generations of XT chips, although it's clear that AMD has boxed themselves into a corner with their naming scheme, as they need a way to designate that this is a new chip while still placing it below the 5950X.

Looking at the second chip, we have the Ryzen 7 5800XT. This is an 8 core part that does improve on its predecessor, offering a 4.8GHz max turbo clock that is 100MHz higher than the Ryzen 7 5800X's. Both chips otherwise share the same characteristics, including 4 MB of L2 cache and 32 MB of L3 cache, and all four of the chips – the two new XT series parts and the corresponding X series chips – come with a 105 Watt TDP.

In terms of motherboard compatibility, all of the AM4 motherboards that currently support the Ryzen 5000 series are also compatible with the Ryzen 5000XT series, although users will likely need to perform a firmware update to ensure maximum compatibility; the silicon is the same, but the microcode is likely different.

AMD has provided some gaming performance figures comparing the Ryzen 9 5900XT to Intel's 13th Gen Core i7-13700K. The chip offers modest gains in games of up to 4%; it's not mind-blowing, but price could be the decisive factor here.

Regarding price, AMD hasn't disclosed anything official yet ahead of the expected launch of the Ryzen 5000XT series chips in July. It's hard to make a case for a pair of chips to be considered a fully-fledged series, but it does open the door for AMD to perhaps launch more 5000XT series chips in the future.

AMD Announces The Ryzen AI 300 Series For Mobile: Zen 5 With RDNA 3.5, and XDNA2 NPU With 50 TOPS

During AMD's opening keynote at Computex 2024, company CEO Dr. Lisa Su revealed AMD's latest AI PC-focused chip lineup for the mobile market, the Ryzen AI 300 series. Based on AMD's new Zen 5 CPU microarchitecture, the Ryzen AI 300 series – codenamed Strix Point – is intended to offer an across-the-board improvement in mobile SoC performance, with AMD proclaiming that the Ryzen AI 300 series will offer the fastest AI inference performance within the compact and portable PC market.

Under the hood, the new mobile SoC from AMD incorporates not only their new Zen 5 CPU architecture, but also their new RDNA 3.5-based integrated graphics, and the third generation XDNA2-based NPU, the latter of which is rated to deliver 50 TOPS of performance for AI-based workloads. As a result, the Ryzen AI 300 series represents a significant upgrade in AMD's mobile chip lineup, with all of the major aspects of the platform receiving a major upgrade versus their Zen 4-era Phoenix/Hawk Point SoCs. The one thing the new platform won't get, however, is a process node improvement; AMD is building Strix Point on a 4nm node, just like Phoenix/Hawk Point before it.

For this morning's announcement, AMD has unveiled the first two Ryzen AI 300 SKUs designed for notebooks. The first of these is the Ryzen AI 9 HX 370, which features 12 Zen 5 cores with a maximum boost frequency of up to 5.1 GHz, and comes equipped with 36 MB of cache (12 MB L2 + 24 MB L3). The other chip to be announced is the Ryzen AI 9 365, which has two fewer Zen 5 cores (10 cores) and operates with a 5.0 GHz boost frequency and a 10 MB L2 + 24 MB L3 cache allocation.

AMD Unveils Ryzen 9000 CPUs For Desktop, Zen 5 Takes Center Stage at Computex 2024

During AMD's Computex 2024 kick-off keynote, AMD CEO Dr. Lisa Su officially unveiled the company's next generation of Ryzen processors. Today marks the first unveiling of AMD's highly anticipated Zen 5 microarchitecture via the Ryzen 9000 series, which is set to bring several advancements over Zen 4 and the Ryzen 7000 series for desktop PCs, and which will launch sometime in July 2024.

AMD has unveiled four new chip SKUs using its Zen 5 microarchitecture. The AMD Ryzen 9 9950X will be the new consumer flagship part, featuring 16 CPU cores and a speedy 5.7 GHz maximum boost frequency. The other SKUs include 6, 8, and 12 core parts, giving users a varied combination of core and thread counts. All four of these initial chips will be X-series chips, meaning they will have unlocked multipliers and higher TDPs/clockspeeds.

In regards to performance, AMD is touting an average (geomean) IPC increase in desktop workloads of 16% for Zen 5. And with the new desktop Ryzen chips' turbo clockspeeds remaining largely identical to their Ryzen 7000 predecessors', that IPC uplift should translate fairly directly into overall performance gains for the new chips.
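For readers unfamiliar with the term, a "geomean" IPC figure like AMD's 16% is the geometric mean of per-workload performance ratios (new architecture vs. old at fixed clocks). A minimal sketch of the calculation; the per-workload ratios below are hypothetical placeholders, not AMD's actual benchmark numbers:

```python
import math

def geomean_uplift(ratios):
    """Geometric mean of per-workload performance ratios (new/old)."""
    return math.prod(ratios) ** (1 / len(ratios))

# Hypothetical per-workload Zen 5 / Zen 4 IPC ratios for illustration
ratios = [1.10, 1.35, 1.08, 1.22, 1.07]
uplift = geomean_uplift(ratios)
print(f"geomean uplift: {(uplift - 1) * 100:.1f}%")
```

The geometric mean is preferred over the arithmetic mean for ratios because one outlier workload can't dominate the average.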

The AMD Ryzen 9000 series will also launch on the AM5 socket, which debuted with AMD's Ryzen 7000 series and marks AMD's commitment to socket/platform longevity. Along with the Ryzen 9000 series will come a pair of new high-performance chipsets: the X870E (Extreme) and the regular X870. AMD remains tight-lipped about the specific features vendors will integrate into their individual motherboards. Still, we do know that USB 4.0 ports are standard on X870E/X870 boards, along with PCIe 5.0 for both graphics and NVMe storage, and support for higher AMD EXPO memory profiles than previous generations is expected.

Computex 2024 Keynote Preview: The Great PC Powers Convene

The annual Computex computer expo kicks off in Taipei this weekend. And this year’s show is shaping up to be the most packed in years.

Computex rivals CES for the title of most important PC trade show of the year, and in most years it is attended not only by the numerous local Taiwanese firms (Asus, MSI, ASRock, and others), but also by the major chip developers, who have been steadily increasing their presence. And while CES tends to land more high-profile announcements, in recent years it’s been Computex that has delivered the more substantial ones. This is largely because tech firms have aligned their product schedules to roll out new gear in the second half of the year, when retail sales are stronger due to the back-to-school and holiday shopping periods.

This year’s show, in turn, is looking to be an especially big one for the PC ecosystem. All the major PC chip firms – AMD, Intel, NVIDIA, and the 4th Musketeer, Qualcomm – are holding keynote addresses at this year’s show, where they’re expected to announce new slates of PC products to ship later this year. In a normal year there are typically major announcements from only one or two of the major chip firms, so having all four of them delivering lengthy keynotes is setting things up for what should be an exceptional show.

Arm Unveils 2024 CPU Core Designs, Cortex X925, A725 and A520: Arm v9.2 Redefined For 3nm

As the semiconductor industry continues to evolve, Arm stands at the forefront of innovation for its core and IP architecture, especially in the mobile space, pushing the boundaries of technology to deliver cutting-edge solutions for end users. For 2024, Arm's year-on-year strategic advancements focus on enhancing last year's Armv9.2 architecture with a new twist. Arm has rebranded and re-strategized its efforts by introducing the Arm Compute Subsystem (CSS), the direct successor to last year's Total Compute Solutions (TCS23) platform.

Arm is also transitioning its latest IP and Cortex core designs – the large Cortex X925, the middle Cortex A725, and the refreshed, smaller Cortex A520 – to more advanced 3 nm process technology. Arm promises that the 3 nm process node will deliver unprecedented performance gains compared to last year's designs, along with power efficiency and scalability improvements and new front-end and back-end refinements to its Cortex series of cores. Arm's new solutions, with their complete AArch64 64-bit instruction execution and their focus on mobile and notebook platforms, look set to power next-generation mobile and AI applications and to redefine end users' expectations of Android and Windows on Arm products.

One More EPYC: AMD Launches Entry-Level Zen 4-based EPYC 4004 Series

Ever since AMD re-emerged as a major competitor within the x86 CPU scene, one of AMD’s top priorities has been to win over customers in the highly lucrative and profitable server market. It’s a strategy that’s paid off well for AMD, as while they’re still the minority player in the space, they’ve continued to whittle away at what was once Intel’s absolute control over the market, slowly converting more and more customers over to the EPYC ecosystem.

Now as the Zen 4 CPU architecture approaches its second birthday, AMD is launching one final line of EPYC chips, taking aim at yet another Xeon market segment. This time it’s all about the entry-level 1P server market – small scale, budget-conscious users who only need a handful of CPU cores – which AMD is addressing with their new EPYC 4004 series processors.

Within AMD’s various product stacks, the new EPYC 4004 family essentially replaces Ryzen chips for use in servers. Ryzen for servers was never a dedicated product lineup within AMD, but nonetheless it has been a product segment within the company since 2019, with AMD aiming it at smaller-scale hosting providers who opted to use racks of consumer-grade hardware, rather than going the high-density route with high core count EPYC processors.

With the upgrade to EPYC status, that hardware ecosystem is being re-deployed as a proper lineup with dedicated chips, and a handful of additional features befitting an EPYC chip. Consequently, AMD is also expanding the scope of the market segments they’re targeting by a hair, roping in small and medium business (SMB) users, whom AMD wasn’t previously chasing. Though regardless of the name on the market segment, the end result is that AMD is carving out a budget-priced series of EPYC chips with 4 to 16 cores based on their consumer platforms.

Underlying the new EPYC 4004 series is AMD’s tried and true AM5 platform and Raphael processors, which we know better as the Ryzen 7000 series. Their new EPYC counterparts are an 8 chip stack composed almost entirely of rebranded Ryzen 7000 SKUs, with all the same core counts, clockspeeds, and TDPs as their counterparts. The sole exception is the very cheapest chip of the bunch, the 4 core 4124P.

AMD EPYC 4004 Processors

AnandTech | Cores / Threads | Base Freq | 1T Freq | L3 Cache   | PCIe     | Memory               | TDP (W) | Price (1KU) | Ryzen Version
4584PX    | 16 / 32         | 4.2 GHz   | 5.7 GHz | 128MB (3D) | 28 x 5.0 | 2 x DDR5-5200 UDIMM  | 120     | $699        | 7950X3D
4484PX    | 12 / 24         | 4.4 GHz   | 5.6 GHz | 128MB (3D) | 28 x 5.0 | 2 x DDR5-5200 UDIMM  | 120     | $599        | 7900X3D
4564P     | 16 / 32         | 4.5 GHz   | 5.7 GHz | 64MB       | 28 x 5.0 | 2 x DDR5-5200 UDIMM  | 170     | $699        | 7950X
4464P     | 12 / 24         | 3.7 GHz   | 5.4 GHz | 64MB       | 28 x 5.0 | 2 x DDR5-5200 UDIMM  | 65      | $429        | 7900
4364P     | 8 / 16          | 4.5 GHz   | 5.4 GHz | 32MB       | 28 x 5.0 | 2 x DDR5-5200 UDIMM  | 105     | $399        | 7700X
4344P     | 8 / 16          | 3.8 GHz   | 5.3 GHz | 32MB       | 28 x 5.0 | 2 x DDR5-5200 UDIMM  | 65      | $329        | 7700
4244P     | 6 / 12          | 3.8 GHz   | 5.1 GHz | 32MB       | 28 x 5.0 | 2 x DDR5-5200 UDIMM  | 65      | $229        | 7600
4124P     | 4 / 8           | 3.8 GHz   | 5.1 GHz | 16MB       | 28 x 5.0 | 2 x DDR5-5200 UDIMM  | 65      | $149        | New

Since these are all based on AMD’s consumer discrete CPUs, the underlying architecture in all of these chips is Zen 4 throughout. So despite being positioned below the EPYC 8004 Siena series, you won’t find any Zen 4c CPU cores here; everything is full-fat Zen 4 CCDs. Which means that while there are relatively few cores overall (for an EPYC processor), they are all high-performing cores, with nothing turboing lower than 5.1GHz.

Notably here, AMD is mixing in some of their 3D V-Cache chip SKUs as well, which are signified with the “PX” suffix. Based on the 7950X3D and 7900X3D respectively, each of these chips has 1 CCD with V-Cache stacked on top of it, affording the chip a total of 128MB of L3 cache. The remaining 6 SKUs all get the “P” suffix – indicating they’re 1 socket processors – and come with TDPs ranging from 65 Watts to 170 Watts.

This does mean that, by EPYC server standards, the 4004 series is not particularly energy efficient; this is a lineup intended to be cost-effective first and foremost. Energy efficiency instead remains the domain of the EPYC 8004, with its modestly-clocked, many-core Zen 4c designs.

The reuse of Zen 4/AM5 means that the EPYC 4004 series comes with all of the features we’ve come to expect from the platform, including 28 lanes of PCIe 5.0, 2 channels (128-bits) of DDR5 memory at speeds up to DDR5-5200, and even integrated graphics. Since this is a server part, ECC is officially supported on the chips – though do note that like the Ryzen Pro workstation chips, this is UDIMM-only; registered DIMMs (RDIMMs) are not supported.
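Those memory specs also pin down the platform's peak theoretical memory bandwidth, which follows directly from transfer rate times bus width. A quick sketch (decimal GB/s; the helper name is ours):

```python
# Peak theoretical DRAM bandwidth: transfers/sec x bus width in bytes.

def peak_bw_gbs(mt_per_s: int, bus_bits: int) -> float:
    """Peak bandwidth in GB/s (decimal) for a DDR memory interface."""
    return mt_per_s * 1e6 * (bus_bits / 8) / 1e9

# EPYC 4004: 2 channels (128 bits total) of DDR5-5200
bw = peak_bw_gbs(5200, 128)
print(f"{bw:.1f} GB/s")  # 83.2 GB/s
```

Real-world sustained bandwidth will be lower, of course; this is the ceiling the interface allows.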

AMD isn’t disclosing the chipset being paired with the EPYC 4004 processors, and while it’s undoubtedly going to be AMD’s favorite ASMedia-designed I/O chipset, it’s interesting to note that it’s at the motherboard level where the new EPYC platform’s real server credentials lie. Separating itself from rank-and-file Ryzens, the EPYC 4004 platform is getting several additional enterprise features, including baseboard management controller (BMC) support, software RAID (RAIDXpert2 for Server), and official server OS support. To be sure, this is still a fraction of the features found in a high-end enterprise solution like the EPYC 9004/8004 series, but it’s some additional functionality befitting a platform meant to be used in servers.

AMD’s new chips, in turn, are designed to compete against Intel’s entry-level Xeon E family. Itself a rebadging of consumer hardware (Raptor Lake), the Xeon E family is a P-core-only chip lineup, with Intel offering SKUs with 4, 6, or 8 CPU cores. This leaves the EPYC 4004 family somewhat uniquely positioned, as Intel doesn’t have anything that’s a true counterpart to AMD’s 12 and 16 core chips; after Xeon E comes the far more capable (and expensive) Xeon W family. So part of AMD’s strategy with the EPYC 4004 family is to serve a niche that Intel does not.

(As a side bonus, AMD’s core counts also play well with Windows Server 2022 licensing. The Standard license covers up to 16 cores, so a top-end EPYC 4004 chip lets server owners max out their license, amortizing the software cost over more cores.)
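The licensing point above is simple arithmetic: a fixed-price license covering up to 16 cores gets cheaper per core as the chip fills out that allotment. A sketch of the amortization; the license price below is a hypothetical placeholder, not Microsoft's actual pricing:

```python
# Per-core software cost under a license that covers up to 16 cores.
LICENSE_PRICE = 1000.0   # hypothetical cost of one 16-core Standard license
COVERED_CORES = 16       # cores covered by the base license

for cores in (4, 8, 12, 16):          # EPYC 4004 core counts
    assert cores <= COVERED_CORES     # every SKU fits within one license
    per_core = LICENSE_PRICE / cores
    print(f"{cores:2d} cores -> ${per_core:,.2f} of license cost per core")
```

At the hypothetical price, the 16 core chip works out to a quarter of the per-core software cost of the 4 core chip.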

With regards to performance, Raptor Lake versus Zen 4 is largely settled by now. So I won’t spend too much time on AMD’s (many) benchmark slides. But suffice it to say, with a significant core count advantage, AMD can deliver an equally significant performance advantage in highly multi-threaded workloads (though in that scenario, it does come with a similar spike in power consumption compared to the 95 Watt Intel chips).

Wrapping things up, AMD is launching the new EPYC 4004 product stack immediately. With many of AMD’s regular server partners already signed up – and the core hardware readily available – there won’t be much of a ramp-up period to speak of.

Intel Teases Lunar Lake CPU Ahead of Computex: Most Power Efficient x86 Chip Yet

The next few weeks in the PC industry are going to come fast and furious. Between today and mid-June are multiple conferences and trade shows, including Microsoft Build and the king of PC trade shows: Computex Taiwan. With all three PC CPU vendors set to present, there’s a lot going on, and a lot of product announcements to be had. But even before those trade shows start, Intel is looking to make the first move this afternoon with an early preview of its next-gen mobile processor, Lunar Lake.

While Intel hasn’t said too much about what to expect from their Computex 2024 keynote thus far, it’s clear that Intel’s next-gen CPUs – Lunar Lake for mobile, and Arrow Lake for mobile/desktop – are going to be two of the major stars of the show. Intel has previously teased and/or demoed both chips (Lunar more so than Arrow), and this afternoon the company is releasing a bit more information on Lunar Lake even before Computex kicks off.

Officially, today’s reveal is a preview of Intel’s next Tech Tour event, which is taking place at the end of May. Unofficially, this is the exact same date and time as the embargo on Qualcomm Snapdragon X laptop announcements, which are slated to hit retail shelves next month. Lunar Lake laptops, by contrast, will not hit retail shelves until Q4 of this year. So although the additional technical details from today’s disclosure are great to have, looking at the bigger picture it’s difficult to interpret this reveal as anything less than a bald-faced effort to interdict the Snapdragon X launch (not that Qualcomm hasn’t also been crowing about SDX for the last 7 months). Which, if nothing else, goes to show the current tumultuous state of the laptop CPU market, and that Intel isn’t nearly as secure in their position as they have traditionally been.

Intel Core i9-14900KS Review: The Swan Song of Raptor Lake With A Super Fast 6.2 GHz Turbo

For numerous generations of their desktop processor releases, Intel has made available a selection of high-performance special edition "KS" CPUs that add a little extra compared to their flagship chip. Aimed primarily at enthusiasts looking for the fastest processors, Intel's latest Core i9-14900KS is a super-fast addition to its 14th Generation Core lineup, with out-of-the-box turbo clock speeds of up to 6.2 GHz. It also marks the end of an era, as it is the last processor to carry the 'i' in Intel's legendary nomenclature before the branding is dropped for future desktop chip releases.

Reaching speeds of up to 6.2 GHz out of the box, the Core i9-14900KS is the fastest desktop CPU in the world right now, at least in terms of stock frequencies. Building on Intel's 'regular' flagship chip, the Core i9-14900K, the Core i9-14900KS uses the same refreshed Raptor Lake (RPL-R) 8P+16E core chip design with a 200 MHz higher boost clock speed, and it also gets a 100 MHz bump in P-core base frequency.

This new KS series SKU shows Intel's drive to offer an even faster alternative to their regular desktop K series offerings, and with the Core i9-14900KS, they look to provide the best silicon from their Raptor Lake Refresh series, with more performance on tap for those who can unlock it. The caveat is that achieving these blisteringly fast 6.2 GHz clock speeds on the P-cores comes at the cost of power and heat; keeping a processor that pulls upwards of 350 W cool is a challenge in its own right, and users need to factor this in if even contemplating a KS series SKU.

In our previous KS series review, the Core i9-13900KS reached 360 W at its peak, considerably more than the Core i9-13900K. The Core i9-14900KS, built on the same core architecture, is expected to draw even more than the Core i9-14900K. We aim to compare Intel's final Core i series processor against the best of what both Intel and AMD have available, and it will be interesting to see how much extra performance the KS can extract compared to the regular K series SKU.

AMD Hits Record High Share in x86 Desktops and Servers in Q1 2024

Coming out of the dark times that preceded the launch of AMD's Zen CPU architecture in 2017, to say that AMD has turned things around on the back of Zen would be an understatement.  Ever since AMD launched its first Zen-based Ryzen and EPYC processors for client and server computers, it has been consistently gaining x86 market share, growing from a bit player to a respectable rival to Intel (and all at Intel's expense).

The first quarter of this year was no exception, according to Mercury Research: the company achieved record-high unit shares in the x86 desktop and x86 server CPU markets thanks to the success of its Ryzen 8000-series client products and 4th Generation EPYC processors.

"Mercury noted in their first quarter report that AMD gained significant server and client revenue share driven by growing demand for 4th Gen EPYC and Ryzen 8000 series processors," a statement by AMD reads.

Desktop PCs: AMD Achieves Highest Share in More Than a Decade 

Desktops, particularly DIY desktops, have always been AMD's strongest market. After the company launched its Ryzen processors in 2017, it doubled its presence in desktops in just three years. But in recent years the company has had to prioritize production of more expensive CPUs for datacenters, which led to some erosion of its desktop and mobile market shares.

As the company secured more capacity at TSMC, it started to gradually increase production of desktop processors. In Q4 last year it introduced its Zen 4-based Ryzen 8000/Ryzen 8000 Pro processors for mainstream desktops, which appeared to be pretty popular with PC makers.

As a result of this and other factors, AMD increased its desktop CPU unit share by 4.7 percentage points year-over-year in Q1 2024, reaching 23.9%, the highest desktop CPU market share the company has commanded in over a decade. Interestingly, AMD does not attribute its success on the desktop front to any particular product or product family, which implies that there are multiple factors at play.

Mobile PCs: A Slight Drop for AMD amid Intel's Meteor Lake Ramp

AMD has been gradually regaining share in laptops for about 1.5 years now, and sales of its Zen 4-based Ryzen 7040-series processors were quite strong in Q3 2023 and Q4 2023, when the company's unit share increased to 19.5% and 20.3%, respectively, as AMD-based notebook platforms ramped up. Meanwhile, Intel's Core Ultra 'Meteor Lake' powered machines only began to hit retail shelves in Q4'23, and their subsequent ramp has boosted sales of Intel's laptop processors.

In the first quarter, AMD's unit share of the notebook CPU market decreased to 19.3%, down one percentage point sequentially. Meanwhile, the company still posted a significant year-over-year unit share increase of 3.1 points and a revenue share increase of 4 points, which signals a rising average selling price for AMD's latest mobile Ryzen processors.

Client PCs: Slight Gain for AMD, Small Loss for Intel

Overall, Intel remained the dominant force in client PC sales in the first quarter of 2024, with a 79.4% market share, leaving 20.6% for AMD. This is not particularly surprising given how strong and diverse Intel's client products lineup is. Even with continued success, it will take AMD years to grow sales by enough to completely flip the market.

But AMD actually gained 0.3 points of unit share sequentially and 3.6 points year-over-year. Notably, however, AMD's revenue share of the client PC market is significantly lower than its unit share (16.3% vs 20.6%), so the company is still somewhat pigeonholed into selling more budget and fewer premium processors overall. Even so, the company has made a strong 3.8 point revenue share gain since the first quarter of 2023, when its revenue share was around 12.5% on a unit share of 17%.
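The unit-vs-revenue share gap translates directly into a relative average selling price (ASP): dividing revenue share by unit share gives a vendor's ASP as a fraction of the overall market's ASP. Using the Q1 2024 client figures from the article:

```python
# AMD's client PC figures for Q1 2024, as reported above
unit_share = 20.6     # % of client CPUs shipped
revenue_share = 16.3  # % of client CPU revenue

# (AMD revenue / AMD units) / (market revenue / market units)
# simplifies to revenue_share / unit_share
asp_ratio = revenue_share / unit_share
print(f"AMD's client ASP is ~{asp_ratio:.0%} of the market average")
```

A ratio under 1.0 confirms the point in the text: AMD is shipping a below-average mix of budget versus premium parts.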

Servers: AMD Grabs Another Piece of the Market

AMD's EPYC datacenter processors are undeniably the crown jewel of the company's CPU product lineup. While AMD's market share in desktops and laptops has fluctuated in recent years, the company has been steadily gaining presence in the highly lucrative (and profitable) server market, both in unit terms and in revenue terms.

In Q1 2024, AMD's unit share of the server CPU market increased to 23.6%, a 0.5-point gain sequentially and a massive 5-point gain year-over-year, driven by the ramp of platforms based on AMD's 4th Generation EPYC processors. With a 76.4% unit share, Intel continues to dominate in servers, but it is evident that AMD is getting stronger.

AMD's revenue share of the x86 server market reached 33%, up 5.2 points year-over-year and 1.2 points from the previous quarter. This signals that the company is gaining traction in expensive machines with advanced CPUs. Keeping in mind that Intel currently has no direct rivals for AMD's 96-core and 128-core processors, it is no wonder that AMD has done so well growing its share of the server market.

"As we noted during our first quarter earnings call, server CPU sales increased YoY driven by growth in enterprise adoption and expanded cloud deployments," AMD said in a statement.

Intel Issues Official Statement Regarding 14th and 13th Gen Instability, Recommends Intel Default Settings

Further to our last piece, in which we detailed Intel's guidance to motherboard vendors to follow stock power settings for Intel's 14th and 13th Gen Core series processors, Intel has now issued a follow-up statement. Over the last week or so, motherboard vendors quickly released firmware updates with a new profile called 'Intel Baseline', which they assumed would address the instability issues.

As it turns out, Intel doesn't accept this, as these Intel Baseline profiles are not to be confused with Intel's default specifications. The 'baseline' terminology gives the impression that these profiles operate at default settings, but they still leave motherboard vendors free to apply their own interpretations of Multi-Core Enhancement (MCE).

To clarify things for consumers, Intel has sent us the following statement:

Several motherboard manufacturers have released BIOS profiles labeled ‘Intel Baseline Profile’. However, these BIOS profiles are not the same as the 'Intel Default Settings' recommendations that Intel has recently shared with its partners regarding the instability issues reported on 13th and 14th gen K SKU processors.

These ‘Intel Baseline Profile’ BIOS settings appear to be based on power delivery guidance previously provided by Intel to manufacturers describing the various power delivery options for 13th and 14th Generation K SKU processors based on motherboard capabilities.

Intel is not recommending motherboard manufacturers to use ‘baseline’ power delivery settings on boards capable of higher values.

Intel’s recommended ‘Intel Default Settings’ are a combination of thermal and power delivery features along with a selection of possible power delivery profiles based on motherboard capabilities.

Intel recommends customers to implement the highest power delivery profile compatible with each individual motherboard design as noted in the table below:


Intel's Default Settings

What Intel's statement effectively says to consumers is that they shouldn't be using the Baseline power delivery profiles offered by motherboard vendors through a plethora of firmware updates. Instead, Intel recommends users opt for the Intel Default Settings, which follow what the specific processor is rated for out of the box to achieve the advertised clock speeds, without users having to worry about the firmware 'over'-optimization that has been widely reported to cause instability.

Not only this, but the Intel Default Settings offer a combination of thermal specifications and power capabilities, including voltage and frequency curve settings matched to the capability of the motherboard used and the power delivery it is equipped with. In short, Intel is recommending that users of 14th and 13th Gen Core series K, KF, and KS SKUs not opt into the Baseline profiles offered by motherboard vendors.

Digesting the contrast between the two statements, the key differentiator is that Intel's priority is reducing the current going through the processor, which for both the 14th and 13th Gen Core series processors is capped at 400 A, even under the Extreme profile. We know that motherboard vendors, on their Z790 and Z690 motherboards, often opt for an unrestricted power profile, which delivers essentially 'unlimited' power and current to maximize performance at the cost of power consumption and heat. This exacerbates problems and can lead to frequent bouts of instability, especially in high-intensity workloads.

Intel also recommends that the AC Load Line match the design target of the processor, with a maximum value of 1.1 mOhm, and that the DC Load Line be equal to the AC Load Line, neither above nor below this recommendation, for maximum stability. Intel further recommends that CEP, eTVB, C-states, TVB, and TVB Voltage Optimizations be active on the Extreme profile to ensure stability and performance remain consistent.
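As a simplified sketch of why the load line value matters (assuming the basic V = I × R droop model; real VRM behavior also involves calibration and transient response, and the helper function here is our own illustration), the voltage droop at Intel's current limit works out as follows:

```python
# Under the simple load line model, the CPU voltage droops below the
# no-load voltage in proportion to current draw and the load line
# resistance: V_droop = I * R_loadline.

def vdroop(current_a: float, loadline_mohm: float) -> float:
    """Voltage droop in volts for a given current and load line resistance."""
    return current_a * (loadline_mohm / 1000.0)

# At Intel's 400 A current limit with the maximum 1.1 mOhm AC load line:
droop = vdroop(400, 1.1)
print(f"Droop at full load: {droop:.2f} V")  # 0.44 V below the no-load voltage
```

This is why mismatched AC and DC load lines are a problem: the voltage actually delivered under load can end up meaningfully different from what the processor's voltage/frequency curve expects.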

Given that Intel is essentially recommending users not rely on what motherboard vendors have offered as a fix, we agree that motherboards should operate at 'Default' settings out of the box until the user asks otherwise. We understand that motherboard vendors want to showcase what they can do with their wares, features, and firmware, but ultimately there has been a real lack of communication between Intel and its partners on this issue.

To reiterate Intel's statement, the company recommends customers implement the highest power delivery profile that the specific motherboard's specification and design can support. According to Intel, this isn't open for interpretation, despite what motherboard vendors have offered so far, and we expect there is more to come in this saga of constant developments around the instability issue.

Upcoming AMD Ryzen AI 9 HX 170 Processor Leaked By ASUS?

In what appears to be a mistake or a jumping of the gun by ASUS, the company has seemingly published a list of specifications for one of its key notebooks that all but confirms the next generation of AMD's mobile processors. While we saw AMD toy with a new nomenclature for its Phoenix silicon (Ryzen 7040 series), it seems as though AMD is once again changing its processor naming scheme.

The ASUS listing, which has since been deleted but as of writing is still available through Google's cache, highlights an existing model, the VivoBook S 16 OLED (M5606), listed with an unknown AMD Ryzen AI 9 HX 170 processor, which, based on its specifications, is certainly not part of the current Hawk Point (Phoenix/Phoenix 2) platform.


The cache on Google shows the ASUS Vivobook S 16 OLED with a Ryzen AI 9 HX 170 Processor

While it does happen in this industry occasionally, what looks like an accidental leak by ASUS on one of its product pages has unearthed an unknown processor from AMD. This first came to our attention via a post on Twitter by user @harukaze5719. While we don't speculate on rumors, we confirmed this ourselves by digging through Google's cache. Sure enough, as the image above from Google highlights, the listing includes a new, unannounced model of Ryzen mobile processor. Under the product compare section for the ASUS Vivobook S 16 OLED (M5606) notebook, it is listed with the AMD Ryzen AI 9 HX 170, which appears to be one of AMD's upcoming Zen 5-based mobile chips, codenamed Strix Point.

So with the seemingly new nomenclature AMD has gone with, there is a clear focus on AI, or rather Ryzen AI, by including it in the name. The Ryzen AI 9 HX 170 looks set to be a 12C/24T Zen 5 mobile variant, with a Ryzen AI NPU or similar integrated within the chip. Given that Microsoft requires an NPU delivering at least 45 TOPS for a system to be considered an 'AI PC', the Xilinx (now AMD Xilinx) based NPU should comfortably meet that bar, as the listing states the chip has up to 77 TOPS of AI performance available. The HX branding is strikingly similar to AMD's (and Intel's) previous HX naming for desktop-replacement laptop SKUs, so assuming any of the details of ASUS's error are correct, this is presumably a very high-end, high-TDP part.


AMD Laptop Roadmap from Zen 2 in 2019 to Zen 5 on track for release in 2024

We've known for some time that AMD plans to release its Zen 5-based Strix Point line-up sometime in 2024. With Computex 2024 just over four weeks away, we still don't quite have the full picture of Zen 5's performance and its architectural shift over Zen 4. AMD CEO Dr. Lisa Su has also confirmed that Strix Point will pair Zen 5 with enhanced RDNA graphics, stating, "Strix combines our next-gen Zen 5 core with enhanced RDNA graphics and an updated Ryzen AI engine to significantly increase the performance, energy efficiency, and AI capabilities of PCs."

While it's entirely possible that AMD will announce more details about Zen 5 in the lead-up to Computex 2024, nothing is confirmed. We do know that Dr. Lisa Su is scheduled to deliver the opening keynote of the show. She unveiled the Zen 4 microarchitecture during AMD's Computex 2022 keynote, and back at Computex 2021 she unveiled 3D V-Cache stacking, which we know today through the Ryzen X3D CPUs.

With that in mind, AMD and Dr. Lisa Su love to announce new products and architectures at Computex, so we just have to wait until the beginning of next month. What nomenclature AMD settles on for the upcoming Zen 5 mobile and desktop processors remains to be seen, but hopefully all will be revealed soon. Regarding the ASUS Vivobook S 16 OLED (M5606), we don't know any of its other specifications at this time, but we expect them to become available once AMD has updated us on Zen 5 and Strix Point.

Apple Announces M4 SoC: Latest and Greatest Starts on 2024 iPad Pro

Setting things up for what is certainly to be an exciting next few months in the world of CPUs and SoCs, Apple this morning has announced their next-generation M-series chip, the M4. Introduced just over six months after the M3 and the associated 2023 Apple MacBook family, the M4 is going to start its life on a very different track, launching alongside Apple’s newest iPad Pro tablets. With their newest chip, Apple is promising class-leading performance and power efficiency once again, with a particular focus on machine learning/AI performance.

The launch of the M4 comes as Apple’s compute product lines have become a bit bifurcated. On the Mac side of matters, all of the current-generation MacBooks are based on the M3 family of chips. On the other hand, the M3 never came to the iPad family – and seemingly never will. Instead, the most recent iPad Pro, launched in 2022, was an M2-based device, and the newly-launched iPad Air for the mid-range market is also using the M2. As a result, the M3 and M4 exist in their own little worlds, at least for the moment.

Given the rapid turn-around between the M3 and M4, we’ve not come out of Apple’s latest announcement expecting a ton of changes from one generation to the next. And indeed, details on the new M4 chip are somewhat limited out of the gate, especially as Apple publishes fewer details on the hardware in its iPads in general. Coupled with that is a focus on comparing like-for-like hardware – in this case, M4 iPads to M2 iPads – so information is thinner than I’d like. Nonetheless, here’s the AnandTech rundown on what’s new with Apple’s latest M-series SoC.

Apple M-Series (Vanilla) SoCs

| SoC | M4 | M3 | M2 |
|---|---|---|---|
| CPU Performance | 4-core | 4-core, 16MB Shared L2 | 4-core (Avalanche), 16MB Shared L2 |
| CPU Efficiency | 6-core | 4-core, 4MB Shared L2 | 4-core (Blizzard), 4MB Shared L2 |
| GPU | 10-Core, same architecture as M3 | 10-Core, new architecture (mesh shaders & ray tracing) | 10-Core, 3.6 TFLOPS |
| Display Controller | 2 Displays? | 2 Displays | 2 Displays |
| Neural Engine | 16-Core, 38 TOPS (INT8) | 16-Core, 18 TOPS (INT16) | 16-Core, 15.8 TOPS (INT16) |
| Memory Controller | LPDDR5X-7500, 8x 16-bit CH, 120GB/sec Total Bandwidth (Unified) | LPDDR5-6250, 8x 16-bit CH, 100GB/sec Total Bandwidth (Unified) | LPDDR5-6250, 8x 16-bit CH, 100GB/sec Total Bandwidth (Unified) |
| Max Memory Capacity | 24GB? | 24GB | 24GB |
| Encode/Decode | 8K: H.264, H.265, ProRes, ProRes RAW, AV1 (Decode) | 8K: H.264, H.265, ProRes, ProRes RAW, AV1 (Decode) | 8K: H.264, H.265, ProRes, ProRes RAW |
| USB | USB4/Thunderbolt 3, ? Ports | USB4/Thunderbolt 3, 2x Ports | USB4/Thunderbolt 3, 2x Ports |
| Transistors | 28 Billion | 25 Billion | 20 Billion |
| Mfc. Process | TSMC N3E | TSMC N3B | TSMC N5P |

At a high level, the M4 features some kind of new CPU complex (more on that in a second), along with a GPU that seems to be largely lifted from the M3 – which itself was a new GPU architecture. Of particular focus by Apple is the neural engine (NPU), which is still a 16-core design, but now offers 38 TOPS of performance. And memory bandwidth has been increased by 20% as well, helping to keep the more powerful chip fed.

One of the few things we can infer with a high degree of certainty is the manufacturing process being used here. Apple’s description of a “second generation 3nm process” lines up perfectly in timing with TSMC’s second-generation 3nm process, N3E. The enhanced version of their 3nm process node is a bit of a sidegrade to the N3B process used by the M3 series of chips; N3E is not quite as dense as N3B, but according to TSMC it offers slightly better performance and power characteristics. The difference is close enough that architecture plays a much bigger role, but in the race for energy efficiency, Apple will take any edge they can get.

Apple’s position as TSMC’s launch-partner for new process nodes has been well-established over the years, and Apple appears to be the first company out the door launching chips on the N3E process. They will not be the last, however, as virtually all of TSMC’s high-performance customers are expected to adopt N3E over the next year. So Apple’s immediate chip manufacturing advantage, as usual, will only be temporary.

Apple’s early-leader status likely also plays into why we’re seeing the M4 now for iPads – a relatively low-volume device at Apple – and not the MacBook lineup. At some point, TSMC’s N3E production capacity will catch up, and then some. I won’t hazard a guess as to what Apple has planned for the MacBook lineup at that point, as I can’t really see Apple discontinuing M3 chips so quickly, but it also leaves them in an awkward spot, having to sell M3 Macs when the M4 exists.

No die sizes have been quoted for the new chip (or die shots posted), but at 28 billion transistors in total, it’s only a marginally larger transistor count than the M3, indicating that Apple hasn’t thrown an excessive amount of new hardware into the chip. (ed: does anyone remember when 3B transistors was a big deal?)

M4 CPU Architecture: Improved ML Acceleration

Starting on the CPU side of things, we’re facing something of an enigma with Apple’s M4 CPU core design. The combination of Apple’s tight-lipped nature and the lack of performance comparisons to the M3 means we haven’t been given much information on how the CPU designs compare. So whether the M4 represents a watershed moment for Apple’s CPU designs – a new Monsoon/A11 – or a minor update akin to the Everest CPU cores in the A17 remains to be seen. Certainly we hope for the former, but absent further details, we’ll work with what we do know.

Apple’s brief keynote presentation on the SoC noted that both the performance and efficiency cores implement improved branch prediction, and in the case of the performance cores, a wider decode and execution engine. However, these are the same broad claims that Apple made for the M3, so this is not on its own indicative of a new CPU architecture.

What is unique to Apple’s M4 CPU claims, however, are “next-generation ML accelerators” for both CPU core types. This goes hand-in-hand with Apple’s broader focus on ML/AI performance in the M4, though the company isn’t detailing just what these accelerators entail. With the NPU to do all of the heavy lifting, the purpose of AI enhancements on the CPU cores is less about total throughput/performance and more about processing light inference workloads mixed into more general-purpose workloads, without having to spend the time and resources firing up the dedicated NPU.

A grounded guess here would be that Apple has updated their poorly-documented AMX matrix units, which have been a part of the M series of SoCs since the beginning. However recent AMX versions already support common ML number formats like FP16, BF16, and INT8, so if Apple has made changes here, it’s not something simple and straightforward such as adding (more) common formats. At the same time if it is AMX, it’s a bit surprising to see Apple mention it at all, since they are otherwise so secretive about the units.

The other reasonable alternative would be that Apple has made some changes to their SIMD units within their CPUs to add common ML number formats, as these units are more directly accessible by developers. But at the same time, Apple has been pushing developers to use higher-level frameworks to begin with (which is how AMX is accessed), so this could really go either way.

In any case, whatever the CPU cores are that underpin M4, there is one thing that is certain: there are more of them. The full M4 configuration is 4 performance cores paired with 6 efficiency cores, 2 more efficiency cores than found on the M3. Cut-down iPad models get a 3P+6E configuration, while the higher-tier configurations get the full 4P+6E experience – so the performance impact there will likely be tangible.

Everything else held equal, the addition of two more efficiency cores shouldn’t massively improve CPU performance over the M3’s 4P+4E configuration. But then Apple’s efficiency cores should not be underestimated, as they are relatively powerful thanks to their use of out-of-order execution. Especially when fixed workloads can be held on the efficiency cores and not promoted to the performance cores, there’s a lot of room for energy efficiency gains.

Otherwise, Apple hasn’t published any detailed performance graphs for the new SoC/CPU cores, so there’s little in the way of hard numbers to talk about. But the company is claiming that the M4 delivers 50% faster CPU performance than the M2. This presumably is for a multi-threaded workload that can leverage the M4’s CPU core count advantage. Alternatively, in their keynote Apple is also claiming that they can deliver M2 performance at half the power, which, as a combination of process node improvements, architectural improvements, and CPU core count increases, seems like a reasonable claim.

As always, however, we’ll have to see how independent benchmarks pan out.

M4 GPU Architecture: Ray Tracing & Dynamic Caching Return

Compared to the CPU situation on the M4, the GPU situation is much more straightforward. Having just recently introduced a new GPU architecture in the M3 – a core type that Apple doesn’t iterate on as often as the CPU – Apple has all but confirmed that the GPU in the M4 is the same architecture that was found in the M3.

With 10 GPU cores, at a high level the configuration is otherwise identical to what was found on the M3. Whether that means the various blocks and caches are truly identical to the M3 remains to be seen, but Apple isn’t making any claims about the M4’s GPU performance that could be in any way interpreted as it being superior to the M3’s GPU. Indeed, the smaller form factor of the iPad and more limited cooling capabilities means that the GPU is going to be thermally constrained under any sustained workload to begin with, especially compared to what the M3 can do in an actively-cooled device like the 14-Inch MacBook Pro.

At any rate, this means the M4 comes with all of the major new architectural features introduced with the M3’s GPU: ray tracing, mesh shading, and dynamic caching. Ray tracing needs little introduction at this point, while mesh shading is a significant, next-generation means of geometry processing. Meanwhile, dynamic caching is Apple’s term for their refined memory allocation technique on M-series chips, which avoids over-allocating memory to the GPU from Apple’s otherwise unified memory pool.

GPU rendering aside, the M4 also gets the M3’s updated media engine block, which coming from the M2 is a relatively big deal for iPad users. Most notably, the M3/M4’s media engine block added support for AV1 video decoding, the next-generation open video codec. And while Apple is more than happy to pay royalties on HEVC/H.265 to ensure it’s available within their ecosystem, the royalty-free AV1 codec is expected to take on a lot of significance and use in the coming years, leaving the iPad Pro in a better position to use the newest codec (or, at least, not have to inefficiently decode AV1 in software).

What is new to the M4 on the display side of matters, however, is a new display engine – the block responsible for compositing images and driving a device’s attached displays. Apple never gives this block a particularly large amount of attention, but when they do make updates to it, it normally comes with some immediate feature improvements.

The key change here seems to be enabling Apple’s new sandwiched “tandem” OLED panel configuration, which is premiering in the iPad Pro. The iPad’s Ultra Retina XDR display places two OLED panels directly on top of each other in order to allow for a display that can cumulatively hit Apple’s brightness target of 1600 nits – something a single one of their OLED panels is apparently incapable of doing. This in turn requires a display controller that knows how to manipulate the panels, not just driving a mirrored set of displays, but accounting for the performance losses that stem from having one panel below another.

And while not immediately relevant to the iPad Pro, it will be interesting to see if Apple used this opportunity to increase the total number of displays the M4 can drive, as vanilla M-series SoCs have normally been limited to 2 displays, much to the consternation of MacBook users. The fact that the M4 can drive the tandem OLED panels and an external 6K display on top of that is promising, but we’ll see how this translates to the Mac ecosystem if and when the M4 lands in a Mac.

M4 NPU Architecture: Something New, Something Faster

Arguably Apple’s biggest focus with the M4 SoC is the company’s NPU, otherwise known as their neural engine. The company has been shipping a 16-core design since the M1 (and smaller designs on the A-series chips for years before that), each generation delivering a modest increase in performance. But with the M4 generation, Apple says they are delivering a much bigger jump in performance.

Still a 16-core design, the M4 NPU is rated for 38 TOPS, just over twice that of the 18 TOPS neural engine in the M3. And coincidentally, only a few TOPS more than the neural engine in the A17. So as a baseline claim, Apple is pitching the M4 NPU as being significantly more powerful than what’s come in the M3, never mind the M2 that powers previous iPads – or going even farther back, 60x faster than the A11’s NPU.

Unfortunately, the devil is (once again) in the details here, as Apple isn’t listing the all-important precision information – whether this figure is based on INT16, INT8, or even INT4 precision. As the precision du jour for ML inference right now, INT8 is the most likely option, especially as this is what Apple quoted for the A17 last year. But freely mixing precisions, or simply not disclosing them, is headache-inducing to say the least, and it makes like-for-like specification comparisons difficult.
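To see why the precision matters, here is a naive back-of-the-envelope model. The assumption (ours, not Apple's) is that throughput scales inversely with operand width, which real NPU datapaths only approximate:

```python
# Why precision matters when quoting TOPS: halving the operand width
# typically roughly doubles the ops/cycle an NPU's MAC arrays can
# sustain, so an INT8 figure is not directly comparable to an INT16 one.

def scale_tops(tops: float, from_bits: int, to_bits: int) -> float:
    """Naive throughput scaling between integer precisions."""
    return tops * (from_bits / to_bits)

# M3's quoted 18 TOPS at INT16, restated at INT8 under this naive model:
m3_int8_equiv = scale_tops(18, 16, 8)  # 36 TOPS
print(f"M3 at INT8 (naive): {m3_int8_equiv:.0f} TOPS vs. M4's quoted 38 TOPS")
```

Under this reading, much of the M4's headline 38 TOPS could simply be the M3's throughput restated at a narrower precision, with only a modest genuine uplift on top.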

In any case, even if most of this performance improvement comes from INT8 support versus INT16/FP16 support, the M4 NPU is slated to deliver significant improvements to AI performance, similar to what’s already happened with the A17. And as Apple was one of the first chip vendors to ship a consumer SoC with what we now recognize as an NPU, the company isn’t afraid to beat its chest a bit on the matter, especially comparing it to what is going on in the PC realm. Especially as Apple’s offering is a complete hardware/software ecosystem, the company has the advantage of being able to mold their software around using their own NPU, rather than waiting for the killer app to be invented for it.

M4 Memory: Adopting Faster LPDDR5X

Last, but certainly not least, the M4 SoC is also getting a notable improvement in its memory capabilities. Given the memory bandwidth figures Apple is quoting for the M4 – 120GB/second – all signs point to them finally adopting LPDDR5X for their new SoC.

The mid-generation update to the LPDDR5 standard, LPDDR5X allows for higher memory clockspeeds than LPDDR5, which topped out at 6400 MT/second. While LPDDR5X is available at speeds up to 8533 MT/second right now (and faster speeds to come), based on Apple’s 120GB/second figure for the M4, this puts the memory clockspeed at roughly LPDDR5X-7500.
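The quoted bandwidth can be sanity-checked with simple arithmetic: peak DRAM bandwidth is the transfer rate times the bus width, summed across channels. A quick sketch using the figures above (`peak_bandwidth_gbs` is our own helper, not an Apple or JEDEC formula):

```python
# Peak DRAM bandwidth = transfers/sec x bytes moved per transfer across
# the whole bus. The M4's quoted config is 8x 16-bit LPDDR5X channels,
# i.e. a 128-bit bus in total.

def peak_bandwidth_gbs(mt_per_s: float, channel_bits: int, channels: int) -> float:
    """Peak bandwidth in GB/s (1 GB = 1e9 bytes)."""
    bytes_per_transfer = (channel_bits * channels) / 8
    return mt_per_s * 1e6 * bytes_per_transfer / 1e9

# M4: LPDDR5X-7500 across 8x 16-bit channels
print(peak_bandwidth_gbs(7500, 16, 8))  # 120.0
```

Running the same formula at 6250 MT/s reproduces the M2/M3's 100GB/sec figure, which is why LPDDR5X-7500 is the natural inference from Apple's 120GB/sec number.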

Since the M4 is going into an iPad first, for the moment we don’t have a proper idea of its maximum memory capacity. The M3 could house up to 24GB of memory, and while it’s highly unlikely Apple has regressed here, there’s also no sign of whether they’ve been able to increase it to 32GB, either. In the meantime, the iPad Pros will come with either 8GB or 16GB of RAM, depending on the specific model.

2024 M4 iPad Pros: Coming Next Week

Wrapping things up, in traditional Apple fashion, prospective tablet buyers will get the chance to see the M4 in action sooner rather than later. The company has already opened up pre-orders for the new iPad Pros, with the first deliveries slated to take place next week, on May 15th.

Apple is offering two sizes of the 2024 iPad Pro: 11 inches and 13 inches. Screen size aside, both sizes are getting access to the same M4 and memory configurations. 256GB/512GB models get a 3P+6E core CPU configuration and 8GB of RAM, meanwhile the 1TB and 2TB models get a fully-enabled M4 SoC with a 4P+6E CPU configuration and 16GB of RAM. The GPU configuration on both models is identical, with 10 GPU cores.

Pricing starts at $999 for the 256GB 11-inch model and $1299 for the 256GB 13-inch model. Meanwhile a max-configuration 13-inch model with 2TB of storage, Apple’s nano-texture matte display, and cellular capabilities will set buyers back a cool $2599.

AMD Zen 5 Status Report: EPYC "Turin" Is Sampling, Silicon Looking Great

As part of AMD's Q1'2024 earnings announcement this week, the company is offering a brief status update on some of their future products set to launch later this year. Most important among these is an update on their Zen 5 CPU architecture, which is expected to launch for both client and server products later this year.

Highlighting their progress so far, AMD is confirming that EPYC "Turin" processors have begun sampling, and that these early runs of AMD's next-gen datacenter chips are meeting the company's expectations.

"Looking ahead, we are very excited about our next-gen Turin family of EPYC processors featuring our Zen 5 core," said Lisa Su, chief executive officer of AMD, at the conference call with analysts and investors (via SeekingAlpha). "We are widely sampling Turin, and the silicon is looking great. In the cloud, the significant performance and efficiency increases of Turin position us well to capture an even larger share of both first and third-party workloads."

Overall, it looks like AMD is on track to solidify its position, and perhaps even increase its datacenter market share, with its EPYC Turin processors. According to AMD, the company's server partners are developing 30% more designs for Turin than they did for Genoa. This underscores how AMD's partners are preparing for even more market share growth on the back of AMD's ongoing success, not to mention the improved performance and power efficiency that the Zen 5 architecture should offer.

"In addition, there are 30% more Turin platforms in development from our server partners, compared to 4th Generation EPYC platforms, increasing our enterprise and with new solutions optimized for additional workloads," Su said. "Turin remains on track to launch later this year."

AMD's EPYC 'Turin' processors will be drop-in compatible with existing SP5 platforms (i.e., they will come in an LGA 6096 package), which should facilitate a faster ramp and adoption of the platform by both cloud giants and server makers. In addition, AMD's next-generation EPYC CPUs are expected to feature more than 96 cores and a more versatile memory subsystem.

In Light of Stability Concerns, Intel Issues Request to Motherboards Vendors to Actually Follow Stock Power Settings

Across the internet, from online forums such as Reddit to various other tech media outlets, there's a lot of furor around reports of Intel's top-end 14th and 13th Gen K series of processors running into stability issues. As Intel's flagship chips, these parts come aggressively clocked in order to maximize performance through various implementations of boost and turbo, leaving them running close to their limits out of the box. But with high-end motherboards further goosing these chips to wring even more performance out of them, it would seem that the Intel desktop ecosystem has finally reached a tipping point where all of these efforts to boost performance have pushed these flagship chips into unstable conditions. To that end, Intel has released new guidance to its consumer motherboard partners, strongly encouraging them to actually implement Intel's stock power settings, and to use those baseline settings as their out-of-the-box default.

While the underlying conditions are nothing new – we've published stories time and time again about motherboard features such as multi-core enhancement (MCE) and raised power consumption limits that seek to maximize how hard and how long systems are able to turbo boost – the issue has finally come to a head in the last couple of months thanks to accumulating reports of system instability with Intel's 13900K and 14900K processors. These instability problems are eventually solved by either tamping down on these motherboard performance-boosting features – bringing the chips back down to something closer to Intel's official operating parameters – or downclocking the chips entirely.

Intel first began publicly investigating the matter on the 27th of February, when Intel's Communications Manager, Thomas Hannaford, posted a thread on Intel's Community Product Support Forums titled "Regarding Reports of 13th/14th Gen Unlocked Desktop Users Experiencing Stability Issues". In this thread, Hannaford said, "Intel is aware of reports regarding Intel Core 13th and 14th Gen unlocked desktop processors experiencing issues with certain workloads. We're engaged with our partners and are conducting analysis of the reported issues. If you are experiencing these issues, please reach out to Intel Customer Support for further assistance in the interim."

Since that post went up, additional reports have been circulating about instability issues across various online forums and message boards. The underlying culprit has been theorized to be motherboards implementing an array of strategies to improve chip performance, including aggressive multi-core enhancement settings, "unlimited" PL2 turbo, and reduced load line calibration settings. At no point do any of these settings overclock a CPU and push it to a higher clockspeed than it's validated for, but these settings do everything possible to keep a chip at the highest clockspeed possible at all times – and in the process seem to have gone a step too far.


From "Why Intel Processors Draw More Power Than Expected: TDP and Turbo Explained"

We first covered multi-core enhancement back in 2012, detailing how motherboard manufacturers try to stay competitive with each other by leveraging any headroom within the silicon to deliver the highest possible performance levels. And more recently, we've talked about how desktop systems with Intel chips are now regularly exceeding their rated TDPs – sometimes by extreme amounts – as motherboard vendors continue to push them to run as hard as possible for the best performance.

But things have changed since 2012. Back then this wasn't so much of an issue, as there was still real headroom for overclocking to meaningfully increase processor performance. But in 2024, with chips such as the Intel Core i9-14900K, we have CPUs shipping with a maximum turbo clock speed of 6.0 GHz and a peak power consumption of over 400 Watts, figures that were only a pipe dream a decade ago.

Jumping to the present time, over the weekend Intel released a statement about the matter to its partners, outlining their investigation so far and their suggestions/requests to their partners. That statement was quickly leaked to the press, with Igorslab.de and others breaking the news. Since then, we've been able to confirm through official sources that this is a real and accurate statement from Intel.

This statement reads as follows:

Intel® has observed that this issue may be related to out of specification operating conditions resulting in sustained high voltage and frequency during periods of elevated heat.

Analysis of affected processors shows some parts experience shifts in minimum operating voltages which may be related to operation outside of Intel® specified operating conditions.

While the root cause has not yet been identified, Intel® has observed the majority of reports of this issue are from users with unlocked/overclock capable motherboards.

Intel® has observed 600/700 Series chipset boards often set BIOS defaults to disable thermal and power delivery safeguards designed to limit processor exposure to sustained periods of high voltage and frequency, for example:

– Disabling Current Excursion Protection (CEP)
– Enabling the IccMax Unlimited bit
– Disabling Thermal Velocity Boost (TVB) and/or Enhanced Thermal Velocity Boost (eTVB)
– Additional settings which may increase the risk of system instability:
– Disabling C-states
– Using Windows Ultimate Performance mode
– Increasing PL1 and PL2 beyond Intel® recommended limits

Intel® requests system and motherboard manufacturers to provide end users with a default BIOS profile that matches Intel® recommended settings.

Intel® strongly recommends customer's default BIOS settings should ensure operation within Intel's recommended settings.

In addition, Intel® strongly recommends motherboard manufacturers to implement warnings for end users alerting them to any unlocked or overclocking feature usage.

Intel® is continuing to actively investigate this issue to determine the root cause and will provide additional updates as relevant information becomes available.

Intel® will be publishing a public statement regarding issue status and Intel® recommended BIOS setting recommendations targeted for May 2024.

One subtle undertone in this statement is that everything seems to revolve around motherboards, specifically their default settings. Looking to clarify matters, Intel has told me today that they aren't blaming motherboard vendors in the above statement to partners and OEMs. However, having had experience with multiple Z790 motherboards with Intel's Core i9-14900K, we know each vendor has a different idea of what the word 'default' means – and that none of them involve strictly sticking to Intel's own suggested values. These profiles within the firmware unlock power constraints to a very high level and go above and beyond what Intel recommends. One example is ICCMAX, which Intel recommends at 400A or below, whereas multiple Z790 motherboards will greatly exceed this value out of the box.

Impressing buyers and outperforming competitors have become integral to every motherboard manufacturer's strategy, thanks to the highly competitive and commoditized nature of the motherboard market. As a result, the user experience is sometimes relegated to a low-priority goal. And while this focus on performance and overclocking features plays well in reviews and to overclockers and tinkerers looking to push their CPU to its very limit, as we are now seeing, it seems to have come at the cost of out-of-the-box stability, with overly-aggressive settings leading to systems being unstable even at default settings.

Especially concerning here is what all of this means for a CPU's VCore voltage, which is another aspect of system performance that motherboard vendors have complete control over. With the need to quickly modulate the VCore voltage to keep up with the load on the processor – to keep it high enough for stability, but not allow it to spike so high as to risk damage – it's a careful balancing act for motherboard vendors even when they're not trying to squeeze out every last bit of performance from a CPU. And when they are trying to squeeze out every last bit, then VCore is something to minimize in order to improve how long and hard a CPU can turbo, pushing a chip further towards potential instability.

Pivoting to some real-world data highlighting these potential issues, when we reviewed the Intel Core i9-14900K, Intel's flagship Raptor Lake Refresh (RPL-R) processor, we tested with the default settings on both of our Z790 motherboards. From the above data, we can see the MSI MEG Z790 Ace Max was drawing up to 415 W when using LinX to place a very heavy workload on the chip. We also ran the same chip and workload on ASRock's Z790 Taichi Carrara to provide additional data points, where we found that its power consumption maxed out at 375 W, around 10% lower than the MSI board.

In both cases, this is much higher than Intel's official PL2 limit for the Intel Core i9-14900K, which says that the chip should top out at 253 W for moderate periods of load. But, as we've seen time and time again, the official TDP ratings from Intel do not mean much to high-end motherboards, which almost universally default to higher settings. Motherboard vendors want to be competitive, and as such, higher default power settings allow vendors to claim that they deliver better performance than their rivals.
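As a back-of-the-envelope check on just how far these boards stray from spec, the measured figures from our testing can be compared directly against Intel's official PL2 limit. A quick sketch (the wattages are the measurements quoted above):

```python
# Compare measured board power draw against Intel's official PL2
# limit for the Core i9-14900K (253 W for moderate periods of load).
PL2_W = 253  # Intel's official PL2 for the i9-14900K

measured_w = {
    "MSI MEG Z790 Ace Max": 415,        # watts, heavy LinX workload
    "ASRock Z790 Taichi Carrara": 375,  # same chip and workload
}

for board, watts in measured_w.items():
    excess_pct = (watts / PL2_W - 1) * 100
    print(f"{board}: {watts} W, {excess_pct:.0f}% over PL2")
```

Even the more conservative of the two boards exceeds Intel's PL2 by nearly half again, which illustrates why "default" settings and "Intel's recommended settings" have become two very different things.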

As further evidence of this, check out some of our recent motherboard reviews. I have assembled a small list of links to reviews where we've seen excessive CPU voltage or power consumption (or, more often, both) when using each motherboard's default settings. In each of the below reviews we see much higher power levels than Intel's official TDP values, which over the last several years we've come to expect. Still, some settings can simply be too aggressive, especially with an already close-to-the-limit Core i9-14900K.

We have been communicating with Intel for most of the day to get official answers to what's happening. To that end, we have received an official statement from Intel, which reads as follows:

The recently publicized communications between Intel and its motherboard partners regarding motherboard settings and Intel Core 13th & 14th Gen K-SKU processors is intended to provide guidance on Intel recommended default settings. We are continuing to investigate with our partners the recent user reports of instability in certain workloads on these processors.

This BIOS default settings guidance is meant to improve stability for currently installed processors while Intel continues investigating root cause, not ascribe blame to Intel's partners:

Intel Raptor Lake (13th Gen)/Raptor Lake Refresh (14th Gen) K-Series SKUs
Official Recommendations

Parameter/Feature (In BIOS/Software Settings)    Value/Setting
Current Excursion Protection (CEP)               Enable
Enhanced Thermal Velocity Boost (eTVB)           Enable
Thermal Velocity Boost (TVB)                     Enable
TVB Voltage Optimizations                        Enable
ICCMAX Unlimited Bit                             Disable
TjMAX Offset                                     0
C-states                                         Enable
ICCMAX                                           Varies, Never >400A*
ICCMAX_App                                       Varies*
Power Limits (PL's)                              Varies*

* Please see the 13th Generation Intel® Core™ and Intel® Core™ 14th Generation Processors datasheet for more information

Intel continues to work with its partners to develop appropriate mitigations going forward.

Intel's official statement to us, which is likely their standpoint for the general public, highlights a list of recommended BIOS and software settings, such as those found in Intel's Extreme Tuning Utility (XTU). There's no mention of specific motherboard vendors or models, but the above settings should alleviate crashing and instability issues by preventing motherboards from pushing CPUs too hard.

It remains to be seen just how motherboard vendors will opt to address the issue, as none of the motherboard vendors we contacted today had anything official to say about the matter. With that said, however, a few motherboard vendors have recently released a wave of new BIOSes, adding a new profile called "Intel Baseline" or similar. In all cases, these new BIOSes seem to do exactly what it says on the label, configuring the system to run at Intel's actual suggested stock settings, and thus ensuring system stability in exchange for reduced performance.

With that said, these new Intel baseline settings are still not being used as the default settings for high-end motherboards. So the out-of-the-box user experience is still for MCE and other features to be enabled, pushing these processors to their performance limit. Users who actually want baseline performance – and the guaranteed stability it comes with – will still need to go into the BIOS and explicitly select this profile.

Ultimately, given the spec-defying state of high-end motherboards over the last decade, this is a badly-needed improvement. But still, as Intel has yet to wrap up their root cause investigation and issue formal guidance to consumers, we're not quite to the end of this saga just yet. There are still some developments to come, as we expect to hear more in May.

Qualcomm Intros Snapdragon X Plus, Details Complete Snapdragon X Launch Day Chip Stack

As Qualcomm prepares for the mid-year launch of their forthcoming Snapdragon X SoCs for PCs, and the eagerly anticipated Oryon CPU cores within, the company is finally shoring up their official product plans, and releasing some additional technical details in the process. Thus far the company has been demonstrating their Snapdragon X Elite SoC in its highest-performing, fully-enabled configuration. But the retail Snapdragon X Elite will not be a single part; instead, Qualcomm is preparing a whole range of chip configurations for various price/performance tiers in the market. Altogether, there will be 3 Snapdragon X Elite SKUs that differ in CPU and GPU performance.

As well, the company is introducing a second Snapdragon X tier, Snapdragon X Plus, for those SKUs positioned below the Elite performance tier. As of today, this will be a single configuration. But if the Snapdragon X lineup is successful and demand warrants it, I would not be surprised to see Qualcomm expand it further – as they have certainly left themselves the room for it in their product stack. In the meantime, with Qualcomm’s expected launch competition now shipping (Intel Core Ultra Meteor Lake and AMD Ryzen Mobile 8040 Hawk Point), the company is also very confident that even these reduced performance Snapdragon X Plus chips will be able to beat Intel and AMD in multithreaded performance – never mind the top-tier Snapdragon X Elite chips.

Qualcomm will be launching this expanded, four-chip stack all at once, so both Snapdragon X Elite and Snapdragon X Plus tier devices should be available at the same time. The company’s goal is still to have devices on the shelf “mid-year”, although the company isn’t providing any more precise guidance than that. With Qualcomm’s CEO, Cristiano Amon, set to deliver a Computex keynote in June, I expect we’ll get more specific details on timings then, along with the company and its partners using the event to announce and showcase some retail laptop designs. So this is very much looking like a summer launch at the moment.

In the meantime, Qualcomm is already showing off what their Snapdragon X Plus chips can do with a fresh set of live benchmarks, akin to their Snapdragon X Elite performance previews from October 2023. We’ll dive into those in a bit, but suffice it to say, Qualcomm knows the score, and they want to make sure the entire world knows when they’re winning.

AMD Announces Ryzen Pro 8000 and Ryzen Pro 8040 Series CPUs: Commercial Desktop Gets AI

AMD is looking to drive the AI PC market with options across multiple product lines, which aren't limited to consumer processors. While primarily designed for the commercial sector, AMD has announced the Ryzen Pro 8000 'Phoenix' series of APUs for desktops, which AMD claims is the first professional-grade CPU to include an NPU designed to provide on-chip AI neural processing capabilities. AMD has also announced the Ryzen Pro 8040 'Hawk Point' series of mobile processors designed for commercial laptops and notebooks.

AMD's Ryzen Pro 8000 and Ryzen Pro 8040 series processors come with support from AMD's Pro Manageability and AMD Pro Business Ready suites and are built with AMD's current generation Zen 4 cores. The Ryzen Pro 8000 and Ryzen Pro 8040 series processors are similar to their consumer-level counterparts. However, they have additional security features such as AMD Memory Guard, AMD Secure Processor, and Microsoft Pluton.

Touching on the differentiating factors between the non-Pro-consumer chips and the Ryzen Pro series, there is plenty for the commercial and enterprise market regarding security. In what is a first, the Ryzen Pro 8000 series is the first desktop platform to integrate Microsoft Pluton security features designed to protect when connecting to the cloud. Other features include AMD Memory Guard, which encrypts login credentials, keys, and text files stored in the DRAM. AMD Pro Security ties the AMD Zen 4 shadow stack and other layers in directly with the software stack, which, in this case, is Microsoft Windows 11 OS security. 

Another notable feature that AMD is hammering home is the on-chip AI capability of the included Ryzen AI neural processing unit (NPU), which allows enterprises to run AI workloads locally, mitigating the privacy concerns of transferring data to and from the cloud. Although the current generation of NPUs embedded into processors is limited in what it can do, Ryzen AI is a driving factor within the AI PC, as manufacturers and ISVs look to utilize AI-accelerated features built into software, such as Microsoft with their AI-powered Copilot tool.

Although there are now requirements that must be met for a PC to be considered an 'AI PC,' Microsoft has announced that its AI PC requirement is 45 TOPS of performance from the NPU alone, which none of the current generation of chips from AMD and Intel meet. In the desktop space, AMD currently has the lead, as Intel presently has no desktop offerings with an NPU; in the mobile space, AMD with their Ryzen 8040 (Hawk Point) and Intel with their Meteor Lake processors provide plenty of choice for users.

AMD Ryzen Pro 8000 Series (Zen 4)

AnandTech            Cores/Threads  Base Freq  Boost Freq  L3 Cache  iGPU            TDP
Ryzen 7 Pro 8700G    8C / 16T       4200 MHz   5100 MHz    16 MB     R780M (12 CUs)  45-65 W
Ryzen 7 Pro 8700GE   8C / 16T       3650 MHz   5100 MHz    16 MB     R780M (12 CUs)  35 W
Ryzen 5 Pro 8600G    6C / 12T       4350 MHz   5000 MHz    16 MB     R760M (8 CUs)   45-65 W
Ryzen 5 Pro 8500G    6C / 12T       3550 MHz   5000 MHz    16 MB     R740M (4 CUs)   45-65 W
Ryzen 5 Pro 8600GE   6C / 12T       3900 MHz   5000 MHz    16 MB     R760M (8 CUs)   35 W
Ryzen 5 Pro 8500GE   6C / 12T       3400 MHz   5000 MHz    16 MB     R740M (4 CUs)   35 W
Ryzen 3 Pro 8300G    4C / 8T        3450 MHz   4900 MHz    8 MB      R740M (4 CUs)   45-65 W
Ryzen 3 Pro 8300GE   4C / 8T        3500 MHz   4900 MHz    8 MB      R740M (4 CUs)   35 W

Looking at the AMD Ryzen Pro 8000 series, AMD has announced eight new processors that share the same specifications as their non-Pro Ryzen 8000G APU counterparts. Two primary types of Ryzen Pro 8000 processors are set to be available: four with a configurable TDP of between 45 and 65 W, and four with a flat TDP of 35 W for lower-powered environments. Leading the line-up is the Ryzen 7 Pro 8700G, which is identical in core specifications to the Ryzen 7 8700G APU, with an 8C/16T (Zen 4) configuration, a base frequency of 4.2 GHz, and a boost frequency of up to 5.1 GHz.

Even the Ryzen 7 Pro 8700GE, the 35 W version, has a 5.1 GHz boost frequency, although it has a slower base clock of 3.65 GHz. Both models have 16 MB of L3 cache and AMD's integrated Radeon 780M (12 CUs) mobile graphics. The eight Ryzen Pro 8000 series models range from 4C/8T offerings with 8 MB of L3 cache and 4.9 GHz boost clocks, through 6C/12T models with 5.0 GHz boost clocks and 16 MB of L3 cache, up to the aforementioned 8C/16T 8700G and 8700GE.

While we take all performance figures given by manufacturers and vendors with a pinch of salt, AMD claims their Ryzen Pro 8000 series offers up to 19% better performance than Intel's 14th-gen Core series processors. AMD's match-up is the Ryzen 7 Pro 8700G vs. the Intel Core i7-14700, with AMD claiming a 47% victory in the Passmark 11 benchmark and 3X the graphics performance in 3D Mark Time Spy. This isn't entirely surprising because the Ryzen 7 Pro 8700G benefits from integrated RDNA3 graphics and AMD's Zen 4 cores.

AMD Ryzen Pro 8040 Series (Zen 4)

AnandTech            Cores/Threads  Base Freq  Boost Freq  L3 Cache  iGPU (CUs)  TDP
Ryzen 9 Pro 8945HS   8C / 16T       4000 MHz   5200 MHz    16 MB     12          35-54 W
Ryzen 7 Pro 8845HS   8C / 16T       3800 MHz   5100 MHz    16 MB     12          35-54 W
Ryzen 7 Pro 8840HS   8C / 16T       3300 MHz   5100 MHz    16 MB     12          20-28 W
Ryzen 5 Pro 8645HS   6C / 12T       4300 MHz   5000 MHz    16 MB     8           35-54 W
Ryzen 5 Pro 8640HS   6C / 12T       3500 MHz   4900 MHz    16 MB     8           20-28 W
Ryzen 7 Pro 8840U    8C / 16T       3300 MHz   5100 MHz    16 MB     12          15-28 W
Ryzen 5 Pro 8640U    6C / 12T       3500 MHz   4900 MHz    16 MB     8           15-28 W
Ryzen 5 Pro 8540U*   6C / 12T       3200 MHz   4900 MHz    16 MB     4           15-28 W

*The Ryzen 5 Pro 8540U is the only chip without AMD's Ryzen AI NPU

Moving on to AMD's latest Ryzen Pro 8040 processors for the mobile market, AMD has refreshed their Hawk Point family for the enterprise market. AMD has eight new processors, which are segmented into two families, the HS series and the U series. The HS series has five new chips, which range from 6C/12T up to 8C/16T, all with varying levels of clock speed and TDP. At the top of the line-up is the Ryzen 9 Pro 8945HS, which is a direct replacement for the Ryzen 9 Pro 7940HS, and as such, it comes with the same 4.0 GHz base clock and 5.2 GHz boost clock.

Pivoting to TDP, AMD offers the Ryzen 9 Pro 8945HS, Ryzen 7 Pro 8845HS, and Ryzen 5 Pro 8645HS with a configurable TDP of between 35 and 54 W. In contrast, the Ryzen 7 Pro 8840HS and the Ryzen 5 Pro 8640HS are designed for lower-powered laptops with a cTDP of 20-28 W. Regarding cache, all of the announced Ryzen Pro 8040 series models come with 16 MB of L3 cache, while specifications such as the integrated graphics and clock speeds all correspond to the consumer line-up, the Ryzen 8040 series.

AMD's in-house performance figures show the Ryzen 7 Pro 8840U at 15 W performing better than Intel's Core Ultra 7 165H at 28 W. Still, as we always do with performance figures provided by vendors, take these with a pinch of salt. AMD claims a 30% combined increase in performance across the board in workloads including Geekbench v6, Blender, PCMark 10, PCMark Night Raid, and UL Procyon. While there are plenty of different areas where performance gains and losses can be achieved, AMD does claim that their Ryzen 9 Pro 8945HS at 45 W is 50% faster than the Intel Core Ultra 9 185H at 45 W in Topaz Labs Video AI Gaia 4X; according to AMD's slide deck, both chips used discrete graphics in this test.

The other notable thing is that all of the Ryzen Pro 8040 series processors, except the bottom SKU, the Ryzen 5 Pro 8540U, come with AMD's Ryzen AI NPU integrated into the silicon. While the AI PC ecosystem is still growing, AMD and over 150 ISVs look to continue the trend of AI powering more software features in the future than we've seen so far. Despite much of the marketing targeting AI functionality, the ecosystem remains in its infancy; until next-generation chips arrive with higher-performing NPUs, ones that can meet Microsoft's 45 NPU TOPS requirement to run Copilot locally, much of the NPU's benefit comes down to how much power can be saved.

The introduction of the Ryzen Pro 8000/8040 series completes AMD's commercial client platform, alongside the readily available Ryzen Threadripper Pro 7000-WX series for commercial and professional workstations. What sets these AMD Ryzen Pro series processors apart from the consumer (non-Pro) variants is support for the AMD Pro Manageability toolkit, which includes features such as cloud-based remote manageability, enabling off-site IT technicians to access devices remotely, as well as WPA3 SAE encryption, which provides client-to-cloud protection for enterprises over shared networks.

AMD has not announced when the Ryzen Pro 8000 series APUs or the Ryzen Pro 8040 mobile chips will be available for purchase. However, we expect a wide array of OEMs, such as HP and Lenovo, to be already in the process of readying solutions that should hit the market soon.

Intel and Sandia National Labs Roll Out 1.15B Neuron “Hala Point” Neuromorphic Research System

While neuromorphic computing remains under research for the time being, efforts into the field have continued to grow over the years, as have the capabilities of the specialty chips that have been developed for this research. Following those lines, this morning Intel and Sandia National Laboratories are celebrating the deployment of the Hala Point neuromorphic system, which the two believe is the highest capacity system in the world. With 1.15 billion neurons overall, Hala Point is the largest deployment yet for Intel’s Loihi 2 neuromorphic chip, which was first announced at the tail-end of 2021.

The Hala Point system incorporates 1152 Loihi 2 processors, each of which is capable of simulating a million neurons. As noted back at the time of Loihi 2’s launch, these chips are actually rather small – just 31 mm2 per chip with 2.3 billion transistors each, as they’re built on the Intel 4 process (one of the only other Intel chips to do so, besides Meteor Lake). As a result, the complete system is similarly petite, taking up just 6 rack units of space (or as Sandia likes to compare it to, about the size of a microwave), with a power consumption of 2.6 kW. Now that it’s online, Hala Point has dethroned the SpiNNaker system as the largest disclosed neuromorphic system, offering admittedly just a slightly larger number of neurons at less than 3% of the power consumption of the 100 kW British system.
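The headline figures above are easy to sanity-check: the neuron count follows directly from the chip count, and the SpiNNaker comparison from the two systems' power draws. A quick sketch using only the numbers quoted in this article:

```python
# Sanity-check of the headline Hala Point figures quoted above.
chips = 1152                  # Loihi 2 processors in the system
neurons_per_chip = 1_000_000  # simulated neurons per Loihi 2 chip
total_neurons = chips * neurons_per_chip
print(f"{total_neurons / 1e9:.3f} billion neurons")  # ~1.15 billion

hala_power_kw = 2.6        # Hala Point's total power consumption
spinnaker_power_kw = 100   # the British SpiNNaker system
ratio = hala_power_kw / spinnaker_power_kw
print(f"{ratio:.1%} of SpiNNaker's power")  # comfortably under 3%
```

The 1.152 billion product is where the "1.15 billion neurons" figure comes from, and 2.6 kW against 100 kW is the "less than 3% of the power consumption" claim.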


A Single Loihi 2 Chip (31 mm2)

Hala Point will be replacing an older Intel neuromorphic system at Sandia, Pohoiki Springs, which is based on Intel’s first-generation Loihi chips. By comparison, Hala Point offers ten times as many neurons, and upwards of 12x the performance overall.

Both neuromorphic systems have been procured by Sandia in order to advance the national lab’s research into neuromorphic computing, a computing paradigm that behaves like a brain. The central thought (if you’ll excuse the pun) is that by mimicking the wetware writing this article, neuromorphic chips can be used to solve problems that conventional processors cannot solve today, and that they can do so more efficiently as well.

Sandia, for its part, has said that it will be using the system to look at large-scale neuromorphic computing, with work operating on a scale well beyond Pohoiki Springs. With Hala Point offering a simulated neuron count very roughly on the level of complexity of an owl brain, the lab believes that a larger-scale system will finally enable them to properly exploit the properties of neuromorphic computing to solve real problems in fields such as device physics, computer architecture, computer science and informatics, moving well beyond the simple demonstrations initially achieved at a smaller scale.

One new focus from the lab, which in turn has caught Intel’s attention, is the applicability of neuromorphic computing towards AI inference. Because the neural networks themselves behind the current wave of AI systems are attempting to emulate the human brain, in a sense, there is an obvious degree of synergy with the brain-mimicking neuromorphic chips, even if the algorithms differ in some key respects. Still, with energy efficiency being one of the major benefits of neuromorphic computing, it’s pushed Intel to look into the matter further – and even build a second, Hala Point-sized system of their own.

According to Intel, in their research on Hala Point, the system has reached efficiencies as high as 15 TOPS-per-Watt at 8-bit precision, albeit while using 10:1 sparsity, making it more than competitive with current-generation commercial chips. As an added bonus to that efficiency, the neuromorphic systems don’t require extensive data processing and batching in advance, which is normally necessary to make efficient use of the high density ALU arrays in GPUs and GPU-like processors.
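That TOPS-per-Watt figure can be inverted into a rough energy-per-operation estimate, which is a common way to compare such claims; a minimal sketch using the article's numbers (bearing in mind the 10:1 sparsity caveat, so this is effective rather than raw efficiency):

```python
# Convert the quoted efficiency into energy per 8-bit operation.
# 15 TOPS/W means 15e12 operations per joule (1 W for 1 s = 1 J).
tops_per_watt = 15
ops_per_joule = tops_per_watt * 1e12
joules_per_op = 1 / ops_per_joule
print(f"{joules_per_op * 1e15:.0f} fJ per 8-bit op")  # ~67 fJ
```

Roughly 67 femtojoules per effective 8-bit operation, which is the kind of figure that makes neuromorphic inference interesting next to conventional accelerators.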

Perhaps the most interesting use case of all, however, is the potential for being able to use neuromorphic computing to enable augmenting neural networks with additional data on the fly. The idea behind this being to avoid re-training, as current LLMs require, which is extremely costly due to the extensive computing resources required. In essence, this is taking another page from how brains operate, allowing for continuous learning and dataset augmentation.

But for the moment, at least, this remains a subject of academic study. Eventually, Intel and Sandia want systems like Hala Point to lead to the development of commercial systems – and presumably, at even larger scales. But to get there, researchers at Sandia and elsewhere will first need to use the current crop of systems to better refine their algorithms, as well as better figure out how to map larger workloads to this style of computing in order to prove their utility at larger scales.
