
German Court Bans Sales of Select Intel CPUs in Germany Over Patent Dispute

February 7, 2024 at 21:30

A German court has sided with R2 Semiconductor against Intel, ruling that the chip giant infringed one of R2's patents. The decision could lead to a sales ban on select Intel processors, as well as products based on them, in Germany. Intel, for its part, has accused R2 of being a patent troll wielding a low-quality patent, and has said that it will appeal the decision.

The regional court in Düsseldorf, Germany, ruled that Intel infringed a patent covering integrated voltage regulator technology that belongs to Palo Alto, California-based R2 Semiconductor. The court on Wednesday issued an injunction against sales of Intel's Core-series 'Ice Lake,' 'Tiger Lake,' and 'Alder Lake' processors and Xeon Scalable 'Ice Lake Server' processors, as well as PCs and servers based on these CPUs. Some of these processors have already been discontinued, but Alder Lake chips are still available at retail and inside many systems on store shelves. That said, the ruling does not mean that these CPUs will disappear from the German market immediately.

Meanwhile, the injunction does not cover Intel's current-generation Core 'Raptor Lake' and Core Ultra 'Meteor Lake' processors for desktops and laptops, according to The Financial Times, so the impact of the injunction is set to be fairly limited.

Intel has expressed its disappointment with the verdict and announced its intention to challenge the decision. The company criticized R2 Semiconductor's litigation strategy, accusing it of pursuing serial lawsuits against big companies, particularly after Intel managed to invalidate one of R2's U.S. patents.

"R2 files serial lawsuits to extract large sums from innovators like Intel," a statement by Intel reads. "R2 first filed suit against Intel in the U.S., but after Intel invalidated R2's low-quality U.S. patent R2 shifted its campaign against Intel to Europe. Intel believes companies like R2, which appears to be a shell company whose only business is litigation, should not be allowed to obtain injunctions on CPUs and other critical components at the expense of consumers, workers, national security, and the economy."

In its lawsuit against Intel, R2 asked the court to halt sales of the infringing processors and of products equipped with them, and to mandate a recall of items containing these processors, as Intel revealed last September. Intel, for its part, contended that imposing an injunction would be an excessive response.

Meanwhile, it is important to note that in this legal battle Intel is shielding its customers by assuming responsibility for any legal expenses or compensation they may incur. Consequently, as of September, Intel was unable to provide a reliable estimate of the potential financial impact or the scope of losses that could result from the dispute, as they could be vast.

In stark contrast to Intel, R2 welcomed the court's decision and presented its own view of the legal dispute.

"We are delighted that the highly respected German court has issued an injunction and unequivocally found that Intel has infringed R2's patents for integrated voltage regulators," said David Fisher, CEO of R2. "We intend to enforce this injunction and protect our valuable intellectual property. The global patent system is here precisely for the purpose of protecting inventors like myself and R2 Semiconductor."

R2 claims that Intel planned to invest in R2 in 2015, about two years after Intel first brought its Fully Integrated Voltage Regulator (FIVR) technology to market with its 4th Generation Core 'Haswell' processors, but then abandoned talks.

"R2 has been a semiconductor IP developer, similar to Arm and Rambus, for more than 15 years," Fisher said. "Intel is intimately familiar with R2's business — in fact, the companies were in the final stages of an investment by Intel into R2 in 2015 when Intel unilaterally terminated the process. R2 had asked if a technical paper Intel had just published about their approach to their FIVR technology, which had begun shipping in their chips, was accurate. The next and final communication was from Intel's patent counsel. That was when it became clear to me that Intel was using R2's patented technology in their chips without attribution or compensation."

The head of R2 states that Intel is the only company that R2 has ever sued, which contradicts Intel's accusation that R2 is a patent troll.

"That is how these lawsuits emerged, and Intel is the only entity R2 has ever accused of violating its patents," Fisher stated. "It is unsurprising but disappointing that Intel continues to peddle its false narratives rather than taking responsibility for its repeated and chronic infringement of our patents."

Report: NVIDIA Forms Custom Chip Unit for Cloud Computing and More

February 13, 2024 at 12:00

With its highly successful A100 and H100 processors for artificial intelligence (AI) and high-performance computing (HPC) applications, NVIDIA dominates AI datacenter deployments these days. But among large cloud service providers, as well as in emerging devices like software-defined vehicles (SDVs), there is a global trend towards custom silicon. And, according to a report from Reuters, NVIDIA is putting together a new business unit to take on the custom chip market.

The new business unit will reportedly be led by vice president Dina McKinney, who has a wealth of experience from working at AMD, Marvell, and Qualcomm. The division aims to address a wide range of sectors, including automotive, gaming consoles, data centers, and telecom, that could benefit from tailored silicon solutions. Although NVIDIA has not officially acknowledged the creation of this division, McKinney's LinkedIn profile, which lists her as VP of Silicon Engineering, reveals her involvement in developing silicon for 'cloud, 5G, gaming, and automotive,' hinting at the broad scope of the reported business unit.

Nine unofficial sources across the industry confirmed to Reuters the existence of the division, but NVIDIA has remained tight-lipped, only discussing its 2022 announcement regarding implementation of its networking technologies into third-party solutions. According to Reuters, NVIDIA has initiated discussions with leading tech companies, including Amazon, Meta, Microsoft, Google, and OpenAI, to investigate the potential for developing custom chips. This hints that NVIDIA intends to extend its offerings beyond the conventional off-the-shelf datacenter and gaming products, embracing the growing trend towards customized silicon solutions.

While using NVIDIA's A100 and H100 processors for AI and high-performance computing (HPC) instances, major cloud service providers (CSPs) like Amazon Web Services, Google, and Microsoft are also advancing their own custom processors to meet specific AI and general computing needs. This strategy enables them to cut costs as well as tailor the capabilities and power consumption of their hardware to their particular needs. As a result, while NVIDIA's AI and HPC GPUs remain indispensable for many applications, an increasing portion of workloads now run on custom-designed silicon, which means lost business opportunities for NVIDIA. This shift towards bespoke silicon solutions is widespread and the market is expanding quickly. Essentially, instead of fighting the custom silicon trend, NVIDIA wants to join it.

Meanwhile, analysts are painting an even bigger picture. Well-known GPU industry observer Jon Peddie Research believes that NVIDIA may be interested in addressing not only CSPs with datacenter offerings, but also the consumer market, due to its huge volumes.

"NVIDIA made their loyal fan base in the consumer market which enabled them to establish the brand and develop ever more powerful processors that could then be used as compute accelerators," said JPR's president Jon Peddie. "But the company has made its fortune in the deep-pocked datacenter market where mission-critical projects see the cost of silicon as trivial to the overall objective. The consumer side gives NVIDIA the economy of scale so they can apply enormous resources to developing chips and the software infrastructure around those chips. It is not just CUDA, but a vast library of software tools and libraries."

Back in the mid-2010s, NVIDIA tried to address smartphones and tablets with its Tegra SoCs, but without much success. However, the company managed to secure a spot supplying the application processor for the highly successful Nintendo Switch console, and it would certainly like to expand this business. The consumer business allows NVIDIA to design a chip and then sell it to one client for many years without changing its design, amortizing the high costs of development over many millions of chips.

"NVIDIA is of course interested in expanding its footprint in consoles – right now they are supplying the biggest selling console supplier, and are calling on Microsoft and Sony every week to try and get back in," Peddie said. "NVIDIA was in the first Xbox, and in PlayStation 3. But AMD has a cost-performance advantage with their APUs, which NVIDIA hopes to match with Grace. And since Windows runs on Arm, NVIDIA has a shot at Microsoft. Sony's custom OS would not be much of a challenge for NVIDIA."

Arm and Samsung to Co-Develop 2nm GAA-Optimized Cortex Cores

February 22, 2024 at 12:00

Arm and Samsung this week announced their joint design-technology co-optimization (DTCO) program for Arm's next-generation Cortex general-purpose CPU cores as well as Samsung's next-generation process technology featuring gate-all-around (GAA) multi-bridge-channel field-effect transistors (MBCFETs). 

"Optimizing Cortex-X and Cortex-A processors on the latest Samsung process node underscores our shared vision to redefine what’s possible in mobile computing, and we look forward to continuing to push boundaries to meet the relentless performance and efficiency demands of the AI era," said Chris Bergey, SVP and GM, Client Business at Arm.

Under the program, the companies aim to deliver tailored versions of Cortex-A and Cortex-X cores made on Samsung's 2 nm-class process technology for various applications, including smartphones, datacenters, infrastructure, and various customized system-on-chips. For now, the companies do not say whether they aim to co-optimize Arm's Cortex cores for Samsung's first-generation 2 nm production node, called SF2 (due in 2025), or whether the plan is to optimize these cores for all SF2-series technologies, including SF2 and SF2P.

GAA nanosheet transistors with channels that are surrounded by gates on all four sides have a lot of options for optimization. For example, nanosheet channels can be widened to increase drive current and boost performance or shrunken to reduce power consumption and cost. Depending on the application, Arm and Samsung will have plenty of design choices.

Keeping in mind that we are talking about Cortex-A cores aimed at a wide variety of applications as well as Cortex-X cores designed specifically to deliver maximum performance, the results of the collaborative work promise to be quite decent. In particular, we are looking forward to Cortex-X cores with maximized performance, Cortex-A cores with optimized performance and power consumption, and Cortex-A cores with reduced power consumption.

Nowadays collaboration between IP (intellectual property) developers, such as Arm, and foundries, such as Samsung Foundry, is essential to maximize performance, reduce power consumption, and optimize transistor density. The joint work with Arm will ensure that Samsung's foundry partners will have access to processor cores that can deliver exactly what they need.

AMD CEO Dr. Lisa Su to Deliver Opening Keynote at Computex 2024

February 22, 2024 at 20:00

Taiwan External Trade Development Council (TAITRA), the organizer of Computex, announced today that Dr. Lisa Su, AMD's chief executive officer, will give the trade show's Opening Keynote. Su's speech is set for the morning of June 3, 2024, shortly before the formal start of the show. According to AMD, the keynote talk will be "highlighting the next generation of AMD products enabling new experiences and breakthrough AI capabilities from the cloud to the edge, PCs and intelligent end devices."

This year's Computex is focused on six key areas: AI computing, Advanced Connectivity, Future Mobility, Immersive Reality, Sustainability, and Innovations. As a leading developer of CPUs, AI and HPC GPUs, consumer GPUs, and DPUs, AMD can speak to most of these topics quite credibly.

As AMD is already mid-cycle on most of their product architectures, the company's most recent public roadmaps have them set to deliver major new CPU and GPU architectures before the end of 2024 with Zen 5 CPUs and RDNA 4 GPUs, respectively. AMD has not previously given any finer guidance on when in the year to expect this hardware, though AMD's overall plans for 2024 are notably more aggressive than the start of their last architecture cycle in 2022. Of note, the company has previously indicated that it intends to launch all 3 flavors of the Zen 5 architecture this year – not just the basic core, but also Zen 5c and Zen 5 with V-Cache – as well as a new mobile SoC (Strix Point). By comparison, it took AMD well into 2023 to do the same with Zen 4 after starting with a fall 2022 launch for those first products.


AMD 2022 Financial Analyst Day CPU Core Roadmap

This upcoming keynote will be Lisa Su's third Computex keynote after her speeches at Computex 2019 and Computex 2022. In both cases she also announced upcoming AMD products.

In 2019, she showcased performance improvements of then upcoming 3rd Generation Ryzen desktop processors and 7nm EPYC datacenter processors. Lisa Su also highlighted AMD's advancements in 7nm process technology, showcasing the world's first 7nm gaming GPU, the Radeon VII, and the first 7nm datacenter GPU, the Radeon Instinct MI60.

In 2022, the head of AMD offered a sneak peek at the then-upcoming Ryzen 7000-series desktop processors based on the Zen 4 architecture, promising significant performance improvements. She also teased the next generation of Radeon RX 7000-series GPUs with the RDNA 3 architecture.

AMD Fixed the STAPM Throttling Issue, So We Retested The Ryzen 7 8700G and Ryzen 5 8600G

February 23, 2024 at 13:00

When we initially reviewed AMD's latest Ryzen 8000G APUs last month, the Ryzen 7 8700G and Ryzen 5 8600G, we became aware of an issue that caused the APUs to throttle after a few minutes. This posed a problem for a couple of reasons: first, it prevented our data from reflecting the true capabilities of the processors; and second, it highlighted a feature that AMD forgot to disable from its mobile series of Phoenix chips (Ryzen 7040) when porting the silicon over to the desktop.

We updated the data in our review of the Ryzen 7 8700G and Ryzen 5 8600G to reflect performance with STAPM on the initial firmware and with STAPM removed with the latest firmware. Our updated and full review can be accessed by clicking the link below:

As we highlighted in our Ryzen 8000G APU STAPM Throttling article, AMD, through AM5 motherboard vendors such as ASUS, has implemented updated firmware that removes the STAPM limitation. To quickly recap the Skin Temperature-Aware Power Management (STAPM) feature and what it does: AMD introduced STAPM in 2014 as a feature of its mobile processors. It is designed to extend on-die power management by considering both the processor's internal temperatures, taken by on-chip thermal diodes, and the laptop's surface temperature (i.e., the skin temperature).

The aim of STAPM is to prevent laptops from becoming uncomfortably warm for users, allowing the processor to actively throttle back its heat generation based on the thermal parameters between the chassis and the processor itself. The fundamental issue with STAPM in the case of the Ryzen 8000G APUs, including the Ryzen 7 8700G and Ryzen 5 8600G, is that these are mobile processors packaged into a format for use with the AM5 desktop platform. As a desktop platform is built into a chassis that isn't placed on a user's lap, the STAPM feature becomes irrelevant.
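
To make that mechanism a little more concrete, below is a minimal conceptual sketch (in Python) of how a skin-temperature-aware power limiter behaves; the threshold values, sensor names, and two-level power limit are illustrative assumptions for this article, not AMD's actual firmware logic.

    # Conceptual sketch of a STAPM-style limiter -- illustrative only.
    # Thresholds and power levels are assumptions, not AMD's firmware values.
    SUSTAINED_POWER_W = 65.0   # long-term package power target (assumed)
    BOOST_POWER_W = 84.0       # short-term boost power (assumed)
    SKIN_TEMP_LIMIT_C = 45.0   # chassis "skin" temperature budget (assumed)

    def stapm_power_limit(skin_temp_c: float, die_temp_c: float,
                          die_temp_limit_c: float = 95.0) -> float:
        """Pick a package power limit from the skin and die temperatures."""
        if die_temp_c >= die_temp_limit_c or skin_temp_c >= SKIN_TEMP_LIMIT_C:
            # Either the silicon or the chassis surface is too hot:
            # fall back to the sustained power limit.
            return SUSTAINED_POWER_W
        return BOOST_POWER_W  # otherwise allow the short-term boost power

    # A desktop APU in a tower case never approaches the skin limit, so this
    # path should never engage -- which is why STAPM is irrelevant on AM5.
    print(stapm_power_limit(skin_temp_c=30.0, die_temp_c=70.0))  # 84.0
    print(stapm_power_limit(skin_temp_c=48.0, die_temp_c=70.0))  # 65.0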

As we saw when we ran a gaming load over a prolonged period of time on the Ryzen 7 8700G with the firmware available at launch, we hit power throttling (STAPM) after around 3 minutes. As we can see in the above chart, power dropped from a sustained value of 83-84 W down to around 65 W, representing a drop in power of around 22%. While we know Zen 4 is a very efficient architecture at lower power values, overall performance will drop once this limit is hit. Unfortunately, AMD forgot to remove the STAPM limits when transitioning Phoenix to the AM5 platform.
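
For reference, here is a quick back-of-the-envelope check of that figure, using the package power values quoted above:

    sustained_w = 84.0   # package power before STAPM kicks in
    stapm_w = 65.0       # package power after STAPM throttling
    drop_pct = (sustained_w - stapm_w) / sustained_w * 100
    print(f"{drop_pct:.1f}% drop")  # ~22.6%, in line with the ~22% figure above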

Retesting the same game (F1 2023) at the same settings (720p High) with the firmware in which STAPM has been removed, we can see that we aren't experiencing any of the power throttling we initially saw. Power is sustained for over 10 minutes of testing (we did test for double this), and we saw no drops in package power, at least not from anything related to STAPM. This means that for users on the latest firmware, whatever AM5 motherboard is being used, power and, ultimately, performance remain consistent with what the Ryzen 7 8700G should have delivered at launch.

The key question is: does removing STAPM change our initial results in our review of the Ryzen 7 8700G and Ryzen 5 8600G, and if so, by how much? We added the new data to our review of the Ryzen 7 8700G and Ryzen 5 8600G but kept the initial results so that users can see if there are any differences in performance. Ultimately, benchmark runs are limited to the time it takes to run them, but in real-world scenarios, tasks such as video rendering and other longer sustained loads are more likely to show gains in performance. After all, a drop of 22% in power is considerable, especially over a task that could take an hour.

(4-1d) Blender 3.6: Pabellon Barcelona (CPU Only)

Using one of our longer benchmarks, Blender 3.6, to highlight where performance gains are notable with the latest firmware and the STAPM limitations removed, we saw an increase in performance of around 7.5% on the Ryzen 7 8700G. In the same benchmark, we saw an increase of around 4% on the Ryzen 5 8600G APU.

Over all of the Blender 3.6 tests in the rendering section of our CPU performance suite, performance gains hovered between 2 and 4.4% on the Ryzen 5 8600G, and between 5 and 7.5% on the Ryzen 7 8700G. This isn't really free performance; it's the performance that should have been there to begin with at launch.

IGP World of Tanks - 768p Min - Average FPS

Looking at how STAPM affected our initial data, we can see that it had a marginal effect at best in World of Tanks at 768p Minimum settings, on the order of 1%. Given how CPU-intensive World of Tanks is, and combining this with integrated graphics, the AMD Ryzen APUs (5000G and 8000G) both shine compared to Intel's integrated UHD Graphics in gaming. Given that gaming benchmarks are typically time-limited runs, it's harder to identify performance gains. The key takeaway here is that with the STAPM limitation removed, performance shouldn't drop over sustained periods of time, so our figures above and our updated review data aren't compromised.

(i-3) Total War Warhammer 3 - 1440p Ultra - Average FPS

Regarding gaming with a discrete graphics card, we saw no drastic changes in performance, as highlighted by our Total War Warhammer 3 at 1440p Ultra benchmark. Across the board, in our discrete graphics results with both the Ryzen 7 8700G and the Ryzen 5 8600G, we saw nothing but marginal differences in performance (less than 1%). As we've mentioned, removing the STAPM limitations doesn't necessarily improve performance. Still, it allows the APUs to keep the same performance level for sustained periods, which is how it should have been at launch. With STAPM applied as with the initial firmware at launch on AM5 motherboards, power would drop by around 22%, limiting the full performance capability over prolonged periods.

As we've mentioned, we have updated our full review of the AMD Ryzen 7 8700G and Ryzen 5 8600G APUs to reflect our latest data gathered from testing on the latest firmware. In any case, we can confirm that the STAPM issue has been fixed and that performance on both chips is now as it should be.

You can access all of our updated data in our review of the Ryzen 7 8700G and Ryzen 5 8600G by clicking the link below.

Intel Previews Sierra Forest with 288 E-Cores, Announces Granite Rapids-D for 2025 Launch at MWC 2024

February 26, 2024 at 12:25

At MWC 2024, Intel confirmed that Granite Rapids-D, the successor to its Ice Lake-D processors, will come to market sometime in 2025. Furthermore, Intel also provided an update on the 6th Gen Xeon family, codenamed Sierra Forest, which is set to launch later this year and will feature up to 288 cores; it is aimed at vRAN network operators looking to boost per-rack performance for 5G workloads.

These chips are designed for handling infrastructure, applications, and AI workloads, and they aim to capitalize on current and future AI and automation opportunities, enhancing operational efficiency and lowering ownership costs in next-gen applications, reflecting Intel's vision of integrating 'AI Everywhere' across various infrastructures.

Intel Sierra Forest: Up to 288 Efficiency Cores, Set for 2H 2024

The first of Intel's announcements at MWC 2024 focuses on the upcoming Sierra Forest platform, which is scheduled for the first half of 2024. Initially announced in February 2022 during Intel's Investor Meeting, Sierra Forest marks Intel's splitting of its server roadmap into solutions featuring only performance (P) cores and solutions featuring only efficiency (E) cores. We already know that Sierra Forest's new chips feature a full E-core architecture designed for maximum efficiency in scale-out, cloud-native, and containerized environments.

These chips utilize CPU chiplets built on the Intel 3 process alongside twin I/O chiplets based on the Intel 7 node. This combination allows for a scalable architecture, which can accommodate increasing core counts by adding more chiplets, optimizing performance for complex computing environments.

Sierra Forest, Intel's all-E-core Xeon processor family, is anticipated to significantly enhance power efficiency, with up to 288 E-cores per socket. Intel also claims that Sierra Forest is expected to deliver 2.7 times the performance-per-rack compared to an unspecified platform from 2021; this could be either Ice Lake or Cascade Lake, but Intel didn't say which.

Additionally, Intel is promising infrastructure power savings of up to 30% with Sierra Forest, as its Infrastructure Power Manager (IPM) application is now commercially available for 5G cores. Power manageability and efficiency are growing challenges for network operators, so IPM is designed to allow them to optimize energy efficiency and TCO savings.

Intel is also addressing vRAN, which is vital for modern mobile networks: many operators are forgoing dedicated hardware and instead leaning towards virtualized radio access networks (vRANs). Using vRAN Boost, an accelerator integrated within its Xeon processors, Intel states that the 4th Gen Xeon should be able to reduce power consumption by around 20% while doubling the available network capacity.

Intel's push for 'AI Everywhere' is also a constant focus here, with AI's role in vRAN management becoming more crucial. Intel has announced the vRAN AI Developer Kit, which is available to select partners. This allows partners and 5G network providers to develop AI models to optimize for vRAN applications, tailor their vRAN-based functions to more use cases, and adapt to changes within those scenarios.

Intel Granite Rapids-D: Coming in 2025 For Edge Solutions

Intel's Granite Rapids-D, designed for edge solutions, is set to bolster Intel's role in virtualized radio access network (vRAN) workloads in 2025. Intel also promises marked efficiency enhancements and some vRAN Boost optimizations similar to those expected on Sierra Forest. Set to follow on from the current Ice Lake-D for the edge, Granite Rapids-D is expected to use the performance (P) cores found in the Granite Rapids server parts, with the voltage/frequency curve optimized for the lower-powered edge platform. As outlined by Intel, the previous 4th Generation Xeon platform effectively doubled vRAN capacity, enhancing network capabilities while reducing power consumption by up to 20%.

Granite Rapids-D aims to further these advancements, utilizing Intel AVX for vRAN and integrated Intel vRAN Boost acceleration, thereby offering substantial cost and performance benefits on a global scale. While Intel hasn't provided a specific date (or month) for when we can expect to see Granite Rapids-D in 2025, the company is currently sampling these next-gen Xeon-D processors with partners, aiming to ensure a market-ready platform at launch.


Intel Brings vPro to 14th Gen Desktop and Core Ultra Mobile Platforms for Enterprise

February 27, 2024 at 16:00

As part of this week's MWC 2024 conference, Intel is announcing that it is adding support for its vPro security technologies to select 14th Generation Core series processors (Raptor Lake-R) and their latest Meteor Lake-based Core Ultra-H and U series mobile processors. As we've seen from more launches than we care to count of Intel's desktop and mobile platforms, they typically roll out their vPro platforms sometime after they've released their full stack of processors, including overclockable K series SKUs and lower-powered T series SKUs, and this year is no exception. Altogether, Intel is announcing vPro Essential and vPro Enterprise support for several 14th Gen Core series SKUs and Intel Core Ultra mobile SKUs.

Intel's vPro security features are something we've covered previously – and on that note, Intel has a new Silicon Security Engine that gives the chips the ability to authenticate the system's firmware. Intel also states that Intel Threat Detection Technology within vPro has been enhanced and adds an additional layer for the NPU, with an xPU model (CPU/GPU/NPU) to help detect a variety of attacks, while also enabling 3rd-party software to run faster; Intel claims this is the only AI-based security deployment within a Windows PC to date. Both the full Enterprise tier and the cut-down Essentials tier of vPro hardware-level security are coming to select 14th Gen Core series processors, as well as to Intel's latest mobile-focused Meteor Lake processors with Arc graphics, which launched last year.

Intel 14th Gen vPro: Raptor Lake-R Gets Secured

As we've seen over the last few years with the global shift towards remote work due to the Coronavirus pandemic, the need for up-to-date security in businesses small and large is as critical as it has ever been. Remote and in-office employees alike must have access to the latest software and hardware frameworks to ensure the security of vital data, and that's where Intel vPro comes in.

To quickly recap the current state of affairs, let's take a look at the two tiers of Intel vPro security available, vPro Essentials and vPro Enterprise, and how they differ.

Intel's vPro Essentials was first launched back in 2022 and is a subset of Intel's complete vPro package, which is now commonly known as vPro Enterprise. The vPro Essentials security package is essentially (as per the name) tailored for small businesses, providing a solid foundation in security without penalizing performance. It integrates hardware-enhanced security features, ensuring hardware-level protection against emerging threats right from installation. It also utilizes real-time intelligence for workload optimization, along with Intel's Threat Detection Technology, which adds an additional layer below the operating system that uses AI-based threat detection to mitigate OS-level threats and attacks.

Pivoting to Intel vPro Enterprise security features, this is designed for SMEs to meet the high demands of large-scale business environments. It offers advanced security features and remote management capabilities, which are crucial for businesses operating with sensitive data and requiring high levels of cybersecurity. Additionally, the platform provides enhanced performance and reliability, making it suitable for intensive workloads and multitasking in a professional setting. Integrating these features from the vPro Enterprise platform ensures that large enterprises can maintain high productivity levels while ensuring data security and efficient IT management with the latest generations of processors, such as the Intel Core 14th Gen family.

Much like we saw when Intel announced their vPro for the 13th Gen Core series, it's worth noting that both the 14th and 13th Gen Core series are based on the same Raptor Lake architecture and, as such, are identical in every aspect bar base and turbo core frequencies.

Intel 14th Gen Core with vPro for Desktop (Raptor Lake-R)

AnandTech | Cores (P+E/T) | P-Core Base/Turbo (MHz) | E-Core Base/Turbo (MHz) | L3 Cache (MB) | Base W | Turbo W | vPro Support (Ent/Ess) | Price ($)
i9-14900K | 8+16/32 | 3200 / 6000 | 2400 / 4400 | 36 | 125 | 253 | Enterprise | $589
i9-14900 | 8+16/32 | 2000 / 5600 | 1500 / 4300 | 36 | 65 | 219 | Both | $549
i9-14900T | 8+16/32 | 1100 / 5500 | 800 / 4000 | 36 | 35 | 106 | Both | $549
i7-14700K | 8+12/28 | 3400 / 5600 | 2500 / 4300 | 33 | 125 | 253 | Enterprise | $409
i7-14700 | 8+12/28 | 2100 / 5400 | 1500 / 4200 | 33 | 65 | 219 | Both | $384
i7-14700T | 8+12/28 | 1300 / 5000 | 900 / 3700 | 33 | 35 | 106 | Both | $384
i5-14600K | 6+8/20 | 3500 / 5300 | 2600 / 4000 | 24 | 125 | 181 | Enterprise | $319
i5-14600 | 6+8/20 | 2700 / 5200 | 2000 / 3900 | 24 | 65 | 154 | Both | $255
i5-14500 | 6+8/20 | 2600 / 5000 | 1900 / 3700 | 24 | 65 | 154 | Both | $232
i5-14600T | 6+8/20 | 1800 / 5100 | 1200 / 3600 | 24 | 35 | 92 | Both | $255
i5-14500T | 6+8/20 | 1700 / 4800 | 1200 / 3400 | 24 | 35 | 92 | Both | $232

While Intel isn't technically launching any new chip SKUs (either desktop or mobile) with vPro support, the desktop vPro platform features are enabled through the use of specific motherboard chipsets, with the Q670 and W680 being the only chipsets supporting vPro on 14th Gen. Unless one of the specific chips listed above is installed in a Q670 or W680 motherboard, neither vPro Essentials nor vPro Enterprise will be enabled.

As with the previous 13th Gen Core series family (Raptor Lake), the 14th Gen, which is a direct refresh of these, follows a similar pattern. Specific SKUs from the 14th Gen family include support only for the full-fledged vPro Enterprise, including the Core i5-14600K, the Core i7-14700K, and the flagship Core i9-14900K. Intel's vPro Enterprise security features are supported on both Q670 and W680 motherboards, giving users more choice in which board they opt for.

The rest of the Intel 14th Gen Core series stack above, including the non-monikered chips, e.g., the Core i5-14600, as well as the T series, which is optimized for efficient workloads with a lower TDP than the rest of the stack, supports both vPro Enterprise and vPro Essentials. This covers two processors from the Core i9 family, the Core i9-14900 and Core i9-14900T; two from the i7 series, the Core i7-14700 and Core i7-14700T; and four from the i5 series, the Core i5-14600, Core i5-14500, Core i5-14600T, and Core i5-14500T.


The ASRock Industrial IMB-X1231 W680 mini-ITX motherboard supports vPro Enterprise and Essentials

For the non-K processors mentioned above, different levels of vPro support are offered depending on the motherboard chipset: on a Q670 motherboard, users can specifically opt for Intel's cut-down vPro Essentials security features. Intel states that users with either a Q670 or W680 board can use the full vPro Enterprise security features, including with the Core i9-14900K, the Core i7-14700K, and the Core i5-14600K. Outside of this, none of the 14th Gen SKUs with the KF (unlocked, no iGPU) or F (no iGPU) monikers are listed with support for vPro.
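
For readers who prefer the support matrix in a more programmatic form, here is a simplified sketch distilled from the table and chipset requirements above; the dictionary layout and function are our own illustration (with an abbreviated SKU list), not an Intel-provided API.

    # Simplified illustration of 14th Gen desktop vPro eligibility,
    # distilled from the table above. Not an official Intel API.
    VPRO_TIERS = {
        # K-series SKUs: vPro Enterprise only
        "i9-14900K": {"Enterprise"},
        "i7-14700K": {"Enterprise"},
        "i5-14600K": {"Enterprise"},
        # non-K and T-series SKUs support both tiers (abbreviated list)
        "i9-14900": {"Enterprise", "Essentials"},
        "i7-14700T": {"Enterprise", "Essentials"},
        "i5-14600": {"Enterprise", "Essentials"},
    }
    VPRO_CHIPSETS = {"Q670", "W680"}  # vPro is only enabled on these chipsets

    def supported_vpro(sku: str, chipset: str) -> set:
        """Return the vPro tiers available for a given CPU/chipset pairing."""
        if chipset not in VPRO_CHIPSETS:
            return set()                    # e.g. a consumer Z790 board: no vPro
        return VPRO_TIERS.get(sku, set())   # KF/F SKUs are absent: empty set

    print(supported_vpro("i9-14900K", "W680"))  # {'Enterprise'}
    print(supported_vpro("i5-14600", "Z790"))   # set()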

Intel Meteor Lake with vPro: Core Ultra H and U Series get Varied vPro Support

Further to the Intel 14th Gen Core series for desktops, Intel has also enabled vPro support for their latest Meteor Lake-based Core Ultra H and U series mobile processors. Unlike the desktop platform for vPro, things are a little different in the mobile space, as Intel offers vPro on their mobile SKUs, either with vPro Enterprise or vPro Essentials, not both.

Intel Core Ultra H and U-Series Processors with vPro (Meteor Lake)

AnandTech | Cores (P+E+LP/T) | P-Core Turbo (MHz) | E-Core Turbo (MHz) | GPU | GPU Freq (MHz) | L3 Cache (MB) | vPro Support (Ent/Ess) | Base TDP | Turbo TDP
Core Ultra 9 185H | 6+8+2/22 | 5100 | 3800 | Arc Xe (8) | 2350 | 24 | Enterprise | 45 W | 115 W
Core Ultra 7 165H | 6+8+2/22 | 5000 | 3800 | Arc Xe (8) | 2300 | 24 | Enterprise | 28 W | 64/115 W
Core Ultra 7 155H | 6+8+2/22 | 4800 | 3800 | Arc Xe (8) | 2250 | 24 | Essentials | 28 W | 64/115 W
Core Ultra 7 165U | 2+8+2/14 | 4900 | 3800 | Arc Xe (4) | 2000 | 12 | Enterprise | 15 W | 57 W
Core Ultra 7 164U | 2+8+2/14 | 4800 | 3800 | Arc Xe (4) | 1800 | 12 | Enterprise | 9 W | 30 W
Core Ultra 7 155U | 2+8+2/14 | 4800 | 3800 | Arc Xe (4) | 1950 | 12 | Essentials | 15 W | 57 W
Core Ultra 5 135H | 4+8+2/18 | 4600 | 3600 | Arc Xe (7) | 2200 | 18 | Enterprise | 28 W | 64/115 W
Core Ultra 5 125H | 4+8+2/18 | 4500 | 3600 | Arc Xe (7) | 2200 | 18 | Essentials | 28 W | 64/115 W
Core Ultra 5 135U | 2+8+2/14 | 4400 | 3600 | Arc Xe (4) | 1900 | 12 | Enterprise | 15 W | 57 W
Core Ultra 5 134U | 2+8+2/14 | 4400 | 3800 | Arc Xe (4) | 1750 | 12 | Enterprise | 9 W | 30 W
Core Ultra 5 125U | 2+8+2/14 | 4300 | 3600 | Arc Xe (4) | 1850 | 12 | Essentials | 15 W | 57 W

The above table highlights not just the specifications of each Core Ultra 9, 7, and 5 SKU, but also denotes which model gets what level of vPro support. The Core Ultra 9 185H, the current mobile flagship Meteor Lake chip, supports vPro Enterprise, as do the top-tier SKUs from the Core Ultra 7 and Core Ultra 5 families, the Core Ultra 7 165H and the Core Ultra 5 135H. Other chips with vPro Enterprise support include the Core Ultra 7 165U and Core Ultra 7 164U, as well as the Core Ultra 5 135U and Core Ultra 5 134U.

Intel's other Meteor Lake chips, including the Core Ultra 7 155H, the Core Ultra 7 155U, the Core Ultra 5 125H, and the Core Ultra 5 125U, only come with support for Intel's vPro Essentials features, not Enterprise. This represents a slight 'dropping of the ball' from Intel, something we also highlighted in our Intel 13th Gen Core vPro piece last year.

Intel vPro Support Announcement With No New Hardware, Why Announce Later?

It is worth noting that Intel's announcement of vPro support for its first wave of Meteor Lake Core Ultra SKUs isn't entirely new; Intel did highlight that Meteor Lake would support vPro last year in its Series 1 Product Brief dated 12/20/2023. Intel's formal announcement of vPro support for Meteor Lake is more about which SKU has which level of support, and we feel this could pose problems for users who have already purchased Core Ultra series notebooks for business and enterprise use. Listings at multiple outlets, including Newegg and HP directly, make no mention of vPro whatsoever.

This could mean that a user has purchased a notebook with, say, a Core Ultra 5 125H (vPro Essentials), either for use within an SME or as part of an SME's bulk purchase, without being aware that the chip doesn't support vPro Enterprise, whose additional security features they could benefit from both personally and from a business standpoint. We reached out to Intel, and the company sent us the following statement.

"Since we are launching vPro powered by Intel Core Ultra & Intel Core 14th Gen this week, prospective buyers will begin seeing the relevant system information on OEM and enterprise retail partner (eg. CDW) websites in the weeks ahead. This will include information on whether a system is equipped with vPro Enterprise or Essentials so that they can purchase the right system for their compute needs."

Tenstorrent Licenses RISC-V CPU IP to Build 2nm AI Accelerator for Edge

February 28, 2024 at 20:30

Tenstorrent this week announced that it had signed a deal to license its RISC-V CPU and AI processor IP to Japan's Leading-edge Semiconductor Technology Center (LSTC), which will use the technology to build its edge-focused AI accelerator. The most curious part of the announcement is that this accelerator will rely on a multi-chiplet design, with the chiplets to be made by Japan's Rapidus on its 2nm fabrication process and then packaged by the same company.

Under the terms of the agreement, Tenstorrent will license its datacenter-grade Ascalon general-purpose processor IP to LSTC and will help to implement the chiplet using Rapidus's 2nm fabrication process. Tenstorrent's Ascalon is a high-performance, out-of-order RISC-V CPU design that features an eight-wide decoder. The Ascalon core packs six ALUs, two FPUs, and two 256-bit vector units, and when combined with a 2nm-class process technology it promises to offer quite formidable performance.

The Ascalon was developed by a team led by legendary CPU designer Jim Keller, the current chief executive of Tenstorrent, who previously worked on successful projects at AMD, Apple, Intel, and Tesla.

In addition to general-purpose CPU IP licensing, Tenstorrent will co-design 'the chip that will redefine AI performance in Japan.' This apparently means that Tenstorrent does not plan to license its proprietary Tensix cores, tailored for neural network inference and training, to LSTC, but will instead help to design a bespoke AI accelerator aimed generally at inference workloads.

"The joint effort by Tenstorrent and LSTC to create a chiplet-based edge AI accelerator represents a groundbreaking venture into the first cross-organizational chiplet development in semiconductor industry," said Wei-Han Lien, Chief Architect of Tenstorrent's RISC-V products. "The edge AI accelerator will incorporate LSTC's AI chiplet along with Tenstorrent's RISC-V and peripheral chiplet technology. This pioneering strategy harnesses the collective capabilities of both organizations to use the adaptable and efficient nature of chiplet technology to meet the increasing needs of AI applications at the edge."

Rapidus aims to start production of chips on its 2nm fabrication process, which is currently under development, sometime in 2027, at least a year behind TSMC and a couple of years behind Intel. Yet, if it starts high-volume 2nm manufacturing in 2027, it will be a major breakthrough for Japan, which is trying hard to return to the ranks of the global semiconductor leaders.

Building an edge AI accelerator based on Tenstorrent's IP and Rapidus's 2nm-class production node is a big deal for LSTC, Tenstorrent, and Rapidus, as it is a testament to the technologies developed by all three companies.

"I am very pleased that this collaboration started as an actual project from the MOC conclusion with Tenstorrent last November," said Atsuyoshi Koike, president and CEO of Rapidus Corporation. "We will cooperate not only in the front-end process but also in the chiplet (back-end process), and work on as a leading example of our business model that realizes everything from design to back-end process in a shorter period of time ever."

Intel CEO Pat Gelsinger to Deliver Computex Keynote, Showcasing Next-Gen Products

March 8, 2024 at 11:00

Taiwan External Trade Development Council (TAITRA), the organizer of Computex, has announced that Pat Gelsinger, chief executive of Intel, will deliver a keynote at Computex 2024 on June 4, 2024. Focusing on the trade show's theme of artificial intelligence, he will showcase Intel's next-generation AI-enhanced products for client and datacenter computers.

According to TAITRA's press release, Pat Gelsinger will discuss how Intel's product lineup, including the AI-accelerated Intel Xeon, Intel Gaudi, and Intel Core Ultra processor families, opens up new opportunities for client PCs, cloud computing, datacenters, and network and edge applications. He will also discuss superior performance-per-watt and lower cost of ownership of Intel's Xeon processors, which enhance server capacity for AI workloads.

The most intriguing part of Intel's Computex keynote will of course be the company's next-generation AI-enhanced products for client and datacenter computers. At this point, Intel is prepping numerous products of considerable interest, including the following:

  • Arrow Lake and Lunar Lake processors made on next-generation process technologies for desktop and mobile PCs and featuring all-new microarchitectures;
  • Granite Rapids CPUs for datacenters based on a high-performance microarchitecture;
  • Sierra Forest processors with up to 288 cores for cloud workloads, based on energy-efficient cores codenamed Crestmont;
  • Gaudi 3 processors for AI workloads that promise to quadruple BF16 performance compared to Gaudi 2;
  • Battlemage graphics processing units.

All of these products are due to be released in 2024-2025, so Intel could well demonstrate them and showcase their performance advantages, or even formally launch some of them, at Computex. What remains to be seen is whether Intel will also give a glimpse at products that are further away, such as Clearwater Forest and Falcon Shores.

SiPearl's Rhea-2 CPU Added to Roadmap: Second-Gen European CPU for HPC

March 8, 2024 at 21:00

SiPearl, a processor designer supported by the European Processor Initiative, is about to start shipments of its very first Rhea processor for high-performance computing workloads. But the company is already working on its successor, currently known as Rhea-2, which is set to arrive sometime in 2026 in exascale supercomputers.

SiPearl's Rhea-1 datacenter-grade system-on-chip packs 72 off-the-shelf Arm Neoverse V1 cores designed for HPC and connected using a mesh network. The CPU has a hybrid memory subsystem that supports both HBM2E and DDR5 memory to get both high memory bandwidth and decent memory capacity, and it supports PCIe interconnects with the CXL protocol on top. The CPU was designed by a contract chip designer and is made by TSMC on its N6 (6 nm-class) process technology.

The original Rhea is to a large degree a product aimed at proving that SiPearl, a European company, can deliver a datacenter-grade processor. This CPU now powers Jupiter, Europe's first exascale system, which uses nodes powered by four Rhea CPUs and NVIDIA's H200 AI and HPC GPUs. Given that Rhea is SiPearl's first processor, the project can be considered fruitful.

With its 2nd generation Rhea processors, SiPearl will have to develop something that is considerably more competitive. This is perhaps why Rhea-2 will use a dual-chiplet implementation. Such a design will enable SiPearl to pack more processing cores and therefore offer higher performance. Of course, it remains to be seen how many cores SiPearl plans to integrate into Rhea 2, but at least the CPU company is set to adopt the same design methodologies as AMD and Intel.

Given the timing of SiPearl's Rhea-2 and the company's natural wish to preserve software compatibility with Rhea-1, it is reasonable to expect the company to adopt Arm's Neoverse V3 cores for its second processor. Arm's Neoverse V3 offers quite a significant uplift compared to Neoverse V2 (and V1) and can scale up to 128 cores per socket, which should be quite decent for HPC applications in 2025–2026.

While SiPearl will continue developing CPUs, it remains to be seen whether EPI will manage to deliver AI and HPC accelerators that are competitive against those from NVIDIA, AMD, and Intel.

Intel Announces Core i9-14900KS: Raptor Lake-R Hits Up To 6.2 GHz

March 14, 2024 at 15:00

For the last several generations of its desktop processors, Intel has released a higher-clocked, special-edition SKU under the KS moniker, which the company positions as its no-holds-barred performance part for that generation. For the 14th Generation Core family, Intel is keeping that tradition alive and well with the announcement of the Core i9-14900KS, which has been eagerly anticipated for months and is finally being unveiled for launch today. The Intel Core i9-14900KS is a special-edition processor with P-core turbo clock speeds of up to 6.2 GHz, which makes it the fastest desktop processor in the world... at least in terms of the advertised frequencies it can achieve.

With their latest KS processor, Intel is looking to further push the envelope on what can be achieved with the company's now venerable Raptor Lake 8+16 silicon. With a further 200 MHz increase in clockspeeds at the top end, Intel is looking to deliver unrivaled desktop performance for enthusiasts. At the same time, as this is the 4th iteration of the "flagship" configuration of the RPL 8+16 die, Intel is looking to squeeze out one more speed boost from the Alder/Raptor family in order to go out on a high note before the entire architecture starts to ride off into the sunset later this year. To get there, Intel will need quite a bit of electricity, and $689 of your savings.

Report: China to Pivot from AMD & Intel CPUs To Domestic Chips in Government PCs

March 26, 2024 at 20:00

China has initiated a policy shift to eliminate American processors from government computers and servers, reports the Financial Times. The decision is aimed at gradually eliminating AMD and Intel processors from systems used by China's government agencies, which will mean lower sales for the U.S.-based chipmakers and higher sales for China's own CPU designers.

The new procurement guidelines, introduced quietly at the end of 2023, mandate that government entities prioritize 'safe and reliable' processors and operating systems in their purchases. This directive is part of a concerted effort to bolster domestic technology and parallels a similar push within state-owned enterprises to embrace technology designed in China.

The list of approved processors and operating systems, published by China's Information Technology Security Evaluation Center, exclusively features Chinese companies. There are 18 approved processors that use a mix of architectures, including x86 and ARM, while the operating systems are based on open-source Linux software. Notably, the list includes chips from Huawei and Phytium, both of which are on the U.S. export blacklist.

This shift towards domestic technology is a cornerstone of China's national strategy for technological autonomy in the military, government, and state sectors. The guidelines provide clear and detailed instructions for exclusively using Chinese processors, marking a significant step in China's quest for self-reliance in technology.

State-owned enterprises have been instructed to complete their transition to domestic CPUs by 2027. Meanwhile, Chinese government entities have to submit quarterly progress reports on their IT system overhauls. Although some foreign technology will still be permitted, the emphasis is clearly on adopting local alternatives.

The move away from foreign hardware is expected to have a measurable impact on American tech companies. China is a major market for both AMD (accounting for 15% of its sales last year) and Intel (27% of its revenue). Additionally, Microsoft, while not disclosing specific figures, has acknowledged that China accounts for a small percentage of its revenues. And while government sales are only a fraction of overall China sales (as compared to the larger commercial PC business), the Chinese government is by no means a small customer.

Analysts questioned by the Financial Times predict that the transition to domestic processors will advance more swiftly for server processors than for client PCs, due to the less complex software ecosystem needing replacement. They estimate that China will need to invest approximately $91 billion from 2023 to 2027 to overhaul the IT infrastructure in government and adjacent industries.

PCIe 7.0 Draft 0.5 Spec Available: 512 GB/s over PCIe x16 On Track For 2025

April 4, 2024 at 12:00

PCI-SIG this week released version 0.5 of the PCI-Express 7.0 specification to its members. This is the second draft of the spec and the final call for PCI-SIG members to submit their new features to the standard. The latest update on the development of the specification comes a couple of months shy of a year after the PCI-SIG published the initial Draft 0.3 specification, with the group using the latest update to reiterate that development of the new standard remains on track for a final release in 2025.

PCIe 7.0 is the next-generation interconnect technology for computers, set to increase data transfer speeds to 128 GT/s per pin, doubling the 64 GT/s of PCIe 6.0 and quadrupling the 32 GT/s of PCIe 5.0. This would allow a 16-lane (x16) connection to support 256 GB/sec of bandwidth in each direction simultaneously, excluding encoding overhead. Such speeds will be handy for future datacenters as well as artificial intelligence and high-performance computing applications that will need even faster data transfer rates, including network data transfer rates.
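
As a quick sanity check on those numbers, the per-direction bandwidth of a link follows directly from the per-pin transfer rate; the short calculation below ignores FLIT/FEC overhead, as the figures above do.

    # Back-of-the-envelope PCIe bandwidth, ignoring encoding/FEC overhead.
    # The GT/s figure is per lane; PAM4 packs two bits per symbol, so
    # 128 GT/s works out to roughly 128 Gb/s of raw throughput per lane.
    def pcie_bandwidth_gb_per_s(gt_per_s: float, lanes: int = 16) -> float:
        return gt_per_s * lanes / 8  # bits per transfer -> bytes per second

    print(pcie_bandwidth_gb_per_s(32))   # PCIe 5.0 x16: ~64 GB/s per direction
    print(pcie_bandwidth_gb_per_s(64))   # PCIe 6.0 x16: ~128 GB/s per direction
    print(pcie_bandwidth_gb_per_s(128))  # PCIe 7.0 x16: ~256 GB/s per direction (512 GB/s both ways)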

To achieve its impressive data transfer rates, PCIe 7.0 doubles the bus frequency at the physical layer compared to PCIe 5.0 and 6.0. Otherwise, the standard retains pulse amplitude modulation with four-level signaling (PAM4), 1b/1b FLIT mode encoding, and the forward error correction (FEC) technologies already used for PCIe 6.0. Beyond that, PCI-SIG says that the PCIe 7.0 specification also focuses on enhanced channel parameters and reach, as well as improved power efficiency.

Overall, the engineers behind the standard have their work cut out for them, given that PCIe 7.0 requires doubling the bus frequency at the physical layer, a major development that PCIe 6.0 sidestepped with PAM4 signaling. Nothing comes for free in regards to improving data signaling, and with PCIe 7.0, the PCI-SIG is arguably back to hard-mode development by needing to improve the physical layer once more – this time to enable it to run at around 30GHz. Though how much of this heavy lifting will be accomplished through smart signaling (and retimers) and how much will be accomplished through sheer materials improvements, such as thicker printed circuit boards (PCBs) and low-loss materials, remains to be seen.

The next major step for PCIe 7.0 is finalization of version 0.7 of the specification, which is considered the Complete Draft, where all aspects must be fully defined and electrical specifications must be validated through test chips. After this iteration of the specification is released, no new features can be added. PCIe 6.0 eventually went through four major drafts – 0.3, 0.5, 0.7, and 0.9 – before being finalized, so PCIe 7.0 is likely on the same track.

Once finalized in 2025, it should take a few years for the first PCIe 7.0 hardware to hit the shelves. Although development work on controller IP and initial hardware is already underway, that process extends well beyond the release of the final PCIe specification.

Intel Unveils New Branding For 6th Generation Xeon Processors: Intel Xeon 6

April 9, 2024 at 15:35

Intel's Vision 2024 event, which is being held in Phoenix, AZ, has seen several key announcements. On the datacenter CPU front, Intel is using the show to unveil the newest branding for its venerable family of Xeon processors. Beginning with this year's sixth generation of processors, Intel is "evolving" the Xeon brand by retiring the "Xeon Scalable" branding in favor of the new and simplified "Xeon 6" brand.

The Xeon 6 family is set to launch later this year with two primary variants: an all-performance (P) core chip codenamed Granite Rapids, and an all-efficiency (E) core chip codenamed Sierra Forest. Both of these chips will be sold under the Xeon 6 brand and sit on top of the same motherboard platform, with the Xeon 6 branding intended in part to underscore this shared platform. Though speaking of the chips themselves, at this time Intel isn't illustrating how the two sub-series of chips will be differentiated in terms of product numbers.

Over the last year, we've extensively covered Intel's Granite Rapids and Sierra Forest. For more information about Granite Rapids and Sierra Forest, here are some of our key pieces:

Intel debuted their Xeon Scalable branding in 2017 with the launch of the Xeon Platinum 8100 series, which was built using their Skylake microarchitecture. At the time Xeon Scalable replaced Intel's older Xeon E/EP/EX vX branding, resetting the generation count in the process.

Moving forward to 2024, Intel is looking to build an ecosystem befitting the current demands of technologies within key areas such as data centers, Edge, and the PC. Intel is laying the foundations for what it calls 'Intel Enterprise AI.' Using a vast array of frameworks and accelerators and working closely with partners, ISVs, and GSIs to create a large and open ecosystem, the newly branded Intel Xeon 6 platforms will be key in the enterprise market as we advance.

Intel has adopted a newer and simpler nomenclature for Granite Rapids and Sierra Forest, starting with the Intel Xeon 6 processors. The Sierra Forest Xeon 6 processors, which include a chip featuring 288 E-cores, are set to launch in Q2 2024; they will be the first products to adopt the new branding, which is designed to ease customer navigation between models. Meanwhile, the P-core Granite Rapids Xeon 6 processors will come later.

Ultimately, the Xeon brand itself and what it entails (enterprise, workstation, server, and data center) isn't going anywhere. Instead, Intel is putting an increased focus on the generation number of the platform by moving it front and center, to more clearly highlight what generation of technology a part belongs to.

As mentioned, Intel's Xeon 6 processors, based on their Sierra Forest architecture, are set to launch in Q2 2024, while the Granite Rapids Xeon 6 platform is expected to come sometime in the second half of 2024.

Google Develops In-House Arm 'Axion' CPU for Datacenters

April 9, 2024 at 22:00

Google was among the first hyperscalers to build custom silicon for its services, starting with tensor processing units (TPUs) for its AI initiatives, and then video transcoding units (VCUs) for the YouTube service. But unlike its industry peers, the company has been slower to adopt custom CPU designs, preferring to stick to off-the-shelf chips from the major CPU vendors. This is finally changing at Google, with the announcement that the company has developed its own in-house datacenter CPU, the Axion.

Google's Axion processor is based on the Arm Neoverse V2 (Armv9) platform, which is Arm's current-generation design for high-performance server CPUs and is already employed in other chips such as NVIDIA's Grace and Amazon's Graviton4. Within Google, Axion is aimed at a wide variety of workloads, including web and app servers, data analytics, microservices, and AI training. Google claims that Axion processors boast up to 50% higher performance and up to 60% better energy efficiency compared to current-generation x86-based processors, as well as 30% higher performance compared to competing Arm-based datacenter CPUs. Though as is increasingly common for the cryptic cloud side of Google's business, at least for now the company isn't specifying what processors it is comparing Axion to in these metrics.

While Google is not disclosing core counts or the full specifications of its Axion CPUs, the company is revealing that they incorporate their own secret sauce in the form of the company's purpose-built Titanium microcontrollers. These microcontrollers are designed to handle basic operations like networking and security, as well as to offload storage I/O processing to the Hyperdisk block storage service. As a result of this offloading, virtually all of the CPU core resources should be available to actual workloads. As for the chip's memory subsystem, Axion uses conventional dual-rank DDR5 memory modules.

"Google's announcement of the new Axion CPU marks a significant milestone in delivering custom silicon that is optimized for Google's infrastructure, and built on our high-performance Arm Neoverse V2 platform," said Rene Haas, CEO of Arm. "Decades of ecosystem investment, combined with Google's ongoing innovation and open-source software contributions ensure the best experience for the workloads that matter most to customers running on Arm everywhere." 

Google has previously deployed Arm-based processors for its own services, including BigTable, Spanner, BigQuery, and YouTube Ads, and is now ready to offer instances based on its Armv9-based Axion CPUs to customers who can use software developed for Arm architectures.

Sources: Google, Wall Street Journal

Intel To Discontinue Boxed 13th Gen Core CPUs for Enthusiasts

April 11, 2024 at 00:00

In an unexpected move, Intel has announced plans to phase out the boxed versions of its enthusiast-class 13th Generation Core 'Raptor Lake' processors. According to a product change notification (PCN) published by the company last month, Intel plans to stop shipping these desktop CPUs by late June. In their place will remain Intel's existing lineup of boxed 14th Generation Core processors, which are based on the same 'Raptor Lake' silicon and typically offer higher performance for similar prices.

Intel customers and distributors interested in getting boxed versions of the 13th Generation Core i5-13600K/KF, Core i7-13700K/KF, and Core i9-13900K/KF/KS 'Raptor Lake' processors with unlocked multipliers should place their orders by May 24, 2024. The company will ship these units by June 28, 2024. Meanwhile, the PCN does not mention any change to the availability of tray versions of these CPUs, which are sold to OEMs and wholesalers.

The impending discontinuation of Intel's boxed 13th Generation Core processors comes as the company's current 14th Generation product line, 'Raptor Lake Refresh,' is largely a rehash of the same silicon at slightly higher clockspeeds. Case in point: all of the discontinued SKUs are based on Intel's B0 Raptor Lake silicon, which is still being used for their 14th Gen counterparts. So Intel is not discontinuing production of any Raptor Lake silicon; only the number of retail SKUs is being cut down.

As outlined in our 14th Generation Core/Raptor Lake Refresh review, the 14th Gen chips largely make their 13th Gen counterparts redundant, offering better performance at every tier for the same list price. And with virtually all current-generation motherboards supporting both generations of chips, Intel apparently feels there's little reason to keep around what are essentially older, slower SKUs of the same silicon.

Interestingly, the retirement of the enthusiast-class 13th Generation Core chips is coming before Intel discontinues their even older 12th Generation Core 'Alder Lake' processors. 12th Gen chips are still available to this day in both boxed and tray versions, and the Alder Lake silicon itself is still widely in use in multiple product families. So even though Alder Lake shares the same platform as Raptor Lake, the chips based on that silicon haven't been rendered redundant in the same way that 13th Gen Core chips have.

Ultimately, it would seem that Intel is intent on consolidating and simplifying its boxed retail chip offerings by retiring near-duplicate SKUs. For PC buyers, this could present a minor opportunity for a deal, as retailers work to sell off their remaining 13th Gen enthusiast chips.

The Intel Core Ultra 7 155H Review: Meteor Lake Marks A Fresh Start To Mobile CPUs

April 11, 2024 at 12:30

One of the most significant talking points of the last six months in mobile computing has been Intel and its disaggregated Meteor Lake SoC architecture. Meteor Lake, along with the new Core and Core Ultra naming scheme, heralds Intel's first tiled architecture for the mobile landscape, built on the latest Intel 4 node with Foveros packaging. In December last year, Intel unveiled its premier Meteor Lake-based Core Ultra H series, with five SKUs: two 4P+8E+2LP/18T models and three 6P+8E+2LP/22T models. Since then, many vendors and manufacturers have launched notebooks built around Intel's latest multi-tiled Meteor Lake SoC as the heart of their 2024 models.

Today, we are focusing on an attractive ultrabook, the ASUS Zenbook 14 OLED (UX3405MA), which features a thin and light design and is powered by Intel's latest Meteor Lake Core Ultra 7 155H processor. While much of the attention will be on how the Core Ultra 7 155H, with its 6P+8E+2LP/22T configuration and 8 Arc Xe integrated graphics cores, performs, the ASUS Zenbook 14 OLED UX3405MA has plenty of features within its sleek Ponder Blue shell to make it very interesting. These include a 14-inch 3K (2880 x 1800) touchscreen OLED panel with a 120 Hz refresh rate, 32 GB of soldered LPDDR5X memory, and a 1 TB NVMe M.2 SSD for storage.

Intel Teases Lunar Lake At Intel Vision 2024: 100+ TOPS Overall, 45 TOPS From NPU Alone

April 11, 2024 at 17:00

During the main keynote at Intel Vision 2024, Intel CEO Pat Gelsinger showed off a completed Lunar Lake chip, much like EVP and General Manager of Intel's Client Computing Group (CCG) Michelle Johnston Holthaus did back at CES 2024. The difference this time is that Gelsinger gave us something juicier than just a photo op: he put claimed figures on the levels of AI performance we can expect to see when Lunar Lake launches.

According to Gelsinger, Lunar Lake, scheduled to launch towards the end of this year, is set to raise the bar even further for on-chip AI capabilities and performance. During his Intel Vision presentation, he stated that Lunar Lake will be the 'flagship SoC' for the next generation of AI PCs. Intel claims that Lunar Lake will have 3X the AI performance of the current Meteor Lake SoC, which is impressive, as Meteor Lake is estimated to deliver around 34 TOPS combined across the NPU, GPU, and CPU.

Of Meteor Lake's 34 TOPS, 11 come solely from the NPU. Intel claims that the NPU on Lunar Lake will hit 45 TOPS, akin to the Hailo-10 add-in card and similar to Qualcomm's Snapdragon X Elite processor. Factoring in the integrated graphics and the compute cores, Intel is claiming a combined total of over 100 TOPS, and with Microsoft's self-imposed guideline that an 'AI PC' requires 40 TOPS, Lunar Lake's NPU alone fits the bill.
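Taking Intel's stated figures at face value, a quick sanity check shows the individual claims line up with one another; the arithmetic below only restates the numbers from this article.

```python
# Figures as quoted in this article; the ratios are simply what they imply.
meteor_lake_total_tops = 34    # CPU + GPU + NPU combined (estimated)
meteor_lake_npu_tops   = 11    # NPU alone
lunar_lake_npu_tops    = 45    # Intel's claim for the Lunar Lake NPU

claimed_total = 3 * meteor_lake_total_tops               # "3X Meteor Lake"
npu_uplift    = lunar_lake_npu_tops / meteor_lake_npu_tops

print(f"3x of 34 TOPS -> ~{claimed_total} TOPS total (consistent with 'over 100 TOPS')")
print(f"NPU-only uplift: ~{npu_uplift:.1f}x (45 vs. 11 TOPS)")
```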

Intel didn't detail exactly how it is extracting so much more performance from the NPU, whether through new technologies or because the NPU will be built on a more advanced node, perhaps Intel 18A. Another thing Intel didn't highlight is how it is measuring TOPS, i.e. whether the figures are quoted at INT8 or INT4 precision.
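For context on why the quoted precision matters, here is the standard back-of-the-envelope TOPS formula (two operations per multiply-accumulate per cycle); the MAC count and clock below are hypothetical placeholders chosen to land near 45 TOPS, not disclosed Lunar Lake specifications.

```python
# Illustrative only: the MAC count and clock are hypothetical placeholders,
# not Intel's disclosed Lunar Lake NPU configuration.

def peak_tops(mac_units: int, clock_ghz: float) -> float:
    """Peak dense throughput; each MAC counts as 2 ops (multiply + add) per cycle."""
    return 2 * mac_units * clock_ghz * 1e9 / 1e12

tops_int8 = peak_tops(mac_units=12_500, clock_ghz=1.8)   # ~45 TOPS at INT8
# Hardware that can split each INT8 MAC into two INT4 MACs doubles the rated
# figure, which is why the quoted precision changes the headline number.
tops_int4 = 2 * tops_int8

print(f"INT8: {tops_int8:.1f} TOPS, INT4: {tops_int4:.1f} TOPS")
```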

Still, one thing is clear: Intel wants to increase on-chip AI capabilities in desktop PCs and notebooks with each generation. Intel is also attempting to leverage more AI performance to help boost its goal to ship 100 million AI PCs by the end of 2025. Intel has already announced that it's shipped 5 million thus far and plans to sell another 40 million units by the end of the year.

AMD Quietly Launches Ryzen 7 8700F and Ryzen 5 8400F Processors

April 11, 2024 at 19:30

AMD has recently expanded its Ryzen 8000 series by introducing the Ryzen 7 8700F and Ryzen 5 8400F processors. Initially launched in China, these chips have since been added to AMD's global website, signaling that they are available worldwide, apparently as of April 1st. Built from the same Zen 4-based Phoenix silicon, manufactured on TSMC's 4nm node, as AMD's Zen 4 mobile chips, these new CPUs lack integrated graphics. However, the Ryzen 7 8700F does retain the integrated Ryzen AI NPU, bringing on-chip AI acceleration directly into the desktop PC.

The company's decision to announce these chips in China aligns with its strategy to offer Ryzen solutions at every price point in the market. AMD didn't initially disclose the full specifications of these F-series models, and when we reached out to ask about them, the company declined to discuss them with us. The listings on AMD's website have since been updated with a complete set of specifications and features, with everything but the price mentioned.

AMD Ryzen 8000G vs. Ryzen 8000F Series (Desktop)
Zen 4 (Phoenix)

AnandTech       Cores/Threads   Base Freq (MHz)   Turbo Freq (MHz)   GPU             GPU Freq (MHz)   Ryzen AI (NPU)   L3 Cache (MB)   TDP    MSRP
Ryzen 7 8700G   8/16            4200              5100               R780M, 12 CUs   2900             Y                16              65W    $329
Ryzen 7 8700F   8/16            4100              5000               -               -                Y                16              65W    ?
Ryzen 5 8600G   6/12            4300              5000               R760M, 8 CUs    2800             Y                16              65W    $229
Ryzen 5 8400F   6/12            4200              4700               -               -                N                16              65W    ?

The Ryzen 7 8700F features an 8C/16T design, with 16 MB of L3 cache and the same 65 W TDP as the Ryzen 7 8700G. Its base clock is 4.1 GHz and it boosts to 5.0 GHz, 100 MHz lower than the 8700G on both base and boost clocks. Meanwhile, the Ryzen 5 8400F is a slightly scaled-down version of the Ryzen 5 8600G APU, with a 6C/12T configuration, 16 MB of L3 cache, and again a 100 MHz reduction in base clock compared to the 8600G. Unlike the Ryzen 5 8400F, the Ryzen 7 8700F keeps AMD's Ryzen AI NPU, adding additional capability for generative AI.

The Ryzen 5 8400F can boost up to 4.7 GHz, 300 MHz slower than the Ryzen 5 8600G. AMD also allows overclocking for these new F-series chips, which means users could potentially boost the performance of these processors to match their G-series equivalents.

Pricing details are still pending, but to remain competitive, AMD will likely need to price these CPUs below the 8700G and 8600G, as well as the Ryzen 7 7700 and Ryzen 5 7600. The latter two offer integrated graphics (albeit very limited), double the L3 cache capacity, and higher boost clocks than the 8000F series chips, so they will be something to weigh against the new parts whenever pricing becomes available.
