Best CPUs for Gaming: February 2024

February 2, 2024 at 13:00

As the first quarter of 2024 is already well underway, there's not been much in the way of launches since last year. Of particular note is AMD's launch of their Ryzen 8000G series APUs, which are based on their mobile silicon using the Zen 4 architecture. This release adds a different dynamic to the CPU market, with AMD building on their still popular Ryzen 5000G series of APUs, which still regularly feature as some of the best-selling CPUs on the market despite their age. With the Ryzen 7 8700G (8C/16T) and Ryzen 5 8600G (6C/12T), which we reviewed, AMD combines their Phoenix silicon with the latest Radeon RDNA3 integrated graphics into a desktop package designed for their AM5 platform.

Last October, we also saw Intel introduce their 14th Gen Core family, headlined by the Core i9-14900K with its 6.0 GHz maximum boost clock. AMD's Ryzen 7000 series continues to compete fiercely at the high end of the market, offering CPU enthusiasts and gamers on all budgets a wide range of processors to select from. The availability of processors, motherboards, and DDR5 memory remains strong at key retailers, ensuring that the current market is well-prepared to meet the high demand. As we go into 2024, the CPU market shows a strong and diverse outlook, with options for consumers of all kinds.

Update: Samsung Announces 990 EVO SSD, Energy-Efficiency with Dual-Mode PCIe Gen4 x4 and Gen5 x2

January 23, 2024 at 19:30

After Samsung's earlier product page snafu, the company is officially launching their next-generation mainstream client SSD today. The 990 EVO will be available in both 1TB and 2TB capacities, and offers an interesting mix of both PCIe Gen 5 and PCIe Gen 4 support by allowing up to 2 lanes of PCIe connectivity at Gen 5 speeds, or up to 4 lanes at Gen 4 and below.

The release of the 990 EVO marks the return of the EVO SSD brand after it was quietly put aside during the 980 generation, when Samsung's sole non-PRO drive was the vanilla 980 SSD. Consequently, Samsung's own performance comparisons for the new drive are against the most recent EVO, the 970 EVO Plus, though similar to how the vanilla 980 was effectively the 970 EVO successor, in many ways this is the successor to the 980.

The drives are available immediately from Samsung. The company has set the retail prices of the drives at $125 for the 1TB model, and $210 for the 2TB. These are stiff prices for a drive debuting in the highly-competitive mainstream SSD market, though admittedly not unusual for a Samsung drive launch.

Our original story (with updated technical specifications) follows below:


Originally Published: 01/09/2024

Samsung's launch of the 990 EVO M.2 2280 SSD appears to be imminent, as official product pages with specifications went live in certain regions a few days back before getting pulled down.

The most interesting aspect of the 990 EVO is not the claimed speeds, but the fact that it can operate in either Gen 4 or Gen 5 mode with different lane counts. The recently launched mobile platforms from both AMD and Intel use Gen 4 lanes for the storage subsystem. However, with progress in technology, it is inevitable that this will move to Gen 5 in the future. In the meantime, thermal constraints in mobile systems may prevent notebook manufacturers from adopting desktop-class Gen 5 speeds (8 - 14 GBps). An attractive option for such cases would be to move to a two-lane Gen 5 implementation that retains the same bandwidth as a Gen 4 x4 link, but cuts down on BOM cost by reducing the number of pins / lanes on the host side. It appears that Samsung's 990 EVO is a platform designed with such a scenario in mind.

PCIe PHYs / controllers are backwards compatible, and the 990 EVO's SSD controller incorporates a 4-lane Gen 5 controller and PHY. During the training phase with the host, both the link bandwidth and lane count can be negotiated. It appears that the SSD is configured to advertise Gen 5 speeds to the host if only two lanes are active.
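
As a quick back-of-the-envelope illustration of why a Gen 5 x2 link is equivalent to a Gen 4 x4 link in raw bandwidth, here is a minimal sketch assuming the standard per-lane rates of 16 GT/s for Gen 4 and 32 GT/s for Gen 5, with 128b/130b encoding; protocol overhead is ignored.

```python
# Back-of-the-envelope check: why a PCIe Gen 5 x2 link offers the same raw
# per-direction bandwidth as a Gen 4 x4 link (protocol overhead ignored).

GT_PER_LANE = {"gen4": 16e9, "gen5": 32e9}  # transfers per second, per lane
ENCODING = 128 / 130                        # 128b/130b line encoding (Gen 3 and later)

def link_bandwidth_gbs(gen: str, lanes: int) -> float:
    """Raw payload bandwidth of a PCIe link in GB/s (1 GB = 1e9 bytes)."""
    bits_per_second = GT_PER_LANE[gen] * ENCODING * lanes
    return bits_per_second / 8 / 1e9

print(f"Gen 4 x4: {link_bandwidth_gbs('gen4', 4):.3f} GB/s")  # ~7.877 GB/s
print(f"Gen 5 x2: {link_bandwidth_gbs('gen5', 2):.3f} GB/s")  # ~7.877 GB/s
```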

Samsung appears to be marketing only 1TB and 2TB capacities of the 990 EVO. Based on the product photos online, the models appear to be single-sided units (making them compatible with a wider variety of mobile platforms). The flash packages appear to be 1TB each, and the EVO moniker, the advertised Host Memory Buffer support, and the controller package markings in the product photos all point to a DRAM-less SSD controller - the Piccolo S4LY022. The quoted performance numbers appear low for a 176L / 236L V-NAND product. TechPowerUp believes that these SSDs are using an updated V6 (133L, termed V6 Prime) with better efficiency and yields compared to the regular V6.

Samsung 990 EVO Specifications
Capacity: 1 TB / 2 TB
Controller: Samsung S4LY022 (Piccolo)
NAND Flash: Samsung updated 6th Gen. V-NAND (133L 3D TLC)
Form-Factor, Interface: Single-sided M.2-2280, PCIe 4.0 x4 / 5.0 x2, NVMe 2.0
Sequential Read: 5000 MB/s (both capacities)
Sequential Write: 4200 MB/s (both capacities)
Random Read IOPS: 680K (1 TB) / 700K (2 TB)
Random Write IOPS: 800K (both capacities)
SLC Caching: Yes
TCG Opal Encryption: Yes
Warranty: 5 years
Write Endurance: 600 TBW (1 TB) / 1200 TBW (2 TB), 0.3 DWPD

Samsung is also touting much-improved power efficiency, with transfer rates per Watt claimed to be 2 - 3x those of the 970 EVO. The Piccolo controller's 5nm fabrication process and the V6 Prime's efficiency improvements both play a significant role here.

At the time of our original story, pricing and concrete launch dates for the 990 EVO were not available; the table above has since been updated with the final specifications for the 1TB and 2TB models. As noted in the update above, the 1TB model is priced at $125 and the 2TB version at $210, and both SKUs are available for purchase today.

Apple to Cut Blood Oxygen Feature from Newly-Sold Apple Watches in the U.S.

January 18, 2024 at 12:45

Following the latest legal defeat in Apple's ongoing patent infringement fight over blood oxygen sensors, the company is set to remove its blood oxygen measurement feature from its Watch Series 9 and Watch Ultra 2 sold in the U.S. The decision comes after the U.S. Court of Appeals for the Federal Circuit declined to extend a pause on an import ban imposed by the U.S. International Trade Commission (USITC) last year, making way for the ban to finally take effect.

The legal setback stems from a ruling that Apple's watches infringed on patents related to blood oxygen measurement that belong to Masimo, which sued Apple in 2020. The U.S. Court of Appeals' decision means that Apple must stop selling watches with this feature while the appeal, which could last a year or more, is in progress.

As the ruling bars Apple from selling additional watches with this feature, the company has been left with a handful of options to comply with the ruling. Ceasing watch sales entirely certainly works – though is unpalatable for obvious reasons – which leaves Apple with removing the feature from their watches in some manner. Any hardware retool to avoid infringing upon Masimo's patents would take upwards of several quarters, so for the immediate future, Apple will be taking the unusual step of disabling the blood oxygen sensor feature in software instead, leaving the physical hardware on-device but unused.

The new, altered Apple Watch models will be available from Thursday in Apple's retail and online stores. Despite the change, the company maintains that the USITC's decision is erroneous and continues to appeal. Apple stresses that the blood oxygen feature will still be available in models sold outside the U.S., and perhaps most critically, watches sold in the U.S. before this change will keep their blood oxygen measuring capability.

"Pending the appeal, Apple is taking steps to comply with the ruling while ensuring customers have access to Apple Watch with limited disruption," the company said in a statement published by Bloomberg.

It is noteworthy that the Patent Trial and Appeal Board invalidated 15 of the 17 Masimo patents it reviewed, a verdict that Masimo is currently challenging. In Masimo's trial for trade secret misappropriation last May, a judge threw out half of Masimo's 10 allegations due to a lack of adequate evidence. On the remaining allegations, most jurors agreed with Apple's position, but the trial ultimately ended with an 11-1, non-unanimous decision, resulting in a mistrial. Scheduling of a new trial to settle the matter is still pending. In the meantime, Apple has been left with little choice but to downgrade its products to keep selling them in the U.S.

TSMC Posts Q4'23 Earnings: 3nm Revenue Share Jumps to 15%, 5nm Overtakes 7nm For 2023

January 19, 2024 at 13:00

Taiwan Semiconductor Manufacturing Co. released its Q4'2023 and full year 2023 financial results this week. And along with a look at the financial state of the firm as it enters 2024, the company's earnings info also offers a fresh look at the utilization of their various fab nodes. Of particular interest, with TSMC ramping up production of chips on its N3 (3 nm-class) process technology, N3 (aka N3B) has already reached the point where it accounts for 15% of TSMC's revenue in Q4 2023. This is a very fast – albeit not record fast – revenue ramp.

In the fourth quarter of 2023, sales of wafers processed using N3 accounted for 15% of TSMC's total wafer revenue, whereas revenue of N5 and N7 accounted for 39% and 17% respectively. In terms of dollars, N3 revenue for TSMC was $2.943 billion, N5 sales totaled $6.867 billion, and N7 revenue reached $3.3354 billion. In general, advanced technologies (N7, N5, N3) commanded 67% of TSMC's revenue, whereas the larger grouping of FinFET-based process technologies accounted for 75% of the company's total wafer revenue.

It is noteworthy that revenue share contributions made by system-on-chips (SoCs) for smartphones and high-performance computing (a vague term that TSMC uses to describe everything from game consoles to notebooks and from workstations to datacenter-grade processors) were equal in Q4 2023: 43% each. Automotive chips accounted for 5%, and Internet-of-Things chips contributed another 5%.

"Our fourth quarter business was supported by the continued strong ramp of our industry-leading 3-nanometer technology," said Wendell Huang, VP and Chief Financial Officer of TSMC. "Moving into first quarter 2024, we expect our business to be impacted by smartphone seasonality, partially offset by continued HPC-related demand."

To put TSMC's N3 revenue share ramp into context, we need to compare it to the ramp of the foundry's previous-generation all-new node: N5 (5 nm-class), which entered high-volume manufacturing in mid-2020. TSMC began to recognize its N5 revenue in Q3 2020 and the production node accounted for 8% of the company's sales back then, which totaled $0.97 billion. In the second quarter of its availability (Q4 2020), N5 accounted for 20% of TSMC's revenue, or $2.536 billion.
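
As a quick sanity check on those figures, dividing each node's reported dollar revenue by its stated revenue share recovers the implied total quarterly revenue, which is a convenient way to compare the two ramps; the sketch below uses only the numbers quoted in this article.

```python
# Sanity check using only the figures quoted in this article: dividing a node's
# reported dollar revenue by its stated share of revenue recovers the implied
# total quarterly revenue, making the N5 and N3 ramps easier to compare.

ramp_data = [
    # (quarter, node, revenue in $B, share of total revenue)
    ("Q3 2020", "N5", 0.97,  0.08),
    ("Q4 2020", "N5", 2.536, 0.20),
    ("Q4 2023", "N3", 2.943, 0.15),
]

for quarter, node, revenue_b, share in ramp_data:
    implied_total = revenue_b / share
    print(f"{quarter} {node}: share {share:.0%}, revenue ${revenue_b}B, "
          f"implied total ~${implied_total:.2f}B")
```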

There is a major catch about TSMC's N5 ramp in 2020 that muddles comparisons a bit, however. The world's largest contract maker of chips sold a boatload of N5-based system-on-chips to Huawei at the time (shipping them physically before the U.S. sanctions against the company took effect in September 2020), as well as to Apple. By contrast, it is widely believed that Apple is the sole client to use TSMC's N3B technology due to costs. This means that, even with the quick ramp, TSMC had fewer customers in the early days of N3 than it did with N5, contributing to the slower ramp for N3.

As for the entire year, N3 wafer revenue accounted for 6% of TSMC's total wafer revenue in 2023. Meanwhile, N5 revenue finally overtook N7 revenue in FY2023, after being edged out by N7 in FY2022. For 2023, N5 wafers accounted for 33% of TSMC's revenue, and N7 wafers were responsible for 19% of the company's revenue.

TSMC's fourth quarter revenue totaled $19.62 billion, which represents a 1.5% year-over-year decrease, but a 13.6% increase over Q3 2023. Meanwhile, the company shipped 2.957 million 300-mm equivalent wafers in Q4 2023, up 1.9% sequentially. The company's gross margin for the quarter was 53.0%, operating margin was 41.6%, and net profit margin was 38.2%.

TSMC expects revenue in Q1 2024 to be between $18.0 billion and $18.8 billion, whereas gross margin is projected to be between 52% and 54%.

TSMC 2nm Update: Two Fabs in Construction, One Awaiting Government Approval

January 19, 2024 at 15:15

When Taiwan Semiconductor Manufacturing Co. (TSMC) is prepping to roll out an all-new process technology, it usually builds a new fab to meet the demand of its alpha customers and then adds capacity, either by upgrading existing fabs or by building another facility. With N2 (2nm-class), the company seems to be taking a slightly different approach, as it is already constructing two N2-capable fabs and is awaiting government approval for a third.

"We are also preparing our N2 volume production starting in 2025," said Mark Liu, TSMC's outgoing chairman, at the company's earnings call with financial analysts and investors. "We plan to build multiple fabs or multiple phases of 2nm technologies in both Hsinchu and Kaohsiung science parks to support the strong structural demand from our customers. […] In the Taichung Science Park, the government approval process is ongoing and is also on track."

TSMC is gearing up to construct two fabrication plants capable of producing N2 chips in Taiwan. The first fab is planned to be located near Baoshan in Hsinchu County, neighboring its R1 research and development center, which was specifically built to develop N2 technology and its successor. This facility is expected to commence high-volume manufacturing (HVM) of 2nm chips in the latter half of 2025. The second N2-capable fabrication plant is to be located in the Kaohsiung Science Park, part of the Southern Taiwan Science Park near Kaohsiung. The initiation of HVM at this plant is projected to be slightly later, likely around 2026.

In addition, the foundry is working to get government approvals to build yet another N2-capable fab in the Taichung Science Park. If the company starts constructing this facility in 2025, the fab could go online as soon as 2027.

With three fabs capable of making chips using its 2nm process technologies, TSMC is poised to offer vast 2nm capacity for years to come.

TSMC expects to start HVM on its N2 process technology, which uses gate-all-around (GAA) nanosheet transistors, around the second half of 2025. TSMC's 2nd generation 2nm-class process technology — N2P — will add backside power delivery. This technology will be used for mass production in 2026.

The Corsair A115 CPU Cooler Review: Massive Air Cooler Is Effective, But Expensive

January 22, 2024 at 14:00

With recent high-performance CPUs exhibiting increasingly demanding cooling requirements, we've seen a surge in releases of new dual-tower air cooler designs. Though not new by any means, dual-tower designs have taken on increased importance as air cooler designers work to keep up with the significant thermal loads generated by the latest processors. And even in systems that aren't running the very highest-end or hottest CPUs, designers have been looking for ways to improve on air cooling efficiency, if only to hold the line on noise levels while the average TDP of enthusiast-class processors continues to creep up. All of which has been giving dual-tower coolers a bigger presence within the market.

At this point many major air cooler vendors are offering at least one dual-tower cooler, and, underscoring this broader shift in air cooler design, they're being joined by the liquid-cooling focused Corsair. Best known within the PC cooling space for their expansive lineup of all-in-one (AIO) liquid PC CPU coolers, Corsair has enjoyed a massive amount of success with their AIO coolers. But perhaps as a result of this, the company has exhibited a notable reticence towards venturing into the air cooler segment, and it's been years since the company last introduced a new CPU air cooler. This absence is finally coming to an end, however, with the launch of a new dual-tower air cooler.

Our review today centers on Corsair's latest offering in the high-end CPU air cooler market, the A115. Designed to challenge established models like the Noctua NH-D15, the A115 is Corsair's effort to jump into the high-end air cooling market with both feet and a lot of bravado. The A115 boasts substantial dimensions to maximize its cooling efficiency, aiming not just to meet but to surpass the cooling requirements of the most demanding mainstream CPUs. This review will thoroughly examine the A115's performance characteristics and its competitive standing in the aftermarket cooling market.

Wi-Fi Alliance Introduces Wi-Fi CERTIFIED 7: 802.11be Prepares for Draft Standard Exit

January 23, 2024 at 17:00

The final approval of the 802.11be standard may only be scheduled for December 2024, but that has not put a spanner in the works of the Wi-Fi Alliance in creating a Wi-Fi 7 certification program.

At the 2024 CES, the program was officially announced with products based on silicon from Broadcom, Intel, Mediatek, and Qualcomm obtaining the Wi-Fi CERTIFIED 7 tag. Broadcom, Mediatek, and Qualcomm have already been through two generations of Wi-Fi 7 products, and it is promising to finally see Wi-Fi 7 exit draft status. This enables faster adoption on the client side, as well. The key features of Wi-Fi CERTIFIED 7 are based on the efforts of the IEEE 802.11be EHT (Extremely High Throughput) working group.

The introduction of 6 GHz support in Wi-Fi 6E in select regions opened up channels that were hitherto unavailable for in-home wireless use. Wi-Fi CERTIFIED 7 brings in support for 320 MHz channels. These ultra-wide channels are available only in the 6 GHz band.

These channels are responsible for the high throughput promised in Wi-Fi CERTIFIED 7. However, the non-availability of 6 GHz in many regions has proved to be a deterrent for client device vendors. Many of these companies do not want to spend extra for features that are not available across all geographies. It is likely that many client devices (particularly on the smartphone side) will ship without support for 320 MHz channels initially.

Multi-Link Operation (MLO) is yet another technique to boost available bandwidth for a single client. Wi-Fi CERTIFIED 7 allows clients to connect to the access point through multiple bands at the same time. It also increases the reliability of connections.

Wi-Fi 7 also brings in 4K QAM, allowing up to 12 bits to be encoded per symbol. This represents an increase in spectral efficiency of 20% over Wi-Fi 6 (which only required support for 1024 QAM).
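
As a quick worked example of where that 20% figure comes from (per-symbol capacity only; coding rate and other PHY overheads are ignored):

```python
# Where the "20% more spectral efficiency" figure comes from: each QAM symbol
# carries log2(constellation size) bits.
import math

bits_wifi6 = math.log2(1024)  # 1024-QAM in Wi-Fi 6 -> 10 bits per symbol
bits_wifi7 = math.log2(4096)  # 4096-QAM (4K QAM) in Wi-Fi 7 -> 12 bits per symbol

gain = bits_wifi7 / bits_wifi6 - 1
print(f"Wi-Fi 6: {bits_wifi6:.0f} bits/symbol, Wi-Fi 7: {bits_wifi7:.0f} bits/symbol")
print(f"Per-symbol capacity gain: {gain:.0%}")  # 20%
```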

Dense constellations require extremely sophisticated circuitry at both the transmitter (linear power amplifiers) and receiver (to decode symbols without errors). These are among the advancements that we can see in Wi-Fi CERTIFIED 7 devices.

Some of the other key updates in Wi-Fi CERTIFIED 7 include support for 512 compressed block acks, allocation of multiple resource units to a single station / client, and triggered uplink access.

802.11n introduced the concept of block acks at the MAC layer where multiple wireless 'frames' (MAC Protocol Data Units or MPDUs to be more exact) can be acknowledged by the receiver in one response. The ack indicates the missed MPDUs, if any, in the previously transmitted set. In Wi-Fi 6, the limit for the number of MPDUs per block ack was 256. In Wi-Fi 7, this has been pushed up to 512. Spreading out this communication allows for better resource usage.

Wi-Fi 6 introduced the concept of resource units in the OFDMA scheme wherein the radio channel gets partitioned into smaller frequency allocations called RUs. These allow small packets to be transmitted to multiple users at the same time. In Wi-Fi 6, each user could get only one RU. Wi-Fi 7 allows for better efficiency by enabling allocation of non-contiguous RUs to a single user.


Benefits of Multiple RU Allocation to a Single User (Source: Mediatek)

Wi-Fi 6 introduced the concept of triggered uplink access, allowing clients to simultaneously transmit data back to the access point in an independent manner. This transmission is synchronized by the AP sending out a trigger frame containing the resource unit allocation information for each client. Wi-Fi 7 optimizes this scheme further for QoS requirements and latency-sensitive streams.

In the meanwhile, the 802.11 working group has already started the ground work for Wi-Fi 8. 802.11bn (ultra-high reliability or UHR) aims to bring more resilience to high-speed Wi-Fi networks by allowing multi-link operation distributed over multiple access points, coordination between multiple access points, and power saving features on the access point side.


Timeline for 802.11bn (UHR): Wi-Fi 8 Deployments in 2027 - 2028? (Source: What Will Wi-Fi 8 Be? A Primer on IEEE 802.11bn Ultra High Reliability [PDF])

The Wi-Fi Alliance expects a wide range of application scenarios for Wi-Fi 7, now that certification is in place.

These include mobile gaming, video conferencing, industrial IoT, automotive, multi-user AR / VR / XR, immersive e-training modules, and other use-cases. Wi-Fi 6 brought in a number of technological advancements to Wi-Fi, and Wi-Fi 7 has added to that. Unfortunately, AR / VR / XR has been trying to break into the mainstream for quite some time, but has met with muted success. It is one of the primary single-client use-cases that can benefit from features like MLO in Wi-Fi 7.

Advancements in spectral efficiency over the last few generations have helped greatly in enterprise deployments. These are scenarios where it is necessary to service a large number of clients with a single access point while maintaining acceptable QoS. User experience in MDUs (multi-dwelling units / apartments), where multiple wireless networks jostle with each other, has also improved. That said, vendors are still in search of the ideal single-client scenario to bring out the benefits of Wi-Fi 7 - wireline speeds have largely been stagnant over the last decade, and there are very few ISPs offering gigabit speeds at reasonable prices or over a wide enough area. Both wireline and wireless technologies have to evolve in tandem to benefit consumers and pull them in with attractive use-cases. As it currently stands, the pace of progress in Wi-Fi has largely outpaced that of wired networks over the last couple of decades.

Asus Launches USB4 Add-In-Card: Two 40 Gbps Ports for Desktops

January 24, 2024 at 13:00

Asus has introduced a USB4 PCIe add-in-card for the company's desktop motherboards, allowing users to add two USB4 ports to their systems. The card can be used to connect up to four devices and a display to each of its ports, and can even be used to charge laptops that support USB charging.

The Asus USB4 PCIe Gen4 Card is based on ASMedia's ASM4242 controller and supports two USB4 ports at 40 Gbps data rates, with up to 60W USB Power Delivery. The board also has two DisplayPort inputs in order to route graphics through the card, making full use of the versatility offered by USB4 and the Type-C cable. Alternatively, one can connect the card to the motherboard's TB3/TB4 header and use the integrated GPU to handle displays connected using USB-C cables.

One of the main advantages that the ports of the Asus USB4 PCIe Gen4 card have over the USB4 ports found on some motherboards is that they support 60W Quick Charge 4+, which makes it possible to charge laptops or connect devices that demand more than 15W of power (but less than 60W).

There is a catch with the Asus USB4 PCIe Gen4 card, though: it is only compatible with Asus motherboards, and it needs a motherboard with a Thunderbolt or USB4 header (which is mostly designed to make use of the integrated GPU). The company says that many of its AM5 and Intel 700-series motherboards have an appropriate header, so the device can be used on most of its current-generation boards.

The card operates on a PCIe 4.0 x4 interface, providing 7.877 GB/s of bandwidth to the ASMedia controller.  The card also features a six-pin auxiliary PCIe connector to supply the additional power needed for the card's high-powered ports.

Asus has yet to reveal the recommended price and availability date of its USB4 expansion card. Given that this is not the industry's first card of this kind, expect it to be priced competitively against existing Thunderbolt 3/4 expansion cards, which have been on the market for a while.

MLCommons To Develop PC Client Version of MLPerf AI Benchmark Suite

January 24, 2024 at 15:30

MLCommons, the consortium behind the MLPerf family of machine learning benchmarks, is announcing this morning that the organization will be developing a new desktop AI benchmarking suite under the MLPerf banner. Helmed by the body's newly-formed MLPerf Client working group, the task force will be developing a client AI benchmark suite aimed at traditional desktop PCs, workstations, and laptops. According to the consortium, the first iteration of the MLPerf Client benchmark suite will be based on Meta's Llama 2 LLM, with an initial focus on assembling a benchmark suite for Windows.

MLPerf is the de facto industry standard benchmark for AI inference and training on servers and HPC systems, and MLCommons has slowly been extending the family of benchmarks to additional devices over the past several years. This has included assembling benchmarks for mobile devices, and even low-power edge devices. Now, the consortium is setting about covering the "missing middle" of their family of benchmarks with an MLPerf suite designed for PCs and workstations. And while this is far from the group's first benchmark, it is in some respects their most ambitious effort to date.

The aim of the new MLPerf Client working group will be to develop a benchmark suitable for client PCs – which is to say, a benchmark that is not only sized appropriately for the devices, but is a real-world client AI workload in order to provide useful and meaningful results. Given the cooperative, consensus-based nature of the consortium’s development structure, today’s announcement comes fairly early in the process, as the group is just now getting started on developing the MLPerf Client benchmark. As a result, there are still a number of technical details about the final benchmark suite that need to be hammered out over the coming months, but to kick things off the group has already narrowed down some of the technical aspects of their upcoming benchmark suite.

Perhaps most critically, the working group has already settled on basing the initial version of the MLPerf Client benchmark around Meta's Llama 2 large language model, which is already used in other versions of the MLPerf suite. Specifically, the group is eyeing the 7 billion parameter version of that model (Llama-2-7B), as that's believed to be the most appropriate size and complexity for client PCs (at INT8 precision, the 7B model would require roughly 7GB of RAM). Past that, however, the group still needs to determine the specifics of the benchmark, most importantly the tasks which the LLM will be benchmarked on.
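
To illustrate the memory math behind that rough 7GB figure, here is a minimal sketch that counts only the model weights (parameter count times bytes per parameter); activations, KV cache, and runtime overhead are left out, and the INT4 row is included purely for comparison.

```python
# Minimal sketch of the memory math: weight footprint is parameter count times
# bytes per parameter. Activations, KV cache, and runtime overhead are ignored.

PARAMS = 7e9  # approximate parameter count of Llama-2-7B

for precision, bytes_per_param in [("FP16", 2), ("INT8", 1), ("INT4", 0.5)]:
    footprint_gb = PARAMS * bytes_per_param / 1e9
    print(f"{precision}: ~{footprint_gb:.1f} GB of weights")
```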

With the aim of getting it on PCs of all shapes and sizes, from laptops to workstations, the MLPerf Client working group is going straight for mass market adoption by targeting Windows first – a far cry from the *nix-focused benchmarks they’re best known for. To be sure, the group does plan to bring MLPerf Client to additional platforms over time, but their first target is to hit the bulk of the PC market where Windows reigns supreme.

In fact, the focus on client computing is arguably the most ambitious part of the project for a group that already has ample experience with machine learning workloads. Thus far, the other versions of MLPerf have been aimed at device manufacturers, data scientists, and the like – which is to say they've been barebones benchmarks. Even the mobile version of the MLPerf benchmark isn't very accessible to end-users, as it's distributed as a source-code release intended to be compiled on the target system. The MLPerf Client benchmark for PCs, on the other hand, will be a true client benchmark, distributed as a compiled application with a user-friendly front-end. Which means the MLPerf Client working group is tasked with not only figuring out what the most representative ML workloads will be for a client, but then how to tie that together into a useful graphical benchmark.

Meanwhile, although many of the finer technical points of the MLPerf Client benchmark suite remain to be sorted out, talking to MLCommons representatives, it sounds like the group has a clear direction in mind on the APIs and runtimes that they want the benchmark to run on: all of them. With Windows offering its own machine learning APIs (WinML and DirectML), and then most hardware vendors offering their own optimized platforms on top of that (CUDA, OpenVino, etc), there are numerous possible execution backends for MLPerf Client to target. And, keeping in line with the laissez faire nature of the other MLPerf benchmarks, the expectation is that MLPerf Client will support a full gamut of common and vendor-proprietary backends.

In practice, then, this would be very similar to how other desktop client AI benchmarks work today, such as UL's Procyon AI benchmark suite, which allows for plugging into multiple execution backends. The use of different backends does take away a bit from true apples-to-apples testing (though it would always be possible to force fallback to a common API like DirectML), but it gives the hardware vendors room to optimize the execution of the model to their hardware. MLPerf takes the same approach with their other benchmarks right now, essentially giving hardware vendors free rein to come up with new optimizations – including reduced precision and quantization – so long as they don't lose inference accuracy and fail to meet the benchmark's overall accuracy requirements.
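
MLPerf Client's actual runtime plumbing hasn't been published yet, but as an illustration of what "plugging into multiple execution backends" tends to look like in practice, here is a hypothetical sketch using ONNX Runtime's execution-provider mechanism; ONNX Runtime is just one possible runtime, and the model filename is made up for the example.

```python
# Illustrative only: how a client benchmark might pick an execution backend.
# This uses ONNX Runtime's execution-provider mechanism as an example; it is
# not MLPerf Client's actual implementation, and the model path is hypothetical.
import onnxruntime as ort

PREFERRED_BACKENDS = [
    "DmlExecutionProvider",   # DirectML (Windows GPU/NPU path)
    "CUDAExecutionProvider",  # vendor-specific backend (NVIDIA)
    "CPUExecutionProvider",   # common fallback, always available
]

def create_session(model_path: str) -> ort.InferenceSession:
    # Keep only the preferred backends that this ONNX Runtime build supports.
    available = ort.get_available_providers()
    providers = [p for p in PREFERRED_BACKENDS if p in available]
    return ort.InferenceSession(model_path, providers=providers)

session = create_session("llama2-7b-int8.onnx")  # hypothetical model file
print("Running on:", session.get_providers()[0])
```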

Even the type of hardware used to execute the benchmark is open to change: while the benchmark is clearly aimed at leveraging the new field of NPUs, vendors are also free to run it on GPUs and CPUs as they see fit. So MLPerf Client will not exclusively be an NPU or GPU benchmark.

Otherwise, keeping everyone on an equal footing, the working group itself is a who's who of hardware and software vendors. The list includes not only Intel, AMD, and NVIDIA, but Arm, Qualcomm, Microsoft, Dell, and others. So there is buy-in from all of the major industry players (at least in the Windows space), which has been critical for driving the acceptance of MLPerf for servers, and will similarly be needed to drive acceptance of MLPerf Client.

The MLPerf Client benchmark itself is still quite some time from release, but once it’s out, it will be joining the current front-runners of UL’s Procyon AI benchmark and Primate Labs’ Geekbench ML, both of which already offer Windows client AI benchmarks. And while benchmark development is not necessarily a competitive field, MLCommons is hoping that their open, collaborative approach will be something that sets them apart from existing benchmarks. The nature of the consortium means that every member gets a say (and a vote) on matters, which isn’t the case for proprietary benchmarks. But it also means the group needs a complete consensus in order to move forward.

Ultimately, the initial version of the MLPerf Client benchmark is being devised as more of a beginning than an end product in and of itself. Besides expanding the benchmark to additional platforms beyond Windows, the working group will also eventually be looking at additional workloads to add to the suite – and, presumably, adding more models beyond Llama 2. So while the group has a good deal of work ahead of them just to get the initial benchmark out, the plan is for MLPerf Client to be a long-lived, long-supported benchmark, as the other MLPerf benchmarks are today.

Intel's First High-Volume Foveros Packaging Facility, Fab 9, Starts Operations

January 25, 2024 at 14:00

Intel this week has started production at Fab 9, the company's latest and most advanced chip packaging plant. Joining Intel's growing collection of facilities in New Mexico, Fab 9 is tasked with packaging chips using Intel's Foveros technology, which is currently used to build the company's latest client Core Ultra (Meteor Lake) processors and Data Center Max GPU (Ponte Vecchio) for artificial intelligence (AI) and high-performance computing (HPC) applications.

The fab near Rio Rancho, New Mexico, cost Intel $3.5 billion to build and equip. The high price tag of the fab – believed to be the single most expensive advanced packaging facility ever built – underscores just how serious Intel is regarding its advanced packaging technologies and production capacity. Intel's product roadmaps call for making significant use of multi-die/chiplet designs going forward, and coupled with Intel Foundry Services customers' needs, the company is preparing for a significant jump in production volumes for Foveros, EMIB, and other advanced packaging techniques.

Intel's Foveros is a die-to-die stacking technology that uses a base die produced using the company's low-power 22FFL fabrication process with chiplet dies stacked on top of it. The base die can act as an interconnect between the dies it hosts, or can integrate certain I/O or logic. The current generation of Foveros supports bump pitches as small as 36 microns and can enable up to 770 connections per square millimeter; as bump pitches eventually shrink to 25 and then 18 microns, the technology will increase connection density and performance (both in terms of bandwidth and in terms of supported power delivery).
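
As a rough sanity check on that density figure, assuming a simple square bump grid, connection density works out to approximately (1 mm / bump pitch) squared:

```python
# Rough sanity check on the bump-density figure, assuming a simple square grid:
# connection density is approximately (1 mm / bump pitch) squared.

for pitch_um in (36, 25, 18):
    density_per_mm2 = (1000 / pitch_um) ** 2
    print(f"{pitch_um} um pitch -> ~{density_per_mm2:.0f} connections/mm^2")

# 36 um -> ~772/mm^2 (close to the ~770 quoted above); 25 um -> 1600; 18 um -> ~3086.
```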

A Foveros base die can be as big as 600 mm2, but for applications that require base dies larger than 600 mm2 (such as those used for datacenter products), Intel can stitch multiple base dies together using co-EMIB packaging technology.

Finally coming into full production, the new Fab 9 (which has inherited its name from what was once a 6-inch wafer lithography fab) is slated to be Intel's crown jewel for Foveros chip packaging for at least the next couple of years. While the company has "advanced packaging" capabilities in Malaysia (PGAT) as well, those facilities are currently only tooled for EMIB production, meaning that all of Intel's Foveros packaging is taking place on its New Mexico campus. As Intel's first high-volume Foveros packaging facility, the additional capacity should greatly expand Intel's total Foveros packaging throughput, though the company isn't providing specific volume figures.

With Intel's Fab 11x directly next door, the pair of facilities are also Intel's first co-located fab and advanced packaging site, allowing Intel to cut down on how many dies they have to import from other Intel fabs. Though as Fab 11x is not an Intel 4 facility, in the case of Meteor Lake it is only suitable for producing the 22FFL base die. Intel is still importing the Intel 4-built CPU die (Oregon & Ireland), as well as the TSMC-manufactured graphics, SoC, and I/O dies (Taiwan).

"Today, we celebrate the opening of Intel's first high-volume semiconductor operations and the only U.S. factory producing the world's most advanced packaging solutions at scale," said Keyvan Esfarjani, Intel executive vice president and chief global operations officer. "This cutting-edge technology sets Intel apart and gives our customers real advantages in performance, form factor and flexibility in design applications, all within a resilient supply chain. Congratulations to the New Mexico team, the entire Intel family, our suppliers, and contractor partners who collaborate and relentlessly push the boundaries of packaging innovation."

Intel Teams Up with UMC for 12nm Fab Node at IFS

January 25, 2024 at 20:00

Intel and UMC on Thursday said they had entered into an agreement to jointly develop a 12 nm photolithography process for high-growth markets such as mobile, communication infrastructure, and networking. Under the terms of the deal, the two companies will co-design a 12 nm-class foundry node that Intel Foundry Services (IFS) will use at its fabs in Arizona to produce a variety of chips.

The new 12 nm manufacturing process will be developed and used in Fabs 12, 22, and 32 at Intel's Ocotillo Technology Fabrication site in Arizona. The two companies will jointly work on the fabrication technology itself, the process design kit (PDK), electronic design automation (EDA) tools, and intellectual property (IP) solutions from ecosystem partners to enable quick deployment of the node by customers once the tech is production-ready in 2027.

Intel's Fabs 12, 22, and 32 in Arizona are currently capable of making chips on Intel's 7nm-class, 10 nm, 14 nm, and 22 nm manufacturing processes. So as Intel rolls out its Intel 4, Intel 3, and Intel 20A/18A production at other sites and winds down production of Intel 7-based products, these Arizona fabs will be freed to produce chips on a variety of legacy and low-cost nodes, including the 12 nm fabrication process co-developed by UMC and Intel.

While Intel itself has a variety of highly-customized process technologies for internal use to produce its own CPUs and similar products, its IFS division essentially has only three: Intel 16 for cost-conscious customers designing inexpensive low-power products (including those with RF support), Intel 3 for those who develop high-performance solutions yet want to stick to familiar FinFET transistors, and Intel 18A aimed at developers seeking no-compromise performance and transistor density enabled by gate-all-around RibbonFET transistors and PowerVia backside power delivery. To be a major foundry player, three process technologies are not enough; IFS needs to address as many customers as possible, and this is where the collaboration with UMC comes into play.

UMC already has hundreds of customers who develop a variety of products for automotive, consumer electronics, Internet-of-Things, smartphone, storage, and similar verticals. Those customers are quite used to working with UMC, but the best technology that the foundry has is its 14 nm-class 14FFC node. By co-designing a 12 nm-class process technology with Intel, UMC will be able to address customers who need something more advanced than its own 14 nm node, but without having to develop an all-new manufacturing process itself or procure advanced fab tools. Meanwhile, Intel gains customers for its fully depreciated (and presumably underutilized) fabs.

The collaboration on a 12 nm node extends the process technology offerings for both companies. What remains to be seen is whether Intel's own 16 nm-class process technology will compete and/or overlap with the jointly developed 12 nm node. To avoid this, we would expect UMC to add some of its know-how to the new tech and make it easier for customers to migrate to this process from its 28 nm-class and 14FFC offerings, which should help ensure that the 12 nm node remains in use for years to come.

Intel's partnership with UMC comes on the heels of the company's plan to build 65nm chips for Tower Semiconductor at its Fab 11X. Essentially, both collaborations allow Intel's IFS to use its fully depreciated fabs, gain relationships with fabless chip designers, and earn money. Meanwhile, its partners expand their capacity and reach without making heavy capital investments.

"Our collaboration with Intel on a U.S.-manufactured 12 nm process with FinFET capabilities is a step forward in advancing our strategy of pursuing cost-efficient capacity expansion and technology node advancement in continuing our commitment to customers," said Jason Wang, UMC co-president. "This effort will enable our customers to smoothly migrate to this critical new node, and also benefit from the resiliency of an added Western footprint. We are excited for this strategic collaboration with Intel, which broadens our addressable market and significantly accelerates our development roadmap leveraging the complementary strengths of both companies."

TeamGroup Reveals 14GB/s Innogrit IG5666-Based T-Force Ge Pro PCIe 5.0 SSD

January 26, 2024 at 12:00

Virtually all client SSDs with a PCIe 5.0 x4 interface released to date use Phison's PS5026-E26 controller. Apparently, TeamGroup decided to try something different and introduced a drive powered by a completely different platform, the Innogrit IG5666. The T-Force Ge Pro SSD not only uses an all-new platform, but also boasts fast 3D NAND to enable a sequential read speed of up to 14 GB/s, which nearly saturates the PCIe 5.0 x4 bus.

TeamGroup's T-Force Ge Pro PCIe 5.0 SSDs will be among the first drives to use the Innogrit IG5666 controller, which packs multiple cores that can handle an LDPC ECC algorithm with a 4096-bit code length, features low power consumption, has eight NAND channels, is made on a 12 nm-class process technology, and has a PCIe 5.0 x4 host interface. The drives will be available in 1 TB, 2 TB, and 4 TB configurations and will rely on high-performance 3D TLC NAND memory with a 2400 MT/s interface speed to guarantee maximum performance.

Indeed, the 2 TB and 4 TB T-Force Ge Pro drives are rated for sequential read speeds of up to 14,000 MB/s and sequential write speeds of up to 11,800 MB/s, which is in line with the highest-end SSDs based on the Phison E26 controller. Meanwhile, TeamGroup does not disclose the random performance offered by these SSDs.
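
As a rough check that the NAND back-end can keep that host interface fed, here is a quick sketch assuming the usual 8-bit-wide NAND channel (so 2400 MT/s works out to roughly 2400 MB/s per channel):

```python
# Rough check that the NAND back-end can keep the host interface fed, assuming
# the usual 8-bit-wide NAND channel (2400 MT/s ~ 2400 MB/s per channel).

channels = 8             # Innogrit IG5666 NAND channel count
mt_per_s = 2400          # NAND interface speed quoted for the drive
bytes_per_transfer = 1   # 8-bit channel width

raw_nand_gbs = channels * mt_per_s * bytes_per_transfer / 1000
rated_read_gbs = 14.0    # rated sequential read, roughly the PCIe 5.0 x4 ceiling

print(f"Aggregate raw NAND bandwidth: ~{raw_nand_gbs:.1f} GB/s")  # ~19.2 GB/s
print(f"Rated sequential read:        ~{rated_read_gbs:.1f} GB/s")
```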

What is noteworthy is that the T-Force Ge Pro drives are equipped with a simple graphene heatspreader, which is said to be enough to sustain such high performance levels under load. Usage of such a cooler makes it easy to fit a T-Force Ge Pro into almost any system, a major difference from many Phison E26-based drives. Of course, only reviews will reveal whether such a cooling system is indeed enough to properly cool the SSDs, but the fact that TeamGroup decided to go with a thin cooler is notable.

TeamGroup is set to offer its T-Force Ge Pro SSDs with a five-year warranty. Amazon, Newegg, and Amazon Japan will start taking pre-orders on these drives on February 9, 2024. Prices are currently unknown.

AMD Ryzen 7 8700G and Ryzen 5 8600G Review: Zen 4 APUs with RDNA3 Graphics

January 29, 2024 at 13:00

Some of the most sought-after desktop chips for low-cost systems have been AMD's APUs, or Accelerated Processing Units. The last time we saw AMD launch a series of APUs for desktops was back in 2021, with the release of their Cezanne-based Ryzen 5000G series, which combined Zen 3 cores with Radeon Vega-based integrated graphics. During CES 2024, AMD announced the successor to Cezanne with new Phoenix-based APUs, aptly named the Ryzen 8000G series.

The latest Ryzen 8000G series is based on their mobile Phoenix architecture and has been refitted for AMD's AM5 desktop platform. Designed to give users and gamers on a budget a pathway to build a capable yet cheaper system without the requirement of a costly discrete graphics card hanging over their head, the Ryzen 8000G series consists of three SKUs, ranging from an entry-level Phoenix 2 based Zen 4 and Zen 4c hybrid chip, all the way to a full Zen 4 8C/16T model with AMD's latest mobile RDNA3 integrated graphics. 

Sitting at the top of the pile is the Ryzen 7 8700G, with 8C/16T, 16 MB of L3 cache, and AMD's Radeon 780M graphics. The other chip we're taking a look at today is the middle-of-the-road AMD Ryzen 5 8600G, which has a 6C/12T configuration with fully-fledged mobile Zen 4 cores. A third option, currently limited to OEMs, has four cores: one full Zen 4 core and three smaller, more efficient Zen 4c cores.

The other notable inclusion in AMD's Ryzen 8000G series is that it brings their Ryzen AI NPU to the desktop market for the first time. The NPU is purpose-built for AI inferencing workloads such as generative AI, and is designed to improve both efficiency and AI performance.

Much of the Ryzen 8000G series' appeal will come down to how much of an improvement the switch to Zen 4 and RDNA3 integrated graphics delivers over the Ryzen 5000G series with Zen 3 and Vega, which is already three years old at this point. The other element is how the mobile-based Phoenix Zen 4 cores compare to the full-fat Raphael Zen 4 cores. In our review and analysis of the AMD Ryzen 7 8700G and Ryzen 5 8600G APUs, we aim to find out.

Corsair Launches MP600 Elite: Inexpensive Phison E27T-Based Drives

January 31, 2024 at 14:30

While enthusiasts are now focused mostly on SSDs with a PCIe 5.0 interface, there are many people who will be just fine with PCIe 4.0 drives for upgrading their PlayStation 5 or a PC bought a few years ago. To address these customers, SSD makers need to offer products that strike the right balance between price and performance. This is exactly what Corsair aims to do with its MP600 Elite drives.

Corsair this week has released a new line of SSDs aimed at the mainstream market, the MP600 Elite. The drives are based on Phison's low-power highly-integrated PS5027-E27T platform, which is geared towards building mainstream, DRAM-less drives. The controller supports both 3D TLC and 3D QLC NAND flash via a four channel, Toggle 5.0/ONFi 5.0 interface, with data transfer rates up to 3600 MT/s. Meanwhile host connectivity is provided via a PCIe 4.0 x4 interface.

With its MP600 Elite SSDs, Corsair is not trying to offer the fastest PCIe Gen4 drives on the market, but rather to offer maximum value across 3D TLC-powered 1 TB, 2 TB, and 4 TB configurations. The drives offer sequential read performance of up to 7,000 MB/s and write performance of up to 6,500 MB/s, as well as random read and write speeds of up to 1,000K and 1,200K IOPS respectively, which is not bad for a PCIe Gen4 SSD.

To maximize compatibility of its MP600 Elite drives (and make them compatible with Sony's PlayStation 5 and PlayStation 5 Slim), Corsair offers them both with a tiny aluminum heatspreader and with an even thinner graphene heatspreader.

The main idea behind the Corsair MP600 Elite is its affordability: it does not require DRAM or a sophisticated cooling system, which keeps the manufacturer's costs down. Meanwhile, Corsair offers the 1 TB MP600 Elite SSD with a graphene heatspreader for $89.99 and the 2 TB model for $164.99 (versions with an aluminum heatsink are $5 cheaper), which is not particularly cheap. For example, the faster Corsair MP600 Pro LPX 2 TB costs $169.99.

Every drive comes with a five-year warranty and can endure up to 1,200 terabytes written (TBW).

AMD: Zen 5-Based CPUs for Client and Server Applications On-Track for 2024

February 2, 2024 at 14:30

As part of their quarterly earnings call this week, AMD re-emphasized that its Zen 5-architecture processors for both client and datacenter applications will be available this year. While the company is not making any new disclosures about products or providing a timeline beyond "later this year," the latest statement from AMD serves as a reiteration of AMD's plans, and confirmation that those plans are still on schedule.

So far, we have heard about three Zen 5-based products from AMD: the Strix Point accelerated processing units (APUs) for laptops (and perhaps eventually desktops), the Granite Ridge processors for enthusiast-grade desktops, and Turin CPUs for datacenters. During the conference call with analysts and investors, AMD's Lisa Su confirmed plans to launch Turin and Strix this year.

"Looking ahead, customer excitement for our upcoming Turin family of EPYC processors is very strong," said Lisa Su, chief executive officer of AMD, at the company's earnings call this week (via SeekingAlpha). "Turin is a drop-in replacement for existing 4th Generation EPYC platforms that extends our performance, efficiency and TCO leadership with the addition of our next-gen Zen 5 core, new memory expansion capabilities, and higher core counts."

The head of AMD also confirmed that Turin will be drop-in compatible with existing SP5 platforms (i.e., it will come in an LGA 6096 package), feature more than 96 cores, and offer more memory expansion capabilities (i.e., enhanced support for CXL and perhaps support for innovative DIMMs). Meanwhile, the new CPUs will also offer higher per-core performance and better energy efficiency.


AMD High Performance CPU Core Roadmap. From AMD Financial Analyst Day 2022

As far as Strix Point is concerned, Lisa Su confirmed that this is a Zen 5 part featuring an 'enhanced RDNA 3' graphics core (also known as Navi 3.5), and an updated neural processing unit.

"Strix combines our next-gen Zen 5 core with enhanced RDNA graphics and an updated Ryzen AI engine to significantly increase the performance, energy efficiency, and AI capabilities of PCs," Su said. "Customer momentum for Strix is strong with the first notebooks on track to launch later this year."

It's notable that the head of AMD did not mention Granite Ridge CPUs for enthusiast-grade desktops during the conference call. Though as desktop CPUs tend to have smaller margins than mobile or server parts, they are often AMD's least interesting products to investors. Despite that omission, AMD has always launched their consumer desktop chips ahead of their server chips – in part due to the longer validation times required on the latter – so Turin being confirmed for 2024 is still a positive sign for Granite Ridge.

AMD Set to Fix Ryzen 8000G APU STAPM Throttling Issue, Sustained Loads Affected

February 2, 2024 at 15:30

Earlier this week, we published our review of AMD's latest Zen 4 based APUs, the Ryzen 7 8700G and Ryzen 5 8600G. While we saw much better gaming performance using the integrated graphics compared to the previous Ryzen 5000G series of APUs, including the Ryzen 7 5700G, the team over at Gamers Nexus has since highlighted an issue with Skin Temperature-Aware Power Management, or STAPM, for short. This particular issue is something we have investigated ourselves, and we can confirm that there is a throttling issue within the current firmware (at the time of writing) with AMD's Ryzen 8000G APUs.

First, it's essential to understand what the Skin Temperature-Aware Power Management (STAPM) feature is and what it does. Introduced by AMD back in 2014, STAPM is a key feature within their mobile processors. STAPM extends on-die power management by considering both the processor's internal temperatures, taken by on-chip thermal diodes, and the laptop's surface temperature (i.e. the skin temperature). The primary goal of STAPM is to prevent laptops from becoming uncomfortably warm for users, allowing the processor to actively throttle back its heat generation based on the thermal parameters between the chassis and the processor itself.

This is where things relate directly to AMD's Ryzen 8000G series APUs. The Ryzen 8000G series of APUs is based on AMD's Phoenix silicon, which is already in use in their Ryzen Mobile 7040/8040 chips. Which means all of AMD's initial engineering for the platform was for mobile devices, and then extended to the Ryzen 8000G desktop platform. Besides the obvious physical differences, the Ryzen 8000G APUs feature a much higher 65 W TDP (88W PPT) to reflect their desktop-focused operation, making these chips the least power constrained version of Phoenix to date.

The issue is that AMD has essentially 'forgotten' to disable these STAPM features within their firmware, causing both the Ryzen 8000G APUs' Zen 4 cores and their RDNA3 integrated graphics to throttle after prolonged periods of sustained load. As we can see from our investigation of the issue, in F1 2023 at 720p High settings, within 3 minutes of playing we saw power drop by around 22%, which will undoubtedly impact both CPU and integrated graphics performance during prolonged sessions.
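
For a rough sense of what a ~22% drop means in absolute terms, here is a quick sketch assuming the drop is measured against the chips' full 88 W PPT limit (the exact baseline in our run may differ):

```python
# Rough illustration of the throttling in absolute terms, assuming the ~22%
# drop is measured against the chips' full 88 W PPT limit (the exact baseline
# in our F1 2023 run may differ).

ppt_w = 88          # package power limit for the 65 W TDP Ryzen 8000G parts
drop_fraction = 0.22

throttled_w = ppt_w * (1 - drop_fraction)
print(f"~{throttled_w:.0f} W sustained vs. the {ppt_w} W PPT limit")  # ~69 W
```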

This directly affects the data in our review of the AMD Ryzen 7 8700G and Ryzen 5 8600G APUs, as the STAPM issues inherently mean that in very prolonged cases, the results may vary. Unfortunately, this issue apparently affects all AM5 motherboards and BIOSes currently available, so there's no way to properly run a Ryzen 8000G chip without STAPM throttling for the time being.

For the moment, we're putting a disclaimer on our Ryzen 8000G review, noting the issue. Once a fix is available from AMD, we'll be going back and re-testing the two chips we have to collect proper results, as well as to better quantify the performance impact of this unnecessary throttling.

Meanwhile, we reached out to AMD to confirm the issue officially, and a few minutes ago the company got back to us with a response.

"It has come to our attention that STAPM limits are being incorrectly applied to 8000 Series processors. This is causing them to drop their PPT limits under sustained load. We are working on a BIOS update to correct this behavior."

The fix that AMD will seemingly apply is through updated AGESA firmware, which, from their standpoint, should be simple in practice. Perhaps the biggest outstanding question is when this fix is coming, though we can't imagine AMD taking too long with this matter.

We must also thank Gamers Nexus for highlighting and providing additional context to the STAPM-related problems from which the Ryzen 8000G APUs suffer. The video review from Gamers Nexus of the AMD Ryzen 7 8700G and Ryzen 5 8600G APUs can be found above. Once a firmware fix has been provided, we will update the data set within our review of the Ryzen 7 8700G and Ryzen 5 8600G.

Palit Releases Fanless Version of NVIDIA's New GeForce RTX 3050 6GB

February 2, 2024 at 18:00

NVIDIA today is quietly launching a new entry-level graphics card for the retail market, the GeForce RTX 3050 6GB. Based on a cut-down version of their budget Ampere-architecture GA107 GPU, the new card brings what was previously an OEM-only product to the retail market. Besides adding another part to NVIDIA's deep product stack, the launch of the RTX 3050 6GB also comes with another perk: lower power consumption thanks to this part targeting system installs where an external PCIe power connector would not be needed. NVIDIA's partners, in turn, have not wasted any time in taking advantage of this, and today Palit is releasing its first fanless KalmX board in years: the GeForce RTX 3050 KalmX 6GB.

The GeForce RTX 3050 6GB is based on the GA107 graphics processor with 2304 CUDA cores, which is paired with 6GB of GDDR6 attached to a petite 96-bit memory bus (versus 128-bit for the full RTX 3050 8GB). Coupled with a boost clock rating of just 1470 MHz, the RTX 3050 6GB delivers tangibly lower compute performance than the fully-fledged RTX 3050 — 6.77 FP32 TFLOPS vs 9.1 FP32 TFLOPS — but these compromises offer an indisputable advantage: a 70W power target.
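
For reference, those FP32 throughput figures fall out of a simple cores-times-clock calculation (one fused multiply-add per CUDA core per clock counts as two FP32 operations); the RTX 3050 8GB numbers below are the commonly published specs rather than figures from this article:

```python
# Where the FP32 figures come from: CUDA cores x boost clock x 2 (an FMA per
# core per clock counts as two FP32 operations). The RTX 3050 8GB numbers are
# the commonly published specs, not figures taken from this article.

def fp32_tflops(cuda_cores: int, boost_mhz: int) -> float:
    return cuda_cores * boost_mhz * 1e6 * 2 / 1e12

print(f"RTX 3050 6GB: {fp32_tflops(2304, 1470):.2f} TFLOPS")  # ~6.77
print(f"RTX 3050 8GB: {fp32_tflops(2560, 1777):.2f} TFLOPS")  # ~9.10
```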

Palit is the first company to take advantage of the reduced power consumption of the GeForce RTX 3050 6GB, as the company has launched a passively cooled graphics card based on this part, its first in four years. The Palit GeForce RTX 3050 KalmX 6GB (NE63050018JE-1170H) uses a custom printed circuit board (PCB) that not only offers modern DisplayPort 1.4a and HDMI 2.1 outputs, but also, as we still see in some entry-level cards, a dual-link DVI-D connector (a first for an Ampere-based graphics card).

The dual-slot passive cooling system with two heat pipes is certainly the main selling point of Palit's GeForce RTX 3050 KalmX 6GB. The product is pretty large though — it measures 166.3×137×38.3 mm — and will not fit into tiny desktops. Still, given the fact that fanless systems are usually not the most compact ones, this may not be a significant limitation of the new KalmX device.

Another advantage of Palit's GeForce RTX 3050 KalmX 6GB in particular, and NVIDIA's GeForce RTX 3050 6GB in general, is that it can be powered entirely via the PCIe slot, which eliminates the need for an auxiliary PCIe power connector (which is sometimes not present in cheap systems from big OEMs).

Wccftech reports that NVIDIA's GeForce RTX 3050 6GB graphics cards will carry a recommended price tag of $169, and indeed these cards are available for $170 - $180. This looks to be quite a competitive price point, as the product offers higher compute performance than AMD's Radeon RX 6400 ($125) and Radeon RX 6500 XT ($140). Meanwhile, it remains to be seen how much Palit will charge for its uniquely positioned GeForce RTX 3050 KalmX 6GB.

Sales of Client CPUs Soared in Q4 2023: Jon Peddie Research

6 février 2024 à 12:30

Global client PC CPU shipments hit 66 million units in the fourth quarter of 2023, up both sequentially and year-over-year, marking a notable upturn in the PC processor market, according to the latest report from Jon Peddie Research. The data indicates that PC makers depleted their CPU inventories and resumed purchasing processors from Intel during the quarter. This may also signal that PC makers now have an optimistic business outlook.

AMD, Intel, and other suppliers shipped 66 million processors for client PCs during the fourth quarter of 2023, a 7% increase from the previous quarter (62 million) and a 22% rise from the year before (54 million). Despite a challenging global environment, the CPU market is showing signs of robust health.
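For readers who want to reproduce the percentages, here is a minimal sketch of the growth math on the rounded unit figures quoted above (all values in millions of units); small discrepancies versus the reported percentages come from that rounding.

```python
# Quarter-over-quarter and year-over-year growth, recomputed from the rounded
# shipment figures in JPR's report (all values in millions of units).
def growth_pct(current: float, previous: float) -> float:
    return (current / previous - 1) * 100

q4_2023, q3_2023, q4_2022 = 66, 62, 54
print(f"QoQ growth: {growth_pct(q4_2023, q3_2023):.1f}%")  # ~6.5%, reported as 7% after rounding
print(f"YoY growth: {growth_pct(q4_2023, q4_2022):.1f}%")  # ~22.2%, reported as 22%
```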

Some 70% of client PC CPUs sold in Q4 2023 were aimed at notebooks, up significantly from the 63% share that laptop CPUs held in Q4 2022. Notebook PCs have been outselling desktop computers for years, so it is unsurprising that the industry shipped more laptop-bound processors than desktop-bound CPUs; what is perhaps surprising is that desktop CPUs still accounted for as much as 37% of shipments in Q4 2022.

"Q4's increase in client CPU shipments from last quarter is positive news in what has been depressing news in general," said Jon Peddie, president of JPR. "The increase in upsetting news in the Middle East, combined with the ongoing war in Ukraine, the trade war with China, and the layoffs at many organizations, has been a torrent of bad news despite decreased inflation and increased GDP in the U.S. CPU shipments are showing continued gains and are a leading indicator."

Meanwhile, integrated graphics processors (iGPUs) also grew, with shipments reaching 60 million units, up by 7% quarter-to-quarter and 18% year-over-year. Because the majority of client CPUs now feature a built-in GPU in one form or another, it is reasonable to expect shipments of iGPUs to grow along with shipments of client CPUs. 

Jon Peddie Research predicts that iGPUs will dominate the PC segment, with their penetration expected to climb to 98% within the next five years. This forecast points to a future where integrated graphics are ubiquitous, though we would not expect discrete graphics cards to go extinct.
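Dividing the two reported shipment figures gives a rough sense of how close the market already is to that forecast. Note that this is our own back-of-the-envelope arithmetic on the rounded numbers above, not a figure JPR states:

```python
# Implied iGPU attach rate in Q4 2023, derived from the two rounded shipment
# figures above (millions of units); JPR itself forecasts ~98% penetration.
igpu_shipments = 60
client_cpu_shipments = 66
attach_rate = igpu_shipments / client_cpu_shipments * 100
print(f"Implied Q4 2023 attach rate: {attach_rate:.0f}%")  # ~91%
```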

Meanwhile, the server CPU segment painted a different picture in Q4 2023, with a modest 2.8% growth from the previous quarter but a significant 26% decline year-over-year, according to JPR. 

Despite these challenges, the overall positive momentum in the CPU market, as reported by Jon Peddie Research, suggests a sector that is adapting and thriving even amidst economic and geopolitical uncertainties.

AMD Unveils Their Embedded+ Architecture, Ryzen Embedded with Versal Together

6 février 2024 à 13:00

One area of AMD's product portfolio that doesn't get as much attention as the desktop and server parts is their Embedded platform. AMD's Embedded series has been important for edge devices, spanning industrial, automotive, healthcare, digital gaming machines, and thin client systems. Today, AMD has unveiled their latest Embedded architecture, Embedded+, which combines their Zen+-based Ryzen Embedded processors with their Versal adaptive SoCs on a single board.

The Embedded+ architecture integrates AMD's Ryzen Embedded processors with their Versal AI Edge adaptive SoCs on a single packaged board, targeting applications that require both solid computational performance and power efficiency. This pairing enables Embedded+ to handle AI inferencing and manage complex sensor data in real time, which is crucial for applications in dynamic and demanding environments.

Giving ODMs the ability to put both Ryzen Embedded and Versal SoCs on a single board is particularly beneficial for industries requiring low-latency response times between hardware and software, including autonomous vehicles, diagnostic equipment in healthcare, and precision machinery in industrial automation. The AMD Embedded+ architecture can also support various workloads across different processor types, including x86 and ARM, along with AI engines and FPGA fabric, offering flexibility and scalability for embedded computing solutions across industries.

The Embedded+ platform from AMD offers plenty of compatibility with various sensor types and their corresponding interfaces. It facilitates direct connectivity with standard peripherals and industrial sensors through Ethernet, USB, and HDMI/DP interfaces. The AMD Ryzen Embedded processors within the architecture can handle inputs from traditional image sensors such as RGB, monochrome, and even advanced neuromorphic types while supporting industry-standard image sensor interfaces like MIPI and LVDS.

Further enhancing its capability, the AMD Versal AI Edge adaptive SoCs on the Embedded+ motherboard offer adaptable I/O options for real-time sensor input and industrial networking. This includes interfacing with LiDAR, RADAR, and other delicate and sophisticated sensors necessary for modern embedded systems in the industrial, medical, and automotive sectors. The platform's support for various product-level sensor interfaces, such as GMSL and Ethernet-based vision protocols, means it is designed and ready for integration into complex, sensor-driven systems.

AMD has also announced a new pre-integrated solution, which will be available for ODMs starting today. The Sapphire Technology VPR-4616-MB platform is a compact, Mini-ITX form factor motherboard that pairs the AMD Versal AI Edge 2302 SoC with an AMD Ryzen Embedded R2314 processor, which is based on Zen+ and has 4C/4T alongside 6 Radeon Vega compute units. It features a custom expansion connector for I/O boards and supports a wide array of connectivity options, including dual DDR4 SO-DIMM slots with up to 64 GB capacity, one PCIe 3.0 x4 M.2 slot, and one SATA port for conventional HDDs and SSDs. The VPR-4616-MB also has a good array of networking capabilities, including 2.5 Gb Ethernet and an M.2 Key E 2230 PCIe x1 slot for a wireless interface, and it supports the Linux-based Ubuntu 22.04 operating system.

Also announced is a series of expansion boards that significantly broaden support for the Embedded+ architecture. The Octo GMSL Camera I/O board is particularly noteworthy for its ability to interface with multiple cameras simultaneously. It is undoubtedly suitable for high bandwidth vision-based systems, integral to sectors such as advanced driver-assistance systems (ADAS) and automated surveillance systems. These systems often require the integration of numerous image inputs for real-time processing and analysis, and the Octo GMSL board is engineered to meet this demand specifically.

Additionally, a dual Ethernet I/O board capable of 10/100/1000 Mb connections is available, catering to environments that demand high-speed network communications. For even higher bandwidth requirements, there is a dual 10 Gb SFP+ board with 16 GPIOs, providing ample data transfer rates for tasks like real-time video streaming and large-scale sensor data aggregation. These expansion options broaden the scope of what the Embedded+ architecture is capable of in edge and industrial scenarios.

The Sapphire VPR-4616-MB is available for customers to purchase now, including in a complete system configuration with storage, memory, power supply, and chassis.
