End of the Road: An AnandTech Farewell

It is with great sadness that I find myself penning the hardest news post I’ve ever needed to write here at AnandTech. After over 27 years of covering the wide – and wild – world of computing hardware, today is AnandTech’s final day of publication.

For better or worse, we’ve reached the end of a long journey – one that started with a review of an AMD processor, and has ended with the review of an AMD processor. It’s fittingly poetic, but it is also a testament to the fact that we’ve spent the last 27 years doing what we love, covering the chips that are the lifeblood of the computing industry.

A lot of things have changed in the last quarter-century – in 1997 NVIDIA had yet to even coin the term “GPU” – and we’ve been fortunate to watch the world of hardware continue to evolve over the time period. We’ve gone from boxy desktop computers and laptops that today we’d charitably classify as portable desktops, to pocket computers where even the cheapest budget device puts the fastest PC of 1997 to shame.

The years have also brought some monumental changes to the world of publishing. AnandTech was hardly the first hardware enthusiast website, nor will we be the last. But we were fortunate to thrive in the past couple of decades, when so many of our peers did not, thanks to a combination of hard work, strategic investments in people and products, even more hard work, and the support of our many friends, colleagues, and readers.

Still, few things last forever, and the market for written tech journalism is not what it once was – nor will it ever be again. So, the time has come for AnandTech to wrap up its work, and let the next generation of tech journalists take their place within the zeitgeist.

It has been my immense privilege to write for AnandTech for the past 19 years – and to manage it as its editor-in-chief for the past decade. And while I carry more than a bit of remorse in being AnandTech’s final boss, I can at least take pride in everything we’ve accomplished over the years, whether it’s lauding some legendary products, writing technology primers that still remain relevant today, or watching new stars rise in unexpected places. There is still more that I had wanted AnandTech to do, but after 21,500 articles, this was a good start.

And while the AnandTech staff is riding off into the sunset, I am happy to report that the site itself won’t be going anywhere for a while. Our publisher, Future PLC, will be keeping the AnandTech website and its many articles live indefinitely, so that all of the content we’ve created over the years remains accessible and citable. Even without new articles to add to the collection, I expect that many of the things we’ve written over the past couple of decades will remain relevant for years to come – and remain accessible just as long.

The AnandTech Forums will also continue to be operated by Future’s community team and our dedicated troop of moderators. With forum threads going back to 1999 (and some active members just as long), the forums have a history almost as long and as storied as AnandTech itself (wounded monitor children, anyone?). So even when AnandTech is no longer publishing articles, we’ll still have a place for everyone to talk about the latest in technology – and have those discussions last longer than 48 hours.

Finally, for everyone who still needs their technical writing fix, our formidable opposition of the last 27 years and fellow Future brand, Tom’s Hardware, is continuing to cover the world of technology. There are a couple of familiar AnandTech faces already over there providing their accumulated expertise, and the site will continue doing its best to provide a written take on technology news.

So Many Thank Yous

As I look back on everything AnandTech has accomplished over the past 27 years, there are more than a few people, groups, and companies that I would like to thank on behalf of both myself and AnandTech as a whole.

First and foremost, I cannot thank enough all the editors who have worked for AnandTech over the years. There are far more of you than I can ever name, but AnandTech’s editors have been the lifeblood of the site, bringing over their expertise and passion to craft the kind of deep, investigative articles that AnandTech is best known for. These are the finest people I’ve ever had the opportunity to work with, and it shouldn’t come as any surprise that these people have become even bigger successes in their respective fields. Whether it’s hardware and software development, consulting and business analysis, or even launching rockets into space, they’ve all been rock stars whom I’ve been fortunate to work with over the past couple of decades.


Ian Cutress, Anton Shilov, and Gavin Bonshor at Computex 2019

And a special shout out to the final class of AnandTech editors, who have been with us until the end, providing the final articles that grace this site. Gavin Bonshor, Ganesh TS, E. Fylladitakis, and Anton Shilov have all gone above and beyond to meet impossible deadlines and travel halfway around the world to report on the latest in technology.

Of course, none of this would have been possible without the man himself, Anand Lal Shimpi, who started this site out of his bedroom 27 years ago. While Anand retired from the world of tech journalism a decade ago, the standard he set for quality and the lessons he taught all of us have continued to resonate within AnandTech to this very day. And while it would be tautological to say that there would be no AnandTech without Anand, it’s nonetheless true – the mark on the tech publishing industry that we’ve been able to make all started with him.


MWC 2014: Ian Cutress, Anand Lal Shimpi, Joshua Ho

I also want to thank the many, many hardware and software companies we’ve worked with over the years. More than just providing us review samples and technical support, we’ve been given unique access to some of the greatest engineers in the industry. People who have built some of the most complex chips ever made, and casually forgotten more about the subject than we as tech journalists will ever know. So being able to ask those minds stupid questions, and seeing the gears turn in their heads as they explain their ideas, innovations, and thought processes has been nothing short of an incredible learning experience. We haven’t always (or even often) seen eye-to-eye on matters with all of the companies we've covered, but as the last 27 years have shown, sharing the amazing advancements behind the latest technologies has benefited everyone, consumers and companies alike.

Thank yous are also due to AnandTech’s publishers over the years – Future PLC, and Purch before them. AnandTech’s publishers have given us an incredible degree of latitude to do things the AnandTech way, even when it meant taking big risks or not following the latest trend.  A more cynical and controlling publisher could have undoubtedly found ways to make more money from the AnandTech website, but the resulting content would not have been AnandTech. We’ve enjoyed complete editorial freedom up to our final day, and that’s not something so many other websites have had the luxury to experience. And for that I am thankful.


CES 2016: Ian Cutress, Ganesh TS, Joshua Ho, Brett Howse, Brandon Chester, Billy Tallis

Finally, I cannot thank our many readers enough. Whether you’ve been following AnandTech since 1997 or you’ve just recently discovered us, everything we’ve published here we’ve done for you. To show you what amazing things were going on in the world of technology, the radical innovations driving the next generation of products, or a sober review that reminds us all that there’s (almost) no such thing as bad products, just bad pricing. Our readers have kept us on our toes, pushing us to do better, and holding us responsible when we’ve strayed from our responsibilities.

Ultimately, a website is only as influential as its readers, otherwise we would be screaming into the void that is the Internet. For all the credit we can claim as writers, all of that pales in comparison to our readers who have enjoyed our content, referenced it, and shared it with the world. So from the bottom of my heart, thank you for sticking with us for the past 27 years.

Continuing the Fight Against the Cable TV-ification of the Web

Finally, I’d like to end this piece with a comment on the Cable TV-ification of the web. A core belief that Anand and I have held dear for years, and is still on our About page to this day, is AnandTech’s rebuke of sensationalism, link baiting, and the path to shallow 10-o'clock-news reporting. It has been our mission over the past 27 years to inform and educate our readers by providing high-quality content – and while we’re no longer going to be able to fulfill that role, the need for quality, in-depth reporting has not changed. If anything, the need has increased as social media and changing advertising landscapes have made shallow, sensationalistic reporting all the more lucrative.


Speaking of TV: Anand Hosting The AGN Hardware Show (June 1998)

For all the tech journalists out there right now – or tech journalists to be – I implore you to remain true to yourself, and to your readers' needs. In-depth reporting isn’t always as sexy or as exciting as other avenues, but now, more than ever, it’s necessary to counter sensationalism and cynicism with high-quality reporting and testing that is used to support thoughtful conclusions. To quote Anand: “I don't believe the web needs to be academic reporting or sensationalist garbage - as long as there's a balance, I'm happy.”

Signing Off One Last Time

Wrapping things up, it has been my privilege over the last 19 years to write for one of the most impactful tech news websites that has ever existed. And while I’m heartbroken that we’re at the end of AnandTech’s 27-year journey, I can take solace in everything we’ve been able to accomplish over the years. All of which has been made possible thanks to our industry partners and our awesome readers.

On a personal note, this has been my dream job; to say I’ve been fortunate would be an understatement. And while I’ll no longer be the editor-in-chief of AnandTech, I’m far from being done with technology as a whole. I’ll still be around on Twitter/X, and we’ll see where my own journey takes me next.

To everyone who has followed AnandTech over the years, fans, foes, readers, competitors, academics, engineers, and just the technologically curious who want to learn a bit more about their favorite hardware, thank you for all of your patronage over the years. We could not have accomplished this without your support.

-Thanks,
Ryan Smith

The Corsair iCUE LINK TITAN 360 RX RGB AIO Cooler Review: Meticulous, But Pricey

Corsair, a longstanding and esteemed manufacturer in the PC components industry, initially built its reputation on memory-related products. However, nearly two decades ago, Corsair began diversifying its product line. This expansion started cautiously, with a limited number of products, but quickly proved to be highly successful, propelling Corsair into the industry powerhouse it is today.

One of Corsair's most triumphant product categories is all-in-one (AIO) liquid coolers. This success is particularly notable given that their initial foray into liquid cooling in 2003 did not meet expectations. However, Corsair didn’t throw in the towel. Undeterred, they re-entered the market years later, leveraging the growing popularity of user-friendly, maintenance-free AIO designs. This gamble paid off handsomely, as AIO coolers are now one of Corsair’s flagship product lines, boasting a wide array of models.

In this review, we focus on the latest addition to Corsair's AIO cooler lineup: the iCUE LINK TITAN 360 RX. This model is similar to the iCUE LINK H150i RGB, but introduces subtle yet significant improvements, including a performance upgrade with an enhanced pump. The TITAN 360 RX continues Corsair's tradition of innovation and quality, seamlessly integrating into the iCUE ecosystem for an optimized user experience. Its single-cable design ensures a clean and effortless installation, making it a standout in Corsair's evolving cooler lineup.

The iBUYPOWER AW4 360 AIO Cooler Review: A Good First Effort

iBUYPOWER is a U.S.-based company known for its custom-built gaming PCs and peripherals. Established in 1999, the company offers a wide range of self-branded products, including pre-built desktop computers, laptops, and gaming accessories. These products are designed to cater to various performance needs, from casual gaming to high-end competitive gaming. iBUYPOWER is particularly recognized for its customizable gaming PCs, allowing users to choose specific components according to their preferences. The company's self-branded peripherals, like keyboards, mice, and headsets, are designed to complement their gaming systems, providing a cohesive experience for gamers.

iBUYPOWER also offers a selection of cooling-related products, including air and liquid cooling solutions, tailored to ensure optimal thermal performance and custom aesthetics for their gaming systems. Most of these products are from other manufacturers, but the company is also branching out into selling their own cooling-related products. Most notable of these is the new AW4 360 mm AIO liquid cooler. This review will focus on the AW4 AIO, evaluating its design, cooling efficiency, and overall performance within high-demand gaming and computing environments.

The Cougar Poseidon Ultra 360 ARGB AIO Cooler Review: Bright Lights, Average Cooling

Cougar, established in 2008, has become a notable name in the PC hardware market, particularly among gamers and enthusiasts. While Cougar might appear to be a relatively recent addition to the industry, it is backed by HEC/Compucase, a veteran in the PC market known primarily for its OEM products. Cougar was created as a subsidiary to focus on developing and marketing high-performance products tailored to the needs of gamers and PC enthusiasts.

Initially, Cougar focused primarily on PC cases, gradually expanding its product lineup as the brand gained recognition. Over the years, Cougar has successfully diversified its offerings to include a wide range of products, from gaming chairs to mechanical keyboards. This strategic expansion has allowed Cougar to establish a strong presence in the gaming hardware market.

In this review, we are focusing on Cougar's latest entry into the liquid cooling market, the Poseidon Ultra 360 ARGB cooler. The Poseidon Ultra 360 ARGB is a high-performance, all-in-one liquid cooler featuring a 360mm radiator and vibrant ARGB lighting, designed to appeal to both performance enthusiasts and those looking for a visually striking setup. This review will delve into the AIO cooler’s key features, cooling efficiency, and noise levels, to determine how it stands up against the competition in the increasingly crowded liquid cooler market.

Sabrent Rocket nano V2 External SSD Review: Phison U18 in a Solid Offering

Sabrent's lineup of internal and external SSDs is popular among enthusiasts. The primary reason is the company's tendency to be among the first to market with products based on the latest controllers, while also delivering an excellent value proposition. The company has a long-standing relationship with Phison and adopts its controllers for many of their products. The company's 2 GBps-class portable SSD - the Rocket nano V2 - is based on Phison's U18 native controller. Read on for a detailed look at the Rocket nano V2 External SSD, including an analysis of its performance consistency, power consumption, and thermal profile.

The Endorfy Fortis 5 Dual Fan CPU Cooler Review: Towering Value

Standard CPU coolers, while adequate for managing basic thermal loads, often fall short in terms of noise reduction and superior cooling efficiency. This limitation drives advanced users and system builders to seek aftermarket solutions tailored to their specific needs. The high-end aftermarket cooler market is highly competitive, with manufacturers striving to offer products with exceptional performance.

Endorfy, previously known as SilentiumPC, is a Polish manufacturer that has undergone a significant transformation to expand its presence in global markets. The brand is known for delivering high-performance cooling solutions with a strong focus on balancing efficiency and affordability. By rebranding as Endorfy, the company aims to enter premium market segments while continuing to offer reliable, high-quality cooling products.

SilentiumPC became very popular in the value/mainstream segments of the PC market, the spearhead of its lineup arguably being the Fera 5 cooler, which we reviewed a little over two years ago and which offered remarkable value for money. Today’s review places Endorfy’s largest CPU cooler, the Fortis 5 Dual Fan, on our laboratory test bench. The Fortis 5 is the largest CPU air cooler the company currently offers and is significantly more expensive than the Fera 5, yet it still is a single-tower cooler that strives to strike a balance between value, compatibility, and performance.

ACEMAGIC F2A 125H SFF PC Review: Mid-Range Meteor Lake at 65W

Intel's Meteor Lake series of processors was launched in September 2023 with a focus on mobile platforms. Multiple mini-PC vendors have utilized these processors to market offerings in the SFF / UCFF desktop market. ACEMAGIC is an Asian manufacturer with products in multiple categories including micro-PCs, UCFF (ultra-compact form-factor) and SFF (small form-factor) PCs, and notebooks. They were one of the first to market with Meteor Lake-based desktop systems.

The ACEMAGIC F2A 125H is the entry-level version of the F2A line, equipped with an Intel Core Ultra 5 125H processor. It is a bit larger than the traditional NUCs, slotting it in the SFF category. However, that allows for the processor to be operated at 65W (compared to the 28 - 40W adopted in the UCFF systems). Read on for a comprehensive look at the performance and features of the ACEMAGIC F2A 125H, including some comments on the pros and cons of the higher operating power as well as other design decisions.

MediaTek to Add NVIDIA G-Sync Support to Monitor Scalers, Make G-Sync Displays More Accessible

NVIDIA on Tuesday said that future monitor scalers from MediaTek will support its G-Sync technologies. NVIDIA is partnering with MediaTek to integrate its full range of G-Sync technologies into future monitors without requiring a standalone G-Sync module, which makes advanced gaming features more accessible across a broader range of displays.

Traditionally, G-Sync technology relied on a dedicated G-Sync module – based on an Altera FPGA – to handle syncing display refresh rates with the GPU in order to reduce screen tearing, stutter, and input lag. As a more basic solution, in 2019 NVIDIA introduced G-Sync Compatible certification and branding, which leveraged the industry-standard VESA AdaptiveSync technology to handle variable refresh rates. In lieu of using a dedicated module, leveraging AdaptiveSync allowed for cheaper monitors, with NVIDIA's program serving as a stamp of approval that the monitor worked with NVIDIA GPUs and met NVIDIA's performance requirements. However, G-Sync Compatible monitors still lack some features that, to date, require the dedicated G-Sync module.

Through this new partnership with MediaTek, MediaTek will bring support for all of NVIDIA's G-Sync technologies, including the latest G-Sync Pulsar, directly into their scalers. G-Sync Pulsar enhances motion clarity and reduces ghosting, providing a smoother gaming experience. In addition to variable refresh rates and Pulsar, MediaTek-based G-Sync displays will support such features as variable overdrive, 12-bit color, Ultra Low Motion Blur, low latency HDR, and Reflex Analyzer. This integration will allow more monitors to support a full range of G-Sync features without having to incorporate an expensive FPGA.

The first monitors to feature full G-Sync support without needing an NVIDIA module include the AOC Agon Pro AG276QSG2, Acer Predator XB273U F5, and ASUS ROG Swift 360Hz PG27AQNR. These monitors offer 360Hz refresh rates, 1440p resolution, and HDR support.

What remains to be seen is which specific MediaTek scalers will support NVIDIA's G-Sync technology – or if the company is going to implement support into all of their scalers going forward. It also remains to be seen whether monitors with NVIDIA's dedicated G-Sync modules retain any advantages over displays with MediaTek's scalers.

Qualcomm Adds Snapdragon 7s Gen 3: Mid-Tier Snapdragon Gets Cortex-A720 Treatment

Qualcomm this morning is taking the wraps off of a new smartphone SoC for the mid-range market, the Snapdragon 7s Gen 3. The second of Qualcomm’s down-market ‘S’ tier Snapdragon 7 parts, the 7s series is functionally the entry-level tier for the Snapdragon 7 family – and really, most Qualcomm-powered handsets in North America.

With three tiers of Snapdragon 7 chips, the 7s can easily be lost in the noise that comes with more powerful chips. But the latest iteration of the 7s is a bit more interesting than usual, as rather than reusing an existing die, Qualcomm has seemingly minted a whole new die for this part. As a result, the company has upgraded the 7s family to use Arm’s current Armv9 CPU cores, while using bits and pieces of Qualcomm’s latest IPs elsewhere.

Qualcomm Snapdragon 7-Class SoCs

| SoC | Snapdragon 7 Gen 3 (SM7550-AB) | Snapdragon 7s Gen 3 (SM7635) | Snapdragon 7s Gen 2 (SM7435-AB) |
|---|---|---|---|
| CPU | 1x Cortex-A715 @ 2.63GHz, 3x Cortex-A715 @ 2.4GHz, 4x Cortex-A510 @ 1.8GHz | 1x Cortex-A720 @ 2.5GHz, 3x Cortex-A720 @ 2.4GHz, 4x Cortex-A520 @ 1.8GHz | 4x Cortex-A78 @ 2.4GHz, 4x Cortex-A55 @ 1.95GHz |
| GPU | Adreno | Adreno | Adreno |
| DSP / NPU | Hexagon | Hexagon | Hexagon |
| Memory Controller | 2x 16-bit CH @ 3200MHz LPDDR5 / 25.6GB/s, or @ 2133MHz LPDDR4X / 17.0GB/s | 2x 16-bit CH @ 3200MHz LPDDR5 / 25.6GB/s, or @ 2133MHz LPDDR4X / 17.0GB/s | 2x 16-bit CH @ 3200MHz LPDDR5 / 25.6GB/s, or @ 2133MHz LPDDR4X / 17.0GB/s |
| ISP / Camera | Triple 12-bit Spectra ISP; 1x 200MP, or 64MP with ZSL, or 32+21MP with ZSL, or 3x 21MP with ZSL; 4K HDR video & 64MP burst capture | Triple 12-bit Spectra ISP; 1x 200MP, or 64MP with ZSL, or 32+21MP with ZSL, or 3x 21MP with ZSL; 4K HDR video & 64MP burst capture | Triple 12-bit Spectra ISP; 1x 200MP, or 48MP with ZSL, or 32+16MP with ZSL, or 3x 16MP with ZSL; 4K HDR video & 48MP burst capture |
| Encode / Decode | 4K60 10-bit H.265; H.265, VP9 decoding; Dolby Vision, HDR10+, HDR10, HLG; 1080p120 SlowMo | 4K60 10-bit H.265; H.265, VP9 decoding; HDR10+, HDR10, HLG; 1080p120 SlowMo | 4K60 10-bit H.265; H.265, VP9 decoding; HDR10, HLG; 1080p120 SlowMo |
| Integrated Radio | FastConnect 6700, Wi-Fi 6E + BT 5.3, 2x2 MIMO | FastConnect, Wi-Fi 6E + BT 5.4, 2x2 MIMO | FastConnect 6700, Wi-Fi 6E + BT 5.2, 2x2 MIMO |
| Integrated Modem | X63 Integrated (5G NR Sub-6 + mmWave), DL = 5.0 Gbps, 5G/4G Dual Active SIM (DSDA) | Integrated (5G NR Sub-6 + mmWave), DL = 2.9 Gbps, 5G/4G Dual Active SIM (DSDA) | X62 Integrated (5G NR Sub-6 + mmWave), DL = 2.9 Gbps, 5G/4G Dual Active SIM (DSDA) |
| Mfc. Process | TSMC N4P | TSMC N4P | Samsung 4LPE |

Officially, the Snapdragon 7s Gen 3 is classified as a 1+3+4 design – meaning there’s 1 prime core, 3 performance cores, and 4 efficiency cores. In this case, Qualcomm is using the same architecture for both the prime and performance cores, Arm’s current-generation Cortex-A720 design. The prime core gets to turbo as high as 2.5GHz, while the remaining A720 cores will turbo as high as 2.4GHz.

These are joined by the 4 efficiency cores, which, as is tradition, are based upon Arm’s current A5xx cores, in this case, A520. These can boost as high as 1.8GHz.

Compared to the outgoing Snapdragon 7s Gen 2, the switch in Arm cores represents a fairly significant upgrade, replacing an A78/A55 setup with the aforementioned A720/A520 setup. Notably, clockspeeds are pretty similar to the previous generation part, so most of the unconstrained performance uplift on this generation is being driven by improvements in IPC, though the faster prime core should offer a bit more kick for single-threaded workloads.

All told, Qualcomm is touting a 20% improvement in CPU performance over the 7s Gen 2, though that claim doesn’t clarify whether it’s single or multi-threaded performance (or a mixture of both).

Meanwhile, graphics are driven by one of Qualcomm’s Adreno GPUs. As is usually the case, the company is not offering any significant details on the specific GPU configuration being used – or even what generation it is. A high-level look at the specifications doesn’t reveal any major features that weren’t present in other Snapdragon 7 parts. And Qualcomm isn’t bringing high-end features like ray tracing down to such a modest part. That said, I’ve previously heard through the tea leaves that this may be a next-generation (Adreno 800 series) design; though if that’s the case, Qualcomm is certainly not trying to bring attention to it.

Curiously, however, the video decode block on the SoC seems rather dated. Despite this being a new die, Qualcomm has opted not to include AV1 decoding – or, at least, opted not to enable it – so H.265 and VP9 are the most advanced codecs supported.

Compared to the CPU performance gains, Qualcomm’s expected GPU performance gains are more significant. The company is claiming that the 7s Gen 3 will deliver a 40% improvement in GPU performance over the 7s Gen 2.

Finally, the Hexagon NPU block on the SoC incorporates some of Qualcomm’s latest IP, as the company continues their focused AI push across all of their chip segments. Notably, the version of the NPU used here gets INT4 support for low precision client inference, which is new to the Snapdragon 7s family. As with Qualcomm’s other Gen 3 SoCs, the big drive here is for local (on-device) LLM execution.

With regards to performance, Qualcomm says that customers should expect to see a 30% improvement in AI performance relative to the 7s Gen 2.

Feeding all of these blocks is a 32-bit memory controller. Interestingly, Qualcomm has opted to support older LPDDR4X even with this newer chip, so the maximum memory bandwidth depends on the memory type used. For LPDDR4X-4266 that will be 17GB/sec, and for LPDDR5-6400 that will be 25.6GB/sec. In both cases, this is identical to the bandwidth available for the 7s Gen 2.
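
As a quick sanity check on those figures, peak DRAM bandwidth is simply the bus width in bytes multiplied by the transfer rate. A minimal sketch reproducing the numbers above (the helper function is ours, not Qualcomm's):

```python
# Back-of-the-envelope bandwidth check for a 2x 16-bit (32-bit total) memory controller.
def peak_bandwidth_gbs(bus_width_bits: int, transfer_rate_mtps: int) -> float:
    """Peak bandwidth in GB/s = bytes per transfer x transfers per second."""
    bytes_per_transfer = bus_width_bits / 8
    return bytes_per_transfer * transfer_rate_mtps / 1000  # MB/s -> GB/s

print(peak_bandwidth_gbs(32, 6400))  # LPDDR5-6400: 25.6 GB/s
print(peak_bandwidth_gbs(32, 4266))  # LPDDR4X-4266: ~17 GB/s
```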

Rounding out the package, the 7s Gen 3 does incorporate some newer/more powerful camera hardware as well. We’re still looking at a trio of 12-bit Spectra ISPs, but the maximum resolution in zero shutter lag and burst modes has been bumped up to 64MPix. Video recording capabilities are otherwise identical on paper, as the 7s Gen 2 already supported 4K HDR capture.

Meanwhile on the wireless communication side of matters, the 7s Gen 3 packs one of Qualcomm’s integrated Snapdragon 5G modems. As with its predecessor, the 7s Gen 3 supports both Sub-6 and mmWave bands, with a maximum (theoretical) throughput of 2.9Gbps.

Eagle-eyed chip watchers will note, however, that Qualcomm is doing away with any kind of version information as of this part. So while the 7s Gen 2 used a Snapdragon X62 modem, the 7s Gen 3’s modem has no such designation – it’s merely an integrated Snapdragon modem. According to the company, this change has been made to “simplify overall branding and to be consistent with other IP blocks in the chipset.”

Similarly, the Wi-Fi/Bluetooth block has lost its version number; it is now merely a FastConnect block. In regards to features and specifications, this appears to be the same Wi-Fi 6E block that we’ve seen in half a dozen other Snapdragon SoCs, offering 2 spatial streams at channel widths up to 160MHz. It is worth noting, however, that since this is a newer SoC it’s certified for Bluetooth 5.4 support, versus the 5.2/5.3 certification other Snapdragon 7 chips have carried.

Finally, the Snapdragon 7s Gen 3 itself is being built on TSMC’s N4P process, the same process we’ve seen the last several Qualcomm SoCs use. And with this, Qualcomm has now fully migrated the entire Snapdragon 8 and Snapdragon 7 lines off of Samsung’s 4nm process nodes; all of their contemporary chips are now built at TSMC. And like similar transitions in the past, this shift in process nodes is coming with a boost to power efficiency. While it’s not the sole cause, overall Qualcomm is touting a 12% improvement in power savings.

Wrapping things up, Qualcomm’s launch customer for the Snapdragon 7s Gen 3 will be Xiaomi, who will be the first to launch a new phone with the chip. Following them will be many of the other usual suspects, including Realme and Sharp, while the much larger Samsung is also slated to use the chip at some point in the coming months.

CXL Gathers Momentum at FMS 2024

The CXL consortium has had a regular presence at FMS (which rechristened itself from 'Flash Memory Summit' to the 'Future of Memory and Storage' this year). Back at FMS 2022, the consortium had announced v3.0 of the CXL specifications. This was followed by CXL 3.1's introduction at Supercomputing 2023. Having started off as a host-to-device interconnect standard, it has slowly subsumed other competing standards such as OpenCAPI and Gen-Z. As a result, the specifications have grown to encompass a wide variety of use-cases by building a protocol on top of the ubiquitous PCIe expansion bus. The CXL consortium comprises heavyweights such as AMD and Intel, as well as a large number of startup companies attempting to play in different segments on the device side. At FMS 2024, CXL had a prime position in the booth demos of many vendors.

The migration of server platforms from DDR4 to DDR5, along with the rise of workloads demanding large RAM capacity (but not particularly sensitive to either memory bandwidth or latency), has opened up memory expansion modules as one of the first set of widely available CXL devices. Over the last couple of years, we have had product announcements from Samsung and Micron in this area.

SK hynix CMM-DDR5 CXL Memory Module and HMSDK

At FMS 2024, SK hynix was showing off their DDR5-based CMM-DDR5 CXL memory module with a 128 GB capacity. The company was also detailing their associated Heterogeneous Memory Software Development Kit (HMSDK) - a set of libraries and tools at both the kernel and user levels aimed at increasing the ease of use of CXL memory. This is achieved in part by considering the memory pyramid / hierarchy and relocating the data between the server's main memory (DRAM) and the CXL device based on usage frequency.
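
To illustrate the general idea behind usage-frequency tiering (a conceptual sketch only – not the actual HMSDK API, and the threshold and page granularity are assumed values), a host-side policy might track per-page access counts and periodically demote cold pages to the CXL tier while keeping hot ones in local DRAM:

```python
# Conceptual sketch of usage-frequency tiering between local DRAM and a CXL memory tier.
# Not the HMSDK interface; the threshold and page granularity are illustrative assumptions.
from collections import defaultdict

HOT_THRESHOLD = 8  # accesses per sampling interval (assumed value)

class TieringPolicy:
    def __init__(self):
        self.access_counts = defaultdict(int)  # page id -> accesses in this interval
        self.location = {}                     # page id -> "dram" or "cxl"

    def record_access(self, page: int) -> None:
        self.access_counts[page] += 1

    def rebalance(self) -> None:
        """Promote frequently touched pages to DRAM, demote cold ones to CXL."""
        for page, count in self.access_counts.items():
            self.location[page] = "dram" if count >= HOT_THRESHOLD else "cxl"
        self.access_counts.clear()  # start a fresh sampling interval
```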

The CMM-DDR5 CXL memory module comes in the EDSFF form-factor (E3.S 2T) with a PCIe 5.0 x8 host interface. The internal memory is based on 1α technology DRAM, and the device promises DDR5-class bandwidth and latency within a single NUMA hop. As these memory modules are meant to be used in datacenters and enterprises, the firmware includes features for RAS (reliability, availability, and serviceability) along with secure boot and other management features.

SK hynix was also demonstrating Niagara 2.0 – a hardware solution (currently based on FPGAs) to enable memory pooling and sharing – i.e., connecting multiple CXL memories to allow different hosts (CPUs and GPUs) to optimally share their capacity. The previous version only allowed capacity sharing, but the latest version also enables sharing of data. SK hynix had presented these solutions at CXL DevCon 2024 earlier this year, but some progress seems to have been made in finalizing the specifications of the CMM-DDR5 at FMS 2024.

Microchip and Micron Demonstrate CZ120 CXL Memory Expansion Module

Micron had unveiled the CZ120 CXL Memory Expansion Module last year based on the Microchip SMC 2000 series CXL memory controller. At FMS 2024, Micron and Microchip had a demonstration of the module on a Granite Rapids server.

Additional insights into the SMC 2000 controller were also provided.

The CXL memory controller also incorporates DRAM die failure handling, and Microchip also provides diagnostics and debug tools to analyze failed modules. The memory controller also supports ECC, which forms part of the enterprise class RAS feature set of the SMC 2000 series. Its flexibility ensures that SMC 2000-based CXL memory modules using DDR4 can complement the main DDR5 DRAM in servers that support only the latter.

Marvell Announces Structera CXL Product Line

A few days prior to the start of FMS 2024, Marvell had announced a new CXL product line under the Structera tag. At FMS 2024, we had a chance to discuss this new line with Marvell and gather some additional insights.

Unlike other CXL device solutions focusing on memory pooling and expansion, the Structera product line also incorporates a compute accelerator part in addition to a memory-expansion controller. All of these are built on TSMC's 5nm technology.

The compute accelerator part, the Structera A 2504 (A for Accelerator) is a PCIe 5.0 x16 CXL 2.0 device with 16 integrated Arm Neoverse V2 (Demeter) cores at 3.2 GHz. It incorporates four DDR5-6400 channels with support for up to two DIMMs per channel along with in-line compression and decompression. The integration of powerful server-class ARM CPU cores means that the CXL memory expansion part scales the memory bandwidth available per core, while also scaling the compute capabilities.

Applications such as Deep-Learning Recommendation Models (DLRM) can benefit from the compute capability available in the CXL device. The scaling in the bandwidth availability is also accompanied by reduced energy consumption for the workload. The approach also contributed towards disaggregation within the server for a better thermal design as a whole.

The Structera X 2404 (X for eXpander) will be available as a PCIe 5.0 device (either a single x16 link or two x8 links) with four DDR4-3200 channels (up to 3 DIMMs per channel). Features such as in-line (de)compression, encryption / decryption, and secure boot with hardware support are present in the Structera X 2404 as well. Compared to the 100 W TDP of the Structera A 2504 accelerator, Marvell expects this part to consume around 30 W. The primary purpose of this part is to enable hyperscalers to recycle DDR4 DIMMs (up to 6 TB per expander) while increasing server memory capacity.

Marvell also has a Structera X 2504 part that supports four DDR5-6400 channels (with two DIMMs per channel for up to 4 TB per expander). Other aspects remain the same as that of the DDR4-recycling part.

The company stressed some unique aspects of the Structera product line - the inline compression optimizes available DRAM capacity, and the 3 DIMMs per channel support for the DDR4 expander maximizes the amount of DRAM per expander (compared to competing solutions). The 5nm process lowers the power consumption, and the parts support accesses from multiple hosts. The integration of Arm Neoverse V2 cores appears to be a first for a CXL accelerator, and enables delegation of compute tasks to improve overall performance of the system.

While Marvell announced specifications for the Structera parts, it does appear that sampling is at least a few quarters away. One of the interesting aspects about Marvell's roadmaps / announcements in recent years has been their focus on creating products tuned to the demands of high-volume customers. The Structera product line is no different - hyperscalers are hungry to recycle their DDR4 memory modules and apparently can't wait to get their hands on the expander parts.

CXL is just starting its slow ramp-up, and the hockey stick segment of the growth curve is definitely not in the near term. However, as more host systems with CXL support start to get deployed, products like the Structera accelerator line start to make sense from a server efficiency viewpoint.

Fadu's FC5161 SSD Controller Breaks Cover in Western Digital's PCIe Gen5 Enterprise Drives

When Western Digital introduced its Ultrastar DC SN861 SSDs earlier this year, the company did not disclose which controller it used for these drives, which made many observers presume that WD was using an in-house controller. But a recent teardown of the drive shows that is not the case; instead, the company is using a controller from Fadu, a South Korean company founded in 2015 that specializes in enterprise-grade turnkey SSD solutions.

The Western Digital Ultrastar DC SN861 SSD is aimed at performance-hungry hyperscale datacenters and enterprise customers which are adopting PCIe Gen5 storage devices these days. And, as uncovered in photos from a recent Storage Review article, the drive is based on Fadu's FC5161 NVMe 2.0-compliant controller. The FC5161 utilizes 16 NAND channels supporting an ONFi 5.0 2400 MT/s interface, and features a combination of enterprise-grade capabilities (OCP Cloud Spec 2.0, SR-IOV, up to 512 name spaces for ZNS support, flexible data placement, NVMe-MI 1.2, advanced security, telemetry, power loss protection) not available on other off-the-shelf controllers – or on any previous Western Digital controllers.  

The Ultrastar DC SN861 SSD offers sequential read speeds up to 13.7 GB/s as well as sequential write speeds up to 7.5 GB/s. As for random performance, it boasts up to 3.3 million random 4K read IOPS and up to 0.8 million random 4K write IOPS. The drives are available in capacities between 1.6 TB and 7.68 TB, with a rating of one or three drive writes per day (DWPD) over five years, and in U.2 and E1.S form-factors.

While the two form factors of the SN861 share a similar technical design, Western Digital has tailored each version for distinct workloads: the E1.S supports FDP and performance enhancements specifically for cloud environments. By contrast, the U.2 model is geared towards high-performance enterprise tasks and emerging applications like AI.

Without a doubt, Western Digital's Ultrastar DC SN861 is a feature-rich, high-performance enterprise-grade SSD. It has another distinctive feature: a 5W idle power consumption, which is rather low by the standards of enterprise-grade drives (e.g., it is 1W lower compared to the SN840). While the difference with predecessors may be just 1W, hyperscalers deploy thousands of drives, and for their TCO every watt counts.
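
To put that 1 W in perspective, here is a rough illustration (the fleet size and electricity price below are our assumptions, not Western Digital figures):

```python
# Rough illustration of fleet-level savings from 1 W lower idle power per drive.
drives = 100_000            # assumed fleet size
savings_per_drive_w = 1.0   # idle power delta vs. an older drive such as the SN840
hours_per_year = 24 * 365
usd_per_kwh = 0.10          # assumed electricity price

kwh_saved = drives * savings_per_drive_w * hours_per_year / 1000
print(f"{kwh_saved:,.0f} kWh/year, ~${kwh_saved * usd_per_kwh:,.0f}/year before cooling overhead")
# ~876,000 kWh/year, roughly $87,600/year for this hypothetical fleet
```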

Western Digital's Ultrastar DC SN861 SSDs are now available for purchase to select customers (such as Meta) and to interested parties. Prices are unknown, but they will depend on such factors as volumes.

Sources: Fadu, Storage Review

PCI-SIG Demonstrates PCIe 6.0 Interoperability at FMS 2024

As the deployment of PCIe 5.0 picks up steam in both datacenter and consumer markets, PCI-SIG is not sitting idle, and is already working on getting the ecosystem ready for the updates to the PCIe specifications. At FMS 2024, some vendors were even talking about PCIe 7.0 with its 128 GT/s capabilities, despite PCIe 6.0 not even having started to ship yet. We caught up with PCI-SIG to get some updates on its activities and have a discussion on the current state of the PCIe ecosystem.

PCI-SIG has already made the PCIe 7.0 specifications (v 0.5) available to its members, and expects full specifications to be officially released sometime in 2025. The goal is to deliver a 128 GT/s data rate with up to 512 GBps of bidirectional traffic using x16 links. Similar to PCIe 6.0, this specification will also utilize PAM4 signaling and maintain backwards compatibility. Power efficiency as well as silicon die area are also being kept in mind as part of the drafting process.
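
The headline 512 GBps figure follows directly from the per-lane rate; a quick reconstruction (raw signaling rate only – FLIT and FEC overhead would shave a bit off the usable payload bandwidth):

```python
# PCIe 7.0 raw link bandwidth for an x16 link (protocol overhead ignored).
gt_per_s_per_lane = 128   # 128 GT/s per lane with PAM4 signaling
lanes = 16
bits_per_byte = 8

per_direction_gbs = gt_per_s_per_lane * lanes / bits_per_byte
print(per_direction_gbs, "GB/s per direction")       # 256 GB/s
print(per_direction_gbs * 2, "GB/s bidirectional")   # 512 GB/s, matching PCI-SIG's target
```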

The move to PAM4 signaling brings higher bit-error rates compared to the previous NRZ scheme. This made it necessary to adopt a different error correction scheme in PCIe 6.0 - instead of operating on variable length packets, PCIe 6.0's Flow Control Unit (FLIT) encoding operates on fixed size packets to aid in forward error correction. PCIe 7.0 retains these aspects.

The integrators list for the PCIe 6.0 compliance program is also expected to come out in 2025, though initial testing is already in progress. This was evident by the FMS 2024 demo involving Cadence's 3nm test chip for its PCIe 6.0 IP offering along with Teledyne Lecroy's PCIe 6.0 analyzer. These timelines track well with the specification completion dates and compliance program availability for previous PCIe generations.

We also received an update on the optical workgroup - while being optical-technology agnostic, the WG also intends to develop technology-specific form-factors including pluggable optical transceivers, on-board optics, co-packaged optics, and optical I/O. The logical and electrical layers of the PCIe 6.0 specifications are being enhanced to accommodate the new optical PCIe standardization and this process will also be done with PCIe 7.0 to coincide with that standard's release next year.

The PCI-SIG also has ongoing cabling initiatives. On the consumer side, we have seen significant traction for Thunderbolt and external GPU enclosures. However, even datacenters and enterprise systems are moving towards cabling solutions, as it becomes evident that disaggregation of components such as storage from the CPU and GPU is better for thermal design. Additionally, maintaining signal integrity over longer distances becomes difficult for on-board signal traces. Cabling internal to the computing systems can help here.

OCuLink emerged as a good candidate and was adopted fairly widely as an internal link in server systems. It has even made an appearance in mini-PCs from some Chinese manufacturers in its external avatar for the consumer market, albeit with limited traction. As speeds increase, a widely-adopted standard for external PCIe peripherals (or even connecting components within a system) will become imperative.

DapuStor and Memblaze Target Global Expansion with State-of-the-Art Enterprise SSDs

The growth in the enterprise SSD (eSSD) market has outpaced that of the client SSD market over the last few years. The requirements of AI servers for both training and inference have been the major impetus on this front. In addition to the usual vendors like Samsung, Solidigm, Micron, Kioxia, and Western Digital serving the cloud service providers (CSPs) and the likes of Facebook, a number of companies have been at work inside China to service the burgeoning eSSD market within.

In our coverage of the Microchip Flashtec 5016, we had noted Longsys's use of Microchip's SSD controllers to prepare and market enterprise SSDs under the FORESEE brand. Long before that, two companies - DapuStor and Memblaze - started releasing eSSDs specifically focusing on the Chinese market.

There are two drivers for the current growth spurt in the eSSD market. On the performance side, usage of eTLC behind a Gen 5 controller is allowing vendors to advertise significant benefits over the Gen 4 drives in the previous generation. At the same time, a capacity play is happening where there is a race to cram as much NAND as possible into a single U.2 / EDSFF enclosure. QLC is being used for this purpose, and we saw a number of such 128 TB-class eSSDs on display at FMS 2024.

DapuStor and Memblaze have both been relying on SSD controllers from Marvell for their flagship drives. Their latest product iterations for the Gen 5 era use the Marvell Bravera SC5 controller. Similar to the Flashtec controllers, these are not meant to be turnkey solutions. Rather, the SSD vendor has considerable flexibility in implementing specific features for their desired target market.

At FMS 2024, both DapuStor and Memblaze were displaying their latest solutions for the Gen 5 market. Memblaze was celebrating the sale of 150K+ units of their flagship Gen 5 solution - the PBlaze7 7940 incorporating Micron's 232L 3D eTLC with Marvell's Bravera SC5 controller. This SSD (available in capacities up to 30.72 TB) boasts of 14 GBps reads / 10 GBps writes along with random read / write performance of 2.8 M / 720 K IOPS - all with a typical power consumption south of 16 W. Additionally, the support for some of the NVMe features such as software-enabled flash (SEF) and zoned namespaces (ZNS) had helped Memblaze and Marvell receive a 'Best of Show' award under the 'Most Innovative Customer Implementation' category.

DapuStor had their current lineup on display (including the Haishen H5000 series with the same Bravera SC5 controller). Additionally, the company had an unannounced proof-of-concept 61.44 TB QLC SSD on display. Despite the label carrying the Haishen5 series tag (its current members all use eTLC NAND), this sample comes with QLC flash.

DapuStor has already invested resources into implementing the flexible data placement (FDP) NVMe feature into the firmware of this QLC SSD. The company also had an interesting presentation session dealing with usage of CXL memory expansion to store the FTL for high-capacity enterprise SSDs - though this is something for the future and not related to any current product in the market.

Having established themselves within the Chinese market, both DapuStor and Memblaze are looking to expand in other markets. Having products with leading performance numbers and features in the eSSD growth segment will stand them in good stead in this endeavor.

Phison Enterprise SSDs at FMS 2024: Pascari Branding and Accelerating AI

At FMS 2024, Phison devoted significant booth space to their enterprise / datacenter SSD and PCIe retimer solutions, in addition to their consumer products. As a controller / silicon vendor, Phison had historically been working with drive partners to bring their solutions to the market. On the enterprise side, their tie-up with Seagate for the X1 series (and the subsequent Nytro-branded enterprise SSDs) is quite well-known. Seagate supplied the requirements list and had a say in the final firmware before qualifying the drives themselves for their datacenter customers. Such qualification involves a significant resource investment that is possible only by large companies (ruling out most of the tier-two consumer SSD vendors).

Phison had demonstrated the Gen 5 X2 platform at last year's FMS as a continuation of the X1. However, with Seagate focusing on its HAMR ramp, and also fighting other battles, Phison decided to go ahead with the qualification process for the X2 platform themselves. In the bigger scheme of things, Phison also realized that the white-labeling approach to enterprise SSDs was not going to work out in the long run. As a result, the Pascari brand was born (ostensibly to make Phison's enterprise SSDs more accessible to end consumers).

Under the Pascari brand, Phison has different lineups targeting different use-cases: from high-performance enterprise drives in the X series to boot drives in the B series. The AI series comes in variants supporting up to 100 DWPD (more on that in the aiDAPTIVE+ subsection below).

The D200V Gen 5 took pole position in the displayed drives, thanks to its leading 61.44 TB capacity point (a 122.88 TB drive is also being planned under the same line). The use of QLC in this capacity-focused line brings down the sustained sequential write speeds to 2.1 GBps, but these are meant for read-heavy workloads.

The X200, on the other hand, is a Gen 5 eTLC drive boasting up to 8.7 GBps sequential writes. It comes in read-centric (1 DWPD) and mixed workload variants (3 DWPD) in capacities up to 30.72 TB. The X100 eTLC drive is an evolution of the X1 / Seagate Nytro 5050 platform, albeit with newer NAND and larger capacities.


These drives come with all the usual enterprise features including power-loss protection, and FIPS certifiability. Though Phison didn't advertise this specifically, newer NVMe features like flexible data placement should become part of the firmware features in the future.

100 GBps with Dual HighPoint Rocket 1608 Cards and Phison E26 SSDs

Though not strictly an enterprise demo, Phison did have a station showing 100 GBps+ sequential reads and writes using a normal desktop workstation. The trick was installing two HighPoint Rocket 1608A add-in cards (each with eight M.2 slots) and placing the 16 M.2 drives in a RAID 0 configuration.

HighPoint Technology and Phison have been working together to qualify E26-based drives for this use-case, and we will be seeing more on this in a later review.

aiDAPTIV+ Pro Suite for AI Training

One of the more interesting demonstrations in Phison's booth was the aiDAPTIV+ Pro suite. At last year's FMS, Phison had demonstrated a 40 DWPD SSD for use with Chia (thankfully, that fad has faded). The company has been working on the extreme endurance aspect and moved it up to 60 DWPD (which is standard for the SLC-based cache drives from Micron and Solidigm).

At FMS 2024, the company took this SSD and added a middleware layer on top to ensure that workloads remain more sequential in nature. This drives up the endurance rating to 100 DWPD. Now, this middleware layer is actually part of their AI training suite targeting small business and medium enterprises who do not have the budget for a full-fledged DGX workstation, or for on-premises fine-tuning.




Re-training models by using these AI SSDs as an extension of the GPU VRAM can deliver significant TCO benefits for these companies, as the costly AI training-specific GPUs can be replaced with a set of relatively low-cost off-the-shelf RTX GPUs. This middleware comes with licensing aspects that are essentially tied to the purchase of the AI-series SSDs (that come with Gen 4 x4 interfaces currently in either U.2 or M.2 form-factors). The use of SSDs as a caching layer can enable fine-tuning of models with a very large number of parameters using a minimal number of GPUs (not having to use them primarily for their HBM capacity).
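
A back-of-the-envelope footprint calculation shows why VRAM alone runs out so quickly during full fine-tuning (the model size and optimizer layout below are generic illustrative assumptions, not Phison's figures):

```python
# Rough memory footprint for full fine-tuning of a large model with Adam-style optimizer state.
params_billion = 70      # assumed model size
bytes_weights = 2        # FP16/BF16 weights
bytes_grads = 2          # FP16 gradients
bytes_optimizer = 12     # FP32 master weights + two FP32 Adam moments

total_gb = params_billion * (bytes_weights + bytes_grads + bytes_optimizer)
print(total_gb, "GB of training state")  # ~1,120 GB - far beyond a 24 GB consumer GPU,
                                         # which is why spilling state to fast SSDs is attractive
```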

Intel Sells Its Arm Shares, Reduces Stakes in Other Companies

Intel has divested its entire stake in Arm Holdings during the second quarter, raising approximately $147 million. Alongside this, Intel sold its stake in cybersecurity firm ZeroFox and reduced its holdings in Astera Labs, all as part of a broader effort to manage costs and recover cash amid significant financial challenges.

The sale of Intel's 1.18 million shares in Arm Holdings, as reported in a recent SEC filing, comes at a time when the company is struggling with substantial financial losses. Despite the $147 million generated from the sale, Intel reported a $120 million net loss on its equity investments for the quarter, which is a part of a larger $1.6 billion loss that Intel faced during this period.

In addition to selling its stake in Arm, Intel also exited its investment in ZeroFox and reduced its involvement with Astera Labs, a company known for developing connectivity platforms for enterprise hardware. These moves are in line with Intel's strategy to reduce costs and stabilize its financial position as it faces ongoing market challenges.

Despite the divestment, Intel's past investment in Arm was likely driven by strategic considerations. Arm Holdings is a significant force in the semiconductor industry, with its designs powering most mobile devices, a market that, for obvious reasons, Intel would like to address. Intel and Arm are also collaborating on datacenter platforms tailored for Intel's 18A process technology. Additionally, Arm might view Intel as a potential licensee for its technologies and a valuable partner for other companies that license Arm's designs.

Intel's investment in Astera Labs was also a strategic one, as the company probably wanted to secure a steady supply of smart retimers, smart cable modems, and CXL memory controllers, which are used in volume in datacenters; Intel is certainly interested in selling as many datacenter CPUs as possible.

Intel's financial struggles were highlighted earlier this month when the company released a disappointing earnings report, which led to a 33% drop in its stock value, erasing billions of dollars of capitalization. To counter these difficulties, Intel announced plans to cut 15,000 jobs and implement other expense reductions. The company has also suspended its dividend, signaling the depth of its efforts to conserve cash and focus on recovery. When it comes to divestment of Arm stock, the need for immediate financial stabilization has presumably taken precedence, leading to the decision.

The AMD Ryzen 9 9950X and Ryzen 9 9900X Review: Flagship Zen 5 Soars - and Stalls

Earlier this month, AMD launched the first two desktop CPUs using their latest Zen 5 microarchitecture: the Ryzen 7 9700X and the Ryzen 5 9600X. As part of the new Ryzen 9000 family, these chips brought AMD's latest Zen 5 cores to the desktop market, following the architecture's debut on mobile last month with the Ryzen AI 300 series (which we reviewed).

Today, AMD is launching the remaining two Ryzen 9000 SKUs first announced at Computex 2024, completing the current Ryzen 9000 product stack. Both chips hail from the premium Ryzen 9 series: the flagship Ryzen 9 9950X has 16 Zen 5 cores and can boost as high as 5.7 GHz, while the Ryzen 9 9900X has 12 Zen 5 cores and offers boost clock speeds of up to 5.6 GHz.

Although they took slightly longer than expected to launch, as there was a delay from the initial launch date of July 31st, the full quartet of Ryzen 9000 X series processors armed with the latest Zen 5 cores are available. All of the Ryzen 9000 series processors use the same AM5 socket as the previous Ryzen 7000 (Zen 4) series, which means users can use current X670E and X670 motherboards with the new chips. Unfortunately, as we highlighted in our Ryzen 7 9700X and Ryzen 5 9600X review, the X870E/X870 motherboards, which were meant to launch alongside the Ryzen 9000 series, won't be available until sometime in September.

We've seen how the entry-level Ryzen 5 9600X and the mid-range Ryzen 7 9700X perform against the competition, but now it's the flagship Ryzen 9 pairing's turn. The Ryzen 9 9950X (16C/32T) and the Ryzen 9 9900X (12C/24T) both have a higher TDP (170 W/120 W respectively) than the Ryzen 7 and Ryzen 5 (65 W), but there are more cores, and Ryzen 9 is clocked faster at both base and turbo frequencies. With this in mind, it's time to see how AMD's Zen 5 flagship Ryzen 9 series for desktops performs with more firepower, with our review of the Ryzen 9 9950X and Ryzen 9 9900X processors.

G.Skill Intros Low Latency DDR5 Memory Modules: CL30 at 6400 MT/s

G.Skill on Tuesday introduced its ultra-low-latency DDR5-6400 memory modules that feature a CAS latency of 30 clocks, which appears to be the industry's most aggressive timings yet for DDR5-6400 sticks. The modules will be available for both AMD and Intel CPU-based systems.

With every new generation of DDR memory comes an increase in data transfer rates and a lengthening of relative latencies. While for the vast majority of applications the increased bandwidth offsets the performance impact of higher timings, there are applications that favor low latencies. However, shrinking latencies is sometimes harder than increasing data transfer rates, which is why low-latency modules are rare.

Nonetheless, G.Skill has apparently managed to cherry-pick enough DDR5 memory chips and build appropriate printed circuit boards to produce DDR5-6400 modules with CL30 timings, which are substantially lower than the CL46 timings recommended by JEDEC for this speed bin. This means that while JEDEC-standard modules have an absolute latency of 14.375 ns, G.Skill's modules can boast a latency of just 9.375 ns – an approximately 35% decrease.
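
Those absolute latencies are easy to verify: the CAS latency is counted in memory clock cycles, and the DDR5 memory clock runs at half the transfer rate. A quick check using only the quoted timings:

```python
# Absolute CAS latency: CL cycles divided by the memory clock (half the transfer rate).
def cas_latency_ns(cl: int, transfer_rate_mtps: int) -> float:
    clock_mhz = transfer_rate_mtps / 2
    return cl / clock_mhz * 1000  # cycles / MHz -> nanoseconds

print(cas_latency_ns(46, 6400))  # JEDEC DDR5-6400 CL46: 14.375 ns
print(cas_latency_ns(30, 6400))  # G.Skill DDR5-6400 CL30: 9.375 ns (~35% lower)
```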

G.Skill's DDR5-6400 CL30 39-39-102 modules have a capacity of 16 GB and will be available in 32 GB dual-channel kits, though the company does not disclose voltages, which are likely considerably higher than those standardized by JEDEC.

The company plans to make its DDR5-6400 modules available both for AMD systems with EXPO profiles (Trident Z5 Neo RGB and Trident Z5 Royal Neo) and for Intel-powered PCs with XMP 3.0 profiles (Trident Z5 RGB and Trident Z5 Royal). For AMD AM5 systems that have a practical limitation of 6000 MT/s – 6400 MT/s for DDR5 memory (as this is roughly as fast as AMD's Infinity Fabric can operate at with a 1:1 ratio), the new modules will be particularly beneficial for AMD's Ryzen 7000 and Ryzen 9000-series processors.

G.Skill notes that since its modules are non-standard, they will not work with all systems but will operate on high-end motherboards with properly cooled CPUs.

The new ultra-low-latency memory kits will be available worldwide from G.Skill's partners starting in late August 2024. The company did not disclose the pricing of these modules, but since we are talking about premium products that boast unique specifications, they are likely to be priced accordingly.

Samsung's 128 TB-Class BM1743 Enterprise SSD Displayed at FMS 2024

Samsung had quietly launched its BM1743 enterprise QLC SSD last month with a hefty 61.44 TB SKU. At FMS 2024, the company had the even larger 122.88 TB version of that SSD on display, alongside a few recorded benchmarking sessions. Compared to the previous generation, the BM1743 comes with a 4.1x improvement in I/O performance, improved data retention, and a 45% improvement in power efficiency for sequential writes.

The 128 TB-class QLC SSD boasts of sequential read speeds of 7.5 GBps and write speeds of 3 GBps. Random reads come in at 1.6 M IOPS, while 16 KB random writes clock in at 45K IOPS. Based on the quoted random write access granularity, it appears that Samsung is using a 16 KB indirection unit (IU) to optimize flash management. This is similar to the strategy adopted by Solidigm with IUs larger than 4K in their high-capacity SSDs.
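
The trade-off with a larger indirection unit is what happens to writes smaller than the IU: in the worst case, each one becomes a read-modify-write of a whole unit. A simplified illustration (worst case, ignoring any write coalescing the controller may do):

```python
# Worst-case write amplification when a write is smaller than the indirection unit (IU).
def write_amplification(write_kib: int, iu_kib: int = 16) -> float:
    ius_touched = -(-write_kib // iu_kib)       # ceiling division: whole IUs rewritten
    return ius_touched * iu_kib / write_kib

print(write_amplification(4))    # 4 KiB random write -> 4.0x amplification
print(write_amplification(16))   # IU-aligned 16 KiB write -> 1.0x
```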

A recorded benchmark session on the company's PM9D3a 8-channel Gen 5 SSD was also on display.

The SSD family is being promoted as a mainstream option for datacenters, and boasts of sequential reads up to 12 GBps and writes up to 6.8 GBps. Random reads clock in at 2 M IOPS, and random writes at 400 K IOPS.

Available in multiple form-factors up to 32 TB (M.2 tops out at 2 TB), the drive's firmware includes optional support for flexible data placement (FDP) to help address the write amplification aspect.

The PM1753 is the current enterprise SSD flagship in Samsung's lineup. With support for 16 NAND channels and capacities up to 32 TB, this U.2 / E3.S SSD has advertised sequential read and write speeds of 14.8 GBps and 11 GBps respectively. Random reads and writes for 4 KB accesses are listed at 3.4 M and 600 K IOPS.

Samsung claims a 1.7x performance improvement and a 1.7x power efficiency improvement over the previous generation (PM1743), making this TLC SSD suitable for AI servers.

The 9th Gen. V-NAND wafer was also available for viewing, though photography was prohibited. Mass production of this flash memory began in April 2024.

Kioxia Demonstrates Optical Interface SSDs for Data Centers

A few years back, the Japanese government's New Energy and Industrial Technology Development Organization (NEDO) allocated funding for the development of green datacenter technologies. With the aim to obtain up to 40% savings in overall power consumption, several Japanese companies have been developing an optical interface for their enterprise SSDs. And at this year's FMS, Kioxia had their optical interface on display.

For this demonstration, Kioxia took its existing CM7 enterprise SSD and created an optical interface for it. A PCIe card with on-board optics developed by Kyocera is installed in the server slot. An optical interface allows data transfer over long distances (it was 40m in the demo, but Kioxia promises lengths of up to 100m for the cable in the future). This allows the storage to be kept in a separate room with minimal cooling requirements compared to the rack with the CPUs and GPUs. Disaggregation of different server components will become an option as very high throughput interfaces such as PCIe 7.0 (with 128 GT/s rates) become available.

The demonstration of the optical SSD showed a slight loss in IOPS performance, but a significant advantage in the latency metric over the shipping enterprise SSD behind a copper network link. Obviously, there are advantages in wiring requirements and signal integrity maintenance with optical links.

Being a proof-of-concept demonstration, we do see the requirement for an industry-standard approach if this were to gain adoption among different datacenter vendors. The PCI-SIG optical workgroup will need to get its act together soon to create a standards-based approach to this problem.

Silicon Motion Demonstrates Flexible Data Placement on MonTitan Gen 5 Enterprise SSD Platform

At FMS 2024, the technological requirements from the storage and memory subsystem took center stage. Both SSD and controller vendors had various demonstrations touting their suitability for different stages of the AI data pipeline - ingestion, preparation, training, checkpointing, and inference. Vendors like Solidigm have different types of SSDs optimized for different stages of the pipeline. At the same time, controller vendors have taken advantage of one of the features introduced recently in the NVM Express standard - Flexible Data Placement (FDP).

FDP involves the host providing information / hints about the areas where the controller could place the incoming write data in order to reduce the write amplification. These hints are generated based on specific block sizes advertised by the device. The feature is completely backwards-compatible, with non-FDP hosts working just as before with FDP-enabled SSDs, and vice-versa.
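
A toy model makes the benefit clearer (purely conceptual – this is not the NVMe FDP command set, and the class and hint names are ours): if the host tags writes from different streams, the controller can steer each tag into its own erase blocks, so that deleting one stream later requires no relocation of unrelated valid data.

```python
# Toy illustration of hint-based data placement: group writes by host-provided hint so that
# data with similar lifetimes shares erase blocks. Not the actual NVMe FDP interface.
from collections import defaultdict

class ToyFlash:
    def __init__(self):
        self.blocks = defaultdict(list)   # placement hint -> logical pages stored together

    def write(self, lba: int, hint: str) -> None:
        self.blocks[hint].append(lba)

    def delete_stream(self, hint: str) -> int:
        """Drop a whole stream; returns how many valid pages had to be relocated (none here)."""
        self.blocks.pop(hint, None)
        return 0   # with perfect placement, the erased blocks held no unrelated valid data

flash = ToyFlash()
for lba in range(4):
    flash.write(lba, hint="checkpoint")     # short-lived data
    flash.write(100 + lba, hint="dataset")  # long-lived data
print(flash.delete_stream("checkpoint"))    # 0 relocations -> lower write amplification
```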

Silicon Motion's MonTitan Gen 5 Enterprise SSD Platform was announced back in 2022. Since then, Silicon Motion has been touting the flexibility of the platform, allowing its customers to incorporate their own features as part of the customization process. This approach is common in the enterprise space, as we have seen with Marvell's Bravera SC5 SSD controller in the DapuStor SSDs and Microchip's Flashtec controllers in the Longsys FORESEE enterprise SSDs.

At FMS 2024, the company was demonstrating the advantages of flexible data placement by allowing a single QLC SSD based on their MonTitan platform to take part in different stages of the AI data pipeline while maintaining the required quality of service (minimum bandwidth) for each process. The company even has a trademarked name (PerformaShape) for the firmware feature in the controller that allows the isolation of different concurrent SSD accesses (from different stages in the AI data pipeline) to guarantee this QoS. Silicon Motion claims that this scheme will enable its customers to get the maximum write performance possible from QLC SSDs without negatively impacting the performance of other types of accesses.
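
The underlying idea – guaranteeing each concurrent stream a minimum share of the drive's bandwidth while letting spare capacity float – can be sketched with a simple proportional allocator (a generic illustration of bandwidth partitioning, not Silicon Motion's actual PerformaShape algorithm; the stream names and numbers are made up):

```python
# Generic minimum-bandwidth allocation across concurrent SSD access streams.
# Illustrative only; not Silicon Motion's PerformaShape implementation.
def allocate_bandwidth(total_gbps: float, minimums: dict) -> dict:
    """Give every stream its guaranteed floor, then split leftover capacity evenly."""
    reserved = sum(minimums.values())
    if reserved > total_gbps:
        raise ValueError("minimum guarantees exceed device bandwidth")
    spare = (total_gbps - reserved) / len(minimums)
    return {stream: floor + spare for stream, floor in minimums.items()}

# Example: one QLC drive shared by three AI-pipeline stages with different floors (GB/s).
print(allocate_bandwidth(14.0, {"ingest": 2.0, "training_reads": 6.0, "checkpointing": 3.0}))
```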

Silicon Motion and Phison have market leadership in the client SSD controller market with similar approaches. However, their enterprise SSD controller marketing couldn't be more different. While Phison has gone in for a turnkey solution with their Gen 5 SSD platform (to the extent of not adopting the white label route for this generation, and instead opting to get the SSDs qualified with different cloud service providers themselves), Silicon Motion is opting for a different approach. The flexibility and customization possibilities can make platforms like the MonTitan appeal to flash array vendors.
