
G.Skill Intros Low Latency DDR5 Memory Modules: CL30 at 6400 MT/s

August 13, 2024 at 20:45

G.Skill on Tuesday introduced its ultra-low-latency DDR5-6400 memory modules that feature a CAS latency of 30 clocks, which appears to be the industry's most aggressive timings yet for DDR5-6400 sticks. The modules will be available for both AMD and Intel CPU-based systems.

With every new generation of DDR memory comes an increase in data transfer rates, along with higher relative latencies (measured in clock cycles). While for the vast majority of applications the increased bandwidth offsets the performance impact of higher timings, there are applications that favor low latencies. However, shrinking latencies is sometimes harder than increasing data transfer rates, which is why low-latency modules are rare.

Nonetheless, G.Skill has apparently managed to cherry-pick enough DDR5 memory chips and build appropriate printed circuit boards to produce DDR5-6400 modules with CL30 timings, which are substantially lower than the CL46 timings recommended by JEDEC for this speed bin. This means that while JEDEC-standard modules have an absolute latency of 14.375 ns, G.Skill's modules can boast a latency of just 9.375 ns – an approximately 35% decrease.
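The conversion behind those figures is the standard DDR latency formula: absolute latency in nanoseconds equals CAS cycles times 2000 divided by the data rate in MT/s. Here is a short Python check of the numbers quoted above; it is a generic calculation, not anything specific to G.Skill's modules.

```python
# Absolute CAS latency in nanoseconds for DDR memory.
# DDR transfers data twice per clock, so the I/O clock runs at half the MT/s
# data rate and one clock period lasts 2000 / data_rate nanoseconds.
def cas_latency_ns(cl_cycles: int, data_rate_mt_s: int) -> float:
    return cl_cycles * 2000 / data_rate_mt_s

jedec  = cas_latency_ns(46, 6400)   # 14.375 ns (JEDEC DDR5-6400 CL46)
gskill = cas_latency_ns(30, 6400)   #  9.375 ns (G.Skill DDR5-6400 CL30)
print(f"JEDEC: {jedec} ns, G.Skill: {gskill} ns, "
      f"reduction: {(1 - gskill / jedec) * 100:.1f}%")   # ~34.8%
```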

G.Skill's DDR5-6400 CL30 39-39-102 modules have a capacity of 16 GB and will be available in 32 GB dual-channel kits, though the company does not disclose voltages, which are likely considerably higher than those standardized by JEDEC.

The company plans to make its DDR5-6400 modules available both for AMD systems with EXPO profiles (Trident Z5 Neo RGB and Trident Z5 Royal Neo) and for Intel-powered PCs with XMP 3.0 profiles (Trident Z5 RGB and Trident Z5 Royal). Since AMD AM5 systems have a practical DDR5 limit of 6000 MT/s – 6400 MT/s (roughly as fast as AMD's Infinity Fabric can run while maintaining a 1:1 ratio), the new modules should be particularly beneficial for AMD's Ryzen 7000 and Ryzen 9000-series processors.

G.Skill notes that since its modules are non-standard, they will not work with all systems but will operate on high-end motherboards with properly cooled CPUs.

The new ultra-low-latency memory kits will be available worldwide from G.Skill's partners starting in late August 2024. The company did not disclose the pricing of these modules, but since we are talking about premium products that boast unique specifications, they are likely to be priced accordingly.

Samsung's 128 TB-Class BM1743 Enterprise SSD Displayed at FMS 2024

August 13, 2024 at 18:00

Samsung quietly launched its BM1743 enterprise QLC SSD last month with a hefty 61.44 TB SKU. At FMS 2024, the company had the even larger 122.88 TB version of that SSD on display, alongside a few recorded benchmarking sessions. Compared to the previous generation, the BM1743 comes with a 4.1x improvement in I/O performance, improved data retention, and a 45% improvement in power efficiency for sequential writes.

The 128 TB-class QLC SSD boasts of sequential read speeds of 7.5 GBps and write speeds of 3 GBps. Random reads come in at 1.6 M IOPS, while 16 KB random writes clock in at 45K IOPS. Based on the quoted random write access granularity, it appears that Samsung is using a 16 KB indirection unit (IU) to optimize flash management. This is similar to the strategy adopted by Solidigm with IUs larger than 4K in their high-capacity SSDs.

A recorded benchmark session on the company's PM9D3a 8-channel Gen 5 SSD was also on display.

The SSD family is being promoted as a mainstream option for datacenters, and boasts of sequential reads up to 12 GBps and writes up to 6.8 GBps. Random reads clock in at 2 M IOPS, and random writes at 400 K IOPS.

Available in multiple form-factors up to 32 TB (M.2 tops out at 2 TB), the drive's firmware includes optional support for flexible data placement (FDP) to help address the write amplification aspect.

The PM1753 is the current enterprise SSD flagship in Samsung's lineup. With support for 16 NAND channels and capacities up to 32 TB, this U.2 / E3.S SSD has advertised sequential read and write speeds of 14.8 GBps and 11 GBps respectively. Random reads and writes for 4 KB accesses are listed at 3.4 M and 600 K IOPS.

Samsung claims a 1.7x performance improvement and a 1.7x power efficiency improvement over the previous generation (PM1743), making this TLC SSD suitable for AI servers.

The 9th Gen. V-NAND wafer was also available for viewing, though photography was prohibited. Mass production of this flash memory began in April 2024.

Kioxia Demonstrates Optical Interface SSDs for Data Centers

August 13, 2024 at 16:00

A few years back, the Japanese government's New Energy and Industrial Technology Development Organization (NEDO) allocated funding for the development of green datacenter technologies. With the aim of obtaining up to 40% savings in overall power consumption, several Japanese companies have been developing an optical interface for their enterprise SSDs. And at this year's FMS, Kioxia had their optical interface on display.

For this demonstration, Kioxia took its existing CM7 enterprise SSD and created an optical interface for it. A PCIe card with on-board optics developed by Kyocera is installed in the server slot. An optical interface allows data transfer over long distances (it was 40m in the demo, but Kioxia promises lengths of up to 100m for the cable in the future). This allows the storage to be kept in a separate room with minimal cooling requirements compared to the rack with the CPUs and GPUs. Disaggregation of different server components will become an option as very high throughput interfaces such as PCIe 7.0 (with 128 GT/s rates) become available.

The demonstration of the optical SSD showed a slight loss in IOPS performance, but a significant advantage in the latency metric over the shipping enterprise SSD behind a copper network link. Obviously, there are advantages in wiring requirements and signal integrity maintenance with optical links.

This being a proof-of-concept demonstration, an industry-standard approach will be required if the technology is to gain adoption among different datacenter vendors. The PCI-SIG optical workgroup will need to get its act together soon to create a standards-based approach to this problem.

Silicon Motion Demonstrates Flexible Data Placement on MonTitan Gen 5 Enterprise SSD Platform

August 13, 2024 at 14:00

At FMS 2024, the technological requirements from the storage and memory subsystem took center stage. Both SSD and controller vendors had various demonstrations touting their suitability for different stages of the AI data pipeline - ingestion, preparation, training, checkpointing, and inference. Vendors like Solidigm have different types of SSDs optimized for different stages of the pipeline. At the same time, controller vendors have taken advantage of one of the features introduced recently in the NVM Express standard - Flexible Data Placement (FDP).

FDP involves the host providing information / hints about the areas where the controller could place the incoming write data in order to reduce the write amplification. These hints are generated based on specific block sizes advertised by the device. The feature is completely backwards-compatible, with non-FDP hosts working just as before with FDP-enabled SSDs, and vice-versa.

Silicon Motion's MonTitan Gen 5 Enterprise SSD Platform was announced back in 2022. Since then, Silicon Motion has been touting the flexibility of the platform, allowing its customers to incorporate their own features as part of the customization process. This approach is common in the enterprise space, as we have seen with Marvell's Bravera SC5 SSD controller in the DapuStor SSDs and Microchip's Flashtec controllers in the Longsys FORESEE enterprise SSDs.

At FMS 2024, the company was demonstrating the advantages of flexible data placement by allowing a single QLC SSD based on their MonTitan platform to take part in different stages of the AI data pipeline while maintaining the required quality of service (minimum bandwidth) for each process. The company even has a trademarked name (PerformaShape) for the firmware feature in the controller that allows the isolation of different concurrent SSD accesses (from different stages in the AI data pipeline) to guarantee this QoS. Silicon Motion claims that this scheme will enable its customers to get the maximum write performance possible from QLC SSDs without negatively impacting the performance of other types of accesses.

Silicon Motion and Phison lead the client SSD controller market with similar approaches. However, their enterprise SSD controller marketing couldn't be more different. While Phison has gone in for a turnkey solution with their Gen 5 SSD platform (to the extent of not adopting the white label route for this generation, and instead opting to get the SSDs qualified with different cloud service providers themselves), Silicon Motion is opting for a different approach. The flexibility and customization possibilities can make platforms like the MonTitan appeal to flash array vendors.

Rapidus Wants to Offer Fully Automated Packaging for 2nm Fab to Cut Chip Lead Times

August 13, 2024 at 12:00

One of the core challenges that Rapidus will face when it kicks off volume production of chips on its 2nm-class process technology in 2027 is lining up customers. With Intel, Samsung, and TSMC all slated to offer their own 2nm-class nodes by that time, Rapidus will need some kind of advantage to attract customers away from its more established rivals. To that end, the company thinks they've found their edge: fully automated packaging that will allow for shorter chip lead times than manned packaging operations.

In an interview with Nikkei, Rapidus' president, Atsuyoshi Koike, outlined the company's vision to use advanced packaging as a competitive edge for the new fab. The Hokkaido facility, which is currently under construction and is expecting to begin equipment installation this December, is already slated to both produce chips and offer advanced packaging services within the same facility, an industry first. But ultimately, Rapidus' biggest plan to differentiate itself is by automating the back-end fab processes (chip packaging) to provide significantly faster turnaround times.

Rapidus is targeting back-end production in particular as, compared to front-end (lithography) production, back-end production still heavily relies on human labor. No other advanced packaging fab has fully automated the process thus far, which provides for a degree of flexibility, but slows throughput. But with automation in place to handle this aspect of chip production, Rapidus would be able to increase chip packaging efficiency and speed, which is crucial as chip assembly tasks become more complex. Rapidus is also collaborating with multiple Japanese suppliers to source materials for back-end production.

"In the past, Japanese chipmakers tried to keep their technology development exclusively in-house, which pushed up development costs and made them less competitive," Koike told Nikkei. "[Rapidus plans to] open up technology that should be standardized, bringing down costs, while handling important technology in-house." 

Financially, Rapidus faces a significant challenge, needing a total of ¥5 trillion ($35 billion) by the time mass production starts in 2027. The company estimates that ¥2 trillion will be required by 2025 for prototype production. While the Japanese government has provided ¥920 billion in aid, Rapidus still needs to secure substantial funding from private investors.

Due to its lack of track record and experience in chip production, as well as limited visibility into its chances of success, Rapidus is finding it difficult to attract private financing. The company is in discussions with the government to make it easier to raise capital, including potential loan guarantees, and is hopeful that new legislation will assist in this effort.

Kioxia Demonstrates RAID Offload Scheme for NVMe Drives

August 12, 2024 at 18:30

At FMS 2024, Kioxia had a proof-of-concept demonstration of their proposed new RAID offload methodology for enterprise SSDs. The impetus for this is quite clear: as SSDs get faster in each generation, RAID arrays have a major problem of maintaining (and scaling up) performance. Even in cases where the RAID operations are handled by a dedicated RAID card, a simple write request in, say, a RAID 5 array would involve two reads and two writes to different drives. In cases where there is no hardware acceleration, the data from the reads needs to travel all the way back to the CPU and main memory for further processing before the writes can be done.
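For readers wondering where those two reads and two writes come from, here is a minimal Python sketch of the generic RAID 5 read-modify-write parity update; it illustrates the XOR math being offloaded, not Kioxia's actual controller logic.

```python
# Generic RAID 5 partial-stripe write: update one data block and the parity
# block without touching the rest of the stripe.
# new_parity = old_parity XOR old_data XOR new_data
def raid5_new_parity(old_data: bytes, new_data: bytes, old_parity: bytes) -> bytes:
    return bytes(p ^ od ^ nd for p, od, nd in zip(old_parity, old_data, new_data))

# Two reads (old data block, old parity block) feed the XOR computation,
# followed by two writes (new data block, new parity block).
old_data, new_data, old_parity = b"\x0f" * 4, b"\xf0" * 4, b"\x55" * 4
print(raid5_new_parity(old_data, new_data, old_parity).hex())  # 'aaaaaaaa'
```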

Kioxia has proposed the use of the PCIe direct memory access feature along with the SSD controller's controller memory buffer (CMB) to avoid the movement of data up to the CPU and back. The required parity computation is done by an accelerator block resident within the SSD controller.

In Kioxia's PoC implementation, the DMA engine can access the entire host address space (including the peer SSD's BAR-mapped CMB), allowing it to receive and transfer data as required from neighboring SSDs on the bus. Kioxia noted that their offload PoC saw close to 50% reduction in CPU utilization and upwards of 90% reduction in system DRAM utilization compared to software RAID done on the CPU. The proposed offload scheme can also handle scrubbing operations without taking up the host CPU cycles for the parity computation task.

Kioxia has already taken steps to contribute these features to the NVM Express working group. If accepted, the proposed offload scheme will be part of a standard that could become widely available across multiple SSD vendors.

Western Digital Introduces 4 TB microSDUC, 8 TB SDUC, and 16 TB External SSDs

August 12, 2024 at 17:30

Western Digital's BiCS8 218-layer 3D NAND is being put to good use in a wide range of client and enterprise platforms, including WD's upcoming Gen 5 client SSDs and 128 TB-class datacenter SSD. On the external storage front, the company demonstrated four different products: on the card side, 4 TB microSDUC and 8 TB SDUC cards with UHS-I speeds, and on the portable SSD side, two 16 TB drives. One will be a SanDisk Desk Drive with external power, while the other comes in the SanDisk Extreme Pro housing with a lanyard opening in the case.

All of these are using BiCS8 QLC NAND, though I did hear booth talk (as I was taking leave) that they were not supposed to divulge the use of QLC in these products. The 4 TB microSDUC and 8 TB SDUC cards are rated for UHS-I speeds. They are being marketed under the SanDisk Ultra branding.

The SanDisk Desk Drive is an external SSD with an 18W power adapter, and it has been in the market for a few months now. Initially launched in capacities up to 8 TB, Western Digital had promised a 16 TB version before the end of the year. It appears that the product is coming to retail quite soon. One aspect to note is that this drive has been using TLC for the SKUs that are currently in the market, so it appears unlikely that the 16 TB version would be QLC. The units (at least up to the 8 TB capacity point) come with two SN850XE drives. Given the recent introduction of the 8 TB SN850X, an 'E' version with tweaked firmware is likely to be present in the 16 TB Desk Drive.

The 16 TB portable SSD in the SanDisk Extreme housing was a technology demonstration. It is definitely the highest capacity bus-powered portable SSD demonstrated by any vendor at any trade show thus far. Given the 16 TB Desk Drive's imminent market introduction, it is just a matter of time before the technology demonstration of the bus-powered version becomes a retail reality.

The Noctua NH-D15 G2 LBC Cooler Review: Notoriously Big, Incredibly Good

August 12, 2024 at 16:00

When you buy a retail computer CPU, it usually comes with a standard cooler. However, most enthusiasts find that the stock cooler just does not cut it in terms of performance. So, they often end up getting a more advanced cooler that better suits their needs. Choosing the right cooler isn't a one-size-fits-all deal – it is a bit of a journey. You have to consider what you need, what you want, your budget, and how much space you have in your setup. All these factors come into play when picking out the perfect cooler.

When it comes to high-performance coolers, Noctua is a name that frequently comes up among enthusiasts. Known for their exceptional build quality and superb cooling performance, Noctua coolers have been a favorite in the PC building community for years. A typical Noctua cooler is characterized by incredibly quiet fans and top-notch cooling efficiency overall, which has made the company's products ideal for overclockers and builders who want to keep their systems running cool and quiet.

In this review, we'll be taking a closer look at the NH-D15 G2 cooler, the successor to the legendary NH-D15. This cooler comes with a hefty price tag of $150 but promises to deliver the best performance that an air cooler can currently achieve. The NH-D15 G2 is available in three versions: one standard version as well as two specialized variants – LBC (Low Base Convexity) and HBC (High Base Convexity). These variants are designed to make better contact with specific CPUs; the LBC is recommended for AMD AM5 processors, while the HBC is tailored for Intel LGA1700 processors, mirroring the slightly different geometry of their respective heatspreaders. Conversely, the standard version is a “one size fits all” approach for users who prioritize long-term compatibility over squeezing out every ounce of potential the cooler has.

Kioxia Details BiCS 8 NAND at FMS 2024: 218 Layers With Superior Scaling

August 12, 2024 at 14:00

Kioxia's booth at FMS 2024 was a busy one with multiple technology demonstrations keeping visitors occupied. A walk-through of the BiCS 8 manufacturing process was the first to grab my attention. Kioxia and Western Digital announced the sampling of BiCS 8 in March 2023. We had touched briefly upon its CMOS Bonded Array (CBA) scheme in our coverage of Kioxia's 2Tb QLC NAND device and coverage of Western Digital's 128 TB QLC enterprise SSD proof-of-concept demonstration. At Kioxia's booth, we got more insights.

Traditionally, fabrication of flash chips involved placement of the associated logic circuitry (CMOS process) around the periphery of the flash array. The process then moved on to putting the CMOS under the cell array, but the wafer development process was serialized, with the CMOS logic getting fabricated first followed by the cell array on top. However, this has some challenges, because the cell array requires a high-temperature processing step to ensure higher reliability, which can be detrimental to the health of the CMOS logic. Thanks to recent advancements in wafer bonding techniques, the new CBA process allows the CMOS wafer and cell array wafer to be processed independently in parallel and then pieced together, as shown in the models above.

The BiCS 8 3D NAND incorporates 218 layers, compared to 112 layers in BiCS 5 and 162 layers in BiCS 6. The company decided to skip over BiCS 7 (or, rather, it was probably a short-lived generation meant as an internal test vehicle). The generation retains the four-plane charge trap structure of BiCS 6. In its TLC avatar, it is available as a 1 Tbit device. The QLC version is available in two capacities - 1 Tbit and 2 Tbit.

Kioxia also noted that while the number of layers (218) doesn't compare favorably with the latest layer counts from the competition, its lateral scaling / cell shrinkage has enabled it to be competitive in terms of bit density as well as operating speeds (3200 MT/s). For reference, the latest shipping NAND from Micron - the G9 - has 276 layers with a bit density in TLC mode of 21 Gbit/mm2, and operates at up to 3600 MT/s. However, its 232L NAND operates only up to 2400 MT/s and has a bit density of 14.6 Gbit/mm2.

It must be noted that the CBA hybrid bonding process has advantages over the current processes used by other vendors - including Micron's CMOS under array (CuA) and SK hynix's 4D PUC (periphery-under-cell) developed in the late 2010s. It is expected that other NAND vendors will also eventually move to some variant of the hybrid bonding scheme used by Kioxia.

Intel Publishes First Microcode Update for Raptor Lake Stability Issue, BIOSes Going Out Now

August 9, 2024 at 19:00

Following Intel’s run of financial woes and Raptor Lake chip stability issues, the company could use some good news on a Friday. And this week they’re delivering just that, with the first version of the eagerly awaited microcode fix for desktop Raptor Lake processors – as well as the first detailed explanation of the underlying issue.

The new microcode release, version 0x129, is Intel’s first stab at addressing the elevated voltage issue that has seemingly been the cause of Raptor Lake processor degradation over the past year and a half. Intel has been investigating the issue all year, and after a slow start, in recent weeks has begun making more significant progress, identifying what they’re calling an “elevated operating voltage” issue in high-TDP desktop Raptor Lake (13th & 14th Generation Core) chips. Back in late July the company was targeting a mid-August release date for a microcode patch to fix (or rather, prevent) the degradation issue, and just ahead of that deadline, Intel has begun shipping the microcode to their motherboard partners.

Even with this new microcode, however, Intel is not done with the stability issue. Intel is still investigating whether it’s possible to improve the stability of already-degraded processors, and the overall tone of Intel’s announcement is very much that of a beta software fix – Intel won’t be submitting this specific microcode revision for distribution via operating system updates, for example. So even if this microcode is successful in stopping ongoing degradation, it seems that Intel hasn’t closed the book on the issue entirely, and that the company is presumably working towards a fix suitable for wider release.

Capping At 1.55v: Elevated Voltages Beget Elevated Voltages

So just what does the 0x129 microcode update do? In short, it caps the voltage of affected Raptor Lake desktop chips at a still-toasty (but in spec) 1.55v. As noted in Intel’s previous announcements, excessive voltages seem to be the cause of the issue, so capping voltages at what Intel has determined is the proper limit should prevent future chip damage.

The company’s letter to the community also outlines, for the first time, just what is going on under the hood with degraded chips. Those chips that have already succumbed to the issue from repeated voltage spikes have deteriorated in such a way that the minimum voltage needed to operate the chip – Vmin – has increased beyond Intel’s original specifications. As a result, those chips are no longer getting enough voltage to operate.

Seasoned overclockers will no doubt find that this is a familiar story, as this is one of the ways that overclocked processors degrade over time. In those cases – as it appears to be with the Raptor Lake issue – more voltage is needed to keep a chip stable, particularly in workloads where the voltage to the chip is already sagging.

And while all signs point to this degradation being irreversible (and a lot of RMAs in Intel’s future), there is a ray of hope. If Intel’s analysis is correct that degraded Raptor Lake chips can still operate properly with a higher Vmin voltage, then there is the possibility of saving at least some of these chips, and bringing them back to stability.

This “Vmin shift,” as Intel is calling it, is the company’s next investigative target. According to the company’s letter, they are aiming to provide updates by the “end of August.”

In the meantime, Intel’s eager motherboard partners have already begun releasing BIOSes with the new microcode, with ASUS and MSI even jumping the gun and sending out BIOSes before Intel had a chance to properly announce the microcode. Both vendors are releasing these as beta BIOSes, reflecting the general early nature of the microcode fix itself. And while we expect most users will want to get this microcode in place ASAP to mitigate further damage on affected chips, it would be prudent to treat these beta BIOSes as just that.

Along those lines, as noted earlier, Intel is only distributing the 0x129 microcode via BIOS updates at this time. This microcode will not be coming to other systems via operating system updates. At this point we still expect distribution via OS updates to be the end game for this fix, but for now, Intel isn’t providing a timeline or other guidance for when that might happen. So for PC enthusiasts, at least, a BIOS update is the only way to get it for now.

Performance Impact: Generally Nil – But Not Always

Finally, Intel’s message also provides a bit of guidance on the performance impact of the new microcode, based on their internal testing. Previously the company has indicated that they expected no significant performance impact, and based on their expanded testing, by and large this remains the case. However, there are going to be some workloads that suffer from performance regressions as a result.

So far, Intel has found a couple of workloads where they are seeing regressions. This includes PugetBench GPU Effects Score and, on the gaming side of matters, Hitman 3: Dartmoor. Otherwise, virtually everything else Intel has tested, including common benchmarks like Cinebench and major games, is not showing performance regressions. So the overall outcome of the fix is not quite a spotless recovery, but it’s also not leading to widespread performance losses, either.

As for AnandTech, we’ll be digging into this on our own benchmark suite as time allows. We have one more CPU launch coming up next week, so there’s no shortage of work to be done in the next few days. (Sorry, Gavin!)

Intel’s Full Statement

Intel is currently distributing to its OEM/ODM partners a new microcode patch (0x129) for its Intel Core 13th/14th Gen desktop processors which will address incorrect voltage requests to the processor that are causing elevated operating voltage.

For all Intel Core 13th/14th Gen desktop processor users: This patch is being distributed via BIOS update and will not be available through operating system updates. Intel is working with its partners to ensure timely validation and rollout of the BIOS update for systems currently in service.

Instability Analysis Update – Microcode Background and Performance Implications

In addition to extended warranty coverage, Intel has released three mitigations related to the instability issue – commonly experienced as consistent application crashes and repeated hangs – to help stabilize customer systems with Intel Core 13th and 14th gen desktop processors:
  1. Intel default settings to avoid elevated power delivery impact to the processor (May 2024)
  2. Microcode 0x125 to fix the eTVB issue in i9 processors (June 2024)
  3. Microcode 0x129 to address elevated voltages (August 2024)
Intel’s current analysis finds there is a significant increase to the minimum operating voltage (Vmin) across multiple cores on affected processors due to elevated voltages. Elevated voltage events can accumulate over time and contribute to the increase in Vmin for the processor.

The latest microcode update (0x129) will limit voltage requests above 1.55V as a preventative mitigation for processors not experiencing instability symptoms. This latest microcode update will primarily improve operating conditions for K/KF/KS processors. Intel is also confirming, based on extensive validation, all future products will not be affected by this issue.

Intel is continuing to investigate mitigations for scenarios that can result in Vmin shift on potentially impacted Intel Core 13th and 14th Gen desktop processors. Intel will provide updates by end of August.  

Intel’s internal testing – utilizing Intel Default Settings - indicates performance impact is within run-to-run variation (eg. 3DMark: Timespy, WebXPRT 4, Cinebench R24, Blender 4.2.0) with a few sub-tests showing moderate impacts (WebXPRT Online Homework; PugetBench GPU Effects Score). For gaming workloads tested, performance has also been within run-to-run variation (eg. Cyberpunk 2077, Shadow of the Tomb Raider, Total War: Warhammer III – Mirrors of Madness) with one exception showing slightly more impact (Hitman 3: Dartmoor). However, system performance is dependent on configuration and several other factors.

For unlocked Intel Core 13th and 14th Gen desktop processors, this latest microcode update (0x129) will not prevent users from overclocking if they so choose. Users can disable the eTVB setting in their BIOS if they wish to push above the 1.55V threshold. As always, Intel recommends users proceed with caution when overclocking their desktop processors, as overclocking may void their warranty and/or affect system health. As a general best practice, Intel recommends customers with Intel Core 13th and 14th Gen desktop processors utilize the Intel Default Settings.

In light of the recently announced extended warranty program, Intel is reaffirming its confidence in its products and is committed to making sure all customers who have or are currently experiencing instability symptoms on their 13th and/or 14th Gen desktop processors are supported in the exchange process. Users experiencing consistent instability symptoms should reach out to their system manufacturer (OEM/System Integrator purchase), Intel Customer Support (boxed processor), or place of purchase (tray processor) for further assistance.
-Intel Community Post

Phison Introduces E29T Gen 4 Controller for Mainstream Client SSDs

August 9, 2024 at 18:15

At FMS 2024, Phison gave us the usual updates on their client flash solutions. The E31T Gen 5 mainstream controller has already been seen at a few tradeshows starting with Computex 2023, while the USB4 native flash controller for high-end PSSDs was unveiled at CES 2024. The new solution being demonstrated was the E29T Gen 4 mainstream DRAM-less controller. Phison believes that there is still performance to be eked out on the Gen 4 platform with a low-cost DRAM-less solution.


Phison NVMe SSD Controller Comparison

| | E31T | E29T | E27T | E26 | E18 |
|---|---|---|---|---|---|
| Market Segment | Mainstream Consumer | Mainstream Consumer | Mainstream Consumer | High-End Consumer | High-End Consumer |
| Manufacturing Process | 7nm | 12nm | 12nm | 12nm | 12nm |
| CPU Cores | 2x Cortex R5 | 1x Cortex R5 | 1x Cortex R5 | 2x Cortex R5 | 3x Cortex R5 |
| Error Correction | 7th Gen LDPC | 7th Gen LDPC | 5th Gen LDPC | 5th Gen LDPC | 4th Gen LDPC |
| DRAM | No | No | No | DDR4, LPDDR4 | DDR4 |
| Host Interface | PCIe 5.0 x4 | PCIe 4.0 x4 | PCIe 4.0 x4 | PCIe 5.0 x4 | PCIe 4.0 x4 |
| NVMe Version | NVMe 2.0 | NVMe 2.0 | NVMe 2.0 | NVMe 2.0 | NVMe 1.4 |
| NAND Channels, Interface Speed | 4 ch, 3600 MT/s | 4 ch, 3600 MT/s | 4 ch, 3600 MT/s | 8 ch, 2400 MT/s | 8 ch, 1600 MT/s |
| Max Capacity | 8 TB | 8 TB | 8 TB | 8 TB | 8 TB |
| Sequential Read | 10.8 GB/s | 7.4 GB/s | 7.4 GB/s | 14 GB/s | 7.4 GB/s |
| Sequential Write | 10.8 GB/s | 6.5 GB/s | 6.7 GB/s | 11.8 GB/s | 7.0 GB/s |
| 4KB Random Read IOPS | 1500k | 1200k | 1200k | 1500k | 1000k |
| 4KB Random Write IOPS | 1500k | 1200k | 1200k | 2000k | 1000k |

Compared to the E27T, the key update is the use of a newer LDPC engine that enables better SSD lifespan as well as compatibility with the latest QLC flash, along with additional power optimizations.

The company also had a U21 USB4 PSSD reference design (complete with a MagSafe-compatible casing) on display, along with the usual CrystalDiskMark benchmark results. We were given to understand that PSSDs based on the U21 controller are very close to shipping into retail.

Phison has been known for taking the lead in introducing SSD controllers based on the latest and greatest interface options - be it PCIe 4.0, PCIe 5.0, or USB4. The competition usually comes in the form of tier-one vendors opting for their in-house solutions, or Silicon Motion stepping in a few quarters down the line with a more power-efficient solution after the market takes off. With the E29T, Phison is aiming to ensure that they still have a viable play in the mainstream Gen 4 market with their latest LDPC engine and support for the highest available NAND flash speeds.

U.S. Signs $1.5B in CHIPS Act Agreements With Amkor and SK hynix for Chip Packaging Plants

August 9, 2024 at 13:00

Under the CHIPS & Science Act, the U.S. government provided tens of billions of dollars in grants and loans to the world's leading makers of chips, such as Intel, Samsung, and TSMC, which will significantly expand the country's semiconductor production industry in the coming years. However, most chips are typically tested, assembled, and packaged in Asia, which has left the American supply chain incomplete. Addressing this last gap in the government's domestic chip production plans, these past couple of weeks the U.S. government signed memorandums of understanding worth about $1.5 billion with Amkor and SK hynix to support their efforts to build chip packaging facilities in the U.S.

Amkor to Build Advanced Packaging Facility with Apple in Mind

Amkor plans to build a $2 billion advanced packaging facility near Peoria, Arizona, to test and assemble chips produced by TSMC at its Fab 21 near Phoenix, Arizona. The company signed an MOU that offers $400 million in direct funding and access to $200 million in loans under the CHIPS & Science Act. In addition, the company plans to take advantage of a 25% investment tax credit on eligible capital expenditures.

Set to be strategically positioned near TSMC's upcoming Fab 21 complex in Arizona, Amkor's Peoria facility will occupy 55 acres and, when fully completed, will feature over 500,000 square feet (46,451 square meters) of cleanroom space, more than twice the size of Amkor's advanced packaging site in Vietnam. Although the company has not disclosed the exact capacity or the specific technologies the facility will support, it is expected to cater to a wide range of industries, including automotive, high-performance computing, and mobile technologies. This suggests the new plant will offer diverse packaging solutions, including traditional, 2.5D, and 3D technologies.

Amkor has collaborated extensively with Apple on the vision and initial setup of the Peoria facility, as Apple is slated to be the facility's first and largest customer, marking a significant commitment from the tech giant. This partnership highlights the importance of the new facility in reinforcing the U.S. semiconductor supply chain and positioning Amkor as a key partner for companies relying on TSMC's manufacturing capabilities. The project is expected to generate around 2,000 jobs and is scheduled to begin operations in 2027. 

SK hynix to Build HBM4 in the U.S.

This week SK hynix also signed a preliminary agreement with the U.S. government to receive up to $450 million in direct funding and $500 million in loans to build an advanced memory packaging facility in West Lafayette, Indiana. 

The proposed facility is scheduled to begin operations in 2028, which means that it will assemble HBM4 or HBM4E memory. Meanwhile, DRAM devices for high bandwidth memory (HBM) stacks will still be produced in South Korea. Nonetheless, packaging finished HBM4/HBM4E in the U.S. and possibly integrating these memory stacks with high-end processors is a big deal.

In addition to building its packaging plant, SK hynix plans to collaborate with Purdue University and other local research institutions to advance semiconductor technology and packaging innovations. This partnership is intended to bolster research and development in the region, positioning the facility as a hub for AI technology and skilled employment.

Sources: Amkor, SK hynix

Intel Postpones Innovation 2024 Event, Cites Poor Finances

August 8, 2024 at 23:15

As Intel looks to streamline its business operations and get back to profitability in the face of weak revenues and other business struggles, nothing is off the table as the company looks to cut costs into 2025 – not even Intel’s trade shows. In an unexpected announcement this afternoon, Intel has begun informing attendees of its fall Innovation 2024 trade show that the event has been postponed. Previously scheduled for September of this year, Innovation is now slated to take place at some point in 2025.

Innovation is Intel’s regular technical showcase for developers, customers, and the public, and is the successor to the company’s legendary IDF show. In recent years the show has been used to deliver status updates on Intel’s fabs, introduce new client platforms like Panther Lake, launch new products, and more.

But after 3 years of shows, the future of Innovation is up in the air, as Intel has officially postponed the show – and with a less-than-assuring commitment to when it may return.

In a message posted on the Innovation 2024 website (registration required), and separately sent out via email, Intel announced the postponement of the show. In lieu of the show, Intel still plans on holding smaller developer events.

Innovation 2024 Update

After careful consideration, we have made the decision to postpone our Intel-hosted event, Intel Innovation in September, until 2025. For the remainder of 2024, we will continue to host smaller, more targeted events, webinars, hackathons and meetups worldwide through Intel Connection and Intel AI Summit events, as well as have a presence at other industry moments.

Depending on your development needs, please leverage the following developer resources to learn more: developer.intel.com, developer.intel.com/ai, open.intel.com and intel.com/support. Click here for a full list of Developer events.
-Intel Innovation Website

Separately, in a statement sent to PCMag, the company cited its current financial situation, and that they “are having to make some tough decisions as we continue to align our cost structure and look to assess how we rebuild a sustainable engine of process technology leadership.”

While Intel had not yet published a full agenda for the now-delayed show, Innovation 2024 was expected to be a major showcase for Intel’s Lunar Lake and Arrow Lake client processors, both of which are due this fall. Arrow Lake in particular is Intel’s lead product for their 20A process node – their first node implementing RibbonFETs and PowerVia backside power delivery – so its launch will be an important moment for the company. And while the postponement of Innovation won’t impact those launches, it means that Intel won’t have access to the same stage or built-in audience that comes with hosting your own trade show. Never mind the lost opportunities for software developers, who are the core audience for the show.

Officially, the show is just postponed. But given the lead time needed to reserve the San Jose Convention Center and similar venues, it’s unclear whether Intel will be able to host a show before the second half of 2025 – at which point we’d be closer to Innovation 2025, making Innovation 2024 de facto cancelled.

In the meantime, the company has already announced that they’ll be launching Lunar Lake at IFA in Germany in September. So that remains the next big trade show for Intel’s client chip group.

Microchip Demonstrates Flashtec 5016 Enterprise SSD Controller

August 8, 2024 at 21:30

Microchip recently announced the availability of their second PCIe Gen 5 enterprise SSD controller - the Flashtec 5016. Like the 4016, this is also a 16-channel controller, but there are some key updates:

  • PCIe 5.0 lane organization: Operation in x4 or dual independent x2 / x2 mode in the 5016, compared to the x8, or x4, or dual independent x4 / x2 mode in the 4016.
  • DRAM support: Four ranks of DDR5-5200 in the 5016, compared to two ranks of DDR4-3200 in the 4016.
  • Extended NAND support: 3200 MT/s NAND support in the 5016, compared to 2400 MT/s NAND in the 4016.
  • Performance improvements: The 5016 is capable of delivering 3.5M+ random read IOPS compared to the 3M+ of the 4016.

Microchip's enterprise SSD controllers give SSD vendors a high level of flexibility by providing them with significant horsepower and accelerators. The 5016 includes Cortex-A53 cores for SSD vendors to run custom applications relevant to SSD management. Compared to the Gen4 controllers, the CPU cluster has two additional cores. The DRAM subsystem includes ECC support (both out-of-band and inline, as desired by the SSD vendor).

At FMS 2024, the company demonstrated an application of the neural network engines embedded in the Gen5 controllers. Controllers usually employ a 'read-retry' operation with altered read-out voltages for flash reads that do not complete successfully. Microchip implemented a machine learning approach to determine the read-out voltage based on the health history of the NAND block using the NN engines in the controller. This approach delivers tangible benefits for read latency and power consumption (thanks to a smaller number of errors on the first read).
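As a purely hypothetical illustration of the concept (not Microchip's firmware or model), a sketch along these lines would predict a starting read-voltage offset from block-health statistics and fall back to a conventional retry table only if the prediction misses; the feature names, coefficients, and offsets below are invented for illustration.

```python
# Conventional read retry: step through a fixed table of read-voltage offsets.
RETRY_TABLE_MV = [0, -20, -40, -60, 20, 40, 60]

# Toy linear predictor standing in for the controller's NN engine; features
# and coefficients are made up purely for illustration.
def predict_offset_mv(pe_cycles: int, read_disturb: int, retention_days: int) -> int:
    return round(-0.005 * pe_cycles - 0.01 * read_disturb - 0.2 * retention_days)

def read_page(read_fn, block_health):
    """Try the predicted offset first, then fall back to the fixed table."""
    for offset_mv in [predict_offset_mv(*block_health)] + RETRY_TABLE_MV:
        data, ok = read_fn(offset_mv)
        if ok:                      # fewer failed attempts -> lower latency/power
            return data
    raise IOError("page uncorrectable even after read retry")
```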

The 4016 and 5016 come with a single-chip root of trust implementation for hardware security. A secure boot process with dual-signature authentication ensures that the controller firmware is not maliciously altered in the field. The company also highlighted the advantages of their controller's implementation of SR-IOV, flexible data placement, and zoned namespaces, along with their 'credit engine' scheme for multi-tenant cloud workloads. These aspects were also covered in other demonstrations.

Microchip's press release included quotes from the usual NAND vendors - Solidigm, Kioxia, and Micron. On the customer front, Longsys has been using Flashtec controllers in their enterprise offerings along with YMTC NAND. It is likely that this collaboration will continue further using the new 5016 controller.

Western Digital Previews M.2 2280 PCIe 5.0 x4 NVMe Client SSDs: 15GBps at Under 7 Watts

August 8, 2024 at 16:00

Western Digital's FMS 2024 demonstrations included a preview of their upcoming PCIe 5.0 x4 M.2 2280 NVMe SSDs for mobile workstations and consumer desktops. The Gen 5 client SSD market has been dominated by solutions based on Phison's E26 controller. The first generation products launched with slower NAND flash, while the more recent ones have exceeded the 14 GBps barrier by utilizing Micron's 2400 MT/s 232L 3D TLC. Western Digital has been conservative over the last year or so by focusing more on the mainstream / mid-range market in terms of new product introductions (such as the WD Blue SN5000, WD_BLACK SN770M, and the WD Blue SN580). Their SSD lineup is due for an update, with Gen 5 drives sorely missed, and the SSDs being demonstrated at FMS 2024 will end up providing just that.

Western Digital's technology demonstrations in this segment involved two different M.2 2280 SSDs - one for the performance segment, and another for the mainstream market. They both utilize in-house controllers - while the performance segment drive uses an 8-channel controller with DRAM for the flash translation layer, the mainstream one utilizes a 4-channel DRAM-less controller. Both drives being benchmarked live were equipped with BiCS8 218-layer 3D TLC.

Western Digital is touting the power efficiency of their platform as a key differentiator, promising south of 7W (performance drive) and 5W (mainstream DRAM-less drive) for the complete SSD under stressful traffic. This makes the drives suitable for use in mobile workstations, and a good fit for desktops as well.

Demonstrated performance numbers indicate almost 15 GBps sequential reads and 2M+ random read IOPS for the performance drive, and 10.7 GBps sequential reads for the mainstream version. Western Digital might have missed the Gen 5 bus as it started out slowly. However, the technology demonstrations with the in-house controller and NAND indicate that WD has caught up just as the Gen 5 market is about to take off.

Imec Successfully Demonstrates High-NA Lithography for Logic and DRAM Patterning for First Time

August 8, 2024 at 14:00

Imec and ASML have announced that the two companies have printed the first logic and DRAM patterns using ASML's experimental Twinscan EXE:5000 EUV lithography tool, the industry's first High-NA EUV scanner. The lithography system achieved resolution that is good enough for 1.4nm-class process technology with just one exposure, which confirms the capabilities of the system and that development of the High-NA ecosystem remains on-track for use in commercial chip production later this decade.

"The results confirm the long-predicted resolution capability of High NA EUV lithography, targeting sub 20nm pitch metal layers in one single exposure," said Luc Van den hove, president and CEO of imec. "High NA EUV will therefore be highly instrumental to continue the dimensional scaling of logic and memory technologies, one of the key pillars to push the roadmaps deep into the ‘angstrom era'. These early demonstrations were only possible thanks to the set-up of the joint ASML-imec lab allowing our partners to accelerate the introduction of High NA lithography into manufacturing."

The successful test printing comes after ASML and Imec have spent the last several months laying the groundwork for the test. Besides the years required to build the complex scanner itself, engineers from ASML, Imec, and their partners needed to develop newer photoresists, underlayers, and reticles. Then they had to take an existing production node and tune it for High-NA EUV tools, including doing optical proximity correction (OPC) and tuning etching processes.

The culmination of these efforts was that, using ASML's pre-production Twinscan EXE:5000 system, Imec was able to successfully pattern random logic structures with 9.5nm dense metal lines, which corresponds to a 19nm pitch and sub-20nm tip-to-tip dimensions. Similarly, Imec also set new high marks in feature density in other respects, including patterning of 2D features at a 22nm pitch, and printing random vias with a 30nm center-to-center distance, demonstrating high pattern fidelity and critical dimension uniformity.

The overall result is that Imec's experiments have proven that ASML's High-NA scanner is delivering on its intended capabilities, printing features at a fine enough resolution for fabricating logic on a 1.4nm-class process technology – and all with a single exposure. The latter is perhaps the most important aspect of this tooling, as the high cost and complexity of the High-NA tool itself (said to be around $400 million) is intended to be offset by being able to return to single-patterning, which allows for higher tool productivity and fewer steps overall.

Imec hasn't just been printing logic structures, either; the group successfully patterned DRAM designs as well, printing both a storage node landing pad alongside the bit line periphery for memory in a single exposure. As with their logic tests, this would allow DRAM designs to be printed in just one exposure, reducing cycle times and eventually costs.


9.5nm random logic structure (19nm pitch) after pattern transfer

"We are thrilled to demonstrate the world's first High NA-enabled logic and memory patterning in the joint ASML-imec lab as an initial validation of industry applications," said Steven Scheer, senior vice president of compute technologies & systems/compute system scaling at imec. "The results showcase the unique potential for High NA EUV to enable single-print imaging of aggressively-scaled 2D features, improving design flexibility as well as reducing patterning cost and complexity. Looking ahead, we expect to provide valuable insights to our patterning ecosystem partners, supporting them in further maturing High NA EUV specific materials and equipment."

Silicon Motion SM2322 USB 3.2 Gen 2x2 Native Controller: Extended QLC Support for 8 TB PSSDs

August 8, 2024 at 12:00

Silicon Motion's SM2320 native USB 3.2 Gen 2x2 controller for USB flash drives and portable SSDs has enjoyed great market success with a large number of design wins over the last few years. Silicon Motion proudly displayed a selection of products based on the SM2320 on the show floor at FMS 2024.

The SM2320 went into mass production in Q3 2021. Since then, the NAND flash market has seen considerable change. QLC is becoming more and more reliable and common, leading to the launch of high-capacity cost-effective 4 TB and 8 TB SSDs. Newer NAND generations with flash operating at higher speeds have also made an appearance.

The SM2320, fabricated in TSMC's 28nm node, supported four channels of NAND flash running at up to 800 MT/s. The new SM2322 uses the same process node and retains support for the same number of flash channels and chip enables (8 CEs per channel). However, the NAND can now operate at up to 1200 MT/s.

The SM2322 also improves the QLC support, thanks to the implementation of a better ECC scheme. While the SM2320 opted for a 2KB LDPC implementation, the SM2322 goes in for a 4KB LDPC solution. The use of a larger region enables extension of the NAND's useful life.

The SM2322 and SM2320 packages are similar in size, and Silicon Motion expects PSSD designs using the SM2320 to adopt the SM2322 with different NAND (higher capacity / speeds) using the same enclosure. Products based on the SM2322 are expected to appear in the market before the end of the year.

Silicon Motion SM2508 PCIe 5.0 x4 NVMe SSD Controller Set for Mass Production

August 7, 2024 at 21:00

Silicon Motion has been teasing their SM2508 client SSD controller for more than a year now at various trade shows. The controller is finally set for mass production, just in time as the mainstream segment of the Gen 5 SSD market is poised to take off. Silicon Motion expects SSDs based on the SM2508 to be available for purchase by the end of the year.

At FMS 2024, the company was reusing the same information cards seen at Computex in June. The specifications of the SM2508 from our Computex coverage are reproduced here.

Silicon Motion NVMe Client SSD Controller Comparison

| | SM2508 | SM2264 | SM2268XT2 | SM2269XT |
|---|---|---|---|---|
| Market Segment | High-End | High-End | Mainstream | Mainstream |
| Manufacturing Process | 6nm | 12nm | 12nm | 12nm |
| CPU Cores | 4x Cortex R8 | 4x Cortex R8 | 2x Cortex R8 | 2x Cortex R8 |
| Error Correction | 4K+ LDPC | 4K LDPC | 4K+ LDPC | 4K LDPC |
| DRAM | DDR4, LPDDR4X | DDR4, LPDDR4X | No | No |
| Host Interface | PCIe 5.0 x4 | PCIe 4.0 x4 | PCIe 4.0 x4 | PCIe 4.0 x4 |
| NVMe Version | NVMe 2.0 | NVMe 1.4 | NVMe 2.0 | NVMe 1.4 |
| NAND Channels, Interface Speed | 8 ch, 3600 MT/s | 8 ch, 1600 MT/s | 4 ch, 3600 MT/s | 4 ch, 1600 MT/s |
| Sequential Read | 14.5 GB/s | 7.5 GB/s | 7.4 GB/s | 5.1 GB/s |
| Sequential Write | 14 GB/s | 7 GB/s | 6.7 GB/s | 4.8 GB/s |
| 4KB Random Read IOPS | 2500k | 1300k | 1200k | 900k |
| 4KB Random Write IOPS | 2500k | 1200k | 1200k | 900k |

Gen 5 SSDs in the consumer client market are currently all based on Phison's E26 controller. The appearance of newer platform solutions for SSD vendors is bound to be good from both an end-user pricing and adoption perspective.

Solidigm 122 TB Enterprise QLC SSD Announced for Early 2025 Release

August 7, 2024 at 19:30

Solidigm's D5-P5336 61.44 TB enterprise QLC SSD released in mid-2023 has seen unprecedented demand over the last few quarters, driven by the insatiable demand for high-capacity storage in AI datacenters. Multiple vendors have recognized and started preparing products to service this demand, but Solidigm appears to have taken the lead in actual market availability.

At FMS 2024, Solidigm previewed a U.2 version of their upcoming 122 TB enterprise QLC SSD. The proof-of-concept Gen 4 drives were running live in a 2U server, and Solidigm is preparing them for an early 2025 release.

Given the capacity play, Solidigm will be relying on QLC technology. However, the company was coy about confirming the NAND generation used in the product.

Floating gate architecture retains programmed voltage levels for a longer duration compared to charge trap, allowing QLC implementation
Source: The Advantages of Floating Gate Technology (YouTube)

The 61.44 TB D5-P5336 currently utilizes Solidigm's 192L 3D QLC based on the floating gate architecture. This has a distinct advantage for QLC endurance compared to the charge trap architecture also available to Solidigm from SK hynix. That said, SK hynix's 238L NAND also has a QLC avatar, which gives Solidigm the flexibility to use either NAND for the production version of the 122 TB drive. Solidigm expects to confirm this by the end of the year in preparation for volume shipment in the first half of 2025.

Corsair Transitions to Cybenetics Certification for Power Supplies

August 7, 2024 at 18:00

Corsair, a prominent figure in PC components, has announced a strategic shift in its approach to power supply unit (PSU) certifications. The company is dropping the widely recognized 80 PLUS certification in favor of the newer but more comprehensive Cybenetics certification.

According to the press release, the primary reason for Corsair’s move to Cybenetics certifications lies in the program's dual focus on both energy efficiency and noise levels. While the 80 PLUS certification has been a standard in the industry for decades, it exclusively measures energy conversion efficiency at four load levels (10%, 20%, 50%, 100%). Despite its long-standing presence, the 80 PLUS program has not seen significant updates in over 15 years, which limits its ability to provide a holistic view of PSU performance.

On the other hand, Cybenetics offers a more nuanced approach. It evaluates PSUs across multiple load levels and includes noise level assessments. This dual certification system rates efficiency on a familiar scale (Bronze to Titanium, plus a higher certification called Diamond) and noise levels from Standard (noisy) to A++ (virtually silent). By incorporating noise measurements, Cybenetics provides a more comprehensive overview of PSU performance, addressing an important aspect often overlooked by other certification programs. Cybenetics also enforces Power Factor, 5VSB efficiency, and Vampire Power thresholds, all important to the overall efficiency of a PSU.

Even though they're dropping 80 PLUS in favor of Cybenetics, Corsair is being highly diplomatic with their press release. They even suggest that the reader should not disregard either in favor of the other.

Our opinion is a bit harsher: the simplicity of the 80 PLUS certification program has led to two major flaws. First, manufacturers have primarily focused on maximizing efficiency at three specific load points, neglecting overall performance. Second, the majority of PSUs have clustered around the 80 PLUS Gold and Platinum certifications, with very few achieving the stringent Titanium level. This results in hundreds of PSUs with significantly different technical capabilities sharing the same certification badge, creating a misleading uniformity that fails to reflect true performance disparities.

Furthermore, almost every PSU platform that has been released over the past 15 years would achieve 80 PLUS Gold status or greater, with very few products falling to the 80 PLUS Bronze certification and almost zero landing at the 80 PLUS White and 80 PLUS Silver levels, making the three lowermost certifications practically defunct. Cybenetics' dual certification certainly does not solve every issue and cannot fully assess everything there is to assess about a PSU, but it certainly makes much more information available to the user and allows users to at least factor in acoustic performance when purchasing a product.

The issue that seems to remain is that, due to the laxer requirements at that input voltage, manufacturers have almost always certified their units at 115 VAC, resulting in myriad units carrying a certification badge whose requirements they would fail at an input voltage of 230 VAC. Unfortunately, this is also true for the Cybenetics standard, as the badges do not inform the user about the input voltage the certification was attained with. However, as the Cybenetics standard revolves around average efficiency rather than efficiency at specific load points, the majority of PSUs should meet their efficiency thresholds at both input voltages rather than only at 115 VAC.
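As a rough, hypothetical illustration of why an average-based rating behaves differently from a spot-check rating, consider the toy Python comparison below; the efficiency figures are invented and the simple mean only stands in for Cybenetics' actual weighting and methodology.

```python
# Hypothetical efficiency curve of a PSU, sampled every 10% of rated load.
load_pct = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
eff_pct  = [84, 89, 91, 92, 92, 91, 90, 89, 88, 87]   # invented values

# 80 PLUS-style spot check: only a few fixed load points matter.
spot = {p: eff_pct[load_pct.index(p)] for p in (20, 50, 100)}

# Average-based view across the whole curve (stand-in for Cybenetics' method).
average = sum(eff_pct) / len(eff_pct)

print("Spot-check points:", spot)                     # {20: 89, 50: 92, 100: 87}
print(f"Average across the curve: {average:.1f}%")    # 89.3%
```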

Certification processes can be costly for manufacturers. By opting for the Cybenetics program, Corsair possibly aims to get the most value from its certification investments. Cybenetics offers more detailed and up-to-date testing methodologies, ensuring that the data provided is more reflective of real-world usage scenarios. In any case, Corsair’s shift to Cybenetics certification marks a significant development in the evaluation of PSUs and has the potential to create waves in the market.

Ultimately, this move has the potential to disrupt the status quo. With Corsair's sheer size and influence in the larger power supply market, this could very well prompt other manufacturers to follow suit, and possibly even reshape consumer expectations and benchmarks for PSU quality.
