Kicking off yet another earnings season, we once again start with Intel. The reigning 800lb gorilla of the chipmaking world is reporting its Q4 2021 and full-year financial results, closing the book on an eventful 2021 for the company. The first full year of the pandemic has seen Intel once again set revenue records, making this the sixth record year in a row, but it’s also clear that headwinds are approaching for the company, both with respect to shifts in product demand and the sizable investments needed to build the next generation of leading-edge fabs.
First announced as part of NVIDIA’s CES 2022 presentation, the company’s new GeForce RTX 3050 desktop video card is finally rolling out to retailers this month. The low-end video card is being positioned to round out the bottom of NVIDIA’s product stack, offering a modern, Ampere-based video card for a more entry-level market. All of this comes as the PC video card market remains in chaos due to a combination of the chip crunch and crypto miner demand, so any additional cards are most welcome – and likely to be snapped up rather quickly, even at an MSRP of $249 (and higher).
Users buying a DDR5 memory kit who want to adhere to Intel's specifications will buy a DDR5-4800 kit. Through XMP, though, there are faster kits available - we've even tested G.Skill's DDR5-6000 kit in our memory scaling article. But going above and beyond that, there's overclocking.
Back in November 2021, extreme overclocker 'Hocayu' managed to achieve DDR5-8704 using G.Skill's Trident Z5 DDR5-6000 memory. As always with these records, they are made to be broken, and fellow Hong Kong native lupin_no_musume has managed to surpass this with an impressive DDR5-8888, also using G.Skill Trident Z5 memory, along with an ASUS ROG Maximus Z690 Apex motherboard, Intel's Core i9-12900K processor, and some liquid nitrogen.
Without trying to sound controversial, extreme overclocking isn't as popular as it once was. That isn't to say it doesn't have a purpose: using sub-ambient cooling methods such as liquid nitrogen, dry ice, and even liquid helium can boost frequencies on processors and graphics cards well beyond what's achievable with standard cooling. Doing this not only shows the potential of the hardware, but it also gives companies 'bragging rights' as the proud owners of overclocking world records. In a similar fashion, car companies boast about Nürburgring records in a variety of categories, or about the tuning of their mainstream offerings.
This not only pushes the boundaries of what DDR5 memory is capable of, but it's also an impressive feat given that DDR5 is relatively nascent. For reference, going from DDR5-6000 to DDR5-8888 represents an overclock of around 48% over the XMP profile, and a crazy 85% overclock over the JEDEC specification of DDR5-4800. It is worth noting that this is an all-out data rate record regardless of latency, which in this case was increased to a CAS latency of 88 from the standard 40, for stability. Going back to the car analogy, this would be akin to speed records on the drag strip, rather than on the oval.
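Those percentages are easy to verify from the data rates themselves; a quick sketch (figures rounded, as in the text):

```python
# Rounded overclock percentages for the DDR5-8888 record.
# DDR5-XXXX naming gives the data rate in MT/s.
jedec = 4800   # baseline JEDEC DDR5 specification
xmp = 6000     # the G.Skill kit's rated XMP profile
record = 8888  # the validated record

def overclock_pct(base: int, achieved: int) -> float:
    """Percentage increase of `achieved` over `base`."""
    return (achieved - base) / base * 100

print(f"vs XMP:   {overclock_pct(xmp, record):.0f}%")    # 48%
print(f"vs JEDEC: {overclock_pct(jedec, record):.0f}%")  # 85%
```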
Screenshot from G.Skill Trident Z5 DDR5-8888 CPU-Z validation (link)
While speeds of DDR5-8888 are not attainable in the form of purchasable memory kits for Alder Lake, G.Skill did unveil a retail kit that tops out at DDR5-7000. We also reported back in November 2021 that SK Hynix was planning for DDR5-8400 at 1.1 volts, but that's actually part of the extended JEDEC specifications for when processors get verified at that speed.
Khronos this morning is taking the wraps off of Vulkan 1.3, the newest iteration of the group’s open and cross-platform API for graphics programming.
Vulkan 1.3 follows Khronos’s usual two-year release cadence for the API, and it comes at a critical juncture for the API and its future development. Vulkan has been a full and official specification since 2016, turning 6 years old this year. This has given the API plenty of time to mature and have its kinks worked out, as well as to be adopted by software and hardware developers alike. But it also means that with the core aspects of the API having been hammered out, where to go next has become less obvious – and less harmonious. And with the API in use in everything from smartphones to high-end PCs, Vulkan is beginning to fragment at points thanks to the wide range of capabilities in devices.
As a result, for Vulkan 1.3, Khronos and its consortium members are taking aim at the future of the API, particularly from a development standpoint. Vulkan is still in a healthy place now, but in order to keep it that way, Khronos needs to ensure that Vulkan has room to grow with new features and functionality, but all without leaving behind a bunch of perfectly good hardware in the process. Thankfully, this isn’t a new problem for the consortium – it’s something virtually every standard faces if it lives long enough to become widely used – so Khronos is hitting the ground running with some further refinements to Vulkan.
Vulkan 1.3 Core
But before we get into Khronos’s fragmentation-fighting efforts, let’s first talk about what’s coming to the Vulkan 1.3 core specification. The core spec covers all of the features a Vulkan implementation is required to support, from the most basic smartphone to the most powerful workstation. As a result it has a somewhat narrow scope in terms of graphical features, but as the name implies, it’s the common core of the API.
As with previous versions of the spec, Khronos is targeting this to work on existing Vulkan-compliant hardware. Specifically, Vulkan 1.3 is designed to work on OpenGL ES 3.1 hardware, meaning that of the new features being rolled into the core spec, none of them can be beyond what ES 3.1 hardware can do.
Consequently, Vulkan 1.3’s core spec isn’t focused on adding new graphical features or the like. By design, graphical feature additions are handled by extensions. Instead, the 1.3 core spec additions are largely a quality-of-life update for Vulkan developers, with a focus on adding features that simplify some aspect of the rendering process or add more control over it.
Altogether, Khronos is moving 23 existing extensions into the Vulkan 1.3 core spec. Most of these extensions are very much inside-baseball fodder for graphics programmers, but there are a couple of highlights. These include the integer dot product function, which is already widely used for machine learning inference on GPUs, as well as support for dynamic rendering. These functions already exist as extensions – so many developers can and are already using them – but by moving them into the core spec, they are now required for all Vulkan 1.3 implementations, opening them up to a wider array of developers.
But arguably the single most important addition coming to Vulkan isn’t an extension being promoted into the core specification. Rather, it’s entirely new functionality, in the form of feature profiles.
Vulkan Profiles: Simplifying Feature Sets and Roadmaps
Up until now, Vulkan has not offered a concept of feature levels or other organizational grouping for additional feature sets. Beyond the core specification, everything in Vulkan is optional, all 280+ extensions. Meaning that for developers who are building applications that tap into features that go beyond the core spec – which has quickly become almost everything not written for a smartphone – there hasn’t been good guidance available on what extensions are supported on what platforms, or even what extensions are meant to go together.
The freedom to easily add extensions to Vulkan is one of the standard’s greatest strengths, but it’s also a liability if it’s not done in an organized fashion. And with the core spec essentially locked at the ES 3.1 level for the time being, this means that the number of possible and optional extensions has continued to bloom over the last 6 years.
So in an effort to bring order to the potential chaos, as well as to create a framework for planning future updates, Khronos is adding profiles to the Vulkan standard.
Profiles, in a nutshell, are precisely defined lists of supported features and formats. Profiles don’t define any new API calls (that’s done by creating new extensions outright), so they are very simple conceptually. But, absent any kind of way to define feature sets, they are very important going forward for Vulkan.
The power of profiles is that they allow for 280+ extensions to be organized into a much smaller number of overlapping profiles. Rather than needing to check to see if a specific PC video card supports a given extension, for example, a developer can just code against a (theoretical) “Modern Windows PC” profile, which in turn would contain all of the extensions commonly supported by current-generation PCs. Or alternatively, a mobile developer could stick to an Android-friendly profile, and quickly see what features they can use that will be supported by most devices.
At a high level, profiles are the solution to the widening gap between baseline ES 3.1 hardware, and what current and future hardware can do. Rather than risk fragmenting the Vulkan specification itself (and thus ending up with an OpenGL vs. OpenGL ES redux), profiles allow Vulkan to remain whole while giving various classes and generations of hardware their own common feature sets.
In line with the open and laissez faire nature of the Khronos consortium, profiles are not centrally controlled and can be defined by anyone, be it hardware devs, software devs, potato enthusiasts, or even Khronos itself. Similarly, whether a hardware/platform vendor wants to support a given profile is up to them; if they do, then they will need to make sure they expose the required extensions and formats. So this won’t be as neat and tidy as, say, Direct3D feature levels, but it will still be functional while offering the flexibility the sometimes loose consortium needs.
That said, Khronos’s expectation is that we should only see a limited number of widely used profiles, many of which they’ll be involved with in some fashion. So 280 extensions should not become 280 profiles, at least as long as the hardware vendors can find some common ground across their respective platforms.
Finally, on a technical level, it’s worth noting that profiles aren’t just a loose list of features, but they do have technical requirements. Specifically, profiles are built as JSON lists, which along with providing a means to check profile compatibility, also open the door to things like generating human-readable versions of profiles. It’s a small distinction, but it will help developers quickly implement profile support in a generic fashion, relying on the specific JSON lists to guide their programs the rest of the way.
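To illustrate the idea - using a hypothetical, simplified schema, not the actual Khronos profile JSON format - a profile check boils down to set containment: a device supports a profile if it exposes every extension the profile requires:

```python
import json

# Hypothetical, simplified profile definition. The real Khronos
# profile schema is more detailed (API versions, features, formats).
profile_json = """
{
  "name": "EXAMPLE_baseline_2021",
  "api-version": "1.0",
  "required-extensions": [
    "VK_KHR_sampler_ycbcr_conversion",
    "VK_KHR_maintenance1"
  ]
}
"""

def supports_profile(profile: dict, device_extensions: set) -> bool:
    # The device supports the profile if every required extension
    # appears in the device's reported extension list.
    return set(profile["required-extensions"]) <= device_extensions

profile = json.loads(profile_json)
device = {"VK_KHR_sampler_ycbcr_conversion",
          "VK_KHR_maintenance1",
          "VK_KHR_swapchain"}
print(supports_profile(profile, device))  # True
```

In practice this kind of check is handled by the Vulkan SDK tooling rather than hand-rolled application code, but the machine-readable JSON is what makes that tooling (and human-readable renderings of profiles) possible.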
Profiles are also not limited to being built upon Vulkan 1.3. Despite being introduced at the same time as 1.3, they are actually a super-feature of sorts that can work with previous Vulkan versions, as all of the heavy lifting is being done at the application and SDK level. So it will be possible to have a profile that only calls for a Vulkan 1.0 implementation, for example.
Google’s Android Baseline 2021 Profile
The first profile out the door, in turn, comes from Google. The Android maker is defining a Vulkan profile for their market that, at a high level, will help to better define and standardize what features are available on most Android devices.
Interestingly, Google’s profile is built upon Vulkan 1.0, and not a newer version of Vulkan. From what we’re told, there are features in the Vulkan 1.1 core specification that are still not widely supported by mobile devices (even with the ES 3.1 hardware compatibility goal), and as a result, any kind of common progression with Vulkan on Android has become stalled. So since Google can’t get Vulkan 1.1/1.2/1.3 more widely supported across Android devices, the company is doing the next best thing and using a profile to define a bunch of common post-1.0 extensions that are supported by the current crop of devices.
The net result of this is the Android Baseline 2021 Profile. By establishing a baseline profile for the ecosystem, Google is aiming to not only make newer functionality more accessible to developers, but to simplify graphics programming in the process. Essentially, the Baseline 2021 Profile is a fix for existing fragmentation within the Android ecosystem by establishing a reasonable set of commonly supported features and formats.
Of particular note, Google’s profile calls for support for both ETC and ASTC texture compression formats. Sample shading and multi-sample interpolation are on the list as well. Given that this is a baseline specification, there aren’t any high-concept next-generation features contained within the profile. But over time, that will change. Google has already indicated that they will be developing a 2022 profile for later this year, and will continue adding further baseline profiles as the situation warrants.
Finally, Google’s use of profiles is also a solid example of taking advantage of the application-centric nature of profiles. According to Google, developers will be able to use profiles on the “vast majority” of Android devices without the need for over-the-air updates for those devices. Since profiles are handled at the application/SDK level, all the device itself needs to present are the necessary Vulkan extensions, which in accordance with a baseline specification are already present and supported in the bulk of Android devices.
Vulkan Roadmap 2022: Making Next-Generation Features Common Features
Last but certainly not least, the other big development to stem from the addition of profiles is a renewed path forward for developing and adopting new features for next-generation hardware. As mentioned previously, Vulkan has until now lacked a way to define feature sets for more advanced (non-core) features, which profiles are finally resolving. As a result, Khronos and the hardware vendors finally have the tools they need to establish baselines for not just low-end hardware, but high-end hardware as well.
In other words, profiles will provide the means to finally create some common standards that incorporate next-generation hardware and the latest programming features.
Because of Vulkan core’s ES 3.1 hardware requirements, there is a significant number of advanced features that have remained optional extensions. This includes everything from ray tracing and sample rate shading to more basic features like anisotropic filtering, multiple processor scheduling, and bindless resources (descriptor indexing). To be sure, these are all features that developers have had access to for years as extensions, but lacking profiles, there has been no assurance for developers that a given feature is going to be in all the platforms they want to target.
To that end, Khronos and its members have developed the Vulkan Roadmap 2022, which is both a roadmap of features they want to become common, as well as a matching profile to go with the roadmap. Conceptually, the Vulkan Roadmap 2022 feature set can be thought of as the inverse of Google’s baseline profile; instead of basing a profile around low-end devices, Roadmap 2022 excises low-end devices entirely in order to focus on common features found in newer hardware.
Roadmap 2022 is based around features found in mid-range and high-end devices, mobile and PC alike. So while it significantly raises the bar in terms of features supported, it’s still not leaving mobile devices behind entirely – nor would it necessarily be ideal to do so. In practice, this means that Roadmap 2022 is slated to become the common Vulkan feature set for mid-range and better devices across the hardware spectrum.
Meanwhile, adoption of Roadmap 2022 should come very quickly since it’s based around features and formats already supported in existing hardware. AMD and NVIDIA have already committed to enabling support for the necessary features in their Vulkan 1.3 drivers, which are out today in beta and should reach maturity in a couple of months. In fact, the biggest hold-up to using profiles is Khronos itself – the Vulkan SDK won’t get profile support until next month.
Finally, according to Khronos, Roadmap 2022 is just the start of the roadmapping process for the group. After getting caught up with current-generation hardware with this year’s profile, the group will be developing longer-term roadmaps for Vulkan profiles. Specifically, the group wants to get far enough ahead of the process that profiles are being planned out years in advance, when the next generation of hardware is still under development. This would enable Khronos to have a complete pipeline of profiles in the works, giving hardware and software developers a roadmap for the next couple of years of Vulkan features.
Ultimately, having a roadmap will serve to help keep the development of advanced features for Vulkan on-track. Freed from having to support the oldest of hardware, the Vulkan group members will be able to focus on developing and implementing new features, knowing exactly when support is expected/planned/desired to arrive. Up until now the planning process has been weighed down by the lack of a timeline for making new features a requirement (de jure or otherwise), so having a formal process to standardize advanced features will go a long way towards speeding up and simplifying that process.
At CES this year, Intel officially announced its expanded Alder Lake lineup including the performance-laptop focused H-Series processors, which traditionally fit in the 45-Watt range. Today we finally get to take a look at the 12th generation H-Series processors, the first mobile incarnation of Alder Lake, and see how Intel's fastest mobile platform stacks up to not only Intel’s previous 11th generation Tiger Lake platform, but also AMD’s Ryzen 5000 series.
- Interview with Intel’s Dan Ragland, Head of Overclocking: Tuning Alder Lake For Performance
The topic of overclocking has been an interesting one to track over the years. Over a decade ago, when dealing with 2-4 core processors, an effective overclock gave a substantial performance uplift, often allowing a $200 processor to perform like one that cost $999. Since then, however, not only have core counts increased, but companies like Intel have also gotten better at understanding their silicon, and are able to ship it out of the box almost at the silicon limit anyway. So what use is overclocking? We turned to Dan Ragland, who runs Intel’s Overclocking Lab in Hillsboro, Oregon, to find out what overclocking now means for Intel, what it means for Alder Lake, and how Intel is going to approach overclocking in the future.
With fab expansions on tap across the entire semiconductor industry, Intel today is laying out their own plans for significantly increasing their production capacity by announcing their intention to build a new $20 billion fab complex in Ohio. With the paperwork already inked and construction set to begin in late 2022, Intel will be building two new leading-edge fabs in their new Ohio location to support future chip needs. And should further demand call for it, the Ohio complex has space to house several more fabs.
Intel’s announcement follows ongoing concerns about chip fab capacity and national security, as, like other chipmakers, Intel is looking to expand their capacity in future years amidst the current chip crunch. All the while, the United States government has become increasingly mindful of how much chip production takes place in geopolitically tricky Taiwan, placing additional pressure on firms to build additional fabs within the US. To that end, Intel has been not-so-secretly undertaking a search to find a good location for a new fab campus, and they have finally found their answer in Ohio.
The new site, Intel’s first new manufacturing site in 40 years, is located in New Albany, Ohio, just outside of Columbus. Up until now, all of Intel’s major chip fab sites have been in the western United States – Oregon, Arizona, and at one point, Silicon Valley – so the Ohio site is a significant move for the company. All told, the Ohio “mega-site”, as Intel likes to call it, covers nearly 1000 acres. And while Intel is only initially planning for two fabs, the site offers plenty of room to grow, with enough space for a total of 8 fabs.
The immediate goal of the company – and the crux of today’s announcement – revolves around the building of two new leading-edge fabs at the Ohio location. According to Intel, these two fabs will begin construction late this year, with production coming online in 2025. The company isn’t formally stating what the initial process node will be – instead saying that it will be using the "industry's most advanced transistor technologies" – however if the company is indeed building truly bleeding-edge fabs, then 2025 lines up with Intel’s 18A process, which will be 4 generations newer than what Intel is using now (Intel 7).
Altogether, Intel expects the project to cost about $20 billion, which is similar to what Intel will be spending on its two new Arizona fabs, which were announced just under a year ago. And further down the line, should Intel opt to fill the rest of the property with the other 6 fabs that the site can support, the company expects that the total price tag could reach nearly $100 billion. Ultimately, the company is making it clear that they are priming the site not just to meet their mid-term production needs with the initial two fabs, but are making sure to have the space ready for further capacity expansion over the long term.
As to whether Intel eventually builds those further 6 fabs, that will depend on a few factors. Key among these will be demand from Intel Foundry Services clients; while Intel will be using some of the Ohio site’s capacity for their own needs, the site will also be used to fab chips for IFS customers. If Intel’s bid to break into the contract fab business is successful, and the company is able to woo over additional clients/orders, then they will need to build additional fabs to meet that demand.
Also hanging in the balance is what the US Government opts for, both in terms of orders and incentives. The Ohio fabs will be used for domestic production of sensitive chips, as the US looks to secure its supply lines. Meanwhile, the CHIPS for America Act and its $53 billion in incentives will also be a factor. Intel for its part isn’t playing coy about its interest in the CHIPS money, explicitly stating that “The scope and pace of Intel’s expansion in Ohio, however, will depend heavily on funding from the CHIPS Act”. In some respects Intel is taking a bit of a gamble by investing in the Ohio location before any CHIPS funding is approved – on a pure cost basis, overseas production is traditionally cheaper – so there is certainly a political element in announcing these fabs and selecting an Ohio location. And as an added incentive to the US Government, Pat Gelsinger has told Time that Intel would even be interested in bringing some chip packaging, assembly, and testing back to the US if the CHIPS Act were funded, which in turn would allow Intel to do every last step of production within the US.
But more immediately, Intel’s focus is on getting its first two Ohio fabs up and running. Along with building the facilities they’ll need a workforce to operate them, and as a result the company is also pledging $100M over a decade in funding for local educational efforts. As with similar local industry efforts, that investment would be focused on helping local colleges and universities establish semiconductor manufacturing curricula to help train the technical workforce required.
And while outside of Intel’s own investment scope, the creation of their Ohio fab complex means that Intel’s suppliers are also coming along for the ride. According to the company, Air Products, Applied Materials, LAM Research and Ultra Clean Technology have all indicated that they’ll be setting up facilities in the area. All of which the company is using to further underscore the size of the project and the value it brings to the area – and why they deserve that CHIPS Act funding.
Ultimately, the addition of a third US fab site and two more fabs to Intel’s portfolio is the latest step Intel has taken under Pat Gelsinger’s IDM 2.0 strategy. Gelsinger opted to go all-in on having Intel fab chips for themselves and others, and this is the kind of expansion that Gelsinger has been alluding to as necessary to make IDM 2.0 a reality. Taken altogether, Intel now has 4 leading-edge fabs set to come online in the 2024-2025 timeframe, and with any luck on Intel’s part, there will be room for several more to come.
- Intel Has Two Generations of Bitcoin ASIC: BZM1 is Built on 7nm, 137 GigaHash/sec at 2.5 W
It has been noted in the media that at the upcoming ISSCC conference at the end of February, Intel is set to give a talk entitled ‘Bonanza Mine: An Ultra-Low Voltage Energy Efficient Bitcoin Mining ASIC’. The talk has already attracted plenty of attention, as it confirms that Intel is working on blockchain-enabling hardware. Through a number of channels, we’ve been able to acquire more details about this chip ahead of the conference.
Following its announcement back at CES, today AMD is formally launching the entry-level member of its Radeon RX 6000 series of video cards: the Radeon RX 6500 XT. Based on AMD’s new Navi 24 GPU – the first GPU made on TSMC’s N6 process – AMD is broadening their desktop video card lineup by adding a new low-end option. And while the $199 price tag is unlikely to arouse much enthusiasm, the addition of another video card SKU – and one that’s relatively useless for crypto mining – is likely to be a welcome relief for the capacity-constrained discrete video card market.
- Intel Expands 12th Gen Core to Ultraportable Laptops, from 5-cores at 9 W to 14-cores at 28 W
Over the years Intel has prided itself on its ability to provide processors that fit into the ultraportable, professional market. We’re talking thin and light designs with obscene levels of performance and battery life for the form factor. It’s so important to Intel, that over the years they’ve produced several design and validation standards relating to how the best ultraportables should be developed, such as low power displays, the best connectivity standards, and approaching all-day battery life. It surprised me somewhat that Intel didn’t really discuss its next generation of processors for these devices at CES at the beginning of the year, focusing their keynote almost entirely on the 45 W prosumer and workhorse designs instead. To find out about the more mainstream and ultraportable silicon, we had to dig into the back end of our press deck to get details.
With the Unify, we are looking at a premium model with support for DDR5 memory, but without any RGB LED lighting. The MSI MEG Z690 Unify represents MSI's Enthusiast Gaming series, combining the elements of an enthusiast-level motherboard with features designed to let users make the most of the latest controller sets and 12th Gen capabilities. Some of those features include five M.2 slots, support for DDR5-6666 memory, dual 2.5 GbE and Wi-Fi 6E networking, as well as an advertised 21-phase power delivery.
ASRock has been at the forefront of the small form-factor (SFF) PC revolution since the Sandy Bridge days. Starting with the Core HT series in the early 2010s, the company moved on to the Beebox (NUC clones), and recently settled on the DeskMini lineup (based on mini-STX boards). At the 2022 CES, the company is introducing a new SFF PC - the DeskMeet. It is meant to be a step up from the DeskMini, allowing for a custom motherboard, more RAM slots, and space for discrete GPUs.
ASRock is delivering all this in a chassis with an 8L volume, in two configurations - one based on the Intel B660 platform, and another based on the AMD X300 chipset (AM4 socket). The key specifications of the two systems are summarized in the table below.
ASRock DeskMeet SFF PCs - 2022 Lineup

| | DeskMeet B660 | DeskMeet X300 |
|---|---|---|
| CPU | Intel 12th Gen Core processors | AMD AM4 socket Ryzen desktop APUs / CPUs (Ryzen 2000/3000/4000/5000, up to 65W) |
| Cooler | Stock coolers / up to 54mm in height | Stock coolers / up to 54mm in height |
| Chipset | Intel B660 | AMD X300 |
| Memory | 4x DDR4 DIMM slots (up to 128GB) | 4x DDR4 DIMM slots (up to 128GB), ECC / non-ECC, un-buffered |
| Discrete GPU Support | Up to 200mm in length | Up to 200mm in length |
| Networking | 1x RJ-45 Gigabit LAN (Intel I219V), M.2 2230 slot for Wi-Fi + BT module | 1x RJ-45 Gigabit LAN (Realtek RTL8111H), M.2 2230 slot for Wi-Fi + BT module |
| Storage | 3x SATA III 6Gbps, 1x Hyper M.2 2280 (PCIe Gen 4 x4 / SATA III 6Gbps), 1x Hyper M.2 2280 (PCIe Gen 4 x4) | 2x SATA III 6Gbps, 1x Ultra M.2 2280 (PCIe Gen 3 x4) |
| Expansion Slots | 1x PCIe 4.0 x16 | 1x PCIe 3.0 x16 |
| Audio Codec | Realtek ALC897 | Realtek ALC897 |
| Front I/O | 1x Headset, 1x USB 3.2 Gen 1 Type-C, 2x USB 3.2 Gen 1 Type-A, 2x USB 2.0 Type-A | 1x Headset, 1x USB 3.2 Gen 1 Type-C, 2x USB 3.2 Gen 1 Type-A, 2x USB 2.0 Type-A |
| Rear I/O | 1x DP 1.4a, 1x HDMI 2.0a, 2x USB 2.0 Type-A, 2x USB 3.2 Gen 1 Type-A, HD Audio jack (Line In / Speaker / Microphone) | 1x DP 1.4a, 1x HDMI 2.0a, 2x USB 2.0 Type-A, 2x USB 3.2 Gen 1 Type-A, HD Audio jack (Line Out) |
| Power Supply | 500W (80+ Bronze / 550W peak) | 500W (80+ Bronze / 550W peak) |
| Dimensions | 168mm x 219.3mm x 218.3mm | 168mm x 219.3mm x 218.3mm |
While the absence of high-end I/O ports like Thunderbolt 4 and USB 3.2 Gen 2x2 or high-end wired networking is a tad disappointing, ASRock is making up for that by supporting PCIe expansion cards and quad-DIMM configurations for up to 128GB of RAM. ASRock calls these boards -ITX, but they are not truly mini-ITX in size.
The space inside the chassis allows for multiple configurations - with or without a discrete GPU, with the ability to mount multiple 3.5" drives, and so on. The DeskMeet aims to provide as much flexibility and as many features as possible within the constraints dictated by the chipsets.
In other ASRock SFF PC news, the company has also released a new mini-STX platform using the Intel B660 chipset for Alder Lake. The DeskMini B660 retains the chassis design of the previous generations.
Thanks to the use of the B660 chipset, the front Type-C port is now USB 3.2 Gen 2x2 (20 Gbps). The rear Type-C port supports 10 Gbps data transfer along with DP 1.4a and 60W PD. The other interesting aspect is the availability of a PCIe 5.0 M.2 2280 SSD slot, in addition to a PCIe 4.0 M.2 2280 one. An M.2 2230 slot for Wi-Fi and a gigabit Ethernet port are the regular features retained from the previous DeskMini units.
After a couple of years of staid SFF PCs with rather unimpressive updates, ASRock is promising interesting offerings in 2022. No specific launch prices or retail availability timelines were provided for the new systems.
While not the absolute first company in the market to talk about putting different types of silicon inside the same package, AMD’s launch of Ryzen 3000 back in July 2019 was a first in bringing high performance x86 computing through the medium of chiplets. The chiplet paradigm has worked out very well for the company, having high performance cores on optimized TSMC 7nm silicon, while farming the more analog operations to cheaper GlobalFoundries 14nm silicon, and building a high speed interconnect between them. Compared to a monolithic design, AMD ends up using the better process for each feature, smaller chips that afford better yields and binning, and the major cost adder becomes the packaging. But how low cost can these chiplet designs go? I put this question to AMD’s CEO Dr. Lisa Su.
In AMD’s consumer-focused product stack, the only products it ships with chiplets are the high-performance Ryzen 3000 and Ryzen 5000 series processors. These range in price from $199 for the six-core Ryzen 5 3600, up to $799 for the 16-core Ryzen 9 5950X.
Everything else consumer focused is a single piece of silicon, not chiplets. Everything in AMD’s mobile portfolio relies on single pieces of silicon, and those designs are also migrated into desktop form factors as part of AMD’s desktop APU strategy. We’re seeing a clear delineation between where chiplets make financial sense, and where they do not. From AMD’s latest generation of processors, the Ryzen 5 5600X still costs $299 at retail.
One of the issues here is that a chiplet design requires additional packaging steps. The silicon from which these processors are made has to sit on a PCB or substrate, and what you want to do with that substrate can influence its cost. Chiplet designs require high speed connections between chiplets, as well as power and communications to the rest of the system. The act of placing the chiplets on a single substrate also has a cost, as it requires accuracy - even 99% placement accuracy per chiplet means a three-chiplet product suffers roughly a 3% yield loss from packaging, raising costs. Beyond this, AMD has to ship its 14nm dies from New York to Asia first, to package them with the TSMC compute dies, before shipping the final product around the world. That might be reduced in future, as AMD is believed to be building its next-generation chiplet designs entirely within Asia.
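The placement-yield arithmetic above compounds per chiplet. As a rough illustration (the 99% accuracy figure is the article's hypothetical, not a published industry number), the loss can be sketched as:

```python
# Sketch of how per-chiplet placement yield compounds at packaging.
# The 99% per-chiplet figure is illustrative, not a published number.

def package_yield(per_chiplet_yield: float, chiplet_count: int) -> float:
    """Probability that every chiplet on a substrate is placed successfully,
    assuming independent placement events."""
    return per_chiplet_yield ** chiplet_count

loss = 1 - package_yield(0.99, 3)
print(f"3-chiplet package loss at 99% per-chiplet accuracy: {loss:.1%}")
```

A 12-chiplet server part under the same assumption would lose roughly 11% of packages, which is why placement accuracy matters far more as chiplet counts grow.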
Ultimately there has to be a tipping point where simply building a monolithic piece of silicon becomes better for total cost than shipping chiplets around and spending lots of money on new packaging techniques. I put the question to Dr. Lisa Su, acknowledging that AMD doesn’t sell its latest generation below $300: is $300 the realistic tipping point between the chiplet and non-chiplet markets?
Dr. Su explained how in their product design stages, AMD’s architects look at every possible way of putting chips together. She explained that this means monolithic, chiplet, packaging, and process technologies, as the number of potential variables in all of this has direct knock-on effects for supply chain, cost, and availability, as well as the end performance of the product. Dr. Su stated quite succinctly that AMD looks for what is best for performance, power, and cost – and that what you say on the tipping point may be true. That being said, Dr. Su was keen not to directly say this is the norm, detailing that she would expect the dynamic to change in the future as silicon costs rise, as this changes that optimization point. But it was clear in our discussions that AMD is always looking at the variables, with Dr. Su ending on a happy note that at the right time, you’ll see chiplets at the lower end of the market.
Personally, I think it’s quite telling that the market is very malleable to chiplets right now in the $300+ ecosystem. TSMC D0 yields of N7 (and N5) are reportedly some of the industry’s best, which means that AMD’s mobile processors in the ~200 sq mm range can roll off the production line and cater for everything up to that $300 value (and perhaps some beyond). Going bigger brings in die size yield constraints, where chiplets make sense. We’re now at a stage where, if Moore’s Law continues, the question is how much compute we can fit in that 200 sq mm of silicon, and which markets can benefit from it – or whether we get to a point where so many more features are added that silicon sizes increase, necessarily pushing everything down the chiplet route. As part of the discussion, Dr. Su mentioned economies of scale when it comes to packaging, so it will be interesting to see how this dynamic shakes out. But for now it seems, AMD’s way to address the sub-$300 market is going to be with either last generation hardware, or monolithic silicon.
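The die-size yield constraint mentioned above is usually illustrated with a first-order Poisson defect model, where yield falls off exponentially with die area. A minimal sketch, using an assumed defect density of 0.1 defects/cm² (an illustrative value, not a TSMC-published figure):

```python
import math

def poisson_yield(die_area_mm2: float, d0_per_cm2: float) -> float:
    """First-order Poisson defect model: yield = exp(-A * D0),
    with A the die area and D0 the defect density."""
    area_cm2 = die_area_mm2 / 100.0  # convert mm^2 to cm^2
    return math.exp(-area_cm2 * d0_per_cm2)

# Illustrative D0 of 0.1 defects/cm^2 (assumed for the example)
for area in (80, 200, 600):
    print(f"{area:>4} sq mm die: {poisson_yield(area, 0.1):.1%} yield")
```

Under this toy model a ~80 sq mm chiplet yields noticeably better than a ~200 sq mm monolithic APU, and a 600 sq mm monolithic design worse still, which is the basic economic argument for splitting large designs into chiplets.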
This article was updated to clear up some of the language around certainty and conjecture based on rumor.
This morning the PCI Special Interest Group (PCI-SIG) is releasing the much-awaited final (1.0) specification for PCI Express 6.0. The next generation of the ubiquitous bus is once again doubling the data rate of a PCIe lane, bringing it to 8GB/second in each direction – and far, far higher for multi-lane configurations. With the final version of the specification now sorted and approved, the group expects the first commercial hardware to hit the market in 12-18 months, which in practice means it should start showing up in servers in 2023.
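The doubling pattern is easy to make concrete. Taking the article's ~8 GB/s per lane per direction for PCIe 6.0 and the fact that each generation roughly doubles the previous one, a quick sketch of per-lane and x16 bandwidth (approximate, usable-bandwidth figures rounded to round numbers):

```python
# Approximate per-lane, per-direction PCIe bandwidth by generation.
# Uses the rule of thumb that Gen 3 ~ 1 GB/s/lane and each
# generation doubles it; real usable figures differ slightly
# due to encoding and protocol overhead.

def pcie_lane_gbps(gen: int) -> float:
    """Approximate GB/s per lane, per direction, for PCIe generation `gen`."""
    return 1.0 * 2 ** (gen - 3)

for gen in (3, 4, 5, 6):
    per_lane = pcie_lane_gbps(gen)
    print(f"PCIe {gen}.0: {per_lane:g} GB/s/lane, "
          f"x16 = {per_lane * 16:g} GB/s each direction")
```

That x16 figure of 128 GB/s each direction for PCIe 6.0 is what makes the specification attractive for accelerators and CXL-attached memory in the server space.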
NVIDIA this morning is quietly adding to its menagerie of high-end video cards with a third version of the GeForce RTX 3080, the simply-named GeForce RTX 3080 12GB. Just as the name says on the tin, this latest GeForce card is more or less a version of the existing RTX 3080 with 12GB of memory, and the additional capacity and memory bandwidth benefits that come from that. This is a relatively subdued launch for the company, and NVIDIA is not making much fanfare for the new card – nor are they announcing a price for it.
At the close of play today, Intel is announcing two major changes at the top of its organization. The big one is that EVP and GM of Intel’s Client Computing Group, Gregory Bryant, who led the company’s CES messaging only last week, is moving on to new ventures after a 30-year stint at Intel. He is to be replaced by 25-year veteran Michelle Johnston Holthaus, currently EVP and Intel’s Chief Revenue Officer in charge of Communications, Sales, and Marketing. There’s also a new CFO coming in from Micron.
It’s a big surprise, seeing Gregory Bryant leave Intel. He has been leading the consumer platform team since June 2017, covering the last few generations of CPU and mobile launches through Intel’s tough times bringing 10nm to revenue. He oversaw the launch of the Intel Evo initiative, and comes from an engineering background, although he preferred to let other senior engineers speak to their strengths. Greg will leave Intel at the end of the month for a new opportunity. It's actually quite a strange announcement, given that he led Intel's presentations at CES only last week - a time when he must've known that one foot was out the door. Analysts are reporting that his goals involve becoming a CEO somewhere, and that the opportunity he is leaving for is a big one. It also reduces the number of Bryants at senior levels of Intel down from three to one (Diane Bryant left in 2017).
Bryant’s replacement is Intel EVP and Chief Revenue Officer Michelle Johnston Holthaus. She is another Intel lifer, having spent 25 years at the company, joining in 1996. She has held multiple roles in reseller management, HQ central marketing and operations, global account management, and currently sits as GM of the Sales, Marketing, and Communications group. In recent memory Holthaus was the keynote speaker at Intel’s Partner Summit. Bringing her to the role is likely a move to strengthen Intel’s bonds with its OEM partners, an aspect that is seeing increased competition in traditional client OEM markets. It also means that Intel has two women leading its largest business units – Holthaus for Client Computing, and Sandra Rivera for Datacenter.
During the transition, Intel will be searching for a new leader of Intel’s sales, marketing, and communications group.
Also announced today is that Intel is to get a new Chief Financial Officer. It was already announced that current CFO, George Davis, was to retire in 2022 and that the search was on for a replacement. That replacement has been found in the form of David Zinsner, who comes from his role as CFO at Micron. The appointment is effective January 17th, shortly before Intel’s end-of-year financial results on January 26th. George Davis is to remain at Intel in an advisory role until May to ensure a seamless transition. It’s worth pointing out that Intel’s search for a new CFO was thought to be aimed at finding an individual who shares CEO Pat Gelsinger’s CAPEX expenditure aspirations.
When AMD started using TSMC’s 7nm process for the Zen 2 processor family that launched in July 2019, one of the overriding messages of that launch was that it was important to be on the leading edge of process node technology to be competitive. That move to TSMC N7 was aided by the small chiplets used in the desktop processors at the time, ensuring a higher yield and better binning curves for desktop and enterprise processors. However, between then and now, we’ve seen other companies take advantage of TSMC’s 5nm and 4nm processes, and talk about TSMC’s 3nm process coming to market over the next 12-24 months. During our roundtable discussion with CEO Dr. Lisa Su, I asked if the need to stay on the leading edge still held true.
To put this into perspective, AMD announced late in 2021 that it would be using TSMC’s 5nm process for its Zen 4 chiplets in enterprise CPUs in the second half of 2022. Then in early 2022, the company reiterated the use of Zen 4 chiplets, this time in desktop processors, again by the end of 2022. This is a significant delay relative to the first use of TSMC 5nm by the smartphone vendors, which reached mass production in Q3 2020, with Apple and Huawei the first to take advantage. Even today, going beyond 5nm, MediaTek has already announced that its upcoming Dimensity 9000 smartphone chip is built on TSMC 4nm and will come to market early this year. TSMC’s 3nm process is expected to ramp production at the end of 2022, for a consumer launch in early 2023. By those metrics, AMD will be behind by a process node or two by the time Zen 4 chiplets come to market later this year.
I asked Dr. Su in our roundtable about whether the need to be on the leading edge process is critical to be competitive for them. Having innovated around chiplets, I asked whether being the lead partner with foundry partners and packaging partners (known as OSATs) is of major importance, especially when the lead competition seem ready to throw money at TSMC to take that volume. How would AMD be able to aggressively assert a market-leading position in light of the complexity of manufacturing and the financial power of the competition?
Dr. Su stated that AMD is continuing to innovate in all areas. For AMD it seems, leading on chiplet technology has helped to bring the package together. She went on to say that AMD has had strong delivery of 7nm, is introducing 6nm, followed by Zen 4 and 5nm, talking about 2D chiplets and 3D chiplets – AMD has all these things in the tool chest and is using the right technology for the right application. Dr. Su reinforced that technology roadmaps are all about making the right choices at the right junctures, and explicitly stated that "our 5nm technology is highly optimized for high-performance computing – it’s not necessarily the same as some other 5nm technologies out there."
While not explicitly stating that the need to be leading edge is no longer critical, this messaging follows the enhanced narrative from AMD that in the era of chiplets, it’s how they’re combined and packaged that is becoming important, arguably more important than exactly what process node is being used. We’ve seen this messaging before from AMD’s main competitor Intel, where back in 2017 the company stated that it will heavily rely on optimized chiplets for each use case – this was crystallized further in 2020 suggesting 24-36 chiplets on a single consumer desktop processor for purpose-built client designs. That being said, it has been constantly rumored that Intel will be a big customer of TSMC 3nm in the following years, so it will be interesting to see where AMD can take advantage of several years of chiplet expertise and packaging tools by comparison.
One constant theme throughout AMD’s recent resurgence into high-performance computing has been the messaging around the scalability of its platform. Building a processor that can scale both from single digit watts all the way up to big water cooled compute servers is no easy task, but also combining multiple types of processors into a single chip to also scale just adds layers of difficulty. AMD were keen to point this out at its recent CES presentation, stating that the RDNA2 graphics architecture is immensely scalable, from mobile to notebook to desktop to server, but also through to embedded, industrial, and automotive. It’s that last part I asked CEO Dr. Lisa Su about.
Last year it was announced, and subsequently confirmed through model numbers, that the Tesla infotainment systems in the Model X and Model S are using AMD’s embedded platform to drive the display and graphics in those vehicles. Our understanding is that the first versions of that silicon in those vehicles are based on Zen plus Vega, so I asked Dr Su about what she meant by RDNA2 being in automotive solutions. Beyond that, I also asked about the AMD and Tesla relationship.
Dr. Su reaffirmed that RDNA2 is prevalent across the ecosystem, from consoles to PCs, and she also mentioned the Samsung [partnership] in the mobile space. She stated that Tesla is always pushing the envelope and that [AMD] appreciates that they’ve chosen Ryzen and Radeon in vehicles like the Model S and Model X. She went on to say that they’ve also started with the Model 3 and Model Y, adopting [AMD] technologies for their infotainment solutions. There was no explicit detailing of the depth of the relationship or the extent of the agreements between the two, but it seems clear that four of Tesla’s major vehicles using AMD represent a sizeable win for the company.
From an outside perspective, it’s interesting just how, where, and which embedded technologies are used in different markets. We hear about so few (AMD plays big in gambling machines, for example) because of the nature of those markets and how accessible they are to the public. At one stage AMD showed me around their showroom at the Santa Clara HQ, which had a number of these implementations, going back as far as the old G-series embedded silicon, given that such silicon has to be supported for 10-15 years. I wonder if AMD has updated that showroom – I’m going to have to go visit again soon.
*After the interview with Dr. Su, AMD clarified that Tesla is using Ryzen Embedded + Navi (RDNA2) in the Model S and X, and has just started shipping the Model 3 and Y (higher-volume vehicles) with Ryzen Embedded.
Over the last several years there has been a renewed push towards privacy features from the laptop industry. With the majority of PC sales being laptops, and battery life improving dramatically, use of laptops in public spaces for business use has increased accordingly. Quite a few business laptops now offer things like privacy shutters for the webcam, as an example, but much more can be done to protect business information from prying eyes in public.
One of the recent solutions has been integrated privacy screens, which dramatically reduce the viewing angle of displays so that if someone attempts to glance over at your screen while you are working, they will see almost nothing. While a good solution, these privacy screens can impact device usage to the detriment of the user experience, which is why, for example, HP’s Sure View integrated privacy screen can be toggled on and off.
A new solution has popped up this year at CES from several manufacturers, and that is to actively reject shoulder surfing by using IR cameras to detect unwanted eyes and blurring the display when they are detected. I remember first seeing Tobii Eye Tracking hardware and software at MSI’s booth at CES in, I believe, 2015. Tobii uses IR cameras to track eye movements, and at the time, it was touted as a gaming feature. Tobii as a brand is still best known in the consumer space for their gaming efforts, but they are now partnering with MSI on their business lineup to provide Tobii Aware, which leverages the concepts of their gaming products for business privacy functionality.
With Tobii Aware, the laptop will be able to continuously provide authentication for the correct user, so if that user turns their head, the display will blur, and when they turn back, it will come back into focus. Presence detection is another feature that has become a focus, including in Windows itself, and the device can automatically lock itself if you step away. Tobii will also allow you to have either visual cues, or privacy screen activation or blurring, if someone is trying to shoulder surf your work.
Tobii is not the only player in this space. Lenovo has partnered with Lattice Semiconductor to integrate FPGAs for Computer Vision into the new Lenovo ThinkPad X1 for presence detection, which will not only increase privacy and allow for more accurate screen unlocks – even with a mask on – but also is touted as a battery saving feature since the PC will only wake up when the right person walks up to it, and not just a pet walking by or someone else in the area. The ThinkPad X1 will also automatically dim the display when it is not being looked at, and as the display is the largest power draw in the entire system, it can further improve battery life. This is even more important for OLED displays which are becoming more common in the laptop space.
AMD is also in this game, partnering with a company called Eyeware to bring a downloadable application for Radeon users in the first half of 2022. The AMD/Eyeware solution is a little different, in that rather than using cameras to actively spot shoulder surfers, it's based around watching what the user is doing. Eyeware wants to use real-time eye tracking to determine what the user is looking at, and then blur/dim everything else, essentially functioning as a form of passive rejection of shoulder surfing.
While laptop privacy has certainly been an active development feature for several manufacturers over the last few years, there is little doubt that the current working environment, with the dramatic shift to remote work over the last two years, has pushed the idea of protecting business information further along than perhaps would have happened organically. With the data now being accessed out of the office with a much higher frequency, containing that data from curious eyes is most certainly something that all businesses would want. The new upcoming hardware and software combinations from several players should help to alleviate some of the concern, although of course the protection of business data is still, even with these protections, something that workers will need to be trained on.