First Look at Intel’s Next-Gen Meteor Lake CPUs, Sapphire Rapids Xeons & Ponte Vecchio GPUs Fresh Out of Arizona’s Fab 42

by 9SIX

CNET has managed to capture the first die shots of several next-generation Intel Meteor Lake CPUs, Sapphire Rapids Xeons & Ponte Vecchio GPUs that are being tested & produced inside the chipmaker’s Fab 42, situated in Arizona, US.

Glorious Die Shots of Intel’s Next-Gen Meteor Lake CPUs, Sapphire Rapids Xeons & Ponte Vecchio GPUs Captured at Fab 42 in Arizona

The die shots were captured by CNET’s Senior Reporter, Stephen Shankland, who visited Intel’s Fab 42 in Arizona, US. All the magic happens here: the fabrication facility is producing next-generation chips for the consumer, data center, and high-performance computing segments. Fab 42 will handle Intel’s next-generation chips produced on the 10nm (Intel 7) and 7nm (Intel 4) process nodes. Key products that will utilize these nodes include the Meteor Lake client processors, Sapphire Rapids Xeon processors, and Ponte Vecchio GPUs for HPC.

Intel 4 Powered Meteor Lake CPUs For Client Computing

The first product to talk about is Meteor Lake. Heading to client desktop PCs in 2023, the Meteor Lake CPUs will be the first true multi-chiplet design from Intel. CNET managed to get shots of the first Meteor Lake test chips, which look remarkably similar to the renders that Intel teased at its Architecture Day 2021 event. The Meteor Lake test vehicle pictured above is used to ensure that the Foveros packaging design works correctly and as expected. Meteor Lake CPUs will utilize Intel’s Foveros packaging technology to interconnect the various core IPs integrated on the chip.

Intel Meteor Lake test chips prepare Chipzilla for the final production of next-gen Core CPUs.

The die has four chiplets connected together on the same substrate. Based on what Intel has shown in its renders, the top die should be the Compute Tile, the middle tile the SOC-LP tile, and the lower-most die the GPU tile. However, the die sizes don’t quite line up with that layout. The middle die could be the main Compute tile that houses the cores, and the smaller die below it could be the SOC-LP tile that includes the IO. The topmost die should be the GPU, while the smaller die next to it could be a separate cache or another IO tile. This is pure speculation for now, as these are test chips and the final design may end up being different.

We also get a first look at the Meteor Lake test chip wafer, which measures 300mm in diameter. The wafer comprises test chips which are dummy dies, once again to make sure that the interconnects on the chip work as intended. Intel has already achieved power-on for its Meteor Lake compute tile, so we can expect final chips to be produced by the 2nd half of 2022 for a launch in 2023.

Here’s Everything We Know About The 14th Gen Meteor Lake 7nm CPUs

We already have some details from Intel: the Meteor Lake line of desktop and mobility CPUs is expected to be based on a new Cove core architecture, rumored to be known as ‘Redwood Cove’ and built on a 7nm EUV process node. Redwood Cove is said to be designed from the ground up to be node-agnostic, meaning it can be fabricated at different fabs. There are references that point to TSMC being a backup or even a partial supplier for the Redwood Cove-based chips, which might explain why Intel lists multiple manufacturing processes for the CPU family.

The Meteor Lake CPUs may be the first CPU generation from Intel to say farewell to the ring bus interconnect architecture. There are also rumors that Meteor Lake could be a fully 3D-stacked design and could utilize an I/O die sourced from an external fab (TSMC cited again). It is highlighted that Intel will officially utilize its Foveros packaging technology on the CPU to interconnect the various dies on the chip (XPU). This also aligns with Intel referring to each tile on 14th Gen chips individually (Compute Tile = CPU cores).

The Meteor Lake desktop CPU family is expected to retain support for the LGA 1700 socket, the same socket used by Alder Lake & Raptor Lake processors. The platform will support both DDR5 & DDR4 memory along with PCIe Gen 5.0, with the mainstream and budget tiers going for DDR4 DIMMs while the premium & high-end offerings go for DDR5 DIMMs. The source also lists both Meteor Lake-P and Meteor Lake-M CPUs that will be aimed at mobility platforms.

Intel Mainstream Desktop CPU Generations Comparison:

| Intel CPU Family | Process | Cores/Threads (Max) | TDPs | Chipset | Socket | Memory Support | PCIe Support | Launch |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Sandy Bridge (2nd Gen) | 32nm | 4/8 | 35-95W | 6-Series | LGA 1155 | DDR3 | PCIe Gen 2.0 | 2011 |
| Ivy Bridge (3rd Gen) | 22nm | 4/8 | 35-77W | 7-Series | LGA 1155 | DDR3 | PCIe Gen 3.0 | 2012 |
| Haswell (4th Gen) | 22nm | 4/8 | 35-84W | 8-Series | LGA 1150 | DDR3 | PCIe Gen 3.0 | 2013-2014 |
| Broadwell (5th Gen) | 14nm | 4/8 | 65W | 9-Series | LGA 1150 | DDR3 | PCIe Gen 3.0 | 2015 |
| Skylake (6th Gen) | 14nm | 4/8 | 35-91W | 100-Series | LGA 1151 | DDR4 | PCIe Gen 3.0 | 2015 |
| Kaby Lake (7th Gen) | 14nm | 4/8 | 35-91W | 200-Series | LGA 1151 | DDR4 | PCIe Gen 3.0 | 2017 |
| Coffee Lake (8th Gen) | 14nm | 6/12 | 35-95W | 300-Series | LGA 1151 | DDR4 | PCIe Gen 3.0 | 2017 |
| Coffee Lake (9th Gen) | 14nm | 8/16 | 35-95W | 300-Series | LGA 1151 | DDR4 | PCIe Gen 3.0 | 2018 |
| Comet Lake (10th Gen) | 14nm | 10/20 | 35-125W | 400-Series | LGA 1200 | DDR4 | PCIe Gen 3.0 | 2020 |
| Rocket Lake (11th Gen) | 14nm | 8/16 | 35-125W | 500-Series | LGA 1200 | DDR4 | PCIe Gen 4.0 | 2021 |
| Alder Lake (12th Gen) | Intel 7 | 16/24 | 35-125W | 600-Series | LGA 1700 | DDR5 / DDR4 | PCIe Gen 5.0 | 2021 |
| Raptor Lake (13th Gen) | Intel 7 | 24/32 | 35-125W | 700-Series | LGA 1700 | DDR5 / DDR4 | PCIe Gen 5.0 | 2022 |
| Meteor Lake (14th Gen) | Intel 4 | TBA | 35-125W | 800-Series? | LGA 1700 | DDR5 | PCIe Gen 5.0? | 2023 |
| Arrow Lake (15th Gen) | Intel 4? | 40/48 | TBA | 900-Series? | TBA | DDR5 | PCIe Gen 5.0? | 2024 |
| Lunar Lake (16th Gen) | Intel 3? | TBA | TBA | 1000-Series? | TBA | DDR5 | PCIe Gen 5.0? | 2025 |
| Nova Lake (17th Gen) | Intel 3? | TBA | TBA | 2000-Series? | TBA | DDR5? | PCIe Gen 6.0? | 2026 |

Intel 7 Powered Sapphire Rapids CPUs For Xeon Data Center & Servers

We also get a more detailed look at the Intel Sapphire Rapids-SP Xeon CPU substrate, chiplets, and full package design (both standard and HBM variants). The standard variant features four tiles that incorporate the compute chiplets, and there are also four pin-outs for the HBM packages. All eight chiplets (four compute / four HBM) communicate through EMIB interconnects, which are the smaller rectangular bars on the edge of each die.

A substrate of the Intel Sapphire Rapids-SP Xeon CPU with HBM2e memory. (Image Credits: CNET)

The final product can be seen below, showing the four Xeon compute tiles in the middle with four smaller HBM2e packages on the sides. Intel recently confirmed that its Sapphire Rapids-SP Xeon CPUs will feature up to 64 GB of HBM2e memory onboard. This is the full-fledged CPU design, and it looks ready for deployment in next-generation data centers in 2022.

The final 4th Gen Sapphire Rapids-SP Xeon CPU with its multi-chiplet design housing Compute & HBM2e tiles. (Image Credits: CNET)

Here’s Everything We Know About The 4th Gen Intel Sapphire Rapids-SP Xeon Family

According to Intel, Sapphire Rapids-SP will come in two package variants, a standard and an HBM configuration. The standard variant will feature a chiplet design composed of four XCC dies, each with a die size of around 400mm2, for four in total on the top Sapphire Rapids-SP Xeon chip. The dies will be interconnected via EMIB, which has a bump pitch of 55 microns and a core pitch of 100 microns.

The standard Sapphire Rapids-SP Xeon chip will feature 10 EMIB interconnects, and the entire package will measure a mighty 4446mm2. Moving over to the HBM variant, the number of interconnects increases to 14, which are needed to wire the HBM2E memory to the cores.

The four HBM2E memory packages will feature 8-Hi stacks, so Intel is going for at least 16 GB of HBM2E memory per stack for a total of 64 GB across the Sapphire Rapids-SP package. As for the package itself, the HBM variant will measure an insane 5700mm2, or 28% larger than the standard variant. Compared to the recently leaked EPYC Genoa numbers, the HBM2E package for Sapphire Rapids-SP would end up 5% larger, while the standard package would be 22% smaller.

  • Intel Sapphire Rapids-SP Xeon (Standard Package) – 4446mm2
  • Intel Sapphire Rapids-SP Xeon (HBM2E Package) – 5700mm2
  • AMD EPYC Genoa (12 CCD Package) – 5428mm2
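As a quick sanity check, the percentages quoted above can be reproduced from the listed package areas (the baseline chosen for each "larger"/"smaller" figure is our assumption, not something the leak spells out):

```python
# Package areas reported above, in mm^2
spr_standard = 4446   # Sapphire Rapids-SP, standard package
spr_hbm      = 5700   # Sapphire Rapids-SP, HBM2E package
epyc_genoa   = 5428   # AMD EPYC Genoa, 12-CCD package (leaked figure)

def pct_larger(a, b):
    """How much larger a is than b, as a rounded percentage of b."""
    return round((a / b - 1) * 100)

# HBM2E package vs. standard package
print(pct_larger(spr_hbm, spr_standard))    # -> 28

# HBM2E package vs. Genoa
print(pct_larger(spr_hbm, epyc_genoa))      # -> 5

# Genoa vs. standard package (i.e. the standard package
# reads as ~22% smaller when Genoa is the comparison point)
print(pct_larger(epyc_genoa, spr_standard))  # -> 22

# Four 8-Hi HBM2E stacks at 16 GB each: the quoted 64 GB total
print(4 * 16)  # -> 64
```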

Intel also states that the EMIB link provides twice the bandwidth density and 4 times better power efficiency compared to standard package designs. Interestingly, Intel calls the latest Xeon lineup ‘logically monolithic’, meaning the interconnect offers the same functionality as a single die would, even though there are technically four chiplets linked together. You can read the full details regarding the standard 56 core & 112 thread Sapphire Rapids-SP Xeon CPUs here.

Intel Xeon SP Families:

| Family Branding | Skylake-SP | Cascade Lake-SP/AP | Cooper Lake-SP | Ice Lake-SP | Sapphire Rapids | Emerald Rapids | Granite Rapids | Diamond Rapids |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Process Node | 14nm+ | 14nm++ | 14nm++ | 10nm+ | Intel 7 | Intel 7 | Intel 4 | Intel 3? |
| Platform Name | Intel Purley | Intel Purley | Intel Cedar Island | Intel Whitley | Intel Eagle Stream | Intel Eagle Stream | Intel Mountain Stream / Intel Birch Stream | Intel Mountain Stream / Intel Birch Stream |
| MCP (Multi-Chip Package) SKUs | No | Yes | No | No | Yes | TBD | TBD (Possibly Yes) | TBD (Possibly Yes) |
| Socket | LGA 3647 | LGA 3647 | LGA 4189 | LGA 4189 | LGA 4677 | LGA 4677 | LGA 4677 | TBD |
| Max Core Count | Up To 28 | Up To 28 | Up To 28 | Up To 40 | Up To 56 | Up To 64? | Up To 120? | TBD |
| Max Thread Count | Up To 56 | Up To 56 | Up To 56 | Up To 80 | Up To 112 | Up To 128? | Up To 240? | TBD |
| Max L3 Cache | 38.5 MB | 38.5 MB | 38.5 MB | 60 MB | 105 MB | 120 MB? | TBD | TBD |
| Memory Support | DDR4-2666 6-Channel | DDR4-2933 6-Channel | Up To 6-Channel DDR4-3200 | Up To 8-Channel DDR4-3200 | Up To 8-Channel DDR5-4800 | Up To 8-Channel DDR5-5600? | TBD | TBD |
| PCIe Gen Support | PCIe 3.0 (48 Lanes) | PCIe 3.0 (48 Lanes) | PCIe 3.0 (48 Lanes) | PCIe 4.0 (64 Lanes) | PCIe 5.0 (80 Lanes) | PCIe 5.0 | PCIe 6.0? | PCIe 6.0? |
| TDP Range | 140W-205W | 165W-205W | 150W-250W | 105-270W | Up To 350W | Up To 350W | TBD | TBD |
| 3D XPoint Optane DIMM | N/A | Apache Pass | Barlow Pass | Barlow Pass | Crow Pass | Crow Pass? | Donahue Pass? | Donahue Pass? |
| Competition | AMD EPYC Naples 14nm | AMD EPYC Rome 7nm | AMD EPYC Rome 7nm | AMD EPYC Milan 7nm+ | AMD EPYC Genoa ~5nm | AMD Next-Gen EPYC (Post Genoa) | AMD Next-Gen EPYC (Post Genoa) | AMD Next-Gen EPYC (Post Genoa) |
| Launch | 2017 | 2018 | 2020 | 2021 | 2022 | 2023? | 2024? | 2025? |

Intel 7 Powered Ponte Vecchio GPUs For HPC

Lastly, we have a great view of the Intel Ponte Vecchio GPU, the next-generation HPC solution. Ponte Vecchio was designed and created under the leadership of Raja Koduri who has been providing us with great tidbits regarding the design philosophy and the insane compute power which this chip packs.

Intel’s Ponte Vecchio is a gold mine of chiplets, housing 47 different tiles on the same package. (Image Credits: CNET)

Here’s Everything We Know About The Intel 7 Powered Ponte Vecchio GPUs

Moving over to Ponte Vecchio, Intel outlined some key features of its flagship data center GPU such as 128 Xe cores, 128 RT units, HBM2e memory, and a total of 8 Xe-HPC GPUs that will be connected together. The chip will feature up to 408 MB of L2 cache in two separate stacks that will connect via the EMIB interconnect. The chip will feature multiple dies based on Intel’s own ‘Intel 7’ process and TSMC’s N7 / N5 process nodes.

Intel also previously detailed the package and die size of its flagship Ponte Vecchio GPU based on the Xe-HPC architecture. The chip will consist of 2 stacks with 16 active dies each. The maximum active top die size is 41mm2, while the base tile sits at 650mm2. All the chiplets and process nodes that the Ponte Vecchio GPUs will utilize are listed below:

  • Intel 7nm
  • TSMC 7nm
  • Foveros 3D Packaging
  • EMIB
  • 10nm Enhanced Super Fin
  • Rambo Cache
  • HBM2

Following is how Intel gets to 47 tiles on the Ponte Vecchio chip:

  • 16 Xe HPC (internal/external)
  • 8 Rambo (internal)
  • 2 Xe Base (internal)
  • 11 EMIB (internal)
  • 2 Xe Link (external)
  • 8 HBM (external)

The Ponte Vecchio GPU makes use of 8 HBM 8-Hi stacks and contains a total of 11 EMIB interconnects. The whole Intel Ponte Vecchio package would measure 4843.75mm2. It is also mentioned that the bump pitch for Meteor Lake CPUs using high-density 3D Foveros packaging will be 36 microns.
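The tile tally and cache figures quoted above do add up; a minimal sketch using only numbers from this article:

```python
# Tile counts listed above for Ponte Vecchio
tiles = {
    "Xe HPC": 16,
    "Rambo": 8,
    "Xe Base": 2,
    "EMIB": 11,
    "Xe Link": 2,
    "HBM": 8,
}

# Summing the list reproduces the 47-tile figure
print(sum(tiles.values()))  # -> 47

# Two L2 stacks of 204 MB each give the 408 MB total quoted earlier
print(2 * 204)  # -> 408
```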

The Ponte Vecchio GPU will be competing against NVIDIA and AMD HPC GPUs in 2022. (Image Credits: CNET)

The Ponte Vecchio GPU is not one chip but a combination of several chips. It’s a chiplet powerhouse, packing the most chiplets of any GPU/CPU out there, 47 to be precise. And these are not based on just one process node but several, as we detailed just a few days back.

Next-Gen Data Center GPU Accelerators

| GPU Name | AMD Instinct MI200 | NVIDIA Hopper GH100 | Intel Xe HPC |
| --- | --- | --- | --- |
| Flagship Product | AMD Instinct MI250X | NVIDIA H100 | Intel Ponte Vecchio |
| Packaging Design | MCM (Infinity Fabric) | MCM (NVLINK) | MCM (EMIB + Foveros) |
| GPU Architecture | Aldebaran (CDNA 2) | Hopper GH100 | Xe-HPC |
| GPU Process Node | 6nm | 5nm? | Intel 7 + TSMC N7 / N5 |
| GPU Cores | 14,080 | 18,432? | 32,768? |
| GPU Clock Speed | 1700 MHz | TBA | TBA |
| L2 / L3 Cache | 2 x 8 MB | TBA | 2 x 204 MB |
| FP16 Compute | 383 TOPs | TBA | TBA |
| FP32 Compute | 95.7 TFLOPs | TBA | ~45 TFLOPs (A0 Silicon) |
| FP64 Compute | 47.9 TFLOPs | TBA | TBA |
| Memory Capacity | 128 GB HBM2E | 128 GB HBM2E? | TBA |
| Memory Clock | 3.2 Gbps | TBA | TBA |
| Memory Bus | 8192-bit | 8192-bit? | 8192-bit |
| Memory Bandwidth | 3.2 TB/s | ~2.5 TB/s? | 5 TB/s |
| Form Factor | Dual Slot, Full Length / OAM | Dual Slot, Full Length / OAM | OAM |
| Cooling | Passive / Liquid Cooling | Passive / Liquid Cooling | Passive / Liquid Cooling |
| Launch | Q4 2021 | 2H 2022 | 2022-2023? |

Intel’s Fab 42 is expected to be joined by the upcoming Fab 52 and Fab 62 in the coming years, which will produce what comes next. Intel’s CEO, Pat Gelsinger, already broke ground on those fabs back in September, and this is where you will see the production of next-gen sub-Intel 7 products.

Intel Process Roadmap

| Process Name | Intel 10nm SuperFin | Intel 7 | Intel 4 | Intel 3 | Intel 20A | Intel 18A |
| --- | --- | --- | --- | --- | --- | --- |
| Production | In High-Volume (Now) | In Volume (Now) | 2H 2022 | 2H 2023 | 2H 2024 | 2H 2025 |
| Perf/Watt (over 10nm ESF) | N/A | 10-15% | 20% | 18% | >20%? | TBA |
| EUV | N/A | N/A | Yes | Yes | Yes | High-NA EUV |
| Transistor Architecture | FinFET | Optimized FinFET | Optimized FinFET | Optimized FinFET | RibbonFET | Optimized RibbonFET |
| Products | Tiger Lake | Alder Lake, Sapphire Rapids, Xe-HPC / Xe-HP? | Meteor Lake, Granite Rapids | Lunar Lake?, Diamond Rapids? | TBA | TBA |

Source: wccftech

