December 4, 2021
CPUs / GPUs / News

AMD Set to Announce Zen 3D Based Milan-X CPUs + Instinct MI200 GPUs: Watch the Live Event Here

AMD is all set to announce its next generation of data center offerings in roughly 24 hours from now. We're talking about the Zen 3-based Milan-X processors featuring 3D stacked V-Cache, and the chiplet-based Instinct MI200 GPU accelerators. Milan-X retains the Zen 3 core and TSMC's N7 process, and as such can be considered a special refresh or niche stack, much like the upcoming Sapphire Rapids-SP with on-die HBM memory.

CPU Name        Cores/Threads  Base Clock  Boost Clock  L3 Cache (V-Cache + L3)  L2 Cache  TDP
AMD EPYC 7773X  64/128         2.2 GHz     3.5 GHz      512 + 256 MB             32 MB     280W
AMD EPYC 7573X  32/64          2.8 GHz     3.6 GHz      512 + 256 MB             16 MB     280W
AMD EPYC 7473X  24/48          2.8 GHz     3.7 GHz      512 + 256 MB             12 MB     240W
AMD EPYC 7373X  16/32          3.05 GHz    3.8 GHz      512 + 256 MB             8 MB      240W
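The cache figures in the table follow directly from Milan-X's chiplet layout: eight CCDs per package, each carrying 32 MB of native L3 plus 64 MB of stacked V-Cache, and 512 KB of L2 per Zen 3 core. A quick sanity check of those totals (the per-CCD split is Milan's known topology; the script itself is just illustrative arithmetic):

```python
# Sanity-check Milan-X cache totals from the per-chiplet layout.
# Milan-X packages carry 8 CCDs; each CCD has 32 MB of native L3
# plus 64 MB of 3D stacked V-Cache. L2 is 512 KB per Zen 3 core.
CCDS = 8
L3_PER_CCD_MB = 32
VCACHE_PER_CCD_MB = 64
L2_PER_CORE_KB = 512

def cache_totals(cores: int) -> dict:
    """Return package-level cache totals for a given core count."""
    return {
        "vcache_mb": CCDS * VCACHE_PER_CCD_MB,   # 8 x 64 = 512 MB
        "l3_mb": CCDS * L3_PER_CCD_MB,           # 8 x 32 = 256 MB
        "l2_mb": cores * L2_PER_CORE_KB // 1024, # 512 KB per core
    }

for name, cores in [("EPYC 7773X", 64), ("EPYC 7573X", 32),
                    ("EPYC 7473X", 24), ("EPYC 7373X", 16)]:
    t = cache_totals(cores)
    print(f"{name}: {t['vcache_mb']} + {t['l3_mb']} MB L3, {t['l2_mb']} MB L2")
```

The 512 + 256 MB L3 column is constant across the stack because every SKU keeps all eight CCDs populated; only active cores (and hence total L2) vary.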

Looking at the specs, everything is mostly identical to the vanilla Milan parts, including the base and boost clocks, the TDP, and the L2 cache (apart from the crapton of L3 cache). That means performance gains (as already indicated earlier) will vary from application to application, and won't be equally pronounced in every workload.

You can watch the AMD Accelerated Data Center Keynote here

The exact specs of the MI250X have been shared. It'll come with a total of 220 CUs and a boost clock of 1.7 GHz. On the memory side, we're looking at eight HBM2e stacks, each featuring eight 2 GB dies. That implies a total bus width of 8,192 bits (1,024 bits x 8 controllers) and 128 GB of memory, resulting in an overall bandwidth of roughly 3.2 TB/s, about the same as the HBM variants of Sapphire Rapids-SP.

At the heart of the GPU, there will be two 110 CU chiplets, resulting in an overall compute complement of 220 CUs with an impressive boost clock of 1.7 GHz. Since Aldebaran can execute double-precision instructions (FP64) at native speeds, this results in a double-precision throughput of 47.9 TFLOPs, an insane four times higher than its predecessor, the MI100.
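These figures hang together arithmetically: each CDNA2 CU has 64 shader lanes, an FMA counts as two FLOPs, and each HBM2e stack presents a 1,024-bit interface. The per-pin data rate below is an assumption for illustration; actual bandwidth depends on the final memory clock AMD ships:

```python
# Back-of-envelope check of the MI250X figures quoted above.
CUS = 220                 # two 110-CU chiplets
BOOST_GHZ = 1.7
LANES_PER_CU = 64         # FP64-capable lanes per CDNA2 CU
FLOPS_PER_FMA = 2         # one fused multiply-add = 2 FLOPs

fp64_tflops = CUS * LANES_PER_CU * FLOPS_PER_FMA * BOOST_GHZ / 1000
print(f"FP64 peak: {fp64_tflops:.1f} TFLOPs")

# Memory subsystem: 8 HBM2e stacks, 1,024 bits each, 8 x 2 GB dies per stack.
STACKS = 8
BITS_PER_STACK = 1024
DIES_PER_STACK = 8
GB_PER_DIE = 2
PIN_GBPS = 3.2            # assumed per-pin data rate (Gbps)

bus_bits = STACKS * BITS_PER_STACK                   # 8,192 bits
capacity_gb = STACKS * DIES_PER_STACK * GB_PER_DIE   # 128 GB
bandwidth_tbs = bus_bits / 8 * PIN_GBPS / 1000       # bytes/transfer x rate
print(f"Bus: {bus_bits} bits, {capacity_gb} GB, {bandwidth_tbs:.2f} TB/s")
```

At 220 CUs and 1.7 GHz the FP64 math lands on 47.9 TFLOPs exactly; a 110-CU total would only get you half that, which is how the chiplet count can be cross-checked against the quoted throughput.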

Even NVIDIA's Ampere-based A100 Tensor Core accelerator is capable of "only" 19.5 TFLOPs of FP64 compute. In terms of mixed-precision compute, we're looking at 383 TFLOPs of both FP16 and BFLOAT16. In comparison, the MI100 topped out at "just" 184 and 92 TFLOPs in those two data types, respectively.

The MI250X will have a TDP of 500W, which is a bit on the high side but is likely a result of the HBM memory. The MI250 should come with a lower boost clock and possibly less memory as well. A scalpel to the GPU core is unlikely, but I wouldn't rule it out.

The AMD Instinct MI200 GPUs will, over the next year, begin to power three big systems on three continents: the US' exascale Frontier system, the European Union's pre-exascale LUMI system, and Australia's petascale Setonix system.
