

AMD creates the largest chip Instinct MI300 accelerator card, including 128GB HBM3 video memory and 146 billion transistor

2025-04-04 Update · From: SLTechnology News & Howtos


Shulou(Shulou.com)11/24 Report--

Thanks to CTOnews.com reader OC_Formula for the tip! CTOnews.com, January 8 news: at CES 2023, AMD revealed the Instinct MI300, an APU accelerator card for next-generation data centers. The chip packages CPU, GPU, and memory together, greatly reducing the overhead of off-package DDR memory access and CPU-GPU PCIe transfers, which significantly improves performance and efficiency.

The accelerator card uses a chiplet design with 13 dies built on 3D stacking: 24 Zen 4 CPU cores combined with CDNA 3 GPU compute and 8 HBM3 memory stacks, mixing 5nm and 6nm process IP for a total of 128GB of HBM3 memory and 146 billion transistors. It will be available in the second half of 2023.
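The 128GB figure is consistent with the eight HBM3 stacks mentioned above. A minimal sketch of the arithmetic, assuming 16GB per stack (AMD quotes only the total, so the per-stack capacity here is an assumption):

```python
# Sanity check on the quoted memory capacity: 8 HBM3 stacks at an
# assumed 16 GB per stack account for the 128 GB total AMD quotes.
hbm3_stacks = 8
gb_per_stack = 16  # assumed; AMD disclosed only the 128 GB total
total_memory_gb = hbm3_stacks * gb_per_stack
print(total_memory_gb)  # 128
```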

At present, the Instinct MI300 has more transistors than Intel's Ponte Vecchio (about 100 billion), making it the largest chip AMD has ever produced. Photos of Dr. Lisa Su holding the Instinct MI300 show that the package covers more than half her hand, which is strikingly large.

AMD says the package contains nine 5nm dies built on 3D stacking (presumably three CPU dies and six GPU dies, following AMD's past designs) and four 6nm dies, surrounded by the packaged HBM memory stacks, for a total of 146 billion transistors. AMD says the card's AI performance is far higher than that of the previous generation (MI250X).

AMD has released only this much information so far. Production chips will ship in the second half of 2023, by which time they may face competitors such as NVIDIA's Grace CPU and Hopper GPU, though the MI300 should still arrive earlier than Intel's Falcon Shores.

Judging from the MI300 sample shown by AMD representatives, the nine chiplets use an active base-die design that enables communication not only between I/O tiles but also between the memory controllers interfacing with the HBM3 stacks, yielding very high data throughput. It also allows the CPU and GPU to work on the same data in memory at the same time (zero copy), which saves power, improves performance, and simplifies programming.

CTOnews.com has learned that AMD claims the Instinct MI300 delivers 8x the AI performance and a 5x performance-per-watt improvement over the MI250 (based on a sparse FP8 benchmark). That could cut training time for very large AI models such as ChatGPT and DALL-E from months to weeks, saving millions of dollars in electricity costs.
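To see how an 8x speedup compresses months into weeks, here is a rough back-of-the-envelope sketch; the six-month baseline is a hypothetical figure for illustration, not one AMD quoted:

```python
# Rough illustration of how an 8x training speedup turns a months-long
# run into weeks. The 6-month baseline is a hypothetical assumption.
baseline_months = 6        # assumed baseline training time
weeks_per_month = 4.345    # average weeks in a month
speedup = 8                # AMD's claimed AI speedup over MI250
new_weeks = baseline_months * weeks_per_month / speedup
print(round(new_weeks, 1))  # 3.3
```

On that assumption, a half-year training run shrinks to roughly three weeks, which matches the "months to weeks" framing in AMD's claim.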

The Instinct MI300 will power the upcoming 2-exaflop El Capitan supercomputer in the United States, which means El Capitan is expected to be the fastest supercomputer in the world when it is deployed in 2023.
