2025-01-31 Update From: SLTechnology News&Howtos
CTOnews.com reported on May 29 that Nvidia announced a series of major products at Computex 2023 in Taipei, the most notable being that its Grace Hopper superchip is now in full production. The chip is the core component of Nvidia's new DGX GH200 AI supercomputing platform and MGX systems, which are designed to handle large-scale generative AI workloads. Nvidia also announced Spectrum-X, a new Ethernet networking platform optimized for AI servers and supercomputing clusters.
The Grace Hopper superchip is Nvidia's Arm-based CPU+GPU integrated design. It combines a 72-core Grace CPU, a Hopper GPU, 96 GB of HBM3, and 512 GB of LPDDR5X in a single package, for a total of 200 billion transistors. This combination delivers remarkable bandwidth between the CPU and GPU, up to 1 TB/s, a substantial advantage for memory-bound workloads.
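To see why that 1 TB/s CPU-GPU link matters for memory-bound work, a rough transfer-time comparison helps. The sketch below assumes PCIe 5.0 x16 at roughly 64 GB/s per direction as the conventional baseline; that figure is an assumption for comparison, while the 1 TB/s number is the one quoted above.

```python
# Rough transfer-time comparison motivating the ~1 TB/s CPU-GPU link.
# The PCIe rate is an assumed baseline (PCIe 5.0 x16, ~64 GB/s per
# direction); the NVLink-C2C figure is the one quoted in the article.

pool_gb = 512        # Grace's LPDDR5X pool, per the article
nvlink_gbps = 1000   # ~1 TB/s quoted CPU<->GPU bandwidth
pcie5_gbps = 64      # assumed PCIe 5.0 x16 per-direction rate

t_nvlink = pool_gb / nvlink_gbps   # time to stream the whole pool
t_pcie = pool_gb / pcie5_gbps

print(f"NVLink-C2C: {t_nvlink:.2f} s, PCIe 5.0 x16: {t_pcie:.1f} s "
      f"({t_pcie / t_nvlink:.0f}x slower)")
```

Streaming the entire 512 GB pool to the GPU takes about half a second over the quoted link, versus roughly eight seconds over the assumed PCIe baseline, which is the advantage the article alludes to for memory-constrained workloads.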
The DGX GH200 AI supercomputing platform is a system and reference architecture designed by Nvidia for the most demanding AI and high-performance computing workloads. The current DGX A100 system can combine only eight A100 GPUs into a single unit; given the explosive growth of AI, Nvidia's customers urgently need larger, more powerful systems. Designed for maximum throughput and scalability, the DGX GH200 sidesteps the limitations of standard cluster interconnects such as InfiniBand and Ethernet by using Nvidia's custom NVLink Switch chips.
Full details of the DGX GH200 are not yet clear, but Nvidia has confirmed that it uses a new NVLink Switch system with 36 NVLink switches to connect 256 GH200 Grace Hopper chips and 144 TB of shared memory into a single unit. Nvidia CEO Jensen Huang called the GH200 system a "giant GPU". This is the first time Nvidia has used the NVLink Switch topology to build an entire supercomputer cluster, which Nvidia says provides 10 times the GPU-to-GPU and 7 times the CPU-to-GPU bandwidth of previous-generation systems. It is also designed to deliver up to 5 times the interconnect power efficiency of competing solutions and up to 128 TB/s of bisection bandwidth. The system contains 150 miles (CTOnews.com note: about 241.4 kilometers) of optical fiber and weighs 40,000 pounds, yet presents itself as a single GPU. Nvidia says the 256 Grace Hopper superchips push the DGX GH200's AI performance to one exaflop (10^18 operations per second).
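The cluster-level figures can be sanity-checked with simple arithmetic. In the sketch below, the "usable per chip" and "per-chip AI performance" numbers are inferences from the quoted totals, not Nvidia specifications:

```python
# Back-of-envelope check of the DGX GH200 figures quoted above.
# Per-chip memory sizes come from the article; derived values are
# inferences from the quoted aggregates, not official specs.

NUM_CHIPS = 256

hbm3_gb = 96       # HBM3 attached to the Hopper GPU
lpddr5x_gb = 512   # LPDDR5X attached to the Grace CPU
raw_total_tb = NUM_CHIPS * (hbm3_gb + lpddr5x_gb) / 1000
print(f"raw memory across the cluster: {raw_total_tb:.1f} TB")

# The quoted 144 TB shared-memory figure implies somewhat less than the
# raw total is exposed, i.e. roughly this much per chip:
implied_per_chip_gb = 144_000 / NUM_CHIPS
print(f"implied usable memory per chip: {implied_per_chip_gb:.1f} GB")

# "One exaflop of AI performance" across 256 chips implies roughly
# this much low-precision AI throughput per chip:
per_chip_pflops = 1_000 / NUM_CHIPS
print(f"implied per-chip AI performance: {per_chip_pflops:.1f} PFLOPS")
```

The raw per-chip total (96 + 512 = 608 GB) would give about 155.6 TB across 256 chips, slightly above the quoted 144 TB shared pool, which suggests not all of the LPDDR5X is exposed as shared memory.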
Nvidia will provide the DGX GH200 reference blueprint to its major customers Google, Meta, and Microsoft, and will also offer the system as a reference architecture for cloud service providers and hyperscale data centers. Nvidia itself will deploy a new supercomputer called Nvidia Helios, consisting of four DGX GH200 systems, for its own research and development work. The four systems contain 1,024 Grace Hopper chips in total and are connected by Nvidia's Quantum-2 InfiniBand 400 Gb/s network.
Nvidia's DGX line targets the most advanced systems and HGX targets hyperscale data centers; the new MGX system sits in between, and DGX and HGX will coexist with it. Nvidia's OEM partners face new challenges when designing servers for AI data centers, which slows design and deployment. Nvidia's new MGX reference architecture is meant to accelerate that process, offering more than 100 reference designs.
The MGX system is built from modular designs covering Nvidia's full range of CPUs, GPUs, DPUs, and networking systems, and also includes designs based on common x86 and Arm processors. Nvidia offers both air-cooled and liquid-cooled options to suit a variety of deployment scenarios. ASUS, Gigabyte, ASRock Rack, and Pegatron will all use the MGX reference architecture to develop systems launching later this year and early next year.
As for the new Spectrum-X networking platform, Nvidia calls it a "high-performance Ethernet platform built for AI". The Spectrum-X design pairs Nvidia's 51 Tb/s Spectrum-4 400 GbE Ethernet switch with the Nvidia BlueField-3 DPU, along with software and SDKs that let developers adapt the system to the unique needs of AI workloads.
Compared with other Ethernet-based systems, Nvidia says Spectrum-X is lossless and offers better QoS and lower latency. It also features new adaptive routing technology, which is particularly useful in multi-tenant environments.