2025-01-15 Update From: SLTechnology News & Howtos
Shulou (Shulou.com) 11/24 Report
Thanks to CTOnews.com reader Wu Yanzu in South China for the tip! CTOnews.com, September 26 news: Nvidia's previously launched Hopper H100 GPU comes in two versions, SXM5 and PCIe. Both carry 80 GB of video memory, but the SXM5 version uses the newer HBM3 standard, while the PCIe version uses HBM2e.
Now, according to s-ss.cc, NVIDIA may be developing an entirely new PCIe version of the Hopper H100. Most notably, the new card may be equipped not with 80 GB of HBM2e, but with 120 GB of HBM2e memory.
A picture shared by the leaker also shows an ADLCE engineering sample card. There is no further information about that card yet, but an H100 with 120 GB of memory is something to look forward to.
The new card should otherwise match the previous version: a fully enabled GH100 GPU with 16,896 CUDA cores and memory bandwidth reaching 3 TB/s, the same core configuration and performance as the SXM-interface version of the H100.
The leaker pointed out that the single-precision performance of this 120 GB PCIe version of the H100 matches that of the SXM version, with FP32 throughput of roughly 60 TFLOPS.
The full specifications of the GH100 GPU are as follows:
8 GPCs, 72 TPCs (9 TPCs/GPC, 2 SMs/TPC), 144 SMs per full GPU
128 FP32 CUDA cores per SM, 18,432 FP32 CUDA cores per full GPU
4 fourth-generation Tensor Cores per SM, 576 per full GPU
6 HBM3 or HBM2e stacks, 12 512-bit memory controllers
60 MB of L2 cache
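As a sanity check on the ~60 TFLOPS figure quoted above, peak FP32 throughput can be estimated as cores × 2 FLOPs (one fused multiply-add per cycle) × clock. A minimal sketch, assuming a boost clock of roughly 1.78 GHz (the clock is not stated in the article; it is an assumption chosen to illustrate the arithmetic):

```python
def peak_fp32_tflops(cuda_cores: int, boost_clock_ghz: float) -> float:
    """Peak FP32 throughput: each CUDA core retires one FMA (2 FLOPs) per cycle."""
    return cuda_cores * 2 * boost_clock_ghz / 1000  # GFLOPS -> TFLOPS

# 16,896 CUDA cores is the article's figure; 1.78 GHz is an assumed boost clock.
print(round(peak_fp32_tflops(16896, 1.78), 1))  # ~60.1 TFLOPS
```

The same formula applied to the full 18,432-core GH100 die would give a proportionally higher number, which is why the leaked 60 TFLOPS figure points at a partially enabled configuration.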
As for the ADLCE card, it is believed to be an engineering sample (ES) of the RTX 4090, but with its TDP limited to 350 W, so its single-precision performance is only a little over 60 TFLOPS.
CTOnews.com notes that the H100, released in April 2022, packs 80 billion transistors and uses a number of groundbreaking technologies, including a powerful new Transformer Engine and NVIDIA NVLink interconnect, to accelerate the largest AI models, such as advanced recommendation systems and large language models, and to drive innovation in areas such as conversational AI and drug discovery.
Nvidia said the H100 enables companies to cut the cost of deploying AI: compared with the previous generation, it delivers the same AI performance with 3.5 times better energy efficiency, reduces total cost of ownership to one-third, and requires fewer server nodes.
The NVIDIA DGX H100 system is now open for customer pre-orders. Each system contains eight H100 GPUs, with peak FP8 performance of up to 32 PFLOPS. Every DGX system also includes NVIDIA Base Command and NVIDIA AI Enterprise software, enabling cluster deployments from a single node up to an NVIDIA DGX SuperPOD and supporting advanced AI development for large language models and other demanding workloads.
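The 32 PFLOPS figure for the eight-GPU DGX H100 is consistent with Nvidia's published per-GPU FP8 peak of roughly 3.96 PFLOPS (with sparsity) for the H100 SXM. A quick check of the aggregate:

```python
# Per-GPU FP8 peak (~3.958 PFLOPS with sparsity) is Nvidia's published H100 SXM
# datasheet figure; eight of them land near the 32 PFLOPS quoted for the DGX H100.
per_gpu_fp8_pflops = 3.958
gpus_per_dgx = 8

total_pflops = per_gpu_fp8_pflops * gpus_per_dgx
print(round(total_pflops, 1))  # ~31.7 PFLOPS, marketed as "up to 32 PFLOPS"
```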
H100 systems from the world's leading computer makers are expected to ship in the coming weeks, with more than 50 server models on the market by the end of this year and dozens more in the first half of 2023. Partners already building systems include Atos, Cisco, Dell Technologies, Fujitsu, GIGABYTE, Hewlett Packard Enterprise, Lenovo, and Supermicro.
© 2024 shulou.com SLNews company. All rights reserved.