CTOnews.com reports that NVIDIA today published a press release stating that its H100 GPUs set six new records in the MLPerf benchmarks.
CTOnews.com reported in June that a cluster of 3,584 H100 GPUs completed a GPT-3-based large-scale benchmark in just 11 minutes.
The MLPerf LLM benchmark is based on OpenAI's GPT-3 model, which contains 175 billion parameters.
Lambda Labs estimates that training such a large model requires approximately 3.14E23 FLOPs of compute.
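As a rough sanity check, that figure is close to what the widely used approximation of about 6 FLOPs per parameter per training token gives; the ~300 billion token count in the sketch below is an assumption taken from the GPT-3 paper, not from this article or Lambda Labs' announcement.

# Hedged sanity check of the ~3.14E23 FLOPs estimate using the common
# compute ~= 6 * parameters * training-tokens approximation.
params = 175e9   # GPT-3 parameter count cited above
tokens = 300e9   # assumed training-token count (GPT-3 paper); not stated in this article
flops = 6 * params * tokens
print(f"{flops:.2e} FLOPs")   # ~3.15e+23, close to Lambda Labs' 3.14E23 figure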
NVIDIA's latest Eos AI supercomputer, equipped with 10,752 H100 Tensor Core GPUs and NVIDIA Quantum-2 InfiniBand networking, completed the GPT-3-based benchmark in just 3.9 minutes, a full seven minutes faster than June's result.
The other record NVIDIA highlights in the post is progress in "system scaling": through various software optimizations, scaling efficiency has been raised to 93%.
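The 93% figure can be roughly reproduced from the numbers quoted above; the short sketch below is only a back-of-the-envelope check using this article's figures, not NVIDIA's own methodology.

# Back-of-the-envelope scaling-efficiency check from the figures in this article.
gpus_june, minutes_june = 3584, 11.0    # June run: GPUs, training time in minutes
gpus_eos, minutes_eos = 10752, 3.9      # Eos run: GPUs, training time in minutes
speedup = minutes_june / minutes_eos    # ~2.8x faster
scale_up = gpus_eos / gpus_june         # 3.0x more GPUs
efficiency = speedup / scale_up         # ~0.94, roughly in line with the ~93% claim
print(f"speedup {speedup:.2f}x, scaling efficiency {efficiency:.0%}")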
Efficient scaling matters across the industry because reaching higher compute requires adding more hardware, and without adequate software support, the added hardware delivers far less of its potential performance.