
Linking more than 4,000 chips, Google says its supercomputer is faster and more energy-efficient than Nvidia's

2025-01-31 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)11/24 Report--

CTOnews.com reported on April 5 that Google, a subsidiary of Alphabet Inc., released new details on Tuesday about the supercomputers it uses to train artificial-intelligence models, saying these systems are faster and more power-efficient than comparable systems from Nvidia.

Google designed its own chip, the Tensor Processing Unit (TPU), to train artificial-intelligence models; TPUs handle more than 90 per cent of the company's AI training. These models are used for tasks such as answering questions in human language or generating images.

According to CTOnews.com, Google's TPU is now in its fourth generation. In a scientific paper published on Tuesday, Google detailed how it uses custom optical switches to connect more than 4,000 chips into a single supercomputer.

Improving these connections has become a key point of competition among companies building artificial-intelligence supercomputers, because the so-called large language models that power technologies such as Google's Bard or OpenAI's ChatGPT have grown too large to fit on a single chip.

Instead, these models must be split across thousands of chips, which then work together for weeks or more to train the model. Google's PaLM model, its largest publicly disclosed language model to date, was trained over 50 days by splitting it across two 4,000-chip supercomputers.

Google says its supercomputers can easily reconfigure connections between chips in real time, helping to avoid problems and improve performance.

Google researcher Norm Jouppi and Google Distinguished Engineer David Patterson wrote in a blog post about the system: "Circuit switching makes it easy for us to bypass faulty components. This flexibility even allows us to change the topology of the supercomputer interconnect to accelerate the performance of an ML (machine learning) model."

Although Google has only now released details of the supercomputer, it has been running internally since 2020 in a data center in Mayes County, Oklahoma. Google says the start-up Midjourney uses the system to train its model, which generates images from text prompts.

Google said in its paper that, for systems of the same size, its supercomputer is up to 1.7 times faster and 1.9 times more energy-efficient than a system based on Nvidia's A100 chip. Google said it did not compare its fourth-generation product with Nvidia's current flagship H100, which came to market after Google's chip and is made with newer technology. Google has hinted that it may be developing a new TPU to compete with the H100.
