Intel, ARM and Nvidia promote draft specification to unify AI data interchange format

2025-02-27 Update From: SLTechnology News&Howtos

Shulou (Shulou.com) 11/24 Report --

On September 15, chip makers Intel, ARM and Nvidia jointly released a draft specification for a common interchange format for artificial intelligence, aimed at making AI processing faster and more efficient.

In the draft, Intel, ARM and Nvidia recommend that artificial intelligence systems use the 8-bit FP8 floating-point format. They say FP8 has the potential to optimize hardware memory usage and thereby accelerate AI development. The format is suitable for both AI training and inference, and should help produce faster and more efficient AI systems.

When developing artificial intelligence systems, data scientists face not only the problem of collecting large amounts of data to train the system, but also the choice of a format for representing the system's weights, the values an AI model learns from its training data and that determine the quality of its predictions. Weights are what allow an AI system such as GPT-3 to generate whole paragraphs from a sentence-long prompt, or DALL-E 2 to produce realistic portraits from a caption.
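For a rough sense of scale, the sketch below estimates how much memory the weights of a GPT-3-sized model (175 billion parameters, the publicly reported figure) would occupy at 32-, 16- and 8-bit precision. The parameter count and the back-of-the-envelope arithmetic are illustrative assumptions, not numbers taken from the draft specification.

```python
# Back-of-the-envelope weight storage at different precisions.
# 175e9 parameters is the publicly reported size of GPT-3; the figures
# below are illustrative only.

NUM_PARAMS = 175_000_000_000

FORMATS = {
    "FP32 (single precision)": 4,  # bytes per weight
    "FP16 (half precision)":   2,
    "FP8  (proposed format)":  1,
}

for name, bytes_per_weight in FORMATS.items():
    total_gb = NUM_PARAMS * bytes_per_weight / 1e9
    print(f"{name:26s} ~{total_gb:,.0f} GB of weights")
```

Halving the width of every weight halves the memory it occupies, which is the efficiency argument the draft makes for moving from 16 bits down to 8.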

The formats most commonly used for weights in AI systems are half-precision floating point (FP16) and single-precision floating point (FP32), which represent each weight with 16 and 32 bits of data respectively. Half-precision and lower-precision floating-point numbers reduce the memory needed to train and run AI systems, speed up computation, and can even cut bandwidth and power consumption. However, because they have fewer bits than single-precision numbers, accuracy is reduced.
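A minimal NumPy sketch of that tradeoff, assuming nothing beyond standard NumPy (which has no native 8-bit float, so only the FP32-to-FP16 step is shown): casting halves the memory footprint but introduces rounding error.

```python
import numpy as np

# A few example "weights" stored in single precision (FP32).
weights_fp32 = np.array([0.1234567, 1e-5, 3.1415926, 65504.0], dtype=np.float32)

# Cast to half precision (FP16): half the memory, fewer significant digits.
weights_fp16 = weights_fp32.astype(np.float16)

print("FP32 bytes:", weights_fp32.nbytes)  # 16 bytes (4 values x 4 bytes)
print("FP16 bytes:", weights_fp16.nbytes)  # 8 bytes  (4 values x 2 bytes)

# Rounding error introduced by the narrower format.
error = np.abs(weights_fp32 - weights_fp16.astype(np.float32))
print("FP16 values:", weights_fp16)
print("abs error:  ", error)
```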

However, many companies in the industry, including Intel, ARM and Nvidia, see the 8-bit FP8 floating-point format as the best choice. Shar Narasimhan, director of product marketing at Nvidia, pointed out in a blog post that in use cases such as computer vision and image-generating systems, FP8 matches the accuracy of half-precision numbers while delivering "significant" speedups.

Nvidia, ARM and Intel say they will make the FP8 floating-point format an open standard that other companies can use without a license. The three companies describe FP8 in detail in a white paper. Narasimhan said the specifications would be submitted to the IEEE, the technical standards body, to see whether the FP8 format can become a common standard across the artificial intelligence industry.
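The white paper describes two FP8 variants, E4M3 (4 exponent bits, 3 mantissa bits, favoring precision) and E5M2 (5 exponent bits, 2 mantissa bits, favoring range). The short sketch below derives their approximate numeric ranges from those published bit layouts; it is a reading of the proposal, not code from any of the three companies.

```python
# Approximate numeric ranges of the two FP8 variants in the white paper.

# E5M2 follows IEEE-754 conventions (the all-ones exponent is reserved for
# infinities and NaNs), so its largest finite value uses exponent 30, bias 15.
e5m2_bias = 15
e5m2_max = (1 + 0.75) * 2 ** (30 - e5m2_bias)        # 1.75 * 2^15 = 57344
e5m2_min_subnormal = 2 ** (1 - e5m2_bias) * 2 ** -2  # 2^-16

# E4M3 keeps no infinities and only one NaN mantissa pattern, so the all-ones
# exponent still encodes ordinary values; its largest finite value is 1.75 * 2^8.
e4m3_bias = 7
e4m3_max = (1 + 0.75) * 2 ** (15 - e4m3_bias)        # 448
e4m3_min_subnormal = 2 ** (1 - e4m3_bias) * 2 ** -3  # 2^-9

print(f"E5M2: ~{e5m2_min_subnormal:.1e} .. {e5m2_max:,.0f}  (wider range)")
print(f"E4M3: ~{e4m3_min_subnormal:.1e} .. {e4m3_max:.0f}  (more precision)")
```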

"We believe that a common exchange format will lead to rapid advances in hardware and software platforms, improved interoperability, and thus advances in artificial intelligence computing," Narasimhan said. "

Of course, the three companies' push to make FP8 a common interchange format also reflects their own products: Nvidia's GH100 Hopper architecture already supports the FP8 format, and Intel's Gaudi2 artificial intelligence training chipset supports it as well.

But a common FP8 format would also benefit competitors such as SambaNova, AMD, Groq, IBM, Graphcore and Cerebras, all of which have experimented with or adopted the FP8 format when developing AI systems. Simon Knowles, co-founder and chief technology officer of AI system developer Graphcore, wrote in a blog post in July that "the emergence of 8-bit floating-point numbers brings huge advantages to artificial intelligence computing in terms of processing performance and efficiency." Knowles also called it "an opportunity" for the industry to settle on a "single open standard", which would be far better than competing with one another across multiple formats.
