

Explainer: What exactly is the DPU that everyone is talking about?

2025-02-27 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)11/24 Report--

Whether or not you work in the ICT industry, there is one term you have almost certainly heard over the past two years: DPU.

As a new technology concept, DPU has risen at an astonishing speed and become a focus of the entire industry, even of society at large. The investment community in particular is chasing DPU as a technology with the potential to disrupt the industry.

So what exactly is a DPU? How does it differ from the familiar CPU and GPU?

Today, Xiao Zaojun will walk you through the ins and outs of the DPU.

█ What is DPU

The full English name of DPU is Data Processing Unit, that is, a data processor.

Jensen Huang (Huang Renxun), CEO of Nvidia, said in a speech: "The DPU will be one of the three pillars of future computing. The data center of the future will be built on 'CPU + DPU + GPU': the CPU for general-purpose computing, the GPU for accelerated computing, and the DPU for data processing."

Sounds a bit confusing. What kind of data processing does a DPU actually do? With a CPU and a GPU already in place, why do we need a DPU at all?

As we all know, computers have long been built on the famous von Neumann architecture.

Von Neumann architecture

This is an architecture centered on computing and storage. The CPU, as the processing unit, handles all kinds of arithmetic and logical operations, while memory (internal storage) and the hard disk (external storage) store data and exchange it with the CPU.

In addition to the CPU, memory, and hard drive, there are input and output devices such as the keyboard and monitor. Over time came the mouse, then the graphics card and the network card, eventually forming the basic structure of the computer we all know today.

With the graphics card came the GPU (Graphics Processing Unit). As games, 3D design, and other graphics-intensive software developed rapidly, the graphics workload grew ever larger and more complex. The CPU simply could not keep up, so the GPU emerged as a processor specialized in image and graphics operations, relieving the CPU of that burden.

The DPU emerged for the same reason: some complex workloads have become too heavy for the CPU to carry alone, so the work has to be divided up.

█ What exactly does DPU do?

Now we finally come to the key point. After all this build-up, what kind of work does the DPU actually take over from the CPU?

To be honest, the work offloaded to the DPU is a little difficult to explain in purely technical terms. It is easier to understand through the scenarios where it is used.

Roughly speaking, the work taken over by the DPU can be summarized with four keywords: virtualization, networking, storage, and security.

One important reminder: the DPU is a concept that revolves around the data center. In other words, it is mainly used in large-scale computing scenarios such as data centers, not in our desktops, laptops, or mobile phones (at least not yet).

The DPU serves cloud computing. Its main role is to improve the efficiency of computing infrastructure such as data centers, reduce wasted energy, and thereby cut costs.

Virtualization, networking, storage, and security are all critical tasks in the data center, and they all consume enormous amounts of computing resources.

Take networking as an example.

In a data center, huge volumes of data are being transmitted at all times. To send and receive that data, hosts must perform a great deal of network protocol processing. In the traditional computing architecture, all of this protocol processing is done by the CPU.

It has been estimated that processing a 10G network at wire speed takes roughly 4 Xeon CPU cores. In other words, packet processing alone can occupy half of an 8-core CPU. And data center networks keep upgrading, from 10G to 40G, 100G, even 400G. How can the CPU possibly absorb that overhead?

This overhead is so large that it has earned a name: the "Datacenter Tax".
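A quick back-of-envelope sketch of this "Datacenter Tax". All the figures below (cycles per packet, core clock speed, frame size) are illustrative assumptions, not measurements:

```python
# Rough estimate: how many CPU cores does wire-speed packet processing
# consume? The constants are assumed ballpark figures for illustration.

def cores_needed(link_gbps, frame_bytes=1500, cycles_per_packet=10_000,
                 core_ghz=2.5):
    """Estimate CPU cores needed to process a link at wire speed.

    cycles_per_packet bundles interrupt handling, protocol parsing,
    memory copies, etc. -- an assumed figure, not a measurement.
    """
    # Packets per second the link can deliver (ignoring framing overhead).
    pps = link_gbps * 1e9 / 8 / frame_bytes
    # Cycles per second that one core provides.
    cycles_per_core = core_ghz * 1e9
    return pps * cycles_per_packet / cycles_per_core

for gbps in (10, 40, 100, 400):
    print(f"{gbps:>3} Gbps -> ~{cores_needed(gbps):.1f} cores")
```

With these assumptions, a 10G link already eats roughly 3-4 cores, which matches the Xeon estimate above, and the cost scales linearly as the link speed climbs toward 400G.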

Before the business application even runs, simply moving network data consumes this much computing resource, which is unacceptable. So some companies proposed the SmartNIC (intelligent network card): "offload" the network protocol processing from the CPU onto the network card itself, relieving the CPU of that load.

In 2015, the cloud computing company AWS was the first to explore this SmartNIC model. It acquired the chipmaker Annapurna Labs and officially launched the Nitro system in 2017. In the same year, Alibaba Cloud (Aliyun) announced its X-Dragon architecture with similar functions.

In March 2019, Nvidia bought the Israeli chip company Mellanox for $6.9 billion. Combining Mellanox's ConnectX series of high-speed network card technology with its own, Nvidia officially launched two DPU products in 2020: the BlueField-2 DPU and the BlueField-2X DPU.

Since then, the DPU concept has entered the public eye, and 2020 is often called "year one" of the DPU.

Because of this lineage, the DPU is generally regarded as an extended and upgraded version of the SmartNIC.

On top of the SmartNIC's networking duties, the DPU also offloads storage, security, and virtualization workloads from the CPU onto itself.

In the late 1990s, when virtualization technology, represented by VMware, first appeared, it was implemented entirely in software with no hardware support, so performance was very poor and it was barely usable.

Then, around 2005, as the technology evolved, hardware virtualization of the CPU and memory was gradually solved. This greatly improved the performance of virtualized systems and unlocked the technology's development prospects and value. As we all know, the entire cloud computing architecture is built on virtualization.

The history of virtualization is a history of hardware capabilities steadily replacing software capabilities. The AWS Nitro system mentioned earlier not only provides a SmartNIC but also implements hardware virtualization of I/O, and it offloads the hypervisor from the CPU to dedicated hardware. As a result, the performance loss of virtualization approaches zero, and the burden on the CPU is further reduced.

The same is true of storage.

Today's data centers demand very high storage read/write rates. As SSD prices have fallen, attaching SSDs to the system over local PCIe or high-speed networks has become the mainstream approach. For distributed systems, on top of InfiniBand, FC (Fibre Channel), and Ethernet, RDMA (Remote Direct Memory Access) technology has become popular.

In RDMA mode, application data no longer passes through the CPU and the complex operating system; it is exchanged directly with the network card. This means the DPU can take over the protocol processing for high-speed storage interface standards, further relieving the CPU.
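A toy sketch of the contrast (this is not a real network stack; the buffer names and copy counts are simplified assumptions): the traditional receive path copies data through the kernel using CPU cycles, while RDMA places data directly into application memory with no CPU copies at all:

```python
# Illustrative only: count the CPU-driven memory copies on each path.

payload = bytes(64 * 1024)  # a 64 KiB message arriving from the network

def traditional_receive(data):
    """Kernel path: NIC -> kernel socket buffer -> user buffer.

    Each hop below stands in for a CPU-driven memory copy (protocol
    processing is omitted for brevity)."""
    copies = 0
    kernel_buf = bytearray(data)    # copy into the kernel's socket buffer
    copies += 1
    user_buf = bytearray(kernel_buf)  # copy_to_user() into the application
    copies += 1
    return bytes(user_buf), copies

def rdma_receive(data):
    """RDMA path: the NIC DMAs directly into a pre-registered user
    buffer; the kernel is bypassed and the CPU performs zero copies."""
    user_buf = data  # placed in application memory by the NIC itself
    return user_buf, 0

t_data, t_copies = traditional_receive(payload)
r_data, r_copies = rdma_receive(payload)
print(f"traditional path: {t_copies} CPU copies; RDMA path: {r_copies}")
```

The data that arrives is identical either way; what differs is how many times the CPU has to touch it, which is exactly the overhead a DPU absorbs.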

Finally, let's look at security.

With today's increasingly severe security landscape, large numbers of cryptographic algorithms are deployed to keep networks and systems secure and reliable. In the past, the CPU was responsible for all of this encryption and decryption.

In fact, the network interface is the ideal privacy boundary, and performing encryption and decryption there makes the most sense. Algorithms such as SM2 (asymmetric encryption), SM3 (hashing), and SM4 (symmetric block cipher) from China's national cryptographic standards can all be computed by the DPU. In the future, once blockchain technology matures and sees wide application, its algorithms can likewise be offloaded from the CPU to the DPU.
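As an illustration of the kind of per-byte work being offloaded, consider hashing traffic at the interface. Note the assumptions here: SM3 is only available in Python's `hashlib` when the underlying OpenSSL build provides it, so this sketch falls back to SHA-256 purely to show the cost profile a DPU would take over; it is a stand-in, not DPU code:

```python
import hashlib

def hash_at_boundary(packet: bytes) -> str:
    """Hash traffic at the network interface -- the 'privacy boundary'
    described above. On a DPU this digest would be computed by dedicated
    hardware instead of burning a CPU core."""
    try:
        h = hashlib.new("sm3")   # SM3, if the OpenSSL build exposes it
    except ValueError:
        h = hashlib.sha256()     # stand-in with a similar 256-bit output
    h.update(packet)
    return h.hexdigest()

digest = hash_at_boundary(b"example packet payload")
print(digest)
```

Whichever algorithm is available, the digest is 256 bits (64 hex characters); the point is that this work happens once per packet, which is exactly why pushing it to the network interface pays off.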

To sum up, the essence of the DPU's function is offload, acceleration, and isolation: it offloads part of the CPU's work onto itself; it uses its specialized computing power to accelerate those jobs; and in the process it isolates that computing from the host.

█ The future prospects of DPU

The DPU is a new type of programmable multi-core processor, an SoC (System on Chip). It meets industry standards, delivers high computing power, and offers a high-performance network interface that can parse and process data at high speed and move it efficiently to the CPU and GPU.

The biggest difference between a DPU and a CPU is that the CPU excels at general-purpose computing (it can take on any task; it is a generalist), while the DPU excels at infrastructure-layer tasks (specific jobs; a specialist), such as network protocol processing, switching and routing, encryption and decryption, and data compression, the "dirty work" of the data center.

The DPU is therefore the CPU's best helper. Together with the CPU and GPU it forms an "iron triangle" that will completely reshape the data center's computing model.

That is why the DPU is attracting so much attention these days.

As mentioned at the beginning of this article, the DPU's current popularity is hard to overstate, and capital's enthusiasm for it is even more striking. Giants and startups alike are flocking to the DPU race. The market keeps heating up, and the outlook is promising.

Take Nvidia as an example. After the BlueField-2 DPU and BlueField-2X, Nvidia released its next-generation data processor, the BlueField-3 DPU, in April 2021.

BlueField-3 DPU

It is the first DPU designed for AI and accelerated computing, optimized for multi-tenant, cloud-native environments, and it provides data-center-scale software-defined, hardware-accelerated networking, storage, security, and management services.

It is claimed that the data center services delivered by a single BlueField-3 DPU are equivalent to what would otherwise require up to 300 x86 cores, freeing a large amount of CPU capacity to run business-critical applications.

To fully realize the DPU's core value in the modern data center, software support is indispensable. In other words, a chip without software is just expensive sand.

To build a stronger DPU ecosystem, Nvidia launched a software development platform tailored to the BlueField DPU: NVIDIA DOCA.

DOCA stands for Data Center Infrastructure On A Chip Architecture. With DOCA, developers can use industry-standard APIs to quickly create networking, storage, security, and management services, as well as a range of AI/HPC applications and services, on NVIDIA BlueField DPUs.

In May 2022, NVIDIA released DOCA 1.3. This version added 121 new API interfaces as well as features such as the DOCA Flow library, a communication channel library (Communication Channel), a regular expression library (Regular Expression), OVN-based data path encryption, and services such as HBN (host-based networking), all of which were welcomed by developers.

Architecture of DOCA 1.3

More recently, NVIDIA released DOCA 1.4, which supports DPU firmware upgrades without rebooting the host, 32GB DDR memory on BlueField-2 DPU 25G and 100G w/ BMC products, new host support for AArch64 servers, and routing based on longest-prefix-match (LPM) pipes.

With DOCA 1.4, developers can load the development environment onto a BlueField DPU more flexibly, simply, and quickly, and thus bring new products to market faster.

█ Conclusion

According to forecasts, the global DPU market is expected to reach 12 billion US dollars by 2025.

As Moore's Law approaches its limits, making more efficient use of computing resources means vigorously developing the DPU and dividing the labor sensibly among CPU, GPU, and DPU so that each focuses on what it does best. Only then can data centers reach maximum energy efficiency and provide a strong, green engine for the digital transformation of society as a whole.



