In this article the editor shares an analysis of multi-queue network card hardware on Linux. Most people are not very familiar with the topic, so it is offered here for your reference; I hope you learn a lot from reading it.
The multi-queue network card is a technology originally introduced to solve the problem of network I/O QoS (quality of service). Later, as network I/O bandwidth kept increasing, a single CPU core could no longer keep up with the network card. With multi-queue driver support, each queue's interrupt is bound to a different core, so the card's full throughput can be serviced.
Common examples include the Intel 82575 and 82576 and the Broadcom 57711. Taking the Intel 82575, which our company's servers use widely, as an example, this article analyzes the hardware implementation of multi-queue network cards and the Linux kernel's software support for them.
1. Hardware implementation of multi-queue network card
Figure 1.1 shows the hardware logic of the Intel 82575, which has four hardware queues. When a packet is received, the card hashes the (SIP, SPort, DIP, DPort) four-tuple of the packet header, so all packets of the same flow always land in the same queue, and the interrupt bound to that queue is triggered.
Figure 1.1 82575 hardware logic diagram
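If you want to verify this on a live system, recent versions of ethtool can report which header fields the NIC hashes when choosing an RX queue. A minimal check, assuming the interface is named eth0 and the driver supports the query:

# Show the fields hashed to pick an RX queue for TCP/IPv4 flows;
# the output should list IP SA/DA and the TCP src/dst ports,
# i.e. the (SIP, SPort, DIP, DPort) four-tuple described above.
ethtool -n eth0 rx-flow-hash tcp4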
2. Network card driver implementation before kernel 2.6.21
Before 2.6.21 the kernel did not support the multi-queue feature: a network card could request only one interrupt number, so at any given moment only one core was processing the packets it received. As shown in figure 2.1, the protocol stack uses NAPI polling to pull packets from each hardware queue into the net_device data structure of figure 2.2, and transmits packets to the network card through the QDisc queue.
Figure 2.1 pre-2.6.21 kernel protocol stack
Figure 2.2 net_device before 2.6.21
3. Network card driver implementation after kernel 2.6.21
Starting with 2.6.21 the kernel supports the multi-queue feature. When the network card driver is loaded, it determines the number of hardware queues from the card model and, combining this with the number of CPU cores, computes the number of queues to activate as Sum = min(NIC queues, CPU cores); it then requests Sum interrupt numbers and assigns one to each active queue.
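You can observe the result of this negotiation with ethtool's channel interface. A minimal sketch, assuming the interface is eth0 and the driver supports the -l/-L options (not all drivers do):

# Compare the hardware maximum with the counts the driver enabled
# (usually min(NIC queues, CPU cores), as described above).
ethtool -l eth0

# Where supported, explicitly activate 4 combined rx/tx queues.
ethtool -L eth0 combined 4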
As shown in figure 3.1, when a queue receives a packet, its interrupt is triggered, and the core that takes the interrupt raises NET_RX_SOFTIRQ to handle the packet in the protocol stack (NET_RX_SOFTIRQ has an instance on each core). Inside NET_RX_SOFTIRQ, the NAPI receive interface is called to pull the packet into the net_device data structure, which, as shown in figure 3.2, now holds multiple netdev_queue entries.
In this way all CPU cores can receive packets concurrently, so network I/O performance no longer degrades because a single core cannot keep up with the load.
Figure 3.1 kernel protocol stack after 2.6.21
Figure 3.2 net_device after 2.6.21
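The per-core NET_RX_SOFTIRQ instances mentioned above can be observed directly. A quick look, assuming the sysstat package is installed for mpstat:

# Per-core softirq counters; the NET_RX row shows how receive
# processing is distributed across cores.
grep -E 'CPU|NET_RX' /proc/softirqs

# Live view of the time each core spends in softirq context (%soft).
mpstat -P ALL 1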
4. Interrupt binding
When the CPU cores receive packets in parallel, different cores may pick up packets from the same queue, which leads to packet reordering. The solution is to bind each queue's interrupt to a single, fixed core, which avoids the reordering problem. At the same time, when network traffic is heavy, the soft interrupts can be spread evenly across the cores so that no single CPU becomes the bottleneck.
Figure 4.1 /proc/interrupts
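Binding is done through the smp_affinity file under /proc/irq. A minimal sketch, reusing the interrupt number 53 that appears later in this article (your IRQ numbers will differ) and assuming the interface is eth0:

# List the IRQ numbers assigned to the NIC's queues.
grep eth0 /proc/interrupts

# Bind IRQ 53 to core 0: the value is a hex CPU bitmask
# (1 = CPU0, 2 = CPU1, 4 = CPU2, ...). Run as root, and note
# that a running irqbalance daemon may overwrite the setting.
echo 1 > /proc/irq/53/smp_affinity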
5. Interrupt affinity correction
Some multi-queue NIC drivers are not implemented very well. After initialization, the tx and rx interrupts of the same queue in figure 4.1 end up bound to different cores, so the data of one flow bounces between core0 and core1; the increased cross-core data exchange lowers the cache hit rate and reduces efficiency.
Figure 5.1 unreasonable interrupt binding
David Miller, the maintainer of the Linux network subsystem, provides a script that first reads /proc/interrupts, derives an interrupt MASK from the VEC of each eth0-rx-0 ($VEC) entry in figure 4.1, and writes that MASK into the smp_affinity file of the corresponding interrupt number (53 in this example). Because eth0-rx-0 and eth0-tx-0 share the same VEC, the rx and tx interrupts of the same queue end up bound to the same core, as shown in figure 4.3.
Figure 4.2 set_irq_affinity
Figure 4.3 reasonable interrupt binding
The set_irq_affinity script is available at http://mirror.oa.com/tlinux/tools/set_irq_affinity.sh.
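That mirror may not be reachable from everywhere, so here is a minimal sketch in the same spirit (not the original script): it binds the rx and tx interrupts of each queue to the same core by reusing the queue index as the core number. It assumes at most 32 cores and /proc/interrupts entries named like eth0-rx-0 / eth0-tx-0, and must be run as root.

#!/bin/bash
# Sketch only, not David Miller's original set_irq_affinity.sh.
# Bind eth0-rx-N and eth0-tx-N to CPU N so that the rx and tx
# interrupts of each queue land on the same core.
IFACE=${1:-eth0}
for dir in rx tx; do
    grep "${IFACE}-${dir}-" /proc/interrupts | while read -r line; do
        irq=$(echo "$line" | cut -d: -f1 | tr -d ' ')
        queue=$(echo "$line" | sed "s/.*${IFACE}-${dir}-\([0-9]*\).*/\1/")
        # Hex bitmask with only bit $queue set (CPU N <-> bit N);
        # valid for up to 32 cores without comma-separated masks.
        mask=$(printf '%x' $((1 << queue)))
        echo "$mask" > "/proc/irq/${irq}/smp_affinity"
        echo "${IFACE}-${dir}-${queue}: IRQ ${irq} -> CPU ${queue} (mask ${mask})"
    done
done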
The above is the full content of "Sample Analysis of Multi-queue Network Card Hardware in Linux". Thank you for reading! I hope it has given you a solid understanding of the topic; if you want to learn more, follow the industry information channel.