Install DPDK
Official URL: https://github.com/iqiyi/dpvs
DPDK 17.05.2 is compatible with DPVS.
wget https://fast.dpdk.org/rel/dpdk-17.05.2.tar.xz
tar xvf dpdk-17.05.2.tar.xz
Download DPVS
git clone https://github.com/iqiyi/dpvs.git
Patch DPDK and add the KNI driver
cd dpvs
cp patch/dpdk-stable-17.05.2/*.patch ../dpdk-stable-17.05.2/
cd ../dpdk-stable-17.05.2/
patch -p1 < 0001-PATCH-kni-use-netlink-event-for-multicast-driver-par.patch
The other patch adds the UOA module:
patch -p1 < 0002-net-support-variable-IP-header-len-for-checksum-API.patch
Build and install DPDK
make config T=x86_64-native-linuxapp-gcc
make
export RTE_SDK=$PWD
Enable hugepages (the server is a NUMA system running CentOS)
echo 8192 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
echo 8192 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
mkdir /mnt/huge
mount -t hugetlbfs nodev /mnt/huge
Note: this configuration is temporary. If the server runs other applications, they may already be using hugepages. A persistent method is given later.
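To confirm the reservation took effect, and to keep the hugetlbfs mount across reboots, something like the following can be used (the /etc/fstab entry is an assumption based on standard practice, not part of the original steps):
grep Huge /proc/meminfo
echo 'nodev /mnt/huge hugetlbfs defaults 0 0' >> /etc/fstab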
Install the UIO driver and bind the NIC
modprobe uio
cd dpdk-stable-17.05.2
insmod build/kmod/igb_uio.ko
insmod build/kmod/rte_kni.ko
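As an optional sanity check (not part of the original steps), confirm that both kernel modules actually loaded before continuing:
lsmod | grep -E 'igb_uio|rte_kni'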
Check the NIC status:
./usertools/dpdk-devbind.py --status
Network devices using kernel driver
===================================
0000:01:00.0 'I350 Gigabit Network Connection 1521' if=eth0 drv=igb unused=
0000:01:00.1 'I350 Gigabit Network Connection 1521' if=eth2 drv=igb unused=
0000:01:00.2 'I350 Gigabit Network Connection 1521' if=eth3 drv=igb unused=
0000:01:00.3 'I350 Gigabit Network Connection 1521' if=eth4 drv=igb unused=
Bind eth3
./usertools/dpdk-devbind.py -b igb_uio 0000:01:00.2
Note: the NIC you bind here should not be in use, because it has to be brought down before it can be bound.
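For example, assuming eth3 really is the idle NIC at 0000:01:00.2 as in the listing above, the sequence would look roughly like this:
ip link set eth3 down
./usertools/dpdk-devbind.py -b igb_uio 0000:01:00.2
./usertools/dpdk-devbind.py --status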
Compile DPVS
cd dpdk-stable-17.05.2/
export RTE_SDK=$PWD
cd ../dpvs
make
make install
Note: the build may fail because of missing dependency packages; install whichever package the error message names with yum and retry.
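Which packages are missing depends on the machine; on CentOS the usual suspects are development headers along these lines (an illustrative list, not taken from the article):
yum install -y gcc make kernel-devel numactl-devel openssl-devel popt-devel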
The compiled binaries:
ls bin/
dpip  dpvs  ipvsadm  keepalived
Start DPVS
cp conf/dpvs.conf.single-nic.sample /etc/dpvs.conf
cd bin/
./dpvs &
Check that it started properly
./dpip link show
1: dpdk0: socket 0 mtu 1500 rx-queue 8 tx-queue 8
    UP 10000 Mbps full-duplex fixed-nego promisc-off
    addr A0:36:9F:9D:61:F4 OF_RX_IP_CSUM OF_TX_IP_CSUM OF_TX_TCP_CSUM OF_TX_UDP_CSUM
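It can also be worth confirming that the dpvs process stays up and that no addresses are configured yet (a hedged extra check using the same dpip tool's addr subcommand, which is used below to add the LAN IP and VIP):
ps -ef | grep dpvs
./dpip addr show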
Take DR mode as an example
The official tutorial, https://github.com/iqiyi/dpvs/blob/master/doc/tutorial.md, covers the various LVS mode configurations.
Add the LAN IP 192.168.1.37 to dpvs; this step must come before adding the VIP
./dpip addr add 192.168.1.37/24 dev dpdk0
Add the VIP 192.168.1.57 to dpvs
./dpip addr add 192.168.1.57/32 dev dpdk0
Create the service for VIP 192.168.1.57 port 80 with the rr scheduling algorithm
./ipvsadm -A -t 192.168.1.57:80 -s rr
Add the back-end real server 192.168.1.11
./ipvsadm -a -t 192.168.1.57:80 -r 192.168.1.11 -g
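At this point the virtual service and real server can be verified from the dpvs side with the ipvsadm binary built above:
./ipvsadm -ln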
Execute on the 192.168.1.11 machine (in DR mode the real server needs the VIP on its loopback interface)
ip addr add 192.168.1.57/32 dev lo
sysctl -w net.ipv4.conf.lo.arp_ignore=1
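Standard LVS DR-mode guides usually also suppress ARP announcements for the VIP and apply the settings to all interfaces; these extra sysctls are an assumption based on common LVS practice, not on the article:
sysctl -w net.ipv4.conf.lo.arp_announce=2
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2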
When dpvs starts, it sometimes reports an error. The culprit is memory fragmentation: the application cannot allocate enough large contiguous chunks of memory, so it falls back to many small chunks, and the number of memory segments ends up exceeding the limit of 256 set by the system.
The solution is to reserve hugepages at system boot, or as early after boot as possible, before memory becomes fragmented.
https://www.cnblogs.com/cobbliu/p/6603391.html
To save trouble, you can instead add kernel parameters to the GRUB configuration (/boot/grub2/grub.cfg on CentOS):
default_hugepagesz=1G hugepagesz=1G hugepages=8
(hugepages takes a page count, so this reserves 8 pages of 1 GB each.)
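On CentOS 7 the usual way to make this stick is to append the parameters to GRUB_CMDLINE_LINUX in /etc/default/grub and regenerate the GRUB config, roughly as follows (paths assume a BIOS-booted system):
vi /etc/default/grub    # append default_hugepagesz=1G hugepagesz=1G hugepages=8 to GRUB_CMDLINE_LINUX
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot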
To quote the conclusions of others:
Conclusion: however fast DPDK is, what it shortens is the time to receive a packet and hand it to the application layer; it does not make the "forwarding" itself fast. Once a packet is received, the various checks and table lookups (usually in a concurrent environment, with locking and unlocking and so on) take far longer than the overhead DPDK saves.
If you want to be faster than Linux, you need to understand why the Linux network stack is "slow" and in which parts it is slower than DPDK's processing. For most applications, the latency of the upper-layer business logic means further work on the network layer is no longer worthwhile. In short, whether latency can really be reduced depends on the environment: can you afford that many physical machines, is there stable and reliable engineering talent to support it, and has profiling been done to find the real bottleneck? Don't reach for DPDK just because it seems the obvious answer.
For example, for a UDP-based DNS service you can bypass the Linux stack with DPDK to raise QPS. For routing, I don't think it can match dedicated hardware, and I don't agree with that approach. To keep latency low, DPDK has to keep the CPU spinning at full load even when no packets arrive; if you push for higher throughput, latency rises with it. And if several programs share the machine, the boss is too poor or unwilling to buy good hardware, and development and operations skills don't keep up, DPDK will be crippled as well.
With roughly comparable hardware, a network-I/O-plus-memory workload can push 10 Gbps on plain Linux without trouble.
If what you do is packet forwarding, performance will certainly improve greatly compared with x86 Linux. In practice, though, most bottlenecks are not in the part of the network that DPDK handles.