This article explains the configuration steps for dual network card (NIC) bonding on a Linux system. The content is simple, clear, and easy to learn; please follow the editor's train of thought step by step as we study the configuration of dual NIC bonding on Linux together.
Create a file ifcfg-bond0 under the /etc/sysconfig/network-scripts directory, or copy the existing one: cp ifcfg-eth0 ifcfg-bond0
Edit ifcfg-bond0 (vi ifcfg-bond0) with the following content:
DEVICE=bond0
BOOTPROTO=static
IPADDR=192.168.10.1
NETMASK=255.255.255.0
GATEWAY=192.168.10.254
ONBOOT=yes
TYPE=Ethernet
Edit ifcfg-eth0 (vi ifcfg-eth0) with the following content:
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
MASTER=bond0
SLAVE=yes
Edit ifcfg-eth2 (vi ifcfg-eth2) with the following content:
DEVICE=eth2
BOOTPROTO=static
ONBOOT=yes
MASTER=bond0
SLAVE=yes
Edit /etc/modprobe.conf (vi /etc/modprobe.conf)
Add the following at the end:
alias bond0 bonding
options bond0 miimon=100 mode=0
Description: miimon is used for link monitoring. For example, miimon=100 means the system checks the link state every 100 ms; if one link fails, traffic is moved to the other link. The mode value selects the working mode; there are four modes, 0 through 3, of which two are commonly used. mode=0 (round-robin) is a load-balancing mode in which both NICs carry traffic. mode=1 (active-backup) provides fault tolerance and works in active/standby fashion: by default only one NIC is active and the other stands by as a backup. Note that bonding can only monitor the link itself, that is, whether the link from the host to the switch is up; if the switch's uplink goes down while the switch itself is not faulty, bonding still considers its own link healthy and keeps using it. With that, the bond configuration is complete.
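As an optional check before editing the file (not one of the original steps; it assumes the modinfo utility is available on your system), you can list the parameters that this kernel's bonding driver accepts:
# list the module parameters (miimon, mode, ...) supported by the bonding driver
modinfo bonding | grep parm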
Restart the network: service network restart
Look at the bond0 file under the /proc/net/bonding/ directory to see the running status of the bonded NICs.
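As a quick sanity check after the restart (a minimal sketch; the gateway address is simply the example value configured in ifcfg-bond0 above), you can confirm the bond is up and passing traffic:
cat /proc/net/bonding/bond0    # shows the mode, MII status and both slave NICs
ping -c 3 192.168.10.254       # reach the example gateway through bond0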
The concept and introduction of Linux dual NIC bonding:
The Linux dual NIC bonding implementation uses two network cards to form one virtual network card. The aggregated device looks like a single Ethernet interface. Generally speaking, the two NICs share the same IP address, and the parallel links are aggregated into one logical link. This technique has long existed at Sun and Cisco, where it is known as Trunking and EtherChannel respectively; it is also available in the Linux 2.4.x kernel, where it is called bonding. The earliest application of bonding was in Beowulf clusters, where it was designed to improve data transmission between cluster nodes.

Let's discuss how bonding works, which starts with the promiscuous (promisc) mode of the network card. Under normal circumstances a NIC only accepts Ethernet frames whose destination hardware address (MAC address) is its own MAC, and filters out other frames to reduce the load on the driver. But a NIC also supports promiscuous mode, in which it receives every frame on the network; tcpdump, for example, runs in this mode. Bonding also runs in this mode, and it modifies the MAC address in the driver so that both NICs share the same MAC address and can therefore accept frames destined for that specific MAC. The received frames are then handed to the bond driver for processing.
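As an aside to the promiscuous-mode discussion above (a hypothetical illustration using iproute2 on a current system, not a step from the RHEL 3 walkthrough), you can toggle promisc mode by hand and watch the interface flags change:
ip link set dev eth0 promisc on    # accept all frames, as tcpdump does
ip link show dev eth0              # the interface flags now include PROMISC
ip link set dev eth0 promisc off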
That is enough theory; the configuration itself is simple and takes four steps in total.
The operating system used in this experiment is Red Hat Enterprise Linux 3.0.
Prerequisites for bonding: the NICs use the same chipset model, and each NIC should have its own independent BIOS chip.
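To check whether the two NICs really use the same chipset (an optional aside; it assumes lspci is installed), list the Ethernet controllers:
lspci | grep -i ethernet    # both lines should report the same controller model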
1. Edit the virtual network interface configuration file and specify the bond's IP
vi /etc/sysconfig/network-scripts/ifcfg-bond0
[root@rhas-13 root]# cp /etc/sysconfig/network-scripts/ifcfg-eth0 ifcfg-bond0
2. # vi ifcfg-bond0
Change the first line to DEVICE=bond0
# cat ifcfg-bond0
DEVICE=bond0
BOOTPROTO=static
IPADDR=172.31.0.13
NETMASK=255.255.252.0
BROADCAST=172.31.3.254
ONBOOT=yes
TYPE=Ethernet
The idea here is not to specify an IP address, subnet mask, or ID for any single physical NIC; assign that information to the virtual adapter (bonding) instead.
[root@rhas-13 network-scripts]# cat ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=dhcp
[root@rhas-13 network-scripts]# cat ifcfg-eth2
DEVICE=eth2
ONBOOT=yes
BOOTPROTO=dhcp
3. # vi /etc/modules.conf
Edit the /etc/modules.conf file and add the following lines so that the system loads the bonding module at boot and presents bond0 as the external virtual network interface device.
Add the following two lines:
alias bond0 bonding
options bond0 miimon=100 mode=1
Description: miimon is used for link monitoring. For example, miimon=100 means the system checks the link state every 100 ms; if one link fails, traffic is moved to the other link. The mode value selects the working mode; there are four modes, 0 through 3, of which two are commonly used.
mode=0 (round-robin) is a load-balancing mode in which both NICs carry traffic.
mode=1 (active-backup) provides fault tolerance and redundancy, working in active/standby fashion: by default only one NIC is active and the other stands by as a backup.
Bonding can only monitor the link itself, that is, whether the link from the host to the switch is up. If the switch's uplink goes down while the switch itself is not faulty, bonding still considers its own link healthy and keeps using it.
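Once the bond is running, you can confirm which of these modes is actually active (a small check using the same /proc file this article queries later):
grep -E "Bonding Mode|MII Polling Interval" /proc/net/bonding/bond0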
4. # vi /etc/rc.d/rc.local
Add the following two lines:
ifenslave bond0 eth0 eth2
route add -net 172.31.3.254 netmask 255.255.255.0 bond0
At this point the configuration is complete; restart the machine.
During the restart you should see the following messages, which indicate that the configuration was successful:
.
Bringing up interface bond0 OK
Bringing up interface eth0 OK
Bringing up interface eth2 OK
.
Let's look at the two cases, mode=1 and mode=0, in turn.
mode=1 works in active/standby mode; eth2, as the backup NIC, shows NOARP.
Verify the NIC configuration:
[root@rhas-13 network-scripts]# ifconfig
bond0     Link encap:Ethernet  HWaddr 00:0E:7F:25:D9:8B
          inet addr:172.31.0.13  Bcast:172.31.3.255  Mask:255.255.252.0
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:18495 errors:0 dropped:0 overruns:0 frame:0
          TX packets:480 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1587253 (1.5 Mb)  TX bytes:89642 (87.5 Kb)
eth0      Link encap:Ethernet  HWaddr 00:0E:7F:25:D9:8B
          inet addr:172.31.0.13  Bcast:172.31.3.255  Mask:255.255.252.0
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:9572 errors:0 dropped:0 overruns:0 frame:0
          TX packets:480 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:833514 (813.9 Kb)  TX bytes:89642 (87.5 Kb)
          Interrupt:11
eth2      Link encap:Ethernet  HWaddr 00:0E:7F:25:D9:8B
          inet addr:172.31.0.13  Bcast:172.31.3.255  Mask:255.255.252.0
          UP BROADCAST RUNNING NOARP SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:8923 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:753739 (736.0 Kb)  TX bytes:0 (0.0 b)
          Interrupt:15
That is to say, in active/standby mode, when one network interface fails (for example, its switch loses power), the system fails over according to the NIC order specified in /etc/rc.d/rc.local (the ifenslave line), and the machine can continue to serve external traffic; this provides failure protection.
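To exercise this failover by hand (a hypothetical test, not from the article; make sure you have console access in case connectivity is briefly lost), take the active slave down, or simply unplug its cable, and watch the bonding status file:
ifconfig eth0 down             # simulate a failure of the active NIC
cat /proc/net/bonding/bond0    # eth0's Link Failure Count increases and eth2 takes over
ifconfig eth0 up               # restore the link when the test is done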
In mode=0 (load balancing), the bond can provide up to twice the bandwidth. Let's look at the NIC configuration in this mode:
[root@rhas-13 root]# ifconfig
bond0     Link encap:Ethernet  HWaddr 00:0E:7F:25:D9:8B
          inet addr:172.31.0.13  Bcast:172.31.3.255  Mask:255.255.252.0
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:2817 errors:0 dropped:0 overruns:0 frame:0
          TX packets:95 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:226957 (221.6 Kb)  TX bytes:15266 (14.9 Kb)
eth0      Link encap:Ethernet  HWaddr 00:0E:7F:25:D9:8B
          inet addr:172.31.0.13  Bcast:172.31.3.255  Mask:255.255.252.0
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:1406 errors:0 dropped:0 overruns:0 frame:0
          TX packets:48 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:113967 (111.2 Kb)  TX bytes:7268 (7.0 Kb)
          Interrupt:11
eth2      Link encap:Ethernet  HWaddr 00:0E:7F:25:D9:8B
          inet addr:172.31.0.13  Bcast:172.31.3.255  Mask:255.255.252.0
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:1411 errors:0 dropped:0 overruns:0 frame:0
          TX packets:47 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:112990 (110.3 Kb)  TX bytes:7998 (7.8 Kb)
          Interrupt:15
In this mode, the failure of one NIC only reduces the server's outbound bandwidth; it does not interrupt network service.
You can check the detailed working status of bond0 by querying the bonding driver's status file:
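To watch the round-robin split for yourself (a small sketch, assuming the watch utility is available; the interface names are the ones used in this article), observe the per-interface byte counters while traffic flows:
watch -n 1 'grep -E "bond0|eth0|eth2" /proc/net/dev'    # RX/TX counters of both slaves grow at a similar rate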
[root@rhas-13 bonding]# cat /proc/net/bonding/bond0
bonding.c:v2.4.1 (September 15, 2003)
Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0
Multicast Mode: all slaves
Slave Interface: eth2
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:0e:7f:25:d9:8a
Slave Interface: eth0
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:0e:7f:25:d9:8b
Implementing load balancing and failure protection with NIC bonding under QuickLinux AS4
Load balancing under Linux (put bluntly: bind n NICs together and the speed becomes n times what it was before) is called bonding. We will not dwell on the theory; there are many articles about it on the web, but most of them differ a little from the actual configuration process. Let's go through the concrete implementation on QuickLinux AS4 with a 2.6 kernel.
The first step is to edit the virtual network interface configuration file.
Create the file /etc/sysconfig/network-scripts/ifcfg-bond0
The contents are as follows:
DEVICE=bond0
IPADDR=192.168.0.1
NETMASK=255.255.255.0
BROADCAST=192.168.0.255
NETWORK=192.168.0.0
ONBOOT=yes
Configure the IP address according to your own needs (a simple way is to copy ifcfg-eth0 and change DEVICE to bond0).
The second step is to edit the configuration file of the real network card (it is recommended to make a backup before modification)
/etc/sysconfig/network-scripts/ifcfg-eth*
The contents are as follows
BOOTPROTO=none
TYPE=Ethernet
DEVICE=eth*  (* is the same number as in the file name)
ONBOOT=yes
MASTER=bond0
SLAVE=yes
Note: the IP address of the real NICs must no longer be set. Apply this setting to every real NIC that you want to bind. The key point is that MASTER=bond0 and SLAVE=yes must not be wrong (the simple way is to edit one file and then copy it for the others, as in the sketch below).
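If you prefer not to copy the file by hand (a hypothetical helper, not part of the article; adjust the interface list to your hardware), a short loop writes the same slave stanza for each real NIC:
# write an identical slave configuration for each NIC to be bound
for nic in eth0 eth2; do
cat > /etc/sysconfig/network-scripts/ifcfg-$nic <<EOF
BOOTPROTO=none
TYPE=Ethernet
DEVICE=$nic
ONBOOT=yes
MASTER=bond0
SLAVE=yes
EOF
done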
The third step is to modify the /etc/modprobe.conf file by adding the following two lines (a backup is recommended):
alias bond0 bonding
options bond0 miimon=100 mode=0
Note: 1. miimon is the link-monitoring interval in milliseconds. miimon=100 means the connection between the NIC and the switch is checked every 100 ms, and the other link is used if a link goes down.
2. mode=0 means load balancing; both NICs work.
mode=1 means redundancy; only one NIC works at a time, and if it has a problem the other one is brought into service.
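If you want to try the options before rebooting (a minimal sketch, assuming the bonding module and the ifenslave tool are installed; the address is the example one from step 1), you can load the module and build the bond by hand:
modprobe bonding miimon=100 mode=0                   # load the driver with the same options
ifconfig bond0 192.168.0.1 netmask 255.255.255.0 up  # give the bond the address from ifcfg-bond0
ifenslave bond0 eth0 eth2                            # enslave the real NICs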
Step 4. Add the following line to /etc/rc.d/rc.local (it is executed at startup):
ifenslave bond0 eth0 eth2 ... eth*
After a reboot, load balancing works properly.
However, note that the shorewall firewall is best configured before setting up load balancing, because shorewall is easiest to configure under webmin.
After load balancing starts, the interface in the original "network card interface" setting must be changed from eth* to bond0. If you do not change this setting, all communication will be cut off.
You cannot simply stop the firewall with service shorewall stop, because when shorewall is shut down it activates its stopped rules (routestopped), and by default those rules do not allow any traffic.
Therefore, it is recommended to establish the firewall rules before configuring load balancing; otherwise you will have to edit the shorewall configuration files by hand.
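For illustration (a sketch assuming a stock Shorewall layout; the zone name net is only an example and may differ on your system), the manual change amounts to pointing the zone entry in /etc/shorewall/interfaces at bond0 instead of the physical NIC, then reloading the rules:
# in /etc/shorewall/interfaces, replace the physical interface with the bond
#   net   eth0    detect        <- old entry
net   bond0   detect
# then reload the firewall rules
shorewall restart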
The method above has been implemented under QuickLinux AS4 with the 2.6.11-8 kernel, using two Intel PRO/1000 NICs of the same speed. The results are satisfactory: with the server's two NICs and two client machines connected to the same ordinary switch, and both clients copying large files over SMB at the same time, the total copy speed can reach above 10000 KB/s.
Thank you for reading. The above covers the configuration steps for dual NIC bonding under a Linux system. After studying this article, you should have a deeper understanding of these steps; specific usage still needs to be verified in practice.