2025-01-18 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/01 Report--
In daily operation and maintenance work, you sometimes need to configure multiple IP addresses on a single physical network card; this is the concept of a network card sub-interface. The opposite case, binding several network cards so that they serve a single IP address, is called NIC bonding. I will explain both implementations in detail below.
Creating a network card sub-interface
In CentOS, the network is managed by the NetworkManager service, which provides a graphical interface. However, this service does not support configuring sub-interfaces on a physical network card, so we need to disable it before setting one up.
Temporary shutdown: service NetworkManager stop
Permanent shutdown: chkconfig NetworkManager off
If you only need a sub-interface temporarily, you can create it like this:
[root@server ~]# ip addr add 10.1.252.100/16 dev eth0 label eth0:0
Note: the address is lost as soon as the network service is restarted.
To create a permanent sub-interface, you need to write it into a network card configuration file. These files live in the /etc/sysconfig/network-scripts/ directory, and their names start with ifcfg- followed by the device name; the configuration file for my sub-interface is therefore ifcfg-eth0:0.
vim /etc/sysconfig/network-scripts/ifcfg-eth0:0 (if the path feels too long to type every time you edit a NIC configuration file, you can define an alias for it, or simply cd into the directory first)
DEVICE=eth0:0           # name of the NIC sub-interface
BOOTPROTO=none          # addressing protocol; static here
IPADDR=192.168.1.100    # IP address of the sub-interface
NETMASK=255.255.255.0   # subnet mask of the sub-interface
GATEWAY=192.168.1.254   # gateway of the sub-interface
DNS1=8.8.8.8            # DNS server for the sub-interface
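The ifcfg-eth0:0 file above can also be generated non-interactively. A minimal sketch; note that CFG_DIR defaults to the current directory here for safety, while on a real CentOS box it would be /etc/sysconfig/network-scripts:

```shell
#!/bin/sh
# Sketch: write the sub-interface config file shown above in one step.
# CFG_DIR is "." here so the script can be tried without touching /etc;
# set CFG_DIR=/etc/sysconfig/network-scripts on a real system.
CFG_DIR="${CFG_DIR:-.}"

cat > "$CFG_DIR/ifcfg-eth0:0" <<'EOF'
DEVICE=eth0:0
BOOTPROTO=none
IPADDR=192.168.1.100
NETMASK=255.255.255.0
GATEWAY=192.168.1.254
DNS1=8.8.8.8
EOF

echo "wrote $CFG_DIR/ifcfg-eth0:0"
```

After writing the file, restart the network service as described below for it to take effect.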
You need to restart the network service after editing the configuration file of the network card.
[root@server network-scripts] # service network restart
[root@server network-scripts] # ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:D1:18:FD
          inet addr:10.1.252.100  Bcast:10.1.255.255  Mask:255.255.0.0
          inet6 addr: fe80::20c:29ff:fed1:18fd/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:103623 errors:0 dropped:0 overruns:0 frame:0
          TX packets:824 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:7615694 (7.2 MiB)  TX bytes:80710 (78.8 KiB)

eth0:0    Link encap:Ethernet  HWaddr 00:0C:29:D1:18:FD
          inet addr:192.168.1.100  Bcast:0.0.0.0  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
At this point, the network subinterface is configured.
Network card binding
Before explaining how to configure NIC bonding, I will first cover the principle behind bond and its working modes, and finally walk through the bonding configuration itself.
Bonding
Bonding binds multiple network cards to the same IP address to provide a service with high availability or load balancing. Of course, two network cards cannot simply be given the same IP address directly; instead, bonding presents a virtual network card that provides the connection, and the physical network cards are modified to share the same MAC address.
Normally a network card only accepts Ethernet frames whose destination hardware address is its own MAC, and filters out other frames to reduce its load. But a network card also supports promiscuous (promisc) mode, in which it receives every frame on the network; tcpdump and bonding both rely on this mode. The bonding driver changes the MAC addresses of the member network cards to the same value, so either card can accept frames for that MAC and hand them to the bond driver for processing. The two network cards thus work as one virtual network card (bond0), and this virtual card also needs a driver, which is called bonding.
The working mode of bonding
Mode 0 (balance-rr)
Round-robin policy: packets are sent over each slave interface in turn, from the first to the last. This mode provides load balancing and fault tolerance, and both network cards carry traffic. However, if the packets of one connection or session are sent out on different interfaces and travel over different links, they may well arrive out of order at the client. Out-of-order packets trigger retransmissions, so network throughput drops.
Mode 1 (active-backup)
Active-backup policy: only one slave in the bond is active; another slave becomes active only when the active slave fails. To avoid confusing the switch, the bond's MAC address is visible on only one external port. This mode provides fault tolerance only. Its advantage is high availability of the network connection, but resource utilization is low: only one interface is working at a time, so with N network interfaces the utilization is 1/N.
Mode 2 (balance-xor)
Balance-XOR policy: packets are transmitted according to the selected transmit hash policy. The default policy is (source MAC address XOR destination MAC address) modulo the number of slaves. Other transmit policies can be selected via the xmit_hash_policy option. This mode provides load balancing and fault tolerance.
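Mode 2's default hash can be reproduced with shell arithmetic. A sketch using assumed example MAC addresses; the kernel XORs the full addresses, but with two slaves only the low byte decides the result, so the last octet of each MAC is enough here:

```shell
#!/bin/sh
# Sketch of mode 2's default transmit hash:
#   slave_index = (source MAC XOR destination MAC) % slave_count
src_last=0xfd   # e.g. a source MAC ending in :fd (assumed example)
dst_last=0x07   # e.g. a destination MAC ending in :07 (assumed example)
slaves=2

idx=$(( (src_last ^ dst_last) % slaves ))
echo "packets for this peer leave via slave $idx"
```

Because the hash depends only on the MAC pair, all traffic between the same two hosts always uses the same slave, which preserves packet ordering per peer.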
Mode 3 (broadcast)
Broadcast policy: every packet is transmitted on all slave interfaces. This mode provides fault tolerance.
Mode 4 (802.3ad): IEEE 802.3ad dynamic link aggregation
Characteristics: creates an aggregation group whose members share the same speed and duplex settings. Multiple slaves work under the same active aggregator according to the 802.3ad specification.
The slave chosen for outbound traffic is selected according to the transmit hash policy, which can be changed from the default XOR policy through the xmit_hash_policy option. Note that not all transmit policies are 802.3ad-compliant, particularly regarding the packet-reordering issue mentioned in section 43.2.4 of the 802.3ad standard; different implementations may differ in how well they comply.
Note: to get load balancing out of mode 0, setting options bond0 miimon=100 mode=0 is not enough by itself. The switch the network cards connect to must also be specially configured (the two ports must be aggregated), because the bonded network cards use the same MAC address. To analyze the principle (with bond running in mode 0):
Under mode 0, the network cards bound into the bond are all given the same MAC address. If these cards are connected to the same switch, the switch's MAC table will have multiple ports mapped to that one MAC address, and the switch cannot tell which port to use when forwarding packets destined for it. Normally a MAC address is globally unique, and a single MAC address mapping to several ports inevitably confuses the switch. Therefore, when a mode 0 bond is connected to one switch, the switch ports must be aggregated (Cisco calls this EtherChannel, Foundry calls it a port group), because once the switch has aggregated them, the ports in the aggregation group are likewise bundled behind one MAC address. Alternatively, the two network cards can be connected to different switches.
Mode 5 (balance-tlb)
Adaptive transmit load balancing: channel bonding that does not require special switch support. Outbound traffic is distributed across the slaves according to the current load (computed relative to each slave's speed). If the slave that is receiving traffic fails, another slave takes over the failed slave's MAC address.
Mode 6 (balance-alb)
Adaptive load balancing: this mode includes balance-tlb, plus receive load balancing (rlb) for IPv4 traffic, and requires no switch support. Receive load balancing is achieved through ARP negotiation: the bonding driver intercepts ARP replies sent by the local machine and rewrites the source hardware address to the unique hardware address of one of the slaves in the bond, so that different peers communicate with different hardware addresses.
NIC bonding therefore has seven modes (0-6): bond0, bond1, bond2, bond3, bond4, bond5 and bond6, of which three are commonly used:
Mode=0: load balancing mode, with automatic backup, but requires "Switch" support and setting.
Mode=1: automatic backup mode. If one line is disconnected, the other lines will be backed up automatically.
Mode=6: load balancing mode, automatic backup, no need for "Switch" support and setting.
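The list above can be captured in a small lookup helper; a sketch mapping each mode number to the name the bonding driver uses:

```shell
#!/bin/sh
# Sketch: map a bonding mode number (0-6) to its driver name,
# mirroring the seven modes described above.
bond_mode_name() {
    case "$1" in
        0) echo balance-rr ;;
        1) echo active-backup ;;
        2) echo balance-xor ;;
        3) echo broadcast ;;
        4) echo 802.3ad ;;
        5) echo balance-tlb ;;
        6) echo balance-alb ;;
        *) echo unknown; return 1 ;;
    esac
}

bond_mode_name 1   # prints: active-backup
# On a live system the current mode can be read from
#   /sys/class/net/bond0/bonding/mode
```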
Here I configure mode 1. I run the experiment in a VMware virtual machine; before starting, add a second network card so that the Linux system has two network cards.
Step 1: create a profile for the bonding device
[root@server network-scripts]# vim ifcfg-bond0
DEVICE=bond0
BOOTPROTO=none
IPADDR=10.1.252.100
NETMASK=255.255.0.0
GATEWAY=10.1.0.1
DNS1=8.8.8.8
BONDING_OPTS="miimon=100 mode=1"
Step 2: edit the configuration files of the two physical network cards
[root@server network-scripts]# vim ifcfg-eth0
DEVICE=eth0
MASTER=bond0
SLAVE=yes
[root@server network-scripts]# vim ifcfg-eth2
DEVICE=eth2
MASTER=bond0
SLAVE=yes
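Since the two slave files differ only in the device name, they can be generated in one loop. A sketch; CFG_DIR defaults to the current directory here so it can be tried safely, while on a real system it would be /etc/sysconfig/network-scripts:

```shell
#!/bin/sh
# Sketch: generate both slave config files from Step 2 in one loop.
# CFG_DIR is "." here for safety; use /etc/sysconfig/network-scripts
# on a real CentOS box.
CFG_DIR="${CFG_DIR:-.}"

for dev in eth0 eth2; do
    cat > "$CFG_DIR/ifcfg-$dev" <<EOF
DEVICE=$dev
MASTER=bond0
SLAVE=yes
EOF
done

echo "wrote slave configs for eth0 and eth2 in $CFG_DIR"
```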
Note: miimon is used for link detection. If miimon=100, the system detects the link status every 100ms, and if one line fails, it transfers to the other.
Mode=1 indicates that the working mode is the active / standby mode.
MASTER=bond0 means the master device is bond0.
Step 3: modify the modprobe related settings file and load the bonding module
1. vim /etc/modprobe.d/bonding.conf and append the following at the end of the file:
alias bond0 bonding
options bonding mode=1 miimon=200
2. Load the module:
modprobe bonding
3. Confirm that the module loaded successfully:
lsmod | grep bonding
4. Restart the network service, then check the bond's status and test it.
When the configuration is complete, you only need to restart the network service. From another host, ping the IP address of the bond0 interface. Then test the bond's failover: bring one of the network cards down and see whether the other takes over; if it does, the setup is working.
Check the status of the bond: watch -n 1 cat /proc/net/bonding/bond0 continuously displays the bond's status.
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Bonding Mode: fault-tolerance (active-backup)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:d1:18:fd
Slave queue ID: 0
Slave Interface: eth2
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:d1:18:07
Slave queue ID: 0
When I bring the eth0 network card down, the currently active network card becomes eth2.
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Bonding Mode: fault-tolerance (active-backup)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth2
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:d1:18:07
Slave queue ID: 0
Slave Interface: eth0
MII Status: down
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:d1:18:fd
Slave queue ID: 0
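Status dumps like the two above can be filtered down to just the per-slave link state with a small parser. A sketch that reads the bonding status text from a file or stdin, so it can be tried on a saved copy as well as on the live /proc file:

```shell
#!/bin/sh
# Sketch: print each slave in a bonding status dump together with its
# MII status, e.g. "eth0 up". The top-level "MII Status:" line (before
# any "Slave Interface:" line) is skipped by the iface guard.
list_slaves() {
    awk '/^Slave Interface:/ { iface = $3 }
         /^MII Status:/ && iface != "" { print iface, $3; iface = "" }' "$@"
}

# Usage on a live system:
#   list_slaves /proc/net/bonding/bond0
```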