
How to Build a High-Availability Load-Balancing Cluster with heartbeat+lvs


I would like to share with you how to build a high-availability load-balancing cluster with heartbeat+lvs. Most people don't know much about this topic, so this article is shared for your reference. I hope you will learn a lot from reading it. Let's take a look.

How heartbeat+lvs achieves high-availability load balancing:

Two hosts running heartbeat (with the ldirectord plug-in) form a high-availability cluster and supervise the lvs load-balancing cluster; together they make up a heartbeat+lvs high-availability load-balancing cluster. When heartbeat runs, the watchdog module is loaded to monitor the heartbeat service; if the heartbeat service fails, the host is restarted.

Note: the watchdog used here is the softdog kernel module (loaded with modprobe softdog). Because it is a kernel-level software service, it is only a "soft fence": if the kernel itself crashes it cannot act, unlike a hardware fence device.

Experimental environment: CentOS 6.4

The architecture consists of four hosts: two hosts run heartbeat and lvs (the directors), and two Real Server hosts serve the actual content.

Preparation for the experiment:

1. Define the resolution of each node in the local /etc/hosts file (see the sketch after this list).

2. Disable selinux and iptables (also sketched below).

3. Download the software: heartbeat-3.0.4-1.el6.x86_64.rpm, heartbeat-devel-3.0.4-1.el6.x86_64.rpm, ldirectord-3.9.2-1.2.x86_64.rpm, heartbeat-libs-3.0.4-1.el6.x86_64.rpm
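A minimal sketch of steps 1 and 2, run on every node. The director IPs (192.168.0.66 and 192.168.0.68) and the real-server hostnames are illustrative assumptions; only the node names and the real-server IPs appear in this article:

# vim /etc/hosts

192.168.0.66 server66.example.com

192.168.0.68 server68.example.com

192.168.0.103 server103.example.com

192.168.0.191 server191.example.com

# setenforce 0    turn selinux off for the running system (set SELINUX=disabled in /etc/selinux/config to persist)

# /etc/init.d/iptables stop; chkconfig iptables off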

The steps of the experiment:

First, heartbeat installation and configuration:

# yum localinstall *.rpm    installing via yum resolves the dependencies of the local rpm packages (a configured yum source is required)

# less /etc/ha.d/README.config

ha.cf    main configuration file for heartbeat high availability

haresources    resource configuration file

authkeys    authentication file

# cd /usr/share/doc/heartbeat-3.0.4/

# cp authkeys haresources ha.cf /etc/ha.d/

# vim ha.cf

debugfile /var/log/ha-debug

logfile /var/log/ha-log

keepalive 2    heartbeat interval of 2s

deadtime 30    the standby node automatically takes over resources after 30s without a heartbeat

warntime 10    the standby issues a warning if it receives no heartbeat from the primary within 10s

initdead 60    time allowed for the network to come up after a restart (at least twice deadtime)

udpport 666    the port used for broadcast communication

bcast eth0    use broadcast heartbeats on eth0

auto_failback on    resources fail back to the primary node once it recovers

watchdog /dev/watchdog    this requires the softdog module to be loaded

node server66.example.com    the primary and standby nodes

node server68.example.com

ping 192.168.0.253    tests connectivity; this is typically the gateway address

The respawn option is optional. It lists processes that are started and stopped together with heartbeat, typically plug-ins integrated with heartbeat, and restarts them automatically if they fail. ipfail is used by default.

respawn hacluster /usr/lib64/heartbeat/ipfail    detects and handles network failures

# apiauth client-name gid=gidlist uid=uidlist

apiauth ipfail gid=haclient uid=hacluster    the user and group that ipfail runs as

Load the softdog module (the soft fence that monitors heartbeat and restarts the host):

# modprobe softdog

# vi /etc/rc.local    make the module load automatically at boot

modprobe softdog
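To confirm the module is actually loaded, a quick check (not an original step):

# lsmod | grep softdog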

# vim authkeys    the authentication file; its permissions must be 600

auth 3

# 1 crc

# 2 sha1 HI!

3 md5 Hello!

# chmod 600 authkeys

# vim haresources

server68.example.com IPaddr::192.168.0.234/24/eth0 httpd    defines the primary node, the virtual IP, and the monitored service

Make sure the httpd service on the primary node is started.

By default, heartbeat looks for resource scripts in these directories:

/etc/init.d/; /etc/ha.d/resource.d/; /etc/ha.d/rc.d/
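Heartbeat drives these scripts with start/stop (and status on takeover) arguments, so a quick sanity check, sketched here rather than taken from the original steps, is to call the resource script by hand:

# /etc/init.d/httpd status    the script must answer start/stop/status for heartbeat to manage it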

When the heartbeat installation and configuration are complete, install heartbeat on the other host and configure it the same way. (Note: when building a cluster, try to pick hosts with identical configurations; this simplifies later management and troubleshooting.)

Test heartbeat:

# /etc/init.d/heartbeat start    start the service on both heartbeat hosts

# tail -f /var/log/messages    watching the log shows the server68 host taking over the VIP resources; the VIP can be pinged at this point

At the same time, heartbeat monitors the local httpd service, and httpd is found to be enabled as well.
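A simple failover check, sketched under the assumption that server68 currently holds the resources:

# ip addr show eth0    on server68, the VIP should be listed

# /etc/init.d/heartbeat stop    run this on server68

# ip addr show eth0    on server66, the VIP should appear once deadtime expires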

Second, build the lvs load-balancing cluster.

Perform the same installation and configuration operations on the two heartbeat hosts above (some parameters must be specified explicitly).

lvs load balancing offers three working modes (NAT/DR/TUN) and eight scheduling algorithms; this article uses DR mode with round-robin scheduling.

There are generally three ways to configure lvs:

through the ipvsadm command

through ldirectord (a heartbeat plug-in)

through Red Hat's graphical piranha tool

Configure lvs through the ipvsadm command:

# ipvsadm -A -t 192.168.0.224:80 -s rr    define the VIP, using round-robin scheduling

# ipvsadm -a -t 192.168.0.224:80 -r 192.168.0.103:80 -g    add a real server; -g selects DR mode

# ipvsadm -a -t 192.168.0.224:80 -r 192.168.0.191:80 -g
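To confirm the rules are in place, the virtual server table can be listed (a routine check, not an original step):

# ipvsadm -ln    shows the VIP on port 80 with both real servers in Route (DR) forwarding mode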

This article will use ldirectord to configure lvs.

How ldirectord works:

ldirectord requires the apache server to be enabled on each real server, with the file and content specified in the configuration file placed in each real server's web root. ldirectord then polls this file to determine whether each real server is alive. If a server fails the check, its weight is set to 0 so that later client connections are not directed to the failed real server; when the real server is repaired and back online, its weight is restored so it can continue serving client connections.

ldirectord mainly creates the ipvs virtual server table by calling ipvsadm.

# yum install ipvsadm -y

# yum localinstall ldirectord*.rpm

perl-IO-Socket-INET6 is required for ldirectord to start

# /etc/init.d/ldirectord start

Use ldirectord to configure lvs, and hand ldirectord over to heartbeat's control:

Give lvs to ldirectord to monitor.

Note: ldirectord and lvs are both installed on the heartbeat hosts, and the master and standby configuration files must be identical.

# cp -r /usr/share/doc/packages/ldirectord/ldirectord.cf /etc/ha.d/

# vim /etc/ha.d/ldirectord.cf

virtual=192.168.0.224:80    defines the virtual resource (VIP)

real=192.168.0.103:80 gate    specifies a real back-end server; gate selects DR scheduling mode

real=192.168.0.191:80 gate

fallback=127.0.0.1:80 gate    when all real servers are down, this machine takes over automatically

service=http

scheduler=rr    use the round-robin scheduling algorithm

# persistent=600

# netmask=255.255.255.255

protocol=tcp

checktype=negotiate

checkport=80
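With service=http and checktype=negotiate, ldirectord periodically fetches a test page from each real server and compares its content, exactly as the working-principle section above describes. The stock ldirectord.cf configures this with request/receive directives under the virtual= block; the values below are illustrative assumptions, not taken from this article:

request="index.html"    the file ldirectord fetches from each real server

receive="Test Page"    the string that must appear in the response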

# vim /etc/init.d/ldirectord

#. /etc/ha.d/shellfuncs    comment this line out

# /etc/init.d/ldirectord start

The Real Server hosts need the following configuration:

Note: in an lvs DR setup, each real back-end server must configure the VIP locally and suppress ARP for it (using the arptables software).

# yum install arptables_jf -y

# ifconfig lo:0 192.168.0.224 netmask 255.255.255.255

# arptables -A IN -d 192.168.0.224 -j DROP    drop incoming ARP requests for the VIP

# arptables -A OUT -s 192.168.0.224 -j mangle --mangle-ip-s 192.168.0.103    rewrite outgoing ARP traffic to use the real server's own IP (use 192.168.0.191 on the second real server)

# /etc/init.d/arptables_jf save

# chkconfig arptables_jf on
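One caveat worth noting: the arptables rules above are saved across reboots, but the VIP on lo:0 is not. One sketch of a fix is to re-add it from /etc/rc.local on each real server:

# vi /etc/rc.local

ifconfig lo:0 192.168.0.224 netmask 255.255.255.255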

The lvs load-balancing cluster is now configured successfully.

Test: accessing http://192.168.0.224 in a browser loads the pages published by the .103 and .191 web servers, and refreshing alternates between them according to the round-robin configuration.
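The rotation can also be verified from the command line; a small sketch, assuming each real server publishes a page that identifies itself:

# for i in 1 2 3 4; do curl -s http://192.168.0.224; done    output should alternate between the two real servers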

Third, integrate heartbeat+lvs to achieve high availability load balancing.

First install and configure lvs on the other heartbeat host as well, and start the appropriate services.

Modify the heartbeat resource configuration file:

# vim haresources

server68.example.com IPaddr::192.168.0.224/24/eth0 httpd ldirectord

Note: the configuration on the active and standby heartbeat nodes must be kept in sync.
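A quick way to keep them in sync, sketched here (copying from server68 to server66 per the node list above):

# scp /etc/ha.d/ha.cf /etc/ha.d/haresources /etc/ha.d/authkeys /etc/ha.d/ldirectord.cf server66.example.com:/etc/ha.d/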

In this way, heartbeat monitors and controls lvs (strictly speaking, heartbeat monitors scripts in the three directories listed earlier and so controls ldirectord directly, while ldirectord configures and monitors lvs).

Test: now stop ldirectord directly.

With heartbeat running, ldirectord is found to be re-enabled, and clients accessing the VIP can still reach the RS content.

That traffic is, in fact, coming through the lvs load balancer.

Test high availability and load balancing:

1. When either heartbeat host is shut down, the other detects this and takes over the services (the VIP and ldirectord, and through ldirectord the lvs it monitors), so client access to the real back-end services is unaffected.

2. Load-balancing test: while refreshing continuously, the pages published by the different Real Server hosts alternate.

That is the whole of "how to build a high-availability load-balancing cluster with heartbeat+lvs". Thank you for reading! Hopefully the content shared here has been helpful.
