Using Heartbeat to achieve High availability of web Server


Heartbeat Overview:

Heartbeat is part of the Linux-HA project and implements a highly available cluster system. Heartbeat monitoring and cluster communication are two key functions of a high-availability cluster; in the Heartbeat project, both are implemented by the heartbeat module.

Default port: 694 (UDP)

1) How heartbeat works:

The core of heartbeat consists of two parts: heartbeat monitoring and resource takeover. Heartbeat monitoring can run over network links and serial ports, and redundant links are supported. The nodes send messages to each other to report their current status; if no message is received from the peer within the specified time, the peer is considered failed, and the resource takeover module is started to take over the resources or services that were running on the failed host.

2) High availability cluster

A high-availability cluster is a group of independent computers connected by hardware and software that appears to users as a single system. When one or more nodes in such a system stop working, the service switches from the failed node to a working node without interrupting service. From this definition, the cluster must detect when nodes and services fail and when they become available again. This task is usually accomplished by a piece of code called a "heartbeat"; in Linux-HA this function is performed by a program called heartbeat.

3) Since version 3.x, Heartbeat is split into four modules:

1) ClusterLabs-resource-agents-v3.9.2-0-ge261943.tar.gz # cluster resource agents

2) Heartbeat-3-0-7e3a82377fa8.tar.bz2 # heartbeat main package

3) pacemaker-1.1.9-1512.el6.src.rpm # pacemaker

4) Reusable-Cluster-Components-glue--glue-1.0.9.tar.bz2 # reusable cluster components

One: experimental topology

Two: experimental objectives

1: using heartbeat to achieve high availability of web server

Three: experimental environment

WEB master: xuegod63.cn 192.168.1.63

WEB backup: xuegod64.cn 192.168.1.64

NFS server: xuegod62.cn 192.168.1.62

Preparation: keep the basic environment of the two nodes consistent

1. Modify the hostname so that it persists across reboots

# vim /etc/sysconfig/network

HOSTNAME=xuegod63.cn
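
To apply the new hostname to the current session as well, without waiting for a reboot, you can additionally set it at runtime:

[root@xuegod63 ~]# hostname xuegod63.cn # runtime change; the file above makes it permanent

Do the same on xuegod64 with HOSTNAME=xuegod64.cn.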

2. Name resolution

# vim /etc/hosts

192.168.1.63 xuegod63.cn

192.168.1.64 xuegod64.cn

3. Make sure the time on the two nodes is the same

[root@xuegod63 ~]# date

Sunday, October 30, 2016, 15:18:47 CST
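
If the clocks differ, synchronize them before proceeding. A minimal sketch using ntpdate (the server address is a placeholder; substitute any NTP server reachable in your environment):

[root@xuegod63 ~]# yum install ntpdate -y

[root@xuegod63 ~]# ntpdate pool.ntp.org # one-shot clock sync; run on each node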

4. Turn off the firewall and disable SELinux

[root@xuegod63 ~]# service iptables stop
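
The command above only stops iptables for the current session; to keep the firewall off across reboots and actually disable SELinux as this step's title says, something like the following is typical:

[root@xuegod63 ~]# chkconfig iptables off

[root@xuegod63 ~]# setenforce 0 # SELinux permissive for the current session

[root@xuegod63 ~]# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config # disabled after reboot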

Four: experimental steps


1. Configure xuegod62 as an NFS server to provide storage resources

1) install NFS service

[root@xuegod62 ~]# rpm -ivh /mnt/Packages/nfs-utils-1.2.3-39.el6.x86_64.rpm

2) write test web pages and share them

[root@xuegod62 ~]# mkdir /wwwdir

[root@xuegod62 ~]# echo "heartbeat http ha" > /wwwdir/index.html

[root@xuegod62 ~]# vim /etc/exports # define the shared directory and the network segment allowed to access it

/wwwdir 192.168.1.0/24(rw)

3) add shared directory permissions:

[root@xuegod62 ~]# chmod -R 777 /wwwdir/

4) enable nfs service

[root@xuegod62 ~]# service nfs restart

[root@xuegod62 ~]# chkconfig nfs on
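
To confirm on xuegod62 that the directory is really being exported, exportfs can list the active exports:

[root@xuegod62 ~]# exportfs -v # /wwwdir should be listed for 192.168.1.0/24 with rw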

2. On xuegod63, install the httpd web server and test mounting the NFS storage:

1) install the Apache service and test whether the NFS share works

[root@xuegod63 ~]# yum install httpd -y

[root@xuegod63 ~]# showmount -e 192.168.1.62 # view the NFS exports

Export list for 192.168.1.62:

/wwwdir 192.168.1.0/24

2) mount the shared directory on the local web document root

[root@xuegod63 ~]# mount -t nfs 192.168.1.62:/wwwdir /var/www/html/

[root@xuegod63 ~]# df -h

[root@xuegod63 ~]# service httpd restart

3) Test:

[root@xuegod63 ~]# yum install elinks -y

[root@xuegod63 ~]# elinks --dump http://192.168.1.63

heartbeat http ha

4) unload the resources: they will be brought up by heartbeat later.

[root@xuegod63 ~]# umount /var/www/html/

[root@xuegod63 ~]# service httpd stop

[root@xuegod63 ~]# chkconfig httpd off # heartbeat will start httpd later

3. On xuegod64, install the httpd web server and test mounting the NFS storage:

1) install the Apache service and test whether the NFS share works

[root@xuegod64 ~]# yum install httpd -y

[root@xuegod64 ~]# showmount -e 192.168.1.62

Export list for 192.168.1.62:

/wwwdir 192.168.1.0/24

2) mount the shared directory on the local web document root

[root@xuegod64 ~]# mount -t nfs 192.168.1.62:/wwwdir /var/www/html/

[root@xuegod64 ~]# service httpd restart

3) Test:

[root@xuegod64 ~]# yum install elinks -y

[root@xuegod64 ~]# elinks --dump http://192.168.1.64

heartbeat http ha

4) unload the resources: they will be brought up by heartbeat later.

[root@xuegod64 ~]# umount /var/www/html/

[root@xuegod64 ~]# service httpd stop

[root@xuegod64 ~]# chkconfig httpd off

4. Install heartbeat on xuegod63

1) configure the yum source:

[root@xuegod63 ~]# cat /etc/yum.repos.d/rhel-source.repo

[rhel-source]

name=Red Hat Enterprise Linux $releasever - $basearch - Source

baseurl=file:///mnt/

enabled=1

gpgcheck=0

gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

# add the following section

[rhel-ha]

name=Red Hat HA

baseurl=file:///mnt/HighAvailability

enabled=1

gpgcheck=0

gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[root@xuegod63 ~]# yum clean all

[root@xuegod63 ~]# yum list

# Note: without the baseurl=file:///mnt/HighAvailability entry, cluster-glue and resource-agents cannot be installed with yum.

The CentOS 6.x repositories provide cluster-glue, resource-agents, and pacemaker, but no heartbeat package. Fortunately, the heartbeat source tree ships a .spec file, so rpmbuild can be used to generate the RPM packages.

2) install heartbeat

[root@xuegod63 ~]# tar -jxvf /root/Heartbeat-3-0-958e11be8686.tar.bz2

[root@xuegod63 ~]# cd Heartbeat-3-0-958e11be8686

[root@xuegod63 Heartbeat-3-0-958e11be8686]# rpmbuild -ba heartbeat-fedora.spec

error: File /root/rpmbuild/SOURCES/heartbeat.tar.bz2: No such file or directory # expected error: it creates the /root/rpmbuild directory tree, into which we now unpack the source

[root@xuegod63 ~]# tar -jxvf /root/Heartbeat-3-0-958e11be8686.tar.bz2 -C /root/rpmbuild/SOURCES/

[root@xuegod63 ~]# cd /root/rpmbuild/SOURCES/

[root@xuegod63 SOURCES]# mv Heartbeat-3-0-958e11be8686 heartbeat # rename the directory

[root@xuegod63 SOURCES]# tar -jcvf heartbeat.tar.bz2 heartbeat # repack it as heartbeat.tar.bz2; feeding the original source tarball to rpmbuild directly causes an error

(1) start generating RPM packages

rpmbuild parameters: -bb builds only the binary rpm packages; -bs builds only the source srpm package; -ba builds both the binary and the source packages.

[root@xuegod63 ~]# yum install ncurses-devel openssl-devel gettext bison flex mailx cluster-glue-libs-devel docbook-dtds docbook-style-xsl libtool-ltdl-devel -y

[root@xuegod63 SOURCES]# cd heartbeat

[root@xuegod63 heartbeat]# rpmbuild -ba heartbeat-fedora.spec

3) install the software packages:

[root@xuegod63 ~]# cd /root/rpmbuild/RPMS/x86_64/

[root@xuegod63 x86_64]# yum install -y cluster-glue resource-agents

[root@xuegod63 x86_64]# rpm -ivh heartbeat-libs-3.0.6-1.el6.x86_64.rpm

[root@xuegod63 x86_64]# rpm -ivh heartbeat-3.0.6-1.el6.x86_64.rpm

5. Install heartbeat on xuegod64

1) copy the software package and yum configuration file to xuegod64:

[root@xuegod63 ~]# cd /root/rpmbuild/RPMS/x86_64/

[root@xuegod63 x86_64]# scp -r ./* 192.168.1.64:/root/

[root@xuegod63 x86_64]# scp /etc/yum.repos.d/rhel-source.repo 192.168.1.64:/etc/yum.repos.d/

[root@xuegod64 ~]# cd /root/

[root@xuegod64 ~]# yum install ncurses-devel openssl-devel gettext bison flex mailx cluster-glue-libs-devel docbook-dtds docbook-style-xsl -y

[root@xuegod64 ~]# rpm -ivh heartbeat-libs-3.0.6-1.el6.x86_64.rpm

[root@xuegod64 ~]# yum install -y cluster-glue resource-agents

[root@xuegod64 ~]# rpm -ivh heartbeat-3.0.6-1.el6.x86_64.rpm

2) View the generated users and groups:

[root@xuegod64 ~]# grep haclient /etc/group

haclient:x:489:

[root@xuegod64 ~]# id hacluster

uid=495(hacluster) gid=489(haclient) groups=489(haclient)

6. Configure heartbeat: xuegod63 and xuegod64

Generate the heartbeat configuration files on xuegod63 (xuegod64 will later receive copies of the files configured on xuegod63)

1) copy the configuration file

[root@xuegod63 ~]# cp /usr/share/doc/heartbeat-3.0.6/ha.cf /etc/ha.d/ # main configuration file

[root@xuegod63 ~]# cp /usr/share/doc/heartbeat-3.0.6/authkeys /etc/ha.d/ # authentication file for communication between the active and standby nodes

[root@xuegod63 ~]# cp /usr/share/doc/heartbeat-3.0.6/haresources /etc/ha.d/ # configuration file for the floating resources

2) set up the authentication file used for communication between the active and standby nodes, to ensure security. The configuration must be identical on both nodes.

[root@xuegod63 ~]# vim /etc/ha.d/authkeys

Change the defaults to:

auth 3

# 1 crc

# 2 sha1 HI!

3 md5 mkkey

[root@xuegod63 ~]# chmod 600 /etc/ha.d/authkeys # the permissions on this file must be 600, otherwise heartbeat will not start

Note:

The /etc/ha.d/authkeys file determines the authentication keys. There are three authentication methods: crc, md5, and sha1.

Choosing among the three authentication methods:

If Heartbeat runs on a secure network, such as the crossover cable in this example, you can use crc; it is the cheapest method in terms of resources. If the network is not secure but you still want to limit CPU usage, use md5. Finally, if you want the strongest authentication regardless of CPU cost, use sha1, which is the hardest of the three to break.
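
For example, if you choose sha1 instead, the authkeys file would look like this (the secret string here is a placeholder; replace it with your own):

auth 2

2 sha1 ReplaceWithYourOwnSecret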

3) configure the floating resources, paying attention to the order in which they are listed; :: is the argument delimiter

[root@xuegod63 ~]# vim /etc/ha.d/haresources

Change line 44 (# node-name resource1 resource2 ... resourceN) to:

xuegod63.cn IPaddr::192.168.1.200/24/eth0 Filesystem::192.168.1.62:/wwwdir::/var/www/html::nfs httpd

Note:

node-name is the hostname of the primary server; no modification is needed on xuegod64. Resources are added to the named host by default, and when xuegod63 goes down, xuegod64 takes over.

IPaddr::192.168.1.200/24/eth0 # specifies the VIP and the network card to bind it to

Filesystem::192.168.1.62:/wwwdir::/var/www/html::nfs # specifies the storage to mount

httpd # specifies the service to start. The service must have an init script in /etc/init.d that can be controlled through the service command. (How these entries map to script calls is sketched below.)
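
Heartbeat resolves each haresources entry mechanically: the entry is split on the :: delimiter, the first token names a script in /etc/ha.d/resource.d/ (falling back to /etc/init.d/), and the remaining tokens become its arguments, followed by start or stop. So the line above translates roughly into:

/etc/ha.d/resource.d/IPaddr 192.168.1.200/24/eth0 start

/etc/ha.d/resource.d/Filesystem 192.168.1.62:/wwwdir /var/www/html nfs start

/etc/init.d/httpd start

This is exactly the script interface used for the manual tests in the next step.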

4) Test

(1) Test: manually load VIP 192.168.1.200 onto eth0

[root@xuegod63 ~]# ll /etc/ha.d/resource.d/IPaddr

-rwxr-xr-x 1 root root 2273 Jul 29 20:49 /etc/ha.d/resource.d/IPaddr

[root@xuegod63 ~]# /etc/ha.d/resource.d/IPaddr 192.168.1.200/24/eth0 start

IPaddr [7116]: INFO: Success

INFO: Success

View VIP:

[root@xuegod63 ~]# ip addr # you can see that eth0 now carries the VIP 192.168.1.200 as eth0:0

Note: on RHEL 6.2 the VIP is also visible with ifconfig, but on RHEL 6.5 it can only be seen with the ip addr command.
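
If you want to unload the test VIP again, the same resource script accepts stop:

[root@xuegod63 ~]# /etc/ha.d/resource.d/IPaddr 192.168.1.200/24/eth0 stop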

(2) Test: manually load NFS storage resources to / var/www/html

[root@xuegod63 ~]# /etc/ha.d/resource.d/Filesystem 192.168.1.62:/wwwdir /var/www/html/ nfs start

Filesystem [23575]: INFO: Running start for 192.168.1.62:/wwwdir on / var/www/html

Filesystem [23567]: INFO: Success

INFO: Success

[root@xuegod63 ~]# mount

192.168.1.62:/wwwdir on /var/www/html type nfs (rw,vers=4,addr=192.168.1.62,clientaddr=192.168.1.63)

(3) Test: start the httpd service manually

[root@xuegod63 ~]# /etc/init.d/httpd restart

[root@xuegod63 ~]# grep -v "^#" /etc/ha.d/ha.cf # only the following two lines are enabled by default

logfacility local0

auto_failback on

Note:

With auto_failback on, resources are automatically transferred back to the master node as soon as it returns to normal. It is recommended to set auto_failback off and to switch back manually when the business is not busy; otherwise the automatic failback causes another brief service interruption at the moment the master node recovers.

5) modify the configuration file

[root@xuegod63 ~]# vim /etc/ha.d/ha.cf # uncomment (remove the # before) the following lines in the configuration file

24 debugfile /var/log/ha-debug

29 logfile /var/log/ha-log

48 keepalive 2 # sets the interval between heartbeats to 2 seconds

56 deadtime 30 # declares the peer node dead after 30 seconds without a heartbeat

61 warntime 10 # the time in seconds to wait before issuing a "late heartbeat" warning in the log

71 initdead 120 # on some systems the network takes a while to work after boot or reboot; this option covers that interval. The value must be at least twice deadtime.

76 udpport 694 # use port 694 for bcast and ucast communication. This is the default port number officially registered with IANA.

121 ucast eth0 192.168.1.64 # send heartbeat messages from the local eth0 interface to the peer node; write the peer's IP address here. This is the unicast address; change it to 192.168.1.63 on xuegod64. If you have a dedicated heartbeat network card, name that interface instead, e.g. eth2.

Note: line 91, # bcast eth0, would instead broadcast the heartbeat on the eth0 interface (replace eth0 with whichever interface you use).

auto_failback on # when auto_failback is set to on, the master node reclaims all resources from the standby as soon as it comes back online. If set to off, the primary node does not reclaim the resources.

211 node xuegod63.cn # this option must be configured: the hostname of a cluster member, identical to the output of "uname -n"

212 node xuegod64.cn

Also modify the following:

Change line 223: # ping 10.10.10.254

To: ping 192.168.1.1 # arbitration is done by pinging this node (here the gateway)

Change line 256: # respawn hacluster /usr/lib/heartbeat/ipfail

To: respawn hacluster /usr/libexec/heartbeat/ipfail

Change line 262: # apiauth ipfail gid=haclient uid=hacluster

To: apiauth ipfail gid=haclient uid=hacluster

When the modifications are done, save and exit.
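
Putting all of the edits together, the effective (non-comment) configuration in ha.cf on xuegod63 should now read roughly as follows; on xuegod64 only the ucast line differs:

debugfile /var/log/ha-debug
logfile /var/log/ha-log
logfacility local0
keepalive 2
deadtime 30
warntime 10
initdead 120
udpport 694
ucast eth0 192.168.1.64
auto_failback on
node xuegod63.cn
node xuegod64.cn
ping 192.168.1.1
respawn hacluster /usr/libexec/heartbeat/ipfail
apiauth ipfail gid=haclient uid=hacluster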

7. Configure heartbeat on xuegod64

1) copy the configuration file to xuegod64:

[root@xuegod63 ~]# cd /etc/ha.d/

[root@xuegod63 ha.d]# scp ha.cf haresources authkeys 192.168.1.64:/etc/ha.d/

[root@xuegod64 ~]# chmod 600 /etc/ha.d/authkeys # this file must have mode 600 or heartbeat will not start

# modify the unicast address on xuegod64

[root@xuegod64 ~]# vim /etc/ha.d/ha.cf

Change: ucast eth0 192.168.1.64

To: ucast eth0 192.168.1.63

2) start the heartbeat service on both machines:

[root@xuegod63 ~]# /etc/init.d/heartbeat restart

[root@xuegod64 ~]# /etc/init.d/heartbeat restart

# wait for the resource takeover to complete. While heartbeat on xuegod63 is still starting up, xuegod64 may take over all the floating resources; once startup on xuegod63 succeeds, xuegod64 releases them and the floating resources are loaded on xuegod63 again.
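
If the takeover seems slow or fails, the log file configured in ha.cf is the place to look:

[root@xuegod63 ~]# tail -f /var/log/ha-log # watch heartbeat bring the floating resources up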

3) View the cluster resources on xuegod63:

[root@xuegod63 ~]# ifconfig

eth0:0 Link encap:Ethernet HWaddr 00:0C:29:12:EC:1E

inet addr:192.168.1.200 Bcast:192.168.1.255 Mask:255.255.255.0

[root@xuegod63 ~]# df -h

/dev/sr0 3.4G 3.4G 0 100% /mnt

192.168.1.62:/wwwdir 9.7G 3.4G 5.8G 37% /var/www/html

[root@xuegod63 ~]# /etc/init.d/httpd status

httpd (pid 23641) is running...

4) check on xuegod64: the floating resources are not present there:

[root@xuegod64 ~]# ifconfig

[root@xuegod64 ~]# df -h

[root@xuegod64 ~]# /etc/init.d/httpd status

httpd is stopped

8. Test:

When both hosts are up, all requests go to xuegod63 and http://192.168.1.200/ is reachable. Take xuegod63 down by shutting off its network card, wait about 30 seconds (the configured deadtime), and all requests are transferred to xuegod64.
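
To watch the failover happen from a third machine, a simple polling loop against the VIP is enough; a minimal sketch, assuming curl is available:

# while true; do curl -s --max-time 2 http://192.168.1.200/; sleep 1; done

During the roughly 30-second deadtime window the requests time out; after that the same page is served again, now by xuegod64.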

1) the master goes down; verify that the backup takes over

[root@xuegod63 ha.d]# ifdown eth0

[root@xuegod64 ~]# ifconfig

eth0:0 Link encap:Ethernet HWaddr 00:0C:29:48:80:95

inet addr:192.168.1.200 Bcast:192.168.1.255 Mask:255.255.255.0

[root@xuegod64 ~]# df -h

192.168.1.62:/wwwdir 9.7G 3.4G 5.8G 37% /var/www/html

[root@xuegod64 ~]# service httpd status

httpd (pid 6375) is running...

2) the master comes back up; verify the failback (preemption) function. Re-enable the eth0 NIC on xuegod63:

[root@xuegod63 ~]# ifup eth0

After waiting for 30 seconds, check:

[root@xuegod63 ~]# df -h

192.168.1.62:/wwwdir 9.7G 3.4G 5.8G 37% /var/www/html # the NFS storage has been remounted

[root@xuegod63 ~]# service httpd status

httpd (pid 27097) is running...

[root@xuegod63 ~]# ifconfig

eth0:0 Link encap:Ethernet HWaddr 00:0C:29:12:EC:1E

inet addr:192.168.1.200 Bcast:192.168.1.255 Mask:255.255.255.0

The resources have switched back to the master.

3) verify on xuegod64 that the resources have been released:

[root@xuegod64 ~]# ifconfig # the eth0:0 VIP 192.168.1.200 is no longer present

[root@xuegod64 ~]# df -h

Filesystem Size Used Avail Use% Mounted on

/dev/sda2 9.7G 3.7G 5.5G 41% /

tmpfs 569M 0 569M 0% /dev/shm

/dev/sda1 194M 28M 157M 15% /boot

/dev/sr0 3.4G 3.4G 0 100% /mnt

[root@xuegod64 ~]# service httpd status

httpd is stopped
