This article describes how to configure a heartbeat v2 httpd high-availability cluster based on the haresources configuration file in Linux. The content is quite detailed; interested readers can use it for reference, and I hope it will be helpful to you.
Configure a heartbeat v2 httpd high-availability cluster based on the haresources configuration file.
Lab requirements
Fully master deploying heartbeat v2 httpd high-availability services based on the haresources configuration file.
Preparation in advance
1. Heartbeat service host planning
Host                Interface  IP             Service            Usage
node1.chanedu.com   eth0       192.168.1.131  heartbeat, httpd   LAN data forwarding
                    eth2       192.168.2.131  heartbeat          heartbeat information link
                    vip        192.168.1.180  httpd              IP for external access to httpd
node2.chanedu.com   eth0       192.168.1.132  heartbeat, httpd   LAN data forwarding
                    eth2       192.168.2.132  heartbeat          heartbeat information link
shared.chanedu.com  eth0       192.168.1.150  nfs                LAN data forwarding
2. Architecture diagram
3. Configure yum source
rpm -ivh https://mirrors.ustc.edu.cn/epel/epel-release-latest-6.noarch.rpm
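To confirm that the EPEL repository is now usable, a quick sanity check:
yum repolist | grep epel    # the epel repository should be listed with a non-zero package count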
4. Synchronize the time
The time on the two nodes must be the same. You can synchronize against a network time server or a local ntpd server; here I synchronize directly against a network time server.
ntpdate 202.120.2.101
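A one-off ntpdate will drift again over time. As a minimal sketch (assuming the same time server and that root's crontab is used), you can repeat the synchronization periodically:
(crontab -l 2>/dev/null; echo "*/30 * * * * /usr/sbin/ntpdate 202.120.2.101 >/dev/null 2>&1") | crontab -    # re-sync every 30 minutes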
5. The node names and IP addresses must resolve consistently: make sure the forward and reverse resolution of the host names in the /etc/hosts file matches the output of 'uname -n'.
Add the following name resolution entries to /etc/hosts on node1 and node2 respectively:
Echo "192.168.1.131 node1.chanedu.com node1" > > / etc/hostsecho "192.168.1.132 node2.chanedu.com node2" > > / etc/hosts
6. Configure node heartbeat connection
node1 and node2 are connected to each other through their eth2 network cards: eth2 on node1 is cabled directly to eth2 on node2, without going through a switch, and this link is used for heartbeat detection.
node1 eth2: 192.168.2.131; node2 eth2: 192.168.2.132
Add a host route on node1 and node2 respectively, so that each host reaches the opposite end through its eth2 network card for heartbeat detection.
On node1:
route add -host 192.168.2.132 dev eth2    # traffic from node1 to 192.168.2.132 (node2) leaves through the eth2 network card
echo "route add -host 192.168.2.132 dev eth2" >> /etc/rc.local
On node2:
route add -host 192.168.2.131 dev eth2    # traffic from node2 to 192.168.2.131 (node1) leaves through the eth2 network card
echo "route add -host 192.168.2.131 dev eth2" >> /etc/rc.local
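To verify that heartbeat traffic really takes the eth2 link, a quick check:
route -n | grep eth2       # the peer's 192.168.2.x address should show dev eth2
ping -c 2 192.168.2.132    # run on node1; ping 192.168.2.131 from node2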
7. To secure the communication, the nodes talk to each other over SSH with key-based authentication; the key pair can be generated with the 'ssh-keygen -t rsa' command.
ssh-keygen -t rsa
ssh-copy-id root@192.168.1.132
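Repeat the same two commands on node2 toward 192.168.1.131 so that either node can control the other. A quick test that passwordless login works:
ssh root@192.168.1.132 'uname -n'    # should print node2.chanedu.com without a password prompt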
Install heartbeat v2
Since heartbeat-pils was replaced by cluster-glue after CentOS 6.5, you need to resolve the dependencies manually.
1. Resolve the dependencies
yum install perl-TimeDate net-snmp-libs libnet PyXML
rpm -ivh heartbeat-2.1.4-12.el6.x86_64.rpm heartbeat-pils-2.1.4-12.el6.x86_64.rpm heartbeat-stonith-2.1.4-12.el6.x86_64.rpm
Note: libnet is in the epel source
Install httpd on node1 and node2
The installation itself is not demonstrated in detail here; a minimal sketch follows.
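Assuming the stock CentOS packages, with per-node test pages of my own choosing so you can tell which node answers before NFS is introduced:
yum install httpd
echo "node1.chanedu.com" > /var/www/html/index.html    # on node2, write node2.chanedu.com instead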
Install nfs on the shared host
Likewise not demonstrated in detail here; a minimal sketch follows.
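Assuming the standard CentOS 6 package and service names:
yum install nfs-utils rpcbind
service rpcbind start
service nfs start
chkconfig rpcbind on
chkconfig nfs on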
Configure the httpd high-availability cluster
1. Copy the three files ha.cf, haresources and authkeys into the /etc/ha.d/ directory.
cd /usr/share/doc/heartbeat-2.1.4/
cp ha.cf haresources authkeys /etc/ha.d/
cd /etc/ha.d/
ls
authkeys  ha.cf  harc  haresources  rc.d  README.config  resource.d  shellfuncs
2. Configure the basic parameters of heartbeat by editing /etc/ha.d/ha.cf directly.
vim /etc/ha.d/ha.cf
grep -v "#" /etc/ha.d/ha.cf
debugfile /var/log/ha-debug
logfile /var/log/ha-log
logfacility local0
keepalive 1000ms
deadtime 8
warntime 4
initdead 60
udpport 694
ucast eth2 192.168.2.132
auto_failback on
node node1.chanedu.com
node node2.chanedu.com
ping 192.168.1.1
Note that the ha.cf above is the configuration on node1; on node2, the unicast address must be changed to point at node1's eth2:
ucast eth2 192.168.2.131
3. Configure the heartbeat resources and define node1 as the master node by editing /etc/ha.d/haresources directly.
vim /etc/ha.d/haresources
node1.chanedu.com 192.168.1.180/24/eth0 httpd
The IP address above is the VIP that provides the httpd service to the outside. The subnet mask is 24 bits, and the alias is configured on the eth0 interface.
4. Configure the authentication file by editing /etc/ha.d/authkeys.
chmod 600 /etc/ha.d/authkeys
openssl rand -hex 12
6107510ab21f17a41d377135
vim /etc/ha.d/authkeys
auth 2
#1 crc
2 sha1 6107510ab21f17a41d377135
#3 md5 Hello!
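heartbeat checks the permissions of this file at startup, so it is worth confirming the mode:
ls -l /etc/ha.d/authkeys    # must show -rw------- (600), or heartbeat will refuse to start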
5. Copy the files ha.cf, haresources and authkeys to the /etc/ha.d/ directory on node2, preserving the file permissions.
rsync -p /etc/ha.d/{ha.cf,haresources,authkeys} root@192.168.1.132:/etc/ha.d/
6. Disable automatic startup of httpd on the node1 and node2 nodes, and stop the httpd service.
service httpd stop
chkconfig httpd off
7. Start the heartbeat service
Start the heartbeat service on the node1 node and check whether the eth0:0 alias exists.
service heartbeat start
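For example, once node1 owns the resources:
ifconfig eth0:0    # should show the VIP 192.168.1.180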
Then start node2's heartbeat service from node1.
[root@node1 ha.d]# ssh node2 'service heartbeat start'
Starting High-Availability services:
2017/05/22 INFO: Resource is stopped
Done.
[root@node1 ha.d]# ssh node2 'ss -unl'
State    Recv-Q  Send-Q  Local Address:Port  Peer Address:Port
UNCONN   0       0       *:694               *:*
UNCONN   0       0       *:45373             *:*
As you can see above, node2 is listening on UDP port 694, so heartbeat has started successfully on node2.
Client access test
Access 192.168.1.180 from a client.
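For example, from any host on the 192.168.1.0/24 network:
curl http://192.168.1.180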
If the page is returned, the basic heartbeat configuration is working. Next, simulate a node1 outage: stop the heartbeat service on node1 directly and see whether the resources jump to node2.
[root@node1 ha.d]# service heartbeat stop
Stopping High-Availability services:
Done.
Look at the IP addresses on node2: the VIP resource has been taken over by node2.
Access the service from the client again: the httpd resource has been transferred to the node2 node.
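A quick way to confirm the takeover, using the same checks as before:
ifconfig eth0:0              # on node2: the VIP 192.168.1.180 should now appear here
curl http://192.168.1.180    # from the client: the page is now served by node2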
High availability of httpd based on nfs shared storage
Configure the two nodes in the cluster to share a back-end NFS file system: add a shared directory on the shared host and grant the apache user access to it.
cat /etc/exports
/www/htdocs 192.168.1.0/24(rw,no_root_squash,async)
setfacl -m u:apache:rwx /www/htdocs/
echo "Page in NFS Server." > /www/htdocs/index.html
Modify the haresources configuration file on node1 to mount the shared NFS file system, then synchronize it to the /etc/ha.d/ directory on the node2 node and restart the services.
vim /etc/ha.d/haresources
node1.chanedu.com 192.168.1.180/24/eth0 Filesystem::192.168.1.150:/www/htdocs::/var/www/html::nfs httpd
rsync /etc/ha.d/haresources root@192.168.2.132:/etc/ha.d/
ssh node2 'service heartbeat stop'
service heartbeat stop
service network restart
ssh node2 'service network restart'
service heartbeat start
ssh node2 'service heartbeat start'
When the client accesses 192.168.1.180, the page from the back-end NFS shared storage is returned; the test succeeds.
One thing to note here: make sure that node1 and node2 can both successfully mount the shared host's exported directory, then use the curl command against each node locally. Test this carefully: on a minimized CentOS or Red Hat installation, the nfs client is not installed by default and the mount.nfs command is missing, so the NFS shared directory cannot be mounted and the HA failover will not take effect.
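A quick check for the NFS client pieces on each node (standard CentOS 6 package names assumed):
rpm -q nfs-utils || yum install nfs-utils    # nfs-utils provides mount.nfs
ls /sbin/mount.nfs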
Mount the /www/htdocs directory onto the /var/www/html directory on the node1 and node2 nodes, then test locally with curl.
mount -t nfs 192.168.1.150:/www/htdocs /var/www/html
[root@node1 heartbeat]# curl http://192.168.1.131
Page in NFS Server.
[root@node2 heartbeat]# curl http://192.168.1.132
Page in NFS Server.
Stop the heartbeat service for node1
ssh node1 'service heartbeat stop'
In the /usr/share/heartbeat/ directory there are many script files, including:
hb_standby: turns the local node into the standby node. For example, running hb_standby on node1 hands the resources over to node2.
hb_takeover: makes the local node take the resources over from the other node. For example, if node1 is currently the standby node, running hb_takeover on node1 pulls the resources back from node2, as if node1 considered node2 down.
These scripts are not demonstrated in detail here; they are simple to use.
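As a minimal sketch, with the paths taken from the directory named above (on some builds the scripts live under /usr/lib64/heartbeat/ instead):
/usr/share/heartbeat/hb_standby     # run on the active node: hand all resources to the peer
/usr/share/heartbeat/hb_takeover    # run on the standby node: pull all resources back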
Summary
1. A heartbeat v2.x high-availability cluster based on the haresources configuration file is relatively simple to implement, but its functionality is limited. If more powerful features are needed, pacemaker is undoubtedly the better choice.
2. Because CentOS was installed in minimized form, the node2 node was missing nfs-utils and the mount.nfs command, so during testing node1's resources could not be transferred to the node2 node. After investigating many possible causes, the culprit turned out to be that node2 could not mount the NFS shared file system. That carelessness made the experiment take much longer, which would not be acceptable in a production environment.
This concludes how to configure a heartbeat v2 httpd high-availability cluster based on the haresources configuration file in Linux. I hope the content above is helpful to you. If you think the article is good, share it so more people can see it.