2025-01-16 Update. From: SLTechnology News&Howtos, Servers.
Shulou (Shulou.com) 06/02 Report
# Introduction to Heartbeat #
The Heartbeat project is part of the Linux-HA project and implements a highly available cluster system. The heartbeat service and cluster communication are the two key components of a high-availability cluster; in the Heartbeat project, both functions are implemented by the heartbeat module.
This cluster scheme is built from third-party software. It is simpler in functionality than the cluster software that ships with RedHat, but it is very easy to set up and makes for a quick solution.
Heartbeat's high-availability cluster communicates over UDP or a serial link, and its plug-in architecture implements serial, unicast, broadcast, and multicast communication between cluster nodes. It provides the core HA function: Heartbeat is installed on both servers at the same time to monitor system status, coordinate the master and backup servers, and maintain service availability. It can detect failures in a server's application-level software and hardware, isolate and recover from errors promptly, and eliminate single points of failure through system monitoring, service monitoring, automatic IP migration, and related techniques, keeping important services continuously available simply and economically. Heartbeat uses a virtual IP address so that failover between the master and backup servers is transparent to clients.
But heartbeat alone cannot provide a robust service, so we also use LVS for load balancing behind it.
# Installing Heartbeat #
First: set up the environment
1. System: RedHat 6.5
2. Nodes: four virtual machines in total. server10 and server11 run heartbeat and LVS; the other two nodes, server12 and server13, only provide the apache and vsftpd services.
3. Name resolution between the four nodes must be configured.
4. Firewalls are off, the clocks are synchronized, and ideally all four machines run the same system version.
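Name resolution between the nodes (point 3 above) can be handled with /etc/hosts entries on every machine. A minimal sketch, assuming the node addresses 172.25.60.10-13 used later in this article; it writes to a temporary file rather than /etc/hosts so it is safe to run as-is:

```shell
#!/bin/sh
# Sketch: generate /etc/hosts-style entries for the four cluster nodes.
# The IP-to-hostname mapping is an assumption inferred from the addresses
# used later in this article (server10 -> 172.25.60.10, and so on).
HOSTS_FILE=$(mktemp)   # in production this would be /etc/hosts

cat >> "$HOSTS_FILE" <<'EOF'
172.25.60.10  server10
172.25.60.11  server11
172.25.60.12  server12
172.25.60.13  server13
EOF

# Verify every node has an entry in the file
for h in server10 server11 server12 server13; do
    grep -q "$h" "$HOSTS_FILE" || echo "missing entry for $h"
done
cat "$HOSTS_FILE"
```

The same block would be appended to /etc/hosts on all four machines so each node can resolve the others by name.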
Second: install heartbeat (this is third-party software, not included in RedHat, so you have to download it yourself):
heartbeat-3.0.4-2.el6.x86_64.rpm
heartbeat-devel-3.0.4-2.el6.x86_64.rpm
heartbeat-libs-3.0.4-2.el6.x86_64.rpm
You also need to modify the local yum repository configuration before installing:
vim /etc/yum.repos.d/rhel-source.repo
##
[Server]
name=Red Hat Enterprise Linux $releasever - $basearch - Source
baseurl=http://172.25.60.250/rhel6.5/Server
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
[HighAvailability]
name=HighAvailability
baseurl=http://172.25.60.250/rhel6.5/HighAvailability
gpgcheck=0
[LoadBalancer]
name=LoadBalancer
baseurl=http://172.25.60.250/rhel6.5/LoadBalancer
gpgcheck=0
[ResilientStorage]
name=ResilientStorage
baseurl=http://172.25.60.250/rhel6.5/ResilientStorage
gpgcheck=0
[ScalableFileSystem]
name=ScalableFileSystem
baseurl=http://172.25.60.250/rhel6.5/ScalableFileSystem
gpgcheck=0
##
The configuration files for heartbeat live in /etc/ha.d/, but they are not placed there by default, so we copy them from the documentation directory:
[root@server10 ~]# cd /usr/share/doc/heartbeat-3.0.4/
[root@server10 heartbeat-3.0.4]# cp ha.cf authkeys haresources /etc/ha.d/
Next, edit the main configuration file:
[root@server10 ha.d]# vim ha.cf
##
29 logfile /var/log/ha-log    ## log location
48 keepalive 2    ## set the heartbeat interval to 2 seconds
56 deadtime 30    ## declare the node dead after 30 seconds
61 warntime 10    ## how long to wait before logging a "late heartbeat" warning (in seconds)
71 initdead 120    ## on some setups the network needs time to come up after a reboot; should be at least twice deadtime
76 udpport 694    ## port used for bcast and ucast communication (the default)
91 bcast eth0
211 node server10    ## the two nodes where heartbeat is installed
212 node server11
##
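The comments in ha.cf imply an ordering among the timing options (keepalive below warntime below deadtime, and initdead at least twice deadtime). A quick shell sanity check of the values chosen here; the variable names are just for illustration, not heartbeat options:

```shell
#!/bin/sh
# Sanity-check the ha.cf timing values used in this walkthrough.
keepalive=2     # heartbeat interval (seconds)
warntime=10     # warn about a late heartbeat after this long
deadtime=30     # declare the peer dead after this long
initdead=120    # grace period at boot; should be >= 2 * deadtime

[ "$warntime" -gt "$keepalive" ]      || echo "warntime should exceed keepalive"
[ "$deadtime" -gt "$warntime" ]       || echo "deadtime should exceed warntime"
[ "$initdead" -ge $((2 * deadtime)) ] || echo "initdead should be >= 2 * deadtime"
echo "timing values consistent"
```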
[root@server10 ha.d]# vim authkeys
##
23 auth 1
24 1 crc
##
Finally, we modify haresources. We want to add an apache service to it, give it a virtual IP (an address that is not already in use), and have the two nodes take turns hosting it.
[root@server10 ha.d]# vim haresources
##
149 server10 IPaddr::172.25.60.100/24/eth0 httpd
##
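The haresources entry packs the resource script name, VIP, prefix length, and interface into a single IPaddr::172.25.60.100/24/eth0 token. A small sketch of how that token splits apart (the parsing here is mine, for illustration; heartbeat's own resource script in /etc/ha.d/resource.d/ does the real work):

```shell
#!/bin/sh
# Parse an haresources resource token of the form
#   IPaddr::172.25.60.100/24/eth0
# into its script name, address, prefix length, and interface.
token="IPaddr::172.25.60.100/24/eth0"

script=${token%%::*}    # resource script name (lives in /etc/ha.d/resource.d/)
args=${token#*::}       # everything after "::" -> 172.25.60.100/24/eth0
ip=$(echo "$args"     | cut -d/ -f1)
prefix=$(echo "$args" | cut -d/ -f2)
iface=$(echo "$args"  | cut -d/ -f3)

echo "script=$script ip=$ip prefix=$prefix iface=$iface"
# prints: script=IPaddr ip=172.25.60.100 prefix=24 iface=eth0
```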
The httpd service is installed on both nodes as a test; to tell them apart, we put different content in each node's index.html.
After completing the above, start the heartbeat and httpd services on both nodes:
[root@server10 ha.d]# /etc/init.d/heartbeat start
[root@server10 ha.d]# /etc/init.d/httpd start
Then check the effect through Firefox:
Next we shut down heartbeat on the current node and find that the other node automatically takes over:
When heartbeat on that node is started again, it takes over once more, because it is the master node (server10):
This gives us automatic failover, i.e. high availability.
# Introduction to LVS #
LVS is the abbreviation of Linux Virtual Server, a virtual server cluster system.
1. The three load-balancing techniques of LVS:
(1) Virtual server via NAT (VS/NAT)
(2) Virtual server via IP tunneling (VS/TUN)
(3) Virtual server via direct routing (VS/DR)
# Installation and configuration of LVS #
First install ipvsadm:
[root@server10 ha.d]# yum install ipvsadm -y
[root@server10 ha.d]# ipvsadm -l    ## view the scheduling table; there should be no entries at this point
Next, add the virtual IP that will serve as the public access address:
[root@server10 ha.d]# ifconfig eth0:0 172.25.60.200 netmask 255.255.255.0 up
We add a virtual service for the httpd port of the virtual IP, using the round-robin (rr) scheduling algorithm:
[root@server10 ha.d]# ipvsadm -A -t 172.25.60.200:80 -s rr
[root@server10 ha.d]# ipvsadm -l    ## checking the scheduling table again now shows an entry
Add the httpd services of server12 and server13 as the real servers in the rotation:
[root@server10 ha.d]# ipvsadm -a -t 172.25.60.200:80 -r 172.25.60.12:80 -g
[root@server10 ha.d]# ipvsadm -a -t 172.25.60.200:80 -r 172.25.60.13:80 -g
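With -s rr, LVS hands each new connection to the real servers in strict alternation. A minimal shell simulation of that round-robin rotation; this is not LVS itself, just the scheduling idea:

```shell
#!/bin/sh
# Simulate LVS round-robin (rr) scheduling across two real servers:
# 12 incoming "requests" should land 6 on each backend.
count12=0   # requests handled by 172.25.60.12
count13=0   # requests handled by 172.25.60.13

i=0
while [ $i -lt 12 ]; do
    if [ $((i % 2)) -eq 0 ]; then
        count12=$((count12 + 1))   # even requests go to 172.25.60.12
    else
        count13=$((count13 + 1))   # odd requests go to 172.25.60.13
    fi
    i=$((i + 1))
done
echo "172.25.60.12 handled $count12, 172.25.60.13 handled $count13"
# prints: 172.25.60.12 handled 6, 172.25.60.13 handled 6
```

This 6/6 split is exactly what the browser test later in this article shows.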
Install the httpd service on server12 and server13, respectively:
server12 and server13 also need to recognize the virtual IP 172.25.60.200, so add the same virtual interface on each of them:
[root@server12 ~]# ifconfig eth0:0 172.25.60.200 netmask 255.255.255.0 up
Now that both the control nodes and the real-service nodes recognize the VIP (172.25.60.200), we add an ARP policy on the service nodes. It works much like a firewall, but requires an extra package:
[root@server12 ~]# yum install arptables* -y
Add a policy that drops all ARP requests arriving for 172.25.60.200, and makes ARP traffic leaving 172.25.60.200 go out with the node's own address, 172.25.60.13 (172.25.60.12 on server12):
[root@server13 ~]# arptables -A IN -d 172.25.60.200 -j DROP
[root@server13 ~]# arptables -A OUT -s 172.25.60.200 -j mangle --mangle-ip-s 172.25.60.13
[root@server13 ~]# arptables -nL
[root@server13 ~]# /etc/init.d/arptables_jf save    ## save the policy
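Each real server needs the same pair of arptables rules with its own address substituted. A small helper that only prints the two commands for a given VIP and real-server IP (the function name is mine, for illustration; the emitted commands match the ones above):

```shell
#!/bin/sh
# Emit the arptables rules a real server needs for LVS/DR:
# drop incoming ARP for the VIP, and source outgoing ARP from the real IP.
emit_arptables_rules() {
    vip=$1
    real_ip=$2
    echo "arptables -A IN -d $vip -j DROP"
    echo "arptables -A OUT -s $vip -j mangle --mangle-ip-s $real_ip"
}

# Print (not run) the rules for server13; server12 would pass 172.25.60.12.
rules=$(emit_arptables_rules 172.25.60.200 172.25.60.13)
echo "$rules"
```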
Now repeatedly visit 172.25.60.200 in the browser (make sure the httpd service on the real servers is running) and refresh several times:
Use ipvsadm -l to view what the control node has recorded:
After 12 visits, each of the two nodes has been called 6 times, yet the address visited was always the virtual IP (172.25.60.200). This is the LVS direct-routing (DR) scheme.
# Heartbeat + LVS #
To make the two pieces of software cooperate, and to give the platform a failure-detection and recovery mechanism, we need to install ldirectord:
ldirectord-3.9.2-1.2.x86_64.rpm
Install ldirectord on server10 and server11 (the package depends on system packages, so install it with yum):
[root@server10 ha.d]# yum install ldirectord-3.9.2-1.2.x86_64.rpm -y
Copy its configuration file into heartbeat's configuration directory:
[root@server10 ha.d]# cp /usr/share/doc/packages/ldirectord/ldirectord.cf /etc/ha.d/
Install perl-IO-Socket-INET6-2.56-4.el6.noarch on both nodes; without it, ldirectord will later fail to start because the module is missing:
[root@server10 ha.d]# yum install perl-IO-Socket-INET6 -y
Edit the configuration file for ldirectord:
[root@server10 ha.d]# vim ldirectord.cf
##
25 virtual=172.25.60.200:80
26 real=172.25.60.12:80 gate
27 real=172.25.60.13:80 gate
28 fallback=127.0.0.1:80 gate
29 service=http
30 scheduler=rr
31 # persistent=600
32 # netmask=255.255.255.255
33 protocol=tcp
34 checktype=negotiate
35 checkport=80
36 request="index.html"
##
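checktype=negotiate with request="index.html" means ldirectord periodically fetches index.html from each real server and drops a server from the ipvsadm table when the fetch fails. A shell sketch of that decision, using local files to stand in for the HTTP responses; the paths and helper function are illustrative, not part of ldirectord:

```shell
#!/bin/sh
# Sketch of ldirectord's negotiate check: a real server stays in the
# pool only while fetching its check page succeeds.
workdir=$(mktemp -d)
echo "server12" > "$workdir/12_index.html"   # server12 responds
# server13 has no page, so its check "fails"

check_server() {
    # stand-in for fetching http://<real server>/index.html
    [ -f "$1" ]
}

pool=""
if check_server "$workdir/12_index.html"; then
    pool="$pool 172.25.60.12"
fi
if check_server "$workdir/13_index.html"; then
    pool="$pool 172.25.60.13"
fi

echo "servers in pool:$pool"
# prints: servers in pool: 172.25.60.12
```

When the pool is empty, ldirectord falls back to the fallback= address (127.0.0.1:80 above), matching the behavior described next.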
We specify two real service nodes, 172.25.60.12 and 172.25.60.13, accessed in round-robin order; when both of them are down, 172.25.60.10 serves the fallback page itself.
Copy this configuration file to the other control node, 172.25.60.11:
[root@server10 ha.d] # scp ldirectord.cf 172.25.60.11:/etc/ha.d/
Edit the haresources file and add the ldirectord service to heartbeat:
[root@server10 ha.d]# vim haresources
##
149 server10 IPaddr::172.25.60.100/24/eth0 httpd ldirectord
##
Make the same modification on server11.
At this point we simply start the heartbeat service, which invokes ldirectord automatically; the ldirectord configuration performs the same scheduling as our manual LVS setup, and the platform is now essentially complete.