Introduction to MHA Architecture
MHA is short for Master High Availability. It is currently one of the more mature solutions for MySQL high availability. Its core is a set of scripts written in Perl, and it is an excellent piece of high-availability software for failover and slave promotion in a MySQL high-availability environment. When a failure occurs, MHA can complete the failover of the database automatically within about 30 seconds, and it ensures data consistency to the greatest extent possible, thereby achieving true high availability.
Unlike MMM, an MHA-based architecture does not require master-master replication; a basic master-slave replication architecture is enough, because when the master library goes down, MHA selects one of the slave libraries to become the new master. All nodes in an MHA cluster need to establish ssh mutual trust with each other so that MHA can perform remote control and data management.
What features does MHA provide:
MHA monitors the availability of the Master node. When the Master becomes unavailable, it elects a new Master from the Slaves, providing master-slave switchover and failover. During a failover, MHA tries to save the binlog from the crashed Master so that transactions are not lost as far as possible; however, if the server hosting the Master can no longer be reached, or there is a hardware-level failure, the binlog may not be saved successfully. MHA can be combined with semi-synchronous replication to avoid data inconsistency between the slave libraries, and it supports both GTID-based and log-position-based replication.
MHA failover process:
Try to log in to the crashed Master node via ssh and save its binary log events (binlog events); identify the Slave with the most up-to-date data among all Slaves as the candidate Master; apply the differential relay logs (relay log) from that Slave to the other Slaves; apply the binary log events saved from the original Master; promote the candidate Master to be the new Master; point the other Slaves at the new Master for replication; and bring up the virtual IP on the new Master so that front-end requests can still reach it.
The architecture of MHA is as follows:
Build MHA architecture by hand
The machines used in this article are described below:
Name        IP                 Role
master      192.168.190.151    master library
slave-01    192.168.190.152    slave library
slave-02    192.168.190.154    slave library
manager     192.168.190.153    cluster management node (MHA)
Environment version notes:
Operating system: CentOS 7
MySQL version: 8.0.19
MHA version: 0.58
Additional instructions:
Readers who have come to learn about this architecture will presumably already have mastered the installation of MySQL, and there are plenty of articles covering MySQL installation, so to keep this article from getting too long the installation of MySQL is not demonstrated here. MySQL has already been installed on the machines used in this article.

Configure the configuration files of the master and slave nodes
1. On all master and slave nodes, use the following statements to create a MySQL user for master-slave replication. Because any slave library may be elected as the new master library, every node needs its own replication user:
create user 'repl'@'%' identified with mysql_native_password by 'Abc_123456';
grant replication slave on *.* to 'repl'@'%';
flush privileges;
2. Then modify the MySQL configuration file on the master node:
[root@master ~]# vim /etc/my.cnf
[mysqld]
# set the id of the current node
server_id=101
# enable binlog and specify the name of the binlog file
log_bin=mysql_bin
# enable relay_log and specify the name of the relay_log file
relay_log=relay_bin
# record the contents synchronized from the relay log into the node's own binlog
log_slave_updates=on
# enable GTID replication mode
gtid_mode=ON
enforce_gtid_consistency=1
3. The same configuration is added to the configuration file of slave-01, except that server_id is different:
[root@slave-01 ~]# vim /etc/my.cnf
[mysqld]
server_id=102
log_bin=mysql_bin
relay_log=relay_bin
log_slave_updates=on
gtid_mode=ON
enforce_gtid_consistency=1
4. Then configure slave-02:
[root@slave-02 ~]# vim /etc/my.cnf
[mysqld]
server_id=103
log_bin=mysql_bin
relay_log=relay_bin
log_slave_updates=on
gtid_mode=ON
enforce_gtid_consistency=1
After modifying the above configuration files, restart the MySQL service on all three nodes:
[root@master ~]# systemctl restart mysqld
[root@slave-01 ~]# systemctl restart mysqld
[root@slave-02 ~]# systemctl restart mysqld

Configure the master-slave relationship of slave-01 to master
Enter the MySQL command line terminal of the slave-01 node and execute the following statements to configure the master-slave replication link:
mysql> stop slave;  -- stop master-slave synchronization
mysql> change master to master_host='192.168.190.151', master_port=3306, master_user='repl', master_password='Abc_123456', master_auto_position=1;  -- configure the connection information of the master node
mysql> start slave;  -- start master-slave synchronization
After configuring the master-slave replication link, use the show slave status\G statement to check the synchronization status. If the values of Slave_IO_Running and Slave_SQL_Running are both Yes, the master-slave synchronization status is normal:
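For reference, the two relevant fields of a healthy output look roughly like this (the remaining fields of show slave status are omitted here, and the exact values depend on your environment):

mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.190.151
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
...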
Configure the master-slave relationship of slave-02 to master
In the same way, enter the MySQL command line terminal of the slave-02 node and execute the following statements to configure its master-slave replication link:
mysql> stop slave;  -- stop master-slave synchronization
mysql> change master to master_host='192.168.190.151', master_port=3306, master_user='repl', master_password='Abc_123456', master_auto_position=1;  -- configure the connection information of the master node
mysql> start slave;  -- start master-slave synchronization
After configuring the master-slave replication link, use the show slave status\G statement to check the synchronization status. If the values of Slave_IO_Running and Slave_SQL_Running are both Yes, the master-slave synchronization status is normal:
Configure ssh secret-free login
Configure all hosts in the cluster so that they can log in to each other through ssh, because MHA relies on ssh for remote control and data management, for example saving the binary log of the original Master node and configuring the virtual IP during a failover.
1. Generate ssh login key:
[root@master ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:LzRXziRQPrqaKEteH6KrZpCiV6uGP6GTi6RonE7Hhms root@master
The key's randomart image is:
+---[RSA 2048]----+
(randomart image omitted)
+----[SHA256]-----+
2. Copy the key to the other servers:
[root@master ~]# ssh-copy-id -i /root/.ssh/id_rsa root@192.168.190.151
[root@master ~]# ssh-copy-id -i /root/.ssh/id_rsa root@192.168.190.152
[root@master ~]# ssh-copy-id -i /root/.ssh/id_rsa root@192.168.190.154
[root@master ~]# ssh-copy-id -i /root/.ssh/id_rsa root@192.168.190.153
Then perform the same steps on the other nodes in the cluster; since this is a repetitive operation, it is not demonstrated here. Finally, do a simple test to verify that passwordless login works:
[root@master ~]# ssh root@192.168.190.152
Last failed login: Sat Feb  1 15:29:38 CST 2020 from 192.168.190.151 on ssh:notty
There was 1 failed login attempt since the last successful login.
Last login: Sat Feb  1 14:14:03 2020 from 192.168.190.1
[root@slave-01 ~]#    # logged in without being asked for a password

Install the MHA package
1. First install the mha4mysql-node package on all nodes. The installation package can be downloaded from the following address:
https://github.com/yoshinorim/mha4mysql-node/releases/tag/v0.58
The downloaded rpm file is as follows:
[root@master ~]# ls *.rpm
mha4mysql-node-0.58-0.el7.centos.noarch.rpm
[root@master ~]#
You need to install the Perl-related dependencies before installing the rpm package:
[root@master ~]# yum -y install epel-release
[root@master ~]# yum -y install perl-DBD-MySQL perl-DBI ncftp
You are now ready to install mha4mysql-node with the following command:
[root@master ~]# rpm -ivh mha4mysql-node-0.58-0.el7.centos.noarch.rpm

Tips: the other two Slave nodes and the monitoring node can install mha4mysql-node following the same steps; the demonstration is not repeated here.
2. Next, install the mha4mysql-manager package on the monitoring node manager. The installation package can be downloaded from the following address:
https://github.com/yoshinorim/mha4mysql-manager/releases
The downloaded rpm file is as follows:
[root@manager ~]# ls *.rpm
mha4mysql-manager-0.58-0.el7.centos.noarch.rpm
[root@manager ~]#
Similarly, you need to install the Perl dependencies before installing the rpm package:
[root@manager ~]# yum -y install epel-release
[root@manager ~]# yum -y install perl-Config-Tiny perl-Time-HiRes perl-Parallel-ForkManager perl-Log-Dispatch perl-DBD-MySQL ncftp
Then install the mha4mysql-manager package with the following command:
[root@manager ~]# rpm -ivh mha4mysql-manager-0.58-0.el7.centos.noarch.rpm

Configure the MHA management node
1. Create the configuration file storage directory and working directory for MHA:
[root@manager ~]# mkdir /etc/mha
[root@manager ~]# mkdir /home/mysql_mha
2. Create a configuration file for MHA and add the following:
[root@manager ~]# vim /etc/mha/mysql_mha.cnf
[server default]
# mha account and password used to access the database
user=mha
password=Abc_123456
# mha working directory
manager_workdir=/home/mysql_mha
# mha log file path
manager_log=/home/mysql_mha/manager.log
# mha working directory on the remote nodes
remote_workdir=/home/mysql_mha
# user that can log in via ssh
ssh_user=root
# MySQL user and password used for master-slave replication
repl_user=repl
repl_password=Abc_123456
# interval in seconds between health checks of the master
ping_interval=1
# directory where the master node stores its binlog files
master_binlog_dir=/var/lib/mysql
# script that moves the virtual IP to the new Master after a master-slave switch
master_ip_failover_script=/usr/bin/master_ip_failover
# script used to perform a secondary check on whether the master is really down
secondary_check_script=/usr/bin/masterha_secondary_check -s 192.168.190.151 -s 192.168.190.152

# node information of the cluster
[server1]
hostname=192.168.190.151
# the node can participate in the Master election
candidate_master=1

[server2]
hostname=192.168.190.152
candidate_master=1

[server3]
hostname=192.168.190.154
# the node cannot participate in the Master election
no_master=1
3. Write the master_ip_failover script referenced in the configuration file. MHA does not provide this script by default; the one below is adapted from MHA's official example. Note that a few places in the script need to be modified according to your actual environment; they are marked with comments:
[root@manager ~]# vim /usr/bin/master_ip_failover
#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';
use Getopt::Long;

my (
    $command, $orig_master_host, $orig_master_ip, $ssh_user,
    $orig_master_port, $new_master_host, $new_master_ip, $new_master_port,
    $orig_master_ssh_port, $new_master_ssh_port, $new_master_user, $new_master_password
);

# the virtual IP defined here can be modified according to the actual situation
my $vip = '192.168.190.80/24';
my $key = '1';
# the network card name "ens32" (and "bond0" in the arping command below) needs to be modified according to the name of your machine's network card
my $ssh_start_vip = "sudo /sbin/ifconfig ens32:$key $vip";
my $ssh_stop_vip = "sudo /sbin/ifconfig ens32:$key down";
my $ssh_Bcast_arp = "sudo /sbin/arping -I bond0 -c 3 -A $vip";

GetOptions(
    'command=s' => \$command,
    'ssh_user=s' => \$ssh_user,
    'orig_master_host=s' => \$orig_master_host,
    'orig_master_ip=s' => \$orig_master_ip,
    'orig_master_port=i' => \$orig_master_port,
    'orig_master_ssh_port=i' => \$orig_master_ssh_port,
    'new_master_host=s' => \$new_master_host,
    'new_master_ip=s' => \$new_master_ip,
    'new_master_port=i' => \$new_master_port,
    'new_master_ssh_port' => \$new_master_ssh_port,
    'new_master_user' => \$new_master_user,
    'new_master_password' => \$new_master_password
);

exit &main();

sub main {
    $ssh_user = defined $ssh_user ? $ssh_user : 'root';
    print "\n\nIN SCRIPT TEST====$ssh_user|$ssh_stop_vip==$ssh_user|$ssh_start_vip===\n\n";

    if ($command eq "stop" || $command eq "stopssh") {
        my $exit_code = 1;
        eval {
            print "Disabling the VIP on old master: $orig_master_host\n";
            &stop_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn "Got Error: $@\n";
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ($command eq "start") {
        my $exit_code = 10;
        eval {
            print "Enabling the VIP - $vip on the new master - $new_master_host\n";
            &start_vip();
            &start_arp();
            $exit_code = 0;
        };
        if ($@) {
            warn $@;
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ($command eq "status") {
        print "Checking the Status of the script.. OK\n";
        exit 0;
    }
    else {
        &usage();
        exit 1;
    }
}

sub start_vip() {
    `ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}

sub stop_vip() {
    `ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}

sub start_arp() {
    `ssh $ssh_user\@$new_master_host \" $ssh_Bcast_arp \"`;
}

sub usage {
    print "Usage: master_ip_failover --command=start|stop|stopssh|status --ssh_user=user --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}
You also need to make the script executable, otherwise MHA will not be able to call it:
[root@manager ~]# chmod a+x /usr/bin/master_ip_failover
4. To match the remote_workdir setting in the configuration file, create MHA's remote working directory on the other nodes:
[root@master ~]# mkdir /home/mysql_mha
[root@slave-01 ~]# mkdir /home/mysql_mha
[root@slave-02 ~]# mkdir /home/mysql_mha
5. The configuration file specifies that the manager accesses the database nodes with the mha user, so this user needs to be created on the master node (since replication is already running, it will be synchronized to the slave nodes as well):
create user 'mha'@'%' identified with mysql_native_password by 'Abc_123456';
grant all privileges on *.* to 'mha'@'%';
flush privileges;
6. After completing all the above steps, run masterha_check_ssh and masterha_check_repl on the manager node to verify the configuration: masterha_check_ssh checks whether ssh login works between the nodes, and masterha_check_repl checks whether the replication links of the master and slave nodes are normal:
[root@manager ~]# masterha_check_ssh --conf=/etc/mha/mysql_mha.cnf
[root@manager ~]# masterha_check_repl --conf=/etc/mha/mysql_mha.cnf
The results of running them are as follows:
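As a reference point only (the exact log lines vary with the MHA version and your environment), a successful run of the two checks ends with messages along these lines:

[root@manager ~]# masterha_check_ssh --conf=/etc/mha/mysql_mha.cnf
...
[info] All SSH connection tests passed successfully.
[root@manager ~]# masterha_check_repl --conf=/etc/mha/mysql_mha.cnf
...
MySQL Replication Health is OK.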
7. After all the above tests are passed, you can start the MHA service. The startup command is as follows:
[root@manager ~]# nohup masterha_manager --conf=/etc/mha/mysql_mha.cnf &
After the startup is complete, you can use the ps command to check whether the masterha_manager process exists. The following indicates that the startup is successful:
[root@manager ~]# ps aux | grep masterha_manager
root   2842  0.3  1.1 299648 22032 pts/0  S   18:30  0:00 perl /usr/bin/masterha_manager --conf=/etc/mha/mysql_mha.cnf
root   2901  0.0  0.0 112728   976 pts/0  R+  18:31  0:00 grep --color=auto masterha_manager
[root@manager ~]#
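The mha4mysql-manager package also ships a masterha_check_status command that reports whether the manager is currently monitoring a master; as a rough sketch (the pid and exact wording will differ on your machine):

[root@manager ~]# masterha_check_status --conf=/etc/mha/mysql_mha.cnf
mysql_mha (pid:2842) is running(0:PING_OK), master:192.168.190.151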
8. Finally, the virtual IP needs to be configured manually on the master node, because MHA only moves the virtual IP to the new Master node during a master-slave switch; it does not set the virtual IP on the initial Master when it first starts. The command to set up the virtual IP is as follows:
[root@master ~]# ifconfig ens32:1 192.168.190.80/24
After the setting succeeds, you can see the virtual IP bound to the network card by using the ip addr command:
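An abbreviated sketch of what ip addr shows on the master after binding the virtual IP (the interface and address details are specific to this environment; the point is the secondary 192.168.190.80/24 address labelled ens32:1):

[root@master ~]# ip addr
...
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 ...
    inet 192.168.190.151/24 brd 192.168.190.255 scope global ens32
    inet 192.168.190.80/24 brd 192.168.190.255 scope global secondary ens32:1
...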
Test the MHA service
At this point, the construction of the MHA high-availability architecture is complete, and we can run some simple tests on it. First, test whether the virtual IP can be pinged; after all, applications connect to the database through the virtual IP, so you must first make sure the virtual IP is reachable. As follows:
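For example, a quick reachability check from any machine in the same network segment (the -c option simply limits the number of probes):

[root@manager ~]# ping -c 3 192.168.190.80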
Once the ping succeeds, use a remote connection tool such as Navicat to test whether you can connect to the database through the virtual IP:
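If you prefer the command line over a GUI client, an equivalent check with the mysql client would be something like the following, here using the mha account created earlier (any account that is allowed to connect remotely works). It should return the hostname of whichever node currently holds the virtual IP, i.e. master at this point:

[root@manager ~]# mysql -h 192.168.190.80 -P 3306 -u mha -pAbc_123456 -e "select @@hostname;"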
After confirming that the virtual IP can be accessed normally, test whether MHA can perform a master-slave switch as expected. First, stop the MySQL service on the master node to simulate a Master failure:
[root@master ~] # systemctl stop mysqld
Normally, the network card on the master node should no longer have the virtual IP bound to it:
Instead, MHA moves the virtual IP to the network card of the slave-01 node, because that Slave is now the new Master:
Then enter the MySQL command line terminal on the slave-02 node and confirm that this Slave has been correctly pointed at the new Master. Before the switch, slave-02 replicated from master; after master was stopped, you can see that MHA has switched the Master_Host of slave-02 to the IP of slave-01:
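For illustration, the relevant part of show slave status on slave-02 after the failover should look roughly like this, with 192.168.190.152 (slave-01) now acting as the Master:

mysql> show slave status\G
*************************** 1. row ***************************
                  Master_Host: 192.168.190.152
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
...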
The above tests show that the MHA architecture we built is working correctly, and the Replication cluster now has basic high-availability capability: even after the Master goes offline, a new Master is elected from the Slaves and promoted, and the replication links between the other Slaves and the new Master are re-established correctly.
Advantages and disadvantages of the MHA architecture:

Advantages:
- Developed in the Perl scripting language and completely open source, so developers can extend it according to their own needs
- Supports both GTID-based and log-position-based replication
- Less prone to data loss during failover
- One monitoring node can monitor multiple Replication clusters

Disadvantages:
- MHA does not provide a virtual IP function by default; you need to write your own script or use third-party tools to implement the virtual IP
- After startup, MHA only monitors the Master, not the Slaves, and it cannot monitor the replication links
- The cluster requires passwordless ssh login between its hosts, which carries certain security risks
- MHA does not provide read load balancing for the Slaves; this needs to be implemented with third-party tools