The following walks through how keepalived + MHA can be used to build a MySQL master-slave high-availability cluster. If you are interested, read on; it should be of some help after you have finished.
1. Principle analysis
1 introduction to MHA:
MHA (Master High Availability) is currently a relatively mature MySQL high-availability solution. It was developed by Yoshinori Matsunobu at the Japanese company DeNA (he later moved to Facebook) and is an excellent piece of high-availability software for failover and master promotion in MySQL environments. During a MySQL failover, MHA can complete the failover of the database automatically within about 30 seconds, and while doing so it preserves data consistency to the greatest extent possible, in order to achieve high availability in the real sense.
2 composition of MHA:
The software consists of two parts: MHA Manager (the management node) and MHA Node (the data node). MHA Manager can be deployed on a separate machine to manage multiple master-slave clusters, or it can be deployed on one of the slave nodes. MHA Node runs on every MySQL server, and MHA Manager periodically probes the master node of the cluster. When the master fails, it automatically promotes the slave with the most recent data to be the new master and then repoints all the other slaves to the new master. The entire failover process is completely transparent to the application.
The Manager toolkit mainly includes the following tools:
masterha_check_ssh        check the MHA SSH configuration
masterha_check_repl       check the MySQL replication status
masterha_manager          start MHA
masterha_check_status     check the current MHA running status
masterha_master_monitor   detect whether the master is down
masterha_master_switch    control failover (automatic or manual)
masterha_conf_host        add or remove configured server information
The Node toolkit (these tools are usually triggered by MHA Manager scripts and require no manual operation) mainly includes the following tools:
save_binary_logs          save and copy the master's binary logs
apply_diff_relay_logs     identify differential relay log events and apply them to the other slaves
filter_mysqlbinlog        remove unnecessary ROLLBACK events (MHA no longer uses this tool)
purge_relay_logs          purge relay logs (without blocking the SQL thread)
3 working principle of MHA:
During automatic failover, MHA tries to save the binary logs from the downed master server so that as little data as possible is lost, but this is not always feasible. For example, if the master's hardware has failed or it cannot be reached over SSH, MHA cannot save the binary logs and can only fail over, losing the most recent data. With the semi-synchronous replication introduced in MySQL 5.5, the risk of data loss can be greatly reduced. MHA can be combined with semi-synchronous replication: as long as at least one slave has received the latest binary log, MHA can apply it to all the other slaves, thereby keeping the data consistent across all nodes.
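As a hedged illustration (this is not part of this article's own setup), enabling semi-synchronous replication on MariaDB/MySQL 5.5 typically looks like the following; the timeout value is an arbitrary example:
# On the master:
MariaDB [(none)]> INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
MariaDB [(none)]> SET GLOBAL rpl_semi_sync_master_enabled = 1;
MariaDB [(none)]> SET GLOBAL rpl_semi_sync_master_timeout = 1000;   # wait at most 1000 ms for a slave ACK before falling back to async
# On each slave:
MariaDB [(none)]> INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';
MariaDB [(none)]> SET GLOBAL rpl_semi_sync_slave_enabled = 1;
MariaDB [(none)]> STOP SLAVE IO_THREAD; START SLAVE IO_THREAD;      # restart the IO thread so the setting takes effect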
Currently MHA mainly supports a one-master, multiple-slave architecture. To build MHA, a replication cluster must contain at least three database servers: one master and two slaves, that is, one acting as master, one as standby master and one as a pure slave, so at least three servers are needed. For cost reasons Taobao has modified this: Taobao's TMHA already supports a one-master, one-slave setup. A one-master, one-slave setup can also be used for our own purposes, but then, if the master host itself goes down, the switch-over cannot complete the missing binlog; if only the mysqld process crashes while the host stays up, the switch-over can still succeed and the binlog can be completed.
Official address: https://code.google.com/p/mysql-master-ha/
2. Preparation of experimental environment
1 system version
A unified version and a unified specification are the prerequisite for automated operations and maintenance later on.
[root@vin ~]# cat /etc/redhat-release
CentOS Linux release 7.3.1611 (Core)
2 Kernel version
[root@vin ~]# uname -r
3.10.0-514.el7.x86_64
3 Host configuration parameters: prepare 4 clean hosts, node {1, 2, 3, 4}.
The hosts can resolve each other's host names. Since many of the configuration files on the nodes are largely the same, you only need to edit one copy and use a for loop to copy it to the other nodes, which is simple and convenient, so host names are used for access throughout.
Role              IP address       Hostname  server_id  Type
MHA-Manager       172.18.253.73    node1     -          monitors the replication group
Master            172.18.250.27    node2     1          write
Candidate master  172.18.253.160   node3     2          read
Slave             172.18.254.15    node4     3          read
4 Make the host names resolvable between all hosts
[root@vin ~]# cat /etc/hosts
172.18.253.73 node1
172.18.250.27 node2
172.18.253.160 node3
172.18.254.15 node4
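Assuming /etc/hosts was edited on node1, a quick way to push it to the other nodes is the loop below; until the SSH keys from the next step are in place, scp will prompt for each node's root password:
[root@vin ~]# for i in {2..4}; do scp /etc/hosts root@node$i:/etc/hosts; done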
5 Set up passwordless SSH communication between the hosts
Since the Manager verifies SSH connectivity between the nodes when MHA runs, passwordless SSH between the nodes is required. A simple approach is used here: generate an SSH key pair on one node, authorize it for that host, and then copy the authorized_keys file together with the public and private keys to the other nodes, so that each node does not have to create its own key pair and exchange keys.
[root@vin ~]# ssh-keygen -t rsa -P ''
[root@vin ~]# ssh-copy-id -i ./id_rsa.pub node1
[root@vin ~]# for i in {2..4}; do scp id_rsa{,.pub} authorized_keys root@node$i:/root/.ssh/; done   # run from /root/.ssh, where the key pair and authorized_keys live
3. Build the master-slave replication cluster
1 Master configuration:
Modify the configuration file
[root@vin ~]# cat /etc/my.cnf.d/server.cnf
[server]
server_id = 1                  # give the node a server ID; any integer
log_bin = master-log           # enable the binary log
relay_log = relay-log          # enable the relay log, because the master may also become a slave
innodb_file_per_table = ON     # store each table in its own file
skip_name_resolve = ON         # skip host-name resolution, which helps performance
max_connections = 5000         # maximum number of concurrent connections
Create a user with replication privileges and a user for the Manager node to manage the cluster
[root@vin ~]# mysql
MariaDB [(none)]> show master status\G
*************************** 1. row ***************************
            File: master-log.000003
        Position: 245
    Binlog_Do_DB:
Binlog_Ignore_DB:
MariaDB [(none)]> grant replication slave,replication client on *.* to 'vinsent'@'172.18.%.%' identified by 'vinsent';
MariaDB [(none)]> grant ALL on *.* to 'MhaAdmin'@'172.18.%.%' identified by 'MhaPass';
MariaDB [(none)]> flush privileges;
Note: first check which log file the master node is currently using and the corresponding position, and only then create the users, so that the slave nodes will pick these users up through replication. When creating the management user for the Manager node, note that the host part of the user name must cover the addresses of all the other nodes.
2 Slave{1,2} configuration:
The configuration of the two slave nodes is essentially the same; only server_id differs (2 on node3, 3 on node4, matching the table above). Modify the configuration file to support master-slave replication.
[root@vin ~]# cat /etc/my.cnf.d/server.cnf
[server]
server_id = 2                  # use 3 on the second slave
log_bin = master-log
relay_log = relay-log
relay_log_purge = OFF          # do not purge relay logs automatically
read_only = ON                 # slave nodes are set to read-only
innodb_file_per_table = ON
skip_name_resolve = ON
max_connections = 5000
Connect to the primary node for synchronization
[root@vin ~]# mysql
MariaDB [(none)]> change master to master_host='172.18.250.27',master_user='vinsent',master_password='vinsent',master_log_file='master-log.000003',master_log_pos=245;
MariaDB [(none)]> start slave;
MariaDB [(none)]> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 172.18.250.27
                  Master_User: vinsent
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: master-log.000003
          Read_Master_Log_Pos: 637
               Relay_Log_File: relay-log.000002
                Relay_Log_Pos: 922
        Relay_Master_Log_File: master-log.000003
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 637
              Relay_Log_Space: 1210
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Master_Server_Id: 1
Note: check the slave status and make sure that both "Slave_IO_Running" and "Slave_SQL_Running" are "Yes", i.e. the slave is working normally, and that "Last_IO_Errno" and "Last_SQL_Errno" report no errors. If there is an error it is usually a connectivity problem: either the replication user was created incorrectly, or the master and slave data are out of sync; make sure the data on both sides is consistent.
Test whether the slave node synchronizes the data of the master node locally:
MariaDB [(none)]> select user from mysql.user;
+----------+
| user     |
+----------+
| root     |
| MhaAdmin |
| vinsent  |
| root     |
+----------+
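A quick, hedged smoke test of replication (not part of the original steps; the database name is arbitrary) is to create a throw-away database on the master and confirm it appears on both slaves:
# On the master:
MariaDB [(none)]> create database mha_test;
# On each slave, it should show up almost immediately:
MariaDB [(none)]> show databases like 'mha_test';
# Clean up on the master afterwards:
MariaDB [(none)]> drop database mha_test;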
4. Install MHA package
In addition to the source package, MHA officially provides packages in rpm format, which can be downloaded from http://code.google.com/p/mysql-master/wiki/Downloads?tm=2. CentOS 7 systems can use the el6 packages directly, and the MHA Manager and MHA Node packages do not need to be the same version.
1 Manager node
[root@vin ~]# ls
mha4mysql-manager-0.56-0.el6.noarch.rpm  mha4mysql-node-0.56-0.el6.noarch.rpm
The Manager node needs both the mha4mysql-manager management package and the mha4mysql-node package installed.
2 Master && Slave{1,2} nodes
The master and slave nodes only need the node package installed
[root@vin ~]# ls
mha4mysql-node-0.56-0.el6.noarch.rpm
[root@vin ~]# yum install /root/*.rpm
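The Manager configuration file /etc/masterha/app1.cnf used by the commands below is not shown in the article; a minimal sketch that is consistent with the hosts and users defined above might look like this (the working directory, log path and candidate_master choices are assumptions):
[root@vin ~]# cat /etc/masterha/app1.cnf
[server default]
user=MhaAdmin                              # management user created on the master earlier
password=MhaPass
ssh_user=root
repl_user=vinsent                          # replication user
repl_password=vinsent
manager_workdir=/data/masterha/app1        # assumed working directory
manager_log=/data/masterha/app1/managerha/manager.log
ping_interval=1

[server1]
hostname=172.18.250.27                     # node2, current master

[server2]
hostname=172.18.253.160                    # node3, candidate master
candidate_master=1

[server3]
hostname=172.18.254.15                     # node4
candidate_master=1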
3 check the availability of ssh between nodes
The following results show that ssh connectivity is correct.
[root@vin ~]# masterha_check_ssh --conf=/etc/masterha/app1.cnf
...
- [info] All SSH connection tests passed successfully.
4 Check whether the configuration of the managed MySQL master-slave replication cluster meets the requirements
[root@vin ~]# masterha_check_repl --conf=/etc/masterha/app1.cnf
...
Mon Nov 13 22:11:30 2017 - [info] Slaves settings check done.
Mon Nov 13 22:11:30 2017 - [info]
172.18.250.27(172.18.250.27:3306) (current master)
 +--172.18.253.160(172.18.253.160:3306)
 +--172.18.254.15(172.18.254.15:3306)
...
MySQL Replication Health is OK.
If the configuration parameters meet the requirements, then you will see the master and slave nodes of the cluster, as shown in the example above.
5. Start MHA
Manager node:
[root@vin ~]# masterha_manager --conf=/etc/masterha/app1.cnf
Mon Nov 13 22:16:17 2017 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.    # no global default configuration file
Mon Nov 13 22:16:17 2017 - [info] Reading application default configuration from /etc/masterha/app1.cnf..
Mon Nov 13 22:16:17 2017 - [info] Reading server configuration from /etc/masterha/app1.cnf..
Note: MHA works in the foreground by default. To keep it running in the background, use the following command:
[root@vin ~]# nohup masterha_manager --conf=/etc/masterha/app1.cnf > \
/data/masterha/app1/managerha/manager.log 2>&1 &
After a successful startup, check the status of the Master node
[root@vin ~]# masterha_check_status --conf=/etc/masterha/app1.cnf
app1 (pid:4090) is running(0:PING_OK), master:172.18.250.27
Note: if the Manager did not start successfully, this command will not report a running state; instead it prints: "app1 is stopped (2:NOT_RUNNING)."
6. Configure keepalived
Set the address that provides service to clients to "172.18.14.55/16", and use keepalived to let this VIP float within the MySQL replication cluster.
1 install keepalived
Use the default yum installation; install and configure keepalived on all hosts in the Mysql replication cluster
[root@vin ~]# yum install keepalived -y
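Assuming node2, node3 and node4 are the replication nodes that need keepalived (the Manager on node1 does not), they can all be installed from node1 in one loop:
[root@vin ~]# for i in {2..4}; do ssh node$i 'yum install -y keepalived'; done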
2 Modify the keepalived configuration file to build the keepalived cluster
Master:
[root@vin ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
        root@localhost
   }
   notification_email_from kadmin@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_mcast_group4 224.14.0.14          # multicast address
}

vrrp_script chk_mysql {
    script "killall -0 mysqld"            # health check: does the mysqld process exist?
    interval 1
    weight -10
}

vrrp_instance VI_1 {                      # keepalived instance
    state BACKUP
    interface ens33
    virtual_router_id 66
    priority 98                           # keepalived node priority
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.18.14.55/16                   # client-facing address (VIP)
    }
    track_script {
        chk_mysql
    }
}
Slave{1,2}: copy the configuration file from the master node to the slave nodes:
[root@vin ~]# for i in {3..4}; do scp /etc/keepalived/keepalived.conf root@node$i:/etc/keepalived/; done
Note: the copied file cannot be used as-is. keepalived decides which host holds the VIP through its priority mechanism, so the priority of the two slave nodes must be lower than that of keepalived on the master node, and the two must also differ from each other.
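For example (the exact priority values are assumptions; only the relative order matters), after copying the file you might lower the priority on each slave node and then start keepalived everywhere:
[root@vin ~]# sed -i 's/priority 98/priority 96/' /etc/keepalived/keepalived.conf   # on node3
[root@vin ~]# sed -i 's/priority 98/priority 94/' /etc/keepalived/keepalived.conf   # on node4
[root@vin ~]# systemctl start keepalived                                            # run on every replication node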
You may have noticed that the VRRP instance is configured as "state BACKUP" on all of the servers above. keepalived supports two deployment patterns, master->backup and backup->backup, and the difference matters. In master->backup mode, once the master node goes down the virtual IP drifts automatically to the backup node; but when the master is repaired and keepalived starts again, it takes the virtual IP back, even if non-preemptive mode (nopreempt) is configured. In backup->backup mode, the virtual IP also drifts automatically to the backup when the master fails, but when the original master comes back it does not reclaim the VIP from the new holder, even if its priority is higher. To reduce the number of VIP moves, the repaired former master is therefore usually kept as the new backup.
7. Simulate a failure
To simulate a failure, we bring the master node down by hand. In production a failure can have many causes; for this lab the simplest way to simulate one is, of course, to stop the service.
1 Master
[root@vin ~] # systemctl stop mariadb
2 View the status of MHA on the MHA node
[root@vin ~]# masterha_check_repl --conf=/etc/masterha/app1.cnf
....
Mon Nov 13 22:36:37 2017 - [info] MHA::MasterMonitor version 0.56.
Mon Nov 13 22:36:37 2017 - [info] GTID failover mode = 0
Mon Nov 13 22:36:37 2017 - [info] Dead Servers:                        # the failed node is listed here
Mon Nov 13 22:36:37 2017 - [info]   172.18.250.27(172.18.250.27:3306)
Mon Nov 13 22:36:37 2017 - [info] Alive Servers:
Mon Nov 13 22:36:37 2017 - [info]   172.18.253.160(172.18.253.160:3306)
Mon Nov 13 22:36:37 2017 - [info]   172.18.254.15(172.18.254.15:3306)
Mon Nov 13 22:36:37 2017 - [info] Alive Slaves:
Mon Nov 13 22:36:37 2017 - [info]   172.18.254.15(172.18.254.15:3306)  # only one slave remains; the other was promoted to the new master
3 Check on the slave nodes whether the master was switched correctly
Slave1:
MariaDB [(none)]> show slave status;           # the slave status is empty, so this node is no longer a slave
Empty set (0.00 sec)
MariaDB [(none)]> show master status;          # check the master status; the switch-over happened correctly
+-------------------+----------+--------------+------------------+
| File              | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+-------------------+----------+--------------+------------------+
| master-log.000003 |      245 |              |                  |
+-------------------+----------+--------------+------------------+
Slave2:
MariaDB [(none)]> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 172.18.253.160    # Slave2 now replicates from the new master
                  Master_User: vinsent
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: master-log.000003
...
4 check the binding of keepalived address:
Master:
[root@vin ~]# ip a | grep ens33
2: ens33: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 172.18.250.27/16 brd 172.18.255.255 scope global dynamic ens33
Slave1:
[root@vin ~]# ip a | grep ens33
2: ens33: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 172.18.253.160/16 brd 172.18.255.255 scope global dynamic ens33
    inet 172.18.14.55/16 scope global secondary ens33      # the VIP has correctly drifted to Slave1
Slave2:
[root@vin ~]# ip a | grep ens33
2: ens33: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 172.18.254.15/16 brd 172.18.255.255 scope global dynamic ens33
8. Failure recovery
To keep the cluster complete, the failed master node should be repaired and brought back online as soon as possible. Since the master of the MySQL replication cluster has already been switched, the failed node can only come back as a slave, so its configuration file has to be modified to meet the slave requirements.
1 Master node
[root@vin ~]# vim /etc/my.cnf.d/server.cnf     # add the following two lines
[server]
relay_log_purge = OFF
read_only = ON
Start the service and connect to MySQL, then point it at the new master for master-slave synchronization. Note that if your master node went down while serving traffic, it is not enough to modify the configuration and start the service: after changing the configuration file you should take a full backup of the new master, restore that data onto this machine, and only then connect to the new master and start replication (this lab has very little data, so the node is brought online directly).
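For reference, a minimal backup-and-restore sketch for that production case might look like the following (file names and paths are illustrative, not from the article):
# On the new master (node3), take a consistent full backup and record the binlog coordinates:
[root@vin ~]# mysqldump --all-databases --single-transaction --master-data=2 > /tmp/full.sql
# Copy it to the repaired node and import it before configuring replication:
[root@vin ~]# scp /tmp/full.sql root@node2:/tmp/
[root@vin ~]# mysql < /tmp/full.sql            # on node2; the CHANGE MASTER coordinates are recorded in the dump header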
[root@vin ~]# systemctl start mariadb
[root@vin ~]# mysql
MariaDB [(none)]> change master to master_host='172.18.253.160',master_user='vinsent',master_password='vinsent',master_log_file='master-log.000003',master_log_pos=245;
MariaDB [(none)]> start slave;
MariaDB [(none)]> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 172.18.253.160
                  Master_User: vinsent
                  Master_Port: 3306
...
2 Manager:
Switch to the MHA Manager node and check the cluster status
[root@vin ~]# masterha_check_repl --conf=/etc/masterha/app1.cnf
...
Mon Nov 13 22:54:53 2017 - [info] GTID failover mode = 0
Mon Nov 13 22:54:53 2017 - [info] Dead Servers:
Mon Nov 13 22:54:53 2017 - [info] Alive Servers:                        # the config file does not say which node is the master, so all live hosts are simply listed
Mon Nov 13 22:54:53 2017 - [info]   172.18.250.27(172.18.250.27:3306)
Mon Nov 13 22:54:53 2017 - [info]   172.18.253.160(172.18.253.160:3306)
Mon Nov 13 22:54:53 2017 - [info]   172.18.254.15(172.18.254.15:3306)
Mon Nov 13 22:54:53 2017 - [info] Alive Slaves:
Mon Nov 13 22:54:53 2017 - [info]   172.18.250.27(172.18.250.27:3306)
...
MySQL Replication Health is OK.
Check the MHA Manager monitoring status to see which node is now the master of the cluster.
[root@vin ~]# masterha_check_status --conf=/etc/masterha/app1.cnf
app1 is stopped(2:NOT_RUNNING).
What is going on? Why is the Manager shown as "stopped" when it had been started correctly? The official documentation explains: "Currently MHA Manager process does not run as a daemon. If failover completed successfully or the master process was killed by accident, the manager stops working. To run as a daemon, daemontool. Or any external daemon program can be used. Here is an example to run from daemontools."
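In other words, after a completed failover the manager has to be relaunched by hand (or wrapped in daemontools/systemd). A hedged example is simply to repeat the earlier background command; note that MHA by default refuses a second failover within an 8-hour window unless masterha_manager is given --ignore_last_failover:
[root@vin ~]# nohup masterha_manager --conf=/etc/masterha/app1.cnf > \
/data/masterha/app1/managerha/manager.log 2>&1 &
[root@vin ~]# masterha_check_status --conf=/etc/masterha/app1.cnf   # should now report 0:PING_OK with the new master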
9. Summary
To analyse the MHA switch-over, read the manager log and trace the switching process:
[root@vin masterha]# cat manager.log
Mon Nov 13 22:36:03 2017 - [info] MHA::MasterMonitor version 0.56.
Mon Nov 13 22:36:04 2017 - [info] GTID failover mode = 0
Mon Nov 13 22:36:04 2017 - [info] Dead Servers:
Mon Nov 13 22:36:04 2017 - [info]   172.18.250.27(172.18.250.27:3306)
Mon Nov 13 22:36:04 2017 - [info] Alive Servers:
Mon Nov 13 22:36:04 2017 - [info]   172.18.253.160(172.18.253.160:3306)
Mon Nov 13 22:36:04 2017 - [info]   172.18.254.15(172.18.254.15:3306)
Mon Nov 13 22:36:04 2017 - [info] Alive Slaves:
Mon Nov 13 22:36:04 2017 - [info]   172.18.254.15(172.18.254.15:3306)  Version=5.5.52-MariaDB (oldest major version between slaves) log-bin:enabled
Mon Nov 13 22:36:04 2017 - [info]     Replicating from 172.18.253.160(172.18.253.160:3306)
Mon Nov 13 22:36:04 2017 - [info]     Primary candidate for the new Master (candidate_master is set)
Mon Nov 13 22:36:04 2017 - [info] Current Alive Master: 172.18.253.160(172.18.253.160:3306)
Mon Nov 13 22:36:04 2017 - [info] Checking slave configurations..
Mon Nov 13 22:36:04 2017 - [warning]  relay_log_purge=0 is not set on slave 172.18.254.15(172.18.254.15:3306).
Mon Nov 13 22:36:04 2017 - [info] Checking replication filtering settings..
Mon Nov 13 22:36:04 2017 - [info]  binlog_do_db= , binlog_ignore_db=
Mon Nov 13 22:36:04 2017 - [info]  Replication filtering check ok.
Mon Nov 13 22:36:04 2017 - [info] GTID (with auto-pos) is not supported
Mon Nov 13 22:36:04 2017 - [info] Starting SSH connection tests..
Mon Nov 13 22:36:05 2017 - [info] All SSH connection tests passed successfully.
Mon Nov 13 22:36:05 2017 - [info] Checking MHA Node version..
Mon Nov 13 22:36:06 2017 - [info]  Version check ok.
Mon Nov 13 22:36:06 2017 - [error][/usr/share/perl5/vendor_perl/MHA/ServerManager.pm, ln492] Server 172.18.250.27(172.18.250.27:3306) is dead, but must be alive! Check server settings.
Mon Nov 13 22:36:06 2017 - [error][/usr/share/perl5/vendor_perl/MHA/MasterMonitor.pm, ln424] Error happened on checking configurations.  at /usr/share/perl5/vendor_perl/MHA/MasterMonitor.pm line 399.
Mon Nov 13 22:36:06 2017 - [error][/usr/share/perl5/vendor_perl/MHA/MasterMonitor.pm, ln523] Error happened on monitoring servers.
Mon Nov 13 22:36:06 2017 - [info] Got exit code 1 (Not master dead).
Mon Nov 13 22:36:13 2017 - [info] MHA::MasterMonitor version 0.56.
Mon Nov 13 22:36:13 2017 - [info] GTID failover mode = 0
Mon Nov 13 22:36:13 2017 - [info] Dead Servers:
Mon Nov 13 22:36:13 2017 - [info]   172.18.250.27(172.18.250.27:3306)
Mon Nov 13 22:36:13 2017 - [info] Alive Servers:
Mon Nov 13 22:36:13 2017 - [info]   172.18.253.160(172.18.253.160:3306)
Mon Nov 13 22:36:13 2017 - [info]   172.18.254.15(172.18.254.15:3306)
Mon Nov 13 22:36:13 2017 - [info] Alive Slaves:
Mon Nov 13 22:36:13 2017 - [info]   172.18.254.15(172.18.254.15:3306)  Version=5.5.52-MariaDB (oldest major version between slaves) log-bin:enabled
Mon Nov 13 22:36:13 2017 - [info]     Replicating from 172.18.253.160(172.18.253.160:3306)
Mon Nov 13 22:36:13 2017 - [info]     Primary candidate for the new Master (candidate_master is set)
Mon Nov 13 22:36:13 2017 - [info] Current Alive Master: 172.18.253.160(172.18.253.160:3306)
Mon Nov 13 22:36:13 2017 - [info] Checking slave configurations..
Mon Nov 13 22:36:13 2017 - [warning]  relay_log_purge=0 is not set on slave 172.18.254.15(172.18.254.15:3306).
Mon Nov 13 22:36:13 2017 - [info] Checking replication filtering settings..
Mon Nov 13 22:36:13 2017 - [info]  binlog_do_db= , binlog_ignore_db=
Mon Nov 13 22:36:13 2017 - [info]  Replication filtering check ok.
Mon Nov 13 22:36:13 2017 - [info] GTID (with auto-pos) is not supported
Mon Nov 13 22:36:13 2017 - [info] Starting SSH connection tests..
Mon Nov 13 22:36:15 2017 - [info] All SSH connection tests passed successfully.
Mon Nov 13 22:36:15 2017 - [info] Checking MHA Node version..
Mon Nov 13 22:36:15 2017 - [info]  Version check ok.
Mon Nov 13 22:36:15 2017 - [error][/usr/share/perl5/vendor_perl/MHA/ServerManager.pm, ln492] Server 172.18.250.27(172.18.250.27:3306) is dead, but must be alive! Check server settings.
Mon Nov 13 22:36:15 2017 - [error][/usr/share/perl5/vendor_perl/MHA/MasterMonitor.pm, ln424] Error happened on checking configurations.  at /usr/share/perl5/vendor_perl/MHA/MasterMonitor.pm line 399.
Mon Nov 13 22:36:15 2017 - [error][/usr/share/perl5/vendor_perl/MHA/MasterMonitor.pm, ln523] Error happened on monitoring servers.
Mon Nov 13 22:36:15 2017 - [info] Got exit code 1 (Not master dead).
From the output above, you can see the entire switching process of MHA, which includes the following steps:
Configuration check phase: MHA checks the configuration of the entire cluster
Handling of the downed master: this phase includes removing the virtual IP and shutting down the failed host. In this article keepalived's own script monitors the mysql state; MHA also provides hooks for managing the VIP itself, which you can study on your own (a configuration sketch follows this list)
Copy the relay log differences between the dead master and the most up-to-date slave, and save them to a specific directory on the MHA Manager
Identify the slave with the latest updates
Apply binary log events saved from master (binlog events)
Promote one slave to be the new master
Point the other slaves at the new master and resume replication
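If you let MHA rather than keepalived move the VIP, the hook is wired in through the Manager configuration; the path below is purely an assumption and the script itself is not covered by this article:
[server default]
...
master_ip_failover_script=/usr/local/bin/master_ip_failover   # assumed path; the script would add/remove the VIP during failover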
There are other high-availability solutions that can provide database high availability to some extent, such as MMM, heartbeat+drbd, MySQL Cluster, and Percona's Galera Cluster. Each has its own advantages and disadvantages, and the choice mainly depends on the business and on the data-consistency requirements. Where both high availability and data consistency are required, the MHA architecture is recommended.
That covers how keepalived + MHA can implement a MySQL master-slave high-availability cluster; hopefully you have gained something from it. If you want to learn more, you can keep following our industry information section.