At present, MHA (Master High Availability) is a relatively mature solution for MySQL high availability. During a failover, MHA can complete the switchover of the database automatically within 30 seconds, and while doing so it preserves data consistency to the greatest extent possible, achieving high availability in the true sense. MHA has two roles: MHA Node (data node) and MHA Manager (management node). MHA Manager can be deployed on a separate machine to manage multiple master-slave clusters, or it can be deployed on a slave node. MHA Node runs on every MySQL server. MHA Manager periodically probes the master node in the cluster; when the master fails, it automatically promotes the slave with the most recent data to be the new master and then repoints all the other slaves to the new master. The entire failover process is completely transparent to the application.
The schematic diagram of the working architecture of MHA is as follows:
In MHA architecture, semi-synchronous replication can be used to maximize data consistency. In master-slave replication of MySQL, asynchronous replication is used by default, but asynchronous has some disadvantages. Here is a brief introduction to asynchronous and semi-synchronous replication:
Asynchronous and semi-synchronous: MySQL replication is asynchronous by default; all updates on the master are written to the binlog, but there is no guarantee that every update has been copied to a slave. Although asynchronous replication is efficient, when the master or a slave fails there is a high risk that the data is out of sync, and data may even be lost. Semi-synchronous replication was introduced in MySQL 5.5 to ensure that at least one slave has complete data when something goes wrong with the master. In case of a timeout, replication can temporarily fall back to asynchronous mode so the business keeps running normally, and it switches back to semi-synchronous mode once a slave catches up.
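As a small illustration of the timeout behaviour just described, and assuming the semi-synchronous plug-ins installed later in this article are already loaded, the timeout can be adjusted at runtime and the current mode observed as follows (the values here are examples only, not the settings used later):

mysql> SET GLOBAL rpl_semi_sync_master_timeout = 1000;    # wait at most 1000 ms for a slave acknowledgement
mysql> SHOW STATUS LIKE 'Rpl_semi_sync_master_status';    # ON = semi-synchronous; OFF = the master has fallen back to asynchronous replication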
Compared with other HA software, the purpose of MHA is to maintain the high availability of the master in MySQL replication. Its biggest feature is that it can repair the log differences between multiple slaves, finally making the data of all slaves consistent, and then choose one of them to act as the new master and point the other slaves to it.
The approximate process is as follows:
1. Save the binary log events from the crashed master
2. Identify the slave with the most recent updates
3. Apply the differential relay logs to the other slaves
4. Apply the binary log events saved from the master
5. Promote one slave to be the new master
6. Point the other slaves at the new master and resume replication.
Currently, MHA mainly supports a "one master, multiple slaves" architecture. To build MHA, a MySQL cluster needs at least three database nodes: one master and two slaves, that is, one node serves as the master, one acts as a standby master, and the other acts as a slave.
The main features of MHA are as follows:
1. MHA failover does not depend on the storage engine or binlog format used by the instances.
2. MHA does not increase the performance overhead of MySQL servers, and there is no need to add additional servers except MHA management nodes.
3. Deploying MHA data nodes on the MySQL server will not affect the current instance.
4. MHA can realize automatic failover, or manually trigger online switchover.
5. MHA can achieve failover in seconds.
6. MHA can promote any slave to master, or you can specify master candidate nodes when switching.
7. MHA provides an extension interface that allows user-defined scripts to be executed at a specific point in the process of MHA switching.
MHA support and restrictions:
1. Only BINLOG V4 is supported, and MySQL 5.0 or later is required.
2. Candidate master nodes must have the log-bin parameter enabled. If none of the slave nodes have it enabled, no failover will be performed.
3. Multi-master mode is not supported before MHA version 0.52.
4. By default MHA does not support multi-tier master-slave replication; it can be enabled by modifying the configuration file and setting the multi_tier_slave parameter.
Environment preparation:

OS           IP address      Hostname   Role                     Type
CentOS 7.5   192.168.20.2    master     master                   mysql (write operations)
CentOS 7.5   192.168.20.3    slave1     slave (standby master)   mysql (read operations)
CentOS 7.5   192.168.20.4    slave2     slave                    mysql (read operations)
CentOS 7.5   192.168.20.5    manager    management node          manager
In the above environment, the master provides write services and the slaves provide read services. Once the master goes down, one of the slaves (here I designate the standby master) is promoted to the new master, the remaining slave is repointed to the new master, and manager serves as the management server.
1. Configure the hosts file so that the hosts can resolve one another by name.
The following actions can be performed on one of the nodes.
# write the hosts file
[root@master ~]# cat >> /etc/hosts << EOF
> 192.168.20.2 master
> 192.168.20.3 slave1
> 192.168.20.4 slave2
> 192.168.20.5 manager
> EOF
# distribute the prepared hosts file to the other nodes
[root@master ~]# for i in master slave1 slave2 manager; do scp /etc/hosts $i:/etc/; done

2. Configure SSH passwordless login
Note: all hosts in the cluster need to be able to log in to one another without a password.
Run the following two commands once on every node to set up passwordless login between the hosts.
# create a key pair
[root@master ~]# ssh-keygen -t rsa
# distribute the public key to the other nodes
[root@master ~]# for i in master slave1 slave2 manager; do ssh-copy-id $i; done
After all nodes have run the above two commands, use the following command to verify that passwordless login works:
# if executing the following loop on each host returns the hostnames of all hosts without prompting for a password, passwordless login is configured correctly
[root@master ~]# for i in master slave1 slave2 manager; do ssh $i hostname; done
master
slave1
slave2
manager

3. Configure the epel repository and install the dependencies required by MHA
Note: the following command needs to be executed on all nodes.
# configure Aliyun's epel mirror (to speed up installation); if you are not in mainland China, run "yum -y install epel-release" instead
[root@master ~]# wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
[root@master ~]# yum makecache        # build the metadata cache
# install the required dependencies
[root@master ~]# yum -y install perl-DBD-MySQL perl-Config-Tiny perl-Log-Dispatch perl-Parallel-ForkManager perl-Config-IniFiles ncftp perl-Params-Validate perl-CPAN perl-Test-Mock-LWP.noarch perl-LWP-Authen-Negotiate.noarch perl-devel perl-ExtUtils-CBuilder perl-ExtUtils-MakeMaker

II. Master-slave replication based on semi-synchronization
Note: all subsequent operations, if not specified, need to be performed on all database servers.
1. Find the plug-ins needed for semi-synchronous replication

mysql> show variables like '%plugin_dir%';    # find the directory where the plug-ins reside (it may differ on each DB server)
+---------------+-------------------------------+
| Variable_name | Value                         |
+---------------+-------------------------------+
| plugin_dir    | /usr/local/mysql/lib/plugin/  |
+---------------+-------------------------------+
1 row in set (0.01 sec)

mysql> show variables like '%have_dynamic%';    # check whether dynamic plug-in loading is supported
+----------------------+-------+
| Variable_name        | Value |
+----------------------+-------+
| have_dynamic_loading | YES   |
+----------------------+-------+
1 row in set (0.01 sec)

# confirm that the following two files exist in the plug-in directory (they are present by default after the database is installed)
[root@master ~]# ls /usr/local/mysql/lib/plugin/ | egrep 'master|slave'
semisync_master.so
semisync_slave.so

2. Install the plug-ins

# semisync_master.so and semisync_slave.so are the plug-in file names found above
mysql> install plugin rpl_semi_sync_master SONAME 'semisync_master.so';
mysql> install plugin rpl_semi_sync_slave SONAME 'semisync_slave.so';

3. Check whether the plug-ins are installed correctly:

mysql> show plugins;
# or
mysql> select * from information_schema.plugins;
The following result is returned after show plugins execution, indicating that there is no problem:
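A more compact check, assuming the two plug-ins were installed as shown above, is to query information_schema directly; both plug-ins should be reported as ACTIVE:

mysql> select plugin_name, plugin_status from information_schema.plugins where plugin_name like '%semi%';
+----------------------+---------------+
| plugin_name          | plugin_status |
+----------------------+---------------+
| rpl_semi_sync_master | ACTIVE        |
| rpl_semi_sync_slave  | ACTIVE        |
+----------------------+---------------+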
4. View the semi-synchronization related variables

mysql> show variables like '%rpl_semi_sync%';
+-------------------------------------------+------------+
| Variable_name                             | Value      |
+-------------------------------------------+------------+
| rpl_semi_sync_master_enabled              | OFF        |
| rpl_semi_sync_master_timeout              | 10000      |
| rpl_semi_sync_master_trace_level          | 32         |
| rpl_semi_sync_master_wait_for_slave_count | 1          |
| rpl_semi_sync_master_wait_no_slave        | ON         |
| rpl_semi_sync_master_wait_point           | AFTER_SYNC |
| rpl_semi_sync_slave_enabled               | OFF        |
| rpl_semi_sync_slave_trace_level           | 32         |
+-------------------------------------------+------------+
You can see that the semi-synchronous plug-ins are still disabled (OFF), so the my.cnf configuration file needs to be modified, as follows.
The complete configuration file for the master host is as follows:
[root@master ~]# cat /etc/my.cnf
[mysqld]
basedir=/usr/local/mysql
datadir=/usr/local/mysql/data
port=3306
server_id=1
socket=/usr/local/mysql/mysql.sock
log-error=/usr/local/mysql/data/mysqld.err
log-bin=mysql-bin
binlog_format=mixed
rpl_semi_sync_master_enabled=1
rpl_semi_sync_master_timeout=1000
rpl_semi_sync_slave_enabled=1
relay_log_purge=0
relay-log = relay-bin
relay-log-index = slave-relay-bin.index
Note:
rpl_semi_sync_master_enabled=1: 1 means enabled, 0 means disabled.
rpl_semi_sync_master_timeout=1000: in milliseconds; if the master waits 1000 ms (1 second) for an acknowledgement without receiving one, it stops waiting and falls back to asynchronous replication.
The complete configuration file for the slave1 host is as follows:
[root@slave1 ~]# cat /etc/my.cnf
[mysqld]
basedir=/usr/local/mysql
datadir=/usr/local/mysql/data
port=3306
server_id=2
socket=/usr/local/mysql/mysql.sock
log-error=/usr/local/mysql/data/mysqld.err
log-bin=mysql-bin
binlog_format=mixed
rpl_semi_sync_master_enabled=1
rpl_semi_sync_master_timeout=1000
rpl_semi_sync_slave_enabled=1
relay_log_purge=0
relay-log = relay-bin
relay-log-index = slave-relay-bin.index
The complete configuration file for the slave2 host is as follows:
[root@slave2 ~]# cat /etc/my.cnf
[mysqld]
basedir=/usr/local/mysql
datadir=/usr/local/mysql/data
port=3306
server_id=3
socket=/usr/local/mysql/mysql.sock
log-error=/usr/local/mysql/data/mysqld.err
log-bin = mysql-bin
relay-log = relay-bin
relay-log-index = slave-relay-bin.index
read_only = 1
rpl_semi_sync_slave_enabled=1
# since slave2 only serves as a slave, there is no need to enable the master side of semi-synchronous replication
In the configuration of master-slave replication, because the parameter relay_log_purge=0 is set on the master and slave hosts (indicating that the relay log is not automatically cleared), the slave node needs to delete the relay log periodically. It is recommended that each slave node delete the relay log at a different time.
crontab -e
0 5 * * * /usr/local/bin/purge_relay_logs --user=root --password=pwd123 --port=3306 --disable_relay_log_purge >> /var/log/purge_relay.log 2>&1
After changing the configuration file, you need to restart MySQL by executing the following command.
[root@master ~] # systemctl restart mysqld
Check the semi-synchronization status and confirm that it is enabled:
mysql> show variables like '%rpl_semi_sync%';    # check whether semi-synchronous replication is enabled
+-------------------------------------------+------------+
| Variable_name                             | Value      |
+-------------------------------------------+------------+
| rpl_semi_sync_master_enabled              | ON         |    # this value should be ON
| rpl_semi_sync_master_timeout              | 1000       |
| rpl_semi_sync_master_trace_level          | 32         |
| rpl_semi_sync_master_wait_for_slave_count | 1          |
| rpl_semi_sync_master_wait_no_slave        | ON         |
| rpl_semi_sync_master_wait_point           | AFTER_SYNC |
| rpl_semi_sync_slave_enabled               | ON         |    # this value should also be ON
| rpl_semi_sync_slave_trace_level           | 32         |
+-------------------------------------------+------------+
8 rows in set (0.00 sec)

mysql> show status like '%rpl_semi_sync%';
+--------------------------------------------+-------+
| Variable_name                              | Value |
+--------------------------------------------+-------+
| Rpl_semi_sync_master_clients               | 0     |
| Rpl_semi_sync_master_net_avg_wait_time     | 0     |
| Rpl_semi_sync_master_net_wait_time         | 0     |
| Rpl_semi_sync_master_net_waits             | 0     |
| Rpl_semi_sync_master_no_times              | 0     |
| Rpl_semi_sync_master_no_tx                 | 0     |
| Rpl_semi_sync_master_status                | ON    |
| Rpl_semi_sync_master_timefunc_failures     | 0     |
| Rpl_semi_sync_master_tx_avg_wait_time      | 0     |
| Rpl_semi_sync_master_tx_wait_time          | 0     |
| Rpl_semi_sync_master_tx_waits              | 0     |
| Rpl_semi_sync_master_wait_pos_backtraverse | 0     |
| Rpl_semi_sync_master_wait_sessions         | 0     |
| Rpl_semi_sync_master_yes_tx                | 0     |
| Rpl_semi_sync_slave_status                 | OFF   |
+--------------------------------------------+-------+
15 rows in set (0.00 sec)

# among the values returned by the last command, several status variables are worth watching:
# Rpl_semi_sync_master_status: whether the master is currently in asynchronous or semi-synchronous replication mode; ON means semi-synchronous
# Rpl_semi_sync_master_clients: how many slaves are configured for semi-synchronous replication
# Rpl_semi_sync_master_yes_tx: number of commits successfully acknowledged by a slave
# Rpl_semi_sync_master_no_tx: number of commits not acknowledged by a slave
# Rpl_semi_sync_master_tx_avg_wait_time: average extra wait time per transaction caused by enabling semi-synchronous replication
# Rpl_semi_sync_master_net_avg_wait_time: average network wait time after a transaction enters the wait queue

5. Create related users
1) The operations on the master host are as follows:
# create the user used for replication
mysql> grant replication slave on *.* to mharep@'192.168.20.%' identified by '123.com';
Query OK, 0 rows affected, 1 warning (1.00 sec)
# create the user used by MHA manager to monitor the nodes
mysql> grant all on *.* to manager@'192.168.20.%' identified by '123.com';
Query OK, 0 rows affected, 1 warning (0.00 sec)
# view the master's binary log information
mysql> show master status\G
*************************** 1. row ***************************
             File: mysql-bin.000001
         Position: 744
     Binlog_Do_DB:
 Binlog_Ignore_DB:
Executed_Gtid_Set:
1 row in set (0.00 sec)
2) The operations on the slave1 host are as follows:
# create the user used for replication
mysql> grant replication slave on *.* to mharep@'192.168.20.%' identified by '123.com';
Query OK, 0 rows affected, 1 warning (1.00 sec)
# create the user used by MHA manager to monitor the nodes
mysql> grant all on *.* to manager@'192.168.20.%' identified by '123.com';
Query OK, 0 rows affected, 1 warning (0.00 sec)
3) The operations on the slave2 host are as follows:
Since slave2 does not need to act as a standby master, there is no need to create the account used for replication.
# create the manager monitoring account
mysql> grant all on *.* to manager@'192.168.20.%' identified by '123.com';
Query OK, 0 rows affected, 1 warning (0.00 sec)

6. Configure master-slave replication
The following operations need to be performed once on the slave1 host and once on the slave2 host to synchronize the data of the master host.
# specify the master host's information
mysql> change master to
    -> master_host='192.168.20.2',
    -> master_port=3306,
    -> master_user='mharep',
    -> master_password='123.com',
    -> master_log_file='mysql-bin.000001',    # the binary log file name found on the master host
    -> master_log_pos=744;                    # likewise, the binary log position found on the master host
Query OK, 0 rows affected, 2 warnings (0.01 sec)
mysql> start slave;    # start the slave threads
Query OK, 0 rows affected (0.00 sec)
Finally, check the status of the slave host:
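A minimal check, assuming replication was configured as above: in the output of show slave status, both replication threads should be running and Master_Host should point at the master:

mysql> show slave status\G
# the key fields in the output should read:
#   Master_Host: 192.168.20.2
#   Slave_IO_Running: Yes
#   Slave_SQL_Running: Yes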
If you view the semi-synchronization information on the master host, you will find that the number of synchronized clients has become 2, as follows:
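A quick way to confirm this on the master host, assuming both slaves have connected with the semi-synchronous plug-in enabled, is to look at the Rpl_semi_sync_master_clients counter:

mysql> show status like 'Rpl_semi_sync_master_clients';
+------------------------------+-------+
| Variable_name                | Value |
+------------------------------+-------+
| Rpl_semi_sync_master_clients | 2     |
+------------------------------+-------+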
7. Install MHA-node
Note: MHA-node needs to be installed on all nodes (including the manager host).
# download the package
[root@master src]# wget https://github.com/yoshinorim/mha4mysql-node/releases/download/v0.58/mha4mysql-node-0.58.tar.gz
# install
[root@master src]# tar zxf mha4mysql-node-0.58.tar.gz
[root@master src]# cd mha4mysql-node-0.58/
[root@master mha4mysql-node-0.58]# perl Makefile.PL
[root@master mha4mysql-node-0.58]# make && make install
Note: all subsequent operations, unless otherwise noted, only need to be performed on the manager host.
8. Install MHA-manager

# download the package
[root@manager src]# wget https://github.com/yoshinorim/mha4mysql-manager/releases/download/v0.58/mha4mysql-manager-0.58.tar.gz
# install
[root@manager src]# tar zxf mha4mysql-manager-0.58.tar.gz
[root@manager src]# cd mha4mysql-manager-0.58/
[root@manager mha4mysql-manager-0.58]# perl Makefile.PL
[root@manager mha4mysql-manager-0.58]# make && make install

9. Create the required directories and copy the required files

[root@manager mha4mysql-manager-0.58]# mkdir /etc/masterha
[root@manager mha4mysql-manager-0.58]# mkdir -p /masterha/app1
[root@manager mha4mysql-manager-0.58]# mkdir /scripts
[root@manager mha4mysql-manager-0.58]# pwd
/usr/src/mha4mysql-manager-0.58        # confirm the current directory
[root@manager mha4mysql-manager-0.58]# cp samples/conf/* /etc/masterha/
[root@manager mha4mysql-manager-0.58]# cp samples/scripts/* /scripts/

10. Modify the mha-manager configuration file
Note: the manager has two main configuration files, a global default one and an application-specific one. The contents of the global default configuration file need to be cleared, as follows:
# clear the default configuration file
[root@manager masterha]# > /etc/masterha/masterha_default.cnf
Then modify the application-specific configuration file:
[root@manager ~]# cat /etc/masterha/app1.cnf    # modified as follows:
[server default]
manager_workdir=/masterha/app1              # specify the working directory
manager_log=/masterha/app1/manager.log      # specify the log file
user=manager                                # specify the user name manager uses to manage the database nodes
password=123.com                            # the password of the above user
ssh_user=root                               # specify the system user configured for passwordless ssh login
repl_user=mharep                            # specify the user name used for replication
repl_password=123.com                       # the password of the above replication user
ping_interval=1                             # interval, in seconds, at which the manager pings the master; the default is 3 seconds, and failover is triggered automatically after three failed attempts

[server1]
hostname=192.168.20.2
port=3306
master_binlog_dir=/usr/local/mysql/data     # specify the path where the master stores its binary logs, so that MHA can find them
candidate_master=1                          # set as a candidate master; with this parameter set, this instance can be promoted to the new master after a master-slave switchover

[server2]
hostname=192.168.20.3
port=3306
master_binlog_dir=/usr/local/mysql/data
candidate_master=1                          # set as a candidate master

[server3]
hostname=192.168.20.4
port=3306
master_binlog_dir=/usr/local/mysql/data
no_master=1                                 # not a candidate master
The complete uncommented configuration file is as follows:
[root@manager ~]# cat /etc/masterha/app1.cnf
[server default]
manager_workdir=/masterha/app1
manager_log=/masterha/app1/manager.log
user=manager
password=123.com
ssh_user=root
repl_user=mharep
repl_password=123.com
ping_interval=1

[server1]
hostname=192.168.20.2
port=3306
master_binlog_dir=/usr/local/mysql/data
candidate_master=1

[server2]
hostname=192.168.20.3
port=3306
master_binlog_dir=/usr/local/mysql/data
candidate_master=1

[server3]
hostname=192.168.20.4
port=3306
master_binlog_dir=/usr/local/mysql/data
no_master=1

11. Pre-launch tests
Verify the validity of SSH:
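A sketch of this step, assuming MHA-manager 0.58 installed its tools into the default PATH: masterha_check_ssh reads the node list from app1.cnf and, on success, its final output line should report that all SSH connection tests passed.

[root@manager ~]# masterha_check_ssh --conf=/etc/masterha/app1.cnf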
Verify the validity of cluster replication (MySQL must be started), as follows:
[root@manager masterha]# masterha_check_repl --global_conf=/etc/masterha/masterha_default.cnf --conf=/etc/masterha/app1.cnf
The execution of the above command may result in an error as follows:
This is because the mysqlbinlog command was not found on the MySQL nodes; just execute the following command on all MySQL host nodes to create symbolic links, as follows:
# Note: execute the following command on all database service nodes
[root@master ~]# ln -s /usr/local/mysql/bin/* /usr/local/bin/
After running the check command again, the following error appears:
Solution:
[root@master ~]# rpm -qa | grep -i dbd      # the following packages are found
perl-DBD-MySQL-4.023-6.el7.x86_64
perl-DBD-SQLite-1.39-3.el7.x86_64
# uninstall the perl-DBD-MySQL package
[root@master ~]# rpm -e --nodeps perl-DBD-MySQL-4.023-6.el7.x86_64
# reinstall it with yum to resolve the error
[root@master ~]# yum -y install perl-DBD-MySQL
Finally, run the check again; this time the command executes successfully, as follows:
[root@manager ~]# masterha_check_repl --global_conf=/etc/masterha/masterha_default.cnf --conf=/etc/masterha/app1.cnf
The following information is returned, indicating that it is correct:
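Assuming all of the checks pass, the tail of the masterha_check_repl output ends with a health summary along the lines of the following (the details above it vary with the environment):

MySQL Replication Health is OK.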
12. Start manager

[root@manager ~]# nohup masterha_manager --conf=/etc/masterha/app1.cnf &> /var/log/mha_manager.log &
Note: you can run the above command multiple times with different app.cnf configuration files in order to monitor multiple MySQL clusters. There is only one MySQL cluster here, so the command only needs to be run once; a sketch of what a second instance would look like follows.
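For illustration only, assuming a second cluster were described in a hypothetical /etc/masterha/app2.cnf (no such file exists in this setup), another manager instance could be started the same way with its own log file:

[root@manager ~]# nohup masterha_manager --conf=/etc/masterha/app2.cnf &> /var/log/mha_manager2.log &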
13. Verify failover
You can first check which server is currently the master in the MySQL cluster; at this point it is the master host (192.168.20.2), as shown below:
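One quick way to confirm this, assuming the manager started in the previous step is still running, is masterha_check_status, which reports the current master (the pid shown here is a placeholder):

[root@manager ~]# masterha_check_status --conf=/etc/masterha/app1.cnf
app1 (pid:xxxx) is running (0:PING_OK), Master:192.168.20.2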
Now simulate a failure of the master host, and then check which host is the master in the cluster, as follows:
# stop the MySQL service on the master host
[root@master ~]# systemctl stop mysqld
# then check on the slave2 host who the current master is
mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.20.3        # you can see that the current master is slave1
                  Master_User: mharep
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000001
          Read_Master_Log_Pos: 744
               Relay_Log_File: relay-bin.000002
                Relay_Log_Pos: 320

III. Daily maintenance on the MHA Manager side

1) Check whether the following files exist, and delete them if they do.
After a master-slave switchover, the MHA manager service stops automatically and generates an app1.failover.complete file under the manager_workdir directory (/masterha/app1/). To start MHA again, you must make sure this file does not exist; if an app1.failover.complete or app1.failover.error file is present, starting the manager fails with an error like the following:
Simply delete this file and point the old master host to the new master (that is, slave1), and the manager service can be started successfully, as follows:
# delete the file
[root@manager app1]# rm -rf /masterha/app1/app1.failover.complete
# find the CHANGE MASTER information in the manager service log, as follows:
[root@manager app1]# cat /masterha/app1/manager.log | grep MASTER
# if the service has been running for a long time, look at the last match; the log records the statement that must be executed to point to the new master,
# but the password has to be filled in by hand
Fri Feb 21 17:57:37 2020 - [info]  All other slaves should start replication from here. Statement should be: CHANGE MASTER TO MASTER_HOST='192.168.20.3', MASTER_PORT=3306, MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=744, MASTER_USER='mharep', MASTER_PASSWORD='xxx';
Fri Feb 21 17:57:38 2020 - [info] Executed CHANGE MASTER.
# execute the following command on the master host to designate slave1 as its master:
mysql> CHANGE MASTER TO MASTER_HOST='192.168.20.3', MASTER_PORT=3306, MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=744, MASTER_USER='mharep', MASTER_PASSWORD='123.com';
# start the slave threads
mysql> start slave;
# start the manager service again
[root@manager app1]# nohup masterha_manager --conf=/etc/masterha/app1.cnf &> /var/log/mha_manager.log &
[1] 16130
# if jobs -l shows the state as "running", the manager is running again
[root@manager app1]# jobs -l
[1]+ 16130 running    nohup masterha_manager --conf=/etc/masterha/app1.cnf &> /var/log/mha_manager.log &

2) Check MHA replication
[root@manager ~]# masterha_check_repl --conf=/etc/masterha/app1.cnf

3) Stop MHA
[root@manager ~]# masterha_stop --conf=/etc/masterha/app1.cnf

4) Start MHA
[root@manager ~]# nohup masterha_manager --conf=/etc/masterha/app1.cnf &> /tmp/mha_manager.log &
When a slave node is down, the manager service cannot be started. It is recommended to temporarily comment out the downed node's information in the configuration file and uncomment it after the node is repaired, as sketched below.
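For illustration, assuming slave2 (the [server3] block) were the downed node, the corresponding lines in /etc/masterha/app1.cnf would be commented out temporarily and restored after the repair:

#[server3]
#hostname=192.168.20.4
#port=3306
#master_binlog_dir=/usr/local/mysql/data
#no_master=1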
5) Check the MHA status

# when the manager service is not running, the status looks like this:
[root@manager ~]# masterha_check_status --conf=/etc/masterha/app1.cnf
app1 is stopped (2:NOT_RUNNING).
# start the manager service and check again, as follows:
[root@manager ~]# nohup masterha_manager --conf=/etc/masterha/app1.cnf &> /tmp/mha_manager.log &
[1] 19651
[root@manager ~]# masterha_check_status --conf=/etc/masterha/app1.cnf
app1 (pid:19651) is running (0:PING_OK), Master:192.168.20.3