
What is the MHA high availability cluster in MySQL?


This article shows you what the MHA high availability cluster in MySQL looks like and how to build one. The content is concise and easy to understand, and I hope you can learn something from the detailed introduction below.

I. What is MHA

In traditional master-slave replication, if the master goes down, none of the slaves will automatically take its place, so high availability of the business cannot be guaranteed. MHA (Master High Availability) is a high availability solution for MySQL master-slave replication: when the master goes down, MHA can detect the failure and complete an automatic failover within 1-30 seconds, promoting the best-positioned slave to be the new master. The new master then keeps the remaining slaves consistent with it.

II. Composition of MHA architecture

The whole MHA architecture consists of two parts: MHA Manager (the management node) and MHA Node (the data node). MHA Manager can be deployed on its own server (including a virtual machine) and manage multiple master-slave replication clusters, or it can share a host with one of the replication nodes or another application. MHA Node must run on every MySQL server. The MHA Manager periodically checks the health of the master through the MHA Node running on it; when the master fails, the manager promotes the best slave (which can be designated in advance or chosen by MHA) to be the new master, and the remaining slaves then replicate from the new master.

III. Working principle of MHA

Scenario 1: the master MySQL instance is down, but the server can still be reached over SSH.

1. The manager detects that the master is down, elects a new master, and the elected node gives up its slave role (reset slave).

Selection criteria:

First, compare the binlog positions of the slaves and pick the most up-to-date slave as the new master.

Second, if a semi-synchronous slave is configured, pick that semi-synchronous slave as the new master directly.

2. Using the scripts bundled with MHA, the slaves fetch the binlog events they are missing from the old master over SSH.

3. The other slaves re-establish replication with the new master, and the cluster continues to provide service.

4. If a VIP mechanism is used, the VIP drifts from the original master to the new master, so the application is unaware of the switch.

Scenario 2: the master server itself is down (SSH is no longer reachable).

1. The manager detects that the host is down and tries to connect over SSH, but the connection fails.

2. A new master is elected according to the selection criteria described above.

3. The manager computes the relay-log differences between the slaves and applies the missing events to bring the other slaves level.

4. The other slaves re-establish the master-slave relationship with the new master and continue to provide service.

5. If a VIP mechanism is used, the VIP drifts from the original master to the new master, making the switch transparent to the application.

6. If a binlog server is configured, the binlog events it holds that are missing from the new master are replayed onto the new master.

IV. MHA implementation

1. Prepare three or more independent MySQL node instances with normal network connectivity between them, and configure hosts resolution:

10.0.0.51 master

10.0.0.52 slave

10.0.0.53 slave (also runs the MHA manager)

2. Use a GTID-based replication topology on all nodes (verify with show slave status\G).
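The failover flow here relies on GTID replication being in place. A minimal sketch of the settings involved, assuming the repl user and master 10.0.0.51 from the configuration below:

# /etc/my.cnf on every node (server_id must differ per node)
gtid_mode=ON
enforce_gtid_consistency=ON
log_slave_updates=ON

-- on each slave, enable GTID auto-positioning toward the master:
CHANGE MASTER TO MASTER_HOST='10.0.0.51', MASTER_PORT=3306, MASTER_USER='repl', MASTER_PASSWORD='123', MASTER_AUTO_POSITION=1;
START SLAVE;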

3. Disable automatic relay-log purging on every node (check with show variables like '%relay%'):

vim /etc/my.cnf

relay_log_purge=0

set global relay_log_purge=0;

4. Create the MHA administrative user on the master (the grant will be replicated to the slaves):

grant all privileges on *.* to mha@'10.0.0.%' identified by 'mha';
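Because the grant runs on the master, replication carries it to every slave; a quick check on any slave (illustrative):

mysql -e "select user,host from mysql.user where user='mha';"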

5. Create soft links (MHA can only invoke commands under /usr/bin/):

ln -s /application/mysql/bin/mysqlbinlog /usr/bin/mysqlbinlog

ln -s /application/mysql/bin/mysql /usr/bin/mysql

6. Deploy the node toolkit and its dependency packages on every node:

Install the dependency package: rpm -ivh perl-DBD-MySQL

Install the node package (must be installed on every instance): rpm -ivh mha4mysql-node-0.56-0.el6.noarch.rpm

7. Choose one of the slave nodes and deploy the manager toolkit on it:

Install the dependencies: yum install -y perl-Config-Tiny epel-release perl-Log-Dispatch perl-Parallel-ForkManager perl-Time-HiRes

Install the manager package: rpm -ivh mha4mysql-manager-0.56-0.el6.noarch.rpm

8. On the manager node, create the working directories and configuration file MHA needs:

mkdir -p /etc/mha

mkdir -p /var/log/mha/app1 (one manager can handle multiple master-slave clusters)

Create the configuration file (do not keep unnecessary settings; comments are useless, and the file is rewritten after a switchover):

vim /etc/mha/app1.cnf (the [server default] section holds the global settings)

[server default]
manager_log=/var/log/mha/app1/manager
manager_workdir=/var/log/mha/app1
master_binlog_dir=/data/binlog
user=mha
password=mha
ping_interval=2
repl_password=123
repl_user=repl
ssh_user=root

[server1]
hostname=10.0.0.51
port=3306

[server2]
hostname=10.0.0.52
port=3306

[server3]
hostname=10.0.0.53
port=3306

9. Configure SSH key mutual trust between all nodes:

ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa > /dev/null 2>&1

ssh-copy-id -i /root/.ssh/id_dsa.pub root@10.0.0.51

ssh-copy-id -i /root/.ssh/id_dsa.pub root@10.0.0.52

ssh-copy-id -i /root/.ssh/id_dsa.pub root@10.0.0.53

10. Check mutual trust:

masterha_check_ssh --conf=/etc/mha/app1.cnf
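When mutual trust is set up correctly, the check ends with a line like the following (illustrative):

All SSH connection tests passed successfully.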

11. Check the master-slave replication:

masterha_check_repl --conf=/etc/mha/app1.cnf
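A healthy topology ends the report with a line like the following (illustrative):

MySQL Replication Health is OK.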

12. Start the MHA manager:

nohup masterha_manager --conf=/etc/mha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/mha/app1/manager.log 2>&1 &

13. View the startup result:

tail -f /var/log/mha/app1/manager

10.0.0.51(10.0.0.51:3306) (current master)
 +--10.0.0.52(10.0.0.52:3306)
 +--10.0.0.53(10.0.0.53:3306)

masterha_check_status --conf=/etc/mha/app1.cnf
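While the manager is running, masterha_check_status reports its state and the current master; the output looks like this (the pid is illustrative):

app1 (pid:4057) is running(0:PING_OK), master:10.0.0.51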

V. MHA failover simulation and switchover

The point of MHA is not building it, but knowing how it switches over after a failure and how to recover afterwards.

1. Simulate a failure: stop the master, then watch the manager log to observe the switchover process.

tail -f /var/log/mha/app1/manager

2. Start the old master again (simulating that it has been repaired) and add it back to the topology as a slave of the new master:

CHANGE MASTER TO MASTER_HOST='10.0.0.52', MASTER_PORT=3306, MASTER_AUTO_POSITION=1, MASTER_USER='repl', MASTER_PASSWORD='xxx';

start slave;

3. Add the old master's section back into the manager configuration file /etc/mha/app1.cnf (after a successful failover, MHA automatically removes the dead master's entry from the file).

4. Start the MHA manager program again (the manager exits automatically after a successful switchover):

nohup masterha_manager --conf=/etc/mha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/mha/app1/manager.log 2>&1 &

5. Check the MHA status:

masterha_check_status --conf=/etc/mha/app1.cnf

VI. MHA VIP address drift

1. Upload the master_ip_failover script to /usr/local/bin/, modify it for your environment, and fix its line endings:

dos2unix /usr/local/bin/master_ip_failover
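Inside the script, the part to edit is the VIP definition. A sketch matching the eth0:1 and 10.0.0.55 values used below (adjust the interface and address to your environment):

my $vip = '10.0.0.55/24';                              # virtual IP that follows the master
my $key = '1';                                         # alias number, giving eth0:1
my $ssh_start_vip = "/sbin/ifconfig eth0:$key $vip";   # run on the new master
my $ssh_stop_vip = "/sbin/ifconfig eth0:$key down";    # run on the failed master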

2. Add master_ip_failover_script=/usr/local/bin/master_ip_failover to the MHA configuration file /etc/mha/app1.cnf.

3. Restart MHA:

masterha_stop --conf=/etc/mha/app1.cnf

nohup masterha_manager --conf=/etc/mha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/mha/app1/manager.log 2>&1 &

4. Manually bind the VIP to the current master. The interface must match the ethN used in the master_ip_failover script; here it is eth0:1 (1 is the value of $key):

ifconfig eth0:1 10.0.0.55/24

5. Stop the master and check whether the VIP drifts to the new master.
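One way to confirm the drift, run on the promoted node (illustrative):

ip addr show eth0 | grep 10.0.0.55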

VII. Binlog server configuration and use

A binlog server is a host in the MHA environment dedicated to keeping copies of the master's binary logs. It must run MySQL 5.6 or later, with GTID supported and enabled.

1. Configure the binlog server in the manager's configuration:

vim /etc/mha/app1.cnf

[binlog1]
no_master=1
hostname=10.0.0.53
master_binlog_dir=/data/mysql/binlog

2. Create the directory on the binlog server in advance:

mkdir -p /data/mysql/binlog

chown -R mysql.mysql /data/mysql/*

3. After the modification is complete, pull the master's binlogs (start from 000001; later binlogs will follow automatically in order):

cd /data/mysql/binlog (the pull must be run from the directory you created)

mysqlbinlog -R --host=10.0.0.52 --user=mha --password=mha --raw --stop-never mysql-bin.000001 &
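If the pull is working, raw copies of the master's binlogs accumulate in the working directory; a quick check (illustrative):

ls -lh /data/mysql/binlog/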

4. Restart MHA for the change to take effect:

masterha_stop --conf=/etc/mha/app1.cnf

nohup masterha_manager --conf=/etc/mha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/mha/app1/manager.log 2>&1 &

masterha_check_status --conf=/etc/mha/app1.cnf

VIII. Other MHA parameters

ping_interval=2
# interval in seconds at which the manager checks whether the master is alive; after 4 consecutive failed checks a failover is triggered

candidate_master=1
# marks this slave as a candidate master: after a switchover it is promoted to master even if it is not the slave with the newest events in the cluster

check_repl_delay=0
# by default, MHA will not select a slave as the new master if it lags more than 100MB behind the master's relay logs, because recovering that slave would take too long; check_repl_delay=0 makes MHA ignore replication delay when selecting the new master, which is useful for a host with candidate_master=1, since that candidate must become the new master during the switchover
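These per-host parameters belong in the corresponding [serverN] block of /etc/mha/app1.cnf. For example, to make 10.0.0.52 the preferred new master regardless of its replication delay, a sketch:

[server2]
hostname=10.0.0.52
port=3306
candidate_master=1
check_repl_delay=0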

The above is what the MHA high availability cluster in MySQL looks like; I hope you have picked up some useful knowledge or skills from it.
