Example Analysis of MySQL High Availability Operation and Maintenance

2025-04-06 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)05/31 Report--

This article explains MySQL high availability operation and maintenance in detail through a worked example. The editor finds it very practical and shares it here for reference; I hope you get something out of it.

I. Introduction to Basic Knowledge

1. MHA definition

MHA (Master High Availability) is an open-source high-availability program that provides automatic failover for MySQL master-slave architectures. When MHA detects a failure of the MySQL master node, it automatically promotes one slave node in the replication cluster to become the new master. During the switchover it also avoids data-consistency problems by obtaining additional information (such as differential relay log events) from the other nodes. In addition, MHA provides online switchover of the master node, so Master/Slave roles can be swapped on demand.

2. MHA structure

Every node in an MHA deployment plays one of two main roles: management node or data node.

Management node (MHA Manager): typically deployed on a dedicated machine; a single manager can manage multiple master-slave clusters, each of which is called an application.

Data node (MHA Node): runs on every MySQL server; its scripts can parse and purge logs, which speeds up node failover.

3. MHA components

(1) Manager components

masterha_check_ssh: checks the ssh environment that MHA depends on
masterha_check_repl: checks the MySQL replication environment
masterha_manager: the main MHA service program
masterha_check_status: checks MHA's running state
masterha_master_monitor: monitors the availability of the MySQL master node
masterha_master_switch: switches the master node
masterha_conf_host: adds or removes a configured host
masterha_stop: stops the MHA service

(2) Node components

save_binary_logs: saves and copies the master's binary logs
apply_diff_relay_logs: identifies differential relay log events and applies them to the other slaves
filter_mysqlbinlog: removes unnecessary ROLLBACK events (MHA no longer uses this tool)
purge_relay_logs: purges relay logs (without blocking the SQL thread)

(3) Custom extension

secondary_check_script: checks the availability of the master over additional network routes
master_ip_failover_script: updates the master IP used by the application
shutdown_script: forces the master node to shut down
report_script: sends reports
init_conf_load_script: loads initial configuration parameters
master_ip_online_change_script: updates the master node's IP address during an online switchover

II. MHA Setup

1. Environmental preparation

Operating system: CentOS 6.9 x86_64

MySQL version: MySQL 5.6.39 generic binary

Server Planning:

Primary node Master: 192.168.199.104 (mysql-master, master)

Slave node 1: 192.168.199.105 (mysql-slave01, slave01)

Slave node 2: 192.168.199.106 (mysql-slave02, slave02)

Management node: 192.168.199.107 (mysql-manager, manager)

2. Set up

(1) Configure the hosts file on each node so that hostnames can be used later instead of typing IP addresses every time.

[root@mysql-master ~]# vim /etc/hosts
# add the following:
127.0.0.1       localhost localhost.localdomain localhost4 localhost4.localdomain4
::1             localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.199.104 mysql-master master
192.168.199.105 mysql-slave01 slave01
192.168.199.106 mysql-slave02 slave02
192.168.199.107 mysql-manager manager

(2) copy the configured host file to the other three nodes

[root@mysql-master ~]# scp /etc/hosts slave01:/etc/
[root@mysql-master ~]# scp /etc/hosts slave02:/etc/
[root@mysql-master ~]# scp /etc/hosts manager:/etc/

(3) Configure master-slave replication: slave01 and slave02 are slaves of master.

Enable the binary log on master:

[root@mysql-master ~]# vim /etc/my.cnf
# add the following under [mysqld]:
server-id = 104
skip-name-resolve
log_bin = /mysql_data/mysql-bin
log_bin_index = /mysql_data/mysql-bin.index
binlog_format = mixed

View the current binary log position on master and create a replication account:

[root@mysql-master ~]# mysql -uroot -proot
# view the current binary log position
mysql> SHOW MASTER STATUS\G
*************************** 1. row ***************************
             File: mysql-bin.000090
         Position: 120
     Binlog_Do_DB:
 Binlog_Ignore_DB:
Executed_Gtid_Set:
1 row in set (0.00 sec)
# create the replication account for the slaves
mysql> GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'repl'@'192.168.199.%' IDENTIFIED BY 'repl';
# flush privileges
mysql> FLUSH PRIVILEGES;

Enable the binary log and relay log on slave01 and configure it as a slave of master:

[root@mysql-slave01 ~]# vim /etc/my.cnf
# add the following under [mysqld]:
server-id = 105
log_bin = /mysql_data/mysql-bin
log_bin_index = /mysql_data/mysql-bin.index
relay_log = /mysql_data/mysql-relay
relay_log_index = /mysql_data/mysql-relay.index
read_only
relay_log_purge = 0

Parameter explanation:

Parameter explanation:

relay_log_purge = 0: do not automatically purge relay logs; MHA uses a slave's relay logs to determine how far that slave has synchronized the master's binlog.

read_only: marks the server as read-only; MHA uses this parameter to distinguish the master from the slaves.

log_bin: enables the binary log on the slave; when the master fails, one of the slaves is promoted to master and must itself produce binary logs.
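Once a slave is configured, the health of its two replication threads can be checked programmatically. The sketch below parses the text of `SHOW SLAVE STATUS\G` piped from the mysql client; `check_slave_output` is a hypothetical helper written for this article, not an MHA tool, and the credentials in the usage line are the ones used throughout this walkthrough.

```shell
# check_slave_output: read `SHOW SLAVE STATUS\G` output on stdin and print OK
# only when both the IO thread and the SQL thread are running.
check_slave_output() {
  local out
  out=$(cat)
  if printf '%s' "$out" | grep -q 'Slave_IO_Running: Yes' &&
     printf '%s' "$out" | grep -q 'Slave_SQL_Running: Yes'; then
    echo OK
  else
    echo FAIL
  fi
}

# usage on a slave:
# mysql -uroot -proot -e 'SHOW SLAVE STATUS\G' | check_slave_output
```

A FAIL result here is exactly the condition MHA's monitoring reacts to; the helper is only a quick manual check.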

Start replication on slave01 by pointing it at the master node:

[root@mysql-slave01 ~]# service mysqld restart
[root@mysql-slave01 ~]# mysql -uroot -proot
# use CHANGE MASTER to point this slave at the master node
mysql> CHANGE MASTER TO MASTER_HOST='192.168.199.104', MASTER_PORT=3306, MASTER_USER='repl', MASTER_PASSWORD='repl', MASTER_LOG_FILE='mysql-bin.000090', MASTER_LOG_POS=120;
mysql> START SLAVE;
# check whether synchronization is working; you should see:
mysql> SHOW SLAVE STATUS\G
Slave_IO_Running: Yes
Slave_SQL_Running: Yes

The second slave is configured the same way; you can copy the configuration file from slave01 directly to it and then edit it.

[root@mysql-slave01 ~]# scp /etc/my.cnf slave02:/etc/
[root@mysql-slave02 ~]# vim /etc/my.cnf
# modify server-id:
server-id = 106

After modifying it, save and exit, restart mysql, and then configure slave02 as a slave of master as well:

[root@mysql-slave02 ~]# mysql -uroot -proot
mysql> CHANGE MASTER TO MASTER_HOST='192.168.199.104', MASTER_PORT=3306, MASTER_USER='repl', MASTER_PASSWORD='repl', MASTER_LOG_FILE='mysql-bin.000090', MASTER_LOG_POS=120;
mysql> START SLAVE;
# check whether synchronization is working; you should see:
mysql> SHOW SLAVE STATUS\G
Slave_IO_Running: Yes
Slave_SQL_Running: Yes

(4) create a user account with administrative permissions

# create the management account on the master node so that it replicates to the slaves.
# Note: this account must be able to connect to MySQL remotely.
mysql> GRANT ALL ON *.* TO 'mha'@'192.168.199.%' IDENTIFIED BY 'mha';
mysql> FLUSH PRIVILEGES;

(5) Because the MHA architecture requires mutual trust and communication between nodes, each node must be able to log in to the others over ssh without a password. With several nodes, generating and copying keys individually is tedious, so instead generate one key pair on the master node and give every host the same private key.

# generate a key pair on master
[root@mysql-master ~]# ssh-keygen -t rsa -P ''
# first append the public key to the local authorized_keys file, so the machine can log in to itself
[root@mysql-master ~]# cat .ssh/id_rsa.pub >> .ssh/authorized_keys
# test key-based ssh login to the local machine; it should succeed without a password
[root@mysql-master ~]# ssh 192.168.199.104
# restrict authorized_keys so that only its owner can read it; group and other users get no access
[root@mysql-master ~]# chmod go= .ssh/authorized_keys

Copy the private key and the authorized_keys file to the other three nodes. Note: preserve their original permissions (scp -p).

[root@mysql-master ~]# scp -p .ssh/id_rsa .ssh/authorized_keys slave01:/root/.ssh/
[root@mysql-master ~]# scp -p .ssh/id_rsa .ssh/authorized_keys slave02:/root/.ssh/
[root@mysql-master ~]# scp -p .ssh/id_rsa .ssh/authorized_keys manager:/root/.ssh/
# test passwordless login: run the following on the master node to print slave01's ip address
[root@mysql-master ~]# ssh slave01 'ifconfig eth0'
# seeing slave01's ip address means key-based login is configured correctly; verify the other two nodes the same way
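Checking every node by hand can be scripted. This is a minimal sketch under the hostnames defined in /etc/hosts above; `check_ssh_hosts` is a hypothetical helper that takes the command used to reach a host as its first argument, so the real `ssh -o BatchMode=yes` invocation (which fails instead of prompting for a password) can be plugged in.

```shell
# check_ssh_hosts: run `true` on each host via the given runner command and
# report OK/FAILED per host. The runner is passed unquoted on purpose so a
# multi-word command like "ssh -o BatchMode=yes" is split into words.
check_ssh_hosts() {
  local runner=$1
  shift
  local host
  for host in "$@"; do
    if $runner "$host" true 2>/dev/null; then
      echo "$host: OK"
    else
      echo "$host: FAILED"
    fi
  done
}

# usage from the master node:
# check_ssh_hosts "ssh -o BatchMode=yes -o ConnectTimeout=5" slave01 slave02 manager
```

Any FAILED line means key-based login to that host is not yet working and the keys or permissions should be rechecked.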

(6) download mha4mysql-manager and mha4mysql-node installation packages

The package versions used here are as follows:

Management node: mha4mysql-manager-0.56.tar.gz
Data node: mha4mysql-node-0.56.tar.gz

(7) Configure the epel repository, because some of mha4mysql's dependencies come from base and some from epel.

[root@mysql-master ~]# rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

(8) Extract the archive and run the Makefile.PL script with perl:

[root@mysql-master ~]# tar xf mha4mysql-manager-0.56.tar.gz
[root@mysql-master ~]# cd mha4mysql-manager-0.56
[root@mysql-master mha4mysql-manager-0.56]# perl Makefile.PL

Note: during the execution of Makefile.PL, if an error similar to the following occurs, you need to install the library files that the perl program depends on:

Can't locate ExtUtils/MakeMaker.pm in @INC (@INC contains: inc /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at inc/Module/Install/Makefile.pm line 4.
BEGIN failed--compilation aborted at inc/Module/Install/Makefile.pm line 4.

If the above error occurs, you need to install the dependency:

[root@mysql-master ~]# yum install perl-ExtUtils-CBuilder perl-ExtUtils-MakeMaker cpan perl-Time-HiRes perl-DBI perl-DBD-MySQL

(9) After "perl Makefile.PL" completes successfully, use the make command to finish the installation. Note: you can use "echo $?" to check the exit status of the previous command; 0 normally means it succeeded.
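The "echo $?" check can be folded into a tiny wrapper so that a failed step stops the chain automatically. `run_step` is a hypothetical convenience written for this article, a sketch of the same exit-status idea:

```shell
# run_step: execute a command, report its outcome, and propagate its exit code
# so a chain like `run_step perl Makefile.PL && run_step make` stops on failure.
run_step() {
  "$@"
  local status=$?
  if [ "$status" -ne 0 ]; then
    echo "step '$*' failed with exit status $status" >&2
    return "$status"
  fi
  echo "step OK"
}

# usage:
# run_step perl Makefile.PL && run_step make && run_step make install
```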

[root@mysql-master mha4mysql-manager-0.56]# make
[root@mysql-master mha4mysql-manager-0.56]# make install

(10) After the manager component is installed, compile and install mha4mysql-node on the manager node in the same way:

[root@mysql-manager ~]# tar xf mha4mysql-node-0.56.tar.gz
[root@mysql-manager ~]# cd mha4mysql-node-0.56
[root@mysql-manager mha4mysql-node-0.56]# perl Makefile.PL
# confirm that Makefile.PL completed successfully
[root@mysql-manager mha4mysql-node-0.56]# echo $?
0
[root@mysql-manager mha4mysql-node-0.56]# make && make install

(11) install mha4mysql-node on the other three servers (master,slave01,slave02)

[root@mysql-slave01 ~]# tar xf mha4mysql-node-0.56.tar.gz
[root@mysql-slave01 ~]# cd mha4mysql-node-0.56
[root@mysql-slave01 mha4mysql-node-0.56]# perl Makefile.PL
[root@mysql-slave01 mha4mysql-node-0.56]# make && make install

The other two nodes are installed the same way and are not repeated here.

(12) When the installation is complete, you will find a number of scripts under /usr/local/bin; these are the script files generated by mha4mysql.

(13) initialize MHA

The mha configuration file has two levels:

Global configuration: provides default settings for every application.

Application configuration: specifies the servers that belong to one application.
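To illustrate the two levels: MHA reads global defaults from /etc/masterha_default.cnf, which every application configuration such as /etc/masterha/app01.conf inherits and can override. The fragment below is a hypothetical example of such a global file; the values simply mirror the ones used in the application configuration later in this article.

```
# /etc/masterha_default.cnf -- global defaults inherited by every application
[server default]
user=mha
password=mha
ssh_user=root
repl_user=repl
repl_password=repl
ping_interval=1
```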

# create the directory and edit the configuration file
[root@mysql-manager ~]# mkdir -pv /etc/masterha/
[root@mysql-manager ~]# vim /etc/masterha/app01.conf

Write the following:

[server default]
user=mha                                       # management user name
password=mha                                   # management user password
manager_workdir=/data/masterha/app01           # manager working directory (stores binary logs etc.); created automatically if it does not exist
manager_log=/data/masterha/app01/manager.log   # log file location
remote_workdir=/data/masterha/app01            # working directory on the remote nodes; not created automatically
ssh_user=root                                  # ssh user needed for administrative operations
repl_user=repl                                 # user name with replication privileges
repl_password=repl                             # password of the replication user
ping_interval=1                                # heartbeat interval (seconds) for monitoring the master node

[server1]
hostname=192.168.199.104
#ssh_port=3306                  # only needed if MySQL does not use the default port number
candidate_master=1              # whether this node may be promoted to master
master_binlog_dir=/mysql_data/  # path to this node's mysql binary logs

[server2]
hostname=192.168.199.105
candidate_master=1
master_binlog_dir=/mysql_data/

[server3]
hostname=192.168.199.106
#no_master=1                    # uncomment so this node never becomes master after a failover
master_binlog_dir=/mysql_data/

(14) check whether the communication between nodes is normal.

[root@mysql-manager ~]# masterha_check_ssh --conf=/etc/masterha/app01.conf

Error 1: Can't locate Config/Tiny.pm in @INC (@INC contains: XXX)

Reason: a dependency package is missing.

Solution: install the package named in the error message.

If the following rpm packages are not available in the operating system's repositories, they can be downloaded from http://rpmfind.net/. These packages are for CentOS 6.9 x86_64:

perl-Mail-Sender-0.8.16-3.el6.noarch.rpm
perl-Log-Dispatch-2.26-1.el6.rf.noarch.rpm
perl-Parallel-ForkManager-0.7.9-1.el6.noarch.rpm
perl-Config-Tiny-2.12-7.1.el6.noarch.rpm

After the download is complete, install them with the yum command rather than rpm, since they may in turn depend on other packages.

Error 2: during masterha_check_ssh execution:

Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).

Reason: the files under /root/.ssh/ differ between hosts, or their permissions are wrong. In my case the file contents differed.

Solution: copy the key pair and authorized_keys file to the other hosts again.

(15) check whether the master-slave replication environment is normal.

[root@mysql-manager ~]# masterha_check_repl --conf=/etc/masterha/app01.conf

Error 1:

Sat Jun 2 03:07:26 2018 - [error][/usr/local/share/perl5/MHA/ServerManager.pm, ln301] install_driver(mysql) failed: Can't locate DBD/mysql.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at (eval 27) line 3. Perhaps the DBD::mysql perl module hasn't been fully installed, or perhaps the capitalisation of 'mysql' isn't right.

Reason: lack of perl-DBD-MySQL dependency library

Solution:

[root@mysql-manager ~]# yum install -y perl-DBD-MySQL*

Error 2:

Sat Jun 2 03:11:06 2018-[error] [/ usr/local/share/perl5/MHA/ServerManager.pm, ln188] There is no alive server. We can't do failover

Error 3:

Failed to save binary log: Binlog not found from /var/lib/mysql,/var/log/mysql! If you got this error at MHA Manager, please set "master_binlog_dir=/path/to/binlog_directory_of_the_master".

Reason: mysql's binlog directory is not in a default location such as /var/lib/mysql or /var/log/mysql but somewhere else, so the directory cannot be found.

Solution: specify the binary log directory of each node in the manager node's configuration file:

[server1]
hostname=192.168.199.105
candidate_master=1
master_binlog_dir=/mysql_data/  # specify the binary log directory

Error 4:

Can't exec "mysqlbinlog": No such file or directory at /usr/local/share/perl5/MHA/BinlogManager.pm line 106. mysqlbinlog version command failed with rc 1:0, please verify PATH, LD_LIBRARY_PATH, and client options

Reason: the mysqlbinlog command is not found in the PATH of a non-interactive ssh session.

Solution: link the binaries under /usr/local/mysql-5.6.39/bin into /usr/bin on every data node:

[root@mysql-master ~]# ln -s /usr/local/mysql-5.6.39/bin/* /usr/bin/
[root@mysql-slave01 ~]# ln -s /usr/local/mysql-5.6.39/bin/* /usr/bin/
[root@mysql-slave02 ~]# ln -s /usr/local/mysql-5.6.39/bin/* /usr/bin/

(16) start MHA

# foreground start: the log prints directly to the console
[root@mysql-manager app01]# masterha_manager --conf=/etc/masterha/app01.conf
# background start
[root@mysql-manager app01]# nohup masterha_manager --conf=/etc/masterha/app01.conf &

(17) verify automatic failover for high availability

# stop the mysqld service on the master node
[root@mysql-master ~]# killall mysqld mysqld_safe
# check the remaining slave node (slave02): slave01 has been promoted to master
mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
   Master_Host: 192.168.199.105
   Master_User: repl
...
# check the read_only variable on slave01: it has been changed to OFF
mysql> SHOW VARIABLES LIKE 'read_only';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| read_only     | OFF   |
+---------------+-------+

Note: after a master fails and a slave is promoted to master, the manager service on the manager node exits automatically and must be restarted by hand.
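The manual restart can be approximated with a small supervision loop of the kind promised later in this article. This is a minimal sketch: the `supervise` helper and its retry limit are assumptions for illustration, not part of MHA, and in production an operator should still investigate why the manager exited before it is restarted.

```shell
# supervise: run a command up to $1 times, restarting it each time it exits
# with a non-zero status; returns 0 as soon as the command exits cleanly.
supervise() {
  local tries=$1
  shift
  local i=0
  while [ "$i" -lt "$tries" ]; do
    if "$@"; then
      return 0
    fi
    i=$((i + 1))
    echo "attempt $i of $tries failed; restarting" >&2
    sleep 1
  done
  return 1
}

# usage (use a large retry count in practice):
# supervise 1000 masterha_manager --conf=/etc/masterha/app01.conf
```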

(18) When the failed server has been recovered, it must be re-added to the cluster manually as a slave using the "CHANGE MASTER" command.
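As a hedged sketch of that step, the repaired node (192.168.199.104 above) would be pointed at the new master (192.168.199.105 after the failover above). The MASTER_LOG_FILE/MASTER_LOG_POS values below are placeholders and must be read from SHOW MASTER STATUS on the new master:

```
# on the repaired node, after restarting mysqld:
mysql> CHANGE MASTER TO MASTER_HOST='192.168.199.105', MASTER_PORT=3306, MASTER_USER='repl', MASTER_PASSWORD='repl', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=120;
mysql> START SLAVE;
mysql> SHOW SLAVE STATUS\G
```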

(19) While the manager is running, you can use the masterha_check_status command to view the current master node:

[root@mysql-manager ~]# masterha_check_status --conf=/etc/masterha/app01.conf
app01 (pid:20582) is running(0:PING_OK), master:192.168.199.105

For other uses of MHA, such as online master-slave switchover, refer to the official documentation; they are not described here. In addition, automatically restarting the MHA manager, sending alerts, and completely removing a failed host can all be implemented with shell scripts, which will be introduced later.

This is the end of "Example Analysis of MySQL High Availability Operation and Maintenance". I hope the above content is helpful and that you have learned something new. If you found the article useful, please share it for more people to see.
