How to deploy MySQL High availability scenario MHA

2025-04-05 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)05/31 Report--

This article explains in detail how to deploy MHA, a high availability solution for MySQL. It is quite practical, so it is shared here for reference; I hope you get something out of it.

MHA (Master High Availability) is a relatively mature MySQL high availability solution. It can complete a database failover automatically within 30 seconds and, during the failover, guarantees data consistency to the greatest extent possible.

It consists of two parts: MHA Manager (management node) and MHA Node (data node). MHA Manager can be deployed on a separate machine to manage multiple master-slave clusters, or on a single slave. MHA Node runs on every MySQL node, and MHA Manager regularly probes the master in the cluster. When the master fails, MHA automatically promotes the slave with the most recent data to master, and then repoints all other slaves to the new master.

During an automatic failover, MHA tries to save the master's binary logs to minimize data loss, but this is not always feasible: if, for example, the master's hardware fails or it cannot be reached over SSH, MHA cannot save the binary logs, so it fails over anyway and the latest data is lost. Combining MHA with the semi-synchronous replication introduced in MySQL 5.5 reduces the risk of data loss.

The MHA software consists of two parts: the Manager toolkit and the Node toolkit, which are described as follows:

MHA Manager:

1. masterha_check_ssh: check MHA's SSH configuration

2. masterha_check_repl: check the MySQL replication status

3. masterha_manager: start MHA

4. masterha_check_status: check the current MHA running status

5. masterha_master_monitor: check whether the master is down

6. masterha_master_switch: control failover (automatic or manual)

7. masterha_conf_host: add or remove configured server information

8. masterha_stop: stop MHA

MHA Node:

save_binary_logs: save or copy the master's binary logs

apply_diff_relay_logs: identify differential relay log events and apply them to the other slaves

filter_mysqlbinlog: remove unnecessary ROLLBACK events (MHA no longer uses this tool)

purge_relay_logs: purge relay logs (without blocking the SQL thread)

There are several other scripts to customize:

1. master_ip_failover: manage the VIP during failover

2. master_ip_online_change: manage the VIP during an online (manual) master switch

3. masterha_secondary_check: when MHA Manager detects that the master is unavailable, the masterha_secondary_check script double-checks from other hosts to reduce the risk of a false failover.

4. send_report: send an alert when a failover occurs.

Cluster information

Role              IP address       ServerID   Type
Master            192.168.244.10   1          write
Candidate master  192.168.244.20   2          read
Slave             192.168.244.30   3          read
Monitor host      192.168.244.40              monitoring the cluster

Note: all operating systems are RHEL 6.7.

The master serves writes, while the candidate master and the slave serve reads. Once the master goes down, the candidate master is promoted to the new master and the slave is repointed to it.

1. Install MHA Node on all nodes

1. Install the Perl module (DBD::mysql) required by MHA Node on each MySQL server

# yum install perl-DBD-MySQL -y

2. Install MHA Node on all nodes

Download address: https://code.google.com/p/mysql-master-ha/wiki/Downloads?tm=2

As the site is blocked in China, the relevant files have been mirrored to a personal network disk, http://pan.baidu.com/s/1boS31vT, from which readers can download them.

# tar xvf mha4mysql-node-0.56.tar.gz

# cd mha4mysql-node-0.56

# perl Makefile.PL

Can't locate ExtUtils/MakeMaker.pm in @INC (@INC contains: inc /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at inc/Module/Install/Can.pm line 6.
BEGIN failed--compilation aborted at inc/Module/Install/Can.pm line 6.
Compilation failed in require at inc/Module/Install.pm line 283.
Can't locate ExtUtils/MakeMaker.pm in @INC (@INC contains: inc /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at inc/Module/Install/Makefile.pm line 4.
BEGIN failed--compilation aborted at inc/Module/Install/Makefile.pm line 4.
Compilation failed in require at inc/Module/Install.pm line 283.
Can't locate ExtUtils/MM_Unix.pm in @INC (@INC contains: inc /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at inc/Module/Install/Metadata.pm line 349.


From the error report, you can see that the related dependency package is not installed.

# yum install perl-ExtUtils-MakeMaker -y

# perl Makefile.PL

*** Module::AutoInstall version 1.03 ***
Checking for Perl dependencies...
Can't locate CPAN.pm in @INC (@INC contains: inc /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at inc/Module/AutoInstall.pm line 277.

# yum install perl-CPAN -y

# perl Makefile.PL

*** Module::AutoInstall version 1.03 ***
Checking for Perl dependencies...
[Core Features]
- DBI        ...loaded. (1.609)
- DBD::mysql ...loaded. (4.013)
*** Module::AutoInstall configuration finished.
Checking if your kit is complete...
Looks good
Writing Makefile for mha4mysql::node


# make

# make install

At this point MHA Node is installed, and the following scripts have been generated under /usr/local/bin:

# ll /usr/local/bin/
total 44
-r-xr-xr-x 1 root root 16367 Jul 20 07:00 apply_diff_relay_logs
-r-xr-xr-x 1 root root  4807 Jul 20 07:00 filter_mysqlbinlog
-r-xr-xr-x 1 root root  8261 Jul 20 07:00 purge_relay_logs
-r-xr-xr-x 1 root root  7525 Jul 20 07:00 save_binary_logs

2. Deploy MHA Manager on the Monitor host node

# tar xvf mha4mysql-manager-0.56.tar.gz

# cd mha4mysql-manager-0.56

# perl Makefile.PL

*** Module::AutoInstall version 1.03 ***
Checking for Perl dependencies...
[Core Features]
- DBI                   ...loaded. (1.609)
- DBD::mysql            ...loaded. (4.013)
- Time::HiRes           ...missing.
- Config::Tiny          ...missing.
- Log::Dispatch         ...missing.
- Parallel::ForkManager ...missing.
- MHA::NodeConst        ...missing.
==> Auto-install the 5 mandatory module(s) from CPAN? [y] y
*** Dependencies will be installed the next time you type 'make'.
*** Module::AutoInstall configuration finished.
Checking if your kit is complete...
Looks good
Warning: prerequisite Config::Tiny 0 not found.
Warning: prerequisite Log::Dispatch 0 not found.
Warning: prerequisite MHA::NodeConst 0 not found.
Warning: prerequisite Parallel::ForkManager 0 not found.
Warning: prerequisite Time::HiRes 0 not found.
Writing Makefile for mha4mysql::manager


# make

# make install

After the build completes, the following files are added under /usr/local/bin:

# ll /usr/local/bin/
total 40
-r-xr-xr-x 1 root root 1991 Jul 20 00:50 masterha_check_repl
-r-xr-xr-x 1 root root 1775 Jul 20 00:50 masterha_check_ssh
-r-xr-xr-x 1 root root 1861 Jul 20 00:50 masterha_check_status
-r-xr-xr-x 1 root root 3197 Jul 20 00:50 masterha_conf_host
-r-xr-xr-x 1 root root 2513 Jul 20 00:50 masterha_manager
-r-xr-xr-x 1 root root 2161 Jul 20 00:50 masterha_master_monitor
-r-xr-xr-x 1 root root 2369 Jul 20 00:50 masterha_master_switch
-r-xr-xr-x 1 root root 5167 Jul 20 00:50 masterha_secondary_check
-r-xr-xr-x 1 root root 1735 Jul 20 00:50 masterha_stop

3. Configure SSH login without password authentication

1. On the manager, configure password-free authentication to all Node nodes

# ssh-keygen

Press "Enter" all the way.

# ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.244.10

# ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.244.20

# ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.244.30

2. Configure on Master (192.168.244.10)

# ssh-keygen

# ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.244.20

# ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.244.30

3. Configure on Candidate master (192.168.244.20)

# ssh-keygen

# ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.244.10

# ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.244.30

4. Configure on Slave (192.168.244.30)

# ssh-keygen

# ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.244.10

# ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.244.20

4. Build a master-slave replication environment

1. Perform a backup on Master

# mysqldump --master-data=2 --single-transaction -R --triggers -A > all.sql

Here -R backs up stored procedures and functions, --triggers backs up triggers, and -A dumps all databases.

2. Create a replication user on Master

mysql> grant replication slave on *.* to 'repl'@'192.168.244.%' identified by 'repl';
Query OK, 0 rows affected (0.09 sec)

3. View the CHANGE MASTER statement in the backup file all.sql

# head -n 30 all.sql

-- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000002', MASTER_LOG_POS=120
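The binlog coordinates can also be pulled out of the dump programmatically. A minimal sketch (the dump header below is a synthetic sample written to a temp file, mimicking what mysqldump --master-data=2 produces):

```shell
#!/bin/sh
# A synthetic dump header like the one mysqldump --master-data=2 writes
# (sample values for illustration only).
dump=$(mktemp)
cat > "$dump" <<'EOF'
-- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000002', MASTER_LOG_POS=120;
EOF

# Extract the binlog file name and position from the commented-out statement.
log_file=$(sed -n "s/.*MASTER_LOG_FILE='\([^']*\)'.*/\1/p" "$dump")
log_pos=$(sed -n 's/.*MASTER_LOG_POS=\([0-9]*\).*/\1/p' "$dump")

echo "file=$log_file pos=$log_pos"
rm -f "$dump"
```

The extracted values are exactly what the CHANGE MASTER TO statement on the slaves needs, which is handy when scripting the slave setup.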

4. Copy the backup file to Candidate master and Slave

# scp all.sql 192.168.244.20:/root/

# scp all.sql 192.168.244.30:/root/

5. Build a slave library on Candidate master

# mysql < all.sql

Set the replication information:

mysql> CHANGE MASTER TO
    -> MASTER_HOST='192.168.244.10',
    -> MASTER_USER='repl',
    -> MASTER_PASSWORD='repl',
    -> MASTER_LOG_FILE='mysql-bin.000002',
    -> MASTER_LOG_POS=120;
Query OK, 0 rows affected, 2 warnings (0.19 sec)

mysql> start slave;
Query OK, 0 rows affected (0.02 sec)

mysql> show slave status\G

6. Build a slave library on Slave (same procedure as on the candidate master)

7. Set the slave server to read only

mysql> set global read_only=1;
Query OK, 0 rows affected (0.04 sec)

8. Create a monitoring user in Master

mysql> grant all privileges on *.* to 'monitor'@'%' identified by 'monitor123';
Query OK, 0 rows affected (0.07 sec)

5. Configure MHA

1. Create the MHA working directory and the related configuration file on Monitor host (192.168.244.40)

# mkdir -p /etc/masterha

# vim /etc/masterha/app1.cnf

[server default]
manager_log=/masterha/app1/manager.log    // manager's log file
manager_workdir=/masterha/app1            // manager's working directory
master_binlog_dir=/var/lib/mysql          // where the master stores its binlogs, so MHA can find them
master_ip_failover_script=/usr/local/bin/master_ip_failover    // switching script used during automatic failover
master_ip_online_change_script=/usr/local/bin/master_ip_online_change    // switching script used during an online switchover
user=monitor              // monitoring user
password=monitor123       // monitoring user's password
ping_interval=1           // interval in seconds between pings to the master (default 3); a failover is attempted after three failed pings
remote_workdir=/tmp       // working directory on the remote MySQL hosts during a switch
repl_user=repl            // replication user name
repl_password=repl        // replication user's password
report_script=/usr/local/bin/send_report    // alert script executed after a switchover
secondary_check_script=/usr/local/bin/masterha_secondary_check -s 192.168.244.20 -s 192.168.244.30 --user=root --master_host=192.168.244.10 --master_ip=192.168.244.10 --master_port=3306    // once MHA Manager loses contact with the master, it checks whether the other two slaves can still connect to the master on port 3306
shutdown_script=""        // script to shut down the failed host (prevents split-brain)
ssh_user=root             // ssh login user

[server1]
hostname=192.168.244.10
port=3306

[server2]
hostname=192.168.244.20
port=3306
candidate_master=1        // candidate master: after a switchover this slave is promoted to master, even if it is not the most up-to-date slave in the cluster
check_repl_delay=0        // by default MHA will not pick a slave whose relay logs lag more than 100MB behind the master, because recovering it would take a long time; with check_repl_delay=0, MHA ignores replication delay when choosing the new master. This is useful together with candidate_master=1, since it guarantees the candidate becomes the new master during the switch

[server3]
hostname=192.168.244.30
port=3306

Note:

1> When editing the file, be sure to remove the // comments above; MHA does not recognize them as comments.

2> master_ip_failover_script, secondary_check_script, master_ip_online_change_script and report_script are set in the configuration file; the corresponding files are given at the end of the article.

2. Set the relay log cleanup method (on each Slave)

mysql> set global relay_log_purge=0;
Query OK, 0 rows affected (0.00 sec)

During an MHA switchover, the slave recovery process depends on relay log information, so automatic relay log purging must be turned off and the relay logs cleared manually instead.

By default, relay logs on a slave are deleted automatically once the SQL thread has applied them. In an MHA environment, however, these relay logs may be needed to recover other slaves, so automatic purging must be disabled; instead, the relay logs already applied by the SQL thread are cleared manually on a regular basis.

On an ext3 filesystem, deleting a large file takes considerable time and can cause serious replication delays, which is why large files on Linux are usually deleted via the hard-link trick.
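The hard-link trick mentioned above can be demonstrated in a few lines of shell (an illustrative sketch; the file here is a tiny stand-in for a large relay log):

```shell
#!/bin/sh
# Demonstrate the hard-link deletion trick used for large relay logs.
workdir=$(mktemp -d)
relay="$workdir/mysqld-relay-bin.000001"

printf 'relay log payload' > "$relay"   # stand-in for a huge relay log

# 1. Create a hard link on the same filesystem.
ln "$relay" "$workdir/relay.hardlink"

# 2. Removing the original name is just an unlink of one directory entry;
#    the data blocks stay referenced by the hard link, so this returns
#    immediately even for a very large file.
rm "$relay"
[ -s "$workdir/relay.hardlink" ] && survived=yes

# 3. The slow block reclamation happens later, when the last link goes away.
rm "$workdir/relay.hardlink"
remaining=$(ls "$workdir" | wc -l)
echo "survived=$survived remaining=$remaining"
```

This is exactly why purge_relay_logs asks for a workdir on the same partition as the relay logs: hard links cannot cross filesystems.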

3. Set up periodic cleanup relay scripts

MHA Node ships with a purge_relay_logs script that creates hard links for the relay logs, executes SET GLOBAL relay_log_purge=1, waits a few seconds for the SQL thread to switch to a new relay log, and then executes SET GLOBAL relay_log_purge=0.

Here's how to use the script:

# purge_relay_logs --user=monitor --password=monitor123 --disable_relay_log_purge --workdir=/tmp/

2017-04-24 20:27:46: purge_relay_logs script started.
 Found relay_log.info: /var/lib/mysql/relay-log.info
 Opening /var/lib/mysql/mysqld-relay-bin.000001 ..
 Opening /var/lib/mysql/mysqld-relay-bin.000002 ..
 Opening /var/lib/mysql/mysqld-relay-bin.000003 ..
 Opening /var/lib/mysql/mysqld-relay-bin.000004 ..
 Opening /var/lib/mysql/mysqld-relay-bin.000005 ..
 Opening /var/lib/mysql/mysqld-relay-bin.000006 ..
 Executing SET GLOBAL relay_log_purge=1; FLUSH LOGS; sleeping a few seconds so that SQL thread can delete older relay log files (if it keeps up); SET GLOBAL relay_log_purge=0; .. ok.
2017-04-24 20:27:50: All relay log purging operations succeeded.

Among them

--user: MySQL user name

--password: the MySQL user's password

--host: MySQL server address

--workdir: where the hard links for the relay logs are created; defaults to /var/tmp. Because creating a hard link across partitions fails, a location on the same partition as the relay logs must be specified explicitly.

--disable_relay_log_purge: by default the script exits immediately if relay_log_purge=1. With this option, the script first sets relay_log_purge to 1, purges the relay logs, and then sets the parameter back to 0.

Set up crontab to clean up relay log periodically

MHA calls the mysqlbinlog command directly during the switch, so you need to specify the specific path of mysqlbinlog in the environment variable.

# vim /etc/cron.d/purge_relay_logs

0 4 * * * /usr/local/bin/purge_relay_logs --user=monitor --password=monitor123 --disable_relay_log_purge --workdir=/tmp/ >> /tmp/purge_relay_logs.log 2>&1

Note: it is best for each slave server to perform this scheduled task at different points in time.

4. Add the path of mysqlbinlog to the environment variable
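For example, PATH can be extended like this (a sketch; /usr/local/mysql/bin is an assumed install location, and the /etc/profile line is shown commented out so the snippet has no side effects — adjust both to your system):

```shell
#!/bin/sh
# Add the directory containing mysqlbinlog to PATH.
# /usr/local/mysql/bin is an assumed install location; adjust as needed.
MYSQL_BIN_DIR=/usr/local/mysql/bin

# To make this permanent, append it to /etc/profile:
# echo "export PATH=\$PATH:$MYSQL_BIN_DIR" >> /etc/profile

# Apply to the current shell:
PATH="$PATH:$MYSQL_BIN_DIR"
export PATH

echo "${PATH##*:}"
```

Verify afterwards with `which mysqlbinlog`; MHA's SSH-invoked commands must be able to find it without an interactive login environment.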

6. Check the SSH configuration

Execute on Monitor host

# masterha_check_ssh --conf=/etc/masterha/app1.cnf

Wed Jul 20 14:33:36 2016 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Wed Jul 20 14:33:36 2016 - [info] Reading application default configuration from /etc/masterha/app1.cnf..
Wed Jul 20 14:33:36 2016 - [info] Reading server configuration from /etc/masterha/app1.cnf..
Wed Jul 20 14:33:36 2016 - [info] Starting SSH connection tests..
Wed Jul 20 14:33:51 2016 - [debug]
Wed Jul 20 14:33:36 2016 - [debug]  Connecting via SSH from root@192.168.244.10(192.168.244.10:22) to root@192.168.244.20(192.168.244.20:22)..
Wed Jul 20 14:33:48 2016 - [debug]   ok.
Wed Jul 20 14:33:48 2016 - [debug]  Connecting via SSH from root@192.168.244.10(192.168.244.10:22) to root@192.168.244.30(192.168.244.30:22)..
Wed Jul 20 14:33:50 2016 - [debug]   ok.
Wed Jul 20 14:33:55 2016 - [debug]
Wed Jul 20 14:33:37 2016 - [debug]  Connecting via SSH from root@192.168.244.30(192.168.244.30:22) to root@192.168.244.10(192.168.244.10:22)..
Wed Jul 20 14:33:49 2016 - [debug]   ok.
Wed Jul 20 14:33:49 2016 - [debug]  Connecting via SSH from root@192.168.244.30(192.168.244.30:22) to root@192.168.244.20(192.168.244.20:22)..
Wed Jul 20 14:33:54 2016 - [debug]   ok.
Wed Jul 20 14:33:55 2016 - [debug]
Wed Jul 20 14:33:36 2016 - [debug]  Connecting via SSH from root@192.168.244.20(192.168.244.20:22) to root@192.168.244.10(192.168.244.10:22)..
Wed Jul 20 14:33:49 2016 - [debug]   ok.
Wed Jul 20 14:33:49 2016 - [debug]  Connecting via SSH from root@192.168.244.20(192.168.244.20:22) to root@192.168.244.30(192.168.244.30:22)..
Wed Jul 20 14:33:54 2016 - [debug]   ok.
Wed Jul 20 14:33:55 2016 - [info] All SSH connection tests passed successfully.


7. Check the status of the entire cluster

Execute on Monitor host

# masterha_check_repl --conf=/etc/masterha/app1.cnf

Wed Jul 20 14:44:30 2016 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Wed Jul 20 14:44:30 2016 - [info] Reading application default configuration from /etc/masterha/app1.cnf..
Wed Jul 20 14:44:30 2016 - [info] Reading server configuration from /etc/masterha/app1.cnf..
Wed Jul 20 14:44:30 2016 - [info] MHA::MasterMonitor version 0.56.
Wed Jul 20 14:44:31 2016 - [info] GTID failover mode = 0
Wed Jul 20 14:44:31 2016 - [info] Dead Servers:
Wed Jul 20 14:44:31 2016 - [info] Alive Servers:
Wed Jul 20 14:44:31 2016 - [info]   192.168.244.10(192.168.244.10:3306)
Wed Jul 20 14:44:31 2016 - [info]   192.168.244.20(192.168.244.20:3306)
Wed Jul 20 14:44:31 2016 - [info]   192.168.244.30(192.168.244.30:3306)
Wed Jul 20 14:44:31 2016 - [info] Alive Slaves:
Wed Jul 20 14:44:31 2016 - [info]   192.168.244.20(192.168.244.20:3306)  Version=5.6.31 (oldest major version between slaves) log-bin:disabled
Wed Jul 20 14:44:31 2016 - [info]     Replicating from 192.168.244.10(192.168.244.10:3306)
Wed Jul 20 14:44:31 2016 - [info]     Primary candidate for the new Master (candidate_master is set)
Wed Jul 20 14:44:31 2016 - [info]   192.168.244.30(192.168.244.30:3306)  Version=5.6.31 (oldest major version between slaves) log-bin:disabled
Wed Jul 20 14:44:31 2016 - [info]     Replicating from 192.168.244.10(192.168.244.10:3306)
Wed Jul 20 14:44:31 2016 - [info] Current Alive Master: 192.168.244.10(192.168.244.10:3306)
Wed Jul 20 14:44:31 2016 - [info] Checking slave configurations..
Wed Jul 20 14:44:31 2016 - [warning]  log-bin is not set on slave 192.168.244.20(192.168.244.20:3306). This host cannot be a master.
Wed Jul 20 14:44:31 2016 - [warning]  log-bin is not set on slave 192.168.244.30(192.168.244.30:3306). This host cannot be a master.
Wed Jul 20 14:44:31 2016 - [info] Checking replication filtering settings..
Wed Jul 20 14:44:31 2016 - [info]  binlog_do_db= , binlog_ignore_db=
Wed Jul 20 14:44:31 2016 - [info]  Replication filtering check ok.
Wed Jul 20 14:44:31 2016 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln361] None of slaves can be master. Check failover configuration file or log-bin settings in my.cnf
Wed Jul 20 14:44:31 2016 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln424] Error happened on checking configurations.  at /usr/local/bin/masterha_check_repl line 48.
Wed Jul 20 14:44:31 2016 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln523] Error happened on monitoring servers.
Wed Jul 20 14:44:31 2016 - [info] Got exit code 1 (Not master dead).

MySQL Replication Health is NOT OK!


The error is obvious: neither the candidate master nor the slave has log-bin enabled, so neither could be promoted later.

After setting up the log-bin, re-execute:

Wed Jul 20 15:49:58 2016 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Wed Jul 20 15:49:58 2016 - [info] Reading application default configuration from /etc/masterha/app1.cnf..
Wed Jul 20 15:49:58 2016 - [info] Reading server configuration from /etc/masterha/app1.cnf..
Wed Jul 20 15:49:58 2016 - [info] MHA::MasterMonitor version 0.56.
Wed Jul 20 15:49:59 2016 - [info] GTID failover mode = 0
Wed Jul 20 15:49:59 2016 - [info] Dead Servers:
Wed Jul 20 15:49:59 2016 - [info] Alive Servers:
Wed Jul 20 15:49:59 2016 - [info]   192.168.244.10(192.168.244.10:3306)
Wed Jul 20 15:49:59 2016 - [info]   192.168.244.20(192.168.244.20:3306)
Wed Jul 20 15:49:59 2016 - [info]   192.168.244.30(192.168.244.30:3306)
Wed Jul 20 15:49:59 2016 - [info] Alive Slaves:
Wed Jul 20 15:49:59 2016 - [info]   192.168.244.20(192.168.244.20:3306)  Version=5.6.31-log (oldest major version between slaves) log-bin:enabled
Wed Jul 20 15:49:59 2016 - [info]     Replicating from 192.168.244.10(192.168.244.10:3306)
Wed Jul 20 15:49:59 2016 - [info]     Primary candidate for the new Master (candidate_master is set)
Wed Jul 20 15:49:59 2016 - [info]   192.168.244.30(192.168.244.30:3306)  Version=5.6.31-log (oldest major version between slaves) log-bin:enabled
Wed Jul 20 15:49:59 2016 - [info]     Replicating from 192.168.244.10(192.168.244.10:3306)
Wed Jul 20 15:49:59 2016 - [info] Current Alive Master: 192.168.244.10(192.168.244.10:3306)
Wed Jul 20 15:49:59 2016 - [info] Checking slave configurations..
Wed Jul 20 15:49:59 2016 - [info] Checking replication filtering settings..
Wed Jul 20 15:49:59 2016 - [info]  binlog_do_db= , binlog_ignore_db=
Wed Jul 20 15:49:59 2016 - [info]  Replication filtering check ok.
Wed Jul 20 15:49:59 2016 - [info] GTID (with auto-pos) is not supported
Wed Jul 20 15:49:59 2016 - [info] Starting SSH connection tests..
Wed Jul 20 15:50:17 2016 - [info] All SSH connection tests passed successfully.
Wed Jul 20 15:50:17 2016 - [info] Checking MHA Node version..
Wed Jul 20 15:50:18 2016 - [info]  Version check ok.
Wed Jul 20 15:50:18 2016 - [info] Checking SSH publickey authentication settings on the current master..
Wed Jul 20 15:50:20 2016 - [info] HealthCheck: SSH to 192.168.244.10 is reachable.
Wed Jul 20 15:50:21 2016 - [info] Master MHA Node version is 0.56.
Wed Jul 20 15:50:21 2016 - [info] Checking recovery script configurations on 192.168.244.10(192.168.244.10:3306)..
Wed Jul 20 15:50:21 2016 - [info]   Executing command: save_binary_logs --command=test --start_pos=4 --binlog_dir=/var/lib/mysql --output_file=/tmp/save_binary_logs_test --manager_version=0.56 --start_file=mysqld-bin.000002
Wed Jul 20 15:50:21 2016 - [info]   Connecting to root@192.168.244.10(192.168.244.10:22)..
  Creating /tmp if not exists..    ok.
  Checking output directory is accessible or not..   ok.
  Binlog found at /var/lib/mysql, up to mysqld-bin.000002
Wed Jul 20 15:50:23 2016 - [info] Binlog setting check done.
Wed Jul 20 15:50:23 2016 - [info] Checking SSH publickey authentication and checking recovery script configurations on all alive slave servers..
Wed Jul 20 15:50:23 2016 - [info]   Executing command : apply_diff_relay_logs --command=test --slave_user='monitor' --slave_host=192.168.244.20 --slave_ip=192.168.244.20 --slave_port=3306 --workdir=/tmp --target_version=5.6.31-log --manager_version=0.56 --relay_log_info=/var/lib/mysql/relay-log.info --relay_dir=/var/lib/mysql/ --slave_pass=xxx
Wed Jul 20 15:50:23 2016 - [info]   Connecting to root@192.168.244.20(192.168.244.20:22)..
  Checking slave recovery environment settings..
    Opening /var/lib/mysql/relay-log.info ... ok.
    Relay log found at /var/lib/mysql, up to mysqld-relay-bin.000004
    Temporary relay log file is /var/lib/mysql/mysqld-relay-bin.000004
    Testing mysql connection and privileges..Warning: Using a password on the command line interface can be insecure.
 done.
    Testing mysqlbinlog output.. done.
    Cleaning up test file(s).. done.
Wed Jul 20 15:50:28 2016 - [info]   Executing command : apply_diff_relay_logs --command=test --slave_user='monitor' --slave_host=192.168.244.30 --slave_ip=192.168.244.30 --slave_port=3306 --workdir=/tmp --target_version=5.6.31-log --manager_version=0.56 --relay_log_info=/var/lib/mysql/relay-log.info --relay_dir=/var/lib/mysql/ --slave_pass=xxx
Wed Jul 20 15:50:28 2016 - [info]   Connecting to root@192.168.244.30(192.168.244.30:22)..
  Checking slave recovery environment settings..
    Opening /var/lib/mysql/relay-log.info ... ok.
    Relay log found at /var/lib/mysql, up to mysqld-relay-bin.000008
    Temporary relay log file is /var/lib/mysql/mysqld-relay-bin.000008
    Testing mysql connection and privileges..Warning: Using a password on the command line interface can be insecure.
 done.
    Testing mysqlbinlog output.. done.
    Cleaning up test file(s).. done.
Wed Jul 20 15:50:32 2016 - [info] Slaves settings check done.
Wed Jul 20 15:50:32 2016 - [info]
192.168.244.10(192.168.244.10:3306) (current master)
 +--192.168.244.20(192.168.244.20:3306)
 +--192.168.244.30(192.168.244.30:3306)

Wed Jul 20 15:50:32 2016 - [info] Checking replication health on 192.168.244.20..
Wed Jul 20 15:50:32 2016 - [info]  ok.
Wed Jul 20 15:50:32 2016 - [info] Checking replication health on 192.168.244.30..
Wed Jul 20 15:50:32 2016 - [info]  ok.
Wed Jul 20 15:50:32 2016 - [info] Checking master_ip_failover_script status:
Wed Jul 20 15:50:32 2016 - [info]   /usr/local/bin/master_ip_failover --command=status --ssh_user=root --orig_master_host=192.168.244.10 --orig_master_ip=192.168.244.10 --orig_master_port=3306
Wed Jul 20 15:50:32 2016 - [info]  OK.
Wed Jul 20 15:50:32 2016 - [warning] shutdown_script is not defined.
Wed Jul 20 15:50:32 2016 - [info] Got exit code 0 (Not master dead).

MySQL Replication Health is OK.


The check passes.

8. Check the status of MHA Manager

# masterha_check_status --conf=/etc/masterha/app1.cnf
app1 is stopped(2:NOT_RUNNING).

If MHA Manager is running, "PING_OK" is displayed; otherwise "NOT_RUNNING" is displayed, which means MHA monitoring has not been started.
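A monitoring wrapper only needs to classify that output string. A minimal sketch (the helper function is hypothetical; in real use you would feed it the output of `masterha_check_status --conf=/etc/masterha/app1.cnf`, whereas here it is exercised against the two canned status lines shown in this section):

```shell
#!/bin/sh
# Classify masterha_check_status output by its status keyword.
mha_state() {
    case "$1" in
        *PING_OK*)     echo running ;;
        *NOT_RUNNING*) echo stopped ;;
        *)             echo unknown ;;
    esac
}

s1=$(mha_state "app1 (pid:1873) is running(0:PING_OK), master:192.168.244.10")
s2=$(mha_state "app1 is stopped(2:NOT_RUNNING).")
echo "$s1 $s2"
```

A cron job built around such a check could alert when the manager process has died.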

9. Enable MHA Manager monitoring

# nohup masterha_manager --conf=/etc/masterha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /masterha/app1/manager.log 2>&1 &

Among them

--remove_dead_master_conf: when a master-slave switch occurs, the old master's entry is removed from the configuration file.

--ignore_last_failover: by default, MHA writes an app1.failover.complete file under /masterha/app1 after a switchover. If that file is still present the next time and less than 8 hours have passed since the last switchover, a new switchover is refused, unless you manually run rm -rf /masterha/app1/app1.failover.complete after the first one. This parameter tells MHA to ignore the file generated by the previous switchover.
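The cooldown rule can be illustrated in a few lines of shell (a sketch of the idea only, not MHA's actual code; the marker file name mirrors the one above, and `touch -d '@epoch'` / `stat -c %Y` assume GNU coreutils):

```shell
#!/bin/sh
# Sketch of the app1.failover.complete cooldown rule:
# refuse a new failover if the marker file is younger than 8 hours.
workdir=$(mktemp -d)
marker="$workdir/app1.failover.complete"
cooldown=$((8 * 3600))

# Simulate a failover that completed 10 hours ago.
touch "$marker"
old_mtime=$(( $(date +%s) - 10 * 3600 ))
touch -d "@$old_mtime" "$marker"          # GNU touch syntax

now=$(date +%s)
mtime=$(stat -c %Y "$marker" 2>/dev/null || stat -f %m "$marker")
age=$(( now - mtime ))

if [ ! -e "$marker" ] || [ "$age" -ge "$cooldown" ]; then
    decision=allow
else
    decision=refuse
fi
echo "$decision"
```

Since the simulated switchover is older than 8 hours, a new failover would be allowed; with a fresh marker it would be refused, which is what --ignore_last_failover overrides.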

Check whether the MHA Manager monitoring is normal.

# masterha_check_status --conf=/etc/masterha/app1.cnf
app1 (pid:1873) is running(0:PING_OK), master:192.168.244.10

10. Turn off MHA Manager monitoring

# masterha_stop --conf=/etc/masterha/app1.cnf
Stopped app1 successfully.
[1]+  Exit 1  nohup masterha_manager --conf=/etc/masterha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /masterha/app1/manager.log 2>&1

With that, the basic MHA configuration is complete. Next, configure the VIP.

11. VIP configuration

The VIP can be configured in two ways: by introducing Keepalived to manage it, or by managing it manually in the failover script.

Managing the VIP with keepalived carries a split-brain risk: if the network between master and slave fails, the slave will seize the VIP, leaving both databases holding it at once and causing an IP conflict. Keepalived is therefore not recommended when the network is unreliable.

In actual production the second approach, managing the VIP manually in the script, is more common, so readers not interested in keepalived can skip straight past the first method.

1. Keepalived manages VIP

1 > install keepalived

Because a candidate master is set up here, keepalived is installed only on Master and Candidate master.

If there is no candidate master and the two slaves are equal peers, keepalived must be installed on both slaves.

# wget http://www.keepalived.org/software/keepalived-1.2.24.tar.gz

# tar xvf keepalived-1.2.24.tar.gz

# cd keepalived-1.2.24

# ./configure --prefix=/usr/local/keepalived

# make

# make install

# cp /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/rc.d/init.d/

# cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/

# mkdir /etc/keepalived

# cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/

# cp /usr/local/keepalived/sbin/keepalived /usr/sbin/

2 > set up a separate log file for keepalived (optional)

Keepalived logs go to /var/log/messages by default.

# vim /etc/sysconfig/keepalived

KEEPALIVED_OPTIONS="-D -d -S 0"

Set up syslog

# vim /etc/rsyslog.conf

Add the following:

local0.*    /var/log/keepalived.log

# service rsyslog restart

3 > configure keepalived

Modify on Master

# vim /etc/keepalived/keepalived.conf

global_defs {
    notification_email {
        slowtech@qq.com
    }
    notification_email_from root@localhost.localdomain
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id MySQL-HA
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 150
    advert_int 1
    nopreempt
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.244.188/24
    }
}


For a detailed description of the parameters of keepalived, please refer to:

Building MyCAT High availability load balancing Cluster by LVS+Keepalived

Working principle and configuration description of keepalived

Scp the configuration file to the Candidate master

# scp /etc/keepalived/keepalived.conf 192.168.244.20:/etc/keepalived/

On the candidate master, just change the priority in the configuration file to 90.

Note: why do we set keepalived to backup mode here?

In master-backup mode, if the master goes down the VIP automatically floats to the slave; but once the master is repaired and keepalived is started again, it grabs the VIP back, even with nopreempt (no preemption) set. In backup-backup mode, by contrast, when the repaired master starts keepalived it does not preempt the new master's VIP, even if its priority is higher than the new master's.

4 > start keepalived

Start it on Master first

# service keepalived start

env: /etc/init.d/keepalived: Permission denied

# chmod +x /etc/init.d/keepalived

# service keepalived start

Check the binding status

# ip a

1: lo: mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:c6:47:04 brd ff:ff:ff:ff:ff:ff
    inet 192.168.244.10/24 brd 192.168.244.255 scope global eth0
    inet 192.168.244.188/24 scope global secondary eth0
    inet6 fe80::20c:29ff:fec6:4704/64 scope link
       valid_lft forever preferred_lft forever


As can be seen, the VIP (192.168.244.188) is bound to Master's eth0 interface.

Then start keepalived on the Candidate master

# service keepalived start

5 > introducing keepalived into MHA

Edit /usr/local/bin/master_ip_failover

Relative to the original file, modify the place to 93-95 lines

#!/usr/bin/env perl

# Copyright (C) 2011 DeNA Co.,Ltd.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301  USA

## Note: This is a sample script and is not complete. Modify the script based on your environment.

use strict;
use warnings FATAL => 'all';

use Getopt::Long;
use MHA::DBHelper;

my (
  $command,        $ssh_user,         $orig_master_host,
  $orig_master_ip, $orig_master_port, $new_master_host,
  $new_master_ip,  $new_master_port,  $new_master_user,
  $new_master_password
);
GetOptions(
  'command=s'             => \$command,
  'ssh_user=s'            => \$ssh_user,
  'orig_master_host=s'    => \$orig_master_host,
  'orig_master_ip=s'      => \$orig_master_ip,
  'orig_master_port=i'    => \$orig_master_port,
  'new_master_host=s'     => \$new_master_host,
  'new_master_ip=s'       => \$new_master_ip,
  'new_master_port=i'     => \$new_master_port,
  'new_master_user=s'     => \$new_master_user,
  'new_master_password=s' => \$new_master_password,
);

exit &main();

sub main {
  if ( $command eq "stop" || $command eq "stopssh" ) {

    # $orig_master_host, $orig_master_ip, $orig_master_port are passed.
    # If you manage master ip address at global catalog database,
    # invalidate orig_master_ip here.
    my $exit_code = 1;
    eval {

      # updating global catalog, etc
      $exit_code = 0;
    };
    if ($@) {
      warn "Got Error: $@\n";
      exit $exit_code;
    }
    exit $exit_code;
  }
  elsif ( $command eq "start" ) {

    # all arguments are passed.
    # If you manage master ip address at global catalog database,
    # activate new_master_ip here.
    # You can also grant write access (create user, set read_only=0, etc) here.
    my $exit_code = 10;
    eval {
      my $new_master_handler = new MHA::DBHelper();

      # args: hostname, port, user, password, raise_error_or_not
      $new_master_handler->connect( $new_master_ip, $new_master_port,
        $new_master_user, $new_master_password, 1 );

      ## Set read_only=0 on the new master
      $new_master_handler->disable_log_bin_local();
      print "Set read_only=0 on the new master.\n";
      $new_master_handler->disable_read_only();

      ## Creating an app user on the new master
      # print "Creating app user on the new master..\n";
      # FIXME_xxx_create_user( $new_master_handler->{dbh} );
      $new_master_handler->enable_log_bin_local();
      $new_master_handler->disconnect();

      ## Modified part: stop keepalived on the old master so the VIP drifts to the new master
      my $cmd;
      $cmd = 'ssh ' . $ssh_user . '@' . $orig_master_ip . ' service keepalived stop';
      system($cmd);
      $exit_code = 0;
    };
    if ($@) {
      warn $@;

      # If you want to continue failover, exit 10.
      exit $exit_code;
    }
    exit $exit_code;
  }
  elsif ( $command eq "status" ) {

    # do nothing
    exit 0;
  }
  else {
    &usage();
    exit 1;
  }
}

sub usage {
  print
"Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}


2. Managing the VIP through a script

Edit /usr/local/bin/master_ip_failover

#!/usr/bin/env perl

# Copyright (C) 2011 DeNA Co.,Ltd. Released under GPL v2; this is a sample
# script and is not complete. Modify it based on your environment.

use strict;
use warnings FATAL => 'all';

use Getopt::Long;

my (
  $command,        $ssh_user,         $orig_master_host,
  $orig_master_ip, $orig_master_port, $new_master_host,
  $new_master_ip,  $new_master_port
);

# VIP and interface from this article; eth0:1 is used as the alias carrying the VIP
my $vip           = '192.168.244.188/24';
my $key           = '1';
my $ssh_start_vip = "/sbin/ifconfig eth0:$key $vip";
my $ssh_stop_vip  = "/sbin/ifconfig eth0:$key down";

GetOptions(
  'command=s'          => \$command,
  'ssh_user=s'         => \$ssh_user,
  'orig_master_host=s' => \$orig_master_host,
  'orig_master_ip=s'   => \$orig_master_ip,
  'orig_master_port=i' => \$orig_master_port,
  'new_master_host=s'  => \$new_master_host,
  'new_master_ip=s'    => \$new_master_ip,
  'new_master_port=i'  => \$new_master_port,
);

exit &main();

sub main {
  if ( $command eq "stop" || $command eq "stopssh" ) {
    my $exit_code = 1;
    eval {
      print "Disabling the VIP on old master: $orig_master_host \n";
      &stop_vip();
      $exit_code = 0;
    };
    if ($@) {
      warn "Got Error: $@\n";
      exit $exit_code;
    }
    exit $exit_code;
  }
  elsif ( $command eq "start" ) {
    my $exit_code = 10;
    eval {
      print "Enabling the VIP - $vip on the new master - $new_master_host \n";
      &start_vip();
      $exit_code = 0;
    };
    if ($@) {
      warn $@;
      exit $exit_code;
    }
    exit $exit_code;
  }
  elsif ( $command eq "status" ) {
    print "Checking the Status of the script.. OK \n";
    exit 0;
  }
  else {
    &usage();
    exit 1;
  }
}

# Bring the VIP up on the new master
sub start_vip() {
  `ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}

# Take the VIP down on the old master
sub stop_vip() {
  return 0 unless ($ssh_user);
  `ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}

sub usage {
  print
"Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}

In a real production environment, managing the VIP through a script like this is recommended, as it can effectively prevent split-brain.
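The decision logic of such a VIP-managing failover script can be sketched in Python (a minimal illustration, not MHA's API; the VIP and interface values come from this article, and the eth0:1 alias and function name are assumptions):

```python
# Minimal sketch of the VIP-switching decision in a master_ip_failover-style script.
# VIP and interface values are from this article; the eth0:1 alias is an assumption.
VIP = "192.168.244.188/24"
IFACE = "eth0:1"

def vip_commands(command, ssh_user, orig_master_host, new_master_host):
    """Return the ssh command(s) the failover script would run for a given phase."""
    if command in ("stop", "stopssh"):
        # Failover phase 1: take the VIP away from the old (failed) master
        return ["ssh %s@%s /sbin/ifconfig %s down" % (ssh_user, orig_master_host, IFACE)]
    if command == "start":
        # Failover phase 2: bring the VIP up on the newly promoted master
        return ["ssh %s@%s /sbin/ifconfig %s %s" % (ssh_user, new_master_host, IFACE, VIP)]
    if command == "status":
        return []  # nothing to do for a status check
    raise ValueError("unknown command: %s" % command)

print(vip_commands("start", "root", "192.168.244.10", "192.168.244.20")[0])
# prints: ssh root@192.168.244.20 /sbin/ifconfig eth0:1 192.168.244.188/24
```

MHA calls the script with --command=stop during phase 1 and --command=start after the new master is promoted, which is why the two branches act on different hosts.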

At this point, the MHA high-availability environment is basically in place.

For common operations on MHA, including automatic Failover, manual Failover, and online switching, please refer to another blog:

The steps and principles of MHA online handover

Practice and principle of MHA automatic Failover and Manual Failover

Summary:

1. Scripts such as master_ip_failover, master_ip_online_change, and send_report can be debugged separately.

/usr/local/bin/master_ip_online_change --command=stop --orig_master_ip=192.168.244.10 --orig_master_host=192.168.244.10 --orig_master_port=3306 --orig_master_user=monitor --orig_master_password=monitor123 --orig_master_ssh_user=root --new_master_host=192.168.244.20 --new_master_ip=192.168.244.20 --new_master_port=3306 --new_master_user=monitor --new_master_password=monitor123 --new_master_ssh_user=root

/usr/local/bin/master_ip_failover --command=start --ssh_user=root --orig_master_host=192.168.244.10 --orig_master_ip=192.168.244.10 --orig_master_port=3306 --new_master_host=192.168.244.20 --new_master_ip=192.168.244.20 --new_master_port=3306 --new_master_user='monitor' --new_master_password='monitor123'
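When debugging these scripts repeatedly, a small wrapper saves typing; here is a hypothetical Python helper (the debug_script name is illustrative, and echo is used as a stand-in so the dry run works without MHA installed):

```python
import subprocess

def debug_script(path, **kwargs):
    """Run an MHA callback script with --key=value arguments; return (exit_code, stdout)."""
    args = [path] + ["--%s=%s" % (k, v) for k, v in kwargs.items()]
    proc = subprocess.run(args, capture_output=True, text=True)
    return proc.returncode, proc.stdout

# Dry run with echo in place of the real script, just to see the argv it builds;
# in real use, path would be /usr/local/bin/master_ip_failover etc.
rc, out = debug_script("echo", command="status", ssh_user="root")
print(rc, out.strip())
# prints: 0 --command=status --ssh_user=root
```

An exit code of 0 means success; master_ip_failover deliberately exits 10 on errors in the "start" branch to tell MHA it may continue the failover.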

2. Officially, only sample versions of the master_ip_failover, master_ip_online_change, and send_report scripts are provided; the switching logic needs to be defined by yourself.

Many people are not familiar with Perl and feel they have no way to start. In fact, the Perl script can simply call out to other scripts written in Python, shell, etc.

For example:

[root@node4 ~]# cat test.pl
#!/usr/bin/perl
use strict;
my $cmd = 'python /root/test.py';
system($cmd);
[root@node4 ~]# cat test.py
print "hello,python"
[root@node4 ~]# perl test.pl
hello,python

This is the end of the article on "How to deploy MySQL high availability scenario MHA". I hope the above content can be of some help to you so that you can learn more knowledge. If you think the article is good, please share it for more people to see.
