
MySQL Learning Notes: Building MHA High Availability


4. Build MHA High Availability

4.1 Architecture Diagram

Only one master-slave replication group is built here; the structure is as follows.
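The original diagram is not reproduced here; based on the host names and addresses used later in this section, the topology is roughly:

MHA Manager   192.168.56.50   (monitors the replication group and performs failover)
Master        192.168.56.51   (read/write)
Slave01       192.168.56.52   (candidate master, replicates from 192.168.56.51)
Slave02       192.168.56.53   (replicates from 192.168.56.51)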

4.2 Install MHA Node

MHA Node must be installed on all nodes, including the MHA Manager, the Master, and the Slaves.

4.2.1 Install the dependency packages

yum install -y perl-DBD-MySQL perl-ExtUtils-MakeMaker perl-CPAN

4.2.2 Download the package

https://github.com/yoshinorim/mha4mysql-node/releases/tag/v0.58

4.2.3 Compile and install

# extract
tar zxvf mha4mysql-node-0.58.tar.gz
# move to the /usr/local/ directory and change into it
mv mha4mysql-node-0.58 /usr/local/
cd /usr/local/mha4mysql-node-0.58
# compile and install
perl Makefile.PL
make
make install

After the installation is complete, the following Node scripts are generated in /usr/local/bin/ (these tools are usually triggered by the MHA Manager scripts rather than run by hand):

save_binary_logs        # save and copy the master's binary logs
apply_diff_relay_logs   # identify differential relay log events and apply them to the other slaves
filter_mysqlbinlog      # remove unnecessary ROLLBACK events (MHA no longer uses this tool)
purge_relay_logs        # clear the relay logs (does not block the SQL thread)

4.3 Install MHA Manager

4.3.1 Install the dependency packages

yum install -y perl-DBD-MySQL perl-Config-Tiny perl-Log-Dispatch perl-Parallel-ForkManager

4.3.2 Download the package

https://github.com/yoshinorim/mha4mysql-manager/releases/tag/v0.58

4.3.3 Compile and install

# extract
tar -zxf mha4mysql-manager-0.58.tar.gz
# move to the /usr/local/ directory and change into it
mv mha4mysql-manager-0.58 /usr/local/
cd /usr/local/mha4mysql-manager-0.58/
# compile and install
perl Makefile.PL
make
make install

After the installation is complete, the following command scripts will be added to /usr/local/bin:

masterha_check_repl        # check MySQL replication health status
masterha_check_ssh         # check SSH health status
masterha_check_status      # check the current MHA operational status
masterha_conf_host         # add or delete configured server information
masterha_manager           # start MHA
masterha_master_monitor    # detect whether the master is down
masterha_master_switch     # control failover (automatic or manual; see the example after this list)
masterha_secondary_check   # double-check the master's status from other hosts when the manager node cannot reach it
masterha_stop              # stop MHA
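As an illustration of masterha_master_switch, the following is a minimal sketch of a manual online switchover, assuming the app1.cnf configuration file created in section 4.7 and 192.168.56.52 as the new master; it is not part of the original walkthrough:

# the running MHA Manager must be stopped before an online switchover
masterha_stop --conf=/mha/app1/app1.cnf
# promote 192.168.56.52 and turn the old master into a slave of it
masterha_master_switch --conf=/mha/app1/app1.cnf --master_state=alive \
  --new_master_host=192.168.56.52 --new_master_port=3306 \
  --orig_master_is_new_slave --running_updates_limit=10000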

There are also related scripts in the /usr/local/mha4mysql-manager-0.58/samples/scripts directory:

master_ip_failover        # manages the VIP during automatic failover. Not required: if keepalived is used, you can write your own script to manage the VIP, for example monitor mysql and stop keepalived when mysql is abnormal so that the VIP drifts automatically
master_ip_online_change   # manages the VIP during an online switchover; not required
power_manager             # shuts down the failed host after a failure; not required, a simple shell script can do this
send_report               # sends an alert after a failover; not required, a simple shell script can do this

4.4 Set up host name resolution

4.4.1 Set the hostnames

# MHA
hostnamectl set-hostname MHA
# Master
hostnamectl set-hostname Master
# Slave01
hostnamectl set-hostname Slave01
# Slave02
hostnamectl set-hostname Slave02

4.4.2 Configure the hosts file

# vim /etc/hosts
192.168.56.50 MHA
192.168.56.51 Master
192.168.56.52 Slave01
192.168.56.53 Slave02

4.5 Configure SSH mutual trust

# create the directory; execute on all nodes
mkdir ~/.ssh
cd ~/.ssh
# generate the public/private key pair, pressing Enter to accept the defaults; execute on all nodes
ssh-keygen -t rsa
# copy the public keys from all nodes to one machine and aggregate them into authorized_keys
# on 192.168.56.51, copy the generated public key to 192.168.56.50
scp id_rsa.pub root@192.168.56.50:/root/.ssh/id_rsa.pub_51
# on 192.168.56.52, copy the generated public key to 192.168.56.50
scp id_rsa.pub root@192.168.56.50:/root/.ssh/id_rsa.pub_52
# on 192.168.56.53, copy the generated public key to 192.168.56.50
scp id_rsa.pub root@192.168.56.50:/root/.ssh/id_rsa.pub_53
# on 192.168.56.50, append the public keys of the 50/51/52/53 servers to the authentication file
cat id_rsa.pub id_rsa.pub_51 id_rsa.pub_52 id_rsa.pub_53 >> authorized_keys
# on 192.168.56.50, distribute the aggregated authentication file to the other nodes
scp authorized_keys root@192.168.56.51:/root/.ssh/
scp authorized_keys root@192.168.56.52:/root/.ssh/
scp authorized_keys root@192.168.56.53:/root/.ssh/
# verify passwordless SSH login on all nodes; you may need to type "yes" the first time, but no password should be required
ssh 192.168.56.50 date
ssh 192.168.56.51 date
ssh 192.168.56.52 date
ssh 192.168.56.53 date

4.6 Build master-slave replication

Refer to section 3 Replication and note the following:

# The replication filtering rules must be the same on all nodes, that is, the binlog_do_db and
# binlog_ignore_db parameters must be configured identically on the master and the slaves.
# Set the slave libraries to read-only by command; do not write this parameter into the configuration file
mysql -e "set global read_only=1"
# Turn off automatic relay log purging
mysql -e "set global relay_log_purge=0"

4.7 Configure MHA

4.7.1 Create the monitoring user

create user 'mha'@'%' identified by 'mha';
GRANT ALL PRIVILEGES ON *.* TO 'mha'@'%';
flush privileges;

4.7.2 Create the configuration file

(1) Create a directory

mkdir /mha/app1

(2) Edit the configuration file

# vim /mha/app1/app1.cnf

[server default]
# MHA manager log file
manager_log=/mha/app1/manager.log
# manager working directory
manager_workdir=/mha/app1
# path where the master node stores its binlogs, so that MHA can find them; here it is the MySQL data directory
master_binlog_dir=/data/mysql/3306/data
# path where the slave nodes store binlogs during a switchover
remote_workdir=/data/mysql/3306/data
# automatic switchover script
master_ip_failover_script=/usr/local/bin/master_ip_failover
# manual switchover script
master_ip_online_change_script=/usr/bin/master_ip_online_change
# once monitoring from the MHA Manager to 51 has a problem, the Manager will try to reach 51 from 52 and 53
secondary_check_script=/usr/local/bin/masterha_secondary_check -s 192.168.56.52 -s 192.168.56.53 --user=root --port=22 --master_host=192.168.56.51 --master_port=3306
# interval (in seconds) for monitoring the master node
ping_interval=3
# script that shuts down the failed host after a failure (its main purpose is to power off the host to prevent split-brain)
# shutdown_script=""
# database monitoring user
user=mha
password=mha
# replication user
repl_user=repl
repl_password=repl
# ssh login user
ssh_user=root

[server1]
hostname=192.168.56.51
port=3306

[server2]
candidate_master=1
check_repl_delay=0
hostname=192.168.56.52
port=3306

[server3]
hostname=192.168.56.53
port=3306

The configuration file can also be copied from the template and then modified:

# app1.cnf is the configuration file for one replication group
cp /usr/local/mha4mysql-manager-0.58/samples/conf/app1.cnf /mha/app1/app1.cnf
# masterha_default.cnf is the MHA manager global configuration file, through which multiple replication groups can be managed

4.7.3 Check the status

masterha_check_ssh --conf=/mha/app1/app1.cnf
masterha_check_repl --conf=/mha/app1/app1.cnf
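If everything is configured correctly, the two checks should finish with lines similar to the following (output trimmed):

All SSH connection tests passed successfully.
MySQL Replication Health is OK.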

If an error occurs:

Bareword "FIXME_xxx" not allowed while "strict subs" in use at /etc/mha/script/master_ip_failover line 100.

Fix: comment out the FIXME_xxx lines in the script, as sketched below.
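A minimal sketch of that fix, assuming GNU sed and that the script lives at the path shown in the error message; the FIXME_xxx calls are placeholders meant to be replaced by your own VIP-handling code, so commenting them out simply disables them:

# comment out every line that starts with FIXME_xxx in the failover script
sed -i 's/^\([[:space:]]*FIXME_xxx\)/#\1/' /etc/mha/script/master_ip_failover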

4.8 Start MHA Manager monitoring

# check the status of MHA manager monitoring; nothing is running yet
masterha_check_status --conf=/mha/app1/app1.cnf
# app1 is stopped(2:NOT_RUNNING).

# start MHA monitoring with --remove_dead_master_conf --ignore_last_failover
nohup masterha_manager --conf=/mha/app1/app1.cnf --remove_dead_master_conf --ignore_last_failover &
# wait a moment and check again; it now reports:
# app1 (pid:29512) is running(0:PING_OK), master:192.168.56.51

# stop MHA monitoring
masterha_stop --conf=/mha/app1/app1.cnf

4.9 Check the log

[root@localhost app1]# tail -f /mha/app1/manager.log
 +--192.168.56.53(192.168.56.53:3306)
Fri Aug 23 10:09:30 2019 - [info] Checking master_ip_failover_script status:
Fri Aug 23 10:09:30 2019 - [info]   /usr/local/mha4mysql-manager-0.58/samples/scripts/master_ip_failover --command=status --ssh_user=root --orig_master_host=192.168.56.51 --orig_master_ip=192.168.56.51 --orig_master_port=3306
Fri Aug 23 10:09:30 2019 - [info]  OK.
Fri Aug 23 10:09:30 2019 - [warning] shutdown_script is not defined.
Fri Aug 23 10:09:30 2019 - [info] Set master ping interval 3 seconds.
Fri Aug 23 10:09:30 2019 - [info] Set secondary check script: /usr/local/bin/masterha_secondary_check -s 192.168.56.52 -s 192.168.56.53 --user=root --port=22 --master_host=192.168.56.51 --master_port=3306
Fri Aug 23 10:09:30 2019 - [info] Starting ping health check on 192.168.56.51(192.168.56.51:3306)..
Fri Aug 23 10:09:30 2019 - [info] Ping(SELECT) succeeded, waiting until MySQL doesn't respond..

When the log shows Ping(SELECT) succeeded, monitoring of the whole system has started normally.

4.10 Regularly clean the relay logs

4.10.1 Create a cleaning script

# vim purge_relay_log.sh
#!/bin/bash
# database username, password and port
user=root
passwd='Yxc@3306'
port=3306
# script log storage path
log_dir='/mha/app1'
# location where hard links to the relay logs are created; the default is /var/tmp. Because creating
# hard-link files across different partitions fails, the location of the hard links must be specified.
# After the script runs successfully, the hard-linked relay log files are deleted.
work_dir='/mha'
# relay log purge script
purge='/usr/local/bin/purge_relay_logs'

if [ ! -d $log_dir ]
then
    mkdir $log_dir -p
fi

# --disable_relay_log_purge: by default, if relay_log_purge=1, the script cleans nothing and exits
# automatically. With this parameter, relay_log_purge is set to 0 when relay_log_purge=1, and after
# the relay logs are purged the parameter is set back to OFF.
$purge --user=$user --password=$passwd --disable_relay_log_purge --port=$port --workdir=$work_dir >> $log_dir/purge_relay_logs.log 2>&1

4.10.2 Grant execute permission

chmod a+x purge_relay_log.sh

4.10.3 Create a scheduled task

00 03 * * * /bin/bash /root/purge_relay_log.sh

4.11 Configure the VIP with keepalived

(omitted)
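Although the keepalived configuration is omitted in the original, a minimal sketch of a VIP definition on the Master is shown below; the VIP 192.168.56.100, the interface name eth0 and the priorities are assumptions, not values from the original:

# /etc/keepalived/keepalived.conf (on the Master; use a lower priority, e.g. 90, on the candidate master)
vrrp_instance VI_1 {
    state BACKUP            # both nodes as BACKUP with nopreempt, so the VIP does not flap back automatically
    nopreempt
    interface eth0          # assumed interface name
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.56.100      # assumed VIP
    }
}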

4.12 Failover test verification

(omitted)
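Although the test steps are omitted in the original, a minimal sketch of a failover test under this setup might look like the following (host addresses and the mha monitoring user are the ones used above; the mysqld service name is an assumption):

# on the MHA Manager: make sure monitoring is running
masterha_check_status --conf=/mha/app1/app1.cnf
# on the Master (192.168.56.51): simulate a crash (service name assumed)
systemctl stop mysqld
# on the MHA Manager: watch the failover being performed
tail -f /mha/app1/manager.log
# afterwards, verify that the remaining slave now replicates from the promoted candidate master (192.168.56.52)
mysql -h192.168.56.53 -umha -pmha -e "show slave status\G" | grep Master_Host
# note: masterha_manager exits after one failover; remove the app1.failover.complete file from the
# manager workdir (or start with --ignore_last_failover) before running another test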
