MySQL MHA Deployment - 04 MHA Configuration
1. MySQL MHA Deployment - 01 Introduction
2. MySQL MHA Deployment - 02 Master-Slave Replication
3. MySQL MHA Deployment - 03 MHA Software Installation
4. MySQL MHA Deployment - 04 MHA Configuration
5. MySQL MHA Deployment - 05 Failover
6. MySQL MHA Deployment - 06 Manual Switchover
7. MySQL MHA Deployment - 07 FAQ
Architecture description:
Reference: http://www.zhaibibei.cn/mysql/mha/
1. MySQL MHA pre-deployment preparation
1.1 hosts file
First, we add the host entries of all four servers to /etc/hosts on each server.
[root@rac1 mha]# vim /etc/hosts
192.168.2.222 rac4
192.168.2.187 rac1
192.168.2.188 rac2
192.168.2.223 rac3
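A quick way to confirm the entries resolve on every node (a minimal check, run on any of the four hosts):
# Each line should print the IP and hostname taken from /etc/hosts
for h in rac1 rac2 rac3 rac4; do getent hosts "$h"; done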
1.2 configure password-free login
Next, we need to configure password-free login between the servers.
Press Enter through all of the ssh-keygen prompts, then proceed to the next step.
----- 192.168.2.187 rac1 -----
ssh-keygen -t rsa
ssh-copy-id 192.168.2.188
ssh-copy-id 192.168.2.223
ssh-copy-id 192.168.2.222
ssh-copy-id rac2
ssh-copy-id rac3
ssh-copy-id rac4
----- 192.168.2.188 rac2 -----
ssh-keygen -t rsa
ssh-copy-id 192.168.2.187
ssh-copy-id 192.168.2.223
ssh-copy-id 192.168.2.222
ssh-copy-id rac1
ssh-copy-id rac3
ssh-copy-id rac4
----- 192.168.2.223 rac3 -----
ssh-keygen -t rsa
ssh-copy-id 192.168.2.187
ssh-copy-id 192.168.2.188
ssh-copy-id 192.168.2.222
ssh-copy-id rac1
ssh-copy-id rac2
ssh-copy-id rac4
----- 192.168.2.222 rac4 -----
ssh-keygen -t rsa
ssh-copy-id 192.168.2.187
ssh-copy-id 192.168.2.188
ssh-copy-id 192.168.2.223
ssh-copy-id rac1
ssh-copy-id rac2
ssh-copy-id rac3
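Once all keys have been distributed, the trust can be verified non-interactively from each host; any password prompt means a step was missed (a minimal sketch):
# BatchMode makes ssh fail instead of prompting for a password
for h in rac1 rac2 rac3 rac4; do ssh -o BatchMode=yes "$h" hostname; done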
1.3 create a monitoring account
Next, we create a database account for MHA monitoring.
On the master (192.168.2.187):
mysql> grant all privileges on *.* to 'monitor'@'192.168.2.187' identified by '123456';
mysql> grant all privileges on *.* to 'monitor'@'192.168.2.188' identified by '123456';
mysql> grant all privileges on *.* to 'monitor'@'192.168.2.222' identified by '123456';
mysql> grant all privileges on *.* to 'monitor'@'192.168.2.223' identified by '123456';
mysql> grant all privileges on *.* to 'monitor'@'127.0.0.1' identified by '123456';
mysql> flush privileges;
mysql> select host,user from mysql.user;
+---------------+---------------+
| host          | user          |
+---------------+---------------+
| 127.0.0.1     | monitor       |
| 192.168.2.187 | monitor       |
| 192.168.2.187 | repl          |
| 192.168.2.188 | monitor       |
| 192.168.2.188 | repl          |
| 192.168.2.222 | monitor       |
| 192.168.2.223 | monitor       |
| 192.168.2.223 | repl          |
| localhost     | mysql.session |
| localhost     | mysql.sys     |
| localhost     | root          |
+---------------+---------------+
11 rows in set (0.00 sec)
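As a sanity check, the monitor account should now be reachable from the management node (a minimal sketch using the credentials granted above):
# Run on rac4; each command should print 1 rather than an access error
for h in 192.168.2.187 192.168.2.188 192.168.2.223; do
  mysql -h "$h" -u monitor -p123456 -e "select 1"
done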
1.4 turn off the firewall
If the firewall is enabled, port 3306 must be opened for communication between the servers.
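On CentOS 7 this typically means either opening the port in firewalld or disabling the service entirely (a sketch; firewalld is an assumption about the platform):
# Option 1: open MySQL's port permanently
firewall-cmd --permanent --add-port=3306/tcp
firewall-cmd --reload
# Option 2: turn the firewall off, matching the section title
systemctl stop firewalld
systemctl disable firewalld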
1.5 turn off automatic cleaning of relay logs
By default, a slave deletes its relay logs automatically once the SQL thread has applied them. In an MHA scenario, however, recovering a lagging slave may depend on the relay logs of another slave, so we disable automatic deletion and clean up on a schedule instead.
The cleanup uses the purge_relay_logs script, explained in section 5 below.
On all three database servers (master and slaves):
relay_log_purge=0
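relay_log_purge is a dynamic variable, so it can be applied without a restart (a minimal sketch; add it to my.cnf as well so it survives a restart):
# Run on all three database servers; takes effect immediately
mysql -u monitor -p123456 -e "set global relay_log_purge=0"
mysql -u monitor -p123456 -e "show variables like 'relay_log_purge'"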
1.6 enable read-only on the slaves
The slaves need to be set to read-only mode.
On both slaves:
read_only=1
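read_only is likewise dynamic; note that it does not restrict users with the SUPER privilege (a minimal sketch, run on 188 and 223):
# Takes effect immediately on the slave it is run on
mysql -u monitor -p123456 -e "set global read_only=1"
mysql -u monitor -p123456 -e "select @@read_only"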
1.7 master node binding VIP
Initially, you need to bind the VIP to the master node manually.
[root@rac1 mysql]# ifconfig | grep 192
    inet 192.168.2.187  netmask 255.255.255.0  broadcast 192.168.2.255
    inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
[root@rac1 mysql]# /sbin/ifconfig enp0s3:2 192.168.2.189 netmask 255.255.255.0 up
[root@rac1 mysql]# ifconfig | grep 192
    inet 192.168.2.187  netmask 255.255.255.0  broadcast 192.168.2.255
    inet 192.168.2.189  netmask 255.255.255.0  broadcast 192.168.2.255
    inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
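To confirm the VIP is live on the network, check it from any other node (a minimal check):
# The VIP should answer even before MHA is started
ping -c 2 192.168.2.189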
1.8 prepare a custom script
Here we download the scripts from the network disk, place them under /etc/mha/script, and give them executable permission.
Link: https://pan.baidu.com/s/18H12bTIYHi2H0dXs2DI7NA extraction code: mwkq
Management node: 192.168.2.222 (rac4)
[root@rac4 ~]# mkdir /etc/mha/script -p
[root@rac4 ~]# cp /package/mysql/MHA/script/* /etc/mha/script/
[root@rac4 ~]# ll -rth /etc/mha/script/
total 24K
-rw-r--r-- 1 root root 2.4K Mar 14 19:56 send_report
-rw-r--r-- 1 root root  11K Mar 14 19:56 master_ip_online_change
-rw-r--r-- 1 root root 4.3K Mar 14 19:56 master_ip_failover
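Note that the listing above still shows mode 644; the executable permission mentioned at the start of this section has to be added explicitly:
[root@rac4 ~]# chmod +x /etc/mha/script/*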
Next, let's modify the relevant files.
----- master_ip_failover -----
Only the following lines in this file need to be modified:
my $vip = '192.168.2.189';
my $key = "0";
my $ssh_start_vip = "/sbin/ifconfig enp0s3:$key $vip/24";
my $ssh_stop_vip = "/sbin/ifconfig enp0s3:$key down";
my $ssh_send_garp = "/sbin/arping -U $vip -I enp0s3 -c 1";
----- master_ip_online_change -----
The same lines need to be modified in this file:
my $vip = '192.168.2.189';
my $key = "0";
my $ssh_start_vip = "/sbin/ifconfig enp0s3:$key $vip/24";
my $ssh_stop_vip = "/sbin/ifconfig enp0s3:$key down";
my $ssh_send_garp = "/sbin/arping -U $vip -I enp0s3 -c 1";
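What these variables drive is, in essence, the following sequence. The sketch below illustrates the mechanism in shell, with rac1/rac2 standing in as example old/new masters; the real scripts are Perl and are invoked by MHA itself:
# 1. On the old master: release the VIP (ignore errors if it is already down)
ssh root@rac1 "/sbin/ifconfig enp0s3:0 down" || true
# 2. On the new master: bind the VIP
ssh root@rac2 "/sbin/ifconfig enp0s3:0 192.168.2.189/24 up"
# 3. Send a gratuitous ARP so switches and clients refresh their ARP caches
ssh root@rac2 "/sbin/arping -U 192.168.2.189 -I enp0s3 -c 1"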
----- send_report -----
The mail configuration in this file needs to be modified; this is covered later in the series.
2. MySQL MHA deployment guide
2.1 management node configuration
[root@rac4 script]# vim /etc/mha/mha.conf
[server default]
# mysql user and password
user=monitor
password=123456
ssh_user=root
# working directory on the manager
manager_workdir=/etc/mha/manager
manager_log=/etc/mha/manager/mha.log
ping_interval=1
# working directory on MySQL servers
remote_workdir=/etc/mha/node
# replication user
repl_user=repl
repl_password=rpl
master_binlog_dir=/datalog/mysql/binlog
## customized script
master_ip_failover_script=/etc/mha/script/master_ip_failover
master_ip_online_change_script=/etc/mha/script/master_ip_online_change
secondary_check_script=/usr/bin/masterha_secondary_check -s 192.168.2.188 -s 192.168.2.223 --user=root --master_host=192.168.2.187 --master_ip=192.168.2.187 --master_port=3306
[server1]
hostname=rac1
candidate_master=1
[server2]
hostname=rac2
candidate_master=1
[server3]
hostname=rac3
no_master=1
Let's explain the meaning of each parameter:
manager_workdir: working directory of the management node; state files are generated here while MHA is running
manager_log: log file written by the management node
ping_interval: interval at which the management node checks the status of the master; the default is 3s
remote_workdir: working directory on each MHA node; created automatically if it does not exist, and MHA Node exits abnormally if it cannot be created
master_binlog_dir: location of the master's binary logs, so the logs can still be found if the management node cannot connect to the master
master_ip_failover_script: script that moves the VIP during a failover
master_ip_online_change_script: script that moves the VIP during a manual (online) switchover
secondary_check_script: if the management node cannot reach the master, this script checks the master's status from the slaves
candidate_master: this node is promoted to master first if the current master fails
no_master: this node will never become the master
2.2 Test connection
2.2.1 Test whether SSH is set up correctly
Management node
[root@rac4 script]# masterha_check_ssh --conf=/etc/mha/mha.conf
Sat Mar 14 20:10:42 2020 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Sat Mar 14 20:10:42 2020 - [info] Reading application default configuration from /etc/mha/mha.conf..
Sat Mar 14 20:10:42 2020 - [info] Reading server configuration from /etc/mha/mha.conf..
Sat Mar 14 20:10:42 2020 - [info] Starting SSH connection tests..
Sat Mar 14 20:10:44 2020 - [debug]
Sat Mar 14 20:10:42 2020 - [debug]  Connecting via SSH from root@rac1(192.168.2.187) to root@rac2(192.168.2.188)..
Sat Mar 14 20:10:43 2020 - [debug]   ok.
Sat Mar 14 20:10:43 2020 - [debug]  Connecting via SSH from root@rac1(192.168.2.187) to root@rac3(192.168.2.223)..
Sat Mar 14 20:10:44 2020 - [debug]   ok.
Sat Mar 14 20:10:45 2020 - [debug]
Sat Mar 14 20:10:43 2020 - [debug]  Connecting via SSH from root@rac3(192.168.2.223) to root@rac1(192.168.2.187)..
Sat Mar 14 20:10:44 2020 - [debug]   ok.
Sat Mar 14 20:10:44 2020 - [debug]  Connecting via SSH from root@rac3(192.168.2.223) to root@rac2(192.168.2.188)..
Sat Mar 14 20:10:45 2020 - [debug]   ok.
Sat Mar 14 20:10:45 2020 - [debug]
Sat Mar 14 20:10:43 2020 - [debug]  Connecting via SSH from root@rac2(192.168.2.188) to root@rac1(192.168.2.187)..
Sat Mar 14 20:10:44 2020 - [debug]   ok.
Sat Mar 14 20:10:44 2020 - [debug]  Connecting via SSH from root@rac2(192.168.2.188) to root@rac3(192.168.2.223)..
Sat Mar 14 20:10:44 2020 - [debug]   ok.
Sat Mar 14 20:10:45 2020 - [info] All SSH connection tests passed successfully.
If the test passes, "All SSH connection tests passed successfully." is printed at the end.
2.2.2 Test whether replication is normal
Management Node 222 (rac4)
[root@rac4 script]# masterha_check_repl --conf=/etc/mha/mha.conf
Sat Mar 14 20:14:42 2020 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Sat Mar 14 20:14:42 2020 - [info] Reading application default configuration from /etc/mha/mha.conf..
Sat Mar 14 20:14:42 2020 - [info] Reading server configuration from /etc/mha/mha.conf..
Sat Mar 14 20:14:42 2020 - [info] MHA::MasterMonitor version 0.56.
Sat Mar 14 20:14:43 2020 - [info] GTID failover mode = 1
Sat Mar 14 20:14:43 2020 - [info] Dead Servers:
Sat Mar 14 20:14:43 2020 - [info] Alive Servers:
Sat Mar 14 20:14:43 2020 - [info]   rac1(192.168.2.187:3306)
Sat Mar 14 20:14:43 2020 - [info]   rac2(192.168.2.188:3306)
Sat Mar 14 20:14:43 2020 - [info]   rac3(192.168.2.223:3306)
Sat Mar 14 20:14:43 2020 - [info] Alive Slaves:
Sat Mar 14 20:14:43 2020 - [info]   rac2(192.168.2.188:3306)  Version=5.7.28-log (oldest major version between slaves) log-bin:enabled
Sat Mar 14 20:14:43 2020 - [info]     GTID ON
Sat Mar 14 20:14:43 2020 - [info]     Replicating from 192.168.2.187(192.168.2.187:3306)
Sat Mar 14 20:14:43 2020 - [info]     Primary candidate for the new Master (candidate_master is set)
Sat Mar 14 20:14:43 2020 - [info]   rac3(192.168.2.223:3306)  Version=5.7.28-log (oldest major version between slaves) log-bin:enabled
Sat Mar 14 20:14:43 2020 - [info]     GTID ON
Sat Mar 14 20:14:43 2020 - [info]     Replicating from 192.168.2.187(192.168.2.187:3306)
Sat Mar 14 20:14:43 2020 - [info]     Not candidate for the new Master (no_master is set)
Sat Mar 14 20:14:43 2020 - [info] Current Alive Master: rac1(192.168.2.187:3306)
Sat Mar 14 20:14:43 2020 - [info] Checking slave configurations..
Sat Mar 14 20:14:43 2020 - [info] Checking replication filtering settings..
Sat Mar 14 20:14:43 2020 - [info]  binlog_do_db= , binlog_ignore_db=
Sat Mar 14 20:14:43 2020 - [info]  Replication filtering check ok.
Sat Mar 14 20:14:43 2020 - [info] GTID (with auto-pos) is supported. Skipping all SSH and Node package checking.
Sat Mar 14 20:14:43 2020 - [info] Checking SSH publickey authentication settings on the current master..
Sat Mar 14 20:14:43 2020 - [info] HealthCheck: SSH to rac1 is reachable.
Sat Mar 14 20:14:43 2020 - [info]
rac1(192.168.2.187:3306) (current master)
 +--rac2(192.168.2.188:3306)
 +--rac3(192.168.2.223:3306)
Sat Mar 14 20:14:43 2020 - [info] Checking replication health on rac2..
Sat Mar 14 20:14:43 2020 - [info]  ok.
Sat Mar 14 20:14:43 2020 - [info] Checking replication health on rac3..
Sat Mar 14 20:14:43 2020 - [info]  ok.
Sat Mar 14 20:14:43 2020 - [info] Checking master_ip_failover_script status:
Sat Mar 14 20:14:43 2020 - [info]   /etc/mha/script/master_ip_failover --command=status --ssh_user=root --orig_master_host=rac1 --orig_master_ip=192.168.2.187 --orig_master_port=3306
Sat Mar 14 20:14:43 2020 - [info]  OK.
Sat Mar 14 20:14:43 2020 - [warning] shutdown_script is not defined.
Sat Mar 14 20:14:43 2020 - [info] Got exit code 0 (Not master dead).
MySQL Replication Health is OK.
If the test passes, "MySQL Replication Health is OK." is printed at the end.
Through the output, we can see the relevant architecture information.
3. Start MHA
First, we can check whether MHA has been started.
Management node
[root@rac4 script]# masterha_check_status --conf=/etc/mha/mha.conf
mha is stopped(2:NOT_RUNNING).
We start MHA by executing the following command on the manager node.
Management node
[root@rac4 script]# nohup masterha_manager --conf=/etc/mha/mha.conf --remove_dead_master_conf &
--remove_dead_master_conf means that when a failover occurs, the old master's entry is removed from the configuration file
[1] 4360
[root@rac4 script]# nohup: ignoring input and appending output to 'nohup.out'
[root@rac4 script]# masterha_check_status --conf=/etc/mha/mha.conf
mha (pid:4360) is running(0:PING_OK), master:rac1
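While the manager runs in the background, its activity can be followed through the log file configured in manager_log:
[root@rac4 script]# tail -f /etc/mha/manager/mha.log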
4. Stop MHA
[root@rac4 script]# masterha_stop --conf=/etc/mha/mha.conf
Stopped mha successfully.
[1]+  Exit 1                  nohup masterha_manager --conf=/etc/mha/mha.conf --remove_dead_master_conf
[root@rac4 script]# masterha_check_status --conf=/etc/mha/mha.conf
mha is stopped(2:NOT_RUNNING).
5. Set up a relay log cleanup schedule
As mentioned earlier, automatic relay log purging has been disabled, so we need a scheduled task on all three database servers (master and slaves) to clean up the relay logs.
First, add the MySQL binaries to the root user's environment variables.
On all three database servers:
export MYSQL_HOME=/usr/local/mysql
export PATH=$HOME/bin:$MYSQL_HOME/bin:$PATH
0 4 * * * /usr/bin/purge_relay_logs --user=monitor --password=123456 --disable_relay_log_purge --workdir=/tmp/ >> /tmp/purge_relay_logs.log 2>&1
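Because a lagging slave may need another slave's relay logs during recovery, it is sensible to stagger the purge times across the three servers. A sketch of installing the job on one server with a 20-minute offset (the offset is an arbitrary example):
# Append the job to root's crontab, then verify it
(crontab -l 2>/dev/null; echo '20 4 * * * /usr/bin/purge_relay_logs --user=monitor --password=123456 --disable_relay_log_purge --workdir=/tmp/ >> /tmp/purge_relay_logs.log 2>&1') | crontab -
crontab -l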