Foreword:
A group of servers running the same service is called a cluster.
A centralized cluster has distinct master and slave roles.
MHA is used to solve the problem of the master going down in such a cluster.
The MMM cluster architecture, by contrast, uses two master servers.
Overview of MHA
MHA was developed by youshimaton (Yoshinori Matsunobu) of the Japanese company DeNA (now at Facebook). It is an excellent piece of high-availability software for failover and master promotion in MySQL high-availability environments.
MHA is composed of two parts: MHA Manager (the management node) and MHA Node (the data node).
MHA features: during automatic failover, MHA tries to save the binary logs from the downed master, which keeps data loss to a minimum. Combined with the semi-synchronous replication introduced in MySQL 5.5, the risk of data loss can be reduced even further.
Case topology diagram
Experimental environment
CentOS 7
MHA version 0.57
MySQL version 5.6.36
CMake version 2.8.6
MHA:192.168.191.132
Master:192.168.191.160
Slave1:192.168.191.161
Slave2:192.168.191.162
Build MySQL master-slave replication environment
1. Modify the hostname to make it easier to distinguish between servers
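For example, on CentOS 7 this can be done with hostnamectl, one command per server, matching the prompts used in the rest of this article:
[root@localhost ~]# hostnamectl set-hostname master     # run on 192.168.191.160
[root@localhost ~]# hostnamectl set-hostname slave1     # run on 192.168.191.161
[root@localhost ~]# hostnamectl set-hostname slave2     # run on 192.168.191.162
[root@localhost ~]# hostnamectl set-hostname manager    # run on 192.168.191.132
[root@localhost ~]# su    # open a new shell so the prompt shows the new hostname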
The installation and configuration of the three mysql servers are the same, so take master as an example
Install the environment on which compilation depends
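The original does not list the packages; a typical set of build dependencies for compiling CMake and MySQL 5.6 from source on CentOS 7 looks roughly like this (the exact package list is an assumption, adjust as needed):
[root@master ~]# yum install -y gcc gcc-c++ make ncurses ncurses-devel bison perl perl-devel zlib-devel libaio-devel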
Install the cmake compiler
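A minimal sketch of building CMake 2.8.6 from source; the tarball path /abc/ is an assumption, chosen to match the directory used later for the MHA packages:
[root@master ~]# tar zxvf /abc/cmake-2.8.6.tar.gz
[root@master ~]# cd cmake-2.8.6/
[root@master cmake-2.8.6]# ./configure
[root@master cmake-2.8.6]# gmake && gmake install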
Install the mysql database
Then run make && make install.
Optimize the mysql database path, create mysql users, initialize the database
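Since the original does not show the exact commands, here is a minimal sketch of a typical source install of MySQL 5.6.36 covering the three steps above. The install prefix /usr/local/mysql and the data directory /home/mysql are assumptions, chosen to match the master_binlog_dir used in the MHA configuration later:
[root@master ~]# tar zxvf /abc/mysql-5.6.36.tar.gz
[root@master ~]# cd mysql-5.6.36/
[root@master mysql-5.6.36]# cmake -DCMAKE_INSTALL_PREFIX=/usr/local/mysql -DDEFAULT_CHARSET=utf8 -DDEFAULT_COLLATION=utf8_general_ci -DWITH_EXTRA_CHARSETS=all -DSYSCONFDIR=/etc
[root@master mysql-5.6.36]# make && make install
[root@master mysql-5.6.36]# useradd -M -s /sbin/nologin mysql           # create the mysql user
[root@master mysql-5.6.36]# chown -R mysql.mysql /usr/local/mysql
[root@master mysql-5.6.36]# echo 'PATH=$PATH:/usr/local/mysql/bin' >> /etc/profile && source /etc/profile
[root@master mysql-5.6.36]# /usr/local/mysql/scripts/mysql_install_db --user=mysql --basedir=/usr/local/mysql --datadir=/home/mysql   # initialize the database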
Edit the configuration file with vim /etc/my.cnf:
Modify the master server master's main configuration file (a minimal sketch of all three files follows below)
Modify the slave server slave1's main configuration file
Modify the slave server slave2's main configuration file
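The original does not reproduce the configuration files; the sketch below shows only the replication-related lines. The server-id values and relay-log names are assumptions; the binlog name master-bin matches the change master statement used later, and slave1 also writes binlogs because it will be the candidate master:
# master (/etc/my.cnf)
[mysqld]
server-id = 1
log_bin = master-bin
log-slave-updates = true

# slave1 (/etc/my.cnf)
[mysqld]
server-id = 2
log_bin = master-bin
relay-log = relay-log-bin
relay-log-index = slave-relay-bin.index

# slave2 (/etc/my.cnf)
[mysqld]
server-id = 3
relay-log = relay-log-bin
relay-log-index = slave-relay-bin.index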
Create two soft links on both the master and slave servers so the client tools are easy to run.
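A common form of these two links (assuming the install prefix /usr/local/mysql) puts mysql and mysqlbinlog in a default PATH location, since MHA invokes them over SSH with a non-login shell:
[root@master ~]# ln -s /usr/local/mysql/bin/mysql /usr/sbin/
[root@master ~]# ln -s /usr/local/mysql/bin/mysqlbinlog /usr/sbin/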
Start MySQL on the master and slave servers.
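For a source install, one simple way to start the service (an assumption; a proper init script could also be installed) is:
[root@master ~]# /usr/local/mysql/bin/mysqld_safe --user=mysql &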
Create the master-slave synchronization user myslave:
mysql> grant replication slave on *.* to 'myslave'@'192.168.191.%' identified by '123';
Query OK, 0 rows affected (0.00 sec)
Grant the mha user privileges to operate on all databases
mysql> grant all privileges on *.* to 'mha'@'192.168.191.%' identified by 'manager';
Query OK, 0 rows affected (0.00 sec)
mysql> grant all privileges on *.* to 'mha'@'master' identified by 'manager';
Query OK, 0 rows affected (0.00 sec)
mysql> grant all privileges on *.* to 'mha'@'slave1' identified by 'manager';
Query OK, 0 rows affected (0.00 sec)
mysql> grant all privileges on *.* to 'mha'@'slave2' identified by 'manager';
Query OK, 0 rows affected (0.00 sec)
# refresh privileges
mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)
Check the binary log file name and position on the MySQL master server; do not write anything to the database at this point, otherwise the position will change.
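For example (the file name and position shown here are the ones used in the change master statement below):
mysql> show master status;
+-------------------+----------+--------------+------------------+
| File              | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+-------------------+----------+--------------+------------------+
| master-bin.000002 |    12215 |              |                  |
+-------------------+----------+--------------+------------------+
1 row in set (0.00 sec)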
Perform synchronization on the slave server
mysql> change master to master_host='192.168.191.160', master_user='myslave', master_password='123', master_log_file='master-bin.000002', master_log_pos=12215;
Query OK, 0 rows affected, 2 warnings (0.00 sec)
mysql> start slave;
Query OK, 0 rows affected (0.01 sec)
mysql> show slave status\G
...(part of the output omitted)...
Slave_IO_Running: Yes    # if this shows Connecting, check whether the firewall on the master server is turned off
Slave_SQL_Running: Yes
...(part of the output omitted)...
Set two slave servers to read-only mode
mysql> set global read_only=1;
Query OK, 0 rows affected (0.00 sec)
Install Node components (all servers, version 0.57)
Install the perl environment
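The original does not list the packages; a commonly used set for MHA 0.57 on CentOS 7 is roughly the following (install epel-release first; the manager node needs the full list, while the data nodes mainly need perl-DBD-MySQL):
[root@manager ~]# yum install -y epel-release
[root@manager ~]# yum install -y perl-DBD-MySQL perl-Config-Tiny perl-Log-Dispatch perl-Parallel-ForkManager perl-ExtUtils-CBuilder perl-ExtUtils-MakeMaker perl-CPAN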
The MHA package differs between operating system versions; on CentOS 7.4 you must use version 0.57. The node component must be installed on all servers first, and the manager component is installed last on the MHA manager node, because manager depends on node.
Install Node components
[root@master ~]# tar zxvf /abc/mha/mha4mysql-node-0.57.tar.gz
[root@master ~]# cd mha4mysql-node-0.57/
[root@master mha4mysql-node-0.57]# perl Makefile.PL
[root@master mha4mysql-node-0.57]# make
[root@master mha4mysql-node-0.57]# make install
After node is installed, several script tools are generated under /usr/local/bin/ (these tools are normally triggered by MHA Manager scripts and require no manual operation):
save_binary_logs: save and copy the master's binary logs
apply_diff_relay_logs: identify differential relay log events and apply them to the other slaves
filter_mysqlbinlog: remove unnecessary ROLLBACK events (MHA no longer uses this tool)
purge_relay_logs: purge relay logs (does not block the SQL thread)
Install the Manager component (on the manager node, version 0.57)
[root@manager ~]# tar zxvf /abc/mha/mha4mysql-manager-0.57.tar.gz
[root@manager ~]# cd mha4mysql-manager-0.57/
[root@manager mha4mysql-manager-0.57]# perl Makefile.PL
[root@manager mha4mysql-manager-0.57]# make
[root@manager mha4mysql-manager-0.57]# make install
Several script tools are generated under /usr/local/bin/ after manager is installed:
masterha_check_ssh: check MHA's SSH configuration
masterha_check_repl: check the MySQL replication status
masterha_manager: the script that starts manager
masterha_check_status: check the current MHA running status
masterha_master_monitor: check whether master is down
masterha_master_switch: control failover (automatic or manual)
masterha_conf_host: add or remove configured server information
masterha_stop: shut down manager
Configure password-free authentication
Tools: ssh-keygen, ssh-copy-id
(1) configure password-free authentication to all database nodes on manager
[root@manager ~]# ssh-keygen -t rsa    # press Enter at each prompt
[root@manager ~]# ssh-copy-id 192.168.191.160
[root@manager ~]# ssh-copy-id 192.168.191.161
[root@manager ~]# ssh-copy-id 192.168.191.162
(2) configure password-free authentication to database nodes slave1 and slave2 on master
[root@master ~]# ssh-keygen -t rsa    # press Enter at each prompt
[root@master ~]# ssh-copy-id 192.168.191.161
[root@master ~]# ssh-copy-id 192.168.191.162
(3) configure password-free authentication to database nodes master and slave2 on slave1
[root@slave1 ~]# ssh-keygen -t rsa    # press Enter at each prompt
[root@slave1 ~]# ssh-copy-id 192.168.191.160
[root@slave1 ~]# ssh-copy-id 192.168.191.162
(4) configure password-free authentication to database nodes master and slave1 on slave2
[root@slave2 ~]# ssh-keygen -t rsa    # press Enter at each prompt
[root@slave2 ~]# ssh-copy-id 192.168.191.160
[root@slave2 ~]# ssh-copy-id 192.168.191.161
Configure MHA (on the manager node)
(1) Copy the related sample scripts to the /usr/local/bin directory
[root@manager ~]# cp -ra /root/mha4mysql-manager-0.57/samples/scripts /usr/local/bin
After copying, there will be four executable files:
master_ip_failover: the script that manages the VIP during automatic switching
master_ip_online_change: manages the VIP during online (manual) switching
power_manager: the script that shuts down the host after a failure
send_report: the script that sends an alarm after a failover
(2) Copy the VIP management script used for automatic switching into the /usr/local/bin directory; this lab uses a script to manage the VIP.
[root@manager ~]# cp /usr/local/bin/scripts/master_ip_failover /usr/local/bin
(3) Modify the master_ip_failover script (delete the original content and rewrite it)
[root@manager ~]# vim /usr/local/bin/master_ip_failover
#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';
use Getopt::Long;

my (
    $command, $ssh_user, $orig_master_host, $orig_master_ip,
    $orig_master_port, $new_master_host, $new_master_ip, $new_master_port
);
############### added content ###############
# set the floating (virtual) IP
my $vip = '192.168.191.200';
my $brdc = '192.168.191.255';
my $ifdev = 'ens33';
my $key = '1';
my $ssh_start_vip = "/sbin/ifconfig ens33:$key $vip";
my $ssh_stop_vip = "/sbin/ifconfig ens33:$key down";
my $exit_code = 0;
#my $ssh_start_vip = "/usr/sbin/ip addr add $vip/24 brd $brdc dev $ifdev label $ifdev:$key;/usr/sbin/arping -q -A -c 1 -I $ifdev $vip;iptables -F;";
#my $ssh_stop_vip = "/usr/sbin/ip addr del $vip/24 dev $ifdev label $ifdev:$key";
##############################################

GetOptions(
    'command=s'          => \$command,
    'ssh_user=s'         => \$ssh_user,
    'orig_master_host=s' => \$orig_master_host,
    'orig_master_ip=s'   => \$orig_master_ip,
    'orig_master_port=i' => \$orig_master_port,
    'new_master_host=s'  => \$new_master_host,
    'new_master_ip=s'    => \$new_master_ip,
    'new_master_port=i'  => \$new_master_port,
);

exit &main();

sub main {
    print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";
    if ( $command eq "stop" || $command eq "stopssh" ) {
        my $exit_code = 1;
        eval {
            print "Disabling the VIP on old master: $orig_master_host \n";
            &stop_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn "Got Error: $@\n";
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "start" ) {
        my $exit_code = 10;
        eval {
            print "Enabling the VIP - $vip on the new master - $new_master_host \n";
            &start_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn $@;
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "status" ) {
        print "Checking the Status of the script.. OK \n";
        exit 0;
    }
    else {
        &usage();
        exit 1;
    }
}

sub start_vip() {
    `ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}
# A simple system call that disables the VIP on the old master
sub stop_vip() {
    `ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}

sub usage {
    print
    "Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}
Note: for the first time, you need to enable the virtual IP manually on master.
[root@master ~]# /sbin/ifconfig ens33:1 192.168.191.200/24
(4) create the MHA software directory and copy the configuration file
[root@manager ~]# mkdir /etc/masterha
[root@manager ~]# cp /root/mha4mysql-manager-0.57/samples/conf/app1.cnf /etc/masterha/
[root@manager ~]# vim /etc/masterha/app1.cnf
[server default]
# manager log
manager_log=/var/log/masterha/app1/manager.log
# manager working directory
manager_workdir=/var/log/masterha/app1
# location where master saves its binlogs; this path must match the binlog path configured on master
master_binlog_dir=/home/mysql
# the automatic failover switch script, i.e. the script above
master_ip_failover_script=/usr/local/bin/master_ip_failover
# the manual (online) switch script
master_ip_online_change_script=/usr/local/bin/master_ip_online_change
# password of the monitoring user created earlier
password=manager
ping_interval=1
remote_workdir=/tmp
# replication user password
repl_password=123
# replication user
repl_user=myslave
# secondary check script
secondary_check_script=/usr/local/bin/masterha_secondary_check -s 192.168.191.161 -s 192.168.191.162
# script to shut down the failed host (left empty here)
shutdown_script=""
# ssh login user
ssh_user=root
# monitoring user
user=mha

[server1]
hostname=192.168.191.160
port=3306

[server2]
# set as candidate master; with this parameter, this slave is promoted to master when a master-slave switch occurs
candidate_master=1
# by default MHA will not pick a slave as the new master if it lags behind master's relay logs by more than 100MB; check_repl_delay=0 removes that restriction
check_repl_delay=0
hostname=192.168.191.161
port=3306

[server3]
hostname=192.168.191.162
port=3306
Test
SSH password-free login test (on the manager node)
[root@manager ~]# masterha_check_ssh --conf=/etc/masterha/app1.cnf
...(part of the output omitted)...
# if everything is normal, the output ends with:
Sun Jan 12 19:19:11 2020 - [info] All SSH connection tests passed successfully.
Start MHA and check the MHA status
# start MHA and run it in the background
[root@manager ~]# nohup masterha_manager --conf=/etc/masterha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/masterha/app1/manager.log 2>&1 &
[1] 13525
# check the MHA status; you can see that the current master is the master node
[root@manager ~]# masterha_check_status --conf=/etc/masterha/app1.cnf
app1 (pid:13525) is running(0:PING_OK), master:192.168.191.160
--remove_dead_master_conf: when a master-slave switch occurs, the old master's IP is removed from the configuration file.
--ignore_last_failover: by default, if MHA detects consecutive outages less than 8 hours apart, it will not fail over again; this restriction avoids the ping-pong effect. This parameter tells MHA to ignore the file generated by the previous switch. By default, after a switch occurs MHA records it in a file in its working directory, i.e. the app1.failover.complete file configured above; if that file still exists the next time a switch is needed, the switch will not be triggered unless the file is deleted after the first switch. For convenience, --ignore_last_failover is set here.
Start monitoring and observe the log on manager
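For example, using the manager log path from app1.cnf:
[root@manager ~]# tail -f /var/log/masterha/app1/manager.log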
Simulate a fault
Stop the mysql service on the master:
[root@master ~]# pkill -9 mysqld
At this point, manager detects that the primary server is down and switches to the candidate (standby) master.
Check the IP address on slave1
[root@slave1 ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.191.161  netmask 255.255.255.0  broadcast 192.168.191.255
# the virtual IP has drifted to the standby master
ens33:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.191.200  netmask 255.255.255.0  broadcast 192.168.191.255
At this point, a client can still log in to the database through the virtual IP address.
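For example, from any host with a mysql client, logging in through the VIP with the mha user created earlier (any account allowed to connect remotely would work):
[root@manager ~]# mysql -h 192.168.191.200 -u mha -pmanager
mysql>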
The MHA high availability configuration and failover lab is complete.