2025-03-28 Update From: SLTechnology News&Howtos
Lao Zhang has been a bit busy these last two days, and some old friends keep asking when the blog will be updated. I may not manage a post every day, but I promise to write up practical knowledge and share it as soon as I have time.
If we want to do well in technology, we must combine theory with practice. As someone the database has chewed up more than once, let me offer some advice: just reading books and memorizing theory really won't work; when you hit a thorny problem you will still be lost. Once the theory is clear, you must do plenty of experiments.
Set yourself a goal of doing 100 to 500 experiments in 3 months. Collect the various errors you hit along the way, carefully read and analyze why each one occurred, and take good notes. Finally, pick the books back up and reread the theory you may not have understood before; I believe that second reading will pay off far more than the first.
I promised to introduce you to the MySQL high availability architecture trilogy. Today, let's start with the first installment: the most mainstream architecture for MySQL high availability, MHA.
The MHA high availability architecture is deployed on top of standard master-slave replication and is the most common and mainstream choice.
Introduction to MHA:
MHA, short for Master High Availability Manager and Tools for MySQL, is a set of management scripts written in Perl by a MySQL expert in Japan. The tool only works in a MySQL Replication environment and is designed to keep the master highly available.
MHA (Master High Availability) is a software package for automatic master failover and slave promotion. It is built on standard MySQL replication (asynchronous or semi-synchronous).
MHA components:
MHA consists of two parts:
MHA Manager (management node)
MHA Node (data node)
Interpretation of MHA deployment:
MHA Manager can be deployed on a dedicated machine to manage multiple master-slave clusters, or it can be deployed on one of the slaves. The Manager polls the node instances in the cluster; when it detects that the master has failed, it automatically promotes the slave with the most recent data to be the new master and then repoints all the other slaves to it. The entire failover process is transparent to the application.
MHA Node runs on every MySQL server (master/slave/manager). It speeds up failover with scripts that can monitor instances and parse and clean up logs.
Advantages and disadvantages of MHA:
Advantages:
1. During failover, MHA determines which slave's data is closest to the master's and switches to it, which minimizes data loss and keeps the data consistent.
2. Support for a binlog server improves binlog transfer efficiency and further reduces the risk of data loss.
3. With MySQL 5.7's enhanced semi-synchronous replication properly configured, data can be kept in sync at all times.
Disadvantages:
1. The bundled automatic failover scripts are simplistic and dated, so plan to improve them gradually later on.
2. Building an MHA architecture requires passwordless SSH trust between the Linux hosts, which is no small concern for system security.
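Advantage 3 above refers to MySQL 5.7's lossless semi-synchronous replication. A minimal my.cnf sketch could look like the following (an illustration, not from the original post; it assumes the semisync plugins that ship with MySQL, and the timeout should be tuned to your network):

```ini
# Illustrative my.cnf fragment for MySQL 5.7 lossless semi-sync; verify option
# names against your MySQL version before use.
plugin-load-add = semisync_master.so
plugin-load-add = semisync_slave.so
rpl_semi_sync_master_enabled = 1
rpl_semi_sync_slave_enabled = 1
rpl_semi_sync_master_wait_point = AFTER_SYNC   # the "lossless" behavior
rpl_semi_sync_master_timeout = 1000            # ms before falling back to async
```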
Introduction to the principle:
The purpose of MHA is to keep the master highly available in a MySQL Replication setup. Its biggest feature is that it can repair the differences in relay logs among multiple slaves, bring all the slaves to a consistent state, then promote one of them to be the new master and point the other slaves at it.
When the master fails, MHA compares the position each slave I/O thread has read in the master's binlog and selects the most up-to-date slave as the candidate master. The other slaves generate differential relay logs by comparing themselves against the candidate. MHA then applies the binlog events saved from the original master to the candidate, promotes the candidate to master, and finally applies the corresponding differential relay logs to the other slaves and starts them replicating from the new master.
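The candidate-selection step above can be sketched as a toy Python snippet (illustrative only; MHA itself is written in Perl, and the data here is hypothetical):

```python
# Toy illustration of MHA's candidate-master selection (not MHA's real code):
# pick the slave that has received the most of the dead master's binlog.

def pick_candidate(slaves):
    """slaves: dict of host -> (binlog_file_seq, binlog_pos), the position each
    slave's I/O thread had reached. Returns the host closest to the old master."""
    return max(slaves, key=lambda host: slaves[host])

# Hypothetical positions reported after the master crashed:
slaves = {
    "192.168.56.101": (3, 1540),   # mysql-bin.000003, position 1540
    "192.168.56.102": (3, 980),    # mysql-bin.000003, position 980
}
print(pick_candidate(slaves))  # the most up-to-date slave becomes the new master
```

Tuple comparison orders first by binlog file sequence, then by position within the file, which mirrors how "closest to the master" is judged.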
Introduction to the MHA toolkit features:
1. Manager tools:
# masterha_check_ssh : check the SSH configuration of MHA.
# masterha_check_repl : check MySQL replication.
# masterha_manager : start MHA.
# masterha_check_status : check the current MHA running status.
# masterha_master_monitor : monitor whether the master is down.
# masterha_master_switch : control failover (automatic or manual).
# masterha_conf_host : add or remove configured server information.
2. Node tools:
# save_binary_logs : save and copy the master's binary logs.
# apply_diff_relay_logs : identify differential relay log events and apply them to the other slaves.
# filter_mysqlbinlog : remove unnecessary ROLLBACK events (MHA no longer uses this tool).
# purge_relay_logs : purge relay logs (without blocking the SQL thread).
Now for the hands-on part:
Environment introduction:
192.168.56.100  master  (node)
192.168.56.101  slave1  (node)
192.168.56.102  slave2  (manager, node)
VIP: 192.168.56.123
Step 1: set up passwordless SSH with key-based login
Generate a key pair on the master (100):
ssh-keygen -t dsa -P '' -f id_dsa
id_dsa.pub is the public key and id_dsa is the private key. Then append the public key to an authorized_keys file; this step is required:
cat id_dsa.pub >> authorized_keys
Generate a key pair on the slave (101):
ssh-keygen -t dsa -P '' -f id_dsa
cat id_dsa.pub >> authorized_keys
Generate a key pair on the management node (102):
ssh-keygen -t dsa -P '' -f id_dsa
cat id_dsa.pub >> authorized_keys
Collect the keys on the master (100):
scp 192.168.56.101:/root/.ssh/id_dsa.pub ./id_dsa.pub.101
scp 192.168.56.102:/root/.ssh/id_dsa.pub ./id_dsa.pub.102
Merge the keys:
cat id_dsa.pub.101 >> authorized_keys
cat id_dsa.pub.102 >> authorized_keys
Distribute the merged authorized_keys from the master:
scp authorized_keys 192.168.56.101:/root/.ssh/
scp authorized_keys 192.168.56.102:/root/.ssh/
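The cat/scp merge performed above can be sketched in Python (illustrative only; the key material below is hypothetical, and the real work is done by the shell commands shown):

```python
# Sketch of merging every host's public key into one authorized_keys list,
# skipping duplicates, which is what the cat >> steps above accomplish.

def merge_keys(existing, collected):
    """existing/collected: lists of public-key lines; returns a merged,
    de-duplicated list preserving order."""
    merged = list(existing)
    for key in collected:
        if key not in merged:
            merged.append(key)
    return merged

keys_100 = ["ssh-dss AAAA...100 root@node1"]   # hypothetical key material
keys_101 = ["ssh-dss AAAA...101 root@node2"]
keys_102 = ["ssh-dss AAAA...102 root@node3"]

authorized = merge_keys(keys_100, keys_101 + keys_102)
print(len(authorized))  # every node ends up trusting every other node
```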
On all three servers, edit /etc/hosts and add the three hostnames:
vim /etc/hosts
192.168.56.100 node1
192.168.56.101 node2
192.168.56.102 node3
Verify hostname-based, key-authenticated login. On each of the three machines:
On 192.168.56.100, run: ssh node2 ; ssh node3
On 192.168.56.101, run: ssh node1 ; ssh node3
On 192.168.56.102, run: ssh node1 ; ssh node2
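The verification above is an all-pairs check: each of the 3 hosts must reach the other 2, for 6 tests in total (exactly what masterha_check_ssh automates later). A small illustrative sketch:

```python
# Enumerate the all-pairs SSH checks performed manually above.
from itertools import permutations

hosts = ["node1", "node2", "node3"]
checks = [f"ssh {src} -> {dst}" for src, dst in permutations(hosts, 2)]
for check in checks:
    print(check)
print(len(checks))  # n*(n-1) = 6 pairwise tests for 3 hosts
```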
Build the master-slave environment (one master, two slaves; MySQL 5.7 is used here)
Execute on all nodes.
Create the replication account:
grant replication slave on *.* to 'repl'@'192.168.56.%' identified by 'repl';
flush privileges;
Create the management account:
grant all privileges on *.* to 'zs'@'192.168.56.%' identified by '123456';
flush privileges;
Install the Node (data node) package on the master (100):
First, install the Perl environment MySQL depends on:
yum install perl-DBD-MySQL
Unpack the Node package:
tar -zxvf mha4mysql-node-0.57.tar.gz
Install the perl-CPAN packages and build:
cd mha4mysql-node-0.57
yum -y install perl-CPAN*
perl Makefile.PL
make && make install
Install the Node package on the slave (101):
The same installation steps as on the master.
Install the Manager package on the manager node (102):
First, install the dependency packages the environment requires:
yum install -y perl-DBD-MySQL*
rpm -ivh perl-Params-Validate-0.92-3.el6.x86_64.rpm
rpm -ivh perl-Config-Tiny-2.12-1.el6.rfx.noarch.rpm
rpm -ivh perl-Log-Dispatch-2.26-1.el6.rf.noarch.rpm
rpm -ivh perl-Parallel-ForkManager-0.7.5-2.2.el6.rf.noarch.rpm
Download address: https://centos.pkgs.org/
Then install the Node package here as well:
tar -zxvf mha4mysql-node-0.57.tar.gz
cd mha4mysql-node-0.57
yum -y install perl-CPAN*
perl Makefile.PL
make && make install
Finally, install the Manager package:
tar -zxvf mha4mysql-manager-0.57.tar.gz
cd mha4mysql-manager-0.57
perl Makefile.PL
make && make install
That completes the basic environment installation.
Next, configure MHA on the management node.
Create the MHA home directory and edit the startup configuration file:
mkdir -p /usr/local/mha
mkdir -p /etc/mha
cd /etc/mha/
Edit the MHA configuration file:
vim /etc/mha/mha.conf

[server default]
user=zs
password=123456
manager_workdir=/usr/local/mha
manager_log=/usr/local/mha/manager.log
remote_workdir=/usr/local/mha
ssh_user=root
repl_user=repl
repl_password=repl
ping_interval=1
master_ip_failover_script=/usr/local/scripts/master_ip_failover
master_ip_online_change_script=/usr/local/scripts/master_ip_online_change

[server1]
hostname=192.168.56.100
ssh_port=22
master_binlog_dir=/data/mysql
candidate_master=1
port=3306

[server2]
hostname=192.168.56.101
ssh_port=22
master_binlog_dir=/data/mysql
candidate_master=1
port=3306

[server3]
hostname=192.168.56.102
ssh_port=22
master_binlog_dir=/data/mysql
no_master=1
port=3306
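Before handing the file to masterha_manager, it is cheap to sanity-check it. MHA uses an INI-style layout, so Python's configparser can parse the same shape (an illustrative check, not an MHA tool; the abridged config below mirrors the file above):

```python
# Sanity-check an MHA-style INI config: every server section needs a hostname.
from configparser import ConfigParser

MHA_CONF = """
[server default]
user=zs
password=123456
manager_workdir=/usr/local/mha
repl_user=repl
ping_interval=1

[server1]
hostname=192.168.56.100
candidate_master=1

[server3]
hostname=192.168.56.102
no_master=1
"""

cp = ConfigParser()
cp.read_string(MHA_CONF)
for section in cp.sections():
    if section != "server default":
        assert cp.has_option(section, "hostname"), f"{section} missing hostname"
print(cp.sections())
```

masterha_check_repl performs the authoritative validation later; this only catches obvious typos early.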
Create the directory for the failover and online-change scripts:
mkdir -p /usr/local/scripts
Edit the failover script:
vim /usr/local/scripts/master_ip_failover

#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';
use Getopt::Long;

my (
    $command,          $ssh_user,       $orig_master_host,
    $orig_master_ip,   $orig_master_port,
    $new_master_host,  $new_master_ip,  $new_master_port
);

my $vip = '192.168.56.123/24';
my $key = '0';
my $ssh_start_vip = "/sbin/ifconfig eth0:$key $vip";
my $ssh_stop_vip  = "/sbin/ifconfig eth0:$key down";

GetOptions(
    'command=s'          => \$command,
    'ssh_user=s'         => \$ssh_user,
    'orig_master_host=s' => \$orig_master_host,
    'orig_master_ip=s'   => \$orig_master_ip,
    'orig_master_port=i' => \$orig_master_port,
    'new_master_host=s'  => \$new_master_host,
    'new_master_ip=s'    => \$new_master_ip,
    'new_master_port=i'  => \$new_master_port,
);

exit &main();

sub main {
    print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";
    if ( $command eq "stop" || $command eq "stopssh" ) {
        my $exit_code = 1;
        eval {
            print "Disabling the VIP on old master: $orig_master_host \n";
            &stop_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn "Got Error: $@\n";
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "start" ) {
        my $exit_code = 10;
        eval {
            print "Enabling the VIP - $vip on the new master - $new_master_host \n";
            &start_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn $@;
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "status" ) {
        print "Checking the Status of the script.. OK \n";
        exit 0;
    }
    else {
        &usage();
        exit 1;
    }
}

sub start_vip() {
    `ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}

sub stop_vip() {
    return 0 unless ($ssh_user);
    `ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}

sub usage {
    print
"Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}
Edit the online_change script (note: the VIP here must match the one used above, 192.168.56.123/24):
cd /usr/local/scripts/
vim master_ip_online_change

#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';
use Getopt::Long;

my $vip = '192.168.56.123/24';    # virtual IP
my $key = "0";
my $ssh_start_vip = "/sbin/ifconfig eth0:$key $vip";
my $ssh_stop_vip  = "/sbin/ifconfig eth0:$key down";
my $exit_code = 0;

my (
    $command,              $orig_master_is_new_slave, $orig_master_host,
    $orig_master_ip,       $orig_master_port,         $orig_master_user,
    $orig_master_password, $orig_master_ssh_user,     $new_master_host,
    $new_master_ip,        $new_master_port,          $new_master_user,
    $new_master_password,  $new_master_ssh_user,
);

GetOptions(
    'command=s'                => \$command,
    'orig_master_is_new_slave' => \$orig_master_is_new_slave,
    'orig_master_host=s'       => \$orig_master_host,
    'orig_master_ip=s'         => \$orig_master_ip,
    'orig_master_port=i'       => \$orig_master_port,
    'orig_master_user=s'       => \$orig_master_user,
    'orig_master_password=s'   => \$orig_master_password,
    'orig_master_ssh_user=s'   => \$orig_master_ssh_user,
    'new_master_host=s'        => \$new_master_host,
    'new_master_ip=s'          => \$new_master_ip,
    'new_master_port=i'        => \$new_master_port,
    'new_master_user=s'        => \$new_master_user,
    'new_master_password=s'    => \$new_master_password,
    'new_master_ssh_user=s'    => \$new_master_ssh_user,
);

exit &main();

sub main {
    if ( $command eq "stop" || $command eq "stopssh" ) {
        # $orig_master_host, $orig_master_ip, $orig_master_port are passed.
        # If you manage the master ip address at a global catalog database,
        # invalidate orig_master_ip here.
        my $exit_code = 1;
        eval {
            print "\n\n\n***************************************\n";
            print "Disabling the VIP - $vip on old master: $orig_master_host\n";
            print "***************************************\n\n";
            &stop_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn "Got Error: $@\n";
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "start" ) {
        # all arguments are passed.
        # If you manage the master ip address at a global catalog database,
        # activate new_master_ip here.
        # You can also grant write access (create user, set read_only=0, etc.) here.
        my $exit_code = 10;
        eval {
            print "\n\n\n***************************************\n";
            print "Enabling the VIP - $vip on new master: $new_master_host\n";
            print "***************************************\n\n";
            &start_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn $@;
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "status" ) {
        print "Checking the Status of the script.. OK \n";
        exit 0;
    }
    else {
        &usage();
        exit 1;
    }
}

# A simple system call that enables the VIP on the new master
sub start_vip() {
    `ssh $new_master_ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}

# A simple system call that disables the VIP on the old master
sub stop_vip() {
    `ssh $orig_master_ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}

sub usage {
    print
"Usage: master_ip_online_change --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}
After creating the two scripts, remember to make them executable:
chmod +x /usr/local/scripts/master_ip_failover /usr/local/scripts/master_ip_online_change
Use the MHA tool to test SSH connectivity.
Install the required package:
yum -y install perl-Time-HiRes
Run the check:
/usr/local/bin/masterha_check_ssh --conf=/etc/mha/mha.conf
If every connection test reports OK, the SSH check has passed.
The results (abridged):

Sun Jul 23 09:39:09 2017 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Sun Jul 23 09:39:09 2017 - [info] Reading application default configuration from /etc/mha/mha.conf..
Sun Jul 23 09:39:09 2017 - [info] Reading server configuration from /etc/mha/mha.conf..
Sun Jul 23 09:39:09 2017 - [info] Starting SSH connection tests..
...
Sun Jul 23 09:39:11 2017 - [info] All SSH connection tests passed successfully.

Then test replication:

masterha_check_repl --conf=/etc/mha/mha.conf

Sun Jul 23 10:14:11 2017 - [info] MHA::MasterMonitor version 0.57.
Sun Jul 23 10:14:11 2017 - [info] GTID failover mode = 1
Sun Jul 23 10:14:11 2017 - [info] Alive Servers:
Sun Jul 23 10:14:11 2017 - [info]   192.168.56.100(192.168.56.100:3306)
Sun Jul 23 10:14:11 2017 - [info]   192.168.56.101(192.168.56.101:3306)
Sun Jul 23 10:14:11 2017 - [info]   192.168.56.102(192.168.56.102:3306)
Sun Jul 23 10:14:11 2017 - [info] Current Alive Master: 192.168.56.100(192.168.56.100:3306)
Sun Jul 23 10:14:11 2017 - [info] 192.168.56.100(192.168.56.100:3306) (current master)
 +--192.168.56.101(192.168.56.101:3306)
 +--192.168.56.102(192.168.56.102:3306)
Sun Jul 23 10:14:11 2017 - [info] Checking replication health on 192.168.56.101..
Sun Jul 23 10:14:11 2017 - [info]  ok.
Sun Jul 23 10:14:11 2017 - [info] Checking replication health on 192.168.56.102..
Sun Jul 23 10:14:11 2017 - [info]  ok.
Sun Jul 23 10:14:11 2017 - [info] Checking master_ip_failover_script status:
IN SCRIPT TEST====/sbin/ifconfig eth0:0 down==/sbin/ifconfig eth0:0 192.168.56.123/24===
Checking the Status of the script.. OK
Sun Jul 23 10:14:11 2017 - [warning] shutdown_script is not defined.
Sun Jul 23 10:14:11 2017 - [info] Got exit code 0 (Not master dead).

MySQL Replication Health is OK.
Add the VIP on the master (manually, the first time only):
ip addr add 192.168.56.123 dev eth0
On the management node, start MHA:
nohup masterha_manager --conf=/etc/mha/mha.conf > /tmp/mha_manager.log < /dev/null 2>&1 &
Verify it started successfully by checking the status:
masterha_check_status --conf=/etc/mha/mha.conf
Simulate a master failure and see whether failover happens automatically.
Stop MySQL on the master:
mysqladmin -uroot -proot123 shutdown
The slave 192.168.56.101 automatically takes over the VIP and becomes the new master, and 192.168.56.102 automatically repoints to the new master.
After the switch, the MHA process stops running automatically.
Check on the management node:
masterha_check_status --conf=/etc/mha/mha.conf
mha is stopped(2:NOT_RUNNING).
Recovery:
Bring the failed master 192.168.56.100 back up:
/usr/local/mysql/bin/mysqld_safe --defaults-file=/etc/my.cnf &
Then repoint it to the current master, 192.168.56.101, and verify:
the master-slave state is all OK!
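As a hedged example of the repoint step (assuming the repl account created earlier, and GTID mode, which the check_repl output above reports as enabled; adjust host and credentials to your environment), on the recovered 192.168.56.100 it might look like:

```sql
-- Rejoin the recovered node as a slave of the new master (illustrative):
CHANGE MASTER TO
  MASTER_HOST='192.168.56.101',
  MASTER_USER='repl',
  MASTER_PASSWORD='repl',
  MASTER_AUTO_POSITION=1;
START SLAVE;
SHOW SLAVE STATUS\G
```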
That concludes the MHA build exercise.
I hope you can set aside some time every day to read books, read the news, and read Lao Zhang's technical blog. We may not be able to compete with the rich second generation and splurge however they like, but drifting through the days with no direction of one's own is empty. We still have to study hard and earn it with our own hands. Since we work in technical roles, we should do more experiments and practice over and over; experience and skill come from deliberate effort.
This year, Lao Zhang plans to publish more practical technical posts that are useful to you all.
Let's learn from each other, consult each other, and strive to double our wages next year.
MHA: the first installment of the trilogy on mainstream MySQL high availability architectures in the enterprise.