2025-02-28 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/03 Report--
This article uses the high-availability cluster MHA as an example to explain how MHA works and how to install, deploy, and use it. After reading the complete article, you should have a solid understanding of the MHA high-availability cluster.
1. Brief introduction to MHA:
MHA (Master High Availability)
(1) MHA is currently a relatively mature solution for MySQL high availability. It was developed by youshimaton (now at Facebook) of the Japanese company DeNA, and is an excellent suite of high-availability software for failover and master-slave promotion in MySQL environments. During a failover, MHA can automatically complete the database switchover within 30 seconds, and throughout the process it preserves data consistency to the greatest extent possible, achieving high availability in the true sense.
(2) The software consists of two parts: MHA Manager (the management node) and MHA Node (the data node). MHA Manager can be deployed on a separate machine to manage multiple master-slave clusters, or it can be deployed on a slave node. MHA Node runs on every MySQL server. MHA Manager periodically probes the master node in the cluster; when the master fails, it automatically promotes the slave with the most recent data to be the new master and then repoints all the other slaves to it. The entire failover process is completely transparent to the application.
(3) working principle:
1. During automatic failover, MHA tries to save the binary logs from the downed master so that as little data as possible is lost, but this is not always feasible. For example, if the master's hardware fails or it cannot be reached over SSH, MHA cannot save the binary logs; it fails over anyway, and the most recent data is lost. Semi-synchronous replication, available since MySQL 5.5, greatly reduces this risk and can be combined with MHA: as long as even one slave has received the latest binary log events, MHA can apply them to all the other slaves, keeping the data consistent across all nodes.
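Semi-synchronous replication is not enabled by default. A minimal configuration sketch of turning it on with the stock MySQL 5.5+ semisync plugins (this is standard MySQL setup, not something MHA does for you):

```shell
# On the master: load and enable the semisync master plugin.
mysql -u root -p -e "INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';"
mysql -u root -p -e "SET GLOBAL rpl_semi_sync_master_enabled = 1;"

# On each slave: load and enable the semisync slave plugin.
mysql -u root -p -e "INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';"
mysql -u root -p -e "SET GLOBAL rpl_semi_sync_slave_enabled = 1;"
# Restart the slave IO thread so the setting takes effect:
mysql -u root -p -e "STOP SLAVE IO_THREAD; START SLAVE IO_THREAD;"
```

To survive a restart, the same settings would also go into /etc/my.cnf; the commands above only change the running instance.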
2. Failover sequence:
(1) Save the binary log events (binlog events) from the crashed master
(2) Identify the slave with the most recent updates
(3) Apply the differential relay logs (relay logs) to the other slaves
(4) Apply the binlog events saved from the master
(5) Promote one slave to be the new master
(6) Point the other slaves at the new master and resume replication
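The binlog rescue in the sequence above can be pictured with plain client tools. A rough, hypothetical sketch of the idea behind MHA's node utilities (the position and file name are illustrative; save_binary_logs and apply_diff_relay_logs do this robustly in practice):

```shell
# Illustrative only: extract the binlog events past the last position the
# most up-to-date slave received, then replay them on a lagging slave.
LAST_POS=1215   # hypothetical: Read_Master_Log_Pos reported by the best slave
mysqlbinlog --start-position=$LAST_POS /usr/local/mysql/data/master-bin.000001 > /tmp/diff.sql
mysql -u root -p < /tmp/diff.sql   # apply the missing events on the lagging slave
```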
2. Deploy MHA:
Roles and the packages each must install:
Master (192.168.220.131): mha4mysql-node
Slave1 (192.168.220.140): mha4mysql-node
Slave2 (192.168.220.136): mha4mysql-node
Manager (192.168.220.170): mha4mysql-manager, mha4mysql-node
(1) Requirement:
In this case, MHA must monitor the MySQL database and fail over automatically when the master goes down, without affecting the business.
(2) Plan:
Install the MySQL database
Configure MySQL with one master and two slaves
Install the MHA software
Configure password-free authentication
Configure MySQL and MHA for high availability
Simulate master failover
(3) Environment: CentOS 7 operating system; MHA version 0.57.
Step 1: Install MySQL on the three master/slave servers (please use MySQL 5.6.36, built with cmake 2.8.6). The procedure is identical on all three servers; the installation on master is demonstrated here.

1. Install the build dependencies:
yum -y install ncurses-devel gcc-c++ perl-Module-Install

2. Install the cmake build tool (unpack the source, then ./configure, gmake && gmake install).

3. Install the MySQL database. Unpack the source, then configure:
cmake \
-DCMAKE_INSTALL_PREFIX=/usr/local/mysql \
-DDEFAULT_CHARSET=utf8 \
-DDEFAULT_COLLATION=utf8_general_ci \
-DWITH_EXTRA_CHARSETS=all \
-DSYSCONFDIR=/etc
and install:
make && make install

4. Create the mysql user and grant it ownership:
groupadd mysql
useradd -M -s /sbin/nologin mysql -g mysql
chown -R mysql.mysql /usr/local/mysql

5. Install the startup files and set the environment variable:
cp support-files/my-default.cnf /etc/my.cnf
cp support-files/mysql.server /etc/rc.d/init.d/mysqld
chmod +x /etc/rc.d/init.d/mysqld
chkconfig --add mysqld
echo "PATH=$PATH:/usr/local/mysql/bin" >> /etc/profile
source /etc/profile    # make the environment variable take effect

Step 2: Modify the main MySQL configuration file /etc/my.cnf. Note that server-id must be different on each of the three servers.

Configure the master server:
vim /etc/my.cnf
[mysqld]
server-id = 1
log_bin = master-bin
log-slave-updates = true

Configure slave server 1:
vim /etc/my.cnf
[mysqld]
server-id = 2
log_bin = master-bin
relay-log = relay-log-bin
relay-log-index = slave-relay-bin.index

Configure slave server 2:
vim /etc/my.cnf
[mysqld]
server-id = 3
log_bin = master-bin
relay-log = relay-log-bin
relay-log-index = slave-relay-bin.index

Step 3: Start the MySQL service on all three servers.
(1) Create soft links:
ln -s /usr/local/mysql/bin/mysql /usr/sbin/
ln -s /usr/local/mysql/bin/mysqlbinlog /usr/sbin/
(2) Start the service:
systemctl stop firewalld.service
setenforce 0
/usr/local/mysql/bin/mysqld_safe --user=mysql &
Check that the port is listening normally:
[root@s01 mysql-5.6.36]# netstat -natp | grep 3306
tcp6  0  0  :::3306  :::*  LISTEN  40105/mysqld

Step 4: Configure MySQL master-slave synchronization (one master, two slaves).
(1) Master/slave configuration of MySQL; pay attention to the authorization:
Two users are authorized on all database nodes: myslave, the replication user used by the slaves, with the password "123"; and mha, the MHA monitoring user, with the password "manager".
mysql -u root -p    # enter the database
mysql> grant replication slave on *.* to 'myslave'@'192.168.220.%' identified by '123';
mysql> grant all privileges on *.* to 'mha'@'192.168.220.%' identified by 'manager';
mysql> flush privileges;    # refresh privileges
Also add the following three grants by hostname (not needed in theory, but MHA performs its checks using hostnames):
mysql> grant all privileges on *.* to 'mha'@'master' identified by 'manager';
mysql> grant all privileges on *.* to 'mha'@'slave1' identified by 'manager';
mysql> grant all privileges on *.* to 'mha'@'slave2' identified by 'manager';
(2) View the binary log file and synchronization position on the MySQL master server:
mysql> show master status;
(3) Next, configure synchronization on slave1 and slave2 (same statement on both):
mysql> change master to master_host='192.168.220.131',master_user='myslave',master_password='123',master_log_file='master-bin.000001',master_log_pos=1215;
1. Start the slave threads on both slave servers and verify that both the IO and SQL threads show "Yes", which indicates that synchronization is working:
mysql> start slave;
mysql> show slave status\G
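Rather than eyeballing the long show slave status\G output, the two thread flags can be checked with a small helper. This is not part of MHA, just a convenience sketch:

```shell
# check_repl: given the output of `mysql -e 'show slave status\G'`,
# report whether both replication threads are running.
check_repl() {
  local status="$1"
  local io sql
  # Pull the values after "Slave_IO_Running:" and "Slave_SQL_Running:".
  io=$(printf '%s\n' "$status" | awk -F': ' '/Slave_IO_Running:/ {print $2}')
  sql=$(printf '%s\n' "$status" | awk -F': ' '/Slave_SQL_Running:/ {print $2}')
  if [ "$io" = "Yes" ] && [ "$sql" = "Yes" ]; then
    echo "replication OK"
  else
    echo "replication BROKEN (IO=$io SQL=$sql)"
  fi
}

# Typical use on a slave:
# check_repl "$(mysql -u root -p -e 'show slave status\G')"
```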
2. Both slave servers must be set to read-only mode:
mysql> set global read_only=1;

Step 5: Install MHA.
(1) The MHA dependency packages must be installed on all servers. First install the epel repository:
yum install epel-release --nogpgcheck -y
yum install -y perl-DBD-MySQL \
perl-Config-Tiny \
perl-Log-Dispatch \
perl-Parallel-ForkManager \
perl-ExtUtils-CBuilder \
perl-ExtUtils-MakeMaker \
perl-CPAN
(2) The node component must be installed on all servers, and the manager component is installed last, on the MHA manager node only, because manager depends on node. The following demonstrates installing the node component on master.
1. Install the node component (required on all four servers):
tar zvxf mha4mysql-node-0.57.tar.gz -C /opt/
cd /opt/mha4mysql-node-0.57/
perl Makefile.PL
make && make install
2. Install the manager component on the manager server (required only there):
tar zvxf mha4mysql-manager-0.57.tar.gz -C /opt/
cd /opt/mha4mysql-manager-0.57/
perl Makefile.PL
make && make install
(3) After the manager is installed, several tools are generated under the /usr/local/bin directory:
masterha_check_ssh: check MHA's SSH configuration
masterha_check_repl: check MySQL replication
masterha_manager: start the manager script
masterha_check_status: check the current MHA running status
masterha_master_monitor: detect whether the master is down
masterha_master_switch: start a failover (automatic or manual)
masterha_conf_host: add or remove configured server information
masterha_stop: shut down the manager
(4) Installing node likewise generates several tools under /usr/local/bin (these are normally triggered by MHA Manager scripts and need no manual operation):
apply_diff_relay_logs: identify differential relay log events and apply them to the other slaves
save_binary_logs: save and copy the master's binary logs
filter_mysqlbinlog: remove unnecessary ROLLBACK events (MHA no longer uses this tool)
purge_relay_logs: purge relay logs (without blocking the SQL thread)

(5) Configure password-free authentication:
1. On manager, configure password-free authentication to all database nodes:
ssh-keygen -t rsa    # press Enter at every prompt; a key pair is generated
ssh-copy-id 192.168.220.131
ssh-copy-id 192.168.220.140
ssh-copy-id 192.168.220.136
Type "yes" and enter the password when prompted.
2. On master, configure password-free authentication to the database nodes slave1 and slave2:
ssh-keygen -t rsa
ssh-copy-id 192.168.220.140
ssh-copy-id 192.168.220.136
Type "yes", then enter the password.
3. On slave1, configure password-free authentication to the database nodes master and slave2:
ssh-keygen -t rsa
ssh-copy-id 192.168.220.131
ssh-copy-id 192.168.220.136
Type "yes", then enter the password.
4. On slave2, configure password-free authentication to the database nodes master and slave1:
ssh-keygen -t rsa
ssh-copy-id 192.168.220.131
ssh-copy-id 192.168.220.140
Type "yes", then enter the password.

(6) Configure MHA:
1. On the manager node, copy the sample scripts to the /usr/local/bin directory:
cp -ra /opt/mha4mysql-manager-0.57/samples/scripts/ /usr/local/bin/
ls scripts/ shows four scripts:
master_ip_failover: manages the VIP during an automatic failover
master_ip_online_change: manages the VIP during an online switchover
power_manager: shuts down the host after a failure
send_report: sends an alarm after a failover
Copy the VIP management script used during automatic failover into /usr/local/bin/:
cp /usr/local/bin/scripts/master_ip_failover /usr/local/bin/
2. Rewrite the master_ip_failover script:
vim /usr/local/bin/master_ip_failover

#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';
use Getopt::Long;

my ($command, $ssh_user, $orig_master_host, $orig_master_ip,
    $orig_master_port, $new_master_host, $new_master_ip, $new_master_port);

############ add the following section ############
my $vip = '192.168.220.100';
my $key = '1';
my $ssh_start_vip = "/sbin/ifconfig ens33:$key $vip";
my $ssh_stop_vip  = "/sbin/ifconfig ens33:$key down";
my $exit_code = 0;
###################################################

GetOptions(
    'command=s'          => \$command,
    'ssh_user=s'         => \$ssh_user,
    'orig_master_host=s' => \$orig_master_host,
    'orig_master_ip=s'   => \$orig_master_ip,
    'orig_master_port=i' => \$orig_master_port,
    'new_master_host=s'  => \$new_master_host,
    'new_master_ip=s'    => \$new_master_ip,
    'new_master_port=i'  => \$new_master_port,
);

exit &main();

sub main {
    print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";
    if ( $command eq "stop" || $command eq "stopssh" ) {
        my $exit_code = 1;
        eval {
            print "Disabling the VIP on old master: $orig_master_host \n";
            &stop_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn "Got Error: $@\n";
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "start" ) {
        my $exit_code = 10;
        eval {
            print "Enabling the VIP - $vip on the new master - $new_master_host \n";
            &start_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn $@;
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "status" ) {
        print "Checking the Status of the script.. OK \n";
        exit 0;
    }
    else {
        &usage();
        exit 1;
    }
}

sub start_vip() {
    `ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}
# A simple system call that disables the VIP on the old master
sub stop_vip() {
    `ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}

sub usage {
    print
    "Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}

3. Create the MHA software directory and copy the configuration file:
mkdir /etc/masterha
cp /opt/mha4mysql-manager-0.57/samples/conf/app1.cnf /etc/masterha/
Edit the configuration file:
vim /etc/masterha/app1.cnf

[server default]
manager_log=/var/log/masterha/app1/manager.log
manager_workdir=/var/log/masterha/app1
master_binlog_dir=/usr/local/mysql/data
master_ip_failover_script=/usr/local/bin/master_ip_failover
master_ip_online_change_script=/usr/local/bin/master_ip_online_change
password=manager
remote_workdir=/tmp
repl_password=123
repl_user=myslave
secondary_check_script=/usr/local/bin/masterha_secondary_check -s 192.168.220.140 -s 192.168.220.136
shutdown_script=""
ssh_user=root
user=mha

[server1]
hostname=192.168.220.131
port=3306

[server2]
candidate_master=1
check_repl_delay=0
hostname=192.168.220.140
port=3306

[server3]
hostname=192.168.220.136
port=3306

(7) Test SSH password-free authentication; if everything is normal, the output will end with "successfully":
masterha_check_ssh --conf=/etc/masterha/app1.cnf
masterha_check_repl --conf=/etc/masterha/app1.cnf    # check replication health
(8) Note: the first time you configure MHA, you must bring up the virtual IP on master manually:
/sbin/ifconfig ens33:1 192.168.220.100/24

Step 6: Start MHA.
(1) Start MHA:
nohup masterha_manager --conf=/etc/masterha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/masterha/app1/manager.log 2>&1 &
(2) Check the MHA status; you can see that the current master is the mysql1 node:
masterha_check_status --conf=/etc/masterha/app1.cnf
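The status line printed by masterha_check_status (typically of the form "app1 (pid:12345) is running(0:PING_OK), master:192.168.220.131") can also be parsed in a monitoring script. The exact line format here is assumed from a typical run, not guaranteed by MHA:

```shell
# current_master: pull the master IP out of a masterha_check_status line.
current_master() {
  printf '%s\n' "$1" | sed -n 's/.*master:\([0-9.]*\).*/\1/p'
}

# Typical use:
# current_master "$(masterha_check_status --conf=/etc/masterha/app1.cnf)"
```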
(3) Fault simulation:
1. First watch the log to monitor what happens:
tailf /var/log/masterha/app1/manager.log
2. Now kill the mysql service on the master database:
pkill -9 mysql
In the slave status you can see that the VIP switches over to one of the slave servers.
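After the simulated failure, the switchover can be confirmed on the promoted slave. A quick check, using the interface name ens33 and the VIP 192.168.220.100 configured above:

```shell
# On the newly promoted master: the VIP should now be bound to ens33.
ip addr show ens33 | grep 192.168.220.100

# The tail of MHA's log records how the failover ran.
tail -n 20 /var/log/masterha/app1/manager.log
```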
At this point, clients can still connect to the database through the virtual IP: mysql -h 192.168.220.100 -u root -p. For example, create a new database:
1. The new database is visible on the first slave server, which has become the new master.
2. Because master-slave replication is configured, the new database is also visible on the second slave server.
The above covers installing and using MHA with MySQL; working through it yourself is the best way to learn the details. If you want to read more related articles, you are welcome to follow the industry information channel!