Introduction to MHA
Brief introduction
MHA is currently a relatively mature solution for MySQL high availability. It was developed by Yoshinori Matsunobu (formerly of DeNA in Japan, later at Facebook) and is an excellent set of high-availability software for failover and master promotion in a MySQL replication environment. During a MySQL failover, MHA can complete the database failover automatically within about 30 seconds, and during the switch it preserves data consistency to the greatest extent possible, so that high availability is achieved in the real sense.
The software consists of two parts: MHA Manager (the management node) and MHA Node (the data node). MHA Manager can be deployed on a separate machine to manage multiple master-slave clusters, or it can be deployed on one of the slave nodes. MHA Node runs on every MySQL server. MHA Manager periodically probes the master node in the cluster; when the master fails, it automatically promotes the slave with the most recent data to be the new master and then repoints all other slaves to the new master. The entire failover process is completely transparent to the application.
Working principle
1. During automatic failover, MHA tries to save the binary logs from the downed master server so that as little data as possible is lost, but this is not always feasible. For example, if the master's hardware fails or it cannot be reached over SSH, MHA cannot save the binary logs and will fail over while losing the most recent data. With the semi-synchronous replication introduced in MySQL 5.5, the risk of data loss can be greatly reduced, and MHA can be combined with it: if even one slave has received the latest binary log, MHA can apply it to all other slaves, keeping the data consistent across all nodes (a minimal enabling sketch follows the list of steps below).
2. Failover sequence:
① save the binary log events (binlog events) from the crashed master
② identify the slave with the latest updates
③ apply the differential relay log (relay log) to the other slaves
④ apply the binary log events saved from the master
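As a rough illustration of the semi-synchronous replication mentioned in point 1, enabling it might look like the following. This is only a hedged sketch, assuming the semisync plugins shipped with MySQL 5.5+ are present in the plugin directory; it is optional for the steps below.
# on the master: load and enable the semi-sync master plugin
[root@master ~]# mysql -u root -p -e "install plugin rpl_semi_sync_master soname 'semisync_master.so'; set global rpl_semi_sync_master_enabled = 1;"
# on each slave: load and enable the semi-sync slave plugin, then restart the IO thread
[root@slave1 ~]# mysql -u root -p -e "install plugin rpl_semi_sync_slave soname 'semisync_slave.so'; set global rpl_semi_sync_slave_enabled = 1; stop slave io_thread; start slave io_thread;"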
Experimental environment
Master (192.168.52.129)
Slave1 (192.168.52.130)
Slave2 (192.168.52.131)
Manager (192.168.149.150)
Experimental procedure
Install the MySQL database on the master and both slave servers
# install compilation dependencies
[root@master ~]# yum -y install gcc gcc-c++ ncurses ncurses-devel bison perl-Module-Install cmake
[root@master ~]# mount.cifs //192.168.100.100/tools /mnt/tools/    ## mount the tools share
Password for root@//192.168.100.100/tools:
[root@master ~]# cd /mnt/tools/MHA/
[root@master MHA]# tar xf cmake-2.8.6.tar.gz -C /opt/    ## extract
[root@master MHA]# cd /opt/cmake-2.8.6/
[root@master cmake-2.8.6]# ./configure    ## configure
[root@master cmake-2.8.6]# gmake && gmake install    ## compile and install
# install the mysql database
[root@master cmake-2.8.6]# cd /mnt/tools/MHA/
[root@master MHA]# tar xf mysql-5.6.36.tar.gz -C /opt/    ## extract MySQL
# compile mysql
[root@master MHA]# cd /opt/mysql-5.6.36/
[root@master mysql-5.6.36]# cmake -DCMAKE_INSTALL_PREFIX=/usr/local/mysql \
-DDEFAULT_CHARSET=utf8 \                 ## default character set
-DDEFAULT_COLLATION=utf8_general_ci \    ## default collation
-DWITH_EXTRA_CHARSETS=all \              ## include all extra character sets
-DSYSCONFDIR=/etc                        ## configuration file directory
# install
[root@master mysql-5.6.36]# make && make install    ## compile and install
# set environment variables
[root@master mysql-5.6.36]# cp support-files/my-default.cnf /etc/my.cnf    ## copy the configuration file
[root@master mysql-5.6.36]# cp support-files/mysql.server /etc/rc.d/init.d/mysqld
## copy the startup script
[root@master mysql-5.6.36]# chmod +x /etc/rc.d/init.d/mysqld    ## make it executable
[root@master mysql-5.6.36]# chkconfig --add mysqld    ## add it to service management
[root@master mysql-5.6.36]# echo "PATH=$PATH:/usr/local/mysql/bin" >> /etc/profile
## add to the PATH environment variable
[root@master mysql-5.6.36]# source /etc/profile    ## reload the environment variables
# create the mysql program user and set ownership
[root@master mysql-5.6.36]# groupadd mysql    ## create the group
[root@master mysql-5.6.36]# useradd -M -s /sbin/nologin mysql -g mysql
## create the system user
[root@master mysql-5.6.36]# chown -R mysql.mysql /usr/local/mysql    ## change owner and group
[root@master mysql-5.6.36]# mkdir -p /data/mysql    ## create the data directory
# initialize the database
[root@master mysql-5.6.36]# /usr/local/mysql/scripts/mysql_install_db \
--basedir=/usr/local/mysql \         ## installation directory
--datadir=/usr/local/mysql/data \    ## data directory
--user=mysql                         ## run as the mysql user
Modify the main mysql configuration file: /etc/my.cnf
## configure the master server:
[root@master mysql-5.6.36]# vim /etc/my.cnf
[mysqld]
server-id = 1
# enable binary logging
log_bin = master-bin
# log updates received from a master so they can be passed on to slaves
log-slave-updates = true
## configure slave server 1:
[root@slave1 mysql-5.6.36]# vim /etc/my.cnf
[mysqld]
server-id = 2
# enable binary logging
log_bin = master-bin
# use relay logs for synchronization
relay-log = relay-log-bin
relay-log-index = slave-relay-bin.index
## configure slave server 2:
[root@slave2 mysql-5.6.36]# vim /etc/my.cnf
[mysqld]
server-id = 3
log_bin = master-bin
relay-log = relay-log-bin
relay-log-index = slave-relay-bin.index
Start the mysql service on all three servers
# create these two soft links on all three servers
[root@master mysql-5.6.36]# ln -s /usr/local/mysql/bin/mysql /usr/sbin/
[root@master mysql-5.6.36]# ln -s /usr/local/mysql/bin/mysqlbinlog /usr/sbin/
# start mysql
[root@master mysql-5.6.36]# /usr/local/mysql/bin/mysqld_safe --user=mysql &
# turn off the firewall and SELinux
[root@master mysql-5.6.36]# systemctl stop firewalld.service
[root@master mysql-5.6.36]# setenforce 0
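A quick sanity check after starting the service might look like this (a sketch; it assumes the ss tool is available and that the root account still has no password, as is the case right after initialization):
[root@master mysql-5.6.36]# ss -ntlp | grep 3306    ## mysqld should be listening on port 3306
[root@master mysql-5.6.36]# mysql -u root -e "show variables like 'server_id'; show variables like 'log_bin';"    ## should report 1 and ON on the master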
Configure MySQL master-slave replication (one master, two slaves) and authorize two users on all database nodes
[root@master mysql-5.6.36]# mysql -u root -p    // enter the database
mysql> grant replication slave on *.* to 'myslave'@'192.168.52.%' identified by '123';
## replication user used by the slave databases
mysql> grant all privileges on *.* to 'mha'@'192.168.52.%' identified by 'manager';
## monitoring user used by the MHA manager
mysql> flush privileges;    // refresh privileges
# also add the following grants by hostname (not required in theory, but MHA performs its checks using hostnames)
mysql> grant all privileges on *.* to 'mha'@'master' identified by 'manager';
mysql> grant all privileges on *.* to 'mha'@'slave1' identified by 'manager';
mysql> grant all privileges on *.* to 'mha'@'slave2' identified by 'manager';
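To confirm the two accounts exist on every node, a check such as the following can be run (a sketch):
[root@master mysql-5.6.36]# mysql -u root -p -e "select user,host from mysql.user where user in ('myslave','mha');"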
View the binary log file and position on the master server
mysql> show master status;
+-------------------+----------+--------------+------------------+-------------------+
| File              | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+-------------------+----------+--------------+------------------+-------------------+
| master-bin.000001 |     1213 |              |                  |                   |
+-------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)
Set up replication on the two slave servers
# execute the following on both slave servers to replicate the master's logs
mysql> change master to master_host='192.168.52.129',master_user='myslave',master_password='123',master_log_file='master-bin.000001',master_log_pos=1213;
mysql> start slave;           // start replication
mysql> show slave status\G    // check the slave status
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
mysql> set global read_only=1;
mysql> flush privileges;      // refresh privileges
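A quick way to confirm replication health on each slave (a sketch) is to filter the slave status for the two threads and the lag:
[root@slave1 mysql-5.6.36]# mysql -u root -p -e "show slave status\G" | egrep 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master'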
Install the MHA dependency environment (epel source) on all servers
[root@master mysql-5.6.36]# yum install epel-release --nogpgcheck -y    ## install the epel source
[root@master mysql-5.6.36]# yum install -y perl-DBD-MySQL \    ## MySQL driver for perl
perl-Config-Tiny \             ## configuration files
perl-Log-Dispatch \            ## logging
perl-Parallel-ForkManager \    ## multi-process management
perl-ExtUtils-CBuilder \       ## build tools
perl-ExtUtils-MakeMaker \
perl-CPAN                      ## CPAN library
# extract and install node
[root@manager ~]# cd ~
[root@manager ~]# tar zxvf /mnt/mha4mysql-node-0.57.tar.gz
[root@manager ~]# cd mha4mysql-node-0.57/
[root@manager mha4mysql-node-0.57]# perl Makefile.PL    ## generate the Makefile with perl
[root@manager mha4mysql-node-0.57]# make && make install
Install manager on the manager server
# turn off the firewall
[root@manager ~]# systemctl stop firewalld.service
[root@manager ~]# setenforce 0
# extract and install manager
[root@manager ~]# cd ~
[root@manager ~]# tar zxvf /mnt/mha4mysql-manager-0.57.tar.gz
[root@manager ~]# cd mha4mysql-manager-0.57/
[root@manager mha4mysql-manager-0.57]# perl Makefile.PL    ## generate the Makefile with perl
[root@manager mha4mysql-manager-0.57]# make && make install    ## compile and install
After the manager is installed, several tools are generated under the /usr/local/bin directory:
masterha_check_repl       check the MySQL replication status
masterha_master_monitor   check whether the master is down
masterha_check_ssh        check MHA's SSH configuration
masterha_master_switch    control failover (manual or automatic)
masterha_check_status     check the current MHA running status
masterha_conf_host        add or remove configured server information
masterha_stop             stop the manager
masterha_manager          script that starts the manager
After node is installed, several scripts are generated under /usr/local/bin (normally triggered by MHA Manager scripts, with no manual action needed):
apply_diff_relay_logs    identify differential relay log events and apply them to the other slaves
save_binary_logs         save and copy the master's binary logs
filter_mysqlbinlog       remove unnecessary ROLLBACK events (MHA no longer uses this tool)
purge_relay_logs         purge relay logs (without blocking the SQL thread)
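For example, purge_relay_logs is normally run periodically on each slave. A hand-run sketch might look like the following, assuming an account that is allowed to connect from that slave (here the mha user created above, connecting via the slave's own IP so it matches the 192.168.52.% grant):
[root@slave1 ~]# purge_relay_logs --user=mha --password=manager --host=192.168.52.130 --disable_relay_log_purge --workdir=/tmp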
Configure password-less access
## on manager, configure password-less authentication to all database nodes
[root@manager ~]# ssh-keygen -t rsa    ## generate the key pair
Enter file in which to save the key (/root/.ssh/id_rsa):    ## press enter
Enter passphrase (empty for no passphrase):    ## press enter
Enter same passphrase again:    ## press enter
[root@manager ~]# ssh-copy-id 192.168.52.129    ## copy the key to the other servers
Are you sure you want to continue connecting (yes/no)? yes
root@192.168.52.129's password:    ## enter that server's password
[root@manager ~]# ssh-copy-id 192.168.52.130
[root@manager ~]# ssh-copy-id 192.168.52.131
## on master, configure password-less authentication to database nodes slave1 and slave2
[root@master ~]# ssh-keygen -t rsa
[root@master ~]# ssh-copy-id 192.168.52.130
[root@master ~]# ssh-copy-id 192.168.52.131
## on slave1, configure password-less authentication to database nodes master and slave2
[root@slave1 ~]# ssh-keygen -t rsa
[root@slave1 ~]# ssh-copy-id 192.168.52.129
[root@slave1 ~]# ssh-copy-id 192.168.52.131
## on slave2, configure password-less authentication to database nodes master and slave1
[root@slave2 ~]# ssh-keygen -t rsa
[root@slave2 ~]# ssh-copy-id 192.168.52.129
[root@slave2 ~]# ssh-copy-id 192.168.52.130
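A quick way to confirm that the key-based logins work without a password prompt (a sketch, run from the manager; the same check can be repeated from each database node):
[root@manager ~]# ssh -o BatchMode=yes root@192.168.52.129 hostname    ## should print the remote hostname without asking for a password
[root@manager ~]# ssh -o BatchMode=yes root@192.168.52.130 hostname
[root@manager ~]# ssh -o BatchMode=yes root@192.168.52.131 hostname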
Configure MHA: copy the relevant scripts on the manager node to the /usr/local/bin directory and configure them
[root@manager ~]# cp -ra /root/mha4mysql-manager-0.57/samples/scripts/ /usr/local/bin/
## copy the scripts to /usr/local/bin
[root@manager ~]# ls mha4mysql-manager-0.57/samples/scripts/
## four executable scripts are provided:
master_ip_failover: script that manages the VIP during automatic failover
master_ip_online_change: manages the VIP during an online switch
power_manager: script that shuts down the host after a failure
send_report: script that sends an alert after a failover
## copy the script that manages the VIP during automatic failover to the /usr/local/bin/ directory:
[root@manager ~]# cp /usr/local/bin/scripts/master_ip_failover /usr/local/bin/
[root@manager ~]# vim /usr/local/bin/master_ip_failover
## delete the original content and rewrite the master_ip_failover script
#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';
use Getopt::Long;
my (
$command, $ssh_user, $orig_master_host, $orig_master_ip,
$orig_master_port, $new_master_host, $new_master_ip, $new_master_port
);
my $vip = '192.168.52.100';
my $brdc = '192.168.52.255';
my $ifdev = 'ens33';
my $key = '1';
my $ssh_start_vip = "/sbin/ifconfig ens33:$key $vip";
my $ssh_stop_vip = "/sbin/ifconfig ens33:$key down";
my $exit_code = 0;
#my $ssh_start_vip = "/usr/sbin/ip addr add $vip/24 brd $brdc dev $ifdev label $ifdev:$key;/usr/sbin/arping -q -A -c 1 -I $ifdev $vip;iptables -F;";
#my $ssh_stop_vip = "/usr/sbin/ip addr del $vip/24 dev $ifdev label $ifdev:$key";
GetOptions(
'command=s' => \$command,
'ssh_user=s' => \$ssh_user,
'orig_master_host=s' => \$orig_master_host,
'orig_master_ip=s' => \$orig_master_ip,
'orig_master_port=i' => \$orig_master_port,
'new_master_host=s' => \$new_master_host,
'new_master_ip=s' => \$new_master_ip,
'new_master_port=i' => \$new_master_port,
);
exit &main();
sub main {
print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";
if ( $command eq "stop" || $command eq "stopssh" ) {
my $exit_code = 1;
eval {
print "Disabling the VIP on old master: $orig_master_host \n";
&stop_vip();
$exit_code = 0;
};
if ($@) {
warn "Got Error: $@\n";
exit $exit_code;
}
exit $exit_code;
}
elsif ( $command eq "start" ) {
my $exit_code = 10;
eval {
print "Enabling the VIP - $vip on the new master - $new_master_host \n";
&start_vip();
$exit_code = 0;
};
if ($@) {
warn $@;
exit $exit_code;
}
exit $exit_code;
}
elsif ( $command eq "status" ) {
print "Checking the Status of the script.. OK \n";
exit 0;
}
else {
&usage();
exit 1;
}
}
sub start_vip() {
`ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}
sub stop_vip() {
`ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}
sub usage {
print
"Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}
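The rewritten script also needs to be executable. A hedged dry run of its status branch can be used to confirm it parses; the host/port arguments below are just placeholders for this test:
[root@manager ~]# chmod +x /usr/local/bin/master_ip_failover
[root@manager ~]# /usr/local/bin/master_ip_failover --command=status --ssh_user=root --orig_master_host=192.168.52.129 --orig_master_ip=192.168.52.129 --orig_master_port=3306
## expected output ends with: Checking the Status of the script.. OK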
Create an MHA software directory and copy the configuration file on the manager node
[root@manager ~]# mkdir /etc/masterha
[root@manager ~]# cp /root/mha4mysql-manager-0.57/samples/conf/app1.cnf /etc/masterha/
# edit the configuration file
[root@manager ~]# vim /etc/masterha/app1.cnf
[server default]
# manager log file
manager_log=/var/log/masterha/app1/manager.log
# manager working directory
manager_workdir=/var/log/masterha/app1
# location where the master stores its binlog; this path must match the binlog path configured on the master
master_binlog_dir=/usr/local/mysql/data
# switching script used for automatic failover, i.e. the script written above
master_ip_failover_script=/usr/local/bin/master_ip_failover
# switching script used for manual (online) switching
master_ip_online_change_script=/usr/local/bin/master_ip_online_change
# password of the monitoring user created earlier
password=manager
remote_workdir=/tmp
# password of the replication user
repl_password=123
# replication user
repl_user=myslave
# secondary check script used to confirm from other hosts whether the master is really down
secondary_check_script=/usr/local/bin/masterha_secondary_check -s 192.168.52.130 -s 192.168.52.131
# script used to power off the failed host after a failure (left empty here)
shutdown_script=""
# ssh login user
ssh_user=root
# monitoring user
user=mha
[server1]
hostname=192.168.52.129
port=3306
[server2]
candidate_master=1
# set as candidate master; if this parameter is set, this slave will be promoted to master when a master-slave switch occurs
hostname=192.168.52.130
check_repl_delay=0
port=3306
[server3]
hostname=192.168.52.131
port=3306
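Before starting the manager it is common to create the log directory and validate the configuration with the check tools listed earlier (a sketch; the success messages shown in the comments are the usual ones):
[root@manager ~]# mkdir -p /var/log/masterha/app1
[root@manager ~]# masterha_check_ssh --conf=/etc/masterha/app1.cnf     ## should end with "All SSH connection tests passed successfully."
[root@manager ~]# masterha_check_repl --conf=/etc/masterha/app1.cnf    ## should end with "MySQL Replication Health is OK."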
Configure the virtual IP on master and start MHA
[root@master mha4mysql-node-0.57]# /sbin/ifconfig ens33:1 192.168.52.100/24
[root@manager scripts]# nohup masterha_manager --conf=/etc/masterha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/masterha/app1/manager.log 2>&1 &
## check the MHA status; you can see that the current master is the 192.168.52.129 node
[root@manager scripts]# masterha_check_status --conf=/etc/masterha/app1.cnf
app1 (pid:43036) is running(0:PING_OK), master:192.168.52.129
Fault simulation
[root@manager scripts]# tailf /var/log/masterha/app1/manager.log
## start monitoring and watch the log output
## shut down the master server
[root@master mha4mysql-node-0.57]# pkill -9 mysql
Check the slave servers; the VIP has switched to one of them:
[root@slave1 mha4mysql-node-0.57]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.52.130  netmask 255.255.255.0  broadcast 192.168.52.255
ens33:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.52.100  netmask 255.255.255.0  broadcast 192.168.52.255
        ether 00:0c:29:af:94:06  txqueuelen 1000  (Ethernet)
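Two additional hedged checks of the failover result: slave2 should now be replicating from the new master (192.168.52.130), and the VIP should still answer:
[root@slave2 mha4mysql-node-0.57]# mysql -u root -p -e "show slave status\G" | grep Master_Host    ## should now show 192.168.52.130
[root@manager ~]# ping -c 2 192.168.52.100    ## the VIP now answers from the new master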
At this point, mysql is also installed on the manager node, and a client can connect to the database through the virtual IP:
## grant privileges on the database server that currently holds the VIP
mysql> grant all on *.* to 'root'@'%' identified by 'abc123';
Query OK, 0 rows affected (0.00 sec)
## log in from the client using the virtual IP
[root@manager ~]# mysql -uroot -h 192.168.52.100 -p    ## specify the virtual IP
Enter password:    ## enter the password
MySQL [(none)]>