2025-01-18 Update From: SLTechnology News&Howtos > Database
Shulou (Shulou.com) 06/01 Report --
Overview of MHA High-Availability Cluster Basic Deployment
MHA is currently a relatively mature solution for MySQL high availability. It was developed by youshimaton of the Japanese company DeNA (now working at Facebook).
MHA is high-availability software that handles failover and master-slave promotion in a MySQL high-availability environment. During a MySQL failure, MHA can automatically complete the database failover within about 30 seconds, and while doing so it preserves data consistency to the greatest extent possible, achieving high availability in the real sense.
MHA also provides online master switching: the currently running master can be safely switched to a new master (by promoting a slave to master), an operation that completes in roughly 0.5-2 seconds.
Basic deployment experiment flow

I. Pre-experiment preparation

Name         Role                 Address
centos7-2    master               192.168.142.203
centos7-3    slave1               192.168.142.132
centos7-min  slave2               192.168.142.172
centos7-4    manager (monitor)    192.168.142.136

II. The experiment

1. Prepare all server environments
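Because the privilege grants later in this walkthrough also authorize the mha user by hostname ('mha'@'master' and so on), each server needs to resolve the short names of the others. A minimal /etc/hosts sketch matching the topology above (the short names mirror this article's setup and are an assumption, not output from any tool):

```
192.168.142.203  master
192.168.142.132  slave1
192.168.142.172  slave2
192.168.142.136  manger
```

Append these lines to /etc/hosts on every host, or use DNS if your environment already provides it.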
Install the epel repository (without GPG checking):

[root@manger ~]# yum -y install epel-release --nogpgcheck
Install the environment packages:

[root@manger ~]# yum -y install \
perl-DBD-MySQL \
perl-Config-Tiny \
perl-Log-Dispatch \
perl-Parallel-ForkManager \
perl-ExtUtils-CBuilder \
perl-ExtUtils-MakeMaker \
perl-CPAN
Among them:

perl-DBD-MySQL: Perl driver module for MySQL
perl-Config-Tiny: Perl module for reading configuration files
perl-Log-Dispatch: Perl logging module
perl-Parallel-ForkManager: Perl multi-process management
perl-ExtUtils-CBuilder: Perl build tool
perl-ExtUtils-MakeMaker: Perl build tool
perl-CPAN: access to the CPAN Perl module archive
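A misspelled package name can leave the install partially complete without an obvious error, so it is worth verifying that the required modules actually load before building MHA. A minimal sketch to run on each host (the check_module helper is our own, not part of MHA):

```shell
# Report whether each Perl module required by MHA can be loaded.
check_module() {
  if perl -M"$1" -e1 2>/dev/null; then
    echo "OK $1"
  else
    echo "MISSING $1"
  fi
}

for m in DBD::mysql Config::Tiny Log::Dispatch Parallel::ForkManager; do
  check_module "$m"
done
```

Any MISSING line means the corresponding yum package (or its CPAN equivalent) still needs to be installed.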
Note:
The MHA package differs for each operating system version; on CentOS 7.4 you must use version 0.57.
Node component: must be installed on all servers (including the manager itself).
Manager component: only needs to be installed on the manager host.
Install the node component:

[root@master ~]# tar zxvf mha4mysql-node-0.57.tar.gz -C /opt/
[root@master ~]# cd /opt/mha4mysql-node-0.57
[root@master mha4mysql-node-0.57]# perl Makefile.PL
[root@master mha4mysql-node-0.57]# make
[root@master mha4mysql-node-0.57]# make install

2. Install MySQL on the master and slave servers
Extract and install the software packages:

[root@master ~]# yum -y install gcc gcc-c++ ncurses-devel perl-Module-Install
[root@master ~]# tar zxf cmake-2.8.6.tar.gz -C /opt
[root@master ~]# cd /opt/cmake-2.8.6
[root@master cmake-2.8.6]# ./configure
[root@master cmake-2.8.6]# gmake && gmake install
[root@master ~]# tar zxf mysql-5.6.36.tar.gz -C /opt/
[root@master ~]# cd /opt/mysql-5.6.36

Configure, compile and install:

[root@master mysql-5.6.36]# cmake \
-DCMAKE_INSTALL_PREFIX=/usr/local/mysql \
-DDEFAULT_CHARSET=utf8 \
-DDEFAULT_COLLATION=utf8_general_ci \
-DWITH_EXTRA_CHARSETS=all \
-DSYSCONFDIR=/etc
[root@master mysql-5.6.36]# make && make install
Perform post-install setup and configure environment variables:

[root@master mysql-5.6.36]# cp support-files/my-default.cnf /etc/my.cnf    // overwrite with the template file
[root@master mysql-5.6.36]# cp support-files/mysql.server /etc/rc.d/init.d/mysqld
[root@master mysql-5.6.36]# chmod +x /etc/rc.d/init.d/mysqld
[root@master mysql-5.6.36]# chkconfig --add mysqld
[root@master mysql-5.6.36]# echo "PATH=$PATH:/usr/local/mysql/bin" >> /etc/profile
[root@master mysql-5.6.36]# source /etc/profile
// create the program user
[root@master mysql-5.6.36]# useradd -M -s /sbin/nologin mysql
[root@master mysql-5.6.36]# chown -R mysql.mysql /usr/local/mysql
// initialize the database
[root@master mysql-5.6.36]# /usr/local/mysql/scripts/mysql_install_db \
--basedir=/usr/local/mysql/ \
--datadir=/usr/local/mysql/data/ \
--user=mysql
Modify the MySQL configuration file and start the service:

[root@master ~]# vim /etc/my.cnf

// master server
[mysqld]
server-id = 10
log-slave-updates = true
log-bin = master-bin

// slave servers (the server-id of the two slaves must not be the same)
[mysqld]
server-id = 11
log-bin = master-bin
relay-log = relay-log-bin
relay-log-index = slave-relay-bin.index

// create soft links so the binaries are found on the PATH
[root@master ~]# ln -s /usr/local/mysql/bin/mysql /usr/local/sbin/
[root@master ~]# ln -s /usr/local/mysql/bin/mysqlbinlog /usr/local/sbin/
// start the service safely
[root@master ~]# /usr/local/mysql/bin/mysqld_safe --user=mysql &
Grant privileges in each database:

[root@master ~]# mysql -uroot -p
# master-slave replication account
mysql> grant replication slave on *.* to 'myslave'@'192.168.142.%' identified by '123123';
# MHA manager monitoring user
mysql> grant all privileges on *.* to 'mha'@'192.168.142.%' identified by 'mysql';
# the mha user must also be authorized by hostname on every server, otherwise the slaves will report errors
mysql> grant all privileges on *.* to 'mha'@'master' identified by 'mysql';
mysql> grant all privileges on *.* to 'mha'@'slave1' identified by 'mysql';
mysql> grant all privileges on *.* to 'mha'@'slave2' identified by 'mysql';
Deploy master-slave synchronization:

# on the master, view the binary log file name and position
mysql> show master status;
# on each slave, point replication at the master and start it
mysql> change master to master_host='192.168.142.203',master_user='myslave',master_password='123123',master_log_file='master-bin.000001',master_log_pos=1335;
mysql> start slave;
# put the slaves into read-only mode
mysql> set global read_only=1;

3. Configure the manager side
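After `start slave`, both replication threads should report `Yes` in `show slave status`. A small sketch for checking this from the shell on each slave (the root password below is a placeholder, and the slave_threads_ok helper is our own; adjust to your environment):

```shell
# Count how many of Slave_IO_Running / Slave_SQL_Running report "Yes".
slave_threads_ok() {
  echo "$1" | grep -cE 'Slave_(IO|SQL)_Running: Yes'
}

# On a slave: fetch the status and evaluate it (password is a placeholder).
status=$(mysql -uroot -pYourPassword -e 'show slave status\G' 2>/dev/null)
if [ "$(slave_threads_ok "$status")" -eq 2 ]; then
  echo "replication healthy"
else
  echo "replication broken"
fi
```

If either thread shows `No`, check `Last_IO_Error` / `Last_SQL_Error` in the same output before proceeding to the MHA configuration.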
Install the manager component (make sure the node component is already installed):

[root@manger mha]# tar zxf mha4mysql-manager-0.57.tar.gz -C /opt/
[root@manger mha]# cd /opt/mha4mysql-manager-0.57/
[root@manger mha4mysql-manager-0.57]# perl Makefile.PL
[root@manger mha4mysql-manager-0.57]# make
[root@manger mha4mysql-manager-0.57]# make install
The manager and node packages ship a number of tools.

Manager tools (/usr/local/bin):

masterha_manager: startup script
masterha_master_monitor: checks whether the master is down
masterha_master_switch: controls failover (automatic/manual)
masterha_check_repl: checks MySQL replication
masterha_check_ssh: checks the SSH configuration of MHA
masterha_check_status: checks the current running state of MHA (whether the nodes are healthy)
masterha_conf_host: adds/removes configured server information
masterha_stop: shuts down the manager

Node scripts (/usr/local/bin):

apply_diff_relay_logs: identifies and applies differing relay log events
purge_relay_logs: clears relay logs
save_binary_logs: saves/copies the master's binary log files
filter_mysqlbinlog: removes unnecessary rollback events
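Of the node scripts, purge_relay_logs is the one typically run on a schedule: MHA expects the slaves to keep their relay logs (relay_log_purge=0) and clean them up out of band, staggered so all slaves are not purging at the same moment. A hedged crontab sketch (the time, credentials, and log path are assumptions for this article's setup):

```
# /etc/crontab fragment on each slave (stagger the minute per host)
0 4 * * * root /usr/local/bin/purge_relay_logs --user=mha --password=mysql --disable_relay_log_purge --workdir=/tmp >> /var/log/masterha/purge_relay_logs.log 2>&1
```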
4. Set up passwordless SSH login between all hosts

// manager → all database servers (passphrase: empty)
[root@manger ~]# ssh-keygen -t rsa            // generate a key pair
[root@manger ~]# ssh-copy-id 192.168.142.203  // push to master
[root@manger ~]# ssh-copy-id 192.168.142.132  // push to slave1
[root@manger ~]# ssh-copy-id 192.168.142.172  // push to slave2
// master → both slaves
[root@master ~]# ssh-keygen -t rsa
[root@master ~]# ssh-copy-id 192.168.142.132  // push to slave1
[root@master ~]# ssh-copy-id 192.168.142.172  // push to slave2
// slave1 → master & slave2
[root@slave1 ~]# ssh-keygen -t rsa
[root@slave1 ~]# ssh-copy-id 192.168.142.203
[root@slave1 ~]# ssh-copy-id 192.168.142.172
// slave2 → master & slave1
[root@slave2 ~]# ssh-keygen -t rsa
[root@slave2 ~]# ssh-copy-id 192.168.142.203
[root@slave2 ~]# ssh-copy-id 192.168.142.132

5. Configure MHA on the manager side
Copy the MHA scripts:

[root@manger scripts]# cp -ra /opt/mha4mysql-manager-0.57/samples/scripts /usr/local/bin/
[root@manger scripts]# ll /usr/local/bin/scripts/
total 32
-rwxr-xr-x. 1 1001 1001  3648 May 31  2015 master_ip_failover       // failover, VIP floating
-rwxr-xr-x. 1 1001 1001  9870 May 31  2015 master_ip_online_change  // online switching
-rwxr-xr-x. 1 1001 1001 11867 May 31  2015 power_manager            // shut down the failed host
-rwxr-xr-x. 1 1001 1001  1360 May 31  2015 send_report              // send an alert after failure
[root@manger scripts]# cp /usr/local/bin/scripts/master_ip_failover /usr/local/bin/
Modify the failover script (delete the original content and add the following):

[root@manger scripts]# vim /usr/local/bin/master_ip_failover

#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';
use Getopt::Long;

my (
    $command, $ssh_user, $orig_master_host, $orig_master_ip, $orig_master_port,
    $new_master_host, $new_master_ip, $new_master_port
);

##### add the following content #####
my $vip = '192.168.142.100';                           # virtual (floating) address
my $brdc = '192.168.142.255';                          # broadcast address
my $ifdev = 'ens33';                                   # network card name
my $key = '1';                                         # virtual NIC serial number
my $ssh_start_vip = "/sbin/ifconfig ens33:$key $vip";  # bring the virtual address up
my $ssh_stop_vip = "/sbin/ifconfig ens33:$key down";   # take the virtual address down
my $exit_code = 0;
# my $ssh_start_vip = "/usr/sbin/ip addr add $vip/24 brd $brdc dev $ifdev label $ifdev:$key;/usr/sbin/arping -q -A -c 1 -I $ifdev $vip;iptables -F;";
# my $ssh_stop_vip = "/usr/sbin/ip addr del $vip/24 dev $ifdev label $ifdev:$key";
#####################################

GetOptions(
    'command=s'          => \$command,
    'ssh_user=s'         => \$ssh_user,
    'orig_master_host=s' => \$orig_master_host,
    'orig_master_ip=s'   => \$orig_master_ip,
    'orig_master_port=i' => \$orig_master_port,
    'new_master_host=s'  => \$new_master_host,
    'new_master_ip=s'    => \$new_master_ip,
    'new_master_port=i'  => \$new_master_port,
);

exit &main();

sub main {
    print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";
    if ( $command eq "stop" || $command eq "stopssh" ) {
        my $exit_code = 1;
        eval {
            print "Disabling the VIP on old master: $orig_master_host \n";
            &stop_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn "Got Error: $@\n";
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "start" ) {
        my $exit_code = 10;
        eval {
            print "Enabling the VIP - $vip on the new master - $new_master_host \n";
            &start_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn $@;
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "status" ) {
        print "Checking the Status of the script.. OK \n";
        exit 0;
    }
    else {
        &usage();
        exit 1;
    }
}

sub start_vip() {
    `ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}
# A simple system call that disables the VIP on the old master
sub stop_vip() {
    `ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}

sub usage {
    print
    "Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}
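After saving the script, a quick way to confirm it parses and responds is its status branch, which performs no SSH calls (run on the manager; the OK line is printed by the script itself):

```
[root@manger ~]# chmod +x /usr/local/bin/master_ip_failover
[root@manger ~]# perl -c /usr/local/bin/master_ip_failover
[root@manger ~]# /usr/local/bin/master_ip_failover --command=status
```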
Create the MHA configuration directory and modify the configuration file:

[root@manger bin]# mkdir /etc/masterha
// copy the sample configuration file from the extracted source
[root@manger bin]# cp /opt/mha4mysql-manager-0.57/samples/conf/app1.cnf /etc/masterha/
[root@manger bin]# vim /etc/masterha/app1.cnf

[server default]
# manager log file location
manager_log=/var/log/masterha/app1/manager.log
# manager working directory
manager_workdir=/var/log/masterha/app1
# location where the master saves its binlogs; must match the master's setting
master_binlog_dir=/usr/local/mysql/data
# script for automatic switching on failover
master_ip_failover_script=/usr/local/bin/master_ip_failover
# script for manual online switching
master_ip_online_change_script=/usr/local/bin/master_ip_online_change
# password of the monitoring user mha
password=mysql
# monitoring user
user=mha
# interval between ping packets sent to the master (default 3s); three missed responses trigger failover
ping_interval=1
# remote working directory on the MySQL hosts used when saving binlogs during switching
remote_workdir=/tmp
# replication user password (set earlier in the database)
repl_password=123123
# replication user
repl_user=myslave
# secondary check through the specified server addresses
secondary_check_script=/usr/local/bin/masterha_secondary_check -s 192.168.142.132 -s 192.168.142.
# script to shut down the failed host after a failure ("" means not used)
shutdown_script=""
# ssh login user name
ssh_user=root

[server1]
hostname=192.168.142.203
port=3306

[server2]
# candidate master
candidate_master=1
check_repl_delay=0
hostname=192.168.142.132
port=3306

[server3]
hostname=192.168.142.172
port=3306
Perform a health check:

// check that the key pairs work
[root@manger bin]# masterha_check_ssh --conf=/etc/masterha/app1.cnf
// check that replication works
[root@manger bin]# masterha_check_repl --conf=/etc/masterha/app1.cnf
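A masterha_check_ssh failure usually means one of the ssh-copy-id pushes was missed. A quick manual check from the manager, before rerunning the tool, is to confirm each host answers without a password prompt (the check_host helper and host list are our own, mirroring this article's addresses):

```shell
# Print OK/FAIL for passwordless SSH reachability of one host.
check_host() {
  if ssh -o BatchMode=yes -o ConnectTimeout=2 "root@$1" true 2>/dev/null; then
    echo "OK $1"
  else
    echo "FAIL $1"
  fi
}

for h in 192.168.142.203 192.168.142.132 192.168.142.172; do
  check_host "$h"
done
```

BatchMode=yes makes ssh fail immediately instead of prompting, so a FAIL line points directly at the host whose key push needs to be repeated.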
Resolving ERRORs that may occur during the health check
Error:
Sat Dec 14 22:01:09 2019-[error] [/ usr/local/share/perl5/MHA/ServerManager.pm, ln492] Server 192.168.142.172 (192.168.142.172) is dead, but must be alive! Check server settings.
Sat Dec 14 22:01:09 2019-[error] [/ usr/local/share/perl5/MHA/MasterMonitor.pm, ln427] Error happened on checking configurations. At / usr/local/share/perl5/MHA/MasterMonitor.pm line 402.
Sat Dec 14 22:01:09 2019-[error] [/ usr/local/share/perl5/MHA/MasterMonitor.pm, ln525] Error happened on monitoring servers.
Solution:
(1) Turn off the firewall.
(2) Clear the iptables rules: iptables -F
The first time, you need to bring the virtual IP up manually on the master server:

[root@master ~]# /sbin/ifconfig ens33:1 192.168.142.100/24
Start MHA
[root@manger app1]# nohup masterha_manager --conf=/etc/masterha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/masterha/app1/manager.log 2>&1 &
// view MHA status
[root@manger app1]# masterha_check_status --conf=/etc/masterha/app1.cnf
app1 (pid:55255) is running(0:PING_OK), master:192.168.142.203
// view the MHA log file
[root@manger app1]# cat /var/log/masterha/app1/manager.log

Among the options:

--conf=/etc/masterha/app1.cnf: configuration file location
--remove_dead_master_conf: remove the dead master's entry from the configuration file after a failover
--ignore_last_failover: ignore the previous failover (by default MHA refuses to fail over again within 8 hours of the last one)
< /dev/null ... 2>&1 &: run in the background, merging error output into normal output
Thank you for reading.