2025-04-04 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)05/31 Report--
This article mainly explains how to build MHA and perform fault maintenance. The method introduced here is simple, fast, and practical. Interested readers may wish to take a look and follow along.
(1) Environment introduction

1. Host deployment

CentOS 7; change the hostname on each machine:

hostnamectl set-hostname master

192.168.56.121 master
192.168.56.122 slave1    # standby master
192.168.56.123 slave2
192.168.56.124 manager

Configure these IP/hostname pairs in the /etc/hosts file on every host.
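The /etc/hosts entries above can be appended in one go. A sketch, writing to a temporary file so it can run anywhere; on the real hosts, point HOSTS at /etc/hosts instead:

```shell
# Append the four cluster entries to the hosts file.
HOSTS="$(mktemp)"        # use HOSTS=/etc/hosts on the real machines
cat >> "$HOSTS" <<'EOF'
192.168.56.121 master
192.168.56.122 slave1
192.168.56.123 slave2
192.168.56.124 manager
EOF
# Sanity check: all four entries landed
grep -c '192\.168\.56\.' "$HOSTS"   # prints 4
```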
2. Open port 3306 in the firewall on each host:

iptables -I INPUT -s 0/0 -p tcp --dport 3306 -j ACCEPT

This rule allows inbound (INPUT) TCP traffic to port 3306 from any source.

Verify the rule:

iptables -L -n | grep 3306
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:3306

(2) Use ssh-keygen to set up key-free (passwordless) login among the four hosts

1. Generate keys
[master, slave1, slave2, manager]

ssh-keygen -t rsa

[slave1, slave2, manager]

scp .ssh/id_rsa.pub master:/root/.ssh/slave1.pub
scp .ssh/id_rsa.pub master:/root/.ssh/slave2.pub
scp .ssh/id_rsa.pub master:/root/.ssh/manager.pub

2. On master, import the public keys into /root/.ssh/authorized_keys using cat xxx >> authorized_keys, then distribute the file
[master]

cat ~/.ssh/*.pub > ~/.ssh/authorized_keys
scp ~/.ssh/authorized_keys slave1:/root/.ssh/authorized_keys
scp ~/.ssh/authorized_keys slave2:/root/.ssh/authorized_keys
scp ~/.ssh/authorized_keys manager:/root/.ssh/authorized_keys

(III) Install the mha4mysql-node and mha4mysql-manager packages

1. Install mha4mysql-node

[manager, master, slave1, slave2]

yum -y install perl-DBD-MySQL
yum -y install perl-Config-Tiny
yum -y install perl-Log-Dispatch
yum -y install perl-Parallel-ForkManager
rpm -ivh mha4mysql-node-0.55-0.el6.noarch.rpm

2. Install mha4mysql-manager

[manager]

yum -y install perl
yum -y install cpan
rpm -ivh mha4mysql-manager-0.55-0.el6.noarch.rpm
If any dependency is still missing, just install it with yum install xxx.
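Section (IV) below only names the master-slave replication step without showing commands. For reference, a minimal sketch of the usual MySQL 5.6-era statements, using the rep/repl credentials that appear in app1.cnf; the binlog file and position are placeholders to be copied from SHOW MASTER STATUS, not values from this article:

```sql
-- On master: create the replication account (credentials as in app1.cnf)
CREATE USER 'rep'@'192.168.56.%' IDENTIFIED BY 'repl';
GRANT REPLICATION SLAVE ON *.* TO 'rep'@'192.168.56.%';
FLUSH PRIVILEGES;

-- On master: note the current binlog coordinates
SHOW MASTER STATUS;

-- On slave1 and slave2: point at master and start replicating
CHANGE MASTER TO
  MASTER_HOST='192.168.56.121',
  MASTER_USER='rep',
  MASTER_PASSWORD='repl',
  MASTER_LOG_FILE='mysql-bin.000001',   -- placeholder
  MASTER_LOG_POS=120;                   -- placeholder
START SLAVE;
SHOW SLAVE STATUS\G
```

Note that the article's app1.cnf uses repl_user=rep while its later change master commands use master_user='repl'; adjust to whichever account actually exists in your environment.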
(IV) Establish master-slave replication among master, slave1, and slave2

(V) Configure the MHA file on the management machine manager

[manager]

1. Create the directories and the configuration file
mkdir -p /masterha/app1
mkdir /etc/masterha
vi /etc/masterha/app1.cnf

[server default]
user=root
password=root
manager_workdir=/masterha/app1
manager_log=/masterha/app1/manager.log
remote_workdir=/masterha/app1
ssh_user=root
repl_user=rep
repl_password=repl
ping_interval=1

[server1]
hostname=192.168.56.122
master_binlog_dir=/var/lib/mysql
candidate_master=1
#relay_log_purge=0

[server2]
hostname=192.168.56.121
master_binlog_dir=/var/lib/mysql
candidate_master=1

[server3]
hostname=192.168.56.123
master_binlog_dir=/var/lib/mysql
no_master=1
#relay_log_purge=0

(VI) Verify the SSH trust login with the masterha_check_ssh tool

[manager]

masterha_check_ssh --conf=/etc/masterha/app1.cnf

[root@manager ~]# masterha_check_ssh --conf=/etc/masterha/app1.cnf
Thu Feb 23 12:00:24 2017 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Thu Feb 23 12:00:24 2017 - [info] Reading application default configurations from /etc/masterha/app1.cnf..
Thu Feb 23 12:00:24 2017 - [info] Reading server configurations from /etc/masterha/app1.cnf..
Thu Feb 23 12:00:24 2017 - [info] Starting SSH connection tests..
Thu Feb 23 12:00:24 2017 - [debug]  Connecting via SSH from root@192.168.56.122(192.168.56.122:22) to root@192.168.56.121(192.168.56.121:22)..
Thu Feb 23 12:00:25 2017 - [debug]   ok.
Thu Feb 23 12:00:25 2017 - [debug]  Connecting via SSH from root@192.168.56.122(192.168.56.122:22) to root@192.168.56.123(192.168.56.123:22)..
Thu Feb 23 12:00:25 2017 - [debug]   ok.
Thu Feb 23 12:00:25 2017 - [debug]  Connecting via SSH from root@192.168.56.121(192.168.56.121:22) to root@192.168.56.122(192.168.56.122:22)..
Warning: Permanently added '192.168.56.121' (ECDSA) to the list of known hosts.
Thu Feb 23 12:00:25 2017 - [debug]   ok.
Thu Feb 23 12:00:25 2017 - [debug]  Connecting via SSH from root@192.168.56.121(192.168.56.121:22) to root@192.168.56.123(192.168.56.123:22)..
Thu Feb 23 12:00:25 2017 - [debug]   ok.
Thu Feb 23 12:00:25 2017 - [debug]  Connecting via SSH from root@192.168.56.123(192.168.56.123:22) to root@192.168.56.122(192.168.56.122:22)..
Warning: Permanently added '192.168.56.123' (ECDSA) to the list of known hosts.
Thu Feb 23 12:00:26 2017 - [debug]   ok.
Thu Feb 23 12:00:26 2017 - [debug]  Connecting via SSH from root@192.168.56.123(192.168.56.123:22) to root@192.168.56.121(192.168.56.121:22)..
Thu Feb 23 12:00:26 2017 - [debug]   ok.
Thu Feb 23 12:00:26 2017 - [info] All SSH connection tests passed successfully.
[root@manager ~]#

(VII) Verify MySQL replication with the masterha_check_repl tool

[manager]

masterha_check_repl --conf=/etc/masterha/app1.cnf

[root@manager mysql]# masterha_check_repl --conf=/etc/masterha/app1.cnf
Thu Feb 23 14:37:05 2017 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Thu Feb 23 14:37:05 2017 - [info] Reading application default configurations from /etc/masterha/app1.cnf..
Thu Feb 23 14:37:05 2017 - [info] Reading server configurations from /etc/masterha/app1.cnf..
Thu Feb 23 14:37:05 2017 - [info] MHA::MasterMonitor version 0.55.
Thu Feb 23 14:37:05 2017 - [info] Dead Servers:
Thu Feb 23 14:37:05 2017 - [info] Alive Servers:
Thu Feb 23 14:37:05 2017 - [info]   master (192.168.56.121:3306)
Thu Feb 23 14:37:05 2017 - [info]   slave1 (192.168.56.122:3306)
Thu Feb 23 14:37:05 2017 - [info]   slave2 (192.168.56.123:3306)
Thu Feb 23 14:37:05 2017 - [info] Alive Slaves:
... (the section up to connecting to root@192.168.56.123 (slave2:22) is omitted here) ...
  Creating directory /masterha/app1.. done.
  Checking slave recovery environment settings..
    Opening /var/lib/mysql/relay-log.info ... ok.
    Relay log found at /tmp, up to mysql-relay-bin.000004
    Temporary relay log file is /tmp/mysql-relay-bin.000004
    Testing mysql connection and privileges..
Warning: Using a password on the command line interface can be insecure.
 done.
    Testing mysqlbinlog output.. done.
    Cleaning up test file(s).. done.
Thu Feb 23 14:37:08 2017 - [info] Slaves settings check done.
Thu Feb 23 14:37:08 2017 - [info]
master (current master)
 +--slave1
 +--slave2
Thu Feb 23 14:37:08 2017 - [info] Checking replication health on slave1.. ok.
Thu Feb 23 14:37:08 2017 - [info] Checking replication health on slave2.. ok.
Thu Feb 23 14:37:08 2017 - [warning] master_ip_failover_script is not defined.
Thu Feb 23 14:37:08 2017 - [warning] shutdown_script is not defined.
Thu Feb 23 14:37:08 2017 - [info] Got exit code 0 (Not master dead).

MySQL Replication Health is OK.

(VIII) Start the MHA manager and monitor the log file

[manager]

masterha_manager --conf=/etc/masterha/app1.cnf
tail -f /masterha/app1/manager.log

(IX) Test whether the master switches over automatically after it goes down
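The app1.cnf from section (V) can also be laid down with a heredoc instead of vi. A sketch that writes the same file into a temporary directory so it can run anywhere; on a real manager, use /etc/masterha:

```shell
# Generate the MHA application config from section (V) with a heredoc.
CNF_DIR="$(mktemp -d)"          # use /etc/masterha on the real manager
cat > "$CNF_DIR/app1.cnf" <<'EOF'
[server default]
user=root
password=root
manager_workdir=/masterha/app1
manager_log=/masterha/app1/manager.log
remote_workdir=/masterha/app1
ssh_user=root
repl_user=rep
repl_password=repl
ping_interval=1

[server1]
hostname=192.168.56.122
master_binlog_dir=/var/lib/mysql
candidate_master=1

[server2]
hostname=192.168.56.121
master_binlog_dir=/var/lib/mysql
candidate_master=1

[server3]
hostname=192.168.56.123
master_binlog_dir=/var/lib/mysql
no_master=1
EOF
# Sanity check: two candidate masters, one node barred from promotion
grep -c '^candidate_master=1' "$CNF_DIR/app1.cnf"   # prints 2
grep -c '^no_master=1' "$CNF_DIR/app1.cnf"          # prints 1
```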
1. Stop the mysql service on master

[master]
[root@master ~]# service mysql stop
Shutting down MySQL.. SUCCESS!
[root@master ~]#

2. After master goes down, watch the /masterha/app1/manager.log file on manager:

[manager]
tail -f /masterha/app1/manager.log

The log file shows:

----- Failover Report -----

app1: MySQL Master failover master to slave1 succeeded

Master master is down!

Check MHA Manager logs at manager:/masterha/app1/manager.log for details.

Started automated (non-interactive) failover.
The latest slave slave1(192.168.56.122) has all relay logs for recovery.
Selected slave1 as a new master.
slave1: OK: Applying all logs succeeded.
slave2: This host has the latest relay log events.
Generating relay diff files from the latest slave succeeded.
slave2: OK: Applying all logs succeeded. Slave started, replicating from slave1.
slave1: Resetting slave info succeeded.
Master failover to slave1(192.168.56.122) completed successfully.
The above results show that master switched successfully.
Several issues to pay attention to during the switchover:

1. The switchover process automatically turns off read_only (on the new master).

2. After a switchover, /masterha/app1/app1.failover.complete must be deleted manually before a second test can be run.

3. Once a switchover occurs, the management process exits and monitoring stops; to test again, you need to add the failed database back into the MHA environment.

4. When the original primary node is rejoined to MHA, it can only be set up as a slave:

change master to master_host='192.168.56.122', master_user='repl', master_password='repl', master_log_file='mysql-bin.000010', master_log_pos=120;

You need to run reset slave on it first.

5. There are several ways to take over the IP address. Here MHA automatically invokes an IP alias, which has the advantage of keeping the database state consistent with the business IP switchover. After the management node starts, the VIP is automatically aliased onto the current master node. Keepalived, by contrast, can only check the health of port 3306; because it cannot check the Slave_SQL and Slave_IO threads of MySQL replication, it easily misjudges when to switch.

6. Note: the secondary slave server needs log_slave_updates enabled.

7. Manual switching requires defining the master_ip_online_change_script first; otherwise only MySQL is switched over and the IP address is not re-bound. You can configure the script from the template.

8. By setting no_master=1, a node will never become the new master node.
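The IP-alias takeover described in note 5 boils down to a few commands that a master_ip_failover script runs. A dry-run sketch; the VIP 192.168.56.200 and device eth0 are assumptions, not values from this article, and DRYRUN=echo keeps it runnable anywhere (clear DRYRUN and run as root to execute for real):

```shell
# Dry-run sketch of a VIP takeover via IP alias.
DRYRUN=echo                 # set DRYRUN= (empty) to actually execute
VIP=192.168.56.200/24       # assumed virtual IP, not from the article
DEV=eth0                    # assumed network device

# On the old master (if still reachable): release the alias
$DRYRUN ip addr del "$VIP" dev "$DEV"
# On the new master: bind the alias, then refresh neighbours' ARP caches
$DRYRUN ip addr add "$VIP" dev "$DEV"
$DRYRUN arping -c 3 -A -I "$DEV" "${VIP%/*}"
```

The gratuitous ARP at the end is what makes clients move to the new master quickly instead of waiting for their ARP entries to expire.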
Resume cluster operation
① Delete the app1.failover.complete file on manager

cd /masterha/app1
rm -f app1.failover.complete
② Start the mysql service on the original master node

service mysql start
③ On the manager management node, check for synchronization errors

masterha_check_repl --conf=/etc/masterha/app1.cnf
Thu Feb 23 15:00:56 2017 - [error][/usr/share/perl5/vendor_perl/MHA/ServerManager.pm, ln604] There are 2 non-slave servers! MHA manages at most one non-slave server. Check configurations.
④ View the binlog position on the current master, slave1

mysql> show master status\G
*************************** 1. row ***************************
             File: mysql-bin.000010
         Position: 120
     Binlog_Do_DB:
 Binlog_Ignore_DB:
Executed_Gtid_Set:
1 row in set (0.00 sec)
⑤ Configure mysql on the 121 node as the new slave and start the synchronization process

mysql> change master to master_host='192.168.56.122', master_user='repl', master_password='repl', master_log_file='mysql-bin.000010', master_log_pos=120;
mysql> start slave;
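Copying File and Position by hand from show master status is error-prone, so they can be extracted automatically. A sketch; on a real manager the input would come from `mysql -h 192.168.56.122 -e 'show master status\G'`, but here it parses the sample output above so it runs anywhere:

```shell
# Extract binlog coordinates from `show master status\G` output and build
# the change master statement. Sample input mirrors the output shown above.
status='             File: mysql-bin.000010
         Position: 120'
file=$(printf '%s\n' "$status" | awk '/File:/ {print $2}')
pos=$(printf '%s\n' "$status" | awk '/Position:/ {print $2}')
echo "change master to master_host='192.168.56.122',master_user='repl',master_password='repl',master_log_file='$file',master_log_pos=$pos;"
```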
Check the synchronization status on the management node again; this time it succeeds:

masterha_check_repl --conf=/etc/masterha/app1.cnf
Note: after the steps above, the 121 node has rejoined the cluster as a slave, but any data generated on 122/123 while it was down is missing from it. You still need to import the latest backup from the master node before starting synchronization.
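The re-seeding mentioned in the note can be done with mysqldump. A dry-run sketch, assuming the root/root credentials from app1.cnf; DRYRUN=echo keeps it runnable without live servers (clear it and run on the rejoining node to execute), and the dump path is just an example:

```shell
# Dry-run sketch: dump everything from the new master and load it locally
# before starting replication. --master-data=2 records the binlog
# coordinates as a comment at the top of the dump.
DRYRUN=echo                     # set DRYRUN= (empty) to actually execute
NEW_MASTER=192.168.56.122
DUMP="$(mktemp)"                # example path; any location with space will do
$DRYRUN mysqldump -h "$NEW_MASTER" -uroot -proot --all-databases \
    --single-transaction --master-data=2 > "$DUMP"
$DRYRUN mysql -uroot -proot < "$DUMP"
```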
⑥ Start MHA

nohup masterha_manager --conf=/etc/masterha/app1.cnf > /mha/app1/mha_manager.log 2>&1 &
Failing back:

By the same token, if the steps above are configured correctly, stopping the MySQL process of the current master lets MHA switch the master straight back to the original node.
At this point, I believe you have a deeper understanding of MHA building and fault maintenance. You might as well try it out in practice.