This article explains how to build and configure MHA, a high availability architecture for MySQL. The content has been selected and edited with care and should be a useful reference. Let's walk through how to build and configure the MySQL high availability architecture MHA.
MySQL High Availability Architecture with MHA
I. Brief Introduction
MHA (Master HA) is an open-source MySQL high availability program that adds automated master failover (automated primary failover) to a MySQL master/slave replication architecture. When MHA detects that the master node has failed, it promotes the slave node holding the most recent data to become the new master, and during this process it avoids consistency problems by retrieving the missing events from the other nodes. MHA also provides online master switching, that is, switching the master/slave roles on demand. MHA was developed by the Japanese engineer Yoshinorim (Yoshinori Matsunobu, formerly at DeNA and now at Facebook) and is a relatively mature MySQL high availability solution. MHA can complete a failover within roughly 30 seconds and, during the failover, preserves data consistency as much as possible. Taobao is also developing a similar product, TMHA, which currently supports one master with one slave.
II. MHA Services
2.1 Service roles
The MHA service has two roles, MHA Manager (management node) and MHA Node (data node):

MHA Manager: usually deployed on a separate machine to manage multiple master/slave clusters (groups); each master/slave cluster is called an application. The Manager coordinates and manages the whole cluster.

MHA Node: runs on every MySQL server (master, slaves, and the manager host). It speeds up failover through scripts that can monitor, parse, and clean up logs. It is essentially the agent that carries out the instructions issued by the management node, so it must run on every MySQL node. In short, the Node component collects the binary logs from the server, checks whether the slave that is about to be promoted to the new master already has all of the relevant events, and if not, ships the missing events to it and applies them locally before the promotion.
Within each replication group, and between the Manager and every node, password-free SSH interconnection is required. Only then, when the master fails, can the Manager connect to the nodes smoothly and perform the master/slave switchover.
2.2 Tools provided
MHA provides a number of utility programs; the common ones are listed below.

Manager node tools:
masterha_check_ssh: checks the SSH environment that MHA depends on;
masterha_check_repl: checks the MySQL replication environment;
masterha_manager: the main MHA service program;
masterha_check_status: checks the running status of MHA;
masterha_master_monitor: monitors the availability of the MySQL master node;
masterha_master_switch: performs master node switchover;
masterha_conf_host: adds or removes configured nodes;
masterha_stop: shuts down the MHA service.

Node tools (these are usually triggered by the MHA Manager scripts and need no human intervention):
save_binary_logs: saves and copies the master's binary logs;
apply_diff_relay_logs: identifies differing relay log events and applies them to the other slaves;
purge_relay_logs: purges relay logs (without blocking the SQL thread).

Custom extensions:
secondary_check_script: checks master availability over multiple network routes;
master_ip_failover_script: updates the master IP used by the application;
report_script: sends reports;
init_conf_load_script: loads initial configuration parameters;
master_ip_online_change_script: updates the master node's IP address during online switchover.
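All of the Manager-side tools above are driven by a single configuration file. A minimal sketch of how they are typically invoked, assuming the configuration file path /etc/mha_master/mha.cnf that is used later in this article:

# check SSH connectivity between all nodes defined in the config
masterha_check_ssh --conf=/etc/mha_master/mha.cnf
# check the health of the managed replication setup
masterha_check_repl --conf=/etc/mha_master/mha.cnf
# query the running status of the manager for this application
masterha_check_status --conf=/etc/mha_master/mha.cnf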
2.3 how it works
The working principle of MHA can be summarized as follows:
(1) save the binary log events (binlog events) from the crashed master;
(2) identify the slave with the latest updates;
(3) apply the differing relay log events to the other slaves;
(4) apply the binary log events saved from the master;
(5) promote one slave to be the new master.
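MHA performs all of these steps automatically, but to make step (2) concrete, here is a rough manual sketch (not MHA's actual code) of how one could compare the replication positions of the slaves to find the most up-to-date one, assuming the mhaadmin account and the hostnames created later in this article:

# compare how far each slave has read from the failed master;
# the slave with the largest Read_Master_Log_Pos (on the same Master_Log_File)
# holds the most recent data and is the best promotion candidate
for h in slave1 slave2; do
    echo "== $h =="
    mysql -umhaadmin -p'mhapass' -h "$h" -e 'show slave status\G' \
        | egrep 'Master_Log_File|Read_Master_Log_Pos'
done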
III. Implementation process
3.1 Prepare the MySQL replication environment for the experiment
3.1.1 Related configuration
MHA has specific requirements for the MySQL replication environment. For example, every node must have the binary log and relay log enabled, every slave node must explicitly enable the read-only attribute, and automatic purging of relay logs (relay_log_purge) must be disabled. These settings are explained in the configuration below. This lab environment uses four nodes, with the roles assigned as listed in the /etc/hosts entries below (all experimental machines run CentOS 7.3).
On all nodes, perform the initialization: shut down the firewall, disable SELinux, and reboot the system.
[root@master ~]# hostnamectl --static set-hostname master
[root@master ~]# systemctl status firewalld.service
[root@master ~]# systemctl stop firewalld.service
[root@master ~]# systemctl disable firewalld.service
[root@master ~]# getenforce
Enforcing
[root@master ~]# setenforce 0
[root@master ~]# vim /etc/selinux/config
# modify SELINUX=disabled
[root@master ~]# reboot
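If you prefer a non-interactive edit instead of opening vim, the same SELinux change can be scripted; a small sketch, assuming the stock /etc/selinux/config layout:

# switch SELinux to disabled in the config file and verify the change
[root@master ~]# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
[root@master ~]# grep '^SELINUX=' /etc/selinux/config
SELINUX=disabled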
To make later operations easier, we add the following entries to the /etc/hosts file on every node:
192.168.37.111 manager.qf.com manager
192.168.37.122 master.qf.com master
192.168.37.133 slave1.qf.com slave1
192.168.37.144 slave2.qf.com slave2
In this way, every node can be reached by hostname, which makes the later steps much more convenient.
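A quick sanity check that name resolution works from the manager; a sketch using the hostnames defined above:

[root@manager ~]# getent hosts master slave1 slave2
192.168.37.122  master.qf.com master
192.168.37.133  slave1.qf.com slave1
192.168.37.144  slave2.qf.com slave2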
Install MariaDB
# 1. Add the MariaDB yum repository
[root@master ~]# vim /etc/yum.repos.d/mariadb.repo
[mariadb]
name = MariaDB
baseurl = https://mirrors.ustc.edu.cn/mariadb/yum/10.4/centos7-amd64
gpgkey = https://mirrors.ustc.edu.cn/mariadb/yum/RPM-GPG-KEY-MariaDB
gpgcheck = 1
# 2. Install the packages
[root@master ~]# yum -y install MariaDB-server MariaDB-client
# 3. Initialize MariaDB
[root@master ~]# mysql_secure_installation
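After installation, make sure the service is running before initializing it; a short check (the exact version string will vary with the repository used):

# start MariaDB, enable it at boot, then confirm the client is available
[root@master ~]# systemctl enable --now mariadb
[root@master ~]# mysql -V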
3.1.2 Configuration of the initial master node
We need to modify the database configuration file of master to initialize the configuration:
[root@master ~]# vim /etc/my.cnf
[mysqld]
server-id = 1           # the id of each node in the replication cluster must be unique
log-bin = master-log    # enable the binary log
relay-log = relay-log   # enable the relay log
skip_name_resolve       # disable name resolution (optional)
[root@master ~]# systemctl restart mariadb
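Before moving on, it is worth confirming that binary logging is actually enabled; a quick check, assuming the root password 'keer' used elsewhere in this article:

# log_bin should be ON and a master-log file/position should be reported
[root@master ~]# mysql -uroot -p'keer' -e "show variables like 'log_bin'; show master status;"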
This step is complete.
3.1.3 Configuration of all slave nodes
We modify the database configuration files of the two slaves; both machines are configured as follows:
[root@slave1 ~]# vim /etc/my.cnf
[mysqld]
server-id = 2           # the id of each node in the replication cluster must be unique
relay-log = relay-log   # enable the relay log
log-bin = master-log    # enable the binary log
read_only = ON          # enable the read-only attribute
relay_log_purge = 0     # do not automatically purge relay logs that are no longer needed
skip_name_resolve       # disable name resolution (optional)
log_slave_updates = 1   # write updates received from the master to this node's binary log
[root@slave1 ~]# systemctl restart mariadb
[root@slave2 ~]# vim /etc/my.cnf
[mysqld]
server-id = 3           # the id of each node in the replication cluster must be unique
relay-log = relay-log   # enable the relay log
log-bin = master-log    # enable the binary log
read_only = ON          # enable the read-only attribute
relay_log_purge = 0     # do not automatically purge relay logs that are no longer needed
skip_name_resolve       # disable name resolution (optional)
log_slave_updates = 1   # write updates received from the master to this node's binary log
[root@slave2 ~]# systemctl restart mariadb
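As on the master, a quick verification that the slave-specific settings took effect; a sketch, again assuming the root password 'keer':

# read_only should be ON, relay_log_purge OFF, log_slave_updates ON
[root@slave1 ~]# mysql -uroot -p'keer' -e "show variables where Variable_name in ('read_only','relay_log_purge','log_slave_updates');"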
This step is complete.
3.1.4 Configure the one-master, multi-slave replication architecture
On the master node:
"authorize synchronization data account
2 [root@master] # mysql-uroot-pawkeer`
3MariaDB [(none)] > grant replication slave,replication client on. To 'slave'@'192.168.%.%' identified by' keer'
"backup data is exported to the slave library
5 [root@master ~] # mysqldump-uroot-ptaker'--all-databases > date +% F-mysql-all.sql
6 [root@master ~] # scp * mysql-all.sql 192.168.20.100:/root
7 [root@master] # mysql-uroot-pawkeer`
8 MariaDB [(none)] > show master status
On the slave node:
"Import data
2 [root@slave1] # mysql-uroot-pawkeer`
< *mysql-all.sql 3 [root@slave1 ~]# mysql -uroot -p'keer' 4 MariaDB [(none)]>Change master to master_host='192.168.37.122'
5-> master_user='slave'
6-> master_password='keer'
7-> master_log_file='mysql-bin.000001'
8-> master_log_pos=415
9 MariaDB [(none)] > start slave
10 MariaDB [(none)] > show slave status\ G
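To confirm that replication is actually running on each slave, the two thread states and the replication lag can be checked in one line; a quick sketch using the root password from above:

# both threads should say Yes and Seconds_Behind_Master should be 0 (or close to it)
[root@slave1 ~]# mysql -uroot -p'keer' -e 'show slave status\G' | egrep 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master'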
This step is complete.
3.2 Install and configure MHA
3.2.1 Authorization on master
On all MySQL nodes, a user with administrative privileges must be able to connect remotely from the other hosts in the local network. However, because the grant is replicated to the slaves, you only need to (and should only) run the following SQL statement on the master node at this point.
[root@master ~]# mysql -uroot -p'keer'
MariaDB [(none)]> grant all on *.* to 'mhaadmin'@'192.168.%.%' identified by 'mhapass';
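To confirm the grant works and has been replicated, you can try a remote connection from another node; a quick sketch using the hostnames defined earlier:

# should connect and print the authenticated account
[root@slave1 ~]# mysql -umhaadmin -p'mhapass' -h master -e 'select current_user();'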
This step is complete.
3.2.2 Prepare the SSH trust environment
All nodes in the MHA cluster need to communicate with each other over SSH with mutual trust, so that the Manager can perform remote control and data management. For simplicity, each node generates a key pair and copies its public key to the Manager; the Manager's authorized_keys file is then copied back to all the remaining nodes. The following commands are run on every node:
[root@manager ~]# ssh-keygen -t rsa
[root@manager ~]# ssh-copy-id -i .ssh/id_rsa.pub root@manager
After all four machines have done this, we can see the following files on the manager machine:
[root@manager ~]# cd .ssh/
[root@manager .ssh]# ls
authorized_keys  id_rsa  id_rsa.pub  known_hosts
[root@manager .ssh]# cat authorized_keys
The public keys of all four machines are now in the authorized_keys file. We just need to send this file to the other three machines, and then all four machines can reach each other over SSH without a password:
[root@manager .ssh]# scp authorized_keys root@master:~/.ssh/
[root@manager .ssh]# scp authorized_keys root@slave1:~/.ssh/
[root@manager .ssh]# scp authorized_keys root@slave2:~/.ssh/
Of course, we can also verify on each machine that ssh no longer asks for a password, as shown in the sketch below. This step is complete.
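A quick loop from the manager to test password-free logins, assuming the hostnames from /etc/hosts above:

# each line should print the remote hostname without prompting for a password
[root@manager ~]# for h in master slave1 slave2; do ssh root@$h hostname; done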
3.2.3 install the MHA package
In this step, the Manager node needs to install an additional package. The specific items to be installed are as follows:
All four nodes need to install: mha4mysql-node-0.56-0.el6.noarch.rpm. The Manager node additionally needs: mha4mysql-manager-0.56-0.el6.noarch.rpm.
I have uploaded the required packages to a Baidu Cloud disk (password: mkcr). Download the packages you need, upload them to each machine with the rz command, and then install them with yum.
[root@manager ~]# rz
[root@manager ~]# ls
anaconda-ks.cfg  initial-setup-ks.cfg  Pictures
Desktop    mha4mysql-manager-0.56-0.el6.noarch.rpm  Public
Documents  mha4mysql-node-0.56-0.el6.noarch.rpm     Templates
Downloads  Music  Videos
[root@manager ~]# yum install -y mha4mysql-node-0.56-0.el6.noarch.rpm
[root@manager ~]# yum install -y mha4mysql-manager-0.56-0.el6.noarch.rpm
The remaining machines are installed in the same way, so the output is not repeated here. Note that the MariaDB yum repository must be configured on each node before these packages can be installed. This step is complete.
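You can confirm the installation on each machine; a quick check (the node package should appear on all four machines, the manager package only on the manager):

[root@manager ~]# rpm -qa | grep mha4mysql
mha4mysql-node-0.56-0.el6.noarch
mha4mysql-manager-0.56-0.el6.noarch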
3.2.4 Initialize the MHA configuration
The Manager node needs a dedicated configuration file for each monitored master/slave cluster, and all master/slave clusters can additionally share a global configuration. The global configuration file defaults to /etc/masterha_default.cnf and is optional. If only one group of master/slave clusters is monitored, the default configuration of each server can also be provided directly in the application's own configuration file. The configuration file path of each application is chosen by the administrator; see the next step for the details.
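To illustrate the split between global and per-application settings, here is a minimal sketch of what a shared /etc/masterha_default.cnf could contain, reusing the credentials defined in this article; this file is optional, and the same keys can instead live in the application configuration shown in the next step:

[server default]
user=mhaadmin            # MHA administrative user shared by all applications
password=mhapass
ssh_user=root
repl_user=slave
repl_password=keer
ping_interval=1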
3.2.5 Define the MHA management configuration file
Create a dedicated working directory and configuration file for MHA on the Manager node (the administrative user created earlier on the MySQL master node has already been replicated to the slave nodes automatically):
[root@manager ~]# mkdir /etc/mha_master
[root@manager ~]# vim /etc/mha_master/mha.cnf
The configuration file is as follows
[server default]                            # settings shared by server1, server2 and server3
user=mhaadmin                               # MHA administrative user
password=mhapass                            # password of the MHA administrative user
manager_workdir=/etc/mha_master/app1        # working directory of the manager
manager_log=/etc/mha_master/manager.log     # log file of the manager
remote_workdir=/mydata/mha_master/app1      # working directory on each remote host
ssh_user=root                               # user for ssh key-based authentication
repl_user=slave                             # replication user name
repl_password=keer                          # replication user password
ping_interval=1                             # ping interval, in seconds
[server1]                                   # first node (current master)
hostname=192.168.37.122                     # address of this node
ssh_port=22                                 # ssh port of this node
candidate_master=1                          # whether this node may become the new master
[server2]
hostname=192.168.37.133
ssh_port=22
candidate_master=1
[server3]
hostname=192.168.37.144
ssh_port=22
candidate_master=1
This step is complete.
3.2.6 Check the four nodes
1) Check whether the SSH mutual-trust configuration between the nodes is OK. Run the following command on the Manager machine:
[root@manager ~]# masterha_check_ssh --conf=/etc/mha_master/mha.cnf
If the last line of the output reads [info] All SSH connection tests passed successfully., the check has succeeded.
2) Check whether the connection and configuration parameters of the managed MySQL replication cluster are OK:
[root@manager ~]# masterha_check_repl --conf=/etc/mha_master/mha.cnf
This check fails. The likely reason is that the replication account does not yet exist on the slave nodes: in this architecture any slave may be promoted to master, so the account must exist on every node. We therefore run the grant once more on the master node:
[root@master ~]# mysql -uroot -p'keer'
MariaDB [(none)]> grant replication slave,replication client on *.* to 'slave'@'192.168.%.%' identified by 'keer';
MariaDB [(none)]> flush privileges;
After performing this operation, we run the test command again:
[root@manager ~]# masterha_check_repl --conf=/etc/mha_master/mha.cnf
Thu Nov 23 09:07:08 2017 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Thu Nov 23 09:07:08 2017 - [info] Reading application default configuration from /etc/mha_master/mha.cnf..
Thu Nov 23 09:07:08 2017 - [info] Reading server configuration from /etc/mha_master/mha.cnf..
...
MySQL Replication Health is OK.
As you can see, the check now passes. This step is complete.
3.3 Starting MHA
We start MHA by executing the following command on the manager node:
[root@manager ~]# nohup masterha_manager --conf=/etc/mha_master/mha.cnf &> /etc/mha_master/manager.log &
[1] 7598
After the startup is successful, let's take a look at the status of the master node:
[root@manager ~]# masterha_check_status --conf=/etc/mha_master/mha.cnf
mha (pid:7598) is running (0:PING_OK), master:192.168.37.122
"mha (pid:7598) is running (0:PING_OK)" in the above information indicates that the MHA service is running OK, otherwise it will be displayed as similar to "mha is stopped (1:NOT_RUNNING)." If we want to stop MHA, we need to use the stop command:
[root@manager ~]# masterha_stop --conf=/etc/mha_master/mha.cnf
3.4 Test MHA failover
3.4.1 Shut down the MariaDB service on the master node to simulate a crash of the master
[root@master ~]# killall -9 mysqld mysqld_safe
[root@master ~]# rm -rf /var/lib/mysql/*
3.4.2 View the log on the manager node
Let's take a look at the log:
[root@manager ~]# tail -200 /etc/mha_master/manager.log
...
Thu Nov 23 09:17:19 2017 - [info] Master failover to 192.168.37.133 (192.168.37.133) completed successfully.
This indicates that the manager detected the failure of node 192.168.37.122 and automatically performed a failover, promoting 192.168.37.133 to the new master. Note that after the failover completes, the manager stops automatically, and running masterha_check_status at this point reports an error, as follows:
[root@manager ~]# masterha_check_status --conf=/etc/mha_master/mha.cnf
mha is stopped (2:NOT_RUNNING).
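To confirm the failover from the database side as well, you can check on the remaining slave that it now replicates from the new master; a quick sketch, assuming the root password used earlier:

# Master_Host should now point at the new master, 192.168.37.133
[root@slave2 ~]# mysql -uroot -p'keer' -e 'show slave status\G' | grep Master_Host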
3.5 Provide a new slave node to repair the replication cluster
After the original master node fails, a new MySQL node has to be prepared. Restore its data from a backup taken on the new master, and then configure it as a slave of the new master. Note that if the newly added node is a brand-new machine, its IP address should be configured to be the IP of the original master; otherwise the corresponding IP address in mha.cnf must be modified. Then start the manager again and check its status. Here we reuse the master that was just shut down as the newly added machine and restore its database. The original slave1 has become the new master, so we take a full backup of it and send the backup to the newly added machine:
[root@slave1 ~]# mkdir /backup
[root@slave1 ~]# mysqldump -uroot -p'keer' --all-databases > /backup/`date +%F-%T`-mysql-all.sql
[root@slave1 ~]# scp /backup/2017-11-23-09\:57\:09-mysql-all.sql root@master:~/
Data recovery is then performed on the original master node:
# start and initialize MariaDB
[root@master ~]# systemctl start mariadb
[root@master ~]# mysql_secure_installation
# restore the data
[root@master ~]# mysql < 2017-11-23-09\:57\:09-mysql-all.sql
The next step is to configure replication. As usual, check the current master's binary log file and position, and then apply the following settings:
# check the information of the new master
[root@slave1 ~]# mysql -uroot -p'keer'
MariaDB [(none)]> show master status;
# configure replication on the restored node
[root@master ~]# mysql -uroot -p'keer'
MariaDB [(none)]> show master status;
MariaDB [(none)]> change master to master_host='192.168.37.133', master_user='slave', master_password='keer', master_log_file='mysql-bin.000006', master_log_pos=925;
MariaDB [(none)]> start slave;
MariaDB [(none)]> show slave status\G
             Slave_IO_State: Waiting for master to send event
                Master_Host: 192.168.37.133
                Master_User: slave
                Master_Port: 3306
              Connect_Retry: 60
            Master_Log_File: mysql-bin.000006
        Read_Master_Log_Pos: 925
             Relay_Log_File: mysql-relay-bin.000002
              Relay_Log_Pos: 529
      Relay_Master_Log_File: mysql-bin.000006
           Slave_IO_Running: Yes
          Slave_SQL_Running: Yes
...
As you can see, the master/slave relationship has been configured. This step is complete.
3.6 Run the checks again after the new node is provided
Let's check the status again:
[root@manager ~]# masterha_check_repl --conf=/etc/mha_master/mha.cnf
If an error is reported, re-run the authorization (see above for details). If there is no problem, start the manager. Note that the log file must be specified for this startup:
[root@manager ~]# masterha_manager --conf=/etc/mha_master/mha.cnf > /etc/mha_master/manager.log 2>&1 &
[1] 10012
After the startup is successful, let's take a look at the status of the master node:
[root@manager ~]# masterha_check_status --conf=/etc/mha_master/mha.cnf
mha (pid:9561) is running (0:PING_OK), master:192.168.37.133
Our service has been successfully resumed. This step is complete.
3.7 Notes on bringing the new node online and recovering from failover
1) In a production environment, when your master node dies, be sure to make a backup on a slave node, manually rebuild the failed master from that backup as a new slave, and specify from which position in the log file replication should start.
2) After each automatic failover, the (replication health) check will not pass and the manager cannot simply be restarted; the failed master must be repaired manually first (unless you change the configuration file).
3) After manually repairing the failed master and demoting it to a slave, run the status command again:
[root@manager ~]# masterha_check_status --conf=/etc/mha_master/mha.cnf
mha (pid:9561) is running (0:PING_OK), master:192.168.37.133
4) Then start the manager again, and the cluster is fully recovered:
[root@manager ~]# masterha_manager --conf=/etc/mha_master/mha.cnf
After reading the above on how to build and configure the MySQL high availability architecture MHA, you should have a good understanding of the whole setup.