How to Build a MySQL MHA Cluster

This article walks through building a MySQL MHA high-availability cluster: installing the software, configuring semi-synchronous master-slave replication, configuring the MHA manager, and testing failover.
Introduction and installation of MHA Cluster
MHA (Master High Availability)
- Developed by youshimaton of the Japanese company DeNA (now at Facebook).
- An excellent suite of high-availability software for failover and master promotion in MySQL environments.
- Currently one of the more mature solutions for MySQL high availability.
- During a MySQL failover, MHA can complete the database switchover automatically within roughly 0-30 seconds.
- Throughout the failover, MHA preserves data consistency to the greatest extent possible, achieving true high availability.
MHA composition
MHA Manager (management node)
- Can be deployed on a separate machine to manage multiple master-slave clusters, or on one of the slave nodes.
MHA Node (data node)
- Runs on every MySQL server.
MHA working process
MHA Manager periodically probes the master node in the cluster. When the master fails, it automatically promotes the slave with the most recent data to be the new master and repoints all other slaves to it. The entire failover process is completely transparent to the application.
- (1) Save the binary log events (binlog events) from the crashed master.
- (2) Identify the slave with the most recent updates.
- (3) Apply the differential relay logs (relay log) to the other slaves.
- (4) Apply the binlog events saved from the master.
- (5) Promote one slave to be the new master.
- (6) Point the other slaves at the new master and resume replication.
Topology:

                    master51
                       |
        +---------+----+----+---------+
        |         |         |         |
     slave52   slave53   slave54   slave55        mgm56
   (candidate (candidate                      (MHA Manager)
     master)    master)
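For reference, a hypothetical /etc/hosts mapping that matches the IPs and host names used in the commands below (the name-to-IP pairing is an assumption; the walkthrough itself only uses the raw addresses):

192.168.4.51    master51
192.168.4.52    slave52
192.168.4.53    slave53
192.168.4.54    slave54
192.168.4.55    slave55
192.168.4.56    mgm56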
First, configure password-less SSH key-pair login between all hosts
1.1 Create a key pair on each database server, then copy the public key to the other four database servers
[root@51 ~]# ssh-keygen                                              # create a key pair
[root@51 ~]# for i in 192.168.4.{52..56}; do ssh-copy-id $i; done    # copy the public key to the other servers
1.2 Configure the manager host 56 for password-less SSH login to all data node hosts
[root@56 ~]# ssh-keygen                                              # create a key pair
[root@56 ~]# for i in 192.168.4.{51..55}; do ssh-copy-id $i; done
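As a quick sanity check, a one-liner sketch to confirm the manager can reach every data node without a password prompt:

[root@56 ~]# for i in 192.168.4.{51..55}; do ssh $i hostname; done    # each hostname should print with no password prompt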
Second, install the software package
2.1 Install the perl packages on all hosts (51-56); host 51 is shown as the example
[root@51 ~]# yum -y install perl-*.rpm
2.2 Install the mha_node package and its perl dependencies on all data node hosts (51-56)
[root@51 mha-soft-student]# yum -y install perl-DBD-MySQL perl-DBI
[root@56 mha-soft-student]# rpm -ivh mha4mysql-node-0.56-0.el6.noarch.rpm
2.3 Install the mha_manager package's perl dependencies only on the management host 56
[root@56 mha-soft-student]# yum -y install perl-ExtUtils-* perl-CPAN*
2.4 Compile and install mha4mysql-manager on host 56
[root@56 mha-soft-student]# tar -zxvf mha4mysql-manager-0.56.tar.gz
[root@56 mha-soft-student]# cd mha4mysql-manager-0.56/
[root@56 mha4mysql-manager-0.56]# perl Makefile.PL
[root@56 mha4mysql-manager-0.56]# make && make install
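A quick way to confirm that make install put the manager tools on the PATH (the exact install location may vary):

[root@56 mha4mysql-manager-0.56]# which masterha_manager masterha_check_ssh masterha_check_repl masterha_check_status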
Third, configure master-slave synchronization. The requirements are as follows:
51 master: enable semi-synchronous replication
52 slave (candidate master): enable semi-synchronous replication
53 slave (candidate master): enable semi-synchronous replication
54 slave: not a candidate master, so semi-synchronous replication is not needed
55 slave: not a candidate master, so semi-synchronous replication is not needed
56 management host
3.1 Configure the master 51
[root@51 ~]# vim /etc/my.cnf
[mysqld]
plugin-load="rpl_semi_sync_master=semisync_master.so;rpl_semi_sync_slave=semisync_slave.so"
rpl-semi-sync-master-enabled=1
rpl-semi-sync-slave-enabled=1
server_id=51
log-bin=master51
binlog-format="mixed"
:wq
[root@51 ~]# systemctl restart mysqld
[root@51 ~]# ls /var/lib/mysql/master51.*
/var/lib/mysql/master51.000001  /var/lib/mysql/master51.index
[root@51 ~]# mysql -uroot -p123456
mysql> grant replication slave on *.* to harry@"%" identified by "123456";
Query OK, 0 rows affected, 1 warning (0.00 sec)
mysql> set global relay_log_purge=off;    # do not automatically delete local relay log files
Query OK, 0 rows affected (0.00 sec)
mysql> show master status;
+-----------------+----------+--------------+------------------+-------------------+
| File            | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+-----------------+----------+--------------+------------------+-------------------+
| master51.000001 |      438 |              |                  |                   |
+-----------------+----------+--------------+------------------+-------------------+
mysql> quit
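Before moving on, it is worth confirming that the semi-sync plugins actually loaded and are switched on; the variable and status names below are the stock MySQL semisync ones:

mysql> show global variables like 'rpl_semi_sync%enabled';         # both master and slave flags should be ON
mysql> show global status like 'Rpl_semi_sync_master_clients';     # number of semi-sync slaves currently connected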
3.2 Configure the candidate master 52
[root@52 ~]# vim /etc/my.cnf
[mysqld]
plugin-load="rpl_semi_sync_master=semisync_master.so;rpl_semi_sync_slave=semisync_slave.so"
rpl-semi-sync-master-enabled=1
rpl-semi-sync-slave-enabled=1
server_id=52
log-bin=master52
binlog-format="mixed"
[root@52 ~]# systemctl restart mysqld
[root@52 ~]# ls /var/lib/mysql/master52.*
/var/lib/mysql/master52.000001  /var/lib/mysql/master52.index
[root@52 ~]# mysql -uroot -p123456
mysql> grant replication slave on *.* to harry@"%" identified by "123456";
Query OK, 0 rows affected, 1 warning (0.00 sec)
mysql> set global relay_log_purge=off;    # do not automatically delete local relay log files
Query OK, 0 rows affected (0.00 sec)
mysql> change master to master_host="192.168.4.51",
    -> master_user="harry",
    -> master_password="123456",
    -> master_log_file="master51.000001",
    -> master_log_pos=438;
Query OK, 0 rows affected, 2 warnings (0.04 sec)
mysql> start slave;
Query OK, 0 rows affected (0.00 sec)
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.4.51
Master_User: harry
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: master51.000001
Read_Master_Log_Pos: 438
Relay_Log_File: 52-relay-bin.000002
Relay_Log_Pos: 319
Relay_Master_Log_File: master51.000001
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
3.3 Configure the candidate master 53
[root@53 ~]# vim /etc/my.cnf
[mysqld]
plugin-load="rpl_semi_sync_master=semisync_master.so;rpl_semi_sync_slave=semisync_slave.so"
rpl-semi-sync-master-enabled=1
rpl-semi-sync-slave-enabled=1
server_id=53
log-bin=master53
binlog-format="mixed"
[root@53 ~]# systemctl restart mysqld
[root@53 ~]# ls /var/lib/mysql/master53.*
/var/lib/mysql/master53.000001  /var/lib/mysql/master53.index
[root@53 ~]# mysql -uroot -p123456
mysql> grant replication slave on *.* to harry@"%" identified by "123456";
Query OK, 0 rows affected, 1 warning (0.00 sec)
mysql> set global relay_log_purge=off;    # do not automatically delete local relay log files
Query OK, 0 rows affected (0.00 sec)
mysql> change master to master_host="192.168.4.51",
    -> master_user="harry",
    -> master_password="123456",
    -> master_log_file="master51.000001",
    -> master_log_pos=438;
Query OK, 0 rows affected, 2 warnings (0.04 sec)
mysql> start slave;
Query OK, 0 rows affected (0.00 sec)
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.4.51
Master_User: harry
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: master51.000001
Read_Master_Log_Pos: 438
Relay_Log_File: 53-relay-bin.000002
Relay_Log_Pos: 319
Relay_Master_Log_File: master51.000001
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
...
mysql> quit
3.4 Configure the slave 54
[root@54 ~]# vim /etc/my.cnf
[mysqld]
server_id=54
:wq
[root@54 ~]# systemctl restart mysqld
[root@54 ~]# mysql -uroot -p123456
mysql> change master to master_host="192.168.4.51",
    -> master_user="harry",
    -> master_password="123456",
    -> master_log_file="master51.000001",
    -> master_log_pos=438;
Query OK, 0 rows affected, 2 warnings (0.13 sec)
mysql> start slave;
Query OK, 0 rows affected (0.00 sec)
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.4.51
Master_User: harry
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: master51.000001
Read_Master_Log_Pos: 438
Relay_Log_File: 54-relay-bin.000002
Relay_Log_Pos: 319
Relay_Master_Log_File: master51.000001
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
3.5 Configure the slave 55
[root@55 ~]# vim /etc/my.cnf
[mysqld]
server_id=55
:wq
[root@55 ~]# systemctl restart mysqld
[root@55 ~]# mysql -uroot -p123456
mysql> change master to master_host="192.168.4.51",
    -> master_user="harry",
    -> master_password="123456",
    -> master_log_file="master51.000001",
    -> master_log_pos=438;
Query OK, 0 rows affected, 2 warnings (0.04 sec)
mysql> start slave;
Query OK, 0 rows affected (0.01 sec)
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.4.51
Master_User: harry
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: master51.000001
Read_Master_Log_Pos: 438
Relay_Log_File: 55-relay-bin.000002
Relay_Log_Pos: 319
Relay_Master_Log_File: master51.000001
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
3.6 Test the master-slave synchronization configuration on the client side
3.6.1 On the master 51, add an authorized user and create some test data
[root@51 ~]# mysql -uroot -p123456
mysql> grant all on gamedb.* to admin@"%" identified by "123456";
Query OK, 0 rows affected, 1 warning (0.00 sec)
mysql> create database gamedb;
Query OK, 1 row affected (0.06 sec)
mysql> create table gamedb.t1(id int);
mysql> insert into gamedb.t1 values(222);
Query OK, 1 row affected (0.03 sec)
mysql> insert into gamedb.t1 values(222);
Query OK, 1 row affected (0.04 sec)
Then view the data on the slaves.
3.6.2 Connecting to any of the slaves 52-55 shows the same databases, tables, and records.
[root@52 ~]# mysql -uroot -p123456
mysql> select * from gamedb.t1;
+------+
| id   |
+------+
|  222 |
|  222 |
+------+
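To spot-check all four slaves in one pass, the loop below can be run from the manager host; it is a sketch that assumes the MySQL root account accepts remote logins, which may need an extra grant in your setup:

[root@56 ~]# for i in 192.168.4.{52..55}; do mysql -h $i -uroot -p123456 -e 'select * from gamedb.t1'; done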
Edit the main configuration file on the management host (56)
[root@56 mha-soft-student]# cp mha4mysql-manager-0.56/bin/* /usr/local/bin/
[root@56 mha-soft-student]# mkdir /etc/mha_manager/
[root@56 mha-soft-student]# cp mha4mysql-manager-0.56/samples/conf/app1.cnf /etc/mha_manager/
[root@56 ~]# vim /etc/mha_manager/app1.cnf
[server default]
manager_workdir=/etc/mha_manager
manager_log=/etc/mha_manager.log
master_ip_failover_script=/usr/local/bin/master_ip_failover
ssh_user=root
ssh_port=22
repl_user=harry          # master-slave replication user name
repl_password=123456     # master-slave replication password
user=root                # user name for connecting to the databases
password=123456          # password for connecting to the databases
[server1]
hostname=192.168.4.51
candidate_master=1       # may be promoted to master
port=3306
[server2]
hostname=192.168.4.52
candidate_master=1
port=3306
[server3]
hostname=192.168.4.53
candidate_master=1
port=3306
[server4]
hostname=192.168.4.54
no_master=1              # never promoted to master
port=3306
[server5]
hostname=192.168.4.55
no_master=1
port=3306
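app1.cnf points at /usr/local/bin/master_ip_failover, so that script must exist and be executable before the manager starts. One way to provide it, assuming you begin from the sample script shipped in the manager tarball:

[root@56 mha-soft-student]# cp mha4mysql-manager-0.56/samples/scripts/master_ip_failover /usr/local/bin/
[root@56 mha-soft-student]# chmod +x /usr/local/bin/master_ip_failover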
Check SSH connectivity with masterha_check_ssh on the management node
[root@56 mha_manager]# masterha_check_ssh --conf=/etc/mha_manager/app1.cnf
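When the key pairs from step 1 are in place, the check should end with a line similar to:

[info] All SSH connection tests passed successfully.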
Test the master-slave synchronization status
Note: when checking master-slave synchronization, comment out the master_ip_failover_script=/usr/local/bin/master_ip_failover line in app1.cnf, otherwise the check fails.
[root@56 mha_manager]# vim /etc/mha_manager/app1.cnf
#master_ip_failover_script=/usr/local/bin/master_ip_failover
[root@56 ~]# masterha_check_repl --conf=/etc/mha_manager/app1.cnf
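On a healthy cluster the report should end with a line similar to:

MySQL Replication Health is OK.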
Fourth, test the high-availability cluster configuration
4.1 Manually bring up the VIP 192.168.4.100 on the master
[root@51 ~]# ifconfig eth0:1 192.168.4.100/24
[root@51 ~]# ifconfig eth0:1
4.2 Set the VIP in the failover script
[root@56 ~]# vim /etc/mha_manager/master_ip_failover
my $vip = '192.168.4.100/24';  # Virtual IP
my $key = "1";
my $ssh_start_vip = "/sbin/ifconfig eth0:$key $vip";
my $ssh_stop_vip = "/sbin/ifconfig eth0:$key down";
:wq
(Remember to re-enable the master_ip_failover_script line in app1.cnf that was commented out for the replication check.)
4.3 Start the management service and view its status
Start MHA Manager monitoring:
- masterha_manager: the start command
- --remove_dead_master_conf: after failover, remove the failed master's entry from the app1.cnf file
- --ignore_last_failover: ignore the state file left behind by the previous failover
[root@56 ~]# masterha_manager --conf=/etc/mha_manager/app1.cnf --remove_dead_master_conf --ignore_last_failover
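masterha_manager stays attached to the terminal; a common way to run it in the background instead (a convenience pattern, not part of the original steps) is:

[root@56 ~]# nohup masterha_manager --conf=/etc/mha_manager/app1.cnf --remove_dead_master_conf --ignore_last_failover &> /var/log/mha_manager.log &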
View the status with masterha_check_status:
[root@56 ~]# masterha_check_status --conf=/etc/mha_manager/app1.cnf
app1 (pid:9399) is running(0:PING_OK), master:192.168.4.51
Stop the service with masterha_stop:
[root@host56 bin]# masterha_stop --conf=/etc/mha_manager/app1.cnf
Stopped app1 successfully.
4.4 Test failover
Power off the master 51:
[root@51 ~]# shutdown -h now
4.5 Check the service status on the management host (if the service has stopped, start it manually, then check again)
[root@56 ~]# masterha_check_status --conf=/etc/mha_manager/app1.cnf
app1 (pid:17507) is running(0:PING_OK), master:192.168.4.52
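Since the manager was started with --remove_dead_master_conf, the failed master's [server1] entry should also have been dropped from app1.cnf; a quick grep confirms it:

[root@56 ~]# grep -A2 '\[server' /etc/mha_manager/app1.cnf    # 192.168.4.51 should no longer appear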
4.6 Check locally on 52 whether it has taken over the VIP
[root@52 ~]# ip addr show | grep 192.168.4
inet 192.168.4.52/24 brd 192.168.4.255 scope global eth0
inet 192.168.4.100/24 brd 192.168.4.255 scope global secondary eth0:1
4.7 Connect a client to the VIP and access the data service
[root@58 ~]# mysql -h192.168.4.100 -uwebadmin -p123456
Checking the VIP address
When the primary database server goes down, check the VIP on the candidate master:
[root@server0 ~]# ip addr show | grep <vip address>
Manually configuring the VIP:
[root@server0 ~]# ifconfig ethX:1 x.x.x.x/32
That covers how to build a MySQL MHA cluster; hopefully the walkthrough above proves useful in practice.