MySQL 5.6 highly available architecture for read-write separation through MMM

2025-03-19 Update From: SLTechnology News&Howtos


In the previous post, we built a simple MySQL high-availability architecture with keepalived + master-slave synchronization (see https://blog.51cto.com/jiangjianlong/1981994 for details). Today we will share a high-availability cluster built with MMM to achieve MySQL read-write separation. MMM (Master-Master Replication Manager for MySQL) is a MySQL multi-master replication manager written in Perl. It provides a read-write-separated high-availability architecture: write requests go to a highly available master library, while read requests are load-balanced across the slave libraries. The architecture diagram for this article is as follows:

I. Deployment environment

II. Environmental preparation

1. Configure the IP address and hostname, turn off the firewall and selinux

2. Install MySQL 5.6 on mysql-master01, mysql-master02, mysql-slave01 and mysql-slave02, respectively. For the installation method, see the previous blog post "MySQL 5.6.38 is installed on RedHat 6.5 through the RPM package". Be sure to also install the mysql-devel RPM package.

3. Configure hosts resolution for 5 servers:

192.168.10.81 mysql-master01

192.168.10.82 mysql-master02

192.168.10.83 mysql-slave01

192.168.10.84 mysql-slave02

192.168.10.85 mysql-monitor
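The five entries above can be appended to /etc/hosts in one step. A minimal sketch (it writes to a hypothetical ./hosts.sample file so it is safe to run anywhere; on the real servers the target would be /etc/hosts, appended with >>):

```shell
# Write the five MMM node entries to a sample hosts file.
# Target path ./hosts.sample is illustrative; use /etc/hosts on real servers.
cat > ./hosts.sample <<'EOF'
192.168.10.81 mysql-master01
192.168.10.82 mysql-master02
192.168.10.83 mysql-slave01
192.168.10.84 mysql-slave02
192.168.10.85 mysql-monitor
EOF
```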

4. Configure the RedHat 6.5 installation media as a local yum source.

III. Configure the two masters as master and slave to each other

1. Edit the /etc/my.cnf files of mysql-master01 and mysql-master02 as follows and restart the mysql service:

Mysql-master01:

[mysqld]
server-id=1                  # must be different on each server
log-bin=mysql-bin
auto_increment_increment=2   # step size of auto-increment fields
auto_increment_offset=1      # starting value of auto-increment fields; set to 2 on master02
replicate-do-db=all          # databases to synchronize, multiple entries separated by commas
log_slave_updates            # so that when one master fails, the other can take over immediately
sync_binlog=1                # flush the binlog on every update (safer; the default is 0)
read_only=1

[mysqld_safe]
log-error=/var/log/mysqld.log

Mysql-master02:

[mysqld]
server-id=2                  # must be different on each server
log-bin=mysql-bin
auto_increment_increment=2   # step size of auto-increment fields
auto_increment_offset=2      # starting value of auto-increment fields; master02 uses 2
replicate-do-db=all          # databases to synchronize, multiple entries separated by commas
log_slave_updates            # so that when one master fails, the other can take over immediately
sync_binlog=1                # flush the binlog on every update (safer; the default is 0)
read_only=1

[mysqld_safe]
log-error=/var/log/mysqld.log
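The auto_increment_increment / auto_increment_offset pair above is what keeps the two masters from handing out colliding auto-increment ids: both step by 2, but each starts at a different offset, so each master owns a disjoint arithmetic progression. A quick shell sketch of the sequences each master would generate:

```shell
# increment=2 with offset=1 (master01) vs. offset=2 (master02):
# the two id sequences never overlap.
master01_ids=$(seq 1 2 9 | xargs)    # 1 3 5 7 9
master02_ids=$(seq 2 2 10 | xargs)   # 2 4 6 8 10
echo "master01: $master01_ids"
echo "master02: $master02_ids"
```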

2. Create an account repl for synchronization on mysql-master01 with a password of 123456, query the master status, and note down the File and Position values:

mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%' IDENTIFIED BY '123456';

mysql> show master status;

3. Log in to mysql on mysql-master02 and execute the following statement to enable the slave server

mysql> change master to master_host='192.168.10.81', master_user='repl', master_password='123456', master_log_file='mysql-bin.000003', master_log_pos=318;

mysql> start slave;

mysql> show slave status\G
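What to look for in that output: the Slave_IO_Running and Slave_SQL_Running fields must both read Yes. A minimal shell check, run here against a captured excerpt of `show slave status\G` output rather than a live server (on a real server you would pipe the live output into the same greps):

```shell
# Captured excerpt standing in for `mysql -e 'show slave status\G'` output.
status='Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Seconds_Behind_Master: 0'
if echo "$status" | grep -q 'Slave_IO_Running: Yes' &&
   echo "$status" | grep -q 'Slave_SQL_Running: Yes'; then
    echo "replication healthy"
else
    echo "replication broken"
fi
```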

4. Then create the same synchronization account repl on mysql-master02 with a password of 123456, query the master status, and note down the File and Position values:

mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%' IDENTIFIED BY '123456';

mysql> show master status;

5. Log in to mysql on mysql-master01 and execute the following statement to enable the slave server. Note that master_host needs to enter the IP of mysql-master02.

mysql> change master to master_host='192.168.10.82', master_user='repl', master_password='123456', master_log_file='mysql-bin.000006', master_log_pos=318;

mysql> start slave;

mysql> show slave status\G

6. The two masters are now configured as master and slave to each other. Let's test it. First, execute the following statements on mysql-master01 to create a database and a table and insert a row of data:

mysql> create database names_test;

mysql> use names_test;

mysql> create table names (id int(5), name char(15));

mysql> insert into names values (01,'jiangjianlong');

mysql> select * from names;

7. Then query on mysql-master02 and find that the data has already been synchronized, which shows that replication with mysql-master02 as a slave of mysql-master01 is working normally.

8. Similarly, create a database and a table and insert a row of data on mysql-master02:

mysql> create database blogs_test;

mysql> use blogs_test;

mysql> create table blogs (id int(5), name char(35));

mysql> insert into blogs values (01,'jiangjianlong.blog.51cto.com');

mysql> select * from blogs;

9. After querying on mysql-master01, it is found that the data has also been synchronized, indicating that mysql-master01, as a slave library of mysql-master02, can also replicate normally.

IV. Configure the two slaves as slave libraries of master01

1. mysql-slave01 and mysql-slave02 use the following my.cnf files, respectively:

Mysql-slave01:

[mysqld]
server-id=3                  # must be different on each server
log-bin=mysql-bin
replicate-do-db=all
read_only=1

[mysqld_safe]
log-error=/var/log/mysqld.log

Mysql-slave02:

[mysqld]
server-id=4                  # must be different on each server
log-bin=mysql-bin
replicate-do-db=all
read_only=1

[mysqld_safe]
log-error=/var/log/mysqld.log

2. Based on the results found by executing show master status on mysql-master01, execute the following sql statement on mysql-slave01 and mysql-slave02, respectively, to enable the slave server:

mysql> change master to master_host='192.168.10.81', master_user='repl', master_password='123456', master_log_file='mysql-bin.000004', master_log_pos=120;

mysql> start slave;

mysql> show slave status\G

V. Install the Perl modules required by MMM

1. The Perl modules required by MMM for all MySQL nodes are as follows:

2. Take mysql-master01 as an example of installing the perl modules; the other three MySQL servers are done the same way. First, check which of these perl modules are not yet installed:

perldoc -l Algorithm::Diff DBI DBD::mysql File::Basename File::stat Log::Dispatch Log::Log4perl Mail::Send Net::ARP Proc::Daemon Time::HiRes

3. Some of the missing perl modules are available in the RedHat 6.5 media and can be installed from the local yum source:

[root@mysql-master01 ~]# yum install -y perl-DBI perl-DBD-mysql perl-MailTools perl-Time-HiRes perl-CPAN

4. Use the following command to check the installation results again

perldoc -l Algorithm::Diff DBI DBD::mysql File::Basename File::stat Log::Dispatch Log::Log4perl Mail::Send Net::ARP Proc::Daemon Time::HiRes
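A small loop makes this re-check less tedious than one long perldoc command. This sketch uses `perl -M<module> -e1`, which exits nonzero when a module cannot be loaded, and prints ok/MISSING per module:

```shell
# Report ok/MISSING for each perl module MMM needs.
# `perl -M"$mod" -e1` fails (nonzero exit) if the module is absent.
report=""
for mod in Algorithm::Diff DBI DBD::mysql File::Basename File::stat \
           Log::Dispatch Log::Log4perl Mail::Send Net::ARP Proc::Daemon Time::HiRes; do
    if perl -M"$mod" -e1 2>/dev/null; then
        report="$report$mod ok\n"
    else
        report="$report$mod MISSING\n"
    fi
done
printf "$report"
```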

5. The remaining perl modules are more troublesome to install: you can compile and install them from source packages, or use cpan to download and install them automatically online, which also handles the dependencies between modules. I initially tried downloading source packages and compiling manually, but gave up when I reached Log::Dispatch, because it depends on many other perl modules, which in turn depend on still more modules. As shown below:

6. So I decided to use cpan for automatic online installation, which requires the server to be able to reach the external network. Since installing Algorithm::Diff with cpan warned that YAML was not installed, I installed YAML with cpan first; installing YAML also pulls in the Algorithm::Diff it depends on.

7. Then execute cpan Log::Dispatch to install Log::Dispatch. Unexpectedly, after automatically downloading and installing more than 50 dependent modules, it still reports an error that several dependent modules are missing, as shown below:

8. Solution: use cpan to install the missing modules shown above, then execute cpan Log::Dispatch again and it installs successfully. If cpan reports an error when installing Params::ValidationCompiler, download that module's source package separately and compile and install it.

9. Execute cpan Log::Log4perl to install the Log::Log4perl module, cpan Net::ARP to install Net::ARP, and cpan Proc::Daemon to install Proc::Daemon.

10. Check the installation results of the required perl modules. Now they are all installed.

11. The monitor server also has perl module requirements, as shown below:

12. These modules are likewise installed using cpan, so I won't repeat the steps here. Once the perl modules on the four mysql servers and the monitor server are installed, we can install and configure MMM.

VI. MySQL-MMM installation and configuration

1. The requirements of MMM for MySQL users are as follows:

2. Since mmm_tools is not used in this article, only the first three users need to be created. And since master-slave synchronization is already in place, creating them on mysql-master01 is enough; they will be synchronized to the other three MySQL servers.

GRANT REPLICATION CLIENT ON *.* TO 'mmm_monitor'@'192.168.10.%' IDENTIFIED BY 'monitor';

GRANT SUPER,REPLICATION CLIENT,PROCESS ON *.* TO 'mmm_agent'@'192.168.10.%' IDENTIFIED BY 'agent';

GRANT REPLICATION SLAVE ON *.* TO 'mmm_repl'@'192.168.10.%' IDENTIFIED BY 'repl';

Note: the mmm_repl user does not actually need to be created, because we already created the repl user for this purpose when configuring master-slave synchronization.

3. Use the following statement to check whether the other three MySQL servers have synchronized these three users. You can see that all of them already exist.

select user,host from mysql.user where user in ('mmm_monitor','mmm_agent','mmm_repl');

4. Compile and install MMM on the 4 MySQL servers and the monitor server. Take mysql-master01 as an example:

[root@mysql-master01 ~]# tar -zxf mysql-mmm-2.2.1.tar.gz -C /usr/

[root@mysql-master01 ~]# cd /usr/mysql-mmm-2.2.1/

[root@mysql-master01 mysql-mmm-2.2.1] # make install

5. After the installation is complete, all configuration files are placed under /etc/mysql-mmm/. Both the monitor server and the MySQL servers contain the same common file, mmm_common.conf, as follows:

[root@mysql-master01 mysql-mmm-2.2.1]# cat /etc/mysql-mmm/mmm_common.conf
active_master_role writer

<host default>
    cluster_interface eth0
    pid_path /var/run/mmm_agentd.pid
    bin_path /usr/lib/mysql-mmm/
    replication_user mmm_repl
    replication_password repl
    agent_user mmm_agent
    agent_password agent
</host>

<host mysql-master01>
    ip 192.168.10.81
    mode master
    peer mysql-master02
</host>

<host mysql-master02>
    ip 192.168.10.82
    mode master
    peer mysql-master01
</host>

<host mysql-slave01>
    ip 192.168.10.83
    mode slave
</host>

<host mysql-slave02>
    ip 192.168.10.84
    mode slave
</host>

<role writer>
    hosts mysql-master01, mysql-master02
    ips 192.168.10.91
    mode exclusive
</role>

<role reader>
    hosts mysql-slave01, mysql-slave02
    ips 192.168.10.93, 192.168.10.94
    mode balanced
</role>
[root@mysql-master01 mysql-mmm-2.2.1]#

6. After editing the file on mysql-master01, copy it to the other 4 servers with scp.
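The scp fan-out can be scripted in one loop. This sketch only echoes the scp commands so it is safe to run anywhere; drop the `echo` to actually copy:

```shell
# Build the scp commands that push the common config to the other four servers.
# Echoed rather than executed, so no SSH connectivity is needed to try it.
cmds=$(for h in mysql-master02 mysql-slave01 mysql-slave02 mysql-monitor; do
    echo scp /etc/mysql-mmm/mmm_common.conf "$h":/etc/mysql-mmm/
done)
echo "$cmds"
```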

7. The 4 MySQL servers each also have an mmm_agent.conf file that needs to be modified.
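For reference, the stock mmm_agent.conf is only two lines, and the per-host change is the `this` directive, which must name the local host; shown here for mysql-master01 (each server substitutes its own hostname):

```
# /etc/mysql-mmm/mmm_agent.conf on mysql-master01
include mmm_common.conf
this mysql-master01
```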

8. In the /etc/init.d/mysql-mmm-agent script on each of the 4 MySQL servers, add the following line below the #!/bin/sh line. Take mysql-master01 as an example:

source /root/.bash_profile
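That is, the top of /etc/init.d/mysql-mmm-agent ends up looking like this (the comment is mine):

```
#!/bin/sh
# load root's environment so the init script can find perl modules installed via cpan
source /root/.bash_profile
```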

9. Add the mmm_agent startup scripts of four MySQL servers as system services and set them to self-startup. Take mysql-master01 as an example.

# chkconfig --add mysql-mmm-agent

# chkconfig mysql-mmm-agent on

10. Start the mmm_agent on each of the 4 MySQL servers. If the Perl modules from the previous section are not installed, it may fail to start with an error.

/etc/init.d/mysql-mmm-agent start (or: service mysql-mmm-agent start)

11. Configure the mmm_mon.conf file on the monitor server:

[root@mysql-monitor mysql-mmm]# cat /etc/mysql-mmm/mmm_mon.conf
include mmm_common.conf

<monitor>
    ip 127.0.0.1
    pid_path /var/run/mmm_mond.pid
    bin_path /usr/lib/mysql-mmm/
    status_path /var/lib/misc/mmm_mond.status
    ping_ips 192.168.10.81, 192.168.10.82, 192.168.10.83, 192.168.10.84
    check_period 5
    trap_period 10
    timeout 3
    max_backlog 86400
</monitor>

<host default>
    monitor_user mmm_monitor
    monitor_password monitor
</host>

debug 0

12. In the /etc/init.d/mysql-mmm-monitor script on the monitor server, add the following line below the #!/bin/sh line:

source /root/.bash_profile

13. Add mysql-mmm-monitor as a system service and set it to self-startup

# chkconfig --add mysql-mmm-monitor

# chkconfig mysql-mmm-monitor on

14. Start the mysql-mmm-monitor service of monitor server

15. Execute the command on the monitor server to list the client status; all hosts are found to be in the AWAITING_RECOVERY state.

16. Execute the command mmm_control set_online mysql-master01 on the monitor server to bring mysql-master01 online; after a moment its state changes to ONLINE.
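For reference, the mmm_control subcommands used throughout this walkthrough are sketched below (run on the monitor server; host name shown is an example):

```
mmm_control show                        # list each host's state and the VIPs it holds
mmm_control set_online mysql-master01   # bring a host from AWAITING_RECOVERY to ONLINE
mmm_control set_offline mysql-master01  # administratively take a host offline
```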

17. Observe the IP of mysql-master01 and find that Write VIP has been successfully added.

18. Set mysql-master02, mysql-slave01 and mysql-slave02 to online in turn, then look at the IPs of the slaves and find that the Read VIPs have been added as well. MMM is now fully configured; next comes the high-availability testing.

VII. MySQL-MMM master library high-availability test

1. Stop the mysql service on mysql-master01 to simulate a master library outage, then check the IPs and find that the Write VIP has been automatically removed.

2. Observing the logs on the monitor server, we can see that MMM detected the mysql-master01 fault and switched the Write VIP to mysql-master02.

3. Check the IP of mysql-master02 and find that Write VIP has indeed been added.

4. The master library that the two slaves were previously replicating from has gone down and the writer role has been switched; will they automatically change their Master Host to mysql-master02? Checking their slave status shows that they have switched automatically.

5. Check the client status on the monitor server: mysql-master01 is indeed offline; ping still succeeds, but the mysql connection fails.

6. Start the mysql service of master01 again and set it to online. You can see that the Write VIP does not fail back (unless the current master library goes down again).

7. Observe that the log of the monitor server is as follows:

8. Check the client again. It has returned to normal.

VIII. MySQL-MMM slave library high-availability test

1. Stop the mysql service of mysql-slave01 to simulate a slave library outage, then check the IPs and find that its Read VIP has been automatically removed.

2. Observe the log on the monitor server and find that the Read VIP of mysql-slave01 has been switched to mysql-slave02:

3. Check the client status on the monitor server and find that mysql-slave02 is indeed bound to 2 Read VIPs.

4. Check the IPs of mysql-slave02 to confirm this.

5. Restart the mysql service of mysql-slave01 and set it to online on the monitor server. You can see that the Read VIP that moves back is not necessarily the original one: Read VIPs are not fixed to particular slaves.

6. Observe the log contents of the monitor server as follows:

7. The IP information of the two slaves is as follows:

At this point, the MySQL-MMM read-write-separation high-availability architecture has passed these simple tests. A few points to summarize:

1. If the master01 node goes down and the standby master02 takes over the writer role, slave01 and slave02 automatically change master to master02 and replicate from the new master library.

2. If the master01 node goes down while master02's synchronization lags behind it, and master02 then becomes the writable master, data consistency cannot be guaranteed.

3. If master02, slave01 and slave02 all lag behind master01 and master01 then goes down, slave01 and slave02 will wait until they have caught up with the data they received from master01 before redirecting replication to the new master master02; in this case, too, consistency of the synchronized data cannot be guaranteed.

4. The virtual IPs that serve reads and writes are controlled by the monitor program. If the monitor is not started, no virtual IP is assigned to the db servers. But if virtual IPs have already been assigned and the monitor program is then shut down, the assigned virtual IPs are not withdrawn: as long as the network is not restarted, external programs can still connect, which lowers the reliability requirements on the monitor. However, if one of the db servers fails in that situation, the failover cannot be handled: the agent program is driven by the monitor program to perform operations such as master and slave switching, so if the monitor process is down, the agent processes do nothing and cannot handle faults on their own.

5. The monitor program is responsible for monitoring the state of the db servers, including the MySQL database itself, whether the server is running, whether the replication threads are normal, master-slave delay, and so on; it also drives the agent programs to handle failures. The monitor is therefore both the core of this high-availability architecture and a single-point-of-failure risk. It should be possible to make the monitor itself highly available through keepalived or similar means; I will test that later when I have time.

6. The configuration parameter auto_set_online on the monitor side can bring a node recovered from a fault back online automatically; I have yet to test it. Node servers in the cluster have three states: HARD_OFFLINE → AWAITING_RECOVERY → ONLINE.

7. By default, the monitor directs mmm_agent to set read_only to OFF on the writer db server and to ON on the other db servers, so for rigor you can add read_only=1 to the my.cnf of every server: the monitor controls the writer and reader roles, and root and replication users are not affected by the read_only parameter.

This article comes from Jiang Jianlong's technical blog. Please credit the source when reprinting: https://blog.51cto.com/jiangjianlong/2073744
