Building High availability Rack of MySQL Replication Cluster based on MMM

2025-02-24 Update From: SLTechnology News&Howtos (Database)


MMM introduction

MMM is an acronym for Multi-Master Replication Manager for MySQL, a multi-master replication manager whose core is a set of scripts written in Perl. MMM is an early, even somewhat dated, way to build a highly available MySQL architecture, but it still has some application scenarios, so this article demonstrates how to build an MMM architecture.

MMM consists of two components:

Monitor: monitors the status of the databases in the cluster and issues switchover commands when an exception occurs; it is generally deployed separately from the databases

Agent: the agent process running on each MySQL server; it executes the Monitor's commands, performs the monitoring probes, and applies the concrete service settings, such as binding the VIP and repointing replication to the new synchronization source

The main roles of MMM are:

Monitors and manages MySQL's master-master replication topology, and performs master-standby switchover and failover when the current master server fails

What features does MMM provide:

MMM can monitor the health of MySQL master-slave replication, including the state of the mysql server and the replication activity of the slaves
It can fail over when the master goes down and automatically reconfigure the other slaves to replicate from the new master
It provides a write virtual IP for the master; when the master fails, the virtual IP migrates automatically
When multiple slaves are configured, multiple read virtual IPs can be configured; combined with third-party load-balancing software (such as LVS), this implements read load balancing

How does MMM bring the slaves to the correct log point after a master switchover?

MMM is not safe in this respect: it simply and crudely synchronizes the slaves to the current log point of the new master, so using MMM in a busy system is likely to cause data loss.

The MMM architecture to be built in this article is as follows:

Take the above architecture as an example, describe the failover process, and now assume that Master1 is down:

1. Monitor detects that the connection to Master1 has failed
2. Monitor sends the set_offline command to Master1's Agent
3. If Master1's Agent is alive, it takes the write VIP offline and tries to set Master1 to read_only=1
4. Monitor sends the set_online command to Master2
5. Master2's Agent receives the command and executes select master_pos_wait() to wait for synchronization to complete
6. Master2's Agent brings the write VIP online and sets the Master2 node to read_only=0
7. Monitor sends commands to each Slave node's Agent to change its replication source, and each Slave node synchronizes data from the new Master

As can be seen from the whole process, if the master node fails, MMM switches over automatically without human intervention. At the same time, we can also see a problem: after a database goes down, MMM only switches over; it does not actively recover the lost data, so MMM carries a risk of data inconsistency.
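The sequence above can be sketched in a few lines of shell. This is only an illustration of the ordering, not real MMM code; the host names and messages are placeholders:

```shell
#!/bin/sh
# Illustrative sketch of the MMM failover order (placeholders, not MMM code)
failover() {
  failed=$1; standby=$2
  echo "monitor: set_offline $failed"
  echo "agent($failed): release write VIP, set read_only=1"
  echo "monitor: set_online $standby"
  echo "agent($standby): master_pos_wait, bind write VIP, set read_only=0"
  echo "monitor: repoint slaves to $standby"
}
failover Master1 Master2
```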

Description of the resources required to build the MMM architecture:

Name               Quantity   Description
Master servers     2          used for master-master replication in active/standby mode
Slave servers      0~N        zero or more slave servers can be configured, but too many are not recommended
Monitoring server  1          runs MMM to monitor the MySQL replication cluster
IP addresses       2*(n+1)    n is the number of MySQL servers
Monitoring user    1          MySQL user used to monitor database status (at least the replication client privilege)
Agent user         1          MySQL user used by the MMM agent (at least the super, replication client and process privileges)
Replication user   1          MySQL user used to configure MySQL replication (at least the replication slave privilege)
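For the layout used in this article, n = 3 MySQL servers, so the formula gives 2*(3+1) = 8 addresses: four real IPs (three MySQL servers plus the monitor), one write VIP, and three read VIPs. A one-line check:

```shell
# 2*(n+1) IPs: n+1 real addresses plus 1 write VIP and n read VIPs
n=3
echo $((2 * (n + 1)))   # prints 8
```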

The machines used in this article describe:

Name       IP               Role
master-01  192.168.190.146  master
master-02  192.168.190.148  standby master
slave-01   192.168.190.149  slave
manager    192.168.190.147  cluster manager (MMM)

Environment versions:

Operating system: CentOS 7
MySQL version: 8.0.19
MMM version: 2.2.1

Additional instructions:

Readers looking into the MMM architecture have presumably already mastered installing MySQL, and there are plenty of articles on that topic, so to keep this article focused the installation is not demonstrated here. MySQL has been installed in advance on the machines used in this article.

Configure master-master replication and master-slave synchronization

1. Use the following statements on master-01 and master-02 to create the MySQL user for master-master replication:

create user 'repl'@'%' identified with mysql_native_password by 'Abc_123456';
grant replication slave on *.* to 'repl'@'%';
flush privileges;

Tips: after creating the account, it is best to log in with it from each node to the other to make sure the account works.

2. Modify the MySQL configuration file on master-01:

[root@master-01 ~]# vim /etc/my.cnf
[mysqld]
# set the id of the node
server_id=101
# enable binlog and specify the binlog file name
log_bin=mysql_bin
# enable relay_log and specify the relay_log file name
relay_log=relay_bin
# write replayed relay log contents into the node's own binlog
log_slave_updates=on

The same configuration is added to the configuration file of master-02, except that the server_id is different:

[root@master-02 ~]# vim /etc/my.cnf
[mysqld]
server_id=102
log_bin=mysql_bin
relay_log=relay_bin
log_slave_updates=on

The next step is to configure slave-01. Since this node is not a standby master but plays a pure slave role, it does not need the log_slave_updates parameter:

[root@slave-01 ~]# vim /etc/my.cnf
[mysqld]
server_id=103
log_bin=mysql_bin
relay_log=relay_bin

After the modification of the above configuration file, restart the MySQL service on these three nodes:

[root@master-01 ~]# systemctl restart mysqld
[root@master-02 ~]# systemctl restart mysqld
[root@slave-01 ~]# systemctl restart mysqld

Configure the master-slave relationship of master-02 to master-01

Enter the MySQL command line terminal of master-01 and query the binary log currently being used by master-01 and the location of the current binary log execution with the following statement:

mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql_bin.000001 |      155 |              |                  |                   |
+------------------+----------+--------------+------------------+-------------------+
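If you script this step, the two values can be pulled out of the same query. A minimal sketch, in which a captured sample row stands in for the output of `mysql -N -e 'show master status'`:

```shell
# Parse File and Position from a captured "show master status" row
row='mysql_bin.000001	155'   # stand-in for: mysql -N -e 'show master status'
file=$(printf '%s\n' "$row" | awk '{print $1}')
pos=$(printf '%s\n' "$row" | awk '{print $2}')
echo "master_log_file='$file', master_log_pos=$pos"
```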

Record the values of File and Position, then enter the MySQL command line terminal of master-02 and execute the following statements:

mysql> stop slave;  -- stop replication
mysql> change master to master_host='192.168.190.146', master_port=3306, master_user='repl', master_password='Abc_123456', master_log_file='mysql_bin.000001', master_log_pos=155;  -- configure the connection information of the master-01 node and the binlog file and position to start replicating from
mysql> start slave;  -- start replication

After configuring the master-slave relationship, use the show slave status\G statement to check the replication status. When the values of Slave_IO_Running and Slave_SQL_Running are both Yes, replication is working normally:
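A quick way to check this on several nodes is to filter just the two thread flags. The sketch below uses captured sample text in place of a live `mysql -e 'show slave status\G'` call:

```shell
# Report OK only when both replication threads say Yes
status='Slave_IO_Running: Yes
Slave_SQL_Running: Yes'   # stand-in for: mysql -e 'show slave status\G'
if printf '%s\n' "$status" | awk -F': *' '/Slave_(IO|SQL)_Running:/ { if ($2 != "Yes") bad = 1 } END { exit bad }'; then
  echo "replication OK"
else
  echo "replication BROKEN"
fi
```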

Configure the master-slave relationship of master-01 to master-02

In order to achieve master-master replication, master-01 and master-02 need to have a master-slave relationship with each other, so you also need to configure the master-slave relationship between master-01 and master-02. Enter the MySQL command line terminal of master-02 and query the binary log currently being used by master-02 and the location of the current binary log execution with the following statement:

mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql_bin.000001 |      155 |              |                  |                   |
+------------------+----------+--------------+------------------+-------------------+

Record the values of File and Position, then enter the MySQL command line terminal of master-01 and execute the following statements:

mysql> stop slave;
mysql> change master to master_host='192.168.190.148', master_port=3306, master_user='repl', master_password='Abc_123456', master_log_file='mysql_bin.000001', master_log_pos=155;
mysql> start slave;

After completing this configuration, again use show slave status\G to check the replication status; Slave_IO_Running and Slave_SQL_Running both being Yes indicates that replication is working normally:

Configure the master-slave relationship of slave-01 to master-01

The next step is to configure the master-slave relationship between the slave and the master database, which is done the same way as for master-02. Note that if the data on master-01 has been modified in the meantime, you need to look up the log point again:

mysql> stop slave;
mysql> change master to master_host='192.168.190.146', master_port=3306, master_user='repl', master_password='Abc_123456', master_log_file='mysql_bin.000001', master_log_pos=155;
mysql> start slave;

Build the MMM service

1. Install the epel source on all machines:

yum install -y epel-release

2. Install the MMM proxy client on all master and slave nodes:

yum install -y mysql-mmm-agent

3. Install all MMM packages on the manager node:

yum install -y mysql-mmm*

4. Then create the MySQL users for monitoring and for the agent on master-01:

-- monitoring user
create user 'mmm_monitor'@'%' identified with mysql_native_password by 'Abc_123456';
grant replication client on *.* to 'mmm_monitor'@'%';

-- agent user
create user 'mmm_agent'@'%' identified with mysql_native_password by 'Abc_123456';
grant super, replication client, process on *.* to 'mmm_agent'@'%';
flush privileges;

Because master-master replication is configured, the other two database nodes will also synchronize these newly created users. You can verify this with the following query on the other two nodes:

mysql> use mysql;
mysql> select host,user,plugin from user;
+-----------+------------------+-----------------------+
| host      | user             | plugin                |
+-----------+------------------+-----------------------+
| %         | mmm_agent        | mysql_native_password |
| %         | mmm_monitor      | mysql_native_password |
| %         | repl             | mysql_native_password |
| localhost | mysql.infoschema | caching_sha2_password |
| localhost | mysql.session    | caching_sha2_password |
| localhost | mysql.sys        | caching_sha2_password |
| localhost | root             | caching_sha2_password |
+-----------+------------------+-----------------------+

5. Edit the mmm_common.conf configuration file on all nodes, including the monitoring node. It mainly describes the current node and the other nodes in the cluster. Taking the master-01 node as an example, its configuration is as follows:

[root@master-01 ~]# vim /etc/mysql-mmm/mmm_common.conf
active_master_role writer

<host default>
    cluster_interface ens32              # name of the network card on the current node, used to bind the virtual IP; you can look it up with the ip addr command
    pid_path /run/mysql-mmm-agent.pid    # pid file storage path
    bin_path /usr/libexec/mysql-mmm/     # executable file storage path
    replication_user repl                # MySQL user used for replication
    replication_password Abc_123456      # password of the repl user
    agent_user mmm_agent                 # MySQL user used by the agent
    agent_password Abc_123456            # password of the mmm_agent user
</host>

# ip address and role of master-01
<host db1>
    ip 192.168.190.146
    mode master
    peer db2
</host>

# ip address and role of master-02
<host db2>
    ip 192.168.190.148
    mode master
    peer db1
</host>

# ip address and role of slave-01
<host db3>
    ip 192.168.190.149
    mode slave
</host>

# hosts responsible for write operations
<role writer>
    hosts db1, db2        # writable hosts, using the names defined in the host tags above
    ips 192.168.190.90    # write virtual IP; there can be several, separated by commas
    mode exclusive        # only one master provides service at a time
</role>

# hosts responsible for read operations
<role reader>
    hosts db1, db2, db3                                 # readable hosts
    ips 192.168.190.91, 192.168.190.92, 192.168.190.93  # read virtual IPs
    mode balanced                                       # balance read requests across the hosts configured above
</role>

The other three nodes are configured in the same way; all parameters should be identical except that the network card name may differ.
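To find the value for cluster_interface on each machine, you can list the kernel's interface names (lo is the loopback and not the one to use); the `ip addr` command mentioned above shows the same names along with their addresses:

```shell
# List the network interface names known to the kernel (Linux)
ls /sys/class/net
```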

6. Then configure the mmm_agent.conf file of each node to declare the name of the current node defined in the host tag. The configuration of the master-01 node is as follows:

[root@master-01 ~]# vim /etc/mysql-mmm/mmm_agent.conf
include mmm_common.conf
this db1

The configuration of the master-02 node is as follows:

[root@master-02 ~]# vim /etc/mysql-mmm/mmm_agent.conf
include mmm_common.conf
this db2

The configuration of the slave-01 node is as follows:

[root@slave-01 ~]# vim /etc/mysql-mmm/mmm_agent.conf
include mmm_common.conf
this db3

7. Then configure the mmm_mon.conf file on the monitoring node, which is as follows:

[root@manager ~]# vim /etc/mysql-mmm/mmm_mon.conf
include mmm_common.conf

<monitor>
    ip 127.0.0.1
    pid_path /run/mysql-mmm-monitor.pid
    bin_path /usr/libexec/mysql-mmm
    status_path /var/lib/mysql-mmm/mmm_mond.status
    # IPs of the nodes in the cluster
    ping_ips 192.168.190.146, 192.168.190.148, 192.168.190.149
    # how long (in seconds) to wait before automatically setting a recovered node back online
    auto_set_online 60

    # The kill_host_bin does not exist by default, though the monitor will
    # throw a warning about it missing. See the section 5.10 "Kill Host
    # Functionality" in the PDF documentation.
    #
    # kill_host_bin /usr/libexec/mysql-mmm/monitor/kill_host
</monitor>

# MySQL user and password used for monitoring
<host default>
    monitor_user mmm_monitor
    monitor_password Abc_123456
</host>

debug 0

8. Start the MMM proxy service of all master and slave nodes:

[root@master-01 ~]# systemctl start mysql-mmm-agent
[root@master-02 ~]# systemctl start mysql-mmm-agent
[root@slave-01 ~]# systemctl start mysql-mmm-agent

The agent service listens on port 9989 by default. If the firewall is enabled, the port needs to be opened:

firewall-cmd --zone=public --add-port=9989/tcp --permanent
firewall-cmd --reload

9. Start the monitoring service on the monitoring node:

[root@manager ~] # systemctl start mysql-mmm-monitor

10. After completing all the above steps, you can use the mmm_control show command on the monitoring node to view the status of each node in the cluster and its assigned virtual IP, as shown in the following example:

[root@manager ~]# mmm_control show
  db1(192.168.190.146) master/ONLINE. Roles: reader(192.168.190.91), writer(192.168.190.90)
  db2(192.168.190.148) master/ONLINE. Roles: reader(192.168.190.93)
  db3(192.168.190.149) slave/ONLINE. Roles: reader(192.168.190.92)

Test

At this point, we have completed building the MMM high-availability architecture; next we will run some simple tests on it. First, test whether the virtual IPs respond to ping; after all, applications connect to the database through the virtual IP, so you must first make sure the virtual IPs are reachable. As follows:

Once the ping succeeds, use a remote connection tool such as Navicat to test whether you can connect normally:

After confirming that each virtual IP can be accessed normally, test whether MMM can fail over properly. First stop the MySQL service on master-01:

[root@master-01 ~] # systemctl stop mysqld

Normally, running the mmm_control show command on the monitoring node now shows the master-01 node offline, with the write virtual IP switched over to the master-02 node:

[root@manager ~]# mmm_control show
  db1(192.168.190.146) master/HARD_OFFLINE. Roles:
  db2(192.168.190.148) master/ONLINE. Roles: reader(192.168.190.93), writer(192.168.190.90)
  db3(192.168.190.149) slave/ONLINE. Roles: reader(192.168.190.91), reader(192.168.190.92)
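A watchdog script can detect the writer migration by parsing this output. The sketch below runs on captured sample text; a live setup would pipe `mmm_control show` instead:

```shell
# Find which host currently holds the writer role (sample text mirrors
# the post-failover "mmm_control show" output above)
show='db1(192.168.190.146) master/HARD_OFFLINE. Roles:
db2(192.168.190.148) master/ONLINE. Roles: reader(192.168.190.93), writer(192.168.190.90)
db3(192.168.190.149) slave/ONLINE. Roles: reader(192.168.190.91), reader(192.168.190.92)'
printf '%s\n' "$show" | awk '/writer\(/ {print $1}'   # prints db2(192.168.190.148)
```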

Then enter the MySQL terminal on the slave-01 node. We previously configured master-01 as slave-01's master; after master-01 is stopped, we can see that MMM has switched slave-01's master to master-02:

These tests show that the MMM architecture we built runs normally and gives the Replication cluster basic high-availability capability: even after the master goes offline, service switches to the standby normally, and the relationship between the slaves and the new master is established correctly.

Advantages and disadvantages of the MMM architecture

Advantages:

- Developed in the Perl scripting language and completely open source, so developers can extend it to their own needs
- Provides slave delay monitoring and read/write VIPs (virtual IPs), making server role changes transparent to front-end applications; when slaves fall far behind or their replication links break, their read VIPs can drift to other healthy readable nodes in the cluster
- After a master failover, the slaves resynchronize from the new master automatically, so the failed master is easy to bring back online, which gives the Replication cluster high availability

Disadvantages:

- MMM is an older tool whose last release came out years ago, so it carries some small bugs, and it does not support the newer GTID-based replication of MySQL 5.6+; it only supports log-point-based replication
- It provides no read load balancing itself, so additional tools such as LVS are required
- Master switchover can easily cause data loss or duplicate transaction commits, because MMM does not compare the log points of the available slaves; it simply picks a slave to switch to. Since the Replication cluster replicates asynchronously, when the standby's replication delay is large, its log point may lag behind the failed master's, so when the master goes offline unexpectedly, forcing a switch to the standby may lose data or commit transactions twice
- MMM provides no high-availability features for the monitor itself, so the monitoring node is a single point of failure, and the high availability of the Replication cluster depends on it; once the monitoring node goes down, the cluster no longer fails over automatically. However, we can introduce third-party tools such as Keepalived to run the monitoring node as a hot-standby pair

Weighing these advantages and disadvantages, MMM is only suitable for scenarios that do not demand strong data consistency and can tolerate losing a small amount of data, such as comments and news feeds.
