
MMM Architecture Scheme and implementation


MMM (Master-Master Replication Manager for MySQL) is a flexible suite of scripts for monitoring, failover, and management of MySQL master-master replication configurations (only one node accepts writes at any time). On top of a standard master-slave setup, the suite can also balance read load across any number of slaves, which it does by floating virtual IPs across the group of replicating servers. In addition, it includes scripts for data backup and for resynchronizing data between nodes.

MySQL itself does not provide a failover solution for replication; the MMM scheme fills this gap by handling server failover, thereby achieving high availability for MySQL.

The MMM project is hosted on Google Code: http://code.google.com/p/mysql-master-master

The official website is: http://mysql-mmm.org

The main functions of MMM are provided by the following three scripts:

mmm_mond: the monitoring daemon, responsible for all monitoring work, including deciding when to remove a failed node, etc.

mmm_agentd: the agent daemon that runs on each MySQL server and provides a simple set of remote services to the monitoring node.

mmm_control: a command-line tool for managing the mmm_mond process.

About the pros and cons of this architecture:

Advantages: good security, stability, and scalability. When the active master dies, the other master takes over immediately, and the slaves switch over automatically without human intervention.

Disadvantages: it requires at least three nodes, so there is a minimum host count; read-write splitting must be implemented, which can be difficult to retrofit into an application; and it places fairly high demands on master-master replication lag. It is therefore not suitable for scenarios with very strict data-consistency requirements.

Where it fits: high-traffic, fast-growing services that need read-write splitting.

Environment deployment:

Master1 IP: 192.168.1.10

Master2 IP: 192.168.1.20

Slave1 IP: 192.168.1.30

Slave2 IP: 192.168.1.40

Monitor IP: 192.168.1.50

All hosts need the MMM dependency packages installed. Here we take master1 as an example; the other four machines are set up in exactly the same way.

# yum -y install perl-* libart_lgpl.x86_64 rrdtool.x86_64 rrdtool-perl.x86_64

Install the Perl libraries (this assumes CentOS 7 with an Internet-reachable yum source):

# cpan -i Algorithm::Diff Class::Singleton DBI DBD::mysql Log::Dispatch Log::Log4perl Mail::Send Net::Ping Proc::Daemon Time::HiRes Params::Validate Net::ARP

To prevent error messages from appearing later in the setup, apply the fix here in advance. Solution:

# cpan Proc::Daemon

# cpan Log::Log4perl

Configure the /etc/hosts file on all hosts and add the host entries. Again we take master1 as an example; the other four are the same.
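
The entries themselves are only shown as a screenshot in the original; from the IP plan above they would be (hostnames assumed to match the role names):

192.168.1.10    master1
192.168.1.20    master2
192.168.1.30    slave1
192.168.1.40    slave2
192.168.1.50    monitor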

Then, from the monitor host, ping the other four hosts to confirm they can all communicate:
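
For example (a simple check, assuming the hostnames added to /etc/hosts above):

# ping -c 3 master1

# ping -c 3 master2

# ping -c 3 slave1

# ping -c 3 slave2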

Install MySQL 5.7 and configure replication on the master1, master2, slave1, and slave2 hosts.

Master1 and master2 replicate from each other, while slave1 and slave2 are slaves of master1.

Add the following to each MySQL server's /etc/my.cnf, and note that server_id must be unique (a sketch of the settings appears after the server list below).

Master1 server:

Master2 server:

Slave1 server:

Slave2 server:
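
The per-server settings are only shown as screenshots in the original; a minimal sketch for master1, assuming MySQL 5.7 defaults otherwise (the binlog name matches the mysql-bin files referenced later; the relay-log name and log-slave-updates are assumptions):

[mysqld]
server_id = 1            # 2 on master2, 3 on slave1, 4 on slave2
log-bin = mysql-bin      # binary log; both masters must write one
relay-log = relay-bin    # relay log used when acting as a slave
log-slave-updates = 1    # keep a complete binlog on every node for failover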

After modifying my.cnf, restart the MySQL service with systemctl restart mysqld.

The firewall on the four database hosts must allow MySQL traffic: either turn the firewall off or create an access rule:

# firewall-cmd --permanent --add-port=3306/tcp

# firewall-cmd --reload

Here we simply turn the firewall off; in a production environment that is not advisable, and you should open port 3306 with the commands above instead.
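
The original does not show the commands for turning the firewall off; on CentOS 7 they would be:

# systemctl stop firewalld

# systemctl disable firewalld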

Master-slave configuration (master1 and master2 are both configured as masters; slave1 and slave2 are configured as slaves of master1):

Authorize on master1:

mysql> grant replication slave on *.* to 'rep'@'192.168.1.%' identified by '123456';

Authorize on master2:

mysql> grant replication slave on *.* to 'rep'@'192.168.1.%' identified by '123456';

Configure master2, slave1, and slave2 as slaves of master1:

Execute show master status; on master1 to get the current binlog file and position:

mysql> show master status;
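
The exact output is not reproduced in the original; with the file and position values used in the change master statement below, it would look roughly like this:

+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000001 |      451 |              |                  |                   |
+------------------+----------+--------------+------------------+-------------------+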

Execute the following on master2, slave1, and slave2:

mysql> change master to master_host='192.168.1.10', master_port=3306, master_user='rep', master_password='123456', master_log_file='mysql-bin.000001', master_log_pos=451;

Master2 execution:

Slave1 execution:

Slave2 execution:

After the statement succeeds on each host, start the slave threads and verify the replication status:

mysql> start slave;

mysql> show slave status\G
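
In the show slave status\G output, the two fields to verify (standard MySQL fields; the screenshots are not reproduced here) are:

Slave_IO_Running: Yes
Slave_SQL_Running: Yes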

First, let's verify the result on master2:

Then verify slave1:

Finally, verify slave2:

The results above show that master-slave replication among the four databases is in place, but what we need is for master1 and master2 to be in master-master replication, so master1 still has to be configured:

Configure master1 as a slave of master2:

Execute show master status; on master2 to get its binlog file and position:

mysql> show master status;

Execute on master1:

mysql> change master to master_host='192.168.1.20', master_port=3306, master_user='rep', master_password='123456', master_log_file='mysql-bin.000002', master_log_pos=154;

Verify the master-master replication between master1 and master2:

mysql> start slave;

mysql> show slave status\G

To sum up: master1 and master2 now replicate from each other (master-master), while slave1 and slave2 replicate from master1 (master-slave). With replication in place, we can move on to configuring MMM itself.

Step 1: create the MMM users on the four MySQL nodes (master1, master2, slave1, slave2).

Create the agent account:

mysql> grant super, replication client, process on *.* to 'mmm_agent'@'192.168.1.%' identified by '123456';

Create a monitoring account:

mysql> grant replication client on *.* to 'mmm_monitor'@'192.168.1.%' identified by '123456';

Here we run the grants on master1 only; because master-slave synchronization is already in place, the other three hosts receive these two accounts automatically.

Let's take slave2 as an example and check whether they exist there:
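
A quick way to check (the query is an assumption; the original only shows a screenshot):

mysql> select user, host from mysql.user where user like 'mmm%';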

You can see that the two accounts created on master1 have been synchronized to slave2.

mmm_monitor user: used by the MMM monitor to check the health of the MySQL server processes.

mmm_agent user: used by the MMM agent to change read-only mode, change the replication master, and so on.

Step 2: install mysql-mmm.

Install the monitor on the monitor host:

# tar -zxf mysql-mmm-2.2.1.tar.gz

# cd mysql-mmm-2.2.1

# make install

Install the agent on the database hosts (master1, master2, slave1, slave2):

# tar -zxf mysql-mmm-2.2.1.tar.gz

# cd mysql-mmm-2.2.1

# make install

Note: this software must be installed on every MySQL host individually; the installation cannot be propagated through replication.

Master1 server:

Master2 server:

Slave1 server:

Slave2 server:

Now for the interesting part: configuring the MMM files. The shared file, mmm_common.conf, must be kept identical on all five hosts.

All configuration files live under /etc/mysql-mmm/. Both the management server and the database servers need the common file mmm_common.conf, with the following contents:
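
The original only shows this file as a screenshot; below is a sketch in the standard mysql-mmm 2.x format, using the hosts and accounts from this walkthrough. The interface name (eth0) and the writer/reader VIPs 192.168.1.100-103 are assumptions and must match your own network:

active_master_role      writer

<host default>
    cluster_interface       eth0
    pid_path                /var/run/mmm_agentd.pid
    bin_path                /usr/lib/mysql-mmm/
    replication_user        rep
    replication_password    123456
    agent_user              mmm_agent
    agent_password          123456
</host>

<host master1>
    ip      192.168.1.10
    mode    master
    peer    master2
</host>

<host master2>
    ip      192.168.1.20
    mode    master
    peer    master1
</host>

<host slave1>
    ip      192.168.1.30
    mode    slave
</host>

<host slave2>
    ip      192.168.1.40
    mode    slave
</host>

<role writer>
    hosts   master1, master2
    ips     192.168.1.100
    mode    exclusive
</role>

<role reader>
    hosts   master1, master2, slave1, slave2
    ips     192.168.1.101, 192.168.1.102, 192.168.1.103
    mode    balanced
</role>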

Copy this file to the other four servers and make sure its contents are identical on all five. In addition:

There is one more file, mmm_agent.conf, that needs to be modified; its contents are:

include mmm_common.conf
this master1

Here master1 is the example; on the other three database hosts, change "this master1" to master2, slave1, and slave2 respectively (the monitor host does not run the agent).

The agent process also has to be started on the other three database hosts; again we take master1 as an example:

In the init script /etc/init.d/mysql-mmm-agent, add the following line directly below #!/bin/sh:
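
The line itself is not reproduced in the original; it is presumably the same one added on the monitor host later in this article:

source /root/.bash_profile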

Add it as a system service:
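
The exact commands are only shown as a screenshot; they would mirror the monitor host below:

# chkconfig --add mysql-mmm-agent

# chkconfig mysql-mmm-agent on

# /etc/init.d/mysql-mmm-agent start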

Starting the service at this point will report an error:

# cpan Proc::Daemon

# cpan Log::Log4perl

That resolves the error.

The result after restarting the agent:

All of the above uses master1 as the example. The other three database hosts are configured the same way, apart from the mmm_agent.conf difference pointed out above.

Edit /etc/mysql-mmm/mmm_mon.conf on the monitor host:

include mmm_common.conf
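
The rest of the file is not reproduced in the original; a sketch in the standard mysql-mmm format, reusing the monitor account created earlier (the paths are the usual mysql-mmm defaults and are assumptions):

<monitor>
    ip                  127.0.0.1
    pid_path            /var/run/mmm_mond.pid
    bin_path            /usr/lib/mysql-mmm/
    status_path         /var/lib/misc/mmm_mond.status
    ping_ips            192.168.1.10, 192.168.1.20, 192.168.1.30, 192.168.1.40
</monitor>

<host default>
    monitor_user        mmm_monitor
    monitor_password    123456
</host>

debug 0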

Start the monitoring process:

In the init script /etc/init.d/mysql-mmm-monitor, add the following line directly below #!/bin/sh:

source /root/.bash_profile

Add it as a system service and set it to start at boot:

# chkconfig --add mysql-mmm-monitor

# chkconfig mysql-mmm-monitor on

# /etc/init.d/mysql-mmm-monitor start

An error will be reported when starting. The solution:

Install the following Perl libraries:

# cpan Proc::Daemon

# cpan Log::Log4perl

Start it again:

Next, check the cluster status:

And the command to bring a server online:

Cluster status:

View the status of all clusters:
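
The commands themselves only appear as screenshots in the original; the standard mmm_control invocations for these steps are:

# mmm_control show (show each host's state, its roles, and the assigned VIPs)

# mmm_control set_online master1 (bring a host, here master1, online)

# mmm_control checks all (run every health check against every host)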

Next, you can simulate a failure of master1 and watch whether the writer VIP drifts to master2. I won't repeat the details here.
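
A minimal way to run that test (the commands are assumptions, not taken from the original): stop MySQL on master1, then watch the roles from the monitor; the writer role and its VIP should move to master2 once the monitor detects the failure.

On master1: # systemctl stop mysqld

On the monitor: # mmm_control show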

This is the MMM architecture for you today. If you have any questions, you can ask them in the comments.
