

How to Implement an MMM High-Availability Cluster Architecture in MySQL


This article describes how to implement an MMM high-availability cluster architecture for a MySQL database. The content is quite detailed; interested readers can use it as a reference, and we hope it will be helpful to you.

MMM High-Availability Architecture Description

mmm_mond: the monitor process. It is responsible for all monitoring work and decides and handles all node role changes. This script runs on the monitor server.

mmm_agentd: an agent process that runs on each MySQL server. It performs the monitoring probes and applies simple remote service settings. This script runs on the monitored machines.

mmm_control: a simple script that provides commands for managing the mmm_mond process.
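For orientation, a few common mmm_control invocations are sketched below as they would be run on the monitor server (show and checks all also appear later in this article; move_role and set_online are standard mysql-mmm subcommands shown purely as an illustration):

# display the current role and status of every node
[root@localhost ~]# mmm_control show
# run all health checks (ping, mysql, rep_threads, rep_backlog) on all hosts
[root@localhost ~]# mmm_control checks all
# manually move the writer role to another master, e.g. db2
[root@localhost ~]# mmm_control move_role writer db2
# bring a recovered host back online
[root@localhost ~]# mmm_control set_online db1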

The mysql-mmm monitor manages several virtual IPs (VIPs): one writable VIP and several readable VIPs. The monitor binds these VIPs to the available MySQL servers; when a MySQL server goes down, the monitor migrates its VIP to another server. For this to work, the relevant authorized users must be created in MySQL so that the servers can be managed by the monitor: an mmm_monitor user and an mmm_agent user.
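To see which server currently holds a given VIP, the interface addresses on each node can be inspected (a minimal sketch, assuming the ens33 interface and the 192.168.142.250 writer VIP used later in this article):

# the writer VIP appears as a secondary address on whichever master is currently active
[root@localhost ~]# ip addr show ens33 | grep 192.168.142.250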

Advantages and disadvantages of MMM

Advantages: high availability, good scalability, and automatic failover. With master-master replication, only one database accepts writes at any given time, which guarantees data consistency.

Disadvantages: the monitor node is a single point of failure (it can be combined with Keepalived for high availability); there are requirements on the number of hosts; and read/write splitting has to be implemented, which is a challenge for the application.
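To illustrate the read/write splitting point, the sketch below sends writes to the writer VIP and reads to the reader VIPs with the plain mysql client (the VIP addresses come from the configuration later in this article; the app_user account and the school.t1 table are hypothetical and used only for illustration):

# writes always go through the writer VIP, held by the currently active master
[root@localhost ~]# mysql -u app_user -p -h 192.168.142.250 -e "insert into school.t1 values (1);"
# reads can be spread across the reader VIPs, held by the slaves
[root@localhost ~]# mysql -u app_user -p -h 192.168.142.251 -e "select * from school.t1;"
[root@localhost ~]# mysql -u app_user -p -h 192.168.142.252 -e "select * from school.t1;"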

Deploying the experimental environment

Step 1: install the MySQL database on all four servers

1. Configure the Aliyun repository, then install the epel-release repository

[root@localhost ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
[root@localhost ~]# yum -y install epel-release
[root@localhost ~]# yum clean all && yum makecache

2. Install the database

# install the database
[root@localhost ~]# yum -y install mariadb-server mariadb
# turn off the firewall and SELinux
[root@localhost ~]# systemctl stop firewalld.service
[root@localhost ~]# setenforce 0
# start the database
[root@localhost ~]# systemctl start mariadb.service

3. Modify the m1 main configuration file

[root@localhost ~]# vim /etc/my.cnf
# delete the first 9 lines and add the following
[mysqld]
log_error=/var/lib/mysql/mysql.err                 # error log location
log=/var/lib/mysql/mysql_log.log                   # general (access) log location
log_slow_queries=/var/lib/mysql_slow_queris.log    # slow query log location
binlog-ignore-db=mysql,information_schema          # do not write binary logs for mysql and information_schema
character_set_server=utf8                          # character set
log_bin=mysql_bin                                  # enable the binary log
server_id=1                                        # must be different on each host
log_slave_updates=true                             # the slave writes replicated updates to its own binary log
sync_binlog=1                                      # flush the binary log to disk after each write
auto_increment_increment=2                         # auto-increment step
auto_increment_offset=1                            # auto-increment starting value

[root@localhost ~]# systemctl restart mariadb.service
[root@localhost ~]# netstat -natp | grep 3306

4. Copy the configuration file to the other three database servers and modify the server_id

[root@localhost ~]# scp /etc/my.cnf root@192.168.142.134:/etc/
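The server_id must then be changed on each copy. A minimal sketch of one way to do that on m2 (assuming the my.cnf shown above, with server_id values 2, 3 and 4 for the remaining three servers):

# on m2: change server_id from 1 to 2, then restart the database
[root@localhost ~]# sed -i 's/^server_id=1/server_id=2/' /etc/my.cnf
# (typically the second master would also use auto_increment_offset=2 so the two masters never generate the same key)
[root@localhost ~]# systemctl restart mariadb.service
# repeat on s1 and s2 with server_id=3 and server_id=4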

5. Enter the database and view the log file information

[root@localhost ~]# mysql
# view the log file name and position
MariaDB [(none)]> show master status;
+------------------+----------+--------------+--------------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB         |
+------------------+----------+--------------+--------------------------+
| mysql_bin.000001 |      245 |              | mysql,information_schema |
+------------------+----------+--------------+--------------------------+
1 row in set (0.00 sec)

6. On m1 and m2, grant replication access to each other and point each at the other's binary log

# grant replication access on both m1 and m2
MariaDB [(none)]> grant replication slave on *.* to 'replication'@'192.168.142.%' identified by '123456';

# on m1, specify m2's log file name and position
MariaDB [(none)]> change master to master_host='192.168.142.134',master_user='replication',master_password='123456',master_log_file='mysql_bin.000001',master_log_pos=245;

# on m2, specify m1's log file name and position
MariaDB [(none)]> change master to master_host='192.168.142.131',master_user='replication',master_password='123456',master_log_file='mysql_bin.000001',master_log_pos=245;

7. Enable synchronization on M1

MariaDB [(none)]> start slave;

8. Check the synchronization status. Both master servers should show Yes.

MariaDB [(none)]> show slave status\G
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes

9. Create a database on M1

MariaDB [(none)]> create database school;

10. View the synchronized database on m2

MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| school             |
| test               |
+--------------------+
5 rows in set (0.00 sec)

11. On both slave servers, point replication at m1 (note the log file name and position parameters; both point to m1)

MariaDB [(none)]> change master to master_host='192.168.142.131',master_user='replication',master_password='123456',master_log_file='mysql_bin.000001',master_log_pos=245;

12. Turn on synchronization on the slave servers

MariaDB [(none)]> start slave;

# view the synchronized data on the slave servers
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| school             |
| test               |
+--------------------+
5 rows in set (0.00 sec)

13. Install the MMM-related software on all four servers

[root@localhost ~]# yum -y install mysql-mmm*
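A quick sanity check that the packages were installed (a minimal sketch; the exact package set depends on the EPEL version):

# list the installed mysql-mmm packages (agent, monitor, tools, etc.)
[root@localhost ~]# rpm -qa | grep mysql-mmm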

14. Configure the mmm_common.conf configuration file

[root@localhost ~]# vim /etc/mysql-mmm/mmm_common.conf

<host default>
    # change the network interface to ens33
    cluster_interface       ens33
    pid_path                /run/mysql-mmm-agent.pid
    bin_path                /usr/libexec/mysql-mmm/
    replication_user        replication
    # replication password
    replication_password    123456
    agent_user              mmm_agent
    # agent password
    agent_password          123456
</host>

# specify the role and IP address of the four servers
<host db1>
    ip      192.168.142.131
    mode    master
    peer    db2
</host>

<host db2>
    ip      192.168.142.134
    mode    master
    peer    db1
</host>

<host db3>
    ip      192.168.142.130
    mode    slave
</host>

<host db4>
    ip      192.168.142.135
    mode    slave
</host>

# set the master servers' virtual IP
<role writer>
    hosts   db1, db2
    ips     192.168.142.250
    mode    exclusive
</role>

# set the slave servers' virtual IPs
<role reader>
    hosts   db3, db4
    ips     192.168.142.251, 192.168.142.252
    mode    balanced
</role>

15. Copy the configuration file from M1 to the other three servers

[root@localhost ~]# scp /etc/mysql-mmm/mmm_common.conf root@192.168.142.134:/etc/mysql-mmm/mmm_common.conf
[root@localhost ~]# scp /etc/mysql-mmm/mmm_common.conf root@192.168.142.130:/etc/mysql-mmm/mmm_common.conf
[root@localhost ~]# scp /etc/mysql-mmm/mmm_common.conf root@192.168.142.135:/etc/mysql-mmm/mmm_common.conf

Step 2: configure the monitor server

1. Install the epel-release repository and the MMM-related software

[root@localhost ~]# systemctl stop firewalld.service
[root@localhost ~]# setenforce 0
[root@localhost ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
[root@localhost ~]# yum -y install epel-release
[root@localhost ~]# yum clean all && yum makecache
[root@localhost ~]# yum -y install mysql-mmm*

2. Copy the configuration file from m1 to the monitoring server

[root@localhost ~]# scp /etc/mysql-mmm/mmm_common.conf root@192.168.142.136:/etc/mysql-mmm/

3. Configure the mmm_mon.conf configuration file

[root@localhost ~]# vim /etc/mysql-mmm/mmm_mon.conf

<monitor>
    ip                  127.0.0.1
    pid_path            /run/mysql-mmm-monitor.pid
    bin_path            /usr/libexec/mysql-mmm
    status_path         /var/lib/mysql-mmm/mmm_mond.status
    # point to the IP addresses of the four servers
    ping_ips            192.168.142.131, 192.168.142.134, 192.168.142.130, 192.168.142.135
    auto_set_online     10
</monitor>

4. Authorize mmm_agent on all database servers

MariaDB [(none)]> grant super, replication client, process on *.* to 'mmm_agent'@'192.168.142.%' identified by '123456';
Query OK, 0 rows affected (0.02 sec)

5. Authorize mmm_monitor on all database servers

MariaDB [(none)]> grant replication client on *.* to 'mmm_monitor'@'192.168.142.%' identified by '123456';
Query OK, 0 rows affected (0.02 sec)

6. Refresh the database and set the agent name on all database servers

[root@localhost ~]# vim /etc/mysql-mmm/mmm_agent.conf
# modify the default name (this db1) in the agent configuration file on each server
this db1    # on m1
this db2    # on m2
this db3    # on s1
this db4    # on s2
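Step 6 mentions refreshing the database but does not show a command; assuming the usual FLUSH PRIVILEGES statement is what is meant, a minimal sketch on each database server is:

# reload the grant tables so the new mmm_agent and mmm_monitor grants take effect immediately
[root@localhost ~]# mysql -u root -e "flush privileges;"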

7. Start the agent service and enable it at boot on all database servers

# start the agent service
[root@localhost ~]# systemctl start mysql-mmm-agent.service
# enable it at boot
[root@localhost ~]# systemctl enable mysql-mmm-agent.service
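To confirm the agent actually came up, the checks below can help (a minimal sketch; mysql-mmm's agent listens on TCP port 9989 by default, so adjust if your mmm_common.conf says otherwise):

# verify the agent service is running
[root@localhost ~]# systemctl status mysql-mmm-agent.service
# verify the agent is listening on its port (9989 by default)
[root@localhost ~]# netstat -natp | grep 9989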

8. Start the monitoring service on the monitor server and check the status of each node

[root@localhost mysql-mmm]# systemctl start mysql-mmm-monitor.service
[root@localhost ~]# mmm_control show
  db1(192.168.142.131) master/ONLINE. Roles: writer(192.168.142.250)
  db2(192.168.142.134) master/ONLINE. Roles:
  db3(192.168.142.130) slave/ONLINE. Roles: reader(192.168.142.252)
  db4(192.168.142.135) slave/ONLINE. Roles: reader(192.168.142.251)

9. Check the status of all servers

[root@localhost ~]# mmm_control checks all
db4  ping         [last change: 2019/11/25 18:23:03]  OK
db4  mysql        [last change: 2019/11/25 18:23:03]  OK
db4  rep_threads  [last change: 2019/11/25 18:23:03]  OK
db4  rep_backlog  [last change: 2019/11/25 18:23:03]  OK: Backlog is null
db2  ping         [last change: 2019/11/25 18:23:03]  OK
db2  mysql        [last change: 2019/11/25 18:59:01]  OK
db2  rep_threads  [last change: 2019/11/25 18:59:01]  OK
db2  rep_backlog  [last change: 2019/11/25 18:59:01]  OK: Backlog is null
db3  ping         [last change: 2019/11/25 18:23:03]  OK
db3  mysql        [last change: 2019/11/25 18:59:01]  OK
db3  rep_threads  [last change: 2019/11/25 18:59:01]  OK
db3  rep_backlog  [last change: 2019/11/25 18:59:01]  OK: Backlog is null
db1  ping         [last change: 2019/11/25 18:23:03]  OK
db1  mysql        [last change: 2019/11/25 18:59:01]  OK
db1  rep_threads  [last change: 2019/11/25 18:59:01]  OK
db1  rep_backlog  [last change: 2019/11/25 18:59:01]  OK: Backlog is null

Step 3: failover testing

1. Simulate m1 server downtime by stopping the service

[root@localhost ~]# systemctl stop mariadb.service

2. When the m1 server goes down, m2 takes over the virtual IP and continues to provide service.

[root@localhost ~]# mmm_control show
  db1(192.168.142.131) master/ONLINE. Roles:
  db2(192.168.142.134) master/ONLINE. Roles: writer(192.168.142.250)
  db3(192.168.142.130) slave/ONLINE. Roles: reader(192.168.142.252)
  db4(192.168.142.135) slave/ONLINE. Roles: reader(192.168.142.251)
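Once m1 has been repaired, it can be brought back into the cluster. The recovery below is a sketch rather than a step from the original walkthrough; set_online is a standard mmm_control subcommand, and with auto_set_online configured the node may return to ONLINE on its own:

# on m1: restart the database service
[root@localhost ~]# systemctl start mariadb.service
# on the monitor: bring db1 back online once its checks pass, then verify
[root@localhost ~]# mmm_control set_online db1
[root@localhost ~]# mmm_control show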

3. When the s1 server goes down, s2 takes over its virtual IP and continues to provide service.

[root@localhost ~]# mmm_control show
  db1(192.168.142.131) master/ONLINE. Roles: writer(192.168.142.250)
  db2(192.168.142.134) master/ONLINE. Roles:
  db3(192.168.142.130) slave/HARD_OFFLINE. Roles:
  db4(192.168.142.135) slave/ONLINE. Roles: reader(192.168.142.251), reader(192.168.142.252)

4. On the m1 server, authorize login from the monitoring server's address

MariaDB [(none)]> grant all on *.* to 'root'@'192.168.142.136' identified by '123456';
Query OK, 0 rows affected (0.00 sec)

5. Install the database client on the monitoring server

[root@localhost ~]# yum -y install mariadb

6. Log in to the database from the monitoring server with the authorized user and create some data

[root@localhost ~]# mysql -u root -p -h 192.168.142.250
Enter password:            # enter the password

# create a database
MariaDB [(none)]> create database BDQN;
Query OK, 1 row affected (0.01 sec)

7. The synchronized data can be viewed on all database servers immediately

# view the databases
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| BDQN               |    # the BDQN database has been synchronized
| mysql              |
| performance_schema |
| school             |
| test               |
+--------------------+
6 rows in set (0.00 sec)

This is how to implement an MMM high-availability cluster architecture in a MySQL database. We hope the content above has been helpful and that you have learned something from it. If you found the article useful, please share it so more people can see it.
