
MySQL 5.7 MMM High Availability Cluster under CentOS 7


Introduction to the MySQL 5.7 MMM high availability cluster under CentOS 7

MMM (Master-Master Replication Manager for MySQL) is a set of scripts that provide failover and day-to-day management for a dual-master setup. MMM is written in Perl and is mainly used to monitor and manage MySQL master-master (dual-master) replication. Although it is called dual-master replication, only one master accepts writes at any given time; the other, standby master serves part of the read traffic, which keeps it warm and speeds up the switch from one master to the other.

So on the one hand the MMM scripts implement failover, and on the other hand the additional tool scripts provide read load balancing across multiple slaves.
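As a concrete sketch of how the two kinds of role are consumed (using the virtual IPs configured later in this walkthrough; the account, database and table names are hypothetical), all writes go to the single exclusive writer VIP, while reads can be spread across the balanced reader VIPs:

mysql -h 192.168.100.200 -u app -p -e "INSERT INTO demo.t1 VALUES (1);"    # writer VIP: all writes
mysql -h 192.168.100.201 -u app -p -e "SELECT * FROM demo.t1;"             # reader VIP 1
mysql -h 192.168.100.202 -u app -p -e "SELECT * FROM demo.t1;"             # reader VIP 2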

Experimental preparation

Four servers with the MySQL 5.7 service installed

One CentOS 7 server on which to install the MMM monitor
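For reference, the addresses used throughout the steps below are:

192.168.100.100  db1  master 1 (MySQL 5.7)
192.168.100.101  db2  master 2 (MySQL 5.7)
192.168.100.102  db3  slave 1  (MySQL 5.7)
192.168.100.103  db4  slave 2  (MySQL 5.7)
192.168.100.200  writer virtual IP (exclusive)
192.168.100.201, 192.168.100.202  reader virtual IPs (balanced)

The MMM monitor runs on the separate CentOS 7 server.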

Lab steps:

1. Turn off the firewall and the enhanced security feature (SELinux) on all servers:

systemctl stop firewalld.service
setenforce 0

2. Configure the Aliyun yum repository, then install the epel-release repository:

wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
yum -y install epel-release
yum clean all && yum makecache

3. Modify the configuration file of the MySQL server:

vim /etc/my.cnf

Add the following under the [mysqld] section:

[mysqld]
log_error=/var/lib/mysql/mysql.err
log=/var/lib/mysql/mysql_log.log
log_slow_queries=/var/lib/mysql_slow_queris.log
binlog-ignore-db=mysql,information_schema
character_set_server=utf8
log_bin=mysql_bin
server_id=1
log_slave_updates=true
sync_binlog=1
auto_increment_increment=2
auto_increment_offset=1

Then restart the service:

systemctl restart mysqld

4. Copy the configuration file to the other three database servers and restart their MySQL service:

scp /etc/my.cnf root@192.168.100.101:/etc/
scp /etc/my.cnf root@192.168.100.102:/etc/
scp /etc/my.cnf root@192.168.100.103:/etc/

Note: the server_id in the copied configuration file must be changed to a unique value on each server.

5. Configure master-master replication (the two master servers replicate from each other). On each master, record the binary log file name and position:

show master status;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql_bin.000002 |      339 |              |                  |
+------------------+----------+--------------+------------------+
1 row in set (0.00 sec)

Check this on both master hosts.

Grant replication slave permission to M2 on M1, and to M1 on M2 (both master servers execute this; the slave servers do not need it):

grant replication slave on *.* to 'replication'@'192.168.100.%' identified by '123456';

On M1, point to M2's log file name and position:

change master to master_host='192.168.100.101',master_user='replication',master_password='123456',master_log_file='mysql_bin.000002',master_log_pos=339;
start slave;
show slave status;
  Slave_IO_Running: Yes
  Slave_SQL_Running: Yes

Do the same on the other master, pointing to M1's log file name and position:

change master to master_host='192.168.100.100',master_user='replication',master_password='123456',master_log_file='mysql_bin.000002',master_log_pos=339;
start slave;
show slave status;
  Slave_IO_Running: Yes
  Slave_SQL_Running: Yes

6. Configure master-slave replication on the two slave servers, pointing them to M1:

change master to master_host='192.168.100.100',master_user='replication',master_password='123456',master_log_file='mysql_bin.000002',master_log_pos=339;
start slave;
show slave status;
  Slave_IO_Running: Yes
  Slave_SQL_Running: Yes
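Before moving on to MMM, replication can be spot-checked with a quick round trip (test_db is a hypothetical database name used only for this check):

# on M1 (192.168.100.100)
mysql -u root -p -e "CREATE DATABASE test_db;"
# on M2 (192.168.100.101) and on both slaves, the new database should appear
mysql -u root -p -e "SHOW DATABASES LIKE 'test_db';"
# clean up on M1 once satisfied
mysql -u root -p -e "DROP DATABASE test_db;"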

Install MMM

Install MMM on all servers (note that the epel repository must be configured first):

yum -y install mysql-mmm*

Configure MMM:

cd /etc/mysql-mmm/
vi mmm_common.conf     ## all hosts need this file, so edit it once and copy it out

<host default>
    cluster_interface       ens33
    replication_user        replication
    replication_password    123456
    agent_user              mmm_agent
    agent_password          123456
</host>

<host db1>
    ip      192.168.100.100
    mode    master
    peer    db2
</host>

<host db2>
    ip      192.168.100.101
    mode    master
    peer    db1
</host>

<host db3>
    ip      192.168.100.102
    mode    slave
</host>

<host db4>
    ip      192.168.100.103
    mode    slave
</host>

<role writer>
    hosts   db1, db2
    ips     192.168.100.200
    mode    exclusive
</role>

<role reader>
    hosts   db3, db4
    ips     192.168.100.201, 192.168.100.202
    mode    balanced
</role>

Copy the file to the MySQL servers:

scp mmm_common.conf root@192.168.100.100:/etc/mysql-mmm/
scp mmm_common.conf root@192.168.100.101:/etc/mysql-mmm/
scp mmm_common.conf root@192.168.100.102:/etc/mysql-mmm/
scp mmm_common.conf root@192.168.100.103:/etc/mysql-mmm/

On the monitor server, change the monitor credentials:

cd /etc/mysql-mmm/
vi mmm_mon.conf
    monitor_user        mmm_monitor
    monitor_password    123456

Authorize mmm_agent on all database servers:

grant super, replication client, process on *.* to 'mmm_agent'@'192.168.100.%' identified by '123456';

Authorize mmm_monitor on all database servers:

grant replication client on *.* to 'mmm_monitor'@'192.168.100.%' identified by '123456';
flush privileges;

Modify mmm_agent.conf on each database server:

vi /etc/mysql-mmm/mmm_agent.conf
this db1      ## set this to db1, db2, db3, db4 on master 1, master 2, slave 1 and slave 2 respectively

Start mysql-mmm-agent on all database servers:

systemctl start mysql-mmm-agent.service
systemctl enable mysql-mmm-agent.service

Configure and start the monitor server:

cd /etc/mysql-mmm/
vi mmm_mon.conf
...
    ping_ips            192.168.100.100, 192.168.100.101, 192.168.100.102, 192.168.100.103    ## database server addresses
    auto_set_online     10

systemctl start mysql-mmm-monitor.service     ## start mysql-mmm-monitor

mmm_control show      ## view the status of each node
  db1(192.168.100.100) master/ONLINE. Roles: writer(192.168.100.200)
  db2(192.168.100.101) master/ONLINE. Roles:
  db3(192.168.100.102) slave/ONLINE. Roles: reader(192.168.100.201)
  db4(192.168.100.103) slave/ONLINE. Roles: reader(192.168.100.202)

mmm_control checks all               ## every check should return OK
mmm_control move_role writer db1     ## manually switch the writer role

Test

Stop the MySQL service on master 1:

systemctl stop mysqld

mmm_control show      ## check the status of each node
  db1(192.168.100.100) master/HARD_OFFLINE. Roles:
  db2(192.168.100.101) master/ONLINE. Roles: writer(192.168.100.200)
  db3(192.168.100.102) slave/ONLINE. Roles: reader(192.168.100.201)
  db4(192.168.100.103) slave/ONLINE. Roles: reader(192.168.100.202)

Stop the MySQL service on slave 1:

systemctl stop mysqld

mmm_control show      ## check the status of each node
  db1(192.168.100.100) master/HARD_OFFLINE. Roles:
  db2(192.168.100.101) master/ONLINE. Roles: writer(192.168.100.200)
  db3(192.168.100.102) slave/HARD_OFFLINE. Roles:
  db4(192.168.100.103) slave/ONLINE. Roles: reader(192.168.100.201), reader(192.168.100.202)

At this point, the MySQL high availability cluster built with MMM is complete.
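As an optional sanity check, clients should still be able to write through the writer VIP while master 1 is down. The account and database below are hypothetical and have to be created on the surviving master first:

# on db2 (currently holding the writer role): create a test schema and account
mysql -u root -p -e "CREATE DATABASE IF NOT EXISTS demo; GRANT ALL ON demo.* TO 'app'@'192.168.100.%' IDENTIFIED BY '123456'; FLUSH PRIVILEGES;"
# from any client: write through the writer VIP (now held by db2) and read through a reader VIP
mysql -h 192.168.100.200 -u app -p123456 -e "CREATE TABLE IF NOT EXISTS demo.t1 (id INT); INSERT INTO demo.t1 VALUES (1);"
mysql -h 192.168.100.201 -u app -p123456 -e "SELECT id FROM demo.t1;"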
