2025-01-17 Update From: SLTechnology News&Howtos
This article introduces what a mysql-mmm high-availability cluster is and how to build one. I hope it adds to and refreshes your knowledge; if you have further questions, you can keep following my articles in the industry information section.
Introduction to the mysql-mmm high-availability cluster

MMM (Master-Master replication Manager for MySQL) is a set of Perl scripts that support dual-master failover and day-to-day dual-master management. Although it is called master-master replication, only one master accepts writes at any given time; the other, standby master serves part of the read traffic, which keeps it warm and speeds up the switchover when the active master changes. The MMM script implements failover on the one hand, and on the other hand its additional tool scripts can provide read load balancing across multiple slaves. MMM can remove the virtual IP from servers with high replication latency in the group, either automatically or manually, and it also keeps data synchronized between the two master nodes. Because MMM cannot fully guarantee data consistency, it suits scenarios where consistency requirements are moderate but business availability must be maximized; for businesses that require strict data consistency, an MMM-style high-availability architecture is not recommended.

Advantages:
1. A stable, mature open-source product that has stood the test of time. The core technology is MySQL's own replication; MMM only adds script-based control, so the principle is easy to understand and management can be fairly intelligent.
2. Easy to install, simple to configure and use.
3. Powerful features (HA, failover, a tool suite; in cluster mode one monitor can manage multiple MMM groups).

Disadvantages:
1. The architecture has only one write point, so write scalability is limited, although this is enough for a typical medium-sized business. Workaround: large applications can be split vertically into multiple MMM groups managed with mmm cluster.
2. Read/write splitting and read load balancing still require application-level development or other tools.

MMM high-availability architecture:
mmm_mond: the monitor process. It performs all monitoring work and decides and handles all node role changes. It runs on the monitor host.
mmm_agentd: the agent process that runs on each MySQL server. It performs monitoring probes and simple remote service changes. It runs on the monitored hosts.
mmm_control: a simple script that provides commands for managing the mmm_mond process.

Hosts used in this example:

Host name       System              Main software   IP                Virtual IP
mysql-master1   CentOS 7.3 x86_64   mmm             192.168.217.129   192.168.217.100
mysql-master2   CentOS 7.3 x86_64   mmm             192.168.217.130   192.168.217.100
mysql-slave1    CentOS 7.3 x86_64   mmm             192.168.217.131   192.168.217.200
mysql-slave2    CentOS 7.3 x86_64   mmm             192.168.217.132   192.168.217.210
monitor         CentOS 7.3 x86_64   mmm             192.168.217.133   -

Install MMM on all five servers:

```shell
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo   # configure the Aliyun mirror
yum clean all && yum makecache       # clear the yum cache and rebuild it locally
yum -y install epel-release          # install the EPEL repository
yum -y install mysql-mmm*            # install the MMM packages
```

Install and configure the MySQL service on all four database servers. The configuration is identical everywhere except for server-id:

vim /etc/my.cnf

```ini
[mysqld]
user = mysql                                   # service user
basedir = /usr/local/mysql                     # working directory
datadir = /usr/local/mysql/data                # database files
port = 3306
character_set_server = utf8
pid-file = /usr/local/mysql/mysqld.pid         # pid file
socket = /usr/local/mysql/mysql.sock           # connection socket
server-id = 1                                  # must be unique on every server
binlog-ignore-db = mysql,information_schema    # databases that do not need to be replicated
log_slave_updates = true                       # write replicated updates to the binary log
sync_binlog = 1                                # flush the binary log on every commit
auto_increment_increment = 2                   # step of auto-increment fields
auto_increment_offset = 1                      # starting value of auto-increment fields
log-error = /usr/local/mysql/data/mysql_error.log   # error log
general_log = ON                               # enable the general log
general_log_file = /usr/local/mysql/data/mysql_general.log
log_bin = mysql-bin                            # binary log
slow_query_log = ON                            # enable the slow query log
slow_query_log_file = mysql_slow_query.log
long_query_time = 1
```

Copy the configuration file to the other three database servers, adjust server-id on each, and restart the service:

```shell
scp /etc/my.cnf root@192.168.217.130:/etc/   # enter yes and the remote root password
systemctl restart mysqld.service             # restart after editing server-id
```

Configure master-master replication on the two masters. Grant m2 replication privileges on m1, and grant m1 the same privileges on m2:

```sql
mysql> grant replication slave on *.* to 'replication'@'192.168.217.%' identified by '123456';
mysql> show master status;   -- record the log file name and position; check it on both masters
```

Then run change master to on both masters, each pointing at the other (the statement below, run on m2, points at m1; on m1, point at 192.168.217.130). Pay attention to the IP address, log file name and offset, and do not get them wrong:

```sql
mysql> change master to master_host='192.168.217.129',master_user='replication',
       master_password='123456',master_log_file='mysql-bin.000001',master_log_pos=1104;
mysql> start slave;            -- start replication
mysql> show slave status\G     -- check replication status
```

On both masters, the status must show:
Slave_IO_Running: Yes
Slave_SQL_Running: Yes

You can create a database on one master, view it and delete it, and verify the changes on the other master.

Configure the two slave servers: point each at a master with the same change master to statement (again minding the IP address, log file name and offset), then run start slave; and show slave status\G and confirm that Slave_IO_Running and Slave_SQL_Running are both Yes.
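The auto_increment_increment and auto_increment_offset settings above are what keep the two masters from generating colliding primary keys: MySQL computes each new AUTO_INCREMENT value as offset + n * increment. A minimal sketch of the resulting interleaving (plain shell arithmetic, not MySQL itself):

```shell
#!/bin/sh
# Simulate AUTO_INCREMENT values on two masters with auto_increment_increment=2.
# db1 uses auto_increment_offset=1, db2 uses auto_increment_offset=2.
increment=2
for n in 0 1 2; do
  echo "db1 (offset 1): $((1 + n * increment))  db2 (offset 2): $((2 + n * increment))"
done
# → db1 generates 1, 3, 5 ... while db2 generates 2, 4, 6 ...
```

Because the two sequences never overlap, inserts arriving on either master remain safe to replicate to the other.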
Test master-slave, master-master and synchronization
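The synchronization test can look like the following session (an illustrative transcript that assumes the replication above is running; the database name school is just an example):

```sql
-- on mysql-master1
mysql> create database school;

-- on mysql-master2 and both slaves
mysql> show databases;    -- school should appear

-- on mysql-master2
mysql> drop database school;

-- back on mysql-master1
mysql> show databases;    -- school should be gone again
```

If the change propagates in both directions between the masters and down to both slaves, master-master and master-slave replication are working.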
Configure MMM. On mysql-master1, edit the common configuration (it will be copied to every host):

cd /etc/mysql-mmm/
vim mmm_common.conf

```ini
cluster_interface       ens32            # NIC name
...
replication_user        replication      # master/slave replication user
replication_password    123456
agent_user              mmm_agent        # agent user
agent_password          123456
...
# db1
ip      192.168.217.129                  # master server
mode    master
peer    db2
...
# db2
ip      192.168.217.130                  # master server
mode    master
peer    db1
...
# db3
ip      192.168.217.131                  # slave server
mode    slave
...
# db4
ip      192.168.217.132                  # slave server
mode    slave
...
hosts   db1, db2
ips     192.168.217.100                  # masters' virtual IP
mode    exclusive
...
hosts   db3, db4
ips     192.168.217.200, 192.168.217.210 # slaves' virtual IPs
mode    balanced
```

Copy the file to the other hosts:

```shell
scp mmm_common.conf root@192.168.217.130:/etc/mysql-mmm/
scp mmm_common.conf root@192.168.217.131:/etc/mysql-mmm/
scp mmm_common.conf root@192.168.217.132:/etc/mysql-mmm/
scp mmm_common.conf root@192.168.217.133:/etc/mysql-mmm/
```

On the monitor server, edit the monitor configuration:

cd /etc/mysql-mmm/
vim mmm_mon.conf

```ini
ping_ips         192.168.217.129, 192.168.217.130, 192.168.217.131, 192.168.217.132   # database server addresses
auto_set_online  10                     # delay in seconds before nodes are set online automatically
...
monitor_user     mmm_monitor            # monitoring user
monitor_password 123456
```

Grant privileges in all databases:

```sql
mysql> grant super, replication client, process on *.* to 'mmm_agent'@'192.168.217.%' identified by '123456';   -- authorization for mmm_agent
mysql> grant replication client on *.* to 'mmm_monitor'@'192.168.217.%' identified by '123456';                 -- authorization for mmm_monitor
mysql> flush privileges;    -- refresh privileges
```

Edit /etc/mysql-mmm/mmm_agent.conf on every database server, setting "this" to the host's name as defined in mmm_common.conf:

vim /etc/mysql-mmm/mmm_agent.conf
this db1    # db2 / db3 / db4 on the corresponding hosts

Start the agent service on all database servers:

```shell
systemctl start mysql-mmm-agent.service    # start the agent service
systemctl enable mysql-mmm-agent.service   # start at boot
```

Configure the monitor server:

```shell
systemctl start mysql-mmm-monitor.service  # start the monitoring service
mmm_control show                           # check the state of each node
  db1(192.168.217.129) master/ONLINE. Roles: writer(192.168.217.100)
  db2(192.168.217.130) master/ONLINE. Roles:
  db3(192.168.217.131) slave/ONLINE. Roles: reader(192.168.217.200)
  db4(192.168.217.132) slave/ONLINE. Roles: reader(192.168.217.210)
mmm_control checks all                     # every check should report OK
mmm_control move_role writer db2           # manually assign the writer role; note: roles are not preempted automatically
mmm_control show
  db1(192.168.217.129) master/ONLINE. Roles:
  db2(192.168.217.130) master/ONLINE. Roles: writer(192.168.217.100)
  db3(192.168.217.131) slave/ONLINE. Roles: reader(192.168.217.200)
  db4(192.168.217.132) slave/ONLINE. Roles: reader(192.168.217.210)
```

Test: stop the mysql service on mysql-master1 and check whether mysql-master2 takes over the writer role. Note: wait a moment before checking. If you stop the service on mysql-slave1, its virtual IP automatically drifts to mysql-slave2, which then holds two reader addresses.
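When scripting against the cluster, the current writer can be extracted from mmm_control show output. A minimal sketch parsing the sample output shown above (the awk parsing is my own assumption, not part of MMM; in a live cluster you would capture `mmm_control show` instead of the hard-coded sample):

```shell
#!/bin/sh
# Sample `mmm_control show` output, as captured in the walkthrough above.
show_output='db1(192.168.217.129) master/ONLINE. Roles: writer(192.168.217.100)
db2(192.168.217.130) master/ONLINE. Roles:
db3(192.168.217.131) slave/ONLINE. Roles: reader(192.168.217.200)
db4(192.168.217.132) slave/ONLINE. Roles: reader(192.168.217.210)'

# On a live monitor host: show_output=$(mmm_control show)
# Print the first field of the line that holds the writer role.
writer=$(printf '%s\n' "$show_output" | awk '/writer\(/ {print $1}')
echo "current writer: $writer"   # → current writer: db1(192.168.217.129)
```

A check like this can be dropped into monitoring scripts to alert when the writer role moves after a failover.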
That covers what a mysql-mmm high-availability cluster is and how to build one; I hope it helps you in practice. Given the limited space, there are inevitably gaps left to fill. If you need more professional answers, you can contact us on the official website, where pre-sales and after-sales support are available around the clock.