In this article, the editor introduces MMM for MySQL and how to deploy it. The content is fairly detailed and is presented from a professional point of view; I hope you get something out of reading it.
1. Brief introduction to MMM:
MMM (Multi-Master Replication Manager for MySQL) is a MySQL multi-master replication manager implemented in Perl. It is a scalable suite of scripts for monitoring, failover and management of MySQL master-master replication configurations (at any time only one node accepts writes). MMM can also load-balance reads across the slave servers, so it can bring up virtual IPs on a group of servers used for replication; in addition, it ships scripts for data backup and for resynchronizing data between nodes. MySQL itself does not provide a failover solution for replication; with the MMM solution, high availability of MySQL can be achieved. Besides providing the floating-IP function, MMM automatically re-points the backend slaves to the new master for replication after the current master goes down, so there is no need to change the replication configuration by hand. It is a relatively mature solution at present.
Advantages: high availability, good scalability, and automatic switchover on failure. With master-master replication, only one database accepts writes at any time, which keeps the data consistent. When the master goes down, the other master takes over immediately, and the slaves switch over automatically without manual intervention.
Disadvantages: the monitor node is a single point of failure (although it can be combined with Keepalived or Heartbeat to make it highly available); at least three nodes are required, so there are demands on the number of hosts; read-write splitting is required, and the read-write splitting logic has to be written in the application. In systems with very heavy reads and writes the performance is not very stable, and problems such as replication delay and failed switchovers may occur. The MMM scheme is therefore not suitable for environments with high requirements on data safety or with very busy reads and writes.
Applicable scenarios:
MMM is suitable for scenarios where the database is accessed heavily and read-write splitting can be implemented.
The main functions of MMM are provided by the following three scripts:
mmm_mond: the monitoring daemon, responsible for all monitoring work; it decides when to remove a node (the mmm_mond process performs periodic heartbeat checks and, when a master fails, floats the write IP to the other master), and so on.
mmm_agentd: an agent daemon that runs on each mysql server and provides a simple set of remote services to the monitoring node.
mmm_control: a command-line tool for managing the mmm_mond process.
During monitoring, the relevant authorized users need to be added in mysql: an mmm_monitor user and an mmm_agent user; an mmm_tools user must also be added if you want to use MMM's backup tools.
II. Deployment and implementation
It is recommended to clone the virtual machine after installing the package.
1. Environment
OS: CentOS 7.2 (64-bit); database system: MySQL 5.7.13
On every host: disable SELinux, configure NTP time synchronization, and turn off the firewall.
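As a reference, here is a minimal sketch of those prerequisite steps on CentOS 7, assuming the ntp package is used for time synchronization (chronyd would serve the same purpose); run it on every host:
# setenforce 0                                                    # disable SELinux for the running system
# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config    # keep it disabled after reboot
# systemctl stop firewalld && systemctl disable firewalld         # turn off the firewall
# yum -y install ntp
# systemctl enable ntpd && systemctl start ntpd                   # keep the clocks of all hosts in sync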
Role               IP              Hostname   server-id   Write VIP        Read VIP
master1            192.168.41.10   master1    1           192.168.41.100   -
master2 (backup)   192.168.41.11   master2    2           -                192.168.41.101
slave1             192.168.41.12   slave1     3           -                192.168.41.102
slave2             192.168.41.13   slave2     4           -                192.168.41.103
monitor            192.168.41.14   monitor1   -           -                -
2. Configure the /etc/hosts file on all hosts and add the following:
192.168.41.10 master1
192.168.41.11 master2
192.168.41.12 slave1
192.168.41.13 slave2
192.168.41.14 monitor1
Install the perl, perl-devel, perl-CPAN, libart_lgpl.x86_64, rrdtool.x86_64 and rrdtool-perl.x86_64 packages on all hosts:
# yum -y install perl perl-devel perl-CPAN libart_lgpl.x86_64 rrdtool.x86_64 rrdtool-perl.x86_64
Note: this uses the CentOS 7 online yum repositories (if the installation fails, run yum remove libvirt-client and then repeat yum -y install perl perl-devel perl-CPAN libart_lgpl.x86_64 rrdtool.x86_64 rrdtool-perl.x86_64).
Install the Perl libraries required by MMM:
# cpan -i Algorithm::Diff Class::Singleton DBI DBD::mysql Log::Dispatch Log::Log4perl Mail::Send Net::Ping Proc::Daemon Time::HiRes Params::Validate Net::ARP
3. Install mysql5.7 and configure replication on master1, master2, slave1, slave2 hosts
Master1 and master2 are the master and slave of each other, and slave1 and slave2 are the slaves of master1
Add the following to the configuration file /etc/my.cnf of each mysql server, and note that the server-id cannot be repeated.
Master1 host:
log-bin = mysql-bin
binlog_format = mixed
server-id = 1
relay-log = relay-bin
relay-log-index = slave-relay-bin.index
log-slave-updates = 1
auto-increment-increment = 2
auto-increment-offset = 1
Master2 host:
log-bin = mysql-bin
binlog_format = mixed
server-id = 2
relay-log = relay-bin
relay-log-index = slave-relay-bin.index
log-slave-updates = 1
auto-increment-increment = 2
auto-increment-offset = 2
Slave1 host:
server-id = 3
relay-log = relay-bin
relay-log-index = slave-relay-bin.index
read_only = 1
Slave2 host:
server-id = 4
relay-log = relay-bin
relay-log-index = slave-relay-bin.index
read_only = 1
After completing the modification to my.cnf, restart the mysql service through systemctl restart mysqld
Master-slave configuration (master1 and master2 are configured as master, slave1 and slave2 are configured as slaves of master1):
Authorize on master1:
mysql> grant replication slave on *.* to 'rep'@'192.168.41.%' identified by '123456';
Authorize on master2:
mysql> grant replication slave on *.* to 'rep'@'192.168.41.%' identified by '123456';
Configure master2, slave1, and slave2 as slave libraries for master1:
Execute show master status; on master1 to get binlog files and Position points
Execute on master2, slave1, and slave2:
mysql> change master to master_host='192.168.41.10',master_port=3306,master_user='rep',master_password='123456',master_log_file='mysql-bin.000001',master_log_pos=452;
mysql> start slave;
Verify master-slave replication:
Master2 host:
mysql> show slave status\G
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
If both Slave_IO_Running and Slave_SQL_Running are Yes, master-slave replication is configured correctly.
Configure master1 as a slave library for master2:
Execute show master status; on master2 to get the binlog file and position:
mysql> show master status;
Execute on master1:
mysql> change master to master_host='192.168.41.11',master_port=3306,master_user='rep',master_password='123456',master_log_file='mysql-bin.000001',master_log_pos=452;
mysql> start slave;
Verify master-slave replication:
Master1 host:
mysql> show slave status\G
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
If both Slave_IO_Running and Slave_SQL_Running are Yes, master-slave replication is configured correctly.
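As an optional sanity check (not one of the original steps), you can verify that changes replicate in both directions and that the auto-increment-increment/auto-increment-offset settings keep the two masters from generating conflicting primary keys; the mmm_test database below is just a throwaway example:
mysql> create database mmm_test;                          -- on master1; it should appear on master2, slave1 and slave2
mysql> create table mmm_test.t1 (id int auto_increment primary key, v varchar(10));
mysql> insert into mmm_test.t1 (v) values ('a'),('b');    -- on master1: generates odd ids (offset 1, increment 2)
mysql> insert into mmm_test.t1 (v) values ('c'),('d');    -- on master2: generates even ids (offset 2, increment 2)
mysql> select * from mmm_test.t1;                         -- on any node: all four rows, no id conflicts
mysql> drop database mmm_test;                            -- clean up (on master1)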
4. Mysql-mmm configuration:
Create users on the 4 mysql nodes.
Create the agent account:
mysql> grant super, replication client, process on *.* to 'mmm_agent'@'192.168.41.%' identified by '123456';
Create the monitoring account:
mysql> grant replication client on *.* to 'mmm_monitor'@'192.168.41.%' identified by '123456';
Note 1: because master-master and master-slave replication are already working, it is enough to execute the grants on master1; then check on master2, slave1 and slave2 whether the monitoring and agent accounts have been replicated:
mysql> select user,host from mysql.user where user in ('mmm_monitor','mmm_agent');
Note 2:
mmm_monitor user: used by the mmm monitor to check the health of the mysql server processes.
mmm_agent user: used by the mmm agent to change read-only mode, change the replication master, and so on.
5. Mysql-mmm installation
Install the monitor program on the monitor host (192.168.41.14):
cd /usr/local/src
wget http://pkgs.fedoraproject.org/repo/pkgs/mysql-mmm/mysql-mmm-2.2.1.tar.gz/f5f8b48bdf89251d3183328f0249461e/mysql-mmm-2.2.1.tar.gz
tar -zxf mysql-mmm-2.2.1.tar.gz
cd mysql-mmm-2.2.1
make install
Install the agent on the database servers (master1, master2, slave1, slave2):
cd /tmp
wget http://pkgs.fedoraproject.org/repo/pkgs/mysql-mmm/mysql-mmm-2.2.1.tar.gz/f5f8b48bdf89251d3183328f0249461e/mysql-mmm-2.2.1.tar.gz
tar -zxf mysql-mmm-2.2.1.tar.gz
cd mysql-mmm-2.2.1
make install
6. Configure mmm
Write the configuration files; mmm_common.conf must be identical on all five hosts.
After the installation is complete, all configuration files are placed under /etc/mysql-mmm/. The management server and the database servers all need the common file mmm_common.conf, whose content is as follows:
active_master_role      writer          # the active master role; all db servers should have the read_only parameter enabled, and for the server holding the writer role the monitoring agent automatically turns read_only off

<host default>
    cluster_interface       eno16777736                # network interface of the cluster
    pid_path                /var/run/mmm_agentd.pid    # pid file path
    bin_path                /usr/lib/mysql-mmm/        # path to the executables
    replication_user        rep                        # replication user
    replication_password    123456                     # replication user password
    agent_user              mmm_agent                  # agent user
    agent_password          123456                     # agent user password
</host>

<host master1>                                         # hostname of master1
    ip      192.168.41.10
    mode    master                                     # this host's role is master
    peer    master2                                    # the peer, i.e. the other master
</host>

<host master2>                                         # same idea as master1
    ip      192.168.41.11
    mode    master
    peer    master1
</host>

<host slave1>                                          # same concept as above; if there are several slaves, repeat this block
    ip      192.168.41.12
    mode    slave                                      # this host's role is slave
</host>

<host slave2>
    ip      192.168.41.13
    mode    slave
</host>

<role writer>                                          # writer role configuration
    hosts   master1, master2                           # hostnames of the servers that may hold the write role; if you do not want writes to be switched, list only master1 here - that avoids switching writes because of network delay, but if master1 fails there is no writer in the MMM cluster and only reads are served
    ips     192.168.41.100                             # virtual IP provided for external write operations
    mode    exclusive                                  # exclusive: only one master is allowed, i.e. only one write IP
</role>

<role reader>                                          # reader role configuration
    hosts   master2, slave1, slave2                    # hostnames of the servers that provide read operations; the master can of course also be added
    ips     192.168.41.101, 192.168.41.102, 192.168.41.103    # virtual IPs for external read operations; the ips and hosts do not correspond one to one and their numbers may differ - if they differ, one of the hosts will be assigned two IPs
    mode    balanced                                   # balanced: load-balance reads across the hosts
</role>
Copy this file to the other servers so that the configuration is identical everywhere:
# for host in master1 master2 slave1 slave2; do scp /etc/mysql-mmm/mmm_common.conf $host:/etc/mysql-mmm/; done
Agent file configuration
Edit /etc/mysql-mmm/mmm_agent.conf on the 4 mysql node machines.
On the database servers there is one more file to modify, mmm_agent.conf, whose content is:
include mmm_common.conf
this master1
Note: this file is configured only on the db servers; the monitoring server does not need it. Change the host name after "this" to the hostname of the current server (see the sketch after this note).
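If you prefer not to edit each file by hand, a rough sketch of doing it from one host is shown below; it assumes mmm_agent.conf has already been copied to every node and that password-less ssh between the hosts is available (both are assumptions, not steps from this article):
# for host in master1 master2 slave1 slave2; do \
      ssh $host "sed -i 's/^this .*/this $host/' /etc/mysql-mmm/mmm_agent.conf"; \
  done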
Start the agent process
In the init script /etc/init.d/mysql-mmm-agent, add the following line directly under #!/bin/sh:
source /root/.bash_profile
Then add it as a system service and enable it to start at boot:
# chkconfig --add mysql-mmm-agent
# chkconfig mysql-mmm-agent on
# /etc/init.d/mysql-mmm-agent start
Note: the purpose of adding source /root/.bash_profile is to let the mysql-mmm-agent service start correctly at boot. The only difference between automatic startup as a service and manual startup is whether a login shell is activated, so when it is started as a service the missing environment variables can make the start fail, with an error like the following:
Daemon bin: '/usr/sbin/mmm_agentd'
Daemon pid: '/var/run/mmm_agentd.pid'
Starting MMM Agent daemon... Can't locate Proc/Daemon.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at /usr/sbin/mmm_agentd line 7.
BEGIN failed--compilation aborted at /usr/sbin/mmm_agentd line 7.
failed
Solution:
# cpan Proc::Daemon
# cpan Log::Log4perl
# /etc/init.d/mysql-mmm-agent start
Daemon bin: '/usr/sbin/mmm_agentd'
Daemon pid: '/var/run/mmm_agentd.pid'
Starting MMM Agent daemon... Ok
# netstat -antp | grep mmm_agentd
tcp        0      0 192.168.31.83:9989      0.0.0.0:*               LISTEN      9693/mmm_agentd
Edit /etc/mysql-mmm/mmm_mon.conf on the monitor host:
include mmm_common.conf

<monitor>
    ip                  127.0.0.1                   # for security, listen only on the local address; mmm_mond listens on port 9988 by default
    pid_path            /var/run/mmm_mond.pid
    bin_path            /usr/lib/mysql-mmm/
    status_path         /var/lib/misc/mmm_mond.status
    ping_ips            192.168.41.10, 192.168.41.11, 192.168.41.12, 192.168.41.13
                        # list of IPs used to test network availability; as long as one of them answers ping the network is considered normal; do not put the local address here
    auto_set_online     0                           # how long before a recovered node is set online automatically; the default is 60s, 0 means immediately
    check_period        5                           # check period; default 5s
    trap_period         10                          # a node must fail checks continuously for trap_period seconds before it is considered failed; default 10s
    timeout             2                           # check timeout; default 2s
    #restart_after      10000                       # restart the checker process after this many checks; default 10000
    max_backlog         86400                       # maximum value recorded for the rep_backlog check; default 60
</monitor>

<host default>
    monitor_user        mmm_monitor                 # user the monitor uses to check the db servers
    monitor_password    123456                      # password of the monitoring user
</host>
Start the monitoring process:
In the init script /etc/init.d/mysql-mmm-monitor, add the following line directly under #!/bin/sh:
source /root/.bash_profile
Then add it as a system service and enable it to start at boot:
# chkconfig --add mysql-mmm-monitor
# chkconfig mysql-mmm-monitor on
# /etc/init.d/mysql-mmm-monitor start
A possible error message on startup:
Starting MMM Monitor daemon: Can't locate Proc/Daemon.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at /usr/sbin/mmm_mond line 11.
BEGIN failed--compilation aborted at /usr/sbin/mmm_mond line 11.
failed
Solution: install the following Perl libraries:
# cpan Proc::Daemon
# cpan Log::Log4perl
[root@monitor1 ~]# /etc/init.d/mysql-mmm-monitor start
Daemon bin: '/usr/sbin/mmm_mond'
Daemon pid: '/var/run/mmm_mond.pid'
Starting MMM Monitor daemon: Ok
[root@monitor1 ~]# netstat -anpt | grep 9988      (if the port does not show up right away, wait a moment)
tcp        0      0 127.0.0.1:9988          0.0.0.0:*               LISTEN      8546/mmm_mond
Note 2: MMM startup sequence: start monitor first, and then start agent
View cluster status
[root@monitor1 mysql-mmm] # mmm_control show
Master1 (192.168.41.10) master/ONLINE. Roles: writer (192.168.41.100)
Master2 (192.168.41.11) master/ONLINE. Roles: reader (192.168.41.101)
Slave1 (192.168.41.12) slave/ONLINE. Roles: reader (192.168.41.102)
Slave2 (192.168.41.13) slave/ONLINE. Roles: reader (192.168.41.103)
If the server status is not ONLINE, you can bring the server online with the following command, for example:
# mmm_control set_online <hostname>
For example: [root@monitor1 ~]# mmm_control set_online master1
As can be seen from the output above, the write VIP is on master1, and all slave nodes treat master1 as the master node.
Check whether the VIPs are up:
[root@master1 ~]# ip addr show dev eno16777736
2: eno16777736: mtu 1500 qdisc pfifo_fast state UP qlen 1000
Link/ether 00:0c:29:4b:7f:71 brd ff:ff:ff:ff:ff:ff
Inet 192.168.41.10/24 brd 192.168.41.255 scope global eno16777736
Valid_lft forever preferred_lft forever
Inet 192.168.41.100/32 scope global eno16777736
Valid_lft forever preferred_lft forever
Inet6 fe80::20c:29ff:fe4b:7f71/64 scope link
Valid_lft forever preferred_lft forever
[root@master2 mysql-mmm] # ip addr show dev eno16777736
2: eno16777736: mtu 1500 qdisc pfifo_fast state UP qlen 1000
Link/ether 00:0c:29:c6:2f:5c brd ff:ff:ff:ff:ff:ff
Inet 192.168.41.11/24 brd 192.168.41.255 scope global eno16777736
Valid_lft forever preferred_lft forever
Inet 192.168.41.101/32 scope global eno16777736
Valid_lft forever preferred_lft forever
Inet6 fe80::20c:29ff:fec6:2f5c/64 scope link
Valid_lft forever preferred_lft forever
[root@slave1 mysql-mmm] # ip addr show dev eno16777736
2: eno16777736: mtu 1500 qdisc pfifo_fast state UP qlen 1000
Link/ether 00:0c:29:f6:45:7b brd ff:ff:ff:ff:ff:ff
Inet 192.168.41.12/24 brd 192.168.41.255 scope global eno16777736
Valid_lft forever preferred_lft forever
Inet 192.168.41.102/32 scope global eno16777736
Valid_lft forever preferred_lft forever
Inet6 fe80::20c:29ff:fef6:457b/64 scope link
Valid_lft forever preferred_lft forever
[root@slave2 mysql-mmm] # ip addr show dev eno16777736
2: eno16777736: mtu 1500 qdisc pfifo_fast state UP qlen 1000
Link/ether 00:0c:29:46:31:2a brd ff:ff:ff:ff:ff:ff
Inet 192.168.41.13/24 brd 192.168.41.255 scope global eno16777736
Valid_lft forever preferred_lft forever
Inet 192.168.41.103/32 scope global eno16777736
Valid_lft forever preferred_lft forever
Inet6 fe80::20c:29ff:fe46:312a/64 scope link
Valid_lft forever preferred_lft forever
MMM high availability test:
Applications read and write using the VIP addresses. When a failure occurs, the VIP drifts to another node, which then continues to serve the traffic.
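To make the test concrete, a client application would point its write connections at the write VIP and its read connections at one of the read VIPs, roughly like this; the user, password and schema below are placeholders rather than accounts created in this article:
# mysql -h 192.168.41.100 -P3306 -uapp_user -p'app_pass' -e "insert into appdb.t1 (v) values ('x')"   # writes always go to the write VIP
# mysql -h 192.168.41.101 -P3306 -uapp_user -p'app_pass' -e "select count(*) from appdb.t1"           # reads are spread over the read VIPs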
First, check the status of the whole cluster; you can see that everything is in a normal state:
[root@monitor1 ~] # mmm_control show
Master1 (192.168.41.10) master/ONLINE. Roles: writer (192.168.41.100)
Master2 (192.168.41.11) master/ONLINE. Roles: reader (192.168.41.101)
Slave1 (192.168.41.12) slave/ONLINE. Roles: reader (192.168.41.102)
Slave2 (192.168.41.13) slave/ONLINE. Roles: reader (192.168.41.103)
Simulate a master1 outage by stopping its mysql service manually, and observe the monitor log:
[root@monitor1 ~]# tail -f /var/log/mysql-mmm/mmm_mond.log
View the latest status of the cluster
[root@monitor1 mysql-mmm] # mmm_control show
# Warning: agent on host master1 is not reachable
Master1 (192.168.41.10) master/HARD_OFFLINE. Roles:
Master2 (192.168.41.11) master/ONLINE. Roles: reader (192.168.41.102), writer (192.168.41.100)
Slave1 (192.168.41.12) slave/ONLINE. Roles: reader (192.168.41.101)
Slave2 (192.168.41.13) slave/ONLINE. Roles: reader (192.168.41.103)
From the output we can see that the state of master1 has changed from ONLINE to HARD_OFFLINE, and the write VIP has moved to the master2 host.
Check the status of all db servers:
[root@monitor1 ~]# mmm_control checks all
From the output you can see that master1 still answers ping, which shows that only the mysql service is down, not the host itself.
View the ip address of the master2 host:
[root@master2 ~] # ip addr show dev eno16777736
Eno16777736: mtu 1500 qdisc pfifo_fast state UP qlen 1000
Link/ether 00:0c:29:c6:2f:5c brd ff:ff:ff:ff:ff:ff
Inet 192.168.41.11/24 brd 192.168.41.255 scope global eno16777736
Valid_lft forever preferred_lft forever
Inet 192.168.41.102/32 scope global eno16777736
Valid_lft forever preferred_lft forever
Inet 192.168.41.100/32 scope global eno16777736
Valid_lft forever preferred_lft forever
Inet6 fe80::20c:29ff:fec6:2f5c/64 scope link
Valid_lft forever preferred_lft forever
Slave1 host:
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.41.11
Master_User: rep
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000002
Read_Master_Log_Pos: 154
Relay_Log_File: relay-bin.000002
Relay_Log_Pos: 320
Relay_Master_Log_File: mysql-bin.000002
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Slave2 host:
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.41.11
Master_User: rep
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000002
Read_Master_Log_Pos: 154
Relay_Log_File: relay-bin.000002
Relay_Log_Pos: 320
Relay_Master_Log_File: mysql-bin.000002
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Start the mysql service on the master1 host again and observe the monitor log:
[root@monitor1 ~]# tail -f /var/log/mysql-mmm/mmm_mond.log
2018-07-30 14:57:07 INFO Check 'mysql' on 'master1' is ok!
2018-07-30 14:57:07 INFO Check 'rep_backlog' on 'master1' is ok!
2018-07-30 14:57:07 INFO Check 'rep_threads' on 'master1' is ok!
2018-07-30 14:57:10 FATAL State of host 'master1' changed from HARD_OFFLINE to AWAITING_RECOVERY
From the above, we can see that the state of master1 has been changed from hard_offline to awaiting_recovery.
Bring the server online with the following command:
[root@monitor1 ~] # mmm_control set_online master1
[root@monitor1 mysql-mmm] # mmm_control show
Master1 (192.168.41.10) master/ONLINE. Roles:
Master2 (192.168.41.11) master/ONLINE. Roles: reader (192.168.41.102), writer (192.168.41.100)
Slave1 (192.168.41.12) slave/ONLINE. Roles: reader (192.168.41.101)
Slave2 (192.168.41.13) slave/ONLINE. Roles: reader (192.168.41.103)
You can see that the recovered master does not take the writer role back when it starts; it will only take over again if the current master goes down.
Summary
(1) If the candidate master master2 goes down, the cluster as a whole is not affected; only the reader role of master2 is removed.
(2) When the master node master1 goes down, the candidate master master2 takes over the writer role, and slave1 and slave2 automatically change master to master2 and replicate from the new master.
(3) If master1 goes down while master2 is lagging behind it in applying replication, master2 still becomes writable, and data consistency cannot be guaranteed at that moment. Likewise, if master2, slave1 and slave2 are all lagging behind master1 when it goes down, slave1 and slave2 first wait until they have caught up with the old master and then point to the new master master2 for replication, and consistency of the data still cannot be guaranteed.
(4) If you adopt the MMM high availability architecture, the master and the candidate master should have the same hardware configuration, and you should enable semi-synchronous replication to improve safety (see the sketch below), or use the multi-threaded replication of MariaDB/MySQL 5.7 to improve replication performance.
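As a sketch of recommendation (4), semi-synchronous replication can be enabled on MySQL 5.7 with the plugins that ship with the server; this is not part of the original deployment, and the 1000 ms timeout is only an example:
mysql> install plugin rpl_semi_sync_master soname 'semisync_master.so';   -- on both masters
mysql> install plugin rpl_semi_sync_slave  soname 'semisync_slave.so';    -- on every node that acts as a slave
mysql> set global rpl_semi_sync_master_enabled = 1;
mysql> set global rpl_semi_sync_master_timeout = 1000;                    -- ms to wait before falling back to asynchronous replication
mysql> set global rpl_semi_sync_slave_enabled = 1;
mysql> stop slave io_thread; start slave io_thread;                       -- restart the IO thread so the slave-side setting takes effect
To make these settings survive a restart, the corresponding rpl_semi_sync_* options should also be added to my.cnf.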
Attached:
1. Log file:
Log files are often the key to error analysis, so you should be good at using log files for problem analysis.
Db side: /var/log/mysql-mmm/mmm_agentd.log
Monitoring side: /var/log/mysql-mmm/mmm_mond.log
2. Command file:
mmm_agentd: startup program of the db agent daemon
mmm_mond: startup program of the monitoring daemon
mmm_backup: backup tool
mmm_restore: restore tool
mmm_control: command tool for operating the monitor
Only mmm_agentd is present on the db server side; the rest are on the monitor server side.
3. Mmm_control usage
The mmm_control program can be used to monitor the cluster status, switch the writer, set hosts online or offline, and so on.
Valid commands are:
    help                              - show this message                   # help information
    ping                              - ping monitor                        # check whether the monitor is alive
    show                              - show status                         # show the cluster status
    checks [<host>|all [<check>|all]] - show checks status                  # run the monitoring checks
    set_online <host>                 - set host <host> online              # set a host online
    set_offline <host>                - set host <host> offline             # set a host offline
    mode                              - print current mode.                 # print the current mode
    set_active                        - switch into active mode.
    set_manual                        - switch into manual mode.
    set_passive                       - switch into passive mode.
    move_role [--force] <role> <host> - move exclusive role <role> to host <host>   # move the writer role to the specified host (Only use --force if you know what you are doing!)
    set_ip <ip> <host>                - set role with ip <ip> to host <host>
Check the cluster status of all db servers:
[root@monitor1 ~] # mmm_control checks all
Check items include: ping, whether mysql is running properly, whether the replication thread is normal, etc.
Check the online status of the cluster environment:
[root@monitor1 ~] # mmm_control show
Take the specified host offline:
[root@monitor1 ~]# mmm_control set_offline slave2
Bring the specified host online:
[root@monitor1 ~]# mmm_control set_online slave2
Perform write switchover (manual switchover):
View the master corresponding to the current slave
[root@slave2 ~]# mysql -uroot -p123456 -e 'show slave status\G'
mysql: [Warning] Using a password on the command line interface can be insecure.
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.41.11
Perform the writer switch. Make sure the writer role in mmm_common.conf has the target host configured, otherwise the switch is not possible:
[root@monitor1 ~]# mmm_control move_role writer master1
OK: Role 'writer' has been moved from 'master2' to 'master1'. Now you can wait some time and check new roles info!
[root@monitor1 ~] # mmm_control show
Master1 (192.168.41.10) master/ONLINE. Roles: writer (192.168.41.100)
Master2 (192.168.41.11) master/ONLINE. Roles: reader (192.168.41.101)
Slave1 (192.168.41.12) slave/ONLINE. Roles: reader (192.168.41.102)
Slave2 (192.168.41.13) slave/ONLINE. Roles: reader (192.168.41.103)
The slaves automatically switch their replication to the new master:
[root@slave2 ~]# mysql -uroot -p123456 -e 'show slave status\G'
mysql: [Warning] Using a password on the command line interface can be insecure.
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.41.10
4. Other issues
If you do not want the writer role ever to switch from the master to the backup master (note that master-slave delay can also trigger a switch of the write VIP), you can remove the backup host from the writer role in /etc/mysql-mmm/mmm_common.conf:
<role writer>                        # writer role configuration
    hosts   master1                  # only one host is configured here
    ips     192.168.41.100           # virtual IP provided for external write operations
    mode    exclusive                # exclusive: only one master is allowed, i.e. only one write IP
</role>
In this case, when master1 fails the writer role is not switched to the master2 server and the slaves do not point to a new master; the MMM cluster then no longer provides write service to the outside, only reads.
5. Summary
1. The virtual IPs for reading and writing are controlled by the monitor program. If the monitor is not started, no virtual IP is assigned to the db servers. If the virtual IPs have already been assigned, however, they are not released immediately when the monitor program is shut down (as long as the network is not restarted), and external programs can still connect through them. The advantage is that the reliability requirements on the monitor are lower; the drawback is that if a db server fails during that time, no failover happens: the existing virtual IPs stay where they are, and the virtual IP of the dead DB becomes unreachable.
2. The agent program is controlled by the monitor program; it carries out the write switchover, re-points the slaves, and so on. If the monitor process is shut down, the agent processes do nothing and cannot handle failures by themselves.
3. The monitor program is responsible for monitoring the state of the db servers, including the MySQL database itself, whether the server is running, whether the replication threads are normal, master-slave delay, and so on; it also instructs the agent programs to handle failures.
4. The monitor checks the state of the db servers every few seconds. If a db server has recovered from a failure, the monitor sets it online automatically after 60 seconds (the default, which can be changed with the auto_set_online parameter on the monitoring side). A cluster server goes through three states: HARD_OFFLINE → AWAITING_RECOVERY → ONLINE.
5. By default the monitor makes mmm_agent turn read_only OFF on the writer db server and turn read_only ON on the other db servers. For rigor you can add read_only=1 to the my.cnf of all servers and let the monitor control writer and reader; note that the root user and the replication users are not affected by the read_only parameter.
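A quick way to confirm this behaviour (just an illustration, not an original step) is to compare the read_only variable on the current writer and on one of the readers once the agents are running:
[root@master1 ~]# mysql -uroot -p123456 -e "show variables like 'read_only'"   # on the current writer the agent sets read_only = OFF
[root@slave1 ~]# mysql -uroot -p123456 -e "show variables like 'read_only'"    # on a reader it stays ON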
The above is the introduction to MMM in MySQL and its deployment method shared by the editor. If you have similar questions, you can refer to the analysis above; and if you want to learn more, feel free to keep following this channel.