Role              IP              Hostname   Server ID   Virtual IP
master1           192.168.1.128   master1    1           192.168.1.100 (write)
master2 (backup)  192.168.1.131   master2    2           192.168.1.110 (read)
slave1            192.168.1.132   slave1     3           192.168.1.120 (read)
slave2            192.168.1.133   slave2     4           192.168.1.130 (read)
monitor           192.168.1.134   monitor1   none        none
1. Configure /etc/hosts on all hosts (master1, master2, slave1, slave2, monitor1)
[root@mini1 ~]# vim /etc/hosts
192.168.1.128 master1
192.168.1.131 master2
192.168.1.132 slave1
192.168.1.133 slave2
192.168.1.134 monitor1
[root@mini1 ~]# hostnamectl set-hostname master1
[root@master1 ~]# for i in master2 slave1 slave2 monitor1; do scp /etc/hosts $i:/etc/hosts; done
2. Install the perl, perl-devel, perl-CPAN, libart_lgpl.x86_64, rrdtool.x86_64, and rrdtool-perl.x86_64 packages on all hosts
# yum -y install perl-* libart_lgpl.x86_64 rrdtool.x86_64 rrdtool-perl.x86_64
Note: this uses the CentOS 7 online yum repository.
Install the Perl libraries that mysql-mmm depends on:
# cpan -i Algorithm::Diff Class::Singleton DBI DBD::mysql Log::Dispatch Log::Log4perl Mail::Send Net::Ping Proc::Daemon Time::HiRes Params::Validate Net::ARP
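Before continuing, it is worth confirming that the modules actually load. A minimal sketch (the module list mirrors the cpan command above; trim it as needed for your environment):
# for m in Algorithm::Diff Class::Singleton DBI DBD::mysql Log::Dispatch Log::Log4perl Proc::Daemon Net::ARP; do perl -M$m -e1 2>/dev/null && echo "$m ok" || echo "$m MISSING"; done
Any module reported MISSING should be reinstalled via cpan before proceeding.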
3. Install mysql 5.7 and configure replication on the master1, master2, slave1, and slave2 hosts
master1 and master2 are masters of each other; slave1 and slave2 are slaves of master1.
Add the following to each MySQL instance's configuration file /etc/my.cnf; note that server-id must be unique on every node.
[root@master1 ~]# vim /etc/my.cnf
log-bin = mysql-bin
binlog_format = mixed
server-id = 1
relay-log = relay-bin
relay-log-index = slave-relay-bin.index
log-slave-updates = 1
auto-increment-increment = 2
auto-increment-offset = 1
[root@master2 ~]# vim /etc/my.cnf
log-bin = mysql-bin
binlog_format = mixed
server-id = 2
relay-log = relay-bin
relay-log-index = slave-relay-bin.index
log-slave-updates = 1
auto-increment-increment = 2
auto-increment-offset = 2
[root@slave1 ~]# vim /etc/my.cnf
server-id = 3
relay-log = relay-bin
relay-log-index = slave-relay-bin.index
read_only = 1
[root@slave2 ~]# vim /etc/my.cnf
server-id = 4
relay-log = relay-bin
relay-log-index = slave-relay-bin.index
read_only = 1
On the 4 database hosts, either disable the firewall or create an access rule for MySQL:
[root@master1 ~]# firewall-cmd --permanent --add-port=3306/tcp
success
[root@master1 ~]# firewall-cmd --reload
success
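Optionally, a quick sanity check that the rule is active after the reload:
[root@master1 ~]# firewall-cmd --list-ports
3306/tcp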
Master-slave configuration (master1 and master2 are both configured as masters; slave1 and slave2 as slaves of master1):
Authorize on master1:
mysql> grant replication slave on *.* to rep@'192.168.1.%' identified by '123';
Authorize on master2:
mysql> grant replication slave on *.* to rep@'192.168.1.%' identified by '123';
Configure master2, slave1, and slave2 as slaves of master1:
Execute show master status; on master1 to get the current binlog file and position:
mysql> show master status\G
*************************** 1. row ***************************
File: mysql-bin.000003
Position: 451
Binlog_Do_DB:
Binlog_Ignore_DB:
Executed_Gtid_Set:
1 row in set (0.00 sec)
Execute on master2, slave1, and slave2:
mysql> change master to master_host='192.168.1.128',master_port=3306,master_user='rep',master_password='123',master_log_file='mysql-bin.000003',master_log_pos=451;
mysql> start slave;
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.1.128
Master_User: rep
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000003
Read_Master_Log_Pos: 451
Relay_Log_File: relay-bin.000002
Relay_Log_Pos: 320
Relay_Master_Log_File: mysql-bin.000003
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
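Rather than scanning the full output on each of the three slaves, a small one-liner (a sketch; it assumes the local mysql client can log in, e.g. with credentials in ~/.my.cnf) keeps just the two thread flags, both of which must say Yes:
# mysql -e "show slave status\G" | grep -E "Slave_(IO|SQL)_Running:"
Slave_IO_Running: Yes
Slave_SQL_Running: Yes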
Configure master1 as a slave of master2: execute show master status; on master2 to get its binlog file and position, then on master1:
mysql> change master to master_host='192.168.1.131',master_port=3306,master_user='rep',master_password='123',master_log_file='mysql-bin.000002',master_log_pos=1045;
mysql> start slave;
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.1.131
Master_User: rep
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000002
Read_Master_Log_Pos: 1045
Relay_Log_File: relay-bin.000002
Relay_Log_Pos: 320
Relay_Master_Log_File: mysql-bin.000002
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
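At this point the dual-master topology can be smoke-tested end to end. A hedged sketch using a throwaway database (the name mmm_test is arbitrary):
[root@master1 ~]# mysql -e "create database mmm_test; create table mmm_test.t1 (id int);"
[root@slave1 ~]# mysql -e "show tables from mmm_test;"        # t1 should appear on master2, slave1 and slave2
[root@master2 ~]# mysql -e "insert into mmm_test.t1 values (1);"
[root@master1 ~]# mysql -e "select * from mmm_test.t1;"       # the row should flow back to master1
[root@master1 ~]# mysql -e "drop database mmm_test;"          # clean up; the drop replicates everywhere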
4. mysql-mmm configuration: create users on the 4 MySQL nodes
Create the agent account:
mysql> grant super,replication client,process on *.* to 'mmm_agent'@'192.168.1.%' identified by '123';
Create the monitoring account:
mysql> grant replication client on *.* to 'mmm_monitor'@'192.168.1.%' identified by '123';
Note: because master-slave replication is already configured and working, these grants only need to be executed on master1; they replicate to the other nodes.
Check that the monitoring and agent accounts exist on master2, slave1, and slave2:
mysql> select user,host from mysql.user where user in ('mmm_monitor','mmm_agent');
+-------------+-------------+
| user        | host        |
+-------------+-------------+
| mmm_agent   | 192.168.1.% |
| mmm_monitor | 192.168.1.% |
+-------------+-------------+
Or:
mysql> show grants for 'mmm_agent'@'192.168.1.%';
+------------------------------------------------------------------------------+
| Grants for mmm_agent@192.168.1.%                                             |
+------------------------------------------------------------------------------+
| GRANT PROCESS, SUPER, REPLICATION CLIENT ON *.* TO 'mmm_agent'@'192.168.1.%' |
+------------------------------------------------------------------------------+
mysql> show grants for 'mmm_monitor'@'192.168.1.%';
+----------------------------------------------------------------+
| Grants for mmm_monitor@192.168.1.%                             |
+----------------------------------------------------------------+
| GRANT REPLICATION CLIENT ON *.* TO 'mmm_monitor'@'192.168.1.%' |
+----------------------------------------------------------------+
Install the monitor program on the monitor host (192.168.1.134)
[root@monitor1 ~]# tar zxf mysql-mmm-2.2.1.tar.gz
[root@monitor1 ~]# cd mysql-mmm-2.2.1/
[root@monitor1 mysql-mmm-2.2.1]# make install
Install the agent on the database servers (master1, master2, slave1, slave2)
[root@master1 ~]# tar zxf mysql-mmm-2.2.1.tar.gz
[root@master1 ~]# cd mysql-mmm-2.2.1/
[root@master1 mysql-mmm-2.2.1]# make install
Configure mmm
The configuration files must be consistent across all five hosts. After installation, all configuration files are placed under /etc/mysql-mmm/. Both the monitor server and the database servers need the common file mmm_common.conf, shown below. The active master role is writer: every db server must enable the read_only parameter, and the monitoring agent automatically turns read_only off on whichever server holds the writer role.
[root@monitor1 mysql-mmm-2.2.1]# cd /etc/mysql-mmm/
[root@monitor1 mysql-mmm]# vim mmm_common.conf
active_master_role      writer

<host default>
    cluster_interface       ens32                       # NIC name
    pid_path                /var/run/mmm_agentd.pid     # pid path
    bin_path                /usr/lib/mysql-mmm/         # executable file path
    replication_user        rep                         # replication user
    replication_password    123                         # replication user password
    agent_user              mmm_agent                   # agent user
    agent_password          123                         # agent user password
</host>

<host master1>                  # host name of master1
    ip      192.168.1.128       # ip
    mode    master              # role
    peer    master2             # peer (equivalent) server host name
</host>

<host master2>
    ip      192.168.1.131
    mode    master
    peer    master1
</host>

<host slave1>                   # slave host name; repeat this block for each additional slave
    ip      192.168.1.132
    mode    slave
</host>

<host slave2>
    ip      192.168.1.133
    mode    slave
</host>

<role writer>                   # writer role configuration
    hosts   master1, master2    # host names of servers that may take the write role. If you never want writes to switch, configure only master1 here, to avoid switches caused by network delays; but then, if master1 fails, MMM has no writer and only reads remain available.
    ips     192.168.1.100       # write virtual ip
    mode    exclusive           # exclusive: only one master holds the role, i.e. only one write IP is provided
</role>

<role reader>                   # reader role configuration
    hosts   master2, slave1, slave2
    ips     192.168.1.130, 192.168.1.110, 192.168.1.120
    mode    balanced            # load balanced across the hosts
</role>
[root@monitor1 mysql-mmm]# for i in master1 master2 slave1 slave2; do scp /etc/mysql-mmm/mmm_common.conf $i:/etc/mysql-mmm/; done
Agent configuration: edit /etc/mysql-mmm/mmm_agent.conf on the 4 MySQL node machines
On each database server there is one more file to modify, mmm_agent.conf:
[root@master1 mysql-mmm-2.2.1]# vim /etc/mysql-mmm/mmm_agent.conf
include mmm_common.conf
this master1    # on each database node, set this to that node's own host name
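Since only the `this` line differs between the four nodes, one way to distribute the file (a sketch; it assumes root ssh access between the hosts, so adapt it if you copy files another way):
[root@master1 ~]# for i in master2 slave1 slave2; do scp /etc/mysql-mmm/mmm_agent.conf $i:/etc/mysql-mmm/; ssh $i "sed -i \"s/^this .*/this $i/\" /etc/mysql-mmm/mmm_agent.conf"; done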
Start the agent process
In the /etc/init.d/mysql-mmm-agent init script, add the following line under #!/bin/sh:
source /root/.bash_profile
[root@master1 mysql-mmm-2.2.1]# cd /etc/init.d/
[root@master1 init.d]# chkconfig --add mysql-mmm-agent
[root@master1 init.d]# chkconfig mysql-mmm-agent on
[root@master1 init.d]# /etc/init.d/mysql-mmm-agent start
Daemon bin: '/usr/sbin/mmm_agentd'
Daemon pid: '/var/run/mmm_agentd.pid'
Starting MMM Agent daemon... Ok    # indicates a successful start
[root@master1 init.d]# ss -lanp | grep mmm
tcp    LISTEN    0    10    192.168.1.128:9989    *:*    users:(("mmm_agentd",pid=9680,fd=3))
[root@master1 init.d]# firewall-cmd --add-port=9989/tcp --permanent
[root@master1 init.d]# firewall-cmd --reload
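The same steps (service registration, start, firewall rule) must be repeated on master2, slave1, and slave2. A hedged convenience loop, assuming root ssh access and that the source line has already been added to each node's init script:
[root@master1 ~]# for i in master2 slave1 slave2; do ssh $i "chkconfig --add mysql-mmm-agent; chkconfig mysql-mmm-agent on; /etc/init.d/mysql-mmm-agent start; firewall-cmd --permanent --add-port=9989/tcp; firewall-cmd --reload"; done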
Edit /etc/mysql-mmm/mmm_mon.conf on the monitor host
[root@monitor1 mysql-mmm]# vim /etc/mysql-mmm/mmm_mon.conf
include mmm_common.conf

<monitor>
    ip                  127.0.0.1
    pid_path            /var/run/mmm_mond.pid
    bin_path            /usr/lib/mysql-mmm/
    status_path         /var/lib/misc/mmm_mond.status
    auto_set_online     0
    ping_ips            192.168.1.128, 192.168.1.131, 192.168.1.132, 192.168.1.133   # real IPs of the four databases
    check_period        5
    trap_period         10
    timeout             2
    restart_after       10000
    max_backlog         86400
</monitor>

<host default>
    monitor_user        mmm_monitor
    monitor_password    123
</host>

debug 0
Start the monitoring process
In the /etc/init.d/mysql-mmm-monitor init script, add the following line under #!/bin/sh:
source /root/.bash_profile
Add it as a system service and set it to start on boot:
[root@monitor1 init.d]# vim /etc/init.d/mysql-mmm-monitor
[root@monitor1 init.d]# chkconfig --add mysql-mmm-monitor
[root@monitor1 init.d]# chkconfig mysql-mmm-monitor on
[root@monitor1 init.d]# ./mysql-mmm-monitor start
Daemon bin: '/usr/sbin/mmm_mond'
Daemon pid: '/var/run/mmm_mond.pid'
Starting MMM Monitor daemon: Ok
[root@monitor1 ~]# mmm_control show
master1(192.168.1.128) master/HARD_OFFLINE. Roles:
master2(192.168.1.131) master/HARD_OFFLINE. Roles:
slave1(192.168.1.132) slave/HARD_OFFLINE. Roles:
slave2(192.168.1.133) slave/HARD_OFFLINE. Roles:
Start all the MySQL services and bring every node online; because auto_set_online is 0, each host must be set online manually (repeat for master2, slave1, and slave2):
[root@monitor1 ~]# mmm_control set_online master1
OK: State of 'master1' changed to ONLINE. Now you can wait some time and check its new roles!
[root@monitor1 ~]# mmm_control show
master1(192.168.1.128) master/ONLINE. Roles: writer(192.168.1.100)
master2(192.168.1.131) master/ONLINE. Roles: reader(192.168.1.130)
slave1(192.168.1.132) slave/ONLINE. Roles: reader(192.168.1.120)
slave2(192.168.1.133) slave/ONLINE. Roles: reader(192.168.1.110)
[root@master1 init.d]# ip addr
2: eno16777728: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:d8:de:a3 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.128/24 brd 192.168.1.255 scope global noprefixroute dynamic eno16777728
       valid_lft 5352004sec preferred_lft 5352004sec
    inet 192.168.1.100/32 scope global eno16777728
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fed8:dea3/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
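Once the VIPs are live, clients should connect through them rather than the physical addresses. A sketch (appuser is a hypothetical application account that would need its own grant on the cluster):
# mysql -h 192.168.1.100 -u appuser -p -e "select @@hostname;"   # write traffic follows the writer VIP
# mysql -h 192.168.1.120 -u appuser -p -e "select @@hostname;"   # read traffic uses any of the reader VIPs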
When master1 goes down, the write VIP floats to master2:
[root@master1 init.d]# service mysqld stop
[root@monitor1 ~]# mmm_control show
master1(192.168.1.128) master/HARD_OFFLINE. Roles:
master2(192.168.1.131) master/ONLINE. Roles: reader(192.168.1.130), writer(192.168.1.100)
slave1(192.168.1.132) slave/ONLINE. Roles: reader(192.168.1.120)
slave2(192.168.1.133) slave/ONLINE. Roles: reader(192.168.1.110)
# Looking at the status of the whole cluster above, the failover has completed and the cluster is still serving reads and writes normally.
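You can also confirm on the slaves that the agents re-pointed replication to the new master. A quick sketch, run on slave1 or slave2:
# mysql -e "show slave status\G" | grep Master_Host
Master_Host: 192.168.1.131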
Log files:
Log files are often the key to error analysis, so get used to consulting them when troubleshooting.
db side: /var/log/mysql-mmm/mmm_agentd.log
Monitoring side: /var/log/mysql-mmm/mmm_mond.log
Command files:
mmm_agentd: startup file for the db agent process
mmm_mond: startup file for the monitoring process
mmm_backup: backup command file
mmm_restore: restore command file
mmm_control: monitor control command file
Only the mmm_agentd program lives on the db servers; the rest are on the monitor server.
mmm_control usage
The mmm_control program is used to monitor cluster status, switch the writer, set nodes online/offline, and so on.
Valid commands are:
    help                              - show this message            # help information
    ping                              - ping monitor                 # check whether monitor itself responds
    show                              - show status                  # show cluster status
    checks [<host>|all [<check>|all]] - show checks status           # run/show the monitoring checks
    set_online <host>                 - set host <host> online       # set a host online
    set_offline <host>                - set host <host> offline      # set a host offline
    mode                              - print current mode           # print the current mode
    set_active                        - switch into active mode
    set_manual                        - switch into manual mode
    set_passive                       - switch into passive mode
    move_role [--force] <role> <host> - move exclusive role <role> to host <host>   # move the writer role to the specified host (only use --force if you know what you are doing!)
    set_ip <ip> <host>                - set role with ip <ip> to host <host>
Check the status of all db servers in the cluster:
[root@monitor1 ~]# mmm_control checks all
The check items include: ping, whether mysql is running properly, whether the replication threads are normal, etc.
Check the online status of the cluster:
[root@monitor1 ~]# mmm_control show
Take a specified host offline:
[root@monitor1 ~]# mmm_control set_offline slave2
Bring a specified host online:
[root@monitor1 ~]# mmm_control set_online slave2
Perform a write switchover (manual switch):
[root@monitor1 ~]# mmm_control move_role writer master1
Summary
1. The virtual IPs that serve reads and writes are controlled by the monitor program. If monitor is not started, the db servers are never assigned virtual IPs; if VIPs have already been assigned and monitor is then shut down, they are not removed immediately, and external programs can still connect through them (as long as the network is not restarted). The advantage is that the reliability requirements on monitor are lower; the drawback is that if a db server fails while monitor is down, no switchover happens: the existing VIPs stay where they are and the VIP of the dead DB simply becomes unreachable.
2. The agent program is driven by monitor to perform write switching, re-pointing slaves, and similar operations. If monitor is shut down, the agent process does nothing on its own and cannot handle failures by itself.
3. The monitor program is responsible for monitoring the state of the db servers, including the MySQL database, whether the server is running, whether the replication threads are normal, master-slave delay, etc.; it also directs the agent programs to handle failures.
4. monitor checks the state of the db servers every few seconds. If a db server has recovered from a failure, monitor sets it online automatically after 60s (the default; it can be set to another value), governed by the auto_set_online parameter in the monitoring-side configuration file. Cluster servers pass through three states: HARD_OFFLINE → AWAITING_RECOVERY → ONLINE.
5. By default, monitor has mmm_agent set read_only to OFF on the writer db server and to ON on all the others. For rigor, add read_only=1 to every server's my.cnf and let monitor control writer versus reader; root users and replication users are not affected by the read_only parameter.
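Point 5 is easy to verify from a client session; a small sketch (ordinary_user is a hypothetical non-SUPER account):
# mysql -e "select @@read_only;"                            # expect 1 on reader nodes and 0 on the current writer
# mysql -u ordinary_user -p -e "create database t_ro;"      # fails with ERROR 1290 on a read_only node; root and replication are exempt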
Summary
(1) Downtime of the candidate master master2 does not affect the state of the cluster; only master2's reader role is removed (redistributed).
(2) When the master node master1 goes down, the candidate master master2 takes over the writer role, and slave1 and slave2 automatically change master to master2 and replicate from the new master.
(3) If master1 dies while master2's replication apply lags behind it, master2 still becomes writable, so data consistency cannot be guaranteed at that moment. Likewise, if master2, slave1, and slave2 are all lagging behind master1 when it dies, slave1 and slave2 first wait for their data to catch up with the old master before re-pointing to the new master master2 for replication, and the consistency of the synchronized data still cannot be guaranteed.
(4) If you adopt the MMM high-availability architecture, give the master and standby nodes identical machine configurations, and enable semi-synchronous replication to improve safety, or use MariaDB/MySQL 5.7 multithreaded replication to improve replication performance.
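As an illustration of point (4), MySQL 5.7's bundled semi-synchronous plugins and multithreaded replication can be enabled roughly as follows (a sketch of stock MySQL commands; load the master plugin on the nodes that can hold the writer role and the slave plugin on the replicas, and persist the settings in my.cnf afterwards):
# mysql -e "install plugin rpl_semi_sync_master soname 'semisync_master.so'; set global rpl_semi_sync_master_enabled=1;"
# mysql -e "install plugin rpl_semi_sync_slave soname 'semisync_slave.so'; set global rpl_semi_sync_slave_enabled=1; stop slave io_thread; start slave io_thread;"
# mysql -e "stop slave sql_thread; set global slave_parallel_type='LOGICAL_CLOCK'; set global slave_parallel_workers=4; start slave sql_thread;"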