2025-04-02 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/01 Report--
I. Brief Introduction to MHA for MySQL High-Availability Clusters
MHA (Master High Availability) is currently a relatively mature solution for MySQL high availability. It was developed by Yoshinori Matsunobu at the Japanese company DeNA (he later moved to Facebook). It is an excellent piece of high-availability software for failover and master promotion in a MySQL environment. During a failover, MHA can complete the database failover automatically within 30 seconds and, while doing so, preserve data consistency to the greatest extent possible, achieving high availability in the real sense. MHA has two roles: MHA Node (data node) and MHA Manager (management node).
MHA Manager can be deployed on a separate machine to manage multiple master-slave clusters, or it can be deployed on a slave node. MHA Node runs on every MySQL server. MHA Manager periodically probes the master node in the cluster; when the master fails, it automatically promotes the slave with the most recent data to be the new master and repoints all the other slaves at it. The entire failover process is completely transparent to the application.
During automatic failover, MHA tries to save the binary logs from the downed master so that as little data as possible is lost, but this is not always feasible: if, for example, the master's hardware has failed or it is unreachable over SSH, MHA cannot save the binary logs and fails over anyway, losing the most recent data. The semi-synchronous replication introduced in MySQL 5.5 greatly reduces this risk of data loss, and MHA can be combined with it: as long as at least one slave has received the latest binary log, MHA can apply it to all the other slaves, keeping the data consistent across all nodes.
Replication modes:

Asynchronous replication: MySQL replication is asynchronous by default. The master returns the result to the client as soon as it has executed the client's committed transaction, without caring whether any slave has received and processed it. This creates a problem: if the master crashes, transactions committed on the master may not have reached any slave, and if a slave is then promoted to master, the data on the new master may be incomplete.

Fully synchronous replication: the master returns to the client only after all slaves have also executed the transaction. Because every transaction must wait for all slaves to complete, the performance of fully synchronous replication inevitably suffers badly.

Semi-synchronous replication sits between the two. The master does not return to the client immediately after executing the committed transaction; instead it waits until at least one slave has received the transaction and written it to its relay log. Compared with asynchronous replication, semi-synchronous replication improves data safety, but it also introduces some delay, at minimum one TCP/IP round-trip time, so it is best used on low-latency networks.

Summary:
MySQL replication is asynchronous by default: all updates on the master are written to the binlog, with no guarantee that they are all copied to a slave. Asynchronous operation is efficient, but when the master or a slave fails there is a real risk of the data being out of sync, and data may even be lost. Semi-synchronous replication was introduced in MySQL 5.5 to ensure that, when something goes wrong with the master, at least one slave has complete data. On timeout, it can also fall back temporarily to asynchronous replication so that the business keeps running normally, switching back to semi-synchronous mode once a slave catches up.
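As a concrete illustration of the summary above, semi-synchronous replication on MySQL 5.5+ is enabled through the semisync plugins that ship with the server. The following commands are a sketch to be run against a live server; the 1-second timeout is an illustrative value, not taken from this article:

```
# On the master:
mysql > INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
mysql > SET GLOBAL rpl_semi_sync_master_enabled = 1;
mysql > SET GLOBAL rpl_semi_sync_master_timeout = 1000;  -- fall back to async after 1 s

# On each slave:
mysql > INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';
mysql > SET GLOBAL rpl_semi_sync_slave_enabled = 1;
mysql > STOP SLAVE IO_THREAD;
mysql > START SLAVE IO_THREAD;   -- reconnect so the setting takes effect
```

To make the settings survive a restart, the same variables would also go into my.cnf.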
II. Working Principle
Compared with other HA software, the purpose of MHA is to keep the master of a MySQL replication setup highly available. Its most important feature is that it can repair the differences between the logs on multiple slaves, ultimately making the data on all slaves consistent, and then pick one of them to act as the new master and point the other slaves at it. The failover steps are:
1. Save the binary log events (binlog events) from the crashed master.
2. Identify the slave that contains the most recent updates.
3. Apply the differential relay logs (relay log) to the other slaves.
4. Apply the binlog events saved from the master.
5. Promote a slave to be the new master.
6. Make the other slaves connect to the new master and replicate from it.
Currently, MHA mainly supports a one-master, multi-slave architecture. To build MHA, a replication cluster needs at least three database servers: one acting as master, one as standby master, and one as a slave.
Deploy the MHA environment:

Host                       Operating system   IP address
master1                    CentOS 7.3         192.168.1.1
master2 (standby master)   CentOS 7.3         192.168.1.8
slave1                     CentOS 7.3         192.168.1.9
manager                    CentOS 7.3         192.168.1.3
master1 provides the write service, the standby master2 provides a read service, and slave1 also provides a read service. Once master1 goes down, master2 is promoted to the new master and slave1 is repointed at it; manager serves as the management server.
1. Basic environment preparation

1) After configuring the IP addresses, check the SELinux and firewalld settings and turn both off so that nothing interferes with the later master-slave synchronization (note: the clocks must also be synchronized):

# systemctl stop firewalld
# setenforce 0
Modify the hosts file and copy it to the other hosts:
[root@master1 ~]# vim /etc/hosts
192.168.1.1 master1
192.168.1.8 master2
192.168.1.9 slave1
192.168.1.3 manager
[root@master1 ~]# for i in master2 slave1 manager; do scp /etc/hosts $i:/etc/hosts; done
Configure NTP time synchronization (master1):
[root@master1 ~]# vim /etc/chrony.conf
In the chrony configuration file on each of the other servers, point the server directive at master1's IP address. After the configuration is complete, restart the chronyd service and enable it at boot:
# systemctl restart chronyd
# systemctl enable chronyd
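The article does not reproduce the chrony settings themselves. A minimal sketch, assuming master1 (192.168.1.1) serves time to the 192.168.1.0/24 subnet, would be:

```
# /etc/chrony.conf on master1: serve time to the local subnet
allow 192.168.1.0/24
local stratum 10        # keep serving even without upstream sources

# /etc/chrony.conf on master2, slave1 and manager: sync from master1
server 192.168.1.1 iburst
```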
You can see that the master IP shown is now 192.168.1.8: the slave, which originally replicated from 192.168.1.1, has switched to replicating from 192.168.1.8. This shows that MHA promoted master2 to the new master; the IO thread and SQL thread are running correctly, and the MHA setup has been built successfully.
III. Routine operations on the MHA Manager side

1. Check whether the following file exists; if it does, delete it.
After a master-slave switch occurs, the MHA manager service stops automatically and creates the file app1.failover.complete under the manager_workdir directory (/masterha/app1). To start MHA again, you must first make sure this file is not present; if the error below appears, delete the file.
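The check-and-delete step above can be sketched as a small helper script; the path is the manager_workdir used in this article, so adjust it to match your own app1.cnf:

```shell
#!/bin/sh
# Remove a stale failover marker so masterha_manager can be started again.
clear_failover_marker() {
    workdir=${1:-/masterha/app1}        # manager_workdir from app1.cnf
    marker="$workdir/app1.failover.complete"
    if [ -f "$marker" ]; then
        rm -f "$marker" && echo "removed $marker"
    else
        echo "no failover marker in $workdir"
    fi
}

clear_failover_marker
```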
/masterha/app1/app1.failover.complete [error][/usr/share/perl5/vendor_perl/MHA/MasterFailover.pm, ln298] Last failover was done at 2015-01-09 10:00:47. Current time is too early to do failover again. If you want to do failover, manually remove /masterha/app1/app1.failover.complete and run this script again.

2. MHA replication check (master1 first needs to be set up as a slave of master2; get the binlog coordinates on master2):

mysql > show master status
[root@master1 ~]# systemctl start mysqld
[root@master1 ~]# mysql -uroot -p123.com
mysql > change master to master_host='192.168.1.8',master_port=3306,master_log_file='mysql-bin.000001',master_log_pos=742,master_user='mharep',master_password='123.com';
mysql > start slave
Check the cluster status in manager:
[root@manager ~]# masterha_check_repl --conf=/etc/masterha/app1.cnf
3. Stop MHA:
[root@manager ~]# masterha_stop --conf=/etc/masterha/app1.cnf

4. Start MHA:
[root@manager ~]# nohup masterha_manager --conf=/etc/masterha/app1.cnf &> /tmp/mha_manager.log &
By default, MHA will not start when a slave node is down; adding --ignore_fail_on_start lets MHA start even if a node is down, as follows:
[root@manager ~]# nohup masterha_manager --conf=/etc/masterha/app1.cnf --ignore_fail_on_start &> /tmp/mha_manager.log &

5. Check status:
[root@manager ~]# masterha_check_status --conf=/etc/masterha/app1.cnf
app1 (pid:13739) is running(0:PING_OK), master:192.168.1.8
PS: if everything is normal, "PING_OK" is displayed; otherwise "NOT_RUNNING" is displayed, which means MHA monitoring is not running.
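For scripted monitoring, the PING_OK / NOT_RUNNING distinction above can be checked mechanically. This helper is a sketch (the function name is ours, not part of MHA) that classifies one line of masterha_check_status output:

```shell
#!/bin/sh
# Interpret one line of masterha_check_status output, e.g.:
#   app1 (pid:13739) is running(0:PING_OK), master:192.168.1.8
mha_is_healthy() {
    case "$1" in
        *PING_OK*)     echo "MHA monitoring is up";   return 0 ;;
        *NOT_RUNNING*) echo "MHA monitoring is down"; return 1 ;;
        *)             echo "unrecognized status";    return 2 ;;
    esac
}

mha_is_healthy "app1 (pid:13739) is running(0:PING_OK), master:192.168.1.8"
```

A cron job could call this and alert when the return code is non-zero.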
6. Check the log:
[root@manager ~]# tail -f /masterha/app1/manager.log

7. Follow-up work after a master-slave switch
Rebuilding: rebuilding means that after the primary server goes down and service is switched to the standby master, the standby master becomes the new master.

One rebuilding scheme is to repair the original master and bring it back as a new slave: after the switch, fix the old master, configure it as a slave of the new master, and then re-run the five operational steps above.
If the original master's data files are intact, the last executed CHANGE MASTER command can be found as follows:
[root@manager ~]# grep "CHANGE MASTER TO MASTER" /masterha/app1/manager.log | tail -1
Sat Feb 22 15:25:21 2020 - [info] All other slaves should start replication from here. Statement should be: CHANGE MASTER TO MASTER_HOST='192.168.1.8', MASTER_PORT=3306, MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=742, MASTER_USER='mharep', MASTER_PASSWORD='xxx'
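The grep above returns the whole log line; to get just the statement, ready to paste into mysql, the prefix can be stripped off. This is a sketch (the function name is ours); note that MHA masks the password as 'xxx' in the log, so the real replication password must be substituted:

```shell
#!/bin/sh
# Extract the last CHANGE MASTER statement that MHA wrote to manager.log.
extract_change_master() {
    grep 'All other slaves should start replication from here' "$1" \
        | tail -1 \
        | sed 's/^.*Statement should be: //'
}
```

Usage: `extract_change_master /masterha/app1/manager.log`.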
Purge relay logs periodically: in an MHA master-slave configuration, relay_log_purge=0 is set on the slaves, so each slave node has to delete its relay logs periodically. It is recommended that each slave node purge its relay logs at a different time:
# crontab -e
0 5 * * * /usr/local/bin/purge_relay_logs --user=root --password=123.com --port=3306 --disable_relay_log_purge >> /var/log/purge_relay.log 2>&1
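To stagger the purge times across slaves as recommended, a crontab line with a per-host minute offset can be generated. This is a sketch; the 10-minute step and the function name are our assumptions, and the purge_relay_logs options are the ones from the crontab entry above:

```shell
#!/bin/sh
# Emit a purge_relay_logs crontab line with a per-node minute offset,
# so that the slaves do not all purge relay logs at the same moment.
purge_cron_line() {
    node_index=$1                       # 0 for the first slave, 1 for the next, ...
    minute=$(( node_index * 10 % 60 ))  # 10-minute steps, wrapping at the hour
    echo "$minute 5 * * * /usr/local/bin/purge_relay_logs --user=root --password=123.com --port=3306 --disable_relay_log_purge >> /var/log/purge_relay.log 2>&1"
}

purge_cron_line 0
```

Running `purge_cron_line 1` on the second slave yields a schedule of 05:10 instead of 05:00.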