Implementing the High availability method of MySQL with MHA Architecture

2025-02-28 Update From: SLTechnology News&Howtos


This article explains how to use the MHA architecture to achieve MySQL high availability. It first reviews the common HA architectures and their weaknesses, then shows where MHA fits in; I believe you will benefit from reading it.

MySQL replication is asynchronous or semi-synchronous. When the master fails, some slaves may not have received the latest relay log events, which means each slave may be in a different state. Resolving these consistency issues manually is not trivial, because replication cannot be restarted without fixing them, and it is not uncommon for manual recovery to take an hour or more.
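To make the consistency problem concrete, here is a minimal sketch of the first step failover tooling such as MHA automates: finding the slave that read furthest into the crashed master's binary log, so the other slaves can be brought up to its position before a new master is promoted. The data structure and function names are hypothetical; in practice these coordinates come from each slave's `SHOW SLAVE STATUS` output (`Master_Log_File` / `Read_Master_Log_Pos`).

```python
def binlog_coord(status):
    """Order binlog coordinates by (file sequence number, byte position)."""
    seq = int(status["master_log_file"].rsplit(".", 1)[1])
    return (seq, status["read_master_log_pos"])

def most_advanced_slave(slaves):
    """Return the name of the slave that read furthest into the master's binlog."""
    return max(slaves, key=lambda name: binlog_coord(slaves[name]))

# Hypothetical post-crash snapshot: each slave stopped at a different point.
slaves = {
    "slave1": {"master_log_file": "mysql-bin.000123", "read_master_log_pos": 1500},
    "slave2": {"master_log_file": "mysql-bin.000123", "read_master_log_pos": 1000},
    "slave3": {"master_log_file": "mysql-bin.000122", "read_master_log_pos": 9000},
}
print(most_advanced_slave(slaves))  # slave1 read the latest events
```

MHA then copies the missing relay log events from the most advanced slave to the lagging ones before switching the topology, which is exactly the work that takes an hour or more by hand.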

One master and one slave

If the architecture is one master and one slave, there is no problem of some slaves lagging behind the most up-to-date slave. When the master fails, all application traffic can be sent to the new master (the original slave), so failover is easy. But this architecture has the following problems.

First, read traffic cannot be scaled out. A slave often runs heavy operations such as backups, analytical queries, and batch processing, which can cause performance problems on the slave. And with only one slave, if that slave fails, the master must handle all of this traffic.

Second, availability. If the master fails and only one server is left (the original slave, now the master), it becomes a single point of failure. To create a new slave, you need to take an online backup, restore it on the new server, and start replication. But these operations usually take hours (catching up on replication can even take more than a day). Some important applications cannot tolerate the database being a single point of failure for that long. Moreover, an online backup greatly increases the load on the master, so backing up during peak hours is risky.

Dual masters with multiple slaves

Dual masters with multiple slaves is also a common architecture. If the current master fails, the standby master becomes the new master. In many scenarios, the standby master is configured as read-only.

However, this does not always work as a master failover solution. When the current master fails, the remaining slaves may not have received all the relay log events, so solving the consistency problem between the slaves still requires some other solution.

If you cannot tolerate the consistency problem and want to restore service immediately, just use the standby master as the new master and discard the remaining slaves, then take an online backup from the new master to create a new slave. But this approach has the same problems as the "one master, one slave" architecture above: the remaining slaves can no longer be used for read scaling or redundancy.

In addition, it is possible to use dual masters (one of them read-only), with each master having at least one slave.

In this layout, at least one slave can continue replicating if the current master fails. Even so, many users will not adopt this architecture, because its biggest disadvantage is complexity: it uses three-tier replication, which is not easy to manage. For example, if the standby master fails, the slaves of the standby master cannot continue to replicate, and in many cases the standby master and its slaves must be reconfigured. Note also that this architecture requires at least four servers.

Heartbeat + DRBD

Using Heartbeat + DRBD + MySQL is a very common HA solution, but it has some serious problems.

The first problem is cost, especially if you want to run many MySQL replication environments. Heartbeat + DRBD is an active/standby solution, so you need a passive (standby) master that does not handle any application traffic, and the passive server cannot be used for read scaling. Typically you need at least four servers: one active master, one passive master, and two slaves.

The second problem is downtime. Because Heartbeat + DRBD is an active/standby cluster, if the active server fails, crash recovery runs on the passive server. This can take a long time, especially without the InnoDB plug-in; even with the InnoDB plug-in, it commonly takes several minutes before the passive server starts accepting connections. On top of the crash-recovery time, a warm-up period (populating data into the buffer pool) is needed after failover, because on the passive server the database and file-system caches start out empty. In practice, one or more additional slaves are needed to handle enough read traffic. During warm-up, write performance also degrades significantly because the buffer pool is still cold (empty).

The third problem is write performance degradation or consistency. To keep the active/passive cluster consistent, the transaction logs (binary log and InnoDB log) must be flushed to disk on every commit, so innodb_flush_log_at_trx_commit=1 and sync_binlog=1 must be set. But sync_binlog=1 degrades write performance, because in the MySQL versions of that era fsync() calls on the binary log were serialized (with sync_binlog=1, group commit is broken). For this reason, sync_binlog=1 is often not set. But if sync_binlog=1 is not set, when the active master fails, the new master (the previous passive server) may lose binary log events that were already sent to the slaves. Suppose the master fails after slave A has received mysql-bin.000123 up to position 1500, but the binary log on disk was only flushed up to position 1000. The new master then has mysql-bin.000123 only up to position 1000 and creates a new binary log file mysql-bin.000124. If this happens, slave A cannot replicate, because the new master does not have mysql-bin.000123 up to position 1500.
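For reference, the two durability settings discussed above look like this in my.cnf. This is a sketch showing only the durable configuration traded off in the paragraph, not a universal recommendation:

```ini
[mysqld]
# Flush the InnoDB redo log to disk at every transaction commit
innodb_flush_log_at_trx_commit = 1
# fsync() the binary log at every commit, so committed events survive a crash
# (this is the setting that hurts write throughput as described above)
sync_binlog = 1
```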

The fourth problem is complexity. For many users, installing and configuring Heartbeat and DRBD is not easy. In many deployment environments, configuring DRBD requires rebuilding system partitions, which is often impractical. In addition, sufficient experience at the DRBD and Linux kernel layers is required: executing one wrong command (such as overwriting data of the peer from the passive node) can easily destroy production data. And once something goes wrong at the disk I/O layer under DRBD, it is very difficult for most DBAs to troubleshoot.

MySQL cluster

MySQL Cluster is a genuine high-availability solution, but you must use the NDB storage engine. Most environments use InnoDB, so they cannot take advantage of MySQL Cluster.

Semi-synchronous replication

Semi-synchronous replication greatly reduces the risk that the binlog exists only on the failed master, which is a great help in avoiding data loss. However, semi-synchronous replication does not solve the consistency problem. It only guarantees that at least one (not every) slave has received the binlog event at commit time on the master; some slaves may still not have received it. Unless the relay log events that reached the most up-to-date slave are applied to the others, the slaves cannot be in a consistent state.
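As a sketch, on MySQL 5.5/5.6 semi-synchronous replication is enabled through the bundled plug-ins, assuming the standard plug-in file names on Linux (the timeout value is an illustrative choice):

```sql
-- On the master:
INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
SET GLOBAL rpl_semi_sync_master_enabled = 1;
-- How long (ms) to wait for a slave ACK before falling back to async
SET GLOBAL rpl_semi_sync_master_timeout = 1000;

-- On each slave:
INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';
SET GLOBAL rpl_semi_sync_slave_enabled = 1;
-- Restart the I/O thread so the setting takes effect
STOP SLAVE IO_THREAD;
START SLAVE IO_THREAD;
```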

MHA solves the consistency problem, so by combining semi-synchronous replication with MHA, failover with almost no data loss and consistent slaves can be achieved.

Global transaction ID (GTID)

The purpose of global transaction IDs is largely the same as what MHA aims to achieve, but they cover more: MHA only supports two-tier replication, while global transaction IDs support multi-tier replication environments, so even if the second tier fails, the third tier can still be recovered.

GTID has been supported since MySQL 5.6. Oracle's official tool, mysqlfailover, supports GTID-based master failover, and starting with version 0.56, MHA supports GTID-based failover as well. MHA automatically detects whether mysqld is running with GTID enabled: if it is, MHA performs a GTID-based failover; if not, MHA falls back to relay log-based failover.
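For orientation, an MHA deployment is driven by a configuration file on the manager node. A minimal sketch follows; the hostnames, paths, and credentials are placeholders:

```ini
[server default]
# Working directory and log for the manager process
manager_workdir=/var/log/masterha/app1
manager_log=/var/log/masterha/app1/manager.log
# MySQL account the manager uses to check and control the nodes
user=mha
password=changeme
# OS account for SSH between nodes (relay log copying)
ssh_user=root
# Replication account used when re-pointing slaves to the new master
repl_user=repl
repl_password=changeme

[server1]
hostname=db1

[server2]
hostname=db2
# Prefer this host as the new master
candidate_master=1

[server3]
hostname=db3
# Never promote this host
no_master=1
```

With such a file in place, connectivity and the replication topology are typically verified with `masterha_check_ssh --conf=...` and `masterha_check_repl --conf=...`, and monitoring is started with `masterha_manager --conf=...`.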

After reading this article on implementing MySQL high availability with the MHA architecture, you will certainly want to know more. For more industry information, you can follow our industry information section.
