Construction and deployment of MGR

2025-02-14 Update From: SLTechnology News&Howtos


1. MGR introduction

MySQL Group Replication (hereinafter: MGR) is state-machine replication based on the Paxos protocol, officially launched by MySQL. Before MGR appeared, the common MySQL high-availability schemes, however much the architecture varied, were all essentially master-slave. MySQL 5.7 added lossless semi-synchronous replication (lossless semi-sync replication), a further step toward strong consistency of replicated data.

1.1 MySQL asynchronous replication

With asynchronous replication, a transaction commit on the master does not wait for confirmation from any slave; the master does not care whether a slave has received the binlog. After receiving the master's binlog, a slave first writes it to its relay log and then asynchronously applies the SQL in the relay log to itself. Because the master's commit does not require the relay log to be received correctly, the master cannot detect it when a slave fails to receive the binlog or fails to apply the relay log.

1.2 MySQL semi-synchronous replication

To address the shortcomings of traditional asynchronous replication, MySQL introduced semi-synchronous replication in version 5.5. Semi-synchronous replication improves on asynchronous replication: before a transaction on the master commits, at least one slave must have received the relay log and acknowledged it to the master. However, the slave still applies the relay log asynchronously, as shown in the following figure:

1.3 MySQL Group replication (MGR)

To address the defect shared by traditional asynchronous replication and semi-synchronous replication (data consistency cannot be guaranteed), MySQL officially launched MySQL Group Replication (MGR) in version 5.7.17.

A replication group is formed by several nodes; a transaction can commit only after it has been resolved and approved by a majority of the nodes in the group (N / 2 + 1). As shown in the figure above, the replication group is composed of three nodes, and the Consensus layer is the consistency-protocol layer. During transaction commit, inter-group communication takes place, and the transaction is finally committed and acknowledged once two of the nodes have certified it.
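The majority rule above can be sketched with a few lines of arithmetic. This is an illustrative helper, not part of MGR; it just shows why a group of N members needs N/2 + 1 approvals and therefore tolerates floor((N - 1) / 2) failures.

```python
def majority(n: int) -> int:
    """Smallest number of members that must approve a transaction commit."""
    return n // 2 + 1

def tolerated_failures(n: int) -> int:
    """Members that may fail while the group can still form a majority."""
    return n - majority(n)

# MGR supports at most 9 members per group
for n in (3, 5, 7, 9):
    print(f"group of {n}: majority={majority(n)}, tolerates {tolerated_failures(n)} failures")
```

This is also why odd group sizes are preferred: a group of 4 tolerates no more failures than a group of 3.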

Group replication was introduced to solve the data-inconsistency problems of traditional asynchronous and semi-synchronous replication. It relies on a distributed consensus protocol (a variant of the Paxos protocol) to achieve eventual consistency of distributed data, and provides a genuinely high-availability scheme for data (whether it is truly highly available is still open to debate). The multi-write mode it offers opens the way to multi-active deployments.

1.4 Features and limitations of MySQL group replication

Feature advantages:

1. High consistency: group replication is built on native replication and the Paxos protocol, is shipped as a plugin, and provides consistent, safe data.

2. High fault tolerance: the group keeps working as long as a majority of nodes are alive. An automatic detection mechanism resolves resource-contention conflicts between nodes without error, on a first-committer-wins basis, and a split-brain protection mechanism is built in.

3. High scalability: nodes are added and removed automatically. After a new node joins, it automatically synchronizes state from the other nodes until it is consistent with them; when a node is removed, the remaining nodes automatically update and maintain the new group membership.

4. High flexibility: both single-primary and multi-primary modes are supported. In single-primary mode, the primary is elected automatically and all update operations go through it; in multi-primary mode, every server can handle update operations concurrently.

Restrictions (for more information, refer to the official documentation: https://dev.mysql.com/doc/refman/5.7/en/group-replication-requirements-and-limitations.html):

1. Only InnoDB tables are supported, and every table must have a primary key, which is used for write-set conflict detection.

2. The GTID feature must be enabled and the binary log format must be set to ROW; these are needed for primary election and write-set extraction.

3. COMMIT may fail, with failure scenarios similar to those at the snapshot transaction isolation level.

4. Currently, an MGR cluster supports a maximum of 9 nodes.

5. Foreign keys and the savepoint feature are not supported; global constraint checks and partial transaction rollback cannot be performed.

6. Binlog event checksums are not supported for the binary logs (hence binlog_checksum=NONE in the configuration below).
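The primary-key requirement in restriction 1 exists because certification compares transactions by their write sets. The sketch below illustrates the idea only, not MGR's actual implementation: MGR hashes (schema, table, primary key) entries with XXHASH64, while plain Python `hash()` stands in here, and the names are ours.

```python
def write_set(schema: str, table: str, pks) -> set:
    """Hash each modified row's (schema, table, primary key) into a write set."""
    return {hash((schema, table, pk)) for pk in pks}

def conflicts(ws_a: set, ws_b: set) -> bool:
    # Two concurrent transactions conflict iff their write sets intersect;
    # the first one certified wins, the other is rolled back.
    return bool(ws_a & ws_b)

t1 = write_set("shop", "orders", [1, 2])
t2 = write_set("shop", "orders", [2])
t3 = write_set("shop", "orders", [3])
print(conflicts(t1, t2))  # True  (both touch primary key 2)
print(conflicts(t1, t3))  # False (disjoint rows)
```

Without a primary key there is nothing row-level to hash, which is why MGR refuses such tables.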

2. MGR building

Official documentation for reference:

Official document: https://dev.mysql.com/doc/refman/5.7/en/group-replication.html

Single master node building address: https://dev.mysql.com/doc/refman/5.7/en/group-replication-deploying-in-single-primary-mode.html

Building a multi-primary cluster is basically the same as the single-primary steps, with only the following additions to the configuration file my.cnf. The example below builds a single-primary cluster; a multi-primary build is almost identical.

loose-group_replication_single_primary_mode=FALSE
loose-group_replication_enforce_update_everywhere_checks=TRUE

2.1 Machine distribution

host ip          installed service      weight
192.168.142.48   mysql server 5.7.18    50
192.168.142.49   mysql server 5.7.18    40
192.168.142.50   mysql server 5.7.18    30

Weight: the priority used when electing a new primary; the larger the value, the higher the priority.
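The election rule can be sketched as follows. This is a hedged approximation based on the MySQL documentation: candidates are ordered by group_replication_member_weight (higher first), and per the docs ties are broken by the lexicographically smallest server UUID; the helper and member names are illustrative.

```python
def elect_primary(members):
    """members: list of (server_uuid, weight) tuples.
    Highest weight wins; ties broken by smallest UUID."""
    return min(members, key=lambda m: (-m[1], m[0]))[0]

# Weights from the machine-distribution table above; UUIDs are made up.
members = [("uuid-48", 50), ("uuid-49", 40), ("uuid-50", 30)]
print(elect_primary(members))  # uuid-48, the highest-weight node
```

So with the weights 50/40/30 configured here, 192.168.142.48 is the preferred primary after a failover.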

2.2 create a mapping between hostname and ip

Set up on all three database servers:

192.168.142.48 dbtest1
192.168.142.49 dbtest2
192.168.142.50 dbtest3

2.3 Extra settings needed in the configuration file my.cnf

For an introduction to the configuration file, refer to the official document: https://dev.mysql.com/doc/refman/5.7/en/group-replication-configuring-instances.html

# Replication Framework
server_id=1
gtid_mode=ON
enforce_gtid_consistency=ON
master_info_repository=TABLE
relay_log_info_repository=TABLE
log_bin=binlog
binlog_format=ROW
log_slave_updates=ON
binlog_checksum=NONE
transaction_write_set_extraction=XXHASH64
loose-group_replication_group_name="aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"
loose-group_replication_start_on_boot=off
loose-group_replication_local_address="192.168.142.48:33061"
loose-group_replication_group_seeds="192.168.142.48:33061,192.168.142.49:33061,192.168.142.50:33061"
loose-group_replication_bootstrap_group=off
loose-group_replication_member_weight=50
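A common misconfiguration is a group_replication_local_address that does not appear in group_replication_group_seeds, or a malformed seed entry. The sketch below is a small sanity check of our own (not a MySQL API): it parses the seeds string into (host, port) pairs and verifies the local address is listed.

```python
def parse_seeds(seeds: str):
    """Parse 'host:port,host:port,...' into a list of (host, port) tuples."""
    out = []
    for item in seeds.split(","):
        host, _, port = item.strip().rpartition(":")
        out.append((host, int(port)))  # raises ValueError on a malformed port
    return out

seeds = "192.168.142.48:33061,192.168.142.49:33061,192.168.142.50:33061"
local = ("192.168.142.48", 33061)
assert local in parse_seeds(seeds), "local_address must appear in group_seeds"
print(parse_seeds(seeds))
```

Note that port 33061 is the dedicated group-communication port, distinct from the normal MySQL client port 3306.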

2.4 create a database instance

Manually, or with automation scripts, configure a mysql instance on each of the three hosts. Refer to the standard single-instance MySQL setup; this is not covered in detail here.

2.5 first node configuration

1. Create the replication account and grant privileges

SET SQL_LOG_BIN=0;
CREATE USER rpl_user@'%' IDENTIFIED BY 'password';
GRANT REPLICATION SLAVE ON *.* TO rpl_user@'%';
FLUSH PRIVILEGES;
SET SQL_LOG_BIN=1;

2. Configure the recovery channel

CHANGE MASTER TO MASTER_USER='rpl_user', MASTER_PASSWORD='password' FOR CHANNEL 'group_replication_recovery';

3. Install the group_replication plugin and verify it

INSTALL PLUGIN group_replication SONAME 'group_replication.so';
SHOW PLUGINS;

4. Start group replication

# Setting group_replication_bootstrap_group to ON marks this server as the one the
# group is bootstrapped from; servers that join later do not need to set it.
SET GLOBAL group_replication_bootstrap_group=ON;
START GROUP_REPLICATION;
SET GLOBAL group_replication_bootstrap_group=OFF;

5. Check the status of mgr

# query table performance_schema.replication_group_members
select * from performance_schema.replication_group_members;

2.6 second node configuration and third node configuration

1. Create the replication account and grant privileges

SET SQL_LOG_BIN=0;
CREATE USER rpl_user@'%' IDENTIFIED BY 'password';
GRANT REPLICATION SLAVE ON *.* TO rpl_user@'%';
FLUSH PRIVILEGES;
SET SQL_LOG_BIN=1;

2. Configure the recovery channel

CHANGE MASTER TO MASTER_USER='rpl_user', MASTER_PASSWORD='password' FOR CHANNEL 'group_replication_recovery';

3. Install the group_replication plugin and verify it

INSTALL PLUGIN group_replication SONAME 'group_replication.so';
SHOW PLUGINS;

4. Start group replication

# group_replication_bootstrap_group is no longer needed here: the replication group
# already exists, so the second node only has to join it.
START GROUP_REPLICATION;

5. Check the status of mgr

# query table performance_schema.replication_group_members
select * from performance_schema.replication_group_members;

3. Observe the relevant status and switch

1. Switching the primary

# stopping group replication on the current primary triggers election of a new primary according to member weight
STOP GROUP_REPLICATION;
# after starting again, the node rejoins the group as a secondary (slave)
START GROUP_REPLICATION;

2. Check the relevant status

1. Current group member list:
select * from performance_schema.replication_group_members;
2. Detailed log-application information for the current node:
select * from performance_schema.replication_group_member_stats;
3. Current replication channel connection information:
select * from performance_schema.replication_connection_status;
4. Current replication channel applier information:
select * from performance_schema.replication_applier_status;
5. Current primary (master):
select a.variable_value, b.member_host, b.member_port, b.member_state
from performance_schema.global_status a, performance_schema.replication_group_members b
where a.variable_value = b.member_id and a.variable_name = 'group_replication_primary_member';
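When interpreting the rows returned by replication_group_members, what usually matters is whether enough members report a healthy state to hold a majority. The helper below is our own rough sketch, not a MySQL tool; the member states (ONLINE, RECOVERING, UNREACHABLE, ...) and the majority rule come from the MySQL documentation.

```python
def group_has_quorum(states, group_size: int) -> bool:
    """Rough health check: True if ONLINE members form a majority
    of the configured group size."""
    online = sum(1 for s in states if s == "ONLINE")
    return online >= group_size // 2 + 1

print(group_has_quorum(["ONLINE", "ONLINE", "RECOVERING"], 3))     # True
print(group_has_quorum(["ONLINE", "UNREACHABLE", "UNREACHABLE"], 3))  # False
```

In the second case the surviving node cannot form a majority, so the group blocks writes rather than risking a split brain.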

4. Dynamically adding and deleting MGR nodes

4.1 Dynamically adding a node

For example, starting from the original MGR of 192.168.142.48, 192.168.142.49 and 192.168.142.50, a new node 192.168.142.51 is added dynamically.

1. Create a mapping between hostname and ip

Set up on all four database servers:

192.168.142.48 dbtest1
192.168.142.49 dbtest2
192.168.142.50 dbtest3
192.168.142.51 dbtest4

2. The my.cnf configuration file of 192.168.142.51 is the same as the my.cnf above, with the following modifications

loose-group_replication_local_address="192.168.142.51:33061"
loose-group_replication_group_seeds="192.168.142.48:33061,192.168.142.49:33061,192.168.142.50:33061,192.168.142.51:33061"

3. Create a database instance

Manually, or with automation scripts, configure a mysql instance on the new host. Refer to the standard single-instance MySQL setup; this is not covered in detail here.

4. Create the replication account and grant privileges

SET SQL_LOG_BIN=0;
CREATE USER rpl_user@'%' IDENTIFIED BY 'password';
GRANT REPLICATION SLAVE ON *.* TO rpl_user@'%';
FLUSH PRIVILEGES;
SET SQL_LOG_BIN=1;

5. Configure the recovery channel

CHANGE MASTER TO MASTER_USER='rpl_user', MASTER_PASSWORD='password' FOR CHANNEL 'group_replication_recovery';

6. Install the group_replication plugin and verify it

INSTALL PLUGIN group_replication SONAME 'group_replication.so';
SHOW PLUGINS;

7. On the nodes already in the cluster, modify the value of group_replication_group_seeds

1. On 192.168.142.48, 192.168.142.49 and 192.168.142.50, set on each node:
set global group_replication_group_seeds="192.168.142.48:33061,192.168.142.49:33061,192.168.142.50:33061,192.168.142.51:33061";
2. Write the same parameter value into the configuration file my.cnf for persistence.
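When automating this step, the seed list is just a comma-separated string that must gain the new member exactly once. The helper below is an illustrative sketch of our own (not a MySQL API) for building the updated group_replication_group_seeds value.

```python
def add_seed(seeds: str, new: str) -> str:
    """Append a 'host:port' entry to a seed list, without duplicates."""
    items = [s.strip() for s in seeds.split(",") if s.strip()]
    if new not in items:
        items.append(new)
    return ",".join(items)

seeds = "192.168.142.48:33061,192.168.142.49:33061,192.168.142.50:33061"
print(add_seed(seeds, "192.168.142.51:33061"))
```

The same string would then be fed to SET GLOBAL and written to my.cnf; the symmetric removal in section 4.2 is the reverse operation.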

8. Start group replication

# group_replication_bootstrap_group is no longer needed here: the replication group
# already exists, so the new node only has to join it.
START GROUP_REPLICATION;

9. Check the status of mgr

# query table performance_schema.replication_group_members
select * from performance_schema.replication_group_members;

4.2 Dynamically deleting a node

For example, starting from the original MGR of 192.168.142.48, 192.168.142.49, 192.168.142.50 and 192.168.142.51, the node 192.168.142.51 is deleted dynamically.

1. Stop group replication on 192.168.142.51

# after group replication is stopped on the node, it is removed from the group membership
STOP GROUP_REPLICATION;

2. On the remaining nodes in the cluster, modify the value of group_replication_group_seeds

1. On 192.168.142.48, 192.168.142.49 and 192.168.142.50, set on each node:
set global group_replication_group_seeds="192.168.142.48:33061,192.168.142.49:33061,192.168.142.50:33061";
2. Write the same parameter value into the configuration file for persistence.

3. On 192.168.142.51, thoroughly clean up the group-related settings by clearing group_replication_group_seeds and group_replication_local_address

1. On 192.168.142.51, set:
set global group_replication_group_seeds="";
set global group_replication_local_address="";
2. Write the corresponding values into the configuration file my.cnf for persistence (comment the lines out).

4. Uninstall the group_replication plugin and revoke the replication account's privileges.
