Mysql5.7 MGR cluster building and deployment

This article was written by team members and comes from wuguiyunwei.com, the official website of Turtle Operations (QQ group: 602183872).

Recently I took a look at MySQL 5.7's MGR cluster, which supports both single-primary and multi-primary modes; after building and testing it, it works well and is relatively simple to build and maintain. Most guides on the Internet cover the single-primary mode, so to deepen the impression I am recording the build process here and will try the multi-primary mode later. The bottleneck of a database is usually writes: read-write splitting is a feasible solution, but when the data volume is large, writes still become a problem. Some solutions deploy multiple master nodes that can all read and write, but split-brain then becomes a serious risk, which is why the split-brain protection mechanism built into an MGR cluster has been favored by many people. MGR is short for MySQL Group Replication, a highly available and highly scalable solution officially launched by MySQL in December 2016. Note that this article does not elaborate on the underlying principles.

Note: I compile and install from source here; if you want something simpler, install MySQL 5.7 directly with yum. Compiling MySQL requires a fairly large amount of disk space, usually about 7 GB, so plan ahead. Using three nodes is closer to a production environment and makes the process more direct and clear.

The detailed deployment information is as follows:

Hostname    IP address      Installed software       Purpose
apache      192.168.2.25    cmake, boost, mysql      cluster node
nginx       192.168.2.26    cmake, boost, mysql      cluster node
kibana      192.168.2.30    cmake, boost, mysql      cluster node

1. Preparatory work on all three machines

rpm -qa mysql mariadb

If any such packages are found, uninstall them first.
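For example, a minimal removal sketch (assuming the commonly preinstalled mariadb-libs package is what turns up; substitute whatever package names the query actually returned):

rpm -e --nodeps mariadb-libs    # remove without dependency checks; yum remove -y mariadb* also works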

Add the hostname mappings to /etc/hosts on each machine so the cluster nodes can resolve one another:

192.168.2.25 apache

192.168.2.26 nginx

192.168.2.30 kibana
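An optional quick check, not part of the original steps, that the names resolve on each host:

getent hosts apache nginx kibana    # each line should print the IP/hostname pair added above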

2. Install the dependency packages

yum install -y gcc gcc-c++ ncurses-devel

3. Install cmake, download address: https://cmake.org/download/

tar zxvf cmake-3.7.2.tar.gz

cd cmake-3.7.2

./configure

gmake && gmake install

4. Install boost, which MySQL 5.7 requires. Note that the version downloaded here, 1.59.0, corresponds to this MySQL version. If your MySQL version differs from mine, or if you do not pass the -DWITH_BOOST parameter, cmake will report an error telling you which boost version you need to download.

tar zxvf boost_1_59_0.tar.gz

cp -r boost_1_59_0 /usr/local/boost

5. Install mysql-5.7.17 and initialize it

groupadd mysql

useradd -M -s /sbin/nologin mysql -g mysql

tar zxvf mysql-5.7.17.tar.gz

cd mysql-5.7.17

cmake -DCMAKE_INSTALL_PREFIX=/data/mysql -DSYSCONFDIR=/etc -DDEFAULT_CHARSET=utf8 -DDEFAULT_COLLATION=utf8_general_ci -DWITH_EXTRA_CHARSETS=all -DWITH_BOOST=/usr/local/boost

make

make install

chown -R mysql.mysql /data/mysql

mv /etc/my.cnf /etc/my.cnf.default

cp /data/mysql/support-files/my-default.cnf /etc/my.cnf

/data/mysql/bin/mysqld --initialize --user=mysql --basedir=/data/mysql --datadir=/data/mysql/data    # initialization generates a random root password, note it down

echo "PATH=$PATH:/data/mysql/bin" >> /etc/profile

source /etc/profile

cp /data/mysql/support-files/mysql.server /etc/rc.d/init.d/mysqld

chmod +x /etc/rc.d/init.d/mysqld

The above steps need to be performed on all three machines

6. Start building the MGR cluster environment and modify the my.cnf file of the first node, as follows:

# For advice on how to change settings please see

# http://dev.mysql.com/doc/refman/5.7/en/server-configuration-defaults.html

# *** DO NOT EDIT THIS FILE. It's a template which will be copied to the

# *** default location during install, and will be replaced if you

# *** upgrade to a newer version of MySQL.

[mysqld]

# Remove leading # and set to the amount of RAM for the most important data

# cache in MySQL. Start at 70% of total RAM for dedicated server, else 10%.

# innodb_buffer_pool_size = 128m

# Remove leading # to turn on a very important data integrity option: logging

# changes to the binary log between backups.

# log_bin

# These are commonly set, remove the # and set as required.

basedir = /data/mysql

datadir = /data/mysql/data

port = 3306

socket = /data/mysql/data/mysql.sock

log-error = /data/mysql/data/mysqld.log

pid-file = /data/mysql/data/mysqld.pid

# Remove leading # to set options mainly useful for reporting servers.

# The server defaults are faster for transactions and fast SELECTs.

# Adjust sizes as needed, experiment to find the optimal values.

# join_buffer_size = 128m

# sort_buffer_size = 2m

# read_rnd_buffer_size = 2m

sql_mode = NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES

# Group Replication

server_id = 1

gtid_mode = ON

enforce_gtid_consistency = ON

master_info_repository = TABLE

relay_log_info_repository = TABLE

binlog_checksum = NONE

log_slave_updates = ON

log_bin = binlog

binlog_format = ROW

transaction_write_set_extraction = XXHASH64

loose-group_replication_group_name = 'ce9be252-2b71-11e6-b8f4-00212844f856'

loose-group_replication_start_on_boot = off

loose-group_replication_local_address = '192.168.2.25:33061'

loose-group_replication_group_seeds = '192.168.2.25:33061,192.168.2.26:33061,192.168.2.30:33061'

loose-group_replication_bootstrap_group = off

[client]

socket = /data/mysql/data/mysql.sock

Start the mysql service

/etc/init.d/mysqld start
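Before running the SQL below, log in to the mysql client. In MySQL 5.7 the temporary root password generated during initialization must be changed at first login; a minimal sketch (the new password is only a placeholder):

grep 'temporary password' /data/mysql/data/mysqld.log    # find the generated temporary password

mysql -uroot -p    # log in with that temporary password

alter user 'root'@'localhost' identified by 'NewRootPass123!';    -- placeholder password, choose your own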

set sql_log_bin=0;

create user rpl_user@'%';

grant replication slave on *.* to rpl_user@'%' identified by 'rpl_pass';

flush privileges;

set sql_log_bin=1;

change master to master_user='rpl_user', master_password='rpl_pass' for channel 'group_replication_recovery';

install plugin group_replication soname 'group_replication.so';

set global group_replication_bootstrap_group=ON;

start group_replication;

set global group_replication_bootstrap_group=OFF;

select * from performance_schema.replication_group_members;

The query should show this member with MEMBER_STATE ONLINE; if ONLINE appears, everything is normal. This is the master node, and the two remaining nodes are added as slaves next.
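For reference, the output of the members query looks roughly like this in MySQL 5.7 (the MEMBER_ID shown is a placeholder, not a real UUID):

+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx | apache      |        3306 | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+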

7. The second node joins the cluster. Copy the my.cnf of the first node; only two settings need to change: server_id and loose-group_replication_local_address.

# For advice on how to change settings please see

# http://dev.mysql.com/doc/refman/5.7/en/server-configuration-defaults.html

# *** DO NOT EDIT THIS FILE. It's a template which will be copied to the

# *** default location during install, and will be replaced if you

# *** upgrade to a newer version of MySQL.

[mysqld]

# Remove leading # and set to the amount of RAM for the most important data

# cache in MySQL. Start at 70% of total RAM for dedicated server, else 10%.

# innodb_buffer_pool_size = 128m

# Remove leading # to turn on a very important data integrity option: logging

# changes to the binary log between backups.

# log_bin

# These are commonly set, remove the # and set as required.

basedir = /data/mysql

datadir = /data/mysql/data

port = 3306

socket = /data/mysql/data/mysql.sock

log-error = /data/mysql/data/mysqld.log

pid-file = /data/mysql/data/mysqld.pid

# Remove leading # to set options mainly useful for reporting servers.

# The server defaults are faster for transactions and fast SELECTs.

# Adjust sizes as needed, experiment to find the optimal values.

# join_buffer_size = 128m

# sort_buffer_size = 2m

# read_rnd_buffer_size = 2m

sql_mode = NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES

# Group Replication

server_id = 2

gtid_mode = ON

enforce_gtid_consistency = ON

master_info_repository = TABLE

relay_log_info_repository = TABLE

binlog_checksum = NONE

log_slave_updates = ON

log_bin = binlog

binlog_format = ROW

transaction_write_set_extraction = XXHASH64

loose-group_replication_group_name = 'ce9be252-2b71-11e6-b8f4-00212844f856'

loose-group_replication_start_on_boot = off

loose-group_replication_local_address = '192.168.2.26:33061'

loose-group_replication_group_seeds = '192.168.2.25:33061,192.168.2.26:33061,192.168.2.30:33061'

loose-group_replication_bootstrap_group = off

[client]

socket = /data/mysql/data/mysql.sock

The second node executes the following commands:

set sql_log_bin=0;

create user rpl_user@'%';

grant replication slave on *.* to rpl_user@'%' identified by 'rpl_pass';

set sql_log_bin=1;

change master to master_user='rpl_user', master_password='rpl_pass' for channel 'group_replication_recovery';

install plugin group_replication soname 'group_replication.so';

set global group_replication_allow_local_disjoint_gtids_join=ON;

start group_replication;

Query performance_schema.replication_group_members again; the new member should also show ONLINE.

Similarly, the third node joins in the same way as the second node.


Query which is the primary node:
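The exact command appeared only in the original screenshot; in MySQL 5.7 the current primary can be found with a query along these lines, run from the mysql client:

select b.member_host
from performance_schema.global_status a
join performance_schema.replication_group_members b on a.variable_value = b.member_id
where a.variable_name = 'group_replication_primary_member';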

The query result shows that the apache host is the master node.

Test steps:

1. Create a database and a table on the master, then check on the two slaves whether the data has been synchronized (see the sketch after this list).

2. Confirm that the two slaves can only perform query (read) operations.

3. Manually shut down the master and confirm that one of the two slaves takes over as master, and check whether the takeover order follows the first letter of MEMBER_ID.
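A minimal sketch of steps 1 and 2, assuming a throwaway database named test_mgr (the names are placeholders, not from the original article); note that MGR requires every table to have a primary key:

-- on the master (apache):
create database test_mgr;
create table test_mgr.t1 (id int primary key, name varchar(20));
insert into test_mgr.t1 values (1, 'hello');

-- on either slave:
select * from test_mgr.t1;    -- the inserted row should be visible
insert into test_mgr.t1 values (2, 'world');    -- should be rejected, the slave is read-only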

Routine maintenance steps:

1. If a single slave node has been shut down, bring it back into the cluster by running:

start group_replication;

2. If all nodes have been shut down, start the first node as the master by executing:

set global group_replication_bootstrap_group=ON;

start group_replication;

The remaining nodes can then simply execute:

start group_replication;

3. If the master node fails, a new master is automatically elected from the two slaves. Once the failed master is started again, it re-joins as a slave after executing:

start group_replication;
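In any of these cases, the cluster state can be confirmed afterwards with:

select member_host, member_state from performance_schema.replication_group_members;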
