High-availability PXC for MySQL


Brief introduction

Percona XtraDB Cluster (PXC) is a high-availability and scalability solution for MySQL users, based on Percona Server. It includes write-set replication patches using the Galera 2.0 library, a synchronous multi-master replication plug-in for transactional applications.

Percona XtraDB Cluster features:

(1) Synchronous replication: a transaction is either committed on every cluster node or on none.

(2) Multi-master replication: writes can be performed on any node.

(3) Parallel applying of events on the slave servers, i.e. true "parallel replication".

(4) Automatic node provisioning.

(5) Data consistency: there are no asynchronous slaves.

PXC architecture diagram

Summary of advantages:

Real-time synchronization with no replication delay.

Fully compatible with MySQL.

New nodes are easy to add to the cluster, which simplifies maintenance.

Strong data consistency.

Summary of shortcomings:

Only the InnoDB storage engine is supported.

The biggest drawback is that multi-node writes can conflict, and overall performance is limited by the slowest node.

Online DDL statements lock tables and can stall the whole cluster.

SST (State Snapshot Transfer) for newly added nodes is expensive, since a full data copy has to be transferred.

Test scenario:

On hardware with 256 GB of memory, a 32-core CPU and SSD storage, with rows of about 1 KB, 10 million rows per table and 512 tables, node synchronization began to block once write QPS reached about 2.5k and read QPS about 5k. At the time we temporarily switched to read-only (stopped executing write SQL) for 10 minutes before the cluster recovered. The write-queue configuration parameters are sketched below.
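
The exact write-queue parameters used in that test are not preserved in this text, so the following is only a sketch of the usual Galera flow-control knobs that govern the receive/write queue; the values are illustrative, not the ones from the test:

[mysqld]
# raise the flow-control limit so a slow node pauses the cluster later (illustrative value)
wsrep_provider_options="gcs.fc_limit=256;gcs.fc_factor=0.9"
# apply replicated write sets with more parallel threads (illustrative value)
wsrep_slave_threads=16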

Recommended scenario: databases with low write QPS, using a PXC cluster to guard against hardware failure and achieve high availability.

I. Environment preparation

1. System environment

[root@master ~]# cat /etc/redhat-release

CentOS Linux release 7.5.1804 (Core)

2. Database environment

Database IP address    Database version    Hostname    Server_id
192.168.56.129         None                master      129
192.168.56.130         None                slave1      130
192.168.56.131         None                slave2      131

3. Points for attention

Temporarily shut down the firewalld and SELinux services (see the commands sketched below).

Ports 3306, 4444, 4567 and 4568 are used.
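
On CentOS 7 this can be done roughly as follows (a sketch assuming firewalld and SELinux are what is running; execute on all three nodes):

[root@master ~]# systemctl stop firewalld && systemctl disable firewalld
[root@master ~]# setenforce 0
[root@master ~]# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config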

II. Build (yum installation) (on all three nodes)

1. Install the Percona environment

1.1 Configure the yum source

[root@master ~]# yum install http://www.percona.com/downloads/percona-release/redhat/0.1-4/percona-release-0.1-4.noarch.rpm

[root@master ~]# yum install -y perl-IO-Socket-SSL perl-DBD-MySQL perl-Time-HiRes socat nc openssl

Download and install Percona-XtraDB-Cluster

[root@master ~]# yum install -y libaio*

[root@master ~]# yum groupinstall -y 'Development Tools'

[root@master ~]# yum install -y libssl.so.6

[root@master lib64]# ln -sf /usr/lib64/libssl.so.10 /usr/lib64/libssl.so.6

[root@master lib64]# ln -sf /usr/lib64/libcrypto.so.10 /usr/lib64/libcrypto.so.6

Yum installation:

[root@master ~]# yum install Percona-XtraDB-Cluster-57

2. Start the service

[root@master ~]# systemctl start mysqld.service (the first node is started in a different way when bootstrapping the cluster; see "Start the first node" below)

View the default password:

[root@master ~]# grep "temporary password" /var/log/mysqld.log

3. Log in and change the database root password

mysql> alter user 'root'@'localhost' identified by 'Jsq@2018';

Query OK, 0 rows affected (0.00 sec)

mysql> flush privileges;

Query OK, 0 rows affected (0.00 sec)

4. Create the SST account and password (only create it on the first node; the other nodes will receive it through synchronization)

mysql> GRANT RELOAD, LOCK TABLES, PROCESS, REPLICATION CLIENT ON *.* TO 'sstuser1'@'localhost' IDENTIFIED BY '123456';

mysql> FLUSH PRIVILEGES;

5. Configure the mysqld.cnf file

cat /etc/percona-xtradb-cluster.conf.d/mysqld.cnf

This shows the default contents of the file.

Now add [mysqld] as the first line of /etc/my.cnf and then restart the service (nodes 2 and 3 also need to add [mysqld] as the first line of their /etc/my.cnf, following node 1).

The master configuration is as follows:

[mysqld]

!includedir /etc/my.cnf.d/
!includedir /etc/percona-xtradb-cluster.conf.d/

# PXC
# Path to Galera library
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
# Cluster connection URL contains the IPs of node#1, node#2 and node#3
wsrep_cluster_address=gcomm://192.168.56.129,192.168.56.130,192.168.56.131
# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW
# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB
# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode=2
# Node #1 address
wsrep_node_address=192.168.56.129
wsrep_node_name=pxc1
# SST method
wsrep_sst_method=xtrabackup-v2
# Cluster name
wsrep_cluster_name=pxc-cluster
# Authentication for SST method
wsrep_sst_auth="sstuser1:123456"

The slave1 configuration is as follows:

[mysqld]

# PXC
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
wsrep_cluster_address=gcomm://192.168.56.129,192.168.56.130,192.168.56.131
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
wsrep_node_address=192.168.56.130
wsrep_node_name=pxc2
wsrep_sst_method=xtrabackup-v2
wsrep_cluster_name=pxc-cluster
wsrep_sst_auth="sstuser1:123456"

The slave2 configuration is as follows:

[mysqld]

# PXC
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
wsrep_cluster_address=gcomm://192.168.56.129,192.168.56.130,192.168.56.131
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
wsrep_node_address=192.168.56.131
wsrep_node_name=pxc3
wsrep_sst_method=xtrabackup-v2
wsrep_cluster_name=pxc-cluster
wsrep_sst_auth="sstuser1:123456"

Start the first node

[root@master ~]# systemctl start mysql@bootstrap.service

[root@master ~]# systemctl status mysql@bootstrap.service

[root@master ~]# ss -tnl

Start the other nodes in the normal way:

[root@slave1 ~]# systemctl start mysqld.service

[root@slave2 ~]# systemctl start mysqld.service

6. View the status of the three nodes

mysql> show status like '%wsrep%';
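
The full wsrep output is long; the variables usually worth checking first are wsrep_cluster_size, wsrep_local_state_comment and wsrep_ready (standard PXC status variables; the expected values below assume a healthy three-node cluster):

mysql> show status like 'wsrep_cluster_size';        -- should be 3 once all nodes have joined
mysql> show status like 'wsrep_local_state_comment'; -- should be Synced
mysql> show status like 'wsrep_ready';               -- should be ON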

III. Testing

Update data on the master, then on slave1 and on slave2, and verify that each change appears on the other two nodes (a minimal example is sketched below).
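
As a minimal sketch of such a test (the database and table names here are invented for illustration; they are not from the original):

mysql> create database pxc_test;                                               -- on master
mysql> create table pxc_test.t1 (id int primary key, src varchar(10)) engine=InnoDB;
mysql> insert into pxc_test.t1 values (1, 'master');                           -- on master
mysql> insert into pxc_test.t1 values (2, 'slave1');                           -- on slave1
mysql> insert into pxc_test.t1 values (3, 'slave2');                           -- on slave2
mysql> select * from pxc_test.t1;                                              -- on any node: all three rows should be present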

That's it!
