Highly available PXC


1. Building a Percona XtraDB Cluster

Installation environment:

Node A: 192.168.91.18
Node B: 192.168.91.20
Node C: 192.168.91.21

Replication is implemented at the InnoDB engine layer.

The server_id on A, B and C must each be different.
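For example, the server_id values could simply follow the last octet of each node's IP address (the exact numbers below are only an illustration; any three distinct values work):

server_id=18    # on A (192.168.91.18)
server_id=20    # on B (192.168.91.20)
server_id=21    # on C (192.168.91.21)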

ABC:

Download the software:

wget http://www.percona.com/downloads/Percona-XtraDB-Cluster-56/Percona-XtraDB-Cluster-5.6.21-25.8/binary/tarball/Percona-XtraDB-Cluster-5.6.21-rel70.1-25.8.938.Linux.x86_64.tar.gz

Install the dependency package:

yum install -y socat
yum install -y perl-DBD-MySQL.x86_64 perl-IO-Socket-SSL.noarch socat.x86_64 nc
(nc is a powerful network tool)
yum install -y http://www.percona.com/downloads/percona-release/redhat/0.1-3/percona-release-0.1-3.noarch.rpm

# install xtrabackup backup software:

yum list | grep percona-xtrabackup
yum install -y percona-xtrabackup.x86_64

# rpm -qa | grep percona
percona-release-0.1-3.noarch
percona-xtrabackup-2.3.7-2.el6.x86_64

ABC:

Extract the PXC package:

tar xf Percona-XtraDB-Cluster-5.6.21-rel70.1-25.8.938.Linux.x86_64.tar.gz

Create a soft link:

ln -s /home/tools/Percona-XtraDB-Cluster-5.6.21-rel70.1-25.8.938.Linux.x86_64 /usr/local/mysql

Create the mysql user and group:

groupadd mysql
useradd -g mysql -s /sbin/nologin -d /usr/local/mysql mysql

Create a startup file:

cp /usr/local/mysql/support-files/mysql.server /etc/init.d/mysqld

Create the basic directories required by MySQL:

mkdir -p /data/mysql3306/{data,logs,tmp}
chown -R mysql:mysql /data/mysql3306
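The basic [mysqld] settings are not shown in the original text; a minimal sketch matching the directories created above might look like this (the socket and log paths are assumptions):

[mysqld]
user      = mysql
basedir   = /usr/local/mysql
datadir   = /data/mysql3306/data
tmpdir    = /data/mysql3306/tmp
socket    = /data/mysql3306/tmp/mysql.sock     # assumed path
log-error = /data/mysql3306/logs/error.log     # assumed path
server_id = 18                                 # unique per node, as noted above

The #pxc block shown below is appended to this same [mysqld] section.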

Configuration file on A:

vim /etc/my.cnf

#pxc
default_storage_engine=Innodb
#innodb_locks_unsafe_for_binlog=1
innodb_autoinc_lock_mode=2
wsrep_cluster_name=pxc_cluster     # cluster name
wsrep_cluster_address=gcomm://192.168.91.18,192.168.91.20,192.168.91.21
wsrep_node_address=192.168.91.18
wsrep_provider=/usr/local/mysql/lib/libgalera_smm.so
#wsrep_provider_options="gcache.size=1G; debug=yes"
wsrep_provider_options="gcache.size=1G;"
#wsrep_sst_method=rsync
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth=sst:147258

Configuration file on B:

#pxc
default_storage_engine=Innodb
#innodb_locks_unsafe_for_binlog=1
innodb_autoinc_lock_mode=2
wsrep_cluster_name=pxc_cluster
wsrep_cluster_address=gcomm://192.168.91.18,192.168.91.20,192.168.91.21
wsrep_node_address=192.168.91.20
wsrep_provider=/usr/local/mysql/lib/libgalera_smm.so
#wsrep_provider_options="gcache.size=1G; debug=yes"
wsrep_provider_options="gcache.size=1G;"
#wsrep_sst_method=rsync
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth=sst:147258

Configuration file on C:

#pxc
default_storage_engine=Innodb
#innodb_locks_unsafe_for_binlog=1
innodb_autoinc_lock_mode=2
wsrep_cluster_name=pxc_cluster
wsrep_cluster_address=gcomm://192.168.91.18,192.168.91.20,192.168.91.21
wsrep_node_address=192.168.91.21
wsrep_provider=/usr/local/mysql/lib/libgalera_smm.so
#wsrep_provider_options="gcache.size=1G; debug=yes"
wsrep_provider_options="gcache.size=1G;"
#wsrep_sst_method=rsync
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth=sst:147258

ABC:

Initialize:

[root@Darren1 mysql]# ./scripts/mysql_install_db

A:

Start the first node:

/etc/init.d/mysqld bootstrap-pxc

Bootstrapping PXC (Percona XtraDB Cluster)... Starting MySQL (Percona XtraDB Cluster)... SUCCESS!

# mysql
delete from mysql.user where user!='root' or host!='localhost';

truncate mysql.db;
drop database test;
grant all on *.* to sst@localhost identified by '147258';    # create the sst user for xtrabackup; the password must match wsrep_sst_auth in my.cnf
flush privileges;
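Optionally (not part of the original steps), confirm that the backup account exists and matches wsrep_sst_auth=sst:147258:

select user, host from mysql.user;
show grants for sst@localhost;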

BC:

Start Node 2 and Node 3:

/etc/init.d/iptables stop
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

[root@Darren2 data]# /etc/init.d/mysqld start
Starting MySQL (Percona XtraDB Cluster)... State transfer in progress, setting sleep higher
... SUCCESS!

[root@Darren3 data]# /etc/init.d/mysqld start
ERROR! MySQL (Percona XtraDB Cluster) is not running, but lock file (/var/lock/subsys/mysql) exists
Starting MySQL (Percona XtraDB Cluster)... State transfer in progress, setting sleep higher
... SUCCESS!
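As a quick sanity check (not shown in the original output), cluster membership can be verified from any node; once all three nodes have joined, wsrep_cluster_size should report 3:

show global status like 'wsrep_cluster_size';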

Test:

A:

root@localhost [testdb]> create database testdb;
root@localhost [testdb]> create table t1 (c1 int auto_increment not null, c2 timestamp, primary key (c1));
root@localhost [testdb]> insert into t1 select 1, now();
root@localhost [testdb]> select * from testdb.t1;
+----+---------------------+
| c1 | c2                  |
+----+---------------------+
|  1 | 2017-03-06 12:29:56 |
+----+---------------------+

B:

root@localhost [testdb]> select * from testdb.t1;
+----+---------------------+
| c1 | c2                  |
+----+---------------------+
|  1 | 2017-03-06 12:29:56 |
+----+---------------------+

C:

root@localhost [testdb]> select * from testdb.t1;
+----+---------------------+
| c1 | c2                  |
+----+---------------------+
|  1 | 2017-03-06 12:29:56 |
+----+---------------------+

Shutdown method:

Stop a node: /etc/init.d/mysqld stop

Restart after all nodes are shut down:

The first node: /etc/init.d/mysqld bootstrap-pxc
The other nodes: /etc/init.d/mysqld start

SST and IST

State Snapshot Transfer (SST): full transfer

It occurs when a new node joins the cluster, or when a node in the cluster has been down (failed or shut down) for too long.

wsrep_sst_method=xtrabackup-v2

This parameter has three values:

(1) xtrabackup-v2: transfers data with xtrabackup; a backup user must be created in advance and set through wsrep_sst_auth, e.g. wsrep_sst_auth=sst:147258.

(2) rsync: the fastest transfer method; no wsrep_sst_auth parameter is needed, but the donor is read-only (FLUSH TABLES WITH READ LOCK) while the data is copied.

(3) mysqldump: not recommended when the data volume is large; the donor is also read-only (FLUSH TABLES WITH READ LOCK) during the copy.

Incremental State Transfer (IST): incremental transfer

It occurs when data changes on one node: only the incremental part is replicated to the other nodes, controlled by a cache (gcache). If the missing increment is larger than gcache, a full transfer (SST) is performed instead; incremental transfer is used only when the increment fits within gcache.

wsrep_provider_options="gcache.size=1G"

What if I stop one of the PXC nodes?

When wsrep_local_state_comment shows Synced, the data is synchronized across the three nodes, so the nodes can be stopped and restarted one at a time (a rolling restart).
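The state can be checked on each node before stopping it, for example:

show global status like 'wsrep_local_state_comment';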

How long can a node stay offline?

For example, if a node needs to be offline for 2 hours, estimate how much binlog is generated in 2 hours and set gcache.size accordingly.

For example, if a busy order system generates 200 MB of binlog in 5 minutes, that is about 2.4 GB per hour and 4.8 GB in two hours, so set wsrep_provider_options="gcache.size=6G". gcache requires actual memory allocation, so it cannot be set too large, otherwise the OOM killer may be triggered.
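One simple way to estimate binlog volume (not from the original text) is to note the binlog position before and after a fixed interval and take the difference:

show master status;    -- record File and Position
-- ... wait 5 minutes ...
show master status;    -- the growth in Position (plus any rotated binlog files) is the volume written in 5 minutes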

How a node rejoins the cluster after recovering from a failure:

(1) If the amount of data is not very large, reinitialize the node and let it rejoin with a full SST.

(2) If the amount of data is very large, transfer the data with rsync.

Characteristics of PXC and points to note:

(1) Each PXC node automatically configures the auto-increment offset and step (as in a dual-master setup) to prevent primary key conflicts:

Node1:
auto_increment_offset=1
auto_increment_increment=3

Node2:
auto_increment_offset=2
auto_increment_increment=3

Node3:
auto_increment_offset=3
auto_increment_increment=3
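These values can be checked on each node with the statement below; with the settings above, node1 generates 1, 4, 7, ..., node2 generates 2, 5, 8, ..., and node3 generates 3, 6, 9, ... for the same auto-increment column:

show variables like 'auto_increment%';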

(2) PXC uses optimistic concurrency control, so transaction conflicts may occur in the commit phase. When multiple nodes modify the same row, only one node can succeed; the transaction on the losing node is aborted and a deadlock error code is returned.

For example:

A:
root@localhost [testdb]> begin;
root@localhost [testdb]> update t1 set c2=now() where c1=3;

B:
root@localhost [testdb]> begin;
root@localhost [testdb]> update t1 set c2=now() where c1=3;
root@localhost [testdb]> commit;

A: the commit now fails with a deadlock error:
root@localhost [testdb]> commit;
ERROR 1213 (40001): Deadlock found when trying to get lock; try restarting transaction
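The number of local transactions aborted this way can be watched with a Galera status counter (a quick check, not part of the original example):

show global status like 'wsrep_local_bf_aborts';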

(3) PXC only supports the InnoDB engine. The tables under the mysql system database are basically all MyISAM tables; although PXC does not replicate MyISAM tables, it does replicate DCL statements such as CREATE USER, DROP USER, GRANT and REVOKE. PXC can be made to replicate MyISAM tables by enabling the wsrep_replicate_myisam parameter. So when data is inconsistent in PXC, first check whether a MyISAM table is involved.

For example:

Node1:
root@localhost [testdb]> show create table t2\G
*************************** 1. row ***************************
       Table: t2
Create Table: CREATE TABLE `t2` (
  `c1` int(11) NOT NULL AUTO_INCREMENT,
  `c2` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`c1`)
) ENGINE=MyISAM AUTO_INCREMENT=3 DEFAULT CHARSET=utf8

root@localhost [testdb]> select * from t2;
+----+---------------------+
| c1 | c2                  |
+----+---------------------+
|  2 | 2017-03-08 11:41:31 |
+----+---------------------+

The row cannot be seen on node2 or node3, because changes to MyISAM tables are not replicated.
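A quick way (not in the original text) to list MyISAM tables outside the system databases is:

select table_schema, table_name
from information_schema.tables
where engine='MyISAM'
  and table_schema not in ('mysql','information_schema','performance_schema');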

(4) Every table in PXC must have a primary key. Without one, the rows may be stored in a different physical order on each node, and a SELECT ... LIMIT query may return different result sets on different nodes.
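Tables without a primary key can be found with a query along these lines against information_schema (a sketch, not from the original text):

select t.table_schema, t.table_name
from information_schema.tables t
left join information_schema.table_constraints c
       on c.table_schema = t.table_schema
      and c.table_name = t.table_name
      and c.constraint_type = 'PRIMARY KEY'
where t.table_type = 'BASE TABLE'
  and t.table_schema not in ('mysql','information_schema','performance_schema')
  and c.constraint_name is null;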

(5) Table-level locks (LOCK TABLE) are not supported cluster-wide; all DDL operations take instance-level locks, so the pt-osc (pt-online-schema-change) tool is needed.

For example:

Example 1:

Node1:
root@localhost [testdb]> lock table t1 read;
root@localhost [testdb]> insert into t1 select 69, now();
ERROR 1099 (HY000): Table 't1' was locked with a READ lock and can't be updated

Node2: node 2 can still insert, which shows the read lock does not take effect on other nodes:
root@localhost [testdb]> insert into t1 select 69, now();
Query OK, 1 row affected (0.01 sec)
Records: 1  Duplicates: 0  Warnings: 0

Example 2:

Node1:
root@localhost [testdb]> lock table t1 write;
root@localhost [testdb]> insert into t1 select 1, now();
Query OK, 1 row affected (0.03 sec)
Records: 1  Duplicates: 0  Warnings: 0

root@localhost [testdb]> select * from t1;
+----+---------------------+
| c1 | c2                  |
+----+---------------------+
|  1 | 2017-03-08 14:59:46 |
+----+---------------------+

Node2: node 2 is not affected by the write lock and can both read and write:
root@localhost [testdb]> insert into t1 select 2, now();
Query OK, 1 row affected (0.05 sec)
Records: 1  Duplicates: 0  Warnings: 0

root@localhost [testdb]> select * from t1;
+----+---------------------+
| c1 | c2                  |
+----+---------------------+
|  1 | 2017-03-08 14:59:46 |
|  2 | 2017-03-08 14:59:57 |
+----+---------------------+

(6) XA transactions are not supported.

(7) The query log can only be written to a file, not to a table; that is, the parameter log_output=FILE must be specified.
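A minimal sketch of the corresponding my.cnf settings (the log file paths are assumptions):

log_output = FILE
slow_query_log = 1
slow_query_log_file = /data/mysql3306/logs/slow.log      # assumed path
general_log_file = /data/mysql3306/logs/general.log      # assumed path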

(8) The performance/throughput of the whole cluster is determined by the worst-performing node (the weakest-link, or "barrel", effect). As a rough comparison:

Master-slave replication, ignoring slave latency: about 60,000 inserts per second
Master-slave replication, keeping the slave caught up: about 30,000 inserts per second
PXC: about 10,000 inserts per second

(9) The recommended number of nodes is 3.
