Percona XtraDB Cluster planning and implementation


Learn to read the logs when setting up a cluster; logs are very, very important.

1. Preparation work

1.1. Server configuration

Hostname    IP address
pxc01       192.168.1.17
pxc02       192.168.1.18
pxc03       192.168.1.19

Download the source code package: http://down.51cto.com/data/2330430

1.2. Environment configuration

Note: Oracle's MySQL cannot be used for this compile-and-install approach, because the community (GPL) MySQL does not recognize the PXC-specific (wsrep) parameters.

The following installation packages are required to install the pxc environment:

Percona XtraDB Cluster - Percona's MySQL database

Percona XtraBackup - backup and recovery tool

Note:

Percona-XtraDB-Cluster-5.6.28-rel76.1-25.14.1.Linux.x86_64.ssl100.tar.gz - the pre-built binary package; after unpacking, you only need to run mysql_install_db under scripts/ to initialize the database.

Percona-xtradb-cluster-5.6_5.6.28-25.14.orig.tar.gz - the source package, which must be compiled and installed with cmake.

1.3. Terminology

Terms:

WS: Write Set, the set of writes made by a transaction

IST: Incremental State Transfer, incremental synchronization

SST: State Snapshot Transfer, full synchronization

UUID: unique identifier for the node state changes and their sequence.

GTID: Global Transaction ID, consisting of a UUID and an offset (sequence number); the cluster-wide transaction ID defined by the wsrep API.

1.4. PXC principle

PXC is a MySQL cluster based on certification-based, strongly consistent data replication. Characteristics:

Advantages:

1. High availability: an unavailable node does not affect the normal operation of the cluster.

2. Strong consistency; reads can be scaled out to multiple nodes.

3. Data synchronization is automated when nodes join (IST, SST).

4. Multi-node reads and writes are possible, but the write load is still replicated to every node.

Disadvantages:

1. Because DDL and transaction commits must be certified globally, the cluster's performance is determined by its worst-performing node.

2. To guarantee consistency, Galera always gives priority to data consistency; with simultaneous writes on multiple nodes, lock (certification) conflicts become a serious problem.

3. A full data copy (SST) is required when a new node joins or when a node that has fallen far behind rejoins, and the node acting as donor cannot serve reads and writes during the transfer.

4. Data redundancy equals the number of nodes: every node stores a full copy.

1.5. How PXC works

After a node receives a SQL request, for a write operation, the wsrep API calls the Galera library to broadcast the writeset within the cluster before the commit. Once all other nodes certify it successfully, the transaction is committed on every node in the cluster; otherwise it is rolled back. PXC guarantees strong consistency of all data across the whole cluster, satisfying the Consistency and Availability properties of the CAP theorem.

WSREP API:

1): provides the interface between the DBMS and the wsrep provider.

2): GTID: Global Transaction ID. It consists of a UUID and a sequence number and uniquely identifies a state change in the cluster and its offset in the queue.

Galera wsrep provider:

1): broadcasts transactions within the cluster: sends local transactions to the other nodes for certification, certifies events received from other nodes locally and returns the result.

2): applies events received from other nodes, once globally certified, to the local node.

3): intra-cluster communication, node liveness detection, PC (primary component) election, and so on.

4): split-brain handling: to avoid the whole cluster becoming unavailable when node failures break the PC election, a minimum of 3 nodes is recommended.

5): lock conflict detection for multi-node writes.

6): waiting for concurrent commit of transactions in the queue.

The group communication layer of Galera implements a unified global data synchronization strategy and the global ordering of all transactions in the cluster, which makes it possible to generate GTIDs.

There are two aspects of work for each node:

(1) complete data synchronization.

(2) complete the communication with other nodes.

The replication layer of Galera performs the data synchronization and consists of the slave queue and the applier. During transaction synchronization, lock conflicts occur locally on a node between transactions in the queue and in the applier threads. The efficiency of the replication module directly affects the write performance of the whole cluster.

Lock conflicts between local transactions and waiting queues during synchronization:

InnoDB uses pessimistic locking internally to ensure that a transaction executes and commits successfully. PXC uses optimistic locking to avoid taking locks on every node and the associated network overhead. On the write node, a transaction behaves just like single-instance InnoDB until it reaches the commit point; it is then broadcast (the Galera library handles the concurrency) to the other nodes in the cluster, which each return a certification result. If all nodes return success, the transaction commits; otherwise it rolls back.

In PXC the transaction that commits first wins; other conflicting transactions (local or replicated from other nodes) are rolled back or report a deadlock error.
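As a hedged illustration of this first-committer-wins behavior (the table and row below are hypothetical, not from the original text): if two nodes update the same row at almost the same time, the transaction that certifies first commits everywhere, and the other one is aborted with the cluster-level deadlock error.

-- on node pxc01
mysql> BEGIN;
mysql> UPDATE t1 SET col = 'a' WHERE id = 1;
mysql> COMMIT;   -- certifies first, commits on all nodes

-- on node pxc02, concurrently
mysql> BEGIN;
mysql> UPDATE t1 SET col = 'b' WHERE id = 1;
mysql> COMMIT;
ERROR 1213 (40001): Deadlock found when trying to get lock; try restarting transaction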

Related status values:

wsrep_local_cert_failures: number of certification failures during synchronization; when a locally committed transaction has lock conflicts with transactions in the synchronization queue, local certification fails (this preserves global data consistency).

wsrep_local_bf_aborts: brute-force aborts; when a local transaction has a lock conflict with a transaction being applied from the synchronization queue, the transaction that committed first is guaranteed to succeed, and the later one is rolled back or returns an error.

Verification module:

During certification, after receiving another node's writeset, the node performs conflict certification locally and returns the certification result.

1.6. PXC status description

1. OPEN: the node has started successfully and tries to connect to the cluster. If it fails, it exits or creates a new cluster according to the configuration.

2. PRIMARY: the node is part of the cluster's primary component (PC) and tries to select a donor from the cluster for data synchronization.

3. JOINER: the node is waiting to receive data files; after the transfer completes, the data is loaded locally.

4. JOINED: the node has completed data synchronization and tries to catch up with the cluster.

5. SYNCED: the node provides normal service: data reads and writes, cluster data synchronization, and serving SST requests from newly joined nodes.

6. DONOR: the node is preparing or transferring the full cluster dataset for a new node and is not available to clients.

State machine change factors:

1. New nodes join the cluster

2. Node failure and recovery

3. Node synchronization failure

Monitoring status description:

(1) Cluster integrity check:

wsrep_cluster_state_uuid: should have the same value on every node in the cluster; a node with a different value is not connected to the cluster.

wsrep_cluster_conf_id: normally this value is the same on all nodes. If the value differs, the node has been temporarily partitioned; it should return to the common value once network connectivity between the nodes is restored.

wsrep_cluster_size: if this value matches the expected number of nodes, all cluster nodes are connected.

wsrep_cluster_status: the state of the cluster component the node belongs to. If it is not Primary, there is a partition or a split-brain condition.

(2) Node status check:

wsrep_ready: if the value is ON, the node can accept SQL load. If it is OFF, check wsrep_connected.

wsrep_connected: if the value is OFF and wsrep_ready is also OFF, the node is not connected to the cluster. This is usually caused by misconfiguration (for example wsrep_cluster_address or wsrep_cluster_name); check the error log for the exact error.

wsrep_local_state_comment: if wsrep_connected is ON but wsrep_ready is OFF, this item shows the reason.
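A minimal check sketch combining the cluster-integrity and node-status items above (run on each node; the expected values in the comments are illustrative for a healthy 3-node cluster):

mysql> SHOW GLOBAL STATUS LIKE 'wsrep_cluster_%';
-- wsrep_cluster_state_uuid should be identical on every node
-- wsrep_cluster_size should equal the expected node count (3 here)
-- wsrep_cluster_status should be Primary

mysql> SHOW GLOBAL STATUS WHERE Variable_name IN ('wsrep_ready','wsrep_connected','wsrep_local_state_comment');
-- a healthy node shows wsrep_ready = ON, wsrep_connected = ON, wsrep_local_state_comment = Synced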


When one of the nodes stops and is restarted, the incremental data is synchronized via IST to bring it back in line with the other two nodes. Whether IST can be used is determined by the wsrep_provider_options="gcache.size=1G" parameter, commonly set to 1G. How should this size be chosen? Based on the expected downtime: if a node may be down for one hour, check how much binlog is generated within one hour and size gcache accordingly. What happens if all three nodes are shut down? They will all perform SST, because the gcache data is gone.
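A rough sizing sketch (the binlog path comes from the my.cnf used later in this article, and the 60-minute window and final value are assumptions): sum the binlog bytes written during the longest expected outage and set gcache.size at least that large.

[root@pxc01 ~]# # total bytes of binlogs modified in the last 60 minutes (assumes binlogs under /data/binlog)
[root@pxc01 ~]# find /data/binlog -name 'mysql-bin.*' -mmin -60 -exec du -cb {} + | tail -1
# if the total is, say, 1.5 GB, set something like the following in my.cnf:
# wsrep_provider_options="gcache.size=2G"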

If all nodes must be shut down, do it in a rolling fashion:

1. Shut down node1; after maintenance is finished, bring it back and let it rejoin the cluster.

2. Shut down node2; after maintenance is finished, bring it back and let it rejoin the cluster.

3. Continue like this until the last node.

4. The principle is to keep at least one member of the group alive at all times.

When a database is shut down, the last committed Txid is saved, so when starting up again, start the node that was shut down last first; the startup order is the reverse of the shutdown order.

The wsrep_recover=on parameter can be added at startup; it is used to parse the GTID (recovered position) from the log.
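A sketch of checking that recovered position before deciding which node to start first (one way of doing it; the binary and error-log paths come from the my.cnf later in this article and are assumptions):

[root@pxc01 ~]# /usr/local/mysql/bin/mysqld_safe --wsrep-recover
[root@pxc01 ~]# grep 'Recovered position' /data/log/3306.err | tail -1
# expected form: WSREP: Recovered position: <cluster UUID>:<seqno>
# the node with the highest seqno is the one to start (bootstrap) first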

How to avoid data loss at shutdown and startup?

1. Keep at least one node online at all times and restart the others in a rolling fashion.

2. Use a master-slave approach: a replication slave of the cluster can be converted into a PXC node.

(3) Replication health check:

wsrep_flow_control_paused: the fraction of time replication has been paused, i.e. how much the cluster is being slowed down by slave lag. The value ranges from 0 to 1 and should be as close to 0 as possible; a value of 1 means replication has stopped completely. It can be improved by tuning wsrep_slave_threads.

What if the write node takes too many writes? Configure wsrep_slave_threads to between 1 and 1.5 times the number of CPU cores.

wsrep_cert_deps_distance: how many transactions can be applied in parallel. wsrep_slave_threads should not be set much higher than this value.

wsrep_flow_control_sent: how many times the node has paused replication (sent flow-control messages).

wsrep_local_recv_queue_avg: the average length of the receive (slave) queue; a growing value is a sign of a bottleneck.

The slowest node has the highest wsrep_flow_control_sent and wsrep_local_recv_queue_avg values; the lower these two values, the better.
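A small sketch of this check (output values are illustrative). wsrep_slave_threads is a dynamic variable in PXC 5.6, so it can usually also be raised at runtime while you observe the flow-control counters:

mysql> SHOW GLOBAL STATUS WHERE Variable_name IN
    -> ('wsrep_flow_control_paused','wsrep_flow_control_sent','wsrep_local_recv_queue_avg','wsrep_cert_deps_distance');

mysql> SET GLOBAL wsrep_slave_threads = 8;   -- e.g. 8 on an 8-core node; keep it at or below wsrep_cert_deps_distance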

(4) Detect slow network problems:

wsrep_local_send_queue_avg: a harbinger of a network bottleneck. If this value is high, there may be a network bottleneck.

Number of conflicts or deadlocks:

wsrep_last_committed: sequence number of the last committed transaction.

wsrep_local_cert_failures and wsrep_local_bf_aborts: rollbacks, i.e. the number of conflicts detected.

Issues to watch out for with PXC

1. Split-brain

If executing any command returns "unknown command", a split-brain has occurred: port 4567 between the cluster nodes is not reachable and the node cannot provide service externally.

SET GLOBAL wsrep_provider_options="pc.ignore_sb=true";

2. Concurrent writing

The three nodes use auto-increment starting values of 1, 2 and 3 with a step of 3, which avoids conflicts for inserts (see the sketch below); but concurrent updates of the same row on different nodes will still fail with Error: 1213 SQLSTATE: 40001, so route updates and writes that touch the same data to a single node.
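A sketch of what those interleaved auto-increment settings look like. PXC/Galera normally manages them automatically when wsrep_auto_increment_control=ON; the explicit values below only illustrate the offset/step idea:

# node 1: auto_increment_offset = 1, auto_increment_increment = 3   -> ids 1, 4, 7, ...
# node 2: auto_increment_offset = 2, auto_increment_increment = 3   -> ids 2, 5, 8, ...
# node 3: auto_increment_offset = 3, auto_increment_increment = 3   -> ids 3, 6, 9, ...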

3. DDL

DDL statements take a cluster-wide (global) lock; use pt-online-schema-change instead, as in the example below.
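A hedged example of the pt-online-schema-change approach (the database, table and column names are hypothetical; it requires Percona Toolkit and a table with a primary key, and it is worth rehearsing with --dry-run first):

[root@pxc01 ~]# pt-online-schema-change --alter "ADD COLUMN remark VARCHAR(100)" D=testdb,t=t1 --user=root --ask-pass --dry-run
[root@pxc01 ~]# pt-online-schema-change --alter "ADD COLUMN remark VARCHAR(100)" D=testdb,t=t1 --user=root --ask-pass --execute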

4. The MyISAM engine is not replicated; only InnoDB is supported.

5. Tables in a PXC cluster must have a primary key.

If a table has no primary key, the row order in the data pages may differ between nodes.

6. Table-level locks are not supported

LOCK/UNLOCK TABLES is not supported.

7. In PXC the slow log and general query log can only be written to files (FILE), not to tables.

8. XA transactions are not supported

9. Performance is determined by the node with the worst performance in the cluster

1.7. Advantages and disadvantages of PXC

Advantages:

(1). Synchronous replication.

(2). Active-active multi-master topology.

(3). Data can be read and written on any node of the cluster.

(4). Automatic membership control: failed nodes are automatically removed from the cluster.

(5). Automatic node joining.

(6). True parallel replication, at row level.

(7). Direct client connections, native MySQL interface.

(8). Each node holds a complete copy of the data.

(9). Data synchronization across multiple databases is implemented through the wsrep interface.

Disadvantages:

(1). Replication currently only supports the InnoDB storage engine. Writes to tables of any other engine, including the mysql.* system tables, are not replicated. DDL statements are replicated, so CREATE USER is replicated, but INSERT INTO mysql.user ... is not.

(2). DELETE is not supported on tables without a primary key. Rows of such tables may be stored in a different order on different nodes, so a SELECT ... LIMIT ... can return different result sets.

(3). LOCK/UNLOCK TABLES is not supported in a multi-master environment, nor are the lock functions GET_LOCK() and RELEASE_LOCK().

(4). The query log cannot be saved to a table. If the query log is enabled, it can only be written to a file.

(5). The maximum allowed transaction size is defined by wsrep_max_ws_rows and wsrep_max_ws_size; any larger operation, such as a huge LOAD DATA, is rejected.

(6). Because the cluster uses optimistic concurrency control, a transaction may still be aborted at commit time. If two transactions write the same row on different nodes and commit, only one of them succeeds; the other is aborted and the cluster returns a deadlock error code (Error: 1213 SQLSTATE: 40001 (ER_LOCK_DEADLOCK)).

(7). XA transactions are not supported, because they could be rolled back at commit.

(8). The write throughput of the whole cluster is limited by the weakest node; if one node becomes slow, the whole cluster slows down. For stable, high performance, all nodes should use identical hardware.

(9). A minimum of 3 cluster nodes is recommended.

(10). A problematic DDL statement can break the whole cluster.

2. Install Percona MySQL from the binary package

The binary installation process is brief:

Edit the my.cnf file

[root@PXC01 Percona-XtraDB-Cluster-5.6.28-76.1]# vi /etc/my.cnf

[mysqld]

basedir = /usr/local/mysql
datadir = /data/mysql
port = 3306
server_id = 3
socket = /tmp/mysql.sock
pid-file = /tmp/mysql.pid

# replication settings #
sync_binlog = 0
binlog_format = row

# log settings #
log-error = /data/log/3306.err
log-bin = /data/binlog/mysql-bin

# connection #
interactive_timeout = 1800
wait_timeout = 1800
lock_wait_timeout = 1800
skip_name_resolve = 1
max_connections = 2000
max_connect_errors = 1000000
default-storage-engine = INNODB
skip-name-resolve
skip-host-cache
explicit_defaults_for_timestamp
sql_mode = STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION,NO_ZERO_DATE,NO_ZERO_IN_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER
expire_logs_days = 7
collation_server = utf8_general_ci
init_connect = 'SET NAMES utf8'
transaction_isolation = READ-COMMITTED
max_allowed_packet = 16M
max_heap_table_size = 64M

# session memory settings #
sort_buffer_size = 33M
join_buffer_size = 134M
query_cache_size = 0
query_cache_limit = 0
read_buffer_size = 8M

# table cache performance settings #
table_open_cache = 4096
table_definition_cache = 4096
table_open_cache_instances = 64

# innodb settings #
innodb_undo_directory = /data/undolog/
innodb_undo_logs = 128
innodb_undo_tablespaces = 3
innodb_buffer_pool_instances = 8
innodb_online_alter_log_max_size = 1G
innodb_data_file_path = ibdata1:1G:autoextend
innodb_buffer_pool_size = 20G
innodb_file_per_table = 1
innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2
innodb_log_buffer_size = 64M
innodb_log_file_size = 256M
innodb_log_files_in_group = 3
innodb_io_capacity = 600
innodb_max_dirty_pages_pct = 80
innodb_autoinc_lock_mode = 2
# innodb_locks_unsafe_for_binlog = 1
# innodb_doublewrite = 0
innodb_open_files = 4096
innodb_page_size = 16k
innodb_purge_threads = 4
innodb_large_prefix = 1
innodb_thread_concurrency = 64
innodb_print_all_deadlocks = 1
innodb_strict_mode = 1
innodb_sort_buffer_size = 60M
innodb_online_alter_log_max_size = 1G

2.1. Initialize the database

Note:

The database on the primary node needs to be initialized. A slave node does not need to be initialized; just start its database directly, and it will automatically copy the data from the primary node into its data directory.
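A sketch of that initialization step on the primary node, using the binary-package layout and the my.cnf paths shown above (basedir /usr/local/mysql, datadir /data/mysql; the directories are assumed to exist already):

[root@PXC01 ~]# cd /usr/local/mysql
[root@PXC01 mysql]# scripts/mysql_install_db --user=mysql --basedir=/usr/local/mysql --datadir=/data/mysql
[root@PXC01 mysql]# chown -R mysql.mysql /data/mysql /data/binlog /data/log /data/undolog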

2.1.2. Install xtrabackup from source

2.1.2.1. Install dependency packages

[root@PXC01 opt]# yum -y install perl-DBI perl-DBD-MySQL perl-Time-HiRes perl-IO-Socket-SSL libev libgcrypt-devel libcurl-devel crypt* libgcrypt* imake libxml2-devel expat-devel git bzip2* python-sphinx* cryptopp* ncurses* perl-TermReadKey* tcp_wrappers* libev-devel

2.1.2.2. Compile xtrabackup

Note: the sphinx, lzlib, libev and socat libraries need to be downloaded, compiled and installed manually from the following addresses:

http://software.schmorp.de/pkg/libev.html

http://download.savannah.gnu.org/releases/lzip/lzlib/

http://sphinxsearch.com

http://www.dest-unreach.org/socat/

2.1.2.2.1. Install socat

[root@pxc02 opt]# tar -zxvf socat-2.0.0-b9.tar.gz -C /usr/local

[root@pxc02 opt]# cd /usr/local/socat-2.0.0-b9

[root@pxc02 socat-2.0.0-b9]# ./configure

[root@pxc02 socat-2.0.0-b9]# make

[root@pxc02 socat-2.0.0-b9]# make install

[root@pxc02 socat-2.0.0-b9]# ln -s /usr/local/bin/socat /usr/bin/

2.1.2.2.2. Install libev

[root@PXC01 opt]# tar -zxvf libev-4.22.tar.gz -C /usr/local

[root@PXC01 libev-4.22]# chmod 755 ./*

[root@PXC01 libev-4.22]# ./configure

[root@PXC01 libev-4.22]# make

[root@PXC01 libev-4.22]# make install

2.1.2.2.3. Install lzlib

[root@PXC01 opt]# tar -zxvf lzlib-1.7.tar.gz -C /usr/local

[root@PXC01 opt]# cd /usr/local/lzlib-1.7/

[root@PXC01 lzlib-1.8-rc3]# ./configure

[root@PXC01 lzlib-1.8-rc3]# make

[root@PXC01 lzlib-1.8-rc3]# make install

2.1.2.2.4. Install sphinx

[root@PXC01 opt]# tar -zxvf sphinx-2.2.8-release.tar.gz -C /usr/local

[root@PXC01 local]# cd /usr/local/sphinx-2.2.8-release/

[root@PXC01 sphinx-2.2.8-release]# ./configure --prefix=/usr/local/sphinx/ --with-mysql --enable-id64

Note:

If mysql is not installed under /usr/local/, use the following command instead:

./configure --prefix=/usr/local/sphinx/ --with-mysql=/mysoft --with-mysql-includes=/mysoft/include --enable-id64

[root@PXC01 sphinx-2.2.8-release]# make

[root@PXC01 sphinx-2.2.8-release]# make install

2.1.2.2.5. Install xtrabackup

[root@PXC01 opt]# tar -zxvf percona-xtrabackup_2.3.4.orig.tar.gz

[root@PXC01 opt]# cd percona-xtrabackup-2.3.4

[root@PXC01 percona-xtrabackup-2.3.4]# cmake -DBUILD_CONFIG=xtrabackup_release -DWITH_MAN_PAGES=OFF

[root@PXC01 percona-xtrabackup-2.3.4]# make

[root@PXC01 percona-xtrabackup-2.3.4]# make install

It is installed to /usr/local/xtrabackup by default.

Note: if there is an error in the compilation and installation, you need to reinstall and clear the old objects and cache information.

make clean

rm -f CMakeCache.txt

To reinstall xtrabackup, also remove the old binaries first:

rm -rf /usr/bin/innobackupex

rm -rf /usr/bin/xtrabackup

The following two parameters can be added at compile time:

-DDOWNLOAD_BOOST=1 -DWITH_BOOST=/usr/local

2.1.2.5.1. Create soft links

[root@PXC01 xtrabackup]# ln -s /usr/local/xtrabackup/bin/innobackupex /usr/bin/innobackupex

[root@PXC01 xtrabackup]# ln -s /usr/local/xtrabackup/bin/xbcrypt /usr/bin/xbcrypt

[root@PXC01 xtrabackup]# ln -s /usr/local/xtrabackup/bin/xbstream /usr/bin/xbstream

[root@PXC01 xtrabackup]# ln -s /usr/local/xtrabackup/bin/xtrabackup /usr/bin/xtrabackup

[root@PXC01 xtrabackup]# ln -s /usr/local/xtrabackup/bin/xbcloud_osenv /usr/bin/xbcloud_osenv

[root@PXC01 xtrabackup]# ln -s /usr/local/xtrabackup/bin/xbcloud /usr/bin/xbcloud


2.1.2.5.2. Verify xtrabackup

[root@PXC01 xtrabackup]# xtrabackup --version

xtrabackup version 2.3.4 based on MySQL server 5.6.24 Linux (x86_64) (revision id: e80c779)

2.1.3. Install galera

2.1.3.1. Compile galera

[root@pxc01 opt]# tar -zxvf /opt/galera-3-25.3.16.tar.gz -C /usr/local

[root@pxc01 opt]# cd /usr/local/galera-3-25.3.16/

[root@pxc01 galera-3-25.3.16]# cd scripts

[root@pxc01 galera-3-25.3.16]# ./build.sh

[root@PXC01 galera-3-25.3.16]# cp libgalera_smm.so /usr/lib64

[root@PXC01 galera-3-25.3.16]# chmod +x libgalera_smm.so

If running ./build.sh reports errors, resolve them, for example:

yum install boost* openssl* check* -y

yum -y install gcc

yum -y install scons

Then, instead of the script, you can compile by running scons directly:

[root@pxc01 opt]# cd /usr/local/galera-3-25.3.16/

# scons

The primary node's mysql can now be started normally (/usr/local/mysql/bin/mysqld_safe --wsrep_cluster_address=gcomm:// > /dev/null &).

2.2. Install the slave nodes

The remaining two nodes are installed following the same steps as the master node.

A slave node does not need to initialize the data after mysql is installed. Start mysql directly; after startup it automatically joins the cluster and copies the data files from the master node into its data directory.

chown -R mysql.mysql /usr/local/mysql/

Start the slave nodes:

/usr/local/mysql/bin/mysqld_safe --wsrep_cluster_address=gcomm://192.168.20.11:4567 > /dev/null &

/usr/local/mysql/bin/mysqld_safe --wsrep_cluster_address=gcomm://192.168.20.22:4567 > /dev/null &

4. Configure the PXC cluster

4.1. Add PXC parameters

4.1.1. Add cluster replication account

[mysql@VM-172-31-102133 ~]# mysql

mysql> GRANT PROCESS, RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'sstuser'@'localhost' IDENTIFIED BY 'sstuser';

mysql> FLUSH PRIVILEGES;

4.1.2. Edit the my.cnf file

Edit the my.cnf file to add the following parameters:

server-id = 1

# Path to Galera library
wsrep_provider=/usr/lib64/libgalera_smm.so    # note: this is where the compiled galera library was copied

# Cluster connection URL contains the IPs of node#1, node#2 and node#3
wsrep_cluster_address=gcomm://172.31.102.133,172.31.102.139,172.31.102.167    # node IP addresses; the default port is used when none is given

# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW

# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB

# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode=2

# Node #1 address
wsrep_node_address=172.31.102.133    # this host's IP

# SST method
wsrep_sst_method=xtrabackup-v2

# Cluster name
wsrep_cluster_name=my_centos_cluster

# Authentication for SST method
wsrep_sst_auth="sstuser:s3cret"

[root@pxc01]# cat /etc/my.cnf

[mysqld]
basedir = /usr/local/mysql
datadir = /data/mysql
port = 3306
server-id = 238
# socket = /data/mysql/mysql.sock
pid-file = /tmp/mysql.pid

# replication settings #
sync_binlog = 0
binlog_format = row

# log settings #
log-error = /data/log/3306.err
log-bin = /data/binlog/mysql-bin

# connection #
interactive_timeout = 1800
wait_timeout = 1800
lock_wait_timeout = 1800
skip_name_resolve = 1
max_connections = 2000
max_connect_errors = 1000000
default-storage-engine = innodb
skip-name-resolve
skip-host-cache
explicit_defaults_for_timestamp
sql_mode = STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION,NO_ZERO_DATE,NO_ZERO_IN_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER
expire_logs_days = 7
collation_server = utf8_general_ci
init_connect = 'SET NAMES utf8'
transaction_isolation = READ-COMMITTED
max_allowed_packet = 16M
max_heap_table_size = 64M

# session memory settings #
sort_buffer_size = 33M
join_buffer_size = 134M
query_cache_size = 0
query_cache_limit = 0
read_buffer_size = 8M

# table cache performance settings #
table_open_cache = 4096
table_definition_cache = 4096
table_open_cache_instances = 64

# innodb settings #
innodb_undo_directory = /data/undolog/
innodb_undo_logs = 128
innodb_undo_tablespaces = 3
innodb_buffer_pool_instances = 8
innodb_online_alter_log_max_size = 1G
innodb_data_file_path = ibdata1:1G:autoextend
innodb_buffer_pool_size = 12G
innodb_file_per_table = 1
innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2
innodb_log_buffer_size = 64M
innodb_log_file_size = 256M
innodb_log_files_in_group = 3
innodb_io_capacity = 600
innodb_max_dirty_pages_pct = 75
innodb_autoinc_lock_mode = 2
innodb_locks_unsafe_for_binlog = 1
innodb_doublewrite = 0
innodb_open_files = 4096
innodb_page_size = 16k
innodb_purge_threads = 4
innodb_large_prefix = 1
innodb_thread_concurrency = 64
innodb_print_all_deadlocks = 1
innodb_strict_mode = 1
innodb_sort_buffer_size = 60M
innodb_online_alter_log_max_size = 1G

# wsrep settings #
wsrep_provider=/usr/lib64/libgalera_smm.so
wsrep_cluster_address=gcomm://192.168.10.238,192.168.10.239,192.168.10.240
wsrep_sst_auth=sstuser:sstuser
wsrep_provider_options="gcache.size=2G"
wsrep_cluster_name=Percona_cluster
wsrep_sst_method=xtrabackup-v2
wsrep_node_address=192.168.10.238
wsrep_node_name=pxc01
wsrep_slave_threads=8

4.1.3. Start mysql

Start mysql in PXC bootstrap mode. Only the master (first) node is started this way; the slave nodes are started with the normal startup method.

[mysql@VM-172-31-102133]# /etc/init.d/mysqld bootstrap-pxc

Bootstrapping PXC (Percona XtraDB Cluster) Starting MySQL (Percona XtraDB Cluster). SUCCESS!
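After bootstrapping the first node, a quick sanity check (a sketch; the values in the comments are what a healthy single-node cluster should show) before starting the other nodes:

mysql> SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';          -- expect 1 at this point
mysql> SHOW GLOBAL STATUS LIKE 'wsrep_cluster_status';        -- expect Primary
mysql> SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';   -- expect Synced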

4.1.4. Configure the second node

Configure the my.cnf file and add the following parameters:

server-id = 2

# Path to Galera library
wsrep_provider=/usr/lib64/libgalera_smm.so

# Cluster connection URL contains the IPs of node#1, node#2 and node#3
wsrep_cluster_address=gcomm://172.31.102.133,172.31.102.139,172.31.102.167

# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW

# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB

# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode=2

# Node #2 address
wsrep_node_address=172.31.102.139

# SST method
wsrep_sst_method=xtrabackup-v2

# Cluster name
wsrep_cluster_name=my_centos_cluster

# Authentication for SST method
wsrep_sst_auth="sstuser:s3cret"

4.1.5. Configure the third node

Configure the my.cnf file and add the following parameters:

server-id = 3

# Path to Galera library
wsrep_provider=/usr/lib64/libgalera_smm.so

# Cluster connection URL contains the IPs of node#1, node#2 and node#3
wsrep_cluster_address=gcomm://172.31.102.133,172.31.102.139,172.31.102.167

# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW

# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB

# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode=2

# Node #3 address
wsrep_node_address=172.31.102.167

# SST method
wsrep_sst_method=xtrabackup-v2

# Cluster name
wsrep_cluster_name=my_centos_cluster

# Authentication for SST method
wsrep_sst_auth="sstuser:s3cret"

4.1.6. Start mysql on the slave nodes

Start with the following command

service mysql start

View the log:
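Once all three nodes are up and the logs look clean, a final sanity check (a sketch; run on any node) is to confirm that they have all joined and synced:

mysql> SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';          -- expect 3
mysql> SHOW GLOBAL STATUS LIKE 'wsrep_incoming_addresses';    -- should list all three node addresses
mysql> SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';   -- expect Synced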
