
Introduction of PXC Cluster in MySQL


1. Introduction of PXC

Reference: the official Percona documentation

PXC (Percona XtraDB Cluster) is an open-source MySQL high-availability solution. It integrates Percona Server and XtraBackup with the Galera library to provide synchronous multi-master replication. The main Galera-based high-availability solutions are MariaDB Galera Cluster (MGC) and Percona XtraDB Cluster (PXC); at present the PXC architecture is increasingly widely used in production and is quite mature. Compared with traditional cluster architectures based on master-slave replication, MHA, or dual masters, the most prominent feature of PXC is that it solves the long-criticized problem of replication delay and basically achieves real-time synchronization. In addition, all nodes are peers of one another: Galera Cluster itself is a multi-master architecture. PXC replicates synchronously at the storage engine layer rather than asynchronously, so its data consistency is very high.

To build a PXC architecture, at least three MySQL instances are needed to form a cluster. The three instances are not in master-slave mode; each one is a master, so their relationship is equal and there is no distinction between master and slave. This is also called a multi-master architecture. A client can connect to any instance for reads and writes, and the data read from each instance is the same. After a write to any instance, the cluster synchronizes the newly written data to the other instances. This architecture does not share any storage and is a highly redundant MySQL cluster architecture.

1. Advantages and disadvantages of PXC.

Advantages:

High availability of the MySQL cluster and strong data consistency are achieved.
A true multi-node read-write cluster scheme is provided.
The master-slave replication delay problem is alleviated; synchronization is basically real-time.
Newly added nodes synchronize data automatically, without a manual backup in advance, so maintenance is easy.
Database failover is easy because all nodes accept writes.

Disadvantages:

Adding a new node is expensive, because the new node must copy the complete data set from one of the existing nodes: if the existing database holds 100GB of data, then 100GB has to be transferred.
Every update transaction must be globally certified before it is applied on the other nodes, so cluster performance is limited by the worst-performing node (the so-called short-board or bucket effect).
Because data consistency must be guaranteed, PXC performs synchronous replication in real time at the storage engine layer, so lock conflicts become more severe when multiple nodes write concurrently.
There is a write-amplification problem: every write operation is carried out on all nodes, so PXC is not recommended for write-heavy workloads.
Only the InnoDB storage engine is supported.

2. Working principle of PXC

The operation flow of PXC is roughly as follows. Before the transaction that a client submits to its write node is committed, that node broadcasts the write set generated by the transaction, obtains a global transaction ID, and transmits it to the other nodes. The other nodes certify the write set against their own data; if they find no conflicting data, they perform the apply_cb and commit_cb operations, otherwise they discard the transaction.

After the current node (the write node the client is connected to) passes certification, it performs the commit_cb operation and returns OK to the client. If certification fails, it performs rollback_cb.

A production PXC cluster must contain at least three nodes. If a node fails certification because of a data conflict, the cluster's way of handling it is to kick the node with inconsistent data out of the cluster; that node then automatically executes a shutdown command and shuts itself down.

3. Important concepts in PXC

First of all, the number of nodes in the cluster should be controlled: the whole cluster should have at least 3 and at most 8 nodes. At least 3 nodes are required to prevent split-brain, because split-brain only occurs when there are just two nodes. Split-brain shows itself as the cluster refusing to act: whatever command you enter, the result comes back as "unknown command".
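If you ever need to check whether a node still has quorum, the standard Galera status variable can be queried from any node that still responds; a minimal check (nothing here is specific to this article's setup):

mysql> show status like 'wsrep_cluster_status';
-- 'Primary' means the node belongs to the component that holds quorum;
-- 'non-Primary' (or the node refusing commands altogether) points to lost quorum / a split-brain situation.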

When a new node wants to join the PXC cluster, one of the existing nodes must be chosen as the donor, the node that contributes the full data set. PXC has two data transfer methods between nodes: SST (full transfer) and IST (incremental transfer). SST can be carried out in three ways: XtraBackup, mysqldump, and rsync; incremental transfer supports only XtraBackup. Generally speaking, a full SST transfer is acceptable when the amount of data is small, and XtraBackup is the method normally used.
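The donor is normally picked by the cluster automatically. If you prefer to steer which node serves as donor (for example, to keep load off a busy node), Galera exposes the wsrep_sst_donor option; a minimal sketch, using the node name assigned later in this article:

# optional, in /etc/my.cnf on the joining node
wsrep_sst_donor=pxc01    # prefer pxc01 as the donor for SST/IST; omit to let the cluster choose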

Nodes in the cluster switch between several states as new nodes join, nodes fail, synchronization fails, and so on. The meanings of these states are listed below:

Open: the node has started successfully and is trying to connect to the cluster.
Primary: the node is already in the cluster; when a new node joins, a donor is selected from the nodes in this state for data synchronization.
Joiner: the node is waiting to receive the synchronized data files.
Joined: the node has completed data synchronization and is trying to catch up with the rest of the cluster.
Synced: the node is providing service normally; synchronization is complete and the node is level with the rest of the cluster.
Donor: the node is providing the full data set to a newly joining node.

2. Deploy a PXC cluster

1. Prepare the environment

OS            hostname    IP
CentOS 7.5    pxc-01      192.168.20.2
CentOS 7.5    pxc-02      192.168.20.3
CentOS 7.5    pxc-03      192.168.20.4

Note: none of the hosts needs the MySQL service installed in advance; the MySQL service is included in the PXC package.

Note: the operations in steps 2-8 need to be performed on all three nodes. Wherever an operation does not have to be performed on all three nodes, I will say so explicitly.

2. Install the dependencies

[root@pxc-01 ~]# yum install -y libev lsof perl-Compress-Raw-Bzip2 perl-Compress-Raw-Zlib perl-DBD-MySQL perl-DBI perl-Digest perl-Digest-MD5 perl-IO-Compress perl-Net-Daemon perl-PlRPC socat openssl openssl-devel

3. Uninstall Mariadb

[root@pxc-01 ~]# yum -y erase mariadb && rpm -qa | grep mariadb | xargs rpm -e --nodeps

4. Download the required software packages (XtraBackup, Percona-XtraDB-Cluster and qpress)

[root@pxc-01 bin]# wget https://www.percona.com/downloads/Percona-XtraBackup-2.4/Percona-XtraBackup-2.4.18/binary/redhat/7/x86_64/percona-xtrabackup-24-2.4.18-1.el7.x86_64.rpm
[root@pxc-01 bin]# wget http://www.quicklz.com/qpress-11-linux-x64.tar
# the wget above may return a 406 error; if so, open the link in a browser, download the file, and upload it to the server
[root@pxc-01 bin]# wget https://www.percona.com/downloads/Percona-XtraDB-Cluster-LATEST/Percona-XtraDB-Cluster-5.7.28-31.41/binary/tarball/Percona-XtraDB-Cluster-5.7.28-rel31-31.41.1.Linux.x86_64.ssl101.tar.gz
# this package takes a while to download; wait patiently

5. Set up the qpress command and install xtrabackup

[root@pxc-01 bin]# tar xf qpress-11-linux-x64.tar -C /usr/bin
[root@pxc-01 bin]# rpm -ivh percona-xtrabackup-24-2.4.18-1.el7.x86_64.rpm

6. Deploy the PXC service

# create the mysql system user
[root@pxc-01 bin]# useradd -M -r -s /bin/false mysql
# unpack the tarball
[root@pxc-01 bin]# tar zxf Percona-XtraDB-Cluster-5.7.28-rel31-31.41.1.Linux.x86_64.ssl101.tar.gz
# move it to the installation directory
[root@pxc-01 bin]# mv Percona-XtraDB-Cluster-5.7.28-rel31-31.41.1.Linux.x86_64.ssl101 /usr/local/mysql
# create the data directory and change its owner and group to mysql
[root@pxc-01 bin]# mkdir -p /usr/local/mysql/data
[root@pxc-01 bin]# chown -R mysql.mysql /usr/local/mysql/
# add the MySQL binaries to PATH
[root@pxc-01 bin]# echo "export PATH=/usr/local/mysql/bin:$PATH" >> /etc/profile
[root@pxc-01 bin]# source /etc/profile

7. Define the MySQL service configuration

1) The complete configuration file of the pxc-01 node is as follows:

[root@pxc-01 bin]# cat /etc/my.cnf
# Example: Percona XtraDB Cluster 5.7
[client]
port = 3306
socket = /tmp/mysql.sock

[mysql]
prompt="\u@\h \R:\m:\s [\d]> "
no-auto-rehash

[mysqld]
user = mysql
port = 3306
basedir = /usr/local/mysql
datadir = /usr/local/mysql/data
socket = /tmp/mysql.sock
pid-file = db.pid
character-set-server = utf8mb4
skip_name_resolve = 1
open_files_limit = 65535
back_log = 1024
max_connections = 512
max_connect_errors = 1000000
table_open_cache = 1024
table_definition_cache = 1024
table_open_cache_instances = 64
thread_stack = 512K
external-locking = FALSE
max_allowed_packet = 32M
sort_buffer_size = 4M
join_buffer_size = 4M
thread_cache_size = 768
#query_cache_size = 0
#query_cache_type = 0
interactive_timeout = 600
wait_timeout = 600
tmp_table_size = 32M
max_heap_table_size = 32M
slow_query_log = 1
slow_query_log_file = /usr/local/mysql/data/slow.log
log-error = /usr/local/mysql/data/error.log
long_query_time = 0.1
server-id = 1813306
log-bin = /usr/local/mysql/data/mysql-bin
sync_binlog = 1
binlog_cache_size = 4M
max_binlog_cache_size = 1G
max_binlog_size = 1G
expire_logs_days = 7
master_info_repository = TABLE
relay_log_info_repository = TABLE
gtid_mode = on
enforce_gtid_consistency = 1
log_slave_updates
binlog_format = row            # for Galera to work correctly, the binlog format must be ROW
relay_log_recovery = 1
relay-log-purge = 1
key_buffer_size = 32M
read_buffer_size = 8M
read_rnd_buffer_size = 4M
bulk_insert_buffer_size = 64M
lock_wait_timeout = 3600
explicit_defaults_for_timestamp = 1
innodb_thread_concurrency = 0
innodb_sync_spin_loops = 100
innodb_spin_wait_delay = 30
transaction_isolation = REPEATABLE-READ
innodb_buffer_pool_size = 1024M
innodb_buffer_pool_instances = 8
innodb_buffer_pool_load_at_startup = 1
innodb_buffer_pool_dump_at_shutdown = 1
innodb_data_file_path = ibdata1:1G:autoextend
innodb_flush_log_at_trx_commit = 1
innodb_log_buffer_size = 32M
innodb_log_file_size = 2G
innodb_log_files_in_group = 2
#innodb_max_undo_log_size = 4G
innodb_io_capacity = 2000
innodb_io_capacity_max = 4000
innodb_flush_neighbors = 0
innodb_write_io_threads = 4
innodb_read_io_threads = 4
innodb_purge_threads = 4
innodb_page_cleaners = 4
innodb_open_files = 65535
innodb_max_dirty_pages_pct = 50
innodb_flush_method = O_DIRECT
innodb_lru_scan_depth = 4000
innodb_checksum_algorithm = crc32
#innodb_file_format = Barracuda
#innodb_file_format_max = Barracuda
innodb_lock_wait_timeout = 10
innodb_rollback_on_timeout = 1
innodb_print_all_deadlocks = 1
innodb_file_per_table = 1
innodb_online_alter_log_max_size = 4G
internal_tmp_disk_storage_engine = InnoDB
innodb_stats_on_metadata = 0   # value garbled in the source; 0 assumed

# The settings above are general database tuning and are mostly optional.
# The parameters that actually relate to PXC are the following:
wsrep_provider=/usr/local/mysql/lib/libgalera_smm.so
wsrep_provider_options="gcache.size=2G"
wsrep_cluster_name=my_pxc_cluster
wsrep_cluster_address=gcomm://192.168.20.2,192.168.20.3,192.168.20.4
wsrep_node_name=pxc01
wsrep_node_address=192.168.20.2
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth=sst:pwd@123
pxc_strict_mode=ENFORCING
default_storage_engine=InnoDB  # the MyISAM storage engine can only be used in a test environment
innodb_autoinc_lock_mode=2     # this InnoDB auto-increment lock mode is required by Galera

[mysqldump]
quick
max_allowed_packet = 32M
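Once a node is up and running (step 9 below), you can confirm that these settings were actually picked up with ordinary SHOW statements; for example:

mysql> show variables like 'wsrep_cluster_name';
mysql> show variables like 'wsrep_sst_method';
mysql> show variables like 'pxc_strict_mode';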

Some parameters related to PXC are explained as follows:

wsrep_provider: specifies the path to the Galera library (the plug-in shipped with PXC).
wsrep_provider_options="gcache.size=2G": the size of the Gcache ring buffer, which is pre-allocated at startup. gcache.size controls how much space is used to cache write-set (incremental) information; its default is 128MB, and it is set through the wsrep_provider_options variable. It is recommended to raise it to the 2G-4G range so that there is enough room to cache more incremental information.

gcache.mem_size is the size of the in-memory cache in Gcache; scaling it up moderately can improve the performance of the whole cluster.

gcache.page_size can be understood as the size of the page files that write sets are spilled to on disk when the Gcache memory is insufficient.
wsrep_cluster_name: defines the logical cluster name of the node; all nodes participating in the cluster must specify the same name.
wsrep_cluster_address: lists the IP addresses of the other nodes in the cluster. Currently the only backend schema supported for production is gcomm.
wsrep_node_name: sets the logical name of the node itself.
wsrep_node_address: specifies the IP address of the current node.
wsrep_sst_method: specifies the data synchronization method; xtrabackup-v2 is recommended.
wsrep_sst_auth: defines the user and password used when SST copies data (this user and password must exist on the database nodes).
pxc_strict_mode: strict mode. It is officially recommended to set this parameter to ENFORCING; with this setting, any table created in the database must have a primary key, otherwise data cannot be written to it.
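To see pxc_strict_mode=ENFORCING in action, try writing to a table that has no primary key once the cluster is running; the DML statement is expected to be rejected (the exact error text depends on the PXC version). A minimal illustration using the test01 database created in step 12 and a hypothetical table t_nopk:

mysql> create table test01.t_nopk (id int, name varchar(10));   -- creating the table still works
mysql> insert into test01.t_nopk values (1,'a');                -- rejected: the table has no explicit primary key
mysql> alter table test01.t_nopk add primary key (id);          -- add a primary key...
mysql> insert into test01.t_nopk values (1,'a');                -- ...and the insert now succeeds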

For a more detailed explanation of the PXC parameters, see the MySQL wsrep options documentation; it is well worth reading, and if you want to modify any of the configuration items above, it is recommended to read the detailed option descriptions first.
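If you later enlarge the Gcache as suggested above, note that all gcache.* options go into the same wsrep_provider_options string, separated by semicolons; a sketch (the values are illustrative, not a recommendation for every workload):

# in the [mysqld] section of /etc/my.cnf
wsrep_provider_options="gcache.size=4G; gcache.page_size=1G"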

2) the complete configuration file of pxc-02 node is as follows:

Compared with the pxc-01 node there are only three differences, and only those are shown below.

[root@pxc-02 src]# cat /etc/my.cnf
# identical to the pxc-01 configuration except for the following three settings:
server-id = 1813307              # the server-id cannot be the same on two nodes
wsrep_node_name=pxc02            # the node's logical name cannot be the same either
wsrep_node_address=192.168.20.3  # the IP address is that of the local node, so of course it differs

(the rest of the file, including [client], [mysql], the general [mysqld] tuning, the remaining wsrep_* settings and [mysqldump], is the same as on pxc-01)

3) The complete configuration file of the pxc-03 node is as follows:

Again, there are three configurations that cannot be the same: server-id, node logical name, and node IP address.

[root@pxc-03 src]# cat /etc/my.cnf
# identical to the pxc-01 configuration except for the following three settings:
server-id = 1813308
wsrep_node_name=pxc03
wsrep_node_address=192.168.20.4

8. Initialize the MySQL service and open the firewall ports on every node

# initialize the MySQL service
[root@pxc-01 ~]# mysqld --defaults-file=/etc/my.cnf --user=mysql --basedir=/usr/local/mysql/ --datadir=/usr/local/mysql/data/ --initialize
# open the firewall ports used by the cluster
[root@pxc-01 ~]# firewall-cmd --permanent --add-port={3306,4567,4444,4568}/tcp
[root@pxc-01 ~]# firewall-cmd --reload
[root@pxc-01 ~]# firewall-cmd --list-all
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: ens33
  sources:
  services: ssh dhcpv6-client
  ports: 3306/tcp 4567/tcp 4444/tcp 4568/tcp    # confirm that the four ports are open
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
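Since step 8 has to be repeated on all three nodes, you can optionally push the firewall changes out from a single host; a small sketch, assuming root SSH access to the IP addresses from the environment table:

for host in 192.168.20.2 192.168.20.3 192.168.20.4; do
    ssh root@$host "firewall-cmd --permanent --add-port={3306,4567,4444,4568}/tcp && firewall-cmd --reload"
done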

The relevant port information is as follows:

MySQL service port: 3306
Cluster communication port: 4567
SST (State Snapshot Transfer) port: 4444
IST (Incremental State Transfer) port: 4568

Note: unless otherwise stated, the following operations are performed on the pxc-01 node.

9. Start the pxc-01 node to bootstrap the cluster

[root@pxc-01 ~]# mysqld --defaults-file=/etc/my.cnf --wsrep_new_cluster &
[root@pxc-01 ~]# ss -anptl | grep mysql
LISTEN   0   128      *:4567      *:*    users:(("mysqld",pid=61739,fd=11))
LISTEN   0   128     :::3306     :::*    users:(("mysqld",pid=61739,fd=31))

10. View the initial password on the pxc-01 host and log in to MySQL

# view the initial password
[root@pxc-01 ~]# grep password /usr/local/mysql/data/error.log
2020-02-26T13:11:29.770228Z 1 [Note] A temporary password is generated for root@localhost: j?wh Jn=7uB.
# log in to the database; because the password contains special characters, it must be enclosed in single quotation marks
[root@pxc-01 ~]# mysql -uroot -p'j?wh Jn=7uB.'
# the initial password of the root user must be changed before any other operation can be carried out
root@localhost 21:47:  [(none)]> alter user root@localhost identified by '123.com';
# create the sst account specified in the configuration file; its password must match the one in the configuration file
root@localhost 21:47:  [(none)]> grant all privileges on *.* to sst@localhost identified by 'pwd@123';

11. Add the other nodes to the cluster
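While the other nodes are being added, you can watch the cluster membership from pxc-01 (or any node already in the cluster); for example:

mysql> show status like 'wsrep_cluster_size';
-- the value is 1 right after bootstrapping pxc-01 and should grow to 3 once pxc-02 and pxc-03 have joined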

Note: the following operations need to be performed on every node that is to be added to the cluster; here I perform them on the pxc-02 and pxc-03 nodes respectively.

[root@pxc-02 src]# mysqld --defaults-file=/etc/my.cnf &
[1] 61731
[root@pxc-02 src]# ss -anltp | grep mysql    # at this point the joining node is still synchronizing data from the nodes already in the cluster
LISTEN   0   128      *:4567      *:*    users:(("mysqld",pid=61731,fd=11))
[root@pxc-02 src]# ss -anltp | grep mysql    # wait a while; once port 3306 is listening, the data is synchronized and the node has joined the cluster
LISTEN   0   128      *:4567      *:*    users:(("mysqld",pid=61731,fd=11))
LISTEN   0   128     :::3306     :::*    users:(("mysqld",pid=61731,fd=36))
# after the node starts successfully, you can log in directly with the root password set on the pxc-01 node,
# because the database information of the pxc-01 node has already been synchronized to this node
[root@pxc-02 src]# mysql -uroot -p123.com

12. Verify the replication function

# create a database and a table
[root@pxc-01 ~]# mysql -uroot -p123.com
root@localhost 22:23:  [(none)]> create database test01;
root@localhost 22:24:  [(none)]> use test01
root@localhost 22:31:  [test01]> create table t1 (id int primary key, name varchar(10));
# insert one row on each of the three nodes
# pxc-01:
root@localhost 22:31:  [test01]> insert into t1 (id,name) values (1,'Zhang San');
# pxc-02:
root@localhost 22:33:  [(none)]> insert into test01.t1 (id,name) values (2,'Li Si');
# pxc-03:
root@localhost 22:33:  [(none)]> insert into test01.t1 (id,name) values (3,'Wang Wu');
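To confirm that replication really works in every direction, query the table from any node (pxc-03 here) and check that all three rows are visible; something like:

[root@pxc-03 ~]# mysql -uroot -p123.com -e "select * from test01.t1;"
# all three rows (Zhang San, Li Si, Wang Wu) should be returned on every node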

13. Add MySQL as a system service

Once it has been added as a system service, later starts and stops of the service no longer need the mysqld command used for the first start; you can simply use the systemctl start mysqld command.

# copy the startup script
[root@pxc-01 ~]# cp /usr/local/mysql/support-files/mysql.server /etc/init.d/mysqld
[root@pxc-01 ~]# chkconfig --add mysqld    # register it as a system service
[root@pxc-01 ~]# chkconfig mysqld on       # enable it at boot
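With the service registered, routine restarts of a single node can go through the init system while the remaining nodes keep the cluster alive. Keep in mind that this covers restarting one node at a time; if the whole cluster is ever shut down, the first node has to be bootstrapped again with --wsrep_new_cluster as in step 9. A sketch:

# restart one node while the others keep the cluster running
[root@pxc-01 ~]# systemctl stop mysqld
[root@pxc-01 ~]# systemctl start mysqld
# after a full-cluster shutdown, bootstrap the first node again before starting the others
[root@pxc-01 ~]# mysqld --defaults-file=/etc/my.cnf --wsrep_new_cluster &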

At this point all three nodes have joined the cluster. If you want to add further nodes, simply configure them in the same way as the pxc-02 or pxc-03 node (the configuration of the pxc-01 node differs slightly only because it is the first node to start the cluster).

14. Check the cluster status
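The cluster status can be read from any node through the wsrep status variables; for example:

[root@pxc-01 ~]# mysql -uroot -p123.com -e "show status like 'wsrep%';"
# or only the most commonly checked variables:
[root@pxc-01 ~]# mysql -uroot -p123.com -e "show status where Variable_name in ('wsrep_cluster_size','wsrep_cluster_status','wsrep_cluster_state_uuid','wsrep_local_state','wsrep_local_state_comment');"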

The main wsrep status variables are explained as follows:

wsrep_cluster_size: the number of nodes currently in the cluster.
wsrep_cluster_state_uuid: the UUID of the cluster; the UUID value must be the same on every node of the same cluster.
wsrep_cluster_status: the status of the cluster component the node belongs to; a value of Primary indicates normal operation. The node states themselves take the following values:
Open: the node has started successfully and is trying to connect to the cluster.
Primary: the node is already in the cluster; when a new node joins, a donor is selected from these nodes for data synchronization.
Joiner: the node is waiting to receive the synchronized data files.
Joined: the node has completed data synchronization and is trying to catch up with the rest of the cluster.
Synced: the node is providing service normally; synchronization is complete and the node is level with the rest of the cluster.
Donor: the node is providing the full data set to a newly joining node.

wsrep_local_state: shows the current node state; a value of 4 indicates normal operation. This state has four values:

Joining: the node is in the process of joining the cluster.
Donor: the node is providing the full data set to a newly joining node.
Joined: the node has successfully joined the cluster.
Synced: the node is in sync with the other nodes in the cluster.
