Introduction to PXC
PXC, short for Percona XtraDB Cluster, is a free MySQL cluster product from Percona. PXC connects separate MySQL instances into a multi-master cluster using Galera replication. In a PXC cluster every MySQL node is readable and writable, that is, every node plays the master role in master-slave terms; there are no read-only nodes.
PXC is essentially a multi-master synchronous replication plug-in for OLTP workloads based on Galera. It is mainly used to provide strong data consistency across a MySQL cluster. PXC can cluster MySQL and any of its derivatives, such as MariaDB and Percona Server. Because Percona Server is the derivative closest to MySQL Enterprise Edition, offers a noticeable performance improvement over the community edition of MySQL, and remains largely compatible with it, a PXC cluster is usually built on top of Percona Server.
For more information on how to choose a database cluster scheme, please see the earlier article: A brief discussion on database cluster schemes
Characteristics of PXC:
Synchronous replication: a transaction either commits on all cluster nodes or on none of them
Multi-master replication: there is no division into master and slave roles, and data can be read and written on any node
Strong data consistency: all cluster nodes hold the same data in real time
The more nodes a PXC cluster has, the slower data synchronization becomes, so the cluster should not be too large
The synchronization speed of the cluster depends on the node with the lowest configuration, so the hardware configuration of all nodes should be kept as consistent as possible
The PXC cluster only supports the InnoDB engine, so only InnoDB data is synchronized
Install PXC and set up the cluster environment
Environment notes:
VMware Workstation Pro 15.5
Percona XtraDB Cluster 5.7
CentOS 8
MySQL has several common derivatives, and Percona Server is one of them. Percona Server was chosen here because it is the closest to the enterprise version of MySQL. The comparison of the derivative versions is as follows:
The PXC cluster design of this article is shown in the figure:
Tips: the smallest possible PXC cluster has two nodes, but this article uses three. The reason is that, to prevent split-brain, a PXC cluster stops running automatically when more than half of its nodes become unreachable due to an unexpected outage. With only two nodes, if one node goes down the surviving node no longer holds more than half of the cluster, so it stops serving as well; the fault tolerance is too poor. Designing for at least three nodes therefore improves the availability of the PXC cluster.
According to the figure, we need to create three virtual machines to build a three-node PXC cluster:
Node description:
Node1: host PXC-Node1, IP 192.168.190.132
Node2: host PXC-Node2, IP 192.168.190.133
Node3: host PXC-Node3, IP 192.168.190.134
The configuration of each virtual machine is shown in the following figure:
The drawback of a PXC cluster is that strong data consistency is achieved at the expense of performance: the more nodes in the cluster, the longer data synchronization takes. So how many database servers should a cluster contain to get, relatively speaking, the best performance?
Generally speaking, a PXC cluster of no more than 15 nodes performs well; beyond that, performance suffers. If more capacity is needed, each PXC cluster is treated as a shard, and several shards are configured in MyCat to handle data sharding and concurrent access.
System preparation
Some versions of CentOS bundle mariadb-libs by default, and it needs to be uninstalled before installing PXC:
[root@PXC-Node1 ~]# yum -y remove mari*
The PXC cluster uses four ports:
3306: MySQL service port
4444: full synchronization (SST) request port
4567: communication port between database nodes
4568: incremental synchronization (IST) request port
So if the system has a firewall enabled, you need to open these ports:
[root@PXC-Node1 ~]# firewall-cmd --zone=public --add-port=3306/tcp --permanent
[root@PXC-Node1 ~]# firewall-cmd --zone=public --add-port=4444/tcp --permanent
[root@PXC-Node1 ~]# firewall-cmd --zone=public --add-port=4567/tcp --permanent
[root@PXC-Node1 ~]# firewall-cmd --zone=public --add-port=4568/tcp --permanent
[root@PXC-Node1 ~]# firewall-cmd --reload
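If you want to confirm that the rules took effect, firewalld can list the ports that are now open. This check is an optional extra and not part of the original steps; it should print the four ports added above:
[root@PXC-Node1 ~]# firewall-cmd --zone=public --list-ports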
Install PXC
Go to the official documentation first:
Installing Percona XtraDB Cluster
There are two relatively simple ways to install PXC: one is to download the rpm packages from the official website and install them locally, the other is to install online from the official yum repository. This article demonstrates the local installation. First open the following URL:
https://www.percona.com/downloads/Percona-XtraDB-Cluster-LATEST/
After selecting the appropriate version, copy the download link:
Then download it on CentOS using the wget command, as shown in the following example:
[root@PXC-Node1 ~]# cd /usr/local/src
[root@PXC-Node1 /usr/local/src]# wget https://www.percona.com/downloads/Percona-XtraDB-Cluster-LATEST/Percona-XtraDB-Cluster-5.7.28-31.41/binary/redhat/8/x86_64/Percona-XtraDB-Cluster-5.7.28-31.41-r514-el8-x86_64-bundle.tar
Create a directory where the rpm files are stored, and extract the downloaded PXC installation package to the new directory:
[root@PXC-Node1 /usr/local/src]# mkdir pxc-rpms
[root@PXC-Node1 /usr/local/src]# tar -xvf Percona-XtraDB-Cluster-5.7.28-31.41-r514-el8-x86_64-bundle.tar -C pxc-rpms
[root@PXC-Node1 /usr/local/src]# ls pxc-rpms
Percona-XtraDB-Cluster-57-5.7.28-31.41.1.el8.x86_64.rpm
Percona-XtraDB-Cluster-57-debuginfo-5.7.28-31.41.1.el8.x86_64.rpm
Percona-XtraDB-Cluster-57-debugsource-5.7.28-31.41.1.el8.x86_64.rpm
Percona-XtraDB-Cluster-client-57-5.7.28-31.41.1.el8.x86_64.rpm
Percona-XtraDB-Cluster-client-57-debuginfo-5.7.28-31.41.1.el8.x86_64.rpm
Percona-XtraDB-Cluster-devel-57-5.7.28-31.41.1.el8.x86_64.rpm
Percona-XtraDB-Cluster-full-57-5.7.28-31.41.1.el8.x86_64.rpm
Percona-XtraDB-Cluster-garbd-57-5.7.28-31.41.1.el8.x86_64.rpm
Percona-XtraDB-Cluster-garbd-57-debuginfo-5.7.28-31.41.1.el8.x86_64.rpm
Percona-XtraDB-Cluster-server-57-5.7.28-31.41.1.el8.x86_64.rpm
Percona-XtraDB-Cluster-server-57-debuginfo-5.7.28-31.41.1.el8.x86_64.rpm
Percona-XtraDB-Cluster-shared-57-5.7.28-31.41.1.el8.x86_64.rpm
Percona-XtraDB-Cluster-shared-57-debuginfo-5.7.28-31.41.1.el8.x86_64.rpm
Percona-XtraDB-Cluster-test-57-5.7.28-31.41.1.el8.x86_64.rpm
Percona-XtraDB-Cluster-test-57-debuginfo-5.7.28-31.41.1.el8.x86_64.rpm
In addition, installing PXC depends on qpress and percona-xtrabackup-24, and the rpm packages for both can be found in the repository provided by Percona. Go into the pxc-rpms directory and download these two components as well:
[root@PXC-Node1 /usr/local/src]# cd pxc-rpms
[root@PXC-Node1 /usr/local/src/pxc-rpms]# wget https://repo.percona.com/release/8/RPMS/x86_64/qpress-11-1.el8.x86_64.rpm
[root@PXC-Node1 /usr/local/src/pxc-rpms]# wget https://repo.percona.com/release/8/RPMS/x86_64/percona-xtrabackup-24-2.4.18-1.el8.x86_64.rpm
After completing the above steps, you can now install PXC locally through the yum command:
[root@PXC-Node1 /usr/local/src/pxc-rpms]# yum localinstall -y *.rpm
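For reference, the repository-based installation mentioned earlier would look roughly like the following. This is only a sketch: the percona-release package URL and the Percona-XtraDB-Cluster-57 package name are assumptions that should be checked against Percona's current documentation before use.
[root@PXC-Node1 ~]# yum install -y https://repo.percona.com/yum/percona-release-latest.noarch.rpm
[root@PXC-Node1 ~]# yum install -y Percona-XtraDB-Cluster-57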
After a successful installation, the mysql-related commands are available on the system. If you can view the version information as shown below, the installation succeeded:
[root@PXC-Node1 /usr/local/src/pxc-rpms]# mysql --version
mysql  Ver 14.14 Distrib 5.7.28-31, for Linux (x86_64) using 7.0
[root@PXC-Node1 /usr/local/src/pxc-rpms]#
Configure the PXC cluster
Some configuration is required before the cluster can be started. PXC's configuration files are located in the /etc/percona-xtradb-cluster.conf.d/ directory by default; the /etc/my.cnf file simply includes that directory:
[root@PXC-Node1 ~]# cd /etc/percona-xtradb-cluster.conf.d/
[root@PXC-Node1 /etc/percona-xtradb-cluster.conf.d]# ll
total 12
-rw-r--r-- 1 root root  381 Dec 13 17:19 mysqld.cnf        # mysql configuration
-rw-r--r-- 1 root root  440 Dec 13 17:19 mysqld_safe.cnf   # mysqld_safe configuration
-rw-r--r-- 1 root root 1066 Dec 13 17:19 wsrep.cnf         # PXC cluster related configuration
Add some basic configurations such as character sets to the mysqld.cnf file:
[root@PXC-Node1 /etc/percona-xtradb-cluster.conf.d]# vim mysqld.cnf
[mysqld]
...
# set the character set
character_set_server=utf8
# set the listening IP
bind-address=0.0.0.0
# skip DNS resolution
skip-name-resolve
Then configure the PXC cluster and modify the following configuration items in the wsrep.cnf file:
[root@PXC-Node1 /etc/percona-xtradb-cluster.conf.d]# vim wsrep.cnf
[mysqld]
# unique ID of this MySQL instance within the PXC cluster; it must be a number and must not be repeated
server-id=1
# path to the Galera library file
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
# IPs of all nodes in the cluster
wsrep_cluster_address=gcomm://192.168.190.132,192.168.190.133,192.168.190.134
# name of the PXC cluster
wsrep_cluster_name=pxc-cluster
# IP of the current node
wsrep_node_address=192.168.190.132
# name of the current node
wsrep_node_name=pxc-node-01
# synchronization method (mysqldump, rsync, xtrabackup)
wsrep_sst_method=xtrabackup-v2
# account and password used for synchronization
wsrep_sst_auth=admin:Abc_123456
# use strict synchronization mode
pxc_strict_mode=ENFORCING
# ROW-based replication (safe and reliable)
binlog_format=ROW
# default storage engine
default_storage_engine=InnoDB
# auto-increment without table locks (interleaved mode)
innodb_autoinc_lock_mode=2
Start the PXC cluster
So far, we have completed the installation and configuration of PXC on the virtual machine PXC-Node1. Then complete the same steps on the other two nodes, and I won't repeat them here.
When all the nodes are ready, start the PXC cluster with the following command. Note that this command is only used to start the first node; when the cluster is started for the very first time, the first node can be any of the three. Here I use PXC-Node1 as the first node, so I execute this command on that virtual machine:
[root@PXC-Node1 ~]# systemctl start mysql@bootstrap.service
Other nodes only need to start the MySQL service normally, and then join the cluster automatically according to the configuration in the wsrep.cnf file:
[root@PXC-Node2 ~]# systemctl start mysqld
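As an optional sanity check, once you can log in (the root password is changed in a later section), you can confirm on any node that the cluster sees all of its members; with the three nodes used in this article, the value should be 3:
mysql> show status like 'wsrep_cluster_size';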
Disable starting Percona Server automatically on boot:
[root@localhost ~]# systemctl disable mysqld
Removed /etc/systemd/system/multi-user.target.wants/mysqld.service.
Removed /etc/systemd/system/mysql.service.
[root@localhost ~]#
Tips: start on boot is disabled because, in a PXC cluster, a node that restarts after going down synchronizes its data from a randomly chosen node in the cluster. If the node has been down for a long time, a large amount of data has to be synchronized, and while such a large synchronization is in progress the PXC cluster throttles other write operations until it finishes. So after a long outage the right thing to do is not to start the node immediately, but to first copy the data files from another node to this node and then start it. That way much less data has to be synchronized and there is no prolonged throttling.
Create a database account
Then change the default password for the root account. We can find the initial default password in the log file of mysql. The red box in the following figure indicates the default password:
Tips: the default password is generated only after the MySQL service is started for the first time
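If you prefer the command line to the screenshot, the temporary password can usually be pulled out of the log with grep; the log path /var/log/mysqld.log is the common default and may differ on your system:
[root@localhost ~]# grep 'temporary password' /var/log/mysqld.log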
Copy the default password and use the mysql_secure_installation command to change the password for the root account:
[root@localhost ~]# mysql_secure_installation
For security reasons, remote login is generally not allowed for root accounts, so we need to create a separate database account for remote access. This account is also used for synchronizing data in PXC clusters, corresponding to the wsrep_sst_auth configuration item in the wsrep.cnf file:
[root@localhost ~]# mysql -uroot -p
mysql> create user 'admin'@'%' identified by 'Abc_123456';
mysql> grant all privileges on *.* to 'admin'@'%';
mysql> flush privileges;
After creating the account, use the client tool to test the remote connection to see if the connection is successful:
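If no GUI client is at hand, the same check can be done with the mysql command-line client from another machine; the IP below is node 1 from this article:
mysql -h 192.168.190.132 -P 3306 -u admin -p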
At this point the PXC cluster is built, and you can already see its synchronization effect: the operations above, changing the root password and creating a new account, are synchronized to the other two nodes. In other words, the root password on the other two nodes is already the new password, and the admin account exists there as well. You can verify this yourself.
In addition, we can also use the following statement to confirm the status information of the cluster:
show status like 'wsrep_cluster%';
Execution result:
Variable description:
wsrep_cluster_weight: the weight value of this node in the cluster
wsrep_cluster_conf_id: the number of times the cluster membership has changed (+1 for every node added or removed)
wsrep_cluster_size: the number of nodes in the cluster
wsrep_cluster_state_uuid: the UUID of the current cluster state; it uniquely identifies the current state of the cluster and the sequence of changes it has gone through, and can also be used to check whether two or more nodes belong to the same cluster: if the value is the same on two nodes they are in the same cluster, otherwise they are not
wsrep_cluster_status: the current status of the cluster
Verify data synchronization in the cluster
1. Verify whether the created database can be synchronized.
Create a test library in Node 1:
After it is created, the test library should also be visible on the other nodes:
2. Verify whether the created data table can be synchronized.
Create a student table in the test library on node 1:
After the creation is complete, you should also see this student table on other nodes:
3. Verify whether the table data can be synchronized.
Insert a piece of data into the student table in node 1:
At this point, this data should also be visible in other nodes:
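The steps above are shown with screenshots of a GUI client in the original; as a rough command-line equivalent, the three checks can be run with SQL like the following. The column definitions and the inserted value here are illustrative assumptions, not the exact ones from the original figures:
On node 1:
mysql> create database test;
mysql> create table test.student (id int primary key auto_increment, name varchar(50));
mysql> insert into test.student (name) values ('Jack');
On another node:
mysql> select * from test.student;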
Description of status parameters of the cluster
The status parameters of the cluster can be queried through the SQL statement, as follows:
show status like '%wsrep%';
Since the query returns a lot of status variables, only some commonly used ones are described here. The PXC cluster status parameters can be divided into the following categories:
Queue related
wsrep_local_send_queue: length of the send queue
wsrep_local_send_queue_max: maximum length of the send queue
wsrep_local_send_queue_min: minimum length of the send queue
wsrep_local_send_queue_avg: average length of the send queue
wsrep_local_recv_queue: length of the receive queue
wsrep_local_recv_queue_max: maximum length of the receive queue
wsrep_local_recv_queue_min: minimum length of the receive queue
wsrep_local_recv_queue_avg: average length of the receive queue
Replication related
wsrep_replicated: number of times data has been synchronized to other nodes
wsrep_replicated_bytes: total amount of data synchronized to other nodes, in bytes
wsrep_received: number of synchronization requests received from other nodes
wsrep_received_bytes: total amount of synchronization data received from other nodes, in bytes
wsrep_last_applied: number of synchronizations applied
wsrep_last_committed: number of transactions committed
Flow control related
wsrep_flow_control_paused_ns: total time spent in the flow-control paused state (nanoseconds)
wsrep_flow_control_paused: proportion of time spent paused by flow control (0 ~ 1)
wsrep_flow_control_sent: number of flow-control pause events sent, i.e. how many times the current node triggered flow control
wsrep_flow_control_recv: number of flow-control pause events received
wsrep_flow_control_interval: the lower and upper limits of flow control. The upper limit is the maximum number of requests allowed in the queue; when the queue reaches the upper limit, new requests are rejected, i.e. flow control is triggered. As queued requests are processed the queue shrinks, and once it reaches the lower limit new requests are allowed again, i.e. flow control is lifted
wsrep_flow_control_status: the on/off state of flow control (on: ON, off: OFF)
Transaction related
wsrep_cert_deps_distance: number of transactions that can be executed concurrently
wsrep_apply_oooe: proportion of transactions in the receive queue
wsrep_apply_oool: frequency with which transactions in the receive queue are applied out of order
wsrep_apply_window: average number of transactions in the receive queue
wsrep_commit_oooe: proportion of transactions in the send queue
wsrep_commit_oool: meaningless (there are no local out-of-order commits)
wsrep_commit_window: average number of transactions in the send queue
State related
wsrep_local_state_comment: current status of the node
wsrep_cluster_status: current status of the cluster
wsrep_connected: whether the node is connected to the cluster
wsrep_ready: whether the cluster is working properly
wsrep_cluster_size: number of nodes in the cluster
wsrep_desync_count: number of delayed nodes
wsrep_incoming_addresses: IP addresses of all nodes in the cluster
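For routine monitoring it is often more convenient to query a narrower slice of these variables than to dump everything with '%wsrep%'; for example, to spot-check flow control and receive-queue pressure on a node:
mysql> show status like 'wsrep_flow_control%';
mysql> show status like 'wsrep_local_recv_queue%';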
PXC Node State Diagram:
OPEN: the node started successfully
PRIMARY: the node has successfully joined the cluster
JOINER: the node is synchronizing data from other nodes
JOINED: the node has finished synchronizing data with other nodes
SYNCED: the node is in sync with the cluster and can provide service
DONOR: the node is providing a full data synchronization (SST) to another node and is unavailable
PXC cluster state diagram:
PRIMARY: normal state
NON_PRIMARY: split-brain has occurred in the cluster
DISCONNECTED: the node is unable to connect to the cluster
Official documentation:
Index of wsrep status variables
Galera Status Variables
Taking PXC nodes offline and online
1. How to take a PXC node offline safely
Shut the node down with the command that corresponds to how it was started.
First node example: if the first node was started with systemctl start mysql@bootstrap.service, then the corresponding shutdown command is systemctl stop mysql@bootstrap.service
Other node example: if the other nodes were started with systemctl start mysqld, then the corresponding shutdown command is systemctl stop mysqld
2. If all PXC nodes have been taken offline safely, the node that went offline last must be started first when the cluster is brought back up.
Any node can be started first when the cluster is brought up for the very first time. However, for a cluster that has already been running, when the whole cluster goes offline and is later brought back online, the node that went offline last must be started first. You can tell whether a node can be started as the first node by looking at its grastate.dat file:
[root@PXC-Node1 ~]# cat /var/lib/mysql/grastate.dat
# GALERA saved state
version: 2.1
uuid: 2c915504-39ac-11ea-bba7-a294386c4285
seqno: -1
safe_to_bootstrap: 0
[root@PXC-Node1 ~]#
Note: a safe_to_bootstrap value of 0 means the node cannot be started as the first node, while 1 means it can. The last node to go offline in a PXC cluster sets safe_to_bootstrap to 1, so that node must be started first the next time the cluster comes up. This is because the last node to go offline holds the most recent data; starting it first and letting the other nodes synchronize from it ensures the cluster keeps the newest data. Otherwise the cluster could end up with stale data from some earlier point in time.
3. If all PXC nodes went offline unexpectedly, but not at the same time
As mentioned at the beginning of this article, a PXC cluster stops running automatically when more than half of its nodes become unreachable due to an unexpected outage. However, if nodes are taken offline safely, the cluster does not stop running; its size simply shrinks. The cluster stops automatically only when more than half of the nodes go offline unexpectedly. Unexpected offline situations include:
crashes, hangs, host shutdowns, restarts, power failures, network disconnections, and so on; in short, any node that goes offline without the corresponding stop command being used counts as having gone offline unexpectedly.
As long as the nodes in the PXC cluster do not all exit unexpectedly at the same time, then by the time only one node is left in the cluster, that node automatically changes the safe_to_bootstrap value in its grastate.dat file to 1. So when restarting the cluster, that last remaining node is again the one to start first.
4. If all PXC nodes exit unexpectedly at the same time, you need to modify the grastate.dat file.
When all nodes in the cluster exit at the same time due to an unexpected failure, the safe_to_bootstrap value of every node is 0, because no node had time to change it. When safe_to_bootstrap is 0 on every node, the PXC cluster cannot be started.
In this case, we can only manually select a node, change safe_to_bootstrap to 1, and then start that node as the first node:
[root@PXC-Node1 ~]# vim /var/lib/mysql/grastate.dat
...
safe_to_bootstrap: 1

[root@PXC-Node1 ~]# systemctl start mysql@bootstrap.service
Then start the other nodes in turn:
[root@PXC-Node2 ~]# systemctl start mysqld
5. If the cluster still has running nodes, the offline nodes only need to be brought back online as ordinary nodes:
[root@PXC-Node2 ~]# systemctl start mysqld