Cluster Installation of PXC

In this issue, the editor brings you the cluster installation of PXC. The article is rich in content and analyzed from a professional point of view; I hope you get something out of reading it.
Introduction to Percona XtraDB Cluster
Percona XtraDB Cluster is a high-availability and scalability solution for MySQL, and it is fully compatible with MySQL and Percona Server.
The features provided by Percona XtraDB Cluster are:
Synchronous replication: a transaction either commits on every node or on none.
Multi-master replication: writes can be made on any node.
Events are applied in parallel on the applier side, giving true parallel replication.
Nodes are configured automatically when they join.
Data consistency: replication is no longer asynchronous.
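A quick way to confirm that a node is actually participating in this synchronous, multi-master replication is to query the standard Galera status variables (a minimal sketch; the full output appears later in this article):

mysql > SHOW STATUS LIKE 'wsrep_cluster_size';  -- number of nodes currently in the cluster
mysql > SHOW STATUS LIKE 'wsrep_ready';         -- ON when this node accepts queries as a cluster member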
Advantages of PXC:
Queries are executed on the local node; because all data is local, there is no need for remote access.
No centralized management is needed: any node can be lost at any point in time, and the cluster keeps working as usual.
Good read scaling: any node can serve queries.
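To see the read scaling in practice, the same query can be sent to any node; a minimal sketch using the node addresses and port configured later in this article (and assuming the svoid.test table created in the test section):

shell > mysql -h 10.106.58.211 -P 3307 -e 'select * from svoid.test'
shell > mysql -h 10.106.58.213 -P 3307 -e 'select * from svoid.test'   # same rows from any node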
Disadvantages of PXC:
Adding a new node is expensive: the complete dataset has to be copied to it.
It does not effectively solve the write-scaling problem: every write is applied on every node.
The data is duplicated as many times as there are nodes.
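Because every node holds a full copy rather than a shard, disk usage grows with the node count; a trivial way to verify this on the setup used below (the /data/pxc path comes from the configuration later in this article):

shell > du -sh /data/pxc   # roughly the same size on every node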
Limitations of Percona XtraDB Cluster
Replication currently supports only the InnoDB storage engine. Writes to tables using any other engine, including the mysql.* system tables, are not replicated. DDL statements, however, are replicated, so CREATE USER is propagated, while a direct INSERT INTO mysql.user ... is not.
DELETE is not supported on tables without a primary key. Rows in such tables may also be stored in a different order on different nodes, so a SELECT ... LIMIT ... query can return different result sets.
LOCK TABLES / UNLOCK TABLES are not supported in a multi-master setup, nor are the locking functions GET_LOCK() and RELEASE_LOCK().
The query log cannot be written to a table. If you enable the query log, it can only be written to a file.
The maximum allowed transaction size is defined by wsrep_max_ws_rows and wsrep_max_ws_size; any larger operation, such as a big LOAD DATA, is rejected.
Because the cluster uses optimistic concurrency control, a transaction can still be aborted at commit time. If two transactions on different nodes write to the same row and commit, only one succeeds; the other is aborted at the cluster level and receives a deadlock error code (Error: 1213 SQLSTATE: 40001 (ER_LOCK_DEADLOCK)), as illustrated in the sketch after this list.
XA transactions are not supported, since they could be rolled back at commit time.
The write throughput of the whole cluster is limited by the weakest node: if one node becomes slow, the whole cluster becomes slow. For stable high performance, all nodes should run on identical hardware.
A minimum of three cluster nodes is recommended.
A problematic DDL statement can break the cluster.
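The cross-node write conflict described above is easy to reproduce once the cluster is running. A minimal sketch, assuming the svoid.test table created in the test section below already contains the row id = 2:

node1 mysql > BEGIN; UPDATE test SET id = 10 WHERE id = 2;
node2 mysql > BEGIN; UPDATE test SET id = 20 WHERE id = 2;
node1 mysql > COMMIT;   -- certifies first and wins
node2 mysql > COMMIT;   -- loses certification and is aborted
ERROR 1213 (40001): Deadlock found when trying to get lock; try restarting transaction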
Install Percona XtraDB Cluster

shell > tar zxvf Percona-XtraDB-Cluster-5.6.22-72.0.tar.gz
shell > cd Percona-XtraDB-Cluster-5.6.22-72.0

Install the dependency packages:

shell > yum install cmake gcc gcc-c++ libaio libaio-devel automake autoconf bzr bison libtool ncurses5-devel

Compile and install (the default installation path is /usr/local/mysql/):

shell > BUILD/compile-pentium64
shell > make && make install
shell > rpm -ivh Percona-XtraDB-Cluster-galera-3-3.14-1.rhel6.x86_64.rpm

Create the data file directory:

shell > mkdir -p /data/pxc
shell > mv /usr/local/mysql/ /opt/pxc
shell > chown -R mysql.mysql /opt/pxc
shell > chown -R mysql.mysql /data/pxc

Configure node node1:

shell > more /opt/pxc/my.cnf
[mysqld]
socket=/tmp/mysql-node1.sock
port=3307
datadir=/data/pxc
user=mysql
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
innodb_locks_unsafe_for_binlog=1
query_cache_size=0
query_cache_type=0
bind-address=0.0.0.0
wsrep_provider=/usr/lib64/libgalera_smm.so
wsrep_cluster_address=gcomm://10.106.58.211,10.106.58.212,10.106.58.213
wsrep_cluster_name=my_wsrep_cluster
wsrep_slave_threads=1
wsrep_certify_nonPK=1
wsrep_max_ws_rows=131072
wsrep_max_ws_size=1073741824
wsrep_debug=1
wsrep_convert_LOCK_to_trx=0
wsrep_retry_autocommit=1
wsrep_auto_increment_control=1
wsrep_causal_reads=0
#wsrep_notify_cmd=
# SST method
#wsrep_sst_method=xtrabackup-v2
wsrep_sst_method=rsync
# Authentication for SST method
#wsrep_sst_auth="sstuser:s3cret"
# Node #1 address
wsrep_node_address=10.106.58.211
server-id = 1

Launch node1 (bootstrapping the cluster with --wsrep-new-cluster) and create the SST user:

shell > /opt/pxc/scripts/mysql_install_db --basedir=/opt/pxc/ --datadir=/data/pxc/ \
        --user=mysql --defaults-file=/opt/pxc/my.cnf
shell > /opt/pxc/bin/mysqld_safe --defaults-file=/opt/pxc/my.cnf --wsrep-new-cluster &
shell > /opt/pxc/bin/mysql -u root -S /tmp/mysql-node1.sock
mysql > delete from mysql.user where user = '';
mysql > GRANT RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO sstuser@'localhost' IDENTIFIED BY 's3cret';
mysql > flush privileges;
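The configuration above uses rsync for SST (state snapshot transfer), with the xtrabackup alternative left commented out. A minimal sketch of switching to it, assuming percona-xtrabackup is installed on every node (the sstuser account created above is what the SST script authenticates with):

# in /opt/pxc/my.cnf on every node
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth="sstuser:s3cret"

Unlike rsync, which blocks the donor for the duration of the transfer, xtrabackup-v2 keeps the donor writable for most of it, which matters once the dataset is large.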
Configure and start nodes node2 and node3:

node2 and node3 are configured and started the same way as node1; only the per-node settings differ. For example, on node2:
[mysqld]
socket=/tmp/mysql-node2.sock
wsrep_node_address=10.106.58.212
server-id = 2

shell > /opt/pxc/bin/mysqld_safe --defaults-file=/opt/pxc/my.cnf &

Check the PXC status (output abridged):

mysql > show status like 'wsrep%';
+---------------------------+----------------------------------------------------------+
| Variable_name             | Value                                                    |
+---------------------------+----------------------------------------------------------+
| wsrep_local_state_uuid    | 17b9d472-5ace-11e5-b22f-ab14cb9dcc7b                     |
| wsrep_protocol_version    | 7                                                        |
| wsrep_last_committed      | 7                                                        |
| wsrep_replicated          | 0                                                        |
| wsrep_replicated_bytes    | 0                                                        |
| wsrep_repl_keys           | 0                                                        |
| wsrep_repl_keys_bytes     | 0                                                        |
| wsrep_repl_data_bytes     | 0                                                        |
| wsrep_repl_other_bytes    | 0                                                        |
| wsrep_received            | 3                                                        |
| wsrep_received_bytes      | 311                                                      |
| wsrep_local_commits       | 0                                                        |
| wsrep_local_cert_failures | 0                                                        |
| wsrep_incoming_addresses  | 10.106.58.213:3307,10.106.58.211:3307,10.106.58.212:3307 |
| wsrep_evs_delayed         |                                                          |
| wsrep_evs_evict_list      |                                                          |
| wsrep_evs_repl_latency    | 0                                                        |
| wsrep_evs_state           | OPERATIONAL                                              |
| wsrep_gcomm_uuid          | c21183b0-5acf-11e5-930d-9bfcbb0bb24c                     |
| wsrep_cluster_conf_id     | 5                                                        |
| wsrep_cluster_size        | 3                                                        |
| wsrep_cluster_state_uuid  | 17b9d472-5ace-11e5-b22f-ab14cb9dcc7b                     |
| wsrep_cluster_status      | Primary                                                  |
| wsrep_connected           | ON                                                       |
| wsrep_local_bf_aborts     | 0                                                        |
| wsrep_local_index         | 2                                                        |
| wsrep_provider_name       | Galera                                                   |
| wsrep_provider_vendor     | Codership Oy                                             |
| wsrep_provider_version    | 3.11(r93aca2d)                                           |
| wsrep_ready               | ON                                                       |
+---------------------------+----------------------------------------------------------+
58 rows in set (0.01 sec)

Simple test of PXC:

mysql@node1 > create database svoid;
Query OK, 1 row affected (0.00 sec)

mysql@node2 > use svoid
Database changed
mysql@node2 > create table test (id int);
Query OK, 0 rows affected (0.01 sec)

mysql@node3 > use svoid
Database changed
mysql@node3 > show tables;
+-----------------+
| Tables_in_svoid |
+-----------------+
| test            |
+-----------------+
1 row in set (0.00 sec)

mysql@node3 > insert into test select 1;
Query OK, 1 row affected (0.01 sec)
Records: 1  Duplicates: 0  Warnings: 0

mysql@node2 > update test set id = 2 where id = 1;
Query OK, 1 row affected (0.01 sec)
Rows matched: 1  Changed: 1  Warnings: 0

mysql@node1 > alter table test add primary key (id);
Query OK, 1 row affected (0.02 sec)
Records: 1  Duplicates: 0  Warnings: 0

mysql@node1 > select * from test;
+----+
| id |
+----+
|  2 |
+----+
1 row in set (0.00 sec)

Simulate a failure by shutting down node1:

shell > /opt/pxc/bin/mysqladmin -S /tmp/mysql-node1.sock -u root shutdown

mysql@node2 > insert into test select 1;
Query OK, 1 row affected (0.00 sec)
Records: 1  Duplicates: 0  Warnings: 0
mysql@node2 > insert into test select 3;
Query OK, 1 row affected (0.00 sec)
Records: 1  Duplicates: 0  Warnings: 0

Restart node1:

shell > /opt/pxc/bin/mysqld_safe --defaults-file=/opt/pxc/my.cnf &

mysql@node1 > select * from test;
+----+
| id |
+----+
|  1 |
|  2 |
|  3 |
+----+
3 rows in set (0.00 sec)

As you can see, the failure of one node does not affect normal use of the other nodes, and after the failed node is restarted its data is synchronized automatically.
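While a restarted node is rejoining, one way to watch it catch up is via the standard Galera state variables (a minimal sketch; wsrep_local_state_comment reads Joining or Joined during recovery and Synced once the node is consistent again):

mysql > SHOW STATUS LIKE 'wsrep_local_state_comment';  -- Synced when fully caught up
mysql > SHOW STATUS LIKE 'wsrep_cluster_size';         -- back to 3 once node1 has rejoined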
That is what the cluster installation of PXC shared here looks like. If you have run into similar questions, hopefully the analysis above helps you work through them.