Avoid creating a cluster with an even number of nodes, as this can lead to split-brain: with an even split (for example, 2/2 in a four-node cluster), neither partition holds a majority, so neither can safely continue as the primary component.
Linux version: CentOS 6.5
IP Information:
Node IP
Node 1 10.20.30.10
Node 2 10.20.30.20
Node 3 10.20.30.30
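Optionally, the node names used later in this guide (pxc1, pxc2, pxc3) can be mapped to these addresses in /etc/hosts on every node. This is only a convenience sketch; the steps below use raw IP addresses throughout:
[root@localhost ~]# cat >> /etc/hosts <<EOF
10.20.30.10 pxc1
10.20.30.20 pxc2
10.20.30.30 pxc3
EOF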
Turn off SELinux and the firewall, otherwise initialization of the cluster will fail later
[root@localhost mysql_log_57]# vim /etc/selinux/config
SELINUX=disabled
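The SELINUX=disabled setting only takes effect after a reboot. A minimal sketch for disabling both protections immediately on CentOS 6 (assuming the stock iptables service is the active firewall):
[root@localhost ~]# setenforce 0            # put SELinux into permissive mode for the current session
[root@localhost ~]# service iptables stop   # stop the firewall now
[root@localhost ~]# chkconfig iptables off  # keep it disabled across reboots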
1. Install Percona XtraDB cluster software on all nodes
Install the YUM source
[root@localhost ~]# yum install http://www.percona.com/downloads/percona-release/redhat/0.1-4/percona-release-0.1-4.noarch.rpm
Check if the package is available
[root@localhost install]# yum list | grep Percona-XtraDB-Cluster-57
Percona-XtraDB-Cluster-57.x86_64 5.7.18-29.20.1.el6 percona-release-x86_64
Percona-XtraDB-Cluster-57-debuginfo.x86_64 5.7.18-29.20.1.el6 percona-release-x86_64
Install the Percona XtraDB cluster package
[root@localhost install]# yum install Percona-XtraDB-Cluster-57
The following dependencies are reported as missing during installation:
Error: Package: Percona-XtraDB-Cluster-server-57-5.7.18-29.20.1.el6.x86_64 (percona-release-x86_64)
Requires: socat
Error: Package: percona-xtrabackup-24-2.4.7-2.el6.x86_64 (percona-release-x86_64)
Requires: libev.so.4()(64bit)
Install missing socat and libev packages
Download the rpm package whose name starts with epel-release from the following Fedora page:
http://dl.fedoraproject.org/pub/epel/6/x86_64/
Upload the rpm package to the server
Install the repository package
[root@localhost install]# yum localinstall epel-release-6-8.noarch.rpm
Install socat, libev
[root@localhost install]# yum install socat libev
Install the Percona XtraDB cluster package again
[root@localhost install]# yum install Percona-XtraDB-Cluster-57
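To confirm the cluster packages landed after the second attempt, a quick check along these lines can help (illustrative; exact package names vary by release):
[root@localhost install]# rpm -qa | grep -i percona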
Note: if a database is already installed on the server, the /etc/my.cnf configuration file already exists and needs to be renamed, otherwise it will interfere with cluster startup.
[root@localhost ~]# mv /etc/my.cnf /etc/my_5.7_mha.cnf
View the contents of the data directory
[root@localhost usr]# cd /var/lib/mysql
[root@localhost mysql]# ls -trl
total 126800
-rw-r-----. 1 mysql mysql 50331648 Jul 1 19:21 ib_logfile1
-rw-r-----. 1 mysql mysql 56 Jul 1 19:21 auto.cnf
-rw-------. 1 mysql mysql 1676 Jul 1 19:21 ca-key.pem
-rw-r--r--. 1 mysql mysql 1083 Jul 1 19:21 ca.pem
-rw-------. 1 mysql mysql 1676 Jul 1 19:21 server-key.pem
-rw-r--r--. 1 mysql mysql 1087 Jul 1 19:21 server-cert.pem
-rw-------. 1 mysql mysql 1680 Jul 1 19:21 client-key.pem
-rw-r--r--. 1 mysql mysql 1087 Jul 1 19:21 client-cert.pem
-rw-r--r--. 1 mysql mysql 452 Jul 1 19:21 public_key.pem
-rw-------. 1 mysql mysql 1680 Jul 1 19:21 private_key.pem
drwxr-x---. 2 mysql mysql 4096 Jul 1 19:21 performance_schema
drwxr-x---. 2 mysql mysql 4096 Jul 1 19:21 mysql
drwxr-x---. 2 mysql mysql 12288 Jul 1 19:21 sys
-rw-r-----. 1 mysql mysql 417 Jul 1 19:21 ib_buffer_pool
-rw-rw----. 1 root root 5 Jul 1 19:21 mysqld_safe.pid
-rw-------. 1 mysql mysql 5 Jul 1 19:21 mysql.sock.lock
srwxrwxrwx. 1 mysql mysql 0 Jul 1 19:21 mysql.sock
-rw-r-----. 1 mysql mysql 5 Jul 1 19:21 localhost.localdomain.pid
-rw-r-----. 1 mysql mysql 3932160 Jul 1 19:21 xb_doublewrite
-rw-r-----. 1 mysql mysql 12582912 Jul 1 19:21 ibtmp1
-rw-r-----. 1 mysql mysql 12582912 Jul 1 19:21 ibdata1
-rw-r-----. 1 mysql mysql 50331648 Jul 1 19:21 ib_logfile0
-rw-r-----. 1 mysql mysql 4653 Jul 1 19:22 localhost.localdomain.err
Default installation directories
View the contents of the software directory
[root@localhost ~]# cd /usr/
[root@localhost usr]# ls -trl
total 144
drwxr-xr-x. 2 root root 4096 Sep 23 2011 games
drwxr-xr-x. 2 root root 4096 Sep 23 2011 etc
drwxr-xr-x. 4 root root 4096 Mar 23 16:03 src
lrwxrwxrwx. 1 root root 10 Mar 23 16:03 tmp -> ../var/tmp
dr-xr-xr-x. 15 root root 4096 Mar 23 16:17 lib
drwxr-xr-x. 42 root root 4096 Apr 3 03:15 include
drwxr-xr-x. 22 root root 12288 Apr 3 03:41 libexec
drwxr-xr-x. 14 root root 4096 Apr 9 08:25 local
dr-xr-xr-x. 2 root root 36864 Jul 1 16:11 bin
dr-xr-xr-x. 92 root root 49152 Jul 1 16:11 lib64
dr-xr-xr-x. 2 root root 12288 Jul 1 16:11 sbin
drwxr-xr-x. 175 root root 4096 Jul 1 16:11 share
Start the Percona XtraDB cluster service
service mysql start
View cluster status
[root@localhost mysql]# service mysql status
SUCCESS! MySQL (Percona XtraDB Cluster) running (2263)
In fact, the cluster process is simply the mysqld process:
[root@localhost mysql]# service mysql status
SUCCESS! MySQL (Percona XtraDB Cluster) running (2928)
[root@localhost mysql]# ps -ef | grep mysql
root   2824     1  0 19:56 pts/1 00:00:00 /bin/sh /usr/bin/mysqld_safe --datadir=/var/lib/mysql --pid-file=/var/lib/mysql/localhost.localdomain.pid
mysql  2928  2824  2 19:56 pts/1 00:00:01 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mysql/plugin --user=mysql --log-error=/var/lib/mysql/localhost.localdomain.err --pid-file=/var/lib/mysql/localhost.localdomain.pid --wsrep_start_position=00000000-0000-0000-0000-000000000000:-1
root   2982  2711  0 19:57 pts/1 00:00:00 grep mysql
Reset the root password
[root@localhost log]# mysqld_safe --skip-grant-tables --skip-networking &
[root@localhost log]# mysql -uroot
mysql> update mysql.user set authentication_string=password('root') where user='root';
mysql> commit;
[root@localhost log]# mysqladmin shutdown
[root@localhost log]# service mysql start
Starting MySQL (Percona XtraDB Cluster).. SUCCESS!
[root@localhost]# mysql -uroot -p
mysql> alter user root@localhost identified by 'root';
2. Configure write-set replication parameters on all nodes
This includes the path to the Galera library and the addresses of the other nodes.
Stop the cluster service
[root@localhost ~]# service mysql stop
Shutting down MySQL (Percona XtraDB Cluster)... SUCCESS!
Parameter description:
wsrep_provider specifies the path to the Galera library.
wsrep_cluster_name specifies the logical name of the cluster; it must be the same on all nodes.
wsrep_cluster_address specifies the IP addresses of the cluster nodes (the gcomm:// list).
wsrep_node_name specifies the logical name of each node. If this parameter is not specified, the hostname is used by default.
wsrep_node_address specifies the IP address of the node.
wsrep_sst_method: by default, the cluster uses Percona XtraBackup for State Snapshot Transfer (SST). It is strongly recommended to set wsrep_sst_method=xtrabackup-v2. This method requires a database user; specify the SST user in the wsrep_sst_auth parameter.
wsrep_sst_auth specifies the SST authentication username and password, in the format user:password.
pxc_strict_mode sets PXC Strict Mode, which protects against the use of experimental and unsupported features.
binlog_format: Galera only supports row-level replication, so set binlog_format=ROW.
default_storage_engine: Galera only supports the InnoDB storage engine; MyISAM and other non-transactional storage engines are not supported, so set default_storage_engine=InnoDB.
innodb_autoinc_lock_mode: Galera only supports the interleaved (2) InnoDB auto-increment lock mode. Setting this parameter to traditional (0) or consecutive (1) can cause deadlocks, which in turn make replication fail, so set innodb_autoinc_lock_mode=2.
Node 1 configuration file
[mysqld]
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
wsrep_cluster_name=pxc-cluster
wsrep_cluster_address="gcomm://10.20.30.10,10.20.30.20,10.20.30.30"
wsrep_node_name=pxc1
wsrep_node_address=10.20.30.10
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth=sstuser:passw0rd
pxc_strict_mode=ENFORCING
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
basedir = /usr
datadir = /var/lib/mysql
pid-file=/var/run/mysqld/mysqld_galera.pid
log-error=/var/log/mysqld_galera.log
port = 3306
user = mysql
socket = /var/lib/mysql/mysql.sock
skip-external-locking
max_allowed_packet = 1M
table_open_cache = 4
sort_buffer_size = 64K
read_buffer_size = 256K
net_buffer_length = 2K
thread_stack = 256K
max_connections = 1000
# log-bin = /mysql_log_57/galera-bin
# server-id = 1000
Node 2 configuration file
[mysqld]
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
wsrep_cluster_name=pxc-cluster
wsrep_cluster_address=gcomm://10.20.30.10,10.20.30.20,10.20.30.30
wsrep_node_name=pxc2
wsrep_node_address=10.20.30.20
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth=sstuser:passw0rd
pxc_strict_mode=ENFORCING
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
basedir = /usr
datadir = /var/lib/mysql
pid-file=/var/run/mysqld/mysqld_galera.pid
log-error=/var/log/mysqld_galera.log
port = 3306
user = mysql
socket = /var/lib/mysql/mysql.sock
skip-external-locking
max_allowed_packet = 1M
table_open_cache = 4
sort_buffer_size = 64K
read_buffer_size = 256K
net_buffer_length = 2K
thread_stack = 256K
max_connections = 1000
# log-bin = /mysql_log_57/galera-bin
# server-id = 1000
Node 3 configuration file
[mysqld]
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
wsrep_cluster_name=pxc-cluster
wsrep_cluster_address=gcomm://10.20.30.10,10.20.30.20,10.20.30.30
wsrep_node_name=pxc3
wsrep_node_address=10.20.30.30
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth=sstuser:passw0rd
pxc_strict_mode=ENFORCING
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
basedir = /usr
datadir = /var/lib/mysql
pid-file=/var/run/mysqld/mysqld_galera.pid
log-error=/var/log/mysqld_galera.log
port = 3306
user = mysql
socket = /var/lib/mysql/mysql.sock
skip-external-locking
max_allowed_packet = 1M
table_open_cache = 4
sort_buffer_size = 64K
read_buffer_size = 256K
net_buffer_length = 2K
thread_stack = 256K
max_connections = 1000
# log-bin = /mysql_log_57/galera-bin
# server-id = 1000
View the Galera module
[root@localhost ~]# ll /usr/lib64/galera3/libgalera_smm.so
-rwxr-xr-x. 1 root root 2404960 May 31 23:07 /usr/lib64/galera3/libgalera_smm.so
3. Initialize the cluster on the first node
This node must contain all the data as the data source for the cluster.
On the first node, do the following
[root@localhost mysql_log_57]# /etc/init.d/mysql bootstrap-pxc
Starting MySQL (Percona XtraDB Cluster). SUCCESS!
View cluster status
[root@localhost mysql_log_57]# mysql -uroot -p
mysql> show status like 'wsrep%';
| wsrep_local_state_comment | Synced  |  // the node is synchronized
| wsrep_cluster_size        | 1       |  // the cluster has only one node
| wsrep_cluster_status      | Primary |  // primary component
| wsrep_connected           | ON      |  // connected to the cluster
| wsrep_ready               | ON      |  // ready to replicate write-sets
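When only a few of the many wsrep variables are of interest, the output can be filtered with standard SHOW STATUS syntax (a convenience sketch):
mysql> SHOW STATUS WHERE Variable_name IN ('wsrep_cluster_size', 'wsrep_cluster_status', 'wsrep_ready');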
Create a database user for SST before adding other nodes to the cluster. This account must match the information in the configuration file.
mysql> CREATE USER 'sstuser'@'localhost' IDENTIFIED BY 'passw0rd';
Query OK, 0 rows affected (0.02 sec)
mysql> GRANT RELOAD, LOCK TABLES, PROCESS, REPLICATION CLIENT ON *.* TO
    -> 'sstuser'@'localhost';
Query OK, 0 rows affected (0.00 sec)
4. Add other nodes to the cluster
Once parameters such as wsrep_cluster_address are configured and a node is started, it automatically joins the cluster and begins synchronizing data.
Note: do not add multiple nodes to the cluster at the same time to avoid huge pressure on network traffic.
By default, Percona XtraDB Cluster uses Percona XtraBackup for State Snapshot Transfer (SST).
The following conditions need to be met:
Set the wsrep_sst_method parameter to xtrabackup-v2 and provide the SST user credentials via the wsrep_sst_auth variable.
Create the SST user on the node that initialized the cluster.
Start the second node
[root@localhost ~]# /etc/init.d/mysql start
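While the new node joins, the SST progress can be followed in its error log, whose path comes from the log-error setting above (a hedged sketch; the exact log messages vary by version):
[root@localhost ~]# tail -f /var/log/mysqld_galera.log    # watch for the state transfer to complete and the node to report Synced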
On the second node, view the user and cluster status
You can see that the SST user has been replicated to the second node: the cluster now has two nodes and wsrep_cluster_size has become 2.
mysql> select user, host from mysql.user;
+-----------+-----------+
| user      | host      |
+-----------+-----------+
| mysql.sys | localhost |
| root      | localhost |
| sstuser   | localhost |
+-----------+-----------+
3 rows in set (0.01 sec)
mysql> show status like 'wsrep%';
| wsrep_local_state_comment | Synced |
| wsrep_cluster_size        | 2      |
Add the third node to the cluster
[root@localhost ~]# /etc/init.d/mysql start
Check the status of the cluster on the third node
mysql> show status like 'wsrep%';
| wsrep_cluster_size | 3 |
You can see that wsrep_cluster_size has changed to 3; the third node has joined the cluster.
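To confirm exactly which members the cluster sees, the member addresses can also be listed (an illustrative check; with this setup the three node addresses should appear):
mysql> SHOW STATUS LIKE 'wsrep_incoming_addresses';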
5. Verify replication effect
Create a database on the second node
mysql> CREATE DATABASE percona;
Query OK, 1 row affected (0.07 sec)
On the third node, create a table in the database you just created
mysql> USE percona;
Database changed
mysql> CREATE TABLE example (node_id INT PRIMARY KEY, node_name VARCHAR(30));
Query OK, 0 rows affected (0.05 sec)
On the first node, insert a row into the table
mysql> INSERT INTO percona.example VALUES (1, 'percona1');
Query OK, 1 row affected (0.23 sec)
mysql> commit;
Query OK, 0 rows affected (0.00 sec)
View the data in this table on the second node
mysql> SELECT * FROM percona.example;
+---------+-----------+
| node_id | node_name |
+---------+-----------+
|       1 | percona1  |
+---------+-----------+
1 row in set (0.00 sec)