

Practical MariaDB Galera Cluster architecture


MariaDB Galera Cluster installation:

Operating system: CentOS 7.4

Cluster size: 3 nodes

Host information:

192.168.153.142 node1 (selinux disabled, firewalld stopped)

192.168.153.143 node2 (selinux disabled, firewalld stopped)

192.168.153.144 node3 (selinux disabled, firewalld stopped)
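A minimal sketch of how that state might be reached on each node, assuming a test environment where disabling the firewall outright is acceptable (in production you would instead open the Galera ports 3306, 4444, 4567 and 4568):

setenforce 0                                                          # selinux off until next boot
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config   # persist across reboots
systemctl stop firewalld
systemctl disable firewalld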

Build steps

1. Host name resolution (execute on all three nodes):

vim /etc/hosts

192.168.153.142 node1

192.168.153.143 node2

192.168.153.144 node3
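To confirm that resolution works, each node can ping its peers by hostname, for example:

for h in node1 node2 node3; do ping -c 1 "$h"; done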

2. Install the packages

Method 1 (yum): yum install -y MariaDB-server MariaDB-client galera

Configure the yum installation sources: a base source and the MariaDB/Galera source.

The base yum source can be served from the mounted installation ISO.

Set up the MariaDB yum source and install from it (on all nodes).

Edit the yum source file:

vi /etc/yum.repos.d/MariaDB.repo

[MariaDB]
name = MariaDB
baseurl = http://yum.mariadb.org/10.3.5/centos74-amd64
gpgkey = https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck = 1
enabled = 0

When installing the galera package you also need to resolve its dependency boost-program-options.x86_64 (install it directly from the yum source).

Method 2 (rpm packages, on all three nodes): download the rpm packages from the Internet:

Galera-25.3.23-1.rhel7.el7.centos.x86_64.rpm
MariaDB-10.3.5-centos74-x86_64-client.rpm
MariaDB-10.3.5-centos74-x86_64-compat.rpm
MariaDB-10.3.5-centos74-x86_64-common.rpm
MariaDB-10.3.5-centos74-x86_64-server.rpm

Then install them in order:

rpm -ivh MariaDB-10.3.5-centos74-x86_64-compat.rpm --nodeps
rpm -ivh MariaDB-10.3.5-centos74-x86_64-common.rpm
rpm -ivh MariaDB-10.3.5-centos74-x86_64-client.rpm
yum install -y boost-program-options.x86_64      # resolves the galera dependency
rpm -ivh galera-25.3.23-1.rhel7.el7.centos.x86_64.rpm
rpm -ivh MariaDB-10.3.5-centos74-x86_64-server.rpm
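With either method, the result can be verified with rpm, for example:

rpm -qa | grep -i -e mariadb -e galera    # should list the server, client, common, compat and galera packages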

3. MariaDB initialization (perform on all three nodes)

After the installation completes, initialize MariaDB (set the password):

systemctl start mariadb

mysql_secure_installation    (set the MySQL root password as prompted)

systemctl stop mariadb
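Optionally, sanity-check the login before moving on (a sketch; mariadb is stopped again afterwards, since galera must be configured before the cluster is started):

systemctl start mariadb
mysql -uroot -p -e "select version();"    # prompts for the password set above
systemctl stop mariadb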

4. Configure Galera

Master node configuration file server.cnf:

vim /etc/my.cnf.d/server.cnf

[galera]
wsrep_on=ON
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address="gcomm://192.168.153.142,192.168.153.143,192.168.153.144"
wsrep_node_name=node1
wsrep_node_address=192.168.153.142
binlog_format=row
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
wsrep_slave_threads=1
innodb_flush_log_at_trx_commit=0
innodb_buffer_pool_size=120M
wsrep_sst_method=rsync
wsrep_causal_reads=ON

Copy this file to node2 and node3, making sure that wsrep_node_name and wsrep_node_address are changed to the hostname and IP of the corresponding node, as shown below.
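For example, on node2 only these two lines change (node3 analogously uses node3 and 192.168.153.144):

wsrep_node_name=node2
wsrep_node_address=192.168.153.143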

5. Start the cluster service

Bootstrap the first node of the MariaDB Galera Cluster:

[root@node1 ~]# /bin/galera_new_cluster

The remaining two nodes are started as follows (the same on node3):

[root@node2 ~]# systemctl start mariadb
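Optionally, verify each node's service and make it start at boot (standard systemd commands; note that after a full-cluster shutdown one node must still be bootstrapped first, as in step 10):

systemctl status mariadb    # should report active (running)
systemctl enable mariadb    # start automatically on boot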

Check the cluster status (the cluster service listens on ports 4567 and 3306):

[root@node1 ~]# netstat -tulpn | grep -e 4567 -e 3306

tcp        0      0 0.0.0.0:4567       0.0.0.0:*       LISTEN      3557/mysqld
tcp6       0      0 :::3306            :::*            LISTEN      3557/mysqld

6. Verify the cluster status:

Execute on node1:

[root@node1 ~]# mysql -uroot -p    ## enter the database

Connect to MariaDB and check whether the galera plugin is enabled:

MariaDB [(none)]> show status like "wsrep_ready";
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| wsrep_ready   | ON    |
+---------------+-------+
1 row in set (0.004 sec)

Current number of machines in the cluster:

MariaDB [(none)]> show status like "wsrep_cluster_size";
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 3     |
+--------------------+-------+
1 row in set (0.001 sec)
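To read the same value from every node in one go, a sketch assuming password-less SSH between the nodes and PASS as a placeholder for the root password:

for h in node1 node2 node3; do
  echo "== $h =="
  ssh "$h" "mysql -uroot -p'PASS' -e 'show status like \"wsrep_cluster_size\";'"
done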

View the full cluster status:

MariaDB [(none)]> show status like "wsrep%";
+------------------------------+----------------------------------------------------------------+
| Variable_name                | Value                                                          |
+------------------------------+----------------------------------------------------------------+
| wsrep_apply_oooe             | 0.000000                                                       |
| wsrep_apply_oool             | 0.000000                                                       |
| wsrep_apply_window           | 1.000000                                                       |
| wsrep_causal_reads           | 14                                                             |
| wsrep_cert_deps_distance     | 1.200000                                                       |
| wsrep_cert_index_size        | 3                                                              |
| wsrep_cert_interval          | 0.000000                                                       |
| wsrep_cluster_conf_id        | 22                                                             |
| wsrep_cluster_size           | 3                                                              | ## cluster members
| wsrep_cluster_state_uuid     | b8ecf355-233a-11e8-825e-bb38179b0eb4                           | ## UUID uniquely identifying the cluster
| wsrep_cluster_status         | Primary                                                        | ## primary (master) component
| wsrep_commit_oooe            | 0.000000                                                       |
| wsrep_commit_oool            | 0.000000                                                       |
| wsrep_commit_window          | 1.000000                                                       |
| wsrep_connected              | ON                                                             | ## whether this node is connected
| wsrep_desync_count           | 0                                                              |
| wsrep_evs_delayed            |                                                                |
| wsrep_evs_evict_list         |                                                                |
| wsrep_evs_repl_latency       | 0/0/0/0/0                                                      |
| wsrep_evs_state              | OPERATIONAL                                                    |
| wsrep_flow_control_paused    | 0.000000                                                       |
| wsrep_flow_control_paused_ns | 0                                                              |
| wsrep_flow_control_recv      | 0                                                              |
| wsrep_flow_control_sent      | 0                                                              |
| wsrep_gcomm_uuid             | 0eba3aff-2341-11e8-b45a-f277db2349d5                           |
| wsrep_incoming_addresses     | 192.168.153.142:3306,192.168.153.143:3306,192.168.153.144:3306 | ## connected database addresses
| wsrep_last_committed         | 9                                                              | ## sql commit record
| wsrep_local_bf_aborts        | 0                                                              | ## local transactions aborted mid-execution
| wsrep_local_cached_downto    | 5                                                              |
| wsrep_local_cert_failures    | 0                                                              | ## local failed transactions
| wsrep_local_commits          | 4                                                              | ## locally committed sql
| wsrep_local_index            | 0                                                              |
| wsrep_local_recv_queue       | 0                                                              |
| wsrep_local_recv_queue_avg   | 0.057143                                                       |
| wsrep_local_recv_queue_max   | 2                                                              |
| wsrep_local_recv_queue_min   | 0                                                              |
| wsrep_local_replays          | 0                                                              |
| wsrep_local_send_queue       | 0                                                              | ## local send queue
| wsrep_local_send_queue_avg   | 0.000000                                                       | ## average send queue length
| wsrep_local_send_queue_max   | 1                                                              |
| wsrep_local_send_queue_min   | 0                                                              |
| wsrep_local_state            | 4                                                              |
| wsrep_local_state_comment    | Synced                                                         |
| wsrep_local_state_uuid       | b8ecf355-233a-11e8-825e-bb38179b0eb4                           | ## cluster ID
| wsrep_protocol_version       | 8                                                              |
| wsrep_provider_name          | Galera                                                         |
| wsrep_provider_vendor        | Codership Oy                                                   |
| wsrep_provider_version       | 25.3.23(r3789)                                                 |
| wsrep_ready                  | ON                                                             | ## whether the plugin is in use
| wsrep_received               | 35                                                             | ## number of data copies received
| wsrep_received_bytes         | 5050                                                           |
| wsrep_repl_data_bytes        | 1022                                                           |
| wsrep_repl_keys              | 14                                                             |
| wsrep_repl_keys_bytes        | 232                                                            |
| wsrep_repl_other_bytes       | 0                                                              |
| wsrep_replicated             | 5                                                              | ## number of writesets replicated out
| wsrep_replicated_bytes       | 1600                                                           | ## bytes replicated out
| wsrep_thread_count           | 2                                                              |
+------------------------------+----------------------------------------------------------------+
58 rows in set (0.003 sec)

View the connected hosts:

MariaDB [(none)]> show status like "wsrep_incoming_addresses";
+--------------------------+----------------------------------------------------------------+
| Variable_name            | Value                                                          |
+--------------------------+----------------------------------------------------------------+
| wsrep_incoming_addresses | 192.168.153.142:3306,192.168.153.143:3306,192.168.153.144:3306 |
+--------------------------+----------------------------------------------------------------+
1 row in set (0.002 sec)

7. Test whether cluster MariaDB data is synchronized

MariaDB [(none)]> create database lizk;
Query OK, 1 row affected (0.010 sec)

MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| china              |
| hello              |
| hi                 |
| information_schema |
| lizk               |
| mysql              |
| performance_schema |
| test               |
+--------------------+
8 rows in set (0.001 sec)

You can see that the lizk database has been synchronized to the other two nodes.
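A fuller round-trip check writes on one node and reads on the others. A sketch, where lizk.t1 is a hypothetical table and PASS a placeholder for the root password:

[root@node1 ~]# mysql -uroot -p'PASS' -e 'create table lizk.t1 (id int); insert into lizk.t1 values (1);'
[root@node2 ~]# mysql -uroot -p'PASS' -e 'select * from lizk.t1;'    # should return the row written on node1
[root@node3 ~]# mysql -uroot -p'PASS' -e 'drop table lizk.t1;'       # cleanup replicates back to all nodes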

8. Handling split-brain (simulated)

The following simulates split-brain: packet loss caused by network jitter makes two nodes lose contact with the cluster. Execute on the 192.168.153.143 and 192.168.153.144 nodes:

iptables -A INPUT -p tcp --sport 4567 -j DROP

iptables -A INPUT -p tcp --dport 4567 -j DROP

These commands block communication on port 4567, which wsrep uses for synchronous replication.
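To undo just these two rules later, instead of flushing the whole table as in the recovery below, the matching deletes would be:

iptables -D INPUT -p tcp --sport 4567 -j DROP
iptables -D INPUT -p tcp --dport 4567 -j DROP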

View on the 192.168.153.142 node:

MariaDB [(none)]> show status like "ws%";
+------------------------------+--------------------------------------+
| Variable_name                | Value                                |
+------------------------------+--------------------------------------+
| wsrep_apply_oooe             | 0.000000                             |
| wsrep_apply_oool             | 0.000000                             |
| wsrep_apply_window           | 1.000000                             |
| wsrep_causal_reads           | 16                                   |
| wsrep_cert_deps_distance     | 1.125000                             |
| wsrep_cert_index_size        | 3                                    |
| wsrep_cert_interval          | 0.000000                             |
| wsrep_cluster_conf_id        | 18446744073709551615                 |
| wsrep_cluster_size           | 1                                    |
| wsrep_cluster_state_uuid     | b8ecf355-233a-11e8-825e-bb38179b0eb4 |
| wsrep_cluster_status         | non-Primary                          |
+------------------------------+--------------------------------------+

Split-brain has now occurred, and this node, being non-Primary, cannot execute any commands.

To resolve this, you can execute:

set global wsrep_provider_options="pc.bootstrap=true";

This command forces the node on which split-brain occurred back into a Primary component.

Verify:

MariaDB [(none)]> set global wsrep_provider_options="pc.bootstrap=true";
Query OK, 0 rows affected (0.015 sec)

MariaDB [(none)]> select @@wsrep_node_name;
+-------------------+
| @@wsrep_node_name |
+-------------------+
| node1             |
+-------------------+
1 row in set (0.478 sec)

Finally, restore nodes 192.168.153.143 and 192.168.153.144 by cleaning up the iptables rules (mine is a test environment, so flushing everything is fine; in a production environment delete only the two rules added above):

[root@node3 mysql]# iptables -F

Verify after recovery:

MariaDB [(none)]> show status like "wsrep_cluster_size";
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 3     |
+--------------------+-------+
1 row in set (0.001 sec)

9. Simulate a failure: take two of the cluster's nodes down, then check whether the data can still synchronize after the service is restarted.

Stop mariadb on 192.168.153.143 and 192.168.153.144:

[root@node2 mysql]# systemctl stop mariadb

Insert data on the 192.168.153.142 node and check it:

MariaDB [test]> select * from test1;
+------+
| id   |
+------+
|    2 |
|    2 |
|    1 |
|    3 |
+------+
4 rows in set (0.007 sec)

Now restart the other two nodes in the cluster and check data consistency; their data should match the master node's.
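For example, on node2 after the restart (PASS again a placeholder for the root password):

[root@node2 mysql]# systemctl start mariadb
[root@node2 mysql]# mysql -uroot -p'PASS' -e 'select * from test.test1;'    # returns the same 4 rows as on node1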

10. Exception handling: after a sudden power outage in the data center, all Galera hosts power off abnormally, and once power returns the Galera cluster service fails to start normally. How should this be handled?

Step 1: start the mariadb service on the Galera cluster's group (bootstrap) host.

Step 2: start the mariadb service on the Galera cluster's member hosts.

Exception handling: what if the mysql service on both the group host and the member hosts of the Galera cluster will not start?

Solution 1:

Step 1: delete the /var/lib/mysql/grastate.dat state file on the galera group host, then start the service with /bin/galera_new_cluster. It starts normally; log in and check the wsrep status.

Step 2: delete the /var/lib/mysql/grastate.dat state file on each galera member host, then restart the service with systemctl restart mariadb. It starts normally; log in and check the wsrep status.

Solution 2:

Step 1: change the 0 in the /var/lib/mysql/grastate.dat state file on the galera group host (the safe_to_bootstrap field) to 1, then start the service with /bin/galera_new_cluster. It starts normally; log in and check the wsrep status.

Step 2: change the 0 in the /var/lib/mysql/grastate.dat state file on each galera member host to 1, then restart the service with systemctl restart mariadb. It starts normally; log in and check the wsrep status.
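For reference, grastate.dat is a short text file. A representative example is sketched below (the uuid and seqno values vary per cluster; the field that solution 2 flips is safe_to_bootstrap):

# GALERA saved state
version: 2.1
uuid:    b8ecf355-233a-11e8-825e-bb38179b0eb4
seqno:   -1
safe_to_bootstrap: 0

A one-liner sketch for the edit in solution 2:

sed -i 's/^safe_to_bootstrap: 0/safe_to_bootstrap: 1/' /var/lib/mysql/grastate.dat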
