How to implement the deployment of Percona Mysql Galera Multi-read-write Cluster

2025-01-16 Update From: SLTechnology News&Howtos


This article explains how to deploy a Percona MySQL (XtraDB Cluster) Galera multi-read-write cluster. If you are interested, read on; it should be of some reference value to you.

1. Deploy MySQL:

yum install https://www.percona.com/redir/downloads/percona-release/redhat/latest/percona-release-0.1-6.noarch.rpm -y   # install the Percona repository file

yum install Percona-XtraDB-Cluster-57 percona-toolkit -y   # install the database

Modify the configuration file /etc/my.cnf:

[mysqld]
datadir = /storage/DBdata                # directory where the database is stored
pid-file = /usr/local/mysql/mysqld.pid
port = 3306
socket = /var/lib/mysql/mysql.sock
user = mysql

# transaction isolation level
transaction-isolation = READ-COMMITTED
default_storage_engine = InnoDB
default-time-zone = '+8:00'              # time zone used in the mysql logs
log_timestamps = SYSTEM

# InnoDB
innodb_file_per_table = 1
innodb_flush_method = O_DIRECT

# name-resolve
skip-host-cache
explicit_defaults_for_timestamp = true

# character-set
character-set-server = utf8
collation-server = utf8_general_ci

sql_mode = NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
max_connections = 5000                   # maximum number of connections allowed

# fix up table open cache too small
table_open_cache = 1024
open_files_limit = 65535
ignore-db-dir = lost+found

# LOG
log-error = /var/log/mysqld.log
general_log_file = /var/log/mysql-general.log
general_log = OFF                        # set general_log=ON while debugging; turn it off in normal operation
slow_query_log = ON
long_query_time = 5
slow_query_log_file = /var/log/mysql-slow.log
log_queries_not_using_indexes = false

server-id = 1                            # the server ID must be different on each host in the cluster
log-bin = /usr/local/mysql/binlogs/mysql-bin   # binlog, preferably on a different physical disk from the data directory; used for data synchronization
expire_logs_days = 10                    # retention time: a node that rejoins after more than 10 days of downtime gets a full data transfer; one that rejoins on day nine only synchronizes the changes made since it went down
skip-name-resolve
relay-log = mysqld-relay-bin
innodb_support_xa = 1
relay_log_purge = 0
innodb_print_all_deadlocks = on
sync_binlog = 2
# slave-skip-errors = all

# optimize configuration
join_buffer_size = 2M
sort_buffer_size = 2M
key_buffer_size = 512M
read_rnd_buffer_size = 2M
query_cache_type = 1
query_cache_size = 6M
tmp_table_size = 512M
innodb_thread_concurrency = 64
innodb_buffer_pool_size = 10G
max_binlog_size = 1G
max_relay_log_size = 1G
innodb_lock_wait_timeout = 120
# innodb_data_file_path = ibdata1:1G:autoextend

# Audit log configuration: keep these commented out for the first start; after startup, go into mysql to install the audit plugin, then enable them.
# audit_log_file = '/var/log/mysql-audit.json'
# audit_log_format = JSON
# audit_log_policy = QUERIES
# audit_log_rotate_on_size = 4096
# audit_log_rotations = 7
# audit_log_include_commands = 'create,update,insert,drop,grant,delete,select'   # SQL command types recorded in the audit log

# New configuration
innodb_log_file_size = 20M
binlog_stmt_cache_size = 1M
binlog_cache_size = 1M
table_open_cache = 5000

# Start For Mysql Zabbix plugin
binlog_stmt_cache_size = 1M
binlog_cache_size = 1M
# End For Mysql Zabbix plugin

log-queries-not-using-indexes = false
gtid_mode = ON
enforce_gtid_consistency = on
master_info_repository = TABLE
relay_log_info_repository = TABLE
binlog_checksum = NONE
log_slave_updates = ON

# galera cluster configuration
wsrep_provider = /usr/lib64/libgalera_smm.so
wsrep_cluster_address = gcomm://192.168.2.181,192.168.2.183,192.168.2.185   # the hosts in the cluster; fill in according to your environment
wsrep_slave_threads = 16
innodb_autoinc_lock_mode = 2
wsrep_node_address = 192.168.2.181       # local IP used for data synchronization; it may be on a different network segment from the service address
wsrep_sst_method = xtrabackup-v2         # synchronization (SST) method
pxc_strict_mode = MASTER                 # strictness mode
wsrep_cluster_name = mysql-cluster       # cluster name
wsrep_sst_auth = "sstuser:s3creV9228s@#%"   # account and password used for synchronization checks; grant it localhost access only

[client]
socket = /var/lib/mysql/mysql.sock
port = 3306

After configuring the first host, start it with service mysql@bootstrap start. Note that this command is used only when initializing (bootstrapping) the cluster; if the cluster is already running, start nodes normally with service mysql start, including when recovering hosts after downtime.

On CentOS 7 and later, the equivalent commands are systemctl start mysql@bootstrap.service and systemctl start mysql.
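When every node has gone down (for example after a power outage), PXC 5.7 records a safe_to_bootstrap flag in the grastate.dat file under the data directory, and only the node holding the flag should be bootstrapped. A minimal helper to check it might look like this (the function name is my own; the file format follows the Percona documentation):

```shell
#!/bin/bash
# Return 0 if the grastate.dat at the given path marks this node as
# safe to bootstrap the cluster, non-zero otherwise.
safe_to_bootstrap() {
    grep -q '^safe_to_bootstrap: *1' "$1"
}
```

Run it against the grastate.dat in your data directory (here /storage/DBdata/grastate.dat) before deciding which node gets service mysql@bootstrap start.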

Create the synchronization account after startup:

CREATE USER 'sstuser'@'localhost' IDENTIFIED BY 's3creV9228s@#%';
GRANT RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'sstuser'@'localhost';

Pay attention to the ownership of the database directory and directories such as the binlog directory: they must belong to mysql. Fix ownership with chown -R mysql:mysql <directory>. For specific startup errors, check the log file.

If you migrated data from another version of the database, run mysql_upgrade -u root -p to upgrade the database files; otherwise some tables may not be accessible.

Once the log files look normal and the database can be accessed, the first host is up.

The next step is to add new nodes.

Install the software on the other hosts, create the relevant directories, and copy the configuration file /etc/my.cnf. Change the server-id and wsrep_node_address parameters in the copied file. Then start the database with service mysql start or systemctl start mysql; the newly added node automatically synchronizes all data from the primary node. You can query the performance_schema.pxc_cluster_view table on the primary node to see the status of the cluster nodes: SYNCED means available, JOINING means the node is joining, and DONOR means it is serving a state transfer. See the official Percona documentation for details.
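As a quick reference for reading that view, here is a small helper that translates the common STATUS values (the state names follow the Percona documentation; the function itself is just an illustrative sketch):

```shell
#!/bin/bash
# Translate a pxc_cluster_view STATUS value into a short description.
node_state() {
    case "$1" in
        SYNCED)  echo "available" ;;
        JOINING) echo "joining the cluster" ;;
        DONOR)   echo "serving a state transfer to a joiner" ;;
        *)       echo "unknown state: $1" ;;
    esac
}
```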

While a new node is joining, monitor its startup log and troubleshoot any errors.

If a joining node reports the error

WSREP_SST: [ERROR] xtrabackup_checkpoints missing, failed innobackupex/SST on donor

check the innobackup.backup.log file under the donor's database directory for the specific reason.

After all nodes have joined the cluster, stop the first node with service mysql@bootstrap stop or systemctl stop mysql@bootstrap.service, then start it again with the normal command. This completes the configuration of the Percona Galera cluster.

2. Deploy highly available load balancing with haproxy and keepalived

yum install haproxy keepalived -y installs the packages. Two hosts are needed here, and they do not need high specifications; it is best to deploy them on hosts other than the database nodes. Plan a floating (virtual) IP address for high availability.

Edit /etc/haproxy/haproxy.cfg:

global
    log 127.0.0.1 local2
    chroot /var/lib/haproxy
    pidfile /var/run/haproxy.pid
    maxconn 50000                  # maximum number of connections for the entire service
    user root
    group root
    daemon
    # turn on stats unix socket; required for zabbix monitoring
    stats socket /var/lib/haproxy/stats

defaults
    mode http
    log global
    option tcplog
    option dontlognull
    option redispatch
    retries 3
    timeout http-request 10s
    timeout queue 1m
    timeout connect 10s
    timeout server 60m             # for databases configure about 1 hour; 60m has the same effect as 1h
    timeout client 60m             # likewise about 1 hour for database traffic
    timeout http-keep-alive 10s
    timeout check 3s

listen stats                       # web monitoring configuration
    bind :9099
    stats uri /nstatus
    stats enable
    stats realm HAProxy\ Stats\ Page
    stats auth monitor:Hawatch3.tz.Com
    stats admin if TRUE

listen mysql-cluster               # load-balancer service configuration
    bind 0.0.0.0:3306
    mode tcp
    fullconn 13500                 # maximum total backend connections provided
    balance leastconn              # least-connections mode; tested to be the most suitable for mysql
    option httpchk
    option tcplog
    server db01 192.168.2.181:3306 maxqueue 30 maxconn 4500 check port 9200 inter 1000 rise 3 fall 3
    server db02 192.168.2.183:3306 maxqueue 30 maxconn 4500 check port 9200 inter 1000 rise 3 fall 3
    server db03 192.168.2.185:3306 maxqueue 30 maxconn 4500 check port 9200 inter 1000 rise 3 fall 3

In the server lines, db01 is an arbitrary name; each line represents one backend host, and names must not repeat. 192.168.2.181:3306 is the host's domain name or IP address plus port. maxqueue is the maximum queue for backend connections, protecting a backend from being overwhelmed when it goes down or cannot keep up; maxconn is the per-server connection limit and needs to be lower than the database's actual connection limit; check port 9200 probes a separate port to determine whether the backend service is healthy.
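Note that the fullconn value above appears to be simply the per-server maxconn budget summed over the three backends; a quick sanity check of the arithmetic:

```shell
#!/bin/bash
# Three backends, each capped at maxconn 4500, add up to fullconn 13500.
servers=3
per_server_maxconn=4500
echo $((servers * per_server_maxconn))
```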

With the above configuration you can start the haproxy service normally: service haproxy start or systemctl start haproxy. Then open the web status page to view the backend states. At this point the service may not yet work, because the mysql backends still need some configuration.

Install xinetd on the backend database servers:

yum install xinetd -y

/etc/xinetd.d/mysqlchk is the configuration file that exposes the check port, and /usr/bin/clustercheck is the script that performs the check. Two parameters in the script need to be modified:

MYSQL_USERNAME="${1-sstuser}"
MYSQL_PASSWORD="${2-s3creV9228s@#%}"

The account and password used to check the status can be the same as the account used for database synchronization.
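For reference, the /etc/xinetd.d/mysqlchk file shipped with the Percona packages looks roughly like this (exact contents may differ between versions; port 9200 is what the haproxy check above probes):

```
service mysqlchk
{
    disable         = no
    flags           = REUSE
    socket_type     = stream
    port            = 9200
    wait            = no
    user            = nobody
    server          = /usr/bin/clustercheck
    log_on_failure  += USERID
    only_from       = 0.0.0.0/0
    per_source      = UNLIMITED
}
```

You may also need a mysqlchk entry (9200/tcp) in /etc/services for xinetd to register the service.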

Then start xinetd: service xinetd start or systemctl start xinetd. Once it is up, the backends will show as available on the haproxy monitoring page.

Haproxy logging requires the rsyslog service. Edit /etc/rsyslog.conf:

$ModLoad imudp
$UDPServerRun 514
*.info;mail.none;authpriv.none;cron.none;local2.none    /var/log/messages
local2.*    /var/log/haproxy.log

The local2 facility above must match the one in the haproxy configuration file. After configuring, restart with service rsyslog restart; the log then appears in /var/log/haproxy.log.

Copy the haproxy configuration to the other host, configure rsyslog there the same way and restart it, then start haproxy so that both hosts are running.

Configure keepalived for high availability. Edit /etc/keepalived/keepalived.conf:

global_defs {
    router_id mysql
}

vrrp_script check_ha {
    script "/opt/share/selfTools/check_haproxy"   # health-check script; you write it yourself: it returns 0 on success and non-zero on failure. The directory is up to you.
    interval 2
}

vrrp_instance ha01 {       # this name must be different on each host
    state BACKUP           # set BACKUP on both hosts to avoid application instability caused by frequent switching
    interface eth0         # name of the bound network interface
    virtual_router_id 22   # must be the same on both hosts
    priority 100
    advert_int 1
    nopreempt
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        check_ha
    }
    virtual_ipaddress {
        192.168.2.191/24 dev eth0 scope global   # floating address configuration
    }
}

Script /opt/share/selfTools/check_haproxy:

#!/bin/bash
nc -z 127.0.0.1 3306          # check whether the local haproxy service port is listening
status=$?
nc -z 127.0.0.1 9099          # check whether the haproxy status/monitoring port is listening
status=$(($? + $status))
exit $status                  # return the script's exit value

The nc utility must be installed on the host: yum install nc -y
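If installing nc is not an option, bash itself can probe a TCP port via its /dev/tcp pseudo-device; a hypothetical drop-in for the two nc -z calls above:

```shell
#!/bin/bash
# Return 0 if a TCP connection to host ($1) and port ($2) succeeds,
# non-zero otherwise. Uses bash's built-in /dev/tcp redirection,
# so no extra package is needed.
port_open() {
    (echo > "/dev/tcp/$1/$2") 2>/dev/null
}
```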

Start keepalived with service keepalived start, then verify the floating address. Copy the configuration file to the other host and change only the vrrp_instance name (ha01); leave everything else the same and start the keepalived service there as well.

At this point the configuration is complete; business traffic can access the floating address and is distributed to the backend database servers.

3. Monitoring services

For database cluster monitoring, install the percona-zabbix-templates-1.1.8-1.noarch and php packages.

Copy the userparameter_percona_mysql.conf file to the /etc/zabbix/zabbix_agentd.d/ directory.

Edit /var/lib/zabbix/percona/scripts/ss_get_mysql_stats.php, fill in the mysql monitoring account and password, and save.

Check whether Include=/etc/zabbix/zabbix_agentd.d/*.conf is configured in the zabbix agent configuration file; if not, add it.

The template needs to be imported into the zabbix monitor; unfortunately the template file cannot be uploaded here.

Percona's monitoring scripts cannot monitor the status of the cluster itself; you need to write your own. Add two entries to the zabbix agent configuration file:

UserParameter=Galera.discovery,/opt/share/shell/galera/galera_discovery.sh
UserParameter=Galera.[*],/opt/share/shell/galera/galera_status.sh $1

Script /opt/share/shell/galera/galera_discovery.sh content:

#!/bin/bash
sql="show status like 'wsrep%';"
galeras=($(echo "set names utf8;$sql" | HOME=/var/lib/zabbix mysql -N | awk '{print $1}'))
text=(wsrep_local_state_uuid wsrep_flow_control_interval wsrep_local_state_comment wsrep_incoming_addresses wsrep_ist_receive_status wsrep_evs_repl_latency wsrep_evs_state wsrep_gcomm_uuid wsrep_cluster_state_uuid wsrep_cluster_status wsrep_connected wsrep_provider_name wsrep_provider_vendor wsrep_provider_version wsrep_ready)
float=(wsrep_cert_interval wsrep_commit_window wsrep_commit_oool wsrep_commit_oooe wsrep_apply_window wsrep_apply_oool wsrep_apply_oooe wsrep_cert_deps_distance wsrep_flow_control_paused wsrep_local_recv_queue_avg wsrep_local_send_queue_avg)
printf '{"data":['
i=1
for option in ${galeras[@]}
do
    if [[ $(echo "${text[@]}" | grep "\b$option\b") ]]; then
        printf '{"{#TEXT}":"'$option
    elif [[ $(echo "${float[@]}" | grep "\b$option\b") ]]; then
        printf '{"{#FLOAT}":"'$option
    else
        printf '{"{#OTNAME}":"'$option
    fi
    if [ $i -lt ${#galeras[@]} ]; then
        printf '"},'
    else
        printf '"}]}\n'
    fi
    i=$(($i + 1))
done
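The JSON this script prints follows the Zabbix low-level-discovery format, {"data":[{"{#MACRO}":"value"},...]}. The construction logic, distilled into a self-contained sketch that needs no MySQL (the function name and the single-macro simplification are mine):

```shell
#!/bin/bash
# Build a Zabbix LLD JSON document for one macro name and a list of values.
build_lld() {
    local macro="$1"; shift
    local out='{"data":['
    local i=1 total=$#
    for item in "$@"; do
        out+='{"'"$macro"'":"'"$item"'"}'
        [ "$i" -lt "$total" ] && out+=','
        i=$((i + 1))
    done
    out+=']}'
    printf '%s\n' "$out"
}
```

zabbix_agentd then expands each discovered entry into concrete items via the template's item prototypes.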

Script /opt/share/shell/galera/galera_status.sh content:

#!/bin/bash
sql="show status like '$1%';"
if [ -z $2 ]; then
    echo "set names utf8;$sql" | HOME=/opt/share/shell/galera mysql -N | awk '{print $2}'
else
    echo "set names utf8;$sql" | HOME=/opt/share/shell/galera mysql -h$2 -N | awk '{print $2}'
fi

4. Extended solutions

1. After HAProxy is introduced, everything in mysql that relates to client IP shows the haproxy server's address instead.

To fix this, enable haproxy's transparent proxying so that the client IP is passed through to the real server. But once a packet arrives at the real server carrying the client's real IP, the reply cannot go straight back to the client, or the TCP connection will never be established; the reply packets must return through the haproxy server, and haproxy must intercept them and hand them to its own process to complete the TCP handshake. See the diagram for details.

All the steps in the figure must be completed for the connection to be established. The specific configuration:

1) Configure haproxy: add source 0.0.0.0 usesrc clientip to the listen section configured earlier.

2) Configure packet interception.

Add the following to /etc/sysctl.conf:

net.ipv4.ip_forward = 1
net.ipv4.conf.default.rp_filter = 2
net.ipv4.conf.all.rp_filter = 2
net.ipv4.conf.eth0.rp_filter = 0

Execute the commands below; they must also be made to run at boot. The iptables rules can be saved and reloaded (for example with iptables-save / iptables-restore), while the ip rule and ip route entries need to be re-created on every boot.

iptables -t mangle -N MYSQL   # create a separate chain named MYSQL in the mangle table
iptables -t mangle -A PREROUTING -m state --state RELATED,ESTABLISHED -j CONNMARK --restore-mark
iptables -t mangle -A PREROUTING -m mark ! --mark 0 -j ACCEPT
iptables -t mangle -A PREROUTING -p tcp -m socket -j MYSQL   # intercept tcp packets in PREROUTING that match a local socket and send them to the MYSQL chain
iptables -t mangle -A MYSQL -j MARK --set-mark 1             # mark packets entering the MYSQL chain with 1
iptables -t mangle -A MYSQL -j ACCEPT                        # accept all packets through
ip rule add fwmark 1 lookup 3306   # the mark lets the kernel distinguish these packets; create routing table 3306 for packets marked 1
ip route add local 0.0.0.0/0 dev lo table 3306   # in table 3306, route packets to the local loopback by default, so haproxy can pick them up for processing

3) The two steps above are configured on the HAProxy server. This step is configured on the backend real servers:

iptables -t mangle -N MYSQL   # add a chain, same meaning as above
iptables -t mangle -A OUTPUT -p tcp --sport 3306 -j MYSQL   # here packets must be intercepted in the OUTPUT chain; other chains will not work (see the Linux packet-forwarding diagram below). Match source port 3306, since this is the outgoing direction
iptables -t mangle -A MYSQL -j MARK --set-mark 1   # mark the packets
iptables -t mangle -A MYSQL -j ACCEPT              # accept all packets through
ip rule add fwmark 1 table 3306   # create a routing table, same as above
ip route add default via 192.168.2.191 table 3306   # add a default route whose next hop is HAPROXY

Actually, the business can be made to work without intercepting packets at all: just add one default route with the haproxy server as next hop (ip route add default via <haproxy address>). That works, but then other services such as ssh can no longer connect to the server normally.

4) This method can replace the one in section 3); it is described separately mainly so the two can be compared. The following commands are likewise executed directly on the database servers:

iptables -t mangle -A PREROUTING -m state --state RELATED,ESTABLISHED -j CONNMARK --restore-mark   # established connections are re-marked automatically
iptables -t mangle -A PREROUTING -m mark ! --mark 0 -j ACCEPT   # already-marked packets are accepted directly without entering the matches below, improving efficiency
iptables -t mangle -A PREROUTING -p tcp -m tcp --dport 3306 -m mac --mac-source 00:50:56:a0:13:a2 -j MARK --set-mark 1   # the MAC address here is the NIC address of the first HAPROXY server
iptables -t mangle -A PREROUTING -p tcp -m tcp --dport 3306 -m mac --mac-source 00:50:56:81:4F:08 -j MARK --set-mark 2   # there are two haproxy hosts, so two marks are needed
iptables -t mangle -A PREROUTING -j CONNMARK --save-mark   # save the mark on the connection so all its packets keep it
iptables -t mangle -A OUTPUT -j CONNMARK --restore-mark    # everything above handles the incoming direction of client traffic; without this line the replies cannot be routed out. In OUTPUT, restore the mark recorded from the incoming direction so the route can match the outgoing packets correctly

ip rule add fwmark 1 table 3301
ip route add default via 192.168.2.150 table 3301   # route the marked packets; the IP is the real NIC address of the haproxy host whose MAC is 00:50:56:a0:13:a2, not the floating address
ip rule add fwmark 2 table 3302
ip route add default via 192.168.2.153 table 3302   # same as above; the IP is the host with MAC address 00:50:56:81:4F:08

The floating address is not used here; wherever the floating address currently sits, access through either host connects to the database normally.

With the method in section 3), clients cannot connect to the database directly, and without the floating address they normally cannot reach the servers at all. With the method in section 4), connections are handled exactly as they would be without the cluster: there is no difference in connectivity or access.

2. Status monitoring of the haproxy servers.

Configure two entries in /etc/zabbix/zabbix_agentd.conf on the haproxy servers:

UserParameter=ha.discovery,/opt/share/shell/zabbix-script/haproxy/haproxy_discovery.sh
UserParameter=Haproxy.[*],/opt/share/shell/zabbix-script/haproxy/haproxy_get.sh $1

haproxy_discovery.sh:

#!/bin/bash
node=($(echo "show stat" | socat /var/lib/haproxy/stats stdio | awk 'BEGIN {FS=","} {print $2}' | sed '/FRONTEND/d;/BACKEND/d;1d;$d'))
options=(qcur qmax scur smax stot status)
printf '{"data":['
i=1
for name in ${node[@]}
do
    n=1
    for op in ${options[@]}
    do
        if [ $n -lt ${#options[@]} ]; then
            printf '{"{#NODE}":"'$name-$op
        else
            printf '{"{#TEXT}":"'$name-$op
        fi
        if [ $i -lt ${#node[@]} ] || [ $n -lt ${#options[@]} ]; then
            printf '"},'
        else
            printf '"}]}\n'
        fi
        n=$(($n + 1))
    done
    i=$(($i + 1))
done

haproxy_get.sh:

#!/bin/bash
cd $(dirname $0)
values=($(echo $1 | awk 'BEGIN {FS="-"} {print $1" "$2}'))
case ${values[1]} in
    "qcur")
        grep ${values[0]} status.txt | awk '{print $2}' ;;
    "qmax")
        grep ${values[0]} status.txt | awk '{print $3}' ;;
    "scur")
        grep ${values[0]} status.txt | awk '{print $4}' ;;
    "smax")
        grep ${values[0]} status.txt | awk '{print $5}' ;;
    "stot")
        grep ${values[0]} status.txt | awk '{print $6}' ;;
    "status")
        grep ${values[0]} status.txt | awk '{print $7}' ;;
esac

To reduce the impact on haproxy, create a scheduled task that saves the status data to a file periodically.

haproxy_status.sh:

#!/bin/bash
cd $(dirname $0)
echo "show stat" | socat /var/lib/haproxy/stats stdio | awk 'BEGIN {FS=","} {print $2"\t"$3"\t"$4"\t"$5"\t"$6"\t"$8"\t"$18}' > status.txt
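The awk above relies on haproxy's CSV column order from show stat (pxname, svname, qcur, qmax, scur, smax, slim, stot, ..., with status as the 18th field). A self-contained illustration of the same extraction on one sample CSV line (the sample values are made up):

```shell
#!/bin/bash
# Extract svname, qcur, qmax, scur, smax, stot and status from one
# "show stat" CSV line, tab-separated, exactly as haproxy_status.sh does.
parse_stat_line() {
    echo "$1" | awk 'BEGIN {FS=","} {print $2"\t"$3"\t"$4"\t"$5"\t"$6"\t"$8"\t"$18}'
}
```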

If the socat command is missing, install it with yum. The /var/lib/haproxy/stats socket must be readable by the script; adjust permissions with chmod 606 if necessary.

Zabbix items can be debugged with the zabbix_get command on the zabbix-server host, for example:

zabbix_get -s 192.168.2.150 -k 'Haproxy.[db02-scur]'

where the key is the field name defined on the monitored host.

Please build the corresponding template yourself; the template file cannot be shared here.

That covers the deployment of a Percona MySQL Galera multi-read-write cluster. Hopefully it is helpful; if you want to learn more, keep following our database section.
