
MySQL master-slave replication, read-write separation, high availability cluster construction


I. Service introduction

1.1 Keepalived

Keepalived, as its name suggests, keeps services alive: it addresses single points of failure by automatically switching to another server when one server goes down or fails. Keepalived is built on the VRRP protocol, which provides router redundancy by virtualizing two or more router devices into one virtual device that exposes one or more virtual router IPs, also called floating IPs (VIPs).

1.2 ProxySQL

ProxySQL is a high-performance, high-availability MySQL proxy, used here to provide database proxying and read-write separation.

1.3 Mariadb

Mariadb is a multi-user, multi-threaded SQL database server with a client/server (C/S) architecture. Its transactional storage engine is XtraDB, Percona's variant of InnoDB, which performs very well. MariaDB is maintained by the open source community, licensed under the GPL, and fully compatible with MySQL.

1.4 MHA

MHA (Master HA) is an open source MySQL high-availability program that provides automated master failover for MySQL master-slave replication architectures. When MHA detects a master node failure, it promotes the slave with the most recent data to be the new master, and during that window it obtains additional information from the other nodes to avoid consistency problems. MHA also supports online master switching, that is, switching the master/slave roles on demand.

II. Architectural requirements

2.1 Architectural requirements

Implement MySQL database master-slave replication, master high availability, read-write separation, and ProxySQL high availability.

2.2 System version selection

OS: CentOS 7.3

Kernel: 3.10.0-514.el7.x86_64

Arch: x86_64

2.3 Deployment environment

The six hosts used below (addresses match the hosts file in section 4.6):

node1 192.168.0.51 ProxySQL + Keepalived + MHA Manager
node2 192.168.0.52 ProxySQL + Keepalived + MHA Manager
node3 192.168.0.53 MariaDB master
node4 192.168.0.54 MariaDB slave
node5 192.168.0.55 MariaDB slave
node6 192.168.0.56 MariaDB slave
VIP   192.168.0.100 (floating IP managed by Keepalived)

III. View the Linux system environment

3.1 View server hardware information

dmidecode | grep "Product Name"

3.2 View the CPU model

lscpu | grep "Model name"

3.3 View the number of CPUs

lscpu | grep "^CPU(s)"

3.4 View memory size

free -h | grep Mem | awk '{print $2}'
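For convenience, the four checks above can be combined into one small script; a sketch (dmidecode requires root):

#!/bin/bash
# print a one-screen hardware summary (run as root for dmidecode)
echo "Product: $(dmidecode -s system-product-name)"
echo "CPU    : $(lscpu | awk -F: '/Model name/ {gsub(/^ +/, "", $2); print $2}')"
echo "Cores  : $(lscpu | awk -F: '/^CPU\(s\)/ {gsub(/^ +/, "", $2); print $2}')"
echo "Memory : $(free -h | awk '/^Mem/ {print $2}')"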

IV. System initialization

4.1 Clear firewall rules

iptables -F

4.2 Turn off the firewall or define firewall rules (turned off here for experimental convenience)

systemctl stop firewalld.service
systemctl disable firewalld.service

4.3 Disable SELinux

sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
setenforce 0

4.4 Turn off NetworkManager

systemctl stop NetworkManager
systemctl disable NetworkManager

4.5 Time synchronization

ntpdate 172.16.0.1
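ntpdate is a one-shot sync; to keep the clocks aligned for the duration of the experiment, a cron entry can repeat it (a sketch; 172.16.0.1 is the time server used above):

# /etc/crontab entry: resync every 5 minutes against the same server
*/5 * * * * root /usr/sbin/ntpdate 172.16.0.1 &> /dev/null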

4.6 Modify the hosts file

192.168.0.51 node1
192.168.0.52 node2
192.168.0.53 node3
192.168.0.54 node4
192.168.0.55 node5
192.168.0.56 node6

4.7 Set hostnames

hostnamectl set-hostname node1   # on node1
hostnamectl set-hostname node2   # on node2
hostnamectl set-hostname node3   # on node3
hostnamectl set-hostname node4   # on node4
hostnamectl set-hostname node5   # on node5
hostnamectl set-hostname node6   # on node6

4.8 Distribute SSH keys (operate on node3)

# create a key pair
[root@node3 ~]# ssh-keygen -t rsa -P ''
# distribute the public key to each host
ssh-copy-id -i .ssh/id_rsa.pub root@node1
ssh-copy-id -i .ssh/id_rsa.pub root@node2
ssh-copy-id -i .ssh/id_rsa.pub root@node3
ssh-copy-id -i .ssh/id_rsa.pub root@node4
ssh-copy-id -i .ssh/id_rsa.pub root@node5
ssh-copy-id -i .ssh/id_rsa.pub root@node6
# copy the private and public key to the other nodes so that all nodes can authenticate to each other by key
[root@node3 ~]# scp .ssh/id_rsa .ssh/id_rsa.pub root@node1:/root/.ssh/
id_rsa                              100% 1675     1.6KB/s   00:00
id_rsa.pub                          100%  392     0.4KB/s   00:00
[root@node3 ~]# scp .ssh/id_rsa .ssh/id_rsa.pub root@node2:/root/.ssh/
[root@node3 ~]# scp .ssh/id_rsa .ssh/id_rsa.pub root@node4:/root/.ssh/
[root@node3 ~]# scp .ssh/id_rsa .ssh/id_rsa.pub root@node5:/root/.ssh/
[root@node3 ~]# scp .ssh/id_rsa .ssh/id_rsa.pub root@node6:/root/.ssh/
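Once the keys are distributed, a quick loop from any node confirms that every host is reachable without a password (BatchMode makes ssh fail instead of prompting if key authentication is broken):

for n in node1 node2 node3 node4 node5 node6; do
    ssh -o BatchMode=yes root@$n hostname
done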

V. Environment deployment

5.1 Master-Slave replication configuration (node3, node4, node5, node6)

1 Install Mariadb (4 nodes)

yum install mariadb mariadb-server -y

2 Mariadb service configuration on the master node

vim /etc/my.cnf.d/server.cnf

[mysqld]
innodb_file_per_table=ON
skip_name_resolve=ON
server_id=1
log_bin=log-bin

3 Start the service

systemctl start mariadb.service

4 Create the master-slave configuration accounts

GRANT ALL PRIVILEGES ON *.* TO 'michael'@'192.168.0.%' IDENTIFIED BY 'password';
GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'repuser'@'192.168.0.%' IDENTIFIED BY 'repass';
FLUSH PRIVILEGES;
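Before wiring up the slaves, you can confirm the two accounts on the master (standard MariaDB statements):

MariaDB [(none)]> SHOW GRANTS FOR 'repuser'@'192.168.0.%';
MariaDB [(none)]> SELECT user, host FROM mysql.user WHERE user IN ('michael', 'repuser');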

5 Mariadb service configuration on the slave nodes (node4, node5, node6)

# node4
[mysqld]
innodb_file_per_table=ON
skip_name_resolve=ON
server_id=11
relay_log=relay-log
read_only=ON

# node5
[mysqld]
innodb_file_per_table=ON
skip_name_resolve=ON
server_id=12
relay_log=relay-log
read_only=ON

# node6
[mysqld]
innodb_file_per_table=ON
skip_name_resolve=ON
server_id=13
relay_log=relay-log
read_only=ON

6 Start the service on all slave nodes

systemctl start mariadb.service

7 View the log files on the master node

MariaDB [(none)]> SHOW MASTER LOGS;
+----------------+-----------+
| Log_name       | File_size |
+----------------+-----------+
| log-bin.000001 |     30331 |
| log-bin.000002 |   1038814 |
| log-bin.000003 |       899 |
+----------------+-----------+
3 rows in set (0.00 sec)

8 Establish the master-slave replication relationship

# on each slave node, point replication at the master, then start the slave threads
# (the CHANGE MASTER TO coordinates come from SHOW MASTER LOGS above)
MariaDB [(none)]> CHANGE MASTER TO MASTER_HOST='192.168.0.53', MASTER_USER='repuser', MASTER_PASSWORD='repass', MASTER_PORT=3306, MASTER_LOG_FILE='log-bin.000003', MASTER_LOG_POS=899;
MariaDB [(none)]> START SLAVE;
Query OK, 0 rows affected (0.00 sec)

# check the slave status to confirm that master-slave replication is configured successfully
# (check every slave node; both Slave_IO_Running and Slave_SQL_Running must be Yes)
MariaDB [(none)]> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.0.53
                  Master_User: repuser
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: log-bin.000003
          Read_Master_Log_Pos: 899
               Relay_Log_File: relay-log.000002
                Relay_Log_Pos: 527
        Relay_Master_Log_File: log-bin.000003
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 899
              Relay_Log_Space: 815
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Master_Server_Id: 1

9 Test master-slave replication

# create the database hellodb on the master node
MariaDB [(none)]> CREATE DATABASE hellodb;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> SHOW DATABASES;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| hellodb            |
| mysql              |
| performance_schema |
| test               |
+--------------------+

# every slave node shows that the hellodb database exists, so the master-slave configuration succeeded
MariaDB [(none)]> SHOW DATABASES;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| hellodb            |
| mysql              |
| performance_schema |
| test               |
+--------------------+
5 rows in set (0.00 sec)
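A slightly stronger test than comparing database lists is to write a row on the master and read it back on a slave; a sketch (the table name t1 is illustrative):

-- on the master
MariaDB [(none)]> CREATE TABLE hellodb.t1 (id INT PRIMARY KEY, note VARCHAR(64));
MariaDB [(none)]> INSERT INTO hellodb.t1 VALUES (1, 'replicated?');
-- on any slave, the row should appear almost immediately
MariaDB [(none)]> SELECT * FROM hellodb.t1;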

5.2 Read-write separation configuration

1 Install Keepalived, ProxySQL, and Mariadb

yum install keepalived mariadb -y
wget -c ftp://172.16.0.1/pub/Sources/7.x86_64/proxysql/proxysql-1.3.6-1-centos7.x86_64.rpm
yum install proxysql-1.3.6-1-centos7.x86_64.rpm -y

2 Configure ProxySQL

[root@node1 keepalived]# cat /etc/proxysql.cnf
datadir="/var/lib/proxysql"

# admin configuration section
admin_variables=
{
    admin_credentials="admin:admin"
    mysql_ifaces="127.0.0.1:6032;/tmp/proxysql_admin.sock"
}

# MySQL variables section
mysql_variables=
{
    threads=4
    max_connections=2048
    default_query_delay=0
    default_query_timeout=36000000
    have_compress=true
    poll_timeout=2000
    interfaces="0.0.0.0:3306;/tmp/proxysql.sock"
    default_schema="information_schema"
    stacksize=1048576
    server_version="5.5.30"
    connect_timeout_server=3000
    monitor_history=600000
    monitor_connect_interval=60000
    monitor_ping_interval=10000
    monitor_read_only_interval=1500
    monitor_read_only_timeout=500
    ping_interval_server=120000
    ping_timeout_server=500
    commands_stats=true
    sessions_sort=true
    connect_retries_on_failure=10
}

# MySQL servers section
mysql_servers =
(
    {
        address = "192.168.0.53"    # required; if port is 0, address is interpreted as a Unix domain socket
        port = 3306
        hostgroup = 0               # master node (writer hostgroup)
        status = "ONLINE"           # default: ONLINE
        weight = 1                  # default: 1
        compression = 0             # default: 0
    },
    {
        address = "192.168.0.54"
        port = 3306
        hostgroup = 1               # slave node (reader hostgroup)
        status = "ONLINE"
        weight = 1
        compression = 0
    },
    {
        address = "192.168.0.55"
        port = 3306
        hostgroup = 1               # slave node
        status = "ONLINE"
        weight = 1
        compression = 0
    },
    {
        address = "192.168.0.56"
        port = 3306
        hostgroup = 1               # slave node
        status = "ONLINE"
        weight = 1
        compression = 0
    }
)

# MySQL users section
mysql_users:
(
    {
        username = "michael"        # required
        password = "password"       # default: ''
        default_hostgroup = 0       # default: 0
        max_connections = 1000
        default_schema = "test"
        active = 1                  # default: 1
    }
)

mysql_query_rules:
(
)

scheduler=
(
)

# MySQL read/write hostgroup section
mysql_replication_hostgroups=
(
    {
        writer_hostgroup = 0
        reader_hostgroup = 1
    }
)

3 Start ProxySQL

[root@node1 ~]# service proxysql start
Starting ProxySQL: DONE!
[root@node2 init.d]# ss -tnl
State      Recv-Q Send-Q    Local Address:Port      Peer Address:Port
LISTEN     0      128           127.0.0.1:6032                 *:*
LISTEN     0      128                   *:22                   *:*
LISTEN     0      100           127.0.0.1:25                   *:*
LISTEN     0      1024                  *:3306                 *:*
LISTEN     0      1024                  *:3306                 *:*
LISTEN     0      1024                  *:3306                 *:*
LISTEN     0      1024                  *:3306                 *:*
LISTEN     0      100                 ::1:25                  :::*
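With the service up, the admin interface (admin:admin on port 6032, per the configuration above) shows whether ProxySQL sees the backends and how the writer/reader hostgroups are populated; a quick check:

[root@node1 ~]# mysql -uadmin -padmin -h 127.0.0.1 -P 6032
mysql> SELECT hostgroup_id, hostname, port, status FROM mysql_servers;
mysql> SELECT * FROM mysql_replication_hostgroups;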

4 node1 connection test; the connection succeeds

[root@node1 ~]# mysql -umichael -ppassword -h 192.168.0.153
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 387
Server version: 5.5.52-MariaDB MariaDB Server

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

The node2 configuration is the same as the node1 configuration.
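One caveat: with mysql_query_rules left empty, ProxySQL sends every query from michael to the user's default_hostgroup (0, the master); read-write splitting takes effect once matching query rules exist. A commonly used sketch, issued over the admin interface, routes plain SELECTs to the readers and everything else to the writer; the stats table then shows which hostgroup actually served each query:

mysql> INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply)
    -> VALUES (1, 1, '^SELECT.*FOR UPDATE', 0, 1), (2, 1, '^SELECT', 1, 1);
mysql> LOAD MYSQL QUERY RULES TO RUNTIME;
mysql> SAVE MYSQL QUERY RULES TO DISK;
mysql> SELECT hostgroup, digest_text, count_star FROM stats_mysql_query_digest ORDER BY count_star DESC LIMIT 10;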

5.3 ProxySQL high availability configuration

1 node1 keepalived.conf configuration

[root@node1 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from keepalived@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id node1
    vrrp_mcast_group4 224.1.101.114
}

# manual downgrade switch: if /etc/keepalived/down exists, lower the priority so the VIP moves away; otherwise exit normally
vrrp_script chk_down {
    script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"
    interval 1
    weight -10
    fall 1
    rise 1
}

# health check script: check whether proxysql is alive
vrrp_script chk_proxysql {
    script "killall -0 proxysql && exit 0 || exit 1"
    interval 1
    weight -10
    fall 1
    rise 1
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass fs3D4Gr
    }
    virtual_ipaddress {
        192.168.0.100 dev ens33 label ens33:0
    }
    # call the check scripts
    track_script {
        chk_down
        chk_proxysql
    }
    notify_master "/etc/keepalived/notify.sh master"
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"
}

The virtual IP (VIP) is set here, so the listening addresses in the proxysql.cnf configuration file need to be changed to the VIP. Note that both node1 and node2 need this change.

mysql_ifaces="192.168.0.100:6032;/tmp/proxysql_admin.sock"
interfaces="192.168.0.100:3306;/tmp/proxysql.sock"
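After both ProxySQL instances are restarted with the VIP addresses, you can check which node currently holds the VIP (the label ens33:0 comes from keepalived.conf above); the output looks along these lines:

[root@node1 ~]# ip addr show ens33 | grep 192.168.0.100
    inet 192.168.0.100/32 scope global ens33:0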

2 Notification script (notify.sh) that mails out on a failover

[root@node1 keepalived]# cat notify.sh
#!/bin/bash
#
contact='root@localhost'

notify() {
    mailsubject="vrrp: $(hostname) to be $1"
    mailbody="$(hostname) to be $1, vrrp transition, $(date)."
    echo "$mailbody" | mail -s "$mailsubject" $contact
}

case $1 in
master)
    notify master
    service proxysql start
    ;;
backup)
    notify backup
    service proxysql start
    ;;
fault)
    notify fault
    service proxysql stop
    ;;
*)
    echo "Usage: $(basename $0) {master|backup|fault}"
    exit 1
    ;;
esac

3 node2 keepalived.conf configuration

[root@node2 keepalived]# cat keepalived.conf
! Configuration File for keepalived

global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from keepalived@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id node2
    vrrp_mcast_group4 224.1.101.114
}

# manual downgrade switch: if /etc/keepalived/down exists, lower the priority so the VIP moves away; otherwise exit normally
vrrp_script chk_down {
    script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"
    interval 1
    weight -10
    fall 1
    rise 1
}

# health check script: check whether proxysql is alive
vrrp_script chk_proxysql {
    script "killall -0 proxysql && exit 0 || exit 1"
    interval 1
    weight -10
    fall 1
    rise 1
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass fs3D4Gr
    }
    virtual_ipaddress {
        192.168.0.100 dev ens33 label ens33:0
    }
    # call the check scripts
    track_script {
        chk_down
        chk_proxysql
    }
    notify_master "/etc/keepalived/notify.sh master"
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"
}

The notify.sh script is the same as above. At this point high availability is configured; you can verify it by stopping the proxysql service on one node, as shown below.
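A minimal failover drill, assuming node1 currently holds the VIP: stop proxysql on node1, confirm that the VIP has moved to node2, and reconnect through the VIP:

[root@node1 ~]# service proxysql stop                      # chk_proxysql fails, node1's priority drops
[root@node2 ~]# ip addr show ens33 | grep 192.168.0.100    # the VIP should now be on node2
[root@node1 ~]# mysql -umichael -ppassword -h 192.168.0.100 -e 'SELECT 1'   # still answered, now via node2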

5.4 High availability of MySQL master node through MHA

MHA services have two roles, MHA Manager (management node) and MHA Node (data node).

MHA Manager is usually deployed on a separate machine and can manage multiple master/slave clusters; each master/slave cluster is called an application.

MHA Node runs on every MySQL server (master and slaves) as well as on the manager; it speeds up failover with scripts that monitor the servers and parse and purge relay logs.

MHA relies on SSH communication between the nodes, which we configured at the very beginning of the environment setup (section 4.8).

1 Download the MHA installation packages (manager package and node package)

wget -c http://192.168.0.123/install_package/down/Sources/mha/mha4mysql-manager-0.56-0.el6.noarch.rpm
wget -c http://192.168.0.123/install_package/down/Sources/mha/mha4mysql-node-0.56-0.el6.noarch.rpm

2 node1 and node2 act as MHA management nodes and are themselves made highly available, so install both mha rpm packages on node1 and node2

yum install -y mha4mysql*

3 The other nodes install the mha data node rpm package

yum install mha4mysql-node-0.56-0.el6.noarch.rpm -y

The MHA Manager node needs a configuration file to define how it manages the MHA Node data nodes. The global configuration file defaults to /etc/masterha_default.cnf and is optional. Since there is only one application here, we can define a custom configuration file instead.

This article uses a configuration file at the following path.

4 Both node1 and node2 need to perform the following operations

mkdir /etc/masterha
vim /etc/masterha/app1.cnf

5 Custom management configuration file app1.cnf

[root@node1 ~]# cat /etc/masterha/app1.cnf
[server default]
user=michael
password=password
manager_workdir=/data/masterha/app1
manager_log=/data/masterha/app1/manager.log
remote_workdir=/data/masterha/app1
ssh_user=root
repl_user=repuser          # the replication account created in section 5.1
repl_password=repass
ping_interval=1

[server1]
hostname=192.168.0.53
candidate_master=1

[server2]
hostname=192.168.0.54
candidate_master=1

[server3]
hostname=192.168.0.55
candidate_master=1

[server4]
hostname=192.168.0.56
candidate_master=1

6 Check whether SSH communication between the nodes succeeds

masterha_check_ssh --conf=/etc/masterha/app1.cnf
# the last line of the output should read as follows, indicating that the test passed:
[info] All SSH connection tests passed successfully.

7 Modify the configuration of the master node (node3) and the slave nodes (node4, 5, 6)

# master node configuration:
[mysqld]
innodb_file_per_table=ON
skip_name_resolve=ON
server_id=1
log_bin=log-bin
relay_log=relay-bin

8 Modify the configuration that all slave nodes depend on

[mysqld]
innodb_file_per_table=ON
skip_name_resolve=ON
server_id=11          # change the id per node (11/12/13 as in section 5.1)
relay_log=relay-log
read_only=ON
log_bin=log-bin
relay_log_purge=0

9 Check whether the MySQL replication cluster communicates successfully

masterha_check_repl --conf=/etc/masterha/app1.cnf
# on success the output ends with: MySQL Replication Health is OK.

10 If the check succeeds, start MHA

nohup masterha_manager --conf=/etc/masterha/app1.cnf > /data/masterha/app1/manager.log 2>&1 &

11 View the current status of the master node

[root@node1 ~]# masterha_check_status --conf=/etc/masterha/app1.cnf
app1 (pid:75846) is running(0:PING_OK), master:192.168.0.53

At this point the configuration is complete. You can simulate a failure of the database master node (node3) to verify that MHA failover works: when the master fails, one of the slaves is automatically promoted to be the new master. Immediately take a backup on the new master. Meanwhile, repair the failed node; once it is back to normal, import the database backup and change its MySQL configuration to that of a slave so it rejoins the cluster as a slave node. The master-slave configuration was covered above, so it is not repeated here.
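A sketch of that recovery, assuming node4 (192.168.0.54) was promoted and node3 is being reintroduced; the dump file name and binlog coordinates are illustrative:

# 1. back up on the new master (node4)
mysqldump --all-databases --single-transaction --master-data=2 > /tmp/full.sql
# 2. on the repaired node3, apply the slave configuration from step 8, start mariadb, then import
mysql < /tmp/full.sql
# 3. point node3 at the new master; take the coordinates from the CHANGE MASTER comment inside the dump
mysql -e "CHANGE MASTER TO MASTER_HOST='192.168.0.54', MASTER_USER='repuser', MASTER_PASSWORD='repass', MASTER_LOG_FILE='log-bin.000001', MASTER_LOG_POS=245; START SLAVE;"
# 4. MHA exits after one failover and leaves a lock file in manager_workdir; clear it and restart the manager
rm -f /data/masterha/app1/app1.failover.complete
nohup masterha_manager --conf=/etc/masterha/app1.cnf > /data/masterha/app1/manager.log 2>&1 &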
