
What is the MySQL MHA cluster scheme?


This article explains the MySQL MHA cluster scheme. The content is concise and easy to follow, and I hope you gain something from the detailed introduction.

Investigation of MySQL MHA Cluster Scheme

1 basic concepts of MHA cluster 1.1 basic concepts

MHA is a MySQL failover solution, written in Perl by a Japanese MySQL expert, that ensures high availability of the database. Its job is to perform master MySQL failover within 0-30 seconds.

MHA failover also helps keep slave data consistent and recovers as much data as possible after a failure. MHA has two roles: the node role and the manager role. A minimal MHA deployment needs at least three database servers, one master and multiple slaves: one machine acts as master, one as a standby master, and the rest as slaves.

Official information website:

https://code.google.com/p/mysql-master-ha/w/list

https://code.google.com/p/mysql-master-ha/wiki/Tutorial

1.2 principle of MHA to ensure data consistency

In the MHA cluster architecture based on MySQL replication, when Master fails, MHA uses the following steps to ensure data consistency:

(1) find the most up-to-date slave server (the one whose data is closest to the master's).

(2) if the master host is still accessible, retrieve from it the binary log difference between that slave and the master.

(3) determine which events each slave server lacks and apply them to each slave individually.

(4) promote the most up-to-date slave to be the new master server.

(5) redirect other slave servers to the new master server.
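As a rough illustration of step (1), the most up-to-date slave can be found by checking how far each slave has read the master's binary log. A minimal sketch (not MHA's internal code; hostnames and credentials are the ones used later in this case):

for h in mha2 mha3; do
  echo "== $h =="
  mysql -h $h -uroot -prevenco123 -e "show slave status\G" | grep -E 'Master_Log_File|Read_Master_Log_Pos'
done
# The slave reporting the largest (Master_Log_File, Read_Master_Log_Pos) pair
# is the candidate closest to the failed master.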

Although MHA tries to save the binary logs of the downed primary server, this is not always possible. For example, if the primary server's hardware fails or it cannot be reached over ssh, MHA cannot save the binary logs; it fails over anyway, losing the most recent data.

Combining MHA with semi-synchronous replication greatly reduces the risk of data loss: as long as at least one slave has received the latest binary log events, MHA can apply them to all the other slaves, so they stay consistent with each other.

1.3 MHA Cluster Architecture Diagram 1.3.1 MHA overall Architecture Diagram

1.3.2 Architecture inside an MHA replication group

Use Keepalived + LVS within the replication group:

1.4 benefits of MHA clustering 1.4.1 Master automatic monitoring and failover

"in an existing master-slave replication environment, MHA can monitor master host failures and automatically fail over." Even if some slave do not accept the new relay log events,MHA, they will automatically identify the differential relay log events from the latest slave and apply the differential event to other slaves. So all slave are consistent. MHA second-level failover, in addition, a slave can be configured in the configuration file to become Master first.

Once the new master is promoted, the other slaves are recovered in parallel. Even with thousands of slaves, master recovery time is unaffected and the slaves catch up quickly. The whole process needs no DBA intervention.

1.4.2 online switching Master to different hosts

In many cases it is necessary to move the master to another host, for example to replace a RAID controller or upgrade the master's hardware. This is not a master crash but planned maintenance, and the resulting downtime must be kept as short as possible. Fast master switching and graceful write blocking are required, and MHA provides both: a graceful master switch blocks writes for only about 0.5-2 seconds. In many cases 0.5-2 seconds of write downtime is acceptable even outside the planned maintenance window, so the DBA can act easily when moving to a faster machine or upgrading to a newer version.
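For reference, a planned switch like this is driven by the masterha_master_switch tool, which section 5.11 covers in detail; a typical online invocation, assuming the configuration file set up later in this case, looks like:

masterha_stop --conf=/etc/masterha/app1.cnf
masterha_master_switch --master_state=alive --conf=/etc/masterha/app1.cnf --new_master_host=mha2 --orig_master_is_new_slave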

1.4.3 Master downtime will not cause master-slave data inconsistency

When the master goes down, MHA automatically identifies the differences in relay log events between the slaves, applies them to the appropriate slaves, and in the end all slaves are synchronized. Combined with semi-synchronous replication, there is almost no data loss.

1.4.4 MHA deployment does not affect current environment settings

One of MHA's most important design principles is to be as easy to use as possible. In any master-slave environment, asynchronous or semi-synchronous, MHA can be deployed without changing the existing setup. Starting, stopping, upgrading, downgrading, installing or uninstalling MHA does not require touching the mysql master/slave processes (for example restarting them), whereas other clustering solutions do require changes to the MySQL deployment.

When you need to upgrade MHA to a new version, you do not need to stop mysql: just update the MHA software and restart MHA Manager.

1.4.5 it is simple and quick to add cluster groups

MHA consists of MHA Manager and MHA Node. A single MHA Manager can manage multiple MHA node clusters. To add a cluster group, configure replication for the new group and update the MHA Manager configuration; existing clusters are essentially unaffected, and adding groups does not put much load on MHA Manager. The Manager can run on a dedicated machine or on one of the slaves.

1.4.6 No impact on performance

MHA's monitoring of the Master sends no heavy queries, so master-slave replication performance is unaffected. This case uses a custom script that periodically tries to log in to Master; if the login fails, failover is triggered. This monitoring method has little performance impact.

1.4.7 applicable to any storage engine

MHA is applicable not only to the transaction-safe InnoDB engine but to all storage engines. Even a legacy environment on the MyISAM engine can use MHA without migration.

1.4.8 transparent failover (business level)

MHA usually achieves business-level transparent failover in one of two ways. The first is a virtual IP address: MHA is combined with Keepalived and LVS, and when a database in the cluster fails over, the service VIP moves with it. The second is a global configuration file, implemented through MHA's master_ip_failover_script and master_ip_online_change_script parameters.
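For the first approach, MHA invokes the configured script during a switch. The following is only a hedged sketch of what such a master_ip_failover_script can do; the VIP, interface and argument handling are illustrative, and this case actually delegates VIP movement to Keepalived + LVS instead:

#!/bin/bash
# Hypothetical sketch: move a write VIP when MHA switches masters.
VIP=192.168.0.20
DEV=eth0
case "$*" in
*--command=stop*)
    # release the VIP on the old master
    /sbin/ip addr del $VIP/24 dev $DEV 2>/dev/null
    ;;
*--command=start*)
    # bring the VIP up on the new master and refresh neighbours' ARP caches
    /sbin/ip addr add $VIP/24 dev $DEV
    /sbin/arping -c 3 -A -I $DEV $VIP
    ;;
*--command=status*)
    /sbin/ip addr show dev $DEV | grep -q $VIP
    ;;
esac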

1.4.9 read-write separation

MHA's Master is readable and writable, while backup Master and Slave can be used for queries to share the read pressure.

1.5 MHA cluster drawbacks 1.5.1 rejoining the cluster requires manual operation

If the downed Master is to rejoin the cluster, the switch must be performed manually with commands.

1.5.2 standby Master data consistency risk

The standby Master is writable by default; if it is not set to read_only, there is a risk of data inconsistency. Moreover, read_only does not restrict super users.

1.5.3 there is data synchronization delay

Because MySQL replication is used, there is replication delay. The delay mainly comes from the master applying changes with multiple threads while the slave applies them with a single thread.
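The current delay is visible on the slave side; a quick check using the standard Seconds_Behind_Master field:

mysql -uroot -prevenco123 -e "show slave status\G" | grep Seconds_Behind_Master
# 0 means the slave has caught up; NULL means the SQL thread is not running.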

2 MHA deployment case 2.1 case description

To achieve better results, this experiment uses four machines. Semi-synchronous MySQL replication guarantees data integrity, and Keepalived combined with LVS provides transparent VIP failover and read-write separation. Note that once the primary server goes down and the standby begins acting as master, the original primary will not become master again when it comes back online, because doing so would compromise database consistency.

2.2 Architecture Diagram

IP              Host                     Usage description
192.168.0.11    Master                   Master; Keepalived HA MASTER
192.168.0.12    Slave1 (standby Master)  Standby Master; Keepalived HA BACKUP; LVS load-balancing host
192.168.0.13    Slave2                   Slave; LVS load-balancing host
192.168.0.14    MHA Manager              Management host of the MHA cluster
192.168.0.20    Virtual IP address       Write VIP of the MHA cluster
192.168.0.21    Virtual IP address       Read VIP of the MHA cluster

3 install MySQL MHA

(1) first, use ssh-keygen to set up passwordless mutual login among the four hosts

(2) install the mha4mysql-node and mha4mysql-manager packages

(3) establish master-slave replication between master, slave1 and slave2

(4) configure the MHA file on the manager host

(5) use the masterha_check_ssh tool to verify that ssh trust login works

(6) use the masterha_check_repl tool to verify that mysql replication works

(7) start MHA manager and monitor the log file

(8) test whether failover happens automatically when the master goes down

3.1 configure SSH login without password

Generate rsa keys on all MHA hosts and configure passwordless access between them:

ssh-keygen -t rsa
ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.0.11
ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.0.12
ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.0.13
ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.0.14
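A quick way to verify that key-based login works from one host to all the others (a small sketch; run it on each of the four machines):

for h in 192.168.0.11 192.168.0.12 192.168.0.13 192.168.0.14; do
  ssh -o BatchMode=yes root@$h hostname || echo "no key login to $h"
done
# BatchMode=yes makes ssh fail instead of prompting for a password,
# which is what masterha_check_ssh later expects to succeed.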

3.2 install MySQL

Install the MySQL software on the 3 MySQL master-slave nodes.

3.2.1 Uninstall the MySQL that comes with the system

rpm -qa mysql
rpm -e mysql-devel
rpm -e mysql

3.2.2 install the cmake compilation package

cd /media/RHEL_6.4\ x86_64\ Disc\ 1/Packages/
rpm -ivh cmake-2.6.4-5.el6.x86_64.rpm
cd /root

3.2.3 extract and compile MySQL software

tar zxvf mysql-5.6.19.tar.gz
cd /root/mysql-5.6.19
mkdir -p /usr/local/mysql/data
cmake -DCMAKE_INSTALL_PREFIX=/usr/local/mysql -DMYSQL_DATADIR=/usr/local/mysql/data
make && make install

3.2.4 create mysql users and groups

groupadd mysql
useradd -r -g mysql mysql

(5) initialize MySQL

cd /usr/local/mysql
chown -R mysql .
chgrp -R mysql .
scripts/mysql_install_db --user=mysql
chown -R root .
chown -R mysql data
./bin/mysqld_safe --user=mysql &
cp support-files/mysql.server /etc/init.d/mysql.server

3.2.5 setting environment variables

vi ~/.bash_profile
Append to PATH:
:/usr/local/mysql/bin
source ~/.bash_profile

3.2.6 modify MySQL configuration

cp /usr/local/mysql/my.cnf /etc/
vi /etc/my.cnf
log_bin    # not set on mha3, which will never be master
basedir = /usr/local/mysql
datadir = /usr/local/mysql/data
port = 3306
server_id = 1    # required for master-slave synchronization
socket = /tmp/mysql.sock

3.2.7 modify MySQL login permissions

Restart MySQL: service mysql.server restart
GRANT ALL ON *.* TO root@localhost IDENTIFIED BY 'revenco123' WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON *.* TO root@'%' IDENTIFIED BY 'revenco123' WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON *.* TO root@mha1 IDENTIFIED BY 'revenco123' WITH GRANT OPTION;
FLUSH PRIVILEGES;

3.3 Configure the MySQL master-slave relationship 3.3.1 Modify the MySQL configuration files

(1) modify /etc/my.cnf on MASTER and add:
server_id=1
log_bin
(2) modify /etc/my.cnf on SLAVE1 (standby MASTER) and add:
server_id=2
log_bin
(3) modify /etc/my.cnf on SLAVE2 and add:
server_id=3

3.3.2 authorize replication permissions

(1) Authorization on master

mysql> grant replication slave on *.* to repl@'192.168.0.%' identified by 'repl';

(2) Authorization on the standby master:

mysql> grant replication slave on *.* to repl@'192.168.0.%' identified by 'repl';

3.3.3 switch to master-slave replication mode

(1) View the name and position of the binary log currently in use on Master

mysql> show master status;

Record "File" and "Position": these are the binary log name and position currently used by the main library.

(2) switch to slave mode

Switch to slave mode on Slave1 (standby Master) and Slave2

mysql> change master to master_host="mha1", master_user="repl", master_password="repl", master_log_file="mha1-bin.000005", master_log_pos=120;

master_log_file and master_log_pos are the values recorded above.

(3) start replication

Enable replication on Slave1 (standby Master) and slave2:

mysql> start slave;

(4) set slave2 to read-only

mysql> set global read_only=1;

Note: read-only has no effect on users with the SUPER privilege

3.3.4 check the replication status of the master-slave library

(1) check the status of the main library

mysql> show master status\G

(2) check the status of the slave library

mysql> show slave status\G

# if Slave_IO_Running: Yes and Slave_SQL_Running: Yes, the master/slave configuration succeeded.

# you can also execute mysql> show global status like "rpl%"; on master

If Rpl_semi_sync_master_clients is 2, semi-synchronous replication is working.

3.3.5 Log synchronization format

There are three synchronization formats for logs:

(1) replication based on SQL statements (statement-based replication, SBR)

(2) Row-based replication (row-based replication, RBR)

(3) mixed mode replication (mixed-based replication, MBR)

Accordingly, binlog has three formats: STATEMENT, ROW and MIXED. STATEMENT is the default format. For data consistency, ROW or MIXED mode is recommended.

For example, there is a function:

CREATE DEFINER = 'root'@'%' FUNCTION f_test2 (pid int)
RETURNS int(11)
BEGIN
insert into t1 values (pid, sysdate());
RETURN 1;
END

After the main library executes select f_test2(8);, the value inserted by sysdate() differs between the main library and each slave library.
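To avoid that mismatch, the logging format can be switched to ROW at runtime or persisted in the configuration file (an illustrative snippet):

mysql> set global binlog_format = 'ROW';
# or persist it in /etc/my.cnf under [mysqld]:
# binlog_format = ROW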

3.4 configure MySQL semi-synchronous replication

The prerequisite for configuring MySQL semi-synchronous replication is that MySQL replication is already configured. Semi-synchronous replication sits between synchronous and asynchronous: a transaction commit on Master blocks until at least one Slave has received the transaction's events and written them to its relay log. It does not wait for all Slaves to acknowledge receipt, and it waits only for receipt, not for the events to be fully applied and committed on the slave.

During a failover, both Master and Slave1 (standby Master) can become Master or Slave, so 'semisync_master.so' and 'semisync_slave.so' are installed on both servers.

3.4.1 configure Master and Slave1 (standby Master)

(1) install semi-synchronous plug-in

(plug-ins that need to be installed as Master)

mysql> install plugin rpl_semi_sync_master soname 'semisync_master.so';

(plug-ins that need to be installed as Slave)

mysql> install plugin rpl_semi_sync_slave soname 'semisync_slave.so';

(2) set parameters

(parameters to be set as Master)

mysql> set global rpl_semi_sync_master_enabled=1;
mysql> set global rpl_semi_sync_master_timeout=1000;
mysql> show global status like 'rpl%';

(parameters to be set as Slave)

mysql> install plugin rpl_semi_sync_slave soname 'semisync_slave.so';
mysql> set global rpl_semi_sync_slave_enabled=1;

(3) modify the configuration file

To make mysql load this feature automatically on restart, add the following to /etc/my.cnf:
rpl_semi_sync_master_enabled=1
rpl_semi_sync_master_timeout=1000
rpl_semi_sync_slave_enabled=1

3.4.2 configure Slave2

(1) install semi-synchronous plug-in

mysql> install plugin rpl_semi_sync_slave soname 'semisync_slave.so';

(2) set parameters

mysql> set global rpl_semi_sync_slave_enabled=1;

(3) modify the configuration file

Add to /etc/my.cnf:
rpl_semi_sync_slave_enabled=1

3.4.3 check the status of semi-synchronous replication

Execute mysql> show global status like 'rpl%'; on master.

If Rpl_semi_sync_master_clients is 2, semi-synchronous replication is working.
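Beyond the client count, a couple of standard status variables help confirm that commits are actually being acknowledged semi-synchronously (illustrative checks):

mysql> show global status like 'Rpl_semi_sync_master_status';    # ON while semi-sync is active
mysql> show global status like 'Rpl_semi_sync_master_no_tx';     # commits that fell back to asynchronous replication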

3.5 install MHA software

An inconvenience of MHA is that after a master switch, whether caused by downtime or performed manually, the original master is no longer part of the MHA architecture; restarting it will not rejoin it, so it must be added back manually. The manual steps are always similar: copy the current master's data to the machine being added, then change master, then start slave. Crucially, the system must not accept writes during this process, which is a serious limitation.

One MHA host can manage multiple MySQL replication groups. To install MHA you need to install DBD-MySQL first, and to install DBD-MySQL it is easiest to set up the yum tool first.

3.5.1 configure the yum tool

vim /etc/yum.repos.d/rhel-source.repo

[rhel-source]
name=Red Hat Enterprise Linux $releasever - $basearch - Source
# baseurl=ftp://ftp.redhat.com/pub/redhat/linux/enterprise/$releasever/en/os/SRPMS/
baseurl=file:///mnt
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[rhel-source-beta]
name=Red Hat Enterprise Linux $releasever Beta - $basearch - Source
# baseurl=ftp://ftp.redhat.com/pub/redhat/linux/beta/$releasever/en/os/SRPMS/
baseurl=file:///mnt
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-beta,file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

3.5.2 install the DBD-MySQL package

umount /dev/cdrom
mount /dev/cdrom /mnt
yum clean all
yum list
yum install perl-DBD-MySQL

3.5.3 install MHA Node on all servers

cd /root
tar zxvf mha4mysql-node-0.54.tar.gz
cd mha4mysql-node-0.54
perl Makefile.PL
make
make install

3.5.4 install MHA Manager on the management machine

(1) download and install the necessary packages

Download addresses:
http://apt.sw.be/redhat/el6/en/i386/extras/RPMS/perl-Config-Tiny-2.12-1.el6.rfx.noarch.rpm
http://apt.sw.be/redhat/el6/en/x86_64/rpmforge/RPMS/perl-Log-Dispatch-2.26-1.el6.rf.noarch.rpm
http://apt.sw.be/redhat/el6/en/x86_64/rpmforge/RPMS/perl-Parallel-ForkManager-0.7.5-2.2.el6.rf.noarch.rpm

(2) install the necessary packages

rpm -ivh perl-Params-Validate-0.92-3.el6.x86_64.rpm
rpm -ivh perl-Config-Tiny-2.12-1.el6.rfx.noarch.rpm
rpm -ivh perl-Log-Dispatch-2.26-1.el6.rf.noarch.rpm
rpm -ivh perl-Parallel-ForkManager-0.7.5-2.2.el6.rf.noarch.rpm

(3) install MHA manager software

cd /mnt/Packages
cd /root
tar zxvf mha4mysql-manager-0.55.tar.gz
cd mha4mysql-manager-0.55
perl Makefile.PL
make
make install

3.6 configure MHA

Create the MHA configuration on MHA Manager:

[root@mha4 ~]# more /etc/masterha/app1.cnf

[server default]
# mysql user and password
user=root
password=revenco123
ssh_user=root
repl_user=repl
repl_password=repl
ping_interval=1
shutdown_script=""
# working directory on the manager
manager_workdir=/var/log/masterha/app1
manager_log=/var/log/masterha/app1/manager.log
# working directory on MySQL servers
remote_workdir=/var/log/masterha/app1

[server1]
hostname=mha1
master_binlog_dir=/usr/local/mysql/data
candidate_master=1

[server2]
hostname=mha2
master_binlog_dir=/usr/local/mysql/data
candidate_master=1

[server3]
hostname=mha3
master_binlog_dir=/usr/local/mysql/data
no_master=1

Note: please see the official documentation for specific parameters.

https://code.google.com/p/mysql-master-ha/wiki/Parameters

3.7 Check the MHA configuration

Execute on MHA Manager

masterha_check_ssh --conf=/etc/masterha/app1.cnf
masterha_check_repl --conf=/etc/masterha/app1.cnf

If an error is reported, check the configuration.

Question 1:

Can't exec "mysqlbinlog": No such file or directory at / usr/local/share/perl/5.10.1/MHA/BinlogManager.pm line 99.

Mysqlbinlog version not found!

Solution:

# vi ~/.bashrc or vi /etc/bashrc, then add at the end of the file:
PATH="$PATH:/usr/local/mysql/bin"
export PATH
source /etc/bashrc

Question 2:

Testing mysql connection and privileges..Warning: Using a password on the command line interface can be insecure.

ERROR 1045 (28000): Access denied for user 'root'@'mha2' (using password: YES)

Solution: check the user table and grant the missing privilege.

GRANT ALL PRIVILEGES ON *.* TO root@mha2 IDENTIFIED BY 'revenco123' WITH GRANT OPTION;

3.8 enable MHA

Start the management node process:

masterha_manager --conf=/etc/masterha/app1.cnf &

Note: stopping and starting MHA services will not affect the services provided by MySQL.

4 configure automatic failover

When the database in the MHA cluster fails over, the virtual IP that provides services is also transferred. This experimental case uses the method of Keepalived + LVS to achieve read-write separation and IP failure transparent transfer.

For write separation, Master and Slave1 (standby Master) share one write virtual IP; only one machine can be written to at any time. Normally writes go to Master. When Master goes down, MHA promotes Slave1 (standby Master) to Master, after which Slave1 is writable. When the original Master is repaired it becomes a Slave, and Slave1 (standby Master) remains writable; the repaired Master is not promoted back to Master unless you intervene manually.

For read separation, Slave1 (standby Master) and Slave2 share one read virtual IP. LVS's round-robin algorithm alternates access between Slave1 (standby Master) and Slave2, and if one of them is unreachable, the other server is used.

Note: once the primary server goes down and the standby begins to act as master, the original primary will not become master again when it comes back online, because doing so would compromise database consistency.

4.1 install Keepalived

Install Keepalived on Master and Slave1 (standby Master).

4.1.1 extract and compile keepalived

rpm -ivh openssl-devel-1.0.0-27.el6.x86_64.rpm
tar zxvf keepalived-1.2.13.tar.gz
cd keepalived-1.2.13
./configure --prefix=/usr/local/keepalived

After configure finishes, the build is considered good when the three items marked below show Yes; if only one or two of them show Yes, check that the kernel source symlink is correct:

Use IPVS Framework: Yes    # must be Yes
IPVS sync daemon support: Yes    # must be Yes
IPVS use libnl: No
Fwmark socket support: Yes
Use VRRP Framework: Yes    # must be Yes
Use VRRP VMAC: Yes

Use VRRP VMAC: Yes

4.1.2 install keepalived

make
make install
cp /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/
cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
mkdir -pv /etc/keepalived
cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/
ln -s /usr/local/keepalived/sbin/keepalived /sbin/
service keepalived restart

4.2 configure Keepalived

The cluster uses keepalived in "HA + LVS" mode: Keepalived HA mode is configured on mha1 and mha2 (the "vrrp_instance VI_1" instance group in the configuration below), and Keepalived LVS mode on mha2 and mha3 (the "vrrp_instance VI_2" instance group below). Both hosts are configured with "state BACKUP" but with different priority values. The configuration is as follows:

4.2.1 configuring keepalived on a Master host

[root@mha1 ~]# more /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    router_id mha1
}

vrrp_script check_mysql {
    script "/usr/local/keepalived/bin/keepalived_check_mysql.sh"
    interval 3
}

vrrp_sync_group VG1 {
    group {
        VI_1
    }
}

vrrp_sync_group VG2 {
    group {
        VI_2
    }
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 150
    advert_int 1
    nopreempt
    authentication {
        auth_type PASS
        auth_pass revenco123
    }
    track_script {
        check_mysql
    }
    virtual_ipaddress {
        192.168.0.20
    }
}

vrrp_instance VI_2 {
    state BACKUP
    interface eth0
    virtual_router_id 52
    priority 100
    advert_int 1
    nopreempt
    authentication {
        auth_type PASS
        auth_pass revenco1234
    }
    track_script {
        check_mysql
    }
    virtual_ipaddress {
        192.168.0.21
    }
}

virtual_server 192.168.0.21 3306 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    protocol TCP
    real_server 192.168.0.12 3306 {
        weight 3
        TCP_CHECK {
            connect_timeout 3
        }
    }
    real_server 192.168.0.13 3306 {
        weight 3
        TCP_CHECK {
            connect_timeout 3
        }
    }
}

4.2.2 configuring keepalived on Slave1 (standby Master)

[root@mha2 ~]# more /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    router_id mha2
}

vrrp_script check_mysql {
    script "/usr/local/keepalived/bin/keepalived_check_mysql.sh"
    interval 3
}

vrrp_sync_group VG1 {
    group {
        VI_1
    }
}

vrrp_sync_group VG2 {
    group {
        VI_2
    }
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    nopreempt
    authentication {
        auth_type PASS
        auth_pass revenco123
    }
    track_script {
        check_mysql
    }
    virtual_ipaddress {
        192.168.0.20
    }
}

vrrp_instance VI_2 {
    state BACKUP
    interface eth0
    virtual_router_id 52
    priority 150
    advert_int 1
    nopreempt
    authentication {
        auth_type PASS
        auth_pass revenco1234
    }
    track_script {
        check_mysql
    }
    virtual_ipaddress {
        192.168.0.21
    }
}

virtual_server 192.168.0.21 3306 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    protocol TCP
    real_server 192.168.0.12 3306 {
        weight 3
        TCP_CHECK {
            connect_timeout 3
        }
    }
    real_server 192.168.0.13 3306 {
        weight 3
        TCP_CHECK {
            connect_timeout 3
        }
    }
}

4.2.3 detect mysql status script configuration

On the two Keepalived hosts, configure the script that checks mysql status (the same on both). The script works roughly as follows: whenever it detects that the mysql service has stopped, it also stops the keepalived service, because keepalived announces via multicast to the network segment that it is still alive. If keepalived kept running after mysql stopped, the other host could never take over the virtual IP, so keepalived must be stopped. The script can run in the background or be referenced from the keepalived configuration file; in this lab it is configured in the keepalived configuration file.

Script file name: /usr/local/keepalived/bin/keepalived_check_mysql.sh

#!/bin/bash
MYSQL=/usr/local/mysql/bin/mysql
MYSQL_HOST=localhost
MYSQL_USER=root
MYSQL_PASSWORD=revenco123
CHECK_TIME=3
# MYSQL_OK is 1 while mysql is working, 0 when mysql is down
MYSQL_OK=1

function check_mysql_health() {
    $MYSQL -h $MYSQL_HOST -u$MYSQL_USER -p$MYSQL_PASSWORD -e "show status;" > /dev/null 2>&1
    if [ $? -eq 0 ]; then
        MYSQL_OK=1
    else
        MYSQL_OK=0
    fi
    return $MYSQL_OK
}

while [ $CHECK_TIME -ne 0 ]
do
    let "CHECK_TIME -= 1"
    check_mysql_health
    if [ $MYSQL_OK -eq 1 ]; then
        CHECK_TIME=0
        exit 0
    fi
    if [ $MYSQL_OK -eq 0 ] && [ $CHECK_TIME -eq 0 ]
    then
        /etc/init.d/keepalived stop
        exit 1
    fi
    sleep 1
done
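Note that the vrrp_script block can only run the file if it is executable; on both Keepalived hosts:

chmod +x /usr/local/keepalived/bin/keepalived_check_mysql.sh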

4.3 install LVS

The goal of using LVS is to build a high-performance, highly available server cluster out of the load-balancing technology provided by LVS and the Linux operating system, one with good reliability, scalability and manageability, achieving optimal service performance at low cost. In this experiment, LVS provides read load balancing across Slave1 (standby Master) and Slave2.

4.3.1 LVS server-side installation configuration

Keepalived has LVS functionality built in, so the LVS server side is configured directly inside the Keepalived configuration file: the "virtual_server" section of the Keepalived configuration is the LVS configuration. See the "virtual_server" section of Keepalived's configuration file above for details.

4.3.2 Real machine configuration of LVS

In this lab, the LVS real servers are Slave1 (standby Master) and Slave2.

In LVS DR and TUN modes, responses to user requests are returned to the user directly from the real server instead of passing back through the front-end Director Server. The virtual VIP address must therefore also be bound on each real server node so that replies can be returned directly. This can be done with a script: create the file /etc/init.d/lvsrs with the following content:

[root@mha3 ~]# more /etc/init.d/lvsrs
#!/bin/bash
# description: Start Real Server
VIP=192.168.0.21
. /etc/rc.d/init.d/functions
case "$1" in
start)
    echo "Start LVS of Real Server"
    /sbin/ifconfig lo:0 $VIP broadcast $VIP netmask 255.255.255.255 up
    echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
    ;;
stop)
    /sbin/ifconfig lo:0 down
    echo "close LVS Director server"
    echo "0" > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "0" > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo "0" > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo "0" > /proc/sys/net/ipv4/conf/all/arp_announce
    ;;
*)
    echo "Usage: $0 {start|stop}"
    exit 1
esac
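Make the script executable and run it on both real servers; the VIP should then appear on lo:0 (a quick check):

chmod +x /etc/init.d/lvsrs
/etc/init.d/lvsrs start
ifconfig lo:0    # should show 192.168.0.21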

4.4 Firewall configuration

vim /etc/sysconfig/iptables    # allow the vrrp protocol to pass
/etc/init.d/iptables restart
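VRRP advertisements are IP protocol 112 sent to the multicast address 224.0.0.18, so rules along these lines are needed on both Keepalived hosts (an illustrative sketch; adapt them to the existing rule set):

iptables -A INPUT -p 112 -j ACCEPT          # VRRP protocol
iptables -A INPUT -d 224.0.0.18 -j ACCEPT   # VRRP multicast advertisements
service iptables save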

5 Test the impact on HA and LVS 5.1 Test stopping and starting the keepalived service

(1). Start keepalived on mha1 and mha2 and check which host each of the two VIPs reaches.

Test results: VIP1 reaches mha1; VIP2 alternates between mha2 and mha3.

(2). Stop keepalived on mha1.

Test results: VIP1 reaches mha2; VIP2 alternates between mha2 and mha3.

(3). Restart keepalived on mha1.

Test results: VIP1 reaches mha2; VIP2 alternates between mha2 and mha3.

(4). Stop keepalived on mha2.

Test results: VIP1 reaches mha1; VIP2 alternates between mha2 and mha3.

(5). Restart keepalived on mha2.

Test results: VIP1 reaches mha1; VIP2 alternates between mha2 and mha3.

(6). Repeat steps (1)-(5).

Test results: the results are the same.

(7). Stop keepalived on mha1 and mha2 at the same time.

Test results: neither VIP can be reached.

5.2 Test the impact of stopping and starting the corresponding services of the cluster on HA and LVS

The service tested is httpd:

(No check script ties httpd to keepalived here, so stopping the httpd server does not stop the keepalived service; for the Mysql server there is the corresponding script.)

(1). Start keepalived and httpd on mha1 and mha2 and check which host each of the two VIPs reaches.

Test results: VIP1 reaches mha1; VIP2 alternates between mha2 and mha3.

(2). Stop httpd on mha1.

Test results: VIP1 reaches mha2; VIP2 alternates between mha2 and mha3.

(3). Restart httpd on mha1.

Test results: VIP1 reaches mha2; VIP2 alternates between mha2 and mha3.

(4). Stop httpd on mha2.

Test results: VIP1 cannot be reached; VIP2 reaches mha3.

(keepalived should check the httpd service periodically and stop keepalived when httpd stops, letting mha1 become the Master again; then VIP1 would reach mha1 instead of being unreachable as it is now.)

(5). Restart httpd on mha2.

Test results: VIP1 reaches mha2 again; VIP2 alternates between mha2 and mha3.

(6). Stop httpd on mha3.

Test results: no effect on VIP1; VIP2 reaches mha2.

(7). Restart httpd on mha3.

Test results: no effect on VIP1; VIP2 alternates between mha2 and mha3.

(8). Stop httpd on mha1 and mha2 at the same time.

Test results: neither VIP can be reached.

5.3 Test the high availability of the write VIP

(1) first connect to the mha cluster through mha-w-vip (the write VIP) and check which database host is connected.

mysql -uroot -ptest123 -h 172.20.52.252
use mysql;
select host,user,password from user;

(2) stop the mysql service on the current master.

(3) check whether connections through mha-w-vip (the write VIP) still reach the mha cluster and whether the current connection is dropped.

(4) Test results: after the mysql service on master stops, the "write VIP" switches within about 3 seconds. When master's mysql service is down, the standby master becomes master and the cluster stays available. Once the original master is repaired, it must be switched back into replication manually.

Only connection-level high availability is achieved, not session-level: when the master goes down, existing sessions are disconnected and must reconnect immediately.

5.4 Test the load balancing of reading VIP

Stop the standby master and the slave in turn, then use the read VIP connection to check which library host is currently connected.

(1) first use mha-r-vip (read VIP) to connect to the mha cluster and view the information of the connected database hosts.

mysql -uroot -ptest -h 172.20.52.253
use mysql;
select host,user,password from user;

(2) Open multiple sessions and repeat the previous step.

(3) check which host each session is connected to.

(4) Test results: while the standby master and slave are both online, reads alternate between them.

5.5 Test the high availability of the read VIP

(1) first use mha-r-vip (read VIP) to connect to the mha cluster and view the information of the connected database hosts.

mysql -uroot -ptest123 -h mha-r-vip
use mysql;
select host,user,password from user;

(2) stop the standby master and the slave in turn, then use the read vip connection to check which library host is currently connected.

(3) Test results: if the mysql service on either the standby master or the slave goes down, the cluster can still serve reads.

5.6 Test data synchronization

Create a database and table and insert data on the main library, then check data synchronization on the standby master and slave.

(1) create libraries, tables, and insert data on the main library

mysql> create database test;
Query OK, 1 row affected (0.00 sec)
mysql> create table t1 (id int, name varchar(20));
ERROR 1046 (3D000): No database selected
mysql> use test;
Database changed
mysql> create table t1 (id int, name varchar(20));
Query OK, 0 rows affected (0.04 sec)
mysql> insert into t1 values (1,'a');
Query OK, 1 row affected (0.02 sec)
mysql> insert into t1 values (2,'b');

(2) Log in to the slave database to view the data

use test;
select * from t1;

(3) Test results:

The standby master and slave synchronize the data automatically and almost immediately (synchronization completes in the background at the millisecond level).

5.7 Test read-only on the standby master and slave

Test the behavior when the standby master and slave are set to read_only.

(1) set to read-only

set global read_only=1;

(2) Log in and insert data

mysql -uroot -ptest123 -D test
insert into t1 values (1,'a');
The insert succeeds.

(3) create ordinary users

GRANT ALL ON test.* TO test@'%' IDENTIFIED BY 'test' WITH GRANT OPTION;
GRANT ALL ON test.* TO test@localhost IDENTIFIED BY 'test' WITH GRANT OPTION;

(4) log in and insert data

mysql -utest -ptest -D test
insert into t1 values (1,'a');
The insert fails.

(5) Test results: read_only has no effect on super users but does restrict ordinary users.

5.8 Test whether read_only affects replication

On the main library:

mysql -utest -ptest -D test
insert into t1 values (3,'c');

On the standby master and slave, query:

select * from t1;

The new row is present.

On the standby master and slave, insert:

mysql -utest -ptest -D test
insert into t1 values (4,'d');
ERROR 1290 (HY000): The MySQL server is running with the --read-only option so it cannot execute this statement

Conclusion: setting read-only on the standby master and slave restricts their write capability without affecting master-slave replication.

5.9 After the original master is restored, keep the original master (mha1) as slave

After the original master is restored, to keep the original master (mha1) as a slave, do the following:

1). Change the order of mha1 in the mha manager configuration file.

2). Delete the failover marker: rm /var/log/masterha/app1/app1.failover.complete

3). Start the mha manager process again.

4). Make mha2 the keepalived HA MASTER and mha1 the HA BACKUP, so that the "write vip" stays with the Master.

5). Adjust the read VIP: it now balances across mha1 and mha3 (formerly mha2 and mha3).

6). Set the original master (mha1) to read_only.

7). Check show slave status\G and show master status.

5.10 After the original master is restored, make the original master (mha1) Master again

After the original master is restored, to make the original master (mha1) Master again, do the following:

1). On mha manager, manually switch mha1 to Master.

2). Set the standby master (mha2) to read_only.

3). Delete the failover marker: rm /var/log/masterha/app1/app1.failover.complete

4). Restart the mha manager process.

5). Make mha1 the keepalived HA MASTER and mha2 the HA BACKUP, so that the "write vip" stays with the Master.

6). Check show slave status\G and show master status.

5.11 Test manually switching master to standby master

(1). Stop the masterha_manager process: masterha_stop --conf=/etc/masterha/app1.cnf

(2). Execute the switch command, with parameters depending on the situation.

When the main library is dead:

masterha_master_switch --master_state=dead --conf=/etc/masterha/app1.cnf --dead_master_host=mha2

Non-interactive failover:

masterha_master_switch --master_state=dead --conf=/etc/masterha/app1.cnf --dead_master_host=mha2 --new_master_host=mha1 --interactive=0

Online switch, no longer using the original main library afterwards:

masterha_master_switch --master_state=alive --conf=/etc/masterha/app1.cnf --new_master_host=mha1

Online switch, turning the original main library into a slave afterwards:

masterha_master_switch --master_state=alive --conf=/etc/masterha/app1.cnf --new_master_host=mha1 --orig_master_is_new_slave

5.12 backup and restore of test database

Export:

mysqldump -uroot -p123456 test > test_20140704.sql

Import:

mysql -uroot -ptest123 test < test_20140704.sql

Test result: the import is executed on the main library, and the slave libraries synchronize the data automatically.

5.13 Test whether creating a stored procedure is synchronized to the standby master and slave

Create the stored procedure `f_test` on Master, then try to create it on the standby master and slave. If creation reports that the stored procedure already exists, then creating a stored procedure is also synchronized to the standby master and slave.

DELIMITER $$
CREATE DEFINER=`root`@`%` PROCEDURE `f_test`(
    IN p_id int,
    IN p_name varchar(20)
)
begin
    insert into t1 values(p_id, p_name);
END;
$$
DELIMITER ;

Test result: creating a stored procedure is also synchronized to the standby master and slave.

5.14 Test whether data modified inside a stored procedure is synchronized

Execute call f_test(4,'d'); on master (f_test inserts one record into t1), then query select * from t1; on master and on the standby master and slave. The data matches Master, so data modified inside a stored procedure is synchronized.

Test result: data modified inside a stored procedure is synchronized.

6 Exception handling 6.1 Handling master-slave replication errors

In MySQL master-slave replication, errors often interrupt replication on the slave side. Manual intervention is then usually needed to skip the error before replication can continue. There are two ways to skip errors:

6.1.1 Skip a specified number of transactions

mysql> stop slave;
mysql> set global sql_slave_skip_counter = 1;    # skip one transaction
mysql> start slave;

6.1.2 modify the configuration file of mysql

Skip all errors, or errors of specified types, via the slave_skip_errors parameter:

vi /etc/my.cnf
[mysqld]
# slave-skip-errors=1062,1053,1146    # skip errors with the specified error numbers
# slave-skip-errors=all               # skip all errors

6.1.3 Function synchronization

MySQL restricts the replication of functions: only functions explicitly declared DETERMINISTIC, NO SQL or READS SQL DATA can be created and automatically synchronized to the slave libraries; otherwise creating the function fails. Setting "set global log_bin_trust_function_creators = 1;" allows such functions to be created and synchronized to the slaves. This parameter matters only when log-bin is enabled. With log_bin_trust_function_creators=0, creating a function requires the SUPER privilege and the function body must not contain SQL that modifies data.

(1) creating the function under normal master-slave replication

mysql> set global log_bin_trust_function_creators = 0;

CREATE DEFINER = 'root'@'%' FUNCTION f_test2 (pid int)
RETURNS int(11)
BEGIN
insert into t1 values (pid, sysdate());
RETURN 1;
END

Test results: in a master-slave environment, if the function does not declare DETERMINISTIC, NO SQL or READS SQL DATA, creating the function fails.

(2) test of a function explicitly declared DETERMINISTIC (or NO SQL or READS SQL DATA)

CREATE DEFINER = 'root'@'%' FUNCTION f_test2 (pid int)
RETURNS int(11)
DETERMINISTIC
BEGIN
insert into t1 values (pid, sysdate());
RETURN 1;
END

Test results: the function can be created successfully and is synchronized to the slave library, but it cannot be executed; execution reports an error.

(3) with log_bin_trust_function_creators = 1, create the function without declaring DETERMINISTIC, NO SQL or READS SQL DATA.

mysql> set global log_bin_trust_function_creators = 1;

The function is created successfully and is also synchronized to the slave libraries (any slave with bin-log enabled must set the same parameter).

Execute the function on the main library:

mysql> select f_test2(7);

mysql> select * from t1;

The inserted record is present in the main library's t1 table and is synchronized to the slave libraries.

6.1.4 stored procedure synchronization

With binlog_format = STATEMENT:

Creating the stored procedure p_test1 synchronizes automatically to the slave libraries and does not require "set global log_bin_trust_function_creators = 1;", which shows that stored-procedure replication is treated as safe. Executing call p_test1(14); synchronizes the data fully to the slave libraries; a function would not be fully synchronized, because sysdate() would return a different value on each server.

p_test1 code:

DELIMITER |
CREATE DEFINER = 'root'@'%' PROCEDURE p_test1
(
pid int
)
BEGIN
declare v_time datetime;
set v_time = sysdate();
insert into t1 values (pid, v_time);
END |
DELIMITER ;

Test results: combined with the function-synchronization tests above, we can conclude:

With binlog_format = STATEMENT, executing a stored procedure makes the backend log the changes as if binlog_format = ROW, while executing a function logs them as binlog_format = STATEMENT.

7 MHA routine maintenance

(1) check whether ssh login is successful

masterha_check_ssh --conf=/etc/masterha/app1.cnf

(2) check whether replication has been established.

masterha_check_repl --conf=/etc/masterha/app1.cnf

(3) start mha

nohup masterha_manager --conf=/etc/masterha/app1.cnf > /tmp/mha_manager.log < /dev/null 2>&1 &

By default, the manager cannot start when a slave node is down; adding --ignore_fail_on_start lets mha start even if a node is down:

nohup masterha_manager --conf=/etc/masterha/app1.cnf --ignore_fail_on_start > /tmp/mha_manager.log < /dev/null 2>&1 &

(4) check the status of startup

masterha_check_status --conf=/etc/masterha/app1.cnf

(5) stop mha

masterha_stop --conf=/etc/masterha/app1.cnf

(6) restart next time after failover

After each failover switch, the file app1.failover.complete is generated in the manager working directory. On the next switch this file causes the switch to fail, so it must be cleaned up manually:

rm -rf /var/log/masterha/app1/app1.failover.complete

Alternatively, add the parameter --ignore_last_failover.
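With that flag the marker file can be left in place; an illustrative start command combining it with the nohup form from step (3):

nohup masterha_manager --conf=/etc/masterha/app1.cnf --ignore_last_failover > /tmp/mha_manager.log < /dev/null 2>&1 &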

(7) Manual online switching

1). Before switching manually, stop the running mha. You can stop it with:

masterha_stop --conf=/etc/masterha/app1.cnf

2). Execute the switch command.

Manual failover scenario: master is dead but masterha_manager is not running. You can fail over manually:

masterha_master_switch --master_state=dead --conf=/etc/masterha/app1.cnf --dead_master_host=host1 --new_master_host=host5

or

masterha_master_switch --conf=/etc/masterha/app1.cnf --master_state=alive --new_master_host=host1 --orig_master_is_new_slave

or

masterha_master_switch --conf=/etc/masterha/app1.cnf --master_state=alive --new_master_host=host1 --orig_master_is_new_slave --running_updates_limit=10000 --interactive=0

Parameter interpretation:

--orig_master_is_new_slave: with this parameter, the switch turns the original master into a slave of the new master; without it, the original master is not brought back. The repl_user and repl_password parameters must be set in the configuration file.

--running_updates_limit=10000: if the candidate master lags during the switch, the switch normally cannot succeed. With this parameter, the switch proceeds as long as the replication delay is within this many seconds, though the actual switch time depends on the size of the relay logs to be recovered.

--interactive=0: whether the switch runs interactively. The default is interactive, meaning you are prompted to confirm options, such as typing "yes or no".

--master_state=alive: an online master switch requires the following conditions: first, the IO thread is running on all slaves; second, the SQL thread is running normally on all slaves; third, Seconds_Behind_Master is at most running_updates_limit seconds on all slaves; fourth, in the show processlist output on master, no update query has been running longer than running_updates_limit seconds.

To run DDL on a standby library, generally stop the slave first and do not write the statements to the binary log, which can be achieved with set SQL_LOG_BIN = 0. Then perform a master/slave switch and run the same DDL on the original master library. This method suits adding or dropping indexes; adding columns needs extra care.
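A hedged sketch of that per-server routine, using an index change as the example (the table and index names are illustrative):

mysql> stop slave;
mysql> set SQL_LOG_BIN = 0;    -- keep the DDL out of this server's binary log
mysql> alter table test.t1 add index idx_name (name);
mysql> set SQL_LOG_BIN = 1;
mysql> start slave;
-- then switch masters with masterha_master_switch and repeat on the original master.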

The above is what the MySQL MHA cluster scheme looks like. Have you learned any new knowledge or skills? If you want to learn more skills or enrich your knowledge reserve, you are welcome to follow the industry information channel.
