MySQL High availability Cluster Architecture MHA


MHA (Master High Availability) is currently a relatively mature solution for MySQL high availability. It was developed by Yoshinori Matsunobu at the Japanese company DeNA (he later moved to Facebook). It is an excellent piece of high-availability software for failover and master promotion in a MySQL replication environment. During a failure, MHA can complete the failover of the database automatically within about 30 seconds, and throughout the process it preserves data consistency as far as possible, achieving high availability in the real sense.

There are two roles in MHA, one is MHA Node (data node) and the other is MHA Manager (management node).

MHA Manager can be deployed on a separate machine to manage multiple master-slave clusters, or it can be deployed on one of the slave nodes. MHA Node runs on every MySQL server. MHA Manager periodically probes the master in the cluster; when the master fails, it automatically promotes the slave with the latest data to the new master and repoints all other slaves to it. The entire failover process is completely transparent to the application.

During automatic failover, MHA tries to save the binary logs from the crashed master so that as little data as possible is lost, but this is not always feasible. For example, if the master's hardware fails or it cannot be reached over SSH, MHA cannot save the binary logs; it only fails over, and the most recent data is lost. With the semi-synchronous replication introduced in MySQL 5.5, this risk can be greatly reduced. MHA can be combined with semi-synchronous replication: if at least one slave has received the latest binary log events, MHA can apply them to all other slaves, keeping the data consistent across all nodes.

Note: starting with MySQL 5.5, semi-synchronous replication is available as a plug-in. To understand semi-synchronous replication, first look at asynchronous and fully synchronous replication:

Asynchronous replication

MySQL replication is asynchronous by default. The master returns the result to the client as soon as it has executed and committed the transaction, without caring whether any slave has received and applied it. The problem: if the master crashes, transactions that were committed on the master may not have reached any slave, and if a slave is then forcibly promoted to master, the data on the new master may be incomplete.

Fully synchronous replication

The master does not return to the client until all slaves have executed the transaction. Because it has to wait for every slave to finish, fully synchronous replication inevitably hurts performance badly.

Semi-synchronous replication

It sits between asynchronous and fully synchronous replication. The master does not return to the client immediately after committing the transaction; it waits until at least one slave has received the event and written it to its relay log. Compared with asynchronous replication, semi-synchronous replication improves data safety, but it also introduces some latency, at minimum one TCP/IP round trip, so it is best used on low-latency networks.


Summary: similarities and differences between asynchronous and semi-synchronous replication

By default, MySQL replication is asynchronous: after update operations on the Master are written to the binlog, there is no guarantee that they have been copied to any Slave. Asynchronous replication is efficient, but when something goes wrong with the Master/Slave pair there is a real risk of the data being out of sync, or even lost.

Semi-synchronous replication was introduced in MySQL 5.5 to ensure that at least one Slave has complete data when something goes wrong with the master. On timeout, the master can also fall back temporarily to asynchronous replication so that the business keeps running, and switch back to semi-synchronous mode once a slave catches up.
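The current mode can be read from a status variable on the master (the plug-in installation itself is shown further below): ON means transactions are being acknowledged semi-synchronously, OFF means the master has fallen back to asynchronous replication, for example after rpl_semi_sync_master_timeout expired.

mysql> show status like 'Rpl_semi_sync_master_status';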

How it works:

Compared with other HA software, the purpose of MHA is to keep the Master in a MySQL replication setup highly available. Its most important feature is that it can repair the differences between the relay logs of multiple slaves, bring all slaves to a consistent state, then promote one of them to be the new Master and point the remaining slaves at it.

- Save binary log events (binlog events) from the crashed master.
- Identify the slave with the latest updates.
- Apply the differential relay logs to the other slaves.
- Apply the binary log events saved from the master.
- Promote one slave to be the new master.
- Point the other slaves at the new master and start replicating from it.

Currently, MHA mainly supports a one-master, multiple-slave architecture. To build MHA, the replication cluster must contain at least three database servers: one master and two slaves, i.e. one acting as the master, one as a standby (candidate) master, and one as a pure slave.

The deployment environment is as follows:

Role               IP                Hostname     OS
master             192.168.137.134   master       CentOS 6.5 x86_64
candidate master   192.168.137.130   Candidate    CentOS 6.5 x86_64
slave + manager    192.168.137.146   slave        CentOS 6.5 x86_64

Master provides write service to the outside, Candidate is the standby master, and the management node is placed on the pure slave machine. Once the master goes down, Candidate is promoted to the new master.

I. Basic environment preparation

1. Configure the epel source on 3 machines

wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-6.repo

rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

2. Establish a non-interactive login environment for ssh.

[root@master ~]# ssh-keygen -t rsa -P ''

chmod 600 .ssh/*

cat .ssh/id_rsa.pub > .ssh/authorized_keys

scp -p .ssh/id_rsa .ssh/authorized_keys 192.168.137.130:/root/.ssh

scp -p .ssh/id_rsa .ssh/authorized_keys 192.168.137.146:/root/.ssh
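A quick way to confirm that the non-interactive login works is to run a remote command from each of the three hosts; every command should print the remote hostname without prompting for a password:

ssh root@192.168.137.130 hostname
ssh root@192.168.137.146 hostname
ssh root@192.168.137.134 hostname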

II. Configure MySQL semi-synchronous replication

Note: the basic MySQL master-slave replication setup itself is not demonstrated here (a minimal example is sketched after the grants below).

Grants on master:

grant replication slave, replication client on *.* to 'repl'@'192.168.137.%' identified by '123456';
grant all on *.* to 'mhauser'@'192.168.137.%' identified by '123456';

Grants on Candidate:

grant replication slave, replication client on *.* to 'repl'@'192.168.137.%' identified by '123456';
grant all on *.* to 'mhauser'@'192.168.137.%' identified by '123456';

Grants on slave:

grant all on *.* to 'mhauser'@'192.168.137.%' identified by '123456';
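Since the replication setup itself is not demonstrated, here is a minimal sketch of pointing Candidate and slave at the current master; the binlog file name and position are placeholders and should be taken from SHOW MASTER STATUS on the master:

mysql> change master to master_host='192.168.137.134', master_user='repl', master_password='123456', master_log_file='mysql-bin.000001', master_log_pos=120;   -- file/position are placeholders
mysql> start slave;
mysql> show slave status\G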

If MySQL's default asynchronous mode is used, a hardware failure or crash of the master can lose data, so it is recommended to configure MySQL semi-synchronous replication when setting up MHA.

Note: the MySQL semi-synchronous plug-in was contributed by Google. It lives under /usr/local/mysql/lib/plugin/: semisync_master.so is for the master role and semisync_slave.so is for the slave role.

mysql> show variables like '%plugin_dir%';
+---------------+------------------------------+
| Variable_name | Value                        |
+---------------+------------------------------+
| plugin_dir    | /usr/local/mysql/lib/plugin/ |
+---------------+------------------------------+

1. Install the plug-ins on all nodes (master, Candidate, slave)

Installing a plug-in into MySQL requires support for dynamic loading. Check whether it is supported with the following query:

mysql> show variables like '%have_dynamic_loading%';
+----------------------+-------+
| Variable_name        | Value |
+----------------------+-------+
| have_dynamic_loading | YES   |
+----------------------+-------+

Install the semi-synchronous plug-ins (semisync_master.so and semisync_slave.so) on all MySQL database servers:

mysql> install plugin rpl_semi_sync_master soname 'semisync_master.so';
mysql> install plugin rpl_semi_sync_slave soname 'semisync_slave.so';

Check that the plug-ins are installed correctly:

mysql> show plugins;

or

mysql> select * from information_schema.plugins;

View the semi-synchronous variables:

mysql> show variables like '%rpl_semi_sync%';

+------------------------------------+-------+
| Variable_name                      | Value |
+------------------------------------+-------+
| rpl_semi_sync_master_enabled       | OFF   |
| rpl_semi_sync_master_timeout       | 10000 |
| rpl_semi_sync_master_trace_level   | 32    |
| rpl_semi_sync_master_wait_no_slave | ON    |
| rpl_semi_sync_slave_enabled        | OFF   |
| rpl_semi_sync_slave_trace_level    | 32    |
+------------------------------------+-------+

As the output above shows, the semi-synchronous replication plug-ins are installed but not yet enabled, so both *_enabled variables are OFF.
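They can also be switched on at runtime without a restart (a minimal sketch; the my.cnf settings in the next step make the change permanent):

On the master:

mysql> set global rpl_semi_sync_master_enabled = 1;
mysql> set global rpl_semi_sync_master_timeout = 10000;

On Candidate and slave:

mysql> set global rpl_semi_sync_slave_enabled = 1;
mysql> stop slave io_thread;
mysql> start slave io_thread;   -- restart the IO thread so the slave re-registers as semi-synchronous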

2. Modify the my.cnf file and configure master-slave synchronization:

Note: if the master MySQL server already exists and the slave MySQL servers are only being added now, copy the databases to be synchronized from the master to the slaves before configuring replication (for example, back up the databases on the master first, then restore the backup on each slave).

Master MySQL host (my.cnf):

server-id = 1
log-bin = mysql-bin
binlog_format = mixed
log-bin-index = mysql-bin.index
rpl_semi_sync_master_enabled = 1
rpl_semi_sync_master_timeout = 10000
rpl_semi_sync_slave_enabled = 1
relay_log_purge = 0
relay-log = relay-bin
relay-log-index = relay-bin.index

Note:

rpl_semi_sync_master_enabled=1: 1 means enabled, 0 means disabled.

rpl_semi_sync_master_timeout=10000: in milliseconds. If no acknowledgement arrives within 10 seconds, the master stops waiting and falls back to asynchronous replication.

Candidate host:

server-id = 2
log-bin = mysql-bin
binlog_format = mixed
log-bin-index = mysql-bin.index
relay_log_purge = 0
relay-log = relay-bin
relay-log-index = slave-relay-bin.index
rpl_semi_sync_master_enabled = 1
rpl_semi_sync_master_timeout = 10000
rpl_semi_sync_slave_enabled = 1

Note: relay_log_purge=0 forbids the SQL thread from automatically deleting a relay log after executing it. In MHA scenarios, recovering a lagging slave may depend on the relay logs of another slave, so automatic deletion must be disabled.

Slave host:

server-id = 3
log-bin = mysql-bin
relay-log = relay-bin
relay-log-index = slave-relay-bin.index
read_only = 1
rpl_semi_sync_slave_enabled = 1

View semi-synchronous related information

mysql> show variables like '%rpl_semi_sync%';

View the semi-synchronous status:

mysql> show status like '%rpl_semi_sync%';

| Rpl_semi_sync_master_clients | 2 |

Parameters to focus on:

Rpl_semi_sync_master_status: whether the master is currently running in asynchronous or semi-synchronous replication mode

Rpl_semi_sync_master_clients: how many slaves are configured for semi-synchronous replication

Rpl_semi_sync_master_yes_tx: the number of commits successfully acknowledged by a slave

Rpl_semi_sync_master_no_tx: the number of commits not acknowledged by a slave

Rpl_semi_sync_master_tx_avg_wait_time: the average extra time a transaction waits because semi-sync is enabled

Rpl_semi_sync_master_net_avg_wait_time: the average network wait time after a transaction enters the wait queue
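As a quick sanity check (the database name here is only an example), commit something on the master and watch the counter of acknowledged transactions:

mysql> create database semisync_test;
mysql> show status like 'Rpl_semi_sync_master_yes_tx';

If Rpl_semi_sync_master_yes_tx increases and Rpl_semi_sync_master_no_tx stays at 0, the commit was acknowledged by at least one semi-synchronous slave.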

III. Configure mysql-mha

1. Install on all MySQL nodes:

rpm -ivh perl-DBD-MySQL-4.013-3.el6.i686.rpm   [or: yum -y install perl-DBD-MySQL]

rpm -ivh mha4mysql-node-0.56-0.el6.noarch.rpm

2. The Manager node also needs the following dependent Perl packages:

rpm -ivh perl-Config-Tiny-2.12-7.1.el6.noarch.rpm
rpm -ivh perl-DBD-MySQL-4.013-3.el6.i686.rpm   [or: yum -y install perl-DBD-MySQL]
rpm -ivh compat-db43-4.3.29-15.el6.x86_64.rpm
rpm -ivh perl-Mail-Sender-0.8.16-3.el6.noarch.rpm
rpm -ivh perl-Parallel-ForkManager-0.7.9-1.el6.noarch.rpm
rpm -ivh perl-TimeDate-1.16-11.1.el6.noarch.rpm
rpm -ivh perl-MIME-Types-1.28-2.el6.noarch.rpm
rpm -ivh perl-MailTools-2.04-4.el6.noarch.rpm
rpm -ivh perl-Email-Date-Format-1.002-5.el6.noarch.rpm
rpm -ivh perl-Params-Validate-0.92-3.el6.x86_64.rpm
rpm -ivh perl-MIME-Lite-3.027-2.el6.noarch.rpm
rpm -ivh perl-Mail-Sendmail-0.79-12.el6.noarch.rpm
rpm -ivh perl-Log-Dispatch-2.27-1.el6.noarch.rpm
yum install -y perl-Time-HiRes-1.9721-144.el6.x86_64
rpm -ivh mha4mysql-manager-0.56-0.el6.noarch.rpm

3. Configure mha

The configuration file lives on the management node and usually contains the hostname, MySQL user name, password, working directory, and so on of each MySQL server.

mkdir /etc/masterha/

vim /etc/masterha/app1.cnf

[server default]
user=mhauser
password=123456
manager_workdir=/data/masterha/app1
manager_log=/data/masterha/app1/manager.log
remote_workdir=/data/masterha/app1
ssh_user=root
repl_user=repl
repl_password=123456
ping_interval=1

[server1]
hostname=192.168.137.134
port=3306
master_binlog_dir=/usr/local/mysql/data
candidate_master=1

[server2]
hostname=192.168.137.130
port=3306
master_binlog_dir=/usr/local/mysql/data
candidate_master=1

[server3]
hostname=192.168.137.146
port=3306
master_binlog_dir=/usr/local/mysql/data
no_master=1

Explanation of the main configuration items:

manager_workdir=/data/masterha/app1           // the manager's working directory
manager_log=/data/masterha/app1/manager.log   // the manager's log file
user=mhauser           // monitoring user used by the manager
password=123456        // password of the monitoring user
ssh_user=root          // user for SSH connections
repl_user=repl         // replication user
repl_password=123456   // password of the replication user
ping_interval=1        // interval, in seconds, for pinging the master; the default is 3 seconds, and failover is triggered automatically after three missed responses
master_binlog_dir=/usr/local/mysql/data   // where the master keeps its binlogs, so that MHA can find them; this is the MySQL data directory
candidate_master=1     // marks the host as a candidate master; when a master-slave switch occurs, a slave with this flag is promoted to the new master

Check whether SSH mutual trust between the nodes is configured correctly:

masterha_check_ssh --conf=/etc/masterha/app1.cnf

Result: All SSH connection tests passed successfully.

Check whether master-slave replication between the nodes is healthy:

masterha_check_repl --conf=/etc/masterha/app1.cnf

Result: MySQL Replication Health is OK.

If you encounter this error during verification: Can't exec "mysqlbinlog".

The workaround is to execute on all servers:

ln -s /usr/local/mysql/bin/* /usr/local/bin/

Start manager:

nohup /usr/bin/masterha_manager --conf=/etc/masterha/app1.cnf --remove_dead_master_conf --ignore_last_failover > /etc/masterha/manager.log 2>&1 &

--remove_dead_master_conf: after a master-slave switch, the old master's IP is removed from the configuration file.

--ignore_last_failover: ignore the switch-completion file left by a previous failover; without this option, MHA refuses to fail over again within 8 hours.

--ignore_fail_on_start

## by default MHA refuses to start when a slave node is down; with this option MHA can be started even if a node is down.

Turn off MHA:

masterha_stop --conf=/etc/masterha/app1.cnf

View MHA status:

masterha_check_status --conf=/etc/masterha/app1.cnf

app1 (pid:45128) is running(0:PING_OK), master:192.168.137.134

IV. Simulated failover

Stop the master.

/etc/init.d/mysqld stop

View the MHA log /data/masterha/app1/manager.log:

----- Failover Report -----

app1: MySQL Master failover 192.168.137.134(192.168.137.134:3306) to 192.168.137.130(192.168.137.130:3306) succeeded

Master 192.168.137.134(192.168.137.134:3306) is down!

Check MHA Manager logs at zifuji:/data/masterha/app1/manager.log for details.

Started automated (non-interactive) failover.
The latest slave 192.168.137.130(192.168.137.130:3306) has all relay logs for recovery.
Selected 192.168.137.130(192.168.137.130:3306) as a new master.
192.168.137.130(192.168.137.130:3306): OK: Applying all logs succeeded.
192.168.137.146(192.168.137.146:3306): This host has the latest relay log events.
Generating relay diff files from the latest slave succeeded.
192.168.137.146(192.168.137.146:3306): OK: Applying all logs succeeded. Slave started, replicating from 192.168.137.130(192.168.137.130:3306)
192.168.137.130(192.168.137.130:3306): Resetting slave info succeeded.
Master failover to 192.168.137.130(192.168.137.130:3306) completed successfully.

3. View slave replication status

*************************** 1. row ***************************

Slave_IO_State: Waiting for master to send event

Master_Host: 192.168.137.130

Master_User: repl

Master_Port: 3306

Connect_Retry: 60

Master_Log_File: mysql-bin.000003

Daily main operation steps on the MHA Manager side

1) Check whether the following file exists and delete it if it does.

(After a master-slave switch, the MHA manager service stops automatically and the file app1.failover.complete is created under the manager_workdir directory (/data/masterha/app1/app1.failover.complete). Before MHA can be started again, this file must not exist.)

find / -name 'app1.failover.complete'

rm -f /data/masterha/app1/app1.failover.complete

2) Check the current MHA configuration:

# masterha_check_repl --conf=/etc/masterha/app1.cnf

3) Start MHA:

# nohup masterha_manager --conf=/etc/masterha/app1.cnf > /etc/masterha/manager.log 2>&1 &

When a slave node is down, MHA will not start by default; adding --ignore_fail_on_start lets MHA start even if a node is down, as follows:

# nohup masterha_manager --conf=/etc/masterha/app1.cnf --ignore_fail_on_start > /etc/masterha/manager.log 2>&1 &

4) Stop MHA: masterha_stop --conf=/etc/masterha/app1.cnf

5) Check status:

# masterha_check_status --conf=/etc/masterha/app1.cnf

6) Check the log:

# tail -f /etc/masterha/manager.log

7) Master-slave switching, the follow-up work of the original master library

vim /etc/my.cnf

read_only = ON
relay_log_purge = 0

mysql> reset slave all;
mysql> reset master;

/etc/init.d/mysqld restart

mysql> change master to master_host='192.168.137.130', master_user='repl', master_password='123456';

## the original master now replicates from the new master

masterha_check_status --conf=/etc/masterha/app1.cnf

app1 (pid:45950) is running(0:PING_OK), master:192.168.137.130

Note: "PING_OK" is shown when everything is normal; otherwise "NOT_RUNNING" is shown, meaning that MHA monitoring is not running.

Delete relay logs periodically

Because relay_log_purge=0 is set on the slaves in the replication configuration, the slave nodes have to delete relay logs periodically themselves. It is recommended that each slave node purge its relay logs at a different time.

crontab -e

0 5 * * * /usr/local/bin/purge_relay_logs --user=root --password=pwd123 --port=3306 --disable_relay_log_purge >> /var/log/purge_relay.log 2>&1
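purge_relay_logs can also be run by hand once to confirm that the credentials and options work; the --workdir option shown here (the directory where hard links are created while relay logs are rotated, default /var/tmp) is an extra assumption:

/usr/local/bin/purge_relay_logs --user=root --password=pwd123 --port=3306 --disable_relay_log_purge --workdir=/tmp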

V. Configure VIP

The virtual IP can be configured in two ways: either managed by keepalived, or brought up and down by a script (no keepalived or heartbeat-like software required).

1. Managing the virtual IP with keepalived. The keepalived setup is as follows:

Install keepalived on master and Candidate host

Install the dependency package:

[root@master ~]# yum install openssl-devel libnfnetlink-devel libnfnetlink popt-devel kernel-devel -y

wget http://www.keepalived.org/software/keepalived-1.2.20.tar.gz
ln -s /usr/src/kernels/2.6.32-642.1.1.el6.x86_64 /usr/src/linux
tar -xzf keepalived-1.2.20.tar.gz; cd keepalived-1.2.20
./configure --prefix=/usr/local/keepalived; make && make install
ln -s /usr/local/keepalived/sbin/keepalived /usr/bin/keepalived
cp /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/keepalived
mkdir /etc/keepalived
ln -s /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf
chmod 755 /etc/init.d/keepalived
chkconfig --add keepalived
cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
service keepalived restart
echo 1 > /proc/sys/net/ipv4/ip_forward

Modify the keepalived configuration file (on master):

vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {
    notification_email {
        guopeng@163.com
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id mysql-ha1
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.137.100
    }
}

Configure on candidate master (Candidate)

[root@Candidate keepalived-1.2.20]# vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {
    notification_email {
        sysadmin@firewall.loc
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id mysql-ha2
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.137.100
    }
}

Start the keepalived service on master and watch the log:

/etc/init.d/keepalived start

tail -f /var/log/messages

Aug 14 01:05:25 minion Keepalived_vrrp[39720]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.137.100

[root@master ~]# ip addr show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:57:66:49 brd ff:ff:ff:ff:ff:ff
    inet 192.168.137.134/24 brd 192.168.137.255 scope global eth0
    inet 192.168.137.100/32 scope global eth0
    inet6 fe80::20c:29ff:fe57:6649/64 scope link
       valid_lft forever preferred_lft forever

[root@Candidate ~]# ip addr show dev eth0    ## no virtual IP on the standby master at this point
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:a5:b4:85 brd ff:ff:ff:ff:ff:ff
    inet 192.168.137.130/24 brd 192.168.137.255 scope global eth0
    inet6 fe80::20c:29ff:fea5:b485/64 scope link
       valid_lft forever preferred_lft forever

Note:

keepalived on both servers above is set to BACKUP state. keepalived supports two deployment modes: master->backup and backup->backup, and they behave quite differently. In master->backup mode, once the master database goes down the virtual IP drifts automatically to the standby, but when the original master is repaired and keepalived is started again, it takes the virtual IP back, even if non-preemptive mode (nopreempt) is set. In backup->backup mode, when the master database goes down the virtual IP also drifts to the standby, but when the original master recovers and its keepalived service starts, it does not take the virtual IP back from the new master, even if its priority is higher. To reduce the number of VIP moves, the repaired master is usually kept as the new standby.
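For reference, a minimal sketch of the backup->backup style used here, with the optional nopreempt flag written out explicitly (nopreempt is only honoured when state is BACKUP; the Candidate would keep its lower priority of 90):

vrrp_instance VI_1 {
    state BACKUP
    nopreempt            # do not take the VIP back when this node comes back up
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.137.100
    }
}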

2. Bring keepalived under MHA's control (MHA stops keepalived when the MySQL service process dies):

To do this, we only need to modify the master_ip_failover script that MHA triggers during a switch, adding the handling of keepalived when the master goes down.

Edit the script /scripts/master_ip_failover on the Manager node as follows:

mkdir /scripts

vim /scripts/master_ip_failover

#!/usr/bin/env perl

use strict;
use warnings FATAL => 'all';
use Getopt::Long;

my (
    $command, $ssh_user, $orig_master_host, $orig_master_ip,
    $orig_master_port, $new_master_host, $new_master_ip, $new_master_port
);

my $vip = '192.168.137.100';
my $ssh_start_vip = "/etc/init.d/keepalived start";
my $ssh_stop_vip  = "/etc/init.d/keepalived stop";

GetOptions(
    'command=s'          => \$command,
    'ssh_user=s'         => \$ssh_user,
    'orig_master_host=s' => \$orig_master_host,
    'orig_master_ip=s'   => \$orig_master_ip,
    'orig_master_port=i' => \$orig_master_port,
    'new_master_host=s'  => \$new_master_host,
    'new_master_ip=s'    => \$new_master_ip,
    'new_master_port=i'  => \$new_master_port,
);

exit &main();

sub main {
    print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";
    if ( $command eq "stop" || $command eq "stopssh" ) {
        my $exit_code = 1;
        eval {
            print "Disabling the VIP on old master: $orig_master_host \n";
            &stop_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn "Got Error: $@\n";
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "start" ) {
        my $exit_code = 10;
        eval {
            print "Enabling the VIP - $vip on the new master - $new_master_host \n";
            &start_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn $@;
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "status" ) {
        print "Checking the Status of the script.. OK \n";
        exit 0;
    }
    else {
        &usage();
        exit 1;
    }
}

# start keepalived on the new master so it takes over the VIP
sub start_vip() {
    `ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}

# A simple system call that disables the VIP on the old master (stops keepalived there)
sub stop_vip() {
    `ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}

sub usage {
    print
    "Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}

Now that the script is in place, reference this failover script in /etc/masterha/app1.cnf.

Stop MHA:

masterha_stop --conf=/etc/masterha/app1.cnf

Enable the following parameter in the configuration file /etc/masterha/app1.cnf (add it under [server default]):

master_ip_failover_script=/scripts/master_ip_failover

Start MHA:

# nohup masterha_manager --conf=/etc/masterha/app1.cnf > /etc/masterha/manager.log 2>&1 &

Check status:

# masterha_check_status --conf=/etc/masterha/app1.cnf

app1 (pid:51284) is running(0:PING_OK), master:192.168.137.134

Check whether there is an error in the cluster replication status:

# masterha_check_repl --conf=/etc/masterha/app1.cnf

192.168.137.134 (192.168.137.134:3306) (current master)
 +--192.168.137.130 (192.168.137.130:3306)
 +--192.168.137.146 (192.168.137.146:3306)

Tue May 9 14:40:57 2017 - [info] Checking replication health on 192.168.137.130..
Tue May 9 14:40:57 2017 - [info] ok.
Tue May 9 14:40:57 2017 - [info] Checking replication health on 192.168.137.146..
Tue May 9 14:40:57 2017 - [info] ok.
Tue May 9 14:40:57 2017 - [info] Checking master_ip_failover_script status:
Tue May 9 14:40:57 2017 - [info] /scripts/master_ip_failover --command=status --ssh_user=root --orig_master_host=192.168.137.134 --orig_master_ip=192.168.137.134 --orig_master_port=3306

IN SCRIPT TEST====/etc/init.d/keepalived stop==/etc/init.d/keepalived start===

Checking the Status of the script.. OK

Tue May 9 14:40:57 2017-[info] OK.

Tue May 9 14:40:57 2017-[warning] shutdown_script is not defined.

Tue May 9 14:40:57 2017-[info] Got exit code 0 (Not master dead).

MySQL Replication Health is OK.

Note: the effect of the modified /scripts/master_ip_failover is that when the master database fails, the MHA switch is triggered, MHA Manager stops the keepalived service on the failed master, the virtual IP drifts to the standby, and the switch completes.

Alternatively, you can add a check script to keepalived itself that monitors whether MySQL is running properly and, if it is not, kills the keepalived process (see "MySQL High availability keepalived+mysql dual hosts").
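A minimal sketch of such a check script (the path, the credentials, and the stop action are assumptions to adapt to your environment); it could be run from cron or from a keepalived vrrp_script block:

#!/bin/bash
# /etc/keepalived/check_mysql.sh (hypothetical path)
# If mysqld does not answer a ping, stop keepalived so the VIP drifts to the standby.
if ! /usr/local/mysql/bin/mysqladmin -umhauser -p123456 ping > /dev/null 2>&1; then
    /etc/init.d/keepalived stop
fi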

Test: stop mysql on master

[root@master ~]# /etc/init.d/mysqld stop
Shutting down MySQL. [ OK ]

Go to slave (192.168.137.146) to view the status of slave:

mysql> show slave status\G
*************************** 1. row ***************************

Slave_IO_State: Waiting for master to send event

Master_Host: 192.168.137.130

Master_User: repl

Master_Port: 3306

Connect_Retry: 60

As shown above, the slave now points to the new master 192.168.137.130 (it was 192.168.137.134 before the failover).

View vip bindings:

View vip bindings on 192.168.137.134

[root@master ~]# ip addr show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:57:66:49 brd ff:ff:ff:ff:ff:ff
    inet 192.168.137.134/24 brd 192.168.137.255 scope global eth0
    inet6 fe80::20c:29ff:fe57:6649/64 scope link
       valid_lft forever preferred_lft forever

View vip bindings on 192.168.137.130

[root@Candidate ~]# ip addr show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:a5:b4:85 brd ff:ff:ff:ff:ff:ff
    inet 192.168.137.130/24 brd 192.168.137.255 scope global eth0
    inet 192.168.137.100/32 scope global eth0

From the above display, we can see that the vip address has drifted to 192.168.137.130.

Follow-up work after the master-slave switch: Candidate has now become the master, so the original master needs to be rebuilt as a slave of the new master:

- Rebuild the original master as a slave of the new master
- Start keepalived on it
- rm -fr app1.failover.complete
- Start the manager

3. Realize VIP switching through script

If you use a script to manage the VIP, you need to bind the VIP manually on the master server first:

]# /sbin/ifconfig eth0:0 192.168.137.100/24

vim /scripts/master_ip_failover

my $vip = '192.168.137.100/24';
my $key = '0';
my $ssh_start_vip = "/sbin/ifconfig eth0:$key $vip";
my $ssh_stop_vip = "/sbin/ifconfig eth0:$key down";

The subsequent operation is the same as the keepalived operation above

To prevent split-brain, it is recommended that production environments manage the virtual IP with a script rather than with keepalived. At this point, the basic MHA cluster has been configured.
