
Deployment of MySQL MHA high availability environment

2025-03-28 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/01 Report--

First, install the basic environment of MHA

1. Install the MHA node package

(1) Basic environment

Role      IP address       Hostname
===================================
Master    192.168.1.121    node1
Slave     192.168.1.122    node2
Slave     192.168.1.123    node3
Monitor   192.168.1.125    node5

(2) On node1, node2, node3, and node5:

# vi /etc/hosts

192.168.1.121 node1

192.168.1.122 node2

192.168.1.123 node3

192.168.1.125 node5

Install the MHA node package:

# rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
# yum install perl-DBD-MySQL perl-CPAN -y

# tar xf mha4mysql-node-0.56.tar.gz
# cd mha4mysql-node-0.56
# perl Makefile.PL
# make && make install

2. Install MHA Manager

Operate on node5, the management node. Note: the MHA manager host also needs the MHA node package installed.

# yum install perl-DBD-MySQL perl-CPAN perl-Config-Tiny perl-Log-Dispatch perl-Parallel-ForkManager perl-Time-HiRes -y

# tar xf mha4mysql-manager-0.56.tar.gz
# cd mha4mysql-manager-0.56
# perl Makefile.PL
# make && make install

# Note: all installed scripts are placed in the /usr/local/bin/ directory.

3. Configure passwordless SSH login between nodes (the MHA hosts log in to each other with keys)

In node5 (Monitor):

# ssh-keygen -t rsa
# ssh-copy-id -i /root/.ssh/id_rsa.pub root@node1
# ssh-copy-id -i /root/.ssh/id_rsa.pub root@node2
# ssh-copy-id -i /root/.ssh/id_rsa.pub root@node3

In node1 (Master):

# ssh-keygen -t rsa
# ssh-copy-id -i /root/.ssh/id_rsa.pub root@node2
# ssh-copy-id -i /root/.ssh/id_rsa.pub root@node3
# ssh-copy-id -i /root/.ssh/id_rsa.pub root@node5

In node2 (slave):

# ssh-keygen -t rsa
# ssh-copy-id -i /root/.ssh/id_rsa.pub root@node1
# ssh-copy-id -i /root/.ssh/id_rsa.pub root@node3
# ssh-copy-id -i /root/.ssh/id_rsa.pub root@node5

In node3 (slave):

# ssh-keygen -t rsa
# ssh-copy-id -i /root/.ssh/id_rsa.pub root@node1
# ssh-copy-id -i /root/.ssh/id_rsa.pub root@node2
# ssh-copy-id -i /root/.ssh/id_rsa.pub root@node5
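The key exchange above is repetitive: every node copies its key to every other node except itself. A minimal sketch of scripting it; `gen_ssh_copy_cmds` is a hypothetical helper (not part of MHA) that only prints the commands a given node should run, using the host names from the /etc/hosts entries above.

```shell
#!/bin/bash
# gen_ssh_copy_cmds SELF NODE... : print the ssh-copy-id commands SELF should run,
# skipping itself. Hypothetical helper for illustration only.
gen_ssh_copy_cmds() {
  local self="$1"; shift
  local peer
  for peer in "$@"; do
    [ "$peer" = "$self" ] && continue   # a node never copies a key to itself
    echo "ssh-copy-id -i /root/.ssh/id_rsa.pub root@$peer"
  done
}

# Example: the commands node1 should run
gen_ssh_copy_cmds node1 node1 node2 node3 node5
```

Piping the output through `sh` (after verifying it) would perform the actual distribution.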

Second, build a master-slave replication environment

Master-slave replication environment configuration process

(1) The MySQL installation itself is omitted here, but all three nodes need to create the following symlinks.

node1 (master), node2 (master/slave), node3 (slave)

Note: create the following symlinks:

# ln -s /usr/local/mysql/bin/* /usr/local/bin/

node1 my.cnf:

server-id = 1
binlog-format = ROW
log-bin = master-bin
log-bin-index = master-bin.index
log-slave-updates = true
relay_log_purge = 0

node2 my.cnf:

server-id = 2
binlog-format = ROW
log-bin = master-bin
log-bin-index = master-bin.index
log-slave-updates = true
relay_log_purge = 0

node3 my.cnf:

binlog-format = ROW
log-bin = mysql-bin
relay-log = slave-relay-bin
relay-log-index = slave-relay-bin.index
log-slave-updates = true
server-id = 11
skip-name-resolve
relay_log_purge = 0
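A common replication mistake is a duplicated server-id, which breaks binlog event filtering silently. A quick sanity-check sketch; `check_server_ids` is a hypothetical helper, and the ids fed to it are the ones used in the my.cnf files above (node1=1, node2=2, node3=11).

```shell
#!/bin/bash
# check_server_ids ID... : fail if any server-id appears more than once.
# Hypothetical helper for illustration only.
check_server_ids() {
  local dupes
  dupes=$(printf '%s\n' "$@" | sort | uniq -d)
  if [ -n "$dupes" ]; then
    echo "duplicate server-id: $dupes"
    return 1
  fi
  echo "server-ids ok"
}

# The ids from the my.cnf files above
check_server_ids 1 2 11
```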

(2) Take a full backup of the data on node1 (Master):

# mysqldump -uroot -p123456 --master-data=2 --single-transaction -R --triggers -A > all.sql

Here --master-data=2 records the master's binlog file name and position at backup time, as a comment in the dump.

(3) Create a replication user on node1 (Master):

mysql> grant replication slave on *.* to 'repl'@'192.168.1.%' identified by '123456';
mysql> flush privileges;

(4) Find the binlog name and position (MASTER_LOG_FILE and MASTER_LOG_POS) recorded at backup time:

# head -n 30 all.sql | grep 'CHANGE MASTER TO'
-- CHANGE MASTER TO MASTER_LOG_FILE='master-bin.000004', MASTER_LOG_POS=120;
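The two coordinates can also be extracted mechanically, which is handy when scripting slave setup. A minimal sketch; the sample line mirrors the --master-data=2 comment shown above.

```shell
#!/bin/bash
# Extract MASTER_LOG_FILE and MASTER_LOG_POS from the mysqldump header comment.
# In real use, replace the hard-coded line with: head -n 30 all.sql | grep 'CHANGE MASTER TO'
line="-- CHANGE MASTER TO MASTER_LOG_FILE='master-bin.000004', MASTER_LOG_POS=120;"
log_file=$(echo "$line" | sed -n "s/.*MASTER_LOG_FILE='\([^']*\)'.*/\1/p")
log_pos=$(echo "$line" | sed -n "s/.*MASTER_LOG_POS=\([0-9]*\).*/\1/p")
echo "file=$log_file pos=$log_pos"
```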

(5) Copy the backup to 192.168.1.122 and 192.168.1.123:

# scp all.sql 192.168.1.122:/root/
# scp all.sql 192.168.1.123:/root/

(6) Import the backup on both slaves and execute the replication commands.

On the node2 and node3 hosts:

# mysql -uroot -p123456 < all.sql

mysql> CHANGE MASTER TO MASTER_HOST='192.168.1.121', MASTER_USER='repl', MASTER_PASSWORD='123456', MASTER_LOG_FILE='master-bin.000004', MASTER_LOG_POS=120;
mysql> start slave;
mysql> show slave status\G
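In the `show slave status\G` output, the two fields that must both read Yes are Slave_IO_Running and Slave_SQL_Running. A minimal check sketch; `replication_ok` is a hypothetical helper fed sample text here, and in real use you would feed it `mysql -uroot -p123456 -e 'show slave status\G'`.

```shell
#!/bin/bash
# replication_ok TEXT : succeed only if both replication threads report Yes.
# Hypothetical helper for illustration only.
replication_ok() {
  grep -q 'Slave_IO_Running: Yes' <<< "$1" &&
  grep -q 'Slave_SQL_Running: Yes' <<< "$1"
}

sample='Slave_IO_Running: Yes
Slave_SQL_Running: Yes'
if replication_ok "$sample"; then echo "replication healthy"; else echo "replication broken"; fi
```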

Create the MHA administrative user, on the master:

mysql> grant all privileges on *.* to 'root'@'192.168.1.%' identified by '123456';
mysql> flush privileges;

Third, configure the Keepalived VIP

The VIP can be managed in two ways: either keepalived floats the virtual IP, or a script does. This article uses keepalived.

1. Install keepalived on node1 (master) and node2 (candidate master):

# wget http://www.keepalived.org/software/keepalived-1.2.12.tar.gz
# tar xf keepalived-1.2.12.tar.gz
# cd keepalived-1.2.12
# ./configure --prefix=/usr/local/keepalived
# make && make install
# cp /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/
# cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
# mkdir /etc/keepalived
# cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/
# cp /usr/local/keepalived/sbin/keepalived /usr/sbin/

Configure the keepalived configuration file on node1 (master):

Note: keepalived is configured backup-backup with nopreempt, so after the VIP moves it does not move back when the old master comes up again. Monitoring is provided by MHA; keepalived itself does not monitor mysqld.

# vi /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {
   notification_email {
     abc@163.com
   }
   notification_email_from dba@dbserver.com
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id MySQL-HA
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 150
    advert_int 1
    nopreempt

    authentication {
        auth_type PASS
        auth_pass 1111
    }

    virtual_ipaddress {
        192.168.1.130
    }
}

Configure the keepalived configuration file on node2 (standby master); it differs only in the priority:

# vi /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {
   notification_email {
     abc@163.com
   }
   notification_email_from dba@dbserver.com
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id MySQL-HA
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 120
    advert_int 1
    nopreempt

    authentication {
        auth_type PASS
        auth_pass 1111
    }

    virtual_ipaddress {
        192.168.1.130
    }
}

4. Start the keepalived service on node1 and node2:

# service keepalived start
# chkconfig keepalived on

5. Check the VIP on node1:

[root@node1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:4e:53:71 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.121/24 brd 192.168.1.255 scope global eth0
    inet 192.168.1.130/32 scope global eth0
    inet6 fe80::20c:29ff:fe4e:5371/64 scope link
       valid_lft forever preferred_lft forever
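The check above can be scripted for monitoring: the VIP is healthy when an `inet 192.168.1.130` line appears in the `ip a` output. A minimal sketch; `has_vip` is a hypothetical helper fed sample text here, and in real use you would feed it `ip a` directly.

```shell
#!/bin/bash
# has_vip VIP TEXT : succeed if TEXT (ip a output) shows the VIP bound.
# Hypothetical helper for illustration only.
has_vip() {
  local vip="$1" addr_output="$2"
  grep -q "inet $vip" <<< "$addr_output"
}

sample='inet 192.168.1.121/24 brd 192.168.1.255 scope global eth0
inet 192.168.1.130/32 scope global eth0'
if has_vip 192.168.1.130 "$sample"; then echo "VIP present"; else echo "VIP missing"; fi
```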

Fourth, configure MHA

1. On the monitor node, create the MHA working directory and the configuration file (a sample configuration file ships in the unpacked manager package):

# mkdir -p /etc/masterha
# mkdir -p /var/log/masterha/app1
# cp mha4mysql-manager-0.56/samples/conf/app1.cnf /etc/masterha/

Modify the app1.cnf configuration file with the following contents:

# cat /etc/masterha/app1.cnf
[server default]
manager_workdir=/var/log/masterha/app1
manager_log=/var/log/masterha/app1/manager.log
master_binlog_dir=/usr/local/mysql/data/
master_ip_failover_script=/usr/local/bin/master_ip_failover
master_ip_online_change_script=/usr/local/bin/master_ip_online_change
password=123456
user=root
ping_interval=1
remote_workdir=/tmp
repl_password=123456
repl_user=repl
report_script=/usr/local/bin/send_report
ssh_user=root

[server1]
hostname=192.168.1.121
port=3306

[server2]
hostname=192.168.1.122
port=3306
candidate_master=1
check_repl_delay=0

[server3]
hostname=192.168.1.123
port=3306

Description:

master_ip_failover_script=/usr/local/bin/master_ip_failover            # script MHA executes on automatic failover; must be adapted
master_ip_online_change_script=/usr/local/bin/master_ip_online_change  # script executed during a manual online switchover; must be adapted
report_script=/usr/local/bin/send_report                               # sends a report email after a switchover; must be adapted

2. Set how relay logs are purged (on each slave node):

(1) On node2 and node3:

Add relay_log_purge = 0 to the my.cnf configuration file; this was already done above.

(2) Set up a script to purge relay logs periodically (on node2 and node3):

# cat purge_relay_log.sh
#!/bin/bash
user=root
passwd=123456
port=3306
log_dir='/data/masterha/log'
work_dir='/data'
purge='/usr/local/bin/purge_relay_logs'

if [ ! -d $log_dir ]
then
    mkdir $log_dir -p
fi

$purge --user=$user --password=$passwd --disable_relay_log_purge --port=$port --workdir=$work_dir >> $log_dir/purge_relay_logs.log 2>&1

Configure the scheduled task:

# crontab -e
0 4 * * * /bin/bash /root/purge_relay_log.sh

3. To integrate the keepalived service into MHA, we only need to modify the master_ip_failover script that is triggered on switchover, adding keepalived handling for when the master goes down.

(1) Edit the script /usr/local/bin/master_ip_failover as follows:

# vi /usr/local/bin/master_ip_failover

#!/usr/bin/env perl

use strict;
use warnings FATAL => 'all';
use Getopt::Long;

my (
    $command,          $ssh_user,        $orig_master_host, $orig_master_ip,
    $orig_master_port, $new_master_host, $new_master_ip,    $new_master_port
);

my $vip = '192.168.1.130';
my $ssh_start_vip = "/etc/init.d/keepalived start";
my $ssh_stop_vip  = "/etc/init.d/keepalived stop";

GetOptions(
    'command=s'          => \$command,
    'ssh_user=s'         => \$ssh_user,
    'orig_master_host=s' => \$orig_master_host,
    'orig_master_ip=s'   => \$orig_master_ip,
    'orig_master_port=i' => \$orig_master_port,
    'new_master_host=s'  => \$new_master_host,
    'new_master_ip=s'    => \$new_master_ip,
    'new_master_port=i'  => \$new_master_port,
);

exit &main();

sub main {
    print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";
    if ( $command eq "stop" || $command eq "stopssh" ) {
        my $exit_code = 1;
        eval {
            print "Disabling the VIP on old master: $orig_master_host \n";
            &stop_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn "Got Error: $@\n";
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "start" ) {
        my $exit_code = 10;
        eval {
            print "Enabling the VIP - $vip on the new master - $new_master_host \n";
            &start_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn $@;
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "status" ) {
        print "Checking the Status of the script.. OK \n";
        exit 0;
    }
    else {
        &usage();
        exit 1;
    }
}

sub start_vip() {
    `ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}

# A simple system call that disables the VIP on the old master
sub stop_vip() {
    `ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}

sub usage {
    print
"Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}

(2) Edit the script /usr/local/bin/master_ip_online_change as follows:

#!/usr/bin/env perl

# Copyright (C) 2011 DeNA Ltd.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation.

## Note: This is a sample script and is not complete. Modify the script based on your environment.

use strict;
use warnings FATAL => 'all';

use Getopt::Long;
use MHA::DBHelper;
use MHA::NodeUtil;
use Time::HiRes qw( sleep gettimeofday tv_interval );
use Data::Dumper;

my $_tstart;
my $_running_interval = 0.1;
my (
  $command,              $orig_master_is_new_slave, $orig_master_host,
  $orig_master_ip,       $orig_master_port,         $orig_master_user,
  $orig_master_password, $orig_master_ssh_user,     $new_master_host,
  $new_master_ip,        $new_master_port,          $new_master_user,
  $new_master_password,  $new_master_ssh_user,
);

my $vip = '192.168.1.130/24';
my $key = '1';
my $ssh_start_vip = "/sbin/ifconfig eth0:$key $vip";
my $ssh_stop_vip  = "/sbin/ifconfig eth0:$key down";
my $orig_master_ssh_port = 22;
my $new_master_ssh_port  = 22;

GetOptions(
  'command=s'                => \$command,
  'orig_master_is_new_slave' => \$orig_master_is_new_slave,
  'orig_master_host=s'       => \$orig_master_host,
  'orig_master_ip=s'         => \$orig_master_ip,
  'orig_master_port=i'       => \$orig_master_port,
  'orig_master_user=s'       => \$orig_master_user,
  'orig_master_password=s'   => \$orig_master_password,
  'orig_master_ssh_user=s'   => \$orig_master_ssh_user,
  'new_master_host=s'        => \$new_master_host,
  'new_master_ip=s'          => \$new_master_ip,
  'new_master_port=i'        => \$new_master_port,
  'new_master_user=s'        => \$new_master_user,
  'new_master_password=s'    => \$new_master_password,
  'new_master_ssh_user=s'    => \$new_master_ssh_user,
);

exit &main();

sub current_time_us {
  my ( $sec, $microsec ) = gettimeofday();
  my $curdate = localtime($sec);
  return $curdate . " " . sprintf( "%06d", $microsec );
}

sub sleep_until {
  my $elapsed = tv_interval($_tstart);
  if ( $_running_interval > $elapsed ) {
    sleep( $_running_interval - $elapsed );
  }
}

sub get_threads_util {
  my $dbh                    = shift;
  my $my_connection_id       = shift;
  my $running_time_threshold = shift;
  my $type                   = shift;
  $running_time_threshold = 0 unless ($running_time_threshold);
  $type                   = 0 unless ($type);
  my @threads;

  my $sth = $dbh->prepare("SHOW PROCESSLIST");
  $sth->execute();

  while ( my $ref = $sth->fetchrow_hashref() ) {
    my $id         = $ref->{Id};
    my $user       = $ref->{User};
    my $host       = $ref->{Host};
    my $command    = $ref->{Command};
    my $state      = $ref->{State};
    my $query_time = $ref->{Time};
    my $info       = $ref->{Info};
    $info =~ s/^\s*(.*?)\s*$/$1/ if defined($info);
    next if ( $my_connection_id == $id );
    next if ( defined($query_time) && $query_time < $running_time_threshold );

    if ( $type >= 1 ) {
      next if ( defined($command) && $command eq "Sleep" );
      next if ( defined($command) && $command eq "Connect" );
    }

    if ( $type >= 2 ) {
      next if ( defined($info) && $info =~ m/^select/i );
      next if ( defined($info) && $info =~ m/^show/i );
    }

    push @threads, $ref;
  }
  return @threads;
}

sub main {
  if ( $command eq "stop" ) {
    ## Gracefully killing connections on the current master
    # 1. Set read_only= 1 on the new master
    # 2. DROP USER so that no app user can establish new connections
    # 3. Set read_only= 1 on the current master
    # 4. Kill current queries
    # * Any database access failure will result in script die.
    my $exit_code = 1;
    eval {

      ## Setting read_only=1 on the new master (to avoid accident)
      my $new_master_handler = new MHA::DBHelper();

      # args: hostname, port, user, password, raise_error(die_on_error)_or_not
      $new_master_handler->connect( $new_master_ip, $new_master_port,
        $new_master_user, $new_master_password, 1 );
      print current_time_us() . " Set read_only on the new master.. ";
      $new_master_handler->enable_read_only();
      if ( $new_master_handler->is_read_only() ) {
        print "ok.\n";
      }
      else {
        die "Failed!\n";
      }
      $new_master_handler->disconnect();

      # Connecting to the orig master, die if any database error happens
      my $orig_master_handler = new MHA::DBHelper();
      $orig_master_handler->connect( $orig_master_ip, $orig_master_port,
        $orig_master_user, $orig_master_password, 1 );

      ## Drop application user so that nobody can connect. Disabling per-session binlog beforehand
      $orig_master_handler->disable_log_bin_local();
      print current_time_us() . " Dropping app user on the orig master..\n";
      #FIXME_xxx_drop_app_user($orig_master_handler);

      ## Waiting for N * 100 milliseconds so that current connections can exit
      my $time_until_read_only = 15;
      $_tstart = [gettimeofday];
      my @threads = get_threads_util( $orig_master_handler->{dbh},
        $orig_master_handler->{connection_id} );
      while ( $time_until_read_only > 0 && $#threads >= 0 ) {
        if ( $time_until_read_only % 5 == 0 ) {
          printf
"%s Waiting all running %d threads are disconnected.. (max %d milliseconds)\n",
            current_time_us(), $#threads + 1, $time_until_read_only * 100;
          if ( $#threads < 5 ) {
            print Data::Dumper->new( [$_] )->Indent(0)->Terse(1)->Dump . "\n"
              foreach (@threads);
          }
        }
        sleep_until();
        $_tstart = [gettimeofday];
        $time_until_read_only--;
        @threads = get_threads_util( $orig_master_handler->{dbh},
          $orig_master_handler->{connection_id} );
      }

      ## Setting read_only=1 on the current master so that nobody (except SUPER) can write
      print current_time_us() . " Set read_only=1 on the orig master.. ";
      $orig_master_handler->enable_read_only();
      if ( $orig_master_handler->is_read_only() ) {
        print "ok.\n";
      }
      else {
        die "Failed!\n";
      }

      ## Waiting for M * 100 milliseconds so that current update queries can complete
      my $time_until_kill_threads = 5;
      @threads = get_threads_util( $orig_master_handler->{dbh},
        $orig_master_handler->{connection_id} );
      while ( $time_until_kill_threads > 0 && $#threads >= 0 ) {
        if ( $time_until_kill_threads % 5 == 0 ) {
          printf
"%s Waiting all running %d queries are disconnected.. (max %d milliseconds)\n",
            current_time_us(), $#threads + 1, $time_until_kill_threads * 100;
          if ( $#threads < 5 ) {
            print Data::Dumper->new( [$_] )->Indent(0)->Terse(1)->Dump . "\n"
              foreach (@threads);
          }
        }
        sleep_until();
        $_tstart = [gettimeofday];
        $time_until_kill_threads--;
        @threads = get_threads_util( $orig_master_handler->{dbh},
          $orig_master_handler->{connection_id} );
      }

      ## Terminating all threads
      print current_time_us() . " Killing all application threads..\n";
      $orig_master_handler->kill_threads(@threads) if ( $#threads >= 0 );
      print current_time_us() . " done.\n";
      $orig_master_handler->enable_log_bin_local();
      $orig_master_handler->disconnect();

      ## After finishing the script, MHA executes FLUSH TABLES WITH READ LOCK

      ## Dropping the VIP on the orig master
      eval {
        `ssh -p$orig_master_ssh_port $orig_master_ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
      };
      if ($@) {
        warn $@;
      }
      $exit_code = 0;
    };
    if ($@) {
      warn "Got Error: $@\n";
      exit $exit_code;
    }
    exit $exit_code;
  }
  elsif ( $command eq "start" ) {
    ## Activating master ip on the new master
    # 1. Create app user with write privileges
    # 2. Moving backup script if needed
    # 3. Register new master's ip to the catalog database
    # We don't return error even though activating updatable accounts/ip failed so that we don't interrupt slaves' recovery.
    # If exit code is 0 or 10, MHA does not abort
    my $exit_code = 10;
    eval {
      my $new_master_handler = new MHA::DBHelper();

      # args: hostname, port, user, password, raise_error_or_not
      $new_master_handler->connect( $new_master_ip, $new_master_port,
        $new_master_user, $new_master_password, 1 );

      ## Set read_only=0 on the new master
      $new_master_handler->disable_log_bin_local();
      print current_time_us() . " Set read_only=0 on the new master.\n";
      $new_master_handler->disable_read_only();

      ## Creating an app user on the new master
      print current_time_us() . " Creating app user on the new master..\n";
      #FIXME_xxx_create_app_user($new_master_handler);
      $new_master_handler->enable_log_bin_local();
      $new_master_handler->disconnect();

      ## Update master ip on the catalog database, etc
      `ssh -p$new_master_ssh_port $new_master_ssh_user\@$new_master_host \" $ssh_start_vip \"`;
      $exit_code = 0;
    };
    if ($@) {
      warn "Got Error: $@\n";
      exit $exit_code;
    }
    exit $exit_code;
  }
  elsif ( $command eq "status" ) {

    # do nothing
    exit 0;
  }
  else {
    &usage();
    exit 1;
  }
}

sub usage {
  print
"Usage: master_ip_online_change --command=start|stop|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
  die;
}

(3) Edit the script /usr/local/bin/send_report as follows:

#!/usr/bin/perl

use strict;
use warnings FATAL => 'all';

use Mail::Sender;
use Getopt::Long;

# new_master_host and new_slave_hosts are set only when recovering master succeeded
my ( $dead_master_host, $new_master_host, $new_slave_hosts, $subject, $body );
my $smtp      = 'smtp.163.com';
my $mail_from = 'xxxx';
my $mail_user = 'xxxxx';
my $mail_pass = 'xxxxx';
my $mail_to   = [ 'xxxx', 'xxxx' ];

GetOptions(
  'orig_master_host=s' => \$dead_master_host,
  'new_master_host=s'  => \$new_master_host,
  'new_slave_hosts=s'  => \$new_slave_hosts,
  'subject=s'          => \$subject,
  'body=s'             => \$body,
);

mailToContacts( $smtp, $mail_from, $mail_user, $mail_pass, $mail_to, $subject, $body );

sub mailToContacts {
    my ( $smtp, $mail_from, $user, $passwd, $mail_to, $subject, $msg ) = @_;
    open my $DEBUG, "> /tmp/monitormail.log"
        or die "Can't open the debug file: $!\n";
    my $sender = new Mail::Sender {
        ctype       => 'text/plain; charset=utf-8',
        encoding    => 'utf-8',
        smtp        => $smtp,
        from        => $mail_from,
        auth        => 'LOGIN',
        TLS_allowed => '0',
        authid      => $user,
        authpwd     => $passwd,
        to          => $mail_to,
        subject     => $subject,
        debug       => $DEBUG
    };

    $sender->MailMsg(
        {   msg   => $msg,
            debug => $DEBUG
        }
    ) or print $Mail::Sender::Error;
    return 1;
}

# Do whatever you want here

exit 0;

5. Daily management of MHA

1. Check the SSH configuration (on the node5 monitor node):

# masterha_check_ssh --conf=/etc/masterha/app1.cnf

Sun May 1 22:05:12 2016-[warning] Global configuration file / etc/masterha_default.cnf not found. Skipping.

Sun May 1 22:05:12 2016-[info] Reading application default configuration from / etc/masterha/app1.cnf..

Sun May 1 22:05:12 2016-[info] Reading server configuration from / etc/masterha/app1.cnf..

Sun May 1 22:05:12 2016-[info] Starting SSH connection tests..

Sun May 1 22:05:14 2016-[debug]

Sun May 1 22:05:12 2016-[debug] Connecting via SSH from root@192.168.1.121 (192.168.1.121) to root@192.168.1.122 (192.168.1.122)..

Sun May 1 22:05:13 2016-[debug] ok.

Sun May 1 22:05:13 2016-[debug] Connecting via SSH from root@192.168.1.121 (192.168.1.121) to root@192.168.1.123 (192.168.1.123)..

Sun May 1 22:05:13 2016-[debug] ok.

Sun May 1 22:05:14 2016-[debug]

Sun May 1 22:05:13 2016-[debug] Connecting via SSH from root@192.168.1.122 (192.168.1.122) to root@192.168.1.121 (192.168.1.121)..

Sun May 1 22:05:13 2016-[debug] ok.

Sun May 1 22:05:13 2016-[debug] Connecting via SSH from root@192.168.1.122 (192.168.1.122) to root@192.168.1.123 (192.168.1.123)..

Sun May 1 22:05:14 2016-[debug] ok.

Sun May 1 22:05:14 2016-[debug]

Sun May 1 22:05:13 2016-[debug] Connecting via SSH from root@192.168.1.123 (192.168.1.123) to root@192.168.1.121 (192.168.1.121)..

Sun May 1 22:05:14 2016-[debug] ok.

Sun May 1 22:05:14 2016-[debug] Connecting via SSH from root@192.168.1.123 (192.168.1.123) to root@192.168.1.122 (192.168.1.122)..

Sun May 1 22:05:14 2016-[debug] ok.

Sun May 1 22:05:14 2016-[info] All SSH connection tests passed successfully.

2. Check the state of the entire replication environment (on the node5 monitor node):

# masterha_check_repl --conf=/etc/masterha/app1.cnf

Sun May 1 22:46:44 2016-[warning] Global configuration file / etc/masterha_default.cnf not found. Skipping.

Sun May 1 22:46:44 2016-[info] Reading application default configuration from / etc/masterha/app1.cnf..

Sun May 1 22:46:44 2016-[info] Reading server configuration from / etc/masterha/app1.cnf..

Sun May 1 22:46:44 2016-[info] MHA::MasterMonitor version 0.56.

Sun May 1 22:46:45 2016-[info] GTID failover mode = 0

Sun May 1 22:46:45 2016-[info] Dead Servers:

Sun May 1 22:46:45 2016-[info] Alive Servers:

Sun May 1 22:46:45 2016-[info] 192.168.1.121 (192.168.1.121:3306)

Sun May 1 22:46:45 2016-[info] 192.168.1.122 (192.168.1.122:3306)

Sun May 1 22:46:45 2016-[info] 192.168.1.123 (192.168.1.123:3306)

Sun May 1 22:46:45 2016-[info] Alive Slaves:

Sun May 1 22:46:45 2016-[info] 192.168.1.122 (192.168.1.122:3306) Version=5.6.29-log (oldest major version between slaves) log-bin:enabled

Sun May 1 22:46:45 2016-[info] Replicating from 192.168.1.121 (192.168.1.121:3306)

Sun May 1 22:46:45 2016-[info] Primary candidate for the new Master (candidate_master is set)

Sun May 1 22:46:45 2016-[info] 192.168.1.123 (192.168.1.123:3306) Version=5.6.29-log (oldest major version between slaves) log-bin:enabled

Sun May 1 22:46:45 2016-[info] Replicating from 192.168.1.121 (192.168.1.121:3306)

Sun May 1 22:46:45 2016-[info] Current Alive Master: 192.168.1.121 (192.168.1.121:3306)

Sun May 1 22:46:45 2016-[info] Checking slave configurations..

Sun May 1 22:46:45 2016-[info] read_only=1 is not set on slave 192.168.1.122 (192.168.1.122:3306).

Sun May 1 22:46:45 2016-[warning] relay_log_purge=0 is not set on slave 192.168.1.122 (192.168.1.122:3306).

Sun May 1 22:46:45 2016-[info] read_only=1 is not set on slave 192.168.1.123 (192.168.1.123:3306).

Sun May 1 22:46:45 2016-[warning] relay_log_purge=0 is not set on slave 192.168.1.123 (192.168.1.123:3306).

Sun May 1 22:46:45 2016-[info] Checking replication filtering settings..

Sun May 1 22:46:45 2016-[info] binlog_do_db=, binlog_ignore_db=

Sun May 1 22:46:45 2016-[info] Replication filtering check ok.

Sun May 1 22:46:45 2016-[info] GTID (with auto-pos) is not supported

Sun May 1 22:46:45 2016-[info] Starting SSH connection tests..

Sun May 1 22:46:46 2016-[info] All SSH connection tests passed successfully.

Sun May 1 22:46:46 2016-[info] Checking MHA Node version..

Sun May 1 22:46:47 2016-[info] Version check ok.

Sun May 1 22:46:47 2016-[info] Checking SSH publickey authentication settings on the current master..

Sun May 1 22:46:47 2016-[info] HealthCheck: SSH to 192.168.1.121 is reachable.

Sun May 1 22:46:47 2016-[info] Master MHA Node version is 0.56.

Sun May 1 22:46:47 2016-[info] Checking recovery script configurations on 192.168.1.121 (192.168.1.121:3306)..

Sun May 1 22:46:47 2016-[info] Executing command: save_binary_logs --command=test --start_pos=4 --binlog_dir=/usr/local/mysql/data/ --output_file=/tmp/save_binary_logs_test --manager_version=0.56 --start_file=master-bin.000008

Sun May 1 22:46:47 2016-[info] Connecting to root@192.168.1.121 (192.168.1.121:22)..

Creating /tmp if not exists.. Ok.

Checking output directory is accessible or not..

Ok.

Binlog found at /usr/local/mysql/data/, up to master-bin.000008

Sun May 1 22:46:48 2016-[info] Binlog setting check done.

Sun May 1 22:46:48 2016-[info] Checking SSH publickey authentication and checking recovery script configurations on all alive slave servers..

Sun May 1 22:46:48 2016-[info] Executing command: apply_diff_relay_logs --command=test --slave_user='root' --slave_host=192.168.1.122 --slave_ip=192.168.1.122 --slave_port=3306 --workdir=/tmp --target_version=5.6.29-log --manager_version=0.56 --relay_log_info=/usr/local/mysql/data/relay-log.info --relay_dir=/usr/local/mysql/data/ --slave_pass=xxx

Sun May 1 22:46:48 2016-[info] Connecting to root@192.168.1.122 (192.168.1.122:22)..

Checking slave recovery environment settings..

Opening /usr/local/mysql/data/relay-log.info ... Ok.

Relay log found at /usr/local/mysql/data, up to node2-relay-bin.000002

Temporary relay log file is /usr/local/mysql/data/node2-relay-bin.000002

Testing mysql connection and privileges..Warning: Using a password on the command line interface can be insecure.

Done.

Testing mysqlbinlog output.. Done.

Cleaning up test file (s).. Done.

Sun May 1 22:46:48 2016-[info] Executing command: apply_diff_relay_logs --command=test --slave_user='root' --slave_host=192.168.1.123 --slave_ip=192.168.1.123 --slave_port=3306 --workdir=/tmp --target_version=5.6.29-log --manager_version=0.56 --relay_log_info=/usr/local/mysql/data/relay-log.info --relay_dir=/usr/local/mysql/data/ --slave_pass=xxx

Sun May 1 22:46:48 2016-[info] Connecting to root@192.168.1.123 (192.168.1.123:22)..

Checking slave recovery environment settings..

Opening /usr/local/mysql/data/relay-log.info ... Ok.

Relay log found at /usr/local/mysql/data, up to slave-relay-bin.000012

Temporary relay log file is /usr/local/mysql/data/slave-relay-bin.000012

Testing mysql connection and privileges..Warning: Using a password on the command line interface can be insecure.

Done.

Testing mysqlbinlog output.. Done.

Cleaning up test file (s).. Done.

Sun May 1 22:46:48 2016-[info] Slaves settings check done.

Sun May 1 22:46:48 2016-[info]

192.168.1.121 (192.168.1.121:3306) (current master)
+--192.168.1.122 (192.168.1.122:3306)
+--192.168.1.123 (192.168.1.123:3306)

Sun May 1 22:46:48 2016-[info] Checking replication health on 192.168.1.122..

Sun May 1 22:46:48 2016-[info] ok.

Sun May 1 22:46:48 2016-[info] Checking replication health on 192.168.1.123.

Sun May 1 22:46:48 2016-[info] ok.

Sun May 1 22:46:48 2016-[info] Checking master_ip_failover_script status:

Sun May 1 22:46:48 2016-[info] /usr/local/bin/master_ip_failover --command=status --ssh_user=root --orig_master_host=192.168.1.121 --orig_master_ip=192.168.1.121 --orig_master_port=3306

IN SCRIPT TEST====/etc/init.d/keepalived stop==/etc/init.d/keepalived start===

Checking the Status of the script.. OK

Sun May 1 22:46:48 2016-[info] OK.

Sun May 1 22:46:48 2016-[warning] shutdown_script is not defined.

Sun May 1 22:46:48 2016-[info] Got exit code 0 (Not master dead).

MySQL Replication Health is OK.

[root@node5 masterha] #

3. Enable MHA Manager monitoring (on node5):

# mkdir -p /var/log/masterha/app1/
# nohup masterha_manager --conf=/etc/masterha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/masterha/app1/manager.log 2>&1 &

Parameter description:

--remove_dead_master_conf    # when a master-slave switchover happens, the old master's entry is removed from the configuration file
--manager_log                # log file location
--ignore_last_failover       # by default, after a failover MHA generates an app1.failover.complete file, which prevents the manager process from starting another failover; this option makes it ignore that file

4. Check whether MHA Manager monitoring is running normally:

# masterha_check_status --conf=/etc/masterha/app1.cnf
app1 (pid:2480) is running(0:PING_OK), master:192.168.1.121
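Other scripts sometimes need to know which host masterha_check_status currently considers the master. A minimal parsing sketch; the sample line matches the status output above, and in real use you would capture `masterha_check_status --conf=/etc/masterha/app1.cnf` instead of hard-coding it.

```shell
#!/bin/bash
# Extract the master address from a masterha_check_status output line.
status='app1 (pid:2480) is running(0:PING_OK), master:192.168.1.121'
master=${status##*master:}   # strip everything up to and including "master:"
echo "current master: $master"
```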

5. View the startup log (on node5):

# tail -n 20 /var/log/masterha/app1/manager.log

6. Stop and start MHA Manager monitoring:

(1) Stop:

# masterha_stop --conf=/etc/masterha/app1.cnf

(2) Start:

# nohup masterha_manager --conf=/etc/masterha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/masterha/app1/manager.log 2>&1 &

VI. MHA failover switching

1. Automatic Failover switching

(1) simulate master mysql shutdown

(2) VIP will switch to node2

(3) The original master's section is removed from /etc/masterha/app1.cnf.

(4) The masterha_manager monitoring process exits automatically, an app1.failover.complete file is generated under /var/log/masterha/app1, manager.log records the whole process, and the slave servers automatically replicate from the new master.

(5) If the original master's mysqld is started again, you need to delete the app1.failover.complete file under /var/log/masterha/app1, add the node1 section back to /etc/masterha/app1.cnf, and re-synchronize the old master as a slave from the failover point recorded in manager.log.

2. Manual failover switching

(1) Stop the MHA Manager process first:

# masterha_stop --conf=/etc/masterha/app1.cnf

(2) stop master mysqld

(3) switch manually, and do the following on the Manager host:

# masterha_master_switch --master_state=dead --conf=/etc/masterha/app1.cnf --dead_master_host=192.168.1.122 --dead_master_port=3306 --new_master_host=192.168.1.121 --new_master_port=3306 --ignore_last_failover

The whole switchover process can be followed in the log.

(4) as in the above section, the recovery node is a slave server.

3. Online switchover while the master is running

To be supplemented.

4. Summary

Having built and tested both MMM and MHA environments: because MHA sits on top of standard replication, its principle is simple. In environments with high data-consistency requirements, it is best combined with semi-synchronous replication to ensure reliability.
