MySQL MHA High Availability Architecture Deployment


Introduction to MHA: MHA (Master High Availability Manager and Tools for MySQL) is a set of management scripts written in Perl by a MySQL expert in Japan. It is designed only for MySQL replication (master-slave) environments and its purpose is to keep the master library highly available. During a MySQL failover, MHA can complete the switch automatically within about 30 seconds, and throughout the failover it preserves data consistency as far as possible, achieving high availability in the real sense.

MHA consists of two parts: MHA Manager (the management node) and MHA Node (the data node). MHA Manager can be deployed independently on a separate machine to manage multiple master-slave clusters, or it can be deployed on one of the slaves. When the master fails, MHA automatically promotes the slave with the most recent data to be the new master and then redirects all the other slaves to the new master. The entire failover process is completely transparent to the application.

Overall environment:

master1: 192.168.9.25
master2: 192.168.9.26
slave1: 192.168.9.29
slave2: 192.168.9.30
manager node: 192.168.9.27
lvs1: 192.168.9.27
lvs2: 192.168.9.28

Overall build process: install the node software on every MySQL server, and install both the manager and node packages on the manager server (9.27). Then create an MHA administrative user on all the MySQL instances; here I call it admin.
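Since the node package has to go onto every MySQL server, it saves time to push the tarball out in one pass before starting. A minimal sketch, assuming the tarball is in the current directory on the manager, that root SSH access works (password prompts until mutual trust is set up in step 7), and that /root/tarbag is used as the upload directory (an assumption matching the working directory shown in the prompts below):

[root@lvs-a ~]# for h in 192.168.9.25 192.168.9.26 192.168.9.29 192.168.9.30; do ssh root@$h "mkdir -p /root/tarbag"; scp mha4mysql-node-0.56.tar.gz root@$h:/root/tarbag/; done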

The following is a demonstration of the construction process:

1: 9.27 manager node deployment: install both the node and manager software (they only need to be installed, nothing has to be started yet). First upload the node and manager installation packages (I have them on my hard drive).

[root@lvs-a]# rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm    (for a 6.5 system)
[root@lvs-a]# # rpm -ivh http://dl.fedoraproject.org/pub/epel/5/i386/epel-release-5-4.noarch.rpm    (for a 5.8 system)
[root@lvs-a ~]# ls /etc/yum.repos.d/
base.repo epel.repo epel-testing.repo rhel-debuginfo.repo
[root@lvs-a ~]# yum -y install perl-DBD-MySQL ncftp
[root@lvs-a tarbag]# tar -zxf mha4mysql-node-0.56.tar.gz -C /usr/local    ## extract the node software
[root@lvs-a tarbag]# cd /usr/local/mha4mysql-node-0.56
[root@lvs-a mha4mysql-node-0.56]# perl Makefile.PL
[root@lvs-a mha4mysql-node-0.56]# make && make install    ## install the node software
[root@lvs-a mha4mysql-node-0.56]# yum -y install perl-Config-Tiny perl-Params-Validate perl-Log-Dispatch perl-Parallel-ForkManager
[root@lvs-a tarbag]# tar -zxf mha4mysql-manager-0.56.tar.gz -C /usr/local/
[root@lvs-a tarbag]# cd /usr/local/mha4mysql-manager-0.56    ## install the manager software
[root@lvs-a mha4mysql-manager-0.56]# perl Makefile.PL
[root@lvs-a mha4mysql-manager-0.56]# make && make install
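After both packages are installed, the MHA commands should be on the PATH. A quick sanity check, as a minimal sketch (these are the standard tools shipped with mha4mysql-manager and mha4mysql-node 0.56):

[root@lvs-a ~]# which masterha_manager masterha_check_ssh masterha_check_repl masterha_check_status masterha_stop
[root@lvs-a ~]# which save_binary_logs apply_diff_relay_logs purge_relay_logs    ## node-side tools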

2: 9.25 master node deployment: only the node software needs to be installed.

[root@master1 ~]# rpm -ivh http://dl.fedoraproject.org/pub/epel/5/i386/epel-release-5-4.noarch.rpm
[root@master1 ~]# ls /etc/yum.repos.d/
base.repo epel.repo epel-testing.repo rhel-debuginfo.repo
[root@master1 ~]# yum -y install perl-DBD-MySQL ncftp
[root@master1 tarbag]# tar -zxf mha4mysql-node-0.56.tar.gz -C /usr/local
[root@master1 tarbag]# cd /usr/local/mha4mysql-node-0.56
[root@master1 mha4mysql-node-0.56]# perl Makefile.PL
[root@master1 mha4mysql-node-0.56]# make && make install
[root@master1 mha4mysql-node-0.56]# ln -s /usr/local/mysql/bin/* /usr/local/bin/
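The symlink step matters because the MHA node and manager tools invoke mysql and mysqlbinlog and expect to find them on the PATH. A quick check, as a minimal sketch:

[root@master1 ~]# which mysql mysqlbinlog    ## both should resolve, e.g. to /usr/local/bin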

The specific process of installing the MySQL software itself is skipped here; refer to the MySQL master-slave synchronous replication architecture build document.

Create the MHA management account:

mysql> grant all on *.* to 'admin'@'%' identified by '123456';
mysql> flush privileges;

Create the MySQL master-slave replication account:

mysql> grant replication slave on *.* to 'repl'@'%' identified by '123456';
mysql> flush privileges;
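To confirm that these grants work over the network (the manager and the slaves connect over TCP, not through the local socket), a quick remote login test can be run from the manager node; a minimal sketch, assuming the passwords above:

[root@lvs-a ~]# mysql -uadmin -p123456 -h192.168.9.25 -e "select user,host from mysql.user where user in ('admin','repl');"
[root@lvs-a ~]# mysql -urepl -p123456 -h192.168.9.25 -e "show grants;"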

3: 9.26 standby master deployment (the steps are the same as for the master library).

[root@master2 ~]# rpm -ivh http://dl.fedoraproject.org/pub/epel/5/i386/epel-release-5-4.noarch.rpm
[root@master2 ~]# ls /etc/yum.repos.d/
base.repo epel.repo epel-testing.repo rhel-debuginfo.repo
[root@master2 ~]# yum -y install perl-DBD-MySQL ncftp
[root@master2 tarbag]# tar -zxf mha4mysql-node-0.56.tar.gz -C /usr/local
[root@master2 tarbag]# cd /usr/local/mha4mysql-node-0.56
[root@master2 mha4mysql-node-0.56]# perl Makefile.PL
[root@master2 mha4mysql-node-0.56]# make && make install
[root@master2 mha4mysql-node-0.56]# ln -s /usr/local/mysql/bin/* /usr/local/bin/

In fact, you do not have to create the MHA management account here, because before starting you restore a copy of the MySQL data from master1, so as long as the account exists on master1 it will be carried over.

mysql> grant all on *.* to 'admin'@'%' identified by '123456';
mysql> flush privileges;

Create the MySQL master-slave replication account:

mysql> grant replication slave on *.* to 'repl'@'%' identified by '123456';
mysql> flush privileges;
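Once the data has been restored from master1, 9.26 is pointed at 9.25 as a slave using the repl account. A minimal sketch, where the binlog file and position are placeholders to be taken from SHOW MASTER STATUS on master1:

mysql> change master to master_host='192.168.9.25', master_user='repl', master_password='123456', master_log_file='mysql-bin.000001', master_log_pos=120;    -- file/position are placeholders
mysql> start slave;
mysql> show slave status\G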

4: 9.29 slave node deployment.

[root@slave1 ~]# rpm -ivh http://dl.fedoraproject.org/pub/epel/5/i386/epel-release-5-4.noarch.rpm
[root@slave1 ~]# ls /etc/yum.repos.d/
base.repo epel.repo epel-testing.repo rhel-debuginfo.repo
[root@slave1 ~]# yum -y install perl-DBD-MySQL ncftp
[root@slave1 tarbag]# tar -zxf mha4mysql-node-0.56.tar.gz -C /usr/local
[root@slave1 tarbag]# cd /usr/local/mha4mysql-node-0.56
[root@slave1 mha4mysql-node-0.56]# perl Makefile.PL
[root@slave1 mha4mysql-node-0.56]# make && make install
[root@slave1 mha4mysql-node-0.56]# ln -s /usr/local/mysql/bin/* /usr/local/bin/

Create the MHA management account:

mysql> grant all on *.* to 'admin'@'%' identified by '123456';
mysql> flush privileges;
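In addition, two settings are commonly recommended for MHA slaves; they are not part of the original steps, so treat them as an assumption about this setup: keep the slave read-only, and stop purging relay logs automatically (MHA's diff-apply mechanism reads them during failover):

mysql> set global read_only=1;    -- assumption: slaves should not take writes directly
mysql> set global relay_log_purge=0;    -- assumption: keep relay logs around for MHA to use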

5: 9.30 slave: the operations are the same as on 9.29.

6: The required software is now basically all installed. The following shows in detail how to configure the manager node. Above MHA, a VIP can be deployed through keepalived; applications connect to the database through the VIP, which makes failover of the backend database transparent to them.

[root@lvs-a]# mkdir -p /etc/masterha
[root@lvs-a]# mkdir -p /masterha/app1
[root@lvs-a mha4mysql-manager-0.56]# cp samples/conf/* /etc/masterha/
[root@lvs-a ~]# cat /etc/masterha/app1.cnf

[server default]
manager_workdir=/masterha/app1
manager_log=/masterha/app1/manager.log
user=admin
password=123456
ssh_user=root
repl_user=repl
repl_password=123456
ping_interval=1    # ping the master once per second
shutdown_script=""
master_ip_failover_script="/usr/local/bin/master_ip_failover"    ## this script is created below
master_ip_online_change_script=""
report_script=""

[server1]
hostname=192.168.9.26
master_binlog_dir=/mysql/data/log
candidate_master=1    ## required for a host that may become the master

[server2]
hostname=192.168.9.25
master_binlog_dir=/mysql/data/log
candidate_master=1

[server3]
hostname=192.168.9.29
master_binlog_dir=/data/log

[server4]
hostname=192.168.9.30
master_binlog_dir=/data/log/    # the binlog path on the corresponding host
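Note that master_binlog_dir must point at the directory that actually holds each server's binary logs, otherwise the manager cannot pull binlog events from a dead master during failover. One way to double-check it on each MySQL server, as a minimal sketch (log_bin_basename only exists from MySQL 5.6 on, so on older versions check the log-bin setting in my.cnf instead; mysql-bin is assumed as the binlog base name):

mysql> show variables like 'log_bin%';
[root@master1 ~]# ls /mysql/data/log/ | grep mysql-bin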

[root@lvs-a u01]# cat /usr/local/bin/master_ip_failover

#!/bin/bash

# ----------- part I: definitions of variables and functions ----------- #
### Begin Variables define ###
ssh_port=22
cmd=/sbin/ifconfig
vip=192.168.9.232
device=eth0:0
netmk=255.255.255.0
start_vip="${cmd} ${device} ${vip} netmask ${netmk} up"
stop_vip="${cmd} ${device} ${vip} netmask ${netmk} down"
### End Variables define ###

### Begin Status Function ###
status()
{
    exit 0
}
### End Status Function ###

### Begin Stop Or Stopssh Function ###
stop()
{
    exit 0
}
### End Stop Or Stopssh Function ###

### Begin Start Function ###
start()
{
    # take the VIP down on the old master and bring it up on the new master
    /usr/bin/ssh -p ${ssh_port} ${ssh_user}@${orig_master_host} "${stop_vip}"
    /usr/bin/ssh -p ${ssh_port} ${ssh_user}@${new_master_host} "${start_vip}"
    exit 0
}
### End Start Function ###
# ----------- part I: definitions of variables and functions ----------- #

# ----------- part II: command line arguments ----------- #
### Begin Get The Command-Line Parameters ###
# eval set -- "`getopt -a -Q -o n -l command::,ssh_user:,orig_master_host:,orig_master_ip:,orig_master_port:,new_master_host:,new_master_ip:,new_master_port:,new_master_user:,new_master_password: -- "$@"`"
eval set -- "`getopt -a -Q -o n -l command::,ssh_user:,orig_master_host:,orig_master_ip:,new_master_host:,new_master_ip: -- "$@"`"
if [ $? != 0 ]; then echo "Terminating..." >&2; exit 1; fi
while true
do
    case "$1" in
        --command)
            command="${2}"
            shift
            ;;
        --ssh_user)
            ssh_user="${2}"
            shift
            ;;
        --orig_master_host)
            orig_master_host="${2}"
            shift
            ;;
        --orig_master_ip)
            orig_master_ip="${2}"
            shift
            ;;
        --new_master_host)
            new_master_host="${2}"
            shift
            ;;
        --new_master_ip)
            new_master_ip="${2}"
            shift
            ;;
        --)
            shift
            break
            ;;
    esac
    shift
done
### End Get The Command-Line Parameters ###
# ----------- part II: command line arguments ----------- #

# ----------- part III: function calls ----------- #
if [ "${command}" == "status" ]
then
    status
fi
if [ "${command}" == "stop" ] || [ "${command}" == "stopssh" ]
then
    stop
fi
if [ "${command}" == "start" ]
then
    start
fi
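Before handing the script over to MHA, it helps to make it executable and to run the VIP switch by hand once. A minimal sketch, assuming 9.25 currently holds the VIP (the parameter names used here are the ones MHA passes to master_ip_failover):

[root@lvs-a ~]# chmod +x /usr/local/bin/master_ip_failover
[root@lvs-a ~]# /usr/local/bin/master_ip_failover --command=start --ssh_user=root --orig_master_host=192.168.9.25 --new_master_host=192.168.9.26
[root@lvs-a ~]# ssh root@192.168.9.26 "/sbin/ifconfig eth0:0"    ## the VIP 192.168.9.232 should now be up here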

7: Establish SSH mutual trust. Be careful to cover all servers; only one server's setup is shown below.

[root@lvs-a]# ssh-keygen -t rsa
[root@lvs-a]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.9.25
[root@lvs-a]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.9.26
[root@lvs-a]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.9.29
[root@lvs-a]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.9.30
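Mutual trust means every host in the cluster (the manager and all the MySQL servers) can reach every other host as root without a password, not just the manager reaching the databases. A quick non-interactive loop to verify it from the manager, as a minimal sketch:

[root@lvs-a ~]# for h in 192.168.9.25 192.168.9.26 192.168.9.29 192.168.9.30; do ssh -o BatchMode=yes root@$h hostname || echo "no key-based access to $h"; done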

8: Check whether the MHA configuration is correct.

1. Check whether SSH connectivity is configured successfully:

[root@lvs-a]# masterha_check_ssh --conf=/etc/masterha/app1.cnf

If the output ends with the following line, the check succeeded:

Tue Jun 30 01:58:05 2015 - [info] All SSH connection tests passed successfully.

2. Check whether MySQL master-slave replication is healthy:

[root@lvs-a]# masterha_check_repl --conf=/etc/masterha/app1.cnf

If the output ends with "MySQL Replication Health is OK.", the check succeeded.

3. Start the management node process

[root@lvs-a ~]# nohup masterha_manager --conf=/etc/masterha/app1.cnf > /tmp/mha_manager.log < /dev/null 2>&1 &

[1] 10085
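The manager writes its progress to the manager_log path configured in app1.cnf, so tailing that file is the easiest way to see what it is doing:

[root@lvs-a ~]# tail -f /masterha/app1/manager.log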

4. Check the MHA master-slave status:

[root@lvs-a]# masterha_check_status --conf=/etc/masterha/app1.cnf

The output looks like this:

app1 (pid:10085) is running(0:PING_OK), master:192.168.153.147

5. Stop MHA:

[root@lvs-a]# masterha_stop --conf=/etc/masterha/app1.cnf

9: For the specific process of installing lvs+keepalived, refer to the LVS+keepalived installation and deployment document: http://blog.itpub.net/29654823/viewspace-1844282/

10: Verify that the environment works correctly.

When the current master (9.25) is shut down, the VIP 9.232 automatically floats to the standby master (9.26), which achieves high availability of the master library. At that point the manager process on 9.27 has exited and needs to be started again manually.
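A concrete way to run that verification, as a minimal sketch (the mysqld stop command depends on how MySQL was installed, so an init script is assumed here; adjust to your environment):

[root@master1 ~]# /etc/init.d/mysqld stop    ## simulate a master failure
[root@lvs-a ~]# tail -f /masterha/app1/manager.log    ## watch the failover progress in the manager log
[root@master2 ~]# /sbin/ifconfig eth0:0    ## the VIP 192.168.9.232 should now be up on 9.26
[root@slave1 ~]# mysql -e "show slave status\G" | grep Master_Host    ## the slaves should now point at 192.168.9.26
[root@lvs-a ~]# nohup masterha_manager --conf=/etc/masterha/app1.cnf > /tmp/mha_manager.log < /dev/null 2>&1 &    ## restart the manager afterwards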
