
How to Build a Highly Available Database Cluster with MySQL, Heartbeat and DRBD

2025-04-07 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/01 Report--

Next, let's learn how MySQL, Heartbeat and DRBD together provide a highly available database cluster. I believe you will benefit from reading it; the text is short, but I hope it is what you are looking for.

Overview of DRBD

I. Introduction to DRBD

The full name of DRBD is Distributed Replicated Block Device. It consists of a kernel module and related scripts and is used to build high-availability data clusters. It works by mirroring an entire block device (its data) over the network, so you can think of it as network-based RAID 1: it lets you maintain a real-time mirror of a local block device on a remote machine.

II. How DRBD works

The primary node (DRBD Primary) receives data, writes it to the local disk, and sends it to the other host (DRBD Secondary), which stores the data on its own disk. Currently DRBD allows read-write access on only one node at a time, which is sufficient for the usual failover high-availability cluster. Future versions may support read-write access on both nodes.

III. The relationship between DRBD and HA

A DRBD system consists of two nodes and, like an HA cluster, is divided into a primary node and a standby node. On the node holding the primary device, applications and the operating system can run and access the DRBD device (/dev/drbd*). Data written by the primary node is stored on the primary node's disk through the DRBD device and, at the same time, automatically sent to the corresponding DRBD device on the standby node, where it is finally written to the standby node's disk. On the standby node, DRBD simply writes the data arriving on the DRBD device to the standby disk. Most high-availability clusters today use shared storage, and DRBD can serve as that shared storage without much hardware investment: it runs over a TCP/IP network, so it is far cheaper than a dedicated storage network, and its performance and stability are also good.

IV. DRBD replication modes

Protocol A:

Asynchronous replication protocol. A write is considered complete as soon as the local disk write has finished and the packet is in the send queue. If a node fails, data loss may occur because data destined for the remote node may still be in the send queue. The data on the failover node is consistent but not up to date. This protocol is usually used for geographically separated nodes.

Protocol B:

Memory synchronous (semi-synchronous) replication protocol. Once the local disk write has been completed and the replication packet reaches the remote node, the write on the primary node is considered complete. Data loss may occur when both participating nodes fail at the same time, because the data in transit may not be committed to disk.

Protocol C:

Synchronous replication protocol. A write is considered complete only when both the local and the remote disk have confirmed it. There is no data loss, so this is the popular mode for cluster nodes, but I/O throughput depends on network bandwidth. Protocol C is generally used, although the synchronous round trip adds network latency. For the sake of data reliability, choose the protocol carefully in a production environment.

Overview of Heartbeat

I. Introduction to Heartbeat

Heartbeat is a component of the Linux-HA project. Since 1999, many versions have been released. It is the most successful example of the open source Linux-HA project and has been widely used in the industry.

As Linux is increasingly used in key industries, it must provide capabilities that were originally offered only by large commercial vendors such as IBM and Sun, and a key one among those is high-availability clustering.

II. How Heartbeat works

The core of Heartbeat has two parts: heartbeat monitoring and resource takeover. Heartbeat monitoring can run over network links and serial ports, and redundant links are supported. The nodes send messages to each other reporting their current state; if no message is received from the peer within the specified time, the peer is considered dead, and the resource takeover module is started to take over the resources or services running on that host.

III. Highly available clusters

A high-availability cluster is a group of independent computers connected by hardware and software that behaves as a single system to its users; when one or more nodes in such a system stop working, the service switches from the failed node to a working node without interruption. From this definition, the cluster must be able to detect when nodes and services fail and when they become available again. This task is usually performed by a set of code called a "heartbeat"; in Linux-HA that function is provided by a program called heartbeat.

Environment description:

Operating system   | IP address      | Hostname | Package list
CentOS release 6.5 | 192.168.200.101 | server1  | drbd, heartbeat, mysql
CentOS release 6.5 | 192.168.200.102 | server2  | drbd, heartbeat, mysql
CentOS release 6.5 | 192.168.200.103 | slave1   | mysql
CentOS release 6.5 | 192.168.200.104 | slave2   | mysql
CentOS release 6.5 | 192.168.200.105 | lvs-m    | lvs + keepalived
CentOS release 6.5 | 192.168.200.106 | lvs-s    | lvs + keepalived

Configuration process:

Prepare the configuration before installation:

All hosts need to add a 60G SCSI interface hard disk.

Configure all machines: turn off the firewall and selinux mechanism

[root@localhost ~] # service iptables stop

[root@localhost ~] # setenforce 0
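The two commands above only take effect until the next reboot. A minimal sketch of making the SELinux change persistent; /etc/selinux/config is the standard CentOS 6 location, but the snippet edits a temporary stand-in file (an assumption for safe demonstration) so it can be run anywhere:

```shell
# Stand-in for /etc/selinux/config; on a real node target that file instead.
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"

# Flip enforcing -> disabled, since `setenforce 0` only lasts until reboot.
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' "$cfg"

result=$(grep '^SELINUX=' "$cfg")
echo "$result"
rm -f "$cfg"
```

The firewall change can be made persistent the same way with `chkconfig iptables off`.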

Partition the new disk on both master and standby nodes; the partition does not need to be formatted yet.

[root@localhost ~] # fdisk /dev/sdb

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
Last cylinder, +cylinders or +size{K,M,G} (1-2610, default 2610): +10G
Command (m for help): w

[root@server1 ~] # partprobe /dev/sdb

Change the hostname (server1 on the master; do the same on the standby with server2):

[root@localhost ~] # vim /etc/sysconfig/network

2 HOSTNAME=server1

[root@localhost ~] # hostname server1

[root@localhost ~] # bash

[root@server1 ~] # vim /etc/hosts

3 192.168.200.101 server1

4 192.168.200.102 server2

Heartbeat installation: both master and slave need to be installed

Upload the packages to /root and install them in order.

[root@server1 ~] # rpm -ivh PyXML-0.8.4-19.el6.x86_64.rpm

[root@server1 ~] # rpm -ivh perl-TimeDate-1.16-13.el6.noarch.rpm

[root@server1 ~] # rpm -ivh resource-agents-3.9.5-24.el6_7.1.x86_64.rpm

[root@server1 ~] # rpm -ivh lib64ltdl7-2.2.6-6.1mdv2009.1.x86_64.rpm

[root@server1 ~] # rpm -ivh cluster-glue-libs-1.0.5-6.el6.x86_64.rpm

[root@server1 ~] # rpm -ivh cluster-glue-1.0.5-6.el6.x86_64.rpm

[root@server1 ~] # yum -y install kernel-devel kernel-headers

[root@server1 ~] # rpm -ivh heartbeat-libs-3.0.4-2.el6.x86_64.rpm heartbeat-3.0.4-2.el6.x86_64.rpm

Install and configure DRBD: both master and slave need to be installed

[root@server1 ~] # tar xf drbd-8.4.3.tar.gz

[root@server1 ~] # cd drbd-8.4.3

[root@server1 drbd-8.4.3] # ./configure --prefix=/usr/local/drbd --with-km --with-heartbeat

[root@server1 drbd-8.4.3] # make KDIR=/usr/src/kernels/2.6.32-504.el6.x86_64/ && make install

[root@server1 drbd-8.4.3] # mkdir -p /usr/local/drbd/var/run/drbd

[root@server1 drbd-8.4.3] # cp /usr/local/drbd/etc/rc.d/init.d/drbd /etc/init.d/

[root@server1 drbd-8.4.3] # chkconfig --add drbd

[root@server1 drbd-8.4.3] # cd drbd

[root@server1 drbd] # make clean

[root@server1 drbd] # make KDIR=/usr/src/kernels/2.6.32-504.el6.x86_64/

[root@server1 drbd] # cp drbd.ko /lib/modules/2.6.32-504.el6.x86_64/kernel/lib/

[root@server1 drbd] # depmod

[root@server1 drbd] # cp -R /usr/local/drbd/etc/ha.d/resource.d/* /etc/ha.d/resource.d/

[root@server1 drbd] # cd /usr/local/drbd/etc/drbd.d/

[root@server1 drbd] # cat /usr/local/drbd/etc/drbd.conf

# You can find an example in /usr/share/doc/drbd.../drbd.conf.example

// all resources ending in .res in this directory are resource files

include "drbd.d/global_common.conf";

include "drbd.d/*.res";

Configure global_common.conf file (master and slave are consistent)

[root@server1 drbd.d] # pwd

/ usr/local/drbd/etc/drbd.d

[root@server1 drbd.d] # cp global_common.conf{,-$(date +%s)}

[root@server1 drbd.d] # vim global_common.conf

global {
    usage-count yes;            // whether to report usage statistics; the default is yes
}

common {
    startup {
        wfc-timeout 120;        // timeout waiting for a connection
        degr-wfc-timeout 120;
    }
    disk {
        on-io-error detach;     // action taken when an I/O error occurs
    }
    net {
        protocol C;             // replication mode: the third protocol
    }
}

Configure resource file (master and slave are consistent)

[root@server1 drbd.d] # vim r0.res

resource r0 {                            // r0 is the resource name
    on server1 {
        device /dev/drbd0;               // logical device path
        disk /dev/sdb1;                  // physical device
        address 192.168.200.101:7788;    // primary node
        meta-disk internal;
    }
    on server2 {
        device /dev/drbd0;
        disk /dev/sdb1;
        address 192.168.200.102:7788;    // standby node
        meta-disk internal;
    }
}

[root@server1 drbd.d] # scp global_common.conf r0.res 192.168.200.102:/usr/local/drbd/etc/drbd.d

Create metadata (operate on two nodes)

[root@server1 drbd.d] # modprobe drbd

[root@server1 drbd.d] # lsmod | grep drbd

drbd                  310268  0

libcrc32c               1246  1 drbd

[root@server1 drbd.d] # dd if=/dev/zero bs=1M count=1 of=/dev/sdb1

[root@server1 drbd.d] # drbdadm create-md r0 // the following information is output

The server's response is:

You are the 57184th user to install this version

Writing meta data...

Initializing activity log

NOT initializing bitmap

New drbd meta data block successfully created. // press Ctrl+C once the success message is printed

Note:

The following error message may appear when "drbdadm create-md r0" is executed:

Device size would be truncated, which
would corrupt data and result in
'access beyond end of device' errors.
You need to either
 * use external meta data (recommended)
 * shrink that filesystem first
 * zero out the device (destroy the filesystem)
Operation refused.
Command 'drbdmeta 0 v08 /dev/xvdb internal create-md' terminated with exit code 40
drbdadm create-md r0: exited with code 40

Solution: wipe the start of the device, then retry: dd if=/dev/zero bs=1M count=1 of=/dev/sdb1; sync

Start DRBD (both master and slave nodes)

[root@server1 drbd.d] # / etc/init.d/drbd start

Starting DRBD resources: [

Create res: r0

Prepare disk: r0

Adjust disk: r0

Adjust net: r0

]

.

[root@server1 drbd.d] # netstat -anpt | grep 7788

tcp   0   0 192.168.200.101:35654   192.168.200.102:7788    ESTABLISHED -

tcp   0   0 192.168.200.101:7788    192.168.200.102:33034   ESTABLISHED -

[root@server2 drbd.d] # netstat -anpt | grep 7788

tcp   0   0 192.168.200.102:7788    192.168.200.101:48501   ESTABLISHED -

tcp   0   0 192.168.200.102:10354   192.168.200.101:7788    ESTABLISHED -

Manually verify the master-slave switch:

Initialize the network disk (executed on the primary node)

[root@server1 drbd.d] # drbdadm -- --overwrite-data-of-peer primary r0

[root@server1 drbd.d] # watch -n 2 cat /proc/drbd // dynamically display the sync progress

Version: 8.4.3 (api:1/proto:86-101)

GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@bogon, 2016-12-04 13:39:22

0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r---

ns:116024 nr:0 dw:0 dr:123552 al:0 bm:7 lo:0 pe:1 ua:7 ap:0 ep:1 wo:f oos:10374340

[>....................] sync'ed: 1.2% (10128/
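While waiting for the initial sync, it can be handy to pull just the percentage out of /proc/drbd. A small sketch; it parses a captured sample of the progress line (the values below are hypothetical) so it runs without DRBD present, whereas on a real node you would read /proc/drbd itself:

```shell
# Sample of the sync progress line as it appears in /proc/drbd (hypothetical values).
sample="[>...................] sync'ed:  1.2% (10128/10132)M"

# Extract the numeric field after "sync'ed:".
pct=$(echo "$sample" | sed -n "s/.*sync'ed: *\([0-9.]*\)%.*/\1/p")
echo "$pct"
```

On a live node the same extraction would be fed by `cat /proc/drbd` instead of the sample string.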

Data synchronization test (run the first group of commands on the primary node and the second group on the standby node)

Operation on server1

[root@server1 drbd.d] # mkfs.ext4 /dev/drbd0

[root@server1 drbd.d] # mkdir /mysqldata

[root@server1 drbd.d] # mount /dev/drbd0 /mysqldata

[root@server1 drbd.d] # echo www.crushlinux.com > /mysqldata/file // create a test file

[root@server1 ~] # umount /dev/drbd0

[root@server1 ~] # drbdadm secondary r0 // demote Primary to Secondary

Operation on server2

[root@server2 drbd.d] # drbdadm primary r0 // promote Secondary to Primary

[root@server2 drbd.d] # mkdir /mysqldata

[root@server2 drbd.d] # mount /dev/drbd0 /mysqldata

[root@server2 drbd.d] # ls /mysqldata // view the data on the standby node

file lost+found // the test file created on the primary is visible

Install MySQL:

Change the storage location of the Mysql database to a shared directory (both master and slave)

[root@server1 ~] # yum -y install mysql mysql-server

[root@server1 ~] # vim /etc/my.cnf

2 datadir=/mysqldata/mysql

[root@server1 ~] # chown -R mysql.mysql /mysqldata

[root@server1 ~] # chkconfig mysqld on

Note: at this time, we have modified the data directory and its owner and permissions, and sometimes the database cannot be started because of this operation.

Solution:

First, check whether SELinux is enabled and turn it off.

Second, on systems with AppArmor, the /etc/apparmor.d/usr.sbin.mysqld file has two lines that specify the path permissions for mysql's data files. Change them and restart with /etc/init.d/apparmor restart.

Conduct database testing

Because of the previous operations, server2 is currently the primary; demote it back to secondary:

[root@server2 ~] # umount /dev/drbd0

[root@server2 ~] # drbdadm secondary r0

Promote server1 to the primary node:

[root@server1 ~] # drbdadm primary r0

[root@server1 ~] # mount /dev/drbd0 /mysqldata

[root@server1 ~] # /etc/init.d/mysqld start

Create a database crushlinux on server1, then demote server1 to standby and promote server2 to primary to see whether the database has been synchronized.

[root@server1 ~] # mysql

mysql> create database crushlinux;
Query OK, 1 row affected (0.00 sec)

mysql> exit
Bye

[root@server1 ~] # service mysqld stop // on server1

[root@server1 ~] # umount /dev/drbd0 // on server1

[root@server1 ~] # drbdadm secondary r0 // on server1

Operation on server2

[root@server2 drbd.d] # drbdadm primary r0 // on server2

[root@server2 drbd.d] # mount /dev/drbd0 /mysqldata // on server2

[root@server2 drbd.d] # service mysqld start // on server2

[root@server2 drbd.d] # ls /mysqldata/mysql/ // on server2

crushlinux ibdata1 ib_logfile0 ib_logfile1 mysql test

Configure heartbeat: configure the ha.cf file (master and slave are roughly the same)

[root@server1 ~] # cd /usr/share/doc/heartbeat-3.0.4/

[root@server1 heartbeat-3.0.4] # cp ha.cf authkeys haresources /etc/ha.d/

[root@server1 heartbeat-3.0.4] # cd /etc/ha.d/

[root@server1 ha.d] # vim ha.cf

29 logfile /var/log/ha-log

34 logfacility local0

48 keepalive 2 // interval between heartbeats (seconds)

56 deadtime 10 // how long without contact before the peer is declared dead (seconds)

61 warntime 5 // how long without contact before a warning is issued

71 initdead 100 // grace period after a restart

76 udpport 694 // UDP port

121 ucast eth0 192.168.200.102 // the peer's IP (this differs between master and standby)

auto_failback on // whether resources switch back to a node after it is repaired

211 node server1 // node name

212 node server2 // node name

253 respawn hacluster /usr/lib64/heartbeat/ipfail // program that controls IP failover

Configure the haresources file (master and standby are identical)

[root@server1 ha.d] # vim haresources

server1 IPaddr::192.168.200.254/24/eth0:0 drbddisk::r0 Filesystem::/dev/drbd0::/mysqldata::ext4 mysqld // note: this is all one line

[root@server1 ha.d] # ln -s /etc/init.d/mysqld /etc/ha.d/resource.d/mysqld

server1 IPaddr::192.168.200.254/24/eth0:0 # hostname, followed by the virtual IP address and interface

drbddisk::r0 # manages the drbd resource

Filesystem::/dev/drbd0::/mysqldata::ext4 mysqld # filesystem: device, mount point and type, followed by the mysqld resource script
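Each haresources entry is a resource script name with its arguments joined by "::". A quick sketch of how that separator decomposes, using the Filesystem entry from above (pure string handling, safe to run anywhere):

```shell
# The haresources Filesystem entry: script::device::mountpoint::fstype
spec="Filesystem::/dev/drbd0::/mysqldata::ext4"

# Split the '::'-separated fields the way heartbeat passes them to the resource script.
script=$(echo "$spec" | awk -F'::' '{print $1}')
device=$(echo "$spec" | awk -F'::' '{print $2}')
mountpoint=$(echo "$spec" | awk -F'::' '{print $3}')
fstype=$(echo "$spec" | awk -F'::' '{print $4}')

echo "$script mounts $device on $mountpoint as $fstype"
```

This is why the whole entry must stay on one line: heartbeat reads each line as one resource group and splits only on "::" and spaces.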

Configure authkeys file (master and slave are consistent)

[root@server1 ha.d] # vim authkeys

23 auth 1

24 1 crc

[root@server1 ha.d] # chmod 600 authkeys

HA authentication:

Master-slave node starts heartbeat

[root@server1 ha.d] # service heartbeat start

Check whether the primary node VIP exists

[root@server1 ha.d] # ip a // wait about 10 seconds

inet 192.168.200.254/24 brd 192.168.200.255 scope global secondary eth0:0

Verify: first stop the heartbeat service on server1 to see if VIP can be transferred

At this point, the mysql service on server2 is not yet running:

[root@server2 ha.d] # mysqladmin -uroot ping // on the standby node

mysqladmin: connect to server at 'localhost' failed

error: 'Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)'

Check that mysqld is running and that the socket: '/var/lib/mysql/mysql.sock' exists!

Now stop heartbeat on server1 and watch the VIP and mysql move to server2:

[root@server1 ha.d] # service heartbeat stop // on the master node

Stopping High-Availability services: Done.

[root@server2 ha.d] # ip a // on the standby node

inet 192.168.200.254/24 brd 192.168.200.255 scope global secondary eth0:0

[root@server2 ha.d] # mysqladmin -uroot ping // on the standby node; mysql has been started

mysqld is alive

At this point, stopping mysql itself does not trigger a VIP drift, so a script is needed: when the mysql service is found to be down, stop the heartbeat service to force the VIP transfer (run it in the background on both nodes).

[root@server1 ~] # vim chk_mysql.sh

#!/bin/bash
# if mysqld has died, try to restart it once; if the restart fails,
# stop heartbeat so that the VIP fails over to the other node
mysql="/etc/init.d/mysqld"
while true
do
    mysqlpid=$(ps -C mysqld --no-header | wc -l)
    if [ "$mysqlpid" -eq 0 ]; then
        $mysql start
        sleep 3
        mysqlpid=$(ps -C mysqld --no-header | wc -l)
        if [ "$mysqlpid" -eq 0 ]; then
            /etc/init.d/heartbeat stop
            echo "heartbeat stopped, please check your mysql!" | tee -a /var/log/messages
        fi
    fi
    sleep 5
done

[root@server1 ha.d] # bash chk_mysql.sh &

[root@server1 ha.d] # echo "bash chk_mysql.sh &" >> /etc/rc.local

Configure master-slave replication to keep time synchronized (both master and slave)

[root@server1 ~] # crontab -e

*/10 * * * * ntpdate time.nist.gov

Modify the configuration files of the four database hosts (make sure server_id differs on each) and enable binary logging.

[root@server1 ~] # vim /etc/my.cnf

[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
relay-log = relay-log-bin
server_id = 1
relay-log-index = slave-relay-bin.index

[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

Restart the service

[root@server1 ~] # / etc/init.d/mysqld restart

Stop mysqld: [OK]

Starting mysqld: [OK]

Grant replication privileges on server1 so the slave servers can synchronize, then view the master's binlog position.

[root@server1 ~] # mysql

mysql> grant replication slave on *.* to 'user'@'192.168.200.%' identified by '123456';

mysql> flush privileges;

mysql> show master status;

+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000002 |      187 |              |                  |
+------------------+----------+--------------+------------------+
1 row in set (0.00 sec)
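The File and Position values reported by `show master status` are exactly what the slaves' CHANGE MASTER statement needs. A sketch that scrapes them from the batch (tab-separated) form of the output; the sample text below is a captured stand-in so the snippet runs without a live server, while on a real host you would pipe `mysql -e 'show master status'` instead:

```shell
# Stand-in for: mysql -e 'show master status' (batch output is tab-separated).
status=$(printf 'File\tPosition\nmysql-bin.000002\t187\n')

# Row 2 holds the values; column 1 is the binlog file, column 2 the position.
file=$(echo "$status" | awk 'NR==2 {print $1}')
pos=$(echo "$status"  | awk 'NR==2 {print $2}')

echo "master_log_file='$file', master_log_pos=$pos"
```

The printed fragment can be pasted straight into the CHANGE MASTER statement on each slave.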

On each slave server, point replication at server1 and then check the slave status.

mysql> change master to master_host='192.168.200.254', master_user='user', master_password='123456', master_log_file='mysql-bin.000002', master_log_pos=106;

mysql> flush privileges;

mysql> start slave;

mysql> show slave status\G

Create a database on the master through the VIP to verify that the slave servers synchronize it.

On server1:

mysql> create database abc;

On the slaves:

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| abc                |
| b                  |
| mysql              |
| test               |
+--------------------+
5 rows in set (0.00 sec)

Configure LVS+keepalived for load balancing: install the keepalived service and configure the VIP and node health checks.

Operate on the primary node

[root@localhost ~] # yum -y install keepalived

[root@localhost ~] # cd /etc/keepalived/

[root@localhost keepalived] # cp keepalived.conf keepalived.conf.bak

[root@localhost keepalived] # vim keepalived.conf

! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.200.100
    }
}

virtual_server 192.168.200.100 3306 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    nat_mask 255.255.255.0
    persistence_timeout 50
    protocol TCP

    real_server 192.168.200.103 3306 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.200.104 3306 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

The operation of the standby node is the same as that of the primary node, only the configuration file has some differences.

! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 50
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.200.100
    }
}

virtual_server 192.168.200.100 3306 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    nat_mask 255.255.255.0
    persistence_timeout 50
    protocol TCP

    real_server 192.168.200.103 3306 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.200.104 3306 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

Write a script on each real server (both slave nodes) to configure the VIP on the loopback interface and suppress ARP:

[root@slave1 ~] # vim /opt/lvs-dr

#!/bin/bash
VIP="192.168.200.100"
/sbin/ifconfig eth0 192.168.200.103/24 up
/sbin/ifconfig lo:0 $VIP broadcast $VIP netmask 255.255.255.255 up
/sbin/route add -host $VIP dev lo:0
echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce

[root@slave2 ~] # vim /opt/lvs-dr

#!/bin/bash
VIP="192.168.200.100"
/sbin/ifconfig eth0 192.168.200.104/24 up
/sbin/ifconfig lo:0 $VIP broadcast $VIP netmask 255.255.255.255 up
/sbin/route add -host $VIP dev lo:0
echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce

Add execute permissions, execute scripts

chmod +x /opt/lvs-dr

echo "/opt/lvs-dr" >> /etc/rc.local

/opt/lvs-dr

[root@slave1 ~] # ip a

1: lo: mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet 192.168.200.100/32 brd 192.168.200.100 scope global lo:0
2: eth0: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:3f:03:d5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.200.103/24 brd 192.168.200.255 scope global eth0

Start keepalived

[root@localhost keepalived] # /etc/init.d/keepalived start

Starting keepalived: [OK]

Install ipvsadm to view the node records (configure on both the master and standby LVS nodes)

[root@localhost ~] # yum -y install ipvsadm

[root@localhost ~] # ipvsadm -Ln

IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.200.100:3306 rr persistent 50
  -> 192.168.200.103:3306         Route   1      0          0
  -> 192.168.200.104:3306         Route   1      0          0

[root@localhost ~] # /etc/init.d/ipvsadm save

[root@localhost ~] # /etc/init.d/ipvsadm restart

Verify that replication continues after a master switch: shut down heartbeat on the database node server1.

[root@server1 ~] # service heartbeat stop

Stopping High-Availability services: Done.

View synchronization information on slave

mysql> show slave status\G

*************************** 1. row ***************************

Slave_IO_State: Waiting for master to send event

Master_Host: 192.168.200.253

Master_User: myslave

Master_Port: 3306

Connect_Retry: 60

Master_Log_File: mysql-bin.000008

Read_Master_Log_Pos: 106

Relay_Log_File: mysqld-relay-bin.000023

Relay_Log_Pos: 251

Relay_Master_Log_File: mysql-bin.000008

Slave_IO_Running: Yes

Slave_SQL_Running: Yes

Replicate_Do_DB:

Replicate_Ignore_DB:

Replicate_Do_Table:

Replicate_Ignore_Table:

Replicate_Wild_Do_Table:

Replicate_Wild_Ignore_Table:

Last_Errno: 0

Last_Error:

Skip_Counter: 0

Exec_Master_Log_Pos: 106

Relay_Log_Space: 552

Until_Condition: None

Until_Log_File:

Until_Log_Pos: 0

Master_SSL_Allowed: No

Master_SSL_CA_File:

Master_SSL_CA_Path:

Master_SSL_Cert:

Master_SSL_Cipher:

Master_SSL_Key:

Seconds_Behind_Master: 0

Master_SSL_Verify_Server_Cert: No

Last_IO_Errno: 0

Last_IO_Error:

Last_SQL_Errno: 0

Last_SQL_Error:

1 row in set (0.00 sec)
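A replication health check usually comes down to both Slave_IO_Running and Slave_SQL_Running reading Yes. A sketch that evaluates this from captured `show slave status\G` text; the sample values below mirror the output above, and on a live system you would pipe `mysql -e 'show slave status\G'` in instead:

```shell
# Stand-in for: mysql -e 'show slave status\G' (only the relevant lines).
status="Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Seconds_Behind_Master: 0"

# Collect both Running flags; replication is healthy only if every flag is Yes.
running=$(echo "$status" | awk -F': ' '/Slave_(IO|SQL)_Running/ {print $2}' | sort -u)
if [ "$running" = "Yes" ]; then
    verdict="replication healthy"
else
    verdict="replication broken"
fi
echo "$verdict"
```

If either thread shows No, `sort -u` yields more than just "Yes" and the check fails, which is exactly the "stop, restart, investigate" case described below.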

If the slave is out of sync, stop replication, restart it, and investigate.

Verify that requests are polled between the slave databases by viewing connection information on the master lvs.

[root@localhost ~] # watch ipvsadm -Lnc // view connection information in real time

-c (--connection) displays LVS's current connection information

Connection testing through VIP on other hosts

[root@localhost keepalived] # mysql -umydb -h192.168.200.100 -p123456 -e 'show databases;'

+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| slave1             |
| test               |
+--------------------+

Check the LVS information on the master, then disconnect and test again; the connection is now scheduled to the other real server.

[root@localhost keepalived] # mysql -umydb -h192.168.200.100 -p123456 -e 'show databases;'

+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| slave2             |
| test               |
+--------------------+

That is how MySQL, Heartbeat and DRBD can be combined into a highly available database cluster. If you need more industry information, you can follow our industry information section.
