
How to Build a MySQL High-Availability Cluster with CoroSync+DRBD+MySQL


This article mainly introduces how to build a MySQL high-availability cluster with CoroSync+DRBD+MySQL. It is quite detailed and should be a useful reference; interested readers are encouraged to follow along!

Node planning:

node1.huhu.com    172.16.100.103
node2.huhu.com    172.16.100.104

Resource planning:

Resource name: any ASCII characters except whitespace

DRBD device: the DRBD device file on both nodes, usually /dev/drbdN, with major device number 147

Disk: the backing storage device each node provides

Network configuration: the network properties the two nodes use to synchronize data

DRBD has been integrated into the kernel since Linux kernel 2.6.33.
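Before adding the third-party packages below, you can check whether the running kernel already provides the module; a quick check (the stock CentOS 6 kernel is 2.6.32, so it normally does not):

# uname -r    # DRBD is in-tree only from 2.6.33 onward
# modinfo drbd > /dev/null 2>&1 && echo "drbd module present" || echo "drbd module missing"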

1. Configure mutual trust between the two nodes (key-based SSH authentication), the hosts file, and time synchronization

1) Hostname and IP address resolution must work on all nodes, and each node's hostname must match the output of the "uname -n" command. Therefore, ensure the /etc/hosts file on both nodes contains the following:

172.16.100.103    node1.huhu.com node1
172.16.100.104    node2.huhu.com node2

Node1:

# sed -i 's@\(HOSTNAME=\).*@\1node1.huhu.com@g' /etc/sysconfig/network
# hostname node1.huhu.com

Node2:

# sed -i 's@\(HOSTNAME=\).*@\1node2.huhu.com@g' /etc/sysconfig/network
# hostname node2.huhu.com

2) Set up key-based SSH communication between the two nodes, with commands like the following:

# yum install openssh-clients

Node1:

# ssh-keygen -t rsa
# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node2

Node2:

# ssh-keygen -t rsa
# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node1

Configure time synchronization via /etc/crontab:

*/5 * * * * root /usr/sbin/ntpdate ntp.api.bz &> /dev/null
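Before relying on the cron job, it may help to force one synchronization on both nodes right away (ntp.api.bz as in the crontab entry above; any reachable NTP server would do):

# ntpdate ntp.api.bz
# ssh node2 'ntpdate ntp.api.bz'
# hwclock -w                    # optionally persist the corrected time to the hardware clock
# ssh node2 'hwclock -w'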

2. Create and configure DRBD

Execute on Node1:

# rpm -Uvh http://www.elrepo.org/elrepo-release-6-6.el6.elrepo.noarch.rpm
# ssh node2 'rpm -Uvh http://www.elrepo.org/elrepo-release-6-6.el6.elrepo.noarch.rpm'
# yum update -y
# ssh node2 'yum update -y'
# yum install drbd84-utils kmod-drbd84 -y
# ssh node2 'yum install drbd84-utils kmod-drbd84 -y'

Load the module into the kernel:

# /sbin/modprobe drbd
# ssh node2 '/sbin/modprobe drbd'

DRBD configuration files:

/etc/drbd.conf
/etc/drbd.d/global_common.conf
/etc/drbd.d/resource.d/

Install parted and partition the backing disk:

# yum -y install parted
# ssh node2 'yum -y install parted'
# fdisk /dev/sdb

n    new partition
p    primary partition
1    partition number; press Enter twice to accept the default start and end cylinders
w    save and exit

# partprobe /dev/sdb1
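As a quick sanity check (not in the original steps), confirm that the kernel now sees the new partition; note that node2's /dev/sdb must be partitioned the same way:

# cat /proc/partitions | grep sdb
# ssh node2 'cat /proc/partitions | grep sdb'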

Resource planning:

Resource name: mydrbd

DRBD device: /dev/drbd0

Disk: /dev/sdb1

Network configuration: 100M

# cat /etc/drbd.d/global_common.conf | grep -v "#"
global {
    usage-count yes;
}
common {
    handlers {
        pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
    }
    startup {}
    options {}
    disk {
        on-io-error detach;
    }
    net {
        cram-hmac-alg "sha1";
        shared-secret "1q2w3e4r5t6y";
    }
    syncer {
        rate 200M;
    }
}

# cat mydrbd.res
resource mydrbd {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    meta-disk internal;
    on node1.huhu.com {
        address 172.16.100.103:7789;
    }
    on node2.huhu.com {
        address 172.16.100.104:7789;
    }
}
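Before copying the files to node2, it is worth letting drbdadm parse the configuration; `drbdadm dump` prints the parsed resource and reports the offending line if the resource file has a syntax error (a sanity check, assuming drbd84-utils is installed as above):

# drbdadm dump mydrbd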

Copy the configuration files to node2:

# scp -r /etc/drbd.* node2:/etc/

On both nodes, initialize the defined resource and start the service:

# drbdadm create-md mydrbd
# ssh node2 'drbdadm create-md mydrbd'
# /etc/init.d/drbd start
# ssh node2 '/etc/init.d/drbd start'

View the status of the DRBD device:

# cat /proc/drbd
version: 8.4.4 (api:1/proto:86-101)
GIT-hash: 599f286440bd633d15d5ff985204aff4bccffadd build by phil@Build64R6, 2013-10-14 15:33:06
 0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:2096348

Currently both nodes are in the Secondary state, so manually make node1 the primary:

# drbdadm -- --overwrite-data-of-peer primary mydrbd
# cat /proc/drbd
version: 8.4.4 (api:1/proto:86-101)
GIT-hash: 599f286440bd633d15d5ff985204aff4bccffadd build by phil@Build64R6, 2013-10-14 15:33:06
 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
    ns:2096348 nr:0 dw:0 dr:2097012 al:0 bm:128 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
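If the initial full synchronization is still running at this point, ro: already shows Primary/Secondary but ds: stays Inconsistent until it finishes; one way to watch the oos (out-of-sync) counter fall to 0 is:

# watch -n1 'cat /proc/drbd'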

Format the drbd partition on the primary node and mount it

# mke2fs -j /dev/drbd0
# mkdir /mydata
# mount /dev/drbd0 /mydata/
# cp /etc/inittab /mydata/
# ls -lh /mydata/
total 20K
-rw-r--r--. 1 root root 884 Jul  8 17:24 inittab
drwx------. 2 root root 16K Jul  8 17:23 lost+found

At this point, the drbd partition is ready for normal use.

Active/standby switching of the DRBD partition

Execute on the primary node:

# umount /mydata/
# drbdadm secondary mydrbd
# drbd-overview
 0:mydrbd/0 Connected Secondary/Secondary UpToDate/UpToDate C r-----

Execute on the secondary node:

# drbd-overview                  # make sure both sides show Secondary first
# drbdadm primary mydrbd
# mkdir -p /mydata
# mount /dev/drbd0 /mydata/
# ls -lh /mydata/

total 20K
-rw-r--r--. 1 root root 884 Jul  8 17:24 inittab
drwx------. 2 root root 16K Jul  8 17:23 lost+found

# drbd-overview

 0:mydrbd/0 Connected Primary/Secondary UpToDate/UpToDate C r----- /mydata ext3 2.0G 36M 1.9G 2%

The state has changed to Primary/Secondary.

3. Configure the corosync service

Stop the drbd service on each node and disable it at boot (the cluster will manage it from now on):

# /etc/init.d/drbd stop
# ssh node2 '/etc/init.d/drbd stop'
# chkconfig drbd off
# ssh node2 'chkconfig drbd off'
# chkconfig --list | grep drbd
# ssh node2 'chkconfig --list | grep drbd'
drbd  0:off 1:off 2:off 3:off 4:off 5:off 6:off

Install corosync

# yum install libibverbs librdmacm lm_sensors libtool-ltdl openhpi-libs openhpi perl-TimeDate
# yum install corosync pacemaker
# ssh node2 'yum install libibverbs librdmacm lm_sensors libtool-ltdl openhpi-libs openhpi perl-TimeDate'
# wget http://ftp5.gwdg.de/pub/opensuse/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-6/x86_64/crmsh-2.1-1.1.x86_64.rpm && wget http://ftp5.gwdg.de/pub/opensuse/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-6/x86_64/pssh-2.3.1-4.1.x86_64.rpm
# ssh node2 'wget http://ftp5.gwdg.de/pub/opensuse/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-6/x86_64/crmsh-2.1-1.1.x86_64.rpm && wget http://ftp5.gwdg.de/pub/opensuse/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-6/x86_64/pssh-2.3.1-4.1.x86_64.rpm'
# yum --nogpgcheck localinstall crmsh-2.1-1.1.x86_64.rpm pssh-2.3.1-4.1.x86_64.rpm

If the installation fails, add the following yum repository:

# vim /etc/yum.repos.d/ha-clustering.repo

[haclustering]
name=HA Clustering
baseurl=http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-6/
enabled=1
gpgcheck=0

# yum --nogpgcheck localinstall crmsh-2.1-1.1.x86_64.rpm pssh-2.3.1-4.1.x86_64.rpm

Perform the above on node2 as well.

Configure corosync

# cd /etc/corosync/
# cp corosync.conf.example corosync.conf
# cat corosync.conf | grep -v "^#" | sed -e '/^$/d'

compatibility: whitetank
totem {
    version: 2
    secauth: on
    threads: 2
    interface {
        ringnumber: 0
        bindnetaddr: 172.16.100.0
        mcastaddr: 226.94.8.9
        mcastport: 5405
        ttl: 1
    }
}
logging {
    fileline: off
    to_stderr: no
    to_logfile: yes
    to_syslog: no
    logfile: /var/log/cluster/corosync.log
    debug: off
    timestamp: on
    logger_subsys {
        subsys: AMF
        debug: off
    }
}
service {
    ver: 0
    name: pacemaker
    # use_mgmtd: yes
}
aisexec {
    user: root
    group: root
}
amf {
    mode: disabled
}

Generate the authentication key:

# corosync-keygen
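corosync-keygen reads from /dev/random and can block for a long time on an idle machine while it waits for entropy. A common workaround (not part of the original steps) is to generate disk I/O in a second terminal until the key is written:

# ls -lR / > /dev/null 2>&1    # run in another terminal to feed the entropy pool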

# scp -p authkey corosync.conf node2:/etc/corosync/
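The authkey file must remain readable by root only; corosync-keygen creates it with mode 0400, and scp -p preserves that mode. You can verify on both nodes (a quick check, not in the original steps):

# ls -l /etc/corosync/authkey                  # should show -r-------- root root
# ssh node2 'ls -l /etc/corosync/authkey'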

Create the log file directory:

# mkdir -pv /var/log/cluster/
# ssh node2 'mkdir -pv /var/log/cluster/'

Start the corosync service

# service corosync start

# ssh node2 'service corosync start'

Check whether the corosync engine has been started

# grep-e "Corosync Cluster Engine"-e "configuration file" / var/log/cluster/corosync.log

Jul 09 10:28:14 corosync [MAIN  ] Corosync Cluster Engine ('1.4.1'): started and ready to provide service.
Jul 09 10:28:14 corosync [MAIN  ] Successfully read main configuration file '/etc/corosync/corosync.conf'.

Check whether the communication between node members is normal

# grep TOTEM /var/log/cluster/corosync.log

Jul 09 10:28:14 corosync [TOTEM ] Initializing transport (UDP/IP Multicast).
Jul 09 10:28:14 corosync [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
Jul 09 10:28:14 corosync [TOTEM ] The network interface [172.16.100.103] is now up.
Jul 09 10:28:14 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
Jul 09 10:28:29 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.

Check whether pacemaker starts properly

# grep pcmk_startup /var/log/cluster/corosync.log
Jul 09 10:28:14 corosync [pcmk  ] info: pcmk_startup: CRM: Initialized
Jul 09 10:28:14 corosync [pcmk  ] Logging: Initialized pcmk_startup
Jul 09 10:28:14 corosync [pcmk  ] info: pcmk_startup: Maximum core file size is: 18446744073709551615
Jul 09 10:28:14 corosync [pcmk  ] info: pcmk_startup: Service: 9
Jul 09 10:28:14 corosync [pcmk  ] info: pcmk_startup: Local hostname: node1.huhu.com

View error messages

# grep ERROR /var/log/cluster/corosync.log | grep -v unpack_resources
Jul 09 10:28:14 corosync [pcmk  ] ERROR: process_ais_conf: You have configured a cluster using the Pacemaker plugin for Corosync. The plugin is not supported in this environment and will be removed very soon.
Jul 09 10:28:14 corosync [pcmk  ] ERROR: process_ais_conf: Please see Chapter 8 of 'Clusters from Scratch' (http://www.clusterlabs.org/doc) for details on using Pacemaker with CMAN
Jul 09 10:28:35 [1373] node1.huhu.com pengine: notice: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.

Note: since no STONITH device is used here, these errors can be ignored.
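To see the full list of complaints those ERROR lines refer to, run the check the log suggests, with -V for verbose output:

# crm_verify -L -V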

# crm status
Last updated: Wed Jul  9 10:49:53 2014
Last change: Wed Jul  9 10:19:07 2014 via crmd on node1.huhu.com
Stack: classic openais (with plugin)
Current DC: node1.huhu.com - partition with quorum
Version: 1.1.10-14.el6_5.3-368c726
2 Nodes configured, 2 expected votes
0 Resources configured
Online: [ node1.huhu.com node2.huhu.com ]

The above indicates that corosync started normally with this configuration.

Disable STONITH, then verify and commit:

crm(live)# configure
crm(live)configure# property stonith-enabled=false
crm(live)configure# verify
crm(live)configure# commit

With only two nodes, the cluster loses quorum whenever one node goes down, so tell it not to stop services in that case:

crm(live)configure# property no-quorum-policy=ignore
crm(live)configure# verify
crm(live)configure# commit

Configure resource stickiness so that resources prefer to stay on their current node:

crm(live)configure# rsc_defaults resource-stickiness=100
crm(live)configure# verify
crm(live)configure# commit

View current configuration

crm(live)configure# show
node node1.huhu.com
node node2.huhu.com
property cib-bootstrap-options: \
    dc-version=1.1.10-14.el6_5.3-368c726 \
    cluster-infrastructure="classic openais (with plugin)" \
    expected-quorum-votes=2 \
    stonith-enabled=false \
    no-quorum-policy=ignore
rsc_defaults rsc-options: \
    resource-stickiness=100
crm(live)configure#

View the resource agent for drbd

crm(live)configure# cd ..
crm(live)# ra
crm(live)ra# providers drbd
linbit

Note: only the linbit provider is available here; the heartbeat provider for drbd existed only in older heartbeat releases, not under corosync 1.4.
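If you want to browse what is available before picking an agent, crmsh can list the resource classes and the agents under a given provider from the same ra sublevel (exploration commands, not required for the setup):

crm(live)ra# classes
crm(live)ra# list ocf linbit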

View the agent metadata:

crm(live)ra# meta ocf:linbit:drbd

Define resources:

crm(live)configure# primitive mysql_drbd ocf:linbit:drbd params drbd_resource=mydrbd op start timeout=240 op stop timeout=100 op monitor role=Master interval=50s timeout=30s op monitor role=Slave interval=60s timeout=30s

Define the master/slave resource:

crm(live)configure# master MS_mysql_drbd mysql_drbd meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"

crm(live)configure# show mysql_drbd
primitive mysql_drbd ocf:linbit:drbd \
    params drbd_resource=mydrbd \
    op start timeout=240 interval=0 \
    op stop timeout=100 interval=0 \
    op monitor role=Master interval=50s timeout=30s \
    op monitor role=Slave interval=60s timeout=30s
crm(live)configure# show MS_mysql_drbd
ms MS_mysql_drbd mysql_drbd \
    meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true

crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# cd
crm(live)# status
Last updated: Wed Jul  9 11:54:30 2014
Last change: Wed Jul  9 11:54:17 2014 via cibadmin on node1.huhu.com
Stack: classic openais (with plugin)
Current DC: node2.huhu.com - partition with quorum
Version: 1.1.10-14.el6_5.3-368c726
2 Nodes configured, 2 expected votes
2 Resources configured
Online: [ node1.huhu.com node2.huhu.com ]
Master/Slave Set: MS_mysql_drbd [mysql_drbd]
    Masters: [ node1.huhu.com ]
    Slaves: [ node2.huhu.com ]
crm(live)#

Master and slave resources have been defined

[root@node1 corosync]# drbd-overview
 0:mydrbd/0 Connected Primary/Secondary UpToDate/UpToDate C r-----
[root@node1 corosync]#

At this point, the current node has become the master for the resource.

Manually do a master-slave switch:

# crm node standby
# crm status
Last updated: Wed Jul  9 12:01:44 2014
Last change: Wed Jul  9 12:01:29 2014 via crm_attribute on node1.huhu.com
Stack: classic openais (with plugin)
Current DC: node2.huhu.com - partition with quorum
Version: 1.1.10-14.el6_5.3-368c726
2 Nodes configured, 2 expected votes
2 Resources configured
Node node1.huhu.com: standby
Online: [ node2.huhu.com ]
Master/Slave Set: MS_mysql_drbd [mysql_drbd]
    Masters: [ node2.huhu.com ]
    Stopped: [ node1.huhu.com ]

# crm node online
# crm status
Last updated: Wed Jul  9 12:02:46 2014
Last change: Wed Jul  9 12:02:43 2014 via crm_attribute on node1.huhu.com
Stack: classic openais (with plugin)
Current DC: node2.huhu.com - partition with quorum
Version: 1.1.10-14.el6_5.3-368c726
2 Nodes configured, 2 expected votes
2 Resources configured
Online: [ node1.huhu.com node2.huhu.com ]
Master/Slave Set: MS_mysql_drbd [mysql_drbd]
    Masters: [ node2.huhu.com ]
    Slaves: [ node1.huhu.com ]
# drbd-overview
 0:mydrbd/0 Connected Secondary/Primary UpToDate/UpToDate C r-----
[root@node1 corosync]#

The current node is switched to a slave node.

At this point, the resources can be switched between master and slave, but the file system is not mounted.

Therefore, the file system must be defined.

crm(live)configure# primitive mystore ocf:heartbeat:Filesystem params device=/dev/drbd0 directory=/mydata fstype=ext3 op start timeout=60 op stop timeout=60

crm(live)configure# verify

Note: do not commit yet; the filesystem must run alongside the DRBD master, so first define the colocation constraint:

crm(live)configure# colocation mystore_with_MS_mysql_drbd inf: mystore MS_mysql_drbd:Master

This defines that the storage resource must run on the node where the DRBD resource is master.

crm(live)configure# order mystore_after_MS_mysql_drbd mandatory: MS_mysql_drbd:promote mystore:start

This defines that the filesystem may only be mounted after the DRBD resource has been promoted to master.

crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# cd ..
crm(live)# status
Last updated: Wed Jul  9 12:25:25 2014
Last change: Wed Jul  9 12:22:30 2014 via cibadmin on node1.huhu.com
Stack: classic openais (with plugin)
Current DC: node2.huhu.com - partition with quorum
Version: 1.1.10-14.el6_5.3-368c726
2 Nodes configured, 2 expected votes
3 Resources configured
Online: [ node1.huhu.com node2.huhu.com ]
Master/Slave Set: MS_mysql_drbd [mysql_drbd]
    Masters: [ node1.huhu.com ]
    Slaves: [ node2.huhu.com ]
mystore (ocf::heartbeat:Filesystem): Started node1.huhu.com
crm(live)#

You can see that Master is on node1, and mystore is launched on node1.

[root@node1 ~]# ls -lh /mydata/
total 20K
-rw-r--r--. 1 root root 884 Jul  8 17:24 inittab
drwx------. 2 root root 16K Jul  8 17:23 lost+found
[root@node1 ~]#

Manually simulate a switch

[root@node1 corosync]# crm node standby
[root@node1 corosync]# crm status
Last updated: Wed Jul  9 12:28:55 2014
Last change: Wed Jul  9 12:28:49 2014 via crm_attribute on node1.huhu.com
Stack: classic openais (with plugin)
Current DC: node2.huhu.com - partition with quorum
Version: 1.1.10-14.el6_5.3-368c726
2 Nodes configured, 2 expected votes
3 Resources configured
Node node1.huhu.com: standby
Online: [ node2.huhu.com ]
Master/Slave Set: MS_mysql_drbd [mysql_drbd]
    Masters: [ node2.huhu.com ]
    Stopped: [ node1.huhu.com ]
mystore (ocf::heartbeat:Filesystem): Started node2.huhu.com
[root@node1 corosync]#

[root@node2 ~]# ls -lh /mydata/
total 20K
-rw-r--r--. 1 root root 884 Jul  8 17:24 inittab
drwx------. 2 root root 16K Jul  8 17:23 lost+found
You have new mail in /var/spool/mail/root
[root@node2 ~]#

This switches to the node2 node.

4. Configure MySQL with DRBD and corosync

Create the MySQL user and group on each node:

# groupadd -g 3306 mysql
# useradd -u 3306 -g mysql -s /sbin/nologin -M mysql
# id mysql
uid=3306(mysql) gid=3306(mysql) groups=3306(mysql)
# ssh node2 'groupadd -g 3306 mysql'
# ssh node2 'useradd -u 3306 -g mysql -s /sbin/nologin -M mysql'
# wget http://cdn.mysql.com/Downloads/MySQL-5.5/mysql-5.5.38-linux2.6-x86_64.tar.gz
# tar zxvf mysql-5.5.38-linux2.6-x86_64.tar.gz -C /usr/local/
# cd /usr/local/
# ln -s mysql-5.5.38-linux2.6-x86_64/ mysql
# cd mysql
# chown root:mysql -R .
# cp support-files/my-huge.cnf /etc/my.cnf
# cp support-files/mysql.server /etc/init.d/mysqld
# [ -x /etc/init.d/mysqld ] && echo "ok" || echo "NO"

Ensure that you are currently operating on the primary node

# drbd-overview
 0:mydrbd/0 Connected Primary/Secondary UpToDate/UpToDate C r----- /mydata ext3 2.0G 36M 1.9G 2%
# mkdir -p /mydata/data
# chown -R mysql:mysql /mydata/data/
# scripts/mysql_install_db --user=mysql --datadir=/mydata/data
# vim /etc/my.cnf
datadir=/mydata/data
# chkconfig --add mysqld
# chkconfig mysqld off
# service mysqld start

Make sure it starts OK.
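One way to confirm mysqld is actually up before creating the test database (netstat comes from net-tools, which is present on CentOS 6):

# netstat -tnlp | grep 3306    # mysqld should be listening on port 3306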

# / usr/local/mysql/bin/mysql-uroot-e "CREATE DATABASE mydb"

[root@node1mysql] # / usr/local/mysql/bin/mysql-uroot-e "SHOW DATABASES"

+-+

| | Database |

+-+

| | information_schema |

| | mydb |

| | mysql |

| | performance_schema |

| | test |

+-

# service mysqld stop

# chkconfig-- list | grep 3:off | grep mysql

Mysqld0:off 1:off 2:off 3:off 4:off 5:off 6:off

[root@node1mysql] #

Switch the storage resource to node2 and configure MySQL on node2.

# crm node standby
[root@node1 mysql]# crm status
Last updated: Wed Jul  9 14:45:36 2014
Last change: Wed Jul  9 14:45:29 2014 via crm_attribute on node1.huhu.com
Stack: classic openais (with plugin)
Current DC: node2.huhu.com - partition with quorum
Version: 1.1.10-14.el6_5.3-368c726
2 Nodes configured, 2 expected votes
3 Resources configured
Node node1.huhu.com: standby
Online: [ node2.huhu.com ]
Master/Slave Set: MS_mysql_drbd [mysql_drbd]
    Masters: [ node2.huhu.com ]
    Stopped: [ node1.huhu.com ]
mystore (ocf::heartbeat:Filesystem): Started node2.huhu.com
[root@node1 mysql]# crm node online
[root@node1 mysql]# crm status
Last updated: Wed Jul  9 14:45:52 2014
Last change: Wed Jul  9 14:45:49 2014 via crm_attribute on node1.huhu.com
Stack: classic openais (with plugin)
Current DC: node2.huhu.com - partition with quorum
Version: 1.1.10-14.el6_5.3-368c726
2 Nodes configured, 2 expected votes
3 Resources configured
Online: [ node1.huhu.com node2.huhu.com ]
Master/Slave Set: MS_mysql_drbd [mysql_drbd]
    Masters: [ node2.huhu.com ]
    Slaves: [ node1.huhu.com ]
mystore (ocf::heartbeat:Filesystem): Started node2.huhu.com
[root@node1 mysql]#

# scp /root/mysql-5.5.38-linux2.6-x86_64.tar.gz node2:/root/
# scp /etc/my.cnf node2:/etc/my.cnf
# scp /etc/init.d/mysqld node2:/etc/init.d/mysqld

Install MySQL on node2

# tar zxvf mysql-5.5.38-linux2.6-x86_64.tar.gz -C /usr/local/
# cd /usr/local/
# ln -s mysql-5.5.38-linux2.6-x86_64/ mysql
# cd mysql
# chown root:mysql -R .

Note: never create /mydata/data manually on node2; the data directory already exists on the DRBD device, and re-initializing it would corrupt the files.

If a required library file is missing:

# yum install libaio

# service mysqld start
# /usr/local/mysql/bin/mysql -uroot -e "SHOW DATABASES"
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mydb               |
| mysql              |
| performance_schema |
| test               |
+--------------------+
[root@node2 mydata]#
# service mysqld stop
# chkconfig mysqld off

Configure MySQL to become a cluster resource

crm(live)# configure
crm(live)configure# primitive mysqld lsb:mysqld
crm(live)configure# verify
crm(live)configure# colocation mysqld_with_mystore inf: mysqld mystore
crm(live)configure# show xml

The MySQL service must run together with its storage resource.

crm(live)configure# order mysqld_after_mystore mandatory: mystore mysqld
crm(live)configure# verify

The MySQL service may only start after the storage resource has moved over, hence the order constraint.

crm(live)# status
Last updated: Wed Jul  9 16:18:27 2014
Last change: Wed Jul  9 16:18:16 2014 via cibadmin on node2.huhu.com
Stack: classic openais (with plugin)
Current DC: node2.huhu.com - partition with quorum
Version: 1.1.10-14.el6_5.3-368c726
2 Nodes configured, 2 expected votes
4 Resources configured
Online: [ node1.huhu.com node2.huhu.com ]
Master/Slave Set: MS_mysql_drbd [mysql_drbd]
    Masters: [ node2.huhu.com ]
    Slaves: [ node1.huhu.com ]
mystore (ocf::heartbeat:Filesystem): Started node2.huhu.com
mysqld (lsb:mysqld): Started node2.huhu.com
crm(live)#

Now log in on the node2 node:

# / usr/local/mysql/bin/mysql-uroot-e "SHOW DATABASES"

+-+

| | Database |

+-+

| | information_schema |

| | mydb |

| | mysql |

| | performance_schema |

| | test |

+-+

# / usr/local/mysql/bin/mysql-uroot-e "DROP DATABASE mydb"

# / usr/local/mysql/bin/mysql-uroot-e "CREATE DATABASE testdb"

Now switch master and slave again:

# crm node standby
# crm status
    Masters: [ node1.huhu.com ]
    Stopped: [ node2.huhu.com ]
mystore (ocf::heartbeat:Filesystem): Started node1.huhu.com
mysqld (lsb:mysqld): Started node1.huhu.com
# crm node online
# crm status
Master/Slave Set: MS_mysql_drbd [mysql_drbd]
    Masters: [ node1.huhu.com ]
    Slaves: [ node2.huhu.com ]
mystore (ocf::heartbeat:Filesystem): Started node1.huhu.com
mysqld (lsb:mysqld): Started node1.huhu.com

On the node1 node

# /usr/local/mysql/bin/mysql -uroot -e "SHOW DATABASES"
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
| testdb             |
+--------------------+

testdb is displayed correctly.

Define a virtual IP resource for MySQL:

crm(live)configure# primitive myip ocf:heartbeat:IPaddr params ip=172.16.100.119 nic=eth0 cidr_netmask=24
crm(live)configure# verify
crm(live)configure# colocation myip_with_MS_mysql_drbd inf: MS_mysql_drbd:Master myip
crm(live)configure# verify
crm(live)configure# show xml
crm(live)configure# commit
crm(live)configure# cd ..

crm(live)# status
Last updated: Wed Jul  9 16:46:27 2014
Last change: Wed Jul  9 16:46:20 2014 via cibadmin on node1.huhu.com
Stack: classic openais (with plugin)
Current DC: node2.huhu.com - partition with quorum
Version: 1.1.10-14.el6_5.3-368c726
2 Nodes configured, 2 expected votes
5 Resources configured
Online: [ node1.huhu.com node2.huhu.com ]
Master/Slave Set: MS_mysql_drbd [mysql_drbd]
    Masters: [ node1.huhu.com ]
    Slaves: [ node2.huhu.com ]
mystore (ocf::heartbeat:Filesystem): Started node1.huhu.com
mysqld (lsb:mysqld): Started node1.huhu.com
myip (ocf::heartbeat:IPaddr): Started node1.huhu.com
crm(live)#

You can see that myip has been started on node1.

# ip addr
1: lo: mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:a9:86:42 brd ff:ff:ff:ff:ff:ff
    inet 172.16.100.103/24 brd 172.16.100.255 scope global eth0
    inet 172.16.100.119/24 brd 172.16.100.255 scope global secondary eth0
    inet6 fe80::20c:29ff:fea9:8642/64 scope link
       valid_lft forever preferred_lft forever

5. Verify MySQL login from another node

Log in to MySQL and create a remote user:

# / usr/local/mysql/bin/mysql-uroot-e "GRANT ALL ON *. * TO root@'%'IDENTIFIED BY '123.compositionFLUSH PRIVILEGES"

# mysql-uroot-p123.com-h272.16.100.119-e "SHOW DATABASES"

+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
| testdb             |
+--------------------+
[root@localhost ~]#

Simulate another master/slave switch:

# crm node standby
# crm node online
# crm status
Online: [ node1.huhu.com node2.huhu.com ]
Master/Slave Set: MS_mysql_drbd [mysql_drbd]
    Masters: [ node2.huhu.com ]
    Slaves: [ node1.huhu.com ]
mystore (ocf::heartbeat:Filesystem): Started node2.huhu.com
mysqld (lsb:mysqld): Started node2.huhu.com
myip (ocf::heartbeat:IPaddr): Started node2.huhu.com
# mysql -uroot -p123.com -h172.16.100.119 -e "SHOW DATABASES"

[root@node2 ~]# crm
crm(live)# configure
crm(live)configure# show
node node1.huhu.com \
    attributes standby=off
node node2.huhu.com \
    attributes standby=off
primitive myip IPaddr \
    params ip=172.16.100.119 nic=eth0 cidr_netmask=24
primitive mysql_drbd ocf:linbit:drbd \
    params drbd_resource=mydrbd \
    op start timeout=240 interval=0 \
    op stop timeout=100 interval=0 \
    op monitor role=Master interval=50s timeout=30s \
    op monitor role=Slave interval=60s timeout=30s
primitive mysqld lsb:mysqld
primitive mystore Filesystem \
    params device="/dev/drbd0" directory="/mydata" fstype=ext3 \
    op start timeout=60 interval=0 \
    op stop timeout=60 interval=0
ms MS_mysql_drbd mysql_drbd \
    meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
colocation myip_with_MS_mysql_drbd inf: MS_mysql_drbd:Master myip
colocation mysqld_with_mystore inf: mysqld mystore
colocation mystore_with_MS_mysql_drbd inf: mystore MS_mysql_drbd:Master
order mysqld_after_mystore Mandatory: mystore mysqld
order mystore_after_MS_mysql_drbd Mandatory: MS_mysql_drbd:promote mystore:start
property cib-bootstrap-options: \
    dc-version=1.1.10-14.el6_5.3-368c726 \
    cluster-infrastructure="classic openais (with plugin)" \
    expected-quorum-votes=2 \
    stonith-enabled=false \
    no-quorum-policy=ignore
rsc_defaults rsc-options: \
    resource-stickiness=100
crm(live)configure#

That is the entire walkthrough of how CoroSync+DRBD+MySQL implements a MySQL high-availability cluster. Thank you for reading, and I hope it proves helpful!
