
How to implement highly available MySQL with DRBD and Corosync

2025-04-11 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)05/31 Report--

This article explains in detail how to build a highly available MySQL service with DRBD and Corosync. The walkthrough is practical and is shared here for reference; I hope you find it useful.

(1) Transaction information must be passed between nodes, and nodes identify each other by node name, so name-to-IP resolution is required. Relying on a DNS server would add a point of failure to the high-availability cluster, so instead of DNS the local /etc/hosts file is used to define the resolution.

(2) The node name must be identical to the name shown by the uname -n command.

(3) To manage a cluster node, for example to stop it, you cannot stop its service from that node itself; it must be done from another node that is running normally. Therefore set up key-based ssh mutual trust so each node can reach the others without a password.

(4) Time must be synchronized across the nodes.

Configuration of network communication between nodes

Test1 Node IP configuration

Test2 Node IP configuration

Complete the configuration and restart the network service

Node name configuration of each node

Test1 Node name configuration

# vim /etc/sysconfig/network

# hostname test1.magedu.com

Test2 Node name configuration

# vim /etc/sysconfig/network

# hostname test2.magedu.com
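For the name to persist across reboots on CentOS/RHEL 5, /etc/sysconfig/network should carry the node's full name. A minimal sketch for test1 (test2 uses HOSTNAME=test2.magedu.com):

```
NETWORKING=yes
HOSTNAME=test1.magedu.com
```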

Once the configuration is complete, log in to the terminal again.

Hostname resolution configuration

Test1 hostname resolution configuration

# vim /etc/hosts

Test2 hostname resolution configuration

# vim /etc/hosts
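Both nodes need identical entries mapping each node name to its address. The addresses below are placeholders; substitute the ones actually assigned in the IP configuration step above:

```
172.16.100.11   test1.magedu.com   test1
172.16.100.12   test2.magedu.com   test2
```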

Configuration of ssh mutual trust function between nodes

Node test1 configuration

# ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ''

# ssh-copy-id -i ~/.ssh/id_rsa.pub root@test2.magedu.com

Test it

Node test2 configuration

# ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ''

# ssh-copy-id -i ~/.ssh/id_rsa.pub root@test1.magedu.com

Test it

Time synchronization

Configure one host as the time server for synchronization. Here the time server provided in the lab environment is used directly; no further configuration is done.

Synchronize time on nodes test1 and test2

# ntpdate 172.16.0.1

Scheduled task setup

Define a scheduled task that keeps the time synchronized

# crontab -e

*/5 * * * * /sbin/ntpdate 172.16.0.1 > /dev/null

List of rpm packages to be installed on nodes test1 and test2 in this lab

cluster-glue-1.0.6-1.6.el5.i386.rpm
cluster-glue-libs-1.0.6-1.6.el5.i386.rpm
corosync-1.4.5-1.1.i386.rpm
corosynclib-1.4.5-1.1.i386.rpm
heartbeat-3.0.3-2.3.el5.i386.rpm
heartbeat-libs-3.0.3-2.3.el5.i386.rpm
libesmtp-1.0.4-5.el5.i386.rpm
openais-1.1.3-1.6.el5.i386.rpm
openaislib-1.1.3-1.6.el5.i386.rpm
pacemaker-1.1.5-1.1.el5.i386.rpm
pacemaker-cts-1.1.5-1.1.el5.i386.rpm
pacemaker-libs-1.1.5-1.1.el5.i386.rpm
resource-agents-1.0.4-1.1.el5.i386.rpm

Operation configuration on node test1

Prepare the configuration file

# cd /etc/corosync/

# cp corosync.conf.example corosync.conf

# vim corosync.conf

The modifications are as follows

totem {
        secauth: on
        interface {
                bindnetaddr: 172.16.0.0
                mcastaddr: 239.151.51.51
        }
}

Add the following content

service {
        ver: 0
        name: pacemaker
}

aisexec {
        user: root
        group: root
}

Create a log file directory

# mkdir /var/log/cluster

# ssh test2 'mkdir /var/log/cluster'

Generate the cluster authentication key

# corosync-keygen

Copy key file and configuration file to test2 node

# scp -p authkey corosync.conf test2:/etc/corosync/

Start corosync

View node status information
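Assuming the stock init scripts from the package list above, starting the stack on both nodes and taking a one-shot look at membership might look like this (both node names should show as Online once the daemons are up):

```
# service corosync start
# ssh test2 'service corosync start'
# crm_mon -1
```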

Start crm configuration

Note: because there is no stonith device and there are only two nodes, the stonith feature must be disabled and the default quorum policy changed

crm(live)configure# property stonith-enabled=false
crm(live)configure# verify
crm(live)configure# property no-quorum-policy=ignore
crm(live)configure# verify
crm(live)configure# commit

Define default resource stickiness so that resources prefer to stay on the node they are currently running on

crm(live)configure# rsc_defaults resource-stickiness=100
crm(live)configure# verify
crm(live)configure# commit

View global resource configuration information

Allocation of resources

(1) configure drbd as a basic resource

(2) configure drbd as a clone resource

Resource Agent View

crm(live)# ra
crm(live)ra# providers drbd

View metadata information

crm(live)ra# meta ocf:heartbeat:drbd

Define a master resource and a master-slave resource

crm(live)# configure
crm(live)configure# primitive mydrbdservice ocf:heartbeat:drbd params drbd_resource=mydrbd op start timeout=240 op stop timeout=100 op monitor role=Master interval=20 timeout=30 op monitor role=Slave interval=30 timeout=30
crm(live)configure# ms ms_mydrbd mydrbdservice meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
crm(live)configure# verify
crm(live)configure# commit

Status information view

Master-slave conversion verification

crm(live)# node
crm(live)node# standby
crm(live)node# online test1.magedu.com

Configure a Filesystem resource so that /mydata is mounted automatically on the master node

Filesystem resource addition

crm(live)configure# primitive mystore ocf:heartbeat:Filesystem params device=/dev/drbd0 directory=/mydata fstype=ext3 op start timeout=60 op stop timeout=60

Define a colocation constraint so that the Filesystem always runs with the DRBD master

crm(live)configure# colocation mystore_with_ms_mydrbd inf: mystore ms_mydrbd:Master

Define an order constraint (promote DRBD first, then start the Filesystem)

crm(live)configure# order mystore_after_ms_mydrbd mandatory: ms_mydrbd:promote mystore:start

View node status information

Check whether the mount is successful on the node test2

Verification of handover between active and standby nodes

crm(live)# node
crm(live)node# standby test2.magedu.com

Status information view

Check whether Filesystem is mounted successfully on the test1 node

Prepare the mysql service

Now the master node is test1, so configure the mysql service on test1 first.

Create mysql users and mysql groups

# groupadd -g 3306 mysql

# useradd -g 3306 -u 3306 -M mysql

Prepare mysql

# tar xf mysql-5.5.28-linux2.6-i686.tar.gz -C /usr/local/

# cd /usr/local/

# ln -sv mysql-5.5.28-linux2.6-i686 mysql

Prepare the data catalog

# cd /mydata

# mkdir data

# chown -R mysql.mysql data

Change ownership of the mysql files to root, group mysql

# cd /usr/local/mysql

# chown -R root.mysql /usr/local/mysql/*

Prepare configuration files and service scripts

# cd /usr/local/mysql

# cp support-files/my-large.cnf /etc/my.cnf

# cp support-files/mysql.server /etc/init.d/mysqld

Modify the configuration file

# vim /etc/my.cnf

Add the following (number of threads and datadir directory location)

thread_concurrency = 2
datadir = /mydata/data

The mysql service must not start automatically at boot; the cluster will manage it.

# chkconfig --add mysqld

# chkconfig mysqld off

Initialize mysql

# cd /usr/local/mysql

# scripts/mysql_install_db --user=mysql --datadir=/mydata/data

Start mysql
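Assuming the init script was installed as /etc/init.d/mysqld above, a quick start-and-verify might look like this:

```
# service mysqld start
# netstat -tnlp | grep 3306
```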

The test2 node's mysql service configuration is the same as test1's; the process is as follows

1. Stop the mysql service on node test1

2. Promote node test2 to Master

crm(live)# node
crm(live)node# standby test1.magedu.com
crm(live)node# online test1.magedu.com

3. Configure the mysql service on test2 the same way as on test1 (skipping the initialization step, since the data directory already exists on the DRBD device)

Configure the mysql service as a highly available service

Add mysql as a cluster resource

crm(live)configure# primitive mysqld lsb:mysqld
crm(live)configure# verify

Define a colocation constraint (mysqld with mystore, i.e. with the master node)

crm(live)configure# colocation mysql_with_mystore inf: mysqld mystore
crm(live)configure# verify

Define an order constraint (start the mysql service last)

crm(live)configure# order mysqld_after_mystore mandatory: mystore mysqld
crm(live)configure# verify
crm(live)configure# commit

View node status information

Check the startup status of mysql service on test2
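A quick way to confirm from test1, using the ssh trust set up earlier (service name as registered above):

```
# ssh test2 'service mysqld status'
```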

Master-slave node switching test

crm(live)# node
crm(live)node# standby test2.magedu.com
crm(live)node# online test2.magedu.com

View status information

Check whether the test1 node is running the mysql service successfully

This completes a highly available MySQL built on DRBD and Corosync; I hope it provides some help.
