Tutorial on configuring and installing drbd+corosync to achieve High availability mysql

2025-04-04 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/01 Report--

This tutorial walks through configuring and installing drbd+corosync to provide a highly available mysql service. You may have seen similar articles elsewhere; read through to the end and you should come away with a working setup.

Premise:

There are two test nodes in this configuration, node1 and node2, whose IP addresses are 202.207.178.6 and 202.207.178.7 respectively; the management node 202.207.178.8 is used to configure node1 and node2. At this point, drbd has already been configured and works properly.

(To avoid interference, turn off the firewall and SELinux first. For the DRBD configuration itself, see http://10927734.blog.51cto.com/10917734/1867283.)
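Since corosync and crmsh refer to the cluster members by hostname, node1 and node2 must be able to resolve each other's names. A minimal sketch of the required name-resolution entries, written to a scratch file here (on the real nodes the target would be /etc/hosts; the file path is an assumption for demonstration):

```shell
# Hypothetical /etc/hosts entries for the two nodes; corosync/crmsh address
# cluster members by hostname, so node1 and node2 must resolve to these IPs.
HOSTS_FILE="${HOSTS_FILE:-/tmp/hosts.demo}"   # use /etc/hosts on the real nodes
cat > "$HOSTS_FILE" <<'EOF'
202.207.178.6  node1
202.207.178.7  node2
EOF
grep -c 'node[12]' "$HOSTS_FILE"   # counts the two node entries
```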

I. Install corosync

1. Stop the drbd service and disable it from starting automatically at boot

Primary node:

[root@node2 ~]# umount /mydata/

[root@node2 ~]# drbdadm secondary mydrbd

[root@node2 ~]# service drbd stop

[root@node2 ~]# chkconfig drbd off

Slave node:

[root@node1 ~]# service drbd stop

[root@node1 ~]# chkconfig drbd off

2. Install related software packages

[root@fsy ~]# for I in {1..2}; do ssh node$I 'mkdir /root/corosync/'; scp *.rpm node$I:/root/corosync; ssh node$I 'yum -y --nogpgcheck localinstall /root/corosync/*.rpm'; done

(copy heartbeat-3.0.4-2.el6.i686.rpm and heartbeat-libs-3.0.4-2.el6.i686.rpm to the home directory)

[root@fsy ~]# for I in {1..2}; do ssh node$I 'yum -y install cluster-glue corosync libesmtp pacemaker pacemaker-cts'; done
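Before running loops like the one above against live nodes, it can help to print each remote command first. A dry-run sketch (echo shows what would be executed; the node names are the ones from this setup, and passwordless SSH is assumed for the real run):

```shell
# Dry run of the per-node install loop: print each remote command instead of
# executing it over ssh, so the loop logic can be checked safely.
NODES="node1 node2"
for n in $NODES; do
  echo ssh "$n" "yum -y install cluster-glue corosync libesmtp pacemaker pacemaker-cts"
done
```

Dropping the `echo` turns this into the real installation loop.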

3. Create the required log directory

[root@node1 corosync]# mkdir /var/log/cluster

[root@node2 ~]# mkdir /var/log/cluster

4. Configure corosync (the following commands are executed on node1), then try to start it

# cd /etc/corosync

# cp corosync.conf.example corosync.conf

Then edit corosync.conf and modify the following directives:

bindnetaddr: 202.207.178.0   # network address of the segment the nodes are on

secauth: on                  # enable security authentication

threads: 2                   # number of threads to start

to_syslog: no                # do not log to the default syslog location

Add the following to make pacemaker start together with corosync, and to define the working user and group for corosync:

service {

    ver: 0

    name: pacemaker

}

aisexec {

    user: root

    group: root

}

Generate the authentication key file used for communication between nodes:

# corosync-keygen

Copy corosync.conf and authkey to node2:

# scp -p corosync.conf authkey node2:/etc/corosync/

Attempt to start (the following command is executed on node1):

# service corosync start

Note: corosync on node2 should be started from node1 with the command below, not by logging in to node2 and starting it directly.

# ssh node2 '/etc/init.d/corosync start'

5. Verify that everything started correctly

Check to see if the corosync engine starts properly:

# grep -e "Corosync Cluster Engine" -e "configuration file" /var/log/cluster/corosync.log

Output the following:

Oct 23 00:38:06 corosync [MAIN] Corosync Cluster Engine ('1.4.7'): started and ready to provide service.

Oct 23 00:38:06 corosync [MAIN] Successfully read main configuration file '/etc/corosync/corosync.conf'

Check whether the initial member-node notifications were issued properly:

# grep TOTEM /var/log/cluster/corosync.log

The output is as follows:

Oct 23 00:38:06 corosync [TOTEM] Initializing transport (UDP/IP Multicast).

Oct 23 00:38:06 corosync [TOTEM] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).

Oct 23 00:38:06 corosync [TOTEM] The network interface [202.207.178.6] is now up.

Oct 23 00:39:35 corosync [TOTEM] A processor joined or left the membership and a new membership was formed.

Check to see if any errors occur during startup:

# grep ERROR: /var/log/messages | grep -v unpack_resources

Check to see if pacemaker starts properly:

# grep pcmk_startup /var/log/cluster/corosync.log

The output is as follows:

Oct 23 00:38:06 corosync [pcmk] info: pcmk_startup: CRM: Initialized

Oct 23 00:38:06 corosync [pcmk] Logging: Initialized pcmk_startup

Oct 23 00:38:06 corosync [pcmk] info: pcmk_startup: Maximum core file size is: 4294967295

Oct 23 00:38:06 corosync [pcmk] info: pcmk_startup: Service: 9

Oct 23 00:38:06 corosync [pcmk] info: pcmk_startup: Local hostname: node1

Use the following command to view the startup status of the cluster node:

# crm_mon

Last updated: Tue Oct 25 17:28:10 2016 Last change: Tue Oct 25 17:21:56 2016 by hacluster via crmd on node1

Stack: classic openais (with plugin)

Current DC: node1 (version 1.1.14-8.el6_8.1-70404b0) - partition with quorum

2 nodes and 0 resources configured, 2 expected votes

Online: [node1 node2]

From the above information, you can see that both nodes have been started normally, and the cluster is in a normal working state.

II. Configure resources and constraints

1. Install the crmsh package:

Pacemaker itself is only a resource manager; we need an interface for defining and managing resources on pacemaker, and crmsh is such a configuration interface. Since pacemaker 1.1.8, crmsh has been developed as a stand-alone project and is no longer shipped with pacemaker. crmsh provides an interactive command-line interface for managing Pacemaker clusters; it is powerful, easy to use, and widely deployed. Another commonly used configuration tool is pcs.

Add the following to a repo file under /etc/yum.repos.d/:

[ewai]

name=aaa

baseurl=http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-6/

enabled=1

gpgcheck=0

# yum clean all

# yum makecache

[root@node1 yum.repos.d]# yum install crmsh

2. Check the configuration file for syntax errors and configure it

crm(live)configure# verify

We can disable stonith first with the following command:

# crm configure property stonith-enabled=false

or, inside the crm shell: crm(live)configure# property stonith-enabled=false

crm(live)configure# commit

Configure how the cluster behaves when it does not have quorum:

crm(live)configure# property no-quorum-policy=ignore

crm(live)configure# verify

crm(live)configure# commit

Configure resource stickiness so that resources prefer to stay on their current node:

crm(live)configure# rsc_defaults resource-stickiness=100

crm(live)configure# verify

crm(live)configure# commit
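The three global settings above can also be applied non-interactively with one-line crm commands. A dry-run sketch (echo shows what would run; drop the echo on a real cluster node with crmsh installed):

```shell
# Dry run: print the three one-shot crm commands that correspond to the
# interactive property/rsc_defaults settings configured above.
for c in \
  'property stonith-enabled=false' \
  'property no-quorum-policy=ignore' \
  'rsc_defaults resource-stickiness=100'
do
  echo crm configure "$c"
done
```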

3. Allocate resources

Define a resource named mysqldrbd:

(interval: defines the time interval for monitoring)

crm(live)configure# primitive mysqldrbd ocf:linbit:drbd params drbd_resource=mydrbd op start timeout=240 op stop timeout=100 op monitor role=Master interval=20 timeout=30 op monitor role=Slave interval=30 timeout=30

crm(live)configure# verify

Define a master/slave resource named ms_mysqldrbd, which is a clone of mysqldrbd. master-max=1: at most one master instance; master-node-max=1: a master instance can run on only one node at a time; clone-max=2: at most two clone instances; clone-node-max=1: at most one clone instance per node.

crm(live)configure# ms ms_mysqldrbd mysqldrbd meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true

crm(live)configure# verify

crm(live)configure# commit
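The same primitive and master/slave definitions can be kept in a crmsh batch file and applied in one step, which is handy for rebuilding the cluster. A sketch, assuming crmsh is installed (the file name /tmp/drbd-ms.crm is arbitrary):

```shell
# Write the mysqldrbd primitive and its master/slave wrapper as a crmsh
# batch file; crm -f replays it non-interactively on a cluster node.
cat > /tmp/drbd-ms.crm <<'EOF'
primitive mysqldrbd ocf:linbit:drbd params drbd_resource=mydrbd \
  op start timeout=240 op stop timeout=100 \
  op monitor role=Master interval=20 timeout=30 \
  op monitor role=Slave interval=30 timeout=30
ms ms_mysqldrbd mysqldrbd meta master-max=1 master-node-max=1 \
  clone-max=2 clone-node-max=1 notify=true
commit
EOF
# crm -f /tmp/drbd-ms.crm   # run this on a cluster node with crmsh installed
wc -l < /tmp/drbd-ms.crm    # the batch file is 7 lines
```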

4. Test

[root@node1 ~]# crm status

Last updated: Sun Oct 23 13:05:43 2016 Last change: Sun Oct 23 13:03:52 2016 by root via cibadmin on node1

Stack: classic openais (with plugin)

Current DC: node1 (version 1.1.14-8.el6_8.1-70404b0) - partition with quorum

2 nodes and 2 resources configured, 2 expected votes

Online: [node1 node2]

Full list of resources:

Master/Slave Set: ms_mysqldrbd [mysqldrbd]

Masters: [node1]

Slaves: [node2]

[root@node1 ~]# drbd-overview

0:mydrbd Connected Primary/Secondary UpToDate/UpToDate C r-

[root@node1 ~]# crm node standby

[root@node1 ~]# crm status

Last updated: Sun Oct 23 13:06:30 2016 Last change: Sun Oct 23 13:06:25 2016 by root via crm_attribute on node1

Stack: classic openais (with plugin)

Current DC: node1 (version 1.1.14-8.el6_8.1-70404b0) - partition with quorum

2 nodes and 2 resources configured, 2 expected votes

Node node1: standby

Online: [node2]

Full list of resources:

Master/Slave Set: ms_mysqldrbd [mysqldrbd]

Masters: [node2]

Stopped: [node1]

[root@node1 ~]# crm node online

[root@node1 ~]# crm status

Last updated: Sun Oct 23 13:07:00 2016 Last change: Sun Oct 23 13:06:58 2016 by root via crm_attribute on node1

Stack: classic openais (with plugin)

Current DC: node1 (version 1.1.14-8.el6_8.1-70404b0) - partition with quorum

2 nodes and 2 resources configured, 2 expected votes

Online: [node1 node2]

Full list of resources:

Master/Slave Set: ms_mysqldrbd [mysqldrbd]

Masters: [node2]

Slaves: [node1]

The service is normal!

5. Configure a filesystem resource to mount the DRBD device automatically; add a colocation constraint to keep it together with the master node, and an order constraint so that drbd is promoted first and mystore starts afterwards

crm(live)configure# primitive mystore ocf:heartbeat:Filesystem params device=/dev/drbd0 directory=/mydata fstype=ext4 op start timeout=60 op stop timeout=60

crm(live)configure# verify

crm(live)configure# colocation mystore_with_ms_mysqldrbd inf: mystore ms_mysqldrbd:Master

crm(live)configure# order mystore_after_ms_mysqldrbd mandatory: ms_mysqldrbd:promote mystore:start

crm(live)configure# verify

crm(live)configure# commit

Test:

[root@node2 ~]# crm node standby

[root@node2 ~]# crm status

Last updated: Sun Oct 23 13:45:26 2016 Last change: Sun Oct 23 13:45:20 2016 by root via crm_attribute on node2

Stack: classic openais (with plugin)

Current DC: node2 (version 1.1.14-8.el6_8.1-70404b0) - partition with quorum

2 nodes and 3 resources configured, 2 expected votes

Node node2: standby

Online: [node1]

Full list of resources:

Master/Slave Set: ms_mysqldrbd [mysqldrbd]

Masters: [node1]

Stopped: [node2]

mystore (ocf::heartbeat:Filesystem): Started node1

[root@node1 yum.repos.d]# ls /mydata/

fsy  lost+found

At this point everything tests normal!

III. Install MySQL (first on the master node, then on the slave node)

1. Extract the downloaded package to /usr/local and enter this directory

# tar xf mysql-5.5.52-linux2.6-i686.tar.gz -C /usr/local

# cd /usr/local/

2. Create a symlink to the unpacked directory and enter it

# ln -sv mysql-5.5.52-linux2.6-i686 mysql

# cd mysql

3. Create the MySQL user (as a system user) and the MySQL group

# groupadd -r -g 306 mysql

# useradd -g 306 -r -u 306 mysql

4. Make all files under mysql belong to mysql users and mysql groups

# chown -R mysql:mysql /usr/local/mysql/*

5. Create a data directory owned by the mysql user and mysql group, with no permissions for others

# mkdir /mydata/data

# chown -R mysql:mysql /mydata/data/

# chmod o-rx /mydata/data/
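The permission end-state of step 5 can be demonstrated on a throwaway directory (on the real node the path is /mydata/data, owned by mysql:mysql; the temp directory here is only a stand-in):

```shell
# Show the mode produced by step 5: a fresh 755 directory minus o-rx
# ends up as 750 (owner rwx, group rx, others nothing).
DATADIR="$(mktemp -d)"       # stand-in for /mydata/data
chmod 755 "$DATADIR"         # typical default mode for a new directory
chmod o-rx "$DATADIR"        # the same change applied to /mydata/data above
stat -c '%a' "$DATADIR"      # prints 750
```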

6. Run the installation script

# scripts/mysql_install_db --user=mysql --datadir=/mydata/data/

7. For security, after installation change the ownership of all files under /usr/local/mysql

# chown-R root:mysql / usr/local/mysql/*

8. Prepare the startup script and disable automatic start at boot

# cp support-files/mysql.server /etc/init.d/mysqld

# chkconfig --add mysqld

# chkconfig mysqld off

9. Edit database configuration file

# cp support-files/my-large.cnf /etc/my.cnf

# vim /etc/my.cnf, modify and add the following:

thread_concurrency = 2 (changed to 2 because this machine has 1 CPU)

datadir = /mydata/data

10. Start mysql

# service mysqld start

# /usr/local/mysql/bin/mysql

11. Test that everything works

mysql> show databases;

mysql> CREATE DATABASE mydb;

mysql> show databases;

12. Stop the mysql service on the master node, promote the slave node to master, and install mysql there

[root@node1 mysql]# service mysqld stop

[root@node1 mysql]# crm node standby

[root@node1 mysql]# crm node online

13. Extract the downloaded package to /usr/local and enter this directory

# tar xf mysql-5.5.52-linux2.6-i686.tar.gz -C /usr/local

# cd /usr/local/

14. Create a link to the unzipped directory and enter it

# ln -sv mysql-5.5.52-linux2.6-i686 mysql

# cd mysql

15. Create the MySQL user (as a system user) and the MySQL group

# groupadd -r -g 306 mysql

# useradd -g 306 -r -u 306 mysql

16. Make all files under mysql belong to root users and mysql groups

# chown -R root:mysql /usr/local/mysql/*

17. Prepare the startup script and disable automatic start at boot

# cp support-files/mysql.server /etc/init.d/mysqld

# chkconfig --add mysqld

# chkconfig mysqld off

18. Edit database configuration file

# cp support-files/my-large.cnf /etc/my.cnf

# vim /etc/my.cnf, modify and add the following:

thread_concurrency = 2 (changed to 2 because this machine has 1 CPU)

datadir = /mydata/data

19. Start mysql

# service mysqld start

# /usr/local/mysql/bin/mysql

20. Test that everything works

mysql> show databases;

Found a mydb database!

The test was successful!

IV. Configure the mysql resource

1. Stop the mysql service on the primary node

# service mysqld stop

2. Define the mysqld primitive resource

crm(live)configure# primitive mysqld lsb:mysqld

crm(live)configure# verify

3. Define resource constraints

Define a colocation constraint to keep mysqld and mystore together:

crm(live)configure# colocation mysqld_with_mystore inf: mysqld mystore

crm(live)configure# verify

Define an order constraint so that mystore starts first and mysqld starts afterwards:

crm(live)configure# order mysqld_after_mystore mandatory: mystore mysqld

crm(live)configure# verify

crm(live)configure# commit
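As with the drbd resources earlier, the mysqld primitive and its two constraints can be kept as a crmsh batch file and replayed in one step. A sketch, assuming crmsh is installed (the file name /tmp/mysqld.crm is arbitrary):

```shell
# Write the mysqld primitive plus its colocation and order constraints
# as a crmsh batch file; crm -f replays it non-interactively.
cat > /tmp/mysqld.crm <<'EOF'
primitive mysqld lsb:mysqld
colocation mysqld_with_mystore inf: mysqld mystore
order mysqld_after_mystore mandatory: mystore mysqld
commit
EOF
# crm -f /tmp/mysqld.crm   # apply on a cluster node
wc -l < /tmp/mysqld.crm    # the batch file is 4 lines
```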

4. Test

1) connect to mysql on the primary node and create a database

mysql> CREATE DATABASE hellodb;

mysql> show databases;

2) Node switching (on the primary node)

# crm node standby

# crm node online

3) Test on the original slave node (and now the master node)

mysql> show databases;

Found a hellodb database!

The test was successful!

At this point, the highly available mysql configuration of drbd+corosync is complete!
