
Heartbeat+DRBD+MySQL high availability scheme


Overview:

=

Scheme introduction

1. Scheme introduction, advantages and disadvantages

★ Scheme introduction

This scheme uses the Heartbeat two-node hot-standby software to keep the database service highly stable and continuous, while DRBD guarantees data consistency. By default only one MySQL instance is serving. When the master MySQL server fails, the system automatically switches to the slave, which continues to provide service; once the master is repaired, service is switched back to the master MySQL.

★ Advantages and disadvantages of the scheme

◆ Advantages:

High security, high stability, high availability, and automatic switchover when a failure occurs.

◆ Disadvantages:

Only one server provides service at a time, so the cost is relatively high; the setup is not easy to scale out; and split-brain may occur.

2. Software introduction

★ Heartbeat introduction

Official site: http://linux-ha.org/wiki/Main_Page

Heartbeat can quickly move resources (a VIP address and program services) from a failed server to a healthy one, which then takes over service. Heartbeat is similar to keepalived: it provides failover, but it cannot health-check the back-end service itself.

★ DRBD introduction

Official site: http://www.drbd.org/

DRBD (Distributed Replicated Block Device) is software that synchronizes and mirrors data between remote servers at the block-device level. It is a shared-nothing storage replication solution that mirrors the content of block devices between servers. Over the network it can replicate a block device between two servers in real time, either synchronously (a write succeeds only after both servers have written it) or asynchronously (a write succeeds once the local server has written it), which makes it the network equivalent of RAID 1. Because it works on block devices (disks, LVM logical volumes), below the file system, its replication is faster than copying with the cp command. MySQL's official documentation lists DRBD as one of its recommended high-availability solutions.
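
To make the two replication modes concrete, this is how the protocol choice appears in a DRBD resource definition (a minimal sketch only; the full configuration actually used in this scheme appears in the DRBD configuration section below):

resource data {
    # protocol A: asynchronous - a write returns once the local disk write completes
    #             and the data has been handed to the local TCP send buffer
    # protocol B: memory-synchronous - a write returns once the peer has received the data
    # protocol C: synchronous - a write returns only after both nodes have written the data
    #             to disk; this is the "network RAID1" behavior used in this scheme
    protocol C;
    ...
}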

Scheme topology and applicable scenarios

Applicable scenarios:

It suits scenarios where database traffic is not too heavy, the number of visits will not grow too fast in the short term, and the availability requirement for the database is very high.

Installation, deployment and testing

1. Test environment (the firewall and SELinux are turned off on both nodes)

Hostname | IP address     | System     | DRBD disk | heartbeat version
per2     | 172.22.144.232 | CentOS 6.5 | /dev/sdb3 | 2.1.4-12
per3     | 172.22.144.233 | CentOS 6.5 | /dev/sdb3 | 2.1.4-12

2. Test environment preparation:

★ Local yum source configuration

Copy all the files from the CD used when installing the system to /cdrom on each host.
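
The repo file itself is not shown in the original; a minimal sketch, assuming the CD contents were copied to /cdrom (the file name local.repo and the repo id are arbitrary):

# /etc/yum.repos.d/local.repo
[local]
name=local CD repository
baseurl=file:///cdrom
enabled=1
gpgcheck=0

# then refresh the cache
[root@node1 ~]# yum clean all && yum makecache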

★ NTP server configuration

The clocks of the cluster nodes must be synchronized, so an NTP server needs to be built. Here one node (node1) is chosen as the server, and the other nodes synchronize against it with ntpdate serverip from crontab. The deployment steps are sketched below:
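
A minimal sketch, assuming node1 is the NTP server and the other nodes pull time from it every five minutes (the interval, and using node1's address 172.22.144.232 from the table above, are assumptions):

# on node1, the NTP server: start ntpd and enable it at boot
[root@node1 ~]# service ntpd start
[root@node1 ~]# chkconfig ntpd on
# on every other node: synchronize against the server from crontab
[root@node2 ~]# crontab -e
*/5 * * * * /usr/sbin/ntpdate 172.22.144.232 > /dev/null 2>&1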

★ Domain name resolution configuration

Write all the IPs and hostnames into /etc/hosts on one server, then scp the file to every other server.
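
For example (a sketch; the addresses follow the test-environment table above, while the node1/node2 names match the shell prompts used throughout this article):

[root@node1 ~]# cat >> /etc/hosts << EOF
172.22.144.232 node1
172.22.144.233 node2
EOF
[root@node1 ~]# scp /etc/hosts node2:/etc/hosts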

★ Mutual trust (passwordless SSH) between the master and standby servers
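
The original omits the commands for this step; a typical key-based setup looks like the following (run the mirror-image commands on node2 as well so that trust works in both directions):

# generate a key pair on node1 (accept the defaults, empty passphrase)
[root@node1 ~]# ssh-keygen -t rsa
# install the public key on node2 so node1 can ssh to it without a password
[root@node1 ~]# ssh-copy-id root@node2
# verify
[root@node1 ~]# ssh node2 date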

DRBD installation configuration and startup testing:

1. Install dependency packages (both node1 and node2)

yum install -y gcc gcc-c++ make glibc flex kernel-devel kernel-headers PyXML net-snmp-libs tigervnc-server

2. DRBD installation and configuration (both node1 and node2)

1) The installation packages are prepared as follows:

[root@node1 heartbeat+drbd+mysql]# cd drbd/
[root@node1 drbd]# ll
total 45652
-rw-r--r-- 1 root root   224376 Apr 26 17:15 drbd83-utils-8.3.16-1.el6.elrepo.x86_64.rpm
-rw-r--r-- 1 root root   688328 Apr 26 17:15 drbd-8.4.3.tar.gz
-rw-r--r-- 1 root root 30514788 Apr 26 17:16 kernel-2.6.32-504.12.el6.x86_64.rpm
-rw-r--r-- 1 root root 15133064 Apr 26 17:16 kernel-firmware-2.6.32-504.12.el6.noarch.rpm
-rw-r--r-- 1 root root   177360 Apr 26 17:16 kmod-drbd83-8.3.16-3.el6.elrepo.x86_64.rpm

2) Extract the drbd-8.4.3.tar.gz package, enter the extracted directory, and run the following commands:

# extract
[root@node1 drbd]# tar -zxvf drbd-8.4.3.tar.gz
[root@node1 drbd]# cd drbd-8.4.3
[root@node1 drbd-8.4.3]# ls
autogen.sh  configure     documentation  drbd-kernel.spec.in  filelist-redhat  preamble         preamble-sles11  scripts
benchmark   configure.ac  drbd           drbd-km.spec.in      filelist-suse    preamble-rhel5   README           user
ChangeLog   COPYING       drbd_config.h  drbd.spec.in         Makefile.in      preamble-sles10  rpm-macro-fixes
[root@node1 drbd-8.4.3]# ./configure --prefix=/usr/local/drbd --with-km --with-heartbeat
[root@node1 drbd-8.4.3]# make KDIR=/usr/src/kernels/`uname -r`
[root@node1 drbd-8.4.3]# make install
# the compiled files end up under the /usr/local/drbd path

3) Enter the /usr/local/drbd directory and complete the following operations:

[root@node1 drbd]# mkdir -p /usr/local/drbd/var/run/drbd
[root@node1 drbd]# cp /usr/local/drbd/etc/rc.d/init.d/drbd /etc/rc.d/init.d
# add it to the system services
[root@node1 init.d]# chkconfig --add drbd
[root@node1 init.d]# chkconfig drbd on

4) Load the drbd kernel module

[root@node1 init.d]# modprobe drbd
# check whether the drbd module is loaded
[root@node1 init.d]# lsmod | grep drbd
drbd                  326138  0
libcrc32c               1246  1 drbd
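
Once the module is loaded, /proc/drbd also reports the kernel-side DRBD version, which is worth comparing against the userland tools (output abbreviated; the exact build string will differ):

[root@node1 init.d]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
...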

3. DRBD configuration and startup

1) Edit the drbd configuration file as follows (identical on node1 and node2):

[root@node1 etc]# pwd
/usr/local/drbd/etc
[root@node1 etc]# vim drbd.conf
# You can find an example in /usr/share/doc/drbd.../drbd.conf.example
include "drbd.d/global_common.conf";
include "drbd.d/*.res";

resource data {                      # create a resource named "data"
    protocol C;                      # DRBD protocol C: a write returns only after the peer
                                     # has received and written the data
    startup {
        wfc-timeout 0;
        degr-wfc-timeout 120;
    }
    disk {
        on-io-error detach;
    }
    net {
        timeout 60;
        connect-int 10;
        ping-int 10;
        max-buffers 2048;
        max-epoch-size 2048;
    }
    syncer {
        rate 100M;
    }
    on node1 {                       # per-node sections, named after each host's hostname
        device /dev/drbd0;           # map the resource device /dev/drbd0 ...
        disk /dev/sdb3;              # ... onto the actual physical partition /dev/sdb3
        address 172.21.1.112:7788;   # listening address and port, on the same LAN as the peer
        meta-disk internal;          # store DRBD metadata on the same device
    }
    on node2 {
        device /dev/drbd0;
        disk /dev/sdb3;
        address 172.21.1.113:7788;
        meta-disk internal;
    }
}

2) Initialize the resource and start the service (node1 and node2 operations are identical)

# /dev/sdb3 is the DRBD partition; in a test environment it may also be a logical volume,
# adjust according to your situation. Zero the head of the partition first, otherwise
# creating the resource may report an error.
[root@node1 ~]# dd if=/dev/zero of=/dev/sdb3 bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 3.33403 s, 31.5 MB/s
[root@node1 ~]# drbdadm create-md data
you are the 57124th user to install this version
Writing meta data...
initializing activity log
NOT initializing bitmap
New drbd meta data block successfully created.
success

3) Start and check DRBD

[root@node1 init.d]# pwd
/etc/init.d
# start the service
[root@node1 init.d]# ./drbd start
Starting DRBD resources: [ create res: data   prepare disk: data   adjust disk: data   adjust net: data ]
outdated-wfc-timeout has to be shorter than degr-wfc-timeout
outdated-wfc-timeout implicitly set to degr-wfc-timeout (120s)
# check whether port 7788 is listening
[root@node1 init.d]# netstat -tnp
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address          Foreign Address        State       PID/Program name
tcp        0    248 172.21.1.112:22        172.21.1.58:52000      ESTABLISHED 3922/sshd
tcp        0      0 172.21.1.112:7788      172.21.1.113:...       ESTABLISHED -
tcp        0      0 172.21.1.112:...       172.21.1.113:7788      ESTABLISHED -

4) Check the DRBD status; at this point there is no primary node yet, and both nodes are Secondary

[root@node1 init.d]# ./drbd status
drbd driver loaded OK; device status:
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@node1, 2017-04-28 10:22:42
m:res   cs         ro                   ds                         p  mounted  fstype
0:data  Connected  Secondary/Secondary  Inconsistent/Inconsistent  C
[root@node1 sbin]# pwd
/usr/local/drbd/sbin
[root@node1 sbin]# ./drbd-overview
  0:data/0  Connected Secondary/Secondary Inconsistent/Inconsistent C r-----

4. Set node1 as the primary node

[root@node1 ~]# drbdsetup /dev/drbd0 primary --force
# look again and you can see that data synchronization has started
[root@node1 sbin]# ./drbd-overview
  0:data/0  SyncSource Primary/Secondary UpToDate/Inconsistent C r-----
	[>....................] sync'ed:  0.2% (10236/10244)M
[root@node1 sbin]# ./drbd-overview
  0:data/0  SyncSource Primary/Secondary UpToDate/Inconsistent C r-----
	[>...................] sync'ed:  7.7% (9464/10244)M
[root@node1 init.d]# ./drbd status
drbd driver loaded OK; device status:
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@node1, 2017-04-28 10:22:42
m:res   cs          ro                 ds                     p  mounted  fstype
...     sync'ed:    22.0%              (8000/10244)M
0:data  SyncSource  Primary/Secondary  UpToDate/Inconsistent  C
# you can monitor the synchronization progress with: watch -n 1 "./drbd-overview"
# after synchronization completes, check again: both sides are now UpToDate
# and the cluster has a primary and a secondary
[root@node1 sbin]# ./drbd-overview
  0:data/0  Connected Primary/Secondary UpToDate/UpToDate C r-----

5. Create a file system, then view and lift the maximum mount count limit

A file system can only be mounted on the Primary node, so the drbd device can be formatted only after the primary node has been set:

[root@node1 ~]# mkfs.ext4 /dev/drbd0
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
655776 inodes, 2622521 blocks
131126 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2688548864
81 block groups
32768 blocks per group, 32768 fragments per group
8096 inodes per group
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 38 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
# view the maximum mount count
[root@node1 ~]# tune2fs -l /dev/drbd0 | grep ^M
Mount count:              0
Maximum mount count:      38
# lift the maximum mount count limit
[root@node1 ~]# tune2fs -i 0 -c 0 /dev/drbd0
tune2fs 1.41.12 (17-May-2010)
Setting maximal mount count to -1
Setting interval between checks to 0 seconds
[root@node1 ~]# tune2fs -l /dev/drbd0 | grep ^M
Mount count:              0
Maximum mount count:      -1

Note:

If drbd is to be managed by heartbeat, both machines should be set to Secondary.

The slave (standby) node cannot run mkfs or mount. Secondary nodes do not allow any access to the DRBD device, not even read-only; all reads and writes can only be performed on the Primary node. Only when the Primary node fails can the Secondary node be promoted to Primary and continue the work.
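
You can confirm this restriction directly; while a node is Secondary, the kernel refuses access (the exact error text varies by DRBD version, so it is not reproduced here):

[root@node2 ~]# mount /dev/drbd0 /mydata
# refused: the DRBD device cannot be opened while this node is Secondary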

6. DRBD master-slave switchover, to verify that DRBD works correctly

1) Operations on the master node

[root@node1 ~]# mkdir /mydata
[root@node1 ~]# mount /dev/drbd0 /mydata
[root@node1 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3        97G  9.0G   83G  10% /
tmpfs           491M   72K  491M   1% /dev/shm
/dev/sda1       194M   29M  155M  16% /boot
/dev/drbd0      9.9G  151M  9.2G   2% /mydata
[root@node1 ~]# ls /mydata
lost+found
# create sample data
[root@node1 ~]# echo 123456 > /mydata/testfile
[root@node1 ~]# ls /mydata
lost+found  testfile
# unmount on the primary node
[root@node1 ~]# umount /mydata
[root@node1 ~]# df
Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/sda3      100944296 9406416  86410152  10% /
tmpfs             502204      72    502132   1% /dev/shm
/dev/sda1         198337   29472    158625  16% /boot
# demote the master service node to a slave node
[root@node1 ~]# drbdsetup /dev/drbd0 secondary

2) Operations on the slave node

# create the mount directory on the slave node
[root@node2 ~]# mkdir /mydata
# promote the slave node to primary
[root@node2 ~]# drbdsetup /dev/drbd0 primary
# mount the drbd device
[root@node2 ~]# mount /dev/drbd0 /mydata
[root@node2 ~]# df
Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/sda3      100944296 5168556  90648012   6% /
tmpfs             502204      72    502132   1% /dev/shm
/dev/sda1         198337   29472    158625  16% /boot
/dev/drbd0      10325420  154140   9646776   2% /mydata
# view the sample data that was created on node1
[root@node2 ~]# ls /mydata/
lost+found  testfile
[root@node2 ~]# cat /mydata/testfile
123456

This shows that drbd is configured correctly and working properly.

=

MySQL installation, configuration and startup (install on both nodes)

1. MySQL installation. Here the precompiled binary package is simply installed directly (both servers need to install it with the same steps, but the second mysql does not need its data initialized).

[root@node1 mysql]# ls
mysql-5.6.36-linux-glibc2.5-x86_64.tar.gz
[root@node1 mysql]# tar xvf mysql-5.6.36-linux-glibc2.5-x86_64.tar.gz -C /usr/local/
[root@node1 mysql]# cd /usr/local/
[root@node1 local]# ln -s mysql-5.6.36-linux-glibc2.5-x86_64 mysql
[root@node1 local]# ll
total 48
drwxr-xr-x. 2 root root 4096 Apr 27 10:07 bin
drwxr-xr-x  7 root root 4096 Apr 28 10:23 drbd
drwxr-xr-x. 2 root root 4096 Sep 23  2011 etc
drwxr-xr-x. 2 root root 4096 Sep 23  2011 games
drwxr-xr-x. 3 root root 4096 Apr 27 10:07 include
drwxr-xr-x. 4 root root 4096 Apr 27 10:07 lib
drwxr-xr-x. 2 root root 4096 Sep 23  2011 lib64
drwxr-xr-x. 2 root root 4096 Sep 23  2011 libexec
lrwxrwxrwx  1 root root   34 Apr 28 14:40 mysql -> mysql-5.6.36-linux-glibc2.5-x86_64
drwxr-xr-x 13 root root 4096 Apr 28 14:37 mysql-5.6.36-linux-glibc2.5-x86_64
drwxr-xr-x. 2 root root 4096 Sep 23  2011 sbin
drwxr-xr-x. 5 root root 4096 Apr  6 18:50 share
drwxr-xr-x. 2 root root 4096 Sep 23  2011 src
# create the mysql user and group (skip this if they already exist)
[root@192.168.0.10 local]# groupadd mysql
[root@192.168.0.10 local]# useradd -r -g mysql mysql
[root@node1 mysql]# pwd
/usr/local/mysql
[root@node1 mysql]# chown -R mysql.mysql *

2. Create the /mydata/data directory as the datadir of the mysql database, and change its owner and group to mysql

[root@node1 ~]# mkdir /mydata/data
[root@node1 ~]# chown mysql.mysql /mydata/data/

3. Initialize the mysql database directory (only on the first server)

Note: mount the mirrored partition /dev/drbd0 on /mydata before initializing the database. node1 is taken as the example.

1) First promote node1 to the primary node, and mount /dev/drbd0 on /mydata

[root@node1 ~]# drbdsetup /dev/drbd0 primary
[root@node1 ~]# mount /dev/drbd0 /mydata
[root@node1 ~]# df
Filesystem     1K-blocks     Used Available Use% Mounted on
/dev/sda3      100944296 11479256  84337312  12% /
tmpfs             502204       72    502132   1% /dev/shm
/dev/sda1         198337    29472    158625  16% /boot
/dev/drbd0      10325420   154140   9646776   2% /mydata
[root@node1 ~]# cd /usr/local/drbd/sbin
[root@node1 sbin]# ./drbd-overview
  0:data/0  Connected Primary/Secondary UpToDate/UpToDate C r----- /mydata ext4 9.9G 151M 9.2G 2%

2) Initialize the mysql database on the node1 master service node, as follows:

[root@node1 scripts]# pwd
/usr/local/mysql/scripts
[root@node1 scripts]# ./mysql_install_db --user=mysql --datadir=/mydata/data/ --basedir=/usr/local/mysql
[root@node1 scripts]# ls /mydata
data  lost+found  testfile
[root@node1 scripts]# ll /mydata/data/
total 110604
-rw-rw---- 1 mysql mysql 12582912 Apr 28 15:24 ibdata1
-rw-rw---- 1 mysql mysql 50331648 Apr 28 15:24 ib_logfile0
-rw-rw---- 1 mysql mysql 50331648 Apr 28 15:23 ib_logfile1
drwx------ 2 mysql mysql     4096 Apr 28 15:23 mysql
drwx------ 2 mysql mysql     4096 Apr 28 15:23 performance_schema
drwx------ 2 mysql mysql     4096 Apr 28 15:23 test

3) Configure mysql startup (both nodes do this at the same time)

[root@node1 mysql]# pwd
/usr/local/mysql
[root@node1 mysql]# ls
bin  COPYING  data  docs  include  lib  man  my.cnf  mysql-test  README  scripts  share  sql-bench  support-files
[root@node1 mysql]# ll support-files/
total 32
-rwxr-xr-x 1 mysql mysql  1153 Mar 18 15:06 binary-configure
-rw-r--r-- 1 mysql mysql   773 Mar 18 14:43 magic
-rw-r--r-- 1 mysql mysql  1126 Mar 18 15:06 my-default.cnf        # mysql configuration file template
-rwxr-xr-x 1 mysql mysql  1061 Mar 18 15:06 mysqld_multi.server
-rwxr-xr-x 1 mysql mysql   894 Mar 18 15:06 mysql-log-rotate
-rwxr-xr-x 1 mysql mysql 10565 Mar 18 15:06 mysql.server          # mysql startup script
[root@node1 mysql]# cp support-files/my-default.cnf /etc/my.cnf
[root@node1 mysql]# cp support-files/mysql.server /etc/init.d/mysqld
[root@node1 mysql]# chmod 755 /etc/init.d/mysqld

4) Modify mysql's startup configuration file /etc/my.cnf (both nodes at the same time) and start the mysql service

[root@node1 init.d]# cat /etc/my.cnf
[mysqld]
datadir=/mydata/data
socket=/mydata/data/mysql.sock
user=mysql
character_set_server = utf8
init_connect = 'SET NAMES utf8'
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
skip_name_resolve
innodb_file_per_table = on

[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

# start the service
[root@node1 init.d]# ./mysqld status
MySQL is not running                                       [FAILED]
[root@node1 init.d]# ./mysqld start
Starting MySQL..                                           [OK]

# Because the client still looks for mysql.sock under the default /var/lib/mysql/mysql.sock
# path, while the configured socket now lives under /mydata/data, logging in fails:
[root@wztao data]# mysql
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)
# To solve this, create a soft link, or pass the socket path when the client logs in
# (mysql -S /mydata/data/mysql.sock), as follows:
[root@wztao data]# mkdir /var/lib/mysql
[root@wztao data]# ln -s /mydata/data/mysql.sock /var/lib/mysql/mysql.sock
[root@wztao data]# ll /var/lib/mysql/mysql.sock
lrwxrwxrwx 1 root root 23 Feb 23 17:26 /var/lib/mysql/mysql.sock -> /mydata/data/mysql.sock

# set the mysql login password, log in to the database, and create a database
[root@node1 init.d]# mysql
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.6.36 MySQL Community Server (GPL)

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> SET PASSWORD=PASSWORD('admin');
Query OK, 0 rows affected (0.05 sec)
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
+--------------------+
4 rows in set (0.00 sec)
mysql> create database db1;
Query OK, 1 row affected (0.01 sec)
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| db1                |
| mysql              |
| performance_schema |
| test               |
+--------------------+
5 rows in set (0.01 sec)
mysql> \q
Bye
# check the database directory: db1 now exists
[root@node1 init.d]# ls /mydata/data/
auto.cnf  db1  ibdata1  ib_logfile0  ib_logfile1  mysql  mysql.sock  node1.pid  performance_schema  test

5) After performing the above operations, execute the following commands on node1 to prepare for the heartbeat deployment:

# stop the mysql service
[root@node1 init.d]# ./mysqld stop
Shutting down MySQL..                                      [OK]
# unmount the drbd partition
[root@node1 ~]# umount /mydata/
# demote node1 back to a slave node
[root@node1 ~]# drbdsetup /dev/drbd0 secondary
[root@node1 ~]# /usr/local/drbd/sbin/drbd-overview
  0:data/0  Connected Secondary/Secondary UpToDate/UpToDate C r-----

=

Heartbeat installation, configuration and startup (install on both node1 and node2)

1. Pre-deployment confirmation:

1) The mysql service is stopped and its automatic startup is disabled.

[root@node1 ~]# /etc/init.d/mysqld stop
[root@node1 init.d]# chkconfig mysqld off
[root@node1 init.d]# chkconfig --list mysqld
mysqld          0:off   1:off   2:off   3:off   4:off   5:off   6:off

2) The drbd service must be running, and both nodes must be in the Secondary state

[root@node1 ~]# /usr/local/drbd/sbin/drbd-overview
  0:data/0  Connected Secondary/Secondary UpToDate/UpToDate C r-----

3) Mutual trust communication between the master and standby machines works (and their clocks agree)

[root@node1 ~]# date; ssh node2 date
Tue May  2 13:16:06 CST 2017
Tue May  2 13:16:06 CST 2017

2. Install and configure Heartbeat

1) Install heartbeat (note: if other versions of heartbeat were installed on the machine before, uninstall them, and the packages they depend on, before installing heartbeat-2.1.4; otherwise the installation will conflict)
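
A quick way to check for and remove leftovers (a sketch; the package list on your machine may differ, and dependent packages may need removing as well):

# list any heartbeat packages that are already installed
[root@node1 ~]# rpm -qa | grep -i heartbeat
# remove whatever that turns up before installing 2.1.4, for example:
[root@node1 ~]# rpm -e heartbeat-gui heartbeat heartbeat-stonith heartbeat-pils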

[root@node1 heartbeat]# ls
heartbeat-2.1.4-12.el6.x86_64.rpm      heartbeat-pils-2.1.4-12.el6.x86_64.rpm     libnet-1.1.6-7.el6.x86_64.rpm
heartbeat-gui-2.1.4-12.el6.x86_64.rpm  heartbeat-stonith-2.1.4-12.el6.x86_64.rpm
[root@node1 heartbeat]# rpm -ivh libnet-1.1.6-7.el6.x86_64.rpm
[root@node1 heartbeat]# rpm -ivh heartbeat-2.1.4-12.el6.x86_64.rpm heartbeat-pils-2.1.4-12.el6.x86_64.rpm heartbeat-stonith-2.1.4-12.el6.x86_64.rpm heartbeat-gui-2.1.4-12.el6.x86_64.rpm
Preparing...                ########################################### [100%]
   1:heartbeat-pils         ########################################### [ 25%]
   2:heartbeat-stonith      ########################################### [ 50%]
   3:heartbeat              ########################################### [ 75%]
   4:heartbeat-gui          ########################################### [100%]

2) Configure heartbeat. The default heartbeat installation ships no active configuration files, only samples; just two of them, ha.cf and authkeys, are needed here.

[root@node1 ~]# cp /usr/share/doc/heartbeat-2.1.4/{authkeys,ha.cf} /etc/ha.d/
[root@node1 ~]# cd /etc/ha.d/
[root@node1 ha.d]# ll
total 40
-rw-r--r-- 1 root root   645 May  2 13:33 authkeys
-rw-r--r-- 1 root root 10539 May  2 13:33 ha.cf
-rwxr-xr-x 1 root root   745 Sep 10  2013 harc
drwxr-xr-x 2 root root  4096 May  2 13:05 rc.d
-rw-r--r-- 1 root root   692 Sep 10  2013 README.config
drwxr-xr-x 2 root root  4096 May  2 13:05 resource.d
-rw-r--r-- 1 root root  7864 Sep 10  2013 shellfuncs
# change the permissions of authkeys to 600
[root@node1 ha.d]# chmod 600 authkeys
[root@node1 ha.d]# ll
total 40
-rw------- 1 root root   645 May  2 13:33 authkeys
-rw-r--r-- 1 root root 10539 May  2 13:33 ha.cf
-rwxr-xr-x 1 root root   745 Sep 10  2013 harc
drwxr-xr-x 2 root root  4096 May  2 13:05 rc.d
-rw-r--r-- 1 root root   692 Sep 10  2013 README.config
drwxr-xr-x 2 root root  4096 May  2 13:05 resource.d
-rw-r--r-- 1 root root  7864 Sep 10  2013 shellfuncs

3) The configuration files are modified as follows:

[root@node2 ha.d]# vim authkeys
auth 1
1 md5 91961e19f5730f736d27c07ffbc093d1

[root@node1 ha.d]# vim ha.cf
logfacility local0
keepalive 2                # interval between heartbeats
udpport 694                # communication port
ucast eth0 172.22.1.113    # heartbeat NIC and the peer node's heartbeat IP;
                           # when configuring psae2, write 172.21.1.112 here
auto_failback on
node psae1                 # cluster nodes; each name must match that host's uname -n
node psae2
crm on                     # enable crm
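
The md5 digest in authkeys is simply a shared secret that both nodes must agree on; one common way to generate a fresh one (an assumption, this step is not shown in the original):

[root@node1 ha.d]# dd if=/dev/urandom bs=512 count=1 2>/dev/null | md5sum
# paste the resulting hex string into authkeys after "1 md5 "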

Copy the files to psae2

Copy the above two configuration files to psae2 and change the ucast IP in /etc/ha.d/ha.cf to the IP of psae1

[root@node1 ha.d]# scp -p authkeys ha.cf node2:/etc/ha.d/
[root@node2 ~]# vim /etc/ha.d/ha.cf
ucast eth0 172.21.1.112

3. You can start heartbeat after checking that there are no errors in the configuration file.

[root@node1 ha.d]# service heartbeat start
Starting High-Availability services: Done.
[root@node2 ha.d]# service heartbeat start
Starting High-Availability services: Done.
[root@node1 ha.d]# netstat -unlp | grep 694
udp        0      0 0.0.0.0:694        0.0.0.0:*                   4035/heartbeat: wri
[root@node2 ha.d]# ss -tunlp | grep 694
udp  UNCONN  0  0  *:694  *:*  users:(("heartbeat",11523,9),("heartbeat",11524,9))

4. Configure the Heartbeat cluster resources (this only needs to be done on one server): vip, drbd, mysql

1) Set a password for the hacluster user, which the client uses to connect to the server (pachira is the password suggested here); both the master and standby servers need to do this.

[root@node1 ha.d]# passwd hacluster
Changing password for user hacluster.
New password:
BAD PASSWORD: it is based on a dictionary word
BAD PASSWORD: is too simple
Retype new password:
passwd: all authentication tokens updated successfully.

2) Run the hb_gui & command to start the heartbeat graphical client program, using VNC to connect to the Linux desktop

Resource addition order: group --> vip --> drbd --> mysqld --> p_monitor
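
The GUI steps themselves cannot be reproduced in text. For reference only, heartbeat's classic v1 haresources style can express the same resource stack; this is an alternative to the hb_gui flow above, not what the original used (it requires crm off in ha.cf, and the VIP 172.21.1.200 is a placeholder):

# /etc/ha.d/haresources -- one line: the preferred node, then the resources in start order
psae1 IPaddr::172.21.1.200/24/eth0 drbddisk::data Filesystem::/dev/drbd0::/mydata::ext4 mysqld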
