Shulou(Shulou.com)05/31 Report--
This article explains in detail how to achieve MySQL high availability with Heartbeat and DRBD. The editor thinks it is very practical, so it is shared here as a reference; I hope you will get something out of reading it.
DRBD is a block device driver that can be used for high availability (HA). It provides functionality similar to network RAID-1: when you write data to the local file system, the data is also sent to another host on the network and recorded there in the same form. The data on the local host (primary node) and the remote host (standby node) are synchronized in real time. When the local system fails, an identical copy of the data remains on the remote host and can continue to be used.
1.3 Environmental preparation
Two hosts: 192.168.100.8 and 192.168.100.9
You need to set aside a disk partition for DRBD on both the local and the remote host, and the two partitions must be the same size. Here we use the /dev/sda2 partition of each host as the DRBD partition; both partitions are 37 GB.
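Before creating the DRBD resource it is worth confirming that the two partitions really are the same size. A minimal sketch: the byte counts below are illustrative placeholders for 37 GB, not values measured on the original hosts; on the real machines each value would come from running `blockdev --getsize64 /dev/sda2` on that host.

```shell
# Compare the two partition sizes (hardcoded here; on each real host
# the value would come from: blockdev --getsize64 /dev/sda2).
size_test9=39728447488
size_test8=39728447488
if [ "$size_test9" -eq "$size_test8" ]; then
    echo "partition sizes match"
else
    echo "partition size mismatch" >&2
fi
```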
1.4 Installation and configuration of DRBD
1. First download the source package from www.drbd.org (I downloaded the drbd-8.3.6 package).
2. Check whether the Linux kernel source is present on the host; if not, install the source package matching your kernel version.
3. Start installing drbd:
1) Decompress: tar -zxvf drbd-8.3.6.tar.gz
2) Enter the drbd source directory and compile drbd, pointing at the kernel source location:
cd drbd-8.3.6
./configure --with-km
make KDIR=/usr/src/kernels/2.6.18-92.el5-i686/
make install
4. Now load the drbd module:
insmod /lib/modules/2.6.18-92.el5/kernel/drivers/block/drbd.ko
Check whether it loaded successfully with lsmod:
# lsmod | grep drbd
If the module is listed, loading succeeded.
5. Edit the drbd configuration file:
cp ./scripts/drbd.conf /etc/drbd.conf
vi /etc/drbd.conf
Modify the configuration under resource r0:
on test9 {
    device    /dev/drbd0;
    disk      /dev/sda2;
    address   192.168.100.9:7788;
    flexible-meta-disk internal;
}
on test8 {
    device    /dev/drbd0;
    disk      /dev/sda2;
    address   192.168.100.8:7788;
    meta-disk internal;
}
...
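For context, here is what the complete r0 resource in drbd.conf might look like once both host sections are filled in. This is a sketch for DRBD 8.3; the protocol and syncer rate values are illustrative assumptions, not taken from the original setup.

```text
resource r0 {
  protocol C;                  # fully synchronous replication (assumed)
  syncer { rate 30M; }         # illustrative resync bandwidth cap
  on test9 {
    device    /dev/drbd0;
    disk      /dev/sda2;
    address   192.168.100.9:7788;
    flexible-meta-disk internal;
  }
  on test8 {
    device    /dev/drbd0;
    disk      /dev/sda2;
    address   192.168.100.8:7788;
    meta-disk internal;
  }
}
```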
6. Primary node settings:
1) Create the metadata:
# drbdadm create-md all
An error is encountered:
Found ext3 filesystem
38515836 kB data area apparently used
38514624 kB left usable by current configuration
Device size would be truncated, which would corrupt data and result in 'access beyond end of device' errors.
You need to either
* use external meta data (recommended)
* shrink that filesystem first
* zero out the device (destroy the filesystem)
Operation refused.
Command 'drbdmeta 0 v08 /dev/sda2 internal create-md' terminated with exit code 40
drbdadm create-md r0: exited with code 40
As the error suggests, zero out the first part of the device by running:
# dd if=/dev/zero of=/dev/sda2 bs=1M count=128
Then re-run:
# drbdadm create-md all
2) Start drbd:
# /etc/init.d/drbd start
3) Set the primary node:
# drbdadm primary all
4) Create a file system on the new device:
# mkfs.ext3 /dev/drbd0
5) Mount the file system:
# mkdir /drbddata
# mount /dev/drbd0 /drbddata
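The five primary-node steps above can be collected into one small script. This is a sketch, not the article's own tooling: with DRY_RUN=1 (the default here) it only echoes the commands, since the real run needs root privileges and a configured DRBD resource.

```shell
#!/bin/sh
# Primary-node bring-up, mirroring steps 1)-5) above.
# DRY_RUN=1 prints each command instead of executing it.
DRY_RUN=${DRY_RUN:-1}
run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "$*"
    else
        "$@" || exit 1
    fi
}
run drbdadm create-md all
run /etc/init.d/drbd start
run drbdadm primary all
run mkfs.ext3 /dev/drbd0
run mkdir -p /drbddata
run mount /dev/drbd0 /drbddata
```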
7. Secondary node settings:
1) Create the metadata:
# drbdadm create-md all
2) Start drbd:
# /etc/init.d/drbd start
Note: on my system, modprobe did not automatically load the drbd module when drbd starts, so I changed the $MODPROBE -s drbd line in /etc/init.d/drbd to:
insmod /lib/modules/2.6.18-92.el5/kernel/drivers/block/drbd.ko
Note: do not create a file system here (it will be synchronized from the primary node).
8. After the primary and secondary nodes are configured and started, check whether the setup succeeded.
1) Check the processes:
a) primary:
[root@test9 /]# ps auxf | grep drbd
root 4735 0.0 3912 656 pts/0 S+ 10:31 0:00 \_ grep drbd
root 3479 0.0 0.0 0 0 ? S 08:48 0:00 [drbd0_worker]
root 3491 0.0 0.0 0 0 ? S 08:48 0:00 [drbd0_receiver]
root 3503 0.0 0.0 0 0 ? S 08:48 0:00 [drbd0_asender]
b) secondary:
root@test8:/> ps auxf | grep drbd
root 4543 0.0 3912 660 pts/0 S+ 10:31 0:00 \_ grep drbd
root 3393 0.0 0.0 0 0 ? S 08:48 0:00 [drbd0_worker]
root 3405 0.0 0.0 0 0 ? S 08:48 0:00 [drbd0_receiver]
root 3415 0.0 0.0 0 0 ? S 08:48 0:00 [drbd0_asender]
You can see that the processes on both nodes are up, and each drbd device has three kernel threads: drbd0_worker is the main worker of drbd0, drbd0_asender is the data-sending thread, and drbd0_receiver is the data-receiving thread.
2) View the output of /proc/drbd:
[root@test9 /]# cat /proc/drbd
version: 8.3.6 (api:88/proto:86-91)
GIT-hash: f3606c47cc6fcf6b3f086e425cb34af8b7a81bbf build by root@test9, 2009-12-11 18:47:35
 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r----
    ns:8 nr:0 dw:4 dr:473 al:1 bm:1 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
root@test8:/> cat /proc/drbd
version: 8.3.6 (api:88/proto:86-91)
GIT-hash: f3606c47cc6fcf6b3f086e425cb34af8b7a81bbf build by root@test8, 2009-12-11 18:14:27
 0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r----
    ns:0 nr:8 dw:8 dr:0 al:0 bm:1 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
The output begins with the drbd version information, followed by status information about data synchronization. The meaning of each field, as described in the mysql documentation:
cs: connection state
st: node state (local/remote)
ld: local data consistency
ds: data consistency
ns: network send
nr: network receive
dw: disk write
dr: disk read
pe: pending (waiting for ack)
ua: unack'd (still need to send ack)
al: activity log write count
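For monitoring, the cs field can be pulled out of /proc/drbd with a one-line filter. A sketch against a sample status line, so it runs without a live DRBD device; on a real node you would read /proc/drbd instead.

```shell
# Extract the connection state (cs:) from a DRBD status line.
# The sample line stands in for: grep '^ 0:' /proc/drbd
line=" 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r----"
cs=$(printf '%s\n' "$line" | sed -n 's/.*cs:\([A-Za-z]*\).*/\1/p')
echo "connection state: $cs"
```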
3) Further verify that the data synchronizes correctly:
a) Unmount the file system on test9, then run drbdadm secondary all to switch it to secondary mode:
[root@test9 /]# umount /drbddata
[root@test9 /]# drbdadm secondary all
b) On the original test8, run drbdadm primary all to switch it to primary mode, then mount the file system:
root@test8:/> drbdadm primary all
root@test8:/> mount /dev/drbd0 /drbddata
Check whether the files previously written on test9 have been fully synchronized to test8. After verification, the data was in sync.
4) Finally, manually test a switchover together with mysql:
a) On the master node test9, shut down mysql, release the resources, and switch to secondary mode:
[root@test9 ha.d]# mysqladmin -u root shutdown
[root@test9 ha.d]# umount /drbddata
[root@test9 ha.d]# drbdadm secondary all
b) On the secondary node test8, acquire the resources, switch to primary mode, mount, and start mysql:
root@test8:/root/mysql-5.0.51a> drbdadm primary all
root@test8:/root/mysql-5.0.51a> mount /dev/drbd0 /drbddata
root@test8:/usr/local/mysql/bin> ./mysqld_safe --user=mysql &
[1] 27900
root@test8:/usr/local/mysql/bin> Starting mysqld daemon with databases from /drbddata/mysqldata
root@test8:/usr/local/mysql/bin> tail -f /drbddata/mysqldata/test8.err
080303 13:53:25 mysqld started
080303 13:53:26 InnoDB: Started; log sequence number 0 43656
080303 13:53:26 [Note] /usr/local/mysql/libexec/mysqld: ready for connections.
Version: '5.0.51a-log'  socket: '/usr/local/mysql/sock/mysql.sock'  port: 3306  Source distribution
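The demote/promote sequence above can be wrapped in one helper, run with "release" on the node giving up the resources and "acquire" on the node taking them over. A sketch only: DRY_RUN=1 (the default here) echoes the commands instead of executing them, and the full mysqld_safe path is an assumption based on the prompts shown above.

```shell
#!/bin/sh
# Switchover helper for the manual failover shown above.
# Usage: failover.sh release|acquire   (DRY_RUN=1 echoes commands)
DRY_RUN=${DRY_RUN:-1}
run() { [ "$DRY_RUN" = "1" ] && echo "$*" || "$@"; }
failover() {
    case "$1" in
    release)    # on the node giving up the resources
        run mysqladmin -u root shutdown
        run umount /drbddata
        run drbdadm secondary all
        ;;
    acquire)    # on the node taking the resources over
        run drbdadm primary all
        run mount /dev/drbd0 /drbddata
        run /usr/local/mysql/bin/mysqld_safe --user=mysql
        ;;
    esac
}
failover "${1:-release}"
```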
c) Log in to the database and check the data. Here I also test a write on the new master node, to check whether the data is normal after switching back to the original master:
root@test8:/usr/local/mysql/bin> mysql -u root
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| test               |
+--------------------+
3 rows in set (0.03 sec)
mysql> use test
Database changed
mysql> show tables;
+----------------+
| Tables_in_test |
+----------------+
| T1             |
| T2             |
| T3             |
+----------------+
3 rows in set (0.00 sec)
mysql> create table T4 (id int);
Query OK, 0 rows affected (0.07 sec)
d) Switch back to the original roles using the same switchover steps, start mysql on the primary node, and check the data:
...
[root@test9 ha.d]# drbdadm primary all
[root@test9 ha.d]# mount /dev/drbd0 /drbddata
[root@test9 ha.d]# mysqld_safe --user=mysql &
[root@test9 ha.d]# mysql -uroot
mysql> use test
Database changed
mysql> show tables;
+----------------+
| Tables_in_test |
+----------------+
| T1             |
| T2             |
| T3             |
| T4             |
+----------------+
4 rows in set (0.00 sec)
mysql> insert into T4 values (111);
Query OK, 1 row affected (0.01 sec)
mysql> commit;
Query OK, 0 rows affected (0.00 sec)
mysql> select * from T4;
+------+
| id   |
+------+
|  111 |
+------+
1 row in set (0.00 sec)
A few points to note:
1. Only the file system on the primary node is created manually; the file systems on all secondary nodes are obtained through synchronization.
2. Before mounting the drbd device, the node must be set to primary mode. If one side has not umounted, the other side cannot mount; a direct mount reports the following error:
root@test8:/> mount /dev/drbd0 /drbddata
mount: block device /dev/drbd0 is write-protected, mounting read-only
3. DRBD also supports dual-primary mode, but it requires a cluster file system such as GFS.
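Point 2 can be guarded against in scripts by checking the local role before mounting. A sketch shown against a sample value; on a live node the role string would come from `drbdadm role r0`, which prints something like Primary/Secondary (local/peer).

```shell
# Only mount when the local DRBD role is Primary.
# Sample value; on a real node: role=$(drbdadm role r0)
role="Secondary/Primary"
case "$role" in
Primary/*)
    echo "local role is Primary, safe to mount /dev/drbd0"
    ;;
*)
    echo "refusing to mount: local role is ${role%%/*}"
    ;;
esac
```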
1.5 Performance testing
Test options: sysbench --num-threads=10 --max-requests=10000 --test=oltp --mysql-table-engine=innodb --oltp-table-size=1000000
Using the local disk directly:
OLTP test statistics:
Queries performed:
Read: 140000
Write: 50000
Other: 20000
Total: 210000
Transactions: 10000 (206.65 per sec.)
Deadlocks: 0 (0.00 per sec.)
Read/write requests: 190000 (3926.39 per sec.)
Other operations: 20000 (413.30 per sec.)
Test execution summary:
Total time: 48.3905s
Total number of events: 10000
Total time taken by event execution: 483.6677
Per-request statistics:
Min: 9.59ms
Avg: 48.37ms
Max: 255.62ms
Approx. 95 percentile: 96.74ms
Threads fairness:
Events (avg/stddev): 1000.0000/10.23
Execution time (avg/stddev): 48.3668/0.01
Using DRBD:
OLTP test statistics:
Queries performed:
Read: 140000
Write: 50000
Other: 20000
Total: 210000
Transactions: 10000 (174.69 per sec.)
Deadlocks: 0 (0.00 per sec.)
Read/write requests: 190000 (3319.17 per sec.)
Other operations: 20000 (349.39 per sec.)
Test execution summary:
Total time: 57.2433s
Total number of events: 10000
Total time taken by event execution: 572.2720
Per-request statistics:
Min: 11.87ms
Avg: 57.23ms
Max: 342.05ms
Approx. 95 percentile: 141.95ms
Threads fairness:
Events (avg/stddev): 1000.0000/16.01
Execution time (avg/stddev): 57.2272
It can be seen that DRBD reduces MySQL performance by about 15%. DRBD mainly affects write performance and has little impact on reads, so it is a good choice for applications with a low update frequency but high availability requirements.
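The roughly 15% figure follows directly from the two transaction rates reported above:

```shell
# Relative throughput drop: (local - drbd) / local * 100
awk 'BEGIN { printf "%.1f%% overhead\n", (206.65 - 174.69) / 206.65 * 100 }'
```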
1.6 Automatic failover using Heartbeat
1. Suppose the public network IPs of servers A and B are as follows:
A 192.168.100.9
B 192.168.100.8
Cluster virtual IP: 192.168.100.201
2. Set the hostnames of servers A and B to test9 and test8.
If they are not set, modify the HOSTNAME entry in /etc/sysconfig/network and run
# hostname test9
to take effect immediately.
3. Add two lines to /etc/hosts:
192.168.100.9 test9
192.168.100.8 test8
4. Modify /etc/sysctl.conf on servers A and B, add the following 5 lines, and run
# sysctl -p
to take effect immediately.
net.ipv4.ip_forward = 1
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.eth0.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.eth0.arp_announce = 2
5. Log in to servers A and B as root and perform the following configuration:
Insert a line after #!/bin/sh in /etc/init.d/heartbeat:
ifconfig lo:100 192.168.100.201 netmask 255.255.255.255 up
# chkconfig --level 35 heartbeat on
# cd /etc/ha.d          (enter the cluster configuration directory)
# vi authkeys           (create the cluster authentication file)
auth 3
3 md5 HA_2009
# chmod 600 authkeys    (a necessary step)
# vi ha.cf              (create the cluster node file)
logfile /var/log/ha.log
#logfacility local0
keepalive 2
deadtime 30
warntime 10
initdead 80
mcast eth0 231.231.231.232 694 1 0
## If you have dual network cards, it is better to connect the two machines with a crossover cable and change eth0 to eth2.
#ucast eth0 192.168.100.8   # (on test9 this points directly at the peer's IP; on test8 use ucast eth0 192.168.100.9)
ping 192.168.100.2
auto_failback on
node test9
node test8
respawn hacluster /usr/lib/heartbeat/ipfail
apiauth ipfail gid=haclient uid=hacluster
There are two nodes in the cluster, test9 and test8, which communicate over a multicast IP (mainly useful with more than 2 nodes).
ping 192.168.100.2 performs gateway ping detection.
# vi /etc/ha.d/resource.d/vip.sh    (create our own cluster IP switch shell script)
#!/bin/sh
case "$4" in
start)
    ifconfig lo:100 down
    ifconfig $1:100 $2 netmask $3 up
    ;;
stop)
    ifconfig $1:100 down
    ifconfig lo:100 $2 netmask 255.255.255.255 up
    ;;
esac
/etc/ha.d/resource.d/SendArp 192.168.100.201/eth0 start
Listening on the cluster virtual IP on the loopback address lets the node also serve as an LVS cluster backup node for the application servers.
The last line updates the MAC address cached in the clients' ARP tables.
# chmod +x resource.d/vip.sh
# vi /etc/ha.d/resource.d/mysql.sh  (create a shell script to start and stop mysql)
#!/bin/sh
case "$1" in
start)
    /drbddata/mysql-xtradb/bin/mysqld_safe --defaults-file=/drbddata/mysql-xtradb/my.cnf --user=mysql > /dev/null 2>&1 &
    ;;
stop)
    /drbddata/mysql-xtradb/bin/mysqladmin -S /drbddata/mysql-xtradb/mysql.sock shutdown
    ;;
esac
# vi haresources        (create the cluster resource file; the entry is one logical line, continued with backslashes)
test9 drbddisk::r0 \
    Filesystem::/dev/drbd0::/drbddata::ext3 \
    vip.sh::eth0::192.168.100.201::255.255.255.0 \
    Delay::5::0 \
    mysql.sh
Run service heartbeat start on A and B separately to test.
Watch the log /var/log/ha.log.
After startup completes, mysql on test9 is available normally:
mysql -h192.168.100.201 -P3308 -uroot -p
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.1.39-xtradb-log MySQL Community Server (GPL)
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>
This concludes the article on how Heartbeat+DRBD achieves MySQL high availability. I hope the above content is helpful to you; if you think the article is good, please share it for more people to see.