2025-02-24 Update From: SLTechnology News&Howtos > Database
Shulou(Shulou.com)05/31 Report--
This article shows how to build a Heartbeat+DRBD+Mysql+Lvs+Keepalived high-availability setup. The content is straightforward and easy to follow; work through the steps below to learn how the stack is built.
Environment introduction:
Primary node mysql1: 192.168.9.25
Primary node mysql2: 192.168.9.26
VIP: 192.168.9.231 (managed by Heartbeat)
LVS1: 192.168.9.27
LVS2: 192.168.9.28
VIP: 192.168.9.230 (distributed by Keepalived)
Mysql1 to be distributed to: 192.168.9.29
Mysql2 to be distributed to: 192.168.9.30
Notes for the overall installation:
1. Of the two DRBD nodes, only the primary node may have the DRBD disk mounted at any given time.
2. Put the MySQL data files, log files, and temporary files on the DRBD disk; that is, point the datadir, tmpdir, log_error, and log-bin parameters there.
3. The MySQL configuration file (/etc/my.cnf) should also live on the DRBD disk: delete my.cnf from the /etc directory on both the master and the slave node, and create a soft link to it at /etc/my.cnf on each.
4. In the LVS+Keepalived architecture, LVS does the distribution, Keepalived provides high availability for LVS, and the vip in the keepalived configuration is the VIP used for distribution.
Heartbeat+drbd build process:
If the primary server goes down, the loss is immeasurable. In order to ensure the uninterrupted service of the main server, it is necessary to achieve redundancy to the server. Among the many solutions to achieve server redundancy, heartbeat provides us with a cheap, scalable and highly available clustering solution. We create a highly available (HA) cluster server under Linux through heartbeat+drbd.
DRBD is a block device designed for high availability (HA); it behaves like network RAID-1. When you write data to the local file system, the data is also sent to another host on the network and recorded there in the same form. The data on the local host (primary node) and the remote host (standby node) stay synchronized in real time. When the local system fails, an identical copy of the data remains on the remote host and can continue to be used. In an HA setup, DRBD can therefore be used in place of a shared disk array: because the data exists on both the local and the remote host, on failover the remote host simply continues serving from its copy of the data.
Heartbeat realizes that when there is a problem with the local node, it can automatically detect it and complete the switching between the active and standby nodes to achieve high availability.
Difference from MHA: MHA's change master points at the real IP of a node, whereas with DRBD the application points at the VIP.
1. DRBD deployment (performed on both master nodes)
For the installation of DRBD: the yum repositories that ship with CentOS 6.5 carry no packages for drbd or Heartbeat, so you need to add a yum source before installing. The steps are as follows:
[root@master2 ~]# yum install kernel-devel    ## upgrade the kernel headers
[root@master2 ~]# rpm -Uvh http://www.elrepo.org/elrepo-release-6-6.el6.elrepo.noarch.rpm    ## third-party repository for CentOS 6
[root@master2 ~]# rpm -Uvh http://www.elrepo.org/elrepo-release-5-5.el5.elrepo.noarch.rpm    ## third-party repository for CentOS 5
[root@master2 ~]# yum -y install drbd83-utils kmod-drbd83
[root@master2 ~]# modprobe drbd    # load the DRBD module; if this reports an error, restart the system and run it again
[root@master2 ~]# lsmod | grep drbd
drbd 332493 4    # this output shows that drbd loaded successfully
2. DRBD configuration
Before configuring, check the system's partitions with fdisk -l or df. Here the system's existing /dev/sdb partition is used for drbd.
The DRBD configuration involves the /etc/drbd.conf and hosts files. On dbserver1 and dbserver2, add the following to hosts:
[root@master1 ~]# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
192.168.9.25 master1
192.168.9.26 master2
3. Configure the /etc/sysconfig/network file on both nodes:
[root@master1 ~]# cat /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=yes
HOSTNAME=master1    # (HOSTNAME=master2 on the other node)
4. The /etc/drbd.conf file is as follows (the configuration on dbserver1 is identical to dbserver2's):
[root@master1 ~]# cat /etc/drbd.conf    ## remove the ## comments when actually configuring
# You can find an example in /usr/share/doc/drbd.../drbd.conf.example
# include "drbd.d/global_common.conf";
# include "drbd.d/*.res";
global { usage-count yes; }
common { syncer { rate 10M; } }    # maximum synchronization rate between master and slave nodes (10M = 10 MiB per second)
resource r1 {
protocol C;    ## DRBD's third synchronization protocol: a write is considered complete only after the remote host confirms it
startup {
}
disk {
on-io-error detach;
# size 1G;
}
net {
}
on mysql_master1 {    # must match the node's hostname
device /dev/drbd0;
disk /dev/sdb;    # the disk or partition should ideally be the same on both sides; a logical partition also works
address 192.168.9.25:7888;
meta-disk internal;
}
on mysql_master2 {
device /dev/drbd0;
disk /dev/sdb;
address 192.168.9.26:7888;    # port 7888, same as above
meta-disk internal;
}
}
5. First run a dd command to wipe the file-system block information on the device; otherwise creating the resource in step 6 will report an error:
dd if=/dev/zero of=/dev/sdb bs=1M count=100
6. Once drbd is configured, create the drbd metadata on dbserver1 and dbserver2 with the following command.
drbdadm create-md r1 (r1 is the resource name defined after resource in the configuration file). The output below shows that the resource was created successfully. When running the command, the /dev/sdb disk named in the configuration file must not be mounted; neither machine may have it mounted.
[root@master1 ~]# drbdadm create-md r1
Writing meta data...
initializing activity log
NOT initialized bitmap
New drbd meta data block successfully created.
success
6. Starting and stopping DRBD
[root@master1 ~]# /etc/rc.d/init.d/drbd start    # start drbd
[root@master1 ~]# /etc/rc.d/init.d/drbd stop    # stop drbd
[root@master1 ~]# /etc/rc.d/init.d/drbd restart    # restart drbd
7. Check DRBD status
watch -n 1 cat /proc/drbd
/etc/init.d/drbd status
8. Set the current node as the primary node, format the device, and mount it. Note that only the master node may mount it.
[root@master1 ~]# drbdadm primary all
If the previous command fails, run the following one instead:
[root@master1 ~]# drbdadm -- --overwrite-data-of-peer primary all
[root@master1 ~]# mkfs.ext3 /dev/drbd0    ## this is /dev/sdb
[root@master1 ~]# mkdir -p /mysql/data    ## create the mount point for the drbd disk
[root@master1 ~]# mount /dev/drbd0 /mysql/data
Once the drbd master is mounted, write some data under the drbd directory (/mysql/data), then swap the master and slave roles. If the data has synchronized, the drbd setup is working.
To demote a master to slave, unmount the file system first, then run the demotion command:
[root@master1 ~]# umount /dev/drbd0
[root@master1 ~]# drbdadm secondary all
To promote a slave to master, run the promotion command and then mount the file system:
[root@master1 ~]# drbdadm primary all    # if unsuccessful: drbdsetup /dev/drbd0 primary -o
[root@master1 ~]# mount /dev/drbd0 /mysql/data
9. You can confirm as follows that master1 is in the Primary state:
[root@master1 ~]# more /proc/drbd | grep ro
0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r----
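The role shown in /proc/drbd can also be checked from a script instead of by eye. A minimal sketch; the helper name parse_drbd_role and the sed pattern are my own, not part of the original tutorial:

```shell
#!/bin/sh
# Extract the local DRBD role ("Primary" or "Secondary") from a
# /proc/drbd status line such as:
#   0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r----
parse_drbd_role() {
    # the local role is the token between "ro:" and the slash;
    # tolerate an optional space after the colon
    echo "$1" | sed -n 's/.*ro: *\([A-Za-z]*\)\/.*/\1/p'
}

# on a live node you would run something like:
# parse_drbd_role "$(grep 'ro:' /proc/drbd)"
```

This is handy inside monitoring scripts like the one shown later, where behaviour should depend on whether the local node currently holds the Primary role.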
Migrating MySQL onto DRBD (the purpose of this step is to place the data directory, log directory, and temporary-file directory on the drbd mount)
1. Point the data directory at the mysql directory mounted by drbd, and put the my.cnf configuration file under that mysql directory as well.
a) Stop mysql on dbserver1 and dbserver2
/etc/rc.d/init.d/mysqld stop
b) On dbserver1, create the directories for database data, logs, and temp files
mkdir -p /mysql/data /mysql/log /mysql/tmp
c) Change the owner of the mysql directory
chown -R mysql:mysql /mysql
d) On dbserver1, move the configuration file into the mysql directory:
mv /etc/my.cnf /mysql
On dbserver2, delete /etc/my.cnf: rm -f /etc/my.cnf
On both dbserver1 and dbserver2, create a soft link (the full path must be written out):
ln -s /mysql/my.cnf /etc/my.cnf
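After creating the link it is worth verifying that /etc/my.cnf really resolves to the copy on the DRBD mount, since only the active node will have /mysql mounted. A small sketch; the function name check_cnf_link is mine, not from the original:

```shell
#!/bin/sh
# check_cnf_link LINK TARGET
# succeeds only if LINK is a symlink that points exactly at TARGET
check_cnf_link() {
    link=$1; target=$2
    [ -L "$link" ] && [ "$(readlink "$link")" = "$target" ]
}

# on the live nodes you would call:
# check_cnf_link /etc/my.cnf /mysql/my.cnf || echo "my.cnf link is wrong"
```

Running this on both nodes before starting Heartbeat catches the common mistake of leaving a stale regular /etc/my.cnf on one node.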
e) Edit /etc/my.cnf so the data directory points to /mysql/data
f) Move the original mysql data files to /mysql/data
g) Start mysql. If you did not move the contents of the old data directory, you need to reinitialize mysql before starting it.
Initialization process:
(1) enter the ./mysql/bin directory and run the script ./mysql_install_db
(2) after (1) completes, two directories, mysql and test, are created under ./mysql/var
(3) fix the ownership of the mysql and test directories and everything under them: chown mysql:mysql -R mysql test
2. The master configuration file /etc/my.cnf on Mysql-master1 is as follows. master2 shares the same configuration file with master1, so it needs no separate configuration.
[root@mysql_master1 ~]# cat /etc/my.cnf
[mysql]
default-character-set=utf8
[mysqld]
user=mysql
pid-file=/var/run/mysql/mysql.pid
# socket=/var/run/mysql/mysql.sock
socket=/tmp/mysql.sock
basedir=/usr/local/mysql
datadir=/mysql/data
tmpdir=/mysql/tmp
max_connections=2000    # maximum number of connections
server-id=3    # this option must be unique for each MySQL server
# begin innodb setting###
innodb_file_per_table=1
innodb_lock_wait_timeout=500
innodb_buffer_pool_size=512M
# end innodb setting###
# begin key buffer size set###
key-buffer-size=500M
sort_buffer_size=500M
max_user_connections=1000
table-cache=5000
query_cache_size=500M
# end key buffer size set###
# begin bin log###
log-bin=/mysql/log/binlog
log-bin-index=/mysql/log/binlog.index
expire-logs-days=90
# end bin log###
# begin general log###
# general_log=1
# general_log_file=/mysql/log/record.log
# end general log###
# begin error log###
log_error=/mysql/log/error.log
# end error log###
# begin skip name resolve###
# end skip name resolve###
# begin slow query log###
slow_query_log=1
long_query_time=60
slow_query_log_file=/mysql/log/slow.log
# end slow query log###
3. MySQL master-slave replication setup
Create a replication account on the mysql master:
mysql> create user repl identified by '%#7a@H)';
Grant it replication privileges:
mysql> GRANT replication slave ON *.* TO 'repl'@'%';
mysql> flush privileges;
4. The /etc/my.cnf configuration on the mysql slave
[root@localhost ~]# cat /etc/my.cnf
[mysql]
default-character-set=utf8
[mysqld]
user=mysql
pid-file=/var/run/mysql/mysql.pid
# socket=/var/run/mysql/mysql.sock
socket=/tmp/mysql.sock
basedir=/usr/local/mysql
datadir=/wgz/mysql/data
tmpdir=/wgz/mysql/tmp
max_connections=2000    # maximum number of connections
server-id=12    # this option must be unique for each MySQL server
read_only=1    # make the slave read-only
# begin innodb setting###
innodb_file_per_table=1
innodb_lock_wait_timeout=500
innodb_buffer_pool_size=512M
# end innodb setting###
# begin key buffer size set###
key-buffer-size=500M
sort_buffer_size=500M
max_user_connections=1000
table-cache=5000
query_cache_size=500M
# end key buffer size set###
# begin bin log###
log-bin=/wgz/mysql/log/binlog
log-bin-index=/wgz/mysql/log/binlog.index
expire-logs-days=90
# end bin log###
# begin general log###
# general_log=1
# general_log_file=/wgz/mysql/log/record.log
# end general log###
# begin error log###
log_error=/wgz/mysql/log/error.log
# end error log###
# begin skip name resolve###
# end skip name resolve###
# begin slow query log###
slow_query_log=1
long_query_time=60
slow_query_log_file=/wgz/mysql/log/slow.log
# end slow query log###
5. On the mysql slave, point it at the master
mysql> change master to master_host='192.168.15.47', master_port=3306, master_user='repl', master_password='%#7a@H)', master_log_file='binlog.000011', master_log_pos=3234;
Start master-slave replication:
mysql> start slave;
# check the synchronization status
mysql> show slave status\G
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
When both Slave_IO_Running and Slave_SQL_Running show Yes, master-slave synchronization is working.
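Rather than eyeballing the output, the two flags can be pulled out of `show slave status\G` with a pipeline. A sketch that assumes the stock \G layout; the helper name slave_ok is mine:

```shell
#!/bin/sh
# slave_ok: read "show slave status\G" text on stdin and succeed only
# when both Slave_IO_Running and Slave_SQL_Running are "Yes"
slave_ok() {
    # count matching lines; "|| true" keeps grep's exit code from
    # aborting the function when there are zero matches
    yes_count=$(grep -cE '^[[:space:]]*Slave_(IO|SQL)_Running: Yes' || true)
    [ "$yes_count" -eq 2 ]
}

# live usage:
# mysql -uroot -p... -e 'show slave status\G' | slave_ok && echo "replication OK"
```

The same check could be wired into the heartbeat watchdog script shown later to alert on broken replication as well as a dead server.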
Deployment of Heartbeat (automatic switching between the two masters)
1. To install heartbeat, first install the epel extension repository:
[root@master1 ~]# yum -y install epel-release
[root@master1 ~]# yum install -y heartbeat
2. The Heartbeat configuration consists of three files: authkeys, ha.cf, and haresources. Let's look at each.
The authkeys authentication file is configured identically on both machines:
[root@master1 ~]# vim /etc/ha.d/authkeys
auth 1
1 crc
After configuring it, restrict the file's permissions, otherwise heartbeat will report an error on startup:
[root@master1 ~]# chmod 600 /etc/ha.d/authkeys
3. Configuration of ha.cf
ha.cf on Master1 (dbserver1):
[root@master1 ~]# cat /etc/ha.d/ha.cf
logfile /var/log/ha-log
logfacility local0
keepalive 2
deadtime 30
warntime 10
initdead 60
udpport 694
ucast eth0 192.168.9.26    # the other node's ip
auto_failback off    ## off is recommended; with on, the roles flap back and forth between the nodes, which adds overhead
node master1
node master2
ping 192.168.15.254    # the gateway is fine here
respawn hacluster /usr/lib64/heartbeat/ipfail
ha.cf on Master2 (dbserver2):
[root@master2 ~]# cat /etc/ha.d/ha.cf
logfile /var/log/ha-log
logfacility local0
keepalive 2
deadtime 30
warntime 10
initdead 60
udpport 694
ucast eth0 192.168.9.25    # the other node's ip
auto_failback on
node master1
node master2
ping 192.168.9.254
respawn hacluster /usr/lib64/heartbeat/ipfail
4. Configuration of haresources; the two machines are configured exactly the same. mysqld must be made into a service script and placed in the /etc/rc.d/init.d/ directory. The configuration is as follows:
[root@master1 ~]# cat /etc/ha.d/haresources
master1 IPaddr::192.168.9.231/24/eth0:0 drbddisk::r1 Filesystem::/dev/drbd0::/mysql/data::ext3 mysqld
In this /etc/ha.d/haresources line: 192.168.9.231/24 is the highly available VIP for mysql; eth0:0 is the interface alias the VIP is brought up on; r1 is the drbd resource defined above; /dev/drbd0 is the drbd block device; /mysql/data is the drbd mount point; ext3 is its file system; and mysqld is the mysql service.
5. Managing Heartbeat
After configuring heartbeat, remove mysql from the services started at boot, because when heartbeat starts on the master it mounts the drbd file system and then starts mysql; on failover it stops mysql on the master, unmounts the file system, mounts it on the slave, and starts mysql there. So run the following:
[root@master1 ~]# chkconfig mysqld off
[root@master1 ~]# chkconfig --add heartbeat
[root@master1 ~]# chkconfig heartbeat on
[root@master1 ~]# service heartbeat start    # start Heartbeat
6. Heartbeat+DRBD+Mysql test
A watchdog script decides that mysql is down and then stops the heartbeat service, so the VIP and the drbd partition mount move to the drbd slave. When the repaired master's mysql is started and the heartbeat service is started again, the VIP and the drbd partition mount move back to the drbd master.
7. The script that checks the mysql service and stops heartbeat runs in the background on the master: nohup /opt/check_mysql_heartbeat.sh &. The password in the script must be changed to the current mysql root password.
[root@mysql_master1 opt]# cat check_mysql_heartbeat.sh
#!/bin/bash
trap 'echo PROGRAM INTERRUPTED; exit 1' INT
username=root
password=123456
n=0
log='/var/log/mysqlmon.log'
while true
do
    if mysql -u${username} -p${password} -e "use test" &> /dev/null
    then
        echo "`date +"%Y-%m-%d %H:%M:%S"` mysqld is alive!" >> ${log}
        n=0
    else
        echo "`date +"%Y-%m-%d %H:%M:%S"` mysqld cannot be connected!" >> ${log}
        n=$((n + 1))
        if [ $n -eq 3 ]
        then
            /etc/init.d/heartbeat stop
            echo "`date +"%Y-%m-%d %H:%M:%S"` mysqld switched to backup!" >> ${log}
            echo "`date +"%Y-%m-%d %H:%M:%S"` mysqld switched to backup"
            break
        fi
    fi
    sleep 10
done
Test summary:
The check script should run in the background on the drbd master only, not on the slave. If the master's mysql goes down, the script kills the heartbeat service and the VIP and mount switch to the slave automatically. After the master's mysql and heartbeat are repaired and started, the VIP and mount switch back to the master automatically.
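The three-strikes logic in the script can be exercised on its own: feed it a sequence of probe results and see whether failover would fire. A pure-shell sketch; the function name should_failover and the up/down encoding are mine, not part of the original script:

```shell
#!/bin/sh
# should_failover "up down down ..." : succeeds when three consecutive
# "down" probes occur, mirroring the n-counter in the script above
should_failover() {
    n=0
    for probe in $1; do
        if [ "$probe" = up ]; then
            n=0                          # any success resets the counter
        else
            n=$((n + 1))
            [ "$n" -eq 3 ] && return 0   # third straight failure: switch
        fi
    done
    return 1
}
```

This makes the threshold behaviour explicit: a single timeout or brief hiccup does not trigger failover, only three failed probes in a row (30 seconds at the script's 10-second poll interval).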
Two: LVS+Keepalived installation and deployment
Test environment:
LVS1: 192.168.9.27
LVS2: 192.168.9.28
VIP of keepalived: 192.168.9.230
Mysql1 to be distributed to: 192.168.9.29
Mysql2 to be distributed to: 192.168.9.30
Specific installation steps:
# download ipvsadm (the LVS package)
wget http://www.linuxvirtualserver.org/software/kernel-2.6/ipvsadm-1.24.tar.gz
# decompress it
[root@lvs-a ~]# tar -zxf ipvsadm-1.24.tar.gz
# enter the ipvsadm directory
[root@lvs-a ~]# cd /usr/local/ipvsadm-1.24
# install the development packages and library files
[root@lvs-a ipvsadm-1.24]# yum install zlib-devel gcc gcc-c++ openssl-devel pcre-devel libtool kernel-devel ncurses-devel -y
# create a kernel symlink
[root@lvs-a ipvsadm-1.24]# ln -sv /usr/src/kernels/2.6.18-194.el5-i686/ /usr/src/linux    # adjust 2.6.18-194.el5-i686 to your system's kernel version
# compile and install
[root@lvs-a ipvsadm-1.24]# make; make install
# enter the directory
[root@lvs-a ~]# cd /usr/local/
# download keepalived
wget http://www.keepalived.org/software/keepalived-1.1.17.tar.gz
# decompress it
[root@lvs-a ~]# tar -zxf keepalived-1.2.12.tar.gz
# enter the keepalived directory
[root@lvs-a ~]# cd keepalived-1.2.12
# compile and install
[root@lvs-a keepalived-1.2.12]# ./configure --prefix=/usr/local/keepalived    # three yes lines in the configure output indicate success; without them the install will not work
# the above is for version 1.2.12; version 1.1.17's ./configure output looks similar
[root@lvs-a keepalived-1.2.12]# make
[root@lvs-a keepalived-1.2.12]# make install
# copy the startup file
[root@lvs-a keepalived-1.2.12]# cp /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/    ## so that the service keepalived command can be used
# copy the command file
[root@lvs-a keepalived-1.2.12]# cp /usr/local/keepalived/sbin/keepalived /usr/sbin/
# copy the configuration file
[root@lvs-a keepalived-1.2.12]# cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
# create the main configuration directory
[root@lvs-a keepalived-1.2.12]# mkdir -p /etc/keepalived
# edit the LVS1 configuration file
[root@lvs-a sysconfig]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
router_id lvs
}
vrrp_sync_group http {    # define a vrrp sync group; the name is arbitrary
group {
mysql    # an arbitrary name
}
}
vrrp_instance apache {    # define a vrrp instance
state MASTER    # the lvs state, MASTER or BACKUP, must be uppercase; master node MASTER, slave node BACKUP
interface eth0    # the interface serving traffic, i.e. the network interface LVS monitors
virtual_router_id 01    # must be the same on both nodes of the same instance
priority 500    ## the priority; the higher the value, the higher the priority, so the primary node gets the higher value
advert_int 1    # interval in seconds between synchronization checks by the MASTER and BACKUP load balancers
authentication {    # authentication type and password
auth_type PASS
auth_pass aabb    ## password
}
virtual_ipaddress {    ## the keepalived VIP
192.168.9.230    # if there is more than one, just keep adding lines
# 192.168.9.231
# 192.168.9.232
}
}
virtual_server 192.168.9.230 3306 {    ## define a virtual server
delay_loop 6    # health-check interval in seconds
lb_algo rr    # load-scheduling algorithm; rr is round robin
lb_kind DR    # the LVS load-balancing mechanism; NAT, TUN, and DR modes are available
nat_mask 255.255.255.0    # this parameter can be omitted
# persistence_timeout 50
protocol TCP
real_server 192.168.9.29 3306 {    ## IP of a target server to distribute to
weight 1    # the weight, i.e. the share of connections; 1 and 1 means one connection each, alternating
TCP_CHECK {    # check the RealServer's health via a tcp check
connect_timeout 3    ## connection timeout
# nb_get_retry 3    # number of retries; has a default value
# delay_before_retry 3    # interval between retries; has a default value
connect_port 3306
}
}
real_server 192.168.9.30 3306 {
weight 1
TCP_CHECK {
connect_timeout 3
connect_port 3306
}
}
}
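With lb_algo rr and equal weights of 1, ipvs hands successive connections to the two real servers in strict alternation. The expected pattern can be sketched in shell; this is a toy model of plain round-robin, not the kernel scheduler, and the function name round_robin is mine:

```shell
#!/bin/sh
# round_robin N "srv1 srv2 ..." : print the server that would receive
# each of N successive connections under plain rr scheduling
round_robin() {
    n=$1
    # split the server list into positional parameters (intentionally unquoted)
    set -- $2
    count=$#
    i=0
    while [ "$i" -lt "$n" ]; do
        # pick slot (i mod count) + 1 from the positional parameters
        slot=$(( i % count + 1 ))
        eval "echo \${$slot}"
        i=$((i + 1))
    done
}
```

This matches the alternating slave1/slave2 results seen in the distribution test at the end of this part; with unequal weights, ipvs would skew the sequence toward the heavier server instead.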
# restart keepalived after configuring
[root@lvs-a sysconfig]# service keepalived restart
# add a route to the gateway
[root@lvs-a network-scripts]# route add -host 192.168.9.254 dev eth0
# install keepalived on LVS2 the same way as on LVS1
# after installing keepalived, configure LVS2 as follows:
[root@lvs-b ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
router_id lvs
}
vrrp_sync_group http {
group {
mysql
}
}
vrrp_instance apache {
state BACKUP
interface eth0
virtual_router_id 01
priority 400
advert_int 1
# nopreempt
authentication {
auth_type PASS
auth_pass aabb
}
virtual_ipaddress {
192.168.9.230
}
}
virtual_server 192.168.9.230 3306 {
delay_loop 6
lb_algo rr
lb_kind DR
nat_mask 255.255.255.0
# persistence_timeout 50
protocol TCP
real_server 192.168.9.29 3306 {
weight 1
TCP_CHECK {
connect_timeout 3
connect_port 3306
}
}
real_server 192.168.9.30 3306 {
weight 1
TCP_CHECK {
connect_timeout 3
connect_port 3306
}
}
}
# restart keepalived after configuring
service keepalived restart
# add a route to the gateway
route add -host 192.168.9.254 dev eth0
# view the routing table
route -n
# view the distribution results
ipvsadm -ln
# clear the distribution table
ipvsadm -C
# view the virtual ip (VIP)
ip addr
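`ipvsadm -ln` prints the virtual service followed by its real servers, one "->" line each. A sketch that pulls the real-server addresses out of that output; the sample text and the helper name real_servers are mine, based on the usual ipvsadm layout:

```shell
#!/bin/sh
# real_servers: read "ipvsadm -ln" output on stdin and print the
# RemoteAddress:Port column of each "->" real-server line
real_servers() {
    awk '$1 == "->" { print $2 }'
}

# live usage:
# ipvsadm -ln | real_servers
```

Comparing this list against the two configured real servers is a quick way to confirm that keepalived's TCP_CHECK has not quietly removed a backend that failed its health check.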
# configure mysql-realserver1; run on 192.168.9.29:
[root@lvs-a ~]# ifconfig lo:0 192.168.9.230 netmask 255.255.255.0 broadcast 192.168.9.230 up    ## bind the VIP temporarily on a loopback alias
Set the host route:
[root@lvs-b ~]# route add -host 192.168.9.230 dev lo:0
Set the default route:
[root@lvs-b ~]# route add default gw 192.168.9.137
Make sure that during arp resolution the router only learns that 192.168.9.230 belongs to the director:
[root@lvs-b ~]# echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
[root@lvs-b ~]# echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
# configure mysql-realserver2; run on 192.168.9.30:
[root@lvs-a ~]# ifconfig lo:0 192.168.9.230 netmask 255.255.255.0 broadcast 192.168.9.230 up
Set the host route:
[root@lvs-a ~]# route add -host 192.168.9.230 dev lo:0
Set the default route:
[root@lvs-a ~]# route add default gw 192.168.153.137
Make sure that during arp resolution the router only learns that 192.168.9.230 belongs to the director:
[root@lvs-a ~]# echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
[root@lvs-a ~]# echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
# once both mysql-realservers are configured, start mysql on each
service mysqld start
LVS distribution test:
1. Create a database called slave1 on mysql-realserver1 (192.168.9.29):
mysql> create database slave1;
Query OK, 1 row affected (0.00 sec)
2. Create a database called slave2 on mysql-realserver2 (192.168.9.30):
mysql> create database slave2;
Query OK, 1 row affected (0.00 sec)
3. Run the query below on a machine that can reach mysql-realserver1 and mysql-realserver2. Be careful not to run it on the lvs machine currently holding the VIP; here lvs-a holds it, and running the query there reports an error:
[root@lvs-a ~]# mysql -uroot -pliuwenhe -h192.168.9.230 -e 'show databases'
ERROR 2003 (HY000): Can't connect to MySQL server on '192.168.9.230'
You have new mail in /var/spool/mail/root
[root@lvs-b ~]# mysql -uroot -pliuwenhe -h192.168.9.230 -e 'show databases'
+--------------------+
| Database           |
+--------------------+
| information_schema |
| he                 |
| liuwenhe           |
| mysql              |
| performance_schema |
| slave1             |
| test               |
+--------------------+
[root@lvs-b ~]# mysql -uroot -pliuwenhe -h192.168.9.230 -e 'show databases'
+--------------------+
| Database           |
+--------------------+
| information_schema |
| he                 |
| liuwenhe           |
| mysql              |
| performance_schema |
| slave2             |
| test               |
+--------------------+
The alternating slave1/slave2 results show that lvs is distributing successfully.
Fifth, fault handling:
Recovery process when the primary drbd (mysql) node goes down:
1. First repair and start the mysql service on the primary drbd node
service mysqld start
2. Once the primary drbd node's mysql is confirmed up, start the heartbeat service; the VIP and the mount switch back to the primary drbd node automatically
service heartbeat start
3. Run the background script on the primary drbd node
nohup /opt/check_mysql_heartbeat.sh &
VI. Reference URLs
http://blog.chinaunix.net/uid-20639775-id-3337484.html    # main deployment reference
http://www.centoscn.com/CentosServer/cluster/2015/0605/5604.html    # drbd installation reference
http://www.51ou.com/browse/Apache/60681.html    # heartbeat installation reference
Notes on the mysql migration steps:
After the drbd deployment is complete and master/slave switching with synchronization works, move the mysql configuration file to the drbd mount directory, delete or rename the configuration file on the drbd slave, and soft-link the configuration file under the mount directory to /etc/my.cnf on both nodes, since they share the same file. Create the mysql data and log directories under the mount directory and change their owner to the mysql user. If the original mysql has data, move the data and logs into the new directories and update the corresponding paths in the configuration file, then restart mysql; if no data is moved, initialize the database first and then restart. At this point, manually switch the drbd master and slave following the steps above and check whether mysql starts normally on the new master and whether data inserted on the old master has synchronized. If so, start the configured heartbeat service and check that the current drbd master has the mount and the VIP. After running the background script on the drbd master, stop the mysql service there and check whether mysql starts on the drbd slave and whether the mount and VIP switch over automatically. If all three drift over normally, the build is a success.
Troubleshooting notes:
When the main mysql goes down and is being repaired, start heartbeat first; otherwise the drbd role will not move back to the master, the mount will not switch over, and mysql cannot start at all, because the mysql configuration files all live on the drbd disk. So during repair, after starting heartbeat, check whether mysql on the drbd master starts normally; if not, fix the problem and then start it.
Appendix 1
A detailed annotated keepalived configuration:
[root@localhost kernels]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
# notification_email {
# acassen@firewall.loc
# failover@firewall.loc
# sysadmin@firewall.loc
# }
# notification_email_from Alexandre.Cassen@firewall.loc
# smtp_server 192.168.200.1
# smtp_connect_timeout 30
router_id LVS_DEVEL    // load balancer identifier; can be the same within a network segment
}
vrrp_sync_group VGM {    // define a vrrp sync group
group {
VI_1
}
}
vrrp_instance VI_1 {    // define a vrrp instance
state MASTER    // the master LVS is MASTER, the slave BACKUP
interface eth0    // the network interface LVS monitors
virtual_router_id 51    // must be the same on both nodes of the same instance
priority 100    // priority; the higher the number, the higher the priority
advert_int 5    // interval in seconds between synchronization checks by the MASTER and BACKUP load balancers
authentication {    // authentication type and password
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {    // virtual IP
192.168.1.8
# 192.168.1.9    // if there is more than one, just keep adding lines
# 192.168.1.7
}
}
virtual_server 192.168.1.8 80 {    // define a virtual server
delay_loop 6    // health-check interval (in seconds)
lb_algo rr    // load-scheduling algorithm; rr is round robin
lb_kind DR    // the LVS load-balancing mechanism; NAT, TUN, and DR modes are available
persistence_timeout 50    // session persistence in seconds (can be extended to keep sessions sticky)
protocol TCP    // forwarding protocol, tcp or udp
sorry_server 127.0.0.1 80    // if all web servers fail, the vip points at port 80 on this machine
real_server 192.168.1.16 80 {    // define a WEB server
weight 1    // weight
TCP_CHECK {    // check the RealServer's health via a tcp check
connect_timeout 5    // connection timeout
nb_get_retry 3    // number of retries
delay_before_retry 3    // retry interval
connect_port 80    // port to check
}
}
real_server 192.168.1.17 80 {
weight 1
TCP_CHECK {
connect_timeout 5
nb_get_retry 3
delay_before_retry 3
connect_port 80
}
}
}
That covers the whole of "how to build Heartbeat+DRBD+Mysql+Lvs+Keepalived high availability". Thank you for reading! Hopefully the content shared here helps you; if you want to learn more, welcome to follow the industry information channel.