This article walks through a sample MySQL + replication + MHA + VIP configuration. It is intended as a practical reference that you can follow step by step.
1. MySQL installation
① Install cmake
cp cmake-2.8.10.2.tar.gz /usr/local
cd /usr/local
tar -zxvf cmake-2.8.10.2.tar.gz   # unpack the package
cd cmake-2.8.10.2
./configure
make
make install
② Install MySQL from source
Copy the mysql-5.6.16.tar source archive to the server, unpack it, and enter the mysql-5.6.16 directory.
Build MySQL with cmake (if you plan to install to a different path, be sure to change /usr/local/mysql in the statement below!):
cmake -DCMAKE_INSTALL_PREFIX=/usr/local/mysql \
-DMYSQL_UNIX_ADDR=/usr/local/mysql/mysql.sock \
-DDEFAULT_CHARSET=utf8 \
-DDEFAULT_COLLATION=utf8_general_ci \
-DWITH_MYISAM_STORAGE_ENGINE=1 \
-DWITH_INNOBASE_STORAGE_ENGINE=1 \
-DWITH_ARCHIVE_STORAGE_ENGINE=1 \
-DWITH_BLACKHOLE_STORAGE_ENGINE=1 \
-DWITH_MEMORY_STORAGE_ENGINE=1 \
-DWITH_READLINE=1 \
-DENABLED_LOCAL_INFILE=1 \
-DMYSQL_DATADIR=/mysql/data \
-DMYSQL_USER=mysql \
-DMYSQL_TCP_PORT=3306 \
-DENABLE_DOWNLOADS=1
After pasting the above, press Enter to start the cmake step, which usually does not take long.
Once cmake finishes, compile the source. This step takes a while, so be patient:
make   # note: on a multi-core machine, make -j <number of CPUs> speeds up the compile; do not add -j on a single-core CPU
Install the compiled program:
make install
Note: if you need to reinstall MySQL, just run make install again under /usr/local/src/mysql-5.6.16; cmake and make are not needed a second time.
Clean up temporary build files:
make clean
③ Initialize the data directory
cd /usr/local/mysql/scripts/
Create the mysql user and give it ownership of the /mysql and /usr/local/mysql directories:
useradd mysql
chown -Rf mysql:mysql /mysql
chown -Rf mysql:mysql /usr/local/mysql
./mysql_install_db --datadir=/mysql/data --basedir=/usr/local/mysql --user=mysql
Two "OK" messages indicate that the data directory has been initialized under /mysql/data.
④ Register MySQL as a service and start it
cd /usr/local/mysql/support-files
cp mysql.server /etc/rc.d/init.d/mysql
cp my-default.cnf /etc/my.cnf
chkconfig --add mysql
chkconfig mysql on
service mysql start
mysql -u mysql -S /usr/local/mysql/mysql.sock
The connection needs the socket. You can specify it with -S, or write the socket path into my.cnf so the client picks it up automatically; otherwise the connection will fail with an error. Writing it into the configuration file is recommended:
[mysqld]
socket=/usr/local/mysql/mysql.sock
[client]
socket=/usr/local/mysql/mysql.sock
The remaining my.cnf parameters are added later.
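For reference, a minimal /etc/my.cnf matching this layout might look like the sketch below (basedir, datadir, port, socket, and the mysql user are the values used in this installation; treat it as a starting point, not a tuned production configuration):
[mysqld]
basedir=/usr/local/mysql
datadir=/mysql/data
port=3306
socket=/usr/local/mysql/mysql.sock
user=mysql
[client]
socket=/usr/local/mysql/mysql.sock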
2. MySQL master-slave replication
In this lab we build a master-slave replication environment across three servers:
Master 10.39.251.187
Slave1 10.39.251.188
Slave2 10.39.251.189
Modify the parameter files of the master and the slaves (/etc/my.cnf).
Master: 10.39.251.187
# append under the [mysqld] tag
server_id = 1
log-bin=/mysql/data/mysql-bin
log-bin-index=/mysql/data/mysql-binlog.index
log_slave_updates=1
sync-binlog = 1
Slave 1: 10.39.251.188 (backup master)
# append under the [mysqld] tag
server_id = 2
log-bin=/mysql/data/mysql-bin
log-bin-index=/mysql/data/mysql-binlog.index
log_slave_updates=1
sync-binlog = 1
Slave 2: 10.39.251.189
# append under the [mysqld] tag
read_only=1
server_id = 3
Log in to the master and create the replication account for the slaves, with the appropriate privilege:
mysql> grant replication slave on *.* to 'repluser'@'%' identified by 'oracle';
mysql> flush privileges;
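To confirm the account exists with the intended privilege, a quick check on the master could be (repluser is the account created above):
mysql> show grants for 'repluser'@'%';
mysql> select user,host from mysql.user where user='repluser';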
Initialize the data:
All slaves: if a slave already contains data, remove it so that the slave starts clean.
service mysql stop   # stop the MySQL service
cd /mysql/data   # enter the data directory
rm -fr <database directory>   # delete the database directories
service mysql restart   # start the MySQL service
Master: export the master's data and import it on all slaves to keep the data consistent.
mysqldump -u root -poracle123 m > /tmp/full.sql   # export the data on the master
scp /tmp/full.sql root@<slave IP>:/tmp/   # copy the export from the master to every slave via scp
All slaves: import the data transferred from the master.
mysql -u root -poracle123 m < /tmp/full.sql   # import the data transferred from the master
Master: put a read lock on the master and check the current binlog position.
mysql> flush tables with read lock;   # add a read lock on the master
mysql> show master status;
+------------------+----------+--------------+---------------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB          | Executed_Gtid_Set |
+------------------+----------+--------------+---------------------------+-------------------+
| mysql-bin.000003 |      411 |              | mysql,information_schema  |                   |
+------------------+----------+--------------+---------------------------+-------------------+
mysql> unlock tables;   # release the read lock on the master
Point each slave at the master and enable replication.
Execute on each slave:
change master to master_host='10.39.251.187',
master_port=3306, master_user='repluser',
master_password='oracle', master_log_file='mysql-bin.000003', master_log_pos=411;
Start the slave:
start slave;
Check the replication status on the slave:
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 10.39.251.187
Master_User: repluser
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000003
Read_Master_Log_Pos: 411
Relay_Log_File: my3306-relay-bin.000002
Relay_Log_Pos: 283
Relay_Master_Log_File: mysql-bin.000003
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Seconds_Behind_Master: 0
Test the synchronization:
Create a table and insert a record on the master, then query it on the slaves. This is very simple, and the change normally shows up immediately, since MySQL autocommits each statement.
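For example, a minimal check might look like the following (the repl_test database and t1 table are illustrative names, not part of the original setup):
On the master:
mysql> create database repl_test;
mysql> create table repl_test.t1 (id int primary key, note varchar(20));
mysql> insert into repl_test.t1 values (1, 'hello');
On each slave:
mysql> select * from repl_test.t1;   # the row inserted on the master should appear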
Semi-synchronous replication:
In addition to the built-in asynchronous replication mechanism, MySQL 5.5 also provides a plugin interface that supports semi-synchronous replication.
Disadvantages of asynchronous replication:
MySQL replication is asynchronous by default: the Master writes events to its binlog but does not know whether, or when, a Slave has received and processed them. With asynchronous replication, if the Master crashes, transactions committed on the Master may not have been transferred to any Slave. Given a Master -> Slave failover mechanism, the new master may therefore be missing those transactions.
The concept of semi-synchronous replication:
i. When a Slave connects to the Master, it reports whether it is operating under the semi-synchronous replication mechanism.
ii. When semi-synchronous replication is enabled on the Master, at least one Slave should enable it as well. A thread committing a transaction on the Master then blocks until it learns that at least one semi-synchronous Slave has received all events of that transaction, or until the wait times out.
iii. A Slave acknowledges receipt only once the events of a transaction have been written to its relay log and flushed to disk.
iv. If the wait times out, i.e. the Master is never told that the events were received, the Master automatically falls back to asynchronous replication. When at least one semi-synchronous Slave catches up again, the Master and that Slave automatically switch back to semi-synchronous replication.
v. Semi-synchronous replication only takes effect when it is enabled on both the Master and the Slave; if it is enabled on only one side, replication remains asynchronous.
Installing semi-synchronous replication:
Environment requirements:
i. MySQL 5.5 or above.
ii. Installing plugins requires the database to support dynamic loading. Check whether it is supported with the following:
mysql> show global variables like 'have_dynamic_loading';
+----------------------+-------+
| Variable_name        | Value |
+----------------------+-------+
| have_dynamic_loading | YES   |
+----------------------+-------+
1 row in set (0.00 sec)
iii. Semi-synchronous replication is built on top of an existing replication setup; a working replication environment must be in place before semi-synchronous replication is configured.
Installation:
Execute on the Master:
mysql> INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
Execute on each Slave:
mysql> INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';
If you do not know the plugin directory, look it up as follows:
mysql> show global variables like 'plugin_dir';
+---------------+----------------------------------+
| Variable_name | Value                            |
+---------------+----------------------------------+
| plugin_dir    | /opt/usr/local/mysql/lib/plugin/ |
+---------------+----------------------------------+
Check that the plugin is installed correctly:
mysql> show plugins;
or
mysql> select * from information_schema.plugins;
Configuration:
Execute on the Master:
mysql> SET GLOBAL rpl_semi_sync_master_enabled = 1;
mysql> SET GLOBAL rpl_semi_sync_master_timeout = N;   # N is the timeout in milliseconds
Execute on the Slave:
mysql> SET GLOBAL rpl_semi_sync_slave_enabled = 1;
If you enable semi-synchronous replication on a Slave that is already running, you must first stop the slave I/O thread, enable semi-synchronization, and then start the slave I/O thread again:
mysql> STOP SLAVE IO_THREAD; START SLAVE IO_THREAD;
Otherwise, the Slave will continue to replicate asynchronously.
Note that if these variable settings are not written into the configuration file, they will be lost the next time the database restarts. Add them to the configuration files:
On the Master:
[mysqld]
rpl_semi_sync_master_enabled=1
rpl_semi_sync_master_timeout=1000   # 1 second
On the Slave:
[mysqld]
rpl_semi_sync_slave_enabled=1
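Once both sides are set up, you can verify that semi-synchronous replication is actually in effect by checking the status counters exposed by the semisync plugins:
On the Master:
mysql> show global status like 'Rpl_semi_sync_master_status';    # ON means transactions are being acknowledged semi-synchronously
mysql> show global status like 'Rpl_semi_sync_master_clients';   # number of semi-synchronous slaves currently connected
On the Slave:
mysql> show global status like 'Rpl_semi_sync_slave_status';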
3. MHA installation
MHA features:
MHA monitors the master of the replication architecture and automatically fails over as soon as a master failure is detected. Even if some slaves have not received the latest relay logs, MHA automatically identifies the differences against the most up-to-date slave's relay logs and applies them to the other slaves, so that all slaves end up consistent. MHA usually completes a failover within seconds: it detects the master failure in 9-12 seconds, shuts down the failed master within 7-10 seconds to avoid split brain, applies the differential relay logs to the new master in a few more seconds, and the whole process typically finishes in 10-30 seconds. You can also set priorities to designate a particular slave as the candidate master. Because MHA repairs consistency between the slaves, any slave can be promoted to the new master without consistency problems that would break replication.
Installation steps:
Check that the server firewall and SELinux are turned off.
① Create and authorize the management user on the master
mysql> CREATE USER 'mha'@'%' IDENTIFIED BY 'mhapwd';
mysql> grant all privileges on *.* to 'mha'@'%' identified by 'mhapwd';
mysql> flush privileges;
② Check on the slaves that the user has been synchronized:
mysql> select user,host from mysql.user;
③ Configure the MySQL environment variables; every node needs this
[root@mysql-01 ~]# echo 'PATH=/usr/local/mysql/bin:$PATH' >> /etc/profile
[root@mysql-01 ~]# source /etc/profile
[root@mysql-01 ~]# which mysql
# create soft links for mysql and mysqlbinlog
mv /usr/bin/mysql /usr/bin/mysql_bak
ln -s /usr/local/mysql/bin/* /usr/bin/
④ Configure password-free SSH login. First configure /etc/hosts on every server.
Execute on 251.187:
ssh-keygen -t rsa   # press Enter at every prompt
cd /root/.ssh/
cat id_rsa.pub >> authorized_keys   # establish local trust
scp id_rsa.pub root@10.39.251.188:/root/.ssh/id_rsa187.pub
scp id_rsa.pub root@10.39.251.189:/root/.ssh/id_rsa187.pub
On 251.188 and 251.189, go to root's home directory:
cd /root/.ssh/
cat id_rsa187.pub >> authorized_keys   # establish remote trust
chmod 700 /root/.ssh/*
At this point 187 can ssh to 187, 188 and 189 without entering a password. Perform step ④ on 188 and 189 in the same way, so that the three servers can all connect to each other without a password (be careful not to mangle the keys by copying and pasting). If SSH still has problems, check the log in /var/log/secure.
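A quick way to verify the trust is to run a command against every node from each of the three servers; for example, from 187 (and likewise from 188 and 189):
for h in 10.39.251.187 10.39.251.188 10.39.251.189; do ssh root@$h hostname; done   # each hostname should print without a password prompt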
⑤ Install MHA
Some supporting packages are needed; use a yum network repository. If the installation runs into problems, try yum update to refresh the repository metadata or yum clean all to clear the cache.
Install the node package on 187, 188 and 189:
yum -y install perl-DBD-MySQL ncftp
rpm -ivh mha4mysql-node-0.54-0.el6.noarch.rpm
Install the manager package on 189:
yum install perl
yum install cpan
yum install perl-Config-Tiny
yum install perl-Time-HiRes
yum install perl-Log-Dispatch
yum install perl-Parallel-ForkManager
If installing perl-Log-Dispatch or perl-Parallel-ForkManager fails, install EPEL first (see https://fedoraproject.org/wiki/EPEL):
rpm -ivh epel-release-6-8.noarch.rpm
rpm -ivh mha4mysql-manager-0.54-0.el6.noarch.rpm
⑥ Configure MHA
mkdir -p /etc/masterha
mkdir -p /mha/app1
touch /etc/masterha/app1.cnf
vi /etc/masterha/app1.cnf
Insert the following:
[server default]
manager_workdir=/mha/app1
manager_log=/mha/app1/manager.log
ssh_user=root
user=root
password=oracle123
repl_user=repluser
repl_password=oracle
ping_interval=3
master_ip_failover_script=/mha/app1/master_ip_failover
[server1]
hostname=10.39.251.187
port=3306
master_binlog_dir=/mysql/data
candidate_master=1
[server2]
hostname=10.39.251.188
port=3306
master_binlog_dir=/mysql/data
candidate_master=1
[server3]
hostname=10.39.251.189
port=3306
master_binlog_dir=/mysql/data
no_master=1
Save and exit.
Verify that the SSH trust logins work, using the masterha_check_ssh tool:
masterha_check_ssh --conf=/etc/masterha/app1.cnf
If you hit the error: Can't locate MHA/NodeConst.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at /usr/share/perl5/vendor_perl/MHA/ManagerConst.pm line 25
Solution: cp -rvp /usr/lib/perl5/vendor_perl/MHA /usr/local/lib64/perl5/
(both the MHA database nodes and the management node need to perform this step)
Verify that MySQL replication is healthy, using the masterha_check_repl tool:
masterha_check_repl --conf=/etc/masterha/app1.cnf
⑦ Create the master_ip_failover script
cd /mha/app1
touch master_ip_failover
chmod 755 master_ip_failover
vi master_ip_failover
Insert the following:
#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';
use Getopt::Long;

my (
    $command, $ssh_user, $orig_master_host, $orig_master_ip,
    $orig_master_port, $new_master_host, $new_master_ip, $new_master_port
);

my $vip = '10.39.251.192';        # virtual IP
my $gateway = '10.39.251.255';    # gateway IP
my $interface = 'eth0';
my $key = "1";
my $ssh_start_vip = "/sbin/ifconfig $interface:$key $vip;/sbin/arping -I $interface -c 3 -s $vip $gateway >/dev/null 2>&1";
my $ssh_stop_vip = "/sbin/ifconfig $interface:$key down";

GetOptions(
    'command=s'          => \$command,
    'ssh_user=s'         => \$ssh_user,
    'orig_master_host=s' => \$orig_master_host,
    'orig_master_ip=s'   => \$orig_master_ip,
    'orig_master_port=i' => \$orig_master_port,
    'new_master_host=s'  => \$new_master_host,
    'new_master_ip=s'    => \$new_master_ip,
    'new_master_port=i'  => \$new_master_port,
);

exit &main();

sub main {
    print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";
    if ( $command eq "stop" || $command eq "stopssh" ) {
        # $orig_master_host, $orig_master_ip, $orig_master_port are passed.
        # If you manage the master ip address at a global catalog database,
        # invalidate orig_master_ip here.
        my $exit_code = 1;
        eval {
            print "Disabling the VIP on old master: $orig_master_host \n";
            &stop_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn "Got Error: $@\n";
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "start" ) {
        # all arguments are passed.
        # If you manage the master ip address at a global catalog database,
        # activate new_master_ip here.
        # You can also grant write access (create user, set read_only=0, etc) here.
        my $exit_code = 10;
        eval {
            print "Enabling the VIP - $vip on the new master - $new_master_host \n";
            &start_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn $@;
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "status" ) {
        print "Checking the Status of the script.. OK \n";
        `ssh $ssh_user\@$orig_master_host \" $ssh_start_vip \"`;
        exit 0;
    }
    else {
        &usage();
        exit 1;
    }
}

# A simple system call that enables the VIP on the new master
sub start_vip() {
    `ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}
# A simple system call that disables the VIP on the old master
sub stop_vip() {
    `ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}

sub usage {
    print
    "Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}
⑧ Start MHA
nohup masterha_manager --conf=/etc/masterha/app1.cnf > /mha/app1/mha_manager.log < /dev/null 2>&1 &
(Note: the process dies if the shell that started it is closed; putting the command into a script and running that script avoids the problem.)
Check: masterha_check_status --conf=/etc/masterha/app1.cnf
app1 (pid:19570) is running(0:PING_OK), master:10.39.251.187
Stop the manager:
masterha_stop --conf=/etc/masterha/app1.cnf
# if it cannot be stopped, add the --abort option
Verify again that MySQL replication is healthy, using the masterha_check_repl tool:
masterha_check_repl --conf=/etc/masterha/app1.cnf
Bind the virtual IP on the master node:
/sbin/ifconfig eth0:0 10.39.251.192 netmask 255.255.255.0 up
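To confirm the VIP is up and that clients can reach MySQL through it, a quick check from the master and from any client machine might be (the mha account and password come from the GRANT in step ①):
/sbin/ifconfig eth0:0   # the alias should carry 10.39.251.192
mysql -h 10.39.251.192 -u mha -pmhapwd -e "select @@hostname, @@server_id;"   # answered by whichever node currently holds the VIP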
4. MHA switchover
Stop the master MySQL on 187 and watch the manager log with tail -f /mha/app1/manager.log. The master is switched over: log in to MySQL on a slave and run show slave status\G, and you will see that the master has been switched to 188 and that the virtual IP 192 is now bound to the 188 node. Clients can still access 192 normally.
There are several issues to pay attention to during the switchover:
1. The switchover process automatically turns off read_only on the new master.
2. After a switchover, /mha/app1/app1.failover.complete must be deleted manually before a second test can be run.
3. Once a failover has happened, the management process exits and cannot be tested again until the failed database has been added back into the MHA environment.
4. When the original master rejoins the MHA setup, it can only be configured as a slave:
change master to MASTER_HOST='192.168.16.5', MASTER_USER='replicationuser', MASTER_PASSWORD='replicationuser', MASTER_LOG_FILE='mysql-bin.000004', MASTER_LOG_POS=106;
You need to run reset slave first.
5. There are several ways to take over the IP address. Here MHA automatically invokes an ip-alias script, which has the advantage of keeping the database state and the business IP switch consistent: after the management node starts, the VIP is automatically aliased onto the current master node. keepalived, by contrast, can only check the health of port 3306 and easily misjudges a switchover because it cannot check the slave SQL and slave I/O threads of MySQL replication.
6. Note: the secondary slave servers need log_slave_updates enabled.
7. A manual switchover requires the master_ip_online_change_script to be defined first; otherwise only MySQL is switched over and the IP address is not re-bound. The script can be built from the template; see the sketch after this list.
8. Setting no_master=1 prevents a node from ever becoming the new master.
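For reference, a planned (manual) switchover is driven with masterha_master_switch rather than by killing the master; the sketch below assumes master_ip_online_change_script has been configured in app1.cnf and uses the hosts from this lab:
masterha_stop --conf=/etc/masterha/app1.cnf   # the monitor must not be running during a manual switch
masterha_master_switch --conf=/etc/masterha/app1.cnf --master_state=alive \
--new_master_host=10.39.251.188 --new_master_port=3306 \
--orig_master_is_new_slave --running_updates_limit=10000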
Restoring the cluster:
① Delete the app1.failover.complete file on 189:
cd /mha/app1
rm -f app1.failover.complete
② Start the MySQL service on the original master node 187:
service mysql start
③ On the management node 189, check for synchronization errors:
masterha_check_repl --conf=/etc/masterha/app1.cnf
Fri Aug 21 11:23:34 2015 - [error][/usr/share/perl5/vendor_perl/MHA/ServerManager.pm, ln604] There are 2 non-slave servers! MHA manages at most one non-slave server. Check configurations.
④ Check the current position on the new master, 188:
show master status;
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000001 |     3721 |              |                  |                   |
+------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)
⑤ Configure MySQL on the 187 node as the new slave and start replication:
change master to master_host='10.39.251.188',
master_port=3306, master_user='repluser',
master_password='oracle', master_log_file='mysql-bin.000001', master_log_pos=3721;
mysql> start slave;
Check the synchronization status on the management node again; it should now succeed:
masterha_check_repl --conf=/etc/masterha/app1.cnf
Note: after the steps above, node 187 has rejoined the cluster as a slave, but data generated on 188 and 189 while 187 was down is not on 187 yet. You still need to import the latest backup taken from the master before starting synchronization; for the steps, refer to the master-slave replication section above.
⑥ Start MHA:
nohup masterha_manager --conf=/etc/masterha/app1.cnf > /mha/app1/mha_manager.log < /dev/null 2>&1 &
This concludes the sample MySQL + replication + MHA + VIP configuration. Hopefully the walkthrough above is a useful reference.