This article describes the steps for a rapid, highly available deployment of MHA and the basic use of its commands.
1. Download the installation package
MHA dependency packages: http://rpm.pbone.net/index.php3
perl-DBD-MySQL
perl-Config-Tiny
perl-Log-Dispatch
perl-Parallel-ForkManager
perl-Config-IniFiles
perl-MailTools
perl-Params-Validate
perl-TimeDate
perl-IO-stringy
cmake-2.8.12.2.tar.gz http://www.linuxfromscratch.org/blfs/view/7.5/general/cmake.html
DBD-mysql-4.033_02.tar.gz http://www.filewatcher.com/d/FreeBSD/distfiles/Other/DBD-mysql-4.018.tar.gz.133427.html
DBI-1.636.tar.gz http://www.cpan.org/modules/by-module/DBI/
MHA rpm package installation: https://code.google.com/p/mysql-master-ha/
mha4mysql-manager-0.56-0.el6.noarch.rpm
mha4mysql-node-0.56-0.el6.noarch.rpm
MHA source code installation: https://code.google.com/p/mysql-master-ha/
mha4mysql-node-0.56.tar.gz
mha4mysql-manager-0.56.tar.gz
MySQL installation package:
percona-server-5.6.32-78.0.tar.gz https://www.percona.com/downloads/Percona-Server-5.6/LATEST/
2. Extract the installation package
for i in `ls`; do tar -xzvf $i; tar -xvf $i; done
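A slightly more defensive version of this loop handles the two archive formats separately and skips non-matching files (a sketch; it assumes the downloaded archives sit in the current directory):

```shell
# Extract gzip'd tarballs and plain tarballs separately,
# skipping the pattern itself when nothing matches.
for f in *.tar.gz; do
  [ -e "$f" ] || continue
  tar -xzf "$f"
done
for f in *.tar; do
  [ -e "$f" ] || continue
  tar -xf "$f"
done
```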
3. Install the dependency package
Configure the local yum source:
[root@node3 MHA]# mkdir /media/cdrom
[root@node3 MHA]# mount CentOS-6.4-x86_64-bin-DVD1.iso /media/cdrom/ -o loop
[root@node3 MHA]# rm -rf /etc/yum.repos.d/*.repo
[root@node3 MHA]# vi /etc/yum.repos.d/CentOS6.repo
[base]
name=CentOS6 ISO Base
baseurl=file:///media/cdrom
enabled=1
gpgcheck=0
Dependency package check and installation:
yum install -y git scons gcc gcc-c++ openssl check cmake bison libaio libboost-all-dev libasio-dev libaio-dev libncurses5-dev libreadline-dev libpam-dev ncurses-devel
yum -y install gcc gcc-c++ gcc-g77 autoconf automake zlib* flex* libxml* ncurses-devel libmcrypt* libtool-ltdl-devel* make cmake
rpm -q git scons gcc gcc-c++ openssl check cmake bison libaio libboost-all-dev libasio-dev libaio-dev libncurses5-dev libreadline-dev libpam-dev ncurses-devel
cmake installation:
[root@node3 MHA]# cd cmake-2.8.12.2
[root@node3 cmake-2.8.12.2]# ./bootstrap
[root@node3 cmake-2.8.12.2]# make -j 8
[root@node3 cmake-2.8.12.2]# make install
Perl tool installation:
[root@node3 MHA]# rpm -ivh perl-Parallel-ForkManager-0.7.9-1.el6.noarch.rpm
[root@node3 MHA]# rpm -ivh perl-Config-Tiny-2.12-7.1.el6.noarch.rpm
[root@node3 MHA]# rpm -ivh perl-Params-Validate-0.95-5.9.x86_64.rpm
[root@node3 MHA]# rpm -ivh perl-TimeDate-2.22-1.noarch.rpm
[root@node3 MHA]# rpm -ivh perl-MailTools-2.04-4.el6.noarch.rpm (not required)
[root@node3 MHA]# rpm -ivh perl-Convert-BinHex-1.119-4.el6.noarch.rpm --nodeps (not required)
[root@node3 MHA]# rpm -ivh perl-List-MoreUtils-0.33-107.1.x86_64.rpm (not required)
[root@node3 MHA]# rpm -ivh perl-IO-stringy-2.110-8.el6.noarch.rpm --nodeps (not required)
[root@node3 MHA]# rpm -ivh perl-Config-IniFiles-2.72-2.el6.noarch.rpm --nodeps
[root@node3 MHA]# rpm -ivh perl-Mail-Sender-0.8.16-3.el6.noarch.rpm
[root@node3 MHA]# rpm -ivh perl-Mail-Sendmail-0.79-12.el6.noarch.rpm
[root@node3 MHA]# rpm -ivh perl-Log-Dispatch-2.27-1.el6.noarch.rpm --nodeps
DBI installation:
[root@node3 MHA]# cd DBI-1.636
[root@node3 DBI-1.636]# perl Makefile.PL
[root@node3 DBI-1.636]# make -j 8
[root@node3 DBI-1.636]# make install
DBD installation:
[root@node3 MHA]# cd DBD-mysql-4.033_02
[root@node3 DBD-mysql-4.033_02]# perl Makefile.PL --mysql_config=/usr/local/mysql/bin/mysql_config
[root@node3 DBD-mysql-4.033_02]# make -j 8
[root@node3 DBD-mysql-4.033_02]# make install
4. MySQL installation and master-slave deployment
Percona Server 5.6 installation
[root@node3 MHA]# tar -xzvf percona-server-5.6.32-78.0.tar.gz
[root@node3 MHA]# useradd mysql -s /sbin/nologin
[root@node3 MHA]# cd percona-server-5.6.32-78.0
[root@node3 percona-server-5.6.32-78.0]# cmake -DCMAKE_INSTALL_PREFIX=/usr/local/mysql -DMYSQL_UNIX_ADDR=/usr/local/mysql/mysql.sock -DDEFAULT_CHARSET=utf8 -DDEFAULT_COLLATION=utf8_general_ci -DWITH_MYISAM_STORAGE_ENGINE=1 -DWITH_INNOBASE_STORAGE_ENGINE=1 -DWITH_ARCHIVE_STORAGE_ENGINE=1 -DWITH_BLACKHOLE_STORAGE_ENGINE=1 -DWITH_MEMORY_STORAGE_ENGINE=1 -DENABLED_LOCAL_INFILE=1 -DMYSQL_DATADIR=/usr/local/mysql/data/ -DMYSQL_USER=mysql -DENABLE_DOWNLOADS=1 -DWITH_WSREP=1 -DWITH_EDITLINE=0
[root@node3 percona-server-5.6.32-78.0]# make -j 4
[root@node3 percona-server-5.6.32-78.0]# make install
Initialize the database:
[root@node3 MHA]# cd /usr/local/mysql
[root@node3 mysql]# ./scripts/mysql_install_db --user=mysql --basedir=/usr/local/mysql --datadir=/usr/local/mysql/data/
[root@node3 mysql]# cp support-files/mysql.server /etc/init.d/mysqld
[root@node3 mysql]# chkconfig --add mysqld
[root@node3 mysql]# chkconfig mysqld on
[root@node3 mysql]# chown -R mysql:mysql /usr/local/mysql
[root@node3 mysql]# vi ~/.bash_profile
PATH=$PATH:$HOME/bin:/usr/local/mysql/bin
[root@node3 mysql]# source ~/.bash_profile
Configure my.cnf
Node1 node:
[client]
socket=/usr/local/mysql/mysql.sock
[mysqld]
datadir=/usr/local/mysql/data
user=mysql
log-bin=mysql-binlog
binlog_format=ROW
server-id=1
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
innodb_locks_unsafe_for_binlog=1
innodb_autoinc_lock_mode=2
Node2 node:
[client]
socket=/usr/local/mysql/mysql.sock
[mysqld]
datadir=/usr/local/mysql/data
user=mysql
log-bin=mysql-binlog
binlog_format=ROW
server-id=2
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
innodb_locks_unsafe_for_binlog=1
innodb_autoinc_lock_mode=2
Node3 node:
[client]
socket=/usr/local/mysql/mysql.sock
[mysqld]
datadir=/usr/local/mysql/data
user=mysql
log-bin=mysql-binlog
binlog_format=ROW
server-id=3
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
innodb_locks_unsafe_for_binlog=1
innodb_autoinc_lock_mode=2
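Since the three node configurations above differ only in server-id, they can be stamped out from one template. A minimal sketch (the output filename my.cnf.nodeN is an assumption; in practice the file goes to each node's my.cnf location):

```shell
#!/bin/sh
# Generate a per-node my.cnf; only server-id varies between nodes.
ID=${1:-1}   # node number (1, 2, or 3), passed as the first argument
cat > "my.cnf.node${ID}" <<EOF
[client]
socket=/usr/local/mysql/mysql.sock

[mysqld]
datadir=/usr/local/mysql/data
user=mysql
log-bin=mysql-binlog
binlog_format=ROW
server-id=${ID}
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
innodb_locks_unsafe_for_binlog=1
innodb_autoinc_lock_mode=2
EOF
echo "wrote my.cnf.node${ID}"
```

Running it as `sh gen_mycnf.sh 2` would produce the node2 variant.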
Start the Node1 database:
[root@node1 mysql]# service mysqld start
Clean up unused MySQL accounts:
mysql> delete from mysql.user where user='';
mysql> delete from mysql.user where user='root' and host='::1';
mysql> delete from mysql.user where user='root' and host='node1';
mysql> delete from mysql.user where user='root' and host='127.0.0.1';
Create a manager management account:
mysql> GRANT SUPER,RELOAD,REPLICATION CLIENT,SELECT ON *.* TO manager@'10.x.x.%' IDENTIFIED BY 'manager';
mysql> GRANT CREATE,INSERT,UPDATE,DELETE,DROP ON *.* TO manager@'10.x.x.%';
Create a master-slave replication account:
mysql> GRANT RELOAD,SUPER,REPLICATION SLAVE ON *.* TO 'replicat'@'10.x.x.%' IDENTIFIED BY 'backup';
mysql> flush privileges;
Start the node2 database:
[root@node2 mysql]# service mysqld start
Start the node3 database:
[root@node3 mysql]# service mysqld start
Build master-slave replication:
Log in to MySQL on node1 and run:
mysql> show master status\G
*************************** 1. row ***************************
             File: mysql-binlog.000005
         Position: 973
     Binlog_Do_DB:
 Binlog_Ignore_DB:
Executed_Gtid_Set:
1 row in set (0.00 sec)
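The File and Position values in this output are exactly what the CHANGE MASTER TO statements below consume; they can be pulled out mechanically, e.g. with awk (a sketch using the sample output above):

```shell
# Extract master_log_file and master_log_pos from `SHOW MASTER STATUS\G` output.
mysql_status='             File: mysql-binlog.000005
         Position: 973'
printf '%s\n' "$mysql_status" | \
  awk '/File:/{f=$2} /Position:/{p=$2} END{printf "master_log_file=%s, master_log_pos=%s\n", f, p}'
# prints: master_log_file=mysql-binlog.000005, master_log_pos=973
```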
Log in to MySQL on node2 and run:
mysql> change master to master_host='masterip',master_user='replicat',master_password='backup',master_port=3306,master_log_file='mysql-binlog.000005',master_log_pos=973;
mysql> start slave;
Log in to MySQL on node3 and run:
mysql> change master to master_host='masterip',master_user='replicat',master_password='backup',master_port=3306,master_log_file='mysql-binlog.000005',master_log_pos=973;
mysql> start slave;
5. MHA installation and deployment
Node1 is the master, node2 and node3 are slaves, and node3 also serves as the manager node.
1) Configure SSH trust between nodes
Node1:
[root@node1 MHA]# ssh-keygen -t rsa
[root@node1 MHA]# vi ~/.ssh/authorized_keys
Put the contents of the id_rsa.pub public key files from node1, node2, and node3 into this file.
Node2:
[root@node2 MHA]# ssh-keygen -t rsa
Copy node1's authorized_keys to /root/.ssh/
Node3:
[root@node3 MHA]# ssh-keygen -t rsa
Copy node1's authorized_keys to /root/.ssh/
Verify SSH logins between every pair of nodes to make sure no password prompt appears.
Hostname-to-IP resolution must also be configured in /etc/hosts:
ip1 node1
ip2 node2
ip3 node3
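Step 1) boils down to concatenating the three public keys into one authorized_keys file and distributing it to every node. A sketch (node1.pub/node2.pub/node3.pub are placeholder filenames standing in for each node's ~/.ssh/id_rsa.pub):

```shell
# Merge the per-node public keys into a single authorized_keys file.
# node1.pub, node2.pub, node3.pub are placeholders for each node's id_rsa.pub.
cat node1.pub node2.pub node3.pub > authorized_keys
chmod 600 authorized_keys
# Then copy the merged file to /root/.ssh/ on every node, e.g.:
#   scp authorized_keys node2:/root/.ssh/
#   scp authorized_keys node3:/root/.ssh/
```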
2) Node source code installation
Node1:
[root@node1 MHA]# tar -xzvf mha4mysql-node-0.56.tar.gz
[root@node1 MHA]# cd mha4mysql-node-0.56
[root@node1 mha4mysql-node-0.56]# perl Makefile.PL
[root@node1 mha4mysql-node-0.56]# make -j 8
[root@node1 mha4mysql-node-0.56]# make install
Node2:
[root@node2 MHA]# tar -xzvf mha4mysql-node-0.56.tar.gz
[root@node2 MHA]# cd mha4mysql-node-0.56
[root@node2 mha4mysql-node-0.56]# perl Makefile.PL
[root@node2 mha4mysql-node-0.56]# make -j 8
[root@node2 mha4mysql-node-0.56]# make install
Node3:
[root@node3 MHA]# tar -xzvf mha4mysql-node-0.56.tar.gz
[root@node3 MHA]# cd mha4mysql-node-0.56
[root@node3 mha4mysql-node-0.56]# perl Makefile.PL
[root@node3 mha4mysql-node-0.56]# make -j 8
[root@node3 mha4mysql-node-0.56]# make install
3) Manager source code installation
[root@node3 MHA]# tar -xzvf mha4mysql-manager-0.56.tar.gz
[root@node3 MHA]# cd mha4mysql-manager-0.56
[root@node3 mha4mysql-manager-0.56]# perl Makefile.PL
[root@node3 mha4mysql-manager-0.56]# make -j 8
[root@node3 mha4mysql-manager-0.56]# make install
4) Configure MHA on the manager node
[root@node3 MHA]# mkdir /etc/mha
[root@node3 MHA]# mkdir -p /usr/local/mha/log
[root@node3 MHA]# vi /etc/mha/manager.cnf
[server default]
manager_workdir=/usr/local/mha/log
manager_log=/usr/local/mha/log/manager.log
# account used for passwordless ssh login
ssh_user=root
# MHA management account
user=manager
password=manager
# MySQL replication account, used to synchronize binary logs between master and slaves
repl_user=replicat
repl_password=backup
# ping interval, used to detect whether the master is alive
ping_interval=1
[server1]
hostname=node1
ip=IP1
# after the master goes down, this node is preferred as the new master
# candidate_master=1
master_binlog_dir=/usr/local/mysql/data
[server2]
hostname=node2
ip=IP2
candidate_master=1
master_binlog_dir=/usr/local/mysql/data
[server3]
hostname=node3
ip=IP3
master_binlog_dir=/usr/local/mysql/data
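Before starting the manager it can be worth sanity-checking the config, for instance counting how many servers carry candidate_master=1 (a sketch; the heredoc stands in for /etc/mha/manager.cnf and mirrors the sample config above):

```shell
# Count candidate_master=1 entries in an MHA config file.
# The inlined heredoc is illustrative; point awk at /etc/mha/manager.cnf in practice.
awk -F= '$1=="candidate_master" && $2==1 {n++} END{print n+0, "candidate master(s)"}' <<'EOF'
[server1]
hostname=node1
[server2]
hostname=node2
candidate_master=1
[server3]
hostname=node3
EOF
# prints: 1 candidate master(s)
```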
## Check on the manager node that the ssh connections work:
[root@node3 MHA]# masterha_check_ssh --conf=/etc/mha/manager.cnf
5) Start the manager
[root@node3 MHA]# nohup masterha_manager --conf=/etc/mha/manager.cnf > /usr/local/mha/log/mha_manager.log 2>&1 &
6) Check manager status
[root@node3 MHA]# masterha_check_status --conf=/etc/mha/manager.cnf
7) Stop the manager
[root@node3 MHA]# masterha_stop --conf=/etc/mha/manager.cnf
6. Daily management of MHA
1) Check the replication topology
masterha_check_repl --conf=/etc/mha/manager.cnf
2) Check MHA status
masterha_check_status --conf=/etc/mha/manager.cnf
3) Start MHA Manager
nohup masterha_manager --conf=/etc/mha/manager.cnf > /usr/local/mha/log/mha_manager.log 2>&1 &
4) Manual online switchover
masterha_master_switch --conf=/etc/mha/manager.cnf --master_state=alive --new_master_host=node1 --new_master_port=3306 --orig_master_is_new_slave --running_updates_limit=10000
5) Manual failover
masterha_master_switch --conf=/etc/mha/manager.cnf --master_state=dead --dead_master_host=node1 --dead_master_port=3306 --new_master_host=node2 --new_master_port=3306 --ignore_last_failover
6) Manual step-by-step switchover
Stop the manager's automatic monitoring: masterha_stop --conf=/etc/mha/manager.cnf
Fail over the dead node1 master: masterha_master_switch --conf=/etc/mha/manager.cnf --master_state=dead --dead_master_host=node1
Promote node2 to the new master: masterha_master_switch --conf=/etc/mha/manager.cnf --master_state=alive --new_master_host=node2
Non-interactive online switchover: masterha_master_switch --conf=/etc/mha/manager.cnf --master_state=alive --new_master_host=node2 --interactive=0
7. Installation errors and solutions
1) Error message 01
[root@node3 MHA]# masterha_check_repl --conf=/etc/mha/manager.cnf
Sun Aug 21 10:19:11 2016 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Sun Aug 21 10:19:11 2016 - [info] Reading application default configuration from /etc/mha/manager.cnf..
Sun Aug 21 10:19:11 2016 - [info] Reading server configuration from /etc/mha/manager.cnf..
Sun Aug 21 10:19:11 2016 - [info] MHA::MasterMonitor version 0.56.
Sun Aug 21 10:19:11 2016 - [error][/usr/local/share/perl5/MHA/ServerManager.pm, ln301] Got MySQL error when connecting node3(x.x.x.x:3306) :1130:Host 'node3' is not allowed to connect to this MySQL server, but this is not a MySQL crash. Check MySQL server settings.
 at /usr/local/share/perl5/MHA/ServerManager.pm line 297
Sun Aug 21 10:19:11 2016 - [error][/usr/local/share/perl5/MHA/ServerManager.pm, ln301] Got MySQL error when connecting node2(x.x.x.x:3306) :1130:Host 'x.x.x' is not allowed to connect to this MySQL server, but this is not a MySQL crash. Check MySQL server settings.
 at /usr/local/share/perl5/MHA/ServerManager.pm line 297
Sun Aug 21 10:19:11 2016 - [error][/usr/local/share/perl5/MHA/ServerManager.pm, ln301] Got MySQL error when connecting node1(x.x.x.x:3306) :1045:Access denied for user 'root'@'xx.xx.xx.xx' (using password: NO), but this is not a MySQL crash. Check MySQL server settings.
 at /usr/local/share/perl5/MHA/ServerManager.pm line 297
Sun Aug 21 10:19:11 2016 - [error][/usr/local/share/perl5/MHA/ServerManager.pm, ln309] Got fatal error, stopping operations
Sun Aug 21 10:19:11 2016 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln424] Error happened on checking configurations. at /usr/local/share/perl5/MHA/MasterMonitor.pm line 326
Sun Aug 21 10:19:11 2016 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln523] Error happened on monitoring servers.
Sun Aug 21 10:19:11 2016 - [info] Got exit code 1 (Not master dead).
MySQL Replication Health is NOT OK!
Solution:
Add the MySQL management account and password to the manager node's configuration file, and grant that account the SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD, SUPER, and REPLICATION CLIENT privileges.
2) Error message 02
Tue Aug 23 14:10:51 2016 - [info] Checking MHA is not monitoring or doing failover..
Tue Aug 23 14:10:51 2016 - [error][/usr/local/share/perl5/MHA/MasterRotate.pm, ln142] Getting advisory lock failed on the current master. MHA Monitor runs on the current master. Stop MHA Manager/Monitor and try again.
Tue Aug 23 14:10:51 2016 - [error][/usr/local/share/perl5/MHA/ManagerUtil.pm, ln177] Got ERROR: at /usr/local/bin/masterha_master_switch line 53
Solution:
Before performing a manual switchover, stop the automatic failover monitoring first.
[root@node3]# masterha_stop --conf=/etc/mha/manager.cnf
Stopped manager successfully.
[root@node3]# masterha_master_switch --global_conf=/etc/mha/masterha_default.conf --conf=/etc/mha/manager.cnf --master_state=alive --new_master_host=node2 --new_master_port=3306 --orig_master_is_new_slave --running_updates_limit=10000
8. Appendix
1) Common MHA tool commands
-- Manager tools
masterha_master_monitor   # detect whether the master is down
masterha_master_switch    # perform a manual master switchover
masterha_manager          # start MHA monitoring
masterha_stop             # stop MHA monitoring
masterha_check_status     # check the running status of MHA
masterha_check_ssh        # check SSH logins between the nodes
masterha_check_repl       # check whether MySQL replication is healthy
masterha_secondary_check  # check the master over multiple network routes
masterha_conf_host        # add or remove configured server entries
-- Node tools
save_binary_logs          # save and copy the master's binary logs
apply_diff_relay_logs     # identify differential relay log events and apply them to the other slaves
filter_mysqlbinlog        # remove unnecessary ROLLBACK events (MHA no longer uses this tool)
purge_relay_logs          # purge relay logs (does not block the SQL thread)
Note: the node tools are usually invoked by manager scripts and do not need to be run manually.
2) Appendix scripts
Disable automatic relay log deletion and purge on a schedule instead:
purge_relay_logs --user=root --password=xxx --disable_relay_log_purge --port=3306
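In practice this purge is scheduled from cron on each slave; a wrapper sketch (the log path and schedule are assumptions, the credentials are the placeholders from above, and the real purge_relay_logs call is shown commented out):

```shell
#!/bin/sh
# Cron wrapper for purge_relay_logs, run periodically on each slave.
LOG=/tmp/purge_relay_logs.log    # placeholder log path
echo "$(date '+%F %T') purging relay logs" >> "$LOG"
# The real call -- user/password are placeholders:
# purge_relay_logs --user=root --password=xxx --disable_relay_log_purge --port=3306 >> "$LOG" 2>&1
```

A crontab entry such as `0 4 * * * /etc/mha/purge_relay_logs.sh` would then run it nightly (time and path are assumptions).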
Check the master over multiple routes to guard against a single-path network failure:
secondary_check_script=/etc/mha/masterha_secondary_check -s node2 -s mysql --user=root --master_host=node1 --master_ip=ip1 --master_port=3306
Prevent split-brain:
shutdown_script=/etc/mha/power_manager