
MySQL High Availability with MHA


MHA (Master High Availability) is a relatively mature solution for MySQL high availability. When MHA detects a failure of the master node, it promotes the slave with the most recent data to be the new master; during the switch it avoids consistency problems by retrieving the missing events from the other nodes. MHA also provides an online master switchover function.

An MHA service has two roles, MHA Manager (management node) and MHA Node (data node):

MHA Manager: usually deployed on a separate machine; it can manage multiple master/slave clusters, each of which is called an application.

MHA Node: runs on every MySQL server (master, slave, and manager); it speeds up failover with scripts that can parse and purge logs.

Environment

This experimental environment has four nodes, with roles assigned as follows.

Manager: MHA Manager

Master: MariaDB master

Slave1: MariaDB slave

Slave2: MariaDB slave

Set the hostname of each node and add the following entries to the /etc/hosts file on every node (a command sketch follows the list):

172.16.1.2 manager.zrs.com manager

172.16.1.3 master.zrs.com master

172.16.1.4 slave1.zrs.com slave1

172.16.1.5 slave2.zrs.com slave2
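The article does not show the commands for this step. A minimal sketch, assuming a systemd-based system (the article uses systemctl elsewhere):

# Run on each node with its own name; the manager is shown here
hostnamectl set-hostname manager.zrs.com

# Append the same four entries to /etc/hosts on every node
cat >> /etc/hosts <<'EOF'
172.16.1.2 manager.zrs.com manager
172.16.1.3 master.zrs.com master
172.16.1.4 slave1.zrs.com slave1
172.16.1.5 slave2.zrs.com slave2
EOF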

Initial configuration of the master node:

server_id=1
relay_log=relay-log
log_bin=master-log

Configuration of node slave1:

server_id=2
relay_log=relay-log
log_bin=master-log
relay_log_purge=0
read_only=1

Configuration of node slave2 (a sketch of applying these directives follows):

server_id=3
relay_log=relay-log
log_bin=master-log
relay_log_purge=0
read_only=1
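These directives belong in the server's MariaDB configuration file. A minimal sketch for slave2, assuming the distribution reads /etc/my.cnf.d/server.cnf (adjust the path to your layout):

# Assumed config path; some installations use /etc/my.cnf instead
cat >> /etc/my.cnf.d/server.cnf <<'EOF'
[mysqld]
server_id=3
relay_log=relay-log
log_bin=master-log
relay_log_purge=0
read_only=1
EOF
systemctl restart mariadb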

Next, build the master-slave replication architecture.

On the master server, authorize the slaves and flush privileges:

MariaDB [(none)]> grant replication slave, replication client on *.* to 'repluser'@'172.16.1.4' identified by 'replpass';

MariaDB [(none)]> grant replication slave, replication client on *.* to 'repluser'@'172.16.1.5' identified by 'replpass';

MariaDB [(none)]> flush privileges;
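The master_log_file and master_log_pos values used in the change master to statement below are read from the master. One way to look them up (the exact file name and position will differ from setup to setup):

# Run on the master; note the File and Position columns
mysql -e 'SHOW MASTER STATUS'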

Configure the slave servers.

On both slaves, point replication at the master:

MariaDB [(none)]> change master to master_host='172.16.1.3', master_user='repluser', master_password='replpass', master_log_file='master-log.000003', master_log_pos=245;

Start the IO thread and the SQL thread:

MariaDB [(none)]> start slave io_thread;

MariaDB [(none)]> start slave sql_thread;
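A quick sanity check on each slave before handing the cluster to MHA (not part of the original transcript):

# Run on slave1 and slave2; expect both threads to report Yes
mysql -e 'SHOW SLAVE STATUS\G' | grep -E 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master'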

Create the MHA management account on all MySQL nodes:

MariaDB [(none)]> GRANT ALL ON *.* TO 'mhamngr'@'172.16.1.%' IDENTIFIED BY 'mhapass';

Establish passwordless SSH communication

All nodes in the MHA cluster need to trust each other over SSH so that MHA can run remote commands and manage data. Generate a key pair on the Manager node, authorize it for logging in to the local host, then copy the private key file and the authorized_keys file to all remaining nodes.

[root@manager ~]# ssh-keygen -t rsa -P ''
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): .ssh/id_rsa
Your identification has been saved in .ssh/id_rsa.
Your public key has been saved in .ssh/id_rsa.pub.
The key fingerprint is:
80:59:23:b9:f8:ce:7e:86:66:ad:23:82:b3:d9:a8:81 root@manager.zrs.com

The key's randomart image is:
(randomart omitted)

[root@manager ~]# cat .ssh/id_rsa.pub >> .ssh/authorized_keys

[root@manager ~]# chmod go= .ssh/authorized_keys

[root@manager ~]# scp -p .ssh/id_rsa .ssh/authorized_keys root@master:/root/.ssh/

The authenticity of host 'master (172.16.1.3)' can't be established.
ECDSA key fingerprint is 65:f7:d6:d7:ae:3b:a2:dc:2b:bc:33:64:0e:47:11:b4.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'master' (ECDSA) to the list of known hosts.
root@master's password:
id_rsa 100% 1675 1.6KB/s 00:00
authorized_keys 100% 402 0.4KB/s 00:00

[root@manager ~]# scp -p .ssh/id_rsa .ssh/authorized_keys root@slave1:/root/.ssh/
The authenticity of host 'slave1 (172.16.1.4)' can't be established.
ECDSA key fingerprint is eb:b4:c4:c4:aa:15:2c:f8:6b:e8:cc:59:75:7a:a5:89.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'slave1' (ECDSA) to the list of known hosts.
root@slave1's password:
id_rsa 100% 1675 1.6KB/s 00:00
authorized_keys 100% 402 0.4KB/s 00:00

[root@manager ~]# scp -p .ssh/id_rsa .ssh/authorized_keys root@slave2:/root/.ssh/
The authenticity of host 'slave2 (172.16.1.5)' can't be established.
ECDSA key fingerprint is be:2f:9f:d7:f8:2e:09:b1:7d:29:c2:76:53:0f:d2:94.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'slave2,172.16.1.5' (ECDSA) to the list of known hosts.
root@slave2's password:
id_rsa 100% 1675 1.6KB/s 00:00
authorized_keys 100% 402 0.4KB/s 00:00
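It is worth confirming that passwordless SSH now works between all nodes. A small loop such as the following (illustrative, not from the original transcript) should print each hostname without asking for a password:

# Run on each of the four nodes
for h in manager master slave1 slave2; do
  ssh -o BatchMode=yes root@$h 'hostname'
done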

Install MHA

In addition to the source tarball, MHA officially provides packages in rpm format, which can be downloaded from https://code.google.com/p/mysql-master-ha/wiki/Downloads?tm=2. This installation uses the rpm packages; note that MHA Node is required on all nodes, the manager included.

Install MHA Manager

rpm installation method:

[root@manager ~]# yum install perl-DBD-MySQL perl-Config-Tiny perl-Log-Dispatch perl-Parallel-ForkManager

[root@manager ~]# rpm -ivh mha4mysql-node-0.56-0.el6.noarch.rpm

[root@manager ~]# rpm -ivh mha4mysql-manager-0.56-0.el6.noarch.rpm

tar package installation method:

tar -zxf mha4mysql-manager-0.56.tar.gz
cd mha4mysql-manager-0.56
perl Makefile.PL
make
make install

Install MHA Node

rpm installation method:

~]# yum install perl-DBD-MySQL

~]# rpm -ivh mha4mysql-node-0.56-0.el6.noarch.rpm

tar package installation method:

tar -zxf mha4mysql-node-0.56.tar.gz
cd mha4mysql-node-0.56
perl Makefile.PL
make
make install

Initialize MHA

The Manager node needs a dedicated configuration file for each monitored master/slave cluster, and all clusters can share global settings. The global configuration file defaults to /etc/masterha_default.cnf and is optional. If only one master/slave cluster is monitored, the per-server defaults can also be provided directly in the application's own configuration file. The configuration file path of each application can be chosen freely; this lab uses /etc/masterha/app1.cnf.

[server default]
user=mhamngr
password=mhapass
manager_workdir=/data/masterha/app1
manager_log=/data/masterha/app1/manager.log
remote_workdir=/data/masterha/app1
ssh_user=root
repl_user=repluser
repl_password=replpass
ping_interval=1

[server1]
hostname=172.16.1.3
candidate_master=1

[server2]
hostname=172.16.1.4
candidate_master=1

[server3]
hostname=172.16.1.5

Check that SSH mutual-trust communication between the nodes is configured correctly:

[root@manager ~]# masterha_check_ssh --conf=/etc/masterha/app1.cnf

If the last line of the output reads as follows, the test passed:

[info] All SSH connection tests passed successfully.

Check that the connection parameters of the managed MySQL replication cluster are correct:

[root@manager ~]# masterha_check_repl --conf=/etc/masterha/app1.cnf

If the last line of the output reads as follows, the test passed:

MySQL Replication Health is OK.

Start MHA

[root@manager ~]# nohup masterha_manager --conf=/etc/masterha/app1.cnf > /data/masterha/app1/manager.log 2>&1 &

Check the status of the master node:

[root@manager ~]# masterha_check_status --conf=/etc/masterha/app1.cnf
app1 (pid:23265) is running(0:PING_OK), master:172.16.1.3

Stop MHA

[root@manager ~]# masterha_stop --conf=/etc/masterha/app1.cnf

Stopped app1 successfully.

MHA provides a number of utility programs; the common ones are listed below.

Manager node:

masterha_check_ssh: checks the SSH environment MHA depends on
masterha_check_repl: checks the MySQL replication environment
masterha_manager: the main MHA service program
masterha_check_status: checks MHA health status
masterha_master_monitor: monitors the availability of the MySQL master node
masterha_master_switch: performs master switchover
masterha_conf_host: adds or removes configured nodes
masterha_stop: shuts down the MHA service

Node node:

save_binary_logs: saves and copies the master's binary logs
apply_diff_relay_logs: identifies differing relay log events and applies them to the other slaves
filter_mysqlbinlog: removes unnecessary ROLLBACK events (MHA no longer uses this tool)
purge_relay_logs: purges relay logs (without blocking the SQL thread)

Custom extensions:

secondary_check_script: checks the availability of the master over additional network routes
master_ip_failover_script: updates the master IP used by the application
shutdown_script: forcibly shuts down the master node
report_script: sends reports
init_conf_load_script: loads initial configuration parameters
master_ip_online_change_script: updates the master node IP address

Test failover

Stop the mariadb service on the master node:

[root@master ~]# killall -9 mysqld mysqld_safe

Checking the log shows that node 172.16.1.3 is down and 172.16.1.4 has been promoted to the new master.
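To watch the failover and confirm the new topology, commands along these lines can be used (illustrative; the expected value in the comment is an assumption based on this topology):

# On the manager: follow the MHA log while the failover runs
tail -f /data/masterha/app1/manager.log

# On the surviving slave (172.16.1.5): replication should now point at 172.16.1.4
mysql -e 'SHOW SLAVE STATUS\G' | grep Master_Host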

Use zabbix to restart masterha-manager automatically

Rough steps

The installation of zabbix server and zabbix agent is skipped here; both were installed on the manager host to monitor the masterha_manager process that was started with nohup above. As soon as zabbix detects that the background process has exited, the conditions and triggers configured in zabbix call a script, so that the manager process is always kept running on the manager host.

The configuration that needs to be completed on the agent:

1. Give the zabbix user the required administrative privileges

Edit the /etc/sudoers file.

Comment out the first line below: by default, sudo only allows users who can log in on a tty to execute commands, and the zabbix user cannot log in to the system, so this line has to be commented out.

Then add a line such as the second one below. This is not safe; in a production environment, use a more restrictive rule.

# Defaults requiretty
zabbix ALL=(ALL) NOPASSWD: ALL

2. Allow the agent process to execute remote commands

Enable remote commands by setting the following option to 1 in the /etc/zabbix/zabbix_agentd.conf configuration file (a sample item key definition for this setup appears after these steps):

EnableRemoteCommands=1

3. Start the service

[root@manager ~]# systemctl start zabbix-agent.service

4. Set up Hosts, Items, Triggers, and Actions (Action, Conditions, Operations) in the web interface

Note that Operations must be configured with a command that calls the script which starts the MHA manager program:

[root@manager ~]# cat mannager.sh
nohup masterha_manager --conf=/etc/masterha/app1.cnf > /data/masterha/app1/manager.log 2>&1 &

5. Test whether zabbix calls the script and starts the manager in the background according to the configured action.
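For the zabbix_get checks below to work, the masterha.manager item key must be defined on the agent. The article does not show this definition; a plausible sketch, assuming the key wraps masterha_check_status (whose exit code is 0 when the manager is running and 2 when it is not, matching the values returned below), is:

# Assumed line in /etc/zabbix/zabbix_agentd.conf; restart zabbix-agent after adding it
UserParameter=masterha.manager,sudo masterha_check_status --conf=/etc/masterha/app1.cnf >/dev/null 2>&1; echo $?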

To kill the process started with nohup, use:

[root@manager ~]# kill -9 <pid> # look the pid up first, e.g. with ps

Fetch the value manually with zabbix_get:

[root@manager ~]# zabbix_get -s 172.16.1.2 -k masterha.manager
2

Fetch it again after zabbix has reacted:

[root@manager ~]# zabbix_get -s 172.16.1.2 -k masterha.manager
0

When 0 is returned here, and the ps command confirms that the process has indeed been started again, zabbix has successfully restarted masterha-manager.

zabbix_get is a zabbix command-line tool that fetches a value from an agent:

-s the IP address to query, local or remote
-p the port of zabbix_agentd
-k the item key
