MHA Architecture of MySQL


Role               IP address      Hostname   Server id   Type
Master             192.168.1.131   master1    1           Master (write)
Candidate master   192.168.1.132   master2    2           Slave (read)
Slave              192.168.1.133   slave1     3           Slave (read)
Manager            192.168.1.134   manager    -           Management node

Prepare the basic environment:

1. After configuring the IP addresses, check the SELinux and iptables settings. Disable SELinux and stop the iptables service so that master-slave synchronization does not run into errors later, and make sure the clocks on all machines are synchronized.
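As a minimal sketch of this step (assuming CentOS-style hosts where the iptables service is in use and ntpdate is available; the NTP server name is only an example), run on all four machines:

[root@master1 ~]# setenforce 0                                                   # disable SELinux for the running system
[root@master1 ~]# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config   # persist the change across reboots
[root@master1 ~]# service iptables stop                                          # stop the firewall service
[root@master1 ~]# chkconfig iptables off                                         # keep it off after reboot
[root@master1 ~]# ntpdate cn.pool.ntp.org                                        # synchronize the clock (example NTP server)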

2. Configure the EPEL yum repository and install the required Perl packages on all four machines

[root@master1 ~]# yum -y install epel-rpm-macros.noarch

[root@master1 ~]# yum -y install perl-DBD-MySQL perl-Config-Tiny perl-Log-Dispatch perl-Parallel-ForkManager perl-Config-IniFiles ncftp perl-Params-Validate perl-CPAN perl-Test-Mock-LWP.noarch perl-LWP-Authen-Negotiate.noarch perl-devel perl-ExtUtils-CBuilder perl-ExtUtils-MakeMaker

3. Configure the hosts environment

[root@master1 ~]# vim /etc/hosts

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4

::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.1.131 master1

192.168.1.132 master2

192.168.1.133 slave1

192.168.1.134 manager

[root@master1 ~]# for i in master2 slave1 manager; do scp /etc/hosts $i:/etc/; done

4. Set up passwordless (key-based) SSH login among the hosts; all four machines must do this.

[root@master1 ~]# ssh-keygen -t rsa

[root@master1 ~]# for i in master1 master2 slave1 manager; do ssh-copy-id -i ~/.ssh/id_rsa.pub $i; done
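To confirm that passwordless login works, run a command over ssh from each machine and verify that no password prompt appears (hostnames as defined in /etc/hosts above):

[root@master1 ~]# for i in master1 master2 slave1 manager; do ssh $i hostname; done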

Configure semi-synchronous replication for MySQL

To minimize data loss when the primary database goes down due to hardware failure or a crash, it is recommended to configure MySQL semi-synchronous replication while configuring MHA.

Note: the MySQL semi-synchronous plugin was originally contributed by Google. It lives in /usr/local/mysql/lib/plugin/: semisync_master.so is used by the master and semisync_slave.so by the slaves. Let's configure it in detail. If you do not know the plugin directory, look it up as follows:

mysql> show variables like '%plugin_dir%';
+---------------+------------------------------+
| Variable_name | Value                        |
+---------------+------------------------------+
| plugin_dir    | /usr/local/mysql/lib/plugin/ |
+---------------+------------------------------+
1 row in set (0.01 sec)
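You can also confirm that the two plugin files are actually on disk (a quick check, using the plugin_dir reported above):

[root@master1 ~]# ls /usr/local/mysql/lib/plugin/semisync_*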

1. Install the relevant plugins on the master and slave nodes (master, candidate master, and slave).

Installing plugins in MySQL requires the server to support dynamic loading. Check whether it is supported as follows:

mysql> show variables like '%have_dynamic_loading%';
+----------------------+-------+
| Variable_name        | Value |
+----------------------+-------+
| have_dynamic_loading | YES   |
+----------------------+-------+

Install the semi-synchronous plugins (semisync_master.so, semisync_slave.so) on all MySQL database servers:

mysql> install plugin rpl_semi_sync_master soname 'semisync_master.so';

mysql> install plugin rpl_semi_sync_slave soname 'semisync_slave.so';

Check that the plugins are installed correctly:

mysql> show plugins;
...
| partition            | ACTIVE   | STORAGE ENGINE | NULL               | GPL |
| ARCHIVE              | ACTIVE   | STORAGE ENGINE | NULL               | GPL |
| FEDERATED            | DISABLED | STORAGE ENGINE | NULL               | GPL |
| ngram                | ACTIVE   | FTPARSER       | NULL               | GPL |
| rpl_semi_sync_master | ACTIVE   | REPLICATION    | semisync_master.so | GPL |
| rpl_semi_sync_slave  | ACTIVE   | REPLICATION    | semisync_slave.so  | GPL |
+----------------------+----------+----------------+--------------------+-----+

Or

mysql> select * from information_schema.plugins;

View the semi-synchronization related variables:

mysql> show variables like '%rpl_semi_sync%';
+-------------------------------------------+------------+
| Variable_name                             | Value      |
+-------------------------------------------+------------+
| rpl_semi_sync_master_enabled              | OFF        |
| rpl_semi_sync_master_timeout              | 10000      |
| rpl_semi_sync_master_trace_level          | 32         |
| rpl_semi_sync_master_wait_for_slave_count | 1          |
| rpl_semi_sync_master_wait_no_slave        | ON         |
| rpl_semi_sync_master_wait_point           | AFTER_SYNC |
| rpl_semi_sync_slave_enabled               | OFF        |
| rpl_semi_sync_slave_trace_level           | 32         |
+-------------------------------------------+------------+

2. Modify the MySQL configuration files to set up master-slave synchronization

[root@master1 ~]# vim /etc/my.cnf

server-id = 1
log-bin = mysql-bin
binlog_format = mixed
log-bin-index = mysql-bin.index
rpl_semi_sync_master_enabled = 1
rpl_semi_sync_master_timeout = 10000
rpl_semi_sync_slave_enabled = 1
relay_log_purge = 0
relay-log = relay-bin
relay-log-index = slave-relay-bin.index

rpl_semi_sync_master_enabled=1: 1 means semi-synchronous replication is enabled, 0 means it is off.

rpl_semi_sync_master_timeout=10000: in milliseconds. If the master waits 10 seconds for an acknowledgement without receiving one, it stops waiting and falls back to asynchronous replication.
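As a side note, these settings can also be applied at runtime without restarting mysqld; a minimal sketch, using the values from the configuration above (the master variables on the master, the slave variable on the slaves):

mysql> set global rpl_semi_sync_master_enabled = 1;
mysql> set global rpl_semi_sync_master_timeout = 10000;
mysql> set global rpl_semi_sync_slave_enabled = 1;

When rpl_semi_sync_slave_enabled is changed at runtime, the slave's IO thread typically needs a restart (stop slave io_thread; start slave io_thread;) for the change to take effect.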

[root@master2 ~]# vim /etc/my.cnf

server-id = 2
log-bin = mysql-bin
binlog_format = mixed
log-bin-index = mysql-bin.index
relay_log_purge = 0
relay-log = relay-bin
relay-log-index = slave-relay-bin.index
rpl_semi_sync_master_enabled = 1
rpl_semi_sync_master_timeout = 10000
rpl_semi_sync_slave_enabled = 1

Note: relay_log_purge=0 forbids the SQL thread from automatically deleting a relay log after executing it. In an MHA scenario, recovering a lagging slave may depend on the relay logs of other slaves, so the automatic deletion feature is disabled.

[root@slave1 ~]# vim /etc/my.cnf

server-id = 3
log-bin = mysql-bin
relay-log = relay-bin
relay-log-index = slave-relay-bin.index
read_only = 1
rpl_semi_sync_slave_enabled = 1

View semi-synchronization information on the master:

mysql> show variables like '%rpl_semi_sync%';

View the semi-synchronization status:

mysql> show status like '%rpl_semi_sync%';

Rpl_semi_sync_master_status: whether the master is currently running in asynchronous or semi-synchronous replication mode
Rpl_semi_sync_master_clients: how many slaves are connected in semi-synchronous replication mode
Rpl_semi_sync_master_yes_tx: the number of commits successfully acknowledged by a slave
Rpl_semi_sync_master_no_tx: the number of commits not acknowledged by a slave
Rpl_semi_sync_master_tx_avg_wait_time: the average extra wait time per transaction caused by semi_sync
Rpl_semi_sync_master_net_avg_wait_time: the average network wait time after a transaction enters the wait queue
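For quick monitoring from the shell, one option (assuming the mysql client is in PATH; the credentials here are examples) is:

[root@master1 ~]# mysql -uroot -p -e "show status like 'Rpl_semi_sync_master%';"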

Master-slave synchronization

Master1 host

mysql> grant replication slave on *.* to rep@'192.168.1.%' identified by '123';

mysql> grant all privileges on *.* to manager@'192.168.1.%' identified by '123';

mysql> show master status;

The first grant command creates the account used for master-slave replication; it needs to be created on the master and candidate master hosts.

The second grant command creates the MHA management account; it must be executed on all MySQL servers. MHA is configured to log in to the databases remotely, so this authorization is required.
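To double-check the two accounts, list their grants (a simple verification, using the account names created above):

mysql> show grants for rep@'192.168.1.%';
mysql> show grants for manager@'192.168.1.%';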

Master2 host

mysql> grant replication slave on *.* to rep@'192.168.1.%' identified by '123';

mysql> grant all privileges on *.* to manager@'192.168.1.%' identified by '123';

mysql> change master to master_host='192.168.1.131',master_port=3306,master_user='rep',master_password='123',master_log_file='mysql-bin.000001',master_log_pos=739;

mysql> start slave;

mysql> show slave status\G

Slave1 host

mysql> grant all privileges on *.* to manager@'192.168.1.%' identified by '123';

mysql> change master to master_host='192.168.1.131',master_port=3306,master_user='rep',master_password='123',master_log_file='mysql-bin.000001',master_log_pos=739;

mysql> start slave;

mysql> show slave status\G

View the semi-synchronization status on the master1 server. If replication is working, Rpl_semi_sync_master_clients should be 2 (master2 and slave1) and Rpl_semi_sync_master_status should be ON:

mysql> show status like '%rpl_semi_sync%';

Configure MySQL MHA

Install mha4mysql-node-0.56.tar.gz on all database nodes

[root@master1 ~]# tar zxf mha4mysql-node-0.56.tar.gz

[root@master1 ~]# cd mha4mysql-node-0.56/

[root@master1 mha4mysql-node-0.56]# perl Makefile.PL

[root@master1 mha4mysql-node-0.56]# make && make install

mha4mysql-node-0.56.tar.gz is installed on the other two data nodes in the same way (steps omitted here).

The management node needs both packages installed: mha4mysql-node-0.56.tar.gz and mha4mysql-manager-0.56.tar.gz.

[root@manager ~]# tar zxf mha4mysql-node-0.56.tar.gz

[root@manager ~]# cd mha4mysql-node-0.56/

[root@manager mha4mysql-node-0.56]# perl Makefile.PL

[root@manager mha4mysql-node-0.56]# make && make install

[root@manager mha4mysql-node-0.56]# cd

[root@manager ~]# tar zxf mha4mysql-manager-0.56.tar.gz

[root@manager ~]# cd mha4mysql-manager-0.56/

[root@manager mha4mysql-manager-0.56]# perl Makefile.PL

[root@manager mha4mysql-manager-0.56]# make && make install

[root@manager mha4mysql-manager-0.56]# mkdir /etc/masterha

[root@manager mha4mysql-manager-0.56]# mkdir -p /masterha/app1

[root@manager mha4mysql-manager-0.56]# mkdir /scripts

[root@manager mha4mysql-manager-0.56]# cp samples/conf/* /etc/masterha/

[root@manager mha4mysql-manager-0.56]# cp samples/scripts/* /scripts/

[root@manager mha4mysql-manager-0.56]# vim /etc/masterha/app1.cnf

[server default]
manager_workdir=/masterha/app1
manager_log=/masterha/app1/manager.log
user=manager
password=123
ssh_user=root
repl_user=rep
repl_password=123
ping_interval=1

[server1]
hostname=192.168.1.131
port=3306
master_binlog_dir=/usr/local/mysql/data
candidate_master=1

[server2]
hostname=192.168.1.132
port=3306
master_binlog_dir=/usr/local/mysql/data
candidate_master=1

[server3]
hostname=192.168.1.133
port=3306
master_binlog_dir=/usr/local/mysql/data
no_master=1

Save and exit. Then empty the global default configuration file so that only app1.cnf takes effect:

[root@manager mha4mysql-manager-0.56]# > /etc/masterha/masterha_default.cnf

The parameters used above are explained below:

manager_workdir=/masterha/app1            // manager's working directory
manager_log=/masterha/app1/manager.log    // manager's log file
user=manager                              // monitoring user
password=123                              // password of the monitoring user
ssh_user=root                             // user for SSH connections
repl_user=rep                             // master-slave replication user
repl_password=123                         // password of the replication user
ping_interval=1                           // interval, in seconds, at which the manager pings the master (default 3); after three failed attempts, failover is carried out automatically
master_binlog_dir=/usr/local/mysql/data   // where the master stores its binlogs, so that MHA can find its logs; here this is MySQL's data directory
candidate_master=1                        // marks a candidate master: when a master-slave switch occurs, this slave is promoted to the master

SSH connectivity verification:

[root@manager mha4mysql-manager-0.56]# masterha_check_ssh --global_conf=/etc/masterha/masterha_default.cnf --conf=/etc/masterha/app1.cnf

If the output ends by reporting that all SSH connection tests passed, the configuration is correct.

Verify the validity of cluster replication (MySQL must be running):

[root@manager mha4mysql-manager-0.56]# masterha_check_repl --global_conf=/etc/masterha/masterha_default.cnf --conf=/etc/masterha/app1.cnf

If an error is reported at the end of this command (typically because MHA cannot find the MySQL binaries), the workaround, performed on all servers, is:

[root@manager mha4mysql-manager-0.56]# ln -s /usr/local/mysql/bin/* /usr/local/bin/

Start the manager

[root@manager ~]# nohup masterha_manager --conf=/etc/masterha/app1.cnf &> /tmp/mha_manager.log &
[1] 1319

Status check

[root@manager ~]# masterha_check_status --conf=/etc/masterha/app1.cnf
app1 (pid:1319) is running(0:PING_OK), master:192.168.1.131

Failover verification:

1. Stop MySQL on master1.

[root@master1 mha4mysql-node-0.56]# service mysqld stop

2. Watch the MHA log

[root@manager ~]# tail -f /masterha/app1/manager.log

3. Log in to MySQL on slave1 and check the slave status

mysql> show slave status\G

In the output, Master_Host should now point to 192.168.1.132: master2 has been promoted to master, and slave1 replicates from it.

Routine operational steps on the MHA manager side

1) Check whether the following file exists, and delete it if it does. (After a master-slave switch occurs, the MHA manager service stops automatically and generates the file app1.failover.complete under manager_workdir (/masterha/app1). Before MHA can be started again, you must make sure this file is gone.)

[root@manager ~]# ll /masterha/app1/
total 24
-rw-r--r-- 1 root root     0 Aug  8 02:02 app1.failover.complete
-rw-r--r-- 1 root root 20955 Aug  8 02:02 manager.log
[root@manager ~]# rm -rf /masterha/app1/app1.failover.complete

2) Run the MHA replication check (master1 first needs to be reconfigured as a slave of the new master, master2):

mysql> CHANGE MASTER TO MASTER_HOST='192.168.1.132', MASTER_PORT=3306, MASTER_LOG_FILE='mysql-bin.000002', MASTER_LOG_POS=739, MASTER_USER='rep', MASTER_PASSWORD='123';

mysql> start slave;

3) Stop MHA: masterha_stop --conf=/etc/masterha/app1.cnf

4) start MHA:

# nohup masterha_manager --conf=/etc/masterha/app1.cnf &> /tmp/mha_manager.log &

By default, the manager will not start when a slave node is down. Adding --ignore_fail_on_start lets MHA start even if a node is down:

# nohup masterha_manager --conf=/etc/masterha/app1.cnf --ignore_fail_on_start &> /tmp/mha_manager.log &

5) check status:

# masterha_check_status --conf=/etc/masterha/app1.cnf

6) check the log:

# tail -f /masterha/app1/manager.log

7) Follow-up work after a master-slave switch

Rebuilding:

Rebuilding means that after your master dies and the candidate master takes over (the candidate master becomes the new master), the original master has to be brought back as a new slave of the new master. So the rebuilding plan is: repair the original master, configure it as a slave of the new master, and then repeat the operation steps above. When the original master's data files are intact, the last executed CHANGE MASTER command can be recovered as follows (a rejoin sketch follows the log excerpt):

[root@manager ~]# grep "CHANGE MASTER TO MASTER" /masterha/app1/manager.log | tail -1
Wed Aug  8 02:02:40 2018 - [info]  All other slaves should start replication from here. Statement should be: CHANGE MASTER TO MASTER_HOST='192.168.1.132', MASTER_PORT=3306, MASTER_LOG_FILE='mysql-bin.000002', MASTER_LOG_POS=739, MASTER_USER='rep', MASTER_PASSWORD='xxx';
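Putting this together, a sketch of rejoining the old master as a new slave (run on master1; substitute the real replication password, '123' in this setup, for the masked 'xxx' in the recovered statement):

[root@master1 ~]# service mysqld start
mysql> CHANGE MASTER TO MASTER_HOST='192.168.1.132', MASTER_PORT=3306, MASTER_LOG_FILE='mysql-bin.000002', MASTER_LOG_POS=739, MASTER_USER='rep', MASTER_PASSWORD='123';
mysql> start slave;
mysql> show slave status\G

Then remove app1.failover.complete and restart masterha_manager as described in steps 1) and 4) above.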

Delete relay logs periodically

When master-slave replication was configured, the parameter relay_log_purge=0 was set on the slaves, so each slave node needs to delete its relay logs periodically. It is recommended that each slave node delete its relay logs at a different time:

crontab -e

0 5 * * * /usr/local/bin/purge_relay_logs --user=root --password=pwd123 --port=3306 --disable_relay_log_purge >> /var/log/purge_relay.log 2>&1
