Highly available Architecture for MySQL-MHA


At present, there are several mature schemes for MySQL high availability, such as heartbeat+drbd, MHA, and plain MySQL replication. The heartbeat+drbd scheme scales poorly: the master handles all reads and writes while the standby serves no read traffic, so it suits environments with slow data growth and strict consistency requirements, such as banking and finance. This article focuses on the MHA high-availability architecture.

MHA is an excellent high-availability tool for failover and master promotion. When the master fails, MHA can complete the database failover automatically within 0-30 seconds while preserving data consistency as far as possible, which is what makes the high availability real. MHA is built on top of MySQL master-slave replication, so first review the two most common replication modes:

Asynchronous replication: after the master executes and commits a transaction, it writes the change to its binary log and immediately returns to the client. The slave applies that log with some delay, which leaves a hidden danger: if the master crashes after committing a transaction and writing the binary log, but before the slave has fetched that log, the master and slave end up with inconsistent data.

Semi-synchronous replication: the master does not acknowledge a commit to the client immediately; it waits until at least one slave has received the binary log events and written them to its relay log before reporting success to the client.
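As a minimal sketch (not part of the original walkthrough, and assuming MySQL 5.5+ with the bundled semisync plugins), semi-synchronous replication can be enabled like this:

On the master:

mysql> INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
mysql> SET GLOBAL rpl_semi_sync_master_enabled = 1;
mysql> SET GLOBAL rpl_semi_sync_master_timeout = 1000; # milliseconds; fall back to async if no slave acknowledges in time

On each slave:

mysql> INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';
mysql> SET GLOBAL rpl_semi_sync_slave_enabled = 1;
mysql> STOP SLAVE IO_THREAD; START SLAVE IO_THREAD; # reconnect so the setting takes effect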

MHA composition:

MHA Manager: the management node. It can be deployed on a dedicated server, where it can manage multiple master-slave clusters, or on one of the slaves.

MHA Node: the data node component, which runs on every MySQL server.

MHA Manager periodically probes the master node of the cluster. When the master fails, it automatically promotes the slave holding the most recent data to be the new master and repoints all the other slaves at it. The entire failover process is transparent to the application.

How MHA works:

1) Save the binary log events from the crashed master.

2) Identify the slave with the most recent updates.

3) Apply the differential relay logs to the other slaves.

4) Apply the binary log events saved from the master.

5) Promote one slave to be the new master.

6) Point the other slaves at the new master and resume replication.
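For reference (not in the original text), MHA also ships a masterha_master_switch tool that drives the same sequence by hand; a hedged sketch of a dead-master failover, assuming the appl.cnf configuration created later in this article:

[root@node1 ~]# masterha_master_switch --master_state=dead --conf=/etc/masterha/appl.cnf --dead_master_host=192.168.154.156 --new_master_host=192.168.154.130 # promote node3 manually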

Example: an MHA high-availability architecture (in a private network you can simply turn off the firewall; otherwise, open the required ports, as sketched after the host list below):

Manager:node1:192.168.154.128

Master:node2:192.168.154.156

Slave:node3:192.168.154.130

Slave:node4:192.168.154.154
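If the firewall stays on, a minimal sketch of the rules you would need (assuming iptables, as on the CentOS-era hosts used here; MySQL on 3306 and SSH on 22):

[root@node1 ~]# iptables -A INPUT -p tcp --dport 22 -j ACCEPT # SSH, used by MHA to copy logs between nodes
[root@node1 ~]# iptables -A INPUT -p tcp --dport 3306 -j ACCEPT # MySQL replication and health checks
[root@node1 ~]# service iptables save # then repeat on node2-node4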

1. Configure master-slave replication:

1) Primary node:

[root@node2 ~]# vim /etc/my.cnf

innodb_file_per_table=1 # use a separate tablespace file per table
skip_name_resolve # do not resolve client host names
log-bin=master-bin
relay-log=relay-bin
server-id=1

[root@node2 ~]# service mysqld restart

View the current binary log position:

mysql> show master status;
+-------------------+----------+--------------+------------------+
| File              | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+-------------------+----------+--------------+------------------+
| master-bin.000001 |      106 |              |                  |
+-------------------+----------+--------------+------------------+
1 row in set (0.00 sec)

Create the replication user:

mysql> grant replication slave,replication client on *.* to 'slave'@'192.168.154.%' identified by 'slave';
Query OK, 0 rows affected (0.06 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

2) Slave nodes:

[root@node3 ~]# vim /etc/my.cnf

innodb_file_per_table=1
skip_name_resolve
log-bin=slave-bin
relay-log=relay-bin
server_id=2
read_only=1
relay_log_purge=0

[root@node3 ~]# service mysqld restart

[root@node4 ~]# vim /etc/my.cnf

innodb_file_per_table=1
skip_name_resolve
log-bin=slave-bin
relay-log=relay-bin
server_id=3
read_only=1 # make the slave read-only
relay_log_purge=0 # keep relay logs; MHA uses them to recover the other slaves

[root@node4 ~]# service mysqld restart

Set up replication on both slaves:

mysql> change master to master_host='192.168.154.156',master_user='slave',master_password='slave',master_log_file='master-bin.000001',master_log_pos=106;
Query OK, 0 rows affected (0.03 sec)

mysql> start slave;
Query OK, 0 rows affected (0.01 sec)

mysql> show slave status\G
*************************** 1. row ***************************

Slave_IO_State: Waiting for master to send event

Master_Host: 192.168.154.156

Master_User: slave

Master_Port: 3306

Connect_Retry: 60

Master_Log_File: master-bin.000001

Read_Master_Log_Pos: 354

Relay_Log_File: relay-bin.000002

Relay_Log_Pos: 500

Relay_Master_Log_File: master-bin.000001

Slave_IO_Running: Yes

Slave_SQL_Running: Yes

Replicate_Do_DB:

Replicate_Ignore_DB:

Replicate_Do_Table:

Replicate_Ignore_Table:

Replicate_Wild_Do_Table:

Replicate_Wild_Ignore_Table:

Last_Errno: 0

Last_Error:

Skip_Counter: 0

Exec_Master_Log_Pos: 354

Relay_Log_Space: 649

Until_Condition: None

Until_Log_File:

Until_Log_Pos: 0

Master_SSL_Allowed: No

Master_SSL_CA_File:

Master_SSL_CA_Path:

Master_SSL_Cert:

Master_SSL_Cipher:

Master_SSL_Key:

Seconds_Behind_Master: 0

Master_SSL_Verify_Server_Cert: No

Last_IO_Errno: 0

Last_IO_Error:

Last_SQL_Errno: 0

Last_SQL_Error:

1 row in set (0.00 sec)
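As a quick sanity check (not in the original text), create a throwaway database on the master and confirm it appears on both slaves; the name mha_test is hypothetical:

On node2 (master):

mysql> create database mha_test; # hypothetical test database

On node3 and node4:

mysql> show databases like 'mha_test'; # should return one row on each slave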

3) Create an account with administrative privileges on the master node (MHA Manager will use it to monitor and manage the databases):

mysql> grant all on *.* to 'zwj'@'192.168.154.%' identified by 'zwj';
Query OK, 0 rows affected (0.00 sec)

2. Configure SSH key-based login between the cluster nodes:

On node1:

[root@node1 ~]# ssh-keygen -t rsa
[root@node1 ~]# ssh-copy-id -i ./.ssh/id_rsa.pub root@192.168.154.156
[root@node1 ~]# ssh-copy-id -i ./.ssh/id_rsa.pub root@192.168.154.130
[root@node1 ~]# ssh-copy-id -i ./.ssh/id_rsa.pub root@192.168.154.154
[root@node1 ~]# ssh 192.168.154.154 'ifconfig' # verify passwordless login

eth0      Link encap:Ethernet  HWaddr 00:0C:29:67:65:ED
          inet addr:192.168.154.154  Bcast:192.168.154.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe67:65ed/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:26253 errors:0 dropped:0 overruns:0 frame:0
          TX packets:42416 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:23453164 (22.3 MiB)  TX bytes:2514457 (2.3 MiB)
          Interrupt:19 Base address:0x2024

On node2:

[root@node2 ~]# ssh-keygen -t rsa
[root@node2 ~]# ssh-copy-id -i ./.ssh/id_rsa.pub root@192.168.154.128
[root@node2 ~]# ssh-copy-id -i ./.ssh/id_rsa.pub root@192.168.154.130
[root@node2 ~]# ssh-copy-id -i ./.ssh/id_rsa.pub root@192.168.154.154

On node3:

[root@node3 ~]# ssh-keygen -t rsa
[root@node3 ~]# ssh-copy-id -i ./.ssh/id_rsa.pub root@192.168.154.128
[root@node3 ~]# ssh-copy-id -i ./.ssh/id_rsa.pub root@192.168.154.156
[root@node3 ~]# ssh-copy-id -i ./.ssh/id_rsa.pub root@192.168.154.154

On node4:

[root@node4 ~]# ssh-keygen -t rsa
[root@node4 ~]# ssh-copy-id -i ./.ssh/id_rsa.pub root@192.168.154.128
[root@node4 ~]# ssh-copy-id -i ./.ssh/id_rsa.pub root@192.168.154.156
[root@node4 ~]# ssh-copy-id -i ./.ssh/id_rsa.pub root@192.168.154.130

3. Install MHA Manager on node1:

[root@node1 ~]# yum install perl-DBD-MySQL -y
[root@node1 ~]# tar -zxf mha4mysql-node-0.56.tar.gz
[root@node1 ~]# cd mha4mysql-node-0.56
[root@node1 mha4mysql-node-0.56]# perl Makefile.PL
[root@node1 mha4mysql-node-0.56]# make
[root@node1 mha4mysql-node-0.56]# make install
[root@node1 ~]# yum install perl-DBD-MySQL perl-Config-Tiny perl-Log-Dispatch perl-Parallel-ForkManager perl-Time-HiRes -y # install the perl modules MHA Manager depends on
[root@node1 ~]# tar -zxf mha4mysql-manager-0.56.tar.gz
[root@node1 ~]# cd mha4mysql-manager-0.56
[root@node1 mha4mysql-manager-0.56]# perl Makefile.PL
[root@node1 mha4mysql-manager-0.56]# make
[root@node1 mha4mysql-manager-0.56]# make install

4. Install MHA Node (on all MySQL servers; node2 is shown, repeat on node3 and node4):

[root@node2 ~]# yum install perl-DBD-MySQL -y
[root@node2 ~]# tar -zxf mha4mysql-node-0.56.tar.gz
[root@node2 ~]# cd mha4mysql-node-0.56/
[root@node2 mha4mysql-node-0.56]# perl Makefile.PL
[root@node2 mha4mysql-node-0.56]# make
[root@node2 mha4mysql-node-0.56]# make install

5. Create a working directory and configure MHA:

[root@node1 ~]# mkdir -pv /etc/masterha
[root@node1 ~]# vim /etc/masterha/appl.cnf

[server default]
user=zwj
password=zwj
manager_workdir=/etc/masterha/appl
manager_log=/etc/masterha/appl/manager.log
remote_workdir=/etc/masterha/appl
ssh_user=root
repl_user=slave
repl_password=slave
ping_interval=1 # probe the master every second

[server1]
hostname=192.168.154.156

[server2]
hostname=192.168.154.130
candidate_master=1 # prefer this node when electing the new master

[server3]
hostname=192.168.154.154

6. Check the SSH connection status:

[root@node1 ~]# masterha_check_ssh --conf=/etc/masterha/appl.cnf

Wed May 10 00:12:58 2017 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Wed May 10 00:12:58 2017 - [info] Reading application default configuration from /etc/masterha/appl.cnf..
Wed May 10 00:12:58 2017 - [info] Reading server configuration from /etc/masterha/appl.cnf..
Wed May 10 00:12:58 2017 - [info] Starting SSH connection tests..
Wed May 10 00:13:15 2017 - [debug]
Wed May 10 00:12:59 2017 - [debug] Connecting via SSH from root@192.168.154.154(192.168.154.154) to root@192.168.154.156(192.168.154.156)..
Wed May 10 00:13:05 2017 - [debug] ok.
Wed May 10 00:13:05 2017 - [debug] Connecting via SSH from root@192.168.154.154(192.168.154.154) to root@192.168.154.130(192.168.154.130)..
Wed May 10 00:13:15 2017 - [debug] ok.
Wed May 10 00:13:20 2017 - [debug]
Wed May 10 00:12:58 2017 - [debug] Connecting via SSH from root@192.168.154.130(192.168.154.130) to root@192.168.154.156(192.168.154.156)..
Wed May 10 00:13:11 2017 - [debug] ok.
Wed May 10 00:13:11 2017 - [debug] Connecting via SSH from root@192.168.154.130(192.168.154.130) to root@192.168.154.154(192.168.154.154)..
Wed May 10 00:13:20 2017 - [debug] ok.
Wed May 10 00:13:35 2017 - [debug]
Wed May 10 00:12:58 2017 - [debug] Connecting via SSH from root@192.168.154.156(192.168.154.156) to root@192.168.154.130(192.168.154.130)..
Wed May 10 00:13:15 2017 - [debug] ok.
Wed May 10 00:13:15 2017 - [debug] Connecting via SSH from root@192.168.154.156(192.168.154.156) to root@192.168.154.154(192.168.154.154)..
Wed May 10 00:13:35 2017 - [debug] ok.
Wed May 10 00:13:35 2017 - [info] All SSH connection tests passed successfully.

7. Check the entire replication environment:

[root@node1 ~]# masterha_check_repl --conf=/etc/masterha/appl.cnf

...

192.168.154.156(192.168.154.156:3306) (current master)
 +--192.168.154.130(192.168.154.130:3306)
 +--192.168.154.154(192.168.154.154:3306)

Wed May 10 00:33:36 2017 - [info] Checking replication health on 192.168.154.130..
Wed May 10 00:33:36 2017 - [info] ok.
Wed May 10 00:33:36 2017 - [info] Checking replication health on 192.168.154.154..
Wed May 10 00:33:36 2017 - [info] ok.
Wed May 10 00:33:36 2017 - [warning] master_ip_failover_script is not defined.
Wed May 10 00:33:36 2017 - [warning] shutdown_script is not defined.
Wed May 10 00:33:36 2017 - [info] Got exit code 0 (Not master dead).

MySQL Replication Health is OK.
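The two warnings mean no VIP-failover or fencing hook is configured; this demo works without them, but in production you would normally point these [server default] parameters at your own scripts (the paths below are hypothetical placeholders):

master_ip_failover_script=/usr/local/bin/master_ip_failover # moves the write VIP to the new master
shutdown_script=/usr/local/bin/power_manager # powers off the dead master to prevent split-brain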

8. Enable MHA Manager monitoring:

[root@node1 ~]# nohup masterha_manager --conf=/etc/masterha/appl.cnf > /etc/masterha/appl/manager.log 2>&1 &
[1] 8300

View MHA Manager status:

[root@node1 ~]# masterha_check_status --conf=/etc/masterha/appl.cnf
appl (pid:8300) is running(0:PING_OK), master:192.168.154.156

Stop MHA Manager monitoring:

[root@node1 ~]# masterha_stop --conf=/etc/masterha/appl.cnf
Stopped appl successfully.
[1]+ Exit 1 nohup masterha_manager --conf=/etc/masterha/appl.cnf > /etc/masterha/appl/manager.log 2>&1

9. Simulate a crash of the master database (start MHA Manager again first, as in step 8):

[root@node2 ~]# service mysqld stop
Stopping mysqld: [ OK ]
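To watch the failover as it happens (not shown in the original), tail the manager log configured in appl.cnf:

[root@node1 ~]# tail -f /etc/masterha/appl/manager.log # shows the failover phases and the final report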

Looking at the remaining slave (node4), you can see that its master has changed to node3 (192.168.154.130):

...

mysql> show slave status\G
*************************** 1. row ***************************

Slave_IO_State: Waiting for master to send event

Master_Host: 192.168.154.130

Master_User: slave

Master_Port: 3306

Connect_Retry: 60

Master_Log_File: slave-bin.000003

Read_Master_Log_Pos: 106

Relay_Log_File: relay-bin.000002

Relay_Log_Pos: 251

Relay_Master_Log_File: slave-bin.000003

Slave_IO_Running: Yes

Slave_SQL_Running: Yes
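After a failover, masterha_manager exits on its own. To bring the crashed node2 back as a slave of the new master, a hedged sketch of the usual manual steps (the log file and position must be read from the new master; the values below just mirror the output above):

[root@node2 ~]# service mysqld start
mysql> change master to master_host='192.168.154.130',master_user='slave',master_password='slave',master_log_file='slave-bin.000003',master_log_pos=106; # illustrative coordinates
mysql> start slave;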
