Deploying a MySQL 5.6 Cluster with MySQL-MMM


-

Deploy the cluster infrastructure environment

MySQL-MMM architecture deployment

MySQL-MMM architecture usage

1. Deploy the cluster infrastructure environment

192.168.4.10 master1

192.168.4.11 master2

192.168.4.12 slave1

192.168.4.13 slave2

192.168.4.100 monitor

Five MySQL 5.6 servers are used: 192.168.4.10 and 192.168.4.11 form a dual-master pair, while 192.168.4.12 and 192.168.4.13 are slaves of the masters.

192.168.4.100 serves as the management and monitoring server in the MySQL-MMM architecture.

Step 1: configure hosts local resolution

1. Configure the local hosts resolution records

[root@master1 ~]# cat /etc/hosts

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4

::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.4.10 master1 master1.tarena.com

192.168.4.11 master2 master2.tarena.com

192.168.4.12 slave1 slave1.tarena.com

192.168.4.13 slave2 slave2.tarena.com

192.168.4.100 monitor monitor.tarena.com

2. Test that hosts resolution works

[root@master1 ~]# ping -c 3 master1
PING master1 (192.168.4.10) 56(84) bytes of data.
64 bytes from master1 (192.168.4.10): icmp_seq=1 ttl=64 time=0.019 ms
64 bytes from master1 (192.168.4.10): icmp_seq=2 ttl=64 time=0.019 ms
64 bytes from master1 (192.168.4.10): icmp_seq=3 ttl=64 time=0.022 ms

--- master1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.019/0.020/0.022/0.001 ms
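The same hosts entries must be present on all five machines. A minimal sketch for pushing the file out from master1, assuming root SSH access to the other nodes (adjust to your environment):

[root@master1 ~]# for h in 192.168.4.11 192.168.4.12 192.168.4.13 192.168.4.100; do scp /etc/hosts root@$h:/etc/hosts; done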

Step 2: deploy the database hosts

MySQL 5.6 installation is covered in a separate post: http://xmomo.blog.51cto.com/5994484/1939747

Step 3: deploy the dual-master and multi-slave structure

1) Database authorization (run the following on all 4 database hosts: master1, master2, slave1, slave2)

Plain master-slave replication only needs a single replication user, but since we are also deploying the MySQL-MMM architecture, we additionally create the users MMM requires, plus a test user for verifying the architecture once it is built.

Master1 settings:

[root@master1 ~]# mysql -uroot -ppwd123

To simplify testing, these grants allow connections from any address ("%"); be more restrictive in a production environment.

mysql> grant replication slave on *.* to slaveuser@"%" identified by "pwd123";      // replication user for master-slave sync
Query OK, 0 rows affected (0.00 sec)

mysql> grant replication client on *.* to monitor@"%" identified by "monitor";      // user required by the MMM monitor
Query OK, 0 rows affected (0.06 sec)

mysql> grant replication client,process,super on *.* to agent@"%" identified by "agent";      // user required by the MMM agent
Query OK, 0 rows affected (0.00 sec)

mysql> grant all on *.* to root@"%" identified by "pwd123";      // test user
Query OK, 0 rows affected (0.00 sec)

mysql>
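If you want to tighten the grants for production, the same statements can be restricted to the cluster subnet instead of "%". A hedged example (192.168.4.% is this lab's subnet; adjust as needed):

mysql> grant replication slave on *.* to slaveuser@"192.168.4.%" identified by "pwd123";
mysql> show grants for slaveuser@"192.168.4.%";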

2) Enable the binary log (bin-log) and set server_id on the masters (master1, master2)

Master1 settings:

[root@master1 ~]# vi /etc/my.cnf

[mysqld]
server_id=100      // server_id is the server's unique ID and must be an integer between 1 and 2^32-1
log-bin            // the master databases must have the binary log enabled

Master2 settings:

[root@master2 ~]# vi /etc/my.cnf

[mysqld]
server_id=101
log-bin

Restart the MySQL service on both masters.
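A quick way to apply and verify the change, assuming the same service name used later in this article (service mysql):

[root@master1 ~]# service mysql restart
[root@master1 ~]# mysql -uroot -ppwd123 -e "show variables like 'log_bin'; show variables like 'server_id';"      # log_bin should be ON and server_id should be 100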

3) Set server_id on the slaves (slave1, slave2)

Slave1 settings:

[root@slave1 ~]# vi /etc/my.cnf

[mysqld]
log-bin            // required on masters; optional on slaves (see the note below)
server-id=102

Note: in a cluster, all server ID numbers must be unique.

The binary log must be enabled on the MySQL masters; it does not have to be enabled on the slaves.

However, enabling the binary log on a slave is still useful for backup and recovery, and in more complex topologies a MySQL slave can in turn act as the master of other slaves.
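For that relay case, a hedged my.cnf sketch (hypothetical values; only needed when a slave must feed slaves of its own):

[mysqld]
server-id=102
log-bin
log-slave-updates      # write changes received from the master into this server's own binary log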

Slave2 settings:

[root@slave2 ~]# vi /etc/my.cnf

[mysqld]
log-bin
server-id=103

Restart the MySQL service on both slaves.

4) Configure the master-slave relationship

Configure master2, slave1, and slave2 as slaves of master1.

Check the current binary log file and position on master1:

[root@master1 ~]# mysql -uroot -ppwd123
....
mysql> flush tables with read lock;

mysql> show master status\G
*************************** 1. row ***************************
             File: master1-bin.000001
         Position: 120
     Binlog_Do_DB:
 Binlog_Ignore_DB:
Executed_Gtid_Set:
1 row in set (0.00 sec)

mysql>
mysql> unlock tables;

Note: after recording the position, do not perform any further write operations on the master, otherwise its binary log position will change.

To keep other sessions from modifying the master while you read the position, the tables can be locked read-only: flush tables with read lock; places a read lock on every table in every database. While the lock is held, all writes are rejected but reads continue, so the data cannot change while you look at the binary log information. Finally, unlock tables; releases the global lock.
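If the masters already held data, this would also be the moment to take a consistent copy for initializing the slaves while the lock is held; a hedged sketch (not needed in this walkthrough, where the databases are still empty):

[root@master1 ~]# mysqldump -uroot -ppwd123 --all-databases --master-data=2 > /root/full.sql      # --master-data=2 records the binary log position as a comment in the dump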

Set master2 as a slave of master1:

[root@master2 ~]# mysql -uroot -ppwd123
....
mysql> change master to master_host='192.168.4.10',master_user='slaveuser',master_password='pwd123',master_log_file='master1-bin.000001',master_log_pos=120;

The key step in configuring replication is pointing the slave at the master: the slave needs all the information required to establish a network connection to the master, and this is supplied with the CHANGE MASTER TO statement:

MASTER_HOST - hostname or IP address of the master server
MASTER_USER - account created on the master with replication privileges
MASTER_PASSWORD - password of that account
MASTER_LOG_FILE - name of the master's binary log file
MASTER_LOG_POS - current position within that binary log

mysql> start slave;      // start the replication threads

Note: if the server was previously configured as a slave, start slave may fail with error 1872; run reset slave; and then issue the change master to statement again.

mysql> show slave status\G      // check whether replication is running
....
After starting replication, check that both the IO thread and the SQL thread report Yes; if they do, replication is working.

             Slave_IO_Running: Yes      // IO thread is running
            Slave_SQL_Running: Yes      // SQL thread is running

....
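The same check can be scripted instead of run interactively; a small sketch using the test account created earlier:

[root@master2 ~]# mysql -uroot -ppwd123 -e "show slave status\G" | egrep "Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master"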

Set slave1 as a slave of master1:

[root@slave1 ~]# mysql -uroot -ppwd123

mysql> change master to master_host='192.168.4.10',master_user='slaveuser',master_password='pwd123',master_log_file='master1-bin.000001',master_log_pos=120;
mysql> start slave;
mysql> show slave status\G
....
             Slave_IO_Running: Yes      // IO thread is running
            Slave_SQL_Running: Yes      // SQL thread is running
....

mysql>

Set slave2 as a slave of master1:

[root@slave2 ~]# mysql -uroot -ppwd123
....
mysql> change master to master_host='192.168.4.10',master_user='slaveuser',master_password='pwd123',master_log_file='master1-bin.000001',master_log_pos=120;
mysql> start slave;
mysql> show slave status\G
....
             Slave_IO_Running: Yes      // IO thread is running
            Slave_SQL_Running: Yes      // SQL thread is running
....

-

5) Complete the dual-master relationship: configure master1 as a slave of master2

Check the current binary log file and position on master2:

[root@master2 ~]# mysql -uroot -ppwd123
....
mysql> show master status\G
*************************** 1. row ***************************
             File: master2-bin.000001
         Position: 120
     Binlog_Do_DB:
 Binlog_Ignore_DB:
Executed_Gtid_Set:
1 row in set (0.00 sec)

Set master1 to become the slave of master2:

[root@master1 ~]# mysql -uroot -ppwd123
....
mysql> change master to master_host='192.168.4.11',master_user='slaveuser',master_password='pwd123',master_log_file='master2-bin.000001',master_log_pos=120;
mysql> start slave;
Query OK, 0 rows affected (0.27 sec)

mysql> show slave status\G
....
             Slave_IO_Running: Yes      // IO thread is running
            Slave_SQL_Running: Yes      // SQL thread is running
....

6) Test whether the master-slave architecture works

Update data on master1 and check whether the other hosts stay in sync:

[root@master1 ~]# mysql -uroot -ppwd123
....
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
+--------------------+
4 rows in set (0.00 sec)

mysql> create database tarena;
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| tarena             |
| test               |
+--------------------+
5 rows in set (0.00 sec)

mysql>

Master2 host view:

[root@master2 ~]# mysql -uroot -ppwd123 -e "show databases"
Warning: Using a password on the command line interface can be insecure.
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| tarena             |
| test               |
+--------------------+

Slave1 host view:

[root@slave1 ~]# mysql -uroot -ppwd123 -e "show databases"
Warning: Using a password on the command line interface can be insecure.
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| tarena             |
| test               |
+--------------------+

Slave2 host view:

[root@slave2 ~]# mysql -uroot -ppwd123 -e "show databases"
Warning: Using a password on the command line interface can be insecure.
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| tarena             |
| test               |
+--------------------+

=

2. MySQL-MMM architecture deployment

Five CentOS servers are used: 192.168.4.10 and 192.168.4.11 are the MySQL dual masters, and 192.168.4.12 and 192.168.4.13 are slaves of the masters.

192.168.4.100 is the management and monitoring server of the MySQL-MMM architecture; it monitors the working state of the MySQL masters and slaves and decides when a failed node is removed and when it is restored.

Step 1: install mysql-MMM

Install the EPEL yum repository on all 5 servers in the cluster (master1, master2, slave1, slave2, monitor):

rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

1) install monitor program

On the management server (monitor), execute the following command:

[root@monitor ~]# yum -y install mysql-mmm-monitor* perl-Time-HiRes*

2) install the agent

On the 4 database servers in the cluster (master1, master2, slave1, slave2):

# yum -y install mysql-mmm-agent*

....

Step 2: modify the configuration files

1) Modify the common configuration file

This file must be identical on all five servers in the cluster (master1, master2, slave1, slave2, monitor); you can edit it on one host and then copy it to the others with scp (a sketch follows the file).

[root@master1 ~]# vim /etc/mysql-mmm/mmm_common.conf

active_master_role      writer

<host default>
    cluster_interface       eth0
    pid_path                /var/run/mmm_agentd.pid
    bin_path                /usr/lib/mysql-mmm/
    replication_user        slaveuser      // user for master-slave replication
    replication_password    pwd123         // password of the replication user
    agent_user              agent          // database user used by mmm-agent
    agent_password          agent          // password of the agent user
</host>

<host master1>                             // first master server
    ip      192.168.4.10                   // master1 IP address
    mode    master
    peer    master2                        // the other master server
</host>

<host master2>                             // second master server
    ip      192.168.4.11
    mode    master
    peer    master1
</host>

<host slave1>                              // first slave server
    ip      192.168.4.12                   // slave1 IP address
    mode    slave                          // this section configures a slave server
</host>

<host slave2>                              // second slave server
    ip      192.168.4.13
    mode    slave
</host>

<role writer>                              // working mode of the write role
    hosts   master1, master2               // masters that can provide write access
    ips     192.168.4.200                  // writer VIP address
    mode    exclusive                      // exclusive mode: only one host holds the writer VIP
</role>

<role reader>                              // working mode of the read role
    hosts   slave1, slave2                 // servers that provide read access
    ips     192.168.4.201, 192.168.4.202   // multiple virtual IPs
    mode    balanced                       // balanced mode: reader VIPs are spread across the hosts
</role>
....
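A minimal sketch for distributing the finished file to the other nodes, assuming root SSH access and that the hosts entries from step 1 are in place:

[root@master1 ~]# for h in master2 slave1 slave2 monitor; do scp /etc/mysql-mmm/mmm_common.conf root@$h:/etc/mysql-mmm/; done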

2) Modify the management host configuration file (on the monitor host)

[root@monitor ~]# vim /etc/mysql-mmm/mmm_mon.conf

include mmm_common.conf

<monitor>
    ip              192.168.4.100                                            // IP address of the management host
    pid_path        /var/run/mmm_mond.pid
    bin_path        /usr/lib/mysql-mmm/
    status_path     /var/lib/misc/mmm_mond.status
    ping_ips        192.168.4.10, 192.168.4.11, 192.168.4.12, 192.168.4.13   // database hosts to monitor
</monitor>

<host default>
    monitor_user        monitor        // MySQL user used by the monitor
    monitor_password    monitor        // password of the monitor user
</host>

debug 0
....

[root@monitor ~]#

3) Modify the agent configuration file on each database host

Master1 configuration:

[root@master1 ~]# vi /etc/mysql-mmm/mmm_agent.conf

include mmm_common.conf
this master1

Master2 configuration:

[root@master2 ~]# vi /etc/mysql-mmm/mmm_agent.conf

include mmm_common.conf
this master2

Slave1 configuration:

[root@slave1 ~]# vi /etc/mysql-mmm/mmm_agent.conf

include mmm_common.conf
this slave1

Slave2 configuration:

[root@slave2 ~]# vi /etc/mysql-mmm/mmm_agent.conf

include mmm_common.conf
this slave2

=

3. MySQL-MMM architecture usage

Start the MMM cluster architecture

Set the server in the cluster to online status

After the MySQL-MMM architecture has been deployed it must be started: the database hosts run the mmm-agent process and the management host runs the mmm-monitor process. Once everything is started, set all database hosts to online.

Step 1: start the MMM cluster architecture

1) start the mmm-agent process

Master1 operation:

[root@master1 ~]# /etc/init.d/mysql-mmm-agent start

Master2 operation:

[root@master2 ~]# /etc/init.d/mysql-mmm-agent start

Slave1 operation:

[root@slave1 ~]# /etc/init.d/mysql-mmm-agent start

Slave2 operation:

[root@slave2 ~]# /etc/init.d/mysql-mmm-agent start

2) start the mmm-monitor process

Monitor host operation:

[root@monitor ~]# /etc/init.d/mysql-mmm-monitor start
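To have the processes come back after a reboot, the init scripts can also be registered with chkconfig; an optional extra step, assuming the packaged init scripts support it:

[root@master1 ~]# chkconfig mysql-mmm-agent on      # run on each database host
[root@monitor ~]# chkconfig mysql-mmm-monitor on    # run on the monitor host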

Step 2: set the servers in the cluster to online status

Control commands can only be executed on the management side, on the monitor server.

View the status of each server in the current cluster:

[root@monitor ~]# mmm_control show
  master1(192.168.4.10) master/AWAITING_RECOVERY. Roles:
  master2(192.168.4.11) master/AWAITING_RECOVERY. Roles:
  slave1(192.168.4.12) slave/AWAITING_RECOVERY. Roles:
  slave2(192.168.4.13) slave/AWAITING_RECOVERY. Roles:

Set the status of 4 database hosts to online:

[root@monitor ~] # mmm_control set_online master1

OK: State of 'master1' changed to ONLINE. Now you can wait some time and check its new roles!

[root@monitor ~] # mmm_control set_online master2

OK: State of 'master2' changed to ONLINE. Now you can wait some time and check its new roles!

[root@monitor ~] # mmm_control set_online slave1

OK: State of 'slave1' changed to ONLINE. Now you can wait some time and check its new roles!

[root@monitor ~] # mmm_control set_online slave2

OK: State of 'slave2' changed to ONLINE. Now you can wait some time and check its new roles!

Check the status of each server in the current cluster again:

[root@monitor ~]# mmm_control show
  master1(192.168.4.10) master/ONLINE. Roles: writer(192.168.4.200)
  master2(192.168.4.11) master/ONLINE. Roles:
  slave1(192.168.4.12) slave/ONLINE. Roles: reader(192.168.4.201)
  slave2(192.168.4.13) slave/ONLINE. Roles: reader(192.168.4.202)
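Besides show, mmm_control can also run the monitor's health checks against every host, which helps when a node refuses to come online:

[root@monitor ~]# mmm_control checks all      # runs the ping, mysql, rep_threads and rep_backlog checks for each host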

Step 3: test the mysql-MMM architecture

2) mysql-MMM virtual IP access test

[root@client ~]# mysql -h192.168.4.200 -uroot -ppwd123 -e "show databases"
Warning: Using a password on the command line interface can be insecure.
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| tarena             |
| test               |
+--------------------+
[root@client ~]#

[root@client ~]# mysql -h192.168.4.201 -uroot -ppwd123 -e "show databases"
Warning: Using a password on the command line interface can be insecure.
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| tarena             |
| test               |
+--------------------+
[root@client ~]#

[root@client ~]# mysql -h192.168.4.202 -uroot -ppwd123 -e "show databases"
Warning: Using a password on the command line interface can be insecure.
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| tarena             |
| test               |
+--------------------+
[root@client ~]#

3) Master database downtime (failover) test

[root@master1 ~]# service mysql stop      // stop the MySQL service on master1
Shutting down MySQL.... [OK]

[root@monitor ~]# mmm_control show      // view the status of the servers in the cluster

From the output you can see that the writer virtual IP has switched from master1 to master2:

  master1(192.168.4.10) master/HARD_OFFLINE. Roles:
  master2(192.168.4.11) master/ONLINE. Roles: writer(192.168.4.200)
  slave1(192.168.4.12) slave/ONLINE. Roles: reader(192.168.4.201)
  slave2(192.168.4.13) slave/ONLINE. Roles: reader(192.168.4.202)

[root@client ~]# mysql -h192.168.4.200 -uroot -ppwd123 -e "show databases"      // access the writer virtual IP again
Warning: Using a password on the command line interface can be insecure.
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| tarena             |
| test               |
+--------------------+
[root@client ~]#
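To bring master1 back after the test, a hedged recovery sketch: once MySQL is reachable again the monitor moves the host from HARD_OFFLINE to AWAITING_RECOVERY, after which it can be set online manually.

[root@master1 ~]# service mysql start
[root@monitor ~]# mmm_control set_online master1
[root@monitor ~]# mmm_control show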

The MySQL-MMM cluster architecture is complete.

Problems encountered in the process:

When I first checked the cluster status with mmm_control show, master1 was AWAITING_RECOVERY while the other three hosts were HARD_OFFLINE.

The cause was that the monitor user I had created had not yet been replicated to the other hosts; recreating it on the primary master (master1) fixed it.

To list all users in the MySQL database:

mysql> SELECT DISTINCT CONCAT('User: ''',user,'''@''',host,''';') AS query FROM mysql.user;

Another problem: my environment was cloned from a virtual machine, so when configuring master and slave I hit a server UUID conflict.

Stop the database, back up and then delete the auto.cnf file in the data directory, and restart MySQL; a new UUID is generated on startup.
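A hedged sketch of that fix (the path assumes the default /var/lib/mysql data directory; adjust to your installation):

[root@slave1 ~]# service mysql stop
[root@slave1 ~]# mv /var/lib/mysql/auto.cnf /var/lib/mysql/auto.cnf.bak      # MySQL regenerates auto.cnf with a new server UUID on start
[root@slave1 ~]# service mysql start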

Also remember to enable log-slave-updates on the primary master (master1).
