Shulou (Shulou.com) 05/31 Report (updated 2025-04-04)
This article shows how to test MySQL Sandbox and MHA. The content is concise and easy to follow, and I hope the detailed walkthrough below gives you something useful.
After writing a script yesterday to build a one-master, multi-slave setup, my colleague Qilong suggested I take a look at MySQL Sandbox, which can build a master-slave environment in seconds. I gave it a try, and it really is simple and powerful.
Deployment is actually very simple: if you have network access, a single cpan command does it. Alternatively, you can download the package with wget and install from source.
Install sandbox
Using cpan to install is very simple, with the following command:
cpan MySQL::Sandbox
The log output will indicate that the installation succeeded, and several make_sandbox-related commands will appear under /usr/local/bin:
[root@grtest bin]# ll make*
-r-xr-xr-x 1 root root 8681 Apr 12 16:16 make_multiple_custom_sandbox
-r-xr-xr-x 1 root root 13862 Apr 12 16:16 make_multiple_sandbox
-r-xr-xr-x 1 root root 22260 Apr 12 16:16 make_replication_sandbox
-r-xr-xr-x 1 root root 11454 Apr 12 16:16 make_sandbox
-r-xr-xr-x 1 root root 4970 Apr 12 16:16 make_sandbox_from_installed
-r-xr-xr-x 1 root root 7643 Apr 12 16:16 make_sandbox_from_source
-r-xr-xr-x 1 root root 5772 Apr 12 16:16 make_sandbox_from_url
The other option is to install from the package, by compiling and installing. You can download it with wget:
# wget https://launchpad.net/mysql-sandbox/mysql-sandbox-3/mysql-sandbox-3/+download/MySQL-Sandbox-3.0.25.tar.gz
and then install it with make and make install.
For example, to deploy a MySQL database environment, we just hand a binary installation package to make_sandbox directly.
There is one thing to note about the command `# make_sandbox mysql-5.7.17-linux-glibc2.5-x86_64.tar.gz`: for safety reasons, running it as root is treated as sensitive, and the warning below is thrown. The point is to make you confirm that you really mean to do this; in a production environment the risk of such an operation is high, so it explicitly requires you to set a variable before proceeding.
# make_sandbox percona-server-5.6.25-73.1.tar.gz
MySQL Sandbox should not run as root
If you know what you are doing and want to
run as root nonetheless, please set the environment
variable 'SANDBOX_AS_ROOT' to a nonzero value
Let's give this variable a value, such as go:
# export SANDBOX_AS_ROOT=go
With that set, a complete database environment is deployed automatically. It even generates the corresponding shortcut scripts, which is a rare convenience; if later tasks involve any batch management, they make things very fast. Here the database installation directory is msb_5_7_17, and the data files live in that directory.
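For illustration, the root guard just seen behaves roughly like the following minimal shell sketch. This is not the actual Perl source of make_sandbox, and the function name is made up; it only shows the check pattern: refuse to run as root unless SANDBOX_AS_ROOT is non-empty.

```shell
# Illustrative sketch of the root-guard pattern (not MySQL Sandbox's code):
# refuse to proceed when running as root unless SANDBOX_AS_ROOT is set.
check_sandbox_user() {
  if [ "$(id -u)" -eq 0 ] && [ -z "${SANDBOX_AS_ROOT:-}" ]; then
    echo "MySQL Sandbox should not run as root"
    return 1
  fi
  echo "ok to proceed"
}

# With the variable set, the guard passes whether or not we are root.
SANDBOX_AS_ROOT=go check_sandbox_user   # prints "ok to proceed"
```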
[root@grtest sandboxes]# ll
total 48
-rwxr-xr-x 1 root root   54 Apr 12 16:35 clear_all
drwxr-xr-x 4 root root 4096 Apr 12 16:35 msb_5_7_17
-rw-r--r-- 1 root root 3621 Apr 12 16:35 plugin.conf
-rwxr-xr-x 1 root root   56 Apr 12 16:35 restart_all
-rwxr-xr-x 1 root root 2145 Apr 12 16:35 sandbox_action
-rwxr-xr-x 1 root root   58 Apr 12 16:35 send_kill_all
-rwxr-xr-x 1 root root   54 Apr 12 16:35 start_all
-rwxr-xr-x 1 root root   55 Apr 12 16:35 status_all
-rwxr-xr-x 1 root root   53 Apr 12 16:35 stop_all
-rwxr-xr-x 1 root root 4514 Apr 12 16:35 test_replication
-rwxr-xr-x 1 root root   52 Apr 12 16:35 use_all
To connect to the database, a single use command is all that's needed:
./use
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 6
Server version: 5.7.17 MySQL Community Server (GPL)
Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql [localhost] {msandbox} ((none)) >
The other start and stop commands work the same way, and they are all very fast and convenient.
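Because every sandbox lives in its own directory with the same set of shortcut scripts, batch management is just a loop over the sandboxes directory. Here is a runnable sketch of the pattern; the two directories are simulated under /tmp (a hypothetical path) so it works without a sandbox installed, whereas in real use SANDBOX_HOME would be $HOME/sandboxes and you would invoke each sandbox's own script.

```shell
# Simulate a sandboxes directory with two entries (hypothetical paths);
# in real use, SANDBOX_HOME would be $HOME/sandboxes.
SANDBOX_HOME=/tmp/sandboxes_demo
mkdir -p "$SANDBOX_HOME/msb_5_7_17" "$SANDBOX_HOME/rsandbox_5_7_17"

# Loop over every sandbox directory; against real sandboxes you would
# run "$d/status" or "$d/stop" here instead of printing the name.
for d in "$SANDBOX_HOME"/*/; do
  basename "$d"
done
```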
Building a master-slave environment is just as simple, and the output log is minimal. For example, if I point it at an already-extracted binary directory named 5.7.17, a one-master, two-slave environment is created by default.
# export SANDBOX_AS_ROOT=go
# make_replication_sandbox 5.7.17
Installing and starting master
Installing slave 1
Installing slave 2
Starting slave 1
.. Sandbox server started
Starting slave 2
. Sandbox server started
Initializing slave 1
Initializing slave 2
Replication directory installed in $HOME/sandboxes/rsandbox_5_7_17
To check the status of the master and the slaves, use status_all:
# ./status_all
REPLICATION rsandbox_5_7_17
master on
port: 20192
node1 on
port: 20193
node2 on
port: 20194
MHA rapid testing
Of course, the work above can be done with sandbox, or with custom scripts; each approach has its advantages. Relatively speaking, a hand-written script is at least a little more transparent.
With a way to dynamically build one master and several slaves, one idea I had was to quickly simulate an MHA environment.
First, create a database user mha_test to act as the connection user in the configuration:
GRANT ALL PRIVILEGES ON *.* TO 'mha_test'@'%' IDENTIFIED BY 'mha_test';
Then specify a configuration file with the following contents:
# cat /home/mha/conf/app1.cnf
[server default]
manager_workdir=/home/mha/manager
manager_log=/home/mha/manager/app1/manager.log
port=24801
user=mha_test
password=mha_test
repl_user=rpl_user
repl_password=rpl_pass
[server1]
hostname=127.0.0.1
port=24801
candidate_master=1
[server2]
hostname=127.0.0.1
candidate_master=1
port=24802
[server3]
hostname=127.0.0.1
candidate_master=1
port=24803
Because everything runs on the same server, this setup makes it quick to simulate MHA's disaster-recovery switchover and fast recovery.
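Since the three [serverN] sections differ only by port number, a configuration like this is easy to generate rather than write by hand. A small sketch follows; it writes to /tmp/app1.cnf (a hypothetical path, so as not to touch the real config) and mirrors the values used in the article.

```shell
# Write the [server default] section (values mirror the article's config;
# the output path /tmp/app1.cnf is hypothetical).
conf=/tmp/app1.cnf
cat > "$conf" <<'EOF'
[server default]
manager_workdir=/home/mha/manager
manager_log=/home/mha/manager/app1/manager.log
user=mha_test
password=mha_test
repl_user=rpl_user
repl_password=rpl_pass
EOF

# Append one [serverN] section per instance port.
i=1
for port in 24801 24802 24803; do
  printf '\n[server%d]\nhostname=127.0.0.1\nport=%d\ncandidate_master=1\n' "$i" "$port" >> "$conf"
  i=$((i + 1))
done

grep -c '^\[server' "$conf"   # prints 4
```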
Use the following command to check SSH connectivity:
# masterha_check_ssh --conf=/home/mha/conf/app1.cnf
It essentially verifies the following SSH connections:
Wed Apr 12 18:35:52 2017-[debug] Connecting via SSH from root@127.0.0.1(127.0.0.1) to root@127.0.0.1(127.0.0.1)..
Wed Apr 12 18:35:52 2017-[debug] ok.
Wed Apr 12 18:35:52 2017-[debug] Connecting via SSH from root@127.0.0.1(127.0.0.1) to root@127.0.0.1(127.0.0.1)..
Wed Apr 12 18:35:52 2017-[debug] ok.
Wed Apr 12 18:35:52 2017-[info] All SSH connection tests passed successfully.
To check master-slave replication, you can use the following command:
# masterha_check_repl --conf=/home/mha/conf/app1.cnf
Part of the output log is shown below; the master-slave topology and the replication checks are clearly visible.
Wed Apr 12 18:35:29 2017-[info]
127.0.0.1(127.0.0.1:24801) (current master)
 +--127.0.0.1(127.0.0.1:24802)
 +--127.0.0.1(127.0.0.1:24803)
Wed Apr 12 18:35:29 2017-[info] Checking replication health on 127.0.0.1..
Wed Apr 12 18:35:29 2017-[info] ok.
Wed Apr 12 18:35:29 2017-[info] Checking replication health on 127.0.0.1..
Wed Apr 12 18:35:29 2017-[info] ok.
Wed Apr 12 18:35:29 2017-[warning] master_ip_failover_script is not defined.
Wed Apr 12 18:35:29 2017-[warning] shutdown_script is not defined.
Wed Apr 12 18:35:29 2017-[info] Got exit code 0 (Not master dead).
MySQL Replication Health is OK.
Then we start the MHA manager:
# nohup masterha_manager --conf=/home/mha/conf/app1.cnf > /tmp/mha_manager.log 2>&1 &
To check the current status of MHA, you can use the following command:
# masterha_check_status --conf=/home/mha/conf/app1.cnf
app1 (pid:11701) is running(0:PING_OK), master:127.0.0.1
Now let's break it: manually kill the mysqld_safe and mysqld processes on port 24801.
The log will then show that MHA is at work:
# tail -f /home/mha/manager/app1/manager.log
Wed Apr 12 22:54:53 2017-[info] Resetting slave info on the new master..
Wed Apr 12 22:54:53 2017-[info] 127.0.0.1: Resetting slave info succeeded.
Wed Apr 12 22:54:53 2017-[info] Master failover to 127.0.0.1(127.0.0.1:24802) completed successfully.
Wed Apr 12 22:54:53 2017-[info]
----- Failover Report -----
app1: MySQL Master failover 127.0.0.1(127.0.0.1:24801) to 127.0.0.1(127.0.0.1:24802) succeeded
Master 127.0.0.1(127.0.0.1:24801) is down!
Check MHA Manager logs at grtest:/home/mha/manager/app1/manager.log for details.
Started automated(non-interactive) failover.
Selected 127.0.0.1(127.0.0.1:24802) as a new master.
127.0.0.1(127.0.0.1:24802): OK: Applying all logs succeeded.
127.0.0.1(127.0.0.1:24803): OK: Slave started, replicating from 127.0.0.1(127.0.0.1:24802)
127.0.0.1(127.0.0.1:24802): Resetting slave info succeeded.
Master failover to 127.0.0.1(127.0.0.1:24802) completed successfully.
In this way, the MySQL service on port 24802 is automatically promoted from slave to master, and the slave on port 24803 automatically starts replicating changes from the service on port 24802.
The whole process is methodical and is carried out in five phases.
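Those phases appear as marker lines in the manager log, so progress is easy to follow with grep. Here is a sketch run against sample lines written in MHA's usual log format; against a real run you would grep /home/mha/manager/app1/manager.log instead.

```shell
# Sample phase-marker lines in the format MHA typically writes to its
# manager log (saved to a hypothetical path for demonstration).
printf '%s\n' \
  '* Phase 1: Configuration Check Phase..' \
  '* Phase 2: Dead Master Shutdown Phase..' \
  '* Phase 3: Master Recovery Phase..' \
  '* Phase 4: Slaves Recovery Phase..' \
  '* Phase 5: New master cleanup phase..' > /tmp/manager_sample.log

# Count the top-level phases; against a real log you would run
# grep 'Phase' /home/mha/manager/app1/manager.log to follow progress.
grep -c '^\* Phase' /tmp/manager_sample.log   # prints 5
```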
That is how to test MySQL Sandbox and MHA. I hope you picked up something useful from it.