2025-02-24 Update From: SLTechnology News&Howtos
How do you take a distributed logical backup in MySQL? Many newcomers are unclear on this, so this article explains it in detail. I hope you get something out of it.
Abstract
The importance of regular backups is well established in the database lifecycle, and they come in different flavors: Percona XtraBackup, binary log backups, disk snapshots (LVM, EBS, etc.), and the classic logical backups, which can be taken with tools such as mysqldump, mydumper, or mysqlpump. Each has its own purpose, MTTR, retention policy, and so on.
Another fact is that once the datadir grows, backing up becomes a very slow task: the more data you store, the more data you have to read and back up. And not only does the data grow; the number of MySQL instances in the environment usually grows too. So why not take advantage of those extra MySQL instances to make logical backups faster?
Distributed backup (or: use all available replicas)
The idea is simple: instead of taking the entire backup from a single server, use all the available servers. This proof of concept focuses only on the replicas of a master/slave topology. The master could be used as well, but in this case I decided to leave it out to avoid adding backup overhead to it.
Test
In a master / slave topology:
(Graphic from the Orchestrator GUI)
I used a small datadir with about 64 GB of data (excluding index size) and 300 tables (schema "sb"):
+--------------+--------+--------+-----------+----------+-----------+-----------+
| TABLE_SCHEMA | ENGINE | TABLES | ROWS      | DATA (M) | INDEX (M) | TOTAL (M) |
+--------------+--------+--------+-----------+----------+-----------+-----------+
| meta         | InnoDB |      1 |         0 |     0.01 |      0.00 |      0.01 |
| percona      | InnoDB |      1 |         2 |     0.02 |      0.01 |      0.03 |
| sb           | InnoDB |    300 | 295924962 | 63906.82 |   4654.68 |  68561.51 |
| sys          | InnoDB |      1 |         6 |     0.01 |      0.00 |      0.01 |
+--------------+--------+--------+-----------+----------+-----------+-----------+
Using 3 replicas, a distributed logical backup using mysqldump took 6 minutes and 13 seconds:
[root@mysql1 ~]# ls -lh /data/backups/20200101/
total 56G
-rw-r--r--. 1 root root 19G Jan  1 14:37 mysql2.sql
-rw-r--r--. 1 root root 19G Jan  1 14:37 mysql3.sql
-rw-r--r--. 1 root root 19G Jan  1 14:37 mysql4.sql
[root@mysql1 ~]# stat /data/backups/20200101/mysql2.sql
  File: '/data/backups/20200101/mysql2.sql'
  Size: 19989576285     Blocks: 39042144   IO Block: 4096   regular file
Device: 10300h/66304d   Inode: 54096034    Links: 1
Access: (0644/-rw-r--r--)  Uid: (0/root)   Gid: (0/root)
Context: unconfined_u:object_r:unlabeled_t:s0
Access: 2020-01-01 14:31:34.948124516 +0000
Modify: 2020-01-01 14:37:41.297640837 +0000
Change: 2020-01-01 14:37:41.297640837 +0000
The same backup type on a single replica took 11 minutes and 59 seconds:
[root@mysql1 ~]# time mysqldump -hmysql2 --single-transaction --lock-for-backup sb > /data/backup.sql

real    11m58.816s
user    9m48.871s
sys     2m6.492s
[root@mysql1 ~]# ls -lh /data/backup.sql
-rw-r--r--. 1 root root 56G Jan  1 14:52 /data/backup.sql
To put it another way:
The distributed backup was 48% faster (719 seconds down to 373 seconds)!
This is a fairly small data set. It's worth a try. So how does it work?
Concept
The logic is simple and can be divided into multiple stages.
Phase 1: prepare
Find out the number of replicas available
Find out the number of tables in the schema to back up
Divide the number of tables by the number of available replicas. Each resulting chunk is the set of tables that one replica will back up.
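The chunking step above can be sketched in shell. This is a minimal sketch, not the actual dist_backup.sh logic: the table list is hard-coded for illustration, whereas the real script would gather it from information_schema and get the replica list from SHOW SLAVE HOSTS.

```shell
#!/bin/bash
# Divide a list of tables into equal chunks, one chunk per replica.
TABLES=(t1 t2 t3 t4 t5 t6 t7 t8 t9)   # illustrative; really from information_schema
REPLICAS=(mysql2 mysql3 mysql4)        # illustrative; really from SHOW SLAVE HOSTS

total=${#TABLES[@]}
nreplicas=${#REPLICAS[@]}
chunk=$(( (total + nreplicas - 1) / nreplicas ))   # ceiling division

for i in "${!REPLICAS[@]}"; do
  start=$(( i * chunk ))
  # Each replica gets the slice of tables starting at its chunk offset.
  echo "${REPLICAS[$i]}: ${TABLES[@]:$start:$chunk}"
done
```

With 300 tables and 3 replicas, as in the test above, this yields chunk offsets of 0, 100, and 200, matching the percona.metabackups entries in the script's log.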
Phase 2: ensure consistency
Prevent the master from advancing its binary log position. Typically this is done with FLUSH TABLES WITH READ LOCK, but this PoC uses LOCK BINLOG FOR BACKUP, a nice feature of Percona Server for MySQL that is far less disruptive.
Find the most up-to-date replica
Use START SLAVE UNTIL to bring every other replica up to the position of the most up-to-date one
Start a mysqldump against each replica with its corresponding chunk of tables, using --lock-for-backup (another Percona Server feature)
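Conceptually, the consistency phase boils down to a handful of statements. The following dry-run sketch only prints the statements the script would send; the sql function is a stand-in for a real `mysql -h"$host" -e` invocation, the hostnames match the test topology, and the binlog coordinates are the ones that appear in the script's log output.

```shell
#!/bin/bash
# Dry-run sketch of the consistency phase (prints statements instead of
# executing them). LOCK BINLOG FOR BACKUP and --lock-for-backup require
# Percona Server for MySQL.
MASTER=mysql1
REPLICAS=(mysql2 mysql3 mysql4)
FILE='mysql-bin.000358'   # in reality read from the most up-to-date replica
POS=895419795

sql() { echo "-- on $1: $2"; }   # stand-in for: mysql -h"$1" -e "$2"

# 1. Freeze the binlog position on the master (much lighter than FTWRL).
sql "$MASTER" "LOCK BINLOG FOR BACKUP;"

# 2. Make every replica stop at exactly the same position.
for r in "${REPLICAS[@]}"; do
  sql "$r" "STOP SLAVE; START SLAVE UNTIL MASTER_LOG_FILE='$FILE', MASTER_LOG_POS=$POS;"
done

# 3. One mysqldump --lock-for-backup per replica would be started here,
#    each with its own chunk of tables.

# 4. Release the binlog lock once the dumps hold their backup locks.
sql "$MASTER" "UNLOCK BINLOG;"
```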
The complete script can be found here:
https://github.com/nethalo/parallel-mysql-backup/blob/master/dist_backup.sh
It is worth mentioning that the script writes its own log describing each step, which looks like this:
[200101-16:01:19] [OK] Found 'mysql' bin
[200101-16:01:19] [Info] SHOW SLAVE HOSTS executed
[200101-16:01:19] [Info] Count tables OK
[200101-16:01:19] [Info] table list gathered
[200101-16:01:19] [Info] CREATE DATABASE IF NOT EXISTS percona
[200101-16:01:19] [Info] CREATE TABLE IF NOT EXISTS percona.metabackups
[200101-16:01:19] [Info] TRUNCATE TABLE percona.metabackups
[200101-16:01:19] [Info] Executed INSERT INTO percona.metabackups (host,chunkstart) VALUES ('mysql3',0)
[200101-16:01:19] [Info] Executed INSERT INTO percona.metabackups (host,chunkstart) VALUES ('mysql4',100)
[200101-16:01:19] [Info] Executed INSERT INTO percona.metabackups (host,chunkstart) VALUES ('mysql2',200)
[200101-16:01:19] [Info] lock binlog for backup set
[200101-16:01:19] [Info] slave status position on mysql3
[200101-16:01:19] [Info] slave status file on mysql3
[200101-16:01:19] [Info] slave status position on mysql4
[200101-16:01:19] [Info] slave status file on mysql4
[200101-16:01:19] [Info] slave status position on mysql2
[200101-16:01:19] [Info] slave status file on mysql2
[200101-16:01:19] [Info] set STOP SLAVE; START SLAVE UNTIL MASTER_LOG_FILE = 'mysql-bin.000358', MASTER_LOG_POS = 895419795 on mysql3
[200101-16:01:20] [Info] set STOP SLAVE; START SLAVE UNTIL MASTER_LOG_FILE = 'mysql-bin.000358', MASTER_LOG_POS = 895419795 on mysql4
[200101-16:01:20] [Info] set STOP SLAVE; START SLAVE UNTIL MASTER_LOG_FILE = 'mysql-bin.000358', MASTER_LOG_POS = 895419795 on mysql2
[200101-16:01:20] [Info] Created /data/backups/20200101/ directory
[200101-16:01:20] [Info] Limit chunk OK
[200101-16:01:20] [Info] Tables list for mysql3 OK
[200101-16:01:20] [OK] Dumping mysql3
[200101-16:01:20] [Info] Limit chunk OK
[200101-16:01:20] [Info] Tables list for mysql4 OK
[200101-16:01:20] [OK] Dumping mysql4
[200101-16:01:20] [Info] Limit chunk OK
[200101-16:01:20] [Info] Tables list for mysql2 OK
[200101-16:01:20] [OK] Dumping mysql2
[200101-16:01:20] [Info] UNLOCK BINLOG executed
[200101-16:01:20] [Info] set start slave on mysql3
[200101-16:01:20] [Info] set start slave on mysql4
[200101-16:01:20] [Info] set start slave on mysql2

Some basic requirements
Because the tool uses the SHOW SLAVE HOSTS command, the report_host variable must be set on the replicas. If you are using Orchestrator, it is probably already set.
The host set in the report_host variable should be reachable: an IP address, or a hostname that can actually be resolved (via DNS, or by editing the /etc/hosts file).
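For example, this can be set in each replica's my.cnf (the hostname here is illustrative):

```ini
[mysqld]
# Hostname this replica reports to the master, as shown by SHOW SLAVE HOSTS.
# It must be resolvable from wherever the backup script runs.
report_host = mysql2
```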
There must be no replication filters on any of the replicas involved; this ensures the consistency of the data.
Currently, the script has to be run locally on the master.
Because backup locks are used, the script only works with Percona Server.
The MySQL user credentials are expected to be available in a .my.cnf file in the home directory.
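A minimal ~/.my.cnf on the master might look like this (the user name and password are placeholders):

```ini
[client]
user = backup_user
password = example_password
```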