2025-04-10 Update. From: SLTechnology News & Howtos (shulou) > Database
There is an article online, "xtrabackup compressed backup migration scheme for TB-level MySQL data", so today I tested it myself to see the effect. The conclusion is at the end. A brief description of the environment:
CentOS release 6.5 (Final) x86_64, running in VMware on Windows 7
MySQL 5.6.32
xtrabackup version 2.2.12 based on MySQL server 5.6.24 Linux (x86_64)
The MySQL database is 11.6 GB
1. The first script:
time innobackupex --defaults-file=/usr/local/mysql/my.cnf --parallel=10 --compress-threads=10 --user='xtrabk' --password='123456' /tmp/backup1
Elapsed time:
real 12m3.511s
user 0m13.338s
sys 0m53.715s
No compression took place; the backup totals 7.8 GB. During the run most CPU time sat in %wa (iowait), with I/O throughput around 40 MB/s. Clearly the bottleneck is I/O.
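As a sanity check on those numbers (an illustrative calculation, not from the original post), the average write rate implied by 7.8 GB in 12m3.511s can be computed directly:

```shell
# Average backup write throughput implied by the figures above:
# 7.8 GB written in 12m3.511s of wall-clock time.
awk 'BEGIN { printf "%.1f MB/s\n", 7.8 * 1024 / (12 * 60 + 3.511) }'
# prints 11.0 MB/s
```

The writes compete with 11.6 GB of reads on the same disk, which fits the %wa (iowait) picture.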
2. The second test script:
time innobackupex --defaults-file=/usr/local/mysql/my.cnf --parallel=8 --user='xtrabk' --password='123456' --socket=/tmp/mysql.sock --compress-threads=8 --stream=xbstream --compress /tmp > /tmp/backup1/backup4tpcc1000.tar
Elapsed time:
real 12m51.665s
user 0m34.527s
sys 1m53.238s
This backup came to 6.2 GB; again most CPU time was %wa. Per-second throughput (iostat):
Device: tps MB_read/s MB_wrtn/s MB_read MB_wrtn
sda 0.50 0.00 0.01 0 0
sdb 106.50 18.32 10.56 73 42
dm-0 2713.75 18.20 10.31 72 41
In theory, with parallel file reads and compression both enabled, this run should have been faster, but it made no difference.
3. The third script:
time innobackupex --defaults-file=/usr/local/mysql/my.cnf --user='xtrabk' --password='123456' --stream=tar --socket=/tmp/mysql.sock /tmp | gzip - > /tmp/backup1/backuptpcc1000.tar.gz
real 9m16.808s
user 6m50.564s
sys 0m36.050s
The file compressed down to 4.5 GB with single-threaded gzip. This script's characteristics: no parallel reads, gzip compression, and total time down to about 9 minutes. The bottleneck here should be the compression itself.
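One caveat worth knowing when restoring this kind of backup (a self-contained demonstration, not part of the original post): the tar stream produced by innobackupex must be extracted with GNU tar's -i (--ignore-zeros) flag, because the stream behaves like several tar archives concatenated together and plain tar stops at the first end-of-archive marker. The effect is easy to reproduce with ordinary files:

```shell
# Why restoring an innobackupex --stream=tar backup needs `tar -i`:
# -i (--ignore-zeros) keeps reading through the zero blocks that
# separate concatenated tar archives instead of stopping early.
workdir=$(mktemp -d)
cd "$workdir"
echo a > a.txt
echo b > b.txt
tar -cf part1.tar a.txt
tar -cf part2.tar b.txt
cat part1.tar part2.tar > stream.tar   # simulates a concatenated stream
mkdir out
tar -ixf stream.tar -C out             # with -i, both members come out
ls out                                 # a.txt  b.txt
```

Without -i, only a.txt would be extracted; the same silent truncation can hit a real restore.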
4. The fourth script:
time innobackupex --defaults-file=/usr/local/mysql/my.cnf --user='xtrabk' --password='123456' --stream=tar --socket=/tmp/mysql.sock /tmp | pigz -8 -p 15 > /tmp/backup1/backup2tpcc1000.tar.gz
real 6m59.320s
user 17m57.191s
sys 1m3.678s
The file compressed down to 4.4 GB, and most CPU time was in %us (user). Disk I/O showed no pressure at all, so I won't post the iostat output. Elapsed time dropped to about 7 minutes. With pigz running 15 compression threads, the CPU runs flat out. I then added --parallel=8 to the statement above, but it was useless: the result was the same, still about 7 minutes.
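The user/real ratio in that time output shows the parallelism paying off (again, just arithmetic on the numbers above): total CPU time is roughly 2.6 times wall-clock time, meaning between two and three cores were kept busy on average.

```shell
# Average CPU concurrency of the pigz run: total CPU time (user)
# divided by wall-clock time (real). A ratio above 1 means several
# cores were compressing simultaneously.
awk 'BEGIN { printf "%.1f\n", (17 * 60 + 57.191) / (6 * 60 + 59.320) }'
# prints 2.6
```

By contrast, the same ratio for the single-threaded gzip run in script 3 is about 0.7, confirming it never used more than one core.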
Conclusion: without compression, the backup bottleneck is I/O, and the 7.8 GB backup takes the longest. With pigz parallel compression, the bottleneck shifts to the CPU; since the CPU can compress much faster than the disk can write the uncompressed stream, that run takes the shortest time.