
A Real-World, Self-Tested MySQL Backup Strategy in Detail

2025-02-24 Update From: SLTechnology News&Howtos


This article walks through a real, self-tested MySQL backup strategy in detail. I hope you find it practically useful; that is the main reason for writing it up. Without further ado, let's get into it.

If you have ever worried about backing up a production database, here is a complete scheme that backs it up from a slave, and which you can test for yourself.

Description:

- Back up from the slave database. A full backup is taken once a week, at 6:00 a.m. every Monday; at all other scheduled times, the relay logs are backed up.
- Enable the rsync service on the slave database so the backups can be pulled remotely.
- On the local backup host (CVM), use the rsync command to pull the database backups on a schedule.
- The resulting backup can be used to add a new slave for the master or to restore the master (a restore sketch follows this list).
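To make the last point concrete, here is a minimal sketch of how such a backup might be restored onto a fresh instance to seed a new slave. The file name, master host, replication account and passwords (dbname_2025-02-24-06-00-00.sql.zip, 10.0.0.10, repl) are hypothetical placeholders, not values from this article; adapt them to your environment.

# Unpack the weekly full dump (file name is a placeholder)
unzip dbname_2025-02-24-06-00-00.sql.zip

# Because the dump was taken with --dump-slave=2, it contains a commented
# "CHANGE MASTER TO MASTER_LOG_FILE=..., MASTER_LOG_POS=..." line with the
# master's binlog coordinates; note them before importing.
grep -m 1 "CHANGE MASTER TO" dbname_2025-02-24-06-00-00.sql

# Create the database and import the dump into the new instance
mysql -uroot -p123456 -e "CREATE DATABASE IF NOT EXISTS dbname"
mysql -uroot -p123456 dbname < dbname_2025-02-24-06-00-00.sql

# Point the new instance at the master using the recorded coordinates, then
# start replication (host, user, password and coordinates are placeholders)
mysql -uroot -p123456 -e "CHANGE MASTER TO MASTER_HOST='10.0.0.10', MASTER_USER='repl', MASTER_PASSWORD='replpass', MASTER_LOG_FILE='mysql-bin.000123', MASTER_LOG_POS=456; START SLAVE;"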

I. Server-side configuration

1. Backup script written in Python

root@DBSlave:~# cat /scripts/mysql_slave_backup.py
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import time
import datetime

# Backup policy:
#   1. One full backup per week; all other runs back up the relay logs.
#   2. The database is fully backed up at 6:00 every Monday morning. At the other
#      scheduled runs (6:00 Tuesday to Sunday, and 12:00 and 18:00 every day),
#      the relay logs are backed up.

# Plan the backup directory. A new directory is created each week, because a
# full backup is taken once a week.
# "%W": week number of the year, with Monday as the first day of the week (00-53)
Date_Time = datetime.datetime.now().strftime("%Y-%m-%d-%H-%M-%S")
Week_Date = datetime.datetime.now().strftime("%Y-%W")
Dir = "/data/backup"
Backup_Dir = Dir + '/' + Week_Date
if os.path.isdir(Backup_Dir) is False:
    os.makedirs(Backup_Dir)

# mysqldump options worth knowing here:
#   --skip-tz-utc:         keep the original time zone when exporting tables
#   --master-data=2:       write a commented "CHANGE MASTER TO" statement into the backup (=1 leaves it uncommented)
#   --dump-slave=2:        back up from the slave and write a commented "CHANGE MASTER TO" that points at the
#                          slave's master (=1 leaves it uncommented); used when adding a slave for the master
#   --quick:               retrieve rows one at a time instead of buffering, speeding up large table exports
#   --routines:            export stored procedures
#   --triggers:            export triggers
#   --set-gtid-purged=OFF: do not write GTID information into the dump, so importing it into a new instance
#                          does not conflict with that instance's GTIDs
#   --single-transaction:  issue BEGIN before dumping to keep the data consistent (InnoDB-style engines only)
DB = '-uroot -p123456'                                   # login account and password
ARG = '--dump-slave=2 --skip-tz-utc --routines --triggers --default-character-set=utf8 --single-transaction'
DB_NAME = "dbname"                                       # database name
Back_DBNAME = DB_NAME + '_' + Date_Time + '.sql'         # name of the dump file
Logs_File = Backup_Dir + '/logs'                         # log file written during the backup
Mysql_Bin = "/usr/bin/mysql"                             # path of the mysql command
MysqlDump_Bin = "/usr/bin/mysqldump"                     # path of the mysqldump command
Relay_Log_Dir = "/data/logs/relay_log"                   # relay log directory
Relay_Log_Info = "/data/logs/relay_log/relay-bin.info"   # used to find the relay log currently in use

def Del_Old():
    '''Delete backups older than 36 days.'''
    OLD_Files = os.popen("find %s -type f -mtime +36" % (Dir)).readlines()
    if len(OLD_Files) > 0:
        for OLD_File in OLD_Files:
            FileName = OLD_File.split("\n")[0]
            os.system("rm -f %s" % (FileName))
    # Delete directories that are now empty
    All_Dir = os.popen("find %s -type d" % (Dir + '/*')).readlines()
    for Path_Dir in All_Dir:
        Path_Dir = Path_Dir.split("\n")[0]
        Terms = os.popen("ls %s | wc -l" % (Path_Dir)).read()
        if int(Terms) == 0:
            os.system("rm -rf %s" % (Path_Dir))

def ZIP_And_Del_Existed():
    '''
    Compress and then delete relay logs that have already been applied.
    To avoid deleting a relay log before replication has finished with it, only
    relay logs older than the one currently in use are compressed and deleted.
    '''
    # All relay log files (excluding relay-bin.index / relay-bin.info)
    Relog_List = os.popen("ls %s | grep \"^relay-bin.*\" | grep -v \"relay-bin.in*\"" % (Relay_Log_Dir)).readlines()
    # The relay log currently in use, and its last modification time
    CurRelay = os.popen("cat %s | head -n 1" % (Relay_Log_Info)).readline().split("\n")[0]
    CurRelay_MTime = os.path.getmtime(CurRelay)
    # Compare every relay log with the one in use; anything older needs backing up
    Need_ZIP_FName = []
    for FileName in Relog_List:
        FName = FileName.split("\n")[0]
        FName_MTime = os.path.getmtime("%s/%s" % (Relay_Log_Dir, FName))
        if FName_MTime < CurRelay_MTime:
            Need_ZIP_FName.append("%s/%s" % (Relay_Log_Dir, FName))
    os.system("zip -j %s/Relay_log_%s.zip %s" % (Backup_Dir, Date_Time, " ".join(Need_ZIP_FName)))
    # Delete the relay logs that have just been compressed
    for Relay_Log in Need_ZIP_FName:
        os.system("rm -f %s" % (Relay_Log))

def Log_Slave_Status(file):
    '''Append the current SHOW SLAVE STATUS output to the log file.'''
    Show_Slave = os.popen("%s %s -e \"show slave status\\G\"" % (Mysql_Bin, DB)).readlines()
    file.writelines(Show_Slave)

def Run_Backup(Full):
    '''Stop the slave, take a full dump or a relay-log backup, then restart the slave.'''
    Label = "full" if Full else "incremental"
    with open(Logs_File, 'a+') as file:
        file.writelines("####---------- demarcation line ----------####\n")
        file.writelines("###start >> %s backup | datetime: %s\n" % (Label, datetime.datetime.now().strftime("%Y-%m-%d-%H-%M-%S")))
        file.writelines("# today is weekday %s\n" % (IF_Week))
        file.writelines("# stop slave\n")
        os.system("%s %s -e \"stop slave\"" % (Mysql_Bin, DB))
        file.writelines("# slave status\n")
        Log_Slave_Status(file)
        file.writelines("# backup\n")
        if Full:
            os.system("%s %s %s %s > %s/%s" % (MysqlDump_Bin, DB, ARG, DB_NAME, Backup_Dir, Back_DBNAME))
        else:
            ZIP_And_Del_Existed()
        file.writelines("# backup done && start slave | datetime: %s\n" % (datetime.datetime.now().strftime("%Y-%m-%d-%H-%M-%S")))
        os.system("%s %s -e \"start slave\"" % (Mysql_Bin, DB))
        time.sleep(5)
        file.writelines("# slave status\n")
        Log_Slave_Status(file)
        file.writelines("###done >> %s backup completed\n" % (Label))
        if Full:
            # Compress the dump and remove the uncompressed .sql file
            os.system("zip -j %s/%s.zip %s/%s" % (Backup_Dir, Back_DBNAME, Backup_Dir, Back_DBNAME))
            os.system("rm %s/%s" % (Backup_Dir, Back_DBNAME))
        file.writelines("\n\n")
    Del_Old()

# Decide what to run: a full backup on Monday (unless one already exists for
# this week), otherwise an incremental (relay-log) backup.
IF_Week = datetime.datetime.now().strftime('%w')
Full = False
if int(IF_Week) == 1:
    # Check whether a full backup already exists in this week's directory
    Test = os.popen('ls %s | grep -E "^%s.*([0-9]{2}-[0-9]{2}-[0-9]{2}).sql.zip" | wc -l' % (Backup_Dir, DB_NAME)).readline()
    if int(Test) == 0:
        Full = True
Run_Backup(Full)
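Before wiring the script into cron, it can be run once by hand to confirm it works end to end. The quick check below is a minimal sketch; the paths match the script above, but the exact output depends on your data.

# Run the backup once manually and watch for errors
python /scripts/mysql_slave_backup.py

# Confirm that this week's directory, the backup archive and the log file were created
ls -lh /data/backup/$(date +%Y-%W)/
tail -n 30 /data/backup/$(date +%Y-%W)/logs

# Make sure replication resumed after the backup
mysql -uroot -p123456 -e "show slave status\G" | grep -E "Slave_IO_Running|Slave_SQL_Running"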

2. Scheduled tasks

root@DBSlave:~# cat /etc/cron.d/general
# mysql backup
0 6  * * * root python /scripts/mysql_slave_backup.py
0 12 * * * root python /scripts/mysql_slave_backup.py
0 18 * * * root python /scripts/mysql_slave_backup.py

3. Rsync configuration

root@DBSlave:~# cat /etc/rsyncd.conf
uid = 0
gid = 0
use chroot = yes
address = <public network address of this host>
port = 8638
log file = /var/log/rsync.log
pid file = /var/run/rsync.pid
hosts allow = <only the backup host's IP is allowed to connect>

[databases]
path = /data/backup/
comment = databases
read only = yes
dont compress = *.gz *.bz2 *.zip
# only the remoteuser account may authenticate
auth users = remoteuser
secrets file = /etc/rsyncd_users.db

root@DBSlave:~# cat /etc/rsyncd_users.db
# format: username:password
remoteuser:password
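Once the daemon configuration is in place, the module can be checked from the backup host before the sync script is scheduled. This is a minimal sketch: it assumes rsync is started directly with --daemon rather than via an init script, and the slave's address is a placeholder.

# On the slave: start the rsync daemon with this configuration (if not already running)
rsync --daemon --config=/etc/rsyncd.conf

# On the backup host: list the "databases" module to confirm connectivity and authentication
rsync --port=8638 --password-file=/etc/server.pass remoteuser@<slave public address>::databases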

II. Local backup host configuration

1. Create an rsync password file

root@localhost:~# cat /etc/server.pass
password
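Unlike the daemon's secrets file, the file passed to the client's --password-file option is expected to hold only the password itself, and rsync will refuse a password file that other users can read. A quick sketch of locking down both files (paths as above):

# On the backup host: the client password file must not be readable by others
chmod 600 /etc/server.pass

# On the slave: the same applies to the daemon's secrets file
chmod 600 /etc/rsyncd_users.db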

2. Synchronization script

root@localhost:~# cat /scripts/backup.sh
#!/bin/bash
Logs_Dir="/Backup/logs.txt"
Rsync=$(which rsync)
Project="databases"
Dest="/Backup/"
IF_DEL_FILE=$(find ${Dest} -type f -mtime +36 -name "*" | wc -l)
DEL_FILE=$(find ${Dest} -type f -mtime +36 -name "*")

# Delete backups older than 36 days
RMOLD() {
    if [ "${IF_DEL_FILE}" -gt "0" ]
    then
        for filename in ${DEL_FILE}
        do
            rm -f ${filename}
        done
        rmdir ${Dest}*    # delete directories that are now empty
    fi
}

# Run the synchronization
Backup() {
    echo "####---------- datetime: `date +%F-%H-%M-%S` ----------####" >> ${Logs_Dir}
    echo "# start rsync" >> ${Logs_Dir}
    ${Rsync} -azH --password-file=/etc/server.pass --bwlimit=300 --port=8638 remoteuser@<IP the database rsync daemon listens on>::${Project} ${Dest} &>> ${Logs_Dir}
    echo "# end rsync - datetime: `date +%F-%H-%M-%S` ----------####" >> ${Logs_Dir}
    echo -e "\n\n" >> ${Logs_Dir}
    RMOLD
}

# If a synchronization is already running, do not start another one
IFProcess() {
    ps -ef | grep "${Rsync} -azH --password-file=/etc/server.pass --bwlimit=300 --port=8638 remoteuser@<IP the database rsync daemon listens on>::${Project}" | grep -v "grep" &> /dev/null
    if [[ "$?" == 0 ]]
    then
        exit 0
    else
        Backup
    fi
}

IFProcess
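A one-off run makes it easy to confirm the pull works before relying on cron. This is a simple sketch using the paths from the script above.

# Run the sync script once by hand and inspect its log
bash /scripts/backup.sh
tail -n 20 /Backup/logs.txt

# The weekly backup directories pulled from the slave should now appear locally
ls -lh /Backup/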

3. Scheduled tasks

root@localhost:~# cat /etc/cron.d/general

01 23 * * * root /bin/sh /scripts/backup.sh

That is the full, self-tested MySQL backup strategy described above; hopefully you find it helpful. If you would like to learn more, please keep following our industry news.
