Migrating a MySQL database, ant-moving style

2025-03-27 Update From: SLTechnology News&Howtos

Taking advantage of a data center relocation, we decided to do some business consolidation. The existing architecture was planned and put into operation in 2010, and the number of projects has kept growing since. Open the nginx configuration file and there are more than 40 include lines, each one a project (some web, some app). A full cabinet of old equipment, in a load-balanced, high-availability layout. To keep the businesses consistent and reduce costs, the business data (deployed applications plus user-uploaded files) share one NFS export, and all businesses share the same physical database servers (one physical MySQL server hosting multiple databases). As business and traffic have grown, the hidden risks have become more and more worrying, mainly in the following areas:

◆ Security issues

Dozens of sites share the same directories, exported to the physical servers over NFS. If any one of these projects has a security hole, a determined attacker can get in and do whatever they want, and the whole site goes down with it. Every so often malicious code gets injected; we clean it out, and before long something is tampered with again. As everyone knows, the vulnerable site is not necessarily the one that shows the tampering. But there are too many sites and no isolation between them to scan them all with security tools (scanning a single site takes about a day on average).

[root@web57 ~]# more /usr/local/nginx/conf/nginx.conf
user  www www;
worker_processes  6;
worker_rlimit_nofile 51200;

events {
    use epoll;
    # use kqueue;    # FreeBSD
    worker_connections 51200;
}

http {
    include       mime.types;
    default_type  application/octet-stream;
    # charset gb2312;
    server_names_hash_bucket_size 256;
    client_header_buffer_size 256k;
    large_client_header_buffers 4 256k;
    client_max_body_size 500m;

    ... some directives omitted ...

    include vhosts/faxian.quanzhen.com.conf;
    include vhosts/www.quanzhen.com.conf;
    include vhosts/news.quanzhen.com.conf;
    include vhosts/s.quanzhen.com.conf;
    include vhosts/down.quanzhen.com.conf;
    include vhosts/static.quanzhen.com.conf;
    include vhosts/image.quanzhen.com.conf;
    include vhosts/3g.quanzhen.com.conf;
    include vhosts/mini.quanzhen.com.conf;
    include vhosts/xml.quanzhen.com.conf;
    include vhosts/mayiapi.quanzhen.com.conf;
    include vhosts/www.android77.com.conf;
    include vhosts/fahongbao.android77.com.conf;
    include vhosts/update.android77.com.conf;
    include vhosts/dev.quanzhen.com.conf;
    include vhosts/qr.110.cc.conf;
    include vhosts/110.cc.conf;
    include vhosts/eggserver.quanzhen.com.conf;
    include vhosts/apkegg.quanzhen.com.conf;
    include vhosts/eggserver.yidong7.cn.conf;
    include vhosts/www.yidong7.cn.conf;
    include vhosts/down.yidong7.cn.conf;
    include vhosts/wan.quanzhen.com.conf;
    include vhosts/open.quanzhen.com.conf;
    include vhosts/bakdown.yidong7.cn.conf;
    include vhosts/hanhua.quanzhen.com.conf;
    include vhosts/mpk.quanzhen.com.conf;
    include vhosts/android.quanzhen.com.conf;
    include vhosts/pay.quanzhen.com.conf;
    include vhosts/cmstop.quanzhen.cn.conf;
    include vhosts/news.quanzhen.cn.conf;
    include vhosts/pingce.quanzhen.cn.conf;
    include vhosts/gonglue.quanzhen.cn.conf;
    include vhosts/hao.quanzhen.cn.conf;
    include vhosts/all.quanzhen.cn.conf;
    include vhosts/s.quanzhen.cn.conf;
    include vhosts/apkz.quanzhen.com.conf;
    include vhosts/ajax.quanzhen.com.conf;
    include vhosts/union.quanzhen.com.conf;
    include vhosts/mai.quanzhen.com.conf;
    include vhosts/blog.quanzhen.com.conf;
    include vhosts/guazi.quanzhen.com.conf;
    include vhosts/lockscreen.yidong7.cn.conf;
    include vhosts/dsp.pujia8.com.conf;
    include vhosts/3svx4haii9.quanzhen.com.conf;
    include vhosts/u.quanzhen.com.conf;
    include vhosts/bianji.quanzhen.com.conf;
    include vhosts/default.conf;
}

◆ Performance issues

These are concentrated mainly in the database: as soon as one database has a problem, such as a locked table or other contention, every related business hangs with it, which is maddening. We wanted to split things up, but the decision maker pointed out that the cabinet was already full; adding machines would mean renting another cabinet, and with cost and other concerns in mind the policy was simply "just don't have an accident."

The consolidation plan is to migrate part of the business to the public cloud, free up servers, and upgrade the remaining equipment (memory, disks, and so on; the oldest machines are simply decommissioned). Keeping only the higher-spec machines for virtualization reduces the number of devices (and therefore the hosting fees) and also makes day-to-day maintenance easier.

Everything above may not seem very technical, but it should still be a useful reference for projects carrying legacy baggage. Now let's get down to business and look at the state of the project to be migrated. The data going to the cloud consists of website data and database data. The website data is easy to handle: rsync it to the corresponding directory. The database is more troublesome. There are two databases, one of 38 GB and one of 29 GB, not large in themselves, but the shared ibdata1 file is 123 GB. At first I tried to import the two databases directly into Aliyun's RDS; after several failures, customer support replied that RDS did not yet support databases laid out this way. To save costs, we purchased a high-spec CVM (8-core CPU, 32 GB of memory, 1 TB efficient cloud disk) and deployed MySQL 5.6 on it to serve the two databases.
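
For the website data, a minimal sketch of that rsync step (the web root path and the cloud host name are placeholders, not taken from the real setup):

[root@web57 ~]# rsync -avz /data/www/ root@<cloud-web-host>:/data/www/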

The first attempt

With an estimated 200 GB of data, we got greedy and wanted to migrate it all in one go. A few days in advance the cloud environment was prepared (to the point where a test page could be served), the operations department sent out the maintenance notice, and then at 0:30 one night, some of us in the office and some at home, bleary-eyed, we solemnly typed "screen" on the keyboard. After getting unanimous permission in the QQ group to export the database, my junior colleague carefully issued the command:

[root@db-209 ~]# innobackupex --user=root --password='i%=KGb76' \
    --defaults-file=/etc/my.cnf \
    --databases="quanzhen_mobile7lockscreen quanzhen_equipment" /data/bakmysql/

InnoDB Backup Utility v1.5.1-xtrabackup; Copyright 2003, 2009 Innobase Oy
and Percona Ireland Ltd 2009-2012.  All Rights Reserved.

This software is published under
the GNU GENERAL PUBLIC LICENSE Version 2, June 1991.

180618 00:30:31  innobackupex: Starting mysql with options: --defaults-file='/etc/my.cnf' --password=xxxxxxxx --user='root' --unbuffered --
180618 00:30:31  innobackupex: Connected to database with mysql child process (pid=20090)
180618 00:30:37  innobackupex: Connection to database server closed
IMPORTANT: Please check that the backup run completes successfully.
           At the end of a successful backup run innobackupex
           prints "completed OK!".

innobackupex: Using mysql  Ver 14.12 Distrib 5.0.95, for redhat-linux-gnu (x86_64) using readline 5.1
innobackupex: Using mysql server version Copyright (c) 2000, 2011, Oracle and/or its affiliates. All rights reserved.

innobackupex: Created backup directory /data/bakmysql/2018-06-18_00-30-37
180618 00:30:37  innobackupex: Starting mysql with options: --defaults-file='/etc/my.cnf' --password=xxxxxxxx --user='root' --unbuffered --
180618 00:30:37  innobackupex: Connected to database with mysql child process (pid=20123)
180618 00:30:39  innobackupex: Connection to database server closed
180618 00:30:39  innobackupex: Starting ibbackup with command: xtrabackup_55 --defaults-file="/etc/my.cnf" --defaults-group="mysqld" --backup --suspend-at-end --target-dir=/data/bakmysql/2018-06-18_00-30-37 --tmpdir=/tmp
innobackupex: Waiting for ibbackup (pid=20132) to suspend
innobackupex: Suspend file '/data/bakmysql/2018-06-18_00-30-37/xtrabackup_suspended'

xtrabackup_55 version 2.0.7 for Percona Server 5.5.16 Linux (x86_64) (revision id: 552)
xtrabackup: uses posix_fadvise().
xtrabackup: cd to /data/mysql_db
xtrabackup: Target instance is assumed as followings.
xtrabackup:   innodb_data_home_dir = ./
xtrabackup:   innodb_data_file_path = ibdata1:10M:autoextend
xtrabackup:   innodb_log_group_home_dir = ./
xtrabackup:   innodb_log_files_in_group = 2
xtrabackup:   innodb_log_file_size = 5242880
>> log scanned up to (601191481892)
[01] Copying ./ibdata1 to /data/bakmysql/2018-06-18_00-30-37/ibdata1
>> log scanned up to (601191481892)
>> log scanned up to (601191481892)
>> log scanned up to (601191481892)
... output omitted ...

The optimistic estimate was that the whole migration would be done by 7 a.m., with several people taking turns watching the progress and waking the others once a step finished so the next one could start. In the end the innobackupex run did not finish until after 6 a.m. (even with --databases limited to the two schemas, the 123 GB shared ibdata1 still has to be copied in full), and there were still several equally time-consuming steps to go. We could only declare the migration a temporary failure, pick another auspicious day, and migrate in two batches instead.

The second split migration

Consulting the perpetual calendar to rule out the unlucky days, we picked a date by heavenly stem and earthly branch, made appointments with the people involved, and carried on with the migration. Having learned from last time, I cleaned up the databases to be migrated beforehand, deleting useless data and saving several gigabytes of space. Then, on the source database server, run:

[root@db-209 ~]# innobackupex --user=root --password='i%=KGb76' \
    --defaults-file=/etc/my.cnf --databases="quanzhen_equipment" /data/bakmysql/
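
As an aside, a query along these lines is a quick way to find the largest tables when deciding what to prune during that cleanup (the schema names are the ones being migrated; the query itself is generic):

mysql> SELECT table_schema, table_name,
    ->        ROUND((data_length + index_length) / 1024 / 1024) AS size_mb
    ->   FROM information_schema.tables
    ->  WHERE table_schema IN ('quanzhen_equipment', 'quanzhen_mobile7lockscreen')
    ->  ORDER BY size_mb DESC
    ->  LIMIT 20;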

After kicking off the export I lay down and went to sleep. At three o'clock in the morning the phone rang: the first step was done.

[root@db-209 ~]# innobackupex --apply-log /data/bakmysql/2018-06-18_00-30-37

Applying the log is very fast; hit Enter and it is done. The backup is then tarred up and copied to the target server, which takes a while because the leased egress bandwidth is small (30 Mbps in total). Now you see why the migration has to happen during the overnight traffic trough.
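
That packaging-and-transfer step, sketched under assumptions (the target IP, destination path, and rsync bandwidth cap are placeholders; scp works just as well):

[root@db-209 ~]# cd /data/bakmysql
[root@db-209 bakmysql]# tar -czf 2018-06-18_00-30-37.tar.gz 2018-06-18_00-30-37
# cap the transfer below the 30 Mbit/s leased line (--bwlimit is in KB/s)
[root@db-209 bakmysql]# rsync -avP --bwlimit=2500 2018-06-18_00-30-37.tar.gz root@<target-ip>:/data/db_bk/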

On the target server you only need to install the MySQL software and create the directory /data/mysql_db; there is no need to initialize the database, because the data directory must be empty when the innobackupex backup is restored into it. The Aliyun machine is much better specified than the source server, so unpacking the file finishes quickly.
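
The target-side preparation, sketched out (assuming MySQL runs as the mysql user and the backup archive lands in /data/db_bk; the archive name mirrors the transfer sketch above):

[root@msyql ~]# mkdir -p /data/mysql_db /data/db_bk
[root@msyql ~]# chown mysql:mysql /data/mysql_db
[root@msyql ~]# cd /data/db_bk
[root@msyql db_bk]# tar -xzf 2018-06-18_00-30-37.tar.gz
[root@msyql db_bk]# grep '^datadir' /etc/my.cnf
datadir = /data/mysql_db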

Check the MySQL options file /etc/my.cnf (note: the options file) and make sure it sets "datadir=/data/mysql_db" before performing the import. The command is as follows:

[root@msyql mysql_db]# innobackupex --defaults-file=/etc/my.cnf \
    --copy-back /data/db_bk/2018-06-18_00-30-37
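
One detail worth noting: after --copy-back the restored files are typically owned by root, so it is common practice to hand ownership back to the mysql user before starting the service:

[root@msyql mysql_db]# chown -R mysql:mysql /data/mysql_db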

It does not matter that the mysql system database was not included in the export, since only one account needs to be created. Next, initialize the database and create the application account:

[root@msyql mysql_db]# cd /usr/local/mysql/
[root@msyql mysql]# scripts/mysql_install_db --user=mysql --datadir=/data/mysql_db
[root@msyql ~]# mysql
mysql> grant all on quanzhen_equipment.* to ...

Also remember to get rid of MySQL's empty passwords.
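
A minimal sketch of those two steps, with a hypothetical application account (the real user, host, and passwords are not shown here), assuming MySQL 5.6:

mysql> GRANT ALL ON quanzhen_equipment.* TO 'appuser'@'10.%' IDENTIFIED BY 'S0me-Str0ng-Pass';
mysql> DELETE FROM mysql.user WHERE User = '';    -- drop the anonymous (empty-password) accounts
mysql> SET PASSWORD FOR 'root'@'localhost' = PASSWORD('An0ther-Str0ng-Pass');
mysql> FLUSH PRIVILEGES;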

Compare the number of tables between the source and target databases, and pick a few large tables at random to compare row counts. Once the data was confirmed to be intact, the others went off to debug the applications; the follow-up work is not shown here, as it was no longer my business.
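
A sketch of that spot check, run on both the source and the target (the table name is a placeholder for whichever large tables get picked):

mysql> SELECT COUNT(*) FROM information_schema.tables WHERE table_schema = 'quanzhen_equipment';
mysql> SELECT COUNT(*) FROM quanzhen_equipment.some_big_table;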

The third split migration

With last time's success behind us, I was full of confidence this round, but one worry remained: importing into the target requires an empty data directory, and this time the directory already holds the first database. My junior colleague came to ask for instructions before starting; expecting obstacles, I told him to just export the data from the source server, get it ready, and copy it over to the target, and I would handle the rest myself.

I had two options. One was to use the "--force-non-empty-directories" option; failing that, set up another MySQL instance listening on port 3307 and run two instances side by side. I tried the first option to see whether it would go through. The commands were:

[root@msyql db_bk]# pwd
/data/db_bk
[root@msyql db_bk]# innobackupex --defaults-file=/etc/my.cnf --copy-back \
    --force-non-empty-directories 2018-06-22_00-24-52

180623 23:31:57 innobackupex: Starting the copy-back operation

IMPORTANT: Please check that the copy-back run completes successfully.
           At the end of a successful copy-back run innobackupex
           prints "completed OK!".

innobackupex version 2.4.11 based on MySQL server 5.7.19 Linux (x86_64) (revision id: b4e0db5)
innobackupex: Can't create/write to file '/data/mysql_db/ib_logfile0' (Errcode: 17 - File exists)
[01] error: cannot open the destination stream for ib_logfile0
Error: copy_file() failed.

Sadly, a file with the same name already exists, so the operation aborts right away. Fine: I removed ib_logfile0 and ib_logfile1 and ran it again, and it still refused, this time reminding me that ibdata1 exists. Now that is a big deal. I was worried that the newly imported ibdata1 might not contain the metadata for the database already living there, but I could not resist giving it a try. Some readers will ask whether this might destroy the existing data in the database. I had already thought that far: I had backed up the whole database well in advance, so I had insurance.

I sat staring at the screen watching the output, hoping it would go smoothly. Then messages started arriving in the QQ group asking how it was going and when it would be done. I glanced at the clock: six already, and the northern sky was getting light. Too late for this. I stopped the process and decided to try copying the files directly, without innobackupex. Not being sure, I carefully compared the three files ibdata1, ib_logfile0 and ib_logfile1 between the live database directory and the exported data directory, and they were exactly the same size. So: remove those files from the existing data directory and cp the three files over from the export directory. When the copy finished, starting the service with mysqld_safe failed, complaining that ib_logfile0 was not writable; easy, just a chown. Start the MySQL service again, and it came up normally.
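
Pieced together, that sequence looks roughly like this (a sketch: mysqld is assumed to be stopped at this point, and the backup directory name follows the earlier copy-back command):

[root@msyql ~]# cd /data/mysql_db
[root@msyql mysql_db]# rm -f ibdata1 ib_logfile0 ib_logfile1
[root@msyql mysql_db]# cp /data/db_bk/2018-06-22_00-24-52/ibdata1 /data/db_bk/2018-06-22_00-24-52/ib_logfile0 /data/db_bk/2018-06-22_00-24-52/ib_logfile1 .
[root@msyql mysql_db]# chown mysql:mysql ibdata1 ib_logfile0 ib_logfile1
[root@msyql mysql_db]# mysqld_safe --defaults-file=/etc/my.cnf &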

But is the data actually correct? I was not sure. If it turned out wrong, the fallback was to install another MySQL, import the data into it, run two instances side by side, and work out how to consolidate them later; the Aliyun servers talk to each other over the internal network, so the transfer would not cost much time. A rough sketch of that fallback is below.
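
That fallback, sketched under assumptions (the second config file, data directory, socket, and port layout are all hypothetical, not taken from the real setup):

# Copy the options file and point it at a second, empty data directory,
# e.g. port=3307, datadir=/data/mysql_db2, socket=/tmp/mysql3307.sock, pid-file=/data/mysql_db2/mysql3307.pid
[root@msyql ~]# cp /etc/my.cnf /etc/my3307.cnf
[root@msyql ~]# mkdir -p /data/mysql_db2 && chown mysql:mysql /data/mysql_db2
# Start the second instance alongside the first, then restore the second backup into it
[root@msyql ~]# mysqld_safe --defaults-file=/etc/my3307.cnf &
[root@msyql ~]# mysql -S /tmp/mysql3307.sock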

Since the service was running, the next step was to check the data, and with a bit of luck (the other night I dreamed I could fly, caught a giant swan, and wrapped my arms gloriously around its neck), the data might be fully intact! I quietly compared things for a while and found no differences. Then I told the others in the QQ group: the import had been blocked and had failed several times, some less-than-certain measures had been taken, the MySQL service was now up, please verify whether the data is completely usable. Several programmers bustled about for a while and came back with the answer that the data was all there. With that, my part of the work was done.

Some people may look down on this: why not test it first, why not develop a sound process? Good question! I suggested more than once that the decision maker set aside some resources. Bluntly, an idle server could have been used to rehearse the whole thing on the private network, even during the day (copying data over the intranet does not touch the bandwidth users see), but no resources were given to me, and the job still had to be done. A bit tiring, a lot of thrashing about, but looking back, working without a net is also how you gain experience; otherwise this article would never have been written, don't you think?
