
Using DataSync to migrate data to GaussDB T

2025-01-19 Update. From: SLTechnology News&Howtos


Shulou(Shulou.com)06/01 Report--

Original link: https://www.modb.pro/db/22926?cyn

I. Introduction to migrating data with DataSync

DataSync supports online migration to GaussDB 100 (GaussDB T), as well as offline migration from Sybase, Oracle, MySQL, GaussDB 100 V100R003C10, and SQL Server. After configuring the connection information for the source and target databases, you can start the migration tool; log files and reports are generated during the run, which makes the migration easy to manage and maintain.

DataSync provides the following functions:

Data export: only offline migration is supported.

Data import: only offline migration is supported.

Data export + import: offline migration and online migration are supported.

Table structure verification: table structure verification is required for both offline migration and online migration. Users can check the table structure separately before performing the data migration, or the table structure verification can be performed automatically by the DataSync tool during the data migration.

The restrictions on using DataSync are as follows:

1. When in use, you must ensure that the source database server and the target database server can be reached normally.

2. Make sure that you can connect to the database correctly through the configured IP, port, user name and password.

3. Before migrating, the user needs to evaluate whether the target database has enough space, and whether the space used during the staging (splicing) process is sufficient.
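The space evaluation in point 3 can be made concrete with a small shell sketch (not part of DataSync): it compares the size of a sample export directory with the free space on its filesystem. The temporary directory and 1 KiB sample file are stand-ins for real export data.

```shell
# Minimal space check: compare the size of an export directory (here a
# temporary stand-in) with the free space on the filesystem holding it.
d=$(mktemp -d)
printf 'x%.0s' $(seq 1 1024) > "$d/sample.dat"    # 1 KiB of sample data

need_kb=$(du -sk "$d" | awk '{print $1}')         # KiB needed by the data
free_kb=$(df -kP "$d" | awk 'NR==2 {print $4}')   # KiB free on the filesystem

if [ "$need_kb" -le "$free_kb" ]; then
  echo "space ok"
else
  echo "insufficient space"
fi
rm -r "$d"
```

For a real migration you would point the check at the directory holding the exported .dat files and at the target database's data filesystem.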

4. Make sure that Java 1.8 or later is installed on the machine running the migration tool.

5. When in use, make sure that the IP of the device running the migration tool is configured in the whitelist of the source database and target database.

6. Place the JDBC driver of the source database to be exported in dependency-jars, and make sure the driver package name follows the naming specification; otherwise the connection to the database will fail. Alternatively, invoke the tool with the jar paths specified explicitly.

Example:

java -Xbootclasspath/a:/sybaseDriver/jconn4_sybase.jar:/sybaseDriver/bcprov-jdk16-1.46.jar -jar DSS.jar

7. The database password must not contain " " (space) or ";" (semicolon); otherwise the tool cannot use the zsql command to connect to the database. For details, refer to the password section of the GaussDB 100 V300R001 user manual. Also make sure the configured user has the permissions required for the relevant operations, otherwise the tool will not produce the expected results; see Table 2-1 for the permissions each user needs.

8. All paths in the data_path configuration item must follow the whitelist rules for migration tool paths: a path may contain only uppercase and lowercase letters, digits, and the characters ',', '/', ':', '-', '_'; it must not start with '-', and must not contain both '/' and '\' at the same time.
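As a pre-flight check for rule 8, a path can be validated before it goes into cfg.ini. The helper below is hypothetical (not part of the DataSync tool), and the allowed character set is taken as quoted above:

```shell
# Hypothetical pre-flight check for rule 8: validates a candidate data_path
# value against the whitelist rules (allowed characters, no leading '-',
# no mixing of '/' and '\').
check_data_path() {
  p=$1
  case "$p" in
    -*) echo invalid; return ;;        # must not start with '-'
  esac
  if printf '%s' "$p" | grep -q '/' && printf '%s' "$p" | grep -q '\\'; then
    echo invalid; return               # must not contain both '/' and '\'
  fi
  # letters, digits, ',' '/' ':' '-' '_' (backslash tolerated when '/' is absent)
  if printf '%s' "$p" | grep -Eq '^[A-Za-z0-9,/:_\\-]+$'; then
    echo valid
  else
    echo invalid
  fi
}

check_data_path /opt/DataSync/data    # valid
check_data_path -bad/path             # invalid
```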

II. Online migration example

The following is an example of online migration from MySQL to GaussDB 100; an Oracle database can be migrated online in the same way.

In this example the source database is MySQL.

The tool is executed on the source database side. There is no need to create the table structures on the target side beforehand: the tool can create them automatically in the target database and then synchronize the data.

Before configuring cfg.ini, you need to generate the ciphertext for the database server password, the database user password, and the target server password.

To generate ciphertext for a password that needs to be configured in cfg.ini, execute java -jar DSS.jar -pwd | -password password_type in the directory containing DSS.jar, then enter the plaintext password when prompted to obtain the encrypted password.

Here password_type indicates which password in cfg.ini the ciphertext is generated for. It is an integer in the range [1, 9], with the following meanings:

1: export database password

2: import database password

3: import server password

4: password of the remote server holding the data files used by the export operation

5: password of the remote server holding the data files used by the import operation

6: password of the truststore file generated for the export database

7: password of the keystore file generated for the export database

8: password of the truststore file generated for the import database

9: password of the keystore file generated for the import database

Passwords must be encrypted one by one according to their type and cannot be reused. For example, even if the export database and the import database use the same password, you still need to encrypt it twice, with java -jar DSS.jar -password 1 and java -jar DSS.jar -password 2 respectively. Example:

dbuser@plat:~/gppTest/verf0603/DataSync> java -jar DSS.jar -password 1
DataSync (1.1.0), From Huawei! Copyright © Huawei Technologies Co., Ltd 2019. All Rights Reserved.
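For quick reference, the password_type-to-meaning mapping above can also be expressed as a small shell helper. This is purely illustrative (DSS.jar itself only takes the number) and the function name is made up:

```shell
# Illustrative lookup: which password a given password_type value encrypts.
pwd_type_desc() {
  case "$1" in
    1) echo "export database password" ;;
    2) echo "import database password" ;;
    3) echo "import server password" ;;
    4) echo "remote server password for export data files" ;;
    5) echo "remote server password for import data files" ;;
    6) echo "export database truststore password" ;;
    7) echo "export database keystore password" ;;
    8) echo "import database truststore password" ;;
    9) echo "import database keystore password" ;;
    *) echo "unknown password_type" ;;
  esac
}

pwd_type_desc 3    # import server password
```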

1. Configure the online migration process

# more cfg.ini
{
    "flow_type": 3,            --> 1: export  2: import  3: online migration
    "export_db": {
        "database_type": 3,    --> 2: Oracle  3: MySQL  6: GaussDB 100
        "db": {
            "ip": "192.168.188.155",
            "username": "root",
            "password": "mfkoBUgy1NgfG51rZUlYmg==",    /* java -jar DSS.jar -password 1 --> ciphertext of the export database user password */
            "port": 3306,
            "db_name": "yf",
            "server_name": "",
            "trust_store": "",
            "trust_store_password": "",
            "key_store": "",
            "key_store_password": ""
        }
    },
    "import_db": {
        "database_type": 6,
        "db": {
            "ip": "192.168.212.221",
            "username": "TEMPTEST",
            "password": "AMeijynMP4OvdRzZeA/Tiw==",    /* java -jar DSS.jar -password 2 --> ciphertext of the GaussDB import user password */
            "port": 1888,
            "trust_store": "",
            "trust_store_password": "",
            "key_store": "",
            "key_store_password": "",
            "table_space": "",
            "index_table_space": ""
        },
        "server": {
            "ip": "192.168.212.221",
            "username": "omm",
            "password": "u9ijn15bju6i2YJIXYU46w==",    /* java -jar DSS.jar -password 3 --> ciphertext of the omm import server password */
            "pub_key_file": "",
            "port": 22
        }
    },
    "data_path": {             --> for online migration, none of the paths below need to be filled in
        "export_local_path": "",
        "export_remote_path": { "ip": "", "username": "", "password": "", "pub_key_file": "", "port": 22, "path": "" },
        "import_local_path": "",
        "import_remote_path": { "ip": "", "username": "", "password": "", "pub_key_file": "", "port": 22, "path": "" }
    },
    "option": {
        "column_separator": "|",
        "row_separator": "\n",
        "data_check_type": 1,
        "compression_before_translate": false,
        "compress_with_zstd_mode": false,
        "disable_foreign_key": true,
        "check_ddl": true,
        "nls_lang": "utf8",
        "delete_file": true,
        "ignore_lost_table": 3,    /* note: for online migration this must be set to 3, so that table structures are created automatically in the target database */
        "disable_trigger": true,
        "check_obj_exists": true,
        "ignore_sync_ddl": false,
        "ignore_case_in_exc": false,
        "multiple_schema": false,
        "fetch_size": 2000,
        "disable_index": false,
        "enlarge_charater_length": false,
        "create_pk_and_index_after_import": false,
        "max_table_in_ddl_file": 0
    }
}

2. Select the source database and source table to be migrated, and map to the target library

[root@192-168-188-155 config]# more exp_obj.ini
yf.frame_att:TEMPTEST.FRAME_ATT

The left side is the source MySQL database and table name (lowercase); the right side is the target GaussDB 100 username and table name (uppercase).
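Writing exp_obj.ini by hand is error-prone when many tables are involved. The sketch below generates the lowercase-to-uppercase mapping lines; the 'yf' and 'TEMPTEST' names come from this example, while 'frame_role' is a made-up second table for illustration:

```shell
# Generate exp_obj.ini mapping lines: lowercase MySQL schema.table on the
# left, uppercase GaussDB 100 user.table on the right.
src_schema=yf
dst_user=TEMPTEST
for tbl in frame_att frame_role; do
  printf '%s.%s:%s.%s\n' "$src_schema" "$tbl" \
    "$dst_user" "$(printf '%s' "$tbl" | tr '[:lower:]' '[:upper:]')"
done
```

Redirect the output to config/exp_obj.ini (or append to it) once the table list is correct.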

3. Execute on the source side as the root user

Execute the online migration command:

[root@192-168-188-155 DataSync]# java -jar DSS.jar -I config/exp_obj.ini
DataSync (1.1.1), From Huawei! Copyright © Huawei Technologies Co., Ltd 2019. All Rights Reserved.
Start checking config.............[ok]
Start syncing DDL.................[ok]
Start converting environment......[ok]
Start syncing data................[ok]
Data syncing......................[0/1]
Data sync completed...............[1/1]
Start recovering environment......[ok]
Start collecting results..........[ok]
Task start time...................[2019-12-31 13:51:19]
Task end time.....................[2019-12-31 13:57:13]
Total spent time..................[354.075s]
Export successful data (rows).....[217]
Export failed data (rows).........
Export data failed table count....[0]
Import successful data (rows).....[217]
Import failed data (rows).........[196389]
Import failed table (tables)......[1]
Report details path...............[./logs/reports_2019-12-31/13h-51m-19s/]
[root@192-168-188-155 DataSync]#

4. View the migration report to verify data consistency

The following logs are generated during the run of the DataSync tool:

dss_error_log.log: records any exceptions and errors that occur while the tool is running, so that maintenance personnel can locate and analyze problems.

DDLReport.csv: if something fails during migration, check this report; it lists the table objects whose table structure creation failed.

dss_info_log.log: records the tables whose migration failed; the problematic SQL can be found in this log.

III. Notes on the migration process

1. DataSync migrates data via load; only table data is migrated.

2. DataSync checks the table structure information before migration; if there is an error, it interrupts the migration. See DDLReport.csv for details of table structure errors.

3. dss_info_log.log records the tables that failed to migrate; the problematic SQL can be checked in this log.

4. DataSync checks the table DDL before importing. If creating a table's DDL fails, the import process is interrupted, so this is a key step: exclude the problematic tables first and then import. You can use the command

grep -E ',ERROR,|,FAILED,' ./logs/reports_2020-01-02/13h-51m-15s/CreateTblReport.csv

to filter out the failed tables (sometimes you need to pay attention to the case of the source table names) and remove them from exp_obj.ini. The detailed reason for each creation failure can be found in dss_error_log.log.
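The filtering step can be tried against a synthetic report. The column layout below is simplified and made up; only the file name and the ',ERROR,'/',FAILED,' markers follow the real CreateTblReport.csv:

```shell
# Filter the source names of failed tables out of a (synthetic) report.
report=$(mktemp)
cat > "$report" <<'EOF'
yf,frame_att,TEMPTEST,FRAME_ATT,OK,
yf,frame_role4ou,TEMPTEST,FRAME_ROLE4OU,ERROR,auto increment column
yf,portal_item,ADMIN,PORTAL_ITEM,FAILED,bit not in type mapping
EOF
grep -E ',ERROR,|,FAILED,' "$report" | cut -d, -f2   # tables to exclude
rm "$report"
```

The printed table names are the entries to delete from exp_obj.ini before re-running the import.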

5. If you need to re-import because of a DDL problem, drop the previously created schema in GaussDB first, and then rerun the DataSync import.

6. Some source tables have auto-increment columns of non-integer types; creating such tables fails during migration, as in the following error.

ERROR: [TEMPTEST.FRAME_ROLE4OU table creation task] Build Table Errors, sql: CREATE TABLE TEMPTEST.FRAME_ROLE4OU (ROW_ID FLOAT AUTO_INCREMENT not null, OUGUID VARCHAR, ROLEGUID VARCHAR, UPDATETIME CHAR(40), PRIMARY KEY (ROW_ID)) Reason: com.huawei.gauss.exception.GaussException: [errorCode=GS-00601, SQLState='28000', reason='errLine = 1, errColumn = 52, errMsg=Sql syntax error: auto increment column ROW_ID only support int type', sessionId=73]

7. If MySQL is on the same server as GaussDB, the tool needs to run as the omm user, and the data file directory must be granted to it.

2020-01-02 11:01:33 AM [Thread: synctask:8851] [Class: com.huawei.dss.syncmgr.importtask.LoadImportCommon >> Method: setSourceCount (LoadImportCommon.java:810)] ERROR: awk read data file lines failed. Failed result: awk: fatal: cannot open file `/root/dataMigration/yf1b_stdzwfw9_7_11/workflow_attachstorageconfig.dat' for reading (Permission denied)
2020-01-02 11:01:33 AM [Thread: synctask:8852] [Method: setSourceCount (LoadImportCommon.java:813)] ERROR: statusCode=-1 Result=awk: fatal: cannot open file `/root/dataMigration/yf/workflow_at.dat' for reading (Permission denied)
2020-01-02 11:01:33 AM [Thread: synctask:8852] [Method: doImport (LoadImportCommon.java:160)] ERROR: [yf1b_stdzwfw9_7_11.workflow_attachstorageconfig load importtask] task init failed

[root@database DataSync]# chown -R omm:dbgrp /opt/DataSync
[omm@database 11h-18m-20s]$ java -jar DSS.jar -I config/exp_obj.ini

8. When tables are created during online migration from MySQL, the field sizes of the GaussDB tables are slightly larger than those of the source tables.

9. Creating a table fails if a column name is a system reserved word.

2020-01-02 13:51:32 PM [Thread: main:17745] [Class: com.huawei.dss.envconvert.ddlvalidator.impl.DdlValidatorImpl >> Method: createHandler (DdlValidatorImpl.java:1806)] ERROR: [ADMIN.AUDIT_ONLINE_REGISTER table creation task] Build Table Errors, sql: CREATE TABLE ADMIN.AUDIT_ONLINE_REGISTER (BelongXiaQuCode VARCHAR, OperateUserName VARCHAR, OperateDate DATE, Row_ID INTEGER, YearFlag VARCHAR(16), RowGuid VARCHAR not null, ACCOUNTGUID VARCHAR, ..., LOCK VARCHAR, ..., COMPANYIDNUMBER VARCHAR, PRIMARY KEY (RowGuid)) Reason: com.huawei.gauss.exception.GaussException: [errorCode=GS-00601, SQLState='28000', reason='errLine = 1, errColumn = 709, errMsg=Sql syntax error: invalid column name 'LOCK'', sessionId=59]

10. Column types that are not covered by the DataSync conversion rules will also fail.

The bit type in the error below has no mapping in GaussDB; try changing it to BOOLEAN (entered as the string '0' or '1') or to an int type.

yf,portal_item,ADMIN,PORTAL_ITEM,ERROR,--
2020-01-02 13:51:32 PM [Thread: main:17243] [Class: com.huawei.dss.envconvert.ddlvalidator.impl.DdlValidatorImpl >> Method: getColStr (DdlValidatorImpl.java:1692)] ERROR: [ADMIN.PORTAL_ITEM table creation task] AUTOLOADSCRIPT column type: bit not in a type mapping file

11. CompleteReport.csv is a report generated after the import completes; it shows the tables and row counts that failed to export or import.

[omm@database DataSync]$ java -jar DSS.jar -I config/exp_obj.ini
DataSync (1.1.1), From Huawei! Copyright © Huawei Technologies Co., Ltd 2019. All Rights Reserved.
Start checking config.............[ok]
Start syncing DDL.................[ok]
Start converting environment......[ok]
Start syncing data................[ok]
Data syncing......................[0/935]
Data syncing......................[86/935]
Data syncing......................[165/935]
Data syncing......................[276/935]
Data syncing......................[377/935]
Data syncing......................[470/935]
Data syncing......................[586/935]
Data syncing......................[684/935]
Data syncing......................[844/935]
Data sync completed...............[935/935]
Start recovering environment......[ok]
Start collecting results..........[ok]
Task start time...................[2020-01-02 15:02:15]
Task end time.....................[2020-01-02 15:03:21]
Total spent time..................[66.245s]
Export successful data (rows).....[159689]
Export failed data (rows).........
Export data failed table count....[0]
Import successful data (rows).....[134979]
Import failed data (rows).........[224956]
Import failed table (tables)......[26]
Report details path...............[./logs/reports_2020-01-02/15h-02m-15s/]
[omm@database DataSync]$ pwd
/opt/DataSync

The "Import failed table (tables)...[26]" count comes from the rows marked FAILED in CompleteReport.csv: there are 26 table entries with import errors. Note that the reported export and import row counts can be inconsistent: the number of failed import rows does not match, and the source table clearly does not have that many rows, yet the report generated by DataSync contains many such lines. The report columns are:

REPORTTYPE,SOURCEDB,TARGETDB,SOURCETBL,TARGETTBL,EXPORTSTART,EXPORTEND,EXPORTCOST(MS),EXPORTEDROWS,EXPORTFAILEDROWS,EXPORTRESULT,IMPORTSTART,IMPORTEND,IMPORTCOST(MS),IMPORTEDROWS,IMPORTFAILEDROWS,IMPORTRESULT,ALLCOSTS(MS),ISFKENABLED
dump&load,yf1b_stdzwfw9_7_11,ADMIN,cert_metadata_lyg,CERT_METADATA_LYG,2020-01-02 15:02:...

12. View the tables that failed to import in the import_error_files directory

[omm@database ~]$ ll /home/omm/dataMigration/import_error_files/
total 28
-rw- 1 omm dbgrp 195 Jan 2 15:02 cert_metadata_lyg.dat
-rw- 1 omm dbgrp 201 Jan 2 15:03 cert_metadata_rd.dat
-rw- 1 omm dbgrp 193 Jan 2 15:03 cert_metadata_rg.dat
-rw- 1 omm dbgrp 193 Jan 2 15:02 cert_metadata_xy.dat
-rw- 1 omm dbgrp 192 Jan 2 15:02 cert_metadata_yz.dat
-rw- 1 omm dbgrp 69 Jan 2 15:03 jgzf_yearsj.dat
-rw- 1 omm dbgrp Jan 2 15:02 table_struct.
[omm@database ~]$ ll /home/omm/dataMigration/yf/
total 63168
-rw- 1 omm dbgrp 2111 Jan 2 15:03 asudit_online_evaluat.dat
-rw- 1 omm dbgrp 101454 Jan 2 15:02 asudit_project_docsnap.dat
-rw- 1 omm dbgrp 3158 Jan 2 15:02 asudit_sp_business.dat
-rw- 1 omm dbgrp 40147 Jan 2 15:03 asudit_task.dat

13. Data for failed tables can be imported manually via load:

[omm@database yf]$ zsql admin/admin_123@172.16.4.81:1888 -Q -c "LOAD DATA INFILE '/home/omm/dataMigration/yf/table_struct.dat' INTO TABLE \"ADMIN\".\"TABLE_STRUCT2\" FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n' columns enclosed by '\"' optionally ignore CHARSET=utf8 THREADS 10 ERRORS 0"
connected.
SQL> 6231 rows have been committed.
Complete the data load.
totally read rows: 6231  ignored rows: 0  loaded rows: 6231  committed rows: 6231  error rows: 0  skip rows: 0
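When many tables fail, the manual load can be scripted. The sketch below only prints the zsql command for each failed-table data file, rather than executing it; the connection string and load options are copied from the example above, and the sample directory with two empty .dat files is created purely for illustration:

```shell
# Print one zsql LOAD command per failed-table data file.
dir=$(mktemp -d)                       # stand-in for import_error_files/
touch "$dir/cert_metadata_lyg.dat" "$dir/jgzf_yearsj.dat"

for f in "$dir"/*.dat; do
  # derive the uppercase target table name from the lowercase file name
  tbl=$(basename "$f" .dat | tr '[:lower:]' '[:upper:]')
  echo "zsql admin/admin_123@172.16.4.81:1888 -Q -c \"LOAD DATA INFILE '$f' INTO TABLE \\\"ADMIN\\\".\\\"$tbl\\\" FIELDS TERMINATED BY '|' THREADS 10 ERRORS 0\""
done
rm -r "$dir"
```

Review the printed commands (table names, separators, thread counts) before piping them to a shell.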
