
Summary of Several Problems in Data Migration


Here is a summary of some of the problems encountered during last night's data migration. Although the work was completed according to plan, and even ahead of schedule, it was a reminder that even a small mis-operation can cause serious impact.

In general terms, the task was to move the core business servers from one data center to another. Because of the importance of the environment and the mix of hardware and software involved, the work was divided into the following three technical directions.

Migrate part of the core business from Solaris to the x86 platform, and upgrade the database version at the same time.

Migrate part of the core business already on the x86 platform. The work in this direction is relatively simple: essentially a primary/standby switchover.

Consolidate part of the x86 environment, for example merging several databases into a single database after integration.

All of this work had to be completed within a few hours, with no data problems allowed.

Technical solution 1 is a cross-platform database migration and upgrade, and we used a hybrid combination of technologies. For small tables, Datapump handles a full synchronization; for medium-sized tables, materialized views built on prebuilt tables provide incremental refresh; for large tables, OGG replication is used. Why are medium and large tables treated separately, and could OGG have been used for everything? That was decided mainly by team factors, rather than by technical feasibility alone.
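As a rough illustration of the prebuilt materialized view approach (the DB link name source_link is an assumption here; the table name follows the example in Problem 1 below), the setup looks something like this:

-- On the source database: a materialized view log is required for fast refresh
create materialized view log on gameuser.peak_loginlog with primary key;

-- On the target database: the table already exists (loaded in full beforehand),
-- so the materialized view is created on the prebuilt table and refreshed over a DB link
create materialized view gameuser.peak_loginlog
on prebuilt table
refresh fast
as select * from gameuser.peak_loginlog@source_link;

-- An incremental (fast) refresh then pulls only the changes recorded in the log
exec dbms_mview.refresh('GAMEUSER.PEAK_LOGINLOG','F')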

Technical solution 2 is relatively conventional: a primary/standby switchover. There is not much to say about the switchover process itself, and there is no reason for it to fail halfway. As long as the configuration is correct, it comes down to a single command in DG Broker.
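For reference, a minimal sketch of that single-command switchover in DGMGRL (the database names are placeholders, not the actual configuration):

DGMGRL> connect sys/***@primary_db
DGMGRL> show configuration;
DGMGRL> switchover to standby_db;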

Technical solution 3 involves data consolidation, with a database upgrade on top of it. If the data volume is small, Datapump is sufficient; if the data volume is at the TB level, meeting this kind of consolidation-plus-upgrade requirement becomes difficult, and in the vast majority of cases I have seen it is done through incremental or logical replication.
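As an illustration of the Datapump route for the smaller consolidations (schema, tablespace, and file names are placeholders), the export/import pair might look like this:

expdp system/*** schemas=GAMEUSER directory=dp_dir dumpfile=gameuser_%U.dmp parallel=4 logfile=exp_gameuser.log
impdp system/*** directory=dp_dir dumpfile=gameuser_%U.dmp parallel=4 remap_tablespace=OLD_TS:NEW_TS logfile=imp_gameuser.log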

The migration requirements were roughly as described above: the maintenance window was limited, everything had to be completed within 3 hours, and the outcome had to be either success or rollback.

Below are several problems encountered during the migration. Many of them are quite representative, and they point to areas we need to keep improving when we design technical solutions.

Problem 1:

While using incremental refresh of a materialized view on a prebuilt table, another incremental refresh was attempted during the final data verification phase, and the following error was thrown.

SQL> exec dbms_mview.refresh('GAMEUSER.PEAK_LOGINLOG','F')

BEGIN dbms_mview.refresh('GAMEUSER.PEAK_LOGINLOG','F'); END;

*

ERROR at line 1:

ORA-04062: timestamp of package "SYS.DBMS_SNAPSHOT_UTL" has been changed

ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2809

ORA-06512: at "SYS.DBMS_SNAPSHOT", line 3025

ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2994

ORA-06512: at line 1
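ORA-04062 usually means the session is still holding an old timestamp for a package that has been recompiled or lives on the remote side. A common workaround (an assumption here, not something stated in the article) is simply to reconnect the session, or to switch the session to signature-based dependency checking and retry:

-- not from the article: use signature-based instead of timestamp-based dependency checking, then retry
alter session set remote_dependencies_mode = SIGNATURE;
exec dbms_mview.refresh('GAMEUSER.PEAK_LOGINLOG','F')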

Problem 2:

Some of the materialized view incremental refreshes also hung. Although there was not much data in the materialized view log on the primary database, the refresh process was very slow.

exec dbms_mview.refresh('TLBB.PURSE_RESERVE_RECORD','F')

Both of these problems are very sensitive during a migration where time waits for no one, so if the table is not too large, it is simpler to resynchronize it in full or redo it with Datapump.

Another trick: if the table being refreshed is extremely large, check the materialized view log first. If the log is empty, you can feel much more at ease, because even if a small problem occurs during the refresh, you still know where you stand.
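A quick way to do that check (the log table name follows Oracle's default MLOG$_ naming for the table in this example):

-- count the pending changes recorded in the materialized view log on the source side
select count(*) from tlbb.mlog$_purse_reserve_record;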

Problem 3:

An error was thrown when exporting data from the source database with Datapump. For migration projects that rely on Datapump, using it well is actually not that easy. Here is a basic export command. Note that on 10g there can be problems; for example, using parallelism may report overflow errors and fail.

expdp xx/xxx dumpfile=gameuser.dmp directory=dp_dir parfile=gameuser.par parallel=4
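One detail worth noting here (an observation, not something the article states): when parallel is greater than 1, the dump file name generally needs the %U substitution variable so that each worker process can write its own file, for example:

expdp xx/xxx dumpfile=gameuser_%U.dmp directory=dp_dir parfile=gameuser.par parallel=4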

Problem 4:

This problem appeared after the primary/standby switchover. The alert log shows it as an archiving problem, but in fact the flash recovery area had enough space and the archive destination was valid.

Mon Jul 24 04:10:13 2017

ARC0: LGWR is actively archiving destination LOG_ARCHIVE_DEST_1

ARCH: Archival stopped, error occurred. Will continue retrying

ORACLE Instance acccomdb-Archival Error

ORA-16014: log 1 sequence# 31829 not archived, no available destinations

ORA-00312: online log 1 thread 1: '/u01/app/oracle/oradata/acccomdb/redo01.log'

Mon Jul 24 04:13:11 2017

Starting background process SMCO

Mon Jul 24 04:13:11 2017

SMCO started with pid=39, OS id=51303

Preliminary analysis showed that the archive destination settings were not standard; several archive destination parameters had been set, and log_archive_dest_1 and log_archive_dest_2 actually have different meanings and uses. The fix was to clear these destination parameters and let DG Broker reinitialize them. The effect was immediate, and the problem was solved in one step.
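A minimal sketch of that kind of cleanup (the destination numbers and names depend on the actual configuration; this is only illustrative):

-- clear the manually set archive destination parameters
SQL> alter system set log_archive_dest_1='' scope=both;
SQL> alter system set log_archive_dest_2='' scope=both;

-- then let DG Broker rebuild the destinations from its own configuration
DGMGRL> disable configuration;
DGMGRL> enable configuration;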

Problem 5:

Problems with DB links. To be honest, DB links criss-cross between multiple databases, and the result feels like a spider's web. We can use tnsping to verify the tnsnames.ora configuration, but the fact that the port is reachable does not necessarily prove that the TNS configuration is correct.
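A slightly stronger check than tnsping alone (assuming, for illustration, that the link GCDB from the errors below uses a TNS alias of the same name) is to look at the link definitions and run a trivial query across the link:

$ tnsping GCDB

SQL> select owner, db_link, host from dba_db_links;
SQL> select 1 from dual@GCDB;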

For example, the following errors are both DB link problems, but the error messages are quite different.

java.sql.SQLException: ORA-04053: error occurred when validating remote object GAMECARD.USECARDMAIN@DB_SWORD_TEST0

ORA-00604: error occurred at recursive SQL level 1

ORA-02019: connection description for remote database not found

java.sql.SQLException: ORA-04045: errors during recompilation/revalidation of APP_TL_SDE_128.CHONG_KAMI_RECHARGE_NEW

ORA-04052: error occurred when looking up remote object TLBB.USER_POINT@GCDB

ORA-00604: error occurred at recursive SQL level 3

ORA-12514: TNS:listener does not currently know of service requested in connect descriptor

Fixing this kind of problem takes time, and the troubleshooting can be pulled off course by the misleading details in the error messages. We need to calm down and look at the information more carefully.
