
How to consolidate and migrate multiple Oracle 10g databases to 11g


This article explains how to consolidate and migrate multiple Oracle 10g databases to 11g. The explanation is kept simple and clear; follow along to learn the approach.

In data migration, beyond cross-platform, full, and incremental migrations, there is a category that raises the difficulty another notch: consolidation migration, where, for example, two databases before the migration become one afterwards. Add a smooth database version upgrade on top of that, and the solution deserves careful thought.

If the two source databases are small, using Data Pump directly is a perfectly valid approach. Its biggest advantages are simple operation and strong controllability, but its bottleneck is equally obvious: as the data volume grows, migration time grows linearly. Being a logical migration, it also has little dependence on the version upgrade itself.
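For that small-database case, a minimal Data Pump round trip looks roughly like this (a sketch; the directory object, schema list, and file names here are illustrative, not from the original environment):

# On the 10g source: export the application schemas
expdp system/oracle directory=DATA_PUMP_DIR dumpfile=peak_full.dmp logfile=exp_peak_full.log schemas=PEAK_APP

# Copy the dump to the target server, then import into the 11g database
impdp system/oracle directory=DATA_PUMP_DIR dumpfile=peak_full.dmp logfile=imp_peak_full.log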

If the two source databases are very large, say 5 TB each for 10 TB combined, and you are given an hour to finish while also performing a smooth database version upgrade, it becomes quite difficult.

Let's briefly sort out where most of the time is spent.

1. Data export. We also need to provision extra disk space and storage, basically more than 200% redundant space. Call it a ballpark 30 minutes.

2. Transferring the dump to the target server. This depends on several factors, such as the source server's network link configuration and bandwidth ceiling. Assuming none of that is a problem, a ballpark estimate is at least 60 minutes.

3. If we get this far as planned, the window is nearly gone before the real data import even starts. Going ahead with the import anyway: based on current PCIe SSD POC figures, importing 5 TB in noarchivelog mode would ideally take at least 500 minutes.

So the scheme above is destined to be a failed migration, but there is plenty in it we can optimize until the requirements are met.

Setting that scheme aside, recall briefly what database migration and database upgrade really are. Data can be roughly divided into system tablespace data (SYSTEM, SYSAUX, UNDO) and application data (tables, indexes, and so on), all stored as tablespaces and data files. For a cross-platform migration we care about the logical consistency of the data; when the platforms match, we try to preserve physical consistency as far as possible. In a consolidation migration, full physical consistency is hard to achieve, but we can get as close to it as possible.

Then there is the essence of a database upgrade: it is, at heart, an upgrade of the data dictionary. For the data files themselves there is, simply put, no difference.

So when a database is upgraded from a lower version to a higher one, say 10g to 11g, the data files stay essentially the same; what changes is the data dictionary. If we focus only on that part, the migration has a clear direction.

So how do we fix the failed scheme above? We can drastically cut the export time and the dump transfer time; put more boldly, we can skip exporting the data and skip transferring the dump altogether. Is that possible? The answer is clearly yes, and it is to make full use of Data Guard.
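With a physical standby of each source built on the target server ahead of time and kept in sync, the data files are already sitting there at cutover, so nothing needs exporting or shipping. The activation itself is then just a few statements (a sketch only; the exact cutover steps vary by version and Data Guard configuration):

-- On the synchronized physical standby at cutover time
alter database recover managed standby database cancel;
alter database activate standby database;
alter database open;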

This way, the heavy lifting is already in place before the formal migration. What remains during the window is the upgrade of the data dictionary, and the migration itself is carried out via transportable tablespaces.
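The dictionary upgrade itself is the standard 10g-to-11g catalog upgrade run from the 11g ORACLE_HOME. A rough sketch, assuming the pre-upgrade checks (utlu112i.sql and friends) have already been handled:

-- From the 11g home, mount the database in upgrade mode and run the catalog upgrade
startup upgrade
@?/rdbms/admin/catupgrd.sql
-- catupgrd.sql shuts the database down when done; restart and recompile invalid objects
startup
@?/rdbms/admin/utlrp.sql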

Suppose the databases to migrate are peak and extradb, and we plan to consolidate them as peak. The server will then host the following instances. Obviously there are two databases named peak, and they do not conflict because they run from different ORACLE_HOMEs.

$ ps -ef | grep smon
oracle 77606     1  0 Jul03 ?        00:00:03 ora_smon_extradb
oracle 97643     1  0 14:39 ?        00:00:00 ora_smon_peak
oracle 98133     1  0 14:49 ?        00:00:00 ora_smon_peak
oracle 98486 98195  0 15:15 pts/0    00:00:00 grep smon

Based on the intended end state of the target database, the layout under oradata looks roughly like this:

drwxr-xr-x 2 oracle oinstall 4096 Jul 14 15:04 extradb
drwxr-xr-x 2 oracle oinstall 4096 Jul 14 15:01 peak
drwxr-xr-x 2 oracle oinstall 4096 Jul 14 14:46 peak_old

peak holds the final data files: the application data files of both extradb and peak all live under the peak directory, while the system tablespaces of extradb stay under the extradb directory and the data dictionary of the source peak database sits under peak_old.

Moving the data files is easy to do on the standby; dynamic SQL along the following lines generates the rename statements.

select 'alter database rename file ' || chr(39) || name || chr(39) || ' to ' || chr(39) || replace(name, '/extradb/', '/peak/') || chr(39) || ';' from v$datafile;
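Each generated line then looks like the following (the file name here is illustrative) and can be executed against the mounted target instance:

alter database rename file '/u01/app/oracle/oradata/extradb/extra_data01.dbf' to '/u01/app/oracle/oradata/peak/extra_data01.dbf';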

At this point the files in the peak directory are like guests at a party: everyone is sitting together, but they have not been introduced and nothing connects them yet.

Before the migration a basic self-containment check is needed. This should of course be done ahead of time, but verify it again at migration time at the latest.

exec dbms_tts.transport_set_check(ts_list => 'USERS,PEAK_DATA,PEAK_INDEX,PEAK_CHANNEL_DATA,PEAK_CHANNEL_INDEX,PEAK_NEW_DATA,PEAK_NEW_INDEX', incl_constraints => TRUE, full_check => TRUE);

Then query transport_set_violations to see whether any conflicts are reported:

select * from transport_set_violations;

During the migration the tablespaces must be set to read only; the following dynamic SQL generates the statements in batch.

select 'alter tablespace ' || tablespace_name || ' read only;' from dba_tablespaces where tablespace_name not in ('SYSTEM', 'SYSAUX', 'UNDOTBS1', 'TEMP');

Export the data dictionary metadata:

exp \'sys/oracle as sysdba\' file=exp_tts_peak.dmp transport_tablespace=y tablespaces=USERS,PEAK_DATA,PEAK_INDEX,PEAK_CHANNEL_DATA,PEAK_CHANNEL_INDEX,PEAK_NEW_DATA,PEAK_NEW_INDEX log=exp_tts_peak.log

This dump is actually very small, and the import process is very short.

The next part of the work is fiddly: initializing the basic user information and preparing to import the data dictionary metadata. One thing worth calling out is the USERS tablespace. Consolidating it is bound to conflict, so if conditions permit, rename the tablespace in one source to avoid a clash that would block the import.
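Renaming a tablespace is a one-liner from 10g onward; for example, on the extradb side before the metadata export (the new name here is illustrative):

alter tablespace users rename to users_extradb;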

Part of the import command is shown below. This step is the final mapping; just like at a party, introductions are made and connections formed.

imp \'sys/oracle as sysdba\' file=exp_tts_peak.dmp transport_tablespace=y tablespaces=USERS,xxxxx datafiles=/u01/app/oracle/oradata/peak/peak_new_data04.dbf,/u01/app/oracle/oradata/peak/peak_new_index04.dbf log=imp_tts_peak.log

This is the core of the migration; success or failure hinges on it.

After the migration, remember to set the tablespaces back to read write. With that, most of the migration work has effectively been prepared in advance.
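The read-write flip can be generated the same way as the read-only script, mirroring the earlier dynamic SQL:

select 'alter tablespace ' || tablespace_name || ' read write;' from dba_tablespaces where tablespace_name not in ('SYSTEM', 'SYSAUX', 'UNDOTBS1', 'TEMP');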

With everything fully prepared, getting it all done in half an hour is not a problem.

Thank you for reading. That concludes how to consolidate and migrate multiple Oracle 10g databases to 11g. After studying this article you should have a deeper understanding of the approach, though the specifics need to be verified in practice.
