
Analyzing Oracle Data Pump Data Import Behavior from a SQLFILE


In the previous article, "Three Tips for Using the Data Pump" (http://space.itpub.net/17203031/viewspace-768245), we introduced the SQLFILE parameter, which makes the Data Pump write out the DDL statements and the whole series of SQL statements it would run. In this article, building on that result, we analyze the generated SQLFILE to see how Oracle Data Pump actually imports data.

1. Environment and background

We again use Oracle 11g for the experiment.

SQL> select * from v$version;

BANNER

Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production

PL/SQL Release 11.2.0.1.0 - Production

CORE 11.2.0.1.0 Production

TNS for Linux: Version 11.2.0.1.0 - Production

NLSRTL Version 11.2.0.1.0 - Production

Continuing from the previous article, we reuse the same directory object and dmp files.

SQL> select directory_name, directory_path from dba_directories where directory_name='DUMPTEST';

DIRECTORY_NAME DIRECTORY_PATH

DUMPTEST /dumptest

[oracle@bspdev dumptest]$ ls -l

-rw-r----- 1 oracle oinstall 109694976 Aug 12 03:34 scott.dmp

-rw-r----- 1 oracle oinstall 109690880 Aug 12 05:32 scott_par.dmp

We call impdp to generate the SQLFILE. Note that when SQLFILE is specified, impdp does not actually import anything; it only writes out the DDL and SQL statements it would execute.

[oracle@bspdev dumptest]$ impdp \"/ as sysdba\" directory=dumptest dumpfile=scott.dmp sqlfile=scott_all.sql

Import: Release 11.2.0.1.0 - Production on Thu Aug 15 05:02:47 2013

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production

With the Partitioning, OLAP, Data Mining and Real Application Testing options

Master table "SYS"."SYS_SQL_FILE_FULL_01" successfully loaded/unloaded

Starting "SYS"."SYS_SQL_FILE_FULL_01": "/******** AS SYSDBA" directory=dumptest dumpfile=scott.dmp sqlfile=scott_all.sql

Processing object type SCHEMA_EXPORT/USER

Processing object type SCHEMA_EXPORT/SYSTEM_GRANT

Processing object type SCHEMA_EXPORT/ROLE_GRANT

Processing object type SCHEMA_EXPORT/DEFAULT_ROLE

Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA

Processing object type SCHEMA_EXPORT/TABLE/TABLE

Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX

Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT

Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS

Processing object type SCHEMA_EXPORT/VIEW/VIEW

Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT

Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS

Job "SYS". "SYS_SQL_FILE_FULL_01" successfully completed at 05:02:59

[oracle@bspdev dumptest]$ ls -l | grep scott_all

-rw-r--r-- 1 oracle oinstall 71324 Aug 15 05:02 scott_all.sql

We retrieve the generated sql file via FTP for analysis. The file is fairly large, so we will walk through it in several segments.
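
As an aside, the SQLFILE parameter can be combined with Data Pump's INCLUDE/EXCLUDE filters to extract just one class of DDL from a dump. A small sketch against the same dump file (the output name scott_idx.sql is our own choice):

[oracle@bspdev dumptest]$ impdp \"/ as sysdba\" directory=dumptest dumpfile=scott.dmp include=INDEX sqlfile=scott_idx.sql

This writes only the index DDL, which is handy when rebuilding a single object type by hand.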

2. Session settings and trace events

The first thing we see in the generated file is a series of ALTER SESSION fragments setting trace events.

-- CONNECT SYS

ALTER SESSION SET EVENTS '10150 TRACE NAME CONTEXT FOREVER, LEVEL 1';

ALTER SESSION SET EVENTS '10904 TRACE NAME CONTEXT FOREVER, LEVEL 1';

ALTER SESSION SET EVENTS '25475 TRACE NAME CONTEXT FOREVER, LEVEL 1';

ALTER SESSION SET EVENTS '10407 TRACE NAME CONTEXT FOREVER, LEVEL 1';

ALTER SESSION SET EVENTS '10851 TRACE NAME CONTEXT FOREVER, LEVEL 1';

ALTER SESSION SET EVENTS '22830 TRACE NAME CONTEXT FOREVER, LEVEL 199';

A user running Data Pump needs privileges such as those bundled in the DATAPUMP_IMP_FULL_DATABASE role. These events clearly belong to Oracle's internal preparation of the session environment.

The meaning of event numbers in the 10000-10999 range can be looked up in a few ways.
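
On a Linux installation, one convenient method is the oerr utility shipped with the Oracle software (assuming $ORACLE_HOME/bin is on the PATH); event numbers are looked up just like error codes:

[oracle@bspdev dumptest]$ oerr ora 10150
[oracle@bspdev dumptest]$ oerr ora 10904

Looking up each of the six events gives the descriptions below.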

ORA-10150: import exceptions

ORA-10904: Allow locally managed tablespaces to have user allocation

ORA-10407: enable datetime TIME datatype creation

ORA-10851: Allow Drop command to drop queue tables

ORA-22830: events that enable VARRAY columns to be created as OCT

ORA-25475: events related to Streams

The main purpose of setting these events is to prepare and normalize the environment: they guard the job against known problem areas and smooth the import process.

3. User creation and privilege grants

Data Pump automatically creates the user and grants it the appropriate privileges. First comes user creation.

-- new object type path: SCHEMA_EXPORT/USER

-- CONNECT SYSTEM

CREATE USER "SCOTT" IDENTIFIED BY VALUES 'S:CB685EC52372362B49B7EC43AB0B04BCAF2C71F283C5A558FF8E430F5365;F894844C34402B67'
DEFAULT TABLESPACE "USERS"
TEMPORARY TABLESPACE "TEMP";

Note that the password appears only as a verifier hash (IDENTIFIED BY VALUES), preserving the original credential. At the same time, both the default tablespace and the temporary tablespace are specified explicitly. This is why, when importing with Data Pump, we do not have to create the user beforehand, but the tablespaces must be planned in advance: if the tablespace planning is wrong and user creation fails, everything that follows becomes a string of failure messages and the import is pointless.
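
As a side check, we can confirm that the password verifier travels unchanged by comparing it in the source and target databases. A minimal sketch, assuming SYS access (in 11g the SHA-1 verifier is stored in SYS.USER$.SPARE4 and the legacy hash in SYS.USER$.PASSWORD):

SQL> select name, password, spare4 from sys.user$ where name = 'SCOTT';

The values on the target should match the source once the import completes.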

After the user is created, grants are issued, grouped by type: system privileges, roles, and object privileges.

-- new object type path: SCHEMA_EXPORT/SYSTEM_GRANT

GRANT EXECUTE ANY PROCEDURE TO "SCOTT";

GRANT CREATE VIEW TO "SCOTT";

GRANT SELECT ANY TABLE TO "SCOTT";

GRANT UNLIMITED TABLESPACE TO "SCOTT";

-- new object type path: SCHEMA_EXPORT/ROLE_GRANT

GRANT "CONNECT" TO "SCOTT";

GRANT "RESOURCE" TO "SCOTT";

GRANT "SELECT_CATALOG_ROLE" TO "SCOTT";

-- new object type path: SCHEMA_EXPORT/DEFAULT_ROLE

ALTER USER "SCOTT" DEFAULT ROLE ALL;

Note one common pitfall: Data Pump replays the grants the user originally received, but it does not create roles. If a granted role does not already exist in the target database, the GRANT statement raises an error.
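
So if the dump contains grants of a custom role, a safe precaution is to create the role on the target before running impdp. A sketch with a hypothetical role name APP_READER:

SQL> CREATE ROLE "APP_READER";

With the role in place, the replayed GRANT "APP_READER" TO "SCOTT" succeeds instead of raising an error.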

4. Schema processing

Next, a PL/SQL anonymous block calls a stored procedure owned by the SYS user.

-- new object type path: SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA

-- CONNECT SCOTT

BEGIN
sys.dbms_logrep_imp.instantiate_schema(schema_name => SYS_CONTEXT('USERENV','CURRENT_SCHEMA'), export_db_name => 'WILSON', inst_scn => '3501442');
COMMIT;
END;
/

This procedure is internal and its code is wrapped, so the author cannot say what it does; one can only guess that it is some schema-instantiation step of Data Pump processing. Note that the parameters include the source database name and an SCN.
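
We can at least confirm that the implementation is hidden by looking at the package body in the data dictionary. A quick check, assuming SELECT access on DBA_SOURCE:

SQL> select text from dba_source where owner='SYS' and name='DBMS_LOGREP_IMP' and type='PACKAGE BODY' and rownum <= 3;

The returned source should appear as wrapped text rather than readable PL/SQL.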

5. Table DDL creation

Next come the table DDL statements. The DDL of every object is written out in full, including segment attributes, storage clauses, and the tablespace.

-- new object type path: SCHEMA_EXPORT/TABLE/TABLE

-- CONNECT SYS

CREATE TABLE "SCOTT". "DEPT"

("DEPTNO" NUMBER (2pc0)

"DNAME" VARCHAR2 (14 BYTE)

"LOC" VARCHAR2 (13 BYTE)

) SEGMENT CREATION IMMEDIATE

PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING

STORAGE (INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645

PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)

TABLESPACE "USERS"

CREATE TABLE "SCOTT". "EMP"

("EMPNO" NUMBER (4. 0)

"ENAME" VARCHAR2 (10 BYTE)

"JOB" VARCHAR2 (9 BYTE)

"MGR" NUMBER (4. 0)

"HIREDATE" DATE

"SAL" NUMBER (7. 2)

"COMM" NUMBER (7. 2)

"DEPTNO" NUMBER (2. 0)

) SEGMENT CREATION IMMEDIATE

PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING

STORAGE (INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645

PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)

TABLESPACE "USERS"

A few points deserve attention here:

The DDL used to create each table is "full text": it spells out every parameter that defines the table, including features we rarely touch or that other versions may not support. If you run into compatibility problems, you can edit the SQLFILE directly.

Tablespaces: the tablespace of each table segment is again specified explicitly. If the tablespace does not exist, Data Pump cannot create the table, reports an error, and the subsequent data import for that table fails. Ensuring that the tablespaces exist is therefore a prerequisite (see the sketch after these notes).

Only the column definitions appear at this stage; indexes and referential constraints come later.
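
When the target database cannot host the same tablespace, an alternative to hand-editing the SQLFILE is the REMAP_TABLESPACE parameter of impdp, which rewrites the tablespace references during import. A sketch, assuming a hypothetical target tablespace USERS2 that already exists:

[oracle@bspdev dumptest]$ impdp \"/ as sysdba\" directory=dumptest dumpfile=scott.dmp remap_tablespace=USERS:USERS2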

6. Index creation

After the tables, the index objects are created: indexes of all types are created together in this one step. Strictly speaking, an index is also a segment, so segment parameters such as the initial extent size have to be defined for it as well.

-- new object type path: SCHEMA_EXPORT/TABLE/INDEX/INDEX

-- CONNECT SCOTT

CREATE UNIQUE INDEX "SCOTT"."PK_DEPT" ON "SCOTT"."DEPT" ("DEPTNO")
PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "USERS" PARALLEL 1;

ALTER INDEX "SCOTT"."PK_DEPT" NOPARALLEL;

CREATE UNIQUE INDEX "SCOTT"."PK_EMP" ON "SCOTT"."EMP" ("EMPNO")
PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "USERS" PARALLEL 1;

ALTER INDEX "SCOTT"."PK_EMP" NOPARALLEL;

Note that each index is created with PARALLEL 1 and then explicitly altered to NOPARALLEL afterwards.
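
The net effect can be verified in the dictionary after a real import: every index built this way should report a degree of 1. A quick check (DEGREE is a string column in DBA_INDEXES):

SQL> select index_name, degree from dba_indexes where owner = 'SCOTT';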

7. Constraint creation

After the indexes, the constraint objects are created, including primary keys.

-- new object type path: SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT

-- CONNECT SYS

ALTER TABLE "SCOTT". "DEPT" ADD CONSTRAINT "PK_DEPT" PRIMARY KEY ("DEPTNO")

USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255

STORAGE (INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645

PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)

TABLESPACE "USERS" ENABLE

ALTER TABLE "SCOTT". "EMP" ADD CONSTRAINT "PK_EMP" PRIMARY KEY ("EMPNO")

USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255

STORAGE (INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645

PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)

TABLESPACE "USERS" ENABLE

8. "Import" of Index statistics

When importing, Oracle Data Pump "imports" statistics along with everything else. Although Oracle could simply regather statistics after the rows have been inserted, it chooses to import the saved statistics instead.
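
If you would rather regather fresh statistics on the target, the statistics can also be filtered out of the import entirely with the EXCLUDE parameter; a sketch against the same dump:

[oracle@bspdev dumptest]$ impdp \"/ as sysdba\" directory=dumptest dumpfile=scott.dmp exclude=statistics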

First, Data Pump imports the statistics of the index.

-- new object type path: SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS

DECLARE
I_N VARCHAR2(60);
I_O VARCHAR2(60);
c DBMS_METADATA.T_VAR_COLL;
df varchar2(21) := 'YYYY-MM-DD:HH24:MI:SS';
BEGIN
DELETE FROM "SYS"."IMPDP_STATS";
I_N := 'PK_DEPT';
I_O := 'SCOTT';
INSERT INTO "SYS"."IMPDP_STATS" (type,version,flags,c1,c2,c3,c5,n1,n2,n3,n4,n5,n6,n7,n8,n9,n10,n11,n12,d1,cl1) VALUES (...);
DBMS_STATS.IMPORT_INDEX_STATS('"'||I_O||'"','"'||I_N||'"',NULL,'"IMPDP_STATS"',NULL,'"SYS"');
DELETE FROM "SYS"."IMPDP_STATS";
END;
/

This is a PL/SQL anonymous block. To move statistics around, Oracle needs an intermediate staging table; here it uses SYS.IMPDP_STATS. The block empties that table, inserts a single row of statistics, calls DBMS_STATS.IMPORT_INDEX_STATS to load them into the dictionary, and empties the table again.
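
The same staging-table mechanism is available to us through the documented DBMS_STATS API. A minimal sketch that stages and re-imports the statistics of SCOTT.DEPT by hand (the staging table name MY_STATS is our own):

SQL> exec DBMS_STATS.CREATE_STAT_TABLE(ownname => 'SCOTT', stattab => 'MY_STATS')
SQL> exec DBMS_STATS.EXPORT_TABLE_STATS(ownname => 'SCOTT', tabname => 'DEPT', stattab => 'MY_STATS')
SQL> exec DBMS_STATS.IMPORT_TABLE_STATS(ownname => 'SCOTT', tabname => 'DEPT', stattab => 'MY_STATS')

Data Pump's IMPDP_STATS is essentially such a statistics table owned by SYS.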

IMPDP_STATS is an internal utility table of Oracle.

SQL> desc impdp_stats
Name    Type         Nullable Default Comments
------- ------------ -------- ------- --------
STATID  VARCHAR2(30) Y
TYPE    CHAR(1)      Y
VERSION NUMBER       Y
FLAGS   NUMBER       Y

(for reasons of space, the remaining columns are omitted.)

Each index gets one such PL/SQL anonymous block.

9. View creation

After the index statistics, the view objects are created.

-- new object type path: SCHEMA_EXPORT/VIEW/VIEW

-- CONNECT SCOTT

CREATE FORCE VIEW "SCOTT"."XX" ("OWNER", "OBJECT_NAME", "SUBOBJECT_NAME", "OBJECT_ID", "DATA_OBJECT_ID", "OBJECT_TYPE", "CREATED", "LAST_DDL_TIME", "TIMESTAMP", "STATUS", "TEMPORARY", "GENERATED", "SECONDARY", "NAMESPACE", "EDITION_NAME") AS
select "OWNER", "OBJECT_NAME", "SUBOBJECT_NAME", "OBJECT_ID", "DATA_OBJECT_ID", "OBJECT_TYPE", "CREATED", "LAST_DDL_TIME", "TIMESTAMP", "STATUS", "TEMPORARY", "GENERATED", "SECONDARY", "NAMESPACE", "EDITION_NAME" from m;

10. Foreign key constraints

Foreign key relationships are created after the views.

-- new object type path: SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT

-- CONNECT SYS

ALTER TABLE "SCOTT". "EMP" ADD CONSTRAINT "FK_DEPTNO" FOREIGN KEY ("DEPTNO")

REFERENCES "SCOTT". "DEPT" ("DEPTNO") ENABLE

ALTER TABLE "SCOTT". "T_CHILD2" ADD CONSTRAINT "FK_CHILD2_MASTER" FOREIGN KEY ("MID")

REFERENCES "SCOTT". "T_MASTER" ("ID") ENABLE

Each foreign key is enabled as soon as it is added. Note that if the data volume is large, validating these constraints can take a long time, especially if the supporting indexes are not yet in place; see the sketch below.
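
When hand-editing the SQLFILE for a large system, one hedged workaround is to change ENABLE to ENABLE NOVALIDATE, so that existing rows are not re-checked when the constraint is added (new DML is still enforced). For example, the first statement above could be rewritten as:

ALTER TABLE "SCOTT"."EMP" ADD CONSTRAINT "FK_DEPTNO" FOREIGN KEY ("DEPTNO")
REFERENCES "SCOTT"."DEPT" ("DEPTNO") ENABLE NOVALIDATE;

The trade-off is that Oracle no longer guarantees the pre-existing rows satisfy the constraint.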

11. Importing table statistics

Next come the table statistics. Unlike an index, a table's statistics are staged through several IMPDP_STATS rows: one for the table itself and one per column, as the block below shows.

-- new object type path: SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS

DECLARE
c varchar2(60);
nv varchar2(1);
df varchar2(21) := 'YYYY-MM-DD:HH24:MI:SS';
s varchar2(60) := 'SCOTT';
t varchar2(60) := 'DEPT';
p varchar2(1);
sp varchar2(1);
stmt varchar2(300) := 'INSERT INTO "SYS"."IMPDP_STATS" (type,version,c1,c2,c3,c4,c5,n1,n2,n3,n4,n5,n6,n7,n8,n9,n10,n11,d1,r1,r2,ch2,flags) VALUES (''C'',5,:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13,:14,:15,:16,:17,:18,:19,:20,:21)';
BEGIN
DELETE FROM "SYS"."IMPDP_STATS";
INSERT INTO "SYS"."IMPDP_STATS" (type,version,flags,c1,c2,c3,c5,n1,n2,n3,n4,n10,n11,n12,d1) VALUES (...,4,5,20,4,NULL,NULL,NULL,TO_DATE('2012-06-23 01:37:56',df));
c := 'DEPTNO';
EXECUTE IMMEDIATE stmt USING t,p,sp,c,s,4,.25,4,4,0,10,40,3,nv,nv,nv,TO_DATE('2012-06-23 01:37:56',df),'C10B','C129',nv,2;
c := 'DNAME';
EXECUTE IMMEDIATE stmt USING t,p,sp,c,s,4,.25,4,4,0,3.38863550087541E+35,4.32285038677786E+35,10,nv,nv,nv,TO_DATE('2012-06-23 01:37:56',df),'4143434F554E54494E47','53414C4553',nv,2;
c := 'LOC';
EXECUTE IMMEDIATE stmt USING t,p,sp,c,s,4,.25,4,4,0,3.44300505052090E+35,4.06405544089997E+35,8,nv,nv,nv,TO_DATE('2012-06-23 01:37:56',df),'424F53544F4E','4E455720594F524B',nv,2;
DBMS_STATS.IMPORT_TABLE_STATS('"SCOTT"','"DEPT"',NULL,'"IMPDP_STATS"',NULL,NULL,'"SYS"');
DELETE FROM "SYS"."IMPDP_STATS";
END;
/

Unlike the index case, the number of columns varies from table to table, so each PL/SQL anonymous block imports only a limited number of column statistics; a table with many columns may be split across several anonymous blocks.

Now consider a question about these statistics: why does Oracle bother saving them in the dmp file, taking up space, and import them after the data, when it could simply gather them again?

The author believes Oracle weighed two considerations:

First, time. If a table is large and its structure complex, gathering statistics takes a long time: the cost of gathering grows with the size of the table. Importing saved statistics, by contrast, costs roughly the same per entry regardless of table size. This must have been a deliberate trade-off on Oracle's part.

Second, from the standpoint of CBO plan generation, "importing" statistics is also reasonable. Sometimes we want to fix the statistics in place, that is, keep them from being "refreshed in a timely manner", so as to stabilize execution plans.
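
In that spirit, once the imported statistics are in place they can be pinned explicitly; a minimal sketch using the documented DBMS_STATS call:

SQL> exec DBMS_STATS.LOCK_TABLE_STATS(ownname => 'SCOTT', tabname => 'DEPT')

Locked statistics are skipped by regular gathering jobs until they are unlocked again.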

12. Conclusion

The SQLFILE generated by Oracle Data Pump can help us understand the details of how it operates.
