What is the difference between expdp and impdp

2025-01-16 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)05/31 Report--

This article explains the differences between the Oracle utilities expdp and impdp. The methods described here are simple, fast, and practical.

I. The difference between expdp/impdp and exp/imp

1. exp and imp are client-side tools; they can be run on either the client or the server.

2. expdp and impdp are server-side tools; they can only be run on the Oracle server, not on a client.

3. imp can only read files exported by exp, not files exported by expdp; impdp can only read files exported by expdp, not files exported by exp.

4. On 10g and later servers, exp usually cannot export an empty table (one with 0 rows of data); expdp must be used instead.
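A likely cause (from 11gR2 onward) is deferred segment creation: a newly created empty table has no storage segment allocated, so the classic exp utility does not see it. A hedged sketch of the usual workarounds, run in sqlplus before exporting; the table name t is a placeholder:

SQL> alter table t allocate extent;

Or, to change the behavior for tables created from then on:

SQL> alter system set deferred_segment_creation=false;

Either way, expdp avoids the problem entirely, which is why it is the recommended tool here.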

II. Expdp export steps

(1) Create a logical directory:

Step 1: create a real directory on the server. (Note: the CREATE DIRECTORY command in step 3 does not create a real directory on the OS, so create the physical directory on the server first.)

Step 2: log in to sqlplus as the sys administrator

oracle@ypdbtest:/home/oracle/dmp/vechcore > sqlplus

SQL*Plus: Release 11.2.0.4.0 Production on Tue Sep 5 09:20:49 2017
Copyright (c) 1982, 2013, Oracle. All rights reserved.

Enter user-name: sys as sysdba
Enter password:
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL>

Step 3: create a logical directory

SQL> create directory data_dir as '/home/oracle/dmp/user';

Directory created.

Step 4: query dba_directories to check that the directory exists

SQL> select * from dba_directories;

OWNER      DIRECTORY_NAME       DIRECTORY_PATH
---------- -------------------- ----------------------------
SYS        DATA_DIR             /home/oracle/dmp/user

Step 5: as the sys administrator, grant your target user read and write permission on this directory.

SQL> grant read, write on directory data_dir to user;

Grant succeeded.

(2) There are five ways to export a dmp file with expdp:

The first: "full=y", export the full database

expdp user/passwd@orcl dumpfile=expdp.dmp directory=data_dir full=y logfile=expdp.log

The second: schemas, export by user

expdp user/passwd@orcl schemas=user dumpfile=expdp.dmp directory=data_dir logfile=expdp.log

The third: export by tablespace

expdp sys/passwd@orcl tablespaces=tbs1,tbs2 dumpfile=expdp.dmp directory=data_dir logfile=expdp.log

The fourth: export tables

expdp user/passwd@orcl tables=table1,table2 dumpfile=expdp.dmp directory=data_dir logfile=expdp.log

The fifth: export by query condition

expdp user/passwd@orcl tables=table1 query=table1:'"where number=1234"' dumpfile=expdp.dmp directory=data_dir logfile=expdp.log
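Because shell quoting around the query clause is error-prone, these options are often collected in a parameter file instead (the PARFILE keyword described in section IV). A sketch, reusing the names from the examples above; expdp.par is a hypothetical file name:

expdp user/passwd@orcl parfile=expdp.par

where expdp.par contains:

directory=data_dir
dumpfile=expdp.dmp
logfile=expdp.log
tables=table1
query=table1:"where number=1234"

Inside a parameter file, the double quotes need no shell escaping.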

III. Impdp import steps

(1) If the target is not the same server, first transfer the dmp file above to the target server. For details, see http://www.cnblogs.com/promise-x/p/7452972.html.

(2) Create a logical directory, following the first three steps in "II. Expdp export steps".

(3) Import with the impdp command; the five corresponding ways are:

The first: "full=y", import the full database.

impdp user/passwd directory=data_dir dumpfile=expdp.dmp full=y

The second: same-name user import, from user A to user A.

impdp A/passwd schemas=A directory=data_dir dumpfile=expdp.dmp logfile=impdp.log

The third: ① import tables table1 and table2 from user A into user B.

impdp B/passwd tables=A.table1,A.table2 remap_schema=A:B directory=data_dir dumpfile=expdp.dmp logfile=impdp.log

② import tablespaces TBS01, TBS02, and TBS03 into tablespace A_TBS, import user B's data into user A, and generate new OIDs to prevent conflicts.

impdp A/passwd remap_tablespace=TBS01:A_TBS,TBS02:A_TBS,TBS03:A_TBS remap_schema=B:A full=y transform=oid:n directory=data_dir dumpfile=expdp.dmp logfile=impdp.log

The fourth: import tablespaces

impdp sys/passwd tablespaces=tbs1 directory=data_dir dumpfile=expdp.dmp logfile=impdp.log

The fifth: append data

impdp sys/passwd directory=data_dir dumpfile=expdp.dmp schemas=system table_exists_action=replace logfile=impdp.log

-- table_exists_action: the action taken when the imported object already exists. Valid keywords: SKIP, APPEND, REPLACE, and TRUNCATE.
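The same applies on the import side: a parameter file keeps long impdp command lines readable. A sketch for the third example above (importing user A's tables into user B); impdp.par is a hypothetical file name:

impdp B/passwd parfile=impdp.par

where impdp.par contains:

tables=A.table1,A.table2
remap_schema=A:B
directory=data_dir
dumpfile=expdp.dmp
logfile=impdp.log
table_exists_action=skip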

IV. Expdp keywords and commands

(1) Keyword descriptions (defaults in parentheses)

ATTACH connects to an existing job, such as ATTACH [= job name].

COMPRESSION reduces the size of the contents of the dump file, where the valid key values are: ALL, (METADATA_ONLY), DATA_ONLY, and NONE.

CONTENT specifies the data to be unloaded, where the valid key values are: (ALL), DATA_ONLY, and METADATA_ONLY.

DATA_OPTIONS data layer flags; the only valid value is XML_CLOBS (write the XML data type in CLOB format).

DIRECTORY directory object, i.e. the logical directory, used by dump files and log files.

DUMPFILE list of target dump files (expdp.dmp), e.g. DUMPFILE=expdp1.dmp,expdp2.dmp.

ENCRYPTION encrypts part or all of the dump file; valid keywords: ALL, DATA_ONLY, METADATA_ONLY, ENCRYPTED_COLUMNS_ONLY, or NONE.

ENCRYPTION_ALGORITHM specifies how encryption should be done, where valid key values are: (AES128), AES192, and AES256.

ENCRYPTION_MODE method for generating encryption keys, where valid key values are: DUAL, PASSWORD, and (TRANSPARENT).

ENCRYPTION_PASSWORD password key used to create encrypted column data.

ESTIMATE calculates job estimates, where valid key values are (BLOCKS) and STATISTICS.

ESTIMATE_ONLY calculates job estimates without performing an export.

EXCLUDE excludes specific object types, e.g. EXCLUDE=TABLE:EMP. Syntax: EXCLUDE=[object_type]:[name_clause],[object_type]:[name_clause].

FILESIZE specifies the size of each dump file in bytes.

FLASHBACK_SCN SCN used to set the session snapshot back to a previous state, i.e. export table data as of a specific SCN.

FLASHBACK_TIME time used to find the SCN closest to the specified time, i.e. export table data as of a specific point in time. Note that FLASHBACK_SCN and FLASHBACK_TIME cannot be used at the same time.

FULL exports the entire database (N).

HELP displays a help message (N).

INCLUDE includes specific object types, such as INCLUDE=TABLE_DATA.

JOB_NAME name of the export job to create.

LOGFILE log file name (export.log).

NETWORK_LINK name of the remote database link to the source system.

NOLOGFILE does not write to log files (N).

PARALLEL changes the number of active workers for the current job.

PARFILE specifies the parameter file.

QUERY predicate clause used to export a subset of a table. Syntax: QUERY=[schema.][table_name:]query_clause.

REMAP_DATA specifies a data conversion function, such as REMAP_DATA=EMP.EMPNO:REMAPPKG.EMPNO.

REUSE_DUMPFILES overwrites the target dump file if it exists (N).

SAMPLE percentage of data to export.

SCHEMAS list of schemas to export (defaults to the login schema).

STATUS frequency (in seconds) at which job status is monitored; with the default (0), new status is shown as soon as it is available.

TABLES identifies a list of tables to export, from one schema only. Syntax: [schema_name.]table_name[:partition_name][,...].

TABLESPACES identifies the list of tablespaces to export.

TRANSPORTABLE specifies whether transportable methods can be used; valid keywords: ALWAYS and (NEVER).

TRANSPORT_FULL_CHECK validates the storage segments of all tables (N).

TRANSPORT_TABLESPACES list of tablespaces from which metadata will be unloaded.

VERSION version of objects to export; valid keywords: (COMPATIBLE), LATEST, or any valid database version.

(2) Command descriptions

ADD_FILE adds dump files to the dump file set.

CONTINUE_CLIENT returns to logging mode; the job is restarted if it is idle.

EXIT_CLIENT exits the client session and leaves the job running.

FILESIZE default file size (in bytes) for subsequent ADD_FILE commands.

HELP summarizes interactive commands.

KILL_JOB detaches and deletes the job.

PARALLEL changes the number of active workers for the current job. PARALLEL=[number of workers].

REUSE_DUMPFILES overwrites the target dump file if it exists (N).

START_JOB starts/resumes the current job.

STATUS frequency (in seconds) at which job status is monitored; with the default (0), new status is shown as soon as it is available. STATUS[=interval].

STOP_JOB performs an orderly shutdown of the job and exits the client. STOP_JOB=IMMEDIATE shuts the Data Pump job down immediately.
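These interactive commands are reached by pressing Ctrl+C in a running expdp session, or by re-attaching to a job with ATTACH. A sketch of a typical stop-and-resume session; the job name MY_EXPORT is a placeholder (actual job names can be found in dba_datapump_jobs):

expdp user/passwd attach=MY_EXPORT
Export> status
Export> stop_job=immediate

and later:

expdp user/passwd attach=MY_EXPORT
Export> start_job
Export> continue_client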

V. Impdp keywords and commands

(1) Keyword descriptions (defaults in parentheses)

ATTACH connects to an existing job, such as ATTACH [= job name].

CONTENT specifies the data to load; valid keywords: (ALL), DATA_ONLY, and METADATA_ONLY.

DATA_OPTIONS data layer flags; the only valid value is SKIP_CONSTRAINT_ERRORS (constraint errors are not fatal).

DIRECTORY directory object, i.e. the logical directory, used by dump files, log files, and SQL files.

DUMPFILE list of dump files to import from (expdp.dmp), e.g. DUMPFILE=expdp1.dmp,expdp2.dmp.

ENCRYPTION_PASSWORD password key used to access encrypted column data. This parameter is not valid for network import jobs.

ESTIMATE calculates job estimates, where valid keywords are (BLOCKS) and STATISTICS.

EXCLUDE excludes specific object types, such as EXCLUDE=TABLE:EMP.

FLASHBACK_SCN SCN used to set the session snapshot back to a previous state.

FLASHBACK_TIME time used to find the SCN closest to the specified time.

FULL imports all objects (Y) from the source.

HELP displays a help message (N).

INCLUDE includes specific object types, such as INCLUDE=TABLE_DATA.

JOB_NAME name of the import job to create.

LOGFILE log file name (import.log).

NETWORK_LINK name of the remote database link to the source system.

NOLOGFILE does not write to the log file.

PARALLEL changes the number of active workers for the current job.

PARFILE specifies the parameter file.

PARTITION_OPTIONS specifies how partitions should be transformed; valid keywords: DEPARTITION, MERGE, and (NONE).

QUERY predicate clause used to import a subset of a table.

REMAP_DATA specifies a data conversion function, such as REMAP_DATA=EMP.EMPNO:REMAPPKG.EMPNO.

REMAP_DATAFILE redefines data file references in all DDL statements.

REMAP_SCHEMA loads objects from one schema into another. REMAP_SCHEMA=SOURCE_SCHEMA:TARGET_SCHEMA.

REMAP_TABLE remaps a table name to another table, e.g. REMAP_TABLE=OLD_TABLENAME:NEW_TABLENAME.

REMAP_TABLESPACE remaps tablespace objects to another tablespace.

REUSE_DATAFILES initializes the tablespace if it already exists (N).

SCHEMAS list of schemas to import.

SKIP_UNUSABLE_INDEXES skips indexes that are in the Index Unusable state.

SQLFILE writes all SQL DDL to the specified file.

STATUS frequency (in seconds) at which job status is monitored; with the default (0), new status is shown as soon as it is available.

STREAMS_CONFIGURATION enables loading of stream metadata.

TABLE_EXISTS_ACTION action taken when the imported object already exists. Valid keywords: (SKIP), APPEND, REPLACE, and TRUNCATE.

TABLES identifies the list of tables to import.

TABLESPACES identifies the list of tablespaces to import.

TRANSFORM metadata transformations to apply to applicable objects. Valid keywords: SEGMENT_ATTRIBUTES, STORAGE, OID, and PCTSPACE.

TRANSPORTABLE options for using transportable data movement. Valid keywords: ALWAYS and (NEVER). Valid only for NETWORK_LINK mode import operations.

TRANSPORT_DATAFILES list of data files to import in transportable mode.

TRANSPORT_FULL_CHECK validates the storage segments of all tables (N).

TRANSPORT_TABLESPACES list of tablespaces from which metadata will be loaded. Valid only for NETWORK_LINK mode import operations.

VERSION version of objects to import; valid keywords: (COMPATIBLE), LATEST, or any valid database version. Valid only for NETWORK_LINK and SQLFILE.
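One import keyword worth a concrete example is SQLFILE, which extracts the DDL contained in a dump file into a script without importing anything, a convenient way to preview what impdp would create. A sketch, reusing the directory and dump file names from section III; ddl.sql is a hypothetical output file name:

impdp user/passwd directory=data_dir dumpfile=expdp.dmp sqlfile=ddl.sql

Note that the script is written to the location of the directory object data_dir, not to the current OS directory.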

(2) Command descriptions

CONTINUE_CLIENT returns to logging mode; the job is restarted if it is idle.

EXIT_CLIENT exits the client session and leaves the job running.

HELP summarizes interactive commands.

KILL_JOB detaches and deletes jobs.

PARALLEL changes the number of active workers for the current job. PARALLEL=[number of workers].

START_JOB starts/resumes the current job. START_JOB=SKIP_CURRENT skips any action that was being performed when the job was stopped.

STATUS frequency (in seconds) at which job status is monitored; with the default (0), new status is shown as soon as it is available. STATUS[=interval].

STOP_JOB performs an orderly shutdown of the job and exits the client. STOP_JOB=IMMEDIATE shuts the Data Pump job down immediately.

At this point, you should have a deeper understanding of the differences between expdp and impdp. The best way to consolidate it is to try the commands in practice.
