
EXPDP/IMPDP Notes

2018-1-7 21:15:50

Migrating users

Note: a user's materialized views are not imported automatically.
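Before migrating, a quick way to list the materialized views that will have to be recreated by hand (the owner SCOTT is just an example):

SQL> select owner, mview_name from dba_mviews where owner = 'SCOTT';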

Let's start with the main features provided by the data pump (including, but not limited to):

1. Parallel processing of import and export tasks.

2. Pausing and restarting import and export tasks.

3. Exporting or importing objects in remote databases through a Database Link.

4. Automatically changing an object's owner, data file, or tablespace during import, through the Remap_schema, Remap_datafile, and Remap_tablespace parameters.

5. Very fine-grained object control during import/export: through the two parameters Include and Exclude, you can decide in detail whether each object is included (see the sketch after this list).
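A minimal sketch of that fine-grained control, written as parameter-file lines (no shell escaping is needed in a parfile; the object names are hypothetical). Note that Include and Exclude are mutually exclusive within one job:

# export only these two tables
include=TABLE:"IN ('EMP', 'DEPT')"
# or, in a separate job, everything except scratch tables:
# exclude=TABLE:"LIKE 'TMP%'"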

Unless the network_link parameter is used, the files generated by expdp are written on the server, in the location specified by the Directory object.

Things to pay attention to when using EXPDP and IMPDP:

EXP and IMP are client-side tools; they can be used either on the client or on the server.

EXPDP and IMPDP are server-side tools; they can only be used on the Oracle server, not on a client.

IMP only works with files exported by EXP, not by EXPDP; likewise, IMPDP only works with files exported by EXPDP, not by EXP.

When using the expdp or impdp command, you can omit the username/password@instance credentials and enter them when prompted, for example:

expdp schemas=scott dumpfile=expdp.dmp directory=dmp

First, create a logical directory. This command does not create a real directory on the operating system, and it is best run by an administrator such as system.

sqlplus / as sysdba

SQL> create directory dmp as '/dmp';

Second, check the directory (and check that it exists on the operating system: Oracle does not verify that the path exists when the directory is created, and an error is raised later if it does not).

SQL> select * from dba_directories;
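On the OS side, a quick check that the path from the example above exists and is accessible:

$ ls -ld /dmp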

Third, grant the scott user read/write on the directory, again preferably as an administrator such as system.

SQL> grant read, write on directory dmp to scott;

IV. Export data

1) Export by user

If you export to a single file, parallelism is useless; it only pays off when exporting to multiple files.

expdp scott/tiger@orcl schemas=scott dumpfile=expdp.dmp logfile=expdp_20100820.log directory=dmp

2) Parallel export (parallel)

expdp scott/tiger@orcl directory=dmp dumpfile=scott_%U.dmp parallel=4 cluster=n filesize=30g job_name=scott3

3) Export by table name

expdp scott/tiger@orcl tables=emp,dept dumpfile=expdp.dmp directory=dmp

Using a parameter file (if a parameter is repeated, the later occurrence wins):

expdp ciqusr63/ciqusr63 dumpfile=ciqusr63_%U.dmp logfile=ciqusr63.log parfile=exptables.par

Contents of exptables.par:

directory=expdp
parallel=8
cluster=n
tables=T_ARCHIVE_REC,T_BLACKLIST_CODE

4) Export by query condition

expdp scott/tiger@orcl directory=dmp dumpfile=expdp.dmp tables=emp query='WHERE deptno=20'

5) Export by tablespace

expdp system/manager directory=dmp dumpfile=tablespace.dmp tablespaces=temp,example

6) Export the entire database (only users with the dba role, or the exp_full_database and imp_full_database privileges, can do this)

expdp system/manager directory=dmp dumpfile=full.dmp full=y

7) Transportable tablespace mode

Corresponds to the Transport_tablespaces parameter of expdp/impdp. The most significant difference from the previous modes is that the generated dump file contains no row data; it exports only the metadata of the relevant objects (that is, the object definitions, which you can think of as the tables' creation statements). The row data remains in the tablespace's data files, so both the metadata dump and the data files must be copied to the target server.

This export method is very efficient; the time overhead is mainly the I/O of copying the data files. To run a transportable tablespace export with expdp, the user must have the exp_full_database role or the DBA role; to import in this mode, the user must have the imp_full_database role or the DBA role.
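A minimal sketch of the full round trip, assuming a self-contained tablespace named users (the data file path is hypothetical):

SQL> alter tablespace users read only;

expdp system/manager directory=dmp dumpfile=tts.dmp transport_tablespaces=users transport_full_check=y

Copy tts.dmp and the tablespace's data files to the target server, then:

impdp system/manager directory=dmp dumpfile=tts.dmp transport_datafiles='/u01/oradata/users01.dbf'

SQL> alter tablespace users read write;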

8) Export data through a dblink

Network_link mode. When the data volume is relatively large, the import is driven directly on the target database through network_link, with no expdp run on the source database; otherwise the process resembles a normal expdp/impdp. The main advantage is that it avoids writing a large dump file on the source library and then having to move it to the target library.

On the source library:

SQL> grant datapump_exp_full_database to frank;

On the target library:

SQL> create database link frank connect to frank identified by frank using 'frank';

impdp \"/ as sysdba\" network_link=frank schemas=frank

Network_link names a db_link; the command above pulls the source-side frank schema into the target library.

9) Filtering data

Filtering data relies mainly on two parameters, Query and Sample. The Sample parameter applies only to expdp exports; it takes a percentage of rows, for example sample=scott.emp:20.

1. Query

Similar to the Query option of the exp command, but in Expdp this parameter is enhanced and the granularity of control is finer. Query in Expdp likewise specifies a where-style clause to restrict the rows. The syntax is as follows:

Query=[Schema.][Table_name:]Query_clause

By default, if Schema.table_name is not specified, the Query_clause applies to all exported tables; alternatively, you can specify a different Query_clause for each table. For example, query=a:"where id>'E'" exports only the rows of table a whose id is greater than the character 'E'.
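A sketch of a per-table clause in a job that exports two tables, written as parfile lines to avoid shell escaping (only emp is filtered; dept is exported whole):

tables=emp,dept
query=emp:"where deptno=20"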

Other commonly used operators include NOT IN and NOT LIKE.

V. Import data

Import only a table's data, pulled over a database link:

impdp gg/gg@s10ogg job_name=s10ggdmp tables=tcustmer content=data_only network_link='s10pub'

Import only the metadata of the objects under a user:

impdp bys/bys schemas=bys directory=dmp dumpfile=bys5.dmp logfile=bys6.log content=metadata_only

Import only the data of the objects under a user:

impdp bys/bys schemas=bys directory=dmp dumpfile=bys5.dmp logfile=bys7.log content=data_only

Import all objects and data under a user (if content= is not specified, the default is ALL):

impdp bys/bys schemas=bys directory=dmp dumpfile=bys5.dmp logfile=bys8.log


VI. Escape characters

Handling escape characters on the command line

Windows platform:

D:\> expdp system/manager DIRECTORY=my_dir DUMPFILE=exp_tab.dmp LOGFILE=exp_tab.log SCHEMAS=scott INCLUDE=TABLE:\"IN ('EMP','DEPT')\"

On the Windows platform, the double quotation marks around the object clause must be escaped with the backslash escape character \.

Unix platform:

Without a parfile, all special symbols must be escaped, including parentheses, double quotes, and single quotes:

expdp system/manager DIRECTORY=my_dir DUMPFILE=exp_tab.dmp LOGFILE=exp_tab.log SCHEMAS=scott INCLUDE=TABLE:\"IN \(\'EMP\', \'DEPT\'\)\"
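For comparison, the same filter placed in a parameter file needs no shell escaping at all (the file name exp_tab.par is hypothetical):

# exp_tab.par
DIRECTORY=my_dir
DUMPFILE=exp_tab.dmp
LOGFILE=exp_tab.log
SCHEMAS=scott
INCLUDE=TABLE:"IN ('EMP', 'DEPT')"

expdp system/manager parfile=exp_tab.par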

7. Advanced filtering

When exporting or importing, we often only want the table structures, or only the data. Fortunately, the data pump provides this through the Content parameter, which takes three values:

1) ALL: export / import object definitions and data. The default value of this parameter is ALL.

2) DATA_ONLY: only export / import data.

3) METADATA_ONLY: only export / import object definitions.

Warning: when performing an export with advanced filtering, such as exporting only data, make sure the object definitions will already exist at import time; otherwise the data has nowhere to land. If the definitions already exist, it is best to specify data_only on import as well, otherwise an ORA-39151 error is raised because the objects already exist. (A sketch follows.)
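A sketch of the two-phase pattern this implies, first definitions and then rows (file names follow the earlier bys examples):

impdp bys/bys schemas=bys directory=dmp dumpfile=bys5.dmp content=metadata_only
impdp bys/bys schemas=bys directory=dmp dumpfile=bys5.dmp content=data_only table_exists_action=append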

Filter existing data

We know that if an imported table object already exists in the target library, and the target side has no data integrity constraints (RI) to validate the rows, data may be imported twice. The data pump provides the parameter Table_exists_action, which can reduce duplicate data to some extent. It controls what happens when a table object to be imported already exists, and takes the following values (a usage sketch follows the list):

1) SKIP: skip the table and move on to the next object. This is the default. Note that if the CONTENT parameter is also set to Data_only, SKIP is not valid and the default becomes APPEND.

2) APPEND: add data to the existing table.

3) TRUNCATE: truncate the current table, then add the records. Use this with caution, and only if you are sure the data currently in the table is really useless; otherwise data loss may occur.

4) REPLACE: drop and recreate the table object, then add the data to it. Note that if the CONTENT parameter is also set to Data_only, REPLACE is not valid.
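For example, a repeatable data refresh might look like this (dump file from the earlier examples; TRUNCATE chosen deliberately, with the caveat above):

impdp bys/bys schemas=bys directory=dmp dumpfile=bys5.dmp content=data_only table_exists_action=truncate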

Redefine the Schema or tablespace of the table

We may also need to move user A's objects to user B, or change the tablespace the data lives in. The data pump does this through the Remap_schema and Remap_tablespace parameters.

1) REMAP_SCHEMA: redefines the Schema to which the object belongs

This parameter is similar to Fromuser+Touser in IMP and supports converting multiple Schemas at once. The syntax is as follows:

REMAP_SCHEMA=Source_schema:Target_schema [, Source_schema:Target_schema]

For example, to convert A's objects to user B, and C's objects to user D: remap_schema=a:b,c:d

Warning: you cannot specify remap_schema=a:b,a:c in the same IMPDP command.
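A full command sketch (schema names hypothetical, dump file from the earlier examples):

impdp system/manager directory=dmp dumpfile=expdp.dmp remap_schema=scott:frank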

2) REMAP_TABLESPACE: redefine the tablespace in which the object resides.

This parameter remaps the tablespaces in which imported objects are stored, and supports converting multiple tablespaces at once, separated by commas. The syntax is as follows:

REMAP_TABLESPACE=Source_tablespace:Target_tablespace [, Source_tablespace:Target_tablespace]

Warning: if you use the Remap_tablespace parameter, make sure that the imported user has read and write permissions to the target tablespace.
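A sketch (tablespace names hypothetical):

impdp system/manager directory=dmp dumpfile=expdp.dmp remap_tablespace=users:data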

IX. Optimize import / export efficiency

1) Parallelism for export

For export, a dump file can only be operated on by one thread at a time (including its I/O), so if there is only one output dump file, only one worker actually does the work no matter how much parallelism you specify, and an ORA-39095 error is triggered. It is therefore recommended to set this parameter to be less than or equal to the number of dump files generated. So, how do you control the number of dump files generated?

The EXPDP command provides the FILESIZE parameter, which caps the size of a single dump file. To make effective use of the parallel parameter, filesize is essential.

For example: a user's objects occupy about 4G of space, and the actual exported dump file is about 3G. To export this user with a parallelism of 4 and no single file larger than 500M, the syntax is as follows:

$ expdp user/pwd directory=dump_file dumpfile=expdp_20100820_%U.dmp logfile=expdp_20100820.log filesize=500M parallel=4

2) Parallelism for import

For import, the parallel parameter is much simpler to use, and in my view import shows its advantage best: if the parameter is set to N, the contents of roughly N tables are loaded into the library at the same time.

For example: a dmp file contains 200 tables. To import it with a parallelism of 10, the syntax is as follows:

$ impdp user/pwd directory=dump_file dumpfile=expdp_20100820.dmp logfile=impdp_20100820.log parallel=10

Import only data, not indexes, etc.

impdp system/oracle directory=dmp dumpfile=itsms_%U.dmp logfile=impitsms2.log parallel=4 remap_schema=itsms:itfw table_exists_action=APPEND content=data_only

How to enter interactive mode

Suppose I am performing an import and want to enter interactive mode to see its status. There are two ways in, with the following steps:

i. Press Ctrl+C to exit the current logging mode.

ii. Or run the Expdp/Impdp command in command-line mode with the attach parameter, connecting to the import/export task that is currently running. For example:

impdp bam/bam attach=SYS_IMPORT_FULL_01

Warning: if no value is given to the Attach parameter, the currently running task is attached by default. But if no task is currently running and Attach is given no value, an ORA-31626 error is reported.

When the command line enters interactive mode, the following prompt is displayed:

Export>

Operation of interactive mode

In interactive mode, the following operations are supported.

i. Check the running status of the JOB

Export> status

ii. Return to logging mode on the command line

Export> continue_client

iii. Increase parallelism

Export> parallel=4

Warning: when using export you cannot simply raise the parallel parameter, or you may hit an ORA-39095 error: parallel export requires multiple export files, because exporting in parallel means several threads working at once, writing several dmp files from the database at the same time. (See the sketch below.)
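If the job was started with a single dump file, a sketch of adding files in interactive mode before raising parallelism (the file name is hypothetical; %U generates unique names):

Export> add_file=expdp_extra_%U.dmp
Export> parallel=4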

iv. Stop the JOB

Export> stop_job

v. Start the JOB

Export> start_job

vi. Kill the JOB

Export> kill_job

vii. Exit interactive mode

Export> exit_client

viii. Specify the file size

Export> filesize=1G

ix. Help

Export> help

JOB_NAME

Specifies the name of the export job. The default is SYS_XXX.

JOB_NAME=jobname_string

SELECT * FROM DBA_DATAPUMP_JOBS; -- view existing jobs
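A sketch tying the job name to the attach workflow (the job name SCOTT3 comes from the parallel export example above):

SQL> select job_name, state from dba_datapump_jobs;

expdp scott/tiger attach=SCOTT3

Export> status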


EXPDP parameter description

Here are the available keywords and their descriptions. The default values are listed in square brackets.

ATTACH

Connect to an existing job.

For example, ATTACH=job_name.

CLUSTER

Utilize the cluster resources and distribute the worker processes on the Oracle RAC.

Valid key values are [Y] and N.

COMPRESSION

Reduce the dump file size.

Valid key values are: ALL, DATA_ONLY, [METADATA_ONLY] and NONE.

CONTENT

Specifies the data to unload.

Valid key values are: [ALL], DATA_ONLY, and METADATA_ONLY.

DATA_OPTIONS

Data layer option flags.

The valid key value is: XML_CLOBS.

DIRECTORY

The directory object used for dump files and log files.

DUMPFILE

Specifies the list of target dump file names [expdat.dmp].

For example, DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.

ENCRYPTION

Encrypt some or all of a dump file.

Valid key values are: ALL, DATA_ONLY, ENCRYPTED_COLUMNS_ONLY, METADATA_ONLY, and NONE.

ENCRYPTION_ALGORITHM

Specifies how encryption should be done.

Valid key values are: [AES128], AES192, and AES256.

ENCRYPTION_MODE

The method of generating the encryption key.

Valid key values are: DUAL, PASSWORD, and [TRANSPARENT].

ENCRYPTION_PASSWORD

The password key used to create encrypted data in the dump file.

ESTIMATE

Calculate job estimates.

Valid key values are [BLOCKS] and STATISTICS.

ESTIMATE_ONLY

Calculates job estimates without performing an export.

EXCLUDE

Excludes specific object types.

For example, EXCLUDE=SCHEMA:"='HR'".

FILESIZE

Specifies the size of each dump file in bytes.

FLASHBACK_SCN

The SCN used to reset the session snapshot.

FLASHBACK_TIME

The time used to find the nearest corresponding SCN value.

FULL

Export the entire database [N].

HELP

Displays the help message [N].

INCLUDE

Includes specific object types.

For example, INCLUDE=TABLE_DATA.

JOB_NAME

The name of the export job to create.

LOGFILE

Specify the log file name [export.log].

NETWORK_LINK

The name of the remote database link for the source system.

NOLOGFILE

Do not write to the log file [N].

PARALLEL

Change the number of active workers for the current job.

PARFILE

Specifies the parameter file name.

QUERY

The predicate clause used to export a subset of the table.

For example, QUERY=employees: "WHERE department_id > 10".

REMAP_DATA

Specifies the data conversion function.

For example, REMAP_DATA=EMP.EMPNO:REMAPPKG.EMPNO.

REUSE_DUMPFILES

Overwrite the target dump file (if it exists) [N].

SAMPLE

The percentage of data to be exported.

SCHEMAS

List of schemas to export [login schema].

SERVICE_NAME

The active service name and associated resource group that constrain the Oracle RAC resource.

SOURCE_EDITION

The edition used to extract metadata.

STATUS

How often the job status is monitored, where the default value [0] indicates that the new status is displayed as soon as it is available.

TABLES

Identifies the list of tables to export.

For example, TABLES=HR.EMPLOYEES,SH.SALES:SALES_1995.

TABLESPACES

Identifies the list of tablespaces to export.

TRANSPORTABLE

Specifies whether transportable methods can be used.

Valid key values are: ALWAYS and [NEVER].

TRANSPORT_FULL_CHECK

Verify the storage segments of all tables [N].

TRANSPORT_TABLESPACES

A list of tablespaces from which to unload metadata.

VERSION

The version of the object to export.

Valid key values are: [COMPATIBLE], LATEST, or any valid database version.

The following commands are valid in interactive mode.

Note: abbreviations are allowed.

ADD_FILE

Add the dump file to the dump file set.

CONTINUE_CLIENT

Return to logging mode. If the job is idle, it is restarted.

EXIT_CLIENT

Exit the client session and keep the job running.

FILESIZE

The default file size (in bytes) for subsequent ADD_FILE commands.

HELP

Summarize interactive commands.

KILL_JOB

Detach and delete the job.

PARALLEL

Change the number of active worker for the current job.

REUSE_DUMPFILES

Overwrite the target dump file (if it exists) [N].

START_JOB

Start or resume the current job.

The valid key value is: SKIP_CURRENT.

STATUS

How often the job status is monitored; the default value [0] means that new status is displayed as soon as it is available.

STOP_JOB

Shut down job execution in an orderly manner and exit the client.

The valid key value is: IMMEDIATE.

IMPDP parameter description

Here are the available keywords and their descriptions. The default values are listed in square brackets.

ATTACH

Connect to an existing job.

For example, ATTACH=job_name.

CLUSTER

Utilize the cluster resources and distribute the worker processes on the Oracle RAC.

Valid key values are [Y] and N.

CONTENT

Specify the data to load.

Valid keywords are: [ALL], DATA_ONLY and METADATA_ONLY.

DATA_OPTIONS

Data layer option flags.

The valid keyword is: SKIP_CONSTRAINT_ERRORS.

DIRECTORY

The directory object used for dump files, log files, and SQL files.

DUMPFILE

List of dump files to import from [expdat.dmp].

For example, DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.

ENCRYPTION_PASSWORD

The password key used to access the encrypted data in the dump file.

Not valid for network import jobs.

ESTIMATE

Calculate job estimates.

Valid keywords are [BLOCKS] and STATISTICS.

EXCLUDE

Excludes specific object types.

For example, EXCLUDE=SCHEMA:"='HR'".

FLASHBACK_SCN

The SCN used to reset the session snapshot.

FLASHBACK_TIME

The time used to find the nearest corresponding SCN value.

FULL

Import all objects in the source [Y].

HELP

Displays the help message [N].

INCLUDE

Includes specific object types.

For example, INCLUDE=TABLE_DATA.

JOB_NAME

The name of the import job to create.

LOGFILE

Log file name [import.log].

NETWORK_LINK

The name of the remote database link for the source system.

NOLOGFILE

Do not write to the log file [N].

PARALLEL

Change the number of active workers for the current job.

PARFILE

Specify the parameter file.

PARTITION_OPTIONS

Specifies how partitions should be converted.

Valid keywords are: DEPARTITION, MERGE and [NONE].

QUERY

The predicate clause used to import a subset of the table.

For example, QUERY=employees: "WHERE department_id > 10".

REMAP_DATA

Specifies the data conversion function.

For example, REMAP_DATA=EMP.EMPNO:REMAPPKG.EMPNO.

REMAP_DATAFILE

Redefine the data file reference in all DDL statements.

REMAP_SCHEMA

Load objects from one schema into another.

REMAP_TABLE

Remap the table name to another table.

For example, REMAP_TABLE=HR.EMPLOYEES:EMPS.

REMAP_TABLESPACE

Remap tablespace objects to another tablespace.

REUSE_DATAFILES

If the tablespace already exists, initialize it [N].

SCHEMAS

A list of schemas to import.

SERVICE_NAME

The active service name and associated resource group that constrain the Oracle RAC resource.

SKIP_UNUSABLE_INDEXES

Skip indexes that were set to the Index Unusable state.

SOURCE_EDITION

The edition used to extract metadata.

SQLFILE

Writes all SQL DDL to the specified file.

STATUS

How often the job status is monitored, where

The default value [0] means that the new state is displayed as soon as it is available.

STREAMS_CONFIGURATION

Enable the loading of Streams metadata.

TABLE_EXISTS_ACTION

The action performed when the import object already exists.

Valid keywords are: APPEND, REPLACE, [SKIP] and TRUNCATE.

TABLES

Identifies the list of tables to import.

For example, TABLES=HR.EMPLOYEES,SH.SALES:SALES_1995.

TABLESPACES

Identifies the list of tablespaces to import.

TARGET_EDITION

The edition used to load metadata.

TRANSFORM

The metadata transformation to apply to the applicable object.

Valid keywords are: OID, PCTSPACE, SEGMENT_ATTRIBUTES and STORAGE.

TRANSPORTABLE

Options for choosing transportable data movement.

Valid keywords are: ALWAYS and [NEVER].

Valid only in NETWORK_LINK mode import operations.

TRANSPORT_DATAFILES

A list of data files imported in transferable mode.

TRANSPORT_FULL_CHECK

Verify the storage segments of all tables [N].

TRANSPORT_TABLESPACES

A list of tablespaces from which to load metadata.

Valid only in NETWORK_LINK mode import operations.

VERSION

The version of the object to import.

Valid keywords are: [COMPATIBLE], LATEST or any valid database version.

Valid only for NETWORK_LINK and SQLFILE.

The following commands are valid in interactive mode.

Note: abbreviations are allowed.

CONTINUE_CLIENT

Return to logging mode. If the job is idle, it is restarted.

EXIT_CLIENT

Exit the client session and keep the job running.

HELP

Summarize interactive commands.

KILL_JOB

Detach and delete the job.

PARALLEL

Change the number of active workers for the current job.

START_JOB

Start or resume the current job.

The valid keyword is: SKIP_CURRENT.

STATUS

How often the job status is monitored; the default value [0] means that new status is displayed as soon as it is available.

STOP_JOB

Shut down job execution in an orderly manner and exit the client.

The valid keyword is: IMMEDIATE.
