What is the parallelism PARALLEL parameter in EXPDP/IMPDP

2025-02-28 Update From: SLTechnology News&Howtos


This article explains how the PARALLEL (parallelism) parameter works in Data Pump export (EXPDP) and import (IMPDP). The content is fairly detailed; I hope it serves as a useful reference.

If you set PARALLEL=4 for EXPDP, you must also provide at least 4 dump files; otherwise PARALLEL cannot be fully used. EXPDP uses one worker process to export metadata while the other worker processes export data at the same time. If the estimated job size is under 250 MB, only one worker process starts; around 500 MB starts 2, and around 1000 MB starts 4. The usual practice is to add %U to the dump file name so that multiple files are generated automatically.
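The scale-up rule above can be sketched as a small shell function. This is only an illustration of the heuristic quoted in this article (roughly one worker per 250 MB of estimated job size, capped by PARALLEL); the exact thresholds are version-dependent, and `workers_for` is a made-up name, not a real Data Pump interface:

```shell
#!/bin/sh
# Heuristic from the paragraph above: roughly one worker per 250 MB of
# estimated job size, never fewer than 1 and never more than PARALLEL.
# workers_for is a hypothetical helper for illustration only.
workers_for() {
  size_mb=$1
  parallel=$2
  n=$(( size_mb / 250 ))
  [ "$n" -lt 1 ] && n=1
  [ "$n" -gt "$parallel" ] && n=$parallel
  echo "$n"
}

workers_for 200 4    # under 250 MB: a single worker
workers_for 500 4    # two workers
workers_for 1000 4   # four workers
```

In practice the takeaway is simply: give the job at least as many dump files (via %U) as the PARALLEL value you request, or some workers will sit idle.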

IMPDP behaves differently: it first starts a single worker process to import metadata, and only then starts multiple worker processes to import data. So at the beginning you will see only one worker, busy importing metadata. With PARALLEL=4, IMPDP likewise needs >= 4 dump files, and %U can also be used for the import.

nohup expdp system/**** PARALLEL=2 JOB_NAME=full_bak_job full=y dumpfile=exptest:back_%U.dmp logfile=exptest:back.log &

impdp system/*** PARALLEL=2 EXCLUDE=STATISTICS JOB_NAME=full_imp cluster=no full=y dumpfile=test:back_%U.dmp logfile=test:back_imp.log

Since 11gR2, the worker processes of EXPDP and IMPDP can start on multiple RAC instances, so the DIRECTORY must point to shared storage. If no shared disk is available, specify cluster=no to prevent errors.
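For example, a single-instance run on RAC can be forced like this (a sketch only: the directory name exptest and file names follow the commands above, and the password is masked):

```shell
# Run the export on the current instance only, so the dump files do not
# need to live on shared storage (cluster=no). Directory name 'exptest'
# follows the earlier examples; substitute your real credentials.
expdp system/******** \
  full=y \
  PARALLEL=2 \
  cluster=no \
  dumpfile=exptest:back_%U.dmp \
  logfile=exptest:back.log
```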

Observing the EXPDP/IMPDP workers shows output like the following:

Import> status

Job: FULL_IMP
  Operation: IMPORT
  Mode: FULL
  State: EXECUTING
  Bytes Processed: 150300713536
  Percent Done: 80
  Current Parallelism: 6
  Job Error Count: 0
  Dump File: /expdp/back_%u.dmp
  Dump File: /expdp/back_01.dmp
  Dump File: /expdp/back_02.dmp
  Dump File: /expdp/back_03.dmp
  Dump File: /expdp/back_04.dmp
  Dump File: /expdp/back_05.dmp
  Dump File: /expdp/back_06.dmp
  Dump File: /expdp/back_07.dmp
  Dump File: /expdp/back_08.dmp

Worker 1 Status:
  Process Name: DW00
  State: EXECUTING
  Object Schema: ACRUN
  Object Name: T_PLY_UNDRMSG
  Object Type: DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
  Completed Objects: 3
  Completed Rows: 3856891
  Completed Bytes: 1134168200
  Percent Done: 83
  Worker Parallelism: 1

Worker 2 Status:
  Process Name: DW01
  State: EXECUTING
  Object Schema: ACRUN
  Object Name: T_FIN_PAYDUE
  Object Type: DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
  Completed Objects: 5
  Completed Rows: 2646941
  Completed Bytes: 1012233224
  Percent Done: 93
  Worker Parallelism: 1

Worker 3 Status:
  Process Name: DW02
  State: EXECUTING
  Object Schema: ACRUN
  Object Name: MLOG$_T_FIN_CLMDUE
  Object Type: DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
  Completed Objects: 6
  Completed Bytes: 382792584
  Worker Parallelism: 1

Worker 4 Status:
  Process Name: DW03
  State: EXECUTING
  Object Schema: ACRUN
  Object Name: T_PAY_CONFIRM_INFO
  Object Type: DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
  Completed Objects: 5
  Completed Rows: 2443790
  Completed Bytes: 943310104
  Percent Done: 83
  Worker Parallelism: 1

Worker 5 Status:
  Process Name: DW04
  State: EXECUTING
  Object Schema: ACRUN
  Object Name: T_PLY_TGT
  Object Type: DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
  Completed Objects: 6
  Completed Rows: 2285353
  Completed Bytes: 822501496
  Percent Done: 64
  Worker Parallelism: 1

Worker 6 Status:
  Process Name: DW05
  State: EXECUTING
  Object Schema: ACRUN
  Object Name: T_FIN_PREINDRCT_CLMFEE
  Object Type: DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
  Completed Objects: 5
  Completed Rows: 6042384
  Completed Bytes: 989435088
  Percent Done: 79
  Worker Parallelism: 1

The original English documentation reads as follows:

For Data Pump Export, the value that is specified for the parallel parameter should be less than or equal to the number of files in the dump file set. Each worker or Parallel Execution Process requires exclusive access to the dump file, so having fewer dump files than the degree of parallelism will mean that some workers or PX processes will be unable to write the information they are exporting. If this occurs, the worker processes go into an idle state and will not be doing any work until more files are added to the job. See the explanation of the DUMPFILE parameter in the Database Utilities guide for details on how to specify multiple dump files for a Data Pump export job.

For Data Pump Import, the workers and PX processes can all read from the same files. However, if there are not enough dump files, the performance may not be optimal because multiple threads of execution will be trying to access the same dump file. The performance impact of multiple processes sharing the dump files depends on the I/O subsystem containing the dump files. For this reason, Data Pump Import should not have a value for the PARALLEL parameter that is significantly larger than the number of files in the dump file set.

In a typical export that includes both data and metadata, the first worker process will unload the metadata: tablespaces, schemas, grants, roles, tables, indexes, and so on. This single worker unloads the metadata, and all the rest unload the data, all at the same time. If the metadata worker finishes and there are still data objects to unload, it will start unloading the data too. The examples in this document assume that there is always one worker busy unloading metadata while the rest of the workers are busy unloading table data objects.

If the external tables method is chosen, Data Pump will determine the maximum number of PX processes that can work on a table data object. It does this by dividing the estimated size of the table data object by 250 MB and rounding the result down. If the result is zero or one, then PX processes are not used to unload the table.
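The division described above can be sketched directly (an illustration only; `px_for` is a made-up name, and 0 below stands for "no PX processes used"):

```shell
#!/bin/sh
# From the rule above: max PX processes = floor(estimated_size / 250 MB);
# a result of 0 or 1 means the table is unloaded without PX processes.
# px_for is a hypothetical helper for illustration.
px_for() {
  size_mb=$1
  n=$(( size_mb / 250 ))
  if [ "$n" -le 1 ]; then
    echo 0          # too small: no parallel execution processes
  else
    echo "$n"
  fi
}

px_for 200    # small table: unloaded by a single worker, no PX
px_for 2000   # ~2 GB table: up to 8 PX processes
```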

The PARALLEL parameter works a bit differently in Import than Export. Because there are various dependencies that exist when creating objects during import, everything must be done in order. For Import, no data loading can occur until the tables are created, because data cannot be loaded into tables that do not yet exist.

That covers the PARALLEL parameter in EXPDP/IMPDP. I hope the content above is helpful. If you found this article useful, feel free to share it with others.
