This article walks through several common problems with Oracle Data Pump (expdp/impdp) exports and imports, and how to resolve each one.
Abort a dead expdp job
-- Look up Data Pump jobs in dba_datapump_jobs
SELECT owner_name, job_name, operation, job_mode,
       state, attached_sessions
  FROM dba_datapump_jobs
 WHERE job_name NOT LIKE 'BIN$%'
 ORDER BY 1, 2;
OWNER_NAME JOB_NAME            OPERATION JOB_MODE STATE       ATTACHED
---------- ------------------- --------- -------- ----------- --------
SCOTT      EXPDP_20051121      EXPORT    SCHEMA   EXECUTING          1
SCOTT      SYS_EXPORT_TABLE_01 EXPORT    TABLE    NOT RUNNING        0
SCOTT      SYS_EXPORT_TABLE_02 EXPORT    TABLE    NOT RUNNING        0
SYSTEM     SYS_EXPORT_FULL_01  EXPORT    FULL     NOT RUNNING        0
-- Find the Data Pump master tables
SELECT o.status, o.object_id, o.object_type,
       o.owner || '.' || o.object_name "OWNER.OBJECT"
  FROM dba_objects o, dba_datapump_jobs j
 WHERE o.owner = j.owner_name
   AND o.object_name = j.job_name
   AND j.job_name NOT LIKE 'BIN$%'
 ORDER BY 4, 2;
STATUS OBJECT_ID OBJECT_TYPE OWNER.OBJECT
------ --------- ----------- -------------------------
VALID      85283 TABLE       SCOTT.EXPDP_20051121
VALID      85215 TABLE       SCOTT.SYS_EXPORT_TABLE_02
VALID      85162 TABLE       SYSTEM.SYS_EXPORT_FULL_01
-- Drop the master table of the dead job
DROP TABLE scott.sys_export_table_02;
-- On systems with the recycle bin enabled, additionally run:
PURGE dba_recyclebin;
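Before purging the whole recycle bin, one can check whether the dropped master table actually landed there; a sketch against dba_recyclebin (the LIKE pattern matches the default job-name prefix used in this example):

```sql
-- Dropped Data Pump master tables waiting in the recycle bin
SELECT owner, original_name, object_name, type
  FROM dba_recyclebin
 WHERE original_name LIKE 'SYS_EXPORT%';
```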
-- Abort an expdp job normally --
-- Find the job name
SELECT * FROM dba_datapump_jobs;
OWNER_NAME JOB_NAME           OPERATION JOB_MODE STATE     DEGREE ATTACHED_SESSIONS DATAPUMP_SESSIONS
---------- ------------------ --------- -------- --------- ------ ----------------- -----------------
SYSTEM     SYS_EXPORT_FULL_01 EXPORT    FULL     EXECUTING      1                 1                 3
-- Attach to the job
expdp system/sys attach=SYS_EXPORT_FULL_01
Export> STOP_JOB      -- suspend the job; it can be restarted later
Export> KILL_JOB      -- abort the job and drop its master table
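The same cleanup can also be done from PL/SQL with the DBMS_DATAPUMP API instead of the expdp client; a sketch, reusing the job name and owner from the example above:

```sql
DECLARE
  h NUMBER;
BEGIN
  -- Attach to the running job by name and owner
  h := DBMS_DATAPUMP.ATTACH(job_name  => 'SYS_EXPORT_FULL_01',
                            job_owner => 'SYSTEM');
  -- Stop immediately and drop the master table (keep_master => 0)
  DBMS_DATAPUMP.STOP_JOB(handle => h, immediate => 1, keep_master => 0);
END;
/
```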
Change the parallelism of a running impdp job to improve import efficiency
impdp uname/password attach=SYS_IMPORT_SCHEMA_01   -- attach and view the details of the task
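Once attached, the degree of parallelism can be changed at the interactive prompt; a sketch (the degree of 8 is illustrative):

```
Import> STATUS           -- show current job state and worker activity
Import> PARALLEL=8       -- change the degree of parallelism
Import> CONTINUE_CLIENT  -- return to logging mode
```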
RAC expdp export reports errors ORA-31693, ORA-31617, ORA-19505, ORA-27037
The export fails with:
ORA-31693: Table data object "scott"."dept" failed to load/unload and is being skipped due to error:
ORA-31617: unable to open dump file "/home/oraclerac/dump_dir/oravs_expdp_05.dmp" for write
ORA-19505: failed to identify file "/home/oraclerac/dump_dir/oravs_expdp_05.dmp"
ORA-27037: unable to obtain file status
Cause:
In a RAC environment, when the PARALLEL parameter is greater than 1, Data Pump worker processes may run on every node of the cluster. If the export directory is not on storage shared by all nodes, workers on remote nodes cannot write the dump file and the export fails with the errors above.
Solutions:
1. Place the dump directory on storage shared by all nodes.
2. Add the cluster=n option to the export command so the job runs only on the local node.
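A sketch of the second fix; the schema, directory object, and file names here are illustrative, not from the original error:

```shell
expdp system/password schemas=scott directory=dump_dir \
      dumpfile=scott_%U.dmp logfile=scott_exp.log parallel=4 cluster=n
```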
Expdp/impdp: Data Pump import of a partitioned table is too slow
Data Pump import of a partitioned table was too slow to meet the customer's migration window.
Export parfile (10g, single node):
userid='/ as sysdba'
directory=milk_dir
dumpfile=mon_%U.dmp
logfile=0828.log
schemas=mon
parallel=8
Import parfile (11g RAC):
userid='/ as sysdba'
directory=milk_dir
dumpfile=mon_%U.dmp
logfile=0828.log
parallel=8
cluster=n
The export took less than 3 hours, but the import ran for nearly 10. Refreshing the log in real time showed that loading the partitioned table was the bottleneck: even a partition holding only 22 rows took about 2 minutes, on the order of 160 rows imported every two minutes.
Newly optimized import parfile:
userid='/ as sysdba'
directory=milk_dir
dumpfile=mon_%U.dmp
logfile=0828.log
parallel=8
access_method=direct_path
cluster=n
With this change the import time dropped from 10 hours to about 2 hours, meeting the migration requirement.
Official explanation of ACCESS_METHOD=DIRECT_PATH:
Use ACCESS_METHOD=DIRECT_PATH to import the data, as this access method does not check the table metadata before import.
ORA-01555 "snapshot too old" when exporting tables with CLOB columns
Exporting table data containing CLOB columns fails with ORA-01555: snapshot too old.
The problem typically appears on large tables with CLOB columns and is resolved by adjusting the database's undo and LOB retention settings.
First check whether the undo tablespace is large enough and whether undo_retention is set too small; usually the undo space itself is fine and only the parameters need changing.
There are two solutions: increase the retention, or use PCTVERSION.
Note that for LOB segments stored in tablespaces using manual segment space management (MSSM), only PCTVERSION is available; LOB RETENTION is not supported.
1. Increase retention
-- View the undo parameters
SQL> show parameter undo
-- Modify the undo retention parameter
SQL> alter system set undo_retention=7200 scope=both;
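To pick a sensible undo_retention value, the longest recent query duration can be checked in v$undostat; a sketch:

```sql
-- Longest query (in seconds) observed in the retained undo statistics
SELECT MAX(maxquerylen) AS longest_query_secs FROM v$undostat;
```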
According to Oracle's documentation, LOB columns carry their own RETENTION setting. If only undo_retention is raised and the LOB columns are not updated to match, their retention stays at the default of 900 seconds. Verify with the following queries:
SELECT table_name, column_name, pctversion, retention FROM dba_lobs WHERE table_name='T_PATIENT_DOC_COLB';
SELECT table_name, column_name, pctversion, retention FROM dba_lobs WHERE table_name='T_PATIENTS_DOC_CA';
SELECT table_name, column_name, pctversion, retention FROM dba_lobs WHERE table_name='T_CARE_DOC';
-- Modify the RETENTION setting of the LOB columns
ALTER TABLE T_PATIENT_DOC_COLB MODIFY LOB (CONTENT) (RETENTION);
ALTER TABLE T_PATIENTS_DOC_CA MODIFY LOB (SIGN) (RETENTION);
ALTER TABLE T_PATIENTS_DOC_CA MODIFY LOB (TIMESTAMP) (RETENTION);
ALTER TABLE T_CARE_DOC MODIFY LOB (CARE_DOC) (RETENTION);
2. Use pctversion
As the results show, once PCTVERSION is explicitly set for a LOB column, its RETENTION setting is cleared:
SQL> alter table T_PATIENTS_DOC_CA modify lob (SIGN) (pctversion 10);
SQL> select column_name, pctversion, retention from user_lobs where table_name = 'T_PATIENTS_DOC_CA';
COLUMN_NAME PCTVERSION  RETENTION
----------- ---------- ----------
SIGN                10
TIMESTAMP                    7200