This article explains the commonly used commands of Oracle Data Pump. The methods described here are simple, fast, and practical, so let's walk through them together.
Directory-related SQL statements:
select * from dba_directories;
create directory my_dir as '/home/oracle/tmp';
grant read, write on directory my_dir to scott;
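To confirm that the grant took effect, you can query dba_tab_privs, where directory grants are recorded; a quick check, assuming the directory is named MY_DIR as above:
select grantee, privilege from dba_tab_privs where table_name = 'MY_DIR';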
EXPDP export
Note:
1. The database user performing the export needs read and write privileges on the directory object.
2. The path specified by the directory object must already exist in the operating system.
3. The OS oracle user must have read and write permissions on the path specified by the directory object (see the shell sketch after this list).
4. When the system user exports a schema, the metadata for creating the user and granting privileges is exported as well; an ordinary user cannot export this metadata.
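For notes 2 and 3, the OS path has to be prepared before the export runs. A minimal shell sketch, assuming the Oracle software owner is oracle with group oinstall (adjust to your installation):
mkdir -p /home/oracle/tmp
chown oracle:oinstall /home/oracle/tmp
chmod 750 /home/oracle/tmp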
expdp command examples
## Export a single table, for example:
expdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=expdp.log tables=scott.emp
## Export multiple tables, for example:
expdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=expdp.log tables=\(scott.emp,scott.dept\)
## Export a user (all of that user's objects), for example:
expdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=expdp.log schemas=scott
## Export multiple users, for example:
expdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=expdp.log schemas=\(scott,hr\)
## Export the entire database (data owned by sys, ordsys, and mdsys is not exported), for example:
expdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=expdp.log full=yes
The following examples take exporting a single user as the case:
## Parallel export:
expdp system/oracle directory=my_dir dumpfile=expdp%U.dmp logfile=expdp.log schemas=scott parallel=5
## Export only the user's metadata (table definitions, stored procedures, functions, and so on):
expdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=expdp.log schemas=scott content=metadata_only
## Export the user's stored procedures, for example:
expdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=expdp.log schemas=scott include=procedure
## Export the user's functions and views, for example:
expdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=expdp.log schemas=scott include=\(function,view\)
## Export a user but exclude indexes, for example:
expdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=expdp.log schemas=scott exclude=index
expdp parameter description:
attach=[schema_name.]job_name
Description: no default. Attach to an existing job and enter interactive mode.
Export modes; the following five parameters are mutually exclusive:
full=[yes|no]
Description: no default. Export all data and metadata. A full export requires the datapump_exp_full_database role.
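For example, granting the role to scott (scott is just an illustrative user; in practice full exports are usually run as system):
grant datapump_exp_full_database to scott;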
schemas=schema_name[,...]
Description: default is the current user's schema. Export the named schemas.
tables=[schema_name.]table_name[:partition_name][,...]
Description: no default. Export the named tables.
tablespaces=tablespace_name[,...]
Description: no default. Export the named tablespaces.
transport_tablespaces=tablespace_name[,...]
Description: no default. Export transportable tablespaces.
Filter conditions; the following three parameters are mutually exclusive:
query=[schema.][table_name:]query_clause
Description: no default. Filter the exported rows with a query predicate.
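For example, exporting only the rows of scott.emp where deptno=10 (an illustrative predicate); on a Linux shell the double quotes must be escaped, and putting query in a parfile avoids the quoting entirely:
expdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=expdp.log tables=scott.emp query=scott.emp:\"where deptno=10\"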
exclude=object_type[:name_clause][,...]
Description: no default. Exclude specific object types.
include=object_type[:name_clause][,...]
Description: no default. Include only specific object types.
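The optional name_clause filters on object names. Because of shell escaping it is easiest to place in a parameter file; a sketch that exports only scott's EMP and DEPT tables (illustrative names), with this line put in a parfile (see the parfile example further below):
include=table:"in ('EMP','DEPT')"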
Other parameters:
directory=directory_object
Description: default: data_pump_dir. The directory object for the export.
dumpfile=[directory_object:]file_name[,...]
Description: default: expdat.dmp. The name of the export dump file.
logfile=[directory_object:]file_name
Description: default: export.log. The name of the export log file.
content=[all|data_only|metadata_only]
Description: default: all. Specify what to export.
parallel=integer
Description: default: 1. The degree of parallelism; it should be less than or equal to the number of dmp files, or use the substitution variable %U in dumpfile=.
In a RAC environment, if the parallelism is greater than 1, note that the directory should be a shared directory.
compression=[all|data_only|metadata_only|none]
Description: default: metadata_only. Compress the export.
parfile=[directory_path]file_name
Description: no default. Specifies the name of an export parameter file.
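As a sketch, a parameter file named scott_exp.par (a hypothetical file name) that replaces the command-line options of the schema export above, one parameter per line:
directory=my_dir
dumpfile=expdp.dmp
logfile=expdp.log
schemas=scott
It is then invoked with: expdp system/oracle parfile=scott_exp.par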
network_link=source_database_link
Description: no default. Connect to the source database over a database link and export from it.
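A sketch, assuming a database link named src_link that points to a tnsnames entry SRCDB (both names are illustrative); the export reads scott's objects over the link from the source database:
create database link src_link connect to system identified by oracle using 'SRCDB';
expdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=expdp.log schemas=scott network_link=src_link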
filesize=integer[b|kb|mb|gb|tb]
Description: default: 0 (no size limit). Specifies the maximum size of each dmp file.
If this parameter is smaller than the size of the data to be exported, error ORA-39095 is reported.
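To avoid ORA-39095, combine filesize with the %U substitution variable so Data Pump can keep creating new files; a sketch that caps each file at 2 GB:
expdp system/oracle directory=my_dir dumpfile=expdp%U.dmp logfile=expdp.log schemas=scott filesize=2g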
job_name=jobname_string
Description: default: a system-generated name of the form SYS_EXPORT_<mode>_NN. Specifies the job name.
version=[compatible|latest|version_string]
Description: default: compatible. Lets you specify the version of the exported dmp file; the default follows the database compatibility setting.
IMPDP Import
Note:
1. Files exported by expdp cannot be imported with imp; they can only be imported into the database with impdp.
2. If an existing object is encountered during import, it is skipped by default and the import continues with the remaining objects.
3. Before importing, confirm that the tablespaces and schemas in the dmp file correspond to those in the target database.
4. Before importing a dmp file, determine the command that was used to export it, so the data can be imported smoothly.
If you get a dmp file and the export command has been forgotten, you can identify the file as follows (unofficial methods; do not try them on production data). First, confirm whether the dmp file was exported by exp or expdp:
1) xxd test.dmp | more
A file exported by expdp starts with 0301, while a file exported by exp starts with 0303.
2) strings test.dmp | more
Header information of a dmp file exported by expdp:
"SYS"."SYS_EXPORT_TABLE_01" - job name
x86_64/Linux 2.4.xx - operating system version
bjdb - database name
ZHS16GBK - database character set
11.02.00.04.00 - database version
Header information of a dmp file exported by exp:
iEXPORT:V11.02.00 - version
USCOTT - user
RTABLES - objects
To confirm the export command of a dmp file exported by expdp:
strings test.dmp | grep CLIENT_COMMAND
impdp command examples
## Import all data in the dmp file, for example:
impdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=impdp.log full=yes
## Import a single table, for example:
impdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=impdp.log tables=scott.emp
## Import multiple tables, for example:
impdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=impdp.log tables=\(scott.emp,scott.dept\)
## Import a user, for example:
impdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=impdp.log schemas=scott
## Import multiple users, for example:
impdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=impdp.log schemas=\(scott,hr\)
The following examples take importing all the data in the dmp file as the case:
## Parallel import:
impdp system/oracle directory=my_dir dumpfile=expdp%U.dmp logfile=impdp.log parallel=5
## Import only metadata (table definitions, stored procedures, functions, and so on):
impdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=impdp.log content=metadata_only
## Import stored procedures, for example:
impdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=impdp.log include=procedure
## Import functions and views, for example:
impdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=impdp.log include=\(function,view\)
## Import data but exclude indexes, for example:
impdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=impdp.log exclude=index
## Rename a table during import, for example:
impdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=impdp.log remap_table=scott.emp:emp1
## Rename a schema during import, for example:
impdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=impdp.log remap_schema=scott:tim
## Rename a tablespace during import, for example:
impdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=impdp.log remap_tablespace=users:apptbs
## Ignore the segment attributes of all objects during import, so objects are created in the target user's default tablespace:
impdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=impdp.log transform=segment_attributes:n
## Write the DDL statements in a dmp file to a file instead of importing them into the database, for example:
impdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=impdp.log sqlfile=import.sql
impdp parameter description
attach=[schema_name.]job_name
Description: no default. Attach to an existing job and enter interactive mode.
Import modes; the following five parameters are mutually exclusive:
full=[yes|no]
Description: default: yes. Import all data and metadata from the dmp file.
schemas=schema_name[,...]
Description: no default. Import the named schemas.
tables=[schema_name.]table_name[:partition_name][,...]
Description: no default. Import the named tables.
tablespaces=tablespace_name[,...]
Description: no default. Import the named tablespaces.
transport_tablespaces=tablespace_name[,...]
Description: no default. Import transportable tablespaces.
Filter conditions; the following three parameters are mutually exclusive:
query=[schema.][table_name:]query_clause
Description: no default. Filter the imported rows with a query predicate.
exclude=object_type[:name_clause][,...]
Description: no default. Exclude specific object types.
include=object_type[:name_clause][,...]
Description: no default. Include only specific object types.
Other parameters:
directory=directory_object
Description: default: data_pump_dir. The directory object for the import.
dumpfile=[directory_object:]file_name[,...]
Description: default: expdat.dmp. The name of the dump file to import.
logfile=[directory_object:]file_name
Description: default: import.log. The name of the import log file.
content=[all|data_only|metadata_only]
Description: default: all. Specify what to import.
parallel=integer
Description: default: 1. The degree of parallelism; it should be less than or equal to the number of dmp files, or use the substitution variable %U in dumpfile=.
parfile=[directory_path]file_name
Description: no default. Specifies the name of an import parameter file.
network_link=source_database_link
Description: no default. Connect to the source database over a database link and import from it directly.
job_name=jobname_string
Description: default: a system-generated name of the form SYS_IMPORT_<mode>_NN. Specifies the job name.
version=[compatible|latest|version_string]
Description: default: compatible. Lets you specify the version of the objects to import; the default follows the database compatibility setting.
remap_table=[schema.]old_tablename[.partition]:new_tablename
Description: no default. Allows tables to be renamed during import.
remap_schema=source_schema:target_schema
Description: no default. Allows schemas to be renamed during import.
remap_tablespace=source_tablespace:target_tablespace
Description: no default. Allows tablespaces to be renamed during import.
transform=transform_name:value[:object_type]
Description: no default. Allows the DDL of the imported objects to be altered.
sqlfile=[directory_object:]file_name
Description: no default. Writes all the SQL DDL, selected by the other parameters, to the specified file instead of executing it.
table_exists_action=[skip|append|truncate|replace]
Description: default: skip (if content=data_only is specified, the default is append).
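For example, to reload the rows of an existing table while keeping its current definition (truncate empties the table first, then imports the data):
impdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=impdp.log tables=scott.emp table_exists_action=truncate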
Interactive mode
Interactive mode lets you manage a running import or export job. There are two ways to enter it:
1. Press Ctrl+C while the export or import command line is running.
2. Run expdp attach=jobname or impdp attach=jobname. You can find the job name in the export or import log, or by querying dba_datapump_jobs.
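A sketch of an interactive session, assuming an auto-generated job name SYS_IMPORT_FULL_01 (check your log or dba_datapump_jobs for the real name):
impdp system/oracle attach=SYS_IMPORT_FULL_01
Import> status
Import> stop_job=immediate
Re-attaching later and issuing start_job resumes the stopped job; kill_job removes it permanently.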
Summary of common errors
If the directory object has not been created, an error is reported:
ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 536
ORA-29283: invalid file operation
Using impdp to import a dmp file exported by exp reports an error:
ORA-39000: bad dump file specification
ORA-39143: dump file "/u01/20161031/bjh02.dmp" may be an original export dump file
If the source database version is higher than the target database version, you need to add the version= parameter at export time, set to the target database version; otherwise an error is reported:
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31640: unable to open dump file "/home/oracle/EXPDP20161024_1.DMP" for read
ORA-27037: unable to obtain file status
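For example, exporting from a 12.1 source so the file can be imported into an 11.2.0.4 target (version numbers are illustrative; match them to your own databases):
expdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=expdp.log schemas=scott version=11.2.0.4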
To close, here is the sqlfile trick for extracting user metadata, which a friend asked me about a while ago. Having worked mainly with DB2, I only recently started dealing with Oracle, and I knew very little about Data Pump, so I hesitated for a moment when asked. The first thing I thought of was expdp, the Data Pump tool that ships with Oracle:
expdp system/oracle schemas=scott directory=my_dir dumpfile=scott.dmp logfile=scott.log content=metadata_only
But here was the problem: my friend wanted a SQL file, not a dump file. Searching around, I learned that metadata can be exported to a SQL file with the PL/SQL Developer tool, but that output still did not meet the need. Later I was shown that you only have to add the sqlfile parameter when importing the metadata to generate the SQL file. The specific command is as follows:
impdp system/oracle directory=my_dir schemas=scott dumpfile=scott.dmp logfile=scott_imp.log sqlfile=scott.sql
sqlfile is described in the impdp help as follows:
[oracle@Yukki tmp]$ impdp -help
SQLFILE
Write all the SQL DDL to a specified file.
At this point you should have a deeper understanding of the commonly used Oracle Data Pump commands; why not try them out in practice yourself!