This article summarizes the commands commonly used with the Oracle Data Pump utilities, with practical examples and a parameter reference for the situations you are likely to meet in real cases. I hope you find it useful.
Preface
expdp and impdp are tools for moving data between Oracle databases. Unlike exp/imp, they are server-based: dump files are read and written through directory objects on the database server, not on the client. This article briefly summarizes the commands commonly used with expdp and impdp. For more information, refer to the official Oracle documentation, Oracle Database Utilities.
Directory-related SQL statements:
select * from dba_directories;
create directory my_dir as '/home/oracle/tmp';
grant read, write on directory my_dir to scott;
Expdp
Note:
1. The database user performing the export needs read and write privileges on the directory object (see the verification queries after this list).
2. The path specified by the directory object must already exist in the operating system.
3. The oracle operating system user must have read and write permissions on that path.
4. When the system user exports a schema, the metadata for creating the user and its system grants is exported as well; an ordinary user cannot export this metadata.
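To verify items 1 and 2, a minimal sketch (my_dir and scott as created above; directory grants are recorded in dba_tab_privs under the directory name):
select grantee, privilege from dba_tab_privs where table_name = 'MY_DIR';
select directory_path from dba_directories where directory_name = 'MY_DIR';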
Example of expdp command
Export a single table, for example:
expdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=expdp.log tables=scott.emp
Export multiple tables, for example:
expdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=expdp.log tables=\(scott.emp,scott.dept\)
Export a user (all objects owned by that user), for example:
expdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=expdp.log schemas=scott
Export multiple users, for example:
expdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=expdp.log schemas=\(scott,hr\)
Export the entire database (data of the sys, ordsys, and mdsys users is not exported), for example:
expdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=expdp.log full=yes
Taking the export of a single user as an example:
Parallel export:
expdp system/oracle directory=my_dir dumpfile=expdp%U.dmp logfile=expdp.log schemas=scott parallel=5
Export user metadata only (table definitions, stored procedures, functions, and so on):
expdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=expdp.log schemas=scott content=metadata_only
Export a user's stored procedures, for example:
expdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=expdp.log schemas=scott include=procedure
Export a user's functions and views, for example:
expdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=expdp.log schemas=scott include=\(function,view\)
Export a user but exclude indexes, for example:
expdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=expdp.log schemas=scott exclude=index
expdp parameter description:
attach=[schema_name.]job_name
Description: no default. Attach to an existing job and enter interactive mode.
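For instance, attaching to a running job (SYS_EXPORT_SCHEMA_01 is an illustrative job name; see the interactive mode section below):
expdp system/oracle attach=SYS_EXPORT_SCHEMA_01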
Export modes; the following five parameters are mutually exclusive:
full=[yes|no]
Description: no default. Export all data and metadata. A full export requires the datapump_exp_full_database role.
schemas=schema_name[,...]
Description: defaults to the current user's schema. Export the listed schemas.
tables=[schema_name.]table_name[:partition_name][,...]
Description: no default. Export the listed tables.
tablespaces=tablespace_name[,...]
Description: no default. Export the listed tablespaces.
transport_tablespaces=tablespace_name[,...]
Description: no default. Export transportable tablespaces.
Filter conditions; the following three parameters are mutually exclusive:
query=[schema.][table_name:]query_clause
Description: no default. Filter the exported rows with a query clause.
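For instance, a minimal sketch exporting only department 10 rows from scott.emp (shell escaping varies; a parameter file avoids quoting issues):
expdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=expdp.log tables=scott.emp query=scott.emp:'"where deptno=10"'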
exclude=object_type[:name_clause][,...]
Description: no default. Exclude specific object types, optionally narrowed by a name_clause.
include=object_type[:name_clause][,...]
Description: no default. Include only specific object types, optionally narrowed by a name_clause.
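For instance, excluding specific tables by name requires a name_clause, which is easiest to write in a parameter file (see parfile= below) to avoid shell escaping. An illustrative parameter-file line:
exclude=table:"IN ('EMP','DEPT')"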
Other parameters:
directory=directory_object
Description: default: data_pump_dir. The directory object for the export.
dumpfile=[directory_object:]file_name[,...]
Description: default: expdat.dmp. The dump file name(s) for the export.
logfile=[directory_object:]file_name
Description: default: export.log. The export log file name.
content=[all|data_only|metadata_only]
Description: default: all. Specifies what to export.
parallel=integer
Description: default: 1. Degree of parallelism; it should be less than or equal to the number of dump files, or use the substitution variable %U in dumpfile=.
In a RAC environment, if the parallelism is greater than 1, note that the directory should be on shared storage.
compression=[all|data_only|metadata_only|none]
Description: default: metadata_only. Compression of the dump file contents.
parfile=[directory_path]file_name
Description: no default. Specifies the export parameter file name (see the sketch below).
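A minimal sketch of a parameter file, assuming it is saved as exp_scott.par (the file name is illustrative; it ties in the exclude example above):
directory=my_dir
dumpfile=expdp.dmp
logfile=expdp.log
schemas=scott
exclude=table:"IN ('EMP','DEPT')"
Then invoke it with:
expdp system/oracle parfile=exp_scott.par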
network_link=source_database_link
Description: no default. Export from a source database over a database link.
filesize=integer[b|kb|mb|gb|tb]
Description: default: 0 (no size limit). Specifies the maximum size of each dump file.
If the file space allowed by this parameter is smaller than the data to be exported, error ORA-39095 is reported.
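For instance, a sketch splitting the export into files of at most 2 GB each; the %U substitution variable lets Data Pump create additional files as needed, which avoids ORA-39095:
expdp system/oracle directory=my_dir dumpfile=expdp%U.dmp logfile=expdp.log schemas=scott filesize=2g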
job_name=jobname_string
Description: default: a system-generated name of the form SYS_EXPORT_<mode>_NN. Specifies the job name.
version=[compatible|latest|version_string]
Description: default: compatible. Specifies the database version that the exported dmp file should be compatible with.
cluster=[yes|no]
Description: default: yes. Use cluster resources and distribute workers across the Oracle RAC instances. Note that in a multi-node environment, if the export directory is not on shared storage, an error such as ORA-31693 is reported unless you add cluster=no, because the other nodes cannot access the directory.
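For instance, a sketch forcing the job to run only on the instance you are connected to, for a directory on local storage:
expdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=expdp.log schemas=scott cluster=no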
Impdp
Note:
1. Files exported by expdp cannot be imported with imp; they can only be imported with impdp.
2. If an object already exists during import, it is skipped by default and the import continues with the remaining objects.
3. Before importing, confirm that the tablespaces and schemas in the dmp file correspond to those in the target database.
4. Before importing a dmp file, determine the command that was used to export it so that the data can be imported smoothly.
If you have a dmp file and have forgotten the export command, you can check it in the following ways (unofficial; do not try this on production data):
◆ Confirm whether the dmp file was exported by exp or expdp
1)
xxd test.dmp | more
A file exported by expdp starts with 0301, while a file exported by exp starts with 0303.
2)
strings test.dmp | more
Header information of a dmp file exported by expdp:
"SYS"."SYS_EXPORT_TABLE_01" - job name
x86_64/Linux 2.4.xx - operating system version
bjdb - database name
ZHS16GBK - database character set
11.02.00.04.00 - database version
Header information of a dmp file exported by exp:
iEXPORT:V11.02.00 - version
USCOTT - user
RTABLES - object
◆ Confirm the export command of a dmp file exported by expdp
strings test.dmp | grep CLIENT_COMMAND
Example of impdp command
Import all the data in the dmp file, for example:
impdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=impdp.log full=yes
Import a single table, for example:
impdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=impdp.log tables=scott.emp
Import multiple tables, for example:
impdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=impdp.log tables=\(scott.emp,scott.dept\)
Import a user, for example:
impdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=impdp.log schemas=scott
Import multiple users, for example:
impdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=impdp.log schemas=\(scott,hr\)
Taking a full import of the dmp file as an example:
Parallel import:
impdp system/oracle directory=my_dir dumpfile=expdp%U.dmp logfile=impdp.log parallel=5
Import metadata only (table definitions, stored procedures, functions, and so on):
impdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=impdp.log content=metadata_only
Import stored procedures, for example:
impdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=impdp.log include=procedure
Import functions and views, for example:
impdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=impdp.log include=\(function,view\)
Import data but exclude indexes, for example:
impdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=impdp.log exclude=index
Import with a renamed table, for example:
impdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=impdp.log remap_table=scott.emp:emp1
Import into a different schema, for example:
impdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=impdp.log remap_schema=scott:tim
Import into a different tablespace, for example:
impdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=impdp.log remap_tablespace=users:apptbs
Ignore the segment attributes of all objects on import, so that objects are created in the target database user's default tablespace:
impdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=impdp.log transform=segment_attributes:n
Write the DDL statements from the dmp file to a file instead of importing them into the database, for example:
impdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=impdp.log sqlfile=import.sql
impdp parameter description:
attach=[schema_name.]job_name
Description: no default. Attach to an existing job and enter interactive mode.
Import modes; the following five parameters are mutually exclusive:
full=[yes|no]
Description: default: yes. Import all data and metadata from the dmp file.
schemas=schema_name[,...]
Description: no default. Import the listed schemas.
tables=[schema_name.]table_name[:partition_name][,...]
Description: no default. Import the listed tables.
tablespaces=tablespace_name[,...]
Description: no default. Import the listed tablespaces.
transport_tablespaces=tablespace_name[,...]
Description: no default. Import transportable tablespaces.
Filter conditions; the following three parameters are mutually exclusive:
query=[schema.][table_name:]query_clause
Description: no default. Filter the imported rows with a query clause.
exclude=object_type[:name_clause][,...]
Description: no default. Exclude specific object types.
include=object_type[:name_clause][,...]
Description: no default. Include only specific object types.
Other parameters:
directory=directory_object
Description: default: data_pump_dir. The directory object for the import.
dumpfile=[directory_object:]file_name[,...]
Description: default: expdat.dmp. The dump file name(s) to import.
logfile=[directory_object:]file_name
Description: default: import.log. The import log file name.
content=[all|data_only|metadata_only]
Description: default: all. Specifies what to import.
parallel=integer
Description: default: 1. Degree of parallelism; it should be less than or equal to the number of dump files, or use the substitution variable %U in dumpfile=.
parfile=[directory_path]file_name
Description: no default. Specifies the import parameter file name.
network_link=source_database_link
Description: no default. Import directly from a source database over a database link.
job_name=jobname_string
Description: default: a system-generated name of the form SYS_IMPORT_<mode>_NN. Specifies the job name.
version=[compatible|latest|version_string]
Description: default: compatible. Specifies the version of database objects to import.
remap_table=[schema.]old_tablename[.partition]:new_tablename
Description: no default. Allows tables to be renamed during import.
remap_schema=source_schema:target_schema
Description: no default. Allows objects to be loaded into a different schema during import.
remap_tablespace=source_tablespace:target_tablespace
Description: no default. Allows objects to be moved into a different tablespace during import.
transform=transform_name:value[:object_type]
Description: no default. Allows the DDL of imported objects to be altered.
sqlfile=[directory_object:]file_name
Description: no default. Writes all the SQL DDL that would be executed, based on the other parameters, to the specified file instead of importing it.
table_exists_action=[skip|append|truncate|replace]
Description: default: skip (if content=data_only is specified, the default is append). The action to take when an imported table already exists.
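For instance, a sketch replacing a pre-existing table during import (replace drops and re-creates each matching table from the dump file):
impdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=impdp.log tables=scott.emp table_exists_action=replace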
Interactive mode
Interactive mode lets you manage running import and export jobs.
Ways to enter interactive mode:
1. Press Ctrl+C while the export or import command line is running.
2. Run expdp attach=jobname or impdp attach=jobname.
The jobname can be found in the export or import log, or by querying dba_datapump_jobs.
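A minimal sketch of an interactive session, assuming a running export job named SYS_EXPORT_SCHEMA_01 (the name is illustrative):
expdp system/oracle attach=SYS_EXPORT_SCHEMA_01
Export> status          (show job progress)
Export> stop_job        (pause the job; it can be restarted later)
Export> start_job       (resume a stopped job after re-attaching)
Export> kill_job        (abort the job and remove its master table)
Export> continue_client (leave interactive mode and resume logging)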
Summary of common errors
If the operating system directory has not been created, an error is reported:
ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 536
ORA-29283: invalid file operation
When impdp imports a dmp file that was exported by exp, an error is reported:
ORA-39000: bad dump file specification
ORA-39143: dump file "/u01/20161031/bjh02.dmp" may be an original export dump file
If the exporting database version is higher than the importing database version, add the version= parameter at export time, set to the importing database's version. Otherwise, an error is reported:
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31640: unable to open dump file "/home/oracle/EXPDP20161024_1.DMP" for read
ORA-27037: unable to obtain file status
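For instance, a sketch exporting from a higher-version source so that an 11.2 target can read the file (the version numbers are illustrative):
expdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=expdp.log schemas=scott version=11.2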
To close, a note on the sqlfile command for extracting user metadata, which a friend asked me about some time ago. Having worked mainly with DB2, I had only recently started working with Oracle and knew very little about Data Pump, so I hesitated when asked. The first thing I thought of was expdp, the data pump tool that ships with Oracle:
expdp system/oracle directory=my_dir schemas=scott dumpfile=scott.dmp logfile=scott.log content=metadata_only
But here is the problem: my friend wanted a SQL file, not a dump file. Searching online, I learned that metadata can be exported to a SQL file with the PL/SQL Developer tool, but that output still did not meet the need. Later I was shown that you only need to add the sqlfile parameter when importing the metadata to generate the SQL file. The specific command is as follows:
impdp system/oracle directory=my_dir schemas=scott dumpfile=scott.dmp logfile=scott_imp.log sqlfile=scott.sql
sqlfile is described in the impdp help as follows:
[oracle@Yukki tmp]$ impdp -help
SQLFILE
Write all the SQL DDL to a specified file.
This concludes the summary of commands commonly used with the Oracle data pump. Thank you for reading.