2025-04-05 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 05/31 Report
This article introduces the common operations of Oracle Data Pump. Many people run into questions about these operations in day-to-day work, so we have collected and organized simple, practical methods that we hope will answer them. Read on!
1. Why choose Data Pump?
Technical renewal: Data Pump was introduced in Oracle 10g, so it is no longer a new technology (11g has already finished its standard support period), but it is the tool Oracle continues to develop.
Fewer bugs: since 10g, Data Pump has been improved continuously with each release, so it adapts better to newer versions of the Oracle database.
Better performance: Data Pump can use direct path loads, parallelism, and other features, so it performs better when migrating large amounts of data and can greatly reduce export and import time.
More powerful: exp/imp cannot export empty tables, and because exp/imp stopped being updated in 10g, objects introduced in later versions (such as user-defined object types) cannot be exported with it.
1.1. Main differences from exp/imp
- exp/imp are client-side tools and can be used on the client; Data Pump is a server-side tool and can only be used on the server.
- exp/imp cannot export an empty table; Data Pump can.
- The dump files produced by exp/imp and by Data Pump are not interchangeable: a file exported by exp cannot be imported with impdp, and a file exported by expdp cannot be imported with imp.
2. Basic concepts
The command-line tools of Data Pump are expdp and impdp; they are used to move data between databases.
A directory is an Oracle object type that can be thought of as a pointer to a physical storage path; it specifies the path Data Pump uses for its export and import files.
A schema is a collection of database objects; usually there is one schema per user.
3. Basic steps
3.1 Create a directory
SQL> create directory dump_dir as '/home/oracle/dump';
dump_dir is the directory name; choose any name that does not conflict with an existing one. If the directory already exists in the database, you can use it directly.
/home/oracle/dump is the physical path in the operating system; make sure the path exists and has enough space for the backup files.
3.2 Grant privileges
SQL> grant read, write on directory dump_dir to public;
dump_dir is the directory created above.
public makes the directory available to all users; it can also be granted to a specific user.
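If the directory should be restricted to a single user rather than public, grant it to that user instead; a minimal sketch (scott is an example grantee):

```sql
-- Grant directory access to one user only
grant read, write on directory dump_dir to scott;
```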
3.3 Perform the export and import
$expdp system/dbmanager directory=dump_dir full=Y dumpfile=full_20190101.dmp logfile=full_expdp_20190101.log
$impdp system/dbmanager directory=dump_dir dumpfile=full_20190101.dmp logfile=full_impdp_20190101.log
system is the user running the command.
dbmanager is the system user's password.
dump_dir is the directory created above.
full=Y means a full-database export.
dumpfile/logfile can be named as appropriate; the names should describe the content and date of the backup. If a dump file with the same name already exists, expdp raises an error rather than overwriting it (unless reuse_dumpfiles=y is specified).
4. Common usage
4.1 expdp Export
## Export a single table, for example:
expdp system/dbmanager directory=dump_dir dumpfile=emp_20190101.dmp logfile=emp_expdp_20190101.log tables=scott.emp
## Export multiple tables, for example:
expdp system/dbmanager directory=dump_dir dumpfile=empdept_20190101.dmp logfile=empdept_expdp_20190101.log tables=\(scott.emp,scott.dept\)
## Export a user (all of the user's objects), for example:
expdp system/dbmanager directory=dump_dir dumpfile=scott_20190101.dmp logfile=scott_expdp_20190101.log schemas=scott
## Export multiple users, for example:
expdp system/dbmanager directory=dump_dir dumpfile=scotthr_20190101.dmp logfile=scotthr_expdp_20190101.log schemas=\(scott,hr\)
## Export the entire database (SYS-owned data is not exported), for example:
expdp system/dbmanager directory=dump_dir dumpfile=full_20190101.dmp logfile=full_expdp_20190101.log full=y
## Parallel export, for example:
expdp system/dbmanager directory=dump_dir dumpfile=scott_20190101_%U.dmp logfile=scott_expdp_20190101.log schemas=scott parallel=8
## Export a user's metadata only (table definitions, stored procedures, functions, etc.), for example:
expdp system/dbmanager directory=dump_dir dumpfile=scott_20190101.dmp logfile=scott_expdp_20190101.log schemas=scott content=metadata_only
## Export a user's stored procedures, for example:
expdp system/dbmanager directory=dump_dir dumpfile=scott_20190101.dmp logfile=scott_expdp_20190101.log schemas=scott include=procedure
## Export a user's functions and views, for example:
expdp system/dbmanager directory=dump_dir dumpfile=scott_20190101.dmp logfile=scott_expdp_20190101.log schemas=scott include=\(function,view\)
## Export a user, excluding indexes, for example:
expdp system/dbmanager directory=dump_dir dumpfile=scott_20190101.dmp logfile=scott_expdp_20190101.log schemas=scott exclude=index
4.2 impdp Import
## Import all the data in the dump file, for example:
impdp system/dbmanager directory=dump_dir dumpfile=full_20190101.dmp logfile=full_impdp_20190101.log full=y
## Import a single table, for example:
impdp system/dbmanager directory=dump_dir dumpfile=full_20190101.dmp logfile=full_impdp_20190101.log tables=scott.emp
## Import multiple tables, for example:
impdp system/dbmanager directory=dump_dir dumpfile=full_20190101.dmp logfile=full_impdp_20190101.log tables=\(scott.emp,scott.dept\)
## Import a user, for example:
impdp system/dbmanager directory=dump_dir dumpfile=full_20190101.dmp logfile=full_impdp_20190101.log schemas=scott
## Import multiple users, for example:
impdp system/dbmanager directory=dump_dir dumpfile=full_20190101.dmp logfile=full_impdp_20190101.log schemas=\(scott,hr\)
## Parallel import, for example:
impdp system/dbmanager directory=dump_dir dumpfile=full_20190101_%U.dmp logfile=full_impdp_20190101.log parallel=5
## Import metadata only (table definitions, stored procedures, functions, etc.), for example:
impdp system/dbmanager directory=dump_dir dumpfile=full_20190101.dmp logfile=full_impdp_20190101.log content=metadata_only
## Import stored procedures, for example:
impdp system/dbmanager directory=dump_dir dumpfile=full_20190101.dmp logfile=full_impdp_20190101.log include=procedure
## Import functions and views, for example:
impdp system/dbmanager directory=dump_dir dumpfile=full_20190101.dmp logfile=full_impdp_20190101.log include=\(function,view\)
## Import data, excluding indexes, for example:
impdp system/dbmanager directory=dump_dir dumpfile=full_20190101.dmp logfile=full_impdp_20190101.log exclude=index
## Import with a table renamed, for example:
impdp system/dbmanager directory=dump_dir dumpfile=full_20190101.dmp logfile=full_impdp_20190101.log remap_table=scott.emp:emp1
## Import with a schema renamed, for example:
impdp system/dbmanager directory=dump_dir dumpfile=full_20190101.dmp logfile=full_impdp_20190101.log remap_schema=scott:tim
## Import with a tablespace renamed, for example:
impdp system/dbmanager directory=dump_dir dumpfile=full_20190101.dmp logfile=full_impdp_20190101.log remap_tablespace=users:pams
## Write the DDL statements in the dump file to a file instead of executing them against the database, for example:
impdp system/dbmanager directory=dump_dir dumpfile=full_20190101.dmp logfile=full_impdp_20190101.log sqlfile=import.sql
5. Common parameters
5.1 expdp parameter description
attach=[schema_name.]job_name
Description: no default. Attaches to an existing job and enters interactive mode.
## Export modes; the following five parameters are mutually exclusive:
full=[Y|N]
Description: exports all data and metadata. A full export requires the datapump_exp_full_database role.
schemas=schema_name[,...]
Description: exports users (schemas).
tables=[schema_name.]table_name[:partition_name][,...]
Description: exports tables.
tablespaces=tablespace_name[,...]
Description: exports tablespaces.
transport_tablespaces=tablespace_name[,...]
Description: exports transportable tablespaces.
## Filter conditions; the following three parameters are mutually exclusive:
query=[schema.][table_name:]query_clause
Description: exports rows matching a query condition.
exclude=object_type[:name_clause][,...]
Description: excludes specific object types.
include=object_type[:name_clause][,...]
Description: includes only specific object types.
## Other parameters:
directory=directory_object
Description: the export path (directory object).
dumpfile=file_name[,...]
Description: the name(s) of the export dump file(s).
logfile=file_name
Description: the name of the export log file.
content=[all | data_only | metadata_only]
Description: specifies what to export.
parallel=integer
Description: degree of parallelism. The value should be less than or equal to the number of dump files; alternatively, use the substitution variable %U in dumpfile=. In a RAC environment with parallelism greater than 1, the directory must be on shared storage.
compression=[all | data_only | metadata_only | none]
Description: compression.
parfile=[directory_path]file_name
Description: specifies the name of an export parameter file.
filesize=integer[b | kb | mb | gb | tb]
Description: specifies the maximum size of each dump file. If the files cannot hold all the data to be exported, error ORA-39095 is reported.
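The parfile= parameter above avoids long command lines by collecting the parameters in a file. A minimal sketch, assuming the directory and schema from the earlier examples (the file name scott_export.par is illustrative; you would then run expdp system/dbmanager parfile=scott_export.par):

```shell
# Write the export parameters to a parameter file
cat > scott_export.par <<'EOF'
directory=dump_dir
dumpfile=scott_20190101.dmp
logfile=scott_expdp_20190101.log
schemas=scott
EOF
```

A parameter file also avoids shell quoting issues, so parentheses and quotes in values such as tables= or query= do not need backslash escapes.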
5.2 impdp parameter description
attach=job_name
Description: attaches to an existing job and enters interactive mode.
## Import modes; the following five parameters are mutually exclusive:
full=[Y|N]
Description: imports all data and metadata from the dump file.
schemas=schema_name[,...]
Description: imports users (schemas).
tables=[schema_name.]table_name[:partition_name][,...]
Description: imports tables.
tablespaces=tablespace_name[,...]
Description: imports tablespaces.
transport_tablespaces=tablespace_name[,...]
Description: imports transportable tablespaces.
## Filter conditions; the following three parameters are mutually exclusive:
query=[schema.][table_name:]query_clause
Description: imports rows matching a query condition.
exclude=object_type[:name_clause][,...]
Description: excludes specific object types.
include=object_type[:name_clause][,...]
Description: includes only specific object types.
## Other parameters:
directory=directory_object
Description: the import path (directory object).
dumpfile=file_name[,...]
Description: the name(s) of the dump file(s) to import.
logfile=file_name
Description: the name of the import log file.
content=[all | data_only | metadata_only]
Description: specifies what to import.
parallel=integer
Description: degree of parallelism. The value should be less than or equal to the number of dump files; alternatively, use the substitution variable %U in dumpfile=.
parfile=[directory_path]file_name
Description: specifies the name of an import parameter file.
remap_table=[schema.]old_tablename[.partition]:new_tablename
Description: renames a table during import.
remap_schema=source_schema:target_schema
Description: renames a schema during import.
remap_tablespace=source_tablespace:target_tablespace
Description: renames a tablespace during import.
sqlfile=[directory_object:]file_name
Description: writes the SQL DDL that would be executed (based on the other parameters) to the specified file instead of running it.
table_exists_action=[skip | append | truncate | replace]
Description: the default is skip (if content=data_only is specified, the default is append).
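For example, to re-import a table over an existing copy, table_exists_action can be combined with the import commands shown earlier; a sketch (the file and table names follow the examples above):

```shell
impdp system/dbmanager directory=dump_dir dumpfile=full_20190101.dmp logfile=full_impdp_20190101.log tables=scott.emp table_exists_action=replace
```

replace drops and re-creates the table from the dump file; truncate keeps the existing table definition but removes its rows before loading.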
6. Notes
6.1 Directory-related SQL statements:
## View directories
select * from dba_directories;
## Create a directory
create directory dump_dir as '/home/oracle/tmp';
## Grant privileges on a directory
grant read, write on directory my_dir to public;
6.2 expdp Export
1. The database user performing the export must have read and write privileges on the directory object.
2. The path specified by the directory object must already exist in the operating system.
3. The oracle operating-system user must have read and write permissions on that path.
4. When the system user exports a schema, the metadata for creating the user and granting system privileges is also exported; an ordinary user cannot export this metadata.
6.3 impdp Import
1. Files exported by expdp cannot be imported with imp; they can only be imported with impdp.
2. If an object already exists during import, it is skipped by default and the import continues with the remaining objects.
3. Before importing, confirm that the tablespaces and schemas in the dump file correspond to those in the target database.
4. Before importing a dump file, determine the command that was used to export it so that the data can be imported smoothly.
6.4 Interactive mode
Interactive mode lets you operate on a running export or import job.
Ways to enter interactive mode:
1. Press Ctrl+C while the export or import command line is running.
2. Run expdp attach=jobname or impdp attach=jobname.
The job name appears in the export/import log, or it can be found by querying dba_datapump_jobs.
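To locate the job name from SQL, the dba_datapump_jobs view can be queried; a minimal sketch (the column list follows the standard data dictionary view):

```sql
-- List current Data Pump jobs and their state
select owner_name, job_name, operation, state
from dba_datapump_jobs;
```

Once attached, interactive commands such as status, stop_job, start_job, kill_job, and exit_client are available at the Export> or Import> prompt.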
6.5 Common errors
If the directory has not been created in the database, an error is reported:
ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 536
ORA-29283: invalid file operation
When impdp is used to import a dump file that was exported by exp, an error is reported:
ORA-39000: bad dump file specification
ORA-39143: dump file "/orabak/pams_20190101.dmp" may be an original export dump file
If the version of the exporting database is higher than that of the importing database, add version=<target database version> to the export command; otherwise an error is reported:
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31640: unable to open dump file "/orabak/pams_20190101.dmp" for read
ORA-27037: unable to obtain file status
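For example, when exporting from a 19c database for import into a 12.2 database, the export might look like this (a sketch; the version number, credentials, and file names are illustrative):

```shell
expdp system/dbmanager directory=dump_dir dumpfile=scott_20190101.dmp logfile=scott_expdp_20190101.log schemas=scott version=12.2
```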
This concludes our study of the common operations of Oracle Data Pump. We hope it has answered your questions; pairing theory with practice is the best way to learn, so go and try it. To keep learning, please continue to follow this site.
© 2024 shulou.com SLNews company. All rights reserved.