2025-03-28 Update From: SLTechnology News&Howtos
COMPRESSION={METADATA_ONLY | NONE} -- controls whether the dump file is compressed. METADATA_ONLY compresses only object metadata; NONE disables compression. (Later releases with the Advanced Compression option also accept ALL and DATA_ONLY.)
CONTENT={ALL | DATA_ONLY | METADATA_ONLY} -- specifies what to export: ALL exports both object metadata and object data, DATA_ONLY exports only object data, and METADATA_ONLY exports only object metadata.
DIRECTORY=directory_object -- the directory object pointing to the path where dump and log files are written.
DUMPFILE=[directory_object:]file_name [,...] -- directory_object names a directory object and file_name names the dump file. If directory_object is omitted, the export utility uses the directory object given by the DIRECTORY option.
ESTIMATE={BLOCKS | STATISTICS} -- with BLOCKS, Oracle estimates the space a target object occupies as its number of data blocks times the block size; with STATISTICS, the estimate is derived from the most recent optimizer statistics and tends to be less accurate. Either way, the figure is only an estimate.
ESTIMATE_ONLY={y | n} -- estimate the disk space the export job would need without actually performing it. The default is n.
EXCLUDE=object_type[:name_clause] [,...] -- controls which database objects are left out of the export. object_type gives the type of object to exclude, and name_clause restricts the exclusion to specific object names. Note that EXCLUDE and INCLUDE cannot be used together.
FILESIZE=integer[B | K | M | G] -- limits the maximum size of a single dump file. The default is 0, meaning no limit. This option is used together with DUMPFILE.
FULL={y | n} -- whether to export the entire database. The default is n.
LOGFILE=[directory_object:]file_name -- the name of the log file written during the export. The default is export.log.
PARALLEL=integer -- the degree of parallelism for the export job. The default is 1.
SCHEMAS=schema_name [,...] -- exports in schema mode; the default is the current user's schema. This is the most commonly used mode.
TABLES=[schema_name.]table_name[:partition_name] [,...] -- schema_name gives the owner, table_name the table to export, and partition_name an individual partition to export.
TABLESPACES=tablespace_name [,...] -- exports the table data stored in the named tablespaces.
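Several of these options combine naturally on one command line. A minimal sketch (the schema, directory object, and file names are examples carried over from below; the command is assembled and echoed rather than executed, since expdp needs an Oracle installation):

```shell
# Sketch: an expdp command combining FILESIZE, EXCLUDE, and a multi-file
# DUMPFILE template (%U numbers the pieces 01, 02, ...). Names are examples.
SCHEMA=scott
EXPDP_CMD="expdp ${SCHEMA}/tiger schemas=${SCHEMA} directory=tong dumpfile=${SCHEMA}_%U.dmp filesize=2G exclude=statistics logfile=${SCHEMA}.log"
echo "$EXPDP_CMD"   # dry run; execute on a host with the Oracle client installed
```

With FILESIZE=2G and %U in DUMPFILE, the export rolls over to scott_02.dmp, scott_03.dmp, and so on as each 2 GB piece fills up.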
1. Create a backup/restore directory
SQL> create directory tong as '/u01';
Directory created.
SQL> grant read, write on directory tong to USERCTLDEV;
Grant succeeded.
SQL> select * from dba_directories where directory_name='TONG';
OWNER    DIRECTORY_NAME    DIRECTORY_PATH
SYS      TONG              /u01
SQL>
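Note that CREATE DIRECTORY only records the path in the data dictionary; the OS directory itself must exist and be writable by the Oracle software owner, or the export will fail at run time. A sketch (the path here is a stand-in for /u01):

```shell
# The directory object is just metadata; make sure the OS path exists and
# the oracle user can write to it. /tmp/u01_demo stands in for /u01.
BACKUP_DIR=/tmp/u01_demo
mkdir -p "$BACKUP_DIR"
# chown oracle:oinstall "$BACKUP_DIR"   # on a real database host
ls -ld "$BACKUP_DIR"
```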
2. Export data by user (schema mode)
expdp scott/tiger schemas=scott dumpfile=scott.dmp logfile=scott.log directory=tong
3. Export data by user with parallelism
expdp scott/tiger schemas=scott dumpfile=scott.dmp logfile=scott.log directory=tong parallel=40
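When PARALLEL is greater than 1, each worker process needs its own dump file, so the DUMPFILE name should normally carry the %U substitution variable; with a single fixed file name, the extra workers have nothing to write to. A sketch (dry run; names are examples):

```shell
# Sketch: parallel export with one dump file per worker via the %U template.
PARALLEL=4
CMD="expdp scott/tiger schemas=scott directory=tong dumpfile=scott_%U.dmp logfile=scott.log parallel=${PARALLEL}"
echo "$CMD"   # execute on a host with Oracle installed
```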
4. Export data by table name
expdp scott/tiger tables=emp,dept dumpfile=scott.dmp logfile=scott.log directory=tong
5. Export data by table name with a condition
expdp scott/tiger tables=emp query='where deptno=20' dumpfile=scott.dmp logfile=scott.log directory=tong
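The QUERY value contains spaces and quote characters that most shells mangle before expdp ever sees them; putting the options in a parameter file sidesteps the quoting entirely. A sketch (the parfile path and names are examples):

```shell
# Sketch: a parfile avoids shell quoting problems with QUERY.
cat > /tmp/emp_query.par <<'EOF'
tables=emp
query=emp:"WHERE deptno = 20"
directory=tong
dumpfile=emp20.dmp
logfile=emp20.log
EOF
cat /tmp/emp_query.par
# expdp scott/tiger parfile=/tmp/emp_query.par   # run on the database host
```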
6. Export data by tablespace
expdp scott/tiger tablespaces=temp,example dumpfile=scott.dmp logfile=scott.log directory=tong
7. Export the entire database
expdp system/manager directory=tong dumpfile=full.dmp full=y
8. Import the scott schema's data back into the scott schema
impdp scott/tiger directory=tong dumpfile=expdp.dmp schemas=scott
9. Import the dept table from the scott schema into the system schema
impdp system/manager directory=tong dumpfile=expdp.dmp tables=scott.dept remap_schema=scott:system
10. Import a tablespace
impdp system/manager directory=tong dumpfile=tablespace.dmp tablespaces=example
11. Import the full database
impdp system/manager directory=dump_dir dumpfile=full.dmp full=y
12. Import data from the abc tablespace under the scott user into the bcd tablespace under the system user
impdp scott/tiger directory=tong dumpfile=scott.dmp logfile=scott.log remap_schema=scott:system remap_tablespace=abc:bcd
13. Export only tables, table data, and indexes, not other objects
expdp upcenter/zVQpErDi76 tables=FUND_CURR_INFO,FUND_NAV_CALC,FUND_MNY_RETRUN directory=tong dumpfile=upcenter.dump logfile=upcenter.log include=TABLE,TABLE_DATA,INDEX compression=ALL
14. When a table already exists in the target, there are four ways the import can handle it, controlled by TABLE_EXISTS_ACTION:
SKIP -- skips the table, leaving the existing one untouched (the default)
APPEND -- keeps the existing rows and appends the imported data
TRUNCATE -- truncates the existing data, then imports the new rows
REPLACE -- drops the table, recreates it, and then imports the new data
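As a usage sketch, appending imported rows into existing tables looks like this (dump file and names are examples from the steps above; the command is echoed as a dry run since impdp needs an Oracle installation):

```shell
# Sketch: keep existing rows and append the imported ones.
CMD="impdp scott/tiger directory=tong dumpfile=scott.dmp table_exists_action=append logfile=imp_append.log"
echo "$CMD"   # execute on a host with Oracle installed
```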