Continuing to Demystify the Oracle Backup Tools: expdp/impdp
In this information age, data volumes are growing at an explosive pace. Not long ago, a few hundred megabytes or a few gigabytes counted as a large database; today a test system can easily occupy tens or even a hundred times that. The native exp/imp utilities, two old workhorses, simply cannot cope with data on this scale. Starting with Oracle 10g, a new tool was born: Data Pump (expdp/impdp).
Advantages of Data Pump
a. Finer-grained selection of data and database objects (via the exclude, include, and content parameters)
b. The export can target a specific database version (mainly for compatibility with older releases)
c. Parallel execution
d. Estimation of the disk space an export job will need (via the estimate_only parameter; see the sketch after this list)
e. Import and export over database links in distributed environments
f. Remapping on import (that is, importing objects into different datafiles, schemas, tablespaces, and so on)
g. Metadata compression and data sampling
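As a quick illustration of point d, a hypothetical sketch (assuming the scott schema and the expdp directory object that are set up in the examples below): estimate_only=y makes expdp report the disk space an export would need without writing any dump file.
expdp scott/lipengfei directory=expdp schemas=scott estimate_only=y estimate=blocks logfile=scott_estimate.log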
To use the data pump tools, you must specify a directory object.
What is a directory object?
a. As the name suggests, it represents a directory; it is not a physical object itself, but a pointer to a specific path on the operating system.
b. Each directory object carries two privileges, read and write, which are granted to specific users via grant.
c. A user with read/write privileges on a directory object can read/write files under the operating-system path the object points to.
I. Directory operations
1. Create the directory at the operating-system level
mkdir /home/oracle/expdp
2. Create the directory object in the database and grant it to user scott
sqlplus / as sysdba
SQL> create directory expdp as '/home/oracle/expdp';
SQL> grant read, write on directory expdp to scott;
3. View all directory objects in the current database
SQL> select owner, directory_name, directory_path from dba_directories;
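Directory privileges are recorded in dba_tab_privs, with the directory name appearing in the table_name column; a small sketch to confirm who can use a directory, assuming it is named EXPDP:
SQL> select grantee, privilege from dba_tab_privs where table_name = 'EXPDP';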
II. expdp parameter description:
1. Export all objects owned by a user
expdp scott/lipengfei directory=expdp dumpfile=scott_all.dmp SCHEMAS=SCOTT logfile=scott_all.log
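For large schemas the same export can be parallelized (advantage c above); an illustrative sketch, using the %U wildcard so that each parallel worker writes its own dump file:
expdp scott/lipengfei directory=expdp dumpfile=scott_%U.dmp parallel=4 SCHEMAS=SCOTT logfile=scott_parallel.log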
2. Export only some tables
(1) expdp scott/lipengfei directory=expdp dumpfile=scott_emp.dmp tables=\(emp,dept\) logfile=scott_emp.log
(2) expdp scott/lipengfei directory=expdp dumpfile=scott_E_D.dmp tables=\(scott.E%,scott.D%\)
3. Export with a filter condition
expdp scott/lipengfei directory=expdp dumpfile=scott_emp.dmp logfile=scott_emp.log tables=emp query=\"where sal \> 1000\"
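When several tables are exported in one job, the filter can be bound to a single table by prefixing the query with the table name; an illustrative sketch using the same shell escaping as above:
expdp scott/lipengfei directory=expdp dumpfile=scott_q.dmp logfile=scott_q.log tables=\(emp,dept\) query=emp:\"where sal \> 1000\"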
4. Export everything except certain objects [statistics, sequences, views, named tables]
expdp scott/lipengfei exclude=STATISTICS,SEQUENCE,VIEW,TABLE:\"IN \(\'EMP\',\'DEPT\'\)\" directory=expdp dumpfile=scott_2015_06_02.dmp logfile=scott_2015_06_02.log
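include works as the mirror image of exclude (the two parameters cannot be combined in one job); a sketch that exports only the two named tables:
expdp scott/lipengfei directory=expdp dumpfile=scott_inc.dmp logfile=scott_inc.log include=TABLE:\"IN \(\'EMP\',\'DEPT\'\)\"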
5. Export and compress the dmp file at the same time (compression=ALL; this value is available only from 11g onward)
(1) Create a test table and load it with a large amount of data:
create table li nologging as select * from all_objects;
insert into li select * from li;
/
commit;
(The "/" in SQL*Plus re-executes the previous insert; run it several times to multiply the row count.)
(2) Check the physical size of the test table:
select segment_name, bytes/1024/1024 from user_segments where segment_name='LI';
(3) Export the test table with the compression=ALL parameter:
expdp scott/lipengfei directory=EXPDP dumpfile=scott_all_compression.dmp SCHEMAS=SCOTT logfile=scott_all_compression.log compression=ALL
(4) Export the test table without compression:
expdp scott/lipengfei directory=EXPDP dumpfile=scott_all.dmp SCHEMAS=SCOTT logfile=scott_all.log
(5) Compare the sizes of the exported dmp files:
ls -lh *.dmp
[Comparing the file sizes shows that with compression=ALL the compression ratio is comparable to "gzip -9".]
(6) Compress the dmp file a second time:
zip scott_all_compression.zip scott_all_compression.dmp
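ALL is not the only compression setting; in 11g the parameter also accepts DATA_ONLY, METADATA_ONLY, and NONE (note that ALL and DATA_ONLY require the Advanced Compression option). An illustrative sketch compressing table data only:
expdp scott/lipengfei directory=EXPDP dumpfile=scott_data_comp.dmp SCHEMAS=SCOTT logfile=scott_data_comp.log compression=DATA_ONLY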
6. With content=all, object definitions and all of their data are exported; with content=data_only, only the data is exported; with content=metadata_only, only the object definitions are exported.
expdp scott/lipengfei directory=EXPDP dumpfile=scott_metadata_only.dmp content=metadata_only logfile=scott_metadata_only.log
ls -lh *.dmp
[Comparing file sizes shows that a metadata-only export produces a very small dmp file.]
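For symmetry, the data-only counterpart, a sketch that skips the definitions and exports only the rows:
expdp scott/lipengfei directory=EXPDP dumpfile=scott_data_only.dmp content=data_only logfile=scott_data_only.log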
III. impdp parameter description:
1. Import all objects of a user
(1) Create a tablespace:
SQL> create tablespace lipengfei datafile '/home/oracle/app/oracle/oradata/ecom/lipengfei.dbf' size 100m autoextend off;
(2) Create the user with a password and a default tablespace:
SQL> create user lipengfei identified by lipengfei default tablespace lipengfei;
(3) Unlock the newly created user:
SQL> alter user lipengfei account unlock;
(4) Grant the most basic roles and privileges to the new user:
SQL> grant connect, resource to lipengfei;
SQL> grant create table to lipengfei;
SQL> grant create view to lipengfei;
(5) Grant the new user read/write privileges on the directory object used by the data pump:
SQL> grant read, write on directory EXPDP to lipengfei;
(6) Log in as lipengfei, create tables and seed some data:
sqlplus lipengfei/lipengfei
create table hehe (a int, b varchar2(10));
insert into hehe (a) values (2);
insert into hehe (a) values (4);
insert into hehe (a) values (6);
commit;
create view nimei as select a from hehe;
create table <table_name> (id int);
insert into <table_name> values (1);
commit;
(7) Export all objects of the lipengfei user:
expdp lipengfei/lipengfei directory=expdp dumpfile=lipengfei_all.dmp SCHEMAS=LIPENGFEI logfile=lipengfei_all.log
(8) Log in as lipengfei and simulate losing all the data:
sqlplus lipengfei/lipengfei
drop view nimei;
drop table hehe;
drop table <table_name>;
(9) Restore the exported data back into the lipengfei user:
impdp lipengfei/lipengfei directory=expdp dumpfile=lipengfei_all.dmp logfile=lipengfei_all.log
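A related trick: when only the object definitions are of interest, impdp can replay the dump file into a DDL script instead of importing it, via the sqlfile parameter; a sketch (lipengfei_ddl.sql is a name chosen here for illustration, and the file is written into the directory object):
impdp lipengfei/lipengfei directory=expdp dumpfile=lipengfei_all.dmp sqlfile=lipengfei_ddl.sql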
2. When an imported object already exists
When importing with impdp, if a table already exists in the target database, Oracle offers four ways to handle it:
a. Skip the table (SKIP, the default behavior)
b. Append the imported rows to the existing data (APPEND)
c. DROP the table first, re-create it, then insert the data (REPLACE)
d. TRUNCATE the table first, then insert the data (TRUNCATE)
impdp lipengfei/lipengfei directory=expdp dumpfile=lipengfei_all.dmp TABLE_EXISTS_ACTION=TRUNCATE logfile=lipengfei_all.log
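The other actions are specified the same way; for example, an illustrative sketch with APPEND, which leaves the existing rows in place and adds the imported ones on top:
impdp lipengfei/lipengfei directory=expdp dumpfile=lipengfei_all.dmp TABLE_EXISTS_ACTION=APPEND logfile=lipengfei_append.log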
3. Import the lipengfei user's data into the shiqiang user
(1) Create a tablespace:
SQL> create tablespace shiqiang datafile '/home/oracle/app/oracle/oradata/ecom/shiqiang.dbf' size 100m autoextend off;
(2) Create the user with a password and a default tablespace:
SQL> create user shiqiang identified by shiqiang default tablespace shiqiang;
(3) Unlock the new user:
SQL> alter user shiqiang account unlock;
(4) Grant the most basic roles and privileges to the new user:
SQL> grant connect, resource to shiqiang;
SQL> grant create table to shiqiang;
SQL> grant create view to shiqiang;
(5) Grant the new user read/write privileges on the directory object used by the data pump:
SQL> grant read, write on directory EXPDP to shiqiang;
(6) Import the dmp file exported from the lipengfei user into the shiqiang user:
impdp shiqiang/shiqiang directory=expdp remap_schema=lipengfei:shiqiang remap_tablespace=lipengfei:shiqiang dumpfile=lipengfei_all.dmp logfile=lipengfei_shiqiang.log
4. Import only some tables
(1) Simulate data loss:
sqlplus lipengfei/lipengfei
drop view nimei;
drop table hehe;
drop table <table_name>;
(2) Restore the previously exported dmp file back into the lipengfei user, importing only the one specified table:
impdp lipengfei/lipengfei directory=expdp tables=<table_name> dumpfile=lipengfei_all.dmp logfile=lipengfei_only_<table_name>.log
(3) Check the data after the recovery:
sqlplus lipengfei/lipengfei
select * from hehe;
select * from <table_name>;
5. Import a higher-version export into a lower-version database
(1) Export from 11g, specifying the version parameter:
expdp shiqiang/shiqiang directory=expdp dumpfile=shiqiang_11g_all.dmp SCHEMAS=SHIQIANG logfile=shiqiang_11g_all.log version=10.2.0.1.0
(2) Import into 10g:
impdp shiqiang/shiqiang directory=expdp dumpfile=shiqiang_11g_all.dmp logfile=shiqiang_11g_all.log
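Before picking a value for the version parameter, it helps to check the exact release of the target database; a small sketch:
SQL> select version from v$instance;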
Although Oracle has always had a tradition of overselling its own products, we should give credit here: with the old native exp/imp tools as the baseline, data pump import and export is a huge improvement.
If you have used the exp/imp tools before, then after working through the expdp/impdp examples above, I believe you can really feel what efficiency means!