Oracle's import and export utilities are mainly EXPDP/IMPDP and EXP/IMP. The differences are as follows: EXP and IMP are client-side tools and can be run either on a client or on the server, whereas EXPDP and IMPDP are server-side tools that can only be used on the Oracle server, not on a client. IMP only reads files exported by EXP, not files exported by EXPDP, and IMPDP only reads files exported by EXPDP, not files exported by EXP. When running expdp or impdp you may omit the username/password@instance part and enter the credentials when prompted; for example, expdp schemas=scott dumpfile=test.dmp DIRECTORY=testdata can be executed directly at the CMD prompt.
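A minimal sketch of the interactive form (the exact prompt and banner text vary by Oracle version; the directory and dump file names are the ones created below):
expdp schemas=scott dumpfile=test.dmp DIRECTORY=testdata
Username: scott
Password: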
I. EXPDP and IMPDP commands
1.1. Create a logical directory
This command does not create a real directory in the operating system; it is best run by an administrator account such as system.
Create directory testdata as 'd:\test\dump'
Check the administrator directories (also verify that the path exists in the operating system: Oracle does not check whether the directory exists when it is created, and an error will occur later if it does not)
Select * from dba_directories
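To check only the directory just created, a query like the following can be used (a sketch; note that Oracle stores directory names in upper case):
Select directory_name, directory_path from dba_directories where directory_name = 'TESTDATA'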
Grant the scott user permission to read and write in the specified directory; this is also best done by an administrator such as system.
Grant read,write on directory testdata to scott
1.2. Export data
1) Export by user (schema)
Expdp scott/tiger@orcl schemas=scott dumpfile=test.dmp DIRECTORY=testdata
2) Export with parallel processes (parallel)
Expdp scott/tiger@orcl directory=testdata dumpfile=test.dmp parallel=40 job_name=testjob
3) Export by table name
Expdp scott/tiger@orcl TABLES=emp,dept dumpfile=test.dmp DIRECTORY=testdata
4) Export by query condition
Expdp scott/tiger@orcl directory=testdata dumpfile=test.dmp Tables=emp query='WHERE deptno=20'
5) Export by tablespace
Expdp system/manager DIRECTORY=testdata DUMPFILE=test.dmp TABLESPACES=temp,example
6) Export the entire database
Expdp system/manager DIRECTORY=testdata DUMPFILE=full.dmp FULL=y
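In practice it is also worth keeping a log of the export; a sketch that adds the LOGFILE parameter to the schema export above (the log file name is illustrative):
Expdp scott/tiger@orcl schemas=scott dumpfile=test.dmp DIRECTORY=testdata logfile=test_exp.log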
1.3. Import data
1) Import to the specified user
Impdp scott/tiger DIRECTORY=testdata DUMPFILE=test.dmp SCHEMAS=scott
2) change the owner of the table
Impdp system/manager DIRECTORY=testdata DUMPFILE=test.dmp TABLES=scott.dept REMAP_SCHEMA=scott:system
3) Import tablespace
Impdp system/manager DIRECTORY=testdata DUMPFILE=test.dmp TABLESPACES=example
4) Import database
Impdp system/manager DIRECTORY=testdata DUMPFILE=full.dmp FULL=y
5) Append data to existing tables
Impdp system/manager DIRECTORY=testdata DUMPFILE=test.dmp SCHEMAS=system TABLE_EXISTS_ACTION=APPEND
(TABLE_EXISTS_ACTION accepts the values SKIP, APPEND, TRUNCATE and REPLACE.)
II. EXP and IMP commands
2.1. The EXP command
There are three main modes (full, user, table)
1. Full mode:
EXP SYSTEM/MANAGER BUFFER=64000 file=D:\all.dmp log=D:\all.log FULL=Y
To perform a full export, you must have special privileges.
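In practice this usually means the EXP_FULL_DATABASE role; a sketch of how an administrator might grant it to an ordinary user such as test:
GRANT EXP_FULL_DATABASE TO test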
2. User mode:
EXP test/test BUFFER=64000 FILE=D:\test.DMP log=D:\test.log OWNER=test
In this way, all objects of the user test are output to a file.
3. Table mode:
EXP test/test BUFFER=64000 FILE=D:\test.DMP log=D:\test.log TABLES=(test)
In this way, the table test of user test is exported. (In table mode, TABLES replaces the OWNER parameter; the two cannot be combined.)
2.2. The IMP command
There are three modes (full, user, table)
1. Full mode:
IMP SYSTEM/MANAGER BUFFER=64000 FILE=D:\FULL.DMP log=D:\impfull.log FULL=Y
2. User mode:
IMP test/test BUFFER=64000 FILE=D:\test.DMP log=D:\imptest.log FROMUSER=test TOUSER=test
In this way, all objects of user test are imported from the file into the database. The FROMUSER and TOUSER parameters must be specified so that the data is loaded into the intended schema.
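The same mechanism also allows the data to be loaded into a different account; a sketch (other_user is a placeholder for an existing user):
IMP SYSTEM/MANAGER BUFFER=64000 FILE=D:\test.DMP log=D:\imptest.log FROMUSER=test TOUSER=other_user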
3. Table mode:
IMP test/test BUFFER=64000 FILE=D:\test.DMP log=D:\imptest.log FROMUSER=test TOUSER=test TABLES=(test)
In this way, the table test of user test is imported.
2.3. The parameter ignore=y
If ignore=y is specified, the data in the exp file is imported into existing tables: for a table with a unique-key constraint, the rows that violate the constraint are not imported; for a table without a unique-key constraint, the import will produce duplicate records.
Explanation:
When you import into a table (test) that already exists in the database and the table has no uniqueness constraint, adding the parameter ignore=y at import time loads all of the data into the table without any errors.
When the table does have a uniqueness constraint, especially a primary key, only the records whose keys do not already exist are imported, and warnings are reported during the import.
Using this principle, you can do an incremental import.
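A sketch of such an incremental import, reusing the table-mode example above (it assumes the table test already exists and has a primary key):
IMP test/test BUFFER=64000 FILE=D:\test.DMP log=D:\imptest.log IGNORE=Y TABLES=(test)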
III. Performance and efficiency comparison (quoted from another author's article)
3.1 Export performance comparison
Performance comparison of EXP conventional-path export, EXP direct-path export, and EXPDP export.
1) First, the conventional-path export of EXP:
Exp zhejiang/zhejiang file=/data1/zj_regular.dmp buffer=20480000
The conventional EXP export took 1 hour and 24 minutes.
2) Direct path export method:
Exp zhejiang/zhejiang file=/data1/zj_direct.dmp buffer=20480000 recordlength=65535 direct=y
The direct-path export took 18 minutes, significantly faster than the conventional-path export.
3) The Data Pump export:
Expdp zhejiang/zhejiang dumpfile=zj_datapump.dp directory=d_test schemas=zhejiang
The Data Pump export took only 14 minutes, more than 20% faster than the direct-path export. Moreover, comparing the sizes of the three export files shows that the faster the export, the smaller the file: the file produced by the Data Pump export is nearly 1.5 GB smaller than the one produced by EXP.
3.2 Import performance comparison
Comparison of import performance between IMP and IMPDP
1) Import speed of IMP:
Imp zhejiang/zhejiang file=/data1/zj_regular.dmp full=y buffer=20480000 log=/data1/zj_regular.log
The IMP import took 3 hours and 17 minutes.
2) Import speed of IMPDP:
Impdp zhejiang/zhejiang DUMPFILE=zj_datapump.dp DIRECTORY=d_test FULL=y LOGFILE=zj_datapump.log
The Data Pump import took 3 hours and 8 minutes, very close to the IMP import time. It seems that the Data Pump import is not, in every case, far more efficient than a conventional import as Oracle describes.
The test shows that the IMPDP import speed is roughly the same as IMP's, even though Oracle, when introducing Data Pump, stated that IMPDP can be up to 10 times faster than IMP. Fortunately, IMPDP can be tuned, namely by setting PARALLEL to raise its degree of parallelism.
First, check the number of CPUs:
SQL> show parameter cpu
Since the database server has 2 CPUs, try setting PARALLEL to 2 for the import:
Impdp zhejiang/zhejiang DUMPFILE=zj_datapump.dp DIRECTORY=d_test FULL=y LOGFILE=zj_datapump.log parallel=2
With parallelism 2, the speed improves greatly: the import that took 3 hours and 8 minutes at parallelism 1 now takes less than two and a half hours.
Since the parallelism setting should not exceed twice the number of CPUs, imports with parallelism 3 and 4 were also tried; their times were very close to that of parallelism 2, so it seems performance can no longer be improved by raising the parallelism further.
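Those runs presumably used the same command with only the PARALLEL value changed, for example:
Impdp zhejiang/zhejiang DUMPFILE=zj_datapump.dp DIRECTORY=d_test FULL=y LOGFILE=zj_datapump.log parallel=4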
The effect of parallelism on EXPDP exports was also tested.
1) First, the performance of a plain export:
$expdp zhejiang/zhejiang directory=d_test dumpfile=zhejiang.dp
The whole export operation took about 14 and a half minutes.
2) Next, export with parallelism 2, still writing to a single dump file:
$expdp zhejiang/zhejiang directory=d_test dumpfile=zhejiang_p2_1file.dp parallel=2
The whole export takes a little under 14 minutes, so the improvement is not obvious. There is a reason for this: although parallelism 2 means two processes perform the export at the same time, both processes have to write the exported data to the same dump file, which inevitably causes resource contention.
3) Still with parallelism 2, but now writing to two dump files:
$expdp zhejiang/zhejiang directory=d_test dumpfile=zhejiang_p2_2file1.dp,zhejiang_p2_2file2.dp parallel=2
This export took only 10 and a half minutes, and the efficiency of export has been greatly improved.
4) Test parallelism 4 and export to 4 data files:
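The original command is not shown; following the pattern above it was presumably of this form (the file name is illustrative, and the %U substitution variable lets Data Pump generate the four file names):
$expdp zhejiang/zhejiang directory=d_test dumpfile=zhejiang_p4_%U.dp parallel=4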
The whole export completed in 9 minutes. Setting the parallelism to 4 still gives some improvement, but not an obvious one, mainly because the bottleneck is no longer the processing capacity of a single process but disk I/O; at this point, increasing the parallelism alone can no longer significantly improve performance.