
What are the logical backup methods for importing and exporting Oracle databases


This article introduces the logical backup methods for importing and exporting Oracle databases (exp/imp and the Data Pump expdp/impdp). It is fairly detailed and should be a useful reference if you are interested in the topic.

[common parameters of EXP]

USERID - user name/password; this parameter must come first (see the note after this list)
FILE - path of the output dump file
LOG - path of the log file
TABLES - export only the listed tables
FULL - export the entire database (N)
ROWS - export data rows (Y)
QUERY - SELECT WHERE clause used to export a subset of a table
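Because USERID must come first, the user name/password pair is normally given as the first argument and the USERID keyword itself is omitted. A minimal sketch, reusing the connection and paths from the examples below:

exp scott/scott@orcl file=D:/orcl/scott.dmp log=D:/orcl/scott.log tables=(emp)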

[example]

1. [Full database mode] Export the entire database orcl as user scott (password scott), writing the dump file to D:/orcl/scott.dmp and the log file to D:/orcl/scott.log.

exp scott/scott@orcl file=D:/orcl/scott.dmp log=D:/orcl/scott.log full=y

2. [Table mode] Export the tables emp and dept to D:/orcl/scott_empdept.dmp.

exp scott/scott@orcl file=D:/orcl/scott_empdept.dmp tables=(emp,dept)

3. [User mode] Export all data of users ng_lxj1 and ng_lxj2 in the database with TNS alias 243 to D:/orcltest/ng_lxj_user.dmp.

exp system/manager@243 file=D:/orcltest/ng_lxj_user.dmp owner=(ng_lxj1,ng_lxj2)

4. Export the rows of table emp whose SAL column is greater than 1000 to D:/orcl/scott_emp.dmp.

exp scott/scott@orcl file=D:/orcl/scott_emp.dmp tables=(emp) query=\"where sal > 1000\"
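The backslashes before the quotation marks are needed because the Windows command shell would otherwise strip the quotes around the WHERE clause. One way to avoid the escaping is to put the options in a parameter file instead; a minimal sketch, where emp_query.par is a hypothetical file name:

file=D:/orcl/scott_emp.dmp
tables=(emp)
query="where sal > 1000"

Then run: exp scott/scott@orcl parfile=emp_query.par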

5. Export only the CREATE TABLE statement of table emp, without data rows, to D:/orcl/scott_empddl.dmp.

exp scott/scott@orcl file=D:/orcl/scott_empddl.dmp tables=emp rows=n

[all parameters]

USERID - username/password
FULL - export the entire database (N)
BUFFER - size of the data buffer
OWNER - list of owner usernames
FILE - output file(s)
TABLES - list of table names
COMPRESS - import into one extent (Y)
RECORDLENGTH - length of IO record
GRANTS - export grants (Y)
INCTYPE - incremental export type
INDEXES - export indexes (Y)
RECORD - track incremental export (Y)
DIRECT - direct path (N)
TRIGGERS - export triggers (Y)
LOG - log file of screen output
STATISTICS - analyze objects (ESTIMATE)
ROWS - export data rows (Y)
PARFILE - parameter file name
CONSISTENT - cross-table consistency (N)
CONSTRAINTS - export constraints (Y)
OBJECT_CONSISTENT - transaction set to read only during object export (N)
FEEDBACK - display progress every x rows (0)
FILESIZE - maximum size of each dump file
FLASHBACK_SCN - SCN used to set the session snapshot back to
FLASHBACK_TIME - time used to find the SCN closest to the specified time
QUERY - select clause used to export a subset of a table
RESUMABLE - suspend when a space-related error is encountered (N)
RESUMABLE_NAME - text string used to identify the resumable statement
RESUMABLE_TIMEOUT - wait time for RESUMABLE
TTS_FULL_CHECK - perform full or partial dependency check for TTS
TABLESPACES - list of tablespaces to export
TRANSPORT_TABLESPACE - export transportable tablespace metadata (N)
TEMPLATE - template name which invokes iAS mode export

[common parameters of IMP]

USERID - user name/password; this parameter must come first
FILE - path of the dump file to import
LOG - path of the log file
IGNORE - ignore creation errors (N). If the table being imported already exists and IGNORE=Y, no error is reported and the rows are loaded into it; if the table does not exist, it is created automatically. Note that the table is created in the same tablespace it was exported from, so the import fails if that tablespace does not exist in the target database (see the check after this list).
FROMUSER - list of owner user names in the dump file
TOUSER - list of target user names
TABLES - import only the listed tables
FULL - import the entire file (N)
ROWS - import data rows (Y)
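As noted under IGNORE, an import fails if the table's original tablespace does not exist in the target database. A quick way to see which tablespaces the target has (assuming a user that can query DBA_TABLESPACES):

select tablespace_name from dba_tablespaces;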

[example]

1. [Full database mode] Import the backup file into the database orcl as user scott (password scott); dump file D:/orcl/scott.dmp, log file D:/orcl/scott.log.

imp scott/scott@orcl file=D:/orcl/scott.dmp log=D:/orcl/scott.log full=y ignore=y

2. [Table mode] Import the tables emp and dept from the backup file into the database orcl as user scott (password scott); dump file D:/orcl/scott.dmp.

imp scott/scott@orcl file=D:/orcl/scott.dmp log=D:/orcl/scott.log ignore=y tables=(emp,dept)

In addition, if the dump was exported in table mode and you want to import all of its tables, you can simply use full=y, for example:

imp scott/scott@orcl file=D:/orcl/scott_empdept.dmp ignore=y full=y

3. [User mode] The backup file contains the data of users ng_lxj1 and ng_lxj2; the dump file path is D:/orcltest/ng_lxj_user.dmp.

Import the data from ng_lxj1 into ng_lxj:

imp system/manager@243 file=D:/orcltest/ng_lxj_user.dmp fromuser=ng_lxj1 touser=ng_lxj

Import both ng_lxj1 and ng_lxj2 data into ng_lxj:

imp system/manager@243 file=D:/orcltest/ng_lxj_user.dmp fromuser=(ng_lxj1,ng_lxj2) touser=(ng_lxj,ng_lxj)

Note that ng_lxj must be written twice so that the TOUSER entries pair up with the FROMUSER entries one to one. If only one TOUSER entry is given, Oracle does not pair the remaining FROMUSER entry with it by default, and that user's data is imported back into its original user, for example:

imp system/manager@243 file=D:/orcltest/ng_lxj_user.dmp fromuser=(ng_lxj1,ng_lxj2)

This command imports the data of ng_lxj1 into ng_lxj1 and the data of ng_lxj2 into ng_lxj2.

Import the data of ng_lxj1 into two users, ng_lxj1 and ng_lxj2:

It cannot be written as: imp system/manager@243 file=D:/orcltest/ng_lxj_user.dmp fromuser=(ng_lxj1,ng_lxj1) touser=(ng_lxj1,ng_lxj2)

Otherwise, an error will be reported:

IMP-00034: warning: FromUser "NG_LXJ1" not found in the export file

A guess: Oracle reads the dump file into a cache and discards each user's data once it has been consumed.

Therefore, to import one user's data into two different users, run two separate import commands (see the sketch below).
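A sketch of the two separate runs, reusing the dump file and TNS alias from this example:

imp system/manager@243 file=D:/orcltest/ng_lxj_user.dmp fromuser=ng_lxj1 touser=ng_lxj1
imp system/manager@243 file=D:/orcltest/ng_lxj_user.dmp fromuser=ng_lxj1 touser=ng_lxj2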

Import data from the T1 table under the ng_lxj1 user into the ng_lxj2 user:

imp system/manager@243 file=D:/orcltest/ng_lxj_user.dmp fromuser=ng_lxj1 touser=ng_lxj2 tables=T1

4. From the backup file, import only the CREATE TABLE statements, not the data rows; file path D:/orcl/scott.dmp.

imp scott/scott@orcl file=D:/orcl/scott.dmp full=y ignore=y rows=n
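A related option is INDEXFILE (listed in the full parameter table below): instead of importing anything, imp writes the index creation statements, plus the table creation statements as comments, to a script that you can review or edit first. A sketch, where D:/orcl/scott_ddl.sql is a hypothetical output path:

imp scott/scott@orcl file=D:/orcl/scott.dmp full=y indexfile=D:/orcl/scott_ddl.sql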

5. Use a parameter file

imp system/manager@243 parfile=bible_tables.par

Contents of bible_tables.par:

file=D:/orcltest/ng_lxj_user.dmp fromuser=ng_lxj1 touser=ng_lxj
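A parameter file can also list one parameter per line, which keeps longer option lists readable; a sketch (the log file path is a hypothetical addition):

file=D:/orcltest/ng_lxj_user.dmp
log=D:/orcltest/ng_lxj_imp.log
fromuser=ng_lxj1
touser=ng_lxj
ignore=y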

[all parameters]

USERID - username/password
FULL - import the entire file (N)
BUFFER - size of the data buffer
FROMUSER - list of owner usernames
FILE - input file (EXPDAT.DMP)
TOUSER - list of target usernames
SHOW - just list the file contents (N)
TABLES - list of table names
IGNORE - ignore creation errors (N)
RECORDLENGTH - length of IO record
GRANTS - import grants (Y)
INCTYPE - incremental import type
INDEXES - import indexes (Y)
COMMIT - commit array insert (N)
ROWS - import data rows (Y)
PARFILE - parameter file name
LOG - log file of screen output
CONSTRAINTS - import constraints (Y)
DESTROY - overwrite tablespace data files (N)
INDEXFILE - write table/index info to the specified file
SKIP_UNUSABLE_INDEXES - skip maintenance of unusable indexes (N)
FEEDBACK - display progress every x rows (0)
TOID_NOVALIDATE - skip validation of specified type IDs
FILESIZE - maximum size of each dump file
STATISTICS - always import precomputed statistics
RESUMABLE - suspend when a space-related error is encountered (N)
RESUMABLE_NAME - text string used to identify the resumable statement
RESUMABLE_TIMEOUT - wait time for RESUMABLE
COMPILE - compile procedures, packages, and functions (Y)
STREAMS_CONFIGURATION - import streams general metadata (Y)
STREAMS_INSTANTIATION - import streams instantiation metadata (N)
DATA_ONLY - import only data (N)

The following keywords apply only to transportable tablespaces:
TRANSPORT_TABLESPACE - import transportable tablespace metadata (N)
TABLESPACES - tablespaces to be transported into the database
DATAFILES - data files to be transported into the database
TTS_OWNERS - users that own data in the transportable tablespace set

[P.S.]

1. A dump file exported by a higher version of Export cannot be read by a lower version of Import, while a dump file exported by a lower version of Export can be read by a higher version of Import.

2. Data exported from a lower version of Oracle can be imported into a higher version, but only between adjacent versions; to move between two non-adjacent versions, go through the intermediate version.

3. When a 10g export file containing large (LOB) fields is imported into an Oracle 10g database through an Oracle 11g client, IMP-00058 and ORA-00942 ("table does not exist") errors are reported, once per large field, but the data is still imported successfully in the end. To avoid the errors entirely, either switch to a 10g client or use the 11g client to import into an 11g database.

4. exp/imp can migrate data between different Oracle versions and different databases, but "tablespace does not exist" errors are likely when migrating between databases. Solutions: first, try pre-creating the table in a new tablespace and then use imp with ignore=y to import only the data; if the tablespace error persists, the simplest fix is to move the tables to be backed up into the USERS (default) tablespace before running exp; alternatively, create a very small tablespace with the same name as the source tablespace in the target database.

5. When primary/foreign key relationships exist, if both the parent and child tables are in the backup file, imp automatically recreates the constraints after importing the data. If only the child (foreign key) table is imported without the parent table, the data is imported successfully but an error is reported and the foreign key constraint is lost.

6. Multi-table, multi-condition export: TABLES can list several tables, but with exp the QUERY clause cannot apply different conditions to different tables. Either run a separate exp command per table, or use expdp, which supports per-table conditions (see the parameter-file sketch after this list).

7. Storage allocation failures when importing a large table: by default exp uses COMPRESS=Y, which consolidates all of the table's data into a single initial extent. On import, if there is no contiguous free space large enough for that extent, the import fails. When exporting tables larger than about 80 MB, remember to use COMPRESS=N and this error will not occur.

8. There are three modes for import and export (full database, user, table). Exactly one of them must be chosen when running exp/imp; the modes cannot be combined in a single command.

9. When using user mode for import and export, USERID needs to be a highly privileged account such as system/manager.

10. With PL/SQL Developer: when a table has no large fields, its export tool can also be used for import and export. Note that a file exported from one database may fail in another because the default date formats differ, and the exported plain SQL INSERT statements containing dates cannot be used directly in DB2, where NULL handling, large fields, and so on are defined differently.

11. Starting with Oracle 10g, a new tool, Data Pump (expdp/impdp), provides high-speed, parallel migration of large volumes of Oracle data. imp/exp can be invoked from the client side, but expdp/impdp can only be used on the server side, because a directory object must be created in the database before expdp/impdp can be used.
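Returning to point 6 above, a sketch of an expdp parameter file that applies a different condition to each table; multi_tab.par is a hypothetical file name, dptest is the directory object created in the expdp section below, and the table-qualified QUERY syntax should be checked against your version's documentation:

directory=dptest
dumpfile=multi_tab.dmp
tables=emp,dept
query=emp:"where sal > 1000"
query=dept:"where deptno = 10"

Run it with: expdp scott/scott@orcl parfile=multi_tab.par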

The differences between Data Pump and traditional export/import:

1. EXP and IMP are client-side tools that can be used on either the client or the server; EXPDP and IMPDP are server-side tools that can only be used on the Oracle server, not on a client.

2. EXP and IMP are relatively slow; EXPDP and IMPDP are much more efficient.

3. Data Pump offers powerful parallelism, filtering, transformation, compression, encryption, and interactive control.

4. Data Pump is not available for 9i and earlier versions, so EXP/IMP remains the more practical choice for them in the short term.

5. Data Pump export supports four modes: table, schema, tablespace, and full database; imp/exp supports only three.

6. IMP can only read files produced by EXP, not by EXPDP; IMPDP can only read files produced by EXPDP, not by EXP.

[expdp/impdp]

These commands can only be used on the server side! That is, you must run them on the server that hosts the database being exported or imported!

Using them requires both the logical directory object and the corresponding physical directory to exist!

For example, the steps to export the emp and dept tables of the scott user from the local orcl database are:

1) create a logical directory

This command does not create a physical directory in the operating system; it should preferably be run by an administrative user such as system.

Log in to SQLPLUS:

sqlplus system/manager@orcl as sysdba

Create a logical directory:

create directory dptest as 'D:\dptest';

2) grant access to the exporting user

grant read, write on directory dptest to scott;

3) establish a physical directory

Create a new folder named dptest on drive D (D:\dptest).

4) Export emp and dept tables

expdp scott/scott@orcl tables=emp,dept dumpfile=empdept_dp.dmp directory=dptest
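To confirm that the directory object from step 1 exists and points at the physical path from step 3, a quick check (assuming a user that can query DBA_DIRECTORIES) is:

select directory_name, directory_path from dba_directories where directory_name = 'DPTEST';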

[common parameters of expdp]

DIRECTORY - the directory object for the dump and log files; it must be created before the export
DUMPFILE - the name of the export dump file. If no directory is given as part of the name, the file goes to the directory specified by DIRECTORY; an already created directory object can also be specified for each file.
CONTENT - what to export: ALL, DATA_ONLY, or METADATA_ONLY (default ALL). ALL exports both the DDL and its data, DATA_ONLY exports only the data, and METADATA_ONLY exports only the DDL.
EXCLUDE - excludes specific object types, such as EXCLUDE=TABLE:EMP
INCLUDE - includes specific object types, such as INCLUDE=TABLE_DATA
JOB_NAME - the name of the export job to create. If it is not specified, a default name such as SYS_EXPORT_SCHEMA_01 is used (see the note after this list).
LOGFILE - the log file name (export.log)
NOLOGFILE - do not write a log file (N)
FULL - export the entire database (N)
SCHEMAS - the list of schemas to export (defaults to the login schema)
TABLES - the list of tables to export
TABLESPACES - the list of tablespaces to export
QUERY - a predicate clause used to export a subset of a table
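The JOB_NAME is useful because a running or stopped Data Pump job can be reattached to by name with the ATTACH parameter, after which the interactive console commands (STATUS, STOP_JOB, START_JOB, and so on) are available. A sketch, assuming the job name scott1 from example 6 below:

expdp scott/scott@orcl attach=scott1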

[example]

1. [Full database mode] Export the entire local database to the directory object dptest.

expdp system/manager@orcl dumpfile=orcl_dp.dmp directory=dptest full=y

2. [Table mode] Export the tables emp and dept of the local database user scott to the directory object dptest.

expdp scott/scott@orcl tables=emp,dept dumpfile=empdept_dp.dmp directory=dptest

3. [User mode] Export the local scott user's data to the directory object dptest.

expdp scott/scott@orcl schemas=scott dumpfile=scott_dp.dmp directory=dptest

4. [Tablespace mode] Export the data in the USERS and TEMP tablespaces of the local database to the directory object dptest.

expdp system/manager@orcl dumpfile=users_temp_dp.dmp directory=dptest tablespaces=users,temp

5. Export only the rows of table emp whose SAL column is greater than 1000 by specifying a condition (multiple tables with different conditions are also possible).

expdp scott/scott@orcl directory=dptest dumpfile=expdp.dmp tables=emp query='where sal > 1000'

6. Parallel export with the PARALLEL parameter

expdp scott/scott@orcl directory=dptest dumpfile=scott1.dmp parallel=40 job_name=scott1
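When PARALLEL is greater than 1, each worker process needs its own dump file, so the DUMPFILE value normally contains the %U substitution variable, which makes Data Pump generate a numbered file per worker; with a single fixed file name the workers end up sharing one file. A sketch with a more modest degree of parallelism (file name and job name are placeholders):

expdp scott/scott@orcl directory=dptest dumpfile=scott_%U.dmp parallel=4 job_name=scott_par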

[common parameters of impdp]

DIRECTORY - the directory object for the dump and log files; it must be created before the import
DUMPFILE - the name of the dump file to import. If no directory is given as part of the name, the file is looked for in the directory specified by DIRECTORY; an already created directory object can also be specified.
CONTENT - what to import: ALL, DATA_ONLY, or METADATA_ONLY (default ALL). ALL imports both the DDL and its data, DATA_ONLY imports only the data, and METADATA_ONLY imports only the DDL.
EXCLUDE - excludes specific object types, such as EXCLUDE=TABLE:EMP
INCLUDE - includes specific object types, such as INCLUDE=TABLE_DATA
JOB_NAME - the name of the import job to create
LOGFILE - the log file name (import.log)
NOLOGFILE - do not write a log file (N)
FULL - import the entire dump file (N)
SCHEMAS - the list of schemas to import (defaults to the login schema)
TABLES - the list of tables to import
TABLESPACES - the list of tablespaces to import
QUERY - a predicate clause used to import a subset of a table
REMAP_SCHEMA - loads objects from one schema into another schema
REMAP_TABLESPACE - remaps objects from one tablespace into another; used to change tablespaces when moving between databases

[example]

1. [Full database mode] Import everything in the dump file into the local database; the dump file resides in the directory object dptest.

impdp system/manager@orcl dumpfile=orcl_dp.dmp directory=dptest full=y

2. [Table mode] Import the tables emp and dept from the dump file into the local database under the scott user.

impdp scott/scott@orcl tables=emp,dept dumpfile=empdept_dp.dmp directory=dptest

Import the table dept from the dump file into the local database, changing its owner to system:

impdp system/manager dumpfile=empdept_dp.dmp directory=dptest tables=scott.dept remap_schema=scott:system

3. [User mode] Import the dump file into the local database under the scott user.

impdp scott/scott@orcl dumpfile=scott_dp.dmp directory=dptest schemas=scott

4. Change the tablespace

impdp scott/scott@orcl dumpfile=scott_dp.dmp directory=dptest remap_tablespace=users:temp

[A complete example]

Export with expdp from one user, then import with impdp into another user.

For example, server A runs Windows and server B runs Linux; the database user is test. Migrate the data from server A to server B.

Operate on server A:

1.

SQL> create directory expdp_dir as 'D:\mzl\backup';

SQL> grant read, write on directory expdp_dir to test;

2. Create the directory D:\mzl\backup in Windows.

3. Export in the DOS command window:

expdp test/test DIRECTORY=expdp_dir DUMPFILE=test.dmp logfile=testexpdp.log

Operate on server B:

4. SQL> create directory impdp_dir as '/home/oracle/impdp_dir';

SQL> grant read, write on directory impdp_dir to test;

Note: the /home/oracle/impdp_dir directory must exist in the operating system, with read and write permissions on it.

(chmod 777 impdp_dir)
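A minimal sketch of the operating-system preparation on server B; the oracle:oinstall owner and group are an assumption, so adjust them to your installation:

mkdir -p /home/oracle/impdp_dir
chown oracle:oinstall /home/oracle/impdp_dir   # assumed Oracle software owner/group
chmod 777 /home/oracle/impdp_dir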

5. Use ftp to upload the dump file exported on server A to the /home/oracle/impdp_dir directory on server B.

Configure the service name of server B's database on server A, and run the import from server A.

6. Import in the DOS command window:

impdp test/test@B_database DIRECTORY=impdp_dir DUMPFILE=test.dmp logfile=testimpdp.log

(Note the case of the file name: Linux is case sensitive, so if the uploaded file name is in upper case, the DUMPFILE value must be written in upper case as well.)

That covers the logical backup methods for importing and exporting Oracle databases. Thank you for reading, and I hope it is helpful.
