For many reasons you may want to extract a large amount of data, together with the associated object definitions, from one database and load it into another as easily as possible. Backup is one important reason, and you may also need to move data between production and test environments or between online systems and data warehouses. Data Pump is a tool for fast, large-scale data transfer between Oracle databases.
1. The architecture of Data Pump
Data Pump is a server-side utility. Compared with previous Export/Import utilities, Data Pump processes running on the server directly access data files and SGA without having to access through a session, which can significantly improve performance.
When you start a Data Pump job, at least two processes are started: a Data Pump master process (DMnn) and one or more worker processes (DWnn). The master process controls the worker processes. If parallelism is enabled, each DWnn process can use two or more parallel execution servers (Pnnn).
Each Data Pump job creates two queues: a control queue and a status queue. The DMnn process places tasks on the control queue; the DWnn processes pick them up and execute them, possibly using parallel execution servers. The status queue is used for monitoring the job: the DMnn process places status messages about the job on it, and any session with appropriate privileges can monitor the job's progress by querying this queue.
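In practice, a suitably privileged session can also check job progress through the DBA_DATAPUMP_JOBS data dictionary view (a minimal sketch; the column list shown is illustrative):
Select owner_name, job_name, operation, job_mode, state, degree from dba_datapump_jobs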
There are three types of files generated by Data Pump: SQL files, dump files, and log files.
2. Directory and file location
Data Pump always uses Oracle directory objects to locate the files it reads and writes, including their log files. If no directory is specified, the Data Pump job uses the default directory. Every 11g database has a directory named DATA_PUMP_DIR available.
You can specify the directories used by a Data Pump job at four levels. In order of priority, from highest to lowest:
A setting for an individual file within the Data Pump job
A parameter applied to the entire Data Pump job
The DATA_PUMP_DIR environment variable
The DATA_PUMP_DIR directory object
Therefore, you can explicitly control the location of each file, specify a single Oracle directory for the whole job, or rely on the environment variable; if none of these is set, Data Pump uses the default directory.
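As a hedged illustration of the per-file level (the file names here are made up, and the directory objects MY_DIR and DATA_PUMP_DIR are assumed to exist), a dir:file prefix on an individual file overrides the job-wide DIRECTORY parameter:
Expdp system/mesHz2@mes schemas=rmes directory=data_pump_dir dumpfile=my_dir:exp1.dmp logfile=my_dir:exp1.log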
To create your own directory object:
Create directory my_dir as 'e:\my_dir'
Grant access to the directory:
Grant read, write on directory my_dir to public
Confirm that the directory has been created
Col directory_path for a60
Select * from dba_directories
OWNER DIRECTORY_NAME DIRECTORY_PATH
SYS MY_DIR e:\my_dir
SYS ORACLECLRDIR C:\oracle\app\oracle\product\11.2.0\server\bin\clr
SYS DATA_PUMP_DIR C:\oracle\app\oracle/admin/xe/dpdump/
SYS XMLDIR C:\oracle\app\oracle\product\11.2.0\server\rdbms\xml
SYS ORACLE_OCM_CONFIG_DIR C:\ADE\aime_xe28\oracle/ccr/state
3. Direct path and external table path
Data Pump loads and unloads data by either direct path or external table path.
The direct path bypasses the database buffer cache. When exporting through the direct path, Data Pump reads the data files directly from disk, extracts and formats the content, and writes it out as a dump file. When importing through the direct path, Data Pump reads the dump file, uses its contents to populate table data blocks, and writes those blocks directly to the data files. The writes take place above the table's high-water mark.
The external table path uses the database buffer cache. When exporting, Data Pump uses a normal SELECT process: blocks are read from the data files into the cache, and the data is formatted and written to the dump file. When importing, Data Pump constructs standard INSERT statements from the contents of the dump file and executes them, reading data blocks from the data files into the cache, where the inserts are carried out in the usual way. As with ordinary DML, both undo and redo are generated when the external table path is used.
So how does Data Pump decide between the direct path and the external table path? The DBA has no control over this; Data Pump chooses based on the complexity of the object. Only simple structures, such as heap tables without active triggers, can be processed through the direct path. More complex objects, such as clustered tables, require interaction with the SGA to resolve their complexity, so the external table path must be used. In both cases the resulting dump file is identical.
4. Export and import using Data Pump
Data Pump can export a complete database, or part of a database. You can export table definitions, PL/SQL objects, views, sequences, or any other object type with or without data rows. If you are exporting a table, you can also restrict the exported rows with the WHERE clause, or export a random percentage sampling of the table.
Parallel processing can be used to speed up the operation of Data Pump. There are two levels of parallelism: the number of worker processes and the number of parallel servers used by each worker process.
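As a sketch, the degree of parallelism of a running job can be changed by attaching to it in interactive command mode (SYS_EXPORT_SCHEMA_01 is the typical default job name and is used here only as an example):
Expdp system/mesHz2@mes attach=SYS_EXPORT_SCHEMA_01
Export> parallel=8
Export> status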
Data Pump's estimate facility calculates the space an export would require without actually running the job.
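A minimal sketch of this estimate facility (the schema choice is illustrative); with ESTIMATE_ONLY=Y the required space is reported and no dump file is written:
Expdp system/mesHz2@mes schemas=rmes estimate_only=y estimate=blocks nologfile=y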
In network mode, Data Pump can transfer a data set from one database to another even though the data is never staged in a dump file on disk. A Data Pump export job on the source database sends the data across a database link, and the Data Pump import job on the target reads the data from the link and inserts it. Network mode needs no directory for dump files, but a directory for the log file is still required.
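A hedged sketch of network mode (src_link is a hypothetical database link pointing at the source database); the NETWORK_LINK parameter pulls the data directly, so no DUMPFILE is given and the directory is needed only for the log file:
Impdp system/mesHz2@mes schemas=rmes network_link=src_link directory=my_dir logfile=net_imp.log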
The remap feature can rename objects, transfer objects from one schema to another, and move data objects from one tablespace to another on import.
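For instance (the target schema RMES_TEST and tablespace MES_DATA2 are hypothetical names), an import can be redirected with REMAP_SCHEMA and REMAP_TABLESPACE:
Impdp system/mesHz2@mes directory=my_dir dumpfile=dp1_%U.dmp,dp2_%U.dmp,dp3_%U.dmp,dp4_%U.dmp remap_schema=rmes:rmes_test remap_tablespace=mes_data:mes_data2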
You can compress and encrypt the output file on export.
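A hedged example of combining the two on export (the dump file name and password are illustrative, and encryption beyond encrypted columns may require the Advanced Security Option):
Expdp system/mesHz2@mes schemas=rmes directory=my_dir dumpfile=enc.dmp compression=all encryption=all encryption_password=MyPass123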
The user who logs in to perform the export or import must hold the EXP_FULL_DATABASE or IMP_FULL_DATABASE role (required for operations beyond the user's own schema).
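If needed, these roles can be granted in the usual way (MES_ADMIN is a hypothetical user):
Grant exp_full_database, imp_full_database to mes_admin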
Here are some common uses:
1) Export and import the entire database
To export the entire database, use the following command:
Expdp system/mesHz2@mes full=y directory=my_dir parallel=4 dumpfile=full1_%U.dmp,full2_%U.dmp,full3_%U.dmp,full4_%U.dmp filesize=2g compression=all
This command logs in to the database as the SYSTEM user and runs four worker processes in parallel. Each worker process generates its own set of dump files, named according to the %U template, which generates a unique string, and splits its output into compressed files of no more than 2 GB each.
The valid values of the COMPRESSION option differ between 10g and 11g; in 10g, metadata is compressed by default when this option is not specified.
Corresponding import:
Impdp system/mes@mes full=y directory=my_dir parallel=4 dumpfile=full1_%U.dmp,full2_%U.dmp,full3_%U.dmp,full4_%U.dmp
For a full database import, you do not need to create the schemas before importing; they are rebuilt automatically during the import. Only the required tablespaces need to exist beforehand.
2) Schema-mode export and import
You can use the SCHEMAS option to specify which schemas to export or import:
Expdp system/mesHz2@mes schemas=cmes,rmes,bosch,abs directory=my_dir parallel=4 dumpfile=dp1_%U.dmp,dp2_%U.dmp,dp3_%U.dmp,dp4_%U.dmp filesize=2g compression=all
Corresponding import:
Impdp system/mesHz2@mes schemas=cmes,rmes,bosch,abs directory=my_dir parallel=4 dumpfile=dp1_%U.dmp,dp2_%U.dmp,dp3_%U.dmp,dp4_%U.dmp
3) Export only metadata
Export the metadata of the RMES, BOSCH, and ABS schemas' objects, without the tables' row data:
Expdp system/mesHz2@mes schemas=rmes,bosch,abs dumpfile=my_dir:dp_meta.dmp content=metadata_only
Corresponding import:
Impdp system/mesHz2@mes schemas=rmes,bosch,abs dumpfile=my_dir:dp_meta.dmp
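The same metadata dump can also be turned into a DDL script instead of being imported, using the SQLFILE option (the output file name is illustrative); this is where the SQL files mentioned earlier come from:
Impdp system/mesHz2@mes dumpfile=my_dir:dp_meta.dmp sqlfile=my_dir:dp_meta.sql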
4) include and exclude objects
List the object types to be included or excluded in the INCLUDE or EXCLUDE clause. The following are the object types included in this Data Pump job (note that triggers fall under the TABLE category):
Expdp system/mesHz2@mes schemas=cmes dumpfile=my_dir:dp_cmes.dmp include=user,system_grant,role_grant,default_role,pre_schema,synonym,type,sequence,table,package,function,procedure,view
An export can be limited to specific objects and data, excluding whatever is not wanted. Tables can be filtered by a single name, a list of names, fuzzy names specified with LIKE, or even table names returned by a SQL query, while rows can be filtered with a WHERE condition:
Expdp system/mesHz2@mes dumpfile=my_dir:1.dmp tables=rmes.r_wip_print_t
Expdp system/mesHz2@mes schemas=rmes dumpfile=my_dir:2.dmp include=TABLE:"IN ('R_WIP_PRINT_T')"
Expdp system/mesHz2@mes schemas=cmes directory=my_dir dumpfile=3.dmp exclude=TABLE
Expdp system/mesHz2@mes schemas=cmes directory=my_dir dumpfile=4.dmp exclude=TABLE:\"LIKE \'TMP%\'\"
Expdp system/mesHz2@mes schemas=cmes directory=my_dir dumpfile=5.dmp exclude=TABLE:\"IN \(\'C_LINE_T\'\,\'C_BOM_T\'\)\"
Expdp system/mesHz2@mes schemas=cmes directory=my_dir dumpfile=6.dmp exclude=TABLE:\"IN \(select table_name from cmes.k$host_condition where condition = 'NumberA'\)\"
Expdp system/mesHz2@mes dumpfile=my_dir:7.dmp tables=rmes.r_wip_tracking_t query="where in_line_time
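A complete, hedged sketch of such a row filter (the date predicate and the parameter file name q.par are illustrative assumptions); putting the QUERY option in a parameter file avoids operating-system quote escaping:
q.par:
dumpfile=my_dir:8.dmp
tables=rmes.r_wip_tracking_t
query="where in_line_time > sysdate - 7"
Expdp system/mesHz2@mes parfile=q.par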