Previously I used exp to take Oracle backups, but exp on 11g has a known problem: empty tables are not exported.
Recently, while doing an Oracle data migration, the empty tables had to be exported as well. After some searching I found expdp, hence this article.
This article only records the problems I ran into and my personal understanding of expdp.
Before you can use expdp/impdp, you need to create a directory object and grant the user permissions on it, because expdp/impdp can only read and write files in OS directories that are mapped through a DIRECTORY object.
Note: placeholders such as user_name, directory_name and directory_ospath below should be replaced with actual values.
Expdp export
1. Create a DIRECTORY object mapped to an OS path and grant permissions
> connect / as sysdba
> CREATE OR REPLACE DIRECTORY directory_name AS 'directory_ospath';
> GRANT read,write ON DIRECTORY directory_name TO user_name;
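For example, a minimal sketch assuming the OS path /u01/app/dpdump already exists and is writable by the oracle OS user (the path, the directory name dp_dir, and the user scott are all hypothetical):
> CREATE OR REPLACE DIRECTORY dp_dir AS '/u01/app/dpdump';
> GRANT read,write ON DIRECTORY dp_dir TO scott;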
2. Query the existing DIRECTORY objects
> select * from dba_directories;
> select * from all_directories;
3. Run the expdp export
> expdp user_name/user_passwd schemas=user_name dumpfile=expdp.dmp directory=directory_name
A schema is a collection of database objects; a user normally corresponds to one schema, whose name equals the user name and which serves as the user's default schema. Reference: http://blog.csdn.net/kimsoft/article/details/4627520
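To see which schema the current session resolves unqualified names against, you can run a standard dictionary query (not specific to Data Pump):
> select sys_context('USERENV', 'CURRENT_SCHEMA') from dual;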
DIRECTORY: the directory object used for the dump and log files.
DUMPFILE: the list of destination dump files (default expdat.dmp).
Keyword descriptions (defaults in parentheses); reference: http://blog.csdn.net/engledb/article/details/8979910
ATTACH: Attach to an existing job, e.g. ATTACH[=job name].
COMPRESSION: Reduce the size of the dump file contents; valid keywords are (METADATA_ONLY) and NONE.
CONTENT: Specifies the data to unload; valid keywords are (ALL), DATA_ONLY and METADATA_ONLY.
DIRECTORY: Directory object used for the dump and log files.
DUMPFILE: List of destination dump files (expdat.dmp), e.g. DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.
ENCRYPTION_PASSWORD: Password key used to create encrypted column data.
ESTIMATE: Calculate job estimates; valid keywords are (BLOCKS) and STATISTICS.
ESTIMATE_ONLY: Calculate job estimates without performing the export.
EXCLUDE: Exclude specific object types, e.g. EXCLUDE=TABLE:EMP.
FILESIZE: Specify the size of each dump file in bytes.
FLASHBACK_SCN: SCN used to set the session snapshot back to a previous state.
FLASHBACK_TIME: Time used to find the SCN closest to the specified time.
FULL: Export the entire database (N).
HELP: Display a help message (N).
INCLUDE: Include specific object types, e.g. INCLUDE=TABLE_DATA.
JOB_NAME: Name of the export job to create.
LOGFILE: Log file name (export.log).
NETWORK_LINK: Name of the database link to the remote source system.
NOLOGFILE: Do not write a log file (N).
PARALLEL: Change the number of active workers for the current job.
PARFILE: Specify a parameter file.
QUERY: Predicate clause used to export a subset of a table.
SAMPLE: Percentage of data to be exported.
SCHEMAS: List of schemas to export (default is the login schema).
STATUS: Frequency (in seconds) at which job status is monitored; with the default (0), new status is shown as soon as it is available.
TABLES: Identifies a list of tables to export - from one schema only.
TABLESPACES: Identifies a list of tablespaces to export.
TRANSPORT_FULL_CHECK: Verify the storage segments of all tables (N).
TRANSPORT_TABLESPACES: List of tablespaces from which metadata will be unloaded.
VERSION: Version of objects to export; valid keywords are (COMPATIBLE), LATEST, or any valid database version.
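Several of these keywords combine usefully. For instance, a hedged sketch (the scott user is hypothetical) that uses ESTIMATE_ONLY and NOLOGFILE from the list above to estimate the size of a schema export without writing a dump file:
> expdp scott/tiger schemas=scott estimate_only=y nologfile=y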
Impdp Import
Upload the dump file exported by expdp to the host of the new database.
1. Before importing data into the new database, create the user there and grant the relevant permissions.
Create the user user_name:
$ sqlplus / as sysdba
> CREATE USER user_name IDENTIFIED BY password;
2. Grant user_name permission to log in and create tables
> GRANT create table,create session TO user_name;
3. Assign a quota on the USERS tablespace
As the sys user:
> GRANT UNLIMITED TABLESPACE TO user_name;
4. Create the DIRECTORY object mapped to an OS path and grant permissions
> CREATE OR REPLACE DIRECTORY directory_name AS 'directory_ospath';
> GRANT read,write ON DIRECTORY directory_name TO user_name;
5. Run the import
> impdp user_name/password directory=directory_name dumpfile=backup_name.dmp schemas=user_name
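After the import completes, a quick sanity check (a sketch; replace user_name with the actual user) is to count the imported objects by type with a standard dictionary query:
> select object_type, count(*) from dba_objects where owner = upper('user_name') group by object_type;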
Examples of EXPDP usage:
1) Export by user/schema
expdp scott/tiger@orcl schemas=scott dumpfile=expdp.dmp DIRECTORY=dpdata1
2) Parallel export with multiple worker processes
expdp scott/tiger@orcl directory=dpdata1 dumpfile=scott3.dmp parallel=40 job_name=scott3
3) Export by table name
expdp scott/tiger@orcl TABLES=emp,dept dumpfile=expdp.dmp DIRECTORY=dpdata1
4) Export by query condition (see the quoting note after these examples)
expdp scott/tiger@orcl directory=dpdata1 dumpfile=expdp.dmp tables=emp query='WHERE deptno=20'
5) Export by tablespace
expdp system/manager DIRECTORY=dpdata1 DUMPFILE=tablespace.dmp TABLESPACES=temp,example
6) Export the entire database
expdp system/manager DIRECTORY=dpdata1 DUMPFILE=full.dmp FULL=y
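A note on example 4: the quotes and spaces in QUERY are easily mangled by the OS shell, so for such options it is safer to put them in a parameter file and pass it with PARFILE. A sketch (the file name exp.par is hypothetical):
$ cat exp.par
directory=dpdata1
dumpfile=expdp.dmp
tables=emp
query=emp:"WHERE deptno=20"
$ expdp scott/tiger@orcl parfile=exp.par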
Examples of IMPDP usage:
1) Import into the specified schema
impdp scott/tiger DIRECTORY=dpdata1 DUMPFILE=expdp.dmp SCHEMAS=scott
2) Change the owner of a table
impdp system/manager DIRECTORY=dpdata1 DUMPFILE=expdp.dmp TABLES=scott.dept REMAP_SCHEMA=scott:system
3) Import a tablespace
impdp system/manager DIRECTORY=dpdata1 DUMPFILE=tablespace.dmp TABLESPACES=example
4) Import the entire database
impdp system/manager DIRECTORY=dump_dir DUMPFILE=full.dmp FULL=y
5) Append data
impdp system/manager DIRECTORY=dpdata1 DUMPFILE=expdp.dmp SCHEMAS=system
6) Import the original scott user's data into the scott2 user, remapping objects from the users tablespace to the users2 tablespace:
impdp system DIRECTORY=backup SCHEMAS=scott REMAP_SCHEMA=scott:scott2 REMAP_TABLESPACE=users:users2 TABLE_EXISTS_ACTION=replace DUMPFILE=scott_all%U.dmp LOGFILE=impdp_scott_all.log PARALLEL=2
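A related option: SQLFILE (described in the keyword list below) makes impdp write the DDL it would execute into a file instead of running it, which is handy for previewing an import. A sketch reusing the directory and dump files from example 6 (the output file name is hypothetical):
impdp system DIRECTORY=backup DUMPFILE=scott_all%U.dmp SCHEMAS=scott SQLFILE=scott_ddl.sql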
Impdp keyword descriptions (defaults in parentheses):
ATTACH: Attach to an existing job, e.g. ATTACH[=job name].
CONTENT: Specifies the data to load; valid keywords are (ALL), DATA_ONLY and METADATA_ONLY.
DIRECTORY: Directory object used for the dump, log, and SQL files.
DUMPFILE: List of dump files to import from (expdat.dmp), e.g. DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.
ENCRYPTION_PASSWORD: Password key used to access encrypted column data. Not valid for network import jobs.
ESTIMATE: Calculate job estimates; valid keywords are (BLOCKS) and STATISTICS.
EXCLUDE: Exclude specific object types, e.g. EXCLUDE=TABLE:EMP.
FLASHBACK_SCN: SCN used to set the session snapshot back to a previous state.
FLASHBACK_TIME: Time used to find the SCN closest to the specified time.
FULL: Import everything from the source (Y).
HELP: Display a help message (N).
INCLUDE: Include specific object types, e.g. INCLUDE=TABLE_DATA.
JOB_NAME: Name of the import job to create.
LOGFILE: Log file name (import.log).
NETWORK_LINK: Name of the database link to the remote source system.
NOLOGFILE: Do not write a log file.
PARALLEL: Change the number of active workers for the current job.
PARFILE: Specify a parameter file.
QUERY: Predicate clause used to import a subset of a table.
REMAP_DATAFILE: Redefine data file references in all DDL statements.
REMAP_SCHEMA: Load objects from one schema into another schema.
REMAP_TABLESPACE: Remap tablespace objects to another tablespace.
REUSE_DATAFILES: Initialize tablespace data files if they already exist (N).
SCHEMAS: List of schemas to import.
SKIP_UNUSABLE_INDEXES: Skip indexes that are in the unusable state.
SQLFILE: Write all SQL DDL to the specified file instead of executing it.
STATUS: Frequency (in seconds) at which job status is monitored; with the default (0), new status is shown as soon as it is available.
STREAMS_CONFIGURATION: Enable loading of Streams metadata.
TABLE_EXISTS_ACTION: Action to take if the imported object already exists; valid keywords are (SKIP), APPEND, REPLACE and TRUNCATE.
TABLES: Identifies a list of tables to import.
TABLESPACES: Identifies a list of tablespaces to import.
TRANSFORM: Metadata transforms to apply to applicable objects; valid transform keywords are SEGMENT_ATTRIBUTES, STORAGE, OID and PCTSPACE.
TRANSPORT_DATAFILES: List of data files to import in transportable mode.
TRANSPORT_FULL_CHECK: Verify the storage segments of all tables (N).
TRANSPORT_TABLESPACES: List of tablespaces from which metadata will be loaded. Valid only in NETWORK_LINK mode import operations.
VERSION: Version of objects to import; valid keywords are (COMPATIBLE), LATEST, or any valid database version. Valid only for NETWORK_LINK and SQLFILE.
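Long-running expdp/impdp jobs can be monitored from a second session using ATTACH from the lists above. A sketch that attaches to the scott3 job started in expdp example 2 and checks its status at the interactive prompt:
$ expdp scott/tiger attach=scott3
Export> status
Export> exit_client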
Error
Q:
ORA-31626: job does not exist
ORA-31687: error creating worker process with worker id 1
ORA-31687: error creating worker process with worker id 1
ORA-31688: Worker process failed during startup.
A:
Quoting the official Oracle note:
Changes: In the first situation, AQ_TM_PROCESSES=0. For the second situation, AQ_TM_PROCESSES should not be 0.
Cause: For the first situation, the AQ_TM_PROCESSES init.ora parameter was set to zero (AQ_TM_PROCESSES=0). Once this parameter was removed from the init.ora file and the database was bounced, the problem was resolved. For the second situation, there is likely a lack of memory for the streams_pool_size.
Solution: For the first situation, remove the AQ_TM_PROCESSES parameter (AQ_TM_PROCESSES=0) from the init.ora. For the second situation, allocate between 50-100MB for the STREAMS_POOL_SIZE in order for Data Pump to function, since it depends on Streams processing.
In short, query the aq_tm_processes value:
If it is 0, remove this parameter from init.ora and restart the Oracle service.
If it is 1, allocate 50-100 MB of memory to STREAMS_POOL_SIZE.
Check the aq_tm_processes value:
SQL > show parameter process

NAME                            TYPE        VALUE
------------------------------- ----------- -----
aq_tm_processes                 integer     0
db_writer_processes             integer     1
gcs_server_processes            integer     0
job_queue_processes             integer     10
log_archive_max_processes       integer     2
processes                       integer     150
Modify STREAMS_POOL_SIZE:
SQL > alter system set streams_pool_size=50m scope=spfile;
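If the instance starts from an spfile rather than init.ora, the AQ_TM_PROCESSES parameter can be removed without editing any file; a sketch:
SQL > alter system reset aq_tm_processes scope=spfile sid='*';
Then restart the database for the change to take effect.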
Q:
Connected to an idle instance.
ORA-01078: failure in processing system parameters
LRM-00109: could not open parameter file '/oracle/app/oracle/product/11.2.0/dbhome_1/dbs/initorcl.ora'
Disconnected
A:
$ cp /oracle/app/oracle/admin/orcl/pfile/init.ora.4262015194529 /oracle/app/oracle/product/11.2.0/dbhome_1/dbs/initorcl.ora
When the instance fails to start, be sure to read the error messages in the startup log:
/oracle/app/oracle/product/11.2.0/dbhome_1/startup.log
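Once the instance starts from the restored pfile, it is worth recreating the spfile so that later parameter changes with scope=spfile are persisted again; a sketch (assumes the pfile is at its default location, as after the copy above):
SQL > create spfile from pfile;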
Q:
ORA-31626: job does not exist
ORA-31633: unable to create master table "SERVER.SYS_IMPORT_FULL_05"
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPV$FT", line 1020
ORA-01031: insufficient privileges
A:
The user does not have permission to create tables. Just grant the user the create table privilege:
> GRANT CREATE TABLE TO user_name;
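To confirm what the user actually holds before retrying, you can query the directly granted system privileges (a plain dictionary query; privileges inherited through roles will not appear here):
> select privilege from dba_sys_privs where grantee = upper('user_name');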
Q:
ORA-39006: internal error
ORA-39068: invalid master table data in row with PROCESS_ORDER=-4
ORA-01950: no privileges on tablespace 'USERS'
A:
The user has no quota on the USERS tablespace (ORA-01950 is a quota/privilege problem, not a lack of free space). Either grant the user an unlimited quota:
> GRANT UNLIMITED TABLESPACE TO user_name;
Or a fixed quota:
> alter user user_name quota 100m on users;
Then verify, as the user_name user, that table creation works:
> CREATE TABLE test (a varchar2(10));
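The quota can be verified afterwards with the standard dictionary view; a MAX_BYTES of -1 means unlimited:
> select tablespace_name, bytes, max_bytes from dba_ts_quotas where username = upper('user_name');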
Q:
Importing data reports an error similar to the following:
Job "SYSTEM"."SYS_IMPORT_SCHEMA_01" completed with 116 error(s)
Because the tables already exist, the import of their data is skipped: the default option of the table_exists_action parameter is skip.
A:
If we want to import the data anyway, we can add the table_exists_action parameter and specify the desired option.
TABLE_EXISTS_ACTION
Action to take if the imported object already exists.
Valid keywords are: APPEND, REPLACE, [SKIP] and TRUNCATE.
These correspond to appending, replacing, [skipping], and truncating.
With TRUNCATE, if the table exists it is truncated first and then the data contained in the dump file is loaded into it.
Note: if the APPEND option is chosen and the original table already has data but no reasonable constraints, the import may duplicate rows. Therefore, in a real production import, you must understand the actual state of the data before deciding which option to choose.
To replace all existing content under the user, you can use TABLE_EXISTS_ACTION=REPLACE, as in the sketch below.
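Applied to the import command from the Impdp Import section above, a sketch with TABLE_EXISTS_ACTION added (all names are the same placeholders as before):
> impdp user_name/password directory=directory_name dumpfile=backup_name.dmp schemas=user_name table_exists_action=replace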
Modify a user's password:
> ALTER USER user_name IDENTIFIED BY new_password;
Drop a user:
> DROP USER user_name cascade;