I. Import and Export
Backing up a database is a common task in Oracle operations. Common backup schemes include:
1. Logical backup (using the EXP and IMP commands)
There are four modes:
(1) Tablespace mode
(2) Table mode
(3) User mode
(4) Full database mode
2. Physical file backup (offline and online backup)
3. Incremental physical backup using RMAN (Recovery Manager)
An Oracle logical backup exports and imports data with the EXP and IMP commands. The CREATE SESSION system privilege is required to export with EXP or import with IMP, but to export tables belonging to other users you must also have the EXP_FULL_DATABASE role.
When invoking the import and export commands, the first step is to estimate the space required. The binary files (*.dmp) produced by EXP can only be read and restored by the corresponding IMP command. Import and export serve several purposes:
Backup and recovery
Oracle platform migration: you can back up and restore between identical versions, and data files exported from a lower Oracle version can be imported into a higher version, but only between adjacent versions; skipping versions is not supported.
Reorganizing tables
Moving data between different database users
Moving tablespaces between different databases
Storing table definitions in binary operating system files, as protection against data loss caused by user error
It should be pointed out that a logical backup cannot fully recover a database; some data loss is inevitable. How much data does a logical backup lose? Everything changed between the start of the export and the moment of the import is lost, which is absolutely unacceptable for the databases of banks, stock exchanges, and telecommunications companies. For most real production or commercial databases, therefore, logical backup can never be the cornerstone of a backup and recovery strategy; they must rely on physical backups to guarantee full recovery, with logical backup serving only as an auxiliary measure.
II. EXP export data
The EXP command can export data from a database either interactively or non-interactively. In an interactive environment, the export proceeds step by step through a series of prompts.
1. Interactive export
The interactive session proceeds through the following steps (a sketch of such a session appears after this list):
(1) exp is the export command, followed by "username/password@connect_string".
(2) The buffer size the Exp program uses during the export; the larger the buffer, the faster the export. Pressing Enter accepts the default of 4096 bytes.
(3) The Exp command writes all of the exported data into a single .dmp file; in this step Exp asks for the name of that file.
(4) The Exp program asks whether to export the entire user or individual tables. The entire user is exported by default.
(5) The Exp program asks whether to export the grants on each table. Grants are exported by default.
(6) The Exp program asks whether to export the data in the tables. Table data is exported by default.
(7) The data in an Oracle table may come from data blocks in different extents. By default, all the data is compressed into one large extent on export; when IMP imports it, if there is no contiguous free space of that size, the import will fail.
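A rough sketch of such a session (the prompt wording varies by Oracle version; the scott/tiger account, the orcl connect string, and the file name scott.dmp are assumptions for the example):
C:\> exp scott/tiger@orcl
Enter array fetch buffer size: 4096 >            (Enter accepts the 4096-byte default)
Export file: EXPDAT.DMP > scott.dmp              (name of the .dmp file)
(2)U(sers), or (3)T(ables): (2)U >               (export the entire user)
Export grants (yes/no): yes >                    (export grants)
Export table data (yes/no): yes >                (export the table data)
Compress extents (yes/no): yes >                 (compress all extents into one)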
2. Non-interactive export
You can also set all the parameters at once when invoking the Exp command. A fully parameterized Exp command needs no user interaction; following the parameters, it performs the entire export in one pass. Parameters are specified with keywords:
EXP KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
For example: EXP SCOTT/TIGER GRANTS=Y TABLES=(EMP,DEPT,MGR)
Parameter descriptions:
USERID: "username/password"
BUFFER: size of the data buffer, in bytes; usually more than 64000
FILE: path and file name of the output file, usually with the .dmp suffix. Include the full path, but the path must already exist; the export command cannot create it.
COMPRESS: whether to compress extents on export. Default is yes.
GRANTS: whether to export grants. Default is yes.
INDEXES: whether to export indexes. Default is yes.
DIRECT: whether to use direct-path export. By default, the data passes through Oracle's data buffer first and is then written out.
LOG: location of the log file recording the export command's output
ROWS: whether to export data rows. All rows are exported by default.
CONSTRAINTS: whether to export the tables' constraints. Default is yes.
PARFILE: location of a text parameter file holding keyword=value settings
TABLES: a list of table names; separate multiple tables with commas
TABLESPACES: export the data in the named tablespaces
OWNER: export one user's data
FULL: export all data in the database. Default is no.
QUERY: export only the rows selected by a query
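As an illustration, a non-interactive export combining several of these parameters might look like this (the paths and the choice of compress=n and direct=y are assumptions for the example):
exp scott/tiger file=f:\export\scott_owner.dmp log=f:\export\scott_owner.log owner=scott compress=n direct=y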
III. IMP import data
The IMP program imports the files exported by EXP back into the database. Several parameters matter here as well.
> Fromuser: the user recorded in the dmp file at export time.
> Touser: the user in the target database into which the data is imported.
> Commit: whether to commit each time the buffer fills. The default is N; if left at N, a larger rollback segment is required.
> Ignore: during recovery, when a table being restored already exists, Oracle decides what to do based on the ignore parameter. With ignore=y, Oracle skips the CREATE TABLE statement and inserts the data directly into the existing table; records that violate constraints (such as a primary key constraint) are rejected, but valid records are added to the table. With ignore=n, Oracle neither executes the CREATE TABLE statement nor inserts data into that table; it reports the error for that table and continues recovering the next one.
D:\> imp system/manager file=employee.dmp fromuser=scott touser=employee commit=y
IV. Frequently asked questions
The data object already exists
In general, the tables, sequences, functions/procedures, triggers, and so on under the target user should be deleted completely before importing the data.
If the database object already exists and imp is run with its default parameters, the import fails.
If the parameter ignore=y is used, the data in the exp file is inserted into the existing object.
If the table has no unique-key constraint, this can produce duplicate records.
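A minimal sketch of the two options (file and table names are assumptions): drop the object and re-import, or keep it and load into it with ignore=y:
SQL> drop table emp_dump;
D:\> imp scott/tiger file=scott.dmp tables=(emp_dump)            (re-creates the table and its rows)
D:\> imp scott/tiger file=scott.dmp tables=(emp_dump) ignore=y   (or insert into the existing table)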
Database objects have primary/foreign key constraints
When a primary/foreign key constraint is not satisfied, the data import fails.
Solution: import the master table first, then import the dependent table.
Alternatively, disable the primary/foreign key constraints on the target objects, import the data, and then re-enable them, as sketched below.
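For example, the disable/import/enable approach might look like this (the constraint name fk_emp_dept and the file name are hypothetical):
SQL> alter table emp_dump disable constraint fk_emp_dept;
D:\> imp scott/tiger file=scott.dmp tables=(emp_dump,dept_dump) ignore=y
SQL> alter table emp_dump enable constraint fk_emp_dept;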
Insufficient permissions
To import user A's data into user B, the account performing the import must be granted the IMP_FULL_DATABASE role.
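A minimal sketch, assuming a DBA session grants the role and user b then runs the import (all names and the file are assumptions):
SQL> grant imp_full_database to b;
D:\> imp b/b_password file=a.dmp fromuser=a touser=b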
Storage allocation fails when importing a large table (larger than 80 MB)
By default EXP uses compress=Y, which merges all the data into one large initial extent.
On import, if no contiguous free space of that size is available, the import fails.
When exporting tables larger than about 80 MB, remember to set compress=N, and this error will not occur.
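For example (the table name bigtab and the path are assumptions):
exp scott/tiger file=f:\export\bigtab.dmp tables=(bigtab) compress=n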
Imp and exp use different character sets
If the character sets differ, the import fails. You can change the UNIX environment variable, or the NLS_LANG entry in the NT registry, so that the client character set matches.
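On UNIX, for example, the client character set is controlled by the NLS_LANG environment variable (the value shown is an assumption; it must match the character set of the export file):
$ export NLS_LANG=AMERICAN_AMERICA.ZHS16GBK
$ imp scott/tiger file=scott.dmp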
Imp and exp versions are not upward compatible
Imp can successfully import files generated by a lower version of exp, but it may fail to import files generated by a higher version of exp.
Import and export example
set linesize 120 -- format the display for clarity
set pagesize 30 -- format the display for clarity
create table emp_dump as select * from emp; -- create a table named emp_dump
create table dept_dump as select * from dept; -- create a table named dept_dump
select count(*) from emp_dump; -- verify
select count(*) from dept_dump; -- verify
After confirming that the two tables were created successfully, also create a directory f:\export to hold the logical backup files; the logical backup experiment can then begin. Before the specific steps, here are the idea and purpose of the experiment: first take a logical backup of the two tables just created, then run a DML statement against one of them and commit immediately, then destroy both tables, and finally use the logical backup to restore the data and verify how much of it can be recovered.
exp scott/tiger file=f:\export\scott.dmp tables=(emp_dump,dept_dump)
select * from emp_dump;
update emp_dump set job='CEO';
commit;
select * from emp_dump; -- check that the modification succeeded
drop table emp_dump;
drop table dept_dump;
select * from emp_dump; -- now raises an error
select * from dept_dump; -- now raises an error
imp scott/tiger file=f:\export\scott.dmp -- import operation
select * from emp_dump; -- the job column has not changed: it still shows the values from before the "promotion", because the backup predates the update
Introduction to the data pump
Data Pump is a new tool introduced in Oracle 10g. It includes all the functionality of the earlier import and export tools and adds many extensions and enhancements; it is also faster and safer to operate. Its function is similar to that of the import and export tools.
The data pump tools are likewise operating system programs: expdp performs exports and impdp performs imports. With the exp command we used the full (absolute) path of the physical file, which eases later management and maintenance; in Oracle 10g, however, for the sake of system security, absolute paths are no longer allowed in expdp and impdp. Instead, these tools use database directory objects. Directory objects are typically created by a DBA, or by a user with the appropriate system privileges, and read or write permission on the directory is then granted to the users who need it. To keep things simple we will not create a directory object for now, but instead use the directory object named DATA_PUMP_DIR that the Oracle database system creates automatically; it is the default working directory of the expdp and impdp tools.
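For reference, creating and granting a custom directory object would look roughly like this (run as a DBA; the name exp_dir and the path are assumptions):
SQL> create directory exp_dir as 'F:\export';
SQL> grant read, write on directory exp_dir to scott;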
So how do you find the operating system directory behind a database directory object? Oracle provides a data dictionary view called dba_directories that makes the needed information easy to find. To keep the display output readable, first run a few SQL*Plus formatting commands (logged in to the database as a DBA).
set linesize 200
col OWNER for a6
col DIRECTORY_NAME for a20
col DIRECTORY_PATH for a65
select * from dba_directories;
grant read, write on directory DATA_PUMP_DIR to scott; -- gives the scott user access to DATA_PUMP_DIR
After switching to the SCOTT user, query the relevant information:
select * from emp_dump order by job, sal;
Parameter descriptions:
DIRECTORY=DATA_PUMP_DIR: the exported files are stored in the operating system directory defined by the DATA_PUMP_DIR object.
TABLES=(emp_dump,dept_dump): the tables to export are emp_dump and dept_dump.
DUMPFILE=SCOTT.dmp: the name of the exported operating system file is SCOTT.dmp.
QUERY=scott.emp_dump:"WHERE job<>'ANALYST' AND sal>1250": from scott's emp_dump table, only the rows whose job (position) is not ANALYST (analyst) and whose sal (salary) is higher than 1250 are exported to SCOTT.dmp.
Because this command line would be rather long, we save the above parameters in a parameter file named scott_par.txt, which the expdp tool then reads. If the export is a regular job, this greatly simplifies routine operation. Start Notepad, enter the parameters described above, and save the file as scott_par.txt in the F:\oracle\mgt directory (you can create this directory yourself or use a different directory name).
Launch a DOS window:
F:\> cd oracle\mgt
F:\oracle\mgt> dir
F:\oracle\mgt> more scott_par.txt
DIRECTORY=DATA_PUMP_DIR
TABLES=(emp_dump,dept_dump)
DUMPFILE=SCOTT.dmp
QUERY=scott.emp_dump:"WHERE job<>'ANALYST' AND sal>1250"
F:\oracle\mgt> expdp scott/tiger parfile=scott_par.txt
Check that the dump file has been generated. Then, as the scott user, destroy the table and verify:
drop table emp_dump;
select * from emp_dump; -- verify that the table has really been deleted
Now you can use the logical backup just made (the file exported with the data pump) for logical recovery; the following command imports the data in the SCOTT.dmp file back into the SCOTT user (a logical restore).
F:\oracle\mgt> impdp scott/tiger parfile=scott_par.txt
The display output indicates that DEPT_DUMP under the SCOTT user already exists, because only the emp_dump table was deleted earlier. This does not affect the result of the logical recovery; the impdp program simply skips the existing table. Next, use a SQL statement to check whether the emp_dump table has been restored successfully.
select * from emp_dump;
The output of the statement above shows that impdp recovered only the data meeting the condition (job is not ANALYST and sal is higher than 1250), that is, a subset of the table. Finally, use a statement to check that the dept_dump table is intact. Looking at the data in the dept_dump table, it is indeed intact, which is exactly what we expected to see. At this point we can be confident that the logical recovery has succeeded.