This article introduces the methods of Oracle recovery and media recovery. Many people have questions about this topic in day-to-day operations, so I have consulted a variety of materials and put together simple, practical procedures. I hope it helps answer your doubts about Oracle recovery and media recovery. Now, let's study the topic together!
I. Recovery solutions
Error types and recovery solutions:

- Media failure: if only a small number of blocks are damaged, use block media recovery; if a large number of blocks, data files, or tablespaces are damaged, a full recovery of the damaged data files or tablespaces may be required; if archived Redo log files or online Redo log files are lost, only incomplete recovery is possible.
- Logical damage: if the problem is caused by a programmer error, it can be fixed by patching. For problems that cannot be patched, media recovery can also be used to recover the data.
- User error: choose the appropriate Flashback technology for the specific user error; Flashback is the first choice for recovering from user errors. If Flashback cannot recover the data, consider media recovery or tablespace point-in-time recovery.
Note: recovery depends on backups. Once deployment to a production environment is complete, make sure there is a full backup of the database and that archiving of the Redo logs is enabled.
II. SCN time mechanism
SCN (System Change Number) is a very important internal time mechanism in Oracle; consistency and data recovery are closely tied to it. At startup, Oracle also uses SCNs to determine whether the database needs instance recovery. Although a RAC system has multiple instances, there is only one database, and the SCN is a database-level change number, so the SCNs generated by different instances must be unique and ordered; this is the job of the SCN generator. System time and SCN can easily be converted to each other. Here is an example of converting between them:
SQL> select timestamp_to_scn(sysdate) from dual;

TIMESTAMP_TO_SCN(SYSDATE)
-------------------------
                  6980593

SQL> select scn_to_timestamp(6980593) from dual;

SCN_TO_TIMESTAMP(6980593)
-------------------------
02-DEC-13 11.12.21.000000000 PM

SQL>
Here are several common SCN types:
- Checkpoint SCN

Query the most recent checkpoint SCN:

SQL> select checkpoint_change# from v$database;

CHECKPOINT_CHANGE#
------------------
        3.6555E+12

SQL>
The checkpoint SCN is updated by the CKPT process at each checkpoint and saved in the control file. Executing a checkpoint guarantees the integrity and consistency of the data as of the time of the checkpoint. Instance recovery also starts from the last checkpoint, so the checkpoint frequency determines how long Crash recovery or instance recovery takes. A checkpoint is triggered when a log switch occurs or when requested SGA space is insufficient. When a checkpoint occurs, the DBWn process writes all dirty buffers back to disk, ensuring that committed, consistent data is on disk. The larger a single online Redo log file is, the longer the interval between checkpoints, the longer instance recovery takes, and the more data is lost if a log file is lost.
- The latest (current) SCN

Execute the following SQL statement to view the latest SCN of the database:

SQL> select current_scn from v$database;

CURRENT_SCN
-----------
 3.6555E+12

SQL>
This is the most recently generated global SCN; it is constantly updated and only ever grows, never decreases.
- Data file SCN

Execute the following SQL to view the SCN of all data files:

SQL> select checkpoint_change# from v$datafile;

CHECKPOINT_CHANGE#
------------------
        3.6555E+12
        3.6555E+12
        3.6555E+12
        3.6555E+12
        3.6555E+12
        3.6555E+12

6 rows selected.

SQL>
Each data file has a data file SCN, which is updated by the CKPT process at every checkpoint and saved in the control file.
- Start SCN

Execute the following SQL statement to query the start SCN of all data files:

SQL> select checkpoint_change# from v$datafile_header;

CHECKPOINT_CHANGE#
------------------
        3.6555E+12
        3.6555E+12
        3.6555E+12
        3.6555E+12
        3.6555E+12
        3.6555E+12

6 rows selected.

SQL>
Each data file has a start SCN, which is updated by the CKPT process at each checkpoint and stored in the data file header.
- Stop (terminating) SCN

Execute the following SQL statement to query the stop SCN of all data files:

SQL> select last_change# from v$datafile;

LAST_CHANGE#
------------

6 rows selected.

SQL>
Each data file has a stop SCN, which is updated by the CKPT process and saved in the control file. While the database is open there is no stop SCN, so the query above returns empty values, which represent infinity.
When the database starts, Oracle first checks whether the checkpoint SCNs in the control file and in the data files are consistent; if they are not, the data files need media recovery. If they are consistent, Oracle further checks whether each data file's start SCN and stop SCN are the same. If the database was shut down abnormally, the stop SCN will be empty, which requires Crash recovery or instance recovery. Crash recovery and instance recovery use the online Redo logs and the UNDO tablespaces to roll forward and roll back; if the online Redo logs or UNDO tablespaces are corrupted, the database may fail to open. If the start SCN matches the stop SCN, the database opens normally.
- Log SCN

Execute the following SQL statement to query the log-related SCNs:

SQL> select status, first_time, first_change#, next_time, next_change# from v$log;

STATUS   FIRST_TIME FIRST_CHANGE# NEXT_TIME  NEXT_CHANGE#
-------- ---------- ------------- ---------- ------------
INACTIVE 02-DEC-13     3.6555E+12 02-DEC-13    3.6555E+12
CURRENT  02-DEC-13     3.6555E+12              2.8147E+14
CURRENT  02-DEC-13     3.6555E+12              2.8147E+14
INACTIVE 02-DEC-13     3.6555E+12 02-DEC-13    3.6555E+12

SQL>
In this query, first_time is the time at which the log group began and first_change# is the SCN at which it began. Note that the first_change# of the log group with status CURRENT equals the checkpoint SCN discussed earlier, showing that a consistent checkpoint SCN is generated when a log switch occurs. next_time is the time at which the log group ended and next_change# is the SCN at which it ended, which equals the first_change# of the next log group. For the CURRENT log group, next_time is empty and next_change# is a very large value.
III. Log threads and online Redo logs
Redo logs record all changes to the data, as well as the order of those changes. Redo logs include online Redo logs, archived Redo logs, and Standby Redo logs; the three types are identical in structure but serve different purposes.
Redo data only shows its value when the database is being recovered. In a RAC environment, each instance's archived Redo logs can be stored on a local file system, but during recovery the archived Redo logs of all nodes must be brought together, so that the instance performing recovery can access the archived Redo logs of every instance.
Each instance corresponds to one log thread (Redo thread) that maintains its logs; a single-instance database has only one log thread, numbered 1. For RAC, the relationship between log threads and instances can be queried with the following SQL statements.
1) Query thread information from the control file:

SQL> select thread#, checkpoint_change#, last_redo_change# from gv$thread;

   THREAD# CHECKPOINT_CHANGE# LAST_REDO_CHANGE#
---------- ------------------ -----------------
         1         3.6555E+12        3.6555E+12
         2         3.6555E+12        3.6555E+12
         1         3.6555E+12        3.6555E+12
         2         3.6555E+12        3.6555E+12

SQL>
2) Query the relationship between threads and instances:

SQL> select thread#, instance_name from gv$instance;

   THREAD# INSTANCE_NAME
---------- -------------
         1 PROD1
         2 PROD2

SQL>
3) Query the relationship between threads and log groups:

SQL> select group#, thread# from v$log;

    GROUP#    THREAD#
---------- ----------
         1          1
         2          1
         3          2
         4          2

SQL>
IV. UNDO tablespace
The UNDO tablespace stores the before images of data blocks, that is, multi-version data for the blocks; it is used for database recovery, consistent reads, and transaction rollback, and plays an important role in read consistency under concurrent access.
In a RAC environment, as with Redo logs, each instance has its own UNDO tablespace. UNDO tablespaces must be placed on shared storage, and each instance can access the UNDO tablespaces of all instances, so that any surviving instance can perform recovery on behalf of any other instance. Giving each instance its own independent UNDO tablespace also reduces contention for UNDO between instances.
The Redo log groups that belong to an instance are determined by its Redo thread, while the UNDO tablespace of an instance is specified directly through a parameter in the initialization parameter file. The following parameters assign an UNDO tablespace to each instance in the parameter file:

prod1.undo_tablespace='UNDOTBS1'
prod2.undo_tablespace='UNDOTBS2'
1. UNDO parameters
- UNDO_MANAGEMENT initialization parameter

UNDO_MANAGEMENT specifies the UNDO space management mode used by the system. Set to AUTO, the instance runs in automatic UNDO management mode; set to MANUAL, it uses manually managed rollback segments. Automatic UNDO management (AUM), introduced in Oracle9i to replace rollback segments, is also called system-managed UNDO (SMU). In automatic UNDO management, the UNDO tablespace automatically allocates and manages the space required by DML operations, instead of the DBA allocating many rollback segments of different sizes.
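As a quick check, the current undo settings can be inspected from SQL*Plus; a minimal sketch follows, and the values shown are purely illustrative:

SQL> show parameter undo

NAME             TYPE    VALUE
---------------- ------- ---------
undo_management  string  AUTO
undo_retention   integer 900
undo_tablespace  string  UNDOTBS1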
- UNDO_RETENTION initialization parameter

UNDO_RETENTION specifies how long (in seconds) UNDO data is retained after a transaction commits. Once a transaction has committed, its UNDO data is no longer needed for rollback or transaction recovery, but it may still be needed for consistent reads.
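For example, to retain committed UNDO data for 30 minutes (1800 seconds here is only an illustrative value):

SQL> alter system set undo_retention=1800;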
- UNDO_TABLESPACE initialization parameter

Each database can have multiple UNDO tablespaces, but each instance has only one active UNDO tablespace at a time. In a RAC environment, each instance corresponds to one UNDO tablespace, and UNDO_TABLESPACE specifies which one the instance uses. An UNDO tablespace cannot be shrunk; if it has grown too large, the only way to reduce its size is to replace it with a new, smaller one.
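A minimal sketch of such a replacement, assuming an illustrative file path and size:

SQL> create undo tablespace undotbs1a datafile '/u01/app/oracle/oradata/test/undotbs1a.dbf' size 500m;
SQL> alter system set undo_tablespace=undotbs1a;
SQL> -- once all active transactions using the old undo segments have completed:
SQL> drop tablespace undotbs1 including contents and datafiles;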
- GUARANTEED UNDO RETENTION feature

The guaranteed undo retention feature is disabled by default. If it is enabled, the database is not allowed to overwrite committed UNDO data that has been retained for less than the time specified by undo_retention. Enabling this feature requires a larger UNDO tablespace; if the UNDO tablespace is too small, DML operations will fail because they cannot allocate UNDO segments. Enabling it reduces the chance of ORA-01555 errors and guarantees that certain Flashback features can flash data back within the retention period. Execute the following SQL statement to enable guaranteed retention for an UNDO tablespace:
SQL> alter tablespace undotbs1 retention guarantee;
2. UNDO views
- V$UNDOSTAT view

V$undostat is the replacement for and an improvement over v$rollstat; it contains a great deal of monitoring and statistical information about UNDO space. This view is very useful for understanding how an instance uses UNDO space: it can be used to monitor and estimate the UNDO tablespace size required by the current load, to provide recommendations for tuning UNDO_RETENTION based on its statistics, and to find long-running SQL statements. Oracle itself also uses the information in this view to tune UNDO usage. The view is only populated in automatic UNDO management mode.
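A hedged example of reading it; maxquerylen (the longest query observed, in seconds) is a common starting point for sizing UNDO_RETENTION:

SQL> select begin_time, end_time, undoblks, maxquerylen from v$undostat order by begin_time;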
- DBA_UNDO_EXTENTS view

Dba_undo_extents describes the extents contained in all UNDO tablespaces in the database, including the size and status of each extent. A query such as the one sketched below displays statistics for the different extent states in the UNDO tablespace.
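The original statement was not preserved in the source; a minimal query that groups extents by status would look like this:

SQL> select status, count(*) as extents, round(sum(bytes)/1024/1024,2) as mb from dba_undo_extents group by status;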
- V$TRANSACTION view
- DBA_ROLLBACK_SEGS view (a usage sketch follows below)
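The source lists these two views without examples; as an illustrative sketch, the active transactions and the undo blocks they hold can be seen by joining v$transaction to v$session:

SQL> select s.sid, s.username, t.used_ublk, t.used_urec from v$transaction t, v$session s where t.ses_addr = s.saddr;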
V. Bad block detection

Parameter 1: DB_BLOCK_CHECKSUM

Properties of the DB_BLOCK_CHECKSUM parameter:

Syntax: DB_BLOCK_CHECKSUM = { OFF | FALSE | TYPICAL | TRUE | FULL }
Default value: TYPICAL
Modifiable by: ALTER SESSION, ALTER SYSTEM
The checksum is validated on read only if the parameter is TYPICAL or FULL and the last write of the block stored a checksum. In FULL mode, Oracle validates the checksum before applying UPDATE or DELETE statements to the data and recalculates it after the change is applied.
Starting with Oracle Database 11g, most log checksums are generated by the foreground processes, and LGWR does the remaining work, making better use of CPU and cache. When this parameter is set to FULL, LGWR verifies the checksum of each log block generated by a foreground process before writing it to disk. In releases prior to Oracle Database 11g, LGWR alone computed the log block checksums. The checksums of data file blocks are calculated and managed by the DBWn process.
When this parameter is set to OFF, DBWn calculates checksums only for the SYSTEM tablespace, not for user tablespaces; in addition, no log verification is performed.
Checksums allow an Oracle database to detect corruption caused by disks, storage systems, or I/O systems. If DB_BLOCK_CHECKSUM is set to FULL, it also catches corruption in memory and stops it from reaching disk. Setting this parameter to TYPICAL adds only about 1% to 2% of system load, while FULL adds about 4% to 5%. Oracle recommends setting DB_BLOCK_CHECKSUM to TYPICAL. For backward compatibility the values TRUE and FALSE are retained; TRUE is equivalent to TYPICAL and FALSE to OFF.
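A hedged example of setting it system-wide (scope=both assumes an spfile is in use):

SQL> alter system set db_block_checksum=TYPICAL scope=both;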
If DB_BLOCK_CHECKSUM is not OFF, each time a block is read Oracle computes its checksum and compares it with the checksum stored in the block header. A corruption report looks like this:

Corrupt block relative dba: 0x0380a58f (file 14, block 42383)
Bad check value found during buffer read
……
Parameter 2: DB_BLOCK_CHECKING

The DB_BLOCK_CHECKING parameter controls logical consistency checking of data blocks, but only within a block; it does not include logical checks between blocks. It is mainly used to prevent corruption in memory and data corruption.
Regardless of how this parameter is set, logical consistency checking is always on for the SYSTEM tablespace; it is off by default for other tablespaces.
Properties of the DB_BLOCK_CHECKING parameter values:

- OFF or FALSE: no logical consistency checks are performed for user tablespaces.
- LOW: basic block checks are performed after a block's contents change in memory, for example after UPDATE or INSERT statements, disk reads, or inter-instance block transfers in RAC.
- MEDIUM: all LOW checks plus full semantic checks for all objects except indexes; since an index can be rebuilt if it is corrupted, indexes can be skipped.
- FULL or TRUE: all MEDIUM checks plus full semantic checks for all objects, including indexes.
Oracle checks a block by walking through the data in the block and making sure it is logically consistent. Depending on system load and the parameter value, block checking typically adds 1% to 10% of overhead. With block checking enabled, heavy UPDATE or INSERT activity causes a larger load, and the performance impact is most visible on busy systems with many inserts or updates. If the performance cost is acceptable, consider setting DB_BLOCK_CHECKING to FULL. For backward compatibility the values TRUE and FALSE may also be used; FALSE is equivalent to OFF and TRUE to FULL.
If DB_BLOCK_CHECKING is enabled and logical corruption occurs in a block on disk, the next update of the block marks it as soft corrupt, and subsequent reads of the block raise an ORA-01578 error.
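A hedged example of enabling it dynamically (MEDIUM is shown only as an illustration; choose a level after measuring the overhead on your system):

SQL> alter system set db_block_checking=MEDIUM scope=both;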
2. Types of block corruption

- Block misplacement: an error reported when Oracle detects that the contents of the block it read belong to a different block, even though the checksum is correct.
- Logical block corruption: the block has a correct checksum, but the structures below the block header are damaged (the block contents are corrupt), which can cause various ORA-600 errors. A detailed description of logical block corruption is usually not written to the alert log; DBV reports the specific logical error in the block.
3. Tools for detecting bad blocks

The following are tools for detecting damaged blocks and their usage:

- The DBVERIFY bad block verification tool

DBVERIFY cannot validate online Redo logs, archived Redo logs, control files, or RMAN backup sets; it can only be used for block verification of data files.
  - DBV validates conventional data files

The following is an example of validating data file blocks with the DBV tool:

$ dbv file=/testdb/test01.dbf blocksize=8192

Note: the DBV tool not only detects whether a data file contains bad blocks, it also reports the details of those bad blocks.
  - DBV verifies raw device data files

DBV requires the file argument to have an extension, so if the database uses raw devices for storage, you must first use the ln command to link the raw device to a file name with an extension, and then run DBV against the linked file to verify the raw device data file.
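A minimal sketch, assuming a hypothetical raw device path /dev/raw/raw1:

$ ln -s /dev/raw/raw1 /tmp/raw1.dbf    # link the raw device to a name with an extension
$ dbv file=/tmp/raw1.dbf blocksize=8192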
  - DBV verifies data files stored in ASM

To validate a data file stored in ASM, you must specify a user name and password; otherwise you will receive the error DBV-00008: USERID must be specified for OSM files. The following is an example of using the DBV tool to validate blocks of a data file stored in ASM:

$ dbv file=+DATAFILE/testdb/datafile/test.234.648839 userid=sys/oracle
- The ANALYZE command

The main purpose of the ANALYZE command is to collect statistics on database objects for the optimizer, so that it can generate accurate execution plans; at the same time, it can check a table or index for corruption. ANALYZE performs bad block checking but does not mark bad blocks as corrupt; the results are saved in a user trace file in the USER_DUMP_DEST directory. ANALYZE syntax:

ANALYZE TABLE <table_name> VALIDATE STRUCTURE [CASCADE];
ANALYZE INDEX <index_name> VALIDATE STRUCTURE;

The ANALYZE command verifies the integrity of every block, row, and index entry. The CASCADE keyword validates the table together with all of its indexes. Unlike DBVERIFY, ANALYZE validates only blocks below the high water mark; it does not validate unused space. For example:

SQL> analyze table fengpin.test validate structure;
- The RMAN tool

RMAN is a backup tool that works like a filter: it passes every block through the buffer cache, and one of its features is checking whether blocks are damaged. If the database being backed up contains bad blocks, the backup reports an error.
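Beyond failing during a backup, RMAN can also be asked to scan for corruption explicitly; a hedged sketch, with the findings recorded in v$database_block_corruption:

RMAN> backup validate check logical database;
SQL> select * from v$database_block_corruption;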
- The EXP tool

If you export a table that contains a bad block, you will receive an error. When the bad block cannot be repaired through block recovery (for example, in NOARCHIVELOG mode), there are two ways to deal with it:

Method 1: enable the 10231 event

By setting the 10231 diagnostic event, Oracle can be made to skip a table's damaged blocks during export. 10231 is an internal Oracle diagnostic event; when set, full table scans skip the bad blocks, so only the data in intact blocks is exported. You can then drop the table and import the exported data into a new table, thereby repairing it.
1) Enable the 10231 diagnostic event:

SQL> alter system set events='10231 trace name context forever, level 10';

2) Disable the 10231 diagnostic event:

SQL> alter system set events='10231 trace name context forever, level 0';
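With the event enabled, the rescue workflow described above can be sketched as follows; the user, table, and dump file names are purely illustrative:

$ exp xiaoyang/xiaoyang tables=test file=test.dmp    # the export skips the corrupt blocks
SQL> drop table xiaoyang.test purge;
$ imp xiaoyang/xiaoyang tables=test file=test.dmp    # reload the intact rows into a fresh copy of the table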
Method 2: use the DBMS_REPAIR package to mark damaged blocks

You can use the DBMS_REPAIR package to mark corrupted database objects so that the corrupted blocks are skipped during full table scans of those objects. The syntax is as follows (the schema and table names are placeholders):

SQL> exec dbms_repair.skip_corrupt_blocks('schema_name', 'table_name');
EXP bad block checking has some limitations; the following types of bad blocks will not be found:

- Bad blocks above the HWM (high water mark)
- Bad blocks in indexes
- Bad blocks in the data dictionary
- The EXPDP tool

The expdp tool gives no hint about bad blocks; it simply exports only the correct data of the object.
VI. Incomplete recovery

Options for incomplete recovery:

Incomplete recovery mode        RMAN option       User-managed backup option
Recover to a point in time      UNTIL TIME        UNTIL TIME
Recover to a log sequence       UNTIL SEQUENCE    UNTIL CANCEL
Recover to an SCN               UNTIL SCN         UNTIL CHANGE
Incomplete recovery can only be performed for the entire database; it cannot be performed for individual data files or tablespaces. In addition, incomplete recovery cannot be performed in NOARCHIVELOG mode.
1) Incomplete recovery based on a point in time

RMAN> run {
shutdown immediate;
startup mount;
sql "alter session set nls_date_format=''YYYY-MM-DD HH24:MI:SS''"; (there is no space between the two single quotes)
set until time '2013-12-07 17:24:00';
restore database;
recover database;
alter database open resetlogs;
}
The time in SET UNTIL TIME can be expressed in several ways: with the TO_DATE function, or with expressions such as SYSDATE-1, as sketched below.
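Two hedged variants of the clause (the values are illustrative):

set until time "to_date('2013-12-07 17:24:00','YYYY-MM-DD HH24:MI:SS')";
set until time 'sysdate-1';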
Note: incomplete recovery applies to the whole database. If a single data file or tablespace were restored and recovered in the script above, the SET UNTIL TIME setting would be ignored and the data file or tablespace would be recovered completely. The point of specifying UNTIL TIME or UNTIL SCN for the RESTORE command is that RMAN automatically selects the most suitable backup to restore from. The batch script above is equivalent to specifying the UNTIL TIME clause for both RESTORE and RECOVER, which is simpler and more robust than issuing the commands one at a time. A point-in-time recovery cannot go back to a time earlier than the completion of the earliest available backup.
2) Incomplete recovery based on a log sequence number

Sequence-based incomplete recovery specifies a sequence number of one Redo thread. Because the load on each node differs, the sequence numbers of the other instances at the time of the specified log switch may differ from the specified thread's sequence number, so the recovery needs the archived logs of all instances up to that switch point.
The following is an example of an incomplete recovery based on a log sequence number in RMAN:

RMAN> run {
shutdown immediate;
startup mount;
set until sequence 10350 thread 1;
restore database;
recover database;
alter database open resetlogs;
}
Note: although sequence number 10350 of thread 1 is specified, the recovery does not include the log with that sequence number; that is, recovery stops after applying the log of thread 1, sequence 10349. The same boundary rule applies to point-in-time and SCN-based recovery.
3) Incomplete recovery based on an SCN

The following is an example of an SCN-based incomplete recovery in RMAN:

RMAN> run {
shutdown immediate;
startup mount;
set until scn 324394;
restore database;
recover database;
alter database open resetlogs;
}
Note: because an RMAN backup of an open database is an inconsistent backup, recovering a full or level 0 backup taken while the database was open may require some Redo logs even when recovering only to the completion point of that backup.
VII. Tablespace point-in-time recovery

The tablespace point-in-time recovery (Tablespace Point-in-Time Recovery, TSPITR) feature recovers one or more tablespaces to a point in time that is earlier than the rest of the database. Tablespace point-in-time recovery is an incomplete recovery of a tablespace, and it is complicated to perform.
Tablespace point-in-time recovery can bring one or more user tablespaces back to an earlier point in time without affecting the other tablespaces and objects in the database. It is a special kind of incomplete recovery that applies a point-in-time recovery to a tablespace. RMAN TSPITR can be used in the following scenarios:
- An incorrect job or DML statement has corrupted data in one of the database's tablespaces; RMAN TSPITR can be used to recover the corrupted tablespace.
- A DDL operation has changed the structure of a table and data has been lost; RMAN TSPITR can recover the lost data in cases where Flashback Table cannot bring back data from the table's previous structure, for example after a TRUNCATE TABLE that followed a structure change.
- A table has been completely dropped with a DROP statement that included the PURGE option; RMAN TSPITR can recover from this situation.
- A tablespace has been dropped; even when a recovery catalog is not used, RMAN can perform TSPITR on the dropped tablespace.
- Recovering from logical corruption of a table.
Tablespace point-in-time recovery is somewhat similar to Flashback: when there has been no media failure, Flashback Database can also recover data, but it flashes the entire database back to an earlier point in time. Unlike TSPITR, the Flashback Database feature requires Flashback logs to have been generated, so flashing back with Flashback Database is more constrained than using TSPITR, and TSPITR can reach further back by using earlier backups for the recovery.
1) How TSPITR works

TSPITR works in five steps:

Step 1: restore the data files from the target database's backup set on the auxiliary instance.
Step 2: recover the data files with the target database's archived logs on the auxiliary instance.
Step 3: export the relevant data from the auxiliary database.
Step 4: update the control file of the target database.
Step 5: import the data exported from the auxiliary database into the target database.
2) RMAN TSPITR modes

Tablespace point-in-time recovery uses RMAN's RECOVER TABLESPACE command. There are several modes for executing RMAN TSPITR; the modes differ in how much of the process is automated for a given environment.

a. Fully automated (the default)

In this mode, RMAN manages the entire TSPITR process. You define the recovery set of tablespaces, the auxiliary destination, the target time, and the other required inputs, and let RMAN manage everything else.
b. Automated with overrides

You can override some of the default RMAN TSPITR settings while still letting RMAN manage the auxiliary instance and the auxiliary destination. This is a variation of the default method: RMAN still performs most of the management but allows you to specify some parameters manually, mainly the following two:

- the location of the auxiliary set or recovery set files
- the initialization parameter file

If, beyond the recovery set file location, you need more control over the auxiliary set files, channel settings and parameters, or other aspects of the auxiliary instance during TSPITR, use these overrides; otherwise the default method is recommended. In both automated modes, RMAN manages the auxiliary instance automatically.
c. Non-automated

In this mode you manually create and manage all aspects of the auxiliary instance and part of the TSPITR process. This mode is appropriate when you must allocate different channels or change parameters with a user-managed auxiliary instance.
3) Executing fully automated TSPITR

Below, in the default fully automated mode, we let RMAN build an auxiliary database on the target database server and use TSPITR to recover from a mis-operation on a table in the target database. The target database and the auxiliary database are on the same server. The following table compares the instance names and database names of the two databases:

Database type        Instance name    Database name
Target database      test             test
Auxiliary database   test2            test
a. Create a simulated environment

Step 1: create a test tablespace xy_tbs in the target database:

SQL> create tablespace xy_tbs datafile '/u01/app/oracle/oradata/test/xy_tbs01.dbf' size 5m;
Step 2: make sure the target database is in ARCHIVELOG mode, then use RMAN to find the file_id of the xy_tbs tablespace and back it up:

RMAN> report schema;
……
RMAN> backup datafile 5;
Step 3: create the test user xiaoyang:

SQL> create user xiaoyang identified by xiaoyang default tablespace xy_tbs temporary tablespace temp;
SQL> grant create session, create table, unlimited tablespace to xiaoyang;
SQL> connect xiaoyang/xiaoyang
Step 4: create two test tables, x1 and x2, and insert one test row into each:

SQL> create table x1 (d date);
SQL> create table x2 (d2 date);
SQL> insert into x1 values (sysdate);
SQL> insert into x2 values (sysdate);
SQL> commit;
Step 5: note the current time; it will be the recovery point for the tablespace:

SQL> alter session set nls_date_format='YYYY-MM-DD HH24:MI:SS';
SQL> select sysdate from dual;
Step 6: after the recovery point, insert another row into x2, create a new table x3, and insert a test row into it:

SQL> insert into x2 values (sysdate);
SQL> create table x3 (d3 date);
SQL> insert into x3 values (sysdate);
SQL> commit;
Step 7: drop table x1 with the PURGE option:

SQL> drop table x1 purge;

Assuming the drop of table x1 was a mis-operation, we will now recover the table using TSPITR.
b. Check whether the tablespace is self-contained

Execute the following SQL statements to check that the tablespace is self-contained:

SQL> execute dbms_tts.transport_set_check('xy_tbs', true, true);
SQL> select * from transport_set_violations;

If the query on transport_set_violations returns rows, the tablespace is not self-contained, and the reported problems must be resolved manually.
c. Identify objects that will be lost after TSPITR

Step 1: identify, by time, the objects that will be lost after TSPITR:

SQL> select owner, name, tablespace_name, to_char(creation_time, 'YYYY-MM-DD HH24:MI:SS') from ts_pitr_objects_to_be_dropped where tablespace_name in ('XY_TBS') and creation_time > to_date('2013-12-08 09:30:00', 'YYYY-MM-DD HH24:MI:SS') order by tablespace_name, creation_time;

Step 2: identify, by SCN, the objects that will be lost after TSPITR:

SQL> select owner, name, tablespace_name, to_char(creation_time, 'YYYY-MM-DD HH24:MI:SS') from ts_pitr_objects_to_be_dropped where tablespace_name in ('XY_TBS') and creation_time > to_date(to_char(scn_to_timestamp(1638492), 'YYYY-MM-DD HH24:MI:SS'), 'YYYY-MM-DD HH24:MI:SS') order by tablespace_name, creation_time;

If either query returns rows, those objects should be logically backed up before executing TSPITR, and after TSPITR completes they should be imported back, replacing what was lost, to recover the objects whose data would otherwise be lost.
d. Create the auxiliary database parameter file

Dump the target database parameter file to a pfile, copy it to the $ORACLE_HOME/dbs directory, rename it inittest2.ora, and adjust the initialization parameters of the test2 instance along the lines of the following parameter file:

*.audit_file_dest='/u01/app/oracle/admin/test2/adump'
*.control_files='/u01/app/oracle/oradata/test2/control01.ctl'
……
db_file_name_convert=('/u01/app/oracle/oradata/test','/u01/app/oracle/oradata/test2')
log_file_name_convert=('/u01/app/oracle/oradata/test','/u01/app/oracle/oradata/test2')
log_archive_start=false
lock_name_space=test2

Modify the audit_file_dest and control_files parameters in the test2 instance's initialization parameter file so that its storage locations differ from those of the original database, and add the db_file_name_convert, log_file_name_convert, log_archive_start, and lock_name_space parameters.
e. Create the directory structure

Create the directories corresponding to the test2 instance's parameter settings:

$ mkdir -p /u01/app/oracle/admin/test2/adump
$ mkdir -p /u01/app/oracle/oradata/test2
f. Create a password file for the test2 instance

$ cd $ORACLE_HOME/dbs/
$ orapwd file=orapwtest2 password=oracle entries=5
g. Statically register the auxiliary database instance

Modify the following configuration in the $GRID_HOME/network/admin/listener.ora file as the grid user:

$ cat listener.ora
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = test2)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0/db_1)
      (SID_NAME = test2)
    )
  )
h. Add an Oracle Net local service name

Add the following configuration to the $ORACLE_HOME/network/admin/tnsnames.ora file as the oracle user:

TEST2 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP) (HOST = rhel2) (PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = test2)
    )
  )
i. Start the auxiliary database to the NOMOUNT state

$ export ORACLE_SID=test2
$ sqlplus / as sysdba
SQL> create spfile from pfile;
SQL> startup nomount
j. Perform the TSPITR recovery

Use RMAN to connect to both the target and auxiliary databases, and run the RECOVER TABLESPACE command to perform TSPITR:

$ rman target /
RMAN> connect auxiliary sys/oracle@test2
RMAN> run {
allocate auxiliary channel a1 type disk;
allocate channel c1 type disk;
recover tablespace xy_tbs until time "to_date('2013-12-08 09:30:00','YYYY-MM-DD HH24:MI:SS')";
release channel c1;
}

Note: when channels are allocated inside the RUN block, performing TSPITR requires an auxiliary channel to be allocated manually.

Bring the xy_tbs tablespace back ONLINE in the target database:

SQL> alter tablespace xy_tbs online;
k. Verify the recovery result

SQL> select * from xiaoyang.x1;
SQL> select * from xiaoyang.x2;
SQL> select * from xiaoyang.x3;

If the TSPITR succeeded, x1 is back with its one row, x2 contains only the row inserted before the recovery point, and x3, which was created after the recovery point, no longer exists.
This concludes the study of Oracle recovery and media recovery methods. I hope it resolves your doubts; combining theory with practice is the best way to learn, so go and try it! If you want to keep learning more related knowledge, please continue to follow this site.