2025-03-18 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 05/31 Report
This article introduces how to recover lost data in an Oracle database. Many people are unsure how to approach this task, so the editor has consulted a variety of sources and collected the simple, practical methods below. We hope they answer your questions; please follow along.
Based on the characteristics of the Oracle database and the tools it provides, the main recovery methods are:
Restore lost table data from a logical backup using the import tool
Restore the data files from a physical backup and perform incomplete recovery
Recover changes from the redo log files using the dbms_logmnr package
Recover data using the flashback feature
To make the methods easier to illustrate, all of the recovery techniques below are based on the following scenario: at 11:00 p.m. the night before, the system administrator made a full logical backup of the database with export and then hot-backed up all the data files. At 10:00 the next morning, while modifying data in table TFUNDASSET, the administrator miswrote the condition of the UPDATE statement, so the ztm field of a batch of records (several thousand rows) was set to the wrong value, and the change was committed. The table is an asset sheet whose data changes relatively infrequently.
First, recover lost data from a logical backup using the import tool
Export/import is a tool Oracle provides for logical backup of a database, suitable for systems with a small data volume and light transaction load. If the logical backup was made with export at 11:00 p.m. the night before and the error occurred at 10:00 this morning, then restoring from that backup loses every modification (both DDL and DML) made between the backup and the error. If the affected table is relatively stable, that is, it sees essentially no DML, such as a standard code table or historical data in a partitioned table, then importing the table recovers the data completely. If it is a frequently changing business table, the lost interval can only be reconstructed from paper records according to the business situation, or recovered by other means.
Example: in our scenario the table is an asset table, relatively little of its data has been modified during today's operation, and the lost changes can either be tolerated or recovered from other sources, so import can be used.
Method 1:
1. Back up the data in this table to another table:
2. Delete the records from the table:
3. Run the import command:
This command names the table to import with the tables keyword; ignore=y tells import to ignore the creation error raised because the table already exists.
4. After the import finishes, check the records in the table and reapply the day's changes by appropriate means.
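The commands for these steps do not survive in the original post; the following is one possible sketch, assuming the schema user is scott/tiger and the export file from the previous night is full_backup.dmp (both names are assumptions):

```sql
-- 1. Back up the current (wrong) data to a holding table
CREATE TABLE tfundasset_bak AS SELECT * FROM tfundasset;

-- 2. Remove the bad records
DELETE FROM tfundasset;
COMMIT;

-- 3. Import last night's copy of the table (HOST passes the
--    command from SQL*Plus to the operating system shell)
HOST imp scott/tiger file=full_backup.dmp tables=TFUNDASSET ignore=y
```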
Method 2:
1. Import the table to be restored into another user's schema:
2. After checking the data, delete the original table's records:
3. Insert the rows back from the other user's table:
4. When the data volume is large, a direct-path insert can be used:
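Again the commands were stripped from the original; a sketch of these steps, assuming a scratch user named recover and the same export file as above (all names are assumptions):

```sql
-- 1. Import last night's TFUNDASSET into a separate user
HOST imp system/manager file=full_backup.dmp fromuser=scott touser=recover

-- 2. After verifying the imported data, clear the original table
DELETE FROM scott.tfundasset;
COMMIT;

-- 3. Insert the rows back from the other user's copy
INSERT INTO scott.tfundasset SELECT * FROM recover.tfundasset;
COMMIT;

-- 4. For a large table, a direct-path (APPEND) insert
--    reduces redo and undo generation
INSERT /*+ APPEND */ INTO scott.tfundasset
  SELECT * FROM recover.tfundasset;
COMMIT;
```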
Second, restore the data files from a physical backup and perform incomplete recovery
If the database runs in archive mode, you can restore a previous backup of the data files, roll forward with the archived logs up to the moment just before the erroneous operation, and then reset the log files and open the database.
You can confirm whether the database is running in archive mode as follows:
If the output reports Archive Mode, the database is running in archive mode.
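The check itself was stripped from the original post; in SQL*Plus, connected as a privileged user, it would look something like this (the output lines are the typical form when archiving is on):

```sql
-- Show the current log mode of the instance
ARCHIVE LOG LIST
-- Database log mode              Archive Mode
-- Automatic archival             Enabled
```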
Assume a full physical backup of the database was made at 11:00 p.m. the night before; then consider the following restore procedure:
1. Shut down the database:
Incomplete recovery must be performed on a closed database: you restore the older backup and then roll the logs forward as far as needed, rather than restoring a newer state and rolling back to a point in time.
Notify each client that the database is about to shut down, then issue SHUTDOWN IMMEDIATE:
Database closed.
Database dismounted.
ORACLE instance shut down.
2. Determine the time of the erroneous operation:
You can fix the point at which roll-forward must stop either from the operator's estimate, or by analyzing the log files with LogMiner (described later) to find the exact time of the error.
3. Restore the data files:
First back up the current database files, then overwrite the existing data files with the most recent earlier backup. Note: do not overwrite the existing control files.
4. Perform point-in-time recovery, starting the database in the mount state:
This rolls the database back to 09:58 on October 20, 2015, just before the erroneous update.
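The recovery commands were stripped from the original; a minimal sketch of this step, using the article's example timestamp:

```sql
-- Mount the database without opening it
STARTUP MOUNT;

-- Roll forward to just before the bad update
RECOVER DATABASE UNTIL TIME '2015-10-20:09:58:00';

-- After an incomplete recovery the database must be
-- opened with RESETLOGS
ALTER DATABASE OPEN RESETLOGS;
```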
Then use the business records to re-enter the data for the lost interval.
Third, recover from the log files using the dbms_logmnr package
Oracle supplies this package; used together with the dbms_logmnr_d package, it makes it easy to analyze online and archived log files and extract from them all changes made to the database.
Before using this package, you need to make some settings and modifications:
1. Open initorcl.ora and set the initialization parameter utl_file_dir to the directory where the data dictionary file used by the dbms_logmnr_d package will be placed. Then restart the database for the parameter to take effect.
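The parameter line itself is not shown in the original; a minimal sketch, assuming /oracle/logminer as the dictionary directory (the path is an assumption):

```
# initorcl.ora
utl_file_dir = /oracle/logminer
```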
2. Connect to the database as the sys user and run the dbmslmd.sql script to build the dbms_logmnr_d package.
The main steps in using LogMiner to analyze redo log files are:
Create an external data dictionary file with the stored procedure build in dbms_logmnr_d
Add the log files to be analyzed with the stored procedure add_logfile in dbms_logmnr
Start the analysis with the stored procedure start_logmnr in dbms_logmnr
Query the views related to dbms_logmnr to read the contents of the log files
End the analysis with the stored procedure end_logmnr in dbms_logmnr
The process is described in detail below.
1) Create an external data dictionary file using the stored procedure build in dbms_logmnr_d:
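The call was stripped from the original; one possible form, assuming the dictionary directory set in utl_file_dir above (file and directory names are assumptions):

```sql
-- Write the data dictionary to the utl_file_dir directory
BEGIN
  dbms_logmnr_d.build(
    dictionary_filename => 'dict.ora',
    dictionary_location => '/oracle/logminer');
END;
/
```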
2) Add the log files to be analyzed to the analysis list with the stored procedure add_logfile in dbms_logmnr:
If the database is not running in archive mode, recycling of the redo log files may have overwritten the entries you are looking for. If it is running in archive mode, you can decide which archived log files to add by comparing the archive times recorded in alert_orcl.log (under $ORACLE_HOME/admin/orcl/bdump) with the time of the erroneous operation.
Note: the logfilename parameter must be the full path of the log file, otherwise an error is reported. Repeat the call until every file to be analyzed has been added to the list; then you can start the analysis.
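The calls were stripped from the original; a sketch, with file paths that are assumptions for this scenario:

```sql
-- Start a new list with the first file, then append the rest
BEGIN
  dbms_logmnr.add_logfile(
    logfilename => '/oracle/oradata/orcl/redo01.log',
    options     => dbms_logmnr.new);
  dbms_logmnr.add_logfile(
    logfilename => '/oracle/oradata/orcl/redo02.log',
    options     => dbms_logmnr.addfile);
END;
/
```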
3) Start the analysis using the stored procedure start_logmnr in dbms_logmnr:
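The call was stripped from the original; a sketch pointing LogMiner at the dictionary file created earlier (the path is an assumption):

```sql
BEGIN
  dbms_logmnr.start_logmnr(
    dictfilename => '/oracle/logminer/dict.ora');
END;
/
```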
After this, the contents of the log files can be read through the queries below.
4) Query the views related to dbms_logmnr to read the contents of the log files:
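The query was stripped from the original; for the scenario's table, a sketch might look like this (sql_redo holds the original statement and sql_undo a statement that reverses it):

```sql
SELECT scn, timestamp, sql_undo
  FROM v$logmnr_contents
 WHERE seg_name = 'TFUNDASSET'
   AND operation = 'UPDATE';
```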
Here you can find the statements needed for recovery. Note: v$logmnr_contents is visible only to the session that ran dbms_logmnr.start_logmnr; if the analysis is ended from another session or with dbms_logmnr.end_logmnr, the data in v$logmnr_contents can no longer be accessed. To make it available to other sessions, copy it into a regular table, for example:
create table undo_sql as select * from v$logmnr_contents;
Then grant privileges on undo_sql, and other users can read the extracted data.
5) End the analysis using the stored procedure end_logmnr in dbms_logmnr.
When finished, end the analysis activity with: exec dbms_logmnr.end_logmnr
This frees up the resources (memory and data structures) allocated to logminer.
As the procedure above shows, if the volume of updates is large and the log files are small, log switches will occur; if the logs are not mined in time and the database is not in archive mode, recycling of the log files can make the data unrecoverable. In archive mode the data survives, but the analysis may take a long time because there are more log files to examine.
Fourth, recover data using the flashback feature
Starting with Oracle9i, the flashback query (Flashback Query) feature provides a simple, fast way to recover from mistakenly deleted or updated data that has already been committed. Before flashback query, such cases could only be handled by point-in-time recovery from a backup or by mining the logs with LogMiner, both of which are more troublesome and time-consuming.
Prerequisites for using Flashback Query:
1. The database must use Automatic Undo Management.
2. The period over which a query can be flashed back is set by the UNDO_RETENTION initialization parameter (in seconds).
The parameter can be changed dynamically with ALTER SYSTEM SET UNDO_RETENTION = <seconds>;
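A quick check of both prerequisites, with an example retention value (10800 seconds, i.e. 3 hours, is an assumption):

```sql
SHOW PARAMETER undo_management
SHOW PARAMETER undo_retention
ALTER SYSTEM SET UNDO_RETENTION = 10800;
```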
How to use Flashback Query to recover data:
1) Recover through SQL
Use the AS OF clause of the SELECT statement to perform a flashback query, with the following syntax:
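The syntax was stripped from the original; the general form is:

```sql
SELECT ... FROM table_name
  AS OF { SCN | TIMESTAMP } expr
 WHERE ...;
```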
Use the AS OF keyword to flashback-query a table, view, or materialized view. If SCN is specified, expr must be a number; if TIMESTAMP is specified, expr must be a timestamp value. The query returns the data as of the specified SCN or point in time.
Let's run an experiment using the scott schema.
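The experiment was stripped from the original; a sketch of what it may have looked like (the bad update, the interval, and the values are assumptions): make a wrong update, commit it, then read the pre-change data with AS OF.

```sql
-- A mistaken update, committed
UPDATE scott.emp SET sal = 0 WHERE deptno = 10;
COMMIT;

-- The old salaries are still visible through undo
SELECT empno, sal
  FROM scott.emp AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '5' MINUTE)
 WHERE deptno = 10;
```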
If you use AS OF in the subquery of an UPDATE, the subquery may return only one record; otherwise an error is reported.
You can use a temporary table as a staging area and update from it, as follows:
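The staging example was stripped from the original; one possible sketch, continuing the scott.emp experiment above (table name and timestamp are assumptions):

```sql
-- Stage the pre-change rows, then update from the staging table
CREATE TABLE emp_stage AS
  SELECT empno, sal
    FROM scott.emp AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '5' MINUTE)
   WHERE deptno = 10;

UPDATE scott.emp e
   SET e.sal = (SELECT s.sal FROM emp_stage s WHERE s.empno = e.empno)
 WHERE e.deptno = 10;
COMMIT;
```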
2) Recover through the DBMS_FLASHBACK package
The DBMS_FLASHBACK package provides the following functions:
ENABLE_AT_TIME: sets the flashback query time of the current SESSION
ENABLE_AT_SYSTEM_CHANGE_NUMBER: sets the flashback query SCN of the current SESSION
GET_SYSTEM_CHANGE_NUMBER: gets the SCN of the current database
DISABLE: closes the flashback query of the current SESSION
Once a session is put into flashback query mode, subsequent queries see the database as of that point in time or SCN. When the session ends, flashback query is automatically disabled even if DISABLE was never called explicitly.
No DML or DDL operations are allowed while a session is in flashback query mode. To use DML for data recovery, you must work through a PL/SQL cursor: open it while flashback is enabled, disable flashback, and then apply the changes.
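A minimal sketch of this cursor pattern, continuing the scott.emp example (the timestamp and column choices are assumptions):

```sql
DECLARE
  CURSOR c_old IS
    SELECT empno, sal FROM scott.emp WHERE deptno = 10;
  r c_old%ROWTYPE;
BEGIN
  dbms_flashback.enable_at_time(SYSTIMESTAMP - INTERVAL '5' MINUTE);
  OPEN c_old;                 -- the cursor sees the flashback snapshot
  dbms_flashback.disable;     -- DML is allowed again from here on
  LOOP
    FETCH c_old INTO r;
    EXIT WHEN c_old%NOTFOUND;
    UPDATE scott.emp SET sal = r.sal WHERE empno = r.empno;
  END LOOP;
  CLOSE c_old;
  COMMIT;
END;
/
```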
Example:
As the example shows, as long as the modification happened no earlier than SYSDATE minus the number of seconds specified by UNDO_RETENTION, the data can be recovered this way.
For the batch of data in our scenario, you can write a procedure to capture the pre-change data:
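The procedure was stripped from the original; one possible sketch for the scenario (the staging table tfund_tmp and the timestamp are assumptions; record-style INSERT needs Oracle 9iR2 or later):

```sql
-- Empty staging table with the same structure as TFUNDASSET
CREATE TABLE tfund_tmp AS SELECT * FROM tfundasset WHERE 1 = 0;

DECLARE
  CURSOR c_old IS SELECT * FROM tfundasset;
  r tfundasset%ROWTYPE;
BEGIN
  dbms_flashback.enable_at_time(
    TO_TIMESTAMP('2015-10-20 09:58:00', 'YYYY-MM-DD HH24:MI:SS'));
  OPEN c_old;               -- snapshot from before the bad update
  dbms_flashback.disable;
  LOOP
    FETCH c_old INTO r;
    EXIT WHEN c_old%NOTFOUND;
    INSERT INTO tfund_tmp VALUES r;
  END LOOP;
  CLOSE c_old;
  COMMIT;
END;
/
```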
Then update the TFUNDASSET with the data in this temporary table.
V. Summary
Comparing these recovery methods, we can see:
Exp/imp suits only tables whose data changes little. Even then, the data must afterwards be corrected from business records, or some loss remains.
Point-in-time incomplete recovery can restore the lost data, but the database must be shut down, which reduces availability, and all changes made after the recovery point are lost.
LogMiner recovers data well, but the database should be running in archive mode as far as possible, otherwise data may still be lost. Its advantage is that all the information can be obtained from the log files without shutting the system down.
Flashback is the most convenient and concise: the pre-modification data can be read directly. But it depends on the undo settings and consumes a lot of undo (rollback) space.
Which method to choose therefore depends on your environment and the specific circumstances; no single recipe can be applied mechanically. Only with the right method can data loss be kept to a minimum.
Of course, it is best never to need these recovery methods at all, provided you do the following:
Create different database users with different passwords for different environments (if you cannot use different users, at least use different passwords)
Separate the schema owner from the application users and grant privileges appropriately
Before running DML, run a query with the same conditions to check that the result set matches your expectations
This concludes our study of how to recover lost data in an Oracle database; we hope it has answered your questions. Pairing theory with practice is the best way to learn, so go and try it! To keep learning, please continue to follow this site for more practical articles.