Use Oracle logminer mining logs to recover mistakenly deleted data and find the operator


Case background: users report that data in one of their tables keeps being inexplicably deleted. They want the data recovered and to find out who deleted it.

I. Preparation of the case environment

To use the LogMiner tool, the database must have minimal supplemental logging enabled:

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
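
A quick way to confirm that minimal supplemental logging is now enabled is to query V$DATABASE; it should return YES:

SQL> SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;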

Create a new table TEST and insert 6 rows of data for testing
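
The exact table definition is not important here; a minimal sketch that produces a comparable test table (the ID and NAME columns are only illustrative) could be:

SQL> CREATE TABLE TEST (ID NUMBER, NAME VARCHAR2(20));
SQL> INSERT INTO TEST SELECT ROWNUM, 'ROW_' || ROWNUM FROM DUAL CONNECT BY LEVEL <= 6;
SQL> COMMIT;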

Next, delete a few rows of data

# normal deletion

SQL> DELETE FROM TEST WHERE ID=1;

SQL> COMMIT;

# deletion using a bind variable

SQL> DELETE FROM TEST WHERE ID=:1;

SQL> COMMIT;
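
The bind-variable form above is what an application would typically send through its driver; to reproduce it interactively in SQL*Plus you would use a named bind variable (the name v_id below is only illustrative):

SQL> VARIABLE v_id NUMBER
SQL> EXEC :v_id := 2;
SQL> DELETE FROM TEST WHERE ID = :v_id;
SQL> COMMIT;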

Assume that by the time the deletion is discovered, too much time has passed for flashback recovery to work. Then the only option is to mine the logs: from them we can obtain the SQL needed to restore the data and trace who deleted it and when.

II. Determine the start time for mining

You can usually use the following methods to determine the start time for mining.

1. Query the DBA_HIST_ACTIVE_SESS_HISTORY and V$SQLAREA views to see whether the SQL that deleted the data can still be found.

SQL> SELECT SQL_ID, LAST_ACTIVE_TIME, SQL_FULLTEXT FROM V$SQLAREA WHERE UPPER(SQL_TEXT) LIKE 'DELETE%TEST%' ORDER BY LAST_ACTIVE_TIME DESC;

SQL> SELECT HIS.SESSION_ID, HIS.SESSION_SERIAL#, HIS.SQL_EXEC_START, HIS.MACHINE, HIS.PROGRAM, S.SQL_ID, S.SQL_FULLTEXT
       FROM DBA_HIST_ACTIVE_SESS_HISTORY HIS, V$SQLAREA S
      WHERE HIS.SQL_ID = S.SQL_ID
        AND HIS.SQL_OPNAME = 'DELETE'
      ORDER BY SQL_EXEC_START DESC NULLS LAST;

2. The user knows the approximate deletion time.

If the deletion happened a long time ago, the SQL may already have been aged out of the shared pool and will not be found there. In that case, an approximate time can only be determined from what the user reports.

III. Start mining the logs

Here we start mining from 2019-11-02 15:24:45 and use this time to find out which archived log to start with.

SQL> SELECT THREAD#, SEQUENCE#, FIRST_TIME, NEXT_TIME, NAME
       FROM V$ARCHIVED_LOG
      WHERE TO_DATE('2019-11-02 15:24:45', 'YYYY-MM-DD HH24:MI:SS') BETWEEN FIRST_TIME AND NEXT_TIME;

The corresponding log sequence number is 49, so we use log 49 as the starting point for mining.

1. Set the utl_file_dir parameter to a directory that will hold the data dictionary file. This is a static parameter, so the database must be restarted for the change to take effect (see the sketch after the command below).

SQL> alter system set utl_file_dir='/home/oracle/utl_file_dir' scope=spfile;
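
Because the parameter is static, a restart is required before it takes effect; on a single instance this can be done directly from SQL*Plus (on RAC you would normally restart via srvctl):

SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP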

2. Generate the data dictionary file at /home/oracle/utl_file_dir/log_mnr_dictionary.log

SQL> EXECUTE dbms_logmnr_d.build(dictionary_filename => 'log_mnr_dictionary.log', dictionary_location => '/home/oracle/utl_file_dir');

3. Specify a tablespace to store the mining data. A dedicated tablespace is recommended (a sketch of creating one follows the command below).

SQL> EXECUTE DBMS_LOGMNR_D.SET_TABLESPACE('log_mnr_tbs');
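
If the dedicated tablespace does not exist yet, it can be created first; a minimal sketch, assuming the same +DATA disk group used for the archived logs and an illustrative size, is:

SQL> CREATE TABLESPACE log_mnr_tbs DATAFILE '+DATA' SIZE 1G AUTOEXTEND ON;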

4. Add the archived logs

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LogFileName => '+DATA/racdb/archivelog/2019_11_02/thread_1_seq_49.343.1023295243', Options => dbms_logmnr.new);

If necessary, you can continue to add more archived logs:

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LogFileName => '+DATA/racdb/archivelog/2019_11_02/thread_1_seq_50.344.1023296047', Options => dbms_logmnr.addfile);

5. Start mining

SQL> execute dbms_logmnr.start_logmnr(dictfilename => '/home/oracle/utl_file_dir/log_mnr_dictionary.log');

6. View mining results

The v$logmnr_contents view holds the data mined from the archived logs, including the REDO SQL and UNDO SQL.

The v$logmnr_contents view is only visible to the session that started LogMiner. If there is a large amount of data, it is recommended to materialize it into a regular table, index it, and then query that table. Here we create a new table LOG_MNR_TAB to hold the mined data, as sketched below.
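
A minimal sketch of materializing the view, run in the same session that started LogMiner (the index name is illustrative):

SQL> CREATE TABLE LOG_MNR_TAB AS SELECT * FROM v$logmnr_contents;
SQL> CREATE INDEX IDX_LOG_MNR_TAB_SCN ON LOG_MNR_TAB(SCN);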

SQL> SELECT SCN, TIMESTAMP, OPERATION, SEG_TYPE_NAME, USERNAME, SEG_NAME, OS_USERNAME, MACHINE_NAME, SQL_REDO, SQL_UNDO
       FROM LOG_MNR_TAB WHERE TABLE_NAME='TEST' ORDER BY SCN;

Execute the queried SQL_UNDO statements in order and the table is restored to its previous state.
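
For a DELETE, the mined SQL_UNDO is an INSERT that puts the row back. Assuming the illustrative ID/NAME columns from the earlier sketch and an illustrative schema owner SCOTT, a mined row would look roughly like this:

# illustrative only; the owner, columns and values depend on the real table
insert into "SCOTT"."TEST"("ID","NAME") values ('1','ROW_1');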

One point that is not fully clear: for some records the operating-system user and machine name are visible, while for others they are not.

If you find that the logs added so far do not cover enough time, you can add more archived logs and continue mining, as sketched below.
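
A sketch of continuing: add the next archived log (the file name below is hypothetical), restart LogMiner so the new log is included, then rebuild the working table:

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LogFileName => '+DATA/racdb/archivelog/2019_11_02/thread_1_seq_51.345.1023296781', Options => dbms_logmnr.addfile);
SQL> execute dbms_logmnr.start_logmnr(dictfilename => '/home/oracle/utl_file_dir/log_mnr_dictionary.log');
SQL> DROP TABLE LOG_MNR_TAB;
SQL> CREATE TABLE LOG_MNR_TAB AS SELECT * FROM v$logmnr_contents;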

7. End log mining

SQL> exec dbms_logmnr.end_logmnr;
