
Summarizing the use of LogMiner and dealing with various problems


The version covered here is a little old, but the material is still useful.


Building on earlier write-ups, and incorporating the problems I ran into myself along with their solutions, here is an updated version:

Understanding and using LogMiner, the Oracle 8i analysis tool

Oracle LogMiner is a practical and useful analysis tool that Oracle has provided since release 8i. With it you can easily obtain the specific contents of Oracle redo log files (including archived log files). In particular, the tool can reconstruct all the DML statements (insert, update, delete, etc.) executed against the database, together with the rollback SQL statements needed to undo them. This makes it especially useful for debugging, auditing, or rolling back a particular transaction.

The LogMiner analysis tool is actually made up of a set of PL/SQL packages and some dynamic views (part of the Oracle 8i built-in packages). It ships as part of the Oracle database and is a completely free tool provided with the 8i product. However, it is somewhat more complex to use than other Oracle built-in tools, mainly because it provides no graphical user interface (GUI). This article describes in detail how to install and use the tool.

I. Uses of LogMiner

The redo log files store all the data needed for database recovery, recording every change made to the database, that is, all the DML statements executed against it.

Prior to Oracle 8i, Oracle provided no tool to help database administrators read and interpret the contents of redo log files. When there was a problem with the system, the only thing an ordinary database administrator could do was package up all the log files, send them to Oracle technical support, and then quietly wait for Oracle's technical support to give the final answer. Since 8i, however, Oracle has provided just such a powerful tool: LogMiner.

The LogMiner tool can analyze both online and offline (archived) log files; that is, it can analyze the redo log files of its own database or those of another database.

In general, the main uses of the LogMiner tool are:

1. Tracking database changes: changes can be tracked offline, without affecting the performance of the online system.

2. Rolling back database changes: specific changes can be rolled back, reducing the need for full point-in-time recovery.

3. Optimization and capacity planning: data growth patterns can be analyzed from the data in the log files.

II. Installing LogMiner

To install the LogMiner tool, you must first run the following two scripts:

1. $ORACLE_HOME/rdbms/admin/dbmslm.sql

2. $ORACLE_HOME/rdbms/admin/dbmslmd.sql

Both scripts must be run as the SYS user. The first script is used to create the DBMS_LOGMNR package, which is used to analyze log files. The second script is used to create the DBMS_LOGMNR_D package, which is used to create data dictionary files.

Example of installation on the Windows platform:

SQL> @c:\oracle\ora81\rdbms\admin\dbmslm.sql

Package created.

Grant succeeded.

SQL> @c:\oracle\ora81\rdbms\admin\dbmslmd.sql

Package created.

Package body created.

No errors.

Grant succeeded.

III. Using the LogMiner tool

How to use the LogMiner tool is described in more detail below.

1. Create a data dictionary file (data-dictionary)

As mentioned earlier, the LogMiner tool actually consists of two PL/SQL built-in packages (DBMS_LOGMNR and DBMS_LOGMNR_D) and four V$ dynamic performance views (created when LogMiner is started with the procedure DBMS_LOGMNR.START_LOGMNR):

1. v$logmnr_contents: gives the results of the log analysis.

2. v$logmnr_dictionary: because LogMiner can use more than one dictionary file, this view displays information about them.

3. v$logmnr_parameters: displays the parameters passed to LogMiner.

4. v$logmnr_logs: displays the list of log files being analyzed.

Before using LogMiner to analyze redo log files, you can use the DBMS_LOGMNR_D package to export the data dictionary as a text file. The dictionary file is optional, but without it the data dictionary items (table names, column names, etc.) and the values in the statements LogMiner interprets will appear in internal hexadecimal form, which we cannot read directly. For example, for the following SQL statement:

INSERT INTO dm_dj_swry (rydm, rymc) VALUES (00005, 'Zhang San')

the result interpreted by LogMiner would look like this:

insert into Object#308 (col#1, col#2) values (hextoraw('c30rte567e436'), hextoraw('4a6f686e20446f65'))

The purpose of creating the data dictionary file is to let LogMiner refer to internal data dictionary objects by their actual names rather than by the system's internal hexadecimal identifiers. The data dictionary file is a text file created with the package DBMS_LOGMNR_D. If the tables in the database being analyzed change, so that the data dictionary changes, the dictionary file must be recreated. Likewise, when analyzing the redo logs of another database, the dictionary file must be regenerated from the database being analyzed.

The dictionary file stores the correspondence between tables and their object ID numbers; when a dictionary file is used, LogMiner establishes a one-to-one mapping between table names and object IDs. Note, therefore, that if a user creates a new table whose logs may need to be analyzed later, the dictionary file must be rebuilt so that the new object name and object ID number are stored in it.

First, in the init.ora initialization parameter file, specify the location of the data dictionary file by adding the UTL_FILE_DIR parameter, whose value is the server directory in which the data dictionary file will be placed. For example:

UTL_FILE_DIR = (e:\oracle\logs)

Restart the database so that the newly added parameters take effect, and then create a data dictionary file:

SQL> CONNECT SYS
SQL> EXECUTE dbms_logmnr_d.build (
  dictionary_filename => 'v816dict.ora',
  dictionary_location => 'e:\oracle\logs');

Note: the parameter dictionary_filename specifies the name of the dictionary file, and dictionary_location specifies the directory in which it is stored; this directory must match the value of the initialization parameter UTL_FILE_DIR.
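After the restart, you can confirm the value the instance actually picked up; a quick check from SQL*Plus (a standard command, shown here for convenience) is:

SQL> show parameter utl_file_dir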

Also note that the step that generates the dictionary file may fail with a "subscript outside of limit" error:

SQL> EXECUTE dbms_logmnr_d.build (dictionary_filename => 'v817dict.ora', dictionary_location => 'd:\oradict');

BEGIN dbms_logmnr_d.build (dictionary_filename => 'v817dict.ora', dictionary_location => 'd:\oradict'); END;
*
ERROR at line 1:
ORA-06532: Subscript outside of limit
ORA-06512: at "SYS.DBMS_LOGMNR_D", line 793
ORA-06512: at line 1

Solution: edit the script dbmslmd.sql, changing

TYPE col_desc_array IS VARRAY(513) OF col_description;

to:

TYPE col_desc_array IS VARRAY(713) OF col_description;

Save the file and execute the script again:

SQL> @c:\oracle\ora81\rdbms\admin\dbmslmd.sql

Package created.

Package body created.

No errors.

Grant succeeded.

Then recompile the DBMS_LOGMNR_D package:

SQL> alter package DBMS_LOGMNR_D compile body;

Package body altered.

Finally, re-execute dbms_logmnr_d.build:

SQL> EXECUTE dbms_logmnr_d.build (dictionary_filename => 'v817dict.ora', dictionary_location => 'd:\oradict');

PL/SQL procedure successfully completed.

The dictionary file is now generated normally, and it is the same size as the dictionary file produced in the run that raised the subscript error.

Note also that some documents claim that if the specified dictionary file (e.g. dict.ora) already exists, it must be deleted completely (including from the recycle bin) before this operation, or the procedure will fail. Practice has shown this claim to be groundless: the previously generated dictionary file was not deleted before the operation, and the new dictionary file was still generated normally.

2. Create a list of log files to be analyzed

Oracle redo logs come in two types: online redo logs and offline (archived) log files. Below we discuss how to build the list for each of these two types of log file.

(1) Analyzing online redo log files

a. Create the list

SQL> EXECUTE dbms_logmnr.add_logfile (
  LogFileName => 'e:\oracle\oradata\sxf\redo01.log',
  Options => dbms_logmnr.new);

b. Add additional log files to the list

SQL> EXECUTE dbms_logmnr.add_logfile (
  LogFileName => 'e:\oracle\oradata\sxf\redo02.log',
  Options => dbms_logmnr.addfile);

(2) Analyzing offline (archived) log files

a. Create the list

SQL> EXECUTE dbms_logmnr.add_logfile (
  LogFileName => 'e:\oracle\oradata\sxf\archive\ARCARC09108.001',
  Options => dbms_logmnr.new);

b. Add additional log files to the list

SQL> EXECUTE dbms_logmnr.add_logfile (
  LogFileName => 'e:\oracle\oradata\sxf\archive\ARCARC09109.001',
  Options => dbms_logmnr.addfile);

How many log files to include in this analysis list is entirely up to you, but it is recommended to add only one log file at a time and to add the next file only after the current one has been analyzed.

In addition to adding log files to the analysis list, you can also remove one from it, using the REMOVEFILE option of dbms_logmnr.add_logfile. The following example removes the log file e:\oracle\oradata\sxf\redo02.log added above.

SQL> EXECUTE dbms_logmnr.add_logfile (
  LogFileName => 'e:\oracle\oradata\sxf\redo02.log',
  Options => dbms_logmnr.removefile);

You can view the log files to be analyzed in the log analysis list through the dynamic performance view v$logmnr_logs.
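For example, a quick sanity check of the list looks like this (the column names follow the standard view definition; verify them against your release):

SQL> SELECT log_id, filename FROM v$logmnr_logs;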

Now that you have created a list of log files to analyze, you can analyze them below.

3. Use LogMiner for log analysis

(1) Without restrictions

SQL> EXECUTE dbms_logmnr.start_logmnr (
  DictFileName => 'e:\oracle\logs\v816dict.ora');

(2) With restrictions

By setting several parameters of the procedure DBMS_LOGMNR.START_LOGMNR (their meanings are shown in Table 1), you can narrow the scope of the log files to be analyzed. By setting the start time and end time parameters, we can limit the analysis to a certain time range. In the following example, we analyze only the logs of September 18, 2001:

SQL> EXECUTE dbms_logmnr.start_logmnr (
  DictFileName => 'e:\oracle\logs\v816dict.ora',
  StartTime => to_date('2001-9-18 00:00:00', 'YYYY-MM-DD HH24:MI:SS'),
  EndTime => to_date('2001-9-18 23:59:59', 'YYYY-MM-DD HH24:MI:SS'));

Note: the key to the success of this procedure is that the given StartTime and EndTime must fall within a valid range. In particular, the end time must be earlier than or equal to the time the archived log was created; if it is later than that, the analysis cannot be performed. When analyzing multiple archived logs, it is also best for the logs to be contiguous.
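If you are unsure what time span a given archived log covers, one way to check (a sketch assuming the standard v$archived_log view) is:

SQL> SELECT name, first_time, next_time FROM v$archived_log ORDER BY first_time;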

You can also limit the scope of the log to be analyzed by setting the starting SCN and ending SCN:

SQL> EXECUTE dbms_logmnr.start_logmnr (
  DictFileName => 'e:\oracle\logs\v816dict.ora',
  StartScn => 20,
  EndScn => 50);

Table 1: Parameters of the DBMS_LOGMNR.START_LOGMNR procedure

Parameter     Type            Default     Meaning
StartScn      Number          0           Analyze only the portion of the redo log with SCN >= StartScn
EndScn        Number          0           Analyze only the portion of the redo log with SCN <= EndScn
StartTime     Date            1988-01-01  Analyze only the portion of the redo log with timestamp >= StartTime
EndTime       Date            2988-01-01  Analyze only the portion of the redo log with timestamp <= EndTime
DictFileName  Varchar2        ''          Dictionary file containing a snapshot of the database data dictionary; makes the analysis results readable text rather than internal hexadecimal
Options       Binary_integer  0           System debugging parameter; rarely used in practice

If you get an "invalid month" error when starting the analysis, try the following steps:

alter session set nls_date_language = 'AMERICAN';

alter session set nls_date_format = 'DD-MON-YYYY HH:MI:SS';

Then execute the package: exec dbms_logmnr.start_logmnr(dictfilename => '')

Be sure to pass the dictfilename parameter by name, because this procedure has five parameters with default values; if the value is passed positionally it will be bound to the first parameter.
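For example, reusing the dictionary path from the earlier examples, a named-notation call looks like this:

SQL> EXECUTE dbms_logmnr.start_logmnr(DictFileName => 'e:\oracle\logs\v816dict.ora');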

4. Observing the analysis results (v$logmnr_contents)

So far, we have analyzed the contents of the redo log file. The dynamic performance view v$logmnr_contents contains all the information obtained by LogMiner analysis.

SELECT sql_redo FROM v$logmnr_contents

If we just want to know what a particular user has been doing to a particular table, we can find out with the following SQL query, which retrieves all the operations that user DB_ZGXT performed on table SB_DJJL:

SQL> SELECT sql_redo FROM v$logmnr_contents WHERE username = 'DB_ZGXT' AND seg_name = 'SB_DJJL';

It must be emphasized that the analysis results in the view v$logmnr_contents exist only for the lifetime of the session in which we ran the procedure dbms_logmnr.start_logmnr. This is because all LogMiner storage is in PGA memory: it is invisible to all other processes, and the analysis results disappear when the process ends.
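If you need to keep the results beyond the current session, one common workaround (sketched here with an arbitrary table name) is to copy the view into a permanent table while the analysis is still active:

SQL> CREATE TABLE logmnr_result AS SELECT * FROM v$logmnr_contents;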

Finally, use the procedure DBMS_LOGMNR.END_LOGMNR to terminate the log analysis transaction; the PGA memory area is then cleared and the analysis results no longer exist.

5. End the analysis: use EXECUTE dbms_logmnr.end_logmnr.

IV. Other matters needing attention

We can use LogMiner to analyze redo log files generated by other database instances, not just the redo log files of the database instance where LogMiner is installed. When using LogMiner to analyze other database instances, note the following points:

1. LogMiner must use the dictionary file generated by the database instance being analyzed, not the one generated by the database where LogMiner is installed. It must also be ensured that the character set of the LogMiner database is the same as that of the database being analyzed.

2. The platform of the database being analyzed must be the same as that of the current LogMiner database: if the files to be analyzed were generated by Oracle 8i running on UNIX, then LogMiner must also be run on an Oracle instance on UNIX, not on another platform such as Microsoft NT. The hardware of the two, however, need not be identical.

3. LogMiner itself runs only on Oracle 8 products and later, although it can analyze Oracle 8 logs; for products before Oracle 8 the tool can do nothing. In addition, Oracle 8i can only analyze DML operations; starting with Oracle 9i, you can analyze not only DML but also DDL operations. In Oracle 9i you can use the following statement to query DDL operations and when exactly they occurred:

SQL> select sql_redo
  2  from v$logmnr_contents
  3  where sql_redo like '%create%' or sql_redo like '%CREATE%';

4. LogMiner does not support index-organized tables, LONG and LOB types, or collection types.

5. LogMiner cannot be used in an MTS (Multi-Threaded Server) environment.

V. Conclusion

LogMiner is a very powerful tool for database administrators (DBAs), and one that is often used in daily work. With its help, we can obtain a great deal of information about database activity. One of its most important uses is to recover a particular change to the database without performing a full recovery. The tool can also be used to monitor or audit user activity; for example, you can use LogMiner to see who modified which data and what the data looked like before the modification. We can also use it to analyze any redo log file generated by Oracle 8 and later. In addition, another very important feature of the tool is that it can analyze the log files of other databases. In short, this tool is very effective for database administrators, and a deep understanding of it and proficiency with it are helpful to every DBA.

Reference documentation:

The Oracle8i LogMiner Utility

PURPOSE

This paper details the mechanics of what LogMiner does, as well as detailing the commands and environment it uses.

SCOPE & APPLICATION

For DBAs requiring further information about LogMiner.

The ability to provide a readable interface to the redo logs has been asked for by customers for a long time. The ALTER SYSTEM DUMP LOGFILE interface has been around for a long time, though its usefulness outside Support is limited. There have been a number of third party products, e.g. BMC's PATROL DB-Logmaster (SQL*Trax as was), which provide some functionality in this area. With Oracle release 8.1 there is a facility in the Oracle kernel to do the same. LogMiner allows the DBA to audit changes to data and perform analysis on the redo to determine trends, aid in capacity planning, point-in-time recovery etc.

RELATED DOCUMENTS

[NOTE:117580.1] ORA-356, ORA-353, & ORA-334 Errors When Mining Logs with Different DB_BLOCK_SIZE

Oracle8i - 8.1 LogMiner:
========================

1. WHAT DOES LOGMINER DO?
=========================

LogMiner can be used against online or archived logs from either the 'current' database or a 'foreign' database. The reason for this is that it uses an external dictionary file to access meta-data, rather than the 'current' data dictionary.

It is important that this dictionary file is kept in step with the database which is being analyzed. If the dictionary used is out of step with the redo, then analysis will be considerably more difficult. Building the external dictionary will be discussed in detail in section 3.

LogMiner scans the log/logs it is interested in, and generates, using the dictionary file meta-data, a set of SQL statements which would have the same effect on the database as applying the corresponding redo record.

LogMiner prints out the 'final' SQL that would have gone against the database. For example:

Insert into Table x Values (5)

Update Table x set COLUMN=newvalue WHERE ROWID=''

Delete from Table x WHERE ROWID='' AND COLUMN=value AND COLUMN=VALUE

We do not actually see the SQL that was issued; rather, we see an executable SQL statement that would have the same EFFECT. Since it is also stored in the same redo record, we also generate the undo column which would be necessary to roll this change out.

For SQL which rolls back, no undo SQL is generated, and the rollback flag is set. An insert followed by a rollback therefore looks like:

REDO         UNDO         ROLLBACK
insert sql   delete sql   0
delete sql                1

Because it operates against the physical redo records, multirow operations are not recorded in the same manner. E.g. DELETE FROM EMP WHERE DEPTNO=30 might delete 100 rows in the SALES department in a single statement; the corresponding LogMiner output would show one row of output per row in the database.

2. WHAT IT DOES NOT DO
======================

1. 'Trace' application SQL - use SQL_Trace/10046.

Since LogMiner only generates low-level SQL, not what was issued, you cannot use LogMiner to see exactly what was being done based on the SQL. What you can see is what user changed what data at what time (see the example query after this list).

2. 'Replicate' an application.

LogMiner does not cover everything. Also, DDL is not supported (the insert into tab$ etc. is, however the create table is not).

3. Access data dictionary SQL in a visible form.

Especially UPDATE USER$ SET PASSWORD=.
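As a sketch of the point in item 1 above, a query along these lines (column names as in v$logmnr_contents) shows which user changed which data at what time:

SELECT timestamp, username, seg_owner, seg_name, operation
FROM v$logmnr_contents
WHERE operation IN ('INSERT', 'UPDATE', 'DELETE');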

Other Known Current Limitations
===============================

LogMiner cannot cope with Objects.

LogMiner cannot cope with Chained/Migrated Rows.

LogMiner produces fairly unreadable output if there is no record of the table in the dictionary file. See below for output.

The database where the analysis is being performed must have a block size at least equal to that of the originating database. See [NOTE:117580.1].

3. FUNCTIONALITY
================

The LogMiner feature is made up of three procedures in the LogMiner (dbms_logmnr) package, and one in the Dictionary (dbms_logmnr_d) package.

These are built by the following scripts: (Run by catproc)

$ORACLE_HOME/rdbms/admin/dbmslogmnrd.sql

$ORACLE_HOME/rdbms/admin/dbmslogmnr.sql

$ORACLE_HOME/rdbms/admin/prvtlogmnr.plb

Since 8.1.6:

$ORACLE_HOME/rdbms/admin/dbmslmd.sql

$ORACLE_HOME/rdbms/admin/dbmslm.sql

$ORACLE_HOME/rdbms/admin/prvtlm.plb

1. dbms_logmnr_d.build

This procedure builds the dictionary file used by the main LogMiner package to resolve object names and column datatypes. It should be generated relatively frequently, since otherwise newer objects will not be recorded.

It is possible to generate a dictionary file from an 8.0 database and use it to analyze Oracle 8.0 redo logs. In order to do this, run "dbmslogmnrd.sql" against the 8.0 database, then follow the procedure as below. All analysis of the logfiles will have to take place while connected to an 8.1 database, since dbms_logmnr cannot operate against Oracle 8.0 because it uses trusted callouts.

Any redo relating to tables which are not included in the dictionary file is dumped RAW. Example: if LogMiner cannot resolve the table and column references, then the following is output (insert statement):

insert into UNKNOWN.objn:XXXX (Col[x], ....) VALUES (HEXTORAW('xxxxxx'), HEXTORAW('xxxxx'))

PARAMETERS
==========

1. The name of the dictionary file you want to produce.

2. The name of the directory where you want the file produced. The directory
   must be writeable by the server, i.e. included in the UTL_FILE_DIR path.

EXAMPLE
=======

BEGIN
  dbms_logmnr_d.build (
    dictionary_filename => 'miner_dictionary.dic',
    dictionary_location => '/export/home/sme81/aholland/testcases/logminer'
  );
END;
/

The dbms_logmnr package actually performs the redo analysis.

2. dbms_logmnr.add_logfile

This procedure registers the logfiles to be analyzed in this session. It must be called once for each logfile. This populates the fixed table X$logmnr_logs (v$logmnr_logs) with a row corresponding to the logfile.

Parameters
==========

1. The logfile to be analyzed.

2. Option:

   DBMS_LOGMNR.NEW (SESSION)  First file to be put into PGA memory.
                              This initialises the v$logmnr_logs table.

   DBMS_LOGMNR.ADDFILE        Adds another logfile to the v$logmnr_logs PGA
                              memory. Has the same effect as NEW if there
                              are no rows there presently.

   DBMS_LOGMNR.REMOVEFILE     Removes a row from v$logmnr_logs.

Example
=======

Include all my online logs for analysis:

BEGIN
  dbms_logmnr.add_logfile (
    '/export/home/sme81/aholland/database/files/redo03.log',
    DBMS_LOGMNR.NEW);
  dbms_logmnr.add_logfile (
    '/export/home/sme81/aholland/database/files/redo02.log',
    DBMS_LOGMNR.ADDFILE);
  dbms_logmnr.add_logfile (
    '/export/home/sme81/aholland/database/files/redo01.log',
    DBMS_LOGMNR.ADDFILE);
END;
/

The full path should be given, though an environment variable is accepted. This is NOT expanded in V$LOGMNR_LOGS.

3. dbms_logmnr.start_logmnr

This procedure populates v$logmnr_dictionary, v$logmnr_parameters and v$logmnr_contents.

Parameters
==========

1. StartScn       Default 0
2. EndScn         Default 0
3. StartTime      Default '01-jan-1988'
4. EndTime        Default '01-jan-2988'
5. DictFileName   Default ''
6. Options        Default 0    Debug flag - uninvestigated as yet

A point to note here is that comparisons are made between the SCNs, the times entered, and the range of values in the file. If the SCN range OR the start/end time range is not wholly contained in this log, then the start_logmnr command will fail with the general error: ORA-01280 Fatal LogMiner Error.
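Before calling start_logmnr, one way to see what range a registered log actually covers (a sketch; low_time/high_time are in the standard v$logmnr_logs definition, but verify against your release) is:

SELECT filename, low_time, high_time FROM v$logmnr_logs;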

4. dbms_logmnr.end_logmnr

This is called with no parameters.

/* THIS IS VERY IMPORTANT FOR SUPPORT */

This procedure MUST be called prior to exiting the session that was performing the analysis. This is because of the way the PGA is used to store the dictionary definitions from the dictionary file and the V$LOGMNR_CONTENTS output.

If you do not call end_logmnr, you will silently get ORA-00600 [723] on logoff. This OERI is triggered because the PGA is bigger at logoff than it was at logon, which is considered a space leak. The main problem from a support perspective is that it is silent, i.e. not signalled back to the user screen, because by then they have logged off.

The way to spot LogMiner leaks is that the trace file produced by the OERI 723 will have a PGA heap dumped with many chunks of type 'Freeable' with a description of "KRVD:alh".

4. OUTPUT
=========

Effectively, the output from LogMiner is the contents of v$logmnr_contents. The output is only visible during the life of the session which runs start_logmnr. This is because all the LogMiner memory is PGA memory, so it is neither visible to other sessions, nor is it persistent. As the session logs off, either dbms_logmnr.end_logmnr is run to clear out the PGA, or an OERI 723 is signalled as described above.

Typically users are going to want to output sql_redo based on queries by timestamp, segment_name or rowid.
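For instance, a typical filter (a sketch built on the column names listed below) might be:

SELECT timestamp, username, sql_redo
FROM v$logmnr_contents
WHERE seg_name = 'EMP'
AND timestamp > to_date('18-SEP-2001', 'DD-MON-YYYY');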

V$logmnr_contents

Name                          Null?    Type
----------------------------- -------- --------------
SCN                                    NUMBER
TIMESTAMP                              DATE
THREAD#                                NUMBER
LOG_ID                                 NUMBER
XIDUSN                                 NUMBER
XIDSLT                                 NUMBER
XIDSQN                                 NUMBER
RBASQN                                 NUMBER
RBABLK                                 NUMBER
RBABYTE                                NUMBER
UBAFIL                                 NUMBER
UBABLK                                 NUMBER
UBAREC                                 NUMBER
UBASQN                                 NUMBER
ABS_FILE#                              NUMBER
REL_FILE#                              NUMBER
DATA_BLK#                              NUMBER
DATA_OBJ#                              NUMBER
DATA_OBJD#                             NUMBER
SEG_OWNER                              VARCHAR2(32)
SEG_NAME                               VARCHAR2(32)
SEG_TYPE                               VARCHAR2(32)
TABLE_SPACE                            VARCHAR2(32)
ROW_ID                                 VARCHAR2(19)
SESSION#                               NUMBER
SERIAL#                                NUMBER
USERNAME                               VARCHAR2(32)
ROLLBACK                               NUMBER
OPERATION                              VARCHAR2(32)
SQL_REDO                               VARCHAR2(4000)
SQL_UNDO                               VARCHAR2(4000)
RS_ID                                  VARCHAR2(32)
SSN                                    NUMBER
CSF                                    NUMBER
INFO                                   VARCHAR2(32)
STATUS                                 NUMBER
PH1_NAME                               VARCHAR2(32)
PH1_REDO                               VARCHAR2(4000)
PH1_UNDO                               VARCHAR2(4000)
PH2_NAME                               VARCHAR2(32)
PH2_REDO                               VARCHAR2(4000)
PH2_UNDO                               VARCHAR2(4000)
PH3_NAME                               VARCHAR2(32)
PH3_REDO                               VARCHAR2(4000)
PH3_UNDO                               VARCHAR2(4000)
PH4_NAME                               VARCHAR2(32)
PH4_REDO                               VARCHAR2(4000)
PH4_UNDO                               VARCHAR2(4000)
PH5_NAME                               VARCHAR2(32)
PH5_REDO                               VARCHAR2(4000)
PH5_UNDO                               VARCHAR2(4000)

SQL> set heading off
SQL> select scn, username, sql_undo from v$logmnr_contents
  2  where seg_name = 'emp';

12134756 scott insert (...) into emp ...
12156488 scott delete from emp where empno = ...
12849455 scott update emp set mgr = ...

This will return the results of an SQL statement without the column headings. The columns that you are really going to want to query are the "sql_undo" and "sql_redo" values, because they give the transaction details and syntax.

5. PLACEHOLDERS
===============

In order to allow users to be able to query directly on specific data values, there are up to five PLACEHOLDERs included at the end of v$logmnr_contents. When enabled, a user can query on the specific BEFORE and AFTER values of a specific field, rather than using a %LIKE% query against the SQL_UNDO/REDO fields. This is implemented via an external file called "logmnr.opt". (See the Supplied Packages manual entry on dbms_logmnr for further details.) The file must exist in the same directory as the dictionary file used, and contains the prototype mappings of the PHx fields to the fields in the table being analyzed.

Example entry
=============

colmap = SCOTT EMP (EMPNO, 1, ENAME, 2, SAL, 3)

In the above example, when a redo record is encountered for the SCOTT.EMP table, the full statement redo and undo information populates the SQL_REDO and SQL_UNDO columns respectively; however, the PH3_NAME, PH3_REDO and PH3_UNDO columns will also be populated with 'SAL' and the after and before values respectively, which means that the analyst can query in the form:

SELECT * FROM V$LOGMNR_CONTENTS
WHERE SEG_NAME = 'EMP'
AND PH3_NAME = 'SAL'
AND PH3_REDO = 1000000;

The returned PH3_UNDO column would contain the value prior to the update. This enables much more efficient queries to be run against the V$LOGMNR_CONTENTS view; and if, for instance, a CTAS was issued to store a physical copy, the column can be indexed.
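As a sketch of that last point (the table and index names are arbitrary, and very wide columns such as PH3_REDO may exceed index key limits depending on block size):

CREATE TABLE logmnr_copy AS SELECT * FROM v$logmnr_contents;
CREATE INDEX logmnr_copy_ph3 ON logmnr_copy (ph3_name);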

Search Words:
=============

Log Miner
