
How to understand the ORACLE AWR report


This article explains how to read an ORACLE AWR report, walking through each section of a sample report and the meaning of its key metrics.

Detailed Analysis of ORACLE AWR report

AWR (Automatic Workload Repository) is a feature introduced in Oracle 10g that automatically collects workload and performance statistics.

AWR generates its report by comparing the statistics collected at two snapshots; the resulting report consists of multiple sections.
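For reference (this is not part of the report itself), an AWR report covering two snapshots is typically generated from SQL*Plus with the awrrpt.sql script, or through the DBMS_WORKLOAD_REPOSITORY package. The DB Id, instance number, and snapshot IDs below are the ones shown in this report's header and would differ on another system:

-- Interactive: prompts for report type, number of days, snapshot range and file name
@?/rdbms/admin/awrrpt.sql

-- Programmatic alternative (DB Id 1314098396, instance 1, snapshots 2678 to 2680)
select output from table(dbms_workload_repository.awr_report_text(1314098396, 1, 2678, 2680));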

WORKLOAD REPOSITORY report for

DB Name   DB Id        Instance  Inst num  Release     RAC  Host
ICCI      1314098396   ICCI1     1         10.2.0.3.0  YES  HPGICCI1

             Snap Id  Snap Time           Sessions  Cursors/Session
Begin Snap:  2678     25-Dec-08 14:04:50  24        1.5
End Snap:    2680     25-Dec-08 15:23:37  26        1.5
Elapsed:              78.79 (mins)
DB Time:              11.05 (mins)

DB Time does not include the time consumed by Oracle background processes. If DB Time is far smaller than the Elapsed time, the database is relatively idle.

DB time = CPU time + wait time (excluding idle waits; non-background processes only)

In other words, DB time records the time the server spent on database operations (by non-background processes) and on non-idle waits.

DB time = cpu time + all of nonidle wait event time

Over the 79-minute elapsed period (spanning three snapshots), the database consumed 11 minutes of DB time. The RDA data shows the system has 8 logical CPUs (4 physical CPUs).

The average DB time per CPU is therefore about 1.4 minutes, and CPU utilization is only about 2% (1.4 / 79), which shows that the system is under very little load.

The following two reports illustrate the point:

Report A:

Snap Id Snap Time Sessions Curs/Sess

--

Begin Snap: 4610 24-Jul-08 22:00:54 68 19.1

End Snap: 4612 24-Jul-08 23:00:25 17 1.7

Elapsed: 59.51 (mins)

DB Time: 466.37 (mins)

Report B:

Snap Id Snap Time Sessions Curs/Sess

--

Begin Snap: 3098 13-Nov-07 21:00:37 39 13.6

End Snap: 3102 13-Nov-07 22:00:15 40 16.4

Elapsed: 59.63 (mins)

DB Time: 19.49 (mins)

The server is an AIX system with 4 dual-core CPUs (8 cores):

/sbin> bindprocessor -q

The available processors are: 0 1 2 3 4 5 6 7

Start with Report A. The snapshot interval is roughly 60 minutes, so the 8 CPUs provide about 60 * 8 = 480 minutes of CPU time in total, while DB Time is 466.37 minutes.

That is, 466.37 minutes were spent handling Oracle non-idle waits and operations (such as logical reads), background processes excluded.

In other words, roughly 466.37 / 480, or about 97%, of the available CPU time went into processing Oracle work.

Report B also covers about 60 minutes, but only 19.49 / 480, or about 4%, of the CPU time was spent processing Oracle work.

Clearly, the average load on the server in Report B is very low.

From the Elapsed time and DB Time in an AWR report you can get a rough idea of the database load.

However, for batch systems the database workload is concentrated in particular time windows. If the snapshot period does not fall within such a window,

or spans so long that it includes a lot of idle time, the analysis is meaningless.

This shows how critical it is to choose an analysis period that actually represents the performance problem.

Report Summary

Cache Sizes
                    Begin   End
Buffer Cache:       3344M   3344M    Std Block Size:  8K
Shared Pool Size:   704M    704M     Log Buffer:      14352K

This shows the size of each SGA area (after any AMM resizing), which can be compared with the initialization parameter values.

The shared pool mainly contains the library cache and the dictionary cache.

The library cache stores recently parsed (or compiled) SQL, PL/SQL, Java classes, and so on.

The dictionary cache stores recently referenced data dictionary entries.

A cache miss in the library cache or dictionary cache is much more expensive than a miss in the buffer cache.

Therefore, the shared pool should be sized so that recently used objects can stay cached.

Load Profile
                           Per Second   Per Transaction
Redo size:                 918805.72    775912.72
Logical reads:             3521.77      2974.06
Block changes:             1817.95      1535.22
Physical reads:            68.26        57.64
Physical writes:           362.59       306.20
User calls:                326.69       275.88
Parses:                    38.66        32.65
Hard parses:               0.03         0.03
Sorts:                     0.61         0.51
Logons:                    0.01         0.01
Executes:                  354.34       299.23
Transactions:              1.18

% Blocks changed per Read:   51.62    Recursive Call %:  51.72
Rollback per transaction %:  85.49    Rows per Sort:     #

The load profile is most useful when compared against baseline data: if the per-second and per-transaction figures change little, the application workload is stable.

A single report only shows the current load, and most of these figures have no universally "correct" value. However,

more than 1-2 logons per second, more than 100 hard parses per second, or more than 300 total parses per second suggest that there may be a contention problem.

Redo size: redo generated per second / per transaction (in bytes); indicates how frequently data changes and how heavy the database workload is.

Logical reads: blocks read logically per second / per transaction. Logical Reads = Consistent Gets + DB Block Gets (see the v$sysstat sketch after this list).

Block changes: blocks modified per second / per transaction.

Physical reads: blocks read physically per second / per transaction.

Physical writes: blocks written physically per second / per transaction.

User calls: user calls per second / per transaction.

Parses: the number of SQL parses per second / per transaction, counting fast parses, soft parses, and hard parses together.

More than 300 soft parses per second suggests the application is inefficient; consider adjusting session_cached_cursors.

Here, a fast parse means a direct hit in the PGA (available when session_cached_cursors=n is set),

a soft parse means a hit in the shared pool, and a hard parse means neither is hit.

Hard parses: too many hard parses indicate poor SQL reuse.

More than 100 hard parses per second may mean that bind variables are not used well, or that the shared pool is not configured sensibly.

In that case you can set cursor_sharing=similar|force (the default is exact). Note, however, that similar has known bugs and may lead to poor execution plans.

Sorts: sorts per second / per transaction.

Logons: logons per second / per transaction.

Executes: SQL executions per second / per transaction.

Transactions: transactions per second, reflecting how heavy the database workload is.

% Blocks changed per Read: the proportion of logical reads done in order to modify blocks; the percentage of blocks changed per logical read.

Recursive Call %: the percentage of all calls that are recursive; this is higher when there is a lot of PL/SQL.

Rollback per transaction %: the rollback rate per transaction. Check whether it is high, because rollbacks consume a lot of resources; a high rollback rate

may mean the application performs too many invalid operations, and heavy rollback can also cause contention on undo blocks.

The formula is: Round(user rollbacks / (user commits + user rollbacks), 4) * 100% (a query form of this is sketched after this list).

Rows per Sort: rows sorted per sort.
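Most of the Load Profile counters above come from v$sysstat deltas between the two snapshots. A minimal sketch, assuming access to v$sysstat, of checking the raw statistics behind Logical reads and the rollback rate formula just given (values are since instance startup, not between snapshots):

-- Logical reads = consistent gets + db block gets
select name, value from v$sysstat
 where name in ('session logical reads', 'consistent gets', 'db block gets');

-- Rollback rate, following round(user rollbacks / (user commits + user rollbacks), 4) * 100
select round(r.value / (c.value + r.value), 4) * 100 as rollback_pct
  from (select value from v$sysstat where name = 'user commits') c,
       (select value from v$sysstat where name = 'user rollbacks') r;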

Note:

Hard and soft parsing of Oracle

When it comes to soft parsing (soft parse) and hard parsing (hard parse), we have to look at how Oracle handles SQL.

When you submit a SQL statement to Oracle, it processes the statement in several steps before executing it and returning results:

1. Syntax checking (syntax check)

Check whether the SQL statement is syntactically correct.

2. Semantic checking (semantic check)

Such as checking whether the access object in the sql statement exists and whether the user has the corresponding permissions.

3. Parse the SQL statement (parse)

The internal algorithm is used to parse the sql to generate the parsing tree (parse tree) and the execution plan (execution plan).

4. Execute sql and return the result (execute and return)

Among them, soft and hard parsing takes place in the third process.

Oracle uses the internal hash algorithm to get the hash value of the sql, and then looks in library cache to see if the hash value exists.

If it exists, the new SQL is compared with the cached one;

if they are identical, the existing parse tree and execution plan are reused and the optimizer work is skipped. This is soft parsing.

If either of the two conditions above does not hold, the optimizer has to build a parse tree and generate an execution plan. This is hard parsing.

Creating parsing trees and generating execution plans are expensive actions for sql execution, so hard parsing should be avoided and soft parsing should be used as much as possible.
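To make the soft-parse discussion above concrete, here is a hedged sketch contrasting literal SQL with bind variables, together with the session/instance parameters mentioned earlier. The table name, values, and the specific settings are illustrative, not taken from the original report:

-- Literal values: each distinct statement gets its own hash, forcing a hard parse
select * from orders where order_id = 1001;
select * from orders where order_id = 1002;

-- Bind variable: one shared cursor, later executions are soft (or fast) parses
variable v_id number
exec :v_id := 1001
select * from orders where order_id = :v_id;

-- Cache soft-parsed cursors in the session (enables "fast parse" hits in the PGA)
alter session set session_cached_cursors = 100;

-- Last resort when the application cannot be changed (see the cursor_sharing note above)
alter system set cursor_sharing = force;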

Instance Efficiency Percentages (Target 100%)

Buffer Nowait %:              100.00    Redo NoWait %:      100.00
Buffer Hit %:                  98.72    In-memory Sort %:    99.86
Library Hit %:                 99.97    Soft Parse %:        99.92
Execute to Parse %:            89.09    Latch Hit %:         99.99
Parse CPU to Parse Elapsd %:    7.99    % Non-Parse CPU:     99.95

This section shows the hit ratios of Oracle's key memory areas and the efficiency of other instance operations. Buffer Hit Ratio is also called Cache Hit Ratio, and

Library Hit ratio is also known as Library Cache Hit ratio.

Like the Load Profile section, there is no so-called "correct" value in this section, but can only be judged according to the characteristics of the application.

In a DSS environment that uses direct reads to execute large parallel queries, 20% of the Buffer Hit Ratio is acceptable, which is totally unacceptable for an OLTP system.

According to Oracle's experience, for OLTP systems, the Buffer Hit Ratio ideal should be above 90%.

Buffer Nowait is the percentage of buffer gets satisfied without waiting, i.e. the non-wait ratio when obtaining buffers from the buffer cache.

It should generally be greater than 99%; otherwise there may be contention, which can be confirmed further in the wait events later in the report.

Buffer hit represents the rate at which the process finds blocks from memory, and it is more important to monitor this value for significant changes than the value itself.

For general OLTP systems, if this value is less than 80%, more memory should be allocated to the database.

The data block hit rate in the buffer cache should usually be above 95%; below 95%, important parameters need adjusting, and below 90% you may need to increase db_cache_size.

A high hit rate does not necessarily mean that the performance of the system is optimal. For example, a large number of non-selective indexes are accessed frequently, which will create the illusion of a high hit rate (a large number of db file sequential read).

However, a relatively low hit rate will generally have an impact on the performance of the system and need to be adjusted. A sudden change in hit rate is often a bad message.

If the hit ratio suddenly increases, you can check the top buffer get SQL to see the statements and indexes that cause a lot of logical reads

If the hit ratio suddenly decreases, you can check the top physical reads SQL and check for statements that produce a large number of physical reads, mainly those that do not use the index or where the index is deleted.

Redo NoWait is the percentage of redo entries written to the log buffer without waiting. If it is too low (a 90% threshold is a common reference), consider increasing LOG_BUFFER.

When the redo buffer fills to about 1M its contents must be written to the redo log file, so with a redo buffer larger than 1M it is unlikely that sessions wait for buffer space to be allocated.

A redo buffer of around 2M, which is common today, is not a significant burden relative to total memory, so it does not need to be set very large.

Library Hit is the ratio of times Oracle finds an already parsed SQL or PL/SQL statement in the library cache when the application issues a SQL statement or calls a stored procedure.

Oracle checks Library Cache to determine if there is a parsed version, and if so, Oracle immediately executes the statement; if not, Oracle parses the statement and assigns it a shared SQL zone in Library Cache.

Low library hit ratio can lead to too much parsing, increase CPU consumption, and degrade performance.

If the library hit ratio is less than 90%, you may need to increase the shared pool area.

Generally speaking, the hit rate of statements in the shared pool should be kept above 95%; otherwise consider increasing the shared pool, using bind variables, or adjusting parameters such as cursor_sharing.

Latch Hit: a latch is a lock protecting memory structures; it can be thought of as the permission a server process needs in order to access an in-memory data structure.

Latch Hit should be above 99%; otherwise it indicates shared pool latch contention, possibly caused by unshared SQL or a library cache that is too small. It can be addressed with bind variables or by enlarging the shared pool.

Make sure it is > 99%, otherwise there are serious performance problems. When this value shows a problem, the later wait event and latch analysis sections help locate and solve it.

Parse CPU to Parse Elapsd: parse CPU time divided by parse elapsed time (CPU time plus time spent waiting for resources during parsing); the higher the better.

The formula is: Parse CPU to Parse Elapsd % = 100 * (parse time cpu / parse time elapsed).

If the ratio is 100%, parsing spent no time waiting: the elapsed parse time equals the parse CPU time.

% Non-Parse CPU: the share of CPU time spent executing SQL rather than parsing it. If it is too low, parsing is taking too much time.

The formula is: % Non-Parse CPU = round(100 * (1 - PARSE_CPU / TOT_CPU), 2). If this value is small, parsing consumes too much CPU time.

If TOT_CPU is high compared with PARSE_CPU, the ratio approaches 100%, which is good: most of the CPU work goes into executing queries rather than parsing them.

Execute to Parse: the ratio of executions to parses; it is high when SQL is reused well. The higher the value, the more often a statement is re-executed after a single parse.

The formula is: Execute to Parse = 100 * (1 - Parses / Executions).

In this report the value is 89.09%, i.e. roughly one parse for every nine executions. If a system has Parses > Executions, the ratio can even drop below 0.
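As a cross-check, the same ratio can be computed (since instance startup rather than between snapshots) from v$sysstat using the formula above; a minimal sketch:

select round(100 * (1 - p.value / e.value), 2) as execute_to_parse_pct
  from (select value from v$sysstat where name = 'parse count (total)') p,
       (select value from v$sysstat where name = 'execute count') e;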

Shared Pool Statistics
                 Begin   End
Memory Usage %:  79.99   73.52

Memory Usage %: for a database that has been running for a while, shared pool memory usage should be stable between 75% and 90%.

If it is too low, shared pool memory is being wasted; if it is above 90%, there is contention in the shared pool and memory is insufficient.

This number should stay at 75%-90% over the long term. If the percentage is too low, the shared pool is set too large, causing extra management overhead that can degrade performance under some conditions.

If it is too high, components are aged out of the shared pool, and SQL statements that are executed again will have to be hard parsed.

In a correctly sized system, shared pool utilization stays in the range of 75% to a little under 90%.

% SQL with executions > 1: the proportion of SQL statements executed more than once. If this value is too low, the application should use more bind variables to avoid excessive SQL parsing.

In a system that runs in cycles, this number has to be interpreted carefully: during one part of the day a different set of SQL statements is executed than during another part.

The shared pool will then hold a set of SQL statements that were not executed during the observation window simply because the jobs that run them were not active then. The number approaches 100% only when the system runs the same set of SQL statements continuously.

% Memory for SQL w/exec > 1: the percentage of memory consumed by SQL executed more than once.

It measures how much memory is used by frequently executed SQL statements compared with SQL executed only once.

This number is generally very close to % SQL with executions > 1, unless some queries consume unusually large amounts of memory.

In a steady state you will see roughly 75%-85% of the shared pool being used over time. If the time window of the report is long enough to cover all workload cycles,

the percentage of SQL statements executed more than once should be close to 100%. This statistic is affected by the interval between snapshots and can be expected to rise as that interval grows.

Summary: the instance efficiency statistics give a general impression, but they cannot by themselves determine how well data operations perform. Diagnosing current performance problems

relies mainly on the wait events that follow. The two parts can be understood this way: hit-ratio statistics help us identify and predict problems the system may develop, so we

can plan ahead, while wait events show performance problems the database already has and that need to be fixed, i.e. they are remedial after the fact.

Top 5 Timed Events

Event                           Waits   Time (s)  Avg Wait (ms)  % Total Call Time  Wait Class
CPU time                                515                      77.6
SQL*Net more data from client   27319   64        2              9.7                Network
log file parallel write         5497    47        9              7.1                System I/O
db file sequential read         7900    35        4              5.3                User I/O
db file parallel write          4806    34        7              5.1                System I/O

This is the last part of the report summary and shows the five most significant waits in the system, in descending order of waiting time. When tuning, we always want to work on whatever has the largest effect,

so this is where we decide what to do next.

For example, if 'buffer busy wait' is a serious wait event, we should go on to the Buffer Wait and File/Tablespace IO sections of the report

to identify which files are causing the problem. If the most serious waits are I/O events, we should study the SQL sections ordered by physical reads to identify which statements are

performing large amounts of I/O, and look for files with slow response times in the File I/O and Tablespace I/O sections. If latch waits are high, we need to look at the detailed latch

statistics to identify which latches are causing the problem.

In a system with good performance, CPU time should appear at the top of the Top 5; otherwise the system is spending most of its time waiting.

Here, log file parallel write is a relatively significant wait, accounting for about 7% of the total call time.

In general, CPU time is listed first in a database that has no problems.

For the full list of wait events, see the Wait Events section of this report.

RAC Statistics
                      Begin  End
Number of Instances:  2      2

Global Cache Load Profile
                                  Per Second  Per Transaction
Global Cache blocks received:     4.16        3.51
Global Cache blocks served:       5.97        5.04
GCS/GES messages received:        408.47      344.95
GCS/GES messages sent:            258.03      217.90
DBWR Fusion writes:               0.05        0.05
Estd Interconnect traffic (KB):   211.16

Global Cache Efficiency Percentages (Target local+remote 100%)
Buffer access - local cache %:   98.60
Buffer access - remote cache %:   0.12
Buffer access - disk %:           1.28

Global Cache and Enqueue Services - Workload Characteristics
Avg global enqueue get time (ms):                      0.1
Avg global cache cr block receive time (ms):           1.1
Avg global cache current block receive time (ms):      0.8
Avg global cache cr block build time (ms):             0.0
Avg global cache cr block send time (ms):              0.0
Global cache log flushes for cr blocks served %:       3.5
Avg global cache cr block flush time (ms):             3.9
Avg global cache current block pin time (ms):          0.0
Avg global cache current block send time (ms):         0.0
Global cache log flushes for current blocks served %:  0.4
Avg global cache current block flush time (ms):        3.0

Global Cache and Enqueue Services - Messaging Statistics
Avg message sent queue time (ms):          0.0
Avg message sent queue time on ksxp (ms):  0.3
Avg message received queue time (ms):      0.5
Avg GCS message process time (ms):         0.0
Avg GES message process time (ms):         0.0
% of direct sent messages:                 14.40
% of indirect sent messages:               77.04
% of flow controlled messages:             8.56

Main Report

Wait Events Statistics

SQL Statistics

Instance Activity Statistics

IO Stats

Buffer Pool Statistics

Advisory Statistics

Wait Statistics

Undo Statistics

Latch Statistics

Segment Statistics

Dictionary Cache Statistics

Library Cache Statistics

Memory Statistics

Streams Statistics

Resource Limit Statistics

Init.ora Parameters

Wait Events Statistics

Time Model Statistics

Wait Class

Wait Events

Background Wait Events

Operating System Statistics

Service Statistics

Service Wait Class Stats

Back to Top

Oracle wait events are an important basis and indicator for assessing the health of an Oracle database. There are two types of wait events:

idle wait events and non-idle wait events. With TIMED_STATISTICS = TRUE, wait events are sorted by wait time; with FALSE they are sorted by number of waits.

TIMED_STATISTICS = TRUE must be set for the session while statspack is running, otherwise the statistics will be distorted.

Idle wait event means that oracle is waiting for some kind of work, so you don't need to pay too much attention to this event when diagnosing and optimizing the database.

Non-idle wait events are specific to oracle activities, which refer to database tasks or waits that occur while the application is running

These wait events are what we should pay attention to when we are adjusting the database.

For common wait events, the instructions are as follows:

1) db file scattered read (scattered reads of data files)

This event is usually related to full table scans or fast full index scans. Blocks read by a full table scan are, for performance reasons, read into non-contiguous areas of memory, so they are scattered into the Buffer Cache. A large amount of this wait may mean that an index is missing or that no suitable index exists (optimizer_index_cost_adj can be adjusted); it may also be perfectly normal, because a full table scan can be more efficient than an index scan. When these waits appear, check whether the full table scans are necessary. Because blocks from full table scans are placed at the cold end of the LRU (Least Recently Used) list, small tables that are accessed frequently can be cached in memory to avoid repeated reads. When this wait event is significant, it can be diagnosed together with the v$session_longops dynamic performance view, which records operations that run for a long time (longer than 6 seconds); many of them are likely to be full table scans, and in any case this information is worth our attention.

About the parameter OPTIMIZER_INDEX_COST_ADJ=n: this parameter is a percentage value with a default of 100, and can be understood as the ratio FULL SCAN COST / INDEX SCAN COST; when n% * INDEX SCAN COST is lower than the full scan cost, the optimizer favors the index.

buffer busy waits: the parameters of this wait event can be queried as follows:

select parameter1, parameter2, parameter3 from v$event_name where name = 'buffer busy waits';

Before Oracle 10g:
PARAMETER1  PARAMETER2  PARAMETER3
file#       block#      id

Oracle 10g:
PARAMETER1  PARAMETER2  PARAMETER3
file#       block#      class#

When diagnosing buffer busy waits events, it is useful to get the following information:

1. Get the wait reason code of the buffer busy waits event, which can be obtained from the event's p3 parameter value.

2. Get the SQL statement that produced the event, which can be obtained with the following query:

select sql_text from v$sql t1, v$session t2, v$session_wait t3
 where t1.address = t2.sql_address and t1.hash_value = t2.sql_hash_value
   and t2.sid = t3.sid and t3.event = 'buffer busy waits';

3. Get the type and segment of the waiting block, which can be obtained by querying as follows:

select 'Segment Header' class, a.segment_type, a.segment_name, a.partition_name
  from dba_segments a, v$session_wait b
 where a.header_file = b.p1 and a.header_block = b.p2 and b.event = 'buffer busy waits'
union
select 'Freelist Groups' class, a.segment_type, a.segment_name, a.partition_name
  from dba_segments a, v$session_wait b
 where a.header_file = b.p1
   and b.p2 between a.header_block + 1 and (a.header_block + a.freelist_groups)
   and a.freelist_groups > 1 and b.event = 'buffer busy waits'
union
select a.segment_type || ' block' class, a.segment_type, a.segment_name, a.partition_name
  from dba_extents a, v$session_wait b
 where a.file_id = b.p1 and b.p2 between a.block_id and a.block_id + a.blocks - 1
   and b.event = 'buffer busy waits'
   and not exists (select 1 from dba_segments
                    where header_file = b.p1 and header_block = b.p2);

The first part of the query: if the waiting block type is a segment header, match the p1 and p2 parameters of the buffer busy waits event directly against the header_file and header_block columns of the dba_segments view to find the segment name and segment type being waited on, and tune accordingly.

The second part of the query: if the waiting block type is freelist groups, the segment name and type can also be found in dba_segments. Note that p2 lies between the segment's header_block + 1 and header_block + freelist_groups, and that freelist_groups is greater than 1.

The third part of the query: if the waiting block is an ordinary data block, join p1 and p2 with dba_extents to get the segment name and segment type the block belongs to.

For different types of waiting blocks, we take different approaches:

1.data segment header:

Processes usually access the data segment header frequently for two reasons: obtaining or modifying process freelist information, and extending the high water mark. In the first case, frequent access to process freelist information leads to freelist contention; we can increase the freelists or freelist groups storage parameters of the segment. If processes must modify the freelist often because data blocks frequently move in and out of the freelist, a larger gap can be set between pctfree and pctused so that blocks do not oscillate in and out of the freelist. In the second case, segment space is consumed quickly and the configured next extent is too small, so the high water mark is extended frequently; the solution is to increase the segment's next extent storage parameter, or to create the tablespace with a uniform extent size (extent size uniform).

2.data block:

One or some data blocks are read and written by multiple processes at the same time and become hot chunks. This problem can be solved by the following ways:

(1) reduce the concurrency of the program. If the parallel query is used in the program, reduce the parallel degree, so as to prevent multiple parallel slave from accessing the same data object at the same time, resulting in waiting to degrade performance.

(2) adjust the application so that it can read fewer data blocks to get the required data, reducing buffer gets and physical reads

(3) reduce the number of records in the same block so that records are distributed in more data blocks, which can be achieved in several ways: you can adjust the pctfree value of segment objects, you can rebuild segment into table spaces with smaller block size, and you can use alter table minimize records_per_block statements to reduce the number of records in each block.

(4) if the hot block is an index on a column such as a monotonically increasing id, the index can be rebuilt as a reverse key index to scatter the data and spread out the hot blocks, as sketched below.
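A minimal sketch of the reverse key index approach in (4) above; the table, column, and index names are hypothetical:

-- Rebuild an existing hot index as a reverse key index to scatter adjacent key values
alter index idx_orders_id rebuild reverse;

-- Or create it as a reverse key index from the start
create index idx_orders_id on orders (order_id) reverse;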

3.undo segment header:

The reason for undo segment header contention is that there is not enough undo segment in the system and enough undo segment needs to be added. According to the management method of undo segment, if you are in manual management mode, you need to modify rollback_segments initialization parameters to increase rollback segment. If you are in automatic management mode, you can reduce the value of transactions_per_rollback_segment initialization parameters to make oracle automatically increase the number of rollback segment.

4.undo block:

Undo block contention is due to the fact that the data is read and written at the same time in the application, and the reading process needs to go to undo segment to obtain consistent data. The solution is to stagger the time for the application to modify data and query a large amount of data.

Summary: buffer busy waits is one of the more complex Oracle wait events and has many causes. Before 10g it is diagnosed by comparing the p3 parameter (reason code) with the cause-code table provided by Oracle; from 10g onwards it is analyzed by the class of block being waited for, combined with the specific SQL involved, and addressed with the corresponding measures.

4) latch free: this needs attention when the latch miss rate is higher than 0.5%. The details are covered later in the Latch Activity for DB section.

Latch is a low-level queuing mechanism used to protect shared memory structures in SGA. Latch is like a memory lock that is quickly acquired and released. Used to prevent shared memory structures from being accessed by multiple users at the same time. If latch is not available, the latch release failure (latch free miss) is logged. There are two types related to latches:

■ Immediate.

■ Willing to wait.

If a process tries to acquire a latch in immediate mode and the latch is already held by another process, if the latch is not available, the process will not wait to acquire the latch. It will continue to perform another operation.

Most latch problems are related to the following:

failure to use bind variables (library cache latch), redo generation issues (redo allocation latch), buffer cache contention (cache buffers lru chain), and hot blocks in the buffer cache (cache buffers chains).

It is often said that if you want to design a system that will fail, ignoring bind variables is enough on its own; for systems with a large variety of SQL, the consequences of not using bind variables are extremely serious.

Some latch waits are related to bugs, so pay attention to the relevant bug announcements and patch releases on Metalink. This issue should be investigated when latch miss ratios are greater than 0.5%.

Oracle's latch mechanism is competitive, similar in spirit to CSMA/CD in networking: all user processes compete for the latch. For a willing-to-wait latch, if a process does not get the latch on the first attempt it waits and tries again; if it still has not obtained the latch after _spin_count attempts, the process sleeps for a specified time, then wakes up and repeats the sequence. In 8i/9i the default is _spin_count=2000.

If the SQL statements cannot be adjusted, Oracle provides (from version 8.1.6) the initialization parameter CURSOR_SHARING, which can force bind variables on the server side by setting CURSOR_SHARING = force. Setting this parameter can have side effects; for Java programs there are related bugs, so check the Metalink bug announcements for your specific application.

* Latch problem and possible solution

-

* Library Cache and Shared Pool latches (use bind variables, adjust shared_pool_size)

These latches are taken whenever SQL, PL/SQL stored procedures, packages, functions, or triggers are executed; parse operations use them heavily.

* Redo Copy (increase the _LOG_SIMULTANEOUS_COPIES parameter)

Redo copy Latch is used to copy redo records from PGA to the redo log buffer.

* Redo Allocation (minimize redo generation and avoid unnecessary commits)

This latch is used to allocate space in the redo log buffer; NOLOGGING can be used to reduce contention for it.

* Row Cache Objects (increase shared pool)

Data dictionary competition. Excessive parsing.

* Cache Buffers Chains (_DB_BLOCK_HASH_BUCKETS should be increased or set to a prime number)

Overheated data blocks cause Latch competition in the memory buffer chain.

* Cache Buffers Lru Chain (adjust SQL, set DB_BLOCK_LRU_LATCHES, or use multiple buffer pools)

The memory buffer LRU chain Latch is used when scanning the LRU (least recently used) chain of all memory buffer blocks. Too small memory buffers, too large memory buffer throughput, too many sorting operations in memory, and DBWR speed not keeping up with workloads can cause competition for this Latch.

5) Enqueue: an enqueue is a lock that protects shared resources from concurrent DML. Enqueues use a FIFO policy (note that latches do not). Three common enqueue types are the ST, HW, and TX4 enqueues.

ST enqueue waits are mainly caused by space management and allocation in dictionary-managed tablespaces. Solutions: 1) convert dictionary-managed tablespaces to locally managed ones (a sketch follows after the TX4 item below); 2) pre-allocate extents, or set a larger next extent for the problematic dictionary-managed tablespace.

HW Enqueue is for HWM of segment. When such a wait occurs, it can be solved by manually assigning extents.

TX4 enqueue waits are the most common. Three situations usually cause them: 1) duplicate values inserted into a unique index; the waiting session must wait for the other session to commit or roll back to release the enqueue. 2) multiple updates to the same bitmap index fragment; since one bitmap index fragment can cover many rowids, one user updating it can lock it and make other updating users wait. The fix is the same as above. 3) multiple users updating the same data block at the same time (possibly different rows) when no free ITL slot is available, which produces a block-level wait. Solutions: increase the table's INITRANS to create more ITL slots; or increase the table's PCTFREE so Oracle can grow the ITL as needed in that free space; or use a smaller block size so each block holds fewer rows and conflicts are less likely. A sketch of these changes follows.
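Hedged sketches of two of the fixes above; the tablespace, file path, table and index names, and values are illustrative. For the ST enqueue, a dictionary-managed tablespace can be migrated to local management; for TX enqueue waits caused by ITL shortage, INITRANS/PCTFREE can be raised (new values only apply to newly formatted blocks, so a move/rebuild is needed for existing data):

-- ST enqueue: migrate a dictionary-managed tablespace to local management
exec dbms_space_admin.tablespace_migrate_to_local('USERS_DICT');

-- Or create new tablespaces locally managed with uniform extents
create tablespace users_lmt datafile '/u01/oradata/users_lmt01.dbf' size 500m
  extent management local uniform size 10m;

-- TX enqueue (ITL shortage): allow more concurrent transactions per block
alter table orders initrans 10;
alter table orders pctfree 20;
alter table orders move;            -- reformat existing blocks so the new settings take effect
alter index idx_orders_id rebuild;  -- indexes must be rebuilt after a table move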


6) Free Buffer waits: this wait event indicates that the system is waiting for a free buffer in memory, meaning there are currently no free buffers available. If the application is well designed, SQL is written properly, and bind variables are fully used, this wait may indicate that the Buffer Cache is too small and DB_CACHE_SIZE should be increased. It may also indicate that DBWR cannot write fast enough or that there is serious disk contention; in that case consider increasing checkpoints, using more DBWR processes, or adding physical disks to spread the load and balance I/O.

7) log file single write: this event relates only to writing the log file header block, and usually occurs when a new group member is added or the sequence number advances. The header block is written individually because it contains the file number, which differs for each file. Updating the log file header is done in the background, rarely causes waits, and does not need much attention.

8) log file parallel write: writing redo records from the log buffer to the redo log files; this refers mainly to regular writes (as opposed to log file sync). If the log group has more than one member, the flush of the log buffer writes to them in parallel, and this wait event can occur. Even though the writes are issued in parallel, the operation is not complete until all of the I/Os have finished (so if the disks support asynchronous I/O or I/O slaves are used, this wait can appear even with a single redo log member). Compared with the log file sync time, this event measures the cost of writing the log files and is sometimes called the synchronous cost rate. To reduce this wait, put the redo logs on fast disks, avoid RAID 5 where possible, make sure tablespaces are not left in hot backup mode, and keep redo logs and data files on different disks.

9) log file sync: when a user commits or rolls back, LGWR flushes the session's redo records from the log buffer to the log file, and the user's process must wait for that flush to complete. If this wait affects database performance, the application's commit frequency needs to change: commit more rows per transaction, or move the redo log files to separate, faster physical disks to improve I/O performance.

When a user commits or rolls back data, LGWR writes the session's redo from the log buffer to the redo log, and the log file sync must wait for this to finish successfully. To reduce this wait, try committing more records at a time (frequent commits add overhead), place redo logs on faster disks, or alternate redo logs across different physical disks to reduce the effect of archiving on LGWR.

Generally speaking, do not use RAID 5 (especially software RAID 5); it imposes a large performance penalty on write-intensive systems. Consider file-system direct I/O or raw devices, which can improve write performance.

10) log buffer space: redo is being generated in the log buffer faster than LGWR can write it to the redo files. You can increase the log file size, increase the log buffer size, or use faster disks for the redo logs.

This wait occurs when redo is generated faster than LGWR can write, or when a log switch is too slow. It usually indicates that the redo log buffer is too small; to fix it, consider enlarging the log files or the log buffer.

Another possible cause is a disk I/O bottleneck, in which case faster disks can be used. Where possible, raw devices can be used for the log files to improve write efficiency. As a minimum standard, do not put log files and data files on the same disks: log files are written far more than they are read, and separating them improves performance.

11) log file switch: usually means archiving is not fast enough; all commit requests must wait for the log file switch to complete. Log file switch has two main sub-events:

log file switch (archiving needed) usually occurs when the log groups have cycled around before the first log has finished archiving. It may indicate an I/O problem. Solutions: ① add log files or log groups; ② move archive destinations to faster disks; ③ adjust log_archive_max_processes.

log file switch (checkpoint incomplete) occurs when, after a log group fills, LGWR tries to switch back to the first log file but the database has not yet written out all the dirty blocks covered by that file (i.e. its checkpoint is not complete). This usually means DBWR writes too slowly or there is an I/O problem. To fix it, consider adding DBWR processes, adding log groups or enlarging the log files, or increasing the checkpoint frequency. A sketch of the log group and archiver adjustments follows.
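A hedged sketch of the log-group and archiver adjustments just mentioned; the group number, file paths, size, and process count are illustrative:

-- Add a redo log group with larger, multiplexed members
alter database add logfile group 5
  ('/u01/oradata/redo05a.log', '/u02/oradata/redo05b.log') size 512m;

-- Allow more archiver processes if archiving cannot keep up
alter system set log_archive_max_processes = 4;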

12) db file parallel write: occurs when DBWR writes to files in parallel. Solution: improve I/O performance.

When handling this event, note that:

1) the db file parallel write event belongs only to the DBWR process;

2) a slow DBWR can affect foreground processes;

3) a large amount of db file parallel write wait time is most likely caused by an I/O problem. After confirming that the OS supports asynchronous I/O, check that the disk_asynch_io parameter is TRUE. You can also increase the number of DBWR processes via db_writer_processes, as long as it does not exceed the number of CPUs (see the sketch below).
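A small sketch of the checks and change mentioned in 3); the parameter names are standard, the value chosen is illustrative, and db_writer_processes is static, so it only takes effect after a restart:

-- Verify asynchronous I/O support and current writer configuration
select name, value from v$parameter
 where name in ('disk_asynch_io', 'filesystemio_options', 'db_writer_processes');

-- Increase the number of DBWR processes (keep it no higher than the CPU count)
alter system set db_writer_processes = 4 scope = spfile;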

The DBWR process performs all database writes from the SGA. When a write starts, DBWR compiles a batch of dirty blocks and issues system write calls to the operating system. DBWR looks for blocks to write at various times: every 3 seconds, when a foreground process needs a buffer cleared, at checkpoints, and when the _DB_LARGE_DIRTY_QUEUE, _DB_BLOCK_MAX_DIRTY_TARGET, or FAST_START_MTTR_TARGET thresholds are reached, among others.

Although user sessions never wait on db file parallel write themselves, that does not mean they are unaffected by it. Slow DBWR write performance can make foreground sessions wait on write complete waits or free buffer waits. DBWR write performance is affected by the type of I/O (synchronous or asynchronous), the storage (raw devices or a file system), the database layout, and the I/O subsystem configuration. The key statistics to look at are the system-wide TIME_WAITED and AVERAGE_WAIT of the db file parallel write, free buffer waits, and write complete waits events, which are related to each other.

If the average wait time of db file parallel write exceeds 10 centiseconds (100 ms), it usually indicates slow I/O throughput. There are many ways to improve it; the main one is to use the right type of I/O. If the data files are on raw devices and the platform supports asynchronous I/O, use asynchronous writes; if the database is on a file system, use synchronous writes with direct I/O (operating-system direct I/O). Besides using the correct I/O type, also review the database layout and monitor I/O throughput from the operating system with the usual commands, for example sar -d or iostat -dxnC.

When the average db file parallel write wait time is high and the system is busy, user sessions may start waiting on free buffer waits, because DBWR cannot free buffers fast enough. If the TIME_WAITED of free buffer waits is high, fix DBWR I/O throughput before adding more buffers to the cache.

Another symptom of high average db file parallel write waits is high TIME_WAITED on the write complete waits event. Foreground processes are not allowed to modify blocks that are in transit to disk, i.e. blocks in DBWR's write batch, so they wait on write complete waits. Significant write complete waits are therefore a sure sign of a slow DBWR, and the delay can be corrected by improving DBWR I/O throughput.

13) db file single write: occurs when a file header or another individual block is written; the wait lasts until the I/O call completes. Solution: improve I/O performance.

14) db file scattered read: occurs when an entire segment is scanned, reading up to db_file_multiblock_read_count blocks at a time. Because the data may be spread over different places (related to striping and extents), multiple scattered reads are usually needed to read everything. The wait time is the time needed to complete all of the I/O calls. Solution: improve I/O performance.

This situation usually shows the wait associated with a full table scan.

When a database performs a full table scan, the data is scattered and read into Buffer Cache for performance reasons. If this wait event is significant, it may indicate that for some fully scanned tables, no indexes are created or appropriate indexes are not created, and we may need to check that these tables have been set up correctly.

However, this wait event does not necessarily mean poor performance. Under some conditions, Oracle will actively use full table scans to replace index scans to improve performance, which is related to the amount of data accessed. Oracle will make a more intelligent choice under CBO, and Oracle will prefer to use indexes under RBO.

Because blocks from full table scans are placed at the cold end of the LRU (Least Recently Used) list, small tables that are accessed frequently can be cached in memory to avoid repeated reads.

When this wait event is significant, it can be diagnosed together with the v$session_longops dynamic performance view, which records operations that run for a long time (longer than 6 seconds); many of them are likely to be full table scans, and in any case this information is worth our attention. A sketch of that query follows.
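A minimal sketch of the v$session_longops check mentioned above (operations running longer than about 6 seconds are recorded there); the column list and ordering are illustrative:

select sid, serial#, opname, target, sofar, totalwork,
       round(sofar / totalwork * 100, 2) as pct_done, elapsed_seconds
  from v$session_longops
 where totalwork > 0
   and sofar < totalwork
 order by elapsed_seconds desc;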

15) db file sequential read: occurs when a foreground process performs regular single-block reads of data files, including index lookups and other non-full-segment scans. The wait time is the time needed to complete all of the I/O calls. Solution: improve I/O performance.

If this wait event is significant, it may indicate that in a multi-table join the join order is wrong and the driving table is not chosen correctly, or that there is a problem with index usage, since an index is not always the best choice. In most cases records are fetched faster through an index, so for a well-coded, well-tuned database a large amount of this wait is usually normal. Sometimes a high value is related to non-contiguous storage of data and caching of contiguous blocks: for tables with frequent DML, discontinuity of data and storage space can lead to excessive single-block reads, and regular data reorganization and space reclamation are sometimes necessary.

It should be noted that in many cases, using an index is not the best choice, such as reading a large amount of data in a larger table, and a full table scan may be significantly faster than an index scan, so it should be noted in development that index scanning should be avoided for such queries.

16) direct path read: generally refers to reading data blocks directly into the PGA, commonly used for sorting, parallel query, and read-ahead operations. This wait is usually caused by I/O; using asynchronous I/O or limiting sorts on disk can reduce it.

This is the wait event associated with direct reads: Oracle reads data blocks directly into the session's PGA (process global area), bypassing the SGA (system global area). Data in the PGA is not shared with other sessions; the blocks read this way are used by that session alone and are not placed in the shared SGA.

During sort operations (order by / group by / union / distinct / rollup, merge joins), when the SORT_AREA_SIZE space in the PGA is insufficient, temporary tablespaces are used to hold intermediate results, and direct path read waits occur when the sort results are read back from the temporary tablespace.
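A hedged sketch of checking the work-area settings related to these sort spills; with workarea_size_policy = AUTO (the usual setting from 9i onwards), pga_aggregate_target governs the sort area rather than sort_area_size, and the target value below is purely illustrative:

-- Current work-area policy and PGA target
select name, value from v$parameter
 where name in ('workarea_size_policy', 'pga_aggregate_target', 'sort_area_size');

-- How much sorting spills to disk
select name, value from v$sysstat
 where name in ('sorts (memory)', 'sorts (disk)');

-- Raise the PGA target if disk sorts are significant
alter system set pga_aggregate_target = 2g;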

SQL statements that use hash joins flush hash partitions that do not fit in memory to the temporary tablespace. To find the rows matching the SQL predicate, those hash partitions are later read back from the temporary tablespace into memory, and the Oracle session waits on the direct path read event.

SQL statements executed in parallel also contribute to system-wide direct path read waits. During parallel execution, the direct path read waits belong to the slave queries, not the parent query; the session running the parent query mostly waits on PX Deq: Execute Reply, while the slaves generate the direct path read waits.

Direct reads may be performed synchronously or asynchronously, depending on the platform and the value of the disk_asynch_io initialization parameter. When asynchronous I/O is used, the system-wide wait statistics for this event may be inaccurate and misleading.

17) direct path write: this wait occurs while the system waits for confirmation that all outstanding asynchronous I/Os have been written to disk. For this write wait, find the data files with the heaviest I/O (or the temporary files, if there is a lot of sorting), spread the load, and speed up their writes. If there is too much disk sorting, temporary tablespace operations become frequent; in that case consider locally managed tablespaces, split into several smaller files placed on different disks or raw devices.

In DSS systems a large number of direct path reads is normal, but in OLTP systems significant direct path reads usually mean an application problem that results in a large amount of disk sort reads.

Direct path write usually occurs when Oracle writes data directly from the PGA to data files or temporary files, bypassing the SGA.

Such write operations are typically used in the following situations:

Direct path loading

Parallel DML operation

Disk sorting

Writes to uncached LOB segments, which are then recorded as the direct path write (lob) wait.

The most common direct path writes are caused by disk sorting. For this write wait, find the data files with the heaviest I/O (or the temporary files, if there is a lot of sorting), spread the load, and speed up their writes.

18) control file parallel write: can occur when the server process updates all control files. If the wait is short, it can be ignored; if it is long, check whether the physical disks storing the control files have an I/O bottleneck.

Multiple control files are identical copies used for mirroring to improve safety. For production systems, multiple control files should be stored on different disks; three are generally sufficient, and two are acceptable if there are only two physical disks. Keeping multiple control files on the same disk is pointless. To reduce this wait, consider: ① reducing the number of control files (while keeping them safe); ② using asynchronous I/O if the system supports it; ③ moving the control files to physical disks with a lighter I/O load.

19) control file sequential read / control file single write: these two events occur when there is an I/O problem reading from or writing to a single control file. If the waits are significant, check the disk where that control file is stored for an I/O bottleneck.

20) library cache pin

This event usually occurs when one session is running an object such as a PL/SQL unit, a VIEW, or a TYPE and another session recompiles that object: a shared pin is first taken on the object and then an exclusive one is requested, so the session requesting the exclusive pin waits. The p1/p2 values can be related to the x$kglpn and x$kglob tables.

X$KGLOB (Kernel Generic Library Cache Manager Object)

X$KGLPN (Kernel Generic Library Cache Manager Object Pins)

-- query X$KGLOB to find the relevant object. The SQL statement is as follows

(that is, associate the P1raw in V$SESSION_WAIT with the KGLHDADR in X$KGLOB)

Select kglnaown,kglnaobj from X$KGLOB

Where KGLHDADR = (select p1raw from v$session_wait

Where event='library cache pin')

-- find out the sid of the blocker who caused the waiting event

Select sid from x$kglpn, v$session

Where KGLPNHDL in

(select p1raw from v$session_wait

Where wait_time=0 and event like 'library cache pin%')

And KGLPNMOD <> 0

And v$session.saddr=x$kglpn.kglpnuse

-- find out the SQL statement being executed by the blocker

Select sid,sql_text

From v$session, v$sqlarea

Where v$session.sql_address=v$sqlarea.address

And sid=

In this way, the root cause of "library cache pin" waiting can be found, and the resulting performance problems can be solved.

21) library cache lock

This event is usually caused by DDL operations: one session places an exclusive lock on a library cache object, and another session then also tries to lock it exclusively, so the second session waits. The corresponding object can be found through the x$kgllk base table.

-- query the sid, session user, locked object of the blocker causing the waiting event

Select b.sid,a.user_name,a.kglnaobj

From x$kgllk a, v$session b

Where a.kgllkhdl in

(select p1raw from v$session_wait

Where wait_time=0 and event = 'library cache lock')

And a.kgllkmod <> 0

And b.saddr=a.kgllkuse

Of course, you can also look at v$locked_objects directly. With the sid obtained from the statement above, you can find the corresponding pid in v$process and then kill the session or take other action.

22) Idle wait events.

For example, some common idle wait events:

Dispatcher timer

Lock element cleanup

Null event

Parallel query dequeue wait

Parallel query idle wait-Slaves

Pipe get

PL/SQL lock timer

Pmon timer

Rdbms ipc message

Slave wait

Smon timer

SQL*Net break/reset to client

SQL*Net message from client

SQL*Net message to client

SQL*Net more data to client

Virtual circuit status

Client message


Here is a quick overview of common wait events and their typical solutions:

Wait event                    General solution
Sequential Read               Adjust the relevant indexes and choose an appropriate driving row source
Scattered Read                Indicates many full table scans; optimize the code, cache small tables in memory
Free Buffer                   Increase DB_CACHE_SIZE, increase checkpoint frequency, optimize the code
Buffer Busy (segment header)  Add freelists or freelist groups
Buffer Busy (data block)      Isolate hot blocks; use reverse key indexes; use smaller blocks; increase the table's INITRANS
Buffer Busy (undo header)     Increase the number or size of rollback segments
Buffer Busy (undo block)      Commit more often; increase the number or size of rollback segments
Latch Free                    Depends on the specific latch type; see the latch discussion above
Enqueue - ST                  Use locally managed tablespaces or increase the pre-allocated extent size
Enqueue - HW                  Pre-allocate extents above the HWM
Enqueue - TX4                 Increase INITRANS on the table or index, or use smaller blocks
Log Buffer Space              Increase LOG_BUFFER, improve I/O
Log File Switch               Add or enlarge log files
Log file sync                 Reduce commit frequency; use faster I/O or raw devices
Write complete waits          Add DBWR processes; increase checkpoint frequency

Time Model Statistics

Total time in database user-calls (DB Time): 663s

Statistics including the word "background" measure background process time, and so do not contribute to the DB time statistic

Ordered by % of DB time desc, Statistic name

Statistic Name: Time (s), % of DB Time

DB CPU: 514.50, 77.61
sql execute elapsed time: 482.27, 72.74
parse time elapsed: 3.76, 0.57
PL/SQL execution elapsed time: 0.50, 0.08
hard parse elapsed time: 0.34, 0.05
connection management call elapsed time: 0.08, 0.01
hard parse (sharing criteria) elapsed time: 0.00, 0.00
repeated bind elapsed time: 0.00, 0.00
PL/SQL compilation elapsed time: 0.00, 0.00
failed parse elapsed time: 0.00, 0.00
DB time: 662.97
background elapsed time: 185.19
background cpu time: 67.48

This section shows the time consumed by the various types of database processing.

DB Time = CPU time + all non-idle wait event time, which matches the DB Time shown in the report header.
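The same figures can be read directly from the time model view; a minimal sketch (values are in microseconds in v$sys_time_model):

select stat_name, round(value / 1000000, 2) as seconds
  from v$sys_time_model
 where stat_name in ('DB time', 'DB CPU');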

Back to Wait Events Statistics

Back to Top

Wait Class: wait events grouped by class.

S-second

Cs-centisecond-100th of a second

Ms-millisecond-1000th of a second

Us-microsecond-1000000th of a second

Ordered by wait time desc, waits desc

Query the 12 wait event classes provided by Oracle 10gR1:

Select wait_class#, wait_class_id, wait_class from v$event_name group by wait_class#, wait_class_id, wait_class order by wait_class#

Wait Class: Waits, % Time-outs, Total Wait Time (s), Avg wait (ms), Waits / txn

User I/O: 66837, 0.00, 120, 2, 11.94
System I/O: 28295, 0.00, 93, 3, 5.05
Network: 1571450, 0.00, 66, 0, 280.72
Cluster: 210548, 0.00, 29, 0, 37.61
Other: 81783, 71.82, 28, 0, 14.61
Application: 333155, 0.00, 16, 0, 59.51
Concurrency: 5182, 0.04, 5, 1, 0.93
Commit: 919, 0.00, 4, 4, 0.16
Configuration: 25427, 99.46, 1, 0, 4.54

Back to Wait Events Statistics

Back to Top

Wait Events (non-idle wait events are listed first, followed by idle wait events)

S-second

Cs-centisecond-100th of a second

Ms-millisecond-1000th of a second

Us-microsecond-1000000th of a second

Ordered by wait time desc, waits desc (idle events last)

(1) query all wait events and their attributes:

Select event#, name, parameter1, parameter2, parameter3 from v$event_name order by name

(2) query the 12 wait event classes provided by Oracle 10gR1:

Select wait_class#, wait_class_id, wait_class from v$event_name group by wait_class#, wait_class_id, wait_class order by wait_class#


(The content shown below may come from the following views.)

The V$EVENT_NAME view contains all wait events defined for the database instance.

The V$SYSTEM_EVENT view shows the total statistics of all wait events encountered by all Oracle sessions since the instance was started.

The V$SESSION_EVENT view contains total wait event statistics for all sessions currently connected to the instance. This view contains all the columns that appear in the V$SYSTEM_EVENT view. It records the total number of waits, the waited time, and the maximum wait time for each waiting event in the session. The SID column identifies a separate session. The maximum wait time for each event in each session is tracked in the MAX_WAIT column. You can get more information about the session and the user by combining the V$SESSION_EVENT view with the V$SESSION view with the SID column.

The V$SESSION_WAIT view provides detailed information about the events or resources that each session is waiting for. This view contains only one line of active or inactive information for each session at any given time.

Since the introduction of OWI in Oracle 7.0.12, there have been four V$ views:

V$EVENT_NAME

V$SESSION_WAIT

V$SESSION_EVENT

V$SYSTEM_EVENT

In addition to these wait event views, the following new views have been introduced in Oracle 10gR1 to display wait information from multiple angles:

V$SYSTEM_WAIT_CLASS

V$SESSION_WAIT_CLASS

V$SESSION_WAIT_HISTORY

V$EVENT_HISTOGRAM

V$ACTIVE_SESSION_HISTORY

However, V$SESSION_WAIT, V$SESSION_EVENT and V$SYSTEM_EVENT remain the three important views that provide wait event statistics and timing information at different levels of granularity. The relationship between the three is:

V$SESSION_WAIT -> V$SESSION_EVENT -> V$SYSTEM_EVENT (current session-level detail rolls up to session totals and then to instance-wide totals)
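For a live picture of what each connected session is waiting on right now, something like the following can be used (a sketch assuming 10g, where v$session_wait exposes a wait_class column):

select w.sid, s.username, w.event, w.wait_class,
       w.p1, w.p2, w.p3, w.seconds_in_wait
  from v$session_wait w, v$session s
 where w.sid = s.sid
   and w.wait_class <> 'Idle'
 order by w.seconds_in_wait desc;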

Event

Waits

% Time-outs

Total Wait Time (s)

Avg wait (ms)

Waits / txn

SQL*Net more data from client

27319

0.00

64

2

4.88

Log file parallel write

5497

0.00

47

9

0.98

Db file sequential read

7900

0.00

35

4

1.41

Db file parallel write

4806

0.00

34

7

0.86

Db file scattered read

10310

0.00

31

3

1.84

Direct path write

42724

0.00

30

1

7.63

Reliable message

355

2.82

18

49

0.06

SQL*Net break/reset to client

333084

0.00

16

0

59.50

Db file parallel read

3732

0.00

13

4

0.67

Gc current multi block request

175710

0.00

10

0

31.39

Control file sequential read

15974

0.00

10

1

2.85

Direct path read temp

1873

0.00

9

5

0.33

Gc cr multi block request

20877

0.00

8

0

3.73

Log file sync

919

0.00

4

4

0.16

Gc cr block busy

526

0.00

3

6

0.09

Enq: FB-contention

10384

0.00

3

0

1.85

DFS lock handle

3517

0.00

3

1

0.63

Control file parallel write

1946

0.00

3

1

0.35

Gc current block 2-way

4165

0.00

2

0

0.74

Library cache lock

432

0.00

2

4

0.08

Name-service call wait

22

0.00

2

76

0.00

Row cache lock

3894

0.00

2

0

0.70

Gcs log flush sync

1259

42.02

2

1

0.22

Os thread startup

18

5.56

2

89

0.00

Gc cr block 2-way

3671

0.00

2

0

0.66

Gc current block busy

113

0.00

1

12

0.02

SQL*Net message to client

1544115

0.00

1

0

275.83

Gc buffer busy

15

6.67

1

70

0.00

Gc cr disk read

3272

0.00

1

0

0.58

Direct path write temp

159

0.00

1

5

0.03

Gc current grant busy

898

0.00

1

1

0.16

Log file switch completion

29

0.00

1

17

0.01

CGS wait for IPC msg

48739

99.87

0

0

8.71

Gc current grant 2-way

1142

0.00

0

0

0.20

Kjbdrmcvtq lmon drm quiesce: ping completion

9

0.00

0

19

0.00

Enq: US-contention

567

0.00

0

0

0.10

Direct path read

138

0.00

0

1

0.02

Enq: WF-contention

14

0.00

0

9

0.00

Ksxr poll remote instances

13291

58.45

0

0

2.37

Library cache pin

211

0.00

0

1

0.04

Ges global resource directory to be frozen

9

100.00

0

10

0.00

Wait for scn ack

583

0.00

0

0

0.10

Log file sequential read

36

0.00

0

2

0.01

Undo segment extension

25342

99.79

0

0

4.53

Rdbms ipc reply

279

0.00

0

0

0.05

Ktfbtgex

6

100.00

0

10

0.00

Enq: HW-contention

44

0.00

0

1

0.01

Gc cr grant 2-way

158

0.00

0

0

0.03

Enq: TX-index contention

1

0.00

0

34

0.00

Enq: CF-contention

64

0.00

0

1

0.01

PX Deq: Signal ACK

37

21.62

0

1

0.01

Latch free

3

0.00

0

10

0.00

Buffer busy waits

625

0.16

0

0

0.11

KJC: Wait for msg sends to complete

154

0.00

0

0

0.03

Log buffer space

11

0.00

0

2

0.00

Enq: PS-contention

46

0.00

0

1

0.01

Enq: TM-contention

70

0.00

0

0

0.01

IPC send completion sync

40

100.00

0

0

0.01

PX Deq: reap credit

1544

99.81

0

0

0.28

Log file single write

36

0.00

0

0

0.01

Enq: TT-contention

46

0.00

0

0

0.01

Enq: TD-KTF dump entries

12

0.00

0

1

0.00

Read by other session

1

0.00

0

12

0.00

LGWR wait for redo copy

540

0.00

0

0

0.10

PX Deq Credit: send blkd

17

5.88

0

0

0.00

Enq: TA-contention

14

0.00

0

0

0.00

Latch: ges resource hash list

44

0.00

0

0

0.01

Enq: PI-contention

8

0.00

0

0

0.00

Write complete waits

1

0.00

0

2

0.00

Enq: DR-contention

3

0.00

0

0

0.00

Enq: MW-contention

3

0.00

0

0

0.00

Enq: TS-contention

3

0.00

0

0

0.00

PX qref latch

150

100.00

0

0

0.03

PX qref latch

PX qref latch wait events occasionally show up when parallel execution is used, and they are most likely during peak periods combined with high concurrency. They deserve special attention.

Concept and principle

In a parallel execution environment, data and information are exchanged between query slaves and query coordinator through queues. PX qref latch is used to protect these queues.

The occurrence of PX qref latch wait events generally indicates that the message is sent faster than received, so you need to adjust the buffer size (which can be adjusted by the parallel_execution_message_size parameter).

However, in some cases this is difficult to avoid, for example when the consumer side takes a long time to process the data because a large number of rows must be returned; in that case the wait is normal.

Adjustment and measures

When the system load is high, reduce the degree of parallelism; if the default degree is used, this can be achieved by lowering the parallel_threads_per_cpu parameter (a sketch of both adjustments follows the quoted note below).

DEFAULT degree = PARALLEL_THREADS_PER_CPU * # CPU's

Optimize parallel_execution_message_size parameters

Tuning parallel_execution_message_size is a tradeoff between performance and memory. For a parallel query, the connection topology between the slaves and the QC requires at most (n^2 + 2n) connections (where n is the DOP, not the actual number of slaves). If each connection has 3 buffers associated with it, memory consumption can climb very quickly on large machines running high-DOP queries.
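As a rough illustration of the two adjustments mentioned above (the parameter names are standard, but the values below are only placeholders to be validated against your own workload):

-- Check the current parallel execution settings.
select name, value
  from v$parameter
 where name in ('parallel_execution_message_size', 'parallel_threads_per_cpu');

-- Example only: enlarge the PX message buffer; this parameter is static,
-- so the change takes effect after an instance restart.
alter system set parallel_execution_message_size = 4096 scope = spfile;

-- Example only: lower the default degree of parallelism on a busy system.
alter system set parallel_threads_per_cpu = 1 scope = both;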

Enq: MD-contention

2

0.00

0

0

0.00

Latch: KCL gc element parent latch

11

0.00

0

0

0.00

Enq: JS-job run lock-synchronize

1

0.00

0

1

0.00

SQL*Net more data to client

16

0.00

0

0

0.00

Latch: cache buffers lru chain

1

0.00

0

0

0.00

Enq: UL-contention

1

0.00

0

0

0.00

Gc current split

1

0.00

0

0

0.00

Enq: AF-task serialization

1

0.00

0

0

0.00

Latch: object queue header operation

3

0.00

0

0

0.00

Latch: cache buffers chains

1

0.00

0

0

0.00

Latch: enqueue hash chains

2

0.00

0

0

0.00

SQL*Net message from client

1544113

0.00

12626

8

275.83

Gcs remote message

634884

98.64

9203

14

113.41

DIAG idle wait

23628

0.00

4616

195

4.22

Ges remote message

149591

93.45

4612

31

26.72

Streams AQ: qmn slave idle wait

167

0.00

4611

27611

0.03

Streams AQ: qmn coordinator idle wait

351

47.86

4611

13137

0.06

Streams AQ: waiting for messages in the queue

488

100.00

4605

9436

0.09

Virtual circuit status

157

100.00

4596

29272

0.03

PX Idle Wait

1072

97.11

2581

2407

0.19

Jobq slave wait

145

97.93

420

2896

0.03

Streams AQ: waiting for time management or cleanup tasks

1

100.00

270

269747

0.00

PX Deq: Parse Reply

40

40.00

0

3

0.01

PX Deq: Execution Msg

121

26.45

0

0

0.02

PX Deq: Join ACK

38

42.11

0

1

0.01

PX Deq: Execute Reply

34

32.35

0

0

0.01

PX Deq: Msg Fragment

16

0.00

0

0

0.00

Streams AQ: RAC qmn coordinator idle wait

351

100.00

0

0

0.06

Class slave wait

2

0.00

0

0

0.00

The db file scattered read wait event occurs when a session waits for multi-block I/O caused by full table scans or index fast full scans. Segments with excessive read activity can be identified in the "Segments by Physical Reads" and "SQL ordered by Reads" sections (the names may differ slightly in other report versions). In an OLTP application there should not be many full scans; selective index access should be used instead.
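Outside the report, the same segment-level information can be pulled from the segment statistics; a minimal sketch against v$segment_statistics:

-- Top segments by physical reads since instance startup.
select * from (
  select owner, object_name, object_type, value as physical_reads
    from v$segment_statistics
   where statistic_name = 'physical reads'
   order by value desc)
 where rownum <= 10;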

The db file sequential read wait event means single-block I/O read waits (usually a single block read into a contiguous memory area). If this wait is serious, use the method in the previous paragraph to identify the hot segments being read, then partition large tables to reduce the I/O, or optimize the execution plan (for example with stored outlines or by gathering fresh statistics) to avoid the single-block reads that cause the wait. In batch applications, db file sequential read is an event that significantly affects performance and should be avoided as far as possible.

The log file parallel write event occurs while waiting for the LGWR process to write REDO records from the log buffer to the online log files. Although the writes may be issued concurrently, LGWR must wait for the last I/O to reach disk before the parallel write is considered complete, so the wait time depends on how long the OS takes to complete all requests. If this wait is serious, move the log files to faster disks or stripe them across disks (to reduce contention).

The buffer busy waits event occurs when a session needs a database block in the buffer cache but cannot access it. There are two reasons for the buffer being "busy": 1) another session is reading the block into the buffer; 2) another session holds the requested buffer in an incompatible (exclusive) mode. The segments on which the wait occurs can be found in the "Segments by Buffer Busy Waits" section; the wait can then be reduced by using reverse key indexes and by partitioning the hot table.
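For illustration only (the table, index and column names below are made up), a hot index block can be spread out with a reverse key index, and block-level concurrency can be raised with INITRANS:

-- Rebuild a hot index as a reverse key index to scatter adjacent key values.
create index idx_orders_id_rev on orders_demo (order_id) reverse;

-- Allow more concurrent transactions per block on the hot table.
alter table orders_demo initrans 8;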

The log file sync event: when a user session performs a transaction operation (COMMIT, ROLLBACK, etc.), it notifies the LGWR process to write all the required REDO from the log buffer to the log file; the event occurs while the user session waits for LGWR to confirm that the write has safely reached disk. The methods for reducing this wait are the same as for the log file parallel write event.

Enqueue waits are locks that serialize access to local resources, indicating a wait for a resource that is held by one or more other sessions in an incompatible (exclusive) mode. How to reduce the wait depends on the lock type producing it. The lock types that most commonly cause enqueue waits are TX (transaction lock), TM (DML lock) and ST (space management lock).
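A classic way to see which session holds the enqueue that others are waiting for is a self-join of v$lock; a generic sketch:

-- Sessions holding a lock (block = 1) that another session is requesting.
select h.sid as holder_sid, w.sid as waiter_sid,
       h.type, h.id1, h.id2, w.request
  from v$lock h, v$lock w
 where h.block = 1
   and w.request > 0
   and h.id1 = w.id1
   and h.id2 = w.id2;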

Back to Wait Events Statistics

Back to Top

Background Wait Events

Ordered by wait time desc, waits desc (idle events last)

Event

Waits

% Time-outs

Total Wait Time (s)

Avg wait (ms)

Waits / txn

Log file parallel write

5497

0.00

forty-seven

nine

0.98

Db file parallel write

4806

0.00

thirty-four

seven

0.86

Events in waitclass Other

69002

83.25

twenty-two

0

12.33

Control file sequential read

9323

0.00

seven

one

1.67

Control file parallel write

1946

0.00

three

one

0.35

Os thread startup

eighteen

5.56

two

eighty-nine

0.00

Direct path read

one hundred and thirty eight

0.00

0

one

0.02

Db file sequential read

twenty-one

0.00

0

five

0.00

Direct path write

one hundred and thirty eight

0.00

0

0

0.02

Log file sequential read

thirty-six

0.00

0

two

0.01

Gc cr block 2-way

ninety-six

0.00

0

0

0.02

Gc current block 2-way

seventy-eight

0.00

0

0

0.01

Log buffer space

eleven

0.00

0

two

0.00

Row cache lock

fifty-nine

0.00

0

0

0.01

Log file single write

thirty-six

0.00

0

0

0.01

Buffer busy waits

one hundred and fifty one

0.66

0

0

0.03

Gc current grant busy

twenty-nine

0.00

0

0

0.01

Library cache lock

four

0.00

0

one

0.00

Enq: TM-contention

ten

0.00

0

0

0.00

Gc current grant 2-way

eight

0.00

0

0

0.00

Gc cr multi block request

seven

0.00

0

0

0.00

Gc cr grant 2-way

five

0.00

0

0

0.00

Rdbms ipc message

97288

73.77

50194

five hundred and sixteen

17.38

Gcs remote message

634886

98.64

9203

fourteen

113.41

DIAG idle wait

23628

0.00

4616

one hundred and ninety five

4.22

Pmon timer

1621

100.00

4615

2847

0.29

Ges remote message

149591

93.45

4612

thirty-one

26.72

Streams AQ: qmn slave idle wait

one hundred and sixty seven

0.00

4611

27611

0.03

Streams AQ: qmn coordinator idle wait

three hundred and fifty one

47.86

4611

13137

0.06

Smon timer

two hundred and seventy seven

6.50

4531

16356

0.05

Streams AQ: waiting for time management or cleanup tasks

one

100.00

two hundred and seventy

269747

0.00

PX Deq: Parse Reply

forty

40.00

0

three

0.01

PX Deq: Join ACK

thirty-eight

42.11

0

one

0.01

PX Deq: Execute Reply

thirty-four

32.35

0

0

0.01

Streams AQ: RAC qmn coordinator idle wait

three hundred and fifty one

100.00

0

0

0.06

Back to Wait Events Statistics

Back to Top

Operating System Statistics

NUM_LCPUS: 0
NUM_VCPUS: 0
AVG_BUSY_TIME: 101442
AVG_IDLE_TIME: 371241
AVG_IOWAIT_TIME: 5460
AVG_SYS_TIME: 25795
AVG_USER_TIME: 75510
BUSY_TIME: 812644
IDLE_TIME: 2971077
IOWAIT_TIME: 44794
SYS_TIME: 207429
USER_TIME: 605215
LOAD: 0
OS_CPU_WAIT_TIME: 854100
RSRC_MGR_CPU_WAIT_TIME: 0
PHYSICAL_MEMORY_BYTES: 8589934592
NUM_CPUS: 8
NUM_CPU_CORES: 4

NUM_LCPUS: 0 is displayed because LPARs are not configured
NUM_VCPUS: ditto
AVG_BUSY_TIME: BUSY_TIME / NUM_CPUS
AVG_IDLE_TIME: IDLE_TIME / NUM_CPUS
AVG_IOWAIT_TIME: IOWAIT_TIME / NUM_CPUS
AVG_SYS_TIME: SYS_TIME / NUM_CPUS
AVG_USER_TIME: USER_TIME / NUM_CPUS
BUSY_TIME: time equivalent of %usr + %sys in sar output
IDLE_TIME: time equivalent of %idle in sar
IOWAIT_TIME: time equivalent of %wio in sar
SYS_TIME: time equivalent of %sys in sar
USER_TIME: time equivalent of %usr in sar
LOAD: unknown
OS_CPU_WAIT_TIME: supposedly time waiting on run queues
RSRC_MGR_CPU_WAIT_TIME: time waited because of the resource manager
PHYSICAL_MEMORY_BYTES: total memory in use, supposedly
NUM_CPUS: number of CPUs reported by the OS
NUM_CPU_CORES: number of CPU sockets on the motherboard

The total elapsed time can also be calculated by formula:

BUSY_TIME + IDLE_TIME + IOWAIT_TIME

or SYS_TIME + USER_TIME + IDLE_TIME + IOWAIT_TIME

(because BUSY_TIME = SYS_TIME + USER_TIME)
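The relationship checks out with the figures above: SYS_TIME + USER_TIME = 207429 + 605215 = 812644, which equals BUSY_TIME. The same counters can also be read live from v$osstat; a minimal sketch:

-- OS-level time counters (centiseconds) as exposed to the database.
select stat_name, value
  from v$osstat
 where stat_name in ('BUSY_TIME', 'IDLE_TIME', 'IOWAIT_TIME', 'SYS_TIME', 'USER_TIME');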

Back to Wait Events Statistics

Back to Top

Service Statistics

Ordered by DB Time

Service Name: DB Time (s), DB CPU (s), Physical Reads, Logical Reads

ICCI: 608.10, 496.60, 315849, 16550972
SYS$USERS: 54.70, 17.80, 6539, 58929
ICCIXDB: 0.00, 0.00, 0, 0
SYS$BACKGROUND: 0.00, 0.00, 282, 38990

Back to Wait Events Statistics

Back to Top

Service Wait Class Stats

Wait Class info for services in the Service Statistics section.

Total Waits and Time Waited displayed for the following wait classes: User I/O, Concurrency, Administrative, Network

Time Waited (Wt Time) in centisecond (100th of a second)

Service Name: User I/O Total Wts / Wt Time, Concurcy Total Wts / Wt Time, Admin Total Wts / Wt Time, Network Total Wts / Wt Time

ICCI: 59826 / 8640, 4621 / 338, 0 / 0, 1564059 / 6552
SYS$USERS: 6567 / 3238, 231 / 11, 0 / 0, 7323 / 3
SYS$BACKGROUND: 443 / 115, 330 / 168, 0 / 0, 0 / 0

Back to Wait Events Statistics

Back to Top

SQL Statistics v$sqlarea

SQL ordered by Elapsed Time

SQL ordered by CPU Time

SQL ordered by Gets

SQL ordered by Reads

SQL ordered by Executions

SQL ordered by Parse Calls

SQL ordered by Sharable Memory

SQL ordered by Version Count

SQL ordered by Cluster Wait Time

Complete List of SQL Text

This section lists the SQL statements that consumed the most of each resource during the snapshot interval, together with their percentage of the total, which gives a tuning target. For example, in a system where CPU is the performance bottleneck, optimizing the statements with the most buffer gets will have the greatest effect; in a system dominated by I/O waits, the tuning target should be the statements with the most physical reads.

There is no complete SQL statement in the STATSPACK report, and you can use the Hash Value in the report to look it up from the database with the following statement:

select sql_text
  from stats$sqltext
 where hash_value = &hash_value
 order by piece;
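In an AWR report the statements are identified by SQL Id instead; the full text can be fetched from the workload repository, or from the shared pool while the cursor is still cached. A sketch, with &sql_id as a placeholder:

-- Full statement text captured by AWR for a given SQL Id.
select sql_text
  from dba_hist_sqltext
 where sql_id = '&sql_id';

-- Or, while the cursor is still in the shared pool:
select sql_fulltext
  from v$sqlarea
 where sql_id = '&sql_id';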

Back to Top

SQL ordered by Elapsed Time

Resources reported for PL/SQL code includes the resources used by all SQL statements called by the code.

% Total DB Time is the Elapsed Time of the SQL statement divided by the total database time, multiplied by 100.

Elapsed Time (s)

CPU Time (s)

Executions

Elap per Exec (s)

% Total DB Time

SQL Id

SQL Module

SQL Text

ninety-three

fifty-seven

one

93.50

14.10

D8z0u8hgj8xdy

Cuidmain@HPGICCI1 (TNS V1-V3)

Insert into CUID select CUID_...

seventy-six

seventy-five

172329

0.00

11.52

4vja2k2gdtyup

Load_fnsact@HPGICCI1 (TNS V1-V3)

Insert into ICCICCS values (:...

fifty-eight

forty-two

one

58.04

8.75

569r5k05drsj7

Cumimain@HPGICCI1 (TNS V1-V3)

Insert into CUMI select CUSV_...

fifty-one

forty-two

one

50.93

7.68

Ackxqhnktxnbc

Cusmmain@HPGICCI1 (TNS V1-V3)

Insert into CUSM select CUSM_...

thirty-eight

thirty-six

166069

0.00

5.67

7gtztzv329wg0

Select c.name, u.name from co...

thirty-five

three

one

35.00

5.28

6z06gcfw39pkd

SQL*Plus

SELECT F.TABLESPACE_NAME, TO_...

twenty-three

twenty-three

172329

0.00

3.46

1dm3bq36vu3g8

Load_fnsact@HPGICCI1 (TNS V1-V3)

Insert into iccifnsact values...

fifteen

eleven

five

2.98

2.25

Djs2w2f17nw2z

DECLARE job BINARY_INTEGER: =...

fourteen

fourteen

172983

0.00

2.16

7wwv1ybs9zguz

Load_fnsact@HPGICCI1 (TNS V1-V3)

Update ICCIFNSACT set BORM_AD...

thirteen

thirteen

172337

0.00

2.00

Gmn2w09rdxn14

Load_oldnewact@HPGICCI1 (TNS V1-V3)

Insert into OLDNEWACT values...

thirteen

thirteen

166051

0.00

1.89

Chjmy0dxf9mbj

Icci_migact@HPGICCI1 (TNS V1-V3)

Insert into ICCICCS values (:...

ten

four

one

9.70

1.46

0yv9t4qb1zb2b

Cuidmain@HPGICCI1 (TNS V1-V3)

Select CUID_CUST_NO, CUID_ID_...

ten

eight

five

1.91

1.44

1crajpb7j5tyz

INSERT INTO STATS$SGA_TARGET_A...

eight

eight

172329

0.00

1.25

38apjgr0p55ns

Load_fnsact@HPGICCI1 (TNS V1-V3)

Update ICCICCS set CCSMAXOVER...

eight

eight

172983

0.00

1.16

5c4qu2zmj3gux

Load_fnsact@HPGICCI1 (TNS V1-V3)

Select * from ICCIPRODCODE wh...

Back to SQL Statistics

Back to Top

SQL ordered by CPU Time

Resources reported for PL/SQL code includes the resources used by all SQL statements called by the code.

% Total DB Time is the Elapsed Time of the SQL statement divided by the total database time, multiplied by 100.

CPU Time (s)

Elapsed Time (s)

Executions

CPU per Exec (s)

% Total DB Time

SQL Id

SQL Module

SQL Text

seventy-five

seventy-six

172329

0.00

11.52

4vja2k2gdtyup

Load_fnsact@HPGICCI1 (TNS V1-V3)

Insert into ICCICCS values (:...

fifty-seven

ninety-three

one

57.31

14.10

D8z0u8hgj8xdy

Cuidmain@HPGICCI1 (TNS V1-V3)

Insert into CUID select CUID_...

forty-two

fifty-one

one

42.43

7.68

Ackxqhnktxnbc

Cusmmain@HPGICCI1 (TNS V1-V3)

Insert into CUSM select CUSM_...

forty-two

fifty-eight

one

42.01

8.75

569r5k05drsj7

Cumimain@HPGICCI1 (TNS V1-V3)

Insert into CUMI select CUSV_...

thirty-six

thirty-eight

166069

0.00

5.67

7gtztzv329wg0

Select c.name, u.name from co...

twenty-three

twenty-three

172329

0.00

3.46

1dm3bq36vu3g8

Load_fnsact@HPGICCI1 (TNS V1-V3)

Insert into iccifnsact values...

fourteen

fourteen

172983

0.00

2.16

7wwv1ybs9zguz

Load_fnsact@HPGICCI1 (TNS V1-V3)

Update ICCIFNSACT set BORM_AD...

thirteen

thirteen

172337

0.00

2.00

Gmn2w09rdxn14

Load_oldnewact@HPGICCI1 (TNS V1-V3)

Insert into OLDNEWACT values...

thirteen

thirteen

166051

0.00

1.89

Chjmy0dxf9mbj

Icci_migact@HPGICCI1 (TNS V1-V3)

Insert into ICCICCS values (:...

eleven

fifteen

five

2.23

2.25

Djs2w2f17nw2z

DECLARE job BINARY_INTEGER: =...

eight

eight

172329

0.00

1.25

38apjgr0p55ns

Load_fnsact@HPGICCI1 (TNS V1-V3)

Update ICCICCS set CCSMAXOVER...

eight

ten

five

1.60

1.44

1crajpb7j5tyz

INSERT INTO STATS$SGA_TARGET_A...

eight

eight

172983

0.00

1.16

5c4qu2zmj3gux

Load_fnsact@HPGICCI1 (TNS V1-V3)

Select * from ICCIPRODCODE wh...

four

ten

one

3.54

1.46

0yv9t4qb1zb2b

Cuidmain@HPGICCI1 (TNS V1-V3)

Select CUID_CUST_NO, CUID_ID_...

three

thirty-five

one

3.13

5.28

6z06gcfw39pkd

SQL*Plus

SELECT F.TABLESPACE_NAME, TO_...

Back to SQL Statistics

Back to Top

SQL ordered by Gets

Resources reported for PL/SQL code includes the resources used by all SQL statements called by the code.

Total Buffer Gets: 16648792

Captured SQL account for 97.9% of Total

In this part the SQL statements are sorted by Buffer Gets, i.e. by how much logical I/O they performed. The note at the top indicates that the Buffer Gets of a PL/SQL unit include the Buffer Gets of all SQL statements executed by that block of code, so PL/SQL procedures often appear near the top of this list because the gets of the individual statements they call are totalled. Buffer Gets is a cumulative value, so a large number does not necessarily mean the statement performs badly. Usually we compare a statement's Buffer Gets with its physical reads; if the two are close, the statement is worth investigating, and its execution plan will show why the physical reads are so high. The Gets per Exec value is also worth checking: if it is very large, the statement may be using a poor index or an inappropriate table join.

Another point: a large number of logical reads is usually accompanied by high CPU consumption, so when system CPU is close to 100% it is often these SQL statements that are responsible, and the statements with high logical reads listed here are the ones to analyse.

select * from
  (select substr(sql_text,1,40) sql, buffer_gets,
          executions, buffer_gets/executions "Gets/Exec",
          hash_value, address
     from v$sqlarea
    where buffer_gets > 0 and executions > 0
    order by buffer_gets desc)
 where rownum < 11;

-- SQL ordered by Reads: the same pattern, sorted by physical reads
select * from
  (select substr(sql_text,1,40) sql, disk_reads,
          executions, disk_reads/executions "Reads/Exec",
          hash_value, address
     from v$sqlarea
    where disk_reads > 0 and executions > 0
    order by disk_reads desc)
 where rownum < 11;

-- SQL ordered by Executions: sorted by execution count
select * from
  (select substr(sql_text,1,40) sql, executions,
          hash_value, address
     from v$sqlarea
    where executions > 0
    order by executions desc)
 where rownum < 11;

SQL ordered by Parse Calls: a parse calls / executions ratio well above 1 often indicates a problem with the statement: bind variables are not used, the shared pool is too small, cursor_sharing is set to exact, session_cached_cursors is not set, and so on.

select * from
  (select substr(sql_text,1,40) sql, parse_calls,
          executions, hash_value, address
     from v$sqlarea
    where parse_calls > 0
    order by parse_calls desc)
 where rownum < 11;

SQL ordered by Sharable Memory: statements whose sharable_mem exceeds 1048576 bytes (1 MB).

select * from
  (select substr(sql_text,1,40) sql, sharable_mem,
          executions, hash_value, address
     from v$sqlarea
    where sharable_mem > 1048576
    order by sharable_mem desc)
 where rownum < 11;

Complete List of SQL Text

... group by enabled) c where i.obj#=c.enabled (+) and i.obj#=ist.obj# (+) and i.bo#=:1 order by i.obj#

7v9dyf5r424yh

Select NEWACTNO into: b0 from OLDNEWACT where OLDACTNO=:b1

7wwv1ybs9zguz

Update ICCIFNSACT set BORM_ADV_DATE=:b0, BOIS_MATURITY_DATE=:b1, BOIS_UNPD_BAL=:b2, BOIS_UNPD_INT=:b3, BOIS_BAL_FINE=:b4, BOIS_INT_FINE=:b5, BOIS_FINE_FINE=:b6, BORM_LOAN_TRM=:b7, BORM_FIVE_STAT=:b8, BOIS_ARREARS_CTR=:b9, BOIS_ARREARS_SUM=:b10 where BORM_MEMB_CUST_AC=:b11

83taa7kaw59c1

Select name, intcol#, segcol#, type#, length, nvl (precision#, 0), decode (type#, 2, nvl (scale,-127/*MAXSB1MINAL*/), 178,178,179,179,180,181,181,182,182,182,183,183,183,231, scale, 0), null$, fixedstorage, nvl (deflength, 0), default$, rowid, col#, property, nvl (charsetid, 0), nvl (charsetform, 0), spare1, spare2, nvl (spare3, 0) from col$ where obj#=:1 order by intcol#

4qubbrsr0kfn

Insert into wrh$_latch (snap_id, dbid, instance_number, latch_hash, level#, gets, misses, sleeps, immediate_gets, immediate_misses, spin_gets, sleep1, sleep2, sleep3, sleep4, wait_time) select: snap_id,: dbid,: instance_number, hash, level#, gets, misses, sleeps, immediate_gets, immediate_misses, spin_gets, sleep1, sleep2, sleep3, sleep4, wait_time from v$latch order by hash

9qgtwh76xg6nz

Update seg$ set type#=:4, blocks=:5, extents=:6, minexts=:7, maxexts=:8, extsize=:9, extpct=:10, user#=:11, iniexts=:12, lists=decode (: 13, 65535, NULL,: 13), groups=decode (: 14, 65535, NULL,: 14), cachehint=:15, hwmincr=:16, spare1=DECODE (: 17, 0, NULL,: 17), scanhint=:18 where ts#=:1 and file#=:2 and block#=:3

9vtm7gy4fr2ny

Select con# from con$ where owner#=:1 and name=:2

A2any035u1qz1

Select owner#, name from con$ where con#=:1

A7nh7j8zmfrzw

Select CUSV_CUST_NO from CUMI_TMP where CHGFLAG='D'

Back to SQL Statistics

Back to Top

Instance Activity Statistics

Instance Activity Stats

Instance Activity Stats-Absolute Values

Instance Activity Stats-Thread Activity

Back to Top

Instance Activity Stats

Statistic

Total

Per Second

Per Trans

CPU used by this session

23388

4.95

4.18

CPU used when call started

21816

4.61

3.90

CR blocks created

2794

0.59

0.50

Cached Commit SCN referenced

237936

50.33

42.50

Commit SCN cached

three

0.00

0.00

DB time

583424

123.41

104.22

DBWR checkpoint buffers written

402781

85.20

71.95

DBWR checkpoints

nine

0.00

0.00

DBWR fusion writes

two hundred and fifty five

0.05

0.05

DBWR object drop buffers written

0

0.00

0.00

DBWR thread checkpoint buffers written

221341

46.82

39.54

DBWR transaction table writes

one hundred and thirty

0.03

0.02

DBWR undo block writes

219272

46.38

39.17

DFO trees parallelized

sixteen

0.00

0.00

PX local messages recv'd

forty

0.01

0.01

PX local messages sent

forty

0.01

0.01

PX remote messages recv'd

eighty

0.02

0.01

PX remote messages sent

eighty

0.02

0.01

Parallel operations not downgraded

sixteen

0.00

0.00

RowCR-row contention

nine

0.00

0.00

RowCR attempts

fourteen

0.00

0.00

RowCR hits

five

0.00

0.00

SMON posted for undo segment recovery

0

0.00

0.00

SMON posted for undo segment shrink

nine

0.00

0.00

SQL*Net roundtrips to/from client

1544063

326.62

275.82

Active txn count during cleanout

276652

58.52

49.42

Application wait time

1620

0.34

0.29

Auto extends on undo tablespace

0

0.00

0.00

Background checkpoints completed

seven

0.00

0.00

Background checkpoints started

nine

0.00

0.00

Background timeouts

21703

4.59

3.88

Branch node splits

three hundred and thirty seven

0.07

0.06

Buffer is not pinned count

1377184

291.32

246.01

Buffer is pinned count

20996139

4441.37

3750.65

Bytes received via SQL*Net from client

7381397183

1561408.36

1318577.56

Bytes sent via SQL*Net to client

149122035

31544.22

26638.45

Calls to get snapshot scn: kcmgss

1696712

358.91

303.09

Calls to kcmgas

433435

91.69

77.43

Calls to kcmgcs

142482

30.14

25.45

Change write time

4707

1.00

0.84

Cleanout-number of ktugct calls

282045

59.66

50.38

Cleanouts and rollbacks-consistent read gets

fifty-five

0.01

0.01

Cleanouts only-consistent read gets

2406

0.51

0.43

Cluster key scan block gets

21886

4.63

3.91

Cluster key scans

10540

2.23

1.88

Cluster wait time

2855

0.60

0.51

Commit batch/immediate performed

two hundred and ninety four

0.06

0.05

Commit batch/immediate requested

two hundred and ninety four

0.06

0.05

Commit cleanout failures: block lost

2227

0.47

0.40

Commit cleanout failures: callback failure

seven hundred and fifty

0.16

0.13

Commit cleanout failures: cannot pin

four

0.00

0.00

Commit cleanouts

427610

90.45

76.39

Commit cleanouts successfully completed

424629

89.82

75.85

Commit immediate performed

two hundred and ninety four

0.06

0.05

Commit immediate requested

two hundred and ninety four

0.06

0.05

Commit txn count during cleanout

111557

23.60

19.93

Concurrency wait time

five hundred and fifteen

0.11

0.09

Consistent changes

1716

0.36

0.31

Consistent gets

5037471

1065.59

899.87

From the values of consistent gets, db block gets and physical reads we can also calculate the buffer hit ratio. The formula is: buffer hit ratio = 100 * (1 - physical reads / (consistent gets + db block gets)). With the figures in this report: 100 * (1 - 322678 / (5037471 + 11611321)) = 98.06.
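The same ratio can be computed directly from v$sysstat (cumulative since instance startup); a minimal sketch:

select round(100 * (1 - pr.value / (cg.value + dbg.value)), 2) as buffer_hit_ratio
  from v$sysstat pr, v$sysstat cg, v$sysstat dbg
 where pr.name  = 'physical reads'
   and cg.name  = 'consistent gets'
   and dbg.name = 'db block gets';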

Consistent gets-examination

2902016

613.87

518.40

Consistent gets direct

0

0.00

0.00

Consistent gets from cache

5037471

1065.59

899.87

Current blocks converted for CR

0

0.00

0.00

Cursor authentications

four hundred and thirty four

0.09

0.08

Data blocks consistent reads-undo records applied

1519

0.32

0.27

Db block changes

8594158

1817.95

1535.22

Db block gets

11611321

2456.18

2074.19

Db block gets direct

1167830

247.03

208.62

Db block gets from cache

10443491

2209.14

1865.58

Deferred (CURRENT) block cleanout applications

20786

4.40

3.71

Dirty buffers inspected

25007

5.29

4.47

Dirty buffers inspected: the number of dirty (modified) data buffers that were aged out on the LRU list. A value greater than 0 indicates that DBWR is not keeping up; consider adding more DBWR processes.

Drop segment calls in space pressure

0

0.00

0.00

Enqueue conversions

6734

1.42

1.20

Enqueue releases

595149

125.89

106.31

Enqueue requests

595158

125.90

106.32

Enqueue timeouts

nine

0.00

0.00

Enqueue waits

7901

1.67

1.41

Exchange deadlocks

one

0.00

0.00

Execute count

1675112

354.34

299.23

Free buffer inspected

536832

113.56

95.90

This value includes buffers that were skipped because they were dirty, pinned or busy. If free buffer inspected minus dirty buffers inspected minus pinned buffers inspected is still large, many memory blocks cannot be reused, which leads to latch contention; the buffer cache needs to be increased.

Free buffer requested

746999

158.01

133.44

Gc CPU used by this session

9099

1.92

1.63

Gc cr block build time

thirteen

0.00

0.00

Gc cr block flush time

one hundred and forty three

0.03

0.03

Gc cr block receive time

four hundred and seventy four

0.10

0.08

Gc cr block send time

thirty-six

0.01

0.01

Gc cr blocks received

4142

0.88

0.74

Gc cr blocks served

10675

2.26

1.91

Gc current block flush time

twenty-three

0.00

0.00

Gc current block pin time

thirty-four

0.01

0.01

Gc current block receive time

1212

0.26

0.22

Gc current block send time

fifty-two

0.01

0.01

Gc current blocks received

15502

3.28

2.77

Gc current blocks served

17534

3.71

3.13

Gc local grants

405329

85.74

72.41

Gc remote grants

318630

67.40

56.92

Gcs messages sent

1129094

238.84

201.70

Ges messages sent

90695

19.18

16.20

Global enqueue get time

1707

0.36

0.30

Global enqueue gets async

12731

2.69

2.27

Global enqueue gets sync

190492

40.30

34.03

Global enqueue releases

190328

40.26

34.00

Global undo segment hints helped

0

0.00

0.00

Global undo segment hints were stale

0

0.00

0.00

Heap block compress

108758

23.01

19.43

Hot buffers moved to head of LRU

18652

3.95

3.33

Immediate (CR) block cleanout applications

2462

0.52

0.44

Immediate (CURRENT) block cleanout applications

325184

68.79

58.09

Index crx upgrade (positioned)

4663

0.99

0.83

Index fast full scans (full)

thirteen

0.00

0.00

Index fetch by key

852181

180.26

152.23

Index scans kdiixs1

339583

71.83

60.66

Leaf node 90-10 splits

thirty-four

0.01

0.01

Leaf node splits

106552

22.54

19.03

Lob reads

eleven

0.00

0.00

Lob writes

eighty-three

0.02

0.01

Lob writes unaligned

eighty-three

0.02

0.01

Local undo segment hints helped

0

0.00

0.00

Local undo segment hints were stale

0

0.00

0.00

Logons cumulative

sixty-one

0.01

0.01

Messages received

20040

4.24

3.58

Messages sent

19880

4.21

3.55

No buffer to keep pinned count

0

0.00

0.00

No work-consistent read gets

1513070

320.06

270.29

Opened cursors cumulative

183375

38.79

32.76

Parse count (failures)

one

0.00

0.00

Parse count (hard)

one hundred and forty three

0.03

0.03

Parse count (total)

182780

38.66

32.65

With parse count (hard) and parse count (total), the soft parse rate can be calculated as:

100 - 100 * (parse count (hard) / parse count (total)) = 100 - 100 * (143 / 182780) = 99.92
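Computed the same way from v$sysstat (cumulative values); a minimal sketch:

select round(100 * (1 - hard.value / total.value), 2) as soft_parse_pct
  from v$sysstat hard, v$sysstat total
 where hard.name  = 'parse count (hard)'
   and total.name = 'parse count (total)';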

Parse time cpu

twenty-seven

0.01

0.00

Parse time elapsed

three hundred and thirty eight

0.07

0.06

Physical read IO requests

82815

17.52

14.79

Physical read bytes

2643378176

559161.45

472200.46

Physical read total IO requests

98871

20.91

17.66

Physical read total bytes

2905491456

614607.04

519023.13

Physical read total multi block requests

24089

5.10

4.30

Physical reads

322678

68.26

57.64

Physical reads cache

213728

45.21

38.18

Physical reads cache prefetch

191830

40.58

34.27

Physical reads direct

108950

23.05

19.46

Physical reads direct temporary tablespace

108812

23.02

19.44

Physical reads prefetch warmup

0

0.00

0.00

Physical write IO requests

223456

47.27

39.92

Physical write bytes

14042071040

2970360.02

2508408.55

Physical write total IO requests

133835

28.31

23.91

Physical write total bytes

23114268672

4889428.30

4129022.63

Physical write total multi block requests

116135

24.57

20.75

Physical writes

1714120

362.59

306.20

Physical writes direct

1276780

270.08

228.08

Physical writes direct (lob)

0

0.00

0.00

Physical writes direct temporary tablespace

108812

23.02

19.44

Physical writes from cache

437340

92.51

78.12

Physical writes non checkpoint

1673703

354.04

298.98

Pinned buffers inspected

ten

0.00

0.00

Prefetch clients-default

0

0.00

0.00

Prefetch warmup blocks aged out before use

0

0.00

0.00

Prefetch warmup blocks flushed out before use

0

0.00

0.00

Prefetched blocks aged out before use

0

0.00

0.00

Process last non-idle time

4730

1.00

0.84

Queries parallelized

sixteen

0.00

0.00

Recursive calls

1654650

350.01

295.58

Recursive cpu usage

2641

0.56

0.47

Redo blocks written

8766094

1854.32

1565.93

Redo buffer allocation retries

twenty-four

0.01

0.00

Redo entries

4707068

995.70

840.85

Redo log space requests

thirty-four

0.01

0.01

Redo log space wait time

fifty

0.01

0.01

Redo ordering marks

277042

58.60

49.49

Redo size

4343559400

918805.72

775912.72

Redo subscn max counts

2693

0.57

0.48

Redo synch time

four hundred and eight

0.09

0.07

Redo synch writes

6984

1.48

1.25

Redo wastage

1969620

416.64

351.84

Redo write time

5090

1.08

0.91

Redo writer latching time

one

0.00

0.00

Redo writes

5494

1.16

0.98

Rollback changes-undo records applied

166609

35.24

29.76

Rollbacks only-consistent read gets

1463

0.31

0.26

Rows fetched via callback

342159

72.38

61.12

Session connect time

1461

0.31

0.26

Session cursor cache hits

180472

38.18

32.24

Session logical reads

16648792

3521.77

2974.06

Session pga memory

37393448

7909.94

6679.79

Session pga memory max

45192232

9559.64

8072.92

Session uga memory

30067312240

6360225.77

5371081.14

Session uga memory max

61930448

13100.33

11062.96

Shared hash latch upgrades-no wait

6364

1.35

1.14

Shared hash latch upgrades-wait

0

0.00

0.00

Sorts (disk)

four

0.00

0.00

Disk sorts should generally account for no more than 5% of all sorts. If the proportion exceeds 5%, increase PGA_AGGREGATE_TARGET (or SORT_AREA_SIZE). Note that SORT_AREA_SIZE is allocated per session, while PGA_AGGREGATE_TARGET is a target for all sessions combined.

Sorts (memory)

2857

0.60

0.51

Number of sorts in memory

Sorts (rows)

42379505

8964.66

7570.47

Space was found by tune down

0

0.00

0.00

Space was not found by tune down

0

0.00

0.00

Sql area evicted

seven

0.00

0.00

Sql area purged

forty-four

0.01

0.01

Steps of tune down ret. In space pressure

0

0.00

0.00

Summed dirty queue length

35067

7.42

6.26

Switch current to new buffer

seventeen

0.00

0.00

Table fetch by rowid

680469

143.94

121.56

This is the number of rows obtained through an index or where rowid= statement, and the higher the value, the better.

Table fetch continued row

0

0.00

0.00

This is the row on which the row migration occurs. When the situation of row migration is serious, this part needs to be optimized.

Check the method of row migration:

1) run $ORACLE_HOME/rdbms/admin/utlchain.sql

2) analyze table table_name list chained rows into CHAINED_ROWS

3) select * from CHAINED_ROWS where table_name='table_name'

Method of cleanup:

Method 1: create table table_name_tmp as select * from table_name where rowid in (select head_rowid from chained_rows)

Delete from table_name where rowid in (select head_rowid from chained_rows)

Insert into table_name select * from table_name_tmp

Method 2:create table table_name_tmp select * from table_name

Truncate table table_name

Insert into table_name select * from table_name_tmp

Method 3: export the table with the exp tool, then delete the table, and finally import the table with the imp tool

Method 4:alter table table_name move tablespace tablespace_name, and then re-index the table

The above four methods can be used to eliminate the existing row migration phenomenon, but in many cases, the PCT_FREE parameter setting is too small, so it is necessary to adjust the value of the PCT_FREE parameter.
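For illustration (the table and index names below are made up), raising PCTFREE and rebuilding the segment is the usual way to prevent further migration once the existing migrated rows have been cleaned up by one of the methods above:

-- Leave more free space in each block for future row growth.
alter table cust_demo pctfree 20;

-- Rebuild the segment so existing rows are stored with the new PCTFREE,
-- then rebuild the indexes invalidated by the move.
alter table cust_demo move;
alter index cust_demo_pk rebuild;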

Table scan blocks gotten

790986

167.32

141.30

Table scan rows gotten

52989363

11208.99

9465.77

Table scans (long tables)

four

0.00

0.00

A long table is a table larger than the _SMALL_TABLE_THRESHOLD portion of the buffer cache. If a database performs too many large table scans, the db file scattered read wait event is usually also significant. If the per Trans value of table scans (long tables) is greater than 0, you may need to add appropriate indexes to optimize the SQL statements.

Table scans (short tables)

169201

35.79

30.23

Short tables are tables smaller than 2% of the buffer cache (the 2% is defined by the hidden parameter _SMALL_TABLE_THRESHOLD, whose meaning differs between Oracle versions: in 9i and 10g it is 2% of the cache, in 8i it is 20 blocks, and in v7 it is 5 blocks). Such tables are preferentially read with full table scans and generally do not use indexes. The _SMALL_TABLE_THRESHOLD value can be estimated as (db_cache_size / 8192) * 2%.

Note: modification of the _ SMALL_TABLE_THRESHOLD parameter is a very dangerous operation

Total number of times SMON posted

two hundred and fifty nine

0.05

0.05

Transaction lock background get time

0

0.00

0.00

Transaction lock background gets

0

0.00

0.00

Transaction lock foreground requests

0

0.00

0.00

Transaction lock foreground wait time

0

0.00

0.00

Transaction rollbacks

two hundred and ninety four

0.06

0.05

Tune down retentions in space pressure

0

0.00

0.00

Undo change vector size

1451085596

306952.35

259215.00

User I/O wait time

11992

2.54

2.14

User calls

1544383

326.69

275.88

User commits

eight hundred and twelve

0.17

0.15

User rollbacks

4786

1.01

0.85

Workarea executions-onepass

one

0.00

0.00

Workarea executions-optimal

1616

0.34

0.29

Write clones created in background

0

0.00

0.00

Write clones created in foreground

eleven

0.00

0.00

Back to Instance Activity Statistics

Back to Top

Instance Activity Stats-Absolute Values

Statistics with absolute values (should not be diffed)

Statistic

Begin Value

End Value

Session cursor cache count

3024

3592

Opened cursors current

thirty-seven

thirty-nine

Logons current

twenty-four

twenty-six

Back to Instance Activity Statistics

Back to Top

Instance Activity Stats-Thread Activity

Statistics identified by '(derived)' come from sources other than SYSSTAT

Statistic

Total

Per Hour

Log switches (derived)

nine

6.85

Back to Instance Activity Statistics

Back to Top

IO Stats

Tablespace IO Stats

File IO Stats

Back to Top

In general, read and write operations are expected to be evenly distributed across devices; this section helps find files that may be "hot". Once the DBA understands how this data is read and written, some performance improvement may be gained by distributing I/O more evenly among the disks.

The main focus here is the Av Rd (ms) column (the average read time in milliseconds). Generally this value can be kept below 14 ms on most disk systems, and Oracle considers that it should not need to exceed 20 ms. If the value exceeds 1000 ms, there is almost certainly an I/O bottleneck. If "#" characters appear in this column, the system may have a serious I/O problem, or it may simply be a display-format issue.

When the above problems arise, we can consider the following ways:

1) optimize the related statements that operate on the tablespace or file.

2) if the tablespace contains indexes, consider compressing the indexes so that they occupy less space, thus reducing the I/O.

3) spread the tablespace across multiple logical volumes to balance the I/O load.

4) the degree of multi-block reads can be adjusted with the parameter DB_FILE_MULTIBLOCK_READ_COUNT, which improves the efficiency of full table scans. It also brings a side effect: Oracle will favour more full table scans and use some indexes less. To counter this, set the parameter OPTIMIZER_INDEX_COST_ADJ as well, e.g. OPTIMIZER_INDEX_COST_ADJ=30 (a value of 10-50 is generally recommended); see the sketch below.

About OPTIMIZER_INDEX_COST_ADJ=n: this parameter is a percentage, with a default of 100, and can be understood as the ratio FULL SCAN COST / INDEX SCAN COST. When n% * INDEX SCAN COST < FULL SCAN COST, the optimizer chooses the index.
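As a sketch of those adjustments (the values are examples only, to be validated against your own workload):

-- Favour index access: index cost is scaled to 30% of its computed value.
alter session set optimizer_index_cost_adj = 30;

-- Encourage larger multiblock reads for full scans (instance-wide example).
alter system set db_file_multiblock_read_count = 16 scope = both;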

Buffer Pool Advisory

Only rows with estimated physical reads > 0 are displayed

Ordered by Block Size, Buffers For Estimate

This is oracle's recommendation for resizing buffer pool. From the data of advisory, of course, the larger the buffer, the smaller the physical read. With the increase of buffer, the performance improvement of physical read is getting smaller and smaller. The current buffer is set to 5120m, and the physical read factor is 1. We can see that with the expansion of buffer pool before 3G, the improvement of physical reading is very obvious, and after that, the degree of improvement is getting lower and lower.
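The underlying advisory data can also be queried directly while the instance is running; a minimal sketch against v$db_cache_advice (assuming the advisory is enabled and an 8 KB default block size):

select size_for_estimate, size_factor,
       estd_physical_read_factor, estd_physical_reads
  from v$db_cache_advice
 where name = 'DEFAULT'
   and block_size = 8192
 order by size_for_estimate;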

P

Size for Est (M)

Size Factor

Buffers for Estimate

Est Phys Read Factor

Estimated Physical Reads

D

three hundred and twenty

0.10

38380

1.34

10351726

D

six hundred and forty

0.19

76760

1.25

9657000

D

nine hundred and sixty

0.29

115140

1.08

8365242

D

1280

0.38

153520

1.04

8059415

D

1600

0.48

191900

1.02

7878202

D

1920

0.57

230280

1.01

7841140

D

2240

0.67

268660

1.01

7829141

D

2560

0.77

307040

1.01

7817370

D

2880

0.86

345420

1.01

7804884

D

3200

0.96

383800

1.00

7784014

D

3344

1.00

401071

1.00

7748403

D

3520

1.05

422180

0.99

7702243

D

3840

1.15

460560

0.99

7680429

D

4160

1.24

498940

0.99

7663046

D

4480

1.34

537320

0.99

7653232

D

4800

1.44

575700

0.99

7645544

D

5120

1.53

614080

0.98

7630008

D

5440

1.63

652460

0.98

7616886

D

5760

1.72

690840

0.98

7614591

D

6080

1.82

729220

0.98

7613191

D

6400

1.91

767600

0.98

7599930

Back to Advisory Statistics

Back to Top

PGA Aggr Summary

PGA Cache Hit % - percentage of W/A (workarea) data processed only in memory

PGA Cache Hit %: 87.91, W/A MB Processed: 1100, Extra W/A MB Read/Written: 151

Back to Advisory Statistics

Back to Top

PGA Aggr Target Stats

B: Begin snap, E: End snap (rows identified with B or E contain data which is absolute, i.e. not diffed over the interval)

Auto PGA Target - actual workarea memory target
W/A PGA Used - amount of memory used for all workareas (manual + auto)
% PGA W/A Mem - percentage of PGA memory allocated to workareas
% Auto W/A Mem - percentage of workarea memory controlled by Auto Mem Mgmt
% Man W/A Mem - percentage of workarea memory under manual control

PGA Aggr Target (M), Auto PGA Target (M), PGA Mem Alloc (M), W/A PGA Used (M), % PGA W/A Mem, % Auto W/A Mem, % Man W/A Mem, Global Mem Bound (K)

B: 1024, 862, 150.36, 0.00, 0.00, 0.00, 0.00, 104850
E: 1024, 860, 154.14, 0.00, 0.00, 0.00, 0.00, 104850

Back to Advisory Statistics

Back to Top

PGA Aggr Target Histogram

Optimal Executions are purely in-memory operations

Low Optimal

High Optimal

Total Execs

Optimal Execs

1-Pass Execs

M-Pass Execs

2K

4K

1385

1385

0

0

64K

128K

twenty-eight

twenty-eight

0

0

128K

256K

five

five

0

0

256K

512K

seventy-nine

seventy-nine

0

0

512K

1024K

one hundred and eight

one hundred and eight

0

0

1M

2M

seven

seven

0

0

8M

16M

one

one

0

0

128M

256M

three

two

one

0

256M

512M

one

one

0

0

Back to Advisory Statistics

Back to Top

PGA Memory Advisory

When using Auto Memory Mgmt, minimally choose a pga_aggregate_target value where Estd PGA Overalloc Count is 0
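The same advisory is available online in v$pga_target_advice; a minimal sketch:

select round(pga_target_for_estimate / 1024 / 1024) as target_mb,
       pga_target_factor,
       estd_pga_cache_hit_percentage,
       estd_overalloc_count
  from v$pga_target_advice
 order by pga_target_for_estimate;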

PGA Target Est (MB)

Size Factr

W/A MB Processed

Estd Extra W/A MB Read/ Written to Disk

Estd PGA Cache Hit%

Estd PGA Overalloc Count

one hundred and twenty eight

0.13

4652.12

2895.99

62.00

0

two hundred and fifty six

0.25

4652.12

2857.13

62.00

0

five hundred and twelve

0.50

4652.12

2857.13

62.00

0

seven hundred and sixty eight

0.75

4652.12

2857.13

62.00

0

1024

1.00

4652.12

717.82

87.00

0

1229

1.20

4652.12

717.82

87.00

0

1434

1.40

4652.12

717.82

87.00

0

1638

1.60

4652.12

717.82

87.00

0

1843

1.80

4652.12

717.82

87.00

0

2048

2.00

4652.12

717.82

87.00

0

3072

3.00

4652.12

717.82

87.00

0

4096

4.00

4652.12

717.82

87.00

0

6144

6.00

4652.12

717.82

87.00

0

8192

8.00

4652.12

717.82

87.00

0

Back to Advisory Statistics

Back to Top

Shared Pool Advisory

SP: Shared Pool Est LC: Estimated Library Cache Factr: Factor

Note there is often a 1:many correlation between a single logical object in the library cache and the number of physical memory objects associated with it. Therefore comparing the number of library cache objects (e.g. in v$librarycache) with the number of library cache memory objects is invalid.

Shared Pool Size (M)

SP Size Factr

Est LC Size (M)

Est LC Mem Obj

Est LC Time Saved (s)

Est LC Time Saved Factr

Est LC Load Time (s)

Est LC Load Time Factr

Est LC Mem Obj Hits

three hundred and four

0.43

seventy-eight

7626

64842

1.00

thirty-one

1.00

3206955

three hundred and eighty four

0.55

seventy-eight

7626

64842

1.00

thirty-one

1.00

3206955

four hundred and sixty four

0.66

seventy-eight

7626

64842

1.00

thirty-one

1.00

3206955

five hundred and forty four

0.77

seventy-eight

7626

64842

1.00

thirty-one

1.00

3206955

six hundred and twenty four

0.89

seventy-eight

7626

64842

1.00

thirty-one

1.00

3206955

seven hundred and four

1.00

seventy-eight

7626

64842

1.00

thirty-one

1.00

3206955

seven hundred and eighty four

1.11

seventy-eight

7626

64842

1.00

thirty-one

1.00

3206955

eight hundred and sixty four

1.23

seventy-eight

7626

64842

1.00

thirty-one

1.00

3206955

nine hundred and forty four

1.34

seventy-eight

7626

64842

1.00

thirty-one

1.00

3206955

1024

1.45

seventy-eight

7626

64842

1.00

thirty-one

1.00

3206955

1104

1.57

seventy-eight

7626

64842

1.00

thirty-one

1.00

3206955

1184

1.68

seventy-eight

7626

64842

1.00

thirty-one

1.00

3206955

1264

1.80

seventy-eight

7626

64842

1.00

thirty-one

1.00

3206955

1344

1.91

seventy-eight

7626

64842

1.00

thirty-one

1.00

3206955

1424

2.02

seventy-eight

7626

64842

1.00

thirty-one

1.00

3206955

Back to Advisory Statistics

Back to Top

SGA Target Advisory

SGA Target Size (M)

SGA Size Factor

Est DB Time (s)

Est Physical Reads

1024

0.25

9060

9742760

2048

0.50

7612

7948245

3072

0.75

7563

7886258

4096

1.00

7451

7748338

5120

1.25

7423

7713470

6144

1.50

7397

7680927

7168

1.75

7385

7666980

8192

2.00

7385

7666980

Back to Advisory Statistics

Back to Top

Streams Pool Advisory

No data exists for this section of the report.

Back to Advisory Statistics

Back to Top

Java Pool Advisory

No data exists for this section of the report.

Back to Advisory Statistics

Back to Top

Wait Statistics

Buffer Wait Statistics

Enqueue Activity

Back to Top

Buffer Wait Statistics

Ordered by wait time desc, waits desc

Class: Waits, Total Wait Time (s), Avg Time (ms)

data block: 3, 0, 23
undo header: 616, 0, 0
file header block: 8, 0, 0
undo block: 7, 0, 0

Back to Wait Statistics

Back to Top

Enqueue Activity

Only enqueues with waits are shown

Enqueue stats gathered prior to 10g should not be compared with 10g data

Ordered by Wait Time desc, Waits desc

Enqueue Type (Request Reason)

Requests

Succ Gets

Failed Gets

Waits

Wt Time (s)

Av Wt Time (ms)

FB-Format Block

14075

14075

0

7033

three

0.43

US-Undo Segment

nine hundred and sixty four

nine hundred and sixty four

0

five hundred and fifty six

0

0.32

WF-AWR Flush

twenty-four

twenty-four

0

fourteen

0

9.00

HW-Segment High Water Mark

4223

4223

0

thirty-seven

0

1.22

CF-Controlfile Transaction

10548

10548

0

fifty-eight

0

0.67

TX-Transaction (index contention)

one

one

0

one

0

35.00

TM-DML

121768

121761

six

seventy

0

0.43

PS-PX Process Reservation

one hundred and three

one hundred and three

0

forty-six

0

0.65

TT-Tablespace

9933

9933

0

thirty-nine

0

0.54

TD-KTF map table enqueue (KTF dump entries)

twelve

twelve

0

twelve

0

1.42

TA-Instance Undo

eighteen

eighteen

0

thirteen

0

0.38

PI-Remote PX Process Spawn Status

sixteen

sixteen

0

eight

0

0.50

MW-MWIN Schedule

three

three

0

three

0

0.67

DR-Distributed Recovery

three

three

0

three

0

0.33

TS-Temporary Segment

fourteen

eleven

three

three

0

0.33

AF-Advisor Framework (task serialization)

fourteen

fourteen

0

one

0

1.00

JS-Job Scheduler (job run lock-synchronize)

two

two

0

one

0

1.00

UL-User-defined

two

two

0

one

0

1.00

MD-Materialized View Log DDL

six

six

0

two

0

0.00

Back to Wait Statistics

Back to Top

Undo Statistics

Undo Segment Summary

Undo Segment Stats

Back to Top

From 9i onward, undo (rollback) segments are generally managed automatically, so in most cases little attention is needed here.

The main thing to watch is pct waits. If there are noticeable waits, increase the number of rollback segments or their space. Also observe how the individual rollback segments are used; ideally the Avg Active of each rollback segment is roughly balanced.

Before Oracle 9i, with manually managed rollback segments, an optimal value could be specified as the shrink target of a rollback segment; if it was not set, the default was initial + (minextents - 1) * next extents, which meant the growth of the rollback segment could not be limited, and when the optimal value was exceeded Oracle would shrink the segment back to that size when appropriate. From 9i onward undo is usually in automatic mode; in this mode no shrink target can be specified and there appears to be no default, so the rollback segments keep growing until the tablespace is 100% used, and if the tablespace is set to autoextend it is even worse, since undo grows without limit. Here we can also see that the shrinks value is 0, i.e. the segments have never shrunk.

Segment Summary

Min/Max TR (mins)-Min and Max Tuned Retention (minutes)

STO-Snapshot Too Old count, OOS-Out of Space count

Undo segment block stats:

US-unexpired Stolen, uR-unexpired Released, uU-unexpired reUsed

ES-expired Stolen, eR-expired Released, eU-expired reUsed

Undo TS#: 1, Num Undo Blocks (K): 219.12, Number of Transactions: 113405, Max Qry Len (s): 0, Max Tx Concurcy: 6, Min/Max TR (mins): 130.95/239.25, STO/OOS: 0/0, uS/uR/uU/eS/eR/eU: 0/0/0/13/24256/0

Back to Undo Statistics

Back to Top

Undo Segment Stats

Most recent 35 Undostat rows, ordered by Time desc

End Time

Num Undo Blocks

Number of Transactions

Max Qry Len (s)

Max Tx Concy

Tun Ret (mins)

STO/ OOS

US/uR/uU/ eS/eR/eU

25-Dec 15:18

182021

74309

0

five

one hundred and thirty one

0/0

0/0/0/13/24256/0

25-Dec 15:08

fifty-seven

one hundred and seventy

0

three

two hundred and thirty nine

0/0

0/0/0/0/0/0

25-Dec 14:58

sixty-eight

thirty-one

0

two

two hundred and twenty nine

0/0

0/0/0/0/0/0

25-Dec 14:48

one hundred and ninety four

4256

0

four

two hundred and nineteen

0/0

0/0/0/0/0/0

25-Dec 14:38

five hundred and seventy

12299

0

five

two hundred and nine

0/0

0/0/0/0/0/0

25-Dec 14:28

36047

21328

0

six

two hundred

0/0

0/0/0/0/0/0

25-Dec 14:18

seventy

nine hundred and seven

0

three

one hundred and sixty two

0/0

0/0/0/0/0/0

25-Dec 14:08

ninety-one

one hundred and five

0

three

one hundred and fifty four

0/0

0/0/0/0/0/0

Back to Undo Statistics

Back to Top

Latch Statistics

Latch Activity

Latch Sleep Breakdown

Latch Miss Sources

Parent Latch Statistics

Child Latch Statistics

Back to Top

A latch is a low-level serialization mechanism that prevents concurrent access to memory structures and protects the shared memory structures of the System Global Area (SGA). A latch is a memory lock that is acquired and released very quickly; if a latch is not available, a latch free miss is recorded.

There are two types of latch requests: willing-to-wait and immediate (not willing to wait).

For a willing-to-wait latch, if a process does not get the latch on the first attempt, it spins and retries; if it still cannot get the latch after _spin_count attempts, the process goes to sleep, wakes up after a short interval (about a hundredth of a second), and repeats the cycle, with the sleep time growing longer and longer. The default in 8i/9i is _spin_count=2000.

For an immediate (not-willing-to-wait) latch, if the latch is not available at once, the process does not wait for it and continues with other work instead.

Most latch problems can be summed up as follows:

failure to use bind variables (library cache latch and shared pool latch), redo generation problems (redo allocation latch), buffer cache contention problems (cache buffers lru chain), and the existence of "hot" blocks in the buffer cache (cache buffers chains).

Some latch waits are also related to bugs, so keep an eye on the Metalink bug notices and patch releases.

When the latch miss ratio is greater than 0.5%, the latch waits need to be investigated.
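
For instance, a rough cross-check against the cumulative statistics (the 0.5% threshold here is only illustrative, and v$latch counts since instance startup):

SELECT name, gets, misses, sleeps,
       ROUND(100 * misses / NULLIF(gets, 0), 3) AS pct_get_miss,
       immediate_gets, immediate_misses
FROM v$latch
WHERE gets > 0
AND 100 * misses / gets > 0.5
ORDER BY pct_get_miss DESC;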

If the SQL statements cannot be changed, on version 8.1.6 and above you can force bind variables on the server side by setting CURSOR_SHARING=FORCE. This parameter can have side effects, including worse execution plans; there are also related bugs for Java programs, so check the Metalink bug notices for your specific application.

Here are several important types of latch waits:

1) latch free: when 'latch free' appears high among the wait events, it indicates a possible performance problem. Use this section to identify exactly which type of latch the waits occur on before making any adjustment.

2) cache buffers chains: cbc latch contention indicates hot blocks. Why? To understand this we first need to understand what the cbc latches do. Oracle manages the buffer cache through hash buckets (their number is defined by _db_block_hash_buckets), and the cbc latches are there to protect the buffer cache: when there are concurrent access requests, the cbc latch serializes them. Once we hold the cbc latch we can start to access the data; if the block we want is already in one of the buckets we read it straight from memory, release the cbc latch, and it can be acquired by other users. Acquiring and releasing a cbc latch is very fast, so in this case there is normally no wait. If, however, the requested data is not in memory, we have to read the block from disk, which takes a long time compared with latch operations; and once the block is found, if other users are accessing it and there is no free ITL slot on the block to take our request, we have to wait. During all of this the cbc latch is still held, because we have not yet obtained the requested data, so other users cannot get the latch, and cbc latch waits appear. In the end, then, this wait is caused by blocks that are relatively hot.
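
One common way to locate such hot blocks, as a rough sketch (x$bh is a SYS-only fixed table, and :hladdr is just a placeholder bind for a child latch address found in the first query):

-- child cache buffers chains latches with the most sleeps
SELECT addr, child#, gets, misses, sleeps
FROM v$latch_children
WHERE name = 'cache buffers chains'
ORDER BY sleeps DESC;

-- buffers protected by one hot child latch; a high touch count (tch) usually marks the hot block
SELECT o.owner, o.object_name, b.file#, b.dbablk, b.tch
FROM x$bh b, dba_objects o
WHERE o.data_object_id = b.obj
AND b.hladdr = :hladdr
ORDER BY b.tch DESC;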

For remedies, refer to the hot-block solutions under 3) buffer busy waits in the wait events section.

3) cache buffers lru chain: this latch is used when scanning the LRU list of buffers. Three situations cause contention: a) the buffer cache is too small; b) the buffer cache is over-used, or there are too many cache-based sorts; c) DBWR cannot keep up. Solution: find and tune the statements with the highest logical reads (see the sketch below), and increase the buffer cache.
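
A rough way to find those heavy-logical-read statements (a top-10 sketch; the column choice is illustrative):

SELECT *
FROM (SELECT sql_id, buffer_gets, executions,
             ROUND(buffer_gets / GREATEST(executions, 1)) AS gets_per_exec,
             SUBSTR(sql_text, 1, 60) AS sql_text
      FROM v$sql
      ORDER BY buffer_gets DESC)
WHERE ROWNUM <= 10;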

4) Library cache and shared pool contention:

The library cache is a hash table that is accessed through an array of hash buckets (similar to the buffer cache), and the library cache latch serializes access to it. When a SQL statement (or PL/SQL procedure, package, function, or trigger) is to be executed, a library cache latch must first be obtained, and the library cache is then searched so that parsed statements can be reused. In 8i there is only one library cache latch; in 9i there are 7 child latches, and the number can be changed with the parameter _KGL_LATCH_COUNT (up to 66). Contention on the 'shared pool', 'library cache pin' or 'library cache' latches occurs when the shared pool is too small or statements are rarely reused. The remedy is to increase the shared pool or set CURSOR_SHARING=FORCE | SIMILAR, and of course to tune the SQL statements themselves. To reduce contention we can also use DBMS_SHARED_POOL.KEEP to pin larger SQL statements or procedures in the shared pool.
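
A hedged example of such pinning (the package name is made up; DBMS_SHARED_POOL may first have to be installed with ?/rdbms/admin/dbmspool.sql):

EXEC DBMS_SHARED_POOL.KEEP('ICCI01.SOME_HOT_PACKAGE', 'P');  -- 'P' = procedure/function/package

-- check what is currently kept in the library cache
SELECT owner, name, type, kept
FROM v$db_object_cache
WHERE kept = 'YES';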

The shared pool memory structure is similar to the buffer cache and is also managed through hash buckets. The shared pool has a fixed number of hash buckets, protected by a fixed number of library cache latches that serialize access to this memory. When the database starts, 509 hash buckets and 2*CPU_COUNT library cache latches are allocated. As more and more objects are kept in the shared pool, Oracle increases the number of hash buckets through the following steps: 509, 1021, 4093, 8191, 32749, 65521, 131071, 4292967293. This can be controlled with the parameter _KGL_BUCKET_COUNT; its default value 0 corresponds to 509 buckets, and the maximum value 8 corresponds to 131071 buckets.

We can inspect the individual memory chunks of the shared pool through x$ksmsp, focusing on the following columns:

KSMCHCOM - the type (comment) of the memory chunk

KSMCHPTR - the physical address of the chunk

KSMCHSIZ - the size of the chunk

KSMCHCLS - the class of the chunk. 'recr' is a recreatable chunk currently in use that can be flushed when shared pool memory runs low; 'freeabl' is a chunk currently in use that can be freed; 'free' is an idle, unallocated chunk; 'perm' is a permanently allocated chunk that cannot be released.
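
For example, a quick summary of shared pool chunks by class (x$ksmsp is SYS-only and can be expensive to query on a busy system, so treat this purely as a diagnostic sketch):

SELECT ksmchcls AS chunk_class,
       COUNT(*) AS chunks,
       ROUND(SUM(ksmchsiz) / 1024 / 1024, 1) AS total_mb,
       MAX(ksmchsiz) AS largest_chunk
FROM x$ksmsp
GROUP BY ksmchcls
ORDER BY total_mb DESC;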

To reduce latch contention on the shared pool, the main measures to consider are:

1. Use bind variables

2. Use cursor sharing

3. Set the session_cached_cursors parameter. Its purpose is to move cursors from the shared pool into the session's PGA, which reduces contention on the shared pool. In general an initial value of 100 is reasonable, adjusted as appropriate (see the sketch after this list).

4. Set a shared pool of the right size
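
A sketch of points 2 and 3 (the values are only illustrative and must be validated for the specific system; session_cached_cursors is static at the system level, so it only takes effect after a restart):

ALTER SYSTEM SET cursor_sharing = FORCE SCOPE=BOTH;
ALTER SYSTEM SET session_cached_cursors = 100 SCOPE=SPFILE;

-- how effective session cursor caching currently is
SELECT name, value
FROM v$sysstat
WHERE name IN ('session cursor cache hits', 'parse count (total)');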

5) Redo copy: this latch is used to copy redo records from the PGA into the redo log buffer. The initial number of latches is 2*CPU_COUNT, and it can be increased by setting the parameter _LOG_SIMULTANEOUS_COPIES to reduce contention.

6) Redo allocation: this latch is used to allocate space in the redo log buffer. There are three ways to reduce this type of contention (a sketch follows the list):

Increase redo log buffer

Appropriate use of the nologging option

Avoid unnecessary commit operations
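
A sketch of these measures (sizes and table names are placeholders; NOLOGGING is only acceptable where recoverability allows it):

ALTER SYSTEM SET log_buffer = 10485760 SCOPE=SPFILE;  -- larger redo log buffer, needs a restart

-- direct-path load into a NOLOGGING staging table generates far less redo
CREATE TABLE cuid_stage NOLOGGING AS SELECT * FROM cuid_tmp WHERE 1 = 0;
INSERT /*+ APPEND */ INTO cuid_stage SELECT * FROM cuid_tmp;
COMMIT;  -- one commit for the whole batch instead of one per row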

7) Row cache objects: contention on this latch usually indicates contention on the data dictionary cache, and often too much parsing of statements that rely on public synonyms. Remedies: 1) increase the shared pool; 2) use locally managed tablespaces, especially for index tablespaces.

Latch event

Suggest a solution

Library cache

Use bind variables; adjust shared_pool_size.

Shared pool

Use bind variables; adjust shared_pool_size.

Redo allocation

Reduce the production of redo and avoid unnecessary commits.

Redo copy

Increase _log_simultaneous_copies.

Row cache objects

Increase shared_pool_size.

Cache buffers chain

Increase _DB_BLOCK_HASH_BUCKETS; make it prime.

Cache buffers LRU chain

Use multiple buffer pools; adjust queries that cause a large number of logical reads

Note: many hidden parameters are mentioned above, together with ways of using them to relieve latch contention, but in practice it is strongly recommended not to change the default values of hidden parameters.

Latch Activity

Get Requests, "Pct Get Miss" and "Avg Slps/Miss" are statistics for willing-to-wait latch get requests

"NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests

"Pct Misses" for both should be very close to 0.0

Latch Name

Get Requests

Pct Get Miss

Avg Slps / Miss

Wait Time (s)

NoWait Requests

Pct NoWait Miss

ASM db client latch

11883

0.00

0

0

AWR Alerted Metric Element list

18252

0.00

0

0

Consistent RBA

5508

0.02

0.00

0

0

FOB s.o list latch

731

0.00

0

0

JS broadcast add buf latch

6193

0.00

0

0

JS broadcast drop buf latch

6194

0.00

0

0

JS broadcast load blnc latch

6057

0.00

0

0

JS mem alloc latch

8

0.00

0

0

JS queue access latch

8

0.00

0

0

JS queue state obj latch

218086

0.00

0

0

JS slv state obj latch

31

0.00

0

0

KCL gc element parent latch

2803392

0.04

0.01

0

108

0.00

KJC message pool free list

43168

0.06

0.00

0

14532

0.01

KJCT flow control latch

563875

0.00

0.00

0

0

KMG MMAN ready and startup request latch

1576

0.00

0

0

KSXR large replies

320

0.00

0

0

KTF sga latch

23

0.00

0

1534

0.00

KWQMN job cache list latch

352

0.00

0

0

KWQP Prop Status

5

0.00

0

0

MQL Tracking Latch

0

0

94

0.00

Memory Management Latch

0

0

1576

0.00

OS process

207

0.00

0

0

OS process allocation

1717

0.00

0

0

OS process: request allocation

73

0.00

0

0

PL/SQL warning settings

226

0.00

0

0

SGA IO buffer pool latch

20679

0.06

0.00

0

20869

0.00

SQL memory manager latch

7

0.00

0

1575

0.00

SQL memory manager workarea list latch

439442

0.00

0

0

Shared B-Tree

182

0.00

0

0

Undo Hint Latch

0

0

12

0.00

Active checkpoint queue latch

7835

0.00

0

0

Active service list

50936

0.00

0

1621

0.00

Archive control

5

0.00

0

0

Begin backup scn array

72901

0.00

0.00

0

0

Business card

32

0.00

0

0

Cache buffer handles

331153

0.02

0.00

0

0

Cache buffers chains

48189073

0.00

0.00

0

1201379

0.00

Cache buffers lru chain

891796

0.34

0.00

0

991605

0.23

Cache table scan latch

0

0

10309

0.01

Channel handle pool latch

99

0.00

0

0

Channel operations parent latch

490324

0.01

0.00

0

0

Checkpoint queue latch

671856

0.01

0.00

0

555469

0.02

Client/application info

335

0.00

0

0

Commit callback allocation

12

0.00

0

0

Compile environment latch

173428

0.00

0

0

Dml lock allocation

243087

0.00

0.00

0

0

Dummy allocation

134

0.00

0

0

Enqueue hash chains

1539499

0.01

0.03

0

263

0.00

Enqueues

855207

0.02

0.00

0

0

Error message lists

64

0.00

0

0

Event group latch

38

0.00

0

0

File cache latch

4694

0.00

0

0

Gcs drop object freelist

8451

0.19

0.00

0

0

Gcs opaque info freelist

38584

0.00

0.00

0

0

Gcs partitioned table hash

9801867

0.00

0

0

Gcs remaster request queue

31

0.00

0

0

Gcs remastering latch

1014198

0.00

0.33

0

0

Gcs resource freelist

1154551

0.03

0.00

0

771650

0.00

Gcs resource hash

3815373

0.02

0.00

0

2

0.00

Gcs resource scan list

4

0.00

0

0

Gcs shadows freelist

795482

0.00

0.00

0

779648

0.00

Ges caches resource lists

209655

0.02

0.00

0

121613

0.01

Ges deadlock list

840

0.00

0

0

Ges domain table

366702

0.00

0

0

Ges enqueue table freelist

487875

0.00

0

0

Ges group table

543887

0.00

0

0

Ges process hash list

59503

0.00

0

0

Ges process parent latch

908232

0.00

0

1

0.00

Ges process table freelist

73

0.00

0

0

Ges resource hash list

862590

0.02

0.28

0

72266

0.01

Ges resource scan list

534

0.00

0

0

Ges resource table freelist

135406

0.00

0.00

0

0

Ges synchronous data

160

0.63

0.00

0

2954

0.07

Ges timeout list

3256

0.00

0

4478

0.00

Global KZLD latch for mem in SGA

21

0.00

0

0

Hash table column usage latch

59

0.00

0

1279

0.00

Hash table modification latch

116

0.00

0

0

Job workq parent latch

0

0

14

0.00

Job_queue_processes parameter latch

86

0.00

0

0

Kks stats

384

0.00

0

0

Ksuosstats global area

329

0.00

0

0

Ktm global data

296

0.00

0

0

Kwqbsn:qsga

182

0.00

0

0

Lgwr LWN SCN

6547

0.18

0.00

0

0

Library cache

235060

0.00

0.00

0

22

0.00

Library cache load lock

486

0.00

0

0

Library cache lock

49284

0.00

0

0

Library cache lock allocation

566

0.00

0

0

Library cache pin

27863

0.00

0.00

0

0

Library cache pin allocation

204

0.00

0

0

List of block allocation

10101

0.00

0

0

Loader state object freelist

108

0.00

0

0

Longop free list parent

6

0.00

0

6

0.00

Message pool operations parent latch

1424

0.00

0

0

Messages

222581

0.00

0.00

0

0

Mostly latch-free SCN

6649

1.43

0.00

0

0

Multiblock read objects

29230

0.03

0.00

0

0

Name-service memory objects

18842

0.00

0

0

Name-service namespace bucket

56712

0.00

0

0

Name-service namespace objects

15

0.00

0

0

Name-service pending queue

6436

0.00

0

0

Name-service request

44

0.00

0

0

Name-service request queue

57312

0.00

0

0

Ncodef allocation latch

77

0.00

0

0

Object queue header heap

37721

0.00

0

7457

0.00

Object queue header operation

2706992

0.06

0.00

0

0

Object stats modification

22

0.00

0

0

Parallel query alloc buffer

939

0.00

0

0

Parallel query stats

72

0.00

0

0

Parallel txn reco latch

630

0.00

0

0

Parameter list

193

0.00

0

0

Parameter table allocation management

68

0.00

0

0

Post/wait queue

4205

0.00

0

2712

0.00

Process allocation

46895

0.00

0

38

0.00

Process group creation

73

0.00

0

0

Process queue

175

0.00

0

0

Process queue reference

2621

0.00

0

240

62.50

Qmn task queue latch

668

0.15

1.00

0

0

Query server freelists

159

0.00

0

0

Query server process

8

0.00

0

7

0.00

Queued dump request

23628

0.00

0

0

Redo allocation

21206

0.57

0.00

0

4706826

0.02

Redo copy

0

0

4707106

0.01

Redo writing

29944

0.01

0.00

0

0

Resmgr group change latch

69

0.00

0

0

Resmgr:actses active list

137

0.00

0

0

Resmgr:actses change group

52

0.00

0

0

Resmgr:free threads list

130

0.00

0

0

Resmgr:schema config

7

0.00

0

0

Row cache objects

1644149

0.00

0.00

0

321

0.00

Rules engine rule set statistics

500

0.00

0

0

Sequence cache

360

0.00

0

0

Session allocation

535514

0.00

0.00

0

0

Session idle bit

3262141

0.00

0.00

0

0

Session state list latch

166

0.00

0

0

Session switching

77

0.00

0

0

Session timer

1620

0.00

0

0

Shared pool

60359

0.00

0.00

0

0

Shared pool sim alloc

13

0.00

0

0

Shared pool simulator

4246

0.00

0

0

Simulator hash latch

1862803

0.00

0

0

Simulator lru latch

1719480

0.01

0.00

0

46053

0.00

Slave class

2

0.00

0

0

Slave class create

8

12.50

1.00

0

0

Sort extent pool

1284

0.00

0

0

State object free list

4

0.00

0

0

Statistics aggregation

280

0.00

0

0

Temp lob duration state obj allocation

2

0.00

0

0

Threshold alerts latch

202

0.00

0

0

Transaction allocation

211

0.00

0

0

Transaction branch allocation

77

0.00

0

0

Undo global data

779759

0.07

0.00

0

0

User lock

102

0.00

0

0

Back to Latch Statistics

Back to Top

Latch Sleep Breakdown

Ordered by misses desc

Latch Name

Get Requests

Misses

Sleeps

Spin Gets

Sleep1

Sleep2

Sleep3

Cache buffers lru chain

891796

3061

1

3060

0

0

0

Object queue header operation

2706992

1755

3

1752

0

0

0

KCL gc element parent latch

2803392

1186

11

1176

0

0

0

Cache buffers chains

48189073

496

1

495

0

0

0

Ges resource hash list

862590

160

44

116

0

0

0

Enqueue hash chains

1539499

79

2

78

0

0

0

Gcs remastering latch

1014198

3

1

2

0

0

0

Qmn task queue latch

668

1

1

0

0

0

0

Slave class create

8

1

1

0

0

0

0

Back to Latch Statistics

Back to Top

Latch Miss Sources

Only latches with sleeps are shown

Ordered by name, sleeps desc

Latch Name

Where

NoWait Misses

Sleeps

Waiter Sleeps

KCL gc element parent latch

Kclrwrite

0

8

0

KCL gc element parent latch

Kclnfndnewm

0

4

6

KCL gc element parent latch

KCLUNLNK

0

1

1

KCL gc element parent latch

Kclbla

0

1

0

KCL gc element parent latch

Kclulb

0

1

1

KCL gc element parent latch

Kclzcl

0

1

0

Cache buffers chains

Kcbnew: new latch again

0

2

0

Cache buffers chains

Kclwrt

0

1

0

Cache buffers lru chain

Kcbzgws

0

1

0

Enqueue hash chains

Ksqcmi: if lk mode not requested

0

2

0

Event range base latch

No latch

0

1

1

Gcs remastering latch

69

0

1

0

Ges resource hash list

Kjlmfnd: search for lockp by rename and inst id

0

23

0

Ges resource hash list

Kjakcai: search for resp by resname

0

13

0

Ges resource hash list

Kjrmas1: lookup master node

0

5

0

Ges resource hash list

Kjlrlr: remove lock from resource queue

0

2

33

Ges resource hash list

Kjcvscn: remove from scan queue

0

1

0

Object queue header operation

Kcbo_switch_q_bg

0

3

0

Object queue header operation

Kcbo_switch_mq_bg

0

2

4

Object queue header operation

Kcbw_unlink_q

0

2

0

Object queue header operation

Kcbw_link_q

0

1

0

Slave class create

Ksvcreate

0

1

0

Back to Latch Statistics

Back to Top

Parent Latch Statistics

No data exists for this section of the report.

Back to Latch Statistics

Back to Top

Child Latch Statistics

No data exists for this section of the report.

Back to Latch Statistics

Back to Top

Segment Statistics

Segments by Logical Reads

Segments by Physical Reads

Segments by Row Lock Waits

Segments by ITL Waits

Segments by Buffer Busy Waits

Segments by Global Cache Buffer Busy

Segments by CR Blocks Received

Segments by Current Blocks Received

Back to Top

The data in these sections comes from DBA_HIST_SEG_STAT (describe it with: desc DBA_HIST_SEG_STAT); the related session-level views are V$SESSTAT and V$STATNAME.

Segments by Logical Reads

Total Logical Reads: 16648792

Captured Segments account for 85.2% of Total

Owner

Tablespace Name

Object Name

Subobject Name

Obj. Type

Logical Reads

% Total

ICCI01

ICCIDAT01

ICCICCS_PK

INDEX

1544848

9.28

ICCI01

ICCIDAT01

CUSCAD_TMP

TABLE

1349536

8.11

ICCI01

ICCIDAT01

ICCIFNSACT_PK

INDEX

1268400

7.62

ICCI01

ICCIDAT01

IND_OLDNEWACT

INDEX

1071072

6.43

ICCI01

ICCIDAT01

CUID_PK

INDEX

935584

5.62

Back to Segment Statistics

Back to Top

Segments by Physical Reads

Total Physical Reads: 322678

Captured Segments account for 64.2% of Total

Owner

Tablespace Name

Object Name

Subobject Name

Obj. Type

Physical Reads

% Total

ICCI01

ICCIDAT01

CUID_TMP

TABLE

116417

36.08

ICCI01

ICCIDAT01

CUMI_TMP

TABLE

44086

13.66

ICCI01

ICCIDAT01

CUSM_TMP

TABLE

26078

8.08

ICCI01

ICCIDAT01

CUSVAA_TMP_PK

INDEX

19554

6.06

ICCI01

ICCIDAT01

CUID

TABLE

259

0.08

Back to Segment Statistics

Back to Top

Segments by Row Lock Waits

This wait occurs when a process requests an exclusive lock on a row that is already locked by another process. It is often caused by a large number of INSERT operations against a table with a primary key index.
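
Although this report shows no data here, a quick way to see which segments accumulate row lock waits (cumulative since instance startup) is:

SELECT owner, object_name, object_type, value AS row_lock_waits
FROM v$segment_statistics
WHERE statistic_name = 'row lock waits'
AND value > 0
ORDER BY value DESC;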

No data exists for this section of the report.

Back to Segment Statistics

Back to Top

Segments by ITL Waits

No data exists for this section of the report.

Back to Segment Statistics

Back to Top

Segments by Buffer Busy Waits

No data exists for this section of the report.

Back to Segment Statistics

Back to Top

Segments by Global Cache Buffer Busy

% of Capture shows the % of GC Buffer Busy for each top segment compared

with GC Buffer Busy for all segments captured by the Snapshot

Owner

Tablespace Name

Object Name

Subobject Name

Obj. Type

GC Buffer Busy

% of Capture

SYS

SYSTEM

TSQ$

TABLE

2

100.00

Back to Segment Statistics

Back to Top

Segments by CR Blocks Received

Total CR Blocks Received: 4142

Captured Segments account for 95.6% of Total

Owner

Tablespace Name

Object Name

Subobject Name

Obj. Type

CR Blocks Received

% Total

SYS

SYSTEM

USER$

TABLE

1001

24.17

SYS

SYSTEM

TSQ$

TABLE

722

17.43

SYS

SYSTEM

SEG$

TABLE

446

10.77

SYS

SYSTEM

OBJ$

TABLE

264

6.37

SYS

SYSTEM

I_OBJ2

INDEX

174

4.20

Back to Segment Statistics

Back to Top

Segments by Current Blocks Received

Total Current Blocks Received: 15502

Captured Segments account for 84.8% of Total

Owner

Tablespace Name

Object Name

Subobject Name

Obj. Type

Current Blocks Received

% Total

ICCI01

ICCIDAT01

CUSM_TMP

TABLE

5764

37.18

ICCI01

ICCIDAT01

CUMI_TMP

TABLE

2794

18.02

ICCI01

ICCIDAT01

CUID_TMP

TABLE

2585

16.68

SYS

SYSTEM

SEG$

TABLE

361

2.33

SYS

SYSTEM

TSQ$

TABLE

361

2.33

Back to Segment Statistics

Back to Top

Dictionary Cache Statistics

Dictionary Cache Stats

Dictionary Cache Stats (RAC)

Back to Top

Library cache details:

Get Requests: a 'get' is a type of lock, the parse lock, which is placed on an object during the parse phase of a SQL statement that references that object. Each time a statement is parsed, Get Requests increases by 1.

Pin Requests: a 'pin' is also a type of lock, taken during execution. Each time a statement is executed, Pin Requests increases by 1.

Reloads: the Reloads column shows how many times an executed statement had to be re-parsed because the library cache had aged out or invalidated its parsed version.

Invalidations: an invalidation occurs when a cached SQL statement, although still in the library cache, is marked invalid and forced to be parsed again. A statement is marked invalid whenever an object it references is modified in some way.

Pct miss should not be higher than 1%.
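
The same picture can be read per namespace from the dynamic views, for example:

SELECT namespace,
       gets, ROUND(100 * (1 - gethitratio), 2) AS get_pct_miss,
       pins, ROUND(100 * (1 - pinhitratio), 2) AS pin_pct_miss,
       reloads, invalidations
FROM v$librarycache
ORDER BY pins DESC;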

The ratio of reloads to pin requests should also be very low. The resource figures below show only rows where Current or Maximum Utilization exceeds 80% of the Limit.

Ordered by resource name

Resource Name

Current Utilization

Maximum Utilization

Initial Allocation

Limit

Gcs_resources

349392

446903

450063

450063

Gcs_shadows

400300

447369

450063

450063

Back to Top

Init.ora Parameters

Parameter Name

Begin value

End value (if different)

Audit_file_dest

/oracle/app/oracle/admin/ICCI/adump

Background_dump_dest

/oracle/app/oracle/admin/ICCI/bdump

Cluster_database

TRUE

Cluster_database_instances

2

Compatible

10.2.0.3.0

Control_files

/dev/rora_CTL01, /dev/rora_CTL02, /dev/rora_CTL03

Core_dump_dest

/oracle/app/oracle/admin/ICCI/cdump

Db_block_size

8192

Db_domain

Db_file_multiblock_read_count

16

Db_name

ICCI

Dispatchers

(PROTOCOL=TCP) (SERVICE=ICCIXDB)

Instance_number

1

Job_queue_processes

10

Open_cursors

800

Pga_aggregate_target

1073741824

Processes

500

Remote_listener

LISTENERS_ICCI

Remote_login_passwordfile

EXCLUSIVE

Sga_max_size

4294967296

Sga_target

4294967296

Sort_area_size

196608

Spfile

/dev/rora_SPFILE

Thread

1

Undo_management

AUTO

Undo_retention

900

Undo_tablespace

UNDOTBS1

User_dump_dest

/oracle/app/oracle/admin/ICCI/udump

Back to Top

More RAC Statistics

Global Enqueue Statistics

Global CR Served Stats

Global CURRENT Served Stats

Global Cache Transfer Stats

Back to Top

Global Enqueue Statistics

Statistic

Total

Per Second

Per Trans

Acks for commit broadcast (actual)

18537

3.92

3.31

Acks for commit broadcast (logical)

21016

4.45

3.75

Broadcast msgs on commit (actual)

5193

1.10

0.93

Broadcast msgs on commit (logical)

5491

1.16

0.98

Broadcast msgs on commit (wasted)

450

0.10

0.08

Dynamically allocated gcs resources

0

0.00

0.00

Dynamically allocated gcs shadows

0

0.00

0.00

False posts waiting for scn acks

0

0.00

0.00

Flow control messages received

0

0.00

0.00

Flow control messages sent

2

0.00

0.00

Gcs assume cvt

0

0.00

0.00

Gcs assume no cvt

9675

2.05

1.73

Gcs ast xid

1

0.00

0.00

Gcs blocked converts

7099

1.50

1.27

Gcs blocked cr converts

8442

1.79

1.51

Gcs compatible basts

45

0.01

0.01

Gcs compatible cr basts (global)

273

0.06

0.05

Gcs compatible cr basts (local)

12593

2.66

2.25

Gcs cr basts to PIs

0

0.00

0.00

Gcs cr serve without current lock

0

0.00

0.00

Gcs dbwr flush pi msgs

223

0.05

0.04

Gcs dbwr write request msgs

223

0.05

0.04

Gcs error msgs

0

0.00

0.00

Gcs forward cr to pinged instance

0

0.00

0.00

Gcs immediate (compatible) converts

2998

0.63

0.54

Gcs immediate (null) converts

170925

36.16

30.53

Gcs immediate cr (compatible) converts

0

0.00

0.00

Gcs immediate cr (null) converts

722748

152.88

129.11

Gcs indirect ast

306817

64.90

54.81

Gcs lms flush pi msgs

0

0.00

0.00

Gcs lms write request msgs

189

0.04

0.03

Gcs msgs process time (ms)

16164

3.42

2.89

Gcs msgs received

1792132

379.09

320.14

Gcs out-of-order msgs

0

0.00

0.00

Gcs pings refused

0

0.00

0.00

Gcs pkey conflicts retry

0

0.00

0.00

Gcs queued converts

2

0.00

0.00

Gcs recovery claim msgs

0

0.00

0.00

Gcs refuse xid

0

0.00

0.00

Gcs regular cr

0

0.00

0.00

Gcs retry convert request

0

0.00

0.00

Gcs side channel msgs actual

437

0.09

0.08

Gcs side channel msgs logical

21086

4.46

3.77

Gcs stale cr

3300

0.70

0.59

Gcs undo cr

5

0.00

0.00

Gcs write notification msgs

23

0.00

0.00

Gcs writes refused

3

0.00

0.00

Ges msgs process time (ms)

1289

0.27

0.23

Ges msgs received

138891

29.38

24.81

Global posts dropped

0

0.00

0.00

Global posts queue time

0

0.00

0.00

Global posts queued

0

0.00

0.00

Global posts requested

0

0.00

0.00

Global posts sent

0

0.00

0.00

Implicit batch messages received

81181

17.17

14.50

Implicit batch messages sent

19561

4.14

3.49

Lmd msg send time (ms)

0

0.00

0.00

Lms (s) msg send time (ms)

0

0.00

0.00

Messages flow controlled

15306

3.24

2.73

Messages queue sent actual

108411

22.93

19.37

Messages queue sent logical

222518

47.07

39.75

Messages received actual

474202

100.31

84.71

Messages received logical

1931144

408.50

344.97

Messages sent directly

25742

5.45

4.60

Messages sent indirectly

137725

29.13

24.60

Messages sent not implicit batched

88859

18.80

15.87

Messages sent pbatched

1050224

222.16

187.61

Msgs causing lmd to send msgs

61682

13.05

11.02

Msgs causing lms (s) to send msgs

85978

18.19

15.36

Msgs received queue time (ms)

911013

192.71

162.74

Msgs received queued

1931121

408.50

344.97

Msgs sent queue time (ms)

5651

1.20

1.01

Msgs sent queue time on ksxp (ms)

66767

14.12

11.93

Msgs sent queued

215124

45.51

38.43

Msgs sent queued on ksxp

243729

51.56

43.54

Process batch messages received

120003

25.38

21.44

Process batch messages sent

181019

38.29

32.34

Back to Top

Global CR Served Stats

Statistic

Total

CR Block Requests

10422

CURRENT Block Requests

251

Data Block Requests

10422

Undo Block Requests

2

TX Block Requests

20

Current Results

10664

Private results

4

Zero Results

5

Disk Read Results

0

Fail Results

0

Fairness Down Converts

1474

Fairness Clears

0

Free GC Elements

0

Flushes

370

Flushes Queued

0

Flush Queue Full

0

Flush Max Time (us)

0

Light Works

2

Errors

0

Back to Top

Global CURRENT Served Stats

Pins = CURRENT Block Pin Operations

Flushes = Redo Flush before CURRENT Block Served Operations

Writes = CURRENT Block Fusion Write Operations

Statistic

Total

%
