
How to analyze the indicators of Oracle AWR report



[performance tuning] full analysis of Oracle AWR report metrics

2013-08-31

Physical read bytes / physical read total bytes: physical read throughput per second

Physical read IO requests / physical read total IO requests: physical read IOPS

Physical write bytes / physical write total bytes: physical write throughput per second

Physical write IO requests / physical write total IO requests: physical write IOPS

Total physical throughput per second = physical read total bytes + physical write total bytes

Total physical IOPS = physical read total IO requests + physical write total IO requests
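The arithmetic above can be sketched as follows. This is a minimal illustration, not Oracle code: the counter values are hypothetical, standing in for the cumulative statistics physical read/write total bytes and physical read/write total IO requests taken at the begin and end snapshots.

```python
# Hypothetical cumulative counters from two AWR snapshots (begin and end).
begin = {"read_bytes": 1_000_000_000, "write_bytes": 400_000_000,
         "read_reqs": 120_000, "write_reqs": 50_000}
end   = {"read_bytes": 7_000_000_000, "write_bytes": 2_800_000_000,
         "read_reqs": 840_000, "write_reqs": 290_000}
elapsed_s = 600  # assumed snapshot interval: 10 minutes

# Total physical throughput per second = delta(read bytes) + delta(write bytes), over time
total_bytes_per_s = ((end["read_bytes"] - begin["read_bytes"]) +
                     (end["write_bytes"] - begin["write_bytes"])) / elapsed_s
# Total physical IOPS = delta(read IO requests) + delta(write IO requests), over time
total_iops = ((end["read_reqs"] - begin["read_reqs"]) +
              (end["write_reqs"] - begin["write_reqs"])) / elapsed_s
print(total_bytes_per_s, total_iops)
```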

The main IO metrics - throughput, IOPS and latency - can all be obtained from AWR; IO latency information can be read from the Avg Wait time of the User I/O wait class, or from the IOStat by Function summary section introduced in 11g.

Instance Activity Stats contains a large number of statistics, but there is no complete, detailed description of them anywhere, not even inside Oracle itself (at least Maclean could not find one). For a developer, introducing a new activity statistic is easy, nothing like the trouble of introducing a new background process: Oracle has strict requirements for adding background processes in new versions, whereas a statistic can often be introduced in a one-off patch, since at the source code level it is just a few counters.

For the more basic statistics, refer to the Statistics Descriptions section of the official documentation.

For deeper statistics introduced by new features, such as "Batched IO (space) vector count", there is generally no detailed material; you have to read the relevant modules in the source code to work out their meaning, and since that work lags far behind, no complete list exists. If you have questions about one of them, post them to t.askmaclean.com.

Instance Activity Stats - Absolute Values  Snaps: 7071-
-> Statistics with absolute values (should not be diffed)

Statistic                         Begin Value      End Value
session pga memory max        1.157882826E+12  1.154290304E+12
session cursor cache count          157042373        157083136
session uga memory            5.496429019E+14  5.496775467E+14
opened cursors current                 268916           265694
workarea memory allocated              827704           837487
logons current                           2609             2613
session uga memory max        1.749481584E+13  1.749737418E+13
session pga memory            4.150306913E+11  4.150008177E+11

Instance Activity Stats - Absolute Values shows the absolute (not diffed) values of some metrics at the beginning and end of the snapshot interval:

logons current: the number of sessions logged on at that point in time

opened cursors current: the number of cursors currently open

session cursor cache count: the number of cursors currently in the session cursor cache

Instance Activity Stats - Thread Activity  DB/Inst: G10R25/G10R25  Snaps: 3663-3
-> Statistics identified by '(derived)' come from sources other than SYSSTAT

Statistic                 Total    per Hour
log switches (derived)       17     2326.47

For log switches (derived), the number of log switches, see "What is the ideal switching time for online redo logs?"

5 IO statistics

5-1 Tablespace IO Stats IO information based on tablespace grouping

Tablespace IO Stats  DB/Inst: ITSCMP/itscmp2  Snaps: 70719-70723
-> ordered by IOs (Reads + Writes) desc

Tablespace       Reads  Av Reads/s  Av Rd(ms)  Av Blks/Rd   Writes  Av Writes/s  Buffer Waits  Av Buf Wt(ms)
DATA_TS       17349398        4801        2.3         1.5   141077           39       4083704            5.8
INDEX_TS       9193122        2544        2.0         1.0   238563           66       3158187           46.1
UNDOTBS1       1582659         438        0.7         1.0        2            0         12431           69.0

Reads: the number of physical reads on this tablespace (measured in read requests, not blocks)

Av Reads/s: the average number of physical reads per second on this tablespace (requests, not blocks)

Av Rd (ms): the average read latency per read on this tablespace

Av Blks/Rd: the average number of blocks fetched per read on this tablespace, since a single physical read can fetch multiple blocks. If Av Blks/Rd >> 1, the system is probably doing many db file scattered reads, which usually points to FULL TABLE SCAN or FAST FULL INDEX SCAN; watch the two statistics table scans (long tables) and index fast full scans (full).
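The Av Blks/Rd check described above can be sketched as a small calculation. This is a hedged illustration: the read count comes from the DATA_TS row of the sample report, the block count is hypothetical (chosen to give Av Blks/Rd = 1.5), and the > 1 threshold is a rule of thumb, not an Oracle-defined constant.

```python
# Reads for DATA_TS from the sample Tablespace IO Stats section
reads = 17_349_398
# Hypothetical total blocks read, giving Av Blks/Rd = 1.5 as in the sample row
blocks_read = reads * 1.5

av_blks_per_rd = blocks_read / reads
if av_blks_per_rd > 1.0:
    note = "multiblock reads present: check table scans (long tables) / index fast full scans (full)"
else:
    note = "mostly single-block reads (typical OLTP)"
print(av_blks_per_rd, note)
```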

Writes: the number of physical writes on this tablespace. For tablespaces where Writes is always 0, check whether the data is read-only; if so, making them read only tablespaces can avoid some performance problems in RAC.

Av Writes/s: the average number of physical writes per second on this tablespace

Buffer Waits: the number of buffer busy waits and read by other session waits on this tablespace (in 9i, buffer busy waits includes read by other session).

Av Buf Wt (ms): the average wait time, in ms, for buffer Waits to occur on this tablespace

5-2 File IO Stats

File IO Stats  Snaps: 70719-70723
-> ordered by Tablespace, File

Tablespace        Filename                                            Reads  Av Reads/s  Av Rd(ms)  Av Blks/Rd  Writes  Av Writes/s  Buffer Waits  Av Buf Wt(ms)
AMG_ALBUM_IDX_TS  +DATA/itscmp/plugged/data2/amg_album_idx_ts01.dbf   23298           6        0.6         1.0       2            0             0            0.0
AMG_ALBUM_IDX_TS  +DATA/itscmp/plugged/data3/amg_album_idx_ts02.dbf    3003           1        0.6         1.0       2            0             0            0.0

Tablespace: the tablespace name

Filename: the path to the data file

Reads: the cumulative number of physical reads that occurred on this data file, not the number of blocks

Av Reads/s: the average number of physical reads per second on this data file, not blocks

Av Rd (ms): the average latency per physical read on this data file, in ms

Av Blks/Rd: the average number of blocks involved in each read on this data file, which is close to 1 in OLTP environment

Writes: the cumulative number of physical writes that occurred on this data file, not the number of blocks

Av Writes/s: the average number of physical writes per second, not blocks, on this data file

Buffer Waits: the number of buffer busy waits and read by other session waits on this data file (in 9i, buffer busy waits includes read by other session).

Av Buf Wt (ms): the average wait time for buffer Waits to occur on this data file, in ms

If a tablespace carries a high IO load, analyze whether the IO on its data files is evenly spread or skewed, and whether, given the storage characteristics, the data should be spread across data files on different disks to optimize I/O.

6 buffer pool statistics Buffer Pool Statistics

Buffer Pool Statistics  Snaps: 70719-70723
-> Standard block size Pools  D: default, K: keep, R: recycle
-> Default Pools for other block sizes: 2k, 4k, 8k, 16k, 32k

P    Number of Buffers  Pool Hit%   Buffer Gets  Physical Reads  Physical Writes  Free Buff Wait  Writ Comp Wait  Buffer Busy Waits
16k              15720        N/A             0               0                0               0               0                  0
D              2259159         98  2.005084E+09        42753650           560460               0               1           8.51E+06

The data in this link mainly comes from WRH$_BUFFER_POOL_STATISTICS, while WRH$_BUFFER_POOL_STATISTICS summarizes the data in v$SYSSTAT on a regular basis.

P: the pool name - D: default buffer pool, K: Keep Pool, R: Recycle Pool; 2k/4k/8k/16k/32k: buffer pools for the various non-standard block sizes

Number of buffers: actual number of buffer blocks, approximately equal to the size of the pool / the block size of the pool

Pool Hit%: hit ratio of the buffer pool

Buffer Gets: the number of block accesses in the buffer pool, including both consistent gets and db block gets

Physical Reads: the number of physical reads into this buffer pool (actually the statistic physical reads cache), measured in blocks

Physical Writes: the number of physical writes out of this buffer pool (actually the statistic physical writes from cache), measured in blocks

Free Buffer Waits: the number of times to wait for idle buffers, which can be regarded as the number of times free buffer waits waits have occurred in this buffer pool

Write Comp Wait: the number of times to wait for DBWR to write dirty buffer to disk, which can be regarded as the number of times that write complete waits wait occurs in this buffer pool

Buffer Busy Waits: the number of times buffer busy wait waits have occurred in this buffer pool
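The Pool Hit% column above can be recomputed from Buffer Gets and Physical Reads. A minimal sketch, using the default pool's figures from the sample Buffer Pool Statistics row (buffer gets 2.005084E+09, physical reads 42753650); Hit% = 100 * (1 - physical reads / buffer gets).

```python
# Figures from the sample default (D) buffer pool row above
buffer_gets = 2.005084e9
physical_reads = 42_753_650

# Pool Hit% = fraction of gets satisfied without a physical read
pool_hit_pct = 100 * (1 - physical_reads / buffer_gets)
print(round(pool_hit_pct))  # ~98, matching the report
```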

7-1 Checkpoint Activity checkpoint and Instance Recovery Stats instance recovery

Checkpoint Activity  Snaps: 70719-70723
-> Total Physical Writes: 590563

  MTTR  Log Size  Log Ckpt  Other Settings  Autotune Ckpt  Thread Ckpt
Writes    Writes    Writes          Writes         Writes       Writes
     0     12899         0

Instance Recovery Stats  Snaps: 70719-70723
-> B: Begin Snapshot, E: End Snapshot

   Targt    Estd                Log Ckpt  Log Ckpt
   MTTR MTTR Recovery   Actual   Target    Log Sz  Timeout  Opt Log  Estd RAC
    (s)  (s) Estd IOs RedoBlks RedoBlks RedoBlks RedoBlks   Sz (M)  Avail Time
B     0    6    12828   477505  1786971  5096034  1786971      N/A           3
E     0    7    16990   586071  2314207  5096034  2314207      N/A           3

The data of this link comes from WRH$_INSTANCE_RECOVERY.

MTTR Writes: physical writes performed to meet the MTTR specified by FAST_START_MTTR_TARGET (WRITES_MTTR)

Log Size Writes: physical writes driven by the size of the smallest redo log file (WRITES_LOGFILE_SIZE)

Log Ckpt Writes: physical writes driven by incremental checkpoints under LOG_CHECKPOINT_INTERVAL and LOG_CHECKPOINT_TIMEOUT (WRITES_LOG_CHECKPOINT_SETTINGS)

Other Settings Writes: physical writes due to other settings, such as FAST_START_IO_TARGET (WRITES_OTHER_SETTINGS)

Autotune Ckpt Writes: physical writes due to automatically tuned checkpoints (WRITES_AUTOTUNE)

Thread Ckpt Writes: physical writes due to thread checkpoints (WRITES_FULL_THREAD_CKPT)

B stands for the beginning and E for the end

Targt MTTR (s): the target MTTR (mean time to recover), in seconds. TARGET_MTTR is computed from the parameter FAST_START_MTTR_TARGET and is what Oracle uses internally; in practice Target MTTR may differ from FAST_START_MTTR_TARGET. If FAST_START_MTTR_TARGET is set too low, TARGET_MTTR becomes the minimum the system conditions allow; if it is set too high, TARGET_MTTR is computed with a conservative algorithm as the maximum estimated time to complete recovery.

Estd MTTR (s): the recovery time currently estimated from the number of dirty buffers and redo log blocks. It tells the user how long the roll-forward of crash recovery would take, under the current system load, before the database could be opened again after an instance crash.

Recovery Estd IOs: actually the number of dirty blocks in the current buffer cache, which will be rolled forward once the instance crashes

Actual RedoBlks: the current actual number of redo redo blocks that need to be recovered

Target RedoBlks: the minimum value of Log Sz RedoBlks, Log Ckpt Timeout RedoBlks, and Log Ckpt Interval RedoBlks

Log Sz RedoBlks: the redo blocks whose checkpointing must complete before a log file switch, also known as max log lag; data source: select LOGFILESZ from x$targetrba, or select LOG_FILE_SIZE_REDO_BLKS from v$instance_recovery

Log Ckpt Timeout RedoBlks: the number of redo blocks that must be processed to satisfy LOG_CHECKPOINT_TIMEOUT, the lag for checkpoint timeout; data source: select CT_LAG from x$targetrba

Log Ckpt Interval RedoBlks: the number of redo blocks that must be processed to satisfy LOG_CHECKPOINT_INTERVAL, the lag for checkpoint interval; data source: select CI_LAG from x$targetrba
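Target RedoBlks is simply the minimum of the three lag limits just described. A minimal sketch using the B-row figures from the sample Instance Recovery Stats above; the interval lag is assumed equal to the timeout lag here, since the sample row does not show it separately.

```python
# Lag limits from the sample B row (Log Ckpt Interval assumed, see lead-in)
log_sz_redoblks = 5_096_034        # forced by redo log file size (max log lag)
ckpt_timeout_redoblks = 1_786_971  # forced by LOG_CHECKPOINT_TIMEOUT
ckpt_interval_redoblks = 1_786_971 # forced by LOG_CHECKPOINT_INTERVAL (assumed)

# Target RedoBlks = min of the three limits
target_redoblks = min(log_sz_redoblks, ckpt_timeout_redoblks, ckpt_interval_redoblks)
print(target_redoblks)  # 1786971, matching Target RedoBlks in the sample
```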

Opt Log Sz (M): the size of the redo logfile estimated based on FAST_START_MTTR_TARGET, in MB. The size of the redo log officially recommended by Oracle is at least larger than this estimate.

Estd RAC Avail Time: refers to the time it takes for the cluster to become partially available after the node fails in the evaluated RAC. This metric is only available in RAC (in seconds). ESTD_CLUSTER_AVAILABLE_TIME

7-2 Buffer Pool Advisory buffer pool recommendation

Buffer Pool Advisory  DB/Inst: ITSCMP/itscmp2  Snap: 70723
-> Only rows with estimated physical reads > 0 are displayed
-> ordered by Block Size, Buffers For Estimate

P  Size for Est(M)  Size Factor  Buffers (thousands)  Est Phys Read Factor  Estimated Phys Reads (thousands)  Est Phys Read Time  Est %DBtime for Rds
D             1920          0.1                  227                   4.9                        1110565597                   1              1.0E+09
D             3840          0.2                  454                   3.6                         832483886                   1              7.4E+08
D             5760          0.3                  680                   2.8                         634092578                   1              5.6E+08
D             7680          0.4                  907                   2.2                         500313589                   1              4.3E+08
D             9600          0.5                 1134                   1.8                         410179557                   1              3.5E+08
D            11520          0.6                 1361                   1.5                         348214283                   1              2.9E+08
D            13440          0.7                 1588                   1.3                         304658441                   1              2.5E+08
D            15360          0.8                 1814                   1.2                         273119808                   1              2.2E+08
D            17280          0.9                 2041                   1.1                         249352943                   1              2.0E+08
D            19200          1.0                 2268                   1.0                         230687206                   1              1.8E+08
D            19456          1.0                 2298                   1.0                         228664269                   1              1.8E+08
D            21120          1.1                 2495                   0.9                         215507858                   1              1.7E+08
D            23040          1.2                 2722                   0.9                         202816787                   1              1.6E+08
D            24960          1.3                 2948                   0.8                         191974196                   1              1.5E+08
D            26880          1.4                 3175                   0.8                         182542765                   1              1.4E+08
D            28800          1.5                 3402                   0.8                         174209199                   1              1.3E+08
D            30720          1.6                 3629                   0.7                         166751631                   1              1.2E+08
D            32640          1.7                 3856                   0.7                         160002420                   1              1.2E+08
D            34560          1.8                 4082                   0.7                         153827351                   1              1.1E+08
D            36480          1.9                 4309                   0.6                         148103338                   1              1.1E+08
D            38400          2.0                 4536                   0.6                         142699866                   1              1.0E+08

The buffer pool granule size can be obtained with SELECT * FROM V$SGAINFO WHERE name LIKE 'Granule%'.

P means that the name of the buffer pool may include D default buffer pool, K Keep Pool, R recycle Pool.

Size For Est (M): the buffer pool size being evaluated, generally 10% ~ 200% of the current size, to show the effect of growing or shrinking the buffer pool on physical reads.

Size Factor: the ratio of the evaluated buffer pool size to the current setting. For example, if the current size is 100M and the evaluated size is 110M, the Size Factor is 1.1.

Buffers (thousands): refers to the number of buffer under this buffer pool size, which is the actual value multiplied by 1000

Est Phys Read Factor: the estimated physical read factor. For example, if the buffer pool at its current size would cause 100 physical reads and a pool of another size would cause 120, the Est Phys Read Factor for that size is 1.2.

Estimated Phys Reads (thousands): the number of physical reads evaluated, multiplied by 1000, is the actual value. Obviously, different sizes of buffer pool correspond to different number of evaluated physical reads.

Est Phys Read Time: estimated physical read time

Est% DBtime for Rds: ratio of evaluated physical reads to DB TIME

We see that buffer pool advisory generally has two purposes:

When there are many physical reads, we hope to relieve physical read waits by increasing the buffer pool, so we look at whether sizes with Size Factor > 1 effectively reduce Est Phys Read Factor. If Est Phys Read Factor drops significantly as Size Factor grows, enlarging the buffer cache will effectively reduce physical reads.

In the case of tight memory, you want to set aside part of the memory from the buffer pool for other purposes, but you do not want the smaller buffer cache to lead to a decline in the performance of physical reads, so observe whether Est Phys Read Factor increases significantly with the decrease of Size Factor. If not, it means that reducing part of buffer cache will not lead to a significant increase in physical reads, so you can rest assured that buffer cache can be reduced.

Note that there is not a simple linear relationship between Size Factor and Est Phys Read Factor, so human intervention is needed to evaluate the gains and losses.
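The "is growth worth it" judgement above can be sketched mechanically. This is a hedged illustration only: the rows are (Size Factor, Est Phys Read Factor) pairs taken from the sample advisory, and the 5% cutoff is an arbitrary threshold chosen for the example, not an Oracle rule.

```python
# (size_factor, est_phys_read_factor) pairs from the sample Buffer Pool Advisory
rows = [(1.0, 1.0), (1.1, 0.9), (1.2, 0.9), (1.3, 0.8),
        (1.5, 0.8), (1.7, 0.7), (2.0, 0.6)]

def worth_growing(rows, cutoff=0.05):
    # Compare the current pool (size factor 1.0) with the best larger size
    base = next(prf for sf, prf in rows if sf == 1.0)
    best = min(prf for sf, prf in rows if sf > 1.0)
    return (base - best) / base > cutoff

print(worth_growing(rows))  # True: reads drop ~40% at twice the size
```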

7-3 PGA Aggr Summary

PGA Aggr Summary  Snaps: 70719-70723
-> PGA cache hit% - percentage of W/A (WorkArea) data processed only in-memory

PGA Cache Hit%   W/A MB Processed   Extra W/A MB Read/Written
          99.9             412527                         375

PGA Cache Hit%: the percentage of W/A (WorkArea) data processed only in memory, i.e. the PGA cache hit ratio

The workarea is the part of the PGA that handles sorting, hash joins, and bitmap merges; the workarea is also known as the SQL work area

W/A MB Processed: the amount of data processed in workareas, in MB

Extra W/A MB Read/Written: additional workarea data read from or written to disk, in MB
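The hit ratio ties the two figures above together: Hit% = W/A MB processed / (W/A MB processed + extra W/A MB read/written). A minimal sketch using the numbers from the sample PGA Aggr Summary:

```python
# Figures from the sample PGA Aggr Summary section above
wa_mb_processed = 412_527   # W/A MB Processed
extra_wa_mb = 375           # Extra W/A MB Read/Written

# PGA Cache Hit% = in-memory work as a share of all workarea work
pga_cache_hit_pct = 100 * wa_mb_processed / (wa_mb_processed + extra_wa_mb)
print(round(pga_cache_hit_pct, 1))  # 99.9, matching the report
```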

7-4 PGA Aggr Target Stats

Warning: pga_aggregate_target was set too low for the current workload, as this value was exceeded during this interval. Use the PGA Advisory view to help identify a different value for pga_aggregate_target.

PGA Aggr Target Stats  Snaps: 70719-70723
-> B: Begin Snap, E: End Snap (rows identified with B or E contain data which is absolute, i.e. not diffed over the interval)
-> Auto PGA Target - actual workarea memory target
-> W/A PGA Used - amount of memory used for all workareas (manual + auto)
-> %PGA W/A Mem - percentage of PGA memory allocated to workareas
-> %Auto W/A Mem - percentage of workarea memory controlled by Auto Mem Mgmt
-> %Man W/A Mem - percentage of workarea memory under manual control

   PGA Aggr   Auto PGA   PGA Mem    W/A PGA    %PGA     %Auto    %Man     Global Mem
   Target(M)  Target(M)  Alloc(M)   Used(M)    W/A Mem  W/A Mem  W/A Mem  Bound(K)
B  8192       512        23690.5    150.1      .6       100.0    .0       838860
E  8192       512        23623.6    156.9      .7       100.0    .0       838860

The data source of this link is mainly WRH$_PGASTAT.

PGA Aggr Target (M): essentially pga_aggregate_target, and of course this value may change automatically in an AMM (memory_target) environment

Auto PGA Target (M): the actual workspace memory "aggregate PGA auto target" that is available in automatic PGA management mode, because PGA has other uses and cannot all be used as workarea memory

PGA Mem Alloc (M): currently allocated PGA memory, alloc is not equal to inuse, that is, allocated memory is not equal to memory in use. Theoretically, PGA will return memory that is really not used to OS (PGA memory freed back to OS), but there are scenarios where PGA takes up a lot of memory and does not release it.

In the example above, pga_aggregate_target is only 8192 M, while the actual process count ranged between 2615 and 8000. If each process consumes 5 MB of PGA, that alone calls for over 10000 M of PGA, and the actual PGA Mem Alloc (M) is 23690 M, which shows the PGA is overloaded and pga_aggregate_target needs to be raised.

W/A PGA Used (M): the total memory used by all workareas (manual + auto), in MB

% PGA W/A Mem: the percentage of total PGA memory allocated to workareas, (W/A PGA Used) / PGA Mem Alloc

% Auto W/A Mem: the percentage of workarea memory controlled by automatic workspace management (workarea_size_policy=AUTO)

% Man W/A Mem: the percentage of workarea memory under manual workspace management (workarea_size_policy=MANUAL)

Global Mem Bound (K): the maximum memory that can be allocated by a workspace in automatic PGA management mode (note that there may be multiple workspace workarea during the execution of a SQL). The index Global Mem Bound (K) will be continuously modified during the operation of the instance to reflect the load of the database at that time in the workspace. Obviously Global Mem Bound will drop correspondingly under the system load with many active workspaces. However, the value of global bound should not be less than 1 MB, otherwise it is recommended to increase pga_aggregate_target.
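The overload check discussed in this section can be sketched as a comparison of allocated PGA against the target. This is an illustration only: the 1.2x slack factor is an arbitrary choice for the example, not an Oracle rule; the figures come from the sample section above.

```python
# Figures from the sample PGA Aggr Target Stats section
pga_aggregate_target_mb = 8192
pga_mem_alloc_mb = 23690.5  # PGA Mem Alloc (M), B row

# Allocated PGA far above the target suggests the target is undersized.
# The 1.2x slack is an arbitrary illustrative threshold.
overloaded = pga_mem_alloc_mb > 1.2 * pga_aggregate_target_mb
print(overloaded)  # True: pga_aggregate_target should be raised
```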

7-5 PGA Aggr Target Histogram

PGA Aggr Target Histogram  Snaps: 70719-70723
-> Optimal Executions are purely in-memory operations

Low Optimal  High Optimal  Total Execs  Optimal Execs  1-Pass Execs  M-Pass Execs
2K           4K                 262086         262086             0             0
64K          128K                  497            497             0             0
128K         256K                  862            862             0             0
256K         512K                  368            368             0             0
512K         1024K              440585         440585             0             0
1M           2M                  68313          68313             0             0
2M           4M                    169            161             8             0
4M           8M                     50             42             8             0
8M           16M                    82             82             0             0
16M          32M                     1              1             0             0
32M          64M                    12             12             0             0
128M         256M                   20             20             0             0

Source: WRH$_SQL_WORKAREA_HISTOGRAM

Low Optimal: the lower bound of the optimal memory requirement for the workareas covered by this row

High Optimal: the upper bound of the optimal memory requirement for the workareas covered by this row

Total Execs: the total number of executions completed by workareas in the Low Optimal ~ High Optimal range

Optimal Execs: the number of executions completed entirely in PGA memory

1-Pass Execs: the number of executions that needed exactly one pass to disk

M-Pass Execs: the number of executions that needed more than one pass to disk, i.e. performed repeated disk reads

7-6 PGA Memory Advisory

PGA Memory Advisory  Snap: 70723
-> When using Auto Memory Mgmt, minimally choose a pga_aggregate_target value where Estd PGA Overalloc Count is 0

PGA Target  Size   W/A MB        Estd Extra W/A MB     Estd PGA    Estd PGA         Estd
Est (MB)    Factr  Processed     Read/Written to Disk  Cache Hit%  Overalloc Count  Time
1024               2671356938.7  387531258.9           87.0        1.07E+07         7.9E+11
2048               2671356938.7  387529979.1           87.0        1.07E+07         7.9E+11
4096        0.5    2671356938.7  387518881.8           87.0        1.07E+07         7.9E+11
6144        0.8    2671356938.7  387420749.5           87.0        1.07E+07         7.9E+11
8192        1.0    2671356938.7  23056196.5            99.0        1.07E+07         6.9E+11
9830        1.2    2671356938.7  22755192.6            99.0        6.81E+06         6.9E+11
11469       1.4    2671356938.7  20609438.5            99.0        4.15E+06         6.9E+11
13107       1.6    2671356938.7  19021139.1            99.0        581362           6.9E+11
14746       1.8    2671356938.7  18601191.0            99.0        543531           6.9E+11
16384       2.0    2671356938.7  18561361.1            99.0        509687           6.9E+11
24576       3.0    2671356938.7  18527422.3            99.0        232817           6.9E+11
32768       4.0    2671356938.7  18511872.6            99.0        120180           6.9E+11
49152       6.0    2671356938.7  18500815.3            99.0        8021             6.9E+11
65536       8.0    2671356938.7  18498733.0            99.0        0                6.9E+11

PGA Target Est (MB): the PGA_AGGREGATE_TARGET value used for the estimate

Size Factr: the ratio between the PGA_AGGREGATE_TARGET being evaluated and the one actually set, PGA Target Est / PGA_AGGREGATE_TARGET

W/A MB Processed: the amount of data processed in workareas, in MB

Estd Extra W/A MB Read/Written to Disk: the estimated amount of data processed in one-pass or multi-pass mode, in MB

Estd PGA Cache Hit%: the estimated PGA cache hit ratio

Estd PGA Overalloc Count: the estimated PGA overallocation count. As noted above, PGA_AGGREGATE_TARGET is only a target and cannot hard-limit PGA memory use; overallocation occurs when there is a hard demand for PGA memory ("When using Auto Memory Mgmt, minimally choose a pga_aggregate_target value where Estd PGA Overalloc Count is 0").
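The advisory's own selection rule ("minimally choose a pga_aggregate_target value where Estd PGA Overalloc Count is 0") can be sketched as follows, using a subset of (target MB, estimated overalloc count) rows from the sample advisory above:

```python
# (pga_aggregate_target_mb, estd_overalloc_count) rows from the sample advisory
rows = [(8192, 10_700_000), (16384, 509_687), (32768, 120_180),
        (49152, 8_021), (65536, 0)]

# Smallest target with zero estimated overallocations
candidates = [target for target, overalloc in rows if overalloc == 0]
recommended = min(candidates) if candidates else None
print(recommended)  # 65536 MB in this sample
```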

7-7 Shared Pool Advisory

Shared Pool Advisory  Snap: 70723
-> SP: Shared Pool, Est LC: Estimated Library Cache, Factr: Factor
-> Note there is often a 1:Many correlation between a single logical object in the Library Cache and the physical number of memory objects associated with it. Therefore comparing the number of Lib Cache objects (e.g. in v$librarycache) with the number of Lib Cache Memory Objects is invalid.

Shared Pool  SP Size  Est LC    Est LC   Est LC Time  Est LC Time  Est LC Load  Est LC Load  Est LC Mem
Size (M)     Factr    Size (M)  Mem Obj  Saved (s)    Saved Factr  Time (s)     Time Factr   Obj Hits (K)
304          0.8      56        3987     7728         1.0          61           1.4          332
352          0.9      101       6243     7745         1.0          44           1.0          334
400          1.0      114       7777     7745         1.0          44           1.0          334
448          1.1      114       7777     7745         1.0          44           1.0          334
496          1.2      114       7777     7745         1.0          44           1.0          334
544          1.4      114       7777     7745         1.0          44           1.0          334
592          1.5      114       7777     7745         1.0          44           1.0          334
640          1.6      114       7777     7745         1.0          44           1.0          334
688          1.7      114       7777     7745         1.0          44           1.0          334
736          1.8      114       7777     7745         1.0          44           1.0          334
784          2.0      114       7777     7745         1.0          44           1.0          334
832          2.1      114       7777     7745         1.0          44           1.0          334

Shared Pool Size (M): the size of the shared pool shared pool used for evaluation. The shared_pool size may float in AMM / ASMM environments.

SP Size Factr: scale factor of shared pool size, (Shared Pool Size for Estim / SHARED_POOL_SIZE)

Estd LC Size (M): the size of the evaluated library cache, in MB, because the shared pool contains library cache and, of course, others such as row cache

Est LC Mem Obj: the estimated number of library cache memory objects in a shared pool of the given size, ESTD_LC_MEMORY_OBJECTS

Est LC Time Saved (s): the parse time saved by finding the required library cache memory objects in a shared pool of the given size. Without that saving, the time would be spent repeatedly reloading (reload) objects that were aged out of the shared pool for lack of free memory. ESTD_LC_TIME_SAVED

Est LC Time Saved Factr: scale factor of Est LC Time Saved (s), (Est LC Time Saved (s) / Current LC Time Saved (s)) ESTD_LC_TIME_SAVED_FACTOR

Est LC Load Time (s): time spent parsing at the specified shared pool size

Est LC Load Time Factr: the scale factor of Est LC Load Time (s), (Est LC Load Time (s) / Current LC Load Time (s)), ESTD_LC_LOAD_TIME_FACTOR

Est LC Mem Obj Hits (K): the number of times the library cache memory object needed in the specified shared pool size is found in the shared pool ESTD_LC_MEMORY_OBJECT_HITS

If you need to reduce shared_pool_size, watch Est LC Mem Obj Hits (K). In the example above the hits are 334 K once the shared pool reaches 352 M, so shared_pool_size could be reduced toward that value; but mind the minimum shared pool requirements on each version/platform, and in RAC remember that resources such as gcs resource and gcs shadow also reside in the shared pool. Consider increasing db_cache_size with the memory freed.
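The shrink-without-losing-hits reasoning above can be sketched as picking the smallest pool size that still achieves the peak Est LC Mem Obj Hits. A minimal illustration using (pool MB, hits K) pairs from the sample Shared Pool Advisory:

```python
# (shared_pool_mb, est_lc_mem_obj_hits_k) rows from the sample advisory
rows = [(304, 332), (352, 334), (400, 334), (448, 334), (832, 334)]

# Smallest pool size that retains the maximum library cache hits
peak_hits = max(hits for _, hits in rows)
smallest_ok = min(mb for mb, hits in rows if hits == peak_hits)
print(smallest_ok)  # 352 MB retains the full 334K hits
```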

7-8 SGA Target Advisory

SGA Target Advisory  Snap: 70723

SGA Target  SGA Size  Est DB        Est Physical
Size (M)    Factor    Time (s)      Reads
3752        0.1       1.697191E+09  1.4577142918E+12
7504        0.3       1.222939E+09  832293601354
11256       0.4       1.000162E+09  538390923784
15008       0.5       895087191     399888743900
18760       0.6       840062594     327287716803
22512       0.8       806389685     282881041331
26264       0.9       782971706     251988446808
30016       1.0       765293424     228664652276
33768       1.1       751135535     210005616650
37520       1.3       739350016     194387820900
41272       1.4       733533785     187299216679
45024       1.5       732921550     187299216679
48776       1.6       732691962     187299216679
52528       1.8       732538908     187299216679
56280       1.9                     187299216679
60032       2.0       732462391     187299458716

The data of this link comes from WRH$_SGA_TARGET_ADVICE.

SGA Target Size: the sga_target size used for the estimate

SGA Size Factor: the scale factor of the SGA size, (estimated sga_target / current sga_target)

Est DB Time (s): the estimated DB TIME generated at the given sga_target size, in seconds

Est Physical Reads: the estimated number of physical reads at the given sga_target size

7-9 Streams Pool Advisory

Streams Pool Advisory  DB/Inst: ITSCMP/itscmp2  Snap: 70723

Size for   Size    Est Spill  Est Spill  Est Unspill  Est Unspill
Est (MB)   Factor  Count      Time (s)   Count        Time (s)
64         0.5     0          0          0            0
128        1.0     0          0          0            0
192        1.5     0          0          0            0
256        2.0     0          0          0            0
320        2.5     0          0          0            0
384        3.0     0          0          0            0
448        3.5     0          0          0            0
512        4.0     0          0          0            0
576        4.5     0          0          0            0
640        5.0     0          0          0            0
704        5.5     0          0          0            0
768        6.0     0          0          0            0
832        6.5     0          0          0            0
896        7.0     0          0          0            0
960        7.5     0          0          0            0
1024       8.0     0          0          0            0
1088       8.5     0          0          0            0
1152       9.0     0          0          0            0
1216       9.5     0          0          0            0
1280       10.0    0          0          0            0

This section matters only if Streams replication is used; data source: WRH$_STREAMS_POOL_ADVICE

Size for Est (MB): the streams pool size used for the estimate

Size Factor: the scale factor of the streams pool size

Est Spill Count: the estimated number of message overflows to disk when using a stream pool of this size ESTD_SPILL_COUNT

Est Spill Time (s): the estimated time in seconds for message to overflow to disk when a stream pool of this size is used

Est Unspill Count: the estimated number of ESTD_UNSPILL_COUNT that message unspill reads from disk when using a stream pool of this size

Est Unspill Time (s): estimated time in seconds for message unspill to read from disk when a stream pool of this size is used

7-10 Java Pool Advisory

The relevant indicators of java pool are similar to those of shared pool, so they are no longer described.

8 Wait Statistics

8-1 Buffer Wait Statistics

Buffer Wait Statistics  Snaps: 70719-70723
-> ordered by wait time desc, waits desc

Class                 Waits  Total Wait Time (s)  Avg Time (ms)
data block          8442041               407259             48
undo header           16212                 1711            106
undo block            21023                  557             26
1st level bmb          1038                  266            256
2nd level bmb           540                  185            342
bitmap block             90                   25            276
segment header          197                   13             66
file header block       132                    6             43
bitmap index block       18                    0              1
extent map                2                    0              0

Source: WRH$_WAITSTAT

This section summarizes the wait information for the various classes of blocks in the buffer pool; the waits are generally buffer busy waits and read by other session.

Class is the class of the data block: an Oracle data block has both a class attribute and a type attribute; the type (KCBH) is recorded in the block itself, while the class (X$BH.CLASS) is kept in the buffer header.

Waits: the number of waits for this type of data block

Total Wait Time (s): the total wait time for this type of data block is in seconds

Avg Time (ms): the average time spent waiting for each block of this type (in ms)

With SMU (undo_management=AUTO), waits on undo header class blocks, which are caused by having too few rollback segments, are generally not seen.

For buffer contention caused by INSERT:

1. Under manual segment space management (MSSM), consider increasing Freelists and Freelist Groups.

2. Use ASSM; ASSM itself has no parameters to adjust.

For contention caused by INSERT on an index:

Use reverse key indexes

Use HASH partitions and local indexes

Reduce the density of the index if possible

8-2 Enqueue Activity

Enqueue queue lock wait

Enqueue Activity  Snaps: 70719-70723
-> only enqueues with waits are shown
-> Enqueue stats gathered prior to 10g should not be compared with 10g data
-> ordered by Wait Time desc, Waits desc

Enqueue Type (Request Reason)          Requests  Succ Gets  Failed Gets   Waits  Wt Time (s)  Av Wt Time (ms)
TX-Transaction (index contention)        201270     201326            0  193948        97517           502.80
TM-DML                                   702731     702681            4    1081        46671         43174.08
SQ-Sequence Cache                         28643      28632            0   17418        35606          2044.19
HW-Segment High Water Mark                 9210       8845          376    1216        12505         10283.85
TX-Transaction (row lock contention)       9288       9280            0    9232        10486          1135.80
CF-Controlfile Transaction                15851      14094         1756    2798         4565          1631.64
TX-Transaction (allocate ITL entry)         471        369          102     360          169           469.28

Enqueue Type (Request Reason): the type of enqueue. Before studying an enqueue problem you should at least understand enqueue type and enqueue mode: the enqueue type is the resource the queue lock protects, such as TM (table lock) or CF (control file), while the enqueue mode is the mode in which the lock is held (SS, SX, S, SSX, X).

Requests: the number of times the corresponding enqueue type resource or queue conversion (enqueue conversion, for example, S to SSX) is requested

Succ Gets: the number of times the corresponding enqueue was successfully applied or converted

Failed Gets: the number of failed applications or conversions of the corresponding enqueue

Waits: the number of waits caused by the application or conversion of the corresponding enqueue

Wt Time (s): the waiting time caused by the application or conversion of the corresponding enqueue

Av Wt Time (ms): the average waiting time caused by the application or conversion of the corresponding enqueue, Wt Time (s) / Waits, in ms
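The Av Wt Time column can be recomputed from the other two, as the definition above states. A minimal sketch using the TM-DML row from the sample Enqueue Activity section (the small difference from the printed 43174.08 comes from rounding of the source figures):

```python
# TM-DML row from the sample Enqueue Activity section
wt_time_s = 46_671  # Wt Time (s)
waits = 1_081       # Waits

# Av Wt Time (ms) = Wt Time (s) / Waits * 1000
av_wt_ms = wt_time_s / waits * 1000
print(round(av_wt_ms, 2))
```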

The main enqueue wait events:

Enq: TX-row lock/index contention, allocate ITL wait event

Enq: TM-contention wait event

Oracle queue lock enq:TS,Temporary Segment (also TableSpace)

9-1 Undo Segment Summary

Undo Segment Summary Snaps: 70719-70723-> Min/Max TR (mins)-Min and Max Tuned Retention (minutes)-> STO-Snapshot Too Old count, OOS-Out of Space count- > Undo segment block stats:- > uS-unexpired Stolen, uR-unexpired Released, uU-unexpired reUsed- > eS-expired Stolen, eR-expired Released EU- expired reUsedUndo Num Undo Number of Max Qry Max Tx Min/Max STO/ uS/uR/uU/ TS# Blocks (K) Transactions Len (s) Concurcy TR (mins) OOS eS/eR/eU -4 85.0 200127 55448 317 1040.2 -Undo Segment Stats Snaps: 70719-70723-> Most recent 35 Undostat rows Ordered by Time desc Num Undo Number of Max Qry Max Tx Tun Ret STO/ uS/uR/uU/End Time Blocks Transactions Len (s) Concy (mins) OOS eS/eR/eU-- -- 29-Aug 05:52 11700 35098 55448 234 1070 0 0/0/0/0/0/029-Aug 05:42 12203 24677 54844 284 1065 0 0/0/0/0/0/029-Aug 0 0/0/0/0/0/029-Aug 05:32 14132 37826 54241 237 1060 0 0/0/0/0/0/029-Aug 05:22 14379 32315 53637 317 1050 0/0 0/0/0/0/0/029-Aug 05:12 15693 34157 53033 299 1045 0/0 0/0/0/0/0/029-Aug 05:02 16878 36054 52428 250 1040 0/0 0/0/0/0/0/0

Data source: WRH$_UNDOSTAT; undo usage information is flushed to v$undostat every 10 minutes.

Undo Extent has three states: active, unexpired, and expired.

active => the extent contains active transactions. Active undo extents generally cannot be overwritten or reused by other transactions.

unexpired => the extent contains no active transactions, but the time since its undo records became inactive has not yet exceeded undo retention (with automatic undo retention tuning it is normal to see most blocks as unexpired when observing dba_undo_extents). For an undo tablespace without guaranteed retention, unexpired extents may still be stolen and reused for other purposes.

expired => the extent contains no active transactions and the undo retention time has been exceeded.

Undo TS#: the tablespace number of the undo tablespace in use. An instance can use only one undo tablespace at a time; different RAC nodes use different undo tablespaces.

Num Undo Blocks (K): the number of undo blocks consumed; (K) means the displayed value must be multiplied by 1000. This indicator can be used to evaluate the rate of undo block consumption, so that the UNDO tablespace can be sized for the actual load.
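A back-of-the-envelope sizing sketch based on this figure. The snapshot window, block size and target undo_retention below are assumptions for illustration, not values from the report:

```python
# Rough undo tablespace sizing from the Num Undo Blocks figure:
# undo_size >= undo_blocks_per_sec * undo_retention * db_block_size
undo_blocks = 85.0 * 1000            # Num Undo Blocks (K) = 85.0 from the report
snapshot_window_sec = 4 * 3600       # ASSUMED: 4-hour interval covered by the snapshots
undo_retention_sec = 3600            # ASSUMED: target undo_retention of 1 hour
db_block_size = 8192                 # ASSUMED: 8 KB blocks

undo_blocks_per_sec = undo_blocks / snapshot_window_sec
undo_bytes = undo_blocks_per_sec * undo_retention_sec * db_block_size
print(round(undo_bytes / 1024**2), "MB")  # rough undo space needed under these assumptions
```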

Number of Transactions: the total number of transactions executed against the undo tablespace during the interval

Max Qry Len (s): the longest-running query during the interval, in seconds

Max Tx Concy: the maximum transaction concurrency during the interval

Min/Max TR (mins): minimum and maximum tuned undo retention, in minutes. Tuned undo retention is part of the automatic undo tuning feature; see the introduction to automatic undo tuning.

STO/OOS: STO is the count of ORA-01555 Snapshot Too Old errors; OOS is the count of Out of Space errors.

uS (unexpired Stolen): the number of attempts to steal undo space from unexpired undo extents

uR (unexpired Released): the number of blocks released from unexpired undo extents

uU (unexpired reUsed): the number of blocks in unexpired undo extents reused by other transactions

eS (expired Stolen): the number of attempts to steal undo space from expired undo extents

eR (expired Released): the number of blocks released from expired undo extents

eU (expired reUsed): the number of blocks in expired undo extents reused by other transactions

UNXPSTEALCNT (NUMBER): Number of attempts to obtain undo space by stealing unexpired extents from other transactions
UNXPBLKRELCNT (NUMBER): Number of unexpired blocks removed from certain undo segments so they can be used by other transactions
UNXPBLKREUCNT (NUMBER): Number of unexpired undo blocks reused by transactions
EXPSTEALCNT (NUMBER): Number of attempts to steal expired undo blocks from other undo segments
EXPBLKRELCNT (NUMBER): Number of expired undo blocks stolen from other undo segments
EXPBLKREUCNT (NUMBER): Number of expired undo blocks reused within the same undo segments
SSOLDERRCNT (NUMBER): Number of times the error ORA-01555 occurred. You can use this statistic to decide whether the UNDO_RETENTION initialization parameter is set properly given the size of the undo tablespace. Increasing UNDO_RETENTION can reduce the occurrence of this error.

10-1 Latch Activity

Latch Activity Snaps: 70719-70723-> "Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for willing-to-wait latch get requests- > "NoWait Requests" "Pct NoWait Miss" are for no-wait latch get requests- > "Pct Misses" for both should be very close to 0.0 Pct Avg Wait Pct Get Get Slps Time NoWait NoWaitLatch Name Requests Miss / Miss (s) Requests Miss---- AQ deq hash table latch 4 000 N/AASM Keyed state latch 9048 0.1 0.2 0 0 N/AASM allocation 15017 0.2 0.8 1 0 N/AASM db client latch 72745 0.0 0 0 N/AASM map headers 5860 0.6 0.6 1 0 N / AASM map load waiting lis 1462 0.0 0 0 N/AASM map operation freeli 63539 0.1 0.4 1 0 N/AASM map operation hash t 76484447 0.1 1.0 66 0 N/A

Latch Name: the name of the latch

Get Requests: the number of times the latch was requested and obtained in willing-to-wait mode

Pct Get Miss: a miss means the latch was requested in willing-to-wait mode but the requester had to wait; Pct Get Miss = Misses / Get Requests. Misses can be obtained from the Latch Sleep Breakdown section below.

Avg Slps/Miss: a sleep means a willing-to-wait latch request forced the session to sleep while waiting for the latch; Avg Slps/Miss = Sleeps / Misses. Sleeps can also be obtained from Latch Sleep Breakdown.

Wait Time (s): the time spent waiting for the latch, in seconds

NoWait Requests: the number of times the latch was requested in no-wait mode

Pct NoWait Miss: the percentage of no-wait latch requests that failed immediately

For highly concurrent latches such as cache buffers chains, Pct Miss should be very close to 0.
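These ratios can be recomputed from the raw counters. The figures below come from the cache buffers chains row in the Latch Sleep Breakdown section of this same report:

```python
# Recompute the latch ratios from the cache buffers chains row
# in the Latch Sleep Breakdown section of this report.
gets, misses, sleeps = 3_365_097_866, 12_831_875, 130_058

pct_get_miss = misses / gets * 100      # Pct Get Miss
avg_slps_per_miss = sleeps / misses     # Avg Slps/Miss

print(round(pct_get_miss, 2), round(avg_slps_per_miss, 3))
```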

General tuning principles:

If latch: cache buffers chains is a Top 5 event, consider optimizing SQL to reduce full table scans and to reduce the logical reads of the top buffer gets SQL statements

If waits on latch: redo copy and redo allocation are excessive, consider increasing LOG_BUFFER

If latch: library cache waits are frequent, consider increasing shared_pool_size

10-2 Latch Sleep Breakdown

Latch Sleep Breakdown  DB/Inst: ITSCMP/itscmp2  Snaps: 70719-70723
-> ordered by misses desc

Latch Name                   Get Requests      Misses  Sleeps  Spin Gets
cache buffers chains           3365097866    12831875  130058   12683450
row cache objects                69050058      349839    1320     348649
session idle bit                389437460      268285    2768     265752
enqueue hash chains               8698453      239880   22476     219950
ges resource hash list            8388730      158894   70728      91104
gc element                      100383385      135759    6285     129742
gcs remastering latch            12213169       72373       1      72371
enqueues                          4662545       46374     259      46155
ASM map operation hash tab       76484447       46231   45210       1952
Lsod array latch                    72598       24224   24577       1519

Latch Name: the name of the latch

Get Requests: the number of times the latch was requested and obtained in willing-to-wait mode

Misses: the number of times the latch was requested in willing-to-wait mode but the requester had to wait

After 9i a miss generally has one of two outcomes: the latch is obtained by spinning (a spin get), or the process sleeps and does not wake until it is posted. For details see the full analysis of the Oracle latch mechanism in 9i and later.

For the pre-9i (8i) latch algorithm, see: Oracle Latch: a piece of pseudo code that describes the operation of Latch.

So from 9i onwards Misses ≈ Sleeps + Spin Gets, although the identity does not hold exactly.

Sleeps: the number of times a willing-to-wait latch request forced the session to sleep while waiting for the latch

Spin Gets: the number of times a willing-to-wait latch request missed but then obtained the latch by spinning
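The approximate identity can be checked against the cache buffers chains row above:

```python
# Check the approximate identity Misses ≈ Sleeps + Spin Gets
# using the cache buffers chains row from the report above.
misses, sleeps, spin_gets = 12_831_875, 130_058, 12_683_450

print(sleeps + spin_gets)   # 12813508: close to, but not exactly, 12831875
assert abs(misses - (sleeps + spin_gets)) / misses < 0.01
```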

10-3 Latch Miss Sources

Latch Miss Sources  Snaps: 70719-70723
-> only latches with sleeps are shown
-> ordered by name, sleeps desc

Latch Name                Where             NoWait Misses  Sleeps  Waiter Sleeps
ASM Keyed state latch     kfksolGet                     0       1              1
ASM allocation            kfgpnSetDisks2                0      17              0
ASM allocation            kfgpnClearDisks               0       5              0
ASM allocation            kfgscCreate                   0       4              0
ASM allocation            kfgrpGetByName                0       1             26
ASM map headers           kffmUnidentify_3              0       7              8
ASM map headers           kffmAllocate                  0       6              0
ASM map headers           kffmIdentify                  0       6             11
ASM map headers           kffmFree                      0       1              0
ASM map operation freeli  kffmTranslate2                0      15              8
ASM map operation hash t  kffmUnidentify                0   44677          36784
ASM map operation hash t  kffmTranslate                 0     220           3517

The data source is DBA_HIST_LATCH_MISSES_SUMMARY.

Latch Name: the name of the latch

Where: the kernel function (code path) that was holding the latch, not the code path that requested it. For example, the kcbgtcr function ('Get a block for Consistent Read') normally holds the cache buffers chains latch.

NoWait Misses: the number of failed no-wait latch requests

Sleeps: the number of sleeps incurred while making latch requests at this location

Waiter Sleeps: the number of sleeps that waiters did for each Where location; Sleeps counts the waits of the blockers, while Waiter Sleeps counts the waits of the blocked sessions.

10-4 Mutex Sleep Summary

Mutex Sleep Summary Snaps: 70719-70723-> ordered by number of sleeps desc WaitMutex Type Location Sleeps Time (ms)-- -Cursor Pin kksfbc [KKSCHLFSP2] 4364 14520Cursor Pin kkslce [KKSCHLPIN2] 2396 2498Library Cache kglpndl1 95 903 475Library Cache kglpin1 4 800 458Library Cache kglpnal2 91 799 259Library Cache kglget1 1 553 1697Library Cache kglpnal1 90 489 88Library Cache kgllkdl1 85 481 1528Cursor Pin kksLockDelete [KKSCHLPIN6] 410 666Cursor Stat kkocsStoreBindAwareStats [KKSSTA 346 497Library Cache kglhdgn2 106 167 348Library Cache kglhdgh2 64 26 84Library Cache Kgldtin1 42 19 55Cursor Pin kksfbc [KKSCHLPIN1] 13 34Library Cache kglhdgn1 62 11 13Library Cache kgllkal1 80 9 12Library Cache Kgllkc1 57 6 0Cursor Pin kksSetBindType [KKSCHLPIN3] 5 5Library Cache kglGetHandleReference 124 4 20Library Cache kglUpgradeLock 119 4 0Library Cache kglget2 2 3 0Library Cache kglati1 45 1 0Library Cache kglini1 32 1 0Library Cache kglobld1 75 1 0Library Cache kglobpn1 71 1 0

Mutex is a new memory locking mechanism introduced after 10.2.0.2. For a specific description of Mutex, please see "understanding Mutex in Oracle": http://www.askmaclean.com/archives/understanding-oracle-mutex.html.

Mutex Type

The Mutex Type is in fact the name of the client that uses the mutex. In version 10.2 only KKS uses mutexes, so there are only three types:

Cursor Stat (kgx_kks1)

Cursor Parent (kgx_kks2)

Cursor Pin (kgx_kks3)

Library Cache was added in 11g.

Location: the code location that issued the request for the mutex, not the code path or kernel function holding it

The following functions are the most common in 10.2:

kkspsc0 - responsible for parsing cursors; checks whether the cursor being parsed already has a parent cursor (heap 0)

kksfbc - responsible for finding a suitable child cursor or creating a new one

kksFindCursorstat

Sleeps:

Get and Sleep of Mutex

When a mutex is requested, this is generally called a get request. If the initial request is not granted, the process spins for up to _mutex_spin_count iterations (default 255), checking on each iteration of the spin loop whether the mutex has been released.

If the mutex is still not released after the spin, the process enters the wait event corresponding to the requested mutex. The actual wait event and wait behaviour are determined by the mutex type (Cursor Pin, Cursor Parent, and so on); the wait could be a blocking wait, or it could be a sleep.

Note, however, that the SLEEPS column of the V$MUTEX_SLEEP_HISTORY view records the number of waits: the relevant code increments this field when the wait begins.

The wait time is measured from the point the process enters the wait. When the process ends its wait, the waiting time is added to the total. The process then retries the mutex request, and if the mutex is still unavailable it enters the spin/wait loop again.

The GETS column of the V$MUTEX_SLEEP_HISTORY view increases only when a Mutex is successfully applied for.

Wait Time (ms): as with latches, spin time is not counted in the mutex elapsed time; it contains only the waiting elapsed time.
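The spin-then-wait loop described above can be sketched as follows. This is a highly simplified illustration of the mechanism, not Oracle's actual implementation; the constant mirrors the default of the _mutex_spin_count parameter:

```python
import threading
import time

MUTEX_SPIN_COUNT = 255  # analogous to _mutex_spin_count (default 255); illustrative only

def get_mutex(mutex: threading.Lock, stats: dict) -> None:
    """Highly simplified sketch of the spin-then-wait loop described above."""
    while True:
        # spin phase: poll the mutex up to MUTEX_SPIN_COUNT times
        for _ in range(MUTEX_SPIN_COUNT):
            if mutex.acquire(blocking=False):
                # GETS is incremented only when the mutex is finally obtained
                stats["gets"] = stats.get("gets", 0) + 1
                return
        # wait phase: the sleep counter is incremented when the wait begins
        stats["sleeps"] = stats.get("sleeps", 0) + 1
        time.sleep(0.001)  # then retry the whole request
```

An uncontended request succeeds in the spin phase and never touches the sleep counter, which is why GETS can be large while SLEEPS stays near zero on a healthy system.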


11 Segment Statistics (segment-level statistics)

11-1 Segments by Logical Reads

Segments by Logical Reads  DB/Inst: MAC/MAC2  Snaps: 70719-70723
-> Total Logical Reads: 2021476421
-> Captured Segments account for 83.7% of Total

Owner       Tablespace Name  Object Name           Obj. Type  Logical Reads  % Total
CONTENT_OW  INDEX_TS         MZ_PRODUCT_ATTRIBUTE  INDEX          372849920    18.44
CONTENT_OW  INDEX_TS         MZ_PRODUCT__LS_PK     INDEX          329829632    16.32
CONTENT_OW  DATA_TS          MZ_PRODUCT_ATTRIBUTE  TABLE          218419008    10.80
CONTENT_OW  PLAYLIST_A       MZ_PLAYLIST_ARTIST    TABLE          182426240     9.02
CONTENT_OW  DATA_TS          MZ_PRODUCT            TABLE          108597376     5.37

Owner: the owner of the segment

Tablespace Name: the name of the table space in which the data segment resides

Object Name: object name

Subobject Name: the sub-object name, such as a partition of a partition table

Obj. Type: the object type, generally TABLE / INDEX, or a partition or subpartition

Logical Reads: the logical reads that occurred on this segment, in units of blocks × accesses

% Total: the percentage of total logical reads (logical reads on this segment / total DB logical reads)
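The % Total column can be verified directly from the section header and a row's figure:

```python
total_logical_reads = 2_021_476_421   # "Total Logical Reads" in the section header
segment_logical_reads = 372_849_920   # MZ_PRODUCT_ATTRIBUTE index row
print(round(segment_logical_reads / total_logical_reads * 100, 2))  # 18.44
```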

11-2 Segments by Physical Reads

Segments by Physical Reads DB/Inst: MAC/MAC2 Snaps: 70719-70723-> Total Physical Reads: 56839035-> Captured Segments account for 51.9% of Total Tablespace Subobject Obj. PhysicalOwner Name Object Name Name Type Reads% Total- CONTENT_OW SONG_TS MZ_SONG TABLE 7311928 12.86CONTENT_OW DATA_TS MZ_CS_WORK_PENDING_R TABLE 4896554 8.61CONTENT_OW DATA_TS MZ_CONTENT_PROVIDER_ TABLE 3099387 5.45CONTENT_OW DATA_TS MZ_PRODUCT_ATTRIBUTE TABLE 1529971 2.69CONTENT_OW DATA_TS MZ_PUBLICATION TABLE 1391735 2.45

Physical Reads: the number of physical reads that occurred on this segment, in units of blocks × accesses

% Total: the percentage of total physical reads (physical reads on this segment / total DB physical reads)

11-3 Segments by Physical Read Requests

Segments by Physical Read Requests DB/Inst: MAC/MAC2 Snaps: 70719-70723-> Total Physical Read Requests: 33936360-> Captured Segments account for 45.5% of Total Tablespace Subobject Obj. Phys ReadOwner Name Object Name Name Type Requests% Total- CONTENT_OW DATA_TS MZ_CONTENT_PROVIDER_ TABLE 3099346 9.13CONTENT_OW DATA_TS MZ_PRODUCT_ATTRIBUTE TABLE 1529950 4.51CONTENT_OW DATA_TS MZ_PRODUCT TABLE 1306756 3.85CONTENT_OW DATA_TS MZ_AUDIO_FILE TABLE 910537 2.68CONTENT_OW INDEX_TS MZ_PRODUCT_ATTRIBUTE INDEX 820459 2.42

Phys Read Requests: the number of physical read requests

% Total: physical read requests on this segment / total physical read IO requests

11-4 Segments by UnOptimized Reads

Segments by UnOptimized Reads DB/Inst: MAC/MAC2 Snaps: 70719-70723-> Total UnOptimized Read Requests: 811466-> Captured Segments account for 58.5% of Total Tablespace Subobject Obj. UnOptimizedOwner Name Object Name Name Type Reads% Total- CONTENT_OW DATA_TS MZ_CONTENT_PROVIDER_ TABLE 103580 12.76CONTENT_OW SONG_TS MZ_SONG TABLE 56946 7.02CONTENT_OW DATA_TS MZ_IMAGE TABLE 47017 5.79CONTENT_OW DATA_TS MZ_PRODUCT_ATTRIBUTE TABLE 40950 5.05CONTENT_OW DATA_TS MZ_PRODUCT TABLE 30406 3.75

UnOptimized Reads: UnOptimized Read Requests = Physical Read Requests - Optimized Read Requests

Optimized Read Requests are physical read requests satisfied by the Exadata Smart Flash Cache (the Smart Flash Cache of Exadata V2; despite the same name, it is a different concept from the Database Smart Flash Cache). Reads served from the smart flash cache are considered optimized because they are much faster than ordinary disk reads.

In addition, reads that use a storage index during smart scan are also counted as optimized read requests, because they avoid reading irrelevant data.

When not running on Exadata, UnOptimized Read Requests always equals Physical Read Requests.

% Total: UnOptimized read requests on this segment / (physical read IO requests - physical read requests optimized)
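The relationship between the three section headers in this report can be checked directly:

```python
physical_read_requests = 33_936_360   # header of Segments by Physical Read Requests
optimized_read_requests = 33_124_894  # header of Segments by Optimized Reads
unoptimized = physical_read_requests - optimized_read_requests
print(unoptimized)  # 811466, matching "Total UnOptimized Read Requests"
```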

11-5 Segments by Optimized Reads

Segments by Optimized Reads DB/Inst: MAC/MAC2 Snaps: 70719-70723-> Total Optimized Read Requests: 33124894-> Captured Segments account for 45.2% of Total Tablespace Subobject Obj. OptimizedOwner Name Object Name Name Type Reads% Total- CONTENT_OW DATA_TS MZ_CONTENT_PROVIDER_ TABLE 2995766 9.04CONTENT_OW DATA_TS MZ_PRODUCT_ATTRIBUTE TABLE 1489000 4.50CONTENT_OW DATA_TS MZ_PRODUCT TABLE 1276350 3.85CONTENT_OW DATA_TS MZ_AUDIO_FILE TABLE 890775 2.69CONTENT_OW INDEX_TS MZ_AM_REQUEST_IX3 INDEX 816067 2.46

Optimized reads were explained above; the unit here is the number of requests.

% Total: optimized read requests on this segment / total physical read requests optimized

11-6 Segments by Direct Physical Reads

Segments by Direct Physical Reads DB/Inst: MAC/MAC2 Snaps: 70719-70723-> Total Direct Physical Reads: 14118552-> Captured Segments account for 94.2% of Total Tablespace Subobject Obj. DirectOwner Name Object Name Name Type Reads% Total- CONTENT_OW SONG_TS MZ_SONG TABLE 7084416 50.18CONTENT_OW DATA_TS MZ_CS_WORK_PENDING_R TABLE 4839984 34.28CONTENT_OW DATA_TS MZ_PUBLICATION TABLE 1361133 9.64CONTENT_OW DATA_TS SYS_LOB0000203660C00 LOB 5904 .04CONTENT _ OW DATA_TS SYS_LOB0000203733C00 LOB 1656. 01

Direct Reads: direct path physical reads (in blocks × accesses)

% Total: direct path reads on this segment / total physical reads direct

11-7 Segments by Physical Writes

Segments by Physical Writes DB/Inst: MAC/MAC2 Snaps: 70719-70723-> Total Physical Writes: 590563-> Captured Segments account for 38.3% of Total Tablespace Subobject Obj. PhysicalOwner Name Object Name Name Type Writes% Total- CONTENT_OW DATA_TS MZ_CS_WORK_PENDING_ R TABLE 23595 4.00CONTENT_OW DATA_TS MZ_PODCAST TABLE 19834 3.36CONTENT_OW INDEX_TS MZ_IMAGE_IX2 INDEX 16345 2.77SYS SYSAUX WRH$_ACTIVE_SESSION_ 1367_70520 TABLE 14173 2.40CONTENT_OW INDEX_TS MZ_AM_REQUEST_IX3 INDEX 9645 1.63

Physical Writes: physical writes, in blocks × accesses

% Total: physical writes on this segment / total physical writes

11-9 Segments by Physical Write Requests

Segments by Physical Write Requests DB/Inst: MAC/MAC2 Snaps: 70719-70723-> Total Physical Write Requestss: 436789-> Captured Segments account for 43.1% of Total Tablespace Subobject Obj. Phys WriteOwner Name Object Name Name Type Requests% Total- CONTENT_OW DATA_TS MZ_CS_WORK_PENDING_ R TABLE 22581 5.17CONTENT_OW DATA_TS MZ_PODCAST TABLE 19797 4.53CONTENT_OW INDEX_TS MZ_IMAGE_IX2 INDEX 14529 3.33CONTENT_OW INDEX_TS MZ_AM_REQUEST_IX3 INDEX 9434 2.16CONTENT_OW DATA_TS MZ_AM_REQUEST TABLE 8618 1.97

Phys Write Requests: the number of physical write requests

% Total: physical write requests on this segment / total physical write IO requests

11-10 Segments by Direct Physical Writes

Segments by Direct Physical Writes DB/Inst: MAC/MAC2 Snaps: 70719-70723-> Total Direct Physical Writes: 29660-> Captured Segments account for 18.3% of Total Tablespace Subobject Obj. DirectOwner Name Object Name Name Type Writes% Total- SYS SYSAUX WRH$_ACTIVE_SESSION_ 1367 _ 70520 TABLE 4601 15.51CONTENT_OW DATA_TS SYS_LOB0000203733C00 LOB 620 2.09CONTENT_OW DATA_TS SYS_LOB0000203660C00 LOB 134 .45CONTENT _ OW DATA_TS SYS_LOB0000203779C00 LOB 46 .16CONTENT _ OW DATA_TS SYS_LOB0000203796C00 LOB 41. 14

Direct Writes: direct path writes, in blocks × accesses

% Total: direct path writes on this segment / total physical writes direct

11-11 Segments by Table Scans

Segments by Table Scans DB/Inst: MAC/MAC2 Snaps: 70719-70723-> Total Table Scans: 10713-> Captured Segments account for 1.0% of Total Tablespace Subobject Obj. TableOwner Name Object Name Name Type Scans% Total- CONTENT_OW DATA_TS MZ_PUBLICATION TABLE 92 .86CONTENT _ OW DATA_TS MZ_CS_WORK_PENDING_R TABLE 14 .13CONTENT _ OW SONG_TS MZ_SONG TABLE 3 .03CONTENT _ OW DATA_TS MZ_AM_REQUEST TABLE 1. 01

The source of Table Scans is dba_hist_seg_stat.table_scans_delta; note that this indicator is not very accurate.

11-12 Segments by DB Blocks Changes

Segments by DB Blocks Changes DB/Inst: MAC/MAC2 Snaps: 70719-70723->% of Capture shows% of DB Block Changes for each top segment compared- > with total DB Block Changes for all segments captured by the Snapshot Tablespace Subobject Obj. DB Block% ofOwner Name Object Name Name Type Changes Capture- CONTENT_OW INDEX_TS MZ_AM_REQUEST _ IX8 INDEX 347856 10.21CONTENT_OW INDEX_TS MZ_AM_REQUEST_IX3A INDEX 269504 7.91CONTENT_OW INDEX_TS MZ_AM_REQUEST_PK INDEX 251904 7.39CONTENT_OW DATA_TS MZ_AM_REQUEST TABLE 201056 5.90CONTENT_OW INDEX_TS MZ_PRODUCT_ATTRIBUTE INDEX 199888 5.86

DB Block Changes, in blocks × changes

% of Capture: block changes on this segment / total db block changes captured

11-13 Segments by Row Lock Waits

Segments by Row Lock Waits DB/Inst: MAC/MAC2 Snaps: 70719-70723->% of Capture shows% of row lock waits for each top segment compared- > with total row lock waits for all segments captured by the Snapshot Row Tablespace Subobject Obj. Lock% ofOwner Name Object Name Name Type Waits Capture- CONTENT_OW LOB_8K_TS MZ_ASSET_ WORK_EVENT_ INDEX 72005 43.86CONTENT_OW LOB_8K_TS MZ_CS_WORK_NOTE_RE_I _ 2013_1_36 INDEX 13795 8.40CONTENT_OW LOB_8K_TS MZ_CS_WORK_INFO_PART _ 2013_5_35 INDEX 12383 7.54CONTENT_OW INDEX_TS MZ_AM_REQUEST_IX3A INDEX 8937 5.44CONTENT_OW DATA_TS MZ_AM _ REQUEST TABLE 8531 5.20

Row Lock Waits: the number of row lock waits; data source dba_hist_seg_stat.ROW_LOCK_WAITS_DELTA.

11-14 Segments by ITL WAITS

Segments by ITL Waits DB/Inst: MAC/MAC2 Snaps: 70719-70723->% of Capture shows% of ITL waits for each top segment compared- > with total ITL waits for all segments captured by the Snapshot Tablespace Subobject Obj. ITL% ofOwner Name Object Name Name Type Waits Capture- CONTENT_OW LOB_8K_TS MZ_ASSET_ WORK_EVENT_ INDEX 95 30.16CONTENT_OW LOB_8K_TS MZ_CS_WORK_NOTE_RE_I _ 2013_1_36 INDEX 48 15.24CONTENT_OW LOB_8K_TS MZ_CS_WORK_INFO_PART _ 2013_5_35 INDEX 21 6.67CONTENT_OW INDEX_TS MZ_SALABLE_FIRST_AVA INDEX 21 6.67CONTENT_OW DATA_TS MZ_CS_WORK_PENDING_R TABLE 20 6.35

For more information on ITL, please see http://www.askmaclean.com/archives/enqueue-tx-row-lock-index-itl-wait-event.html.

ITL Waits: the number of waits for an ITL slot; data source dba_hist_seg_stat.itl_waits_delta.

11-14 Segments by Buffer Busy Waits

Segments by Buffer Busy Waits DB/Inst: MAC/MAC2 Snaps: 70719-70723->% of Capture shows% of Buffer Busy Waits for each top segment compared- > with total Buffer Busy Waits for all segments captured by the Snapshot Buffer Tablespace Subobject Obj. Busy% ofOwner Name Object Name Name Type Waits Capture- CONTENT_OW LOB_8K_TS MZ_ASSET_ WORK_EVENT_ INDEX 251073 57.07CONTENT_OW LOB_8K_TS MZ_CS_WORK_NOTE_RE_I _ 2013_1_36 INDEX 36186 8.23CONTENT_OW LOB_8K_TS MZ_CS_WORK_INFO_PART _ 2013_5_35 INDEX 31786 7.23CONTENT_OW INDEX_TS MZ_AM_REQUEST_IX3A INDEX 15663 3.56CONTENT_OW INDEX_TS MZ_CS_ WORK_PENDING_R INDEX 11087 2.52

Buffer Busy Waits: the number of buffer busy waits that occurred on this segment; data source dba_hist_seg_stat.buffer_busy_waits_delta

11-15 Segments by Global Cache Buffer

Segments by Global Cache Buffer BusyDB/Inst: MAC/MAC2 Snaps: 70719-7072->% of Capture shows% of GC Buffer Busy for each top segment compared- > with GC Buffer Busy for all segments captured by the Snapshot GC Tablespace Subobject Obj. Buffer% ofOwner Name Object Name Name Type Busy Capture- CONTENT_OW INDEX_TS MZ_AM_REQUEST _ IX3 INDEX 2135528 50.07CONTENT_OW DATA_TS MZ_CONTENT_PROVIDER_ TABLE 652900 15.31CONTENT_OW LOB_8K_TS MZ_ASSET_WORK_EVENT_ INDEX 552161 12.95CONTENT_OW LOB_8K_TS MZ_CS_WORK_NOTE_RE_I _ 2013_1_36 INDEX 113042 2.65CONTENT_OW LOB_8K_TS MZ_CS_WORK_INFO_PART _ 2013_5_35 INDEX 98134 2.30

GC Buffer Busy: the number of gc buffer busy waits that occurred on this segment; data source dba_hist_seg_stat.gc_buffer_busy_delta

11-15 Segments by CR Blocks Received

Segments by CR Blocks Received DB/Inst: MAC/MAC2 Snaps: 70719-70723-> Total CR Blocks Received: 763037-> Captured Segments account for 40.9% of Total CR Tablespace Subobject Obj. BlocksOwner Name Object Name Name Type Received% Total- CONTENT_OW DATA_TS MZ_AM_REQUEST TABLE 69100 9.06CONTENT_OW DATA_TS MZ_CS_WORK_PENDING_R TABLE 44491 5.83CONTENT_OW INDEX_TS MZ_AM_REQUEST_IX3A INDEX 36830 4.83CONTENT_OW DATA_TS MZ_PODCAST TABLE 36632 4.80CONTENT_OW INDEX_TS MZ_AM_REQUEST_PK INDEX 19646 2.57

CR Blocks Received: the number of global cache CR blocks received by the local node in the RAC; data source dba_hist_seg_stat.gc_cr_blocks_received_delta

% Total: global cache CR blocks received on this segment on this node / total gc cr blocks received

11-16 Segments by Current Blocks Received

Segments by Current Blocks ReceivedDB/Inst: MAC/MAC2 Snaps: 70719-70723-> Total Current Blocks Received: 704731-> Captured Segments account for 61.8% of Total Current Tablespace Subobject Obj. BlocksOwner Name Object Name Name Type Received% Total- CONTENT_OW INDEX_TS MZ_AM_REQUEST_IX3 INDEX 56287 7.99CONTENT_OW INDEX_TS MZ_AM_REQUEST_IX3A INDEX 45139 6.41CONTENT_OW DATA_TS MZ_AM_REQUEST TABLE 40350 5.73CONTENT_OW DATA_TS MZ_CS_WORK_PENDING_R TABLE 22808 3.24CONTENT_OW INDEX_TS MZ_AM_REQUEST_IX8 INDEX 13343 1.89

Current Blocks Received: refers to the number of global cache Current blocks received by the local node in the RAC, the data source DBA_HIST_SEG_STAT.gc_cu_blocks_received_delta

% Total: global cache current blocks received on this segment on this node / total gc current blocks received

12 Dictionary Cache Stats

Dictionary Cache Stats  DB/Inst: MAC/MAC2  Snaps: 70719-70723
-> "Pct Misses" should be very low (< 2% in most cases)
-> "Final Usage" is the number of cache entries being used

Cache                 Get Requests  Pct Miss  Scan Reqs  Pct Miss  Mod Reqs  Final Usage
dc_awr_control                  87       2.3          0       N/A         6            1
dc_global_oids                1134       7.8          0       N/A         0           13
dc_histogram_data          6119027       0.9          0       N/A         0        11784
dc_histogram_defs          1898714       2.3          0       N/A         0         5462
dc_object_grants               175      26.9          0       N/A         0            4
dc_objects                10254514       0.2          0       N/A         0         3807
dc_profiles                   8452       0.0          0       N/A         0            2
dc_rollback_segments       3031044       0.0          0       N/A         0         1947
dc_segments                1812243       1.4          0       N/A        10         3595
dc_sequences                 15783      69.6          0       N/A     15782           20
dc_table_scns                   70       2.9          0       N/A         0            1
dc_tablespaces             1628112       0.0          0       N/A         0           37
dc_users                   2037138       0.0          0       N/A         0           52
global database name          7698       0.0          0       N/A         0            1
outstanding_alerts             264      99.6          0       N/A         8            1
sch_lj_oids                     51       7.8          0       N/A         0            1

Dictionary Cache dictionary cache is also called row cache

The data source is dba_hist_rowcache_summary.

Cache: the dictionary cache name (kqrsttxt); each cache has a class id (kqrstcid), e.g. cid=3 is dc_rollback_segments

Get Requests: the number of GETS, i.e. requests for objects in this dictionary cache

Miss: GETMISSES, the number of get requests that missed the cache

Pct Miss: the ratio GETMISSES / Gets. This should be very low (under 2%); otherwise there may be heavy row cache lock contention.

Scan Reqs: the number of scan requests. When a scan occurs in kqrssc, kqrpScan or kqrpsiv, the scan counter (kqrstsrq, scan requests) is incremented. For example, during a tablespace migration the kttm2b function calls back kqrpsiv (used extent cache) to delete records from uet$ safely; this is rare in practice.

Pct Miss: SCANMISSES / SCANS

Mod Reqs: the number of requests to modify the dictionary cache object. In the data above the mod reqs of dc_sequences is very high, because sequences are dictionary objects that change frequently.

Final Usage: the number of dictionary cache entries that contain valid data in use (the USAGE column of the row cache).
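The Pct Miss formula can be illustrated with the dc_sequences row. The report shows only the ratio, so the GETMISSES figure below is back-derived from the displayed 69.6% and is approximate:

```python
# Pct Miss = GETMISSES / GETS * 100.  GETS is taken from the
# dc_sequences row above; GETMISSES is back-derived (approximate)
# from the 69.6% shown, since the raw value is not in the report.
gets = 15_783
getmisses = 10_985   # approximate, derived from the displayed Pct Miss of 69.6
pct_miss = getmisses / gets * 100
print(round(pct_miss, 1))  # 69.6
```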

Dictionary Cache Stats (RAC) DB/Inst: MAC/MAC2 Snaps: 70719-70723 GES GES GESCache Requests Conflicts Releases-- -dc_awr_control 14 2 0dc_global_oids 88 0 102dc_histogram_defs 43518 0 43521dc_objects 21608 17 21176dc_profiles 10 1dc_segments 24974 14 24428dc_sequences 25178 10644 347dc_table_scns 2 0 2dc_tablespaces 165 0 166dc_users 119 0 119outstanding_alerts 478 8 250sch_lj_oids 4 0 4

GES Requests: kqrstilr, total instance lock requests; the number of times an instance lock was requested through the Global Enqueue Service (GES)

A GES request may be issued, for example, when dumping a cache object, or when the LCK process (kqrbfr) frees some parent objects in the background

GES Conflicts: kqrstifr, instance lock forced-releases; the number of times the LCK process released the lock on an AST, incremented only in kqrbrl

GES Releases: kqrstisr, instance lock self-releases; incremented when the LCK process frees parent objects in the background

In the data above only dc_sequences shows a significant number of GES Conflicts. Using the ORDER and NOCACHE options for a sequence has a marked side effect in RAC: "row cache lock" waits on the DC_SEQUENCES row cache. The GETS requests, modifications, GES requests and GES conflicts on DC_SEQUENCES are proportional to how frequently SQL execution generates new sequence numbers.

In Oracle 10g, an ORDERED sequence may also cause a large number of DFS lock handle waits under high concurrency, due to bug 5209859.

13 Library Cache Activity

Library Cache Activity DB/Inst: MAC/MAC2 Snaps: 70719-70723-> "Pct Misses" should be very low Get Pct Pin Pct Invali-Namespace Requests Miss Requests Miss Reloads dations -- ACCOUNT_STATUS 8436 0.30 Nash A 0 0BODY 8697 15537 15537 0CLUSTER 317 4.7 321 4.7 0 0DBLINK 9212 0.1 0 N/A 0 0EDITION 4431 0.0 8660 0.0 0 0HINTSET OBJECT 1027 9.5 1027 14.4 0 0INDEX 792 18.2 792 18.2 0 0QUEUE 10 0.0 1733 0.0 0 0RULESET 0 N/A 8 87.5 7 0SCHEMA 8169 0.0 0 N/A 0 0SQL AREA 533409 533409-4246727944 101.1 44864 576SQL AREA BUILD 71500 65.50 N 0SQL AREA STATS A 0 0SQL AREA STATS 41008 90.3 41008 90.3 1 0TABLE/PROCEDURE 320310 0.6 1033991 3.6 25378 0TRIGGER 847 0.038442 0.3 110 0

NameSpace: the library cache namespace

GETS Requests: the number of times a library cache lock was requested for an object in this namespace

GETHITS: the number of times the object's library cache handle was found in memory

Pct Misses: (1 - (GETHITS / GETS Requests)) * 100

Pin Requests: the number of pin requests on objects in this namespace

PINHITS: the number of times the heaps/metadata of the object to be pinned were already in the shared pool

Pct Miss: (1 - (PINHITS / Pin Requests)) * 100

Reloads: pins that are not the first pin of the object since its handle was built, and that required the object to be reloaded from disk. If the Reloads value is high, consider increasing shared_pool_size.

Invalidations: the number of times objects in this namespace were marked invalid because a dependent object was modified
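The two miss-ratio formulas above share the same shape and can be expressed as a small helper. The hit count below is back-derived from the CLUSTER row (317 get requests at 4.7% miss) and is approximate:

```python
def pct_miss(requests: int, hits: int) -> float:
    """Pct Misses = (1 - hits/requests) * 100; returns 0.0 when there were no requests."""
    return (1 - hits / requests) * 100 if requests else 0.0

# CLUSTER row: 317 get requests at 4.7% miss implies roughly 302 hits (back-derived).
print(round(pct_miss(317, 302), 1))  # 4.7
```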

Library Cache Activity (RAC) DB/Inst: MAC/MAC2 Snaps: 70719-70723 GES Lock GES Pin GES Pin GES Inval GES Invali-Namespace Requests Requests Releases Requests dations-- -- ACCOUNT_STATUS 8436 000 0BODY 0 15497 15497 0 0CLUSTER 321 321 321 0 0DBLINK 9212 0 0 0 0EDITION 4431 4431 4431 0 0HINTSET OBJECT 1027 1027 1027 0 0INDEX 792 792 792 0 0QUEUE 8 1733 1733 0 0RULESET 0 8 8 0 0SCHEMA 4226 0 0 0 0TABLE/PROCEDURE 373163 704816 704816 0 0TRIGGER 0 38430 38430 0 0

GES Lock Requests (DLM_LOCK_REQUESTS): the number of times a lock instance lock was requested.

GES Pin Requests (DLM_PIN_REQUESTS): the number of times a pin instance lock was requested.

GES Pin Releases (DLM_PIN_RELEASES): the number of times a pin instance lock was released.

GES Inval Requests (DLM_INVALIDATION_REQUESTS): the number of times an invalidation instance lock was requested.

GES Invalidations (DLM_INVALIDATIONS): the number of invalidation pings received from other instances.

14 Process Memory Summary

Process Memory Summary DB/Inst: MAC/MAC2 Snaps: 70719-70723
-> B: Begin Snap  E: End Snap
-> All rows below contain absolute values (i.e. not diffed over the interval)
-> Max Alloc is Maximum PGA Allocation size at snapshot time
-> Hist Max Alloc is the Historical Max Allocation for still-connected processes
-> ordered by Begin/End snapshot, Alloc (MB) desc

                Alloc     Used    Avg  Std Dev     Max  Hist Max    Num    Num
  Category       (MB)     (MB)  Alloc    Alloc   Alloc     Alloc   Proc  Alloc
B Other       16062.7      N/A    6.1     66.6    3370      3370   2612   2612
  SQL          5412.2   4462.9    2.2     89.5    4483      4483   2508   2498
  Freeable     2116.4       .0     .9      6.3     298       N/A   2266   2266
  PL/SQL         94.0     69.8     .0       .0       1         1   2610   2609
E Other       15977.3      N/A    6.1     66.9    3387      3387   2616   2616
  SQL          5447.9   4519.0    2.2     89.8    4505      4505   2514   2503
  Freeable     2119.9       .0     .9      6.3     297       N/A   2273   2273
  PL/SQL         93.2     69.2     .0       .0       1         1   2614   2613

The data source is dba_hist_process_mem_summary. This section summarizes PGA usage and helps identify who is consuming PGA memory.

B: start snapshot E: end snapshot

This section lists the usage of each category in PGA

Category: the category name, one of "SQL", "PL/SQL", "OLAP" and "JAVA", plus two special categories. "Freeable" is memory that the OS has allocated to the process but that is not currently assigned to any category; "Other" is memory that is assigned to a category, but not to a named one.

Alloc (MB) (ALLOCATED_TOTAL): the total memory allocated for this category.

Used (MB) (USED_TOTAL): the memory used by this category.

Avg Alloc (MB) (ALLOCATED_AVG): the average memory allocated per process for this category.

Std Dev Alloc (MB): the standard deviation of the memory allocated for this category across processes.

Max Alloc (MB) (ALLOCATED_MAX): the maximum memory allocated by a single process in this category at snapshot time.

Hist Max Alloc (MB) (MAX_ALLOCATED_MAX): the historical maximum memory allocated in this category among still-connected processes.

Num Proc (NUM_PROCESSES): the number of processes.

Num Alloc (NON_ZERO_ALLOCS): the number of processes with a non-zero allocation in this category.
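How Avg Alloc, Std Dev Alloc, Max Alloc and Num Alloc relate can be illustrated with a small Python sketch over hypothetical per-process allocations (the list alloc_mb is invented for this example, and using the population standard deviation is only an assumption about how the report computes the deviation):

```python
import statistics

# Hypothetical per-process PGA allocations (MB) for one category;
# a single outlier process dominates Max Alloc and inflates Std Dev.
alloc_mb = [2.0, 1.5, 3.0, 150.0, 2.5]

avg_alloc = statistics.mean(alloc_mb)           # Avg Alloc (MB)
std_dev   = statistics.pstdev(alloc_mb)         # Std Dev Alloc (MB), population std dev
max_alloc = max(alloc_mb)                       # Max Alloc (MB)
num_alloc = sum(1 for a in alloc_mb if a > 0)   # Num Alloc: processes with a non-zero allocation

print(avg_alloc, max_alloc, num_alloc)  # -> 31.8 150.0 5
```

A large gap between Avg Alloc and Max Alloc, as here, is the pattern to look for when one session is hoarding PGA.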

14 SGA Information

14-1 SGA Memory Summary

SGA Memory Summary DB/Inst: MAC/MAC2 Snaps: 70719-70723

SGA regions           Begin Size (Bytes)   End Size (Bytes) (if different)
Database Buffers             20669530112
Fixed Size                       2241880
Redo Buffers                   125669376
Variable Size                10536094376
sum                          31333535744

Coarse-grained SGA memory usage. End Size is printed only when it differs from Begin Size.

14-2 SGA breakdown difference

SGA breakdown difference DB/Inst: MAC/MAC2 Snaps: 70719-70723
-> ordered by Pool, Name
-> N/A value for Begin MB or End MB indicates the size of that Pool/Name was insignificant or zero in that snapshot

Pool    Name                         Begin MB    End MB   % Diff
java    free memory                      64.0      64.0     0.00
large   PX msg pool                       7.8       7.8     0.00
large   free memory                     247.8     247.8     0.00
shared  Checkpoint queue                140.6     140.6     0.00
shared  FileOpenBlock                  2459.2    2459.2     0.00
shared  KGH: NO ACCESS                 1629.6    1629.6     0.00
shared  KGLH0                           997.7     990.5    -0.71
shared  KKSSP                           312.2     308.9    -1.06
shared  SQLA                            376.6     370.6    -1.61
shared  db_block_hash_buckets           178.0     178.0     0.00
shared  dbktb: trace buffer             156.3     156.3     0.00
shared  event statistics per sess       187.1     187.1     0.00
shared  free memory                    1208.9    1220.6     0.97
shared  gcs resources                   435.0     435.0     0.00
shared  gcs shadows                     320.6     320.6     0.00
shared  ges enqueues                    228.9     228.9     0.00
shared  ges resource                    118.3     118.3     0.00
shared  init_heap_kfsg                 1063.6    1068.1     0.43
shared  kglsim object batch             124.3     124.3     0.00
shared  ksunfy: SSO free list           174.7     174.7     0.00
stream  free memory                     128.0     128.0     0.00
        buffer_cache                  19712.0   19712.0     0.00
        fixed_sga                         2.1       2.1     0.00
        log_buffer                      119.8     119.8     0.00

Pool: the name of the memory pool.

Name: the name of the subdivided component within the pool; for example, KGLH0 stores KGL heap 0 and SQLA stores SQL execution plans.

Begin MB: the component's memory size at the start of the snapshot interval.

End MB: the component's memory size at the end of the snapshot interval.

% Diff: the percentage change between the two snapshots.

Pay special attention to shared pool shrinkage caused by AMM/ASMM: components such as SQLA and KQR can shrink noticeably in the SGA breakdown, which may trigger a series of parsing waits such as cursor: pin S wait on X and row cache lock.

The free memory figure here also deserves attention; it is generally recommended that the shared pool keep 300-400 MB of free memory.
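The % Diff column is a simple relative change; a minimal sketch (the function name pct_diff is invented, and the displayed MB values are rounded, so the result only approximates the report's -1.61 for SQLA):

```python
def pct_diff(begin_mb, end_mb):
    """% Diff column: relative size change of a component between snapshots."""
    return (end_mb - begin_mb) / begin_mb * 100

# SQLA shrank from 376.6 MB to 370.6 MB in the report above.
print(round(pct_diff(376.6, 370.6), 2))  # -> -1.59
```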

15 Streams statistics

Streams CPU/IO Usage DB/Inst: ORCL/orcl1 Snaps: 556-559
-> Streams processes ordered by CPU usage
-> CPU and I/O Time in micro seconds

Session Type          CPU Time   User I/O Time   Sys I/O Time
QMON Coordinator        101698               0              0
QMON Slaves              63856               0

Streams Capture DB/Inst: CATGT/catgt Snaps: 911,912
-> Lag Change should be small or negative (in seconds)

               Captured    Enqueued     Lag  Pct RuleEval  Pct Enqueue  Pct RedoWait  Pct Pause
Capture Name   Per Second  Per Second   Change        Time         Time          Time       Time
CAPTURE_CAT    650391 930 23 071

Streams Apply DB/Inst: CATGT/catgt Snaps: 911,912
-> Pct DB is the percentage of all DB transactions that this apply handled
-> WDEP is the wait for dependency
-> WCMT is the wait for commit
-> RBK is rollbacks
-> MPS is messages per second
-> TPM is time per message in milli-seconds
-> Lag Change should be small or negative (in seconds)

            Applied   Pct   Pct   Pct   Pct  Applied  Dequeue  Apply     Lag
Apply Name      TPS    DB  WDEP  WCMT   RBK      MPS      TPM    TPM  Change
APPLY_CAT         0     0     0     0     0        0        0      0       0

Capture Name: the name of the Streams capture process.

Captured Per Second: the number of messages mined per second.

Enqueued Per Second: the number of messages enqueued per second.

Lag Change: the delay between the time a redo record is generated and the time it is mined into a message.

Pct Enqueue Time: the percentage of time spent enqueuing.

Pct RedoWait Time: the percentage of time spent waiting for redo.

Pct Pause Time: the percentage of time spent paused.

Apply Name: the name of the Streams apply process.

Applied TPS: the number of transactions applied per second.

Pct DB: the percentage of all database transactions handled by this apply process.

Pct WDEP: the percentage of time spent waiting for dependencies.

Pct WCMT: the percentage of time spent waiting for commit.

Pct RBK: the percentage of transactions rolled back.

Applied MPS: the number of messages applied per second.

Dequeue TPM: the time per dequeued message, in milliseconds.

Lag Change: the delay between the generation of the latest message and its receipt by the apply process.
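The per-second and per-message figures above are straightforward ratios; a minimal sketch with made-up inputs (streams_rates is not an Oracle function):

```python
def streams_rates(messages, busy_time_ms, elapsed_s):
    """Messages per second (MPS) and time per message in ms (TPM),
    as the Streams Apply section defines them."""
    mps = messages / elapsed_s
    tpm_ms = busy_time_ms / messages if messages else 0.0
    return mps, tpm_ms

# 1000 messages applied over a 10 s interval, spending 500 ms in total:
print(streams_rates(1000, 500.0, 10))  # -> (100.0, 0.5)
```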

16 Resource Limit

Resource Limit Stats DB/Inst: MAC/MAC2 Snap: 70723
-> only rows with Current or Maximum Utilization > 80% of Limit are shown
-> ordered by resource name

                   Current       Maximum       Initial
Resource Name      Utilization   Utilization   Allocation   Limit
ges_procs                 2612          8007        10003    10003
processes                 2615          8011        10000    10000

The data comes from dba_hist_resource_limit.

Note that only resources whose current or maximum utilization exceeds 80% of the limit are listed here; resources that do not appear are at safe usage levels.

Current Utilization: the current usage of this resource (resources include enqueue resources, locks, processes, and so on).

Maximum Utilization: the maximum usage of this resource since the instance was last started.

Initial Allocation: the initial allocation value, generally equal to the value specified in the parameter file.

Limit: the actual hard limit of this resource.
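The 80% filter described above can be sketched like this (over_threshold and the sample rows are invented for illustration):

```python
def over_threshold(resources, threshold=0.8):
    """Keep resources whose current or maximum utilization
    exceeds threshold * limit, as this AWR section's filter does."""
    return [name for name, current, maximum, limit in resources
            if current > threshold * limit or maximum > threshold * limit]

rows = [
    ("processes", 2615, 8011, 10000),  # maximum 8011 > 8000, so it is listed
    ("sessions",  3000, 5000, 11000),  # both below 80% of 11000, filtered out
]
print(over_threshold(rows))  # -> ['processes']
```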

17 init.ora Parameters

Init.ora Parameters DB/Inst: MAC/MAC2 Snaps: 70719-70723

Parameter Name                  Begin value                      End value (if different)
_compression_compatibility      11.2.0
_kghdsidx_count                 4
_ksmg_granule_size              67108864
_shared_pool_reserved_min_all   4100
archive_lag_target              900
audit_file_dest                 /u01/app/oracle/admin/MAC/adum
audit_trail                     OS
cluster_database                TRUE
compatible                      11.2.0.2.0
control_files                   +DATA/MAC/control01.ctl, +RECO
db_16k_cache_size               268435456
db_block_size                   8192
db_cache_size                   19327352832
db_create_file_dest             +DATA

Parameter Name: the name of the parameter.

Begin value: the parameter's value at the start snapshot.

End value: the parameter's value at the end snapshot (printed only when it changed during the interval).
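The "End value only when different" behavior is essentially a dictionary diff; a minimal sketch with hypothetical values (the changed db_cache_size end value is invented):

```python
def changed_params(begin, end):
    """Parameters whose value differs between the begin and end snapshots;
    AWR prints the End value only for these."""
    return {name: (begin[name], end[name])
            for name in begin if end.get(name) != begin[name]}

begin = {"cluster_database": "TRUE", "db_cache_size": "19327352832"}
end   = {"cluster_database": "TRUE", "db_cache_size": "20401094656"}  # hypothetical resize
print(changed_params(begin, end))  # -> {'db_cache_size': ('19327352832', '20401094656')}
```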

18 Global Messaging Statistics

Global Messaging Statistics DB/Inst: MAC/MAC2 Snaps: 70719-70723

Statistic                                Total   per Second   per Trans
acks for commit broadcast (actual)       53705         14.9         0.2
acks for commit broadcast (logical)     311182         86.1         1.3
broadcast msgs on commit (actual)       317082         87.7         1.3
broadcast msgs on commit (logical)      317082         87.7         1.3
broadcast msgs on commit (wasted)       263332         72.9         1.1
dynamically allocated gcs resourc            0          0.0         0.0
dynamically allocated gcs shadows            0          0.0         0.0
flow control messages received             267          0.1         0.0
flow control messages sent                 127          0.0         0.0
gcs apply delta                              0          0.0         0.0
gcs assume cvt                           55541         15.4         0.2

Global messaging statistics; the data source is WRH$_DLM_MISC.

20 Global CR Served Stats

Global CR Served Stats DB/Inst: MAC/MAC2 Snaps: 70719-70723

Statistic                  Total
CR Block Requests         403703
CURRENT Block Requests    444896
Data Block Requests       403705
Undo Block Requests        94336
TX Block Requests         307896
Current Results           652746
Private results            21057
Zero Results              104720
Disk Read Results          69418
Fail Results                 508
Fairness Down Converts    102844
Fairness Clears            15207
Free GC Elements               0
Flushes                   105052
Flushes Queued                 0
Flush Queue Full               0
Flush Max Time (us)            0
Light Works                71793
Errors                       117

Statistics on CR blocks served by LMS; the data source is WRH$_CR_BLOCK_SERVER.

21 Global CURRENT Served Stats

Global CURRENT Served Stats DB/Inst: MAC/MAC2 Snaps: 70719-70723
-> Pins = CURRENT Block Pin Operations
-> Flushes = Redo Flush before CURRENT Block Served Operations
-> Writes = CURRENT Block Fusion Write Operations

Statistic Total %
