Shulou (Shulou.com), SLTechnology News&Howtos, 2025-04-02 Update
This article explains the Oracle wait event "db file sequential read": the scenarios in which it appears, which of them are normal, and which deserve tuning attention. It is fairly detailed and should be a useful reference; interested readers are encouraged to read it through.
Oracle distinguishes three kinds of physical-read waits:
db file sequential read (a single-block read into one SGA buffer)
db file scattered read (a multi-block read into many discontiguous SGA buffers)
direct path read (a single- or multi-block read into the PGA, bypassing the SGA)
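To see which of these read waits sessions are experiencing right now, a query along these lines can be used (a sketch against the standard v$session_wait view):

```sql
-- p1/p2 identify the file# and block# being read; for "db file scattered read"
-- p3 is the number of blocks in the multi-block read.
SELECT sid, event, p1 AS file#, p2 AS block#, p3, wait_time, seconds_in_wait
FROM   v$session_wait
WHERE  event IN ('db file sequential read',
                 'db file scattered read',
                 'direct path read');
```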
1. The most common case is an INDEX FULL SCAN / UNIQUE SCAN in the execution plan. Here "db file sequential read" waits are expected and generally do not require our special attention.
2. When the execution plan includes an INDEX RANGE SCAN followed by "TABLE ACCESS BY INDEX ROWID" (or a DELETE/UPDATE driven the same way), the server process works in the order "access index -> find rowid -> access the table block the rowid points to and perform the necessary operation". Every physical read here waits on "db file sequential read", and each read fetches exactly one block. In this case the index's clustering_factor comes into play, so this scenario does require our special attention; the solution mentioned in this example is also aimed at it.
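A quick way to judge whether clustering_factor is hurting an index range scan is to compare it with the table's block and row counts (a sketch; the owner and index name below are placeholders):

```sql
-- clustering_factor close to BLOCKS: rows are well ordered relative to the
-- index, so consecutive index entries tend to hit the same table block (good).
-- clustering_factor close to NUM_ROWS: almost every row fetched needs a
-- different table block, i.e. one "db file sequential read" per row (bad).
SELECT i.index_name,
       i.clustering_factor,
       t.blocks,
       t.num_rows
FROM   dba_indexes i
JOIN   dba_tables  t
  ON   t.owner      = i.table_owner
 AND   t.table_name = i.table_name
WHERE  i.owner      = 'SCOTT'      -- hypothetical owner
  AND  i.index_name = 'EMP_IDX';   -- hypothetical index name
```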
3. Extent boundaries. Suppose an extent contains 33 blocks and the multi-block read size for "db file scattered read" is 8 blocks. After 4 multi-block reads of the extent (4 x 8 = 32 blocks), one block remains, and since a scattered (multi-block) read cannot span an extent boundary, that last block is fetched with a single-block read, producing a "db file sequential read". This is normal and generally does not require extra attention.
4. Cached blocks inside an extent. Suppose an extent contains 8 data blocks and some of them are already in the buffer cache. A multi-block read cannot include blocks that are already cached, so the uncached blocks around them must be fetched with smaller multi-block reads or with single-block reads, producing "db file sequential read" waits. Note that this can happen not only to tables but also to indexes. This is normal and generally does not require extra attention.
5. Chained/migrated rows. (We will not go into how chained rows form here.) Chained/migrated rows force the server process to perform an additional single-block read when fetching a row, which shows up as "db file sequential read". This phenomenon requires special attention, because a large number of chained/migrated rows can drastically degrade operations such as FULL SCAN (past experience: a full table scan that used to take only 30 minutes took several hours after a large number of rows became chained), and it also has a smaller but real performance impact on other operations. You can monitor the "table fetch continued row" statistic in the v$sysstat view to gauge chained/migrated-row access across the system, and use CHAIN_CNT in the DBA_TABLES view to see chained/migrated rows per table; the latter of course requires collecting statistics on the table regularly. If you are not in the habit of collecting statistics regularly, you can combine the @?/rdbms/admin/utlchain script with the ANALYZE TABLE ... LIST CHAINED ROWS command to obtain the necessary chained-row information.
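The checks above can be sketched as follows (SCOTT.EMP is a placeholder table; the CHAINED_ROWS table is created by running the utlchain script mentioned above):

```sql
-- System-wide: how often row fetches had to follow a chained/migrated row
SELECT name, value
FROM   v$sysstat
WHERE  name = 'table fetch continued row';

-- Per-table: populate CHAINED_ROWS (created by @?/rdbms/admin/utlchain),
-- then count the chained/migrated rows that were found
ANALYZE TABLE scott.emp LIST CHAINED ROWS INTO chained_rows;

SELECT table_name, COUNT(*) AS chained_cnt
FROM   chained_rows
GROUP  BY table_name;
```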
6. Index maintenance during INSERT. When an INSERT adds rows to a table, the execution plan does not show much detail, but Oracle may need to use indexes to verify constraints on the table (for example, uniqueness), and it must also insert entries into the index leaf blocks; "db file sequential read" waits may appear at this point. How prominent they are also depends on the specific way the insert is done. This is normal and generally does not require extra attention.
7. UPDATE/DELETE by rowid. Unlike the "INDEX RANGE SCAN - UPDATE/DELETE" case mentioned earlier, if we update or delete data via rowid, the server process first reads the row on the table block the rowid points to (note: the table block is accessed first), and then, based on the column values in that row, accesses the index leaf blocks (note: Oracle does not know in advance where those leaf blocks are, so just as with a range scan or unique scan it must also visit the index branch blocks). All of these accesses are single-block reads, and only after the "db file sequential read" waits complete the necessary reads does the actual EXEC that updates or deletes take place.
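A minimal sketch of this access pattern (the table and bind variable are hypothetical; in practice the rowid would be captured by a prior query):

```sql
-- Step 1: a prior query captures the rowid of the row to change
SELECT rowid, empno FROM scott.emp WHERE empno = 7788;

-- Step 2: the update addresses the table block directly via that rowid;
-- the table block is read first, then the index branch and leaf blocks that
-- must be maintained, all as single-block "db file sequential read"s
UPDATE scott.emp
SET    sal = sal * 1.1
WHERE  rowid = :captured_rowid;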
Summary: "db file sequential read" occurs when the information a process needs is not in the SGA, and the process waits for it to be read from disk into the SGA. It is usually issued by SQL or recursive SQL reading from indexes, rollback segments, tables (rowid-based table access), control files, and data file headers. To reduce this wait event, either reduce the number of waits or reduce the average wait time. Tuning SQL to reduce logical reads, and watching for inefficient large index scans with table access by rowid (a full table scan may be better), reduces the number of waits. The average wait time can be reduced by moving to storage with lower response times and by spreading out hotspot files. On a modern storage subsystem, the average single-block read wait should not exceed 10 ms. Using the wait event's p1 and p2 parameters together with the dba_extents view, we can locate the segment being accessed and then disperse the hotspot.
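Locating the hot segment from the wait's p1/p2 parameters can be sketched as follows (substitute the file# and block# reported for the wait):

```sql
-- For "db file sequential read", p1 = absolute file number and
-- p2 = block number of the waited-for read (see v$session_wait).
SELECT owner, segment_name, segment_type
FROM   dba_extents
WHERE  file_id = :p1
  AND  :p2 BETWEEN block_id AND block_id + blocks - 1;
```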
Official wait-event reference: Oracle Database Performance Tuning Guide, "Wait Events Statistics".
That is all of "how to use the db file read wait event of Oracle". Thank you for reading! We hope the content shared here helps you; for more related knowledge, follow the industry information channel.