2025-02-23 Update From: SLTechnology News&Howtos > Database
Shulou (Shulou.com) 05/31 report --
This article explains how to diagnose and resolve DIRECT PATH READ waits caused by a missing index. The approach is simple, fast, and practical; let's walk through it step by step.
The quickest way to diagnose a database problem is to look directly at the Top 5 Timed Events section of the AWR report.
Let's start with the direct path read event, which accounts for 63.33% of DB time: 78,586 waits totaling 3,833 s (about 64 minutes), while the elapsed time covered by the report is only 20 minutes. So we need to figure out what activity is causing such heavy direct path read waits.
So what is a direct path read? Normally, a data block (the smallest storage unit in ORACLE) is first read into the buffer cache by the server process before it is returned. For a very large table, however, caching all of its blocks would push many other objects out of the buffer cache, ageing them out. To avoid this, Oracle can use direct path reads: the server process reads the blocks from disk directly into its own memory, bypassing the buffer cache entirely. Oracle uses a number of (mostly hidden) parameters to decide when a direct path read is used.
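As a minimal sketch, the hidden parameters that influence this decision can be inspected from a SYS session. Note that these underscore parameters are internal, undocumented, and version-dependent; the query below is an illustration, not advice to change them:

```sql
-- Hedged sketch: list two internal parameters commonly associated with
-- the buffered-vs-direct read decision (run as SYS; internal names
-- vary by Oracle version).
SELECT x.ksppinm  AS parameter_name,
       y.ksppstvl AS current_value
FROM   x$ksppi  x
JOIN   x$ksppcv y ON x.indx = y.indx
WHERE  x.ksppinm IN ('_small_table_threshold', '_serial_direct_read');
```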
Since direct path reads are high, the next step is to see which objects they are hitting. The "Segments by Direct Physical Reads" section of the AWR report gives this information:
Clearly, the direct physical reads come from accessing tbcm_catalogfile. Because physical reads = physical reads cache + physical reads direct, it is worth checking "Segments by Physical Reads" as well, not just the direct-read section:
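Outside of an AWR report, the same segment-level information can be pulled live from `V$SEGMENT_STATISTICS` (note these counters are cumulative since instance startup, whereas AWR shows interval deltas). A hedged sketch:

```sql
-- Top 10 segments by buffered and direct physical reads
-- (cumulative since instance start, unlike AWR interval values).
SELECT *
FROM (
    SELECT owner, object_name, statistic_name, value
    FROM   v$segment_statistics
    WHERE  statistic_name IN ('physical reads', 'physical reads direct')
    ORDER  BY value DESC
)
WHERE ROWNUM <= 10;
```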
The table with the most physical reads is still tbcm_catalogfile. Now we know which object the physical reads mainly occur on, but not which business logic (that is, which SQL) is driving them. So, naturally, we next look at the SQL statements ranked by physical reads:
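The AWR "SQL ordered by Reads" section provides this ranking; a live equivalent can be sketched against `V$SQL` (again cumulative, not interval-based):

```sql
-- Top 5 statements still in the shared pool, ranked by disk reads.
SELECT *
FROM (
    SELECT sql_id,
           disk_reads,
           executions,
           SUBSTR(sql_text, 1, 80) AS sql_text_head
    FROM   v$sql
    ORDER  BY disk_reads DESC
)
WHERE ROWNUM <= 5;
```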
Looking up the first statement by its SQL_ID, the text is:
SELECT F_ID, F_OBJECTID, F_FILELOCATION, F_FILESRCLOCATION, F_ISONSERVER, F_DATASIZE, F_PACKAGEPATH, F_SERVERID, F_ISMAINFILE, F_FILEPROPERTY, F_DIRTYPE FROM TBCM_CATALOGFILE WHERE F_OBJECTID = :"SYS_B_0" AND F_PACKAGEPATH = :"SYS_B_1" ORDER BY F_OBJECTID
Sure enough, it touches tbcm_catalogfile. Looking into that table: it holds more than 4,000,000 rows, the F_OBJECTID column is almost unique, yet the table has no indexes at all. Without an index, Oracle can only full-scan the table when executing the SQL above, and a table this large happens to meet the conditions for DIRECT PATH READ, so the execution plan fetches the data via direct path reads. Even for a single process this is bad; with multiple concurrent processes it gets worse, because direct path reads do not cache blocks in the buffer cache, so every process has to fetch the same data from disk for itself.
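Both facts (row count and the absence of indexes) can be confirmed from the data dictionary. A hedged sketch, assuming statistics are reasonably current and the owner is known:

```sql
-- Confirm the table has no indexes at all.
SELECT index_name
FROM   dba_indexes
WHERE  table_name = 'TBCM_CATALOGFILE';   -- expect: no rows

-- Row count as recorded by optimizer statistics
-- (accurate only as of the last statistics gathering).
SELECT num_rows, last_analyzed
FROM   dba_tables
WHERE  table_name = 'TBCM_CATALOGFILE';
```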
Having analyzed the first item in the Top 5 Events, let's move on to the second.
The second event is log file sync. When a COMMIT or ROLLBACK is issued, the server process wakes the LGWR process, which is responsible for flushing the redo buffer to the online log files. The wait event recorded by the LGWR background process itself is log file parallel write. So, generally speaking, when foreground log file sync waits are high, background log file parallel write waits are high as well. Let's verify this in the AWR report:
As expected. Moreover, the average wait for log file parallel write is 28 ms, above the 20 ms rule of thumb, which suggests I/O pressure on the log files.
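The pairing of the two events can also be checked live via `V$SYSTEM_EVENT` (cumulative since instance start, so the averages will differ somewhat from the AWR interval figures):

```sql
-- Compare foreground commit waits with LGWR's own write waits.
SELECT event,
       total_waits,
       ROUND(time_waited_micro / 1000 / NULLIF(total_waits, 0), 1) AS avg_wait_ms
FROM   v$system_event
WHERE  event IN ('log file sync', 'log file parallel write');
```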
Keep looking:
The log switched five times in 20 minutes, once every 4 minutes on average, far more often than the commonly accepted rate of roughly one switch every 15-20 minutes. This indicates that the redo log files are probably too small.
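Log switch frequency can be checked directly from `V$LOG_HISTORY`, grouping switches by hour:

```sql
-- Redo log switches per hour; a healthy system typically shows
-- only a few switches per hour.
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
       COUNT(*)                               AS switches
FROM   v$log_history
GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
ORDER  BY 1 DESC;
```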
Keep looking:
Within the 20-minute window no rollbacks occurred (user rollbacks = 0). User calls / (user commits + user rollbacks) = 9.87, which is below the empirical threshold of 25, indicating that the system commits too frequently.
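The same ratio can be computed from `V$SYSSTAT` (cumulative values since instance start, whereas the AWR figure of 9.87 is computed over the report interval):

```sql
-- Calls per transaction; values below ~25 are often read as
-- "the application commits too frequently".
SELECT (SELECT value FROM v$sysstat WHERE name = 'user calls')
       / NULLIF( (SELECT value FROM v$sysstat WHERE name = 'user commits')
               + (SELECT value FROM v$sysstat WHERE name = 'user rollbacks'), 0)
       AS calls_per_transaction
FROM dual;
```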
Based on the problems above, the following countermeasures were taken:
Create a composite index on the F_OBJECTID column of the tbcm_catalogfile table.
Since the hardware cannot be replaced, the I/O contention on the log files is tolerated for now.
Increase the redo log files from 50 MB to 2 GB.
Because adjusting the application code would be a large effort, the overly frequent COMMITs are left alone for now.
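The two changes that were actually applied can be sketched as follows. The index name, column list, file paths, and group numbers below are assumptions for illustration; the query's WHERE clause filters on both F_OBJECTID and F_PACKAGEPATH, hence the two-column index. Note that Oracle redo logs cannot be resized in place: larger groups must be added, then the old ones dropped once they are inactive.

```sql
-- 1) Composite index covering the query's filter columns
--    (index name is hypothetical).
CREATE INDEX idx_catfile_objid
    ON tbcm_catalogfile (f_objectid, f_packagepath);

-- 2) Add new 2 GB redo log groups (paths and group numbers are
--    hypothetical; adjust to the actual layout).
ALTER DATABASE ADD LOGFILE GROUP 4 ('/u01/oradata/orcl/redo04.log') SIZE 2G;
ALTER DATABASE ADD LOGFILE GROUP 5 ('/u01/oradata/orcl/redo05.log') SIZE 2G;
ALTER DATABASE ADD LOGFILE GROUP 6 ('/u01/oradata/orcl/redo06.log') SIZE 2G;

-- 3) Switch past the old 50 MB groups, then drop each one once
--    V$LOG shows its status as INACTIVE.
ALTER SYSTEM SWITCH LOGFILE;
ALTER DATABASE DROP LOGFILE GROUP 1;
```

After dropping old groups, the obsolete log files on disk must be removed at the OS level; Oracle does not delete them automatically.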
After these adjustments, the same workload was run again and an AWR report covering 15:00-15:15 was collected. The report confirms that the problems above have been resolved:
At this point you should have a deeper understanding of how to resolve DIRECT PATH READ waits caused by a missing index. Try it in practice yourself; for more related content, follow us and keep learning!
Welcome to subscribe to "Shulou Technology Information" for the latest news, interesting finds, and hot topics in the IT industry.