2025-01-17 · SLTechnology News&Howtos · Database
In the previous article in this series I produced some graphs highlighting the main differences between performing a large delete by tablescan and performing it by index range scan. Depending on the data patterns involved, choosing the right strategy can make a significant difference to the number of random I/Os, the volume of undo generated, and the CPU needed for sorting, all of which affect the time required to complete the delete.
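The two strategies above can be sketched as hinted SQL. This is a hypothetical illustration only: the table name t1 and the columns come from the test case described later in this article, but the index name t1_dt_open and the date predicate are my assumptions, not taken from the source.

```sql
-- Option 1: drive the large delete by tablescan.
delete /*+ full(t1) */
from   t1
where  date_open < date '2010-01-01';   -- illustrative predicate

rollback;

-- Option 2: drive the same delete by an index range scan on date_open.
-- (t1_dt_open is an assumed index name.)
delete /*+ index(t1 t1_dt_open) */
from   t1
where  date_open < date '2010-01-01';

rollback;
```

The predicate is identical in both cases; only the hinted access path changes, which is exactly the choice whose I/O, undo, and sorting costs the graphs compared.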
However, a production environment is far more complex than this simple demonstration. So, if you are faced with a daunting delete, you need to think carefully about how to build a model that really represents the system you are dealing with. Importantly, there are two different situations to consider:
* When you are dealing with a very large one-off task, you need to get it right first time and not discover some critical special case too late, especially if you are not allowed to take the production system offline to complete the task and your deadlines are tight.
* When you have a regular but infrequent very large job, it is important to know which apparently unrelated small changes could have a big impact on the run time; and it is worth knowing what problems might appear at the next upgrade so that you can address them in advance.
A simple example of the latter, of course, is my earlier comment about 12c and its ability to drive a delete through an index fast full scan, a path that was not available in earlier versions of Oracle. In my small example, one test changed its execution plan from an index full scan in 11g to an index fast full scan in 12c and took twice as long to complete.
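One way to protect a statement against that kind of plan change after an upgrade is to hint the original access path (a SQL plan baseline would be a more robust alternative). A minimal sketch, assuming the table and primary key index names from the test case later in this article; the predicate value is my own illustration, not from the source:

```sql
-- Request the ordered index scan explicitly, preventing the optimizer
-- from switching to an index fast full scan after the 12c upgrade.
delete /*+ index(t1 t1_pk) */
from   t1
where  id <= 100000;   -- illustrative predicate, not from the source
```

Hinting like this trades optimizer flexibility for plan stability, which is usually the right trade for a rare, very large batch job whose run time must be predictable.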
Keep thinking: when you try to delete a large volume of data from a table through an index range scan, how many features of Oracle can you think of that might come into play, and what impact might each of them have?
For a busy system, here is one suggestion. Sometimes you will find that a long-running DML statement is very slow because it is working on the most recent part of the data and is therefore affected by concurrent changes. In that situation Oracle finds that it has to visit the undo segments to fetch undo records so that it can build read-consistent versions of the blocks; it needs to do this so that it can check which rows qualify in both the current and the read-consistent versions of each block.
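A quick way to see whether a session is paying this read-consistency cost is to look at its session statistics. This is a hedged diagnostic sketch, not part of the original test: it assumes you have SELECT privilege on the v$mystat and v$statname dynamic performance views.

```sql
-- How much undo has the current session applied while constructing
-- read-consistent copies of blocks?
select  sn.name, ms.value
from    v$mystat   ms,
        v$statname sn
where   sn.statistic# = ms.statistic#
and     sn.name in (
            'data blocks consistent reads - undo records applied',
            'consistent gets'
        );
```

A large and growing value for the first statistic during the delete suggests the statement is fighting concurrent DML on the blocks it is visiting.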
One of the tests I have run deletes data through the date_open index. So how do I force a descending range scan of that index, so that the delete handles the most recent data first, before it has had much (or any) time to suffer collateral damage from other DML?
There is a very quick way to test whether this idea is effective: all we have to do is check the number of rows sorted and the number of rows deleted, and from those two figures we can tell whether the optimization took place.
My test data set has 1,000,000 rows and four indexes (the primary key on id, plus indexes on client_ref, date_open, and date_closed), so at best I should see "sorts (rows)" = 4 * rows deleted. Here is a summary of a test I ran to see what would happen:
delete /*+ index_desc(t1 t1_pk) */ from t1 where id
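The check described above can be sketched as a small measurement harness. This is my own hypothetical framing, not the author's script: it assumes access to v$mystat and v$statname, and relies on the real "sorts (rows)" session statistic.

```sql
-- Capture "sorts (rows)" before the delete...
select  ms.value
from    v$mystat   ms,
        v$statname sn
where   sn.statistic# = ms.statistic#
and     sn.name = 'sorts (rows)';

-- ...run the hinted delete, noting the "N rows deleted." feedback,
-- then repeat the query above. If (after - before) is roughly
-- 4 * N, each of the four indexes was maintained from a sorted
-- batch of rowids, i.e. the optimization took place.
```

Using v$mystat rather than v$sysstat keeps the figures scoped to the testing session, so concurrent activity on the instance does not pollute the deltas.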