A transaction query database relies mainly on the In Memory feature of Oracle 12.1.0.2.0 to cache three large monthly-partitioned tables. The In Memory component is aimed primarily at OLAP workloads; most of this application's operations are queries, and most of the time they only touch one or a few columns of a table, so the In Memory feature can be told to load only specific columns of a table into the In Memory area. In the beginning, thanks to low concurrency and other factors, everything ran quite fast. As time went on, the three tables grew larger and larger, took up more and more memory, and one problem after another appeared, such as a system failure in the first half of this year.
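The article does not show the DDL that was used; as a minimal sketch of column-level In Memory population (which columns to exclude is an assumption here, not taken from the article), marking the table INMEMORY and excluding the columns the queries never touch could look like this:

-- exclude columns the queries never touch; every other column is populated
ALTER TABLE T_SSP_TRANDATA_MPOS INMEMORY
  NO INMEMORY (CARD_DISP_NO, TERMINAL_NUM, POS_SIGN_FLG);

Only the remaining columns are loaded into the In Memory area; the excluded ones are still read from the buffer cache or disk as usual.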
SQL> r
  1  select wait_class_id,wait_class,count(*) cnt
  2  from dba_hist_active_sess_history
  3  where snap_id between 12073 and 12074
  4  group by wait_class_id,wait_class
  5  order by 3 desc

WAIT_CLASS_ID WAIT_CLASS         CNT
------------- --------------- ------
   1740759767 User I/O         12472
                                2363
   3386400367 Commit            2301
   1893977003 Other             1093
   3875070507 Concurrency        132
   4217450380 Application         67
   4108307767 System I/O          21
   3290255840 Configuration        1

8 rows selected.
Most of these waits are in the User I/O class (the row with a blank wait class corresponds to ASH samples that were on CPU). The next step is to query which wait events in the database make up that I/O:
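The article does not show that query's text; a likely sketch, restricting the same snapshot range to the User I/O class:

select event_id, event, count(*) cnt
  from dba_hist_active_sess_history
 where snap_id between 12073 and 12074
   and wait_class = 'User I/O'
 group by event_id, event
 order by 3 desc;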
  EVENT_ID EVENT                      CNT
---------- ----------------------- ------
3056446529 read by other session     6149
 834992820 db file parallel read     4756
2652584166 db file sequential read   1418
3926164927 direct path read           993
 506183215 db file scattered read      56
Working back from these waits, the SQL text responsible for them is:
SELECT *
  FROM (SELECT tmp_page.*, rownum row_id
          FROM (SELECT t.TRAN_UUID,
                       t.IN_MNO,
                       t.EX_MNO merchantCode,
                       t.CARD_TYP,
                       t.CARD_DISP_NO,
                       t.TRAN_RESPONSE_CD,
                       t.TRAN_CD,
                       t.TRAN_STS,
                       t.TRAN_SEQ_NO,
                       t.TRAN_BAT_NO,
                       to_char(t.TRAN_DATE_TIME, 'YYYYMMDD') AS TRAN_DT,
                       to_char(t.TRAN_DATE_TIME, 'HH24MISS') AS TRAN_TM,
                       t.TRAN_IN_MOD payWay,
                       t.TERMINAL_NUM,
                       t.POS_SIGN_FLG,
                       t.TRAN_AMT,
                       t.RECEIVER_FEE_AMT,
                       t.TRAN_FLG,
                       t.ROOT_XXXX_ORG_NM belongtoOrgNm,
                       t.BUSINESS_EMP_NM empNm,
                       t.XXXX_ORG_NM directlyOrg,
                       t.XXXX_ORG_NO,
                       t.XXXX_ORG_PATH
                  FROM T_SSP_TRANDATA_MPOS t
                 WHERE t.TRAN_DATE BETWEEN TO_DATE(:1, 'yyyyMMdd') AND
                       TO_DATE(:2, 'yyyyMMdd')
                   AND t.ROOT_XXXX_ORG_NO = :3
                   AND t.XXXX_ORG_PATH LIKE :4 || '%'
                 ORDER BY t.TRAN_DATE_TIME DESC) tmp_page
         WHERE rownum <= :5)
 WHERE row_id > :6
The execution plan was similar to the usual nested paging pattern (the plan screenshot is not reproduced here).
Using AWR to compare the same time window on different days: on the previous day a single execution of this SQL took 1168 milliseconds (about 0.02 minutes) and it was executed 171 times, while during the failure period a single execution took 102929 milliseconds (about 1.71 minutes) and it was executed 248 times. That is 77 more executions than usual, roughly 131.67 extra minutes of database time.
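The article does not say which report was used for the comparison; one common way is the AWR compare-period report, run from SQL*Plus as a DBA (a sketch of the invocation, not necessarily what was done here):

-- prompts for two snapshot ranges and produces a side-by-side diff of load, top SQL and wait events
@?/rdbms/admin/awrddrpt.sql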
Since the execution count during the failure window was so much higher than on the previous day, one suspicion was that a user in the front end clicked a button, waited a long time without getting a response, and kept clicking, causing the same SQL to run over and over.
I/O resources were almost exhausted: while session A was reading a data block from disk into memory, sessions B and C requested the same block, so B and C ended up waiting on read by other session.
As for direct path read: when the table is small, its blocks are read into the buffer cache; once the table grows beyond roughly 2% of the buffer cache, Oracle's algorithm skips the cache and forces direct path reads. The resulting flood of repeated physical reads was draining the I/O capacity, so we decided to set event 10949 to turn this behaviour off.
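The exact command is not quoted in the article; disabling serial direct path reads with event 10949 at the instance level typically looks like this (the scope and level chosen here are assumptions):

-- push serial full table scans back through the buffer cache
alter system set events '10949 trace name context forever, level 1';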
One more note: to get the IN MEMORY feature working in the real sense, parallel_degree_policy must be set to AUTO and parallel_force_local to FALSE; otherwise the execution plan merely shows it as enabled, which is an illusion.
So parallel_degree_policy was changed to AUTO, the T_SSP_TRANDATA_MPOS table was reloaded, and the whole table was populated into the In Memory area. After all that back and forth the system stayed stable for a while, but problems still cropped up later on.
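A sketch of what those changes usually look like (the INMEMORY priority and the verification query are assumptions, not taken from the article):

ALTER SYSTEM SET parallel_degree_policy = AUTO;
ALTER SYSTEM SET parallel_force_local = FALSE;
ALTER TABLE T_SSP_TRANDATA_MPOS INMEMORY PRIORITY HIGH;

-- confirm the segment has actually been populated into the IM column store
SELECT segment_name, populate_status, bytes_not_populated
  FROM v$im_segments;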
With the application code unchanged, colleagues in development and architecture then carried out a table-splitting and database-splitting scheme: of the three large tables, one was dropped, and each of the other two was split on a monthly basis into four tables, so one month's data lives in four smaller tables. The migration to the new database was completed, but on the night it went into production the query function still could not return results; a single execution of the SQL took 150 seconds.
With a runtime like that there was no way to sign off on the job, so we started looking at the SQL to see how to optimize it.
SELECT *
  FROM (SELECT tmp_page.*, rownum row_id
          FROM (SELECT to_char(TRAN_DATE_TIME, 'yyyyMMdd HH24:mm:ss'),
                       t.TRAN_UUID,
                       t.IN_MNO,
                       t.EX_MNO merchantCode,
                       t.CARD_TYP,
                       t.CARD_DISP_NO,
                       t.TRAN_RESPONSE_CD,
                       t.TRAN_CD,
                       t.TRAN_STS,
                       t.TRAN_SEQ_NO,
                       t.TRAN_BAT_NO,
                       to_char(t.TRAN_DATE_TIME, 'YYYYMMDD') AS TRAN_DT,
                       to_char(t.TRAN_DATE_TIME, 'HH24MISS') AS TRAN_TM,
                       t.TRAN_IN_MOD payWay,
                       t.TERMINAL_NUM,
                       t.POS_SIGN_FLG,
                       t.TRAN_AMT,
                       t.RECEIVER_FEE_AMT,
                       t.TRAN_FLG,
                       t.XXXX_ORG_NO,
                       t.XXXX_ORG_PATH
                  FROM T_TRADE_201807_MPOS_2_0001 t
                 WHERE t.TRAN_DATE BETWEEN TO_DATE('20180701', 'yyyyMMdd') AND
                       TO_DATE('20180730', 'yyyyMMdd')
                   AND t.ROOT_XXXX_ORG_NO = '6AAAAAAAAAAAAA'
                   AND t.XXXX_ORG_PATH LIKE '0FDAFDS%'
                 ORDER BY t.TRAN_DATE_TIME DESC) tmp_page
         WHERE rownum <= 10)
 WHERE row_id > 0
The execution plan at this point is not reproduced here; it can be pulled from the shared pool as sketched below.
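Assuming the statement's sql_id has been identified (the &sql_id placeholder below is hypothetical), DBMS_XPLAN can display the plan of the cached cursor:

select *
  from table(dbms_xplan.display_cursor('&sql_id', null, 'TYPICAL'));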
The indexes on the table:

OWNER INDEX_NAME                   COLUMN_NAME
----- ---------------------------- ----------------
XXXX  IDX_1807_MPOS_21_XXXX_ORG_NO XXXX_ORG_NO
XXXX  IDX_1807_MPOS_21_IN_MNO      IN_MNO
XXXX  IDX_1807_MPOS_21_ROOT_XXXX_N ROOT_XXXX_ORG_NO
XXXX  IDX_1807_MPOS_21_TRAN_DT     TRAN_DATE
XXXX  IDX_1807_MPOS_21_TRAN_TM     TRAN_DATE_TIME
XXXX  PK_T_SSP_1807_MPOS_21        TRAN_UUID
XXXX  PK_T_SSP_1807_MPOS_21        TRAN_DATE
As we all know, before creating an index you should look at column cardinality: the ratio of a column's number of distinct values to the total number of rows tells you its selectivity.
The July table holds 18228172 rows in total, and the cardinality of the ROOT_XXXX_ORG_NO column is 1, meaning the column contains a single repeated value.
So the selectivity of ROOT_XXXX_ORG_NO is far too low, and indexing that column on its own is definitely not recommended. Conversely, when a column's selectivity is above roughly 20%, its data is reasonably well distributed; if such a column appears in the WHERE condition and is not yet indexed, it should get an index.
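One quick way to check this from the data dictionary (the owner and the column list are placeholders following the article's masked names, and the figures rely on up-to-date optimizer statistics):

select t.num_rows,
       c.column_name,
       c.num_distinct,
       round(c.num_distinct / t.num_rows, 6) as selectivity
  from dba_tables t
  join dba_tab_col_statistics c
    on c.owner = t.owner
   and c.table_name = t.table_name
 where t.owner = 'XXXX'
   and t.table_name = 'T_TRADE_201807_MPOS_2_0001'
   and c.column_name in ('ROOT_XXXX_ORG_NO', 'XXXX_ORG_PATH', 'TRAN_DATE');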
I won't say any more about that; since the development colleagues could not deliver the job in front of the leadership, let's see whether there is any room for optimization.
First, gather statistics on the table and let the optimizer do some dynamic sampling; that alone shortened the execution time considerably.
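The exact commands are not shown in the article; a sketch of the usual approach (the parameter values here are assumptions):

begin
  dbms_stats.gather_table_stats(
    ownname          => 'XXXX',
    tabname          => 'T_TRADE_201807_MPOS_2_0001',
    estimate_percent => dbms_stats.auto_sample_size,
    method_opt       => 'for all columns size auto',
    cascade          => true);   -- refresh index statistics as well
end;
/

Dynamic sampling can also be forced for a single statement with a hint such as /*+ dynamic_sampling(t 4) */ on the inner query block.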
To be clear: a paging statement must include an ORDER BY, otherwise the rows returned can differ on every execution; that is only acceptable if the business logic is not strict about ordering.
Next, look at the columns in the WHERE condition.
When the WHERE predicates are equality conditions and the ORDER BY is on other columns, the equality columns should come first in the index and the ORDER BY columns after them (a generic sketch follows below).
When the WHERE predicates are range conditions and the ORDER BY is on other columns, there is no single right way to build the index; the key is how much data the predicates actually filter out.
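A generic illustration of the equality-plus-sort rule (the table and column names here are made up, not from this system):

-- for: WHERE status = :1 ORDER BY created_time DESC
create index idx_orders_status_time on orders_demo (status, created_time);

With the equality column leading, the index entries for one status value are already ordered by created_time, so the sort can be avoided and a rownum filter can stop the scan early.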
Based on the above considerations, create the following index:
create index xxx.IDX_1807_MPOS_21_NO_PA
  on xxx.T_TRADE_201807_MPOS_2_0001 ("TRAN_DATE_TIME", "ROOT_xxxx_ORG_NO", "xxxx_ORG_PATH")
  tablespace XXX_IDX online nologging;
With that, the development colleagues could finally deliver the job.
The improvement also shows up in our monitoring system: the CPU utilization, memory utilization and DB time charts (screenshots not reproduced here) all went from jagged peaks to smooth, steady curves.
To close, a few questions: is there any problem with building the index this way? And why is the number of rows returned not 10? You are welcome to discuss; there has to be a little suspense left.