Shulou (Shulou.com) 05/31 report — SLTechnology News & Howtos > Database. Updated 2025-02-23.
This article introduces how to use SQLT for SQL optimization. It is fairly detailed, and I hope interested readers will find it a useful reference.
1. Background of SQLT
SQLTXPLAIN (SQLT for short) is a SQL performance diagnostic tool provided by the Oracle COE. Given a single SQL statement as input, SQLT generates a set of diagnostic files used to diagnose SQL with poor performance or wrong results.
The diagnostic files generated by SQLT cover everything needed to diagnose SQL performance: execution plans, object statistics, CBO parameters, 10053 trace files, plan-change history, and so on. SQLT also ships a set of utilities, such as scripts for quickly binding a SQL execution plan.
SQLT is mainly used when you need to quickly bind a SQL execution plan, or when analyzing difficult SQL problems involving parameters, bugs, and the like.
2. A brief introduction to the SQLT family
SQLT provides seven main methods for generating diagnostic details for a SQL statement: XTRACT, XECUTE, XTRXEC, XTRSBY, XPLAIN, XPREXT, and XPREXC. XTRACT, XECUTE, XTRXEC, XTRSBY, XPREXT, and XPREXC handle bind variables and perform bind peeking, but XPLAIN does not, because XPLAIN is based on the EXPLAIN PLAN FOR command, which does not do bind peeking.
Therefore, avoid XPLAIN where possible. Apart from XPLAIN's bind-peeking limitation, all seven methods provide enough diagnostic detail for a preliminary assessment of a SQL statement with poor performance or an incorrect result set. If the SQL is still in memory or in the Automatic Workload Repository (AWR), use XTRACT or XTRXEC; otherwise use XECUTE. For Data Guard or read-only standby databases, use XTRSBY. Consider XPLAIN only if none of the other methods is feasible. XPREXT and XPREXC are similar to XTRACT and XECUTE, but disable some SQLT features so that SQLT itself runs faster.
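Concretely, each method corresponds to a script under sqlt/run. A hypothetical session might look like the following sketch (the SQL_ID and sample script path are illustrative, not from a real run):

```sql
-- Run from the sqlt/run directory, connected as the application user.
-- XTRACT only needs the SQL_ID (or hash value) of a cursor in memory or AWR:
SQL> START sqltxtract.sql aak402j1r6zy3

-- XECUTE instead takes a script containing the SQL text and its bind
-- variable definitions, so SQLT can actually execute the statement:
SQL> START sqltxecute.sql input/sample/script1.sql
```

Each script prompts for the SQLTXPLAIN password and packages its output into a zip file when it finishes.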
The relationship between several main methods is as follows:
XTRXEC combines the XTRACT and XECUTE methods: it actually runs both and generates the corresponding files. After any of these methods runs, the generated files are packaged automatically.
For more information on SQLT, see the MOS document SQLT Usage Guide (Doc ID 1677588.1). This article focuses on the most useful SQLT methods (the examples in this article were run on 11.2.0.3).
3. The SQLT sword leaves its sheath
1. SQLT generates diagnostic files
The diagnostic files are generated with the scripts in the sqlt/run directory, which also contains the SQLHC health-check scripts. Here is an example:
SQL text: select * from test1 where test1.status in (select test2.status from test2 where object_name like 'PRC_TEST%')
This is a simple subquery SQL. TEST1.STATUS is indexed, and STATUS has a skewed distribution, as follows:
dingjun123@ORADB> select status, count(*)
  2  from test1
  3  group by status;

STATUS     COUNT(*)
--------  ---------
INVALID           6
VALID         76679

-- the subquery result is INVALID
dingjun123@ORADB> select test2.status from test2
  2  where object_name like 'PRC_TEST%';

STATUS
--------
INVALID
INVALID
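To verify the ingredients the optimizer is working with, the data dictionary shows the skew and the histogram directly. A sketch, using the table and column names from the example above (run as the owning schema):

```sql
-- Is there a histogram on TEST1.STATUS, and at what sample size?
SELECT column_name, num_distinct, histogram, sample_size
  FROM user_tab_col_statistics
 WHERE table_name = 'TEST1'
   AND column_name = 'STATUS';

-- The histogram buckets themselves:
SELECT endpoint_number, endpoint_actual_value
  FROM user_histograms
 WHERE table_name = 'TEST1'
   AND column_name = 'STATUS';
```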
The subquery returns only INVALID, so we would expect this statement to be driven by the subquery result, use the index on TEST1.STATUS, and normally take a nested loops join. OK, let's look at the execution plan:
The execution plan is puzzling: the table statistics are up to date with a 100% sample rate, and a histogram has been gathered on the STATUS column, so why a HASH JOIN, and why a full scan of TEST1? Start by diagnosing with SQLT: go to the sqlt/run directory, run the corresponding script, and enter the SQL_ID; the generated files are then packaged automatically.
dingjun123@ORADB> @sqltxtrxec

PL/SQL procedure successfully completed.
Elapsed: 00:00:00.00

Parameter 1:
SQL_ID or HASH_VALUE of the SQL to be extracted (required)
Enter value for 1: aak402j1r6zy3

Parameter 2:
SQLTXPLAIN password (required)
Enter value for 2: XXXXXX

PL/SQL procedure successfully completed.
Elapsed: 00:00:00.00

Values passed to sqltxtrxec:
SQL_ID_OR_HASH_VALUE: "aak402j1r6zy3"
Unzip the file and you can see the following:
The main file is the one to focus on: it contains the main diagnostic content, alongside the 10053 trace and other files.
First open the main file, and you can see the main diagnostic contents:
The CBO environment, execution plans, historical execution information, and the statistics for tables, indexes, and other objects are all in this main file. Most of the time this file alone is enough to understand why a SQL statement is inefficient. For example, if statistics were gathered during the period when the plan went bad, you can quickly pin the problem on a faulty statistics collection.
In general, look at the execution plan first, found via Execution Plans in the Plans directory. Clicking the + icons expands the corresponding statistics.
Although the statistics are correct, the CBO estimates 76685 rows for the result, while the actual result is 6 rows, an overestimate of 12781x; something is clearly wrong. Click the corresponding + to see the statistics:
The STATUS column of TEST1 has a histogram and was sampled at 100%, with no problems at all. At this point, the likely explanations for this simple SQL are:
A CBO limitation prevents the cardinality of the subquery result set from being estimated accurately.
A CBO bug or a parameter setting.
Solutions for both cases are covered later. First, why does the plan use a HASH JOIN with a FULL TABLE SCAN of TEST1, with the estimated cardinality equal to exactly the row count of TEST1? The reasons are:
TEST1's STATUS has a histogram
The subquery selects STATUS, but its values are unknown before execution; they could be INVALID or VALID.
Combining these factors, the CBO cannot predict the subquery's values before run time, so this optimizer limitation produces a poor execution plan (12c adaptive plans can solve this problem).
Now that the cause is known, the fix is simply to bind the plan with a SQL PROFILE, as detailed in the next section.
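As an aside, the estimate-versus-actual gap described above can also be confirmed without SQLT, which is handy for a quick first look. A sketch, assuming you can rerun the statement:

```sql
-- Rerun with rowsource statistics enabled...
SELECT /*+ gather_plan_statistics */ *
  FROM test1
 WHERE test1.status IN (SELECT test2.status
                          FROM test2
                         WHERE object_name LIKE 'PRC_TEST%');

-- ...then compare E-Rows (estimated) with A-Rows (actual) per plan step:
SELECT *
  FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
```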
2. SQLT quickly binds an execution plan
A SQL PROFILE can be bound quickly with the SQLT tools. A SQL PROFILE attaches a series of hints to a SQL statement; the advantage is that no SQL rewrite is needed, and the profile can be managed directly in the database.
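For orientation, profiles created this way are ordinary SQL profiles and can be inspected with the usual dictionary views. A sketch (the SQL_ID is the one from this article's example):

```sql
-- Which profiles exist, and do they force-match literals?
SELECT name, category, status, force_matching
  FROM dba_sql_profiles;

-- Is a given cursor actually using a profile?
SELECT sql_id, child_number, sql_profile
  FROM v$sql
 WHERE sql_id = 'aak402j1r6zy3';
```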
The COE scripts support two types of SQL PROFILE binding:
Direct binding: for plans that suddenly regress, where a good plan exists in history and the current plan is bad, simply bind the good historical plan directly.
Replacement binding: when the plan has always been bad and there is no good historical plan to use as a reference, first get a good plan by adding hints, then bind that good plan to the original SQL, either by manually editing the file generated by the COE scripts, by using coe_load_sql_profile, or by writing a stored procedure.
Note: if the SQL does not use bind variables, the file generated by coe_xfr_sql_profile must be edited to set force_match => TRUE. The same applies to replacement binding done with coe_load_sql_profile or a hand-written stored procedure, so that all SQL sharing the same structure (same text, different literals) is bound to the good plan.
(The plan-binding scripts are in the sqlt/utl directory.)
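Under the hood, the script generated by coe_xfr_sql_profile ends in a call to DBMS_SQLTUNE.IMPORT_SQL_PROFILE; the force_match edit mentioned above is a one-word change in that call. A simplified sketch (the real generated file carries the full SQL text and the complete outline hints; the text and hint below are placeholders):

```sql
DECLARE
  sql_txt CLOB;
  h       SYS.SQLPROF_ATTR;
BEGIN
  sql_txt := q'[select * from test1 where ...]';  -- full original SQL text here
  h := SYS.SQLPROF_ATTR(q'[ALL_ROWS]');           -- outline hints here (illustrative)
  DBMS_SQLTUNE.IMPORT_SQL_PROFILE(
    sql_text    => sql_txt,
    profile     => h,
    name        => 'coe_aak402j1r6zy3_3071332600',  -- naming pattern used by the script
    force_match => TRUE);  -- generated as FALSE; change to TRUE for literal-only SQL
END;
/
```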
Let's talk about these two binding methods:
1) Direct binding with the coe_xfr_sql_profile script
SQL execution plans often mutate; when the plan degrades, quickly bind the efficient plan. For example, run coe_xfr_sql_profile and enter the sql_id:
SQL> @coe_xfr_sql_profile.sql

Parameter 1:
SQL_ID (required)
Enter value for 1: 0hzkb6xf08jhw

PLAN_HASH_VALUE  AVG_ET_SECS
---------------  -----------
     3071332600         .006    <- the efficient plan
       40103161          653

Parameter 2:
PLAN_HASH_VALUE (required)      <- enter the PLAN_HASH_VALUE to bind; obviously 3071332600
Enter value for 2: 3071332600

Finally, execute the generated file. Note: if the SQL does not use bind variables, change force_match => FALSE to force_match => TRUE in the generated file.
2) Replacement binding with coe_load_sql_profile
The example in section 3.1 showed a wrong execution plan caused by the CBO's inability to determine the subquery result. Before 12c, the plan needs to be bound manually. Since no ready-made good plan exists, write hints to construct a SQL statement with the correct plan, then bind that plan to the original SQL via SQLT's replacement binding.
First, add hints to the original SQL to make its execution plan correct. The modified SQL is as follows:
select /*+
  BEGIN_OUTLINE_DATA
  USE_NL(@"SEL$5DA710D3" "TEST1"@"SEL$1")
  LEADING(@"SEL$5DA710D3" "TEST2"@"SEL$2" "TEST1"@"SEL$1")
  INDEX_RS_ASC(@"SEL$5DA710D3" "TEST2"@"SEL$2" ("TEST2"."OBJECT_NAME"))
  INDEX_RS_ASC(@"SEL$5DA710D3" "TEST1"@"SEL$1" ("TEST1"."STATUS"))
  OUTLINE(@"SEL$2")
  OUTLINE(@"SEL$1")
  UNNEST(@"SEL$2")
  OUTLINE_LEAF(@"SEL$5DA710D3")
  ALL_ROWS
  DB_VERSION('11.2.0.3')
  OPTIMIZER_FEATURES_ENABLE('11.2.0.3')
  IGNORE_OPTIM_EMBEDDED_HINTS
  END_OUTLINE_DATA
*/ *
from test1
where test1.status in (select test2.status from test2 where object_name like 'PRC_TEST%')
Then use the coe_load_sql_profile script to do the replacement binding, entering the original sql_id and the replacement sql_id:
dingjun123@ORADB> @coe_load_sql_profile

Parameter 1:
ORIGINAL_SQL_ID (required)
Enter value for 1: aak402j1r6zy3

Parameter 2:
MODIFIED_SQL_ID (required)
Enter value for 2: 6rbnw92d7djwk

PLAN_HASH_VALUE  AVG_ET_SECS
---------------  -----------
      313848035         .001

Parameter 3:
PLAN_HASH_VALUE (required)
Enter value for 3: 313848035

Values passed to coe_load_sql_profile:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
ORIGINAL_SQL_ID: "aak402j1r6zy3"
MODIFIED_SQL_ID: "6rbnw92d7djwk"
PLAN_HASH_VALUE: "313848035"
…
Executing the original statement again shows that the plan binding succeeded: the index and NESTED LOOPS are now used.
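One quick way (not from SQLT itself) to confirm the binding is the Note section of the cursor's plan:

```sql
SELECT *
  FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('aak402j1r6zy3', NULL, 'TYPICAL'));

-- The plan output should end with something like:
--   Note
--   -----
--      - SQL profile "coe_aak402j1r6zy3_..." used for this statement
```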
SQLT's fast plan binding is widely used to deal with sudden SQL performance problems. It is indeed an excellent tool, a sword out of its sheath that cuts iron like mud.
3. XPLORE quickly diagnoses parameter-setting problems
One night, an important statement of a certain system ran for an hour with no result after being migrated to a new database. It used to be very fast (about 1s), and the business staff were very anxious. The statement is as follows:
SELECT *
  FROM (SELECT A.ID, A.TEL_ID, A.PRE_CATE_ID, A.INSERT_TIME, A.REMARK1
          FROM TAB_BN_TEST_LOG A,
               (SELECT TEL_ID, MIN(INSERT_TIME) AS INSERT_TIME
                  FROM TAB_BN_TEST_LOG
                 WHERE INSERT_TIME > '08-APR-19'
                   AND ID NOT IN (SELECT IMEI FROM TX_MM_LOG_201907 WHERE TID = '10')
                 GROUP BY TEL_ID) B
         WHERE A.TEL_ID = B.TEL_ID
           AND A.INSERT_TIME = B.INSERT_TIME
           AND A.ID NOT IN (SELECT IMEI FROM TX_MM_LOG_201907 WHERE TID = '10')
         ORDER BY INSERT_TIME)
 WHERE ROWNUM < 200
View the execution plan:
A FILTER appears in the execution plan; that is, the subquery could not be unnested because NOT IN is used. But think about it: this is 11g, which has the null-aware anti join feature, so there should be no FILTER, and hints had no effect. The first thing to check is whether the null-aware parameter is set. After checking:
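For reference, hidden optimizer parameters such as the null-aware setting are not visible in V$PARAMETER; one common way to check them is via the X$ views as SYS. A sketch (undocumented territory, so treat the exact names as assumptions):

```sql
SELECT i.ksppinm AS parameter, v.ksppstvl AS value
  FROM x$ksppi  i,
       x$ksppcv v
 WHERE i.indx = v.indx
   AND i.ksppinm LIKE '%null_aware%';
-- Expect _optimizer_null_aware_antijoin = TRUE on 11g defaults.
```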
It was fine, so how do you proceed when statistics, SQL PROFILE, and every parameter setting you can think of all check out?
Since query transformation is affected by many settings, even though null-aware anti join was enabled, some other parameter or fix control could be interfering. This is where SQLT's XPLORE shines: it toggles the known parameters, and the fix controls for known bugs, one by one, generating an execution plan for each setting. The result is an HTML file in which you can find the responsible parameter or bug by inspecting the plans.
SQLT XPLORE offers several methods, such as XECUTE and XPLAIN. For slow statements, the XPLAIN method is recommended. Then look for the settings whose resulting plan matches the target plan to identify the problem.
To use XPLORE, refer to readme.txt in sqlt/utl/xplore. Note that you must add the marker /* ^^unique_id */ to the SQL text.
Finally, the contents of the generated XPLORE file are as follows:
The execution plans have 8 distinct PLAN_HASH_VALUEs. Click into the corresponding entry to find the parameter settings that produce the correct execution plan:
Finally, it was found to be related to the _optimizer_squ_bottomup parameter, which had been set to FALSE system-wide, preventing the subquery from being transformed into a null-aware anti join. After the parameter was reset, execution time returned to normal.
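The fix itself is then a one-liner. A sketch (test at session level first before touching the spfile):

```sql
-- Verify the cure in one session:
ALTER SESSION SET "_optimizer_squ_bottomup" = TRUE;

-- Then remove the bad system-wide override for good:
ALTER SYSTEM RESET "_optimizer_squ_bottomup" SCOPE = SPFILE SID = '*';
```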
Faced with a situation like this, comparing parameters one by one would take a very long time. With SQLT's XPLORE, you can quickly find the relevant parameter setting or known bug: for SQL performance problems or wrong results caused by, say, a new feature, XPLORE analysis quickly identifies the responsible parameter, which can then be reset.
That is all that will be shared here on using SQLT for SQL optimization. I hope the content above is helpful and teaches you something new. If you think the article is good, share it so more people can see it.