
Changing a SET option in a procedure or trigger will cause a recompilation


SQL Prompt offers appropriate code suggestions as you type, based on the object names, syntax, and code snippets in your database, and its automatic formatting makes code easy to read, which is especially helpful when developers are unfamiliar with a script. Installing and using SQL Prompt can greatly improve coding efficiency. This article introduces SQL Prompt's performance rule PE012, which detects the use of SET statements in stored procedures or triggers, a practice that can lead to unnecessary recompilations, although the problem also affects other kinds of batches.

Sometimes, for no obvious reason, a stored procedure or trigger intermittently takes much longer to run. You have checked the indexes and ruled out problems such as parameter sniffing, but the intermittent performance problem remains. Could the cause be something as simple as issuing SET statements within the batch to change the execution settings? If you do that, the problem may be that SQL Server has to recompile the procedure or trigger repeatedly.

There is nothing inherently wrong with recompilation; in fact, it is common to force certain queries to be recompiled on every execution, precisely to avoid the poor performance associated with parameter sniffing, misuse of Execute(), or "catch-all" queries. However, if recompilation happens too often, especially for frequent or expensive queries, it can become a problem worth investigating, and I'll show how to do that with extended events.

What is recompilation?

When SQL Server executes an ad-hoc batch or query, or an object such as a stored procedure or trigger, it compiles an execution plan for each batch or object, and for each query within it, optimized for the current state of the database, its objects, and their data. It takes time and resources for SQL Server's optimizer to devise this plan, but it must be done before the code can be passed to the execution engine. Fortunately, we tend to execute the same queries and procedures repeatedly, possibly with different parameters, so SQL Server stores most of the plans it generates in the plan cache and ensures that, whatever parameter values we use, those plans can be reused safely. When we execute the same batch or object again, it simply reuses the cached plan whenever possible.

However, sometimes we re-execute a stored procedure, or resubmit a batch that the optimizer has seen before and for which it holds an optimized plan in the cache, and yet for some reason it cannot reuse that plan and compiles a new one. This is a recompilation, and it occurs for a variety of reasons. It happens automatically if the execution engine detects that a table has changed, or that its statistics have changed significantly: all cached plans for queries that access the table are marked for recompilation, and the next time one of those queries runs, the optimizer generates a new plan and the old one is discarded.

We can also force the optimizer to recompile the plan every time by appending the OPTION (RECOMPILE) hint to a query. The plan for the query may still be in the cache, but it will not be reused. This is usually done to deal with unstable performance caused by parameter sniffing, the use of "catch-all" procedures, abuse of Execute(), and so on.
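
As a minimal sketch (the table and parameter names are invented for illustration, not taken from this article), the hint is simply appended to the statement whose plan you want rebuilt on every execution:

-- Hypothetical "catch-all" search procedure: one cached plan fits all calls badly,
-- so we ask the optimizer for a fresh plan for this statement every time.
CREATE OR ALTER PROCEDURE dbo.SearchOrders
    @CustomerID int  = NULL,
    @OrderDate  date = NULL
AS
SELECT OrderID, CustomerID, OrderDate
FROM   dbo.Orders
WHERE  (@CustomerID IS NULL OR CustomerID = @CustomerID)
AND    (@OrderDate  IS NULL OR OrderDate  = @OrderDate)
OPTION (RECOMPILE);  -- only this statement is recompiled on each execution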

To save time and resources, SQL Server recompiles at the statement level where possible. If only one statement in a batch or stored procedure has a plan that is invalid because of a change to the data structures or the underlying data, or if only one statement carries an OPTION (RECOMPILE) hint, then only the plan for the affected statement is recompiled, not the entire batch or stored procedure.

Sometimes, though, a recompilation is neither triggered automatically by changes to the data structures or data, nor forced by a hint. We re-execute the same query against the same database, and there is a matching cached plan, because the SQL text of the submitted query exactly matches the SQL text associated with the cached plan (including spaces and carriage returns), and yet the plan is not reused.

Again, there are several possible reasons that we will not discuss further here, such as references to temporary tables that are not created statically within the procedure, or the lack of schema validation. The reason we will address is that the cached plan was created with different SET options from those in effect when the query was submitted.

"Plan-reuse-affecting" SET options

Changing the values of certain SET options, sometimes referred to as the "plan-reuse-affecting" options, changes how a query runs and the results it returns. Therefore, when the optimizer checks whether a cached plan matches, it also checks that the SET options used when the cached plan was compiled match the SET options in effect on the connection issuing the batch. If they do not match, it will not reuse the existing plan but will compile a new one.

This means that you can see multiple cached plans that are basically identical except for the values of these SET options.

These plan-reuse-affecting options are, in alphabetical order: ANSI_DEFAULTS, ANSI_NULL_DFLT_OFF, ANSI_NULL_DFLT_ON, ANSI_NULLS, ANSI_PADDING, ANSI_WARNINGS, ARITHABORT, CONCAT_NULL_YIELDS_NULL, DATEFIRST, DATEFORMAT, FORCEPLAN, LANGUAGE, NO_BROWSETABLE, NUMERIC_ROUNDABORT, and QUOTED_IDENTIFIER.
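
If you want to check which of these options are currently in effect for your own session, a query along the following lines should do it (a sketch; most of the options are exposed as columns of sys.dm_exec_sessions):

-- Show the plan-reuse-affecting settings of the current session.
SELECT ansi_defaults, ansi_nulls, ansi_padding, ansi_warnings,
       arithabort, concat_null_yields_null, quoted_identifier,
       date_first, date_format, [language]
FROM   sys.dm_exec_sessions
WHERE  session_id = @@SPID;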

These SET statements are detected when SQL Server performs "constant folding" during compilation, and it seems that in older versions of SQL Server, changing some of these options to certain values could cause a recompilation every time the procedure was called. However, this issue is rarely heard of in recent versions of SQL Server.

Nevertheless, it pays to be careful when changing SET options: changing an option at the beginning of a batch, or worse, within a trigger, causes a new plan to be compiled, and that plan is reused only if you execute exactly the same batch or object with exactly the same settings. While recompiling a plan in this way rarely causes major performance problems, it does incur a CPU cost, and it can cause trouble, especially for complex queries that are expensive to compile, that are executed frequently, or both, in multi-statement routines.

Changing connection settings

For ODBC, ADO, or JDBC connections, the way to change any of the default settings of a connection is to execute a preliminary batch of SET statements once the connection has been established. There is no connection-string option for this: it must be done with SET statements. In SSMS, you can use the Query menu (Query > Query Options) to specify the Advanced and ANSI standard options that govern the execution behavior of a connection. When developing and testing, it is worth setting them to match those used by connections to the production system. These settings only apply at the point the connection is established; if you subsequently change a setting in a batch on that connection, the changed setting is used for subsequent batches.
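
Such a preliminary batch, sent once as soon as the connection opens, might look like this sketch (the particular values are just an example of picking one standard and sticking to it):

-- One standard set of execution settings, established once per connection
-- and never changed afterwards.
SET ANSI_NULLS ON;
SET ANSI_PADDING ON;
SET ANSI_WARNINGS ON;
SET ARITHABORT ON;
SET CONCAT_NULL_YIELDS_NULL ON;
SET QUOTED_IDENTIFIER ON;
SET NUMERIC_ROUNDABORT OFF;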

You will notice that the SET options on this tab (and on the ANSI tab, not shown) do not cover all of the plan-reuse-affecting options. The rest must be set with SET statements when a new connection is established.

Changing results by changing SET options

As mentioned earlier, changes to a session's SET options can, in some cases, result in errors or warnings, or in different query results. A quick demo is worthwhile, in which I simply change the values of a couple of SET options at the beginning of each batch:
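
The original listing is not reproduced in this text, so here is a rough reconstruction of the sort of batches described, using an invented table of values whose third row forces a division by zero:

-- Batch 1: arithmetic errors abort the query.
SET ARITHABORT ON;
SET ANSI_WARNINGS ON;
BEGIN TRY
    SELECT Numerator / Divisor AS Result
    FROM   (VALUES (10, 2), (10, 5), (10, 0)) AS v (Numerator, Divisor);
END TRY
BEGIN CATCH
    PRINT ERROR_MESSAGE();  -- the divide-by-zero error cuts the query short
END CATCH;
GO
-- Batch 2: the same query, but the division by zero now just yields NULL.
SET ARITHABORT OFF;
SET ANSI_WARNINGS OFF;
BEGIN TRY
    SELECT Numerator / Divisor AS Result
    FROM   (VALUES (10, 2), (10, 5), (10, 0)) AS v (Numerator, Divisor);
END TRY
BEGIN CATCH
    PRINT ERROR_MESSAGE();  -- not reached in this batch
END CATCH;
GO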

With ARITHABORT set to ON, the query ends with an error (which we caught) when it encounters a division by zero, so two rows are returned. When we turn the option off, the same query returns three rows:

If you check the plan for each batch, you will see that they are identical except for the values of these SET options (open the properties of the SELECT operator to view them):

The following query will show us what is going on in the plan cache (I ran this in the PhilFactor database, so you will need to adjust it for yours).
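
The original listing is not included here, but a query along these lines (a sketch; the LIKE filter is just one way of picking out the demo batches) lists the cached plans together with the set_options attribute each was compiled with:

-- Cached plans for the demo batches, with their set_options attribute.
SELECT st.text    AS batch_text,
       cp.usecounts,
       pa.value   AS set_options
FROM   sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle)        AS st
CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) AS pa
WHERE  pa.attribute = 'set_options'
AND    st.text LIKE '%Numerator / Divisor%'        -- find the demo batches
AND    st.text NOT LIKE '%dm_exec_cached_plans%';  -- ignore this query itself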

To get this result...

Because of the different SET options (235 and 4331), each batch gets its own compiled plan. You'll notice that a plan attribute, set_options, gives you a bitmap of the values of all the SET options, most of which are simply on or off.

Every time you change one of these options, you will see a new plan created specifically for that combination of settings, which obviously increases both the cache requirements and the CPU time spent compiling plans. If you execute the two batches ten times each, you will see that the appropriate plan is reused without recompilation.

Changing SET options in a stored procedure

So far we have dealt only with batches, but what if, for some reason, we want to ensure that a procedure executes with specific settings?

I have encapsulated the same logic in three stored procedures: the first two use specific settings for our two options, and the third contains no SET statements at all.
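
The procedures themselves are not listed in this text; a sketch of what they would look like, reusing the invented query from the earlier demo, is:

-- Procedure 1: forces the options ON; the division by zero raises an error.
CREATE OR ALTER PROCEDURE dbo.DivideWithAbort
AS
SET ARITHABORT ON;
SET ANSI_WARNINGS ON;
SELECT Numerator / Divisor AS Result
FROM   (VALUES (10, 2), (10, 5), (10, 0)) AS v (Numerator, Divisor);
GO
-- Procedure 2: forces the options OFF; the division by zero yields NULL.
CREATE OR ALTER PROCEDURE dbo.DivideWithoutAbort
AS
SET ARITHABORT OFF;
SET ANSI_WARNINGS OFF;
SELECT Numerator / Divisor AS Result
FROM   (VALUES (10, 2), (10, 5), (10, 0)) AS v (Numerator, Divisor);
GO
-- Procedure 3: no SET statements, so it runs with the caller's settings.
CREATE OR ALTER PROCEDURE dbo.DivideDefault
AS
SELECT Numerator / Divisor AS Result
FROM   (VALUES (10, 2), (10, 5), (10, 0)) AS v (Numerator, Divisor);
GO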

We end up with a total of nine plans: each time a procedure is executed from a connection with a different set_options value, a new plan is compiled for that procedure. In other words, if the execution settings of the calling batch do not match the execution settings that were in effect when any existing plan for the procedure was compiled, a new plan is cached with the new set of options. If we re-execute the same stored procedure with the same set_options value, the plan is reused.

Calling the first stored procedure (which explicitly sets ARITHABORT to ON) always returns two rows, while calling the second always returns three. When the procedure without SET statements is called, the result depends solely on the settings of the calling connection.

If you change settings inside a procedure, they are in effect only within that procedure, so they do not affect the batch that calls it. All nine plans show the values of the SET options of the connection that executed the calling batch.

Catching the use of "plan-reuse-affecting" SET statements in procedures and triggers

SQL Prompt's performance rule PE012 checks for the use of any of the "plan-reuse-affecting" SET statements in stored procedures and triggers (though not in batches). You can also run the check with SQL Change Automation to find the problem in a database build source, and SQL Monitor supports the same code analysis.

Note, however, that the issue applies not only to procedures and triggers but also to ad-hoc batches, batches executed with sp_executesql, prepared queries, and dynamic SQL. If you issue a "plan-reuse-affecting" SET statement in any of them, their cached plans cannot be reused as easily, and in earlier versions of SQL Server there is a risk of a recompilation on every use.

We favor stored procedures and triggers over dynamic Transact-SQL batches partly because their plans are easier to reuse: they are parameterized, so the SQL text never changes, which promotes reuse. When you change a setting within a prepared batch or procedure, the changed option applies only to the execution of that prepared batch or procedure.

Batches can also have their plans reused, but SQL Server finds it easier to do so if the batch is executed through sp_executesql or a prepare method rather than via dynamic SQL with the Execute method.
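
As a minimal sketch (the dbo.Orders table is hypothetical), the difference is between keeping the batch text constant and parameterizing the value, versus building a new text for every value:

DECLARE @CustomerID int = 42;
DECLARE @sql nvarchar(200);

-- Parameterized: the batch text never changes, so its cached plan can be reused.
EXEC sys.sp_executesql
     N'SELECT OrderID, OrderDate FROM dbo.Orders WHERE CustomerID = @CustomerID;',
     N'@CustomerID int',
     @CustomerID = @CustomerID;

-- Dynamic SQL via Execute: every distinct value produces a different batch text,
-- and so, potentially, a different cached plan.
SET @sql = N'SELECT OrderID, OrderDate FROM dbo.Orders WHERE CustomerID = '
         + CAST(@CustomerID AS nvarchar(11)) + N';';
EXEC (@sql);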

To make matters worse, when an ad-hoc batch changes a SET option, the change leaks out of the batch, and the connection keeps the new setting: you must restore the setting explicitly, but a batch-aborting error before that point means the restoring code never runs. The optimizer may then need to compile new plans, for the new settings, for all subsequent batches and procedures you execute on that connection.
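
You can see the leak with a trivial check (a sketch): change an option in one ad-hoc batch and then inspect the session setting from the next batch.

-- Batch 1: changes a setting "just for this batch".
SET ARITHABORT OFF;
SELECT 1 AS DidSomething;
GO
-- Batch 2: the connection has kept the changed setting.
SELECT SESSIONPROPERTY('ARITHABORT') AS ArithAbortSetting;  -- returns 0: the OFF setting persisted
GO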

This kind of error is hard to detect, and it reinforces the general advice that such statements should be executed only once, as a preliminary batch just after the connection is established, and never changed thereafter. It follows that any such SET statement elsewhere in the code is suspect and should be treated as a "SQL code smell"; it is hard to justify.

Investigating excessive recompilation

SQL Server Profiler can be used on versions of SQL Server where extended events are not available or are too coarse. The SP:Recompile trace event reports only statement-level recompilations in stored procedures and triggers, whereas SQL:StmtRecompile can also be used to track and debug recompilations in stored procedures, triggers, ad-hoc batches, batches executed with sp_executesql, prepared queries, and dynamic SQL. The EventSubClass column of SP:Recompile and SQL:StmtRecompile contains an integer code indicating the reason for the recompilation.

With extended events, things are more civilized: we can get a complete report on each recompilation and its cause. Here is a simple session that reports on each recompilation.
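
The session definition is not reproduced in this text; a minimal version, built on the sqlserver.sql_statement_recompile event (the session name is arbitrary), might look like this:

-- A simple extended events session reporting every statement-level recompilation,
-- with the statement text and the user that caused it.
CREATE EVENT SESSION TrackRecompiles ON SERVER
ADD EVENT sqlserver.sql_statement_recompile
(
    ACTION (sqlserver.sql_text, sqlserver.username)
    -- Optionally filter to your test login to cut down the noise:
    -- WHERE (sqlserver.username = N'MyTestUser')
)
ADD TARGET package0.ring_buffer
WITH (STARTUP_STATE = OFF);
GO
ALTER EVENT SESSION TrackRecompiles ON SERVER STATE = START;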

In this way, we can get the details of each individual recompilation. I usually add a session event filter on the sqlserver.username field so that I capture recompilations only for a specific user (the test login that runs the test code); otherwise, there is a lot of noise.

Summary

If you find that your code contains SET statements involving the "plan-reuse-affecting" options, that is a code smell, and you should investigate why they are there.

There may, of course, be cunning and legitimate uses, but I have never come across one in my SQL Server development work. Not only is this a bad practice in stored procedures or triggers, it is also a problem in any batch that may be executed many times. If you need to set the language, the ANSI options, or error-handling compatibility, set them when you create the connection and stick to a single standard. Failing to do so causes SQL Server to perform unnecessary recompilations.

When I write that the use of these SET statements is "bad", I do not mean to imply that recompiling batches is necessarily bad: sometimes it avoids a more insidious performance problem, and recompilations rarely affect performance as long as you avoid unnecessary SQL code smells in the application. For example, we always promote plan reuse by creating parameterized batches with sp_executesql, or by using bound parameters correctly in our applications, and, as a precaution, we use table variables.
