Summary of database SQL statement optimization

2025-03-13 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)05/31 Report--

This article summarizes common techniques for optimizing database SQL statements. The tips below cover indexing, WHERE-clause pitfalls, temporary tables, cursors, and how to batch large operations.

1. To optimize queries and avoid full table scans as far as possible, first consider creating indexes on the columns used in WHERE and ORDER BY clauses.
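As a minimal sketch (the table and column names are illustrative), indexing the columns that appear in WHERE and ORDER BY lets both of the queries below avoid a full table scan:

```sql
-- Hypothetical table t: index the columns used in WHERE and ORDER BY
CREATE INDEX idx_t_num ON t (num);
CREATE INDEX idx_t_createdate ON t (createdate);

-- Both queries can now be served from an index
SELECT id FROM t WHERE num = 10;
SELECT id FROM t ORDER BY createdate;
```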

2. Try to avoid testing a field for NULL in the WHERE clause; otherwise the engine will abandon the index and perform a full table scan, for example:

SELECT id FROM t WHERE num IS NULL

It is best not to store NULL in the database; declare columns NOT NULL and use default values wherever possible.

Columns holding comments or descriptions may be set to NULL; for everything else, NULL is best avoided.

Do not assume that NULL takes no space. For a fixed-length field such as char(100), the full 100 characters are allocated when the row is created, whether or not a value (including NULL) is inserted. For a variable-length field such as varchar, NULL does take no space.

You can instead give num a default value of 0, ensure the num column never contains NULL, and query like this:

SELECT id FROM t WHERE num = 0

3. Try to avoid using the != or <> operator in the WHERE clause; otherwise the engine will abandon the index and perform a full table scan.
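As an illustration (assuming num is indexed), an inequality predicate can sometimes be rewritten as two range predicates that the optimizer can satisfy with the index, when the query's semantics allow it:

```sql
-- Likely a full table scan:
SELECT id FROM t WHERE num != 10;

-- Possible index-friendly rewrite:
SELECT id FROM t WHERE num < 10
UNION ALL
SELECT id FROM t WHERE num > 10;
```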

4. Try to avoid using OR to join conditions in the WHERE clause. If one field has an index and the other does not, the engine will abandon the index and perform a full table scan, for example:

SELECT id FROM t WHERE num = 10 OR name = 'admin'

You can query it like this:

SELECT id FROM t WHERE num = 10
UNION ALL
SELECT id FROM t WHERE name = 'admin'

5. IN and NOT IN should also be used with caution; otherwise they can lead to a full table scan, for example:

SELECT id FROM t WHERE num IN (1, 2, 3)

For consecutive values, use BETWEEN instead of IN:

SELECT id FROM t WHERE num BETWEEN 1 AND 3

In many cases, using EXISTS instead of IN is a good choice:

SELECT num FROM a WHERE num IN (SELECT num FROM b)

Replace it with the following statement:

SELECT num FROM a WHERE EXISTS (SELECT 1 FROM b WHERE num = a.num)

6. The following query will also cause a full table scan:

SELECT id FROM t WHERE name LIKE '%abc%'

To improve efficiency, consider full-text retrieval.
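As a hedged sketch of the full-text alternative (the syntax shown is MySQL's MATCH ... AGAINST; other engines differ):

```sql
-- MySQL full-text index on the name column (hypothetical table)
CREATE FULLTEXT INDEX idx_t_name ON t (name);

-- Uses the full-text index instead of scanning with LIKE '%abc%'
SELECT id FROM t WHERE MATCH(name) AGAINST('abc');
```

Note that full-text search matches words rather than arbitrary substrings, so it is not an exact drop-in replacement for LIKE '%abc%'.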

7. Using local variables in the WHERE clause also results in a full table scan, because SQL resolves local variables only at run time. The optimizer cannot defer the choice of an access plan to run time; it must choose at compile time, and at compile time the variable's value is still unknown, so it cannot be used to select an index. The following query will perform a full table scan:

SELECT id FROM t WHERE num = @num

You can force the query to use the index instead:

SELECT id FROM t WITH (INDEX(index_name)) WHERE num = @num

8. Try to avoid performing expression operations on fields in the WHERE clause; this causes the engine to abandon the index and perform a full table scan, for example:

SELECT id FROM t WHERE num / 2 = 100

It should be changed to:

SELECT id FROM t WHERE num = 100 * 2

9. Try to avoid applying functions to fields in the WHERE clause; this causes the engine to abandon the index and perform a full table scan, for example:

SELECT id FROM t WHERE substring(name, 1, 3) = 'abc'                 -- ids whose name begins with 'abc'
SELECT id FROM t WHERE datediff(day, createdate, '2005-11-30') = 0   -- ids generated on '2005-11-30'

It should be changed to:

SELECT id FROM t WHERE name LIKE 'abc%'
SELECT id FROM t WHERE createdate >= '2005-11-30' AND createdate < '2005-12-1'

10. Do not perform functions, arithmetic, or other expression operations on the left side of the "=" in the WHERE clause, or the system may not be able to use the index correctly.

11. When using an indexed field as a condition, if the index is a composite index, the first field of the index must appear in the condition for the system to use the index; otherwise the index will not be used. The order of the condition fields should also match the order of the index columns as far as possible.
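A small sketch of this leftmost-prefix rule (table, column, and index names are illustrative):

```sql
-- Composite index on (last_name, first_name)
CREATE INDEX idx_emp_name ON employees (last_name, first_name);

-- Can use the index: the leading column appears in the condition
SELECT id FROM employees WHERE last_name = 'Smith' AND first_name = 'John';
SELECT id FROM employees WHERE last_name = 'Smith';

-- Generally cannot use the index: the leading column is missing
SELECT id FROM employees WHERE first_name = 'John';
```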

12. Don't write meaningless queries, such as generating an empty table structure:

SELECT col1, col2 INTO #t FROM t WHERE 1 = 0

This kind of code returns no result set but still consumes system resources; it should be changed to:

CREATE TABLE #t (…)

13. In UPDATE statements, if you change only one or two columns, do not update all columns; otherwise frequent calls will cause significant performance overhead and a large volume of log writes.
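For instance (column names are illustrative), touch only the column that actually changed:

```sql
-- Avoid: rewriting every column when only the status changed
UPDATE t SET col1 = col1, col2 = col2, status = 1 WHERE id = 42;

-- Prefer: update only the changed column
UPDATE t SET status = 1 WHERE id = 42;
```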

14. For JOINs across several tables with large amounts of data (here, even a few hundred rows counts as large), paginate first and then JOIN; otherwise logical reads will be very high and performance very poor.
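A sketch of paginate-then-JOIN, using MySQL's LIMIT syntax and hypothetical orders/customers tables: reduce the driving table to one page of primary keys first, then join only those rows:

```sql
SELECT o.id, o.amount, c.name
FROM (SELECT id FROM orders ORDER BY id LIMIT 20 OFFSET 0) AS page
JOIN orders o    ON o.id = page.id
JOIN customers c ON c.id = o.customer_id;
```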

15. SELECT COUNT(*) FROM table; a count with no condition at all triggers a full table scan and serves no business purpose, so it should be eliminated.

16. More indexes are not always better. Indexes can improve the efficiency of SELECTs, but they also reduce the efficiency of INSERTs and UPDATEs, since each INSERT or UPDATE may require the indexes to be maintained. How to build indexes therefore needs careful consideration of the specific situation. A table is best kept to no more than six indexes; if there are more, consider whether indexes on infrequently used columns are really necessary.

17. Avoid updating clustered-index columns as much as possible, because the order of the clustered-index columns is the physical storage order of the table's rows. Once a value in such a column changes, the row order of the whole table must be adjusted, which consumes considerable resources. If the application needs to update a clustered-index column frequently, reconsider whether that index should be clustered at all.

18. Use numeric fields wherever possible, and try not to design character fields that hold only numeric information; doing so reduces query and join performance and increases storage overhead. The engine compares strings character by character when processing queries and joins, whereas a numeric type needs only a single comparison.

19. Use varchar/nvarchar instead of char/nchar where possible: variable-length fields take less storage, which saves space, and for queries, searching within a smaller field is clearly more efficient.

20. Never write SELECT * FROM t anywhere; replace "*" with an explicit column list, and do not return columns you don't need.

21. Use table variables instead of temporary tables where possible. Note, though, that if a table variable holds a large amount of data, its indexing options are very limited (only a primary-key index).

22. Avoid creating and dropping temporary tables frequently, to reduce pressure on system tables. Temporary tables are not forbidden; used appropriately they can make some routines more efficient, for example when you need to reference a dataset from a large or frequently used table repeatedly. For one-off operations, however, an export table is preferable.

23. When creating a new temporary table, if a large amount of data is inserted at once, use SELECT INTO instead of CREATE TABLE plus INSERT, to avoid heavy logging and speed things up; if the amount of data is small, then to ease pressure on system tables, CREATE TABLE first and INSERT afterwards.
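In SQL Server syntax the two approaches look like this (the column list and filter are illustrative):

```sql
-- Large data set: SELECT INTO creates and fills #t with minimal logging
SELECT col1, col2 INTO #t FROM t WHERE num > 100;

-- Small data set: create the table first, then insert
CREATE TABLE #t (col1 int, col2 varchar(50));
INSERT INTO #t (col1, col2) SELECT col1, col2 FROM t WHERE num > 100;
```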

24. If temporary tables are used, be sure to delete them all explicitly at the end of the stored procedure: TRUNCATE TABLE first, then DROP TABLE, to avoid prolonged locking of system tables.
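At the end of the stored procedure, this cleanup might look like:

```sql
-- Empty the temporary table first, then drop it
TRUNCATE TABLE #t;
DROP TABLE #t;
```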

25. Avoid cursors where possible, because cursors are inefficient; if a cursor operates on more than 10,000 rows of data, consider rewriting it.

26. Before resorting to a cursor-based or temporary-table approach, look for a set-based solution to the problem; set-based methods are usually more efficient.

27. Like temporary tables, cursors are not forbidden. Using a FAST_FORWARD cursor on a small dataset is often better than other row-by-row processing methods, especially when several tables must be referenced to obtain the required data. Routines that compute "totals" in the result set are usually faster than their cursor equivalents. If development time allows, try both the cursor-based and the set-based approach and keep whichever performs better.

28. Put SET NOCOUNT ON at the beginning of all stored procedures and triggers, and SET NOCOUNT OFF at the end. There is no need to send a DONE_IN_PROC message to the client after each statement of a stored procedure or trigger.
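A minimal SQL Server stored-procedure skeleton following this advice (the procedure name and query are illustrative):

```sql
CREATE PROCEDURE usp_example
AS
BEGIN
    SET NOCOUNT ON;   -- suppress DONE_IN_PROC messages per statement

    SELECT id FROM t WHERE num = 10;

    SET NOCOUNT OFF;
END
```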

29. Try to avoid large transactions, to improve the concurrency of the system.

30. Try to avoid returning large amounts of data to the client; if the data volume is too large, consider whether the requirement itself is reasonable.

A practical case: split large DELETE or INSERT statements and submit the SQL in batches

If you need to run a large DELETE or INSERT against a live site, be very careful, or you may stop the whole site from responding. Both operations lock the table, and once the table is locked, no other operation can get in.

Apache runs many child processes or threads, which is why it works quite efficiently, but the server should not accumulate too many child processes, threads, and database connections, as these consume a great deal of server resources, especially memory.

If you lock a table for a period of time, say 30 seconds, then on a high-traffic site the processes/threads, database connections, and open file handles that pile up during those 30 seconds may not only crash your web service but bring down the entire server.

So if you have a big job, split it up. Using a row-limiting condition (rownum in Oracle, TOP in SQL Server, LIMIT in MySQL) is a good approach. Here is a MySQL example:

while (1) {
    // delete only 1000 rows per iteration (the date cutoff below is illustrative;
    // the original snippet was truncated here)
    mysql_query("DELETE FROM logs WHERE log_date <= '2012-11-01' LIMIT 1000");
    if (mysql_affected_rows() == 0) {
        break;   // nothing left to delete, we are done
    }
    usleep(50000);   // pause briefly so other queries can access the table
}
