How to improve the query speed of mysql

2025-02-24 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)05/31 Report--

This article explains how to improve the query speed of MySQL. The content is simple, clear, and easy to understand; follow along to study how to make MySQL queries faster.

Methods to improve MySQL query speed include: 1. choose the most appropriate field types; 2. use JOIN instead of subqueries; 3. use UNION instead of manually created temporary tables; 4. add indexes; 5. optimize queries to avoid full table scans as far as possible; 6. prefer table variables over temporary tables; and so on.

The operating environment for this tutorial: Windows 7, MySQL 8, Dell G3 computer.

Reasons for slow query speed

From the programmer's point of view

The query statement is not well written.

No index, an unreasonable index, or an invalid index

Associated queries with too many joins

From the server's point of view

The server does not have enough disk space

Unreasonable settings of server tuning and configuration parameters

Eight ways of MySQL Database Optimization

1. Choose the most appropriate field types

MySQL can handle large amounts of data well, but in general, the smaller the tables in the database, the faster queries against them execute. Therefore, for better performance, we should make the fields in a table as narrow as possible when creating it.

For example, when defining a zip-code field, setting it to a wide CHAR type obviously adds unnecessary space to the database, and even VARCHAR is somewhat wasteful, because CHAR(6) does the job well. Similarly, where possible we should use MEDIUMINT rather than BIGINT to define integer fields.

Another way to improve efficiency is to declare fields NOT NULL whenever possible, so that the database does not have to compare NULL values when executing queries.

For some text fields, such as "province" or "gender", we can define them as ENUM types, because in MySQL ENUM values are treated as numeric data, and numeric data is processed much faster than text. In this way we improve the performance of the database.
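Putting the advice in this section together, a minimal sketch of a compact table definition might look like this (the table and column names are illustrative, not taken from the source):

```sql
-- Hypothetical table using narrow, NOT NULL, enumerated fields
CREATE TABLE customer_demo (
    CustomerID MEDIUMINT UNSIGNED NOT NULL,  -- MEDIUMINT instead of BIGINT
    ZipCode    CHAR(6) NOT NULL,             -- fixed-width code: CHAR(6) is enough
    Province   ENUM('Beijing', 'Shanghai', 'Guangdong') NOT NULL,  -- stored internally as a number
    Gender     ENUM('M', 'F') NOT NULL,
    PRIMARY KEY (CustomerID)
);
```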

2. Use JOIN instead of subqueries (Sub-Queries)

MySQL has supported SQL subqueries since version 4.1. This technique uses a SELECT statement to create a single-column result set, which can then serve as a filter condition in another query. For example, to delete customers who have no orders from the customer information table, we can use a subquery to extract the IDs of all customers who placed orders from the sales information table and pass the result to the main query, as shown below:

DELETE FROM customerinfo WHERE CustomerID NOT IN (SELECT CustomerID FROM salesinfo);

Subqueries can accomplish in one step many SQL operations that logically require multiple steps, avoid transactions or table locks, and are easy to write. In some cases, however, a subquery can be replaced more efficiently by a JOIN. For example, suppose we want to fetch all users who have no order record; we can do it with the following query:

SELECT * FROM customerinfo WHERE CustomerID NOT IN (SELECT CustomerID FROM salesinfo);

If we use a JOIN to complete this query instead, it will be much faster, especially when there is an index on CustomerID in the salesinfo table:

SELECT * FROM customerinfo LEFT JOIN salesinfo ON customerinfo.CustomerID = salesinfo.CustomerID WHERE salesinfo.CustomerID IS NULL;

The JOIN is more efficient because MySQL does not need to create a temporary table in memory to complete this logically two-step query.

3. Use UNION instead of manually created temporary tables

MySQL has supported UNION queries since version 4.0, which can merge two or more SELECT queries that would otherwise need temporary tables into a single query. At the end of the client's query session the temporary table is automatically deleted, keeping the database tidy and efficient. When building a UNION query, we simply join multiple SELECT statements with the UNION keyword, taking care that all SELECT statements have the same number of fields. The following example demonstrates a query that uses UNION.

SELECT Name, Phone FROM client UNION SELECT Name, BirthDate FROM author UNION SELECT Name, Supplier FROM product;

4. Use transactions

Although we can create all kinds of queries with subqueries (Sub-Queries), JOIN, and UNION, not every database operation can be done with one or a few SQL statements; more often, a series of statements is needed to accomplish a piece of work. In that case, when one statement in the block fails, the outcome of the whole block becomes uncertain. Imagine inserting data into two related tables at the same time: after the first table is updated successfully, something unexpected happens in the database and the operation on the second table is not completed, leaving the data incomplete or even corrupting the database. To avoid this, use transactions: every statement in the block either succeeds or fails as a whole, which maintains the consistency and integrity of the data. A transaction starts with the BEGIN keyword and ends with COMMIT; if any SQL statement in between fails, the ROLLBACK command restores the database to the state it was in before BEGIN.

BEGIN; INSERT INTO salesinfo SET CustomerID = 14; UPDATE inventory SET Quantity = 11 WHERE item = 'book'; COMMIT;

Another important role of transactions is that when multiple users access the same data source simultaneously, they can lock the data to give each user a safe way to access it, ensuring that one user's operations are not disturbed by others.

5. Lock the table

Although transactions are a very good way to maintain database integrity, their exclusivity sometimes hurts database performance, especially in large application systems. Because the database is locked while a transaction executes, other user requests can only wait until the transaction ends. If a database system is used by only a few users, the impact of transactions is not a big problem; but with thousands of users accessing a database system at the same time, as on an e-commerce website, there will be serious response delays.

In fact, in some cases we can get better performance by locking the table. The following example uses the method of locking the table to complete the function of the transaction in the previous example.

LOCK TABLES inventory WRITE; SELECT Quantity FROM inventory WHERE Item = 'book'; ... UPDATE inventory SET Quantity = 11 WHERE Item = 'book'; UNLOCK TABLES;

Here we use a SELECT statement to fetch the initial data and, after some calculation, update the new value back into the table. A LOCK TABLES statement with the WRITE keyword ensures that no other access can insert, update, or delete rows in inventory before the UNLOCK TABLES command is executed.

6. Use foreign keys

The method of locking the table can maintain the integrity of the data, but it cannot guarantee the relevance of the data. At this point, we can use foreign keys.

For example, a foreign key can ensure that each sales record points to an existing customer. Here, the foreign key can map the CustomerID in the customerinfo table to the CustomerID in the salesinfo table, and any record without a legal CustomerID will not be updated or inserted into the salesinfo.

CREATE TABLE customerinfo (CustomerID INT NOT NULL, PRIMARY KEY (CustomerID)) ENGINE=InnoDB; CREATE TABLE salesinfo (SalesID INT NOT NULL, CustomerID INT NOT NULL, PRIMARY KEY (CustomerID, SalesID), FOREIGN KEY (CustomerID) REFERENCES customerinfo (CustomerID) ON DELETE CASCADE) ENGINE=InnoDB;

Notice the parameter "ON DELETE CASCADE" in the example. This parameter ensures that when a customer record in the customerinfo table is deleted, all records related to that customer in the salesinfo table will also be automatically deleted.

7. Use the index

Indexing is a common way to improve database performance. An index lets the database server retrieve specific rows much faster than it could without one, especially when the query involves MAX(), MIN(), or ORDER BY.

Which fields should be indexed?

In general, indexes should be built on the fields used for JOIN conditions, WHERE predicates, and ORDER BY sorting. Try not to index fields that contain a large number of duplicate values. A field of ENUM type is likely to contain many duplicates;

for example, the "province" field in customerinfo. Indexing such a field will not help and may even degrade the performance of the database. We can create appropriate indexes when creating the table, or add them later with ALTER TABLE or CREATE INDEX. In addition, MySQL has supported full-text indexing and search since version 3.23.23. A full-text index is an index of type FULLTEXT, which historically could only be used on MyISAM tables (InnoDB supports FULLTEXT indexes since MySQL 5.6). For a large database, it is much faster to load the data into a table without a FULLTEXT index and then create the index with ALTER TABLE or CREATE INDEX; loading data into a table that already has a FULLTEXT index is very slow.
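As a sketch, indexes for the customerinfo/salesinfo examples above could be created like this (the idx_* names and the Notes column are hypothetical):

```sql
-- Ordinary index on the JOIN/WHERE column from section 2
CREATE INDEX idx_customer ON salesinfo (CustomerID);

-- Equivalent form via ALTER TABLE
ALTER TABLE salesinfo ADD INDEX idx_customer_alt (CustomerID);

-- Full-text index on a hypothetical Notes column
ALTER TABLE customerinfo ADD FULLTEXT INDEX idx_notes (Notes);
```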

8. Optimized query statement

In most cases, using an index can improve the speed of the query, but if the SQL statement is not used properly, the index will not play its due role.

28 ways to optimize SQL query statements:

1. Try to avoid using the != or <> operator in the WHERE clause; otherwise the engine will give up using the index and perform a full table scan.

2. Try to avoid testing a field for NULL in the WHERE clause; otherwise the engine will give up using the index and perform a full table scan, as in:

SELECT id FROM t WHERE num IS NULL;

You can set a default value of 0 on num to ensure that the num column contains no NULLs, and then query like this:

SELECT id FROM t WHERE num = 0;

3. When a query condition uses the OR keyword, the index is used only if the columns on both sides of OR are indexed; otherwise the query will not use an index.
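A minimal illustration, assuming a table t in which num and id are indexed but name is not:

```sql
-- num and id both indexed: the OR condition can still use indexes
SELECT id FROM t WHERE num = 1 OR id = 5;

-- name is not indexed, so this OR forces a full table scan
SELECT id FROM t WHERE num = 1 OR name = 'abc';
```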

4. To optimize queries, avoid full table scans as far as possible; first consider building indexes on the columns involved in WHERE and ORDER BY.

5. Leading-wildcard fuzzy queries (LIKE '%XX' or LIKE '%XX%') cannot use an index; the full table scan can sometimes be avoided by using a covering index.
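For example, assuming an index on name in the table t used throughout this list:

```sql
-- Leading wildcard: the index on name cannot be used for lookups
SELECT id FROM t WHERE name LIKE '%abc';

-- Trailing wildcard only: the index can be used
SELECT id FROM t WHERE name LIKE 'abc%';

-- If a covering index exists on (name, id), even a leading-wildcard query
-- can be answered by scanning the index alone, avoiding the table scan
SELECT id FROM t WHERE name LIKE '%abc%';
```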

6. IN and NOT IN should also be used with caution; otherwise they may lead to a full table scan. For example:

SELECT id FROM t WHERE num IN (1, 2, 3);

For consecutive values, use BETWEEN instead of IN:

SELECT id FROM t WHERE num BETWEEN 1 AND 3;

7. Using a variable as a parameter in the WHERE clause also causes a full table scan, because SQL resolves local variables only at run time, while the optimizer must choose an access plan at compile time. If the access plan were chosen at compile time, the variable's value would still be unknown, so it cannot serve as an input for index selection. The following performs a full table scan:

SELECT id FROM t WHERE num = @num;

You can force the query to use an index instead:

SELECT id FROM t WITH (INDEX(index_name)) WHERE num = @num;

8. Try to avoid expression operations on fields in the WHERE clause; these cause the engine to give up using the index and perform a full table scan. For example:

SELECT id FROM t WHERE num / 2 = 100;

It should be changed to:

SELECT id FROM t WHERE num = 100 * 2;

9. Try to avoid function calls on fields in the WHERE clause; these also cause the engine to give up using the index and perform a full table scan. For example:

SELECT id FROM t WHERE SUBSTRING(name, 1, 3) = 'abc'; -- IDs whose name starts with 'abc'
SELECT id FROM t WHERE DATEDIFF(day, createdate, '2005-11-30') = 0; -- IDs created on '2005-11-30'

It should be changed to:

SELECT id FROM t WHERE name LIKE 'abc%';
SELECT id FROM t WHERE createdate >= '2005-11-30' AND createdate < '2005-12-1';

10. Do not perform functions, arithmetic, or other expression operations on the left side of the "=" in the WHERE clause; otherwise the system may not be able to use the index correctly.

11. When using an indexed field as a condition, if the index is a composite index, you must use the first field of the index in the condition for the system to use the index; otherwise the index will not be used. The order of the fields should also match the order of the index as far as possible.
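A sketch of this leftmost-prefix rule, assuming a composite index on (num, name) of table t:

```sql
CREATE INDEX idx_num_name ON t (num, name);

-- Can use the composite index: the leading column num is in the condition
SELECT id FROM t WHERE num = 1 AND name = 'abc';
SELECT id FROM t WHERE num = 1;

-- Cannot use it: the leading column num is missing from the condition
SELECT id FROM t WHERE name = 'abc';
```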

12. In many cases, using EXISTS instead of IN is a good choice:

SELECT num FROM a WHERE num IN (SELECT num FROM b);

Replace it with the following statement:

SELECT num FROM a WHERE EXISTS (SELECT 1 FROM b WHERE num = a.num);

13. More indexes are not always better. Indexes can improve the efficiency of the corresponding SELECTs, but they also reduce the efficiency of INSERT and UPDATE, since those operations may rebuild the index; how to build indexes therefore needs careful, case-by-case consideration. A table is best kept to no more than 6 indexes; if there are more, consider whether indexes on rarely used columns are really necessary.

14. Not all indexes are effective for a query. SQL optimizes queries based on the data in the table; when an indexed column contains a large number of duplicate values, the query may not use the index at all. For example, if a table has a sex field whose values are only male and female, an index on sex will contribute nothing to query efficiency.

15. Use numeric fields as much as possible; try not to design character fields that hold only numeric information, as this reduces query and join performance and increases storage overhead. The engine compares each character of a string one by one when processing queries and joins, whereas a numeric type needs only a single comparison.

16. Use VARCHAR/NVARCHAR instead of CHAR/NCHAR as much as possible. Variable-length fields take less storage space, and searching within a relatively small field is obviously more efficient.

17. Do not use SELECT * FROM t anywhere; replace the "*" with a specific list of fields, and do not return fields that are not needed.

18. Try to use table variables instead of temporary tables. If the table variable contains a large amount of data, note that its indexes are very limited (only the primary-key index).
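Table variables are a SQL Server feature; in that dialect the idea looks roughly like this (the names are illustrative):

```sql
-- Table variable instead of a temporary table (SQL Server syntax)
DECLARE @tmp TABLE (id INT PRIMARY KEY, num INT);

INSERT INTO @tmp (id, num)
SELECT id, num FROM t WHERE num < 100;

SELECT id FROM @tmp WHERE num = 50;
```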

19. Avoid frequently creating and deleting temporary tables, to reduce the consumption of system table resources.

20. Temporary tables are not unusable; using them properly can make some routines more efficient, for example when you need to repeatedly reference a dataset from a large or frequently used table. For one-off operations, however, an export table is better.

21. When creating a new temporary table, if a large amount of data is inserted at once, use SELECT INTO instead of CREATE TABLE to avoid generating a large amount of log and improve speed; if the amount of data is small, CREATE TABLE first and then INSERT, to ease pressure on the system tables.

22. If temporary tables are used, explicitly delete them all at the end of the stored procedure: TRUNCATE TABLE first, then DROP TABLE, to avoid locking the system tables for a long time.

23. Avoid cursors as far as possible, because they are inefficient; if a cursor operates on more than 10,000 rows, consider rewriting it.

24. Before resorting to a cursor-based or temporary-table method, look for a set-based solution to the problem; set-based methods are usually more effective.

25. Like temporary tables, cursors are not unusable. Using a FAST_FORWARD cursor on a small dataset is often better than other row-by-row processing methods, especially when several tables must be referenced to obtain the needed data. Routines that compute "totals" in the result set usually execute faster than a cursor would. If development time allows, try both the cursor-based and the set-based approach and keep whichever works better.

26. Set SET NOCOUNT ON at the beginning of all stored procedures and triggers and SET NOCOUNT OFF at the end, so that the server does not send a DONE_IN_PROC message to the client after every statement.
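This item (like several others in the list) refers to SQL Server; a minimal sketch of the pattern, with a hypothetical procedure name:

```sql
CREATE PROCEDURE usp_example
AS
BEGIN
    SET NOCOUNT ON;    -- suppress per-statement "rows affected" messages
    SELECT id FROM t WHERE num = 1;
    SET NOCOUNT OFF;
END
```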

27. Try to avoid returning a large amount of data to the client. If the amount of data is too large, you should consider whether the corresponding requirements are reasonable.

28. Try to avoid large transaction operations, to improve the concurrency of the system.

Thank you for reading. The above is the content of "how to improve the query speed of mysql"; after studying this article, I believe you have a deeper understanding of the topic. The editor will push more articles on related knowledge points for you; welcome to follow!
