The Causes of and Solutions for Slow Queries in MySQL

2025-01-18 Update From: SLTechnology News&Howtos


There are many reasons for slow queries; the most common are as follows:

1. There is no index, or the index is not used (this is the most common cause of slow queries, and a programming flaw)

2. Low I/O throughput, which creates a bottleneck

3. The query is not optimized because no computed column was created

4. Insufficient memory

5. Slow network speed

6. The amount of data queried is too large (split the work into multiple queries, or otherwise reduce the data volume)

7. Locking or deadlocks (also among the most common causes of slow queries, and a programming flaw)

8. Read/write contention for resources (use sp_lock and sp_who to inspect locks and active users)

9. Unnecessary rows and columns are returned

10. The query statement is poorly written and has not been optimized

You can optimize queries in the following ways:

1. Put data, log, and index files on different I/O devices to increase read speed. Tempdb could formerly be placed on RAID 0, but SQL Server 2000 no longer supports this. The larger the data volume, the more important improving I/O becomes.
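The file-placement advice above can be sketched at database creation time; the database name, filegroup, and drive letters below are hypothetical, assuming SQL Server 2000-era T-SQL:

```sql
-- Sketch: data, indexes, and log on three separate physical drives.
CREATE DATABASE SalesDB
ON PRIMARY
    (NAME = SalesData,  FILENAME = 'D:\data\SalesData.mdf'),   -- data on drive D
FILEGROUP IndexFG
    (NAME = SalesIndex, FILENAME = 'E:\data\SalesIndex.ndf')   -- indexes on drive E
LOG ON
    (NAME = SalesLog,   FILENAME = 'F:\log\SalesLog.ldf')      -- log on drive F
GO
-- Indexes created ON IndexFG then live on a different device from the data:
-- CREATE INDEX IX_Orders_CustomerID ON Orders (CustomerID) ON IndexFG
```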

2. Partition the table vertically and horizontally to reduce its size (check sizes with sp_spaceused)

3. Upgrade hardware

4. Build and optimize indexes according to the query conditions, optimize the access pattern, and limit the size of the result set. Keep the fill factor appropriate (the default value of 0 is usually best). Indexes should be as small as possible, preferably built on columns with few bytes (see the documentation on index creation). Do not build an index on a column with few distinct values, such as a gender column.
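A minimal sketch of this advice; the table and column names are hypothetical:

```sql
-- A narrow index on a selective column, with the default fill factor.
CREATE INDEX IX_Orders_OrderDate
    ON Orders (OrderDate)
    WITH FILLFACTOR = 0       -- 0 = server default; pages filled as full as possible
GO
-- Avoid indexing a low-selectivity column such as a gender flag:
-- CREATE INDEX IX_Customers_Gender ON Customers (Gender)   -- usually useless
```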

5. Improve the network speed

6. Expand the server's memory. Windows 2000 and SQL Server 2000 can support 4-8 GB of memory. Configure virtual memory: its size should be based on the services running concurrently on the computer. When running Microsoft SQL Server 2000, consider setting virtual memory to 1.5 times the physical memory installed on the computer. If the full-text search feature is also installed and you plan to run the Microsoft Search service for full-text indexing and queries, consider configuring virtual memory to at least 3 times the installed physical memory. Set the max server memory configuration option to 1.5 times physical memory (half the virtual memory setting).
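The memory cap described above can be set with sp_configure; the value here (3072 MB, i.e. 1.5x a hypothetical 2 GB of RAM) is illustrative:

```sql
-- 'max server memory' is expressed in megabytes.
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'max server memory', 3072
RECONFIGURE
```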

7. Increase the number of server CPUs; understand, however, that parallel processing needs more resources (such as memory) than serial processing. Whether to run in parallel or serially is chosen automatically by MSSQL. A single task can be broken into multiple subtasks that run on different processors; for example, sorting, joins, scans, and GROUP BY can execute simultaneously. SQL Server determines the optimal degree of parallelism according to the system load; complex, CPU-intensive queries are the best candidates for parallel processing. Update operations (UPDATE, INSERT, DELETE), however, cannot be processed in parallel.
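Parallelism can also be bounded explicitly with sp_configure; the values below are illustrative, not recommendations:

```sql
-- Limit the degree of parallelism and the estimated-cost threshold at which
-- SQL Server considers a parallel plan.
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'max degree of parallelism', 4       -- 0 = use all CPUs
EXEC sp_configure 'cost threshold for parallelism', 5  -- cost units
RECONFIGURE
```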

8. A LIKE query cannot always simply use an index, and a full-text index consumes space. LIKE 'a%' uses the index; LIKE '%a' does not; and with LIKE '%a%' the query time is proportional to the total length of the field values, so prefer VARCHAR over CHAR and create a full-text index for long text fields.
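A sketch of the difference, with hypothetical table, catalog, and index names, assuming SQL Server 2000's sp_fulltext_* procedures:

```sql
-- LIKE 'a%' can seek on an index; leading wildcards cannot.
SELECT Title FROM Books WHERE Title LIKE 'SQL%'      -- index seek possible
SELECT Title FROM Books WHERE Title LIKE '%SQL%'     -- forces a scan

-- For long text columns, a full-text index plus CONTAINS is the alternative:
-- EXEC sp_fulltext_catalog 'BookCatalog', 'create'
-- EXEC sp_fulltext_table  'Books', 'create', 'BookCatalog', 'PK_Books'
-- EXEC sp_fulltext_column 'Books', 'Title', 'add'
-- EXEC sp_fulltext_table  'Books', 'start_full'
SELECT Title FROM Books WHERE CONTAINS(Title, 'SQL') -- uses the full-text index
```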

9. Separate the DB server from the application server; separate OLTP from OLAP

10. Distributed partitioned views can be used to implement a federation of database servers. A federation is a group of separately managed servers that cooperate to share the system's processing load. This mechanism of partitioning data to form a federation of database servers can scale a set of servers to support the processing needs of large, multi-tier Web sites. For more information, see Designing Federated Database Servers (refer to 'Partitioned Views' in the SQL Server help file).

A. Before implementing a partitioned view, you must first partition the table horizontally

B. After creating the member tables, define a distributed partitioned view on each member server, each view with the same name. Queries that reference the distributed partitioned view's name can then run on any member server. The system behaves as if each member server held a copy of the original table, but each server has only one member table and one distributed partitioned view. The location of the data is transparent to the application.
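Steps A and B can be sketched as follows; the servers, database, table, and partition ranges are all hypothetical:

```sql
-- Each member server holds one horizontal slice, with a CHECK constraint
-- marking its range, e.g. on the first server:
-- CREATE TABLE Customers_33 (
--     CustomerID INT PRIMARY KEY CHECK (CustomerID BETWEEN 1 AND 32999),
--     CompanyName NVARCHAR(40))
-- The same view is then defined on every member server:
CREATE VIEW Customers AS
    SELECT * FROM CompanyDB.dbo.Customers_33            -- local member table
    UNION ALL
    SELECT * FROM Server2.CompanyDB.dbo.Customers_66    -- linked server 2
    UNION ALL
    SELECT * FROM Server3.CompanyDB.dbo.Customers_99    -- linked server 3
```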

11. Rebuild indexes (DBCC DBREINDEX, DBCC INDEXDEFRAG) and shrink the data and log files (DBCC SHRINKDATABASE, DBCC SHRINKFILE). Set the log to shrink automatically. For large databases, do not configure automatic file growth, as it degrades server performance. How T-SQL is written matters a great deal. Here are some common points. First, the process by which the DBMS handles a query plan is as follows:

1. Perform lexical and syntactic checks on the query statement

2. Submit the statement to the DBMS query optimizer

3. The optimizer performs algebraic optimization and access-path optimization

4. The precompilation module generates the query plan

5. The plan is then submitted to the system for execution at the appropriate time

6. Finally, the execution result is returned to the user. Second, consider SQL Server's data storage structure: each page is 8 KB (8060 bytes usable for data), 8 pages form an extent, and data is stored in B-trees.
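The maintenance commands mentioned in item 11 can be sketched as follows, against a hypothetical database SalesDB and table Orders:

```sql
DBCC DBREINDEX ('Orders')                 -- rebuild all indexes on the table
DBCC INDEXDEFRAG (SalesDB, Orders, 1)     -- defragment index id 1 online
DBCC SHRINKDATABASE (SalesDB, 10)         -- shrink, leaving 10% free space
DBCC SHRINKFILE (SalesDB_Log, 100)        -- shrink the log file to 100 MB
```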

12. The difference between COMMIT and ROLLBACK: ROLLBACK rolls back everything; COMMIT commits the current transaction. Do not put transactions inside dynamic SQL; if you must use one, keep it outside, e.g.: BEGIN TRAN EXEC(@s) COMMIT TRAN, or move the dynamic SQL into a function or stored procedure.
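The recommended pattern, with a hypothetical statement in @s:

```sql
-- Keep the transaction outside the dynamic SQL string.
DECLARE @s VARCHAR(200)
SET @s = 'UPDATE Orders SET Status = 1 WHERE OrderID = 42'  -- hypothetical
BEGIN TRAN
EXEC (@s)
COMMIT TRAN
```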

13. Use a WHERE clause to limit the number of rows returned by a SELECT, avoiding table scans. Returning unnecessary data wastes the server's I/O resources, burdens the network, and reduces performance. If the table is large, a table scan locks the table and blocks other connections from accessing it, with serious consequences.

14. SQL comments have no effect on execution.

15. Avoid cursors wherever possible; they consume many resources. If row-by-row processing is required, try non-cursor techniques first: loops on the client, temporary tables, table variables, subqueries, CASE expressions, and so on. Cursors can be classified by the fetch options they support: forward-only cursors must fetch rows in order from first to last, with FETCH NEXT the only allowed fetch operation (the default mode), while scrollable cursors can fetch any row at any position. Cursor technology became quite powerful in SQL Server 2000; its purpose is to support loops.

There are four concurrency options:

READ_ONLY: positioned updates (Update) through the cursor are not allowed, and no locks are held on the rows that make up the result set.

OPTIMISTIC WITH VALUES: optimistic concurrency control is a standard part of transaction control theory. It is used when there is only a small chance of a second user updating a row in the interval between opening the cursor and updating the row. When a cursor is opened with this option, no locks control its rows, which helps maximize throughput. If the user attempts to modify a row, the row's current values are compared with the values fetched when the row was last read. If any value has changed, the server knows someone else has updated the row and returns an error; if the values are the same, the server performs the modification.

OPTIMISTIC WITH ROW VERSIONING: this optimistic concurrency option is based on row versioning. The table must have a version identifier the server can use to determine whether the row changed after it was read into the cursor. In SQL Server this capability is provided by the timestamp data type, a binary number that records the relative order of modifications in the database. Each database has a global current timestamp value, @@DBTS. Each time a row with a timestamp column is changed in any way, SQL Server stores the current @@DBTS value in the timestamp column and then increments @@DBTS. If a table has a timestamp column, the timestamp is recorded at the row level; the server can then compare a row's current timestamp with the timestamp stored when the row was last fetched to decide whether it was updated, without comparing the values of all columns. If an application requires row-versioning-based optimistic concurrency on a table without a timestamp column, the cursor defaults to value-based optimistic concurrency control.

SCROLL_LOCKS: this option implements pessimistic concurrency control, in which the application tries to lock database rows as they are read into the cursor result set. With a server cursor, an update lock is placed on each row as it is read into the cursor. If the cursor is opened inside a transaction, the transaction's update locks are held until the transaction commits or rolls back, and the cursor lock is dropped when the next row is fetched; if the cursor is opened outside a transaction, the lock is dropped when the next row is fetched. Therefore, whenever full pessimistic concurrency control is needed, the cursor should be opened inside a transaction. An update lock prevents any other task from acquiring an update or exclusive lock, and so prevents other tasks from updating the row; it does not block shared locks, however, so it does not prevent other tasks from reading the row unless that second task also requests a read with an update lock.

Scroll locks: these cursor concurrency options can generate scroll locks, based on the lock hints specified in the cursor's SELECT statement. A scroll lock is acquired on each row during a fetch and held until the next fetch or until the cursor closes, whichever comes first; at the next fetch the server acquires scroll locks for the newly fetched rows and releases those for the previously fetched rows. Scroll locks are independent of transaction locks and can persist past a commit or rollback. If the option to close cursors on commit is off, a COMMIT does not close open cursors, and scroll locks are retained past the commit to preserve isolation of the fetched data. The type of scroll lock acquired depends on the cursor concurrency option and the lock hint in the cursor's SELECT statement:

Lock hint    Read-only   Optimistic with values   Optimistic row versioning   Scroll locks
No hint      Unlocked    Unlocked                 Unlocked                    Update
NOLOCK       Unlocked    Unlocked                 Unlocked                    Unlocked
HOLDLOCK     Shared      Shared                   Shared                      Update
UPDLOCK      Error       Update                   Update                      Update
TABLOCKX     Error       Unlocked                 Unlocked                    Update
Other        Unlocked    Unlocked                 Unlocked                    Update

* Specifying the NOLOCK hint makes the table to which it applies read-only within the cursor.
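As item 15 recommends, a set-based statement usually replaces a row-by-row cursor; a sketch with hypothetical table and column names:

```sql
-- Cursor version (slow: one round of work per row):
DECLARE @id INT
DECLARE c CURSOR FOR SELECT OrderID FROM Orders WHERE Status = 0
OPEN c
FETCH NEXT FROM c INTO @id
WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE Orders SET Status = 1 WHERE OrderID = @id
    FETCH NEXT FROM c INTO @id
END
CLOSE c
DEALLOCATE c

-- Set-based equivalent: one statement, no cursor:
UPDATE Orders SET Status = 1 WHERE Status = 0
```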

16. Use SQL Profiler to trace queries, measure the time they take, and find problem SQL; use the Index Tuning Wizard to optimize indexes.

17. Pay attention to the difference between UNION and UNION ALL: UNION ALL is faster, because it does not remove duplicate rows.

18. Use DISTINCT carefully; do not use it when it is not necessary, as it slows the query just like UNION. If duplicate records are acceptable in the result, leave it out.

19. Do not return unwanted rows or columns when querying

20. Use sp_configure 'query governor cost limit' or SET QUERY_GOVERNOR_COST_LIMIT to limit the resources a query may consume. When the estimated cost of a query exceeds the limit, the server cancels it automatically, killing it before it runs. Use SET LOCK_TIMEOUT to set the lock wait time.
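A sketch of both forms; the limits (300 cost units, 5 seconds) are illustrative:

```sql
-- Server-wide estimated-cost cap:
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'query governor cost limit', 300
RECONFIGURE
-- ...or per connection:
SET QUERY_GOVERNOR_COST_LIMIT 300
-- Give up waiting for locks after 5 seconds (value is in milliseconds):
SET LOCK_TIMEOUT 5000
```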

21. Use SELECT TOP 100 or TOP 10 PERCENT to limit the number of rows returned to the user, or SET ROWCOUNT to limit the rows affected by operations.
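Both forms sketched against a hypothetical Orders table:

```sql
-- Limit the returned rows with TOP:
SELECT TOP 100 * FROM Orders ORDER BY OrderDate DESC
SELECT TOP 10 PERCENT * FROM Orders ORDER BY OrderDate DESC

-- ...or cap the rows affected by every following statement with SET ROWCOUNT:
SET ROWCOUNT 100
SELECT * FROM Orders        -- returns at most 100 rows
SET ROWCOUNT 0              -- restore the default (no limit)
```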

22. Before SQL Server 2000, generally avoid the following in predicates: "IS NULL", "<>", "!=", "!>", "!<", because they prevent the optimizer from using indexes.

© 2024 shulou.com SLNews company. All rights reserved.
