Shulou(Shulou.com)06/02 Report--
This article explains in detail how to implement paged queries in MySQL. I share it with you as a reference, and I hope you will have a solid grasp of the relevant knowledge after reading it.
First, we create an order table that carries only a primary key; none of the other fields are indexed. Then we use a stored procedure to insert 1,000,000 rows of test data into the table:
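The original table definition and stored procedure were shown as screenshots that have not survived, so here is a minimal sketch of what they might have looked like. Only the table name aok_score_info and the primary key id come from the article itself; all other column names and the procedure name are assumptions:

create table aok_score_info (
    id int not null auto_increment,
    -- the real table reportedly has 20+ unindexed business columns; two stand-ins here
    user_name varchar(64),
    score int,
    primary key (id)
) engine = InnoDB;

delimiter $$
create procedure insert_test_data()
begin
    declare i int default 1;
    while i <= 1000000 do
        insert into aok_score_info (user_name, score) values (concat('user_', i), i);
        set i = i + 1;
    end while;
end $$
delimiter ;

call insert_test_data();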
Speaking of pagination, we all know how to page with the limit keyword. For example, if we need to query the rows with ids 900000 to 900100, the familiar way to page looks like this:
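The query itself is missing from the text; it was presumably the classic offset form, something like:

select * from aok_score_info limit 900000, 100;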
We can see that querying the rows with ids from 900000 to 900100 using limit takes 3.51 seconds. Adding the time spent in business-processing logic, the whole paging operation takes more than 4 seconds. We can use the explain keyword to view the execution plan of the sql statement we just ran:
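Again the exact statement was lost with the screenshot; presumably it was the same query prefixed with explain:

explain select * from aok_score_info limit 900000, 100;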
You can see that although the query only needs 100 rows, the database scans from the very first row until it reaches the rows we need. If every paging query in a system scans hundreds of thousands of rows like this, performance certainly drops sharply. Because the first 900,000 rows are useless to us, we can optimize with a subquery: first run a covering-index subquery that returns only the primary key id, then join those ids back to the original table. This reduces the number of back-to-table lookups and the number of invalid rows MySQL scans:
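The optimized statement did not survive either. Based on the description (an id-only subquery joined back to the original table), it was presumably shaped like this sketch:

select a.*
from aok_score_info a
inner join (select id from aok_score_info limit 900000, 100) b on a.id = b.id;

The inner select touches only the primary-key index, so MySQL can skip past the first 900,000 entries without reading full rows; only the final 100 ids are looked up in the table.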
I have tested this many times: with the subquery optimization, the query takes only about 0.58 seconds. Why does the subquery bring such a gain? Our table has more than 20 fields, so on a large data set a subquery that selects only id performs roughly 3 times better than select *. Because id is the primary key, the subquery can use its index to speed up the scan, and the outer query that filters by id can use the index as well. So paging with a subquery is clearly much more efficient than paging with limit directly.
Since we can exploit the fact that id is the primary key to improve efficiency, we can actually go further, drop the subquery entirely, and query by an id range:
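The statement is missing; following the between ... limit pattern the article uses later, it was presumably something like:

select * from aok_score_info where id between 900000 and 900100 limit 100;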
You can see the difference: paging by limiting id to the specified range takes only 0.06 seconds. Why is it so fast? We can use explain to check the execution plan:
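Presumably the same range query prefixed with explain:

explain select * from aok_score_info where id between 900000 and 900100 limit 100;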
You can see that only 100 rows of data are scanned, so the efficiency naturally improves. But paging by id range comes with a condition: the ids in the table must be continuous, with no gaps. Why? It is easy to understand with an example. Suppose the table holds 100 rows with ids 1 to 100, and I want the 50th to 60th rows. I can use the following sql statement:
select * from aok_score_info where id between 50 and 60 limit 10;
However, suppose some invalid rows in the middle need to be deleted, say the rows with ids 30--40. If I then want the 50th to 60th rows and use the same sql statement, I still get back the rows whose ids are 50--60; but after the deletion, the 50th to 60th rows of the table are actually the ones with ids around 60--70. So id-range paging is very efficient but has limitations: rows must not be deleted, so that ids stay continuous, and the query cannot add extra where filter conditions, because filtering would make the matching ids discontinuous. When continuity of id can be guaranteed, limiting by id is a good way to improve paging efficiency. Next, let's see what we can do besides subqueries when ids cannot be guaranteed to be absolutely continuous.
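A quick illustration of the pitfall (a hedged sketch, not from the original article):

delete from aok_score_info where id between 30 and 40;
-- still returns the rows whose ids are 50--60, even though the
-- 50th--60th rows of the table are now the ones with ids around 60--70:
select * from aok_score_info where id between 50 and 60 limit 10;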
First of all, let's keep testing with the 1 million rows from before; the ids are currently continuous from 1 to 1,000,000. We now use an inner join whose subquery reads only the indexed id column, then fetch the required columns by id:
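The statement was again lost with the screenshot; presumably it was the deferred-join form shown earlier (the offset of 900000 is my assumption, carried over from the previous test):

select a.*
from aok_score_info a
inner join (select id from aok_score_info limit 900000, 100) b on a.id = b.id;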
You can see it takes only 0.735 seconds, but of course the ids are still continuous at this point. Now we delete the 3000 rows with ids 10001--13000 and test again:
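Presumably the deletion and the rerun looked something like this sketch:

delete from aok_score_info where id between 10001 and 13000;

select a.*
from aok_score_info a
inner join (select id from aok_score_info limit 900000, 100) b on a.id = b.id;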
Obviously, the ids in the query result have shifted forward by 3000, which shows that even when ids are not continuous, we can still use this read-only-index approach to optimize the limit statement and greatly improve paging efficiency. There is also a method I often use in development: the client passes id and size, the maximum id of the current page and the number of rows per page, and we query the next size rows directly after that id. For example, if we just returned the rows with ids 701--800, then to query the ninth page the client only needs to pass the maximum id of the eighth page, 800, and we fetch the next 100 rows:
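The statement did not survive, but from the description it was presumably the keyset form:

select * from aok_score_info where id > 800 limit 100;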
For example, if the client queries the first page and id is 0, the backend can use the following sql statement:
select * from aok_score_info limit 0, 10;
Because it is the first page, the scan is very fast. For each subsequent page, the client passes the maximum id of the current page. For example, if the parameter id is 9999, the maximum id of the previous page was 9999, and we query forward from there. Note that the 3000 records deleted in the middle are simply skipped over, so they do not affect the correctness of the query. Most importantly, id is the primary key and carries an index, so this statement executes in 0.04 seconds and can basically meet the requirements even under high concurrency. A sketch of the statement follows.
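The follow-up query is missing from the text; presumably it was the keyset form, using the page size of 10 from the first-page example above:

select * from aok_score_info where id > 9999 limit 10;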
This is my personal development understanding of these several paging schemes; if you have different views, feel free to raise them so we can discuss together. That is all I will share on how MySQL implements paged queries. I hope the content above is of some help and that you can learn more from it. If you think the article is good, you can share it so that more people can see it.