2025-01-18 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 05/31 Report
This article describes how to optimize an index query in a MySQL database. The content is quite detailed; interested readers can use it as a reference, and I hope it is helpful to you.
Problem description:
The table we need to access is very large, with 40 million records; id is the primary key and program_id is indexed.
We executed this SQL:
SELECT * FROM program_access_log WHERE program_id BETWEEN 1 AND 4000;
This SQL is very slow.
We assumed the problem was simply too many records, so we added a condition on id to read only 500,000 records at a time:
SELECT * FROM program_access_log WHERE id BETWEEN 1 AND 500000 AND program_id BETWEEN 1 AND 4000;
But this SQL was still very slow, with little improvement over the previous one. When MySQL processes a 500,000-record table with an index on the condition field, such a statement should finish almost instantly.
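Before digging into the analysis below, it helps to ask the database which access path it actually chose. In MySQL that is `EXPLAIN SELECT ...`; the sketch below uses Python's built-in sqlite3 as a small, runnable stand-in (table and index names mirror the article, row counts are tiny, and sqlite's plan format differs from MySQL's):

```python
import sqlite3

# Miniature stand-in for the article's table: id is the primary key,
# program_id has a secondary index.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE program_access_log (
        id INTEGER PRIMARY KEY,
        program_id INTEGER,
        payload TEXT
    )
""")
conn.execute("CREATE INDEX idx_program_id ON program_access_log (program_id)")
conn.executemany(
    "INSERT INTO program_access_log (id, program_id, payload) VALUES (?, ?, ?)",
    [(i, i % 100, "x") for i in range(1, 1001)],
)

# Ask the engine which access path it picks for a range scan on program_id.
plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT * FROM program_access_log WHERE program_id BETWEEN 1 AND 40"
).fetchall()
print(plan)  # the plan detail names idx_program_id: a secondary-index range scan
```

The same habit applies on the real server: running `EXPLAIN` on the slow query would have shown immediately that the program_id index, not the primary key, was driving the scan.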
Problem analysis:
This table is about 30 GB, while the server has only 16 GB of memory, so the table cannot be loaded into memory at once. That is the root of the problem.
The SQL has two conditions: id from 1 to 500,000 and program_id from 1 to 4,000. Because the program_id range is much narrower, the optimizer chose the program_id index to drive the query.
MySQL first scans the index file to find all ids whose program_id falls in the range 1 to 4,000; this step is fast.
The next step is to fetch the rows from the table by those ids. Because the ids are discrete, MySQL cannot read the table sequentially; and since the table is too large to fit in memory, nearly every row access forces MySQL to seek to a new position on disk and load the nearby pages into memory. This flood of random I/O is what slows the query down.
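The two-step lookup described above can be sketched as a toy model in plain Python (the dict-based "table" and "index" are illustrative, not MySQL internals):

```python
from collections import defaultdict

# "Table": row id -> (program_id, payload). program_id values interleave,
# just as access-log rows for many different programs do in insertion order.
table = {rid: (rid % 100, "payload") for rid in range(1, 10_001)}

# "Secondary index": program_id -> sorted list of row ids.
index = defaultdict(list)
for rid, (pid, _) in table.items():
    index[pid].append(rid)

# Step 1 (fast): collect every id whose program_id is in the range 1..4.
ids = sorted(rid for pid in range(1, 5) for rid in index[pid])

# Step 2 (slow on disk): fetch rows by id. The ids are discrete -- every
# fourth gap here jumps ~100 rows -- so a disk-based engine would seek for
# nearly every fetch instead of reading pages sequentially.
gaps = [b - a for a, b in zip(ids, ids[1:])]
print(len(ids), min(gaps), max(gaps))
```

Step 1 touches only the compact index, while step 2 jumps all over the table file; with a 30 GB table and 16 GB of RAM, each jump is likely a physical disk seek.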
Solution to the problem:
1. Partition the table on program_id.
2. Split the table into sub-tables, each no larger than the available memory.
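As an illustration of the sub-table idea, here is a minimal sketch that routes rows into per-range sub-tables by program_id (sqlite3 again as a stand-in; the table names and the bucket width are made up for this example):

```python
import sqlite3

BUCKET = 1000  # assumed width: program_ids per sub-table

conn = sqlite3.connect(":memory:")

def subtable(program_id: int) -> str:
    """Name of the sub-table that holds rows for this program_id."""
    return f"program_access_log_{program_id // BUCKET}"

def insert(rid: int, program_id: int) -> None:
    name = subtable(program_id)
    conn.execute(
        f"CREATE TABLE IF NOT EXISTS {name} "
        "(id INTEGER PRIMARY KEY, program_id INTEGER)"
    )
    conn.execute(f"INSERT INTO {name} VALUES (?, ?)", (rid, program_id))

for rid in range(1, 101):
    insert(rid, rid * 50)  # program_ids 50..5000, spread across sub-tables

# A query for program_id 1..4000 now touches only the sub-tables that can
# contain those values, each far smaller than the original table.
lo, hi = 1, 4000
names = [f"program_access_log_{b}" for b in range(lo // BUCKET, hi // BUCKET + 1)]
total = 0
for name in names:
    total += conn.execute(
        f"SELECT COUNT(*) FROM {name} WHERE program_id BETWEEN ? AND ?",
        (lo, hi),
    ).fetchone()[0]
print(names, total)
```

Native `PARTITION BY RANGE` in later MySQL versions does the same routing transparently, without application-side table names.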
However, the server runs MySQL 5.0, which does not support partitioning, and this is a shared table whose structure cannot be modified without affecting other applications.
So we took a third approach:
SELECT * FROM program_access_log WHERE id BETWEEN 1 AND 500000 AND program_id BETWEEN 1 AND 15000000;
Now the program_id range is much wider than the id range, so the optimizer drives the query with the primary key instead. Because id is the primary key, the lookup scans 500,000 consecutive records, which is roughly as fast as querying a table that contains only 500,000 records.
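The flip in index choice can be illustrated with a toy cost model (this is not MySQL's actual optimizer; the uniform-distribution estimate and the assumed maximum values are inventions for this sketch). The planner favors whichever condition is estimated to match fewer rows:

```python
# Toy selectivity model: assumed table shape, for illustration only.
TOTAL_ROWS = 40_000_000
MAX_ID = 40_000_000          # ids assumed 1..40M
MAX_PROGRAM_ID = 15_000_000  # program_ids assumed 1..15M

def estimated_rows(lo: int, hi: int, max_value: int) -> float:
    """Crude uniform-distribution estimate for a BETWEEN range."""
    return TOTAL_ROWS * (hi - lo + 1) / max_value

def chosen_index(id_range, program_id_range) -> str:
    """Pick the access path with the smaller estimated row count."""
    id_est = estimated_rows(*id_range, MAX_ID)
    pid_est = estimated_rows(*program_id_range, MAX_PROGRAM_ID)
    return "PRIMARY (id)" if id_est <= pid_est else "idx_program_id"

# Narrow program_id range: the secondary index looks cheaper (the slow plan).
print(chosen_index((1, 500_000), (1, 4_000)))
# Widened program_id range: the primary key looks cheaper (the fast plan).
print(chosen_index((1, 500_000), (1, 15_000_000)))
```

The widened condition is logically redundant (it excludes nothing), but it changes the estimate enough to steer the optimizer onto the sequential primary-key scan.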
That covers how to optimize this index query in the MySQL database. I hope the content above is helpful and that you can learn something from it. If you think the article is good, feel free to share it so more people can see it.