2025-01-18 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/01 Report--
How can you quickly query a large batch of key values in big data? This article analyzes the problem in detail and walks through a solution, in the hope of helping readers who face it find a simpler, easier approach.
Data is generally stored in a database, and retrieval is accelerated by the data table's index. With an index, even when the total volume reaches 1 billion rows, looking up a single record takes only tens of milliseconds (complexity O(log N)). However, when there are many key values to query, say thousands or even tens of thousands, searching for each one independently accumulates tens of thousands or even hundreds of thousands of reads and comparisons, and the latency climbs to tens of minutes or even hours. At that point, relying on the database index alone is intolerable for the user experience.
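The contrast above can be sketched in a few lines. This is an illustrative SQLite demo, not the article's Oracle benchmark: the table, column names, and row counts are made up, and the point is only that N independent single-key lookups repeat the index descent N times, while one batched IN query resolves every key in a single statement.

```python
import sqlite3

# Illustrative setup (names and sizes are assumptions, not the article's data).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE testdata (id INTEGER PRIMARY KEY, data TEXT)")
conn.executemany("INSERT INTO testdata VALUES (?, ?)",
                 [(i, "row-%d" % i) for i in range(100_000)])
conn.commit()

keys = list(range(0, 5_000, 10))  # 500 keys to fetch

# One index descent per key: the cost accumulates with every lookup.
single = [conn.execute("SELECT data FROM testdata WHERE id = ?",
                       (k,)).fetchone()[0]
          for k in keys]

# One batched query: all keys are resolved in a single statement.
ph = ",".join("?" * len(keys))
batched = dict(conn.execute(
    "SELECT id, data FROM testdata WHERE id IN (%s)" % ph, keys))

assert [batched[k] for k in keys] == single
```

Both paths return the same rows; the difference the article measures is purely in how much per-lookup work accumulates.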
For example, take a table with the following structure:

Field  Type    Remarks
id     long    Self-incrementing, starting from 100000000001
data   string  Random string (180 bytes long)
For 600 million rows of data in this structure, it takes Oracle about 120 seconds to extract the 10,000 records corresponding to a batch of random IDs:

SELECT * FROM testdata WHERE id IN (…)

In addition, since an IN list may contain at most 1,000 values, the results of multiple queries have to be merged afterwards, which is also troublesome to handle.
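The chunk-and-merge workaround for that 1,000-item cap can be sketched as follows. This is a hedged illustration, not the article's code: `fetch_by_keys` and the table are invented for the demo, and SQLite stands in for Oracle (the demo uses chunks of 500 so it also fits older SQLite parameter limits, while the default of 1000 mirrors Oracle's cap).

```python
import sqlite3

def fetch_by_keys(conn, keys, chunk=1000):  # 1000 mirrors Oracle's IN-list cap
    """Query testdata in chunks of `chunk` keys and merge the result rows."""
    merged = {}
    for i in range(0, len(keys), chunk):
        part = keys[i:i + chunk]
        ph = ",".join("?" * len(part))
        merged.update(conn.execute(
            "SELECT id, data FROM testdata WHERE id IN (%s)" % ph, part))
    return merged

# Illustrative setup with a small table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE testdata (id INTEGER PRIMARY KEY, data TEXT)")
conn.executemany("INSERT INTO testdata VALUES (?, ?)",
                 [(i, str(i)) for i in range(10_000)])
conn.commit()

rows = fetch_by_keys(conn, list(range(0, 10_000, 4)), chunk=500)
assert len(rows) == 2500 and rows[8] == "8"
```

The merging itself is easy here because the demo keeps everything in one process; in the article's scenario each chunk is a separate round trip to the database, which is exactly the overhead being complained about.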
With the same data, the aggregator handles this with code that is both simple and efficient; see the following example:
A1=file("testdata.ctx").create()	// open the group table file testdata.ctx
A2=A1.index@3(id_idx)	// load the three-level index
A3=keys	// the random key sequence to be searched
A4=A1.icursor(;A3.contain(id),id_idx)	// search by the group table index id_idx
The aggregator's group table function is used here: combining a high-performance index with batch key lookup, it copes with this scenario effectively. In this test the aggregator query took only 20 seconds, six times faster than Oracle's 120 seconds.
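The core idea that makes batch lookup cheaper than repeated single lookups can be sketched in plain Python. This is a simplified illustration of the principle only: sort the wanted keys once, then resolve them in a single forward pass over index-ordered data, so the scan position never moves backwards. `batch_lookup` is a made-up name, and the aggregator's real icursor/index machinery is of course far more elaborate.

```python
import bisect

def batch_lookup(sorted_ids, values, keys):
    """Return {key: value} for every key found, via one ordered merge."""
    found = {}
    pos = 0
    for k in sorted(set(keys)):
        # Resume the search from the previous hit instead of restarting
        # at the root of the index for every key.
        pos = bisect.bisect_left(sorted_ids, k, pos)
        if pos < len(sorted_ids) and sorted_ids[pos] == k:
            found[k] = values[pos]
    return found

ids = list(range(0, 1000, 2))      # index-ordered key column (even IDs only)
vals = ["v%d" % i for i in ids]    # corresponding data column
hits = batch_lookup(ids, vals, [10, 4, 999, 500])
assert hits == {10: "v10", 4: "v4", 500: "v500"}  # 999 is odd, so absent
```

Because each key's search starts where the previous one left off, the total work over the whole batch is bounded by one pass over the data plus one binary step per key, rather than a full index descent per key.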
That is the answer to how to quickly query a large batch of key values in big data. Hopefully the content above has been of some help; if you still have questions, you can follow the industry information channel to learn more.