2025-04-05 Update · From: SLTechnology News & Howtos
Shulou (shulou.com) 06/02 report
This article focuses on how to traverse massive data in Redis. The method introduced here is simple, fast, and practical, so let's walk through it.
Preface
Sometimes we need to inspect how a production Redis instance is being used, especially the keys under a certain prefix. How can we check this safely? Here is a small piece of knowledge worth sharing.
How the accident happened
Our user token cache stores keys in the format user_token:<userid>, with the user's token as the value. An operations engineer, trying to help the developers find out how many logged-in users were online, ran keys user_token* directly against the production instance. That query caused the accident: Redis became unavailable and appeared to hang.
Analyzing the cause
We have millions of logged-in users online, which is a fairly large data set. KEYS is a full-traversal command with O(n) complexity: the more keys there are, the longer it takes. When the keyspace reaches millions of entries, a KEYS call stalls the whole Redis service, because Redis is single-threaded and executes commands sequentially; every other command has to wait until the KEYS call finishes.
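A toy model (pure Python, with invented key names) illustrates why KEYS is dangerous: it must test every key in the database against the pattern, so its cost grows with the total number of keys, not with the number of matches.

```python
from fnmatch import fnmatch

# Toy model of a Redis keyspace (key names are made up for illustration).
# KEYS has to test every key against the pattern in one blocking pass,
# so the work done is proportional to the whole keyspace.
keyspace = [f"user_token:{i}" for i in range(1_000)] + \
           [f"session:{i}" for i in range(9_000)]

def keys_command(pattern):
    """Mimics KEYS: a single blocking pass over the entire keyspace."""
    examined = 0
    matches = []
    for key in keyspace:
        examined += 1
        if fnmatch(key, pattern):
            matches.append(key)
    return matches, examined

matches, examined = keys_command("user_token*")
print(len(matches), examined)  # 1000 matches, but all 10000 keys examined
```

Even though only a tenth of the keys match, every single key is examined, and in real Redis nothing else can run while that pass is in progress.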
Solution
So how should we traverse a large keyspace? (This is also a frequent interview question.) Redis offers another command for exactly this: SCAN. Let's look at its characteristics:
1. Although the overall complexity is still O(n), the traversal proceeds in small steps via a cursor, so it does not block the server for long.
2. It takes a COUNT hint, which is not the number of results to return but (approximately) the number of dictionary slots to examine per call.
3. Like KEYS, it supports pattern matching, via the MATCH option.
4. The server keeps no state for the cursor; the only state is the cursor integer that SCAN returns to the client.
5. Results may contain duplicates, so the client must deduplicate them. This is important.
6. A single empty batch does not mean the traversal is over; only a returned cursor of 0 does.
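Point 5 deserves a small illustration. Because SCAN can return the same key more than once (for example, while the hash table is being rehashed), the client must deduplicate; collecting results into a set is the usual fix. A minimal sketch, with fabricated batches:

```python
# Batches as a client might receive them from successive SCAN calls;
# the repeated key in the second batch is fabricated to mimic what can
# happen while Redis is rehashing its dictionary.
batches = [
    ["user_token:1000", "user_token:1001"],
    ["user_token:1001", "user_token:2300"],  # "user_token:1001" repeats
    ["user_token:1389"],
]

seen = set()
for batch in batches:
    seen.update(batch)  # set membership silently drops the duplicate

print(len(seen))  # 4 unique keys across 5 returned entries
```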
I. SCAN command format
SCAN cursor [MATCH pattern] [COUNT count]
II. Command explanation

SCAN is an incremental iterator: each call returns only a small batch of elements, which is why it does not make Redis appear to hang. Along with the batch, every call returns a new cursor; the traversal starts at cursor 0 and is finished only when SCAN returns a cursor of 0.
III. Examples
redis> scan 0 match user_token* count 5
1) "6"
2) 1) "user_token:1000"
   2) "user_token:1001"
   3) "user_token:1010"
   4) "user_token:2300"
   5) "user_token:1389"

The traversal starts at cursor 0. This call returns cursor 6 together with a batch of keys; to continue the traversal, run SCAN again starting from 6.
redis> scan 6 match user_token* count 5
1) "10"
2) 1) "user_token:3100"
   2) "user_token:1201"
   3) "user_token:1410"
   4) "user_token:5300"
   5) "user_token:3389"

Summary
This comes up often in interviews, and you will also use it in day-to-day work. Small data sets rarely cause trouble, but once the data volume is large, a careless KEYS call can take the service down, and your performance review with it.
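Putting the pieces together, the full client-side loop can be sketched as follows. This is a minimal illustration: fake_scan is a stand-in for a real Redis server and the key layout is invented; the point is the loop shape, i.e. start at cursor 0, deduplicate each batch into a set, and stop only when the cursor comes back as 0.

```python
from fnmatch import fnmatch

# Invented keyspace: 23 matching keys plus 7 unrelated ones.
KEYS = [f"user_token:{i}" for i in range(23)] + [f"other:{i}" for i in range(7)]

def fake_scan(cursor, match, count):
    """Stand-in for a Redis server: returns (next_cursor, batch_of_matches)."""
    chunk = KEYS[cursor:cursor + count]
    next_cursor = cursor + count
    if next_cursor >= len(KEYS):
        next_cursor = 0  # cursor 0 signals that the traversal is complete
    return next_cursor, [k for k in chunk if fnmatch(k, match)]

cursor = 0
found = set()
while True:
    cursor, batch = fake_scan(cursor, match="user_token*", count=5)
    found.update(batch)   # a set deduplicates any repeated keys
    if cursor == 0:       # only cursor 0 ends the loop --
        break             # an empty batch alone does not

print(len(found))  # 23 logged-in users counted, without one blocking pass
```

With a real redis-py client the same loop is available as a generator, for example `for key in r.scan_iter(match="user_token:*", count=1000): ...`, which handles the cursor bookkeeping for you.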
At this point, I believe you have a deeper understanding of how to traverse massive data in Redis. You might as well try it out in practice.
© 2024 shulou.com SLNews company. All rights reserved.