2025-04-06 Update From: SLTechnology News&Howtos > Database
Preface
Are you still using the KEYS command to fuzzy-match and delete data? That is a bomb waiting to go off!

Redis has no built-in command to bulk-delete keys with a given prefix, yet we often need to do exactly that. So what should we do? A quick search will probably turn up an answer like this:
redis-cli --raw keys "ops-coffee-*" | xargs redis-cli del
This uses Redis's KEYS command to match all the keys directly on Linux, then calls the system command xargs to delete them. It looks perfect, but the risk is huge.

Because Redis serves requests on a single thread, the KEYS command blocks normal business requests while it runs. If KEYS matches too many keys at once, or DEL hits a large key, the service can become unavailable and Redis may even go down.
So we should avoid this approach in production. Is there a more elegant way? SCAN!
Introduction and use of SCAN
Redis has supported the SCAN command since version 2.8. Its basic usage is as follows:
SCAN cursor [MATCH pattern] [COUNT count]
cursor: the SCAN command is a cursor-based iterator. Each call returns a new cursor, which the user passes as the cursor argument of the next call to continue the iteration. When the server returns a cursor of 0, a full traversal is complete.

MATCH: a matching pattern. For example, to traverse all keys beginning with ops-coffee- write ops-coffee-*; for keys containing -coffee- write *-coffee-*.

COUNT: tells the command roughly how many elements to return from the dataset per iteration. COUNT is only a hint and does not fix the actual number returned; with COUNT set to 2 you may still get three elements back, but the number returned is positively correlated with the COUNT setting. The default COUNT is 10.
The following is an example of an iterative process for the SCAN command:
127.0.0.1:6379> scan 0 MATCH ops-coffee-*
1) "38"
2)  1) "ops-coffee-25"
    2) "ops-coffee-19"
    3) "ops-coffee-29"
    4) "ops-coffee-10"
    5) "ops-coffee-23"
    6) "ops-coffee-5"
    7) "ops-coffee-14"
    8) "ops-coffee-16"
    9) "ops-coffee-11"
   10) "ops-coffee-15"
   11) "ops-coffee-7"
   12) "ops-coffee-1"
127.0.0.1:6379> scan 38 MATCH ops-coffee-* COUNT 1000
1) "0"
2)  1) "ops-coffee-13"
    2) "ops-coffee-9"
    3) "ops-coffee-21"
    4) "ops-coffee-6"
    5) "ops-coffee-30"
    6) "ops-coffee-20"
    7) "ops-coffee-2"
    8) "ops-coffee-12"
    9) "ops-coffee-28"
   10) "ops-coffee-3"
   11) "ops-coffee-26"
   12) "ops-coffee-4"
   13) "ops-coffee-31"
   14) "ops-coffee-8"
   15) "ops-coffee-22"
   16) "ops-coffee-27"
   17) "ops-coffee-18"
   18) "ops-coffee-24"
   19) "ops-coffee-17"
The SCAN command returns an array of two elements: the first is the new cursor for the next iteration, and the second is an array containing the elements retrieved in this iteration.

The example above scans all keys prefixed with ops-coffee-:

The first call starts a new iteration with cursor 0 and uses MATCH to match keys with the prefix ops-coffee-; it returns cursor 38 along with the traversed data.

The second call passes the cursor returned by the first call, 38, and sets the COUNT option to 1000 to make this iteration scan more elements.

On this second call, SCAN returns cursor 0, which means the iteration is over and the entire dataset has been fully traversed.
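The cursor loop above can be sketched in Python. This is a minimal illustration, assuming a redis-py-style client object `r` whose `scan()` method returns a `(new_cursor, keys)` pair just like the CLI output; the helper name `scan_all` is ours:

```python
def scan_all(r, pattern):
    # Iterate with SCAN until the server hands back cursor 0,
    # collecting every key that matches the pattern.
    cursor = 0
    keys = []
    while True:
        cursor, batch = r.scan(cursor=cursor, match=pattern, count=100)
        keys.extend(batch)
        if cursor == 0:  # cursor 0 marks the end of a full traversal
            break
    return keys
```

Each intermediate cursor (38 in the example) simply gets fed back in until the server signals completion with 0.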
The time complexity of the KEYS command is O(N), while SCAN breaks the traversal into many small operations, each costing O(1), which avoids the server blocking that KEYS causes when traversing a large dataset. Deletion can then be done gracefully with:
redis-cli --scan --pattern "ops-coffee-*" | xargs -L 2000 redis-cli del
The -L option of xargs specifies how many lines xargs reads at a time, that is, how many keys are deleted per invocation of redis-cli del; reading too many at once can cause errors.
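The same batched deletion can be done from Python. A minimal sketch, assuming a redis-py-style client passed in as `r` (the function name and default batch size are ours): keys are collected from `scan_iter` and deleted in bounded batches so no single DEL call grows too large.

```python
def del_keys_by_prefix(r, prefix, batch_size=500):
    # scan_iter hides the cursor loop; batch the keys so each
    # DEL stays small and does not block the server.
    batch = []
    for key in r.scan_iter(match=prefix + '*', count=300):
        batch.append(key)
        if len(batch) >= batch_size:
            r.delete(*batch)
            batch = []
    if batch:  # delete any remainder
        r.delete(*batch)
```

With redis-py this would be called as, for example, `del_keys_by_prefix(redis.StrictRedis(host='localhost', port=6379), 'ops-coffee-')`.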
Elegant deletion of several other data structures
Alongside SCAN, Redis provides SSCAN, HSCAN, and ZSCAN for its other data types, and they are used in much the same way:
> sscan ops-coffee 0 MATCH v*
1) "7"
2) 1) "v15"
   2) "v13"
   3) "v12"
   4) "v10"
   5) "v14"
   6) "v1"
Unlike SCAN, these commands take an extra key argument, such as ops-coffee above, and iterate within that key.
For a large set key, graceful batch deletion can be achieved with SSCAN using the following code:

import redis

def del_big_set_key(key_name):
    r = redis.StrictRedis(host='localhost', port=6379)
    # count is the number of elements fetched per iteration
    for member in r.sscan_iter(name=key_name, count=300):
        r.srem(key_name, member)

del_big_set_key('ops-coffee')
For a large hash key, graceful deletion can be achieved with the help of HSCAN using the following code:

import redis

def del_big_hash_key(key_name):
    r = redis.StrictRedis(host='localhost', port=6379)
    # hscan_iter yields (field, value) tuples; delete by field
    for field, _ in r.hscan_iter(name=key_name, count=300):
        r.hdel(key_name, field)

del_big_hash_key('ops-coffee')
Deleting a large sorted set is relatively simple: remove members by rank range with ZREMRANGEBYRANK.

import redis

def del_big_sort_key(key_name):
    r = redis.StrictRedis(host='localhost', port=6379)
    # while the set still has members, delete those ranked 0-99
    while r.zcard(key_name) > 0:
        r.zremrangebyrank(key_name, 0, 99)

del_big_sort_key('ops-coffee')
To delete a large list, you can follow the same pattern: check the length with LLEN, then remove ranges of elements with LTRIM.
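Following that pattern, here is a minimal sketch (the client is passed in as `r` and the function name is ours): LTRIM key batch -1 keeps everything from index `batch` onward, i.e. drops the first `batch` elements each round.

```python
def del_big_list_key(r, key_name, batch=100):
    # Drop the first `batch` elements until the list is empty.
    while r.llen(key_name) > 0:
        r.ltrim(key_name, batch, -1)
```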
With that, graceful deletion of large keys is covered for all five Redis data structures; these are the approaches to prefer in production.
Summary
That is the whole of this article. I hope it is of some reference and learning value for your study or work. Thank you for your support.