Shulou
2025-02-24 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)05/31 Report--
This article explains how Redis can delete more than 10,000 keys without affecting the business. Many people are unsure how to do this safely, so the simple, practical method below has been collected and organized here. I hope it helps answer the question; please read on to study it.
Requirement
Sometimes you need to delete a large batch of keys: perhaps they were created without an expiration time, business requirements changed, Redis is running out of memory, or key values must be rewritten. These keys follow a naming convention and can therefore be matched with a glob-style pattern.
Solution one
A quick search online will usually point you to the following method. Redis provides a simple, brute-force command, keys, which lists every key matching a given glob pattern.
$ redis-cli --raw keys "testkey-*" | xargs redis-cli del
This uses keys to match the keys you want to delete, then pipes the result through xargs to redis-cli del. It looks perfect, but it is actually very risky.
The command above is easy to use: supply a glob pattern and you are done. But it has two obvious drawbacks.
There are no offset or limit parameters, so it spits out every matching key at once. If a huge number of keys in the instance match, you will feel uncomfortable watching the output scroll endlessly across your screen.
keys is a full-traversal algorithm with O(n) complexity. If the instance holds tens of millions of keys, this command will stall the Redis service: every other read and write will be delayed or even time out, because Redis executes commands in a single thread, one after another, and no other command can run until the current keys command finishes. That makes the business unavailable and can even risk bringing Redis down.
Note: this method is not recommended, and it is best to block the keys command entirely in production. So, is there a better way to solve the problem? Of course there is; please read on.
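One common way to block keys in production, assuming you manage redis.conf directly, is to rename the command away (on Redis 6 and later, ACLs offer a finer-grained alternative):

```
# redis.conf: renaming a command to the empty string disables it entirely
rename-command KEYS ""
```

Note that this applies to every client connecting to the instance, so coordinate with all consumers before enabling it.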
Solution two
Redis has supported the scan command since version 2.8. The basic usage of the SCAN command is as follows:
SCAN cursor [MATCH pattern] [COUNT count]
cursor: the SCAN command is a cursor-based iterator. Each call returns a new cursor, which the caller must pass as the cursor argument of the next call to continue the iteration. When the server returns a cursor of 0, a complete traversal has finished.
MATCH: a glob pattern; for example, to traverse every key beginning with testkey-, write testkey-*.
COUNT: tells the command roughly how many elements to examine per iteration. It is only a hint for the incremental iteration, not the exact number returned; with COUNT set to 2 you may still get 3 elements back, but the number returned is positively correlated with the COUNT setting. The default value of COUNT is 10.
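The cursor protocol described above can be sketched in a few lines of Python. A stub class stands in for a real Redis connection so the loop is visible without a server; note the stub uses the cursor as a plain list offset, which real Redis does not (with the redis-py client, the equivalent call would be r.scan(cursor, match=..., count=...)):

```python
import fnmatch

class FakeRedis:
    """Stub that simulates SCAN over a fixed keyspace, `count` keys per page.
    Unlike real Redis, the cursor here is a simple offset into the key list."""
    def __init__(self, keys):
        self._keys = list(keys)

    def scan(self, cursor=0, match="*", count=10):
        page = self._keys[cursor:cursor + count]
        next_cursor = cursor + count
        if next_cursor >= len(self._keys):
            next_cursor = 0  # cursor 0 signals the iteration is complete
        return next_cursor, [k for k in page if fnmatch.fnmatch(k, match)]

def scan_all(r, pattern):
    """Drive the cursor until the server returns 0, collecting matches."""
    cursor, found = 0, []
    while True:
        cursor, keys = r.scan(cursor, match=pattern, count=10)
        found.extend(keys)
        if cursor == 0:  # the ONLY end-of-iteration signal
            break
    return found

r = FakeRedis([f"testkey-{i}" for i in range(25)] + ["other-1"])
print(len(scan_all(r, "testkey-*")))  # 25
```

The loop body never checks whether a batch is empty; only the returned cursor decides when to stop, exactly as the real command requires.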
Example:

$ redis-cli
127.0.0.1:6379> SCAN 0 MATCH testkey-*
1) "34"
2)  1) "testkey-2"
    2) "testkey-49"
    3) "testkey-20"
    4) "testkey-19"
    5) "testkey-93"
    6) "testkey-8"
    7) "testkey-34"
    8) "testkey-76"
    9) "testkey-13"
   10) "testkey-18"
   11) "testkey-10"
127.0.0.1:6379> SCAN 34 MATCH testkey-* COUNT 1000
1) "0"
2)  1) "testkey-16"
    2) "testkey-23"
    3) "testkey-21"
    4) "testkey-40"
    5) "testkey-22"
    6) "testkey-1"
    7) "testkey-11"
    8) "testkey-28"
    9) "testkey-3"
   10) "testkey-26"
   11) "testkey-4"
   12) "testkey-31"
   ...
The scan command returns an array of two elements: the first is the new cursor for the next iteration, and the second is an array of the keys matched in this iteration.
The example above scans all keys with the prefix testkey-. The first call passes cursor 0 to start a new iteration and uses MATCH to filter keys with the testkey- prefix; it returns cursor 34 along with a batch of keys. The second call passes the cursor returned by the first call, 34, and sets COUNT to 1000 to make this iteration scan more elements. This second SCAN call returns cursor 0, meaning the iteration is over and the entire keyspace has been fully traversed.
Because scan is a cursor-based iterator, each call must pass the cursor returned by the previous call to continue the iteration. Passing a cursor of 0 starts a new iteration, and a returned cursor of 0 means the iteration has finished. That returned cursor, not an empty result set, is the only way to tell that the traversal is complete.
The above requirements can eventually be addressed with the following command:
$ redis-cli --scan --pattern "testkey-*" | xargs -L 1000 redis-cli del
The -L option of xargs sets how many lines xargs reads per invocation, that is, how many keys are deleted per del call. Do not pass too many keys at a time.
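The same batched deletion can be done from application code. The sketch below mirrors xargs -L 1000: group keys from the scan into fixed-size batches and issue one DEL per batch. The batching helper is plain Python; delete_matching assumes the redis-py client (scan_iter and delete are its real methods) and a reachable server, so it is shown behind a function you call with your own client:

```python
from itertools import islice

def batches(iterable, size):
    """Yield lists of at most `size` items from any iterable."""
    it = iter(iterable)
    while True:
        batch = list(islice(it, size))
        if not batch:
            return
        yield batch

def delete_matching(r, pattern, batch_size=1000):
    """Delete keys matching `pattern` in batches; returns the count deleted.
    `r` is assumed to be a redis-py client; scan_iter drives the cursor
    internally, so the caller never handles it directly."""
    deleted = 0
    for group in batches(r.scan_iter(match=pattern, count=batch_size), batch_size):
        deleted += r.delete(*group)  # one DEL for the whole batch, not per key
    return deleted

# Usage (assumes a running server):
#   import redis
#   r = redis.Redis()
#   delete_matching(r, "testkey-*")
```

Keeping the batch size moderate has the same motivation as with xargs: each DEL of a thousand keys is short enough not to stall other clients for long.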
Comparison between scan and keys
Compared with keys, scan has the following characteristics:
Its complexity is also O(n), but the work is spread across many calls via the cursor, so it does not block the server for long.
The COUNT parameter controls roughly how much work is done per call. COUNT is only a hint for the incremental iteration; the actual number of results returned can be more or less.
Like keys, it also provides pattern matching capabilities.
The server does not need to save state for the cursor; the only state of the cursor is the cursor integer returned by scan to the client.
Importantly, the returned results may contain duplicates, which the client must de-duplicate itself.
If data is modified during the traversal, it is undefined whether the changed keys will be returned.
A single empty batch does not mean the traversal is over; only a returned cursor of 0 does.
Summary
scan is one of a family of incremental iteration commands in Redis. Besides traversing all keys, there are variants that traverse the elements of specific container types:
Zscan traverses zset collection elements
Hscan traverses the elements of the hash dictionary
Sscan traverses the elements of the set collection
Note: the first argument to the SSCAN, HSCAN, and ZSCAN commands is always a database key. The SCAN command needs no key argument, because it iterates over all keys in the current database.
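As an illustration (assuming a local server holding a hash myhash, a set myset, and a zset myzset; output is omitted since it depends on your data), the typed variants take the key first, then the cursor:

```
127.0.0.1:6379> HSCAN myhash 0 MATCH field-* COUNT 100
127.0.0.1:6379> SSCAN myset 0
127.0.0.1:6379> ZSCAN myzset 0
```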
That concludes the study of how Redis can delete more than 10,000 keys without affecting the business. I hope it has resolved your doubts; theory works best when paired with practice, so go and try it! To keep learning, please continue to follow this site for more practical articles.