2025-02-24 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)05/31 Report--
This article mainly answers the question "what are the high-frequency Redis interview questions?" The material is straightforward, quick, and practical, so let's work through it together.
1. What are the application scenarios of Redis?
Caching: this is Redis's primary use and an essential mechanism for any large website. Used well, a cache not only speeds up data access but also significantly reduces the load on back-end data sources.
Shared session: for services that rely on session state, moving from a single machine to a cluster means session data must be shared; Redis is a common choice for managing sessions centrally.
Message queue: a message queue is a near-essential building block of a large website, because it enables business decoupling and smooths out traffic peaks for non-real-time work. Redis provides publish/subscribe and blocking list operations; while not as capable as a dedicated message queue, these cover most basic queueing needs. For example, a distributed crawler system can use Redis to manage its URL queue centrally.
Distributed locks: in a distributed service, you can build a distributed lock on Redis's SETNX capability, although this pattern is less common than the others.
Of course, features such as leaderboards and like counters can also be built on Redis, but Redis cannot do everything. When the data volume is very large, Redis is a poor fit: it is memory-based, and although memory is relatively cheap, storing something like hundreds of millions of user-behavior log records per day in Redis would be prohibitively expensive.
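To make the distributed-lock idea above concrete, here is a minimal sketch of the SETNX-with-expiry pattern, simulated with an in-memory class so it runs without a Redis server. The `MiniRedis` class and all names in it are illustrative stand-ins, not a real client API; the important parts are the atomic set-if-absent, the per-owner token, and the token check on release.

```python
import time
import uuid

class MiniRedis:
    """A tiny in-memory stand-in for the Redis commands a lock needs."""
    def __init__(self):
        self.store = {}   # key -> (value, expire_at)

    def set_nx_ex(self, key, value, ttl):
        """Like SET key value NX EX ttl: set only if absent (or expired)."""
        entry = self.store.get(key)
        now = time.monotonic()
        if entry is not None and entry[1] > now:
            return False                      # lock held by someone else
        self.store[key] = (value, now + ttl)
        return True

    def release(self, key, value):
        """Delete the key only if we still own it (compare the token)."""
        entry = self.store.get(key)
        if entry is not None and entry[0] == value:
            del self.store[key]
            return True
        return False

def acquire_lock(r, name, ttl=10):
    token = str(uuid.uuid4())                 # unique token per owner
    return token if r.set_nx_ex(name, token, ttl) else None

r = MiniRedis()
token = acquire_lock(r, "lock:job")
print(token is not None)                      # True: lock acquired
print(acquire_lock(r, "lock:job"))            # None: already held
r.release("lock:job", token)
```

The token check on release matters: without it, a client whose lock already expired could delete a lock now owned by someone else.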
2. Why is single-thread Redis so fast?
How fast is Redis? The official figure is about 100,000 reads/writes per second. Would it surprise you that this is achieved on a single thread? The reasons single-threaded Redis is so fast are as follows:
Pure in-memory operation: Redis works entirely in memory, so reads and writes are very fast. Redis does perform persistence, but that work is handed off to a forked child process and uses the Linux page cache, so it does not affect Redis's performance.
Single-threaded execution: a single thread is not necessarily a drawback; it avoids the cost of frequent context switching, which itself hurts performance.
Reasonable and efficient data structures.
Non-blocking I/O multiplexing: the multiplexing model uses select, poll, or epoll to monitor I/O events on many streams at once, blocking the thread while idle and waking it when one or more streams have events. The program then handles only the ready streams in turn (epoll reports only the streams that actually have events, rather than scanning them all), which avoids a large amount of useless work.
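The multiplexing idea above can be demonstrated with Python's standard `selectors` module, which wraps select/poll/epoll behind one interface. This is a sketch of the pattern, not Redis's actual event loop; `poll_once` and the socket pair are just for illustration.

```python
import selectors
import socket

def poll_once(sel, timeout=1):
    """Block until at least one registered stream is ready, then read it."""
    results = []
    for key, _mask in sel.select(timeout=timeout):  # only ready streams
        results.append(key.fileobj.recv(16))
    return results

sel = selectors.DefaultSelector()
a, b = socket.socketpair()            # two connected in-process sockets
b.setblocking(False)
sel.register(b, selectors.EVENT_READ) # watch b for readability
a.sendall(b"PING")                    # makes b readable
print(poll_once(sel))                 # [b'PING']
```

One thread, many watched streams, and work done only when data is actually ready: that is the core of Redis's event loop.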
3. Talk about Redis's data structures and their usage scenarios.
Redis provides five core data structures, each with a variety of usage scenarios.
1. String string
The string is Redis's most basic data structure: every key is a string, and the other data structures are built on top of strings. The familiar set key value command operates on strings. Strings are commonly used for caching, counters, shared sessions, rate limiting, and so on.
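The counting and rate-limiting uses mentioned above follow the INCR + EXPIRE pattern. Below is a minimal in-memory sketch of a fixed-window limiter built on that idea; `FixedWindowLimiter` is an illustrative name, and the window reset stands in for the key expiring in Redis.

```python
import time

class FixedWindowLimiter:
    """Rate limiting via the INCR + EXPIRE pattern, simulated in memory."""
    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.counters = {}   # key -> (count, window_start)

    def allow(self, key, now=None):
        now = time.monotonic() if now is None else now
        count, start = self.counters.get(key, (0, now))
        if now - start >= self.window:   # window elapsed: like EXPIRE firing
            count, start = 0, now
        count += 1                       # like INCR key
        self.counters[key] = (count, start)
        return count <= self.limit

limiter = FixedWindowLimiter(limit=2, window_seconds=60)
print(limiter.allow("user:1"))   # True: first request in the window
```

In real Redis the increment and expiry should be done atomically (for example with a small Lua script) so that two steps cannot race.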
2. Hash hash
In Redis, the hash type means the value itself is a set of field-value pairs, such as value = {{field1, value1}, ... {fieldN, valueN}}. The add command is hset key field value. Hashes are a good fit for storing structured objects such as user profiles, or for implementing a shopping cart.
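The shopping-cart idea maps naturally onto HSET/HGETALL: one hash per user, one field per item. A sketch of those semantics with a plain dict (the `cart:` key names are illustrative):

```python
# HSET / HGETALL semantics sketched with a dict of dicts.
carts = {}

def hset(key, field, value):
    """HSET key field value: set one field inside the hash at `key`."""
    carts.setdefault(key, {})[field] = value

def hgetall(key):
    """HGETALL key: return every field-value pair of the hash."""
    return dict(carts.get(key, {}))

hset("cart:42", "sku:1001", 2)   # user 42 adds 2 units of sku 1001
hset("cart:42", "sku:2002", 1)
hset("cart:42", "sku:1001", 3)   # same field again: quantity updated to 3
print(hgetall("cart:42"))        # {'sku:1001': 3, 'sku:2002': 1}
```

Compared with serializing the whole cart into one string, a hash lets you update a single item's quantity without rewriting everything else.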
3. List list
The list type stores multiple strings in order. It can serve as a simple message queue, and the lrange command makes Redis-backed pagination easy, with excellent performance and a good user experience.
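To show the pagination idea, here is LRANGE's indexing behavior (inclusive stop, negative indexes count from the end) reproduced over a Python list; the `timeline` data is made up for illustration.

```python
def lrange(lst, start, stop):
    """LRANGE semantics: inclusive stop index, negative indexes allowed."""
    n = len(lst)
    if start < 0:
        start = max(n + start, 0)
    if stop < 0:
        stop = n + stop
    return lst[start:stop + 1]

timeline = [f"post:{i}" for i in range(10)]   # newest-first feed
page_size = 3
page_1 = lrange(timeline, 0, page_size - 1)   # first page
page_2 = lrange(timeline, 3, 5)               # second page
print(page_1)                                 # ['post:0', 'post:1', 'post:2']
```

Each page is then one O(page) read from the head of the list, which is why list-backed pagination performs well.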
4. Set collection
The set type also stores multiple string elements, but unlike a list it allows no duplicates, and its elements are unordered and cannot be fetched by index. Using set intersection, union, and difference operations, you can compute shared preferences, combined preferences, preferences unique to one user, and similar features.
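The intersection/union/difference operations mentioned above (SINTER, SUNION, SDIFF in Redis) map directly onto Python's set operators, which makes the "common preferences" use case easy to sketch. The sample data is invented for illustration.

```python
# SINTER / SUNION / SDIFF correspond to &, |, - on Python sets.
likes_a = {"rock", "jazz", "pop"}     # user A's tagged preferences
likes_b = {"jazz", "pop", "metal"}    # user B's tagged preferences

common = likes_a & likes_b            # SINTER: shared preferences
either = likes_a | likes_b            # SUNION: all preferences combined
only_a = likes_a - likes_b            # SDIFF: unique to user A

print(sorted(common))                 # ['jazz', 'pop']
```

In Redis these run server-side, so computing mutual friends or common interests needs no data transfer to the application.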
5. Sorted Set ordered set
A sorted set adds a weight, the score, and its elements are ordered by score. It is well suited to leaderboards and top-N queries.
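The leaderboard use case above corresponds to ZADD and ZREVRANGE. A minimal in-memory sketch of those semantics (the player names and scores are made up):

```python
# ZADD / ZREVRANGE semantics sketched with a member -> score dict.
scores = {}

def zadd(board, member, score):
    """ZADD board score member: insert or update a member's score."""
    board[member] = score

def zrevrange_top(board, n):
    """Highest-scored members first, like ZREVRANGE board 0 n-1."""
    return sorted(board, key=board.get, reverse=True)[:n]

zadd(scores, "alice", 300)
zadd(scores, "bob", 150)
zadd(scores, "carol", 420)
print(zrevrange_top(scores, 2))   # ['carol', 'alice']
```

Real sorted sets keep members ordered on every insert (via a skip list), so top-N reads need no full sort at query time, unlike this sketch.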
4. Talk about Redis's data expiration strategy.
Let's start with the conclusion: Redis expires data using a combination of the periodic deletion and lazy deletion strategies.
1. What are the periodic deletion and lazy deletion strategies?
Periodic deletion: Redis runs a timer that periodically checks keys and deletes any that have expired. This guarantees every expired key is eventually removed, but it has serious drawbacks: scanning all keys in memory each time wastes CPU, and a key that has expired but has not yet been visited by the timer can still be read in the meantime.
Lazy deletion: when a key is read, Redis checks whether it has expired and deletes it if so. The drawback: a key that has expired but is never read again stays in memory indefinitely, wasting a lot of space.
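The two strategies can be sketched together in a few lines. This is an illustrative in-memory model, not Redis's implementation: `get` performs the lazy check on read, and `periodic_sweep` samples a random subset of volatile keys rather than scanning them all.

```python
import random
import time

class ExpiringStore:
    """Lazy deletion on read plus periodic random-sample deletion."""
    def __init__(self):
        self.data = {}
        self.expire_at = {}   # only keys with a TTL appear here

    def set(self, key, value, ttl=None, now=None):
        now = time.monotonic() if now is None else now
        self.data[key] = value
        if ttl is not None:
            self.expire_at[key] = now + ttl

    def get(self, key, now=None):
        now = time.monotonic() if now is None else now
        # Lazy deletion: purge an expired key the moment it is read.
        if key in self.expire_at and self.expire_at[key] <= now:
            del self.data[key]
            del self.expire_at[key]
            return None
        return self.data.get(key)

    def periodic_sweep(self, sample_size=20, now=None):
        """Check a random sample of volatile keys, not the whole keyspace."""
        now = time.monotonic() if now is None else now
        keys = list(self.expire_at)
        for key in random.sample(keys, min(sample_size, len(keys))):
            if self.expire_at[key] <= now:
                del self.data[key]
                del self.expire_at[key]
```

Sampling keeps the periodic pass cheap; lazy checks catch expired keys the sampler misses on their next read.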
2. How does the periodic deletion + lazy deletion strategy work?
The two strategies complement each other naturally, with one change on the periodic side: instead of scanning every key each round, Redis randomly samples a subset of keys to check, greatly reducing the CPU cost. Lazy deletion then covers the keys the sampler misses, which handles almost every case. But what if a key is so unlucky that it is neither sampled by the timer nor read again; how does it ever leave memory? That is where the memory eviction mechanism comes in: when memory runs low, eviction kicks in. Redis supports the following eviction policies:
noeviction: when memory is insufficient to hold newly written data, new write operations return an error. (the Redis default)
allkeys-lru: when memory is insufficient to hold newly written data, evict the least recently used key from the whole keyspace. (recommended)
allkeys-random: when memory is insufficient to hold newly written data, evict a random key from the whole keyspace.
volatile-lru: when memory is insufficient to hold newly written data, evict the least recently used key among keys that have an expiration set. This is typically used when Redis serves as both a cache and persistent storage.
volatile-random: when memory is insufficient to hold newly written data, evict a random key among keys that have an expiration set.
volatile-ttl: when memory is insufficient to hold newly written data, evict the keys with the nearest expiration time first, among keys that have an expiration set.
To change the eviction policy, simply configure the maxmemory-policy parameter in the redis.conf configuration file.
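For example, a cache-only deployment would typically cap memory and pick the recommended allkeys-lru policy in redis.conf (the 2gb limit below is an arbitrary illustrative value):

```conf
# redis.conf: cap memory and evict the least recently used key
maxmemory 2gb
maxmemory-policy allkeys-lru
```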
5. How to solve Redis cache penetration and cache avalanche problems
Cache avalanche: the cache layer absorbs a large share of requests and thereby protects the storage layer. But if the cache layer becomes unable to serve for some reason, such as a Redis node going down or a batch of hotspot keys expiring at once, all requests hit the database directly, which can bring the database down.
To prevent and solve the cache avalanche problem, we can start from the following three aspects:
1. Use a highly available Redis architecture: use a Redis cluster so the cache service does not become a single point of failure.
2. Stagger cache expiration times: add a random offset to each key's expiration time to avoid keys all failing at once.
3. Rate limiting and degradation: degrade gracefully when a dependency fails; for example, if the personalized recommendation service is unavailable, fall back to a popular-items recommendation service.
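Point 2 above is a one-liner in practice: add random jitter to the base TTL when writing each cache key so a batch loaded together does not expire together. The base TTL and spread below are arbitrary illustrative values.

```python
import random

BASE_TTL = 3600      # one hour, the nominal cache lifetime

def jittered_ttl(base=BASE_TTL, spread=300):
    """Return base TTL plus up to `spread` seconds of random jitter,
    so keys written in the same batch expire at different moments."""
    return base + random.randint(0, spread)

# e.g. pass this as the EX argument when setting each cache key
print(jittered_ttl())
```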
Cache penetration: cache penetration means querying data that does not exist at all. Such queries can never hit the cache, so every request falls through to the database, which can bring the database down.
You can consider the following two ways to prevent and resolve cache penetration problems:
1. Cache null objects: cache the null result itself. The drawback is that a large number of meaningless null values take up space, which is wasteful.
2. Bloom filter interception: load all keys that could legitimately exist into a Bloom filter in advance. On each query, first check whether the key might exist in the filter; only if it might, continue down to the cache and database, otherwise return immediately. A Bloom filter has a certain false-positive rate, so the business must tolerate a small margin of error.
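A minimal Bloom filter sketch follows, to make the interception idea concrete. This is an illustrative pure-Python version (deriving several bit positions per key from SHA-256), not a production implementation; the sizes are arbitrary. The key property: `might_contain` never returns False for a key that was added, so legitimate keys are never blocked.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: no false negatives, tunable false positives."""
    def __init__(self, size_bits=8192, num_hashes=4):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, key):
        """Derive num_hashes bit positions for a key from SHA-256."""
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, key):
        for pos in self._positions(key):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, key):
        """False: key definitely absent. True: key probably present."""
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(key))

bf = BloomFilter()
bf.add("user:1")
print(bf.might_contain("user:1"))   # True: added keys are always found
```

In front of the cache, a False answer lets you return immediately without touching Redis or the database; only True answers proceed down the stack.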
At this point, I believe you have a deeper understanding of "what are the high-frequency Redis interview questions?" You might as well put it into practice. Follow us and keep learning!