2025-01-31 Update From: SLTechnology News&Howtos
This article walks through a sample analysis of common Redis interview questions. We found it quite practical, so we are sharing it here for reference.
Application scenarios
Cache
Shared session
Message queue
Distributed lock
Why single-threaded Redis is fast
Pure in-memory operations
Single-threaded execution, which avoids the cost of frequent context switches
Simple and efficient data structures
Non-blocking I/O multiplexing (a single thread watches many file descriptors at once and reacts whenever data arrives on any of them)
Redis data structures and usage scenarios
String: the most basic Redis data structure. Keys are always strings, and the other data structures are built on top of the string type; the familiar set key value command operates on strings. Common uses: caching, counters, shared sessions, rate limiting.
Hash: a key that maps to a collection of field-value pairs. Hashes are a good fit for storing object-like data such as user profiles, and can be used to implement, for example, a shopping cart.
List: stores multiple ordered strings and can serve as a simple message queue.
Set: also stores multiple string elements, but unlike a list it allows no duplicates, its elements are unordered, and they cannot be fetched by index. Using set intersection, union, and difference, you can compute shared interests, combined interests, interests unique to one user, and so on.
Sorted Set (implemented with a skip list): adds a weight, the score, to each element, so members can be ordered by score. Typical uses are leaderboards and "top N" queries.
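To make the Sorted Set leaderboard idea concrete, here is a minimal pure-Python sketch that mimics the semantics of ZADD and ZREVRANGE (illustrative only; real code would issue those commands against a Redis server):

```python
# Toy model of a Sorted Set leaderboard: a dict of member -> score,
# with ZADD-like upsert and ZREVRANGE-like "top N" retrieval.

def zadd(board, member, score):
    board[member] = score  # upsert the member with its score

def zrevrange_top_n(board, n):
    # Sort members by score, highest first, and return the top N.
    return sorted(board, key=board.get, reverse=True)[:n]

board = {}
zadd(board, "alice", 300)
zadd(board, "bob", 150)
zadd(board, "carol", 225)
print(zrevrange_top_n(board, 2))  # -> ['alice', 'carol']
```

In real Redis the skip list keeps members ordered as they are inserted, so "top N" is O(log N + N) rather than a full sort.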
Redis Data Expiration Policy
Redis expires keys using a combination of periodic deletion and lazy deletion.
Periodic deletion: Redis runs a timer that periodically checks keys and deletes those that have expired. This guarantees expired keys are eventually removed, but a naive version has serious drawbacks: iterating over every key in memory burns CPU, and a key that has already expired can still be read in the window before the timer fires.
Lazy deletion: when a key is read, Redis first checks whether it has expired and deletes it if so. The drawback: a key that is never accessed again stays in memory even though it has expired, wasting space.
The two strategies complement each other naturally. Combined, the periodic pass changes: instead of scanning every key each time, it randomly samples some keys and checks those, reducing CPU cost, while lazy deletion catches the keys the sampler misses. But what about a key unlucky enough never to be sampled and never accessed again? That is where the memory eviction mechanism comes in: when memory runs low, one of the following policies applies:
noeviction: when memory cannot hold newly written data, new writes report an error. (Redis's default policy.)
allkeys-lru: when memory cannot hold newly written data, evict the least recently used key from the whole key space. (Generally recommended.)
allkeys-random: when memory cannot hold newly written data, evict a random key from the whole key space.
volatile-lru: when memory cannot hold newly written data, evict the least recently used key among keys that have an expiration time set. Typically used when Redis serves as both a cache and persistent storage.
volatile-random: when memory cannot hold newly written data, evict a random key among keys that have an expiration time set.
volatile-ttl: when memory cannot hold newly written data, evict keys with the nearest expiration time first, among keys that have an expiration time set.
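The interplay of lazy and periodic deletion can be sketched as a toy in-memory store (a simplified model, not Redis source code; all names are invented for illustration):

```python
import random
import time

class ExpiringStore:
    """Toy model of Redis's lazy + periodic expiration (not real Redis code)."""
    def __init__(self):
        self.data = {}      # key -> value
        self.expires = {}   # key -> absolute expiry timestamp

    def _expired(self, key, now):
        return key in self.expires and self.expires[key] <= now

    def get(self, key, now=None):
        now = time.time() if now is None else now
        if self._expired(key, now):          # lazy deletion on access
            self.data.pop(key, None)
            self.expires.pop(key, None)
            return None
        return self.data.get(key)

    def periodic_sweep(self, now=None, sample_size=20):
        # Periodic deletion: check a random sample instead of every key,
        # which is what keeps CPU cost bounded.
        now = time.time() if now is None else now
        keys = random.sample(list(self.expires), min(sample_size, len(self.expires)))
        for k in keys:
            if self._expired(k, now):
                self.data.pop(k, None)
                self.expires.pop(k, None)

store = ExpiringStore()
store.data["a"] = 1
store.expires["a"] = 100.0
print(store.get("a", now=50.0))   # -> 1 (not yet expired)
print(store.get("a", now=150.0))  # -> None (lazily deleted on access)
```

Keys the sweep never samples and clients never read are the ones that only the eviction policies above can reclaim.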
Redis SET and SETNX
SETNX does not support setting an expiration time. When building a distributed lock, you must give the lock a TTL to avoid deadlock if a client crashes while holding it, but SETNX followed by EXPIRE is not atomic under high concurrency; to use that pair safely you would have to add explicit locking in application code. Using SET with the NX and EX options instead performs SETNX + EXPIRE as a single atomic operation, so you never have to worry about SETNX succeeding while EXPIRE fails.
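The semantics of SET key value NX EX can be modeled with a tiny fake client (a sketch with an invented FakeRedis class, not the redis-py API; the key name and TTL are arbitrary examples):

```python
import time
import uuid

class FakeRedis:
    """Tiny stand-in for a Redis client, just enough to model SET ... NX EX:
    set-if-absent and TTL assignment happen in one atomic step."""
    def __init__(self):
        self.store = {}  # key -> (value, expiry timestamp or None)

    def set(self, key, value, nx=False, ex=None, now=None):
        now = time.time() if now is None else now
        current = self.store.get(key)
        if current and current[1] is not None and current[1] <= now:
            del self.store[key]           # expired -> treat as absent
            current = None
        if nx and current is not None:
            return False                  # key exists: NX refuses to overwrite
        self.store[key] = (value, now + ex if ex else None)
        return True

r = FakeRedis()
token = str(uuid.uuid4())                 # unique token identifies the lock owner
print(r.set("lock:order", token, nx=True, ex=10, now=0.0))    # -> True  (acquired)
print(r.set("lock:order", "other", nx=True, ex=10, now=5.0))  # -> False (still held)
print(r.set("lock:order", "other", nx=True, ex=10, now=20.0)) # -> True  (TTL expired)
```

Because value and TTL land together, a crash between "acquire" and "set TTL" is impossible, which is exactly the hole in the SETNX + EXPIRE pair.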
How Redis implements LRU:
A traditional LRU is stack-like: each access moves the entry to the top of the stack. But under bulk reads (something like a select *), that shape lets a flood of non-hot data crowd the top, so Redis improves on it. Each time Redis reads a value by key, it updates the lru field in the value to the current timestamp (second resolution). Redis's initial implementation was very simple: randomly sample five keys from the dict and evict the one with the smallest lru value. Version 3.0 improved the algorithm: sampled keys are placed into a pool (of size 16), kept ordered by lru. A newly sampled key is added only while its lru is smaller than the smallest lru currently in the pool, until the pool fills; once the pool is full, adding a new key pushes out the entry with the largest lru. To evict, Redis takes the key with the smallest lru value from the pool.
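A simplified sketch of this sample-and-pool eviction (a toy model, not the Redis source; the pool admission rule is reduced to "merge samples, keep the pool sorted and capped"):

```python
import random

POOL_SIZE = 16  # eviction pool size, as described above

def evict_one(lru_of, pool, sample_size=5):
    """Approximate-LRU sketch: sample a few keys, merge them into a small
    pool ordered by lru timestamp, and evict the pool entry with the
    smallest lru (i.e. the least recently used)."""
    candidates = random.sample(list(lru_of), min(sample_size, len(lru_of)))
    for key in candidates:
        if key not in pool:
            pool.append(key)
    pool.sort(key=lambda k: lru_of[k])  # best eviction candidates (oldest) first
    del pool[POOL_SIZE:]                # a full pool drops the largest-lru entries
    victim = pool.pop(0)                # smallest lru timestamp in the pool
    del lru_of[victim]
    return victim

lru_of = {"a": 1, "b": 5, "c": 3}       # key -> last-access timestamp (seconds)
pool = []
print(evict_one(lru_of, pool, sample_size=3))  # -> a ("a" has the oldest access)
```

Sampling trades exactness for O(1) memory: no global access-ordered list is maintained, yet with a pool the evicted key is usually close to the true LRU victim.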
How to find hot keys in Redis
Estimate from experience: for example, if you know in advance that a promotion is launching, you can treat its key as a hot key.
Collect on the application side: add a line of statistics code before each Redis operation.
Capture packets: Redis talks to clients over TCP using the RESP protocol, so you can also write your own program that listens on the port and parses the captured packets.
Collect at the proxy layer: have the proxy record and report every Redis request.
Redis's built-in command: Redis 4.0.4 added redis-cli --hotkeys for finding hot keys. (To use it, first set the memory eviction policy to allkeys-lfu or volatile-lfu, otherwise it returns an error: in redis-cli, run config set maxmemory-policy allkeys-lfu.)
Redis hot key solutions
Server-side cache: cache the hot data in the application server's own memory. (Use the message notification mechanism provided by Redis to keep the server's copy consistent with the hot key in Redis: the client subscribes to the hot key, and whenever the key is updated, the server updates its local copy accordingly.)
Back up the hot key: append a random number to the hot key and distribute the copies across other Redis nodes. That way, reads of the hot key are spread over several machines instead of all landing on a single node.
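The "hot key + random suffix" trick is simple enough to sketch directly (the key names and replica count are invented for illustration; in practice you would make sure the suffixed keys hash to different cluster nodes):

```python
import random

N_REPLICAS = 3  # assumed number of backup copies of the hot key

def replica_keys(hot_key, n=N_REPLICAS):
    # Write path: store the same value under several suffixed keys,
    # ideally landing on different Redis nodes.
    return [f"{hot_key}:{i}" for i in range(n)]

def pick_key_for_read(hot_key, n=N_REPLICAS):
    # Read path: pick one replica at random, spreading load across nodes.
    return f"{hot_key}:{random.randrange(n)}"

print(replica_keys("hot:item42"))
# pick_key_for_read("hot:item42") returns one of the keys above at random
```

The cost is write amplification (every update must touch all replicas), which is why this is reserved for genuinely hot keys.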
How to solve the Redis cache avalanche problem
Use a Redis high-availability architecture: run a Redis cluster to ensure the Redis service does not go down.
Stagger cache expiration: add a random offset to each cache TTL to avoid keys all expiring at once.
Rate limiting and degradation: degrade gracefully, e.g. if the personalized recommendation service is unavailable, fall back to a hot-data recommendation service.
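The TTL-jitter idea from the list above fits in a few lines (the base TTL and jitter window are arbitrary example values, not recommendations):

```python
import random

def jittered_ttl(base_seconds, jitter_seconds=300):
    """Add a random offset to each key's TTL so that a batch of keys written
    at the same moment does not expire at the same instant. The base and
    jitter values here are assumptions; tune them to your workload."""
    return base_seconds + random.randint(0, jitter_seconds)

ttl = jittered_ttl(3600)
print(ttl)  # some value in [3600, 3900]
```

With jitter, expirations for a batch of keys are spread over the jitter window instead of triggering a simultaneous stampede to the database.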
How to solve the Redis cache penetration problem
Validate at the API layer: reject obviously invalid keys before they reach the cache.
Cache null values. (For the related cache-breakdown problem, use a mutex/latch, or set hot keys to never expire.)
Bloom filter interception: map every possible query key into a Bloom filter first; on each query, check whether the key exists in the filter before going further, and return immediately if it does not. A Bloom filter records each value across multiple hash-addressed bits. When the filter says an element is present, it may be a false positive; when it says an element is absent, the element is definitely absent.
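A minimal Bloom filter sketch, assuming k independent hash positions derived from SHA-256 over an m-bit array (parameters chosen arbitrarily for illustration):

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash positions per element over an m-bit
    array. 'Present' may be a false positive; 'absent' is definite."""
    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = 0  # a Python int used as a bit array

    def _positions(self, item):
        # Derive k positions by salting the item with the hash index.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def might_contain(self, item):
        # All k bits set -> "maybe present"; any bit clear -> "definitely absent".
        return all(self.bits >> pos & 1 for pos in self._positions(item))

bf = BloomFilter()
bf.add("user:1001")
print(bf.might_contain("user:1001"))  # -> True
print(bf.might_contain("user:9999"))  # -> False (almost certainly)
```

For cache penetration, the filter sits in front of the cache: a key the filter rejects never reaches Redis or the database at all.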
Redis persistence mechanisms
Redis keeps data in memory for speed, but periodically writes updated data to disk, or appends modification commands to a log file, to make the data durable. Redis has two persistence strategies:
RDB: snapshotting, which saves the in-memory data directly to a dump file at intervals defined by the save policy. When Redis needs to persist, it forks a child process that writes the data to a temporary RDB file on disk; when the child finishes writing, the temporary file replaces the original RDB.
AOF: stores every command that modifies the Redis server in a file, i.e. a log of commands.
With AOF persistence, each write command is appended to appendonly.aof via the write function. AOF's default policy is to fsync once per second, so at most one second of data is lost in an outage. The drawbacks: for the same dataset, AOF files are usually larger than RDB files, and depending on the fsync policy used, AOF may be slower than RDB. RDB snapshotting is Redis's default persistence method. For master-slave replication, a full synchronization (RDB) is performed when the slave first connects; after the full sync completes, incremental synchronization (propagation of write commands) takes over.
Redis transactions
A Redis transaction is essentially a collection of commands. Transactions support submitting multiple commands at once, and all commands in a transaction are serialized: during execution, the queued commands run in order, and command requests from other clients are never interleaved into the transaction's command sequence. In short, a Redis transaction is the one-shot, ordered, exclusive execution of a queue of commands.
Redis transactions have no concept of isolation levels. Before the EXEC command is issued, the batched operations sit in a queue and are not actually executed. As a result, a query inside the transaction cannot see the transaction's own updates, and queries outside the transaction cannot see them either.
In Redis, a single command executes atomically, but transactions are not guaranteed to be atomic and there is no rollback: if any command in the transaction fails, the remaining commands are still executed.
Redis transaction-related commands
watch key1 key2 ...: monitor one or more keys; if a monitored key is changed by another command before the transaction executes, the transaction is aborted (similar to optimistic locking)
multi: marks the start of a transaction block (subsequent commands are queued)
exec: executes all commands in the transaction block (once exec runs, any watches set earlier are cancelled)
discard: cancels the transaction, discarding all commands in the transaction block
unwatch: cancels the watch on all keys
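The WATCH/MULTI/EXEC flow can be modeled with a toy transaction object (a sketch of the semantics only, with invented names; it is not the Redis wire protocol or a real client):

```python
class MiniTx:
    """Toy model of MULTI/EXEC with WATCH-based optimistic locking."""
    def __init__(self, store):
        self.store = store
        self.queue = []
        self.watched = {}  # key -> value snapshot taken at WATCH time

    def watch(self, key):
        self.watched[key] = self.store.get(key)

    def multi(self):
        self.queue = []

    def queue_set(self, key, value):
        self.queue.append((key, value))  # queued, not executed yet

    def exec(self):
        # Abort if any watched key changed since WATCH (return None, like Redis nil).
        for key, snapshot in self.watched.items():
            if self.store.get(key) != snapshot:
                self.queue, self.watched = [], {}
                return None
        for key, value in self.queue:    # run all queued commands in order
            self.store[key] = value
        n = len(self.queue)
        self.queue, self.watched = [], {}
        return n

store = {"balance": 100}
tx = MiniTx(store)
tx.watch("balance")
tx.multi()
tx.queue_set("balance", 90)
store["balance"] = 80        # another client modifies the watched key
print(tx.exec())             # -> None (transaction aborted)
print(store["balance"])      # -> 80 (the queued write was never applied)
```

This is why WATCH is described as optimistic locking: nothing blocks, but a conflicting concurrent write causes EXEC to return nil so the client can retry.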
Differences between Redis and Memcached
Storage: Memcached keeps all data in memory, so everything is lost on power failure and the dataset cannot exceed memory size. Redis can keep part of its data on disk, which gives the data durability.
Supported data types: Memcached's support for data types is simple, only plain key-value strings, while Redis supports five data types.
Different underlying models: they differ in their underlying implementations and in the application protocols used to communicate with clients. Redis built its own VM mechanism directly, because ordinary system calls waste time moving data and invoking OS functions.
Value size: a Redis value can be up to 512 MB, while Memcached is limited to 1 MB.
Redis cluster modes
Master-slave replication
Sentinel mode
Cluster mode
Redis Sentinel mode
Sentinel is a distributed system: on top of a master-slave replication architecture you run multiple sentinel processes. These processes use a gossip protocol to exchange information about whether a master is offline, and a voting protocol to decide whether to perform an automatic failover and which slave to promote as the new master.
Each sentinel periodically sends messages to the other sentinels, the master, and the slaves to confirm they are alive. If a node does not respond within a configurable timeout, the sentinel provisionally considers it dead (a "subjective down").
If a majority of the sentinels in the group report that a master is unresponsive, the system considers the master "objectively down" (i.e. genuinely offline), selects one of the remaining slave nodes to promote to master through a voting algorithm, and then automatically updates the relevant configuration.
Redis rehash
Redis does not perform a rehash in one centralized pass; it spreads the work across many incremental steps, and maintains an index counter variable, rehashidx, to track the rehash's progress.
This progressive rehash avoids the burst of computation and memory traffic that a centralized rehash would cause, but note that while Redis is rehashing, a normal access request may need to consult both hash tables (ht[0] and ht[1]). For example, if a key has already been rehashed into the new ht[1], a lookup checks ht[0] first and, not finding it there, goes on to search ht[1].
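The two-table lookup during a progressive rehash reduces to a few lines (a toy model using plain dicts; not the dict.c implementation):

```python
def rehash_lookup(ht0, ht1, rehashing, key):
    """During a progressive rehash a key may live in either table:
    check ht[0] first, then fall back to ht[1] if rehashing is underway."""
    if key in ht0:
        return ht0[key]
    if rehashing and key in ht1:
        return ht1[key]
    return None

ht0 = {"a": 1}   # entries not yet migrated out of the old table
ht1 = {"b": 2}   # entries already rehashed into the new table
print(rehash_lookup(ht0, ht1, True, "a"))  # -> 1
print(rehash_lookup(ht0, ht1, True, "b"))  # -> 2
```

Writes during a rehash go only to ht[1], which is what guarantees ht[0] eventually drains and the rehash terminates.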
Conditions for expanding the Redis hash table
The hash table expands when the number of stored keys exceeds the hash table size, governed by the load factor:
When the server is not currently executing the BGSAVE command (RDB) or the BGREWRITEAOF command, and the load factor of the hash table is greater than or equal to 1.
When the server is currently executing the BGSAVE command (RDB) or the BGREWRITEAOF command, and the load factor of the hash table is greater than or equal to 5.
(Load factor = number of nodes stored in the hash table / hash table size. The hash table shrinks when its load factor falls below 0.1.)
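The expansion and shrink rules above amount to two small predicates (a direct transcription of the stated thresholds, with invented function names):

```python
def should_expand(used, size, bgsave_running):
    """Expansion rule described above: load factor = used / size.
    The threshold is 1 normally, and 5 while BGSAVE/BGREWRITEAOF is running
    (forking is expensive, so expansion is deferred during those jobs)."""
    load = used / size
    return load >= (5 if bgsave_running else 1)

def should_shrink(used, size):
    # Shrink when the table is mostly empty (load factor below 0.1).
    return used / size < 0.1

print(should_expand(4, 4, bgsave_running=False))  # -> True  (load factor 1.0)
print(should_expand(4, 4, bgsave_running=True))   # -> False (threshold is 5)
print(should_shrink(3, 64))                       # -> True  (load factor ~0.047)
```

The higher threshold during BGSAVE exists because the forked child shares pages copy-on-write with the parent; a rehash would touch most pages and defeat the sharing.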
Solutions for concurrent competition on a Redis key
Distributed lock + timestamp
Use a message queue
Redis pipelining
For single-threaded, blocking Redis, a pipeline satisfies the need for batch operations: the client sends multiple commands to the Redis server back-to-back, then parses the responses one by one. Pipelining improves batch performance mainly by reducing the number of "round trips" over the TCP connection. Under the hood, the client buffers all the queued operations into its output stream; a sync()/flush operation then sends them in one shot, keeps each request in a queue, and parses the corresponding response packets as they come back.
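The round-trip saving is easy to show with a toy client that merely counts network exchanges (invented CountingClient class, not redis-py; real pipelines also batch the response parsing):

```python
class CountingClient:
    """Toy client that counts network round trips, to show why pipelining helps."""
    def __init__(self):
        self.round_trips = 0
        self.store = {}

    def set(self, key, value):
        self.round_trips += 1       # one request/response cycle per command
        self.store[key] = value

    def pipeline_set(self, pairs):
        self.round_trips += 1       # all commands share a single round trip
        for key, value in pairs:
            self.store[key] = value

naive, piped = CountingClient(), CountingClient()
for i in range(100):
    naive.set(f"k{i}", i)
piped.pipeline_set([(f"k{i}", i) for i in range(100)])
print(naive.round_trips, piped.round_trips)  # -> 100 1
```

With a 1 ms network latency, 100 unpipelined commands pay roughly 100 ms of pure waiting; the pipelined batch pays about 1 ms.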
Redis and MySQL double-write consistency
Update the database first, then delete the cache. Database reads are much faster than writes, so dirty data is unlikely to appear; to be safe, you can add a delayed asynchronous delete, ensuring the cache deletion runs again after any in-flight read requests have completed.
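The "update DB, delete cache, then delete again after a delay" pattern can be sketched as follows (a toy model with plain dicts standing in for MySQL and Redis; the delay value is an assumption to be tuned to your read latency):

```python
import threading
import time

db = {"item:1": "v1"}     # stand-in for MySQL
cache = {"item:1": "v1"}  # stand-in for Redis

def update_with_delayed_delete(key, value, delay=0.05):
    """'Update database first, then delete cache', plus a delayed second
    delete so a read racing with the update cannot leave stale data behind."""
    db[key] = value             # 1. write the database
    cache.pop(key, None)        # 2. delete the cache entry
    def delayed_delete():
        time.sleep(delay)       # 3. wait for in-flight reads to finish...
        cache.pop(key, None)    # 4. ...then delete again (async double delete)
    t = threading.Thread(target=delayed_delete)
    t.start()
    return t

t = update_with_delayed_delete("item:1", "v2")
cache["item:1"] = "v1"          # a racing read repopulates the stale value
t.join()
print(cache.get("item:1"))      # -> None (stale entry removed by second delete)
print(db["item:1"])             # -> v2
```

The second delete is what closes the window in which a slow read, started before the update, writes the old value back into the cache after the first delete.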