This article shares the high-frequency Redis interview questions of 2021. The editor finds them very practical and shares them here for your reference; I hope you gain something from reading. Without further ado, let's take a look.
Psychological analysis of the interviewer
From the interviewer's point of view, this question examines your understanding of caching, as well as your ability to use caching in business and to improve an architecture. It is deliberately open-ended and gives you the chance to steer the conversation toward the knowledge points you know best, so seize that opportunity to make a good impression. Handled well, this one question can sustain an in-depth conversation of an hour or so, and in a first-round interview it can basically win the round by itself. [Related recommendation: Redis video tutorial]
But do not talk the topic to death right at the start. If the conversation stays too shallow, you will basically be sent home to "wait for notice."
For example, the answer is as follows:
Many people answer this way, and the answer is not wrong! But it always leaves the feeling that a free opportunity was wasted.
At this point, the interviewer is thinking something like this:
Fairly basic; probably no deep understanding of Redis.
Thinking stays on the surface; probably knows how to use it in daily work but has never thought it through.
I gave the chance to speak freely and it wasn't taken, so it seems I still have to drive the interview myself: a few questions on distributed Redis and persistence to gauge the level. If that doesn't work, that's it; there are plenty more candidates waiting.
If you do not want the interviewer to cut you down, take the initiative to arouse the interviewer's interest and expand the scope yourself (both breadth and depth); say as much as you can.
For example, the answer is as follows:
Summary of Redis interview questions
My understanding of this question is roughly as follows, interviewer!
High performance: a key measure of high performance is fast response time. Redis stores data in memory, which the CPU can access quickly; on top of that, the extreme optimization of its data structures, its internal threading model, and its network I/O model make Redis a high-performance storage database. The only drawback is that memory is expensive and usually limited, so a caching architecture for very large datasets must be designed carefully; we usually do not store very large values in Redis, because that degrades its performance.
High concurrency: high concurrency is usually measured by response time, throughput, queries per second (QPS), and the number of concurrent users. Although Redis uses a single-process, single-threaded model, it is a formidable weapon in high-concurrency business scenarios: Redis can currently reach 100,000 QPS, or in some setups even 1,000,000, which is absolutely high concurrency.
High availability: Redis's high availability rests mainly on master-slave replication, Sentinel mode, and Cluster mode.
My understanding of this question is roughly as follows, interviewer!
"Single-threaded Redis" means that commands are executed on a single thread; since Redis 6.x, network reads/writes and protocol parsing can be handled by multiple threads. The main reasons single-threaded Redis is so fast are as follows (a minimal event-loop sketch follows the list):
Handling client socket connections with a non-blocking I/O multiplexing model greatly improves the Redis server's response speed and efficiency. I/O multiplexing lets a single thread handle many connection requests efficiently and minimizes the time spent on network I/O.
Memory-based storage reduces disk I/O, so reads are fast.
Redis optimizes its internal data structures heavily, and it chooses different encodings depending on the amount and shape of the data, which involves Redis's underlying encoding conversions. For example, small list, hash, and zset keys use the ziplist (compressed list) encoding, a compact structure: when a key holds few elements it is stored as a ziplist first, and once the element count exceeds a threshold (configurable in redis.conf) the ziplist is converted to the standard structure. In addition, SDS's memory pre-allocation, the progressive rehash of hash tables, and ZSet's skip-list-based ordering are all data-structure optimizations that make Redis faster.
Commands run on a single thread, so there is no concurrency locking to design and no CPU context-switching cost from multithreading, which makes command execution faster.
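To make the I/O multiplexing point concrete, here is a minimal, illustrative single-threaded event loop in Python using the standard selectors module. It is not Redis's actual implementation (which is C on top of epoll/kqueue), and the port and reply here are made up:

```python
# A sketch of I/O multiplexing: one thread watches many sockets and only
# touches the ones that are ready, so no connection blocks the others.
import selectors
import socket

sel = selectors.DefaultSelector()

def accept(server_sock):
    conn, _ = server_sock.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, handle)

def handle(conn):
    data = conn.recv(4096)           # the socket is ready, so this won't block
    if data:
        conn.sendall(b"+OK\r\n")     # toy reply, not real RESP handling
    else:
        sel.unregister(conn)
        conn.close()

server = socket.create_server(("127.0.0.1", 7379))
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, accept)

while True:                          # the single-threaded event loop
    for key, _ in sel.select():      # blocks until some socket is ready
        key.data(key.fileobj)        # key.data is the registered callback
```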
My understanding of this question is roughly as follows, interviewer!
Redis has five basic data types: String, List, Hash, Set, and ZSet. In addition, there are three special types: Bitmaps, Geospatial, and HyperLogLog.
String: the simplest and most widely used Redis type. Internally it is a character array: a dynamic string (SDS) that can be modified. Its structural idea resembles ArrayList in Java (which allocates an initial array of 10 by default): redundant pre-allocation of memory to reduce the performance cost of expansion. When a string reaches its expansion threshold it grows, in three cases: 1. below 1 MB, capacity doubles after expansion (length = length * 2); 2. above 1 MB, capacity grows by 1 MB each time (length = length + 1 MB); 3. the maximum string length is 512 MB. Typical uses: caches, counters, distributed locks.
List: Redis's list is comparable to LinkedList in Java: a doubly linked structure (though cleverly designed, as described next) that supports sequential traversal, with fast insertion and deletion (O(1)) and slow lookups (O(n)). It is not a plain LinkedList, however, but a quicklist: a doubly linked list whose nodes are ziplists (compressed lists). Typical uses: linked lists, asynchronous queues, a Weibo follower timeline.
Hash: Redis's hash (dictionary) is comparable to HashMap in Java: an unordered dictionary distributed by hash value, storing elements as key-value pairs. Its implementation matches the structure of Java's HashMap (JDK 1.7): a two-dimensional structure of array + linked lists, where elements are hashed onto the array and collisions are chained into linked lists. Values stored in a Redis hash can only be strings, and expansion also differs from Java: HashMap expands in one shot, while Redis, whose core access path is single-threaded, adopts a progressive rehash strategy in pursuit of performance. Progressive rehash completes over many steps rather than at once, so the old hash structure must be retained; during a rehash the dictionary holds two hash tables, and once every value has been moved to the new table, the new table fully replaces the old one. Typical uses: user profiles, hash tables.
Set: Redis's set is comparable to HashSet in Java: its key-value pairs are unordered and unique. Internally it is a special dictionary in which every value is null. When the last element is removed, the structure is deleted automatically and its memory reclaimed. Typical uses: deduplication, likes/dislikes, mutual friends.
ZSet: the sorted set is the most frequently asked-about Redis structure. It resembles a combination of Java's SortedSet and HashMap: the set part guarantees the uniqueness of each value, while each value's score (weight) provides the ordering, implemented with a skip list. When the last element is removed, the structure is deleted automatically and its memory reclaimed. Typical uses: fan lists, student score rankings, visit rankings, click rankings.
Bitmaps: the bitmap is not strictly a separate data type; underneath it is the byte array of an ordinary key value. You can manipulate the content with plain GET/SET, or treat the byte array as a "bit array" via the GETBIT/SETBIT operations Redis provides. Each cell of the bit array stores only 0 or 1, and the array index is called the offset. If the key does not exist when a bit is set, a new string is created automatically; if the offset lies beyond the current content, the bit array is automatically zero-extended. Typical use: employee sign-in.
Geospatial: geographic location support added in Redis 3.2 (the GEO module). Typical uses: "people nearby" in WeChat, "nearby restaurants" in food-ordering apps.
HyperLogLog: an algorithm for cardinality statistics. It offers an inexact deduplicated count (not wildly inexact: the standard error is 0.81%, acceptable for statistics such as UV). Its advantage is that however large the input, the space used for the count is fixed: each HyperLogLog key costs only 12 KB of memory to count nearly 2^64 distinct elements. But HyperLogLog only tracks the cardinality (the size of the set); unlike a set, it cannot store or return the elements themselves. Typical uses: cardinality statistics such as UV.
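A quick, hedged tour of these types using the redis-py client (assuming a local Redis and `pip install redis`; all key names are illustrative):

```python
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

r.set("page:home:hits", 0)                        # String as a counter
r.incr("page:home:hits")

r.lpush("timeline:alice", "post:1", "post:2")     # List as a timeline
r.hset("user:1000", mapping={"name": "alice", "age": "30"})  # Hash: user profile
r.sadd("likes:post:1", "user:1000")               # Set: dedup / likes
r.zadd("rank:score", {"alice": 95, "bob": 87})    # ZSet: leaderboard

r.setbit("signin:2021-06-01", 1000, 1)            # Bitmap: employee 1000 signed in
r.pfadd("uv:2021-06-01", "ip1", "ip2")            # HyperLogLog: UV counting
print(r.pfcount("uv:2021-06-01"))                 # approximate distinct count
```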
My understanding of this question is roughly as follows, interviewer!
Any Redis key can be given a time to live (TTL) via EXPIRE key seconds. We tend to assume Redis deletes a key automatically the moment it expires, but that is not quite true. Balancing performance against memory, Redis implements a two-part expiration strategy:
Lazy deletion (on access)
Periodic deletion (scheduled sampling)
Lazy deletion means that when a key is accessed, Redis checks whether it has expired and deletes it if so.
Periodic deletion means the Redis server regularly samples keys that have a TTL and deletes the expired ones. Periodic deletion is essential, because some expired keys are never accessed again; if everything relied on lazy deletion, those keys would occupy memory forever.
To keep the service fast, Redis's periodic deletion of expired keys uses a greedy, probabilistic algorithm; by default the cycle runs ten times per second. The strategy, sketched in code after the list, is as follows:
1. Randomly sample 20 keys from the expires dictionary (the set of keys with a TTL) and check whether they have expired.
2. Delete the keys that have expired.
3. If more than 25% of the sampled keys were expired, repeat from step 1.
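An illustrative Python sketch of that loop (the real implementation is activeExpireCycle() in Redis's C source; the function and variable names here are invented):

```python
import random
import time

SAMPLE_SIZE = 20
REPEAT_THRESHOLD = 0.25

def expire_cycle(expires):
    """expires: dict mapping key -> absolute expiry timestamp."""
    while True:
        keys = random.sample(list(expires), min(SAMPLE_SIZE, len(expires)))
        now = time.time()
        expired = [k for k in keys if expires[k] <= now]
        for k in expired:
            del expires[k]                     # step 2: delete expired keys
        if not keys or len(expired) / len(keys) <= REPEAT_THRESHOLD:
            break                              # step 3: stop once <= 25% expired
```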
In addition, when designing a Redis cache architecture, developers must be careful to avoid (and ideally prohibit) giving a large number of keys the same expiration time. If many keys expire at the same moment, the three steps above loop many times during periodic deletion, and the Redis service stutters, which is unacceptable for high-traffic systems.
To avoid this, for keys whose expiration time does not need to be precise, add some randomness to the TTL so that expirations are spread out and the stutter is reduced. For example:
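A small, hedged example of TTL jitter with redis-py (the base TTL and jitter range are arbitrary choices):

```python
import random
import redis

r = redis.Redis()
BASE_TTL = 3600                                  # one hour

def cache_set(key, value):
    jitter = random.randint(0, 300)              # up to 5 extra minutes
    r.set(key, value, ex=BASE_TTL + jitter)      # keys no longer expire together
```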
My understanding of this question is roughly as follows, interviewer!
Our common distributed-lock solutions in distributed scenarios are (bring up the other two here if you can; if you can't, don't bluff!):
Distributed locks based on a database's locking mechanism
Distributed locks based on ZooKeeper
Distributed locks based on Redis
The Redis solution for distributed locks works like this.
If Redis runs as a single instance, we can implement a distributed lock with Redis's atomic command:
SET key value [EX seconds] [PX milliseconds] [NX|XX]
To prevent a lock added by client A from being deleted by client B, give the client a tag when it takes the lock, and allow unlocking only when the tag the client supplies matches the lock's tag. Redis provides no built-in command for this, so we handle it with a Lua script, since Lua scripts execute multiple commands atomically. Finally, consider the lock timeout: a client that never releases the lock would block everyone, so the lock only guarantees that no other client can unlock it within the timeout, after which it is released automatically. A task may therefore outlive its lock; we can only mitigate that, as follows (a code sketch of tagged locking and Lua unlocking appears after this list):
Avoid long-running tasks inside a Redis distributed lock and keep the code within the lock as small as possible, just as you would narrow a synchronized block in a single JVM.
Run plenty of stress tests and online simulations of real scenarios to estimate an appropriate lock timeout.
Prepare data-recovery measures for the case where the task outlives the distributed lock's timeout.
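A minimal sketch of a Redis distributed lock with a per-client tag and an atomic Lua unlock (redis-py assumed; the key and function names are illustrative):

```python
import uuid
import redis

r = redis.Redis()

UNLOCK_LUA = """
if redis.call('get', KEYS[1]) == ARGV[1] then
    return redis.call('del', KEYS[1])
else
    return 0
end
"""
unlock = r.register_script(UNLOCK_LUA)

def acquire(lock_key, ttl=10):
    token = str(uuid.uuid4())                 # tag identifying this client
    # SET key token NX EX ttl: succeeds only if the key does not exist yet
    if r.set(lock_key, token, nx=True, ex=ttl):
        return token
    return None

def release(lock_key, token):
    # Lua makes get + compare + del atomic, so client B can never
    # delete a lock that client A still holds.
    return unlock(keys=[lock_key], args=[token]) == 1
```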
In a distributed deployment there is a further problem: in a Sentinel setup with one master and several slaves, a client may acquire the lock on the master, the master may go down before replication completes, and the lock will not exist on the newly elected master.
Handle this situation as follows. Master-slave synchronization can never fully rule out data loss, so instead of acquiring the lock on one Redis instance, acquire it on several independent Redis instances and treat it as held only when a majority of the acquisitions succeed. This idea is called RedLock (red lock).
RedLock uses multiple Redis instances that are fully independent, with no master-slave relationships. To lock, the client sends the locking command to all nodes; if more than half of the nodes succeed, the lock is acquired. To release, the client sends DEL to all nodes.
Although RedLock solves the master-slave synchronization problem, it brings new, complex problems of its own:
The first problem is clock drift.
The second problem is that the lock is acquired at different times on different Redis servers, so its remaining validity differs across nodes.
Therefore RedLock must compute the lock's minimum effective validity. Suppose the client acquires the lock successfully, the first key is set at time TF, the last key is set at time TL, the lock timeout is TTL, and the clock drift between processes is CLOCK_DIFF. The minimum validity of the lock is:
TIME = TTL - (TL - TF) - CLOCK_DIFF
For example, with TTL = 10 s, an acquisition spread of TL - TF = 1 s, and CLOCK_DIFF = 0.1 s, the lock can be relied on for at most 8.9 s.
Implementing distributed locks on Redis cannot escape availability problems such as server crashes, and that applies to RedLock too: even with locks spread across several servers, we must consider what happens after a node goes down. The official recommendation is to use AOF persistence.
However, AOF only restores cleanly after a normal SHUTDOWN. After a power failure, lock data written between the last persistence and the outage may be lost, and restarting the server may then violate the distributed lock's semantics. To avoid this, the official advice is that after a Redis node restarts, it should stay out of service for at least the maximum client TTL (serving no lock requests in that window). This does solve the problem, but it clearly hurts the Redis server's availability, and if it happens on a majority of nodes the system becomes globally unavailable.
My understanding of this question is roughly as follows, interviewer!
Redis is very fast largely because its data lives in memory, so when the server crashes or loses power, all data would be lost. Redis therefore provides two mechanisms to ensure data is not lost to failures; these are Redis's persistence mechanisms.
There are two persistence mechanisms for Redis:
RDB (Redis Database): memory snapshots
AOF (Append Only File): an incremental log
RDB (Redis Database) writes a point-in-time snapshot of the in-memory dataset to disk at configured intervals. RDB persists a binary serialization of memory, and each snapshot generated from Redis is a full backup of the data.
Advantages:
Compact storage, saving memory space
The recovery is very fast.
Suitable for full backup, full replication scenarios, often used for disaster recovery (where data integrity and consistency requirements are relatively low)
Disadvantages:
Data loss is likely: changes made between two snapshots are lost if the server fails.
RDB takes the full memory snapshot by forking a child process, a heavyweight operation that is costly to run frequently.
Although the forked child shares memory with the parent (copy-on-write), if memory is modified during the backup, usage can grow to as much as double.
RDB triggers fall into two categories, automatic and manual (a sample redis.conf snippet follows the lists):
Automatic triggers:
The save rules configured in redis.conf
A clean SHUTDOWN
FLUSHALL
Manual triggers:
SAVE (blocks the server while it snapshots)
BGSAVE (forks a child to snapshot in the background)
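For reference, the classic redis.conf save rules (these are the long-standing defaults; tune them to your workload):

```
save 900 1      # snapshot if at least 1 key changed within 900 s
save 300 10     # snapshot if at least 10 keys changed within 300 s
save 60 10000   # snapshot if at least 10000 keys changed within 60 s
```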
AOF (Append Only File) records every command that modifies memory (every write) in a standalone log file; on restart, Redis rebuilds the data by replaying the commands in the AOF file. AOF addresses the real-time durability of data and is the mainstream option among Redis persistence mechanisms.
Advantages:
Backups are more complete and the probability of data loss is lower, so AOF suits scenarios with high data-integrity requirements.
The log file is human-readable, which makes AOF easier to work with; a damaged file can even be repaired by editing it.
Disadvantages:
AOF logs keep growing over the long run and are very time-consuming to replay, so they need to be slimmed down periodically (more on this below).
Restoring from an AOF backup is slow.
Frequent synchronous writes put pressure on performance.
The AOF log is an ordinary file. When the program writes to it, the content actually goes into a kernel buffer associated with the file descriptor, and the kernel flushes that buffer to disk asynchronously. If the server goes down before the buffer is flushed, that data is lost.
Redis therefore calls fsync(int fd), provided through glibc on Linux, to force the kernel to flush the file's buffer to disk, ensuring buffered data is not lost. But fsync is a disk I/O operation, extremely slow compared with Redis itself, so it cannot be called on every write.
The Redis configuration file offers three buffer-flushing settings (a sample snippet follows the descriptions):
appendfsync always
fsync on every Redis write. In theory the operating system cannot keep up, because Redis's write rate far exceeds the maximum flush rate the OS can sustain. Even with few writes this setting is very costly, because every write incurs disk I/O, so it is almost never used.
appendfsync everysec
Flush the buffer to the AOF file once per second. This is the default policy in the Redis configuration file, a compromise between performance and data integrity: in theory at most about one second of data can be lost.
appendfsync no
The Redis process never flushes the buffer itself and leaves it entirely to the operating system. Not recommended: the chance of data loss is high.
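A minimal redis.conf snippet enabling AOF with the default flush policy (the directives are real; their placement here is illustrative):

```
appendonly yes          # enable AOF persistence
appendfsync everysec    # default: fsync roughly once per second
```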
As noted among AOF's shortcomings, storing write commands as an ever-growing append-only log accumulates a great deal of redundancy, so the AOF file becomes huge, wasting space and making recovery very slow. Redis therefore provides a rewrite mechanism: after an AOF rewrite, only the minimal command set needed to rebuild the current data is kept. To trigger it manually, use the following command:
BGREWRITEAOF
Since Redis 4.0, the rewrite splices an RDB snapshot together with AOF commands: the head of the AOF file is the binary RDB snapshot, and the tail holds the write commands executed after the snapshot was taken.
Rewriting the AOF file has some impact on Redis's performance, so it does not run unconditionally; Redis exposes two configuration thresholds for automatic AOF rewriting, and a rewrite happens only when both are met:
auto-aof-rewrite-percentage 100: rewrite when the file has grown to twice its size after the last rewrite
auto-aof-rewrite-min-size 64mb: the minimum file size before a rewrite is considered
In addition, since Redis 4.0 most deployments no longer use RDB or AOF alone as the persistence mechanism, but combine the strengths of both.
Finally, comparing the two, which should you use?
It is recommended to enable both.
If you are not sensitive to data loss, RDB alone is fine.
Using AOF alone is not recommended, because bugs may appear in AOF-only recovery.
If Redis is used purely as an in-memory cache, you need neither.
My understanding of this question is roughly as follows, interviewer!
Redis is a key-value database backed by memory, and memory is fast but small. Once physical memory hits its ceiling, the system slows drastically, so we set a maximum memory for Redis; when usage reaches that threshold, memory eviction is triggered. Redis provides several eviction policies (a sample configuration follows the list):
noeviction: once the memory limit is reached, return an error for any command that might use more memory. In short, reads still work, new writes are rejected, and DEL-style requests are still allowed.
allkeys-lru: evict from all keys using the LRU (Least Recently Used) algorithm.
allkeys-random: evict randomly from all keys.
volatile-lru: evict using LRU, but only from keys with an expiration time set, which ensures that persistent data without a TTL is never selected.
volatile-random: evict randomly from keys with an expiration time set.
volatile-ttl: among keys with an expiration time set, compare remaining TTLs and evict the keys with the smallest TTL first.
volatile-lfu: apply the LFU algorithm to keys with an expiration time set.
allkeys-lfu: apply the LFU algorithm to all keys.
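A hedged redis.conf example (the directives are real; the values are arbitrary):

```
maxmemory 2gb
maxmemory-policy allkeys-lru
maxmemory-samples 5      # sample size used by the approximate LRU/LFU algorithms
```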
The more important algorithms among these policies are LRU and LFU. LRU evicts the least recently used keys, but Redis uses an approximate LRU: it does not find the exact least-recently-used key, yet its overall accuracy is good enough.
The approximate LRU algorithm is simple: each Redis key object carries an extra 24 bits storing the system timestamp of the last access. When a write-related command arrives and memory has reached maxmemory, eviction is triggered: the server randomly samples five qualifying keys (under allkeys-lru the sample is drawn from all keys; under volatile-lru, from keys with an expiration time set), compares the last-access timestamps recorded in the key objects, and evicts the oldest of the five. If memory is still insufficient, it repeats this step.
When maxmemory_samples is set to 10, Redis 3.0's approximate LRU comes very close to a true LRU, but it obviously costs more CPU time than a sample size of 5, because every sampled key adds computation.
Redis 3.0's LRU is also more accurate than 2.8's, because 3.0 added an eviction pool the same size as maxmemory_samples: each round's sampled keys are compared with the keys already waiting in the pool, and the oldest key overall is evicted. In effect, candidates are pooled so that the genuinely oldest one goes first. A sketch of sampled eviction:
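An illustrative Python sketch of approximate (sampled) LRU eviction; Redis's real implementation is in C (evict.c) and uses a shared eviction pool, which this sketch omits:

```python
import random

SAMPLES = 5

def evict_one(store, last_access):
    """store: key -> value; last_access: key -> last-access timestamp."""
    if not store:
        return None
    candidates = random.sample(list(store), min(SAMPLES, len(store)))
    victim = min(candidates, key=lambda k: last_access[k])  # oldest access loses
    del store[victim]
    del last_access[victim]
    return victim
```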
LRU has an obvious weakness: it does not truly represent a key's hotness. A key that was never accessed, but happens to be touched a moment before eviction runs, is treated by LRU as a hot key. LFU (Least Frequently Used), introduced in Redis 4.0, evicts keys by comparing their access frequency; the emphasis is on "frequently used."
The difference between LRU and LFU:
LRU: Least Recently Used, based on the time of the most recent access
LFU: Least Frequently Used, based on the key's access frequency
In LFU mode, the 24-bit lru field in the Redis object header is split in two: the high 16 bits store ldt (Last Decrement Time) and the low 8 bits store logc (Logistic Counter). The high 16 bits record the last time the counter was decayed; with only 16 bits available, they store the Unix timestamp in minutes modulo 2^16. The largest value 16 bits can hold is 65535 minutes (65535 / 60 / 24 ≈ 45.5 days), so the field wraps around to 0 roughly every 45.5 days.
The low 8 bits record access frequency, and the largest value 8 bits can hold is 255, so logc clearly cannot store a key's true access count. As the name suggests, it stores a logarithmic measure of the access count. Every newly added key starts with logc = 5 (LFU_INIT_VAL), which ensures that fresh keys are not selected for eviction immediately; logc is updated each time the key is accessed, and it also decays over time.
The Logistic Counter not only grows but also decays, and both rates are configurable in redis.conf (a sketch of the growth rule follows the two settings):
lfu-log-factor adjusts the growth rate of the Logistic Counter; the higher the value, the slower the counter grows.
lfu-decay-time adjusts the decay rate of the Logistic Counter, in minutes; the default is 1, and the higher the value, the slower the decay.
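A Python transliteration of the probabilistic counter increment (modeled on LFULogIncr in Redis's evict.c; treat it as illustrative rather than authoritative):

```python
import random

LFU_INIT_VAL = 5

def lfu_log_incr(counter, lfu_log_factor=10):
    if counter == 255:
        return counter                        # the 8-bit counter saturates at 255
    baseval = max(counter - LFU_INIT_VAL, 0)
    p = 1.0 / (baseval * lfu_log_factor + 1)  # increment gets rarer as counter rises
    if random.random() < p:
        counter += 1
    return counter
```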
My understanding of this question is roughly as follows, interviewer!
Cache breakdown:
A hot key that is being accessed with very high concurrency expires; at that moment, the flood of requests punches through the cache and hits the database directly, bypassing Redis entirely.
Solutions:
If the cached data is essentially fixed, consider setting the hot keys to never expire.
If the cached data is updated rarely and the cache refresh completes quickly, use a distributed mutex based on Redis, ZooKeeper, or other distributed middleware, or a local mutex, so that only a few requests reach the database and rebuild the cache; the remaining threads read the new cache once the lock is released (a sketch follows this list).
If the cached data is updated frequently or the refresh takes a long time, use a scheduled thread to rebuild the cache proactively, or postpone the expiration time before the key expires, so that every request always finds a cache entry.
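A hedged sketch of mutex-protected cache rebuilding for a hot key (redis-py assumed; db_load and the key names are illustrative placeholders):

```python
import time
import redis

r = redis.Redis(decode_responses=True)

def get_with_rebuild(key, db_load, ttl=600):
    value = r.get(key)
    if value is not None:
        return value
    lock_key = f"lock:{key}"
    if r.set(lock_key, "1", nx=True, ex=10):      # only one thread rebuilds
        try:
            value = db_load()                      # hit the database once
            r.set(key, value, ex=ttl)
        finally:
            r.delete(lock_key)
        return value
    time.sleep(0.05)                               # others wait briefly and retry
    return get_with_rebuild(key, db_load, ttl)
```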
Cache penetration:
Requests for data that exist neither in the cache nor in the database, usually hacker attacks; undefended, they can easily kill the database. For example, a hacker queries one of your tables with negative ids, which we normally never use.
Solutions:
If the database lookup misses, cache a null value for the key; this alone cannot stop requests that keep varying the negative id (see the sketch after this list).
Use a Bloom filter: map all keys in the database into the filter, ask the filter whether a key can exist before serving the request, and return immediately if it cannot.
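A hedged sketch of null-value caching against penetration (redis-py assumed; db_query, the sentinel, and the TTLs are illustrative):

```python
import redis

r = redis.Redis(decode_responses=True)
NULL_SENTINEL = "__null__"

def get_user(user_id, db_query):
    key = f"user:{user_id}"
    cached = r.get(key)
    if cached == NULL_SENTINEL:
        return None                       # known-missing row: skip the database
    if cached is not None:
        return cached
    row = db_query(user_id)
    if row is None:
        r.set(key, NULL_SENTINEL, ex=60)  # short TTL so real inserts show up soon
        return None
    r.set(key, row, ex=3600)
    return row
```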
Cache avalanche:
A cache avalanche happens when a large number of cached keys fail at the same time, instantly crushing the database under high concurrency. If the cache is not restored, the overwhelmed database just keeps being hit.
Solutions:
Cache architecture: design Redis for high availability: master-slave with Sentinel, or a Redis Cluster.
Application side: use local caching and service degradation to minimize the requests that reach MySQL.
Operations: monitor the Redis cluster regularly, maintain a persistence and backup mechanism, and restore cached data promptly when an avalanche occurs.
At roughly this point in the answer, a long-absent smile appears on the interviewer's face; land this move, and the interview is basically in hand.
Of course, a few sentences cannot cover these knowledge points completely, so it is worth reading further until you can handle them with ease.
The above are the high-frequency Redis interview questions for 2021. Some of these points may well come up in everyday work; I hope this article taught you something new.