2025-04-04 Update From: SLTechnology News&Howtos
8.2 Memory management
Redis manages memory mainly through two mechanisms: a memory limit and the recovery (eviction) strategies that enforce it. This section introduces how Redis manages memory from these two angles.
8.2.1 Memory limit
Redis uses the maxmemory parameter to limit the maximum available memory. The main purposes of limiting memory are:
In caching scenarios, when memory usage exceeds the maxmemory limit, eviction policies such as LRU are used to free space.
Prevent the memory used from exceeding the server's physical memory.
Note that maxmemory limits the memory Redis actually uses, that is, the memory reported by the used_memory statistic. Because of memory fragmentation, the process's real memory consumption can exceed maxmemory, so watch out for this kind of overrun in practice.
The memory limit also makes it convenient to control several Redis processes deployed on one server. For example, a server with 24GB of memory can reserve 4GB for the system and 4GB of headroom for other processes and Redis fork children, leaving 16GB for Redis; four Redis processes with maxmemory=4GB can then be deployed. Thanks to Redis's single-threaded architecture and the memory limit mechanism, CPU and memory isolation between Redis processes can be achieved even without virtualization, as shown in figure 8-2.
Figure 8-2: a server hosting four 4GB Redis processes
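The sizing arithmetic above can be sketched as a small helper. This is only an illustration of the reservation math from the text; the function name and parameters are my own, not part of Redis:

```python
def plan_instances(total_gb, system_reserve_gb, process_reserve_gb, per_instance_gb):
    """Return how many Redis instances of `per_instance_gb` maxmemory fit
    after reserving memory for the OS and for fork/other processes."""
    available = total_gb - system_reserve_gb - process_reserve_gb
    if available <= 0 or per_instance_gb <= 0:
        return 0
    return available // per_instance_gb

# The 24GB example from the text: 4GB for the system, 4GB of headroom,
# leaving 16GB for Redis
print(plan_instances(24, 4, 4, 4))  # 4 instances with maxmemory=4GB each
```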
8.2.2 Dynamically adjusting the memory limit
Redis's memory limit can be modified at runtime with config set maxmemory {bytes}. Continuing the example above, suppose the estimate for Redis-1 turned out to be too generous (it actually uses less than 2GB), while Redis-2 needs to grow to 6GB. The following commands make the adjustment:
Redis-1> config set maxmemory 2GB
Redis-2> config set maxmemory 6GB
By dynamically modifying maxmemory, Redis memory can be scaled up and down within the current server, as shown in figure 8-3.
Figure 8-3: adjusting maxmemory to scale memory between Redis processes
This example is idealized. If Redis-3 and Redis-4 also needed to grow to 6GB, the total would exceed the server's physical memory, and simply raising maxmemory would no longer work. In that case capacity must be expanded by migrating data online or by switching servers via replication; see the cluster and sentinel chapters for details.
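The budget check behind this caveat can be sketched as follows. The helper and its names are illustrative only, showing why resizing one more instance to 6GB breaks the 16GB budget:

```python
def can_resize(current_limits_gb, instance, new_limit_gb, budget_gb):
    """Check whether changing one instance's maxmemory keeps the total
    within the memory budget reserved for Redis on this server."""
    proposed = dict(current_limits_gb, **{instance: new_limit_gb})
    return sum(proposed.values()) <= budget_gb

# After shrinking Redis-1 to 2GB, Redis-2 can grow: 2+6+4+4 = 16GB
limits = {"Redis-1": 2, "Redis-2": 4, "Redis-3": 4, "Redis-4": 4}
print(can_resize(limits, "Redis-2", 6, 16))  # True

# But once Redis-2 holds 6GB, also growing Redis-3 would need 18GB
after = {"Redis-1": 2, "Redis-2": 6, "Redis-3": 4, "Redis-4": 4}
print(can_resize(after, "Redis-3", 6, 16))   # False
```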
Operation and maintenance tips:
1. By default Redis has no memory limit and will use server memory without bound. To prevent the system from running out of memory in extreme cases, it is recommended to configure maxmemory for every Redis process.
2. As long as physical memory allows, the maxmemory of each Redis process on a server can be adjusted to freely scale its maximum available memory.
8.2.3 Memory recovery policy
The memory recovery mechanism of Redis works in two ways:
Deleting key objects that have reached their expiration time
Triggering the memory overflow control strategy when memory usage reaches maxmemory
1: Deleting expired key objects
Every key in Redis can be given an expiration attribute, which is stored internally in the expires dictionary. Since a process may hold a huge number of keys, deleting each one exactly at its expiration time would consume a large amount of CPU, which is too expensive for single-threaded Redis. Redis therefore reclaims expired memory with a combination of lazy deletion and a scheduled deletion task.
1) Lazy deletion:
Lazy deletion takes effect when a client reads a key that has a timeout attribute: if the key's expiration time has already passed, Redis deletes it and returns nil. This strategy saves CPU because no separate TTL list has to be maintained just to delete expired keys. Its drawback is a memory-leak-like problem: a key that has expired but is never accessed again is not deleted, so its memory is not released in time. For this reason Redis also provides a scheduled deletion task as a complement to lazy deletion.
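The idea of delete-on-access can be illustrated with a minimal store (this is a sketch of the concept, not Redis internals):

```python
import time

class LazyExpiryStore:
    """Minimal sketch of lazy expired-key deletion: a key's expiry is
    only checked (and the key only removed) when the key is read."""
    def __init__(self):
        self.data = {}     # key -> value
        self.expires = {}  # key -> absolute expiry timestamp

    def set(self, key, value, ttl=None):
        self.data[key] = value
        if ttl is not None:
            self.expires[key] = time.time() + ttl

    def get(self, key):
        exp = self.expires.get(key)
        if exp is not None and time.time() > exp:
            # Expired: delete on access and answer as if the key were gone
            del self.data[key]
            del self.expires[key]
            return None
        return self.data.get(key)

store = LazyExpiryStore()
store.set("session", "abc", ttl=0.05)
print(store.get("session"))   # "abc" while the key is still alive
time.sleep(0.1)
print(store.get("session"))   # None: deleted lazily on this read
```

Note the drawback described above is visible here: if get() is never called again, the expired entry would sit in self.data forever, which is exactly why Redis pairs this with a scheduled task.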
2) Scheduled task deletion:
Redis maintains an internal scheduled task that runs 10 times per second by default (controlled by the hz parameter). The expired-key deletion logic in this task uses an adaptive algorithm that reclaims keys in a fast mode or a slow mode depending on the proportion of expired keys, as shown in figure 8-4:
Figure 8-4: expired-key deletion logic of the scheduled task
Process description:
In each database, the scheduled task randomly samples 20 keys that have an expiration set and deletes those found to have expired.
If more than 25% of the sampled keys had expired, the recycling logic loops until the expired proportion drops below 25% or the task times out; in slow mode the timeout is 25 milliseconds.
If the previous run of the recycling logic timed out, the expired-key task runs again in fast mode before Redis's internal events are triggered; in fast mode the timeout is 1 millisecond, and a fast run happens at most once every 2 seconds.
The deletion logic is the same in both modes; only the execution timeout differs.
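The steps above can be sketched as a single cycle. This is an illustration of the sample-and-repeat idea, not Redis source; the function signature and parameter names are my own:

```python
import random
import time

def expire_cycle(data, expires, now, sample_size=20, stop_ratio=0.25,
                 timeout_ms=25):
    """Sketch of the adaptive expired-key cycle: sample keys that have
    an expiry, delete the expired ones, and repeat only while more than
    25% of the sample was expired and the time budget is not exhausted."""
    deadline = time.monotonic() + timeout_ms / 1000.0
    total_deleted = 0
    while True:
        keys = list(expires)
        if not keys:
            break
        sample = random.sample(keys, min(sample_size, len(keys)))
        expired = [k for k in sample if expires[k] <= now]
        for k in expired:
            del data[k]
            del expires[k]
        total_deleted += len(expired)
        # Stop when expired keys fall to 25% of the sample or less,
        # or when the mode's time budget runs out
        if len(expired) <= stop_ratio * len(sample) or time.monotonic() > deadline:
            break
    return total_deleted

# With every key already expired, the loop keeps sampling until all are gone
data = {f"k{i}": i for i in range(100)}
expires = {k: 0.0 for k in data}
expire_cycle(data, expires, now=1.0)
print(len(data))  # 0
```

The fast mode in the text corresponds to calling the same logic with timeout_ms=1; only the budget changes, matching the observation that the deletion logic itself is identical in both modes.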
Development Tips:
Redis controls its internal timer frequency through the hz parameter. The timer drives a lot of logic: closing timed-out connections, deleting expired keys, AOF write frequency, cluster heartbeat communication, and so on. Because changing it has wide-ranging effects, increasing hz just to speed up expired-key recovery is not recommended.
Avoid giving a large number of keys the same expiration time; otherwise more than 25% of the keys can easily expire at the same moment, which keeps the cyclic deletion logic running and slows down Redis's responses.
2: Memory overflow control strategies
When the memory used by Redis reaches the maxmemory limit, the configured overflow control policy is triggered. The policy is set by the maxmemory-policy parameter; Redis supports the following six policies:
noeviction: the default policy. No data is deleted; all write commands are rejected with the error "(error) OOM command not allowed when used memory", and Redis continues to serve only read operations.
volatile-lru: delete keys that have a timeout attribute (expire) set, according to the LRU algorithm, until enough space is freed. If there are no such keys to delete, fall back to the noeviction policy.
allkeys-lru: delete keys according to the LRU algorithm regardless of whether they have a timeout attribute, until enough space is freed.
allkeys-random: randomly delete keys until enough space is freed.
volatile-random: randomly delete keys that have a timeout attribute set, until enough space is freed.
volatile-ttl: delete keys with the soonest expiration time first, based on their TTL attribute. If there are none, fall back to the noeviction policy.
The memory overflow control policy can be changed dynamically with config set maxmemory-policy {policy}. This rich set of policies can be tailored to actual needs. For example, with the volatile-lru policy, keys that have an expiration attribute are evicted by LRU while keys without a timeout are kept permanently; the allkeys-lru policy turns Redis into a pure cache server. When Redis evicts keys because of a memory overflow, the number of keys evicted so far can be read from the evicted_keys metric of the info stats command.
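To make the allkeys-lru behavior concrete, here is a toy cache that evicts the least-recently-used entry when it exceeds its limit. Note this is only an illustration of the policy's effect: real Redis uses an approximate, sampling-based LRU rather than an exact recency list, and it counts bytes (maxmemory) rather than items:

```python
from collections import OrderedDict

class LRUCache:
    """Toy illustration of allkeys-lru eviction (item count stands in
    for maxmemory; Redis itself uses approximate sampling LRU)."""
    def __init__(self, max_items):
        self.max_items = max_items
        self.items = OrderedDict()

    def get(self, key):
        if key not in self.items:
            return None
        self.items.move_to_end(key)  # mark as most recently used
        return self.items[key]

    def set(self, key, value):
        self.items[key] = value
        self.items.move_to_end(key)
        while len(self.items) > self.max_items:
            # Evict the least recently used key, like allkeys-lru
            self.items.popitem(last=False)

cache = LRUCache(2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")            # touch "a", so "b" becomes the LRU victim
cache.set("c", 3)
print(list(cache.items))  # ['a', 'c']
```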
Whenever the maxmemory parameter is set, Redis attempts to reclaim memory each time it executes a command. If Redis keeps working in an overflowed state (used_memory > maxmemory) with a non-noeviction policy, memory reclamation is triggered constantly, which hurts server performance. Pseudocode for the reclamation logic:
def freeMemoryIfNeeded():
    int mem_used, mem_tofree, mem_freed;
    // Compute current memory usage, excluding the slave output buffers
    // and the AOF rewrite buffer
    int slaves = server.slaves;
    mem_used = used_memory() - slave_output_buffer_size(slaves) - aof_rewrite_buffer_size();
    // If current usage is at or below maxmemory, exit
    if (mem_used <= server.maxmemory) return REDIS_OK;
    // If the overflow policy is noeviction, return an error
    if (server.maxmemory_policy == REDIS_MAXMEMORY_NO_EVICTION) return REDIS_ERR;
    // Compute how much memory needs to be freed
    mem_tofree = mem_used - server.maxmemory;
    mem_freed = 0;
    // Delete keys according to maxmemory-policy until enough memory is freed
    while (mem_freed < mem_tofree) {
        // iterate over all database spaces, pick victim keys per the
        // policy, delete them, and add the reclaimed bytes to mem_freed
    }
    return REDIS_OK
For scenarios where Redis memory needs to shrink, reducing maxmemory achieves fast reclamation. For example, set maxmemory=4GB on a process actually using 6GB: on the first command executed afterwards, a non-noeviction policy will reclaim memory down to maxmemory in a single pass, achieving rapid recovery. Note that this operation loses data and is generally only used in caching scenarios.