
How to deal with the cache hot key problem in Redis

2025-01-17 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)05/31 Report--

This article explains what the hot key problem in a Redis cache is, how to detect hot keys, and several ways to solve the problem. The content is detailed, the steps are clear, and it should have some reference value. I hope you get something out of it.

Background

What is the hot key problem, and what causes it?

Generally speaking, the Redis cache we use is a multi-node cluster. When reading or writing a key, the corresponding slot is computed from the hash of the key, and that slot maps to a shard (a group consisting of one master and several slaves) that serves the key-value access. In practice, however, for certain services or certain periods (such as a flash-sale promotion in an e-commerce business), a huge number of requests may access the same key. All of these requests (which typically have a very high read-to-write ratio) land on the same Redis server, and the load on that server rises sharply. At this point, adding new Redis instances to the system is useless, because by the hash algorithm requests for the same key still land on the same node; that node remains the system bottleneck and may even bring down the whole cluster. If the value of this hot key is also relatively large, the node's network card can reach its bottleneck as well. This is called the "hot key" problem.
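As a concrete illustration of the slot routing just described, here is a minimal sketch of Redis Cluster's slot calculation (CRC16, XModem variant, modulo 16384). Note that the real implementation also honors "{hash tags}" so related keys can be forced onto the same slot; this sketch omits that detail:

```java
// Minimal sketch of Redis Cluster's slot calculation: CRC16 (XModem) mod 16384.
class SlotCalc {
    // CRC16-CCITT (XModem): polynomial 0x1021, initial value 0x0000, MSB-first.
    public static int crc16(byte[] data) {
        int crc = 0x0000;
        for (byte b : data) {
            crc ^= (b & 0xFF) << 8;
            for (int i = 0; i < 8; i++) {
                crc = ((crc & 0x8000) != 0) ? ((crc << 1) ^ 0x1021) : (crc << 1);
                crc &= 0xFFFF;
            }
        }
        return crc;
    }

    // Every key maps to one of 16384 slots; each shard owns a range of slots.
    public static int slot(String key) {
        return crc16(key.getBytes(java.nio.charset.StandardCharsets.UTF_8)) % 16384;
    }
}
```

Because the mapping is deterministic, every request for the same key lands on the same shard, which is exactly why a hot key concentrates load on one node.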

Figures 1 and 2 below show a normal Redis Cluster deployment and a Redis Cluster accessed through a proxy layer, respectively.

As mentioned above, a hot key puts extremely high load on a small number of nodes in the cluster. If it is not handled properly, those nodes may go down, affecting the operation of the entire cache cluster. We therefore need to detect hot keys promptly and resolve the hot key problem.

1. Hot key detection

Because a Redis cluster spreads keys across nodes and a hot key has a visible impact on a single node, hot key detection can proceed from coarse-grained to fine-grained approaches.

1.1 QPS monitoring for each slot in the cluster

The most visible symptom of a hot key is that, even when the overall QPS of the cluster is not especially high, traffic is unevenly distributed across slots. The first thing we can do, then, is monitor the traffic of each slot, report it, and compare the numbers, which lets us identify the affected slot when a hot slot appears. This kind of monitoring is the easiest to set up, but its granularity is too coarse: it is only suitable as a general cluster-level monitor, not for pinpointing a specific hot key.

1.2 Statistics at the proxy, the single entry point for all traffic

If we use the proxied Redis Cluster mode of figure 2, all requests pass through the proxy before reaching a specific slot node, so hot key detection and statistics can be done in the proxy. In the proxy we can count accesses per key over a sliding time window and report the keys that exceed a threshold. To avoid excessive bookkeeping, we can also restrict the statistics to keys with certain prefixes or types. This approach requires a proxy layer, so it places a requirement on the Redis architecture.
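The per-key counting described above can be sketched as follows. This is a simplified, single-bucket approximation of a sliding window (a real proxy would keep several buckets and evict stale ones); all names here are illustrative:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Simplified hot key detector: counts accesses per key in the current time
// bucket and flags keys whose count reaches a threshold. A production proxy
// would maintain a true sliding window; this sketch keeps only one bucket.
class HotKeyDetector {
    private final long bucketMillis;
    private final long threshold;
    private volatile long bucketStart;
    private final Map<String, LongAdder> counts = new ConcurrentHashMap<>();

    public HotKeyDetector(long bucketMillis, long threshold) {
        this.bucketMillis = bucketMillis;
        this.threshold = threshold;
        this.bucketStart = System.currentTimeMillis();
    }

    /** Record one access; return true if the key is now considered hot. */
    public boolean record(String key) {
        long now = System.currentTimeMillis();
        if (now - bucketStart >= bucketMillis) {
            synchronized (this) {
                if (now - bucketStart >= bucketMillis) {
                    counts.clear();          // start a fresh bucket
                    bucketStart = now;
                }
            }
        }
        LongAdder c = counts.computeIfAbsent(key, k -> new LongAdder());
        c.increment();
        return c.sum() >= threshold;
    }
}
```

The proxy would call record() on every command and report flagged keys; the same structure also fits the client-side probe described in section 1.4.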

1.3 LFU-based hot key discovery in Redis

Redis 4.0 and above support LFU-based hot key discovery on each node: run redis-cli with the --hotkeys option (this requires maxmemory-policy to be set to an LFU policy, such as allkeys-lfu). You can run this command periodically on each node to discover the current hot keys.

As shown below, the output of redis-cli --hotkeys includes the hot key statistics. Because the command scans the keyspace, it can take a long time to execute, so it is best scheduled to run periodically.

1.4 Probing in the Redis client

Since every Redis command is issued from a client, we can add counting code in the Redis client itself. Each client keeps per-key statistics over a sliding time window; when a key exceeds a threshold, it is reported to a server, which then pushes the hot key, with a corresponding expiration time, back to all clients.

This approach looks elegant, but it is not suitable in every scenario, because the client-side changes add memory overhead to the running process. More concretely, in garbage-collected languages such as Java and Go, objects are created more frequently, triggering GC that can increase interface response times in ways that are hard to predict.

In the end, each company can choose among these options based on its own infrastructure.

2. Hot key solutions

Having detected the hot keys or hot slots with the methods above, we need to solve the hot key problem. There are several approaches worth considering; let's go through them one by one.

2.1 Rate limiting for a specific key or slot

The simplest and crudest approach is to rate-limit requests for a specific slot or hot key. This obviously hurts the business, so it is recommended only as a stop-loss measure when a problem is already occurring in production.
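As a sketch of such a stop-loss limiter, here is a minimal token bucket for a single hot key. A production system would more likely use a library such as Guava's RateLimiter; this class and its parameters are purely illustrative:

```java
// Minimal token-bucket rate limiter for one hot key: tokens refill at
// ratePerSec, and each request consumes one token; requests that find no
// token available are rejected.
class TokenBucket {
    private final double capacity;
    private final double ratePerSec;
    private double tokens;
    private long lastRefillNanos;

    public TokenBucket(double capacity, double ratePerSec) {
        this.capacity = capacity;
        this.ratePerSec = ratePerSec;
        this.tokens = capacity;          // start full
        this.lastRefillNanos = System.nanoTime();
    }

    /** Return true if the request is allowed, false if it should be rejected. */
    public synchronized boolean tryAcquire() {
        long now = System.nanoTime();
        // Refill proportionally to the time elapsed, capped at capacity.
        tokens = Math.min(capacity,
                tokens + (now - lastRefillNanos) / 1e9 * ratePerSec);
        lastRefillNanos = now;
        if (tokens >= 1.0) {
            tokens -= 1.0;
            return true;
        }
        return false;
    }
}
```

The service would keep one bucket per hot key (or per hot slot) and serve a fallback response when tryAcquire() returns false.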

2.2 Use a second-level (local) cache

A local cache is also one of the most commonly used solutions: if the first-level cache cannot handle the pressure, we add a second level. Since every request originates from the service, it is most appropriate to add this second-level cache on the service side. Each time the service fetches a hot key, it stores a copy in the local cache and only queries Redis again after the local copy expires, reducing the pressure on the Redis cluster. Taking Java as an example, Guava's LoadingCache is a ready-made tool.

An example (the queryFromRedis helper below is a placeholder for the actual loading logic, which the original left empty):

    // Local cache initialization and construction
    private static LoadingCache<String, List<Object>> configCache = CacheBuilder.newBuilder()
            .concurrencyLevel(8)                     // concurrency level for writes; the number of CPU cores is a reasonable setting
            .expireAfterWrite(10, TimeUnit.SECONDS)  // how long after a write the entry expires
            .initialCapacity(10)                     // initial size of the cache container
            .maximumSize(10)                         // maximum size of the cache container
            .recordStats()
            // build() can take a CacheLoader that loads the value automatically when the key is absent
            .build(new CacheLoader<String, List<Object>>() {
                @Override
                public List<Object> load(String hotKey) throws Exception {
                    return queryFromRedis(hotKey);   // placeholder: fetch from the Redis cluster
                }
            });

    // Local cache lookup
    List<Object> result = configCache.get(key);

The biggest drawback of a local cache is data inconsistency: the cache expiration time we set bounds how long online data can be stale. This expiration time has to balance the pressure on the cluster against the maximum staleness the business can accept.

2.3 Splitting the key

How can we both avoid the hot key problem and keep data as consistent as possible? Splitting the key is also a good solution.

When writing to the cache, we split the business key into several different keys. As shown in the figure below, on the cache-update side we first split the key into N copies: for a key named "good_100", for example, we can split it into "good_100_copy1", "good_100_copy2", "good_100_copy3" and "good_100_copy4". Every update or insert must then change all N keys. This step is the key splitting.

On the service side, we need to make the read traffic as uniform as possible by deciding which suffix to append to the hot key about to be accessed. There are a few options: hash the local IP or MAC address and take it modulo the number of split copies, so each machine always reads the same suffixed key; or draw a random number (for example at service startup) and take it modulo the number of copies.
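The write-all-copies / read-one-copy idea can be sketched as follows. The redisSet/redisGet callbacks stand in for a real Redis client and are purely illustrative:

```java
import java.util.concurrent.ThreadLocalRandom;
import java.util.function.BiConsumer;
import java.util.function.Function;

// Sketch of key splitting: writes update all N copies of the key so they
// stay consistent; reads pick one copy, so traffic spreads over N slots.
class SplitKey {
    private final int copies;

    public SplitKey(int copies) { this.copies = copies; }

    /** "good_100" with i=2 -> "good_100_copy3", matching the naming above. */
    public String copyName(String key, int i) { return key + "_copy" + (i + 1); }

    /** Update every copy so all replicas hold the same value. */
    public void set(String key, String value, BiConsumer<String, String> redisSet) {
        for (int i = 0; i < copies; i++) {
            redisSet.accept(copyName(key, i), value);
        }
    }

    /** Read a random copy; hashing the local IP/MAC would also work. */
    public String get(String key, Function<String, String> redisGet) {
        int i = ThreadLocalRandom.current().nextInt(copies);
        return redisGet.apply(copyName(key, i));
    }
}
```

Note the trade-off: writes become N times more expensive, and a partially failed multi-key write can leave copies briefly inconsistent, which is why the expiration times still matter.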

2.4 Another idea: a local cache modeled on a configuration center

For those familiar with microservice configuration centers, we can reframe the problem as configuration-center consistency. Take nacos as an example: how does it achieve consistent distributed configuration with fast propagation? We can treat the cache by analogy and do the same thing.

Long polling plus localized configuration: all configuration is initialized when the service starts, and a periodic long poll then checks whether any configuration watched by the service has changed. If something changed, the long-polling request returns immediately and the local configuration is updated; if nothing changed, all business code keeps using the in-memory local configuration. This guarantees both the timeliness and the consistency of the distributed cache configuration.
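A minimal sketch of this version-based polling idea, with an in-memory stand-in for the config-center server. Everything here is illustrative; real long polling would block on the server side until a change occurs instead of returning immediately:

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.AtomicReference;

// Sketch of long polling for a local config/cache: the client holds a local
// snapshot plus its version, repeatedly asks "is there anything newer than
// version V?", and refreshes the snapshot only when the answer is yes.
class LocalConfigCache {
    // "Server" side: current value and a monotonically increasing version.
    private final AtomicLong serverVersion = new AtomicLong(0);
    private final AtomicReference<String> serverValue = new AtomicReference<>("");

    // "Client" side: local snapshot used by all business code.
    private volatile long localVersion = -1;
    private volatile String localValue = null;

    /** Server-side publish: bump the version so pollers notice the change. */
    public void publish(String value) {
        serverValue.set(value);
        serverVersion.incrementAndGet();
    }

    /** One round of the client's poll: refresh only if something changed. */
    public boolean pollOnce() {
        long v = serverVersion.get();
        if (v == localVersion) {
            return false;                 // real long polling would block here
        }
        localValue = serverValue.get();
        localVersion = v;
        return true;
    }

    public String localValue() { return localValue; }
}
```

Between changes, every read is served from localValue with no network round trip, which is exactly the property that makes this pattern attractive for hot keys.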

2.5 Other measures that can be planned in advance

Each of the solutions above addresses the hot key problem in relative isolation, so when a real business demand arises there is usually time to design an overall solution. For hot key problems caused by extreme flash-sale scenarios, if the budget allows, the service and its Redis cache cluster can be isolated outright so that normal business is unaffected, and better disaster-recovery and rate-limiting measures can be put in place at the same time.

Some integrated solutions

There are already some fairly complete application-level hotkey solutions on the market; among them, JD.com has open-sourced a hotkey tool. Its principle is to instrument the client side and report candidate hot keys; once the server side detects one, it pushes the hot key to the relevant servers for local caching, and the local caches are updated in sync whenever the remote key changes. It is a relatively mature solution for automatic hot key detection with distributed cache consistency: JD Retail hotkey.

That is all of the content on how to deal with the cache hot key problem in Redis; I hope what was shared here is helpful to you.
