
Redis usage specifications and monitoring methods


This article explains Redis usage specifications and monitoring methods. The content is straightforward and easy to follow; please read along to learn about Redis usage specifications and monitoring.

I. Preface

In Internet applications, caching has become a key component of high-concurrency architectures. This article introduces typical cache usage scenarios, practical case studies, Redis usage specifications, and routine Redis monitoring.

II. Comparison of common caches

Common caching schemes: local caches include HashMap/ConcurrentHashMap, Ehcache, Memcache, Guava Cache, and so on; cache middleware includes Redis, Tair, and so on.

III. Redis usage scenarios

1. Counting

Redis provides fast counting and caching functions.

For example, for the number of online viewers of a video or live stream, each play increments the counter by 1.
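
A minimal sketch of such a counter, assuming a Jedis client; the key name is illustrative:

import redis.clients.jedis.Jedis;

public class OnlineCounter {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
            // Each play atomically increments the counter for this video (key name is illustrative).
            long viewers = jedis.incr("video:online:1001");
            System.out.println("current count: " + viewers);
        }
    }
}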

2. Centralized Session management

Sessions can be stored in the application server's JVM, but that approach has consistency problems, and high concurrency can cause JVM memory overflow. With Redis managing user sessions centrally, as long as Redis is highly available and scalable, every session update or login query reads the information directly from Redis.

3. Rate limiting

For example, in a high-concurrency flash sale activity, the INCRBY command provides atomic increments.

For example, the service may require that a user can obtain a CAPTCHA at most 5 times within one minute.
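
A minimal sketch of that rule (at most 5 CAPTCHAs per user per minute), assuming a Jedis client; the key name and limit are illustrative:

import redis.clients.jedis.Jedis;

public class CaptchaRateLimiter {
    // Returns true if the user may request another CAPTCHA within the current one-minute window.
    public static boolean allowCaptcha(Jedis jedis, String userId) {
        String key = "captcha:limit:" + userId;    // illustrative key name
        long count = jedis.incr(key);              // atomic increment
        if (count == 1) {
            jedis.expire(key, 60);                 // start the 1-minute window on the first request
        }
        return count <= 5;
    }
}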

4. Ranking list

Relational databases are generally slow at ranking queries, so hot data can be sorted with the help of Redis's SortedSet.

For example, in one project we needed to compute the daily top-earning list of VJs: use the VJ's id as the member and the heat value of the gifts rewarded that day as the score, and the daily VJ ranking can then be obtained with ZRANGEBYSCORE.
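
A sketch of that daily list, assuming a Jedis client; ZINCRBY accumulates the gift heat per VJ and ZREVRANGE (a close relative of the ZRANGEBYSCORE approach above) reads the top entries. Key names and scores are illustrative:

import redis.clients.jedis.Jedis;

public class VjRanking {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
            String key = "live:rank:vj:daily:20200301";   // illustrative key name
            jedis.zincrby(key, 50, "vj:1001");            // add 50 heat to VJ 1001
            jedis.zincrby(key, 30, "vj:1002");
            // Top 10 VJs of the day, highest score first.
            for (String member : jedis.zrevrange(key, 0, 9)) {
                System.out.println(member);
            }
        }
    }
}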

5. Distributed lock

In multi-process concurrency scenarios, a distributed lock is used to limit how many processes execute a piece of code at the same time. It is often used to prevent cache breakdown in high-concurrency scenarios.

A distributed lock is essentially about "occupying the slot": when another process executes SETNX and finds the flag is already set to 1, it has to give up or wait.
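
A minimal sketch of this "occupy the slot" pattern, assuming a Jedis 3+ client (SET with NX/PX via SetParams) and a Lua script for a safe release; the key name and timeout are illustrative:

import java.util.Collections;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.params.SetParams;

public class RedisLock {
    // Try to acquire the lock: only succeeds if the key does not exist yet.
    public static boolean tryLock(Jedis jedis, String lockKey, String requestId, long ttlMillis) {
        String result = jedis.set(lockKey, requestId, SetParams.setParams().nx().px(ttlMillis));
        return "OK".equals(result);
    }

    // Release the lock only if we still own it (compare-and-delete via Lua keeps it atomic).
    public static void unlock(Jedis jedis, String lockKey, String requestId) {
        String script = "if redis.call('get', KEYS[1]) == ARGV[1] then "
                      + "return redis.call('del', KEYS[1]) else return 0 end";
        jedis.eval(script, Collections.singletonList(lockKey), Collections.singletonList(requestId));
    }
}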

IV. Case analysis

1. Expiration settings: the SET command removes the expiration time

An expiration time can be set on any Redis data structure. However, if a string already has an expiration time and is then reset with SET, the expiration time is removed. It is therefore necessary to evaluate Redis capacity reasonably in the project, so that frequent SETs do not leave keys without an expiration policy and indirectly fill up memory.

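A small sketch of this pitfall, assuming a Jedis client: it sets a key with a TTL, overwrites it with a plain SET, and checks that the TTL is gone.

import redis.clients.jedis.Jedis;

public class SetDropsTtl {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
            jedis.setex("demo:key", 60, "v1");          // key with a 60-second TTL
            System.out.println(jedis.ttl("demo:key"));  // ~60
            jedis.set("demo:key", "v2");                // plain SET overwrites the value...
            System.out.println(jedis.ttl("demo:key"));  // -1: the TTL has been removed
        }
    }
}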

2. Expiration bug in Jedis 2.9.0 and below

It turns out that when Jedis calls pexpireAt, it actually ends up issuing the pexpire command. This bug causes keys to have extremely long expiration times, leading to Redis memory overflow and other problems. It is recommended to upgrade to Jedis 2.9.1 or above.

The BinaryJedisCluster.java source code is as follows:

@Override
public Long pexpireAt(final byte[] key, final long millisecondsTimestamp) {
    return new JedisClusterCommand<Long>(connectionHandler, maxAttempts) {
        @Override
        public Long execute(Jedis connection) {
            return connection.pexpire(key, millisecondsTimestamp); // bug: pexpire should be pexpireAt
        }
    }.runBinary(key);
}

Compare pexpire with pexpireAt:

For example, suppose the current time is 2018-06-14 17:00:00, whose Unix timestamp is 1528966800000 milliseconds. When we use the PEXPIREAT command with this value, the key expires immediately because that moment has already passed.

When we misuse the PEXPIRE command with the same value, the key does not expire immediately; instead it expires only after another 1528966800000 milliseconds, i.e. roughly 48 years later. Such keys effectively never expire, which may lead to Redis memory overflow, server crashes, and other problems.

3. Cache breakdown

Cached keys have expiration policies. If a large number of concurrent requests hit a key right as it expires, those requests all find the cache expired, fetch the origin data from the back-end DB, and set it back into the cache; the burst of concurrent requests may instantly overwhelm the back-end DB.

There are two commonly used optimization schemes in the industry:

First: use a distributed lock so that, under high concurrency, only one thread goes back to the back-end DB (a sketch follows below).

Second: keep the Redis keys of highly concurrent requests always valid, i.e. change passive origin-pull triggered by user requests into active origin-pull, generally by refreshing the cache proactively with an asynchronous task.
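
A sketch of the first scheme, assuming a Jedis 3+ client: only the request that wins the short-lived mutex reloads from the DB, the rest retry the cache. The key names and loadFromDb are hypothetical placeholders.

import redis.clients.jedis.Jedis;
import redis.clients.jedis.params.SetParams;

public class CacheBreakdownGuard {
    public static String getWithMutex(Jedis jedis, String key) throws InterruptedException {
        String value = jedis.get(key);
        if (value != null) {
            return value;
        }
        String mutexKey = "mutex:" + key;                                    // illustrative lock key
        String locked = jedis.set(mutexKey, "1", SetParams.setParams().nx().px(3000));
        if ("OK".equals(locked)) {
            try {
                value = loadFromDb(key);                                     // hypothetical DB loader
                jedis.setex(key, 300, value);                                // refill the cache with a TTL
            } finally {
                jedis.del(mutexKey);
            }
            return value;
        }
        Thread.sleep(50);                                                    // lost the race: wait and retry
        return getWithMutex(jedis, key);
    }

    private static String loadFromDb(String key) { return "value-of-" + key; }
}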

4. In the Redis-standalone architecture, the use of non-zero databases is forbidden

Redis executes select 0 and select 1 commands to switch databases, which causes performance loss.

RedisTemplate first obtains a connection when executing the execute method.

Execution then reaches RedisConnectionUtils.java, which contains the method that obtains the connection.

JedisConnectionFactory.java then calls the JedisConnection constructor. Note that the dbIndex here is the database number, e.g. 1.

Following the JedisConnection code further: if the selected database is greater than 0, a select db operation is issued; if you always use database 0, no switching command needs to be executed. That explains the first switch, select 1, but where does select 0 come from?

In fact, when the client finishes using Redis it needs to release the connection, and RedisTemplate releases it for us automatically. Let's go back to where RedisTemplate executes the execute(...) method.

The connection-closing code is also in RedisConnectionUtils.java.

According to the code comment, if a database other than 0 was selected, select 0 is executed to reset the connection every time it is released!

In the vivo Mall business, the product detail page interface was tuned as above, and performance improved by more than 3 times.

Further verification showed that switching databases degrades performance by at least a factor of 3 (depending on the specific business).

Redis Cluster uses database 0 by default and does not allow selecting other databases, which avoids this problem.
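
A sketch of pinning the connection factory to database 0, assuming Spring Data Redis with a Jedis connection factory; the host and port are illustrative:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisStandaloneConfiguration;
import org.springframework.data.redis.connection.jedis.JedisConnectionFactory;

@Configuration
public class RedisConfig {
    @Bean
    public JedisConnectionFactory jedisConnectionFactory() {
        RedisStandaloneConfiguration conf = new RedisStandaloneConfiguration("127.0.0.1", 6379);
        conf.setDatabase(0);   // stay on database 0 so no "select db" / "select 0" round trips occur
        return new JedisConnectionFactory(conf);
    }
}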

5. Watch out for Redis commands with O(N) time complexity

Redis executes commands on a single thread, so command execution is thread-safe.

Redis uses non-blocking IO, and most commands have a time complexity of O(1).

Using time-consuming commands is very dangerous: they take up a large share of the single thread's processing time and delay all other requests.

For example, SMEMBERS myset returns all elements of a set, and commands that return all members of a given hash have time complexity O(N).

As the cached value collection grows, highly concurrent API requests all read this data from Redis, each read takes longer, a hot key forms, one Redis shard gets blocked, and its CPU utilization reaches 100%.

6. Cache hot key

In Redis, a key with a high access frequency is called a hot key. When requests for a hot key converge on one server host, the volume of requests can exhaust the host's resources or even bring it down, affecting normal service.

There are two reasons for the hot key problem:

Users consume the data far faster than it is produced, e.g. hot-selling or flash-sale goods, hot news, and hot comments. These typical read-heavy, write-light scenarios give rise to hot key problems.

The sharded requests exceed the performance limit of a single server, e.g. a fixed key name hashes to one server, and the number of visits exceeds that server's limit, leading to a hot key problem.

So in the actual business, how to identify the hot key?

Estimate which keys will be hot based on business experience

Collect statistics on the client side, either locally or by reporting

If the server side has a proxy layer, collect and report statistics at the proxy layer

When we recognize the hot key, how to solve the hot key problem?

Redis cluster expansion: add shard replicas to balance the read traffic

Hash the hot key further, e.g. back up a key as key1, key2, ..., keyN, distribute the N copies of the same data across different shards, and randomly access one of the N copies on each read to further spread the read traffic (see the sketch after this list)

Use a second-level cache, i.e. a local cache.

When a hot key is detected, load its data into the application server's local cache first, to reduce read requests to Redis.
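
A sketch of the random-suffix approach from the list above, assuming a Jedis client and N copies written as key:0 ... key:N-1; key naming is illustrative, and in a real Redis Cluster deployment a cluster-aware client such as JedisCluster would be used so the copies actually land on different shards:

import java.util.concurrent.ThreadLocalRandom;
import redis.clients.jedis.Jedis;

public class HotKeyReader {
    private static final int COPIES = 10;   // N backups of the same hot data, spread over shards

    // Write the same value under N suffixed keys so they hash to different shards.
    public static void writeHotKey(Jedis jedis, String key, String value) {
        for (int i = 0; i < COPIES; i++) {
            jedis.setex(key + ":" + i, 300, value);
        }
    }

    // Read a random copy to spread the read traffic.
    public static String readHotKey(Jedis jedis, String key) {
        int i = ThreadLocalRandom.current().nextInt(COPIES);
        return jedis.get(key + ":" + i);
    }
}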

V. Redis specification

1. Prohibit the use of databases other than 0

Description:

In the Redis-standalone architecture, using any Redis database other than 0 is prohibited.

Reasons:

It maintains compatibility for a future business migration to Redis Cluster.

Switching between multiple databases with SELECT consumes extra CPU resources.

It makes automated operation and maintenance easier, since commands such as scan and dbsize only operate on the currently selected database.

Some Redis clients do not support multiple databases on a single instance because of thread-safety issues.

2. Key design specification

Name key prefixes by business function to prevent key conflicts and accidental overwrites. It is recommended to separate segments with colons, for example business name:table name:id, such as live:rank:user:weekly:1:202003.

Keep key lengths under 30 characters. The key name itself is a String object, and Redis hard-codes its maximum length at 512MB.

In Redis caching scenarios, it is recommended to set a TTL on every key so that unused keys can be cleaned up or evicted in time.

Keys must not contain special characters such as spaces, line breaks, single or double quotes, or other escape characters.

3. Value design specification

The size of a single value must be kept within 10KB. If a single instance holds too many keys, expired keys may not be reclaimed in time.

For complex data types such as set, hash, and list, it is recommended to keep the number of elements in a single structure under 1000.

4. Focus on the time complexity of the command

O(1) commands such as GET and SCARD are recommended.

For O(N) commands, pay attention to the size of N. For the following commands, the value of N needs to be controlled at the business level:

HGETALL

LRANGE

SMEMBERS

ZRANGE

For example, the time complexity of SMEMBERS is O(N); as N keeps growing, it drives Redis CPU usage up and blocks the execution of other commands.
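
Where N may be large, an incremental alternative is to iterate with SSCAN instead of SMEMBERS. A sketch, assuming Jedis 3.x-style imports (ScanParams and ScanResult in the redis.clients.jedis package):

import redis.clients.jedis.Jedis;
import redis.clients.jedis.ScanParams;
import redis.clients.jedis.ScanResult;

public class SetScanner {
    // Iterate a large set in small batches instead of pulling everything at once with SMEMBERS.
    public static void scanSet(Jedis jedis, String key) {
        String cursor = "0";
        ScanParams params = new ScanParams().count(100);   // roughly 100 members per round trip
        do {
            ScanResult<String> page = jedis.sscan(key, cursor, params);
            for (String member : page.getResult()) {
                System.out.println(member);
            }
            cursor = page.getCursor();
        } while (!"0".equals(cursor));
    }
}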

5. Use of Pipeline

Note: Pipeline is Redis's batch-submission mechanism: multiple commands are sent to Redis over one connection and executed in a single round trip, which performs better than submitting commands one by one in a loop.

The Redis client executes a command in four stages: send command -> queue command -> execute command -> return result.

The commonly used MGET and MSET commands effectively save RTT (the round-trip time of command execution), but HGETALL has no MHGETALL counterpart and does not support batch operations. In that case, Pipeline is needed.

For example, in a live-streaming project we needed to query a VJ's daily, weekly, and monthly rankings at the same time; with Pipeline, the commands are submitted in one batch and the three lists are returned together.
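
A sketch of that query, assuming a Jedis client; the three ranking keys are illustrative, and syncAndReturnAll() sends everything in one batch and collects the three results:

import java.util.List;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.Pipeline;

public class RankPipeline {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
            Pipeline p = jedis.pipelined();
            p.zrevrange("live:rank:vj:daily:20200301", 0, 9);   // daily top 10 (illustrative keys)
            p.zrevrange("live:rank:vj:weekly:202009", 0, 9);    // weekly top 10
            p.zrevrange("live:rank:vj:monthly:202003", 0, 9);   // monthly top 10
            List<Object> results = p.syncAndReturnAll();        // one round trip, three result lists
            System.out.println(results);
        }
    }
}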

6. Commands disabled online

Prohibit the use of Monitor

The MONITOR command is prohibited in production. Under high concurrency, MONITOR carries the hidden danger of runaway memory growth and degrades Redis performance.

Prohibit the use of Keys

The KEYS operation traverses all keys; with a large number of keys it becomes a slow query and blocks other commands. Therefore the KEYS and KEYS pattern commands are prohibited.

It is recommended to use the SCAN command online instead of KEYS (a sketch follows below).
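
A sketch of replacing KEYS pattern with SCAN MATCH, again with Jedis 3.x-style imports; the pattern is illustrative:

import redis.clients.jedis.Jedis;
import redis.clients.jedis.ScanParams;
import redis.clients.jedis.ScanResult;

public class KeyScanner {
    // Walk the keyspace incrementally instead of blocking the server with KEYS live:rank:*.
    public static void scanKeys(Jedis jedis, String pattern) {
        String cursor = "0";
        ScanParams params = new ScanParams().match(pattern).count(100);
        do {
            ScanResult<String> page = jedis.scan(cursor, params);
            for (String key : page.getResult()) {
                System.out.println(key);
            }
            cursor = page.getCursor();
        } while (!"0".equals(cursor));
    }
}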

Prohibit the use of Flushall and Flushdb

FLUSHDB deletes all records in the current database and FLUSHALL deletes all records in every database; the commands are atomic, will not be terminated once started, and will not fail once executed.

Prohibit the use of Save

SAVE blocks the current Redis server until the persistence operation completes, which causes long blocking on instances with a large amount of memory.

BGREWRITEAOF

BGREWRITEAOF triggers an AOF rewrite manually; for instances with a large amount of memory, manual persistence causes long blocking.

Config

CONFIG is a way of changing configuration from the client, which is not conducive to Redis operation and maintenance. It is recommended to set configuration in the Redis configuration file instead.

VI. Redis monitoring

1. Slow query

Method 1: use slowlog to obtain the slow query log.

127.0.0.1:{port}> slowlog get 5
1) 1) (integer) 47
   2) (integer) 1533810300
   3) (integer) 175833
   4) 1) "DEL"
      2) "spring:session:expirations:1533810300000"
2) 1) (integer) 46
   2) (integer) 1533810300
   3) (integer) 117400
   4) 1) "SMEMBERS"

Method 2: more comprehensive slow-query monitoring is available through the CacheCloud tool.

Path: "Application list"-Click the relevant application name-Click "slow query" Tab page.

Click "slow query" and focus on the number of slow queries and related commands.

2. Monitor the CPU core utilization bound to Redis instances

Since Redis is single-threaded, focus on monitoring the utilization of the CPU core to which each Redis instance is bound.

In general, CPU resource utilization is around 10%; if it is above 20%, consider whether RDB persistence is being used.

3. Redis shard load balancing

In the current redis-cluster architecture model (a cluster of 3 masters and 3 slaves), focus on whether request traffic is balanced across the Redis Cluster shards.

Obtain through the command:

redis-cli -p {port} -h {host} --stat

In general, an alarm should be raised above 120,000 (12W) requests.

4. Pay attention to big keys (BigKey)

Use the redis-cli tool provided by Redis to scan for big keys regularly and optimize them.

The specific commands are as follows:

redis-cli -h 127.0.0.1 -p {port} --bigkeys

Or

redis-memory-for-key -s {IP} -p {port} XXX_KEY

Generally speaking, anything over 10KB is considered a big key and needs attention; it is recommended to optimize it at the business level.

5. Monitor the amount of memory occupied by Redis

Check with the info memory command to avoid performance problems caused by exhausting the allocated maxmemory in high-concurrency scenarios.

Focus on the value of the used_memory_human field; if it grows too fast, it needs to be evaluated.
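
A small sketch of pulling that value programmatically, assuming a Jedis client; info("memory") returns the same text that the INFO memory command prints, including the used_memory_human line:

import redis.clients.jedis.Jedis;

public class MemoryCheck {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
            // Print only the used_memory_human line from the "memory" section of INFO.
            for (String line : jedis.info("memory").split("\r\n")) {
                if (line.startsWith("used_memory_human")) {
                    System.out.println(line);
                }
            }
        }
    }
}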

Thank you for reading. That concludes "Redis usage specifications and monitoring methods"; after studying this article you should have a deeper understanding of Redis usage specifications and monitoring, though the specifics still need to be verified in practice.
