What happens when Redis has a high latency

2025-01-19 Update From: SLTechnology News&Howtos

This article explains in detail what happens when Redis exhibits high latency. It is shared here for reference, and hopefully you will have a solid understanding of the topic after reading it.

Redis is an in-memory database: it stores data in memory, so its reads and writes are much faster than those of traditional databases that store data on disk. But when latency does appear in Redis, we need a deep understanding of its causes in order to troubleshoot quickly and resolve the problem.

A command execution process

In the scenario of this article, latency is the interval between the client sending a command and the client receiving the command's return value. So let's first look at the steps in the execution of a command in Redis; a problem at any of these steps can lead to high latency.

The figure above is a schematic diagram of the execution of a command sent by a Redis client: the green boxes are the execution steps, and the blue ones mark the possible causes of high latency at each step.

Network connection limits, network transmission rates and CPU performance are performance problems that may occur on any server. But Redis also has its own characteristic causes of high latency: unreasonable use of commands or data structures, persistence blocking, and memory swapping.

What is more deadly is that Redis uses a single thread and an event-driven mechanism to process network requests. Dedicated handlers (the connection accept handler, the command request handler and the command reply handler) process client network events one at a time, moving on to the next event in the queue only after the current one finishes. So a single high-latency command delays every command queued behind it. You can refer to this article for information about the Redis event handling mechanism.
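The queueing effect described above can be illustrated with a toy model in plain Python (no real Redis involved; the command timings are made up for illustration): in a single-threaded server, a command's observed latency is its own execution time plus the execution time of everything queued ahead of it.

```python
# Toy model of a single-threaded command queue: each command's observed
# latency is its own execution time plus the time spent waiting for all
# commands ahead of it. Timings are illustrative, not real Redis numbers.
queue = [("GET k1", 0.1), ("HGETALL big_hash", 50.0), ("GET k2", 0.1)]  # (command, exec ms)

elapsed = 0.0
latencies = {}
for cmd, cost in queue:
    elapsed += cost           # the single thread must finish this command first
    latencies[cmd] = elapsed  # client-observed latency includes queueing time

# "GET k2" itself costs only 0.1 ms but observes about 50.2 ms of latency,
# because it sat behind the slow HGETALL.
print(latencies)
```

This is why one slow command on a hot instance shows up as latency spikes on commands that are themselves perfectly cheap.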

For high latency, Redis natively provides slow query statistics: execute slowlog get {n} to fetch the n most recent slow commands. By default, commands taking more than 10 milliseconds (configurable) are recorded in a fixed-length queue. For production instances it is recommended to lower the threshold to 1 millisecond so that millisecond-level commands are detected promptly.

# commands that exceed the slowlog-log-slower-than threshold are recorded in the slow query queue
# the maximum queue length is slowlog-max-len
slowlog-log-slower-than 10000
slowlog-max-len 128

Since Redis executes commands on a single thread, if a command takes about 1 millisecond the instance's actual OPS is capped at roughly 1000. The slow query queue length defaults to 128 and can be enlarged appropriately. Note that the slow log records only command execution time, excluding network transmission time and command queueing time. So when a client observes a blocking exception, the current command may not itself be slow; it may be waiting for other commands to finish. Compare the timestamps of client exceptions and slow log entries to confirm whether slow queries are causing commands to queue up.
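The throughput bound mentioned above is simple arithmetic: a single-threaded server that spends latency_ms on every command can complete at most 1000 / latency_ms commands per second.

```python
# Upper bound on throughput for a single-threaded server:
# if every command takes `latency_ms` of execution time, at most
# 1000 / latency_ms commands can complete per second.
def max_ops_per_sec(latency_ms: float) -> float:
    return 1000.0 / latency_ms

print(max_ops_per_sec(1.0))   # a 1 ms command caps the instance at 1000 OPS
print(max_ops_per_sec(0.1))   # a 100 us command allows roughly 10,000 OPS
```

This is why a 1-millisecond slow query threshold is the recommended setting for busy production instances.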

The output format of slowlog is shown below. The first field is the sequence number of the entry among all slow logs, with the latest record displayed first; the second field is the unix timestamp when the entry was recorded, which can be converted to a friendly format with the date command; the third field is the command's execution time in microseconds; the fourth field is the corresponding Redis command.

> slowlog get
1) 1) (integer) 26
   2) (integer) 1450253133
   3) (integer) 43097
   4) 1) "flushdb"
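A small helper (a sketch in plain Python; the field layout follows the slowlog format described above) can turn such a raw entry into a readable record:

```python
import datetime

# Turn one raw slowlog entry (id, unix timestamp, duration in microseconds,
# command args) into a human-readable record.
def format_slowlog_entry(entry):
    entry_id, ts, usec, args = entry
    return {
        "id": entry_id,
        "time": datetime.datetime.fromtimestamp(ts, datetime.timezone.utc).isoformat(),
        "duration_ms": usec / 1000.0,   # slowlog reports microseconds
        "command": " ".join(args),
    }

# The sample entry above: id 26, recorded at unix time 1450253133,
# took 43097 us (about 43 ms) to run "flushdb".
print(format_slowlog_entry((26, 1450253133, 43097, ["flushdb"])))
```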

Let's take a look at the high latency problems caused by unreasonable use of commands or data structures, persistence blocking, and memory swapping.

Unreasonable use of commands or data structures

Generally speaking, Redis commands execute very fast, but once the amount of data reaches a certain level, some commands take a long time; for example, running hgetall on a hash containing tens of thousands of elements. With that much data and an algorithmic complexity of O(n), the command is bound to be slow.

This problem is a typical case of improper use of commands and data structures. In high-concurrency scenarios, avoid executing commands with algorithmic complexity of O(n) or worse on large objects. For hashes with many fields, use the scan family of commands to iterate through them step by step, rather than fetching everything at once with hgetall.
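The idea behind the scan family can be sketched in plain Python (no real Redis connection; a dict stands in for the hash, and the batching mimics what HSCAN's cursor does):

```python
# Sketch of the HSCAN idea: instead of one O(n) call that returns tens of
# thousands of fields at once (like HGETALL), iterate the hash in small
# batches so no single step holds up the event loop for long.
def scan_hash(fields: dict, count: int = 100):
    items = list(fields.items())
    for start in range(0, len(items), count):
        yield items[start:start + count]   # one small batch per round trip

big_hash = {f"field:{i}": i for i in range(1000)}

batches = list(scan_hash(big_hash, count=100))
# 1000 fields arrive as 10 batches of 100 rather than one giant reply.
print(len(batches), len(batches[0]))
```

Each batch is cheap for the server, so other clients' commands can be interleaved between round trips instead of queueing behind one huge reply.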

Redis itself provides a tool for discovering large objects: redis-cli -h {ip} -p {port} --bigkeys. This command uses scan to sample continuously from the specified Redis DB, outputs in real time the key with the largest value footprint found so far, and finally prints a summary report of the biggest key of each data type.

> redis-cli -h host -p 12345 --bigkeys

# Scanning the entire keyspace to find biggest keys as well as
# average sizes per key type.  You can use -i 0.1 to sleep 0.1 sec
# per 100 SCAN commands (not usually needed).

[00.00%] Biggest hash found so far 'idx:user' with 2 fields
[00.00%] Biggest hash found so far 'idx:product' with 4 fields
[00.00%] Biggest hash found so far 'idx:order' with 24 fields
[02.29%] Biggest hash found so far 'idx:fund' with 26 fields
[02.29%] Biggest hash found so far 'idx:pay' with 79 fields
[04.45%] Biggest set  found so far 'indexed_word_set' with 2482 members
[05.93%] Biggest hash found so far 'idx:address' with 259 fields
[11.79%] Biggest hash found so far 'idx:reply' with 296 fields

-------- summary -------

Sampled 1484 keys in the keyspace!
Total key length in bytes is 13488 (avg len 9.09)

Biggest set found 'indexed_word_set' has 1482 members
Biggest hash found 'idx:' has 196 fields

0 strings with 0 bytes (00.00% of keys, avg size 0.00)
0 lists with 0 items (00.00% of keys, avg size 0.00)
2 sets with 2710 members (00.13% of keys, avg size 855.00)
1482 hashs with 7731 fields (99.87% of keys, avg size 4.54)
0 zsets with 0 members (00.00% of keys, avg size 0.00)

Persistence blocking

For Redis nodes with persistence enabled, you need to check whether the blocking is caused by persistence. The main-thread blocking operations caused by persistence are fork blocking and AOF disk-flush blocking.

During RDB snapshotting and AOF rewriting, the Redis main thread calls fork to create a child process that shares its memory, and the child process performs the corresponding persistence work. If the fork call itself takes too long, the main thread inevitably blocks.

The child process created by fork has the same memory footprint as the parent, so in theory twice the physical memory would be needed. But Linux uses copy-on-write: parent and child share the same physical memory pages, and when the parent handles a write request it copies only the page being modified before writing, while the child continues to read the parent's memory snapshot as of the fork. So, in general, fork itself does not take too long.

The latest_fork_usec metric, obtained with the info stats command, indicates how long the most recent fork operation of Redis took, in microseconds. If it is large, for example more than 1 second (1,000,000 microseconds), optimization is needed.

> redis-cli -c -p 7000 info | grep -w latest_fork_usec
latest_fork_usec:315
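In a monitoring script you might parse this metric out of the INFO text and flag slow forks. A minimal sketch (the 1-second threshold matches the rule of thumb above; the function name is illustrative):

```python
# Parse `info stats` text for latest_fork_usec and flag slow forks.
# The 1,000,000 us (1 second) threshold matches the rule of thumb above.
def check_fork_time(info_text: str, threshold_usec: int = 1_000_000):
    for line in info_text.splitlines():
        if line.startswith("latest_fork_usec:"):
            usec = int(line.split(":", 1)[1])
            return usec, usec > threshold_usec
    raise ValueError("latest_fork_usec not found in INFO output")

usec, too_slow = check_fork_time("latest_fork_usec:315")
print(usec, too_slow)   # 315 us is far under 1 second, so no action needed
```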

When AOF persistence is enabled, the flush policy is usually once per second: a background thread performs an fsync on the AOF file every second. When the disk is under too much pressure, the fsync has to wait for the write to complete. If the main thread finds that more than 2 seconds have passed since the last successful fsync, it blocks until the background thread finishes the fsync, for the sake of data safety. This blocking is caused mainly by disk pressure and can be identified from the Redis log. When it occurs, the following line is printed:

Asynchronous AOF fsync is taking too long (disk is busy). Writing the AOF buffer without waiting for fsync to complete, this may slow down Redis.
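The 2-second rule described above can be sketched as a simple decision function (plain Python, illustrative names; timestamps are in seconds):

```python
# Sketch of the AOF flush rule described above (everysec policy):
# if the background fsync has not succeeded for more than 2 seconds,
# the main thread blocks on the write for data safety.
def should_block_on_write(now: float, last_fsync_ok: float, limit_sec: float = 2.0) -> bool:
    return (now - last_fsync_ok) > limit_sec

print(should_block_on_write(now=10.0, last_fsync_ok=9.2))   # 0.8 s behind -> keep going
print(should_block_on_write(now=10.0, last_fsync_ok=7.5))   # 2.5 s behind -> block
```

The trade-off is deliberate: Redis tolerates up to 2 seconds of unsynced data for throughput, but beyond that it prefers blocking over risking more data loss.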

You can also check the aof_delayed_fsync metric in the info persistence statistics, which increments each time an fsync delay blocks the main thread.

> info persistence
loading:0
aof_pending_bio_fsync:0
aof_delayed_fsync:0

Memory swapping

Memory swapping (swap) is fatal for Redis, because an important precondition for Redis's high performance is that all data resides in memory. If the operating system swaps part of the memory used by Redis out to disk, performance drops sharply after the swap, since memory and disk read/write speeds differ by several orders of magnitude. Check whether Redis memory has been swapped as follows:

> redis-cli -p 6383 info server | grep process_id   # query the Redis process ID
process_id:4476
> cat /proc/4476/smaps | grep Swap                  # query swapped-out memory
Swap: 0 kB
Swap: 4 kB
Swap: 0 kB
Swap: 0 kB

If every Swap entry is 0 kB, or the odd one is 4 kB, things are normal: the Redis process memory has not been swapped out.
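In a health check you would sum those Swap lines rather than eyeball them. A minimal sketch (plain Python; the sample text mirrors the smaps output shown above):

```python
# Sum the "Swap:" lines from /proc/<pid>/smaps to get the total amount of
# the process's memory that has been swapped out.
def total_swap_kb(smaps_text: str) -> int:
    total = 0
    for line in smaps_text.splitlines():
        if line.strip().startswith("Swap:"):
            total += int(line.split()[1])   # field layout: "Swap: <n> kB"
    return total

sample = """Swap: 0 kB
Swap: 4 kB
Swap: 0 kB
Swap: 0 kB"""
print(total_swap_kb(sample))   # 4 kB in total: effectively no swapping
```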

There are many ways to avoid memory swapping. For example:

Make sure the machine has enough available memory

Ensure that all Redis instances set the maximum available memory (maxmemory) to prevent uncontrollable growth of Redis memory in extreme cases.

Lower the system's inclination to use swap, for example: echo 10 > /proc/sys/vm/swappiness.
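For the second point, a back-of-envelope calculation helps when choosing maxmemory values. This is only a sizing sketch: the 30% headroom reserved for the OS, fork copy-on-write pages and buffers is an assumed figure, not a Redis rule.

```python
# Back-of-envelope maxmemory sizing: divide the memory you are willing to
# give Redis across the instances on the machine. The 30% headroom for the
# OS, fork copy-on-write pages and buffers is an assumption, not a Redis rule.
def suggest_maxmemory_mb(machine_ram_mb: int, instances: int, headroom: float = 0.30) -> int:
    usable = machine_ram_mb * (1.0 - headroom)
    return int(usable // instances)

# On a 16 GB box running 2 instances, each gets roughly 5734 MB.
print(suggest_maxmemory_mb(16384, 2))
```

Keeping the sum of all maxmemory settings well below physical RAM is what actually prevents the kernel from swapping Redis pages out.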

So much for this discussion of what happens when Redis has high latency. I hope the above content is helpful to you. If you think the article is good, share it for more people to see.


© 2024 shulou.com SLNews company. All rights reserved.
