What to Do About Occasional Redis Connection Failures



Preface

This article describes an occasional Redis connection failure we encountered and how we tracked it down. It is shared here for your reference; the detailed investigation follows.

[authors]

Zhang Yanjun: senior DBA at Ctrip's Technical Support Center, with a strong interest in database architecture and troubleshooting.

Shouxiangchen: senior DBA at Ctrip's Technical Support Center, mainly responsible for operating Ctrip's Redis and database systems. He has extensive hands-on experience with automated operations, process design, monitoring, and troubleshooting, and likes to analyze problems in depth to improve the team's operational efficiency.

[problem description]

A Redis instance in the production environment occasionally reports connection failures. The timing of the errors and the client IPs show no obvious pattern, and the errors clear up on their own after a while.

The following is the error message from the client:

CRedis.Client.RExceptions.ExcuteCommandException: Unable to Connect redis server: ---> CRedis.Third.Redis.RedisException: Unable to Connect redis server:
   at CRedis.Third.Redis.RedisNativeClient.CreateConnectionError()
   at CRedis.Third.Redis.RedisNativeClient.SendExpectData(Byte[][] cmdWithBinaryArgs)
   at CRedis.Client.Entities.RedisServer.c__DisplayClassd`1 ...

From the error message, the client is unable to connect to Redis. The Redis version is 2.8.19; although a little old, it is basically stable.

This is the only cluster in the online environment that occasionally reports this error. One obvious characteristic of this cluster is its scale: the clients and servers number in the hundreds.

[problem analysis]

From the error message, the client cannot connect to the server. The common causes are as follows:

One common cause is client port exhaustion. We checked the network connections: at the time of the problem the number of TCP connections was far below the level at which ports would be exhausted, so this was not the root cause of the Redis connection failures.

Another common scenario is slow queries on the server side blocking the Redis service. On the Redis server we captured statements running longer than 10 milliseconds and caught no slow statements.
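
The article does not show how those statements were captured; as one illustration, here is a minimal hiredis sketch that reads recent entries from Redis's SLOWLOG, assuming slowlog-log-slower-than has already been set to 10000 microseconds (10 ms). The host, port, and entry count are placeholders.

/* Illustrative only: dump recent Redis slow-log entries with hiredis.
 * Assumes slowlog-log-slower-than is already set to 10000 (10 ms);
 * host, port and the entry count are placeholders. */
#include <stdio.h>
#include <hiredis/hiredis.h>

int main(void) {
    redisContext *c = redisConnect("127.0.0.1", 6379);
    if (c == NULL || c->err) { fprintf(stderr, "connect failed\n"); return 1; }

    /* Each SLOWLOG GET entry is an array: [id, timestamp, duration_us, argv, ...] */
    redisReply *r = redisCommand(c, "SLOWLOG GET %d", 10);
    if (r && r->type == REDIS_REPLY_ARRAY) {
        for (size_t i = 0; i < r->elements; i++) {
            redisReply *e = r->element[i];
            if (e->type == REDIS_REPLY_ARRAY && e->elements >= 3)
                printf("id=%lld duration_us=%lld\n",
                       e->element[0]->integer, e->element[2]->integer);
        }
    }
    freeReplyObject(r);
    redisFree(c);
    return 0;
}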

Server monitoring showed a sudden surge in connections at the time of the problem, soaring from about 3,500 to 4,100 (the original article shows this in a monitoring graph).

At the same time, the server-side statistics showed packet drops on the Redis server: 345539 - 344683 = 856 packets within that second.

Sat Apr 7 10:41:40 CST 2018
    1699 outgoing packets dropped
    92 dropped because of missing route
    344683 SYNs to LISTEN sockets dropped
    344683 times the listen queue of a socket overflowed
Sat Apr 7 10:41:41 CST 2018
    1699 outgoing packets dropped
    92 dropped because of missing route
    345539 SYNs to LISTEN sockets dropped
    345539 times the listen queue of a socket overflowed

The reason for the client errors is now basically clear: connections were being created so quickly that the server's backlog queue overflowed and connections were reset on the server side.
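
The two counters above correspond to the kernel's TcpExt ListenOverflows and ListenDrops values; as an aside (not part of the original investigation), a minimal C sketch that reads them directly from /proc/net/netstat, the same source netstat -s uses:

/* Illustrative sketch: print TcpExt ListenOverflows / ListenDrops from
 * /proc/net/netstat. Each protocol contributes a header line followed by
 * a matching value line; we pair the tokens up. */
#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <string.h>

int main(void) {
    FILE *f = fopen("/proc/net/netstat", "r");
    if (!f) { perror("fopen"); return 1; }

    char header[4096], values[4096];
    while (fgets(header, sizeof(header), f)) {
        if (strncmp(header, "TcpExt:", 7) != 0) continue;
        if (!fgets(values, sizeof(values), f)) break;

        char *hs = NULL, *vs = NULL;
        char *h = strtok_r(header, " \n", &hs);
        char *v = strtok_r(values, " \n", &vs);
        while (h && v) {
            if (strcmp(h, "ListenOverflows") == 0 || strcmp(h, "ListenDrops") == 0)
                printf("%s = %s\n", h, v);
            h = strtok_r(NULL, " \n", &hs);
            v = strtok_r(NULL, " \n", &vs);
        }
        break;
    }
    fclose(f);
    return 0;
}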

[about backlog overflow]

This is a common type of TCP failure in highly concurrent, short-connection services. A normal TCP connection is established as follows:

1. The client sends a SYN to the server.

2. The server returns a SYN,ACK to the client.

3. The client returns an ACK.

The three-way handshake is then complete. From the client's point of view the connection has succeeded and it can start sending data, but the server side may not actually be ready yet (the original article illustrates this with a diagram).

In the BSD-derived TCP implementation in the kernel, the server maintains two queues during connection establishment: a SYN queue and an accept queue. The former is known as the half-open (half-connection) queue; a connection enters it when the client's SYN is received. (A common network attack is to keep sending SYNs without ever completing the handshake, flooding the server's half-open queue and causing denial of service.) The latter is the full-connection queue: after the server has returned SYN,ACK and then receives the client's ACK (at which point the client considers the connection established and may start sending PSH packets), the server moves the connection from the SYN queue to the accept queue, provided the accept queue is not full. If the accept queue is full at that moment, the server's behaviour depends on configuration. With tcp_abort_on_overflow set to 0 (the default), the PSH packet sent by the client is simply dropped and the client goes into retransmission; after a while the server resends the SYN,ACK, restarting from step 2 of the handshake. With tcp_abort_on_overflow set to 1, the server sends a reset as soon as it finds the accept queue full.
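
To make the accept-queue behaviour concrete, here is a minimal server-side sketch (not from the article). The backlog argument to listen() sizes the accept queue, and the kernel silently caps it at net.core.somaxconn; 511 mirrors Redis's default tcp-backlog setting.

/* Minimal illustration: the backlog passed to listen() bounds the accept
 * (full-connection) queue; the kernel caps it at net.core.somaxconn.
 * Handshakes that complete while this queue is full show up as the
 * "listen queue of a socket overflowed" counter seen earlier. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(6379);

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) { perror("bind"); return 1; }

    /* 511 mirrors Redis's default tcp-backlog; the effective queue depth
     * is min(511, net.core.somaxconn). */
    if (listen(fd, 511) != 0) { perror("listen"); return 1; }

    for (;;) {
        /* Each accept() pops one established connection off the queue;
         * if the process accepts too slowly, the queue overflows. */
        int cfd = accept(fd, NULL, NULL);
        if (cfd >= 0) close(cfd);
    }
}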

A Wireshark capture showed more than 2,000 connection-establishment requests reaching the Redis server within a single second. We tried raising the TCP backlog from 511 to 2048, but the problem persisted, so tuning of this kind alone could not solve it.

[network packet analysis]

We used Wireshark to pin down the exact time and cause of the network stall. We already knew the exact time of the errors, so we first used editcap to cut the oversized capture into 30-second slices, then located the precise moment of congestion with Wireshark's I/O graph at a 100 ms interval:

The graph clearly shows a stall in the TCP packet flow.

A detailed look at the network packets just before and after the stall shows the following traffic:

Time                Source          Dest        Description
12:01:54.6536050    Redis-Server    Clients     TCP: Flags=...AP...
12:01:54.6538580    Redis-Server    Clients     TCP: Flags=...AP...
12:01:54.6539770    Redis-Server    Clients     TCP: Flags=...AP...
12:01:54.6720580    Redis-Server    Clients     TCP: Flags=...A..S.
12:01:54.6727200    Redis-Server    Clients     TCP: Flags=...A.
12:01:54.6808480    Redis-Server    Clients     TCP: Flags=...AP...
12:01:54.6910840    Redis-Server    Clients     TCP: Flags=...A..S.
12:01:54.6911950    Redis-Server    Clients     TCP: Flags=...A.
...                 ...             ...         ...
12:01:56.1181350    Redis-Server    Clients     TCP: Flags=...AP...

At 12:01:54.65 the Redis server sent a PSH packet to a client, i.e. the reply to a query. The packets that follow are all connection handling: ACKs, SYN,ACK acknowledgements, and RST resets, until the next PSH packet goes out at 12:01:56.1181350. The gap is 1.4372870 seconds. In other words, for those 1.44 seconds the Redis server did nothing but accept or refuse connections, apart from that one query.

The sequence of events behind the client errors is now clear: redis-server stalled for 1.43 seconds, the clients' connection pools filled up, clients frantically opened new connections, the server's accept queue overflowed, the connections were refused outright, and the clients reported errors. We then suspected a client had sent some special command, so we needed to confirm what the last commands before the stall were. We located the last packet before redis-server stalled, installed a Redis dissector plug-in for Wireshark, and saw that the final commands were plain GETs with keys and values far too small to take 1.43 seconds. There was no slow log on the server either, so the troubleshooting hit a dead end again.

[further analysis]

To find out what Redis Server was doing during those 1.43 seconds, we used pstack to capture stack traces. pstack is essentially a gdb attach, and sampling at high frequency hurts Redis throughput, so we simply grabbed a stack every 0.5 seconds in a loop. When redis-server stalled again, the following stacks were captured (irrelevant frames filtered out):

Thu May 31 11:29:18 CST 2018

Thread 1 (Thread 0x7ff2db6de720 (LWP 8378)):

# 0 0x000000000048cec4 in? ()

# 1 0x00000000004914a4 in je_arena_ralloc ()

# 2 0x00000000004836a1 in je_realloc ()

# 3 0x0000000000422cc5 in zrealloc ()

# 4 0x00000000004213d7 in sdsRemoveFreeSpace ()

# 5 0x000000000041ef3c in clientsCronResizeQueryBuffer ()

# 6 0x00000000004205de in clientsCron ()

# 7 0x0000000000420784 in serverCron ()

# 8 0x0000000000418542 in aeProcessEvents ()

# 9 0x000000000041873b in aeMain ()

# 10 0x0000000000420fce in main ()

Thu May 31 11:29:19 CST 2018

Thread 1 (Thread 0x7ff2db6de720 (LWP 8378)):

# 0 0x0000003729ee5407 in madvise () from / lib64/libc.so.6

# 1 0x0000000000493a4e in je_pages_purge ()

# 2 0x000000000048cf70 in? ()

# 3 0x00000000004914a4 in je_arena_ralloc ()

# 4 0x00000000004836a1 in je_realloc ()

# 5 0x0000000000422cc5 in zrealloc ()

# 6 0x00000000004213d7 in sdsRemoveFreeSpace ()

# 7 0x000000000041ef3c in clientsCronResizeQueryBuffer ()

# 8 0x00000000004205de in clientsCron ()

# 9 0x0000000000420784 in serverCron ()

# 10 0x0000000000418542 in aeProcessEvents ()

# 11 0x000000000041873b in aeMain ()

# 12 0x0000000000420fce in main ()

Thu May 31 11:29:19 CST 2018

Thread 1 (Thread 0x7ff2db6de720 (LWP 8378)):

# 0 0x000000000048108c in je_malloc_usable_size ()

# 1 0x0000000000422be6 in zmalloc ()

# 2 0x00000000004220bc in sdsnewlen ()

# 3 0x000000000042c409 in createStringObject ()

# 4 0x000000000042918e in processMultibulkBuffer ()

# 5 0x0000000000429662 in processInputBuffer ()

# 6 0x0000000000429762 in readQueryFromClient ()

# 7 0x000000000041847c in aeProcessEvents ()

# 8 0x000000000041873b in aeMain ()

# 9 0x0000000000420fce in main ()

Thu May 31 11:29:20 CST 2018

Thread 1 (Thread 0x7ff2db6de720 (LWP 8378)):

# 0 0x000000372a60e7cd in write () from / lib64/libpthread.so.0

# 1 0x0000000000428833 in sendReplyToClient ()

# 2 0x0000000000418435 in aeProcessEvents ()

# 3 0x000000000041873b in aeMain ()

# 4 0x0000000000420fce in main ()

After repeated sampling, the suspicious frame clientsCronResizeQueryBuffer keeps appearing in the stack, called from serverCron(). This timer-driven housekeeping inside redis-server does not run in the context of a user command, which explains why nothing showed up in the slow log while the server was stalled.

We checked the Redis source code to confirm what redis-server was doing:

clientsCron (server.c):

#define CLIENTS_CRON_MIN_ITERATIONS 5
void clientsCron(void) {
    /* Make sure to process at least numclients/server.hz of clients
     * per call. Since this function is called server.hz times per second
     * we are sure that in the worst case we process all the clients in 1
     * second. */
    int numclients = listLength(server.clients);
    int iterations = numclients/server.hz;
    mstime_t now = mstime();

    /* Process at least a few clients while we are at it, even if we need
     * to process less than CLIENTS_CRON_MIN_ITERATIONS to meet our contract
     * of processing each client once per second. */
    if (iterations < CLIENTS_CRON_MIN_ITERATIONS)
        iterations = (numclients < CLIENTS_CRON_MIN_ITERATIONS) ?
                     numclients : CLIENTS_CRON_MIN_ITERATIONS;

    while(listLength(server.clients) && iterations--) {
        client *c;
        listNode *head;

        /* Rotate the list, take the current head, process.
         * This way if the client must be removed from the list it's the
         * first element and we don't incur into O(N) computation. */
        listRotate(server.clients);
        head = listFirst(server.clients);
        c = listNodeValue(head);

        /* The following functions do different service checks on the client.
         * The protocol is that they return non-zero if the client was
         * terminated. */
        if (clientsCronHandleTimeout(c,now)) continue;
        if (clientsCronResizeQueryBuffer(c)) continue;
    }
}

clientsCron first looks at the current number of clients in order to bound how many connections are processed in one pass. A single production instance holds a little under 5,000 connections, which works out to about 50 connections cleaned per pass.

clientsCronResizeQueryBuffer (server.c):

/* The client query buffer is an sds.c string that can end with a lot of
 * free space not used, this function reclaims space if needed.
 *
 * The function always returns 0 as it never terminates the client. */
int clientsCronResizeQueryBuffer(client *c) {
    size_t querybuf_size = sdsAllocSize(c->querybuf);
    time_t idletime = server.unixtime - c->lastinteraction;

    /* The query buffer is resized only in the following two cases:
     * 1) the buffer is > BIG_ARG (PROTO_MBULK_BIG_ARG, defined as 1024*32
     *    in server.h) and its size is well above the client's recent peak
     *    usage, or
     * 2) the client has been idle for more than 2 seconds and the buffer
     *    is larger than 1k. */
    if (((querybuf_size > PROTO_MBULK_BIG_ARG) &&
         (querybuf_size/(c->querybuf_peak+1)) > 2) ||
        (querybuf_size > 1024 && idletime > 2))
    {
        /* Only resize the query buffer if it is actually wasting space. */
        if (sdsavail(c->querybuf) > 1024) {
            c->querybuf = sdsRemoveFreeSpace(c->querybuf);
        }
    }
    /* Reset the peak again to capture the peak memory usage in the next
     * cycle. */
    c->querybuf_peak = 0;
    return 0;
}

If a redisClient's query buffer satisfies the condition, it is resized on the spot. Two kinds of connections qualify: those whose buffer is genuinely oversized relative to the client's recent peak usage, and those that have been idle for more than 2 seconds; in both cases the buffer must also have more than 1k of free space. So redis-server stalls when, in one pass, it happens to hit a run of about 50 connections that are either oversized or idle with more than 1k of free space and resizes them one after another; because Redis is single-threaded, this blocks all other clients. The direction of the fix is then clear: make resizing happen less often, or make each resize faster.

Since the problem is in the query buffer, let's look at where it gets allocated and grown:

createClient (networking.c):

redisClient *createClient(int fd) {
    redisClient *c = zmalloc(sizeof(redisClient));

    /* passing -1 as fd it is possible to create a non connected client.
     * This is useful since all the Redis commands needs to be executed
     * in the context of a client. When commands are executed in other
     * contexts (for instance a Lua script) we need a non connected client. */
    if (fd != -1) {
        anetNonBlock(NULL,fd);
        anetEnableTcpNoDelay(NULL,fd);
        if (server.tcpkeepalive)
            anetKeepAlive(NULL,fd,server.tcpkeepalive);
        if (aeCreateFileEvent(server.el,fd,AE_READABLE,
            readQueryFromClient, c) == AE_ERR)
        {
            close(fd);
            zfree(c);
            return NULL;
        }
    }

    selectDb(c,0);
    c->id = server.next_client_id++;
    c->fd = fd;
    c->name = NULL;
    c->bufpos = 0;
    c->querybuf = sdsempty();      /* the query buffer starts out empty (size 0) */
    ...

readQueryFromClient (networking.c):

void readQueryFromClient(aeEventLoop *el, int fd, void *privdata, int mask) {
    redisClient *c = (redisClient*) privdata;
    int nread, readlen;
    size_t qblen;
    REDIS_NOTUSED(el);
    REDIS_NOTUSED(mask);

    server.current_client = c;
    readlen = REDIS_IOBUF_LEN;
    /* If this is a multi bulk request, and we are processing a bulk reply
     * that is large enough, try to maximize the probability that the query
     * buffer contains exactly the SDS string representing the object, even
     * at the risk of requiring more read(2) calls. This way the function
     * processMultiBulkBuffer() can avoid copying buffers to create the
     * Redis Object representing the argument. */
    if (c->reqtype == REDIS_REQ_MULTIBULK && c->multibulklen && c->bulklen != -1
        && c->bulklen >= REDIS_MBULK_BIG_ARG)
    {
        int remaining = (unsigned)(c->bulklen+2)-sdslen(c->querybuf);

        if (remaining < readlen) readlen = remaining;
    }

    qblen = sdslen(c->querybuf);
    if (c->querybuf_peak < qblen) c->querybuf_peak = qblen;
    c->querybuf = sdsMakeRoomFor(c->querybuf, readlen);    /* the buffer is grown here */
    ...

From this we can see that c->querybuf is grown to at least 1024*32 bytes once the connection reads its first command: readQueryFromClient asks sdsMakeRoomFor for REDIS_IOBUF_LEN (1024*16) bytes, and for buffers below SDS_MAX_PREALLOC sdsMakeRoomFor preallocates twice the required length, giving 32KB. Looking back at the resize logic, the problem is obvious: every query buffer that has been used is at least 1024*32 bytes, yet the cleanup threshold is only > 1024, so every used connection that goes idle for more than 2 seconds gets its buffer shrunk, only to be re-grown to 1024*32 on the next request. This is unnecessary: in a heavily accessed cluster the memory is reclaimed and reallocated over and over. So we tried changing the cleanup condition as follows, which avoids most of the pointless resize operations:

if (((querybuf_size > REDIS_MBULK_BIG_ARG) &&
     (querybuf_size/(c->querybuf_peak+1)) > 2) ||
    (querybuf_size > 1024*32 && idletime > 2))
{
    /* Only resize the query buffer if it is actually wasting space. */
    if (sdsavail(c->querybuf) > 1024*32) {
        c->querybuf = sdsRemoveFreeSpace(c->querybuf);
    }
}
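
As a sanity check on why the original condition resized almost every idle connection while the relaxed one does not, here is a small standalone sketch (not Redis code; the sample values are hypothetical but typical) that evaluates both conditions for a freshly grown 32KB query buffer on an idle client:

/* Standalone sketch: compare the original and relaxed resize conditions
 * for a typical idle client whose query buffer was preallocated to 32KB
 * by sdsMakeRoomFor(). Constants mirror the values discussed above. */
#include <stdio.h>
#include <stdbool.h>
#include <stddef.h>

#define PROTO_MBULK_BIG_ARG (1024*32)

static bool original_condition(size_t size, size_t peak, int idle) {
    return ((size > PROTO_MBULK_BIG_ARG && size / (peak + 1) > 2) ||
            (size > 1024 && idle > 2));
}

static bool relaxed_condition(size_t size, size_t peak, int idle) {
    return ((size > PROTO_MBULK_BIG_ARG && size / (peak + 1) > 2) ||
            (size > 1024*32 && idle > 2));
}

int main(void) {
    size_t size = 1024*32;  /* fresh buffer after the first read */
    size_t peak = 100;      /* a small GET only touched ~100 bytes */
    int idle = 3;           /* client idle for 3 seconds */

    printf("original: resize=%d\n", original_condition(size, peak, idle)); /* prints 1 */
    printf("relaxed : resize=%d\n", relaxed_condition(size, peak, idle));  /* prints 0 */
    return 0;
}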

The side effect of this change is extra memory: taking an instance with 5,000 connections as an example, that is 5000 * 1024 * 32 ≈ 160MB, which is acceptable for servers with hundreds of gigabytes of memory.

[problem recurrence]

After deploying the Redis server built from the modified source, the problem still recurred: clients reported the same type of error, and server memory still jittered when the errors occurred. The stacks captured at the time were as follows:

Thu Jun 14 21:56:54 CST 2018

# 3 0x0000003729ee893d in clone () from / lib64/libc.so.6

Thread 1 (Thread 0x7f2dc108d720 (LWP 27851)):

# 0 0x0000003729ee5400 in madvise () from / lib64/libc.so.6

# 1 0x0000000000493a1e in je_pages_purge ()

# 2 0x000000000048cf40 in arena_purge ()

# 3 0x00000000004a7dad in je_tcache_bin_flush_large ()

# 4 0x00000000004a85e9 in je_tcache_event_hard ()

# 5 0x000000000042c0b5 in decrRefCount ()

# 6 0x000000000042744d in resetClient ()

# 7 0x000000000042963b in processInputBuffer ()

# 8 0x0000000000429762 in readQueryFromClient ()

# 9 0x000000000041847c in aeProcessEvents ()

# 10 0x000000000041873b in aeMain ()

# 11 0x0000000000420fce in main ()

Thu Jun 14 21:56:54 CST 2018

Thread 1 (Thread 0x7f2dc108d720 (LWP 27851)):

# 0 0x0000003729ee5400 in madvise () from / lib64/libc.so.6

# 1 0x0000000000493a1e in je_pages_purge ()

# 2 0x000000000048cf40 in arena_purge ()

# 3 0x00000000004a7dad in je_tcache_bin_flush_large ()

# 4 0x00000000004a85e9 in je_tcache_event_hard ()

# 5 0x000000000042c0b5 in decrRefCount ()

# 6 0x000000000042744d in resetClient ()

# 7 0x000000000042963b in processInputBuffer ()

# 8 0x0000000000429762 in readQueryFromClient ()

# 9 0x000000000041847c in aeProcessEvents ()

# 10 0x000000000041873b in aeMain ()

# 11 0x0000000000420fce in main ()

Clearly the frequent query-buffer resizing had been optimized away, yet clients were still reporting errors, which left us at another impasse: was some other factor making the query-buffer handling slow? We kept sampling with pstack, and this time jemalloc caught our attention. Recall Redis's memory management: to avoid the fragmentation caused by libc malloc not returning memory, Redis uses jemalloc as its default allocator. In the stacks above, je_pages_purge() shows Redis calling into jemalloc to reclaim dirty pages. Let's look at what jemalloc does there:

arena_purge (arena.c):

static void
arena_purge(arena_t *arena, bool all)
{
    arena_chunk_t *chunk;
    size_t npurgatory;
    if (config_debug) {
        size_t ndirty = 0;

        arena_chunk_dirty_iter(&arena->chunks_dirty, NULL,
            chunks_dirty_iter_cb, (void *)&ndirty);
        assert(ndirty == arena->ndirty);
    }
    assert(arena->ndirty > arena->npurgatory || all);
    assert((arena->nactive >> opt_lg_dirty_mult) < (arena->ndirty -
        arena->npurgatory) || all);

    if (config_stats)
        arena->stats.npurge++;

    npurgatory = arena_compute_npurgatory(arena, all);
    arena->npurgatory += npurgatory;

    while (npurgatory > 0) {
        size_t npurgeable, npurged, nunpurged;

        /* Get next chunk with dirty pages. */
        chunk = arena_chunk_dirty_first(&arena->chunks_dirty);
        if (chunk == NULL) {
            arena->npurgatory -= npurgatory;
            return;
        }
        npurgeable = chunk->ndirty;
        assert(npurgeable != 0);

        if (npurgeable > npurgatory && chunk->nruns_adjac == 0) {
            arena->npurgatory += npurgeable - npurgatory;
            npurgatory = npurgeable;
        }
        arena->npurgatory -= npurgeable;
        npurgatory -= npurgeable;
        npurged = arena_chunk_purge(arena, chunk, all);
        nunpurged = npurgeable - npurged;
        arena->npurgatory += nunpurged;
        npurgatory += nunpurged;
    }
}

Each time jemalloc purges, it walks the chunks that need cleaning and maintains counters along the way, which is a luxury for a latency-sensitive system, so we considered upgrading jemalloc to get a better-performing purge. Redis 4.0 improves greatly here: it bundles a newer jemalloc and adds a command for reclaiming memory on demand, so we prepared to upgrade online as well. The arena_purge() used from Redis 4.0 onwards (introduced in jemalloc 4.1) is heavily optimized: the counter calls are removed and much of the decision logic is simplified; arena_stash_dirty() merges the earlier computation and checks; a purge_runs_sentinel is introduced; and dirty blocks are kept in a per-arena LRU instead of in the arena tree of dirty-run-containing chunks, which greatly reduces the volume of dirty blocks to purge and avoids moving memory blocks during reclamation. The code is as follows:

arena_purge (arena.c):

static void
arena_purge(arena_t *arena, bool all)
{
    chunk_hooks_t chunk_hooks = chunk_hooks_get(arena);
    size_t npurge, npurgeable, npurged;
    arena_runs_dirty_link_t purge_runs_sentinel;
    extent_node_t purge_chunks_sentinel;

    arena->purging = true;

    /*
     * Calls to arena_dirty_count() are disabled even for debug builds
     * because overhead grows nonlinearly as memory usage increases.
     */
    if (false && config_debug) {
        size_t ndirty = arena_dirty_count(arena);
        assert(ndirty == arena->ndirty);
    }
    assert((arena->nactive >> arena->lg_dirty_mult) < arena->ndirty || all);

    if (config_stats)
        arena->stats.npurge++;

    npurge = arena_compute_npurge(arena, all);
    qr_new(&purge_runs_sentinel, rd_link);
    extent_node_dirty_linkage_init(&purge_chunks_sentinel);

    npurgeable = arena_stash_dirty(arena, &chunk_hooks, all, npurge,
        &purge_runs_sentinel, &purge_chunks_sentinel);
    assert(npurgeable >= npurge);
    npurged = arena_purge_stashed(arena, &chunk_hooks, &purge_runs_sentinel,
        &purge_chunks_sentinel);
    assert(npurged == npurgeable);
    arena_unstash_purged(arena, &chunk_hooks, &purge_runs_sentinel,
        &purge_chunks_sentinel);

    arena->purging = false;
}
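
For context on what the purge ultimately does (this is not jemalloc code): handing dirty pages back to the kernel comes down to madvise(MADV_DONTNEED) calls like the one visible in the pstack output. A minimal sketch of that syscall-level behaviour:

/* Minimal sketch of the syscall behind je_pages_purge(): dirty pages are
 * handed back to the kernel with madvise(MADV_DONTNEED). The physical
 * memory is released; the virtual mapping stays valid and refaults as
 * zero-filled pages on the next touch. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    size_t len = 4 * 1024 * 1024;  /* a 4MB region, roughly a jemalloc chunk */
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    memset(p, 0xab, len);                        /* dirty the pages */

    if (madvise(p, len, MADV_DONTNEED) != 0) {   /* "purge" them */
        perror("madvise");
        return 1;
    }
    printf("first byte after purge: %d\n", ((unsigned char *)p)[0]);  /* prints 0 */

    munmap(p, len);
    return 0;
}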

[problem solving]

We actually had several options: replace jemalloc with Google's tcmalloc, upgrade jemalloc itself, and so on. Based on the analysis above, we chose to pick up the newer jemalloc by upgrading Redis itself. After upgrading Redis to 4.0.9, the nagging problem of occasional client connection timeouts was resolved.
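
For completeness: Redis 4.0 built with jemalloc also exposes the reclaim-on-demand command mentioned above, MEMORY PURGE, which asks jemalloc to release dirty pages immediately. A minimal hiredis sketch (host and port are placeholders):

/* Sketch: trigger a jemalloc purge on demand on a Redis >= 4.0 server
 * built with jemalloc, using the MEMORY PURGE command. */
#include <stdio.h>
#include <hiredis/hiredis.h>

int main(void) {
    redisContext *c = redisConnect("127.0.0.1", 6379);
    if (c == NULL || c->err) { fprintf(stderr, "connect failed\n"); return 1; }

    redisReply *r = redisCommand(c, "MEMORY PURGE");
    if (r) printf("MEMORY PURGE: %s\n", r->str ? r->str : "(no reply)");
    freeReplyObject(r);

    /* INFO memory reports which allocator the server was built with. */
    r = redisCommand(c, "INFO memory");
    if (r && r->str) printf("%s\n", r->str);
    freeReplyObject(r);

    redisFree(c);
    return 0;
}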

[summary of problems]

Redis is widely used in production because of its high concurrency, fast response, and ease of operation, but those same expectations on response time bring all kinds of problems for the operations team, and Redis connection timeouts are a typical one. This case went from spotting the problem (client connection timeouts), to capturing network packets between client and server, to locating the issue in memory-management stacks, with a few misleading clues along the way. In the end the problem was solved by upgrading jemalloc (by upgrading Redis), and the most valuable thing to take away is the overall approach to the analysis.
