What are the problems that may be encountered in using Redis

2025-02-24 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)05/31 Report--

This article walks through problems you may run into when using Redis. It goes into some detail and should be a useful reference; interested readers, do read on!

In this article, I want to talk to you about the "pit" that you may step into when using Redis.

If you have run into any of the following "weird" scenarios while using Redis, chances are you have stepped into one of these pits:

You clearly set an expiration time on a key, so why does it never expire?

SETBIT is supposed to be O(1), so how did it make Redis OOM?

RANDOMKEY just returns a random key, so how can it block Redis?

Why does the same query find nothing on the master, yet return data on the slave?

Why does the slave use more memory than the master?

Why is data written to Redis inexplicably lost?

...

What on earth is the cause of these problems?

In this article, I will walk through the pits you may step into when using Redis, and how to avoid them.

I have divided these problems into three areas:

What are the pitfalls in common commands?

What are the pitfalls in data persistence?

What are the pitfalls in master-slave synchronization?

The causes of these problems may well overturn your assumptions. If you are ready, follow my train of thought and let's begin!

This article is packed with practical information; I hope you will read it patiently.

What are the pitfalls in common commands?

First, let's look at some common commands that can produce "unexpected" results.

1) Expiration time unexpectedly lost?

You no doubt use the SET command often; it is very simple.

Besides setting a key-value pair, SET can also set an expiration time on the key, like this:

127.0.0.1:6379> SET testkey val1 EX 60
OK
127.0.0.1:6379> TTL testkey
(integer) 59

At this point, if you want to change the key's value but simply run SET again without the expiration-time parameter, the key's expiration time will be "erased".

127.0.0.1:6379> SET testkey val2
OK
127.0.0.1:6379> TTL testkey  // never expires!
(integer) -1

See that? testkey will never expire!

If you've just started using Redis, I'm sure you've stepped on this hole, too.

The reason is that when SET is executed without an expiration time, Redis "erases" the key's existing expiration time.

If you find that Redis memory keeps growing, and many keys that originally had expiration times have since lost them, this is probably why.

Your Redis will then accumulate a large number of never-expiring keys, consuming excessive memory.

So whenever you use SET to modify a key that was given an expiration time, be sure to include the expiration-time parameter again, to avoid losing it.
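The TTL-erasing behaviour can be sketched in a few lines. MiniRedis below is a toy in-memory model (not real Redis); its keepttl flag mirrors the KEEPTTL option that real Redis added to SET in version 6.0:

```python
import time

class MiniRedis:
    """Toy model of SET/TTL semantics; a sketch, not real Redis."""

    def __init__(self):
        self.data = {}     # key -> value
        self.expires = {}  # key -> absolute expiry timestamp

    def set(self, key, value, ex=None, keepttl=False):
        self.data[key] = value
        if ex is not None:
            self.expires[key] = time.time() + ex
        elif not keepttl:
            # A plain SET discards any existing TTL -- the pitfall.
            self.expires.pop(key, None)

    def ttl(self, key):
        if key not in self.data:
            return -2                      # key does not exist
        if key not in self.expires:
            return -1                      # key exists, no expiry
        return round(self.expires[key] - time.time())

r = MiniRedis()
r.set("testkey", "val1", ex=60)
r.set("testkey", "val2")                   # TTL silently erased
print(r.ttl("testkey"))                    # -1: never expires

r.set("testkey", "val1", ex=60)
r.set("testkey", "val2", keepttl=True)     # Redis 6.0+: SET key val KEEPTTL
print(r.ttl("testkey") > 0)                # True: TTL preserved
```

On Redis versions before 6.0 there is no KEEPTTL, so the only safe habit is to pass the expiration time again on every SET.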

2) DEL can also block Redis?

To delete a key you will certainly use the DEL command, but have you ever thought about its time complexity?

O(1)? Actually, not necessarily.

If you read the official Redis documentation carefully, you will find that the time it takes to delete a key depends on the key's type.

The Redis documentation describes DEL as follows:

If the key is a String, DEL is O(1).

If the key is a List/Hash/Set/ZSet, DEL is O(M), where M is the number of elements.

That is, when deleting a non-String key, the more elements the key has, the longer DEL takes!

Why is this?

The reason is that when deleting such a key, Redis must free the memory of each element in turn; the more elements, the longer this takes.

Such a long-running operation inevitably blocks the entire Redis instance and hurts performance.

Therefore, when deleting a List/Hash/Set/ZSet key, do not blindly run DEL; instead, delete it like this:

Query the element count: run LLEN/HLEN/SCARD/ZCARD

Judge the element count: if it is small, DEL directly; otherwise delete in batches

Batch-delete: run LRANGE/HSCAN/SSCAN/ZSCAN plus LPOP/RPOP/HDEL/SREM/ZREM repeatedly
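The batch-delete idea can be sketched with a plain Python list standing in for a Redis List (a hypothetical helper, no real client involved): each pass removes at most batch_size elements, mimicking a bounded LPOP loop so that no single call is O(N).

```python
def batch_delete_list(elements, batch_size=100):
    """Delete a big 'list' in bounded steps instead of one O(N) DEL.

    Each loop iteration stands in for one LPOP/LTRIM round trip that
    removes at most batch_size elements, so Redis can serve other
    clients between the steps.
    """
    passes = 0
    while elements:
        del elements[:batch_size]   # one bounded deletion step
        passes += 1
    return passes

big = list(range(1050))
print(batch_delete_list(big))  # 11 passes of <=100 elements each
print(len(big))                # 0
```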

Having seen how DEL affects List/Hash/Set/ZSet keys, let's check whether deleting a String key has the same problem.

Huh? Didn't I just say the official documentation puts DEL on a String key at O(1)? Surely that can't block Redis?

Actually, that is not guaranteed either!

Think about it: what if the key holds a very large value?

For example, if the key stores 500MB of data (clearly a bigkey), DEL will still take a long time!

This is because releasing such a large chunk of memory back to the operating system takes time, so the operation runs longer.

Therefore, avoid storing very large values in a String key, or deleting it will cause performance problems.

At this point you might object: didn't Redis 4.0 introduce the lazy-free mechanism? With it enabled, memory is released in a background thread, so the main thread is not blocked.

That's a very good question.

Does it really work like that?

Here is the conclusion: even with lazy-free enabled, deleting a String bigkey is still handled in the main thread, not a background thread. So there is still a risk of blocking Redis!

Why is that?

I'll keep you in suspense here; interested readers can look up the lazy-free material and find the answer. :)

There is actually a lot to say about lazy-free; for space reasons I plan to cover it in a dedicated article, so stay tuned.

3) RANDOMKEY can also block Redis?

If you want to inspect a random key in Redis, you usually use the RANDOMKEY command.

This command returns a randomly chosen key from Redis.

Since it is random, it must execute very fast, right?

Actually, no.

To explain this clearly, we need to bring in Redis's expiration policy.

If you know Redis's expiration policy, you know that Redis cleans up expired keys with a combination of periodic deletion and lazy deletion.

After RANDOMKEY picks a key at random, it first checks whether that key has expired.

If the key has expired, Redis deletes it; this is lazy deletion.

But deletion isn't the end: Redis still needs to find a non-expired key to return to the client.

So Redis picks another random key, checks its expiration, and repeats until a non-expired key can be returned.

The whole process goes like this:

The master randomly picks a key and checks whether it has expired

If the key has expired, delete it and pick another key at random

Repeat this loop until a non-expired key is found and returned

But there is a problem: if a large number of keys in Redis have expired but have not yet been cleaned up, this loop can run for a long time, and that time is spent deleting expired keys and searching for a non-expired one.

As a result, RANDOMKEY takes longer to execute, hurting Redis performance.

The process above is what happens on the master.

If you run RANDOMKEY on a slave, the problem is even worse!

Why?

The key reason is that a slave does not clean up expired keys itself.

So when does a slave delete expired keys?

In fact, when a key expires, the master cleans it up first and then sends a DEL command to the slave, telling it to delete the key too; this keeps master and slave data consistent.

Same scenario: Redis holds a large number of expired keys that have not yet been cleaned up. Running RANDOMKEY on the slave then goes like this:

The slave randomly picks a key and checks whether it has expired

The key has expired, but the slave will not delete it; it just keeps randomly searching for a non-expired key

Because so many keys have expired, the slave may never find a qualifying key and falls into an infinite loop!

In other words, running RANDOMKEY on a slave can hang the entire Redis instance!

Surprised? How could fetching one random key on a slave have such serious consequences?

This is actually a Redis Bug, and it was not fixed until version 5.0.

The fix: when RANDOMKEY runs on a slave, it first checks whether every key in the instance has an expiration time. If so, to avoid searching forever, the slave probes the hash table at most 100 times and exits the loop whether or not a key was found.

In essence, the fix caps the number of retries, which avoids the infinite loop.

Although this prevents the slave from hanging the whole instance, on the master the command may still take longer than expected.

So if you notice Redis "jitter" while using RANDOMKEY, this is very likely the cause!
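The fixed behaviour described above can be simulated in a few lines; this is a simplified sketch of the logic, not the Redis source:

```python
import random

def randomkey(keys, expired, on_replica, max_tries=100):
    """Simplified sketch of RANDOMKEY after the 5.0 fix.

    A master lazily deletes an expired key and retries; a replica never
    deletes, so the retry cap is all that prevents an infinite loop
    when every key has expired.
    """
    for _ in range(max_tries):
        if not keys:
            return None
        k = random.choice(sorted(keys))
        if k not in expired:
            return k
        if not on_replica:
            keys.discard(k)        # lazy deletion on the master
    return None                    # gave up: the bounded-retry fix

# Every key expired, queried on a replica: before the fix this looped forever.
print(randomkey({"a", "b", "c"}, expired={"a", "b", "c"}, on_replica=True))  # None
```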

4) O(1)-complexity SETBIT can cause Redis OOM?

When using Redis's String type, besides writing a string directly, you can also treat the value as a bitmap.

Specifically, you can operate on the individual bits of a String key, like this:

127.0.0.1:6379> SETBIT testkey 10 1
(integer) 0
127.0.0.1:6379> GETBIT testkey 10
(integer) 1

The position of each bit being operated on is called the offset.

However, there is a hole that you need to pay attention to.

If the key does not exist, or its current value is small, and the offset you operate on is very large, Redis must allocate a correspondingly large chunk of memory. The allocation takes time and hurts performance, and on a memory-constrained machine it can push the instance toward OOM.

So when using SETBIT, watch the size of the offset: operating on a huge offset can stall Redis.

Such a key is also a typical bigkey: besides the allocation cost, deleting it later takes longer too.
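It is easy to see why a big offset hurts: SETBIT on offset N forces the string to grow to at least ceil((N+1)/8) bytes. A quick back-of-the-envelope helper (illustrative only):

```python
def setbit_alloc_bytes(offset):
    """Minimum string size (bytes) after SETBIT key <offset> 1."""
    return offset // 8 + 1

print(setbit_alloc_bytes(100))            # 13 bytes: harmless
print(setbit_alloc_bytes(4_294_967_295))  # 536870912 bytes (512MB): a bigkey!
```

The second call uses the maximum offset Redis allows (2^32 - 1, since a String value is capped at 512MB): one "O(1)" command, half a gigabyte of allocation.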

5) Executing MONITOR can also lead to Redis OOM?

You must have heard of this pit many times.

When you run MONITOR, Redis writes every command it processes into that client's output buffer, from which the client reads the results returned by the server.

If your Redis handles a high QPS, this output buffer keeps growing and consumes a lot of memory; if the machine is short on memory resources, the Redis instance risks being OOM-killed.

So use MONITOR with care, especially when QPS is high.

All of these problem scenarios arise from common commands, and they are easy to trigger unintentionally.

Next, let's look at the pitfalls in Redis data persistence.

What are the pitfalls in data persistence?

Redis offers two persistence mechanisms: RDB and AOF.

RDB takes point-in-time snapshots of the data, while AOF logs every write command to a file.

The persistence pitfalls center on these two mechanisms; let's look at them in turn.

1) Master goes down and the slave's data is lost too?

If your Redis is deployed as follows, data loss can occur:

Master-slave + sentinel deployment

The master has no data persistence enabled

The Redis process is managed by supervisor, configured to restart automatically when the process dies

If the master goes down at this point, the following happens:

The master crashes, and before the sentinel initiates a failover, supervisor immediately restarts the master process

But the master has no persistence enabled, so after restarting it is an "empty" instance

To stay consistent with the master, the slave then automatically flushes all of its data and becomes an "empty" instance too

See that? In this scenario, all master and slave data is lost.

When the application then hits Redis and finds the cache empty, every request goes to the backend database, which can further trigger a cache avalanche with serious business impact.

So you must avoid this situation. My advice:

Do not use a process-management tool to auto-restart the Redis instance

After the master goes down, let the sentinel perform the failover and promote a slave to master

After the failover completes, restart the old master so it rejoins as a slave

You should also enable data persistence, so that a restarted master does not come up empty.

2) Does AOF everysec really never block the main thread?

When you enable AOF, you need to configure the AOF flush policy.

Balancing performance against data safety, you will most likely choose the appendfsync everysec option.

Under this option, a Redis background thread flushes (fsync) the AOF page cache to disk once every second.

The advantage is that the time-consuming flush is moved to a background thread, avoiding impact on the main thread.

But does it really never affect the main thread?

The answer is no.

In fact, there is a scenario where, while the Redis background thread is fsync-ing the AOF page cache, a heavily loaded disk causes the fsync call to block.

Meanwhile the main thread keeps receiving write requests, so before writing it first checks whether the previous background fsync succeeded.

How does it check?

After a successful flush, the background thread records the flush time.

Using that timestamp, the main thread judges how long it has been since the last flush. The whole process goes like this:

Before writing to the AOF page cache (the write system call), the main thread checks whether the background fsync has completed

If the fsync has completed, the main thread writes to the AOF page cache directly

If the fsync has not completed, check how long it has been since the last successful fsync

If less than 2 seconds have passed since the last successful fsync, the main thread returns immediately without writing the AOF page cache

If more than 2 seconds have passed since the last successful fsync, the main thread forces the write to the AOF page cache (the write system call)

Because the disk IO load is too high, the background fsync is blocked, and the main thread's write to the AOF page cache then blocks too (write and fsync on the same fd are mutually exclusive; one side must wait for the other to finish before proceeding)
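The decision the main thread makes in the steps above can be condensed into a small function; this is a simplified sketch of the logic in Redis's AOF flush path, not the actual source:

```python
def aof_write_action(now, last_fsync_ok, fsync_in_progress):
    """What the main thread does before writing the AOF page cache
    under appendfsync everysec (simplified sketch)."""
    if not fsync_in_progress:
        return "write"        # background fsync finished: safe to write
    if now - last_fsync_ok < 2.0:
        return "postpone"     # skip this write; give the fsync some grace
    return "force-write"      # >2s behind: write anyway, may block on fsync

print(aof_write_action(now=10.0, last_fsync_ok=9.5, fsync_in_progress=True))   # postpone
print(aof_write_action(now=10.0, last_fsync_ok=7.5, fsync_in_progress=True))   # force-write
print(aof_write_action(now=10.0, last_fsync_ok=9.5, fsync_in_progress=False))  # write
```

The "postpone" branch is exactly the window discussed in the next section: a crash there loses up to 2 seconds of data.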

From this analysis we can see that even with the appendfsync everysec flush policy, there is still a risk of blocking the main thread.

The crux of the problem is that disk IO overload blocks fsync, which in turn blocks the main thread's write to the AOF page cache.

So you must make sure the disk has sufficient IO capacity to avoid this problem.

3) Does AOF everysec really lose at most one second of data?

Let's continue from the problem above.

Focus on step 4 of the process: when the main thread is about to write the AOF page cache, it first checks the time of the last successful fsync; if less than 2 seconds have passed since then, it returns immediately without writing.

This means that while the background thread is performing an fsync, the main thread may wait up to 2 seconds without writing the AOF page cache.

If Redis crashes at that moment, the AOF file loses up to 2 seconds of data, not 1 second!

Let's keep going: why does the Redis main thread wait 2 seconds rather than writing the AOF page cache?

With appendfsync everysec, the background thread normally performs an fsync once per second, and with sufficient disk resources it never blocks.

In other words, the main thread could simply ignore whether the background fsync succeeds and blindly write the AOF page cache every time.

However, the Redis authors considered that when disk IO is tight, the background fsync may block.

So before writing the AOF page cache, the main thread checks the time since the last successful fsync; if more than 1 second has passed without success, it knows the fsync is probably blocked.

Hence the main thread waits up to 2 seconds without writing the AOF page cache, in order to:

Reduce the risk of blocking the main thread (blindly writing the AOF page cache would block the main thread immediately)

If the fsync is blocked, give the background thread an extra second for the fsync to complete

The price is that if a crash occurs in this window, the AOF loses up to 2 seconds of data rather than 1.

This is a further trade-off the Redis authors made between performance and data safety.

In any case, know that even with the per-second flush policy, AOF can lose up to 2 seconds of data in the extreme case described above.

4) RDB snapshots and AOF rewrite can cause Redis OOM?

Finally, let's look at problems that arise while Redis performs an RDB snapshot or an AOF rewrite.

For both, Redis creates a child process to persist the instance's data to disk.

Creating the child process calls the operating system's fork function.

Right after the fork, the parent and child processes share the same memory.

But the parent process keeps accepting write requests, and those writes modify memory via Copy-On-Write: when the parent needs to modify data, Redis does not modify the shared memory in place. It first copies that piece of memory, then applies the modification to the new copy; hence "copy on write".

In other words, whoever needs to write copies first, then modifies.

As you may have noticed, when the parent wants to modify a key, it must copy the original memory into new memory, and that means allocating new memory.

If your workload is write-heavy with very high OPS, a great deal of memory copying happens during RDB snapshots and AOF rewrite.
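As a rough illustration of that memory pressure: copy-on-write works at OS-page granularity, so every byte the parent dirties during the fork window drags in its whole page. This helper is only a crude upper-bound sketch with made-up numbers:

```python
PAGE = 4096  # typical OS page size in bytes

def cow_extra_bytes(dirty_bytes):
    """Crude upper bound on extra memory copied during the fork window:
    each dirtied region is rounded up to whole 4KB pages."""
    pages = (dirty_bytes + PAGE - 1) // PAGE
    return pages * PAGE

# e.g. a write-heavy period that touches ~2GB of distinct data
print(cow_extra_bytes(2 * 1024**3) / 1024**3)  # 2.0 (GB) extra during the rewrite
```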

Why is that a problem?

With so many write requests, the parent process may allocate a lot of memory, and the wider the range of keys modified during this period, the more new memory is needed.

If the machine lacks memory headroom, Redis risks an OOM!

This is why your DBA tells you to reserve memory on Redis machines.

The goal is to avoid an OOM during RDB snapshots and AOF rewrite.

These are the pits in data persistence; how many have you stepped in?

Next, let's look at the problems in master-slave replication.

What are the pitfalls in master-slave replication?

For high availability, Redis provides master-slave replication, ensuring there are multiple copies of the data; when the master goes down, a slave is still available.

There are plenty of pits during master-slave synchronization too; let's look at them in turn.

1) Can master-slave replication lose data?

First, know that Redis master-slave replication is asynchronous.

This means that if the master crashes suddenly, some data may not yet have been replicated to the slave.

What problems does this cause?

If you use Redis purely as a cache, it has no business impact.

Data that never reached the slave can simply be re-fetched from the backend database.

But for businesses that use Redis as a database or for distributed locks, asynchronous replication can mean lost data or lost locks.

I won't expand on the reliability of Redis distributed locks here; a separate article will analyze that in detail later. For now, just know that master-slave replication carries a probability of data loss.

2) The same query on a key returns different results on master and slave?

Have you ever wondered: if a key has expired but the master has not yet cleaned it up, what does querying that key on the slave return?

The slave returns the key's value normally

The slave returns NULL

Which do you think it is? Take a moment.

The answer: it depends.

Huh? Why does it depend?

This question is interesting; follow my train of thought and I'll take you through the reasons step by step.

In fact, the result returned depends on three factors:

The Redis version

The specific command executed

The machine clock

Let's look at the Redis version first.

If you are running a Redis version below 3.2, then as long as the master has not cleaned up the key, querying it on the slave always returns the value.

That is, even though the key has already expired, it can still be read on the slave.

// Redis 2.8, executed on the slave
127.0.0.1:6479> TTL testkey
(integer) -2       // already expired
127.0.0.1:6479> GET testkey
"testval"          // but still readable!

But if you query the key on the master at this point and it has expired, it is cleaned up and NULL is returned.

// Redis 2.8, executed on the master
127.0.0.1:6379> TTL testkey
(integer) -2
127.0.0.1:6379> GET testkey
(nil)

Did you notice? Querying the same key on master and slave gives different results.

In fact, the slave should behave like the master: if the key has expired, it should return NULL to the client, not the key's value.

Why does this happen?

This is actually a Bug in Redis versions below 3.2: when a key is queried on a slave, the slave does not check whether it has expired; it simply returns the result to the client.

The Bug was fixed in version 3.2, but the fix was "not thorough enough".

What does "not thorough enough" mean?

This is where the second factor, the specific command executed, comes in.

Redis 3.2 fixed the Bug but missed one command: EXISTS.

That is, once a key has expired, the slave returns NULL for data-reading commands such as GET/LRANGE/HGETALL/SMEMBERS/ZRANGE.

But if you run EXISTS, the slave still reports that the key exists.

// Redis 3.2, executed on the slave
127.0.0.1:6479> GET testkey
(nil)              // the key has logically expired
127.0.0.1:6479> EXISTS testkey
(integer) 1        // yet it still "exists"!

The reason is that EXISTS does not share the code path with the data-reading commands: the Redis author added the expiration check only to the data reads, while EXISTS was left unchanged.

It was not until Redis 4.0.11 that this remaining Bug was truly fixed.

If you run that version or later, both data queries and EXISTS on a slave return "does not exist" for an expired key.

To briefly summarize, slave queries on expired keys went through three stages:

Below 3.2: if an expired key has not yet been cleaned up, any query on the slave returns the value normally

3.2 to 4.0.11: data queries return NULL, but EXISTS still returns true

4.0.11 and above: all commands fixed; querying an expired key on a slave returns "does not exist"

Here I want to give special thanks to Fu Lei, the author of Redis Development and Operations.

I learned of this problem from his article and found it fascinating that Redis once had such a Bug, so I checked the relevant source code and sorted out the logic before writing this up to share.

Although I have already thanked him personally on WeChat, I want to thank him again here.

Finally, let's look at the third factor that affects the results of the query: the machine clock.

Suppose we have avoided the version Bugs above, say by running Redis 5.0; can querying a key on the slave still give a different result from the master?

The answer is: it still can.

This has to do with the machine clocks of the master and slave.

Both master and slave use their own local clock to decide whether a key has expired.

If the slave's clock runs faster than the master's, then from the slave's perspective a key may already be expired even though it has not expired on the master, and a query on the slave returns NULL.

Interesting, isn't it? A simple expired key hides this many tricks.

If you hit a similar situation, you can use the steps above to check whether you have stepped into this pit.
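The clock factor boils down to a one-line comparison: both sides evaluate the same absolute expiry timestamp against their own clock. A sketch with hypothetical timestamps:

```python
def is_expired(expire_at, local_now):
    """Each node compares the key's absolute expiry with its OWN clock."""
    return local_now >= expire_at

expire_at = 1000.0          # key expires at t=1000 (master time)
master_now = 995.0          # master: 5s to go, key still alive
slave_now = master_now + 8  # slave clock runs 8s fast

print(is_expired(expire_at, master_now))  # False: master returns the value
print(is_expired(expire_at, slave_now))   # True: slave returns NULL
```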

3) Master-slave failover can trigger a cache avalanche?

This problem is an extension of the previous one.

Suppose the slave's machine clock runs much faster than the master's.

From the slave's perspective, the data in Redis is then "expiring en masse".

If a master-slave failover now promotes this slave to master, the new master starts cleaning up large numbers of expired keys, which leads to:

The master deletes masses of expired keys, blocking the main thread so client requests cannot be handled in time

Large amounts of data in Redis expire at once, causing a cache avalanche

As you can see, a serious master/slave clock mismatch has a big business impact!

So if you operate these databases, make sure the master and slave machine clocks stay synchronized to avoid these problems.

4) Large-scale master/slave data inconsistency?

Another scenario can cause large-scale data inconsistency between master and slave.

It involves Redis's maxmemory configuration.

maxmemory caps the memory usage of the whole instance; once the cap is exceeded and an eviction policy is configured, the instance starts evicting data.

But here is the problem: if master and slave are configured with different maxmemory values, the data becomes inconsistent.

For example, with the master at 5G and the slave at 3G, once the data exceeds 3G the slave starts evicting data "ahead of time", and the master and slave datasets diverge.

Moreover, even when master and slave share the same maxmemory, be careful when adjusting it, or the slave may still evict data:

When raising maxmemory, adjust the slave first, then the master

When lowering maxmemory, adjust the master first, then the slave

This ordering prevents the slave from exceeding maxmemory ahead of the master.
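The ordering rule is easy to encode. The invariant is that the slave's limit must never be the tighter of the two while it is replicating from the master (a hypothetical helper, purely illustrative):

```python
def maxmemory_adjust_plan(current, target):
    """Order of operations so the slave never has the tighter limit.

    Raising the limit: slave first, so its cap has grown before the
    master accepts more data. Lowering: master first, so it stops
    accepting data beyond the new cap before the slave's cap shrinks.
    """
    if target >= current:
        return [("slave", target), ("master", target)]
    return [("master", target), ("slave", target)]

print(maxmemory_adjust_plan(3, 5))  # [('slave', 5), ('master', 5)]
print(maxmemory_adjust_plan(5, 3))  # [('master', 3), ('slave', 3)]
```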

Think about it: what is the crux of these problems?

The root cause is that once a slave exceeds maxmemory, it evicts data on its own.

If the slave never evicted data on its own, could all these problems be avoided?

Exactly.

Redis officially must have received plenty of user feedback on this; in version 5.0 the problem was finally solved completely!

Redis 5.0 adds a configuration item, replica-ignore-maxmemory, defaulting to yes.

It means that even if a slave's memory usage exceeds maxmemory, the slave will not evict data on its own!

The slave thus stays a faithful copy of the master, only replicating the data the master sends and never playing its own "tricks".

Master and slave data can then be guaranteed to stay fully consistent!

If you happen to be on version 5.0 or later, you need not worry about this.

5) How can a slave have a memory leak?

Yes, you read that right.

How does this happen? Let's look at the details.

A slave leaks memory when all of the following conditions are met:

The Redis version is below 4.0

The slave is configured with read-only=no (a writable slave)

Keys with expiration times are written directly to the slave

The slave then leaks memory: keys written to the slave are never cleaned up automatically, even after they expire.

Unless you delete them yourself, these keys stay in the slave's memory forever, consuming it.

Most troubling of all, querying these keys returns nothing, yet they still occupy memory!

That is the slave "memory leak" problem.

This is also a Redis Bug, and Redis 4.0 fixed it.

The fix: when keys with an expiration time are written to a writable slave, the slave "records" them.

The slave then periodically scans these keys and deletes the ones whose expiration time has been reached.

If your business temporarily stores data on a slave with expiration times set, watch out for this problem.

Check your Redis version; below 4.0, make sure you avoid this pit.

The best solution, though, is a usage rule: slaves must be forced read-only and never written to. This both guarantees master/slave data consistency and avoids the slave memory leak.

6) Why does master-slave full synchronization keep failing?

During a full synchronization, you may run into repeated failures, like this:

The slave requests a full sync from the master; the master generates an RDB and sends it to the slave; the slave loads the RDB.

Because the RDB is very large, the slave takes a long time to load it.

Before the slave finishes loading the RDB, the master-slave connection drops and the synchronization fails.

Then the slave initiates another full sync, and the master again generates and sends an RDB.

Again the sync fails while the slave is loading the RDB, and so it goes, back and forth.

What's going on?

This is in fact Redis's "replication storm" problem.

What is a replication storm?

Exactly what was just described: full synchronization fails, starts over, and fails again, round and round in a vicious circle that keeps burning machine resources.

Why does it cause this kind of problem?

This problem can occur if your Redis has the following characteristics:

The master instance holds too much data, so the slave takes too long to load the RDB

The replication buffer (the slave class of client-output-buffer-limit on the master) is configured too small

The master receives a large volume of write requests

While master and slave are performing full synchronization, write requests arriving at the master are first written to the master-slave "replication buffer", whose upper limit is set by that configuration item.

If the slave loads the RDB too slowly, it cannot consume data from the "replication buffer" in time, and the buffer "overflows".

To avoid unbounded memory growth, the master then "forcibly" disconnects the slave, and full synchronization fails.

The slave whose synchronization failed then "re-initiates" full synchronization and falls into the same trap: a vicious circle. This is the so-called "replication storm".
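The vicious circle can be captured in a few lines of back-of-the-envelope Python (a simplified model, not Redis code; the function names and the MB/s arithmetic are illustrative assumptions):

```python
def full_sync_attempt(rdb_load_seconds, write_rate_mb_s, buffer_limit_mb):
    """While the slave loads the RDB, the master buffers incoming writes
    in the replication output buffer. If the buffer fills up before
    loading finishes, the master drops the connection: sync fails."""
    buffered = rdb_load_seconds * write_rate_mb_s
    return buffered <= buffer_limit_mb  # True -> sync succeeds

def replicate(rdb_load_seconds, write_rate_mb_s, buffer_limit_mb, max_retries=3):
    # Each failed attempt simply triggers another full sync -- the
    # "replication storm" loop.
    for attempt in range(1, max_retries + 1):
        if full_sync_attempt(rdb_load_seconds, write_rate_mb_s, buffer_limit_mb):
            return f"synced on attempt {attempt}"
    return "replication storm: every attempt overflowed the buffer"

# Big RDB (600 s to load), heavy writes (2 MB/s), small 256 MB buffer:
print(replicate(600, 2, 256))
# Same workload, but with a 2 GB buffer:
print(replicate(600, 2, 2048))
```

The model makes the two levers obvious: shrink the load time (smaller instance) or grow the buffer, and the loop breaks.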

How to solve this problem? Let me give you the following suggestions:

Keep each Redis instance small, so the RDB does not grow too large

Configure the replication buffer as large as you can afford, giving the slave enough time to load the RDB and reducing the probability of full-sync failure
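For the second suggestion, a redis.conf sketch on the master might look like this (the class keyword is slave before Redis 5.0 and replica afterwards; the sizes below are illustrative and should be derived from your own write rate multiplied by the expected RDB load time):

```shell
# Replication output buffer limits: disconnect a replica only if the
# buffer exceeds 2 GB (hard limit), or stays above 1 GB for 120 seconds
# (soft limit). Before Redis 5.0 the class keyword is "slave".
client-output-buffer-limit replica 2gb 1gb 120
```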

If you have stepped into this pit as well, these measures will get you out of it.

Summary

Well, to sum up: in this article we walked through the "pits" you may hit with Redis in three areas: command usage, data persistence, and master-slave synchronization.

How was it? Did it subvert any of your assumptions?

This article carries a lot of information. If your head is spinning a bit, don't worry: I have also prepared a mind map to help you understand and remember it.

I hope you can avoid these pits in advance when using Redis, so that Redis can provide better services.

Postscript

Finally, I would like to share some lessons learned from stepping into pits during development.

In reality, whenever you enter a new field, you go through the stages of unfamiliarity, familiarity, stepping into pits, absorbing the lessons, and finally handling things with ease.

So during the pit-stepping stage, how do you step into fewer pits? And once you have stepped into one, how do you troubleshoot it efficiently?

Here I have summed up four points that should help you:

1) read more official documents + comments on configuration files

Always read the official documentation and the comments in the configuration files. Many potential risks are called out right there: good software warns you in its docs and comments, and reading them carefully lets you avoid many basic problems in advance.

2) Don't let details slip by; keep asking why

Stay curious. When you hit a problem, learn to peel it back layer by layer, narrowing in on the cause step by step, and always keep the mindset of chasing a problem down to its essence.

3) Dare to question; the source code doesn't lie

If something looks strange, it may well be a bug; dare to question it.

Digging the truth out of the source code beats reading a hundred online articles that plagiarize one another (and are probably all wrong).

4) No software is perfect; good software is iterated step by step

Every piece of excellent software is built up through iteration. Bugs along the way are normal, and we should view them with the right mindset.

That is all the content of "What are the problems you may encounter in using Redis?" Thank you for reading, and I hope this article has helped you!
