How to Use Redis to Implement a Distributed Lock

2025-04-06 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)05/31 Report--

This article explains in detail how to implement a distributed lock with Redis. The content is shared as a reference; I hope you will have a solid understanding of the relevant concepts after reading it.

Locking part

Unlocking part

The main principle is to use Redis's SETNX to insert a key-value pair, where the key identifies what is being locked (in our project, the user's userId); if the key already exists, the lock attempt fails and false is returned. But following the idea of two-phase locking, a careful look reveals an interesting phenomenon:

Suppose a request in microservice A locks the user with userId = 7. That request can then read the user's information and modify it; other modules can only read the user's information and cannot modify it.

Suppose the current request in microservice A unlocks the user with userId = 7. All modules can then read the user's information and modify it.
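In code, the naive SETNX scheme just described might look like the following minimal sketch (a plain dict stands in for Redis, and all names are hypothetical, not the project's actual code):

```python
# A plain dict stands in for Redis; none of these names come from the
# article's actual project code.
store = {}

def setnx(key, value):
    """Mimics Redis SETNX: store the pair only if the key does not exist."""
    if key in store:
        return False
    store[key] = value
    return True

def naive_lock(user_id):
    # The key is the locked user; the value carries no identity at all.
    return setnx(f"lock:user:{user_id}", "locked")

def naive_unlock(user_id):
    # The flaw: ANY caller can delete the key, not just the lock holder.
    store.pop(f"lock:user:{user_id}", None)
```

Because naive_unlock checks nothing about who is asking, any caller can release someone else's lock, which is exactly the phenomenon described above.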

In this way:

If microservice module A receives a request from another user that needs to modify userId = 7 while that user is still locked, can this request modify it? (Yes: it can simply unlock it first.)

If microservice module B receives a request from another user that needs to modify userId = 7 while that user is still locked, can this request modify it? (Yes: it can simply unlock it first.)

If microservice module A crashes partway through executing the locked request, can other users still modify the information? (Yes: anyone can simply unlock it.)

Obviously, none of these three behaviors is what we want. So what is the right way to implement a distributed lock?

What does a good distributed lock need to achieve?

It is locked by one request of one module, and can be unlocked only by that same request of that module (mutual exclusion: only one request of one microservice can hold the lock at a time).

If the lock-holding request times out, the lock should be released automatically and its changes rolled back (fault tolerance: even if a microservice holding the lock goes down, it does not affect locking by other modules).

What should we do?

To sum up, the important thing our team's distributed lock ignored when implementing module-level mutual exclusion is request-level mutual exclusion. We only need to store the current request's requestId as the value when locking, and check on unlock whether the unlocker is the same request.
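Following that idea, a requestId-based lock might look like this minimal sketch. A tiny in-memory class stands in for Redis; in real Redis the lock would typically be a single SET with the NX option plus an expiry, and the compare-and-delete on unlock must run atomically, commonly as a Lua script. All names here are hypothetical:

```python
import uuid

class FakeRedis:
    """Tiny in-memory stand-in for a single Redis node (hypothetical,
    for illustration only; it does not implement key expiry)."""
    def __init__(self):
        self.store = {}

    def set_nx(self, key, value):
        # Mimics SET key value NX: succeeds only if the key is absent.
        if key in self.store:
            return False
        self.store[key] = value
        return True

    def compare_and_delete(self, key, value):
        # Delete only if this value still owns the lock. In real Redis
        # this GET + DEL pair must run atomically, e.g. as a Lua script.
        if self.store.get(key) == value:
            del self.store[key]
            return True
        return False

r = FakeRedis()

def lock(user_id):
    """Returns a requestId on success, or None if the lock is held."""
    request_id = str(uuid.uuid4())  # unique identity of this request
    if r.set_nx(f"lock:user:{user_id}", request_id):
        return request_id
    return None

def unlock(user_id, request_id):
    """Succeeds only for the request that acquired the lock."""
    return r.compare_and_delete(f"lock:user:{user_id}", request_id)
```

With this change, an unlock attempt carrying any other requestId fails, so another request (or another module) can no longer release a lock it does not own.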

So after this revision, can we rest easy?

Yes, that's enough, or so we thought. Our development environment runs Redis as a single instance on a single server, so the distributed lock implemented above has no problems there. But when we prepared to deploy to production, we suddenly realized something: with master-slave read-write separation, Redis replicates data from master to slaves asynchronously. That is, a "write" to the Redis master returns success immediately, without waiting for the data to reach the slaves (if it waited for synchronization to complete before returning, that would be synchronous replication). This causes a problem:

Suppose the request with id=1 in module A locks successfully, and the master then goes down before the lock is replicated to the slaves. Redis Sentinel will promote a new master from among the slaves, and at that point, if the request with id=2 in module A asks for the same lock, it will also succeed.

With our limited skills, we had to turn to search engines for help, and found that there really is a general solution for this situation: Redlock.

How to implement a Redlock distributed security lock

First of all, Redlock is the implementation recommended by the official Redis documentation. It does not rely on a master-slave architecture; instead, it acquires the lock on multiple independent master nodes in turn. Suppose there are five masters here; the overall process is roughly as follows:

Add lock

The application layer requests the lock

Lock requests are sent to the five Redis servers in turn

If more than half of the servers return success, locking is complete; if not, unlock all of them automatically and wait a random period of time before retrying. (Objective reasons for failure: poor network conditions, unresponsive servers, and so on. Waiting a random period before retrying avoids a "thundering herd" in which all clients retry at once and server load spikes.)

If any of the servers already holds the lock, locking fails; wait a random period of time and retry. (Subjective reason for failure: someone else already holds the lock.)

Unlock

Simply send the unlock request to all five servers, regardless of whether a given server currently holds the lock or not.
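The acquire/release flow above can be sketched with in-memory stand-ins for the five masters (all names are hypothetical; a real implementation would send SET with NX and an expiry, plus a Lua delete script, over the network to each node):

```python
class Node:
    """In-memory stand-in for one independent Redis master (hypothetical)."""
    def __init__(self):
        self.store = {}

    def set_nx(self, key, value):
        if key in self.store:
            return False
        self.store[key] = value
        return True

    def compare_and_delete(self, key, value):
        if self.store.get(key) == value:
            del self.store[key]
            return True
        return False

def redlock_acquire(nodes, key, request_id):
    """Try every master; the lock is held only with a majority (N/2 + 1)."""
    acquired = sum(1 for n in nodes if n.set_nx(key, request_id))
    if acquired >= len(nodes) // 2 + 1:
        return True
    # Not a majority: unlock everywhere, and the caller should wait a
    # random back-off before retrying to avoid a thundering herd.
    redlock_release(nodes, key, request_id)
    return False

def redlock_release(nodes, key, request_id):
    # Unlocking is sent to ALL masters, whether or not this request's
    # lock actually landed on a given one.
    for n in nodes:
        n.compare_and_delete(key, request_id)
```

Here `[Node() for _ in range(5)]` models the five masters; a second request fails to acquire the lock until the first request releases it.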

The overall idea is very simple, but there is still a lot to pay attention to. Each lock request sent to the five servers carries an expiration time, to guarantee the "automatic unlocking (fault tolerance)" mentioned above. Because of delays and other factors, the automatic unlock times of the five servers will not be exactly the same, so an extra calculation is needed.

Generally speaking, the problem of differing lock times is solved as follows:

1. Before locking, in the application layer (or in a distributed lock encapsulated as a global, general-purpose microservice), record the timestamp T1 of the lock request.

2. After the last Redis master is locked, record the timestamp T2.

3. The time spent locking is T2 - T1.

4. Assuming the resource is automatically unlocked after 10 seconds, the real time the resource is available is 10 - (T2 - T1). If the available time is not what you expected, or is even negative, you know what to do: start all over again.

If you need stricter control over the lock's expiration time, you can separately record the time from T1 until the first server is locked successfully, and then add that interval to the available time computed above to get a more accurate value.
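The calculation above can be written as a small helper. Note that the clock-drift allowance is an extra safety margin borrowed from the Redlock description, not something this article specifies:

```python
def lock_validity(ttl_seconds, t1, t2, drift_factor=0.01):
    """Remaining time the lock is actually usable after acquisition.

    t1: timestamp taken just before the first lock request was sent,
    t2: timestamp taken after the last master replied,
    drift_factor: allowance for clock drift between servers (the 1%
    default here is an assumption, not a value from the article).
    """
    drift = ttl_seconds * drift_factor
    return ttl_seconds - (t2 - t1) - drift
```

If the result is below what you need, or negative, release the lock everywhere and start over.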

Now consider another question: suppose a request's lock happens to be held on exactly three of the servers, and all three go down (TAT). Then another request for the lock will succeed, and aren't we back to the problem our team faced at the very beginning? Unfortunately, yes. The official answer to this question is, roughly: wouldn't it be better to enable AOF persistence?

As for performance, generally speaking we need not only low latency but also high throughput. According to the official documentation, we can use multiplexing to talk to the five Redis masters at the same time to reduce the overall time spent, or set the sockets to non-blocking mode (the advantage is that you do not wait for a reply after sending a command, so you can send all the commands at once and then wait for the combined results; although I think this is not very useful when network latency is very low, the benefit grows as the proportion of time spent waiting on the server increases).

That is all on how to use Redis to implement distributed locks. I hope the content above is of some help to you and that you can learn more from it. If you think the article is good, you can share it for more people to see.
