How to write a distributed lock framework with Zookeeper

2025-03-19 Update From: SLTechnology News&Howtos shulou NAV: SLTechnology News&Howtos > Development >


This article explains how to write a distributed lock framework. The content is simple and clear and easy to learn; please follow the editor's train of thought step by step as we study the topic together.

1. The core code is as follows:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.data.redis.core.script.DefaultRedisScript;
import org.springframework.stereotype.Component;

import java.util.Arrays;
import java.util.concurrent.TimeUnit;

@Component
public class RedisLock {

    @Autowired
    private StringRedisTemplate template;

    @Autowired
    private DefaultRedisScript<Long> redisScript;

    private static final Long RELEASE_SUCCESS = 1L;

    private long timeout = 3000;

    public boolean lock(String key, String value) {
        // Execute SET key value NX PX timeout as a single atomic command.
        Boolean absent = template.opsForValue().setIfAbsent(key, value, timeout, TimeUnit.MILLISECONDS);
        // In practice this should not be null; the check is only for rigor.
        if (absent == null) {
            return false;
        }
        // True only when the key did not exist, i.e. the lock was acquired.
        return absent;
    }

    public boolean unlock(String key, String value) {
        // Use a Lua script: delete the key only if it was set by this client.
        Long result = template.execute(redisScript, Arrays.asList(key, value));
        // Return the final result.
        return RELEASE_SUCCESS.equals(result);
    }

    public void setTimeout(long timeout) {
        this.timeout = timeout;
    }

    @Bean
    public DefaultRedisScript<Long> defaultRedisScript() {
        DefaultRedisScript<Long> script = new DefaultRedisScript<>();
        script.setResultType(Long.class);
        script.setScriptText(
            "if redis.call('get', KEYS[1]) == KEYS[2] then return redis.call('del', KEYS[1]) else return 0 end");
        return script;
    }
}

Calling the setIfAbsent() method above has only two possible outcomes:

1. No lock currently exists (the key is absent): the set succeeds, the lock takes effect, and value identifies the client holding it.

2. A lock already exists: nothing is done.

At any moment the code guarantees that only one client holds the lock, and every distributed lock has an expiry time, so no deadlock can occur. Setting fault tolerance aside for now: the key ensures that multiple clients contend for the same lock, while the value ensures that the lock and unlock operations on a given lock are performed by the same client.
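The key/value semantics above can be illustrated with a minimal in-memory analogue (the class name InMemoryLock and the sample keys are hypothetical, not part of the original code): putIfAbsent stands in for SET NX, and the conditional remove stands in for the Lua check-and-delete. This sketch deliberately ignores expiry, which the real Redis version gets from the PX timeout.

```java
import java.util.concurrent.ConcurrentHashMap;

public class InMemoryLock {
    private static final ConcurrentHashMap<String, String> store = new ConcurrentHashMap<>();

    // Acquire: succeeds only if no one currently holds the key (SET NX analogue).
    public static boolean lock(String key, String value) {
        return store.putIfAbsent(key, value) == null;
    }

    // Release: deletes the key only if this client (value) set it,
    // mirroring the Lua script's get-compare-del.
    public static boolean unlock(String key, String value) {
        return store.remove(key, value);
    }

    public static void main(String[] args) {
        System.out.println(lock("order-42", "client-A"));   // true: lock acquired
        System.out.println(lock("order-42", "client-B"));   // false: already held
        System.out.println(unlock("order-42", "client-B")); // false: wrong owner
        System.out.println(unlock("order-42", "client-A")); // true: owner releases
    }
}
```

Note how the owner check in unlock is what makes value matter: without it, client B could delete a lock that A still holds.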

2. Why the above scheme is not good enough

To understand what we want to improve, let's first look at the current state of most Redis-based distributed lock libraries. The simplest way to implement a distributed lock with Redis is to create a key in a single instance. The key is usually given a timeout (using Redis's expiry feature), so every lock is eventually released (see property 2 above). When a client wants to release the lock, it simply deletes the key. On the face of it this works, but there is a problem: the architecture has a single point of failure. What if the Redis master goes down? Someone might say: add a slave node, and switch to the slave when the master dies! Unfortunately this scheme is not feasible, because Redis replication is asynchronous, so it cannot guarantee the first property, safe mutual exclusion. Concretely, there is an obvious race condition in this scheme:

1. Client A acquires the lock on the master node.

2. The master goes down before the key created by A is replicated to the slave.

3. The slave is promoted to master.

4. Client B acquires the same lock that A still holds (the former slave has no record of A's lock).

Of course, in some special scenarios the above solution is perfectly fine: for example, if it is acceptable for multiple clients to hold the lock simultaneously during a failover. If you can tolerate that, the replication-based scheme is workable; otherwise I suggest improving it, for example by using the Redlock algorithm.
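The four-step race above can be replayed deterministically in a small simulation (the class FailoverRace and its maps are illustrative stand-ins, not real Redis nodes): replication is an explicit copy step that we deliberately skip before the failover, just as an asynchronous replica may not yet have received the key when the master crashes.

```java
import java.util.HashMap;
import java.util.Map;

public class FailoverRace {
    // SET NX analogue on a single in-memory "node".
    static boolean setNx(Map<String, String> node, String key, String value) {
        return node.putIfAbsent(key, value) == null;
    }

    // Replays the race; returns true if both clients end up holding the lock.
    public static boolean raceViolatesMutualExclusion() {
        Map<String, String> master = new HashMap<>();
        Map<String, String> slave = new HashMap<>();

        // 1. Client A acquires the lock on the master.
        boolean aHolds = setNx(master, "resource", "client-A");

        // 2. The master crashes before replicating the key, so the copy
        //    from master to slave deliberately never happens here.

        // 3. The slave is promoted to master; it has no record of A's lock.
        Map<String, String> promoted = slave;

        // 4. Client B acquires the *same* lock on the promoted node.
        boolean bHolds = setNx(promoted, "resource", "client-B");

        return aHolds && bHolds; // both hold: mutual exclusion is broken
    }

    public static void main(String[] args) {
        System.out.println(raceViolatesMutualExclusion()); // prints true
    }
}
```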

3. Redlock algorithm

In the distributed version of the algorithm, we assume N Redis master nodes that are completely independent: no replication and no other implicit distributed coordination between them. We have already described how to safely acquire and release a lock on a single node, so naturally we use that same method on each node. In our example we set N to 5, a reasonably chosen number, and run the five masters on different machines or virtual machines so that in most cases they will not go down at the same time. To acquire the lock, a client performs the following steps:

1. Get the current time in milliseconds.

2. Request the lock on the N nodes in turn, using the same key and a random value. In this step, each per-node request uses a timeout much smaller than the lock's total auto-release time: for example, if the auto-release time is 10 seconds, the per-node request timeout might be in the range of 5-50 milliseconds. This prevents the client from blocking too long on a master that is down; if a master is unavailable, we try the next one as soon as possible.

3. The client computes how long step 2 took. The lock is considered acquired only if the client succeeded on a majority of the master nodes (three in this case) and the total elapsed time does not exceed the lock's release time.

4. If acquisition succeeded, the lock's effective validity time is the initial release time minus the time spent acquiring it.

5. If acquisition failed, whether because fewer than a majority of nodes were locked or because the total elapsed time exceeded the release time, the client releases the lock on every master node, including those where it believes acquisition did not succeed.
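The five steps above can be sketched as follows. This is a hedged, simplified sketch, not a production Redlock client: N in-memory maps stand in for N independent Redis masters (a real client would issue SET key value NX PX over the network to each node), and network latency per node is not simulated.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class RedlockSketch {
    static final int N = 5;
    static final List<Map<String, String>> nodes = new ArrayList<>();
    static {
        for (int i = 0; i < N; i++) nodes.add(new ConcurrentHashMap<>());
    }

    // Try to acquire the lock on a majority of nodes within ttlMillis.
    // Returns the remaining validity time in milliseconds, or -1 on failure.
    public static long tryAcquire(String key, String value, long ttlMillis) {
        long start = System.currentTimeMillis();           // step 1: current time
        int acquired = 0;
        for (Map<String, String> node : nodes) {           // step 2: each node in turn
            if (node.putIfAbsent(key, value) == null) acquired++;
        }
        long elapsed = System.currentTimeMillis() - start; // step 3: time spent
        long validity = ttlMillis - elapsed;               // step 4: remaining validity
        if (acquired >= N / 2 + 1 && validity > 0) {
            return validity;
        }
        // Step 5: on failure, release on every node, even those where
        // acquisition seemed to fail (remove is a no-op on a value mismatch).
        for (Map<String, String> node : nodes) node.remove(key, value);
        return -1;
    }

    public static void main(String[] args) {
        System.out.println(tryAcquire("resource", "client-A", 10000) > 0);  // first client wins
        System.out.println(tryAcquire("resource", "client-B", 10000) == -1); // second is refused
    }
}
```

The majority check (N / 2 + 1, i.e. 3 of 5) is what lets the scheme survive the crash of a minority of nodes without two clients both reaching quorum.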

Thank you for reading. The above is the content of "how to write a distributed lock framework with Zookeeper". After studying this article, I believe you have a deeper understanding of the topic; concrete usage still needs to be verified in practice.
