This article explains in detail what a distributed lock is and how to implement one with Redis or Zookeeper. I hope that after reading it you will have a solid understanding of the topic.
As we all know, when multiple threads on one machine contend for the same resource, repeated executions can produce inconsistent results; we call this non-thread-safe. To solve the problem we normally use locks: in the Java language, for example, we can use synchronized. If several Java instances run on the same machine, we can use the operating system's file read-write locks instead. But what if we extend this to different machines? In that case we usually solve the problem with a distributed lock.
The characteristics of distributed locks are as follows:
Mutual exclusion: as fundamental as with our local locks, except that a distributed lock must guarantee mutual exclusion across threads on different nodes.
Reentrancy: a thread on a node that already holds the lock can acquire it again.
Lock timeout: a timeout is supported, as with local locks, to prevent deadlocks.
Efficient and highly available: locking and unlocking must be fast, and the lock service itself must stay highly available so that the distributed lock does not fail; a degradation path can be added as a safeguard.
Support for blocking and non-blocking acquisition: like ReentrantLock, it should support lock(), tryLock(), and tryLock(long timeout).
Support for fair and unfair locks (optional): a fair lock grants the lock in the order in which it was requested, while an unfair lock makes no ordering guarantee; in practice, fairness is rarely implemented for distributed locks.
I believe we have all run into a business scenario like this: a scheduled task needs to run periodically, but the task is not idempotent, so only one thread on one machine may execute it at a time. This is exactly the situation a distributed lock is for.
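Taken together, the requirements above suggest an API roughly like the following. This is only a sketch of the desired contract, not code from any particular library; the interface name and signatures are my own assumptions, modeled on java.util.concurrent.locks.Lock:

import java.util.concurrent.TimeUnit;

// Hypothetical contract for a distributed lock, mirroring java.util.concurrent.locks.Lock.
public interface DistributedLock {
    void lock() throws Exception;                                  // block until the lock is acquired
    boolean tryLock();                                             // non-blocking attempt
    boolean tryLock(long timeout, TimeUnit unit) throws Exception; // blocking attempt with a deadline
    void unlock();                                                 // release; implementations should handle reentrancy
}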
There are many ways to implement a distributed lock: Redis, Zookeeper, Google's Chubby, and so on.
Implementation of a distributed lock with Redis
Let me give a brief introduction first. You have probably already thought of the obvious solution: every time the task is about to execute, first query Redis for a lock key; if it is not there, write it, and then start executing the task.
This seems quite right, but where is the problem? Suppose process A and process B query Redis at the same time: both find that no value exists, and both then write one. Because the read and the write are not versioned, both writes succeed and both processes believe they have acquired the lock. Fortunately, Redis provides an atomic check-and-write operation: setnx (SET if Not eXists; it is worth knowing the full name, as it helps us remember the command).
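Here is a minimal sketch of that idea in Java. The Jedis client, host, and key names are assumptions on my part (the article does not name a client); setnx returns 1 only for the one caller that actually created the key, so exactly one contender wins:

import redis.clients.jedis.Jedis;

public class SetnxLockSketch {
    public static void main(String[] args) {
        // Hypothetical host, port, and key name, for illustration only.
        try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
            long acquired = jedis.setnx("task:lock", "owner-1");
            if (acquired == 1) {
                try {
                    System.out.println("lock acquired, running the task");
                    // ... execute the non-idempotent task here ...
                } finally {
                    jedis.del("task:lock"); // release the lock
                }
            } else {
                System.out.println("another node holds the lock, skipping this run");
            }
        }
    }
}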
If you think that this alone completes a distributed lock, that is naive; let us consider some extreme cases. Suppose a thread gets the lock but, unluckily, its machine crashes: the lock is then never released and the task will never be executed again. A better solution is to estimate the program's execution time when applying for the lock and set a timeout on the lock, after which others can take it. But this leads to another problem: sometimes the load is very high and the task runs slowly, so the lock times out before the task is finished, and at that moment a second task starts executing.
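A common way to express this is a single atomic SET with the NX and PX options, paired with a check-and-delete on release so that a slow task cannot delete a lock that has already expired and been taken over by someone else. Again a sketch with Jedis; the key, token, and timeout values are assumptions:

import java.util.Collections;
import java.util.UUID;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.params.SetParams;

public class ExpiringLockSketch {
    // Delete the key only if it still holds our token (made atomic via Lua).
    private static final String RELEASE_SCRIPT =
        "if redis.call('get', KEYS[1]) == ARGV[1] then " +
        "  return redis.call('del', KEYS[1]) " +
        "else return 0 end";

    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
            String token = UUID.randomUUID().toString(); // identifies this lock holder
            // NX: only set if absent; PX: auto-expire after 30s in case we crash.
            String ok = jedis.set("task:lock", token, SetParams.setParams().nx().px(30_000));
            if ("OK".equals(ok)) {
                try {
                    // ... run the task ...
                } finally {
                    jedis.eval(RELEASE_SCRIPT,
                               Collections.singletonList("task:lock"),
                               Collections.singletonList(token));
                }
            }
        }
    }
}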
This is the charm of architectural design: whenever you solve one problem, it gives rise to new ones that have to be solved step by step. The usual approach here is to start a daemon thread after grabbing the lock that periodically asks Redis "am I still holding the current lock, and how long until it expires?", and renews the lock immediately if it finds that expiry is near.
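A sketch of such a watchdog, again with assumed names and intervals: a scheduled task extends the expiry, but only while the key still holds our token (checked atomically in Lua), and the caller cancels it once the work finishes:

import java.util.Arrays;
import java.util.Collections;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import redis.clients.jedis.Jedis;

public class WatchdogSketch {
    // Extend the TTL, but only while the key still holds our token.
    private static final String RENEW_SCRIPT =
        "if redis.call('get', KEYS[1]) == ARGV[1] then " +
        "  return redis.call('pexpire', KEYS[1], ARGV[2]) " +
        "else return 0 end";

    public static ScheduledFuture<?> startWatchdog(ScheduledExecutorService scheduler,
                                                   String key, String token) {
        // Renew every 10 seconds, i.e. a third of the assumed 30-second TTL.
        return scheduler.scheduleAtFixedRate(() -> {
            try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
                jedis.eval(RENEW_SCRIPT,
                           Collections.singletonList(key),
                           Arrays.asList(token, "30000"));
            }
        }, 10, 10, TimeUnit.SECONDS);
    }

    // Typical use: start the watchdog right after acquiring the lock, cancel it
    // with future.cancel(false) when the task finishes, then release the lock.
}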
Well, having read this far, I believe you now know how to implement a distributed locking service with Redis.
Implementation of a distributed lock with Zookeeper
The schematic diagram of Zookeeper implementing a distributed lock is as follows:
In the figure above, the Zookeeper cluster is on the left; lock is the parent data node, node_1 through node_n are a series of ephemeral sequential child nodes, and client_1 through client_n on the right are the clients trying to acquire the lock. Service is the mutually exclusive service being protected.
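Under this scheme each client creates an ephemeral sequential child under the lock node and holds the lock only if its node has the lowest sequence number. For illustration, here is roughly what that creation step looks like with the raw ZooKeeper API; the connection string and paths are assumptions, and the article itself uses Curator below:

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class ZkNodeSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection string; assumes the /lock parent node already exists.
        ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 15000, event -> { });
        // EPHEMERAL_SEQUENTIAL: the server appends a monotonically increasing suffix
        // and deletes the node automatically if this client's session dies.
        String ourPath = zk.create("/lock/node_", new byte[0],
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);
        System.out.println("created " + ourPath); // e.g. /lock/node_0000000003
        zk.close();
    }
}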
Code implementation
The following source code implements the distributed lock on top of Curator, Zookeeper's open-source client. Implementing this with the raw zk API would be considerably more complex, so rather than reinvent the wheel we use Curator directly, calling its acquire and release methods to implement the distributed lock.
import org.apache.curator.RetryPolicy;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.locks.InterProcessMutex;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class CuratorDistributeLock {
    public static void main(String[] args) {
        RetryPolicy retryPolicy = new ExponentialBackoffRetry(1000, 3);
        CuratorFramework client = CuratorFrameworkFactory.newClient("111.231.83.101:2181", retryPolicy);
        client.start();
        CuratorFramework client2 = CuratorFrameworkFactory.newClient("111.231.83.101:2181", retryPolicy);
        client2.start();
        // create a distributed lock; the root path of the lock space is /curator/lock
        InterProcessMutex mutex = new InterProcessMutex(client, "/curator/lock");
        final InterProcessMutex mutex2 = new InterProcessMutex(client2, "/curator/lock");
        try {
            mutex.acquire();
        } catch (Exception e) {
            e.printStackTrace();
        }
        // the lock has been obtained; proceed with the business logic
        System.out.println("client Enter mutex");
        Thread client2Th = new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    mutex2.acquire();
                    System.out.println("client2 Enter mutex");
                    mutex2.release();
                    System.out.println("client2 release lock");
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        });
        client2Th.start();
        // the business logic is complete; release the lock
        try {
            Thread.sleep(5000);
            mutex.release();
            System.out.println("client release lock");
            client2Th.join();
        } catch (Exception e) {
            e.printStackTrace();
        }
        // close the clients
        client.close();
        client2.close();
    }
}
The execution result of the above code shows that client acquires the lock first and executes its business logic, and only after client releases the lock does client2 get its turn to acquire it and run.
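Given the println calls in the code, the console output looks approximately like this:

client Enter mutex
client release lock
client2 Enter mutex
client2 release lock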
Source code analysis
Tracing down through the acquire() call chain shows that the core locking function is attemptLock.
String attemptLock(long time, TimeUnit unit, byte[] lockNodeBytes) throws Exception {
    ...
    while (!isDone) {
        isDone = true;
        try {
            // create an ephemeral sequential node
            ourPath = driver.createsTheLock(client, path, localLockNodeBytes);
            // check whether ours is the node with the lowest sequence number; if not,
            // subscribe to notification of the previous node's deletion and wait
            hasTheLock = internalLockLoop(startMillis, millisToWait, ourPath);
        }
        ...
    }
    // if the lock was acquired, return the node path
    if (hasTheLock) {
        return ourPath;
    }
    ...
}
Going deeper into the internalLockLoop function's source:
private boolean internalLockLoop(long startMillis, Long millisToWait, String ourPath) throws Exception {
    ...
    while ((client.getState() == CuratorFrameworkState.STARTED) && !haveTheLock) {
        // get the list of child nodes sorted by sequence number
        List<String> children = getSortedChildren();
        String sequenceNodeName = ourPath.substring(basePath.length() + 1); // +1 to include the slash
        // check whether our node currently has the lowest sequence number
        PredicateResults predicateResults = driver.getsTheLock(client, children, sequenceNodeName, maxLeases);
        if (predicateResults.getsTheLock()) {
            // the lock has been acquired successfully
            haveTheLock = true;
        } else {
            // get the path of the node immediately before ours
            String previousSequencePath = basePath + "/" + predicateResults.getPathToWatch();
            // we did not get the lock, so wait; the watcher calls notifyAll to wake
            // the current thread when the previous node is deleted
            synchronized (this) {
                try {
                    // getData sets the watcher and also checks that the previous node still
                    // exists; if it does not, an exception is thrown and no watcher is left set
                    client.getData().usingWatcher(watcher).forPath(previousSequencePath);
                    // if millisToWait is set, wait only the remaining time; once it runs
                    // out, delete our own node and jump out of the loop
                    if (millisToWait != null) {
                        millisToWait -= (System.currentTimeMillis() - startMillis);
                        startMillis = System.currentTimeMillis();
                        if (millisToWait <= 0) {
                            doDelete = true; // timed out: delete our node
                            break;
                        }
                        wait(millisToWait);
                    } else {
                        wait();
                    }
                } catch (KeeperException.NoNodeException e) {
                    // the previous node has already been deleted (the lock was released); retry
                }
            }
        }
    }
    ...
}
Note that each client watches only the node immediately in front of its own, so releasing the lock wakes exactly one waiter rather than the whole queue; this is what avoids the herd effect.