How to use ZooKeeper to implement a Java distributed lock across JVMs


This article introduces how to use ZooKeeper to implement Java distributed locks that work across JVMs. It walks through the locking principle, a simple Curator example, a multi-client simulation, the reason the lock node's name carries a uuid, and finally an unfair-lock variant. I hope it helps resolve any doubts about the topic; please follow along and study.

1. Using ZooKeeper to implement a Java distributed lock across JVMs

ZooKeeper version: Release 3.4.8 (stable)

Curator version: 2.9.1

    <dependency>
        <groupId>org.apache.zookeeper</groupId>
        <artifactId>zookeeper</artifactId>
        <version>3.4.8</version>
    </dependency>
    <dependency>
        <groupId>org.apache.curator</groupId>
        <artifactId>curator-recipes</artifactId>
        <version>2.9.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.curator</groupId>
        <artifactId>curator-client</artifactId>
        <version>2.9.1</version>
    </dependency>

Lock principle:

1. First, create a root node for the lock, such as /mylock.

2. A client that wants to acquire the lock creates a znode under the lock's root node, i.e. as a child of /mylock. The node type should be CreateMode.EPHEMERAL_SEQUENTIAL, and the node name should start with a uuid. (Why a uuid? I will explain later; in short, without it a deadlock can occur under certain circumstances. Many home-grown implementations I have seen do not account for this, which is why I do not recommend wrapping the lock yourself: it really is complicated.) Assuming three clients try to acquire the lock at the same time, the children of /mylock would look like this:

xxx-lock-0000000001, xxx-lock-0000000002, xxx-lock-0000000003

Here xxx is the uuid, and 0000000001, 0000000002, 0000000003 are auto-incrementing sequence numbers generated by the ZooKeeper server.

3. The current client fetches the list of all child nodes via getChildren("/mylock"), sorts them by sequence number, and checks whether the node it created is first in the list. If it is, it holds the lock. If not, it finds the node immediately before its own and sets a watch on it. When that node changes, the client re-runs step 3, until its own node has the lowest number (this acquire loop is sketched in code after this list).

For example: suppose the current client created node 0000000002. It cannot get the lock, because its number is not the lowest, so it finds the node in front of it, 0000000001, and watches it.

4. To release the lock, the client that currently holds it deletes the node it created once its work is done. This triggers a watch event that the ZooKeeper server delivers to the watching client, which then re-runs step 3.

For example: client 0000000001 acquires the lock first. Client 0000000002 then joins, finds that its number is not the lowest, watches the node in front of it (0000000001), and performs step 3. When client 0000000001 finishes its work and deletes its node, the ZooKeeper server sends the watch event, client 0000000002 receives it and repeats step 3, and so on until the lock is acquired.

The above steps implement an ordered (fair) lock: the client that starts waiting first is the first to acquire the lock when it becomes available.
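To make steps 2 through 4 concrete, here is a minimal sketch of the same recipe against the raw ZooKeeper API. The class name RawZkLockSketch and its structure are mine for illustration only; Curator's real implementation, used in the examples below, additionally handles session expiry, retries, and the protection mode discussed later.

    import java.util.Collections;
    import java.util.List;
    import java.util.UUID;
    import java.util.concurrent.CountDownLatch;
    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    public class RawZkLockSketch {

        private final ZooKeeper zk;
        private String ourNode; // full path of the node we created

        public RawZkLockSketch(ZooKeeper zk) {
            this.zk = zk;
        }

        public void lock() throws Exception {
            // Step 2: create an ephemeral-sequential child whose name starts with a uuid.
            ourNode = zk.create("/mylock/" + UUID.randomUUID() + "-lock-",
                    new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);
            while (true) {
                // Step 3: list all children and sort by the 10-digit sequence suffix.
                List<String> children = zk.getChildren("/mylock", false);
                Collections.sort(children, (a, b) ->
                        a.substring(a.length() - 10).compareTo(b.substring(b.length() - 10)));
                String ourName = ourNode.substring("/mylock/".length());
                int index = children.indexOf(ourName);
                if (index == 0) {
                    return; // lowest sequence number: we hold the lock
                }
                // Watch only the node immediately before ours.
                String previous = "/mylock/" + children.get(index - 1);
                CountDownLatch latch = new CountDownLatch(1);
                if (zk.exists(previous, event -> latch.countDown()) != null) {
                    latch.await(); // step 4: woken when the previous node is deleted
                }
            }
        }

        public void unlock() throws Exception {
            zk.delete(ourNode, -1); // deleting our node wakes the next waiter
        }
    }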

If you want an unordered (unfair) lock instead, you just replace the server-generated EPHEMERAL_SEQUENTIAL suffix with a random number, as shown in the NoFairLockDriver example later in this article.

Simple example:

    package com.framework.code.demo.zook;

    import org.apache.curator.RetryPolicy;
    import org.apache.curator.framework.CuratorFramework;
    import org.apache.curator.framework.CuratorFrameworkFactory;
    import org.apache.curator.framework.recipes.locks.InterProcessMutex;
    import org.apache.curator.retry.ExponentialBackoffRetry;

    public class CuratorDemo {

        public static void main(String[] args) throws Exception {
            // Retry policy for failed operations: up to 3 retries, base back-off 1000 ms
            RetryPolicy retryPolicy = new ExponentialBackoffRetry(1000, 3);
            // Create the Curator client
            CuratorFramework client = CuratorFrameworkFactory.newClient("192.168.1.18:2181", retryPolicy);
            // Start the client
            client.start();
            /*
             * This class is thread safe; one instance per JVM is enough.
             * /mylock is the root path of the lock; different businesses can use different root paths.
             */
            final InterProcessMutex lock = new InterProcessMutex(client, "/mylock");
            try {
                // Blocking call: a thread that cannot acquire the lock is suspended here.
                lock.acquire();
                System.out.println("lock has been acquired");
                Thread.sleep(10000);
            } catch (Exception e) {
                e.printStackTrace();
            } finally {
                // Release the lock in finally so it is released even if the code above throws.
                lock.release();
            }
            Thread.sleep(10000);
            client.close();
        }
    }

The code above pauses for 10 seconds after acquiring the lock, so we can use a ZooKeeper client to inspect the node that was created. Since I had run several tests before, the sequence number starts at 12.
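For example, running ls /mylock in the zkCli.sh shell that ships with ZooKeeper would show a single child along the lines of _c_c8e86826-d3dd-46cc-8432-d91aed763c2e-lock-0000000012 (the uuid prefix and sequence number shown here are illustrative and will differ on your machine).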

Simulating multiple clients (which can also be thought of as multiple JVMs):

Now modify the code above to run the lock logic inside threads, simulating a multi-client test.

    public class CuratorDemo {

        public static void main(String[] args) throws Exception {
            for (int i = 0; i < 10; i++) {
                // Start 10 threads to simulate multiple clients
                Jvmlock jl = new Jvmlock(i);
                new Thread(jl).start();
                // Sleep 300 ms so the threads start in order; otherwise thread 4
                // might start before thread 3 and the test would be inconclusive.
                Thread.sleep(300);
            }
        }

        public static class Jvmlock implements Runnable {

            private int num;

            public Jvmlock(int num) {
                this.num = num;
            }

            @Override
            public void run() {
                RetryPolicy retryPolicy = new ExponentialBackoffRetry(1000, 3);
                CuratorFramework client = CuratorFrameworkFactory
                        .newClient("192.168.142.128:2181", retryPolicy);
                client.start();
                InterProcessMutex lock = new InterProcessMutex(client, "/mylock");
                try {
                    System.out.println("I am thread " + num + ", I started to acquire the lock");
                    lock.acquire();
                    System.out.println("I am thread " + num + ", I have acquired the lock");
                    Thread.sleep(10000);
                } catch (Exception e) {
                    e.printStackTrace();
                } finally {
                    try {
                        lock.release();
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
                client.close();
            }
        }
    }

Through the client software we can see that 10 lock-request nodes have been created.

Looking at the printed output, the thread that requested the lock first is the first to acquire it when the lock becomes available: sequence numbers are assigned in node-creation order, so the earliest client holds the lowest number and therefore gets the lock first.

    I am thread 0, I started to acquire the lock
    I am thread 1, I started to acquire the lock
    I am thread 2, I started to acquire the lock
    ...
    I am thread 9, I started to acquire the lock
    I am thread 0, I have acquired the lock
    I am thread 1, I have acquired the lock
    I am thread 2, I have acquired the lock
    ...
    I am thread 9, I have acquired the lock

Why must the node name include a uuid? This is the Curator framework's own explanation:

It turns out there is an edge case that exists when creating sequential-ephemeral nodes. The creation can succeed on the server, but the server can crash before the created node name is returned to the client. However, the ZK session is still valid so the ephemeral node is not deleted. Thus, there is no way for the client to determine what node was created for them.

Even without sequential-ephemeral, however, the create can succeed on the server but the client (for various reasons) will not know it.

Putting the create builder into protection mode works around this. The name of the node that is created is prefixed with a GUID. If node creation fails the normal retry mechanism will occur. On the retry, the parent path is first searched for a node that has the GUID in it. If that node is found, it is assumed to be the lost node that was successfully created on the first try and is returned to the caller.

In other words: when a client creates a node, the creation can succeed on the ZooKeeper server, but the server may crash before the node's path is returned to the client. Because the client's session is still valid, the ephemeral node is not deleted, and the client has no way of knowing which node it created.

When the creation appears to fail, the client retries. If ZooKeeper is available again by then, the client lists all child nodes on the server and compares them against the uuid it generated. If a match is found, the node was in fact created on the first attempt and is used directly. Without the uuid, the orphaned node could never be matched back to its creator; it would become a dead node and cause a deadlock.
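Curator enables this workaround when the create builder is put into protection mode with withProtection(); InterProcessMutex does this internally. A minimal illustration, with the client setup assumed from the earlier examples:

    // withProtection() prefixes the node name with a GUID, e.g.
    // _c_c8e86826-d3dd-46cc-8432-d91aed763c2e-lock-0000000025, so that a retry
    // can search the parent path for the GUID and recover a node whose creation
    // succeeded but whose name was never returned to the client.
    String ourPath = client.create()
            .creatingParentContainersIfNeeded()
            .withProtection()
            .withMode(CreateMode.EPHEMERAL_SEQUENTIAL)
            .forPath("/mylock/lock-");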

Implementing an unfair lock:

Override the node-creation method, createsTheLock:

    package com.framework.code.demo.zook.lock;

    import org.apache.curator.framework.CuratorFramework;
    import org.apache.curator.framework.recipes.locks.StandardLockInternalsDriver;
    import org.apache.zookeeper.CreateMode;

    public class NoFairLockDriver extends StandardLockInternalsDriver {

        /** Length of the random number suffix */
        private int numLength;

        private static int DEFAULT_LENGTH = 5;

        public NoFairLockDriver() {
            this(DEFAULT_LENGTH);
        }

        public NoFairLockDriver(int numLength) {
            this.numLength = numLength;
        }

        @Override
        public String createsTheLock(CuratorFramework client, String path, byte[] lockNodeBytes) throws Exception {
            String newPath = path + getRandomSuffix();
            String ourPath;
            if (lockNodeBytes != null) {
                // The original driver uses CreateMode.EPHEMERAL_SEQUENTIAL nodes, whose names
                // end up like _c_c8e86826-d3dd-46cc-8432-d91aed763c2e-lock-0000000025, where
                // 0000000025 is an auto-increment sequence generated by the ZooKeeper server,
                // starting from 0000000000. Clients therefore create nodes in arrival order
                // and acquire the lock in that same order: the so-called fair lock.
                // Replacing the ordered sequence with a random number makes it unfair.
                ourPath = client.create().creatingParentContainersIfNeeded().withProtection()
                        .withMode(CreateMode.EPHEMERAL).forPath(newPath, lockNodeBytes);
                // ourPath = client.create().creatingParentContainersIfNeeded().withProtection()
                //         .withMode(CreateMode.EPHEMERAL_SEQUENTIAL).forPath(path, lockNodeBytes);
            } else {
                ourPath = client.create().creatingParentContainersIfNeeded().withProtection()
                        .withMode(CreateMode.EPHEMERAL).forPath(newPath);
                // ourPath = client.create().creatingParentContainersIfNeeded().withProtection()
                //         .withMode(CreateMode.EPHEMERAL_SEQUENTIAL).forPath(path);
            }
            return ourPath;
        }

        /** Build the random number suffix string */
        public String getRandomSuffix() {
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < numLength; i++) {
                sb.append((int) (Math.random() * 10));
            }
            return sb.toString();
        }
    }

Register the class we wrote:

    InterProcessMutex lock = new InterProcessMutex(client, "/mylock", new NoFairLockDriver());

Running the earlier multi-client example again and watching the results, you can see that the order in which locks are acquired no longer matches the order of arrival, so we have an unfair lock.

    I am thread 1, I started to acquire the lock
    I am thread 0, I started to acquire the lock
    ...
    I am thread 9, I started to acquire the lock
    I am thread 8, I have acquired the lock
    I am thread 4, I have acquired the lock
    I am thread 7, I have acquired the lock
    I am thread 3, I have acquired the lock
    I am thread 1, I have acquired the lock
    I am thread 2, I have acquired the lock
    I am thread 5, I have acquired the lock
    I am thread 6, I have acquired the lock
    ...

At this point, the study of how to use ZooKeeper to implement Java distributed locks across JVMs is over. I hope the combination of theory and practice above helps you learn; go and try it!
