How to Implement Distributed Locks Based on Zookeeper


This article introduces how to implement distributed locks based on Zookeeper. Many people run into questions about this in day-to-day work, so the following walks through a simple, practical approach step by step.

1. What is Zookeeper?

Zookeeper is an open-source, distributed coordination service for distributed applications, and an important component of Hadoop and HBase.

Architecture diagram from the official website:

Characteristics:

Zookeeper's data model is a tree of nodes. The znode is the basic unit; like a file in a Unix file system, a znode can store data and have data read from it.

Through a client you can perform data operations on znodes, and you can also register watchers to listen for znode changes.

2. Zookeeper node types

Persistent node (PERSISTENT)

Persistent sequential node (PERSISTENT_SEQUENTIAL)

Ephemeral node (EPHEMERAL)

Ephemeral sequential node (EPHEMERAL_SEQUENTIAL)
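A minimal sketch of creating each node type, assuming the zkclient API used later in this article; the class name and paths are illustrative, and each node can only be created once per path:

package com.example.concurrent.zkSample;

import org.I0Itec.zkclient.ZkClient;

public class NodeTypeSketch {
    public static void main(String[] args) {
        ZkClient zkClient = new ZkClient("localhost:2181");
        // persistent node: survives client disconnects, must be deleted explicitly
        zkClient.createPersistent("/demo-persistent", "data");
        // persistent sequential node: ZooKeeper appends a monotonically increasing suffix
        String seq = zkClient.createPersistentSequential("/demo-seq-", "data");
        System.out.println("created " + seq); // e.g. /demo-seq-0000000000
        // ephemeral node: removed automatically when the client session closes
        zkClient.createEphemeral("/demo-ephemeral", "data");
        // ephemeral sequential node: combines both properties (used by the fair lock below)
        String ephSeq = zkClient.createEphemeralSequential("/demo-eph-seq-", "data");
        System.out.println("created " + ephSeq);
        zkClient.close();
    }
}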

3. Build the Zookeeper environment

Download Zookeeper from the official website (https://zookeeper.apache.org/releases.html#download): find the appropriate release and download it locally.

Modify the configuration: in ${ZOOKEEPER_HOME}\conf, find the zoo_sample.cfg file, back up a copy, and rename the working copy to zoo.cfg.
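For reference, a minimal zoo.cfg sketch matching the defaults shipped in zoo_sample.cfg; the dataDir value is an assumption, so point it at any writable local directory:

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/tmp/zookeeper
clientPort=2181

The clientPort of 2181 matches the address used by the CLI and the code examples below.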

With the archive extracted and the configuration in place, double-click zkServer.cmd (in the bin directory) to start the server:

4. Basic use of Zookeeper

Enter commands in a cmd window, or directly in the terminal of the IDEA editor:

zkCli.cmd -server 127.0.0.1:2181

Enter the command help to view help information:

ZooKeeper -server host:port -client-configuration properties-file cmd args
    addWatch [-m mode] path # optional mode is one of [PERSISTENT, PERSISTENT_RECURSIVE] - default is PERSISTENT_RECURSIVE
    addauth scheme auth
    close
    config [-c] [-w] [-s]
    connect host:port
    create [-s] [-e] [-c] [-t ttl] path [data] [acl]
    delete [-v version] path
    deleteall path [-b batch size]
    delquota [-n|-b|-N|-B] path
    get [-s] [-w] path
    getAcl [-s] path
    getAllChildrenNumber path
    getEphemerals path
    history
    listquota path
    ls [-s] [-w] [-R] path
    printwatches on|off
    quit
    reconfig [-s] [-v version] [[-file path] | [-members serverID=host:port1:port2;port3[,...]*]] | [-add serverId=host:port1:port2;port3[,...]]* [-remove serverId[,...]*]
    redo cmdno
    removewatches path [-c|-d|-a] [-l]
    set [-s] [-v version] path data
    setAcl [-s] [-v version] [-R] path acl
    setquota -n|-b|-N|-B val path
    stat [-w] path
    sync path
    version
    whoami

create [-s] [-e] [-c] [-t ttl] path [data] [acl]: -s creates a sequential node and -e creates an ephemeral node; if neither is specified, a persistent node is created. The acl argument is used for permission control.

[zk: 127.0.0.1:2181(CONNECTED) 1] create -s /zk-test 0
Created /zk-test0000000000

View the existing nodes:

[zk: 127.0.0.1:2181(CONNECTED) 4] ls /
[zk-test0000000000, zookeeper]

Set or modify node data:

set /zk-test 123

Get node data

get /zk-test

PS: for details of the Zookeeper commands, check the help output or the documentation on the official website.

Next, a Java example of watcher listening:

package com.example.concurrent.zkSample;

import org.I0Itec.zkclient.IZkDataListener;
import org.I0Itec.zkclient.ZkClient;

/**
 * Zookeeper example
 *
 * @author mazq
 * modified date: 2021-12-09 16:57
 */
public class ZookeeperSample {
    public static void main(String[] args) {
        ZkClient client = new ZkClient("localhost:2181");
        client.setZkSerializer(new MyZkSerializer());
        // register a watcher on /zk-test
        client.subscribeDataChanges("/zk-test", new IZkDataListener() {
            @Override
            public void handleDataChange(String dataPath, Object data) throws Exception {
                System.out.println("listening: node data changed!");
            }

            @Override
            public void handleDataDeleted(String dataPath) throws Exception {
                System.out.println("listening: node data deleted!");
            }
        });
        // keep the process alive so the watcher has a chance to fire
        try {
            Thread.sleep(1000 * 60 * 2);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}

5. Zookeeper application scenarios

Typical application scenarios of Zookeeper include:

Registry (Dubbo)

Naming service

Master election

Cluster management

Distributed queue

Distributed lock

6. Zookeeper distributed lock

Zookeeper is well suited to implementing distributed locks. What is the principle? Zookeeper resembles a Unix file system: within a single directory, file names must be unique, which gives natural mutual exclusion. Zookeeper has the same property: under the same znode, child nodes cannot have the same name. This property can be used to implement a distributed lock.
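For example, trying to create the same znode twice fails with a node-exists error. An illustrative zkCli session (the /lock path is arbitrary and the prompt may differ slightly by version):

[zk: 127.0.0.1:2181(CONNECTED) 0] create -e /lock "1"
Created /lock
[zk: 127.0.0.1:2181(CONNECTED) 1] create -e /lock "1"
Node already exists: /lock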

Business scenario: placing orders under high concurrency, a typical e-commerce scenario.

Custom Zookeeper serialization class:

package com.example.concurrent.zkSample;

import org.I0Itec.zkclient.exception.ZkMarshallingError;
import org.I0Itec.zkclient.serialize.ZkSerializer;

import java.io.UnsupportedEncodingException;

public class MyZkSerializer implements ZkSerializer {
    private String charset = "UTF-8";

    @Override
    public byte[] serialize(Object o) throws ZkMarshallingError {
        return String.valueOf(o).getBytes();
    }

    @Override
    public Object deserialize(byte[] bytes) throws ZkMarshallingError {
        try {
            return new String(bytes, charset);
        } catch (UnsupportedEncodingException e) {
            throw new ZkMarshallingError();
        }
    }
}

Order number generator class. Because SimpleDateFormat is not thread-safe, it is wrapped in a ThreadLocal:

package com.example.concurrent.zkSample;

import java.text.DateFormat;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.concurrent.atomic.AtomicInteger;

public class OrderCodeGenerator {
    private static final String DATE_FORMAT = "yyyyMMddHHmmss";
    private static AtomicInteger ai = new AtomicInteger(0);
    private static int i = 0;
    // SimpleDateFormat is not thread-safe, so each thread gets its own instance
    private static ThreadLocal<SimpleDateFormat> threadLocal = new ThreadLocal<SimpleDateFormat>() {
        @Override
        protected SimpleDateFormat initialValue() {
            return new SimpleDateFormat(DATE_FORMAT);
        }
    };

    public static DateFormat getDateFormat() {
        return (DateFormat) threadLocal.get();
    }

    public static String generatorOrderCode() {
        try {
            // timestamp plus an incrementing suffix
            return getDateFormat().format(new Date(System.currentTimeMillis())) + i++;
        } finally {
            threadLocal.remove();
        }
    }
}

Add the zkclient dependency to pom.xml:

<dependency>
    <groupId>com.101tec</groupId>
    <artifactId>zkclient</artifactId>
    <version>0.10</version>
</dependency>

The idea of the Zookeeper distributed lock: multiple threads try to create the same node. The thread that creates it successfully holds the lock and executes the business logic, while the threads that fail block and register a watcher on the node. When the node is deleted (the lock is released), a waiting thread is woken up, unregisters its watcher, and tries to grab the lock again.

package com.example.concurrent.zkSample;

import lombok.extern.slf4j.Slf4j;
import org.I0Itec.zkclient.IZkDataListener;
import org.I0Itec.zkclient.ZkClient;
import org.I0Itec.zkclient.exception.ZkNodeExistsException;

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;

@Slf4j
public class ZKDistributeLock implements Lock {
    private String localPath;
    private ZkClient zkClient;

    ZKDistributeLock(String localPath) {
        super();
        this.localPath = localPath;
        zkClient = new ZkClient("localhost:2181");
        zkClient.setZkSerializer(new MyZkSerializer());
    }

    @Override
    public void lock() {
        while (!tryLock()) {
            waitForLock();
        }
    }

    private void waitForLock() {
        // CountDownLatch parks this thread until the lock node disappears
        CountDownLatch countDownLatch = new CountDownLatch(1);
        // register a watcher on the lock node
        IZkDataListener listener = new IZkDataListener() {
            @Override
            public void handleDataChange(String path, Object o) throws Exception {
                // System.out.println("zookeeper data has changed!");
            }

            @Override
            public void handleDataDeleted(String s) throws Exception {
                // System.out.println("zookeeper node deleted, lock released!");
                // the lock has been released, wake up the waiting thread
                countDownLatch.countDown();
            }
        };
        zkClient.subscribeDataChanges(localPath, listener);
        // wait only if the lock node still exists
        if (zkClient.exists(localPath)) {
            try {
                countDownLatch.await();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
        // unregister the watcher
        zkClient.unsubscribeDataChanges(localPath, listener);
    }

    @Override
    public void unlock() {
        zkClient.delete(localPath);
    }

    @Override
    public boolean tryLock() {
        try {
            zkClient.createEphemeral(localPath);
        } catch (ZkNodeExistsException e) {
            return false;
        }
        return true;
    }

    @Override
    public boolean tryLock(long time, TimeUnit unit) throws InterruptedException {
        return false;
    }

    @Override
    public void lockInterruptibly() throws InterruptedException {
    }

    @Override
    public Condition newCondition() {
        return null;
    }
}

The order service API:

package com.example.concurrent.zkSample;

public interface OrderService {
    void createOrder();
}

The order service implementation class, protected by the Zookeeper distributed lock:

package com.example.concurrent.zkSample;

import java.util.concurrent.locks.Lock;

public class OrderServiceInvoker implements OrderService {
    @Override
    public void createOrder() {
        Lock zkLock = new ZKDistributeLock("/zk-test");
        // Lock zkLock = new ZKDistributeImproveLock("/zk-test");
        String orderCode = null;
        try {
            zkLock.lock();
            orderCode = OrderCodeGenerator.generatorOrderCode();
        } finally {
            zkLock.unlock();
        }
        System.out.println(String.format("thread name: %s, orderCode: %s",
                Thread.currentThread().getName(), orderCode));
    }
}

Because building a real distributed environment is tedious, we use CyclicBarrier, a coordination tool from java.util.concurrent, to simulate the concurrent, high-load requests of a distributed environment with multiple threads.

package com.example.concurrent.zkSample;

import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;

public class ConcurrentDistributeTest {
    public static void main(String[] args) {
        // number of threads
        int threadSize = 30;
        // cyclic barrier so all threads start together
        CyclicBarrier cyclicBarrier = new CyclicBarrier(threadSize, () -> {
            System.out.println("ready!");
        });
        // simulate a distributed cluster
        for (int i = 0; i < threadSize; i++) {
            new Thread(() -> {
                OrderService orderService = new OrderServiceInvoker();
                // all threads wait at the barrier
                try {
                    cyclicBarrier.await();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                } catch (BrokenBarrierException e) {
                    e.printStackTrace();
                }
                // simulate a concurrent request
                orderService.createOrder();
            }).start();
        }
    }
}

After running it several times, no duplicate order numbers appear, so the distributed lock is working:

Thread name: Thread-6, orderCode: 202112100945110

Thread name: Thread-1, orderCode: 202112100945111

Thread name: Thread-13, orderCode: 202112100945112

Thread name: Thread-11, orderCode: 202112100945113

Thread name: Thread-14, orderCode: 202112100945114

Thread name: Thread-0, orderCode: 202112100945115

Thread name: Thread-8, orderCode: 202112100945116

Thread name: Thread-17, orderCode: 202112100945117

Thread name: Thread-10, orderCode: 202112100945118

Thread name: Thread-5, orderCode: 202112100945119

Thread name: Thread-2, orderCode: 2021121009451110

Thread name: Thread-16, orderCode: 2021121009451111

Thread name: Thread-19, orderCode: 2021121009451112

Thread name: Thread-4, orderCode: 2021121009451113

Thread name: Thread-18, orderCode: 2021121009451114

Thread name: Thread-3, orderCode: 2021121009451115

Thread name: Thread-9, orderCode: 2021121009451116

Thread name: Thread-12, orderCode: 2021121009451117

Thread name: Thread-15, orderCode: 2021121009451118

Thread name: Thread-7, orderCode: 2021121009451219

Now comment out the locking code and simulate the concurrent requests again:

package com.example.concurrent.zkSample;

import java.util.concurrent.locks.Lock;

public class OrderServiceInvoker implements OrderService {
    @Override
    public void createOrder() {
        // Lock zkLock = new ZKDistributeLock("/zk-test");
        // Lock zkLock = new ZKDistributeImproveLock("/zk-test");
        String orderCode = null;
        try {
            // zkLock.lock();
            orderCode = OrderCodeGenerator.generatorOrderCode();
        } finally {
            // zkLock.unlock();
        }
        System.out.println(String.format("thread name: %s, orderCode: %s",
                Thread.currentThread().getName(), orderCode));
    }
}

After a few runs, duplicate order numbers appear, which confirms that it is the distributed lock that ensures thread safety in the distributed environment.

7. Fair Zookeeper distributed lock

The example above is an unfair lock: once the lock is released, all waiting threads wake up and compete for it, which easily produces a "thundering herd" effect:

A large hit to server performance

Network spikes

Possible downtime

Therefore, we need to improve the distributed lock and change it to a fair lock mode.

Fair lock: threads acquire the lock in the order in which they requested it. Waiting threads queue up, and only the thread at the head of the queue can acquire the lock; the others block until the holder releases the lock, at which point the next thread in line is woken up.

Unfair lock: threads compete for the lock directly; a thread that fails joins the waiting set, and whenever the holder releases the lock, all waiting threads compete for it again.

Flow chart:

Code improvements:

package com.example.concurrent.zkSample;

import org.I0Itec.zkclient.IZkDataListener;
import org.I0Itec.zkclient.ZkClient;
import org.I0Itec.zkclient.exception.ZkNodeExistsException;

import java.util.Collections;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;

public class ZKDistributeImproveLock implements Lock {
    private String localPath;
    private ZkClient zkClient;
    private String currentPath;
    private String beforePath;

    ZKDistributeImproveLock(String localPath) {
        super();
        this.localPath = localPath;
        zkClient = new ZkClient("localhost:2181");
        zkClient.setZkSerializer(new MyZkSerializer());
        if (!zkClient.exists(localPath)) {
            try {
                this.zkClient.createPersistent(localPath);
            } catch (ZkNodeExistsException e) {
            }
        }
    }

    @Override
    public void lock() {
        while (!tryLock()) {
            waitForLock();
        }
    }

    private void waitForLock() {
        CountDownLatch countDownLatch = new CountDownLatch(1);
        // register a watcher on the previous node only
        IZkDataListener listener = new IZkDataListener() {
            @Override
            public void handleDataChange(String dataPath, Object data) throws Exception {
            }

            @Override
            public void handleDataDeleted(String dataPath) throws Exception {
                // the previous node was deleted: the lock was released, wake up this thread
                countDownLatch.countDown();
            }
        };
        zkClient.subscribeDataChanges(beforePath, listener);
        // wait only if the previous node still exists
        if (zkClient.exists(beforePath)) {
            try {
                countDownLatch.await();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
        // unregister the watcher
        zkClient.unsubscribeDataChanges(beforePath, listener);
    }

    @Override
    public void unlock() {
        zkClient.delete(this.currentPath);
    }

    @Override
    public boolean tryLock() {
        if (this.currentPath == null) {
            currentPath = zkClient.createEphemeralSequential(localPath + "/", "123");
        }
        // get all child nodes under the lock znode
        List<String> children = zkClient.getChildren(localPath);
        // sort the list
        Collections.sort(children);
        if (currentPath.equals(localPath + "/" + children.get(0))) {
            // the current node is the first node: the lock is acquired
            return true;
        } else {
            // find the index of the current node
            int index = children.indexOf(currentPath.substring(localPath.length() + 1));
            // remember the previous node
            beforePath = localPath + "/" + children.get(index - 1);
        }
        return false;
    }

    @Override
    public boolean tryLock(long time, TimeUnit unit) throws InterruptedException {
        return false;
    }

    @Override
    public void lockInterruptibly() throws InterruptedException {
    }

    @Override
    public Condition newCondition() {
        return null;
    }
}

Thread name: Thread-13, orderCode: 202112100936140

Thread name: Thread-3, orderCode: 202112100936141

Thread name: Thread-14, orderCode: 202112100936142

Thread name: Thread-16, orderCode: 202112100936143

Thread name: Thread-1, orderCode: 202112100936144

Thread name: Thread-9, orderCode: 202112100936145

Thread name: Thread-4, orderCode: 202112100936146

Thread name: Thread-5, orderCode: 202112100936147

Thread name: Thread-7, orderCode: 202112100936148

Thread name: Thread-2, orderCode: 202112100936149

Thread name: Thread-17, orderCode: 2021121009361410

Thread name: Thread-15, orderCode: 2021121009361411

Thread name: Thread-0, orderCode: 2021121009361412

Thread name: Thread-10, orderCode: 2021121009361413

Thread name: Thread-18, orderCode: 2021121009361414

Thread name: Thread-19, orderCode: 2021121009361415

Thread name: Thread-8, orderCode: 2021121009361416

Thread name: Thread-12, orderCode: 2021121009361417

Thread name: Thread-11, orderCode: 2021121009361418

Thread name: Thread-6, orderCode: 2021121009361419

8. Comparison of Zookeeper and Redis locks

Both Redis and Zookeeper can be used to implement distributed locks; a brief comparison:

Implementation of a distributed lock based on Redis

The implementation is relatively complicated (a minimal sketch of this approach follows this list).

There is a possibility of deadlock, e.g. if a lock holder crashes before releasing the key and no expiry was set.

Better performance, since it is memory-based; Redis prioritizes availability, i.e. AP in terms of the CAP theorem.
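For comparison, a minimal sketch of the Redis approach, assuming the Jedis client; the key name, timeout, and class name are illustrative, and the release step is simplified (real implementations perform the check-and-delete atomically with a Lua script):

package com.example.concurrent.zkSample;

import java.util.UUID;

import redis.clients.jedis.Jedis;
import redis.clients.jedis.params.SetParams;

public class RedisLockSketch {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            String lockKey = "order:lock";               // illustrative key name
            String token = UUID.randomUUID().toString(); // identifies the lock owner
            // SET key value NX PX 30000: acquire only if absent, with a TTL so a crashed
            // holder cannot block everyone else forever
            String result = jedis.set(lockKey, token, SetParams.setParams().nx().px(30_000));
            if ("OK".equals(result)) {
                try {
                    System.out.println("lock acquired, doing work...");
                } finally {
                    // naive release: check ownership, then delete; not atomic as written
                    if (token.equals(jedis.get(lockKey))) {
                        jedis.del(lockKey);
                    }
                }
            } else {
                System.out.println("lock is held by someone else");
            }
        }
    }
}

The TTL is what avoids the deadlock case mentioned above, at the cost of correctness if the work outlasts the TTL.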

Implementation of a distributed lock based on Zookeeper

The implementation is relatively simple.

High reliability, because Zookeeper prioritizes consistency, i.e. CP in terms of the CAP theorem.

Performance is reasonably good, handling on the order of 10,000 to 20,000 concurrent requests; at higher concurrency, Redis performs better.

This concludes the study of how to implement distributed locks based on Zookeeper. I hope it has answered your questions; combining theory with practice is the best way to learn, so go and try it yourself!
