2025-02-24 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/02 Report--
This article introduces how to use Curator for distributed locking. In practice many people run into difficulties with this topic, so the article walks through the common approaches and their pitfalls; read on and you should come away with something useful.
Multithreaded operations on shared resources need locking to avoid data being written out of order. The same problem exists in distributed systems, where a distributed lock service is needed. Common distributed lock implementations are based on a DB, Redis, or ZooKeeper. Below, the author analyzes the design and implementation of these three distributed locks in turn. If you only want the summary of distributed locks, skip straight to the end of the document.
Distributed locks can be implemented in a variety of ways, but in any case, distributed locks generally have the following characteristics:
Exclusivity: Only one client can acquire a lock at any time.
Fault tolerance: A distributed lock service generally needs to remain available, i.e. as long as a majority of the lock service's cluster nodes survive, clients can still perform lock and unlock operations.
Deadlock avoidance: Distributed locks must be released, even if the client crashes before release or the network is unreachable.
Beyond these properties, a distributed lock should ideally also support reentrancy, high performance, and blocking acquisition (as with AQS, waking blocked waiters promptly). With that said, let's move on to the design and implementation of the three locks.
DB lock
Create a new table in the database to control concurrency. The table structure can be as follows:
CREATE TABLE `lock_table` (
  `id` int(11) unsigned NOT NULL COMMENT 'primary key',
  `key_id` bigint(20) NOT NULL COMMENT 'distributed key',
  `memo` varchar(43) NOT NULL DEFAULT '' COMMENT 'recordable operation content',
  `update_time` datetime NOT NULL COMMENT 'update time',
  PRIMARY KEY (`id`, `key_id`),
  UNIQUE KEY `key_id` (`key_id`) USING BTREE
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
key_id is used as the distributed key for concurrency control, and memo can record operation details (for example, memo can support reentrancy by marking which client currently holds the lock and the lock count). Making key_id a unique index ensures that, for a given key_id, only one lock (one row insertion) can succeed. The lock and unlock pseudocode is as follows:
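The unique-index trick above can be demonstrated without a MySQL server. The following is a minimal sketch using Python's built-in SQLite in place of MySQL; the table and column names follow the article, while the function names and key values are illustrative, not part of the original.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE lock_table (
        key_id INTEGER NOT NULL UNIQUE,
        memo   TEXT    NOT NULL,
        update_time TEXT NOT NULL DEFAULT (datetime('now'))
    )
""")

def try_lock(key_id, memo):
    """Acquire the lock by inserting a row; the UNIQUE index makes this atomic."""
    try:
        conn.execute("INSERT INTO lock_table (key_id, memo) VALUES (?, ?)",
                     (key_id, memo))
        conn.commit()
        return True
    except sqlite3.IntegrityError:   # row already exists -> someone holds the lock
        return False

def unlock(key_id, memo):
    """Release only if we are the holder (memo identifies the client)."""
    cur = conn.execute("DELETE FROM lock_table WHERE key_id = ? AND memo = ?",
                       (key_id, memo))
    conn.commit()
    return cur.rowcount == 1

print(try_lock(42, "client-a"))   # True  - first insert succeeds
print(try_lock(42, "client-b"))   # False - unique index rejects the second insert
print(unlock(42, "client-a"))     # True
print(try_lock(42, "client-b"))   # True  - lock is free again
```

The memo check in unlock is what later prevents one client from deleting another client's lock, the same safety idea the Redis section applies with a random value.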
def lock:
    exec sql: insert into lock_table(key_id, memo, update_time) values ('key_id', 'memo', NOW())
    if result == true:
        return true
    else:
        return false

def unlock:
    exec sql: delete from lock_table where key_id = 'key_id' and memo = 'memo'
Note that the lock operation in the pseudocode is non-blocking, i.e. tryLock. To implement blocking (or blocking-with-timeout) locking, simply retry the lock operation in a loop until it succeeds or the timeout expires. A DB-based distributed lock has a real problem: if the client crashes after acquiring the lock, or fails to unlock for network reasons, no other client can lock that key_id and the lock is never released. To clean up such locks, the application layer needs a scheduled task that deletes expired records that were never unlocked, for example deleting records left locked more than 2 minutes ago:
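The retry loop described above can be sketched as follows. This is illustrative only: `try_lock`/`unlock` here are stand-ins (a simple in-memory set) for the DB insert and delete, and the function names and intervals are assumptions, not the article's code.

```python
import time

_held = set()                      # stand-in for rows in lock_table

def try_lock(key_id):
    """Non-blocking tryLock: succeed only if nobody holds the key."""
    if key_id in _held:
        return False
    _held.add(key_id)
    return True

def unlock(key_id):
    _held.discard(key_id)

def lock_blocking(key_id, timeout=5.0, retry_interval=0.05):
    """Turn tryLock into a blocking lock with timeout by retrying in a loop."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if try_lock(key_id):
            return True
        time.sleep(retry_interval)  # back off before polling again
    return False

print(lock_blocking("k"))               # True  - acquired immediately
print(lock_blocking("k", timeout=0.2))  # False - times out while held
unlock("k")
print(lock_blocking("k"))               # True
```

Against a real DB the retry interval trades lock latency for query load, which is one reason the DB lock's throughput ceiling is low.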
def clear_timeout_lock:
    exec sql: delete from lock_table where update_time < ADDTIME(NOW(), '-00:02:00')

Because a single DB instance generally handles only a few hundred TPS, the performance ceiling of a DB-based distributed lock is generally below 1k. In scenarios with modest concurrency this lock meets the need without performance problems. However, a DB used as a lock service must also address the single point of failure, which a distributed system cannot tolerate; this is usually solved with synchronous database replication plus a VIP that switches over to the new master.

The DB distributed lock above is implemented via insert. If the data being locked already exists in the database, a select xxx where key_id = xxx for update approach also works.

Redis lock

A Redis lock acquires a resource with the following command:

set key_id key_value NX PX expireTime

Here, SET with NX assigns the key only if it does not already exist, PX sets the key's expiry time, and key_value is generally a random value used to make releasing the lock safe (on release, the client checks that the value is the random value it set earlier, and deletes the key only if so). Because the key has an expiry time, the lock is released automatically after a certain interval.

SET NX guarantees that, under concurrent locking, only one client can set the key successfully (Redis executes commands on a single thread against in-memory data, so command execution has no multithreading synchronization issues). The lock/unlock pseudocode is as follows:

def lock:
    if (redis.call('set', KEYS[1], ARGV[1], 'ex', ARGV[2], 'nx')) then
        return true
    end
    return false

def unlock:
    if (redis.call('get', KEYS[1]) == ARGV[1]) then
        redis.call('del', KEYS[1])
        return true
    end
    return false

A problem in distributed lock services

If a client that has acquired the lock fails to release it in time for some reason, Redis releases the lock on timeout and another client acquires it, so that for a while two clients both believe they hold the lock. So how do we solve this problem? One solution is to introduce a lock-renewal mechanism: after acquiring the lock and before releasing it, the holder renews the lock at regular intervals, for example every 1/3 of the lock's timeout.
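The NX/PX semantics and the "compare the random value before deleting" release can be sketched without a real Redis. The in-memory dict below only illustrates the logic; it is not Redis, and the key names and TTLs are invented for the demo.

```python
import time
import uuid

store = {}                                   # key -> (value, expire_at)

def set_nx_px(key, value, px_ms):
    """Simulate SET key value NX PX px_ms: set only if absent (or expired)."""
    now = time.monotonic()
    entry = store.get(key)
    if entry and entry[1] > now:             # key exists and has not expired
        return False
    store[key] = (value, now + px_ms / 1000.0)
    return True

def safe_unlock(key, value):
    """Delete only if the stored value is our own random token."""
    entry = store.get(key)
    if entry and entry[0] == value:
        del store[key]
        return True
    return False

token_a = str(uuid.uuid4())                      # random value identifying client A
print(set_nx_px("lock:order", token_a, 100))     # True  - acquired
print(set_nx_px("lock:order", "other", 100))     # False - NX: key already set
print(safe_unlock("lock:order", "wrong-token"))  # False - not the holder
print(safe_unlock("lock:order", token_a))        # True
print(set_nx_px("lock:order", "b", 50))          # True  - free again
time.sleep(0.1)
print(set_nx_px("lock:order", "c", 100))         # True  - previous entry expired
```

In real Redis the get-then-del in unlock must run as a single Lua script to be atomic, which is exactly why the article's unlock pseudocode is written in redis.call form.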
There are many open source distributed lock implementations for Redis; the better known ones are redisson and Baidu's dlock. The author has also written a simple distributed lock, redis-lock, which mainly adds lock renewal and a mechanism for locking multiple keys at the same time.
For high availability, Redis can generally rely on clustering or master-slave replication. The advantage of the Redis lock is excellent performance; the disadvantage is that the data lives in memory, so if the cache service goes down, lock data is lost. Redis's replication feature does guarantee data reliability to a certain extent, but since replication completes asynchronously, it is still possible for the master to write lock data, go down before it is synchronized to the slave, and lose the lock.
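The renewal ("watchdog") mechanism mentioned above can be sketched with a background thread that keeps pushing the expiry forward every 1/3 of the TTL while the work is still running. The `expiry` dict stands in for the Redis TTL; all names and timings here are illustrative assumptions.

```python
import threading
import time

expiry = {}                                  # key -> absolute expiry time

def acquire(key, ttl):
    expiry[key] = time.monotonic() + ttl

def renew(key, ttl, stop_event):
    """Extend the TTL every ttl/3 seconds until stop_event is set."""
    while not stop_event.wait(ttl / 3):
        if key in expiry:
            expiry[key] = time.monotonic() + ttl   # push expiry forward

def release(key, stop_event):
    stop_event.set()                         # stop the watchdog first
    expiry.pop(key, None)

stop = threading.Event()
acquire("job", ttl=0.3)
threading.Thread(target=renew, args=("job", 0.3, stop), daemon=True).start()
time.sleep(0.6)                              # the work outlives the original TTL...
still_held = expiry.get("job", 0) > time.monotonic()
print(still_held)                            # True - renewals kept the lock alive
release("job", stop)
```

This is the idea behind redisson's watchdog: without renewal the lock would have expired at 0.3 s even though the holder was still working.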
zookeeper distributed lock
ZooKeeper is a highly available distributed coordination service created by Yahoo!, and an open-source implementation of Google's Chubby. Among the basic services ZooKeeper provides is a distributed lock service. ZooKeeper has three important characteristics: the ZAB protocol, its node storage model, and the watcher mechanism. The ZAB protocol ensures data consistency, cluster deployment ensures availability, storing nodes in memory improves data-operation performance, and the watcher mechanism provides notifications (for example, a client that acquired the lock can notify the other clients when it releases it).
The ZooKeeper node model supports ephemeral nodes: ephemeral data written by a client is deleted when that client goes down, so the lock needs no extra timeout-release mechanism. When multiple clients concurrently request the same path, only one can create it successfully, and this property is used to implement the distributed lock. Note: even if the client is not down, ZooKeeper will also delete the ephemeral data when heartbeats between the ZooKeeper service and the client fail for network reasons; if the client is still operating on the shared data at that moment, there is some risk.
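The standard ZooKeeper lock recipe (which Curator's InterProcessMutex builds on) refines the create-the-same-path idea: each client creates an ephemeral *sequential* node under the lock path, and whoever holds the smallest sequence number owns the lock, with each waiter watching its predecessor. The following is a server-free simulation of just that ordering logic; no real ZooKeeper is involved and all names are illustrative.

```python
children = []                                # simulated children of the lock path
counter = 0

def create_sequential(prefix="lock-"):
    """Simulate CREATE with the SEQUENTIAL flag: the server appends a counter."""
    global counter
    name = f"{prefix}{counter:010d}"
    counter += 1
    children.append(name)
    return name

def holds_lock(my_node):
    """The client with the smallest sequence number is the lock holder."""
    return min(children) == my_node

def delete(node):                            # session close would do this for us
    children.remove(node)

a = create_sequential()                      # lock-0000000000
b = create_sequential()                      # lock-0000000001
print(holds_lock(a))                         # True  - smallest sequence wins
print(holds_lock(b))                         # False - b waits, watching a
delete(a)                                    # a releases (or its session dies)
print(holds_lock(b))                         # True  - b is now the smallest
```

Because each waiter watches only the node just before it, a release wakes exactly one client instead of stampeding all of them, which is why this recipe scales well.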
A ZooKeeper-based distributed lock is easier to use than the Redis- and DB-based implementations, and its efficiency and stability are better. Curator encapsulates the API operations on ZooKeeper and also wraps some advanced features, such as cache event listening, leader election, distributed locks, distributed counters, and distributed barriers. An example of using Curator for distributed locking follows:
<dependency>
    <groupId>org.apache.curator</groupId>
    <artifactId>curator-framework</artifactId>
    <version>2.12.0</version>
</dependency>
<dependency>
    <groupId>org.apache.curator</groupId>
    <artifactId>curator-recipes</artifactId>
    <version>2.12.0</version>
</dependency>

public static void main(String[] args) throws Exception {
    String lockPath = "/curator_recipes_lock_path";
    CuratorFramework client = CuratorFrameworkFactory.builder()
            .connectString("192.168.193.128:2181")
            .retryPolicy(new ExponentialBackoffRetry(1000, 3))
            .build();
    client.start();
    InterProcessMutex lock = new InterProcessMutex(client, lockPath);
    Runnable task = () -> {
        try {
            lock.acquire();
            try {
                System.out.println("zookeeper acquire success: " + Thread.currentThread().getName());
                Thread.sleep(1000);
            } catch (Exception e) {
                e.printStackTrace();
            } finally {
                lock.release();
            }
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    };
    ExecutorService executor = Executors.newFixedThreadPool(10);
    for (int i = 0; i < 1000; i++) {
        executor.execute(task);
    }
    LockSupport.park();
}

That concludes "how to use curator for distributed locking". Thank you for reading. For more industry-related knowledge, you can follow this site, which will continue to publish practical articles.
© 2024 shulou.com SLNews company. All rights reserved.