Preface
Last week an urgent project went live: the requirement arrived after work on Friday, we started working out a solution on Monday, finished development on Wednesday, and deployed on Thursday, which you could call leader-driven programming. Since our earlier projects were operated and maintained by ourselves, and most of them were rushed, we usually just added crontab entries on the server instead of letting multiple instances handle the concurrency locking of scheduled tasks themselves. Laravel has offered concurrency locks for scheduled tasks since 5.5, and we are about to upgrade to it, but this project is in Python, so there was no choice but to implement the lock ourselves. The solution below is simple to implement and easy to understand.
import atexit
import time

import redis

r = redis.Redis(...)  # connection parameters elided in the original

last_heart = 0       # heartbeat value read on the previous check
free_lock_try = 6    # number of unchanged-heartbeat checks tolerated before stealing the lock

while not r.setnx('mylock', 1):                 # lock already exists, so poll its heartbeat
    now_heart = int(r.get('mylock') or 0)       # current heartbeat (0 if the key vanished)
    print(f"lock not acquired, now_heart={now_heart}, last_heart={last_heart}, free_lock_try={free_lock_try}")
    if now_heart == last_heart:                 # heartbeat has not moved since the last check
        free_lock_try = free_lock_try - 1
        if free_lock_try <= 0:                  # no heartbeat for about a minute: the holder is presumed dead
            old_heart = int(r.getset('mylock', 1) or 0)  # reset lock to 1 and return the heartbeat it had before
            if old_heart < now_heart:           # another process reset it first, so keep waiting
                time.sleep(10)
                continue
            else:
                break                           # got the lock successfully, exit the loop
    else:
        free_lock_try = 6                       # lock has a heartbeat, reset the free_lock_try counter
        last_heart = now_heart
    time.sleep(10)

def producer_exit():
    """Release the lock automatically when the program exits normally."""
    r.delete('mylock')

atexit.register(producer_exit)

# Business code
while True:
    r.incr('mylock')                            # bump the lock heartbeat by one on each iteration
    ...
Let's look at the concurrency-locking problems this program solves:
- Under high concurrency, multiple processes cannot acquire the lock at the same time. redis.setnx is used here: if the lock already exists, other processes cannot reset it and take it over. In addition, when several processes notice at the same moment that the lock has no heartbeat, they all call redis.getset to reset it to 1; every call succeeds, but the returned values differ. Only the process that really acquires the lock gets back the previous heartbeat, while the others get back 1 (see the sketch after this list).
- The lock-holding process exits normally. You can register an exit handler with atexit to delete the lock. This is optional: if you skip it, the next process simply waits a few heartbeats after startup before taking over.
- The lock-holding process exits unexpectedly. After it dies the heartbeat stops increasing, and once free_lock_try checks have passed, another process resets the lock and acquires it.
- All processes exit unexpectedly. This is not a locking problem; use a tool such as supervisor to keep the processes running.
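To make the getset race concrete, here is a minimal sketch, not from the original article, that simulates two contenders finding a stale heartbeat at the same moment; the stale value 7 is an assumption made for the illustration, and a local Redis instance is assumed. Only the first getset call receives the old heartbeat, so only that caller treats itself as the new lock holder.

import redis

r = redis.Redis()              # assumes a local Redis instance

r.set('mylock', 7)             # pretend the heartbeat has been stuck at 7 for a while

first = int(r.getset('mylock', 1))    # returns 7: this caller sees the stale heartbeat and wins
second = int(r.getset('mylock', 1))   # returns 1: this caller sees the winner's reset value and backs off

print(first, second)           # 7 1
r.delete('mylock')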
Why Redis Concurrency Problems Occur
As the saying goes, you should know not only that something is so but also why it is so; only by understanding the cause of a problem can you prescribe the right remedy and find a solution. As is well known, Redis runs in single-threaded mode, so client commands are executed one by one. Because commands run one at a time, Redis appears on the surface to have no high-concurrency problems, and that view is partly justified: individual Redis commands are atomic and do not themselves suffer from the issues of multithreaded programs. In practice, however, once a project is built on a Redis setup, it usually executes sets of commands: a single request contains N Redis commands, and with many clients issuing many more commands at once, you end up with connection timeouts, confused or incorrect data, blocked requests, and similar problems.
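As an illustration (not from the original article), the sketch below shows the kind of non-atomic command sequence described above; the key name page:views and the local Redis instance are assumptions. Each individual command is atomic, but the read/compute/write-back sequence is not, so two overlapping requests can lose an update, while a single atomic command such as INCR closes that window.

import redis

r = redis.Redis()    # assumes a local Redis instance

def unsafe_increment():
    value = int(r.get('page:views') or 0)   # command 1: read the current value
    value += 1                              # compute in the application
    r.set('page:views', value)              # command 2: write back, possibly overwriting a concurrent update

def safe_increment():
    r.incr('page:views')                    # one atomic command, no window for interleaving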
In summary, Redis concurrency problems are caused by the complexity of the business logic in the program.
Summary
That is all for this article. I hope its content offers some reference value for your study or work. Thank you for your support.