
Case Analysis of Using Java Locks at Work

2025-02-23 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/01 Report--

Today I would like to share some case studies of how Java locks are used in everyday work. The content is detailed and the logic is laid out step by step; I hope you get something out of it. Let's take a look.

1. synchronized

synchronized is a reentrant exclusive lock, similar to ReentrantLock; ReentrantLock can replace synchronized in almost every situation. Their biggest similarity is that both are reentrant exclusive locks. The main differences are:

ReentrantLock has more features, such as Condition, interruptible lock-acquisition APIs, and support for complex lock-plus-queue scenarios.

ReentrantLock can be either a fair or an unfair lock, while synchronized is always unfair.

Their usage also differs: ReentrantLock requires explicit calls to its lock and unlock APIs, while synchronized locks and unlocks a code block automatically, which makes synchronized more convenient to use.
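A minimal sketch of these differences (class and method names are mine, not from the article): the fair-lock constructor flag, the explicit lock/unlock API, and reentrancy, which both locks share:

```java
import java.util.concurrent.locks.ReentrantLock;

class LockStylesDemo {
    private int count = 0;
    // true = fair lock: waiting threads acquire in FIFO order.
    // synchronized offers no fairness option; it is always unfair.
    private final ReentrantLock lock = new ReentrantLock(true);

    // synchronized: the JVM acquires and releases the monitor automatically.
    public synchronized void incWithSynchronized() {
        count++;
    }

    // Reentrancy: a thread already holding the monitor may enter another
    // synchronized method on the same object without deadlocking.
    public synchronized int readWithSynchronized() {
        return peek(); // re-acquires the same monitor
    }

    private synchronized int peek() {
        return count;
    }

    // ReentrantLock: explicit acquire/release; unlock belongs in finally.
    public void incWithReentrantLock() {
        lock.lock();
        try {
            count++;
        } finally {
            lock.unlock();
        }
    }
}
```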

Since synchronized and ReentrantLock are similar in function, let's take synchronized as the example.

1.1. Shared resource initialization

In distributed systems, we often load static configuration resources into JVM memory when the project starts, so that requests can read these shared configuration resources directly from memory instead of hitting the database every time, reducing latency.

Generally speaking, such shared resources are static business-process configuration and static business-rule configuration.

The steps for initializing a shared resource are generally: project startup -> trigger the initialization action -> a single thread fetches the data from the database -> assemble it into the data structure we need -> put it into JVM memory.

When a project starts, to prevent the shared resource from being loaded multiple times, we usually add an exclusive lock: one thread loads the resource while the other threads wait, and they return once loading is complete. The exclusive lock can be synchronized or ReentrantLock; taking synchronized as the example, the mock code is as follows:

```java
// shared resource
private static final Map SHARED_MAP = Maps.newConcurrentMap();
// flag: has the resource been loaded?
private static boolean loaded = false;

/**
 * Initialize the shared resource.
 */
@PostConstruct
public void init() {
    if (loaded) {
        return;
    }
    synchronized (this) {
        // check again inside the lock
        if (loaded) {
            return;
        }
        log.info("SynchronizedDemo init begin");
        // fetch the data from the database and assemble it into the SHARED_MAP format
        loaded = true;
        log.info("SynchronizedDemo init end");
    }
}
```

You may have noticed the @PostConstruct annotation in the code above. A method annotated with @PostConstruct is executed when the Spring container initializes, i.e., the init method above is triggered on Spring container startup.

You can download the demo code, find the DemoApplication startup file, right-click and run it to start the whole Spring Boot project, then set a breakpoint on the init method to debug.

In the code we use synchronized to ensure that only one thread can initialize the shared resource at a time, and we add a loaded flag to record whether loading has completed; if it has, other loading threads return directly.

Replacing synchronized with ReentrantLock gives the same behavior, but you must use ReentrantLock's API to acquire and release the lock. One thing to note with ReentrantLock: acquire the lock just before the try block and release it in the finally block, so that the lock is released correctly even if the try block throws an exception.
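A sketch of the same initialization using ReentrantLock (class and key names are illustrative; ConcurrentHashMap stands in for the Guava map used above). Note that lock() sits just before the try and unlock() in the finally:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

class ReentrantLockInitDemo {
    private static final Map<String, String> SHARED_MAP = new ConcurrentHashMap<>();
    private static final ReentrantLock LOCK = new ReentrantLock();
    private static volatile boolean loaded = false;

    public static void init() {
        if (loaded) {
            return;
        }
        LOCK.lock();           // acquire just before try
        try {
            if (loaded) {      // double-check inside the lock
                return;
            }
            // simulate fetching from the database and assembling the structure
            SHARED_MAP.put("config-key", "config-value");
            loaded = true;
        } finally {
            LOCK.unlock();     // always runs, even if loading throws
        }
    }

    public static boolean isLoaded() {
        return loaded;
    }
}
```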

Some students may ask: can't we just use ConcurrentHashMap directly, why add a lock? ConcurrentHashMap is indeed thread-safe, but it only guarantees the thread safety of operations on the Map itself. What we need under multithreading is that the whole action of querying the database and assembling the data is performed only once; locking the whole operation with synchronized guarantees exactly that.

2. CountDownLatch

2.1 Scenario

Scenario 1: Xiao Ming bought an item on Taobao, regretted it, and returned it (the item had not shipped yet, so this is just a refund). We call this a single-item refund; it takes 30 milliseconds in the backend system.

Scenario 2: On Double 11, Xiao Ming bought 40 items on Taobao in a single order (in reality multiple orders might be generated; for ease of description, say one). The next day he decided 30 of them were impulse buys, and those 30 items need to be returned together.

2.2 Implementation

At this point the backend only supports single-item refunds; there is no batch-refund function (refunding a batch of 30 items at once). To ship the feature quickly, classmate A's first plan was: call the single-item refund interface 30 times in a for loop. In the QA environment, refunding 30 items took 30 * 30 = 900 ms, and with other logic added, nearly 1 second, which is quite slow. Classmate A raised this issue and asked everyone to help optimize the overall latency.

Classmate B suggested using a thread pool: submit all the tasks to the pool, and if the machine's CPU has 4 cores, up to four single-item refunds can execute at the same time. Classmate A found this reasonable and planned to revise the scheme. To make it easier to understand, the two plans were laid out side by side and compared.

Classmate A then wrote the code for the revised plan, and a day later raised a question: after the 30 tasks are submitted to the thread pool, how does the main thread wait until all 30 are done? The main thread needs to collect the results of the 30 subtasks and return them to the front end.

You can think about it yourself before reading on; one of the locks covered in the previous chapters solves this problem.

CountDownLatch can. CountDownLatch lets the main thread wait until all subtasks have finished executing before continuing.

There is another key point: we need the execution result of each child thread, so we cannot use Runnable as the task, because Runnable has no return value; we need to choose Callable as the task.
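The Runnable-versus-Callable point can be shown in isolation (class name and the 10 ms sleep are mine): a Callable returns a value through the Future that submit() hands back, which a Runnable cannot do.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

class CallableDemo {
    static boolean runOnce() throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            // The lambda returns a value, so it is inferred as Callable<Boolean>;
            // a Runnable could not carry this result back to the caller.
            Future<Boolean> future = pool.submit(() -> {
                Thread.sleep(10); // simulate a short piece of work
                return true;
            });
            return future.get(1, TimeUnit.SECONDS);
        } finally {
            pool.shutdown();
        }
    }
}
```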

We've written a demo. First, let's take a look at the refund code for a single item:

```java
// A single-item refund takes 30 milliseconds; returns true on success, false on failure.
@Slf4j
public class RefundDemo {
    /**
     * Refund a single item by its ID.
     * @param itemId the item ID
     * @return true if the refund succeeded
     */
    public boolean refundByItem(Long itemId) {
        try {
            // sleep 30 ms to simulate the refund process of a single item
            Thread.sleep(30);
            log.info("refund success, itemId is {}", itemId);
            return true;
        } catch (Exception e) {
            log.error("refundByItemError, itemId is {}", itemId);
            return false;
        }
    }
}
```

Then let's take a look at the batch refunds for 30 items, the code is as follows:

```java
@Slf4j
public class BatchRefundDemo {
    // define the thread pool
    public static final ExecutorService EXECUTOR_SERVICE =
            new ThreadPoolExecutor(10, 10, 0L, TimeUnit.MILLISECONDS,
                    new LinkedBlockingQueue<>(20));

    @Test
    public void batchRefund() throws InterruptedException {
        // state is initialized to 30
        CountDownLatch countDownLatch = new CountDownLatch(30);
        RefundDemo refundDemo = new RefundDemo();

        // prepare 30 items
        List<Long> items = Lists.newArrayListWithCapacity(30);
        for (int i = 0; i < 30; i++) {
            items.add(Long.valueOf(i));
        }

        // start the batch refund
        List<Future<Boolean>> futures = Lists.newArrayListWithCapacity(30);
        for (Long item : items) {
            // use Callable, because we need the return value
            Future<Boolean> future = EXECUTOR_SERVICE.submit(new Callable<Boolean>() {
                @Override
                public Boolean call() throws Exception {
                    boolean result = refundDemo.refundByItem(item);
                    // every subtask calls countDown (state - 1), but only the
                    // last one actually wakes the main thread
                    countDownLatch.countDown();
                    return result;
                }
            });
            // collect the results of the batch refund
            futures.add(future);
        }
        log.info("30 items are being refunded");
        // block the main thread until all 30 items have been refunded
        countDownLatch.await();
        log.info("30 items have been refunded");

        // analyze all the results
        List<Boolean> result = futures.stream().map(fu -> {
            try {
                // get with a 1 ms timeout: all subtasks should already be done
                return fu.get(1, TimeUnit.MILLISECONDS);
            } catch (InterruptedException | ExecutionException | TimeoutException e) {
                e.printStackTrace();
            }
            return false;
        }).collect(Collectors.toList());
        // count the results
        long success = result.stream().filter(r -> r.equals(true)).count();
        log.info("execution result: {} succeeded, {} failed", success, result.size() - success);
    }
}
```

The code above is only a rough sketch of the idea; a real project would add optimizations such as request grouping and timeout interruption.

Let's look at the execution results. From the execution log we can clearly see that CountDownLatch did its job: the main thread waits until the refund results of all 30 items are in.

We then ran a rough benchmark (executing the code above many times and averaging): with this approach, refunding all 30 items takes about 200 milliseconds overall.

Refunding the items one by one in a for loop takes about 1 second, a performance difference of roughly 5x. The for-loop refund code is as follows:

```java
long begin1 = System.currentTimeMillis();
for (Long item : items) {
    refundDemo.refundByItem(item);
}
log.info("for-loop single-item refund took {} ms", System.currentTimeMillis() - begin1);
```

The huge improvement in performance is the result of combining a thread pool with the lock.
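As a usage note, the same wait can also be achieved without a hand-rolled CountDownLatch: ExecutorService.invokeAll blocks until every submitted Callable has completed. A minimal sketch under that alternative (class name, pool sizing, and the item count parameter are mine):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class InvokeAllRefundDemo {
    // invokeAll blocks until every task has completed, removing the need
    // for an explicit CountDownLatch.
    static long batchRefund(int items) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(10);
        try {
            List<Callable<Boolean>> tasks = new ArrayList<>();
            for (int i = 0; i < items; i++) {
                tasks.add(() -> {
                    Thread.sleep(30); // simulate one single-item refund
                    return true;
                });
            }
            // returns only when all tasks are done
            List<Future<Boolean>> futures = pool.invokeAll(tasks);
            long success = 0;
            for (Future<Boolean> f : futures) {
                try {
                    if (f.get()) {
                        success++;
                    }
                } catch (ExecutionException e) {
                    // a task that threw counts as a failed refund
                }
            }
            return success;
        } finally {
            pool.shutdown();
        }
    }
}
```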

That is all for "Case Analysis of Using Java Locks at Work". Thank you for reading; I hope you gained something from this article.
