How to Handle Blocking and Non-blocking Java Asynchronous Events

This article focuses on how to handle blocking and non-blocking processing of asynchronous events in Java. The methods introduced here are simple, fast, and practical; let's work through them step by step.
Preface
With multi-core systems now ubiquitous, concurrent programming is applied more widely than ever before. But concurrency is difficult to get right, and developers need new tools to help them. Many JVM-based languages are evolving into this kind of development tool, and Scala has been particularly active in this area. This series of articles introduces some of the newer approaches to concurrent programming for the Java and Scala languages.
Asynchronous event handling is critical in any concurrent application. The source of an event may be a separate computational task, an I/O operation, or an interaction with an external system. Whatever the source, the application code must track the event and coordinate the actions taken in response to it.
Java applications can take two basic approaches to asynchronous event handling: the application has a coordinating thread waiting for the event and then taking action, or the event can perform an operation directly upon completion (usually by executing the code provided by the application). A method that causes a thread to wait for an event is called a blocking method. A method that allows an event to perform an operation so that the thread does not have to explicitly wait for the event is called a non-blocking method.
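To make the distinction concrete before diving in, here is a minimal sketch (my own illustration, not from the original article) of the same event handled both ways with CompletableFuture:

import java.util.concurrent.CompletableFuture;

public class BlockingVsNonblocking {
    public static void main(String[] args) {
        // a future completed on another thread stands in for the event source
        CompletableFuture<String> event = CompletableFuture.supplyAsync(() -> "payload");

        // blocking: this thread parks until the event completes
        String value = event.join();
        System.out.println("waiting thread saw: " + value);

        // non-blocking: register code to run when the event completes;
        // here the event has already completed, so the callback runs at once,
        // but in general the registering thread continues without waiting
        event.thenAccept(v -> System.out.println("callback saw: " + v));
    }
}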
In computing, the terms blocking and non-blocking are used differently depending on context. For example, a non-blocking algorithm for a shared data structure does not require threads to wait for access to the data structure. In non-blocking I/O, an application thread can start an I/O operation and then go off and do something else while the operation executes asynchronously. In this article, non-blocking refers to completing the handling of an event without a waiting thread. The common idea in all these uses is that a blocking operation requires a thread to wait for a result, while a non-blocking operation does not.
Composing events
Waiting for an event to complete is simple: you have a thread wait for the event, and when the thread resumes running you know the event has completed. If your thread has other things to do in the meantime, it can do them first and then wait. The thread can even poll, interrupting its other activity to check whether the event has completed. But the basic principle is the same: when you need the result of the event, you park a thread to wait for it.
Blocking is relatively easy to get right, as long as you have a single main thread waiting for events to complete. When you use multiple threads that block waiting on one another, you can run into problems such as:
- Deadlock: two or more threads each control resources that the other threads need in order to continue, as in the two-lock sketch below.
- Starvation: some threads may be unable to proceed because other threads greedily consume shared resources.
- Livelock: threads keep trying to adjust to one another but end up making no progress.
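As a concrete illustration of the first problem, here is a minimal, self-contained sketch (my own addition, not from the original article) of the classic two-lock deadlock: each thread takes one lock and then blocks forever waiting for the other's.

public class DeadlockSketch {
    private static final Object lockA = new Object();
    private static final Object lockB = new Object();

    public static void main(String[] args) {
        // thread 1 takes lockA, then wants lockB
        new Thread(() -> {
            synchronized (lockA) {
                pause(); // give the other thread time to take lockB
                synchronized (lockB) { System.out.println("thread 1 done"); }
            }
        }).start();
        // thread 2 takes lockB, then wants lockA: neither can proceed
        new Thread(() -> {
            synchronized (lockB) {
                pause();
                synchronized (lockA) { System.out.println("thread 2 done"); }
            }
        }).start();
    }

    private static void pause() {
        try { Thread.sleep(100); } catch (InterruptedException e) { }
    }
}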
The non-blocking approach leaves much more room for creativity. Callbacks are a common technique for non-blocking event handling, and they are flexibility itself: you can execute whatever code you want when an event occurs. The downside is that when you handle many events with callbacks, your code gets messy. Callbacks are also notoriously hard to debug, because the flow of control does not match the order of the code in your application.
Java 8's CompletableFuture supports both blocking and non-blocking approaches to event handling, including regular callbacks. CompletableFuture also provides a variety of ways to compose and combine events, giving you the flexibility of callbacks along with clean, simple, readable code. In this article you'll see examples of both blocking and non-blocking handling of events represented as CompletableFutures.
Tasks and sequencing
An application often has to perform multiple processing steps for a single operation. For example, before returning a result to the user, a web application might need to:
1. Look up the user's information in a database.
2. Use the found information to perform a web service call and another database query.
3. Perform a database update based on the results of the previous step.
Figure 1 illustrates this type of structure.
Figure 1. Application task flow
Figure 1 breaks the processing down into four different tasks, connected by arrows that represent ordering dependencies. Task 1 can execute directly, Task 2 and Task 3 execute after Task 1 completes, and Task 4 executes after both Task 2 and Task 3 complete. This is the task structure I use to demonstrate asynchronous event handling in this article. Real applications, especially server applications with many moving parts, can be much more complex, but this simple example serves to demonstrate the principles involved.
Modeling asynchronous events
In real systems, the source of asynchronous events is usually a parallel computation or some form of I/O operation. It's easier to model such a system with simple time delays, though, and that's the approach used in this article. Listing 1 shows the basic timed-event code I use to generate events in the form of CompletableFutures.
Listing 1. Timed event code
import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.CompletableFuture;

public class TimedEventSupport {
    private static final Timer timer = new Timer();

    /**
     * Build a future to return the value after a delay.
     *
     * @param delay
     * @param value
     * @return future
     */
    public static <T> CompletableFuture<T> delayedSuccess(int delay, T value) {
        CompletableFuture<T> future = new CompletableFuture<T>();
        TimerTask task = new TimerTask() {
            public void run() {
                future.complete(value);
            }
        };
        timer.schedule(task, delay * 1000);
        return future;
    }

    /**
     * Build a future to return a throwable after a delay.
     *
     * @param delay
     * @param t
     * @return future
     */
    public static <T> CompletableFuture<T> delayedFailure(int delay, Throwable t) {
        CompletableFuture<T> future = new CompletableFuture<T>();
        TimerTask task = new TimerTask() {
            public void run() {
                future.completeExceptionally(t);
            }
        };
        timer.schedule(task, delay * 1000);
        return future;
    }
}
Why not use a lambda?
The TimerTask in Listing 1 is implemented as an anonymous inner class containing only a run() method. You might think a lambda could replace the inner class here. However, a lambda can only be used where an interface instance is expected, and TimerTask is defined as an abstract class. Unless a future extension of the lambda feature adds support for abstract classes (possible, but unlikely because of design issues), or a parallel interface is defined for cases such as TimerTask, you have to keep using a Java inner class to create the single-method implementation.
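If you want the brevity of a lambda anyway, one workaround (my own sketch, not part of the original code) is a small adapter method that wraps a Runnable lambda in a TimerTask:

import java.util.TimerTask;

public class TimerTasks {
    // adapts a Runnable (which lambdas can target) to the abstract TimerTask class
    public static TimerTask of(Runnable action) {
        return new TimerTask() {
            public void run() {
                action.run();
            }
        };
    }
}

With this helper, the scheduling call in Listing 1 could be written as timer.schedule(TimerTasks.of(() -> future.complete(value)), delay * 1000), confining the anonymous inner class to a single place.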
The code in Listing 1 uses a java.util.Timer to schedule a java.util.TimerTask for execution after a delay. Each TimerTask completes an associated future when it runs. delayedSuccess() schedules a task that completes a CompletableFuture<T> successfully and returns the future to the caller. delayedFailure() schedules a task that completes a CompletableFuture<T> exceptionally with the supplied Throwable, and likewise returns the future to the caller.
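As a quick usage sketch (my own example, assuming the TimedEventSupport class from Listing 1 is on the classpath):

// complete successfully with "done" after 1 second
CompletableFuture<String> ok = TimedEventSupport.delayedSuccess(1, "done");
ok.thenAccept(v -> System.out.println("completed with: " + v));

// complete exceptionally after 2 seconds
CompletableFuture<String> bad =
    TimedEventSupport.delayedFailure(2, new IllegalStateException("failed"));
bad.whenComplete((v, t) -> System.out.println("completed with throwable: " + t));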
Listing 2 shows how to use the Listing 1 code to create events in the form of CompletableFutures matching the four tasks in Figure 1. (This code comes from the EventComposition class in the sample code.)
Listing 2. Events for the sample task
// task definitions
private static CompletableFuture<Integer> task1(int input) {
    return TimedEventSupport.delayedSuccess(1, input + 1);
}
private static CompletableFuture<Integer> task2(int input) {
    return TimedEventSupport.delayedSuccess(2, input + 2);
}
private static CompletableFuture<Integer> task3(int input) {
    return TimedEventSupport.delayedSuccess(3, input + 3);
}
private static CompletableFuture<Integer> task4(int input) {
    return TimedEventSupport.delayedSuccess(1, input + 4);
}
Each of the four task methods in Listing 2 uses a particular delay before the task completes: 1 second for task1, 2 seconds for task2, 3 seconds for task3, and 1 second again for task4. Each also takes an input value and produces, as the future's (eventual) result, the input plus the task number. These methods all use the success form of the future; we'll look at examples using the failure form later.
The tasks are meant to run in the order shown in Figure 1, with each task receiving as input the result value of the preceding task (or, for task4, the sum of the results of the preceding two tasks). If the middle two tasks execute concurrently, the total execution time is about 5 seconds (1 second + max(2 seconds, 3 seconds) + 1 second).
If the input to task1 is 1, its result is 2. Passing that result to task2 and task3 yields results of 4 and 5. Passing the sum of those two results (9) as input to task4 yields a final result of 13.
Blocking waits
Now that the stage is set, it's time to see some action. The easiest way to coordinate the execution of the four tasks is with blocking waits: the main thread waits for each task to complete. Listing 3 (again from the EventComposition class in the sample code) shows this approach.
Listing 3. Blocking waits for task execution
private static CompletableFuture<Integer> runBlocking() {
    Integer i1 = task1(1).join();
    CompletableFuture<Integer> future2 = task2(i1);
    CompletableFuture<Integer> future3 = task3(i1);
    Integer result = task4(future2.join() + future3.join()).join();
    return CompletableFuture.completedFuture(result);
}
Listing 3 uses the CompletableFuture join() method to do the blocking waits. join() waits for the task to complete, then returns the result value if the task completed successfully, or throws an unchecked exception if it failed or was cancelled. The code first waits for the result of task1, then starts task2 and task3 simultaneously and waits in turn on the futures they return, and finally waits for the result of task4. runBlocking() returns a CompletableFuture for consistency with the non-blocking form I'll show next, but in this case the future is actually already complete before the method returns.
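A side note (my own addition): join() differs from the closely related get() mainly in how failures are reported, which is why the later exception-handling listing catches the unchecked CompletionException:

CompletableFuture<Integer> task = task1(1);

// join() wraps any failure in an unchecked CompletionException
Integer viaJoin = task.join();

// get() is interruptible and reports failures with checked exceptions
try {
    Integer viaGet = task.get();
} catch (InterruptedException | ExecutionException e) {
    // handle interruption, or the task's failure via e.getCause()
}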
Composing and combining futures
Listing 4 (again from the EventComposition class in the sample code) shows how to wire the futures together so that the tasks execute in the correct order and with the correct dependencies, without blocking.
Listing 4. Non-blocking composition and combination
private static CompletableFuture<Integer> runNonblocking() {
    return task1(1)
        .thenCompose(i1 -> task2(i1)
            .thenCombine(task3(i1), (i2, i3) -> i2 + i3))
        .thenCompose(i4 -> task4(i4));
}
The code in Listing 4 essentially constructs an execution plan specifying how the different tasks execute and how they relate to one another. The code is elegant and concise, but it can be hard to follow if you're not familiar with the CompletableFuture methods. Listing 5 refactors the same code into a more digestible form by separating the task2-and-task3 portion into a new method, runTask2and3().
Listing 5. Non-blocking composition and combination after refactoring
private static CompletableFuture<Integer> runTask2and3(Integer i1) {
    CompletableFuture<Integer> task2 = task2(i1);
    CompletableFuture<Integer> task3 = task3(i1);
    BiFunction<Integer, Integer, Integer> sum = (a, b) -> a + b;
    return task2.thenCombine(task3, sum);
}

private static CompletableFuture<Integer> runNonblockingAlt() {
    CompletableFuture<Integer> task1 = task1(1);
    CompletableFuture<Integer> comp123 = task1.thenCompose(EventComposition::runTask2and3);
    return comp123.thenCompose(EventComposition::task4);
}
In Listing 5, the runTask2and3() method represents the middle portion of the task flow, where task2 and task3 execute concurrently and their result values are then combined. This sequence is coded with the thenCombine() method on one future, which takes another future as its first parameter and, as its second, an instance of a binary function whose input types match the result types of the two futures. thenCombine() returns a third future representing the value of the function applied to the results of the first two. Here the two futures are task2 and task3, and the function sums their result values.
The runNonblockingAlt() method invokes the thenCompose() method on a future, twice. The argument to thenCompose() is a function instance that takes the value type of the original future as input and returns another future as output. The result of thenCompose() is a third future, with the same result type as the future returned by the function; it serves as a placeholder for the future that the function will eventually return once the original future completes.
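To see why thenCompose() rather than thenApply() is the right tool when the function itself returns a future, compare the result types (my own illustration):

CompletableFuture<Integer> t1 = task1(1);

// thenApply wraps whatever the function returns, so the future nests
CompletableFuture<CompletableFuture<Integer>> nested = t1.thenApply(i -> task2(i));

// thenCompose flattens the nesting, exposing the inner future's value directly
CompletableFuture<Integer> flat = t1.thenCompose(i -> task2(i));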
The call to task1.thenCompose() returns a future representing the result of applying the runTask2and3() function to the result of task1; it is saved as comp123. The call to comp123.thenCompose() returns a future representing the result of applying the task4() function to the result of the first thenCompose(), which is the overall result of all the tasks.
Trying the examples
The sample code includes a main() method that runs each version of the event code in turn and shows that the completion time (about 5 seconds) and the result (13) are correct. Listing 6 shows the result of running this main() method from a console.
Listing 6. Run the main () method
dennis@linux-guk3:~/devworks/scala3/code/bin> java com.sosnoski.concur.article3.EventComposition
Starting runBlocking
runBlocking returned 13 in 5008 ms.
Starting runNonblocking
runNonblocking returned 13 in 5002 ms.
Starting runNonblockingAlt
runNonblockingAlt returned 13 in 5001 ms.
When the road gets rough
So far you've seen code that coordinates events in the form of futures that always complete successfully. In real applications you can't count on things going so smoothly: problems occur while processing tasks, and in Java terms those problems are normally represented by a Throwable.
It's easy to change the task definitions of Listing 2 to use the delayedFailure() method in place of delayedSuccess(), as shown here for task4:
private static CompletableFuture<Integer> task4(int input) {
    return TimedEventSupport.delayedFailure(1, new IllegalArgumentException("This won't work!"));
}
If you run the Listing 3 code with only task4 modified to complete exceptionally, the join() call on task4 throws a CompletionException whose cause is the expected IllegalArgumentException. If it isn't caught in the runBlocking() method, the exception propagates up the call chain and, if still uncaught, ultimately terminates the executing thread. Fortunately, the code is easy to modify so that an exception thrown by the completion of any task is passed to the caller through the returned future. Listing 7 shows this change.
Listing 7. Blocking waits with exceptions
private static CompletableFuture<Integer> runBlocking() {
    try {
        Integer i1 = task1(1).join();
        CompletableFuture<Integer> future2 = task2(i1);
        CompletableFuture<Integer> future3 = task3(i1);
        Integer result = task4(future2.join() + future3.join()).join();
        return CompletableFuture.completedFuture(result);
    } catch (CompletionException e) {
        CompletableFuture<Integer> result = new CompletableFuture<>();
        result.completeExceptionally(e.getCause());
        return result;
    }
}
Listing 7 is simple and easy to follow: the original code is wrapped in a try/catch, and the catch block completes the returned future exceptionally with the cause of the failure. This adds a little complexity, but it should still be easy for any Java developer to understand.
The non-blocking code of Listing 4 doesn't even need a try/catch: the CompletableFuture composing and combining operations automatically propagate exceptions for you, so the dependent futures complete exceptionally in turn.
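If you do want to react to a failure without blocking, CompletableFuture also offers methods such as exceptionally() and handle(). A brief sketch (my own addition) that substitutes a fallback value:

runNonblocking()
    // called only if an upstream task completed exceptionally
    .exceptionally(t -> {
        System.err.println("task failed: " + t);
        return -1; // fallback result
    })
    .thenAccept(result -> System.out.println("result: " + result));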
To block or not to block
You've now seen both blocking and non-blocking handling of events represented as CompletableFutures. At least for the basic task flow modeled in this article, both approaches are fairly simple; for more complex task flows, the code would grow correspondingly more complex.
In the blocking case, the added complexity isn't much of a problem, because you're still just waiting for events to complete. But if you perform other kinds of synchronization between the threads, you can run into starvation or even deadlock.
In the non-blocking case, the code executed when an event completes is harder to debug. When many types of events are in flight and interacting, it becomes difficult to track which event triggered which execution. This situation can easily turn into the proverbial callback nightmare, whether you use conventional callbacks or CompletableFuture composing and combining operations.
All in all, blocking code usually has the advantage of simplicity. So why would anyone want to use a non-blocking approach? The following sections give some important reasons.
Cost of switching
When a thread is blocked, the processor core that previously executed the thread moves to execute another thread. The execution state of the previously executed thread must be saved to memory and the state of the new thread loaded. This operation of switching the core from running one thread to running another thread is called context switching.
In addition to the direct context switching performance cost, new threads typically use different data from the previous thread. Memory access is much slower than the processor clock, so modern systems use multi-tier caching between the processor core and the main memory. Although much faster than the main memory, the cache capacity is also much smaller (in general, the faster the cache, the smaller the capacity), so you can only keep a small portion of the total memory in the cache at any time.
When a thread switch occurs and a core starts executing a new thread, the memory data needed by the new thread may not be in the cache, so the core must wait for the data to be loaded from the main memory.
The combination of context-switch cost and memory-access latency adds up to a significant direct performance cost. Figure 2 shows the thread-switching overhead on my four-core AMD system running Oracle's Java 8 for 64-bit Linux.
This test uses a variable number of threads, from 1 to 4,096 in powers of two, and a variable amount of per-thread memory, from 0 to 64KB. The threads execute in turn, using CompletableFutures to trigger execution. Each time a thread runs, it first performs a simple calculation over its own block of data, to show the overhead of reloading that data into the cache, and then increments a shared static variable. It then creates a new CompletableFuture to trigger its own next execution, starts the next thread in the sequence by completing the CompletableFuture that thread is waiting on, and finally, if it is to run again, waits for the newly created CompletableFuture to complete.
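The article doesn't reproduce the benchmark source, but a stripped-down two-thread sketch of the same ping-pong idea (my own reconstruction, not the actual test code) looks like this:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class SwitchTimer {
    public static void main(String[] args) throws Exception {
        final int rounds = 100_000;
        // steps[i] is completed by one thread and awaited by the other
        List<CompletableFuture<Void>> steps = new ArrayList<>(rounds * 2);
        for (int i = 0; i < rounds * 2; i++) {
            steps.add(new CompletableFuture<>());
        }
        Thread other = new Thread(() -> {
            for (int i = 0; i < rounds * 2; i += 2) {
                steps.get(i).join();             // wait for the main thread's signal
                steps.get(i + 1).complete(null); // wake the main thread
            }
        });
        other.start();
        long start = System.nanoTime();
        for (int i = 0; i < rounds * 2; i += 2) {
            steps.get(i).complete(null); // wake the other thread
            steps.get(i + 1).join();     // wait for its reply
        }
        other.join();
        System.out.println("approx. "
            + (System.nanoTime() - start) / (rounds * 2L) + " ns per switch");
    }
}

Each complete()/join() pair forces one thread to park and the other to resume, so the elapsed time divided by the number of handoffs approximates the cost of a single switch.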
Figure 2. Thread switching cost
You can see the effect of both the number of threads and the per-thread memory in the Figure 2 chart. The thread count matters most up to 4 threads; as long as the per-thread data is small enough, two threads run almost as fast as one, and beyond 4 threads the incremental impact on performance is relatively small. The larger the per-thread memory, the sooner the two levels of cache overflow, raising the switching cost.
The time value shown in figure 2 comes from my somewhat outdated main system. The corresponding time on your system will be different and may be much smaller. However, the shape of the curve should be roughly the same.
Figure 2 shows thread-switch overhead in microseconds, so even a switch costing tens of thousands of processor clocks is not large in absolute terms. At 12.5 microseconds per switch (the yellow line: 16KB of per-thread data with a moderate number of threads), the system can perform 80,000 thread switches per second. That is far more switching than you would see in any well-written single-user application, and more than in many server applications. But for high-performance server applications handling thousands of events per second, blocking overhead can become a major performance factor. For such applications, it's important to minimize thread switching by using non-blocking code wherever possible.
It's also important to recognize that these timings come from a best-case scenario. When I ran the thread-switching test, there was enough CPU activity to keep all the cores running at full speed (at least on my system). In real applications the processing load is likely to be burstier. During periods of low activity, modern processors move some cores into sleep states to reduce power consumption and heat. The only drawback is that waking a core as demand rises takes time: the transition from a deep sleep state to full speed can take on the order of milliseconds, rather than the microseconds seen in these thread-switch timings.
Reactive applications
For many applications, another reason not to block particular threads is that those threads handle events that demand a timely response. The classic example is the UI thread. If code running in the UI thread blocks to wait for an asynchronous event to complete, it delays the processing of user input events. Nobody likes waiting for an application to respond to their typing, clicks, or touches, so blocking in the UI thread tends to show up quickly in bug reports from users.
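For instance (an illustrative Swing sketch of my own, not from the article), slow work can be moved off the UI thread with supplyAsync() and the result delivered back through a callback:

import java.util.concurrent.CompletableFuture;
import javax.swing.JLabel;
import javax.swing.SwingUtilities;

public class UiExample {
    private final JLabel status = new JLabel("working...");

    void loadData() {
        CompletableFuture
            .supplyAsync(() -> slowQuery()) // runs off the UI thread
            .thenAccept(text ->
                // hop back onto the UI thread to update the component
                SwingUtilities.invokeLater(() -> status.setText(text)));
    }

    private String slowQuery() {
        // placeholder for a blocking call (database, web service, ...)
        return "done";
    }
}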
Behind the UI-thread concept lies a more general principle: many types of applications, even non-GUI applications, must respond to events quickly, and keeping event response times short is often essential. For these types of applications, blocking waits are not an acceptable option.
The name reactive programming denotes the programming style used for responsive, scalable applications. The core principle of reactive programming is that applications should be able to:
- React to events: the application should be event-driven, with loosely coupled components at every level linked by asynchronous communication.
- React to load: the application should be scalable, so it can easily be upgraded to handle increased demand.
- React to failure: the application should be resilient, localizing the impact of any failure and recovering quickly.
- React to users: the application should respond quickly to users, even under load and in the presence of failures.
Applications that use blocking event handling cannot live up to these principles. Threads are a limited resource, so consuming them in blocking waits limits scalability, and blocked threads cannot respond to events immediately, which increases latency (application response time). Non-blocking applications can respond to events more quickly, cutting thread-switching overhead and improving throughput.
Reactive programming involves much more than just non-blocking code, though. It means focusing on the data flows through your application and implementing them as asynchronous interactions, without overwhelming receivers or stalling senders. That focus on data flow helps you avoid many of the complexities of conventional concurrent programming.
At this point, I believe you have a deeper understanding of how to handle blocking and non-blocking Java asynchronous events. Why not try it out in practice? For more related content, check the relevant channels on this site, follow us, and keep learning!