2025-02-25 Update From: SLTechnology News & Howtos
Shulou (Shulou.com) 06/02 Report
Many beginners are unclear about how to use the chain-of-responsibility pattern in Spring. To help with that, this article explains it in detail; readers who need it can follow along and will hopefully get something out of it.
1. External control mode
With external control, the approach is relatively simple: each node of the chain only needs to focus on its own logic, and whether to continue to the next node after the current one completes is decided by external control logic. We will illustrate this with a filter implementation. In everyday work we often need to filter something against a series of conditions; take the design of a task service. Before a task is executed, it must pass filtering conditions such as a timeliness check, risk-control interception, a limit on the number of completed executions, and so on; only after all the filter conditions pass can the task be executed. So we can abstract a Filter interface, designed as follows:
```java
public interface Filter {

    /**
     * Used to filter each task node.
     */
    boolean filter(Task task);
}
```
The Filter.filter() method takes a single Task parameter, the task being checked for the current day, and returns a boolean that tells the external control logic whether the task should be filtered out. Implementations of the interface only need to be declared as Spring-managed beans:
```java
@Component
public class DurationFilter implements Filter {
    @Override
    public boolean filter(Task task) {
        System.out.println("timeliness test");
        return true;
    }
}

@Component
public class RiskFilter implements Filter {
    @Override
    public boolean filter(Task task) {
        System.out.println("risk control interception");
        return true;
    }
}

@Component
public class TimesFilter implements Filter {
    @Override
    public boolean filter(Task task) {
        System.out.println("frequency limit test");
        return true;
    }
}
```
Above, we declare three mock implementations of Filter that together decide whether a task should be filtered out for the current day. The structure is simple; the key point is that each must be declared as a Spring-managed bean. Here is the control logic:
```java
@Service
public class ApplicationService {

    @Autowired
    private List<Filter> filters;

    public void mockedClient() {
        Task task = new Task(); // the task is usually obtained through a database query
        for (Filter filter : filters) {
            if (!filter.filter(task)) {
                return; // filtered out
            }
        }
        // passed all filters; the task-execution logic follows here
    }
}
```
In the control logic above, the filters are obtained simply through Spring's autowiring. Because the injection target is a List&lt;Filter&gt;, whenever a new Filter implementation needs to join the chain, it only has to be declared as a bean managed by the Spring container.
The advantage of this approach is that control of the chain is simple: implementations only need to share a unified interface, which covers most control logic. But it is powerless when the chain must be adjusted dynamically, for example when the next node to run has to be decided at runtime after a node executes, or when the chain branches at some node. In such cases, control over advancing the chain needs to be handed to the nodes themselves.
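Stripped of Spring, the external-control mode above boils down to iterating a list of filters and stopping at the first rejection. The following self-contained sketch illustrates just that loop; the `Task` class, its `completedTimes` field, and the three lambda filters are made-up stand-ins, not the article's actual classes:

```java
import java.util.List;

public class ExternalChainDemo {

    // Placeholder for the real business object; completedTimes is a made-up field.
    public static class Task {
        public int completedTimes = 3;
    }

    // Same contract as the Filter interface in the text: return false to stop the chain.
    public interface Filter {
        boolean filter(Task task);
    }

    // The external control loop: the chain stops at the first filter that rejects.
    public static boolean runChain(List<Filter> filters, Task task) {
        for (Filter filter : filters) {
            if (!filter.filter(task)) {
                return false; // filtered out: remaining filters never run
            }
        }
        return true; // passed every filter; the task may be executed
    }

    public static void main(String[] args) {
        List<Filter> filters = List.of(
                task -> true,                    // stand-in for the timeliness check
                task -> task.completedTimes < 5, // stand-in for the frequency limit
                task -> true                     // stand-in for risk control
        );
        System.out.println(runChain(filters, new Task())); // prints: true

        Task overLimit = new Task();
        overLimit.completedTimes = 9;
        System.out.println(runChain(filters, overLimit)); // prints: false
    }
}
```

The point of the sketch is that the loop, not any filter, owns the decision to continue, which is exactly the limitation discussed above.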
2. Node control mode
In the node-controlled approach there are three main components: Handler, HandlerContext, and Pipeline. Handler is where the concrete business code is written; HandlerContext wraps a Handler and controls the call to the next node; Pipeline controls the overall process, such as task querying, task filtering, and task execution. The top-level flow is driven by the Pipeline, while each stage contains a chain of sub-steps that HandlerContext and Handler work through one by one. The overall structure of the responsibility chain is shown in the figure below:
[Figure: a Pipeline driving three stages (query task, filter task, execute task), each stage flowing through a chain of HandlerContext nodes that wrap Handlers]
As the figure shows, the whole process is abstracted by the Pipeline object and divided into three steps: querying the task, filtering the task, and executing the task. Within each step, a chain of calls is made. Note that each call to the next node of the chain is made from inside a specific Handler; in other words, whether the next node runs is controlled dynamically by the business implementer.
The first thing to explain about this pattern is the design of the Handler interface:
```java
public interface Handler {

    /**
     * Handles receiving the front-end request.
     */
    default void receiveTask(HandlerContext ctx, Request request) {
        ctx.fireTaskReceived(request);
    }

    /**
     * After the task has been queried, handles the task-filtering logic.
     */
    default void filterTask(HandlerContext ctx, Task task) {
        ctx.fireTaskFiltered(task);
    }

    /**
     * After filtering completes, handles the task-execution logic.
     */
    default void executeTask(HandlerContext ctx, Task task) {
        ctx.fireTaskExecuted(task);
    }

    /**
     * Called when one of the preceding methods throws, so each handler can deal
     * with its own exceptions without extra try/catch blocks.
     */
    default void exceptionCaught(HandlerContext ctx, Throwable e) {
        throw new RuntimeException(e);
    }

    /**
     * Guaranteed to run at the end of the whole flow; mainly for cleanup work.
     */
    default void afterCompletion(HandlerContext ctx) {
        ctx.fireAfterCompletion(ctx);
    }
}
```
The Handler interface is an abstraction over the concrete business logic. A few points about it:
Each stage of the pipeline in the earlier figure has a corresponding method in Handler. To handle a given stage, the user only needs to declare a bean that implements the stage method the business cares about, ignoring all other logic.
Each stage method has a default implementation, which simply passes the call down the chain.
The first parameter of every stage method is a HandlerContext, which is used for flow control, i.e. deciding whether the chain at the current stage is passed down. The chain is advanced by calling the corresponding ctx.fireXXX() method.
Every Handler also has exceptionCaught() and afterCompletion() methods, used respectively for exception handling and for cleanup after all calls complete. The exception handling here catches exceptions within the current Handler, while afterCompletion() is guaranteed to run after all steps, regardless of whether any of the preceding methods threw an exception.
What we want from Handler is that users only implement the interface and mark it as a Spring bean with an annotation, without caring about the assembly or flow control of the Pipeline. This keeps the convenience Spring provides while retaining the flexibility of the pipeline model.
In the stage methods above, note that each receives a HandlerContext that carries the chain-control information. Let's look at the source for that part:
```java
@Component
@Scope("prototype")
public class HandlerContext {

    HandlerContext prev;
    HandlerContext next;
    Handler handler;
    private Task task;

    public void fireTaskReceived(Request request) {
        invokeTaskReceived(next(), request);
    }

    /**
     * Handles the task-received event.
     */
    static void invokeTaskReceived(HandlerContext ctx, Request request) {
        if (ctx != null) {
            try {
                ctx.handler().receiveTask(ctx, request);
            } catch (Throwable e) {
                ctx.handler().exceptionCaught(ctx, e);
            }
        }
    }

    public void fireTaskFiltered(Task task) {
        invokeTaskFiltered(next(), task);
    }

    /**
     * Handles the task-filter event.
     */
    static void invokeTaskFiltered(HandlerContext ctx, Task task) {
        if (null != ctx) {
            try {
                ctx.handler().filterTask(ctx, task);
            } catch (Throwable e) {
                ctx.handler().exceptionCaught(ctx, e);
            }
        }
    }

    public void fireTaskExecuted(Task task) {
        invokeTaskExecuted(next(), task);
    }

    /**
     * Handles the task-execution event.
     */
    static void invokeTaskExecuted(HandlerContext ctx, Task task) {
        if (null != ctx) {
            try {
                ctx.handler().executeTask(ctx, task);
            } catch (Exception e) {
                ctx.handler().exceptionCaught(ctx, e);
            }
        }
    }

    public void fireAfterCompletion(HandlerContext ctx) {
        invokeAfterCompletion(next());
    }

    static void invokeAfterCompletion(HandlerContext ctx) {
        if (null != ctx) {
            ctx.handler().afterCompletion(ctx);
        }
    }

    private HandlerContext next() {
        return next;
    }

    private Handler handler() {
        return handler;
    }
}
```
A few points to note about HandlerContext:
The ctx.fireXXX() methods, which the Handler interface calls in its default implementations, delegate to the corresponding static invokeXXX() methods. Notice that the HandlerContext passed into each invokeXXX() call is obtained via next(); in other words, calling ctx.fireXXX() inside a Handler always invokes the corresponding stage method of the Handler after the current one. This is how the call chain is passed along.
As the previous point implies, to pass the chain down inside a Handler we only need to call the ctx.fireXXX() method. If, based on the business, the current stage is finished and the subsequent Handlers should not run, we simply do not call ctx.fireXXX().
HandlerContext also implements the static invokeXXX() methods, which are exposed to the Pipeline as the entry point of the chain at each stage.
Each invokeXXX() method wraps the current stage call in try...catch and, on failure, calls ctx.handler().exceptionCaught(), so exception handling is wired up inside HandlerContext.
Note the declaration of HandlerContext: it is annotated with @Component and @Scope("prototype"), meaning it is managed by the Spring container but as a prototype bean. Since each Handler is held by its own HandlerContext, the context must be declared as a prototype. Declared this way, HandlerContext also gains the capabilities of a Spring bean and can be extended as the business requires.
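The fire/invoke dispatch and the per-handler exception isolation described above can be demonstrated without Spring. In the sketch below, `Handler`, `Ctx`, and the handlers `a`, `b`, `c` are deliberately reduced demo versions of the article's classes (one stage only, no prototype beans); the mechanism, not the names, is the point:

```java
import java.util.ArrayList;
import java.util.List;

public class MiniChainDemo {

    public static final List<String> LOG = new ArrayList<>();

    static void say(String s) {
        LOG.add(s);
        System.out.println(s);
    }

    // Reduced Handler: filterTask forwards by default, exceptionCaught logs locally.
    public interface Handler {
        default void filterTask(Ctx ctx, String task) {
            ctx.fireTaskFiltered(task);
        }
        default void exceptionCaught(Ctx ctx, Throwable e) {
            say("handled locally: " + e.getMessage());
        }
    }

    // Reduced HandlerContext: fire delegates to invoke on the *next* node,
    // and invoke isolates each handler's failure via its own exceptionCaught.
    public static class Ctx {
        Ctx next;
        Handler handler;

        public void fireTaskFiltered(String task) {
            invokeTaskFiltered(next, task);
        }

        public static void invokeTaskFiltered(Ctx ctx, String task) {
            if (ctx != null) {
                try {
                    ctx.handler.filterTask(ctx, task);
                } catch (Throwable e) {
                    ctx.handler.exceptionCaught(ctx, e); // chain stops here
                }
            }
        }
    }

    public static Ctx link(Handler... handlers) {
        Ctx head = null, prev = null;
        for (Handler h : handlers) {
            Ctx c = new Ctx();
            c.handler = h;
            if (head == null) head = c; else prev.next = c;
            prev = c;
        }
        return head;
    }

    public static void main(String[] args) {
        Handler a = new Handler() {
            @Override public void filterTask(Ctx ctx, String task) {
                say("A saw " + task);
                ctx.fireTaskFiltered(task); // explicitly pass the chain on
            }
        };
        Handler b = new Handler() {
            @Override public void filterTask(Ctx ctx, String task) {
                throw new RuntimeException("boom"); // handled by b's own exceptionCaught
            }
        };
        Handler c = new Handler() {
            @Override public void filterTask(Ctx ctx, String task) {
                say("C is never reached");
            }
        };
        Ctx.invokeTaskFiltered(link(a, b, c), "t1");
        // LOG now holds: [A saw t1, handled locally: boom]
    }
}
```

Running it shows that b's failure is absorbed by b's own exceptionCaught and the chain never reaches c, which is exactly the isolation the try...catch in invokeXXX() provides.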
We have now covered the implementations of Handler and HandlerContext and the points to watch along the way. Next, let's see how the Pipeline that drives the process is implemented. Its interface is defined as follows:
```java
public interface Pipeline {

    Pipeline fireTaskReceived();

    Pipeline fireTaskFiltered();

    Pipeline fireTaskExecuted();

    Pipeline fireAfterCompletion();
}
```
The Pipeline interface defines a series of stage calls, which are the entry methods for each stage. Here is its implementation class:
```java
@Component("pipeline")
@Scope("prototype")
public class DefaultPipeline implements Pipeline, ApplicationContextAware, InitializingBean {

    // A default handler injected into the head and tail HandlerContexts; it does
    // nothing except pass calls down the chain.
    private static final Handler DEFAULT_HANDLER = new Handler() {};

    // ApplicationContext is injected because HandlerContext is a prototype bean,
    // so each instance must be obtained through ApplicationContext.getBean().
    private ApplicationContext context;

    // Head and tail nodes. They do no processing themselves; they only mark the
    // start and end of the chain, and all business nodes sit between them.
    private HandlerContext head;
    private HandlerContext tail;

    // The request object that encapsulates the business data driving the pipeline.
    private Request request;

    // The task object produced and consumed while the pipeline flows.
    private Task task;

    // The raw business data must be passed in through the constructor, since it is
    // what drives the whole pipeline; it is usually built from the caller's parameters.
    public DefaultPipeline(Request request) {
        this.request = request;
    }

    // Each stage is entered via HandlerContext.invokeXXX(head, ...), i.e. every
    // chain starts from the head node. In some scenarios a chain can also be
    // driven from the tail by passing in tail instead.
    @Override
    public Pipeline fireTaskReceived() {
        HandlerContext.invokeTaskReceived(head, request);
        return this;
    }

    @Override
    public Pipeline fireTaskFiltered() {
        HandlerContext.invokeTaskFiltered(head, task);
        return this;
    }

    @Override
    public Pipeline fireTaskExecuted() {
        HandlerContext.invokeTaskExecuted(head, task);
        return this;
    }

    @Override
    public Pipeline fireAfterCompletion() {
        HandlerContext.invokeAfterCompletion(head);
        return this;
    }

    // Adds a node to the end of the chain, just before tail. Readers can add
    // further methods for chain maintenance as needed.
    void addLast(Handler handler) {
        HandlerContext handlerContext = newContext(handler);
        tail.prev.next = handlerContext;
        handlerContext.prev = tail.prev;
        handlerContext.next = tail;
        tail.prev = handlerContext;
    }

    // Pipeline initialization via InitializingBean: create the head and tail
    // contexts and link them, so initially the chain contains only these two nodes.
    @Override
    public void afterPropertiesSet() throws Exception {
        head = newContext(DEFAULT_HANDLER);
        tail = newContext(DEFAULT_HANDLER);
        head.next = tail;
        tail.prev = head;
    }

    // Creates a HandlerContext wrapping the given Handler.
    private HandlerContext newContext(Handler handler) {
        HandlerContext ctx = context.getBean(HandlerContext.class);
        ctx.handler = handler;
        return ctx;
    }

    @Override
    public void setApplicationContext(ApplicationContext applicationContext) {
        this.context = applicationContext;
    }
}
```
A few points about the implementation of DefaultPipeline:
DefaultPipeline is annotated with @Component and @Scope("prototype"): the former declares it as a bean managed by the Spring container, while the latter makes DefaultPipeline a multi-instance (prototype) bean. The Pipeline here is clearly stateful: the nodes of the chain may be adjusted dynamically according to the business, and the Request and Task objects are tied to a specific business invocation, so it must be declared as a prototype.
In the example above, the Request object is passed in when the Pipeline is constructed, while the Task object is produced as the Pipeline flows; for instance, after the fireTaskReceived() chain completes, a Task is obtained from the external Request for the subsequent stages.
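The pointer bookkeeping in addLast() is ordinary doubly-linked-list insertion before the tail sentinel. The stripped-down, Spring-free sketch below makes the pointer moves concrete; the node payload is reduced to a name, and the handler names in main() are made-up examples:

```java
import java.util.ArrayList;
import java.util.List;

public class PipelineLinkDemo {

    static class Node {
        Node prev, next;
        final String name;
        Node(String name) { this.name = name; }
    }

    final Node head = new Node("HEAD");
    final Node tail = new Node("TAIL");

    public PipelineLinkDemo() {
        // initially the chain holds only the sentinel head and tail nodes
        head.next = tail;
        tail.prev = head;
    }

    // The same four pointer moves as DefaultPipeline.addLast: insert before tail.
    public void addLast(String name) {
        Node node = new Node(name);
        tail.prev.next = node;
        node.prev = tail.prev;
        node.next = tail;
        tail.prev = node;
    }

    // Walks the chain between the sentinels, in insertion order.
    public List<String> names() {
        List<String> out = new ArrayList<>();
        for (Node n = head.next; n != tail; n = n.next) {
            out.add(n.name);
        }
        return out;
    }

    public static void main(String[] args) {
        PipelineLinkDemo p = new PipelineLinkDemo();
        p.addLast("durationHandler");
        p.addLast("riskHandler");
        System.out.println(p.names()); // prints: [durationHandler, riskHandler]
    }
}
```

Because every insertion lands just before tail, handlers end up in the chain in the order they were added, which is why the registration order of Handler beans determines the call order.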
Now that Pipeline, HandlerContext, and Handler are all beans managed by Spring, the remaining problem is how to assemble the chain. The assembly is relatively simple and mainly has to solve two problems: first, business writers should only have to implement the Handler interface, without touching any chain-related logic, so we need to obtain all beans that implement Handler; second, each such bean must be wrapped in a HandlerContext and added to the Pipeline.
The first problem is easy to deal with, because all beans implementing a given interface can be obtained through the ApplicationContext; the second can be solved by declaring a class that implements the BeanPostProcessor interface. Here is the implementation:
```java
@Component
public class HandlerBeanProcessor implements BeanPostProcessor, ApplicationContextAware {

    private ApplicationContext context;

    // Called after each bean finishes initializing. Once a Pipeline has finished
    // initializing, every bean that implements the Handler interface is looked up
    // and appended to it via Pipeline.addLast().
    @Override
    public Object postProcessAfterInitialization(Object bean, String beanName) {
        if (bean instanceof DefaultPipeline) {
            DefaultPipeline pipeline = (DefaultPipeline) bean;
            Map<String, Handler> handlerMap = context.getBeansOfType(Handler.class);
            handlerMap.forEach((name, handler) -> pipeline.addLast(handler));
        }
        return bean;
    }

    @Override
    public void setApplicationContext(ApplicationContext applicationContext) {
        this.context = applicationContext;
    }
}
```
This completes the maintenance of the whole chain, and the chain's flow control is now essentially in place. One thing to note: HandlerBeanProcessor.postProcessAfterInitialization() runs after InitializingBean.afterPropertiesSet(), so by the time HandlerBeanProcessor processes a pipeline, that Pipeline has already been initialized. Let's see how an external client drives the chain:
```java
@Service
public class ApplicationService {

    @Autowired
    private ApplicationContext context;

    public void mockedClient() {
        Request request = new Request(); // the request usually comes from an external call
        Pipeline pipeline = newPipeline(request);
        try {
            pipeline.fireTaskReceived();
            pipeline.fireTaskFiltered();
            pipeline.fireTaskExecuted();
        } finally {
            pipeline.fireAfterCompletion();
        }
    }

    private Pipeline newPipeline(Request request) {
        return context.getBean(DefaultPipeline.class, request);
    }
}
```
Here we simulate a client call: first create a Pipeline object, then invoke the stage methods in turn, using try...finally to guarantee that Pipeline.fireAfterCompletion() is executed. With that, the construction of the chain-of-responsibility model is complete. As an example, here is the earlier timeliness filter reimplemented as a Handler:
```java
@Component
public class DurationHandler implements Handler {
    @Override
    public void filterTask(HandlerContext ctx, Task task) {
        System.out.println("timeliness test");
        ctx.fireTaskFiltered(task);
    }
}
```
A few points about this business implementation:
Each Handler must be declared as a Spring-managed bean with the @Component annotation, so that the HandlerBeanProcessor implemented earlier can add it to the Pipeline dynamically.
In each Handler, we implement the stage method the current business needs. The timeliness check here belongs to the "task filtering" stage, since a task may only execute after passing the check, so we implement Handler.filterTask(); task-execution logic would instead go in Handler.executeTask(). After the business logic, we decide whether the chain at the current stage should continue. Calling ctx.fireTaskFiltered(task) here hands off to the next node, because as we saw earlier, the HandlerContext.fireXXX() methods fetch the current node's successor and invoke it. If the business does not need the chain to continue, simply do not call ctx.fireTaskFiltered(task).
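The two guarantees described above — not calling ctx.fireXXX() stops the current stage, while the client's try...finally still runs the completion stage — can be seen together in one Spring-free sketch. All classes and handler names below are demo stand-ins for the article's real ones:

```java
import java.util.ArrayList;
import java.util.List;

public class EndToEndDemo {

    public static final List<String> LOG = new ArrayList<>();

    // Reduced Handler with two stages: filtering and completion, both forwarding by default.
    public interface Handler {
        default void filterTask(Ctx ctx, String task) { ctx.fireTaskFiltered(task); }
        default void afterCompletion(Ctx ctx) { ctx.fireAfterCompletion(); }
    }

    // Reduced HandlerContext: fire advances to the next node of the chain.
    public static class Ctx {
        Ctx next;
        Handler handler;
        public void fireTaskFiltered(String t) { if (next != null) next.handler.filterTask(next, t); }
        public void fireAfterCompletion() { if (next != null) next.handler.afterCompletion(next); }
    }

    public static Ctx link(Handler... handlers) {
        Ctx head = null, prev = null;
        for (Handler h : handlers) {
            Ctx c = new Ctx();
            c.handler = h;
            if (head == null) head = c; else prev.next = c;
            prev = c;
        }
        return head;
    }

    public static void main(String[] args) {
        Handler reject = new Handler() {
            @Override public void filterTask(Ctx ctx, String task) {
                LOG.add("rejected " + task); // no ctx.fireTaskFiltered: the filter stage stops here
            }
            @Override public void afterCompletion(Ctx ctx) {
                LOG.add("cleanup");
                ctx.fireAfterCompletion();
            }
        };
        Handler never = new Handler() {
            @Override public void filterTask(Ctx ctx, String task) { LOG.add("never reached"); }
        };
        Ctx head = link(reject, never);
        try {
            head.handler.filterTask(head, "t1");
        } finally {
            head.handler.afterCompletion(head); // still runs even though filtering stopped
        }
        System.out.println(LOG); // prints: [rejected t1, cleanup]
    }
}
```

The filter stage halts at the first handler because it never fires the chain onward, yet the completion stage still traverses from the head, mirroring the fireAfterCompletion() call in the client's finally block.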
© 2024 shulou.com SLNews company. All rights reserved.