How to understand Tao and performance tuning in Tomcat with High concurrency

2025-01-18 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/02 Report--

This article introduces how to understand the "Tao" (the design philosophy) behind Tomcat's high concurrency and how to approach its performance tuning. Many people have doubts about these topics in daily work, so the editor has sorted the material into simple, easy-to-follow steps. I hope it helps answer those doubts — please follow along and study!

Core preparation for high concurrency disassembly

This time we dissect Tomcat again, focusing on its high-concurrency design and performance tuning, so that you gain a higher-level understanding of the whole architecture. The design of each component shows Java object-oriented and interface-oriented thinking done well: how to encapsulate what changes and what stays the same, how to abstract components from actual requirements, how to design classes with a single responsibility, how to achieve high cohesion and low coupling among related functions, and how to apply design patterns — all worth learning from.

This time it mainly involves the I/O model, as well as the basics of thread pools.

Before diving in, I hope you have accumulated some of the following background; much of it has been covered in earlier articles, so you can go back and review. If you master these knowledge points before dissecting Tomcat, you will get twice the result with half the effort; otherwise it is easy to lose your way.

Let's look at how Tomcat handles concurrent connections and tasks. Performance optimization means each component plays its proper role: using the least memory and executing as fast as possible is the goal.

Design patterns

Template method pattern: abstracts an algorithm's flow, encapsulating the changing and unchanging parts of the process in an abstract class and deferring the changing points to subclass implementations, achieving code reuse and the open-closed principle.
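To make the idea concrete, here is a minimal, self-contained sketch in the spirit of Tomcat's LifecycleBase (class names are illustrative, not Tomcat's real API): the abstract class fixes the flow, the subclass supplies only the changing step.

```java
// Template method sketch: the invariant flow lives in the abstract class,
// the variable step is deferred to a subclass. Illustrative names only.
abstract class LifecycleTemplate {
    // The fixed algorithm: pre/post steps are reused by every component.
    public final String init() {
        return "BEFORE_INIT " + initInternal() + " AFTER_INIT";
    }

    // The changing point, implemented by each concrete component.
    protected abstract String initInternal();
}

class ServerComponent extends LifecycleTemplate {
    @Override
    protected String initInternal() {
        return "server-initialized";
    }
}
```

Calling new ServerComponent().init() runs the fixed pre and post steps around the subclass's own logic — this is how code reuse and the open-closed principle are achieved at once.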

Observer pattern: for scenarios where different components must respond differently to the same event; it decouples the event publisher from its listeners and flexibly notifies downstream.
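A minimal observer-pattern sketch in the same spirit as Tomcat's lifecycle listeners (illustrative names, not Tomcat's real classes): the publisher fires an event and every registered listener reacts on its own, so the two sides stay decoupled.

```java
import java.util.ArrayList;
import java.util.List;

public class LifecycleEventDemo {
    interface LifecycleListener {
        void lifecycleEvent(String type);
    }

    static class Component {
        private final List<LifecycleListener> listeners = new ArrayList<>();

        void addListener(LifecycleListener listener) {
            listeners.add(listener);
        }

        // The publisher only fires the event; it knows nothing about
        // what each listener does with it.
        void fireEvent(String type) {
            for (LifecycleListener listener : listeners) {
                listener.lifecycleEvent(type);
            }
        }
    }

    public static void main(String[] args) {
        Component component = new Component();
        component.addListener(type -> System.out.println("logger saw " + type));
        component.addListener(type -> System.out.println("deployer saw " + type));
        component.fireEvent("start");
    }
}
```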

Chain of responsibility pattern: connects objects into a chain along which requests are passed. The Valve in Tomcat is an application of this pattern.
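The following is a minimal chain-of-responsibility sketch modeled loosely on Tomcat's Pipeline and Valve (names are illustrative, not Tomcat's real classes): each valve does its work and forwards the request to the next, and a basic valve ends the chain.

```java
public class ValveChainDemo {
    interface Valve {
        void invoke(StringBuilder request);
    }

    // A valve that records its name, then forwards along the chain.
    static class NamedValve implements Valve {
        private final String name;
        private final Valve next;

        NamedValve(String name, Valve next) {
            this.name = name;
            this.next = next;
        }

        @Override
        public void invoke(StringBuilder request) {
            request.append(name).append(">");
            if (next != null) {
                next.invoke(request);   // pass the request down the pipeline
            }
        }
    }

    static String process() {
        // first -> auth -> basic, like Pipeline's getFirst() ... setBasic()
        Valve basic = new NamedValve("basic", null);
        Valve first = new NamedValve("first", new NamedValve("auth", basic));
        StringBuilder request = new StringBuilder();
        first.invoke(request);
        return request.toString();
    }

    public static void main(String[] args) {
        System.out.println(process());
    }
}
```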

For more design patterns, check out the earlier design-pattern articles from "Code Byte".

I/O model

To understand how Tomcat achieves highly concurrent connection handling, you need the concepts of synchronous blocking, synchronous non-blocking, I/O multiplexing, and asynchronous non-blocking I/O, as well as how the Java NIO package applies them. This article focuses on how Tomcat uses these I/O models to achieve high-concurrency connections; by the end, I believe you will have a deeper understanding of the I/O model.

Java concurrent programming

To achieve high concurrency, besides the elegant overall design of each component, sensible design patterns, and proper use of I/O, we also need a threading model and efficient concurrent-programming skills. Under high concurrency it is inevitable that multiple threads access shared variables, which requires locking, so we must know how to reduce lock contention effectively. As programmers we should consciously avoid locks where possible — for example by using atomic (CAS) classes or concurrent collections instead. When a lock is truly unavoidable, minimize its scope and granularity.

For concurrency basics, "Code Byte" has also written a concurrency series covering how concurrency is implemented, memory visibility, the JMM memory model, read-write locks, and other topics; interested readers can find it in the history articles.

Overall architecture of Tomcat

Looking back at Tomcat's overall architecture: the connector handles TCP/IP connections, and the container acts as the Servlet container that handles the actual business requests. Abstracting the external (connector) and internal (container) into two components makes the design extensible.

A Tomcat instance has one Service by default, and a Service can contain multiple connectors. A connector consists of two main components, ProtocolHandler and Adapter, which together deliver the connector's core function.

ProtocolHandler is mainly composed of an Acceptor and a SocketProcessor, which read the Socket at the TCP/IP layer; the appropriate Processor then parses the bytes according to the application-layer protocol (HTTP or AJP) into a Tomcat Request and Response, and the Adapter converts them into the standard ServletRequest and ServletResponse. The request is then passed to the Container via getAdapter().service(request, response).

The adapter.service() implementation in org.apache.catalina.connector.CoyoteAdapter forwards the request to the container:

// Calling the container
connector.getService().getContainer().getPipeline().getFirst().invoke(request, response);

This call triggers the chain of responsibility formed by each container's Pipeline, which passes the request step by step into the containers. Every container has a Pipeline that starts at the First valve and ends at the Basic valve; the Basic valve hands the request to the child container held inside, and eventually the request reaches the Servlet. This is a classic application of the chain of responsibility pattern: the Pipeline forms the request chain, and each link in the chain is a Valve. "Code Byte" explained this in detail in the earlier Tomcat architecture article. As the figure below shows, the important components of Tomcat's architecture are clearly visible; keep this overall picture firmly in mind and master the big picture first, so you can better appreciate the beauty of the details.

Startup process: what happens when the startup.sh script runs

Tomcat startup process

Tomcat is a Java program, so the startup.sh script starts a JVM to run Tomcat's startup class, Bootstrap.

Bootstrap mainly initializes Tomcat's custom class loaders and instantiates Catalina; hot loading and hot deployment depend on it.

Catalina: parses server.xml, creates the Server component, and calls the Server.start() method.

Server: manages the Service components, calling the start() method of each Service.

Service: its main responsibility is to manage the connectors and the top-level container Engine, calling the start() methods of the Connector and the Engine respectively.

The Engine container associates the containers through parent-child relationships using the composite pattern, and each Container inherits Lifecycle to realize the initialization and startup of every container. Lifecycle defines init(), start(), and stop() to control the life cycle of the whole container component tree, enabling one-click start and stop.

Here we see interface-oriented, single-responsibility design: Container uses the composite pattern to manage child containers, and the abstract class LifecycleBase implements Lifecycle to manage each container's life cycle uniformly. During initialization and startup, LifecycleBase applies the template method pattern to separate what changes from what does not, deferring component-specific initialization to concrete subclasses, and it uses the observer pattern to publish start events, decoupling publisher from listeners.
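The composite "one-click start" idea can be sketched in a few lines (a simplified model, not Tomcat's actual ContainerBase): starting the parent recursively starts the whole container subtree.

```java
import java.util.ArrayList;
import java.util.List;

// Composite-pattern sketch: a parent container starts itself and then every
// child, so one start() call on the root brings up the whole tree.
public class CompositeStartDemo {
    static class Container {
        final String name;
        final List<Container> children = new ArrayList<>();

        Container(String name) {
            this.name = name;
        }

        Container addChild(Container child) {
            children.add(child);
            return this;
        }

        // Starts itself, then every child, recursively.
        void start(List<String> started) {
            started.add(name);
            for (Container child : children) {
                child.start(started);
            }
        }
    }

    static List<String> startAll() {
        Container engine = new Container("Engine");
        Container host = new Container("Host");
        host.addChild(new Container("Context"));
        engine.addChild(host);
        List<String> started = new ArrayList<>();
        engine.start(started);
        return started;
    }

    public static void main(String[] args) {
        System.out.println(startAll());
    }
}
```

Real Tomcat adds a thread pool and lifecycle events on top, but the recursive parent-to-child start order is the same idea.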

The concrete flow of init and start is shown in the swim-lane diagrams below. These are notes I took while debugging the source; don't be put off by how long note-taking takes. Follow the debugger slowly and I believe you will understand much more deeply.

Init process

Start process

If you debug along the main-line components following my two articles, and then read the source with the swim-lane diagrams, I believe the effort will pay off double. While reading the source, do not dive into every detail; be sure to abstract each component and understand its responsibilities first. Only after understanding the responsibilities and design philosophy of each component should you drill into its implementation details — don't try to understand one specific leaf before you know the forest.

I have marked each core class in the architecture and swim-lane diagrams. Next, "Code Byte" shares some experience on how to read source code efficiently while keeping your interest in learning.

How to read the source code correctly

1. Before reading the source code, you need to have a certain technical reserve.

For example, the common design patterns must be mastered, especially: template method, strategy, singleton, factory, observer, dynamic proxy, adapter, chain of responsibility, and decorator. You can read the historical articles on design patterns in "Code Byte" to build a good foundation.

2. You must be able to use the framework or class library and be familiar with its various usages.

The devil is in the details. If you don't know certain usages at all, you may be able to see what the code does, but not why it is written that way.

3. First, look for books and materials to understand the overall design of the software.

Sort out the main core architecture from a bird's-eye view — forest first, then leaves. What modules are there? How are the modules related?

You may not understand everything at once, but build an overall concept first; like a map, it keeps you from getting lost.

When reading the source, check where you are on that map from time to time. For example, "Code Byte" has already sorted out the Tomcat architecture for you; following along with the debugger makes you twice as effective.

4. Build the system and run the source code!

Debugging is a very important tool; you cannot figure out a system just by reading it without running it. Make good use of the call stack to observe the context of the calling process.

5. Take notes

A very important job is to take notes (a good memory is no match for a written record!): draw the system's class diagrams (don't rely on what the IDE generates for you) and record the main function calls for later review.

Documentation is extremely important because the code is too complex and the human brain too limited to remember every detail. Notes help you retain the key points so you can recall them and move on quickly.

Otherwise, you may forget by tomorrow what you saw today. So remember to bookmark this, download the source, and debug it over and over.

Wrong way

Getting caught up in the details without looking at the big picture: if I stare at the leaves before figuring out what the forest looks like, I can't see the whole picture or the overall design. So when you read source code, don't dive into the details at the start; look first at the overall architecture and the relationships between modules.

Studying how it is designed before learning how to use it: frameworks almost invariably use design patterns, so we should at least understand the common ones first, even by rote if necessary. When learning a technology, I recommend reading the official documentation to see the modules and overall design, then downloading and running the examples, and only then reading the source.

Delving into every detail as you read: when looking at a module's source, consciously avoid the details. What matters is learning the design ideas rather than the logic of one specific method — unless you plan to do secondary development on the source, and even that requires understanding the architecture before going into the details.

Component design: single responsibility and interface-oriented thinking

When we receive a functional requirement, the most important thing is abstract design: decompose the main core components of the function, then identify what changes and what stays constant in the requirement; cohere similar functions, decouple different ones, stay open to external extension, and close off internal change. Implementing a requirement demands reasonable abstraction into distinct components, instead of cramming all the functionality into one class or even one method — with such code a change anywhere affects everything, and it cannot be extended and is hard to maintain and read.

With these questions in mind, let's analyze how Tomcat designs its components to handle connections and manage containers.

Let's look at how Tomcat starts, and how it accepts requests and forwards them to our Servlet.

Catalina

Catalina's main task is to create the Server — not simply instantiate it, but parse the server.xml file and create each component that the file configures, then call the Server's init() and start() methods; the startup journey begins here. Exceptional cases must also be considered: shutting down Tomcat requires gracefully releasing the resources created during startup, so Tomcat registers a "shutdown hook" with the JVM. I have commented the source below and omitted some irrelevant code. Tomcat is also shut down via await(), which listens for the stop command.

/* Start a new server instance. */
public void start() {
    // If server is null, parse server.xml to create it
    if (getServer() == null) {
        load();
    }
    // If creation failed, log the error and abort startup
    if (getServer() == null) {
        log.fatal("Cannot start server. Server instance is not configured.");
        return;
    }
    // Start the server
    try {
        getServer().start();
    } catch (LifecycleException e) {
        log.fatal(sm.getString("catalina.serverStartFail"), e);
        try {
            // On exception, destroy to release resources
            getServer().destroy();
        } catch (LifecycleException e1) {
            log.debug("destroy() failed for failed Server", e1);
        }
        return;
    }
    // Create and register the JVM shutdown hook
    if (useShutdownHook) {
        if (shutdownHook == null) {
            shutdownHook = new CatalinaShutdownHook();
        }
        Runtime.getRuntime().addShutdownHook(shutdownHook);
    }
    // Listen for a stop request via the await method
    if (await) {
        await();
        stop();
    }
}

Through the "shutdown hook", some cleanup is done when the JVM shuts down, such as releasing thread pools, cleaning up temporary files, and flushing in-memory data to disk.

The "shutdown hook" is essentially a thread that the JVM tries to execute before it stops. Let's look at what CatalinaShutdownHook does.

/* Shutdown hook which will perform a clean shutdown of Catalina if needed. */
protected class CatalinaShutdownHook extends Thread {
    @Override
    public void run() {
        try {
            if (getServer() != null) {
                Catalina.this.stop();
            }
        } catch (Throwable ex) {
            ...
        }
    }
}

/* Close the created Server instance. */
public void stop() {
    try {
        // Remove the ShutdownHook first so that server.stop()
        // doesn't get invoked twice
        if (useShutdownHook) {
            Runtime.getRuntime().removeShutdownHook(shutdownHook);
        }
    } catch (Throwable t) {
        ...
    }
    // Shut down the Server
    try {
        Server s = getServer();
        LifecycleState state = s.getState();
        // Determine whether it is already stopping; if so, do nothing
        if (LifecycleState.STOPPING_PREP.compareTo(state) <= 0) {
            // Nothing to do. stop() was already called
        } else {
            s.stop();
            s.destroy();
        }
    } catch (LifecycleException e) {
        log.error("Catalina.stop", e);
    }
}

It simply executes the Server's stop method, which releases and cleans up all resources.
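The JVM shutdown-hook mechanism that CatalinaShutdownHook relies on can be tried in a few lines (a generic JDK demo, not Tomcat code): the hook thread runs as the JVM exits, which is exactly where cleanup such as stopping the Server belongs.

```java
public class HookDemo {
    public static void main(String[] args) {
        // Register a thread the JVM will run just before it exits —
        // the same Runtime.addShutdownHook() call that Catalina uses.
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            System.out.println("cleaning up before JVM exit");
        }));
        System.out.println("main finished");
    }
}
```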

Server component

Now experience the beauty of interface design: see how Tomcat designs components and interfaces. The Server component is abstracted as an interface; since it needs life-cycle management, it extends Lifecycle for one-click start and stop.

Its concrete implementation class is StandardServer. As the figure below shows, Lifecycle's main methods cover component initialization, startup, stopping, and destruction, plus the management and maintenance of listeners — the latter being the observer pattern at work: when different events fire, they are published to the listeners for their own business handling. This is the decoupling philosophy in action.

Server, in turn, is responsible for managing the Service components.

Next, let's take a look at the specific implementation classes of Server components. What are the functions of StandardServer and which classes are associated with it?

While reading source code, always pay extra attention to interfaces and abstract classes: the interface is the abstraction of the component's overall design, while the abstract class is usually an application of the template method pattern — its main purpose is to abstract the overall algorithm flow, hand the changing points to subclasses, and reuse the code that doesn't change.

StandardServer inherits LifecycleBase, so its life cycle is managed uniformly. Its child component is Service, so it must also manage the Service life cycle: it calls each Service's start method at startup and its stop method when stopping. Server maintains several Service components internally, stored in an array. So how does Server add a Service to that array?

/* Add Service to the defined array
 *
 * @param service The Service to be added
 */
@Override
public void addService(Service service) {
    service.setServer(this);
    synchronized (servicesLock) {
        // Create a results array of length services.length + 1
        Service results[] = new Service[services.length + 1];
        // Copy the old data into the results array
        System.arraycopy(services, 0, results, 0, services.length);
        results[services.length] = service;
        services = results;
        // Start the Service component
        if (getState().isAvailable()) {
            try {
                service.start();
            } catch (LifecycleException e) {
                // Ignore
            }
        }
        // Observer pattern: trigger the listening event
        support.firePropertyChange("service", null, service);
    }
}

As the code shows, it does not allocate a long array up front; instead it grows the array dynamically as Services are added, to save memory. In our everyday development, the memory cost of space complexity is usually not the main concern, but this pursuit of economy down to the last detail is worth appreciating.
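The grow-by-one strategy of addService can be isolated into a few lines (a simplified sketch using String in place of Service):

```java
import java.util.Arrays;

// Sketch of the grow-by-one strategy in StandardServer.addService():
// allocate a new array one slot larger, copy the old contents, append.
public class GrowByOneDemo {
    static String[] append(String[] services, String service) {
        String[] results = new String[services.length + 1];
        System.arraycopy(services, 0, results, 0, services.length);
        results[services.length] = service;
        return results;
    }

    public static void main(String[] args) {
        String[] services = new String[0];
        services = append(services, "catalina");
        services = append(services, "admin");
        System.out.println(Arrays.toString(services));
    }
}
```

The trade-off: each append costs a full copy, which is fine here because Services are added only a handful of times at startup, while the array is read on every request.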

There is one more important function: the last line of Catalina's start method above calls the Server's await method.

This method listens on the stop port. In await(), a Socket listening on port 8005 is created, and an endless loop accepts connection requests on it. When a new connection arrives, the connection is established and data is read from the Socket; if the data read is the stop command "SHUTDOWN", the loop exits and the stop process begins.
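A minimal sketch of that listen-for-SHUTDOWN idea (not Tomcat's actual await() code; it uses an ephemeral port instead of 8005 and accepts a single connection):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// Sketch of an await()-style stop listener: block on a server socket and
// return the command line a client sends, e.g. "SHUTDOWN".
public class ShutdownListener {
    static String awaitCommand(ServerSocket server) throws IOException {
        try (Socket socket = server.accept();
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            return in.readLine();   // e.g. "SHUTDOWN"
        }
    }

    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {   // ephemeral port, not 8005
            int port = server.getLocalPort();
            // Simulate the shutdown client (what shutdown.sh effectively does).
            new Thread(() -> {
                try (Socket client = new Socket("127.0.0.1", port);
                     PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                    out.println("SHUTDOWN");
                } catch (IOException ignored) { }
            }).start();
            System.out.println("received: " + awaitCommand(server));
        }
    }
}
```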

Service

Service is also interface-oriented; its concrete implementation class is StandardService. The Service component likewise extends Lifecycle for life-cycle management — no need to show that diagram again. Let's first look at the main methods and members defined by the Service interface: the interface alone reveals the core functions. When reading source, focus first on the relationships between the interfaces; don't rush into the implementation classes.

public interface Service extends Lifecycle {

    // ---------- main members

    // The top-level Engine container held by this Service
    public Engine getContainer();

    // Set the Engine container of this Service
    public void setContainer(Engine engine);

    // The Server this Service belongs to
    public Server getServer();

    // ---------- public methods

    // Add a connector associated with this Service
    public void addConnector(Connector connector);

    public Connector[] findConnectors();

    // Custom thread pool
    public void addExecutor(Executor ex);

    // Mapper's main function is to locate the component that should
    // process a request according to its URL
    Mapper getMapper();
}

Then take a closer look at the implementation class of Service:

public class StandardService extends LifecycleBase implements Service {

    // Name
    private String name = null;

    // The Server instance
    private Server server = null;

    // The Connector array
    protected Connector connectors[] = new Connector[0];
    private final Object connectorsLock = new Object();

    // The corresponding Engine container
    private Engine engine = null;

    // The mapper and its listener — an application of the observer pattern
    protected final Mapper mapper = new Mapper();
    protected final MapperListener mapperListener = new MapperListener(this);
}

StandardService inherits the abstract class LifecycleBase, which defines final template methods for the life cycle; each one exposes the changing point as an abstract method so that different components can customize their own flow. This is something for us to learn from: using the template method to separate what changes from what doesn't.

In addition, StandardService contains some familiar components: Server, Connector, Engine, and Mapper.

So why is there also a MapperListener? Because Tomcat supports hot deployment: when the deployment of a Web application changes, the mapping information in Mapper must change too. MapperListener is a listener that watches for container changes and updates Mapper — a typical observer pattern. Downstream components react differently to the actions of upstream components; the event publisher does not call every downstream component directly but is decoupled from them by the observer pattern.

Service manages the connectors and the top-level Engine container, so look next at its startInternal method — which is exactly the abstract method defined by the LifecycleBase template — to see the order in which it starts each component.

protected void startInternal() throws LifecycleException {
    // 1. Trigger the STARTING event for the listeners
    setState(LifecycleState.STARTING);

    // 2. Start the Engine first; the Engine starts its child containers,
    //    and because the composite pattern is used, each layer of
    //    containers starts its own children first
    if (engine != null) {
        synchronized (engine) {
            engine.start();
        }
    }

    // 3. Then start the Mapper listener
    mapperListener.start();

    // 4. Finally, start the connectors, which start their subcomponents,
    //    such as the Endpoint
    synchronized (connectorsLock) {
        for (Connector connector : connectors) {
            if (connector.getState() != LifecycleState.FAILED) {
                connector.start();
            }
        }
    }
}

Service starts the Engine component first, then the Mapper listener, and finally the connectors. This is easy to understand: only when the inner components are up can service be provided to the outside, so only then can the outer connector components be started. Mapper also depends on the container components — they must be started before it can listen for their changes — so Mapper and MapperListener start after the containers. The stop order is the reverse of the start order, likewise based on these dependencies.

Engine

As the top-level Container component, Engine is essentially a container that inherits ContainerBase — once again an abstract class applying the template method pattern. ContainerBase holds each component's child containers in the member variable protected final HashMap<String, Container> children = new HashMap<>(). It also uses protected final Pipeline pipeline = new StandardPipeline(this) to form a pipeline that processes requests from the connector; the chain of responsibility pattern builds that pipeline.

public class StandardEngine extends ContainerBase implements Engine { }

The child container of Engine is Host, so what children holds is Host.

Let's see what ContainerBase does.

initInternal: defines container initialization and creates a thread pool dedicated to starting and stopping child containers.

startInternal: the default container startup. The composite pattern builds the parent-child relationships: the container first obtains its own children and uses startStopExecutor to start them.

public abstract class ContainerBase extends LifecycleMBeanBase implements Container {

    // Provides the default initialization logic
    @Override
    protected void initInternal() throws LifecycleException {
        BlockingQueue<Runnable> startStopQueue = new LinkedBlockingQueue<>();
        // Create a thread pool for starting and stopping child containers
        startStopExecutor = new ThreadPoolExecutor(
                getStartStopThreadsInternal(),
                getStartStopThreadsInternal(), 10, TimeUnit.SECONDS,
                startStopQueue,
                new StartStopThreadFactory(getName() + "-startStop-"));
        startStopExecutor.allowCoreThreadTimeOut(true);
        super.initInternal();
    }

    // Default container startup
    @Override
    protected synchronized void startInternal() throws LifecycleException {
        // Get the child containers and submit them to the thread pool to start
        Container children[] = findChildren();
        List<Future<Void>> results = new ArrayList<>();
        for (Container child : children) {
            results.add(startStopExecutor.submit(new StartChild(child)));
        }
        MultiThrowable multiThrowable = null;
        // Collect the startup results
        for (Future<Void> result : results) {
            try {
                result.get();
            } catch (Throwable e) {
                log.error(sm.getString("containerBase.threadedStartFailed"), e);
                if (multiThrowable == null) {
                    multiThrowable = new MultiThrowable();
                }
                multiThrowable.add(e);
            }
        }
        // Start the pipeline that processes requests
        if (pipeline instanceof Lifecycle) {
            ((Lifecycle) pipeline).start();
        }
        // Publish the STARTING event
        setState(LifecycleState.STARTING);
        // Start our background thread
        threadStart();
    }
}

By inheriting LifecycleMBeanBase, it also gets life-cycle management; it provides the default startup behavior for child containers as well as methods to add, find, and remove them.

Engine reuses ContainerBase's startInternal method when launching its Host containers. What else does Engine do itself?

Look at the constructor: it sets the Pipeline's basic valve by creating a StandardEngineValve.

/* Create a new StandardEngine component with the default basic Valve. */
public StandardEngine() {
    super();
    pipeline.setBasic(new StandardEngineValve());
    ...
}

The container's main job is to process the request and forward it to one of its Host child containers, specifically through a Valve. Each container component has a Pipeline that forms a chain of responsibility to pass the request along, and every Pipeline has a basic valve (Basic Valve). The Engine container's basic valve is defined as follows:

final class StandardEngineValve extends ValveBase {

    @Override
    public final void invoke(Request request, Response response)
            throws IOException, ServletException {
        // Choose the appropriate Host to process the request; the Mapper
        // component has already stored it in the request
        Host host = request.getHost();
        if (host == null) {
            response.sendError(HttpServletResponse.SC_BAD_REQUEST,
                    sm.getString("standardEngine.noHost", request.getServerName()));
            return;
        }
        if (request.isAsyncSupported()) {
            request.setAsyncSupported(host.getPipeline().isAsyncSupported());
        }
        // Get the first Valve of the Host's Pipeline and forward the request
        host.getPipeline().getFirst().invoke(request, response);
    }
}

The basic valve's implementation is simple: it forwards the request to the Host container. The Host object that processes the request is obtained from the request itself — so how did a Host container get into the request object? Because before the request reaches the Engine container, the Mapper component routes it: Mapper locates the appropriate container from the request URL and saves that container object in the request object.

Summary of component design

Have you noticed that Tomcat's design is almost entirely interface-oriented? Isolating functionality behind interfaces is the embodiment of single responsibility: each interface abstracts a different kind of component, and abstract classes define the common execution flow of the components. That is where the phrase "single responsibility" really shows its meaning. Along the way we have seen the observer pattern, template method pattern, composite pattern, and chain of responsibility pattern, plus the design philosophy of abstracting components into interfaces.

I/O model of the connector and thread pool design

The connector's main functions are to accept TCP/IP connections, limit the number of connections, read the data, and finally forward the request to the Container. This inevitably involves I/O programming, so today we analyze how Tomcat uses the I/O model to achieve high concurrency, and enter the world of I/O together.

There are five main I/O models: synchronous blocking, synchronous non-blocking, I/O multiplexing, signal-driven, and asynchronous I/O. They sound familiar, but can you actually tell the difference between them?

So-called I/O is the process of copying data between computer memory and external devices.

The CPU must first read data from an external device into memory before processing it. Consider this scenario: when a program issues a read instruction to an external device, it usually takes some time for the data to be copied from the device into memory, during which the CPU has nothing to do for that program. Should the program voluntarily yield the CPU to someone else? Or should the CPU keep polling: has the data arrived yet? Has it arrived yet?

That is exactly the problem the I/O models solve. Today I will first explain the differences between the various I/O models, and then focus on how Tomcat's NioEndpoint component implements the non-blocking I/O model.

I/O model

A network I/O communication process, such as reading network data, involves two parties: the user thread that invokes the I/O operation, and the operating-system kernel. A process's address space is divided into user space and kernel space, and user threads cannot access kernel space directly.

There are two main steps for network reading:

The user thread waits for the kernel to copy the data from the network card to the kernel space.

The kernel copies data from kernel space to user space.

Sending data to the network is the same process in reverse: the data is copied from user space into kernel space, and the kernel copies it to the network card to be sent.

The difference between the I/O models lies in how these two steps are carried out.

"Synchronous" versus "asynchronous" refers to whether the application's call must wait, or returns immediately with the result delivered later.

"Blocking" versus "non-blocking" mainly concerns whether the read/write operation that copies data from the kernel to user space blocks and waits.

Synchronous blocking I/O

When the user thread initiates a read call, it blocks and gives up the CPU while the kernel waits for data to arrive from the network card and copies it into kernel space. The kernel then copies the data into user space and wakes up the blocked read thread. The thread is blocked during both steps.
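A minimal sketch of this behavior in Java, using a local socket pair (all names here are illustrative, not from Tomcat): the client's readLine() parks the calling thread until the peer finally sends data.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class BlockingIoDemo {

    public static String demo() throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {
            // Peer thread: accept, wait a bit, then send one line.
            new Thread(() -> {
                try (Socket peer = server.accept();
                     PrintWriter out = new PrintWriter(peer.getOutputStream(), true)) {
                    Thread.sleep(200);          // simulate slow data arrival
                    out.println("hello");
                } catch (Exception ignored) { }
            }).start();

            try (Socket client = new Socket("localhost", server.getLocalPort());
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(client.getInputStream()))) {
                // Blocks here: the thread gives up the CPU until the kernel has
                // received the data and copied it into user space.
                return in.readLine();
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo());
    }
}
```

One blocked thread per connection is exactly why this model does not scale to high concurrency.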

Synchronous non-blocking

The user thread keeps calling the read method; the call fails as long as the data has not yet been copied into kernel space, and keeps failing until the data arrives there. The user thread then blocks while the data is copied from kernel space to user space, and is woken up once the data reaches user space. While looping on the read call itself, the thread is not blocked.
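The polling loop can be sketched with Java NIO (a self-contained illustration, not Tomcat code): after configureBlocking(false), read() returns 0 immediately while no data has reached kernel space, so the caller must retry.

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class NonBlockingIoDemo {

    public static String demo() throws Exception {
        try (ServerSocketChannel server = ServerSocketChannel.open()) {
            server.bind(new InetSocketAddress(0));
            int port = ((InetSocketAddress) server.getLocalAddress()).getPort();

            // Peer thread: accept, wait a bit, then send two bytes.
            new Thread(() -> {
                try (SocketChannel peer = server.accept()) {
                    Thread.sleep(100);
                    peer.write(ByteBuffer.wrap("ok".getBytes()));
                } catch (Exception ignored) { }
            }).start();

            try (SocketChannel client = SocketChannel.open(
                    new InetSocketAddress("localhost", port))) {
                client.configureBlocking(false);    // read() no longer blocks
                ByteBuffer buf = ByteBuffer.allocate(16);
                // Poll: read() returns 0 while the data has not reached kernel
                // space; the busy loop burns CPU (throttled here with sleep).
                while (client.read(buf) <= 0) {
                    Thread.sleep(10);
                }
                buf.flip();
                return new String(buf.array(), 0, buf.limit());
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo());
    }
}
```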

I/O Multiplexing

The read operation of the user thread is divided into two steps:


The user thread first initiates a select call, essentially asking whether the kernel's data is ready. Once the kernel has the data ready, the second step is performed.

The user thread then initiates a read call; the thread that initiated the read blocks while waiting for the kernel to copy data from kernel space to user space.

Why is it called I/O multiplexing? The core point is that a single select call to the kernel can query the status of multiple data channels (Channel), hence "multiplexing".

Asynchronous I/O

When the user thread issues the read call, it registers a callback function, and the read call returns immediately without blocking the thread. Once the kernel has prepared the data and copied it into user space, the registered callback function is invoked to process the data. Throughout the whole process, the user thread never blocks.
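A sketch of the callback style using Java's NIO.2 asynchronous channels (shown here with AsynchronousFileChannel to keep the example self-contained; the socket variant follows the same CompletionHandler pattern):

```java
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousFileChannel;
import java.nio.channels.CompletionHandler;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.CountDownLatch;

public class AsyncIoDemo {

    public static String demo() throws Exception {
        Path tmp = Files.createTempFile("aio", ".txt");
        Files.write(tmp, "done".getBytes());

        ByteBuffer buf = ByteBuffer.allocate(16);
        CountDownLatch latch = new CountDownLatch(1);
        StringBuilder result = new StringBuilder();

        try (AsynchronousFileChannel ch =
                 AsynchronousFileChannel.open(tmp, StandardOpenOption.READ)) {
            // read() returns immediately; the callback fires only after the
            // data has already been copied into the user-space buffer.
            ch.read(buf, 0, null, new CompletionHandler<Integer, Void>() {
                @Override public void completed(Integer bytes, Void att) {
                    buf.flip();
                    result.append(new String(buf.array(), 0, buf.limit()));
                    latch.countDown();
                }
                @Override public void failed(Throwable exc, Void att) {
                    latch.countDown();
                }
            });
            // The calling thread is free to do other work here.
            latch.await();
        }
        Files.delete(tmp);
        return result.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo());
    }
}
```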

Tomcat NioEndpoint

Tomcat's NioEndpoint component actually implements the I/O multiplexing model, and it is precisely this that gives Tomcat its excellent concurrency capability. Let's take a peek at the design of Tomcat's NioEndpoint.

Using a Java multiplexer comes down to no more than two steps:


Create a Selector, register the events of interest on it, and then call the select method, waiting for an event of interest to occur.

When an event of interest occurs, such as a channel becoming readable, a new thread is created to read data from the Channel.
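The two steps above can be sketched with Java NIO (a minimal, self-contained illustration, not Tomcat's actual code): one select() call watches the registered channels, and only ready channels are then read.

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class SelectorDemo {

    public static String demo() throws Exception {
        try (ServerSocketChannel server = ServerSocketChannel.open();
             Selector selector = Selector.open()) {
            server.bind(new InetSocketAddress(0));
            int port = ((InetSocketAddress) server.getLocalAddress()).getPort();

            // Peer thread: connect and send a few bytes.
            new Thread(() -> {
                try (SocketChannel peer = SocketChannel.open(
                        new InetSocketAddress("localhost", port))) {
                    peer.write(ByteBuffer.wrap("ready".getBytes()));
                } catch (Exception ignored) { }
            }).start();

            SocketChannel client = server.accept();
            client.configureBlocking(false);
            client.register(selector, SelectionKey.OP_READ); // step 1: register interest

            selector.select();                               // block until something is ready
            StringBuilder data = new StringBuilder();
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isReadable()) {                      // step 2: read the ready channel
                    ByteBuffer buf = ByteBuffer.allocate(16);
                    ((SocketChannel) key.channel()).read(buf);
                    buf.flip();
                    data.append(new String(buf.array(), 0, buf.limit()));
                }
            }
            client.close();
            return data.toString();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo());
    }
}
```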

Although the implementation of Tomcat's NioEndpoint component is relatively complex, the basic principle is those two steps. Let's first look at its parts. It consists of five components: LimitLatch, Acceptor, Poller, SocketProcessor and Executor. Their working process is shown in the figure below:

It is precisely because of this use of I/O multiplexing that high performance is achieved. The essence of Poller is to hold a Java Selector that detects channel I/O events; when data becomes readable or writable, it creates a SocketProcessor task and throws it into the thread pool for execution. In other words, a small number of threads listen for read and write events, while a dedicated thread pool performs the actual reads and writes, improving performance.

Custom thread pool model

In order to improve processing power and concurrency, Web containers usually put the work of processing requests onto a thread pool. Tomcat extends Java's native thread pool to better meet high-concurrency requirements. Before diving into the Tomcat thread pool, let's first review how the Java thread pool works.

Java thread pool

Simply put, a Java thread pool internally maintains an array of threads and a task queue; when tasks cannot be handled immediately, they are put into the queue and processed later.

ThreadPoolExecutor

Peek at the constructor of the thread pool's core class; we need to understand the role of each parameter in order to understand how the thread pool works.

public ThreadPoolExecutor(int corePoolSize,
                          int maximumPoolSize,
                          long keepAliveTime,
                          TimeUnit unit,
                          BlockingQueue<Runnable> workQueue,
                          ThreadFactory threadFactory,
                          RejectedExecutionHandler handler) { ... }

corePoolSize: the number of threads kept in the pool; even if idle they will not be shut down, unless allowCoreThreadTimeOut is set.

maximumPoolSize: the maximum number of threads allowed in the pool once the queue is full.

keepAliveTime, TimeUnit: when the number of threads is greater than the core number, excess idle threads are destroyed after this maximum holding time; unit is the time unit of the keepAliveTime parameter. When allowCoreThreadTimeOut(true) is set, idle threads within the corePoolSize range that reach keepAliveTime are also reclaimed.

workQueue: when the number of threads reaches corePoolSize, new tasks are placed in the work queue workQueue, while threads in the pool try to pull work from workQueue, i.e. call the poll method to fetch tasks.

threadFactory: the factory that creates threads, used to set, for example, whether threads are daemon threads, their names, and so on.

handler (RejectedExecutionHandler): the rejection policy; it runs when both the thread limit and queue capacity are reached. You can also customize the rejection policy by implementing RejectedExecutionHandler. The default policy, AbortPolicy, rejects the task and throws a RejectedExecutionException; CallerRunsPolicy makes the thread that submitted the task execute it.

To analyze the relationship between each parameter:

When a new task is submitted, if the number of threads in the pool is less than corePoolSize, a new thread is created to execute the task. When the number of threads equals corePoolSize, new tasks are placed in the work queue workQueue, and threads in the pool try to take tasks from the queue to execute.

If there are many tasks and workQueue is full, then as long as the number of threads is less than maximumPoolSize, temporary threads are created to execute tasks; once the total number of threads reaches maximumPoolSize, no more threads are created and the rejection policy is executed. (Other built-in policies: DiscardPolicy silently discards the task; DiscardOldestPolicy discards the oldest unhandled task.)
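The dispatch order just described (core thread, then queue, then temporary thread, then rejection) can be verified with a tiny pool; all sizes below are illustrative:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class PoolFlowDemo {

    public static int demo() throws InterruptedException {
        AtomicInteger rejected = new AtomicInteger();
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 2, 0L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(1),             // queue capacity 1
                (r, e) -> rejected.incrementAndGet());   // count rejections

        CountDownLatch release = new CountDownLatch(1);
        Runnable slow = () -> {
            try { release.await(); } catch (InterruptedException ignored) { }
        };
        pool.execute(slow); // 1st: runs on the core thread
        pool.execute(slow); // 2nd: parked in the queue
        pool.execute(slow); // 3rd: queue full -> temporary thread created
        pool.execute(slow); // 4th: max threads + full queue -> rejected
        release.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return rejected.get();
    }

    public static void main(String[] args) throws Exception {
        System.out.println("rejected: " + demo());
    }
}
```

Only the fourth task is rejected, matching the native flow described above.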

The specific implementation process is shown in the following figure:

Tomcat thread pool

The customized version of ThreadPoolExecutor inherits java.util.concurrent.ThreadPoolExecutor. There are two critical parameters for thread pools:

Number of threads.

Queue length.

Tomcat must limit these two parameters, otherwise CPU and memory resources may be exhausted under high-concurrency scenarios. Its pool inherits from java.util.concurrent.ThreadPoolExecutor, but implements the flow more efficiently.

Its construction method is as follows, the same as the official Java one.

public ThreadPoolExecutor(int corePoolSize, int maximumPoolSize, long keepAliveTime,
                          TimeUnit unit, BlockingQueue<Runnable> workQueue,
                          RejectedExecutionHandler handler) {
    super(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue, handler);
    prestartAllCoreThreads();
}

The component that controls the thread pool in Tomcat is StandardThreadExecutor, which also implements the life cycle interface. Here is the code to start the thread pool.

@Override
protected void startInternal() throws LifecycleException {
    // Custom task queue
    taskqueue = new TaskQueue(maxQueueSize);
    // Custom thread factory
    TaskThreadFactory tf = new TaskThreadFactory(namePrefix, daemon, getThreadPriority());
    // Create the custom thread pool
    executor = new ThreadPoolExecutor(getMinSpareThreads(), getMaxThreads(),
            maxIdleTime, TimeUnit.MILLISECONDS, taskqueue, tf);
    executor.setThreadRenewalDelay(threadRenewalDelay);
    if (prestartminSpareThreads) {
        executor.prestartAllCoreThreads();
    }
    taskqueue.setParent(executor);
    // Observer pattern: publish the start event
    setState(LifecycleState.STARTING);
}

The key points are:

Tomcat has its own customized task queue and thread factory, and can limit the length of the task queue; its maximum length is maxQueueSize.

Tomcat also limits the number of threads, setting the number of core threads (minSpareThreads) and the maximum number of thread pools (maxThreads).
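In practice these two limits are set in server.xml via a shared Executor wired to a connector; a hedged sketch (the values are illustrative, not recommendations, and defaults vary by Tomcat version):

```xml
<!-- Illustrative values only; tune for your workload. -->
<Executor name="tomcatThreadPool"
          namePrefix="catalina-exec-"
          minSpareThreads="25"
          maxThreads="200"
          maxIdleTime="60000"
          maxQueueSize="1000"/>

<Connector port="8080" protocol="HTTP/1.1"
           executor="tomcatThreadPool"/>
```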

In addition, Tomcat has redefined the thread pool's processing flow on top of the official one. The native flow is:

For the first corePoolSize tasks, one new thread is created per task.

After that, submitted tasks are placed directly into the queue; when the queue is full, if the maximum pool size has not been reached, a temporary thread is created to put out the fire.

Once the total number of threads reaches maximumPoolSize, the rejection policy is executed directly.

The Tomcat thread pool extends the native ThreadPoolExecutor and implements its own task processing logic by overriding the execute method:

For the first corePoolSize tasks, one new thread is created per task.

After that, submitted tasks are placed directly into the queue; when the queue is full, if the maximum pool size has not been reached, a temporary thread is created to put out the fire.

Once the total number of threads reaches maximumPoolSize, it continues to try to put the task into the queue; only if the queue is also full and the insertion fails is the rejection policy executed.

The biggest difference: when the total number of threads reaches the maximum, Tomcat does not execute the rejection policy immediately, but tries to add the task to the task queue, and only executes the rejection policy after that add fails.

The code is as follows:

public void execute(Runnable command, long timeout, TimeUnit unit) {
    // Record the number of submitted tasks: +1
    submittedCount.incrementAndGet();
    try {
        // Call the native Java thread pool to execute the task; the native
        // rejection policy throws a RejectedExecutionException
        super.execute(command);
    } catch (RejectedExecutionException rx) {
        // If the total number of threads reached maximumPoolSize, the native
        // Java pool has executed its rejection policy
        if (super.getQueue() instanceof TaskQueue) {
            final TaskQueue queue = (TaskQueue) super.getQueue();
            try {
                // Try once more to put the task into the queue
                if (!queue.force(command, timeout, unit)) {
                    submittedCount.decrementAndGet();
                    // The queue is still full; insertion failed, so execute
                    // the rejection policy
                    throw new RejectedExecutionException("Queue capacity is full.");
                }
            } catch (InterruptedException x) {
                submittedCount.decrementAndGet();
                throw new RejectedExecutionException(x);
            }
        } else {
            // Submitted count: -1
            submittedCount.decrementAndGet();
            throw rx;
        }
    }
}

The Tomcat thread pool uses submittedCount to track how many tasks have been submitted to the pool, which ties into Tomcat's customized task queue. Tomcat's TaskQueue extends Java's LinkedBlockingQueue, and we know LinkedBlockingQueue is unbounded by default unless given a capacity. So Tomcat passes it an integer parameter capacity; TaskQueue's constructor passes capacity on to the parent LinkedBlockingQueue constructor, preventing the memory overflow that unlimited task accumulation would cause. Moreover, with an unbounded queue, once the current number of threads reaches the core count, adding a task to the queue will always succeed, so the pool would never get a chance to create a new thread.

To solve this problem, TaskQueue overrides LinkedBlockingQueue's offer method, returning false at the right moment; false signals that adding the task failed, which makes the thread pool create a new thread.

public class TaskQueue extends LinkedBlockingQueue<Runnable> {
    ...
    @Override
    // When the thread pool calls this method, the current number of threads
    // must already be greater than the core number.
    public boolean offer(Runnable o) {
        // If the number of threads has reached the maximum, no new thread can
        // be created; the task can only be added to the task queue.
        if (parent.getPoolSize() == parent.getMaximumPoolSize())
            return super.offer(o);
        // Reaching here means the current number of threads is greater than
        // the core number and less than the maximum, so a new thread could be
        // created. Should it be? Two cases:
        // 1. If the number of submitted tasks is not greater than the current
        //    number of threads, there are still idle threads, so there is no
        //    need to create a new one; just queue the task.
        if (parent.getSubmittedCount() <= parent.getPoolSize())
            return super.offer(o);
        // 2. Otherwise the current threads are not enough; return false so
        //    the thread pool creates a new thread.
        if (parent.getPoolSize() < parent.getMaximumPoolSize())
            return false;
        return super.offer(o);
    }
    ...
}
