This article explains how Jetty works and compares it with Tomcat, to help answer the question: which is better, Jetty or Tomcat?
Jetty
Basic architecture
Jetty is currently a promising Servlet engine. Its architecture is relatively simple, yet it is a highly scalable and flexible application server. Its basic data model is the Handler (processor): every extensible component can be added to the Server as a Handler, and Jetty manages these Handlers for you. The other indispensable component of Jetty is the Connector, which accepts connection requests from clients and hands them to a processing queue for execution.
The following figure is Jetty's basic architecture diagram. The core of Jetty consists of the Server and the Connector; the whole Server component works on top of the Handler container, which plays a role similar to Tomcat's Container.
Figure 1. Basic architecture of Jetty
Jetty also has optional components that we can extend. For example, with JMX we can define MBeans and add them to the Server; when the Server starts, these beans start working along with it.
Figure 2. Class diagram of the main components of Jetty
As the figure above shows, the whole of Jetty is built around the Server class: the Server is itself a Handler and is associated with the Connector and the Container, where the Container is the container that manages MBeans.
The main way to extend Jetty's Server is to implement Handlers one by one and add them to the Server; the Server provides the rules for how these Handlers are invoked.
Lifecycle management of all Jetty components is based on the observer design pattern, similar to how Tomcat manages its components.
Figure 3. Class diagram of LifeCycle
Each component holds a collection of observers, here the Listener class, which corresponds to the Observer role in the observer pattern (for more on that pattern, see the article "Tomcat system Architecture and Design patterns, part 2: design pattern Analysis"). When events such as start, failure, or stop are triggered, these Listeners are called. This is the simplest possible design, and it is much simpler than Tomcat's LifeCycle.
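As an illustration of this lifecycle design, here is a minimal sketch, assuming the Jetty 9.x embedded API (org.eclipse.jetty.util.component), of registering a Listener on a Server; the listener plays the Observer role and is called back when the component starts, fails, or stops.

// Minimal sketch, assuming Jetty 9.x APIs; the listener is the Observer described above.
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.util.component.AbstractLifeCycle;
import org.eclipse.jetty.util.component.LifeCycle;

public class LifeCycleListenerExample {
    public static void main(String[] args) throws Exception {
        Server server = new Server(8080);

        // Register an observer; Jetty calls it back on lifecycle transitions.
        server.addLifeCycleListener(new AbstractLifeCycle.AbstractLifeCycleListener() {
            @Override
            public void lifeCycleStarted(LifeCycle event) {
                System.out.println("Server started: " + event);
            }

            @Override
            public void lifeCycleFailure(LifeCycle event, Throwable cause) {
                System.err.println("Server failed: " + cause);
            }

            @Override
            public void lifeCycleStopped(LifeCycle event) {
                System.out.println("Server stopped: " + event);
            }
        });

        server.start();
        server.stop();
    }
}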
System structure
As mentioned above, Jetty is built mainly around Handler, and the Handler architecture shapes every aspect of Jetty. Here is a summary of the Handler types and their functions:
Figure 4. Architecture of Handler
Jetty provides two main types of Handler. The first is HandlerWrapper, which lets one Handler delegate to another: to add a Handler to Jetty, we must delegate it to the Server so that the Server can call it. With the ScopedHandler class we can intercept Handler execution and do extra work before or after a Handler is called, similar to a Valve (valve) in Tomcat. The second type is HandlerCollection, which assembles multiple Handlers into a Handler chain, making Jetty easy to extend.
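To make the two Handler families concrete, here is a minimal sketch assuming the Jetty 9.x embedded API: a HandlerWrapper subclass that does work before and after delegating to the Handler it wraps (much like a Tomcat Valve), and a HandlerCollection that assembles Handlers into a chain set on the Server. The TimingWrapper class is purely illustrative.

// Minimal sketch, assuming Jetty 9.x APIs; TimingWrapper is an illustrative example, not a Jetty class.
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.eclipse.jetty.server.Request;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.handler.AbstractHandler;
import org.eclipse.jetty.server.handler.HandlerCollection;
import org.eclipse.jetty.server.handler.HandlerWrapper;

public class HandlerExample {
    // Intercepts every request before/after the wrapped Handler runs.
    static class TimingWrapper extends HandlerWrapper {
        @Override
        public void handle(String target, Request baseRequest, HttpServletRequest request,
                           HttpServletResponse response) throws IOException, ServletException {
            long start = System.nanoTime();
            super.handle(target, baseRequest, request, response); // delegate to the wrapped Handler
            System.out.println(target + " took " + (System.nanoTime() - start) + " ns");
        }
    }

    public static void main(String[] args) throws Exception {
        Server server = new Server(8080);

        AbstractHandler hello = new AbstractHandler() {
            @Override
            public void handle(String target, Request baseRequest, HttpServletRequest request,
                               HttpServletResponse response) throws IOException, ServletException {
                response.setContentType("text/plain");
                response.getWriter().println("hello");
                baseRequest.setHandled(true);
            }
        };

        TimingWrapper timing = new TimingWrapper();
        timing.setHandler(hello);            // delegate one Handler to another

        HandlerCollection chain = new HandlerCollection();
        chain.addHandler(timing);            // assemble Handlers into a chain
        server.setHandler(chain);
        server.start();
        server.join();
    }
}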
Startup process
The entry point of Jetty is the Server class: once the Server has started, Jetty is ready to provide services. Which services it provides depends on which components' start methods are called when the Server starts. From Jetty's configuration file we can see that configuring Jetty is essentially the process of wiring those component classes onto the Server. The following is Jetty's startup sequence diagram:
Figure 5. Jetty startup sequence diagram
Because all Jetty components inherit from LifeCycle, calling the Server's start method starts every component already registered with the Server. The Server starts the other components in the following order (a configuration sketch follows this list):
* First, the Handler set on the Server is started. This Handler usually has many child Handlers that form a Handler chain, and the Server starts every Handler on the chain in turn.
* Next, the MBeans registered with JMX on the Server are started so that they work alongside it.
* Finally, the Connectors are started, the ports are opened, and client requests are accepted.
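The configuration sketch below assumes the Jetty 9.x embedded API plus the jetty-jmx module (class and package names differ in other Jetty versions); everything registered on the Server is then started by server.start() in the order listed above.

// Configuration sketch, assuming Jetty 9.x and the jetty-jmx module.
import java.lang.management.ManagementFactory;

import org.eclipse.jetty.jmx.MBeanContainer;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;
import org.eclipse.jetty.server.handler.DefaultHandler;

public class StartupExample {
    public static void main(String[] args) throws Exception {
        Server server = new Server();

        // 1. The Handler (chain) that will serve requests.
        server.setHandler(new DefaultHandler());

        // 2. JMX support: MBeans for Jetty components are registered when the Server starts.
        server.addBean(new MBeanContainer(ManagementFactory.getPlatformMBeanServer()));

        // 3. A Connector that opens the port and accepts client connections.
        ServerConnector connector = new ServerConnector(server);
        connector.setPort(8080);
        server.addConnector(connector);

        server.start();   // starts the Handlers, MBeans, and Connectors registered above
        server.join();
    }
}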
Accepting requests
As a standalone Servlet engine, Jetty can serve web requests on its own, or it can be integrated with other web servers, so it can work over two protocols: HTTP and AJP. If you integrate Jetty with JBoss or Apache, you can make Jetty work in AJP mode. The following sections describe how Jetty works over each of these two protocols: how it establishes connections and how it accepts requests.
Working over the HTTP protocol
If there is no other web server in front of it, Jetty works over the HTTP protocol: when Jetty receives a request, it must parse the request according to HTTP and encapsulate the returned data the same way. So how does Jetty accept a request, and how does it handle it?
We set Jetty's Connector implementation class to org.eclipse.jetty.server.bio.SocketConnector so that Jetty works in BIO (blocking I/O) mode.
When Jetty starts, it sets up the working environment for BIO and creates an HttpConnection class to parse and encapsulate the HTTP/1.1 protocol. The components involved are:
* ConnectorEndPoint: handles connection requests in the BIO style.
* ServerSocket: establishes the socket connection and receives and transmits data.
* Executor: the thread pool that handles connections and executes the tasks in each request queue.
* AcceptorThread: listens for connection requests; once a socket connection arrives, processing continues with the flow described below.
When the socket is actually processed, HttpConnection is invoked; it defines how the request is passed to the servlet container and how it is eventually routed to the destination servlet.
The following figure is a sequence diagram of Jetty initiating the creation of a connection:
Figure 6. Sequence diagram for establishing a connection
Jetty needs three steps to set up the environment for accepting connections:
* Create the queue thread pool that handles the tasks generated by each established connection; the pool can be specified by the user, which is similar to Tomcat.
* Create a ServerSocket to accept socket requests from clients, along with some helper classes used to wrap the client sockets.
* Create one or more listening threads to listen for connections on the access port.
Compared with Tomcat, Jetty's logic for setting up the connection environment is simpler, with fewer classes and less code to execute.
Once the environment for establishing connections is ready, HTTP requests can be accepted. When the Acceptor accepts a socket connection, processing moves to the flow shown in the following figure:
Figure 7. Processing connection sequence diagram
The Acceptor thread creates a ConnectorEndPoint for the request. HttpConnection indicates that the connection speaks the HTTP protocol: it creates an HttpParser class to parse HTTP and creates Request and Response objects that conform to the HTTP protocol. The work is then handed over to the queue thread pool for execution.
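The sketch below is not Jetty's source code; it is a plain-JDK illustration of the BIO pattern just described, in which one acceptor thread blocks on ServerSocket.accept() and each accepted socket is handed to a thread pool that parses the request and writes a response.

// Not Jetty's code -- a simplified illustration of the BIO accept/dispatch pattern described above.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class BioAcceptorSketch {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(50);   // queue thread pool
        try (ServerSocket serverSocket = new ServerSocket(8080)) { // listening socket
            while (true) {
                Socket socket = serverSocket.accept();             // acceptor thread blocks here
                pool.execute(() -> handle(socket));                // hand the connection to the pool
            }
        }
    }

    private static void handle(Socket socket) {
        try (Socket s = socket;
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(s.getInputStream(), StandardCharsets.US_ASCII));
             OutputStream out = s.getOutputStream()) {
            String requestLine = in.readLine();                    // e.g. "GET / HTTP/1.1"
            String body = "handled: " + requestLine + "\r\n";
            out.write(("HTTP/1.1 200 OK\r\nContent-Length: " + body.length()
                    + "\r\nConnection: close\r\n\r\n" + body).getBytes(StandardCharsets.US_ASCII));
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}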
Working over the AJP protocol
Usually the back end of a web site does not expose the Java application server directly to visitors; instead, a web server such as Apache or Nginx is placed in front of the application server. The reasons are easy to understand: log analysis, load balancing, access control, blocking malicious requests, static resource preloading, and so on.
The following figure is a typical architecture diagram of the web server:
Figure 8. Web server architecture
Under this architecture, the Servlet engine no longer needs to parse HTTP or encapsulate the returned HTTP response, because HTTP parsing has already been done on the Apache or Nginx server; the application server behind it only needs to work over the simpler AJP protocol, which speeds up request handling.
Comparing this with the HTTP sequence diagrams, the logic is almost identical; the only change is that HttpParser is replaced by an Ajp13Parser class, which defines how to handle the AJP protocol and which classes cooperate with it.
In fact, the only difference between handling an AJP request and an HTTP request is how the socket data is interpreted when packets are read: parsing according to the HTTP packet format is done by HttpParser, while parsing according to the AJP format is done by Ajp13Parser. The same applies to encapsulating the returned data.
For Jetty to work over AJP, configure the Connector implementation class as Ajp13SocketConnector, which extends SocketConnector and overrides the parent class's newConnection method so that it creates Ajp13Connection objects instead of HttpConnection. The following figure shows the sequence diagram of Jetty creating the connection environment:
Figure 9. Jetty sequence diagram for creating a connection environment
The only difference from the HTTP approach is that SocketConnector is replaced by Ajp13SocketConnector. The point of this replacement is to create an Ajp13Connection, marking the current connection as an AJP connection so that the Ajp13Parser class is used to parse the protocol; the logic for handling the connection is otherwise the same.
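For completeness, here is a configuration sketch assuming the jetty-ajp module of Jetty 7/8 (AJP support was removed from later Jetty versions, and the org.eclipse.jetty.ajp package name is an assumption here); the only change from the HTTP setup is the Connector class.

// Sketch assuming the Jetty 7/8 jetty-ajp module; the package name is an assumption.
import org.eclipse.jetty.ajp.Ajp13SocketConnector;
import org.eclipse.jetty.server.Connector;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.handler.DefaultHandler;

public class AjpExample {
    public static void main(String[] args) throws Exception {
        Server server = new Server();

        Ajp13SocketConnector ajp = new Ajp13SocketConnector(); // speaks AJP13 instead of HTTP
        ajp.setPort(8009);                                     // conventional AJP port
        server.setConnectors(new Connector[] { ajp });

        server.setHandler(new DefaultHandler());
        server.start();   // Apache/Nginx forwards requests to this port via its AJP module
        server.join();
    }
}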
Working with NIO
The Jetty behavior described above establishes and handles client connections with BIO (blocking I/O). Jetty also supports NIO (non-blocking I/O), and in fact Jetty's default connector works in NIO mode.
For more information about how NIO works, refer to the NIO articles on developerWorks. The typical NIO working prototype looks like this:
Selector selector = Selector.open();                               // the observer that watches all channels
ServerSocketChannel ssc = ServerSocketChannel.open();
ssc.configureBlocking(false);
SelectionKey key = ssc.register(selector, SelectionKey.OP_ACCEPT); // listen for accept events
ServerSocketChannel ss = (ServerSocketChannel) key.channel();
SocketChannel sc = ss.accept();                                    // accept a client connection
sc.configureBlocking(false);
SelectionKey newKey = sc.register(selector, SelectionKey.OP_READ); // listen for read events on it
Set<SelectionKey> selectedKeys = selector.selectedKeys();          // the events that have been triggered
Creating a Selector is like creating an observer: open a server-side channel, register it with the Selector, and specify the events to listen for. Then iterate over the selected keys, pick out the events of interest, and handle them. The key point is that we do not need one thread per connection watching for its events; instead, all connections are registered in one place and managed uniformly, and triggered events are dispatched to the interested program modules. Because the events of every watched connection can be managed centrally, the data sent and received on every established connection can be treated as events on the server side, so each connection no longer needs a dedicated thread to maintain it.
Two misunderstandings are worth noting here. First, many people think that listening for the SelectionKey.OP_ACCEPT event already means non-blocking mode. In fact, Jetty still uses a dedicated thread to listen for client connection requests; only after a request is accepted is it registered on the Selector and processed in non-blocking mode. Second, some believe that when Jetty runs in NIO mode only one thread handles all requests, and even that different users share one server thread, which would break ThreadLocal-based programs. Reading Jetty's source code shows that the only work that truly shares one thread is listening for data-transfer events on the different connections. For example, with several established connections, the traditional approach blocks one thread per connection while it waits for the next piece of data to arrive; with NIO, a single thread waits for data on all connections, and when data arrives on a connection, Jetty dispatches it to that connection's processing thread. The processing threads of different connections therefore remain independent.
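The following plain-JDK sketch (not Jetty's source) illustrates that point: a single selector thread watches read events for all connections, and only when data arrives on a connection is the work handed to an independent worker thread from a pool.

// Not Jetty's code -- a simplified illustration of one selector thread plus independent worker threads.
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SelectorDispatchSketch {
    public static void main(String[] args) throws Exception {
        ExecutorService workers = Executors.newFixedThreadPool(20);
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            selector.select();                                 // one thread waits for events on all connections
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();    // new connection, register it for reads
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    key.interestOps(0);                        // stop selecting this key while a worker handles it
                    SocketChannel client = (SocketChannel) key.channel();
                    workers.execute(() -> handle(client));     // per-connection work runs on its own thread
                }
            }
        }
    }

    private static void handle(SocketChannel client) {
        try {
            ByteBuffer buf = ByteBuffer.allocate(4096);
            client.read(buf);                                  // parse and handle the request here
            // ... write a response, then re-register read interest or close ...
        } catch (Exception e) {
            try { client.close(); } catch (Exception ignored) { }
        }
    }
}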
Jetty's NIO handling is almost the same as Tomcat's, differing only in how the listened events are dispatched to the corresponding connections. Test results suggest that Jetty's NIO handling is somewhat more efficient.
Processing requests
Now let's look at how Jetty handles an HTTP request.
The way Jetty works is actually very simple. When Jetty receives a request, it hands the request to the Handler registered on the Server for execution; how your registered Handler executes is entirely up to you. All Jetty does is call the handle(String target, Request baseRequest, HttpServletRequest request, HttpServletResponse response) method of your registered Handler; what happens next is your decision.
To accept a web request, first create a ContextHandler, as shown in the following code:
Server server = new Server(8080);
ContextHandler context = new ContextHandler();
context.setContextPath("/");
context.setResourceBase(".");
context.setClassLoader(Thread.currentThread().getContextClassLoader());
server.setHandler(context);
context.setHandler(new HelloHandler());
server.start();
server.join();
When we type http://localhost:8080 into a browser, the request is proxied to the handle method of the Server class, the Server's handle method proxies it to ContextHandler's handle method, and ContextHandler in turn calls HelloHandler's handle method. Isn't this similar to how a Servlet works: it is initialized before startup, and then its service method is called once the object has been created? In the Servlet API we usually only implement an already wrapped class, and the same is true in Jetty: although ContextHandler is just a Handler, it is one that Jetty has already implemented for you. In general, we only need to implement the Handlers related to our own business logic, while procedural Handlers or Handlers defined by a specification can be used directly; for example, Jetty ships several Handler implementations for supporting Servlets. Here is a simple flow of an HTTP request.
The code to access a Servlet:
Server server = new Server();
Connector connector = new SelectChannelConnector();
connector.setPort(8080);
server.setConnectors(new Connector[] { connector });
ServletContextHandler root = new ServletContextHandler(null, "/", ServletContextHandler.SESSIONS);
server.setHandler(root);
root.addServlet(new ServletHolder(new org.eclipse.jetty.embedded.HelloServlet("Hello")), "/");
server.start();
server.join();
Create a ServletContextHandler and add a Servlet to this Handler; ServletHolder is a decorator class for the Servlet, very similar to StandardWrapper in Tomcat. The following is the sequence diagram for requesting the Servlet:
Figure 12. Sequence diagram of Jetty processing requests
The figure above shows that Jetty processes a request by executing the handle method along the Handler chain. What needs explaining here is the processing rule of ScopedHandler: ServletContextHandler, SessionHandler and ServletHandler all extend ScopedHandler, so these three classes form a Handler chain, and their execution order is ServletContextHandler.handle -> ServletContextHandler.doScope -> SessionHandler.doScope -> ServletHandler.doScope -> ServletContextHandler.doHandle -> SessionHandler.doHandle -> ServletHandler.doHandle. This mechanism lets us do extra work in doScope.
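As a minimal sketch of doing that extra work, and assuming the Jetty 9.x ScopedHandler API (doScope/doHandle with nextScope/nextHandle continuing the chain), a custom ScopedHandler might look like the following; the AuditScopeHandler name and its timing logic are purely illustrative.

// Minimal sketch, assuming Jetty 9.x; AuditScopeHandler is an illustrative example, not a Jetty class.
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.eclipse.jetty.server.Request;
import org.eclipse.jetty.server.handler.ScopedHandler;

public class AuditScopeHandler extends ScopedHandler {
    @Override
    public void doScope(String target, Request baseRequest, HttpServletRequest request,
                        HttpServletResponse response) throws IOException, ServletException {
        // Scoping pass: e.g. set up per-request context before the real handling starts.
        request.setAttribute("audit.start", System.nanoTime());
        nextScope(target, baseRequest, request, response);   // continue the doScope chain
    }

    @Override
    public void doHandle(String target, Request baseRequest, HttpServletRequest request,
                         HttpServletResponse response) throws IOException, ServletException {
        // Handling pass: runs after all doScope calls on the chain have completed.
        nextHandle(target, baseRequest, request, response);  // continue the doHandle chain
        long start = (Long) request.getAttribute("audit.start");
        System.out.println(target + " handled in " + (System.nanoTime() - start) + " ns");
    }
}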
Integration with JBoss
As introduced earlier, Jetty can work over the AJP protocol. In typical enterprise applications, when Jetty works over AJP as the Servlet engine, there must be a server in front of it, and in practice that is very often JBoss. Here is how to integrate Jetty with JBoss.
JBoss has a JMX-based architecture, so any system or framework that conforms to the JMX specification can be added to JBoss as a component to extend its functionality. Jetty, as a mainstream Servlet engine, naturally supports integration with JBoss. The specific steps are as follows:
To integrate Jetty into JBoss as an independent Servlet engine, you extend JBoss's AbstractWebContainer class. This class implements the template method pattern, and its abstract getDeployer method must be implemented by a subclass; it specifies the Deployer that creates the web service. The Jetty project has a jetty-jboss module; compiling it produces a SAR package, or you can download a SAR package directly from the official website. After unpacking it, the layout is as follows:
Figure 13. Jboss-jetty directory
The jboss-jetty-6.1.9 directory contains a webdefault.xml configuration file, which is Jetty's default web.xml configuration. In the META-INF directory is the jboss-service.xml file, which configures the MBean.
The org.jboss.jetty.JettyService class configured there likewise extends the org.jboss.web.AbstractWebContainer class and overrides the parent's startService method, which directly calls jetty.start to start Jetty.
Comparison with Tomcat
Both Tomcat and Jetty are widely used Servlet engines; their relationship can be compared to that between China and the United States. Although Jetty has grown into an excellent Servlet engine, Tomcat's position is still hard to shake. In comparison, each has its own advantages and disadvantages.
After years of development, Tomcat has been widely accepted and recognized by the market. Compared with Jetty, Tomcat is still the more stable and mature option, and in enterprise applications it remains the first choice. However, as Jetty develops, its market share keeps growing, thanks to the many advantages of its technical design.
Architecture comparison
Architecturally speaking, Jetty is obviously simpler than Tomcat. If you don't know much about the architecture of Tomcat, it is recommended that you take a look at the article "Tomcat system Architecture and Design patterns".
As the earlier analysis shows, all of Jetty's components are implemented as Handlers; it also supports JMX, but the major functional extensions are all done through Handlers. You could say Jetty has a Handler-oriented architecture, just as Spring is Bean-oriented and iBATIS is statement-oriented, while Tomcat is built from multi-level containers. Each of these architectures has a "soul" at its core, and all the other components are built around that core as its body.
From a design pattern point of view, the Handler design is essentially a chain of responsibility: the HandlerCollection class helps developers build a chain, and the ScopedHandler class helps control the order in which the chain is traversed. The other pattern used is the observer pattern, which controls Jetty's entire life cycle: as long as your object implements the LifeCycle interface, it can be managed uniformly by Jetty. Extending Jetty is therefore simple and easy to understand, and the simplicity of the overall architecture brings another incomparable benefit: Jetty can be easily extended and trimmed down.
By contrast, Tomcat is much more bloated and its overall design is very complex. As mentioned earlier, the core of Tomcat is its container design, from Server to Service down to containers such as Engine. For an application server this kind of design has drawbacks: the layered container design exists to enable better extension, but this style of extension exposes the server's internal structure to external users, so to extend Tomcat a developer must first understand its overall design and then follow its conventions. That invisibly raises the cost of learning Tomcat. And it is not only the containers: Tomcat also uses chain-of-responsibility designs, for example the cascading Valves in a Pipeline, which are similar to Jetty's Handlers; implementing your own Valve is about as hard as writing a Handler. On the surface, Tomcat looks more powerful than Jetty, because Tomcat has already done a great deal of work for you, while Jetty only gives you the means and leaves the doing to you. It is like a child learning arithmetic: Tomcat tells you results such as 1 + 1 = 2 and 2 + 2 = 4 and shows you how to use those known results, so other numbers can only be calculated from the formulas it gives you, while Jetty teaches you the rules of addition, subtraction, multiplication and division so that you can do any calculation yourself. So once you have mastered Jetty, it becomes extremely powerful in your hands.
Performance comparison
Simply comparing the performance of Tomcat and Jetty is not very meaningful; one can only say that their performance differs in particular usage scenarios, because they are designed for different scenarios. Architecturally, Tomcat has an advantage in handling a small number of very busy connections: when connections have short lifetimes, Tomcat's overall performance is higher.
Jetty, by contrast, can handle a large number of connections at the same time and keep them open for long periods. For example, web chat applications are very well suited to Jetty as a server; Taobao's web Wangwang, for instance, uses Jetty as its Servlet engine.
In addition, because Jetty's architecture is so simple, it can load components on demand, so unneeded components can be removed; this reduces the server's own memory overhead as well as the temporary objects created when processing a request, which improves performance. Jetty also defaults to NIO, which is more advantageous for handling I/O requests, while Tomcat uses BIO by default, and Tomcat is not as good as Jetty at serving static resources.
Feature comparison
As standard Servlet engines, both support the standard Servlet specification as well as the Java EE specifications. Because Tomcat is more widely used, its support for them is more comprehensive, and many features are integrated directly into Tomcat. Jetty, however, responds faster: partly because Jetty's development community is more active, and partly because modifying Jetty is easier, usually only requiring the replacement of the relevant component, whereas Tomcat's overall structure is much more complex and its features change more slowly. As a result, Tomcat's support for new Servlet specifications tends to arrive later.