How Netty Achieves High Reliability


This article analyses, from an engineering point of view, how Netty achieves high reliability.

1. Background

1.1. The cost of downtime

1.1.1. Telecom industry

A KPMG International survey of 74 operators in 46 countries found that the global communications industry loses about $40 billion a year, roughly 1 per cent of total revenue. Many factors contribute to this lost revenue, the main one being billing bugs.

1.1.2. Internet industry

In August 2013, Google suffered an outage from 03:50 to 03:55 Pacific time (06:50 to 06:55 Beijing time on August 17). By later estimates, Google lost $545,000 in those five minutes, that is, roughly $108,000 for every minute of service interruption.

Also in 2013, starting at 02:45 EDT on August 19, some users noticed an outage on Amazon, which returned to normal more than 20 minutes later. The outage cost Amazon nearly $67,000 per minute, and during it consumers were unable to shop through sites such as Amazon.com, Amazon mobile and Amazon.ca.

1.2. Software reliability

Software reliability refers to the probability of software running without error in a given time and in a specific environment. Software reliability includes the following three elements:

1) specified time: software reliability is only reflected in its running phase, so the running time is taken as a measure of the specified time. The running time includes the cumulative time of working and suspending (open but idle) after the software system is running. Due to the randomness of the environment in which the software runs and the selection of the program path, the failure of the software is a random event, so the running time is a random variable.

2) specified environmental conditions: environmental conditions refer to the operating environment of the software. It involves various supporting elements needed for the operation of the software system, such as supporting hardware, operating system, other supporting software, input data format and scope, operating procedures and so on. The reliability of software is different under different environmental conditions. Specifically, the specified environmental conditions mainly describe the configuration of the computer when the software system is running and the requirements for input data, and assume that all other factors are ideal. With clearly defined environmental conditions, we can effectively judge whether the responsibility for software failure lies with the user or the provider.

3) specified functions: software reliability is also related to specified tasks and functions. Due to the different tasks to be completed, the running profile of the software will be different, the sub-modules called will be different (that is, the program path selection is different), and its reliability may be different. Therefore, in order to accurately measure the reliability of the software system, we must first clarify its tasks and functions.

1.3. Reliability of Netty

First of all, we need to analyze the reliability of Netty in terms of its main uses. There are three main uses of Netty:

1) build the basic communication components of RPC invocation to provide remote service invocation across nodes

2) NIO communication framework for data exchange across nodes

3) the basic communication component of other application-layer protocol stacks, for example HTTP and other application protocols built on top of Netty.

Take Alibaba's distributed service framework Dubbo as an example: Netty sits at the core of the Dubbo RPC framework. An example of a service invocation is shown below:

Figure 1-1 Node role description diagram of Dubbo

Here, RPC calls between service providers and service consumers are made through the Dubbo protocol, and the messages are sent and received through Netty by default.

Through the analysis of the mainstream application scenarios of Netty, we find that the reliability problems faced by Netty are roughly divided into three categories:

1) traditional network I/O faults, such as transient network disconnection (flash disconnect), connections hung by a firewall, network timeouts, and so on;

2) NIO-specific faults, such as bugs specific to the NIO class library, exceptions in half-packet read/write handling, Reactor thread runaway, and so on;

3) exceptions related to encoding and decoding.

In most business scenarios, once Netty fails to work properly because of some fault, the business is often paralyzed. Therefore, from the perspective of business demands, the reliability requirements placed on the Netty framework are very high. As the most popular NIO framework in the industry, Netty is widely used across industries and domains, and its high reliability has been tested by hundreds of production systems.

How does Netty support high reliability of the system? Next, let's take a look at it from several different dimensions.

2. How Netty achieves high reliability

2.1. Network communication failures

2.1.1. Client connection timeout

In the traditional synchronous blocking programming mode, when a client Socket initiates a network connection, it is often necessary to specify the connection timeout for two main purposes:

1) in the synchronous blocking I/O model, the connect operation blocks synchronously; if no timeout is set, the client I/O thread may be blocked for a long time, reducing the number of I/O threads available to the system;

2) business-layer needs: most systems limit the execution time of business flows, for example requiring web interactions to respond within 3s, so the client sets a connection timeout to meet the business-layer deadline.

The connect interface of the JDK NIO SocketChannel is defined as follows:

Figure 2-2 JDK NIO class library SocketChannel connection interface

From the interface definition above, we can see that the NIO class library does not offer a ready-made connection-timeout interface for users. To support connection timeouts in NIO programming, the NIO framework or the user has to implement the encapsulation.

Let's take a look at how Netty supports connection timeout. First, you can configure the connection timeout parameter when creating a NIO client:

Figure 2-3 Netty client creation supports setting connection timeout parameters
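
For reference, a minimal client sketch that sets this option through ChannelOption.CONNECT_TIMEOUT_MILLIS might look like the following; the host, port and the empty initializer are placeholders, not part of Netty's documentation:

```java
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;

public final class ConnectTimeoutClient {
    public static void main(String[] args) {
        NioEventLoopGroup group = new NioEventLoopGroup();
        Bootstrap b = new Bootstrap();
        b.group(group)
         .channel(NioSocketChannel.class)
         // Fail the connect attempt if the TCP handshake does not complete within 3s.
         .option(ChannelOption.CONNECT_TIMEOUT_MILLIS, 3000)
         .handler(new ChannelInitializer<SocketChannel>() {
             @Override
             protected void initChannel(SocketChannel ch) {
                 // Business codecs and handlers would be added to ch.pipeline() here.
             }
         });

        ChannelFuture f = b.connect("127.0.0.1", 8080);
        f.addListener((ChannelFutureListener) future -> {
            if (!future.isSuccess()) {
                // On timeout the cause is typically a ConnectTimeoutException.
                System.err.println("connect failed: " + future.cause());
                group.shutdownGracefully();
            }
        });
    }
}
```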

After the connection timeout is set, when Netty initiates a connection it creates a ScheduledFuture based on the timeout value and mounts it on the Reactor thread to monitor whether a connection timeout occurs. The relevant code is as follows:

Figure 2-4 create timeout monitoring scheduled tasks based on connection timeout

After the connection-timeout task is created, NioEventLoop is responsible for executing it. If the timeout has expired and the server has still not returned a TCP handshake reply, the connection is closed, as shown in the code above.

If the connection operation is completed within the timeout period, the connection timeout task is cancelled, and the related code is as follows:

Figure 2-5 cancel the connection timeout task
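
The following is a simplified illustration of this watchdog idea only, not Netty's actual source: schedule a timeout task on the event loop when the connect is issued and cancel it once the connect completes. The class and method names are hypothetical.

```java
import io.netty.channel.Channel;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelPromise;
import io.netty.channel.ConnectTimeoutException;
import io.netty.channel.EventLoop;
import io.netty.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

/** Simplified illustration of a connect-timeout watchdog; not Netty's actual source. */
final class ConnectTimeoutWatchdogSketch {
    static void watchConnect(EventLoop eventLoop, Channel channel,
                             ChannelPromise connectPromise, long timeoutMillis) {
        // Schedule a watchdog on the Reactor (event loop) thread when connect() is issued.
        ScheduledFuture<?> timeoutTask = eventLoop.schedule(() -> {
            if (!connectPromise.isDone()) {
                // No TCP handshake reply within the timeout: fail the promise and close.
                connectPromise.tryFailure(new ConnectTimeoutException("connection timed out"));
                channel.close();
            }
        }, timeoutMillis, TimeUnit.MILLISECONDS);

        // If the connect completes (successfully or not) before the deadline,
        // cancel the watchdog so it never fires.
        connectPromise.addListener((ChannelFutureListener) future -> timeoutTask.cancel(false));
    }
}
```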

The client connection timeout parameter of Netty is configured together with other commonly used TCP parameters, which is very convenient to use, and the upper layer users do not have to care about the underlying timeout implementation mechanism. This not only meets the personalized needs of users, but also realizes the hierarchical isolation of faults.

2.1.2. The communication peer forcibly closes the connection

During normal communication between client and server, a network flash disconnect, a sudden crash of the peer process, or any other abnormal link-closing event leaves the TCP connection in an abnormal state. Because TCP is full-duplex, both sides of the communication need to close and release their Socket handles so that no handle leak occurs.

In the actual NIO programming process, we often find functional and reliability problems caused by the handle not being closed in time. The reasons are summarized as follows:

1) I/O read and write operations are not confined to the Reactor thread; customized behaviour at the upper layer, such as a business-defined heartbeat mechanism, may let I/O operations escape from it. Such customization makes unified exception handling harder, and the more scattered the I/O operations, the higher the probability of failure;

2) some exception branches are overlooked; when external triggers drive the program into those branches, failures occur.

Let's use fault simulation to see how Netty handles a forced peer link shutdown. First, start the Netty server and the client. After the TCP connection is established successfully, both sides keep the link alive; checking the link status gives the following result:

Figure 2-6 Netty server and client TCP link status is normal

Forcibly close the client to simulate a client crash; the server console then prints the following exception:

Figure 2-7 simulates TCP link failure

Judging from the stack trace, the server has detected that the client forcibly closed the connection. To check whether the server has also released the connection handle, run the netstat command again. The result is as follows:

Figure 2-8 View the status of the failed link

As can be seen from the execution result, the server has closed the TCP connection with the client, and the handle resources are released normally. It can be concluded that the bottom layer of Netty has dealt with the fault automatically.

Let's take a look at how Netty senses the link-close exception and handles it correctly. Look at the writeBytes method of AbstractByteBuf, which reads data from the specified Channel and writes it into the ByteBuf. The detailed code is as follows:

Figure 2-9 writeBytes method for AbstractByteBuf

If an IOException occurs while calling the read method of SocketChannel, the code path is as follows:

Figure 2-10 IO exception occurred when reading buffer data

To ensure that I/O exceptions are handled uniformly, the exception is thrown upward and the unified exception handling is performed by AbstractNioByteChannel. The code is as follows:

Figure 2-11 Link abnormal exit exception handling

To unify the exception policy, ease maintenance, and prevent handle leaks caused by improper handling, handle closing is done uniformly through the close method of AbstractChannel. The code is as follows:

Figure 2-12 Unified Socket handle shuts down the interface
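
From the user's side, the same "close on failure" discipline can be mirrored in a custom handler. The sketch below is an illustrative pattern, not Netty's internal code: any exception that reaches the end of the pipeline leads to the channel being closed so the handle cannot leak.

```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

/**
 * Illustrative pattern only (not Netty's internal code): centralize the policy of
 * "any I/O failure releases the link" so the Socket handle can never be leaked,
 * no matter where the exception originated.
 */
public class CloseOnErrorHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        // Logging / alarming could be added here; the essential step is to close.
        ctx.close();
    }
}
```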

2.1.3. Normal connection closed

For short-connection protocols such as HTTP, after the data exchange between the two communicating parties is completed, the server usually closes the connection according to the agreement between the two sides. After the client receives the TCP connection-close request, it closes its own Socket connection, and the two sides formally disconnect.

In practical NIO programming there is a common misconception: that as long as the peer closes the connection an I/O exception will occur, so it is enough to close the connection after catching that exception. In fact, a legal close of the connection does not produce an I/O exception; it is a normal scenario, and omitting the judgment and handling of this scenario leads to connection handle leaks.

Let's simulate the fault and see how Netty handles it. The test scenario is designed as follows: modify the Netty client so that, after the link between the two sides is established successfully, it waits 120s and then closes the link normally, and then check whether the server can perceive this and release the handle resources.

First, start the Netty client and server, and the TCP links between the two sides are normal:

Figure 2-13 TCP connection status is normal

After 120s the client closes the connection and its process exits. To observe the whole sequence, we set a breakpoint in the server's Reactor thread and do not let it proceed. The link status is as follows:

Figure 2-14 TCP connection handle waiting to be released

As can be seen from the figure above, the server has not yet closed the Socket connection, and the link is in the CLOSE_WAIT state. Release the breakpoint and let the server finish execution; the result is as follows:

Figure 2-15 TCP connection handle is released normally

Let's look at how the server determines that the client has closed the connection. When the connection is legally closed by the peer, the closed SocketChannel becomes ready, and its read operation returns -1, indicating that the connection has been closed. The code is as follows:

Figure 2-16 needs to judge the number of bytes read

If the SocketChannel is set to non-blocking, its read operation may return three kinds of values (see the sketch after this list):

1) greater than 0: the number of bytes that were read;

2) equal to 0: no message was read; perhaps the TCP connection is in a Keep-Alive state and only a TCP probe/handshake message was received;

3) -1: the connection has been legally closed by the peer.
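
A minimal sketch of how these three return values are typically interpreted in a non-blocking read loop (plain JDK NIO, illustrative only):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

/** Minimal sketch of interpreting the three return values of a non-blocking read. */
final class ReadResultSketch {
    static void readOnce(SocketChannel channel, ByteBuffer buffer) throws IOException {
        int readBytes = channel.read(buffer);
        if (readBytes > 0) {
            // Data arrived: hand the bytes to the decoder / business logic.
        } else if (readBytes == 0) {
            // Nothing readable this round: keep the link open and wait for the next event.
        } else {
            // -1: the peer closed the connection legally. No IOException is thrown,
            // so this case must be handled explicitly or the handle stays in CLOSE_WAIT.
            channel.close();
        }
    }
}
```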

Through debugging, we can confirm that the return value from the NIO class library is indeed -1:

Figure 2-17 the link is closed normally and the return value is -1

After learning that the connection is closed, Netty sets the close operation bit to true and closes the handle, as follows:

Figure 2-18 connection closes normally, releasing resources

2.1.4. Fault customization

In most scenarios, when the underlying network fails, the underlying NIO framework should be responsible for releasing resources, handling exceptions, and so on. The upper-level business applications do not need to care about the underlying processing details. However, in some special scenarios, users may need to perceive these exceptions and customize their handling, such as:

1) disconnection and reconnection mechanism of the client

2) cached retransmission of messages

3) record the fault details in the API log

4) Operation and maintenance related functions, such as alarm, trigger email / SMS, etc.

Netty's handling strategy is: when an I/O exception occurs, release the underlying resources, then notify the upper-layer user of the exception stack information in the form of an event, and leave the exception handling itself to the user's customization. This mechanism keeps exception handling safe while giving the upper layer flexible customization capability.

The specific API definition and default implementation are as follows:

Figure 2-19 Fault Custom Interface

Users can override this API to customize exception handling, for example to initiate reconnection, as in the sketch below.
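
The following is a hedged sketch of such a customization: a handler that closes the link on exceptions and schedules a reconnect when the link goes down. The Bootstrap field, the 5-second delay, and the handler name are assumptions for illustration, not Netty defaults.

```java
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import java.util.concurrent.TimeUnit;

/** Hypothetical client-side customization: reconnect after the link goes down. */
public class ReconnectHandler extends ChannelInboundHandlerAdapter {
    private final Bootstrap bootstrap;
    private final String host;
    private final int port;

    public ReconnectHandler(Bootstrap bootstrap, String host, int port) {
        this.bootstrap = bootstrap;
        this.host = host;
        this.port = port;
    }

    @Override
    public void channelInactive(ChannelHandlerContext ctx) {
        // The link is gone (flash disconnect, peer crash, ...): retry after a delay,
        // reusing the event loop so no extra thread is needed.
        ctx.channel().eventLoop().schedule(() -> {
            bootstrap.connect(host, port);
        }, 5, TimeUnit.SECONDS);
        ctx.fireChannelInactive();
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        // Record the fault (log / alarm / SMS hook could go here), then release the link;
        // channelInactive above will take care of the reconnect.
        ctx.close();
    }
}
```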

2.2. Link validity detection

When the network suffers one-way connectivity, the connection is hung by a firewall, the peer is stuck in a long GC pause, or an unexpected exception occurs in the communication thread, the link becomes unavailable and is not easy to discover in time. In particular, if the fault occurs during the early-morning business trough, the unavailable link will cause an instant surge of business failures or timeouts when the morning service peak arrives, posing a major threat to system reliability.

From a technical point of view, in order to solve the problem of link reliability, the effectiveness of the link must be tested periodically. At present, the most popular and common method is heartbeat detection.

The heartbeat detection mechanism is divided into three levels:

1) heartbeat detection at the TCP level, that is, the Keep-Alive mechanism of TCP, whose scope is the entire TCP protocol stack.

2) heartbeat detection at the protocol layer, which mainly exists in long-connection protocols such as SMPP.

3) heartbeat detection in the application layer, which is mainly realized by each business product sending heartbeat messages to each other regularly in an agreed way.

The purpose of heartbeat detection is to confirm that the current link is available, that the other party is alive and can receive and send messages normally.

As a highly reliable NIO framework, Netty also provides a heartbeat detection mechanism. Let's familiarize ourselves with the heartbeat detection principle.

Figure 2-20 heartbeat detection mechanism

There are also differences in heartbeat detection mechanisms among different protocols, which can be divided into two categories:

1) Ping-Pong heartbeat: one party sends a Ping message periodically, and the peer returns a Pong response as soon as it receives the Ping; this is a request-response heartbeat.

2) Ping-Ping heartbeat: no distinction is made between heartbeat request and response; both parties send heartbeat Ping messages to each other at the agreed interval; this is a two-way heartbeat.

The heartbeat detection strategy is as follows:

1) if no Pong response or Ping request is received for N consecutive heartbeat periods, the link is considered logically failed; this is called heartbeat timeout.

2) if an I/O exception occurs directly while reading or sending a heartbeat message, the link has failed; this is called heartbeat failure.

Regardless of heartbeat timeout or heartbeat failure, the link needs to be closed and the client initiates the reconnection operation to ensure that the link can return to normal.

The heartbeat detection of Netty is actually realized by using the link idle detection mechanism, and the related code is as follows:

Figure 2-21 Code package path for heartbeat detection

Netty provides three idle-detection mechanisms:

1) read idle: no message has been read on the link for duration t;

2) write idle: no message has been sent on the link for duration t;

3) read/write idle: no message has been received or sent on the link for duration t.

Netty's default behaviour for read/write timeouts is to raise a timeout exception and close the connection, but the timeout handling logic can be customized to support different user scenarios.

The timeout API of WriteTimeoutHandler is as follows:

Figure 2-22 write timeout

The timeout API of ReadTimeoutHandler is as follows:

Figure 2-23 read timeout

The read/write idle API is as follows:

Figure 2-24 idle read and write

Using the link idle detection mechanism provided by Netty, protocol-layer heartbeat detection can be implemented very flexibly, as in the sketch below. In the chapter on private protocol stack design and development in the Authoritative Guide to Netty, I implemented another heartbeat detection mechanism using the custom Task interface provided by Netty; interested readers can refer to the book.
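
As an illustration, a protocol-layer heartbeat can be assembled from IdleStateHandler plus a small event handler (Netty 4.x API). The thresholds (60s read idle, 30s write idle) and the "PING" message are assumptions; a real protocol would send a proper heartbeat frame through its codec.

```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.timeout.IdleState;
import io.netty.handler.timeout.IdleStateEvent;
import io.netty.handler.timeout.IdleStateHandler;

/** Sketch: protocol-layer heartbeat built on Netty's idle detection (Netty 4.x API). */
public class HeartbeatInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        // readerIdle = 60s, writerIdle = 30s, allIdle disabled (0).
        ch.pipeline().addLast(new IdleStateHandler(60, 30, 0));
        ch.pipeline().addLast(new ChannelInboundHandlerAdapter() {
            @Override
            public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
                if (evt instanceof IdleStateEvent) {
                    IdleState state = ((IdleStateEvent) evt).state();
                    if (state == IdleState.WRITER_IDLE) {
                        // Nothing sent for 30s: emit a heartbeat Ping (placeholder message;
                        // a real protocol would send a proper heartbeat frame via its codec).
                        ctx.writeAndFlush("PING");
                    } else if (state == IdleState.READER_IDLE) {
                        // Nothing received for 60s: treat the link as logically dead.
                        ctx.close();
                    }
                } else {
                    super.userEventTriggered(ctx, evt);
                }
            }
        });
        // Protocol codecs and business handlers follow here.
    }
}
```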

2.3. Protection of Reactor threads

The Reactor thread is the core of I/O operations and the engine of the NIO framework; if it fails, the multiplexer and the many links mounted on it stop working properly. Its reliability requirements are therefore very high.

The author has encountered faults in which, because of improper exception handling, the Reactor thread ran away and a large number of business requests failed. Let's take a look at how Netty effectively improves the reliability of the Reactor thread.

2.3.1. Be careful with exception handling

Although the Reactor thread mainly handles I/O operations and the exceptions it encounters are usually I/O exceptions, non-I/O exceptions can arise in some special scenarios; if only I/O exceptions are caught, the Reactor thread may run away. To prevent this, be sure to catch Throwable inside the loop, rather than only IOException or Exception.

The relevant codes of Netty are as follows:

Figure 2-25 Reactor thread exception protection

After catching Throwable, even if an unexpected unknown exception occurs, the thread does not run away; it sleeps for 1s to avoid a tight loop of repeated failures, and then resumes execution. The core ideas of this approach (sketched after the list below) are:

1) the exception of a message should not cause the entire link to be unavailable

2) the unavailability of one link should not cause other links to become unavailable

3) the unavailability of a process should not cause other cluster nodes to be unavailable.
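
The sketch below illustrates the protection idea in isolation (it is not NioEventLoop's actual code): the loop catches Throwable, backs off for one second, and keeps running.

```java
/**
 * Simplified illustration of the protection idea only (not NioEventLoop's actual code):
 * the I/O loop catches Throwable, backs off briefly, and keeps serving the other links.
 */
final class GuardedIoLoopSketch implements Runnable {
    private volatile boolean running = true;

    @Override
    public void run() {
        while (running) {
            try {
                processSelectedKeys(); // hypothetical: poll the Selector, do reads/writes, run tasks
            } catch (Throwable t) {
                // Catch Throwable, not just IOException/Exception, so an unexpected error
                // in one message or one link cannot kill the whole Reactor thread.
                try {
                    Thread.sleep(1000); // sleep 1s to avoid a tight loop of repeated failures
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }
    }

    void stop() {
        running = false;
    }

    private void processSelectedKeys() {
        // Placeholder for the real I/O work.
    }
}
```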

2.3.2. Dead-loop protection

In general, a dead loop is detectable and preventable but cannot be completely avoided. Reactor threads usually deal with I/O-related operations, so we focus on dead loops at the I/O level.

The most famous example is the epoll bug in the JDK NIO class library, which causes the Selector to spin on empty polls and drives the I/O thread to 100% CPU, seriously affecting the safety and reliability of the system.

SUN claimed to have fixed the bug in JDK 1.6 update 18, but according to industry testing and feedback the bug still existed in early versions of JDK 1.7 and had not been completely fixed. The resource usage of a host on which the bug occurred is shown below:

Figure 2-26 epoll bug CPU empty polling

Since the bug has not been thoroughly fixed at the JDK level, the problem can only be worked around at the NIO framework level. Let's take a look at how Netty does it.

Netty's avoidance strategy is as follows:

1) based on the characteristics of the bug, first detect whether it has occurred;

2) transfer the Channels registered on the problematic Selector to a newly created Selector;

3) close the old, problematic Selector and replace it with the new one (a simplified sketch of the strategy follows).
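
A simplified sketch of this avoidance strategy is shown below; the empty-select threshold and the structure are illustrative, and Netty's real logic (examined next) lives in NioEventLoop.

```java
import java.io.IOException;
import java.nio.channels.SelectableChannel;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

/**
 * Simplified sketch of the avoidance strategy (thresholds and structure are illustrative):
 * count select() calls that return immediately with zero ready keys, and rebuild the
 * Selector once the count exceeds a threshold.
 */
final class SelectorRebuildSketch {
    private static final int EMPTY_SELECT_THRESHOLD = 512;
    private Selector selector;
    private int emptySelects;

    SelectorRebuildSketch(Selector selector) {
        this.selector = selector;
    }

    void selectLoopOnce(long timeoutMillis) throws IOException {
        long start = System.nanoTime();
        int selected = selector.select(timeoutMillis);
        long elapsedMillis = (System.nanoTime() - start) / 1_000_000;

        if (selected == 0 && elapsedMillis < timeoutMillis) {
            // Woke up early with nothing ready: possible epoll empty-poll bug.
            if (++emptySelects >= EMPTY_SELECT_THRESHOLD) {
                rebuildSelector();
                emptySelects = 0;
            }
        } else {
            emptySelects = 0;
        }
    }

    private void rebuildSelector() throws IOException {
        Selector newSelector = Selector.open();
        // Re-register every channel (with its interest ops) on the new Selector.
        for (SelectionKey key : selector.keys()) {
            if (key.isValid()) {
                SelectableChannel ch = key.channel();
                int interestOps = key.interestOps();
                key.cancel();
                ch.register(newSelector, interestOps, key.attachment());
            }
        }
        selector.close();  // drop the broken Selector
        selector = newSelector;
    }
}
```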

Now let's look at Netty's own code. First, detect whether the bug has occurred:

Figure 2-27 epoll bug detection

Once the BUG is detected, the Selector is rebuilt as follows:

Figure 2-28 rebuilding Selector

After the reconstruction is complete, replace the old Selector with the following code:

Figure 2-29 replace Selector

Experience from a large number of production systems shows that Netty's avoidance strategy solves the problem of I/O-thread CPU dead loops caused by the epoll bug.

2.4. Graceful exit

Graceful shutdown in Java is usually implemented by registering a JDK ShutdownHook. When the system receives the exit signal, it first marks itself as being in the exiting state and stops accepting new messages, then processes the backlog of messages, then calls the resource-release APIs to destroy resources, and finally each thread exits.

There is usually a time limit on graceful exit, for example 30s. If the pre-exit work has not finished within that time, the monitoring script issues kill -9 and forces the process to exit.

Netty's graceful-exit capability has kept improving as the framework is optimized and evolves. Let's take a look at graceful exit in Netty 5.

First, look at the Reactor threads and thread groups, which provide a graceful-exit interface. The API of EventExecutorGroup is defined as follows:

Figure 2-30 EventExecutorGroup exits gracefully
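
A minimal sketch of wiring these APIs into a JDK ShutdownHook is shown below; the group sizes and the omitted bootstrap are placeholders:

```java
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;

/** Minimal sketch of triggering Netty's graceful shutdown from a JDK ShutdownHook. */
public final class GracefulExitSketch {
    public static void main(String[] args) {
        EventLoopGroup bossGroup = new NioEventLoopGroup(1);
        EventLoopGroup workerGroup = new NioEventLoopGroup();

        // ... bootstrap the server with bossGroup / workerGroup here ...

        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            // Stop accepting new tasks, let in-flight work drain, then release resources.
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }, "netty-shutdown-hook"));
    }
}
```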

The resource release API implementation of NioEventLoop:

Figure 2-31 NioEventLoop resource release

The shutdown interface of ChannelPipeline:

Figure 2-32 ChannelPipeline shuts down the interface

The main interfaces and classes provided by Netty expose resource-destruction and graceful-exit interfaces; users' custom implementation classes can inherit from them to release user resources and exit gracefully.

2.5. Memory protection

2.5.1. Memory leak protection of buffers

To improve memory utilization, Netty provides memory pooling and object pooling. Once pooling is used, however, memory allocation and release must be managed strictly, otherwise memory leaks are easy to introduce.

Without the memory pool, each object is created as a local variable of a method; after use, as long as it is no longer referenced, the JVM reclaims it automatically. Once the memory pool mechanism is introduced, however, the object's life cycle is managed by the memory pool, which usually holds a global reference, and this memory is not reclaimed by the JVM unless it is released explicitly.

Netty users vary greatly in skill level. Some users who do not understand the JVM memory model and leak mechanisms, especially Java programmers used to automatic garbage collection, may remember to allocate memory but forget to release it actively.

To prevent memory leaks caused by such omissions, Netty automatically releases the memory in the tail handler of the Pipeline. The related code is as follows:

Figure 2-33 memory recovery operation for TailHandler

For a pooled buffer, the release actually puts the buffer back into the memory pool for recycling. The code is as follows:

Figure 2-34 memory recovery operation for PooledByteBuf
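
The user-side discipline that complements this safety net is to release pooled buffers explicitly whenever a message is consumed without being passed further down the pipeline. A sketch using the Netty 4.x-style API:

```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.util.ReferenceCountUtil;

/** Sketch: release a pooled message explicitly when it is not propagated further. */
public class ConsumeAndReleaseHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        try {
            // Consume msg here without forwarding it down the pipeline.
        } finally {
            // Explicitly return the buffer to the pool; without this (or the tail
            // handler's safety net) the pooled memory would never be reclaimed.
            ReferenceCountUtil.release(msg);
        }
    }
}
```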

2.5.2. Buffer memory overflow protection

Anyone who has implemented a protocol stack knows that a buffer has to be created when decoding a message. Buffers are usually created in one of two ways:

1) pre-allocate a capacity and expand it during actual reads and writes if it is not enough

2) create a buffer based on the length of the protocol message.

In real commercial environments, problems such as malformed byte-stream attacks, abnormal protocol message encoding, or message packet loss can cause an extremely large length field to be parsed. The author once encountered a similar problem: the value of a message length field exceeded 2 GB, and because one code branch did not enforce an upper limit on the length, memory overflowed. The system overflowed again within a few seconds of rebooting; fortunately the root cause was located in time, otherwise it would probably have led to a serious accident.

Netty provides a codec framework, so upper-limit protection of the decoding buffer is very important. Next, let's look at how Netty enforces the buffer upper limit:

First, specify the upper limit of buffer length when allocating memory:

Figure 2-35 buffer allocator can specify the maximum buffer length

Secondly, when writing to the buffer, if the capacity is insufficient and needs to be expanded, the maximum capacity is checked first; if the expanded capacity would exceed the upper limit, the expansion is rejected:

Figure 2-35 buffer extension upper limit protection

Finally, when decoding, the message length is checked; if it exceeds the maximum capacity limit, a decoding exception is thrown and the memory allocation is refused:

Figure 2-36 Decoding of half packets exceeding the capacity limit failed

Figure 2-37 throws a TooLongFrameException exception
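
The same upper-limit protection is what users get, for example, from a frame decoder configured with a maximum frame length: frames whose length field exceeds the limit are rejected with a TooLongFrameException instead of triggering a huge allocation. A sketch follows; the 1 MB limit and the 4-byte length-field layout are assumptions for illustration.

```java
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.LengthFieldBasedFrameDecoder;

/** Sketch: cap the decoded frame size so an abnormal length field cannot exhaust memory. */
public class BoundedFrameInitializer extends ChannelInitializer<SocketChannel> {
    private static final int MAX_FRAME_LENGTH = 1024 * 1024; // 1 MB upper limit (illustrative)

    @Override
    protected void initChannel(SocketChannel ch) {
        ch.pipeline().addLast(new LengthFieldBasedFrameDecoder(
                MAX_FRAME_LENGTH, // maxFrameLength: frames claiming more raise TooLongFrameException
                0,                // lengthFieldOffset
                4,                // lengthFieldLength
                0,                // lengthAdjustment
                4));              // initialBytesToStrip: drop the length header
        // Message decoders and business handlers follow here.
    }
}
```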

2.6. Traffic shaping

Most commercial systems consist of multiple network elements or components. SMS interaction, for example, involves mobile phones, base stations, the SMS center, the SMS gateway, SP/CP and other network elements. Different network elements or components differ in processing performance. To prevent downstream network elements from being crushed by a traffic surge or by their own lower performance, the system sometimes needs to provide a traffic shaping function.

Let's look at the definition of traffic shaping: traffic shaping (Traffic Shaping, TS) is a measure that actively adjusts the output rate of traffic. A typical application is to control the output of local traffic based on the TP indicators of the downstream network node. The main difference between traffic shaping and traffic policing is that shaping caches the packets that policing would discard, usually by putting them into buffers or queues; when the token bucket has enough tokens, these cached messages are sent out at an even rate. Another difference is that shaping may increase delay, while policing introduces almost no additional delay.

The principle diagram of traffic shaping is as follows:

Figure 2-38 flow shaping schematic diagram

As a high-performance NIO framework, Netty's traffic shaping serves two purposes:

1) prevent downstream network elements from being crushed, and business flows from being interrupted, because of uneven performance between upstream and downstream network elements;

2) prevent the communication module from receiving messages faster than the back-end business threads can process them, which would otherwise overwhelm the system with a backlog of unprocessed messages.

Next we will specifically learn the traffic shaping function of Netty.

2.6.1. Global traffic shaping

The scope of global traffic shaping is the whole process: no matter how many Channels are created, the shaping applies to all of them.

Users can set the following parameters: the message receive rate, the message send rate, and the shaping check interval. The relevant interface is as follows:

Figure 2-39 Global traffic shaping parameter settings
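
A sketch of enabling global traffic shaping in a pipeline is shown below; the write/read limits and the 1s check interval are illustrative values, and the single shared handler instance is deliberately applied to every channel because its scope is process-wide.

```java
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.traffic.GlobalTrafficShapingHandler;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;

/** Sketch: one shared GlobalTrafficShapingHandler whose scope is every channel in the process. */
public class GlobalShapingInitializer extends ChannelInitializer<SocketChannel> {
    private static final ScheduledExecutorService EXECUTOR =
            Executors.newSingleThreadScheduledExecutor();
    private static final GlobalTrafficShapingHandler SHAPER =
            new GlobalTrafficShapingHandler(
                    EXECUTOR,
                    1024 * 1024,  // writeLimit: message sending rate, bytes per second
                    512 * 1024,   // readLimit: message receiving rate, bytes per second
                    1000);        // checkInterval: shaping period in milliseconds

    @Override
    protected void initChannel(SocketChannel ch) {
        // The same shared instance is added to every channel's pipeline.
        ch.pipeline().addLast(SHAPER);
        // Codecs and business handlers follow here.
    }
}
```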

The principle of Netty's traffic shaping is as follows: the number of bytes in each ByteBuf that is read is accumulated to obtain the current traffic, which is then compared with the traffic shaping threshold. If the threshold has been reached or exceeded, a wait delay is calculated, the current ByteBuf is cached in a scheduled Task, and the scheduled-task thread pool continues processing the ByteBuf after the delay has elapsed. The related code is as follows:

Figure 2-40 dynamic calculation of current traffic

If the shaping threshold has been reached, the newly received ByteBuf is cached and placed into the thread pool's message queue to be processed later. The code is as follows:

Figure 2-41 caching the current ByteBuf

The delay of the scheduled task is calculated from the check period T and the traffic shaping threshold. The code is as follows:

Figure 2-42 calculate the cache wait period

It should be pointed out that the larger the traffic shaping threshold limit is, the higher the shaping accuracy. Traffic shaping is a reliability safeguard and cannot be 100% accurate; this is related to the back-end codec and the buffer processing strategy, which will not be discussed here. Interested readers can think about why Netty cannot be 100% accurate.

The biggest difference between traffic shaping and flow control is that flow control rejects messages, while traffic shaping neither rejects nor discards them: no matter how much traffic it receives, it always sends messages out at an approximately constant rate, similar in principle and function to a transformer.

2.6.2. Single link traffic shaping

In addition to global traffic shaping, Netty also supports per-link traffic shaping; the relevant interface is defined as follows:

Figure 2-43 single link traffic shaping

The biggest difference between single-link traffic shaping and global traffic shaping is that it takes a single link as the scope, and different shaping strategies can be set for different links.

Its implementation principle is similar to that of global traffic shaping, so we will not repeat it. It is worth noting that Netty supports user-defined traffic shaping strategies: a custom policy can be implemented by extending AbstractTrafficShapingHandler and overriding its doAccounting method. The relevant API is defined as follows:

Figure 2-44 customized traffic shaping strategy
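
As an illustration, a per-link shaper with a customized accounting hook might look like the sketch below; the rates and the logging are assumptions, and a new instance must be created for each channel.

```java
import io.netty.handler.traffic.ChannelTrafficShapingHandler;
import io.netty.handler.traffic.TrafficCounter;

/** Sketch: per-link shaping with a customized accounting hook (one instance per channel). */
public class AuditingChannelShaper extends ChannelTrafficShapingHandler {
    public AuditingChannelShaper() {
        // writeLimit = 256 KB/s, readLimit = 256 KB/s, checkInterval = 1000 ms (illustrative).
        super(256 * 1024, 256 * 1024, 1000);
    }

    @Override
    protected void doAccounting(TrafficCounter counter) {
        // Called once per check interval; a custom policy could raise alarms,
        // adjust limits dynamically, or export metrics here.
        System.out.printf("read %d B/s, write %d B/s%n",
                counter.lastReadThroughput(), counter.lastWriteThroughput());
        super.doAccounting(counter);
    }
}
```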

3. Summary

Netty has put a great deal of careful design into architectural reliability and protects the system extensively through defensive programming. Nevertheless, system reliability is a process of continuous investment and improvement; it cannot be achieved overnight in a single version, and there is still a long way to go.

From a business point of view, different industries and application scenarios have different reliability requirements. The telecommunications industry, for example, requires five nines of reliability; special industries such as railways require even more, up to six nines; for some edge IT systems of an enterprise, the reliability requirements are lower.

Reliability is an investment. For an enterprise, pursuing extreme reliability is a heavy burden on R&D cost; conversely, if system reliability is neglected, the losses from an online accident are often staggering.
