
Netty Pitfalls: Filling the Holes, I'm a Professional


1. Why does the netty server quit automatically when it starts?

In the original startup snippet, execution falls straight through from the bind call at code point 1 to the shutdown calls at code points 2 and 3, so the Netty server shuts down and exits as soon as it has started.

Solution 1: add a blocking sync right after code point 1 (i.e., block on the channel's close future), so that the statements that follow are executed only after the server channel has closed normally.

Solution 2: move code points 2 and 3 into the operationComplete method of a close-future listener, so the two Netty thread groups are shut down only when the channel has actually closed.
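A minimal sketch of both fixes, assuming the usual ServerBootstrap startup code (b, bossGroup, workerGroup and port stand in for whatever names the original snippet used):

ChannelFuture f = b.bind(port).sync();                 // code point 1

// Solution 1: block on the close future, so the shutdown calls
// (code points 2 and 3) run only after the channel really closes.
f.channel().closeFuture().sync();
bossGroup.shutdownGracefully();
workerGroup.shutdownGracefully();

// Solution 2 (alternative): stay non-blocking and move the shutdown
// calls into operationComplete of a close-future listener.
f.channel().closeFuture().addListener(new ChannelFutureListener() {
    @Override
    public void operationComplete(ChannelFuture future) {
        bossGroup.shutdownGracefully();
        workerGroup.shutdownGracefully();
    }
});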

2. Netty client connection pool resource OOM

The production environment uses Netty on the client side. To improve performance, the client opens multiple links to the server and maintains a TCP connection pool. The result was an OOM at the business peak.

Judging from the exception log and the thread resource consumption, the memory leak was caused by the application creating a large number of EventLoopGroup thread pools, one NIO thread group per TCP connection. The mistake is driving the NIO communication framework in BIO style: not only is there no performance gain, it actually ends in OOM.

The correct approach is described in the note below and sketched after it.

Note: Bootstrap itself is not thread-safe, but performing the connect operations on one Bootstrap serially is fine. The connect method creates a new NioSocketChannel and picks a NioEventLoop thread from the EventLoopGroup built at initialization to perform the actual channel connection, regardless of which thread called the Bootstrap. When multiple client connections are created from the same Bootstrap, the EventLoopGroup is shared: all of those connections share one NIO thread group. When a link fails or is closed, only the Channel itself should be closed and released; do not destroy its NioEventLoop, let alone the whole EventLoopGroup.

Since Bootstrap is not thread-safe, operating one Bootstrap concurrently from multiple threads is both dangerous and pointless.
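A minimal sketch of the correct pattern, assuming a client that needs several connections to the same server (poolSize, host, port and ClientHandler are placeholders):

EventLoopGroup group = new NioEventLoopGroup();        // one shared NIO thread group
Bootstrap b = new Bootstrap();
b.group(group)
 .channel(NioSocketChannel.class)
 .option(ChannelOption.TCP_NODELAY, true)
 .handler(new ChannelInitializer<SocketChannel>() {
     @Override
     public void initChannel(SocketChannel ch) {
         ch.pipeline().addLast(new ClientHandler());
     }
 });

List<Channel> pool = new ArrayList<>();
for (int i = 0; i < poolSize; i++) {
    // every connection reuses the same Bootstrap and the same EventLoopGroup
    pool.add(b.connect(host, port).sync().channel());
}

// When one link breaks, release only that Channel, never the thread group:
pool.get(0).close();
// group.shutdownGracefully() is called exactly once, when the whole client shuts down.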

3. Netty really does have memory-pool leak problems

When ctx.writeAndFlush is called, Netty frees the memory on the application's behalf once the message has been sent. The release scenarios are as follows:

(1) If it is heap memory (PooledHeapByteBuf), the HeapByteBuffer is converted to a DirectByteBuffer for the write, and the PooledHeapByteBuf is released back to the memory pool.

(2) If it is a DirectByteBuffer, no conversion is needed; after the message has been sent, the remove method of ChannelOutboundBuffer releases it.

To manage ByteBuf properly in a real project, consider the following four scenarios.

(1) Request ByteBuf allocated from the memory pool, mainly PooledDirectByteBuf and PooledHeapByteBuf. It is allocated by the NioEventLoop thread while handling the Channel read operation and must be released after the business ChannelInboundHandler has processed the request message (usually after decoding). The release strategies are as follows (a sketch of all three appears further below):

Make the business ChannelInboundHandler extend SimpleChannelInboundHandler and implement its abstract channelRead0 method; the business code then does not need to care about releasing the ByteBuf, because SimpleChannelInboundHandler releases it.

Call ctx.fireChannelRead(msg) in the business ChannelInboundHandler so the request keeps propagating down the pipeline until it reaches TailContext, the inner class of DefaultChannelPipeline, which releases the request message.

Call ReferenceCountUtil.release(reqMsg) directly in the channelRead method.

(2) Request ByteBuf not backed by the memory pool: it also needs to be released, in the same way as the pooled case.

(3) Response ByteBuf backed by the memory pool: as analyzed above, once writeAndFlush or flush is called, the Netty framework frees the memory after the message has been sent, so the business code does not need to release it.

(4) Response ByteBuf not backed by the memory pool: the business code does not need to release it actively either.

Strictly speaking, a non-pooled buffer does not have to be released manually, but it is still best to do so. Netty has four main ByteBuf variants: the byte[] inside UnpooledHeapByteBuf can simply be reclaimed by JVM GC, whereas UnpooledDirectByteBuf wraps a DirectByteBuffer, memory outside the Java heap, so rather than only waiting for GC it is better to release it proactively; otherwise a shortage of direct memory will show up as a memory leak. For PooledHeapByteBuf and PooledDirectByteBuf you must actively return the used byte[] / ByteBuffer to the pool, otherwise the memory will blow up.
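A minimal sketch of the three request-release strategies listed under scenario (1), assuming the inbound message is a ByteBuf (the handler names are placeholders):

// Strategy A: extend SimpleChannelInboundHandler; it releases the message for you.
public class RequestHandlerA extends SimpleChannelInboundHandler<ByteBuf> {
    @Override
    protected void channelRead0(ChannelHandlerContext ctx, ByteBuf msg) {
        // use msg here; no explicit release is needed
    }
}

// Strategy B: keep propagating the message; TailContext releases it at the end of the pipeline.
public class RequestHandlerB extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        ctx.fireChannelRead(msg);
    }
}

// Strategy C: release the message explicitly once it is no longer needed.
public class RequestHandlerC extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        try {
            // process msg here
        } finally {
            ReferenceCountUtil.release(msg);
        }
    }
}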

Memory-pool leak detection can be enabled through a startup parameter.

The available detection levels are:

DISABLED: turns leak detection off completely; not recommended.

SIMPLE: detects leaks at a sampling rate of about 1%; this is the default level.

ADVANCED: same sampling rate as SIMPLE, but prints a detailed leak report.

PARANOID: samples 100% of buffers; the report is the same as ADVANCED.
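For example (assuming Netty 4.1, where the level is read from the io.netty.leakDetection.level system property; my-netty-app.jar is a placeholder):

java -Dio.netty.leakDetection.level=advanced -jar my-netty-app.jar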

Finally, let me quietly tell you: most of the Netty getting-started demos online have this memory-pool leak, but because they transfer so little data, it might take more than half a year of running before a LEAK is ever reported. Even the introductory demo in the "Netty Authoritative Guide" has the problem; presumably, being only an introductory demo, it was kept simple on purpose. If you don't believe it, add the following code to the TimeClientHandler or TimeServerHandler of that demo:

ByteBuf firstMessage = null;
for (int j = 0; j < Integer.MAX_VALUE; j++) {
    firstMessage = Unpooled.buffer(1024);
    for (int i = 0; i < firstMessage.capacity(); i++) {
        firstMessage.writeByte((byte) i);
    }
    ctx.writeAndFlush(firstMessage);
}

Sure enough, a leak. This is why so many people who copy demos straight from the Internet still end up with memory-pool leaks.

4. Memory leak caused by a backlog in Netty's send queue

When the client sends messages at a high rate, its send queue can back up; memory then grows, response times get long and CPU usage climbs. In this situation we can give the client a high/low water mark mechanism so that its own message queue cannot back up indefinitely (a minimal sketch follows below). Besides the client's own queue backlog, the problem can also come from the processing capacity of the network link or from the server reading more slowly than the client sends. So in day-to-day monitoring, metrics such as Netty's link count and the network read/write rates should be fed into the monitoring system, and alerts should be raised as soon as a problem is found.
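A minimal sketch of the client-side high/low water mark protection, assuming Netty 4.1 (the 32 KB / 64 KB thresholds, and the b, channel and msg variables, are placeholders):

Bootstrap b = new Bootstrap();
b.option(ChannelOption.WRITE_BUFFER_WATER_MARK,
         new WriteBufferWaterMark(32 * 1024, 64 * 1024));   // low mark, high mark

// Respect the water mark before writing: isWritable() turns false once the
// pending bytes exceed the high mark and turns true again below the low mark.
if (channel.isWritable()) {
    channel.writeAndFlush(msg);
} else {
    // back off, buffer locally, or drop, according to the business policy
}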

5. Performance fluctuation of an API gateway under high-concurrency load testing

The server forwards each request roughly like this:

public void channelRead(ChannelHandlerContext ctx, Object msg) {
    ctx.write(msg);
    char[] req = new char[64 * 1024];
    executorService.execute(() -> {
        char[] dispatchReq = req;
        // after some simple processing, forward the request message to the back-end service (omitted)
        try {
            // simulate the time spent on business logic
            // (the sleep duration was lost in the original; 100 is a placeholder)
            TimeUnit.MICROSECONDS.sleep(100);
        } catch (Exception e) {
            e.printStackTrace();
        }
    });
}

The results showed high memory and CPU usage while QPS dropped. When the load test was stopped for a while, CPU and memory usage fell back and QPS returned to normal.

Analyzing the heap dump with MAT (Eclipse Memory Analyzer) leads to the following conclusion.

The char[] arrays backlogged behind the thread pool were promoted to the old generation, causing frequent full GC. The root cause: a 64 KB char[] is created for every message, even when the message actually received is only about 100 bytes. Sizing the array to the real message size solves the problem.
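A minimal sketch of the fix, assuming the inbound message is a ByteBuf (executorService and the forwarding step are placeholders):

public void channelRead(ChannelHandlerContext ctx, Object msg) {
    ByteBuf buf = (ByteBuf) msg;
    // size the buffer to the real message instead of a fixed 64 KB block
    char[] req = new char[buf.readableBytes()];
    ctx.write(msg);
    executorService.execute(() -> {
        char[] dispatchReq = req;
        // forward dispatchReq to the back-end service (omitted)
    });
}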

6. Why can't the Netty server receive messages from the client?

Reason: the business logic is executed directly inside the handler, which blocks the I/O operation so that messages sent by the client can no longer be read.

It is recommended to hand business operations off to a separate thread rather than running them on the I/O thread.

Recommended formulas for sizing the business threads:

(1) Number of threads = (total thread processing time / time spent on the bottleneck resource) × parallelism of the bottleneck resource

(2) QPS = (1000 / total thread processing time in milliseconds) × number of threads
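A quick worked example under these formulas (the numbers are invented purely for illustration): if each request needs 10 ms of total thread time, of which 5 ms is spent on the bottleneck resource, and that resource supports 4-way parallelism, then formula (1) suggests (10 / 5) × 4 = 8 threads, and formula (2) estimates QPS ≈ (1000 / 10) × 8 = 800.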

7. "bug" is generated after Netty3.x is upgraded to Netty4.x.

1. After the upgrade, the response data the server sends to the client appears to have been tampered with.

After the upgrade to Netty 4 the thread model changes: encoding of the response message is executed asynchronously by the NioEventLoop thread, and the business thread returns immediately. If the business logic keeps modifying the response message after handing it over, the encoding may run after those modifications, so the result is wrong and the data looks as if it has been tampered with.
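A minimal illustration of the pitfall (the Response class, its setter and the fix are hypothetical, not taken from the original post):

Response resp = new Response();        // hypothetical response object
resp.setBody(result);
ctx.writeAndFlush(resp);               // Netty 4: encoded later on the NioEventLoop thread
// resp.setBody(internalMarker);       // WRONG: this later mutation may be what gets encoded

One common remedy is to finish every mutation before calling writeAndFlush, or to write a defensive copy of the object instead of the shared instance.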

2. Why is the context lost after the upgrade?

Netty 4 changes the threading model on the outbound path, and that is exactly what breaks context delivery for business messages: thread-local variables set on the business thread are lost once the work moves to the NioEventLoop thread.

3. Why did performance not improve as the official material describes after the upgrade, but actually drop?

Time-consuming deserialization should be done on business threads rather than in a ChannelHandler, because in Netty 4 only the NioEventLoop thread handles that work: time-consuming ChannelHandlers are executed serially on the I/O thread, so efficiency is low. In Netty 3's message-sending thread model, by contrast, the encoding work in ChannelHandlers runs on the calling business threads, so N business messages can be processed in parallel within a period T.

Performance optimization suggestions: appropriately increase the number of worker threads (NioEventLoopGroup) to spread the load across the NioEventLoop threads and raise the concurrency of ChannelHandler execution; move time-consuming business operations out of the ChannelHandler into a business thread pool; and for time-consuming logic that is not suitable for moving into business threads, bind a thread pool to the ChannelHandler instead. Note that Netty 3's downstream path is executed by business threads, which means multiple business threads can operate a ChannelHandler at the same time, so users need to protect it against concurrent access.
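A minimal sketch of binding a thread pool to a ChannelHandler, assuming Netty 4.x (the pool size of 16, ProtocolDecoder and BusinessHandler are placeholders):

EventExecutorGroup businessGroup = new DefaultEventExecutorGroup(16);

ch.pipeline().addLast(new ProtocolDecoder());                  // still runs on the NioEventLoop thread
ch.pipeline().addLast(businessGroup, new BusinessHandler());   // runs on the bound business thread pool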

8. Why doesn't the business thread pool that comes with Netty improve performance? Is this a Netty bug?

The server side uses the thread pool that comes with Netty to handle the business logic, and the client side drives it with requests.

The actual result was that server-side QPS stayed in the single digits. The reason: one TCP connection corresponds to one Channel, and one Channel is handled by one DefaultEventExecutor (business thread), so even though a thread pool is bound to the channel's pipeline, each channel is still processed by a single business thread. The solution is to create a separate thread pool inside the ChannelHandler, so the parallel processing power of a real pool can be exploited.

Of course, using the thread pool that comes with Netty on the server side does mean that when multiple TCP connections are established, each connection gets its own thread to run the ChannelHandler, so business parallelism does improve when there are many TCP connections.

The business thread pool provided by Netty reduces lock contention and improves the system's concurrent processing performance. If you use a custom business thread pool and want higher performance, you have to work at eliminating or reducing lock contention: ThreadPoolExecutor uses the "one blocking queue + N worker threads" model, so with a large number of business threads the contention on that queue becomes fierce.

9. What if the processing speed is not balanced between the sender and the receiver?

One available solution is traffic shaping. The biggest difference between traffic shaping and flow control is that flow control rejects or discards messages, whereas traffic shaping rejects and discards nothing: no matter how much it receives, it keeps sending at an approximately constant rate, much like the principle and purpose of a transformer.

The receiver code is as follows:

// Configure the server-side NIO thread groups
EventLoopGroup bossGroup = new NioEventLoopGroup();
EventLoopGroup workerGroup = new NioEventLoopGroup();
try {
    ServerBootstrap b = new ServerBootstrap();
    b.group(bossGroup, workerGroup)
     .channel(NioServerSocketChannel.class)
     .option(ChannelOption.SO_BACKLOG, 100)   // backlog value was lost in the original; 100 is a placeholder
     .handler(new LoggingHandler(LogLevel.INFO))
     .childHandler(new ChannelInitializer<SocketChannel>() {
         @Override
         public void initChannel(SocketChannel ch) throws Exception {
             ch.pipeline().addLast("ChannelTrafficShaping",
                     new ChannelTrafficShapingHandler(1024 * 1024, 1024 * 1024, 1000));
             ByteBuf delimiter = Unpooled.copiedBuffer("$_".getBytes());
             ch.pipeline().addLast(new DelimiterBasedFrameDecoder(2048 * 1024, delimiter));
             ch.pipeline().addLast(new StringDecoder());
             ch.pipeline().addLast(new TrafficShapingServerHandler());
         }
     });

    // Bind the port and wait for the bind to complete
    ChannelFuture f = b.bind(port).sync();
    // Wait for the server's listening channel to close
    f.channel().closeFuture().sync();
} finally {
    // Shut down gracefully, releasing the thread pool resources
    bossGroup.shutdownGracefully();
    workerGroup.shutdownGracefully();
}
