What are the components of NIO-based network programming framework Netty

This article introduces the components of the NIO-based network programming framework Netty. Many people run into questions here in real projects, so let's walk through these situations together; read carefully and you should come away with something useful.

Overview of Netty

Netty is an asynchronous and event-driven network application framework that supports the rapid and simple development of maintainable high-performance servers and clients.

Event-driven means that the flow of the program is determined by responses to events. Asynchrony and event-driven design are everywhere in Netty: they let an application respond to events generated at any point in time, in any order, and they bring a very high degree of scalability, allowing your application to cope with growing load by scaling out its processing capacity.

Netty provides high performance and ease of use with the following features:

It has a well-designed and unified API, supports multiple transport types such as NIO and OIO (blocking IO), and supports truly connectionless UDP sockets.

A simple and powerful threading model with highly customizable thread pools (a customizable Reactor model).

Good modularity and decoupling, support for extensible and flexible event models, and easy separation of concerns to reuse logical components (pluggable).

High performance: higher throughput than the Java core APIs, and minimal memory copying through its zero-copy functionality.

Built-in codecs for many common protocols such as HTTP, SSL, and WebSocket, which can be used out of the box; users can also implement their own application-layer protocols simply and easily with Netty.

Most people use Netty mainly to improve application performance, and high performance is inseparable from non-blocking IO. Netty's non-blocking IO is built on top of Java NIO: using the Java NIO API directly is tedious and error-prone in complex applications, and Netty encapsulates those complex operations for you.

Introduction to Netty. After reading this chapter we will know all of Netty's important components and have a comprehensive picture of the framework, which matters for the deeper study that follows; with this chapter alone we can already use Netty to solve some conventional problems. So, what are the components of Netty? To better understand and delve further into Netty, let's first take a general look at the components it uses and how they work together in the overall architecture. The essential components in a Netty application are:

Bootstrap or ServerBootstrap

EventLoop

EventLoopGroup

ChannelPipeline

Channel

Future or ChannelFuture

ChannelInitializer

ChannelHandler

Bootstrap: a Netty application usually starts with a Bootstrap, whose main job is to configure the whole Netty program and wire the components together.

Handler: to support a variety of protocols and ways of handling data, Netty introduces the Handler component. Handlers process all kinds of events in the broadest sense: connections, data reception, exceptions, data conversion, and so on.

ChannelInboundHandler: the most commonly used kind of Handler. Its purpose is to handle events when data is received; in other words, our business logic generally lives in a ChannelInboundHandler, which deals with our core business logic.

ChannelInitializer: when a connection is established, we need to decide how to receive or send data, and we have all kinds of Handler implementations for that. ChannelInitializer is used to configure those Handlers; it provides a ChannelPipeline and adds Handlers to it.
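As a small illustration, here is a ChannelInitializer sketch that installs two hypothetical handlers (MyDecoder and MyBusinessHandler are placeholder names, not Netty classes):

import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;

public class MyChannelInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) throws Exception {
        // Each new Channel gets its own pipeline; handlers are added in order.
        ch.pipeline()
          .addLast(new MyDecoder())          // hypothetical inbound decoder
          .addLast(new MyBusinessHandler()); // hypothetical business-logic handler
    }
}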

ChannelPipeline: a Netty application is built on the ChannelPipeline mechanism, which in turn relies on EventLoop and EventLoopGroup, because all three are related to events and event handling.

EventLoop: its purpose is to handle IO operations for Channels; a single EventLoop can serve multiple Channels.

EventLoopGroup: contains multiple EventLoops.

Channel: represents a socket connection, or some other component capable of IO operations; it works together with an EventLoop to take part in IO processing.

Future: all IO operations in Netty are asynchronous, so you cannot immediately know whether a message was handled correctly; instead you can wait for completion or register a listener. Concretely this is done through Future and ChannelFuture: you register a listener that is triggered automatically when the operation succeeds or fails. In short, every operation returns a ChannelFuture.
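A minimal sketch of that listener pattern, assuming an already-configured client Bootstrap and a placeholder address:

import java.net.InetSocketAddress;
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelFutureListener;
import io.netty.util.CharsetUtil;

// Assumes 'bootstrap' is an already-configured client Bootstrap.
ChannelFuture future = bootstrap.connect(new InetSocketAddress("example.com", 8080));
future.addListener((ChannelFutureListener) f -> {
    if (f.isSuccess()) {
        // Connection established; it is now safe to write.
        f.channel().writeAndFlush(Unpooled.copiedBuffer("hello", CharsetUtil.UTF_8));
    } else {
        f.cause().printStackTrace();
    }
});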

How does Netty handle connection requests and business logic?

Channels, Events and IO

Netty is a non-blocking, event-driven network programming framework. Naturally, Netty uses threads to handle IO events, and readers familiar with multithreaded programming might worry about synchronizing their code, but Netty spares us that concern. Specifically:

A Channel corresponds to an EventLoop, and an EventLoop corresponds to a thread, that is, only one thread is responsible for the IO operation of a Channel.

The relationship among these concepts is as follows: when a connection arrives, Netty registers a Channel, and the EventLoopGroup allocates an EventLoop to bind to that Channel. Throughout the Channel's life cycle, that bound EventLoop, which is a single thread, serves it.

So what is the relationship between EventLoop and EventLoopGroup? We said earlier that an EventLoopGroup contains multiple EventLoops, but the inheritance hierarchy reveals that EventLoop actually extends EventLoopGroup, which means that in some cases an EventLoopGroup can be used as an EventLoop.

How to configure a Netty application?

Bootstrapping

We use bootstrap classes to configure Netty applications. There are two types: one for the client side, Bootstrap, and one for the server side, ServerBootstrap. To keep them apart, you only need to remember which side each is used on. Let's describe the differences in detail:

1. The first and most obvious difference is that ServerBootstrap is used on the server side and binds to a port to listen for connections via the bind() method, while Bootstrap is used on the client side and connects to a server via the connect() method (you can also obtain a Channel connected to the server through the ChannelFuture returned by bind()).

2. The client-side Bootstrap typically uses one EventLoopGroup, while the server-side ServerBootstrap uses two (which may also be the same instance). Why does the server side need two EventLoopGroups? This design has a clear advantage: with two groups, the first EventLoopGroup can be dedicated to binding the port and accepting connection events, while the second handles the IO of each accepted connection.

PS: if a single EventLoopGroup handled all requests and connections, then under heavy concurrency it might be so busy servicing established connections that it could not accept new connection requests in time. With two groups, dedicated threads handle connection acceptance, so requests do not time out, which greatly improves concurrent processing capacity.

We know that a Channel is bound to a single EventLoop, and once bound the pairing does not change. In general, an EventLoopGroup contains fewer EventLoops than there are Channels, so multiple Channels are very likely to share one EventLoop. That means a very busy Channel can delay its EventLoop's handling of other Channels, which is why we must never block the EventLoop.

Of course, our server can also use a single EventLoopGroup, letting one instance handle both connection requests and IO events.
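To make the two-group pattern concrete, here is a minimal server-side sketch; the port, the one-thread boss group, and the initializer class are illustrative assumptions, not fixed by the article:

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public class EchoServer {
    public static void main(String[] args) throws InterruptedException {
        EventLoopGroup bossGroup = new NioEventLoopGroup(1);  // accepts connections
        EventLoopGroup workerGroup = new NioEventLoopGroup(); // handles IO of accepted connections
        try {
            ServerBootstrap b = new ServerBootstrap();
            b.group(bossGroup, workerGroup)
             .channel(NioServerSocketChannel.class)
             .childHandler(new MyChannelInitializer()); // the initializer sketched earlier
            b.bind(8080).sync().channel().closeFuture().sync();
        } finally {
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }
    }
}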

How does Netty handle data?

Netty Core ChannelHandler

Let's look at how data is handled in Netty, recalling the Handlers we discussed earlier. When we talk about Handlers, we must mention ChannelPipeline: the ChannelPipeline is responsible for arranging the order and execution of Handlers. Let's introduce both in detail:

ChannelPipeline and handlers

The component we use most in our applications is ChannelHandler. Picture data flowing through a ChannelPipeline, with each ChannelHandler as a small valve along the way: the data passes through every ChannelHandler and is processed by it. ChannelHandler is their common parent interface.

ChannelHandler has two key subinterfaces, ChannelInboundHandler and ChannelOutboundHandler, corresponding to the two directions of data flow: data flowing into our application from outside is inbound, and the reverse is outbound. In fact, a ChannelHandler is similar to a Servlet: after one ChannelHandler processes the received data it passes it to the next Handler, or, if it does not process it, passes it on directly. Let's look at how the ChannelPipeline arranges ChannelHandlers:

A ChannelPipeline can mix the two kinds of Handler (ChannelInboundHandler and ChannelOutboundHandler) together. When data enters the ChannelPipeline, it is passed from the head of the pipeline to the first ChannelInboundHandler, then to the next one after the first finishes, all the way to the tail. Conversely, when data is written out, it starts at the tail of the pipeline, passes through the "last" ChannelOutboundHandler there, and when that one finishes it is passed to the previous ChannelOutboundHandler.

Data is passed between Handlers by calling methods on the ChannelHandlerContext passed into each handler method. The Netty API provides two base classes, ChannelOutboundHandlerAdapter and ChannelInboundHandlerAdapter, which do nothing but call the ChannelHandlerContext to pass the message on to the next Handler. Since we only care about processing data, our programs can extend these base classes and implement only the data-processing part.

We know that inbound and outbound Handlers are mixed in the same ChannelPipeline, so how are they told apart? Easily: they implement different interfaces. For an inbound event, Netty automatically skips ChannelOutboundHandlers; for an outbound event, ChannelInboundHandlers are ignored.

When a ChannelHandler is added to a ChannelPipeline, it gets a reference to a ChannelHandlerContext, which can be used to read and write the data stream in Netty. There are therefore two ways to send data: write it directly to the Channel, or write it to the ChannelHandlerContext. The difference is that writing to the Channel makes the data flow through the whole pipeline, while writing to the ChannelHandlerContext makes it flow only from the next Handler onward.
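A tiny sketch contrasting the two write paths from inside a handler (an echo-style handler used purely for illustration):

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

public class EchoHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        // Starts outbound processing from this handler's position in the pipeline:
        ctx.writeAndFlush(msg);

        // By contrast, this would send the message through the whole pipeline:
        // ctx.channel().writeAndFlush(msg);
    }
}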

How to deal with our business logic?

Encoders, Decoders and Domain Logic

Netty contains a great many Handlers; which kind a Handler is depends on whether it extends an inbound or an outbound adapter. Netty also provides a series of Adapter classes to simplify development: every Handler in a ChannelPipeline is responsible for passing the event on to the next Handler, and these helper Adapters do that plumbing automatically, so we only override the parts we actually care about. In addition, some Adapters provide extra features, such as encoding and decoding. Let's look at three commonly used kinds of ChannelHandler:

Encoders and Decoders

Because the network can only transmit byte streams, we must convert our message type to bytes before sending data, and convert received bytes back into messages after receiving. We call the bytes-to-message direction decoding (turning bytes into something we can understand), and the message-to-bytes direction encoding.

Netty provides many off-the-shelf encoders and decoders whose purpose you can usually tell from their names: ByteToMessageDecoder and MessageToByteEncoder are generic, while ProtobufEncoder and ProtobufDecoder, for example, are dedicated to the Google Protobuf protocol.

As we said earlier, which kind a Handler is depends on what it extends, and for decoders it is easy to see: a decoder extends ChannelInboundHandlerAdapter or implements ChannelInboundHandler, because decoding means turning the bytes handed over by the ChannelPipeline into a message (a Java object) we can understand, and it is inbound events that carry the byte stream. A decoder overrides the channelRead() method, calls its specific decode method on the incoming bytes, and then passes the decoded message to the next Handler by calling ChannelHandlerContext.fireChannelRead(decodedMessage). Encoders work symmetrically, so there is not much more to say about them.
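A minimal decoder sketch built on Netty's ByteToMessageDecoder, whose decode() method is invoked from channelRead(); the 4-byte integer framing is an assumption for illustration:

import java.util.List;
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.ByteToMessageDecoder;

public class IntDecoder extends ByteToMessageDecoder {
    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) {
        if (in.readableBytes() >= 4) {  // wait until a whole int has arrived
            out.add(in.readInt());      // anything added to 'out' is forwarded
        }                               // to the next inbound handler
    }
}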

Domain Logic

In fact, what we care most about is handling the received, decoded data; our real business logic lives there. Netty provides the most commonly used base class for this, SimpleChannelInboundHandler<T>, where T is the type of data this Handler processes (an earlier Handler has already decoded it for us). When a message reaches this Handler, Netty automatically calls its channelRead0(ChannelHandlerContext, T) method, where T is the passed data object, and in that method we can write whatever business logic we like.
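For example, a business handler for the Integer messages produced by the hypothetical decoder above might look like this:

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;

public class IntBusinessHandler extends SimpleChannelInboundHandler<Integer> {
    @Override
    protected void channelRead0(ChannelHandlerContext ctx, Integer msg) {
        // 'msg' is already decoded; business logic goes here.
        System.out.println("received: " + msg);
    }
}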

In a way, Netty is an NIO framework: it is a wrapper built on top of Java NIO, so if you want to learn Netty well, I suggest you first understand Java NIO.

NIO can be read as New IO or Non-blocking IO, and it performs much better than Java's old blocking IO. (If every connection's IO got its own dedicated thread, blocking IO would not lag behind NIO in performance, but you cannot create an unlimited number of threads, which becomes painful with a very large number of connections.)

ByteBuffer: NIO data transfer is based on buffers, and ByteBuffer is the buffer abstraction used in NIO data transfer. ByteBuffer supports allocating memory outside the heap and tries to avoid redundant copying during IO operations. An ordinary IO operation requires a system call, which first switches into kernel mode; the kernel reads the data from the file into its own buffer, and only when the data is ready is it copied from kernel space to user space. So-called blocking IO actually means blocking while waiting for the data to become ready. To avoid this extra kernel copy, you can use mmap (virtual memory mapping) to let user-mode code manipulate the file directly.
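In Java, this kind of mapping is exposed through FileChannel.map(); a minimal read-only sketch (the file path is a placeholder):

import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class MmapDemo {
    public static void main(String[] args) throws Exception {
        try (FileChannel ch = FileChannel.open(Paths.get("data/nio-data.txt"),
                                               StandardOpenOption.READ)) {
            // Maps the file into memory; reads go through the page cache,
            // with no explicit kernel-to-user buffer copy in application code.
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
            while (buf.hasRemaining()) {
                System.out.print((char) buf.get());
            }
        }
    }
}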

Channel: similar to a file descriptor (fd), it simply represents an entity that can perform one or more IO operations, such as a hardware device, a file, a socket, or a program component. You can read data from a Channel into a buffer, or write data from a buffer into a Channel.

Selector: the selector is the key to NIO's implementation. NIO uses IO multiplexing to achieve non-blocking behavior: a Selector listens on one thread for the IO events of every registered Channel to determine which Channels are ready for IO, so the completion status of any read or write can be checked at any time. This avoids blocking while waiting for IO data to become ready, and lets a few threads handle many connections, reducing the cost of thread switching and maintenance.
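To make this concrete, here is a bare-bones accept loop in plain Java NIO (the port is a placeholder, and error handling is omitted):

import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.util.Iterator;

public class SelectorDemo {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080));
        server.configureBlocking(false);                  // required for use with a Selector
        server.register(selector, SelectionKey.OP_ACCEPT);
        while (true) {
            selector.select();                            // blocks until some channel is ready
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    // accept the new connection; a real server would register it for reads
                    server.accept().configureBlocking(false);
                }
            }
        }
    }
}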

After understanding the idea behind NIO's implementation, it is also worth understanding the IO models in Unix. Unix has the following five IO models:

Blocking IO

Non-blocking IO

IO multiplexing (select and poll)

Signal-driven IO (SIGIO)

Asynchronous IO (the POSIX aio_ functions)

The blocking IO model is the most common one; the InputStream/OutputStream we normally use is built on it. Taking UDP as an example, recvfrom() is the function the UDP protocol uses to receive data. It makes a system call and blocks until the kernel has the data ready, after which the data is copied from the kernel buffer to user space (that is, recvfrom() receives the data). Blocking means doing nothing while waiting for the kernel to prepare the data.

A real-life analogy: blocking IO is like going to a restaurant, where you must sit and wait until the meal is ready (if you play with your phone in the meantime, that is non-blocking IO).

In the non-blocking IO model, the kernel returns the error code EWOULDBLOCK when the data is not ready; instead of blocking and sleeping on failure, recvfrom keeps asking the kernel whether the data is ready yet. The kernel may return EWOULDBLOCK several times until, on some later query, the data is ready, at which point it starts copying the buffered data from the kernel to user space. This way of repeatedly asking the kernel whether some state is complete is called polling.

Non-blocking IO is like ordering takeout, except that you are so impatient that you phone every few minutes to ask whether the courier has arrived.

The idea of IO multiplexing is the same as that of non-blocking IO, except that in non-blocking IO the polling of the kernel happens in the user-mode recvfrom loop (or a thread), which burns a lot of CPU time. With IO multiplexing, the polling is instead done through the select() or poll() system call, which monitors the status of read and write events; when select detects that a datagram is readable, recvfrom is then called to copy the data from the kernel to user space.

The advantage of this approach is obvious: multiple file descriptors can be watched at once through IO multiplexing, and the monitoring happens in the kernel. The drawback is that at least two system calls (select() and recvfrom()) are required.

The takeout analogy still applies, except that now you can do your own thing while waiting; when the food arrives you are notified through the app or by the courier's call (because the kernel polls for you).

select() and poll() are the two IO multiplexing functions available in Unix. select() is the more portable, but the number of file descriptors it can monitor in a single process is limited; that value is tied to FD_SETSIZE, which is 2048 by default on 32-bit systems. Another drawback of select() is its polling method: it does a linear scan, traversing all FD_SETSIZE file descriptors each time, whether or not they are active. poll() is essentially no different from select() in implementation, but differs greatly in data structure: the user must allocate an array of pollfd structures, which the kernel maintains. Because of this, poll() has no size limit like select()'s, but its drawback is equally obvious: the whole fd array is copied between user space and kernel space on every call, whether or not that copying is meaningful.

There is an implementation more efficient than select() and poll() called epoll, a scalable IO multiplexing facility introduced in Linux kernel 2.6 to replace them. epoll also has no upper limit on file descriptors; it uses one file descriptor to manage many others, stores them in a red-black tree, and supports both edge-triggered and level-triggered modes (poll() supports only level-triggered). In edge-triggered mode, epoll_wait returns only when an event first becomes ready, while in level-triggered mode epoll_wait keeps triggering as long as the event state has not changed. In other words, edge-triggered mode notifies you only once when a file descriptor becomes ready, while level-triggered mode keeps notifying until the descriptor has been handled.

For more information about epoll_wait, please refer to epoll API below.

// Creates an epoll object and returns its file descriptor.
// The flags parameter modifies epoll's behavior; it has only one valid value, EPOLL_CLOEXEC.
int epoll_create1(int flags);

// Configures the object, describing which file descriptors and which events are monitored.
int epoll_ctl(int epfd, int op, int fd, struct epoll_event *event);

// Waits for any of the events registered with epoll_ctl until one occurs or the call times out.
// The events that occurred are returned in 'events', at most 'maxevents' at a time.
int epoll_wait(int epfd, struct epoll_event *events, int maxevents, int timeout);

Another highlight of epoll is that it is event-driven rather than polled: a file descriptor registered via epoll_ctl is activated through a callback mechanism when its event fires, and epoll_wait receives the notification. This way, efficiency is not proportional to the number of file descriptors.

In Java NIO 2 (introduced in JDK 1.7), epoll is used whenever the Linux kernel version is 2.6 or above, as the following source shows (DefaultSelectorProvider.java):

public static SelectorProvider create() {
    String osname = AccessController.doPrivileged(new GetPropertyAction("os.name"));
    if ("SunOS".equals(osname)) {
        return new sun.nio.ch.DevPollSelectorProvider();
    }
    // use EPollSelectorProvider for Linux kernels >= 2.6
    if ("Linux".equals(osname)) {
        String osversion = AccessController.doPrivileged(new GetPropertyAction("os.version"));
        String[] vers = osversion.split("\\.", 0);
        if (vers.length >= 2) {
            try {
                int major = Integer.parseInt(vers[0]);
                int minor = Integer.parseInt(vers[1]);
                if (major > 2 || (major == 2 && minor >= 6)) {
                    return new sun.nio.ch.EPollSelectorProvider();
                }
            } catch (NumberFormatException x) {
                // format not recognized
            }
        }
    }
    return new sun.nio.ch.PollSelectorProvider();
}

The signal-driven IO model uses signals: the kernel notifies the process when the data is ready. We first enable signal-driven IO on the socket and install a signal handler with the sigaction system call; the call returns immediately without blocking the user process. When the datagram is ready, the kernel sends a SIGIO signal, and on receiving it the process issues a system call (recvfrom) to start the IO operation.

The advantage of this model is that the main process (thread) is not blocked; when the data is ready, the signal handler informs it, and it can then perform the IO operation and process the data.

All the IO models discussed so far, blocking or non-blocking, concern the data preparation phase. The asynchronous IO model also relies on a signal handler for notification, but unlike all the models above, it notifies the process that the IO operation has completed, not merely that the data is ready.

It is fair to say the asynchronous IO model is truly non-blocking: the main process just does its own thing and, when the IO operation completes, a callback function finishes the data processing.

After all this background, we should have a solid grasp of the IO models. Next we will explore the core components of Netty with some source code (Netty 4.x) and see how to use Netty; you will find how easy it is to write a Netty program (and get high performance and maintainability into the bargain).

ByteBuf

The basic unit of network transmission is the byte. Java NIO provides ByteBuffer as its byte buffer container, but that class's API is inconvenient to use, so Netty implements ByteBuf as a substitute. The advantages of ByteBuf:

It is easier to use than ByteBuffer.

Transparent zero-copy is implemented through the built-in compound buffer type.

Capacity can be increased on demand.

Different index pointers are used for reading and writing.

Chain calls are supported.

Support for reference counting and pooling.

Can be extended by a user-defined buffer type.

Before we discuss ByteBuf, we need to understand the implementation of ByteBuffer so that we can understand the difference between them more deeply.

ByteBuffer inherits from the abstract class Buffer (which has other specializations such as LongBuffer and IntBuffer); it is essentially a finite linear sequence of elements with three important attributes:

Capacity: the number of elements the buffer holds. You can write only capacity elements into the buffer; once it is full, it must be cleared before writing can continue.

Position: an index pointing at the next location to write, starting at 0 with a maximum of capacity - 1. When switching from write mode to read mode, position is reset to 0.

Limit: in write mode, limit is the maximum index that can be written, which is equivalent to the buffer's capacity; in read mode, limit is the maximum index of readable data.

Because Buffer maintains only one index pointer, switching between read and write modes requires calling the flip() method to reset the pointer. The typical workflow for a Buffer is:

Write data to the buffer.

Call the flip() method.

Read data from the buffer.

Call buffer.clear() or buffer.compact() to clear the buffer so that data can be written again.

RandomAccessFile aFile = new RandomAccessFile("data/nio-data.txt", "rw");
FileChannel inChannel = aFile.getChannel();
ByteBuffer buf = ByteBuffer.allocate(48);       // allocate a 48-byte buffer
int bytesRead = inChannel.read(buf);            // read data into the buffer
while (bytesRead != -1) {
    buf.flip();                                 // reset position to 0 for reading
    while (buf.hasRemaining()) {
        System.out.print((char) buf.get());     // read the data and print it to the console
    }
    buf.clear();                                // clear the buffer for the next write
    bytesRead = inChannel.read(buf);
}
aFile.close();

The implementation of the core methods in Buffer is also very simple; they mainly manipulate the pointer positions.

/**
 * Sets this buffer's mark at its position.
 * @return This buffer
 */
public final Buffer mark() {
    mark = position;            // the mark attribute remembers the current index
    return this;
}

// Resets the current index back to the position saved by mark.
public final Buffer reset() {
    int m = mark;
    if (m < 0)
        throw new InvalidMarkException();
    position = m;
    return this;
}

// Flips the buffer: limit is set to the current index, then position is reset to 0.
public final Buffer flip() {
    limit = position;
    position = 0;
    mark = -1;
    return this;
}

// "Clears" the buffer: it only resets position and limit, so subsequent
// writes simply overwrite the old data.
public final Buffer clear() {
    position = 0;
    limit = capacity;
    mark = -1;
    return this;
}

// Returns the remaining space.
public final int remaining() {
    return limit - position;
}

The annoyance of the Buffer API in Java NIO is that switching between reading and writing requires manually resetting the pointer. ByteBuf has no such tedium: it maintains two separate indexes, one for reading and one for writing. When you read from a ByteBuf, its readerIndex is incremented by the number of bytes read; likewise, when you write, writerIndex is incremented. The readerIndex can go at most as far as the current writerIndex; trying to move it past that triggers an exception.

Methods of ByteBuf whose names start with read or write increment the corresponding index, while methods starting with get or set do not. A ByteBuf can also be given a maximum capacity, and trying to move writerIndex past it triggers an exception.

public byte readByte() {
    this.checkReadableBytes0(1);    // check that readerIndex will not cross writerIndex
    int i = this.readerIndex;
    byte b = this._getByte(i);
    this.readerIndex = i + 1;       // increment readerIndex
    return b;
}

private void checkReadableBytes0(int minimumReadableBytes) {
    this.ensureAccessible();
    if (this.readerIndex > this.writerIndex - minimumReadableBytes) {
        throw new IndexOutOfBoundsException(String.format(
            "readerIndex(%d) + length(%d) exceeds writerIndex(%d): %s",
            this.readerIndex, minimumReadableBytes, this.writerIndex, this));
    }
}

public ByteBuf writeByte(int value) {
    this.ensureAccessible();
    this.ensureWritable0(1);        // check that writerIndex will not cross the capacity
    this._setByte(this.writerIndex++, value);
    return this;
}

private void ensureWritable0(int minWritableBytes) {
    if (minWritableBytes > this.writableBytes()) {
        if (minWritableBytes > this.maxCapacity - this.writerIndex) {
            throw new IndexOutOfBoundsException(String.format(
                "writerIndex(%d) + minWritableBytes(%d) exceeds maxCapacity(%d): %s",
                this.writerIndex, minWritableBytes, this.maxCapacity, this));
        } else {
            int newCapacity = this.alloc().calculateNewCapacity(
                this.writerIndex + minWritableBytes, this.maxCapacity);
            this.capacity(newCapacity);
        }
    }
}

// get and set only check the incoming index, then get or set directly.
public byte getByte(int index) {
    this.checkIndex(index);
    return this._getByte(index);
}

public ByteBuf setByte(int index, int value) {
    this.checkIndex(index);
    this._setByte(index, value);
    return this;
}

ByteBuf supports allocation both inside and outside the heap. In-heap allocation, also called backing-array mode, provides fast allocation and deallocation without pooling.

ByteBuf heapBuf = Unpooled.copiedBuffer(bytes);
if (heapBuf.hasArray()) {                       // check for a backing array
    byte[] array = heapBuf.array();
    // compute the offset of the first readable byte
    int offset = heapBuf.arrayOffset() + heapBuf.readerIndex();
    int length = heapBuf.readableBytes();       // number of readable bytes
    handleArray(array, offset, length);         // call your processing method
}

The other mode is off-heap allocation. The Java NIO ByteBuffer class already allows JVM implementations to allocate memory outside the heap through JNI calls (calling the malloc() function to allocate memory outside the JVM heap), mainly to avoid extra buffer copy operations.

ByteBuf directBuf = Unpooled.directBuffer(capacity);
if (!directBuf.hasArray()) {                    // a direct buffer has no backing array
    int length = directBuf.readableBytes();
    byte[] array = new byte[length];
    directBuf.getBytes(directBuf.readerIndex(), array);  // copy the bytes out
    handleArray(array, 0, length);
}

ByteBuf also supports a third mode, the composite buffer, which provides an aggregated view over multiple ByteBufs. In this view you can add and remove ByteBuf instances as needed; CompositeByteBuf, a subclass of ByteBuf, implements this pattern.

A scenario that suits composite buffers is the HTTP protocol, where a message is divided into two parts, the header and the body. If the two parts are produced by different modules of the application and assembled only when the message is sent, and if the application reuses the same body across multiple messages while creating a new header for each one, a lot of unnecessary memory operations result. CompositeByteBuf is a good choice here: it eliminates those extra copies and helps you reuse the messages.

CompositeByteBuf messageBuf = Unpooled.compositeBuffer();
ByteBuf headerBuf = ...;
ByteBuf bodyBuf = ...;
messageBuf.addComponents(headerBuf, bodyBuf);
for (ByteBuf buf : messageBuf) {
    System.out.println(buf.toString());
}

CompositeByteBuf implements zero-copy transparently. Zero-copy means avoiding copying data back and forth between two memory areas. At the operating-system level it refers to avoiding buffer copies between kernel space and user space (avoided with mmap); Netty's zero-copy leans instead toward optimizing data operations in user mode. Just as CompositeByteBuf lets several ByteBufs be reused without extra copies, you can also use the wrap() family of methods to wrap a byte array as a ByteBuf, or use ByteBuf's slice() method to split one buffer into multiple ByteBufs that share the same memory region, all to optimize memory usage.
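A small sketch of both user-mode techniques (the buffer contents and slice offsets are illustrative):

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.util.CharsetUtil;

public class ZeroCopyDemo {
    public static void main(String[] args) {
        // wrappedBuffer shares the array's memory instead of copying it.
        byte[] bytes = "header+body".getBytes(CharsetUtil.UTF_8);
        ByteBuf wrapped = Unpooled.wrappedBuffer(bytes);

        // slice() creates views that share the same underlying memory region.
        ByteBuf header = wrapped.slice(0, 6);
        ByteBuf body = wrapped.slice(7, 4);
        System.out.println(header.toString(CharsetUtil.UTF_8)); // "header"
        System.out.println(body.toString(CharsetUtil.UTF_8));   // "body"
    }
}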

So how do you create a ByteBuf? The code above uses Unpooled, a utility class Netty provides for creating and allocating ByteBufs. Using this utility class to create your buffers is recommended over calling constructors yourself. wrappedBuffer() and copiedBuffer() are used most often: the former wraps a byte array or ByteBuffer as a ByteBuf, and the latter copies a new ByteBuf from the passed-in byte array or ByteBuffer/ByteBuf.

// copiedBuffer copies the array (via array.clone()) and then wraps the copy
public static ByteBuf copiedBuffer(byte[] array) {
    return array.length == 0 ? EMPTY_BUFFER : wrappedBuffer((byte[]) array.clone());
}

// the default is in-heap allocation
public static ByteBuf wrappedBuffer(byte[] array) {
    return (ByteBuf) (array.length == 0 ? EMPTY_BUFFER
                                        : new UnpooledHeapByteBuf(ALLOC, array, array.length));
}

// an off-heap allocation method is provided as well
private static final ByteBufAllocator ALLOC = UnpooledByteBufAllocator.DEFAULT;
public static ByteBuf directBuffer(int initialCapacity) {
    return ALLOC.directBuffer(initialCapacity);
}

Channel channel = ...;
ByteBufAllocator allocator = channel.alloc();
ByteBuf buffer = allocator.directBuffer();
// do something...

To optimize memory utilization, Netty provides a manual way to track inactive objects. Objects allocated in the heap, like UnpooledHeapByteBuf, benefit from the JVM's GC with no extra effort. UnpooledDirectByteBuf, however, is allocated off-heap on top of DirectByteBuffer; a DirectByteBuffer first applies for a quota from the Bits class (which keeps a global totalCapacity variable recording the total size of all DirectByteBuffers), and each application checks whether the limit set by -XX:MaxDirectMemorySize has been exceeded. If it has, the JVM tries System.gc() to reclaim some memory and then sleeps for 100 ms; if memory is still insufficient, an OOM exception is thrown. Even with this safety net for reclaiming off-heap memory, active recycling is still necessary for performance and utilization. And because Netty also pools ByteBufs, classes like PooledHeapByteBuf and PooledDirectByteBuf must rely on manual recycling (returning them to the pool).

Netty uses reference counting to track inactive objects. The reference-counting interface is ReferenceCounted, and the idea is very simple: as long as a ByteBuf's reference count is greater than 0, the object is guaranteed not to be released and recycled. You decrement or increment the count manually by calling the release() and retain() methods; users can also implement ReferenceCounted themselves to satisfy custom rules.
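A brief usage sketch of the counting rules (the buffer size is arbitrary):

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

public class RefCountDemo {
    public static void main(String[] args) {
        ByteBuf buf = Unpooled.buffer(16);
        System.out.println(buf.refCnt());  // 1: every ByteBuf starts with a count of 1
        buf.retain();                      // count -> 2, e.g. before sharing with another component
        buf.release();                     // count -> 1
        boolean freed = buf.release();     // count -> 0, buffer is deallocated
        System.out.println(freed);         // true: release() returns true when the count hits 0
    }
}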

package io.netty.buffer;

public abstract class AbstractReferenceCountedByteBuf extends AbstractByteBuf {
    // There are very many ByteBuf instances, so instead of wrapping refCnt in an
    // AtomicInteger, a single global AtomicIntegerFieldUpdater manipulates refCnt.
    private static final AtomicIntegerFieldUpdater<AbstractReferenceCountedByteBuf> refCntUpdater =
        AtomicIntegerFieldUpdater.newUpdater(AbstractReferenceCountedByteBuf.class, "refCnt");

    // The initial reference count of every ByteBuf is 1.
    private volatile int refCnt = 1;

    public int refCnt() {
        return this.refCnt;
    }

    protected final void setRefCnt(int refCnt) {
        this.refCnt = refCnt;
    }

    public ByteBuf retain() {
        return this.retain0(1);
    }

    // Increments the reference count by 'increment'; increment must be greater than 0.
    public ByteBuf retain(int increment) {
        return this.retain0(ObjectUtil.checkPositive(increment, "increment"));
    }
}

public static int checkPositive(int i, String name) {
    if (i <= 0) {
        throw new IllegalArgumentException(name + ": " + i + " (expected: > 0)");
    }
    return i;
}
