

What is the principle of Netty?


This article explains the principle of Netty. The explanation is kept simple and clear so that it is easy to follow and understand.

Detailed explanation of the principle of Netty

Netty is a high-performance, asynchronous, event-driven NIO framework that supports TCP, UDP and file transfer. As an asynchronous NIO framework, all of Netty's I/O operations are asynchronous and non-blocking. Through the Future-Listener mechanism, users can obtain the result of an I/O operation either actively or via the notification mechanism.
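As a minimal sketch of the Future-Listener mechanism (assuming Netty 4.x and an already-connected Channel; the class and method names here are illustrative, not taken from the article), a write is issued asynchronously and its result is obtained either through a listener or by waiting on the future:

```java
import io.netty.buffer.Unpooled;
import io.netty.channel.Channel;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelFutureListener;
import io.netty.util.CharsetUtil;

public class FutureListenerSketch {
    // channel is assumed to be an already-connected Netty Channel
    static void writeAsync(Channel channel) {
        ChannelFuture future = channel.writeAndFlush(
                Unpooled.copiedBuffer("hello", CharsetUtil.UTF_8));

        // Option 1: register a listener and be notified when the I/O completes
        future.addListener((ChannelFutureListener) f -> {
            if (f.isSuccess()) {
                System.out.println("write completed");
            } else {
                f.cause().printStackTrace();
            }
        });

        // Option 2: actively wait for the result (blocks the calling thread)
        // future.syncUninterruptibly();
    }
}
```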

As the most popular NIO framework, Netty has been widely used in the Internet field, big data distributed computing field, game industry, communication industry and so on. Some well-known open source components in the industry are also based on Netty's NIO framework.

Netty architecture analysis

Netty is designed with a typical three-layer network architecture. Its logical architecture is as follows:

The first layer: the Reactor communication scheduling layer, which is made up of a series of helper classes, including the Reactor thread NioEventLoop and its parent classes, NioSocketChannel/NioServerSocketChannel and their parent classes, ByteBuffer and the various derived Buffers, and Unsafe with its inner classes. The main responsibility of this layer is to listen for network read, write and connection operations. It reads data from the network layer into the in-memory buffer and then triggers the various network events, such as connection creation, connection activation, read events and write events. These events are fired into the pipeline, and the chain of responsibility formed by the pipeline carries out the subsequent processing.

The second layer: the responsibility-chain pipeline, which is responsible for transmitting events through the chain in order, as well as for dynamically arranging the chain itself. Each handler in the chain can choose to listen for and handle the events it cares about, and can intercept, process and propagate events forward or backward. The Handler nodes of different applications serve different purposes; in general, codec Handlers are developed for message encoding and decoding. They convert external protocol messages into internal POJO objects, so the upper business layer only needs to care about business logic and does not need to be aware of underlying protocol or threading-model differences, achieving layered isolation at the architecture level.

The third layer: business logic processing layer, which can be divided into two categories:

1. Pure business logic processing, such as order processing.

2. Application-layer protocol management, such as the HTTP protocol, FTP protocol, etc.
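A hedged sketch of how these three layers typically show up in code (Netty 4.x assumed; the handler arrangement is illustrative, not from the article): the pipeline is assembled in a ChannelInitializer, codec handlers translate wire bytes into objects, and a business handler at the tail only sees decoded messages:

```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.string.StringDecoder;
import io.netty.handler.codec.string.StringEncoder;

public class PipelineSketch extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        ch.pipeline()
          // codec handlers: convert between wire bytes and POJOs (here, Strings)
          .addLast(new StringDecoder())
          .addLast(new StringEncoder())
          // business handler: only sees decoded messages, not raw protocol bytes
          .addLast(new ChannelInboundHandlerAdapter() {
              @Override
              public void channelRead(ChannelHandlerContext ctx, Object msg) {
                  System.out.println("business layer received: " + msg);
              }
          });
    }
}
```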

Next, I will discuss the architecture of Netty from the three aspects that affect communication performance: the I/O model, the thread scheduling model, and the serialization mode.

I/O model

Netty's I/O model is based on non-blocking I/O, and the underlying layer relies on the Selector of the JDK NIO framework.

Selector provides the ability to select tasks that are ready. Simply put, Selector keeps polling the Channels registered on it. If a new TCP connection arrives on a Channel, or a read or write event occurs on it, the Channel becomes ready and is selected by Selector; the set of ready Channels can then be obtained through SelectionKey for subsequent I/O operations.
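A minimal plain JDK NIO sketch of this polling loop (the port and the empty event branches are illustrative):

```java
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.util.Iterator;

public class SelectorLoopSketch {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.configureBlocking(false);
        server.bind(new InetSocketAddress(8080));
        server.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            selector.select();                       // block until at least one channel is ready
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    // a new TCP connection is ready to be accepted
                } else if (key.isReadable()) {
                    // the channel has data ready to read
                }
            }
        }
    }
}
```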

Thread scheduling model

There are three commonly used Reactor threading models, which are as follows:

1. Reactor single-threaded model: all I/O operations are done on the same NIO thread. The single-threaded model can be used for small-capacity application scenarios.

2. Reactor multithreaded model: the biggest difference from the single-threaded model is that a pool of NIO threads handles the I/O operations. It is mainly used in scenarios with high concurrency and a large volume of business.

3. Master-slave Reactor multithreaded model: the server no longer uses a single NIO thread to accept client connections, but an independent NIO thread pool. The master-slave NIO thread model solves the problem that one server listening thread cannot effectively handle all client connections (see the bootstrap sketch below).
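Assuming Netty 4.x, the master-slave Reactor model is usually expressed with two event loop groups. The sketch below is illustrative (the group size and port are arbitrary, and it reuses the ChannelInitializer from the earlier pipeline sketch):

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public class ReactorSketch {
    public static void main(String[] args) throws Exception {
        // master Reactor: accepts incoming connections
        EventLoopGroup bossGroup = new NioEventLoopGroup(1);
        // slave Reactor: handles I/O for the accepted connections
        EventLoopGroup workerGroup = new NioEventLoopGroup();
        try {
            ServerBootstrap b = new ServerBootstrap();
            b.group(bossGroup, workerGroup)
             .channel(NioServerSocketChannel.class)
             .childHandler(new PipelineSketch());   // pipeline from the earlier sketch
            b.bind(8080).sync().channel().closeFuture().sync();
        } finally {
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }
    }
}
```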

Serialization mode

The key factors that affect serialization performance are summarized as follows:

1. Serialized stream size (network bandwidth consumption)

2. Serialization and deserialization performance (CPU resource consumption)

3. Performance under concurrent calls: stability, linear scalability, occasional latency spikes, etc.

Link validity detection

The heartbeat detection mechanism is divided into three levels:

1. Heartbeat detection at the TCP level, that is, TCP's Keep-Alive mechanism, whose scope is the entire TCP protocol stack

2. Heartbeat detection at the protocol layer, which mainly exists in long-connection protocols, for example the SMPP protocol

3. Heartbeat detection at the application layer, which is mainly implemented by the business products periodically sending heartbeat messages to each other in an agreed way.

The purpose of heartbeat detection is to confirm that the current link is available, that the other party is alive and can receive and send messages normally. As a highly reliable NIO framework, Netty also provides a heartbeat detection mechanism based on link idle:

1. Read idle: the link has not read any message for duration t

2. Write idle: the link has not sent any message for duration t

3. Read and write idle: the link has neither received nor sent any message for duration t.
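A hedged sketch of idle-based heartbeat detection with Netty 4.x's IdleStateHandler, which raises IdleStateEvents for the three cases above; the 60-second thresholds and the reactions are illustrative:

```java
import java.util.concurrent.TimeUnit;

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelPipeline;
import io.netty.handler.timeout.IdleState;
import io.netty.handler.timeout.IdleStateEvent;
import io.netty.handler.timeout.IdleStateHandler;

public class HeartbeatSketch extends ChannelInboundHandlerAdapter {
    static void addTo(ChannelPipeline pipeline) {
        // readerIdleTime, writerIdleTime, allIdleTime
        pipeline.addLast(new IdleStateHandler(60, 60, 60, TimeUnit.SECONDS));
        pipeline.addLast(new HeartbeatSketch());
    }

    @Override
    public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
        if (evt instanceof IdleStateEvent) {
            IdleState state = ((IdleStateEvent) evt).state();
            if (state == IdleState.READER_IDLE) {
                // nothing read for 60s: assume the peer is gone and close the link
                ctx.close();
            } else if (state == IdleState.WRITER_IDLE) {
                // nothing written for 60s: send a heartbeat to keep the link alive
                // ctx.writeAndFlush(heartbeatMessage);  // heartbeatMessage is application-defined
            }
        } else {
            super.userEventTriggered(ctx, evt);
        }
    }
}
```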

Zero copy

"Zero copy" means that during computer operation, CPU does not need to consume resources for copying data between memory. It usually refers to the way that when a computer sends a file on the network, it does not need to copy the contents of the file to the user space (User Space) and transmits it directly to the network in the kernel space (Kernel Space).

The "zero copy" of Netty is mainly reflected in three aspects.

Netty's receive and send ByteBuffers use DIRECT BUFFERS: off-heap direct memory is used for Socket reads and writes, so no second copy of the byte buffer is needed. If traditional heap memory (HEAP BUFFERS) were used for Socket reads and writes, the JVM would first copy the heap Buffer into direct memory before writing to the Socket. Compared with off-heap direct memory, the message incurs one extra buffer copy during sending.

Data is read directly from off-heap direct memory, without the traditional copy between heap memory and direct memory.

ByteBufAllocator allocates out-of-heap memory through ioBuffer
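A small sketch of the direct-buffer path (assuming Netty 4.x; on most platforms the default allocator hands out pooled direct buffers from ioBuffer):

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufAllocator;

public class DirectBufferSketch {
    static void allocate() {
        // ioBuffer() prefers off-heap (direct) memory when the platform supports it,
        // so the bytes can be handed to the Socket without an extra heap-to-direct copy
        ByteBuf buf = ByteBufAllocator.DEFAULT.ioBuffer(256);
        try {
            System.out.println("direct: " + buf.isDirect());
            buf.writeBytes(new byte[]{1, 2, 3});
        } finally {
            buf.release();   // reference-counted: must be released explicitly
        }
    }
}
```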

Netty provides composite Buffer objects, which can aggregate multiple ByteBuffer objects. Users can manipulate the combined Buffer as easily as operating a Buffer, avoiding the traditional way of merging several small Buffer into a large Buffer through memory copy.

Netty allows us to merge multiple pieces of data into a whole piece of virtual data for users to use without copying the data.

Combine Buffer objects to avoid memory copying
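A hedged sketch of the composite-buffer idea using Netty 4.x's CompositeByteBuf (the contents are illustrative):

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.CompositeByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.util.CharsetUtil;

public class CompositeSketch {
    static void combine() {
        ByteBuf header = Unpooled.copiedBuffer("HEADER", CharsetUtil.UTF_8);
        ByteBuf body   = Unpooled.copiedBuffer("BODY", CharsetUtil.UTF_8);

        // The composite only keeps references to header and body; their bytes are not copied
        CompositeByteBuf message = Unpooled.compositeBuffer();
        message.addComponents(true, header, body);   // true = advance the writerIndex

        // The combined buffer can be read as if it were a single contiguous ByteBuf
        System.out.println(message.readableBytes()); // 10
        message.release();
    }
}
```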

ChannelBuffer interface: Netty establishes a unified ChannelBuffer interface for the data to be transmitted.

The getByte(int index) method provides random access

Double pointers are used to provide sequential access
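The article describes Netty 3's ChannelBuffer; the same random-access and double-pointer idea carries over to Netty 4's ByteBuf, sketched here for illustration:

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

public class IndexSketch {
    static void demo() {
        ByteBuf buf = Unpooled.buffer(16);

        buf.writeByte(42);                 // advances writerIndex only
        buf.writeByte(7);

        byte peek  = buf.getByte(0);       // random access: indices are untouched
        byte first = buf.readByte();       // sequential access: advances readerIndex

        System.out.println(peek + " " + first);                          // 42 42
        System.out.println(buf.readerIndex() + "/" + buf.writerIndex()); // 1/2
        buf.release();
    }
}
```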

Netty mainly implements HeapChannelBuffer, ByteBufferBackedChannelBuffer and CompositeChannelBuffer, of which CompositeChannelBuffer is the class directly related to zero copy.

CompositeChannelBuffer class

The function of the CompositeChannelBuffer class is to combine multiple ChannelBuffers into one virtual ChannelBuffer to operate on.

Why virtual? Because CompositeChannelBuffer does not really combine the ChannelBuffers; it only saves references to them, which avoids copying the data and achieves zero copy. Its internal implementation is as follows:

The readerIndex read pointer and the writerIndex write pointer are inherited from AbstractChannelBuffer.

components is a ChannelBuffer array that holds all the child Buffers that make up the virtual Buffer.

indices is an int array that holds the index values of each child Buffer.

lastAccessedComponentId is an int that records the ID of the child Buffer accessed most recently.

CompositeChannelBuffer effectively stores a series of Buffers in an array and implements the ChannelBuffer interface on top of them, so that the upper layer can operate on these Buffers as if they were a single Buffer.

Netty's file transfer adopts the transferTo method, which sends the data in the file buffer directly to the target Channel, avoiding the memory copies caused by the traditional write loop.

Both sendfile() in Linux and the FileChannel.transferTo() method in Java NIO implement zero copy; in Netty, zero copy is achieved by wrapping NIO's FileChannel.transferTo() method in FileRegion.
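A hedged sketch of FileRegion-based file transfer in Netty 4.x (it assumes a connected Channel and no handler, such as SSL, that needs to inspect the bytes in user space):

```java
import java.io.File;
import java.io.RandomAccessFile;

import io.netty.channel.Channel;
import io.netty.channel.DefaultFileRegion;
import io.netty.channel.FileRegion;

public class FileRegionSketch {
    // channel is assumed to be a connected NioSocketChannel; the file is illustrative
    static void sendFile(Channel channel, File file) throws Exception {
        RandomAccessFile raf = new RandomAccessFile(file, "r");
        // Wraps FileChannel.transferTo(): the kernel moves the bytes to the socket
        // without copying them through user space
        FileRegion region = new DefaultFileRegion(raf.getChannel(), 0, raf.length());
        channel.writeAndFlush(region);
    }
}
```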

The Zero-copy of Netty is reflected in the following aspects:

Netty provides the CompositeByteBuf class, which can merge multiple ByteBufs into one logical ByteBuf, avoiding copies between the ByteBufs.

Through the wrap operation, we can wrap a byte[] array, ByteBuf, ByteBuffer, and so on into a Netty ByteBuf object, thus avoiding the copy operation.

ByteBuf supports the slice operation, so a ByteBuf can be decomposed into multiple ByteBufs that share the same storage area, avoiding memory copies (see the sketch after this list).

File transfer is realized through FileChannel.transferTo wrapped by FileRegion: the data in the file buffer is sent directly to the target Channel, which avoids the memory copies caused by the traditional write loop.
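As referenced in the list above, a brief sketch of the wrap and slice operations using Netty 4.x's Unpooled helpers (the payload is illustrative):

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

public class WrapSliceSketch {
    static void demo() {
        byte[] payload = new byte[]{1, 2, 3, 4, 5, 6};

        // wrap: the ByteBuf is a view over the existing array, no copy is made
        ByteBuf wrapped = Unpooled.wrappedBuffer(payload);

        // slice: header and body share wrapped's storage, again without copying
        ByteBuf header = wrapped.slice(0, 2);
        ByteBuf body   = wrapped.slice(2, 4);

        System.out.println(header.readableBytes() + " " + body.readableBytes()); // 2 4
    }
}
```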

