
What are the concepts of NIO

2025-03-31 Update From: SLTechnology News&Howtos


This article introduces the basic concepts behind NIO. The material is detailed but easy to follow and simple to try out, so it should serve as a useful reference. After reading it you should have a clear picture of what NIO provides. Let's take a look.

Buffer (Buffers)

The new Buffer classes are the link between ordinary Java classes and channels. A buffer encapsulates a fixed-length array of primitive data elements inside an object that also carries state information. The buffer provides a rendezvous point: a channel either drains the data placed in a buffer (a write) or deposits data into a buffer to be read later (a read). There is also a special type of buffer for memory-mapped files.
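For illustration, here is a minimal sketch of the fill-then-drain cycle using a ByteBuffer (the capacity and the bytes written are arbitrary):

import java.nio.ByteBuffer;

public class BufferDemo {
    public static void main(String[] args) {
        // Allocate a fixed-capacity buffer of primitive byte elements.
        ByteBuffer buffer = ByteBuffer.allocate(16);

        // Deposit data into the buffer (this is what a channel read does).
        buffer.put((byte) 'N').put((byte) 'I').put((byte) 'O');

        // flip() switches the buffer from filling to draining:
        // limit = current position, position = 0.
        buffer.flip();

        // Drain the buffer (this is what a channel write consumes).
        while (buffer.hasRemaining()) {
            System.out.print((char) buffer.get());
        }
        System.out.println();
    }
}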

Channel (Channels)

The most important abstraction introduced by NIO is the concept of a channel. A Channel object models a communication connection; a channel can be either unidirectional (in or out) or bidirectional (in and out). You can think of a channel as the conduit that connects buffers to I/O services.
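As a rough sketch of a channel moving data into a buffer, the following reads from a hypothetical file named data.txt:

import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class ChannelDemo {
    public static void main(String[] args) throws Exception {
        // "data.txt" is a placeholder path used only for illustration.
        try (FileChannel channel = FileChannel.open(Paths.get("data.txt"),
                                                    StandardOpenOption.READ)) {
            ByteBuffer buffer = ByteBuffer.allocate(1024);
            // The channel deposits file data into the buffer.
            int bytesRead = channel.read(buffer);
            System.out.println("Read " + bytesRead + " bytes");
        }
    }
}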

File locking and memory mapped files (File locking and memory-mapped files)

The new FileChannel object, found in the java.nio.channels package, provides many new file-oriented features, of which the two most interesting are file locking and memory-mapped files.

When several processes work together, file locking is an essential tool for coordinating each process's access to shared data.

Mapping a file into memory makes the file data on disk appear as if it were already in memory. This exploits the operating system's virtual memory facilities to cache file content dynamically, without keeping an actual copy of the file in memory.
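The sketch below combines both features: it takes an exclusive lock on a hypothetical file shared.dat and then maps part of it into memory (the 4 KB region size is arbitrary):

import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class LockAndMapDemo {
    public static void main(String[] args) throws Exception {
        try (FileChannel channel = FileChannel.open(Paths.get("shared.dat"),
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {

            // Acquire an exclusive lock on the whole file to coordinate
            // with other processes that access the same data.
            try (FileLock lock = channel.lock()) {
                // Map the first 4 KB of the file into memory; reads and
                // writes on the buffer go through the OS page cache.
                MappedByteBuffer mapped =
                        channel.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
                mapped.put(0, (byte) 42);   // modify the mapped region
                mapped.force();             // flush changes back to disk
            }
        }
    }
}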

Socket (Sockets)

The socket channel classes provide a new way to interact with network sockets. Socket channels can operate in non-blocking mode and can be used with selectors. As a result, many sockets can be multiplexed and managed more efficiently than with the traditional sockets provided by java.net. The three new socket channel classes are ServerSocketChannel, SocketChannel, and DatagramChannel.
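A minimal sketch of a ServerSocketChannel in non-blocking mode (the port number 9000 is an arbitrary choice):

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class NonBlockingAcceptDemo {
    public static void main(String[] args) throws Exception {
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(9000));
        server.configureBlocking(false);   // non-blocking mode

        while (true) {
            // accept() returns null immediately if no connection is pending.
            SocketChannel client = server.accept();
            if (client != null) {
                client.write(ByteBuffer.wrap("hello\n".getBytes()));
                client.close();
            } else {
                Thread.sleep(100);  // nothing to do yet; avoid busy-spinning
            }
        }
    }
}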

Selector (Selectors)

Selectors provide readiness selection. The Selector class offers a mechanism for determining the current state of one or more channels. With a selector, a single thread can monitor and service a large number of active I/O channels.
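The following sketch shows the usual pattern: register a non-blocking server channel with a selector and loop over the keys that become ready (the port and other details are illustrative only):

import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class SelectorDemo {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();

        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(9000));  // arbitrary port
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            selector.select();  // block until at least one channel is ready
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {
                    SocketChannel client =
                            ((ServerSocketChannel) key.channel()).accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                }
                // Handling OP_READ is omitted to keep the sketch short.
            }
        }
    }
}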

Regular expression (Regular expressions)

The new java.util.regex package brings regular expression processing, modeled on the Perl language, to Java. This long-awaited feature has a wide range of uses.

The new regular expression API is considered part of NIO because JSR 51 specifies it along with the other NIO features. Although in many respects it has little in common with the other NIO components, it is extremely useful in many areas, such as file processing.
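A small example of the java.util.regex API, using an illustrative pattern:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexDemo {
    public static void main(String[] args) {
        // Perl-style pattern: find words ending in "Channel".
        Pattern pattern = Pattern.compile("\\w+Channel");
        Matcher matcher = pattern.matcher(
                "ServerSocketChannel, SocketChannel and DatagramChannel");

        while (matcher.find()) {
            System.out.println(matcher.group());
        }
    }
}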

Character set (Character sets)

The java.nio.charset package provides new classes for mapping between characters and byte streams. You can select the character-set mapping used for translation, or you can build a mapping of your own.
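A short sketch of encoding and decoding with a Charset (UTF-8 here, chosen only for illustration):

import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class CharsetDemo {
    public static void main(String[] args) {
        Charset utf8 = StandardCharsets.UTF_8;

        // Encode characters into bytes...
        ByteBuffer bytes = utf8.encode("NIO 缓冲区");

        // ...and decode the bytes back into characters.
        CharBuffer chars = utf8.decode(bytes);
        System.out.println(chars);
    }
}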

An example of disk I/O

[Figure 1-1: the basic steps of a disk I/O operation. Many details are omitted; only the essential steps are shown.]

Note the concepts of user space and kernel space in the diagram. User space is where ordinary processes live; the JVM is an ordinary process residing in user space. User space is an unprivileged area: code running there cannot, for example, access hardware devices directly. Kernel space is where the operating system lives. Kernel code has special privileges: it can communicate with device controllers, control the state of user-space processes, and so on. Most importantly, all I/O goes through kernel space, either directly (as described here) or indirectly (see section 1.4.2).

When a process requests an I/O operation, it performs a system call (sometimes called a trap) to hand control over to the kernel. The low-level functions that programmers know as open(), read(), write(), and close() do little more than set up and perform the appropriate system calls. When the kernel is called this way, it takes whatever steps are necessary to find the data the process needs and transfer it into a specified buffer in user space. The kernel tries to cache or prefetch data, so the data the process wants may already be in kernel space; if so, it is simply copied out. If not, the process is suspended while the kernel brings the data into memory.
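As an illustration of this open/read/close sequence from the Java side, the sketch below reads a hypothetical file data.bin into a direct buffer; the comments relate each step to the underlying system calls described above:

import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class ReadSyscallDemo {
    public static void main(String[] args) throws Exception {
        // open(): the JVM asks the kernel to open the file on our behalf.
        try (FileChannel channel = FileChannel.open(Paths.get("data.bin"),
                                                    StandardOpenOption.READ)) {
            // A direct buffer lives outside the Java heap, in memory the
            // native read can target without an extra on-heap copy.
            ByteBuffer buffer = ByteBuffer.allocateDirect(4096);

            // read(): control passes to the kernel, which locates the data
            // (possibly already cached in kernel space) and copies it into
            // the user-space buffer before the call returns.
            int n = channel.read(buffer);
            System.out.println("read() returned " + n + " bytes");
        } // close(): the file descriptor is released by the kernel.
    }
}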

Looking at figure 1-1, you might think it wasteful to copy data from kernel space to user space. Why not ask the disk controller to send the data straight to the user-space buffer? There are several problems with this. First, hardware usually cannot access user space directly. Second, block-oriented hardware devices such as disks operate on fixed-size blocks, while user processes may request arbitrarily sized or misaligned chunks of data. The kernel therefore acts as an intermediary, breaking down and reassembling data as it moves between user space and storage devices.

Scatter / gather

Many operating systems can make this assembly and disassembly even more efficient. With scatter/gather, a process can pass a list of buffer addresses to the operating system in a single system call. The kernel then fills or drains the buffers in sequence, scattering the data into multiple user-space buffers on a read and gathering the data from multiple buffers on a write. The user process is spared multiple (potentially expensive) system calls, and the kernel can optimize its handling of the data because it has complete information about the total transfer. If the system has multiple CPUs, it may even fill or drain several buffers simultaneously.
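Java NIO exposes this capability through the ScatteringByteChannel and GatheringByteChannel interfaces, which FileChannel implements. A minimal sketch of a scattering read from a hypothetical file message.dat (the header and body sizes are arbitrary):

import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class ScatterGatherDemo {
    public static void main(String[] args) throws Exception {
        // Imagine a record split into a fixed-size header and a body.
        ByteBuffer header = ByteBuffer.allocate(128);
        ByteBuffer body   = ByteBuffer.allocate(1024);
        ByteBuffer[] buffers = { header, body };

        try (FileChannel channel = FileChannel.open(Paths.get("message.dat"),
                                                    StandardOpenOption.READ)) {
            // One call fills the buffers in sequence: the header buffer
            // first, then the body (a scattering read).
            long bytesRead = channel.read(buffers);
            System.out.println("Scattered " + bytesRead + " bytes into 2 buffers");
        }
    }
}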

Virtual memory

All modern operating systems use virtual memory. Virtual memory means that artificial (or virtual) addresses are used in place of physical (hardware RAM) memory addresses. This brings many advantages, which fall into two broad categories:

1. More than one virtual address can point to the same physical memory address.

2. The virtual memory space can be larger than the physical memory actually available.

Device controllers cannot write directly into user space via DMA, but the same effect can be achieved by exploiting the first item above: map a kernel-space address and a user-space virtual address to the same physical address, so that the DMA hardware (which can only see physical memory addresses) can fill a buffer that is visible to both the kernel and the user-space process.

As shown in the figure, the process's virtual memory and the kernel's virtual memory are mapped to the same physical address. Once DMA has written into that physical memory, both buffers refer to it, so both sides can access the data.

The catch is that the kernel and user buffers must share the same page alignment, and the buffer size must be a multiple of the disk controller's block size (usually 512-byte disk sectors). The operating system divides its memory address space into pages, which are fixed-size groups of bytes. A memory page is always a multiple of the block size and is usually a power of two (which simplifies addressing). Typical page sizes are 1024, 2048, and 4096 bytes. Virtual and physical memory pages are always the same size.

The figure above shows how memory pages from multiple virtual address spaces can be mapped to physical memory.

Memory page scheduling

To support the second property of virtual memory (an address space larger than physical memory), virtual memory paging is needed (often called swapping, although true swapping happens at the process level, not the page level). Under this scheme, pages of the virtual memory space can be kept in external disk storage, making room in physical memory for other virtual pages. In essence, physical memory acts as a cache for the paging area, which is the disk storage that holds memory pages evicted from physical memory.

That concludes this overview of "what are the concepts of NIO". Thank you for reading; hopefully you now have a clearer understanding of what NIO is and what it offers.
