
How does Android I/O work at the bottom?

2025-02-24 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/01 Report--

The main content of this article is to explain how Android I/O works at the bottom. Interested friends may wish to take a look. The approach introduced here is simple, fast and practical. Next, let the editor take you through how Android I/O works at the bottom.

Buffer handling and kernel vs. user space

The handling of buffers is the basis of all I/O operations. The terms "input" and "output" really only mean moving data into and out of buffers; keep that in mind at all times. Typically, a process performs an I/O request to the operating system, which involves either draining data from a buffer (a write operation) or filling a buffer with data (a read operation). That is the whole concept of I/O. The mechanisms that perform these transfers inside the operating system can be very complex, but conceptually they are simple. We will discuss them over the course of this article.

Consider a simplified "logical" picture of how block data moves from an external source, such as a disk, into a memory area inside a process, such as RAM. First, the process asks for its buffer to be filled by making the read() system call. The system call causes the kernel to issue a command to the disk-controller hardware to fetch the data from disk. The disk controller writes the data directly into a kernel-space memory buffer by DMA, without further help from the main CPU. Once the disk controller finishes filling the buffer, the kernel copies the data from its temporary buffer in kernel space into the buffer specified by the process when it requested the read() operation.

It is important to note that the kernel tries to cache and prefetch data, so the data a process requests may already be available in kernel space. If so, the data is simply copied out to the process's buffer. If the data is not available, the process is suspended while the kernel reads the data into memory.
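The read-into-a-buffer pattern described above maps directly onto Java's NIO API. The sketch below fills a ByteBuffer from a FileChannel; the temporary file and the helper name readHead are illustrative, not part of any standard API:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class BufferRead {
    // Read up to 'capacity' bytes from the file into a buffer and return
    // them as a String. Under the hood, each read() fills a kernel buffer
    // (via the disk controller and DMA) and the kernel copies the bytes
    // into our user-space ByteBuffer.
    static String readHead(Path file, int capacity) throws IOException {
        ByteBuffer buf = ByteBuffer.allocate(capacity);
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
            while (buf.hasRemaining() && ch.read(buf) != -1) {
                // keep filling until the buffer is full or EOF
            }
        }
        buf.flip(); // switch the buffer from filling to draining
        byte[] bytes = new byte[buf.remaining()];
        buf.get(bytes);
        return new String(bytes);
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("demo", ".txt");
        Files.writeString(tmp, "hello kernel");
        System.out.println(readHead(tmp, 5)); // prints "hello"
        Files.delete(tmp);
    }
}
```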

Virtual memory

You may have heard of virtual memory many times. Let me introduce it again.

All modern operating systems use virtual memory. Virtual memory means that artificial, or virtual, addresses are used in place of physical (hardware RAM) memory addresses. Virtual addresses have two important advantages:

Multiple virtual addresses can be mapped to the same physical address.

A virtual address space can be larger than the actual available hardware memory.

In the description above, copying from kernel space to the final user buffer seems like extra work. Why not tell the disk controller to send the data directly to the buffer in user space? This is precisely what virtual memory makes possible, by exploiting advantage 1 above.

By mapping a kernel-space address to the same physical address as a virtual address in user space, the DMA hardware (which can only address physical memory) can fill a buffer that is simultaneously visible to both the kernel and a user-space process.

This eliminates the copy between kernel and user space, but it requires the kernel and user buffers to share the same page alignment, and buffers must be sized in multiples of the block size used by the disk controller (usually 512-byte disk sectors). Operating systems divide their memory address spaces into pages, which are fixed-size groups of bytes. These memory pages are always multiples of the disk block size and are usually powers of 2 (which simplifies addressing). Typical memory page sizes are 1024, 2048, and 4096 bytes. Virtual and physical memory pages are always the same size.
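Java exposes this idea through direct buffers, which live outside the garbage-collected heap so the JVM can hand their addresses straight to native I/O calls. A minimal sketch, assuming a 4096-byte page; the over-allocate-then-align trick and the helper name pageAligned are illustrative (alignedSlice is the standard method that automates it, available since Java 9):

```java
import java.nio.ByteBuffer;

public class DirectBuffers {
    // Allocate a direct buffer whose start address is aligned to
    // 'pageSize'. Direct buffers are the closest Java gets to the
    // "DMA fills a buffer shared with the kernel" idea above.
    static ByteBuffer pageAligned(int size, int pageSize) {
        // Over-allocate by one page, then slice so the slice's start
        // falls on a page boundary.
        return ByteBuffer.allocateDirect(size + pageSize)
                         .alignedSlice(pageSize);
    }

    public static void main(String[] args) {
        ByteBuffer buf = pageAligned(4096, 4096);
        System.out.println(buf.isDirect());         // true
        System.out.println(buf.capacity() >= 4096); // true
    }
}
```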

Memory paging

To support the second advantage of virtual memory (an addressable space larger than physical memory), virtual memory paging (often called page swapping) is required. This scheme relies on the fact that pages of virtual memory space can be persisted to external disk storage, which makes room in physical memory for other virtual pages. In essence, physical memory acts as a cache for the paging area, which is the space on disk where the contents of memory pages are saved when they are forced out of physical memory.

Sizing memory pages as multiples of the disk block size allows the kernel to issue commands directly to the disk controller hardware to write memory pages out to disk or reload them when needed. It turns out that all disk I/O is done at the page level. This is the only way data ever moves between disk and physical memory on modern paged operating systems.

Modern CPUs contain a subsystem called the memory management unit (MMU). This device logically sits between the CPU and physical memory. It contains the mapping information needed to translate virtual addresses into physical memory addresses. When the CPU references a memory location, the MMU determines which page the location resides in (usually by shifting or masking some bits of the address) and translates the virtual page number to a physical page number (this is implemented in hardware and is extremely fast).
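The shift-and-mask step the MMU performs in hardware can be illustrated in plain Java. A 4 KiB page size is assumed here purely for the example:

```java
public class PageMath {
    static final int PAGE_SIZE = 4096;            // 2^12 bytes per page
    static final int PAGE_SHIFT = 12;             // log2(PAGE_SIZE)
    static final int OFFSET_MASK = PAGE_SIZE - 1; // low 12 bits

    // Virtual page number: shift the offset bits away.
    static long pageNumber(long virtualAddress) {
        return virtualAddress >>> PAGE_SHIFT;
    }

    // Offset within the page: keep only the low bits.
    static long pageOffset(long virtualAddress) {
        return virtualAddress & OFFSET_MASK;
    }

    public static void main(String[] args) {
        long addr = 0x12345L;
        System.out.println(pageNumber(addr)); // 18  (0x12345 >> 12 = 0x12)
        System.out.println(pageOffset(addr)); // 837 (0x345)
    }
}
```

Because the page size is a power of 2, splitting an address into (page number, offset) costs only a shift and a mask, which is why power-of-2 page sizes simplify addressing.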

File-oriented, block I/O

File I/O always occurs within the context of a file system, which is quite different from a disk. Disks store data in sectors, usually 512 bytes each. They are hardware devices that know nothing about the semantics of the files saved on them; they simply provide a number of slots where data can be stored. In this respect, the sectors of a disk are similar to memory pages: they all have a uniform size and form one large addressable array.

A file system, on the other hand, is a higher level of abstraction. It is a particular way of arranging and interpreting data on a disk (or any other random-access, block-oriented device). The code you write almost always interacts with a file system rather than with a disk directly. The file system defines abstractions such as file names, paths, files, and file attributes.

A file system organizes a sequence of uniformly sized data blocks (on a hard disk). Some blocks hold meta-information, such as maps of free blocks, directories, indexes, and so on. Other blocks contain the actual file data. The meta-information about an individual file describes which blocks contain its data, where the data ends, when it was last updated, and so on. When a user process sends a request to read file data, the file system determines the exact locations on disk and then takes action to bring those disk sectors into memory.

The file system also has the concept of a page, which may be the same size as a basic memory page or a multiple of it. A typical file system page size ranges from 2048 to 8192 bytes and is always a multiple of the basic memory page size.
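Since Java 10, the allocation block size of the underlying file system can be queried directly. A small sketch; the fallback value of 4096 is an assumption for stores that do not support the query, and the helper name blockSize is illustrative:

```java
import java.io.IOException;
import java.nio.file.FileStore;
import java.nio.file.Files;
import java.nio.file.Path;

public class BlockSize {
    // Ask the file system that holds 'p' for its allocation block size.
    // FileStore.getBlockSize() (Java 10+) exposes the value; some exotic
    // stores may not support the query, hence the fallback.
    static long blockSize(Path p) throws IOException {
        FileStore store = Files.getFileStore(p);
        try {
            return store.getBlockSize();
        } catch (UnsupportedOperationException e) {
            return 4096; // common default, used here only as a fallback
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("blk", ".tmp");
        System.out.println(blockSize(tmp)); // e.g. 4096 on many systems
        Files.delete(tmp);
    }
}
```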

A paged file system performs I/O in roughly the following logical steps:

Determine which file system pages (collections of disk sectors) the request spans. The file content and metadata on disk may be spread across multiple file system pages, which may be non-contiguous.

Allocate enough kernel-space memory pages to hold those file system pages.

Establish mappings between those memory pages and the file system pages on disk.

Generate a page fault for each memory page.

The virtual memory system traps the page faults and schedules page-ins to bring the pages in by reading them from disk.

Once the page-ins are complete, the file system breaks down the raw data to extract the requested file content or attribute information.
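The steps above are essentially what a memory-mapped file makes explicit: FileChannel.map() establishes the page mappings, and the first touch of each page triggers the page-in. A minimal sketch, using a temporary file purely for illustration:

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MappedRead {
    // Map the whole file into memory and read it back as a String.
    // There are no explicit read() calls: accessing the buffer faults
    // the pages in from disk, exactly as in the steps above.
    static String mapAndRead(Path file) throws IOException {
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
            MappedByteBuffer buf =
                ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
            byte[] bytes = new byte[buf.remaining()];
            buf.get(bytes); // each page is faulted in on first touch
            return new String(bytes, StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("map", ".txt");
        Files.writeString(tmp, "paged in on demand");
        System.out.println(mapAndRead(tmp)); // prints "paged in on demand"
        Files.delete(tmp);
    }
}
```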

It is important to note that this file system data is cached like any other memory pages. On subsequent I/O requests, some or all of the file data may still be present in physical memory and can be reused directly without re-reading it from disk.

File locking

File locking is a mechanism by which one process can prevent or restrict other processes' access to a file. Although it is called "file locking", which suggests locking an entire file (and that is often done), locking is usually available at a finer-grained level. File regions are usually locked, with granularity down to the byte level. A lock is associated with a specific file, beginning at a specified byte position in that file and running for a specified byte range. This is important because it allows many processes to coordinate access to specific regions of a file without impeding other processes working elsewhere in the file.

There are two kinds of file locks: shared and exclusive. Multiple shared locks can be in effect on the same file region at the same time. An exclusive lock, on the other hand, requires that no other locks be in effect on the requested region.
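Region locking of both kinds is available in Java through FileChannel.lock(position, size, shared). The sketch below takes an exclusive lock on the first 128 bytes of a temporary file; the file and the helper name lockRegion are illustrative:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class RegionLock {
    // Take an exclusive lock on bytes [start, start+len) of the file,
    // leaving the rest of the file free for other processes to lock.
    static FileLock lockRegion(FileChannel ch, long start, long len)
            throws IOException {
        return ch.lock(start, len, /* shared = */ false);
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("lock", ".dat");
        try (FileChannel ch = FileChannel.open(tmp,
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            FileLock lock = lockRegion(ch, 0, 128);
            System.out.println(lock.isValid());  // true
            System.out.println(lock.isShared()); // false (exclusive)
            lock.release();
        }
        Files.delete(tmp);
    }
}
```

Note that a shared lock requires the channel to be open for reading and an exclusive lock requires it to be open for writing, which is why the example opens the file with both options.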

Stream I/O

Not all I/O is block-oriented. There is also stream I/O, modeled on a pipe, where the bytes of the I/O data stream must be accessed sequentially. Common examples of data streams are TTY (console) devices, printer ports, and network connections.

Data streams are usually, but not necessarily, slower than block devices, and they often deliver input intermittently. Most operating systems allow streams to be placed in non-blocking mode, which lets a process check whether input is available on the stream without getting stuck when none is. Such a capability allows a process to handle input as it arrives and perform other functions while the input stream is idle.

One step beyond non-blocking mode is readiness selection. It is similar to non-blocking mode (and is often built on top of it), but it offloads to the operating system the work of checking whether a stream is ready. The operating system can be told to watch a collection of streams and return an indication to the process of which of those streams are ready. This ability allows a process to multiplex many active streams using common code and a single thread, by leveraging the readiness information returned by the operating system. This technique is widely used in network servers to handle large numbers of network connections, and readiness selection is essential for high-volume scaling.
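Readiness selection is exactly what java.nio.channels.Selector provides. The sketch below registers a non-blocking server channel for OP_ACCEPT and lets the operating system report when it becomes ready; the loopback client connection and the helper name waitUntilAcceptable are illustrative:

```java
import java.io.IOException;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class Readiness {
    // Block until at least one registered channel is ready (or the
    // timeout expires); returns the number of ready channels the
    // operating system reported via the selector.
    static int waitUntilAcceptable(Selector sel, long timeoutMs)
            throws IOException {
        return sel.select(timeoutMs);
    }

    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(InetAddress.getLoopbackAddress(), 0));
        server.configureBlocking(false); // required before registering
        server.register(selector, SelectionKey.OP_ACCEPT);

        // An incoming connection makes the server channel "ready".
        SocketChannel client = SocketChannel.open(server.getLocalAddress());
        int ready = waitUntilAcceptable(selector, 5000);
        System.out.println(ready); // 1: the server channel is acceptable

        client.close();
        server.close();
        selector.close();
    }
}
```

In a real server, the thread would loop over selector.selectedKeys(), accepting or servicing each ready channel, which is how one thread multiplexes many connections.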

At this point, I believe you have a deeper understanding of how Android I/O works at the bottom; why not try it out in practice! For more related content, follow us and keep learning!
