2025-01-14 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 05/31 Report --
This article introduces the direct I/O mechanism in Linux: why it exists, how it works, and how it is implemented. The content is detailed but easy to follow, and should make a useful reference. Let's take a look.
The motivation for direct I/O
Before introducing direct I/O, this section first explains why such a mechanism exists, that is, what the shortcomings of traditional I/O are.
What is buffered I/O (cached I/O)
The default mode of operation for most file systems is buffered I/O, also known as standard I/O. In Linux's buffered I/O mechanism, the operating system caches data in the file system's page cache: data is first copied into an operating system kernel buffer, and only then copied from that kernel buffer into the application's address space. Buffered I/O has the following advantages:
Buffered I/O uses the operating system kernel buffer, which to some extent decouples the application space from the actual physical device.
Buffered I/O can improve performance by reducing the number of disk reads.
When an application tries to read a piece of data, if that data is already in the page cache it can be returned to the application immediately, without an actual physical disk read. If the data is not yet in the page cache, it must first be read from disk into the page cache. For write operations, the application likewise writes the data to the page cache first. Whether the data then reaches the disk immediately depends on the write mechanism the application uses. With synchronous writes, the data is written back to disk at once, and the application waits until the write completes. With deferred writes, the application does not wait for the data to reach disk; it returns as soon as the data is in the page cache, and the operating system periodically flushes the cached data to disk. Deferred writes differ from asynchronous writes in one important way: a deferred write never notifies the application when the data has actually been written to disk, whereas an asynchronous write notifies the application once the data has been fully written out. The deferred write mechanism therefore carries an inherent risk of data loss, while the asynchronous write mechanism does not.
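The difference between deferred and explicitly flushed writes can be observed from user space. The sketch below (the path /tmp/dw_demo.txt and the helper names are invented for this example) shows that write() completes once the data is in the page cache, and that fsync() is what forces the deferred data out to stable storage:

```c
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Write data, then force the page cache to flush it with fsync().
 * Without the fsync(), the kernel's deferred-write mechanism would
 * write the page back at some later time of its own choosing. */
int write_durably(const char *path, const char *data, size_t len)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return -1;
    ssize_t n = write(fd, data, len);   /* lands in the page cache */
    if (n != (ssize_t)len) {
        close(fd);
        return -1;
    }
    if (fsync(fd) < 0) {                /* block until it reaches disk */
        close(fd);
        return -1;
    }
    return close(fd);
}

/* Read the file back to confirm the data survived the flush. */
int read_back(const char *path, char *buf, size_t len)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;
    ssize_t n = read(fd, buf, len);
    close(fd);
    return (int)n;
}
```
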
Disadvantages of buffered I/O
With DMA, data can be read from disk directly into the page cache, or written back to disk directly from the page cache, rather than being transferred directly between the application address space and the disk. As a result, data must be copied between the application address space and the page cache during every transfer, and the CPU and memory overhead of these copy operations is considerable.
For some special applications, bypassing the operating system kernel buffer and transferring data directly between the application address space and the disk achieves better performance than going through the kernel buffer. The self-caching applications discussed in the next section are one example.
Self-caching applications
Some applications have their own data-caching mechanisms: they cache data in their own address space and do not need the cache in the operating system kernel at all. Such applications are called self-caching applications, and database management systems are a representative example. Self-caching applications tend to use logical rather than physical representations of data; when system memory runs low, they allow the logical cache of the data to be swapped out rather than the actual data on disk. Because a self-caching application knows the semantics of the data it operates on intimately, it can use a more efficient cache-replacement algorithm. Self-caching applications may also share a region of memory among multiple hosts, so they need to provide a mechanism that can effectively invalidate cached data in user address space; this ensures the consistency of the cached data in the application address space.
Buffered I/O is clearly not a good choice for self-caching applications. This is where Linux's direct I/O, the focus of this article, comes in. Direct I/O in Linux is well suited to such applications: it omits the operating system kernel buffer used by buffered I/O and transfers data directly between the application address space and the disk, so a self-caching application can bypass the complex system-level cache structure and perform the read/write management it defines itself, reducing the impact of system-level management on the application's access to its data. The following sections focus on the design and implementation of the direct I/O mechanism provided in Linux, which gives good support to self-caching applications.
Direct I/O in Linux 2.6
File access methods provided in Linux 2.6
All I/O operations are performed by reading or writing files. Here all peripherals, including keyboards and displays, are treated as files in the file system. There are a variety of ways to access files; the following file access methods are supported in Linux 2.6.
Standard way to access files
In Linux, this way of accessing files is implemented through two system calls: read() and write(). When an application calls the read() system call to read a piece of data, if that block of data is already in memory it is read from memory and returned to the application directly; if it is not in memory, the data is read from disk into the page cache and then copied from the page cache into the user address space. For write operations, when a process calls the write() system call to write data to a file, the data is copied from the user address space into the page cache in the operating system kernel address space, and written to disk later. With this standard way of accessing files, the write() system call completes once the data has been written to the page cache; it does not wait for the data to actually reach the disk. Linux here uses the deferred write mechanism (deferred writes) mentioned earlier.
Accessing files synchronously
Synchronous file access is similar to the standard file access described above. The key difference between the two is that with synchronous access a write operation completes only when the data has been fully written back to disk, whereas a standard write completes as soon as the data has been written to the page cache.
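From user space, synchronous access corresponds to opening the file with the O_SYNC flag. A minimal sketch, assuming a writable temporary path of my own choosing: with O_SYNC set at open() time, each write() returns only after the data has been written out, matching the behavior described above.

```c
#include <fcntl.h>
#include <unistd.h>

/* With O_SYNC, each write() blocks until the data has reached stable
 * storage, mirroring the synchronous access mode described above. */
int sync_write_demo(const char *path, const char *data, size_t len)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC | O_SYNC, 0644);
    if (fd < 0)
        return -1;
    ssize_t n = write(fd, data, len);   /* returns only after the disk write */
    close(fd);
    return n == (ssize_t)len ? 0 : -1;
}
```
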
Memory mapping mode
In many operating systems, including Linux, a memory region can be associated with an ordinary file or with part of a block device file. If a process accesses a byte of data in such a memory page, the operating system translates that memory access into the corresponding access to a byte of the file. Linux provides the mmap() system call to implement this file access method. Compared with the standard file access mode, memory mapping avoids the data copy incurred by the read() system call, that is, the copy between the user address space and the operating system kernel address space. Mapping usually suits large amounts of data: for data of the same length, the cost of setting up a mapping is much lower than that of a CPU copy, so when a large amount of data needs to be transferred, accessing the file through a memory mapping is more efficient.
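The paragraph above can be illustrated with a small sketch (the path and helper names below are invented for this example): the file is read through an mmap() mapping instead of read(), so touching the mapped bytes is what faults the pages in.

```c
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Create a small file to map (test scaffolding only). */
int make_file(const char *path, const char *data, size_t len)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return -1;
    ssize_t n = write(fd, data, len);
    close(fd);
    return n == (ssize_t)len ? 0 : -1;
}

/* Read a file's first bytes through a memory mapping instead of read().
 * Touching the mapped bytes makes the kernel fault the pages in from the
 * page cache; no read() copy into a user buffer is issued. */
ssize_t mmap_read(const char *path, char *out, size_t len)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;
    struct stat st;
    if (fstat(fd, &st) < 0 || (size_t)st.st_size < len) {
        close(fd);
        return -1;
    }
    void *p = mmap(NULL, len, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);                        /* the mapping survives the close */
    if (p == MAP_FAILED)
        return -1;
    memcpy(out, p, len);              /* access faults pages in on demand */
    munmap(p, len);
    return (ssize_t)len;
}
```
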
Direct I/O mode
When data is transferred with direct I/O, it moves directly between the buffer in the user address space and the disk, with no involvement of the page cache at all. The caches provided by the operating system often give applications better read/write performance, but some special applications, such as database management systems, prefer their own caching mechanism: a database management system tends to know the data stored in the database better than the operating system does, and can therefore provide a more effective caching mechanism to improve access performance.
Accessing files asynchronously
Asynchronous I/O is a standard feature of Linux 2.6. The essential idea is that after a process issues a data-transfer request it is not blocked and does not have to wait for the operation to finish; it can continue doing other work while the data transfer is in progress. Compared with synchronous access, asynchronous access can improve application efficiency and system resource utilization. Direct I/O is often used together with asynchronous file access.
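A minimal POSIX AIO sketch of the idea (the file path and helper names are invented for this example, and on glibc versions before 2.34 the program needs to be linked with -lrt): aio_read() returns immediately after queueing the request, so the process could do other work before collecting the result.

```c
#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Create a small file to read back asynchronously (test scaffolding). */
int make_aio_file(const char *path, const char *data, size_t len)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return -1;
    ssize_t n = write(fd, data, len);
    close(fd);
    return n == (ssize_t)len ? 0 : -1;
}

/* Submit an asynchronous read: aio_read() returns at once and the
 * result is collected later with aio_suspend()/aio_return(). */
ssize_t aio_read_file(const char *path, char *buf, size_t len)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;

    struct aiocb cb;
    memset(&cb, 0, sizeof cb);
    cb.aio_fildes = fd;
    cb.aio_buf = buf;
    cb.aio_nbytes = len;
    cb.aio_offset = 0;

    if (aio_read(&cb) < 0) {          /* request queued, call returns */
        close(fd);
        return -1;
    }
    const struct aiocb *list[1] = { &cb };
    while (aio_error(&cb) == EINPROGRESS)
        aio_suspend(list, 1, NULL);   /* wait for completion */
    ssize_t n = aio_return(&cb);
    close(fd);
    return n;
}
```
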
The design and implementation of direct I/O in Linux 2.6
For block devices and network devices, implementing direct I/O requires no special effort: the high-level code in the Linux 2.6 kernel already sets up and uses direct I/O, and driver-level code does not even need to know that direct I/O is being performed. For character devices, however, direct I/O does not come for free; Linux 2.6 provides the function get_user_pages() for implementing it. This section introduces the two cases in turn.
Support provided by the kernel for direct I/O on block devices
To perform direct I/O on a block device, the process must set the O_DIRECT access-mode flag when opening the file. This tells the operating system that subsequent read() or write() system calls on the file should use direct I/O: the transferred data will not pass through the operating system kernel cache. You must pay attention to buffer alignment and buffer size, that is, the second and third parameters of the read() and write() system calls. Alignment here means alignment to the file system's block size, and the buffer size must also be an integral multiple of the block size.
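The alignment rules can be demonstrated from user space. This is a sketch under stated assumptions, not a definitive recipe: the 4 KiB block size and the file path are assumptions of the example, and filesystems without direct I/O support (tmpfs, for instance) will simply refuse the O_DIRECT open, which the code tolerates.

```c
#define _GNU_SOURCE               /* for O_DIRECT */
#include <fcntl.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define BLK 4096                  /* assumed filesystem block size */

/* Allocate a block-aligned buffer and try to write it with O_DIRECT.
 * Both the buffer address and the transfer size must be multiples of
 * the block size, as direct I/O requires.
 * Returns 0 on a successful direct write, 1 if the environment does
 * not support O_DIRECT here, -1 on an allocation error. */
int direct_write_demo(const char *path)
{
    void *buf = NULL;
    if (posix_memalign(&buf, BLK, BLK) != 0)    /* aligned allocation */
        return -1;
    memset(buf, 'x', BLK);

    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC | O_DIRECT, 0644);
    if (fd < 0) {                 /* filesystem may not support O_DIRECT */
        free(buf);
        return 1;
    }
    ssize_t n = write(fd, buf, BLK);  /* bypasses the page cache */
    close(fd);
    free(buf);
    return n == BLK ? 0 : 1;
}

/* Returns 1 when addr and size satisfy the direct I/O alignment rule. */
int dio_aligned(uintptr_t addr, size_t size)
{
    return (addr % BLK == 0) && (size % BLK == 0);
}
```
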
This section focuses on three functions: open(), read(), and write(). Linux supports a variety of file access methods, so these functions define different processing paths for the different access methods; this article mainly covers the behavior related to direct I/O. First, let's look at the open() system call, whose prototype is as follows:
int open(const char *pathname, int oflag, ... /*, mode_t mode */);
The flag macros accepted by the open() system call, as defined by the Linux 2.6 kernel, are listed below:
Table 1. Flags accepted by the open() system call

O_RDONLY: open the file read-only
O_WRONLY: open the file write-only
O_RDWR: open the file for both reading and writing
O_CREAT: create the file if it does not exist
O_EXCL: open the file in exclusive mode; if O_EXCL and O_CREAT are both set, the open fails if the file already exists
O_NOCTTY: if set, the file cannot become the process's controlling terminal
O_TRUNC: truncate the file to zero length if it exists
O_APPEND: if set, the file pointer is moved to the end of the file before each write
O_NONBLOCK: open the file in non-blocking mode
O_NDELAY: same as O_NONBLOCK; if both O_NDELAY and O_NONBLOCK are set, O_NONBLOCK takes effect
O_SYNC: affects write operations on regular files; if set, a write to the file does not complete until the data has been written to disk
FASYNC: if set, I/O event notification is delivered by signal
O_DIRECT: enables direct I/O on the file
O_LARGEFILE: enables support for files larger than 2 GB
O_DIRECTORY: the opened file must be a directory, otherwise the open fails
O_NOFOLLOW: if set, a symbolic link at the end of the path name is not followed
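As a small illustration of the flag semantics in Table 1 (the path used below is arbitrary), O_CREAT | O_EXCL makes file creation an atomic test-and-create: a second exclusive create of the same file fails with EEXIST.

```c
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/* O_CREAT | O_EXCL: open fails with EEXIST when the file already
 * exists, so creation doubles as an atomic existence check. */
int create_exclusive(const char *path)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_EXCL, 0644);
    if (fd < 0)
        return -errno;            /* -EEXIST if it already exists */
    close(fd);
    return 0;
}
```
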
When an application needs to access a file directly, bypassing the operating system's page cache, it specifies the O_DIRECT flag when opening the file.
The kernel function that handles the open() system call is sys_open(), which calls do_sys_open() to do the main work. do_sys_open() does three things: first it calls getname() to read the file's pathname from the process address space; then it calls get_unused_fd() to find a free file-table slot in the process's file table, storing the corresponding new file descriptor in the local variable fd; finally, do_filp_open() performs the actual open operation according to the parameters passed in. Listing 1 shows the main call graph for handling the open() system call in the kernel.
Listing 1. Main call graph

sys_open()
 |-- do_sys_open()
      |-- getname()
      |-- get_unused_fd()
      |-- do_filp_open()
           |-- nameidata_to_filp()
                |-- __dentry_open()
The function do_filp_open() calls nameidata_to_filp() during execution, and nameidata_to_filp() eventually calls the __dentry_open() function. If the process specified the O_DIRECT flag, this function checks whether a direct I/O operation can be applied to the file. Listing 2 shows the code in __dentry_open() related to direct I/O.
Listing 2. Code related to direct I/O in the function __dentry_open()
if (f->f_flags & O_DIRECT) {
    if (!f->f_mapping->a_ops ||
        ((!f->f_mapping->a_ops->direct_IO) &&
         (!f->f_mapping->a_ops->get_xip_page))) {
        fput(f);
        f = ERR_PTR(-EINVAL);
    }
}
When a file is opened with the O_DIRECT flag, the operating system knows that subsequent reads or writes of the file are to use the direct I/O mode.
Let's now look at what the system does when a process reads, via the read() system call, a file opened with the O_DIRECT flag. The prototype of read() is as follows:
ssize_t read(int filedes, void *buff, size_t nbytes);
The entry function that handles read() in the operating system is sys_read(); its main call graph is shown in Listing 3:
Listing 3. Main call graph

sys_read()
 |-- vfs_read()
      |-- generic_file_read()
           |-- __generic_file_aio_read()
                |-- generic_file_direct_IO()
After obtaining the file descriptor and the current position within the file from the process, sys_read() calls vfs_read() to perform the actual operation, and vfs_read() in turn invokes the read operation in the file structure, namely the generic_file_read() function. The code is as follows:
Listing 4. Function generic_file_read()

ssize_t generic_file_read(struct file *filp, char __user *buf,
                          size_t count, loff_t *ppos)
{
    struct iovec local_iov = { .iov_base = buf, .iov_len = count };
    struct kiocb kiocb;
    ssize_t ret;

    init_sync_kiocb(&kiocb, filp);
    ret = __generic_file_aio_read(&kiocb, &local_iov, 1, ppos);
    if (-EIOCBQUEUED == ret)
        ret = wait_on_sync_kiocb(&kiocb);
    return ret;
}
The function generic_file_read() initializes an iovec and a kiocb descriptor. The iovec descriptor stores two things: the address of the user address-space buffer that will receive the read data, and the size of that buffer; the kiocb descriptor tracks the completion status of the I/O operation. generic_file_read() then calls __generic_file_aio_read(), which checks that the user address-space buffer described by the iovec is usable, then checks the access mode and, if the O_DIRECT flag is set, executes the code associated with direct I/O. The direct-I/O-related code in __generic_file_aio_read() is as follows:
Listing 5. Code related to direct I/O in the function __generic_file_aio_read()
if (filp->f_flags & O_DIRECT) {
    loff_t pos = *ppos, size;
    struct address_space *mapping;
    struct inode *inode;

    mapping = filp->f_mapping;
    inode = mapping->host;
    retval = 0;
    if (!count)
        goto out;
    size = i_size_read(inode);
    if (pos < size) {
        retval = generic_file_direct_IO(READ, iocb, iov, pos, nr_segs);
        if (retval > 0 && !is_sync_kiocb(iocb))
            retval = -EIOCBQUEUED;
        if (retval > 0)
            *ppos = pos + retval;
    }
    file_accessed(filp);
    goto out;
}
The code above mainly checks the value of the file pointer, the size of the file, and the number of bytes requested; it then calls generic_file_direct_IO(), passing it the operation type READ, the kiocb descriptor, the iovec descriptors, the current file pointer value, and the number of user address-space buffers specified in the iovec array. When generic_file_direct_IO() returns, __generic_file_aio_read() completes the remaining work: it updates the file pointer and sets the access timestamp on the file's inode; once these operations finish, the function returns. generic_file_direct_IO() takes five parameters, with the following meanings:
rw: operation type, READ or WRITE
iocb: pointer to the kiocb descriptor
iov: pointer to an array of iovec descriptors
offset: offset within the file
nr_segs: number of iovec descriptors in the iov array
The code of the function generic_file_direct_IO() is as follows:
Listing 6. Function generic_file_direct_IO()

static ssize_t
generic_file_direct_IO(int rw, struct kiocb *iocb, const struct iovec *iov,
                       loff_t offset, unsigned long nr_segs)
{
    struct file *file = iocb->ki_filp;
    struct address_space *mapping = file->f_mapping;
    ssize_t retval;
    size_t write_len = 0;

    if (rw == WRITE) {
        write_len = iov_length(iov, nr_segs);
        if (mapping_mapped(mapping))
            unmap_mapping_range(mapping, offset, write_len, 0);
    }

    retval = filemap_write_and_wait(mapping);
    if (retval == 0) {
        retval = mapping->a_ops->direct_IO(rw, iocb, iov,
                                           offset, nr_segs);
        if (rw == WRITE && mapping->nrpages) {
            pgoff_t end = (offset + write_len - 1) >> PAGE_CACHE_SHIFT;
            int err = invalidate_inode_pages2_range(mapping,
                          offset >> PAGE_CACHE_SHIFT, end);
            if (err)
                retval = err;
        }
    }
    return retval;
}
The function generic_file_direct_IO() does some special handling for the WRITE operation type, which is explained later when the write() system call is introduced. Apart from that, it mainly calls the direct_IO method to perform the direct read or write. Before a direct I/O read is performed, the relevant dirty data in the page cache is flushed back to disk, which ensures that current data is read from the disk. The direct_IO method here ultimately corresponds to the __blockdev_direct_IO() function, whose code is as follows:
Listing 7. Function __blockdev_direct_IO()

ssize_t
__blockdev_direct_IO(int rw, struct kiocb *iocb, struct inode *inode,
    struct block_device *bdev, const struct iovec *iov, loff_t offset,
    unsigned long nr_segs, get_block_t get_block, dio_iodone_t end_io,
    int dio_lock_type)
{
    int seg;
    size_t size;
    unsigned long addr;
    unsigned blkbits = inode->i_blkbits;
    unsigned bdev_blkbits = 0;
    unsigned blocksize_mask = (1 << blkbits) - 1;
    ssize_t retval = -EINVAL;
    loff_t end = offset;
    struct dio *dio;
    int release_i_mutex = 0;
    int acquire_i_mutex = 0;

    if (bdev)
        bdev_blkbits = blksize_bits(bdev_hardsect_size(bdev));

    if (offset & blocksize_mask) {
        if (bdev)
            blkbits = bdev_blkbits;
        blocksize_mask = (1 << blkbits) - 1;
        if (offset & blocksize_mask)
            goto out;
    }

    /* Check the memory alignment.  Blocks cannot straddle pages. */
    for (seg = 0; seg < nr_segs; seg++) {
        addr = (unsigned long)iov[seg].iov_base;
        size = iov[seg].iov_len;
        end += size;
        if ((addr & blocksize_mask) || (size & blocksize_mask)) {
            if (bdev)
                blkbits = bdev_blkbits;
            blocksize_mask = (1 << blkbits) - 1;
            if ((addr & blocksize_mask) || (size & blocksize_mask))
                goto out;
        }
    }

    dio = kmalloc(sizeof(*dio), GFP_KERNEL);
    retval = -ENOMEM;
    if (!dio)
        goto out;

    dio->lock_type = dio_lock_type;
    if (dio_lock_type != DIO_NO_LOCKING) {
        /* watch out for a 0 len io from a tricksy fs */
        if (rw == READ && end > offset) {
            struct address_space *mapping;

            mapping = iocb->ki_filp->f_mapping;
            if (dio_lock_type != DIO_OWN_LOCKING) {
                mutex_lock(&inode->i_mutex);
                release_i_mutex = 1;
            }

            retval = filemap_write_and_wait_range(mapping, offset,
                                                  end - 1);
            if (retval) {
                kfree(dio);
                goto out;
            }

            if (dio_lock_type == DIO_OWN_LOCKING) {
                mutex_unlock(&inode->i_mutex);
                acquire_i_mutex = 1;
            }
        }

        if (dio_lock_type == DIO_LOCKING)
            down_read_non_owner(&inode->i_alloc_sem);
    }

    dio->is_async = !is_sync_kiocb(iocb) && !((rw & WRITE) &&
        (end > i_size_read(inode)));

    retval = direct_io_worker(rw, iocb, inode, iov, offset,
                              nr_segs, blkbits, get_block, end_io, dio);

    if (rw == READ && dio_lock_type == DIO_LOCKING)
        release_i_mutex = 0;

out:
    if (release_i_mutex)
        mutex_unlock(&inode->i_mutex);
    else if (acquire_i_mutex)
        mutex_lock(&inode->i_mutex);
    return retval;
}
This function splits the data to be read or written and checks buffer alignment. When introducing open() earlier, this article pointed out that buffer alignment must be respected when reading or writing data with direct I/O; as the code shows, that alignment check is carried out in __blockdev_direct_IO(). The user address-space buffers are determined by the iovec descriptors in the iov array. Direct I/O reads and writes are synchronous: __blockdev_direct_IO() does not return until all the operations have finished, so once the application's read() system call returns, the application can access the buffer in its address space containing the data. On the other hand, the application cannot exit until the read operation completes, which can make shutting the application down slow.
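The alignment check that __blockdev_direct_IO() performs can be mirrored in a small user-space helper. This sketch (the function and parameter names are invented here) applies the same mask test to the file offset and to each buffer segment's address and length:

```c
#include <stddef.h>
#include <stdint.h>

/* User-space sketch of the alignment test in __blockdev_direct_IO():
 * with blkbits = log2(block size), the file offset, each buffer
 * address, and each segment length must all have zero low bits under
 * the mask.  Returns 1 when everything is aligned, 0 otherwise. */
int dio_check_alignment(unsigned blkbits, uint64_t offset,
                        const uintptr_t *addrs, const size_t *lens,
                        int nr_segs)
{
    uint64_t mask = (1ULL << blkbits) - 1;   /* e.g. 0xFFF for 4 KiB */
    if (offset & mask)
        return 0;
    for (int i = 0; i < nr_segs; i++)
        if ((addrs[i] & mask) || (lens[i] & mask))
            return 0;
    return 1;
}
```
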
Next, let's look at the direct-I/O-related processing in the write() system call. The prototype of write() is as follows:
ssize_t write(int filedes, const void *buff, size_t nbytes);
The entry function in the operating system that handles the write() system call is sys_write(). The main call relationships are as follows:
Listing 8. Main call graph

sys_write()
 |-- vfs_write()
      |-- generic_file_write()
           |-- __generic_file_write_nolock()
                |-- __generic_file_aio_write_nolock()
                     |-- generic_file_direct_write()
                          |-- generic_file_direct_IO()
The function sys_write() performs almost the same steps as sys_read(): after obtaining the file descriptor and the current position within the file from the process, it calls vfs_write() to perform the actual operation, and vfs_write() in turn invokes the write operation in the file structure, namely generic_file_write(). In generic_file_write(), the function __generic_file_write_nolock() ultimately calls __generic_file_aio_write_nolock(), which checks the O_DIRECT setting and, if it is set, calls generic_file_direct_write() to perform the direct write.
The direct-I/O-related code in the function __generic_file_aio_write_nolock() is as follows:
Listing 9. Code related to direct I/O in the function __generic_file_aio_write_nolock()
if (unlikely(file->f_flags & O_DIRECT)) {
    written = generic_file_direct_write(iocb, iov, &nr_segs,
                                        pos, ppos, count, ocount);
    if (written < 0 || written == count)
        goto out;
    pos += written;
    count -= written;
}

As the code above shows, __generic_file_aio_write_nolock() calls generic_file_direct_write() to perform the direct I/O operation. Inside generic_file_direct_write(), much as in the read path, the function ultimately calls generic_file_direct_IO() to perform the direct I/O write; unlike the read path, this time the operation type WRITE is passed to generic_file_direct_IO() as a parameter.

The body of generic_file_direct_IO(), the direct_IO method __blockdev_direct_IO(), was introduced above. generic_file_direct_IO() performs some extra handling for the WRITE operation type. When the operation type is WRITE and the file being accessed with direct I/O already has memory mappings associated with one or more other processes, unmap_mapping_range() is called to remove all memory mappings established on the file, and all dirty pages in the page cache related to the file are flushed back to disk. For a direct I/O write, this guarantees that the data written to disk is up to date; otherwise, the data about to be written to disk via direct I/O could be invalidated by dirty data already sitting in the page cache. After the direct I/O write completes, the related dirty data in the page cache has been invalidated, and the contents of the disk and the page cache must remain consistent.

How to perform direct I/O in character devices

Performing direct I/O in a character device can be harmful; it is advisable only when you are sure that the overhead of buffered I/O is very large. In the Linux 2.6 kernel, the key to implementing direct I/O is the get_user_pages() function, whose prototype is as follows:

int get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
                   unsigned long start, int len, int write, int force,
                   struct page **pages, struct vm_area_struct **vmas);

The parameters of this function have the following meanings:

tsk: pointer to the process performing the mapping; this parameter tells the operating system kernel which process is responsible for any page faults produced while mapping the pages. It is almost always current.
mm: pointer to the memory-management structure describing the user address space to be mapped; this parameter is usually current->mm.
start: the start address in the user address space to be mapped.
len: the length of the buffer to map, in pages.
write: set this parameter to non-zero if write access to the mapped pages is required.
force: if set, this parameter tells get_user_pages() to provide the requested read or write access regardless of the protection on the specified memory pages.
pages: output parameter; on success it holds a list of pointers to the page structures describing the user-space pages.
vmas: output parameter; if it is not NULL, it holds pointers to the vm_area_struct structures that contain each mapped page.
The get_user_pages() function is often used together with the following functions:
void down_read(struct rw_semaphore *sem);
void up_read(struct rw_semaphore *sem);
void SetPageDirty(struct page *page);
void page_cache_release(struct page *page);
First of all, before calling get_user_pages(), you must call down_read() to take, in read mode, the reader/writer semaphore protecting the memory map of the user address space (mmap_sem); after get_user_pages() returns, call the paired up_read() to release the semaphore. If get_user_pages() fails, it returns an error code; if it succeeds, it returns the number of pages actually mapped, which may be fewer than requested. On success the mapped user pages are locked in memory, and the caller can access them through the page-structure pointers.
Callers of direct I/O must also do the follow-up work: once the direct I/O operation completes, the user memory pages must be released. Before they are released, if the contents of the pages have changed, the caller must notify the operating system kernel; otherwise the virtual memory subsystem will consider the pages clean, and the modified data will not be written back to storage before the pages are freed. Therefore, if you changed the data in a page, you must mark each changed page with the SetPageDirty() function. In Linux 2.6.18.1 this macro is defined in include/linux/page-flags.h. The code performing this operation usually first checks that the page is not in a reserved region of the memory map, since pages in that region are never swapped out. The code looks like this:
if (!PageReserved(page))
    SetPageDirty(page);
However, since pages mapped from user space are not usually marked reserved, the check above is not strictly required.
Finally, once the direct I/O operation completes, the pages must be released from the page cache whether or not they were changed; otherwise those pages stay there forever. The function page_cache_release() releases them; after the pages are released, the caller must not access them again.
The file drivers/scsi/st.c in the Linux 2.6.18.1 source gives a complete example of how to add direct I/O support to a character device driver. Its functions sgl_map_user_pages() and sgl_unmap_user_pages() cover almost everything described in this section.
Characteristics of direct I/O
Advantages of direct I/O
The main advantage of direct I/O is that, by eliminating the copies between the operating system kernel buffer and the application address space, it reduces CPU usage and memory-bandwidth consumption when reading and writing files. For some special applications, such as self-caching applications, it is a good choice. When the amount of data to transfer is very large, transferring it with direct I/O, without the kernel address-space copy, can greatly improve performance.
Potential problems with direct I/O
Direct I/O does not always deliver the performance leap one might hope for. The cost of setting up direct I/O is high, and it offers none of the advantages of buffered I/O. A buffered read can be satisfied from the cache, while a direct I/O read always results in a synchronous disk read; this difference can make a process take noticeably longer to complete. For write operations, using direct I/O requires the I/O system call to execute synchronously, since otherwise the application would not know when it can reuse its I/O buffer; like a direct I/O read, this can make the application slow to finish. Consequently, applications that use direct I/O for data transfer usually combine it with asynchronous I/O.
This is the end of the article on how Linux implements the direct I/O mechanism. Thank you for reading! If you want to learn more, you are welcome to follow the industry information channel.