
Is the UNIX idea that everything is a file correct?

2025-02-24 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/03 Report--

"Everything is X" is the rallying cry of believers in essentialism and holism.

Preface: tree models have no direct relationship with "everything is a file," but their stories are strikingly similar.

I recently read a book my wife bought, "Visual Beauty". In ancient times people had a passion for trees, and eventually organizational structures, classification schemes included, became trees. Reading the later chapters, I found that the tree model was not the essence of things after all: it was an artificial creation of the Fertile Crescent of Mesopotamia around 10000 BC. That is, the agricultural revolution gave rise to the idea of the tree model.

The tree model has a sense of order: any two nodes are connected by a single directed path, and any node can trace its ancestors and be traced by its descendants. Everything seems perfect; the model can even cover everything in the world. Take taxonomy as an example: the tree of the ancient Greek philosopher Porphyry is typical. He defined all the attributes in advance and then classified things by "has the attribute or not." In the end, everything could find a leaf on the tree as its place.

Just when everything seemed to work well under the tree model (economics, politics, corporate management, and so on), people discovered that the more essential idea is interconnection: once the link is everything, the tree model no longer applies. Today we live in an era of complex networks, and complex networks are everywhere, including social circles, cities, stocks, weather, and our brains themselves. Yet we cannot dismiss the ancients who proposed that "everything is a tree"; it is precisely their simple model that allowed human civilization to be born and to develop. Today we are bewildered by chaos, and eventually we will find that chaos is orderly chaos.

What I want to say is that even though some ideas were eventually proved wrong, you still had to start with them, because simplicity is the driving force of progress. When you hit an obstacle or bottleneck, things develop horizontally, toward complexity and chaos; but complexity and chaos are not the goal. They are attempts to find an opening to continue upward: make some adjustments, cross the obstacle, and continue simply upward, the way a growing plant stays full of vitality. "Everything is a file" is one of UNIX's tenets, and today it too faces some challenges.

Now for a counterexample to "everything is a file": procfs, the process file system. It is an in-memory file system that exposes process state and related data on UNIX systems. It has a long history and may have been the de facto evangelist of "everything is a file" from the very beginning. You see, even process state can be represented as a file.

"Everything is a file" originally meant this: file operations give a unified, simple interface. In the 1000 BC of computing, people reduced all operations to read, write, and control, so read, write, and ioctl became the oldest set of file operations. If you try to map all operations onto file operations, you must establish a series of mappings, and these mappings separate mechanism from policy. The mapping is one-to-many: a unified set of operation primitives represents the mechanism, while the many different implementations represent policies. And so VFS was born!

With VFS it became convenient to implement anything as a file system. Linux does this to this day and shows no sign of stopping where UNIX did (UNIX, of course, came first). But once enough of this has been done and security requirements multiply, just as the IP network later discovered, the simple owner-based ACL is no longer enough to express all the security-control rules. The crux of the problem is that the type and number of file systems enabled by VFS are unbounded, while UNIX's file ACL is fixed. So another mapping parallel to VFS is needed, another mechanism-to-policy mapping, which I would call VACL if I may.

VACL never appeared, however, because ACLs are too coarse-grained and their semantics concern only the file owner: they say "may or may not" but never "if allowed, what exactly may be done." Later, something reminiscent of Porphyry's attribute-based classification emerged, called "capability": every conceivable operation is represented by one binary bit, set to 1 if an entity has permission for that operation and 0 otherwise. This produced the UNIX capability model, POSIX capabilities, and everything looked perfect. But unlike faceted indexing, which later largely displaced tree-structured indexing (in fact, to this day people still advocate the tree model!), POSIX capabilities are not very easy to use, and procfs in particular is ill-suited to capability-based security management!

Everything is visible in procfs. Its contents are generated automatically: each process gets a directory, and under that directory sit the process's attributes as files. So I ask: who defines the capabilities on these files' operations? If the system does, how are they defined each time the system spawns a process? If the user does, that undoubtedly inserts a hook between fork/exec and procfs, which is far too complicated. The original purpose of procfs was simple, of two kinds:

1. Export system information

2. Export process information

Either way, the point is to aid debugging; in no case should procfs be made to do things that violate UNIX principles. The first problem is that the information procfs exports includes the process address space, yet isolating process address spaces is a fundamental principle of UNIX and indeed of every operating system. Once something appears in procfs it can potentially be read, written, mmap'ed... Many UNIX systems, including BSD, have had problems because of this, so later versions simply removed procfs. The second question is whether kernel space should handle information formatting. Because VFS hooks into the kernel, the operations of the actual file systems are also implemented in kernel mode, so a great deal of text formatting ends up inside the kernel, where it does not belong; but exporting raw binary data instead would violate procfs's original intent. Dismissing procfs outright rather than fixing its bugs and problems reflects the purism of UNIX design; Linux's approach, by contrast, is eclecticism.

There is much debate about the fate of procfs, revolving around two positions:

1. procfs should be dismissed

Replace procfs entirely with the sysctl interface. Since the address space, itself part of a process's attributes, cannot be exported anyway, why keep the remaining 90%+?

2. procfs should be retained

Since sysctl is not part of the standard toolset on every UNIX system, procfs, with its unified interface, should be retained.

Either way, it is a debate about UNIX philosophy. Linux set the debate aside entirely and implemented its own procfs.

Rather than abandoning procfs, eclectic Linux fixed its key problems; the Linux community does not much care about the other, non-critical ones. In defining procfs's VFS operation set, Linux adopts the following:

    #define mem_write NULL

    #ifndef mem_write
    /* a scary comment! */
    /* This is a security hazard */
    static ssize_t mem_write(struct file *file, const char *buf, ...)
    #endif

    static struct file_operations proc_mem_operations = {
        .llseek = mem_lseek,
        /* read is heavily restricted: access to another process's
           address space is not allowed */
        .read   = mem_read,
        /* write is defined away as NULL */
        .write  = mem_write,
        .open   = mem_open,
        /* and there is no .mmap implementation at all */
    };

This avoids the security problem!

Linux travels farther and farther down the everything-is-a-file road, and will keep going. Today you can see many unconventional file systems: procfs, sysfs, devfs, debugfs, cpuset, cgroup, sockfs, and so on. Linux manages all of them carefully, procfs included: which files may be read, which may be written, which must be forbidden, all of it needs careful consideration. I think only an open development platform like Linux dares to do this: any hole can be found immediately and fixed as quickly as possible!
