What are the comprehensive knowledge points of Linux operating system?


This article explains the comprehensive knowledge points of the Linux operating system. Interested readers may wish to take a look: the material is presented simply, quickly and practically. Next, let the editor take you through the comprehensive knowledge points of the Linux operating system.

Introduction to Linux

UNIX is an interactive system that handles multiple processes and multiple online users at the same time. Why mention UNIX? Because Linux developed from UNIX: UNIX was designed by programmers, its main audience is also programmers, and Linux inherits UNIX's design goals. The Linux operating system is everywhere, from smartphones to cars, supercomputers and household appliances, from home desktops to corporate servers.

Most programmers want the system to be as simple, elegant and consistent as possible. For example, at the lowest level a file should just be a collection of bytes; separate notions of sequential access, random access, keyed access and remote access only get in the way. Similarly, if the command

ls A*

means listing all files whose names start with A, then the command

rm A*

should remove all files whose names start with A, rather than only a file literally named A*. This behaviour follows the principle of least surprise.

The principle of least surprise is often applied in user interface and software design. The idea is that a function or feature should behave the way users expect; it should not astonish or shock them.

Experienced programmers also want the system to be powerful and flexible. One of the basic design goals of Linux is that each application should do only one thing and do it well. So the compiler only compiles; it does not generate listings, because other programs do that better.

Many programmers also dislike redundancy: why type copy when cp expresses the same thing? That is a waste of valuable hacking time. To extract all lines containing the string ard from the file f, a Linux programmer types

grep ard f

Linux interface

The Linux system is a pyramid model system, as shown below

An application initiates a system call by putting the parameters in registers (sometimes on the stack) and issuing a trap instruction to switch from user mode to kernel mode. Because you cannot write a trap instruction directly in C, C provides a library whose functions correspond to the system calls. Some of these functions are written in assembly but can be called from C. Each function first puts its arguments in the right place and then executes the system call instruction. So to make a read system call, a C program calls the read library function. Incidentally, it is the library interface, not the system call interface, that is specified by POSIX: POSIX tells a standards-conforming system which library procedures it must provide, what their parameters are, what they must do, and what results they must return.
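As a small illustration of this path (not part of the original article), a C program can request a read through the library wrapper, which places the arguments and traps into the kernel; the file name used here is arbitrary:

#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    char buf[128];
    int fd = open("/etc/hostname", O_RDONLY);   /* an arbitrary readable file */
    if (fd < 0)
        return 1;
    /* the read() library wrapper puts fd, buf and the count in place and
       executes the system call instruction on our behalf */
    ssize_t n = read(fd, buf, sizeof(buf));
    if (n > 0)
        write(1, buf, n);                        /* fd 1 is standard output */
    close(fd);
    return 0;
}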

In addition to the operating system and system call library, the Linux operating system also provides some standard programs, such as text editors, compilers, file manipulation tools, and so on. It is the above applications that deal directly with users. So we can say that Linux has three different interfaces: system call interface, library function interface and application program interface.

GUI (Graphical User Interface) in Linux is very similar to that in UNIX. The GUI provides a desktop environment, including windows, icons and folders, toolbars, and drag-and-drop of files. A complete GUI also includes a window manager and various applications.

GUI on Linux is supported by X window, and the main components are X server, control keyboard, mouse, monitor and so on. When using a graphical interface on Linux, users can run programs or open files with mouse clicks, and copy files by dragging and dropping.

Linux component

In fact, the Linux operating system can be made up of the following parts

Bootstrap (Bootloader): the bootloader is software that manages the computer's startup process. For most users it is just a screen that pops up, but internally it does a great deal of work.

Kernel: the kernel is the core of the operating system and is responsible for managing CPU, memory, peripherals, etc.

Initialization system (Init System): this is the subsystem that bootstraps user space and is responsible for controlling daemons. Once the bootloader hands over control, the init system manages the rest of the boot process.

Background process (Daemon): as its name implies, background processes are programs that run in the background, such as printing, sound, scheduling, etc., which can be started during boot or after logging in to the desktop.

Graphics server (Graphical server): this is the subsystem that displays graphics on the monitor. It is often referred to as an X server or X.

Desktop environment (Desktop environment): this is the part with which the user actually interacts. There are many desktop environments to choose from, each containing built-in applications such as file managers, Web browsers, games, etc.

Applications (Applications): desktop environments do not include the full range of applications. Like Windows and macOS, Linux offers thousands of high-quality programs that can be easily found and installed.

Shell

Although Linux applications provide GUI, most programmers still prefer to use the command line (command-line interface), called shell. Users usually launch a shell window in GUI and then work under the shell window.

The shell command line is faster and more powerful, is easy to extend, and does not cause repetitive strain injury (RSI).

Here is a brief look at bash, the simplest of the shells. When the shell starts, it initializes itself, prints a prompt on the screen, usually a percent sign or a dollar sign, and waits for user input.

After the user types a command, the shell extracts the first word, where a word is a sequence of characters separated by spaces or tabs. The shell assumes that this word is the name of a program, searches for that program, and runs it if it is found. The shell then suspends itself until the program finishes, after which it tries to read the next command. The shell is just an ordinary user program: its main job is to read the user's input and display the output of computations. A shell command can contain arguments, which are passed as strings to the program being called. For example

cp src dest

The cp program is called with two arguments, src and dest. It interprets the first argument as the name of an existing file and creates a copy of that file named dest.
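To make the read-run-wait loop described above concrete, here is a minimal, hypothetical sketch of a shell-like loop in C; unlike a real shell it handles no quoting, wildcards, pipes or redirection:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    char line[256];
    for (;;) {
        fputs("$ ", stdout);                 /* print the prompt */
        fflush(stdout);
        if (fgets(line, sizeof(line), stdin) == NULL)
            break;                           /* end of input */

        char *argv[32];                      /* split the line into words */
        int argc = 0;
        for (char *tok = strtok(line, " \t\n"); tok != NULL && argc < 31;
             tok = strtok(NULL, " \t\n"))
            argv[argc++] = tok;
        argv[argc] = NULL;
        if (argc == 0)
            continue;

        if (fork() == 0) {                   /* child: run the named program */
            execvp(argv[0], argv);
            perror(argv[0]);                 /* reached only if exec failed */
            exit(1);
        }
        wait(NULL);                          /* the shell suspends until it finishes */
    }
    return 0;
}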

Not all parameters are file names, such as the following

head -20 file

The first argument, -20, tells head to print the first 20 lines of the file instead of the default 10 lines. Arguments that control the operation of a command or specify an optional value are called flags, and by convention a flag is introduced with a dash (-). The dash is necessary; for example

head 20 file

is a perfectly legal command that tells head to output the first 10 lines of a file named 20 and then the first 10 lines of a file named file. Commands can accept one or more such arguments.

To make it easier to specify multiple file names, shell supports magic characters (magic character), also known as wildcards (wild cards). For example, * can match one or more possible strings

ls *.c

tells ls to list all files whose names end in .c. If several such files exist, they are all listed side by side.

Another wildcard is the question mark, which is responsible for matching any character. A set of characters in square brackets can represent any one of them, so

ls [abc]*

All files that start with a, b, or c are listed.

Shell applications do not necessarily input and output through the terminal. When shell starts, it acquires the ability to access standard input, standard output, and standard error files.

Standard input comes from the keyboard, and standard output and standard error go to the monitor. Many Linux programs read from standard input and write to standard output by default. For example

sort

calls the sort program, which reads data from the terminal (until the user types ctrl-d to end input), sorts it alphabetically, and writes the result to the screen.

Usually, standard input and standard output can both be redirected. Standard input is redirected with a less-than sign (<) followed by a file name; standard output is redirected with a greater-than sign (>). Both can be redirected in a single command. For example, the command

sort <in >out

Causes sort to get input from the file in and output the results to the out file. Because standard error is not redirected, the error message is printed directly to the screen. A program that reads in from standard input, processes it, and writes it to standard output is called a filter.

Consider the following instruction made up of three separate commands

sort <in >temp; head -30 <temp; rm temp

Linux processes

A new process is created with the fork system call; the value fork returns distinguishes the parent from the child:

pid = fork();
if (pid < 0) {
    handle_error();   /* fork failed */
} else if (pid > 0) {
    parent_handle();  /* parent process code */
} else {
    child_handle();   /* child process code */
}

After the fork, the parent gets the PID of the child, and this PID is the child's unique identifier. If the child wants to know its own PID, it can call getpid. When the child finishes running, the parent can retrieve the child's PID again, which matters because a process may fork many children, and those children fork children of their own. The process that exists before the first fork is called the original process, and an original process can generate a whole tree of descendants.
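As a hedged illustration (this exact code is not in the original article), the child can report its PID with getpid, and the parent can collect that same PID back with waitpid when the child exits:

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {                               /* child: fork returned 0 */
        printf("child: my pid is %d\n", getpid());
        exit(42);
    }
    int status;
    pid_t done = waitpid(pid, &status, 0);        /* parent: collect the child */
    printf("parent: child %d exited with %d\n", done, WEXITSTATUS(status));
    return 0;
}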

Linux interprocess communication

The communication mechanism between Linux processes is usually called Inter-Process Communication (IPC). Let's talk about these mechanisms; generally speaking, they can be divided into six types.

Let's give an overview of them respectively.

Signal signal

Signals were the first inter-process communication mechanism used in UNIX, and since Linux inherits from UNIX, it supports signals as well. A signal delivers an asynchronous event notification to one or more processes; it can be generated from the keyboard or by an access to a nonexistent memory location, and the shell can also use signals to send job-control requests to its child processes.

You can type kill -l on a Linux system to list the signals the system uses. The most important ones are described below.

A process can choose to ignore incoming signals, but there are two that cannot be ignored: SIGSTOP and SIGKILL. SIGSTOP tells the currently running process to suspend itself, and SIGKILL tells the current process that it should be killed. Apart from these, a process can choose which signals it wants to handle: it can block a signal, handle it itself, or leave it to the kernel, in which case the default handling is performed.

The operating system interrupts the target process in order to deliver a signal to it. Execution can be interrupted at any non-atomic instruction. If the process has registered a handler for the signal, the handler is executed; if not, the default handling takes place.
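For illustration only, a minimal sketch of registering a handler for SIGINT with sigaction; the handler name is made up, and if no handler were registered the kernel would apply the default action instead:

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t got_sigint = 0;

static void on_sigint(int signo)
{
    (void)signo;
    got_sigint = 1;                /* keep handlers tiny: just set a flag */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = on_sigint;     /* run our handler instead of the default */
    sigaction(SIGINT, &sa, NULL);

    while (!got_sigint)
        pause();                   /* sleep until some signal arrives */
    puts("caught SIGINT (ctrl-c)");
    return 0;
}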

For example, when a process receives a SIGFPE floating-point exception signal, the default action is to dump core and exit. Signals have no priority: if two signals are generated for a process at the same time, they may be delivered to the process, and handled, in either order.

Let's take a look at what these signals are for.

SIGABRT and SIGIOT

SIGABRT and SIGIOT signals are sent to the process to tell it to terminate, which is usually started by the process itself when the abort () function of the C standard library is called

SIGALRM 、 SIGVTALRM 、 SIGPROF

When a timer set by the process expires, SIGALRM, SIGVTALRM or SIGPROF is sent to the process: SIGALRM when real (wall-clock) time expires, SIGVTALRM when the CPU time used by the process expires, and SIGPROF when the CPU time used by the process and by the system on its behalf expires.

SIGBUS

SIGBUS is sent to a process when it causes a bus error.

SIGCHLD

When a child process terminates, is interrupted, or resumes after an interruption, SIGCHLD is sent to its parent. A common use of this signal is to tell the operating system to clean up the resources used by a child process after it terminates.

SIGCONT

The SIGCONT signal instructs the operating system to continue the process that was previously paused by the SIGSTOP or SIGTSTP signal. One of the important uses of this signal is in job control in Unix shell.

SIGFPE

The SIGFPE signal is sent to the process when performing incorrect arithmetic operations, such as dividing by zero.

SIGHUP

SIGHUP is sent to a process when its controlling terminal is closed. Instead of exiting, many daemons reload their configuration files and reopen their log files when they receive this signal.

SIGILL

The SIGILL signal is issued when attempting to execute an illegal, malformed, unknown, or privileged instruction

SIGINT

When the user wants to interrupt a process, the operating system sends SIGINT to it; the user typically triggers this by typing ctrl-c.

SIGKILL

The SIGKILL signal is sent to the process to terminate it immediately. Compared with SIGTERM and SIGINT, this signal cannot be captured and ignored, and the process cannot perform any cleanup operations after receiving this signal. Here are some exceptions.

The zombie process cannot be killed because the zombie process is dead and is waiting for the parent process to capture it

A blocked process will not be dropped by kill until it is awakened again.

The init process is the initialization process of Linux, which ignores any signals.

SIGKILL is usually used as a signal to kill the process in the end, and it is usually sent to the process when the SIGTERM does not respond.

SIGPIPE

SIGPIPE is sent to a process when it tries to write to a pipe whose other end is no longer connected, so the data cannot be written.

SIGPOLL

When an event occurs on an explicitly monitored file descriptor, a SIGPOLL signal is sent.

SIGRTMIN to SIGRTMAX

SIGRTMIN to SIGRTMAX is a real-time signal

SIGQUIT

When the user requests to exit the process and perform the core dump, the SIGQUIT signal will be sent to the process by its control terminal.

SIGSEGV

SIGSEGV is sent to a process when it makes an invalid virtual memory reference, that is, when it commits a segmentation violation (segmentation fault).

SIGSTOP

SIGSTOP instructs the operating system to suspend the process so that it can be resumed later.

SIGSYS

SIGSYS is sent to a process when it passes a bad argument to a system call.

SIGTERM

We briefly mentioned SIGTERM above; this signal is sent to a process to request its termination. Unlike SIGKILL, it can be caught or ignored by the process, which allows the process to terminate gracefully, freeing resources and saving state where appropriate. SIGINT is almost identical to SIGTERM.

SIGTSTP

The SIGTSTP signal is sent to a process by its controlling terminal to request that the process stop, typically when the user types ctrl-z.

SIGTTIN and SIGTTOU

SIGTTIN and SIGTTOU are sent to a background process when it attempts to read from or write to the tty, respectively.

SIGTRAP

When an exception or trap occurs, the SIGTRAP signal is sent to the process

SIGURG

When the socket has readable emergency or out-of-band data, the SIGURG signal is sent to the process.

SIGUSR1 and SIGUSR2

SIGUSR1 and SIGUSR2 signals are sent to the process to indicate user-defined conditions.

SIGXCPU

SIGXCPU is sent to a process when it has used more CPU time than a predetermined, user-settable limit.

SIGXFSZ

SIGXFSZ is sent to a process when it grows a file beyond the maximum allowed file size.

SIGWINCH

The SIGWINCH signal is sent to the process when its control terminal changes its size (window change).

Pipeline pipe

Processes in a Linux system can communicate by establishing a pipeline pipe.

Between two processes, a channel can be established into which one process writes a byte stream and from which the other process reads. Pipes are synchronous: when a process tries to read from an empty pipe, it blocks until data is available. Pipelines in the shell are implemented with pipes: when the shell sees a command such as sort <f | head, it creates two processes, sort and head, and sets up a pipe between them so that the standard output of sort becomes the standard input of head.
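Roughly, what the shell does for sort <f | head can be sketched as follows; this is only an illustration (it assumes an input file named f exists and does no error checking), not the actual shell source:

#include <fcntl.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    pipe(fd);                              /* fd[0] = read end, fd[1] = write end */

    if (fork() == 0) {                     /* first child runs: sort <f */
        int in = open("f", O_RDONLY);      /* "f" is a hypothetical input file */
        dup2(in, 0);                       /* standard input from the file */
        dup2(fd[1], 1);                    /* standard output into the pipe */
        close(fd[0]); close(fd[1]); close(in);
        execlp("sort", "sort", (char *)NULL);
        exit(1);
    }
    if (fork() == 0) {                     /* second child runs: head */
        dup2(fd[0], 0);                    /* standard input from the pipe */
        close(fd[0]); close(fd[1]);
        execlp("head", "head", (char *)NULL);
        exit(1);
    }
    close(fd[0]); close(fd[1]);            /* the shell keeps neither end open */
    wait(NULL); wait(NULL);                /* wait for both children */
    return 0;
}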

For threads implemented in the kernel, switching between user space and kernel space is expensive, but the time lost to thread initialization is negligible. The advantage of this implementation is that thread switching time is determined by the clock, so the time slice is unlikely to be tied to the time taken up by other threads in the task. Likewise, blocking on I/O is not a problem.

Hybrid implementation

Combining the advantages of user space and kernel space, the designer adopts a kernel-level thread approach, and then multiplexes user-level threads with some or all kernel threads.

In this model, programmers are free to control the number of user threads and kernel threads, with a great degree of flexibility. With this approach, the kernel only recognizes kernel-level threads and schedules them. Some of these kernel-level threads are multiplexed by multiple user-level threads.

Linux scheduling

Let's take a look at the scheduling algorithm of the Linux system. First of all, we need to realize that the threads of the Linux system are kernel threads, so the Linux system is thread-based, not process-based.

For scheduling purposes, the Linux system divides threads into three categories

Real-time first in, first out

Real-time polling

Time-sharing

A real-time FIFO thread has the highest priority and is not preempted by other threads, unless a newly ready real-time thread has a higher priority. A real-time round-robin thread is basically the same as a real-time FIFO thread, except that each one has a time quantum and can be preempted when its quantum expires. If several real-time round-robin threads are ready, each runs for its quantum and then goes to the end of the real-time round-robin queue.

Note that this real-time is only relative, and absolute real-time cannot be achieved because the running time of the thread cannot be determined. They are more real-time than time-sharing systems.

The Linux system assigns each thread a nice value, which represents its priority. The default nice value is 0, but it can be changed with the nice system call, within the range -20 to +19. The nice value determines the thread's static priority. Nice values between -20 and -1 give a higher priority than ordinary threads and are normally reserved for the system administrator.
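As an aside that is not in the original text, a process can raise its own nice value (lower its priority) with the nice library call; a small sketch:

#include <errno.h>
#include <stdio.h>
#include <sys/resource.h>
#include <unistd.h>

int main(void)
{
    /* getpriority reports the current nice value of this process */
    printf("nice value before: %d\n", getpriority(PRIO_PROCESS, 0));

    errno = 0;
    if (nice(10) == -1 && errno != 0)      /* add 10 to our nice value */
        perror("nice");

    printf("nice value after:  %d\n", getpriority(PRIO_PROCESS, 0));
    return 0;
}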

Let's discuss the two Linux scheduling algorithms in a bit more detail; both are closely tied to the design of the run queue (runqueue). The run queue is a data structure that keeps track of all runnable tasks in the system and selects the next one to run. There is one run queue associated with each CPU in the system.

The Linux O(1) scheduler was a popular scheduler historically. Its name comes from the fact that it can perform task scheduling in constant time. In the O(1) scheduler, the run queue is organized as two arrays, one of active tasks and one of expired tasks. As shown in the figure below, each array contains 140 list heads, one for each priority level.

The general process is as follows:

The scheduler selects the highest-priority task from the active array. If that task's time slice expires, it is moved to the expired array. If the task blocks, for example while waiting for an I/O event, then once the I/O operation completes before its time slice expires, the task goes back into the active array and runs only the remainder of its time slice, since it has already used part of it. When a task has used up its whole time slice, it is placed in the expired array. Once there are no tasks left in the active array, the scheduler swaps the pointers so that the active array becomes the expired array and the expired array becomes the active one. This ensures that tasks at every priority get to run and no thread starves.

In this scheduling method, the time slice the CPU allocates differs by priority: high-priority processes get longer time slices, while low-priority tasks get shorter ones.

In order to provide better service, this method usually gives higher priority to the interactive process, and the interactive process is the user process.

The Linux system does not know whether a task is I/O-bound or CPU-bound; it can only judge by how interactive the task is, and it distinguishes static priority from dynamic priority. Dynamic priority is implemented with an incentive mechanism that works in two directions: rewarding interactive threads and penalizing threads that hog the CPU. In the Linux O(1) scheduler the largest reward is -5 (note that the lower the value, the sooner the thread is picked by the scheduler), and the largest penalty is +5. Concretely, the operating system maintains a variable called sleep_avg: waking a task up increases sleep_avg, and when a task is preempted or its quantum expires, sleep_avg decreases. This variable drives the reward mechanism.

The O(1) scheduling algorithm was the scheduler of the 2.6 kernel series and was first introduced in the unstable 2.5 series. It demonstrated that scheduling decisions in a multiprocessor environment could be made by accessing the active array, so that scheduling completes in a fixed O(1) time.

What does it mean that the O (1) scheduler uses a heuristic approach?

In computer science, heuristic is a way to solve a problem quickly when the traditional method is very slow, or to find an approximate solution when the traditional method can not find any exact solution.

The heuristics used by the O(1) scheduler made task priorities complex and imperfect, which led to poor performance with interactive tasks.

In order to improve this disadvantage, the developers of O (1) scheduler put forward a new scheme, namely Completely Fair Scheduler (CFS). The main idea of CFS is to use a red-black tree as a scheduling queue.

Data structures really do matter.

CFS arranges tasks in a tree in order according to how long they have been running on the CPU, accurate to nanoseconds. Here is the construction model of CFS

The scheduling process of CFS is as follows:

The CFS algorithm always schedules first the task that has used the least CPU time; that task is normally at the leftmost position of the tree. When a new task becomes runnable, CFS compares it with the leftmost value: if the new task has the smallest time value it runs, otherwise CFS keeps comparing until it finds the right place to insert it. The CPU then runs the task at the current leftmost position of the red-black tree.

Selecting the node to run from the red-black tree takes constant time, but inserting a task takes O(log(N)), where N is the number of tasks in the system. Given current system load levels, this is acceptable.

The scheduler only needs to consider runnable tasks, which are placed on the appropriate run queue. Tasks that are not runnable, or that are waiting for various I/O operations or kernel events, go on a wait queue. A wait queue header contains a pointer to the task list and a spin lock. Spin locks are very useful in concurrent processing scenarios.

Synchronization in Linux system

Let's talk about the synchronization mechanisms in Linux. The early Linux kernel had a single Big Kernel Lock (BKL), which prevented different processors from doing work concurrently, so finer-grained locking mechanisms had to be introduced.

Linux provides several different kinds of synchronization variables that are used both inside the kernel and in user applications. At the lowest level, Linux wraps the hardware's atomic instructions with operations such as atomic_set and atomic_read. Because the hardware may reorder memory accesses, Linux also provides memory barriers to constrain that reordering.

When two processes contend for a resource and the one that loses does not want to be blocked, it can spin, waiting until the resource becomes available; this is a spinlock. Linux also provides mutexes and semaphores, along with non-blocking calls such as mutex_tryLock and mutex_tryWait. Synchronization with interrupt handlers can also be achieved by dynamically disabling and re-enabling the corresponding interrupts.
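In user space, the non-blocking style mentioned above looks roughly like this with POSIX threads; pthread_mutex_trylock is the user-space counterpart of the kernel's mutex_tryLock, shown here only as a sketch:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

int main(void)
{
    if (pthread_mutex_trylock(&lock) == 0) {
        /* got the lock without blocking: touch the shared resource here */
        puts("acquired the lock");
        pthread_mutex_unlock(&lock);
    } else {
        /* someone else holds it: do useful work instead of blocking */
        puts("lock busy, not waiting");
    }
    return 0;
}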

Linux start

Let's talk about how Linux is started.

When the computer is powered on, the BIOS performs a power-on self-test (POST) to detect and initialize the hardware, because booting the operating system will use the disk, screen, keyboard, mouse and other devices. Next, the first sector of the disk, the MBR (Master Boot Record), is read into a fixed area of memory and executed. This sector contains a very small program, only 512 bytes. The program loads a standalone boot program from the disk, and the boot program copies itself into high memory, freeing low memory for the operating system.

After the copy is complete, the boot program reads the root directory of the boot device; to do this, it has to understand the file system and directory format. The boot program then reads in the kernel and transfers control to it. At this point the boot program has finished its job and the system kernel starts running.

The kernel startup code is written in assembly language; it mainly creates the kernel stack, identifies the CPU type, calculates the amount of memory, disables interrupts, and enables the memory management unit, and then calls the C-language main function to run the operating system.

This part also does a lot of work. First a message buffer is allocated to help debug problems: debugging information is written into the buffer, and if something goes wrong, that information can be retrieved with diagnostic tools.

Then the operating system configures itself automatically: it detects devices and loads configuration files. A device that responds to probing is added to the table of attached devices, while one that does not respond is classified as not connected and simply ignored.

Once all the hardware has been configured, the next step is to carefully set up process 0 by hand: set up its stack, run it, perform initialization, configure the clock, and mount the root file system, then create the init process (process 1) and the daemon (process 2).

The init process checks a flag to determine whether it should serve a single user or multiple users. In the former case, it calls fork to create a shell process and waits for that process to finish. In the latter case, it calls fork to create a process that runs the system initialization shell script (/etc/rc), which can check file system consistency, mount file systems, start daemons, and so on.

The / etc/rc process then reads data from / etc/ttys, and / etc/ttys lists all the terminals and properties. For each enabled terminal, the process calls the fork function to create a copy of itself, perform internal processing, and run a program called getty.

The getty program prints the following prompt on the terminal

Login:

and waits for the user to enter a user name. After the user name is entered, getty exits and the login program /bin/login starts. login asks for the password and compares it with the one stored in /etc/passwd. If it is correct, login replaces itself with the user's shell, which waits for the first command. If it is incorrect, login asks for a user name again.

The whole system startup process is as follows

Linux memory management

The Linux memory management model is straightforward, and this is part of what makes Linux portable: it can be implemented on machines with similar memory management units. Let's take a look at how Linux memory management works.

Basic concept

Every Linux process has an address space, which consists of three segment areas: text segment, data segment, and stack segment. The following is an example of a process address space.

The data segment stores the program's variables, strings, arrays and other data. It has two parts, initialized data and uninitialized data; the uninitialized part is what we call the BSS. The initialized part consists of constants and variables whose initial values are fixed at compile time and loaded when the program starts. All variables in the BSS section are initialized to zero after loading.

Unlike the text segment, the data segment can change: programs modify their variables all the time, and many programs need to allocate space dynamically during execution. Linux allows the data segment to grow and shrink as memory is allocated and reclaimed; to allocate memory, a program can increase the size of its data segment. In C, the standard library function malloc is commonly used to allocate memory, and the dynamically allocated region of the process address space is called the heap.
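As an illustrative sketch (modern allocators may satisfy large requests with mmap rather than by moving the break, so the exact behaviour varies), the program break can be watched moving as the heap grows:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    void *before = sbrk(0);             /* current end of the data segment */
    void *p = malloc(1024 * 1024);      /* grow the heap by about 1 MB */
    void *after = sbrk(0);

    printf("break before: %p\n", before);
    printf("break after:  %p\n", after);    /* often higher than before */
    free(p);
    return 0;
}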

The third part is the stack segment. On most machines the stack segment starts at the top of the virtual address space and grows downward (toward address 0). For example, on 32-bit x86 machines the stack starts at 0xC0000000, the 3 GB limit on the virtual addresses a process may see in user mode. If the stack grows below the bottom of the stack segment, a hardware fault occurs and the operating system lowers the stack segment by one page.

When the program starts, the stack area is not empty; instead, it contains all the shell environment variables and the command line entered into shell to invoke it. For example, when you type

cp cxuan lx

The cp program runs with the string cp cxuan lx on the stack so that you can find out the names of the source and target files.

When two users run the same program, such as an editor, keeping two copies of the editor's program code in memory would be inefficient. The Linux system supports shared text segments as an alternative. In the figure below we see two processes, A and B, that share the same text region.

Data segments and stack segments are shared only after a fork, and even then only unmodified pages are shared. If either segment needs to grow and there is no adjacent space, that is not a problem, because adjacent virtual pages do not have to be mapped to adjacent physical pages.

Besides dynamically allocating more memory, processes in Linux can access file data through memory-mapped files. This feature lets us map a file into part of the process's address space so that the file can be read and written as if it were a byte array in memory. Mapping a file in makes random reading and writing much easier than using I/O system calls such as read and write. This mechanism is used when accessing shared libraries, as shown below.

We can see that the same file is mapped into two processes at the same physical address, even though the mappings belong to different address spaces.

The advantage of mapped files is that two or more processes can map the same file at the same time, and any process's writes to the file are visible to the others. Mapping a temporary file provides high-bandwidth shared memory for multiple threads, and the temporary file disappears when the processes exit. In practice, though, no two address spaces are identical, because each process maintains its own open files and signals.

Linux memory management system call

Let's take a look at the system call method for memory management. In fact, POSIX does not specify any system calls for memory management. However, Linux has its own memory system calls, and the main system calls are as follows

s = brk(addr) : change the size of the data segment
a = mmap(addr, len, prot, flags, fd, offset) : map a file into memory
s = unmap(addr, len) : remove a mapping

If an error occurs, the return value s is -1; addr and a are memory addresses, len is a length, prot controls the protection bits, flags holds additional flag bits, fd is a file descriptor, and offset is an offset into the file.

brk specifies the size of the data segment by giving the address of the first byte beyond it. If the new value is larger than the old one, the data segment grows; otherwise it shrinks.

The mmap and unmap system calls control memory-mapped files. The first parameter of mmap, addr, determines the address at which the file is mapped; it must be a multiple of the page size. If this parameter is 0, the system assigns an address itself and returns it in a. The second parameter, len, tells how many bytes to map; it is also a multiple of the page size. prot determines the protection of the mapped file, which can be marked readable, writable, executable, or some combination of these. The fourth parameter, flags, controls whether the file is private or shared and whether addr is a requirement or merely a hint. The fifth parameter, fd, is the file descriptor to be mapped; only open files can be mapped, so to map a file you must first open it. The last parameter, offset, indicates where in the file to start; the mapping does not have to begin at byte 0.
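A hedged example of mapping a file for reading; the file name data.txt is hypothetical, and note that the call that actually removes a mapping on Linux is munmap:

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("data.txt", O_RDONLY);      /* a file must be open before mapping */
    if (fd < 0)
        return 1;
    struct stat st;
    fstat(fd, &st);                           /* the length is needed for mmap/munmap */

    /* addr = NULL lets the kernel choose the address; offset 0 maps from the start */
    char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED)
        return 1;

    fwrite(p, 1, st.st_size, stdout);         /* the file now reads like a byte array */
    munmap(p, st.st_size);
    close(fd);
    return 0;
}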

Implementation of memory Management in Linux

The memory management system is one of the most important parts of the operating system. Since the early days of computing, we have needed more memory than physically exists in the machine. Memory allocation strategies overcome this limitation, and the best known of them is virtual memory: by sharing it among competing processes, virtual memory lets the system appear to have more memory than it really does. The virtual memory subsystem involves the following concepts.

Large address space

The operating system makes the system appear to have much more memory than it actually has, because virtual memory can be many times larger than physical memory.

Protection

Each process in the system has its own virtual address space. These virtual address spaces are completely separate from one another, so a process running one application cannot affect another. The hardware virtual memory mechanism also allows critical memory regions to be protected.

Memory mapping

Memory mapping is used to map images and data files to the process address space. In memory mapping, the contents of the file are mapped directly to the virtual space of the process.

Fair physical memory allocation

The memory management subsystem allows each running process in the system to allocate the physical memory of the system fairly.

Shared virtual memory

Although virtual memory gives each process its own memory space, sometimes processes need to share memory. For example, several processes may be running under the shell at the same time; this involves inter-process communication (IPC), where shared memory is used to pass information rather than giving each process its own copy.
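One concrete way to get shared memory between processes, offered here only as an illustration (the object name /demo_shm is made up, and older systems may need -lrt when linking), is POSIX shared memory: shm_open plus mmap with MAP_SHARED.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
    if (fd < 0)
        return 1;
    ftruncate(fd, 4096);                      /* give the shared object a size */

    /* MAP_SHARED makes writes visible to every process mapping this object */
    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED)
        return 1;

    strcpy(p, "hello from shared memory");
    printf("%s\n", p);

    munmap(p, 4096);
    shm_unlink("/demo_shm");                  /* remove the object when done */
    close(fd);
    return 0;
}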

At this point, I believe you have a deeper understanding of what the comprehensive knowledge points of the Linux operating system are; you might as well try things out in practice. Follow us and keep learning!
