
What is Linux shared memory and the tmpfs file system


This article introduces Linux shared memory and the tmpfs file system. These questions come up often in practice, so let the editor walk you through how to deal with them. I hope you read it carefully and get something out of it!

Preface

Shared memory is mainly used for inter-process communication, and Linux has two shared memory (Shared Memory) mechanisms:

* System V shared memory (shmget / shmat / shmdt)
* POSIX shared memory (shm_open / mmap / shm_unlink)

In addition, memory mapping in Linux deserves a mention, since mmap can also be used for inter-process communication (a minimal sketch follows).
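As a quick illustration (a hedged sketch, not from the original article): a shared anonymous mapping lets a parent and a forked child exchange data through memory, without a file or a named shared memory object.

/* gcc -o anonmap anonmap.c -- minimal mmap IPC sketch (assumed example) */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* MAP_SHARED | MAP_ANONYMOUS: one region visible to parent and child */
    char *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                     MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    if (fork() == 0) {              /* child writes into the shared region */
        strcpy(buf, "hello from child");
        return 0;
    }
    wait(NULL);                     /* parent reads after the child exits */
    printf("%s\n", buf);
    munmap(buf, 4096);
    return 0;
}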

System V shared memory has a long history, is widely used, and is supported by many Unix variants; generally speaking, it is what we usually use when writing programs. Instead of discussing how to use these mechanisms here, see references [1] and [2] for a detailed description of POSIX shared memory.

With so much said, the question is: what is the relationship between shared memory and tmpfs?

POSIX shared memory is implemented on top of tmpfs. In fact, going a step further, not only PSM (POSIX shared memory) but also SSM (System V shared memory) is implemented in the kernel on top of tmpfs.

Tmpfs introduction

Tmpfs has two main functions:

(1) Backing SYSV shared memory and anonymous memory mappings; this instance is managed by the kernel and is not visible to users.

(2) Backing POSIX shared memory; the user is responsible for mounting it, normally at /dev/shm. Both depend on CONFIG_TMPFS. (A mount example follows.)
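For instance (commands assumed for illustration, mirroring the transcripts below), a user-visible tmpfs instance can be mounted anywhere, with its size capped at mount time:

# mount -t tmpfs -o size=64M tmpfs /mnt/mytmpfs
# df -h /mnt/mytmpfs
Filesystem      Size  Used Avail Use% Mounted on
tmpfs            64M     0   64M   0% /mnt/mytmpfs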

At this point, we can understand the difference between SSM and PSM and the role of / dev/shm.

Let's do some tests:

Test

We set the tmpfs at /dev/shm to 64M:

# mount -o remount,size=64M /dev/shm
# df -lh
Filesystem      Size  Used Avail Use% Mounted on
tmpfs            64M     0   64M   0% /dev/shm

The maximum size (shmmax) of a SYSV shared memory segment is 32M:

# cat /proc/sys/kernel/shmmax
33554432

(1) Creating a 65M System V shared memory segment fails:

# ipcmk -M 68157440
ipcmk: create share memory failed: Invalid argument

This is normal.

(2) Adjust shmmax to 65M:

# echo 68157440 > /proc/sys/kernel/shmmax
# cat /proc/sys/kernel/shmmax
68157440
# ipcmk -M 68157440
Shared memory id: 0
# ipcs -m

------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status
0xef46b249 0          root       644        68157440   0

You can see that the size of System V shared memory is not limited by /dev/shm.
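(Cleanup step assumed, not part of the original transcript: the test segment can be removed afterwards.)

# ipcrm -m 0        # shmid 0, as reported by ipcs -m above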

(3) Create POSIX shared memory


/* gcc -o shmopen shmopen.c -lrt */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

#define MAP_SIZE 68157440

int main(int argc, char *argv[])
{
    int fd;

    fd = shm_open("/shm1", O_RDWR | O_CREAT, 0644);
    if (fd < 0) {
        printf("shm_open failed\n");
        exit(1);
    }
    /* size the object to 65M (implied by the 65M file listed below) */
    if (ftruncate(fd, MAP_SIZE) < 0) {
        printf("ftruncate failed\n");
        exit(1);
    }
    return 0;
}

# ./shmopen
# ls -lh /dev/shm/shm1
-rw-r--r-- 1 root root 65M Mar  3 06:19 /dev/shm/shm1

Although /dev/shm is only 64M, creating a 65M POSIX shared memory object still succeeds.

(4) Write data to the POSIX shared memory

/* gcc -o shmwrite shmwrite.c -lrt */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

#define MAP_SIZE 68157440

int main(int argc, char *argv[])
{
    int fd;
    void *result;

    fd = shm_open("/shm1", O_RDWR | O_CREAT, 0644);
    if (fd < 0) {
        printf("shm_open failed\n");
        exit(1);
    }
    if (ftruncate(fd, MAP_SIZE) < 0) {
        printf("ftruncate failed\n");
        exit(1);
    }
    result = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (result == MAP_FAILED) {
        printf("mmap failed\n");
        exit(1);
    }
    /* ... operate on the result pointer ... */
    printf("memset\n");
    memset(result, 0, MAP_SIZE);
    //shm_unlink("/shm1");
    return 0;
}

# ./shmwrite
memset
Bus error

As you can see, writing 65M of data triggers a Bus error.

However, new files can still be created in /dev/shm (shm2 below was created the same way as shm1):

# ls -lh /dev/shm/
total 64M
-rw-r--r-- 1 root root 65M Mar  3 15:23 shm1
-rw-r--r-- 1 root root 65M Mar  3 15:24 shm2

This is normal: ls shows the inode size, not the blocks actually allocated.

# stat /dev/shm/shm2
  File: "/dev/shm/shm2"
  Size: 68157440   Blocks: 0   IO Block: 4096   regular file
Device: 10h/16d   Inode: 217177   Links: 1
Access: (0644/-rw-r--r--)   Uid: (0/root)   Gid: (0/root)
Access: 2015-03-03 15:24:28.025985167 +0800
Modify: 2015-03-03 15:24:28.025985167 +0800
Change: 2015-03-03 15:24:28.025985167 +0800
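The Bus error above is the SIGBUS signal described in the kernel analysis below. Here is a hedged sketch (assumed example; the program and the object name /shm_bus are not from the article) that catches it explicitly:

/* gcc -o shmbus shmbus.c -lrt */
#include <fcntl.h>
#include <signal.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define MAP_SIZE 68157440            /* 65M, larger than the 64M /dev/shm */

static void on_sigbus(int sig)
{
    static const char msg[] = "caught SIGBUS: page beyond tmpfs limit\n";
    write(STDERR_FILENO, msg, sizeof(msg) - 1);  /* async-signal-safe */
    _exit(1);
}

int main(void)
{
    signal(SIGBUS, on_sigbus);

    int fd = shm_open("/shm_bus", O_RDWR | O_CREAT, 0644);
    if (fd < 0 || ftruncate(fd, MAP_SIZE) < 0)
        exit(1);
    char *p = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);
    if (p == MAP_FAILED)
        exit(1);
    memset(p, 0, MAP_SIZE);          /* the fault past 64M delivers SIGBUS */
    shm_unlink("/shm_bus");
    return 0;
}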

(5) Write data to SYS V shared memory

Adjust the System V shared memory maximum (shmmax) to 65M (/dev/shm is still 64M):

# cat /proc/sys/kernel/shmmax
68157440


/* gcc -o shmv shmv.c */
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>

#define MAP_SIZE 68157440

int main(int argc, char **argv)
{
    int shm_id;
    key_t key;
    char *p_map;
    char *name = "/dev/shm/shm3";

    key = ftok(name, 0);
    if (key == -1)
        perror("ftok error");
    shm_id = shmget(key, MAP_SIZE, IPC_CREAT);
    if (shm_id == -1) {
        perror("shmget error");
        return -1;
    }
    p_map = (char *)shmat(shm_id, NULL, 0);
    memset(p_map, 0, MAP_SIZE);      /* write 65M */
    if (shmdt(p_map) == -1)
        perror("detach error");
    return 0;
}

# ./shmv

It executes normally: the 65M write to System V shared memory succeeds even though /dev/shm is only 64M.

(6) Conclusion

Although both System V and POSIX shared memory are implemented through tmpfs, they are subject to different limits. That is, /proc/sys/kernel/shmmax only affects SYS V shared memory, and the size of /dev/shm only affects POSIX shared memory. In fact, System V and POSIX shared memory use two different tmpfs instances.

Kernel analysis

When the kernel initializes, it automatically mounts a tmpfs file system internally and saves the mount as shm_mnt:


// mm/shmem.c
static struct file_system_type shmem_fs_type = {
    .owner   = THIS_MODULE,
    .name    = "tmpfs",
    .get_sb  = shmem_get_sb,
    .kill_sb = kill_litter_super,
};

int __init shmem_init(void)
{
    ...
    error = register_filesystem(&shmem_fs_type);
    if (error) {
        printk(KERN_ERR "Could not register tmpfs\n");
        goto out2;
    }
    // mount tmpfs (for SYS V)
    shm_mnt = vfs_kern_mount(&shmem_fs_type, MS_NOUSER,
                             shmem_fs_type.name, NULL);
    ...
}

The mounting of /dev/shm is similar to any normal file system mount and will not be discussed further. It is worth noting, however, that the default size of /dev/shm is 1/2 of the current physical memory:

shmem_get_sb -> shmem_fill_super:


// mm/shmem.c
int shmem_fill_super(struct super_block *sb, void *data, int silent)
{
    ...
#ifdef CONFIG_TMPFS
    /*
     * Per default we only allow half of the physical ram per
     * tmpfs instance, limiting inodes to one per page of lowmem;
     * but the internal instance is left unlimited.
     */
    if (!(sb->s_flags & MS_NOUSER)) {   // the kernel's internal instance sets MS_NOUSER
        sbinfo->max_blocks = shmem_default_max_blocks();
        sbinfo->max_inodes = shmem_default_max_inodes();
        if (shmem_parse_options(data, sbinfo, false)) {
            err = -EINVAL;
            goto failed;
        }
    }
    sb->s_export_op = &shmem_export_ops;
#else
    ...

#ifdef CONFIG_TMPFS
static unsigned long shmem_default_max_blocks(void)
{
    return totalram_pages / 2;
}

As you can see, because the kernel specifies MS_NOUSER when mounting its internal tmpfs, that instance has no size limit. The memory available to SYS V shared memory is therefore limited only by /proc/sys/kernel/shmmax, while the user-mounted /dev/shm defaults to 1/2 of physical memory.

Note that all of this depends on CONFIG_TMPFS.

In addition, files in /dev/shm are created through the normal VFS interfaces, while SYS V shared memory and anonymous mappings are set up through shmem_file_setup:
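(Abridged sketch from ipc/shm.c of kernels of that era; exact code varies by version.)

// ipc/shm.c
static int newseg(struct ipc_namespace *ns, struct ipc_params *params)
{
    ...
    if (shmflg & SHM_HUGETLB) {
        /* hugetlb-backed segments take a different path */
        ...
    } else {
        ...
        /* the SYS V segment is a file in the kernel's internal tmpfs */
        file = shmem_file_setup(name, size, acctflag);
    }
    ...
}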

SIGBUS

When the application touches an address in the shared memory mapping whose physical page has not yet been allocated, the fault method is called. If allocation fails, VM_FAULT_OOM or VM_FAULT_SIGBUS is returned:


static const struct vm_operations_struct shmem_vm_ops = {
    .fault = shmem_fault,
#ifdef CONFIG_NUMA
    .set_policy = shmem_set_policy,
    .get_policy = shmem_get_policy,
#endif
};

static int shmem_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
{
    struct inode *inode = vma->vm_file->f_path.dentry->d_inode;
    int error;
    int ret = VM_FAULT_LOCKED;

    error = shmem_getpage(inode, vmf->pgoff, &vmf->page, SGP_CACHE, &ret);
    if (error)
        return ((error == -ENOMEM) ? VM_FAULT_OOM : VM_FAULT_SIGBUS);
    return ret;
}

shmem_getpage -> shmem_getpage_gfp:

/*
 * shmem_getpage_gfp - find page in cache, or get from swap, or allocate
 *
 * If we allocate a new one we do not mark it dirty. That's up to the
 * vm. If we swap it in we mark it dirty since we also free the swap
 * entry since a page cannot live in both the swap and page cache
 */
static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
    struct page **pagep, enum sgp_type sgp, gfp_t gfp, int *fault_type)
{
    ...
    if (sbinfo->max_blocks) {   // /dev/shm has this value set
        if (percpu_counter_compare(&sbinfo->used_blocks,
                                   sbinfo->max_blocks) >= 0) {
            error = -ENOSPC;
            goto unacct;
        }
        percpu_counter_inc(&sbinfo->used_blocks);
    }
    // allocate a physical page
    page = shmem_alloc_page(gfp, info, index);
    if (!page) {
        error = -ENOMEM;
        goto decused;
    }
    SetPageSwapBacked(page);
    __set_page_locked(page);
    error = mem_cgroup_cache_charge(page, current->mm,
                                    gfp & GFP_RECLAIM_MASK);   // mem_cgroup check
    if (!error)
        error = shmem_add_to_page_cache(page, mapping, index, gfp, NULL);
    ...

Shared memory and CGROUP

Currently, shared memory space is charged to the first cgroup that accesses the shared memory.
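A rough way to observe this (memcg v1 paths assumed; the group name shmtest is illustrative):

# mkdir /sys/fs/cgroup/memory/shmtest
# echo $$ > /sys/fs/cgroup/memory/shmtest/tasks
# ./shmwrite                 # first toucher: the pages are charged here
# cat /sys/fs/cgroup/memory/shmtest/memory.usage_in_bytes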

POSIX shared memory and Docker

Currently, Docker limits /dev/shm to 64M without providing a parameter to change it, which is a bad practice. Applications that use large amounts of POSIX shared memory will inevitably run into problems.
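This is easy to verify (image name assumed; output illustrative):

# docker run --rm busybox df -h /dev/shm
Filesystem      Size  Used Avail Use% Mounted on
shm              64M     0   64M   0% /dev/shm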

This is the end of "What is Linux shared memory and the tmpfs file system". Thank you for reading; if you want to learn more, keep following the site, where the editor will keep publishing practical articles!
