This article looks at why GlusterFS reports the space occupied by files differently on the client mount and on the backend servers, and works through a simple set of tests to explain the discrepancy.
Test environment: glusterfs 3.6, replica 2.
Normally, the space a file occupies is determined by the block size of the underlying file system.
For example, XFS uses a 4K block size by default, so even a 1-byte file occupies 4K of space.
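This rounding is easy to observe directly on any XFS mount. A minimal sketch (the /data/xfs path is a placeholder, not part of the test environment above):
echo -n x > /data/xfs/tiny                                        # create a 1-byte file
stat -c 'size=%s bytes, allocated=%b x %B bytes' /data/xfs/tiny   # on a 4K-block XFS: 8 x 512-byte blocks = 4K
du -h --apparent-size /data/xfs/tiny                              # apparent size: 1 byte
du -h /data/xfs/tiny                                              # allocated size: 4.0K
rm /data/xfs/tiny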
On the GlusterFS client (FUSE mount), however, small files are accounted for in 512-byte units. If you create 100 files of less than 512 bytes each on the client, they are reported as occupying 50K in total, as the following listing shows:
[root@lab23:/mnt/dzq/test] $ ls
0 12 16 2 23 27 30 34 38 41 45 49 52 56 6 63 67 70 74 78 81 85 89 92 96
1 13 17 20 24 28 31 35 39 42 46 5 53 57 60 64 68 71 75 79 82 86 9 93 97
10 14 18 21 25 29 32 36 4 43 47 50 54 58 61 65 69 72 76 8 83 87 90 94 98
11 15 19 22 26 3 33 37 40 44 48 51 55 59 62 66 7 73 77 80 84 88 91 95 99
[root@lab23:/mnt/dzq/test] $ ls -lh | head
total 50K
-rw-r--r-- 1 root root 2 Jan 29 15:33 0
-rw-r--r-- 1 root root 2 Jan 29 15:33 1
-rw-r--r-- 1 root root 3 Jan 29 15:33 10
-rw-r--r-- 1 root root 3 Jan 29 15:33 11
-rw-r--r-- 1 root root 3 Jan 29 15:33 12
-rw-r--r-- 1 root root 3 Jan 29 15:33 13
-rw-r--r-- 1 root root 3 Jan 29 15:33 14
-rw-r--r-- 1 root root 3 Jan 29 15:33 15
-rw-r--r-- 1 root root 3 Jan 29 15:33 16
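For reference, a set of test files like this can be produced with a simple loop (the creation step is not shown in the original; the file contents are assumed to be just the index, i.e. 2-3 bytes each):
cd /mnt/dzq/test
for i in $(seq 0 99); do echo "$i" > "$i"; done   # each file contains only its own number
ls -lh | head                                     # the FUSE mount reports the total as 100 x 512 B = 50K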
In fact, the files occupy more space than the 50K reported on the client. The authoritative figure comes from the backend (brick) file system, which shows the following:
[root@lab21:/letv/disk3/test] $ ls
0 12 16 2 23 27 30 34 38 41 45 49 52 56 6 63 67 70 74 78 81 85 89 92 96
1 13 17 20 24 28 31 35 39 42 46 5 53 57 60 64 68 71 75 79 82 86 9 93 97
10 14 18 21 25 29 32 36 4 43 47 50 54 58 61 65 69 72 76 8 83 87 90 94 98
11 15 19 22 26 3 33 37 40 44 48 51 55 59 62 66 7 73 77 80 84 88 91 95 99
[root@lab21:/letv/disk3/test] $ ls -lh | head
total 400K
-rw-r--r-- 2 root root 2 Jan 29 15:33 0
-rw-r--r-- 2 root root 2 Jan 29 15:33 1
-rw-r--r-- 2 root root 3 Jan 29 15:33 10
-rw-r--r-- 2 root root 3 Jan 29 15:33 11
-rw-r--r-- 2 root root 3 Jan 29 15:33 12
-rw-r--r-- 2 root root 3 Jan 29 15:33 13
-rw-r--r-- 2 root root 3 Jan 29 15:33 14
-rw-r--r-- 2 root root 3 Jan 29 15:33 15
-rw-r--r-- 2 root root 3 Jan 29 15:33 16
The reason is that the backend is an XFS file system with the default 4K block size, so every small file created from the client occupies a full 4K block on the brick: 100 files x 4K = 400K, versus 100 x 512 B = 50K on the client.
[root@lab21:/letv/disk3/test] $ stat -f /letv/disk3/test/
File: "/letv/disk3/test/"
ID: fd0200000000 Namelen: 255 Type: xfs
Block size: 4096 Fundamental block size: 4096
Blocks: Total: 238263636 Free: 216480157 Available: 216480157
Inodes: Total: 953520128 Free: 953175313
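A quick per-file check (a sketch using the paths above, not part of the original session) is to compare st_blocks, which stat reports in 512-byte units on both sides:
stat -c '%n: size=%s bytes, allocated=%b x %B bytes' /mnt/dzq/test/0     # FUSE mount: expected 1 x 512 B, per the 50K total
stat -c '%n: size=%s bytes, allocated=%b x %B bytes' /letv/disk3/test/0  # XFS brick: expected 8 x 512 B = 4K, per the 400K total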
The client accounts for space in 512-byte units while the server uses 4K blocks. A direct comparison with two files:
Client:
[root@lab23:/mnt/dzq/new] $ ls -lh
total 1.5K
-rw-r--r-- 1 root root 513 Jan 29 17:04 a
-rw-r--r-- 1 root root 55 Jan 29 15:51 b
Server side:
[root@lab21:/letv/disk3/new] $ ls -lh
total 8.0K
-rw-r--r-- 2 root root 513 Jan 29 17:04 a
-rw-r--r-- 2 root root 55 Jan 29 15:51 b
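The two totals follow directly from the two accounting units:
Client (512-byte units): a = 513 B -> 2 x 512 B = 1.0K; b = 55 B -> 1 x 512 B = 0.5K; total 1.5K
Brick (4K XFS blocks): a = 513 B -> 1 x 4K = 4.0K; b = 55 B -> 1 x 4K = 4.0K; total 8.0K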
The block size reported on the client mount is 128K, yet space is actually accounted for in 512-byte units, which calls for further analysis:
[root@lab23:/mnt/dzq/new] $ stat -f /mnt/dzq/test/
File: "/mnt/dzq/test/"
ID: 0 Namelen: 255 Type: fuseblk
Block size: 131072 Fundamental block size: 131072
Blocks: Total: 7445738 Free: 6765004 Available: 6765004
Inodes: Total: 953520128 Free: 953175312
The client machine's local file system should not be a factor here; for completeness, its block size is shown below:
[root@lab23:/mnt/dzq/new] $ stat -f /mnt/
File: "/mnt/"
ID: 3af66a85b8161ce8 Namelen: 255 Type: ext2/ext3
Block size: 4096 Fundamental block size: 4096
Blocks: Total: 2015852 Free: 169897 Available: 67497
Inodes: Total: 512064 Free: 364916
Remaining questions:
Why does the client report a 128K block size while accounting for file space in 512-byte units?
FUSE transfers data in 128K chunks. Depending on the application workload, increasing the block size of the backend XFS file system could be considered as a way to improve small-file performance; this remains to be verified.
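If that idea is pursued, keep in mind that the XFS block size is fixed when the file system is created, and that on most Linux kernels a block size larger than the page size (usually 4K) cannot be mounted, which limits how far it can be raised. A sketch for checking and setting it (/dev/sdX and the mount point are placeholders):
xfs_info /letv/disk3 | grep bsize     # show the block size of the existing brick file system
mkfs.xfs -b size=4096 /dev/sdX        # -b size= sets the data block size when (re)formatting
mount /dev/sdX /letv/disk3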
This concludes the study of why GlusterFS reports inconsistent file space usage. Combining the theory above with hands-on practice is the best way to learn, so give it a try, and keep following this site for more related articles.