I. What is CephFS
1. A distributed file system (Distributed File System) is a file system whose managed physical storage resources are not necessarily attached directly to the local node, but are connected to the nodes over a computer network.
2. CephFS uses a Ceph cluster to provide a POSIX-compatible file system
3. It allows Linux to mount Ceph storage directly as a local file system
4. It can provide shared folders in the same way as NFS or Samba; clients use the storage provided by Ceph by mounting a directory.
II. CephFS application
1. Network topology
When using CephFS, you need an MDS (metadata) server, so what is metadata?
The so-called metadata:
①, in any file system, the data is divided into data and metadata
②, the data is the actual content of ordinary files
③, while the metadata is the system data used to describe the characteristics of a file
④, for example access permissions, the file owner, and the distribution information (inode...) of file blocks, etc.
Therefore, CephFS must have an MDS node
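A quick way to see the difference on any ordinary Linux host (this example is not Ceph-specific; the file path is just an illustration) is the stat command, which prints a file's metadata rather than its contents:
[root@ceph-a ~]# stat /etc/hosts
This shows the inode number, size, permissions, owner and timestamps, i.e. the metadata; the data is what cat /etc/hosts would show.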
2. Deploy the metadata server
①, log in to ceph-d, and install the ceph-mds package
[root@ceph-d ~]# yum -y install ceph-mds
②, log in to ceph-a, deploy node operations
[root@ceph-a ~]# cd /etc/ceph/
[root@ceph-a ceph]# ceph-deploy mds create ceph-d
③, synchronize the configuration file and key
[root@ceph-a ceph]# ceph-deploy admin ceph-d
④, check the /etc/ceph directory on ceph-d; the configuration file and key file have been synchronized
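As a quick check (the exact file list may vary by version and deployment), listing that directory on ceph-d should show the configuration file and the admin keyring, for example:
[root@ceph-d ~]# ls /etc/ceph/
ceph.client.admin.keyring  ceph.conf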
⑤, create two pools, one named cephfs-data to store data, and one named cephfs-metadata to store metadata
[root@ceph-a ceph]# ceph osd pool create cephfs-data 128
[root@ceph-a ceph]# ceph osd pool create cephfs-metadata 64
128 indicates that the number of PGs is 128. A PG is a placement group: files are stored in PGs, and PGs are stored in the pool
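If you want to confirm the PG counts after creating the pools (an optional check, not part of the original steps), you can query them with ceph osd pool get:
[root@ceph-a ceph]# ceph osd pool get cephfs-data pg_num
pg_num: 128
[root@ceph-a ceph]# ceph osd pool get cephfs-metadata pg_num
pg_num: 64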
⑥, create a file system named cephfs1
[root@ceph-a ceph]# ceph fs new cephfs1 cephfs-metadata cephfs-data
⑦, view the MDS status
[root@ceph-a ceph]# ceph mds stat
⑧, view file system information
[root@ceph-a ceph]# ceph fs ls
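For reference, with the pools and file system created above, the output should look roughly like this:
[root@ceph-a ceph]# ceph fs ls
name: cephfs1, metadata pool: cephfs-metadata, data pools: [cephfs-data ]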
⑨, mount and use on the client
Create the mount directory /cephfs on ceph-f
[root@ceph-f ~]# mkdir /cephfs
Mounting
[root@ceph-f ~]# mount -t ceph 192.168.20.144:6789:/ /cephfs/ -o name=admin,secret=AQBBhQ9cJh/tDxAAzdcwBz3QZzPsCfWbQE0qjg==
Parameter resolution:
-t: specify the file system type
-o: mount options
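Passing the key directly on the command line leaves it visible in the shell history and process list. As an alternative sketch (assuming the ceph command and the admin keyring are also available on ceph-f), the key can be written to a secret file and referenced with the secretfile option instead:
[root@ceph-f ~]# ceph auth get-key client.admin > /etc/ceph/admin.secret
[root@ceph-f ~]# chmod 600 /etc/ceph/admin.secret
[root@ceph-f ~]# mount -t ceph 192.168.20.144:6789:/ /cephfs/ -o name=admin,secretfile=/etc/ceph/admin.secret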
⑩, check the mount status
[root@ceph-f ~]# df -h
3. View the statistics of cluster free space
[root@ceph-a ~]# ceph df
For the implementation of Ceph cluster, please refer to another blog post: https://blog.51cto.com/4746316/2329558
For the application of Ceph block devices, please refer to another blog post: https://blog.51cto.com/4746316/2330070
For Ceph object storage, please refer to my other blog post: https://blog.51cto.com/4746316/2330455
III. Summary
CephFS is rarely used in actual production environments because it is not yet mature enough. For now we only need a basic understanding of it; there is no need to go too deep, and we can take another look once it has stabilized.