

Introduction and Construction of the Ceph File System (CephFS)


One: Introduction before installation

A Ceph file system (CephFS) requires at least two RADOS pools: one for data and one for metadata. When configuring these pools, consider the following three points:

1. Use a higher replication level for the metadata pool, because any data loss in this pool can render the entire file system inaccessible.

2. Use lower-latency storage (such as SSDs) for the metadata pool, because this directly affects the latency of file system operations as observed on clients.

3. The data pool used to create the file system is the "default" data pool and is where all inode backtrace information is stored, which is used for hard-link management and disaster recovery. Every inode created in CephFS therefore has at least one object in the default data pool. If an erasure-coded pool is planned for the file system, it is usually better to use a replicated pool as the default data pool, to improve the small-object read and write performance of backtrace updates. An additional erasure-coded data pool can then be added and applied to an entire hierarchy of directories and files.
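For example, the replication level and the storage backing of the metadata pool can both be adjusted per pool. A minimal sketch, assuming the pool name cephfs_metadata used later in this article, that your OSDs expose an ssd device class, and that ssd-rule is a hypothetical rule name:

[root@ceph-node1 ~]# ceph osd pool set cephfs_metadata size 3 # keep 3 replicas of metadata

[root@ceph-node1 ~]# ceph osd crush rule create-replicated ssd-rule default host ssd

[root@ceph-node1 ~]# ceph osd pool set cephfs_metadata crush_rule ssd-rule # place metadata on SSDs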

Two: Installation and configuration steps

Create the two pools for the file system with default settings, and create the MDS. The 2 in the commands below is the pg_num value; pgp_num is not specified here. For an explanation of pg and pgp, please refer to my previous article, "Introduction to Ceph Concepts and Components".

[root@ceph-node1 ~]# ceph osd pool create cephfs_data 2

pool 'cephfs_data' created

[root@ceph-node1 ~]# ceph osd pool create cephfs_metadata 2

pool 'cephfs_metadata' created
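If you want to inspect the placement groups, or grow them later as the pool fills, the same per-pool commands apply (a sketch; when pgp_num is not given at creation time it defaults to pg_num):

[root@ceph-node1 ~]# ceph osd pool get cephfs_data pg_num

[root@ceph-node1 ~]# ceph osd pool set cephfs_data pg_num 4

[root@ceph-node1 ~]# ceph osd pool set cephfs_data pgp_num 4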

[root@ceph-node1 ~]# ceph-deploy mds create ceph-node2

[root@ceph-node1 ~]# ceph mds stat

cephfs-1/1/1 up {0=ceph-node2=up:active}

After you have created the pools, you can create the CephFS file system:

[root@ceph-node1 ~]# ceph fs new cephfs cephfs_metadata cephfs_data

new fs with metadata pool 41 and data pool 40

[root@ceph-node1 ~]# ceph fs ls

name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data]
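As an additional sanity check, the cluster status should now show the active MDS and both pools. The ceph fs status subcommand is only available on reasonably recent Ceph releases, so treat this as a sketch:

[root@ceph-node1 ~]# ceph -s

[root@ceph-node1 ~]# ceph fs status cephfs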

Three: Client mount (kernel driver)

Create a mount directory

[root@ceph-client /]# mkdir -p /mnt/cephfs

On ceph-node2, create the user client.cephfs

[root@ceph-node2 ~]# ceph auth get-or-create client.cephfs mon 'allow r' mds 'allow r, allow rw path=/' osd 'allow rw pool=cephfs_data'

On ceph-node2, get the key of the client.cephfs user

[root@ceph-node2 ~]# ceph auth get-key client.cephfs

AQCL2d1dj4OgFRAAFeloClm23YTBsPn1qQnfTA==

Save the key obtained by the previous command to the ceph client

[root@ceph-client ~]# echo AQCL2d1dj4OgFRAAFeloClm23YTBsPn1qQnfTA== > /etc/ceph/cephfskey
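Since this key grants read/write access to the file system, it is a good idea to make the key file readable by root only:

[root@ceph-client ~]# chmod 600 /etc/ceph/cephfskey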

Mount this file system

[root@ceph-client ~]# mount -t ceph ceph-node2:6789:/ /mnt/cephfs -o name=cephfs,secretfile=/etc/ceph/cephfskey

Write to fstab

[root@ceph-client ~]# echo "ceph-node2:6789:/ /mnt/cephfs ceph name=cephfs,secretfile=/etc/ceph/cephfskey,_netdev,noatime 0 0" >> /etc/fstab
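To verify that the fstab entry is valid, you can unmount the file system and let mount -a pick it up again (assuming it is currently mounted):

[root@ceph-client ~]# umount /mnt/cephfs

[root@ceph-client ~]# mount -a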

Check the mount status

[root@ceph-client ~]# df -Th | grep cephfs

172.16.4.79:6789:/ ceph 46G 0 46G 0% /mnt/cephfs
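Finally, a small write test confirms that the client really has read/write access (testfile is just an arbitrary name used here for illustration):

[root@ceph-client ~]# dd if=/dev/zero of=/mnt/cephfs/testfile bs=1M count=10

[root@ceph-client ~]# ls -lh /mnt/cephfs/testfile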
