CephFS Operations


This article introduces CephFS operations in detail, covering the CephFS architecture as well as deployment and configuration step by step. It assumes no prior background, so readers starting from scratch can follow along as well.

1. Introduction to CephFS

Ceph File System (CephFS) is a POSIX-compliant file system that provides access to files stored on a Ceph storage cluster. The Jewel release (10.2.0) was the first Ceph version to include a stable CephFS. CephFS requires at least one running Metadata Server (MDS) daemon (ceph-mds); the MDS manages the metadata of files stored on CephFS and coordinates access to the Ceph storage cluster.

In other words, CephFS presents Ceph to users as a file system: Ceph uses its internal storage to emulate a file-system layout and exposes it through a POSIX-compliant interface, so files can be stored in and read from the Ceph cluster. CephFS is still not widely used in production, largely for performance reasons, but there are scenarios where it fits well.

Object storage also tends to cost more than ordinary file storage, since it requires dedicated object-storage software and high-capacity drives. If the data volume is not massive and the goal is simply file sharing, file storage is often the more cost-effective choice.

2. CephFS architecture

The bottom layer depends on the core Ceph cluster, which includes:

OSDs (ceph-osd): CephFS data and metadata are stored on the OSDs

MDS (ceph-mds): Metadata Servers, which manage the CephFS metadata

Mons (ceph-mon): Monitors, which manage the master copy of the cluster maps

These maps index a great deal of information about where data lives: a client first obtains the relevant map from the Mons and then locates the data on the OSDs. The lookup process is essentially the same for every access type; what differs is which pool and which map the data lives in.
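As a concrete illustration, the current copies of these maps can be dumped directly from the monitors (a quick sketch, assuming an admin keyring is available; the exact output depends on your cluster):

# ceph mon dump
# ceph osd dump
# ceph fs dump

The first prints the monitor map, the second the OSD map, and the third the file system (MDS) map.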

The protocol layer of the Ceph storage cluster is the native librados library, which the upper layers use to interact with the core cluster.

The CephFS library layer is libcephfs, which works on top of librados and presents the Ceph file system. Above it sit the two types of clients that can access CephFS. It is this libcephfs library that makes the CephFS service possible: the lower layers do not expose a file system by themselves, so access goes through this third-party library.

Metadata: a file's name and attribute information; in CephFS it is stored separately from the file data.

How is CephFS data accessed?

First, the client contacts the MDS over an RPC protocol and obtains the file's metadata; it then performs the file's I/O directly against RADOS. With these two pieces of information the client can read or write the file it wants. Between the MDS and RADOS sits the metadata journal: the MDS logs metadata updates to this journal, which is itself stored on the OSDs, so the MDS also interacts with RADOS, and all data ultimately ends up in RADOS.
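To see that both the file metadata and its journal really live as RADOS objects, you can list the objects in the two CephFS pools created later in this article (a sketch, assuming the cephfs-metadata and cephfs-data pool names used below; the object names will differ on your cluster):

# rados -p cephfs-metadata ls | head
# rados -p cephfs-data ls | head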


3. Configure CephFS MDS

To use CephFS, at least one metadata server process is required. You can create an MDS manually, or deploy one with ceph-deploy or ceph-ansible.

Log in to the ceph-deploy working directory and run the command below; $hostname is the hostname of the cluster node that will run the MDS.

# ceph-deploy mds create $hostname
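Once the daemon has been created it registers with the cluster and shows up as a standby until a file system exists; a quick check (hostnames and counts will depend on your cluster):

# ceph mds stat
# ceph -s | grep mds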

4. Deploy the Ceph file system

To deploy a CephFS, follow these steps:

Create a Ceph file system on a Mon node.

If CephX authentication is enabled, create a client user with access to CephFS

Mount CephFS to a dedicated node.

Mount CephFS as kernel client

Mount CephFS as FUSE client

1. Create a Ceph file system

1. First create two pools: cephfs-data for file data and cephfs-metadata for file metadata. The PG counts here can be kept fairly small and sized according to the number of OSDs.

# ceph osd pool create cephfs-data 256 256
# ceph osd pool create cephfs-metadata 64 64

Check that the pools were created successfully:

[root@cephnode01 my-cluster]# ceph osd lspools
1 .rgw.root
2 default.rgw.control
3 default.rgw.meta
4 default.rgw.log
5 rbd
6 cephfs-data
7 cephfs-metadata

Ceph logs can be found under /var/log/ceph:

[root@cephnode01 my-cluster]# tail -f /var/log/ceph/ceph
ceph.audit.log                  ceph.log                 ceph-mgr.cephnode01.log  ceph-osd.0.log
ceph-client.rgw.cephnode01.log  ceph-mds.cephnode01.log  ceph-mon.cephnode01.log  ceph-volume.log

Note: the metadata pool can generally start with a relatively small number of PGs and be grown later as needed. Because the metadata pool stores the metadata of CephFS files, it is worth keeping extra replicas of it for safety, and for lower latency consider placing the metadata on SSDs.
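As an illustration of that note, these properties can be adjusted after the pools exist. The commands below are a sketch: the values are examples, and the ssd-rule CRUSH rule name is an assumption of mine, only useful if your cluster actually has SSD-class OSDs:

# ceph osd pool set cephfs-metadata pg_num 128
# ceph osd pool set cephfs-metadata size 3
# ceph osd crush rule create-replicated ssd-rule default host ssd
# ceph osd pool set cephfs-metadata crush_rule ssd-rule

The first command grows the PG count later if needed, the second sets the replica count of the metadata pool (raise it for extra safety), and the last two place the metadata pool on SSD-backed OSDs.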

2. Create a CephFS named cephfs, specifying the two pools just created (metadata pool first):

# ceph fs new cephfs cephfs-metadata cephfs-data
new fs with metadata pool 7 and data pool 6

3. Verify that at least one MDS has entered the active state.

You can also see that the two standby MDS daemons are cephnode01 and cephnode03.

# ceph fs status cephfs
cephfs - 0 clients
======
+------+--------+------------+---------------+-------+-------+
| Rank | State  |    MDS     |    Activity   |  dns  |  inos |
+------+--------+------------+---------------+-------+-------+
|  0   | active | cephnode02 | Reqs:    0 /s |   10  |   13  |
+------+--------+------------+---------------+-------+-------+
+-----------------+----------+-------+-------+
|       Pool      |   type   |  used | avail |
+-----------------+----------+-------+-------+
| cephfs-metadata | metadata | 1536k | 17.0G |
|   cephfs-data   |   data   |    0  | 17.0G |
+-----------------+----------+-------+-------+
+-------------+
| Standby MDS |
+-------------+
|  cephnode01 |
|  cephnode03 |
+-------------+
MDS version: ceph version 14.2.7 (3d58626ebeec02d8385a4cefb92c6cbc3a45bfe8) nautilus (stable)

4. On a Monitor node, create a user named client.cephfs for accessing CephFS.

# ceph auth get-or-create client.cephfs mon 'allow r' mds 'allow rw' osd 'allow rw pool=cephfs-data, allow rw pool=cephfs-metadata'
[client.cephfs]
    key = AQA5IV5eNCwMGRAAy4dIZ8+ISfBcwZegFTYD6Q==

A key is generated here; clients must use this key to access CephFS.

View the list of users and the capabilities granted to each:

[root@cephnode01 my-cluster]# ceph auth list
client.cephfs
    key: AQA5IV5eNCwMGRAAy4dIZ8+ISfBcwZegFTYD6Q==
    caps: [mds] allow rw
    caps: [mon] allow r
    caps: [osd] allow rw pool=cephfs-data, allow rw pool=cephfs-metadata
client.rgw.cephnode01
    key: AQBOAl5eGVL/HBAAYH93c4wPiBlD7YhuPY0u7Q==
    caps: [mon] allow rw
    caps: [osd] allow r

5. Verify that the key works.

# ceph auth get client.cephfs
exported keyring for client.cephfs
[client.cephfs]
    key = AQA5IV5eNCwMGRAAy4dIZ8+ISfBcwZegFTYD6Q==
    caps mds = "allow rw"
    caps mon = "allow r"
    caps osd = "allow rw pool=cephfs-data, allow rw pool=cephfs-metadata"

This confirms that the client.cephfs user exists and has read/write access to CephFS.
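For the client mounts later in this article it is convenient to export this user to files: the full keyring for ceph-fuse, and the bare secret for the kernel driver's secretfile= option (a sketch; run on a node with admin access and copy the files to the client, and note the paths simply match the ones referenced in the mount and fstab examples below):

# ceph auth get client.cephfs -o /etc/ceph/ceph.client.cephfs.keyring
# ceph auth get-key client.cephfs > /etc/ceph/cephfs.key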

6. Check CephFS and MDS status

The cluster status now includes the MDS service:

# ceph -s
  cluster:
    id:     75aade75-8a3a-47d5-ae44-ec3a84394033
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum cephnode01,cephnode02,cephnode03 (age 2h)
    mgr: cephnode01 (active, since 2h), standbys: cephnode02, cephnode03
    mds: cephfs:1 {0=cephnode02=up:active} 2 up:standby
    osd: 3 osds: 3 up (since 2h), 3 in (since 2h)
    rgw: 1 daemon active (cephnode01)

  data:
    pools:   7 pools, 96 pgs
    objects: 263 objects, 29 MiB
    usage:   3.1 GiB used, 54 GiB / 57 GiB avail
    pgs:     96 active+clean

ceph mds stat shows 1 active and 2 standby MDS daemons:

# ceph mds stat
cephfs:1 {0=cephnode02=up:active} 2 up:standby

ceph fs ls shows the file system and its two pools:

# ceph fs ls
name: cephfs, metadata pool: cephfs-metadata, data pools: [cephfs-data]

# ceph fs status

1.1 Mount CephFS as a kernel client

Here the mount is done from another machine; in this example it is a Prometheus host, but any client can be used. The kernel client works through the operating system kernel: it talks to the kernel's ceph module to mount the file system.
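Before mounting, it is worth confirming that the client kernel actually provides the ceph file system module (a quick check; whether the module is present depends on the distribution and kernel version):

# modprobe ceph
# lsmod | grep ceph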

1. Create the mount directory /cephfs

# mkdir /cephfs

2. Mount the file system, specifying the addresses of the cluster's monitor nodes followed by the name and key of the user created for accessing the cluster.

# mount -t ceph 192.168.1.10:6789,192.168.1.11:6789,192.168.1.12:6789:/ /cephfs -o name=cephfs,secret=AQA5IV5eNCwMGRAAy4dIZ8+ISfBcwZegFTYD6Q==

3. Automatic mount

# echo "mon1:6789,mon2:6789,mon3:6789:/ / cephfs ceph name=cephfs,secretfile=/etc/ceph/cephfs.key,_netdev,noatime 0 0" | sudo tee-a / etc/fstab

4. Verify that the mount succeeded.

# stat -f /cephfs
  File: "/cephfs"
    ID: 4f32eedbe607030e  Namelen: 255  Type: ceph
Block size: 4194304        Fundamental block size: 4194304
Blocks: Total: 4357       Free: 4357       Available: 4357
Inodes: Total: 0          Free: -1

1.2 Mount CephFS as a FUSE client

1. Install ceph-common; after installation the rbd and ceph commands are available.

Here we again use the internal yum repository to install the dependencies:

# yum -y install epel-release
# yum -y install ceph-common

2. Install ceph-fuse, the Ceph client tool that mounts the file system in user space:

# yum install -y ceph-fuse

3. Copy the ceph.conf of the cluster to the client

# scp root@192.168.1.10:/etc/ceph/ceph.conf /etc/ceph/
# chmod 644 /etc/ceph/ceph.conf

4. Use ceph-fuse to mount CephFS

When mounting from another host, the client needs the client.cephfs keyring created earlier; copy it from the cluster to this client and use it here.

[root@prometheus ~]# more /etc/ceph/ceph.client.cephfs.keyring
exported keyring for client.cephfs
[client.cephfs]
    key = AQA5IV5eNCwMGRAAy4dIZ8+ISfBcwZegFTYD6Q==
    caps mds = "allow rw"
    caps mon = "allow r"
    caps osd = "allow rw pool=cephfs-data, allow rw pool=cephfs-metadata"

# ceph-fuse --keyring /etc/ceph/ceph.client.cephfs.keyring --name client.cephfs -m 192.168.1.10:6789,192.168.1.11:6789,192.168.1.12:6789 /cephfs/

5. Verify that CephFS has been mounted successfully
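A simple way to check (output varies by cluster; a ceph-fuse mount normally shows up with type fuse.ceph-fuse):

# df -h /cephfs
# mount | grep ceph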

6. Automatic mount

# echo "none / cephfs fuse.ceph ceph.id=cephfs [, ceph.conf=/etc/ceph/ceph.conf], _ netdev,defaults 0" | sudo tee-a / etc/fstab or # echo "id=cephfs,conf=/etc/ceph/ceph.conf / mnt/ceph3 fuse.ceph _ netdev,defaults 0" | sudo tee-a / etc/fstab

7. Unmount

# fusermount -u /cephfs
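For comparison, a CephFS mounted through the kernel client in section 1.1 is unmounted with the ordinary umount command:

# umount /cephfs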

5. Switching MDS between active/standby and multi-active modes

1. Configure multi-active (multi-master) mode

When the MDS becomes the performance bottleneck of CephFS, you should configure multiple active MDS daemons. This typically happens when multiple client applications perform large numbers of metadata operations in parallel, each in its own separate working directory; that workload suits the multi-active MDS mode.

Configure MDS multi-master mode

Each CephFS file system has a max_mds setting, which controls how many active MDS daemons are created. Note that max_mds only takes effect if enough MDS daemons are actually available: for example, if only one MDS daemon is running and max_mds is set to two, no second active MDS will appear.

Set max_mds to 2, which gives 2 active MDS daemons and 1 standby.

# ceph fs set cephfs max_mds 2
[root@cephnode01 ceph]# ceph fs status
cephfs - 1 clients
======
+------+--------+------------+---------------+-------+-------+
| Rank | State  |    MDS     |    Activity   |  dns  |  inos |
+------+--------+------------+---------------+-------+-------+
|  0   | active | cephnode02 | Reqs:    0 /s |   11  |   14  |
|  1   | active | cephnode01 | Reqs:    0 /s |   10  |   13  |
+------+--------+------------+---------------+-------+-------+
+-----------------+----------+-------+-------+
|       Pool      |   type   |  used | avail |
+-----------------+----------+-------+-------+
| cephfs-metadata | metadata | 2688k | 16.8G |
|   cephfs-data   |   data   |  521M | 16.8G |
+-----------------+----------+-------+-------+
+-------------+
| Standby MDS |
+-------------+
|  cephnode03 |
+-------------+

In other words, when CephFS is heavily used and the data volume is large, a single active MDS can become a bottleneck; configuring multiple active MDS daemons spreads the metadata load across hosts, so several hosts serve the file system at the same time, giving a form of load balancing.
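If you want explicit control over how that load is split, CephFS also allows pinning a directory subtree to a particular MDS rank via an extended attribute. A sketch, assuming the /cephfs mount point used above, a hypothetical directory /cephfs/shared, and the attr package installed on the client:

# setfattr -n ceph.dir.pin -v 1 /cephfs/shared
# getfattr -n ceph.dir.pin /cephfs/shared

Rank 1 here refers to the second active MDS; setting the value to -1 removes the pin.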

1.3 Configure standby MDS

Even with multiple active MDS daemons, a standby daemon is still needed to take over if one of them fails. For a highly available system it is therefore best to set max_mds to one less than the total number of MDS daemons in the system.

However, if you are sure your MDS daemons will not fail, you can tell Ceph that no standby is needed with the following setting; otherwise an "insufficient standby daemons available" warning will appear:

# ceph fs set cephfs standby_count_wanted 0

2. Restore a single active MDS

2.1 Set max_mds

To restore the original layout, simply set max_mds back to 1, i.e. one active MDS and two standby.

# ceph fs set cephfs max_mds 1
[root@cephnode01 ceph]# ceph fs status
cephfs - 1 clients
======
+------+--------+------------+---------------+-------+-------+
| Rank | State  |    MDS     |    Activity   |  dns  |  inos |
+------+--------+------------+---------------+-------+-------+
|  0   | active | cephnode02 | Reqs:    0 /s |   11  |   14  |
+------+--------+------------+---------------+-------+-------+
+-----------------+----------+-------+-------+
|       Pool      |   type   |  used | avail |
+-----------------+----------+-------+-------+
| cephfs-metadata | metadata | 2688k | 16.8G |
|   cephfs-data   |   data   |  521M | 16.8G |
+-----------------+----------+-------+-------+
+-------------+
| Standby MDS |
+-------------+
|  cephnode03 |
|  cephnode01 |
+-------------+

To run ceph commands from the client side, install the ceph-common and ceph-fuse client tools and copy the ceph.client.admin.keyring and ceph.conf files to the client; the ceph commands can then be executed there.

[root@cephnode01 ceph]# scp ceph.client.admin.keyring root@192.168.1.14:/etc/ceph
root@192.168.1.14's password:
ceph.client.admin.keyring

[root@prometheus ceph]# ceph -s
  cluster:
    id:     75aade75-8a3a-47d5-ae44-ec3a84394033
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum cephnode01,cephnode02,cephnode03 (age 4h)
    mgr: cephnode01 (active, since 4h), standbys: cephnode02, cephnode03
    mds: cephfs:2 {0=cephnode02=up:active 1=cephnode03=up:active} 1 up:standby
    osd: 3 osds: 3 up (since 4h), 3 in (since 4h)
    rgw: 1 daemon active (cephnode01)

  data:
    pools:   7 pools, 96 pgs
    objects: 345 objects, 203 MiB
    usage:   3.6 GiB used, 53 GiB / 57 GiB avail
    pgs:     96 active+clean

These are the details of CephFS operations. Hopefully there was something useful here; if you want to dig further into the deployment and configuration steps, you are welcome to keep following along.
