

(8) The client uses CephFS

2025-04-05 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/02 Report--

The Ceph file system, CephFS, is a POSIX-compatible distributed file system that uses Ceph RADOS to store its data. To run CephFS you need a functioning Ceph storage cluster and at least one Ceph Metadata Server (MDS).

Clients can use CephFS in two ways: by mounting it with the native kernel driver, or through FUSE with ceph-fuse.
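The two access methods boil down to two commands (a sketch only; `<mon-host>` and `<admin-secret>` are placeholders for your monitor address and client key, and both commands require a reachable cluster):

```shell
# Kernel-driver mount: the in-kernel CephFS client maps the root of the
# file system; credentials are passed as mount options.
mount -t ceph <mon-host>:6789:/ /mnt -o name=admin,secret=<admin-secret>

# FUSE mount: runs in user space and requires the ceph-fuse package;
# it reads credentials for client.admin from /etc/ceph by default.
ceph-fuse -m <mon-host>:6789 /mnt
```

The kernel driver generally performs better; ceph-fuse is easier to update independently of the kernel. Both are walked through step by step below.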

(1) prepare a healthy ceph cluster

[root@node140 mds]# ceph -s

  cluster:
    id:     58a12719-a5ed-4f95-b312-6efd6e34e558
    health: HEALTH_OK

  services:
    mon: 2 daemons, quorum node140,node142 (age 22h)
    mgr: admin(active, since 6d), standbys: node140
    osd: 16 osds: 16 up (since 17h), 16 in (since 3d)

  data:
    pools:   5 pools, 768 pgs
    objects: 2.61k objects, 9.8 GiB
    usage:   47 GiB used, 8.7 TiB / 8.7 TiB avail
    pgs:     768 active+clean

(2) Create the MDS data directory

[root@node140 ceph]# mkdir -pv /var/lib/ceph/mds/ceph-node140

(3) create a key for the bootstrap-mds client

[root@node140 ceph]# ceph-authtool --create-keyring /var/lib/ceph/bootstrap-mds/ceph.keyring --gen-key -n client.bootstrap-mds

(4) Create the bootstrap-mds client in the ceph auth database, granting it permissions and adding the previously created key

[root@node140 ceph]# ceph auth add client.bootstrap-mds mon 'allow profile bootstrap-mds' -i /var/lib/ceph/bootstrap-mds/ceph.keyring

(5) create a ceph.bootstrap-mds.keyring file in the root home directory

[root@node140 ceph]# touch /root/ceph.bootstrap-mds.keyring

(6) Import the key from /var/lib/ceph/bootstrap-mds/ceph.keyring into the ceph.bootstrap-mds.keyring file in the home directory

[root@node140 ceph]# ceph-authtool --import-keyring /var/lib/ceph/bootstrap-mds/ceph.keyring ceph.bootstrap-mds.keyring

(7) Create an mds.node140 user in the auth database, grant it permissions, generate a key, and save the key to /var/lib/ceph/mds/ceph-node140/keyring

[root@node140 ceph]# ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.node140 osd 'allow rwx' mds 'allow' mon 'allow profile mds' -o /var/lib/ceph/mds/ceph-node140/keyring

(8) Grant file permissions

[root@node140 ceph]# chown -R ceph.ceph /var/lib/ceph/mds/ceph-node140

(9) Add the MDS node to the configuration file

[root@node140 ceph]# vim /etc/ceph/ceph.conf

[mds.node140]

host = node140

(10) Enable and start the MDS service

[root@node140 mds] # systemctl enable ceph-mds@node140

[root@node140 mds] # systemctl start ceph-mds@node140

[root@node140 mds] # systemctl status ceph-mds@node140

(11) Add an MDS on another node

[root@node141 ceph]# vim /etc/ceph/ceph.conf

[mds.node140]

host = node140

[mds.node141]

host = node141

[root@node140 ceph]# scp /var/lib/ceph/bootstrap-mds/ceph.keyring node141:/etc/ceph/

[root@node140 ceph]# scp /root/ceph.bootstrap-mds.keyring node141:/etc/ceph/

[root@node141 ceph]# mkdir -pv /var/lib/ceph/mds/ceph-node141/

[root@node141 ceph]# cp /etc/ceph/ceph.keyring /var/lib/ceph/bootstrap-mds/

[root@node141 ceph]# ceph auth add client.bootstrap-mds mon 'allow profile bootstrap-mds' -i /var/lib/ceph/bootstrap-mds/ceph.keyring

[root@node141 ceph]# ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.node141 osd 'allow rwx' mds 'allow' mon 'allow profile mds' -o /var/lib/ceph/mds/ceph-node141/keyring

[root@node141 ceph] # systemctl start ceph-mds@node141

[root@node141 ceph] # systemctl status ceph-mds@node141

[root@node141 ceph] # systemctl enable ceph-mds@node141

(12) Create the cephfs_data pool

[root@node140 mds]# ceph osd pool create cephfs_data 128

pool 'cephfs_data' created

(13) Create the cephfs metadata pool

[root@node140 mds]# ceph osd pool create cephfs_metadata 128

pool 'cephfs_metadata' created

(14) create a cephfs

[root@node140 mds]# ceph fs new cephfs cephfs_metadata cephfs_data

new fs with metadata pool 7 and data pool 6

(15) View the cephfs

[root@node140 mds]# ceph fs ls

name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data]

[root@node140 mds]# ceph mds stat

cephfs:1 {0=node140=up:active}

Mounting CephFS with the kernel driver

(1) On the server node, view ceph.client.admin.keyring

[root@node140 ceph-node140]# cat /etc/ceph/ceph.client.admin.keyring

[client.admin]

key = AQB9w2BdnggFIBAA7SR+7cO/PtZl9PTlriBL1A==

caps mds = "allow"

caps mgr = "allow"

caps mon = "allow"

caps osd = "allow"

[root@docker38 ceph]# vim admin.key # copy the key from ceph.client.admin.keyring into this file on the client

[root@docker38 ceph]# chmod 600 admin.key

(2) Mount and use

[root@docker38 ceph]# mount -t ceph node140:6789:/ /mnt -o name=admin,secret=AQB9w2BdnggFIBAA7SR+7cO/PtZl9PTlriBL1A==

(3) Mount at boot on CentOS 7

[root@docker38 ~]# vim /etc/rc.local

mount -t ceph node140:6789:/ /mnt -o name=admin,secret=AQB9w2BdnggFIBAA7SR+7cO/PtZl9PTlriBL1A==

If /etc/rc.local is not executed at boot, make it executable:

[root@docker38 ~]# chmod +x /etc/rc.d/rc.local
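As an alternative to rc.local, the kernel mount can also go in /etc/fstab (a sketch under the same node140 setup; `_netdev` defers the mount until networking is up, and `secretfile` points at a file containing only the key so the secret does not sit in fstab itself):

```shell
# /etc/fstab entry (one line); /etc/ceph/admin.secret holds just the key string
node140:6789:/  /mnt  ceph  name=admin,secretfile=/etc/ceph/admin.secret,_netdev,noatime  0  0
```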

Mounting CephFS with FUSE

(1) Install the ceph-fuse package on the client

[root@docker38 ceph]# yum -y install ceph-fuse

(2) create a mount directory

[root@docker38 ~]# mkdir -pv /ceph/cephfs

(3) From the server, copy the client key to the client's /etc/ceph/

[root@node140 ceph]# scp ceph.client.admin.keyring root@10.10.204.38:/etc/ceph/

(4) Mount cephfs

[root@docker38]# ceph-fuse -m node140:6789 /ceph/cephfs/

(5) View mount

[root@docker38]# df -h

Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   46G   24G   22G  54% /
devtmpfs                 4.8G     0  4.8G   0% /dev
tmpfs                    4.9G     0  4.9G   0% /dev/shm
tmpfs                    4.9G  8.9M  4.8G   1% /run
tmpfs                    4.9G     0  4.9G   0% /sys/fs/cgroup
/dev/sda1               1014M  255M  760M  26% /boot
10.10.202.140:6789:/     2.8T     0  2.8T   0% /mnt
tmpfs                    984M     0  984M   0% /run/user/0
ceph-fuse                2.8T     0  2.8T   0% /ceph/cephfs

(6) automatic mount on boot

[root@docker38 ~]# vim /etc/rc.local

ceph-fuse -m node140:6789 /ceph/cephfs/
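A ceph-fuse mount can likewise be made persistent through /etc/fstab instead of rc.local (a sketch; the fuse.ceph mount helper reads the credentials for the given `ceph.id` from the keyring under /etc/ceph copied in step (3)):

```shell
# /etc/fstab entry for a FUSE mount (one line); the device field is unused
none  /ceph/cephfs  fuse.ceph  ceph.id=admin,ceph.conf=/etc/ceph/ceph.conf,_netdev,defaults  0  0
```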
