Install ceph
Official document
0. Preparation before deployment
Three CentOS hosts need to be prepared before deployment. I use CentOS 7.5 here and upgrade the system kernel to a 4.x long-term support version. The relevant configuration information is as follows:
Node hostname and IP information:

local-node-1: 10.0.0.1
local-node-2: 10.0.0.2
local-node-3: 10.0.0.3
Configure hosts entries so that the three nodes can resolve each other's hostnames, and set up passwordless SSH authentication between the three nodes.
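As a reference, a minimal sketch of the hosts entries and passwordless SSH setup described above, using the hostnames and IPs listed earlier (run on each node):

cat >> /etc/hosts <<'EOF'
10.0.0.1 local-node-1
10.0.0.2 local-node-2
10.0.0.3 local-node-3
EOF

ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa      # generate a key pair
ssh-copy-id root@local-node-2                 # push the public key to the other nodes
ssh-copy-id root@local-node-3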
Turn off the firewall and SELinux.
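On CentOS 7 this can be done as follows (a sketch; alternatively keep the firewall and open only the Ceph ports):

systemctl stop firewalld && systemctl disable firewalld                 # stop and disable the firewall
setenforce 0                                                            # turn SELinux off for the running system
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config     # make the change permanent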
Each node adds at least 3 disks for ceph storage. In actual production, multiple disks can be combined into a RAID. Ceph formats disks automatically when they are added, so there is no need to format them here.
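Before continuing, you can quickly confirm that the extra disks are visible on every node (the sdb/sdc/sdd names are the ones used for the OSDs later in this article):

lsblk    # the new disks should appear as sdb, sdc and sdd without partitions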
Ceph has strict time-synchronization requirements between nodes, so you need to install ntp, synchronize the time, and configure the epel source.

1. All nodes install the dependency packages:

yum install snappy leveldb gdisk python-argparse gperftools-libs -y
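For the time synchronization and epel source mentioned above, a minimal sketch (any reachable NTP server can be substituted for pool.ntp.org):

yum install epel-release ntp ntpdate -y          # configure the epel source and install ntp
ntpdate pool.ntp.org                             # one-shot initial sync
systemctl enable ntpd && systemctl start ntpd    # keep the clocks synchronized afterwards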
Add the yum source and import the key. Here I use the latest mimic release:
rpm --import 'https://download.ceph.com/keys/release.asc'
su -c 'rpm -Uvh https://download.ceph.com/rpm-mimic/el7/noarch/ceph-release-1-0.el7.noarch.rpm'
Due to network problems in China, you can choose to use Aliyun's mirror instead. Modify the ceph.repo file as follows:
[Ceph]
name=Ceph packages for $basearch
baseurl=https://mirrors.aliyun.com/ceph/rpm-mimic/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

[Ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-mimic/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

[ceph-source]
name=Ceph source packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-mimic/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

2. All nodes install ceph:

yum install ceph -y --disablerepo=epel

3. Check whether the installed packages are complete:

[root@local-node-1 ~]# rpm -qa | egrep -i "ceph|rados|rbd"
ceph-deploy-2.0.1-0.noarch
librados2-13.2.5-0.el7.x86_64
libradosstriper1-13.2.5-0.el7.x86_64
ceph-mgr-13.2.5-0.el7.x86_64
ceph-13.2.5-0.el7.x86_64
python-rados-13.2.5-0.el7.x86_64
libcephfs2-13.2.5-0.el7.x86_64
python-rbd-13.2.5-0.el7.x86_64
ceph-common-13.2.5-0.el7.x86_64
ceph-selinux-13.2.5-0.el7.x86_64
ceph-mon-13.2.5-0.el7.x86_64
ceph-osd-13.2.5-0.el7.x86_64
librbd1-13.2.5-0.el7.x86_64
python-cephfs-13.2.5-0.el7.x86_64
ceph-base-13.2.5-0.el7.x86_64
ceph-mds-13.2.5-0.el7.x86_64

Deploy the ceph cluster

I. Deploy Monitor

1. Create the configuration file directory and the configuration file:

mkdir /etc/ceph/
touch /etc/ceph/ceph.conf

2. Generate an FSID for the cluster:

[root@local-node-1 ~]# uuidgen
7bd25f8d-b76f-4ff9-89ec-186287bbeaa5
3. Create a keyring for the cluster and generate a key for the Monitor service:
[root@local-node-1 ~]# ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
creating /tmp/ceph.mon.keyring
4. Create an administrator keyring, generate a client.admin user, and add this user to the keyring:
[root@local-node-1 ~]# ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
creating /etc/ceph/ceph.client.admin.keyring

5. Create a bootstrap-osd keyring and add the client.bootstrap-osd user to it:

[root@local-node-1 ~]# ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring --gen-key -n client.bootstrap-osd --cap mon 'profile bootstrap-osd'
creating /var/lib/ceph/bootstrap-osd/ceph.keyring

6. Add the generated keys to ceph.mon.keyring:

[root@local-node-1 ~]# ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
[root@local-node-1 ~]# ceph-authtool /tmp/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring

7. Generate the monitor map using the hostname, IP address and FSID:

[root@local-node-1 ~]# monmaptool --create --add local-node-1 10.0.0.1 --fsid 7bd25f8d-b76f-4ff9-89ec-186287bbeaa5 /tmp/monmap
monmaptool: monmap file /tmp/monmap
monmaptool: set fsid to 7bd25f8d-b76f-4ff9-89ec-186287bbeaa5
monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)

8. Create the mon directory, using the form cluster-name-hostname:

mkdir /var/lib/ceph/mon/ceph-local-node-1

9. Populate the first mon daemon:

[root@local-node-1 ~]# ceph-mon --mkfs -i local-node-1 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring

10. Configure the /etc/ceph/ceph.conf file:

[root@local-node-1 ~]# cat /etc/ceph/ceph.conf
[global]
fsid = 7bd25f8d-b76f-4ff9-89ec-186287bbeaa5    # the generated FSID
mon initial members = local-node-1             # hostname
mon host = 10.0.0.1                            # corresponding IP
public network = 10.0.0.1/24
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd journal size = 1024
osd pool default size = 3
osd pool default min size = 2
osd pool default pg num = 333
osd pool default pgp num = 333
osd crush chooseleaf type = 1

11. Because the service runs as the ceph user, you need to change the ownership to ceph (you can also modify the systemd unit file to run the service as root), and then start the Monitor:

chown -R ceph:ceph /var/lib/ceph
systemctl start ceph-mon@local-node-1.service

12. Confirm that the service has started normally:

[root@local-node-1 ~]# ceph -s
  cluster:
    id:     7bd25f8d-b76f-4ff9-89ec-186287bbeaa5
    health: HEALTH_OK
  services:
    mon: 1 daemons, quorum local-node-1
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

[root@local-node-1 ~]# netstat -lntp | grep ceph-mon
tcp    0    0 10.0.0.1:6789    0.0.0.0:*    LISTEN    1799/ceph-mon

II. Deploy Manager
When we have configured the ceph-mon service, we need to configure the ceph-mgr service.
1. Generate an authentication key (ceph-mgr is a custom name; replace $name below):

ceph auth get-or-create mgr.$name mon 'allow profile mgr' osd 'allow *' mds 'allow *'

For example:

[root@local-node-1 ~]# ceph auth get-or-create mgr.ceph-mgr mon 'allow profile mgr' osd 'allow *' mds 'allow *'
[mgr.ceph-mgr]
        key = AQBC56VcK2PALhAArjY0icXMK6/Hs0xZm/smPA==

2. Create the directory where the key is stored (the directory name has the form cluster-name-$name):

sudo -u ceph mkdir /var/lib/ceph/mgr/ceph-ceph-mgr

3. Store the key generated above in this directory in a file named keyring:

[root@local-node-1 ~]# cat /var/lib/ceph/mgr/ceph-ceph-mgr/keyring
[mgr.ceph-mgr]
        key = AQBC56VcK2PALhAArjY0icXMK6/Hs0xZm/smPA==

4. Start the ceph-mgr service:

ceph-mgr -i $name

For example:

[root@local-node-1 ~]# ceph-mgr -i ceph-mgr

5. Check whether the service has started by looking at the ceph status; it should show mgr: ceph-mgr(active):

[root@local-node-1 ~]# ceph -s
  cluster:
    id:     7bd25f8d-b76f-4ff9-89ec-186287bbeaa5
    health: HEALTH_OK
  services:
    mon: 1 daemons, quorum local-node-1
    mgr: ceph-mgr(active)    # if it shows "starting", wait a while
    osd: 0 osds: 0 up, 0 in
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

[root@local-node-1 ~]# netstat -lntp | grep ceph
tcp    0    0 10.0.0.1:6789    0.0.0.0:*    LISTEN    1799/ceph-mon
tcp    0    0 10.0.0.1:6800    0.0.0.0:*    LISTEN    133336/ceph-mgr

6. Once the mgr service is running normally, we can use the following command to see which modules are available in the current mgr:

[root@local-node-1 ~]# ceph mgr module ls
{
    "enabled_modules": [
        "balancer",
        "iostat",
        "restful",
        "status"
    ],
    "disabled_modules": [
        { "name": "dashboard", "can_run": true, "error_string": "" },
        { "name": "hello", "can_run": true, "error_string": "" },
        { "name": "influx", "can_run": false, "error_string": "influxdb python module not found" },
        { "name": "localpool", "can_run": true, "error_string": "" },
        { "name": "prometheus", "can_run": true, "error_string": "" },
        { "name": "selftest", "can_run": true, "error_string": "" },
        { "name": "smart", "can_run": true, "error_string": "" },
        { "name": "telegraf", "can_run": true, "error_string": "" },
        { "name": "telemetry", "can_run": true, "error_string": "" },
        { "name": "zabbix", "can_run": true, "error_string": "" }
    ]
}

If you want to enable a module, you can use the following command:

[root@local-node-1 ~]# ceph mgr module enable dashboard
[root@local-node-1 ~]# ceph mgr module ls
{
    "enabled_modules": [
        "balancer",
        "dashboard",
        "iostat",
        "restful",
        "status"
    ],
    ...

# disable a module
ceph mgr module disable dashboard

If a module publishes an address when it is loaded (such as an http service), you can view the published service addresses with the following command:

[root@local-node-1 ~]# ceph mgr services
{}

When the cluster starts for the first time, it uses the mgr_initial_modules setting to decide which modules to enable. This setting is ignored for the rest of the cluster's lifetime: it is used only at bootstrap. For example, before starting the monitor daemon for the first time, you can add this section to ceph.conf:

[mon]
    mgr initial modules = dashboard balancer

High availability of ceph-mgr
In general, we should configure the ceph-mgr service on each host running the ceph-mon daemon to achieve the same level of availability.
By default, the first ceph-mgr instance to start is made active by the Monitors, and the rest become standbys. The ceph-mgr daemons do not require a quorum.
If the active daemon fails to send a beacon to the monitors for more than mon mgr beacon grace (30 seconds by default), it will be replaced by a standby.
If you want to force a failover, you can use ceph mgr fail to explicitly mark a ceph-mgr daemon as failed.
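For example, with the mgr instance named ceph-mgr as in this article, a failover could be forced like this (a sketch; a standby only takes over if one is running):

ceph mgr fail ceph-mgr    # mark the currently active mgr as failed
ceph -s                   # the mgr: line should now show a standby taking over, or "no daemons active" if none exists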
Help for the related mgr modules can be viewed with:
ceph tell mgr help

III. Create OSD
Official document
When mon initialization is complete and running normally, you should add OSDs. The cluster cannot reach the active+clean state until there are enough OSDs to handle the configured number of object replicas (for example, osd pool default size = 3 requires at least three OSDs). After bootstrapping the monitor, your cluster has a default CRUSH map; however, the CRUSH map does not yet map to any Ceph OSD daemon on any node.
Ceph provides a ceph-volume utility that initializes the logical volumes, disks, and partitions used by Ceph. The ceph-volume utility creates OSD IDs by incrementing an index, and it adds each new OSD to the CRUSH map under its host. You can get CLI details by executing ceph-volume -h. The ceph-volume tool simplifies many manual deployment steps; if you don't use ceph-volume, you need to do some of this configuration manually. To create the first three OSDs using the short-form procedure, do the following on every node where you want to create OSDs.
== There are two different backend choices when creating a ceph OSD: filestore and bluestore. Bluestore is the default in the community version and is designed to improve on filestore's performance. The specific differences between the two will be described in the Ceph internals. ==
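If you are not sure which backend an existing OSD uses, its metadata reports it; a hedged example (osd.0 is just an illustration):

ceph osd metadata 0 | grep osd_objectstore    # prints "bluestore" or "filestore" for osd.0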
BlueStore

Method 1:

1. Create the OSDs. For example, we have three disks on each node, named sdb, sdc, and sdd:

ceph-volume lvm create --data /dev/sdb
ceph-volume lvm create --data /dev/sdc
ceph-volume lvm create --data /dev/sdd

2. View the current lvm logical volumes and activate the OSDs according to this output:

[root@local-node-1 ~]# ceph-volume lvm list

====== osd.1 ======

  [block]    /dev/ceph-fad16202-18c0-4444-9640-946173373925/osd-block-43a082d5-79c4-4d3f-880e-ecc7eaef6a83

      type                  block
      osd id                1
      cluster fsid          7bd25f8d-b76f-4ff9-89ec-186287bbeaa5
      cluster name          ceph
      osd fsid              43a082d5-79c4-4d3f-880e-ecc7eaef6a83
      encrypted             0
      cephx lockbox secret
      block uuid            W68QgI-8eHM-bSEr-I9Gs-dQx8-tdf9-lHRbqa
      block device          /dev/ceph-fad16202-18c0-4444-9640-946173373925/osd-block-43a082d5-79c4-4d3f-880e-ecc7eaef6a83
      vdo                   0
      crush device class    None
      devices               /dev/sdc

====== osd.0 ======

  [block]    /dev/ceph-6c675287-4a42-43f0-8cef-69b0150c3b06/osd-block-f829a5f0-0a11-4ae7-983a-ecd01718a81a

      type                  block
      osd id                0
      cluster fsid          7bd25f8d-b76f-4ff9-89ec-186287bbeaa5
      cluster name          ceph
      osd fsid              f829a5f0-0a11-4ae7-983a-ecd01718a81a
      encrypted             0
      cephx lockbox secret
      block uuid            E0YDG4-lm1W-WbqE-yRHy-hqGL-H0af-eZzKjr
      block device          /dev/ceph-6c675287-4a42-43f0-8cef-69b0150c3b06/osd-block-f829a5f0-0a11-4ae7-983a-ecd01718a81a
      vdo                   0
      crush device class    None
      devices               /dev/sdb

====== osd.2 ======

  [block]    /dev/ceph-256d0c82-3d7b-4672-a241-99c9c614809d/osd-block-75c04fb3-90e8-40af-9fb4-1c94b22664be

      type                  block
      osd id                2
      cluster fsid          7bd25f8d-b76f-4ff9-89ec-186287bbeaa5
      cluster name          ceph
      osd fsid              75c04fb3-90e8-40af-9fb4-1c94b22664be
      encrypted             0
      cephx lockbox secret
      block uuid            fNFmrI-Y1dZ-4cHd-UCVi-ajLD-Uim2-wkcx3y
      block device          /dev/ceph-256d0c82-3d7b-4672-a241-99c9c614809d/osd-block-75c04fb3-90e8-40af-9fb4-1c94b22664be
      vdo                   0
      crush device class    None
      devices               /dev/sdd

Method 2:
This method allows more detailed parameters to be configured and is divided into two steps: preparing the disk and activating the OSD.
Prepare OSD
ceph-volume lvm prepare --data {data-path}

such as:

ceph-volume lvm prepare --data /dev/hdd1

Activate the OSD:

ceph-volume lvm activate {ID} {FSID}

such as:

ceph-volume lvm activate 0 7bd25f8d-b76f-4ff9-89ec-186287bbeaa5

View the ceph status:

[root@local-node-1 ~]# ceph -s
  cluster:
    id:     7bd25f8d-b76f-4ff9-89ec-186287bbeaa5
    health: HEALTH_OK
  services:
    mon: 1 daemons, quorum local-node-1
    mgr: ceph-mgr(active)
    osd: 3 osds: 3 up, 3 in
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 27 GiB / 30 GiB avail
    pgs:

[root@local-node-1 ~]# netstat -lntp | grep ceph
# ceph-mon (PID 1093) listens on 6789; the three ceph-osd daemons (PIDs 1126, 1128, 1132) each listen on several ports in the 6800-6812 range

FileStore
Refer to this document: http://docs.ceph.com/docs/master/install/manual-deployment/#filestore
Expand the cluster
With the basic components deployed on a single machine, we now need to build a highly available cluster by joining the other two nodes, local-node-2 and local-node-3.
I. Extend MON

1. Modify the configuration on the node-1 node:

[root@local-node-1 ~]# cat /etc/ceph/ceph.conf
[global]
fsid = 7bd25f8d-b76f-4ff9-89ec-186287bbeaa5
public network = 10.0.0.0/24
mon initial members = local-node-1,local-node-2,local-node-3
mon host = 10.0.0.1,10.0.0.2,10.0.0.3
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd journal size = 1024
osd pool default size = 3
osd pool default min size = 2
osd pool default pg num = 333
osd pool default pgp num = 333
osd crush chooseleaf type = 1

[mon]
mon allow pool delete = true

[mds.local-node-1]
host = local-node-1

2. Distribute the configuration and key files to the other nodes:

scp /etc/ceph/* 10.0.0.2:/etc/ceph/
scp /etc/ceph/* 10.0.0.3:/etc/ceph/

3. Create the ceph-related directories on the new node and set permissions:

mkdir -p /var/lib/ceph/{bootstrap-mds,bootstrap-mgr,bootstrap-osd,bootstrap-rbd,bootstrap-rgw,mds,mgr,mon,osd}
chown -R ceph:ceph /var/lib/ceph
sudo -u ceph mkdir /var/lib/ceph/mon/ceph-local-node-2    # specify the node name as the ID

4. Modify the configuration file on the new node:

[root@local-node-2 ~]# cat /etc/ceph/ceph.conf
[global]
fsid = 7bd25f8d-b76f-4ff9-89ec-186287bbeaa5
public network = 10.0.0.0/24
mon initial members = local-node-1,local-node-2,local-node-3
mon host = 10.0.0.1,10.0.0.2,10.0.0.3
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd journal size = 1024
osd pool default size = 3
osd pool default min size = 2
osd pool default pg num = 333
osd pool default pgp num = 333
osd crush chooseleaf type = 1

[mon]
mon allow pool delete = true

[mon.local-node-2]
mon_addr = 10.0.0.2:6789
host = local-node-2

5. Get the key and map from the cluster:

ceph auth get mon. -o /tmp/monkeyring
ceph mon getmap -o /tmp/monmap

6. Add the new Monitor using the existing key and map, specifying the hostname:

sudo -u ceph ceph-mon --mkfs -i local-node-2 --monmap /tmp/monmap --keyring /tmp/monkeyring

7. Start the service:

systemctl start ceph-mon@local-node-2

8. Add the remaining node in the same way, then check the mon status after it is successfully added:

[root@local-node-3 ~]# ceph -s
  cluster:
    id:     7bd25f8d-b76f-4ff9-89ec-186287bbeaa5
    health: HEALTH_OK
  services:
    mon: 3 daemons, quorum local-node-1,local-node-2,local-node-3
    mgr: ceph-mgr(active)
    osd: 3 osds: 3 up, 3 in
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 27 GiB / 30 GiB avail
    pgs:

[root@local-node-3 ~]# ceph mon stat
e3: 3 mons at {local-node-1=10.0.0.1:6789/0,local-node-2=10.0.0.2:6789/0,local-node-3=10.0.0.3:6789/0}, election epoch 28, leader 0 local-node-1, quorum 0,1,2 local-node-1,local-node-2,local-node-3
II. Add OSD

1. Copy the configuration file and keys under /etc/ceph from an already running ceph node to the new node, and modify the configuration to:

[root@local-node-2 ~]# cat /etc/ceph/ceph.conf
[global]
fsid = 7bd25f8d-b76f-4ff9-89ec-186287bbeaa5
public network = 10.0.0.0/24
mon initial members = local-node-1,local-node-2,local-node-3
mon host = 10.0.0.1,10.0.0.2,10.0.0.3
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd journal size = 1024
osd pool default size = 3
osd pool default min size = 2
osd pool default pg num = 333
osd pool default pgp num = 333
osd crush chooseleaf type = 1

[mon]
mon allow pool delete = true

[mon.local-node-2]
mon_addr = 10.0.0.2:6789
host = local-node-2

2. Copy the initialized key file from an existing osd node:

scp -p /var/lib/ceph/bootstrap-osd/ceph.keyring 10.0.0.2:/var/lib/ceph/bootstrap-osd/
scp -p /var/lib/ceph/bootstrap-osd/ceph.keyring 10.0.0.3:/var/lib/ceph/bootstrap-osd/

3. When adding an osd, you need to decide whether to use the bluestore or filestore backend, and this must be consistent with the existing cluster. Here, take bluestore as an example:

ceph-volume lvm create --data /dev/sdb
ceph-volume lvm create --data /dev/sdc
ceph-volume lvm create --data /dev/sdd

4. Add the other nodes as described above. The successful result looks like this:

[root@local-node-1 ~]# ceph -s
  cluster:
    id:     7bd25f8d-b76f-4ff9-89ec-186287bbeaa5
    health: HEALTH_OK
  services:
    mon: 3 daemons, quorum local-node-1,local-node-2,local-node-3
    mgr: ceph-mgr(active)
    osd: 9 osds: 9 up, 9 in
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   9.1 GiB used, 81 GiB / 90 GiB avail
    pgs:

Deploy CephFS

I. Add MDS
== MDS services only need to be created if CephFS is going to be used. ==
1. Create the directory (where the ID is the local hostname):

mkdir -p /var/lib/ceph/mds/{cluster-name}-{id}

EG:

[root@local-node-1 ~]# sudo -u ceph mkdir -p /var/lib/ceph/mds/ceph-local-node-1

2. Create the keyring:

ceph-authtool --create-keyring /var/lib/ceph/mds/{cluster-name}-{id}/keyring --gen-key -n mds.{id}

eg:

ceph-authtool --create-keyring /var/lib/ceph/mds/ceph-local-node-1/keyring --gen-key -n mds.local-node-1

3. Import the key and set the caps:

ceph auth add mds.{id} osd "allow rwx" mds "allow" mon "allow profile mds" -i /var/lib/ceph/mds/{cluster}-{id}/keyring

EG:

ceph auth add mds.local-node-1 osd "allow rwx" mds "allow" mon "allow profile mds" -i /var/lib/ceph/mds/ceph-local-node-1/keyring

4. Add the mds section to the configuration:

[root@local-node-1 ~]# cat /etc/ceph/ceph.conf
[global]
fsid = 7bd25f8d-b76f-4ff9-89ec-186287bbeaa5
public network = 10.0.0.0/24
mon initial members = local-node-1
mon host = 10.0.0.1
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd journal size = 1024
osd pool default size = 3
osd pool default min size = 2
osd pool default pg num = 333
osd pool default pgp num = 333
osd crush chooseleaf type = 1

[mds.local-node-1]    # add the configuration here
host = local-node-1

5. Manually start the service:

[root@local-node-1 ~]# ceph-mds --cluster ceph -i local-node-1 -m local-node-1:6789
If you started it as root, you need to pay attention to permission problems. It is better to change the ownership to ceph and manage the service with systemd:
chown -R ceph:ceph /var/lib/ceph/mds/
systemctl start ceph-mds@local-node-1
systemctl enable ceph-mds@local-node-1

6. Check whether the service has started:

[root@local-node-1 ~]# ps -ef | grep ceph-mds
ceph      2729     1  0 17:32 ?   00:00:00 /usr/bin/ceph-mds -f --cluster ceph --id local-node-1 --setuser ceph --setgroup ceph

[root@local-node-1 ~]# netstat -lntp | grep ceph-mds
tcp    0    0 10.0.0.1:6813    0.0.0.0:*    LISTEN    2729/ceph-mds
7. Check the status of the ceph cluster
[root@local-node-1 ~]# ceph -s
  cluster:
    id:     7bd25f8d-b76f-4ff9-89ec-186287bbeaa5
    health: HEALTH_OK
  services:
    mon: 1 daemons, quorum local-node-1
    mgr: ceph-mgr(active)
    osd: 3 osds: 3 up, 3 in
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 27 GiB / 30 GiB avail
    pgs:

[root@local-node-1 ~]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME              STATUS REWEIGHT PRI-AFF
-1       0.02939 root default
-3       0.02939     host local-node-1
 0   hdd 0.00980         osd.0              up  1.00000 1.00000
 1   hdd 0.00980         osd.1              up  1.00000 1.00000
 2   hdd 0.00980         osd.2              up  1.00000 1.00000

II. Create the ceph filesystem

1. Create pools
CephFS requires at least two RADOS pools, one to store data and the other to store metadata. When configuring these pools, we need to consider two issues:
Use a higher replication level for the metadata pool, because any data loss in this pool can make the entire file system inaccessible.
Use lower-latency storage, such as SSDs, for the metadata pool, because this directly affects the latency of file system operations observed on clients.
Create these two pools using the following commands:
ceph osd pool create cephfs_data <pg_num>
ceph osd pool create cephfs_metadata <pg_num>

EG:

[root@local-node-1 ~]# ceph osd pool create cephfs_data 64
[root@local-node-1 ~]# ceph osd pool create cephfs_metadata 64

2. Enable the cephfs file system:

[root@local-node-1 ~]# ceph fs new cephfs cephfs_metadata cephfs_data
new fs with metadata pool 2 and data pool 1
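To follow the earlier advice of keeping the metadata pool at a higher replication level, the pool parameters can be adjusted once the pools exist; a small sketch (the values are only an example, matching the osd pool default size of 3 used in this article):

ceph osd pool set cephfs_metadata size 3        # keep 3 copies of all metadata
ceph osd pool set cephfs_metadata min_size 2    # keep serving I/O as long as 2 copies are available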
View the file system status:
[root@local-node-1 ~]# ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data]

[root@local-node-1 ~]# ceph mds stat
cephfs-1/1/1 up {0=local-node-1=up:active}
If you have created more than one Ceph filesystem, you can set the default one with ceph fs set-default, so that clients do not have to specify which file system to mount.
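For example, assuming the file system created above is named cephfs (as in this article), the default could be set like this:

ceph fs set-default cephfs    # clients that do not specify a file system will mount this one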
3. Mount cephfs
== Mount using ceph-fuse ==
Here we demonstrate mounting cephfs on node-4 (10.0.0.4). First install the ceph-fuse client:
yum install ceph-fuse -y --disablerepo=epel    # hosts that are not part of the ceph cluster need the epel source to install the dependency packages
If the client to be mounted is not a member of the ceph cluster, you need to copy the keys and configuration files from the ceph cluster to the / etc/ceph directory:
[root@node-4 ~]# mkdir /etc/ceph
[root@local-node-2 ~]# scp /etc/ceph/ceph.conf 10.0.0.4:/etc/ceph/
[root@local-node-2 ~]# scp /etc/ceph/ceph.client.admin.keyring 10.0.0.4:/etc/ceph/
Mount cephfs:
ceph-fuse -m 10.0.0.2:6789 /mnt/cephfs
View the mount configuration:
# df -h | grep cephfs
ceph-fuse        26G     0   26G    0%   /mnt/cephfs
Testing shows that the files can be mounted and shared from any ceph cluster node:
# the specified mon must be healthy; if it is not active or standby, you cannot mount
ceph-fuse -m 10.0.0.3:6789 /mnt/cephfs

[root@local-node-2 cephfs]# ceph -s
  cluster:
    id:     7bd25f8d-b76f-4ff9-89ec-186287bbeaa5
    health: HEALTH_WARN
            1/3 mons down, quorum local-node-2,local-node-3
  services:
    mon: 3 daemons, quorum local-node-2,local-node-3, out of quorum: local-node-1
    mgr: ceph-mgr(active)
    mds: cephfs-1/1/1 up {0=local-node-1=up:active}
    osd: 9 osds: 9 up, 9 in
  data:
    pools:   2 pools, 128 pgs
    objects: 24 objects, 11 KiB
    usage:   9.1 GiB used, 81 GiB / 90 GiB avail
    pgs:     128 active+clean
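A simple way to verify the sharing is to write a file through one mount point and read it back through another; a sketch using the two clients shown in this article:

# on node-4, mounted through 10.0.0.2
echo "hello cephfs" > /mnt/cephfs/test.txt

# on local-node-2, mounted through 10.0.0.3
cat /mnt/cephfs/test.txt      # should print "hello cephfs"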
== Mount using the kernel driver ==
When using the kernel driver, there are certain requirements on the system's kernel version. If you use Ceph's latest tunables profile jewel (the adjustable CRUSH parameters), the official recommendation is a 4.14 or 4.9 kernel. If the kernel is older than 4.5, a mount error will occur:
local-node-1 kernel: libceph: mon0 10.0.0.2:6789 feature set mismatch, my 107b84a842aca < server's 40107b84a842aca, missing 40000000000000
local-node-1 kernel: libceph: mon0 10.0.0.2:6789 missing required protocol features
For more information on kernel version support, please refer to the official documentation.
If this is the case, it is recommended to use ceph-fuse; alternatively, you can switch the crush tunables to an earlier profile (the default is "default", which is actually jewel) with the following command:
ceph osd crush tunables hammer

# ceph osd crush show-tunables
{
    "choose_local_tries": 0,
    "choose_local_fallback_tries": 0,
    "choose_total_tries": 50,
    "chooseleaf_descend_once": 1,
    "chooseleaf_vary_r": 1,
    "chooseleaf_stable": 0,
    "straw_calc_version": 1,
    "allowed_bucket_algs": 54,
    "profile": "hammer",
    "optimal_tunables": 0,
    "legacy_tunables": 0,
    "minimum_required_version": "hammer",
    "require_feature_tunables": 1,
    "require_feature_tunables2": 1,
    "has_v2_rules": 0,
    "require_feature_tunables3": 1,
    "has_v3_rules": 0,
    "has_v4_buckets": 1,
    "require_feature_tunables5": 0,
    "has_v5_rules": 0
}
The hammer profile is supported by kernel version 4.1 or later.
You can mount in either of the following two ways:
# method 1:
[root@local-node-1 ~]# mount -t ceph 10.0.0.2:6789:/ /mnt -o name=admin,secret=AQDo1aVcQ+Z0BRAAENyooUgFgokkjw9hBUOseg==

# method 2:
[root@local-node-1 ~]# mount -t ceph 10.0.0.2:6789:/ /mnt -o name=admin,secretfile=/tmp/keyring    # the keyring file contains only the key and no other parameters
If the mount fails, you need to check whether the mon and mds services are normal.
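A few checks that may help when the mount fails (a sketch; the unit and daemon names follow the ones used in this article):

systemctl status ceph-mon@local-node-1 ceph-mds@local-node-1    # are the mon and mds daemons running?
ceph -s            # is there a mon quorum and an active mds?
ceph mds stat      # the mds should report up:active
dmesg | tail       # kernel-client errors such as "feature set mismatch" show up here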
III. Remove CephFS

1. Stop the mds service:

# ceph mds stat
cephfs-1/1/1 up {0=local-node-1=up:creating}

ceph mds fail local-node-1    # or use systemctl stop ceph-mds@local-node-1

2. Delete CephFS:

# list the current CephFS
ceph fs ls

# delete CephFS
ceph fs rm cephfs --yes-i-really-mean-it

# ceph fs ls
No filesystems enabled

3. Delete the pools:

# ceph osd lspools
3 cephfs_data
4 cephfs_metadata

ceph osd pool delete cephfs_data cephfs_data --yes-i-really-really-mean-it
ceph osd pool delete cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it

4. Confirm the status after deletion:

# ceph -s
  cluster:
    id:     7bd25f8d-b76f-4ff9-89ec-186287bbeaa5
    health: HEALTH_OK
  services:
    mon: 1 daemons, quorum local-node-1
    mgr: ceph-mgr(active)
    osd: 3 osds: 3 up, 3 in
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 27 GiB / 30 GiB avail
    pgs: