

Rancher (2): Building and Configuring Ceph RBD for K8s Persistent Storage


1. Configure the hosts and install NTP (optional)
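A minimal sketch of this step, assuming two CentOS 7 nodes named yj-ceph2 and yj-ceph3 with the addresses that appear in ceph.conf later (192.168.10.211 and 192.168.10.212):

# Run on every node: make the hostnames resolvable
cat >> /etc/hosts <<EOF
192.168.10.211 yj-ceph2
192.168.10.212 yj-ceph3
EOF
# Optional: install NTP so the monitors stay in sync
yum install -y ntp ntpdate
systemctl enable ntpd
systemctl start ntpd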

2. Configure passwordless SSH
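One common way to do this, assuming you deploy from yj-ceph2 as root:

# Generate a key pair on the deploy node
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
# Push the public key to every node (including the local one)
ssh-copy-id root@yj-ceph2
ssh-copy-id root@yj-ceph3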

3. Configure the Ceph yum repository

vim /etc/yum.repos.d/ceph.repo

[ceph]
name=ceph
baseurl=http://mirrors.cloud.tencent.com/ceph/rpm-luminous/el7/x86_64/
gpgcheck=0
priority=1

[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.cloud.tencent.com/ceph/rpm-luminous/el7/noarch/
gpgcheck=0
priority=1

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.cloud.tencent.com/ceph/rpm-luminous/el7/SRPMS
enabled=0
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.cloud.tencent.com/ceph/keys/release.asc
priority=1

4. Install Ceph-deploy

yum update
yum install ceph-deploy

5. Install and initialize the cluster

If an error is reported during installation, you can clear the configuration with the following commands:

ceph-deploy purgedata {ceph-node} [{ceph-node}]
ceph-deploy forgetkeys

The following command also removes the Ceph packages:

ceph-deploy purge {ceph-node} [{ceph-node}]

Then create a working directory and initialize a new cluster:

mkdir -p /root/cluster
cd /root/cluster/
ceph-deploy new yj-ceph2

If you hit the following error:

Traceback (most recent call last):
  File "/usr/bin/ceph-deploy", line 18, in <module>
    from ceph_deploy.cli import main
  File "/usr/lib/python2.7/site-packages/ceph_deploy/cli.py", line 1, in <module>
    import pkg_resources
ImportError: No module named pkg_resources

Install the missing module:

yum install python-setuptools

Change the default replica count in the Ceph configuration file from 3 to 2 so that the cluster can reach the active + clean state with only two OSDs:

vim ceph.conf

[global]
fsid = 8764fad7-a8f0-4812-b4db-f1a65af66e4a
mon_initial_members = ceph2,ceph3
mon_host = 192.168.10.211,192.168.10.212
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 2
mon clock drift allowed = 5
mon clock drift warn backoff = 30

Then install Ceph on the nodes, create the initial monitors, and create an OSD on each node:

ceph-deploy install yj-ceph2 yj-ceph3
ceph-deploy mon create-initial
ceph-deploy osd create --data /dev/vdb yj-ceph2
ceph-deploy osd create --data /dev/vdb yj-ceph3

Use ceph-deploy to copy the configuration file and admin key to the admin node and the Ceph nodes, so that you do not need to specify the monitor address and ceph.client.admin.keyring every time you run a Ceph command:

ceph-deploy admin yj-ceph2 yj-ceph3
ceph osd tree
ceph-deploy mgr create yj-ceph2
ceph health
ceph -s

A Ceph cluster can have multiple pools. Each pool is a logical isolation unit, and different pools can have completely different data-handling policies: replica size, placement groups, CRUSH rules, snapshots, owners, and so on.

Usually, before creating a pool, you need to override the default pg_num. The official recommendation:

- Fewer than 5 OSDs: set pg_num to 128
- 5 to 10 OSDs: set pg_num to 512
- 10 to 50 OSDs: set pg_num to 4096
- More than 50 OSDs: use the pgcalc tool to work out a value

Set the defaults in ceph.conf:

osd pool default pg num = 128

osd pool default pgp num = 128

ceph osd pool create k8s-pool 128 128
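On Luminous, ceph health warns about pools that have no application tag, so it is worth tagging the new pool for RBD use; a quick sketch:

# Tag the pool for RBD and verify the pg settings took effect
ceph osd pool application enable k8s-pool rbd
ceph osd pool get k8s-pool pg_num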

The admin key needs to be stored in Kubernetes as a Secret, preferably in the default namespace.

ceph auth get-key client.admin | base64

Replace the key below with the value obtained.

vim ceph-secret-admin.yaml

apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret-admin
type: "kubernetes.io/rbd"
data:
  key: QVFBTHhxxxxxxxxxxFpRQmltbnBDelRkVmc9PQ==

kubectl apply -f ceph-secret-admin.yaml
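Later steps assume a StorageClass ("SC") already exists. A minimal kubernetes.io/rbd StorageClass consuming this secret might look like the sketch below; the monitor addresses and pool name come from the earlier steps, and reusing the admin secret for both provisioning and mounting is a simplifying assumption (a dedicated client.kube user is more common):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd
provisioner: kubernetes.io/rbd
parameters:
  monitors: 192.168.10.211:6789,192.168.10.212:6789
  adminId: admin
  adminSecretName: ceph-secret-admin
  adminSecretNamespace: default
  pool: k8s-pool
  # Assumption: mount with the admin user too; swap in client.kube if you created one
  userId: admin
  userSecretName: ceph-secret-admin
  fsType: ext4
  imageFormat: "2"
  imageFeatures: layering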

Rancher error:

MountVolume.SetUp failed for volume "pvc-a2754739-cf6f-11e7-a7a5-02e985942c89":
rbd: map failed exit status 2
2017-11-22 12:35:53.503224 7f0753c66100 -1 did not load config file, using default settings.
libkmod: ERROR ../libkmod/libkmod.c:586 kmod_search_moddep: could not open moddep file '/lib/modules/4.9.45-rancher/modules.dep.bin'
modinfo: ERROR: Module alias rbd not found.
modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could not open moddep file '/lib/modules/4.9.45-rancher/modules.dep.bin'
modprobe: FATAL: Module rbd not found in directory /lib/modules/4.9.45-rancher
rbd: failed to load rbd kernel module (1)
rbd: sysfs write failed
In some cases useful info is found in syslog - try "dmesg | tail" or so.
rbd: map failed: (2) No such file or directory

Each new node needs the Ceph client installed and the Ceph configuration already in place:

yum install ceph-common

Configure the users by copying the keyrings and configuration file to /etc/ceph/:

ceph.client.admin.keyring
ceph.client.kube.keyring
ceph.client.test.keyring
ceph.conf
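A sketch of distributing those files from an existing Ceph node (the target hostname new-node is a placeholder):

# Run on a node that already has the keyrings and config
scp /etc/ceph/ceph.conf /etc/ceph/ceph.client.*.keyring root@new-node:/etc/ceph/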

Because the container cannot access /lib/modules, you need to add the following to the RKE cluster configuration:

services:
  etcd:
    backup_config:
      enabled: true
      interval_hours: 6
      retention: 60
  kubelet:
    extra_binds:
      - "/lib/modules:/lib/modules"

Then apply the change with:

rke up --config rancher-cluster.yml

Another problem when Rancher uses Ceph: the StorageClass was set up and a Deployment was created with a PVC, but it failed with a "Ceph map failed" error. It turned out that each node had to run an rbd map by hand once; after that the error no longer appeared:

rbd create foo --size 1024 --image-feature=layering -p test
rbd map foo -p test

Ceph RBD expansion:

Find the ID of the image that needs to be expanded, then resize it on Ceph:

rbd resize --size 2048 kubernetes-dynamic-pvc-572a74e9-db6a-11e9-9b3a-525400e65297 -p test

Then modify the PV configuration to the corresponding size and restart the affected container.
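For example, to match the 2048 MB resize above, patching the PV might look like this; the PV name here is hypothetical, so take the real one from kubectl get pv:

# Hypothetical PV name; find the real one with `kubectl get pv`
kubectl patch pv pvc-572a74e9-db6a-11e9-9b3a-525400e65297 -p '{"spec":{"capacity":{"storage":"2Gi"}}}'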
