I. Preparatory work
Ceph version: v13.2.5 (Mimic, stable release)
1. Prepare a storage pool on Ceph

[root@ceph-node1 ceph]# ceph osd pool create k8s 128 128
pool 'k8s' created
[root@ceph-node1 ceph]# ceph osd pool ls
k8s

2. Prepare a K8s client account on Ceph
This environment simply uses Ceph's admin account. In production, of course, each functional client should be given its own account, for example:
ceph auth get-or-create client.k8s mon 'allow r' osd 'allow rwx pool=k8s' -o ceph.client.k8s.keyring
Obtain the key of the account:
[root@ceph-node1 ceph]# ceph auth get-key client.admin | base64
QVFDMmIrWmNEL3JTS2hBQWwwdmR3eGJGMmVYNUM3SjdDUGZZbkE9PQ==
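If you go with the dedicated client.k8s account created above instead of admin, its key is obtained the same way (a sketch; the rest of this walkthrough sticks with admin):

[root@ceph-node1 ceph]# ceph auth get-key client.k8s | base64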
3. Provide the rbd command for controller-manager

When a PV is created dynamically through a StorageClass, controller-manager creates the corresponding image on Ceph, so the rbd command has to be available to it.
(1) If the cluster was deployed with kubeadm: the official controller-manager image does not ship the rbd command, so we deploy an external provisioner instead:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["kube-dns", "coredns"]
    verbs: ["list", "get"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
subjects:
  - kind: ServiceAccount
    name: rbd-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: rbd-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: rbd-provisioner
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rbd-provisioner
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: rbd-provisioner
subjects:
  - kind: ServiceAccount
    name: rbd-provisioner
    namespace: default
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: rbd-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: rbd-provisioner
    spec:
      containers:
        - name: rbd-provisioner
          image: quay.io/external_storage/rbd-provisioner:latest
          env:
            - name: PROVISIONER_NAME
              value: ceph.com/rbd
      serviceAccount: rbd-provisioner
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rbd-provisioner
kubectl apply -f rbd-provisioner.yaml
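Once applied, a quick way to confirm the provisioner pod is up (a sketch; the label comes from the Deployment template above):

kubectl get pods -l app=rbd-provisioner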
Note: the rbd-provisioner image should match the Ceph version. The latest image is used here; according to the project's notes, it supports the Ceph Mimic release.
(2) If the cluster was deployed from binaries, you can install ceph-common directly on the master node.
YUM source:
[Ceph]
name=Ceph packages for $basearch
baseurl=http://download.ceph.com/rpm-mimic/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-mimic/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1

[ceph-source]
name=Ceph source packages
baseurl=http://download.ceph.com/rpm-mimic/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1
# Install the client
yum -y install ceph-common-13.2.5
# Copy the keyring file
Copy Ceph's ceph.client.admin.keyring file to the /etc/ceph directory on the master node.
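A minimal way to do that from a Ceph node (a sketch; the target hostname and SSH access are assumptions based on the masters seen later in this article):

scp /etc/ceph/ceph.client.admin.keyring root@k8s-master03:/etc/ceph/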
4. Provide the rbd command for kubelet
When creating a pod, kubelet uses the rbd command to detect and map the Ceph image behind the PV, so install the Ceph client (ceph-common-13.2.5) on all worker nodes, for example as sketched below.
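A minimal sketch, assuming the two worker nodes from this environment, passwordless SSH, and the Ceph yum repo above configured on each worker:

for node in k8s-node01 k8s-node02; do
  ssh root@$node "yum -y install ceph-common-13.2.5 && rbd --version"
done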
II. Trying Ceph RBD storage on K8s

1. Create a storage class

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ceph-sc
  namespace: default
  annotations:
    storageclass.kubernetes.io/is-default-class: "false"
provisioner: ceph.com/rbd
reclaimPolicy: Retain
parameters:
  monitors: 172.16.1.31:6789,172.16.1.32:6789
  adminId: admin
  adminSecretName: storage-secret
  adminSecretNamespace: default
  pool: k8s
  fsType: xfs
  userId: admin
  userSecretName: storage-secret
  imageFormat: "2"
  imageFeatures: "layering"
kubectl apply -f storage_class.yaml
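An optional sanity check that the class registered (a sketch):

kubectl get storageclass ceph-sc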
2. Provide a secret for the storage class

apiVersion: v1
kind: Secret
metadata:
  name: storage-secret
  namespace: default
data:
  key: QVFDMmIrWmNEL3JTS2hBQWwwdmR3eGJGMmVYNUM3SjdDUGZZbkE9PQ==
type: kubernetes.io/rbd
kubectl apply -f storage_secret.yaml
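To confirm that the stored key matches the base64 output obtained earlier with ceph auth get-key (a sketch):

kubectl get secret storage-secret -o jsonpath='{.data.key}'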
Note: the provisioner value in the StorageClass must be identical to the PROVISIONER_NAME configured for rbd-provisioner, as recapped below.
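For reference, the two values that have to line up (both taken from the manifests above):

# In rbd-provisioner.yaml (Deployment):
env:
  - name: PROVISIONER_NAME
    value: ceph.com/rbd

# In storage_class.yaml (StorageClass):
provisioner: ceph.com/rbd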
3. Create a PVC

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-pvc
  namespace: default
spec:
  storageClassName: ceph-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
kubectl apply -f storage_pvc.yaml
# After the PVC is created, the PV is created automatically:
[root@k8s-master03 ceph]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                  STORAGECLASS   REASON   AGE
pvc-315991e9-7d4b-11e9-b6cc-0050569ba238   1Gi        RWO            Retain           Bound    default/ceph-sc-test   prom-sc                 13h
# Normally, the PVC is also in Bound status:
[root@k8s-master03 ceph]# kubectl get pvc
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
ceph-sc-test   Bound    pvc-315991e9-7d4b-11e9-b6cc-0050569ba238   1Gi        RWO            prom-sc        17s

4. Create a test application

apiVersion: v1
kind: Pod
metadata:
  name: ceph-pod1
spec:
  nodeName: k8s-node02
  containers:
    - name: nginx
      image: nginx:1.14
      volumeMounts:
        - name: ceph-rdb-vol1
          mountPath: /usr/share/nginx/html
          readOnly: false
  volumes:
    - name: ceph-rdb-vol1
      persistentVolumeClaim:
        claimName: ceph-pvc
kubectl apply -f storage_pod.yaml
# View pod status
[root@k8s-master03 ceph]# kubectl get pods -o wide
NAME        READY   STATUS    RESTARTS   AGE     IP            NODE         NOMINATED NODE   READINESS GATES
ceph-pod1   1/1     Running   0          3d23h   10.244.4.75   k8s-node02
# Enter the container and check the mount; rbd has been mounted at the /usr/share/nginx/html directory.
[root@k8s-master03 ceph]# kubectl exec -it ceph-pod1 -- /bin/bash
root@ceph-pod1:/# df -hT
/dev/rbd0      xfs     1014M   33M   982M   4%   /usr/share/nginx/html
# Add a test file under the mount directory
root@ceph-pod1:/# cat /usr/share/nginx/html/index.html
hello ceph!
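The article doesn't show the step that created index.html; inside the container it would be something like this (a sketch):

root@ceph-pod1:/# echo 'hello ceph!' > /usr/share/nginx/html/index.html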
# On Ceph, check which node has mapped the corresponding image. Currently it is 172.16.1.22, i.e. k8s-node02.
[root@ceph-node1 ~]# rbd status k8s/kubernetes-dynamic-pvc-2410765c-7dec-11e9-aa80-26a98c3bc9e4
Watchers:
        watcher=172.16.1.22:0/264870305 client.24553 cookie=18446462598732840961
# Then delete this pod
[root@k8s-master03 ceph]# kubectl delete -f storage_pod.yaml
pod "ceph-pod1" deleted
# Modify the manifest file storage_pod.yaml to schedule the pod to k8s-node01 (see the sketch below), then apply it again.
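The only change needed in storage_pod.yaml is the scheduling line (a sketch):

spec:
  nodeName: k8s-node01   # was k8s-node02

kubectl apply -f storage_pod.yaml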
# Checking the pod status afterwards shows that the pod is now running on k8s-node01.
[root@k8s-master01 ~]# kubectl get pods -o wide
NAME        READY   STATUS    RESTARTS   AGE   IP            NODE         NOMINATED NODE   READINESS GATES
ceph-pod1   1/1     Running   0          34s   10.244.3.28   k8s-node01
# Check the image mapping on Ceph again. It is now 172.16.1.21, i.e. k8s-node01.
[root@ceph-node1 ~]# rbd status k8s/kubernetes-dynamic-pvc-2410765c-7dec-11e9-aa80-26a98c3bc9e4
Watchers:
        watcher=172.16.1.21:0/1812501701 client.114340 cookie=18446462598732840963
# Enter the container and confirm the file is still there, which shows the pod reattached to the original image after switching nodes.
[root@k8s-master03 ceph]# kubectl exec -it ceph-pod1 -- /bin/bash
root@ceph-pod1:/# cat /usr/share/nginx/html/index.html
hello ceph!