2025-03-28 Update From: SLTechnology News&Howtos (shulou)
Shulou (Shulou.com) 06/03 Report --
kubeadm k8s: configuring a Ceph RBD StorageClass
A Kubernetes StorageClass allows PVCs to be provisioned and bound dynamically.
First install Ceph; see blog.csdn.net/qq_42006894/article/details/88424199
1. Deploy the rbd-provisioner
git clone https://github.com/kubernetes-incubator/external-storage.git
cd external-storage/ceph/rbd/deploy
NAMESPACE=ceph
sed -r -i "s/namespace: [^ ]+/namespace: $NAMESPACE/g" ./rbac/clusterrolebinding.yaml ./rbac/rolebinding.yaml
kubectl -n $NAMESPACE apply -f ./rbac
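The sed rewrite above can be previewed on a sample line before touching the real files; a minimal sketch (the sample line is illustrative, not taken from the repo):

```shell
# Sample line as it might appear in the upstream rbac manifests
line='  namespace: default'

# Same substitution the sed -r -i command applies in place
printf '%s\n' "$line" | sed -r 's/namespace: [^ ]+/namespace: ceph/g'
# → "  namespace: ceph"
```

This confirms the pattern replaces whatever namespace is currently set, leaving indentation intact.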
2. Create the secrets and pool
### Create the admin secret
ceph auth get client.admin 2>&1 | grep "key = " | awk '{print $3}' | xargs echo -n > /tmp/key
kubectl create secret generic ceph-admin-secret --from-file=/tmp/key --namespace=ceph --type=kubernetes.io/rbd
### Create a ceph osd pool
ceph osd pool create kube 128 128
ceph auth add client.kube mon 'allow r' osd 'allow rwx pool=kube'
### Create the user secret
ceph auth get-key client.kube > /tmp/key1
kubectl create secret generic ceph-secret --from-file=/tmp/key1 --namespace=ceph --type=kubernetes.io/rbd
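The key-extraction pipeline in this step can be sanity-checked without a live cluster by feeding it simulated `ceph auth get` output (the key below is a made-up placeholder, not a real Ceph key):

```shell
# Simulated output of `ceph auth get client.admin` (placeholder key)
auth_output='[client.admin]
	key = AQBexampleKEYonly==
	caps mon = "allow *"'

# Same pipeline as above: isolate the bare key with no trailing newline
printf '%s\n' "$auth_output" | grep "key = " | awk '{print $3}' | xargs echo -n > /tmp/key
cat /tmp/key
# → AQBexampleKEYonly==
```

The `xargs echo -n` step matters: a trailing newline in the file would end up inside the secret and break RBD authentication.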
3. Create the StorageClass
cat ceph-sc-ceph.yml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd-ceph
  namespace: ceph
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: ceph.com/rbd
parameters:
  monitors: 172.16.13.198:6789,172.16.13.199:6789,172.16.13.200:6789
  adminId: admin
  adminSecretName: ceph-admin-secret
  adminSecretNamespace: ceph
  pool: kube
  userId: kube
  userSecretName: ceph-secret
  userSecretNamespace: ceph
  fsType: ext4
  imageFormat: "2"
  imageFeatures: "layering"
Note: pool and userId must match the pool (kube) and client (client.kube) created in step 2, and ceph-secret must hold that client's key.
4. Configure RBAC for the rbd-provisioner
cd external-storage/ceph/rbd/deploy
Only the default namespace in these files needs to be changed (the sed in step 1 already does this).
cat clusterrolebinding.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner1
subjects:
- kind: ServiceAccount
  name: rbd-provisioner1
  namespace: ceph
roleRef:
  kind: ClusterRole
  name: rbd-provisioner1
  apiGroup: rbac.authorization.k8s.io
cat clusterrole.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner1
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "update", "patch"]
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["kube-dns","coredns"]
  verbs: ["list", "get"]
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
cat deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: rbd-provisioner1
  namespace: ceph
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: rbd-provisioner1
    spec:
      containers:
      - name: rbd-provisioner1
        image: "quay.io/external_storage/rbd-provisioner:latest"
        env:
        - name: PROVISIONER_NAME
          value: ceph.com/rbd
      serviceAccount: rbd-provisioner1
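A PVC will stay Pending forever if the PROVISIONER_NAME env value in the deployment does not exactly match the `provisioner:` field of the StorageClass; a trivial sanity check using the values from this article's manifests:

```shell
deploy_provisioner="ceph.com/rbd"   # env PROVISIONER_NAME in deployment.yaml
sc_provisioner="ceph.com/rbd"       # provisioner: field in ceph-sc-ceph.yml

# The provisioner only picks up claims whose StorageClass names it exactly
if [ "$deploy_provisioner" = "$sc_provisioner" ]; then
  echo "provisioner names match"
else
  echo "MISMATCH: PVCs for this StorageClass will stay Pending"
fi
# → provisioner names match
```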
cat rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rbd-provisioner1
  namespace: ceph
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: rbd-provisioner1
subjects:
- kind: ServiceAccount
  name: rbd-provisioner1
  namespace: ceph
cat role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: rbd-provisioner1
  namespace: ceph
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
cat serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rbd-provisioner1
  namespace: ceph
kubectl apply -f .
5. Create a PVC
cat ceph-pv-ceph.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim-ceph
  namespace: ceph
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: ceph-rbd-ceph
  resources:
    requests:
      storage: 10Gi
kubectl apply -f ceph-pv-ceph.yml
Check the PVC; a Bound status indicates success:
#kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
claim1 Bound pvc-ff3e450b-4629-11e9-9740-080027a073ff 1Gi RWO rbd 51m
If there is an error, troubleshoot it with:
kubectl describe pvc/claim1
kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
mysql-wayne-8744474d7-kz29c 1/1 Running 0 5d19h 10.244.6.85 node2
rbd-provisioner-67b4857bcd-8vsb8 1/1 Running 0 63m 10.244.3.7 node04
# The rbd-provisioner pod runs on node04, so view its container log on node04:
docker logs -f 6ed00b76cb55 # 6ed00b76cb55 is the id of the container (docker ps view)
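Once the PVC is Bound, any pod in the same namespace can mount it by claim name; a minimal sketch (the pod name, image, and mount path are illustrative, not from the article):

```
apiVersion: v1
kind: Pod
metadata:
  name: rbd-test-pod            # hypothetical name
  namespace: ceph
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /data          # illustrative mount path
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test-claim-ceph   # the PVC created above
```

Writing a file under /data and deleting/recreating the pod is a quick way to confirm the RBD image persists independently of the pod.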
6. Pitfalls encountered
error retrieving resource lock default/ceph.com-rbd: endpoints "ceph.com-rbd" is forbidden: User "system:serviceaccount:default:rbd-provisioner" cannot get endpoints in the namespace "default"
Fix: append the following rule to the ClusterRole in rbd-provisioner.yaml:
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
Other references: blog.csdn.net/qq_34857250/article/details/82562514
7. References
Reference: github.com/kubernetes-incubator/external-storage/tree/master/ceph/rbd
PS: To set the default StorageClass:
kubectl patch storageclass ceph-rbd -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
Error message:
MountVolume.WaitForAttach failed for volume "pvc-23555713-da9e-11e9-931b-000c29158017" : fail to check rbd image status with: (executable file not found in $PATH), rbd output: ()
Solution: execute on every k8s node:
yum install -y ceph-common
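After installing ceph-common, it is worth confirming the binary the kubelet needs is actually on PATH; a small sketch:

```shell
# The kubelet shells out to `rbd` during MountVolume.WaitForAttach,
# so the binary must resolve on every node that can run RBD-backed pods.
if command -v rbd >/dev/null 2>&1; then
  echo "rbd found at $(command -v rbd)"
else
  echo "rbd missing - install ceph-common on this node"
fi
```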