
Rook Notes


1. Rook Basic Concepts

Basic concept: Rook is a cloud-native storage orchestrator.

Details: Rook performs all subsequent operations through an operator; the user only declares the desired state. Rook watches for state changes and pushes the corresponding configuration to the cluster to bring it into effect. Traditionally a cluster administrator has to configure and monitor the storage system by hand, whereas with Rook storage operations are fully automated. Rook runs as a Kubernetes extension via third-party (custom) resources.
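To make the declarative model concrete, here is a minimal sketch of a Cluster custom resource in the rook.io/v1alpha1 API used throughout these notes; the operator reads it and drives the cluster toward this state (the full, commented manifest appears in section 8 below):

apiVersion: rook.io/v1alpha1
kind: Cluster
metadata:
  name: rook
  namespace: rook
spec:
  backend: ceph                    # storage backend
  dataDirHostPath: /var/lib/rook   # where configuration is cached on each host
  monCount: 3                      # desired (odd) number of mons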

2. Rook Basic Components

Rook operator: a simple container that can bootstrap and monitor a storage cluster to provide basic RADOS storage. It manages the CRDs, object stores, and file systems.
Rook agent: a pod deployed on every Kubernetes node. Each agent ships a FlexVolume plugin that integrates with Kubernetes' volume controller framework and handles all storage operations required on the node, such as attaching network storage devices, mounting volumes, and formatting file systems.
Discover: periodically scans each node for newly added devices.

3. Implementation Principle

Rook is built on the FlexVolume storage plugin mechanism.
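A quick way to confirm these components are running (the rook-system namespace follows the cleanup section later in these notes; pod names vary by version):

# operator, agent (one per node) and discover pods
kubectl -n rook-system get pods -o wide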

4. Startup Process

5. Rook Control Flow

1. The rook operator runs and starts a rook agent pod on each node.
2. A PVC is created that specifies a storageclass whose provisioner is rook.io/block.
3. The operator's provision() function is called and creates the block image in the cluster. At this point the Provision phase is complete and the PVC/PV are considered bound.
4. When a pod using the PVC is created, kubelet calls the mount function of the Rook FlexVolume driver to use the provisioned storage.
5. The agent creates a volume as described by the CRD and attaches it to the physical machine.
6. The agent maps the volume to the local machine and updates the CRD status with the device path (for example /dev/rbd0).
7. Control then passes to the driver; if the mapping succeeded, the driver mounts the device at the specified path. If a file system type is also given in the configuration, the driver formats the volume as well.
8. The driver reports to kubelet that the Mount() operation succeeded.

6. Create Rook

For an experimental environment you can use play-with-k8s. As emphasized earlier, delete /var/lib/rook first.

cd cluster/examples/kubernetes/ceph
# operator.yaml mainly defines the custom resource definitions (CRDs): Cluster, Filesystem,
# ObjectStore, Pool, Volume; it also creates the three Rook components: Operator, Agent, Discover.
kubectl create -f operator.yaml
# cluster.yaml mainly creates a Cluster (Ceph cluster) custom resource (CR)
kubectl create -f cluster.yaml

If errors occur, two fixes that have worked:

Fix 1: make the operator's FlexVolume directory consistent with kubelet's --volume-plugin-dir parameter:

vim operator.yaml
- name: FLEXVOLUME_DIR_PATH
  value: "/usr/libexec/kubernetes/kubelet-plugins/volume/exec"

Fix 2: for "exit status 2. rbd: error opening pool 'replicapool': (2) No such file or directory" (I hit a similar problem): add the following to rook-operator.yml and make sure it matches kubelet's --volume-plugin-dir, then re-apply, restart kubelet, and delete the rook-agent and rook-operator pods so they are rebuilt automatically. After about 10 minutes my error disappeared.

env:
- name: FLEXVOLUME_DIR_PATH
  value: "/var/lib/kubelet/volumeplugins"

--volume-plugin-dir=/var/lib/kubelet/volumeplugins

7. Clean Rook

- Clean up the rook-system namespace, which contains the Rook operator and agent.
- Clean up the rook namespace, which contains the Rook storage cluster (the Cluster CRD).
- Remove the /var/lib/rook directory on each node, where the ceph mons and osds cache their configuration.

kubectl delete -n rook pool replicapool
kubectl delete storageclass rook-block
kubectl delete -n kube-system secret rook-admin
kubectl delete -f kube-registry.yaml
# delete the Cluster CRD
kubectl delete -n rook cluster rook
# after the Cluster CRD is deleted, delete the Rook operator and agent
kubectl delete thirdpartyresources cluster.rook.io pool.rook.io objectstore.rook.io filesystem.rook.io volumeattachment.rook.io # ignore errors if on K8s 1.7+
kubectl delete crd clusters.rook.io pools.rook.io objectstores.rook.io filesystems.rook.io volumeattachments.rook.io # ignore errors if on K8s 1.5 and 1.6
kubectl delete -n rook-system daemonset rook-agent
kubectl delete -f rook-operator.yaml
kubectl delete clusterroles rook-agent
kubectl delete clusterrolebindings rook-agent
# delete the namespaces
kubectl delete namespace rook
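A short sketch for verifying the cleanup, assuming the namespaces and host path above (the rm step must be run on every node):

# both namespaces should eventually report NotFound
kubectl get namespace rook rook-system
# on each node, remove the cached mon/osd configuration
sudo rm -rf /var/lib/rook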

8. Common Rook Resource Manifests Explained

Cluster.yaml

apiVersion: rook.io/v1alpha1
kind: Cluster
metadata:
  name: rook
  namespace: rook
spec:
  # storage backend; Ceph, NFS, etc. are supported
  backend: ceph
  # host directory in which configuration files are stored
  dataDirHostPath: /var/lib/rook
  # if true, use the host network instead of the container SDN (software-defined network)
  hostNetwork: false
  # number of mons to start; must be odd, between 1 and 9
  monCount: 3
  # controls how the various Rook services are scheduled by Kubernetes
  placement:
    # general rule; the per-service rules (api, mgr, mon, osd) override it
    all:
      # which nodes Rook pods may be scheduled on (by node label)
      nodeAffinity:
        # hard requirement; pods already running are not affected by a later change
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          # the node must carry the label role=storage-node
          - matchExpressions:
            - key: role
              operator: In
              values:
              - storage-node
      # which topology domains Rook pods may share with other running pods
      podAffinity:
      podAntiAffinity:
      # which node taints can be tolerated
      tolerations:
      - key: storage-node
        operator: Exists
    api:
      nodeAffinity:
      podAffinity:
      podAntiAffinity:
      tolerations:
    mgr:
      nodeAffinity:
      podAffinity:
      podAntiAffinity:
      tolerations:
    mon:
      nodeAffinity:
      tolerations:
    osd:
      nodeAffinity:
      podAffinity:
      podAntiAffinity:
      tolerations:
  # resource requests/limits for the various services
  resources:
    api:
      limits:
        cpu: "500m"
        memory: "1024Mi"
      requests:
        cpu: "500m"
        memory: "1024Mi"
    mgr:
    mon:
    osd:
  # cluster-level storage configuration; each node can override it
  storage:
    # whether all nodes are used for storage; must be false if the nodes list below is given
    useAllNodes: true
    # whether every device found on a node is automatically consumed by an OSD
    useAllDevices: false
    # regular expression selecting which devices may be consumed by OSDs. Examples:
    #   sdb       use only the device /dev/sdb
    #   ^sd.      use all /dev/sd* devices
    #   ^sd[a-d]  use sda, sdb, sdc, sdd
    # specify raw devices; Rook partitions them automatically but does not mount them
    deviceFilter: ^vd[b-c]
    # device used on each node to store OSD metadata; a low-read-latency device
    # (SSD/NVMe) for metadata can improve performance
    metadataDevice:
    # location of the cluster (e.g. region or data center), passed straight to the Ceph CRUSH map
    location:
    # OSD storage format configuration
    storeConfig:
      # filestore or bluestore; the default is bluestore, Ceph's newer storage engine.
      # bluestore manages raw devices directly, dropping local file systems such as ext4/xfs,
      # and uses user-space Linux AIO for direct device IO
      storeType: bluestore
      # for bluestore, a database of normal capacity needs a disk of 100GB+
      databaseSizeMB: 1024
      # for filestore, a journal of normal capacity needs a disk of 20GB+
      journalSizeMB: 1024
    # node directories to use for storage; using two directories on one physical
    # device hurts performance
    directories:
    - path: /rook/storage-dir
    # per-node configuration is also possible
    nodes:
    # node A
    - name: "172.17.4.101"
      directories:
      - path: "/rook/storage-dir"
      resources:
        limits:
          cpu: "500m"
          memory: "1024Mi"
        requests:
          cpu: "500m"
          memory: "1024Mi"
    # node B
    - name: "172.17.4.201"
      devices:
      - name: "sdb"
      - name: "sdc"
      storeConfig:
        storeType: bluestore
    - name: "172.17.4.301"
      deviceFilter: "^sd."
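A simple way to watch the cluster converge after applying the manifest, assuming the default rook namespace used above:

# after kubectl create -f cluster.yaml, watch the mon, mgr and osd pods come up
kubectl -n rook get pods -w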

Pool.yaml

apiVersion: rook.io/v1alpha1
kind: Pool
metadata:
  name: ecpool
  namespace: rook
spec:
  # replicated: every piece of data in the pool is stored as full copies
  replicated:
    # number of copies
    size: 3
  # erasure-coded: a Ceph erasure-coded pool uses less storage space than a replicated one
  erasureCoded:
    # number of data chunks per object
    dataChunks: 2
    # number of coding (parity) chunks per object
    codingChunks: 1
  crushRoot: default
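For reference, a block StorageClass (like the rook-block class removed in the cleanup section) ties a pool to the rook.io/block provisioner. A minimal sketch, assuming the ecpool defined above; parameter names vary between Rook versions, so treat this as illustrative:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-block
provisioner: rook.io/block
parameters:
  pool: ecpool    # the Pool created above
  fstype: xfs     # must match the data disk file system (see the storageclass section below)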

ObjectStore

apiVersion: rook.io/v1alpha1
kind: ObjectStore
metadata:
  name: my-store
  namespace: rook
spec:
  # metadata pool; only replication is supported
  metadataPool:
    replicated:
      size: 3
  # data pool; replication or erasure coding is supported
  dataPool:
    erasureCoded:
      dataChunks: 2
      codingChunks: 1
  # RGW daemon settings
  gateway:
    # S3 is supported
    type: s3
    # reference to a Kubernetes secret containing the TLS certificate
    sslCertificateRef:
    # port the RGW pods and service listen on
    port: 80
    securePort:
    # number of RGW pods behind the load balancer for this object store
    instances: 1
    # whether to start RGW on all nodes; if false, instances must be set
    allNodes: false
    placement:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: role
              operator: In
              values:
              - rgw-node
      tolerations:
      - key: rgw-node
        operator: Exists
      podAffinity:
      podAntiAffinity:
    resources:
      limits:
        cpu: "500m"
        memory: "1024Mi"
      requests:
        cpu: "500m"
        memory: "1024Mi"
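Once the object store is created, the RGW daemon should be exposed through a service on the port configured above; a quick, version-agnostic check is simply to list the services and look for the RGW entry:

# look for the RGW service for my-store listening on port 80
kubectl -n rook get svc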

Filesystem

apiVersion: rook.io/v1alpha1
kind: Filesystem
metadata:
  name: myfs
  namespace: rook
spec:
  # metadata pool
  metadataPool:
    replicated:
      size: 3
  # data pools
  dataPools:
  - erasureCoded:
      dataChunks: 2
      codingChunks: 1
  # MDS daemon settings
  metadataServer:
    # number of active MDS instances
    activeCount: 1
    # if true, the additional MDS instances are in active standby and keep a hot cache
    # of the file system metadata; if false, they are in passive standby
    activeStandby: true
    placement:
    resources:

9. PV and PVC Knowledge Supplement

PV access modes:

ReadWriteOnce (RWO): mounted read-write by a single node
ReadOnlyMany (ROX): mounted read-only by many nodes
ReadWriteMany (RWX): mounted read-write by many nodes

PV reclaim policies:

Retain: the volume is kept for manual reclamation
Recycle: basic scrub ("rm -rf /thevolume/*")
Delete: the associated backend storage volume is deleted as well, for backends such as AWS EBS, GCE PD or OpenStack Cinder
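A minimal PV/PVC pair showing where the access mode and reclaim policy are set; the names and the hostPath backend are purely illustrative:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce                  # RWO: read-write by a single node
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /tmp/example-pv          # simple local backend, for illustration only
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  storageClassName: ""             # disable dynamic provisioning so it binds to the PV above
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi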

PV status (phase):

Available: idle, not yet bound to a PVC
Bound: bound to a PVC
Released: the PVC has been deleted but the resource has not yet been reclaimed
Failed: automatic reclamation failed

The CLI shows the name of the PVC bound to each PV.

10. Roles of the Rook Components

Roles of the pods that run after Rook starts:

mon: monitoring component, responsible for monitoring the state of the Ceph cluster.
mgr: manager component; it mainly monitors the non-Paxos-related parts (such as PG statistics). It is stateless, exposes metrics that Prometheus can collect, and can serve the Ceph dashboard.
osd: OSD component, the core component of the cluster.
mds: metadata server. One or more MDS instances cooperate to manage the file system namespace and coordinate access to the shared OSD cluster. When you declare a shared file system in the cluster, Rook creates a metadata pool and a data pool for CephFS, creates the file system, and sets the number of active-standby MDS instances. The resulting file system can then be used by pods in the cluster.
rgw: provides a RESTful object storage interface for applications. Rook 1. creates a metadata pool and a data pool for the object store, 2. starts the RGW daemon, and 3. creates a service that provides a load-balanced address for the RGW daemon.
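To see these components as pods (the rook-ceph namespace matches the dashboard and filesystem commands below; exact pod names differ per deployment):

# mon, mgr, osd, mds and rgw pods all run in the cluster namespace
kubectl -n rook-ceph get pods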

11. Managing the Rook Cluster: Further Operations

Deploy the dashboard for management through a graphical interface

cd /rook/cluster/examples/kubernetes/ceph
kubectl apply -f dashboard-external-http.yaml
# view the exposed port number, then access the dashboard from a browser
kubectl get svc -n rook-ceph
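If the external service follows the upstream example's naming (an assumption here; confirm against the kubectl get svc output above), the NodePort can be read directly:

kubectl -n rook-ceph get svc rook-ceph-mgr-dashboard-external-http \
  -o jsonpath='{.spec.ports[0].nodePort}'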

Create a block storage StorageClass

1. Check the file system type of the Ceph data disk:
df -Th
2. Set the fstype parameter according to the file system type of the Ceph data disk:
vim storageclass.yaml
fstype: xfs
3. Create the StorageClass:
kubectl create -f storageclass.yaml
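A PVC requesting storage from the resulting class would look roughly like this; the claim name is hypothetical and rook-block matches the class deleted in the cleanup section, so adjust to your storageclass.yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rook-block-claim
spec:
  storageClassName: rook-block   # the StorageClass created above
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi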

Create a block storage usage example

cd /rook/cluster/examples/kubernetes
kubectl apply -f mysql.yaml
kubectl apply -f wordpress.yaml
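Both example apps request block storage through PVCs; once their volumes are provisioned, the claims and volumes should report Bound (see the PV status list in section 9):

kubectl get pvc
kubectl get pv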

Create a shared file system

cd /rook/cluster/examples/kubernetes/ceph
kubectl create -f filesystem.yaml
kubectl get filesystem -n rook-ceph
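Pods consume the shared file system through the Rook FlexVolume driver; the upstream kube-registry.yaml referenced in the cleanup section does exactly this. A rough sketch of such a pod follows, where the driver name and option keys depend on the Rook version and are assumptions here:

# illustrative pod mounting the shared file system; verify driver/options for your Rook version
apiVersion: v1
kind: Pod
metadata:
  name: cephfs-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: shared-fs
      mountPath: /data
  volumes:
  - name: shared-fs
    flexVolume:
      driver: ceph.rook.io/rook    # assumed FlexVolume driver name
      fsType: ceph
      options:
        fsName: myfs               # the Filesystem created above
        clusterNamespace: rook-ceph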

Feel free to discuss and explore these topics together. I hope this helps you understand Rook!

