Kubernetes 1.5 implements stateful containers through Ceph


In the previous post, we deployed sonarqube through a kubernetes Deployment and Service. It appears to work, but there is still a big problem: a database like mysql must not lose its data, yet once the container exits, everything stored inside it is gone. If our mysql-sonar container restarts, any settings we have made in sonarqube are lost. So we need a way to persist the data held in the mysql-sonar container. Kubernetes offers several options for persistent storage, including hostPath, nfs, flocker, glusterfs, rbd and so on. Here we use the rbd block storage provided by ceph to implement persistent storage for kubernetes.

To use ceph as storage, ceph must be installed first. Below is a brief walkthrough of installing ceph with ceph-deploy, starting with the running environment:

server-117: admin-node, mon-node, client-node
server-236: osd-node, mon-node
server-227: osd-node, mon-node

Configure the yum source on all machines:

wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
vim /etc/yum.repos.d/ceph.repo
[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/x86_64/
gpgcheck=0
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/
gpgcheck=0

Then configure ntp time synchronization on all machines (not covered in detail here).
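As a minimal sketch, assuming ntpd rather than chrony and that the default CentOS pool servers are reachable, the following could be run on every node:

yum install -y ntp
systemctl enable ntpd
systemctl start ntpd
ntpq -p    # verify that an upstream time source has been selected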

Configure passwordless ssh access from admin-node to the other nodes. For convenience I use root for the subsequent operations, but the official installation guide requires an ordinary account, and that account must not be named ceph, because ceph is the account the ceph daemons run as by default. For example:

useradd cephadmin
echo "cephadmin" | passwd --stdin cephadmin

All nodes need to give cephadmin passwordless sudo permission:

echo "cephadmin ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephadmin
sudo chmod 0440 /etc/sudoers.d/cephadmin
vim /etc/sudoers
Defaults:cephadmin !requiretty

Then use this account to set up passwordless ssh access between the nodes (a minimal sketch is given after the config example below), and finally edit ~/.ssh/config on admin-node. Example content:

Host server-117
    Hostname server-117
    User cephadmin
Host server-236
    Hostname server-236
    User cephadmin
Host server-227
    Hostname server-227
    User cephadmin
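The key distribution itself is not shown above; here is a minimal sketch, assuming it is run as cephadmin on admin-node and that the hostnames resolve:

ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa    # generate a key pair without a passphrase
ssh-copy-id cephadmin@server-117            # admin-node itself, so ceph-deploy can also ssh locally
ssh-copy-id cephadmin@server-236
ssh-copy-id cephadmin@server-227

After this, ssh server-236 and ssh server-227 should log in without prompting for a password.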

To deploy ceph, all of the following operations are done on admin-node:

Install ceph-deploy:

yum install -y ceph-deploy
mkdir ceph-cluster    # create a deployment directory; the necessary configuration files will be generated here
cd ceph-cluster

If ceph has been installed before, the official recommendation is to start from a clean environment:

ceph-deploy purgedata server-236 server-227
ceph-deploy forgetkeys
ceph-deploy purge server-236 server-227

Create a ceph cluster:

ceph-deploy new server-117 server-236 server-227    # server-117 here is a mon-node; multiple mon-nodes can be specified

After the above command is executed, some auxiliary files are generated in the current directory. The default content of ceph.conf is as follows:

[global]
fsid = 23078e5b-3f38-4276-b2ef-7514a7fc09ff
mon_initial_members = server-117
mon_host = 10.5.10.117
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

I added the following additional lines:

public_network = 10.5.10.0/24    # the public network the nodes use to communicate with each other
mon_clock_drift_allowed = 2      # allowed clock drift between mon nodes, here 2s
osd_pool_default_size = 2        # allow a minimum of two osds; the default is 3 and need not be changed if there are enough nodes
# The following three lines work around the "ERROR: osd init failed: (36) File name too long" error that occurs
# when the osd data disk uses an ext4 file system. Ceph officially recommends xfs for the storage disks,
# but in some situations only ext4 is available.
osd_max_object_name_len = 256
osd_max_object_namespace_len = 64
filestore_xattr_use_omap = true

Perform the installation of ceph:

ceph-deploy install server-117 server-236 server-227    # installs the ceph and ceph-radosgw packages on each node

Initialize mon-node:

ceph-deploy mon create-initial

After this completes, several *.keyring files appear in the current directory; they are required for secure access between the ceph components.

At this point, you can use ps -ef | grep ceph to view the running ceph-mon processes on each node:

ceph  31180  1  0 16:11 ?  00:00:04 /usr/bin/ceph-mon -f --cluster ceph --id server-117 --setuser ceph --setgroup ceph

Initialize osd-node:

Bringing up an osd-node is divided into two steps: prepare and activate. The osd-nodes are the nodes that actually store data, and we need to provide independent storage for ceph-osd, usually a dedicated disk, although a directory can also be used instead.

Create storage directories on both osd-nodes:

ssh server-236
mkdir /data/osd0
exit
ssh server-227
mkdir /data/osd1
exit

Do the following:

# prepare creates, in the two directories created above, the files needed by the subsequent activate step and by the osd at runtime
ceph-deploy osd prepare server-236:/data/osd0 server-227:/data/osd1
# activate activates the osd-nodes and starts the osd daemons
ceph-deploy osd activate server-236:/data/osd0 server-227:/data/osd1

After execution, an error similar to the following is usually thrown:

[WARNIN] 2016-11-04 14:15:14 error creating empty object store in /var/local/osd0: error creating empty object store in /var/local/osd0: (13) Permission denied
[ERROR] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR] RuntimeError: Failed to execute command: /usr/sbin/ceph-disk -v activate --mark-init upstart --mount /data/osd0

This is because ceph's daemons run as the ceph user by default, and that user has no permission on directories created as cephadmin or root.

So you need to do another authorization on osd-node:

server-236: chown ceph.ceph -R /data/osd0
server-227: chown ceph.ceph -R /data/osd1

Synchronize configuration files to each node:

ceph-deploy admin server-117 server-236 server-227

Note that if the configuration file is modified, you need to re-run the synchronization and then re-run the activate operation.

Check the cluster status with the following instructions:

ceph -s
ceph osd tree

Once the ceph cluster is installed, we create the rbd block devices that kubernetes will use for storage. Before a block device can be created, a storage pool is needed; ceph provides a default pool called rbd. Here we create a pool called kube dedicated to the block devices used by kubernetes. The following operations are performed on client-node:

ceph osd pool create kube 100 100    # the last two 100s are pg-num and pgp-num, respectively
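To confirm the pool exists and check its placement group settings, standard ceph commands can be used:

ceph osd lspools                  # the kube pool should appear in the list
ceph osd pool get kube pg_num     # should report pg_num: 100
ceph osd pool get kube pgp_num    # should report pgp_num: 100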

Create an image named mysql-sonar, 5GB in size, in the kube pool:

rbd create kube/mysql-sonar --size 5120 --image-format 2 --image-feature layering

Note that in my environment, if --image-feature layering is not specified here, the subsequent map step fails with the following error:

rbd: sysfs write failed
RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable".
In some cases useful info is found in syslog - try "dmesg | tail" or so.
rbd: map failed: (6) No such device or address

This is because the stock kernel on my CentOS 7.2 does not support some of the newer ceph image features.
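An alternative to recreating the image, hinted at by the error message itself, is to disable the unsupported features on an existing image. The feature list below is an assumption based on the usual jewel defaults; check rbd info first:

rbd info kube/mysql-sonar                            # lists the features currently enabled on the image
rbd feature disable kube/mysql-sonar deep-flatten    # assumed jewel defaults beyond layering; disable dependents first
rbd feature disable kube/mysql-sonar fast-diff
rbd feature disable kube/mysql-sonar object-map
rbd feature disable kube/mysql-sonar exclusive-lock

With only layering left enabled, the map step below should succeed.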

Map the image file created above to a block device:

rbd map kube/mysql-sonar --name client.admin

At this point, the operation on ceph is complete.
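Before switching to kubernetes, it is worth confirming what was created and mapped; these are standard rbd commands:

rbd info kube/mysql-sonar    # should show size 5120 MB and the layering feature
rbd showmapped               # lists the mapped image and its local device, e.g. /dev/rbd0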

Next let's take a look at how to use the block device created by ceph above on kubernetes.

The relevant sample files can be found in the examples/volumes/rbd directory of the kubernetes source tree:

[root@server-116 ~]# ll /data/software/kubernetes/examples/volumes/rbd/
total 12
-rw-r--r--. 1 root root  962 Mar  8 08:26 rbd.json
-rw-r--r--. 1 root root  985 Mar  8 08:26 rbd-with-secret.json
-rw-r--r--. 1 root root 2628 Mar  8 08:26 README.md
drwxr-x---. 2 root root   29 Mar  8 08:26 secret
[root@server-116 ~]# ll /data/software/kubernetes/examples/volumes/rbd/secret/
total 4
-rw-r--r--. 1 root root 156 Mar  8 08:26 ceph-secret.yaml

rbd.json is an example of mounting an rbd device directly as a kubernetes volume. rbd-with-secret.json is an example of mounting a ceph rbd device using a secret. ceph-secret.yaml is a sample secret.

We can take a look at the ceph-secret.yaml file first:

apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
type: "kubernetes.io/rbd"
data:
  key: QVFCMTZWMVZvRjVtRXhBQTVrQ1FzN2JCajhWVUxSdzI2Qzg0SEE9PQ==

We only need to change the key value on the last line. The value is base64-encoded; the raw key can be obtained on ceph with the following command:

[root@server-117 ~]# ceph auth get-key client.admin
AQDRvL9YvY7vIxAA7RkO5S8OWH6Aidnu22OiFw==

Take the key and base64-encode it:

[root@server-117 ~]# echo "AQDRvL9YvY7vIxAA7RkO5S8OWH6Aidnu22OiFw==" | base64
QVFEUnZMOVl2WTd2SXhBQTdSa081UzhPV0g2QWlkbnUyMk9pRnc9PQo=

So our modified ceph-secret.yaml is as follows:

apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
type: "kubernetes.io/rbd"
data:
  key: QVFEUnZMOVl2WTd2SXhBQTdSa081UzhPV0g2QWlkbnUyMk9pRnc9PQo=

Create a secret:

kubectl create -f ceph-secret.yaml
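To check that the secret exists (kubectl will not print the decoded key, only its size):

kubectl get secret ceph-secret
kubectl describe secret ceph-secret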

Because a volume mounted directly in the pod spec shares the pod's life cycle and is released when the pod is released, mounting the rbd device directly as a volume is not recommended; instead we use a PersistentVolume (pv).

Let's first create a pv file:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-sonar-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  rbd:
    monitors:
      - 10.5.10.117:6789
      - 10.5.10.236:6789
      - 10.5.10.227:6789
    pool: kube
    image: mysql-sonar
    user: admin
    secretRef:
      name: ceph-secret
    fsType: ext4
    readOnly: false
  persistentVolumeReclaimPolicy: Recycle

Create the 5GB pv:

kubectl create -f mysql-sonar-pv.yml

Create another pvc file:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-sonar-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

Create a pvc:

kubectl create -f mysql-sonar-pvc.yml
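Once both objects exist, the claim should bind to the volume; a quick check with standard kubectl commands:

kubectl get pv     # mysql-sonar-pv should show STATUS Bound
kubectl get pvc    # mysql-sonar-pvc should show STATUS Bound and reference mysql-sonar-pv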

Finally, we modify the mysql-sonar-dm.yml file created in the previous blog post, which is as follows:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mysql-sonar
spec:
  replicas: 1
#  selector:
#    app: mysql-sonar
  template:
    metadata:
      labels:
        app: mysql-sonar
    spec:
      containers:
      - name: mysql-sonar
        image: myhub.fdccloud.com/library/mysql-yd:5.6
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "mysoft"
        - name: MYSQL_DATABASE
          value: sonardb
        - name: MYSQL_USER
          value: sonar
        - name: MYSQL_PASSWORD
          value: sonar
        volumeMounts:
        - name: mysql-sonar
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-sonar
        persistentVolumeClaim:
          claimName: mysql-sonar-pvc

Create a mysql pod:

kubectl create -f mysql-sonar-dm.yml

This gives us a pod with persistent data. We can test it by writing some data to the database, deleting the pod, letting a new pod be created, and checking whether the data is still there; a sketch of such a test follows.
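As a concrete sketch (the pod name placeholders and the persist_test database below are hypothetical; the root password comes from the deployment above):

kubectl get pods                                       # find the generated mysql-sonar pod name
kubectl exec -it <mysql-sonar-pod> -- mysql -uroot -pmysoft -e "CREATE DATABASE persist_test;"
kubectl delete pod <mysql-sonar-pod>                   # the Deployment recreates the pod automatically
kubectl get pods                                       # wait for the new pod to reach Running
kubectl exec -it <new-mysql-sonar-pod> -- mysql -uroot -pmysoft -e "SHOW DATABASES;"    # persist_test should still be listed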

Note that if the rbd device was not created beforehand, or if we delete the current pod and start a new one before the old pod is completely removed, the new pod will stay in ContainerCreating status and errors will appear in the kubelet log. For more details, see http://tonybai.com/2016/11/07/integrate-kubernetes-with-ceph-rbd/

Also note that the ceph-common package must be installed on all kubernetes nodes, otherwise starting the container fails with an error similar to the following:

MountVolume.SetUp failed for volume "kubernetes.io/rbd/da0deff5-0bef-11e7-bf41-00155d0a2521-mysql-sonar-pv" (spec.Name: "mysql-sonar-pv") pod "da0deff5-0bef-11e7-bf41-00155d0a2521" (UID: "da0deff5-0bef-11e7-bf41-00155d0a2521") with: rbd: map failed executable file not found in $PATH
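With the ceph yum repository configured earlier, installing it on every kubernetes node is a one-liner:

yum install -y ceph-common    # provides the rbd binary that kubelet invokes to map the image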
