
How to deploy Redis Cluster on K8s


This article explains in detail how to deploy a Redis cluster on K8s. It is quite practical, so I share it here as a reference; I hope you can get something out of it after reading.

I. Preface

Architectural principle: each Master can have multiple Slaves. When a Master goes offline, the Redis cluster elects a new Master from its Slaves; when the old Master comes back online, it becomes a Slave of the new Master.

II. Preparation

This deployment is mainly based on the project:

https://github.com/zuxqoj/kubernetes-redis-cluster

It includes two ways to deploy Redis clusters:

StatefulSet

Service&Deployment

The two approaches each have their advantages and disadvantages; StatefulSet is the preferred way to run stateful services such as Redis, MongoDB, ZooKeeper, and so on. This article focuses on how to deploy a Redis cluster using StatefulSet.

III. Brief introduction of StatefulSet

RC, Deployment, and DaemonSet are all designed for stateless services: the IPs, names, and start/stop order of the Pods they manage are random. What is StatefulSet? As the name implies, it manages stateful services, such as MySQL and MongoDB clusters.

StatefulSet is essentially a variant of Deployment and became GA in Kubernetes v1.9. To support stateful services, the Pods it manages have fixed names and a fixed start/stop order. In a StatefulSet, the Pod name serves as the network identity (hostname), and shared storage must be used.

A Deployment is usually paired with a regular Service, whereas a StatefulSet is paired with a headless service. Unlike a regular Service, a headless service has no Cluster IP; resolving its name returns the Endpoint list of all Pods behind that Headless Service.

In addition, on top of the Headless Service, StatefulSet creates a DNS domain name for each Pod replica it controls. The format of this domain name is:

$(podname).$(headless service name), whose FQDN is $(podname).$(headless service name).$(namespace).svc.cluster.local

In other words, for a stateful service it is best to identify each node with a fixed network identity (such as a domain name), which also requires support from the application itself (for example, ZooKeeper supports writing host domain names into its configuration file).

Based on the Headless Service (that is, a Service without a Cluster IP), StatefulSet gives each Pod a stable network identity (hostname and DNS records) that remains unchanged after the Pod is rescheduled. Combined with PV/PVC, StatefulSet also provides stable persistent storage: even after a Pod is rescheduled, it can still access its original data.

The following is the architecture used to deploy Redis with StatefulSet: both the Masters and the Slaves are replicas of the StatefulSet, data is persisted through PVs, and the cluster is exposed as a Service to accept client requests.

IV. Deployment process

The project's README briefly lists the steps for creating a StatefulSet-based Redis cluster:

1. Create NFS storage

2. Create PV

3. Create PVC

4. Create Configmap

5. Create a headless service

6. Create Redis StatefulSet

7. Initialize the Redis cluster

Below, I follow the steps above, work through the operation in practice, and describe the deployment process of the Redis cluster in detail. Many K8s concepts are involved in this article; I hope you can learn about them in advance.

1. Create NFS storage

NFS storage mainly provides stable back-end storage for Redis: when a Redis Pod is restarted or migrated, the original data is still available. Here we first create the NFS shares, and then mount a remote NFS path into Redis through PVs.

Install NFS

yum -y install nfs-utils   # main package, provides the NFS file system
yum -y install rpcbind     # provides the RPC protocol

Then create the /etc/exports file to set the paths that need to be shared:

[root@ftp pv3]# cat /etc/exports
/usr/local/k8s/redis/pv1 192.168.0.0/24(rw,sync,no_root_squash)
/usr/local/k8s/redis/pv2 192.168.0.0/24(rw,sync,no_root_squash)
/usr/local/k8s/redis/pv3 192.168.0.0/24(rw,sync,no_root_squash)
/usr/local/k8s/redis/pv4 192.168.0.0/24(rw,sync,no_root_squash)
/usr/local/k8s/redis/pv5 192.168.0.0/24(rw,sync,no_root_squash)
/usr/local/k8s/redis/pv6 192.168.0.0/24(rw,sync,no_root_squash)

Create the appropriate directory

[root@ftp quizii]# mkdir -p /usr/local/k8s/redis/pv{1..6}

Next, start the NFS and rpcbind services:

systemctl restart rpcbind
systemctl restart nfs
systemctl enable nfs

[root@ftp pv3]# exportfs -v
/usr/local/k8s/redis/pv1 192.168.0.0/24(sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)
/usr/local/k8s/redis/pv2 192.168.0.0/24(sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)
/usr/local/k8s/redis/pv3 192.168.0.0/24(sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)
/usr/local/k8s/redis/pv4 192.168.0.0/24(sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)
/usr/local/k8s/redis/pv5 192.168.0.0/24(sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)
/usr/local/k8s/redis/pv6 192.168.0.0/24(sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)

Client

yum -y install nfs-utils

View storage-side shares

[root@node2 ~]# showmount -e 192.168.0.222
Export list for 192.168.0.222:
/usr/local/k8s/redis/pv6 192.168.0.0/24
/usr/local/k8s/redis/pv5 192.168.0.0/24
/usr/local/k8s/redis/pv4 192.168.0.0/24
/usr/local/k8s/redis/pv3 192.168.0.0/24
/usr/local/k8s/redis/pv2 192.168.0.0/24
/usr/local/k8s/redis/pv1 192.168.0.0/24

Create PV

Each Redis Pod needs a separate PV to store its own data, so we create a pv.yaml file containing six PVs:

[root@master redis]# cat pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv1
spec:
  capacity:
    storage: 200M
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.0.222
    path: "/usr/local/k8s/redis/pv1"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-vp2
spec:
  capacity:
    storage: 200M
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.0.222
    path: "/usr/local/k8s/redis/pv2"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv3
spec:
  capacity:
    storage: 200M
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.0.222
    path: "/usr/local/k8s/redis/pv3"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv4
spec:
  capacity:
    storage: 200M
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.0.222
    path: "/usr/local/k8s/redis/pv4"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv5
spec:
  capacity:
    storage: 200M
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.0.222
    path: "/usr/local/k8s/redis/pv5"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv6
spec:
  capacity:
    storage: 200M
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.0.222
    path: "/usr/local/k8s/redis/pv6"

As shown above, all the PVs are identical except for the name and mount path. Create them:

[root@master redis]# kubectl create -f pv.yaml
persistentvolume "nfs-pv1" created
persistentvolume "nfs-vp2" created
persistentvolume "nfs-pv3" created
persistentvolume "nfs-pv4" created
persistentvolume "nfs-pv5" created
persistentvolume "nfs-pv6" created

2. Create Configmap

Here we convert the Redis configuration file directly into a Configmap, which is a more convenient way to read configuration. The configuration file redis.conf is as follows:

[root@master redis]# cat redis.conf
appendonly yes
cluster-enabled yes
cluster-config-file /var/lib/redis/nodes.conf
cluster-node-timeout 5000
dir /var/lib/redis
port 6379

Create a Configmap named redis-conf:

kubectl create configmap redis-conf --from-file=redis.conf

View the created configmap:

[root@master redis]# kubectl describe cm redis-conf
Name:         redis-conf
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
redis.conf:
----
appendonly yes
cluster-enabled yes
cluster-config-file /var/lib/redis/nodes.conf
cluster-node-timeout 5000
dir /var/lib/redis
port 6379

Events:  <none>

As above, all configuration items in redis.conf are saved in the redis-conf Configmap.

3. Create Headless service

The Headless service is the foundation on which StatefulSet builds stable network identity, and we need to create it in advance. The definition file headless-service.yaml is as follows:

[root@master redis]# cat headless-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-service
  labels:
    app: redis
spec:
  ports:
  - name: redis-port
    port: 6379
  clusterIP: None
  selector:
    app: redis
    appCluster: redis-cluster

Create:

kubectl create -f headless-service.yaml

View:
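A hedged sketch of what kubectl get svc typically reports for a headless service like this one (the AGE value is illustrative):

kubectl get svc redis-service
NAME            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
redis-service   ClusterIP   None         <none>        6379/TCP   1m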

As you can see, the service name is redis-service and its CLUSTER-IP is None, indicating that this is a "headless" service.

4. Create the Redis cluster nodes

Once the Headless service is created, you can use StatefulSet to create the Redis cluster nodes, which is the core of this article. Let's first create the redis.yaml file:

[root@master redis]# cat redis.yaml
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: redis-app
spec:
  serviceName: "redis-service"
  replicas: 6
  template:
    metadata:
      labels:
        app: redis
        appCluster: redis-cluster
    spec:
      terminationGracePeriodSeconds: 20
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - redis
              topologyKey: kubernetes.io/hostname
      containers:
      - name: redis
        image: redis
        command:
        - "redis-server"
        args:
        - "/etc/redis/redis.conf"
        - "--protected-mode"
        - "no"
        resources:
          requests:
            cpu: "100m"
            memory: "100Mi"
        ports:
        - name: redis
          containerPort: 6379
          protocol: "TCP"
        - name: cluster
          containerPort: 16379
          protocol: "TCP"
        volumeMounts:
        - name: "redis-conf"
          mountPath: "/etc/redis"
        - name: "redis-data"
          mountPath: "/var/lib/redis"
      volumes:
      - name: "redis-conf"
        configMap:
          name: "redis-conf"
          items:
          - key: "redis.conf"
            path: "redis.conf"
  volumeClaimTemplates:
  - metadata:
      name: redis-data
    spec:
      accessModes: [ "ReadWriteMany" ]
      resources:
        requests:
          storage: 200M

As above, a total of 6 Redis nodes (Pods) are created, of which 3 will serve as Masters and the other 3 as Slaves. The redis-conf Configmap generated earlier is mounted into the container at /etc/redis/redis.conf through a volume, and the Redis data directory is declared through volumeClaimTemplates (that is, a PVC), which will be bound to the PVs we created earlier.

There is a key concept here, Affinity; please refer to the official documentation for details. podAntiAffinity expresses anti-affinity: it determines which Pods may not be placed in the same topology domain as a given Pod, and can be used to spread the Pods of a service across different hosts or topology domains to improve the stability of the service itself.

preferredDuringSchedulingIgnoredDuringExecution means the affinity or anti-affinity rule is satisfied as far as possible during scheduling; if it cannot be met, the Pod may still be scheduled onto a node that violates it. Once the Pod is running, the system no longer checks whether the rule still holds.

Here, matchExpressions states that a Redis Pod should preferably not be scheduled onto a Node that already runs a Pod labeled app: redis; in other words, a Node that already hosts Redis should, as far as possible, not receive another Redis Pod. However, since we only have three Nodes and six replicas, under preferredDuringSchedulingIgnoredDuringExecution some Pods will inevitably end up sharing a node.
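For contrast, here is a minimal sketch (an assumption for illustration, not part of this deployment) of the "hard" variant, requiredDuringSchedulingIgnoredDuringExecution, which refuses outright to place two Redis Pods on the same node:

affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    # hard rule: never co-locate two Pods labeled app=redis on the same hostname
    - labelSelector:
        matchExpressions:
        - key: app
          operator: In
          values:
          - redis
      topologyKey: kubernetes.io/hostname

With only three Nodes and six replicas, such a hard rule would leave three Pods stuck in Pending, which is exactly why the preferred form is used here.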

In addition, following StatefulSet's rules, the hostnames of the six generated Redis Pods are named $(statefulset name)-$(ordinal) in sequence, as shown below:

[root@master redis]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP             NODE            NOMINATED NODE
redis-app-0   1/1     Running   0          2h    172.17.24.3    192.168.0.144
redis-app-1   1/1     Running   0          2h    172.17.63.8    192.168.0.148
redis-app-2   1/1     Running   0          2h    172.17.24.8    192.168.0.144
redis-app-3   1/1     Running   0          2h    172.17.63.9    192.168.0.148
redis-app-4   1/1     Running   0          2h    172.17.24.9    192.168.0.144
redis-app-5   1/1     Running   0          2h    172.17.63.10   192.168.0.148

As shown above, these Pods are created one after another in the order {0..N-1}. Note that redis-app-1 is not started until redis-app-0 has reached the Running state.
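If you want to observe this ordered startup yourself, a simple hedged check (not from the original article) is:

kubectl get pods -l app=redis -w   # watch: redis-app-1 only appears once redis-app-0 is Running and Ready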

At the same time, each Pod gets a DNS domain name inside the cluster in the format $(podname).$(service name).$(namespace).svc.cluster.local, that is:

redis-app-0.redis-service.default.svc.cluster.local
redis-app-1.redis-service.default.svc.cluster.local
...and so on.

Within the K8s cluster, these Pods can communicate with each other by domain name. We can use nslookup from a busybox image to verify these domain names:

[root@master redis]# kubectl exec -ti busybox -- nslookup redis-app-0.redis-service
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local

Name:      redis-app-0.redis-service
Address 1: 172.17.24.3

As you can see, the IP of redis-app-0 is 172.17.24.3. Of course, if a Redis Pod is migrated or restarted (we can manually delete one to test this), its IP will change, but the Pod's domain name, SRV records, and A records will not.
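A small hedged sketch of how to verify this (it mirrors the failover test later in this article; the resolved IP is simply whatever the rescheduled Pod receives):

kubectl delete pod redis-app-0                                   # the StatefulSet recreates it with the same name
kubectl exec -ti busybox -- nslookup redis-app-0.redis-service   # the same domain name resolves again, possibly to a new IP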

In addition, we can see that all the PVs created earlier have been successfully bound:

[root@master redis]# kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                            STORAGECLASS   REASON   AGE
nfs-pv1   200M       RWX            Retain           Bound    default/redis-data-redis-app-2                           3h
nfs-pv3   200M       RWX            Retain           Bound    default/redis-data-redis-app-4                           3h
nfs-pv4   200M       RWX            Retain           Bound    default/redis-data-redis-app-5                           3h
nfs-pv5   200M       RWX            Retain           Bound    default/redis-data-redis-app-1                           3h
nfs-pv6   200M       RWX            Retain           Bound    default/redis-data-redis-app-0                           3h
nfs-vp2   200M       RWX            Retain           Bound    default/redis-data-redis-app-3                           3h

5. Initialize the Redis cluster

After the 6 Redis Pods are created, we still need to initialize the cluster using the common redis-trib tool.

Create an Ubuntu container

Because the Redis cluster can only be initialized after all of its nodes are up, writing the initialization logic into the StatefulSet would be complex and inefficient. Here I have to praise the original project author's idea, which is worth learning from: create an additional container on K8s dedicated to managing and controlling certain services inside the cluster.

Here we launch an Ubuntu container, install redis-trib inside it, and then initialize the Redis cluster. Run:

kubectl run -it ubuntu --image=ubuntu --restart=Never /bin/bash

We switch to Aliyun's Ubuntu mirror; execute:

root@ubuntu:/# cat > /etc/apt/sources.list << EOF
... (Aliyun Ubuntu mirror entries) ...
EOF

After that succeeds, install the basic software environment required by the original project:

apt-get update
apt-get install -y vim wget python2.7 python-pip redis-tools dnsutils

Initialize the cluster

First, we need to install redis-trib:

pip install redis-trib==0.5.1

Then, create a cluster with only Master nodes:

redis-trib.py create \
  `dig +short redis-app-0.redis-service.default.svc.cluster.local`:6379 \
  `dig +short redis-app-1.redis-service.default.svc.cluster.local`:6379 \
  `dig +short redis-app-2.redis-service.default.svc.cluster.local`:6379

Then, add a Slave to each Master:

redis-trib.py replicate \
  --master-addr `dig +short redis-app-0.redis-service.default.svc.cluster.local`:6379 \
  --slave-addr `dig +short redis-app-3.redis-service.default.svc.cluster.local`:6379

redis-trib.py replicate \
  --master-addr `dig +short redis-app-1.redis-service.default.svc.cluster.local`:6379 \
  --slave-addr `dig +short redis-app-4.redis-service.default.svc.cluster.local`:6379

redis-trib.py replicate \
  --master-addr `dig +short redis-app-2.redis-service.default.svc.cluster.local`:6379 \
  --slave-addr `dig +short redis-app-5.redis-service.default.svc.cluster.local`:6379

At this point, our Redis cluster is really created. Connect to any Redis Pod to check it out:

[root@master redis]# kubectl exec -it redis-app-2 /bin/bash
root@redis-app-2:/data# /usr/local/bin/redis-cli -c
127.0.0.1:6379> cluster nodes
5d3e77f6131c6f272576530b23d1cd7592942eec 172.17.24.3:6379@16379 master - 0 1559628533000 1 connected 0-5461
a4b529c40a920da314c6c93d17dc603625d6412c 172.17.63.10:6379@16379 master - 0 1559628531670 6 connected 10923-16383
368971dc8916611a86577a8726e4f1f3a69c5eb7 172.17.17.9:6379@16379 slave 0025e6140f85cb243c60c214467b7e77bf819ae3 0 1559628533672 4 connected
0025e6140f85cb243c60c214467b7e77bf819ae3 172.17.63.8:6379@16379 master - 0 1559628533000 2 connected 5462-10922
6d5ee94b78b279e7d3c77a55437695662e8c039e 172.17.24.8:6379@16379 myself,slave a4b529c40a920da314c6c93d17dc603625d6412c 0 1559628532000 5 connected
2eb3e06ce914e0e285d6284c4df32573e318bc01 172.17.63.9:6379@16379 slave 5d3e77f6131c6f272576530b23d1cd7592942eec 0 1559628533000 3 connected
127.0.0.1:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:6
cluster_stats_messages_ping_sent:14910
cluster_stats_messages_pong_sent:15139
cluster_stats_messages_sent:30049
cluster_stats_messages_ping_received:15139
cluster_stats_messages_pong_received:14910
cluster_stats_messages_received:30049
127.0.0.1:6379>

In addition, you can view the data mounted by Redis on NFS:

[root@ftp pv3]# ll /usr/local/k8s/redis/pv3
total 12
-rw-r--r-- 1 root root  92 Jun  4 11:36 appendonly.aof
-rw-r--r-- 1 root root 175 Jun  4 11:36 dump.rdb
-rw-r--r-- 1 root root 794 Jun  4 11:49 nodes.conf

6. Create a Service for access

Earlier we created a Headless Service for the StatefulSet, but that Service has no Cluster IP and cannot be used for external access. Therefore, we also need to create a Service dedicated to providing access and load balancing for the Redis cluster:

[root@master redis]# cat redis-access-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-access-service
  labels:
    app: redis
spec:
  ports:
  - name: redis-port
    protocol: "TCP"
    port: 6379
    targetPort: 6379
  selector:
    app: redis
    appCluster: redis-cluster

As above, the Service is named redis-access-service, exposes port 6379 inside the K8s cluster, and load-balances across the Pods labeled app: redis and appCluster: redis-cluster.
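To confirm that this selector really picks up all six Redis Pods, a hedged check (not shown in the original article) is to list the Service's endpoints:

kubectl get endpoints redis-access-service   # should list six Pod IPs, each on port 6379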

View after creation:

[root@master redis]# kubectl get svc redis-access-service -o wide
NAME                   TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE   SELECTOR
redis-access-service   ClusterIP   10.0.0.64    <none>        6379/TCP   2h    app=redis,appCluster=redis-cluster

As above, within the K8s cluster, any application can access the Redis cluster through 10.0.0.64:6379. Of course, to make testing easier, we could also add a NodePort mapping to the physical machines for this Service, which is not covered in detail here.
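As a hedged sketch of that NodePort idea (the Service name and nodePort value are assumptions for illustration, e.g. a hypothetical redis-access-nodeport.yaml):

apiVersion: v1
kind: Service
metadata:
  name: redis-access-nodeport    # hypothetical name
  labels:
    app: redis
spec:
  type: NodePort
  ports:
  - name: redis-port
    protocol: "TCP"
    port: 6379
    targetPort: 6379
    nodePort: 30379              # assumed value in the default NodePort range 30000-32767
  selector:
    app: redis
    appCluster: redis-cluster

After creating it with kubectl create -f, the Redis cluster would be reachable from outside at <any node IP>:30379.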

V. Test master-slave switchover

After building the complete Redis cluster on K8s, what we care about most is whether its native high-availability mechanism still works. Here we can pick any Master Pod to test the cluster's master-slave switchover, for example redis-app-0:

[root@master redis]# kubectl get pods redis-app-0 -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP            NODE            NOMINATED NODE
redis-app-0   1/1     Running   0          3h    172.17.24.3   192.168.0.144

Go to redis-app-0 to view:

[root@master redis]# kubectl exec -it redis-app-0 /bin/bash
root@redis-app-0:/data# /usr/local/bin/redis-cli -c
127.0.0.1:6379> role
1) "master"
2) (integer) 13370
3) 1) 1) "172.17.63.9"
      2) "6379"
      3) "13370"
127.0.0.1:6379>

As shown above, redis-app-0 is a Master, and its Slave is 172.17.63.9, that is, redis-app-3.

Next, we manually delete the redis-app-0:

[root@master redis]# kubectl delete pod redis-app-0
pod "redis-app-0" deleted
[root@master redis]# kubectl get pod redis-app-0 -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP            NODE            NOMINATED NODE
redis-app-0   1/1     Running   0          4m    172.17.24.3   192.168.0.144

Let's go inside redis-app-0 to see:

[root@master redis]# kubectl exec -it redis-app-0 /bin/bash
root@redis-app-0:/data# /usr/local/bin/redis-cli -c
127.0.0.1:6379> role
1) "slave"
2) "172.17.63.9"
3) (integer) 6379
4) "connected"
5) (integer) 13958

As above, redis-app-0 has become a Slave, subordinate to its former Slave node 172.17.63.9, that is, redis-app-3.

VI. Questions

At this point you may wonder: why can a Redis Pod fail over normally even though its IP changes after a restart? This comes down to Redis's own mechanism. Each node in a Redis cluster has its own NodeId (saved in the automatically generated nodes.conf), and the NodeId does not change with the IP; it is effectively a fixed network identifier. So even if a Redis Pod is restarted, it reloads its saved NodeId and keeps its identity. We can view redis-app-1's nodes.conf file on NFS:

[root@k8s-node2 ~]# cat /usr/local/k8s/redis/pv1/nodes.conf
96689f2018089173e528d3a71c4ef10af68ee462 192.168.169.209:6379@16379 slave d884c4971de9748f99b10d14678d864187a9e5d3 0 1526460952651 4 connected
237d46046d9b75a6822f02523ab894928e2300e6 192.168.169.200:6379@16379 slave c15f378a604ee5b200f06cc23e9371cbc04f4559 0 1526460952651 1 connected
c15f378a604ee5b200f06cc23e9371cbc04f4559 192.168.169.197:6379@16379 master - 0 1526460952651 1 connected 10923-16383
d884c4971de9748f99b10d14678d864187a9e5d3 192.168.169.205:6379@16379 master - 0 1526460952651 4 connected 5462-10922
c3b4ae23c80ffe31b7b34ef29dd6f8d73beaf85f 192.168.169.198:6379@16379 myself,slave c8a8f70b4c29333de6039c47b2f3453ed11fb5c2 0 1526460952565 3 connected
c8a8f70b4c29333de6039c47b2f3453ed11fb5c2 192.168.169.201:6379@16379 master - 0 1526460952651 6 connected 0-5461
vars currentEpoch 6 lastVoteEpoch 4

As above, the first column is NodeId, which is stable, and the second column, IP and port information, may change.

Here, we introduce two usage scenarios of NodeId:

When a Slave Pod is disconnected and reconnected, the IP changes, but the Master finds that its NodeId is still the same, so it thinks that the Slave is still the previous Slave.

When a Master Pod goes offline, the cluster elects a new Master from its Slaves. When the old Master comes back online, the cluster recognizes its NodeId and turns the old Master into a Slave of the new Master.

If you are interested, you can test these two scenarios yourself; pay attention to the Redis logs while doing so.
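A minimal hedged sketch of how such a test could look (Pod names follow this article; the exact log lines will differ):

# Scenario 2: take a Master offline and watch its Slave get promoted
kubectl delete pod redis-app-0                                    # redis-app-0 was a Master earlier in this article
kubectl exec -it redis-app-3 -- /usr/local/bin/redis-cli role     # should now report "master"
kubectl logs redis-app-3                                          # look for the failover/election messages in the Redis log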

This concludes the article on how to deploy a Redis cluster on K8s. I hope the content above is helpful to you; if you think the article is good, please share it so more people can see it.
