

Detailed tutorial: how to deploy Redis clusters on Kubernetes

2025-02-24 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/03 Report--

Introduction

Redis (REmote DIctionary Server) is an open-source in-memory data store that is commonly used as a database, cache, and message broker. It can store and manipulate high-level data structures such as lists, maps, sets, and sorted sets. Redis accepts keys in a wide range of formats, so operations can be executed on the server, reducing the workload on the client. It keeps the database entirely in memory and uses disk only for persistence. Redis is a popular data-storage solution, favored by technology giants such as GitHub, Pinterest, Snapchat, Twitter, StackOverflow, and Flickr.

Why use Redis?

It is very fast: it is written in ANSI C and runs on POSIX systems such as Linux, macOS, and Solaris.

Redis is often rated the most popular key-value database, and the most popular NoSQL database used with containers.

Its caching solution reduces calls to the cloud database backend.

Applications can access it through the client-side API library.

Client libraries exist for all popular programming languages.

It's open source and very stable.

Application case of Redis

In some Facebook online games, game scores are updated very frequently. With Redis sorted sets, these operations remain simple to perform even with millions of users and millions of new scores per minute.

Twitter stores the timeline of all users in the Redis cluster.

Pinterest stores the user follower graph in a Redis cluster, where the data is distributed across hundreds of instances.

GitHub uses Redis as a queue.
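The leaderboard pattern above maps directly onto sorted-set commands. A minimal redis-cli sketch (the key and member names here are hypothetical):

```
ZADD leaderboard 2100 "alice" 1500 "bob"    # add two players with scores
ZINCRBY leaderboard 50 "bob"                # bob scores 50 more points
ZREVRANGE leaderboard 0 9 WITHSCORES        # top 10, highest score first
```

Because sorted sets keep members ordered by score at all times, both the update and the ranked read are cheap, which is what makes the high-frequency game-score case practical.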

What is a Redis cluster?

A Redis cluster is a collection of Redis instances that scales the database by partitioning it, making it more resilient in the process. Each member of the cluster, whether primary or secondary, manages a subset of the hash slots. If a primary server fails and becomes unreachable, one of its secondaries is promoted to primary. In the smallest Redis cluster, three master nodes each have one slave node (to ensure at least a minimum degree of failover), and each master node is assigned a range of the hash slots 0 through 16383. For example, node A might hold slots 0 to 5000, node B slots 5001 to 10000, and node C slots 10001 to 16383. Communication within the cluster is carried out over an internal bus, using a gossip protocol to spread information about the cluster and to discover new nodes.
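The key-to-slot mapping behind this scheme is small enough to sketch. The Redis Cluster specification defines it as CRC16(key) mod 16384, using the CRC-16/XModem (CCITT) variant; the helper names below are our own:

```python
def crc16(data: bytes) -> int:
    """CRC-16/XModem (poly 0x1021, init 0), the checksum Redis Cluster uses."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def hash_slot(key: str) -> int:
    """Map a key to one of the 16384 hash slots, honouring {hash tags}."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:  # non-empty tag: hash only the tagged part
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384

# The spec's reference check value: CRC16("123456789") == 0x31C3
print(hash_slot("123456789"))  # slot 12739 (= 0x31C3)
# Keys sharing a {hash tag} land on the same slot, and so on the same node
print(hash_slot("{user1}.following") == hash_slot("{user1}.followers"))  # True
```

In a three-master layout with slots split as 0-5460, 5461-10922, and 10923-16383, slot 12739 would live on the third master. Hash tags are how applications keep related keys co-located so multi-key operations remain possible.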

Deploy a Redis cluster on Kubernetes

Deploying a Redis cluster in Kubernetes is challenging, because each Redis instance relies on a configuration file that keeps track of the other cluster instances and their roles. To achieve this, we need a combination of Kubernetes StatefulSets and PersistentVolumes.

Preparation in advance

To complete this demo, we need to make the following preparations:

Rancher

A Google Cloud Platform account, or an account with another cloud provider. GKE is used in the demonstration below, but any cloud provider can be used, and the process works in much the same way.

Start the Rancher instance

If you don't have a Rancher instance, you can quickly start one by following the Quick Start documentation here:

https://rancher.com/quick-start/

Deploy GKE clusters with Rancher

Start and configure your Kubernetes cluster with Rancher. For more information, please see the documentation:

https://rancher.com/docs/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/gke/

When the cluster is ready, we can check its current status with kubectl.
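For example, either of the standard sanity checks below will do:

```shell
kubectl cluster-info   # confirm the API server is reachable
kubectl get nodes      # every node should report STATUS Ready
```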

Deploy Redis

Next, deploy the Redis cluster. We can either apply the YAML files with kubectl or import them into the Rancher UI. All the YAML files we need are listed below.

The YAML content is as follows:

redis-sts.yaml
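The body of redis-sts.yaml is not reproduced in this article. As a sketch of the common pattern such a file follows — a ConfigMap carrying redis.conf plus an update-node.sh startup script, and a six-replica StatefulSet — it might look like the following; the image tag and config values are assumptions, while the 1Gi ReadWriteOnce claim matches the volumes listed during validation:

```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-cluster
data:
  update-node.sh: |
    #!/bin/sh
    REDIS_NODES="/data/nodes.conf"
    sed -i -e "/myself/ s/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/${POD_IP}/" ${REDIS_NODES}
    exec "$@"
  redis.conf: |
    cluster-enabled yes
    cluster-require-full-coverage no
    cluster-node-timeout 15000
    cluster-config-file /data/nodes.conf
    cluster-migration-barrier 1
    appendonly yes
    protected-mode no
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis-cluster
spec:
  serviceName: redis-cluster
  replicas: 6
  selector:
    matchLabels:
      app: redis-cluster
  template:
    metadata:
      labels:
        app: redis-cluster
    spec:
      containers:
      - name: redis
        image: redis:5.0.1-alpine
        ports:
        - containerPort: 6379
          name: client
        - containerPort: 16379
          name: gossip
        command: ["/conf/update-node.sh", "redis-server", "/conf/redis.conf"]
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        volumeMounts:
        - name: conf
          mountPath: /conf
        - name: data
          mountPath: /data
      volumes:
      - name: conf
        configMap:
          name: redis-cluster
          defaultMode: 0755
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
```

Note the POD_IP environment variable, injected from the pod's status via the Downward API; it is what lets the startup script repair the node's address after rescheduling.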

redis-svc.yaml
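Likewise, the body of redis-svc.yaml is not reproduced here. A representative sketch exposes the client port and the cluster's internal gossip port (the Service name and type are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-cluster
spec:
  type: ClusterIP
  ports:
  - port: 6379
    targetPort: 6379
    name: client
  - port: 16379
    targetPort: 16379
    name: gossip
  selector:
    app: redis-cluster
```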

Validate deployment

Check that the Redis nodes are up and running:
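A quick check, assuming the pods carry the app=redis-cluster label (the same label the cluster-create command later in this article selects on):

```shell
kubectl get pods -l app=redis-cluster
```

All six pods, redis-cluster-0 through redis-cluster-5, should be in the Running state.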

The persistent volumes created for the StatefulSet replicas can be listed with kubectl:

$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                          STORAGECLASS   REASON   AGE
pvc-ae61ad5c-f0a5-11e8-a6e0-42010aa40039   1Gi        RWO            Delete           Bound    default/data-redis-cluster-0   standard                7m
pvc-b74b6ef1-f0a5-11e8-a6e0-42010aa40039   1Gi        RWO            Delete           Bound    default/data-redis-cluster-1   standard                7m
pvc-c4f9b982-f0a5-11e8-a6e0-42010aa40039   1Gi        RWO            Delete           Bound    default/data-redis-cluster-2   standard                6m
pvc-cd7af12d-f0a5-11e8-a6e0-42010aa40039   1Gi        RWO            Delete           Bound    default/data-redis-cluster-3   standard                6m
pvc-d5bd0ad3-f0a5-11e8-a6e0-42010aa40039   1Gi        RWO            Delete           Bound    default/data-redis-cluster-4   standard                6m

We can inspect any pod to see the volumes attached to it:

$ kubectl describe pods redis-cluster-0 | grep pvc
  Normal  SuccessfulAttachVolume  29m  attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-ae61ad5c-f0a5-11e8-a6e0-42010aa40039"

The same data can also be seen on Rancher UI.

Create the Redis cluster

The next step is to form the Redis cluster. To do this, we run the following command and type yes to accept the proposed configuration. The first three nodes become masters and the last three become slaves.

$ kubectl exec -it redis-cluster-0 -- redis-cli --cluster create --cluster-replicas 1 $(kubectl get pods -l app=redis-cluster -o jsonpath='{range .items[*]}{.status.podIP}:6379 {end}')

Here is the complete output of the command:

>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 10.60.1.13:6379 to 10.60.2.12:6379
Adding replica 10.60.2.14:6379 to 10.60.1.12:6379
Adding replica 10.60.1.14:6379 to 10.60.2.13:6379
M: 2847de6f6e7c8aaa8b0d2f204cf3ff6e8562a75b 10.60.2.12:6379
   slots:[0-5460] (5461 slots) master
M: 3f119dcdd4a33aab0107409524a633e0d22bac1a 10.60.1.12:6379
   slots:[5461-10922] (5462 slots) master
M: 754823247cf28af9a2a82f61a8caaa63702275a0 10.60.2.13:6379
   slots:[10923-16383] (5461 slots) master
S: 47efe749c97073822cbef9a212a7971a0df8aecd 10.60.1.13:6379
   replicates 2847de6f6e7c8aaa8b0d2f204cf3ff6e8562a75b
S: e40ae789995dc6b0dbb5bb18bd243722451d2e95 10.60.2.14:6379
   replicates 3f119dcdd4a33aab0107409524a633e0d22bac1a
S: 8d627e43d8a7a2142f9f16c2d66b1010fb472079 10.60.1.14:6379
   replicates 754823247cf28af9a2a82f61a8caaa63702275a0
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
....
>>> Performing Cluster Check (using node 10.60.2.12:6379)
M: 2847de6f6e7c8aaa8b0d2f204cf3ff6e8562a75b 10.60.2.12:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 47efe749c97073822cbef9a212a7971a0df8aecd 10.60.1.13:6379
   slots: (0 slots) slave
   replicates 2847de6f6e7c8aaa8b0d2f204cf3ff6e8562a75b
M: 754823247cf28af9a2a82f61a8caaa63702275a0 10.60.2.13:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
M: 3f119dcdd4a33aab0107409524a633e0d22bac1a 10.60.1.12:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: e40ae789995dc6b0dbb5bb18bd243722451d2e95 10.60.2.14:6379
   slots: (0 slots) slave
   replicates 3f119dcdd4a33aab0107409524a633e0d22bac1a
S: 8d627e43d8a7a2142f9f16c2d66b1010fb472079 10.60.1.14:6379
   slots: (0 slots) slave
   replicates 754823247cf28af9a2a82f61a8caaa63702275a0
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

Verify cluster deployment

Check the cluster details and the role of each member
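For example, using redis-cli's standard cluster subcommands from inside any of the pods:

```shell
kubectl exec -it redis-cluster-0 -- redis-cli cluster info    # cluster_state should be "ok"
kubectl exec -it redis-cluster-0 -- redis-cli cluster nodes   # each member with its master/slave role
```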

Test the Redis cluster

We want to exercise the cluster and simulate node failures. For the former task, we will deploy a simple Python application; for the latter, we will delete a node and observe the cluster's behavior.

Deploy Hit Counter applications

We will deploy a simple application in the cluster and place a load balancer in front of it. The purpose of this application is to increment a counter, store the new value in the Redis cluster, and return the value as the HTTP response.
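The application source is not reproduced in this article, but the described behavior can be sketched in a few lines. The Flask framework, the redis-py-cluster package, the redis-cluster Service name, and the hits key are all assumptions:

```python
# app.py -- hypothetical sketch of the hit-counter service
from flask import Flask
from rediscluster import RedisCluster  # pip install redis-py-cluster

app = Flask(__name__)
# "redis-cluster" resolves to the Kubernetes Service in front of the Redis pods
cache = RedisCluster(startup_nodes=[{"host": "redis-cluster", "port": "6379"}],
                     decode_responses=True)

@app.route("/")
def hit():
    count = cache.incr("hits")  # INCR is atomic on the Redis side
    return f"I have been hit {count} times since deployment."

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```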

Deploy using kubectl or Rancher UI:

The YAML content is as follows:

app-deployment-service.yaml
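The body of app-deployment-service.yaml is likewise not shown; a representative sketch pairs a Deployment with a LoadBalancer Service (the image name, port numbers, and labels here are placeholders):

```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hit-counter-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hit-counter-app
  template:
    metadata:
      labels:
        app: hit-counter-app
    spec:
      containers:
      - name: hit-counter
        image: example/hit-counter:latest   # hypothetical image
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: hit-counter-lb
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 5000
  selector:
    app: hit-counter-app
```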

Rancher shows the resources we created: a pod containing the Python application, and a Service of type LoadBalancer. The details of the Service display its public IP address:

At this point, we can access that IP address in a browser and generate hit-counter values:

Simulated node failure

We can simulate the failure of a cluster member by deleting its pod (using kubectl or the Rancher UI). When we delete redis-cluster-0, which was originally a master, we see that Kubernetes promotes redis-cluster-3 to master, and when redis-cluster-0 comes back, it rejoins the cluster as a replica of redis-cluster-3.

Before

After

We can see that the IP of redis-cluster-0 has changed, so how does the cluster recover?

When we created the cluster, we also created a ConfigMap, which provides a script at /conf/update-node.sh that the container calls at startup. The script updates the Redis configuration with the local node's new IP address. With the new IP in the config, the cluster can heal and start up even though the new pod has a different IP address.
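The exact script body is not reproduced in this article, but the rewrite it performs can be sketched: find the line in Redis's nodes.conf that is marked myself and substitute the pod's current IP. The file contents, path, and IP addresses below are illustrative only:

```shell
#!/bin/sh
# Simulate the startup-time IP fix-up on a sample nodes.conf entry.
NODES_CONF=$(mktemp)
cat > "$NODES_CONF" <<'EOF'
2847de6f6e7c8aaa8b0d2f204cf3ff6e8562a75b 10.60.2.12:6379@16379 myself,master - 0 0 1 connected 0-5460
EOF

POD_IP=10.60.2.99   # in the real pod this comes from the Downward API

# Replace the stale IP on the "myself" line with the pod's current IP
sed -i -e "/myself/ s/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/${POD_IP}/" "$NODES_CONF"

cat "$NODES_CONF"   # the entry now advertises the new address
```

Because the node ID on that line never changes, the rest of the cluster recognizes the returning member as the same node despite the new address.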

If we keep loading the page during this process, the counter continues to increase, and once the cluster converges we can see that no data has been lost.

Conclusion

Redis is a powerful data-storage and caching tool. A Redis cluster extends those capabilities further by providing sharding and its related performance benefits, linear scaling, and high availability. Data is automatically split among multiple nodes, and operations can continue even when a subset of the nodes fails or cannot communicate with the rest of the cluster.

For more information about Redis clustering, visit the tutorial (https://redis.io/topics/cluster-tutorial) or documentation (https://redis.io/topics/cluster-spec).

For more information about Rancher, please visit our home page (https://www.cnrancher.com) or deployment documentation (https://www.cnrancher.com/docs/rancher/v2.x/cn/overview/).


© 2024 shulou.com SLNews company. All rights reserved.
