This article shows how to implement data sharding, read-write separation, and traffic mirroring for a Redis cluster in Istio, with a step-by-step walkthrough.
Redis is a high-performance key-value store that is widely used in microservice architectures. Normally, using the advanced features of Redis Cluster mode requires changes to the client code, which makes applications harder to upgrade and maintain. With Istio and Envoy, we can implement client-unaware Redis Cluster data sharding without modifying the client code, and provide advanced traffic management features such as read-write separation and traffic mirroring.
Redis Cluster
A common use of Redis is as a data cache. Adding a Redis cache layer between the application servers and the database server greatly reduces the number of read operations the application servers send to the database, avoids the risk of the database responding slowly or even going down under heavy load, and significantly improves the robustness of the whole system. The principle of Redis as a data cache is shown in the figure:
In a small-scale system, the single Redis instance shown in the figure above implements the cache layer well. When the amount of data to be cached grows, one Redis server can no longer meet the caching needs of all the application servers; at the same time, when that single Redis instance fails, a flood of read requests hits the backend database directly, putting enormous instantaneous pressure on it and endangering the stability of the system. We can use Redis Cluster to shard the cached data, placing different data on different Redis shards to increase the capacity of the cache layer. Within each shard, multiple replica nodes can share the load of cached read requests and provide high availability. The system with Redis Cluster is shown in the following figure:
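To make the sharding rule concrete: Redis Cluster hashes each key to one of 16384 slots using CRC16(key) mod 16384, and each shard owns a range of slots. The mapping can be inspected with the standard CLUSTER KEYSLOT command (an illustrative sketch; the host name here is hypothetical):

$ redis-cli -h redis-cluster cluster keyslot b
(integer) 3300

The shard whose slot range contains 3300 is the one that stores key "b".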
As the figure shows, in Redis Cluster mode the client must route reads and writes for different keys to different Redis nodes according to the cluster's sharding rules, so the client has to understand the topology of the Redis Cluster. This makes it impossible to smoothly migrate an application from standalone Redis to Redis Cluster without modifying the client. It also couples the client code to the operation and maintenance of the Redis Cluster: to implement read-write separation or traffic mirroring, for example, every client would have to be modified and redeployed.
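The following illustrative session shows that burden (addresses taken from the example cluster built later in this article): a client that is not cluster-aware and sends a command to the wrong shard only gets back a MOVED redirection, which it must parse and follow itself:

$ redis-cli -h 172.16.1.53 set b b
(error) MOVED 3300 172.16.0.138:6379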
In this scenario, we can place an Envoy proxy between the application and the Redis Cluster and let Envoy route the cache read and write requests issued by the application to the correct Redis node. In a microservice system there are many application processes that need to access the cache server; to avoid a single point of failure and a performance bottleneck, we deploy an Envoy proxy as a sidecar for each application process. To simplify the management of these proxies, we use Istio as the control plane to configure all the Envoy proxies uniformly, as shown in the following figure:
In the rest of this article, we will show how to manage a Redis Cluster through Istio and Envoy to achieve client-unaware data sharding, as well as advanced routing strategies such as read-write separation and traffic mirroring.
Deploy Istio
Pilot already supports the Redis protocol, but the support is limited: it can only configure a default route for the Redis proxy, does not support Redis Cluster mode, and therefore cannot enable the advanced traffic management features of the Redis filter, such as data sharding, read-write separation, and traffic mirroring. To let Istio distribute Redis Cluster-related configuration to Envoy sidecars, we modified the EnvoyFilter configuration code to support the "REPLACE" operation of EnvoyFilter. The PR Implement REPLACE operation for EnvoyFilter patch has been submitted to the Istio community, merged into the main branch, and will ship in a later Istio release.
At the time of writing, the PR had not yet made it into the latest Istio release, 1.7.3, so I built a Pilot image that enables the "REPLACE" operation of EnvoyFilter. When installing Istio, we specify this Pilot image on the istioctl command line:
$ cd istio-1.7.3/bin
$ ./istioctl install --set components.pilot.hub=zhaohuabing --set components.pilot.tag=1.7.3-enable-ef-replace
Note: if your Istio version is newer than 1.7.3 and already includes this PR, you can use the default Pilot image of your Istio release directly.
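To double-check which Pilot image is actually running, you can inspect the istiod deployment (a quick sanity-check sketch, assuming the default istio-system installation); it should print the custom image configured above, i.e. zhaohuabing/pilot:1.7.3-enable-ef-replace:

$ kubectl -n istio-system get deploy istiod -o jsonpath='{.spec.template.spec.containers[0].image}'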
Deploy Redis Cluster
Download the code used in the following example from https://github.com/zhaohuabing/istio-redis-culster:
$ git clone https://github.com/zhaohuabing/istio-redis-culster.git
$ cd istio-redis-culster
Let's create a "redis" namespace to deploy the Redis Cluster in this example.
$ kubectl create ns redis
namespace/redis created
Deploy the StatefulSet and ConfigMap for the Redis servers:
$ kubectl apply -f k8s/redis-cluster.yaml -n redis
configmap/redis-cluster created
statefulset.apps/redis-cluster created
service/redis-cluster created

Verify the Redis deployment
Verify that the Redis nodes are up and running:
$ kubectl get pod -n redis
NAME              READY   STATUS    RESTARTS   AGE
redis-cluster-0   2/2     Running   0          4m25s
redis-cluster-1   2/2     Running   0          3m56s
redis-cluster-2   2/2     Running   0          3m28s
redis-cluster-3   2/2     Running   0          2m58s
redis-cluster-4   2/2     Running   0          2m27s
redis-cluster-5   2/2     Running   0          117s

Create the Redis Cluster
In the step above we deployed six Redis nodes using a StatefulSet, but these six nodes are still independent of each other and do not yet form a cluster. Let's use the redis-cli --cluster create command to join them into a Redis Cluster:
$ kubectl exec -it redis-cluster-0 -n redis -- redis-cli --cluster create --cluster-replicas 1 $(kubectl get pods -l app=redis-cluster -o jsonpath='{range.items[*]}{.status.podIP}:6379 {end}' -n redis)
Defaulting container name to redis.
Use 'kubectl describe pod/redis-cluster-0 -n redis' to see all of the containers in this pod.
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 172.16.0.72:6379 to 172.16.0.138:6379
Adding replica 172.16.0.201:6379 to 172.16.1.52:6379
Adding replica 172.16.0.139:6379 to 172.16.1.53:6379
M: 8fdc7aa28a6217b049a2265b87bff9723f202af0 172.16.0.138:6379
   slots:[0-5460] (5461 slots) master
M: 4dd6c1fecbbe4527e7d0de61b655e8b74b411e4c 172.16.1.52:6379
   slots:[5461-10922] (5462 slots) master
M: 0b86a0fbe76cdd4b48434b616b759936ca99d71c 172.16.1.53:6379
   slots:[10923-16383] (5461 slots) master
S: 94b139d247e9274b553c82fbbc6897bfd6d7f693 172.16.0.139:6379
   replicates 0b86a0fbe76cdd4b48434b616b759936ca99d71c
S: e293d25881c3cf6db86034cd9c26a1af29bc585a 172.16.0.72:6379
   replicates 8fdc7aa28a6217b049a2265b87bff9723f202af0
S: ab897de0eca1376558e006c5b0a49f5004252eb6 172.16.0.201:6379
   replicates 4dd6c1fecbbe4527e7d0de61b655e8b74b411e4c
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
...
>>> Performing Cluster Check (using node 172.16.0.138:6379)
M: 8fdc7aa28a6217b049a2265b87bff9723f202af0 172.16.0.138:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: 4dd6c1fecbbe4527e7d0de61b655e8b74b411e4c 172.16.1.52:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 94b139d247e9274b553c82fbbc6897bfd6d7f693 172.16.0.139:6379
   slots: (0 slots) slave
   replicates 0b86a0fbe76cdd4b48434b616b759936ca99d71c
M: 0b86a0fbe76cdd4b48434b616b759936ca99d71c 172.16.1.53:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: ab897de0eca1376558e006c5b0a49f5004252eb6 172.16.0.201:6379
   slots: (0 slots) slave
   replicates 4dd6c1fecbbe4527e7d0de61b655e8b74b411e4c
S: e293d25881c3cf6db86034cd9c26a1af29bc585a 172.16.0.72:6379
   slots: (0 slots) slave
   replicates 8fdc7aa28a6217b049a2265b87bff9723f202af0
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

Verify the Redis Cluster
We can use the cluster info command to view the Redis Cluster configuration and its member nodes to verify that the cluster was created successfully:
$ kubectl exec -it redis-cluster-0 -c redis -n redis -- redis-cli cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:206
cluster_stats_messages_pong_sent:210
cluster_stats_messages_sent:416
cluster_stats_messages_ping_received:205
cluster_stats_messages_pong_received:206
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:416

Deploy the test client
We deploy a client to send test commands:
$ kubectl apply -f k8s/redis-client.yaml -n redis
deployment.apps/redis-client created

Distribute Redis Cluster-related Envoy configuration via Istio
In the following steps we will push the Redis Cluster-related configuration to the Envoy sidecars through Istio, enabling the advanced features of Redis Cluster, namely data sharding, read-write separation, and traffic mirroring, without any changes to the client.
Create Envoy Redis Cluster
Envoy provides a "envoy.clusters.redis" type of Envoy Cluster to connect to the backend Redis Cluster,Envoy. This Cluster is used to obtain the topology of the backend Redis Cluster, including how many shards (shard), which slot each shard is responsible for, and which nodes are included in the shard to distribute requests from the client to the correct Redis nodes.
Use EnvoyFilter to create the required Envoy Redis Cluster:
$ kubectl apply -f istio/envoyfilter-custom-redis-cluster.yaml
envoyfilter.networking.istio.io/custom-redis-cluster created

Create Envoy Redis Proxy
By default, the LDS configuration issued by Istio contains a TCP proxy filter; we need to replace it with a Redis Proxy filter.
Since the "REPLACE" operation of EnvoyFilter is not supported in 1.7.3, we need to update the CRD definition of EnvoyFilter before we can create the EnvoyFilter:
$ kubectl apply -f istio/envoyfilter-crd.yaml
customresourcedefinition.apiextensions.k8s.io/envoyfilters.networking.istio.io configured
Use EnvoyFilter to replace the TCP proxy filter with a Redis Proxy filter so that Envoy can proxy the Redis requests issued by the client:
$ sed -i .bak "s/\${REDIS_VIP}/`kubectl get svc redis-cluster -n redis -o jsonpath='{.spec.clusterIP}'`/" istio/envoyfilter-redis-proxy.yaml
$ kubectl apply -f istio/envoyfilter-redis-proxy.yaml
envoyfilter.networking.istio.io/add-redis-proxy created

Verify the Redis Cluster functionality
Now that everything is in place, let's verify the functionality of Redis Cluster.
Redis data sharding
After Istio pushes the configuration defined in the EnvoyFilters to Envoy, Envoy automatically discovers the topology of the backend Redis Cluster and distributes each request to the correct node in the cluster according to the key in the client request.
From the output of the cluster-creation step above we can see the topology of this Redis Cluster: it has three shards, each with one Master node and one Slave (replica) node. The client accesses the Redis Cluster through the Envoy proxy deployed in the same Pod, as shown in the following figure:
The Master and Slave node addresses of each shard in Redis Cluster:
Shard[0]: Master redis-cluster-0 172.16.0.138:6379, replica redis-cluster-4 172.16.0.72:6379  -> Slots 0 - 5460
Shard[1]: Master redis-cluster-1 172.16.1.52:6379,  replica redis-cluster-5 172.16.0.201:6379 -> Slots 5461 - 10922
Shard[2]: Master redis-cluster-2 172.16.1.53:6379,  replica redis-cluster-3 172.16.0.139:6379 -> Slots 10923 - 16383
Note: if you deploy this example in your K8s cluster, the IP address and topology of each node in Redis Cluster may be slightly different, but the basic structure should be similar.
We try sending some set requests for different keys from the client to the Redis Cluster:
$ kubectl exec -it `kubectl get pod -l app=redis-client -n redis -o jsonpath="{.items[0].metadata.name}"` -c redis-client -n redis -- redis-cli -h redis-cluster
redis-cluster:6379> set a a
OK
redis-cluster:6379> set b b
OK
redis-cluster:6379> set c c
OK
redis-cluster:6379> set d d
OK
redis-cluster:6379> set e e
OK
redis-cluster:6379> set f f
OK
redis-cluster:6379> set g g
OK
redis-cluster:6379> set h h
OK
From the client's point of view, all the requests succeeded. On the server side, we can use the scan command to view the data on each node.
Look at the data in Shard[0], whose master is redis-cluster-0 and replica is redis-cluster-4:
$ kubectl exec redis-cluster-0 -c redis -n redis -- redis-cli --scan
b
f
$ kubectl exec redis-cluster-4 -c redis -n redis -- redis-cli --scan
f
b
Look at the data in Shard[1], whose master is redis-cluster-1 and replica is redis-cluster-5:
$ kubectl exec redis-cluster-1 -c redis -n redis -- redis-cli --scan
c
g
$ kubectl exec redis-cluster-5 -c redis -n redis -- redis-cli --scan
g
c
Look at the data in Shard[2], whose master is redis-cluster-2 and replica is redis-cluster-3:
$ kubectl exec redis-cluster-2 -c redis -n redis -- redis-cli --scan
a
e
d
h
$ kubectl exec redis-cluster-3 -c redis -n redis -- redis-cli --scan
h
e
d
a
As the verification above shows, the data set by the client was distributed across the three shards of the Redis Cluster. The distribution is handled automatically by the Envoy Redis Proxy; the client is unaware of the backend Redis Cluster, and to the client the interaction looks the same as talking to a single Redis node.
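As a cross-check, the placement we observed agrees with the slot arithmetic. CLUSTER KEYSLOT is a standard Redis command and can be run against any node; a small verification sketch using this example's pods:

$ kubectl exec redis-cluster-0 -c redis -n redis -- redis-cli cluster keyslot b
(integer) 3300
$ kubectl exec redis-cluster-0 -c redis -n redis -- redis-cli cluster keyslot a
(integer) 15495

Slot 3300 falls in Shard[0]'s range 0 - 5460 and slot 15495 in Shard[2]'s range 10923 - 16383, which is exactly where the scan commands above found keys "b" and "a".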
Using this approach, we can seamlessly migrate the Redis in a system from a single node to cluster mode when the business grows and a single Redis node comes under too much pressure. In cluster mode, the data for different keys is cached on different shards; we can add more replica nodes to a shard to scale its read capacity, or add more shards to expand the whole cluster and cope with growing data pressure as the business keeps expanding. Because Envoy is aware of the Redis Cluster topology and the data distribution is done entirely by Envoy, the migration and expansion require no changes to the clients and do not affect the normal operation of the online business; a rough sketch of such an expansion follows.
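For example, adding a fourth shard could look roughly like the sketch below (assuming two new pods, redis-cluster-6 and redis-cluster-7, have already been added to the StatefulSet; the commands are standard redis-cli --cluster subcommands, but the pod names and placeholders in angle brackets are illustrative):

# Join the new node as an empty master, using an existing node as the entry point
$ kubectl exec -it redis-cluster-0 -c redis -n redis -- redis-cli --cluster add-node <redis-cluster-6-ip>:6379 172.16.0.138:6379
# Move a share of the hash slots onto the new, currently empty master
$ kubectl exec -it redis-cluster-0 -c redis -n redis -- redis-cli --cluster rebalance --cluster-use-empty-masters 172.16.0.138:6379
# Attach a replica to the new master
$ kubectl exec -it redis-cluster-0 -c redis -n redis -- redis-cli --cluster add-node --cluster-slave --cluster-master-id <new-master-id> <redis-cluster-7-ip>:6379 172.16.0.138:6379

Envoy's Redis Cluster picks up the new topology on its next periodic refresh, so the clients need no change at all.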
Redis read-write separation
In a Redis shard there is usually one Master node and one or more Slave (replica) nodes. The Master handles writes and replicates data changes to the Slaves. When the read pressure from the application is high, we can add more replicas to a shard to share the read load. The Envoy Redis Proxy supports several read policies:
MASTER: read data only from the Master node. This is required when the client needs strong data consistency, but it puts all the read pressure on the Master and cannot spread the read load across multiple nodes in a shard.
PREFER_MASTER: read data from the Master node first, and read from the Replica node when the Master node is not available.
REPLICA: read data only from Replica nodes. Because replication from Master to Replica is asynchronous, stale data may be read, so this suits scenarios where the client does not require strict data consistency. In this mode, multiple Replica nodes can share the read load from the client.
PREFER_REPLICA: read data from the Replica node first, and read from the Master node when the Replica node is not available.
ANY: reads data from any node.
In the EnvoyFilter issued earlier, we set the read policy of the Envoy Redis Proxy to "REPLICA", so client reads should only be sent to Replica nodes. Let's verify the read-write separation strategy with the following commands.
Initiate a series of get and set operations with a key of "b" through the client:
$ kubectl exec -it `kubectl get pod -l app=redis-client -n redis -o jsonpath="{.items[0].metadata.name}"` -c redis-client -n redis -- redis-cli -h redis-cluster
redis-cluster:6379> get b
"b"
redis-cluster:6379> set b bb
OK
redis-cluster:6379> get b
"bb"
redis-cluster:6379>
From the Redis Cluster topology above, we know that the key "b" belongs to Shard[0]. We can watch the commands received by the Master and Replica nodes of that shard with the command redis-cli monitor.
Master node:
$ kubectl exec redis-cluster-0 -c redis -n redis -- redis-cli monitor
Slave node:
$ kubectl exec redis-cluster-4 -c redis -n redis -- redis-cli monitor
As the monitor output shows, all the get requests were sent by Envoy to the Replica node.
Redis traffic mirroring
Envoy Redis Proxy supports traffic mirroring: requests sent by clients are simultaneously forwarded to a mirror Redis server or cluster. Traffic mirroring is a very useful feature; for example, we can use it to copy live data from the production environment into a test environment, so that the application can be tested against realistic online data without affecting the normal experience of online users.
We create a single-node Redis deployment to act as the mirror server:
$ kubectl apply -f k8s/redis-mirror.yaml -n redis
deployment.apps/redis-mirror created
service/redis-mirror created
Use an EnvoyFilter to enable the mirroring policy:
$ sed -i .bak "s/\${REDIS_VIP}/`kubectl get svc redis-cluster -n redis -o jsonpath='{.spec.clusterIP}'`/" istio/envoyfilter-redis-proxy-with-mirror.yaml
$ kubectl apply -f istio/envoyfilter-redis-proxy-with-mirror.yaml
envoyfilter.networking.istio.io/add-redis-proxy configured
Initiate a series of get and set operations with a key of "b" through the client:
$ kubectl exec -it `kubectl get pod -l app=redis-client -n redis -o jsonpath="{.items[0].metadata.name}"` -c redis-client -n redis -- redis-cli -h redis-cluster
redis-cluster:6379> get b
"b"
redis-cluster:6379> set b bb
OK
redis-cluster:6379> get b
"bb"
redis-cluster:6379> set b bbb
OK
redis-cluster:6379> get b
"bbb"
redis-cluster:6379> get b
"bbb"
You can view the commands received by the Master, Replica, and mirror nodes with the command redis-cli monitor.
Master node:
$ kubectl exec redis-cluster-0 -c redis -n redis -- redis-cli monitor
Slave node:
$ kubectl exec redis-cluster-4 -c redis -n redis -- redis-cli monitor
Mirror node:
$ kubectl exec -it `kubectl get pod -l app=redis-mirror -n redis -o jsonpath="{.items[0].metadata.name}"` -c redis-mirror -n redis -- redis-cli monitor
As the monitor output shows, all the set requests were also sent by Envoy to the mirror node.
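As one more check (an optional sketch using this example's deployments), the mirror should now contain the mirrored writes even though the client never addressed it directly:

$ kubectl exec -it `kubectl get pod -l app=redis-mirror -n redis -o jsonpath="{.items[0].metadata.name}"` -c redis-mirror -n redis -- redis-cli get b
"bbb"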
Implementation principle
In the steps above we created two EnvoyFilter configuration objects in Istio. These two EnvoyFilters modify the configuration of the Envoy proxy in two main parts: the Redis Proxy network filter configuration and the Redis Cluster configuration.
The EnvoyFilter below replaces the TCP Proxy network filter in the Listener that Pilot created for the Redis service with a network filter of type "type.googleapis.com/envoy.config.filter.network.redis_proxy.v2.RedisProxy". The default route of this Redis Proxy points to "custom-redis-cluster", and the read-write separation policy and traffic mirroring policy are configured on it.
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: add-redis-proxy
  namespace: istio-system
spec:
  configPatches:
  - applyTo: NETWORK_FILTER
    match:
      listener:
        name: ${REDIS_VIP}_6379             # Replace REDIS_VIP with the cluster IP of the "redis-cluster" service
        filterChain:
          filter:
            name: "envoy.filters.network.tcp_proxy"
    patch:
      operation: REPLACE
      value:
        name: envoy.redis_proxy
        typed_config:
          "@type": type.googleapis.com/envoy.config.filter.network.redis_proxy.v2.RedisProxy
          stat_prefix: redis_stats
          prefix_routes:
            catch_all_route:
              request_mirror_policy:            # Send requests to the mirror cluster
              - cluster: outbound|6379||redis-mirror.redis.svc.cluster.local
                exclude_read_commands: True     # Mirror write commands only
              cluster: custom-redis-cluster
          settings:
            op_timeout: 5s
            enable_redirection: true
            enable_command_stats: true
            read_policy: REPLICA               # Send read requests to replica
The EnvoyFilter below creates a Cluster of type "envoy.clusters.redis" named "custom-redis-cluster" in the CDS issued by Pilot. This Cluster queries a random node of the Redis cluster with the CLUSTER SLOTS command to obtain the cluster topology, and caches the topology locally so that client requests can be distributed to the correct Redis node in the cluster:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: custom-redis-cluster
  namespace: istio-system
spec:
  configPatches:
  - applyTo: CLUSTER
    patch:
      operation: INSERT_FIRST
      value:
        name: "custom-redis-cluster"
        connect_timeout: 0.5s
        lb_policy: CLUSTER_PROVIDED
        load_assignment:
          cluster_name: custom-redis-cluster
          endpoints:
          - lb_endpoints:
            - endpoint:
                address:
                  socket_address:
                    address: redis-cluster-0.redis-cluster.redis.svc.cluster.local
                    port_value: 6379
            - endpoint:
                address:
                  socket_address:
                    address: redis-cluster-1.redis-cluster.redis.svc.cluster.local
                    port_value: 6379
            - endpoint:
                address:
                  socket_address:
                    address: redis-cluster-2.redis-cluster.redis.svc.cluster.local
                    port_value: 6379
            - endpoint:
                address:
                  socket_address:
                    address: redis-cluster-3.redis-cluster.redis.svc.cluster.local
                    port_value: 6379
            - endpoint:
                address:
                  socket_address:
                    address: redis-cluster-4.redis-cluster.redis.svc.cluster.local
                    port_value: 6379
            - endpoint:
                address:
                  socket_address:
                    address: redis-cluster-5.redis-cluster.redis.svc.cluster.local
                    port_value: 6379
        cluster_type:
          name: envoy.clusters.redis
          typed_config:
            "@type": type.googleapis.com/google.protobuf.Struct
            value:
              cluster_refresh_rate: 5s
              cluster_refresh_timeout: 3s
              redirect_refresh_interval: 5s
              redirect_refresh_threshold: 5
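To see the topology information Envoy works with, you can issue the same CLUSTER SLOTS query manually (output abbreviated; shown here for this example's first shard, with the node IDs from the cluster-creation output):

$ kubectl exec -it redis-cluster-0 -c redis -n redis -- redis-cli cluster slots
1) 1) (integer) 0
   2) (integer) 5460
   3) 1) "172.16.0.138"
      2) (integer) 6379
      3) "8fdc7aa28a6217b049a2265b87bff9723f202af0"
   4) 1) "172.16.0.72"
      2) (integer) 6379
      3) "e293d25881c3cf6db86034cd9c26a1af29bc585a"
...

Envoy repeats this query every cluster_refresh_rate (5s above) and routes each client request to the node that owns the key's hash slot.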
This article has shown how to use Envoy to provide client-unaware Redis data sharding for microservice applications, and how to manage the Redis Cluster configuration of all the Envoy proxies in the system through Istio. Istio and Envoy greatly simplify the client-side coding and configuration needed to use Redis Cluster, and the operational policies of the Redis Cluster can be changed online to implement advanced traffic management such as read-write separation and traffic mirroring. Of course, introducing Istio and Envoy does not reduce the overall complexity of the system; it moves the work of maintaining Redis Cluster out of the distributed application code and into the service mesh infrastructure layer. For the majority of application developers, whose business value comes mainly from application code, that is a reasonable trade: it is rarely cost-effective for them to devote a lot of energy to this kind of infrastructure.