How is Redis partitioning implemented, and why partition at all? Many people are not familiar with these questions, so this article summarizes the main points to give you a better understanding of how Redis partitioning works. Let's move on.
Redis partitioning simply means distributing data across multiple Redis instances, so that each instance stores only a subset of the whole dataset.
Why partition? What is the motivation? Broadly speaking, Redis partitioning offers two main benefits:
1. Better performance. The network bandwidth and computing resources of a single Redis machine are limited. Spreading requests across multiple machines makes full use of their combined computing power and raises the overall service capacity of Redis.
2. Horizontal scaling of storage. Even when a single instance can handle the request load, the amount of stored data keeps growing, and one machine is bounded by its own storage capacity. Distributing the data across multiple machines lets the Redis service scale out horizontally.
In short, partitioning frees us from the hardware limits of a single machine. Not enough storage? Not enough computing power? Not enough bandwidth? All of these can be solved by adding machines.
Redis partitioning basics
There are many partitioning strategies in practice. For example, suppose we already have four Redis instances, R0, R1, R2, and R3, and a batch of keys representing users, such as user:1, user:2, and so on. How do we decide which instance each key goes to? One of the simplest approaches is range partitioning. Let's see how partitioning by range would work.
Range partitioning
Range partitioning means that all keys within a given range are mapped to the same Redis instance. Taking the user data mentioned above as an example:
We can map users with IDs from 0 to 10000 to instance R0, users with IDs from 10001 to 20000 to instance R1, and so on.
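As a rough illustration, here is a minimal Python sketch of such a range lookup. The mapping table and the choice of four instances are hypothetical, used only to mirror the example above:

# A minimal sketch of range partitioning. Each entry in the hypothetical table
# maps an inclusive range of user IDs to the index of a Redis instance.
RANGE_TABLE = [
    (0, 10000, 0),       # user IDs 0-10000     -> instance R0
    (10001, 20000, 1),   # user IDs 10001-20000 -> instance R1
    (20001, 30000, 2),   # user IDs 20001-30000 -> instance R2
    (30001, 40000, 3),   # user IDs 30001-40000 -> instance R3
]

def instance_for_user(user_id):
    # Walk the table and return the index of the instance that owns this ID.
    for low, high, idx in RANGE_TABLE:
        if low <= user_id <= high:
            return idx
    raise ValueError("no range configured for user id %d" % user_id)

print(instance_for_user(15000))   # prints 1, i.e. instance R1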
Although this method is simple and works well in practice, it still has some problems:
We need a table that stores the mapping from user ID ranges to Redis instances, for example that user IDs 0-10000 map to instance R0.
Not only do we have to maintain this table, we also need one table for every object type. For example, we are currently storing user information; if we also store order information, we need another mapping table.
What if the keys we want to store cannot be divided into ranges? For example, if our keys are a set of UUIDs, range partitioning is hard to apply.
So in practice range partitioning is not always a good choice. Don't worry, there is a better approach: hash partitioning.
Hash partitioning
One obvious advantage of hash partitioning over range partitioning is that it works with keys of any form; unlike range partitioning, it does not require keys in the form object_name:<id>. The partitioning rule is simple enough to express with one formula:
id = hash(key) % N
Here id is the index of the Redis instance and N is the number of instances. The formula says: first compute a numeric value from the key with a hash function (such as the crc32 function). Take the key foobar as an example: hash(foobar) might yield 93024922.
The hash result is then taken modulo the number of instances to get a value between 0 and 3, which maps the key to one of our Redis instances. Since 93024922 % 4 = 2, we know that foobar will be stored on R2.
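A small Python sketch of this formula, using the crc32 function from the standard library (any stable hash would do; the instance count of 4 mirrors the R0-R3 example):

import zlib

INSTANCE_COUNT = 4   # the four instances R0..R3

def instance_for_key(key):
    # hash(key) % N: hash the key with crc32, then take it modulo the instance count.
    return zlib.crc32(key.encode()) % INSTANCE_COUNT

print(instance_for_key("foobar"))   # a value between 0 and 3, selecting one instance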
Different partitioning implementations
Partitioning can be implemented at different layers of the Redis software stack. Let's look at the options:
Client-side implementation
In a client-side implementation, the client itself decides which Redis instance a given key will be stored on.
(Figure: schematic of client-side Redis partitioning.)
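A minimal sketch of client-side partitioning, assuming the redis-py client and four hypothetical instances listening on local ports 6379-6382; the routing rule is the hash formula from above:

import zlib
import redis   # the redis-py client (pip install redis) is assumed here

# Hypothetical addresses for the four instances R0..R3.
INSTANCES = [
    redis.Redis(host="127.0.0.1", port=6379),
    redis.Redis(host="127.0.0.1", port=6380),
    redis.Redis(host="127.0.0.1", port=6381),
    redis.Redis(host="127.0.0.1", port=6382),
]

def node_for(key):
    # The client itself decides which instance a key lives on.
    return INSTANCES[zlib.crc32(key.encode()) % len(INSTANCES)]

# Every read and write is routed through the same function.
node_for("user:1").set("user:1", "some user data")
value = node_for("user:1").get("user:1")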
Proxy implementation
With a proxy implementation, clients send their requests to a proxy server that speaks the Redis protocol and therefore sits between the clients and the Redis servers. The proxy forwards each request to the correct Redis instance according to the configured partitioning scheme and relays the reply back to the client.
(Figure: schematic of proxy-based Redis partitioning.)
Twemproxy, a proxy for both Redis and Memcached, implements this kind of proxy-assisted partitioning.
Query routing
Query routing is a Redis partitioning method implemented by Redis Cluster:
With query routing, we can send a query to any Redis instance at random, and that instance takes care of getting the request to the instance that actually owns the key. Redis Cluster implements a hybrid form of query routing in cooperation with the client: requests are not forwarded between instances; instead, the client is redirected to the correct instance.
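As an illustration of query routing, a short sketch using redis-py's cluster client; this assumes a running Redis Cluster with a node reachable on 127.0.0.1:7000 and a redis-py version that ships the redis.cluster module:

from redis.cluster import RedisCluster

# The client only needs one reachable node; it discovers the rest of the
# cluster and follows the server's redirections to the instance that owns
# the key's hash slot.
rc = RedisCluster(host="127.0.0.1", port=7000)
rc.set("user:1", "some user data")
print(rc.get("user:1"))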
Disadvantages of Redis partitioning
Although Redis partitioning has sounded good so far, it has some serious drawbacks, which means a number of Redis features do not work well in a partitioned environment. Let's take a look:
Operations involving multiple keys are generally not supported, for example when the keys we want to operate on in one batch map to different Redis instances (see the sketch after this list).
Multi-key Redis transactions are not supported.
The minimum granularity of partitioning is the key, so we cannot split a very large dataset stored under a single key (such as a huge sorted set) across different instances.
When partitioning is used, handling the data becomes more complex: for example, we have to deal with multiple RDB/AOF files, and making a backup means aggregating the persistence files from several instances.
Adding and removing machines is complex. For example, Redis Cluster supports mostly transparent rebalancing of data when machines are added or removed at runtime, while approaches such as client-side and proxy partitioning do not support this feature.
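To make the first drawback concrete, here is a small sketch (reusing the crc32 routing from earlier, with a hypothetical four-instance setup) showing that a single batch of keys can span several shards, so one MGET cannot serve it:

import zlib

INSTANCE_COUNT = 4   # instances R0..R3

def shard_of(key):
    return zlib.crc32(key.encode()) % INSTANCE_COUNT

def split_by_shard(keys):
    # Group the keys of a would-be MGET by the shard each key maps to.
    groups = {}
    for key in keys:
        groups.setdefault(shard_of(key), []).append(key)
    return groups

# If the groups land on different shards, the client (or proxy) must issue
# one command per shard and merge the results itself.
print(split_by_shard(["user:1", "user:2", "user:3", "order:42"]))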
Persistent storage or caching
Although data partitioning is conceptually the same whether Redis is used as a persistent data store or as a cache, there is an important limitation when it is used as a persistent store. With Redis as persistent storage, each key must always map to the same Redis instance. With Redis as a cache, it is not a big problem if a key gets mapped to a different instance when its original instance becomes unavailable.
Consistent hashing implementations can usually map a key to another instance when the instance it would normally map to is unavailable. Similarly, if a new machine is added, part of the keys will start to be mapped to the new machine. Two points matter here (a toy consistent-hashing sketch follows below):
1. If Redis is used as a cache, scaling up and down by adding or removing machines is easy, and consistent hashing makes it painless.
2. If Redis is used as a (persistent) store, a fixed key-to-instance mapping is required, so we no longer have the flexibility to add or remove machines. Otherwise we need a system that can rebalance the data when machines are added or removed, and currently only Redis Cluster can do that.
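The following is a toy consistent-hashing ring in Python, purely illustrative and not tied to any particular Redis client library; it shows how only part of the key space moves when an instance is added:

import bisect
import zlib

class ConsistentHashRing:
    # A toy ring: each node is hashed to many points; a key goes to the first
    # node point found clockwise from the key's own hash.
    def __init__(self, nodes, replicas=100):
        self.replicas = replicas
        self._ring = []   # sorted list of (hash value, node name) points
        for node in nodes:
            self.add(node)

    def _hash(self, value):
        return zlib.crc32(value.encode())

    def add(self, node):
        for i in range(self.replicas):
            point = self._hash("%s#%d" % (node, i))
            bisect.insort(self._ring, (point, node))

    def node_for(self, key):
        point = self._hash(key)
        idx = bisect.bisect(self._ring, (point, "\uffff"))
        if idx == len(self._ring):
            idx = 0   # wrap around the ring
        return self._ring[idx][1]

ring = ConsistentHashRing(["R0", "R1", "R2", "R3"])
print(ring.node_for("user:1"))
ring.add("R4")   # adding an instance remaps only a fraction of the keys
print(ring.node_for("user:1"))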
Pre-Sharding
From the discussion above we know that Redis partitioning has a problem: unless we use Redis only as a cache, adding or removing machines is troublesome.
However, in practice our capacity requirements change all the time: we may need 10 Redis machines today and 50 tomorrow.
Given that Redis is a very lightweight service (an empty instance uses only about 1 MB of memory), a simple solution to this problem is the following.
We can start many Redis instances from the very beginning, even if we only have one physical machine; for example, we can pick 32 or 64 instances as our working set. When one physical machine runs out of capacity, we can move half of the instances to a second physical machine, and so on. The number of Redis instances in the cluster stays the same, yet we have effectively added machines.
How do we move a Redis instance? When you need to move an instance to a different machine, you can follow the steps below:
1. Start a new Redis instance on the new physical machine.
2. Configure the new instance as a replica (slave) of the instance you want to move, so that the data is copied over.
3. Stop the client.
4. Update your configuration with the new IP address of the moved instance.
5. Send the SLAVEOF NO ONE command to the new replica so that it is promoted to a master.
6. Restart your clients with the new IP address.
7. Shut down the old instance that is no longer in use.
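A hedged sketch of the same steps driven from Python with redis-py; the host names old-host and new-host are placeholders, and it assumes a redis-py version where calling slaveof() with no arguments promotes the instance (the equivalent of SLAVEOF NO ONE):

import redis

old_node = redis.Redis(host="old-host", port=6379)   # the instance being moved
new_node = redis.Redis(host="new-host", port=6379)   # freshly started, empty

# Step 2: make the new, empty instance replicate the instance being moved.
new_node.slaveof("old-host", 6379)

# ... wait for replication to catch up, then stop the clients (step 3) and
# update their configuration with the new address (step 4) ...

# Step 5: promote the new instance to a master (SLAVEOF NO ONE).
new_node.slaveof()

# Steps 6-7: restart the clients against new-host, then retire the old instance.
old_node.shutdown()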
Summary
Starting from the concept of Redis partitioning, this article has introduced several common implementation approaches and their principles, and finally presented the pre-sharding solution to the problems encountered in practice.