What are the main problems solved by the functions of Redis

2025-01-18 Update From: SLTechnology News & Howtos, Database section


Shulou (Shulou.com) 05/31 Report

This article walks through the main problems that the functions of Redis were designed to solve. The content is kept simple and clear, and I hope it helps resolve your doubts as we study, step by step, what each feature of Redis is for.

Let's first take a look at what Redis is. The official description explains:

Redis is an open source (BSD licensed) project: an in-memory data structure store that can be used as a database, cache, and messaging middleware. It supports data types such as strings, lists, hashes, sets, sorted sets, bitmaps, hyperloglogs, and geospatial indexes. It also has built-in replication, Lua scripting, LRU eviction, transactions, and on-disk persistence, provides high availability through Redis Sentinel and automatic partitioning through Redis Cluster, along with features such as publish/subscribe and automatic failover.

To sum up, Redis provides a wealth of features that may be dazzling the first time you see them. What are these features for? What problems do they solve? Under what circumstances would you use each one? Well, let's start from scratch and work through them step by step.

1 Starting from scratch

The initial requirement was very simple: we had an API, http://api.xxx.com/hot-news, that returns a list of hot news, and its consumers complained that each request took about 2 seconds to return a result.

We then started to figure out how to improve the performance perceived by API consumers, and soon the simplest and crudest first solution came out: add an HTTP cache-control header, cache-control: max-age=600, to the API response, which allows consumers to cache the response for ten minutes.

If API consumers make effective use of the cache-control information in the response, their perceived performance improves noticeably (within the 10-minute window). But there are two drawbacks: first, consumers may get stale data for up to 10 minutes after the cache takes effect; second, a direct (uncached) request to the API still takes 2 seconds.
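The header in question is a standard HTTP response header. A minimal, framework-agnostic sketch of attaching it (the `build_headers` helper is made up for illustration, not a real framework API):

```python
# Sketch of attaching the Cache-Control header described above to an API
# response; build_headers is a hypothetical helper, not a real framework API.
def build_headers(max_age_seconds: int) -> dict:
    # max-age=600 tells consumers they may reuse the response for 10 minutes
    # without re-contacting the server.
    return {
        "Content-Type": "application/json",
        "Cache-Control": f"max-age={max_age_seconds}",
    }

headers = build_headers(600)
print(headers["Cache-Control"])  # max-age=600
```

Any real web framework exposes an equivalent way to set response headers; the point is only the header value itself.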

2 Caching in local memory

To solve the problem that a direct call to the API still takes 2 seconds, we investigated and found that the main cost was the SQL query that fetches the hot news, which takes nearly 2 seconds. So we came up with another simple and crude solution: cache the result of the SQL query directly in the memory of the API server itself, with a cache validity of 1 minute. Requests within the next minute read the cache directly and no longer spend 2 seconds executing SQL.

If the API receives 100 requests per second, that is 6,000 per minute; only the requests crowded into the first 2 seconds (while the cache is being filled) take 2 seconds, and all requests in the remaining 58 seconds respond without that wait.

Colleagues maintaining other APIs found this a good idea too, and soon we discovered that the memory of the API servers was about to be full.

3 Redis on the server side

When the memory of the API servers filled up with caches, we had to think of another solution. The most direct idea was to move all these caches onto a dedicated server configured with a large amount of memory, and that is when we turned to Redis. How to configure and deploy Redis is not explained here; the official Redis documentation covers it in detail. We then used a separate server as the Redis server, and the memory pressure on the API servers was resolved.
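Moving the cache to a dedicated server mostly changes where the reads and writes go. A sketch of the cache-aside pattern, written against any client object with `get`/`setex`-style methods; here a dict-backed stand-in replaces the real Redis connection so the example is self-contained:

```python
import json
import time

class FakeRedis:
    """Dict-backed stand-in for a Redis client, for illustration only."""
    def __init__(self):
        self._data = {}  # key -> (expires_at, value)
    def get(self, key):
        entry = self._data.get(key)
        if entry is None or entry[0] <= time.monotonic():
            return None
        return entry[1]
    def setex(self, key, ttl_seconds, value):
        self._data[key] = (time.monotonic() + ttl_seconds, value)

def get_hot_news(client):
    cached = client.get("hot-news")
    if cached is not None:
        return json.loads(cached)          # hit: served from the cache server
    value = ["headline 1", "headline 2"]   # stand-in for the slow SQL query
    client.setex("hot-news", 60, json.dumps(value))  # expire after 1 minute
    return value

r = FakeRedis()
print(get_hot_news(r))  # miss: populates the cache
print(get_hot_news(r))  # hit: no SQL this time
```

With the real redis-py client the same function works unchanged, since `get` and `setex(name, time, value)` have these shapes; only the construction of `r` would point at the dedicated cache server.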

3.1 Persistence

There are always a few days a month when a single Redis server is in a bad mood and goes on strike, losing the entire cache (Redis data lives in memory). Even though the Redis server can be brought back online, the loss of the in-memory data causes a cache avalanche: the pressure on the API servers and the database spikes all at once. This is where Redis's persistence feature comes in handy to mitigate the impact. Persistence means that Redis writes its in-memory data to disk and loads it back on restart, minimizing the effect of cache loss.
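Redis offers two persistence mechanisms, RDB snapshots and the AOF (append-only file), both enabled in redis.conf. A sketch of the relevant directives (the values here are illustrative defaults, not recommendations):

```conf
# RDB: snapshot the dataset to dump.rdb if at least 1 key changed in 900s,
# 10 keys in 300s, or 10000 keys in 60s.
save 900 1
save 300 10
save 60 10000

# AOF: additionally log every write command and replay it on restart.
appendonly yes
appendfsync everysec   # fsync once per second: at most ~1s of writes lost
```

RDB restarts faster but can lose everything since the last snapshot; AOF loses at most about a second of writes at the cost of larger files.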

3.2 Sentinel and Replication

A Redis server going on strike without warning is a hassle. So what do we do? Answer: keep a backup, and switch to it when the primary goes down. But how do you know a Redis server is down, how do you switch over, and how do you ensure the backup is a full copy of the original? This is where Sentinel and Replication come in. Sentinel manages multiple Redis servers and provides monitoring, notification, and automatic failover; Replication is responsible for keeping a Redis server synchronized with multiple replicas. Together, these two features give Redis high availability. Incidentally, Sentinel itself makes use of Redis's publish/subscribe capability.
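A Sentinel process is driven by its own sentinel.conf. A minimal sketch (addresses and timings are illustrative):

```conf
# Monitor the master named "mymaster"; 2 sentinels must agree it is down
# before a failover is triggered (the quorum).
sentinel monitor mymaster 127.0.0.1 6379 2
# Consider the master down after 5s without a valid reply.
sentinel down-after-milliseconds mymaster 5000
# Give a failover up to 60s before retrying.
sentinel failover-timeout mymaster 60000
```

Replicas are discovered automatically from the master, so only the master needs to be listed.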

3.3 Cluster

There is always an upper limit on the resources of a single server. For CPU and I/O, master-slave replication lets you separate reads from writes, shifting part of the CPU and I/O pressure to the slave servers. But what about memory? The master-slave model only replicates the same data; it cannot expand memory horizontally, and the memory of a single machine can only be grown up to some ceiling. So we need a solution that lets us scale out: each server is responsible for only part of the data, and together the servers form a whole that looks like a single centralized server to outside consumers (the distinction between distributed and centralized architectures was discussed earlier in the "Interpreting REST" series of blog posts).

Before Redis's official distributed solution came out, there were two popular schemes, twemproxy and codis, which generally rely on a proxy to distribute requests; that is, Redis itself does not care about distribution, which is handled by twemproxy or codis instead. The cluster scheme officially provided by Redis implements this distributed work inside each Redis server, so the servers can satisfy the distribution requirement on their own, without extra components. We will not weigh the pros and cons of these solutions here; let's look instead at what this "distribution" actually has to handle. What exactly is the problem that twemproxy and codis solve outside Redis, and that Cluster integrates into the Redis servers themselves?

As we said earlier, a distributed service should look like a centralized service to the outside world. To achieve this, we face one problem that must be solved: adding or removing servers in the distributed service should be invisible to the clients consuming it. This means a client must not see through the distributed service and tie itself to one particular server, because once that happens you can no longer add new servers or replace failed ones.

There are two ways to solve this problem:

The first way is the most straightforward: add an intermediate layer to isolate the specific dependency. This is the approach twemproxy takes: all clients consume the Redis service only through it, and it isolates the dependency (though you will find that twemproxy itself becomes a single point of failure). In this setup, each Redis server is independent and unaware of the others' existence.

The second way is to let the Redis servers know about each other and use a redirect mechanism to guide the client to the right place. For example, a client connects to some Redis server and asks it to perform an operation; that server finds it cannot complete the operation, so it hands the client the information of the server that can, and asks the client to request that server instead. For this to work, every Redis server must keep a complete copy of the distributed service's topology; otherwise, how would it know which other server the client should go to?

Whichever way you choose, the two have one thing in common: the information about all the servers in the distributed service and the services each can provide. This information must exist somewhere. The difference is that the first way manages it separately, in a component that coordinates the independent Redis servers behind it, while the second way lets every Redis server hold this information and know of the others' existence, achieving the same goal without an additional component.

In its concrete implementation, Redis Cluster adopts the concept of hash slots: 16,384 slots are allocated in advance, and the slot for a given key is obtained by computing CRC16(key) % 16384. On the server side, each server is responsible for a subset of the slots; when servers are added or removed, slots and their corresponding data are migrated between servers. Meanwhile, each server holds the complete mapping of slots to servers, which is what allows it to redirect client requests.
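The slot computation can be reproduced in a few lines. Redis Cluster uses the CRC16-CCITT (XMODEM) variant; the sketch below implements it bit by bit (a real client would use a lookup table for speed):

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT (XMODEM): polynomial 0x1021, initial value 0."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key: bytes) -> int:
    # Redis Cluster pre-allocates 16384 slots; every key maps to exactly one.
    return crc16(key) % 16384

# The standard CRC16-XMODEM check value for "123456789" is 0x31C3.
print(hex(crc16(b"123456789")))  # 0x31c3
print(key_slot(b"123456789"))    # 12739
```

Real clients also honor "hash tags": if a key contains a `{...}` section, only the substring inside the braces is hashed, which lets related keys land in the same slot on purpose.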

4 Redis on the client side

Section 3 above walked through the evolution of the Redis server side, explaining how Redis grew from a stand-alone service into a highly available, decentralized, distributed storage system. This section focuses on the Redis features that clients consume.

4.1 Data types

Redis supports a wide range of data types, from the most basic string to complex, commonly used data structures:

String: the most basic type, a binary-safe string of up to 512 MB.

List: a list of strings that preserves insertion order.

Set: an unordered collection of strings with no duplicate elements.

Sorted set: a collection of strings sorted by an associated score.

Hash: a collection of key-value pairs.

Bitmap: bit-level operations on a string value.

HyperLogLog: a probability-based data structure (used for cardinality estimation).

These numerous data types mainly exist to support the needs of various scenarios; of course, each operation on each type has its own time complexity. In effect, these server-side data structures are a concrete implementation of Remote Data Access (RDA), which I introduced in the "Interpreting REST" series on network-based application architecture styles: by executing a set of standard commands on the server, the client obtains a reduced result set, which simplifies client code and improves network performance. For example, without a list type you could only store the list as a string; the client would have to fetch the complete list, operate on it, and submit it back to Redis in full, which wastes a great deal of bandwidth.
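That waste can be made concrete. With only a string type, reading one page of a list means transferring and parsing everything; with a server-side list type, the server does the slicing. A self-contained sketch (plain Python standing in for the server, not the real client API):

```python
import json

# Server-side state: the whole hot-news list.
news = [f"headline {i}" for i in range(10_000)]

# Option 1: store the list as one big string. The client must fetch and
# parse the entire payload even to read a single page.
as_string = json.dumps(news)
page_via_string = json.loads(as_string)[0:10]  # whole payload crossed the wire

# Option 2: a server-side list command like LRANGE returns only the slice.
def lrange(lst, start, stop):
    """Sketch of LRANGE semantics: stop is inclusive, as in Redis."""
    return lst[start:stop + 1]

page_via_list = lrange(news, 0, 9)
print(len(as_string) > 100_000)          # True: the string round trip is huge
print(page_via_list == page_via_string)  # True: same result, tiny transfer
```

The same ten headlines come back either way; the difference is how many bytes had to move.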

4.2 Transactions

Each of the data types above has its own set of commands. In many cases we need to execute several commands at once and have them take effect together. Redis's support for transactions stems from this requirement: the ability to queue multiple commands and execute them sequentially in one atomic step, with no other client's commands interleaved between them (note that Redis does not roll back commands that fail during execution).
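The queue-then-execute shape of MULTI/EXEC can be sketched with a plain dict standing in for the Redis keyspace (hypothetical stand-in, not the real client API):

```python
# Minimal sketch of MULTI/EXEC-style queuing. Commands are buffered, then
# applied back to back with nothing interleaved between them; like real
# Redis, there is no rollback if one command fails.
class MiniTransaction:
    def __init__(self, store):
        self.store = store     # dict standing in for the keyspace
        self.queue = []        # commands buffered between MULTI and EXEC
    def set(self, key, value):
        self.queue.append(("set", key, value))
    def incr(self, key):
        self.queue.append(("incr", key))
    def execute(self):
        results = []
        for op, *args in self.queue:
            if op == "set":
                key, value = args
                self.store[key] = value
                results.append(True)
            else:  # incr
                key = args[0]
                self.store[key] = int(self.store.get(key, 0)) + 1
                results.append(self.store[key])
        self.queue.clear()
        return results

store = {}
tx = MiniTransaction(store)
tx.set("page", "home")
tx.incr("page_views")
tx.incr("page_views")
print(tx.execute())  # [True, 1, 2]
```

In real Redis the same flow is MULTI, the queued commands, then EXEC, which returns the list of replies in order.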

4.3 Lua scripts

Going beyond transactions, if we need to perform more complex server-side operations at one time, including some logical judgment (for example, fetching a cached value and extending its expiration time in one step), then Lua becomes useful. Redis guarantees the atomicity of a Lua script, which can replace the transaction-related commands in certain scenarios. It is the concrete equivalent of Remote Evaluation (REV), also introduced in the series on network-based application architecture styles.

4.4 Pipeline

Because the connection between a Redis client and server is based on TCP, by default the client sends one command and waits for its reply before sending the next, paying one network round trip per command. Pipelining allows multiple commands to be sent over a single connection before the replies are read, saving much of that round-trip overhead. The difference between pipelines and transactions is that pipelining is designed to save communication overhead and does not guarantee atomicity.
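The saving is purely in round trips, which a fake connection can make visible (an illustrative stand-in, not a real network client):

```python
# Sketch of why pipelining helps: count round trips with a fake connection.
class FakeConnection:
    def __init__(self):
        self.round_trips = 0
    def send(self, commands):
        """One call = one network round trip, however many commands it carries."""
        self.round_trips += 1
        return [f"OK {c}" for c in commands]

commands = ["SET a 1", "SET b 2", "SET c 3"]

plain = FakeConnection()
for cmd in commands:            # without pipelining: reply awaited per command
    plain.send([cmd])

piped = FakeConnection()
replies = piped.send(commands)  # pipelined: all commands in one round trip

print(plain.round_trips, piped.round_trips)  # 3 1
```

With a real network in between, each avoided round trip saves one full client-server latency, which dominates for small commands.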

4.5 Distributed locks

Officially, the Redlock algorithm is recommended for the multi-instance case; its single-instance building block uses the string type: when acquiring the lock, SET a specific key to a random value, only if the key does not already exist and with an expiry; when releasing, use a Lua script that fetches the value, compares it, and only then deletes the key. The specific commands are as follows:

SET resource_name my_random_value NX PX 30000

if redis.call("get", KEYS[1]) == ARGV[1] then
    return redis.call("del", KEYS[1])
else
    return 0
end
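For intuition, here is the same compare-and-delete logic sketched in Python against a plain dict standing in for Redis (hypothetical stand-in; the TTL from PX 30000 is omitted for brevity):

```python
import uuid

store = {}  # dict standing in for the Redis keyspace

def acquire(key):
    """SET key value NX: succeed only if the key does not already exist."""
    token = uuid.uuid4().hex  # the "my_random_value" in the SET command
    if key not in store:
        store[key] = token
        return token
    return None

def release(key, token):
    """Mirror of the Lua script: delete only if the value still matches,
    so one client can never release a lock now held by another."""
    if store.get(key) == token:
        del store[key]
        return True
    return False

t1 = acquire("resource_name")
assert t1 is not None
assert acquire("resource_name") is None            # second client is blocked
assert release("resource_name", "wrong") is False  # mismatched token: kept
assert release("resource_name", t1) is True        # owner releases cleanly
```

The random value is what makes the release safe: without it, a client whose lock expired could delete a lock that another client has since acquired.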
