
A summary of how Redis is used in distributed systems


1. Why use Redis?

There are two main considerations for using Redis in a project: performance and concurrency. If you only need distributed locking and similar features, other middleware such as ZooKeeper can serve instead; Redis is not strictly necessary.

Performance:

When we hit a SQL query that takes a long time to execute and whose results do not change often, it is particularly suitable to cache the results. Subsequent requests then read from the cache and can be answered quickly.
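A minimal cache-aside sketch of that pattern in Java, assuming the Jedis client; slowSqlQuery is a hypothetical stand-in for the expensive SQL call:

```java
import redis.clients.jedis.Jedis;

public class CacheAsideExample {
    // Hypothetical stand-in for the slow, rarely-changing SQL query.
    static String slowSqlQuery(String id) {
        return "result-for-" + id;
    }

    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            String key = "report:42";
            String cached = jedis.get(key);      // 1. try the cache first
            if (cached == null) {                // 2. cache miss: run the SQL
                cached = slowSqlQuery("42");
                jedis.setex(key, 600, cached);   // 3. cache the result for 10 minutes
            }
            System.out.println(cached);          // subsequent requests hit the cache
        }
    }
}
```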

This is especially true in a flash-sale (seckill) system, where at the same instant almost everyone is placing orders and performing the same operation: querying the same data from the database.

There is no fixed standard for response time; it depends on the interaction. Ideally, both page jumps and in-page operations should feel instantaneous.

Concurrency:

Under heavy concurrency, if all requests go straight to the database, the database will throw connection exceptions. At that point you need Redis as a buffer, so that requests hit Redis first instead of hitting the database directly.

Frequently asked questions when using Redis include: cache and database double-write consistency, cache avalanche, cache penetration, and concurrent contention for a Key. These are covered in the sections below.

2. Why is single-threaded Redis so fast?

This question probes Redis's internal mechanism. Many people do not know that Redis uses a single-threaded working model.

The main reasons come down to three points: pure in-memory operation; a single-threaded design that avoids frequent context switching; and a non-blocking I/O multiplexing mechanism.

Let's look at the I/O multiplexing mechanism in detail with an analogy: Xiao Ming opens a fast-food restaurant in city A, handling same-city delivery. Xiao Ming hires a group of couriers, but money is tight, and there is only enough left to buy one delivery car.

Operating mode 1

Every time a customer places an order, Xiao Ming assigns a courier to watch it, and the courier then drives the car to deliver it. Gradually, Xiao Ming discovers the following problems with this mode:

The couriers spend all their time fighting over the car, and most of them sit idle; they can only deliver once they get the car.

As orders and couriers increase, Xiao Ming finds the shop getting more and more crowded, with no room to hire new couriers, and coordination among the couriers eats up a lot of time. Weighing these shortcomings, Xiao Ming learns from the experience and proposes a second mode of operation.

Operating mode 2

Xiao Ming employs only one courier. When a customer places an order, Xiao Ming marks it with the delivery location and queues the orders up in one place. The courier then drives out, delivers the orders one by one, and comes back for the next batch. Comparing the two modes, the second is clearly more efficient.

In the analogy above:

each courier → each thread
each order → each Socket (I/O stream)
the delivery location of an order → the different states of a Socket
a customer's delivery request → a request from a client
Xiao Ming's operating mode → the code running on the server
one car → the number of CPU cores

This leads to the following conclusion: operating mode 1 is the traditional concurrency model, where each I/O stream (order) is managed by a new thread (courier). Operating mode 2 is I/O multiplexing: a single thread (one courier) manages multiple I/O streams by tracking the state of each one (the delivery location of each order).

Mapping the analogy onto the real Redis threading model:

When a Redis client operates, it produces Sockets with different event types. On the server side, an I/O multiplexing program places them in a queue; the file event dispatcher then takes them from the queue in turn and forwards them to the appropriate event handlers.
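Redis's event loop is written in C, but the same single-threaded multiplexing idea can be sketched with Java NIO's Selector (a minimal echo-style sketch, not Redis's actual code):

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.*;
import java.util.Iterator;

public class MultiplexSketch {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();                 // the single "courier"
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(6400));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            selector.select();                               // wait for any socket event
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {                           // the "file event dispatcher"
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {                    // event: new client connection
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {               // event: client sent data
                    SocketChannel client = (SocketChannel) key.channel();
                    ByteBuffer buf = ByteBuffer.allocate(256);
                    if (client.read(buf) == -1) { client.close(); continue; }
                    buf.flip();
                    client.write(buf);                       // "handler": echo it back
                }
            }
        }
    }
}
```

One thread tracks the state of every connection, just as the single courier tracks the delivery location of every order.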

3. Data types and usage scenarios of Redis

A qualified programmer will use all five types.

String

For the most common set/get operations, the Value can be a string or a number. Strings are generally used for ordinary caching and for counter-style functionality.
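A tiny sketch, assuming the Jedis client:

```java
import redis.clients.jedis.Jedis;

public class StringCounterExample {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            jedis.set("greeting", "hello");                   // plain set/get caching
            System.out.println(jedis.get("greeting"));
            jedis.incr("page:home:views");                    // atomic counter
            jedis.incrBy("page:home:views", 10);
            System.out.println(jedis.get("page:home:views")); // "11"
        }
    }
}
```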

Hash

Here the Value stores a structured object, making it convenient to manipulate individual fields. When building single sign-on, this structure can store the user's information, with the CookieId as the Key and a 30-minute expiration time, which nicely simulates a Session.
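A sketch of that session pattern with Jedis; the cookie value and fields are illustrative assumptions:

```java
import java.util.Map;
import redis.clients.jedis.Jedis;

public class HashSessionExample {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            String key = "session:cookie-abc123";    // hypothetical CookieId as Key
            jedis.hset(key, "userId", "1001");
            jedis.hset(key, "name", "alice");
            jedis.expire(key, 30 * 60);              // 30-minute Session-like expiry
            Map<String, String> user = jedis.hgetAll(key);
            System.out.println(user);
        }
    }
}
```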

List

With the List data structure, you can build a simple message queue. You can also use the lrange command for Redis-based paging, with excellent performance and a good user experience.
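A sketch of both uses with Jedis:

```java
import redis.clients.jedis.Jedis;

public class ListQueuePagingExample {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // Simple queue: producers LPUSH, consumers RPOP (FIFO).
            jedis.lpush("tasks", "task1", "task2", "task3");
            System.out.println(jedis.rpop("tasks"));            // "task1"
            // Paging with lrange: fetch page 0 of a timeline, 2 items per page.
            jedis.lpush("timeline", "post1", "post2", "post3", "post4");
            System.out.println(jedis.lrange("timeline", 0, 1)); // newest two posts
        }
    }
}
```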

Set

Because a Set holds a collection of non-repeating values, it can implement global deduplication. Our systems are usually deployed as a cluster, so using the Set that ships with the JVM is cumbersome. In addition, with intersection, union, and difference operations you can compute shared preferences, combined preferences, each user's unique preferences, and so on.
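A sketch of the preference computations with Jedis:

```java
import redis.clients.jedis.Jedis;

public class SetPreferenceExample {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            jedis.sadd("likes:alice", "jazz", "rock", "pop");  // duplicates ignored
            jedis.sadd("likes:bob", "rock", "pop", "metal");
            System.out.println(jedis.sinter("likes:alice", "likes:bob")); // shared
            System.out.println(jedis.sunion("likes:alice", "likes:bob")); // combined
            System.out.println(jedis.sdiff("likes:alice", "likes:bob"));  // alice-only
        }
    }
}
```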

Sorted Set

Sorted Set adds a weight parameter, Score, and the elements in the collection are ordered by Score. It suits ranking applications and TOP N queries, and it can also implement delayed tasks (for example, with a timestamp as the Score).
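A leaderboard sketch with Jedis:

```java
import redis.clients.jedis.Jedis;

public class ZsetLeaderboardExample {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            jedis.zadd("leaderboard", 420, "alice");   // Score acts as the weight
            jedis.zadd("leaderboard", 310, "bob");
            jedis.zadd("leaderboard", 555, "carol");
            // TOP 2 ranking, highest Score first.
            System.out.println(jedis.zrevrange("leaderboard", 0, 1)); // carol, alice
        }
    }
}
```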

4. Redis's expiration strategy and memory eviction mechanism

This is where you can tell whether someone has really worked with Redis. For example, your Redis can only hold 5 gigabytes of data, but you keep writing until you reach 10 gigabytes; 5 gigabytes' worth will be deleted. How is it deleted? Have you thought about this question?

Correct solution: Redis adopts the strategy of periodic deletion + lazy deletion.

Why not use a timed-deletion strategy?

Timed deletion means using a timer to watch each Key and delete it automatically the moment it expires. Although this frees memory promptly, it consumes CPU resources. Under heavy concurrent load, CPU time should go to processing requests, not deleting Keys, so this strategy is not adopted.

How do periodic deletion and lazy deletion work?

With periodic deletion, Redis by default checks every 100ms for expired Keys and deletes any it finds. Note that Redis does not scan every Key on each 100ms pass; it checks a random sample. Relying on periodic deletion alone would leave many expired Keys undeleted, which is where lazy deletion comes in: when you access a Key, Redis first checks whether it has expired, and deletes it on the spot if it has.

Are there still problems with periodic deletion plus lazy deletion?

Yes. If periodic deletion misses a Key, and you never request that Key again, lazy deletion never triggers either. Redis's memory usage then climbs higher and higher, and the memory eviction mechanism has to take over.

There is a line of configuration in redis.conf:

# maxmemory-policy volatile-lru
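A fuller redis.conf sketch of how this is typically paired with a memory cap (the values here are illustrative assumptions, not from the article):

```
# Cap Redis memory; eviction kicks in once this limit is reached.
maxmemory 5gb
# Evict the least recently used Key from the whole keyspace.
maxmemory-policy allkeys-lru
```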

This line configures the memory eviction policy. The options are:

noeviction: when memory cannot hold new writes, new writes report an error.
allkeys-lru: when memory cannot hold new writes, remove the least recently used Key from the whole keyspace. (Recommended; this is what our project uses.)
allkeys-random: when memory cannot hold new writes, remove a random Key from the whole keyspace. (Hardly anyone uses this: why evict a random Key rather than the least recently used one?)
volatile-lru: when memory cannot hold new writes, remove the least recently used Key from among those with an expiration time set. This applies when Redis serves as both a cache and persistent storage. (Not recommended.)
volatile-random: when memory cannot hold new writes, remove a random Key from among those with an expiration time set. (Also not recommended.)
volatile-ttl: when memory cannot hold new writes, Keys with the nearest expiration time are evicted first, from among those with an expiration time set. (Not recommended.)

5. The double-write consistency problem between Redis and the database

Consistency here can be subdivided into eventual consistency and strong consistency. With database-and-cache double writes, inconsistencies are bound to occur at some point. If the data has strong consistency requirements, it must not be cached; everything we do can only guarantee eventual consistency.

Moreover, the measures we take can fundamentally only reduce the probability of inconsistency. First, adopt the correct update strategy: update the database first, then delete the cache. Second, since deleting the cache may fail, provide a compensating measure, such as retrying via a message queue.
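A minimal sketch of that strategy, assuming Jedis; the updateDatabase helper is a stand-in for your real DAO, and the in-process queue stands in for a real message queue:

```java
import java.util.concurrent.LinkedBlockingQueue;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.exceptions.JedisException;

public class WriteThenEvictExample {
    // Stand-in for a real message queue used to retry failed cache deletes.
    static final LinkedBlockingQueue<String> retryQueue = new LinkedBlockingQueue<>();

    static void updateDatabase(String id, String value) { /* hypothetical DAO call */ }

    public static void main(String[] args) {
        String key = "user:42";
        updateDatabase("42", "newValue");        // 1. update the database first
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            jedis.del(key);                      // 2. then delete the cache
        } catch (JedisException e) {
            retryQueue.offer(key);               // 3. compensate: retry the delete later
        }
    }
}
```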

6. How to deal with cache penetration and cache avalanche

Small and medium-sized traditional software shops rarely run into these two problems, but projects with millions of users and heavy concurrency must think them through. Cache penetration means attackers deliberately request Keys that do not exist in the cache, so every request falls through to the database, exhausting its connections.

Solutions to cache penetration:

Use a mutex: when the cache misses, acquire a lock first; whoever gets the lock queries the database and rebuilds the cache (sketched below), while threads that fail to get the lock sleep briefly and retry.
Use an asynchronous update strategy: return immediately whether or not the Key has a value. The Value itself carries a logical expiration time; if it has logically expired, asynchronously start a thread to read the database and refresh the cache. This requires cache warm-up (loading the cache before the project starts serving traffic).
Provide an interception mechanism that quickly judges whether a request is valid, for example a Bloom filter that maintains the set of legitimate Keys. If the Key carried by a request is invalid, return immediately.
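A minimal sketch of the mutex variant with Jedis; loadFromDb is a hypothetical helper, and the lock handling is simplified for illustration:

```java
import redis.clients.jedis.Jedis;

public class CacheMissMutexExample {
    static String loadFromDb(String key) { return "db-value"; }  // hypothetical

    static String getWithMutex(Jedis jedis, String key) throws InterruptedException {
        while (true) {
            String value = jedis.get(key);
            if (value != null) return value;              // cache hit
            // Cache miss: only the thread that wins the mutex rebuilds the cache.
            if (jedis.setnx("mutex:" + key, "1") == 1) {
                jedis.expire("mutex:" + key, 10);         // avoid a stuck lock
                value = loadFromDb(key);
                jedis.setex(key, 300, value);
                jedis.del("mutex:" + key);
                return value;
            }
            Thread.sleep(50);                             // lost the race: retry shortly
        }
    }

    public static void main(String[] args) throws InterruptedException {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            System.out.println(getWithMutex(jedis, "hot:key"));
        }
    }
}
```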

Cache avalanche means a large swath of cache entries expire at the same moment, so the next wave of requests all hit the database, exhausting its connections.

Solutions to cache avalanche:

Add a random value to each cache expiration time to avoid mass simultaneous expiry (sketched below).
Use a mutex, though this noticeably reduces throughput.
Use a double cache: cache A with a 20-minute expiration time and cache B with no expiration time, both warmed up in advance. Read from cache A first and return directly on a hit; if A has no data, read from B, return directly, and asynchronously start an update thread that refreshes both cache A and cache B.
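The random-jitter idea in a few lines (sketch, assuming Jedis):

```java
import java.util.concurrent.ThreadLocalRandom;
import redis.clients.jedis.Jedis;

public class JitterTtlExample {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            int baseTtl = 20 * 60;                                  // 20 minutes
            int jitter = ThreadLocalRandom.current().nextInt(300);  // + up to 5 minutes
            jedis.setex("product:42", baseTtl + jitter, "data");    // spreads out expiry
        }
    }
}
```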

7. How to solve concurrent contention for a Key in Redis

The problem: multiple subsystems Set the same Key at the same time. What should we pay attention to here? Most people recommend the Redis transaction mechanism.

However, I do not recommend Redis transactions here. Our production environment is basically a Redis cluster with data sharding, and when a transaction touches multiple Keys, those Keys may not live on the same redis-server, which makes Redis transactions of little practical use.

If operations on this Key do not need to be ordered:

In that case, set up a distributed lock; every subsystem contends for it, and whoever grabs the lock performs the set. This is relatively simple (a sketch follows).
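A sketch of grab-the-lock-then-set with Jedis; the SetParams-based SET NX EX call assumes Jedis 3 or newer:

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.params.SetParams;

public class GrabLockThenSetExample {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // Atomic "set if not exists, with a 10s expiry" acts as the lock.
            String ok = jedis.set("lock:key1", "owner-A",
                    SetParams.setParams().nx().ex(10));
            if ("OK".equals(ok)) {
                try {
                    jedis.set("key1", "valueA");   // we hold the lock: do the set
                } finally {
                    jedis.del("lock:key1");        // release (simplified; see note)
                }
            }
            // else: another subsystem holds the lock; retry or give up.
        }
    }
}
```

Note that releasing with a plain DEL can delete another owner's lock if ours already expired; production code checks the owner value before deleting, typically in a Lua script.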

If operations on the Key do need to be ordered:

Suppose you have a key1, system A needs to set key1 to valueA, system B needs to set key1 to valueB, and system C needs to set key1 to valueC.

We expect key1's value to change in the order valueA → valueB → valueC. In that case, we need to save a timestamp when writing the data to the database.

Suppose the timestamps are as follows:

System A: key1 → {valueA 3:00}
System B: key1 → {valueB 3:05}
System C: key1 → {valueC 3:10}

Now suppose system B grabs the lock first and sets key1 to {valueB 3:05}. When system A then grabs the lock, it finds that valueA's timestamp is earlier than the one already in the cache, so it skips the set operation, and so on (a sketch follows). Other approaches, such as queues, can also turn the set operations into serial access.
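A sketch of the timestamp check, assuming the cached value embeds its write timestamp (the "value|timestamp" format here is a made-up convention; in real use this runs while holding the distributed lock):

```java
import redis.clients.jedis.Jedis;

public class OrderedSetExample {
    // Set key only if our timestamp is newer than the cached one.
    static void setIfNewer(Jedis jedis, String key, String value, long ts) {
        String current = jedis.get(key);               // e.g. "valueB|305"
        if (current != null) {
            long cachedTs = Long.parseLong(current.split("\\|")[1]);
            if (ts <= cachedTs) return;                // stale write: skip the set
        }
        jedis.set(key, value + "|" + ts);
    }

    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            setIfNewer(jedis, "key1", "valueB", 305);  // system B wins the lock first
            setIfNewer(jedis, "key1", "valueA", 300);  // system A: older, skipped
            System.out.println(jedis.get("key1"));     // "valueB|305"
        }
    }
}
```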
