Redis Knowledge Points and Interview Questions

2025-04-05 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/01 Report--

This article introduces Redis-related knowledge points and common interview questions. The content is detailed yet easy to understand, and has practical reference value; I believe you will come away with something after reading it, so let's take a look.

Why is Redis so fast?

1. Completely memory-based: most requests are pure in-memory operations, which are extremely fast. The data lives in memory in structures similar to a HashMap, whose advantage is that lookups and updates run in O(1) time.

2. Simple data structures and simple data operations: the data structures in Redis are purpose-built.

3. Single-threaded execution avoids unnecessary context switches and race conditions. There is no CPU cost from switching between processes or threads, no locks to acquire or release, and no performance loss from potential deadlocks.

4. Uses the I/O multiplexing model with non-blocking I/O.

5. Uses its own underlying model: the low-level implementation and the protocol for communicating with clients are Redis's own. Redis builds its own VM mechanism, because going through general system calls would waste a certain amount of time on moving and requesting data.

The points above are easy to understand. Let's briefly discuss the I/O multiplexing model.

(1) The I/O multiplexing model

The multiplexing model uses select, poll, or epoll to monitor the I/O events of multiple streams at the same time. The current thread blocks while all streams are idle, and wakes up as soon as one or more streams have I/O events; the program then polls the streams (epoll only reports the streams that actually raised events) and handles only the ones that are ready, in order. This avoids a large number of useless operations.

Here, "multiplex" refers to the multiple network connections, and "multiplexing" refers to reusing the same thread. I/O multiplexing lets a single thread handle many connection requests efficiently (minimizing the time spent on network I/O), and Redis operates on in-memory data very quickly, so in-memory operations do not become a bottleneck for Redis performance. Together, these two factors give Redis its high throughput.
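The select/epoll behavior described above can be sketched in Python with the standard selectors module. This is an illustration of the idea, not Redis's actual C event loop; a socketpair stands in for a real client connection to keep the sketch self-contained:

```python
import selectors
import socket

# One selector watches many connections; the thread blocks in select()
# until at least one of them is readable, then handles only the ready ones.
sel = selectors.DefaultSelector()

# socketpair() stands in for an accepted client connection.
client, server_side = socket.socketpair()
server_side.setblocking(False)
sel.register(server_side, selectors.EVENT_READ)

client.send(b"GET like_photo:100")   # the "client" issues a request

events = sel.select(timeout=1)       # wakes up because server_side is ready
replies = []
for key, mask in events:
    conn = key.fileobj
    data = conn.recv(1024)           # guaranteed not to block: it is ready
    replies.append(data)
```

With many registered connections, the loop would still run on a single thread, touching only the connections that `select()` reported as ready.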

Cache penetration: the requested data exists in neither the cache nor the database.

Solutions: cache an empty object, or use a Bloom filter.

Cache an empty object: if a key is found in neither redis nor the database, store an empty object in redis for that key. The next time that empty object is found in redis, we know the data cannot be queried. Disadvantage: a large number of empty objects may be created, so an expiration time should be set on them.

Bloom filter:

When a value is inserted, it is run through several (say, three) hash functions, and the bit at each resulting position is set to 1. A later lookup checks the same three positions: if any of them is 0, the value definitely does not exist.

If a value that was never inserted happens to hash to three positions that are all 1, the filter reports it as present anyway; this is a false positive.

The false positive rate depends on the number of hash functions and the length of the bit array. Disadvantages of a Bloom filter: it must be maintained, and elements cannot be deleted.

The underlying data redis stores for this is a bit array.
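The mechanics above can be sketched as a small Bloom filter over a bit array. This is an illustrative toy (class name, sizes, and the salted-SHA-256 hashing scheme are all invented for the sketch), not a production filter:

```python
import hashlib

class BloomFilter:
    def __init__(self, size=1024, k=3):
        self.size = size                    # number of bits
        self.k = k                          # number of hash functions
        self.bits = bytearray(size // 8)    # the underlying bit array

    def _positions(self, value):
        # derive k bit positions from k salted hashes of the value
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{value}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, value):
        for pos in self._positions(value):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, value):
        # False means "definitely absent"; True means "possibly present"
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(value))

bf = BloomFilter()
bf.add("user:1000")
```

Note the asymmetry: `might_contain` can only answer "definitely not" or "maybe", which is why the false-positive rate matters and why deletion is impossible (clearing a bit might break other elements).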

Two like-button requirements, and how to solve them with redis. The first is a conventional like requirement, similar to Weibo's: a user can like a message, cancel the like, query whether they have liked it, query the number of likes received, and so on. The second is slightly special: within one day a user may like any other user once; after canceling the like, they may not like the same user again until the next day, when the restriction is lifted and they can like the same user again (that is, likes accumulate across days). There is also a need to query, in real time, the overall ranking of users by number of likes received.

Requirements and solution ideas

For requirement one, a redis bitmap is used.

Introduction to bitmap

Bitmap

A bitmap is a sequence of binary digits (0 or 1); each digit sits at an offset, and operations such as AND, OR, and XOR can be performed on bitmaps.

Bitmap count

Bitmap counting means counting the number of bits with a value of 1 in bitmap. The efficiency of bitmap counting is very high.

Redis bitmap

Redis allows binary keys and binary values; a bitmap is simply a binary value.

Like / cancel like

Suppose a user with numeric id 1000 likes a photo with id 100. First, generate the redis key that stores the like data from the photo id, e.g. using the pattern like_photo:{photo_id}. For the user with id 1000 to like photo 100, simply set bit 1000 of like_photo:100 to 1 (to cancel the like, set it back to 0).

The time complexity of the redis setbit operation is O(1), so this way of recording likes is very efficient.

redis.setbit('like_photo:100', 1000, 1, function (err, ret) {
  // deal with err and ret.
});

Querying whether a user has liked a photo

When a user opens a photo, we need to query whether they have already liked it, which can be done with the redis getbit operation. For example, to find out whether the user with id 1000 has liked the photo with id 100, simply read bit 1000 of the like_photo:100 bitmap.

The time complexity of the redis getbit operation is also O(1).

redis.getbit('like_photo:100', 1000, function (err, liked) {
  // deal with err.
  // liked==1 means already liked; liked==0 means not liked yet.
});

Querying the total number of likes

For example, to show the number of likes on the photo with id 100, simply run bitcount on the like_photo:100 bitmap.

Although the time complexity of the redis bitcount operation is O(N), in most cases the data volume is small enough that bitcount efficiency is not a concern.

redis.bitcount('like_photo:100', function (err, likeCnt) {
  // deal with err and likeCnt.
});
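The setbit/getbit/bitcount flow above can be made concrete with a pure-Python stand-in for the redis bitmap commands (the class and its methods mirror the redis semantics, including growing on demand and numbering bits from the most significant bit of each byte, as redis does):

```python
class Bitmap:
    def __init__(self):
        self.bits = bytearray()

    def setbit(self, offset, value):
        byte, bit = divmod(offset, 8)
        if byte >= len(self.bits):                  # redis grows bitmaps on demand
            self.bits.extend(b"\x00" * (byte - len(self.bits) + 1))
        if value:
            self.bits[byte] |= 0x80 >> bit          # bits numbered from the MSB
        else:
            self.bits[byte] &= ~(0x80 >> bit) & 0xFF

    def getbit(self, offset):
        byte, bit = divmod(offset, 8)
        if byte >= len(self.bits):                  # out-of-range bits read as 0
            return 0
        return (self.bits[byte] >> (7 - bit)) & 1

    def bitcount(self):
        # count the number of bits set to 1 across the whole bitmap
        return sum(bin(b).count("1") for b in self.bits)

like_photo_100 = Bitmap()
like_photo_100.setbit(1000, 1)   # user 1000 likes photo 100
like_photo_100.setbit(1001, 1)   # user 1001 likes photo 100
like_photo_100.setbit(1001, 0)   # user 1001 cancels the like
```

After this sequence, getbit(1000) reports the surviving like and bitcount gives the photo's total, which is exactly the query pattern of the JS snippets above.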

Cache breakdown: the data exists in the database but not in the cache (or the cached entry has just expired), and a sudden burst of requests for it all hit the database at once, which can crash the database.

Solution: a distributed lock.

When, say, 99 requests arrive at once, let one of them acquire the lock and proceed first: it finds nothing in redis, rebuilds the cache entry, and releases the lock. The remaining 98 requests can then read from the cache.
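A minimal single-process sketch of this idea, using a threading lock to stand in for a distributed lock (in real redis one would use something like SET with NX and an expiry; the names `get` and `db_hits` here are invented for the sketch):

```python
import threading

cache = {}
db = {"item:1": "value-from-db"}
db_hits = 0
rebuild_lock = threading.Lock()   # stands in for a distributed lock

def get(key):
    global db_hits
    if key in cache:
        return cache[key]
    with rebuild_lock:
        # double-check after acquiring the lock: another request may have
        # rebuilt the cache entry while we were waiting
        if key in cache:
            return cache[key]
        db_hits += 1
        cache[key] = db[key]      # only the lock holder touches the database
        return cache[key]

# 99 concurrent requests for the same missing key
threads = [threading.Thread(target=get, args=("item:1",)) for _ in range(99)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The double-check inside the lock is what guarantees the database is queried exactly once, no matter how many requests pile up.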

Cache avalanche: a large fraction of cached entries expire at the same time, or redis itself crashes.

The solution is to build a highly available cluster, or to stagger the expiration times.
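Staggering expiration times is usually done by adding random jitter to a base TTL, so keys written together do not all expire at the same moment. A tiny sketch (the function name `jittered_ttl` and the specific numbers are invented for illustration):

```python
import random

BASE_TTL = 3600   # one-hour base expiration, in seconds

def jittered_ttl(base=BASE_TTL, spread=600):
    # add up to 10 minutes of random jitter so a batch of keys cached
    # at the same time expires spread out rather than all at once
    return base + random.randint(0, spread)

ttls = [jittered_ttl() for _ in range(1000)]
```

Each key would be set with its own jittered TTL (e.g. `SETEX key ttl value`), flattening the expiry spike into a gentle slope.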

How to keep the consistency between database and cache

Data consistency between redis and mysql

Here, we discuss four update strategies:

Update the cache before updating the database

Update the database before updating the cache

Delete the cache before updating the database

Update the database before deleting the cache

First, update the cache before updating the database

Problem: the cache update succeeds but the database update fails, leaving the data inconsistent.

Second, update the database before updating the cache

Question:

1. A updates the database

2. B updates the database

3. B writes the cache

4. A writes the cache

The cache now holds A's value while the database holds B's, so the data is inconsistent.

Two further situations are worth considering:

(1) If your business writes to the database often but reads rarely, the cache will be updated frequently before the data is ever read, which wastes performance.

(2) If the value written to the database is not stored in the cache directly but only after a series of complex calculations, then recomputing the cached value after every database write is also a waste of performance. In both cases, deleting the cache is clearly more appropriate.

Third, delete the cache before updating the database.

Question:

1. A deletes the cache

2. B queries the database and gets the old value

3. B updates the cache

4. A updates the database

The cache now holds the old value while the database holds the new one, so the data is inconsistent.

Delayed double delete:

public void write(String key, Object data) {
    redis.delKey(key);      // delete the cache first
    db.updateData(data);    // then update the database
    Thread.sleep(1000);     // wait for in-flight reads to finish
    redis.delKey(key);      // delete the cache again
}

Problem 1: delayed double deletion effectively degenerates into "update the database first, then delete the cache", which has its own issue.

For example:

1. A deletes the cache

2. B queries the database and gets the old value

3. B updates the cache

4. A updates the database

5. A deletes the cache again after the delay

After steps 1-3 execute, the database and cache are consistent (both hold the old value), so the first deletion might as well not have happened.

Steps 4-5 are then exactly: update the database first, then delete the cache.

So delayed double deletion reduces to "update the database first, then delete the cache", and the problem remains unsolved.

Why? Suppose that before step 4 executes, another query C arrives and reads the old value from the database; then, as a step 6, C inserts that old value into the cache. At that point the cache and database are inconsistent again.

The delay does not solve this, because C may insert into the cache after the deletion in step 5, for example if C hits network problems or a GC pause. This is a small probability, but not zero.

Of course, the longer the delay, the more likely the problem is avoided. If the business requirements are not strict, it can be ignored.

Problem 2: the sleep before the second deletion reduces write throughput.

Problem 3: after the database is updated, there is no guarantee that the next query will read a cached value consistent with the database.

Fourth, update the database first, and then delete the cache

Problem: the query C described above already illustrates the issue.

The probability of data inconsistency here is relatively small; whether to adopt this scheme depends on the business requirements.

Ultimate solution: request serialization.

The truly reliable solution: serialize the access operations.

Delete the cache first, and put the database update operation into an ordered queue.

All query operations that miss the cache also enter the same ordered queue.

Problems that need to be solved:

Read requests may back up in the queue and time out in large numbers, putting pressure on the database: apply rate limiting and circuit breaking.

How to avoid a large backlog of requests: split the queue horizontally to increase parallelism.

Ensure that requests for the same key are routed to the same queue.
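The serialization idea can be sketched with a single worker thread draining an ordered queue: writes delete the cache then update the database, and cache-miss reads queue behind any pending write for the same key, so they can never fill the cache with a stale value. (The op names and the single global queue are invented for the sketch; a real system would shard queues by key, as noted above.)

```python
import queue
import threading

db = {}
cache = {}
ops = queue.Queue()   # one ordered queue; all ops for a key go through it

def worker():
    while True:
        op = ops.get()
        if op is None:                 # sentinel: shut down
            break
        kind, key, value = op
        if kind == "write":
            cache.pop(key, None)       # delete the cache first
            db[key] = value            # then update the database
        elif kind == "read-miss":
            cache[key] = db.get(key)   # fill runs after any queued write

t = threading.Thread(target=worker)
t.start()

ops.put(("write", "k", 1))
ops.put(("read-miss", "k", None))   # queued behind the write: sees value 1
ops.put(("write", "k", 2))
ops.put(("read-miss", "k", None))   # queued behind the write: sees value 2
ops.put(None)
t.join()
```

Because the queue imposes a total order on operations for the key, the stale-fill race from the delayed-double-delete discussion cannot occur.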

This concludes the article on Redis knowledge points and interview questions. Thank you for reading! If you want to learn more, you are welcome to follow the industry information channel.
