
Feel the unique charm of Redis from a small requirement (requirement design)


This article shares a simple little requirement: how it is designed and implemented, and how Redis is used along the way.

Redis is widely used in real projects. This article starts from a simple requirement and walks through how that requirement is carried out from beginning to end and how it is improved step by step. I previously wrote an article on how to put page advertisements online at a scheduled time and take them offline automatically when they expire, which also covers practical use of Redis in a project; take a look if you are interested.

Demand

Suppose we have an app, and the product team proposes a new feature called "programmer tree hole". The details of the feature do not matter; what matters is this: the first time a user enters the feature, an agreement page is shown, and the user must tick the checkbox and click Confirm to enter; on every later visit, the agreement page is skipped and the user goes straight into the feature. As shown in the following figure.

Prototype diagram

Demand analysis

The requirement looks simple enough; let's analyze it.

1. When a user clicks on this feature, the frontend needs to know which page to display to the user. In this step, the frontend needs to call a backend API, and the backend tells the frontend whether the user has already agreed to the agreement.

2. When the user ticks the agreement and clicks Confirm, the backend needs to record this (i.e. record that the user has agreed to the agreement). In this step, the frontend needs to call a backend API at the moment Confirm is clicked.

Outline design

As the requirements analysis above shows, the backend needs to tell the frontend whether the user has agreed to the agreement, so the backend has to record this information, preferably in the database. That means we need a table to record the users who have agreed, with a structure roughly like: id, customer number, insertion time.

Detailed design

1. Record whether a customer has agreed to the agreement, and provide a query function (query whether a customer has agreed)

2. How to store both the customers who have agreed and those who have not

3. How to query efficiently whether a customer has agreed

4. How to keep the service and the database available under high concurrency

Function realization

The backend provides two interfaces

1. hasAgree(): query whether the user has agreed to the agreement

2. recordAgree(): record that the user has agreed to the agreement
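
Sketched as a plain Java interface (a minimal sketch; the article only fixes the two method names, the parameter name is illustrative):

```java
// Hypothetical service interface for the agreement feature.
public interface AgreementService {

    /** Returns true if the given customer has already agreed to the agreement. */
    boolean hasAgree(String customerNo);

    /** Records that the given customer has agreed to the agreement. */
    void recordAgree(String customerNo);
}
```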

The first edition: just the database

It's easy! It's just CRUD, no big deal. When the user comes in, check whether there is a record in the database; if there is no record, return that the user has not agreed, and the frontend shows the agreement page; otherwise the feature page is shown. After the user clicks Confirm, the backend records that the user has agreed and writes it to the database. One query and one insert, done in five minutes.

Straight to the code.
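
A minimal sketch of this first edition, assuming plain JDBC and an illustrative table t_agreement_record(id, customer_no, create_time) with an auto-increment id (all names and connection settings here are assumptions, not from the original article):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class AgreementServiceDbOnly {

    // Illustrative connection settings; a real project would use a connection pool.
    private Connection getConnection() throws Exception {
        return DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/app", "user", "password");
    }

    /** Query whether the customer already has an agreement record. */
    public boolean hasAgree(String customerNo) {
        String sql = "SELECT 1 FROM t_agreement_record WHERE customer_no = ? LIMIT 1";
        try (Connection conn = getConnection();
             PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, customerNo);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next();   // a row exists -> the user has already agreed
            }
        } catch (Exception e) {
            throw new RuntimeException("query failed", e);
        }
    }

    /** Insert a record meaning "this customer has agreed". */
    public void recordAgree(String customerNo) {
        String sql = "INSERT INTO t_agreement_record (customer_no, create_time) VALUES (?, NOW())";
        try (Connection conn = getConnection();
             PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, customerNo);
            ps.executeUpdate();     // one insert per confirmation click
        } catch (Exception e) {
            throw new RuntimeException("insert failed", e);
        }
    }
}
```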

The first version of the code looks like the above, and I think any beginner programmer can write it. If the number of users is small and the feature is not clicked often, this is still barely justifiable. Why only barely? Because there is a hidden danger: every click queries the database, so if someone attacks maliciously and simulates high concurrency, a flood of requests will instantly hit the database, and the database is very likely to go down. Even if the database does not go down, querying it on every single click is wasteful. So this is a hidden, or at least potential, danger, and we will address it in the second edition.

The second edition introduces Redis cache

Since checking the database every time is wasteful, shall we use a cache? Check the cache for the corresponding data first; if it is there, return it directly. If not, check the database, and if the database has it, write it into the cache. This way Redis shares some of the pressure on the database.

The code is presented.
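
A sketch of the second edition using the Jedis client, with illustrative names: the cache here only stores customers who have already agreed (a simple set membership check), and falls back to the database on a miss; the JDBC calls from the first edition are stubbed out for brevity.

```java
import redis.clients.jedis.Jedis;

public class AgreementServiceWithCache {

    // Illustrative key name for the set of customers who have agreed.
    private static final String AGREED_SET_KEY = "coderTreeHole:agreed";

    public boolean hasAgree(String customerNo) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // 1. Check the cache first.
            if (jedis.sismember(AGREED_SET_KEY, customerNo)) {
                return true;
            }
            // 2. Cache miss: fall back to the database.
            boolean agreed = queryDb(customerNo);
            // 3. Backfill the cache so the next request does not hit the database.
            if (agreed) {
                jedis.sadd(AGREED_SET_KEY, customerNo);
            }
            return agreed;
        }
    }

    public void recordAgree(String customerNo) {
        insertDb(customerNo);                        // write to the database first
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            jedis.sadd(AGREED_SET_KEY, customerNo); // keep cache and database consistent
        }
    }

    private boolean queryDb(String customerNo) { /* JDBC query from the first edition */ return false; }
    private void insertDb(String customerNo)   { /* JDBC insert from the first edition */ }
}
```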

This version is a little better, and some of the requests are apportioned to redis, reducing the pressure on the database.

The third edition addresses cache penetration

As the number of customers grows, this feature gets clicked more and more often. Now suppose someone repeatedly opens the feature, sees the agreement pop up, exits without confirming, opens it again, exits again, and never clicks Confirm.

What's the problem with that?

In this case, the user's record is neither in the cache nor in the database, so every request bypasses the cache and goes straight to the database. Too many requests of this kind is exactly the problem known as cache penetration. So in the third edition, let's solve cache penetration.

Solving cache penetration: since the record exists in neither the database nor the cache, we can also save the "not in the database" result to Redis. The Redis data type needs to change from a set to a hash so that a status value can be recorded.

As you can see, we do not set an expiration time on this key-field-value, because this key can be treated as a hot key, and the usual way to handle a hot key is to keep it permanent or give it as long an expiration as possible.
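
A sketch of the third edition: the set is replaced by a hash (key → field → value), and customers who have not agreed are cached too, with value "0", so their repeated requests no longer reach the database. Key and field names are illustrative, and no TTL is set, as discussed above.

```java
import redis.clients.jedis.Jedis;

public class AgreementServiceNoPenetration {

    private static final String HASH_KEY = "coderTreeHole_Agreement_Check"; // hot key, no TTL

    public boolean hasAgree(String customerNo) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            String flag = jedis.hget(HASH_KEY, customerNo);
            if (flag != null) {
                return "1".equals(flag);             // "1" = agreed, "0" = not agreed
            }
            // Cache miss: check the database once, then cache the result either way,
            // so users who have not agreed no longer penetrate through to the database.
            boolean agreed = queryDb(customerNo);
            jedis.hset(HASH_KEY, customerNo, agreed ? "1" : "0");
            return agreed;
        }
    }

    public void recordAgree(String customerNo) {
        insertDb(customerNo);
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            jedis.hset(HASH_KEY, customerNo, "1");   // flip the flag once the user confirms
        }
    }

    private boolean queryDb(String customerNo) { /* JDBC query as before */ return false; }
    private void insertDb(String customerNo)   { /* JDBC insert as before */ }
}
```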

The fourth edition: cache preheating to prevent cache breakdown

Another problem with caching is cache breakdown.

What is cache breakdown? If the feature is heavily promoted beforehand, or is expected to get a large number of clicks once it goes live, then a large number of users will probably click on it as soon as it launches. Our logic shows the agreement page the first time a user enters the feature, and although the backend now has a Redis cache, none of those users have clicked the new feature yet, so there is nothing for them in Redis. Do all of their requests fall through to the database? Once the instantaneous traffic is very large, the database is at risk and could well be brought down.

This problem can be understood as cache breakdown. (Strictly speaking, cache breakdown is when a key does not exist in the cache or has expired, and at some moment many requests come to access that key, all find it missing in Redis, and all go to the database.)

So how to solve it? We can put the data that needs to be cached into redis in advance before the feature is launched, that is, cache preheating.

How do we preheat? Put every user's information into Redis. For example (perhaps not the best choice), we use Redis's hash data structure, key-field-value: the key can be a fixed string such as coderTreeHole_Agreement_Check, the field can be the customer number (which is unique), and the value is a flag, 0 for "has not agreed" and 1 for "has agreed". In general, hot keys are warmed up before big e-commerce promotions too; otherwise the system would not hold up.
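
A sketch of the preheating step, run once before the feature goes live, assuming we can list all customer numbers: every customer is written into the hash with flag "0" (not agreed), in batches to limit round trips. Names and batch size are illustrative.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import redis.clients.jedis.Jedis;

public class AgreementCachePreheater {

    private static final String HASH_KEY = "coderTreeHole_Agreement_Check";
    private static final int BATCH_SIZE = 1000;

    /** Run once before launch: mark every known customer as "not agreed" (0). */
    public void preheat(List<String> allCustomerNos) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            Map<String, String> batch = new HashMap<>();
            for (String customerNo : allCustomerNos) {
                batch.put(customerNo, "0");
                if (batch.size() >= BATCH_SIZE) {
                    jedis.hmset(HASH_KEY, batch);   // write one batch of fields at a time
                    batch.clear();
                }
            }
            if (!batch.isEmpty()) {
                jedis.hmset(HASH_KEY, batch);       // flush the final partial batch
            }
        }
    }
}
```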

Also, when the number of users is very large, doesn't the coderTreeHole_Agreement_Check key in Redis become very large? And in Redis cluster mode, does this whole key sit on a single node? Why?

Cluster mode was added in Redis 3.0 to give Redis distributed storage, i.e. different data is stored on different Redis nodes. Every Redis cluster node involves two things: one is the slots, whose values range from 0 to 16383; the other is the cluster component, which can be understood as the plug-in that manages the cluster. When a key is accessed, Redis computes a CRC16 checksum of the key and takes the remainder modulo 16384, so every key maps to a hash slot numbered between 0 and 16383; that value determines which node owns the slot, and the request is automatically routed to that node for the access operation.
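
The slot a key maps to can be computed directly. The sketch below uses the CRC16 helper that ships with the Jedis client (the exact package path may differ between Jedis versions):

```java
import redis.clients.jedis.util.JedisClusterCRC16;

public class SlotDemo {
    public static void main(String[] args) {
        String key = "coderTreeHole_Agreement_Check";
        // slot = CRC16(key) mod 16384, so this key always lands in the same slot,
        // and therefore on the single node that owns that slot.
        int slot = JedisClusterCRC16.getSlot(key);
        System.out.println("key " + key + " -> slot " + slot);
    }
}
```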

After reading the paragraph above, you can see it: the requests for this big key, which is also a hot key, all land on one particular Redis node. Big keys bring plenty of problems of their own; for space reasons I will go into them in detail another time, since that strays from the subject.

In response to this requirement, what other ways do you have to prevent cache breakdown?

The fifth edition: message queue for peak shaving and valley filling

We can see that the above designs actually operate on the database in real time.

For example, when the user clicks yes, the front end calls the recordAgree method in the background to record the record to the database, that is, the record is immediately inserted into the database.

If the feature has just launched and a large number of users click it at the same time, the concurrency is high and all the requests reach the backend, there will be a great many writes to the database, the number of database connections will surge, and the database will not be able to take it.

So, to keep the traffic from concentrating on the database, we can bring in a message queue (MQ) at this point. Send the insert request to the message queue, so that inserts are executed against the database at a controlled rate and the load on the database is as smooth as possible; once the message has been sent to the queue, success is returned to the frontend immediately, without waiting for the insert to complete. With MQ we get asynchronous decoupling plus peak shaving and valley filling.
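
A minimal sketch of the idea, using an in-process BlockingQueue purely as a stand-in for a real message queue such as RocketMQ, RabbitMQ or Kafka (in production the queue would be a separate broker): recordAgree only enqueues and returns, and a single consumer drains the queue and writes to the database at its own pace.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class AgreementServiceWithQueue {

    // Stand-in for an MQ topic; in production this would be a broker, not an in-memory queue.
    private final BlockingQueue<String> agreeQueue = new LinkedBlockingQueue<>();

    public AgreementServiceWithQueue() {
        // One consumer thread drains the queue and writes to the database at a steady rate.
        Thread consumer = new Thread(() -> {
            while (true) {
                try {
                    String customerNo = agreeQueue.take(); // blocks until a message arrives
                    insertDb(customerNo);                  // JDBC insert as before
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        });
        consumer.setDaemon(true);
        consumer.start();
    }

    /** Called by the API layer: enqueue and return immediately, do not wait for the insert. */
    public void recordAgree(String customerNo) {
        agreeQueue.offer(customerNo);
        // The Redis flag can also be flipped here so hasAgree() sees the change right away.
    }

    private void insertDb(String customerNo) { /* JDBC insert as before */ }
}
```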

When you get here, you can't help saying that the design is awesome.

In addition, there are many points to watch out for when using MQ, such as duplicate message consumption, message ordering, transactional messages, and so on.

Summary

How far you take this design depends on your user count and concurrency. If it is something like Singles' Day, you must use a message queue. For a smaller case, say 10 million users and 100,000 daily active users, with requests concentrated between 9:00-12:00 and 13:00-17:00, that is roughly 8 hours, an average of 12,500 clicks per hour, or about 3.5 per second. That is not high concurrency, and the database can handle it comfortably.

To sum up, the knowledge points involved (knock on the blackboard): Redis data caching, Redis cache penetration, cache breakdown, hot key issues, Redis big key issues (not covered in detail), message queue asynchronous decoupling, and so on.

Drawing the diagrams and writing this up is not easy, so if you think it is well written, remember to like it as encouragement; if you think something is wrong, corrections are welcome.

All right, that's all for everyone.
