Is Redis really that easy to use?

2025-02-24 Update From: SLTechnology News&Howtos

Shulou (Shulou.com) 06/01 Report

Whether you work in Python, Java, Go, PHP, Ruby, or another language, Redis should be a familiar piece of middleware. Yet many programmers who mostly write business code only ever set and get values in their day-to-day work and lack an overall picture of Redis. Today, let's summarize the most common questions about Redis; I hope it helps.

What is Redis?

Redis is an open-source, in-memory key-value store written in C. It can be used for caching, event publish/subscribe, high-speed queues, and more, and it supports rich data types: string, hash, list, set (unordered set), and zset (sorted set).

Application scenarios of Redis in a project

1. Caching data

Most commonly, data that is queried often but changes infrequently, usually called hot data, is kept in the cache.

2. Message queue

Redis's publish/subscribe can serve as a lightweight message system, similar to ActiveMQ or RocketMQ. (If the business has strict consistency requirements for the data, a dedicated MQ is still recommended.)

3. Counter

For example, counting clicks or likes. Redis commands such as INCR are atomic, which avoids concurrency problems.
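As a minimal sketch of why that atomicity matters, the snippet below uses a small in-memory class (`MiniCounter`, a hypothetical stand-in, not the redis-py API) whose lock plays the role of Redis's single-threaded command loop: eight threads increment a counter concurrently and no update is lost.

```python
import threading

class MiniCounter:
    """In-memory stand-in for Redis INCR: the lock makes each increment atomic."""
    def __init__(self):
        self._lock = threading.Lock()
        self._data = {}

    def incr(self, key, amount=1):
        with self._lock:
            self._data[key] = self._data.get(key, 0) + amount
            return self._data[key]

counter = MiniCounter()

def worker():
    for _ in range(1000):
        counter.incr("page:home:clicks")

threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads: t.start()
for t in threads: t.join()

print(counter.incr("page:home:clicks", 0))  # 8000: no updates were lost
```

With a plain unlocked `dict` and `+= 1`, interleaved read-modify-write cycles could drop increments; the atomic `incr` cannot.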

4. E-commerce site information

Large e-commerce platforms pre-populate the cache with page data. On a travel booking site, for example, the price shown on the home page may differ from the price you see after clicking through, because the home-page figure comes from a pre-built cache.

5. Hot spot data

For example, the real-time hot lists of news sites and Weibo's trending searches need frequent updates. When the total volume of data is large, querying the database directly for every request would hurt performance.

Give me a reason to love it

We usually start like this, with the application talking to a single-node database server.

As the enterprise grows and the business expands, facing huge volumes of data, using MySQL directly leads to performance degradation: reads and writes become very slow. So we pair it with a cache to handle the large data volume.

So now the application checks the cache first and only queries MySQL on a miss.

That only outlines the role of the cache. When the data keeps growing, we also need master-slave replication to achieve read-write separation.

The application interacts with the cache first: if the data is in the cache, it is returned to the client directly; if not, it is queried from MySQL (and usually written back to the cache). This reduces the pressure on the database and improves efficiency.
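This read path (often called cache-aside) can be sketched with two plain dicts standing in for Redis and MySQL; the names `cache`, `database`, and `get_user` are illustrative only.

```python
cache = {}                       # stand-in for Redis
database = {"user:42": "Alice"}  # stand-in for MySQL

def get_user(key):
    """Cache-aside read: serve from cache if present, else load from the DB and cache it."""
    if key in cache:
        return cache[key], "cache"
    value = database.get(key)
    if value is not None:
        cache[key] = value       # populate so the next read skips the DB
    return value, "db"

print(get_user("user:42"))  # first read falls through to the DB
print(get_user("user:42"))  # second read is served from the cache
```

Only the first read for a key costs a database query; every later read is absorbed by the cache until the entry is evicted or invalidated.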

Whenever a new phone is released, there is a panic-buying event, and the server receives a flood of order requests at the same moment.

We can use Redis's atomic operations, backed by its single-threaded execution, to handle this. First, put the inventory into a list: if there are 10 items in stock, push 10 numbers into the list. The numbers themselves have no meaning; they simply represent the 10 items. Once the rush begins, each arriving user pops one number from the list; a successful pop means that user grabbed an item. When the list is empty, everything is sold out. Because a list pop is atomic, even if many users arrive at the same moment, the pops execute one after another.
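A minimal sketch of that flash-sale scheme, with a `deque` plus a lock emulating the atomicity Redis's single-threaded LPOP gives you for free (all names here are illustrative):

```python
import threading
from collections import deque

inventory = deque(range(10))   # 10 placeholder tokens, like pushing 10 numbers into a list
lock = threading.Lock()        # emulates Redis executing LPOP atomically
winners = []

def try_buy(user_id):
    with lock:
        if inventory:
            inventory.popleft()        # like LPOP: take one token
            winners.append(user_id)    # this user snapped up an item
        # empty list -> sold out, the request is simply rejected

threads = [threading.Thread(target=try_buy, args=(i,)) for i in range(50)]
for t in threads: t.start()
for t in threads: t.join()

print(len(winners))  # exactly 10 buyers succeed, never more
```

Even with 50 concurrent buyers, overselling is impossible, because checking and taking a token happen as one indivisible step.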

A digression: some flash sales throttle requests directly on the front-end page, so many requests are intercepted by the front end and never reach the back-end servers at all.

Why is Redis so fast?

1. Redis operates entirely in memory; persisting data to disk (via RDB snapshots or AOF) is a separate, configurable step.

2. Redis's command processing is single-threaded, which avoids the cost of frequent context switching between threads.

3. Redis's data structures are simple, and operations on them are correspondingly cheap.

4. Redis uses its own underlying model and a simple protocol for communicating with clients. Historically it even built its own VM mechanism rather than relying on generic system calls, which would waste time on extra copying and requests.

5. It uses an I/O multiplexing model with non-blocking I/O.

I/O multiplexing

I/O multiplexing is a technique for avoiding a process or thread blocking on a single I/O system call. It monitors multiple file descriptors at once; as soon as any descriptor becomes ready (usually readable or writable), it notifies the program, which can then perform the corresponding read or write.
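Python's standard `selectors` module wraps the same OS facilities (epoll/kqueue/select) that Redis's event loop uses. The toy below, using a connected socket pair as a stand-in for client connections, shows the core idea: one call watches many descriptors and only the ready one is touched.

```python
import selectors
import socket

sel = selectors.DefaultSelector()
a, b = socket.socketpair()          # two connected sockets standing in for clients
a.setblocking(False)
b.setblocking(False)
sel.register(a, selectors.EVENT_READ)
sel.register(b, selectors.EVENT_READ)

b.send(b"ping")                     # only socket `a` becomes readable

received = []
for key, _ in sel.select(timeout=1):   # one call multiplexes over both descriptors
    received.append(key.fileobj.recv(1024))

print(received)  # [b'ping'] -- the loop only touched the ready socket
sel.close(); a.close(); b.close()
```

A single thread can serve thousands of connections this way, because it never blocks waiting on any one of them.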

Application scenarios of the Redis data types

As mentioned earlier, Redis supports five rich data types. How do we choose among them in different scenarios?

String

Strings are the most commonly used data type. A string value can hold any bytes: plain text, serialized JSON objects, even base64-encoded images. A single string value in Redis can be up to 512 MB, so it is close to a universal container.

Hash

Hashes are often used to store structured data; a forum system, for example, can store a user's id, nickname, avatar, points, and so on as fields of one hash. With a String you would have to fetch the whole value by key, deserialize it, modify one item, then serialize it and write it back; with a Hash you can read or update a single field directly. In addition, when a hash holds fewer than a configurable number of elements, Redis stores it in a compact encoding, which can save a lot of memory; the String structure has no such optimization.
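A sketch of the HSET/HGET semantics described above, with a dict of dicts standing in for Redis (the helper names are illustrative, not redis-py):

```python
store = {}  # stand-in for Redis; each value is a hash (a dict of fields)

def hset(key, field, value):
    """Like Redis HSET: set one field inside the hash stored at key."""
    store.setdefault(key, {})[field] = value

def hget(key, field):
    """Like Redis HGET: read one field without touching the others."""
    return store.get(key, {}).get(field)

# Store a forum user as individual fields instead of one serialized blob.
hset("user:1001", "nickname", "redfan")
hset("user:1001", "points", "120")

hset("user:1001", "points", "130")   # update one field; no deserialize/reserialize round-trip
print(hget("user:1001", "points"))   # 130
```

Updating `points` never touched `nickname`, which is exactly the advantage a Hash has over a serialized String value.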

List

List is implemented as a doubly linked list (a quicklist in modern versions), so it supports traversal and lookup in both directions, which is convenient to operate, at the cost of some extra memory per node. Many structures inside Redis, including send-buffer queues, use this data structure. You can also build pagination on top of the LRANGE command, with excellent performance and a good user experience.
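LRANGE-based paging is easy to model: the function below mimics LRANGE's semantics (inclusive stop index, negative indices counting from the end) on a plain Python list standing in for a Redis list.

```python
posts = [f"post-{i}" for i in range(1, 101)]  # stand-in for a Redis list of 100 items

def lrange(lst, start, stop):
    """Mimic Redis LRANGE: inclusive stop index, negative indices count from the end."""
    n = len(lst)
    if start < 0:
        start = max(n + start, 0)
    if stop < 0:
        stop = n + stop
    return lst[start:stop + 1]

page_size = 10
page = 2                                   # third page, zero-based
start = page * page_size
items = lrange(posts, start, start + page_size - 1)
print(items[0], items[-1])  # post-21 post-30
```

Each page is fetched with one `LRANGE key start stop` call, so the cost per page stays small regardless of how deep the list is browsed.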

Set

From the outside, Set looks much like List. Its special feature is automatic deduplication: when you need to store a collection of items but do not want duplicates, choose Set.

Sorted Set

Sorted Set orders its members by a score (weight). For example, an application can build a ranking of articles by click count.
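A click-count leaderboard along those lines, with a dict standing in for a sorted set and two helpers mimicking ZINCRBY and ZREVRANGE (names are illustrative, not redis-py):

```python
scores = {}   # stand-in for a Redis sorted set: member -> score

def zincrby(member, amount=1):
    """Like Redis ZINCRBY: add to a member's score, creating it if absent."""
    scores[member] = scores.get(member, 0) + amount

def zrevrange(start, stop):
    """Like Redis ZREVRANGE: members ranked by descending score, inclusive stop."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[start:stop + 1]

for article, clicks in [("a1", 5), ("a2", 12), ("a3", 7)]:
    for _ in range(clicks):
        zincrby(article)

print(zrevrange(0, 2))  # ['a2', 'a3', 'a1'] -- ranking by click count
```

In real Redis the ordering is maintained incrementally by a skip list, so reading the top N is cheap even with millions of members; this sketch re-sorts on every read purely for clarity.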

Data consistency of Redis caching

Strictly speaking, the database and the cache can never be perfectly in sync. Consistency comes in two flavors: eventual and strong. If the business requires the data to be strongly consistent at all times, then a cache cannot be used; all a cache can guarantee is eventual consistency.

All we can do is keep the data as consistent as possible. Whether you update the database first and then delete the cache, or delete the cache first and then update the database, inconsistencies are still possible, because reads and writes run concurrently and their order cannot be guaranteed. The concrete strategy should follow from the business requirements, so I won't elaborate here.

Expiration and memory eviction in Redis

We can set an expiration time when storing data in Redis. But how does an expired key actually get deleted?

At first I assumed it was timer-based deletion, but that is not the case: timed deletion would need a timer constantly watching every key, which frees memory promptly but consumes a lot of CPU.

Redis deletes expired keys periodically: by default it runs a check every 100 ms and deletes the expired keys it finds. The check is not a sequential scan but a random sample. So can some keys slip through the net? Redis handles that too: whenever an expired key is read or written, its lazy-deletion policy kicks in and removes the key on the spot.
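The two mechanisms, periodic random sampling plus lazy deletion on access, can be sketched like this (an in-memory model of the behavior, not Redis's actual implementation):

```python
import random
import time

store = {}    # key -> value
expires = {}  # key -> absolute expiry timestamp

def set_with_ttl(key, value, ttl):
    store[key] = value
    expires[key] = time.monotonic() + ttl

def periodic_sweep(sample_size=20):
    """Like the periodic job: check a random sample of keys that carry a TTL."""
    now = time.monotonic()
    for key in random.sample(list(expires), min(sample_size, len(expires))):
        if expires[key] <= now:
            store.pop(key, None); expires.pop(key, None)

def get(key):
    """Lazy deletion: an expired key is removed the moment it is touched."""
    if key in expires and expires[key] <= time.monotonic():
        store.pop(key, None); expires.pop(key, None)
        return None
    return store.get(key)

set_with_ttl("session:1", "alive", ttl=0.01)
time.sleep(0.02)
periodic_sweep()                 # may or may not have sampled this key...
print(get("session:1"))          # None -- either way, access removes an expired key
```

Random sampling keeps the periodic job cheap, and lazy deletion guarantees a client can never observe an expired value, whichever mechanism fires first.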

Memory eviction means that Redis may automatically delete some of the keys users have stored, so that the data is simply no longer found in the cache. Suppose our server gives Redis 2 GB, but with the growth of the business the cached data exceeds 2 GB: once memory usage reaches the configured maxmemory limit, Redis evicts keys according to its eviction policy (for example allkeys-lru) rather than relying on the operating system's virtual memory, because swapping to disk would destroy its performance. This is also why Redis is designed for two application scenarios, caching and persistent storage: for a pure cache, losing evicted keys is acceptable.

Cache penetration

The cache is only a protective layer added to relieve pressure on the database. When the requested data is not in the cache, we have to query the database for it. If an attacker exploits this by frequently requesting data that is never cached (for example, keys that do not exist at all), the cache loses its purpose, and the full pressure of every request instantly lands on the database, which can exhaust its connections.

Solutions:

1. Run background scheduled tasks that proactively refresh the cached data. This scheme is easy to understand, but when the keys are scattered it becomes complicated to operate.

2. Layered caches. For example, set up two protective cache layers: the level-1 cache with a short expiration time and the level-2 cache with a long one. A request looks in the level-1 cache first; on a miss, one thread takes a lock, loads the data from the database, and updates both levels, while the other threads read directly from the level-2 cache.

3. Provide an interception mechanism that internally maintains the set of legal keys; if a requested key is invalid, return immediately without touching the cache or the database. (A Bloom filter is a common way to implement this check compactly.)

Cache avalanche

A cache avalanche is when the cache fails as a whole for some reason (for example, the machine goes down or the cache service stops responding), so a flood of requests reaches the back-end database, the database collapses, and the whole system goes down with it. Unlike the penetration scenario above, where individual uncached keys are exploited, here the entire cache layer is lost at once.


How to avoid avalanches:

1. Add a random offset, within some range, to each cache entry's expiration time, so that different keys expire at different moments and do not all fail together.

2. Similar to the layered-cache solution above: keep a second-level copy and read the data from it when the primary cache entry has expired.

3. Use locks or queues to prevent too many requests from hitting the database at the same time to rebuild the same entries.
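Item 1 above, randomized TTLs, is a one-liner in practice; the sketch below (constants are illustrative) spreads a one-hour expiry over a five-minute window so keys written in the same batch never all die at once.

```python
import random

BASE_TTL = 3600   # one hour, in seconds
JITTER = 300      # spread expiries over a five-minute window

def ttl_with_jitter(base=BASE_TTL, jitter=JITTER):
    """Return base TTL plus a random offset in [0, jitter)."""
    return base + random.randrange(jitter)

# A batch of 1000 keys written together still expires at scattered times.
ttls = [ttl_with_jitter() for _ in range(1000)]
print(min(ttls) >= BASE_TTL, max(ttls) < BASE_TTL + JITTER, len(set(ttls)) > 1)
```

The jittered value is what you would pass as the expiry argument when setting each key, instead of a fixed TTL shared by the whole batch.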

Closing thoughts

Redis's performance is extremely high: commonly cited benchmarks put it at about 110,000 reads and 81,000 writes per second. It also supports transactions, persistence and backup, and rich data types.

Everything has two sides, and Redis has its drawbacks:

1. Because it is an in-memory database, the amount of data a single machine can hold is limited; developers must estimate capacity in advance and delete unneeded data promptly.

2. After a crash or restart, reloading the persisted data from disk back into memory can take a long time, and Redis cannot serve requests normally during that window.
