Java web case analysis

2025-01-19 Update From: SLTechnology News&Howtos


In this article, the editor gives a detailed introduction to "Java web case analysis": the content is thorough, the steps are clear, and the details are handled carefully. I hope this article helps resolve your doubts.


In real-world projects, a cache is a key component of a high-concurrency, high-performance architecture. So why can Redis be used as a cache? A cache component needs two main characteristics:

Good access performance: in the storage hierarchy, memory sits far closer to the CPU than disk does, so in-memory access is fast.

When the cache fills up, a good data eviction mechanism takes over.

Redis naturally has both characteristics: it operates on in-memory data and it has a mature eviction mechanism, so it is very well suited to serve as a cache component.

Operating in memory, a typical Redis instance has a capacity on the order of 32-96 GB, with an average operation latency of around 100 ns, so operations are very efficient. It also offers many eviction policies: since Redis 4.0 there have been eight, which makes Redis suitable as a cache in many scenarios.

So why does a Redis cache need an eviction mechanism, and what are the eight eviction policies?

Data eviction mechanism

The Redis cache lives in memory, so its capacity is limited. What should Redis do when the cache is full?

When the cache is full, Redis relies on its data eviction mechanism: certain rules select some data to be deleted, so that the cache service can keep serving requests. So which eviction strategies does Redis use to select and delete data?

Since Redis 4.0 there have been eight cache eviction strategies, falling into three categories:

No eviction

noeviction: no data is evicted. When the cache is full, Redis rejects further writes and returns an error directly.

Among key-value pairs with an expiration time set

volatile-random: randomly deletes keys from among those with an expiration time set.

volatile-ttl: deletes keys with an expiration time set, in order of expiration; the sooner a key expires, the earlier it is deleted.

volatile-lru: uses the LRU (Least Recently Used) algorithm to filter keys with an expiration time set, evicting the least recently used data.

volatile-lfu: uses the LFU (Least Frequently Used) algorithm to filter keys with an expiration time set, evicting the least frequently used data.

Among all key-value pairs

allkeys-random: randomly selects and deletes data from all key-value pairs.

allkeys-lru: filters all data using the LRU algorithm.

allkeys-lfu: filters all data using the LFU algorithm.
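These policies are selected in the Redis configuration. A minimal illustrative fragment of redis.conf (the values shown are examples, not recommendations):

```conf
# redis.conf (example values)
maxmemory 2gb                 # cap the memory Redis may use for data
maxmemory-policy allkeys-lru  # evict the least recently used key, across all keys
maxmemory-samples 5           # sample size for the approximate LRU/LFU algorithms
```

The same settings can also be changed at runtime, e.g. `CONFIG SET maxmemory-policy allkeys-lru`.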

Note: the LRU (Least Recently Used) algorithm maintains a doubly linked list, with the head and tail of the list serving as the MRU end and the LRU end, holding the most recently used and the least recently used data respectively.

A literal implementation of the LRU algorithm must manage all cached data in a linked list, which brings extra space overhead. Moreover, every access has to move the accessed entry to the MRU end of the list; when a large amount of data is accessed, this means a large number of list-move operations, which would be very time-consuming and would reduce Redis cache performance.
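To make the textbook algorithm concrete, here is a minimal LRU cache sketch in Java built on LinkedHashMap's access-order mode. This illustrates the eviction idea only; it is not how Redis implements LRU, since Redis avoids the list overhead with an approximate, sampling-based scheme:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache built on LinkedHashMap's access-order mode.
// The capacity values below are arbitrary example values.
class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    LruCache(int capacity) {
        super(16, 0.75f, true); // accessOrder=true: get() moves an entry to the MRU end
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict from the LRU end once over capacity
    }

    public static void main(String[] args) {
        LruCache<String, Integer> cache = new LruCache<>(2);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.get("a");      // touch "a": it becomes the most recently used entry
        cache.put("c", 3);   // over capacity: evicts "b", the least recently used key
        System.out.println(cache.keySet()); // prints [a, c]
    }
}
```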

In Redis, both LRU and LFU are implemented via the lru and refcount fields of the Redis object structure, redisObject:

typedef struct redisObject {
    unsigned type:4;
    unsigned encoding:4;
    /* LRU time (relative to global lru_clock) or
     * LFU data (least significant 8 bits: access frequency;
     * most significant 16 bits: access time). */
    unsigned lru:LRU_BITS;
    int refcount;   /* reference count */
    void *ptr;
} robj;

Redis's LRU uses redisObject's lru field to record the time a key was last accessed. It randomly samples a number of keys given by the maxmemory-samples configuration parameter as a candidate set, then evicts the candidate with the lowest lru value.

In a real project, how should you choose an eviction policy?

Prefer the allkeys-lru policy: it keeps the most recently accessed data in the cache and improves the application's access performance.

For pinned hot data, use the volatile-lru policy: leave the pinned data without a cache expiration time, set an expiration time on all other data, and let the LRU rule filter among the expiring keys.

Having covered the Redis cache eviction mechanism, let's look at the modes in which Redis can serve as a cache.

Redis cache mode

Based on whether it accepts write requests, a Redis cache can operate as a read-only cache or a read-write cache:

Read-only cache: the cache handles only read operations; all updates go to the database, so there is no risk of data loss.

Cache Aside mode

Read-write cache: both reads and writes go to the cache, so a crash can lose data. The cache can write data back to the database in two ways, synchronous or asynchronous:

Synchronous: lower access performance, but favors data reliability

Read-Through mode

Write-Through mode

Asynchronous: risk of data loss, but favors low-latency access

Write-Behind mode

Cache Aside mode

To query data, first read from the cache; on a cache miss, read from the database and then populate the cache with the result. To update data, first update the database, then invalidate the corresponding cache entry.
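The read and update paths can be sketched in Java as follows; this is an illustrative sketch in which two in-memory maps stand in for the Redis client and the database:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Cache Aside sketch: two in-memory maps stand in for Redis and the database.
class CacheAsideStore {
    final Map<String, String> cache = new ConcurrentHashMap<>(); // stand-in for Redis
    final Map<String, String> db = new ConcurrentHashMap<>();    // stand-in for the database

    String read(String key) {
        String value = cache.get(key);             // 1. try the cache first
        if (value != null) return value;
        value = db.get(key);                       // 2. cache miss: read the database
        if (value != null) cache.put(key, value);  // 3. populate the cache for next time
        return value;
    }

    void write(String key, String value) {
        db.put(key, value);                        // 1. update the database first
        cache.remove(key);                         // 2. then invalidate the cache entry
    }
}
```

With a real Redis client, the cache write in step 3 would typically also set a TTL, so that even a missed invalidation heals once the entry expires.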

Cache Aside mode also carries a concurrency risk: a read operation misses the cache and queries the database; before that result is written into the cache, a concurrent update invalidates the cache entry; the read then stores its now-stale result into the cache, leaving dirty data behind.

Read/Write-Through mode

Both queries and updates go directly to the cache service, and the cache service synchronously propagates updates to the database. The probability of dirty data is low, but the application depends strongly on the cache, which must be highly stable, and the synchronous updates hurt performance.

Write-Behind mode

Both queries and updates go directly to the cache service, but the cache service writes data back to the database asynchronously (via background tasks). This is fast and efficient, but data consistency is weaker, data loss is possible, and the implementation logic is more complex.

In actual project development, choose the cache mode according to the real business scenario. With the modes understood, why do we need a Redis cache in our applications at all?

Using a Redis cache in an application improves system performance and concurrency, mainly in two ways:

High performance: in-memory lookups, a simple key-value structure, and simple logical operations.

High concurrency: a single MySQL instance can typically handle only about 2,000 requests per second, while Redis easily handles more than 10,000. Routing over 80% of queries to the cache and less than 20% to the database greatly improves the system's throughput.

Although a Redis cache can greatly improve system performance, using a cache introduces problems of its own, such as cache/database double-write inconsistency and cache avalanche. How do we deal with these problems?

Common problems with using caching

Using a cache brings several problems, mainly:

Double-write inconsistency between the cache and the database.

Cache avalanche: the Redis cache cannot handle a large number of application requests (for example, many keys expire at once or the cache goes down), so they fall through to the database layer, causing a surge of pressure on the database.

Cache penetration: the requested data exists in neither the Redis cache nor the database, so a large number of requests pass straight through the cache to the database, causing a surge of pressure on the database.

Cache breakdown: the cache cannot serve a high-frequency hot key (for example, it has just expired), so high-frequency accesses hit the database directly, causing a surge of pressure on the database.

Inconsistency between the cache and the database

Read-only cache (Cache Aside mode)

For a read-only cache (Cache Aside mode), read operations happen in the cache, and data inconsistency can only arise from update operations (not inserts, since new data is written only to the database). An update must both invalidate the cache entry and update the database; whichever of the two steps runs first, if one succeeds and the other fails, the cache and the database end up inconsistent.

That concludes this "Java web case analysis" article. To truly master the points covered here, you still need to practice and apply them yourself. If you want to read more related articles, you are welcome to follow the industry information channel.
