2025-01-24 Update From: SLTechnology News&Howtos
Shulou(Shulou.com) 06/03 Report--
This article introduces how to use a biz cache when designing caches in go-zero practice. Many people run into this problem in real projects, so let's walk through how to handle these situations together. I hope you read carefully and come away with something useful!
Examples of applicable scenarios
Course selection system
Content social system
Seckill (flash-sale) system
For systems like these, we can add another cache at the business layer to store key information, such as students' course selections and remaining seats in a course-selection system, or content published within a certain time window in a content social system, and so on.
Next, we take the content social system as an example.
In a content social system, we usually query a list of content first, and then click an item to view its details.
Before adding the biz cache, the query flow for content information looks like this:
From the figure above and the cache design in the previous article, we know that fetching the content list cannot rely on the row-record cache alone. If we add a cache layer in the business layer to store the key information of the list (or even the complete records), then accessing multiple rows of records is no longer a problem, and that is exactly what biz redis does. Next, let's look at the design. Assume a single row record in the content system contains the following fields:
Field name | Field type   | Remarks
---------- | ------------ | -------------
id         | string       | content id
title      | string       | title
content    | string       | details
createTime | time.Time    | creation time
Our goal is to fetch a batch of content lists while avoiding, as far as possible, the access pressure on the db. First, we use redis's sorted set data structure to store the field information we need. There are two redis storage schemes:
Cache partial information: compress and store the key fields (such as id) according to certain rules, and use the creation time in milliseconds as the score (collisions in time values are not discussed here). The advantage of this scheme is that it saves redis storage space; the disadvantage is that fetching the list details requires a second lookup (although that second lookup will hit the persistence layer's row-record cache).
Cache complete information: store all published content, compressed according to certain rules, again using the creation-time millisecond value as the score. The advantage of this scheme is that all business operations (add, delete, query, update) go through redis, the db layer no longer needs a row-record cache, and the persistence layer only provides data backup and recovery. The disadvantages are also obvious: the required storage space and configuration are higher, and the cost rises accordingly.
Sample code:
```go
type Content struct {
	Id         string    `json:"id"`
	Title      string    `json:"title"`
	Content    string    `json:"content"`
	CreateTime time.Time `json:"create_time"`
}

const bizContentCacheKey = `biz#content#cache`

// AddContent stores a piece of content
func AddContent(r redis.Redis, c *Content) error {
	v := compress(c)
	_, err := r.Zadd(bizContentCacheKey, c.CreateTime.UnixNano()/1e6, v)
	return err
}

// DelContent deletes a piece of content
func DelContent(r redis.Redis, c *Content) error {
	v := compress(c)
	_, err := r.Zrem(bizContentCacheKey, v)
	return err
}

// compress serializes content
func compress(c *Content) string {
	// todo: do it yourself
	var ret string
	return ret
}

// uncompress deserializes content
func uncompress(v string) *Content {
	// todo: do it yourself
	var ret Content
	return &ret
}

// ListByRangeTime queries content by time range
func ListByRangeTime(r redis.Redis, start, end time.Time) ([]*Content, error) {
	kvs, err := r.ZrangebyscoreWithScores(bizContentCacheKey,
		start.UnixNano()/1e6, end.UnixNano()/1e6)
	if err != nil {
		return nil, err
	}

	var list []*Content
	for _, kv := range kvs {
		data := uncompress(kv.Key)
		list = append(list, data)
	}

	return list, nil
}
```
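The article leaves compress and uncompress as exercises. As one possible sketch (an assumption, not the article's implementation), JSON serialization gives a simple, reversible encoding; a real system might swap in a more compact format:

```go
package main

import (
	"encoding/json"
	"time"
)

// Content mirrors the struct from the article.
type Content struct {
	Id         string    `json:"id"`
	Title      string    `json:"title"`
	Content    string    `json:"content"`
	CreateTime time.Time `json:"create_time"`
}

// compress serializes a Content to a string. JSON is used here purely
// for illustration; any compact, reversible encoding would do.
func compress(c *Content) string {
	data, _ := json.Marshal(c)
	return string(data)
}

// uncompress restores a Content from its serialized form, returning
// nil if the stored value cannot be decoded.
func uncompress(v string) *Content {
	var ret Content
	if err := json.Unmarshal([]byte(v), &ret); err != nil {
		return nil
	}
	return &ret
}
```

Whatever encoding you choose, it must round-trip losslessly, since the sorted-set member is the only copy of the data in the complete-information scheme.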
In the above example, redis sets no expiration time, and we synchronize all add, delete, update, and query operations to redis. This design only makes sense when the list traffic of the content social system is fairly high. There is also data that is not accessed so frequently: traffic may surge within a certain period, then the data may not be accessed again for a long time, or ever. How should we design the cache for such a scenario? In go-zero practice, there are two solutions to this problem:
Add a memory cache: use an in-process cache to hold data that may suddenly receive heavy traffic. The common approach is a map data structure, which is simple to implement, but expired entries must be cleaned up with a timer. Another option is the Cache in the go-zero library, which is built specifically for in-memory cache management.
Use biz redis and set a reasonable expiration time.
That's all for "how to use biz cache in cache design in go-zero practice". Thank you for reading. If you want to learn more about the industry, you can follow this site, where the editor will keep publishing more high-quality practical articles for you!