
How to improve the concurrency of in-process cache

This article shares how to improve the concurrency of an in-process cache. It is quite practical, and hopefully you will get something out of it; let's take a look.

Caches exist to reduce heavy IO operations and increase the concurrency of a system. Whether it is the CPU's multi-level cache, the page cache, or the Redis cache we are familiar with in business code, the essence is the same: store a limited amount of hot data in a storage medium with faster access.

The computer itself already uses multi-level CPU caches. For our services, can we organize cached data in the same multi-level way? Redis access still goes through network IO, so can we keep the hottest data directly inside the process and let the process cache it?

This leads to today's topic: the local cache, also known as the process (in-process) cache.

Getting started

As an in-process store, it of course supports the usual CRUD operations:

Let's initialize the local cache first.

// initialize the local cache
cache, err := collection.NewCache(time.Minute, collection.WithLimit(10))
if err != nil {
    log.Fatal(err)
}

The meaning of the parameters:

expire: the uniform expiration time for keys

CacheOption: cache settings, such as the upper limit on the number of keys

Basic cache operations

// 1. add / update: add or modify an entry
cache.Set("first", "first element")
// 2. get: read the value under a key
value, ok := cache.Get("first")
// 3. del: delete a key
cache.Del("first")

Set(key, value) sets the cache

value, ok := Get(key) reads the cache

Del(key) deletes the cache

Advanced operation

Cache.Take ("first", func () (interface {}, error) {/ / Analog Logic is written to local cache time.Sleep (time.Millisecond * 100) return "first element", nil})

The earlier Set(key, value) simply puts a value into the cache; Take(key, fetch) executes the passed-in fetch function when the key does not yet exist, leaving the concrete read logic to the developer, and automatically stores the result in the cache.

At this point the core API is basically covered, and it looks quite simple. You can also look at https://github.com/tal-tech/go-zero/blob/master/core/collection/cache_test.go to see how it is used in the tests.
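To tie the pieces together, here is a minimal, self-contained sketch of the cache-aside usage described above. The getFromDB helper is hypothetical and stands in for a real database query; everything else follows the API shown in this article.

package main

import (
    "fmt"
    "time"

    "github.com/tal-tech/go-zero/core/collection"
)

// getFromDB is a hypothetical stand-in for a real database query.
func getFromDB(key string) (string, error) {
    time.Sleep(time.Millisecond * 100) // simulate IO latency
    return "value of " + key, nil
}

func main() {
    // a process-local cache: 1 minute expiration, at most 10 keys
    cache, err := collection.NewCache(time.Minute, collection.WithLimit(10))
    if err != nil {
        panic(err)
    }

    // basic operations
    cache.Set("first", "first element")
    if v, ok := cache.Get("first"); ok {
        fmt.Println(v)
    }
    cache.Del("first")

    // Take: on a miss, run the fetch function once and cache its result
    v, err := cache.Take("user:1", func() (interface{}, error) {
        return getFromDB("user:1")
    })
    if err != nil {
        panic(err)
    }
    fmt.Println(v)
}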

Solution

First of all, a cache is essentially a medium for storing a limited amount of hot data, so it faces the following problems:

Limited capacity

Hot spot data statistics

Multithreaded access

Let's talk about our design practice in these three aspects.

Limited capacity

Limited capacity means entries must be evicted when the cache is full, which involves an eviction strategy. What cache uses is LRU (least recently used).

So how does expiration actually get triggered? There are several options:

Start a timer and keep looping over all keys; when a key's preset expiration time arrives, execute the callback (in this case, delete the key from the map).

Lazy deletion: check whether a key has expired only when it is accessed. The drawback is that keys that are never accessed again waste space.

cache adopts the first approach, active deletion. However, the biggest problem active deletion runs into is:

a continuous loop wastes CPU for nothing, and even running it in an extra goroutine is unnecessary overhead.

Instead, cache records each key's expiration in a timing wheel; when a notification arrives on the expiration channel, the deletion callback is triggered.

More on the design of the timing wheel: https://go-zero.dev/cn/timing-wheel.html
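As a small illustration of the callback flow (this is only a sketch of the idea, not go-zero's timing wheel): a background goroutine receives expired keys on a channel and runs the deletion callback, so no busy loop over all keys is needed.

package main

import (
    "fmt"
    "sync"
    "time"
)

// expiringCache is a simplified illustration: each Set schedules an
// expiration notification, and one goroutine drains the channel and
// deletes expired keys. go-zero uses a timing wheel instead of one
// timer per key (and moves the timer when a key is set again), but the
// callback flow is the same idea.
type expiringCache struct {
    lock    sync.Mutex
    data    map[string]interface{}
    expired chan string
}

func newExpiringCache() *expiringCache {
    c := &expiringCache{
        data:    make(map[string]interface{}),
        expired: make(chan string, 100),
    }
    go func() {
        for key := range c.expired {
            c.lock.Lock()
            delete(c.data, key) // the deletion callback
            c.lock.Unlock()
            fmt.Println("expired:", key)
        }
    }()
    return c
}

func (c *expiringCache) Set(key string, value interface{}, ttl time.Duration) {
    c.lock.Lock()
    c.data[key] = value
    c.lock.Unlock()
    // schedule the expiration notification
    time.AfterFunc(ttl, func() { c.expired <- key })
}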

Hot spot data statistics

For a cache, we need to know whether the extra space and code it takes are worth it, and whether the expiration time or cache size needs further tuning. All of this relies on statistics, which go-zero's sqlc and mongoc also provide. So we added the same statistics to cache, giving developers a way to monitor the local cache; when the logs are collected into ELK, developers can see the cache distribution more intuitively.

The design is actually very simple: when Get() hits, just add 1 to the hit counter; otherwise add 1 to the miss counter.

func (c *Cache) Get(key string) (interface{}, bool) {
    value, ok := c.doGet(key)
    if ok {
        // hit: hit + 1
        c.stats.IncrementHit()
    } else {
        // missed: miss + 1
        c.stats.IncrementMiss()
    }
    return value, ok
}
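For reference, here is a hedged sketch of what such a hit/miss counter might look like. The IncrementHit/IncrementMiss names come from the code above; the rest (atomic counters plus a periodic log of the hit ratio) is an assumption for illustration, not go-zero's exact implementation.

package stat

import (
    "log"
    "sync/atomic"
    "time"
)

// cacheStat is a sketch of a hit/miss counter: callers bump the counters
// on every Get, and a background goroutine periodically logs the hit
// ratio and resets the window.
type cacheStat struct {
    name string
    hit  uint64
    miss uint64
}

func newCacheStat(name string) *cacheStat {
    s := &cacheStat{name: name}
    go s.statLoop()
    return s
}

func (s *cacheStat) IncrementHit()  { atomic.AddUint64(&s.hit, 1) }
func (s *cacheStat) IncrementMiss() { atomic.AddUint64(&s.miss, 1) }

func (s *cacheStat) statLoop() {
    ticker := time.NewTicker(time.Minute)
    defer ticker.Stop()
    for range ticker.C {
        hit := atomic.SwapUint64(&s.hit, 0)
        miss := atomic.SwapUint64(&s.miss, 0)
        total := hit + miss
        if total == 0 {
            continue
        }
        percent := 100 * float64(hit) / float64(total)
        log.Printf("cache(%s) - qpm: %d, hit_ratio: %.1f%%", s.name, total, percent)
    }
}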

Multithreaded access

When multiple goroutines access the cache concurrently, the following issues arise:

Write-write conflict

Conflicts when moving elements in the LRU list

Concurrent fetches of the same missing or expired key, letting traffic break through to the backing store

Of these, write-write conflicts are the easiest to resolve; the simplest way is to add a lock:

// Set(key, value)
func (c *Cache) Set(key string, value interface{}) {
    // lock, then write to the map
    c.lock.Lock()
    _, ok := c.data[key]
    c.data[key] = value
    // add the key to the LRU
    c.lruCache.add(key)
    c.lock.Unlock()
    // ok is used afterwards to set or move the key's expiration timer (omitted in this excerpt)
}

// read a key-value pair from cache; used by Get()
func (c *Cache) doGet(key string) (interface{}, bool) {
    c.lock.Lock()
    defer c.lock.Unlock()
    // when the key exists, adjusting its position in the LRU also happens under the lock
    value, ok := c.data[key]
    if ok {
        c.lruCache.add(key)
    }
    return value, ok
}
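The lruCache used above can be a thin wrapper around container/list. Below is a hedged sketch with names of my own choosing (keyLru, onEvict), not go-zero's exact code: add marks a key as most recently used, and once the limit is exceeded the key at the back of the list is evicted through a callback.

package cache

import "container/list"

// keyLru tracks key recency: most-recently-used keys sit at the front of
// the list; when the limit is exceeded, the least-recently-used key at the
// back is evicted via the onEvict callback.
type keyLru struct {
    limit    int
    evicts   *list.List
    elements map[string]*list.Element
    onEvict  func(key string)
}

func newKeyLru(limit int, onEvict func(key string)) *keyLru {
    return &keyLru{
        limit:    limit,
        evicts:   list.New(),
        elements: make(map[string]*list.Element),
        onEvict:  onEvict,
    }
}

// add marks key as most recently used and evicts the oldest key if needed.
// The caller (Cache) is expected to hold its lock, matching the code above.
func (l *keyLru) add(key string) {
    if elem, ok := l.elements[key]; ok {
        l.evicts.MoveToFront(elem)
        return
    }
    l.elements[key] = l.evicts.PushFront(key)
    if l.evicts.Len() > l.limit {
        oldest := l.evicts.Back()
        k := l.evicts.Remove(oldest).(string)
        delete(l.elements, k)
        l.onEvict(k) // e.g. delete the key from Cache.data
    }
}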

As for concurrent writes to the cache, the write logic is mainly introduced by the fetch function the developer passes in. And in this process:

func (c *Cache) Take(key string, fetch func() (interface{}, error)) (interface{}, error) {
    // 1. first try to get the value with doGet()
    if val, ok := c.doGet(key); ok {
        c.stats.IncrementHit()
        return val, nil
    }

    var fresh bool
    // 2. multiple goroutines go through sharedCalls: only one of them runs the
    //    function, and the others share its result
    val, err := c.barrier.Do(key, func() (interface{}, error) {
        // double check, to prevent redundant reads
        if val, ok := c.doGet(key); ok {
            return val, nil
        }
        // the key point: execute the cache-filling function passed in by the caller
        val, err := fetch()
        if err != nil {
            return nil, err
        }
        fresh = true
        c.Set(key, val)
        return val, nil
    })
    if err != nil {
        return nil, err
    }
    // ... hit/miss statistics based on fresh are omitted in this excerpt
    return val, nil
}

In this way, sharedCalls saves repeated executions of the same function and reduces goroutine contention by sharing the returned result.
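The barrier behaves like a singleflight: concurrent callers with the same key wait for one in-flight execution and share its result. A minimal sketch of the idea follows; the names and structure are illustrative, not go-zero's sharedCalls source.

package syncx

import "sync"

// call represents one in-flight (or finished) execution of fn for a key.
type call struct {
    wg  sync.WaitGroup
    val interface{}
    err error
}

// SharedCalls deduplicates concurrent calls that share the same key:
// the first caller runs fn, the rest wait and reuse its result.
type SharedCalls struct {
    lock  sync.Mutex
    calls map[string]*call
}

func NewSharedCalls() *SharedCalls {
    return &SharedCalls{calls: make(map[string]*call)}
}

func (sc *SharedCalls) Do(key string, fn func() (interface{}, error)) (interface{}, error) {
    sc.lock.Lock()
    if c, ok := sc.calls[key]; ok {
        // another goroutine is already fetching this key: wait for it
        sc.lock.Unlock()
        c.wg.Wait()
        return c.val, c.err
    }
    c := new(call)
    c.wg.Add(1)
    sc.calls[key] = c
    sc.lock.Unlock()

    // only this goroutine actually executes fn
    c.val, c.err = fn()
    c.wg.Done()

    sc.lock.Lock()
    delete(sc.calls, key)
    sc.lock.Unlock()

    return c.val, c.err
}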

That is how to improve the concurrency of an in-process cache. Some of these points are things you may well see or use in your daily work; hopefully this article taught you something new.
