2025-03-31 Update From: SLTechnology News&Howtos | Shulou (Shulou.com) Report
Redis is used in this project, and as the business grew it came to be used in more and more places. A problem arose: Redis was being hit so frequently that Redis operations became the bottleneck of the whole project. After investigation and comparison, a pure in-memory cache was introduced. It only needs to:
1. Limit the number of cached entries (memory cannot grow without bound, or it will blow up)
2. Set an expiration time (only frequently accessed data should stay cached in memory)
Compared with the original business flow, this simply adds a memory-based layer in front of Redis; connecting to Redis, including every operation on it, still incurs network IO.
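The layering described above can be sketched as a simple read-through lookup. In the sketch below, the dict `redis_store` is only a stand-in for a real Redis connection (a real setup would use a redis-py client), so the example stays self-contained:

```python
# Read-through layering: try memory first, fall back to "redis".
memory = {}
redis_store = {'user:1': 'alice'}   # stand-in for a real redis connection

def get(key):
    if key in memory:               # memory hit: no network IO at all
        return memory[key]
    value = redis_store.get(key)    # miss: one round trip to redis
    if value is not None:
        memory[key] = value         # populate the memory layer
    return value

assert get('user:1') == 'alice'     # first call falls through to redis
assert get('user:1') == 'alice'     # second call is served from memory
```

The point of the design is exactly this: after the first access, repeated reads never leave the process.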
Here are the comparative tests I did:
Plain (non-JSON) data:
1. Suppose all misses (neither memory nor redis):

[root@master test]# python 6.py
100 times     memory: [0.00608, 0.00608, 0.00643]    redis: [0.00573, 0.00703, 0.00518]
1000 times    memory: [0.0744, 0.0615, —]            redis: [0.0486, 0.0475, 0.0501]
10000 times   memory: [0.5370, 0.4847, 0.4685]       redis: [0.4230, 0.5132, 0.4329]
100000 times  memory: [5.565, 5.354, 5.658]          redis: [4.795, 5.021, 4.470]

2. Suppose all hits:

[root@master test]# python 6.py
100 times     memory: [0.00040, 0.00021, 0.00022]    redis: [0.00596, 0.00593, 0.00554]
1000 times    memory: [0.00216, 0.00205, 0.00203]    redis: [0.0547, 0.0497, 0.0473]
10000 times   memory: [0.0147, 0.0175, 0.0167]       redis: [0.5003, 0.6111, 0.5946]
100000 times  memory: [0.2035, 0.2012, 0.1547]       redis: [5.065, 5.543, —]

(Each list holds three repeated runs, in seconds, rounded; "—" marks values garbled beyond recovery in the source.)
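For reference, a timing harness in the spirit of the `6.py` script above might look like the sketch below. The `bench` helper, the key names, and the dict-backed cache are illustrative assumptions; the real test would call the actual memory cache and a redis-py client:

```python
import time

def bench(fn, n):
    """Time n sequential lookups, repeated 3 times (as in the article's output)."""
    results = []
    for _ in range(3):
        start = time.time()
        for i in range(n):
            fn('key%d' % i)
        results.append(time.time() - start)
    return results

# Dict stands in for the memory cache; all 100 keys are present (the "all hits" case).
memory_cache = {'key%d' % i: 'value' for i in range(100)}
mem_times = bench(memory_cache.get, 100)
print('this is the result of 100 times memory:', mem_times)
# the redis side would be e.g.:  bench(redis.StrictRedis().get, 100)
```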
Data in JSON format:
1. Suppose all misses:

[root@master test]# python json_test.py
100 times     memory: [0.00628, 0.00635, 0.00617]    redis: [0.00538, 0.00535, 0.00524]
1000 times    memory: [0.0610, 0.0589, 0.0532]       redis: [0.0453, 0.0464, 0.0420]
10000 times   memory: [—, 0.4924, 0.5429]            redis: [0.4635, 0.5340, 0.5140]
100000 times  memory: [5.306, 5.807, 4.886]          redis: [4.288, 4.528, 5.159]

2. Suppose all hits:

[root@master test]# python json_test.py
100 times     memory: [0.00053, 0.00031, 0.00030]    redis: [—, —, 0.00608]
1000 times    memory: [0.00282, 0.00269, 0.00269]    redis: [0.0785, 0.0614, 0.0579]
10000 times   memory: [0.0268, 0.0266, 0.0266]       redis: [0.6535, 0.6396, 0.4739]
100000 times  memory: [0.2069, 0.2075, 0.2000]       redis: [5.537, —, 4.936]

(Same format as the plain-data results above.)
As you can see, when everything misses (in practice this happens only on the first access; otherwise there would be no point adding the cache), memory-based and Redis-based performance is basically the same, but on hits the memory cache improves performance dramatically.
Go directly to the code:
#!/usr/bin/env python
# -*- coding: utf8 -*-
'''
Author: mafei
Date: 2019-09-26
'''
import time
import weakref
import collections

import ujson as json


class Base(object):
    notFound = {}

    # Plain dicts cannot be weak-referenced; a trivial subclass can.
    class Dict(dict):
        def __del__(self):
            pass

    def __init__(self, maxlen=10):
        # weak maps key -> value; strong keeps at most maxlen strong references,
        # so older values become collectible once evicted from the deque
        self.weak = weakref.WeakValueDictionary()
        self.strong = collections.deque(maxlen=maxlen)

    @staticmethod
    def now_time():
        return int(time.time())

    def get(self, key):
        v = self.weak.get(key, self.notFound)
        if v is not self.notFound:
            expire = v['expire']
            if self.now_time() > expire:
                self.weak.pop(key)
                return self.notFound
            else:
                return v
        else:
            return self.notFound

    def set(self, key, value):
        self.weak[key] = strongRef = Base.Dict(value)
        self.strong.append(strongRef)


class MemoryCache(object):
    def __init__(self, maxlen=1000 * 10000, life_cycle=5 * 60):
        self.memory_cache = Base(maxlen=maxlen)
        self.maxlen = maxlen
        self.life_cycle = life_cycle

    @staticmethod
    def _compute_key(key):
        return key

    def get(self, k):
        memory_key = self._compute_key(k)
        return self.memory_cache.get(memory_key).get('result', None)

    def set(self, k, v, life_cycle=None):
        self._set_memory(k, v, life_cycle)

    def get_json(self, key):
        res = self.get(key)
        try:
            return json.loads(res)
        except Exception:
            return res

    def set_json(self, k, v, life_cycle=None):
        try:
            v = json.dumps(v)
        except Exception:
            pass
        self.set(k, v, life_cycle)

    def set_with_lock(self, k, v, life_cycle=None):
        # same as set(); kept so the interface matches the redis wrapper it replaces
        self._set_memory(k, v, life_cycle)

    def _set_memory(self, k, v, life_cycle=None):
        life_cycle = life_cycle or self.life_cycle
        memory_key = self._compute_key(k)
        self.memory_cache.set(memory_key, {
            'ip': k,
            'result': v,
            'expire': life_cycle + self.memory_cache.now_time(),
        })
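One detail worth calling out in the code above: the `Base.Dict` subclass exists because plain `dict` instances cannot be the target of a weak reference, while instances of a trivial subclass can. A minimal demonstration:

```python
import weakref

class WeakableDict(dict):
    """Trivial dict subclass; unlike dict itself, its instances support weakrefs."""

wd = weakref.WeakValueDictionary()

try:
    wd['plain'] = {}                 # plain dict: raises TypeError
    plain_worked = True
except TypeError:
    plain_worked = False

value = WeakableDict(result=42)      # keep a strong reference so the entry survives
wd['sub'] = value

assert plain_worked is False
assert wd['sub']['result'] == 42
```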
Only two parameters need to be passed in when constructing the cache:
maxlen: the maximum number of entries that can be cached in memory
life_cycle: the expiration time of the data, in seconds
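How maxlen actually limits memory: values live in a WeakValueDictionary, and a bounded deque holds the only strong references, so appending a new entry silently evicts the oldest and its dictionary entry disappears with it. A minimal sketch of that mechanism (it relies on CPython's reference counting for immediate cleanup):

```python
import collections
import weakref

class Entry(dict):
    """dict subclass so values can be weak-referenced (as Base.Dict does)."""

cache = weakref.WeakValueDictionary()
strong = collections.deque(maxlen=3)   # plays the role of maxlen

for i in range(10):
    v = Entry(result=i)
    cache['key%d' % i] = v
    strong.append(v)                   # full deque drops its oldest strong ref

# On CPython, an evicted value loses its last strong reference and its
# WeakValueDictionary entry vanishes at once; only the newest 3 remain.
assert sorted(cache.keys()) == ['key7', 'key8', 'key9']
```

Note the eviction order is insertion order, not access frequency, so this is FIFO rather than LRU behavior.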
Advantages:
1. Efficient: on cache hits, much faster than calling redis directly
2. No network IO or disk IO is generated
Disadvantages:
1. The supported data structures are relatively simple; of course, you can extend the implementation yourself
2. Updating a value that is already cached in memory is not convenient; that has to be handled by other means