2025-01-31 Update From: SLTechnology News & Howtos (shulou.com)
This article explains the NHibernate cache management mechanism: the problems that cache management must solve, NHibernate's two-level cache design, and how the second-level cache resolves concurrent update conflicts.
The main problems of cache management
A cache holds data that is added, updated and deleted, so, much like a database, it must deal with problems such as transactionality and the consistency of concurrently accessed data.
A typical way to use caching is as follows:
```csharp
Database db = new Database();
Transaction tx = db.BeginTransaction();
try
{
    // read entity1 from the cache
    MyEntity1 entity1 = cache.Get("key of entity1");
    // read from the database when it is not in the cache
    if (entity1 == null)
        entity1 = db.Get("key of entity1");

    // ... process entity1 ...

    // save the update of entity1 to the database
    bool updated = db.Update(entity1);
    // if the database update succeeded, update the cache
    if (updated)
        cache.Put(entity1);

    // ... other processing in the transaction ...

    tx.Commit();
}
catch
{
    tx.Rollback();
    throw;
}
```
The sample code above uses the cache in a transactional environment and performs an update (so this is not a read-only cache). If the cache is shared, this usage has many problems. For example, if some other processing in the transaction throws an exception, the update to entity1 in the database is rolled back, but entity1 in the cache has already been updated; unless this case is handled, any later read of entity1 from the cache returns incorrect data.
Therefore, to use a cache correctly there must be a well-designed scheme that fully accounts for transactions, concurrency and similar conditions, so that the correctness and consistency of the data are guaranteed.
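One common mitigation for the rollback problem above is to evict the cache entry whenever the transaction rolls back, so the possibly stale value cannot be served later. A minimal sketch, reusing the placeholder types (Database, Transaction, cache, MyEntity1) from the example above; cache.Remove is assumed to exist on the placeholder cache:

```csharp
Database db = new Database();
Transaction tx = db.BeginTransaction();
try
{
    // read entity1 from the cache, falling back to the database
    MyEntity1 entity1 = cache.Get("key of entity1");
    if (entity1 == null)
        entity1 = db.Get("key of entity1");

    // ... process entity1 ...

    if (db.Update(entity1))
        cache.Put(entity1);    // the cache is already updated here ...

    tx.Commit();
}
catch
{
    tx.Rollback();
    // ... so on rollback, evict the possibly-wrong entry; the next
    // reader will miss and reload the committed state from the database
    cache.Remove("key of entity1");
    throw;
}
```

Note that this only narrows the window: between Put and Remove, another request can still read the uncommitted value from a shared cache, which is exactly why a more thorough scheme is needed.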
NHibernate's two-level caching mechanism
Relative to a session, the first-level cache is a private cache and the second-level cache is a shared cache.
When a session loads an entity, the search order is: 1. look in the first-level cache; 2. look in the second-level cache; 3. load from the database.
The first-level cache acts as an isolation area between transactions. Within a transaction, all additions, modifications and deletions of entity objects are invisible to other sessions until the transaction commits; after a successful commit, these updates are applied to the second-level cache in a batch.
This two-level mechanism goes a long way toward guaranteeing data correctness (for example, if the transaction in the earlier sample code fails, the data is not pushed to the second-level cache, preventing incorrect data from entering it) and avoids transaction-consistency problems such as ReadUncommitted-style dirty reads.
Internally, management of the first-level cache is simple: all loaded entities (as well as entities for which a proxy has been created but not yet loaded, and so on) are cached in the persistence context (NHibernate.Engine.StatefulPersistenceContext).
Entities to be inserted, updated or deleted are held in three lists and applied to the database when the transaction commits. A Flush call (or an automatic flush performed by NHibernate before a query, etc.) also applies them to the database, but not to the second-level cache; the second-level cache is updated only after the transaction commits successfully.
In NH1.2 these three lists were maintained in SessionImpl; NH2.0 added many new features and refactored the code, and the lists are now maintained in NHibernate.Engine.ActionQueue.
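The batching behavior described above can be sketched as follows; the names are illustrative and far simpler than NHibernate's real ActionQueue:

```csharp
using System;
using System.Collections.Generic;

// Illustrative sketch: queue insert/update/delete work per transaction,
// apply it to the database at flush time, and touch the shared
// (second-level) cache only after a successful commit.
class ActionQueueSketch
{
    private readonly List<Action> insertions = new List<Action>();
    private readonly List<Action> updates = new List<Action>();
    private readonly List<Action> deletions = new List<Action>();
    private readonly List<Action> afterCommit = new List<Action>();

    public void AddInsertion(Action dbWork, Action cacheWork)
    { insertions.Add(dbWork); afterCommit.Add(cacheWork); }

    public void AddUpdate(Action dbWork, Action cacheWork)
    { updates.Add(dbWork); afterCommit.Add(cacheWork); }

    public void AddDeletion(Action dbWork, Action cacheWork)
    { deletions.Add(dbWork); afterCommit.Add(cacheWork); }

    // Flush drains the three lists into the database only.
    public void Flush()
    {
        foreach (var a in insertions) a();
        foreach (var a in updates) a();
        foreach (var a in deletions) a();
        insertions.Clear(); updates.Clear(); deletions.Clear();
    }

    // The second-level cache is updated only after a successful commit.
    public void AfterTransactionCompletion(bool success)
    {
        if (success)
            foreach (var a in afterCommit) a();
        afterCommit.Clear();
    }
}
```

A flush during the transaction (explicit or automatic before a query) drains the three lists into the database, while the queued cache work runs only when the transaction completes successfully.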
Because the second-level cache is shared, concurrent update conflicts can occur, yet the correctness of the cached data must still be guaranteed, so the processing mechanism is much more complex. The second-level cache mechanism is described in detail below.
The main structure of the second-level cache
Main interfaces:
API responsibilities:
ICache: a unified cache access interface
ICacheProvider: a factory and initialization class, used to create ICache objects, to initialize the cache server or component at startup, and to perform any necessary cleanup of the cache server or component on shutdown.
Process:
1. The implementation class of ICacheProvider is specified in the configuration file.
2. When the SessionFactory starts, it creates an ICacheProvider object, executes the ICacheProvider.Start() method, and creates an ICache object for each cache region.
3. Throughout the rest of the run, NHibernate uses the ICache objects created by the SessionFactory to access the cache.
4. The ICacheProvider.Stop() method is called when the SessionFactory is closed.
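The four steps above can be sketched with a trivial in-memory provider. The interfaces below only loosely mirror NHibernate's ICacheProvider/ICache (the exact member signatures vary across NHibernate versions), so they are named with a *Sketch suffix:

```csharp
using System.Collections.Generic;

// Loose sketch of NHibernate's cache SPI, not the real interfaces.
interface ICacheSketch
{
    object Get(object key);
    void Put(object key, object value);
    void Remove(object key);
}

interface ICacheProviderSketch
{
    void Start(IDictionary<string, string> properties); // step 2: at SessionFactory startup
    ICacheSketch BuildCache(string regionName);         // step 2: one cache per region
    void Stop();                                        // step 4: at SessionFactory close
}

class DictionaryCacheProvider : ICacheProviderSketch
{
    public void Start(IDictionary<string, string> properties)
    { /* connect to the cache server or initialize the component here */ }

    public ICacheSketch BuildCache(string regionName) => new DictionaryCache();

    public void Stop()
    { /* disconnect from the cache server, release resources */ }

    private class DictionaryCache : ICacheSketch
    {
        private readonly Dictionary<object, object> store = new Dictionary<object, object>();
        public object Get(object key) => store.TryGetValue(key, out var v) ? v : null;
        public void Put(object key, object value) => store[key] = value;
        public void Remove(object key) => store.Remove(key);
    }
}
```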
Transition of entity states:
[Figure: state transitions of a cached entity (CacheEntry → CacheItem → DictionaryEntry → memcached server), using memcached as the example; the original diagram is not reproduced here.]
1. CacheEntry represents an object that is stored into or returned from the cache.
A CacheEntry contains the entity's disassembled property values (DisassembledState, of type object[], one element per property), the entity's version (used for optimistic locking), and the type name. Because of this, the domain objects we define can be stored in the cache in serialized form without themselves implementing the Serializable interface.
Primitive-typed entity properties need no special handling during disassembly and assembly. For properties such as composite components, one-to-one associations, and one-to-many collections, what is stored in DisassembledState after disassembly is the id value of the owner (that is, the entity object currently being cached). During assembly, the related object is resolved from this id value and assigned to the property (it may be loaded from the first-level cache, the second-level cache, or the database, depending on the configuration and runtime state).
2. CacheItem is used to solve data-consistency problems when the second-level cache is updated concurrently (if this problem were ignored, the CacheEntry could simply be stored in the cache directly). It mainly supports the soft-lock mechanism, described in detail later.
3. The conversion from CacheItem to DictionaryEntry is performed by NHibernate.Caches.Memcache and is completely redundant.
NHibernate generates the cache key using the rule [full class name#id value]; NHibernate.Caches.Memcache then prepends [region name@] to the key generated by NHibernate (if no region name is set in the class's hbm file, the default region is the full class name, so the full class name appears twice in the cache key).
A memcached key can be at most 250 characters long. When the cache key exceeds 250 characters, NHibernate.Caches.Memcache uses the hash value of the key as the new memcached key. Because hash collisions are possible, NHibernate.Caches.Memcache constructs a DictionaryEntry object (with the MD5 of the original key as the DictionaryEntry key and the cached object as the value) and stores that DictionaryEntry in memcached. When getting an object from the cache, NHibernate.Caches.Memcache compares the key of the returned DictionaryEntry again to rule out hash collisions.
Used this way, memcached is quite wasteful: if you are not careful, the full class name can appear in the cached data four times!
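A sketch of the key scheme just described; the exact string formatting inside NHibernate.Caches.Memcache may differ:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

static class CacheKeySketch
{
    // NHibernate's rule: [full class name#id value]
    public static string NHibernateKey(string fullClassName, object id)
        => fullClassName + "#" + id;

    // The memcached provider prepends [region name@]; with no region
    // configured, the region defaults to the full class name, so the
    // class name appears twice in the final key.
    public static string MemcachedKey(string region, string nhKey)
    {
        string key = region + "@" + nhKey;
        if (key.Length <= 250)
            return key;                          // fits memcached's key limit
        return key.GetHashCode().ToString();     // over the limit: fall back to a hash
    }

    // MD5 of the original key, stored next to the value so a Get can
    // detect and reject hash collisions.
    public static string Md5OfKey(string originalKey)
    {
        using (var md5 = MD5.Create())
            return BitConverter.ToString(
                md5.ComputeHash(Encoding.UTF8.GetBytes(originalKey)));
    }
}
```

For an entity MyApp.Person with id 42 and no explicit region, the key becomes "MyApp.Person@MyApp.Person#42", which is exactly the duplication noted above.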
Given NHibernate's mechanism and the characteristics of memcached, cache regions can be used to distinguish different memcached clusters. For example, servers A and B can serve as a read-only cache with the region named readonly_region, and servers C, D and E as a read-write cache with the region named readwrite_region.
4. The step from DictionaryEntry to the Memcached server is handled by Memcached.ClientLibrary. For an analysis of Memcached.ClientLibrary, refer to the memcached client memcacheddotnet (Memcached.ClientLibrary).
Resolve concurrent update conflicts
NHibernate defines three caching policies: the read-only policy (usage="read-only"), the nonstrict read-write policy (usage="nonstrict-read-write"), and the read-write policy (usage="read-write").
The structure that handles concurrent updates
ICacheConcurrencyStrategy aggregates an ICache object. When NHibernate operates on the cache, it does not use the ICache object directly but goes through ICacheConcurrencyStrategy, which ensures that the second-level cache is operated on under the specified cache policy.
The semantics of the ICacheConcurrencyStrategy and ICache interfaces differ: ICache is purely a cache-operation interface, while ICacheConcurrencyStrategy is tied to the state changes of entities.
Semantics of ICacheConcurrencyStrategy
Evict: invalidate cache entries
Get, Put, Remove, Clear: the same as the related methods of ICache, pure cache read, storage and other operations
Insert, AfterInsert: called when a new entity is added. Insert executes after the entity is inserted into the database, and AfterInsert executes after the transaction commits. How the second-level cache is handled in these methods is determined by the specific caching policy.
Update, AfterUpdate: called when an entity is updated. Update executes after the entity's changes are written to the database, and AfterUpdate executes after the transaction commits. How the second-level cache is handled in these methods is determined by the specific caching policy.
Lock, Release: these two methods lock and unlock a cache item, respectively. Semantically, Lock is executed on the cache item when the entity is updated inside the transaction, and Release is executed on the cache item after the transaction commits. How the second-level cache is handled in these methods is determined by the specific cache policy.
In the entity-state transition diagram earlier, the transition from CacheEntry to CacheItem is performed by the ICacheConcurrencyStrategy implementation. CacheItem is used only by ICacheConcurrencyStrategy; everywhere else inside NHibernate that needs to interact with the cache uses CacheEntry and the ICacheConcurrencyStrategy interface.
ReadOnly strategy
The intended scenario is data that is never updated, so NHibernate never updates the second-level cached data. Entities with the read-only policy cannot be updated; attempting to do so throws an exception. Insert and delete operations are allowed. Under the read-only policy, data is written to the cache only after an entity is loaded from the database.
NonstrictReadWrite strategy
The intended scenario is data that is updated, but not frequently, and with rare concurrent access.
Entities using this strategy do not touch the second-level cache on insert; on update, the second-level cached data is simply deleted (in both the Update and the AfterUpdate methods), so requests during and after the update load the data from the database and re-cache it.
Because the cached data is not locked during the update and no version check is performed on read, data consistency cannot be guaranteed under concurrent access. An example scenario:
1, 2: request 1 performs an update inside a transaction; NH updates the database and deletes the data from the second-level cache.
3: some operation (such as ISession.Evict) invalidates the data in request 1's first-level cache.
4, 5: request 2 loads the data from the database and puts it into the second-level cache. Because request 2 runs in another transaction context, the loaded data does not contain request 1's update.
6: request 1 needs to reload the data; since it is no longer in the first-level cache, it reads from the second-level cache and gets the wrong (stale) data.
ReadWrite strategy
The intended scenario is data that may be updated concurrently and frequently. NHibernate guarantees the ReadCommitted transaction isolation level; if the database isolation level is RepeatableRead, this strategy can also largely ensure that the second-level cache meets the RepeatableRead isolation level.
NHibernate achieves this goal by using mechanisms such as version, timestamp checking, soft lock, etc.
The principle of the soft lock is relatively simple. If a transaction needs to update the data whose key is 839, it first creates a soft-lock object and stores it in the cache under key 839 (if the cache already holds data under key 839, that data is simply overwritten by the soft lock). It then updates the database and completes the rest of the transaction; after the transaction commits, the entity object with id 839 is stored back into the cache. Any other request that reads key 839 from the second-level cache during the transaction gets the soft-lock object back, which indicates that the cached data is locked, so the request falls through to the database.
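The soft-lock idea can be illustrated with a highly simplified sketch (the lock counters, timeouts and version checks of the real ReadWriteCache are omitted):

```csharp
using System.Collections.Generic;

// Minimal soft-lock simulation: a lock marker overwrites the cached value
// during the update, readers see a miss while it is held, and the fresh
// value stored after commit releases it.
class SoftLockCacheSketch
{
    private sealed class SoftLock { }   // marker: "this entry is being updated"

    private readonly Dictionary<object, object> cache = new Dictionary<object, object>();

    // Before updating the database: overwrite any cached value with a lock.
    public void Lock(object key) => cache[key] = new SoftLock();

    // Readers get null while the entry is locked, so they go to the database.
    public object Get(object key)
        => cache.TryGetValue(key, out var v) && !(v is SoftLock) ? v : null;

    // After the transaction commits: store the fresh value, releasing the lock.
    public void AfterUpdate(object key, object freshValue) => cache[key] = freshValue;
}
```

For the key-839 example: Lock(839) overwrites any cached value, every Get(839) during the transaction returns null (sending readers to the database), and AfterUpdate(839, entity) stores the committed state, which releases the lock.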
ReadWriteCache.ILockable is the soft-lock interface; it is implemented by two classes, CacheItem and CacheLock.
Processing steps when updating data
1: lock the second-level cached data before the update operation.
2, 3: read the data from the second-level cache. If null or a CacheItem is returned, create a new CacheLock and store it in the second-level cache. If a CacheLock is returned, another transaction has already locked this value; increment the concurrent-lock counter by 1 and write it back to the second-level cache.
4: return the lock object to the EntityAction.
5, 6, 7: update the database, complete the rest of the transaction, and commit the transaction. ReadWriteCache's Update method does nothing here.
8: after the transaction commits, execute ReadWriteCache's AfterUpdate method:
First read the CacheLock object from the second-level cache. If null is returned, the lock has expired (caused by an overly long transaction).
If the lock has expired, or the returned CacheLock is no longer the one obtained at lock time (the item was re-locked by another thread after the lock expired), create a new CacheLock in the unlocked state, store it in the second-level cache, and end the update process.
If the CacheLock is in a concurrently locked state, decrement its concurrent-lock counter by one, write it back to the second-level cache, and end the update process.
Otherwise there were no concurrent updates in the meantime, and the new entity state is written to the second-level cache (which naturally releases the lock).
Once a concurrent update occurs and one of the concurrent transactions commits, NHibernate does not store the entity back into the second-level cache; instead it stores a CacheLock object in the unlocked state. Only after that CacheLock expires can the entity be cached again. This approach is taken because, under concurrent transactions, NHibernate cannot know which transaction the database executed first and which later; to preserve the semantics of the read-write strategy, the second-level cache is forcibly expired for that period.
ReadWriteCache's Get method returns null not only when the second-level cached data is locked; it also compares the cache item's timestamp with the requesting thread's transaction timestamp and may return null on that basis as well, so that the request is directed to the database and the transaction isolation level is guaranteed by the database.
The Put method also compares entity versions (when optimistic locking is used).
Looking at the source code, the Timestamper class combines a timestamp with a counter: time is accurate to the millisecond, with a counter of up to 4096 values per millisecond, allocated incrementally. NHibernate.Caches.MemCache sets the second-level cache lock timeout of ReadWriteCache to 0xEA60000, which translates to one minute.
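The conversion is easy to check: a Timestamper value packs milliseconds into the high bits and the per-millisecond counter into the low 12 bits (4096 values), so a timeout value is milliseconds shifted left by 12:

```csharp
using System;

class TimestamperCheck
{
    static void Main()
    {
        long timeout = 0xEA60000;   // lock timeout used by NHibernate.Caches.MemCache
        long ms = timeout >> 12;    // strip the 12 counter bits
        Console.WriteLine(ms);      // prints 60000: one minute in milliseconds
    }
}
```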
This concludes the discussion of the NHibernate cache management mechanism.