This article looks at the principles of ZooKeeper persistence: how the in-memory DataTree, the transaction log, and snapshot files work together to serve requests, recover state after a restart, and synchronize data across the cluster.
ZooKeeper's data and storage involve a few points that deserve special attention.
The in-memory data (the DataTree) is what actually serves client requests.
The disk data serves two purposes: restoring the in-memory data (recovering the server's state after a restart), and synchronizing data between the nodes of a cluster (together with the in-memory proposal cache queue).
Why does the disk data include both snapshots and transaction logs? Because of data granularity.
The transaction log is flushed to disk for every committed transaction, so its granularity is very fine, and recovery can restore the state down to the granularity of a single transaction.
Snapshots are generated at large intervals, so their granularity is coarse; if only snapshots were kept, the transactions committed after the last snapshot would be lost during recovery.
A minimal recovery sketch follows.
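As an illustration only (not ZooKeeper's actual code; the Txn record and the recover() helper are invented for this sketch), recovery can be pictured as loading the most recent snapshot and then replaying the transaction-log entries whose zxid is newer than the snapshot's:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal recovery sketch: coarse-grained snapshot first, then fine-grained log replay.
public class RecoverySketch {

    record Txn(long zxid, String path, byte[] data) {}

    static Map<String, byte[]> recover(long snapshotZxid,
                                       Map<String, byte[]> snapshot,
                                       List<Txn> logSinceSnapshot) {
        // 1. Start from the latest (coarse-grained) snapshot ...
        Map<String, byte[]> tree = new HashMap<>(snapshot);
        // 2. ... then replay the committed transactions from the log, one at a time,
        //     so the state is restored to transaction granularity.
        for (Txn txn : logSinceSnapshot) {
            if (txn.zxid() > snapshotZxid) {
                tree.put(txn.path(), txn.data());   // idempotent "set data" apply
            }
        }
        return tree;
    }
}
```

In the real system the snapshot being loaded is the fuzzy snapshot discussed below, and the replay is safe because ZooKeeper's transactions are idempotent.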
Timing of snapshot generation: based on a threshold, with a random factor mixed in.
A new snapshot is triggered when logCount > snapCount / 2 + randRoll, where:
logCount is the cumulative number of transactions written to the transaction log since the last snapshot,
snapCount is the configured threshold,
randRoll is a random factor in the range 0 ~ snapCount / 2.
Dumping a snapshot consumes a lot of disk I/O and CPU, and if all nodes dumped at the same time the cluster's ability to serve clients would suffer badly.
The key problem the random factor solves is therefore to avoid all nodes dumping their snapshots at the same moment (see the sketch below).
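A simplified sketch of this trigger logic, loosely modeled on what ZooKeeper's SyncRequestProcessor does; the SnapshotTrigger class and method names are illustrative, not the real API:

```java
import java.util.Random;

// Illustrative sketch of the randomized snapshot trigger.
public class SnapshotTrigger {
    private final int snapCount;       // configured threshold
    private final Random r = new Random();

    private int logCount = 0;          // transactions logged since the last snapshot
    private int randRoll;              // random offset in [0, snapCount / 2)

    public SnapshotTrigger(int snapCount) {
        this.snapCount = snapCount;
        this.randRoll = r.nextInt(Math.max(1, snapCount / 2));
    }

    /** Called once per transaction appended to the log. */
    public boolean shouldSnapshot() {
        logCount++;
        if (logCount > snapCount / 2 + randRoll) {
            // Each node crosses the threshold at a slightly different count,
            // so the nodes do not all dump their snapshots at the same time.
            logCount = 0;
            randRoll = r.nextInt(Math.max(1, snapCount / 2));
            return true;               // caller rolls the log and dumps a snapshot
        }
        return false;
    }
}
```

With the default snapCount of 100,000, each server takes a snapshot after somewhere between 50,000 and 100,000 logged transactions, so the servers drift apart instead of dumping in lockstep.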
ZooKeeper's snapshot file is a fuzzy snapshot: it is not an exact image of the data at a single point in time, but of the data over the period during which the snapshot is being written.
By contrast, Redis RDB files are exact snapshots, because the child process that writes them has its own memory space, isolated from the serving process, and the kernel's copy-on-write technique keeps the extra memory cost low.
ZooKeeper's snapshot thread shares its memory (the DataTree) with the threads that keep applying transactions, which is what makes the snapshot fuzzy.
This requires all of ZooKeeper's transaction operations to be idempotent; otherwise replaying transactions already reflected in the snapshot would cause data inconsistency.
In fact, all of ZooKeeper's transaction operations are idempotent (see the contrast below).
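To make the idempotency point concrete, here is a toy contrast (plain Java maps, not ZooKeeper code): a set-style transaction can safely be replayed on top of a fuzzy snapshot that already contains its effect, while an increment-style transaction could not:

```java
import java.util.HashMap;
import java.util.Map;

// Toy illustration of why replayed transactions must be idempotent.
public class IdempotencyDemo {
    public static void main(String[] args) {
        Map<String, Integer> tree = new HashMap<>();

        // Idempotent, set-style transaction: "set /counter to 5".
        // Applying it a second time (e.g. during log replay after loading a
        // fuzzy snapshot that already contains it) changes nothing.
        tree.put("/counter", 5);
        tree.put("/counter", 5);
        System.out.println(tree.get("/counter"));   // 5: state is consistent

        // Non-idempotent, increment-style transaction: "add 1 to /counter".
        // Replaying it would double-count and corrupt the state.
        tree.merge("/counter", 1, Integer::sum);
        tree.merge("/counter", 1, Integer::sum);
        System.out.println(tree.get("/counter"));   // 7 instead of 6: inconsistent
    }
}
```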
ZooKeeper uses an asynchronous thread to generate snapshots (a sketch follows the analogy below).
Analogy: Redis uses an asynchronous (forked) child process to generate its RDB snapshot files.
https://blog.csdn.net/varyall/article/details/79564418
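A rough sketch of dumping the snapshot on a background thread and skipping the dump when a previous one is still running, loosely in the spirit of ZooKeeper's SyncRequestProcessor; the AsyncSnapshotter class and its takeSnapshot() placeholder are invented for this example:

```java
// Sketch: dump snapshots on a background thread so the request-processing
// path is not blocked, and never run two dumps at once.
public class AsyncSnapshotter {
    private Thread snapInProcess;      // currently running snapshot thread, if any

    /** Called when the randomized threshold says it is time to snapshot. */
    public synchronized void maybeSnapshotAsync() {
        if (snapInProcess != null && snapInProcess.isAlive()) {
            System.out.println("Too busy to snap, skipping");   // previous dump still running
            return;
        }
        snapInProcess = new Thread(this::takeSnapshot, "Snapshot Thread");
        snapInProcess.start();
    }

    /** Placeholder for serializing the DataTree and sessions to a snapshot file. */
    private void takeSnapshot() {
        // ... write the fuzzy snapshot to disk (heavy disk I/O and CPU) ...
    }
}
```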
If a client request is received while ZooKeeper is taking a snapshot, will the request still be applied to the DataTree?
ZooKeeper generates the snapshot file by calling zks.takeSnapshot(). Neither this method nor the methods it calls lock the DataTree, so generating the snapshot file is not an atomic operation.
Transactions committed between the start and the end of the snapshot are therefore still applied to the DataTree, and they may also be persisted into the snapshot file: even if the snapshot file's suffix is n, the file may contain the results of transactions such as n + 1 (a toy demonstration follows below).
If so, what problems can this cause, and how are they solved?
https://blog.csdn.net/jpf254/article/details/80769525
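A toy demonstration (plain Java, not ZooKeeper code) of why an unlocked dump is fuzzy: one thread serializes the map while another keeps committing writes, so the dump may capture some of those writes and miss others:

```java
import java.util.concurrent.ConcurrentHashMap;

// Toy illustration of a fuzzy dump: the "snapshot" is taken without locking
// the map, while another thread keeps applying transactions to it.
public class FuzzyDumpDemo {
    public static void main(String[] args) throws InterruptedException {
        ConcurrentHashMap<String, Integer> tree = new ConcurrentHashMap<>();
        for (int i = 0; i < 5; i++) tree.put("/node-" + i, 0);

        // Writer: keeps committing new "transactions" during the dump.
        Thread writer = new Thread(() -> {
            for (int zxid = 1; zxid <= 100_000; zxid++) tree.put("/node-" + (zxid % 5), zxid);
        });
        writer.start();

        // Dump: iterates the live map, so it may reflect some of the writer's
        // updates and miss others, i.e. a fuzzy view rather than a point-in-time copy.
        StringBuilder snapshot = new StringBuilder();
        tree.forEach((path, value) -> snapshot.append(path).append('=').append(value).append('\n'));

        writer.join();
        System.out.println(snapshot);
    }
}
```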