ZooKeeper provides its services mainly through three building blocks: the znode data structure, a small set of primitives, and the watcher mechanism.
ZooKeeper is essentially a small distributed file system that avoids single points of failure through its leader-election algorithm and cluster-wide replication.
Because it persists data like a file system, even if every ZooKeeper node dies the data is not lost; it is recovered when the servers restart.
Every feature built on ZooKeeper comes down to the properties of znodes and the data associated with them; what data you associate depends on what you are building:
① Cluster management: using the properties of ephemeral znodes, each node is associated with its machine's hostname, IP address, and other related information; handling a cluster's single point of failure also falls into this category (see the sketch after this list).
② Unified naming: mainly exploits the uniqueness of znodes and the tree structure of the namespace.
③ Configuration management: znodes are associated with configuration data.
④ Distributed locks: znodes are associated with the resource being contended for.
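As a concrete illustration of ①, here is a minimal sketch using the plain ZooKeeper Java API. The connect string and the /cluster path are assumptions for the example, not values from the original project:

    import java.net.InetAddress;
    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.KeeperException;
    import org.apache.zookeeper.ZooDefs.Ids;
    import org.apache.zookeeper.ZooKeeper;

    public class ClusterMember {
        public static void main(String[] args) throws Exception {
            // Hypothetical connect string, for illustration only.
            ZooKeeper zk = new ZooKeeper("zk1:2181,zk2:2181,zk3:2181", 15000, event -> {});

            // Ensure the persistent parent exists (ignore the race where another member created it first).
            try {
                zk.create("/cluster", new byte[0], Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
            } catch (KeeperException.NodeExistsException ignored) {}

            // Register this machine as an EPHEMERAL child; the node disappears
            // automatically when the session ends, so the surviving members see the failure.
            String hostname = InetAddress.getLocalHost().getHostName();
            zk.create("/cluster/" + hostname, hostname.getBytes(),
                      Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);

            Thread.sleep(Long.MAX_VALUE); // keep the session (and the node) alive
        }
    }

Because the child node is ephemeral, it vanishes as soon as the machine's session expires, so the rest of the cluster learns of the failure without any explicit deregistration.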
The following are practical applications from our project.
1. Using the watcher mechanism: vqsapi sends data to six interfaces, and ZooKeeper is used to monitor which mongodb and api instances are currently alive. vqsapi picks up this state dynamically and sends data only to the interfaces where both the mongodb and the api instance survive (a sketch follows).
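A hedged sketch of how that lookup might work with the plain ZooKeeper Java API. The /live/mongodb and /live/api paths, and the assumption that each live instance registers an ephemeral child named after its interface, are illustrative, not from the original project:

    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;
    import org.apache.zookeeper.ZooKeeper;

    public class LiveTargets {
        private final ZooKeeper zk;

        public LiveTargets(ZooKeeper zk) { this.zk = zk; }

        // Returns the interfaces where both the mongodb node and the api node are alive.
        public Set<String> refresh() throws Exception {
            // Passing true re-arms the default watcher: it fires on any membership
            // change, after which the caller invokes refresh() again.
            List<String> mongos = zk.getChildren("/live/mongodb", true);
            List<String> apis = zk.getChildren("/live/api", true);
            Set<String> targets = new HashSet<>(mongos);
            targets.retainAll(apis); // keep only interfaces present in both lists
            return targets;
        }
    }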
2. Using the properties of ephemeral nodes: an operations probe for the interfaces. Each server registers an ephemeral node in ZooKeeper; when an interface goes down, its session is disconnected and the node disappears, which achieves the monitoring goal (a sketch of the watching side follows).
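A minimal sketch of the watching side, assuming each interface registers an ephemeral node such as the hypothetical /probes/api-01:

    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.Watcher.Event.EventType;
    import org.apache.zookeeper.ZooKeeper;

    public class ProbeMonitor {
        // Watches one server's ephemeral node. When the server dies its session
        // expires, ZooKeeper deletes the node, and the NodeDeleted event below
        // fires - this is the alerting hook.
        public static void watch(ZooKeeper zk, String path) throws Exception {
            zk.exists(path, new Watcher() {
                @Override
                public void process(WatchedEvent event) {
                    if (event.getType() == EventType.NodeDeleted) {
                        System.out.println("ALERT: " + path + " is down");
                    } else {
                        // Watches are one-shot; re-arm for the next change.
                        try { watch(zk, path); } catch (Exception ignored) {}
                    }
                }
            });
        }
    }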
3. Using the uniqueness of znodes: a distributed lock for cases where the same resource is operated on concurrently and race conditions could arise; only one client can hold the lock at a time.
Because the native ZooKeeper API for this is cumbersome and hard to get right, the Curator framework implements it well. Here is the locking operation:
    import java.util.concurrent.TimeUnit;
    import org.apache.curator.framework.CuratorFramework;
    import org.apache.curator.framework.CuratorFrameworkFactory;
    import org.apache.curator.framework.recipes.locks.InterProcessMutex;
    import org.apache.curator.retry.ExponentialBackoffRetry;
    import org.apache.curator.utils.CloseableUtils;

    // Build the lock path for this record, then a Curator client.
    String path = String.format(LockPathScheme.STRATEGY_MODEL_ROUTE, modelId, isp, province, value);
    CuratorFramework curator = CuratorFrameworkFactory.builder()
            .retryPolicy(new ExponentialBackoffRetry(10000, 3))
            .connectString(zookeeperserver)
            .build();
    curator.start();
    InterProcessMutex lock = new InterProcessMutex(curator, path);
    try {
        // Try for at most 3 seconds; if another caller holds the lock, give up.
        boolean acquired = lock.acquire(3, TimeUnit.SECONDS);
        if (!acquired) {
            resultMap.put("statusCode", 300);
            resultMap.put("message", "record is being manipulated!");
            return resultMap;
        }
        // resultMap = this.strategyRoute_dnspod_save_detail(request, id, modelId, modelName, category, province, isp, containCname, type, remark, node, value, ttl, weight, status, customerViewId);
        return resultMap;
    } catch (Exception e) {
        e.printStackTrace();
        resultMap.put("statusCode", 300);
        resultMap.put("message", "Internal error!");
        return resultMap;
    } finally {
        // Remember to release the lock; release() throws if we never acquired it,
        // so the catch below also covers that case.
        try {
            lock.release();
        } catch (Exception e) {
            System.out.println(path + " lock release failed: " + e);
        }
        CloseableUtils.closeQuietly(curator);
    }
Finally, here is a complete example that makes good use of ZooKeeper's features.
Reposted from http://www.cnblogs.com/wuxl360/p/5817549.html
Suppose our cluster has:
(1) 20 search-engine servers: each is responsible for searching part of the overall index.
① 15 of these servers are currently providing search service.
② 5 servers are generating indexes.
These 20 servers switch roles frequently: a server providing search service may be asked to stop and start generating indexes, while a server that has finished generating its index becomes ready to provide search service again.
(2) One master server: responsible for issuing search requests to the 20 search-engine servers and merging the result sets.
(3) One backup master server: responsible for taking over when the master server goes down.
(4) One web cgi: sends search requests to the master server.
Using ZooKeeper ensures that:
(1) The master server automatically learns which servers are currently providing search service and sends search requests only to those servers.
(2) The backup master server is automatically activated when the master server goes down.
(3) The web cgi automatically learns of changes to the master server's network address.
(4) The implementation is as follows (a consolidated Java sketch follows the steps):
① Each server providing search service creates an ephemeral znode in ZooKeeper: zk.create("/search/nodes/node1", "hostname".getBytes(), Ids.OPEN_ACL_UNSAFE, CreateFlags.EPHEMERAL);
② The master server gets the list of child znodes from ZooKeeper: zk.getChildren("/search/nodes", true);
③ The master server traverses these child nodes and reads each one's data to build the list of servers providing search service.
④ When the master server receives a children-changed event, it goes back to step ②.
⑤ The master server creates its own node in ZooKeeper: zk.create("/search/master", "hostname".getBytes(), Ids.OPEN_ACL_UNSAFE, CreateFlags.EPHEMERAL);
⑥ The backup master server watches the "/search/master" node. When that znode changes (typically because the master's session ended and the ephemeral node was deleted), the backup promotes itself to master and writes its own network address into the node.
⑦ The web cgi reads the master server's network address from the "/search/master" node and sends search requests to it.
⑧ The web cgi also watches the "/search/master" node; when that znode's data changes, it re-reads the master server's network address from the node and switches over to the new master.
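A consolidated sketch of the master-side logic in steps ② through ⑤. Only the znode paths come from the article; the class and method names are illustrative. Note that CreateMode.EPHEMERAL is the modern equivalent of the older CreateFlags.EPHEMERAL used above:

    import java.util.ArrayList;
    import java.util.List;
    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.ZooDefs.Ids;
    import org.apache.zookeeper.ZooKeeper;

    public class MasterSide {
        // Steps 2-4: fetch the live search servers, re-arming the watch so that
        // any membership change triggers a fresh call to this method.
        static List<String> searchServers(ZooKeeper zk) throws Exception {
            List<String> servers = new ArrayList<>();
            for (String child : zk.getChildren("/search/nodes", true)) {
                byte[] data = zk.getData("/search/nodes/" + child, false, null);
                servers.add(new String(data)); // hostname stored by the search server
            }
            return servers;
        }

        // Step 5: publish this master's address as an EPHEMERAL node so that
        // the backup master and the web cgi can find (and watch) it.
        static void publishMaster(ZooKeeper zk, String hostname) throws Exception {
            zk.create("/search/master", hostname.getBytes(),
                      Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
        }
    }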