2025-01-18 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 06/02 Report
This article explains what MetaServer is in SOFARegistry. The content is concise and easy to follow; I hope the detailed walkthrough below gives you a clear picture and that you get something out of it.
Function introduction
As the metadata center of SOFARegistry, the core function of MetaServer can be summarized as cluster membership management. In a distributed system, how to obtain the list of nodes in the cluster, how to handle cluster scaling, and how to deal with abnormal cluster nodes are all problems that must be considered. MetaServer exists to solve these problems. Its position in SOFARegistry is shown in the figure:
MetaServer achieves high availability and consistency through SOFAJRaft. Much like a registry itself, it manages the list of members within the cluster:
Registration and storage of node list
Notification of changes to the list of nodes
Node health monitoring
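The three duties above can be condensed into one minimal sketch. This is a hypothetical in-memory registry, not SOFARegistry's actual API; all class and method names here are illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;

// Hypothetical sketch: registration/storage, change notification,
// and heartbeat-based health monitoring in one tiny class.
public class MemberRegistry {
    // node name -> last heartbeat time (millis)
    private final Map<String, Long> members = new ConcurrentHashMap<>();
    private final List<Consumer<String>> listeners = new ArrayList<>();

    // 1. registration and storage of the node list
    public void register(String node) {
        members.put(node, System.currentTimeMillis());
        notifyChange("ADD " + node);
    }

    // 2. notification of changes to the node list
    public void subscribe(Consumer<String> listener) {
        listeners.add(listener);
    }

    private void notifyChange(String event) {
        listeners.forEach(l -> l.accept(event));
    }

    // 3. health monitoring: evict nodes whose heartbeat is older than timeoutMillis
    public void evictExpired(long timeoutMillis) {
        long now = System.currentTimeMillis();
        members.entrySet().removeIf(e -> {
            if (now - e.getValue() > timeoutMillis) {
                notifyChange("REMOVE " + e.getKey());
                return true;
            }
            return false;
        });
    }

    public int size() {
        return members.size();
    }
}
```

The real MetaServer spreads these duties across several classes (Registry, StoreService, Repository), as the source-code analysis below shows.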
Internal architecture
The internal architecture is shown in the following figure:
Based on Bolt, MetaServer exposes its services over a private TCP protocol to DataServer, SessionServer, and other nodes, handling requests for node registration, renewal, and list queries.
It also provides control interfaces over the HTTP protocol, such as a switch for enabling change notifications for Session nodes, a health-check interface, and so on.
The member list data is stored in the Repository, which is wrapped by the consensus protocol layer. As the state machine implementation of SOFAJRaft, every operation on the Repository is synchronized to the other nodes, and the storage layer is operated through Registry.
MetaServer uses the Raft protocol to ensure data consistency. It also maintains heartbeats with registered nodes and evicts nodes whose heartbeats time out without renewal, keeping the data valid.
In terms of availability, the cluster can serve requests normally as long as no more than half of its nodes are down. If more than half are down, the Raft protocol can no longer elect a leader or replicate logs, so the consistency and validity of the registered member data cannot be guaranteed. Even if the entire MetaServer cluster is unavailable, the Data and Session nodes keep functioning normally; they are simply no longer aware of changes to the node list.
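The majority rule above is just arithmetic: a Raft cluster of n nodes needs a strict majority (a quorum) alive to elect a leader and commit log entries. A tiny helper makes the numbers concrete (a sketch; these helper names are not from SOFARegistry):

```java
// Raft quorum arithmetic: a cluster of clusterSize nodes stays available
// as long as a strict majority of nodes is alive.
public class RaftQuorum {

    // smallest strict majority of the cluster
    public static int quorum(int clusterSize) {
        return clusterSize / 2 + 1;
    }

    // how many node failures the cluster can tolerate and still commit:
    // equals (clusterSize - 1) / 2
    public static int tolerableFailures(int clusterSize) {
        return clusterSize - quorum(clusterSize);
    }
}
```

So a 3-node MetaServer cluster tolerates 1 failure, and a 5-node cluster tolerates 2; adding a 4th node to a 3-node cluster raises the quorum to 3 without tolerating any extra failure.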
Source code analysis
Service startup
When MetaServer starts, it launches three Bolt Servers and registers Processor Handlers to handle the corresponding requests, as shown in the following figure:
DataServer: processing DataNode-related requests
SessionServer: processing SessionNode-related requests
MetaServer: processing MetaNode-related requests
Then it starts an HttpServer, which handles Admin requests and provides HTTP interfaces such as the push switch and cluster data queries.
Finally, the Raft service is started; each node acts as both RaftClient and RaftServer, used for change and data synchronization within the cluster.
The default ports for each Server are:
meta.server.sessionServerPort=9610
meta.server.dataServerPort=9611
meta.server.metaServerPort=9612
meta.server.raftServerPort=9614
meta.server.httpServerPort=9615

Node registration
As the previous section shows, both DataServer and SessionServer have Handlers for node registration requests. The registration itself is done by Registry. The registration interface is implemented as follows:
@Override
public NodeChangeResult register(Node node) {
    StoreService storeService = ServiceFactory.getStoreService(node.getNodeType());
    return storeService.addNode(node);
}
Registry obtains the corresponding StoreService according to the node type (for a DataNode, the implementation is DataStoreService), and the StoreService then stores the node in the Repository. The implementation is as follows:
// store the node information
dataRepositoryService.put(ipAddress, new RenewDecorate(dataNode, RenewDecorate.DEFAULT_DURATION_SECS));
// ...
// store the change event
dataConfirmStatusService.putConfirmNode(dataNode, DataOperator.ADD);
After calling RepositoryService#put to store the node, a change event is put into a queue; it is later consumed to push the data.
The node data is essentially stored in an in-memory hash table, with the following structure:
// RepositoryService underlying storage
Map registry;
// NodeRepository underlying storage
Map nodeMap;
Putting the RenewDecorate into the Map completes the whole node registration process. How this is combined with the Raft protocol for synchronization is described later in this article.
The logic of node removal is similar: the node information is removed from the Map, and a change event is also put into the queue.
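The register/remove flow just described can be sketched in a few lines. This is a simplified stand-in, not the real StoreService/Repository code; class and field names (NodeStore, Renewed) are illustrative:

```java
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

// Simplified sketch of the registration flow: store a renewal-wrapped node
// in an in-memory map, then enqueue a change event for later push/consumption.
public class NodeStore<T> {

    // stand-in for RenewDecorate: the node plus its renewal timestamp
    static class Renewed<T> {
        final T node;
        volatile long lastUpdateTimestamp = System.currentTimeMillis();
        Renewed(T node) { this.node = node; }
    }

    private final Map<String, Renewed<T>> nodeMap = new ConcurrentHashMap<>();
    private final BlockingQueue<String> changeEvents = new LinkedBlockingQueue<>();

    public void addNode(String ip, T node) {
        nodeMap.put(ip, new Renewed<>(node));   // 1. store the node information
        changeEvents.offer("ADD " + ip);        // 2. store the change event
    }

    public void removeNode(String ip) {
        if (nodeMap.remove(ip) != null) {       // removal mirrors registration
            changeEvents.offer("REMOVE " + ip);
        }
    }

    public int size() { return nodeMap.size(); }

    public String pollEvent() { return changeEvents.poll(); }
}
```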
Registration information renewal and eviction
You may have noticed that when a node registers, its information is wrapped in a RenewDecorate. This is the key to the renewal and eviction of registration information:
private T renewal;                         // the wrapped node object
private long beginTimestamp;               // registration time
private volatile long lastUpdateTimestamp; // last renewal time
private long duration;                     // timeout
The object wraps the registered node information together with the registration time, the last renewal time, and the timeout. A renewal simply updates lastUpdateTimestamp. Expiry is determined by checking whether System.currentTimeMillis() - lastUpdateTimestamp > duration holds; if it does, the node is considered timed out and is evicted.
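The renew/expire logic above fits in a small model. Field names follow the snippet; the constructor and the renew/isExpired method names are assumptions for illustration, not necessarily SOFARegistry's actual signatures:

```java
// Minimal model of the renewal wrapper: a renewal refreshes the timestamp,
// and expiry is the single comparison quoted in the text.
public class RenewDecorate<T> {
    private final T renewal;                              // the wrapped node object
    private final long beginTimestamp = System.currentTimeMillis(); // registration time
    private volatile long lastUpdateTimestamp = beginTimestamp;     // last renewal time
    private final long duration;                          // timeout in millis

    public RenewDecorate(T renewal, long durationMillis) {
        this.renewal = renewal;
        this.duration = durationMillis;
    }

    // a renewal just bumps lastUpdateTimestamp
    public void renew() {
        lastUpdateTimestamp = System.currentTimeMillis();
    }

    // the expiry test: now - lastUpdateTimestamp > duration
    public boolean isExpired() {
        return System.currentTimeMillis() - lastUpdateTimestamp > duration;
    }

    public T getRenewal() { return renewal; }
}
```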
As with registration, renewal requests are handled by RenewNodesRequestHandler, and the renewal operation is ultimately delegated to the StoreService. In addition, if the node cannot be found when a renewal arrives, a node registration is triggered.
Eviction is done by a scheduled task. When MetaServer starts, several scheduled tasks are launched (see ExecutorManager#startScheduler for details). One of them periodically calls Registry#evict, which traverses the stored Map, collects the list of expired nodes, and calls StoreService#removeNodes to remove them from the Repository. This also triggers a change notification. By default, the task runs every 3 seconds.
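A sketch of that periodic eviction pass, under the assumption of a plain map from node name to last renewal time (the Evictor class and its method names are illustrative, not the actual ExecutorManager/Registry code):

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.stream.Collectors;

// Sketch of the scheduled eviction: scan the map, collect expired entries,
// remove them (which in the real system also fires change notifications).
public class Evictor {
    private final Map<String, Long> lastRenewal = new ConcurrentHashMap<>();
    private final long timeoutMillis;

    public Evictor(long timeoutMillis) {
        this.timeoutMillis = timeoutMillis;
    }

    public void renew(String node) {
        lastRenewal.put(node, System.currentTimeMillis());
    }

    // one pass of the scheduled task: find and remove expired nodes
    public List<String> evict() {
        long now = System.currentTimeMillis();
        List<String> expired = lastRenewal.entrySet().stream()
                .filter(e -> now - e.getValue() > timeoutMillis)
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
        expired.forEach(lastRenewal::remove);
        return expired;
    }

    // run the pass every 3 seconds, mirroring the default interval above
    public ScheduledExecutorService start() {
        ScheduledExecutorService s = Executors.newSingleThreadScheduledExecutor();
        s.scheduleWithFixedDelay(this::evict, 3, 3, TimeUnit.SECONDS);
        return s;
    }
}
```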
Node list change push
As mentioned above, after a node registration request is processed, a node change event is also stored, namely:
dataConfirmStatusService.putConfirmNode(dataNode, DataOperator.ADD);
DataConfirmStatusService is also a store synchronized by the Raft protocol. Its storage structure is:
BlockingQueue expectNodesOrders = new LinkedBlockingQueue();
ConcurrentHashMap expectNodes = new ConcurrentHashMap();
expectNodesOrders stores the node change events.
expectNodes stores the nodes that still need to acknowledge a change event; a NodeOperator is removed from expectNodesOrders only after it has been confirmed by the other nodes.
So once events are stored in the BlockingQueue, where are they consumed? Reading the source code reveals that it is not, as one might expect, a thread blocking on the queue.
Instead, a scheduled task started in ExecutorManager polls the queue for data: Registry#pushNodeListChange is called periodically to take the head of the queue and consume it. Data and Session each have their own task. The specific process is shown in the following figure:
First, peek at the head of the queue (expectNodesOrders); if it is null, return directly.
Get the list of nodes in the current data center and store it in the confirmation table (expectNodes).
Submit a node change push task (firePushXxListTask).
Process the task, i.e. call the pushXxxNode method of XxNodeService, which obtains all node connections through ConnectionHandler and sends the node list.
When a reply is received, if confirmation is required, call StoreService#confirmNodeStatus to remove the replying node from expectNodes.
Once all nodes have been removed from expectNodes, the operation is removed from expectNodesOrders and is considered processed.
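The steps above can be sketched as a small confirm-queue: peek the head event, record which peers must acknowledge it, and only dequeue it once every peer has confirmed. This is an illustrative condensation (ChangeConfirmer and the String-typed events are assumptions), not the real DataConfirmStatusService:

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of the confirm-before-dequeue flow: an event stays at the head of
// expectNodesOrders until every expected peer has acknowledged it.
public class ChangeConfirmer {
    private final BlockingQueue<String> expectNodesOrders = new LinkedBlockingQueue<>();
    private final Map<String, Set<String>> expectNodes = new ConcurrentHashMap<>();

    public void putConfirmNode(String event) {
        expectNodesOrders.offer(event);
    }

    // one poll of the scheduled push task: peek the head (step 1) and record
    // the peers that must confirm it (step 2); the push itself (steps 3-4)
    // would then send the node list over each connection
    public String pushNodeListChange(Set<String> currentPeers) {
        String head = expectNodesOrders.peek();
        if (head == null) return null;
        expectNodes.computeIfAbsent(head, k -> ConcurrentHashMap.newKeySet())
                   .addAll(currentPeers);
        return head;
    }

    // step 5: a peer's reply removes it from the pending set;
    // step 6: when the set is empty, the event is dequeued as processed
    public boolean confirmNodeStatus(String event, String peer) {
        Set<String> pending = expectNodes.get(event);
        if (pending == null) return false;
        pending.remove(peer);
        if (pending.isEmpty()) {
            expectNodes.remove(event);
            expectNodesOrders.remove(event);
            return true;
        }
        return false;
    }
}
```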
Node list query
The Data, Meta, and Session Servers each provide a getNodesRequestHandler to handle queries for the current node list. It essentially reads the data back from the underlying Repository, so it is not discussed further here. For the structure of the returned result, see the NodeChangeResult class, which contains the node list and version number for each data center.
Raft-based storage
The backing Repository can be regarded as the state machine of SOFAJRaft: any operation on its Map is synchronized within the cluster by the Raft protocol to achieve consistency. From the source code, however, all operations appear to call RepositoryService and similar interfaces directly. So how is this combined with the Raft service?
Looking at the source code, you will find that wherever RepositoryService is referenced, the field is annotated with @RaftReference, while the concrete implementation class of RepositoryService is annotated with @RaftService. This is the key: the processing class is RaftAnnotationBeanPostProcessor. The specific process is as follows:
In the processRaftReference method, any field annotated with @RaftReference is replaced by a dynamic proxy. The proxy implementation can be seen in the ProxyHandler class: the method call is wrapped into a ProcessRequest and sent to the RaftServer through the RaftClient.
Classes annotated with @RaftService are registered in the Processor class, keyed by serviceId (interfaceName + uniqueId). After the RaftServer receives a request, it applies it to the SOFAJRaft state machine, whose concrete implementation is ServiceStateMachine; the state machine calls the Processor, which looks up the implementation class by serviceId and executes the corresponding method call.
Of course, if the local node is the leader, some query requests do not need to go through the Raft protocol and can call the local implementation directly.
This process is very similar to an RPC call. The caller's method invocation does not actually execute the method; it is wrapped into a request and sent to the Raft service, and the Raft state machine performs the real method call, such as storing the node information in the Map. The Raft protocol then guarantees data consistency across all nodes.
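The call-forwarding shape can be demonstrated with a JDK dynamic proxy. This is only a sketch of the pattern; SOFARegistry's ProxyHandler/ServiceStateMachine additionally serialize the call, run it through Raft log replication, and apply it on every node, none of which is shown here:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.HashMap;
import java.util.Map;

// Sketch of the @RaftReference mechanism: a dynamic proxy intercepts the
// interface call, packs it into a request keyed by serviceId, and a
// "server-side" dispatcher invokes the real implementation by reflection.
public class RaftProxyDemo {

    public interface RepositoryService {
        String put(String key, String value);
    }

    // server side: serviceId -> real implementation (the state machine's job)
    static final Map<String, Object> PROCESSORS = new HashMap<>();

    static Object dispatch(String serviceId, String methodName, Object[] args) throws Exception {
        Object impl = PROCESSORS.get(serviceId);
        for (Method m : impl.getClass().getMethods()) {
            if (m.getName().equals(methodName)) {
                return m.invoke(impl, args);   // the "real" method call
            }
        }
        throw new NoSuchMethodException(methodName);
    }

    // client side: the proxy that stands in for the @RaftReference field
    @SuppressWarnings("unchecked")
    static <T> T refer(Class<T> iface, String serviceId) {
        InvocationHandler h = (proxy, method, args) ->
                dispatch(serviceId, method.getName(), args);  // "send to RaftServer"
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(), new Class<?>[]{iface}, h);
    }
}
```

The caller holds only the interface; the proxy decides where the call actually runs, which is exactly why the storage code can be written as plain method calls on RepositoryService.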
Summary
In distributed systems, cluster membership management is an unavoidable problem. Some clusters write the member list directly into configuration files or a configuration center, while others use ZooKeeper or etcd to maintain cluster metadata. SOFARegistry chose to build an independent MetaServer on top of the Raft consensus protocol to maintain the cluster list and push changes in real time, improving the flexibility of cluster management and the robustness of the cluster.
That concludes this look at what MetaServer is in SOFARegistry. Have you picked up some new knowledge or skills? If you want to learn more or enrich your knowledge, you are welcome to follow the industry information channel.