2025-02-14 Update From: SLTechnology News&Howtos
Shulou(Shulou.com) 06/01 Report--
Many readers are unsure how a Windows Server failover cluster actually decides to fail over. This article walks through the quorum model, quorum configuration, and quorum voting; by the end you should understand how the cluster determines whether to stay online.
Windows Server Failover Cluster (WSFC) uses quorum voting to determine the health of the cluster and, based on the result, either performs automatic failover or takes the cluster offline. When a node in the cluster fails, the other nodes take over and continue to provide services; but when communication between nodes breaks down, or most nodes fail, the cluster stops providing services. How many node failures can the cluster tolerate? That is determined by the quorum configuration, which applies the majority rule: as long as the number of healthy voting members reaches the quorum (a majority votes in favor), the cluster continues to provide services; otherwise it stops. While service is stopped, the healthy nodes continuously monitor whether the failed nodes have recovered; once the number of healthy nodes again reaches the quorum, the cluster returns to normal and resumes service. Quorum voting is enabled by default (Cluster Managed Voting: Enable).
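As a rough sketch, the majority rule described above can be expressed as a simple predicate (a hypothetical helper for illustration, not a WSFC API):

```python
def cluster_has_quorum(votes_online: int, total_votes: int) -> bool:
    """Majority rule: the cluster stays online only while more than
    half of all configured votes are still reachable."""
    return votes_online > total_votes / 2

# A 5-vote cluster tolerates the loss of 2 votes but not 3.
assert cluster_has_quorum(3, 5)       # 3 of 5 votes: majority, cluster stays online
assert not cluster_has_quorum(2, 5)   # 2 of 5 votes: no majority, cluster goes offline
```

Note that the comparison is strict: with an even total, an exact 50/50 split is not a majority, which is why even-node clusters benefit from a witness.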
I. Quorum Model
The quorum mode is configured at the WSFC cluster level and specifies the method of quorum voting. By default, Failover Cluster Manager automatically recommends a quorum mode based on the number of cluster nodes. The quorum configuration affects the availability of the cluster: enough voting members must remain online, or the cluster goes out of service for lack of quorum.
1. Terminology
Quorum: the predetermined number of voting nodes or witnesses required for the cluster to stay online;
Quorum voting: the voting nodes and witnesses vote; if a majority votes in favor, the cluster is judged to be healthy;
Voting node: a cluster node with voting rights. A voting node voting in favor means that node considers the cluster healthy; however, a single node cannot determine the overall health of the cluster.
Voting witness: besides voting nodes, a shared file or a shared disk can also cast a vote; such a voter is called a witness. A file-share voter is called a File Share Witness; a shared-disk voter is called a Disk Witness.
Quorum node set: the voting nodes and witnesses together form the quorum node set; the health of the cluster as a whole is determined by the votes of the quorum node set.
2. Quorum Modes
Every quorum mode applies the majority rule: all voting members vote, and if more than 50% vote in favor, WSFC considers the cluster healthy, performs failover where needed, and continues to provide services. Otherwise, WSFC considers the cluster to have failed seriously, takes it offline, and stops providing services. According to the composition of the quorum node set, there are four quorum modes:
Node Majority: the voting members are all of the cluster's node servers. If more than half of the voting nodes vote in favor, WSFC judges the cluster healthy.
Node and File Share Majority: like Node Majority, except that a remote file share is also configured as a voting witness; the share is called the quorum file, or witness file. The file share has one vote: if the nodes can connect to the share, it is counted as a vote in favor. WSFC judges the cluster healthy if the nodes plus the file share cast more than half of the votes in favor. As a best practice, the file share witness should not be hosted on any node server in the cluster, and every node must have access to it.
Node and Disk Majority: like Node Majority, except that a shared disk is configured as a voting witness; the disk is called the quorum disk, or witness disk. The quorum disk requires shared storage, and every node in the cluster must mount the same shared disk.
Disk Only: there is no node majority; a shared disk is the only witness, and every node in the cluster must be able to access it. This means that once the quorum disk goes offline, the cluster stops providing services.
The most common quorum modes are Node Majority and Node and File Share Majority. If the cluster has an odd number of nodes, use Node Majority; if it has an even number of nodes, use Node and File Share Majority. The latter requires a shared folder that every node in the cluster has permission to access, and the shared folder must not be created on a cluster node.
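The odd/even recommendation above can be sketched as a small helper (the function is illustrative only, not a real cmdlet or API):

```python
def recommended_quorum_mode(node_count: int) -> str:
    """Mirror of the default recommendation described above: an odd
    number of nodes votes on its own; an even number adds a file-share
    witness so that the total vote count becomes odd."""
    if node_count % 2 == 1:
        return "Node Majority"
    return "Node and File Share Majority"

assert recommended_quorum_mode(3) == "Node Majority"
assert recommended_quorum_mode(4) == "Node and File Share Majority"
```

The point of the witness is simply to make the total number of votes odd, so a tie is impossible.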
II. Quorum Configuration
Open Failover Cluster Manager, right-click the cluster node, click More Actions in the context menu, and select Configure Cluster Quorum Settings. The Quorum Configuration Wizard opens to configure quorum for the cluster.
Step 1: Open the Quorum Configuration Wizard and begin configuring quorum.
Step 2: Select the quorum configuration option.
There are three quorum configuration options:
Use default quorum configuration: leaves the choice of quorum settings to the cluster;
Select the quorum witness: adds a quorum witness to the cluster, while the cluster decides the other quorum management options;
Advanced quorum configuration: the user controls all quorum configuration options.
In this example, select Advanced quorum configuration to control all the options directly.
Step 3: Select the voting configuration.
By default, every node in the cluster is a voting node. Users can adjust the quorum voting settings by explicitly removing voting rights from specific nodes. In this example, the default option All Nodes is selected, meaning every node in the cluster has a vote.
Step 4: Select the quorum witness.
A cluster can add one of two types of quorum witness: a File Share Witness or a Disk Witness. A Disk Witness adds a shared disk as a quorum voter; a File Share Witness adds a file share as a quorum voter. If the other nodes in the cluster can access the witness, the witness is counted as a vote in favor.
Step 5: Select the file share path.
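Assuming a witness simply adds one vote to the total, the extra failure tolerance it buys an even-node cluster can be sketched as follows (an illustrative calculation, not WSFC code):

```python
def tolerated_failures(nodes: int, witness: bool = False) -> int:
    """Number of voting nodes that can fail while a strict majority of
    the total votes (nodes plus optional witness) remains online."""
    total_votes = nodes + (1 if witness else 0)
    majority = total_votes // 2 + 1   # smallest count that is > total/2
    return total_votes - majority

# A 4-node cluster alone tolerates 1 failure; adding a witness
# raises the total to 5 votes, tolerating 2 node failures.
assert tolerated_failures(4) == 1
assert tolerated_failures(4, witness=True) == 2
```

For an odd node count the witness adds nothing (5 nodes and 5 nodes + witness both tolerate 2 failures), which is why the wizard only recommends one for even-node clusters.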
III. Quorum Voting
By default, each node in a failover cluster is a quorum voter with one vote. A node voting in favor means that node considers the cluster healthy; however, a single node cannot determine the health of the cluster as a whole. That is determined by the combined votes of all quorum voters in the cluster.
At any time, from the perspective of any one node, the other nodes may appear offline, mid-failover, or unresponsive because of a network failure. The key to quorum voting is determining the true state of all voting nodes. Except for Disk Only mode, every quorum mode relies on periodic heartbeat communication between voting nodes. Once a node fails to respond to heartbeats, whether due to a network fault, system downtime, hardware damage, or a power failure in the data center, the remaining nodes consider it abnormal and exclude it from the current cluster. WSFC tallies the votes of all voting nodes and determines the health of the cluster.
If the cluster's nodes span different subnets, a node considered failed by the nodes in subnet 1 (because a network fault keeps them from reaching it) may in fact be online and healthy in subnet 2. If the voting nodes in different subnets were each able to form their own quorum, the result would be a split-brain scenario: nodes in the different quorums behave differently, the quorum decisions conflict, WSFC cannot fail over correctly, and data may fall out of sync. Under the majority rule this cannot happen on its own; a split-brain scenario can only arise when a system administrator manually forces quorum (Force Quorum).
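A short sketch of why majority quorum rules out two surviving partitions (illustrative only; the vote counts per partition are assumed inputs):

```python
def surviving_partition(partition_votes, total_votes):
    """Return the index of the partition that keeps quorum, or None.
    A strict majority (> total/2) can be held by at most one partition,
    so two sides of a network split can never both stay online."""
    winners = [i for i, v in enumerate(partition_votes) if v > total_votes / 2]
    return winners[0] if winners else None

# 5 votes split 3 / 2 across two subnets: only the 3-vote side keeps quorum.
assert surviving_partition([3, 2], 5) == 0
# 4 votes split 2 / 2: neither side has a majority; the whole cluster goes offline.
assert surviving_partition([2, 2], 4) is None
```

This is exactly why forcing quorum is dangerous: it overrides the majority check that makes at most one partition authoritative.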
IV. Health Detection and Quorum Voting
WSFC performs health detection and quorum voting among the nodes of the cluster. Each node periodically sends heartbeats to detect the health of the other nodes and shares that health data with them. A node that fails to respond to heartbeats is considered abnormal, and all healthy nodes in the cluster soon learn that it has failed. The quorum node set is the combination of voting nodes and witnesses, and the quorum result is decided by a majority of that set. The health of the cluster as a whole is determined by the result of periodic quorum voting, and WSFC performs automatic failover or takes the cluster offline accordingly: if the votes of the quorum node set show that a majority of members are healthy, the cluster fails over and continues to provide services; if only a minority vote in favor, the cluster goes offline.
After reading the above, do you understand how Windows Server failover clustering works? If you want to learn more skills or related content, follow the industry information channel. Thank you for reading!
© 2024 shulou.com SLNews company. All rights reserved.