Shulou (Shulou.com) — SLTechnology News&Howtos, updated 2025-03-31
This article shows you how to get started with failover clustering in Windows Server 2008. It is concise and easy to follow, and I hope you take something useful away from the details below.
I. Preparation
A failover cluster must meet certain hardware, software, and network infrastructure requirements, and it requires an administrative account with appropriate domain permissions. The details are as follows:
(I) Hardware requirements for failover clusters
In a failover cluster, the following hardware is required:
(1) Server: it is recommended that you use a set of matching computers that contain the same or similar components.
Note that Microsoft supports a failover clustering solution only if all of its hardware components are marked "Certified for Windows Server 2008". In addition, the complete configuration (servers, network, and storage) must pass all tests in the Validate a Configuration Wizard, which is included in the Failover Cluster Management snap-in.
(2) Network adapters and cables (for network communications): like other components in the failover clustering solution, the network hardware must be marked as certified for Windows Server 2008. If you use iSCSI, the network adapter must be dedicated to network communication or iSCSI, not both.
In the network infrastructure that connects your cluster nodes, avoid having single points of failure. There are multiple ways to accomplish this: you can connect your cluster nodes through several distinct networks, or you can connect them through one network that is built with teamed network adapters, redundant switches, redundant routers, or similar hardware that removes single points of failure.
Note: if you connect the cluster nodes through a single network, the network passes the redundancy requirement in the Validate a Configuration Wizard. However, the wizard's report will include a warning that the network should not have single points of failure.
(3) device controller or corresponding adapter for storage:
- For Serial Attached SCSI or Fibre Channel: if you use Serial Attached SCSI or Fibre Channel, the mass-storage device controllers that are dedicated to the cluster storage should be identical in all clustered servers. They should also use the same firmware version.
Note: with Windows Server 2008, you cannot use parallel SCSI to connect storage to a clustered server.
- For iSCSI: if you use iSCSI, each clustered server must have one or more network adapters or host bus adapters that are dedicated to the cluster storage. The network you use for iSCSI cannot be used for network communication. In all clustered servers, the network adapters you use to connect to the iSCSI storage target should be identical, and Gigabit Ethernet or faster is recommended.
For iSCSI, teamed network adapters cannot be used, because they are not supported with iSCSI.
-Storage: shared storage compatible with Windows Server 2008 must be used.
In most cases, the storage should contain multiple separate disks (LUNs) that are configured at the hardware level. For some clusters, one disk functions as the witness disk; other disks contain the files required by the clustered services or applications. Storage requirements include the following:
-to use native disk support included in a failover cluster, use basic disks instead of dynamic disks.
- It is recommended that you format the partitions with NTFS (for the witness disk, the partition must be NTFS).
-for the partition form of the disk, you can choose to use the master boot record (MBR) or the GUID partition table (GPT).
A witness disk is a disk in cluster storage that is designated to hold a copy of the cluster configuration database. A failover cluster has a witness disk only if the witness disk is specified as part of the quorum configuration.
(II) Software requirements for failover clustering
All servers in a failover cluster must be running the same version of Windows Server 2008. The server can run any of the following versions of the operating system:
-Windows Server 2008 Enterprise
-Windows Server 2008 Datacenter
- Server Core installation of Windows Server 2008 Enterprise
- Server Core installation of Windows Server 2008 Datacenter
In addition, all servers must run the same hardware version of the operating system (32-bit, x64-based, or Itanium-based architecture). For example, if a server is running the x64-based version of Windows Server 2008 Enterprise, all servers in the failover cluster must be running that version.
All servers should also have the same software updates (patches) and Service Pack.
(III) Network infrastructure and domain account requirements for failover clusters
A failover cluster requires the following network infrastructure, and an administrative account with the following domain permissions:
- Network settings and IP addresses: when you use identical network adapters for a network, also use identical communication settings on those adapters (such as speed, duplex mode, flow control, and media type). In addition, compare the settings between each network adapter and the switch it connects to, and make sure no settings conflict.
If you have private networks that are not routed to the rest of your network infrastructure, ensure that each of these private networks uses a unique subnet. This is necessary even if you give each network adapter a unique IP address. For example, if you have two cluster nodes in a headquarters on one physical network and two more nodes in a branch office on a separate physical network, do not specify 10.0.0.0/24 for both networks, even if each adapter has a unique IP address.
- DNS: the servers in the cluster must use Domain Name System (DNS) for name resolution. The DNS dynamic update protocol can be used.
- Domain role: all servers in the cluster must be in the same Active Directory domain. As a best practice, all clustered servers should have the same domain role (member server or domain controller). The recommended role is member server.
- Domain controller: it is recommended that your clustered servers be member servers. If they are, another server acts as the domain controller in the domain that contains the failover cluster.
-clients: for clients, there are no specific requirements other than the obvious requirements for connectivity and compatibility: clients must be able to connect to the clustered server and they must run software compatible with the services provided by the clustered server.
- Account used to administer the cluster: when you first create a cluster or add servers to it, you must be logged on to the domain with an account that has administrator rights on all servers in that cluster. The account does not need to be a Domain Admins account; it can be a Domain Users account that is in the local Administrators group on each clustered server. In addition, if the account is not a Domain Admins account, the account (or the group of which it is a member) must be delegated the Create Computer Objects permission in the domain.
Note: compared to Windows Server 2003, the way the cluster service runs in Windows Server 2008 has changed. In Windows Server 2008, there is no cluster service account. The cluster service runs automatically in a specific context that provides the specific permissions required for the service (similar to the local system context, but with reduced permissions).
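The subnet-uniqueness requirement for private networks above can be sanity-checked with a short script. The sketch below is purely illustrative (the network names are made up); it uses Python's standard `ipaddress` module to flag overlapping subnets:

```python
# Hypothetical sanity check for the rule above: every non-routed private
# cluster network must sit on its own subnet, even when each adapter
# already has a unique IP address. Network names here are made up.
import ipaddress

def overlapping_subnets(networks):
    """Return pairs of network names whose subnets overlap."""
    nets = [(name, ipaddress.ip_network(cidr)) for name, cidr in networks]
    clashes = []
    for i in range(len(nets)):
        for j in range(i + 1, len(nets)):
            if nets[i][1].overlaps(nets[j][1]):
                clashes.append((nets[i][0], nets[j][0]))
    return clashes

# The headquarters/branch-office example: both networks on 10.0.0.0/24
# is a conflict, even though each adapter has a unique address.
bad = [("HQ-private", "10.0.0.0/24"), ("Branch-private", "10.0.0.0/24")]
good = [("HQ-private", "10.0.0.0/24"), ("Branch-private", "10.0.1.0/24")]
print(overlapping_subnets(bad))   # one clashing pair
print(overlapping_subnets(good))  # no clashes
```

A check like this is useful when planning addressing for multi-site clusters, before the Validate a Configuration Wizard is ever run.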
II. Installation
You may be familiar with the concept of server cluster, which has been given a new name in Windows Server 2008: failover cluster. A cluster is a group of independent computers that work together to improve the availability of services and applications. Multiple clustered servers (called nodes) are connected by physical cables and software. If one of the nodes fails, the other node starts providing services through a process called failover.
In Windows Server 2008, failover clusters have been improved to make them simpler, more secure, and more stable. Clusters are easier to set up and manage; security and networking in clusters have been improved, and so has the way a failover cluster communicates with storage.
It is important to note that the failover clustering feature is included in Windows Server 2008 Enterprise and Windows Server 2008 Datacenter. This feature is not available in Windows Server 2008 Standard or Windows Web Server 2008.
Install the Failover Clustering feature
Log on as an administrator and install the Failover Clustering feature by using the Initial Configuration Tasks window or the Add Features command in Server Manager. The specific steps are as follows:
1. If you recently installed Windows Server 2008 on the server and the Initial Configuration Tasks interface is displayed, under Customize This Server, click Add Features. (Then skip to step 3.)
2. If Initial Configuration Tasks is not displayed, add the feature through Server Manager:
- If Server Manager is already running, under Features Summary, click Add Features.
- If Server Manager is not running, click Start, click Administrative Tools, click Server Manager, and then, if prompted for permission to continue, click Continue. Then, under Features Summary, click Add Features.
3. In the Add Features Wizard, click Failover Clustering, and then click Install.
4. When the wizard finishes, close it.
5. Repeat the process for each server you want to include in the cluster.
At this point, the failover cluster feature is installed, and then the failover cluster can be created.
III. Creation
When your hardware environment fully meets the requirements and the Failover Clustering feature has been added on each server, you can create the failover cluster.
Create a new failover cluster
1. Verify that the hardware is connected and the hardware configuration is verified as described in the following topics:
-prepare hardware before verifying the failover cluster
-verify a new or existing failover cluster.
Note: Microsoft supports a failover clustering solution only if the complete configuration (servers, network, and storage) can pass all the tests in the Validate a Configuration Wizard. In addition, all hardware components in the solution must be marked "Certified for Windows Server 2008".
2. In the Failover Cluster Management snap-in, confirm that Failover Cluster Management is selected, and then, under Management, click Create a Cluster.
3. Follow the instructions in the wizard to specify:
- The servers to include in the cluster.
- The name of the cluster.
- Any IP address information that is not automatically supplied by your DHCP settings.
4. After the wizard runs and the Summary page appears, click View Report if you want to see a report of the tasks the wizard performed.
To view the report after you close the wizard, see the following folder, where SystemRoot is the location of the operating system (for example, C:\Windows):
SystemRoot\Cluster\Reports\
Tip: to open the Failover Cluster Management snap-in, click Start, click Administrative Tools, and then click Failover Cluster Management. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Continue.
Add a server to a failover cluster
1. Verify that the networks and storage are connected to the server you want to add.
2. Verify the hardware configuration, including the existing cluster nodes and the proposed new node.
Important: Microsoft supports a failover clustering solution only if the complete configuration (servers, network, and storage) can pass all the tests in the Validate a Configuration Wizard. In addition, all hardware components in the solution must be marked "Certified for Windows Server 2008".
3. If the cluster you want to configure is not displayed in the Failover Cluster Management snap-in, right-click Failover Cluster Management in the console tree, click Manage a Cluster, and then select or specify the desired cluster.
4. Select the cluster, and in the Actions pane, click Add Node.
5. Follow the instructions in the wizard to specify the server to add to the cluster.
6. After the wizard runs and the Summary page appears, click View Report if you want to see a report of the tasks the wizard performed.
IV. Quorum configuration
The quorum configuration in a failover cluster determines the number of failures that the cluster can sustain. If an additional failure occurs beyond that number, the cluster must stop running.
The significance of quorum
Network problems can interfere with communication between cluster nodes. A small set of nodes might be able to communicate with one another in one functioning part of the network, but not with a different set of nodes in another part of the network. This can cause serious problems. In this "split" situation, at least one of the sets of nodes must stop running as a cluster.
To prevent the problems caused by a split in the cluster, the cluster software requires that any set of nodes running as a cluster use a voting algorithm to determine whether, at a given time, that set has quorum. Because a given cluster has a specific set of nodes and a specific quorum configuration, the cluster knows how many "votes" constitute a majority (that is, a quorum). If the number of votes drops below the majority, the cluster stops running. Nodes will still listen for the presence of other nodes, in case another node appears again on the network, but they will not begin to function as a cluster until quorum is regained.
For example, in a five-node cluster that uses Node Majority, consider what happens if nodes 1, 2, and 3 can communicate with each other but not with nodes 4 and 5. Nodes 1, 2, and 3 constitute a majority, and they continue to run as a cluster. Nodes 4 and 5 are a minority and stop running as a cluster. If node 3 then loses communication with the other nodes, all nodes stop running as a cluster. However, all functioning nodes continue to listen for communication, so that when the network begins working again, the cluster can form and resume running.
Note that the full function of the cluster depends not only on quorum, but also on the capacity of each node to support the services and applications that fail over to it. For example, a five-node cluster still has quorum after two nodes fail, but it continues serving clients only if each remaining node has enough capacity to support the services and applications that fail over to it.
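The majority-vote rule in the five-node example above can be sketched as a toy model (illustrative only; a real cluster's voting is internal to the Cluster service):

```python
# Toy model of the quorum vote for a five-node Node Majority cluster.
def has_quorum(votes_present, total_votes):
    """A set of nodes has quorum only with a strict majority of the votes."""
    return votes_present > total_votes // 2

TOTAL = 5  # five nodes, one vote each under Node Majority

print(has_quorum(3, TOTAL))  # nodes 1-3: True, they keep running as the cluster
print(has_quorum(2, TOTAL))  # nodes 4-5: False, they stop running as a cluster
print(has_quorum(2, TOTAL))  # after node 3 also fails, no set of 2 has quorum
```

This is why, after node 3 fails in the example, every remaining set of nodes stops running as a cluster until quorum can be regained.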
(I) Quorum configuration options
You can choose from four possible quorum configurations:
- Node Majority (recommended for clusters with an odd number of nodes)
The cluster can sustain failures of half the nodes (rounding up) minus one. For example, a seven-node cluster can sustain three node failures.
- Node and Disk Majority (recommended for clusters with an even number of nodes)
The cluster can sustain failures of half the nodes (rounding up) if the witness disk remains online. For example, a six-node cluster in which the witness disk is online can sustain three node failures.
The cluster can sustain failures of half the nodes (rounding up) minus one if the witness disk goes offline or fails. For example, a six-node cluster with a failed witness disk can sustain two (6/2 - 1 = 2) node failures.
- Node and File Share Majority (for clusters with special configurations)
Works in a way similar to Node and Disk Majority, but the cluster uses a witness file share instead of a witness disk.
Note that if you use Node and File Share Majority, at least one of the available cluster nodes must contain a current copy of the cluster configuration before you can start the cluster. Otherwise, you must force the starting of the cluster through a particular node.
- No Majority: Disk Only (not recommended)
The cluster can sustain failures of all nodes except one (if the disk remains online). However, this configuration is not recommended because the disk can become a single point of failure.
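The failure-tolerance arithmetic for the options above can be collected into one small illustrative function (the mode names are shorthand, not official identifiers):

```python
# Failure tolerance for the quorum modes described above.
import math

def tolerated_node_failures(nodes, mode, witness_online=True):
    """Number of node failures a cluster can sustain in each quorum mode."""
    half_up = math.ceil(nodes / 2)  # half the nodes, rounding up
    if mode == "node_majority":
        return half_up - 1
    if mode == "node_and_disk_majority":  # also Node and File Share Majority
        return half_up if witness_online else half_up - 1
    if mode == "disk_only":
        return nodes - 1  # all but one node, as long as the disk stays online
    raise ValueError(f"unknown mode: {mode}")

print(tolerated_node_failures(7, "node_majority"))                  # 3
print(tolerated_node_failures(6, "node_and_disk_majority"))         # 3
print(tolerated_node_failures(6, "node_and_disk_majority", False))  # 2
print(tolerated_node_failures(3, "disk_only"))                      # 2
```

The outputs match the worked examples in the list: seven-node Node Majority tolerates three failures, and a six-node Node and Disk Majority cluster tolerates three failures with the witness online but only two without it.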
(II) Quorum configuration illustrations
The following figures illustrate how three of the quorum configurations work. The fourth configuration is described in words only, because it is similar to the Node and Disk Majority illustration.
Note: in the illustrations, for every configuration other than Disk Only, observe whether a majority of the relevant elements are in communication (regardless of the number of elements). While they are, the cluster continues to function; when they are not, the cluster stops running. (figure 1)
Node Majority quorum configuration, three nodes
As shown in the preceding illustration, in a cluster with the Node Majority configuration, only the nodes are counted when calculating a majority. (figure 2)
Node and Disk Majority quorum configuration, four nodes (plus disk)
As shown in the preceding illustration, in a cluster with the Node and Disk Majority configuration, the nodes and the witness disk are counted when calculating a majority.
Node and File Share Majority quorum configuration
In a cluster with the Node and File Share Majority configuration, the nodes and the witness file share are counted when calculating a majority. This is similar to the Node and Disk Majority quorum configuration shown in the preceding illustration, except that the witness is a file share that all nodes in the cluster can access, rather than a disk in cluster storage. (figure 3)
No Majority (Disk Only) quorum configuration, three nodes
In a cluster with the Disk Only configuration, the number of nodes does not affect how quorum is achieved: the disk is the quorum. However, if communication with the disk is lost, the cluster becomes unavailable.
(III) Select the quorum option for the cluster
1. If the cluster you want to configure is not displayed in the Failover Cluster Management snap-in, right-click Failover Cluster Management in the console tree, click Manage a Cluster, and then select or specify the desired cluster.
2. With the cluster selected, in the Actions pane, click More Actions, and then click Configure Cluster Quorum Settings.
3. Follow the instructions in the wizard to select the quorum configuration for the cluster. If you choose a configuration that includes a witness disk or witness file share, follow the instructions to specify the witness.
4. After the wizard runs and the Summary page appears, click View Report if you want to see a report of the tasks the wizard performed.
V. Management
Once a failover cluster has been created, you will inevitably need to operate on it in one way or another, so it is important to manage it correctly. This article focuses on typical management operations for failover clusters.
(I) Bring a clustered service or application online or take it offline
During maintenance or diagnosis, you may need to bring a clustered service or application online or take it offline. Doing so does not trigger failover, and the Cluster service handles the process in an orderly way: for example, if a particular clustered application requires a specific disk, the Cluster service ensures that the disk is available before the application is started. The specific steps are as follows:
A. Use the Windows interface to bring a clustered service or application online or take it offline
1. In the Failover Cluster Management snap-in, if the cluster you want to manage is not displayed, right-click Failover Cluster Management in the console tree, click Manage a Cluster, and then select or specify the cluster you want.
2. If the console tree is collapsed, expand it under the cluster you want to manage.
3. Expand Services and Applications in the console tree.
4. Check the status (shown in the Status column of the center pane) of the service or application that you want to bring online or take offline.
5. Right-click the service or application.
6. Click the appropriate command: Bring this service or application online, or Take this service or application offline.
B. Use a command prompt window to bring a clustered service or application online or take it offline
1. To open a command prompt window, click Start, right-click Command Prompt, and then click Run as administrator (or click Open).
2. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Continue.
3. Check the status of the clustered services and applications by typing the following command:
CLUSTER [cluster-name] GROUP /STATUS
4. Type one of the following commands:
- To bring a clustered service or application online, type:
CLUSTER [cluster-name] GROUP "service-or-application-name" /ONLINE[:node-name] [/WAIT[:timeout-seconds]]
- To take a clustered service or application offline, type:
CLUSTER [cluster-name] GROUP "service-or-application-name" /OFFLINE [/WAIT[:timeout-seconds]]
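To make the cluster.exe syntax concrete, the sketch below composes the command lines as strings (a hypothetical helper with made-up cluster, group, and node names; it only builds the text, it does not run anything):

```python
# Hypothetical helper that composes cluster.exe GROUP command lines,
# so the syntax can be checked before typing it at a prompt.
def cluster_group_cmd(group, action, cluster=None, node=None, wait=None):
    """Build a CLUSTER ... GROUP ... /ONLINE|/OFFLINE command string."""
    parts = ["CLUSTER"]
    if cluster:
        parts.append(cluster)          # cluster name is optional
    parts += ["GROUP", f'"{group}"']   # group names with spaces need quotes
    flag = {"online": "/ONLINE", "offline": "/OFFLINE"}[action]
    if node and action == "online":
        flag += f":{node}"             # optional target node for /ONLINE
    parts.append(flag)
    if wait is not None:
        parts.append(f"/WAIT:{wait}")  # optional timeout in seconds
    return " ".join(parts)

print(cluster_group_cmd("File Server FS1", "online",
                        cluster="MyCluster", node="NODE2", wait=120))
# CLUSTER MyCluster GROUP "File Server FS1" /ONLINE:NODE2 /WAIT:120
print(cluster_group_cmd("File Server FS1", "offline"))
# CLUSTER GROUP "File Server FS1" /OFFLINE
```

The quoting detail matters: as noted later for /MOVE, a group name containing spaces (such as "Cluster Group") must be enclosed in quotation marks.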
(II) Pause or resume nodes in a failover cluster
When you pause a node, existing groups and resources remain online, but additional groups and resources cannot be brought online on that node. A node is usually paused to apply software updates to it. If you need to perform extensive diagnosis or maintenance on a cluster node, simply pausing it may not be workable; in that case, you can stop the Cluster service on the node instead.
A. Pause or resume nodes in a failover cluster using the Windows interface
1. In the Failover Cluster Management snap-in, if the cluster to be managed is not displayed, right-click Failover Cluster Management in the console tree, click Manage a Cluster, and then select or specify the desired cluster.
2. If the console tree is collapsed, expand it under the cluster you want to manage.
3. Expand the console tree under Nodes.
4. Right-click the node you want to pause or resume, and then click Pause or Resume.
B. Pause or resume nodes in a failover cluster using a command prompt window
1. To open a command prompt window, click Start, right-click Command Prompt, and then click Run as administrator (or click Open).
2. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Continue.
3. Type one of the following commands:
- To pause a node, type:
CLUSTER [cluster-name] NODE node-name /PAUSE
- To resume a node, type:
CLUSTER [cluster-name] NODE node-name /RESUME
(III) Start or stop the Cluster service on a cluster node
During some troubleshooting or maintenance operations, you may need to stop and then restart the Cluster service on a cluster node. When you stop the Cluster service on a node, the services and applications on that node fail over, and the node stops functioning in the cluster until the Cluster service is restarted. If you want a particular node to keep supporting the services and applications it currently owns while preventing other services and applications from failing over to it, pause the node instead (do not stop the Cluster service).
A. Use the Windows interface to start or stop the Cluster service on a cluster node
1. In the Failover Cluster Management snap-in, if the cluster to be managed is not displayed, right-click Failover Cluster Management in the console tree, click Manage a Cluster, and then select or specify the desired cluster.
2. If the console tree is collapsed, expand it under the cluster you want to manage.
3. To keep disruption to clients to a minimum, move the services and applications currently owned by the node to another node before stopping the Cluster service on it. To do this, expand the console tree under the cluster you want to manage, and then expand Services and Applications. Click each service or application and view its current owner in the center pane. If the owner is the node on which you want to stop the Cluster service, right-click the service or application, click Move this service or application to another node, and then select the node.
4. Expand the console tree under Nodes.
5. Right-click the node you want to start or stop, and then click More Actions.
6. Click the appropriate command: to start the service, click Start Cluster Service; to stop the service, click Stop Cluster Service.
B. Use a command prompt window to start or stop the Cluster service on a cluster node
1. To open a command prompt window, click Start, right-click Command Prompt, and then click Run as administrator (or click Open).
2. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Continue.
3. To keep disruption to clients to a minimum, check the status of the clustered services and applications before stopping the Cluster service on the node, and move any services and applications that the node currently owns. To do this:
1) Type: CLUSTER [cluster-name] GROUP /STATUS
2) Then, for each service or application you want to move, type: CLUSTER [cluster-name] GROUP "service-or-application-name" /MOVE[:node-name]. If you move a group called "Cluster Group", be sure to enclose the name in quotation marks.
3) To confirm that the services and applications have moved as needed, press the UP ARROW key one or more times until you see the following command, and then press ENTER: CLUSTER [cluster-name] GROUP /STATUS
4. Type a command in one of the following forms:
- To start the Cluster service on a node, type:
CLUSTER [cluster-name] NODE node-name /START [/WAIT[:timeout-seconds]]
- To stop the Cluster service on a node, type:
CLUSTER [cluster-name] NODE node-name /STOP [/WAIT[:timeout-seconds]]
(IV) View the events and logs of a failover cluster
By viewing events through the Failover Cluster Management snap-in, you can view events for all nodes in the cluster at once, rather than for one node at a time. By generating and viewing logs from a command prompt window, you can view a detailed list (trace) of the actions recently taken by the failover cluster software.
A. Use the Windows interface to view the events and logs of a failover cluster
1. In the Failover Cluster Management snap-in, if the cluster is not displayed, right-click Failover Cluster Management in the console tree, click Manage a Cluster, and then select or specify the desired cluster.
2. If the console tree is collapsed, expand it under the cluster whose events you want to view.
3. In the console tree, right-click Cluster Events, and then click Query.
4. In the Cluster Events Filter dialog box, select the criteria that the events you want to display must meet.
To return to the default criteria, click the Reset button.
5. Click OK.
6. To sort the events, click a column heading, such as Level or Date and Time.
7. To view a specific event, click the event and view the details in the Event Details pane.
B. Use a command prompt window to view the detailed logs of a failover cluster
1. To open a command prompt window, click Start, right-click Command Prompt, and then click Run as administrator (or click Open).
2. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Continue.
3. Enter a command in the following format:
CLUSTER [cluster-name] LOG /GEN /COPY:"pathname"
A detailed trace log for each node is generated and copied to the path you specify.
4. To change to the folder that the logs were copied to, note the pathname you specified in the previous step and enter a command in the following format:
CD "pathname"
5. Type:
DIR
6. To view a log in Notepad, find the name of the log file and type:
NOTEPAD "filename"
VI. Modifying the network configuration
For each network that is physically connected to the servers (nodes) in a failover cluster, you can specify whether the cluster uses that network and, if it does, whether the network is used only by the nodes or also by clients. Note that in this context, "clients" include not only the client computers that access the clustered services and applications, but also the remote computers used to administer the cluster.
If you use a network for iSCSI, do not use it for network communication in the cluster.
Modify the network settings of a failover cluster
1. If the cluster you want to configure is not displayed in the Failover Cluster Management snap-in, right-click Failover Cluster Management in the console tree, click Manage a Cluster, and then select or specify the desired cluster.
2. If the console tree is collapsed, expand it under the cluster you want to configure.
3. Expand Networks.
4. Right-click the network whose settings you want to modify, and then click Properties.
5. If necessary, change the name of the network.
6. Select one of the following options:
- Allow the cluster to use this network
If you select this option and you want the network to be used only by the nodes (not by clients), clear Allow clients to connect through this network; otherwise, make sure it is selected.
- Do not allow the cluster to use this network
Select this option if the network is used only for iSCSI (communicating with storage) or only for backup. (These are among the most common reasons for choosing this option.)
Appendix: steps for adding storage to the failover cluster
1. If the cluster you want to configure is not displayed in the Failover Cluster Management snap-in, right-click Failover Cluster Management in the console tree, click Manage a Cluster, and then select or specify the desired cluster.
2. If the console tree is collapsed, expand it under the cluster you want to configure.
3. Right-click Storage, and then click Add a disk.
4. Select the disk or disks you want to add.
The above covers how to get started with failover clustering in Windows Server 2008.