

Deploying a Failover Cluster on Windows Server 2016




Contents

I. What is a failover cluster?

II. Requirements for failover clustering

III. Fault detection

IV. Deploying the failover cluster

I. What is a failover cluster?

1. Overview of failover clustering

As Internet applications have developed, many companies rely more and more on online services to create value. These critical online services have very strict availability requirements that a single server cannot meet; they can only be delivered with cluster technology.

The Windows Server operating system ships with a technology for this, called failover clustering. A failover cluster is a group of independent computers (nodes) that work together over a network and through cluster software to increase the availability of applications and services. A failover cluster also includes a storage device connected to all nodes, that is, a shared storage device, which holds the cluster's common data and quorum data. A typical small deployment is a two-node failover cluster attached to one shared storage device.

To protect the integrity of the data on the shared storage device, only one node in the failover cluster owns it at a time; ownership passes to the next node only when the current node fails or ownership is transferred manually. This is the functional difference between a failover cluster and a network load balancing cluster: in a failover cluster, generally only one node serves users (the active node) while the remaining nodes stand by as backups, and when the active node goes down a backup node takes over; in a network load balancing cluster, all nodes can serve users at the same time.

2. Characteristics of failover clusters

Failover clusters provide highly available services to applications through resource failover, focusing on maintaining client access to applications and system services.

A failover cluster can support up to 64 nodes and 8000 virtual machines.

Quorum can use a local witness or a Microsoft Azure cloud witness.

A failover cluster requires a shared storage device.

II. Requirements for failover clustering

A failover cluster must meet baseline hardware, software, and network infrastructure requirements, and it requires an administrative account with appropriate domain privileges.

1. Hardware requirements

Servers: it is recommended to use a set of computers with identical or similar configurations; they must be compatible with Windows Server 2016.

Network adapters and cables (for network communication): like the other components in a failover clustering solution, the network hardware must be compatible with Windows Server 2016. If iSCSI is used, each network adapter must be dedicated to either network communication or iSCSI, not both.

The device controller or corresponding adapter used for storage.

Shared storage device: must be compatible with Windows Server 2016 and contain at least two separate volumes. One volume will serve as the witness disk, and the other will hold the files required by the cluster role. Shared storage presented as local disks must be basic disks, not dynamic volumes, and the NTFS file system is recommended.

2. Network infrastructure and account requirements

Network settings: it is recommended to use identical network adapters with identical communication settings on each adapter. Also compare the settings of each network adapter with the switch it connects to, to make sure no settings conflict.

IP addresses: if the cluster's private networks are not routed to other devices, make sure each private network uses a unique subnet, and do not assign the same network segment to different purposes.

DNS: the servers in the cluster must use DNS for name resolution; the DNS dynamic update protocol can be used.

Domain role: it is recommended that all servers in the cluster be in the same Active Directory domain and that all of them be member servers; a domain controller (DC) should not be configured as a cluster node.

Account for managing the cluster: when you first create a cluster or add servers to it, you must be logged on to the domain with an account that has administrator rights on every server in the cluster.

III. Fault detection

Fault detection and prevention are key advantages of failover clustering. When a node or application in the cluster fails, the failover cluster can respond by restarting the failed application or moving the failed system's work to the surviving cluster nodes. Failover cluster fault detection and prevention include bidirectional failover, application failover, parallel recovery, and automatic failback.

The Cluster service detects failures of individual resources or nodes, dynamically moves application, data, and file resources to healthy servers in the cluster, and restarts them there. As a result, resources such as databases, file shares, and applications remain highly available to users and client applications.

A failover cluster detects failures mainly through heartbeats and quorum (arbitration).

1. Heartbeat

Cluster nodes periodically send messages to each other over a dedicated cluster network (by default, every 5 seconds). As long as a node is still working, it keeps sending these periodic messages, which are called heartbeats; the private network that carries them is often called the heartbeat line. Through heartbeat communication, each node can check the availability of the other nodes and their applications.

If a backup node fails, it is given a period of time to prove, in any of several ways, that it is still functioning and can communicate with the other healthy nodes. If it cannot, it is removed from the cluster.

If the active node fails and the backup nodes receive no heartbeat within the specified period (by default two cycles, that is, 10 seconds), a failover occurs: a standby node takes over the cluster and continues providing services.
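If you need to inspect or tune the heartbeat parameters, they are exposed as cluster properties. A minimal PowerShell sketch, assuming the FailoverClusters module is installed and the command is run on a cluster node:

Get-Cluster | Format-List SameSubnetDelay, SameSubnetThreshold, CrossSubnetDelay, CrossSubnetThreshold
# SameSubnetDelay is the interval between heartbeats in milliseconds;
# SameSubnetThreshold is how many heartbeats may be missed before a node is considered down.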

2. Quorum (arbitration)

Whether a failover cluster can keep running is decided by the votes of its members. By default, each cluster node casts one vote, and a quorum witness can cast one additional vote (the witness can be a disk or a file share resource). The cluster keeps working only while more than half of the votes are present.

Node majority (no witness): only the cluster nodes have votes. Suitable when the number of cluster nodes is odd.

Node majority with a witness (disk or file share): in addition to the cluster nodes, the quorum witness also has one vote. Suitable when the number of cluster nodes is even.

No majority (disk witness only): no cluster node has a vote; only the disk witness has one. This model is generally not recommended, because the witness becomes a single point of failure.
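As a quick check of the vote arithmetic: two nodes plus a disk witness give 3 votes, so the cluster survives as long as any 2 of the 3 remain. A hedged PowerShell sketch for inspecting votes on a running cluster (FailoverClusters module assumed):

Get-ClusterQuorum
# shows the quorum type and the witness resource, if any

Get-ClusterNode | Format-Table Name, State, NodeWeight
# NodeWeight 1 means the node carries one vote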

IV. Deploying the failover cluster

Taking a two-node failover cluster as an example, this case uses three clearly separated networks: the VM2 network carries heartbeat traffic, the VM3 network connects to the storage server, and the VM4 network provides external service.

1. Case environment description:

Four Windows Server 2016 servers

One Windows 7 client

The first Windows Server 2016 machine is the domain controller (pre-deployed in my environment); its network card is attached to the VM4 network for external service.

The second machine is cluster node 1 (cluster01) and needs three network cards: VM2 (private) for heartbeat traffic, VM3 (SAN) for connecting to the storage server, and VM4 (public) for external service. It is joined to the domain.

The third machine is cluster node 2 (cluster02) and likewise needs three network cards: VM2 (private) for heartbeat traffic, VM3 (SAN) for connecting to the storage server, and VM4 (public) for external service. It is joined to the domain.

The fourth machine is the storage server (SAN), with one network card on the VM3 network, able to reach the VM3 networks of cluster01 and cluster02. It is joined to the domain.

The Windows 7 machine acts as the client, with one network card on the VM4 network, able to reach the other machines, and joined to the domain.

2. Start deployment

The AD_DNS server is configured as follows:

Attach the network card to the VM4 network for external service.

Log in to the domain

Modify the IP address and DNS, and turn off the firewall

Create a new organizational unit and two users, which will be used later to verify the failover cluster

User creation completed
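For reference, the same OU and users can be created from PowerShell on the domain controller. A minimal sketch, assuming the ActiveDirectory module and the benet.com domain used later in this walkthrough; the OU name "ClusterTest" is my own placeholder:

Import-Module ActiveDirectory
New-ADOrganizationalUnit -Name "ClusterTest" -Path "DC=benet,DC=com"
$pw = Read-Host -AsSecureString -Prompt "Password for the test users"
foreach ($name in "bob", "tom") {
    New-ADUser -Name $name -SamAccountName $name -Path "OU=ClusterTest,DC=benet,DC=com" -AccountPassword $pw -Enabled $true
}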

The cluster01 configuration is as follows:

Add three network cards

First confirm the names of the three network cards, then configure their IP addresses: private carries the heartbeat, SAN connects to the storage service, and public provides external service.

After the changes, join the server to the domain
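The rename, addressing, and domain-join steps can also be scripted. A hedged sketch: the original adapter names ("Ethernet0" and so on) and all IP addresses are placeholders for whatever your environment uses:

Rename-NetAdapter -Name "Ethernet0" -NewName "public"
Rename-NetAdapter -Name "Ethernet1" -NewName "private"
Rename-NetAdapter -Name "Ethernet2" -NewName "SAN"
New-NetIPAddress -InterfaceAlias "public" -IPAddress 192.168.10.11 -PrefixLength 24
Set-DnsClientServerAddress -InterfaceAlias "public" -ServerAddresses 192.168.10.10
Add-Computer -DomainName benet.com -Credential (Get-Credential) -Restart
# the restart after joining matches the next step below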

Join successfully, restart the computer

Log in to the domain with the administrator account

The Cluster02 configuration is as follows:

Add three network cards

Confirm the three network cards and configure their IP addresses.

After the configuration was complete I joined it to the domain. I have omitted a few screenshots here; if you are unsure how to join the domain, refer to the cluster01 steps.

Log in to the domain with the administrator account

The SAN configuration is as follows:

Add a network card connected to the storage network, that is, VM3

Configure the IP address

Install the storage service

Keep the defaults and click Next

After confirmation, start the installation.

Installation completed

Configure the storage service

Select the virtual disk location

Specify a virtual disk name

Set the virtual disk size

Specify a target name

Specify the access servers. I specify them by IP address; once assigned, other IP addresses cannot access the target. There is no need to enable CHAP authentication here.

You can also leave the IP addresses unspecified, or enable authentication in this step; configure it according to your needs.

Confirm and start the creation.

Creation completed

Then create the second virtual hard disk

After both virtual disks are created, you will see that they are not yet connected.
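For comparison, the two virtual disks and the target can also be created from PowerShell on the storage server. A hedged sketch: the paths, sizes, target name, and initiator IPs are assumptions, not values from the screenshots:

New-IscsiVirtualDisk -Path "C:\iSCSIVirtualDisks\quorum.vhdx" -SizeBytes 1GB
New-IscsiVirtualDisk -Path "C:\iSCSIVirtualDisks\data.vhdx" -SizeBytes 10GB
New-IscsiServerTarget -TargetName "cluster" -InitiatorIds "IPAddress:192.168.20.11", "IPAddress:192.168.20.12"
Add-IscsiVirtualDiskTargetMapping -TargetName "cluster" -Path "C:\iSCSIVirtualDisks\quorum.vhdx"
Add-IscsiVirtualDiskTargetMapping -TargetName "cluster" -Path "C:\iSCSIVirtualDisks\data.vhdx"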

Cluster01 connects to the storage server:

Next, start the connection: on the cluster01 server, open the iSCSI Initiator from the Windows administrative tools

Click Yes to start the Microsoft iSCSI service

Enter the IP address of the storage server and connect

Open Computer Management on cluster01 and you will find two offline hard disks. Bring them online and initialize them, then create simple volumes: the Q disk will be the quorum (heartbeat) disk and the S disk the data disk.

New Simple Volume

Create a new Q disk

Keep the defaults and click Next

Create the S disk on the second hard disk as the data disk; the steps are the same, so I will not screenshot them.

Once the two disks are configured, open This PC and you will find two new drives.
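The initiator-side connection and disk preparation have PowerShell equivalents as well. A hedged sketch, assuming the storage server's VM3 address is 192.168.20.100 and the new disks show up as numbers 1 and 2 (check Get-Disk first):

Start-Service msiscsi
Set-Service msiscsi -StartupType Automatic
New-IscsiTargetPortal -TargetPortalAddress 192.168.20.100
Connect-IscsiTarget -NodeAddress (Get-IscsiTarget).NodeAddress -IsPersistent $true
# bring the new disks online, initialize them, and create the Q and S volumes
Get-Disk | Where-Object IsOffline | Set-Disk -IsOffline $false
Get-Disk | Where-Object PartitionStyle -eq 'RAW' | Initialize-Disk
New-Partition -DiskNumber 1 -DriveLetter Q -UseMaximumSize | Format-Volume -FileSystem NTFS
New-Partition -DiskNumber 2 -DriveLetter S -UseMaximumSize | Format-Volume -FileSystem NTFS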

Cluster02 connects to the storage server:

The steps for cluster02 to connect to the storage server are the same as for cluster01, and the drive letters must match; just follow the cluster01 configuration.

After connecting to the storage server, open Disk Management and you may find that the drive letters do not match cluster01's. Change them manually.

Change the first hard disk's drive letter to Q.

Change the letter of the second hard drive

Change to S disk

Cluster01 installs the File Server role and failover clustering:

Add the File Server role

Add the Failover Clustering feature

Confirm and start the installation.

Installation completed

Install the same components on cluster02. After installation, nothing needs to be configured there; everything is configured on cluster01.
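The role and feature installation is a single line of PowerShell. A minimal sketch, run on each node (the feature names are the standard ones on Windows Server 2016):

Install-WindowsFeature FS-FileServer, Failover-Clustering -IncludeManagementTools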

On cluster01, configure the failover cluster first, then the file server

Create a new cluster

Enter the server name dc3.benet.com (computer name plus domain name) and add it

Keep the defaults and click Next

In a real working environment you may still need to install drivers for your hardware; my simulated environment does not need them. Pay attention to whether the validation reports any errors; warnings do not prevent the cluster from working.

Enter the cluster name and cluster IP address

Continue to add a second node

Enter the computer name of the second node plus the domain name, dc4.benet.com in my case. The configuration is basically the same as for the first node, so I omit the identical steps.

Validation and configuration complete

Open This PC and you will see that the shared disks are owned by cluster01, so cluster01 is the active node and cluster02 the backup node. There is also a new host record for the cluster under the benet.com forward lookup zone in DNS on the AD server.
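The validation and cluster creation can also be done from PowerShell on either node. A hedged sketch: the node names come from this walkthrough, but the cluster name "fscluster" and the address 192.168.10.100 are placeholders for whatever you entered in the wizard:

Test-Cluster -Node dc3.benet.com, dc4.benet.com
New-Cluster -Name fscluster -Node dc3.benet.com, dc4.benet.com -StaticAddress 192.168.10.100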

Configure the quorum witness disk

Choose the quorum witness

Configure it as a disk witness

Select S disk as the witness storage volume

Configuration complete
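The same quorum change from PowerShell, as a hedged sketch; the cluster disk's resource name varies, so list the disks first and substitute the right one:

Get-ClusterResource | Where-Object ResourceType -eq 'Physical Disk'
Set-ClusterQuorum -DiskWitness "Cluster Disk 2"
# "Cluster Disk 2" is an assumed name; pick the disk you intend as the witness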

Start configuring the file server

Select a file server

Default next step

Create a name and public IP address for client access

Add Share

Choose the SMB Share - Quick profile

Add shared location

Create share name

Keep the defaults and click Next

Add permissions for the users, that is, the two users created in AD

Set read-only permission for bob

Set read and write permissions for tom

Add the share permissions

After the configuration is complete, remove Everyone, then click Apply and OK.
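A rough PowerShell equivalent of the role and share configuration, as a hedged sketch: the role name fs01, its IP address, the share path, the "Cluster Disk 1" storage name, and the benet NetBIOS domain name are all placeholders:

Add-ClusterFileServerRole -Name fs01 -Storage "Cluster Disk 1" -StaticAddress 192.168.10.101
New-Item -ItemType Directory -Path Q:\share   # run on the node that owns the data disk
New-SmbShare -Name share -Path Q:\share -ScopeName fs01 -FullAccess benet\tom -ReadAccess benet\bob
Revoke-SmbShareAccess -Name share -ScopeName fs01 -AccountName Everyone -Force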

The client configuration is as follows:

Set the IP address

Join the client to the domain

Log in to the domain with the administrator account

Add tom and bob to the administrators group, disable the local administrator user, and switch accounts to verify each login.

Log in as tom

Access shared files

At this point you can see that tom has read and write permissions and can download and upload files.

Switch to bob and log in

Access shared files

You will find that bob can only read, not write

Simulate a failure of cluster01; the active node should switch automatically to cluster02, verifying that users can still access the share normally.

After shutting cluster01 down, check This PC on cluster02: the cluster has failed over automatically. Running ipconfig in cmd also shows that the cluster IP address has moved over; the backup node has become the active node, and the switchover took only about 5 seconds.

Client access is unaffected, and from the client there is no sign that the cluster ever failed.
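If you want to rehearse failover without powering a node off, the cluster cmdlets offer a gentler path. A hedged sketch (the role name fs01 is the placeholder from earlier):

Get-ClusterGroup
# shows which node currently owns each role
Move-ClusterGroup -Name fs01 -Node dc4
# planned move of the file server role to the second node
Suspend-ClusterNode -Name dc3 -Drain
# drains all roles off a node for maintenance; Resume-ClusterNode brings it back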

This is the end of this article. Thank you for reading.
