WSFC Guest Cluster Architecture


Guest clustering is a chapter that was omitted earlier in Lao Wang's WSFC series. In this article, Lao Wang discusses the architecture of guest clustering and demonstrates some of its concepts. In past discussions with friends, Lao Wang found several misunderstandings about what a guest cluster actually is, so hopefully this article helps you understand the architecture correctly.

First of all, Lao Wang offers a definition of guest clustering: application clustering built from virtual machines, whether they run on a standalone virtualized host or inside a virtualized cluster.

When we talk about a virtualized cluster, we are usually not talking about a guest cluster. In practice, a "virtualized cluster" normally refers to a host-level cluster: several physical machines are joined into a virtualization host cluster, and a large number of virtual machines run on top of it. This kind of cluster is essentially fault tolerance at the physical host level; when a physical machine goes down, all the virtual machines it was carrying are transferred to the surviving nodes.

Guest clustering, on the other hand, means we build a cluster between two (or more) virtual machines. Some people will ask: isn't a virtualized cluster enough? Why build a guest cluster as well? In fact, this is a normal demand that comes with virtualization. Suppose we have just completed a company-wide virtualization migration, converting most of the original physical machines into a virtualized cluster resource pool, with all business systems moved into the virtualized environment. The application administrator will then ask the cluster administrator: my original business ran as a cluster, which guaranteed high availability; now that it has been migrated into your virtualized cluster, how do you guarantee high availability for my applications?

Traditionally, the cluster administrator might back up the virtual machines, or control the user's virtual machines at the virtualization-cluster level, for example by using anti-affinity to ensure that the user's application virtual machines are always placed on different hosts. That provides high availability for the virtual machines themselves, but it is really only suitable for stateless front-end virtual machines. A stateful application virtual machine, such as a database virtual machine, would need a replication architecture so that when one copy goes down the other can still serve. If, for whatever reason, the application administrator is unwilling to design the virtual machines as a replication architecture paired with host-level anti-affinity, another approach is needed, and that answer is to deploy a guest cluster.

A guest cluster allows the application administrator to deploy a cluster between two virtual machines to solve application high availability. With a guest cluster, the experience is the same as managing a cluster on physical machines: the applications inside the virtual machines are highly available, and when one virtual machine goes down, the application fails over to another virtual machine and keeps serving.

The usage scenarios of guest clusters are as follows

A single physical machine: the company may have limited resources and can provide only one sufficiently powerful physical machine, on which the administrator deploys virtual machines for the business. The business needs high availability for those virtual machines and a minimal RTO, so a guest cluster is deployed across the virtual machines, while security measures control user access to them.

Virtualized cluster + guest cluster: application administrators want full management rights over their own application clusters and want the applications to keep the same high availability architecture as before, ensuring end-to-end availability from the hosts down to the applications in the virtual machines.

For the application administrator, a guest cluster is an extra guarantee for application availability. Suppose no guest cluster is deployed and the user simply has two virtual machines running on a Hyper-V cluster. If a Hyper-V host detects that a virtual machine has blue-screened, it will restart the virtual machine or migrate it to another host. From the application's point of view, that is not what is needed; what is needed is for the application on the blue-screened machine to be transferred quickly to another virtual machine. A host-level cluster cannot understand your application. At most it knows that your virtual machine has blue-screened, or that a service has stopped, and that it should move the virtual machine to another host and see whether it starts; it cannot control failover of the applications inside the virtual machine. If we have not designed an automatic-failover replication architecture for the virtual machines, the applications inside them will face downtime.

If a guest cluster architecture is deployed, the picture changes: when a virtual machine blue-screens, the application on it is certainly down, but the other virtual machines in the guest cluster detect the failure through health checks and automatically bring the application online elsewhere to keep serving. That is the difference a guest cluster makes.

From the application's point of view, it does not know whether it is running in a guest cluster or a physical-machine cluster. The application only knows that it sits on an operating system, and whether that operating system has a cluster deployed; if a cluster is deployed, the application can fail over between the guest nodes.

Deploying a guest cluster protects the applications inside the virtual machines: it guards against the collapse of a virtual machine's operating system. If that operating system crashes, the application can quickly be transferred to another virtual machine node.

Deploying a virtualized cluster protects the virtual machine objects and the hosts: it guards against the collapse of a physical machine. If a physical machine crashes or suffers a hardware failure, all the virtual machines on it can be transferred to other nodes and keep working.

Although we said above that a guest cluster can ensure application availability, guarding against operating system failure in addition to host failure, a virtualized cluster + guest cluster architecture still requires cooperation between the application administrator and the cluster administrator to achieve end-to-end high availability. Without that cooperation, the virtualized cluster, which is usually scheduled by a dedicated resource management system, may well place all nodes of the user's guest cluster on the same host; if that host goes down, every guest cluster node goes down with it, and deploying the guest cluster was pointless. To ensure high availability of the guest cluster's applications, the cluster administrator must therefore configure placement policies for the guest cluster. The best option is anti-affinity, so that the two guest cluster virtual machines never run on the same physical machine unless only one physical machine is left. Once this is configured, both WSFC and SCVMM follow the anti-affinity rule: the hosts are highly available, the virtual machine operating systems are highly available, and even if a physical machine breaks down, the application in the virtual machines is not affected.
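
As a reference, here is a minimal PowerShell sketch of configuring anti-affinity for a pair of guest cluster VM roles on the host cluster. The VM role names and the anti-affinity class name are illustrative, not taken from the article.

```powershell
# Minimal sketch: keep two guest-cluster VM roles apart with anti-affinity.
# VM role names and the class name are illustrative.
Import-Module FailoverClusters

foreach ($vmRole in "SQLNode1", "SQLNode2") {
    $names = New-Object System.Collections.Specialized.StringCollection
    $names.Add("GuestClusterSQL") | Out-Null
    (Get-ClusterGroup -Name $vmRole).AntiAffinityClassNames = $names
}

# Verify the assignment
foreach ($vmRole in "SQLNode1", "SQLNode2") {
    Get-ClusterGroup -Name $vmRole | Select-Object Name, AntiAffinityClassNames
}
```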

For a four-node guest cluster you can use a preferred owner strategy instead: in the host cluster, set the preferred owner of two of the virtual machines to the first physical machine and the preferred owner of the other two to the second physical machine. The cluster then follows the preferred owner policy when evaluating placement, keeping two virtual machines on each physical machine, so that if one physical machine goes down the guest cluster is still available on the other.
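
A minimal sketch of that preferred owner layout, run on the host cluster; the VM role names and host names are illustrative.

```powershell
# Minimal sketch: pin two guest VMs to each physical host via preferred owners.
# VM role names and host names are illustrative.
Import-Module FailoverClusters

Set-ClusterOwnerNode -Group "GuestNode1" -Owners "HV01"
Set-ClusterOwnerNode -Group "GuestNode2" -Owners "HV01"
Set-ClusterOwnerNode -Group "GuestNode3" -Owners "HV02"
Set-ClusterOwnerNode -Group "GuestNode4" -Owners "HV02"

# Review the resulting preference for one of the roles
Get-ClusterOwnerNode -Group "GuestNode1"
```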

Although deploying a guest cluster looks attractive and brings extra protection to the applications inside the virtual machines, it also comes with its own problems.

To the cluster itself, it does not matter whether the nodes are virtual machines or physical machines. WSFC supports all-virtual clusters, all-physical clusters, and mixed clusters of virtual and physical machines, as long as each node meets the prerequisites for cluster deployment. The most important prerequisite is shared storage: deploying a cluster requires shared storage, and the application needs to keep its data on that shared storage so it can fail over seamlessly.

So if we deploy a guest cluster, what about the storage? The administrator needs to find a way to expose disks from the physical environment to the guest cluster so that the guest cluster can be built.

In general, there are several ways to present storage to a guest cluster.

1. iSCSI

This is the most common option. As network speeds have increased, iSCSI has come into use in many real enterprise environments. To present it to a guest cluster: if the storage device or hyper-converged product supports it, you can assign a target directly to the virtual machines from the physical environment; otherwise you can deploy an iSCSI server (Microsoft iSCSI Target Server or StarWind, for example), preferably a highly available one. If no such environment exists, deploy a separate virtual machine as the iSCSI server, or install the iSCSI target directly on a physical machine and present it to the virtual machines.
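
As an illustration of the Microsoft iSCSI Target Server option, here is a minimal sketch of carving out a LUN for two guest cluster nodes. The target name, paths, size, and initiator IQNs are illustrative, and cmdlet parameters should be verified against your Windows Server version.

```powershell
# Minimal sketch: present an iSCSI LUN from Windows iSCSI Target Server to two
# guest-cluster nodes. Target name, path, size, and IQNs are illustrative.
Import-Module IscsiTarget

# Restrict the target to the two guest-cluster initiators
$initiators = "IQN:iqn.1991-05.com.microsoft:sqlnode1.contoso.com",
              "IQN:iqn.1991-05.com.microsoft:sqlnode2.contoso.com"
New-IscsiServerTarget -TargetName "GuestClusterSQL" -InitiatorIds $initiators

# Create a VHDX-backed virtual disk and map it to the target
New-IscsiVirtualDisk -Path "D:\iSCSIVirtualDisks\SQLData.vhdx" -SizeBytes 100GB
Add-IscsiVirtualDiskTargetMapping -TargetName "GuestClusterSQL" `
    -Path "D:\iSCSIVirtualDisks\SQLData.vhdx"
```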

2. Pass-through disk

A pass-through disk is also a feasible solution. Simply put, a pass-through disk is a physical disk that we take offline in the virtualization host's disk management and hand to the virtual machine without creating a virtual disk on it; the virtual machine uses the disk directly. In WSFC, using pass-through disks for guest virtual machine clusters comes with certain restrictions. Starting with WSFC 2008, Microsoft supports adding pass-through disks to a cluster, so in theory we could deploy a virtualized cluster and, instead of allocating shared storage to the cluster, let the virtual machines use pass-through disks.

In the WSFC 2008 era, the steps to add a pass-through disk to the cluster were as follows:

Take the physical disk offline on the host

Shut down the guest cluster virtual machine

Add a SCSI controller to the virtual machine and select the offline physical disk

Power on the virtual machine; the pass-through disk assigned from the physical machine is visible inside the guest

Refresh the virtual machine configuration in the host's Failover Cluster Manager; the pass-through disk now appears as a disk the virtual machine depends on

In the WSFC 2012 era, the steps to add a pass-through disk to the cluster are as follows (a PowerShell sketch follows the steps):

Take the physical disk offline on the host

Add the pass-through disk to the cluster disks

Shut down the guest virtual machine, add a SCSI controller, and select the pass-through disk

Power on the virtual machine; the pass-through disk assigned from the physical machine is visible inside the guest

In the host's Failover Cluster Manager, the pass-through disk appears as a disk the virtual machine depends on
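
A minimal PowerShell sketch of the 2012-era flow above, run on the Hyper-V host; the disk number and VM role name are illustrative.

```powershell
# Minimal sketch: attach a pass-through disk to a clustered guest VM (2012-era flow).
# Disk number and VM name are illustrative.
Set-Disk -Number 3 -IsOffline $true                    # 1. take the physical disk offline on the host

Get-ClusterAvailableDisk | Add-ClusterDisk             # 2. add eligible available disks to the cluster

Stop-VM -Name "GuestNode1"                             # 3. VM must be off to attach the raw disk
Add-VMHardDiskDrive -VMName "GuestNode1" -ControllerType SCSI -DiskNumber 3

Start-VM -Name "GuestNode1"                            # 4. the pass-through disk is now visible in the guest

# 5. refresh the clustered VM role so the disk becomes a dependency of the VM
Update-ClusterVirtualMachineConfiguration -Name "GuestNode1"
```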

As you can see, although a pass-through disk can be added to a virtualized cluster, it is not really used as a cluster shared disk; it is added as a dependency in the virtual machine's configuration. The effect is that when an unplanned failover occurs and the virtual machine moves to another node, the pass-through disk it depends on moves with it. Because the Hyper-V host does not actually mount the storage (the guest virtual machine performs the pass-through disk I/O directly), the nodes cannot all access the storage at the same time. So when a failover happens, the pass-through disk must be taken offline on the current physical node and brought online on the other node before the virtual machine migration can complete, which greatly increases failover time. During a live migration, the pass-through disk likewise has to be unmounted from the current Hyper-V host and mounted on the new one; this slows down VM migration and may cause clients to pause noticeably or even disconnect.

In addition, a pass-through disk is bound to a single virtual machine: once we assign a disk to one virtual machine as a pass-through disk, it can no longer be used for anything else.

For these reasons, pass-through disks are rarely used for guest clusters in real environments. In the 2008 era, the performance of pass-through disks was noticeably better than VHD, and a single VHD was limited to 2 TB at the time; deploying pass-through disks back then helped solve performance problems and virtual disk size limits, and allowed the underlying FC or other storage to be delivered directly to virtual machines.

Even where pass-through disks are used, enterprises usually do not use them exclusively. A host cluster will contain multiple guest clusters, and the operating system disks of those guest virtual machines will still live on shared storage. A pass-through disk is more of a dedicated-storage concept: we can put specific data, such as database files, on pass-through disks and mix the two approaches.

Advantages and disadvantages of pass-through disk cluster architecture

Supports mapping SAN, iSCSI, NAS, and local hard disks connected to the Hyper-V physical environment into the virtual machine

Supported mapping of USB storage before Hyper-V enhanced session mode existed

Snapshots, differencing disks, dynamic disks, and Hyper-V Replica are not supported

Host-level backup cannot back up a pass-through disk; an agent must be installed inside the guest virtual machine for backup

Planned maintenance migrations require downtime

Management is not flexible; it is not as convenient as managing VHDs, and there are few management interfaces for pass-through disks

Since 2012, virtual machine disk files have been optimized: the performance of a VHDX-format disk is now close to that of a pass-through disk, the single-disk size limit has grown to 64 TB, and the guest cluster architecture has become more flexible, with virtual fibre channel, Shared VHDX, and other storage delivery options. As a result there are fewer and fewer cases of pass-through disks in clusters; in a few scenarios users still keep the habit of adding pass-through disks to a virtual machine on a standalone host.

3. Virtual Fibre Channel

Before 2012, if we wanted to present FC SAN storage to virtual machines, we could only put an iSCSI gateway in front of the FC storage or use pass-through disks. In 2012, Microsoft introduced the virtual fibre channel feature, which gives a virtual machine a virtual HBA just like a physical machine, with its own WWNs, so the VM can connect directly to LUNs in the FC SAN.

This technology relies mainly on three components.

NPIV - the virtual fibre channel feature for Hyper-V guest virtual machines uses the existing N_Port ID Virtualization (NPIV) T11 standard to map multiple virtual N_Port IDs onto a single physical fibre channel N_Port. Each time a virtual machine configured with a virtual HBA starts, a new NPIV port is created on the host; when the virtual machine stops running on the host, the NPIV port is removed.

Virtual SAN - a named group of physical fibre channel ports that connect to the same physical SAN.

Virtual HBA - the virtual hardware component assigned to a guest virtual machine and bound to a specific virtual SAN.

Conditions and limitations for implementing virtual fibre Channel:

An FC SAN that supports NPIV

The host must be running Windows Server 2012 / 2012 R2

The host must have an FC HBA with drivers that support Hyper-V and NPIV

A VM cannot boot from SAN through a virtual fibre channel adapter; virtual fibre channel is used only for data LUNs

The only guest operating systems that support virtual fibre channel are Windows Server 2008, Windows Server 2008 R2, and Windows Server 2012

WWPN: a unique number assigned to a fibre channel HBA, similar to a MAC address, which allows the storage fabric to identify a specific HBA

WWNN (World Wide Node Name): each virtual machine can be assigned its own WWNN and connect directly to the SAN on that basis

To understand how a virtual machine can move from one host to another without interrupting the I/O stream from the VM to storage, Hyper-V designed an architecture with two WWNs per virtual HBA: the virtual machine connects to storage using WWN A, and during a live migration the new instance of the virtual machine on the target host is brought up with WWN B. When the live migration lands on the target host, the VM can connect to the LUN immediately and continue I/O without interruption. Each subsequent live migration, back to the original host or to any other host, makes the virtual machine alternate between WWN A and WWN B, and this is true for every virtual HBA in the virtual machine. A Hyper-V cluster can have up to 64 hosts, but each virtual fibre channel adapter still only alternates between its two WWNs.

The configuration steps are as follows (a PowerShell sketch follows the steps):

1. Create a virtual SAN on the Hyper-V host

2. Shut down the virtual machine, add a fibre channel adapter to it, and connect it to the virtual SAN

3. Present storage to the virtual machine's WWNs, power on the virtual machine, and create the guest cluster
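
A minimal PowerShell sketch of these steps on the Hyper-V host; the virtual SAN name and VM name are illustrative, and the LUN zoning/masking itself is done on the SAN side against the WWPNs shown.

```powershell
# Minimal sketch: virtual fibre channel for a guest VM. SAN and VM names are illustrative.
# 1. Create a virtual SAN bound to the host's physical FC HBA port(s)
New-VMSan -Name "ProductionSAN" `
    -HostBusAdapter (Get-InitiatorPort | Where-Object ConnectionType -eq "Fibre Channel")

# 2. With the VM off, add a virtual fibre channel (vHBA) adapter connected to that SAN
Stop-VM -Name "GuestNode1"
Add-VMFibreChannelAdapter -VMName "GuestNode1" -SanName "ProductionSAN"

# Show the WWNN/WWPN address sets (A and B) the SAN team must zone and mask
Get-VMFibreChannelAdapter -VMName "GuestNode1" |
    Select-Object VMName, WorldWideNodeNameSetA, WorldWidePortNameSetA,
                  WorldWideNodeNameSetB, WorldWidePortNameSetB

# 3. After the LUN is presented to those WWPNs, power on the VM and build the guest cluster
Start-VM -Name "GuestNode1"
```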

Virtual fibre channel is a Hyper-V 2012 technology. Using a virtual HBA, NPIV, and related techniques, it connects virtual machines directly to the physical SAN and removes the old limitations. It still has many restrictions of its own, however; for example, it can only be used with Windows guest virtual machines, not Linux. The Shared VHDX feature in 2012 R2 supports more operating systems and is not as complicated to configure as a virtual SAN.

4. Shared VHDX (ShareVHDX)

Shared VHDX is a technology introduced in 2012 R2. It looks like a virtualization feature, but it depends mainly on WSFC and exists mainly to provide shared storage for guest clusters. It shields the guest cluster from the underlying physical storage structure: the virtual machines are not associated with physical storage directly, but build the guest cluster through a Shared VHDX presented by the virtualization hosts.

In the 2012 R2 era the technique works like this: add the same SCSI virtual disk to each guest cluster virtual machine in turn, and in the disk's advanced options choose to enable virtual hard disk sharing. After that, one virtual disk can be given to two virtual machines at the same time, and to the guest cluster it is a shared disk the cluster can use. The prerequisite is that the virtual disk must be stored on a cluster CSV volume or an SOFS path.
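
A minimal PowerShell sketch of the 2012 R2 Shared VHDX setup described above; the CSV path and VM names are illustrative.

```powershell
# Minimal sketch: share one VHDX between two guest-cluster VMs (2012 R2).
# CSV path and VM names are illustrative.
New-VHD -Path "C:\ClusterStorage\Volume1\GuestClusterSQL\Data.vhdx" -SizeBytes 200GB -Dynamic

foreach ($vm in "SQLNode1", "SQLNode2") {
    # -SupportPersistentReservations enables virtual hard disk sharing for the guest cluster
    Add-VMHardDiskDrive -VMName $vm -ControllerType SCSI `
        -Path "C:\ClusterStorage\Volume1\GuestClusterSQL\Data.vhdx" `
        -SupportPersistentReservations
}
```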

This technique is very easy to use. Lao Wang once ran a project in Shandong that used two Linux virtual machines for an Oracle RAC cluster. The cluster needed shared disks, and it was inconvenient to expose the underlying storage to the virtual machines, so Shared VHDX was used to attach the same disks to both virtual machines at once; the virtual machines could build the RAC normally and the result was very good.

This is without doubt the best and most convenient solution for a guest cluster, but there is an important prerequisite: the underlying layer must be a virtualized cluster, and the Shared VHDX files must live on the virtualized cluster's CSV or an SOFS path. Alternatively, there can be a dedicated storage cluster serving the virtualized cluster, with all Shared VHDX files stored on the storage cluster; the front-end virtualized cluster is then not configured with shared storage, and all virtual machines point at the storage cluster's SOFS path for their Shared VHDX files. In practice, though, Lao Wang thinks that in the 2012 R2 era it is better to store Shared VHDX files directly on the virtualized cluster's own CSV.

One of the biggest benefits of Shared VHDX is that it shields the underlying storage architecture. The virtual machine does not care whether the underlying storage is SAN, JBOD, S2D, or iSCSI; as long as a CSV or SOFS path is delivered to the VMs, they can use that path to hold the Shared VHDX and present it to the guest cluster as shared storage.

Shared VHDX can also be used with only a back-end storage cluster: the virtual machines run on two standalone front-end Hyper-V hosts and point at the storage cluster's SOFS path. The guest cluster then gets high availability, but the hosts themselves are not clustered, which leaves a gap, so virtualized cluster + guest cluster is still the best combination.

In the 2012 R2 era, Shared VHDX still has some technical limitations once a disk has virtual hard disk sharing enabled:

Resizing or migrating a Shared VHDX is not supported

Creating backups or replicas of a Shared VHDX is not supported

The technology was updated and upgraded in Windows Server 2016 to VHD Set (ShareSet), removing these restrictions, but requiring the guest OS to be Windows Server 2016 before it can be used; the technology continues in Windows Server 2019.

In 2016, a VHD Set is added in the following way (a PowerShell sketch follows the steps):

1. Create a VHD Set disk for the virtual machine and store it under a CSV or SOFS path

2. Add a Shared Drive under the virtual machine's SCSI controller

3. Mount the existing VHD Set file to the Shared Drive
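
A minimal PowerShell sketch of the three steps above on Windows Server 2016; the CSV path and VM names are illustrative (Hyper-V recognizes the .vhds extension as a VHD Set).

```powershell
# Minimal sketch: create a VHD Set and attach it to both guest-cluster nodes (2016).
# CSV path and VM names are illustrative.
# 1. Create the VHD Set (.vhds) on a CSV or SOFS path
New-VHD -Path "C:\ClusterStorage\Volume1\GuestClusterSQL\Data.vhds" -SizeBytes 200GB -Dynamic

# 2./3. Attach the VHD Set to each node's SCSI controller as a shared drive
foreach ($vm in "SQLNode1", "SQLNode2") {
    Add-VMHardDiskDrive -VMName $vm -ControllerType SCSI `
        -Path "C:\ClusterStorage\Volume1\GuestClusterSQL\Data.vhds"
}
```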

The created VHD Set produces two new file formats:

An .avhdx file that contains the actual data; it can be fixed or dynamic

A .vhds file that contains the metadata used to coordinate information between the guest cluster nodes; this file is roughly 260 KB

For virtual machines already using 2012 R2 Shared VHDX, you can use Convert-VHD to convert the shared VHDX file offline to the VHD Set format and then attach it as a Shared Drive.
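
A minimal sketch of that conversion, assuming the shared VHDX has first been detached from both guest nodes; the paths are illustrative.

```powershell
# Minimal sketch: convert an existing shared VHDX to the VHD Set format (offline).
Convert-VHD -Path "C:\ClusterStorage\Volume1\GuestClusterSQL\Data.vhdx" `
            -DestinationPath "C:\ClusterStorage\Volume1\GuestClusterSQL\Data.vhds"
# Re-attach the resulting .vhds to each guest node as a shared drive (see the previous sketch).
```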

If you are currently running a Linux guest cluster on 2012 R2 Shared VHDX, it is recommended that you do not rush to upgrade to a 2016 VHD Set, as there may be unsupported scenarios.

For guest clusters, Lao Wang suggests preferring 2012 R2 Shared VHDX or 2016 VHD Set as the shared storage architecture; this approach changes the existing environment the least and does not require changing the physical storage topology. After that come iSCSI, virtual fibre channel, and pass-through disks.

To sum up: while writing this blog Lao Wang also thought about real-world scenarios, and an enterprise does not always have to deploy guest clusters, especially when a virtualized cluster is already in place.

In a virtualized cluster, each of your virtual machines is a WSFC cluster role object; if a node goes down, the cluster detects it and the virtual machines fail over according to policy.

Moreover, thanks to cooperation between the WSFC and Hyper-V teams, applications inside a guest virtual machine can now get a degree of protection purely at the host level.

For example, blue screen detection: for a deployed virtual machine, WSFC can detect whether the virtual machine's OS has blue-screened and decide whether to restart it on the current node or move it to another node.

Application detection: WSFC 2012 can also monitor a service inside the virtual machine; once the failure count exceeds the limit, the virtual machine is restarted on the current node or moved to another node.
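
A minimal sketch of registering an in-guest service with host-level VM monitoring; the VM role name and the Print Spooler service are illustrative, and the guest must meet the usual VM monitoring prerequisites (matching domain, firewall rules).

```powershell
# Minimal sketch: have the host cluster monitor a service inside the guest VM.
# VM role name and service name are illustrative.
Import-Module FailoverClusters

Add-ClusterVMMonitoredItem -VirtualMachine "GuestNode1" -Service "Spooler"

# List the items currently monitored for that VM
Get-ClusterVMMonitoredItem -VirtualMachine "GuestNode1"
```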

Guest network card protection: the cluster can monitor the network to which a virtual machine's network adapter is connected, and if the connection is lost, the virtual machine fails over to another node.

So even without deploying a guest cluster, we can protect the health of the virtual machine object, the virtual machine OS, the virtual machine's network connection, and the applications inside it at these levels. But if the application is truly critical, we still need the guest cluster architecture to achieve the highest availability: if a single virtual machine's OS crashes, the application can fail over to another virtual machine, greatly reducing downtime. With only a single virtual machine plus the host cluster, a crash means downtime while it restarts.

Level 1 virtual machine application protection: deploy a single virtual machine and combine blue screen detection, application detection, and network adapter detection to guard against application downtime from those three factors, on top of the host-downtime protection the host cluster already provides

Level 2 virtual machine application protection: deploy multiple virtual machines but no guest cluster, use application-level replication between the virtual machines, and pair it with host cluster anti-affinity so the virtual machines do not sit on the same node; when a single node goes down, the replication technology switches the application to another virtual machine automatically or manually

Level 3 virtual machine application protection: deploy a guest cluster plus a host cluster, combined with anti-affinity to ensure the guest cluster nodes are always on different hosts; neither a single host failure nor a single virtual machine failure affects the application

Enterprise administrators or consultants can choose the appropriate scheme according to the actual scenario and the level of protection the virtual machine applications require. I hope this article gives you something to think about!
