
Simple Test of Windows 2016 S2D (2) in vSphere Environment




Having covered the basic concepts and architecture of S2D, we will now walk through the actual configuration and testing. This lab runs on vCenter 6.0 U2, with four virtual machines configured as S2D nodes. Each virtual machine is configured as follows:

OS: Windows Server 2016 Datacenter

4 vCPU & 8 GB RAM

4 vNICs

1 x 40GB disk for the OS, plus 2 x 50GB (simulated NVMe PCIe SSD), 2 x 100GB (simulated SSD) and 4 x 300GB (HDD)

The idea of this test is to use the simulated NVMe PCIe SSD disks as the read/write cache, with the SSDs and HDDs as the capacity tier. S2D itself is flexible and supports either an all-flash or a hybrid disk configuration; the choice depends on how the customer weighs performance, capacity and price against their applications. In practice I feel a two-tier disk configuration is usually more appropriate; the three-tier configuration is simulated here purely to run more tests and explore how the mechanism works. The Microsoft article below explains the principles and best practices of the S2D cache well, so I won't repeat them here. As long as you use hardware from the Microsoft certification list, the system automatically assigns the highest-tier disks as the read/write cache when S2D is enabled (by default the cache acts as a write-only cache in front of SSD capacity disks and as a read/write cache in front of HDD capacity disks). When testing on virtual machines, however, the disk type and usage sometimes need to be specified manually; the steps below include the specific commands and screenshots for reference.

https://docs.microsoft.com/en-us/windows-server/storage/storage-spaces/understand-the-cache

Let's move on to the specific configuration steps:

1. In a later step we will use a PowerShell command to set the 50GB disks to the SCM type. Here, edit the .vmx configuration file of each virtual machine and add scsiX:Y.virtualSSD = "TRUE" for the corresponding two 100GB disks so that the guest sees them as SSDs. Alternatively, open the virtual machine's Edit Settings > VM Options > Advanced > Configuration Parameters > Edit Configuration and add the entries directly in that dialog:
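For reference, a minimal sketch of the resulting .vmx entries, assuming the two 100GB disks sit at scsi1:0 and scsi1:1 (the controller:device numbers are placeholders and must match your own VM layout):

scsi1:0.virtualSSD = "TRUE"
scsi1:1.virtualSSD = "TRUE"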

2. After Windows Server 2016 is installed on the four virtual machines, add the required "File and Storage Services" role and the "Failover Clustering" feature, complete the network and other basic configuration, and join the domain. The virtual machines can be placed on different physical hosts with VM-Host affinity rules to improve availability as needed. The four virtual NICs can be configured as two teams, with separate network segments for the production network and for communication between the cluster nodes, as sketched below.
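As a hedged example of the teaming step (the team and adapter names here are assumptions, not from the original environment), one team for the production network could be created on each node with:

New-NetLbfoTeam -Name "ProdTeam" -TeamMembers "Ethernet0","Ethernet1" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

A second team for cluster traffic would be created the same way from the remaining two vNICs.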

3. The detail that is most easily overlooked when clustering virtual machines is the system clock. After VMware Tools is installed, by default the VM clock is synchronized with the host clock in the following situations: (1) when the guest OS reboots or resumes from a suspended state; (2) when the VM is vMotioned to another host; (3) when a snapshot is created or reverted, or another operation triggers one of these actions; (4) after the VMware Tools service is restarted. If the host clock is inaccurate, this causes a lot of problems. It is therefore recommended, per VMware KB 1189, to turn off VMware Tools clock synchronization on the S2D nodes and let the Windows Time service synchronize automatically with the domain controller; an accurate time server on the network makes this even better.

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1189
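A minimal sketch of the Windows side of this, assuming the nodes are already domain-joined, is to point the Windows Time service at the domain hierarchy on each node:

w32tm /config /syncfromflags:domhier /update
Restart-Service w32time
w32tm /resync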

4. Complete the remaining preparations for creating the cluster, such as configuring the quorum witness. Next, run PowerShell as an administrator (or use the graphical interface) and enter the following commands to add the Failover Clustering feature to each host (if it has not already been added) and create a new cluster:

Add-WindowsFeature -Name Failover-Clustering -IncludeManagementTools -ComputerName xxx.yourdomain.com -Verbose -Credential yourdomain\administrator

New-Cluster -Name xxxx -StaticAddress x.x.x.x -Node node1.yourdomain.com,node2.yourdomain.com,node3.yourdomain.com,node4.yourdomain.com -Verbose

5. Open PowerShell ISE on one of the cluster nodes and enter the following command to view the details of the physical disks contributed by all nodes:

Get-PhysicalDisk | Select-Object FriendlyName,SerialNumber,CanPool,OperationalStatus,OperationalDetails,HealthStatus,Usage,Size,BusType,MediaType | Format-Table

A small reminder: if the test is run on physical hardware, it is best to use Clear-Disk -RemoveData -RemoveOEM on every node to wipe all existing information from the disks that will make up the storage pool (a hedged sketch follows below); the command only runs normally if the corresponding disks are online. The Get-PhysicalDisk output should then look like the figure below, with all disks showing RAW in the Partition Style column:
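A hedged sketch of that cleanup, run on each node (the filters are assumptions intended to protect the boot/system disk; double-check the disk list before wiping anything):

Get-Disk | Where-Object {$_.IsBoot -ne $true -and $_.IsSystem -ne $true} | Set-Disk -IsOffline $false
Get-Disk | Where-Object {$_.IsBoot -ne $true -and $_.IsSystem -ne $true} | Set-Disk -IsReadOnly $false
# RAW disks have nothing to clear, so only wipe disks that already carry a partition table
Get-Disk | Where-Object {$_.IsBoot -ne $true -and $_.IsSystem -ne $true -and $_.PartitionStyle -ne "RAW"} | Clear-Disk -RemoveData -RemoveOEM -Confirm:$false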

6. When building S2D for production, make sure all the hardware is on Microsoft's official compatibility list. In this test environment, after checking that the number and status of the physical disks are correct, you may find that some disk types are not identified properly. You can set them manually with a command, but because the disk type is a property managed within the storage pool, all the disks must be added to the pool before their types can be set manually. So here we enable S2D with the cache temporarily disabled and skip the disk eligibility checks:

Enable-ClusterS2D -CacheState Disabled -AutoConfig:0 -SkipEligibilityChecks

7. Use the New-StoragePool command to create your own storage pool; this also allows you to place different disks into different pools. Here we put all the disks into a single storage pool called "mys2dpool1":

New-StoragePool -StorageSubSystemFriendlyName *cluster* -FriendlyName mys2dpool1 -ProvisioningTypeDefault Fixed -PhysicalDisks (Get-PhysicalDisk | ? CanPool -eq $true)

When finished, you can check the pool's status with the Get-StoragePool command (for example, as sketched below), or view it in the Server Manager graphical interface, shown in the screenshot that follows:
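A minimal check from PowerShell might look like this (all are standard Get-StoragePool properties):

Get-StoragePool -FriendlyName mys2dpool1 | Select-Object FriendlyName,OperationalStatus,HealthStatus,Size,AllocatedSize | Format-Table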

Status in Failover Cluster Manager:

The following figure shows the details of the physical disks after they have joined the storage pool. You can see that apart from the 8 SSD disks, the 300GB HDDs and the 50GB NVMe PCIe SSDs are not recognized correctly:

8. Next, use the following command to set the unrecognized 50GB disks to the SCM type:

Get-PhysicalDisk | where {$_ .mediatype-eq "unspecified"-and $_ .canpool} | Set-PhysicalDisk-MediaType SCM

The filter can be combined flexibly, for example selecting disks by size. The following command sets the 300GB disks to the HDD type:

Get-PhysicalDisk | where {$_.Size -eq 322122547200} | Set-PhysicalDisk -MediaType HDD

The output of the Get-PhysicalDisk command after completion is shown as follows:

9. S2D automatically uses the highest-tier disks as the read/write cache when the storage pool is created. In this example, we need to manually designate the SCM disks as the cache tier (Journal):

Get-PhysicalDisk | where {$_ .mediatype-eq "scm"} | Set-PhysicalDisk-Usage Journal

When finished, use the Get-PhysicalDisk command to check the final physical disk status and make sure everything is correct; a quick way to do this is sketched below.
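One quick check (a sketch, not from the original post) is to group the disks by media type and usage; at this point the SCM disks should show a Usage of Journal while the SSD and HDD disks remain capacity disks:

Get-PhysicalDisk | Group-Object MediaType,Usage | Select-Object Count,Name | Format-Table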

10. Then turn the S2D cache back on:

(Get-Cluster).S2DCacheDesiredState = 2

After completion, you can use Get-ClusterS2D to view its status:

11. Similar to VMware vSAN, S2D also supports different types of fault domains to improve availability in production environments. The fault domain types include Node, Rack, Chassis, Site and so on. Here we create four Rack-based fault domains and place the four nodes in separate fault domains:

1..4 | ForEach-Object {New-ClusterFaultDomain -Name "fd0$_" -FaultDomainType Rack}

1..4 | ForEach-Object {Set-ClusterFaultDomain -Name "dths2sofsnode$_" -Parent "fd0$_"}

After running Get-ClusterFaultDomain, you can see a result similar to the following:

12. Creating a Virtual Disk (Storage Space) is simple, either through the Server Manager graphical interface or with a PowerShell command. Note that the graphical interface exposes relatively few options, although the wizard completes the task in a few steps; creating it through PowerShell gives you much more flexibility. For example, the following New-VirtualDisk command creates a 20GB VD named "testshrink1" with a Dual Parity layout:

New-VirtualDisk -StoragePoolFriendlyName mys2dpool1 -FriendlyName testshrink1 -Size 20GB -ResiliencySettingName Parity -PhysicalDiskRedundancy 2

The options for creating a VD in Server Manager are shown below:

After completion, you can check its status in Server Manager or in Powershell with the following command:

Get-VirtualDisk -FriendlyName testshrink1 | select *

The output information is as follows. As you can see, by default, the write cache for this volume is 1GB:

The following figure shows the information in the Failover Cluster Manager interface. The VD can be initialized from Disk Management on its owner node, formatted as NTFS or ReFS, and then converted to a CSV in Failover Cluster Manager, as sketched below.
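A hedged PowerShell equivalent of those steps, run on the owner node (the cluster resource name in the last line follows the default "Cluster Virtual Disk (<friendly name>)" pattern and is an assumption):

# initialize, partition and format the new virtual disk
Get-VirtualDisk -FriendlyName testshrink1 | Get-Disk | Initialize-Disk -PartitionStyle GPT -PassThru | New-Partition -UseMaximumSize | Format-Volume -FileSystem ReFS -NewFileSystemLabel testshrink1
# then promote the cluster disk resource to a Cluster Shared Volume
Add-ClusterSharedVolume -Name "Cluster Virtual Disk (testshrink1)"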

13. We can also create a ReFS volume "myvol3" in a single PowerShell step, as a two-copy Mirror converted directly to a CSV:

New-Volume -StoragePoolFriendlyName mys2dpool1 -FriendlyName myvol3 -FileSystem CSVFS_ReFS -Size 25GB -ResiliencySettingName Mirror -PhysicalDiskRedundancy 1

Similarly, you can view its details with the Get-Volume command:

Here is the status seen in Failover Cluster Manager:

Careful readers may notice that we specified a size of 25GB at creation time, so why is the resulting size 32GB?

Let's use Get-VirtualDisk to look at the details of the VD behind this volume:
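For example, a command like the following pulls out just the properties discussed below (all are standard Get-VirtualDisk properties):

Get-VirtualDisk -FriendlyName myvol3 | Select-Object FriendlyName,Size,AllocatedSize,FootprintOnPool,NumberOfColumns,NumberOfDataCopies,Interleave,FaultDomainAwareness | Format-List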

Pay attention to the parameter configuration in the red box above. Here are a few important concepts in Storage Space:

Slab: the basic allocation unit that makes up a Virtual Disk in a storage pool. The disks in the pool are divided into Slab-sized blocks, which are then combined according to the user-defined data protection method (Mirror or Parity) into the virtual disk presented to the host. In S2D each slab is 256MB.

Column: can be understood simply as the stripe width, i.e. how many physical disks Storage Spaces writes across when striping data to the VD. In theory, more columns means more disks working in parallel and a corresponding increase in IOPS; in practice, because of the read/write cache, you need to test further to see what performance difference different column counts actually make.

Interleave: can be understood simply as the stripe depth, i.e. how much data Storage Spaces writes to each disk before moving on to the next one when striping to the VD. The S2D default is 256KB.

In S2D, Microsoft recommends not setting the Column and Interleave values manually when creating a VD; the system will optimize the configuration automatically. The Slab size is not adjustable. The reason the actual size of the VD "myvol3" in the figure above exceeds the defined size lies in the choice of columns. As shown above, the system automatically set NumberOfColumns to 8 for the two-way Mirror "myvol3". My guess is that each copy of the data is written across 8 disks, and each disk pre-allocates space in 256MB slabs, which inevitably allocates some extra space. The smaller the size defined when creating a VD, the more obvious this effect is: I tried creating a 1GB two-way Mirror VD and the resulting size was 8GB, yet when creating a very large VD there is no obvious over-allocation, so in a real production environment the impact should be small. The following figure shows the details of a 1TB VD:

In addition, the system automatically set the FaultDomainAwareness of VD "myvol3" to "StorageScaleUnit", i.e. a fault domain based on scale units. In a real production environment you should define the fault domain that genuinely improves the fault tolerance of your system according to the site layout. There are currently five fault domain types: "PhysicalDisk", "StorageChassis", "StorageEnclosure", "StorageRack" and "StorageScaleUnit". Using the "StorageRack" fault domains we created earlier, we can also pin each copy of the data to its own fault domain when creating a VD, as in the following command. Here we create a new three-copy Mirror VD named "3mirrorvd8" whose fault domain type is the StorageRack defined in the previous step, with the number of columns limited to 4 to match the number of HDDs in each node:

New-VirtualDisk -StoragePoolFriendlyName mys2dpool1 -FriendlyName 3mirrorvd8 -Size 10GB -ResiliencySettingName Mirror -NumberOfDataCopies 3 -FaultDomainAwareness StorageRack -NumberOfColumns 4

14. With PowerShell, when creating a VD you can easily pin it to a particular type of disk according to the performance needs of your workload. For example, let's create a Mirror volume "ssdvol1" pinned to the SSD disks:

New-Volume -StoragePoolFriendlyName mys2dpool1 -FriendlyName ssdvol1 -FileSystem CSVFS_ReFS -MediaType SSD -Size 15GB -ResiliencySettingName Mirror -PhysicalDiskRedundancy 1

The following details, seen with Get-VirtualDisk, show that the disks backing the volume are all SSDs. In the same way, we can pin large-capacity, lower-performance VDs directly to the HDDs, as sketched below.
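A hedged example of such an HDD-pinned volume (the name and size here are illustrative only):

New-Volume -StoragePoolFriendlyName mys2dpool1 -FriendlyName hddvol1 -FileSystem CSVFS_ReFS -MediaType HDD -Size 100GB -ResiliencySettingName Mirror -PhysicalDiskRedundancy 1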

We can easily expand the volume online in Server Manager or with the following command:

Resize-VirtualDisk -FriendlyName ssdvol1 -Size 25GB -Verbose

After that, the ssdvol1 file system can be extended online in Disk Management on the node that owns the VD (or from PowerShell, as sketched below):
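If you prefer to stay in PowerShell, a sketch of extending the partition to its new maximum size (assuming the VD carries a single basic data partition) might look like:

$part = Get-VirtualDisk -FriendlyName ssdvol1 | Get-Disk | Get-Partition | Where-Object Type -eq "Basic"
$part | Resize-Partition -Size ($part | Get-PartitionSupportedSize).SizeMax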

Reminder: a VD in S2D supports online expansion only, not shrinking. In addition, S2D storage pools in a cluster only support Fixed-provisioned VDs; thin-provisioned volumes that allocate space based on actual usage are not supported.

15. Because the storage pool contains two types of capacity disks, SSD and HDD, we next try to create a volume that spans multiple storage tiers (a Multi-Resilient Volume), which is only available with Windows Server 2016 S2D and the ReFS file system. The purpose of this volume type is to automatically balance the placement of hot and cold data, optimizing the performance of the applications on it while saving space on the higher tier. Data is first written in Mirror fashion to the predefined Mirror tier (SSD), and "cooled" data is then automatically moved to the Parity tier (HDD) as needed, freeing SSD space for hot data that really needs the performance. The following figure, from the official Microsoft documentation, summarizes the VD fault-tolerant layout types well:

Next we define the Mirror tier and the Parity tier in the storage pool. The Mirror tier is named "perf" and uses a two-copy Mirror layout, which has a smaller write penalty and better performance; the Parity tier is named "cap" and uses a more space-efficient Dual Parity layout (similar to RAID 6) while still providing good protection. The specific commands are as follows:

New-StorageTier -StoragePoolFriendlyName mys2dpool1 -FriendlyName perf -MediaType SSD -ResiliencySettingName Mirror -PhysicalDiskRedundancy 1

New-StorageTier -StoragePoolFriendlyName mys2dpool1 -FriendlyName cap -MediaType HDD -ResiliencySettingName Parity -PhysicalDiskRedundancy 2

The output is shown below:

Now let's create a volume called "mrvol1" with the following command: 60GB in total, with 10GB in the Mirror tier and 50GB in the Parity tier:

New-Volume -StoragePoolFriendlyName mys2dpool1 -FriendlyName mrvol1 -FileSystem CSVFS_ReFS -StorageTierFriendlyNames perf,cap -StorageTierSizes 10GB,50GB -Verbose

Using Microsoft's Show-PrettyVolume script, you can see the following information about the volume:

You can also view the detailed configuration information of the volume on two-tier disks with commands similar to the following:

Get-VirtualDisk mrvol1 | Get-StorageTier | select *

There is no hard and fast rule for the capacity ratio between the Mirror tier and the Parity tier when an MRV-type volume is created. Generally speaking, newly written hot data lands on the SSD tier, so you can use the amount of new data your workload generates each day as a baseline: the Mirror tier should be at least that large. In addition, once the Mirror tier is more than about 60% full, data movement to the Parity tier is triggered, so it is advisable to size the Mirror tier generously to avoid the extra system load of frequent data movement. According to Microsoft's best practices, reserve about twice the size of the hot data for the Mirror tier, and make the whole volume about 20% larger than the required capacity. As in the earlier steps, you can easily expand the Mirror or Parity tier online through PowerShell. For example, the following commands grow each tier by 10GB; afterwards, go to Disk Management on the node that owns the volume and extend its file system.

Get-VirtualDisk -FriendlyName mrvol1 | Get-StorageTier | ? FriendlyName -eq mrvol1_perf | Resize-StorageTier -Size 20GB

Get-VirtualDisk -FriendlyName mrvol1 | Get-StorageTier | ? FriendlyName -eq mrvol1_cap | Resize-StorageTier -Size 60GB

16. Finally, you can optimize the Multi-Resilient volume on demand with the following command. The system also configures a corresponding data-tiering scheduled task by default, which can be found in Task Scheduler as shown below; users can adjust its start time to suit their own workload.

Optimize-Volume -FileSystemLabel mrvol1 -TierOptimize -Verbose
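A hedged way to check the scheduled job from PowerShell (the task path below is where the tiering optimization task normally lives; verify it on your own build):

Get-ScheduledTask -TaskPath "\Microsoft\Windows\Storage Tiers Management\" | Select-Object TaskName,State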

As the steps above show, configuring and using S2D is not particularly difficult, and it is flexible enough to accommodate the needs of very different usage scenarios, although administrators do need to be comfortable with PowerShell. Microsoft product experts have shared some useful S2D scripts online at the links below; they can be downloaded and adapted to your own environment.

Scripts for viewing storage pool and volume usage:

http://cosmosdarwin.com/Show-PrettyPool.ps1

http://cosmosdarwin.com/Show-PrettyVolume.ps1

A script that completely clears the S2D configuration:

https://gallery.technet.microsoft.com/scriptcenter/Completely-Clearing-an-ab745947
