
VMware Virtual SAN management and debugging

2025-03-29 Update From: SLTechnology News&Howtos (Servers)


Shulou(Shulou.com)06/03 Report--


Catalogue

I. Data center preparation
  (1) Storage device preparation
  (2) The impact of Virtual SAN policy on capacity
  (3) Host preparation
  (4) Requirements and suggestions when preparing to enable Virtual SAN
  (5) Storage controller preparation
  (6) Network configuration
    1) Put hosts on the same subnet
    2) Enable IP multicast on the physical switch
    3) Specify network bandwidth on the physical adapter
    4) Configure a port group on the virtual switch
    5) Check the firewall on the Virtual SAN hosts
  (7) License considerations
II. Create a Virtual SAN data center
  (1) Preparation of the experimental environment
  (2) VSAN data center features
  (3) Virtual SAN data center requirements
  (4) Add ESXi to vCenter and set up the Virtual SAN network
    1) Add ESXi hosts to the vCenter data center
    2) Create a distributed switch
    3) Add hosts to the distributed switch
    4) Create a VSAN dedicated distributed port group
    5) Create the VSAN cluster

I. Data center preparation

(1) Storage device preparation

Prepare storage devices. For Virtual SAN to claim a storage device, the device must meet the following requirements:

- The device is local to the ESXi host; Virtual SAN cannot claim remote devices.
- The device does not contain any existing partition information.
- All-flash disk groups and hybrid disk groups cannot coexist on the same host.

Prepare disk group devices. Each disk group provides one flash cache device and at least one disk or flash capacity device. Not counting protection replicas, the capacity of the flash cache device must be at least 10 percent of the storage consumption you anticipate on the capacity devices. Virtual SAN requires at least one disk group on every host that provides storage, and at least three such hosts in total. Use uniformly configured hosts to achieve the best Virtual SAN performance.

Raw and usable capacity. Do not include flash cache devices when calculating capacity; they provide caching, not storage, unless flash devices have also been added as capacity. Provide enough space to handle the failures-to-tolerate value allowed in the virtual machine storage policy. Verify that the Virtual SAN datastore has enough space to operate by checking the space on each host rather than on the consolidated Virtual SAN datastore object: for example, when you evacuate a host, all of the datastore's free space may sit on the host being evacuated, and the data center will not be able to accommodate the evacuation onto other hosts. Also note the storage space overhead of Virtual SAN itself: for disk format version 2.0 and later, the overhead is roughly 1 to 2 percent of the capacity on each capacity device; for disk format version 1.0, the overhead is 1 GB per capacity device.

(2) The impact of Virtual SAN policy on capacity
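The capacity and cache sizing rules above can be turned into a quick back-of-the-envelope calculation. This is a minimal sketch, not official sizing tooling: the function names are illustrative, and the 2 percent figure is the upper bound of the 1-2 percent overhead range quoted above.

```python
def usable_capacity_gb(raw_gb_per_device, devices, disk_format_version=2):
    """Approximate usable capacity after vSAN on-disk format overhead:
    ~2% of each capacity device for format v2.0 and later (the section
    quotes 1-2%; the upper bound is assumed here), or a flat 1 GB per
    device for format v1.0."""
    total = raw_gb_per_device * devices
    if disk_format_version >= 2:
        return total - total * 0.02
    return total - 1.0 * devices

def min_flash_cache_gb(expected_consumed_gb):
    """Sizing rule from this section: the flash cache device should be
    at least 10% of anticipated consumed capacity, not counting
    protection replicas."""
    return expected_consumed_gb * 0.10
```

For the lab hosts described below (4 x 30 GB HDDs per host), `usable_capacity_gb(30, 4)` gives roughly 117.6 GB of usable capacity per host before replication.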

(3) Host preparation

Provide memory for Virtual SAN. Size host memory according to the maximum number of devices and disk groups that will be mapped to Virtual SAN: supporting the device and disk-group maximums requires 32 GB of host memory for system operation, and the minimum is 8 GB.

(4) Requirements and suggestions when preparing to enable Virtual SAN

- Ensure that at least three hosts provide storage for the Virtual SAN datastore.
- Because maintenance and repair operations are required in the event of a failure, add at least four hosts to the data center.
- Use uniformly configured hosts to achieve the best storage balance in the data center.
- Do not add hosts with only computing resources to the data center, to avoid an uneven distribution of storage components on the hosts that provide storage. Virtual machines that require a lot of storage space and run on compute-only hosts may store a large number of components on a single capacity host, and storage performance in the data center may suffer as a result.
- Do not configure aggressive CPU power management policies on the hosts to save power; applications that are sensitive to CPU speed latency may perform very poorly.
- Consider how workloads map onto hybrid or all-flash disk configurations. For a high level of predictable performance, provide an all-flash disk group data center; for a balance between performance and cost, provide a hybrid disk group data center.

(5) Storage controller preparation

Verify that the storage controllers on the Virtual SAN hosts meet the specific requirements for mode, driver and firmware versions, queue depth, cache, and advanced features.

(6) Network configuration

Put hosts on the same subnet. To achieve the best network performance, connect the hosts on the same subnet. In Virtual SAN 6.0 and later, hosts can also be connected over the same layer 3 network if necessary.

Enable IP multicast on the physical switch. Verify that the physical switch is configured for multicast traffic so that hosts can exchange Virtual SAN metadata. Configure IGMP snooping queriers on the physical switch so that multicast messages are transmitted only through the switch ports connected to Virtual SAN hosts. If there are multiple Virtual SAN data centers in the same subnet, change the default multicast address of each additional data center.

Specify network bandwidth on the physical adapter.

Allocate at least 1 Gbps of bandwidth to the Virtual SAN. You can use one of the following configuration options:

- Dedicate a 1-GbE physical adapter for hybrid host configurations.
- Configure a dedicated or shared 10-GbE physical adapter for all-flash configurations; if possible, use dedicated or shared 10-GbE physical adapters for hybrid configurations as well.
- When Virtual SAN traffic shares a 10-GbE physical adapter with other system traffic, reserve bandwidth for Virtual SAN with vSphere Network I/O Control on the distributed switch.

Configure a port group on the virtual switch. Assign the physical adapters used by Virtual SAN as active uplinks for the port group. When a NIC team is used for network availability, choose a teaming algorithm based on how the physical adapters connect to the switch. If the design assigns Virtual SAN traffic to a VLAN, enable tagging on the virtual switch.

Check the firewall on the Virtual SAN hosts. Virtual SAN sends messages on certain ports on each host in the data center; verify that the host firewalls allow traffic on these ports.
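A simple TCP probe can help confirm that firewalls are not blocking traffic between hosts. The port list below is an assumption for illustration (2233/TCP for RDT storage traffic, 8080/TCP for the vSAN VASA provider, plus 12345 and 23451/UDP for CMMDS cluster metadata); verify the exact list for your version in the VMware ports documentation.

```python
import socket

# Ports commonly associated with Virtual SAN traffic (illustrative
# assumption -- check your version's official ports list).
VSAN_TCP_PORTS = (2233, 8080)

def tcp_port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds,
    False if it is refused, filtered, or times out."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Usage: `[p for p in VSAN_TCP_PORTS if not tcp_port_open("10.0.0.50", p)]` lists TCP ports that appear blocked on a host; the UDP CMMDS ports need a different kind of probe.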

(7) License considerations

When preparing the data center for Virtual SAN, review the Virtual SAN license requirements:

- Ensure that a valid license has been obtained for complete control of the host configurations in the data center; it should be different from a license used for evaluation purposes.
- After the Virtual SAN license or evaluation period expires, you can continue to use the current configuration of Virtual SAN resources, but you cannot add capacity to a disk group or create new disk groups.
- If the data center includes all-flash disk groups, verify that the all-flash feature is available under the license.
- If the Virtual SAN data center uses advanced features such as deduplication and compression, or a stretched cluster, verify that the feature is available under the license.
- When adding or removing hosts, consider the CPU capacity of the Virtual SAN license across the entire data center. Virtual SAN is licensed per CPU: when you assign a Virtual SAN license to a data center, the license capacity used equals the total number of CPUs across all member hosts.

II. Create a Virtual SAN data center

(1) Preparation of the experimental environment
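The per-CPU license accounting described in the license section amounts to a simple sum. A minimal sketch (the function name and the host/socket counts are illustrative):

```python
def vsan_license_cpus_used(hosts_cpu_sockets):
    """vSAN license capacity is counted per CPU: assigning a license
    to a data center consumes capacity equal to the total CPUs of
    all member hosts."""
    return sum(hosts_cpu_sockets)
```

For example, three dual-socket hosts consume `vsan_license_cpus_used([2, 2, 2])`, i.e. 6 CPUs of license capacity.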

| Hostname | Server role | Management IP | Gateway | VSAN IP | Network cards | Disks |
|----------|-------------|---------------|---------|---------|---------------|-------|
| DC01 | Domain controller | 10.0.0.20 | 10.0.0.1 | / | 1x 1G | 100G*1 |
| vCenter | vCenter Server | 10.0.0.28 | 10.0.0.1 | / | 1x 1G | 100G*1 |
| ESX01 | ESXi physical host | 10.0.0.50 | 10.0.0.1 | 10.10.1.50 | 2x 1G, 2x 10G | SSD 25G*1, HDD 30G*4 |
| ESX02 | ESXi physical host | 10.0.0.51 | 10.0.0.1 | 10.10.1.51 | 2x 1G, 2x 10G | SSD 25G*1, HDD 30G*4 |
| ESX03 | ESXi physical host | 10.0.0.52 | 10.0.0.1 | 10.10.1.52 | 2x 1G, 2x 10G | SSD 25G*1, HDD 30G*4 |
| ESX04 | ESXi physical host | 10.0.0.53 | 10.0.0.1 | 10.10.1.53 | 2x 1G, 2x 10G | SSD 25G*1, HDD 30G*4 |

[Figure: network topology diagram]

(2) VSAN data center features

A Virtual SAN data center includes the following features:

- Each vCenter Server instance can have multiple Virtual SAN data centers, so a single vCenter Server can manage multiple Virtual SAN data centers.
- Virtual SAN consumes all of its devices, including flash cache and capacity devices, and does not share devices with other features.
- Virtual SAN data centers can contain hosts with or without capacity devices. The minimum requirement is three hosts with capacity devices; for best results, create the Virtual SAN data center from uniformly configured hosts.
- If a host provides capacity, it must have at least one flash cache device and one capacity device.
- In a hybrid data center, magnetic disks provide capacity and flash devices provide read and write caching. Virtual SAN allocates 70 percent of the available cache as read cache and 30 percent as write buffer; the flash device acts as both read cache and write buffer.
- In an all-flash data center, one designated flash device is used as the write cache and the other flash devices are used as capacity devices; all read requests are served directly from the flash capacity.
- Only local capacity devices or directly attached capacity devices can join a Virtual SAN data center. Virtual SAN cannot consume other external storage connected to the data center, such as SAN or NAS.

(3) Virtual SAN data center requirements

You can use this comparison table to verify that the data center meets the guidelines and basic requirements.
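The 70/30 hybrid cache split described in the features list above can be sketched as a small calculation (illustrative function name; the split itself is the rule quoted above):

```python
def hybrid_cache_split(cache_device_gb):
    """In a hybrid disk group, Virtual SAN uses the flash cache device
    as 70% read cache and 30% write buffer."""
    return {"read_cache_gb": cache_device_gb * 0.70,
            "write_buffer_gb": cache_device_gb * 0.30}
```

For the 25 GB SSD on each lab host, this gives roughly 17.5 GB of read cache and 7.5 GB of write buffer.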

(4) Add ESXi to vCenter and set up the Virtual SAN network

The vSphere installation steps are covered in the documentation and are not repeated here.

1) Add ESXi hosts to the vCenter data center

Enter the ESXi host address.

Enter the ESXi user name and password.

Confirm the security prompt

Confirm the message and click next.

Select the license key.

Lockdown mode is not needed for now; just keep the default.

Select the data center.

Click finish after confirming that the information is correct.

Here you can see that the host has been added; continue to add the other two in the same way.

2) Create a distributed switch. In vCenter, navigate to the data center, then click "Distributed Switch" → "New Distributed Switch" to start the new distributed switch wizard.

Just select the latest version.

Set the number of ports created initially (the number here is arbitrary; if the ports run out, more are created automatically).

Click "finish" after confirming that the information is correct

3) Add hosts to the distributed switch. Right-click the newly created distributed switch → "Add and Manage Hosts".

Since this is a newly created distributed switch, check "Add hosts" here and click "Next".

Click "New Host" to add a host

Check the hosts to be added and click "OK".

Follow the steps below to add the first network card of each of the three hosts to uplink port 1, and the second network card to uplink port 2. Select the first network card and click "Assign uplink" in the upper left corner.

Select uplink 1 and click "OK"; this adds the first network card to uplink port 1. Be sure to also check "Apply this uplink assignment to all other hosts" in the lower left corner, then add the second network card in the same way (you can add only one network card here, or two).

Following the steps above, the first network card of each of the three hosts goes to uplink port 1 and the second to uplink port 2. (The figure below shows network card 1 already assigned to uplink port 1; repeat the steps to assign network card 2 to uplink port 2.) After the second network card is added, click "Next" to assign the VMkernel adapters to the distributed switch.

Assign the VMkernel adapter to the newly created distributed switch (the only one available for migration), check "Apply this port group assignment to all other hosts" in step 2 in the lower left corner, and click "OK".

Click "Next" after confirming the information. (The image below is a screenshot taken after creation, because I forgot to capture one during the operation; the port is in use and the source port differs.)

There are no virtual machines in the environment you just created. Just click "next" here.

After confirming the information, click "finish".

Wait until the task bar shows "Update VSAN configuration" for each of the three hosts, which indicates that they were added successfully.

4) Create a VSAN dedicated distributed port group. Open "Network" → right-click the data center → "DSwitch distributed switch" → "Distributed Port Group" → "New Distributed Port Group".

Enter the name of the VSAN dedicated distributed port group, and then click next

Just keep the default here and click "next" directly.

Click "finish" after confirming that the information is correct.

Click "Network" → the data center → "DSwitch" → right-click the distributed port group you just created → "Add VMkernel Adapter".

Select the host on which the VMkernel adapter needs to be created

Select all hosts and click OK to complete the check.

Check the box and click "next" directly.

Click "VSAN" at the available services, and then click "next".

Click "Use static IPv4 settings" → enter the VSAN dedicated VMkernel adapter address for each host (note: the VMkernel adapter address should preferably not be on the same network segment as the management network) → "Next".
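In this lab the VSAN VMkernel address of each host simply reuses the last octet of its management IP inside the dedicated VSAN segment (10.0.0.50 → 10.10.1.50). A small sketch of that convention; this is just the lab's naming scheme, not a vSAN requirement:

```python
import ipaddress

def vsan_vmk_ip(mgmt_ip, vsan_subnet="10.10.1.0/24"):
    """Derive a host's dedicated VSAN VMkernel address by reusing
    the last octet of its management IP inside the VSAN segment."""
    host_part = int(mgmt_ip.rsplit(".", 1)[1])
    network = ipaddress.ip_network(vsan_subnet)
    return str(network.network_address + host_part)
```

For example, `vsan_vmk_ip("10.0.0.50")` yields the planned VSAN address for ESX01 in the table above.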

Click "finish" after confirming that the information is correct.

Click "Network" → the data center → "DSwitch distributed switch" → "VSAN network distributed port group" → "Edit".

Navigate to "Teaming and failover" → use the down arrow in step 2 to move uplinks 1 and 2 to "Unused uplinks". (Note: this is because uplinks 1 and 2 were assigned to the management network earlier in the lab; the VSAN private network uses uplink ports 3 and 4.)

After the migration is completed, click "OK" as shown in the following figure.

Go back to "Network" → the data center → right-click the "DSwitch" distributed switch → "Add and Manage Hosts".

This is not the first time we manage this distributed switch; the hosts were already added, so click "Manage host networking" here.

Select all hosts, and then click OK

After the check box is completed, just click "next".

Assign network cards 3 and 4 to uplink port groups 3 and 4, respectively

Assign Nic 3 to uplink port 3, and then check "apply this uplink assignment to all other hosts" so that we don't have to manually assign the other two hosts again.

After the addition is completed, use the same steps to assign the network card 4 to uplink port 4. I will not repeat it here, as shown in the following figure.

We have just modified the VMkernel port group, so nothing needs to be done here; go straight to the next step. (Note: this step is broken out separately to keep the procedure clear. Once you can do it fluently, you can move steps 1-14 of step 2-4-4 before 2-4-3, assign network cards 1 and 2 to uplink ports 1 and 2 directly in 2-4-3, and assign network cards 3 and 4 to uplink ports 3 and 4.)

We have not created any virtual machines yet, so just skip this step.

Click "finish" after confirming that the information is correct.

Go to "Network" → the data center → "DSwitch distributed switch" → "Configure" → "Topology" to see the hosts' network cards in each of the two distributed port groups, along with their IP addresses.

Click on the management network to see that the distributed port group uses 1 and 2 uplink ports

Click on the VSAN private network distributed port group to see that the distributed port group uses 3 and 4 uplink ports

5) Create the VSAN cluster. First create a new cluster in the data center.

Enter cluster name

Migrate the host to the cluster, select the compute node in the data center, right-click and click "migrate to"

Expand the drop-down menu and select the VSAN cluster you just created

The hosts have been migrated to the VSAN cluster. Follow the same steps to migrate the other two hosts. I won't go into too much detail here.

Go to "Hosts and Clusters" → "Dangxiao data center" → "VSAN cluster" → "Configure" → "VSAN" → "Services" → "Configure".

Select "single site cluster"; the other two options are discussed later.

We do not enable deduplication and compression here, as they apply only to all-flash clusters. Just click "Next".

Here we declare the cache and capacity tiers (the cache tier must use solid-state disks; the capacity tier generally uses mechanical hard disks, or solid-state disks for an all-flash configuration). Solid-state drives are normally recognized automatically, so we only need to use the drop-down menu to claim each device for the cache tier or the capacity tier. A solid-state drive can be claimed for either tier, while a mechanical disk can only be claimed for the capacity tier. (If a flash drive is not displayed here, see step 11.)

Declare capacity layer
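The claiming rules just described can be sketched as a small partition of a host's disks. The dicts are a stand-in for what the vSphere UI shows; real claiming happens in the wizard, and the names below are illustrative.

```python
def claim_disks(disks):
    """Partition disks by the claiming rules above: SSDs may serve
    the cache tier, mechanical disks only the capacity tier."""
    cache_tier = [d["name"] for d in disks if d["is_ssd"]]
    capacity_tier = [d["name"] for d in disks if not d["is_ssd"]]
    return cache_tier, capacity_tier

# One lab host: 1 x 25 GB SSD for cache, 4 x 30 GB HDDs for capacity.
lab_host = [{"name": "SSD-25G", "is_ssd": True}] + \
           [{"name": "HDD-30G-%d" % i, "is_ssd": False} for i in range(1, 5)]
```

Here `claim_disks(lab_host)` produces one cache device and four capacity devices, matching the lab table.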

(if SSDs can be identified above, this step can be ignored.) sometimes some minority brands of SSDs cannot be recognized as SSDs by VMware, so we need to manually mark SSDs (this method is also suitable for test environments.) Just do experiments when we don't have SSDs) open "Host and Cluster" → "Dangxiao data Center" → "VSAN Cluster" → "need to mark SSDs computing node" → "configure" → "storage device" → "select SSDs that need to be marked" → "marked as Flash disk", so we mark mechanical hard drives as SSDs.

With three hosts, the default storage policy can tolerate only one host failure.
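The three-hosts-one-failure limit follows from mirrored (RAID-1) placement: tolerating n host failures requires 2n+1 hosts. A minimal sketch of that arithmetic (function names are illustrative):

```python
def hosts_required(failures_to_tolerate):
    """Mirrored vSAN objects need 2n+1 hosts to tolerate n host
    failures -- which is why three hosts support only FTT=1."""
    return 2 * failures_to_tolerate + 1

def max_failures_tolerated(host_count):
    """Inverse of the rule above: the largest FTT a cluster of the
    given size can support with mirroring."""
    return (host_count - 1) // 2
```

For example, the four-host lab still supports only FTT=1; five hosts would be needed for FTT=2.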

You can view the complete information here. Make sure the information is correct and click "finish".

Wait for the update VSAN of the three hosts in the taskbar to be completed.

After the steps above, you need to enable HA and DRS, otherwise you cannot create virtual machines. Go to "Cluster" → "Dangxiao data center" → "VSAN" → "Configure" → "vSphere DRS" → "Edit".

Click the switch on the right side of vSphere DRS to open DRS. The status that has been opened is shown in the image below. You can adjust the automation level and migration threshold according to your needs.

Go to "Cluster" → "Dangxiao data center" → "VSAN" → "Configure" → "vSphere Availability" → click "Edit".

After vSphere HA is enabled, if a compute node host fails, its virtual machines are automatically restarted on a host that holds a replica of their data.
