Example of using Dell PowerEdge RAID control card (PERC H710P as an example)

2025-04-01 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/02 Report--

Examples of Dell PERC usage (H710P)

In particular, the RAID operations in this article are intended only to help readers learn the functions and usage of the Dell PowerEdge server RAID control card in a test environment. Do not run these experiments directly on a production server: a misoperation risks data loss!

Example demonstration environment: PowerEdge R620 + H710P RAID control card + 9 × 300GB 10K SAS hard disks

Basic skills with the PERC card

Initialization of RAID configuration information for PERC card:

Dell PowerEdge server RAID control cards can be configured through several tools and interfaces provided by Dell, including the H710P onboard PERC BIOS management interface, the Lifecycle Controller, System Setup, and the OMSA software. Below, for each purpose, we pick the recommended tool and interface and demonstrate it.

By default, the onboard PERC BIOS configuration management interface of the H710P can be used for most configuration tasks. To reinitialize the RAID configuration information of the hard disks on the RAID card, take the following steps:

Note: in the following demonstration of RAID configuration initialization, we will clear the PERC card's disk array information, but not the actual user data stored on the disks. In a later experiment, we will recreate the same RAID configuration on the original hard drives to recover the user data on them.

1. When the server boots and runs its self-test, once the PERC BIOS card self-test screen appears, press Ctrl+R to enter the PERC management interface

Here, PERC has created two RAID arrays: RAID1 (Disk ID=0,1) and RAID10 (Disk ID=2,3,4,5)

Next, we will clear the PERC card array information in preparation for the initial configuration of all the hard drives.

2. Highlight the PERC card to be managed, press F2, and select "Clear Config" in the pop-up menu.

3. Select "YES" in the alarm prompt window to confirm

4. The PERC card's disk array information has now been cleared. Check the disk list: the status of each disk has changed to "Ready", ready for the later configuration demonstration.

Note: at this point, we have only cleared the configuration information of RAID, not the user data on the hard drive.


Creation of RAID array:

In the previous section, we cleared the PERC card's original configuration information; the user data on the hard disks was not initialized. Next, we can create the same RAID configuration (RAID1+RAID10) and recover the original user data. Let's demonstrate (unless forced by a failure, administrators are not advised to run similar tests on production servers).

1. First check to make sure that the hard drives are in the "Ready" state, highlight the PERC card to be configured, press F2, and select "Create New VD" from the pop-up menu to create a new array (here called VD: Virtual Disk)

2. Let's first create the RAID1 array: select "RAID-1" under "RAID Level", then select the first two hard drives (Disk ID=0,1) in the physical disk list below it. Notice that although we are using two 300GB hard drives for the RAID1, the resulting VD Size is 278.87GB, roughly the capacity of a single drive. Keep the default stripe and cache settings and select OK

3. The system prompts that a newly created RAID array should be initialized after creation, which would also wipe any user data already stored on the disks. In this example we only want to restore the previously cleared RAID configuration information, so choose OK to dismiss the prompt. If you do want to wipe user data, see the section on initialization and management of the RAID array.

4. In this way, using the same two hard disks (ID=00,01) as before, we have recreated the same RAID1 array as before.

5. In the same way, create a RAID10 using the same 4 hard drives (Disk ID=2,3,4,5) as before

6. In this way, the two disk arrays that were previously cleared are created back.

Note: after clearing the PERC card array configuration information, you can use the same disk and create the same disk array. If the array is not initialized, the user data on the disk still exists. In this example, the Windows 2008 operating system originally installed on the server is still stored on the hard disk, and we can restart the server and enter the operating system immediately.
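The VD sizes seen above (e.g. 278.87GB reported for a nominal "300GB" drive) follow from RAID arithmetic plus the decimal-versus-binary gigabyte gap; the card's disk coercion shaves off a little more. A rough Python sketch, assuming identical disks and ignoring coercion (the function name is illustrative, not a PERC API):

```python
def usable_capacity_gib(raid_level, disk_count, disk_size_bytes):
    """Approximate usable space for common RAID levels.

    Assumes identical disks. Real PERC cards also 'coerce' each disk
    down slightly, so on-card figures (278.87GB for a nominal 300GB
    SAS drive) come out a bit below this estimate.
    """
    size_gib = disk_size_bytes / 2**30  # vendors label disks in decimal GB
    if raid_level == 1:
        assert disk_count == 2
        return size_gib                      # one disk's worth, mirrored
    if raid_level == 5:
        assert disk_count >= 3
        return (disk_count - 1) * size_gib   # one disk's worth of parity
    if raid_level == 10:
        assert disk_count >= 4 and disk_count % 2 == 0
        return disk_count // 2 * size_gib    # mirrored pairs, striped
    raise ValueError(raid_level)

# A nominal "300GB" drive holds 300 * 10**9 bytes:
print(round(usable_capacity_gib(1, 2, 300 * 10**9), 1))    # 279.4
print(round(usable_capacity_gib(10, 4, 300 * 10**9), 1))   # 558.8
```

So a two-drive RAID1 of "300GB" disks yields roughly 279GiB before coercion, consistent with the 278.87GB shown in the management interface.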


Initialization and management of RAID array:

If we are creating a RAID array to deploy a new server, we recommend initializing every newly created RAID array so that any original user data on the disks is erased before the subsequent system and software installation.

1. We have temporarily created a new RAID1 array, shown in the figure as Disk Group:1, Virtual Disk ID:2. Select the array, press F2, and select Initialization from the pop-up menu.

2. Selecting Start Init begins the array initialization; Stop Init can stop the process. The initialization progress percentage is shown at the top right of the screen. Users must wait for the initialization to finish before they can start using the array.

3. We can also choose Fast Init, which moves the array initialization into the background. That process runs automatically and is transparent to the user, who can restart the server and begin installing the system and software immediately; the background initialization resumes each time the server is powered on until it completes.

4. After the array is initialized, we can also run a consistency check on it; this is often done after a hardware or disk alarm to verify the integrity of the RAID.

5. Array deletion is also available, as shown in the picture; we won't go into details.


Setting a dedicated hot spare hard disk:


Setting a global hot spare hard drive:

In the following example, as shown in the figure, we have two RAID arrays, RAID1 and RAID10. The hard drive ID=06 has already been set as a dedicated hot spare for RAID10. We now want to set the free hard disk ID=07 as a global hot spare. That way, if a disk fails in either the RAID1 or the RAID10 array, the hot spare will automatically rebuild the data and join the corresponding array.

1. Press Ctrl+N to switch to the PD Mgmt (physical disk management) screen. We can see that the dedicated hot spare ID=06 already shows the status "Hotspare".

2. Highlight the free hard drive ID=07, press F2, and select Make Global HS from the pop-up menu

3. When the setting is complete, the status of disk ID=07 has also changed to "Hotspare"; in PD Mgmt you cannot tell a global hot spare from a dedicated one.

4. Press Ctrl+P to return to VD Mgmt, where we can see that disk ID=07 is now a global hot spare covering both the RAID1 and RAID10 arrays.
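Conceptually, the difference between the two spare types is a selection rule: on a disk failure, a dedicated spare serves only the array it is assigned to, while a global spare can serve any array. A hypothetical Python sketch of that rule (an illustrative model, not an actual PERC interface):

```python
def pick_hot_spare(failed_array, spares):
    """Pick a replacement disk for a failed array.

    spares: list of (disk_id, kind, dedicated_to) tuples, where kind is
    'dedicated' or 'global'. A matching dedicated spare is preferred.
    """
    for disk_id, kind, target in spares:
        if kind == 'dedicated' and target == failed_array:
            return disk_id
    for disk_id, kind, _ in spares:
        if kind == 'global':
            return disk_id
    return None  # no usable spare: the array stays degraded

# ID=06 is dedicated to RAID10, ID=07 is global (as in the demo above):
spares = [(6, 'dedicated', 'RAID10'), (7, 'global', None)]
print(pick_hot_spare('RAID10', spares))  # 6: the dedicated spare wins
print(pick_hot_spare('RAID1', spares))   # 7: falls back to the global spare
```

This mirrors the demo setup: a RAID10 failure consumes the dedicated spare ID=06 first, while a RAID1 failure can only use the global spare ID=07.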


Online management and maintenance of RAID disk members


Advanced usage skills for the PERC card

Disk roaming:

Disk roaming refers to moving physical disks between cable connections or backplane slots of the same controller. The controller automatically recognizes the relocated physical disks and logically places them in the virtual disks of the disk group they belong to. Note: disk roaming can be performed only when the system is turned off. Do not attempt disk roaming during RAID-level migration (RLM) or online capacity expansion (OCE); doing so will result in the loss of virtual disks.

Let's demonstrate disk roaming:

1. Lab environment: a PowerEdge R620 server running Windows 2008 R2 Enterprise Edition. The server has two disk arrays (Drive C: RAID1, Drive D: RAID5), with files stored on both. Note that the RAID1 members are ID=00,01 and the RAID5 members are ID=02,03,04.

2. Power off the system, physical disks, enclosures, and system components. Disconnect the power cord from the system, then unplug all the hard drives ID=00~04, shuffle their order, and plug them back into random backplane slots.

3. Boot to the PERC BIOS management interface. Notice that the members of RAID1 have become ID=02,04: the two hard drives of the original RAID1 are now plugged into slots 02 and 04 on the server backplane. Likewise, the RAID5 drives are now in slots 00, 01, and 03.

4. Perform a safety check to make sure the physical disks are inserted correctly. Then exit the management interface, restart the server, and enter the Windows operating system: the server boots into Windows 2008 R2 normally, and the files on the drives are unaffected.
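Roaming works because array membership is recorded in configuration metadata written on each disk, not tied to a backplane slot; on boot the controller rebuilds its view from that metadata. A toy Python sketch of the idea (all names hypothetical, not a PERC data structure):

```python
def rebuild_arrays(slots):
    """Rebuild the controller's array view after disks change slots.

    slots: {slot_number: (array_name, member_index)} as read from each
    disk's on-disk configuration metadata (names are illustrative).
    Returns {array_name: [slots ordered by recorded member index]}.
    """
    arrays = {}
    for slot, (array, member) in sorted(slots.items()):
        arrays.setdefault(array, []).append((member, slot))
    # order members by their recorded index, not by physical slot
    return {a: [s for _, s in sorted(m)] for a, m in arrays.items()}

# The RAID1 disks originally in slots 0,1 were re-inserted into slots 2
# and 4, and the RAID5 disks landed in slots 0, 1, and 3 (as in step 3):
slots = {2: ('RAID1', 0), 4: ('RAID1', 1),
         0: ('RAID5', 0), 1: ('RAID5', 1), 3: ('RAID5', 2)}
print(rebuild_arrays(slots))
```

The arrays come back intact regardless of which slot each disk ended up in, which is exactly what the demonstration shows.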


Migration of RAID arrays:

The PERC H710, H710P, and H810 cards support the migration of virtual disks between different controllers without taking the target controller offline. The controller can import RAID virtual disks that are in an optimal, degraded, or partially degraded state. However, you cannot import virtual disks that are offline.

Disk migration tips:

- Virtual disk migration from PERC H700 and H800 to PERC H710P and H810 is supported.
- Migration of volumes created on H710, H710P, or H810 to H710, H710P, or H810 is supported.
- Migration from H700 or H800 to H310 is not supported.
- Migration from H710, H710P, or H810 to H310 is not supported.

Note: the source controller must be offline before performing a disk migration.

Note: the disk cannot be migrated to an old or pre-replacement PERC card.

Note: non-RAID disks are only supported on the PERC H310 controller. Migration to any other PERC product is not supported.

Note: import of secure virtual disks is supported when the appropriate key (LKM) is provided or configured.
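The compatibility rules above amount to a lookup table. Here is an illustrative Python sketch; it encodes only the rules stated in this article, not Dell's full support matrix:

```python
# Source controller -> set of supported target controllers,
# per the migration tips listed above (H310 is never a valid target).
SUPPORTED_TARGETS = {
    'H700': {'H710P', 'H810'},
    'H800': {'H710P', 'H810'},
    'H710': {'H710', 'H710P', 'H810'},
    'H710P': {'H710', 'H710P', 'H810'},
    'H810': {'H710', 'H710P', 'H810'},
}

def migration_supported(source, target):
    """True if the article's rules allow migrating disks source -> target."""
    return target in SUPPORTED_TARGETS.get(source, set())

print(migration_supported('H700', 'H810'))   # True
print(migration_supported('H710P', 'H310'))  # False
```

A check like this, done before pulling drives, avoids discovering an unsupported combination only after the disks are already in the new server.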

If the controller detects that the physical disk contains an existing configuration, it marks the physical disk as foreign (external) and generates an alert prompt that the external disk is detected.

Caution: do not attempt disk roaming during RLM or online capacity expansion (OCE). This will result in the loss of virtual disks.

Let's do an experiment like this:

1. First, a look at the server's RAID configuration (R620 + H710P): a RAID5 consisting of three hard drives, with 100GB of virtual disk space allocated

2. Shut down the server, remove the above hard drives from their backplane slots, insert them into another server of the same model, and power it on.

The server self-test prompts that the PERC card found a foreign configuration (Foreign Configuration). Pressing "F" here would make PERC automatically import the relevant RAID configuration information from the hard drives. To see the process clearly, we instead press "C" to enter the management interface and configure manually.

3. The management interface does not yet show any RAID information. Highlight the RAID card, press F2, and select "Foreign Config" > "Import" from the menu to import the PERC configuration information.

4. An alarm prompt appears; select YES

5. The RAID configuration information stored on the hard drives has now been migrated to the new PERC card. Our disk migration is complete


RAID disk array expansion:

Here we discuss how to expand the space of an existing virtual disk without deleting data when the server runs out of hard disk space.

Quick links to the demonstrations:

Online capacity expansion (OCE)

RAID level Migration (RLM)

Brief introduction

We can reconfigure the online virtual disk by expanding the capacity and / or changing the RAID level.

Note: spanned virtual disks, such as RAID 10, 50, and 60, cannot be reconfigured.

Note: reconfiguring a virtual disk generally has an impact on disk performance until the reconfiguration is complete.

Online capacity expansion (OCE) can be achieved in two ways.

If there is only one virtual disk in the disk group and free space is available, the capacity of the virtual disk can be expanded within that space. Free space remains when a virtual disk has been created but does not use the full size of the disk group.

Free space is also available when the physical disk of a disk group is replaced with a larger disk through the Replace Member (change member) feature. The capacity of virtual disks can also be expanded by performing OCE operations to increase the number of physical disks.

RAID level migration (RLM) refers to changing the RAID level of a virtual disk. RLM and OCE can be implemented at the same time, so that virtual disks can change the RAID level and increase capacity at the same time. After completing the RLM/OCE operation, there is no need for a reboot. To view a list of feasibility for RLM/OCE operations, refer to the following table. The source RAID level column represents the virtual disk RAID level before the RLM/OCE operation is performed, and the destination RAID level column represents the RAID level after the operation is completed.

Note: if the controller already contains the maximum number of virtual disks, no virtual disk can undergo RAID-level migration or expansion.

Note: the controller changes the write cache policy for all virtual disks that are RLM/OCE operations to write-through until the RLM/OCE is complete.


Next, let's demonstrate disk expansion in two scenarios:

Online capacity expansion (OCE)

The experiment scenario: one R620 server with two hard drives. Drive C: is a RAID1 virtual disk holding the operating system; Drive D: is a 10GB RAID1 virtual disk holding data files. As shown in the figure:

Let's restart the server and press CTRL-R to go to the PERC BIOS management interface to check the configuration of RAID:

The 10GB virtual disk is built on a RAID1 array with a total capacity of 278GB, leaving about 268GB of free space in the array. We will use part of this remaining space to expand the 10GB virtual disk to about 50GB.

Note: the following demonstration is done in the PERC BIOS management interface, but the same operations can also be performed in the OMSA GUI. The later RLM demonstration will use the OMSA management interface:

1. Highlight the virtual disk VD1 to be expanded, press F2, and select Expand VD size from the pop-up menu

2. Enter 15 as the percentage of free space to expand by; the estimated virtual disk size after expansion is shown below it. Then select the Resize button

3. When the management interface returns to the main page and VD1 is selected, you can see that its space has become 50GB, with background initialization in progress shown on the right. This initialization covers only the newly added blank space and does not delete the original data.

4. After initialization, restart the server into the operating system. In Server Manager's Disk Management, we can see that disk 1 has gained 40.33GB of free space.

5. Now extend Drive D:. Right-click Drive D: and select Extend Volume from the pop-up menu

6. In the Extend Volume wizard, click Next

7. Select the amount of space to add. We use the default value, i.e., all of the free space, and click Next

8. Confirm the operations to be performed and finish the wizard

9. Task complete: Drive D: has been successfully extended to 50GB

10. Reconfirm that the file system's new 50GB of space and the original data files are intact
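The numbers in steps 2 and 4 line up: the percentage entered is a share of the array's free space, added on top of the current VD size. A quick Python check, assuming a 278.87GB array and a 10GB VD (the function name is illustrative):

```python
def expanded_vd_size_gb(current_gb, array_gb, percent_of_free):
    """New VD size after OCE: current size plus a share of the free space."""
    free_gb = array_gb - current_gb
    return current_gb + free_gb * percent_of_free / 100

# 10GB VD on a 278.87GB RAID1, expanded by 15% of the free space:
new_size = expanded_vd_size_gb(10, 278.87, 15)
print(round(new_size, 2))        # 50.33 -> shown as ~50GB in the UI
print(round(new_size - 10, 2))   # 40.33GB of new space, matching step 4
```

So 15% of the 268.87GB free space is about 40.33GB, taking the VD from 10GB to roughly 50GB as both the PERC interface and Disk Management report.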

RAID level Migration (RLM)

Now let's look at RLM, which expands capacity by changing the RAID level of the array and/or adding new member disks to it. We will demonstrate expanding a RAID1 into a RAID5 consisting of four hard drives:

For installing the OMSA console, see the instructions in the section on online management and maintenance of RAID disk members.

1. Log in to the OMSA console and check the array configuration: there are two RAID1 arrays. VD0 is on the first RAID1 and holds the operating system, so we leave it alone. VD1 uses 200GB of the other RAID1, which still has 78GB free.

A look at Explorer shows that Drive D: on VD1 holds user data:

2. In VD1's list of available tasks, select "Reconfigure" and click Execute.

3. Incidentally, because the RAID1 holding VD1 has 78GB of spare space, OCE can also be done from the OMSA management interface by clicking "Expand Capacity" here.

The configuration screen that follows is similar to the one in the OCE section above; we won't demonstrate that again here.

4. The original RAID1 consists of disks ID=02,03. We add the free disks ID=04,05 and click "Continue"

5. Select RAID-5 as the new RAID level; note the hint that the new capacity will become 600GB. Click "Continue"

6. Confirm the configuration information and click "finish"

7. VD1 enters the RLM expansion state, and we can see the percentage progress in the OMSA manager until it is completed.

8. Verify that the VD1 expansion has finished: the level now shows RAID5 and the members are disks ID=02~05.

9. After exiting OMSA, we notice in Windows Disk Management and Explorer that the expanded space has not yet taken effect; the volume is still 200GB

10. Restart the server and check in the PERC BIOS; after the restart, the new space has taken effect.

11. The server restarts and re-enters the operating system. This time Disk Management shows the extra unallocated space.

12. Follow the volume-extension steps from the OCE section above to extend Drive D:. This demonstration is complete.
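The 600GB figure in step 5 can be verified: the 200GB RAID1 VD occupies a 200GB segment on each member disk, and RLM keeps that per-disk segment while the 4-disk RAID5 gains three data segments (one segment's worth goes to parity). A minimal Python sketch under that assumption:

```python
def rlm_new_size_gb(vd_size_gb, old_level, new_level, new_disk_count):
    """Expected VD size after an RLM that keeps the per-disk segment size.

    Only the RAID1 -> RAID5 case from the demonstration is modeled.
    """
    if old_level == 1:
        segment_gb = vd_size_gb          # RAID1: a full copy on each disk
    else:
        raise NotImplementedError(old_level)
    if new_level == 5:
        # one segment's worth of parity spread across the members
        return (new_disk_count - 1) * segment_gb
    raise NotImplementedError(new_level)

# 200GB RAID1 VD migrated to a 4-disk RAID5, as in steps 4-5:
print(rlm_new_size_gb(200, 1, 5, 4))  # 600
```

Three data segments of 200GB each give the 600GB capacity that OMSA reports before the migration starts.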


Power management of physical disks:

Physical disk power management is the power-saving feature of the PERC H310, H710, H710P, and H810 cards. It allows disks to be spun down based on disk configuration and I/O activity, and is supported on all rotating SAS and SATA disks, whether unconfigured, configured, or hot spares.

The physical disk power management feature is disabled by default. It can be enabled using the Unified Extensible Firmware Interface (UEFI) RAID configuration utility or the Dell OpenManage storage management application.

There are four power saving modes available:

- No Power Savings (default mode): all power-saving features are disabled.
- Balanced Power Savings: spin-down is enabled only for unconfigured and hot spare disks.
- Maximum Power Savings: spin-down is enabled for configured, unconfigured, and hot spare disks.
- Customized Power Savings: all power-saving features can be customized. You can specify a quality-of-service time window during which configured disks are excluded from spin-down.
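The first three modes amount to a policy over disk classes. A hypothetical Python sketch of that policy (Customized mode is omitted, since its time window is user-defined; names are illustrative, not an OMSA API):

```python
# Which disk classes each power-saving mode may spin down,
# per the mode descriptions above.
SPIN_DOWN = {
    'No Power Savings': set(),
    'Balanced Power Savings': {'unconfigured', 'hot_spare'},
    'Maximum Power Savings': {'unconfigured', 'hot_spare', 'configured'},
}

def may_spin_down(mode, disk_class):
    """True if the given mode allows spinning down this class of disk."""
    return disk_class in SPIN_DOWN.get(mode, set())

print(may_spin_down('Balanced Power Savings', 'hot_spare'))    # True
print(may_spin_down('Balanced Power Savings', 'configured'))   # False
```

This matches the demonstration below: under Balanced Power Savings, the hot spare and idle (unconfigured) disks spin down, while the configured array members keep running.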

Let's do a demonstration of physical disk power management to help you understand how Dell PowerEdge server PERC implements physical disk power management to achieve energy-saving features.

1. First, a look at the demo machine's configuration: there are nine physical disks. ID=0,1 form VD0 (RAID1); ID=2~5 form VD1 (RAID5); ID=6 is the dedicated hot spare for the RAID5; ID=7 is a global hot spare; and ID=8 is an idle disk. There is no data I/O on disks ID=6~8, yet they spin like the other drives and consume power. We will now control these idle drives by setting Balanced Power Savings mode.

2. Boot into the operating system, run OMSA, enter the management interface, and click in the left navigation bar to open the physical disk list. Notice that the power status of drives ID=6~8 shows "Spun Up" (spinning at full speed), the same as the other online drives: no energy is being saved.

This is the complete view:

3. Click the Storage menu in the left column, select the corresponding PERC controller card on the right, choose "Manage Physical Disk Power" under "Available Tasks", and click Execute.

4. In the physical disk power management settings, select Balanced Power Savings mode and click Apply Changes.

5. Immediately after applying the setting, the physical disk list shows no change; the drives are still temporarily "Spun Up".

6. After the idle interval elapses (about 30 minutes), refresh the physical disk list: the three idle drives ID=6~8, whether hot spares or completely idle, now show a "Spun Down" power state, achieving the goal of server energy saving.

