How to use mdadm command to operate RAID in Linux


In this issue, the editor will show you how to use the mdadm command to operate RAID in Linux. The article is rich in content and analyzes the topic from a professional point of view. I hope you will get something out of it after reading this article.

mdadm is used to build, manage and monitor RAID arrays.

Usage:

mdadm --create device options...

Create a new array from unused devices.

mdadm --assemble device options...

Assemble a previously created array.

mdadm --build device options...

Create or assemble an array without metadata.

mdadm --manage device options...

Make changes to an existing array.

mdadm --misc options... devices

Report on or modify various MD-related devices.

mdadm --grow options device

Resize an active array.

mdadm --incremental device

Add a single device to, or remove it from, an array.

mdadm --monitor options...

Monitor changes on one or more RAID arrays.

mdadm device options...

Shorthand for --manage.

Main parameters of mdadm --create:

--auto=yes: automatically create the software array device node, i.e. /dev/md0, /dev/md1, ...

--raid-devices=N: number of disks (partitions) used as active devices in the array

--spare-devices=N: number of disks used as spare devices in the array

--level=[015]: sets the RAID level of the array; 0, 1 and 5 are the commonly used levels

Main parameters of mdadm --manage:

--add: the following device(s) will be added to the MD device

--remove: the following device(s) will be removed from the MD device

--fail: the following device will be marked as faulty
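As a combined illustration of these parameters, a minimal sketch of creating a three-disk RAID5 with one hot spare follows; the device names /dev/sdb1 through /dev/sde1 are assumed for illustration only:

The code is as follows:

mdadm --create --auto=yes /dev/md0 --level=5 --raid-devices=3 --spare-devices=1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1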

1. At present, software RAID in Linux is implemented as an MD (Multiple Devices) virtual block device: a new virtual device is built on top of multiple underlying block devices, and data blocks are distributed evenly across the disks by striping to improve the read/write performance of the virtual device. Different data-redundancy algorithms protect user data from being lost completely when one of the block devices fails, and after a failed device has been replaced, the lost data can be rebuilt onto the new device.

At present, MD supports several levels of redundancy, such as linear, multipath, raid0 (striping), raid1 (mirroring), raid4, raid5, raid6 and raid10. It can also cascade multiple RAID arrays to form raid10, raid51 and other composite arrays.

This article mainly explains how to manage software RAID from user space with mdadm, together with the problems and solutions encountered in use. Popular systems generally compile the MD driver directly into the kernel or build it as a dynamically loadable module. We can use cat /proc/mdstat to see whether the kernel has loaded the MD driver, check whether cat /proc/devices lists an md block device, and use lsmod to see whether the md module is loaded into the system.

The code is as follows:

[root@testggv ~]# cat /proc/mdstat
Personalities :
unused devices: <none>
[root@testggv ~]# cat /proc/devices | grep md
1 ramdisk
9 md
254 mdp
[root@testggv ~]# mdadm --version
mdadm - v2.5.4 - 13 October 2006
[root@testggv ~]#
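If the md module is present but not loaded, it can usually be loaded by hand. A sketch, assuming a modular kernel; the personality module names (raid0, raid1, raid10, raid456 and so on) vary with the kernel version:

The code is as follows:

# lsmod | grep md
# modprobe raid10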

2. Managing soft RAID arrays with mdadm

The mdadm program is a single independent program that can perform all software RAID management functions. It has 7 main modes of use:

Create

Create a new array with idle devices, each with metadata blocks

Assemble

Assemble each block device that originally belonged to an array into an array

Build

Create or assemble arrays that do not require metadata, with no metadata blocks per device

Manage

Manage devices that are already in the storage array, such as adding hot spare disks or setting a disk to fail, and then remove the disk from the array

Misc

Report or modify information about related devices in the array, such as querying the status information of the array or device

Grow

Change the capacity of each device in the array or the number of devices in the array

Monitor

Monitor one or more arrays and report specified events
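Note that Manage is the default mode when a device name is given first, so the two commands below are equivalent (a sketch; /dev/md0 and /dev/sdh are assumed for illustration):

The code is as follows:

mdadm --manage /dev/md0 --add /dev/sdh
mdadm /dev/md0 --add /dev/sdh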

If the MD driver is compiled into the kernel, then when the kernel runs the MD driver it automatically looks for partitions of type FD (Linux raid autodetect). Therefore, you will generally use the fdisk tool to partition the hd or sd disks and set the partition type to FD.

The code is as follows:

[root@testggv ~]# fdisk /dev/hdc
The number of cylinders for this disk is set to 25232.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-25232, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-25232, default 25232):
Using default value 25232

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
[root@testggv ~]#

If the MD driver is loaded as a module, a user-level script needs to start the RAID arrays while the system is booting. For example, in the Fedora Core system, the /etc/rc.d/rc.sysinit file contains instructions to start soft RAID arrays: if the RAID configuration file mdadm.conf exists, mdadm is called to check the options in the configuration file and then start the RAID arrays.

The code is as follows:

Echo "raidautorun / dev/md0" | nash-- quiet

If [- f / etc/mdadm.conf]; then

/ sbin/mdadm-A-s

Fi-A: refers to loading an existing display-s: refers to finding configuration information in the mdadm.conf file.

Stop the array manually:

The code is as follows:

# mdadm -S /dev/md0

Create a new array

mdadm uses the --create parameter (or its abbreviation -C) to create a new array, and writes the array's important identification information as metadata into a specified area of each underlying device.

--level (or its abbreviation -l) indicates the RAID level of the array

--chunk (or its abbreviation -c) indicates the size of each stripe unit, in KB; the default is 64KB. The chunk size has a great impact on the read/write performance of the array under different loads.

--raid-devices (or its abbreviation -n) indicates the number of active devices in the array

--spare-devices (or its abbreviation -x) indicates the number of hot spares in the array. Once a disk in the array fails, the MD kernel driver automatically adds a hot spare disk to the array and then reconstructs the data from the failed disk onto the hot spare.

Create a RAID 0 device:

The code is as follows:

mdadm --create /dev/md0 --level=0 --chunk=32 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1

Create a RAID1 device:

The code is as follows:

mdadm --create /dev/md0 --level=1 --chunk=128 --raid-devices=2 --spare-devices=1 /dev/sdb1 /dev/sdc1 /dev/sdd1

Create a RAID5 device:

The code is as follows:

mdadm --create /dev/md0 --level=5 --raid-devices=5 /dev/sd[c-g]1 --spare-devices=1 /dev/sdb1

Create a RAID 10 device:

The code is as follows:

mdadm -C /dev/md0 -l10 -n6 /dev/sd[b-g] -x1 /dev/sdh

Create a RAID1+0 (nested) device:

The code is as follows:

mdadm -C /dev/md0 -l1 -n2 /dev/sdb /dev/sdc
mdadm -C /dev/md1 -l1 -n2 /dev/sdd /dev/sde
mdadm -C /dev/md2 -l1 -n2 /dev/sdf /dev/sdg
mdadm -C /dev/md3 -l0 -n3 /dev/md0 /dev/md1 /dev/md2

The initialization time depends on the performance of the disks in the array and the read/write load of running applications. Use cat /proc/mdstat to query the current reconstruction speed and the expected completion time of the RAID array.

The code is as follows:

[root@fc5 mdadm-2.6.3]# cat /proc/mdstat
Personalities : [raid10]
md0 : active raid10 sdh[6](S) sdg[5] sdf[4] sde[3] sdd[2] sdc[1] sdb[0]
3145536 blocks 64K chunks 2 near-copies [6/6] [UUUUUU]
[===>.................] resync = 15.3% (483072/3145536) finish=0.3min speed=120768K/sec
unused devices: <none>
[root@fc5 mdadm-2.6.3]# cat /proc/mdstat
Personalities : [raid10]
md0 : active raid10 sdh[6](S) sdg[5] sdf[4] sde[3] sdd[2] sdc[1] sdb[0]
3145536 blocks 64K chunks 2 near-copies [6/6] [UUUUUU]
unused devices: <none>
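To follow the resync progress continuously instead of re-running the command by hand, one common approach (assuming the standard watch utility is installed) is:

The code is as follows:

# watch -n 1 cat /proc/mdstat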

Use the array:

MD devices can be read and written directly like ordinary block devices, or they can be formatted with a file system.

The code is as follows:

# mke2fs -j /dev/md0
# mkdir -p /mnt/md-test
# mount /dev/md0 /mnt/md-test
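To mount the file system automatically at boot, an entry can be added to /etc/fstab. A sketch, assuming the ext3 file system created by mke2fs -j above and the /mnt/md-test mount point:

The code is as follows:

/dev/md0 /mnt/md-test ext3 defaults 0 0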

Stop a running array:

When the array is not being used by a file system, another storage application or a higher-level device, you can use --stop (or its abbreviation -S) to stop the array. If the command returns a "device or resource busy" error, it means /dev/md0 is still in use by an upper-layer application and cannot be stopped for the time being; the upper-layer application must be stopped first, which also ensures the consistency of the data on the array.

The code is as follows:

[root@fc5 mdadm-2.6.3]# ./mdadm --stop /dev/md0
mdadm: fail to stop array /dev/md0: Device or resource busy
[root@fc5 mdadm-2.6.3]# umount /dev/md0
[root@fc5 mdadm-2.6.3]# ./mdadm --stop /dev/md0
mdadm: stopped /dev/md0

Assemble an array that has been created:

Assemble mode, --assemble (or its abbreviation -A), examines the metadata on the underlying devices and then assembles them into an active array. If we already know which devices the array consists of, we can specify which devices to use to start the array.

The code is as follows:

[root@fc5 mdadm-2.6.3]# ./mdadm -A /dev/md0 /dev/sd[b-h]
mdadm: /dev/md0 has been started with 6 drives and 1 spare.

If you have a configuration file (/etc/mdadm.conf), use the command mdadm -As /dev/md0. mdadm first checks the DEVICE entries in mdadm.conf, then reads the metadata from each device and checks it against the ARRAY entries, and starts the array if they match. If /etc/mdadm.conf is not configured and you do not know which disks the array consists of, you can use --examine (or its abbreviation -E) to detect whether there is array metadata on a given block device.

The code is as follows:

[root@fc5 mdadm-2.6.3]# ./mdadm -E /dev/sdi
mdadm: No md superblock detected on /dev/sdi.
[root@fc5 mdadm-2.6.3]# ./mdadm -E /dev/sdb
/dev/sdb:
Magic: a92b4efc
Version: 00.90.00
UUID: 0cabc5e5:842d4baa:e3f6261b:a17a477a
Creation Time: Sun Aug 22 17:49:53 1999
Raid Level: raid10
Used Dev Size: 1048512 (1024.11 MiB 1073.68 MB)
Array Size: 3145536 (3.00 GiB 3.22 GB)
Raid Devices: 6
Total Devices: 7
Preferred Minor: 0
Update Time: Sun Aug 22 18:05:56 1999
State: clean
Active Devices: 6
Working Devices: 7
Failed Devices: 0
Spare Devices: 1
Checksum: 2f056516 - correct
Events: 0.4
Layout: near=2, far=1
Chunk Size: 64K

Number Major Minor RaidDevice State
this 0 8 16 0 active sync /dev/sdb
0 0 8 16 0 active sync /dev/sdb
1 1 8 32 1 active sync /dev/sdc
2 2 8 48 2 active sync /dev/sdd
3 3 8 64 3 active sync /dev/sde
4 4 8 80 4 active sync /dev/sdf
5 5 8 96 5 active sync /dev/sdg
6 6 8 112 6 spare /dev/sdh

From the above command output, you can find the unique identifier (UUID) of the array and the device names it contains, and then assemble the array with the command above, or assemble it using the UUID. Devices whose metadata is not consistent with the array (such as /dev/sda, /dev/sda1, etc.) are automatically skipped by mdadm.

The code is as follows:

[root@fc5 mdadm-2.6.3]# ./mdadm -Av --uuid=0cabc5e5:842d4baa:e3f6261b:a17a477a /dev/md0 /dev/sd*
mdadm: looking for devices for /dev/md0
mdadm: no recogniseable superblock on /dev/sda
mdadm: /dev/sda has wrong uuid.
mdadm: no recogniseable superblock on /dev/sda1
mdadm: /dev/sda1 has wrong uuid.
mdadm: no RAID superblock on /dev/sdi
mdadm: /dev/sdi has wrong uuid.
mdadm: /dev/sdi1 has wrong uuid.
mdadm: no RAID superblock on /dev/sdj
mdadm: /dev/sdj has wrong uuid.
mdadm: /dev/sdj1 has wrong uuid.
mdadm: no RAID superblock on /dev/sdk
mdadm: /dev/sdk has wrong uuid.
mdadm: /dev/sdk1 has wrong uuid.
mdadm: /dev/sdb is identified as a member of /dev/md0, slot 0.
mdadm: /dev/sdc is identified as a member of /dev/md0, slot 1.
mdadm: /dev/sdd is identified as a member of /dev/md0, slot 2.
mdadm: /dev/sde is identified as a member of /dev/md0, slot 3.
mdadm: /dev/sdf is identified as a member of /dev/md0, slot 4.
mdadm: /dev/sdg is identified as a member of /dev/md0, slot 5.
mdadm: /dev/sdh is identified as a member of /dev/md0, slot 6.
mdadm: added /dev/sdc to /dev/md0 as 1
mdadm: added /dev/sdd to /dev/md0 as 2
mdadm: added /dev/sde to /dev/md0 as 3
mdadm: added /dev/sdf to /dev/md0 as 4
mdadm: added /dev/sdg to /dev/md0 as 5
mdadm: added /dev/sdh to /dev/md0 as 6
mdadm: added /dev/sdb to /dev/md0 as 0
mdadm: /dev/md0 has been started with 6 drives and 1 spare.

Configuration file:

/etc/mdadm.conf, as the default configuration file, is mainly used to make it easy to track the configuration of soft RAID, especially the monitoring and event-reporting options. Assemble mode can also use --config (or its abbreviation -c) to specify a configuration file. We can usually set up a configuration file with the following commands:

The code is as follows:

# echo DEVICE /dev/sdc1 /dev/sdb1 /dev/sdd1 > /etc/mdadm.conf
# mdadm --detail --scan >> /etc/mdadm.conf
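The resulting file might look roughly like the sketch below; the exact ARRAY fields vary with the mdadm version, and the level, device count and UUID shown here are illustrative only:

The code is as follows:

DEVICE /dev/sdc1 /dev/sdb1 /dev/sdd1
ARRAY /dev/md0 level=raid0 num-devices=3 UUID=0cabc5e5:842d4baa:e3f6261b:a17a477a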

When starting arrays from the configuration file, mdadm queries the configuration file for the devices and array definitions and then starts all the RAID arrays found. If you specify the device name of an array, only that array is started.

The code is as follows:

[root@fc5 mdadm-2.6.3]# ./mdadm -As
mdadm: /dev/md1 has been started with 3 drives.
mdadm: /dev/md0 has been started with 6 drives and 1 spare.
[root@fc5 mdadm-2.6.3]# cat /proc/mdstat
Personalities : [raid0] [raid10]
md0 : active raid10 sdb[0] sdh[6](S) sdg[5] sdf[4] sde[3] sdd[2] sdc[1]
3145536 blocks 64K chunks 2 near-copies [6/6] [UUUUUU]
md1 : active raid0 sdi1[0] sdk1[2] sdj1[1]
7337664 blocks 32k chunks
unused devices: <none>
[root@fc5 mdadm-2.6.3]# ./mdadm -S /dev/md0 /dev/md1
mdadm: stopped /dev/md0
mdadm: stopped /dev/md1
[root@fc5 mdadm-2.6.3]# ./mdadm -As /dev/md0
mdadm: /dev/md0 has been started with 6 drives and 1 spare.
[root@fc5 mdadm-2.6.3]# cat /proc/mdstat
Personalities : [raid0] [raid10]
md0 : active raid10 sdb[0] sdh[6](S) sdg[5] sdf[4] sde[3] sdd[2] sdc[1]
3145536 blocks 64K chunks 2 near-copies [6/6] [UUUUUU]
unused devices: <none>

Query the status of the array

We can view the status of all running RAID arrays through cat /proc/mdstat. The first line gives the device name of the MD device; active or inactive indicates whether the array can be read and written, followed by the RAID level of the array and then the block devices belonging to it, where the number in square brackets [] indicates the device's sequence number in the array, (S) indicates a hot spare and (F) indicates a faulty disk. The second line begins with the size of the array in KB, then the chunk size, then the layout type; different RAID levels have different layout types. [6/6] and [UUUUUU] indicate that the array has six disks and all six are operational, while [6/5] and [_UUUUU] indicate that five of the six disks in the array are operational and the disk at the underscore position is faulty.

The code is as follows:

[root@fc5 mdadm-2.6.3]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid1]
md0 : active raid5 sdh[6](S) sdg[5] sdf[4] sde[3] sdd[2] sdc[1] sdb[0]
5242560 blocks level 5, 64k chunk, algorithm 2 [6/6] [UUUUUU]
unused devices: <none>
[root@fc5 mdadm-2.6.3]# ./mdadm /dev/md0 -f /dev/sdh /dev/sdb
mdadm: set /dev/sdh faulty in /dev/md0
mdadm: set /dev/sdb faulty in /dev/md0
[root@fc5 mdadm-2.6.3]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid1]
md0 : active raid5 sdh[6](F) sdg[5] sdf[4] sde[3] sdd[2] sdc[1] sdb[7](F)
5242560 blocks level 5, 64k chunk, algorithm 2 [6/5] [_UUUUU]
unused devices: <none>

We can also view brief information (using --query, or its abbreviation -Q) and details (using --detail, or its abbreviation -D) of a specified array through the mdadm command, including the RAID version, creation time, RAID level, array capacity, available space, number of devices, superblock status, update time, UUID, the status of each device, the RAID algorithm and layout, and the chunk size. The device status can be active, sync, spare, faulty, rebuilding, removing, and so on.

The code is as follows:

[root@fc5 mdadm-2.6.3]# ./mdadm --query /dev/md0
/dev/md0: 2.100GiB raid10 6 devices, 1 spare. Use mdadm --detail for more detail.
[root@fc5 mdadm-2.6.3]# ./mdadm --detail /dev/md0
/dev/md0:
Version: 00.90.03
Creation Time: Sun Aug 22 17:49:53 1999
Raid Level: raid10
Array Size: 3145536 (3.00 GiB 3.22 GB)
Used Dev Size: 1048512 (1024.11 MiB 1073.68 MB)
Raid Devices: 6
Total Devices: 7
Preferred Minor: 0
Persistence: Superblock is persistent
Update Time: Sun Aug 22 21:55:02 1999
State: clean
Active Devices: 6
Working Devices: 7
Failed Devices: 0
Spare Devices: 1
Layout: near=2, far=1
Chunk Size: 64K
UUID: 0cabc5e5:842d4baa:e3f6261b:a17a477a
Events: 0.122

Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
1 8 32 1 active sync /dev/sdc
2 8 48 2 active sync /dev/sdd
3 8 64 3 active sync /dev/sde
4 8 80 4 active sync /dev/sdf
5 8 96 5 active sync /dev/sdg
6 8 112 - spare /dev/sdh

Manage Array

mdadm can add and remove disks in a running array in Manage mode. It is often used to mark failed disks, add spare (hot spare) disks, and remove failed disks from the array. Use --fail (or its abbreviation -f) to mark a disk as faulty.

The code is as follows:

[root@fc5 mdadm-2.6.3]# ./mdadm /dev/md0 --fail /dev/sdb
mdadm: set /dev/sdb faulty in /dev/md0

Use the --remove parameter (or its abbreviation -r) to remove a damaged disk from the array; the removal fails if the device is still in use by the array.

The code is as follows:

[root@fc5 mdadm-2.6.3]# ./mdadm /dev/md0 --remove /dev/sdb
mdadm: hot removed /dev/sdb
[root@fc5 mdadm-2.6.3]# ./mdadm /dev/md0 --remove /dev/sde
mdadm: hot remove failed for /dev/sde: Device or resource busy

If the array has a spare disk, the data on the damaged disk is automatically reconstructed onto the spare disk:

The code is as follows:

[root@fc5 mdadm-2.6.3]# ./mdadm -f /dev/md0 /dev/sdb; cat /proc/mdstat
mdadm: set /dev/sdb faulty in /dev/md0
Personalities : [raid0] [raid10]
md0 : active raid10 sdh[6] sdb[7](F) sdc[0] sdg[5] sdf[4] sde[3] sdd[2]
3145536 blocks 64K chunks 2 near-copies [6/5] [U_UUUU]
[=======>.............] recovery = 35.6% (373888/1048512) finish=0.1min speed=93472K/sec
unused devices: <none>

If the array does not have a hot spare disk, you can use the --add (or its abbreviation -a) parameter to add one.

The code is as follows:

[root@fc5 mdadm-2.6.3]# ./mdadm /dev/md0 --add /dev/sdh
mdadm: added /dev/sdh
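Putting the Manage-mode commands together, a typical disk-replacement sequence might look like the sketch below; the device names are assumed, and the physical disk is swapped between the remove and add steps:

The code is as follows:

# mdadm /dev/md0 --fail /dev/sdb --remove /dev/sdb
# mdadm /dev/md0 --add /dev/sdb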

Monitoring the array

You can use mdadm to monitor a RAID array: the monitor periodically queries whether specified events have occurred and then handles them according to the configuration. For example, when a disk device in the array has a problem, it can send an email to the administrator; or, when a disk fails, a callback program can automatically replace the disk; and all monitored events can be recorded in the system log. Events currently supported by mdadm are RebuildStarted, RebuildNN (NN is 20, 40, 60 or 80), RebuildFinished, Fail, FailSpare, SpareActive, NewArray, DegradedArray, MoveSpare, SparesMissing and TestMessage.

Below, the mdadm monitor process is configured to query the MD device every 300 seconds; when an error occurs in the array, an email is sent to the specified user, the event-handling program is executed, and the reported events are recorded in the system log file. The --daemonise parameter (or its abbreviation -f) keeps the program running in the background. Since sending email requires a running sendmail program, you should first test whether mail can be delivered when the address is configured as a public-network address.

The code is as follows:

[root@fc5 mdadm-2.6.3]# ./mdadm --monitor --mail=root@localhost --program=/root/md.sh --syslog --delay=300 /dev/md0 --daemonise
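The /root/md.sh handler above is site-specific. mdadm invokes the --program callback with the event name, the md device, and sometimes a related component device as arguments, so a minimal hypothetical handler could be:

The code is as follows:

#!/bin/sh
# Hypothetical mdadm --program handler: append each event to a log file.
# mdadm passes: $1 = event name, $2 = md device, $3 = component device (may be empty).
echo "$(date): event $1 on $2 $3" >> /var/log/md-events.log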

The above is how to use the mdadm command to operate RAID in Linux, as shared by the editor. If you happen to have similar doubts, you may refer to the above analysis for understanding. If you want to know more about it, you are welcome to follow the industry information channel.
