This article introduces how to use the mdadm command in Linux to manage RAID disk arrays. Many people have questions about this in day-to-day operation, so the following sorts out simple, practical methods that should help resolve them. Please follow along and study!
mdadm is the command used to create and manage software RAID under Linux, and it is a mode-based command. However, because servers generally ship with RAID array cards, which are also very cheap, and because of the drawbacks of software RAID (it cannot be used as a boot partition, and it is implemented on the CPU, reducing CPU availability), it is rarely used in production environments. Still, in order to learn and understand the principles and management of RAID, a detailed explanation follows:
The main mdadm command modes (7):
Assemble: assemble a previously defined array
Build: build an array without per-device superblocks
Create: create a new array with a superblock on each device
Manage: manage an array (such as adding and removing devices)
Misc: perform individual operations on a device in an array (such as stopping the array)
Follow or Monitor: monitor the state of a RAID array
Grow: change the capacity of an array or the number of devices in it
Options:
-A, --assemble: assemble a previously defined array
-B, --build: build a legacy array without superblocks
-C, --create: create a new array
-F, --follow, --monitor: select Monitor mode
-G, --grow: change the size or shape of an active array
-I, --incremental: add a single device into an appropriate array, possibly starting the array
--auto-detect: request the kernel to start any auto-detected arrays
-h, --help: display help; when used after one of the options above, shows mode-specific help
--help-options: display more detailed help
-V, --version: print the version information of mdadm
-v, --verbose: show details
-b, --brief: show fewer details; used with the --detail and --examine options
-Q, --query: examine a device and determine whether it is an md device or a component of an md array
-D, --detail: print the details of one or more md devices
-E, --examine: print the contents of the md superblock on a device
-c, --config=: specify the configuration file; the default is /etc/mdadm.conf
-s, --scan: scan the configuration file or /proc/mdstat for missing information; the configuration file is /etc/mdadm.conf
-C: create a RAID device (e.g. /dev/md0 as the RAID name)
-n: the number of disks in the array
-l: the RAID level; -x: the number of hot-spare (standby) disks
--size: specify the size to use from each disk
Options that are valid with management mode are:
--add, -a: hot-add the listed devices to the array
--remove, -r: remove the listed devices, which must not be active
--fail, -f: mark the listed devices as faulty
--set-faulty: same as --fail
--run, -R: start a partially built array
--stop, -S: deactivate the array, releasing all resources
--readonly, -o: mark the array as readonly
--readwrite, -w: mark the array as readwrite
Use the cat /proc/mdstat command to check the status of the RAID.
After configuration, run mdadm -D --scan > /etc/mdadm.conf to update the configuration file.
Before stopping an array, umount it first, then execute mdadm -S /dev/mdX.
To restart it, execute mdadm -As /dev/mdX.
Remove a disk from an array: mdadm /dev/mdX -r /dev/sdX
Add a disk to an array: mdadm /dev/mdX -a /dev/sdX
Examine a single partition: mdadm -E /dev/sdX
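For instance, a minimal stop-and-restart sketch of the commands above (the mount point /mnt/raid is a placeholder for wherever the array is mounted):
The code is as follows:
umount /mnt/raid # unmount the filesystem first
mdadm -S /dev/md0 # stop the array and release its resources
mdadm -As /dev/md0 # reassemble it from /etc/mdadm.conf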
Let's take a look at an example:
First, create mode
Option: -C
Dedicated options:
-l: RAID level
-n: number of devices
-a {yes|no}: automatically create the device file for the array
-c: specify the chunk (stripe unit) size
-x: specify the number of spare (hot-spare) disks, which automatically replace a working disk when it is damaged
Note: when creating an array, the number of disks required is the sum of the -n and -x values, as the sketch after this note shows.
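To illustrate the note (an added sketch, not part of the original walkthrough; the sdc partitions are hypothetical): a raid5 with three working disks and one hot spare needs four disks in total:
The code is as follows:
mdadm -C /dev/md5 -a yes -l 5 -n 3 -x 1 /dev/sdc{1,2,3,4} # -n 3 working + -x 1 spare = 4 partitions supplied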
Example:
1. Create a raid0:
1.1 Create raid
The code is as follows:
mdadm -C /dev/md0 -a yes -l 0 -n 2 /dev/sdb{1,2}
Note: the disk partitions used to create a RAID should have partition type fd (Linux raid autodetect).
1.2 formatting:
mkfs.ext4 /dev/md0
Note: when formatting, you can specify the stride parameter under the -E option, which tells the filesystem how many blocks make up one chunk; this can improve software RAID performance to some extent. For example, with the default block size of 4k and the default chunk size of 64k, the stride is 16. This saves the RAID from recalculating the stripe layout on every data access, for example:
mkfs.ext4 -E stride=16 -b 4096 /dev/md0
Here stride = chunk size / block size, both sizes being powers of 2.
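As a worked illustration (an addition; the stripe_width parameter is not mentioned in the original): with a 512k chunk, such as the one shown later in /proc/mdstat, stride = 512k / 4k = 128, and for a raid5 with three data disks mkfs.ext4 also accepts stripe_width = stride x number of data disks:
The code is as follows:
# assumed: 512k chunk, 4k block -> stride = 512 / 4 = 128
# assumed: raid5 with 3 data disks -> stripe_width = 128 * 3 = 384
mkfs.ext4 -b 4096 -E stride=128,stripe_width=384 /dev/mdX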
2. Create a raid1:
2.1 Create raid
The code is as follows:
[root@localhost ~]# mdadm -C /dev/md1 -a yes -n 2 -l 1 /dev/sdb{5,6}
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
Note: the warning means that an array with v1.x metadata may not be usable as a boot partition unless the boot loader understands that metadata format.
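If the array really must hold /boot, a sketch of the workaround the warning itself suggests (illustrative; not run in this walkthrough) is to create the array with the older metadata format:
The code is as follows:
mdadm -C /dev/md1 --metadata=0.90 -a yes -n 2 -l 1 /dev/sdb{5,6} # 0.90 metadata sits at the end of the device, so boot loaders can read it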
2.2 formatting:
The code is as follows:
[root@localhost ~]# mkfs.ext4 /dev/md1
3. Create a raid5:
Because I was out of disk space, I deleted the test partitions used for raid1 and created four new partitions, sdb5 through sdb8, for the raid5 test.
3.1 Create raid5
The code is as follows:
[root@localhost ~]# mdadm -C /dev/md2 -a yes -l 5 -n 3 /dev/sdb{5,6,7}
mdadm: /dev/sdb5 appears to be part of a raid array:
level=raid1 devices=2 ctime=Sun Jul 14 09:14:25 2013
mdadm: /dev/sdb6 appears to be part of a raid array:
level=raid1 devices=2 ctime=Sun Jul 14 09:14:25 2013
mdadm: /dev/sdb7 appears to be part of a raid array:
level=raid1 devices=2 ctime=Sun Jul 14 09:14:25 2013
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md2 started.
Note: this prompt appears because these partitions were just used in the raid1 array.
3.2 formatting:
[root@localhost ~]# mkfs.ext4 /dev/md2
3.3 add hot spare disks:
[root@localhost ~]# mdadm /dev/md2 -a /dev/sdb8
4. Check the md status:
4.1 View the details of the RAID array:
The code is as follows:
Option: -D = --detail
mdadm -D /dev/md# : view the details of the specified RAID device
4.2 View raid status
The code is as follows:
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid0] [raid1]
md0 : active raid0 sdb2[1] sdb1[0]
4206592 blocks super 1.2 512k chunks
md1 : active raid1 sdb6[1] sdb5[0]
2103447 blocks super 1.2 [2/2] [UU]
unused devices: <none>
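When an array is resyncing or reshaping, /proc/mdstat also shows a progress bar; a handy way to follow it (an added tip, not from the original) is:
The code is as follows:
watch -n 1 cat /proc/mdstat # refresh the status every second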
Note: before creating a RAID, check whether the kernel has recognized the disk partitions; if it has not, an error will be reported when creating the RAID:
The code is as follows:
cat /proc/partitions
If it is not recognized, you can execute the command:
The code is as follows:
kpartx /dev/sdb
Or
The code is as follows:
partprobe /dev/sdb
Second, management mode
Options: -a (--add), -d (--del), -r (--remove), -f (--fail)
1. Simulated damage:
The code is as follows:
mdadm /dev/md1 -f /dev/sdb5
2. Remove the damaged disk:
The code is as follows:
mdadm /dev/md1 -r /dev/sdb5
3. Add a new hard disk to the existing array:
The code is as follows:
mdadm /dev/md1 -a /dev/sdb7
Note:
3.1. The newly added disk must be the same size as the original disks.
3.2. If the array is short of working disks (for example, a raid1 with only one working disk, or a raid5 with only two), the newly added disk directly becomes a working disk; if the array is working properly, the newly added disk becomes a hot spare. A combined sketch follows these notes.
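Putting steps 1-3 together (a sketch; mdadm accepts chained manage options, so fail and remove can be issued in one command):
The code is as follows:
mdadm /dev/md1 -f /dev/sdb5 -r /dev/sdb5 # mark the disk faulty, then remove it
mdadm /dev/md1 -a /dev/sdb7 # add the replacement disk
cat /proc/mdstat # watch the resync progress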
4. Stop the array:
Option: -S = --stop
The code is as follows:
mdadm -S /dev/md1
Third, monitoring mode
Option: -F
It is not commonly used, so it is not elaborated on here beyond the brief sketch below.
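For completeness, a minimal sketch (an addition, not from the original) of running monitor mode as a daemon that emails alerts; the address and interval are placeholders:
The code is as follows:
mdadm --monitor --scan --daemonise --delay=300 --mail=root@localhost
# --scan monitors all arrays listed in /etc/mdadm.conf; --delay is the polling interval in seconds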
Fourth, grow mode, which is used to add disks and expand the capacity of the array:
Option: -G
For example, promote the hot-spare disk of the raid5 above to a working disk in the array:
The code is as follows:
[root@localhost ~]# mdadm -G /dev/md2 -n 4
Note: -n 4 means the array now uses four working disks
Use the-D option again to view the array details as follows:
The code is as follows:
[root@localhost ~]# mdadm -D /dev/md2
…… (some of the output is omitted here)
Number Major Minor RaidDevice State
0 8 21 0 active sync /dev/sdb5
1 8 22 1 active sync /dev/sdb6
3 8 23 2 active sync /dev/sdb7
4 8 24 3 active sync /dev/sdb8
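One caveat the walkthrough does not mention: growing the array only enlarges the block device; the ext4 filesystem created earlier must be expanded separately once the reshape finishes. A sketch:
The code is as follows:
resize2fs /dev/md2 # grow the ext4 filesystem to fill the enlarged array
df -h # verify the new capacity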
Fifth, assemble mode. Software RAID depends on the operating system, so when the original system is damaged, the RAID needs to be reassembled:
Option: -A
Example: reassemble the array that was stopped above:
The code is as follows:
mdadm -A /dev/md1 /dev/sdb5 /dev/sdb6
Achieve automatic assembly:
At runtime, mdadm automatically checks the /etc/mdadm.conf file and attempts to assemble the arrays it describes, so after configuring RAID for the first time you can export the array information into /etc/mdadm.conf, as follows:
The code is as follows:
[root@localhost ~]# mdadm -Ds > /etc/mdadm.conf
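The exported file contains one ARRAY line per array; an illustrative example (the UUID is a placeholder, and yours will differ):
The code is as follows:
ARRAY /dev/md1 metadata=1.2 name=localhost:1 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx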
This concludes the study of how to use the mdadm command in Linux to manage RAID disk arrays. I hope it has resolved your doubts; pairing theory with practice is the best way to learn, so go and try it!