This article shows how to build a RAID 1 disk array under Linux. The steps are straightforward and easy to follow; work through them to learn how to create, manage, and remove a RAID 1 array with mdadm.
RAID 1 (mirroring) stores identical copies of the data on two or more hard drives. Reads can be served from any of the mirrors, which can improve read performance, and the array keeps working if one disk fails; the trade-off is that the usable capacity equals that of a single disk, so it is more expensive per gigabyte than a single drive.
System information
CentOS 8
RAID disks:
Device      Size
/dev/sda    20GB
/dev/sdb    20GB
/dev/sdc    20GB
/dev/sdd    20GB
Install mdadm:
[root@localhost ~]# yum -y install mdadm
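As an optional sanity check (not part of the original steps), you can confirm that mdadm installed correctly and that the four disks are visible to the system:
[root@localhost ~]# mdadm --version
[root@localhost ~]# lsblk /dev/sd[a-d]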
Create the RAID 1 array
First, partition the four disks /dev/sd[a-d], giving the first partition of each disk 2 GB of space for the RAID 1 array. The partitioning process is not shown here. Create the RAID 1 array as follows:
[root@localhost ~]# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
The parameters above are explained as follows:
--create / -C: create a new array
--level= / -l: specify the RAID level; raid 0, 1, 4, 5, 6, and 10 are currently supported
--raid-devices= / -n: specify the number of disks
If you need to check the RAID configuration, execute the following command:
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[0]
      2094080 blocks super 1.2 [2/2] [UU]
unused devices: <none>
The RAID configuration is not stored in a configuration file by default, so the array may not be assembled as expected after a reboot. Create a configuration file and add the array information to it:
[root@localhost ~]# mdadm --detail --scan > /etc/mdadm.conf
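If you want to see what was written (an optional check), the file should contain an ARRAY line for /dev/md0; the UUID will be specific to your system:
[root@localhost ~]# cat /etc/mdadm.conf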
Format the newly created /dev/md0 device as an XFS file system and mount it:
[root@localhost ~]# mkdir /data
[root@localhost ~]# mkfs.xfs /dev/md0
meta-data=/dev/md0               isize=512    agcount=4, agsize=130880 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1
data     =                       bsize=4096   blocks=523520, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@localhost ~]# mount /dev/md0 /data/
Write the mount entry to /etc/fstab so that the filesystem is mounted automatically at boot:
[root@localhost ~]# blkid | grep md0
/dev/md0: UUID="ccdef7f5-2b39-4fa1-96cd-e3c0dbbc32d9" TYPE="xfs"
[root@localhost ~]# echo 'UUID=ccdef7f5-2b39-4fa1-96cd-e3c0dbbc32d9 /data xfs defaults 0 0' >> /etc/fstab
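Before relying on the reboot, you can verify that the new fstab entry parses and mounts correctly (an optional check using standard tools):
[root@localhost ~]# umount /data
[root@localhost ~]# mount -a
[root@localhost ~]# findmnt /data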
Let's test the RAID 1 array by writing a file under /data:
[root@localhost data]# dd if=/dev/zero of=/data/test.img bs=1M count=600
600+0 records in
600+0 records out
629145600 bytes (629 MB, 600 MiB) copied, 1.92519 s, 327 MB/s
Restart the system and check that the RAID array is mounted automatically:
# reboot
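After the system comes back up, a quick way to confirm that the array was reassembled and /data was mounted automatically is (optional):
[root@localhost ~]# cat /proc/mdstat
[root@localhost ~]# df -h /data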
Enable and deactivate the RAID array
Use mdadm -S/--stop to stop the array and mdadm -A/--assemble to start it:
[root@localhost ~]# umount /data
[root@localhost ~]# mdadm --stop /dev/md0
mdadm: stopped /dev/md0
[root@localhost ~]# mdadm --assemble /dev/md0
mdadm: /dev/md0 has been started with 2 drives.
Add disks to the array
Now, let's add another disk, /dev/sdc, to the existing array using the following command:
[root@localhost ~]# mdadm --manage /dev/md0 --add /dev/sdc1
mdadm: added /dev/sdc1
[root@localhost ~]# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Thu Mar 11 21:51:38 2021
        Raid Level : raid1
        Array Size : 2094080 (2045.00 MiB 2144.34 MB)
     Used Dev Size : 2094080 (2045.00 MiB 2144.34 MB)
      Raid Devices : 2
     Total Devices : 3
       Persistence : Superblock is persistent
       Update Time : Fri Mar 12 11:28:37 2021
             State : clean
    Active Devices : 2
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 1
Consistency Policy : resync
              Name : localhost.localdomain:0  (local to host localhost.localdomain)
              UUID : 428966f1:c78ce423:e3559739:a8c6048e
            Events : 20
    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        -      spare         /dev/sdc1
You can see that the newly added disk has spare status. If an active disk fails, the spare automatically takes its place and becomes active.
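If a failure later triggers a rebuild onto the spare, the resynchronization progress can be monitored with, for example (an optional command, not part of the original steps):
[root@localhost ~]# watch -n 1 cat /proc/mdstat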
Next, expand the RAID 1 array to three active hard drives, that is, promote /dev/sdc1 from hot spare to active:
[root@localhost ~]# mdadm --grow /dev/md0 --raid-devices=3
raid_disks for /dev/md0 set to 3
[root@localhost ~]# mdadm -D /dev/md0
You can see in the output above that the number of active devices has changed from 2 to 3, and the disk that was a hot spare is now in the active sync state. The RAID 1 array now consists of three disks.
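If you only want the relevant counters rather than the full report, a quick filter such as the following works (optional):
[root@localhost ~]# mdadm -D /dev/md0 | grep -E 'Raid Devices|Active Devices|Spare Devices'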
Remove disks from the array
There are now three active disks in the RAID 1 array. Let's remove the disk /dev/sdc1 and replace it with a new disk, /dev/sdd1. First, simulate a failure of /dev/sdc1:
[root@localhost ~]# mdadm --manage /dev/md0 --fail /dev/sdc1
mdadm: set /dev/sdc1 faulty in /dev/md0
[root@localhost ~]# mdadm -D /dev/md0
You can see that /dev/sdc1 is now marked as faulty. Next, remove /dev/sdc1 from md0:
[root@localhost ~]# mdadm --manage /dev/md0 --remove /dev/sdc1
mdadm: hot removed /dev/sdc1 from /dev/md0
[root@localhost ~]# cat /proc/mdstat
[root@localhost ~]# mdadm -D /dev/md0
Now add the /dev/sdd1 disk to the array, and be sure to update the /etc/mdadm.conf configuration file afterwards:
[root@localhost ~]# mdadm --manage /dev/md0 --add /dev/sdd1
mdadm: added /dev/sdd1
[root@localhost ~]# cat /proc/mdstat
[root@localhost ~]# mdadm -D /dev/md0
[root@localhost ~]# mdadm --detail --scan > /etc/mdadm.conf
Mdadm parameter explanation:
--manage: manage an array
--detail / -D: print the details of an md device
--scan / -s: used together with -D to get a list of md devices
--stop / -S: stop an array
--assemble / -A: activate an existing array
--add / -a: add a disk to the array
--remove / -r: remove a disk from the array
--fail / -f: mark a disk as failed (simulate a disk failure)
--grow / -G: change the size of the array or the number of active disks
Delete the RAID configuration
The following commands delete the RAID 1 configuration:
[root@localhost ~]# umount /data
[root@localhost ~]# mdadm -S /dev/md0
mdadm: stopped /dev/md0
[root@localhost ~]# rm -rf /etc/mdadm.conf
Then delete the mount entry in /etc/fstab.
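For example, the entry added earlier can be removed by deleting the line containing its UUID (the UUID below is the one used in this walkthrough; substitute your own):
[root@localhost ~]# sed -i '/ccdef7f5-2b39-4fa1-96cd-e3c0dbbc32d9/d' /etc/fstab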
Finally, erase the RAID signature information from the four hard drives. The --zero-superblock option overwrites the md superblock with zeros if the device contains a valid one.
[root@localhost ~]# mdadm --zero-superblock /dev/sd[a-d]1
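To confirm the superblocks are gone (an optional check), mdadm --examine should now report that no md superblock is found on each partition:
[root@localhost ~]# mdadm --examine /dev/sd[a-d]1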
That covers how to build a RAID 1 disk array under Linux. Thank you for reading; I hope this walkthrough helps you set up and manage your own arrays.