
Configuration of RAID in Linux, explained in detail


Blog outline: what RAID is; software and hardware RAID; setting up a software disk array; simulating RAID errors and rescue; starting the RAID automatically at boot and mounting it automatically.

1. What is RAID:

1. The full name of the disk array is "Redundant Arrays of Inexpensive Disks (RAID)", meaning a fault-tolerant array of inexpensive disks.

2. Through a technology (software or hardware), RAID can combine multiple smaller disks into one larger disk device; and this larger device does not just provide storage, it also provides data protection.

3. A RAID behaves differently depending on the level chosen.

RAID-0 (stripe mode): best performance

This mode works best when it is made up of disks of the same model and capacity.

RAID in this mode first divides each disk into blocks of equal size (named chunks, which can be set between 4 KB and 1 MB). When a file is written to the RAID, it is cut into pieces of the chunk size, and the pieces are placed on each disk in turn.

Because the data is interleaved across the disks, any data written to the RAID is spread over all the disks in equal amounts.
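As a minimal sketch of this mode (assuming two unused partitions /dev/sdb1 and /dev/sdc1 and a free md device; these names are illustrative, not from the original), a RAID-0 array could be created with mdadm like this:

[root@localhost ~]# mdadm --create --auto=yes /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1    # stripe the two partitions into md0
[root@localhost ~]# cat /proc/mdstat    # md0 should appear as "active raid0"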

RAID-1 (mirror mode): full backup

This mode also requires disks of the same capacity, preferably identical disks.

If disks of different capacities make up a RAID-1, the total capacity is limited by the smallest disk! The main purpose of this mode is to "keep the same data intact on both disks".

For example, if I have a 100 MB file and only two disks making up a RAID-1, both disks will synchronously write the same 100 MB to their storage space. As a result, the usable capacity of the RAID is only half of the raw capacity. Because the contents of the two drives are exactly the same, as in a mirror, this is also called mirror mode.

Because the data on the two disks is exactly the same, your data survives intact when either disk fails.

Greatest advantage: complete backup.
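A mirror is created the same way as the stripe above, only the level changes; a hedged sketch, again with illustrative partition names:

[root@localhost ~]# mdadm --create --auto=yes /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1    # mirror the two partitions
[root@localhost ~]# mdadm -D /dev/md0    # both members should report "active sync"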

RAID 1+0 and RAID 0+1 (combined modes)

RAID-0 performs well but its data is not safe, while RAID-1 keeps data safe but performs poorly. So can the two be combined into one RAID?

RAID 1+0 is:

(1) First, let two disks form a RAID 1, and build two such sets.

(2) Then combine the two RAID 1 sets into one RAID 0. This is RAID 1+0.

RAID 0+1 is:

(1) First, let two disks form a RAID 0, and build two such sets.

(2) Then combine the two RAID 0 sets into one RAID 1. This is RAID 0+1.
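Rather than building the two sets by hand, mdadm can produce the nested layout in one step as RAID-10; a sketch assuming four unused partitions /dev/sd[b-e]1:

[root@localhost ~]# mdadm --create --auto=yes /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]1    # striped mirrors in a single array

The md driver handles the mirroring and striping internally, so the result behaves like the RAID 1+0 described above.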

RAID 5: a balance between performance and data backup

RAID-5 requires at least three disks to form this type of disk array.

The way this array writes data is somewhat similar to RAID-0, but during each write cycle (striping) a parity check block (parity) is also written, which records redundancy information for the other disks, for rescue in the event of disk failure.

Each write cycle records a parity check block (parity), and the parity block is placed on a different disk each cycle. Therefore, when any one disk is damaged, its data can be reconstructed from the check blocks on the other disks. It is important to note, however, that because of the parity blocks, the total capacity of a RAID 5 is the total number of disks minus one.

So 3 disks yield only the capacity of 2 disks (3 - 1).

When two or more disks are damaged, the data of the whole RAID 5 set is destroyed, because by default RAID 5 tolerates the failure of only one disk.

Read performance: excellent. Write performance: average. RAID 5: tolerates 1 failed disk. RAID 6: tolerates 2 failed disks.
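RAID 6 is created just like RAID 5, only with two parity blocks per stripe; a sketch (device names are again illustrative) using its minimum of four disks:

[root@localhost ~]# mdadm --create --auto=yes /dev/md0 --level=6 --raid-devices=4 /dev/sd[b-e]1    # usable capacity of (4-2) disks; survives two failures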

Spare disk (hot spare) function: To let the system rebuild automatically and in real time when a disk fails, a spare disk is needed. A spare disk is one or more disks that are not included in the original disk array; the array does not normally use them. When any disk in the array is damaged, the spare disk is automatically pulled into the array and the broken disk is kicked out of it! The data is then rebuilt immediately.

When a disk in the array has failed, you unplug the broken disk and replace it with a new one.
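When a spare takes over, the rebuild can be observed live in /proc/mdstat; a small sketch:

[root@localhost ~]# watch -n 1 cat /proc/mdstat    # refresh every second; a "recovery = ...%" line shows the rebuild progress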

The advantages of a disk array:

Data safety and reliability: this does not refer to network security, but to whether the data can be rescued or kept usable when the hardware (the disks) is damaged.

Read/write performance: RAID 0, for example, improves read/write performance, boosting the I/O side of your system.

Capacity: multiple disks can be combined, so a single file system can have very large capacity.

Software RAID vs. hardware RAID:

A software disk array emulates the array in software, so it consumes system resources, such as CPU computation and I/O bus resources. But personal computers today are so fast that these old speed limits no longer exist!

CentOS provides software RAID through the mdadm package. mdadm works with partitions or whole disks as its units, so you do not need two or more disks; as long as you have two or more partitions, you can build your disk array.

In addition, mdadm supports RAID0/RAID1/RAID5, spare disks, and the other features just mentioned! Its management interface also offers something like hot-swapping: partitions can be swapped in and out while the array is online (while the file system is in normal use).
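On CentOS 7, mdadm is usually preinstalled; if it is missing, it can be installed from the base repository (a sketch):

[root@localhost ~]# yum install -y mdadm    # install the software RAID tool
[root@localhost ~]# mdadm --version         # confirm it is available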

Settings for the software disk array:

Configure RAID (add one hard drive to the server and create 5 partitions on it; alternatively, add 5 separate hard drives):

1. Partition the disk:

[root@localhost ~]# gdisk /dev/sdb             // open the disk
Creating new GPT entries.
Command (? for help): n                        // create a partition
Partition number (1-128, default 1): 1         // default is 1
First sector (34-41943006, default = 2048) or {+-}size{KMGTP}:
Last sector (2048-41943006, default = 41943006) or {+-}size{KMGTP}: +1G    // size is 1G
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300):
Changed type of partition to 'Linux filesystem'

Repeat the steps above to create the remaining 4 partitions, then check the result:

Command (? for help): p                        // view the partition table
Disk /dev/sdb: 41943040 sectors, 20.0 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 6A978C77-4505-4345-ABEC-AE3C31214C6D
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 41943006
Partitions will be aligned on 2048-sector boundaries
Total free space is 31457213 sectors (15.0 GiB)

Number  Start (sector)    End (sector)  Size        Code  Name
   1            2048         2099199    1024.0 MiB  8300  Linux filesystem
   2         2099200         4196351    1024.0 MiB  8300  Linux filesystem
   3         4196352         6293503    1024.0 MiB  8300  Linux filesystem
   4         6293504         8390655    1024.0 MiB  8300  Linux filesystem
   5         8390656        10487807    1024.0 MiB  8300  Linux filesystem

Command (? for help): wq                       // save and exit

2. Create the array:

[root@localhost ~]# mdadm --create --auto=yes /dev/md0 --level=5 --raid-devices=3 --spare-devices=1 /dev/sd[b-e]1

Explanation of the arguments:
--create             # create a RAID
--auto=yes /dev/md0  # the new software RAID device is md0; the md number can be 0-9
--level=5            # the RAID level, here RAID 5
--raid-devices=3     # the number of disks used as active members of the array
--spare-devices=1    # the number of disks used as spares
/dev/sd[b-e]1        # the devices used by the array; you can also write /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

[root@localhost ~]# cat /proc/mdstat           # view the RAID status
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd1[4] sde1[3](S) sdc1[1] sdb1[0]
      41908224 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>

[root@localhost ~]# mdadm -D /dev/md0          # view the array details
/dev/md0:
        Version : 1.2
  Creation Time : Sun Jun 30 10:43:20 2019
     Raid Level : raid5                        # the array level is raid5
......                                         # part of the output omitted
 Active Devices : 3                            # number of active disks
Working Devices : 4                            # number of all working disks
 Failed Devices : 0                            # number of failed disks
  Spare Devices : 1                            # number of hot spare disks

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       4       8       49        2      active sync   /dev/sdd1
       3       8       65        -      spare         /dev/sde1   # one disk serves as the hot spare

3. Format, mount, and test:

[root@localhost ~]# mkfs.xfs /dev/md0          # format the array
[root@localhost ~]# mkdir /a
[root@localhost ~]# mount /dev/md0 /a          # mount the array
[root@localhost ~]# df -hT                     # check the size
......                                         # part of the output omitted
/dev/md0       xfs        40G   33M   40G   1% /a

[root@localhost ~]# vim /etc/fstab             # add a line so the array mounts automatically at boot
......                                         # other lines omitted
/dev/md0  /a  xfs  defaults  0 0

[root@localhost ~]# cd /a
[root@localhost a]# touch 123.txt 456.txt      # create test files

4. Simulate a disk failure:

[root@localhost a]# mdadm /dev/md0 -f /dev/sdb1    # simulate sdb1 failing
mdadm: set /dev/sdb1 faulty in /dev/md0
[root@localhost a]# mdadm -D /dev/md0              # view the details of /dev/md0
......                                             # part of the output omitted
    Number   Major   Minor   RaidDevice State
       3       8       65        0      spare rebuilding   /dev/sde1
       1       8       33        1      active sync        /dev/sdc1
       4       8       49        2      active sync        /dev/sdd1
       0       8       17        -      faulty             /dev/sdb1

[root@localhost a]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd1[4] sde1[3] sdc1[1] sdb1[0](F)
      41908224 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>

[root@localhost a]# ll                             # the test files are still intact
-rw-r--r--. 1 root root 0 Jun 30 11:06 123.txt
-rw-r--r--. 1 root root 0 Jun 30 11:06 456.txt

[root@localhost a]# mdadm /dev/md0 -r /dev/sdb1    # remove the damaged disk
mdadm: hot removed /dev/sdb1 from /dev/md0
[root@localhost a]# mdadm -D /dev/md0              # view the details of /dev/md0
......                                             # part of the output omitted
    Number   Major   Minor   RaidDevice State
       3       8       65        0      active sync   /dev/sde1
       1       8       33        1      active sync   /dev/sdc1
       4       8       49        2      active sync   /dev/sdd1

[root@localhost a]# mdadm /dev/md0 -a /dev/sdb1    # add a disk back into the array
mdadm: added /dev/sdb1
[root@localhost a]# mdadm -D /dev/md0              # view the details of /dev/md0
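The outline also covers starting the RAID automatically at boot. Besides the /etc/fstab entry above, a common additional step (a sketch, not part of the original transcript) is to record the array in /etc/mdadm.conf so it is assembled under the same name on every boot:

[root@localhost a]# mdadm --detail --scan >> /etc/mdadm.conf    # append an ARRAY line describing /dev/md0
[root@localhost a]# cat /etc/mdadm.conf                         # verify the ARRAY line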

Add another disk (sdf) to the server, reboot, and then add it to the array:

[root@localhost a]# mdadm /dev/md0 -a /dev/sdf1
mdadm: added /dev/sdf1
[root@localhost a]# mdadm -D /dev/md0
......                                             # part of the output omitted
    Number   Major   Minor   RaidDevice State
       3       8       65        0      active sync   /dev/sde1
       1       8       33        1      active sync   /dev/sdc1
       4       8       49        2      active sync   /dev/sdd1
       5       8       17        -      spare         /dev/sdb1
       6       8       81        -      spare         /dev/sdf1

[root@localhost a]# mdadm /dev/md0 -G -n4          # -G grows the array; -n specifies the number of active disks in the raid. Make sure there are enough hot spares before growing.
[root@localhost a]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sun Jun 30 10:43:20 2019
     Raid Level : raid5
     Array Size : 41908224 (39.97 GiB 42.91 GB)    # the array capacity is about to change
  Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
   Raid Devices : 4
  Total Devices : 5
    Persistence : Superblock is persistent
    Update Time : Sun Jun 30 11:22:00 2019
          State : clean                            # the rebuild is complete
......                                             # part of the output omitted
    Number   Major   Minor   RaidDevice State
       3       8       65        0      active sync   /dev/sde1
       1       8       33        1      active sync   /dev/sdc1
       4       8       49        2      active sync   /dev/sdd1
       6       8       81        3      active sync   /dev/sdf1
       5       8       17        -      spare         /dev/sdb1   # the raid now has four active disks

[root@localhost a]# df -hT                         # the file system capacity has not changed yet
......                                             # part of the output omitted
/dev/md0       xfs        40G   33M   40G   1% /a

[root@localhost a]# resize2fs /dev/md0             # resize2fs applies to ext file systems such as ext3/ext4; it does not work on xfs
resize2fs 1.42.9 (28-Dec-2013)
resize2fs: Bad magic number in super-block while trying to open /dev/md0
Couldn't find valid filesystem superblock.

[root@localhost a]# xfs_growfs /dev/md0            # expand the xfs file system instead
meta-data=/dev/md0               isize=512    agcount=16, agsize=654720 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=10475520, imaxpct=25
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=5120, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 10475520 to 15715584

[root@localhost a]# df -hT
Filesystem     Type      Size  Used Avail Use% Mounted on
......                                             # part of the output omitted
/dev/md0       xfs        60G   33M   60G   1% /a  # the capacity has now changed
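As a final check (a sketch), the array size and the mounted file system size can be confirmed to agree:

[root@localhost a]# mdadm -D /dev/md0 | grep 'Array Size'    # the md device's capacity after the grow
[root@localhost a]# df -hT /a                                # the xfs file system after xfs_growfs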
