Detailed explanation and configuration of RAID 5 of Centos 7

1. What is RAID:

The full name of a disk array is "Redundant Arrays of Inexpensive Disks" (RAID), that is, a fault-tolerant array of inexpensive disks. Through a technique (software or hardware), RAID combines several smaller disks into one larger disk device; and this larger device does not just provide storage, it also provides data protection. The behaviour of the whole RAID differs according to the level chosen.

1. RAID-0 (striped mode, stripe): best performance

This mode works best when it is made up of disks of the same model and capacity. RAID in this mode first divides each disk into equal-sized chunks (named chunk, usually configurable between 4K and 1M). When a file is written to the RAID, it is cut up according to the chunk size and the pieces are placed on each disk in turn. Because the data is interleaved across the disks, each disk ends up holding an equal share of the data whenever you write to the RAID.
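A minimal sketch of building a two-partition RAID-0 with mdadm, purely for illustration; the device name /dev/md9 and the partitions /dev/sdc1 and /dev/sdc2 are hypothetical and not part of the walkthrough later in this article:

[root@raid5 /]# mdadm --create /dev/md9 --auto=yes --level=0 --chunk=256K --raid-devices=2 /dev/sdc1 /dev/sdc2   # stripe two partitions together
[root@raid5 /]# cat /proc/mdstat    # the new array should appear as an active raid0 device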

2. RAID-1 (mirror mode, mirror): full backup

This mode also works best with disks of the same capacity, preferably identical disks! If disks of different capacities make up a RAID-1, the total capacity is determined by the smallest disk. The main purpose of this mode is to "keep exactly the same data on both disks". For example, if I have a 100MB file and only two disks forming a RAID-1, then both disks will have that 100MB written to their storage space. As a result, the usable capacity is only about half of the total raw capacity. Because the contents of the two disks are exactly the same, like a mirror, this is also called mirror mode.
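A minimal sketch of a two-partition RAID-1 with mdadm, again only as an illustration; /dev/md9, /dev/sdc1 and /dev/sdc2 are hypothetical names:

[root@raid5 /]# mdadm --create /dev/md9 --auto=yes --level=1 --raid-devices=2 /dev/sdc1 /dev/sdc2   # mirror two partitions; mdadm may ask for confirmation about boot metadata
[root@raid5 /]# cat /proc/mdstat    # the usable size is that of a single partition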

3. RAID 1+0, RAID 0+1

RAID-0 has good performance but no data safety, while RAID-1 keeps data safe but performs poorly, so can the two be combined? RAID 1+0 is:

(1) First, let two disks form a RAID 1, and build two such sets.

(2) Then combine the two RAID 1 sets into one RAID 0. This is RAID 1+0. RAID 0+1, in turn, is:

(1) First, let two disks form a RAID 0, and build two such sets.

(2) Then combine the two RAID 0 sets into one RAID 1. This is RAID 0+1. A minimal sketch of the RAID 1+0 variant follows below.
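The sketch below assumes four spare 1GB partitions with the hypothetical names /dev/sdc1 through /dev/sdc4; mdadm can build the nested 1+0 layout directly as level 10, so there is no need to create the two RAID 1 sets by hand:

[root@raid5 /]# mdadm --create /dev/md9 --auto=yes --level=10 --raid-devices=4 /dev/sdc1 /dev/sdc2 /dev/sdc3 /dev/sdc4   # mirrored pairs, striped together
[root@raid5 /]# mdadm --detail /dev/md9 | grep -i level    # should report raid10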

4. RAID-5: a balance between performance and data backup (the key point of this article)

RAID-5 requires at least three disks to form this type of disk array. Data is written in a way similar to RAID-0, but during each write cycle (striping) a piece of parity data (parity) is also written to one of the disks; this parity records information about the data on the other disks, so that it can be used for rescue when a disk fails.

How RAID 5 works:

In each write cycle, part of the parity (parity) is recorded, and the parity is written to a different disk each time. Therefore, when any one disk fails, its data can be reconstructed from the data and parity on the other disks. Note, however, that because of the parity, the total usable capacity of RAID 5 is the number of disks minus one. Three disks therefore provide only the capacity of two (3 - 1 = 2). And if two or more disks fail at the same time, the whole RAID 5 set is lost, because RAID 5 by default can only survive the failure of a single disk.
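As a worked example using the numbers from the configuration later in this article (four 1GB partitions in the array, plus one spare that does not count towards capacity): usable capacity = (number of disks - 1) x per-disk capacity, which matches the Array Size of about 3 GiB reported by mdadm --detail below.

[root@raid5 /]# echo $(( (4 - 1) * 1 ))GB   # (4 disks - 1 for parity) x 1GB per partition
3GB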

RAID 6 can tolerate the failure of two disks.

Spare Disk (hot spare function):

In order for the array to rebuild itself automatically and in real time when a disk fails, a spare disk (spare disk) is needed. A spare disk is one or more disks that are not part of the original disk array; the array does not normally use them. When any disk in the array is damaged, the spare disk is automatically pulled into the array, the broken disk is removed from it, and the data is rebuilt immediately.

Advantages of disk arrays:

Data security and reliability: this does not refer to network security, but to whether the data can be rescued or remain usable when the hardware (that is, a disk) fails. Read/write performance: RAID 0, for example, can improve read/write performance, so the I/O side of your system benefits. Capacity: multiple disks can be combined, so a single filesystem can have very large capacity.

2. Software RAID and hardware RAID:


Why are disk arrays divided into hardware and software?

A hardware disk array (hardware RAID) achieves the array through a dedicated disk array card. The card carries a dedicated chip to handle the RAID tasks, so its performance is better. For many tasks (such as computing the RAID 5 parity), the array card does not repeatedly consume the host system's I/O bus, so in theory its performance is better. In addition, most current mid- and high-end array cards support hot swapping, that is, replacing a damaged disk without shutting down, which is very useful for system recovery and data reliability.

A software disk array mainly emulates the array's tasks in software, so it consumes more system resources, such as CPU time and I/O bus bandwidth. But today's personal computers are fast enough that the old speed limitation no longer really matters.

CentOS provides the software disk array tool mdadm. It treats a partition or a whole disk as the unit of the array, which means you do not need two or more disks; as long as you have two or more partitions (partition), you can build your disk array.

In addition, mdadm supports RAID0/RAID1/RAID5, spare disks and the other features we just mentioned, and its management mechanism offers something like hot swapping: partitions can be swapped online (while the filesystem is in normal use). It is very convenient to use.
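Before starting, it is worth checking that the tool is actually present; a minimal sketch for a CentOS 7 host, where the package name mdadm is standard but whether it is already installed on your system is an assumption:

[root@raid5 /]# rpm -q mdadm          # check whether the mdadm package is installed
[root@raid5 /]# yum install -y mdadm  # install it if it is missing
[root@raid5 /]# cat /proc/mdstat      # the md driver's status file; it lists no arrays yet on a fresh system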

3. Configuration of a software disk array:

After all that background, let's configure the software disk array:

Approximate steps:

Build the RAID 5 from 4 partitions; each partition is about 1GB in size, and it is best to make sure every partition is the same size. Use 1 more partition as the spare disk, the same size as the other RAID partitions. Set the chunk size to 256K. Finally, mount this RAID 5 device on the /srv/raid directory.

Start the configuration:

1. Partitioning

[root@raid5 /]# gdisk /dev/sdb          # create the partitions with gdisk; the fdisk command can also be used
Command (? for help): n                  # add a new partition
Partition number (1-128, default 1): 1   # partition number 1
First sector (34-41943006, default = 2048) or {+-}size{KMGTP}:
Last sector (2048-41943006, default = 41943006) or {+-}size{KMGTP}: +1G   # size is 1G
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300):   # partition type GUID
Changed type of partition to 'Linux filesystem'
# create five partitions in the same way: four for the RAID and one spare

Command (? for help): p                  # view the created partitions
... // part of the output omitted
Number  Start (sector)    End (sector)  Size        Code  Name
   1            2048         2099199    1024.0 MiB  8300  Linux filesystem
   2         2099200         4196351    1024.0 MiB  8300  Linux filesystem
   3         4196352         6293503    1024.0 MiB  8300  Linux filesystem
   4         6293504         8390655    1024.0 MiB  8300  Linux filesystem
   5         8390656        10487807    1024.0 MiB  8300  Linux filesystem
# save and exit

[root@raid5 /]# lsblk                    # view the disk list
NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda             8:0    0  100G  0 disk
├─sda1          8:1    0    1G  0 part /boot
└─sda2          8:2    0   99G  0 part
  ├─cl-root   253:0    0   50G  0 lvm  /
  ├─cl-swap   253:1    0    2G  0 lvm  [SWAP]
  └─cl-home   253:2    0   47G  0 lvm  /home
sdb             8:16   0   20G  0 disk   # sdb now carries the five partitions we created
├─sdb1          8:17   0    1G  0 part
├─sdb2          8:18   0    1G  0 part
├─sdb3          8:19   0    1G  0 part
├─sdb4          8:20   0    1G  0 part
└─sdb5          8:21   0    1G  0 part   # the fifth will be the spare disk
sr0            11:0    1 1024M  0 rom
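If you prefer to script the partitioning rather than answer gdisk's prompts, a minimal sketch using sgdisk (shipped in the same gdisk package) could look like the following; it assumes /dev/sdb is empty and dedicated to this exercise, and the loop simply repeats "create the next 1GB Linux partition" five times:

[root@raid5 /]# for i in 1 2 3 4 5; do sgdisk -n 0:0:+1G -t 0:8300 /dev/sdb; done   # five 1GB partitions of type Linux filesystem
[root@raid5 /]# lsblk /dev/sdb   # confirm that sdb1 through sdb5 now exist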

2. Create the array

[root@raid5 /]# mdadm --create /dev/md0 --auto=yes --level=5 --chunk=256K --raid-devices=4 --spare-devices=1 /dev/sdb1 /dev/sdb2 /dev/sdb3 /dev/sdb4 /dev/sdb5
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
# --create          : create a new RAID
# --auto=yes        : decide the software array device to create, i.e. md[0-9]
# --chunk=256K      : the chunk size of this device, which can also be seen as the stripe size; usually 64K or 512K
# --raid-devices=4  : how many disks or partitions make up the disk array
# --spare-devices=1 : how many disks or partitions serve as spare devices
# --level=5         : the level of this disk array; only 0, 1 or 5 is recommended
# --detail          : show the details of the array device that follows it

[root@raid5 /]# mdadm --detail /dev/md0
/dev/md0:                                             # RAID device file name
           Version : 1.2
     Creation Time : Thu Nov  7 20:26:03 2019         # creation time
        Raid Level : raid5                            # RAID level
        Array Size : 3142656 (3.00 GiB 3.22 GB)       # usable capacity of the whole array
     Used Dev Size : 1047552 (1023.00 MiB 1072.69 MB) # capacity of each member device
      Raid Devices : 4                                # number of disks that make up the RAID
     Total Devices : 5                                # total number of disks, including the spare
       Persistence : Superblock is persistent
       Update Time : Thu Nov  7 20:26:08 2019
             State : clean                            # current state of the array
    Active Devices : 4                                # number of active devices
   Working Devices : 5                                # number of devices currently usable by this array
    Failed Devices : 0                                # number of failed devices
     Spare Devices : 1                                # number of spare disks
            Layout : left-symmetric
        Chunk Size : 256K                             # the chunk (cell block) size
              Name : raid5:0  (local to host raid5)
              UUID : facfa60d:c92b4ced:3f519b65:d135fd98
            Events : 18

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       18        1      active sync   /dev/sdb2
       2       8       19        2      active sync   /dev/sdb3
       5       8       20        3      active sync   /dev/sdb4
       4       8       21        -      spare         /dev/sdb5   # sdb5 waits in the wings as the spare device
# the last five lines show the current state of the five devices; RaidDevice is the disk's order inside the RAID

[root@raid5 /]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdb4[5] sdb5[4](S) sdb3[2] sdb2[1] sdb1[0]                     # first line
      3142656 blocks super 1.2 level 5, 256k chunk, algorithm 2 [4/4] [UUUU]      # second line
unused devices: <none>

The first line shows that md0 is a raid5 using the four devices sdb1, sdb2, sdb3 and sdb4. The number in the square brackets [] after each device is that disk's order within the RAID (RaidDevice); the (S) after sdb5 means that sdb5 is the spare.

Second line: this disk array has 3142656 blocks (each block is 1K), so the total capacity is about 3GB; it uses RAID level 5; the chunk written to each disk is 256K in size; and it uses disk array algorithm 2. [m/n] means that the array requires m devices and n of them are operating normally, so this md0 requires 4 devices and all 4 are operating normally. The [UUUU] that follows shows the state of the m required devices: U means the device is operating normally, _ means it has failed.
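While a freshly created array is still syncing, or later while it is rebuilding onto a spare, /proc/mdstat also shows a progress line; a minimal sketch for keeping an eye on it (the 2-second interval is just a choice):

[root@raid5 /]# watch -n 2 cat /proc/mdstat   # refresh the array status every 2 seconds; Ctrl+C to quit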

3. Format and mount for use

[root@raid5 /]# mkfs.xfs -f -d su=256k,sw=3 -r extsize=768k /dev/md0   # note that it is md0 we are formatting
meta-data=/dev/md0               isize=512    agcount=8, agsize=98176 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=785408, imaxpct=25
         =                       sunit=128    swidth=384 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=1572864 blocks=0, rtextents=0
[root@raid5 /]# mkdir /srv/raid
[root@raid5 /]# mount /dev/md0 /srv/raid/
[root@raid5 /]# df -TH /srv/raid/        # we have mounted it successfully
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/md0       xfs   3.3G   34M  3.2G   2% /srv/raid
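The su and sw values passed to mkfs.xfs are not arbitrary: su is the RAID chunk size (256K) and sw is the number of data-bearing disks (4 members minus 1 for parity = 3), so the full stripe width is su x sw = 768K, which is also the value given to -r extsize. A quick check of that arithmetic:

[root@raid5 /]# echo $(( 256 * (4 - 1) ))K   # chunk 256K x 3 data disks = 768K stripe width
768K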

4. Simulating a RAID failure and rescuing it

As the saying goes, "accidents and misfortunes arrive without warning". No one knows when a device in your disk array will fail, so it is necessary to know how to rescue a software disk array. Below we simulate a RAID failure and rescue it.

[root@raid5 /]# cp -a /var/log/ /srv/raid/              # first copy some data onto the mount point
[root@raid5 /]# df -TH /srv/raid/ ; du -sm /srv/raid/*   # confirm that the data is there
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/md0       xfs   3.3G   39M  3.2G   2% /srv/raid
5       /srv/raid/log
[root@raid5 /]# mdadm --manage /dev/md0 --fail /dev/sdb3
mdadm: set /dev/sdb3 faulty in /dev/md0                  # sdb3 is now marked as a faulty device
[root@raid5 /]# mdadm --detail /dev/md0
... // part of the output omitted
       Update Time : Thu Nov  7 20:55:31 2019
             State : clean
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 1      # one disk has failed
     Spare Devices : 0      # the spare count is back to 0, which means the spare has already taken over;
                            # the switchover is quick, so if you check immediately it may still show 1
... // part of the output omitted
    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       18        1      active sync   /dev/sdb2
       4       8       21        2      active sync   /dev/sdb5   # sdb5 is now doing the work
       5       8       20        3      active sync   /dev/sdb4
       2       8       19        -      faulty        /dev/sdb3   # sdb3 is dead

Then you can unplug the bad disk and replace it with a new one.

[root@raid5 /]# mdadm --manage /dev/md0 --remove /dev/sdb3   # simulate unplugging the old disk
mdadm: hot removed /dev/sdb3 from /dev/md0
[root@raid5 /]# mdadm --manage /dev/md0 --add /dev/sdb3      # simulate plugging in the new disk
mdadm: added /dev/sdb3
[root@raid5 /]# mdadm --detail /dev/md0                      # check again
... // part of the output omitted
    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       18        1      active sync   /dev/sdb2
       4       8       21        2      active sync   /dev/sdb5
       5       8       20        3      active sync   /dev/sdb4
       6       8       19        -      spare         /dev/sdb3   # sdb3 is now waiting here as the spare disk
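If the array happens to still be rebuilding when you check, the progress can be followed with the files and commands already used above; the grep filter here is just one way to trim the output:

[root@raid5 /]# cat /proc/mdstat                           # a recovery progress bar appears here during a rebuild
[root@raid5 /]# mdadm --detail /dev/md0 | grep -i rebuild  # typically shows a "Rebuild Status : ...% complete" line while rebuilding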

5. Start the RAID at boot and mount it automatically

[root@raid5 /]# mdadm --detail /dev/md0 | grep -i uuid
              UUID : facfa60d:c92b4ced:3f519b65:d135fd98
[root@raid5 /]# vim /etc/mdadm.conf
ARRAY /dev/md0 UUID=facfa60d:c92b4ced:3f519b65:d135fd98
#     RAID device  identifier=content
[root@raid5 /]# blkid /dev/md0
/dev/md0: UUID="bc2a589c-7df0-453c-b971-1c2c74c39075" TYPE="xfs"
[root@raid5 /]# vim /etc/fstab                # set automatic mounting at boot
... // part of the file omitted
/dev/md0    /srv/raid    xfs    defaults    0 0    # the first field can also be written as the UUID shown by blkid
[root@raid5 /]# df -Th /srv/raid/             # reboot and test
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/md0       xfs   3.0G   37M  3.0G   2% /srv/raid
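Instead of typing the ARRAY line by hand, the configuration entry can also be generated automatically; this is a common alternative rather than part of the walkthrough above, so check the result before relying on it:

[root@raid5 /]# mdadm --detail --scan >> /etc/mdadm.conf   # append an ARRAY line for every running array
[root@raid5 /]# cat /etc/mdadm.conf                        # make sure the /dev/md0 entry looks right and is not duplicated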
