
Introduction to Software RAID: Concepts and Configuration


Enterprise database applications are usually deployed on servers with RAID disk arrays, which improve disk access performance and provide fault tolerance / disaster recovery.

RAID (Redundant Array of Independent Disks), simply put, combines inexpensive hard drives into an array. Its purpose is to expand storage capacity, improve read/write performance, and provide data redundancy (backup / disaster recovery).

The mainstream levels are RAID 0, RAID 1, RAID 5, RAID 6, RAID 10, RAID 01, RAID 50, and so on.

RAID 0, also called disk striping, provides the best read/write performance. If you build a RAID 0 array from two disks, data can be written to disk A and disk B at the same time. Note that "written at the same time" does not mean the full contents of a file are written in their entirety to both disk A and disk B. For example, suppose a 100 MB file needs to be written and a single disk writes at 10 MB/s, so the write takes 10 seconds on one disk. In a RAID 0 array made of disks A and B, however, 10 MB can be written to disk A while the next 10 MB is written to disk B in the same second, so the effective write speed becomes 20 MB/s, the write finishes in 5 seconds, and each disk stores only 50 MB of the file, easing the storage pressure on each drive. This example may not be perfectly chosen, of course; it only describes the theoretical case, and in a real environment many other factors mean this efficiency will not quite be reached.

To be sure, this improves read and write performance, but it also brings a problem: if any part of the data is lost, none of your data can be recovered, because RAID 0 provides no strategy for redundant data recovery. RAID 0 is therefore suitable for read-only database tables, replicated databases, or workloads that are not sensitive to data loss. In short, this level offers high performance and no redundancy.

RAID 1, disk mirroring, improves read performance and reduces write performance, because one disk is used as a redundant copy. If you have two 50 GB disks, 100 GB in total, RAID 1 gives you only 50 GB of usable space, so this approach costs disk capacity and slows down writes. Put colloquially: when a 100 MB file is written to RAID 1, the content is written to disk A and the same content is written to disk B, so the two disks end up with exactly the same contents (this is the legendary "redundancy", nothing sophisticated). Where you previously wrote to one disk, you now write to two, so write efficiency is certainly lower. As for reads, in a RAID 1 environment both disks hold the same content and reads are served from both disks at once, so read performance improves. In terms of redundancy, the second disk takes over only when the data on the first disk is damaged or the disk dies; if both disks die, the array really is gone. It is worth mentioning that some books and articles say RAID 1 finishes writing the first disk and then copies the data to the second disk as a mirror backup; as I understand it, the data is copied and written to both disks at the same time.

RAID 5 differs from RAID 1 by adding parity. The parity information is distributed across all disks, and performance is higher than RAID 1, but once a disk I/O failure occurs, performance drops sharply while the array runs degraded. It is a compromise between RAID 0 and RAID 1 and a very common choice. In simple terms, at least three disks are needed to build a RAID 5 array. On a single disk, data is simply written straight to that disk; in RAID 5, the data is split by an algorithm into parts written across the three disks, and parity information is also written across the three disks. When the data is read back, it is read from the three disks and checked against the parity information. If one of the disks is damaged, its contents can be recalculated from the data stored on the other two disks. In other words, RAID 5 tolerates the failure of only one disk, and a failed disk should be replaced as soon as possible; once it is replaced, the data written during the failure is rebuilt from the parity. If a second disk fails before the first failure is resolved, the result is catastrophic.

RAID 10 (and RAID 01; the former builds RAID 0 on top of RAID 1, the latter the opposite) combines RAID 0 and RAID 1. It provides high performance and high availability, performs better than RAID 5, and is especially suitable for write-heavy applications, but the cost is high: no matter how many disks you use, you lose half of your storage. It takes at least four disks: A and B stripe the data, each storing half of it, while C and D are mirrors of A and B respectively. This is close to perfect and is my ideal setup, since no RAID 5 parity is needed, but the cost is obviously higher, and it is a pity that performance is still limited by the "weakest link" among the member disks.
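As a quick worked example of usable capacity (a rough sanity check based on the 20 GB disks used later in this article, not a figure from the original text):

RAID 0, two 20 GB disks:   2 x 20 GB = 40 GB usable, no redundancy
RAID 1, two 20 GB disks:   20 GB usable (the second disk is a mirror)
RAID 5, three 20 GB disks: (3 - 1) x 20 GB = 40 GB usable (one disk's worth of space holds parity)
RAID 10, four 20 GB disks: (4 / 2) x 20 GB = 40 GB usable (half the disks are mirrors)

The RAID 5 figure matches the roughly 40 GB array size reported by mdadm in the example below.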

From the introduction to RAID 10 above, you can probably infer how RAID 50 works, so I will not cover it here.

To summarize the overall performance and typical application scenarios of the various RAID levels: RAID 0 for pure performance with no redundancy, RAID 1 for redundancy at the cost of capacity, RAID 5 as the balanced and most common choice, and RAID 10 for write-heavy workloads where the extra cost is acceptable.

The software RAID configuration command: mdadm
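mdadm is usually preinstalled on CentOS/RHEL systems like the one shown in the transcripts below; if it is not, a minimal sketch for installing it (assuming a yum-based system) is:

[root@localhost ~]# yum install -y mdadm        # install the mdadm package
[root@localhost ~]# mdadm --version             # confirm the tool is available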

Command format:

[root@localhost ~]# mdadm -C /dev/md5 -a yes -l 5 -n 3 -x 1 /dev/sd[b-e]
or
[root@localhost ~]# mdadm -C /dev/md0 -a yes -l 5 -n 3 -x 1 /dev/sd[b-e]1

An introduction to the above options and parameters:

-C, --create: create mode.
-a, --auto {yes|no}: automatically create the device file. Without this parameter you must first create the RAID device node with the mknod command, so it is recommended to use -a yes and create everything in one step.
-l, --level #: array level; supported levels include linear, raid0, raid1, raid4, raid5, raid6, raid10, multipath, faulty and container.
-n #: use # block devices to build this RAID (-n 3 means build the RAID from 3 disks).
-x #: keep # hot spares in the array (-x 1 means one hot spare).

An example of creating a RAID array (using RAID 5 as the example)

The requirements are as follows:

Use four partitions (or disks) to build a RAID 5 array, with one of them set aside as a hot spare that takes over automatically when a working disk in the RAID fails; each partition is 20 GB in size; mount the array on the /test directory.
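The transcript below actually builds the array from the whole disks /dev/sdb through /dev/sde. If 20 GB partitions are wanted instead, as the requirements suggest, they could be prepared roughly like this (a sketch, not part of the original article; the 1MiB start offset is an assumption):

[root@localhost ~]# parted /dev/sdb mklabel gpt                  # label the disk with GPT
[root@localhost ~]# parted /dev/sdb mkpart primary 1MiB 20GiB    # create a 20 GB partition
[root@localhost ~]# parted /dev/sdb set 1 raid on                # mark the partition for software RAID
# repeat for /dev/sdc, /dev/sdd and /dev/sde, then use /dev/sd[b-e]1 in the mdadm command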

Start the configuration:

1. Create RAID 5:

[root@localhost ~]# fdisk -l | grep disk        # the following are the disks used to build RAID 5
...                                             # part of the output omitted
Disk /dev/sdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
Disk label type: gpt
Disk /dev/sdc: 21.5 GB, 21474836480 bytes, 41943040 sectors
Disk label type: gpt
Disk /dev/sdd: 21.5 GB, 21474836480 bytes, 41943040 sectors
Disk label type: gpt
Disk /dev/sde: 21.5 GB, 21474836480 bytes, 41943040 sectors
Disk label type: gpt
...                                             # part of the output omitted
[root@localhost ~]# mdadm -C /dev/md0 -a yes -l 5 -n 3 -x 1 /dev/sd[b-e]
...                                             # part of the prompt omitted
Continue creating array? y                      # enter "y" to confirm
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.                  # /dev/md0 was created successfully
[root@localhost ~]# cat /proc/mdstat            # query the RAID just created
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd[4] sde[3](S) sdc[1] sdb[0]        # member disks and their order
      41908224 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
      # chunk size and RAID level; the three "U"s mean the three disks are healthy;
      # anything other than "U" indicates a problem
unused devices: <none>
[root@localhost ~]# mdadm -D /dev/md0           # the output here is more human-readable
/dev/md0:                                       # RAID device file name
           Version : 1.2
     Creation Time : Thu Sep  5 09:37:42 2019   # when the RAID was created
        Raid Level : raid5                      # RAID level, RAID 5 here
        Array Size : 41908224 (39.97 GiB 42.91 GB)    # usable capacity of the whole array
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)    # usable capacity per member disk
      Raid Devices : 3                          # number of disks that make up the RAID
     Total Devices : 4                          # total number of disks, including the spare
       Persistence : Superblock is persistent
       Update Time : Thu Sep  5 09:39:28 2019
             State : clean                      # current state of this RAID
    Active Devices : 3                          # number of active devices
   Working Devices : 4                          # number of devices currently usable by this RAID
    Failed Devices : 0                          # number of failed devices
     Spare Devices : 1                          # number of spare disks
            Layout : left-symmetric
        Chunk Size : 512K                       # chunk (unit block) size
Consistency Policy : resync
              Name : localhost.localdomain:0  (local to host localhost.localdomain)
              UUID : d395d245:8f9294b4:3223cd47:b0bee5d8
            Events : ...
    # usage of each disk: three are active sync and one is a spare
    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       4       8       48        2      active sync   /dev/sdd
       3       8       64        -      spare         /dev/sde
    # RaidDevice is the disk's position within this RAID
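To have the array reassembled automatically after a reboot, its definition can also be recorded in /etc/mdadm.conf. This is an optional step not shown in the original transcript, and the output line is only indicative:

[root@localhost ~]# mdadm -D --scan >> /etc/mdadm.conf    # record the array definition
[root@localhost ~]# cat /etc/mdadm.conf                   # the entry will look roughly like this
ARRAY /dev/md0 metadata=1.2 name=localhost.localdomain:0 UUID=d395d245:8f9294b4:3223cd47:b0bee5d8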

2. Format and mount:

[root@localhost ~]# mkfs.xfs /dev/md0           # format the newly created RAID 5
[root@localhost ~]# mkdir /test                 # create the mount point
[root@localhost ~]# mount /dev/md0 /test        # mount it
[root@localhost ~]# df -hT /test                # confirm the mount; it is used no differently from an ordinary file system
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/md0       xfs    40G   33M   40G   1% /test
# Write the mount information into /etc/fstab so it is mounted automatically at boot.
# The device can be referenced as /dev/md0 or by its UUID.
[root@localhost ~]# blkid /dev/md0              # query the UUID of the RAID 5 device
/dev/md0: UUID="93830b3a-69e4-4cbf-b255-f43f2ae4864b" TYPE="xfs"
[root@localhost ~]# vim /etc/fstab              # open /etc/fstab and add the following line
UUID=93830b3a-69e4-4cbf-b255-f43f2ae4864b    /test    xfs    defaults    0 0
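Before relying on the new /etc/fstab entry, it can be exercised once without rebooting. This check is not part of the original steps, just a common precaution:

[root@localhost ~]# umount /test                # unmount so the fstab entry is actually used
[root@localhost ~]# mount -a                    # mount everything listed in /etc/fstab
[root@localhost ~]# df -hT /test                # /dev/md0 should appear on /test again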

3. Test RAID 5:

Testing involves a few mdadm options for managing a RAID array, as follows:

-f, --fail: mark the specified device as faulty.
-a, --add: add the specified device to the md device.
-r, --remove: remove the specified device from the md device.

[root@localhost ~]# echo "hi,girl,good morning." > /test/a.txt    # write some data
[root@localhost ~]# cat /test/a.txt                               # check it
hi,girl,good morning.
[root@localhost ~]# mdadm /dev/md0 -f /dev/sdb    # simulate damage to /dev/sdb
mdadm: set /dev/sdb faulty in /dev/md0
[root@localhost ~]# mdadm -D /dev/md0             # check the status of RAID 5
/dev/md0:
           Version : 1.2
     Creation Time : Thu Sep  5 09:37:42 2019
        Raid Level : raid5
        Array Size : 41908224 (39.97 GiB 42.91 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent
       Update Time : Thu Sep  5 10:47:00 2019
             State : clean
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 1                          # one failed disk is now reported
     Spare Devices : 0
            Layout : left-symmetric
        Chunk Size : 512K
Consistency Policy : resync
              Name : localhost.localdomain:0  (local to host localhost.localdomain)
              UUID : d395d245:8f9294b4:3223cd47:b0bee5d8
            Events : 39
    Number   Major   Minor   RaidDevice State
       3       8       64        0      active sync   /dev/sde
       1       8       32        1      active sync   /dev/sdc
       4       8       48        2      active sync   /dev/sdd
       0       8       16        -      faulty        /dev/sdb    # this is the damaged disk
# Notice that the damaged disk has already been replaced by the spare disk.
[root@localhost ~]# df -hT /test                # the usable capacity is still 40G
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/md0       xfs    40G   33M   40G   1% /test
[root@localhost ~]# mdadm /dev/md0 -r /dev/sdb  # remove the damaged disk from the RAID group
mdadm: hot removed /dev/sdb from /dev/md0
[root@localhost ~]# mdadm /dev/md0 -a /dev/sdf  # add a new disk to the RAID group
mdadm: added /dev/sdf
[root@localhost ~]# mdadm -D /dev/md0           # check the member status of the RAID group again
/dev/md0:
...                                             # part of the output omitted
    Number   Major   Minor   RaidDevice State
       3       8       64        0      active sync   /dev/sde
       1       8       32        1      active sync   /dev/sdc
       4       8       48        2      active sync   /dev/sdd
       5       8       80        -      spare         /dev/sdf    # the newly added disk has become the spare of the RAID group
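While the spare disk is being rebuilt into the array after the simulated failure, the recovery progress can be followed from /proc/mdstat. This is a small addition not shown in the original transcript:

[root@localhost ~]# cat /proc/mdstat            # shows a "recovery" progress line while the spare is rebuilt
[root@localhost ~]# watch -n 1 cat /proc/mdstat # refresh the view every second until recovery completes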

4. Return the RAID group's disks to ordinary disks

[root@localhost ~]# umount /test
[root@localhost ~]# vim /etc/fstab              # delete the RAID 5 automatic-mount entry that was added earlier
...                                             # part of the content omitted
UUID=93830b3a-69e4-4cbf-b255-f43f2ae4864b    /test    xfs    defaults    0 0    # <-- delete this line
[root@localhost ~]# dd if=/dev/zero of=/dev/md0 bs=1M count=50    # destroy the superblock of the RAID
[root@localhost ~]# mdadm --stop /dev/md0       # stop the RAID
mdadm: stopped /dev/md0
# The following operations overwrite the superblock information on the RAID's member disks.
[root@localhost ~]# dd if=/dev/zero of=/dev/sdc bs=1M count=10
[root@localhost ~]# dd if=/dev/zero of=/dev/sdd bs=1M count=10
[root@localhost ~]# dd if=/dev/zero of=/dev/sde bs=1M count=10
[root@localhost ~]# dd if=/dev/zero of=/dev/sdf bs=1M count=10
[root@localhost ~]# cat /proc/mdstat            # confirm that the RAID no longer exists
Personalities : [raid6] [raid5] [raid4]
unused devices: <none>
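As an alternative to overwriting the start of each member disk with dd, mdadm can erase its own metadata directly. This is a sketch using the same device names as above, not a step from the original article; --zero-superblock removes only the md superblock:

[root@localhost ~]# mdadm --stop /dev/md0                    # stop the array first (if not already stopped)
[root@localhost ~]# mdadm --zero-superblock /dev/sd[c-f]     # wipe the md superblock on each former member disk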

After the above operations, the disks are restored to ordinary disks, but the original data is gone. I have not studied their subsequent use in depth: you can run a re-detection command so the system rescans the disks, or simply reboot, and then the disks can be mounted again; otherwise the mount may fail.

The re-detection command:

[root@localhost ~]# partprobe /dev/sdc
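The same re-detection can be applied to the other former member disks, or to all disks at once by running partprobe with no arguments (a variant of the command above, not shown in the original):

[root@localhost ~]# partprobe /dev/sdd /dev/sde /dev/sdf     # re-read the partition tables of the other former member disks
[root@localhost ~]# partprobe                                # or rescan all disks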
