The editor shares here how to implement Linux software RAID. Most people do not know much about it, so this article is offered for your reference; I hope you learn a lot from it. Let's get to know it!
1 what is RAID, and what are its levels and characteristics
What is RAID? The name comes from the paper "A Case for Redundant Arrays of Inexpensive Disks (RAID)", published in 1987 by the University of California, Berkeley. RAID is the abbreviation of that title, and it translates as "disk array".
RAID combines several physical disks into one large virtual physical disk. Its main purposes and uses are: to form a large-capacity virtual storage device (individual physical disks used to be rather small), to improve the efficiency of physical storage (reads and writes), and to provide redundancy that improves the safety of stored data.
Depending on the application, RAID is divided into different levels, including LINEAR, RAID0, RAID1, RAID5, RAID10, RAID4, RAID6 and MULTIPATH. The commonly used ones are RAID0, RAID1, RAID5, RAID10 (which is actually RAID 0+1) and LINEAR.
1.1 what are hardware RAID and soft RAID
RAID is also divided into hardware RAID and software RAID. Hardware RAID is implemented through a RAID card, while software RAID is implemented in software. In the enterprise field, hardware RAID is used most of the time; software RAID is mostly adopted by small and medium-sized enterprises because of its high performance-to-price ratio.
Hardware RAID aggregates several hard drives of the same capacity, through a RAID card, into one large virtual RAID device (RAID0, RAID1, RAID5, RAID10 and so on, depending on the intended use). If the capacities of the hard disks are not identical, the smallest disk is used as the basis. Its members are whole hard disks.
Software RAID aggregates several hard drives or partitions of the same capacity into one large virtual RAID device (RAID0, RAID1, RAID5, RAID10 and so on, depending on the intended use). If the capacities of the hard disks or partitions are not identical, the smallest one is used as the basis. The members of software RAID are whole hard disks or partitions.
Generally speaking, RAID is used in production environments; it is not widely used in ordinary office or personal entertainment settings. The places that need it most are low-end servers or PC servers that require good cost-performance.
1.2 levels and characteristics of RAID
RAID has several levels: LINEAR, RAID0 (striping), RAID1 (mirroring), RAID4, RAID5, RAID6, RAID10, MULTIPATH and FAULTY. Among them, the ones we use most often are RAID0, RAID1, RAID5 and RAID10.
Let's talk about the commonly used RAID0, RAID1, RAID5 and RAID10
1.21 what is soft RAID0 and its characteristics
RAID0 combines two or more hard disks or partitions of the same capacity, through the RAID controller (hard RAID is implemented with a RAID card, soft RAID in software), into one device whose capacity is the sum of the members' capacities. When writing, data is written to each hard disk or partition at the same time.
In hard RAID, the members of a RAID0 are whole hard disks: two or more hard drives are bound into one virtual disk device through the RAID card, and each hard disk is a member of the RAID0.
In a soft RAID0, the members are whole hard disks or partitions, and the capacity is the sum of the capacities of all members that join the RAID0; the members of a RAID0 should have the same capacity. For example, if we make the three 80G hard drives /dev/sdb, /dev/sdc and /dev/sdd into a RAID0, the capacity of the RAID0 device is the sum of the three drives, 80x3=240G. When data is written, the system writes it to each hard disk at the same time, in stripes. For example, if we store a file linuxsir.tar.gz on the RAID0 device, the data is split into several pieces and written to each member of the RAID0. The data is complete only if every member of the RAID0 is working properly and the RAID0 itself is running. When any member (hard disk or partition) of a RAID0 has a problem, the RAID0 cannot run and the data is no longer complete.
The reading and writing speed of RAID0 is relatively fast, which is about twice as fast as that without RAID (Note: the actual speed is related to the hardware configuration of the machine), so RAID0 is often used in application solutions that require high storage efficiency but low data security.
Security: if any member of the RAID0 fails, the entire RAID0 cannot be activated. Data cannot be guaranteed.
1.22 what is soft RAID1 and its characteristics
RAID1 combines a number of hard drives or partitions of the same capacity whose members mirror each other. In capacity, a RAID1 device has the capacity of a single member; for example, if two 80G hard drives are made into a RAID1, the device capacity of that RAID1 is still 80G. When we write a file linuxsir.tar.bz2 to a RAID1 device, we actually write a copy to every member of the RAID. For example, if the RAID1 device has the two members /dev/sdb and /dev/sdc, then after we write linuxsir.tar.bz2 to the RAID1, both /dev/sdb and /dev/sdc hold a complete linuxsir.tar.bz2. Therefore RAID1 is a redundant array, generally used in applications with high safety requirements.
Because RAID1 is redundant in mirroring, disk utilization is not efficient, or "wasteful". Relatively speaking, this scheme is not cost-effective and is rarely used. Data reading and writing efficiency is slower than that of RAID0.
Security: as long as one member of the RAID1 is healthy, the RAID1 can be activated, and the data is absolutely secure. If all the members fail, RAID1 will be scrapped. Haha, isn't that nonsense?
1.23 what is soft RAID5 and its characteristics
Soft RAID5 is also redundant and secure. RAID5 virtualizes at least three hard drives or partitions into one large storage device through software. Its capacity is (n-1) x the capacity of a single hard disk (partition); for example, if we use three 80G hard drives to make a RAID5, the capacity is that of two drives, 160G. When writing, the data is split into several parts and written to the members of the RAID5; for example, when writing linuxsir.tar.bz2 to the RAID5, it is first split into several parts and then written to the RAID5 members. Because redundancy is involved, the read speed is not very fast and is no match for RAID0, and the write speed of RAID5 is not as fast as RAID1 or RAID0, nor as fast as a disk without RAID.
Because RAID5 has relatively small capacity loss, redundant security, and fast writing speed, on the whole, it has high performance-to-price ratio, so it is widely used.
Security: when one member of the RAID5 fails, the RAID5 can still start and run normally. As long as n-1 of the hard disks or partitions (note: n >= 3) are fault-free, the data on the RAID5 is safe. For a file stored on the RAID5 device, the file is complete only while n-1 members are fault-free. For example, if a RAID5 has four hard drives (or partitions) and one of them dies, this does not affect the integrity or safety of the data on the whole RAID5.
1.24 what is soft RAID10 and its characteristics
Soft RAID10 is also a redundant, secure array; it is a combination of RAID0+1. RAID10 virtualizes at least four hard drives or partitions into one large storage device through software. Its capacity is (n/2) x the capacity of a single hard disk (partition); for example, if we use four 80G hard drives to make a RAID10, the capacity is that of two drives, 4/2x80=160G. The number of hard drives or partitions used for RAID10 must be even.
RAID10 has the mirroring feature of RAID1 and the speed of RAID0. You can understand RAID10 this way: take a RAID10 made of four hard drives; the process is to make a RAID1 out of every two drives, and then make a RAID0 on top of the two RAID1s. In theory, RAID10 should inherit the speed of RAID0 and the redundant safety of RAID1. However, while testing soft RAID0, RAID1, RAID5 and RAID10, I found that the write speed of RAID10 was the slowest. The test method was to copy several large files of more than 1G; the resulting speed order was RAID0 > no RAID > RAID1 > RAID5 > RAID10.
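As a rough illustration of that structure, the nested layout described above could be built by hand with mdadm (the command syntax is explained in section 2). The device names /dev/sdb through /dev/sde and the md numbers here are only assumptions for this sketch, not the drives from the test above; mdadm can also build the same thing in one step with -l10.
mdadm -C -v /dev/md0 -l1 -n2 /dev/sdb /dev/sdc Note: first mirror pair
mdadm -C -v /dev/md1 -l1 -n2 /dev/sdd /dev/sde Note: second mirror pair
mdadm -C -v /dev/md2 -l0 -n2 /dev/md0 /dev/md1 Note: stripe across the two mirrors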
2 in Linux, the creation and management of soft RAID
In Linux, soft RAID is created and managed through mdadm. Mdadm is a dedicated software for creating and managing RAID. In Linux, most distributions have been installed by default, and mdadm can create any level of soft RAID.
In this section, creating the RAID is not the only goal; we will also learn to check RAID status and to start and stop a RAID, as well as how to use the RAID. So working with RAID includes creation, management and use. Using a RAID means creating a file system on the RAID device and then using it for storage.
The process is:
[RAID creation] -> [RAID management] -> [RAID use]
                         |                  |
                     [RAID maintenance]
2.1 the method of creating RAID
There are two ways to create a RAID
The first method: use mdadm with the -C or --create parameter. In this method the RAID information is written to the superblock of each RAID member; each member's superblock records the RAID level, the members, the UUID of the RAID, and so on. Because the RAID information is kept in every member's superblock, this method makes it easy to recover an existing RAID when the system is reinstalled or a system disaster happens; it is the most commonly used method.
The second method: use mdadm with the -B or --build parameter. This method does not write RAID information into the superblocks of the RAID members, so we cannot learn the RAID level, its members and so on by examining the members; it is therefore not helpful for recovering an existing RAID after reinstalling the system or after a system disaster. If you want to use this second method, just change -C or --create into -B or --build in the syntax below.
Syntax: create, writing the RAID information into the superblock of each RAID member
mdadm -C -v /dev/mdX -lY -nZ [RAID members]
or
mdadm --create --verbose /dev/mdX --level=Y --raid-devices=Z [RAID members]
Note:
-C is short for --create and means create; this way of creating a RAID writes the RAID information into each member's superblock, and it is the most commonly used method.
-v is the same as --verbose; it shows detailed events during the creation process.
If you replace -C or --create with -B or --build, you get the other way of creating a RAID, which does not write RAID information into the members' superblocks; if you want to try it, a minimal sketch appears after these notes.
RAID device: /dev/mdX. RAID devices in Linux are mostly /dev/md0, /dev/md1 and so on; the first device starts at /dev/md0. For example, if you already have a RAID0 device at /dev/md0 and want to make another RAID5, that one becomes /dev/md1, and so on.
RAID level: given with -lY or --level=Y, where Y is the RAID level: 0 for RAID0, 1 for RAID1, 5 for RAID5 and 10 for RAID10. Choose the level according to the intended use and the number of available disks and partitions. If you want fast reads and writes and large capacity and have low data-safety requirements, use RAID0; if the data matters, use RAID1 or RAID5, and then RAID10. For example, -l0 or --level=0 for RAID0, -l5 or --level=5 for RAID5, -l1 or --level=1 for RAID1, -l10 or --level=10 for RAID10.
-nZ or --raid-devices=Z gives the number of RAID members. For example, if we use partitions from three hard drives to form a RAID, that is three devices, written as -n3 or --raid-devices=3. Note that RAID0 and RAID1 require at least two devices, RAID5 at least three devices, and RAID10 at least four devices.
RAID members: the components of the RAID, listed one by one and separated by spaces. For example, to make a RAID0 from /dev/sdb, /dev/sdc and /dev/sdd, we list /dev/sdb /dev/sdc /dev/sdd after the RAID device. Members of software RAID can also be partitions, such as /dev/sdb1, /dev/sdc1 ...
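For the record, a minimal sketch of the second (--build) method might look like the line below; the device names are assumptions, and remember that no superblock is written, so nothing about the array can later be recovered from the members themselves.
mdadm --build /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc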
Example 1: we are going to make a RAID0 with the two hard disk devices /dev/sdb and /dev/sdc as members. We run the following command:
mdadm -C --verbose /dev/md0 -l0 -n2 /dev/sdb /dev/sdc
or
mdadm -C --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
What if we want to make the /dev/sdb1, /dev/sdc1 and /dev/sdd1 partitions into a RAID0?
mdadm -C -v /dev/md0 -l0 -n3 /dev/sd[bcd]1
or
mdadm -C --verbose /dev/md0 --level=0 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
Example 2: we are going to make a RAID5 with the three devices /dev/sdb, /dev/sdc and /dev/sdd. We need to run the following command:
mdadm -C -v /dev/md0 -l5 -n3 /dev/sd[bcd]
or
mdadm -C --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
What if we want to make the /dev/sdb1, /dev/sdc1 and /dev/sdd1 partitions into a RAID5?
mdadm -C -v /dev/md0 -l5 -n3 /dev/sd[bcd]1
or
mdadm -C --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
As soon as the creation is complete, the RAID starts immediately. We will see a prompt similar to the following line:
mdadm: array /dev/md0 started.
We can view the RAID's information with the following commands:
mdadm -Ds /dev/md0
mdadm -D /dev/md0
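Besides mdadm -D, the kernel's own view of all running arrays can be read from /proc/mdstat. This check is not part of the original example, just a quick habit worth having; the output below is only illustrative.
cat /proc/mdstat
Personalities : [raid0]
md0 : active raid0 sdc[1] sdb[0]
156249856 blocks 64k chunks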
2.2 RAID management tools
The management of RAID includes a series of tools such as creation, startup, status viewing, etc. We will only talk about the commonly used methods
2.21 the startup method of RAID
There are two ways to start RAID, one is to start RAID by specifying RAID devices and RAID members, and the other is to start RAID by loading RAID's default configuration file.
The first method: start the RAID without reading mdadm.conf. This applies when you have not configured the /etc/mdadm.conf file.
Syntax:
mdadm -A [RAID device] [RAID members]
Note:
-A is the same as --assemble and means activate an existing RAID.
RAID device: /dev/md0, /dev/md1 or similar, depending on the RAID device you created.
RAID members: the devices belonging to the RAID you want to start, listed one by one and separated by spaces.
For example: I want to start a RAID whose device is /dev/md0 and whose members are /dev/sdb and /dev/sdc, so I use the following method:
[root@linuxsir:~] mdadm -A /dev/md0 /dev/sdb /dev/sdc
Note: this startup method is used when the RAID configuration file /etc/mdadm.conf has not been configured; if you have already configured /etc/mdadm.conf, you can start with mdadm -As.
The second method: use the configured /etc/mdadm.conf to start the RAID
mdadm -A [RAID device]
or
mdadm -As
Note: the premise of this startup method is that the /etc/mdadm.conf file is configured and all the RAIDs in your system are written into it; then they can simply be started with this command.
-A is the same as --assemble and means activate an existing RAID.
RAID device: /dev/md0, /dev/md1 or similar, depending on the RAID device you created.
For example:
[root@linuxsir:~] mdadm -A /dev/md0
[root@linuxsir:~] mdadm -As
Note: for example, after I have configured /etc/mdadm.conf, I start the RAID device /dev/md0 with the methods above. For how to write mdadm.conf, see the section on RAID's configuration file.
2.22 description of some common parameters of RAID management tools
mdadm [parameters] [RAID device] [RAID members]
-A or --assemble: activate a RAID
-S or --stop: stop a running RAID device
-s or --scan: scan for RAID devices
-D or --detail: view the details of a RAID
--examine: view the details stored on a RAID member
Note: the options in [] are optional.
For example:
[root@linuxsir:~] # mdadm -As
[root@linuxsir:~] # mdadm -Ss
[root@linuxsir:~] # mdadm -Ds
[root@linuxsir:~] # mdadm --examine /dev/sdb
Note: the above examples all assume that /etc/mdadm.conf has been configured; if you have not configured mdadm.conf, specify the RAID device and its members explicitly. Here -As searches /etc/mdadm.conf and then starts the RAIDs according to the information configured there; -Ss searches for running RAIDs and then stops them; -Ds searches for RAIDs and shows their information; --examine /dev/sdb views the RAID information stored on a particular hard drive. The latter is useful, for example, when you have forgotten the members and UUID of a RAID and want to recover an existing array: examine the devices first, then restart the RAID.
For example, suppose the system has RAIDs but /etc/mdadm.conf records no corresponding information, and you do not know what type each RAID is (RAID0, RAID1 or RAID5) or how many RAIDs the machine has. If you are a new administrator you will want to know, so you go through the hard disks and partitions one by one, find all the RAIDs in the system, and then recover them one at a time. This is when the --examine parameter is used.
[root@linuxsir:~] # fdisk -l
[root@linuxsir:~] # mdadm --examine /dev/sdb
/dev/sdb:
Magic: a92b4efc
Version: 00.90.00
UUID: 35e1a3e6:ed59c368:e5bc9166:5004fe52
Creation Time: Wed Aug 1 07:11:43 2007
RAID Level: RAID0
Used Dev Size: 0
RAID Devices: 2
Total Devices: 2
Preferred Minor: 0
Update Time: Thu Aug 2 07:43:30 2007
State: active
Active Devices: 2
Working Devices: 2
Failed Devices: 0
Spare Devices: 0
Checksum: 8f8a235e - correct
Events: 0.29
Chunk Size: 64K
Number Major Minor RaidDevice State
this     0     8    16      0      active sync   /dev/sdb
   0     0     8    16      0      active sync   /dev/sdb
   1     1     8    32      1      active sync   /dev/sdc
Note:
First: use fdisk -l to check all the hard drives and partitions in the machine; if it cannot list everything, specify the particular hard drive.
Second: check whether there is RAID information on a given hard disk or partition. For example, I examine /dev/sdb, and the output shows that /dev/sdb is a member of a RAID0 device and that /dev/sdb and /dev/sdc together make up that RAID0.
What is the use of this information? With it we can activate the RAID, or rewrite /etc/mdadm.conf so that the RAID runs again. During this process, do not use the -C or --create parameters to recreate the RAID, otherwise your existing RAID will be destroyed and the data in it will of course be gone! Remember: on a RAID holding data, the -C parameter must not be used casually; -C or --create creates a brand-new RAID device!
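If there are many disks to go through, the same check can be scripted; this is only a convenience sketch (the /dev/sd[b-e] range is an assumption), and mdadm --examine --scan prints ready-made ARRAY lines for everything it finds.
[root@linuxsir:~] # for dev in /dev/sd[b-e]; do echo "== $dev =="; mdadm --examine "$dev"; done
[root@linuxsir:~] # mdadm --examine --scan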
2.3 configuration file for RAID
RAID does not have to have a configuration file, but having one makes management easier, for example starting the RAID and checking its status in the most concise way. Without a configuration file, you have to specify the RAID members explicitly every time.
The configuration file for RAID is mdadm.conf, located in the /etc directory. If you do not have this file, you can create one yourself. Once we have finished making a RAID, we should configure this file, writing all your RAID configuration information into it. We can write it by hand, and it is convenient to refer to the mdadm.conf example file.
You can also use the following method; first make a backup of /etc/mdadm.conf.
[root@linuxsir~] mv /etc/mdadm.conf /etc/mdadm.conf.bak
Step 1: search for RAID
Before searching for RAIDs, the RAIDs must be activated, otherwise the following command has no effect; see above for how to activate a RAID.
Syntax:
mdadm -Ds
Note: -D means --detail and -s means --scan; combined, they are -Ds.
Tip: before running the query, activate the RAID first.
For example:
[root@linuxsir~] mdadm -Ds
ARRAY /dev/md0 level=RAID0 num-devices=2 UUID=35e1a3e6:ed59c368:e5bc9166:5004fe52
Step 2: query the details of RAID; mainly to see who are the members of RAID
Syntax:
mdadm -D [RAID device]
For example:
The following queries the details of the already started RAID device /dev/md0:
[root@linuxsir~] mdadm -D /dev/md0
/dev/md0:
Version: 00.90.03
Creation Time: Wed Aug 1 07:11:43 2007
RAID Level: RAID0
Array Size: 156249856 (149.01 GiB 160.00 GB)
RAID Devices: 2
Total Devices: 2
Preferred Minor: 0
Persistence: Superblock is persistent
Update Time: Thu Aug 2 07:22:27 2007
State: clean
Active Devices: 2
Working Devices: 2
Failed Devices: 0
Spare Devices: 0
Chunk Size: 64K
UUID: 35e1a3e6:ed59c368:e5bc9166:5004fe52
Events: 0.21
Number Major Minor RaidDevice State
   0     8    16      0      active sync   /dev/sdb
   1     8    32      1      active sync   /dev/sdc
Note: from the details we learn that /dev/md0 is a RAID0, that its two members are /dev/sdb and /dev/sdc, that its UUID is 35e1a3e6:ed59c368:e5bc9166:5004fe52, and that this RAID has a superblock.
Step 3: write the configuration file mdadm.conf of RAID
[root@linuxsir~] mdadm -Ds >> /etc/mdadm.conf Note: write the queried RAID information into mdadm.conf
[root@linuxsir~] more /etc/mdadm.conf Note: check whether anything was written
ARRAY /dev/md0 level=RAID0 num-devices=2 UUID=35e1a3e6:ed59c368:e5bc9166:5004fe52
Because we already know from mdadm -D /dev/md0 that it has the two member hard drives /dev/sdb and /dev/sdc, we modify the content of mdadm.conf to add the members /dev/sdb and /dev/sdc of /dev/md0. Open /etc/mdadm.conf with an editor.
On the similar line below,
ARRAY /dev/md0 level=RAID0 num-devices=2 UUID=35e1a3e6:ed59c368:e5bc9166:5004fe52
modify it to
ARRAY /dev/md0 level=RAID0 num-devices=2 UUID=35e1a3e6:ed59c368:e5bc9166:5004fe52 devices=/dev/sdb,/dev/sdc
In fact, this specifies the members of the RAID device /dev/md0, with the devices separated by commas. You can also write it as devices=/dev/sd[bc].
Looking at the /dev/md0 line: it says that /dev/md0 is a RAID0 device with two members, that the UUID of /dev/md0 is 35e1a3e6:ed59c368:e5bc9166:5004fe52, and that the two members are /dev/sdb and /dev/sdc.
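Putting the pieces together, a minimal /etc/mdadm.conf for this example might look like the two lines below; the DEVICE line is optional and is listed here only as an assumption about which disks mdadm should scan.
DEVICE /dev/sdb /dev/sdc
ARRAY /dev/md0 level=RAID0 num-devices=2 UUID=35e1a3e6:ed59c368:e5bc9166:5004fe52 devices=/dev/sdb,/dev/sdc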
In fact, no matter how many RAID devices we add, we can write one line to each RAID device in the RAID configuration file / etc/mdadm.conf in this way. After that, we need to restart RAID.
[root@linuxsir~] mdadm -Ss
or
[root@linuxsir~] mdadm --stop --scan
mdadm: stopped /dev/md0
[root@linuxsir~] mdadm -As
or
[root@linuxsir~] mdadm --assemble --scan
mdadm: /dev/md0 has been started with 2 drives.
Note: -S is the same as --stop and means stop the RAID, while -s is the same as --scan and means scan for RAIDs; -A is the same as --assemble and means activate the RAID. It is all fairly simple; just check man and --help.
After activating RAID, we need to check the status of the RAID to determine whether the RAID is normal and healthy.
3 use of RAID devices: RAID device partition, file system initialization, mount method
Now that we have made the RAID device, we will use it. A freshly made RAID is like a new, unformatted hard drive. What is the first thing we do with a new hard drive? Right: partition it, format it, and install the operating system. Likewise, a finished RAID cannot be used without a file system, so after making the RAID we have to create one. RAID simply binds several hard disks or partitions together into one large virtual physical storage device; to use that large virtual device, we need to create a file system on it. The file systems currently usable on Linux include reiserfs, xfs and ext3. I recommend reiserfs and xfs, which I feel are the safer choices. Although ZFS looks super, I think it is still immature; for heavyweight applications, let's just watch it for a while.
After the RAID is made, we need to initialize a file system on it; once the initialization is complete, we can mount it. In general, we can mount the finished RAID on /home and put all stored files there.
In Linux, the tools that create file systems are mkfs.xfs (create an xfs file system), mkfs.jfs (create a JFS file system), mkfs.reiserfs (create a reiserfs file system), mkfs.ext3 (create an ext3 file system), and so on. We recommend reiserfs and xfs rather than ext2 or ext3. Why not ext? Because practice is the only criterion for testing truth, and in my experience their performance and security are not as good as the others'. How should you choose? I am not an expert on the ext file systems; I simply use the file systems that are most convenient and easiest to maintain.
RAID can also be partitioned, but in my opinion it is not necessary. Those who use RAID are mostly in the server field. We can make the RAID and mount it on the /home directory, keeping everything related to data storage on the RAID; the operating system is not installed on the RAID. When the operating system fails, we just repair or reinstall it, without any effect on the RAID that stores the data. Even if we reinstall the operating system, we can have the RAID restored within a few minutes.
If you do want to partition the RAID, you can use fdisk, parted or cfdisk, or you can try LVM to manage the space; LVM can adjust partition sizes automatically. That said, I would not recommend RAID+LVM, nor partitioning the RAID.
Once the RAID is made, we use it just like a physical hard disk. For example, following the earlier example, we make the two drives /dev/sdb and /dev/sdc into a RAID0 whose device is /dev/md0, and we can operate on /dev/md0 just as on a physical hard drive. It is even simpler if we skip partitioning and just create a file system.
For example, to create a reiserfs file system on /dev/md0, we can use the mkfs.reiserfs command.
Step 1: check whether the /dev/md0 device exists, and its capacity
[root@linuxsir:~] # fdisk -l /dev/md0
Disk /dev/md0: 159.9 GB, 159999852544 bytes
2 heads, 4 sectors/track, 39062464 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk /dev/md0 doesn't contain a valid partition table
Note: we can see that the /dev/md0 device has a capacity of 159.9GB and does not contain a valid partition table. If you want to use partitions, run fdisk /dev/md0, cfdisk /dev/md0 or parted /dev/md0.
Step 2: create a file system
Here we plan to use the reiserfs file system
[root@linuxsir:~] # mkfs.reiserfs /dev/md0
mkfs.reiserfs 3.6.19 (2003 www.namesys.com)
A pair of credits:
Nikita Danilov wrote most of the core balancing code, plugin infrastructure and directory code. He steadily worked long hours, and is the reason so much of the Reiser4 plugin infrastructure is well abstracted in its details. The carry function, and the use of non-recursive balancing, are his idea.
Oleg Drokin was the debugger for V3 during most of the time that V4 was under development, and was quite skilled and fast at it. He wrote the large write optimization of V3.
Guessing about desired format.. Kernel 2.6.21.5-smp is running.
Format 3.6 with standard journal
Count of blocks on the device: 39062464
Number of blocks consumed by mkreiserfs formatting process: 9404
Blocksize: 4096
Hash function used to sort names: "R5"
Journal Size 8193 blocks (first block 18)
Journal Max transaction length 1024
Inode generation number: 0
UUID: 2b06b743-8a4e-4421-b857-68eb2176bc50
ATTENTION: YOU SHOULD REBOOT AFTER FDISK!
ALL DATA WILL BE LOST ON '/dev/md0'!
Continue (y/n): y Note: enter y here to create the file system
Initializing journal - 0% 20% 40% 60% 80% 100%
Syncing..ok
Tell your friends to use a kernel based on 2.4.18 or later, and especially not a kernel based on 2.4.9, when you use reiserFS. Have fun.
ReiserFS is successfully created on /dev/md0.
Thus the reiserfs file system has been created successfully. If you want to create an xfs file system, use mkfs.xfs /dev/md0; other file systems are similar.
Step 3: mount the file system and use it
[root@linuxsir:~] # mkdir /mnt/data
[root@linuxsir:~] # mount /dev/md0 /mnt/data
[root@linuxsir:~] # df -lh /dev/md0
Filesystem Size Used Avail Use% Mounted on
/dev/md0 150G 33M 149G 1% /mnt/data
Note: this mounts the /dev/md0 device at /mnt/data. You can see that the device size is 150G, that 33M is already used, and that the mount point is /mnt/data. We can now store files on the device.
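If the RAID should be mounted automatically at boot, an entry can be added to /etc/fstab; this is a generic sketch assuming the reiserfs file system and the /mnt/data mount point used above.
/dev/md0 /mnt/data reiserfs defaults 0 0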
In fact, given the structure of the Linux file system, and how easy recent Linux software is to use, we can simply separate /home and mount the RAID device on the /home directory. Anything that involves users or data storage can be placed under /home; for example, databases and web servers that store data can be pointed at folders under /home. Everything is arranged with convenient management in mind.
If your RAID is created after the system has been installed and you want to mount it on an existing Linux directory, you should first migrate the data of that directory onto the RAID device and then mount the RAID on the directory. For example, to mount the RAID on /home, first create a temporary directory and mount the RAID there, then move everything under /home onto the RAID, then unmount the RAID, and finally mount it again on /home. This migrates the data of /home.
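A rough sketch of that migration might look like the following; the temporary directory name is an assumption, and it is best done when no users are logged in and nothing is writing to /home.
[root@linuxsir:~] # mkdir /mnt/tmpraid
[root@linuxsir:~] # mount /dev/md0 /mnt/tmpraid
[root@linuxsir:~] # cp -a /home/. /mnt/tmpraid/ Note: copy everything, preserving permissions and ownership
[root@linuxsir:~] # umount /mnt/tmpraid
[root@linuxsir:~] # mount /dev/md0 /home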
Different Linux distributions have different approaches to how to install or move the operating system to RAID. Fedora or Redhat provides us with the ability to install the system on RAID1 during the installation process. I don't know whether other distributions support it or not, but Slackware doesn't. If you want to migrate the system to RAID1, you may need to install the system before migrating. It feels like it's not necessary to do soft RAID1 on one hard disk. If you want to do RAID1, you have to do it on two hard drives (or two partitions belonging to different hard drives).
How to boot and mount RAID devices, different distributions also have their own methods, the process is to activate RAID first, and then mount.
4 advanced use and maintenance of soft RAID
Once the RAID is made, it is not the case that everything is fine forever; daily maintenance still has to be done. For example, if a hard drive (or partition) fails, we can replace it without downtime; or we give the RAID a spare hard disk or partition, so that when a member fails the spare is automatically brought into service, achieving zero-downtime operation.
4.1 how to add a hard disk or partition to an existing RAID
mdadm has several modes, such as create and manage. What we discuss below is RAID's manage mode (Manage), which can also be called the advanced use of RAID. The purpose of manage mode is only to replace problematic RAID members, to swap one member for another for administrative reasons, or to add a newly installed hard drive or partition as a spare member of the RAID for safety. In manage mode, the number of real RAID members does not change. For example, if we use three hard drives or partitions for a RAID5, then even after adding a member the RAID5 still has three real members and the capacity stays the same; if the three real members are completely healthy, the newly added member is just a spare. The point of a spare member is that when a real member of the RAID has a problem, the spare is brought into service immediately, purely for redundancy and safety.
To add or remove a hard disk or partition in an existing RAID, we use the -f, -r and -a parameters of the mdadm tool:
mdadm /dev/mdX -f [RAID member]
mdadm /dev/mdX -r [RAID member]
mdadm /dev/mdX -a [RAID member]
Note:
-f is the same as --fail: mark a device as faulty so that it can then be removed with the -r or --remove parameter
-r is the same as --remove: remove a member from the RAID
-a is the same as --add: add a member to the RAID
--re-add: re-add a recently removed member back to the RAID
It is worth mentioning that these parameters must be used while the RAID is running normally. Here the RAID device is /dev/mdX, where X is an integer starting from 0, and the RAID member is a hard disk or partition. Adding a device does not increase the capacity of the RAID; it only adds a spare member, which is especially useful for RAID1, RAID5 and RAID10. When a RAID member fails, this is how a new member is brought in to take its place.
For example:
[root@linuxsir:~] # mdadm /dev/md0 -f /dev/sdb3
[root@linuxsir:~] # mdadm /dev/md0 -r /dev/sdb3
[root@linuxsir:~] # mdadm /dev/md0 -a /dev/sdb5
Note: when we want to remove the member /dev/sdb3 from the RAID device /dev/md0, we first mark it as faulty (it may actually be perfectly healthy; we may want to remove it purely for administrative reasons), then remove it with the -r parameter, and then add another device /dev/sdb5 to the RAID device /dev/md0 with the -a parameter.
When we use mdadm -D /dev/md0 to check the RAID, we see its details, for example whether it is healthy and who its members are. Here is what we need to know:
Raid Level: note: array level; for example, Raid5
Array Size: note: the array capacity
Used Dev Size: note: RAID unit member capacity, that is, the capacity of the member hard drives or partitions that make up the RAID
Raid Devices: note: number of RAID members
Total Devices: the total number of subordinate members in RAID, because there are redundant hard drives or partitions, that is, spare, which can be pushed to join RAID at any time for the normal operation of RAID.
State: clean, degraded, recovering Note: status, including three states. Clean indicates normal, degraded indicates problem, and recovering indicates recovery or construction.
Active Devices: number of RAID members activated
Working Devices: note: number of working RAID members
Failed Devices: the RAID member with the problem
Spare Devices: the number of spare RAID members. When a member of RAID has a problem and is replaced by another hard disk or partition, the RAID will build. If the build is not completed, this member will also be considered to be a spare device.
Rebuild Status: note: the construction progress of RAID, such as 38% complete, indicates that it is built to 38%.
UUID: note: the UUID value of RAID is unique in the system
Number Major Minor RaidDevice State
   0     8    17      0      active sync   /dev/sdb1   Note: this member is active
   1     8    18      1      active sync   /dev/sdb2   Note: this member is active
   4     8    19      2      spare rebuilding   /dev/sdb3   Note: not yet active, still being rebuilt; data is being synchronized
   3     8    49      -      spare   /dev/sdd1
Note: spare /dev/sdd1 indicates that /dev/sdd1 is a spare member of the RAID. This spare member starts working automatically when one of the full members /dev/sdb1, /dev/sdb2 or /dev/sdb3 has a problem. A spare is not mandatory; you can add one later by adding a RAID member, or specify it when you create the RAID (see the sketch below).
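As a sketch of specifying the spare at creation time instead, mdadm's -x (--spare-devices) option can be used; the command below is only an illustration of how an array like the one above could have been created, with three active members and one spare, four devices in total.
mdadm -C -v /dev/md0 -l5 -n3 -x1 /dev/sdb1 /dev/sdb2 /dev/sdb3 /dev/sdd1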
4.2 how to expand the capacity of an existing RAID
In the RAID management model, we mentioned the way to add RAID members, so that if all real members of RAID are healthy, that member goes into a spare state. It is only when there is a problem with the real member that this alternate member is enabled to replace the faulty member to work.
But can we add a new member to a RAID as a true member and so expand the RAID's capacity? For example, take a RAID5 made of three 20G partitions, with a total capacity of (3-1) x 20G = 40G. Can we add another 20G partition to this RAID5 as a real member and thereby expand the capacity, so that the RAID5 has four real members and a capacity of (4-1) x 20G = 60G?
This is easy with hardware RAID, but can it be done with soft RAID? The answer is yes. This situation only applies to a RAID that is already made and that, during use, turns out to have too little capacity. If the RAID is newly made and we find the plan was wrong, we can simply redo it; there is no need to expand it.
We expand the capacity of the existing RAID, using RAID's Grow model, which is translated into RAID's growth model; the scope of application is RAID1, RAID4, RAID5, RAID6.
RAID expansion process:
Add a member to an existing RAID-> execute the expansion instruction
Note: here we will use the method of adding members in RAID's management mode. That is, the-a parameter in mdadm is used, please refer to the previous section. At this point, the added member is a spare member, and we need to "push" the alternate member to the location. At this point, we will use mdadm's Grow mode.
Examples are as follows:
For example, the RAID5 we made consists of the three hard disk partitions /dev/sdb1, /dev/sdc1 and /dev/sdd1, so the RAID5 has three real members. When we add the partition /dev/sdb2 to this RAID5, the newly added sdb2 is a spare member of the RAID5. Assume the existing RAID5 device is /dev/md0.
First, check the RAID status
[root@linuxsir:~] # mdadm -D /dev/md0
/dev/md0:
Version: 00.90.03
Creation Time: Tue Aug 7 01:55:23 2007
Raid Level: raid5 Note: RAID level
Array Size: 39069824 (37.26 GiB 40.01 GB) Note: RAID capacity is 39069824
Used Dev Size: 19534912 (18.63 GiB 20.00 GB) Note: the capacity of each member in RAID is 19534912
Raid Devices: 3 Note: the true member of RAID is made up of 3 devices.
Total Devices: 3 Note: total number of devices is 3
Preferred Minor: 0
Persistence: Superblock is persistent
Update Time: Tue Aug 7 02:02:33 2007
State: clean Note: status is normal
Active Devices: 3 Note: the number of activated devices is 3; in fact, it is the number of RAID real members that are activated normally.
Working Devices: 3 Note: there are 3 devices working properly.
Failed Devices: 0 Note: there are 0 devices in question
Spare Devices: 0 Note: 0 standby devices
Layout: left-symmetric
Chunk Size: 64K
UUID: faea1758:0e2cf8e0:800ae4b7:b26f181d Note: UUID of RAID
Events: 0.16
Number Major Minor RaidDevice State
   0     8    17      0      active sync   /dev/sdb1   Note: real member of the RAID, /dev/sdb1
   1     8    33      1      active sync   /dev/sdc1   Note: real member of the RAID, /dev/sdc1
   2     8    49      2      active sync   /dev/sdd1   Note: real member of the RAID, /dev/sdd1
Second, we add a member to the RAID5
Add /dev/sdb2 to the RAID device /dev/md0, and then view the status and details of the RAID.
[root@linuxsir:~] # mdadm /dev/md0 -a /dev/sdb2 Note: add the partition /dev/sdb2 to /dev/md0
mdadm: added /dev/sdb2
[root@linuxsir:~] # mdadm -D /dev/md0 Note: view the details of /dev/md0
/dev/md0:
Version: 00.90.03
Creation Time: Tue Aug 7 01:55:23 2007
Raid Level: raid5 Note: RAID level; raid5
Array Size: 39069824 (37.26 GiB 40.01 GB) Note: RAID capacity is 39069824
Used Dev Size: 19534912 (18.63 GiB 20.00 GB) Note: the capacity of each member in RAID is 19534912
Raid Devices: 3 Note: the true member of RAID is made up of 3 devices.
Total Devices: 4 Note: total equipment is 4
Preferred Minor: 0
Persistence: Superblock is persistent
Update Time: Tue Aug 7 02:14:13 2007
State: clean Note: status is normal
Active Devices: 3 Note: the number of activated devices is 3; in fact, it is the number of RAID real members that are activated normally.
Working Devices: 4 Note: there are 4 working devices.
Failed Devices: 0
Spare Devices: 1
Layout: left-symmetric
Chunk Size: 64K
UUID: faea1758:0e2cf8e0:800ae4b7:b26f181d
Events: 0.18
Number Major Minor RaidDevice State
   0     8    17      0      active sync   /dev/sdb1   Note: real member of the RAID, /dev/sdb1
   1     8    33      1      active sync   /dev/sdc1   Note: real member of the RAID, /dev/sdc1
   2     8    49      2      active sync   /dev/sdd1   Note: real member of the RAID, /dev/sdd1
   3     8    18      -      spare   /dev/sdb2   Note: spare member of the RAID, /dev/sdb2
After adding /dev/sdb2 to /dev/md0, we see that the total number of devices in the RAID has gone from 3 to 4, but the number of real members has not changed; there is simply one more spare member, /dev/sdb2, and the capacity of /dev/md0 has not grown. So we now need to expand the RAID's capacity, and the way to do that is to make /dev/sdb2 a real member of the RAID, growing the RAID from 40G to 60G.
Third, expand the capacity of RAID
Here we use RAID's Grow mode, that is, the growth mode. The expansion itself is very simple; it uses the --size parameter and the -n parameter. --size refers to the size of the RAID and can usually be omitted, depending on the RAID level you use, while -n gives the number of real members of the RAID. In this example the RAID5 has three real members and we have added the spare member /dev/sdb2; what we do now is "push" this spare member into a real member position, so the real members of the RAID go from three to four. This one simple command increases the capacity of the RAID5.
[root@linuxsir:~] # mdadm -G /dev/md0 -n4
mdadm: Need to backup 384K of critical section..
mdadm: ... critical section passed.
Then we look at the details of RAID
[root@linuxsir:~] # mdadm -D /dev/md0
/dev/md0:
Version: 00.91.03
Creation Time: Tue Aug 7 01:55:23 2007
Raid Level: raid5
Array Size: 39069824 (37.26 GiB 40.01 GB) Note: for the capacity of RAID, we found that the capacity of RAID did not increase because the build was not completed; it will change when the build is completed.
Used Dev Size: 19534912 (18.63 GiB 20.00 GB)
Raid Devices: 4
Total Devices: 4
Preferred Minor: 0
Persistence: Superblock is persistent
Update Time: Tue Aug 7 02:36:06 2007
State: clean, recovering Note: normal, recovering
Active Devices: 4 Note: the number of full members of RAID has changed to 4.
Working Devices: 4
Failed Devices: 0
Spare Devices: 0 Note: the number of backup members has been reduced from 1 to 0, indicating that RAID backup members have been pushed to full members of RAID.
Layout: left-symmetric
Chunk Size: 64K
Reshape Status: 17% complete Note: RAID rebuild status, 17% completed; currently, build is not completed
Delta Devices: 1, (3-> 4) Note: the full membership of RAID has been increased by one, from 3 to 4
UUID: faea1758:0e2cf8e0:800ae4b7:b26f181d
Events: 0.100
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 33 1 active sync /dev/sdc1
2 8 49 2 active sync /dev/sdd1
3 8 18 3 active sync /dev/sdb2 Note: /dev/sdb2 has changed from spare to active sync; that is, it has been promoted from a spare member to a real member
After running the grow command, we find that the RAID capacity has not increased yet because the reshape has not finished. Once the reshape of the RAID completes, the capacity will become 19534912 x (4-1) = 58604736 KB. The reshape progress can also be checked with cat /proc/mdstat.
Adding a new real member in this way does not lose or destroy the RAID's existing data. So for a RAID that already stores a large amount of data and whose capacity is running out, this is a safe and feasible way to expand without losing the existing data. Of course, after the expansion you also have to update /etc/mdadm.conf, as sketched below.
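A sketch of those follow-up steps, assuming the array carries a reiserfs file system as in section 3: regenerate the ARRAY line (and re-add the devices= part by hand as before), then grow the file system so it can actually use the new space. resize_reiserfs comes with reiserfsprogs; an xfs file system would use xfs_growfs on the mount point instead.
[root@linuxsir:~] # mdadm -Ds Note: copy the fresh ARRAY line into /etc/mdadm.conf in place of the old one
[root@linuxsir:~] # resize_reiserfs /dev/md0 Note: grow the file system to fill the enlarged device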
4.3 how to start RAID when the number of full members of RAID does not meet the RAID startup requirements
There may be a situation where a real member of the RAID has died, so the RAID cannot be started in the usual way described earlier. In that case we have to force it to start, using the --run parameter. For example, with RAID5 we use three hard drives or partitions; when one of them dies, by the nature of RAID5 the data is still safe and complete, but the normal way of starting a RAID5 requires the number of real members specified when the RAID was made, so the normal method will not work. We need the --run parameter.
Here is an example. Suppose the RAID5 has the three real members /dev/sdb1, /dev/sdb2 and /dev/sdb3, and we start the RAID5 with only /dev/sdb1 and /dev/sdb2:
[root@linuxsir:~] # mdadm -A --run /dev/md0 /dev/sdb1 /dev/sdb2
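Once the degraded array is running, a replacement disk or partition can be added with the -a parameter from section 4.1 so that the RAID5 rebuilds itself; /dev/sdb3 below stands for whatever new device takes the dead member's place, and the rebuild progress can be watched in /proc/mdstat.
[root@linuxsir:~] # mdadm /dev/md0 -a /dev/sdb3
[root@linuxsir:~] # cat /proc/mdstat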
5 discussion on the use direction of soft RAID equipment
Soft RAID turns several physical disks or partitions of the same capacity into one large virtual device, so what should we use it for? From the definition of RAID we know that RAID was proposed to solve the problems of capacity, read/write efficiency, and disk redundancy (safety).
5.1 is it necessary to do RAID just to expand the available storage space?
If it is only to solve the capacity problem, I do not think RAID is necessary, because LVM is more flexible than RAID: however you arrange it, it causes no capacity loss, whereas among the RAID levels only RAID0 and LINEAR avoid capacity loss; because of their security redundancy, RAID1, RAID5 and RAID10 always reduce the usable capacity.
LVM can combine all idle hard drives or partitions, and it does not require every partition or hard disk to be the same size; RAID, on the other hand, requires its members to have the same capacity, and if they differ, everything is calculated from the smallest member. On top of that comes the capacity loss from security redundancy: for example, two 80G hard drives made into a RAID1 give only the capacity of one drive. A finished LVM is also like a blank virtual device: it can be divided into one partition or several, and if several, the partition sizes can be adjusted later. Once a RAID is made and then partitioned, the partition sizes cannot be adjusted freely.
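For comparison, a bare-bones LVM sketch looks like this; the device names, the volume group name vg0 and the sizes are all assumptions, and the point is only that the logical volume can later be grown without rebuilding anything.
pvcreate /dev/sdb1 /dev/sdc1
vgcreate vg0 /dev/sdb1 /dev/sdc1
lvcreate -L 100G -n data vg0
lvextend -L +20G /dev/vg0/data Note: the file system on it must then be grown as well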
Some brothers will ask me if I should do a good job of RAID, and then do LVM, that is, RAID+LVM mode, on RAID. This solution is not difficult to implement, but is it really valuable for soft RAID? The purpose of using RAID is nothing more than "capacity + read and write efficiency + security". Is it necessary for us to split the finished RAID into pieces? I don't think it's necessary, because for storage devices, each enhanced management technology means a risk, which comes from the technical level of the administrator and from the aging of the device. In addition, focusing on non-partitioned storage devices can also bring convenience for data migration and system management.
5.2 is it necessary to do RAID on the same hard disk
Is it necessary to do RAID on a single hard disk? If you want to improve read and write speed, it can still be worthwhile: RAID0 will give you the pleasure of fast storage. But if you want soft RAID on a single drive in order to balance efficiency and safety, I think you can skip it, because when that drive breaks, all of the important data goes with it.
5.3 the direction of rational use of soft RAID
Currently, machines that support SATA motherboards can only have up to four hard drives. For example, four 80g SATA hard drives, IDE hard drives are the same; we have to do RAID according to our own direction of use. Let me illustrate the direction of reasonable use of RAID based on an example.
The first hard disk is partitioned as follows:
/dev/sda1 20G
/dev/sda2 20G
/dev/sda3 20G
/dev/sda5 swap partition - twice the memory size
/dev/sda6 for /tmp, 2G
/dev/sda7
Note: we install the operating system on the first partition /dev/sda1; the swap partition is /dev/sda5, and /dev/sda6 holds the temporary directory /tmp. What are /dev/sda2, sda3 and sda7 for? They can also be used to install systems. What we are designing is a machine whose system partitions only hold the operating system, while everything that involves saving data goes on the RAID. For example, I install the same system on sda1 and sda2, mount the RAID made from the second, third and fourth hard drives on /home, and keep all application data on the RAID. When the sda1 system is destroyed, we can bring up the sda2 system in the shortest time and mount the RAID made from the second, third and fourth drives under it.
For the second, third and fourth hard drives, we can use the whole disks for the RAID without partitioning each one. For example, for read/write efficiency we can make a RAID0; for safety we can make a RAID5. The capacity of the RAID0 device would be 3x80G=240G, and the capacity of the RAID5 device would be (3-1)x80G=160G. Some brothers may ask: why not partition the disks and make RAID0+RAID1, that is RAID10, which has both safety and efficiency? That scheme is also possible, but you must arrange it so that when one hard drive breaks, the overall safety of the data is not affected; in other words, when one drive breaks, the other two drives can still be combined into a complete copy of the data. Then, when one of the drives in the RAID breaks, we can replace the drive and do a simple repair to get the RAID working properly again with the data intact. If you attach great importance to data safety and can arrange your soft RAID this way, this RAID solution belongs to you.
So when doing soft RAID, you should first understand your goal and then judge the expected effect against it. If you are purely after read/write efficiency, you do not have to consider data safety; if data safety is extremely important, you have to judge whether the data stays intact when one hard disk breaks. For example, making a RAID5 or RAID10 out of partitions on only two hard drives gives, so to speak, no safety at all: no matter how you partition and combine them, it will not bring you the slightest sense of security.
6 frequently asked questions and solutions
Some additions and supplements are involved; they are listed here one by one.
6.1 how to clear the RAID information stored in the superblock of a RAID member
The RAID information stored in each member's superblock is extremely important; with it we can easily restore a RAID.
mdadm --zero-superblock [RAID member]
If you are sure a RAID member is of no further use to you, you have already removed it from the RAID, and you want to use the device for other purposes, you can clear its superblock information. For example:
[root@linuxsir:~] # mdadm --zero-superblock /dev/sdd3
This example clears the RAID information stored in the superblock of /dev/sdd3.
7 about this article
In fact, soft RAID is fairly simple to use; the difficulty lies in later management and maintenance. A tutorial on using soft RAID could really be explained with just a few commands. But thinking of the brothers who are new to Linux and who, like me when I was learning, always look for a step-by-step tutorial to achieve what they want, I wrote this rather long article with the beginner's way of thinking in mind.
This article is perhaps not organized tightly enough, and mdadm's modes are not presented to the reader as such; instead they are split up and placed into the specific applications.
I wrote this article purely from the perspective of a newcomer's understanding, so there is a lot of non-standard language, which is only natural; the hardest part is that I do not know how to translate some of the professional terms.