
Principles and Construction of RAID Disk Arrays


RAID (Redundant Array of Independent Disks) provides higher speed and better data security than an ordinary single disk, so servers are usually set up with RAID at installation time.

There are two ways to create RAID: soft RAID (implemented by the operating system in software) and hard RAID (implemented by a hardware array card). RAID 1, RAID 10 and RAID 5 are the levels most commonly used in enterprises. With the rapid development of the cloud, however, providers generally take care of the hardware side.
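For example, a quick way to tell which kind a given Linux server is using (a minimal sketch; the output varies by machine):

lspci | grep -i raid    # lists any hardware RAID controller on the PCI bus
cat /proc/mdstat        # shows software (md) RAID arrays managed by the kernel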

1.1 Common RAID classifications

After years of development, RAID now has seven basic levels, RAID 0 through RAID 6.

RAID 0: data striping, no parity

RAID 1: data mirroring, no parity

RAID 2: Hamming-code error checking and correction

RAID 3: data read and written in stripes, with the parity information stored on a dedicated disk

RAID 4: data written in independent blocks (one disk at a time), with the parity information stored on a dedicated disk

RAID 5: data striping, with the parity information distributed across all disks

RAID 6: data striping, with distributed parity providing two levels of redundancy

In addition, there are combinations of the basic RAID levels, such as RAID 10 (a combination of RAID 0 and RAID 1), RAID 50 (a combination of RAID 0 and RAID 5), and so on.

Note: different RAID levels represent different trade-offs between storage performance, data security and storage cost.

RAID 01 (0+1)

RAID 0 first, then RAID 1: it provides data striping and mirroring at the same time.

RAID 10 (1+0)

Similar to RAID 01, except that RAID 1 is done first, followed by RAID 0.

RAID 50 (5+0)

RAID 5 first, then RAID 0; this effectively improves the performance of RAID 5.

2.1 RAID-0

Striping (stripe)

Number of disks required: 2 or more (preferably of the same size)

It is the simplest form of disk array, requiring at least 2 hard disks.

Features:

Low cost; it improves the performance and throughput of the whole set of disks.

RAID 0 provides no redundancy or error-repair capability, which is why it is fast.

Damage to any one disk destroys all of the data; disk utilization is 100%.

2.2 RAID-1

Mirroring (Mirror Volume)

Requires two or more disks (two, or three).

Principle: data written to one disk is mirrored onto another; that is, when data is written to one disk, a mirror copy is generated on another idle disk (synchronously).

RAID 1 (mirror volume) requires at least two hard drives. The array size equals the capacity of the smaller of the two RAID partitions (it is best to make the partitions the same size). The data is redundant: during storage it is written to both hard disks at the same time, thereby providing a backup of the data.

Disk utilization is 50%; that is, two 100 GB disks form a RAID 1 that provides only 100 GB of usable space.
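As a minimal sketch only (the partitions /dev/sdb8 and /dev/sdb9 are hypothetical and not part of the lab environment used later), a RAID 1 mirror could be created with mdadm like this:

mdadm -C -v /dev/md1 -l 1 -n 2 /dev/sdb8 /dev/sdb9    # -l 1 = mirror, -n 2 = two members
cat /proc/mdstat                                       # watch the initial resync of the mirror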

2.3 RAID-5

Requires three or more hard drives and can use a hot spare for fault recovery. If only one disk fails there is no problem, but if two disks fail at the same time the data is lost. Space utilization: (n-1)/n, e.g. 2/3 with three disks.

The role of parity information:

When the data on one disk of a RAID 5 array is damaged, the remaining data and the corresponding parity information are used to recover it.

Extended: the XOR operation

XOR is a relatively simple logical operation (identical bits give 0, different bits give 1):

A value   B value   XOR result
0         0         0
1         0         1
0         1         1
1         1         0
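A minimal shell sketch of how this applies to parity recovery (the byte values are made up purely for illustration):

a=0x5A; b=0x3C
parity=$(( a ^ b ))              # the parity block stores A XOR B
recovered_a=$(( parity ^ b ))    # if the disk holding A fails, XOR of the parity and B restores A
printf 'parity=%#x recovered_a=%#x\n' "$parity" "$recovered_a"    # prints parity=0x66 recovered_a=0x5a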

2.4 RAID10

Mirror + stripe

RAID 10 is a RAID level that combines mirroring and striping at two levels: the first level is RAID 1 mirror pairs, and the second level is RAID 0 striping across those pairs. RAID 10 is also a widely used RAID level.

These characteristics make RAID 1+0 particularly suitable for scenarios with a large volume of data access and strict requirements for data security, such as banking, finance, commercial supermarkets, warehousing and various kinds of file management.

Create the RAID 1 arrays first, and then create a RAID 0 using the RAID 1 devices, as in the sketch below.
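A minimal sketch of that two-step construction with mdadm (the member partitions are hypothetical):

mdadm -C /dev/md1 -l 1 -n 2 /dev/sdc1 /dev/sdd1    # first mirror pair
mdadm -C /dev/md2 -l 1 -n 2 /dev/sde1 /dev/sdf1    # second mirror pair
mdadm -C /dev/md10 -l 0 -n 2 /dev/md1 /dev/md2     # stripe across the two mirrors

mdadm can also build the equivalent layout in a single step with -l 10 -n 4.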

2.5 Comparison and choice of RAID levels

2.6 RAID hard disk failure handling

There are generally two handling mechanisms: hot spare (hot standby) and hot swap (hot plug).

Hot spare (HotSpare)

Definition: when a hard disk in a redundant RAID group fails, another normal spare hard disk in the RAID system automatically takes the place of the failed disk, without interfering with normal use of the RAID system, so that the redundancy of the RAID system is restored in time.

Global: the spare hard disk is shared by all redundant RAID groups in the system

Dedicated: the spare hard disk is dedicated to one specific redundant RAID group in the system.

Hot swap (HotSwap)

Definition: physically replace the failed hard disk in the RAID system with a normal hard disk without affecting the normal operation of the system

The key point is the protection mechanism for the electronics during hot plugging.

Example (figure omitted): a global hot spare shared by two RAID groups in the system can automatically replace a failed hard disk in either group.
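A minimal sketch of configuring a hot spare and simulating a failure with mdadm (all device names here are hypothetical):

mdadm -C /dev/md5 -l 5 -n 3 -x 1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1    # 3 active members plus 1 spare
mdadm /dev/md5 -f /dev/sdc1    # mark one member as faulty; the spare takes over and a rebuild starts
mdadm -D /dev/md5              # confirm that the former spare is now rebuilding/active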

Chapter 3 RAID Cards

RAID is generally divided into hard RAID and soft RAID. Hard RAID implements the RAID functions in hardware: stand-alone RAID cards and the RAID chips integrated on the motherboard are both hard RAID. Soft RAID implements RAID through software, using the CPU to do the RAID calculations, so it consumes a noticeable amount of CPU resources; most server equipment uses hardware RAID.

3.2 soft RAID

Tool for managing soft RAID: mdadm

mdadm is the command used to create and manage software RAID under Linux, and it is a mode-based command.

Explanation of common parameters:

-C or --create          Create a new array
-r or --remove          Remove a device from the array
-A or --assemble        Assemble (activate) a disk array
-l or --level=          Set the RAID level of the array (0, 1, 4, 5, 6)
-D or --detail          Print details of the array device
-n or --raid-devices=   Specify the number of array members (partitions/disks)
-s or --scan            Scan the configuration file or /proc/mdstat for missing array information
-x or --spare-devices=  Specify the number of spare devices in the array
-f or --fail            Mark a device as faulty
-c or --chunk=          Set the chunk size of the array, in KB
-a or --add             Add a device to the array
-G or --grow            Change the size or shape of an active array
-v or --verbose         Show details
-S or --stop            Stop the array

Chunk (block): the size of each stripe segment when RAID stores data, for example 4 KB or 64 KB.
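A minimal sketch of some of the management options above applied to an existing array (the array and device names are hypothetical):

mdadm /dev/md1 -f /dev/sdc1    # mark a member as faulty
mdadm /dev/md1 -r /dev/sdc1    # remove the faulty member from the array
mdadm /dev/md1 -a /dev/sdd1    # add a replacement device
mdadm -S /dev/md1              # stop the array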

3.3 Hands-on: RAID 0

Environment: add a hard drive (sdb) and create two partitions on it: sdb6 and sdb7

3.3.1 Create RAID 0

[root@xuegod72 ~]# rpm -qf `which mdadm`
mdadm-3.3.2-7.el7.x86_64
[root@xuegod72 ~]# mdadm -C -v /dev/md0 -l 0 -n 2 /dev/sdb6 /dev/sdb7
mdadm: chunk size defaults to 512K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

3.3.2 View array information

[root@xuegod72 ~]# mdadm -D
mdadm: No devices given.
[root@xuegod72 ~]# mdadm -Ds
ARRAY /dev/md0 metadata=1.2 name=xuegod72:0 UUID=551f2150:ccb1c188:7fcf3cc0:1c9144d3
[root@xuegod72 ~]# mdadm -D /dev/md0
/dev/md0:
Version: 1.2
Creation Time: Mon Oct 24 22:12:25 2016
Raid Level: raid0
Array Size: 2095104 (2046.34 MiB 2145.39 MB)
Raid Devices: 2
Total Devices: 2
Persistence: Superblock is persistent
Update Time: Mon Oct 24 22:12:25 2016
State: clean
Active Devices: 2
Working Devices: 2
Failed Devices: 0
Spare Devices: 0
Chunk Size: 512K
Name: xuegod72:0 (local to host xuegod72)
UUID: 551f2150:ccb1c188:7fcf3cc0:1c9144d3
Events: 0

Number  Major  Minor  RaidDevice  State
0       8      22     0           active sync   /dev/sdb6
1       8      23     1           active sync   /dev/sdb7

Chunk value: each stripe is divided into many chunks. If the chunk size (chunksize) is set too small, the number of chunks needed to store the data increases accordingly.
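For reference only (a sketch, not part of the transcript above), the chunk size could have been set explicitly at creation time with the -c option:

mdadm -C /dev/md0 -l 0 -n 2 -c 64 /dev/sdb6 /dev/sdb7    # use a 64 KB chunk instead of the 512 KB default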

We can also save this configuration information.

[root@xuegod72 ~]# mdadm -Ds
ARRAY /dev/md0 metadata=1.2 name=xuegod72:0 UUID=551f2150:ccb1c188:7fcf3cc0:1c9144d3
[root@xuegod72 ~]# mdadm -Ds > /etc/mdadm.conf
[root@xuegod72 ~]# cat !$
cat /etc/mdadm.conf
ARRAY /dev/md0 metadata=1.2 name=xuegod72:0 UUID=551f2150:ccb1c188:7fcf3cc0:1c9144d3
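With the array recorded in /etc/mdadm.conf, it can later be reassembled from that file, for example after being stopped (a sketch only, not part of the transcript above):

mdadm -S /dev/md0    # stop the array
mdadm -A -s          # assemble all arrays listed in /etc/mdadm.conf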

3.3.3 Create a partition on the new RAID 0

When partitioning, we give it all of the space.

[root@xuegod72 ~]# fdisk /dev/md0
Welcome to fdisk (util-linux 2.23.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x6c8bd2c5.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p):
Using default response p
Partition number (1-4, default 1):
First sector (2048-4190207, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-4190207, default 4190207):
Using default value 4190207
Partition 1 of type Linux and of size 2 GiB is set

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@xuegod72 ~]# ls /dev/md*
/dev/md0  /dev/md0p1

3.3.4 Format the partition and mount it

[root@xuegod72 ~]# mkfs.xfs /dev/md0p1
meta-data=/dev/md0p1             isize=256    agcount=8, agsize=65408 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=523264, imaxpct=25
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@xuegod72 ~]# mkdir /raid0
[root@xuegod72 ~]# mount /dev/md0p1 /raid0/
[root@xuegod72 ~]# df -h | tail -1
/dev/md0p1      2.0G   33M  2.0G   2% /raid0

3.3.5 Boot Auto-mount

[root@xuegod72 ~]# umount /raid0/
[root@xuegod72 ~]# mount -a
mount: mount point /radi0 does not exist
[root@xuegod72 ~]# vi /etc/fstab
[root@xuegod72 ~]# mount -a
[root@xuegod72 ~]# df
Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/sda3      206234228 3024100 203210128   2% /
devtmpfs          477820       0    477820   0% /dev
tmpfs             492364      84    492280   1% /dev/shm
tmpfs             492364    7152    485212   2% /run
tmpfs             492364       0    492364   0% /sys/fs/cgroup
/dev/sr0         3947824 3947824         0 100% /media
/dev/sda1         303788  130864    172924  44% /boot
tmpfs              98476      16     98460   1% /run/user/42
tmpfs              98476       0     98476   0% /run/user/0
/dev/md0p1       2082816   33056   2049760   2% /raid0
[root@xuegod72 ~]# tail -1 /etc/fstab
UUID="2c398f3c-462f-4106-a51e-7cadd8ef925b" /raid0 xfs defaults 0 0
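The UUID used in that fstab entry can be read from the new filesystem with blkid (shown as a sketch; the value must of course match your own device):

blkid /dev/md0p1    # prints the filesystem UUID to put in /etc/fstab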
