
Advanced File System Management: Disk Quotas, mdadm RAID, and LVM


"configure the quota system

A disk quota lets the administrator cap the disk space each user may consume; a user can only store data within the assigned limit. Network drive services work this way: each user is allocated a fixed amount of space, and paying customers can be granted more.

Purpose

Disk quotas limit the disk space available to a given account, so that one user filling the disk cannot prevent other users from working or even disrupt the operation of the system itself. The feature exists not only in Linux but also in Windows.

Summary

Quota enforcement runs in the kernel and is enabled per file system. Different policies can be applied to different users and groups, and limits can be placed on blocks (space) or inodes (number of files).

Soft limit: a threshold that may be exceeded temporarily and triggers a warning

Hard limit: the absolute maximum that cannot be exceeded

Initialization

Partition mount options: usrquota, grpquota

To enable quotas on a partition, add these two options to the mount options field of the corresponding entry in /etc/fstab, then remount the file system (mount -o remount device mount_point).
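A minimal sketch of what this looks like, assuming the /home logical volume used in the examples below:

# /etc/fstab: add usrquota,grpquota to the mount options of the entry for the
# file system being limited (the device below is just the one from this article)
/dev/mapper/VolGroup-lv_home  /home  ext4  defaults,usrquota,grpquota  1 2

# apply the new options without rebooting
mount -o remount /home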

Initialize the database: quotacheck

quotacheck [-gubcfinvdMmR] [-F quota-format] -a | filesystem

Options:

-c: create the quota database files

-u: set disk quota for a user

-g: set disk quota for a group

-m: do not try to remount the file system read-only first

For example, to enable disk quotas on /home, first initialize the quota database:

quotacheck -cumg /home

After initialization, two files, aquota.user and aquota.group, are created in the /home directory.

Enable quota

quotaon mount_point

Turn off quota

quotaoff mount_point
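Using the /home example from above:

quotaon /home       # start enforcing quotas on /home
quotaon -p /home    # print whether user/group quotas are currently on
quotaoff /home      # stop enforcing them again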

Edit quotas: running edquota username opens the user's quota entry in an editor; it looks like this:

Disk quotas for user hadoop (uid 500):
  Filesystem                    blocks  soft  hard  inodes  soft  hard
  /dev/mapper/VolGroup-lv_home    2048     0     0      12     0     0

blocks: the space (in 1 KB blocks) the user currently occupies on this file system

soft: the warning threshold for space usage

hard: the maximum amount of space the user may occupy

inodes: the number of files the user currently owns on this file system

soft: the warning threshold for the number of files

hard: the maximum number of files allowed

To set a disk quota for the hadoop user, we only need to change the soft and hard values after blocks; the sizes depend on our actual needs, and the inode limits can be edited the same way to cap the number of files.
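A small sketch of editing the hadoop user's quota; the limit values are only illustrative:

edquota -u hadoop
# in the editor, change the blocks line to something like:
#   /dev/mapper/VolGroup-lv_home  2048  4096  5120  12  0  0
# i.e. a 4 MB soft limit and a 5 MB hard limit (values are 1 KB blocks)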

Command line direct editing

setquota username 4096 5120 40 50 /home

The block limits here (4096 and 5120) are expressed in 1 KB blocks, i.e. a 4 MB soft limit and a 5 MB hard limit; the last two numbers (40 and 50) are the soft and hard limits on the number of inodes (files).

Example

setquota hadoop 4096 5120 40 50 /home

Sometimes you need to set disk quotas for many users, and editing them one by one is too troublesome. Here is a way to create them in batch:

edquota -p user1 user2 copies user1's quota settings to user2.

edquota -p xxxx `awk -F: '$3>499{print $1}' /etc/passwd` copies user xxxx's settings to every account whose UID is greater than 499.

Query disk quota

quota

repquota -a

If quotas are enabled on several partitions, repquota -u mount_point (for example repquota -u /home) reports the usage on a single partition.
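A quick check, assuming the hadoop user and /home from the examples above:

quota -u hadoop    # show hadoop's usage against its limits
repquota -u /home  # report every user's usage and limits on /home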

"RAID

What is RAID?

RAID (Redundant Array of Inexpensive Disks) is a redundant array of inexpensive disks. The basic idea is to combine several small, cheap disks into one disk group whose capacity matches or exceeds a single large, expensive disk. RAID implementations fall roughly into two kinds: hardware RAID and software RAID. Linux can provide RAID through its own software (mdadm), which makes configuration flexible and management convenient. With RAID we can also merge several hard disks into one larger virtual device, gaining both performance and data redundancy. Hardware RAID solutions are somewhat better than software RAID in areas such as error detection and repair, automatic disk detection and array rebuilding.

Purpose

Improved I/O capability: disks are read and written in parallel
Improved durability: provided through disk redundancy
RAID levels: different ways of organizing multiple disks to work together

Ways of implementing RAID

External disk array: connectivity is provided through an expansion (adapter) card
Built-in RAID: a RAID controller integrated on the motherboard, configured in the BIOS before the OS is installed
Software RAID, such as mdadm

RAID level

RAID-0

RAID-0 is also known as stripe mode: consecutive data is split and distributed across multiple disks. When the system issues data requests, several disks can service them in parallel, each handling its own part of the data. This parallel operation makes full use of the bus bandwidth and significantly improves overall disk access performance; because reads and writes are done in parallel on the physical devices, read and write performance increases, which is usually the reason for running RAID-0. However, RAID-0 has no data redundancy, and no data can be recovered if a disk fails. The cost is low, at least two disks are required, and it is generally used only where data safety does not matter (a creation sketch follows the table below).

Fault tolerance: none
Redundancy: none
Hot spare: none
Read performance: high
Random write performance: high
Sequential write performance: high
Number of disks required: 2 or more
Available capacity: 100%
Typical application: fast reads and writes where fault tolerance and data safety are not required, e.g. graphics workstations
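A minimal mdadm sketch of a two-disk stripe; the partitions /dev/sdb1 and /dev/sdc1 are assumptions:

mdadm -C /dev/md0 -a yes -l 0 -n 2 /dev/sdb1 /dev/sdc1   # create the 2-member stripe
mkfs.ext4 /dev/md0                                       # put a file system on it
cat /proc/mdstat                                         # confirm the array is active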

RAID-1

RAID-1, also known as mirroring, is a fully redundant mode. It uses 2 (or 2N) disks plus zero or more spare disks, and every write also goes to the mirror disk, so the array is reliable but the usable capacity drops to half of the total; the member disks should be the same size. The system keeps running as long as at least one disk in every mirror pair is available, even if half of the drives fail. An array with a failed disk is no longer redundant, however, so the damaged disk should be replaced promptly; otherwise, if the remaining mirror disk also fails, the whole array collapses. After the new disk is added, mirroring the original data back takes a long time; access to the data is not interrupted, but overall performance drops during the rebuild. The load on the RAID-1 disk controller is quite heavy, and using multiple disk controllers improves data safety and availability. (A failure-and-replacement sketch follows the table below.)

Fault tolerance: yes
Redundancy: yes
Hot spare: yes
Read performance: high
Random write performance: low
Sequential write performance: low
Number of disks required: 2 or 2N
Available capacity: 50%
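A sketch of replacing a failed mirror member, assuming an existing array /dev/md0 with members /dev/sdb1 and /dev/sdc1 and a replacement partition /dev/sdd1:

mdadm /dev/md0 -f /dev/sdb1   # mark the member as failed (or it fails on its own)
mdadm /dev/md0 -r /dev/sdb1   # remove it from the array
mdadm /dev/md0 -a /dev/sdd1   # add the replacement; resynchronization starts
cat /proc/mdstat              # watch the rebuild progress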

RAID-4

RAID-4 stores the parity information on one dedicated disk and stripes the data across the other disks in the same way as RAID-0. If one disk fails, the parity can be used to reconstruct all the data; if two disks fail, all data is lost. RAID-4 is rarely used, because the parity is kept on a single disk that must be updated on every write to any other disk, so heavy writes easily make the parity disk a bottleneck.

Fault tolerance: yes
Redundancy: yes
Hot spare: yes
Read performance: high
Random write performance: low
Sequential write performance: low
Number of disks required: 3 or more
Available capacity: (n-1)/n

RAID-5

RAID-5 can be understood as a compromise between RAID-0 and RAID-1. It does not use RAID-1's full mirroring; instead it relies on parity information for data recovery. There is no dedicated parity disk: data and parity are distributed across all the disks, which improves read and write performance, and most transfers touch only one disk. RAID-5 has a write penalty: each write operation turns into four actual I/O operations, two reads (the old data and the old parity) and two writes (the new data and the new parity).

Fault tolerance: yes
Redundancy: parity
Hot spare: yes
Read performance: high
Random write performance: low
Sequential write performance: low
Number of disks required: 3 or more
Available capacity: (n-1)/n
Typical application: workloads that move large amounts of data and need high security, such as finance, databases and storage servers

RAID-6

RAID-6 is a newer member of the RAID family, an extension of RAID-5. As in RAID-5, data and parity are split into blocks and distributed across the disks of the array, but RAID-6 adds a second, independent parity calculation. This allows two disks in the array to fail at the same time, at the cost of additional disk space.

Fault tolerance: yes
Redundancy: double parity
Hot spare: yes
Read performance: high
Random write performance: low
Sequential write performance: low
Number of disks required: 4 or more
Available capacity: (n-2)/n
Typical application: workloads that move large amounts of data and need high security, such as finance, databases and storage servers

RAID-10

RAID-10 is a combination of RAID-0 and RAID-1: it stripes data across mirrored pairs, so it combines the speed of RAID-0 with the safety of RAID-1 and uses no parity. The RAID-1 layer provides the redundant copy, while the RAID-0 layer provides read/write speed: the data is first split for striping, and each stripe is then written to a mirrored pair of disks. (A creation sketch follows the table below.)

Fault tolerance: yes
Redundancy: mirroring
Hot spare: yes
Read performance: high
Random write performance: high
Sequential write performance: high
Number of disks required: 4 or more
Available capacity: 50%
Typical application: workloads that move large amounts of data and need high security, such as finance, databases and storage servers
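A minimal creation sketch for a four-disk RAID-10; the partition names are assumptions:

mdadm -C /dev/md0 -a yes -l 10 -n 4 /dev/sd{b,c,d,e}1   # stripe over two mirrored pairs
mdadm -D /dev/md0                                        # check layout and sync status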

RAID-01

RAID-01 (RAID 0+1), taking a four-disk array as an example, is a solution that balances storage performance and data safety: it first stripes the data (RAID-0) and then mirrors the two stripe sets (RAID-1). It offers the same data safety as RAID-1 and storage performance close to RAID-0. Because safety comes from keeping a 100% copy of the data, its disk utilization is the same as RAID-1 and its storage cost is high; and once a disk in one stripe set fails, that whole set drops out, so a further failure in the remaining set loses the data.

Soft RAID

Soft RAID implements the various RAID levels in software. Its performance is not as strong as hardware RAID, but it can be used in test environments. On Linux, mdadm is the tool that implements RAID.

mdadm: provides a management interface for soft RAID

It builds redundancy on top of ordinary disks. Combined with the md (multiple devices) driver in the kernel, RAID devices are named /dev/md0, /dev/md1, and so on.

mdadm is a mode-based tool

Syntax: mdadm [mode] <raid-device> [options] <component-devices>

Supported RAID levels: Linux supports LINEAR md devices, RAID0 (striping), RAID1 (mirroring), RAID4, RAID5, RAID6, RAID10, MULTIPATH, FAULTY, and CONTAINER.

Mode:

Create mode: -C

Dedicated options:

-l: specify the RAID level
-n: specify the number of active devices
-a {yes|no}: automatically create the device file for the array
-c: specify the chunk size, normally a power of 2 (default 64 KB)
-x: specify the number of spare disks; when an active disk fails, a spare automatically takes its place
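A sketch that ties these options together (the device names are assumptions):

# -l 5: RAID level 5; -n 3: three active members; -x 1: one spare;
# -c 512: 512 KB chunks; -a yes: create the /dev/md0 device file automatically
mdadm -C /dev/md0 -a yes -l 5 -n 3 -x 1 -c 512 /dev/sd{b,c,d,e}1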

Management mode (no separate mode switch; operate directly on an existing array device)

-a / --add: add a disk
-r / --remove: remove a disk
-f / --fail: mark a disk as failed (simulate a damaged disk)

mdadm /dev/md0 -a /dev/sdb1
mdadm /dev/md0 -r /dev/sdb2
mdadm /dev/md0 -f /dev/sdb3

Monitor (follow) mode: -F

Follows arrays and reports events; the current state of soft RAID can also be checked at any time in /proc/mdstat.

Grow mode: -G

For example, to add a new disk to a RAID device and expand its capacity:

mdadm -G /dev/md0 -n 4 -a /dev/sdf1

Assemble mode: -A

Use this mode to reactivate a RAID device that has been stopped:

mdadm -A /dev/md0 /dev/sdb1 /dev/sdb2 /dev/sdb3

View the details of a RAID array

mdadm -D /dev/md0 (or view the file /proc/mdstat)

Stop the array

mdadm -S /dev/md0

Save the current RAID information to the configuration file for later assembly

mdadm -D -s > /etc/mdadm.conf
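A short sketch of the stop/assemble round trip, assuming /dev/md0:

mdadm -D -s > /etc/mdadm.conf   # record the array definition
mdadm -S /dev/md0               # stop the array
mdadm -A /dev/md0               # reassemble it; members are looked up in /etc/mdadm.conf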

Delete raid

You must stop the RAID device first, and then erase the RAID metadata from each member disk

mdadm --zero-superblock /dev/sdb1
mdadm --zero-superblock /dev/sdb2
mdadm --zero-superblock /dev/sdb3
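The complete teardown as one sketch, assuming the array was mounted on /backup and built from /dev/sdb{1,2,3}:

umount /backup                           # unmount the file system first
mdadm -S /dev/md0                        # stop the array
mdadm --zero-superblock /dev/sdb{1,2,3}  # wipe the md metadata on each member
# finally remove the matching lines from /etc/fstab and /etc/mdadm.conf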

Example:

Create a RAID-1 device with 1G of usable space, a chunk size of 128 KB, an ext4 file system and one spare disk, mounted automatically on the /backup directory at boot.

Since the usable capacity of a RAID-1 array equals the size of a single member, 1G partitions are enough. A mirror needs two member partitions, and the spare requirement adds a third, so create three 1G partitions and change each partition type to fd before creating the array; to have it mounted automatically at boot, the entry also has to be written into /etc/fstab.

Step 1: create the three 1G partitions and change their partition type to fd (Linux raid autodetect).

Step 2: create the array: mdadm -C /dev/md0 -a yes -c 128 -l 1 -n 2 -x 1 /dev/sdc{5,6,7}

Step 3: format it: mke2fs -t ext4 /dev/md0

Step 4: write the entry into /etc/fstab so it is mounted automatically at boot (a sketch of the entry follows).
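A sketch of the step 4 entry; referring to the array by UUID (taken from blkid /dev/md0) is more robust than the /dev/md0 name, which can change between boots:

# in /etc/fstab, either of the following (the UUID value comes from blkid /dev/md0):
UUID=<uuid-of-md0>  /backup  ext4  defaults  0 0
/dev/md0            /backup  ext4  defaults  0 0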

Exercise 2: create a RAID-5 device with 2G of usable space from three hard disks, with a chunk size of 256 KB and an ext4 file system, mounted automatically on the /mydata directory at boot (a worked sketch follows).
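A minimal worked sketch for exercise 2, assuming three 1G partitions /dev/sdd{1,2,3} of type fd (three members with distributed parity give roughly 2G of usable space); the device and partition names are assumptions:

mdadm -C /dev/md5 -a yes -l 5 -n 3 -c 256 /dev/sdd{1,2,3}    # create the RAID-5 array
mke2fs -t ext4 /dev/md5                                      # format it as ext4
mkdir -p /mydata && mount /dev/md5 /mydata                   # mount it
echo '/dev/md5  /mydata  ext4  defaults  0 0' >> /etc/fstab  # mount it at boot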

"logical volume management

LVM is short for Logical Volume Manager, a mechanism for managing disk partitions in the Linux environment, originally implemented by Heinz Mauelshagen on the Linux 2.4 kernel; its releases included the stable 1.0.5 version, the 1.1.0-rc2 development version, and the LVM2 development branch. A common and difficult problem when installing Linux is estimating how large each partition should be in order to allocate disk space sensibly. Compared with traditional disks and partitions, LVM provides a higher-level view of disk storage, making it much easier for the system administrator to allocate storage to applications and users: storage volumes managed by LVM can be resized or removed at any time as needed (the file system tools may need to support it).

Benefits

An abstraction layer that allows volumes to be manipulated easily, including resizing file systems
Allows a file system to span multiple physical devices
Devices are designated as physical volumes, and a volume group is created from one or more physical volumes
The capacity of logical volumes can be changed flexibly

Basic terminology of LVM

Physical storage media: the storage devices of the system, such as /dev/sda, /dev/sdb, etc.
Physical volume (PV): a hard disk partition, or a device that logically behaves like a partition (such as a RAID device)
Volume group (VG): an LVM volume group plays a role similar to a physical hard disk; it is built from one or more physical volumes, and one or more LVM partitions (logical volumes) can be created on it
Logical volume (LV): similar to a hard disk partition; file systems are created on top of logical volumes
PE (physical extent): each physical volume is divided into PEs, the basic allocation units; a PE has a unique number and is the smallest unit LVM can address; the default size is 4 MB and can be changed
LE (logical extent): logical volumes are likewise divided into LEs, their basic addressable units; within the same volume group the LE size equals the PE size

To put it simply:

PV: a physical disk partition
VG: the physical disk partitions (PVs) used by LVM must be added to a VG, which can be thought of as a storage pool made up of one or several large disks
LV: a logical partition carved out of a VG
dm (device mapper): the kernel module that organizes one or more underlying devices into one logical device
Device name: /dev/dm-#

Symbolic links:

/dev/mapper/VG_name-LV_name, e.g. /dev/mapper/vol0-root
/dev/VG_name/LV_name, e.g. /dev/vol0/root

PV management tools

pvs: brief information about physical volumes
pvdisplay: detailed information; give a device name to see the details of a single PV

Create pv

pvcreate

pvcreate [-f|--force] [-y|--yes] [-u|--uuid uuid] [--setphysicalvolumesize size] [-Z|--zero {y|n}] [-M|--metadatatype type] [--[pv]metadatacopies NumberOfCopies] [--metadatasize size] [--dataalignment alignment] [--restorefile file] PhysicalVolume [PhysicalVolume...]

pvcreate /dev/sdb1 /dev/sdb2 /dev/sdb3

VG management tools

Show Volume Group

vgs: brief information about volume groups
vgdisplay: detailed information about volume groups

Create a volume group

vgcreate

vgcreate [-s|--physicalextentsize PhysicalExtentSize[bBsSkKmMgGtTpPeE]] [--shared] [--systemid SystemID] [-t|--test] [-v|--verbose] VolumeGroupName PhysicalDevicePath [PhysicalDevicePath...]

vgcreate vg0 /dev/sdb1 /dev/sdb2 /dev/sdb3

Manage volume groups

vgextend: extend a volume group

vgextend vg0 /dev/sdc1 /dev/sdc2

Delete volume group

Before removing, first move the data stored on the underlying PVs to other disks with pvmove, e.g. pvmove /dev/sdb1 /dev/sdc1 (one source PV at a time; with no destination given, pvmove uses free space elsewhere in the volume group).

vgreduce vg0 /dev/sdb{1,2,3} removes the PVs from the volume group

vgremove vg0 finally removes the volume group itself
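The shrink-and-remove flow as one sketch, assuming vg0 contains /dev/sdb1 and /dev/sdc1 and /dev/sdb1 is being retired:

pvmove /dev/sdb1 /dev/sdc1   # migrate all extents off sdb1 (destination is optional)
vgreduce vg0 /dev/sdb1       # drop the now-empty PV from the volume group
pvremove /dev/sdb1           # wipe the PV label so the partition is ordinary again
# vgremove vg0               # only if the whole volume group is being deleted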

LV management tools

lvs: brief information about logical volumes
lvdisplay: detailed information about logical volumes

Create a logical volume

lvcreate

lvcreate [-a|--activate ...] [--cachesettings key=value] [-c|--chunksize ChunkSize] [{-l|--extents LogicalExtentsNumber[%{FREE|PVS|VG}] | -L|--size LogicalVolumeSize}] [-i|--stripes Stripes [-I|--stripesize StripeSize]] [-s|--snapshot] [-t|--test] [--type SegmentType] [-v|--verbose] [-n LogicalVolumeName] VolumeGroupName

lvcreate -L #[mMgGtT] -n lv_name vg_name

-l #: size the logical volume by a number of PEs (or a percentage, e.g. 100%FREE)

-L: specify the size of the logical volume directly
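Two examples of the size options, assuming a volume group vg0 exists (the LV names are made up):

lvcreate -L 5G -n lv_data vg0        # a 5 GB logical volume named lv_data
lvcreate -l 100%FREE -n lv_rest vg0  # use every remaining extent in vg0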

Resize the file system

fsadm [options] resize device [new_size[BKMGTEP]]

resize2fs [-f] [-F] [-M] [-P] [-p] device [new_size]

Extend a logical volume

lvextend -L [+]#[mMgGtT] /dev/VG_NAME/LV_NAME expands the physical boundary (the logical volume itself)

resize2fs /dev/VG_NAME/LV_NAME expands the logical boundary (grows the file system to fill the volume)
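A minimal sketch combining the two boundaries (the names are assumptions); recent lvm2 can also do both in one step with lvextend -r:

lvextend -L +2G /dev/vg0/lv_data   # grow the logical volume by 2 GB
resize2fs /dev/vg0/lv_data         # grow the ext4 file system to match
# or, on recent lvm2, both at once:
# lvextend -r -L +2G /dev/vg0/lv_data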

Example:

1. Create a volume group named testvg, 10G in size, made up of at least two PVs, with a PE size of 16 MB; then create a 5G logical volume named testlv in that volume group and mount it on the /users directory.

Step 1: create two partitions totaling 10G (for example 5G each) and change their partition type to 8e (Linux LVM).

Step 2: create the PVs: pvcreate /dev/sdc2 /dev/sdc3

Step 3: create the volume group: vgcreate -s 16M testvg /dev/sdc2 /dev/sdc3

Step 4: create the logical volume: lvcreate -L 5G -n testlv testvg

Step 5: format and mount: mke2fs -t ext4 /dev/testvg/testlv, then mount /dev/testvg/testlv /users (create the directory first if it does not exist).

Step 6: write the entry into /etc/fstab so it is mounted at boot.

2. Extend testlv to 7G, without losing the archlinux user's files.

Before extending, check whether the volume group that testlv belongs to has enough free space. If it does not, create another partition, turn it into a PV and extend the volume group first, then extend the logical volume. lvextend only stretches the physical boundary of the LV; the file system inside stays the same size, so you must then run resize2fs on the device to grow the file system online.

Step 1: check how much space is left in the volume group: run vgdisplay testvg and look at the Free PE entry; in this case there happens to be enough.

Step 2: extend the logical volume: lvextend -L +2G /dev/testvg/testlv

Step 3: make it take effect for the file system: resize2fs /dev/testvg/testlv
