
8.31 Linux Advanced File System Management: Disk Quotas, RAID, and LVM

2025-01-16 Update From: SLTechnology News&Howtos


Shulou (Shulou.com) 06/01 report

Disk quota Quota

The role of disk quota (Quota):

A Linux system is a multi-user, multi-tasking environment in which many users share the same hard disk. If a few users consume a large amount of disk space, the other users inevitably suffer, so the administrator should limit each user's disk space appropriately in order to allocate system resources fairly.

General purpose of disk quota

Several more commonly used uses:

Web (www) servers: for example, limiting the capacity of each user's web space.

Mail servers: for example, limiting each user's mailbox space.

File servers: for example, limiting the maximum network disk space available to each user.

Limiting the maximum disk quota that a user group can use.

Configure the quota system:

Implemented in the kernel

Enabled per file system (the directory to be limited must be on its own, separately mounted file system)

Different policies can be set for different groups or users

Limits can be based on blocks (space) or inodes (file count)

Soft limits (soft limit), which may be exceeded temporarily

Hard limits (hard limit), which may never be exceeded

Initialization command:

Partition mount options: usrquota, grpquota

Initialize the database: quotacheck

Set quotas for users

Enable or cancel quotas: quotaon, quotaoff

Edit quota directly: edquota username

Set quotas non-interactively from the shell:

setquota USERNAME 4096 5120 40 50 /foo

Copy an existing user's quota settings to other users (user1 as the template):

edquota -p user1 user2
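For the partition mount options step above, an illustrative /etc/fstab entry (the device name /dev/sda6 is an assumption) would be:

```
/dev/sda6  /home  ext4  defaults,usrquota,grpquota  0 2
```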

Meaning of quota option:

User: user name

-Block limits

Used: disk space used

Soft: soft upper limit, which can be exceeded within 7 days by default

Hard: hard upper limit, which can never be exceeded

Grace: the countdown that starts when usage exceeds the soft limit.

-File limits

Used: number of files that have been stored

Soft: the soft limit on the number of files, analogous to the block soft limit above.

Hard: the hard limit on the number of files, analogous to the block hard limit above.

Grace: the countdown that starts when the file count exceeds the soft limit.

Report quota status

Per-user report: quota USERNAME

Overview of all quotas: repquota MOUNT_POINT

Other tools: warnquota

Example: create a disk quota for the /home directory

Confirm whether /home is a separate partition. Here it is not, so the home directory must first be moved to its own partition.

Create a new partition /dev/sda6 and mount it at /mnt/home, copy all the files from /home into /mnt/home, then mount /dev/sda6 at /home and unmount /mnt/home. Finally, switch to an ordinary user and test that the home directory works normally. If everything is fine, proceed to the next step.

Enable quota mount option

Edit /etc/fstab with vim and append usrquota,grpquota to the mount options for /home

Create a quota database

quotacheck -cug /home (-c creates the quota files, -u covers users, -g covers groups)

On CentOS 6, creating the quota files may be blocked by SELinux; enter setenforce 0 to lift the restriction temporarily.

Enable the database

quotaon -p /home shows whether quotas are enabled

quotaon /home enables quotas

Configure quota entry

edquota wan

Set soft to 100000 (about 100 MB) and hard to 120000 (about 120 MB); the units are 1 KB blocks

edquota -p wan mage copies wan's quota settings to the user mage

setquota wangcai 80000 120000 0 0 /home
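Since quota block counts are in 1 KB units, the soft/hard figures above can be sanity-checked with a quick calculation (a minimal sketch):

```shell
# quota block limits are counted in 1 KB blocks
soft_blocks=100000
hard_blocks=120000
echo "soft limit ~ $((soft_blocks / 1024)) MB"   # prints: soft limit ~ 97 MB
echo "hard limit ~ $((hard_blocks / 1024)) MB"   # prints: hard limit ~ 117 MB
```

So "about 100 MB" is really 97-98 MiB; quota tools round in 1 KB blocks, not megabytes.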

Test whether the disk quota is in effect

View a user's quota settings: quota wan

View quota settings for the whole file system: repquota /home

Use the dd tool to write large files as a test

This user's limits are about 100 MB soft and 120 MB hard.

Creating an 80 MB file produces no warning.

Creating a file between 100 MB and 120 MB makes the system warn that the block quota has been exceeded, but the file can still be written.

Creating a file larger than 120 MB fails, because the hard limit is 120 MB.
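The dd write test can be sketched as follows; the file path is illustrative and the size is scaled down to 5 MB so the command is safe to run anywhere (with quotas active, the same command simply fails once the hard limit is reached):

```shell
# write a 5 MB file of zeros and confirm its size
dd if=/dev/zero of=/tmp/quota_demo bs=1M count=5 2>/dev/null
stat -c %s /tmp/quota_demo   # prints: 5242880
rm -f /tmp/quota_demo
```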

Cancel quota

Disk array-RAID

What is RAID?

RAID: Redundant Arrays of Inexpensive (Independent) Disks

Proposed in 1988 in the University of California, Berkeley paper "A Case for Redundant Arrays of Inexpensive Disks".

Multiple disks are combined into an "array" to provide better performance, redundancy, or both

Common levels: RAID-0, RAID-1, RAID-5, RAID-10, RAID-50, JBOD

Functions of RAID

Improved I/O capability:

disks are read and written in parallel

Improved durability:

achieved through disk redundancy

Level: the way the multiple disks are organized to work together

How to implement RAID:

External disk array: the RAID controller is provided through an expansion card

Built-in RAID: motherboard integrated RAID controller

Configure in BIOS before installing OS

Software RAID: implemented through OS

RAID level and its function

RAID-0: striped volume (stripe)

Improvement in reading and writing performance

Available space: 100% of total disk capacity

No fault tolerance

Minimum number of disks: 2, 2+

How it works: data is split evenly across the hard disk space; when any one disk in a RAID-0 array fails, all data is lost

RAID-1: mirrored volumes, mirror

Read performance improved, write performance decreased slightly

Available space: 50% of total disk capacity

Has redundancy (fault tolerance)

Minimum number of disks: 2, 2N

How it works: the same data is stored on multiple hard drives at the same time as a redundant backup

RAID-4:

The XOR of the data on the data disks is stored on a dedicated parity (check) disk

Fault tolerant

Minimum number of disks: 3

Available space: (N-1) * min(S1, S2, ...)

How it works: one disk holds parity and the other disks hold data. When a disk fails and is replaced, the data must be rebuilt from the parity; this puts the parity disk under constant heavy load, so it is prone to failure.

RAID-5:

Improvement in reading and writing performance

Available space: (N-1) * min(S1, S2, ...)

Fault tolerance: allow up to 1 disk to be damaged

Minimum number of disks: 3,3 +

How it works: data and parity are spread evenly across all the disks; when a disk fails and is replaced, the data must be rebuilt from the parity, which puts the array under heavy load.

RAID-6

Improvement in reading and writing performance

Available space: (N-2) * min(S1, S2, ...)

Fault tolerance: allow up to 2 disks to be damaged

Minimum number of disks: 4,4 +

How it works: two sets of parity, together with the data, are spread evenly across all the disks

RAID-10

Improvement in reading and writing performance

Free space: 50%

Fault tolerance: at most one disk in each mirrored pair may fail.

Minimum number of disks: 4,4 +

How it works: do RAID1 first, then RAID0

RAID-01

Improvement in reading and writing performance

Free space: 50%

Fault tolerance: weaker than RAID-10; one failed disk disables its entire striped set

Minimum number of disks: 4, 4+

How it works: do RAID0 first, then RAID1

RAID-50

Improvement in reading and writing performance

Fault tolerance: at most one disk in each RAID-5 group may fail

Available space: (N-2) * min(S1, S2, ...)

Minimum number of disks: 6, 6 +

How it works: do RAID5 first, then RAID0

RAID7: can be understood as a standalone storage computer with its own operating system and management tools; it runs independently and is in theory the highest-performance RAID mode

JBOD: Just a Bunch Of Disks

Function: merge the space of multiple disks into a large continuous space

Available space: sum (S1, S2,...)
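The available-space rules for the levels above can be collected into one small helper (a sketch that assumes all N disks are the same size; the function name is ours, sizes in GB):

```shell
# usable capacity in GB for N equally sized disks of SIZE GB each
raid_capacity() {
  level=$1; n=$2; size=$3
  case $level in
    0)  echo $(( n * size )) ;;         # RAID-0: all space usable
    1)  echo $(( size )) ;;             # RAID-1: one disk's worth
    5)  echo $(( (n - 1) * size )) ;;   # RAID-5: one disk of parity
    6)  echo $(( (n - 2) * size )) ;;   # RAID-6: two disks of parity
    10) echo $(( n * size / 2 )) ;;     # RAID-10: mirrored, half usable
    *)  echo 0 ;;
  esac
}
raid_capacity 5 4 1000    # prints: 3000
raid_capacity 10 4 1000   # prints: 2000
```

For example, four 1000 GB disks yield 3000 GB in RAID-5 but only 2000 GB in RAID-6 or RAID-10.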

Soft RAID

mdadm: provides the management interface for soft RAID

Spare (idle) disks can be added for extra redundancy

Implemented by the md (multi devices) module in the kernel

RAID devices are named /dev/md0, /dev/md1, /dev/md2, /dev/md3, and so on

Implementation of Software RAID

mdadm: a modal tool (its behavior depends on the selected mode)

Command syntax: mdadm [mode] <raid-device> [options] <component-devices>

Supported RAID levels: LINEAR, RAID0, RAID1, RAID4, RAID5, RAID6, RAID10

Modes:

Create: -C

Assemble: -A

Monitor: -F

Manage: -f, -r, -a

<raid-device>: /dev/md#

<component-devices>: any block device

Common options for mdadm

-C: create mode

-n #: use # block devices to create this RAID

-l #: the RAID level to create

-a {yes|no}: automatically create the device file for the RAID device

-c CHUNK_SIZE: specify the chunk size

-x #: number of spare (idle) disks

-D: display details of a RAID device

mdadm -D /dev/md#

Management mode:

-f: Mark the specified disk as damaged

-a: add disk

-r: remove the disk

Observe the status of md:

cat /proc/mdstat
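As a sketch of what to look for, the active/total member count can be pulled out of /proc/mdstat with standard text tools; the sample output below is illustrative (a healthy 3-disk RAID-5 with one spare):

```shell
# sample /proc/mdstat content; (S) marks the spare disk
mdstat='md0 : active raid5 sde1[3](S) sdd1[2] sdc1[1] sdb1[0]
      41907200 blocks level 5, 512k chunk, algorithm 2 [3/3] [UUU]'
# [3/3] means 3 of 3 member disks are up; [UUU] shows each member Up
echo "$mdstat" | awk '/^md/ {print $4}'        # prints: raid5
echo "$mdstat" | grep -o '\[[0-9]*/[0-9]*\]'   # prints: [3/3]
```

A degraded array would instead show something like [3/2] and [UU_].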

Soft RAID configuration example

Create and define RAID devices using mdadm

# mdadm -C /dev/md0 -a yes -l 5 -n 3 -x 1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

Format each RAID device with a file system

# mke2fs -j /dev/md0

Use mdadm to check the status of RAID devices

# mdadm --detail /dev/md0 (or mdadm -D /dev/md0)

Add new members

# mdadm -G /dev/md0 -n 4 -a /dev/sdf1

Soft RAID testing and repair

Simulated disk failure

# mdadm /dev/md0 -f /dev/sda1

Remove disk

# mdadm /dev/md0 -r /dev/sda1

Recovering from a disk failure in software RAID

Replace the faulty disk and power it on

Rebuild a partition on a spare drive

mdadm /dev/md0 -a /dev/sda1

Monitor progress with mdadm, cat /proc/mdstat, and syslog

Example 1: create a 1G RAID1 device with an ext4 file system and one spare disk, automatically mounted at the /backup directory on boot

1. First, create three 1G partitions with System ID fd

2. Create the RAID1 array: -l 1 means RAID1, -n 2 means two disks are used, and -x 1 means one disk is a spare.

3. Format as ext4: mkfs.ext4 /dev/md0

4. Write / etc/fstab

5. Write the RAID configuration to a file in /etc (/etc/mdadm.conf), otherwise the array will be gone after the next reboot.

Example 2: create a RAID5 device consisting of three hard drives with 2G of free space, which requires a chunk size of 256k, a file system of ext4, and can be automatically mounted to the / mydata directory on boot.

1. Create three 2G partitions with System ID fd

2. Create the RAID5 array: -a yes automatically creates the RAID device file, -l 5 means RAID5, -n 3 means three disks are used, and -c 256 sets the chunk size to 256k

3. Format /dev/md1

4. Add the mount entry to /etc/fstab

5. Write the /etc/mdadm.conf file

Logical Volume Manager (LVM)

What is LVM?

The key point of LVM (Logical Volume Manager) is flexible adjustment of file system capacity, not the efficiency or safety of data storage; read/write performance and data reliability are the domain of RAID. LVM combines multiple physical partitions so that together they look like a single disk, and physical partitions can later be added to or removed from this LVM-managed disk, which makes use of the overall disk space very flexible.

LVM allows a layer of abstraction for easy manipulation of volumes, including resizing file systems

LVM allows file systems to span multiple physical devices

Specify the device as a physical volume

Create a volume group with one or more physical volumes

Physical volumes are divided into fixed-size physical extents (Physical Extent, PE)

Logical volumes created in a volume group are made up of physical extents (PE)

You can create a file system on a logical volume

LVM introduction

LVM: Logical Volume Manager, Version 2

dm (device mapper): a kernel module that organizes one or more underlying block devices into a single logical device

Device name: /dev/dm-#

Soft links:

/dev/mapper/VG_NAME-LV_NAME

/dev/mapper/vol0-root

/dev/VG_NAME/LV_NAME

/dev/vol0/root

LVM can flexibly change the capacity of an LV

Capacity is changed by exchanging PEs: PEs can be moved from the original LV to other devices to shrink it, or PEs from other devices can be added to the LV to grow it

Deleting logical volumes

Note: tear down in order: first remove the LV, then the VG, and finally the PV.

Pv management tools

Display pv information

pvs: brief pv information

pvdisplay: detailed pv information

Create pv

pvcreate /dev/DEVICE

Move the data (PEs) on a physical volume to other physical volumes in the same volume group

pvmove PhysicalDevicePath

Delete pv

pvremove

Vg management tools

Show Volume Group

vgs

vgdisplay

Create a volume group

vgcreate [-s #[kKmMgGtTpPeE]] VolumeGroupName PhysicalDevicePath [PhysicalDevicePath...]

-s specifies the physical extent (PE) size; the default is 4 MiB.
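As a quick check of what the PE size implies, here is the extent count for the 5G LV with 16M PEs used in Example 1 later in this article (a minimal sketch):

```shell
# extents = LV size / PE size (both in MiB here)
lv_mib=5120   # a 5 GiB logical volume
pe_mib=16     # PE size chosen with: vgcreate -s 16M ...
echo "$(( lv_mib / pe_mib )) extents"   # prints: 320 extents
```

Larger PEs mean fewer extents to track, at the cost of coarser allocation granularity.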

Manage volume groups

Expand: vgextend VolumeGroupName PhysicalDevicePath [PhysicalDevicePath...]

Reduce: vgreduce VolumeGroupName PhysicalDevicePath [PhysicalDevicePath...]

Delete volume group

First run pvmove to migrate the data away, then run vgremove (and finally pvremove the freed PVs)

Lv management tools

Show logical volumes

lvs

lvdisplay

Create a logical volume

lvcreate -L #[mMgGtT] -n NAME VolumeGroup

Delete a logical volume

lvremove /dev/VG_NAME/LV_NAME

Resize the file system

fsadm [options] resize device [new_size[BKMGTEP]]

resize2fs [-f] [-F] [-M] [-P] [-p] device [new_size]

Lv extends and shrinks logical volumes

Extend logical volumes:

lvextend -L [+]#[mMgGtT] /dev/VG_NAME/LV_NAME

Here -L can specify an absolute target size, or a relative size such as +1G to grow by that amount

resize2fs /dev/VG_NAME/LV_NAME

After extending the LV, resize2fs must be run to grow the file system; otherwise df still shows the capacity from before the extension.

Reduce logical volumes (the steps below must be followed in order, or data will be lost):

1. umount /dev/VG_NAME/LV_NAME

2. e2fsck -f /dev/VG_NAME/LV_NAME

3. resize2fs /dev/VG_NAME/LV_NAME #[mMgGtT]

4. lvreduce -L [-]#[mMgGtT] /dev/VG_NAME/LV_NAME

Create a logical volume instance

Create a physical volume

pvcreate /dev/sda3

Assign physical volumes to a volume group

vgcreate vg0 /dev/sda3

Create a logical volume from a volume group

lvcreate -L 256M -n data vg0

mke2fs -t ext4 /dev/vg0/data

mount /dev/vg0/data /mnt/data

Logical Volume Manager snapshot

A snapshot is a special logical volume, which is an exact copy of the logical volume that exists when the snapshot is generated.

Snapshots are the most appropriate choice for temporary copies of existing datasets and other operations that need to be backed up or replicated.

Snapshots consume space only if they are different from the original logical volume.

A certain amount of space is allocated to it when the snapshot is generated, but it is used only if the original logical volume or snapshot has changed

When there is a change in the original logical volume, the old data is copied to the snapshot.

The snapshot contains only the data changed in the original logical volume or since the snapshot was generated

The snapshot volume usually needs only 15% to 20% of the size of the original logical volume; it can also be enlarged later with lvextend.

A snapshot records the state of the system at one moment, like taking a photo: when data changes afterwards, the original data is copied into the snapshot area, while unchanged areas are shared between the snapshot area and the file system.

Because the snapshot area shares many PE blocks with the original LV, the snapshot must be in the same VG as the LV being snapshotted. The amount of data changed since the snapshot was taken must not exceed the snapshot area's actual capacity.
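The 15-20% sizing guideline above works out as follows; the 10 GiB origin size is just an assumed example:

```shell
# initial snapshot size at 15% and 20% of an assumed 10 GiB origin LV
origin_mib=10240
echo "15%: $(( origin_mib * 15 / 100 )) MiB"   # prints: 15%: 1536 MiB
echo "20%: $(( origin_mib * 20 / 100 )) MiB"   # prints: 20%: 2048 MiB
```

If more than that much data changes before the backup finishes, the snapshot becomes invalid, which is why lvextend on the snapshot is sometimes needed.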

Use LVM Snapshot

Create a snapshot of an existing logical volume

lvcreate -l 64 -s -n snap-data -p r /dev/vg0/data

Mount a snapshot

mkdir -p /mnt/snap

mount -o ro /dev/vg0/snap-data /mnt/snap

Delete snapshot

umount /mnt/snap

lvremove /dev/vg0/snap-data

LVM practical examples

Example 1: create a 20G volume group named testvg from at least two PVs, with a PE size of 16MB; then create a 5G logical volume named testlv in the volume group and mount it at the /users directory

1. Find two empty disks first.

2. Create the PV disks

3. Create a vg named testvg with a pe size of 16m

4. Create an LV disk named testlv with a size of 5G from testvg space

5. Format the newly created LV disk

6. Mounting

Example 2: create a new user archlinux, requiring its home directory to be / users/archlinux, then su switch to the archlinux user, copy the / etc/pam.d directory to your own home directory

Example 3: extend testlv to 7G, without losing the archlinux user's files

-L can specify the target capacity directly, or a relative amount such as +2G

-l +100%FREE DEVICE extends the LV across all remaining free space

-r runs resize2fs as part of the same operation; this step is required, otherwise the size df reports for the mount will not match the extended LV

Example 4: shrink testlv to 3G, requiring archlinux users' files not to be lost

Example 5: create a snapshot of testlv and try to back up data based on the snapshot and verify the snapshot

Confirm existing lv volumes

Create the snapshot (-p r makes it read-only), then mount it at the /mnt/testlv-snapshot directory

The snapshot directory contains the same files as the /users directory.

Listing the logical volumes shows two LVs: testlv and its snapshot, test-snapshot.

Looking at the snapshot LV, you can see it is a snapshot of testlv. Although the mounted snapshot appears to contain files, the Allocated to snapshot field reads 0, meaning nothing has actually been copied into the snapshot yet; the visible files are still mapped to the originals under /users, so users simply see the data as it was when the snapshot was taken.

Now delete all the files in /users, the directory where testlv is mounted, and compare the two directories: the files in the snapshot directory still exist! This amounts to a backup; if needed, we just copy the files back out of the snapshot directory.

Looking at the snapshot LV again, it now occupies some space, which shows that the snapshot has preserved the data we modified. The snapshot experiment is a success.

Example 6: delete pv with small capacity and retain data

Create 3 pv

Create testvg from /dev/sdd1

Create a lv

Mount lv to the / users directory and copy the files to that directory

Because the original 1G VG is too small, add another 1G PV, for a 2G VG in total

Extend the LV to the maximum of 2G

Since the 2G of space is used up, we want to retire the two 1G disks; at this point add a 4G disk, /dev/sdb1

To preserve the data in testvg, first move the data off the old PV /dev/sdc1 onto another disk

Remove /dev/sdc1 after its data has been moved

After /dev/sdc1 is removed, only two PV volumes remain, and the VG shrinks accordingly to 4G.

Then move the data from /dev/sdd1 to another free PV, and remove that PV as well

After the old disks have been removed, the data in the LV is still there. The experiment is a success.
