
Linux system quota and RAID


The content of this article:

I. System quota under Linux

II. RAID under Linux

I. System disk quotas

Disk quotas are set and checked per file system to prevent users from consuming more space than they are allowed, and to keep the entire file system from accidentally filling up.

Summary

Enforced in the kernel

Enabled per file system

Different policies for different groups or users

Limits based on blocks or inodes

Soft limit: a warning threshold that may be exceeded temporarily

Hard limit: the maximum allowed usage, which cannot be exceeded

Quota sizes: measured in KB (blocks) and in number of files (inodes)

Initialization

Partition mount options: usrquota, grpquota

Initialize the database: quotacheck

Execution

Enable or cancel quotas: quotaon, quotaoff

Edit quota directly: edquota username

Set quotas directly from the shell:

setquota username 4096 5120 40 50 /foo

Copy the quotas of a template user to other users:

edquota -p user1 user2

Report

Query a single user: quota

Quota overview: repquota

Other tools: warnquota (a compact recap of the whole workflow is sketched just below)
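Pulling the commands above together, a minimal sketch of the whole workflow might look like this; /data, someuser and the limit values are hypothetical placeholders, and the detailed /home exercise that follows shows the values actually used in this article:

# assumes /data is already mounted with the usrquota,grpquota options (see step 3 below)
[root@localhost ~]# quotacheck -cug /data                      # create the user and group quota databases
[root@localhost ~]# quotaon /data                              # turn quotas on for the file system
[root@localhost ~]# setquota someuser 4096 5120 40 50 /data    # block soft/hard and inode soft/hard limits
[root@localhost ~]# repquota /data                             # per-user usage report for /data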

Exercise: create a partition, format it, mount it at /home, and implement disk quotas for this directory.

1. Create a 10G partition and format it with an ext4 file system

[root@localhost ~]# fdisk /dev/sdb
[root@localhost ~]# lsblk /dev/sdb
NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdb      8:16   0  90G  0 disk
└─sdb1   8:17   0  10G  0 part
[root@localhost ~]# mkfs.ext4 /dev/sdb1

2. Move the files under /home to another directory, mount /dev/sdb1 at /home, and then move the data back.

[root@localhost ~]# mv /home/* /testdir/
[root@localhost ~]# ls /home/
[root@localhost ~]# mount /dev/sdb1 /home/      # mount /dev/sdb1 at /home
[root@localhost ~]# ls /home/
lost+found
[root@localhost ~]# mv /testdir/* /home/        # move the original /home data back
[root@localhost ~]# ls /home/
lost+found  xiaoshui

3. Edit /etc/fstab and add the usrquota,grpquota mount options. Because the file system was just mounted without these options, unmount it and mount it again.

[root@localhost ~]# vi /etc/fstab
#
# /etc/fstab
# Created by anaconda on Mon Jul 25 09:34:22 2016
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/vg0-root    /          ext4     defaults            1 1
UUID=7bbc50de-dfee-4e22-8cb6-04d8520b6422 /boot ext4 defaults   1 2
/dev/mapper/vg0-usr     /usr       ext4     defaults            1 2
/dev/mapper/vg0-var     /var       ext4     defaults            1 2
/dev/mapper/vg0-swap    swap       swap     defaults            0 0
tmpfs                   /dev/shm   tmpfs    defaults            0 0
devpts                  /dev/pts   devpts   gid=5,mode=620      0 0
sysfs                   /sys       sysfs    defaults            0 0
proc                    /proc      proc     defaults            1 0
/dev/sdb1               /home      ext4     usrquota,grpquota   0 0
[root@localhost ~]# umount /home/
[root@localhost ~]# mount -a
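A quick way to confirm that the new mount options took effect is to look at the mounted file systems; this check is not part of the original steps:

[root@localhost ~]# mount | grep /home      # /dev/sdb1 should now appear with the usrquota,grpquota options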

4. Create a quota database

[root@localhost ~]# quotacheck -cug /home

5. Enable quotas

[root@localhost ~]# quotaon -p /home      # check whether quotas are enabled
group quota on /home (/dev/sdb1) is off
user quota on /home (/dev/sdb1) is off
[root@localhost ~]# quotaon /home         # enable quotas

6. Configure quota entries

[root@localhost ~]# edquota xiaoshui      # soft is the soft limit, hard is the hard limit
Disk quotas for user xiaoshui (uid 500):
  Filesystem      blocks      soft      hard    inodes    soft    hard
  /dev/sdb1           32    300000    500000         8       0       0
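The same limits can also be set non-interactively; below is a sketch using setquota with the numbers from the edquota screen above (this command is not part of the original walkthrough):

[root@localhost ~]# setquota -u xiaoshui 300000 500000 0 0 /home    # block soft/hard limits, then inode soft/hard limits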

7. Switch to the xiaoshui user to test whether the settings above take effect.

[root@localhost ~]# su - xiaoshui                     # switch to user xiaoshui
[xiaoshui@localhost ~]$ pwd
/home/xiaoshui
[xiaoshui@localhost ~]$ dd if=/dev/zero of=file1 bs=1M count=290    # first create a 290MB file
290+0 records in
290+0 records out
304087040 bytes (304 MB) copied, 1.08561 s, 280 MB/s                # created successfully, no error reported
[xiaoshui@localhost ~]$ dd if=/dev/zero of=file1 bs=1M count=300    # overwrite file1 with a 300MB file
sdb1: warning, user block quota exceeded.                           # warning!
300+0 records in
300+0 records out
314572800 bytes (315 MB) copied, 0.951662 s, 331 MB/s
[xiaoshui@localhost ~]$ ll -h                         # the 300MB file was still written
total 300M
-rw-rw-r-- 1 xiaoshui xiaoshui 300M Aug 26 08:16 file1
[xiaoshui@localhost ~]$ dd if=/dev/zero of=file1 bs=1M count=500    # overwrite file1 again with a 500MB file
sdb1: warning, user block quota exceeded.
sdb1: write failed, user block limit reached.         # the user's block hard limit has been reached
dd: writing `file1': Disk quota exceeded
489+0 records in
488+0 records out
511967232 bytes (512 MB) copied, 2.344 s, 218 MB/s
[xiaoshui@localhost ~]$ ll -h                         # no 500MB file: the quota stopped the dd process
total 489M
-rw-rw-r-- 1 xiaoshui xiaoshui 489M Aug 26 08:19 file1

8. Report quota status

[root@localhost ~]# quota xiaoshui    # blocks shows the user's current block usage, which has exceeded the soft limit
Disk quotas for user xiaoshui (uid 500):
     Filesystem  blocks    quota    limit   grace   files   quota   limit   grace
      /dev/sdb1  500004*  300000   500000   6days       1       0       0
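For an overview of every user on the file system (repquota was listed in the summary above but not demonstrated in the original steps), a minimal sketch:

[root@localhost ~]# repquota /home       # block and inode usage and limits for all users on /home
[root@localhost ~]# repquota -a          # the same report for every quota-enabled file system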

II. RAID under Linux

1. What is RAID

RAID: Redundant Arrays of Inexpensive (Independent) Disks. Multiple disks are combined into an "array" to provide better performance, redundancy, or both.

Advantages:

Improved I/O capability:

Disks are read and written in parallel

Improved durability:

Achieved through disk redundancy

2. RAID levels

RAID-0: striped volume (stripe)

Read and write performance improved

Available space: N * min(S1, S2, ...)

No fault tolerance

Minimum number of disks: 2, 2+

RAID-1: mirrored volume (mirror)

Read performance improved; write performance slightly decreased

Available space: 1 * min(S1, S2, ...)

Has redundancy

Minimum number of disks: 2, 2N

RAID-2

..

RAID-5

Read and write performance improved

Available space: (N-1) * min(S1, S2, ...) (see the worked example after this list)

Fault tolerance: parity; at most 1 disk may fail

Minimum number of disks: 3, 3+

RAID-6

Read and write performance improved

Available space: (N-2) * min(S1, S2, ...)

Fault tolerance: at most 2 disks may fail

Minimum number of disks: 4, 4+

RAID-10

Read and write performance improved

Available space: N * min(S1, S2, ...) / 2

Fault tolerance: each mirrored pair can lose at most one disk

Minimum number of disks: 4, 4+

RAID-01

Stripes (RAID-0) are built first, then mirrored with RAID-1
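As a quick check of the available-space formulas, take the RAID-5 array built in the exercise below: it has three active member partitions of about 100M each plus one spare, so the usable space is (3-1) * 100M = 200M. This matches the Array Size of 202752 (roughly 198 MiB) that mdadm reports; the spare contributes no capacity until it replaces a failed member.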

3. Software implementation of RAID 5

1. Prepare the disk partitions

[root@localhost ~]# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0   80G  0 disk
├─sda1   8:1    0  488M  0 part /boot
├─sda2   8:2    0   40G  0 part /
├─sda3   8:3    0   20G  0 part /usr
├─sda4   8:4    0  512B  0 part
├─sda5   8:5    0    2G  0 part [SWAP]
└─sda6   8:6    0    1M  0 part
sdb      8:16   0   80G  0 disk
└─sdb1   8:17   0    1G  0 part
sdc      8:32   0   20G  0 disk
sdd      8:48   0   20G  0 disk
├─sdd1   8:49   0  100M  0 part
├─sdd2   8:50   0  100M  0 part
├─sdd3   8:51   0  100M  0 part
├─sdd4   8:52   0    1K  0 part
└─sdd5   8:53   0   99M  0 part
sde      8:64   0   20G  0 disk
└─sde1   8:65   0  100M  0 part
sdf      8:80   0   20G  0 disk
sr0     11:0    1  7.2G  0 rom

Prepare four disk partitions and set the partition type to the RAID type (fd). Here they are /dev/sdd{1,2,3} and /dev/sde1.
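Setting a partition's type to fd is done interactively inside fdisk; a rough sketch of the key prompts (the exact wording differs between fdisk versions, so treat this as illustrative only):

[root@localhost ~]# fdisk /dev/sdd
Command (m for help): t                      # change a partition's type
Partition number (1-5): 1
Hex code (type L to list codes): fd          # fd = Linux raid autodetect
Command (m for help): w                      # write the table and exit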

2. Create RAID 5

[root@localhost ~]# mdadm -C /dev/md0 -a yes -l 5 -n 3 -x 1 /dev/sd{d{1,2,3},e1}
mdadm: /dev/sdd1 appears to be part of a raid array:
    level=raid5 devices=4 ctime=Mon Aug 29 19:43:21 2016
mdadm: /dev/sdd2 appears to be part of a raid array:
    level=raid0 devices=0 ctime=Thu Jan  1 08:00:00 1970
mdadm: partition table exists on /dev/sdd2 but will be lost or
       meaningless after creating array
Continue creating array? yes
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
[root@localhost ~]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Thu Sep  1 14:39:53 2016
     Raid Level : raid5
     Array Size : 202752 (198.03 MiB 207.62 MB)
  Used Dev Size : 101376 (99.02 MiB 103.81 MB)
   Raid Devices : 3
  Total Devices : 4
    Persistence : Superblock is persistent
    Update Time : Thu Sep  1 14:39:54 2016
          State : clean
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1
         Layout : left-symmetric
     Chunk Size : 512K
           Name : localhost.localdomain:0  (local to host localhost.localdomain)
           UUID : f4eaf910:514ae7ab:6dd6d28f:b6cfcc10
         Events : 18

    Number   Major   Minor   RaidDevice State
       0       8       49        0      active sync   /dev/sdd1
       1       8       50        1      active sync   /dev/sdd2
       4       8       51        2      active sync   /dev/sdd3
       3       8       65        -      spare   /dev/sde1
[root@localhost ~]#
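While the array is resyncing (and at any later time), its state can also be watched through the kernel's md status file; this extra check is not part of the original steps:

[root@localhost ~]# cat /proc/mdstat      # kernel view of all md arrays, their member devices and any resync progress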

3. Format the file system

[root@localhost ~]# mkfs.ext4 /dev/md0
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=512 blocks, Stripe width=1024 blocks
50800 inodes, 202752 blocks
10137 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=33816576
25 block groups
8192 blocks per group, 8192 fragments per group
2032 inodes per group
Superblock backups stored on blocks:
        8193, 24577, 40961, 57345, 73729

Allocating group tables: done
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

4. Generate the configuration file and start RAID 5, then mount it

[root@localhost ~]# mdadm -Ds /dev/md0 > /etc/mdadm.conf
[root@localhost ~]# mdadm -A /dev/md0
mdadm: /dev/md0 has been started with 3 drives and 1 spare.
[root@localhost ~]# mount /dev/md0 /testdir/
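To make the mount persist across reboots, an entry could also be added to /etc/fstab; a minimal sketch (the article itself only mounts manually, and using the UUID reported by blkid /dev/md0 would be more robust than the device name):

/dev/md0    /testdir    ext4    defaults    0 0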

5. Test

[root@localhost ~]# cd /testdir/
[root@localhost testdir]# ls
lost+found
[root@localhost testdir]# cp /etc/* .          # copy some files onto the mounted array
[root@localhost testdir]# mdadm -D /dev/md0    # view the RAID running status
/dev/md0:
        Version : 1.2
  Creation Time : Thu Sep  1 14:39:53 2016
     Raid Level : raid5
     Array Size : 202752 (198.03 MiB 207.62 MB)
  Used Dev Size : 101376 (99.02 MiB 103.81 MB)
   Raid Devices : 3
  Total Devices : 4
    Persistence : Superblock is persistent
    Update Time : Thu Sep  1 14:47:58 2016
          State : clean
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1
         Layout : left-symmetric
     Chunk Size : 512K
           Name : localhost.localdomain:0  (local to host localhost.localdomain)
           UUID : f4eaf910:514ae7ab:6dd6d28f:b6cfcc10
         Events : 18

    Number   Major   Minor   RaidDevice State           # the three member partitions are running normally
       0       8       49        0      active sync   /dev/sdd1
       1       8       50        1      active sync   /dev/sdd2
       4       8       51        2      active sync   /dev/sdd3
       3       8       65        -      spare   /dev/sde1
[root@localhost testdir]# mdadm /dev/md0 -f /dev/sdd1    # simulate damage to /dev/sdd1
mdadm: set /dev/sdd1 faulty in /dev/md0
[root@localhost testdir]# mdadm -D /dev/md0    # check the disk status again
/dev/md0:
        Version : 1.2
  Creation Time : Thu Sep  1 14:39:53 2016
     Raid Level : raid5
     Array Size : 202752 (198.03 MiB 207.62 MB)
  Used Dev Size : 101376 (99.02 MiB 103.81 MB)
   Raid Devices : 3
  Total Devices : 4
    Persistence : Superblock is persistent
    Update Time : Thu Sep  1 14:51:01 2016
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 1
  Spare Devices : 0
         Layout : left-symmetric
     Chunk Size : 512K
           Name : localhost.localdomain:0  (local to host localhost.localdomain)
           UUID : f4eaf910:514ae7ab:6dd6d28f:b6cfcc10
         Events : 37

    Number   Major   Minor   RaidDevice State
       3       8       65        0      active sync   /dev/sde1
       1       8       50        1      active sync   /dev/sdd2
       4       8       51        2      active sync   /dev/sdd3
       0       8       49        -      faulty   /dev/sdd1
# /dev/sdd1 has been marked as faulty, and the spare /dev/sde1 was brought in automatically
[root@localhost testdir]# cat fstab            # files can still be read normally
#
# /etc/fstab
# Created by anaconda on Mon Jul 25 12:06:44 2016
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=136f7cbb-d8f6-439b-aa73-3958bd33b05f /     xfs  defaults 0 0
UUID=bf3d4b2f-4629-4fd7-8d70-a21302111564 /boot xfs  defaults 0 0
UUID=cbf33183-93bf-4b4f-81c0-ea6ae91cd4f6 /usr  xfs  defaults 0 0
UUID=5e11b173-f7e2-4994-95b9-55cc4c41f20b swap  swap defaults 0 0
[root@localhost testdir]# mdadm /dev/md0 -r /dev/sdd1    # simulate unplugging the disk
[root@localhost testdir]# mdadm -D /dev/md0
... omitted ...
    Number   Major   Minor   RaidDevice State            # /dev/sdd1 is no longer listed
       3       8       65        0      active sync   /dev/sde1
       1       8       50        1      active sync   /dev/sdd2
       4       8       51        2      active sync   /dev/sdd3
[root@localhost testdir]# mdadm /dev/md0 -a /dev/sdd1    # add the disk back
mdadm: added /dev/sdd1
[root@localhost testdir]# mdadm -D /dev/md0    # view the information again
... omitted ...
    Number   Major   Minor   RaidDevice State
       3       8       65        0      active sync   /dev/sde1
       1       8       50        1      active sync   /dev/sdd2
       4       8       51        2      active sync   /dev/sdd3
       5       8       49        -      spare   /dev/sdd1
# /dev/sdd1 has rejoined the array as a spare
[root@localhost testdir]# mdadm /dev/md0 -f /dev/sdd2
# because RAID-5 keeps parity, one disk is allowed to fail and the original data can be recalculated from the parity
mdadm: set /dev/sdd2 faulty in /dev/md0
[root@localhost testdir]# mdadm -D /dev/md0
... omitted ...
    Number   Major   Minor   RaidDevice State
       5       8       49        0      active sync   /dev/sdd1
       -       0        0        1      removed
       4       8       51        2      active sync   /dev/sdd3
       1       8       50        -      faulty   /dev/sdd2
       3       8       65        -      faulty   /dev/sde1
# the data files can still be read in this state, but performance degrades, so the array should be repaired immediately
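If the test array is no longer needed, it can be torn down; a minimal sketch, not part of the original exercise (the --zero-superblock step wipes the RAID metadata so the partitions can be reused):

[root@localhost ~]# umount /testdir
[root@localhost ~]# mdadm -S /dev/md0                                   # stop the array
[root@localhost ~]# mdadm --zero-superblock /dev/sdd{1,2,3} /dev/sde1   # clear the member superblocks
[root@localhost ~]# rm -f /etc/mdadm.conf                               # remove the saved configuration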
