This article gives a clear, easy-to-follow introduction to increasing capacity and deleting volumes with LVM on Linux. Follow the steps below to learn how both operations are done.
Enlarging LV capacity involves the lvresize command in LVM management. Let's first create a volume group, VolGroup02, on the disk /dev/sdc (8 GB in size). When creating the logical volume (LV), we deliberately use only a small part of the space. The details are as follows.
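Before the vgdisplay output below, the physical volume and volume group would already have been created. A minimal sketch of those preparatory commands follows; the whole-disk PV on /dev/sdc and the 32 MiB extent size are assumptions chosen to match the vgdisplay output shown next:
# Label /dev/sdc as an LVM physical volume (assumption: whole-disk PV rather than a partition)
pvcreate /dev/sdc
# Create volume group VolGroup02 with a 32 MiB physical extent size,
# matching the PE Size reported by vgdisplay below
vgcreate -s 32M VolGroup02 /dev/sdc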
The code is as follows:
[root@localhost ~] # vgdisplay
--- Volume group ---
VG Name VolGroup02
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 1
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 7.97 GiB
PE Size 32.00 MiB
Total PE 255
Alloc PE / Size 0 / 0
Free PE / Size 255 / 7.97 GiB
VG UUID SETgjx-dobd-Uayt-AWgN-HKID-hsYe-tEotIS
[root@localhost] # lvcreate -L 7.97 -n LogVol00 VolGroup02
Rounding up size to full physical extent 32.00 MiB
Logical volume "LogVol00" created
[root@localhost] # mkfs -t ext4 /dev/VolGroup02/LogVol00
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=0 blocks, Stripe width=0 blocks
8192 inodes, 32768 blocks
1638 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=33554432
4 block groups
8192 blocks per group, 8192 fragments per group
2048 inodes per group
Superblock backups stored on blocks:
8193, 24577
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 23 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[root@localhost ~] # cd /
[root@localhost /] # mkdir /u01
[root@localhost /] # mount /dev/VolGroup02/LogVol00 /u01
[root@localhost /] # vi /etc/fstab
#
# / etc/fstab
# Created by anaconda on Mon Aug 17 15:08:21 2015
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab (5), findfs (8), mount (8) and/or blkid (8) for more info
#
UUID=3440ad55-6486-45ed-876f-e942b08013bf / ext4 defaults 1 1
UUID=d3abb655-db70-4c42-967d-57c421abfda0 / boot ext4 defaults 1 2
UUID=660624ff-335d-42ca-b779-f130a80d9da8 / home ext4 defaults 1 2
UUID=6f534bf0-e486-4937-84ae-ed1221cf34f1 swap swap defaults 0 0
/dev/VolGroup02/LogVol00 /u01 ext4 defaults 1 1
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
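As a quick sanity check (a suggested step, not part of the original transcript), the new fstab entry can be tested without rebooting:
# Re-read /etc/fstab and mount anything not yet mounted; errors here point to a bad entry
mount -a
# Confirm the logical volume is mounted on /u01
df -h /u01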
At this point, if we want to enlarge the size of the file system / U01, we need to use lvresize to enlarge the LV capacity
The code is as follows:
[root@localhost ~] # lvscan
ACTIVE '/dev/VolGroup02/LogVol00' [32.00 MiB] inherit
[root@localhost ~] # lvdisplay /dev/VolGroup02/LogVol00
--- Logical volume ---
LV Path /dev/VolGroup02/LogVol00
LV Name LogVol00
VG Name VolGroup02
LV UUID OCHwx1-EL9P-6C5J-RNuz-2Xu5-4215-H3xt5s
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2015-09-01 10:50:10 + 0800
LV Status available
# open 1
LV Size 32.00 MiB
Current LE 1
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0
[root@localhost] # lvresize -L +7.89G /dev/VolGroup02/LogVol00
Rounding size to boundary between physical extents: 7.91 GiB
Size of logical volume VolGroup02/LogVol00 changed from 32.00 MiB (1 extents) to 7.94 GiB (254 extents).
Logical volume LogVol00 successfully resized
After enlarging the LV, the ext4 filesystem on it still has to be grown to match. The code is as follows:
[root@localhost] # resize2fs -p /dev/VolGroup02/LogVol00
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/VolGroup02/LogVol00 is mounted on /u01; on-line resizing required
Old desc_blocks = 1, new_desc_blocks = 32
Performing an on-line resize of /dev/VolGroup02/LogVol00 to 8323072 (1k) blocks.
The filesystem on /dev/VolGroup02/LogVol00 is now 8323072 blocks long.
[root@localhost] # df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 27G 6.0G 20G 24% /
tmpfs 5.9G 0 5.9G 0% /dev/shm
/dev/sda1 477M 32M 420M 8% /boot
/dev/sdb1 99G 60M 94G 1% /home
/dev/mapper/VolGroup02-LogVol00
7.7G 2.7M 7.3G 1% /u01
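As a side note, on reasonably recent LVM versions the two steps above (lvresize followed by resize2fs) can usually be combined by passing -r/--resizefs to lvresize, which grows the filesystem in the same operation. A minimal sketch, assuming the same LV and size as above:
# Grow the LV by 7.89 GiB and resize the ext4 filesystem in one step
lvresize -r -L +7.89G /dev/VolGroup02/LogVol00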
If we now expand the disk from 8 GB to 10 GB on the virtual machine, how do we take advantage of the added disk space?
The code is as follows:
[root@localhost u01] # fdisk -l
Disk /dev/sda: 42.9 GB, 42949672960 bytes
64 heads, 32 sectors/track, 40960 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000462cf
Device Boot Start End Blocks Id System
/dev/sda1 * 2 501 512000 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 502 28672 28847104 83 Linux
Partition 2 does not end on cylinder boundary.
/dev/sda3 28673 40960 12582912 82 Linux swap / Solaris
Partition 3 does not end on cylinder boundary.
Disk /dev/sdc: 8589 MB, 8589934592 bytes
64 heads, 32 sectors/track, 8192 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x98c391fe
Device Boot Start End Blocks Id System
/dev/sdc1 1 8192 8388592 83 Linux
Disk /dev/sdb: 107.4 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0002488e
Device Boot Start End Blocks Id System
/dev/sdb1 1 13055 104856576 83 Linux
Disk /dev/mapper/VolGroup02-LogVol00: 33 MB, 33554432 bytes
255 heads, 63 sectors/track, 4 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
The size change of /dev/sdc will not become visible until after a reboot.
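A typical way to pick up the extra space without a full reboot is sketched below. This is only an outline: the rescan path assumes /dev/sdc is a SCSI or virtual SCSI disk, and whether the PV sits on the whole disk or on /dev/sdc1 depends on the actual setup.
# Ask the kernel to re-read the device size (assumes a SCSI-style disk at /dev/sdc)
echo 1 > /sys/block/sdc/device/rescan
# If the PV is the whole disk, let LVM pick up the new size directly
pvresize /dev/sdc
# If the PV is a partition (/dev/sdc1), the partition must be grown first
# (e.g. with fdisk or parted), then: pvresize /dev/sdc1
# The extra space then appears as free PE in the volume group and can be
# given to the LV with lvresize as shown earlier
vgdisplay VolGroup02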
Deleting a physical volume
Deleting a volume involves the vgreduce and pvremove commands in LVM management:
vgreduce: reduces volume group capacity by removing physical volumes from an LVM volume group. Note: the last remaining physical volume in a volume group cannot be removed.
pvremove: deletes an existing physical volume. Running pvremove wipes the LVM label from the partition so that it is no longer treated as a physical volume.
Before deleting a physical volume (PV), make sure you have a clear picture of the server's partition, volume group, physical volume, and logical volume information, to avoid errors and misoperation.
The code is as follows:
[root@localhost ~] # fdisk -l
Disk /dev/sda: 128.8 GB, 128849018880 bytes
255 heads, 63 sectors/track, 15665 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 10443 83778975 8e Linux LVM
/dev/sda3 10444 15665 41945715 83 Linux
[root@localhost ~] # vgscan
Reading all physical volumes. This may take a while...
Found volume group "VolGroup00" using metadata type lvm2
[root@localhost ~] # pvscan
PV /dev/sda2 VG VolGroup00 lvm2 [79.88 GB / 0 free]
PV /dev/sda3 VG VolGroup00 lvm2 [40.00 GB / 40.00 GB free]
Total: 2 [119.88 GB] / in use: 2 [119.88 GB] / in no VG: 0 [0]
[root@localhost ~] # pvdisplay
--- Physical volume ---
PV Name /dev/sda2
VG Name VolGroup00
PV Size 79.90 GB / not usable 23.41 MB
Allocatable yes (but full)
PE Size (KByte) 32768
Total PE 2556
Free PE 0
Allocated PE 2556
PV UUID YGtB2J-ZKJr-mV62-NluQ-2DGy-vuUT-cCc1lo
--- Physical volume ---
PV Name /dev/sda3
VG Name VolGroup00
PV Size 40.00 GB / not usable 2.61 MB
Allocatable yes
PE Size (KByte) 32768
Total PE 1280
Free PE 1280
Allocated PE 0
PV UUID wsnv13-7j1H-SH8q-hl6k-HpNc-x4WU-gM7LzW
[root@localhost ~] # lvscan
ACTIVE '/dev/VolGroup00/LogVol00' [77.91 GB] inherit
ACTIVE '/dev/VolGroup00/LogVol01' [1.97 GB] inherit
[root@localhost ~] # lvs
LV VG Attr LSize Origin Snap% Move Log Copy% Convert
LogVol00 VolGroup00 -wi-ao 77.91G
LogVol01 VolGroup00 -wi-ao 1.97G
vgreduce parameters (a usage sketch follows this list):
-a: if no physical volume is specified on the command line, remove all empty physical volumes
--removemissing: remove missing physical volumes from the volume group, restoring the volume group to a normal state.
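A hedged sketch of how these two forms would be invoked; this is illustrative only, while the transcript below removes one specific PV instead:
# Remove all physical volumes in VolGroup00 that hold no allocated extents
vgreduce -a VolGroup00
# Drop physical volumes that have gone missing (e.g. a failed disk)
# so the volume group returns to a consistent state
vgreduce --removemissing VolGroup00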
The code is as follows:
[root@localhost ~] # vgreduce VolGroup00 /dev/sda3
Removed "/dev/sda3" from volume group "VolGroup00"
[root@localhost ~] #
The code is as follows:
[root@localhost ~] # pvscan
PV /dev/sda2 VG VolGroup00 lvm2 [79.88 GB / 0 free]
PV /dev/sda3 lvm2 [40.00 GB]
Total: 2 [119.88 GB] / in use: 1 [79.88 GB] / in no VG: 1 [40.00 GB]
[root@localhost ~] # pvs
PV VG Fmt Attr PSize PFree
/dev/sda2 VolGroup00 lvm2 a- 79.88G 0
/dev/sda3 lvm2 a- 40.00G 40.00G
[root@localhost ~] # pvremove /dev/sda3
Labels on physical volume "/dev/sda3" successfully wiped
[root@localhost ~] # pvscan
PV /dev/sda2 VG VolGroup00 lvm2 [79.88 GB / 0 free]
Total: 1 [79.88 GB] / in use: 1 [79.88 GB] / in no VG: 0 [0]
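One caveat worth noting (an addition, not from the original transcript): /dev/sda3 could be removed directly only because it held no allocated extents. If the PV still carried data, its extents would first have to be migrated to the remaining PVs in the volume group, for example:
# Move all allocated extents off /dev/sda3 to other PVs in the same VG
pvmove /dev/sda3
# Then the PV can be removed from the VG and wiped
vgreduce VolGroup00 /dev/sda3
pvremove /dev/sda3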
Thank you for reading. That concludes this introduction to increasing capacity and deleting volumes with LVM on Linux. After working through this article you should have a deeper understanding of both operations; the specifics, of course, still need to be verified in practice.