This article covers:
- Online capacity expansion and online data migration
- Creating an LVM (LVM stands for Logical Volume Manager)
- Expanding and shrinking a VG
- Expanding an LV
- Expanding the filesystem, including a production case of online expansion with a 300G + 900G disk and single-disk RAID0
Two ways to put a disk into service:
Basic partition (MBR | GPT) -> Filesystem -> mount
Logical volume (LVM) -> Filesystem -> mount
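A minimal, hedged sketch of the two paths side by side (the device names /dev/sdb*, sizes, and mount points are illustrative assumptions, not from the article):

# Path 1: basic partition -> filesystem -> mount
fdisk /dev/sdb                      # create a partition, e.g. /dev/sdb1
mkfs.ext4 /dev/sdb1                 # put a filesystem on the partition
mount /dev/sdb1 /mnt/disk           # mount it

# Path 2: LVM -> filesystem -> mount
pvcreate /dev/sdb                   # the whole disk becomes a PV
vgcreate vg_demo /dev/sdb           # a VG on top of the PV
lvcreate -L 1G -n lv_demo vg_demo   # carve out an LV
mkfs.ext4 /dev/vg_demo/lv_demo      # filesystem on the LV
mount /dev/vg_demo/lv_demo /mnt/lv  # mount it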
Terminology:
VG: volume group
Extents: the allocation units inside a VG
PE: Physical Extent, the physical allocation unit; 4M, 8M, 16M, 32M or 64M; set with -s 8m when creating the VG (see the sketch below)
LE: Logical Extent, the logical allocation unit
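A minimal, hedged example of setting the PE size at VG creation time (vg_demo is an assumed name; /dev/vdd is borrowed from the transcripts below):

[root@server0 ~]# vgcreate -s 8m vg_demo /dev/vdd     # 8 MiB extents instead of the 4 MiB default
[root@server0 ~]# vgdisplay vg_demo | grep 'PE Size'  # should report: PE Size 8.00 MiB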
An example from actual production work (at Sina): boot Linux in a VMware environment; adding a new partition requires root privileges.
3.1 Run [fdisk -l]; the highest existing partition is /dev/sda3, so the newly created partition will be sda4.
3.2 Enter [fdisk /dev/sda]:
3.2.1 Enter [m] at the command prompt to list the available commands.
3.2.2 Enter [n] to add a new partition.
3.2.3 Enter [p] to create a primary partition.
3.2.4 Press [Enter] to accept the default start cylinder.
3.2.5 Press [Enter] to accept the default size, so that no space is wasted.
3.2.6 Enter [w] to write the changes to disk.
3.3 Enter [reboot] to restart Linux; this is required so the kernel re-reads the partition table, otherwise /dev/sda4 cannot be formatted.
3.4 Only after that does the new partition, e.g. /dev/sda4, appear under /dev/.
3.5 [mkfs.ext2 /dev/sda4] formats the partition.
3.6 Create a disk4 directory under the root directory.
3.7 [mount /dev/sda4 /disk4/] mounts the partition on /disk4/.
3.8 Edit /etc/fstab with vim, add the line [/dev/sda4 /disk4 ext2 defaults 0 0], and save it so the partition is mounted automatically at boot. A condensed transcript of the whole flow is sketched below.
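Condensed, hedged transcript of steps 3.1-3.8 (the prompt, the partition number, and the partprobe alternative are assumptions or notes beyond the original):

[root@localhost ~]# fdisk -l /dev/sda      # confirm sda3 is the last partition
[root@localhost ~]# fdisk /dev/sda         # n -> p -> accept defaults -> w
[root@localhost ~]# reboot                 # on modern kernels, partprobe /dev/sda may work without a reboot
[root@localhost ~]# ls /dev/sda4           # the new partition now exists
[root@localhost ~]# mkfs.ext2 /dev/sda4    # format
[root@localhost ~]# mkdir /disk4
[root@localhost ~]# mount /dev/sda4 /disk4/
[root@localhost ~]# echo '/dev/sda4 /disk4 ext2 defaults 0 0' >> /etc/fstab   # mount at boot
[root@localhost ~]# mount -a               # verify the fstab entry parses cleanly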
Note: on a virtual machine that has snapshots, add the new hard disk after the software has been installed.
I. Creating an LVM
Prepare the physical disks. They can be whole disks or existing partitions, for example: /dev/sdb, /dev/sdc1
[root@server0 ~]# ll /dev/vd{c,d,e}
brw-rw----. 1 root disk 253, 32 Jun 6 17:38 /dev/vdc
brw-rw----. 1 root disk 253, 48 Jun 6 17:38 /dev/vdd
brw-rw----. 1 root disk 253, 64 Jun 6 17:38 /dev/vde
1. Create the PV
[root@server0 ~]# pvcreate /dev/vdd
  Physical volume "/dev/vdd" successfully created
[root@server0 ~]# pvscan
  PV /dev/vdd    lvm2 [2.00 GiB]
  Total: 1 [2.00 GiB] / in use: 0 [0] / in no VG: 1 [2.00 GiB]
[root@server0 ~]# pvs
  PV         VG   Fmt  Attr PSize PFree
  /dev/vdd        lvm2 ---  2.00g 2.00g
2. Create the VG
[root@server0 ~]# vgcreate vg1 /dev/vdd
  Volume group "vg1" successfully created
[root@server0 ~]# vgs
  VG  #PV #LV #SN Attr   VSize VFree
  vg1   1   0   0 wz--n- 2.00g 2.00g
[root@server0 ~]# vgscan
  Reading all physical volumes. This may take a while...
  Found volume group "vg1" using metadata type lvm2
[root@server0 ~]# vgdisplay vg1
  --- Volume group ---
  VG Name               vg1
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               2.00 GiB
  PE Size               4.00 MiB
  Total PE              511
  Alloc PE / Size       0 / 0
  Free  PE / Size       511 / 2.00 GiB
  VG UUID               7E4tlj-l0a2-ph62-OytH-eaq7-58K6-2S4n8V
3. Create the LV
[root@server0 ~]# lvcreate -l 10 -n lv1 vg1
[root@server0 ~]# lvcreate -L 200m -n lv2 vg1
(-l counts extents; -L takes an explicit size, rounded up to a whole number of extents)
[root@server0 ~]# lvscan
  ACTIVE   '/dev/vg1/lv1' [640.00 MiB] inherit
  ACTIVE   '/dev/vg1/lv2' [256.00 MiB] inherit
4. Create a filesystem and mount it
[root@server0 ~]# mkfs.xfs /dev/vg1/lv1
[root@server0 ~]# mkfs.ext4 /dev/vg1/lv2
[root@server0 ~]# mkdir /mnt/lv1 /mnt/lv2
[root@server0 ~]# vim /etc/fstab
/dev/vg1/lv1 /mnt/lv1 xfs  defaults 0 0
/dev/vg1/lv2 /mnt/lv2 ext4 defaults 0 0
[root@server0 ~]# mount -a
[root@server0 ~]# df
Filesystem          1K-blocks  Used Available Use% Mounted on
/dev/mapper/vg1-lv1    651948 32928    619020   6% /mnt/lv1
/dev/mapper/vg1-lv2    245671  2062    226406   1% /mnt/lv2
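A quick, hedged way to confirm the finished stack (the lsblk output shape below is approximate, not captured from the article's machine):

[root@server0 ~]# lsblk /dev/vdd
NAME      MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vdd       253:48   0   2G  0 disk
├─vg1-lv1 252:0    0 640M  0 lvm  /mnt/lv1
└─vg1-lv2 252:1    0 256M  0 lvm  /mnt/lv2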
II. VG management
== Expanding a VG: vgextend ==
1. Create the PV
[root@server0 ~]# pvcreate /dev/vde
2. Extend the VG
[root@server0 ~]# vgextend vg1 /dev/vde
  Volume group "vg1" successfully extended
[root@server0 ~]# vgs
  VG  #PV #LV #SN Attr   VSize VFree
  vg1   2   2   0 wz--n- 3.99g 3.76g
== Reducing a VG: vgreduce ==
Migrate the data off the PV first.
1. View the PV usage in the current VG
[root@server0 ~]# pvs
  PV         VG   Fmt  Attr PSize PFree
  /dev/vdd   vg1  lvm2 a--  2.00g 1.76g
  /dev/vde   vg1  lvm2 a--  2.00g 2.00g
2. Use pvmove to migrate the data to the other PVs
[root@server0 ~]# pvmove /dev/vdd
  /dev/vdd: Moved: 16.7%
  /dev/vdd: Moved: 100.0%
[root@server0 ~]# pvs
  PV         VG   Fmt  Attr PSize PFree
  /dev/vdd   vg1  lvm2 a--  2.00g 2.00g
  /dev/vde   vg1  lvm2 a--  2.00g 1.76g
3. Remove the PV from the VG with vgreduce
[root@server0 ~]# vgreduce vg1 /dev/vdd
  Removed "/dev/vdd" from volume group "vg1"
[root@server0 ~]# vgs
  VG  #PV #LV #SN Attr   VSize VFree
  vg1   1   2   0 wz--n- 2.00g 1.76g
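If the disk is to leave LVM entirely, the PV label can be wiped as well. A hedged follow-up step (assuming /dev/vdd is no longer in any VG):

[root@server0 ~]# pvremove /dev/vdd   # erase the LVM label so the disk can be reused elsewhere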
III. Expansion of LV
1. Extend the LV
[root@server0 ~]# vgs
  VG  #PV #LV #SN Attr   VSize VFree
  vg1   2   2   0 wz--n- 1.88g 1.00g
Four alternative forms (absolute vs. relative, explicit size vs. extent count):
[root@server0 ~]# lvextend -L 800m /dev/vg1/lv1    # grow to 800 MiB
[root@server0 ~]# lvextend -L +800m /dev/vg1/lv1   # grow by 800 MiB
[root@server0 ~]# lvextend -l 15 /dev/vg1/lv1      # grow to 15 extents
[root@server0 ~]# lvextend -l +15 /dev/vg1/lv1     # grow by 15 extents
Percentages of the remaining VG space also work, e.g. lvextend -l +50%FREE /dev/vg1/lv1.
[root@server0 ~]# lvscan
  ACTIVE   '/dev/vg1/lv1' [768.00 MiB] inherit
  ACTIVE   '/dev/vg1/lv2' [512.00 MiB] inherit
2. Extend the filesystem
[root@server0 ~]# df -Th
/dev/mapper/vg1-lv1 xfs  637M 67M 570M 11% /mnt/lv1
/dev/mapper/vg1-lv2 ext4 240M 32M 192M 15% /mnt/lv2
a. xfs
[root@server0 ~]# xfs_growfs /dev/vg1/lv1
b. ext2/3/4
[root@server0 ~]# resize2fs /dev/vg1/lv2
[root@server0 ~]# df -Th
Filesystem          Type Size Used Avail Use% Mounted on
/dev/mapper/vg1-lv1 xfs  765M  67M  698M   9% /mnt/lv1
/dev/mapper/vg1-lv2 ext4 488M  32M  429M   7% /mnt/lv2
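A hedged convenience worth knowing: lvextend accepts -r (--resizefs), which grows the filesystem in the same step, so the separate xfs_growfs/resize2fs call can usually be skipped:

[root@server0 ~]# lvextend -r -L +200m /dev/vg1/lv2   # extend the LV and grow the ext4 filesystem in one step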
Appendix: quick reference and lab notes.

1. Create the PV
[root@localhost ~]# pvcreate /dev/sda5 /dev/sda6
[root@localhost ~]# pvdisplay
[root@localhost ~]# pvs
2. Create the VG
[root@localhost ~]# vgcreate vg0 /dev/sda5 /dev/sda6
[root@localhost ~]# vgdisplay
[root@localhost ~]# vgs
3. Create the LV
[root@localhost ~]# lvcreate -L 250M -n lv0 vg0
[root@localhost ~]# lvdisplay
[root@localhost ~]# lvs
Note: a PV cannot span VGs; an LV cannot span VGs; a system can hold multiple VGs; a VG can hold multiple LVs.

Deleting an LVM stack (reverse order):
1. umount /dev/vg0/lv0
2. [root@localhost ~]# lvremove /dev/vg0/lv0
3. [root@localhost ~]# vgremove vg0
4. [root@localhost ~]# pvremove /dev/sda{5,6}

What should be done in an enterprise when the root partition is full? On a stock CentOS install the root filesystem lives on the LV /dev/centos/root:
df -h                              # confirm / is full
pvcreate /dev/sdb                  # turn the newly added disk into a PV
vgs
vgextend centos /dev/sdb           # add it to the centos VG
lvscan
lvextend -L +5G /dev/centos/root   # grow the root LV
df -T                              # the root filesystem is xfs
xfs_growfs /dev/centos/root        # grow the filesystem
df -Th                             # verify; repeat lvextend -L +5G and xfs_growfs whenever more space is needed

Lab: prepare three hard disks and do not partition them. The idea: PV -> VG -> LV.
lsblk                              # see the new disks
## Create the PV
pvcreate /dev/sdb
pvscan                             # view the current PVs
pvs
## Create the VG
vgcreate datavg /dev/sdb           # datavg is the VG name
vgscan
pvscan                             # output shows PV /dev/sdb added to VG datavg, with XX capacity free
## Create the LVs
lvcreate -L 200m -n lv1 datavg     # -L sets the LV size (200m); lv1 is the name; space comes from datavg
lvcreate -L 300m -n lv2 datavg
lvscan                             # output shows /dev/datavg/lv1 200m and /dev/datavg/lv2 300m
lsblk                              # /dev/sdb now carries the two LVs
## Format, create filesystems, mount
mkfs.xfs /dev/datavg/lv1
mkfs.ext4 /dev/datavg/lv2
mkdir /mnt/lv1 /mnt/lv2
mount /dev/datavg/lv1 /mnt/lv1
mount /dev/datavg/lv2 /mnt/lv2
mount -a
df -h                              # df -Th also shows the TYPE column; the LVM setup is complete
## Expand the VG
vgs
pvcreate /dev/sdc                  # turn the disk into a PV first
vgextend datavg /dev/sdc
pvscan                             # check
vgs                                # VFree has grown; the expansion worked; continue expanding
pvcreate /dev/sdd
vgextend datavg /dev/sdd
vgs                                # VFree keeps growing
## Reduce the VG: migrate the data first!
pvs                                # view usage
pvmove /dev/sdb /dev/sdc           # move sdb's data onto sdc
vgreduce datavg /dev/sdb
pvscan                             # see the changes; data migration completed
Note again: a PV cannot span VGs; an LV cannot span VGs; a system can hold multiple VGs; a VG can hold multiple LVs.
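The root-partition rescue above can be consolidated into one script. A hedged sketch, assuming CentOS 7 defaults (xfs root on /dev/centos/root) and a fresh, empty disk at /dev/sdb:

#!/bin/bash
# grow-root.sh - sketch: extend a full root LV using a newly added disk
set -e
DISK=/dev/sdb               # assumption: the new, empty disk
LV=/dev/centos/root         # assumption: stock CentOS 7 root LV
pvcreate "$DISK"            # label the disk as a PV
vgextend centos "$DISK"     # add the PV to the centos VG
lvextend -L +5G "$LV"       # grow the root LV by 5 GiB
xfs_growfs /                # xfs_growfs takes the mount point of the xfs filesystem
df -h /                     # verify the new size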