Partitioning and formatting disks and configuring LVM in RHEL 7 is not much different from previous RHEL releases. Hard disk devices can be managed either with the graphical Disks tool (run on the graphical desktop) or with command-line tools such as fdisk, gdisk, and parted. fdisk works with MBR-format disks, gdisk with GPT-format disks, and parted can handle either.
Traditional hard disks use the MBR partition format. The MBR occupies sector 0 of the disk and is 512 bytes in total: the first 446 bytes hold the GRUB boot program (covered later); the next 64 bytes hold the partition table, in which each partition entry takes 16 bytes, so there can be at most 4 primary and extended partitions combined, and any partitions beyond that must be logical partitions inside the extended partition. Each partition cannot exceed 2 TB. The last 2 bytes of the MBR are the boot signature that closes the record.
The GPT format breaks the limits of MBR: it supports up to 128 partitions (the exact limit varies by operating system), and all of them break through the 2 TB space limit, supporting volume sizes up to 18 EB (1 EB = 1024 PB). GPT keeps primary and backup partition tables for redundancy, and supports unique disk and partition IDs (GUIDs).
Unlike MBR-partitioned disks, GPT keeps its partition information in the partition table area rather than in the master boot sector as MBR does. To protect a GPT disk from MBR-only disk management software, GPT places a protective MBR (Protective MBR) partition table in the master boot sector, containing a single partition entry whose type is 0xEE. This protective partition is reported as 128 MB under Windows and 200 MB under Mac OS X, and appears as a "GPT protective partition" in the Windows disk manager; it makes MBR disk management software treat the GPT disk as a partition of unknown format instead of mistaking it for an unpartitioned disk.
In an MBR disk, the partition information is stored directly in the master boot record (which also holds the system's boot program). On a GPT disk, however, the location information of the partition table is stored in the GPT header; for compatibility, the first sector of the disk is still reserved for the MBR, and the GPT header follows it.
[Figure: GPT disk structure]
First, take a look at the current hard disk information.
You can view the current partitions in the file /proc/partitions.
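For example (device names like /dev/sdb here and below are illustrative; adjust them to your system):
# cat /proc/partitions    (kernel's view of all partitions)
# lsblk                   (tree view of block devices and their mount points)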
Let's try MBR-format partitioning first; the fdisk options are as follows.
Enter n to create a new MBR partition, and then enter p to display the current partition status
Repeat n to add more partitions. Note: an MBR-format disk can hold at most 4 primary partitions, or 3 primary partitions plus 1 extended partition, inside which several logical partitions can be created. Also note that the partition id indicates the intended use of the partition and can be changed with t.
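A minimal fdisk session on a blank disk /dev/sdb might look like this (a sketch; prompts are abbreviated):
# fdisk /dev/sdb
Command (m for help): n            (new partition)
Select (default p): p              (primary)
Partition number (1-4, default 1): 1
First sector: press Enter          (accept the default start)
Last sector: +500M                 (a 500 MB partition)
Command (m for help): t            (change the id: 8e = LVM, 82 = swap)
Command (m for help): w            (write the table to disk and exit)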
Look at the partition records.
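For example:
# fdisk -l /dev/sdb    (list the partition table of /dev/sdb)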
Gdisk and fdisk are very similar.
When you create a new partition with gdisk, you can see that a GPT disk can hold up to 128 partitions. parted is more flexible than the first two tools: it lets you choose either the MBR or the GPT format and then create partitions.
mklabel msdos sets the disk to MBR format, after which mkpart can be used to create partitions.
mklabel msdos selects the MBR format, and mklabel gpt selects the GPT format. In mkpart, primary means a primary partition, extended an extended partition, and logical a logical partition. The command set <number> <flag> <state> sets the purpose of a partition, where flag can be boot, lvm, or raid, and state is on or off. parted needs no explicit save after partitioning; just type q to exit. After partitioning, a partition must still be formatted before it can be used: create a file system with # mkfs.xfs /dev/<partition> or # mkfs -t xfs /dev/<partition>; formatting is done with mkfs or mkswap. A complete session is sketched below.
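A minimal parted session, assuming a blank disk /dev/sdc (names and sizes illustrative):
# parted /dev/sdc
(parted) mklabel gpt                       (or mklabel msdos for MBR)
(parted) mkpart primary xfs 1MiB 501MiB    (a 500 MiB partition)
(parted) set 1 lvm on                      (flag partition 1 for LVM use)
(parted) print                             (verify the layout)
(parted) quit
# mkfs.xfs /dev/sdc1                       (format before use)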
/etc/fstab can be modified so the partition is mounted automatically at boot.
Test whether it can be mounted automatically
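A sketch, assuming /dev/sdb1 is to be mounted at /mnt/data (paths illustrative):
# mkdir -p /mnt/data
Add a line like the following to /etc/fstab:
/dev/sdb1 /mnt/data xfs defaults 0 0
# mount -a    (mounts every fstab entry; an error here means a bad line)
You can also put UUID=<uuid> in the first field instead of the device path; see the discussion of UUIDs below.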
You can view mounted devices with df -h; the -T option also shows each device's file system type. When a mount point path is long, the output wraps onto two lines; -P forces single-line output. Just as every process has a pid and every user a uid, each file system has its own id, called a UUID; but not every partition has one: a partition without a file system has no UUID. UUIDs can be viewed with blkid (block id). Note that a UUID identifies the file system, not the partition. The advantage of UUIDs is that a device can be mounted by this unique value, which avoids the misalignment caused by removing a hard disk, where, for example, sda6 would otherwise become sda5.
We can manually change the UUID of a file system with xfs_admin -U.
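A sketch (the file system must be unmounted first):
# umount /dev/sdb1
# xfs_admin -U generate /dev/sdb1    (write a new random UUID)
# blkid /dev/sdb1                    (confirm the change)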
A side note on directories: ls -ld shows the properties of a directory itself, and ls -la shows the properties of its contents, but the size reported by -ld is only 4K, the size of the directory entry itself. To see the total size of a directory and its contents, use du; to see only the final total, add -s (summary).
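For example:
# ls -ld /etc    (the directory entry itself, typically 4K)
# du -sh /etc    (total size of the directory and everything in it)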
Next, let's look at how swap partitions are created manually. Swap is similar to Windows's virtual memory / page file: when memory runs out, data is moved into swap.
There are two ways to set up swap:
The first uses a single partition as the swap
Create a partition (e.g. /dev/sdb3) and change the partition id to 82
Execute the partx -a /dev/sdb command to make the partition changes take effect
Create a swap file system on a partition
Modify fstab so the swap space is activated automatically at boot (see the sketch below)
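Putting the first method together, a sketch assuming the new partition is /dev/sdb3:
# fdisk /dev/sdb       (create sdb3 and set its id to 82 with t)
# partx -a /dev/sdb    (re-read the partition table)
# mkswap /dev/sdb3     (write the swap signature)
# swapon /dev/sdb3     (activate it now)
# free -m              (verify that the swap total increased)
And the matching /etc/fstab line:
/dev/sdb3 swap swap defaults 0 0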
The second way is to create a file and use the space occupied by that file as swap.
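A sketch of the file-based method (path and size are illustrative):
# dd if=/dev/zero of=/swapfile bs=1M count=512    (preallocate a 512 MB file)
# chmod 600 /swapfile                             (swap files should not be world-readable)
# mkswap /swapfile
# swapon /swapfile
# free -m
And the matching /etc/fstab line:
/swapfile swap swap defaults 0 0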
Ordinary partitions are not very extensible: once a partition is formatted, it is hard to flexibly grow or shrink it. To solve this problem, you can use LVM (Logical Volume Manager). The basic process: initialize physical disks or partitions as physical volumes (PV), add the PVs to a volume group (VG), and finally carve logical volumes (LV) out of the VG. An LV can then be formatted and mounted like an ordinary partition.
Create a PV for the prepared disk or partition
You can execute pvdisplay to view the details of a PV, and pvremove to delete one.
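For example (partition names illustrative):
# pvcreate /dev/sdb1 /dev/sdb2    (initialize two partitions as PVs)
# pvdisplay                       (detailed PV information)
# pvremove /dev/sdb2              (remove the PV label if no longer needed)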
After creating the PV, you need to create the VG, and then add the PV to the VG
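For example:
# vgcreate vg00 /dev/sdb1 /dev/sdb2    (create VG vg00 from the two PVs)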
You can view the details with vgdisplay. Note that the PE Size is 4 MB; this is the smallest unit by which space is added or removed.
Note: when creating a VG, the -s option specifies the size of the PE (physical extent) blocks at creation time; the default is 4 MB.
For example: # vgcreate -s 8m volGroup03 /dev/sdb[12]
We can continue to add new partitions to the VG.
If sdb3 has not been converted to a PV in advance, it can be added to the VG directly; once added, it is automatically initialized as a PV.
You can add PVs, and of course remove them too: # vgreduce vg00 /dev/sdb3
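For example:
# vgextend vg00 /dev/sdb3    (add sdb3 to the VG; it is auto-initialized as a PV if needed)
# vgreduce vg00 /dev/sdb3    (remove it again)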
With the VG ready, we can create an LV.
Notice that its size is actually 112 MB: the PE size is 4 MB, a PE is the smallest indivisible unit, and 28 PEs make 112 MB.
Note: uppercase -L specifies the size directly, while lowercase -l specifies the number of PEs.
You can also specify the size as a percentage of the remaining free space, as sketched below.
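The sizing options just described, as a sketch (volume names illustrative):
# lvcreate -L 112M -n lv00 vg00        (uppercase -L: size directly)
# lvcreate -l 28 -n lv01 vg00          (lowercase -l: number of PEs; 28 x 4 MB = 112 MB)
# lvcreate -l 100%FREE -n lv02 vg00    (use all remaining free space)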
Delete a logical volume: # lvremove /dev/vg00/lv01
Logical volumes that have been created can be formatted and mounted as normal partitions
Modify the /etc/fstab file so the volume is mounted automatically at boot.
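A sketch:
# mkfs.xfs /dev/vg00/lv00
# mkdir -p /data
# mount /dev/vg00/lv00 /data
And the matching /etc/fstab line:
/dev/vg00/lv00 /data xfs defaults 0 0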
To expand a logical volume, say by 300 MB, first make sure the volume group has more than 300 MB of free space.
Perform lvextend to extend logical volume size
Note that the file system on the logical volume is still 109 MB, unchanged; we still need to grow the file system to match the new volume size.
On RHEL 7 you can use xfs_growfs to expand an XFS file system, or resize2fs for devices with ext file systems.
Note that an XFS file system can only be grown, never shrunk! So if you need to be able to reduce an LVM volume, the partition must use ext4 instead.
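The whole extension as a sketch:
# lvextend -L +300M /dev/vg00/lv00    (grow the LV by 300 MB)
# xfs_growfs /data                    (grow the XFS file system; takes the mount point)
# df -h /data                         (confirm the new size)
For an ext4 volume, the last growth step would be resize2fs /dev/vg00/lv00 instead.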
Execute df to view the extended file system
Logical volume snapshot
LVM provides an excellent facility: snapshots. A snapshot lets the administrator create a new block device that presents an exact copy of a logical volume at some point in time, providing a static view of the original volume. LVM implements snapshots by recording file system changes in the snapshot partition, so when you create a snapshot you do not need a partition as large as the volume being snapshotted; the space required depends on how the snapshot is used, so there is no fixed rule for this size. If the snapshot is as large as the original volume, the snapshot will always remain usable.
Snapshots are special logical volumes; only logical volumes can be snapshotted, and a snapshot must be in the same volume group as the logical volume being snapshotted.
Now that we have a logical volume /dev/vg00/lv00 in the system, let's query it with lvdisplay.
As you can see, the size of the logical volume /dev/vg00/lv00 is 309 MB. We mount /dev/vg00/lv00 under /data and copy some data into /data, which will be handy for the experiments that follow.
Now let's take a snapshot of the logical volume / dev/vg00/lv00
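A sketch; the snapshot size (space reserved for tracking changes) is illustrative:
# lvcreate -s -L 100M -n lvsp00 /dev/vg00/lv00    (-s marks it as a snapshot of lv00)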
Execute lvscan to view the created logical volume
You can see that /dev/vg00/lv00 is the original logical volume and /dev/vg00/lvsp00 is the snapshot.
Execute the lvdisplay or lvs command to view the logical volume information.
You can see that the logical volume snapshot was created successfully
Note: after the snapshot volume is created, it does not need to be formatted or mounted; an error message will appear if you try to format or mount it.
Simulate deleting data from the original logical volume.
How do we recover the data of the original logical volume? There are two ways to recover the deleted data.
The first way: unmount the original logical volume: # umount /dev/vg00/lv00
Then mount the logical volume snapshot in its place: # mount /dev/vg00/lvsp00 /data, and the data can be accessed normally.
The second way: write the contents of the snapshot back to the original LV with lvconvert.
First unmount the original logical volume: # umount /dev/vg00/lv00
Then execute lvconvert to merge the snapshot data into the original logical volume: # lvconvert --merge /dev/vg00/lvsp00
Finally, mount the original logical volume to see if the data has been recovered successfully.
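The second method as a complete sketch (note that merging removes the snapshot volume; also, with XFS, mounting the snapshot directly in the first method may require -o nouuid, because the snapshot shares the original file system's UUID):
# umount /data
# lvconvert --merge /dev/vg00/lvsp00    (copy the snapshot contents back into lv00)
# mount /dev/vg00/lv00 /data
# ls /data                              (verify the deleted files are back)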
Note: when we delete data from the original logical volume, the data is still present in the snapshot, so the snapshot can be used to restore it. Conversely, when we add data to the logical volume after the snapshot was taken, the snapshot does not change and the new file does not exist in it, because a snapshot only preserves the state of the logical volume at the moment it was taken.
Using ssm (System Storage Manager) for logical volume management
Logical Volume Manager (LVM) is an extremely flexible disk management tool that allows users to create logical volumes from multiple physical hard drives and resize them without any downtime. CentOS/RHEL 7 now ships with System Storage Manager (also known as ssm), a unified command-line interface developed by Red Hat for managing all kinds of storage devices. Currently, three volume management backends are available to ssm: LVM, Btrfs, and Crypt.
To prepare, on CentOS/RHEL 7 you first need to install System Storage Manager; it can be installed with the rpm or yum tool.
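For example (the package name on CentOS/RHEL 7 is system-storage-manager):
# yum install -y system-storage-manager
# rpm -q system-storage-manager    (verify the installation)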
First, let's examine the information about available hard drives and LVM volumes. The following command displays information about existing disk storage devices, storage pools, LVM volumes, and storage snapshots.
# ssm list
In this example, there are two physical devices ("/dev/sda" and "/dev/sdb"), two storage pools ("rhel" and "vg00"), two LVM volumes created in the storage pool rhel ("/dev/rhel/root" and "/dev/rhel/swap"), and one LVM volume created in the storage pool vg00 ("/dev/vg00/lv00").
Here's how to create and manage logical volumes and snapshots through ssm
Add at least one new disk and execute the ssm command to display information about existing disk storage devices, storage pools, and LVM volumes
You can see that there are two free disks (sdc, sdd)
Create a new LVM pool / volume
In this example, let's look at how to create a new storage pool and a new LVM volume on a physical disk drive. With the traditional LVM tools, the whole process is quite involved: you need to prepare partitions, then create physical volumes, a volume group, and logical volumes, and finally build a file system. With ssm, the whole process can be completed in one step!
The following command creates a storage pool named mypool, creates a 500 MB LVM volume named lv01 in the pool, formats the volume with the XFS file system, and mounts it under /mnt/test.
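A sketch of that one-step command, assuming the free disk is /dev/sdc and the mount point /mnt/test already exists:
# ssm create -s 500M -n lv01 --fstype xfs -p mypool /dev/sdc /mnt/test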
Verify the results created by ssm
Or perform ssm list
Add physical disks (sdd) to the LVM pool
"when a new device is added to the storage pool, the storage pool automatically expands, depending on the size of the device." Check the size of the storage pool named centos to perform a ssm list view
Next, let's expand the existing LVM volume
To expand the LVM volume, let's increase the size of the /dev/mypool/lv01 volume by 300 MB.
If you have extra space in the storage pool, you can expand the existing disk volumes in the storage pool. To do this, use the resize option of the ssm command
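For example:
# ssm resize -s +300M /dev/mypool/lv01    (grow the volume by 300 MB)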
Perform ssm list to view enlarged logical volumes
You can see that the logical volume has expanded to 800 MB, an increase of 300 MB over the original, but the file system size (Fs size) has not changed and remains the original size.
In order for the file system to recognize the increased volume size, you need to "expand" the existing file system itself. There are different tools available to expand existing file systems, depending on which file system you use. For example, there are resize2fs for EXT2/EXT3/EXT4, xfs_growfs for XFS, and btrfs for Btrfs, to name a few.
In this example we use XFS, which CentOS 7 creates by default. Therefore, we use xfs_growfs to extend the existing XFS file system.
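For example:
# xfs_growfs /mnt/test    (xfs_growfs takes the mount point of the mounted file system)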
After expanding the XFS file system, view the results
Or execute # df -hT
You can see that the LVM extension is successful
Logical volume snapshot
Take a snapshot of an existing LVM volume (such as / dev/mypool/lv01)
Once the snapshot is generated, it is stored as a special snapshot volume containing all the data of the original volume at the moment the snapshot was taken.
Each time the data in the original LVM volume changes, you can manually run ssm snapshot to generate a new snapshot.
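For example:
# ssm snapshot /dev/mypool/lv01    (create a snapshot of lv01)
# ssm list                         (snapshots appear in their own section of the output)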
When the original LVM data is corrupted, it can be recovered with snapshots.
The recovery procedure is the same as for the plain LVM snapshots described above: either unmount the original volume and mount the snapshot in its place, or merge the snapshot back into the original volume with lvconvert --merge, then remount the original volume and verify that the data has been recovered.
For detailed usage of ssm, refer to its help page (ssm --help or man ssm).
For example, delete an LVM volume: # ssm remove <volume>
Delete a storage pool: # ssm remove <pool>