2025-04-05 Update From: SLTechnology News&Howtos
How do you expand a SUSE Linux file system? This article walks through a real expansion case in detail, with analysis and a step-by-step solution, in the hope of giving readers facing the same problem a simpler path.
During system installation, partitioning is often not planned thoroughly, whether through inexperience or poor planning, and after the system has been in service for a while you find that partition space is running seriously short. The usual answer is partition expansion, but to preserve business continuity and availability, the application directory must be expanded and migrated quickly. Let's look at an actual expansion case.
Background: two SUSE Linux Enterprise Server 10 machines + a WAS cluster (one master node + one slave node) + an NFS shared file system (basefs)
First, view the existing partition layout
Master node: 10.4.12.112
Ty1:~ # df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/hda2              38G   12G   27G  99% /
udev                  7.9G   88K  7.9G   1% /dev
Slave node: 10.4.12.113
Ty2:~ # df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/hda2              38G  7.6G   31G  20% /
udev                  7.9G   92K  7.9G   1% /dev
10.4.12.112:/basefs    38G   12G   27G  99% /basefs
The current setup: the two hosts share one NFS file system. 10.4.12.112 is the NFS server, exporting the basefs directory under the root, while 10.4.12.113 is an NFS client that mounts the basefs share from the 112 server. The basefs share is tightly coupled to the applications on the WebSphere Application Server (WAS) cluster.
Second, the expansion plan: attach a 107 GB SCSI disk, create a single partition on it, migrate all the contents of the basefs directory to the new partition, and mount the new partition under the same name, basefs. The original basefs directory can then be renamed or deleted.
Third, the implementation process of capacity expansion:
1. Establish a partition
Ty1:~ # fdisk -l
Disk /dev/sda: 107.3 GB, 107374182400 bytes (the attached SCSI disk)
16 heads, 255 sectors/track, 51400 cylinders
Units = cylinders of 4080 * 512 = 2088960 bytes
Disk /dev/sda doesn't contain a valid partition table
Disk /dev/hda: 42.9 GB, 42949672960 bytes (the original disk)
255 heads, 63 sectors/track, 5221 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/hda1               1         262     2104483+  82  Linux swap / Solaris
/dev/hda2   *         263        5221    39833167+  83  Linux
Create a new partition on the SCSI disk
Ty1:~ # fdisk /dev/sda
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.
The number of cylinders for this disk is set to 51400.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
Command (m for help): p
Disk /dev/sda: 107.3 GB, 107374182400 bytes
16 heads, 255 sectors/track, 51400 cylinders
Units = cylinders of 4080 * 512 = 2088960 bytes
   Device Boot      Start         End      Blocks   Id  System
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-51400, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-51400, default 51400):
Using default value 51400
Command (m for help): p
Disk /dev/sda: 107.3 GB, 107374182400 bytes
16 heads, 255 sectors/track, 51400 cylinders
Units = cylinders of 4080 * 512 = 2088960 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1       51400   104855872+  83  Linux
Partition 1 does not end on cylinder boundary.
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
Format the newly created partition
Ty1:~ # mkfs.ext3 /dev/sda1
mke2fs 1.38 (30-Jun-2005)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
13107200 inodes, 26213968 blocks
1310698 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
800 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 20 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
Ty1:~ # pwd
/
Ty1:~ # mkdir aaa
Ty1:~ # mount /dev/sda1 /aaa
Ty1:~ # df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/hda2              38G   12G   27G  99% /
udev                  7.9G   92K  7.9G   1% /dev
/dev/sda1              99G  188M   94G   1% /aaa
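Note that a mount issued by hand does not survive a reboot. If the new partition is to be permanent, it also needs an /etc/fstab entry; a minimal sketch, assuming the ext3 partition created above (the article itself does not show this step):

```shell
# Line to append to /etc/fstab
# fields: device, mount point, fs type, options, dump flag, fsck order
/dev/sda1   /aaa   ext3   defaults   0   2
```

After editing the file, `mount -a` applies the entry without a reboot.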
2. Application migration
Stop NFS on 112 and 113
# /etc/init.d/nfsserver stop
# /etc/init.d/portmap stop
Stop IHS and WAS on 112 and 113
# cd /usr/IBM/HTTPServer/bin
./adminctl stop
./apachectl stop
# cd /usr/IBM/AppServer/profiles/AppSrv01/bin
./stopServer.sh server1
./stopNode.sh
# cd /usr/IBM/AppServer/profiles/Dmgr01/bin
./stopManager.sh
Unmount NFS
# umount /basefs
On the master node (the WAS server):
Pack each of the four directories under the original /basefs into a tar archive and extract them into the /aaa directory (alternatively, copy the tree while preserving the original permissions and attributes with cp -a).
Rename the original /basefs to something else, then rename /aaa to /basefs.
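The pack-and-restore and rename steps above can be sketched as follows. The name /basefs.old is illustrative, and the tar pipe preserves ownership, permissions and timestamps across the copy:

```shell
# Copy the whole tree from the old directory into the new partition,
# preserving owners, modes and timestamps
cd /basefs
tar cf - . | (cd /aaa && tar xpf -)

# Swap the directories so the new partition takes over the old name
# (/basefs.old is a hypothetical name for the retained original)
umount /aaa
mv /basefs /basefs.old
mv /aaa /basefs
mount /dev/sda1 /basefs
```

Keeping /basefs.old untouched is what makes the fallback in the verification section possible.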
On 112, start NFS
# /etc/init.d/portmap start
# /etc/init.d/nfsserver start
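The article does not show the export definition on 112; for the mount from 113 to succeed, /etc/exports on the server must already contain a line like the following (the options shown are a typical assumption, not taken from the original setup):

```shell
# /etc/exports on 10.4.12.112 (hypothetical options)
/basefs   10.4.12.113(rw,sync,no_root_squash)
```

Running `exportfs -ra` reloads the export table after editing.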
On 113, remount the share
# mount 10.4.12.112:/basefs /basefs
Start the IHS and WAS applications on 112 and 113
# cd /usr/IBM/HTTPServer/bin
./adminctl start
./apachectl start
# cd /usr/IBM/AppServer/profiles/AppSrv01/bin
./startServer.sh server1
./startNode.sh
# cd /usr/IBM/AppServer/profiles/Dmgr01/bin
./startManager.sh
Fourth, verification testing
1. Run df -h to verify that the partition expansion is in effect.
Ty1:~ # df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/hda2              38G   12G   27G  29% /
udev                  7.9G   92K  7.9G   1% /dev
/dev/sda1              99G   25G   74G  25% /basefs
Ty2:~ # df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/hda2              38G  7.6G   31G  20% /
udev                  7.9G   92K  7.9G   1% /dev
10.4.12.112:/basefs    99G   25G   74G  25% /basefs
2. Have the application developers test whether the programs run normally.
3. If anything is abnormal, stop the applications and fall back to the original basefs directory.
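Before deleting the retained original (assuming it was renamed to /basefs.old, a hypothetical name), the migrated tree can be compared against it as an extra check; a minimal sketch:

```shell
# Recursively compare the old and new trees; diff prints nothing and
# exits 0 when the contents are identical
diff -r /basefs.old /basefs && echo "trees match"
```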
Finally, everything was verified to be normal and the expansion succeeded.