This article explains how to install and use ZFS on CentOS 7. The content is straightforward and easy to follow; work through the steps below to learn how to install ZFS and how to use its main features.
A ZFS storage pool is made up of a collection of virtual devices (vdevs). There are two types of virtual devices: physical virtual devices (also known as leaf vdevs) and logical virtual devices (also known as interior vdevs). A physical vdev corresponds to an actual disk or file, while a logical vdev groups physical vdevs together, for example as a mirror.
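To make the two vdev types concrete, here is a minimal sketch (to be run only after ZFS is installed as described below; the pool name demopool and the disks sdx and sdy are hypothetical placeholders, not devices used elsewhere in this article):

# A mirror is a logical (interior) vdev built from two physical (leaf) vdevs.
# sdx and sdy are placeholders; substitute real, unused disks on your system.
zpool create -f demopool mirror sdx sdy
# In the status output, "mirror-0" is the interior vdev; sdx and sdy are the leaf vdevs.
zpool status demopool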
Advantages of ZFS
ZFS is an advanced, highly scalable file system originally developed by Sun Microsystems and now maintained as part of the OpenZFS project. With so many file systems available on Linux, it is natural to ask what is special about ZFS. Unlike other file systems, ZFS is not just a file system but also a logical volume manager. The features that make ZFS popular are:
1. Data integrity: data consistency and integrity are ensured through copy-on-write and checksum techniques.
2. Storage pools: the available storage drives are pooled together into a single unit called a zpool.
3. Software RAID: a raidz array can be created with a single command.
4. Built-in volume manager: ZFS also acts as a volume manager.
5. Snapshots, clones, compression: these are some of the advanced features that ZFS provides.
Terminology
Before we move on, let's look at some common ZFS terms.
Pool: a logical grouping of storage drives. It is the basic building block of ZFS, and storage space is allocated to datasets from it.
Datasets: the components of a ZFS file system, namely file systems, clones, snapshots, and volumes, are called datasets.
Mirror: a virtual device that stores identical copies of data on two or more disks. If one disk fails, the same data is still available on the other disks of the mirror.
Resilvering: the process of copying data from one disk to another when a device is restored or replaced.
Scrub: scrubbing is used for consistency checking in ZFS, much like fsck is used in other file systems.
Install ZFS
To install ZFS on CentOS, we first need to set up the EPEL repository for the supporting packages, and then the ZFS repository for the required ZFS packages:
yum localinstall --nogpgcheck http://epel.mirror.net.in/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
yum localinstall --nogpgcheck http://archive.zfsonlinux.org/epel/zfs-release.el7.noarch.rpm
Now install the kernel development and zfs packages. The kernel development package is needed by ZFS to build the modules and insert them into the kernel.
yum install kernel-devel zfs
Verify that the zfs module is inserted into the kernel using the lsmod command; if it is not, insert it manually with the modprobe command.
[root@li1467-130 ~]# lsmod | grep zfs
[root@li1467-130 ~]# modprobe zfs
[root@li1467-130 ~]# lsmod | grep zfs
zfs                  2790271  0
zunicode              331170  1 zfs
zavl                   15236  1 zfs
zcommon                55411  1 zfs
znvpair                89086  2 zfs,zcommon
spl                    92029  3 zfs,zcommon,znvpair
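If you prefer the module to be loaded automatically at boot instead of running modprobe by hand, one option (an extra step added here, not part of the original instructions) is to use the systemd modules-load mechanism available on CentOS 7:

# Optional: have systemd-modules-load insert the zfs module at every boot.
echo zfs > /etc/modules-load.d/zfs.conf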
Let's check whether we can use the zfs command:
[root@li1467-130 ~]# zfs list
no datasets available

Management
ZFS has two main utilities: zpool and zfs. The zpool utility deals with creating and maintaining pools from disks, while the zfs utility is responsible for creating and maintaining datasets.
zpool utility
Creating and destroying pools
First, verify the disks that are available for creating a storage pool:
[root@li1467-130 ~]# ls -l /dev/sd*
brw-rw---- 1 root disk 8,  0 Mar 16 08:12 /dev/sda
brw-rw---- 1 root disk 8, 16 Mar 16 08:12 /dev/sdb
brw-rw---- 1 root disk 8, 32 Mar 16 08:12 /dev/sdc
brw-rw---- 1 root disk 8, 48 Mar 16 08:12 /dev/sdd
brw-rw---- 1 root disk 8, 64 Mar 16 08:12 /dev/sde
brw-rw---- 1 root disk 8, 80 Mar 16 08:12 /dev/sdf
Create a pool from a set of drives:
zpool create <pool name> <drives ...>
# zpool create -f zfspool sdc sdd sde sdf
The zpool status command displays the status of the available pools:
# zpool status
  pool: zfspool
 state: ONLINE
  scan: none requested
config:

        NAME      STATE     READ WRITE CKSUM
        zfspool   ONLINE       0     0     0
          sdc     ONLINE       0     0     0
          sdd     ONLINE       0     0     0
          sde     ONLINE       0     0     0
          sdf     ONLINE       0     0     0

errors: No known data errors
Verify whether the pool was created successfully:
[root@li1467-130 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda         19G  1.4G   17G   8% /
devtmpfs        488M     0  488M   0% /dev
tmpfs           497M     0  497M   0% /dev/shm
tmpfs           497M   50M  447M  11% /run
tmpfs           497M     0  497M   0% /sys/fs/cgroup
tmpfs           100M     0  100M   0% /run/user/0
zfspool         3.7G     0  3.7G   0% /zfspool
As you can see, zpool has created a pool named zfspool with 3.7 GB of space, mounted at /zfspool. To destroy a pool, use the zpool destroy command:
zpool destroy <pool name>
[root@li1467-130 ~]# zpool destroy zfspool
[root@li1467-130 ~]# zpool status
no pools available
Now let's try creating a simple mirrored pool:

zpool create <pool name> mirror <drives ...>

We can create multiple mirrors by repeating the mirror keyword followed by the drives:
# zpool create -f mpool mirror sdc sdd mirror sde sdf
[root@li1467-130 ~]# zpool status
  pool: mpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        mpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            sde     ONLINE       0     0     0
            sdf     ONLINE       0     0     0

errors: No known data errors
In the above example, we created mirrored pools of two disks each. Similarly, we can create a raidz pool:
# zpool create -f rpool raidz sdc sdd sde sdf
[root@li1467-130 ~]# zpool status
  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
            sde     ONLINE       0     0     0
            sdf     ONLINE       0     0     0

errors: No known data errors

Managing devices in a ZFS pool
Once a pool is created, devices such as hot spares and cache devices can be added to or removed from the pool, disks can be attached to or detached from mirrored pools, and devices can be replaced. However, non-redundant and raidz devices cannot be removed from a pool. In this section we will see how to perform these operations. First, I create a pool called "testpool" consisting of two devices, sdc and sdd, and then add another device, sde, to it.
[root@li1467-130 ~]# zpool create -f testpool sdc sdd
[root@li1467-130 ~]# zpool add testpool sde
[root@li1467-130 ~]# zpool status
  pool: testpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        testpool    ONLINE       0     0     0
          sdc       ONLINE       0     0     0
          sdd       ONLINE       0     0     0
          sde       ONLINE       0     0     0

errors: No known data errors
As mentioned earlier, I cannot remove this newly added device because it is not a redundant or raidz pool.
# zpool remove testpool sde
cannot remove sde: only inactive hot spares, cache, top-level, or log devices can be removed
But I can add a spare disk to this pool and remove it again:
# zpool add testpool spare sdf
[root@li1467-130 ~]# zpool status
  pool: testpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        testpool    ONLINE       0     0     0
          sdc       ONLINE       0     0     0
          sdd       ONLINE       0     0     0
          sde       ONLINE       0     0     0
        spares
          sdf       AVAIL

errors: No known data errors
[root@li1467-130 ~]# zpool remove testpool sdf
[root@li1467-130 ~]# zpool status
  pool: testpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        testpool    ONLINE       0     0     0
          sdc       ONLINE       0     0     0
          sdd       ONLINE       0     0     0
          sde       ONLINE       0     0     0

errors: No known data errors
Similarly, we can use the attach command to attach a disk to a mirrored or non-mirrored pool, and the detach command to detach a disk from a mirrored pool:
zpool attach <pool name> <device> <new device>
zpool detach <pool name> <device>
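As a brief sketch of how attach and detach could be used with the testpool from the example above (this exact sequence is an illustration, not part of the original walkthrough): attaching a new disk to an existing single-disk device turns it into a mirror, and detaching removes one side of that mirror again.

# Attach sdf to sdc: sdc and sdf now form a mirror, and ZFS resilvers the data onto sdf.
zpool attach testpool sdc sdf
zpool status testpool
# Detach sdf again, returning the device to a single disk.
zpool detach testpool sdf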
When a device is faulty or damaged, we can replace it with the replace command:
zpool replace <pool name> <old device> <new device>
To test this, we will forcibly corrupt a device in a mirrored configuration.
[root@li1467-130 ~]# zpool create -f testpool mirror sdd sde
This creates a mirrored pool consisting of the disks sdd and sde. Now, let's deliberately corrupt sdd by writing zeros to the disk:
# dd if=/dev/zero of=/dev/sdd
dd: writing to '/dev/sdd': No space left on device
2048001+0 records in
2048000+0 records out
1048576000 bytes (1.0 GB) copied, 22.4804 s, 46.6 MB/s
We will use the scrub command to detect this corruption:
[root@li1467-130 ~]# zpool scrub testpool
[root@li1467-130 ~]# zpool status
  pool: testpool
 state: ONLINE
status: One or more devices could not be used because the label is missing or
        invalid. Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: http://zfsonlinux.org/msg/ZFS-8000-4J
  scan: scrub repaired 0 in 0h0m with 0 errors on Fri Mar 18 09:59:40 2016
config:

        NAME        STATE     READ WRITE CKSUM
        testpool    ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sdd     UNAVAIL      0     0     0  corrupted data
            sde     ONLINE       0     0     0

errors: No known data errors
Now let's replace sdd with sdc:
[root@li1467-130 ~]# zpool replace testpool sdd sdc
[root@li1467-130 ~]# zpool status
  pool: testpool
 state: ONLINE
  scan: resilvered 83.5K in 0h0m with 0 errors on Fri Mar 18 10:05:17 2016
config:

        NAME             STATE     READ WRITE CKSUM
        testpool         ONLINE       0     0     0
          mirror-0       ONLINE       0     0     0
            replacing-0  UNAVAIL      0     0     0
              sdd        UNAVAIL      0     0     0  corrupted data
              sdc        ONLINE       0     0     0
            sde          ONLINE       0     0     0

errors: No known data errors
[root@li1467-130 ~]# zpool status
  pool: testpool
 state: ONLINE
  scan: resilvered 74.5K in 0h0m with 0 errors on Fri Mar 18 10:00:36 2016
config:

        NAME        STATE     READ WRITE CKSUM
        testpool    ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sde     ONLINE       0     0     0

errors: No known data errors

Migration of pools
We can migrate storage pools between hosts using the export and import commands. For this, the disks used in the pool should be available from both systems.
[root@li1467-130 ~]# zpool export testpool
[root@li1467-130 ~]# zpool status
no pools available
The zpool import command lists all the pools that are available for importing. Run this command on the system where you want to import the pool:
[root@li1467-131 ~]# zpool import
   pool: testpool
     id: 3823664125009563520
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        testpool  ONLINE
          sdc     ONLINE
          sdd     ONLINE
          sde     ONLINE
Now import the required pool.
[root@li1467-131 ~]# zpool import testpool
[root@li1467-131 ~]# zpool status
  pool: testpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        testpool    ONLINE       0     0     0
          sdc       ONLINE       0     0     0
          sdd       ONLINE       0     0     0
          sde       ONLINE       0     0     0

errors: No known data errors
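As the import listing above notes, a pool can also be imported by its numeric identifier instead of its name, which helps when two exported pools share the same name. A small sketch using the identifier shown earlier (the alternative pool name is a hypothetical example):

# Import the pool by its numeric id rather than its name.
zpool import 3823664125009563520
# Or import it under a different name (newpool is a placeholder).
# zpool import 3823664125009563520 newpool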
iostat
You can check the I/O statistics of the pool devices using the iostat command:
# zpool iostat -v testpool
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
testpool    1.80M  2.86G     22     27   470K   417K
  sdc        598K   975M      8      9   200K   139K
  sdd        636K   975M      7      9   135K   139K
  sde        610K   975M      6      9   135K   139K
----------  -----  -----  -----  -----  -----  -----
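zpool iostat can also report statistics repeatedly at a fixed interval, which is often more useful than a one-off reading. A small sketch (the interval and count values here are arbitrary choices, not from the original article):

# Print per-device statistics for testpool every 2 seconds, 5 times in total.
zpool iostat -v testpool 2 5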
zfs utility
We will now move on to the zfs utility. Here we will look at how to create and destroy datasets, and at file system compression, quotas, and snapshots.
Create and destroy file systems
ZFS file systems can be created using the zfs create command:
zfs create <filesystem>
[root@li1467-130 ~]# zfs create testpool/students
[root@li1467-130 ~]# zfs create testpool/professors
[root@li1467-130 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda               19G  1.4G   17G   8% /
devtmpfs              488M     0  488M   0% /dev
tmpfs                 497M     0  497M   0% /dev/shm
tmpfs                 497M   50M  447M  11% /run
tmpfs                 497M     0  497M   0% /sys/fs/cgroup
testpool              2.8G     0  2.8G   0% /testpool
tmpfs                 100M     0  100M   0% /run/user/0
testpool/students     2.8G     0  2.8G   0% /testpool/students
testpool/professors   2.8G     0  2.8G   0% /testpool/professors
Notice from the output above that, although no mount point was specified when the file systems were created, they were mounted under the same path hierarchy as the pool. zfs create lets you use the -o option to specify properties such as the mount point, compression, quota, exec, and so on. You can list the available file systems with zfs list:
[root@li1467-130 ~]# zfs list
NAME                  USED  AVAIL  REFER  MOUNTPOINT
testpool              100M  2.67G    19K  /testpool
testpool/professors    31K  1024M  20.5K  /testpool/professors
testpool/students    1.57M  98.4M  1.57M  /testpool/students
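As mentioned above, zfs create accepts the -o option to set properties at creation time. A hedged sketch (the dataset testpool/archive and the chosen property values are examples, not part of the original walkthrough):

# Create a dataset with an explicit mount point, lz4 compression and a 500 MB quota in one step.
zfs create -o mountpoint=/data/archive -o compression=lz4 -o quota=500M testpool/archive
# Read the properties back to confirm they were applied.
zfs get mountpoint,compression,quota testpool/archive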
We can destroy a file system with the destroy option:

zfs destroy <filesystem>
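For example, the hypothetical testpool/archive dataset created in the sketch above could be removed as follows (-r is only needed when the dataset has children or snapshots):

# Destroy a dataset; -r recursively destroys any child datasets and snapshots as well.
zfs destroy -r testpool/archive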
Compression
Now let's see how compression works in ZFS. Before we can start using compression, we need to enable it by setting the compression property:
zfs set compression=<algorithm> <filesystem>
Once this is done, compression and decompression happen transparently on the file system. In our example, I will enable compression on the students file system using the lz4 compression algorithm:
[root@li1467-130 ~]# zfs set compression=lz4 testpool/students
I will now copy a 15 MB file into this file system and check its size after it has been copied:
[root@li1467-130 /]# cd /var/log
[root@li1467-130 log]# du -h secure
15M     secure
[root@li1467-130 ~]# cp /var/log/secure /testpool/students/
[root@li1467-130 students]# df -h .
Filesystem         Size  Used Avail Use% Mounted on
testpool/students  100M  1.7M   99M   2% /testpool/students
Note that the space used on the file system is only 1.7 MB, even though the file is 15 MB in size. We can check the compression ratio:
[root@li1467-130 ~]# zfs get compressratio testpool
NAME      PROPERTY       VALUE  SOURCE
testpool  compressratio  9.03x  -
Quotas and reservations
Let me explain quotas with a real-world example. Suppose a university needs to limit the disk space used by the professors' and students' file systems, allocating 1 GB to professors and 100 MB to students. We can use the quota property in ZFS to meet this requirement. Quotas ensure that the amount of disk space used by a file system does not exceed the set limit. Reservations help to actually allocate and guarantee that the required amount of disk space is available to a file system.
zfs set quota=<size> <dataset>
zfs set reservation=<size> <dataset>
[root@li1467-130 ~]# zfs set quota=100M testpool/students
[root@li1467-130 ~]# zfs set reservation=100M testpool/students
[root@li1467-130 ~]# zfs list
NAME                  USED  AVAIL  REFER  MOUNTPOINT
testpool              100M  2.67G    19K  /testpool
testpool/professors    19K  2.67G    19K  /testpool/professors
testpool/students    1.57M  98.4M  1.57M  /testpool/students
[root@li1467-130 ~]# zfs set quota=1G testpool/professors
[root@li1467-130 ~]# zfs list
NAME                  USED  AVAIL  REFER  MOUNTPOINT
testpool              100M  2.67G    19K  /testpool
testpool/professors    19K  1024M    19K  /testpool/professors
testpool/students    1.57M  98.4M  1.57M  /testpool/students
In the above example, we allocated 1 GB to professors and 100 MB to students. Look at the AVAIL column in the zfs list output: initially each had 2.67 GB available, and the values changed according to the quotas we set.
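To verify what was configured, the quota and reservation properties can be read back with zfs get (a small sketch, not shown in the original output):

# Show the quota and reservation currently set on the students dataset.
zfs get quota,reservation testpool/students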
Snapshot
A snapshot is a read-only copy of a ZFS file system at a point in time. Snapshots do not consume any extra space in the ZFS pool. At a later stage we can roll back to the same state, or extract a single file or a set of files as requested by a user. Continuing with the previous example, I will now create some directories and a file in testpool/professors and then take a snapshot of this file system.
[root@li1467-130 ~]# cd /testpool/professors/
[root@li1467-130 professors]# mkdir maths physics chemistry
[root@li1467-130 professors]# cat > qpaper.txt
Question paper for the year 2016-17
[root@li1467-130 professors]# ls -la
total 4
drwxr-xr-x 5 root root  6 Mar 19 10:34 .
drwxr-xr-x 4 root root  4 Mar 19 09:59 ..
drwxr-xr-x 2 root root  2 Mar 19 10:33 chemistry
drwxr-xr-x 2 root root  2 Mar 19 10:32 maths
drwxr-xr-x 2 root root  2 Mar 19 10:32 physics
-rw-r--r-- 1 root root 36 Mar 19 10:35 qpaper.txt
To take a snapshot, use the following syntax:
zfs snapshot <pool/dataset@snapshot-name>
[root@li1467-130 professors]# zfs snapshot testpool/professors@03-2016
[root@li1467-130 professors]# zfs list -t snapshot
NAME                          USED  AVAIL  REFER  MOUNTPOINT
testpool/professors@03-2016      0      -  20.5K  -
I will now delete the file that was created and restore it from the snapshot:
[root@li1467-130 professors]# rm -rf qpaper.txt
[root@li1467-130 professors]# ls
chemistry  maths  physics
[root@li1467-130 professors]# cd .zfs
[root@li1467-130 .zfs]# cd snapshot/03-2016/
[root@li1467-130 03-2016]# ls
chemistry  maths  physics  qpaper.txt
[root@li1467-130 03-2016]# cp -a qpaper.txt /testpool/professors/
[root@li1467-130 03-2016]# cd /testpool/professors/
[root@li1467-130 professors]# ls
chemistry  maths  physics  qpaper.txt
The deleted file is back in its place. We can list all available snapshots with zfs list:
[root@li1467-130 ~]# zfs list -t snapshot
NAME                          USED  AVAIL  REFER  MOUNTPOINT
testpool/professors@03-2016  10.5K      -  20.5K  -
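Besides copying individual files out of the .zfs/snapshot directory, the whole file system can be rolled back to a snapshot. A hedged sketch (note that rolling back discards every change made after the snapshot was taken):

# Roll testpool/professors back to the 03-2016 snapshot, discarding later changes.
zfs rollback testpool/professors@03-2016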
Finally, let's destroy the snapshot using the zfs destroy command:
zfs destroy <snapshot>
[root@li1467-130 ~]# zfs destroy testpool/professors@03-2016
[root@li1467-130 ~]# zfs list -t snapshot
no datasets available

Thank you for reading. That covers how to install and use ZFS on CentOS 7. After working through this article you should have a deeper understanding of the topic, although the specific usage still needs to be verified in practice.