In this article, we look at how to use an encrypted file system on a Linux server with S3QL backed by AWS S3. The steps below cover installation, configuration, mounting, and a backup use case; I hope you get something useful out of it.
Create an AWS access key (access key ID and secret access key) that S3QL uses to access your AWS account.
Then access AWS S3 through the AWS administration panel and create a new empty bucket for S3QL.
For best performance, please choose an area that is geographically closest to you.
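If you prefer working from the command line, the same bucket can also be created with the AWS CLI (a sketch only, assuming the AWS CLI is installed and configured; the bucket name and region below are placeholders to replace with your own).
The code is as follows:
$ aws s3 mb s3://my-s3ql-bucket --region eu-west-1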
Install S3QL on Linux
There are precompiled S3QL packages in most Linux distributions.
For Debian, Ubuntu, or Linux Mint:
The code is as follows:
$ sudo apt-get install s3ql
For Fedora:
The code is as follows:
$ sudo yum install s3ql
For Arch Linux, use AUR.
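For example, with an AUR helper such as yay (assuming yay is installed and an s3ql package is available in the AUR), the package can be built and installed like this:
The code is as follows:
$ yay -S s3ql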
Configure S3QL for the first time
Create an authinfo2 file in the ~/.s3ql directory; this is the default configuration file for S3QL. The file contains the required AWS access key, the S3 bucket name, and an encryption passphrase. The passphrase is used to encrypt a randomly generated master key, which in turn is used to encrypt the actual S3QL file system data.
The code is as follows:
$ mkdir ~/.s3ql
$ vi ~/.s3ql/authinfo2
[s3]
storage-url: s3://[bucket-name]
backend-login: [your-access-key-id]
backend-password: [your-secret-access-key]
fs-passphrase: [your-encryption-passphrase]
The specified AWS S3 bucket needs to be created in advance through the AWS management panel.
For security reasons, make the authinfo2 file accessible only to you.
The code is as follows:
$ chmod 600 ~/.s3ql/authinfo2
Create an S3QL file system
Now you are ready to create an S3QL file system on AWS S3.
Use the mkfs.s3ql tool to create a new S3QL file system. The bucket name in this command should match that specified in the authinfo2 file. Using the "--ssl" parameter forces the use of SSL to connect to the back-end storage server. By default, the mkfs.s3ql command enables compression and encryption in the S3QL file system.
The code is as follows:
$ mkfs.s3ql s3://[bucket-name] --ssl
You will be asked to enter an encryption passphrase. Enter the passphrase you specified as "fs-passphrase" in ~/.s3ql/authinfo2.
If the new file system is created successfully, the command finishes without errors and you can move on to mounting it.
Mount the S3QL file system
After you have created an S3QL file system, the next step is to mount it.
First create a local mount point, and then use the mount.s3ql command to mount the S3QL file system.
The code is as follows:
$ mkdir ~/mnt_s3ql
$ mount.s3ql s3://[bucket-name] ~/mnt_s3ql
You do not need to be a privileged user to mount an S3QL file system; just make sure you have write access to the mount point.
Depending on the situation, you can use the "--compress" option to specify a compression algorithm (such as lzma, bzip2, or zlib). If none is specified, lzma is used by default. Note that a custom compression algorithm only applies to newly created data objects and does not affect existing ones.
The code is as follows:
$ mount.s3ql --compress bzip2 s3://[bucket-name] ~/mnt_s3ql
For performance reasons, the S3QL file system maintains a local file cache of recently accessed (partial or whole) files. You can customize the size of this cache with the "--cachesize" and "--max-cache-entries" options.
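For example, a mount allowing roughly 1 GiB of cached data and up to 8192 cached entries might look like the sketch below (the values are illustrative only; "--cachesize" is given in KiB).
The code is as follows:
$ mount.s3ql --cachesize 1048576 --max-cache-entries 8192 s3://[bucket-name] ~/mnt_s3ql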
If you want users other than you to access a mounted S3QL file system, use the "--allow-other" option.
If you want to export a mounted S3QL file system to another machine through NFS, use the "--nfs" option.
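For example, a mount that other local users can also access might look like this (note that "--allow-other" generally requires the "user_allow_other" option to be enabled in /etc/fuse.conf):
The code is as follows:
$ mount.s3ql --allow-other s3://[bucket-name] ~/mnt_s3ql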
After running mount.s3ql, check that the S3QL file system is mounted successfully:
The code is as follows:
$ df ~/mnt_s3ql
$ mount | grep s3ql
Unmount the S3QL file system
To safely unmount an S3QL file system (which may still hold uncommitted data), use the umount.s3ql command. It waits until all data, including anything still sitting in the local file cache, has been successfully transferred to the back-end server. Depending on how much data is waiting to be written, this may take some time.
The code is as follows:
$ umount.s3ql ~/mnt_s3ql
View S3QL file system statistics and repair S3QL file system
To view S3QL file system statistics, use the s3qlstat command, which displays information such as total data size, metadata size, de-duplication ratio, and compression ratio.
The code is as follows:
$ s3qlstat ~/mnt_s3ql
You can use the fsck.s3ql command to check and repair the S3QL file system. Similar to the fsck command, the file system to be checked must first be unmounted.
The code is as follows:
$ fsck.s3ql s3://[bucket-name]
S3QL use case: Rsync backup
Let me end this tutorial with a popular use case: local file system backup. For this, I recommend the rsync incremental backup tool, especially since S3QL provides a wrapper script for rsync (/usr/lib/s3ql/pcp.py). The script lets you use multiple rsync processes to recursively copy a directory tree to the S3QL target.
The code is as follows:
$ /usr/lib/s3ql/pcp.py -h
The following command will use 4 concurrent rsync connections to back up everything in ~ / Documents to an S3QL file system.
The code is as follows:
$ /usr/lib/s3ql/pcp.py -a --quiet --processes=4 ~/Documents ~/mnt_s3ql
These files are first copied to the local file cache and then gradually synchronized to the back-end server in the background.
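If you want to push cached data to the back end without unmounting (for example, before shutting the machine down), S3QL ships an s3qlctrl tool; a manual flush would look roughly like this:
The code is as follows:
$ s3qlctrl flushcache ~/mnt_s3ql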
That covers how to use an encrypted file system on a Linux server with S3QL. If you have run into similar questions, the steps above should help you work through them.