Yum sources:
http://mirrors.163.com
http://mirrors.aliyun.com
http://mirrors.sohu.com
On the NetEase mirror site, click CentOS7 to download NetEase's repo file, copy it to the /etc/yum.repos.d/ directory, and you can use NetEase's source online.
EPEL: Extra Packages for Enterprise Linux
From the EPEL site, download the package called epel-release-latest-7.noarch.rpm. Once it is installed, a new repo file appears in the /etc/yum.repos.d/ directory.
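As a sketch (the exact URLs and file names may differ; check the mirror's help page and the EPEL site), the repo setup on a node could look like this:
[root@node1 ~] # wget http://mirrors.163.com/.help/CentOS7-Base-163.repo -O /etc/yum.repos.d/CentOS7-Base-163.repo    # URL is an assumption
[root@node1 ~] # yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
[root@node1 ~] # yum clean all && yum repolist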
Provide block devices using CEPH
1. CEPH has a storage pool by default
[root@node1 ceph_conf] # ceph osd lspools
2. In order to provide block devices to the client, you need to create an image
3. Create a 10GB-sized image named demo-image in the default pool
[root@node1 ceph_conf] # rbd create demo-image --image-feature layering --size 10G
4. Another way: explicitly create an image named image in the rbd pool
[root@node1 ceph_conf] # rbd create rbd/image --image-feature layering --size 10G
5. List the images
[root@node1 ceph_conf] # rbd list
6. Check the details of image
[root@node1 ceph_conf] # rbd info image
7. Shrink the image named image to 7GB (for practice only; shrinking an image is not recommended)
[root@node1 ceph_conf] # rbd resize --size 7G image --allow-shrink
8. Resize the image image to 15GB
[root@node1 ceph_conf] # rbd resize --size 15G image
[root@node1 ceph_conf] # rbd info image
The client uses RBD block devices
1. Install the client software package on Client
[root@client ~] # yum install -y ceph-common
2. Copy the configuration file and the admin keyring to the client
[root@node1 ceph_conf] # scp /etc/ceph/ceph.conf 192.168.4.10:/etc/ceph/
[root@node1 ceph_conf] # scp /etc/ceph/ceph.client.admin.keyring 192.168.4.10:/etc/ceph/
3. Map the CEPH image named image on the client as a local disk
[root@client ~] # rbd list
[root@client ~] # rbd map image
[root@client ~] # lsblk    a new 15GB rbd0 device appears
4. Use RBD block devices
[root@client ~] # mkfs.xfs /dev/rbd0
[root@client ~] # mount /dev/rbd0 /mnt/
[root@client ~] # df -h /mnt/
[root@client ~] # echo "hello world" >> /mnt/hi.txt
[root@client ~] # cat /mnt/hi.txt
5. View the snapshots of image
[root@client ~] # rbd snap ls image
6. Create a snapshot called image-snap1 for image
[root@client ~] # rbd snap create image --snap image-snap1
[root@client ~] # rbd snap ls image
7. Simulate an accidental deletion
[root@client ~] # rm -f /mnt/hi.txt
8. Roll back to the snapshot to recover the mistakenly deleted file
(1) Unmount the image
[root@client ~] # umount /mnt/
(2) Roll back to the snapshot image-snap1
[root@client ~] # rbd snap rollback image --snap image-snap1
(3) Mount the image to see if it has been restored
[root@client ~] # mount /dev/rbd0 /mnt/
[root@client ~] # ls /mnt/
[root@client ~] # cat /mnt/hi.txt
Create an image from a snapshot
1. Set image-snap1 to protected state
[root@client ~] # rbd snap protect image --snap image-snap1
2. Clone a new image named image-clone from image's snapshot image-snap1
[root@client ~] # rbd clone image --snap image-snap1 image-clone --image-feature layering
3. View the relationship between the new image and the parent image
[root@client ~] # rbd info image-clone    its parent is rbd/image@image-snap1
4. Flatten the clone so that it no longer depends on the parent
[root@client ~] # rbd flatten image-clone
[root@client ~] # rbd info image-clone    now it has no parent
Unmap the block device
[root@client ~] # rbd showmapped    view which images are mapped to local block devices
[root@client ~] # umount /mnt/
[root@client ~] # rbd unmap image    unmap the image
[root@client ~] # rbd showmapped
[root@client ~] # lsblk    rbd0 no longer exists
Delete snapshot
[root@client ~] # rbd snap ls image lists snapshots of image
[root@client ~] # rbd snap unprotect image --snap image-snap1
[root@client ~] # rbd snap rm image --snap image-snap1
Delete the image
[root@client ~] # rbd rm image
[root@client ~] # rbd list
Install the KVM virtual machine with disks stored on the CEPH cluster
1. Set the physical host to be the CEPH client
[root@room8pc16 ~] # yum install -y ceph-common
[root@room8pc16 ~] # scp 192.168.4.1:/etc/ceph/ceph.conf /etc/ceph/
[root@room8pc16 ~] # scp 192.168.4.1:/etc/ceph/ceph.client.admin.keyring /etc/ceph/
2. Create a new image named vm1-image as the disk of the KVM virtual machine
[root@room8pc16 ~] # rbd create vm1-image --image-feature layering --size 10G
3. View image information
[root@room8pc16 ~] # rbd info vm1-image
[root@room8pc16 ~] # qemu-img info rbd:rbd/vm1-image
4. In the virt-manager graphical interface, create a new virtual machine named rhel7-ceph2, following the normal steps. The virtual machine starts automatically once it is created; at that point, click Force Off to shut it down.
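If you prefer the command line, roughly the same virtual machine can be created with virt-install; the memory size, disk path and ISO path below are placeholders, not values from this setup:
[root@room8pc16 ~] # virt-install --name rhel7-ceph2 --memory 1024 --vcpus 1 \
      --disk path=/var/lib/libvirt/images/rhel7-ceph2.qcow2,size=10 \
      --cdrom /iso/rhel7.iso --os-variant rhel7.0    # ISO path is a placeholder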
5. A KVM virtual machine needs to authenticate to CEPH, so we create a libvirt secret (the equivalent of a pass).
(1) create a temporary file on the physical host
[root@room8pc16 ~] # vim / tmp/secret.xml
The file defines a libvirt secret for Ceph; its usage name is client.admin secret.
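The body of secret.xml should follow the standard libvirt format for a Ceph secret; a minimal example:
<secret ephemeral='no' private='no'>
        <usage type='ceph'>
                <name>client.admin secret</name>
        </usage>
</secret>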
(2) Use the temporary file to generate the secret
[root@room8pc16 ~] # virsh secret-list    view existing secrets
[root@room8pc16 ~] # virsh secret-define --file /tmp/secret.xml
[root@room8pc16 ~] # virsh secret-list    the new secret has been generated
6. Associate the secret of kvm with the administrator user client.admin of ceph
(1) View the administrator's keyring
[root@room8pc16 ~] # cat /etc/ceph/ceph.client.admin.keyring
(2) Associate secret and keyring
[root@room8pc16 ~] # virsh secret-set-value --secret f59c54be-d479-41a6-a09c-cf2f7ffc57cb --base64 AQDaDSpboImwIBAAzX800q+H0BCNq9xq2xQaJA==
7. Export the xml file of rhel7-ceph2
[root@room8pc16 ~] # virsh dumpxml rhel7-ceph2 > /tmp/vm.xml
8. Modify vm.xml
[root@room8pc16 ~] # vim /tmp/vm.xml
Find the <disk> element that points at the virtual machine's local disk file,
and change it to an RBD network disk that uses the secret created above (a sketch follows).
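As an illustration only (the local file path and the target device are assumptions, not taken from this setup), a file-backed disk definition typically looks like:
<disk type='file' device='disk'>
        <driver name='qemu' type='qcow2'/>
        <source file='/var/lib/libvirt/images/rhel7-ceph2.qcow2'/>   <!-- path is an assumption -->
        <target dev='vda' bus='virtio'/>
</disk>
and the RBD-backed replacement, using the vm1-image image, the monitor on node1 (192.168.4.1) and the secret UUID from step 6, would look roughly like:
<disk type='network' device='disk'>
        <driver name='qemu' type='raw'/>
        <auth username='admin'>
                <secret type='ceph' uuid='f59c54be-d479-41a6-a09c-cf2f7ffc57cb'/>   <!-- UUID from virsh secret-list -->
        </auth>
        <source protocol='rbd' name='rbd/vm1-image'>
                <host name='192.168.4.1' port='6789'/>
        </source>
        <target dev='vda' bus='virtio'/>
</disk>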
9. Delete rhel7-ceph2 in virt-manager
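A command-line equivalent of removing the definition (this does not delete the disk image) is:
[root@room8pc16 ~] # virsh undefine rhel7-ceph2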
10. Generate a virtual machine using the modified vm.xml
[root@room8pc16 ~] # virsh define / tmp/vm.xml
The rhel7-ceph2 virtual machine will appear in Virt-manager
11. Modify the virtual machine's configuration: connect IDE CDROM 1 to the installation CD image, then check "Enable boot menu" under Boot Options and make the CD-ROM the first boot device.
Use CEPH FS to provide the client with a file system (the equivalent of NFS). Note that this approach is not mature and is not recommended in a production environment.
1. Prepare node4 as MDS (metadata server)
2. Install the software package on node4
[root@node4 ~] # yum install -y ceph-mds
3. Configure MDS of node4 on node1
[root@node1 ceph_conf] # cd /root/ceph_conf/    enter the configuration file directory
4. Synchronize the admin keyring to node4
[root@node1 ceph_conf] # ceph-deploy admin node4
5. Enable the mds service on node4 (operate on node1)
[root@node1 ceph_conf] # ceph-deploy mds create node4
6. Create storage pools. CEPH FS needs at least two pools: one for data and one for metadata.
[root@node1 ceph_conf] # ceph osd pool create cephfs_data 128
[root@node1 ceph_conf] # ceph osd pool create cephfs_metadata 128
Note: 128 means that the pool has 128 placement groups (PGs).
Description of PG: http://www.wzxue.com/ceph-osd-and-pg/
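A commonly cited rule of thumb for choosing the PG count is (number of OSDs × 100) / replica count, rounded up to the next power of two; for example, with 3 OSDs and 3 replicas this gives 3 × 100 / 3 = 100, which rounds up to 128.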
7. View MDS status
[root@node1 ceph_conf] # ceph mds stat
8. Create a file system
[root@node1 ceph_conf] # ceph fs new myfs1 cephfs_metadata cephfs_data
9. View the file system
[root@node1 ceph_conf] # ceph fs ls
10. Mount CEPH FS on the client
[root@client ~] # mkdir /mnt/cephfs    (run ceph auth list to view the admin key)
[root@client ~] # mount -t ceph 192.168.4.1:6789:/ /mnt/cephfs/ -o name=admin,secret=AQDaDSpboImwIBAAzX800q+H0BCNq9xq2xQaJA==
[root@client ~] # df -h /mnt/cephfs/
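To keep the key out of the command line and shell history, mount.ceph also accepts a secretfile= option; a sketch, with /etc/ceph/admin.secret as an arbitrary path chosen here:
[root@client ~] # ceph auth get-key client.admin > /etc/ceph/admin.secret    # path is arbitrary
[root@client ~] # mount -t ceph 192.168.4.1:6789:/ /mnt/cephfs/ -o name=admin,secretfile=/etc/ceph/admin.secret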
Object storage
1. What is an object? see: http://storage.ctocio.com.cn/281/12110781.shtml
2. Create a new virtual machine for RGW
node5.tedu.cn 192.168.4.5
node1 needs to be able to log in to node5 without a password. node5 also requires a yum source for ceph, and node1 must be able to reach node5 by name.
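A minimal sketch of that preparation, assuming node1 already has an SSH key pair and that name resolution is handled through /etc/hosts as for the other nodes:
[root@node1 ceph_conf] # echo "192.168.4.5 node5.tedu.cn node5" >> /etc/hosts
[root@node1 ceph_conf] # ssh-copy-id node5
[root@node1 ceph_conf] # scp /etc/yum.repos.d/ceph.repo node5:/etc/yum.repos.d/    # repo file name is an assumption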
3. Install RGW for node5 remotely in node1
[root@node1 ceph_conf] # ceph-deploy install --rgw node5
4. Synchronize keyring and other files to node5
[root@node1 ceph_conf] # ceph-deploy admin node5
5. Start the RGW service
[root@node1 ceph_conf] # ceph-deploy rgw create node5
[root@node5 ~] # systemctl status ceph-radosgw@*
6. RGW has a built-in WEB server civetweb. To facilitate user access, change the RGW port to 80 (default is 7480)
[root@node5 ~] # vim /etc/ceph/ceph.conf    append the following
[client.rgw.node5]
host = node5
rgw_frontends = "civetweb port=80"
[root@node5 ~] # systemctl restart ceph-radosgw@*    restart the service
7. Client verification
[root@client ~] # curl http://192.168.4.5
Using the RGW client
1. s3cmd is a command-line client for the Amazon S3 API
[root@client ~] # rpm -ihv s3cmd-2.0.1-1.el7.noarch.rpm
2. Create a user for RGW access
[root@node5 ~] # radosgw-admin user create --uid="testuser" --display-name="First User"    note the following output
"keys": [
    {
        "user": "testuser",
        "access_key": "2OEWOIA7G6GZZ22UI0E4",
        "secret_key": "FrOzh7NQC1Ak3C4SXNSaLAEFPy5SyPu9MC02mtdm"
    }
]
3. Configure S3 on the client
[root@client ~] # s3cmd --configure
Access Key: 2OEWOIA7G6GZZ22UI0E4
Secret Key: FrOzh7NQC1Ak3C4SXNSaLAEFPy5SyPu9MC02mtdm
Default Region [US]: press Enter to keep the default
S3 Endpoint [s3.amazonaws.com]: 192.168.4.5
DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]: %(bucket)s.192.168.4.5
When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP, and can only be proxied with Python 2.7 or newer
Use HTTPS protocol [Yes]: No
Test access with supplied credentials? [Y/n] y
Save settings? [y/N] y
The configuration file is finally saved in /root/.s3cfg
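For reference, the resulting entries in /root/.s3cfg should look roughly like the following (values taken from the dialog above):
[default]
access_key = 2OEWOIA7G6GZZ22UI0E4
secret_key = FrOzh7NQC1Ak3C4SXNSaLAEFPy5SyPu9MC02mtdm
host_base = 192.168.4.5
host_bucket = %(bucket)s.192.168.4.5
use_https = False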
4. After the client is configured, upload, download, delete and other operations can be performed.
[root@client ~] # s3cmd ls    view
[root@client ~] # s3cmd mb s3://my_bucket    create a container (folder)
[root@client ~] # s3cmd put /var/log/messages s3://my_bucket    upload a file
[root@client ~] # s3cmd ls s3://my_bucket    view the contents of my_bucket
[root@client ~] # s3cmd get s3://my_bucket/messages /tmp/    download
[root@client ~] # s3cmd del s3://my_bucket/messages    delete