This article explains how Docker can use the rexray plug-in to access Ceph for unified storage. The content is simple, clear and easy to follow; work through it step by step.
Docker with rexray as a Ceph-backed shared storage backend
Docker Swarm dramatically speeds up distributed and clustered deployment: a deployment that used to take a day can be cut to less than 10 minutes after switching to Docker.
Docker Swarm is generally used to deploy stateless applications, which can run on any node and scale out. Stateful applications can also be deployed with Docker Swarm, but with a limitation: by default Docker writes data to the local file system, so when a stateful application is moved to another node its data does not move with it.
The solution Docker provides is a volume plugin: store the data in one unified place so that it can be shared between containers on different nodes.
To keep Swarm-specific knowledge and operations out of scope, everything here is done with plain Docker; if you are familiar with Swarm, the approach is easy to carry over (see the sketch below).
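For readers who do know Swarm, a minimal sketch of how the same idea would look as a Swarm service (not taken from this article; the service name web, volume name appdata and image nginx are hypothetical examples):

# Hypothetical Swarm usage: mount a named volume backed by the rexray/rbd
# volume driver into a service; the service/volume/image names are examples only.
docker service create --name web \
  --mount type=volume,source=appdata,target=/data,volume-driver=rexray/rbd \
  nginx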
Technology selection
Official plug-in list
Plug-in (disadvantages / advantages):
DigitalOcean, Virtuozzo, Blockbridge: paid services
BeeGFS, Convoy, DRBD, Infinit, IPFS, OpenStorage: niche, difficult to learn
Contiv: cannot be used yet; Ceph support is said to be about two years away
Fuxi: based on OpenStack, so the research cost is high
Flocker: lacks a suitable backend, although popular
gce-docker: based on Google Cloud; paid
Quobyte: does not support a distributed Docker installation
GlusterFS: cannot be deployed via Docker; good official documentation
Horcrux: could not be built successfully; poor documentation
Minio: still relatively easy to build
REX-Ray: the option we chose; see below
After reviewing and comparing all of the officially listed options, we did not immediately find a feasible and easy-to-use solution. After a lot of searching, rexray, Flocker and GlusterFS turned out to be the three most commonly recommended volume plugins in China. Further comparison showed that rexray supports Ceph, so the rexray + Ceph combination best matched our needs.
Introduction to Ceph
Ceph provides three storage services: file storage, block storage and object storage. rexray/rbd uses the block storage; with this plug-in, data that used to go to local files goes to the Ceph service instead. The other two services can be used directly by business applications as additional services. In addition, Ceph is relatively simple to set up and supports Docker, distributed deployment and horizontal expansion (after adding hardware, start an osd service to join it to the Ceph cluster).
Officially, Ceph only provides a deployment tutorial for Kubernetes, but there is a Ceph image on the Docker Store with fairly detailed build instructions. Even without Kubernetes, those container instructions make it easy to deploy in swarm mode. The image bundles several components and selects which one to run through the start-command arguments; there are also separate images for the individual components.
Environment setup
Three Docker nodes, labelled as follows:
B1 (10.32.3.147)
B2 (10.32.3.148)
B3 (10.32.3.149)
Building Ceph
Start mon
Start mgr
Start osd
View ceph status
Ceph demo quick start
Cleanup before rebuilding
Start mon
Initialize the cluster on B1. Mon is short for monitor; it monitors the cluster.
docker run -d --net=host --name=mon \
  -v /etc/ceph:/etc/ceph \
  -v /var/lib/ceph/:/var/lib/ceph \
  -e MON_IP=10.32.3.147 \
  -e CEPH_PUBLIC_NETWORK=10.32.3.0/24 \
  ceph/daemon mon
After this service runs, a Ceph cluster is initialized and configuration files are generated under /etc/ceph and /var/lib/ceph.
Run docker exec mon ceph -s to check the cluster status; it will show a warning that there are no mgr or osd services yet.
In a cluster, any other service that wants to join the cluster initialized by mon needs the relevant files generated by mon copied to its node before the service is started. Note that the mon service is similar to ZooKeeper: additional mons can run on B2 and B3 to prevent a single point of failure (a sketch follows below).
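For reference, a minimal sketch of adding a second mon on B2, following the same copy-and-run pattern used later for mgr and osd (this is an illustration, not a command from the original article; MON_IP is set to B2's address):

# On B2: copy the files generated by the first mon, then start another mon
# with B2's own IP so the cluster has more than one monitor.
scp -r root@10.32.3.147:/etc/ceph/* /etc/ceph/
scp -r root@10.32.3.147:/var/lib/ceph/bootstrap* /var/lib/ceph/
docker run -d --net=host --name=mon \
  -v /etc/ceph:/etc/ceph \
  -v /var/lib/ceph/:/var/lib/ceph \
  -e MON_IP=10.32.3.148 \
  -e CEPH_PUBLIC_NETWORK=10.32.3.0/24 \
  ceph/daemon mon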
Start mgr
Mgr is the service that manages the cluster. It can run on the same host as mon, but to reflect the distributed nature of Ceph we choose to start the mgr service on B2.
On B2, copy the files from B1:
scp -r root@10.32.3.147:/etc/ceph/* /etc/ceph/
scp -r root@10.32.3.147:/var/lib/ceph/bootstrap* /var/lib/ceph/
Start mgr on B2
docker run -d --net=host --name=mgr \
  -v /etc/ceph:/etc/ceph \
  -v /var/lib/ceph/:/var/lib/ceph/ \
  ceph/daemon mgr
Run docker exec mon ceph -s on B1 to view the cluster status.
Start osd
Osd is a service that accepts and stores data in ceph.
On B3, copy the files from B1:
scp -r root@10.32.3.147:/etc/ceph/* /etc/ceph/
scp -r root@10.32.3.147:/var/lib/ceph/bootstrap* /var/lib/ceph/
Start osd
sudo docker run -d --net=host --name=osd \
  --privileged=true \
  -v /etc/ceph:/etc/ceph \
  -v /var/lib/ceph:/var/lib/ceph \
  ceph/daemon osd
Multiple OSDs are typically deployed, but each Docker node can run only one osd service. In a swarm environment, to make full use of the disks, the osd service can be started on every node.
View ceph status
Enter the mon container and execute ceph -s to check the service status. Quick command:
docker exec ${mon_container_id} ceph -s
When the cluster status shows health HEALTH_OK and pgmap active+clean, the cluster is available.
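For scripting, a small polling sketch can wait for this state (assuming the mon container is named mon, as above; ceph health prints HEALTH_OK once the cluster is healthy):

# Poll the cluster until it reports HEALTH_OK; "mon" is the container name
# used when the monitor was started earlier.
until docker exec mon ceph health | grep -q HEALTH_OK; do
  echo "waiting for the Ceph cluster to become healthy..."
  sleep 5
done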
Quick-start demo
Building Ceph piece by piece is cumbersome; if you only want to test the availability of the rexray/rbd plug-in, the following command starts a single container that contains all the services.
docker run -d --net=host --name=ceph \
  -v /etc/ceph:/etc/ceph \
  -v /var/lib/ceph/:/var/lib/ceph \
  -e MON_IP=10.32.3.147 \
  -e CEPH_PUBLIC_NETWORK=10.32.3.0/24 \
  ceph/demo
Cleanup before rebuilding
Introducing a new technology inevitably runs into problems; the following command cleans everything up before rebuilding.
rm -rf /etc/ceph/* && rm -rf /var/lib/ceph/ && \
docker rm -f mon osd mgr
Possible problems
Ceph needs open ports:
firewall-cmd --zone=public --add-port=6789/tcp --permanent && \
firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent && \
firewall-cmd --reload
Using the Ceph plug-in
Install rbd, ceph client
Install the rexray plug-in
Test unified storage
Install rbd, ceph client
The plug-in requires the rbd and ceph client packages to be installed so that the commands are available. According to the official documentation you can run ceph-deploy install <client-ip> to install them on a given node, but this installs many packages and fails easily. By accident we found that installing just the ceph-common package is enough, with fewer dependencies and a faster install. If the ceph-deploy errors can be resolved, the official method is still recommended to avoid surprises later.
Before installing, make sure the cluster configuration files exist under /etc/ceph/ on the Docker host. Since the configuration files already exist on all three machines B1-B3 after the Ceph cluster was built, this step can be skipped here; for machines newly added to the swarm, pay attention to it.
On B1-B3, install the rbd and ceph packages: yum install ceph-common -y
Test commands:
rbd: rbd ls
ceph: ceph -s
Install the rexray/rbd plug-in
Execute on B1-3
docker plugin install rexray/rbd
This plug-in is slow to download. When the swarm cluster has many nodes, it is recommended to store it in a private registry first and have the nodes pull it from there. Unlike images, plug-ins have no build -t operation; to put the plug-in into a private registry, perform the following steps:
1. Pull the plug-in using the --alias parameter: docker plugin install --alias 10.32.3.112:5011/rexray/rbd rexray/rbd
2. Push it to the private registry: docker plugin push 10.32.3.112:5011/rexray/rbd
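The other nodes can then install the plug-in from the private registry instead of Docker Hub (a sketch using the registry address above; --grant-all-permissions simply skips the interactive permission prompt):

# On each remaining node: pull the plug-in from the private registry;
# --grant-all-permissions avoids the interactive permission prompt.
docker plugin install --grant-all-permissions 10.32.3.112:5011/rexray/rbd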
Test unified storage
There are two ways to create a data volume and start a container:
1. Automatic
docker run --volume-driver 10.32.3.112:5011/rexray/rbd -v test:/test -it --rm busybox sh
2. Manual
docker volume create -d 10.32.3.112:5011/rexray/rbd test
docker run -v test:/test -it --rm busybox sh
If the container fails to start with the error "unable to map rbd", execute manually:
rbd map test --name client.admin
This may be a bug of the plug-in.
After the container starts successfully on B1, go to the /test directory and create a file named "can_you_see_me".
Exit the container, then start a container on B2 with the same command; running ls in /test shows the file created on B1, which verifies that files created on B1 can be accessed on B2. At this point, Docker is using rexray to provide unified shared storage backed by Ceph.
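For convenience, the whole round trip as a sketch (the volume name test and file name can_you_see_me are the ones used above; this assumes the rexray/rbd plug-in maps the volume on whichever node mounts it):

# On B1: create the volume through the plug-in and write a file into it.
docker volume create -d 10.32.3.112:5011/rexray/rbd test
docker run -v test:/test --rm busybox touch /test/can_you_see_me
# On B2: mount the same volume and list its contents; can_you_see_me should appear.
docker run -v test:/test --rm busybox ls /test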
Tips and tools
You need an auxiliary machine
On the auxiliary machine you can host the tools and projects shared across environments, such as a Maven private repository, a Docker private registry, Jenkins, and so on.
According to the official Ceph build tutorial, you need a helper machine with ceph-deploy installed, which then controls the other nodes to install and deploy the Ceph cluster.
Although I did not use the official build scheme (which installs on the host system directly), the idea carries over: configure passwordless SSH on this machine so it can quickly control the swarm cluster nodes in the various environments (a sketch follows).
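A minimal passwordless-SSH sketch from the auxiliary machine (the node IPs are the ones used in this article; adjust to your environment):

# Generate a key pair once on the auxiliary machine (skip if one exists),
# then copy the public key to every node so ssh/scp stop prompting for passwords.
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
for node in 10.32.3.147 10.32.3.148 10.32.3.149; do
  ssh-copy-id root@$node
done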
You need to learn ansible.
When your cluster has more than 5 machines, copying the Ceph configuration file to five Docker machines may take you a minute. When the cluster has more than 20 machines you might think it only takes 4 minutes, but are you sure you will not miss one?
Packages such as ceph-common and the rexray/rbd plug-in need to be installed manually, and installation depends heavily on network speed. Once you know that Ansible can control n machines in parallel, you will not hesitate to learn it (see the sketch below).
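As an illustration, a few ad-hoc Ansible commands that would cover the repetitive steps above (the inventory group name docker_nodes is hypothetical):

# Copy the Ceph configuration to every node in parallel.
ansible docker_nodes -m copy -a 'src=/etc/ceph/ dest=/etc/ceph/'
# Install ceph-common everywhere.
ansible docker_nodes -m yum -a 'name=ceph-common state=present'
# Install the rexray/rbd plug-in everywhere.
ansible docker_nodes -m shell -a 'docker plugin install --grant-all-permissions rexray/rbd'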
Remaining problems
When a data volume has been used by multiple nodes, deleting it on any one node also deletes it from Ceph, but the other nodes still keep their mapping to the volume. As a result, a volume with the same name cannot be recreated unless the mapping is removed manually or a different volume name is used (a possible manual cleanup is sketched below).
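A possible manual cleanup on a node that still holds the stale mapping (a sketch, assuming the volume was named test as above):

# List the rbd images currently mapped on this node, then unmap the stale one
# (either by image name or by the /dev/rbdX device shown by showmapped).
rbd showmapped
rbd unmap test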