2025-01-15 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/03 Report--
Persistent Storage: Data Volumes
As you know, containers have a life cycle: when a container is deleted, the data inside it is lost, which is why persistent storage is needed.
Docker's default storage driver: overlay2 (here on an xfs filesystem).
Data volumes can be mounted in two ways:
1) bind mount (managed by the user): mount a host directory or file (it cannot be an unformatted disk device) into the container. By default, the container has read and write permission on the mounted directory. If you only want to add a single file to the container without overwriting a directory, note that the source file must already exist on the host; otherwise Docker treats the path as a directory and bind-mounts it into the container.
2) docker managed volume (managed automatically by Docker): you do not specify a source file, only the mount point inside the container. The directory in the container is then mapped to a location on the host.
The disadvantage of this approach compared with bind mount is that it cannot restrict permissions on the directories or files in the container.
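The difference between the two methods can be seen directly in the -v syntax. A minimal sketch (container names here are illustrative, not from the original setup):

```shell
# Bind mount: host path before the colon; :ro makes it read-only in the container
docker run -d --name web_ro -v /root/html:/usr/share/nginx/html:ro nginx

# Docker managed volume: only the container-side mount point is given;
# Docker picks the host location under /var/lib/docker/volumes/ itself
docker run -d --name web_vol -v /usr/share/nginx/html nginx
```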
With the second mount method, if you do not specify a source location for -v, the default mount path is:
[root@sqm-docker01 _data]# pwd
/var/lib/docker/volumes/dd173640edd5b0205bb02f3c4139647be12528b38289b9f93f18123a6b1266a8/_data
# When a directory is mounted this way, a directory named with a hash value is generated under /var/lib/docker/volumes/ by default. It contains a _data directory, and the files mapped from the container are under this path.
Example 1: data sharing between containers on a single docker host
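Instead of browsing /var/lib/docker/volumes/ by hand, the host path of a managed volume can also be looked up with the `docker volume` subcommands. A short sketch (the volume name is whatever hash `docker volume ls` shows on your host):

```shell
# List managed volumes; automatically created ones appear with hash names
docker volume ls

# Show the host-side mount point of one volume
# (substitute a volume name from the listing above)
docker volume inspect <volume-hash> --format '{{ .Mountpoint }}'
# prints a path of the form /var/lib/docker/volumes/<hash>/_data
```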
First create a volume container:
Basic concept of a volume container: a volume container is a container that exists solely to provide volumes for other containers. The volumes it provides can be either bind mounts or docker managed volumes.
Advantages of a volume container:
Compared with plain bind mounts, you do not have to specify the source file path for every container; all paths are defined once in the volume container, and other containers only need to be associated with it, decoupling those containers from the host.
(1) Create the volume container. First, create a local web page directory that will be mounted into the container:
[root@sqm-docker01 ~]# mkdir html
[root@sqm-docker01 ~]# echo "hello volume_data" > html/index.html
# Use both mounting methods at once (based on the busybox image):
[root@sqm-docker01 ~]# docker create --name vc_data01 -v /root/html/:/usr/share/nginx/html -v /other/useful/tools/ busybox
Note: vc_data01 is itself a container, but its status is Created.
(3) Run an nginx container based on the volume container:
[root@sqm-docker01 ~]# docker run -d --name test1 -p 80:80 --volumes-from vc_data01 nginx:latest
# --volumes-from: mount the data volumes of the specified container
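To confirm which host paths the new container inherited through --volumes-from, the container's mount list can be inspected. A sketch using the container name from the step above:

```shell
# Print each mount of test1 as "host source -> container destination"
docker inspect -f '{{ range .Mounts }}{{ .Source }} -> {{ .Destination }}{{ "\n" }}{{ end }}' test1
```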
(4) visit the default web page of nginx:
Looking at the mapping on the host's default path, two hash-named directories have been generated, indicating that two host paths are mounted.
Example 2: cross-host data sharing
Method 1: build an NFS service
Environment: two docker hosts (CentOS 7)
Docker01:172.16.1.30
Docker02:172.16.1.31
Nfs server: 172.16.1.40
Nfs server:
[root@nfs-server ~]# yum -y install nfs-utils    # install the NFS service
[root@nfs-server ~]# yum -y install rpcbind      # RPC port mapper, required by NFS
[root@nfs-server ~] # vim / etc/exports
Parameter explanation:
*: indicates all addresses. You can also customize the ip address or network segment.
Rw: readable and writable
Sync: synchronize data to disk
No_root_squash: with this option added, root users will have the highest permissions on shared directories, just as they do on local directories.
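The entry written to /etc/exports is not shown above (it was likely a screenshot), but given the shared directory created next and the options just explained, it was presumably along these lines:

```
# /etc/exports — share /nfs with all clients, read-write, synced, root keeps root
/nfs    *(rw,sync,no_root_squash)
```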
[root@nfs-server ~]# mkdir /nfs                  # create the shared directory
[root@nfs-server ~]# systemctl start rpcbind     # start rpcbind first
[root@nfs-server ~]# systemctl start nfs
Test from docker01 and docker02 whether the share can be mounted:
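Before adding anything to fstab, the export can be checked from either client (this assumes nfs-utils is also installed on the docker hosts, which provides the showmount tool):

```shell
# Ask the NFS server which directories it exports
showmount -e 172.16.1.40
# should list /nfs if the server is configured correctly
```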
Docker01:
Mount the NFS server's shared directory to a local directory:
Create a web page directory: [root@sqm-docker01 ~] # mkdir html
[root@sqm-docker01 ~]# vim /etc/fstab    # add the mount entry to the configuration file
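The fstab line added here is not shown above; for the NFS server and mount directory used in this example, it would presumably look like:

```
# /etc/fstab — mount the /nfs export onto the local html directory
172.16.1.40:/nfs    /root/html    nfs    defaults    0 0
```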
[root@sqm-docker01 ~]# mount -a    # reload to make it take effect
[root@sqm-docker01 ~]# df -hT      # view disk and mount information
Write web content:
[root@sqm-docker01 ~]# echo "hello docker02" > html/index.html
[root@sqm-docker01 ~]# cat html/index.html
hello docker02
Run the nginx container:
[root@sqm-docker01 ~]# docker run -d --name nginx01 -p 80:80 -v /root/html/:/usr/share/nginx/html nginx:latest
Docker02:
Mount nfs:
[root@sqm-docker02 ~]# mkdir html    # create a mount directory
[root@sqm-docker02 ~]# vim /etc/fstab
[root@sqm-docker02 ~]# mount -a      # reload to make it take effect
// Check whether the directory files are synchronized:
[root@sqm-docker02 ~]# cat html/index.html
hello docker02
// Run an nginx container:
[root@sqm-docker02 ~]# docker run -d --name nginx02 -p 80:80 -v /root/html/:/usr/share/nginx/html nginx
Visit the nginx page:
Now modify the nginx page on docker01 to test whether the nginx page on docker02 is synchronized:
[root@sqm-docker01 ~]# echo "123456" > html/index.html
Visit again from nginx02:
[root@sqm-docker02 ~]# curl 127.0.0.1
123456
Method 2: use a volume container
Environment: two docker hosts (CentOS 7)
Docker01:172.16.1.30
Docker02:172.16.1.31
(1) Create a directory and file on docker01:
[root@sqm-docker01 ~]# mkdir html
[root@sqm-docker01 ~]# echo "hello docker02" > html/index.html
(2) Write a Dockerfile:
[root@sqm-docker01 ~]# vim Dockerfile
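The Dockerfile contents are not shown above. For the later --volumes-from example to serve this page from /usr/share/nginx/html, it was presumably a minimal image along these lines (a sketch; paths taken from the surrounding example):

```dockerfile
FROM busybox:latest
# Copy the web page prepared on the host into the image
COPY html /usr/share/nginx/html
# Declare the directory as a volume so other containers can attach it via --volumes-from
VOLUME /usr/share/nginx/html
```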
// Build from the Dockerfile:
[root@sqm-docker01 ~]# docker build -t data:latest .    # custom image name
(3) Create a volume container and run the nginx container:
[root@sqm-docker01 ~]# docker create --name vc_data02 data:latest
[root@sqm-docker01 ~]# docker run -d --name box1 -P --volumes-from vc_data02 nginx
# -P: publish exposed ports on random host ports, starting from 32768 by default.
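When -P is used, the actual host port can be looked up instead of assumed. A sketch using the container from the step above:

```shell
# Show which host port was mapped to container port 80 of box1
docker port box1 80
# e.g. 0.0.0.0:32768
```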
Visit the nginx web page:
[root@sqm-docker01 ~]# curl 127.0.0.1:32768
Hello docker02
(4) Package the volume container image and copy it to docker02:
[root@sqm-docker01 ~]# docker save --output data.tar data:latest
[root@sqm-docker01 ~]# scp data.tar root@172.16.1.31:/root/
Docker02:
// Import the image:
[root@sqm-docker02 ~]# docker load --input data.tar
// Create the volume container:
[root@sqm-docker02 ~]# docker create --name vc_data03 data:latest
// Run nginx based on the volume container:
[root@sqm-docker02 ~]# docker run -d --name box2 -P --volumes-from vc_data03 nginx:latest
Finally, visit the nginx default page (confirm it is the same as the nginx page on docker01):
[root@sqm-docker02 ~]# curl 127.0.0.1:32768
hello docker02
These are several ways to share data across hosts; there are of course others, which may be covered in later posts.
-this is the end of this article. Thank you for reading-