1. Introduction to Docker's underlying storage mechanism
Docker, as the engine underneath containers, runs a program (and its child processes) in each container it organizes and runs. When a container starts, it depends on the joint (union) mount of one or more read-only image layers below it. The file systems that can store such layered builds and union-mount images include AUFS, overlay2, and devicemapper. On top of these read-only layers a writable layer is always built. All write operations performed in the container (adding, modifying, or deleting content) are stored in that top-most writable layer, and a copy-on-write (COW) mechanism is used to add, delete, and modify content that originates in the lower layers.
Under the copy-on-write mechanism, if a file exists in a lower layer and is marked as deleted in any higher layer, the user will not see that file at the top. What the user sees is either a file that has not been marked for deletion, or a file of the same name that was created at the top level after the lower one was marked as deleted.
Accessing a file this way (modifying, deleting, and so on) is not very efficient, especially for applications with high I/O requirements such as Redis and MySQL. MySQL, for example, is quite I/O-intensive: if it writes its data into the top-most writable layer of the union-mounted file system inside the container, that data is lost when the container is deleted, and access to it is inevitably slow. To get around this limitation, we can use the storage volume mechanism.
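To see which of these layered storage drivers a Docker host is using and where a container's writable layer lives, something like the following can be run (a minimal sketch; the container name demo is made up for illustration, and the UpperDir field applies to the overlay2 driver):
docker info --format '{{.Driver}}'
docker container run -d --name demo nginx:latest
docker container inspect -f '{{.GraphDriver.Data.UpperDir}}' demo
Files written inside the container land under that UpperDir path in the writable layer and are lost when the container is removed.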
2. Introduction to storage volumes
A storage volume can simply be imagined as a directory on the host's local file system (in the host's mount namespace) that is bound directly to a directory on the file system inside the container. When processes in the container write data to that directory, the data is written directly into the host directory, much like the mount --bind command. This lets processes inside the container bypass the restrictions of the container's own file system when saving data and establishes an association with the host's file system, so data and content can be shared between host and container: the container can directly access content provided by the host, and the host can directly read what the container writes. In effect, two otherwise isolated mount namespaces are bound together on a subpath, so that subpath of the file system is no longer isolated and can be shared, which also makes it easier for containers to share data with each other. The directory on the host that is bound to the file system inside the container is called a volume (storage volume) for that container. The benefit of a storage volume is that when the container is stopped or even deleted, we do not have to worry about losing data, as long as we do not delete the bound directory (the storage volume) on the host. When we later rebuild the container, we can associate it with the same storage volume and use the same data, so the data persists independently of the container's lifecycle.
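As a rough analogy on a plain Linux host (the directory names here are hypothetical), a bind mount makes one directory appear at another path, so writes through either path end up in the same place:
mkdir -p /srv/appdata /mnt/appdata
mount --bind /srv/appdata /mnt/appdata
echo test > /mnt/appdata/file.txt    # the file actually lands in /srv/appdata/file.txt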
Docker's default storage volumes live on the local host's file system. If containers need to be migrated between multiple Docker hosts (for example in a Docker cluster), we can also use shared storage such as an NFS file system, which makes migration easier for stateful applications in containers.
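One common way to do this is a named volume backed by NFS through Docker's local volume driver (a hedged sketch; the server address 192.168.1.100, the export /exports/webdata and the volume name nfs-webdata are placeholders):
docker volume create --driver local --opt type=nfs --opt o=addr=192.168.1.100,rw --opt device=:/exports/webdata nfs-webdata
docker container run -d -v nfs-webdata:/data --name web nginx:latest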
2.1. Problems with the file system in the container
Closing and restarting the container does not affect its data, but deleting the container deletes the container's data at the same time.
The data is stored in the union-mounted file system, which is not easy for the host to access.
It is inconvenient to share data between containers.
2.2. Benefits of volumes
The original intention of volumes is to persist data independently of the container's lifecycle: when the container is deleted, its data is not deleted, and unreferenced volumes are not garbage-collected. Storage volumes therefore solve the problems caused by the union-mounted file system inside the container.
2.3. Types of volumes
Docker has two types of volumes. Each has a mount point inside the container, but they differ in how the location on the host is chosen:
Bind mount volume: both the path on the host and the path in the container must be specified manually, establishing a binding relationship between two known paths.
docker run --name web1 -it -v HOSTDIR:VOLUMEDIR nginx:latest
Docker managed volume: you only need to specify where the mount point is inside the container; which host directory gets bound to it is left to Docker. The Docker daemon creates an empty directory (or uses an existing one) under its own storage path and binds it to the volume. This is very convenient when starting a container for the first time, since a volume is created for the container automatically under a path on the host, but when the container is deleted and started again, a new volume may be generated.
docker run --name web1 -it -v /data nginx:latest
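For reference, the same two mappings can also be written with the --mount syntax (HOSTDIR and VOLUMEDIR are placeholders as above; omitting the source for type=volume lets Docker manage the volume itself):
docker run --name web1 -it --mount type=bind,source=HOSTDIR,target=VOLUMEDIR nginx:latest
docker run --name web1 -it --mount type=volume,target=/data nginx:latest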
3. Using volumes in Docker containers
3.1. Docker managed volume
1. The volume of the myweb container is specified with -v /data.
[root@bogon ~]# docker container run -d -v /data --rm --name myweb httpd:1.1
54b7acd21f2a8bafeaa9bf2653828a54be0c8190fb53216423e9aca6f1da6be4
[root@bogon ~]#
[root@bogon ~]# docker container exec -it myweb /bin/sh
sh-4.1# ls
bin  boot  data  dev  etc  home  lib  lib64  lost+found  media  mnt  opt  proc  root  sbin  selinux  srv  sys  tmp  usr  var
sh-4.1#
2. View the details of the container
In Mounts you can see that the volume's mount point (Destination) is /data, along with its location on the host (Source) and other information.
[root@bogon ~]# docker container inspect myweb
...
"Mounts": [
    {
        "Type": "volume",
        "Name": "0905db81948a0d09d2c1f811eb113a72356f7d73e6085e0b513e6fcabd26dc1b",
        "Source": "/var/lib/docker/volumes/0905db81948a0d09d2c1f811eb113a72356f7d73e6085e0b513e6fcabd26dc1b/_data",
        "Destination": "/data",
        "Driver": "local",
        "Mode": "",
        "RW": true,
        "Propagation": ""
    }
],
...
"ArgsEscaped": true,
"Image": "httpd:1.1",
"Volumes": {
    "/data": {}
},
...
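The same fields can also be pulled out directly with a Go-template filter, in the same -f style used later in this article (a small extra command, not part of the original session):
docker container inspect -f '{{range .Mounts}}{{.Name}} -> {{.Source}}{{end}}' myweb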
3. Write data into the mount point directory on the host, then verify it inside the container
[root@bogon ~]# cd /var/lib/docker/volumes/0905db81948a0d09d2c1f811eb113a72356f7d73e6085e0b513e6fcabd26dc1b/_data
[root@bogon _data]# ls
index.html
[root@bogon ~]# docker container exec -it myweb /bin/sh
sh-4.1# ls /data/
index.html
[root@bogon _data]# echo "welcome to my container." > hello.html
[root@bogon _data]# ls
hello.html  index.html
[root@bogon ~]# docker container exec -it myweb /bin/sh
sh-4.1# ls /data/
hello.html  index.html
sh-4.1# cat /data/hello.html
welcome to my container.
4. Delete a file in the volume from inside the container
[root@bogon ~]# docker container exec -it myweb /bin/sh
sh-4.1# rm -f /data/hello.html
sh-4.1# ls /data/
index.html
[root@bogon _data]# ls
index.html
5. Delete the container, start a new one, and verify whether the file still exists
[root@bogon _data]# docker container ls -a
CONTAINER ID   IMAGE       COMMAND                  CREATED          STATUS          PORTS      NAMES
54b7acd21f2a   httpd:1.1   "/usr/sbin/apachectl…"   23 minutes ago   Up 23 minutes   5000/tcp   myweb
[root@bogon _data]# docker container kill myweb
myweb
[root@bogon _data]# docker container rm myweb
Error: No such container: myweb
[root@bogon _data]# docker container ls -a
CONTAINER ID   IMAGE       COMMAND       CREATED       STATUS       PORTS       NAMES
[root@bogon _data]#
[root@bogon _data]# docker container run -it -v /data --name myweb2 httpd:1.1 /bin/sh
sh-4.1# ls /data/
index.html
sh-4.1# cat /data/index.html
Welcom To My Httpd
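Note that -v /data creates a new anonymous volume for every container it starts. To reliably reattach the same data to a new container, a named volume (a standard Docker feature, not used in the session above) can be used instead; a brief sketch with a hypothetical volume name webdata:
docker volume create webdata
docker container run -d -v webdata:/data --name myweb httpd:1.1
docker container rm -f myweb
docker container run -it -v webdata:/data --name myweb2 httpd:1.1 /bin/sh    # /data shows the same files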
3.2. Bind mount volume
A bind mount volume works the same way as a Docker managed volume, except that you must specify both the host path and the path inside the container when starting the container.
[root@bogon _data]# docker container run -d -v /data/volumes/myweb3/:/data/httpd/index/ --name myweb3 httpd:1.1
c5828deffb5b8413c80841b3f2e9675565f7f20125311778df5c3123d867f07f
[root@bogon _data]# ll /data/volumes/myweb3/
total 0
[root@bogon ~]# docker container exec -it myweb3 /bin/sh
sh-4.1# ls /data/httpd/index/
sh-4.1#
[root@bogon myweb3]# docker container inspect myweb3
...
"Mounts": [
    {
        "Type": "bind",
        "Source": "/data/volumes/myweb3",
        "Destination": "/data/httpd/index",
        "Mode": "",
        "RW": true,
        "Propagation": "rprivate"
    }
],
...
Create a file under the host path, then verify that it exists inside the container.
[root@bogon myweb3]# echo 333 > hello.html
[root@bogon myweb3]# cat hello.html
333
sh-4.1# ls /data/httpd/index/
hello.html
sh-4.1# cat /data/httpd/index/hello.html
333
3.3. Data sharing between containers
Data sharing between containers can be achieved by having two containers mount the same host directory.
Start a second container called myweb4 that mounts the same host directory:
[root@bogon myweb3]# docker container run -d -v /data/volumes/myweb3/:/data/httpd/index/ --name myweb4 httpd:1.1
e9b6509c14436e8b23c45cadd447bab00f61f871525a07907563d09c807ef4e3
[root@bogon myweb3]# docker container exec -it myweb4 /bin/sh
sh-4.1# cd /data/httpd/index/
sh-4.1# cat hello.html
333
This method allows multiple containers to share data.
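If one of the sharing containers should only be able to read the data, the standard :ro suffix can be appended to the -v mapping (not used in the original session; the name myweb4-ro is made up):
docker container run -d -v /data/volumes/myweb3/:/data/httpd/index/:ro --name myweb4-ro httpd:1.1
Writes to /data/httpd/index/ inside this container will then fail with a read-only file system error.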
3.4. Start a new container by copying the volume settings of other containers, allowing multiple containers to share data
[root@bogon myweb3]# docker container run -d --name myweb5 --volumes-from myweb4 httpd:1.1
24b378251586371aade5232ba55ed16ba093ea5800c80562ae3863ab694f2116
[root@bogon myweb3]#
[root@bogon myweb3]# docker container inspect -f {{.Mounts}} myweb5
[{bind  /data/volumes/myweb3 /data/httpd/index   true rprivate}]
Starting a new container by copying the volume settings of another container saves us from specifying a long volume path every time we start a new container. We can also build an underlying basic supporting container, say basedcontainer, which does not need to run a service. On top of this base container we can then combine an architecture at will, for example by launching nginx, mysql, and tomcat containers: all three use --network container:basedcontainer to join the base container's network, and --volumes-from basedcontainer to reuse its volume settings, as sketched below.
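A minimal sketch of that pattern (container and image names are illustrative; --volumes-from also works with a stopped base container, but --network container: requires the base container to be running, so it is kept alive here with a no-op command):
docker container run -d --name basedcontainer -v /data/volumes/app/:/data/ busybox sleep 3600
docker container run -d --name nginx1 --network container:basedcontainer --volumes-from basedcontainer nginx:latest
docker container run -d --name tomcat1 --network container:basedcontainer --volumes-from basedcontainer tomcat:latest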