This article explains the principles, architecture, and applications of Docker. The approach introduced here is simple, fast, and practical.
I. A brief introduction to cloud computing
1. Docker packages the application together with its runtime environment, solving the environment-dependency problem of deployment and truly achieving cross-platform distribution and use.
2. Because the guest operating system is omitted, containers are more lightweight, and a single server can run more applications.
3. For the storage layer, Ubuntu can use AUFS, while CentOS can only use Device Mapper; the former performs better.
II. Installation of Docker
III. Using Docker
1. A container is essentially a process running on the operating system, but with resource isolation and restriction. The three core technologies of Docker are:
CGroups: used to limit the resource usage of a process.
Namespace: used to divide resources into separate namespaces; on an operating system, resources such as user IDs and host names are otherwise global, and all running processes would access the same resources.
UnionFS: used to handle layered images.
2. An image is the file system inside the container, together with some of the container's runtime parameters; an image can be regarded as the template of a container.
3. Some commands (a combined example follows this list):
docker info: view running status and version information. It is a snapshot of the entire Docker Daemon, including the number of containers, the number of images, the Daemon version, the storage driver in use, and so on.
docker pull: pull an image.
docker run -i -t ubuntu /bin/bash: -i starts an interactive container, -t allocates a pseudo-TTY and attaches the container's stdin and stdout; ubuntu is the image to run and /bin/bash is the command executed when the container starts. Use [Ctrl+P Q] to detach from interactive mode without stopping the container.
Running a container for a long time: -d lets the container run in the background, and docker logs shows the container's log (in practice, its standard output).
docker ps: list containers; -a lists all containers, including stopped ones.
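A minimal sketch of how these commands fit together; the nginx image and the container name web1 are assumptions used purely for illustration.
docker pull nginx                 # fetch the image from the registry
docker run -d --name web1 nginx   # start a container in the background
docker logs web1                  # view its standard output
docker ps -a                      # list all containers, running or stopped
docker info                       # inspect daemon status, image/container counts, storage driver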
IV. In-depth analysis of Docker
a. The architecture of Docker
1. Docker Daemon: runs on the host. Users do not interact with Docker Daemon directly, but through Docker Client.
2. Docker Client: the main channel through which users access Docker; through it, users control access to Docker Daemon.
3. Docker Image: a read-only template.
4. Docker Registry: a repository for images. Images can be uploaded to and downloaded from both public and private registries.
5. Docker Containers: a container is like a folder that contains everything the application needs to run. Each container is derived from an image; containers can be run, started, stopped, moved, and deleted, and each container is an isolated, secure application environment.
b. How Docker works
1. A Docker image is a read-only template from which the container starts. Each image contains multiple layers, and Union File System combines these layers into a single image. Union FS can transparently stack files and directories to form a single file system, and every image is built from a base image.
2. A container consists of an operating system, user files, and metadata. At run time, the container adds a writable file layer on top of the Union FS.
3. Namespaces used by Docker: PID Namespace, NET Namespace, IPC Namespace, MNT Namespace, and UTS Namespace. Union FS implementations include AUFS, Btrfs, VFS, Device Mapper, and so on.
4. -p host_port:container_port maps a host port to a container port (see the example after this list)
5. Enter a running container: docker exec -it <container id> /bin/bash
6. docker commit <container id> <new name>: save the container, with all its changes, as a new image
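A small sketch that strings these together; the port numbers, container name, and image tag are assumptions.
docker run -d --name app -p 8080:80 nginx    # map host port 8080 to container port 80
docker exec -it app /bin/bash                # open a shell inside the running container
docker commit app mynginx:v1                 # save the modified container as a new image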
c. Image production
1. There are three ways to obtain an image:
Pull an image: docker pull
Convert a container into an image: docker commit
Build an image: generate an image from a Dockerfile
2. Search Docker Hub for images: docker search
3. Push an image: first tag it, docker tag [OPTIONS] IMAGE[:TAG] [REGISTRYHOST/][USERNAME/]NAME[:TAG], then push (see the example after this list)
4. Build an image from a Dockerfile: in the directory containing the Dockerfile, run docker build .
5. Delete an image: docker rmi
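A sketch of the search/pull/tag/push workflow; the repository name myuser/nginx and the tag v1 are assumptions for illustration.
docker search nginx                         # find candidate images on Docker Hub
docker pull nginx                           # pull one locally
docker tag nginx myuser/nginx:v1            # tag it for the target repository
docker push myuser/nginx:v1                 # push it to the registry
docker rmi myuser/nginx:v1                  # remove the local tag when no longer needed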
d. The docker run command
1. Syntax: docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]
2. Foreground and background operation: -d for background, -i -t for foreground. After running in the background, you can re-enter the container with docker attach, and detach with [Ctrl+P Q].
3. Identifying a container: --name, IMAGE[:tag], IMAGE[@digest]
4. PID setting: --pid=host shares the host's PID Namespace with the container (see the example after this list)
5. IPC setting: --ipc controls inter-process communication and can be shared with the host
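A sketch combining several of these options; the container name and image are assumptions.
docker run -d -i -t --name monitor --pid=host --ipc=host ubuntu /bin/bash
# --name identifies the container, -d runs it in the background,
# --pid=host and --ipc=host share the host's PID and IPC namespaces
docker attach monitor      # re-attach to the background container; detach again with Ctrl+P Q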
V. Container networking
a. Built-in container networks
1. View them with docker network ls; they include bridge, host, and none
b. Network details
1. View a network's detailed information with docker network inspect
2. When a container starts, its network information is registered globally.
c. User-defined network
1. There are three kinds: bridge network, Overlay network and plug-in network.
2. Bridge networks:
The system's default bridge is docker0
Create a bridge network named mynet with docker network create --driver bridge mynet
Attach a container to mynet with the --net option
Containers on the same bridge form a private network and can communicate with each other, but only on the same host (see the example after these points)
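A sketch of two containers communicating over a user-defined bridge; the network and container names are assumptions.
docker network create --driver bridge mynet          # create the user-defined bridge
docker run -d --name app1 --net mynet nginx          # attach the first container
docker run -it --name app2 --net mynet ubuntu /bin/bash
# inside app2, the first container is reachable by name, e.g.:
#   ping app1        (install iputils-ping first if the image lacks it)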
3. Overlay networks:
Overlay is a virtual switching technology that mainly solves network communication between different IP address segments. The overlay technology used by Docker is VXLAN, implemented with the help of libnetwork.
Overlay needs a K-V (key-value) store to hold the relevant host information; Consul, Etcd, and ZooKeeper can be used, with Consul as the default.
Overlay hosts must also open UDP/4789 and TCP/UDP/7946, which serve as the data channel and the control channel respectively (see the sketch after these points).
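A heavily hedged sketch of the overlay setup described here, assuming the legacy --cluster-store/--cluster-advertise daemon flags and a Consul store that is already running; the addresses, interface name, and network name are illustrative assumptions.
# assume a Consul key-value store is already reachable at <kv-host>:8500
# on every participating host, point the Docker daemon at the store
dockerd --cluster-store=consul://<kv-host>:8500 --cluster-advertise=eth0:2376
# create the overlay network once; it becomes visible on all hosts sharing the store
docker network create -d overlay multinet
docker run -d --name svc1 --net multinet nginx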
VI. Container data
a. Data volume
1. Data volumes provide a way for the host and containers to share data, which is needed for persistence and for data sharing. For persistence, the data volume is usually large and can be placed on a separate disk, volume, or array, with the container acting only as an execution environment. For data sharing, volumes can be used to develop and test distributed systems, for example with shared disks or fencing handling.
2. Create a data volume: mainly through the -v option, -v [host directory]:[container directory] (see the example after this list)
3. Never map the host's root directory into a container
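A minimal sketch of mounting a host directory as a data volume; the paths are assumptions.
docker run -it --rm -v /data/share:/shared ubuntu /bin/bash
# /data/share on the host appears as /shared inside the container;
# files written there persist on the host after the container exits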
b. Using data containers
1. Share one container's volumes with another container
2. To create a container that only holds the volumes, docker create is enough; it does not need to be running
3. Mount its volumes in another container with --volumes-from
c. Backup, restore, and migration of data volumes
1. -v $(pwd):xxxx, where $(pwd) represents the current path (see the example below)
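A sketch of backing up a data container's volume by combining --volumes-from with a host mount; the names dbdata, /dbdata, and the archive name are assumptions.
docker create -v /dbdata --name dbdata ubuntu          # data container holding the volume
docker run --rm --volumes-from dbdata -v $(pwd):/backup ubuntu \
    tar cvf /backup/dbdata.tar /dbdata                 # archive the volume into the current host directory
docker run --rm --volumes-from dbdata -v $(pwd):/backup ubuntu \
    tar xvf /backup/dbdata.tar -C /                    # restore it into a (new) data container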
d. Associate the container with the code
1. Several characteristics of data volumes:
The data volume is initialized when the container is created
Data volumes can be shared and reused between containers
Reads and writes to a data volume go directly to the host file system, bypassing the storage driver
The commit command does not save data-volume changes into the image
Even if the container is deleted, the data volume still exists, so pay special attention to avoid accumulating orphaned data volumes
VII. Image repositories
a. Repository-related Docker commands
1. docker login -u <username> -p <password>: log in to Docker Hub or a third-party registry
2. docker search mysql: find repositories related to mysql
3. docker pull mysql: pull the mysql image
4. docker push [OPTIONS] [server/][user/]image name[:TAG]: push an image
VIII. Storage structure of images and containers
a. The relationship between images, containers, and storage drivers
1. Each image consists of multiple image layers, which are read-only and are stacked from the bottom up to form the root file system of the container
2. While the container runs, all file changes, such as new, modified, and deleted files, are saved in the container layer
3. When managing images and containers, Docker uses copy-on-write technology, which relies on sharing and copying. For the same data, the system keeps only one copy, and all operations access that copy. When an operation needs to modify or add data, the operating system copies that part of the data to a new location; the operation modifies or adds data in the new area, while other operations continue to read the original data in the old area.
4. The docker history command lists an image's layer information
5. Copy-on-write saves storage space and speeds up container start-up
6. A data volume is a file or directory on the host that is mounted into the container when the container starts. Data volumes are not managed by the storage driver; reads and writes in a data volume bypass the storage driver and work directly on the host's file system. There is no limit to the number of data volumes a container can mount, and multiple containers can mount the same data volume.
b. How to choose a storage driver
1. Which storage driver to use depends on which file system the user uses on the host. Some storage drivers can work on different back-end file systems, while others must use the same back-end file system.
2. The storage driver is set on the Docker Daemon (see the --storage-driver option in section IX).
3. Considerations: no storage driver suits every scenario, and every storage driver is continuously being improved
4. Direction of selection: stability; familiarity; maturity; Overlay and Overlay2
c. AUFS storage driver
1.AUFS is a Union FS that combines different directories into a single directory to form a virtual file system. AUFS sets different permissions for each directory, and can add, delete, and modify mounted directories in real time.
d. Devicemapper storage driver
1. Devicemapper stores images and containers on virtual devices, manages them using on-demand allocation and copy-on-write snapshots, and operates on block devices rather than whole files
2. Devicemapper's direct-lvm mode should be used in production environments; in this mode Devicemapper uses a real block device as the storage medium and creates a thin pool on it
e. Btrfs storage driver
1. Btrfs is a next-generation storage technology that manages images and containers with on-demand allocation, copy-on-write, and snapshots. It is still under development and should be used cautiously in production environments.
2. Btrfs stores image layers and container layers in separate subvolumes or snapshots: the lowest image layer is a subvolume, and the other image and container layers are snapshots
f. ZFS storage driver
1. ZFS is a next-generation file system providing volume management, snapshots, checksumming, compression, deduplication, and multi-site replication. Developers who have not used ZFS before are advised not to use it in production environments.
g. Overlay storage driver
1. OverlayFS is a union file system; the Overlay2 storage driver is supported on Linux kernel 4.0 and above.
2. The Overlay/Overlay2 storage drivers are fast, faster than AUFS and Devicemapper, and even faster than Btrfs in some scenarios (a quick way to check the driver in use is shown below).
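A small sketch for checking which storage driver the daemon is currently using, via the docker info command mentioned in section III.
docker info | grep -i "storage driver"
# typically prints a line such as:  Storage Driver: overlay2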
IX. Customizing Docker Daemon
a. Three ways to modify Docker Daemon
1. There are three ways: modifying the command line, modifying the start-up items, and modifying the configuration file. Running Docker Daemon from the command line in the foreground is suitable for debugging; the other two ways should be used in production (a combined command-line example appears at the end of this section).
b. Registry-related configuration
1. --disable-legacy-registry option: do not pull images from legacy (v1) image registries
2. --registry-mirror option: specify registry mirrors; multiple mirrors can be set
3. --insecure-registry option: allow pulling images from an insecure registry
c. Security-related configuration
1. --pidfile (-p) option: set the pid file used by Docker Daemon; the default is /var/run/docker.pid
2. --host (-H) option: configure the IP address and port on which Docker Daemon listens
3. TLS options: configure TLS and the related certificates for remote communication
d. Log-related configuration
1. -D (--debug) option: enable debug mode
2. --log-level option: set the log level and related logging information
e. Storage-related configuration
1. -g (--graph) option: set the root directory of the Docker runtime
2. --storage-driver option: configure the storage driver for Docker Daemon
3. --storage-opt option: configure parameters for the storage driver
f. Bridge-related configuration
1. --bip option: set the IP address and subnet mask of docker0
2. --fixed-cidr option: configure the IP range from which container addresses are allocated
3. --mtu option: configure the maximum transmission unit for docker0
4. -b (--bridge) option: attach containers to an existing bridge instead of docker0
g. Communication between containers and the outside world
1. --ip-forward option: automatically set the host's ip_forward; the default is true
2. --iptables option: append forwarding rules to iptables; the default is true
3. --fixed-cidr-v6 option: set the IPv6 address range for containers
h. Other network configuration
1. --default-gateway and --default-gateway-v6 options: set the default gateway
i. execdriver-related configuration
1. --exec-opt option: set how the container's CGroups are managed; the default is cgroupfs, and systemd is optional
2. --exec-root option: set the root directory of the state files used by the execdriver; the default is /var/run/docker
j. Other configuration
1. --default-ulimit option: set default resource limits (ulimit) for containers, such as the maximum number of processes; the --ulimit parameter is used when starting a container (see the daemon example below)
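A sketch of starting the daemon from the command line with several of the options discussed in this section; the concrete values (addresses, mirror URL, ulimit numbers) are illustrative assumptions, and in production the same settings usually go into the start-up items or configuration file instead.
dockerd -D \
    --storage-driver=overlay2 \
    --bip=172.20.0.1/16 --mtu=1500 \
    --registry-mirror=https://mirror.example.com \
    --insecure-registry=registry.internal:5000 \
    --default-ulimit nproc=1024:2048
# -D enables debug mode; --bip and --mtu shape docker0; the registry options
# control where images are pulled from; --default-ulimit caps processes per container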
X. How to write a Dockerfile
a. Building an image locally
1. Add a .dockerignore file to filter out unwanted files
2. Use a specific Dockerfile file with -f
3. -t is used to tag the image
4. --no-cache: build without using the cache (see the example after this list)
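A sketch of a local build using these options; the file name and tag are assumptions.
docker build -t myapp:v1 -f Dockerfile.prod --no-cache .
# -t tags the result, -f selects a specific Dockerfile,
# --no-cache forces every instruction to be re-executed,
# and the trailing "." is the build context directory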
b. The .dockerignore file
1. Its syntax is similar to .gitignore (a small example follows)
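A minimal sketch of a .dockerignore file; the entries are assumptions about what a typical project might exclude.
# .dockerignore
.git
*.log
node_modules
tmp/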
c. Dockerfile format
1. Each instruction consists of the instruction plus its parameters, separated by spaces; comments start with #
2. By convention, instructions are uppercase and parameters lowercase. The first instruction must be FROM, which sets the base image.
d. Dockerfile instructions in detail
1. FROM instruction: sets the base image. Multiple base images can be set; the content between two FROM instructions goes into one image. tag and digest are optional, and if the tag is omitted the latest image is used.
2. MAINTAINER instruction: set the image author
3. RUN instruction: creates a new container and executes the script inside it. After the script finishes successfully, Docker Daemon commits the container as an intermediate image for subsequent instructions to use.
4. CMD instruction: sets the container's default start-up command. There can be only one effective CMD instruction; if several are written, only the last one takes effect.
5. LABEL instruction: sets labels on the image, which can be viewed with docker inspect. Each label is in Key=Value format, and different labels are separated by spaces. Each instruction generates an image layer and an image can have at most 127 layers, so it is best to set all labels in a single LABEL instruction.
6. EXPOSE instruction: declares the image's exposed ports, recording which ports the container listens on when it starts.
7. ENV instruction: sets environment variables in the image. The following instructions can read environment variables: ADD, COPY, ENV, EXPOSE, LABEL, USER, WORKDIR, VOLUME, STOPSIGNAL, ONBUILD.
8. ADD instruction: copies files into the image (ADD <src> <dest>). <src> must be inside the build directory. If <src> is a URL and <dest> does not end with /, the downloaded file is saved as <dest>; if <dest> ends with /, <dest> is the directory where the file is stored. If <src> is a directory, all of its contents are copied, including file-system metadata. If <src> is a recognized compressed file (identity, gzip, bzip2, xz), it is extracted into a directory; if <src> is a plain file, the file and its metadata are copied. If <src> uses wildcards or a file list, <dest> must end with /; if <dest> does not end with /, it is treated as a file name. If <dest> does not exist, ADD automatically creates it together with any missing parent directories.
9. COPY instruction: copy a file or directory into the image
10. ENTRYPOINT instruction: sets the container's entry program, the program executed when the container starts. The trailing command in docker run is passed as parameters to the entry program, and only the last ENTRYPOINT takes effect.
11. VOLUME instruction: set mount points in the container
12. USER instruction: sets the user name or UID used to execute RUN, CMD, and ENTRYPOINT
13. WORKDIR instruction: sets the working directory for the RUN, CMD, ENTRYPOINT, ADD, and COPY instructions
14. ARG instruction: sets build-time variables
15. ONBUILD instruction: sets hook instructions for child images. When a child image is built from the parent image, the ONBUILD instructions from the parent are executed first during the child image's build.
16. STOPSIGNAL instruction: sets the signal that Docker Daemon sends to the container when stopping it (a complete Dockerfile example follows this list)
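A sketch of a Dockerfile that exercises most of the instructions listed above; the base image, file app.sh, paths, port, and user are assumptions for illustration.
# base image (the first instruction must be FROM)
FROM ubuntu:16.04
MAINTAINER example@example.com
# labels, set in a single LABEL instruction
LABEL version="1.0" description="demo"
# environment variable, readable by later instructions
ENV APP_HOME /opt/app
# working directory for the following instructions
WORKDIR $APP_HOME
# copy a file from the build context
COPY app.sh .
# RUN executes in an intermediate container, which is then committed
RUN chmod +x app.sh
# declare the port the container listens on
EXPOSE 8080
# mount point for persistent data
VOLUME ["/opt/app/data"]
# run as a non-root user
USER nobody
# entry program, with default arguments supplied by CMD
ENTRYPOINT ["./app.sh"]
CMD ["--port", "8080"]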
e. The difference between CMD, ENTRYPOINT, and RUN
1. The RUN instruction sets the scripts and programs executed while the image is being built; once the build completes, the RUN instruction's life cycle ends.
2. CMD is the container's default start-up command; adding a command to the end of docker run replaces the program set by CMD.
3. ENTRYPOINT is the entry program and cannot be replaced by the command at the end of docker run; that trailing command is treated as a string and passed to ENTRYPOINT as parameters. You can add --entrypoint to docker run to replace the entry program in the image.
4. Some rules:
In a Dockerfile, there should be at least one CMD or ENTRYPOINT instruction
When using a container as an executable program, you should use ENTRYPOINT to define the entry program
In a Dockerfile, if both ENTRYPOINT and CMD are defined, CMD is passed to ENTRYPOINT as parameters (see the example after this list)
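A sketch of how ENTRYPOINT and CMD interact at run time, assuming the illustrative Dockerfile shown earlier (ENTRYPOINT ["./app.sh"], CMD ["--port", "8080"]) built as myapp:v1.
docker run myapp:v1                               # runs ./app.sh --port 8080 (CMD supplies default arguments)
docker run myapp:v1 --port 9090                   # the trailing command replaces CMD: ./app.sh --port 9090
docker run --entrypoint /bin/bash -it myapp:v1    # --entrypoint replaces the entry program itself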
XI. Dockerfile best practices
a. Basic principles
1. Containers should be ephemeral (short-lived).
2. Use .dockerignore
3. Install only the required packages
4. Each container runs only one process
5. Reduce mirror layer
6. Arrange multiple parameters in different lines
7. Build cache: Docker Daemon builds a new image starting from the base image. For the ADD and COPY instructions, Docker Daemon checks the metadata and contents of all source files when comparing against cached image layers; for instructions other than ADD and COPY, it does not examine files when looking for matching layers in the build cache.
b. Best practices for Dockerfile instructions
1. RUN instruction: keep it readable; apt-get update and apt-get install must be executed on the same line, and try to install all required packages in a single instruction
2. CMD instruction: prefer the JSON (exec) format; do not use CMD to set ENTRYPOINT parameters
3. ENV instruction: when a container provides a service, it is best to pass service-related configuration through environment variables
4. ADD and COPY instructions: COPY is recommended because it is simpler, only copying files from the build directory into the image, whereas ADD also extracts archives and supports remote copying
5. ENTRYPOINT instruction: when the container is used as a command-line tool, it is best to set the image's entry program through the ENTRYPOINT instruction
6.VOLUME directive: if you need to persist the database, configuration files, user uploaded folders and other file directories in the container, you can use the VOLUME directive to export these files and directories. The container will create a corresponding directory in the host's / var/lib/docker/volumes directory and mount it to the container.
7.WORKDIR directive: set the working directory of other instructions in Dockerfile and use the absolute path
8.USER directive: if the application in the container does not require special permissions, you can set the owner of the application to a non-root user through the USER directive
9. ONBUILD instruction: set hook instructions in the base image; a child image will first execute the instructions set by the base image's ONBUILD.
c. How to reduce image size
1. Avoid apt/yum update
2. Each instruction generates an image layer, and each layer takes up some disk space
3. Updating the installation source, installing the program, and cleaning the cache should all be done in a single RUN instruction to reduce the image size (see the example after this list)
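A sketch of a combined RUN instruction that updates the source list, installs packages, and cleans the cache in one layer; the package names are assumptions.
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl nginx && \
    rm -rf /var/lib/apt/lists/*
# one RUN = one layer: the downloaded package lists are removed
# in the same layer, so they never inflate the final image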
XII. Using containers to provide services
a. Using containers to provide database services
1. View the options that can be added when starting the mysql container: docker run -it --rm mysql --verbose --help
2. -e ENVIRONMENT_VARIABLE: configuration can be passed through environment variables
3. mysql-related directories: /etc/mysql/my.cnf, /etc/mysql/conf.d/, /var/lib/mysql/
4. mongodb-related directory: /data/db/ (see the example after this list)
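A sketch of running a MySQL container with a persistent data directory and configuration passed through an environment variable; the password, host paths, and port are assumptions (MYSQL_ROOT_PASSWORD is the variable the official mysql image expects).
docker run -d --name db \
    -e MYSQL_ROOT_PASSWORD=secret \
    -v /data/mysql:/var/lib/mysql \
    -v /data/mysql-conf:/etc/mysql/conf.d \
    -p 3306:3306 \
    mysql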
b. Using containers to provide Web services
1. apache-related directories: /usr/local/apache2/htdocs/, /usr/local/apache2/conf/httpd.conf (see the example after this list)
2. gitlab/gitlab-ce (Git repository) related directories: /etc/gitlab for configuration files, /var/opt/gitlab for all Git repositories, /var/log/gitlab for logs
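A sketch of serving a static site with the official Apache (httpd) image, mounting the directory listed above; the host path and port are assumptions.
docker run -d --name web \
    -v /data/site:/usr/local/apache2/htdocs/ \
    -p 8080:80 \
    httpd
# the host directory /data/site is served as the document root on host port 8080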
XIII. Setting up a private image repository
1. registry is the image repository container; image storage directory: /var/lib/registry, configuration file: /etc/docker/registry/config.yml (see the example below)
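A sketch of running the registry container and pushing an image to it; the host path, port, and image names are assumptions.
docker run -d --name registry \
    -p 5000:5000 \
    -v /data/registry:/var/lib/registry \
    registry
docker tag nginx localhost:5000/nginx      # retag a local image for the private registry
docker push localhost:5000/nginx           # upload it
docker pull localhost:5000/nginx           # pull it back from the private registry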
XIV. Frequently asked questions about Docker
1. Virtualization means running multiple isolated instances on a single host. This has two implications: first, the instances are isolated from each other, that is, they do not affect one another; second, each instance can be a complete operating system.