What are the relevant knowledge points of Docker

2025-03-28 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/01 Report--

This article explains the main knowledge points of Docker. The material is simple and clear, and easy to learn and understand. Follow along to study "what are the relevant knowledge points of Docker".

Welcome to the Docker world

a. What is Docker?

1. Containers: a container prevents access to protected resources; everything is isolated unless access is explicitly allowed

2. Containers are not virtualization: a program running in a Docker container interfaces directly with the host's Linux kernel; Docker simply helps you use the container technology already built into the operating system

3. Running software in isolated containers: implemented with Linux namespaces and cgroups

4. Shipping containers: done through Docker images (an image is a bundled snapshot of all the files needed to run a program in a container)

b. What problems does Docker solve?

1. Improved organization: Docker keeps everything isolated using containers and images

2. Improved portability:

It completely unlocks environments where the software could not be used before

The same software can run on any system

Software maintainers can focus on writing their software against a single platform and a single set of dependencies

3. Protecting your machine: containers limit the scope of one program's impact on other programs, the data it can access, and system resources

c. Why is Docker so important?

1. Docker provides an abstraction: it lets you handle complex tasks in a simplified way. Instead of focusing on the complexities and details of installing an application, with Docker we only care about what software we want to install.

2. A large software community promotes the use of containers and Docker

3. Docker does for computers what app stores did for mobile devices

4. We are finally starting to see better, more advanced isolation features exposed through the operating system

d. When and where to use Docker

1. Docker can only run applications that run on the Linux operating system, so it is limited to software that runs on Linux servers or desktops

2. Using Docker for daily tasks helps keep your computer clean

3. Containers do not by themselves improve the security of a program, especially one that must run with the highest (root) permissions

II. Running software in containers

a. Get help from the Docker command line tool

1. docker help

2. docker help cp

b. Control container: build a monitor for a website

1. Daemons: these containers run in the background without being attached to any input or output stream, which makes them ideal for programs that run silently in the background; use --detach or -d

2. Interactive processes: use --interactive (-i) and --tty (-t)
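As a minimal sketch of the two modes (container and image names here are illustrative, and a Docker daemon must be available):

```shell
# Detached mode: run an NGINX web server in the background.
# --detach (-d) prints the new container's ID and returns immediately.
docker run --detach --name web nginx:latest

# Interactive mode: open a shell inside a one-off container.
# --interactive (-i) keeps STDIN open; --tty (-t) allocates a terminal.
docker run --interactive --tty --rm ubuntu:latest /bin/bash
```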

c. Solved problems and the PID namespace

1. Every running program, or process, on a Linux machine has a unique number called a process identifier (PID). A PID namespace is a set of numbers that identify processes. Linux provides tools for creating multiple PID namespaces, each with a complete set of PIDs

d. Eliminate metadata conflicts: build a website farm

1. docker create --cidfile /xxx/cid creates a container without running it; --cidfile saves the container ID to the specified file

2. Containers need to be started in the reverse order of their dependency chain (--link); circular dependencies cannot be built with Docker containers

e. Build a system independent of the environment

1. Docker provides three specific features for building environment-independent systems:

Read-only file systems: the container cannot change the files it contains; --read-only

Environment variable injection: environment variables convey relevant information to the program running in the container, such as the container's daemon options and host name; --env or -e

Storage volumes

* docker run -d --name wp3 --link wpdb:mysql -p 80 -v /run/lock/apache2/ -v /run/apache2 -v /tmp/ --read-only wordpress:4 (the -v /tmp/ is missing in the book)

f. Create a persistent container

1. A Docker container is in one of four states: running, paused, restarted, or exited

2. Automatically restarting containers: --restart

3. Using init and supervisor processes to keep containers running: init, systemd, runit, upstart, supervisord
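A small, hedged sketch of the restart flag (the container names are illustrative):

```shell
# --restart always: the daemon restarts the container whenever it stops,
# backing off between attempts.
docker run -d --name backoff-detector --restart always busybox:latest date

# --restart on-failure:5: retry at most five times after non-zero exits,
# then leave the container stopped.
docker run -d --name limited-retries --restart on-failure:5 busybox:latest false
```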

g. Cleaning up

1. List all containers: docker ps -a

2. Delete a container: docker rm xxx

3. Quickly remove everything: docker rm -vf $(docker ps -aq)

III. Software installation simplified

a. Select the required software

1. A repository is a named bucket of images; the name is similar to a URL.

2. Tags are the only important way to uniquely identify an image and a convenient way to create useful aliases

b. Finding and installing software

1. docker search xxx

2. Registry servers free users from having to think about image storage and transport

3. docker load -i loads an image from an archive file; docker save saves an image to a file

4. docker rmi deletes an image

c. Installation files and isolation

1. An image layer is associated with at least one other image.

2. Images maintain parent/child relationships: a new layer is built on top of its parent layer. The files available in a container are the union of all the layers in the image the container was created from. An image can depend on any other image, including images from different repositories with different owners

3. Union file system: a program running in a container knows nothing about image layers; from the container's point of view, it has an exclusive copy of the files provided by the image. The other tools involved are the MNT namespace and the chroot system call.

4. The Linux kernel provides the MNT namespace. When Docker creates a container, the new container gets its own MNT namespace, and a new mount point is created for the image.

5. chroot makes the image's file system the root of the container's context, preventing any program running in the container from referencing other parts of the host file system

6. Advantages of layered file systems and their tools:

A common layer only needs to be installed once

Layering provides tools for dependency management and isolation

It is easy to build specialized software, because all you have to do is make minor changes to a base image.

7. Shortcomings of union file systems:

Different file systems have different rules for file attributes, sizes, names, and characters.

Union file systems use a pattern called copy-on-write, which makes it difficult to implement memory-mapped files (the mmap() system call)

8. Use the docker info command to find out which file system is in use

IV. Persistent storage and shared state with volumes

a. Introduction to storage volumes

1. A storage volume is a mount point on the container's directory tree, where a portion of the host's directory tree has been mounted

2. Semantically, a storage volume is a tool for segmenting and sharing data, with a scope and lifecycle independent of any single container

3. Images are suited to packaging and distributing relatively static files such as programs, while storage volumes hold dynamic or specialized data. This distinction keeps images reusable and data easy to share.

b. Type of storage volume

1. Bind mount volume

A bind mount volume points to a user-specified location on the host file system; it is useful when a file or directory provided by the host needs to be mounted at a specific location in the container

Use the -v (--volume) option with a colon-separated location mapping to create a bind mount. The map key (before the colon) is an absolute path on the host file system, and the value (after the colon) is the target location inside the container.

docker run -d --name bmweb -v ~/MyProject/docker/4:/usr/local/apache2/htdocs -p 8080:80 httpd

You can append :ro to make the mounted volume read-only

~/MyProject/docker/4:/usr/local/apache2/htdocs:ro

Two problems: bind mounts tie otherwise portable containers to a specific host's file system, and they create opportunities for conflicts with other containers

Bind mount volumes are best suited to workstations or machines that need special mount points; avoid such bindings on generic platforms or hardware pools

2. Docker-managed volumes: using managed volumes decouples volumes from specific locations on the host file system
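A minimal sketch of a managed volume (image and container names are illustrative): give only the container path, and Docker chooses and manages the host location.

```shell
# A managed volume: Docker picks the host location for /var/lib/cassandra/data.
docker run -d --name cass1 -v /var/lib/cassandra/data cassandra:latest

# Inspect where Docker placed the volume on the host.
docker inspect --format '{{json .Mounts}}' cass1
```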

c. Shared storage volume

1. The --volumes-from flag copies the volume references from any source container into the new container; copied volumes always keep the same mount points

2. You cannot use --volumes-from in three cases:

If the container you are building needs a shared volume mounted at a different location, you cannot use it

When source volumes conflict with each other, or with a new volume specification

If you need to change the write permission of a volume, you cannot use it

d. Manage the lifecycle of a volume

1. The lifecycle of a managed volume is independent of any container, but so far you can only reference managed volumes through containers.

2. Managed volumes are second-class entities: you have no way to share or delete a specific managed volume directly, because you have no way to name one. If you avoid bind mount volumes and create only managed volumes, you can distinguish them only by the containers that use them.

3. The best way to keep track of storage volumes is to define one container per managed volume.

4. Running docker rm with the -v flag attempts to delete any managed volumes referenced by the target container. If you delete every container that references a volume without using the -v flag, you end up with orphaned volumes, and removing orphaned volumes requires a series of manual steps.

e. Advanced container mode for storage volumes

1. Volume container pattern: a container that exists only to provide a handle to volumes. It does not need to run, because a stopped container still maintains the references to its storage volumes

2. Volume containers are important for maintaining handles to data; they make it easy to back up, restore, and migrate data, even when a single container requires exclusive access to it

3. Storage volume containers are most useful when you control and standardize the naming convention of mount points, because each container copies volumes from the volume container and inherits its mount point definitions. Images with specific requirements should clearly convey them in their documentation, or in some programmatic way

4. Data-packed volume containers: these package data with the container to add value, describing how images can be used to distribute static resources, such as configuration or code, for use in containers built from other images. A data-packed volume container copies static content from its image into the volumes it defines, and can be used to distribute critical architecture information

5. Polymorphic container pattern: tools interact in a consistent way, but several implementations may do different things; storage volumes can inject different behavior into a container without modifying its image
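A sketch of the volume container pattern described above (names and paths are illustrative):

```shell
# A volume container: created only to own two volumes; it runs `true`
# and exits, but its stopped state still holds the volume references.
docker run --name tools-archive -v /data -v /config busybox:latest true

# Workers inherit both volumes and their mount points.
docker run --rm --volumes-from tools-archive busybox:latest ls /data /config

# Back up the data volume through the handle container.
docker run --rm --volumes-from tools-archive -v "$(pwd)":/backup \
    busybox:latest tar czf /backup/data.tgz /data
```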

V. Network access

a. Network-related background knowledge

b. Docker's networks

1. Docker is concerned with two types of networks: single-host virtual networks and multi-host virtual networks. The local virtual network provides container isolation. A multi-host virtual network builds an abstract overlay network in which every container has its own routable IP address, independent of the other containers on the network.

2. Docker uses underlying operating system features to build a specific, customizable virtual network topology. Each container has its own local loopback interface and a virtual Ethernet interface that is connected to another virtual interface in the host's namespace.

3. The four archetypes of network containers: closed containers, joined containers, bridged containers, and open containers

c. Closed containers

1. Processes running in such a container can access only the local loopback interface; this is appropriate when a process needs to communicate only with itself or with other local processes.

2. Add --net none as an argument to docker run to tell Docker to create a closed container
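A quick way to see the isolation (busybox chosen only for illustration):

```shell
# A closed container: only the loopback interface is visible.
docker run --rm --net none busybox:latest ip addr show

# Reaching the outside world from it fails, since there is no route.
docker run --rm --net none busybox:latest ping -w 2 8.8.8.8 || echo "no route"
```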

d. Bridged containers

1. These have two interfaces: a private local loopback interface, and a private interface connected through a bridge to the other containers on the host

2. This is the default, or use the --net bridge option

3. The --hostname option sets the host name of a new container; --dns sets the DNS server IP; --dns-search specifies a DNS search domain, which acts like a default suffix for host names; --add-host customizes host-name-to-IP-address mappings

4. All custom mappings are saved in the container's /etc/hosts file

5. Bridged containers cannot be reached from outside the host network by default and are protected by the host's firewall. The default network topology provides no route from the host's external interface to a container interface.

6. The -p or --publish option creates a mapping between a port on the host's network stack and a container port; -P or --publish-all tells the Docker daemon to map all of the container's exposed ports to host ports; the --expose option sets which ports the container makes available; the docker port command prints one port mapping per line

7. By default, containers are fully open to other local containers; starting the daemon with docker -d --icc=false turns off network connections between containers

8. docker -d --bip "<IP/CIDR>" sets the IP address of the interface created for the Docker bridge; docker -d --fixed-cidr "<CIDR>" (classless inter-domain routing) sets the size of the container subnet; docker -d --mtu 1200 sets the maximum transmission unit (MTU); -b or --bridge tells Docker to use a custom Linux bridge
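A hedged sketch of the publishing options above (container names are illustrative):

```shell
# Publish container port 80 on host port 8080 (host:container order).
docker run -d --name pub-web -p 8080:80 nginx:latest

# Publish all of the image's EXPOSEd ports on ephemeral host ports.
docker run -d --name pub-all -P nginx:latest

# List the resulting mappings, one per line.
docker port pub-web
```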

e. Joined containers

1. Use a joined container (--net container:<name>) when you want programs in different containers to communicate over the local loopback interface; when a program in one container will change the joined network stack and another program will use that changed stack; or when you need to monitor the network traffic of a program in another container

f. Open containers

1. Created with --net host; they have full access to the host's network and provide no isolation

g. Cross-container dependency

1. Link-local service discovery: when a new container is created with a link, the target container must already be running, and three things happen:

Environment variables describing the target container are created

The link alias and the corresponding target container's IP address are added to the DNS override list

If inter-container communication is disabled, Docker adds specific firewall rules to allow communication between the linked containers

2. When inter-container communication (ICC) is allowed, the --expose option simply offers container ports for mapping to host ports; when ICC is disabled, --expose becomes a tool for defining firewall rules and explicitly declaring a container's interface on the network

3. --link <container name>:<alias> creates a link

4. Only with correct and proper configuration, strong network rules, and explicit declaration of service dependencies can a defense-in-depth security posture be built.

5. Links are fundamentally static, directional, and non-transitive dependencies; a link reads the target container's network information (IP address and exposed ports) and injects it into the new container

VI. Isolation: limiting hazards

a. Allocation of resources

1. Memory: -m or --memory; the available unit suffixes are b, k, m, and g. A memory limit is not a memory reservation; it prevents the container from using more than the specified amount of memory.

2. CPU weight: --cpu-shares=<integer>; pinning to specific CPUs: --cpuset-cpus

3. Device authorization: the --device option specifies a set of devices to mount into the new container. The option's value must be a mapping from a device file on the host operating system to its location in the new container
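The three limits sketched together (all values and names are illustrative):

```shell
# Cap memory at 256 MB, halve the default CPU weight of 1024,
# and pin the container to CPUs 0 and 1.
docker run -d --name capped \
    --memory 256m \
    --cpu-shares 512 \
    --cpuset-cpus 0,1 \
    busybox:latest sleep 3600

# Device mapping: expose the host's sound device inside the container.
docker run -it --rm --device /dev/snd:/dev/snd ubuntu:latest /bin/bash
```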

b. Shared memory

1. Linux's IPC namespaces partition shared memory primitives such as named shared memory blocks, semaphores, and message queues. By default, Docker creates a separate IPC namespace for each container.

2. The --ipc option supports creating a new container that shares the IPC namespace of another target container

3. --ipc host lets the container communicate with processes running on the host; this is an open memory container

c. Understand the user

1. The -u or --user option sets the run-as user; you can set both user and group, e.g. -u nobody:default, or with numeric IDs such as -u 10000:20000

2. Unless you want host files to be accessible from the container, do not mount them into the container as volumes

d. Capabilities: authorization of operating system features

1. Drop capabilities from a container with the --cap-drop option, and add capabilities with the --cap-add option
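A small sketch of dropping and adding capabilities (capability names are real; the commands are illustrative):

```shell
# Drop the ability to change file ownership inside the container.
docker run --rm --cap-drop CHOWN busybox:latest \
    sh -c 'chown nobody /tmp 2>&1 || echo "chown blocked"'

# Grant one extra capability instead of reaching for full --privileged.
docker run --rm --cap-add SYS_TIME busybox:latest date
```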

e. Run privileged container

1. Privileged containers maintain their own file system and network isolation, but have full access to devices and shared memory, as well as full system capabilities

2. Use the --privileged option to turn on this mode

f. Use hardening tools to create more robust containers

1. Docker provides an option for specifying a Linux Security Module (LSM) when a container is created or run. LSM is a framework adopted by Linux as an interface layer between the operating system and security providers; AppArmor and SELinux are both LSMs

2. Set it with the --security-opt option

3. Use LXC through --exec-driver=lxc; once configured for LXC, you can use --lxc-conf to set the LXC configuration.

g. Building use-case-appropriate containers

1. Applications: make sure the user running the application has limited permissions; limit the application's system capabilities; limit the application's CPU and memory resources

VII. Packaging software in images

a. Build an image from a container

1. The basic workflow for building an image from a container has three parts:

Create a container from an existing image

Modify the container's file system

Once the changes are complete, commit them (docker commit)

2. Review the changes: docker diff xxxx

3. The docker commit command creates a new image from the modified container; it is best to use -a to specify author information for the new image, and the -m option to set a commit message

4. docker run --entrypoint specifies the entry point. An entry point is the program executed when the container starts. If it is not set, the default command is executed directly; if it is set, the default command and its arguments are passed to the entry point as arguments

5. When you use the docker commit command, a new layer is committed to the image, but not only a file system snapshot is committed: each layer also includes metadata describing the execution context
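The three-step workflow above as a sketch (container name, author, and message are placeholders):

```shell
# 1. Create a container from an existing image and modify its file system.
docker run --name image-dev ubuntu:latest \
    /bin/sh -c 'apt-get update && apt-get install -y git'

# 2. Review what changed in the container's file system.
docker diff image-dev

# 3. Commit the changes as a new image, with author info and a message.
docker commit -a "editor@example.com" -m "added git" image-dev ubuntu-git:latest
```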

b. Go deep into Docker images and layers

1. A union file system consists of multiple layers. Each time a change is made to the union file system, it is recorded in a new layer, which is placed on top of all the previous layers.

2. When a file is read from a union file system, it is read from the topmost layer in which the file exists

3. Most union file systems use copy-on-write, which is easier to understand as copy-on-change: when a file on a read-only layer is modified, the entire file is first copied up to the top writable layer before the change is made. This has a negative impact on runtime performance and on image size

4. An image is a stack of layers. Starting from a top layer, the layers are chained together downward by the parent-layer ID recorded in each layer's metadata, traversing the dependent layers in turn.

5. Repositories and tags are created by the docker tag, docker commit, and docker build commands

docker commit xxx repository/name:tag

docker tag repository/name:tag xxxxx

6. A union file system actually deletes a file by adding a marker file at the top layer; the original file and any copies of it remain in the image's other layers. Union file systems may limit the number of layers; a 42-layer limit is common on machines using AUFS. You can view all the layers of an image with the docker history command.

c. Export and import a flat file system

1. The docker export command exports the full contents of a container's flattened union file system to standard output or a compressed file; this is very helpful when you need to use an image's file system outside the context of a container

2. The docker import command imports the contents of an archive into a new image and recognizes several compressed and uncompressed formats. It is a simple way to put a minimal set of files into a new image.

d. Best practices for version control

1. In Docker, the key to maintaining multiple versions of the same software is correct repository tagging. Every repository contains multiple tags, and multiple tags can point to the same image; these two points are the core of a practical tagging scheme.

2. The smallest unit of the versioning scheme should match the smallest unit of actual software iteration.

3. The latest tag should point to the latest stable version, not a test version

4. If the software's dependencies change, or the software needs to be released for multiple underlying systems, those dependencies should be reflected in your tagging scheme
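The tagging advice above as a sketch (repository, versions, and the dependency suffix are all illustrative):

```shell
# One image, several tags pointing at it.
docker tag myrepo/mysvc:1.9.2 myrepo/mysvc:1.9     # newest patch of 1.9
docker tag myrepo/mysvc:1.9.2 myrepo/mysvc:latest  # latest stable release

# A dependency variant encoded in the tag, as point 4 suggests:
docker tag myrepo/mysvc:1.9.2 myrepo/mysvc:1.9.2-ubuntu16.04
```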

VIII. Build automation and advanced image settings

a. Use Dockerfile to package Git

1. docker build -t ubuntu:auto . The value of the --tag (-t) option specifies the repository name and tag you want to use; --file (-f) sets the name of the Dockerfile

2. Dockerfile contents:

FROM ubuntu:latest tells Docker to create the new image from the latest ubuntu image

MAINTAINER sets the name and email address of the image maintainer

RUN apt-get install -y git runs the command

ENTRYPOINT ["git"] sets the entry point of the image to git
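Put together, the Dockerfile described above would look like this (the maintainer details are placeholders, and an apt-get update line is added so the install can succeed):

```dockerfile
# An automated build for an image whose entry point is git.
FROM ubuntu:latest
MAINTAINER "editor" editor@example.com
RUN apt-get update && apt-get install -y git
ENTRYPOINT ["git"]
```

Building it with docker build -t ubuntu:auto . then lets docker run ubuntu:auto version pass "version" straight to git.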

3. The builder can cache the results of each step. When there is a problem with the next instruction after running several instructions, the builder can restart from the same step after the problem has been fixed.

b. Getting started with Dockerfile

1. Dockerfile instructions

ENV, like --env on docker run, sets an environment variable in the image

LABEL defines key-value pairs that are recorded as additional metadata of the image or container, consistent with the --label option

WORKDIR, consistent with --workdir, sets the default working directory

EXPOSE, consistent with --expose, marks a port as open

COPY copies files from the build context on the host into the new image; it takes at least two arguments, where the last is the destination directory and the others are source files. Any copied file will be owned by root. If any argument contains spaces, you must use the exec (array) form.

VOLUME, consistent with --volume; each value in the argument is created as a new volume definition in the resulting layer

CMD, closely related to ENTRYPOINT, sets the default process to start in the container

ADD works like COPY, except that if a URL is specified it fetches the remote source file, and source files identified as archives are extracted
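A sketch combining several of these instructions in one file (all names, paths, and values are illustrative):

```dockerfile
FROM busybox:latest
LABEL maintainer="editor@example.com"
ENV APP_ROOT=/app
WORKDIR /app
# Exec form of COPY, safe even if a path contains spaces.
COPY ["./entrypoint.sh", "/app/"]
VOLUME ["/app/data"]
EXPOSE 8080
ENTRYPOINT ["/app/entrypoint.sh"]
CMD ["--default-flag"]
```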

c. Inject the actions that occur during the build of the downstream image

1. If the generated image is used as the base image of another build, the ONBUILD instruction defines instructions to be executed at that later time. Instructions following ONBUILD are not executed when the Dockerfile containing them is built; they are recorded in the resulting image's metadata under ContainerConfig.OnBuild.
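A minimal two-file sketch of the trigger (image names and paths are illustrative):

```dockerfile
# Dockerfile for the base image. The ONBUILD lines are only recorded here;
# they run later, at the start of any downstream build that uses this image.
FROM busybox:latest
ONBUILD COPY [".", "/app/"]
ONBUILD RUN ls /app

# Dockerfile for a downstream image (a separate file):
# FROM my-base:latest   <- at this point the two ONBUILD triggers fire
```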

d. Use startup scripts and multi-process containers

1. UNIX-based computers usually start an initialization (init) process first. The init process is responsible for starting all the other system services, keeping them running, and shutting them down.

2. Mainstream tools include runit, BusyBox init, Supervisord, and daemontools

e. Hardening application images

1. Build minimized images by including as few components as possible; force an image to be built from a specific base image; provide an appropriate default user; and commonly, drop root user rights

2. An image ID that includes a content digest is called a content-addressable image identifier (CAID); it refers to a specific layer containing specific content.

3. If the image you build is designed to run a particular application, the default configuration should drop user permissions as far as possible, but choose the moment carefully, so that the reduced permissions do not prevent the remaining Dockerfile instructions from executing.

IX. Distribution of public and private software

a. Publish via managed Registry

1. docker login (with your Docker Hub account)

2. docker build -t <account name (Docker Hub account)>/<project name> .

3. docker push <account name>/<project name>

b. Introduction to private Registry

1. The Docker Registry software (called Distribution) has good availability and a permissive license, which make running your own registry very cheap; it can be obtained through Docker Hub and is easy to use in non-production environments

c. Manual publishing and distribution of images

1. Create an image with the docker build command, then create an image file with the docker save or docker export command

2. Once you have the image file, you can complete the transfer with docker load or docker import
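The manual transfer sketched end to end (repository name and file path are illustrative):

```shell
# On the source machine: write the image (all layers plus metadata)
# to a tarball.
docker save -o ./myapp.tar myrepo/myapp:1.0

# Move myapp.tar by any channel (scp, USB stick, artifact store),
# then on the destination machine:
docker load -i ./myapp.tar
```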

d. Mirror source code distribution workflow

1. Contains only one Dockerfile and your project source code

two。 Use git to save a Dockerfile, which is separate from all Docker distribution tools and relies only on the image builder

X. Running a custom Registry

a. Run personal Registry

1. Key components:

The registry base image is based on Debian, with updated dependencies.

The main program is named registry and is available on the PATH

The default configuration file is config.yml
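A sketch of running a personal registry and pushing to it (the container name and repository path are illustrative; registry:2 is the Distribution image on Docker Hub):

```shell
# Run a personal registry on localhost port 5000.
docker run -d --name personal_registry -p 5000:5000 registry:2

# Tag an image into the local registry's namespace and push it.
docker tag busybox:latest localhost:5000/mine/busybox:latest
docker push localhost:5000/mine/busybox:latest
```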

XI. Docker Compose: declarative environments

a. Docker Compose

1. Compose is a tool for defining, starting, and managing services, where a service is defined as one or more replicas of a Docker container. Services, and systems of services, are defined in YAML files and managed with the docker-compose command line.
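A minimal sketch of such a YAML definition (service names, images, and the environment values are illustrative):

```yaml
# docker-compose.yml: a two-service system.
version: "2"
services:
  db:
    image: postgres:9.6
    environment:
      POSTGRES_PASSWORD: example
  web:
    image: nginx:latest
    ports:
      - "8080:80"
    depends_on:
      - db
```

Running docker-compose up -d in the same directory would start both services in the declared order.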

XII. Docker Machine and Swarm clusters

1. Docker Machine can create and remove Docker-enabled hosts, building toward distributed systems on clusters of machines

2. A Swarm cluster consists of two types of machines: a machine running Swarm in management mode is called a manager, and a machine running a Swarm agent is called a node. The manager handles container scheduling for the cluster.

Thank you for reading. That concludes "what are the relevant knowledge points of Docker". After studying this article, you should have a deeper understanding of the topic; specific usage still needs to be verified in practice.



