The concept, Analysis principle and characteristics of Docker

2025-02-24 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/03 Report--

I. A brief introduction to Docker

What does an application container look like? A finished application container resembles a virtual machine that ships with a specific set of applications. For example, if I want to use a MySQL database, I can simply find a container that contains MySQL. As soon as I run that container, the MySQL service is up and ready to use.
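The MySQL example above can be sketched with the standard Docker CLI. This is a minimal sketch assuming a local Docker installation; `MYSQL_ROOT_PASSWORD` is the environment variable the official `mysql` image requires, while the password value and host port here are placeholder choices.

```shell
# Pull the official MySQL image and start a container from it.
# The password "secret" and host port 3306 are placeholder values.
docker pull mysql:8.0
docker run -d \
  --name my-mysql \
  -e MYSQL_ROOT_PASSWORD=secret \
  -p 3306:3306 \
  mysql:8.0

# The service is now up; connect with any MySQL client, e.g.:
#   mysql -h 127.0.0.1 -P 3306 -u root -p
```

Removing the container later (`docker rm -f my-mysql`) leaves the host's own software untouched, which is exactly the point being made above.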

Why not just install MySQL (or SQL Server) directly?

A: because installation on a physical machine varies from computer to computer and can fail with all kinds of errors, and if your machine catches a virus or crashes, every service has to be reinstalled.

Note:

With Docker, or with containers in general, it is different. As long as you can run the container, the MySQL configuration comes with it; there is no need to install MySQL at all. If you switch to another computer, just bring the container along and the services inside it are immediately usable.

Docker is developed in the Go language; the code is hosted on GitHub and released under the Apache 2.0 open-source license.

Docker can package virtually any application and run the same service on almost any server. In other words, developers build an application once and run it on many platforms, and operators configure a single runtime to run all their applications.

At the same time, containers ensure a high degree of consistency across development, test, and production environments.

Note:

In addition, containers are isolated much like virtual machines: the data and memory space of each container are isolated from one another, which provides a degree of security.

Hypervisors such as Hyper-V, KVM, and Xen are based on hardware-level virtualization, which makes them very demanding on the system, while Docker containers share the host operating system. As a result, containers use system resources far more efficiently and economically than hypervisors.

Instead of virtualizing the hardware, a container resides on a Linux instance.

Docker can not only solve the problems that virtual machines solve, but also those that virtual machines cannot tackle because of their high resource requirements.

Characteristics of Docker:

1. Deliver applications quickly

Developers build a development container from a standard image, and system administrators use the same image to deploy the code once development is complete.

Docker can quickly create containers, quickly deploy applications, and make the entire process visible, making it easier for the rest of the team to understand how the application is created and works.

Docker containers are very light and very fast: container startup time is sub-second, which saves time in development, testing, and deployment.

2. Easier deployment and scaling

Docker containers can run in almost any environment: physical machines, virtual machines, public clouds, private clouds, personal computers, servers, and so on.

Because Docker containers are compatible with many platforms, an application can be migrated from one platform to another.

3. More efficient

A Docker container needs no hypervisor; it is kernel-level virtualization.

4. Rapid deployment also means easier management

Usually only a small change is needed, replacing what used to be a huge, sweeping update.

So if containers and virtual machines are so similar, why not use virtual machines?

A: Docker still has many advantages over VMs:

1. Fast startup: a container usually starts in about a second, while a VM takes much longer.

2. High resource utilization: an ordinary server can run thousands of containers, but only a handful of VMs.

3. Low performance overhead: a VM needs extra CPU and memory to run a full guest OS, consuming additional resources.

The figure below (not reproduced here) compares Docker with traditional virtualization.

As the figure shows, containers are virtualized at the operating-system level and directly reuse the host's operating system, whereas traditional virtualization is implemented at the hardware level.

II. The architecture of Docker

Docker uses a client-server (C/S) architecture: the Docker daemon acts as the server, accepting requests from the client and handling them (creating, running, and distributing containers). Client and daemon can run on the same machine and communicate over a socket or the RESTful API.

The Docker daemon usually runs in the background on the host.

The Docker client takes the form of system commands; users interact with the Docker daemon through docker commands.
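The client/daemon split can be seen directly, assuming a local Docker installation: the same information is available through the `docker` client or straight from the daemon's Unix socket via its REST API.

```shell
# The docker client talks to the daemon for you:
docker version     # reports both Client and Server (daemon) versions

# The daemon also answers raw HTTP on its Unix socket (the RESTful API);
# curl can query it directly without going through the docker client:
curl --unix-socket /var/run/docker.sock http://localhost/version
```

Both commands hit the same daemon, which is the architecture described above: the client is just one convenient way to speak to it.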

1. The Docker daemon (Docker daemon)

As the architecture figure (not reproduced here) shows, the Docker daemon runs on a host. Users do not interact with the daemon directly; instead they communicate with it indirectly through the Docker client.

2. The Docker client (Docker client)

The Docker client, which is actually a binary program of Docker, is the way users interact with Docker. It accepts user instructions and communicates with the Docker daemon behind it.

3. Docker internals:

To understand Docker's internal construction, you need to understand three concepts:

Docker images (Docker images)

Docker repositories (Docker registries)

Docker containers (Docker containers)

1. Docker images

A Docker image is a read-only template for the Docker container at runtime, and an image can be used to create Docker containers. Each image consists of a series of layers.

Docker uses UnionFS (a union file system) to combine these layers into a single image.

UnionFS allows files and folders in separate file systems (called branches) to be transparently overlaid, forming a single coherent file system. These layers are why Docker is so lightweight. When you change a Docker image, for example by upgrading a program to a new version, a new layer is created. So instead of replacing the whole image or rebuilding it from scratch (as you might with a virtual machine), only a new layer is added or updated. To distribute the update you no longer need to publish the entire image, just the new layer, which makes distributing Docker images simple and fast.

Each Docker image has many layers, and Docker uses a union file system to combine these different layers into a single image.

For example, when nginx is installed into the CentOS image, it becomes an nginx image. This is exactly where the layered nature of Docker images shows up: the underlying CentOS operating-system image has an nginx layer stacked on top of it to build the nginx image. The concept of layering is not hard to grasp; in this case the CentOS operating-system image is usually called the parent image of the nginx layer.
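The CentOS-plus-nginx layering described above is exactly what a Dockerfile expresses: each instruction creates a new layer on top of the parent image. This is a sketch assuming the `centos:7` base image from Docker Hub (note that centos:7 is end-of-life, so its yum mirrors may no longer be online).

```shell
# Write a two-layer Dockerfile: CentOS parent image + an nginx layer.
cat > Dockerfile <<'EOF'
FROM centos:7
RUN yum install -y nginx
CMD ["nginx", "-g", "daemon off;"]
EOF

# Build the image and inspect its layer stack (newest layer first):
docker build -t nginx-on-centos .
docker history nginx-on-centos
```

`docker history` makes the parent/child relationship visible: the bottom rows are the CentOS parent image's layers, and the `RUN yum install` row is the nginx layer added on top.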

2. Docker repositories:

A Docker repository is used to hold images and can be thought of as a code repository in version control. Like code repositories, Docker repositories come in public and private flavors. The best-known public registry is Docker Hub, which offers a huge collection of images for use; these images can be created by you or built on top of other people's images.

A repository is a place where image files are stored centrally. The repository is sometimes conflated with the registry server (Registry), and the two are not always strictly distinguished. In fact, a registry server often hosts multiple repositories, and each repository contains multiple images, each with a different tag.

Repositories are divided into public (public) and private (private) ones.

The largest public registry is Docker Hub, which stores a huge number of images for users to download. Open registries in China, such as Docker Pool, can provide mainland users with more stable and faster access.

Of course, users can also create a private repository within the local network.

Once you have built your own image, you can use the push command to upload it to a public or private repository; the next time you want to use the image on another machine, you just pull it from the repository.
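The push/pull workflow just described looks like this with the CLI. It is a sketch: an existing local image `myimage:latest` and the Docker Hub account `myuser` are placeholder names.

```shell
# Tag a local image with your repository name, then upload it:
docker tag myimage:latest myuser/myimage:1.0
docker login                    # authenticate against Docker Hub
docker push myuser/myimage:1.0

# Later, on another machine, fetch it back down:
docker pull myuser/myimage:1.0
```

Pushing to a private registry works the same way; the repository name just gains the registry's hostname prefix (e.g. `registry.example.com/myuser/myimage:1.0`).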

Note: the Docker repository concept is similar to Git, and the registry server can be thought of as a hosting service such as GitHub.

3. Docker containers:

Docker uses containers to run applications, and a Docker container contains all the environments that an application needs to run.

Each Docker container is created from a docker image. Docker containers can be run, started, stopped, moved, and deleted. Each docker container is an independent and secure application platform.

A container is a running instance created from an image. It can be created, started, stopped, and deleted. Each container is an isolated, secure platform.

Think of a container as a simplified Linux runtime environment (including root privileges, a process space, user space, network space, and so on) plus the applications running inside it.

Note: the image is read-only, and the container creates a writable layer as the top layer at startup.
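The read-only image plus writable top layer can be observed directly: `docker diff` lists only what the container's writable layer has changed relative to its image. A sketch assuming a local Docker installation; the container name `demo` and the file path are placeholders.

```shell
# Start a container and modify a file inside it:
docker run -d --name demo nginx:alpine
docker exec demo sh -c 'echo hello > /tmp/scratch.txt'

# Only the writable layer's changes show up; the image is untouched:
docker diff demo

# Removing the container discards the writable layer;
# the nginx:alpine image itself remains unchanged.
docker rm -f demo
```

This is also why data meant to outlive a container belongs in a volume rather than in the writable layer.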

Containers differ greatly from virtual machines: they are designed to run a single process and do not emulate a complete environment well. Docker's designers strongly advocate one process per container; running multiple processes in one container is really only justified for debugging.

A container is designed to run an application, not a machine. You can use containers as virtual machines, but you lose a lot of flexibility, because Docker provides tools to separate applications from data, letting you quickly update running code and systems without touching the data.

Since version 0.9, Docker has replaced LXC with libcontainer. (The original article includes a diagram of how libcontainer interacts with the Linux system.)

4. Docker underlying technology

The two core underlying technologies of Docker are Namespaces and Control Groups (cgroups).

Namespaces are used to isolate containers:

(1) pid Namespace

Processes of different users are isolated by pid namespaces, and different namespaces may contain the same pid.

The parent process of all LXC processes in Docker is the Docker process, and each lxc process has a different Namespace.
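The pid-namespace isolation is easy to observe: inside a container the main process is pid 1, even though the host sees the same process under an ordinary pid. A sketch assuming a local Docker installation; the container name `pidns` is a placeholder.

```shell
# Inside the container's own pid namespace, the application is PID 1:
docker run -d --name pidns nginx:alpine
docker exec pidns ps aux        # the nginx master process appears as PID 1

# On the host, the very same process has an ordinary (large) pid:
docker inspect --format '{{.State.Pid}}' pidns

docker rm -f pidns
```

Both commands describe one process; only the namespace through which it is viewed differs.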

(2) net Namespace

With pid namespaces, pids in different namespaces are isolated from one another, but network ports would still be shared with the host. Network isolation is achieved through net namespaces: each net namespace has its own network devices, IP addresses, IP routing tables, and /proc/net directory, so each container's network can be isolated. By default, Docker uses a veth pair to connect the virtual network interface in the container to a Docker bridge, docker0, on the host.
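The docker0 bridge and the per-container network stack mentioned above can be inspected with standard tools. A sketch assuming a local Docker installation on Linux; interface and container names are placeholders.

```shell
# The host-side bridge that containers attach to by default:
ip addr show docker0

# Each container has its own interfaces, IP, and routes in its net namespace:
docker run -d --name netns nginx:alpine
docker exec netns cat /proc/net/dev                        # container's own interface list
docker inspect --format '{{.NetworkSettings.IPAddress}}' netns   # its bridge-assigned IP

docker rm -f netns
```

The container's `/proc/net/dev` shows only its own `eth0` and `lo`, not the host's interfaces, which is the net-namespace isolation in action.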

(3) ipc Namespace

Process interaction inside a container still uses Linux's ordinary inter-process communication (IPC) methods, including semaphores, message queues, and shared memory. The difference is that a container's inter-process interaction is confined to processes that share the same ipc namespace on the host.

(4) mnt Namespace

Similar to chroot, this places a process in a particular directory for execution. The mnt namespace lets processes in different namespaces see different file-system structures, so the file directories seen by processes in each namespace are isolated from one another. The file system seen inside a container is a complete Linux system, with /etc, /lib, and so on, implemented via chroot.

(5) uts Namespace

The UTS ("UNIX Time-sharing System") namespace gives each container its own hostname and domain name, so that it can be treated as a separate node on the network rather than as a process on the host.
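The UTS namespace is what the `--hostname` flag manipulates: each container can carry its own hostname independent of the host's. A sketch assuming a local Docker installation; `web01` is a placeholder name.

```shell
# The container sees its own hostname from its UTS namespace:
docker run --rm --hostname web01 alpine hostname   # prints: web01

# The host's own hostname is unaffected:
hostname
```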

(6) user Namespace

Each container can have different user and group ids, which means programs inside the container run as the container's internal users rather than as users on the host.

Summary:

With the six namespaces above isolating the process, network, IPC, file-system, UTS, and user views, a container exhibits the capabilities of a stand-alone computer, and different containers are isolated from one another at the OS level. However, containers in different namespaces still compete for the same underlying resources, so a management mechanism akin to ulimit is needed to control how much each container may use: cgroups.

Cgroups (Control Groups) implement resource quotas and metering.
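Those cgroup quotas are exposed through `docker run` resource flags; the kernel's control groups enforce the limits. A sketch assuming a local Docker installation; the limit values and container name are placeholders.

```shell
# Limit a container to 512 MB of RAM and half a CPU via cgroups:
docker run -d --name limited \
  --memory 512m \
  --cpus 0.5 \
  nginx:alpine

# docker stats reports usage against the cgroup-enforced limits:
docker stats --no-stream limited

docker rm -f limited
```

If the container's processes exceed the memory quota, the kernel's cgroup OOM killer terminates them, which is the "quota and measurement" role described above.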



