
Example Analysis of the Basic Concepts and Framework of Docker


This article explains the basic concepts and framework of Docker in detail. It is meant to be practical and is shared here as a reference; I hope you get something out of it after reading.

I. A brief introduction to Docker

What is a container?

A virtualization solution

Operating system-level virtualization

Can only run operating systems that share the same (or a similar) kernel as the host

Depends on Linux kernel features: Namespaces and Control Groups (cgroups)

What are the advantages of container technology?

Compared with virtual machines, container technology consumes far fewer resources: a virtual machine has to simulate hardware behavior, so its CPU and memory overhead is relatively large. On a server with the same configuration, container technology therefore has the following advantages:

Takes up fewer resources

Low CPU and memory consumption

So if containers have these advantages, why did they not attract real attention until Docker appeared? One important reason is the complexity of container technology: containers themselves are complex, they rely on many Linux kernel features, and they are not easy to install, manage, and automate. Docker was created to change all of this.

What is Docker?

An open-source engine that automates the deployment of applications into containers

An open-source project implemented in Go, born in early 2013 and originally sponsored by dotCloud

Characteristics of Docker

Provides a simple and lightweight modeling approach: Docker is very easy to get started with, and users can "Dockerize" their projects in just a few minutes (see the sketch after this list).

Logical separation of responsibilities: with Docker, developers only need to care about the programs running inside the container, while operators only need to care about how to manage the container. Docker is designed to strengthen consistency between the environment where developers write code and the production environment where applications are deployed.

Fast and efficient development life cycle: one of Docker's goals is to shorten the cycle from code development through testing to deployment. Applications become portable, are developed inside containers, and are delivered and distributed as containers, so development, testing, and production all use the same environment. This avoids extra debugging and deployment costs and effectively shortens a product's release cycle.

Encourages a service-oriented architecture: Docker recommends that a single container run only one application or process, which leads to a distributed application model in which an application or service is expressed as a series of interconnected containers. This makes it easy to scale out or debug distributed deployments, and it mirrors ideas we often apply in development: high cohesion, low coupling, and a single responsibility. It also avoids the interference that can occur when different services are deployed on the same server, so when a service misbehaves it is easier to locate the problem.
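To give a feel for how quickly a project can be "Dockerized", here is a minimal sketch using the Docker SDK for Python (installed with pip install docker). It assumes a running local Docker daemon; the image tag and Dockerfile contents are placeholders, not part of the original article.

```python
# Minimal sketch: build and run an image with the Docker SDK for Python.
# Assumes a local Docker daemon and `pip install docker`; names are placeholders.
import io
import docker

client = docker.from_env()  # connect to the local Docker daemon

# An in-memory Dockerfile describing a tiny Python application.
dockerfile = io.BytesIO(b"""
FROM python:3.11-slim
CMD ["python", "-c", "print('hello from a container')"]
""")

# Build the image from the in-memory Dockerfile and tag it.
image, _build_logs = client.images.build(
    fileobj=dockerfile, tag="demo/hello:latest", rm=True
)

# Run a container from the freshly built image and print its output.
output = client.containers.run("demo/hello:latest", remove=True)
print(output.decode())
```

A few lines of configuration plus one build and one run call are all it takes to get an application running inside a container.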

Usage scenarios of Docker

1. Develop, test, and deploy services using Docker containers: because Docker itself is very lightweight, developers can build, run, and share Docker containers locally. Containers can be created in the development environment, promoted to testing, and finally moved into production.

2. Create isolated runtime environments: in many enterprise applications, different versions of the same service may serve different users, and with Docker it is very easy to create separate isolated environments to run those different versions.

3. Build test environments: because Docker is lightweight, developers can easily use it to build test environments locally, for example to test a program's compatibility across different environments, and even to stand up cluster deployments for testing.

4. Build a multi-user platform as a service (PaaS) infrastructure.

5. Provides software as a service (SaaS) applications.

6. High-performance, very large-scale host deployment.

II. The basic composition of Docker

Docker consists of the following main parts:

Docker Client

Docker Daemon

Docker Image

Docker Container

Docker Registry

Docker client / daemon

Docker uses a client/server (C/S) architecture: the Docker client sends requests to the Docker server's daemon, and the daemon processes all the requested work and returns the results.

The Docker client can access the server either locally or remotely.
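As a sketch of this client/server split, the Docker SDK for Python talks to the daemon over the same API that the docker command-line client uses. A minimal sketch, assuming pip install docker and a local daemon (a remote daemon would only change the connection address, shown here as a placeholder):

```python
# Minimal sketch: a client talking to the Docker daemon.
# Assumes `pip install docker` and a running local daemon.
import docker

# Connect using the environment (DOCKER_HOST etc.); for a remote daemon this
# could point at e.g. "tcp://<remote-host>:2375" instead (placeholder address).
client = docker.from_env()

print(client.ping())           # True if the daemon answered the request
info = client.version()        # the daemon processes the request and returns the result
print(info["Version"], info["ApiVersion"])
```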

Docker Image

Images are the cornerstone of Docker containers: a container starts and runs based on an image. The image is like the container's source code, storing everything needed to launch the container.

A Docker image is a layered, read-only file system.

Docker images use union mount technology.

A Docker image is a layered, read-only file system. The lowest level is a boot file system (bootfs); the second layer is the root file system (rootfs), which sits on top of bootfs and can be one or more operating systems, such as Ubuntu or CentOS. In Docker, the root file system always stays read-only, and Docker uses union mounting to stack more read-only file systems on top of it. A union mount loads several file systems at once but presents them to the outside as a single file system: all the layers are stacked together, so the final file system contains all of the underlying files and directories. Docker calls such a file system an image.
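To make the layered structure visible, the sketch below pulls an image and lists its layers with the Docker SDK for Python; the image name is only an example, and a running daemon is assumed.

```python
# Minimal sketch: inspect the read-only layers that make up an image.
# Assumes `pip install docker` and a running Docker daemon; image name is an example.
import docker

client = docker.from_env()
image = client.images.pull("ubuntu", tag="22.04")

# Each history entry corresponds to one stacked (union-mounted) layer.
for layer in image.history():
    print(layer.get("Size", 0), layer.get("CreatedBy", "")[:60])
```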

Docker Container

Containers are started from images. The Docker container is Docker's unit of execution, and one or more user processes can run inside it. If the image represents the build and packaging phase of the Docker life cycle, then the container represents the startup and execution phase.

When a container starts, Docker loads a read-write file system on top of the image, that is, a writable layer, and the programs we run in Docker execute inside this layer. The first time Docker starts a container, this initial read-write layer is empty; when the file system changes, the changes are applied to this layer. For example, when a file is modified, it is first copied from the read-only layer below into the read-write layer. The read-only version of the file still exists, but it is hidden by the copy in the read-write layer. This is an important Docker technique: copy-on-write. Each read-only image layer never changes; when you create a new container, Docker builds up such a stack of image layers.
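The writable layer can be observed directly: docker diff (Container.diff() in the Python SDK) lists only what a container has added or changed on top of its read-only image layers. A minimal sketch, with the image name as an example:

```python
# Minimal sketch: show what a container wrote into its top read-write layer.
# Assumes `pip install docker` and a running Docker daemon.
import docker

client = docker.from_env()

# Start a container that modifies the file system, then exits.
container = client.containers.run(
    "ubuntu:22.04",
    ["sh", "-c", "echo changed > /tmp/demo.txt"],
    detach=True,
)
container.wait()

# Only the copy-on-write changes appear here; the image layers stay untouched.
for change in container.diff():
    print(change["Kind"], change["Path"])

container.remove()
```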

Docker Registry

Docker uses registries to store the images that users build. Registries can be public or private; Docker provides a public registry, Docker Hub.
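Pulling from a public registry and pushing to a private one looks like this with the Python SDK; registry.example.com and the repository name below are placeholders for a hypothetical private registry.

```python
# Minimal sketch: pull an image from Docker Hub, retag it, and push it to a
# private registry. Assumes `pip install docker`; the registry address and
# repository name are placeholders.
import docker

client = docker.from_env()

# Pull from the public registry (Docker Hub).
image = client.images.pull("nginx", tag="latest")

# Retag for a (hypothetical) private registry, then push it there.
image.tag("registry.example.com/myteam/nginx", tag="latest")
for line in client.images.push(
    "registry.example.com/myteam/nginx", tag="latest", stream=True, decode=True
):
    print(line)
```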

III. The Linux kernel features that Docker depends on

Docker relies on two important features of the Linux kernel:

Namespaces

Control groups (cgroups)

Namespaces

Many programming languages have the concept of a "namespace". We can think of a namespace as a form of encapsulation, and encapsulation in turn provides code isolation. In an operating system, namespaces provide isolation of system resources, including processes, networks, file systems, and so on.

According to Docker's published documentation, it uses five namespaces (a minimal sketch follows the list):

PID (Process ID): process isolation

NET (Network): manages network interfaces

IPC (InterProcess Communication): manages access to inter-process communication

MNT (Mount): manages mount points

UTS (Unix Timesharing System): isolates hostname and kernel/version identifiers
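The sketch below creates just one of these, a UTS namespace, directly from Python via the unshare(2) system call. It is only an illustration of the namespace idea, not how Docker itself is invoked; it assumes a Linux host and root privileges.

```python
# Minimal sketch: put the current process into a new UTS namespace and give it
# its own hostname, without affecting the rest of the system.
# Assumes Linux and root privileges; illustrative only, not Docker's own code.
import ctypes
import socket

CLONE_NEWUTS = 0x04000000          # flag for a new UTS (hostname) namespace

libc = ctypes.CDLL("libc.so.6", use_errno=True)
if libc.unshare(CLONE_NEWUTS) != 0:
    raise OSError(ctypes.get_errno(), "unshare(CLONE_NEWUTS) failed (root required?)")

socket.sethostname("isolated-demo")  # visible only inside the new namespace
print(socket.gethostname())          # prints "isolated-demo"; the host keeps its name
```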

So how are these isolated resources managed and limited? That is where control groups (cgroups) come in.

Control groups (cgroups)

Control groups are a mechanism provided by the Linux kernel to limit, account for, and isolate the physical resources used by groups of processes. They were originally proposed by Google engineers and were merged into the Linux kernel in 2007, in version 2.6.24. It is fair to say that control groups were made for containers: without them, today's container technology would not exist.

Control groups provide the following features (a minimal cgroup v2 sketch follows the list):

Resource limiting: for example, the memory subsystem can set an upper limit on memory usage for a process group; once the group has reached the limit, any further memory request from it triggers an "out of memory" warning.

Prioritization: it can decide which process groups get a larger share of CPU or disk I/O resources.

Resource accounting: it measures how many system resources a process group uses, which matters a great deal in, for example, billing systems.

Resource control: it can suspend or resume a process group.
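As a minimal illustration of the resource-limiting feature, the sketch below creates a control group through the cgroup v2 file interface and caps its memory. It assumes a Linux host with cgroup v2 mounted at /sys/fs/cgroup, root privileges, and the memory controller enabled for the parent group; the group name "demo" is a placeholder. Docker does the equivalent of this for every container it starts.

```python
# Minimal sketch: limit a process group's memory with the cgroup v2 file interface.
# Assumes Linux, root privileges, and cgroup v2 mounted at /sys/fs/cgroup;
# the cgroup name "demo" is a placeholder.
import os
from pathlib import Path

cg = Path("/sys/fs/cgroup/demo")
cg.mkdir(exist_ok=True)                              # create the control group

(cg / "memory.max").write_text("268435456")          # resource limit: 256 MiB of memory
(cg / "cgroup.procs").write_text(str(os.getpid()))   # move this process into the group

print("memory limit:", (cg / "memory.max").read_text().strip())
```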

The capabilities that namespaces and cgroups bring to Docker

Now that we understand the concepts and functions of namespaces and cgroups, what capabilities do these two features actually give Docker? They are the following (a combined sketch follows the list):

File system isolation: each Docker container can have its own root file system.

Process isolation: each container runs in its own process environment.

Network isolation: containers have their own virtual network interfaces and IP addresses, separated from one another.

Resource isolation and grouping: cgroups allocate resources such as CPU and memory independently to each Docker container.
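Putting this together, the isolation and resource limits show up directly as options when a container is started. A minimal sketch with the Docker SDK for Python, where the image, limits, and network mode are only example values:

```python
# Minimal sketch: start a container with its own filesystem, process space,
# network, and cgroup-enforced resource limits.
# Assumes `pip install docker` and a running Docker daemon; values are examples.
import docker

client = docker.from_env()

container = client.containers.run(
    "ubuntu:22.04",
    ["sleep", "30"],
    detach=True,
    mem_limit="256m",       # cgroup memory limit for this container
    cpu_shares=512,         # relative cgroup CPU weight
    network_mode="bridge",  # own virtual interface and IP on the bridge network
)
print(container.short_id, container.status)
container.remove(force=True)
```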

That concludes this article on the basic concepts and framework of Docker. I hope the content above has been helpful and that you have learned something new; if you found the article useful, please share it so more people can see it.
