
Introduction to Docker Series 1:Docker and Container basic knowledge

2025-01-16 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/03 Report--

1. What is a container?

Talking about Docker means talking about containers, so we need to cover the concept of a container first.

In everyday life, a container is simply a tool for storing and transporting things: items are placed inside it, and the container protects them.

Common containers:

Bottle

Basket

Bowl

Shipping container

Cabinet

2. Virtualization technology

When it comes to containers, we inevitably compare them with virtualization technology, so let's look at virtualization first.

There are two main approaches to implementing virtualization: host virtualization and container (OS-level) virtualization.

Mode 1: Host virtualization

What is virtualized here is the entire hardware platform. Tools such as VMware and VirtualBox virtualize a complete bare-metal machine, on which we can install an OS and software at will.

Host virtualization is divided into two categories:

Type I (the hypervisor runs directly on the hardware)

Type II (the hypervisor runs as an application on a host OS)

A program running in a virtual machine is bound to perform worse than one running on a physical machine, so why run it in a virtual machine at all?

Reuse: for example, two Tomcat instances that both listen on port 8080 can run in separate virtual machines without their sockets colliding.

Isolation: whatever a process in one virtual machine does has no effect on processes in other virtual machines or on the physical machine.
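The port-collision point above can be demonstrated with a short sketch (Python, purely for illustration): within a single OS network namespace, two sockets cannot bind the same address and port.

```python
import socket

def bind_twice(host="127.0.0.1"):
    """Try to bind two sockets to the same host:port, as two Tomcats both
    listening on 8080 in one OS would. Returns True if the second bind
    collides."""
    s1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s2 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s1.bind((host, 0))          # let the OS pick a free port
        port = s1.getsockname()[1]
        s2.bind((host, port))       # same namespace, same port -> collision
        return False
    except OSError:                 # "Address already in use"
        return True
    finally:
        s1.close()
        s2.close()

print(bind_twice())  # -> True: the second bind fails
```

In separate virtual machines (or separate network namespaces) each process gets its own port space, so both binds would succeed.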

So how can we achieve isolation without compromising performance?

After a host boots, memory is divided into kernel space and user space, and user processes run in user space.

By default, all user processes share the same user space. If we want to isolate the running environments of multiple processes, we can create multiple user spaces that are isolated from one another.

Each such independent user space is what we call a Linux container.

The purpose of using containers is to create an isolated environment, which should include at least the following levels of isolation:

UTS: hostname and domain name

Mount: an independent filesystem view (the mount tree)

IPC: inter-process communication must be isolated; if it were not, processes in different containers could still talk to each other, and there would be no real isolation.

PID: an isolated process ID space is also necessary

User and group: the container also needs its own users, such as root, but this root must not be the real root; a real root user would have permission to delete content in other containers.

Network: isolating the network is the most important, because a container works as a separate unit, so each container needs its own network card, IP address, and TCP/IP protocol stack.

The kernel concept behind all of this is the namespace.

The Linux kernel natively supports six namespaces, and when a container is built, a set of these namespaces is carved out to form it.

Namespaces and the kernel versions that introduced them: Mount (2.4.19), UTS (2.6.19), IPC (2.6.19), PID (2.6.24), Network (2.6.29), User (3.8).

As you can see, CentOS 6 (kernel 2.6.32) is not suitable if you want to make full use of container technology, since its kernel predates user namespaces.
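On a Linux host you can inspect which namespaces a process belongs to under /proc/&lt;pid&gt;/ns. A minimal sketch (assumes a Linux-style /proc; returns an empty list elsewhere):

```python
import os

def list_namespaces(pid="self"):
    """Return the namespace names (uts, ipc, mnt, pid, net, user, ...)
    the given process belongs to, read from /proc/<pid>/ns."""
    ns_dir = f"/proc/{pid}/ns"
    if not os.path.isdir(ns_dir):   # non-Linux fallback
        return []
    return sorted(os.listdir(ns_dir))

print(list_namespaces())  # on Linux, e.g. ['ipc', 'mnt', 'net', 'pid', ...]
```

Newer kernels list more entries than the six discussed here (cgroup, time, and so on), which is why the code simply reports whatever the kernel exposes.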

3. LXC

LXC stands for LinuX Containers, a template-based solution for putting container technology into practice. LXC includes a set of tools, for example:

lxc-create: a command to quickly create containers

4. What is Docker?

In fact, Docker is a second-level packaging of LXC. It uses LXC as its container engine and adds image technology: the files needed to run a given type of container are extracted and packed into an image.

When creating a container, you just make as many copies of that image as you need and start containers from them, which is both reliable and fast.

When you create a container with Docker, it actually uses lxc-create underneath.

Docker greatly reduces the difficulty of using containers.

The server used to store images is called the Docker registry (warehouse), and it holds images for almost any application you can think of.

Docker takes a deliberately minimal design: each container runs only one main process.

For example, one container runs only nginx; if you also want Apache, you download another container image, and nginx and Apache then talk to each other through inter-container communication.

With each process running in its own container, and each container being an isolated environment, a problem in one process does not affect the others.

Another benefit of Docker: it truly achieves "write once, run anywhere."

Today's build environments run many versions in parallel, for example CentOS 5, 6, and 7 at the same time, alongside Windows, Ubuntu, and other systems. To develop a program that runs on all of these platforms, you often need several teams building versions for the different systems.

With Docker, you develop one version and package the software as a Docker image. You can then put that image on any platform: as long as the platform runs Docker, the image runs, and so does the program. This greatly reduces the difficulty of software development.

Let's talk about how Docker images are built.

Docker images are built in a special way, known as layered build with union mount.

Take the construction of an nginx image as an example:

First build the lowest layer: a pure system, such as a minimized CentOS 6 system

Installing nginx on top of this CentOS layer produces another layer; the two together constitute an image.

Note that the newly built layer contains only nginx itself, not the CentOS operating system content.

The image therefore contains two layers, which together make up nginx running on CentOS.

When you start the container, both layers are mounted together and used as one filesystem; that is layered build with union mount.

If you need to start several images, say nginx, tomcat, and apache, and they are all based on CentOS, you only need to download the CentOS base layer once, plus each application layer separately.

Why can multiple upper-layer applications share the same underlying system?

Because image layers such as the CentOS base are read-only.

When a user performs a write inside a running container, the underlying layers are read-only and cannot be modified.

So at that moment the affected file is copied up from the lower layer, and the modification is made in that copy in the container's writable layer. This mechanism is called copy-on-write.
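The layered read path and copy-on-write behavior can be sketched as a toy model (the class and file names here are hypothetical, not Docker's actual storage-driver code):

```python
class ToyContainer:
    """A container's filesystem view: shared read-only image layers
    (bottom to top) plus one private writable layer on top."""

    def __init__(self, *image_layers):
        self.image_layers = list(image_layers)  # shared, read-only
        self.writable = {}                      # per-container layer

    def read(self, path):
        # Union mount: the topmost layer that has the file wins.
        if path in self.writable:
            return self.writable[path]
        for layer in reversed(self.image_layers):
            if path in layer:
                return layer[path]
        raise FileNotFoundError(path)

    def write(self, path, data):
        # Copy-on-write: lower layers are never touched; the change
        # lands only in this container's writable layer.
        self.writable[path] = data

# One CentOS base layer shared by two containers of the same image:
centos = {"/etc/os-release": "centos"}
nginx_layer = {"/usr/sbin/nginx": "nginx-bin"}
c1 = ToyContainer(centos, nginx_layer)
c2 = ToyContainer(centos, nginx_layer)
c1.write("/etc/os-release", "patched")
print(c1.read("/etc/os-release"))  # 'patched' (from c1's writable layer)
print(c2.read("/etc/os-release"))  # 'centos'  (base layer untouched)
```

Both containers reference the same base-layer dictionary, which is the dedup benefit described above: the base is stored once no matter how many images or containers sit on top of it.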

About container orchestration tools

For example, suppose we have 100 hosts that can run Docker. When we need to start a container, we just ask the orchestration tool; it picks a host from the pool according to its scheduling algorithm and starts the container there.

For example, to run an AMP (Apache + MySQL + PHP) environment, the three programs are three containers whose startup order matters, so the orchestration tool must be able to start them in a defined order.

There are many orchestration tools:

The first: Docker's own toolchain, which is actually a combination of three tools: Machine + Swarm + Compose

The second: from the ASF ecosystem, Mesos + Marathon

The third: Google's Kubernetes, K8s for short (there are eight letters between the "k" and the "s")
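With Docker's own Compose tool, the startup-order requirement from the AMP example above can be declared rather than scripted. A minimal sketch of a compose file; the service names and image tags are illustrative:

```yaml
# docker-compose.yml: an Apache + MySQL + PHP stack where startup
# order is expressed with depends_on
services:
  db:
    image: mysql:8.0
  php:
    image: php:8-fpm
    depends_on:
      - db      # php starts only after db has been started
  web:
    image: httpd:2.4
    depends_on:
      - php     # web starts last
```

Note that depends_on only orders container startup; it does not wait for the database to actually be ready to accept connections.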

5. Comparison with KVM virtualization

KVM: based on hardware virtualization technology; it needs CPU support to run virtual machines, and the virtual machine manager consumes extra system resources. Even with no virtual machine running, it takes roughly 6% of system resources.

Docker: based on kernel-level isolation rather than hardware virtualization; nothing is actually virtualized, so there is almost no additional overhead on the system.

6. Comparison with OpenStack virtualization

Strictly speaking, Docker should not be used as a virtual machine substitute (although it is possible).

7. Docker architecture

The whole architecture is divided into three parts:

1: the client (docker client)

2: the server side (docker host)

3: the registry

Communication between parts is based on http or https.

The Docker host

The server side runs in daemon mode by running docker daemon; the daemon listens on a socket, and Docker supports three kinds of sockets:

IPv4 socket

IPv6 socket

Unix socket: that is, listening on a local file
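Which sockets the daemon listens on is configurable. As a sketch, in /etc/docker/daemon.json the conventional unix socket and the conventional unencrypted TCP port 2375 could both be enabled like this (exposing the TCP socket without TLS is unsafe outside a lab):

```json
{
  "hosts": [
    "unix:///var/run/docker.sock",
    "tcp://0.0.0.0:2375"
  ]
}
```

The unix socket serves local clients (a local file, as described above), while the TCP socket lets remote clients reach the daemon over the network.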

The Docker registry

First of all, the Docker registry provides storage for Docker images, and it also authenticates users when they log in to download images.

A registry additionally contains repositories. A repository is like a directory that stores the images of just one program: for nginx images, for example, there is a repository named nginx, and all nginx images live in it.

Because a repository holds multiple images, you need a tag (label) to identify one uniquely, for example version numbers such as 1.9, 1.11, or 1.23; the combination of repository name and tag uniquely identifies an image.
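The repository-plus-tag naming can be illustrated with a tiny parser (a deliberate simplification of Docker's real reference grammar, ignoring registry hostnames; the default tag really is latest):

```python
def parse_image_ref(ref):
    """Split a simple image reference like 'nginx:1.23' into
    (repository, tag). If no tag is given, Docker assumes 'latest'."""
    repo, sep, tag = ref.partition(":")
    return (repo, tag if sep else "latest")

print(parse_image_ref("nginx:1.23"))  # ('nginx', '1.23')
print(parse_image_ref("nginx"))       # ('nginx', 'latest')
```

Real references can also carry a registry host and port (e.g. a private registry on port 5000), which this sketch deliberately leaves out.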

Docker officially provides a public registry (Docker Hub), third parties provide registries too, and you can also run your own.

8. Application scenarios of Docker

1. Simplify configuration

A working setup includes production, test, and development environments. The test environment is divided into functional testing and performance testing; the production environment is divided into a pre-release environment and the production environment proper.

With so many different environments, their configurations differ too, and configuration drift can cause failures in production. Docker simplifies configuration: build one image and use that same image for deployment and release.

2. Code pipeline management

After development is finished, the developer pushes the code to a server; testers then pull the code from that server to test it. Once testing is complete, the release begins: first a grayscale (canary) release, then the official release.

3. Development efficiency

Setting up all the environments for a new employee is usually troublesome; with Docker, the whole environment can simply be delivered as containers.

4. Application isolation

Applications are isolated from each other.

5. Server consolidation

In other words, a server can run multiple container instances.

6. Debugging ability

Makes bugs easier to reproduce and handle

7. Multi-tenant

8. Rapid deployment

Docker starts at second-level speed, i.e. extremely fast.

For example, as mentioned before, WeChat's Spring Festival Gala red-envelope system is said to have been able to start 1,000 Docker instances in one second.

Reasons for large and medium-sized companies to choose docker

Technical reserve

Keeping up with the trend and improving skills

Matching current business requirements

(at present it is usually the second reason, hardly ever the first or the third)


© 2024 shulou.com SLNews company. All rights reserved.
