
[docker] 01: A brief introduction to Docker



Docker official website: http://www.docker.com

GitHub source code: https://github.com/docker/docker (now https://github.com/moby/moby)

1. What is Docker?

Docker is an open-source application container engine written in the Go language and released under the Apache 2.0 license.

Docker lets developers package an application together with its dependencies into a lightweight, portable container and publish it to any popular Linux machine. Containers are fully sandboxed and have no interfaces to one another (much like apps on an iPhone). More importantly, the performance overhead of a container is extremely low.

Key feature: "build once, run anywhere", which captures Docker's cross-platform nature and strong portability.

Docker is an open-source project born in early 2013 as an internal side project at dotCloud. It is written in Go, a language developed by Google. The project later joined the Linux Foundation, is licensed under Apache 2.0, and its code is maintained on GitHub.

Docker builds on Linux container technology (LXC) and wraps it in a higher level of encapsulation, so users no longer need to deal with container management details and operation becomes much simpler. Users can manipulate Docker containers as easily as a fast, lightweight virtual machine.

A Docker container differs from a traditional virtual machine: the container is virtualized at the operating-system level and directly reuses the host's operating system, whereas a traditional virtual machine is virtualized at the hardware level.

2. Why use Docker?

2.1 More efficient than virtual machines

As described above, a container reuses the host operating system and packages only the software environment the application needs to run (in this respect it is comparable to an RPM package), so running software in a container consumes almost the same resources as running it directly on the host. Unlike a virtual machine, no extra memory or CPU is needed just to keep a guest operating system running.

2.2 Rapid delivery and deployment

What developers and operators (DevOps) want most is to create or configure an environment once and have it run the same way everywhere, with the guarantee that every environment is identical, so that problems never arise from differences between development and production.

A Docker container starts in seconds, so it can be spun up and shut down quickly at any time.
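
Since start-up is that fast, even a throwaway test environment costs only a few commands. A minimal sketch (the nginx image and the port mapping are arbitrary choices for illustration):

docker run -d --name web -p 8080:80 nginx   # start an nginx container in the background
docker ps                                    # seconds later it is already up and serving
docker stop web && docker rm web             # and it can be torn down just as quickly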

2.3 Easy migration and scaling

Docker images can be moved to any environment without compatibility problems, and the migration process itself is simple and convenient.
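
As a hedged illustration of such a migration (the image name and archive path are placeholders), an image can be exported on one host and imported unchanged on another, without going through a registry:

docker save -o myapp.tar nginx:latest    # on host A: export the image to a tar archive
# copy myapp.tar to host B (for example with scp), then on host B:
docker load -i myapp.tar                 # the identical image is now available on host B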

2.4 Easier management

With Docker, a small change can replace what used to be a large amount of update work. All changes are distributed and applied incrementally, which makes management automated and efficient.

2.5 Docker compared with traditional virtual machines

3. Docker architecture

3.1 The overall structure of Docker

Docker uses a client-server architecture: the Docker client talks to the Docker daemon over a remote API, and the daemon manages and creates Docker containers.

Docker containers are created from Docker images.

The relationship between a container and an image is similar to that between an object and a class in object-oriented programming:

Docker | Object-oriented programming
Container | Object
Image | Class

Each component of the Docker architecture is introduced below:

Docker image (Image)

The Docker image is a template for creating a Docker container.

Docker container (Container)

A container is an application or group of applications that run independently.

Docker client (Client)

The Docker client uses the Docker API (https://docs.docker.com/reference/api/docker_remote_api) to communicate with the Docker daemon, via the command line or other tools.

Docker host (Host)

A physical or virtual machine that runs the Docker daemon and containers.

Docker registry (Registry)

A Docker registry stores images; it can be thought of as the counterpart of a code repository in version control.

Docker Hub (https://hub.docker.com) provides a large collection of images for use.

Docker Machine

Docker Machine is a command-line tool that simplifies Docker installation. With a single command it can install Docker on the corresponding platform, such as VirtualBox, DigitalOcean, or Microsoft Azure.
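
A quick way to see the client-server split described above (a minimal sketch; the exact output varies by version):

docker version   # prints the client version and, separately, the daemon (server) version it talks to
docker info      # asks the daemon for details about its containers, images, and storage driver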

4. Basic concepts

Docker includes three basic concepts.

Image (Image)

Container (Container)

Repository (Repository)

With an understanding of these three concepts, you understand the entire life cycle of Docker.

4.1 Image

The Docker image is a read-only template.

For example, an image may contain a complete CentOS operating-system environment with nothing installed beyond httpd or whatever other application the user needs.

Images can be used to create Docker containers.

Docker provides a simple mechanism for creating new images or updating existing ones, and users can also download images built by others and use them directly.
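
For example (a minimal sketch; centos:7 is just one image you might choose), downloading and listing images takes two commands:

docker pull centos:7   # download an existing image from the default registry
docker images          # list the images now stored locally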

Docker image

As we know, an operating system is divided into a kernel and user space. On Linux, after the kernel boots, the root file system is mounted to provide user-space support. A Docker image (Image) is essentially such a root file system; for example, the official ubuntu:14.04 image contains a complete root file system of a minimal Ubuntu 14.04 installation.

A Docker image is a special file system: besides the programs, libraries, resources, and configuration files the container needs, it also contains configuration parameters prepared for runtime (such as anonymous volumes, environment variables, and users). An image contains no dynamic data, and its content does not change after it is built.

Layered storage

Because an image contains a complete root file system of an operating system, it is often quite large, so Docker was designed to make full use of union file system (UnionFS) technology and adopts a layered storage architecture. Strictly speaking, an image is therefore not a single packaged file like an ISO; it is a virtual concept whose actual form is not one file but a group of file systems, or more precisely, a union of multiple layers of file systems.

An image is built layer by layer, with each layer building on the one before it. Once built, a layer never changes; any change made in a later layer happens only in that layer. For example, deleting a file from an earlier layer does not actually remove it from that layer; it only marks the file as deleted in the current layer. The file is invisible when the final container runs, but it still travels with the image. Therefore, when building an image, take care that each layer contains only what that layer really needs to add, and clean up anything extra before the layer is finished.

Layered storage also makes reusing and customizing images easier. You can even use a previously built image as a base layer and add new layers on top of it to customize what you need and build a new image.
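
A minimal sketch of these two points (the ubuntu:16.04 base image, the nginx package, and the my-nginx tag are illustrative choices): each Dockerfile instruction produces one layer, so temporary files must be cleaned up within the same layer, otherwise they are only masked by a later layer and still ship with the image.

cat > Dockerfile <<'EOF'
FROM ubuntu:16.04
# install and clean the apt cache in a single RUN, i.e. a single layer
RUN apt-get update && \
    apt-get install -y --no-install-recommends nginx && \
    rm -rf /var/lib/apt/lists/*
EOF

docker build -t my-nginx:clean .   # build the layered image
docker history my-nginx:clean      # inspect the layers it is made of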

Image construction will be further explained in subsequent chapters.

4.2 Container

Docker uses containers to run applications.

A container is a running instance created from an image. It can be started, stopped, and deleted. Each container is an isolated, secure platform.

Think of a container as a simplified Linux environment (with its own root privileges, process space, user space, network space, and so on) together with the applications running inside it.

* Note: the image is read-only; when a container starts, a writable layer is created as its top layer.

Each image may depend on one or more lower-level images; the lower image is then called the parent image of the one above it.

Base image

An image with no parent image is called a base image.

Image ID

Every image is identified by a 64-character hexadecimal string, which encodes a 256-bit value. To simplify use, the first 12 characters form a short ID that can be used on the command line. Short IDs still have some probability of collision, so the server always returns the full long ID.
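
Both forms can be seen from the command line (a minimal sketch):

docker images              # the IMAGE ID column shows the 12-character short IDs
docker images --no-trunc   # shows the full, untruncated IDs instead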

Docker container

The relationship between an image and a container is like that between a class and an instance in object-oriented programming.

An image is a static definition, while a container is the running entity created from that image. Containers can be created, started, stopped, deleted, paused, and so on.

A container is essentially a process, but unlike a process executed directly on the host, a containerized process runs in its own isolated namespaces. A container can therefore have its own root file system, its own network configuration, its own process space, and even its own user ID space. Processes inside a container run in an isolated environment and behave as if they were operating on a system separate from the host. This property makes applications encapsulated in containers safer than those running directly on the host, and it is also why people new to Docker often confuse containers with virtual machines.
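
A small sketch of that isolation (alpine is just a convenient tiny image): inside the container only its own processes and hostname are visible, not the host's.

docker run --rm alpine ps         # lists only the container's processes; the command itself runs as PID 1
docker run --rm alpine hostname   # the container has its own hostname, separate from the host's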

As mentioned above, images use layered storage, and so do containers. When a container runs, it uses the image's layers as its base and creates a storage layer of its own on top. This layer, prepared for the container's reads and writes at runtime, is called the container storage layer.

The container storage layer lives and dies with the container: when the container is deleted, its storage layer is deleted too, and any information saved only in that layer is lost.

Docker best practices therefore require that a container should not write data into its storage layer, which should remain stateless. All file writes should go to data volumes (Volume) or to bind-mounted host directories; reads and writes at these locations bypass the container storage layer and go directly to the host (or to network storage), giving better performance and stability.

The life cycle of a data volume is independent of the container: the container may die, but the data volume does not. With data volumes, containers can be deleted and re-created at will without losing data.
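
A minimal sketch (the volume and path names are arbitrary): data written to a volume survives the container that wrote it.

docker volume create app-data                                   # create a named data volume
docker run --rm -v app-data:/data alpine sh -c 'echo hello > /data/greeting'
docker run --rm -v app-data:/data alpine cat /data/greeting     # prints "hello": the data outlived the first container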

4.3 Repository

A repository is a place where image files are stored centrally. Repositories are sometimes confused with registries (registry servers), and the two terms are not always strictly distinguished. In practice, a registry usually hosts multiple repositories, and each repository contains multiple images, each with a different tag.

Repositories are divided into public (Public) and private (Private) ones.

The largest public registry is Docker Hub, which stores a huge number of images for users to download and serves as the default Docker registry, although download speeds from it are very slow inside China. Users can of course also set up a private registry inside their local network. After building your own image, you can use the push command to upload it to a public or private repository; the next time you need the image on another machine, you simply pull it from the repository.
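
A hedged sketch of that workflow (myimage and myuser are placeholder names for your image and your Docker Hub account):

docker tag myimage:1.0 myuser/myimage:1.0   # retag the image under your repository name
docker login                                # authenticate against Docker Hub
docker push myuser/myimage:1.0              # upload the image
# later, on another machine:
docker pull myuser/myimage:1.0              # fetch the same image back down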

* Note: the concept of a Docker repository is similar to Git's, and a registry can be understood as a hosting service such as GitHub.

Docker Registry (registry server)

Once an image is built, it runs easily on the current host, but to use the same image on other servers we need a centralized service to store and distribute images. Docker Registry is such a service.

A Docker Registry can contain multiple repositories (Repository); each repository can contain multiple tags (Tag); and each tag corresponds to one image.

A repository usually contains images of different versions of the same software, with tags used to identify those versions. We specify a particular image with the format <repository>:<tag>; if no tag is given, latest is used as the default tag.

Take the Ubuntu image as an example: ubuntu is the repository name, and it contains tags for different versions, such as 14.04 and 16.04. We specify the version we want with ubuntu:14.04 or ubuntu:16.04; if the tag is omitted, as in plain ubuntu, it is treated as ubuntu:latest.
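
For example:

docker pull ubuntu:16.04   # pull a specific version by tag
docker pull ubuntu         # no tag given, so this is equivalent to ubuntu:latest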

Repository names often take a two-part path form, such as jwilder/nginx-proxy, where the first part is usually a user name in a multi-user Docker Registry and the second is usually the software name. This is not an absolute rule, though; it depends on the specific Docker Registry software or service in use.

Docker Registry public service

A Docker Registry public service is a registry that is open to users and lets them manage images. Such public services generally allow users to upload and download public images for free and may offer paid plans for managing private images.

The most commonly used public registry is the official Docker Hub, which is also the default registry and hosts a large number of high-quality official images. In addition, there is CoreOS's Quay.io, where CoreOS-related images live, and Google's Google Container Registry, which the Kubernetes images use.

For various reasons, these services may be slow to access from within China. Some domestic cloud providers therefore offer Docker Hub mirror services (Registry Mirror), commonly called accelerators; common ones include those from Aliyun, DaoCloud, Alauda, and others. With an accelerator, Docker Hub images are downloaded from a domestic address, which is much faster than pulling directly from the official site. How to configure an accelerator is explained further in later chapters.
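
A minimal sketch of such a configuration (the mirror URL is a placeholder you would replace with your accelerator's address; a systemd-managed Linux host is assumed):

sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://your-mirror.example.com"]
}
EOF
sudo systemctl restart docker   # reload the daemon so the mirror takes effect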

Some cloud providers in China also offer public services similar to Docker Hub, such as the Speed Cloud image repository, the NetEase Cloud image service, the DaoCloud image market, and the Aliyun image library.

Private Docker Registry

Besides using public services, users can also run a private Docker Registry locally. Docker officially provides a Docker Registry image that can be used directly as a private registry service; later chapters cover building private registry services in more detail.
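
A minimal sketch (the port, container name, and image tags are illustrative): run the official registry image, then tag and push against it.

docker run -d -p 5000:5000 --restart=always --name registry registry:2   # start a local private registry
docker tag ubuntu:16.04 localhost:5000/my-ubuntu                          # retag an image for that registry
docker push localhost:5000/my-ubuntu                                      # push it to the private registry
docker pull localhost:5000/my-ubuntu                                      # and pull it back from there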

The open-source Docker Registry image implements only the server side of the Docker Registry API, which is enough to support the docker command and does not affect normal use. It does not, however, include a graphical interface or advanced features such as image maintenance, user management, and access control; those are provided in Docker's commercial Docker Trusted Registry.

Besides the official Docker Registry, there is third-party software that implements the Docker Registry API and even provides a user interface and some advanced features, for example VMware Harbor and Sonatype Nexus.

5. The riddle of Docker's renaming to Moby

This week, Docker suddenly became Moby, and the renaming left many people confused. Docker decided to separate Docker the commercial product from Docker the open-source project: at DockerCon the company launched the Moby Project, which is effectively a new name for the Docker open-source project (roughly the equivalent of Red Hat's Fedora project).

Moby also plays another role: it serves as a base from which customized container software can be assembled for specific infrastructure.

But, as the saying goes, three feet of ice does not freeze in a single day; this did not happen overnight.

From the moment the Docker project attracted wide attention, many people pointed out that "Docker" was at once the name of the company, the name of the open-source project, and soon the name of the company's commercial product, and that this was itself a serious hidden risk. Docker, however, took no corrective measures; instead it focused on watching for, and curbing, any third-party attempt to misuse the Docker name.

During the two years of the Docker project's rapid growth, this grey zone that Docker created, intentionally or not, used the single keyword "Docker" to merge the huge audience and user base carefully cultivated by the open-source community with the mindshare and target customers of the company's future commercial products. Then, at a point when the "Docker to Moby" move could have been used to correct that mistake, Docker instead completed the renaming of the Docker project without any warning. "There is no Docker left in the community" became the final sentiment around this extremely successful project, and that is the main reason developers on GitHub and Hacker News felt offended.

Most container users do not actually need to worry much about Docker versus Moby. The free Docker CE will continue to exist, the Moby open-source community will remain active, and modularization will make hacking on it easier, so why not?

The one-sentence version (you can skip the rest if you like):

Docker renamed the original Docker project to Moby so that the huge fan base and Google search footprint built up over the previous years would transfer to Docker's commercial products.

It should be noted that:

Docker's commercial products include Docker EE, the paid enterprise edition, and Docker CE, the free community edition.

In other words, what you will use from now on (including what is already installed on your machine) is a Docker product (note: a product, not a project) called Docker CE (with version names like Docker 17.XX). Docker will of course spare no effort to encourage users to upgrade to the paid edition after trying it, which is perfectly normal.

5.1 About Moby

Moby will exist as an open-source organization (a GitHub org).

Docker CE is a product that will be built and compiled from the Moby project and the other projects under the Moby organization.

The projects under the Moby organization are maintained jointly by community developers. For participants in the Moby community, this means the way you will work from now on is: contribute to the projects under Moby, then use Docker's Docker CE product.

You should also understand that there is no open source project called Docker CE.

Because Docker CE is a product, you must download and use it from the official website of Docker.

5.2 What are users actually complaining about?

The split-off Moby projects are likely to suffer in terms of Docker's investment in them, how openly new features are developed, and how active their developers remain.

In fact, a typical technology company would simply keep maintaining its own open-source project and sell an enterprise edition and enterprise services alongside it.

There are plenty of examples; almost every successful open-source project follows this pattern.

Only Docker went so far as to rename the Docker open-source project outright, or, to put it bluntly, erase it. From this day on you will not find an open-source project called Docker; everything Google returns for "Docker" points to the company's two products, and the huge fan base of the original Docker project has directly become Docker's customer base.

This is why Solomon's repeated assurance that "existing Docker users are not affected" failed to convince many people. The problem is not which project gets renamed or whether some dependency can still be used; the real problem is that users of the original Docker open-source project were genuinely played. That is unprecedented (I cannot think of a similar example from the past 20 years).

5.3 Why did Docker do this?

In the past 20 years there have been countless successful open-source projects, but very few successful commercial companies behind them. Strictly speaking, only Red Hat, a company that controls the stack all the way down to the operating system, barely counts as a success. For everything else, the higher up the stack a project sits, the harder it is to make a profit, because the harder it is to retain users. In fact, for most open-source projects the commercial company behind them does well just to keep the project alive; profit is out of the question. That is why, after all these years, the industry is still debating "how to make money from open source". In a word: hard.

Just look around this circle: countless open-source companies out of Berkeley, Google, and various hot startups have ended up flat on the ground. What profit prospects does Docker have, a company that controls no core technology and conquered the world with a project whose real edge is its UI/UX?

Docker cannot possibly have failed to see the problem. Remember, it started out as a PaaS company (dotCloud), and it is in no mood now to dwell on the ideals of the open-source world. From the first day the Docker project succeeded, the company set out to become the next VMware; otherwise there is no reason it would have turned down a four-billion-dollar acquisition offer from Microsoft.

It wants to sell the product, but where are the users? The 40,000-plus stars of the original Docker project beckon.

Is it really that urgent? There have been rumors in the Bay Area that Docker's investors have set strict profitability targets, which does not sound far-fetched. A unicorn built on pure back-end technology does face this dilemma; after all, the goal is a NASDAQ listing.

5.4 About the future of Docker

There is no doubt that Docker's future is bright; a new VMware is about to emerge. The important thing is that this new VMware is built on a new, open-source-based business model, one that actually resembles something we know well from this new era: the fan economy.

Some people ask: haven't developers been offended a lot?

Silly child. The bosses who are actually willing to pay Docker have no time for Hacker News or GitHub.

"Docker? Yeah, I've heard of it, and it seems to be quite popular. Xiao Liu, let's do one, too! "

Docker, determined to become the next VMware, has no time to care about domestic vendors selling "self-developed Docker enterprise editions". "Docker native"? However native you are, can you be more native than Docker EE? And the price is not necessarily in anyone's favor. Of course, thanks to the information barrier in China, everyone can still wave the Docker banner and bask in the glow of Docker's enormous fan base.

Only Alibaba Cloud, the world's third-largest cloud, was willing to act as Docker's agent and resell Docker EE, and for its trouble its news at DockerCon was blanked out by the domestic live-streamers (just kidding).

For participants in the open-source community, though, all that is left is a wry smile. For a project as big as Docker to end up like this is quite something, and it is no surprise that countless people have posted "RIP". How active the Moby community will remain is genuinely an open question.
