2025-01-17 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)05/31 Report--
This article introduces what Docker is and how to install and use it. Many people run into these situations in real-world cases, so let the editor walk you through how to deal with them. I hope you read carefully and come away with something useful!
Introduction to Docker
Docker provides an envelope, or container, in which your application can run. It began as a side project at dotCloud and was open-sourced some time ago; it attracted so much attention and discussion that dotCloud eventually renamed itself Docker Inc. Written in the Go language, Docker started as a layer on top of LXC (LinuX Containers), allowing developers to work with higher-level concepts.
Docker extends Linux Containers (LXC) with a high-level API that provides a lightweight virtual environment for processes. It builds on LXC, cgroups, and the Linux kernel itself. Unlike a traditional virtual machine, a Docker container does not contain a separate operating system; it runs on the functionality provided by the host operating system on the existing infrastructure.
Docker is similar in concept to a virtual machine, but differs from virtualization technology in the following ways:
Virtualization works at the hardware level, relying on physical CPU and memory, while docker is built on the operating system and uses the OS's containerization features, so docker can even run inside a virtual machine. A virtualized system is generally a full operating-system image, complex enough to be called a "system", whereas docker is open source and lightweight, and its unit is called a "container". A single container is suited to deploying a small number of applications, such as one redis or one memcached instance.
Traditional virtualization uses snapshots to save state, while docker not only saves state more easily and cheaply, but also introduces a mechanism similar to source-code control that records the container's snapshot history version by version, with very low cost to switch between them.
Building a system with traditional virtualization is complex and labor-intensive, while docker can build an entire container from a Dockerfile, and rebuilds and restarts are very fast. More importantly, a Dockerfile can be written by hand, so application developers can describe the system environment and dependencies by shipping a Dockerfile, which is good for continuous delivery.
A Dockerfile can create a new container image based on an already built image, and Dockerfiles can be shared and downloaded by the community, which helps the technology spread.
Docker works like a portable container engine: it packages the application and all of its dependencies into a virtual container that can run on any kind of Linux server. This greatly improves flexibility and portability, whether or not a license is required, in public or private clouds, in bare-metal environments, and so on.
Docker is also a cloud-computing platform. It uses Linux's LXC, AUFS, the Go language, and cgroups to achieve resource isolation, easily isolating files, resources, networks, and so on. Its ultimate goal is application isolation similar to a PaaS platform.
Docker consists of the following:
The Docker server daemon, which manages all containers.
The Docker command-line client, which controls the server daemon.
Docker images: the templates from which containers are created; they can be found and browsed in an image registry.
Docker characteristics
Package once, run anywhere: this well illustrates Docker's cross-platform support and strong portability.
Comparison of characteristics between Docker container and traditional virtual machine technology
Characteristic: Docker container vs. traditional virtual machine
Boot speed: seconds vs. minutes
Disk usage: generally MB vs. generally GB
Performance: close to native vs. noticeably weaker
Capacity per machine: thousands of containers vs. generally dozens of VMs
The difference between Docker container and traditional virtual machine technology
Docker containers are virtualized at the operating system level, directly reusing native operating systems, so they are more lightweight and more efficient in terms of performance.
Three basic Concepts of Docker
Docker includes three basic concepts.
Image
Container
Repository
With an understanding of these three concepts, you understand the entire life cycle of Docker.
Docker images
A Docker image is a read-only template. For example, an image can contain a complete ubuntu operating-system environment with only Apache, or whatever other applications the user needs, installed. Images are used to create Docker containers. Docker provides a simple mechanism for creating new images or updating existing ones, and users can even download someone else's image and use it directly.
Docker containers
Docker uses containers to run applications. A container is a running instance created from an image. It can be started, stopped, and deleted. Each container is an isolated, secure platform. Think of a container as a stripped-down Linux environment (with root privileges, a process space, a user space, a network space, and so on) plus the applications running in it.
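The lifecycle above (pull an image, create a container, stop it, delete it) can be sketched in shell. This is a hedged demo: the ubuntu image name is only an illustration, and the commands skip themselves when no docker daemon is reachable.

```shell
# Sketch of the image -> container lifecycle described above.
# Assumptions: the "ubuntu" image name is illustrative; the docker commands
# only run when a docker daemon is actually reachable on this machine.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  docker pull ubuntu                       # image: the read-only template
  cid=$(docker run -d ubuntu sleep 30)     # container: a running, isolated instance
  docker stop "$cid"                       # stop the instance
  docker rm "$cid"                         # delete it; the image itself remains
  status=demo-ran
else
  status=demo-skipped                      # no daemon here; the commands above still show the flow
fi
echo "$status"
```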
Docker repositories
Docker repositories (Repository) are similar to code repositories (think of svn, git, or maven) and are where Docker centrally stores image files.
Depending on whether the stored images are publicly shared, Docker repositories are divided into:
1. Public repositories
2. Private repositories
As the name implies, public repositories store publicly available images. The largest public repository today is Docker Hub (registry.hub.docker.com), where a large number of images are available for download. Domestic public repositories include Aliyun's (acs-public-mirror.oss-cn-hangzhou.aliyuncs.com). A private repository is not open to the public and is used internally: users can build one inside their organization to share images internally, conveniently and quickly distributing image files for their specific environments.
Functional features of Docker
Although Docker provides many features, only a few major features are listed here, as follows:
Easy and quick configuration
Improved productivity
Application isolation
Swarm clustering
Routing mesh
Services
Security management
Easy and quick configuration
This is a major feature of Docker and can help us configure the system easily and quickly.
Code can be deployed in less time and with less effort, because Docker runs in a wide variety of environments; the infrastructure no longer needs to be coupled to the application's environment.
Improved productivity
Docker simplifies technical configuration and speeds up application deployment, which undoubtedly saves time and increases productivity. It not only helps run applications in isolated environments but also reduces resource usage.
Application isolation
Docker provides containers for running applications in an isolated environment. Each container is independent of the others and can run any type of application.
Swarm clustering
Swarm is a clustering and scheduling tool for Docker containers. It uses the Docker API as its front end, which lets us control it with a variety of tools, and it can manage a cluster of Docker hosts as a single virtual host. It is a self-organizing engine cluster that enables pluggable backends.
Routing mesh
The routing mesh routes incoming requests on published ports to an active container on an available node. Connectivity is maintained even if no task is running on the node that receives the request.
Services
A service is a list of tasks that lets you specify the desired state of containers within a cluster. Each task represents one instance of a container that should be running, and Swarm schedules these tasks across the nodes.
Security management
It allows secrets to be stored in the swarm, and then selectively grants services access to certain secrets.
It includes some important engine commands, such as secret inspection and secret creation.
Docker architecture
Docker follows a client-server architecture, divided into three main parts.
Client: Docker provides command-line interface (CLI) tools; the client interacts with the Docker daemon and can build, run, and stop applications. The client can also interact with a remote Docker_Host.
Docker_Host: contains the containers, images, and Docker daemon, providing the complete environment for executing and running applications.
Registry: the global image library; you can access and use these images to run applications in a Docker environment.
As shown in the following figure:
Docker daemon
This is a process that listens for Docker API requests. It also manages Docker objects, such as images, containers, networks, etc. Daemons can also communicate with other daemons to manage Docker services.
Docker client
The Docker client is the primary way most Docker users interact with Docker. When you run commands such as docker run, the client sends them to dockerd, which executes them. The docker command uses the Docker API.
Docker registries
A Docker registry stores Docker images. Docker provides Docker Hub and Docker Cloud, public registries that anyone can use, and Docker is configured to look for images on Docker Hub by default.
When we run docker pull or docker run, the required image is pulled from the configured registry. When we run docker push, the image is pushed to the configured registry.
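To make the default-registry behavior concrete, here is a small hypothetical helper (not part of docker itself) that mimics how a bare image name could be expanded before a pull. The registry host it uses is taken from the text above and is an assumption about defaults, not docker's real implementation.

```shell
# Hypothetical sketch of image-name resolution: bare names fall under the
# library/ namespace of the default public registry mentioned above.
resolve() {
  case "$1" in
    */*/*) echo "$1" ;;                                  # registry host already given
    */*)   echo "registry.hub.docker.com/$1" ;;          # user/repo on the default registry
    *)     echo "registry.hub.docker.com/library/$1" ;;  # bare name: an official image
  esac
}
resolve tomcat                                   # -> registry.hub.docker.com/library/tomcat
resolve registry.docker-cn.com/library/tomcat    # -> unchanged: registry is explicit
```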
Docker installation (Red Hat 6.5)
First, download the dependency packages (on a node with Internet access)
Skip this step if the dependency packages have already been downloaded.
On the networked node:
1. Install the downloadonly plug-in and download the rpm package using yum
# yum install yum-plugin-downloadonly
Usage:
# yum install --downloadonly --downloaddir=<rpm download directory> <package name>   (download only, do not install)
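A concrete, hedged example of that syntax; the package name httpd and the cache directory are illustrative placeholders, and the command only runs on hosts that actually have yum:

```shell
# Illustrative use of --downloadonly; "httpd" and /tmp/rpm-cache are
# placeholder choices, not requirements from the text above.
if command -v yum >/dev/null 2>&1; then
  mkdir -p /tmp/rpm-cache
  yum install --downloadonly --downloaddir=/tmp/rpm-cache httpd
  result=attempted
else
  result=skipped      # not a yum-based system; nothing to download
fi
echo "$result"
```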
2. Download the dependency packages required by docker
2.1 Configure the Fedora EPEL repository
# yum install http://ftp.riken.jp/Linux/fedora/epel/6/x86_64/epel-release-6-8.noarch.rpm
2.2 Add the hop5.repo repository
# cd /etc/yum.repos.d
# wget http://www.hop5.in/yum/el6/hop5.repo
2.3 Download the dependency packages
# mkdir /usr/local/docker
# yum install --downloadonly --downloaddir=/usr/local/docker docker-io
At this step yum will report that some rpm packages cannot be found, because their version numbers have changed upstream. Copy the name of each package that failed to download (without the version number) and open the following URL:
http://mirrors.aliyun.com/epel/6/x86_64/
Find the rpm package with the matching name. For example, if you are told that lxc-1.0.8-1.el6.x86_64.rpm failed to download, find lxc-1.0.9-1.el6.x86_64.rpm at the URL above and copy its link address: http://mirrors.aliyun.com/epel/6/x86_64/lxc-1.0.9-1.el6.x86_64.rpm.
# cd /usr/local/docker
# wget http://mirrors.aliyun.com/epel/6/x86_64/lxc-1.0.9-1.el6.x86_64.rpm
This downloads the corresponding package.
2.4 Download device-mapper-libs; if you skip it, docker will report an error at startup.
# mkdir /usr/local/docker/device-mapper-libs
# yum install --downloadonly --downloaddir=/usr/local/docker/device-mapper-libs device-mapper-libs
2.5 Copy the entire docker directory to /usr/local/src on the offline node where docker is to be installed.
Second, install docker (offline node)
1. Install docker
# cd /usr/local/docker
# rpm -ivh lxc-libs-1.0.11-1.el6.x86_64.rpm
# rpm -ivh lua-alt-getopt-0.7.0-1.el6.noarch.rpm
# rpm -ivh lua-filesystem-1.4.2-1.el6.x86_64.rpm
# rpm -ivh lua-lxc-1.0.11-1.el6.x86_64.rpm
# rpm -ivh lxc-1.0.11-1.el6.x86_64.rpm
# rpm -ivh docker-io-1.7.1-2.el6.x86_64.rpm
2. Running the docker -d command reports the following error:
Docker: relocation error: docker: symbol dm_task_get_info_with_deferred_remove, version Base not defined in file libdevmapper.so.1.02 with link time reference
2.1 Solution:
# cd /usr/local/docker/device-mapper-libs
# yum -y install device-mapper*
3. Run the docker -d command again; it fails with the following error:
FATA[0000] Error mounting devices cgroup: mountpoint for devices not found
3.1 Solution:
# vi /etc/fstab
Add the following line at the end of the file, then reboot:
none /sys/fs/cgroup cgroup defaults 0 0
# reboot
4. Run docker -d again, this time in the background
# mkdir /usr/local/docker
# cd /usr/local/docker
# nohup docker -d > null &
# tail -f nohup.out
Alternatively, run docker as a service:
# service docker start
To stop it:
# service docker stop
If no error messages appear, the installation succeeded!
Docker deployment tomcat
Search for tomcat images on the server:
# docker search tomcat
Download the official image, the one with the most Stars.
Due to network problems you may not be able to download it directly; you can pull it through a domestic mirror instead:
# docker pull registry.docker-cn.com/library/tomcat
View all docker images:
# docker images
Start tomcat in docker
The following command runs Tomcat and maps port 8080 of the container to port 8080 of the host machine:
# docker run -p 8080:8080 registry.docker-cn.com/library/tomcat
Visit tomcat: http://172.16.16.212:8080/
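A quick, hedged way to check that the container is actually serving. The probe targets localhost rather than the example host above, and simply reports failure on machines where the container is not running:

```shell
# Probe the published port; --max-time keeps the check fast when nothing is
# listening. localhost is an assumption (the article's host was 172.16.16.212).
if curl -s -o /dev/null --max-time 2 http://localhost:8080/; then
  probe="tomcat is answering on port 8080"
else
  probe="no response on port 8080 (container not running on this machine)"
fi
echo "$probe"
```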
Extended Dockerfile
In the example in this article, a simple Dockerfile is created, as follows:
FROM tomcat:7-jre7
MAINTAINER "Craig Trim"
Build the image with the following command:
$ sudo docker build -t craig/tomcat .
Evolution direction
Docker has been around for five years, during which it has greatly changed the architecture of Internet products and promoted new methods of development, testing, and operations. But Docker itself is in the middle of drastic change: especially in the past two years, as the open-source project has evolved, Docker's internal structure has changed enormously. As a user of a container platform you need not follow every detail of Docker's evolution, but you must understand how that evolution affects your PaaS platform. This section draws on the Docker community to summarize Docker's recent trends, and closes with some suggestions on choosing the underlying container runtime for a K8s-based PaaS platform.
The architecture evolution direction of Docker in recent years:
1. The original engine functionality has sunk into containerd, and containerd is evolving into a general container runtime independent of Docker.
2. Swarm's functionality has been integrated into the engine; the swarmkit module keeps shrinking and will eventually be absorbed by the engine.
3. The engine's internal functionality is continually being decoupled into new modules, while new features are continually added to the Docker engine.
In a word: containerd at the core, clustering in the engine.
Background
Docker lost the competition among cluster-management and service-orchestration tools.
Docker launched its own cluster-management and service-orchestration tool, swarm, with Docker 1.9 in 2015. At first swarm was a stand-alone tool outside the Docker engine, but since Docker 1.12 it has been integrated into the engine. At that point Docker swarm, as a complete service-orchestration tool, came into direct conflict with the Google-led, Red Hat-backed kubernetes community. At the same time, Docker began commercializing in 2015. To prevent Docker from holding the community hostage during that commercialization, the open-source community launched several Docker alternatives such as rkt, and the kubernetes community introduced the CRI (Container Runtime Interface) standard: any container runtime that complies with it can be supported by K8s. For a moment it looked as though Docker might be abandoned by the mainstream community. After two years of competition with K8s, by 2017 Docker swarm had in effect been completely defeated, and Kubernetes had become the most popular open-source project. To cope with these disadvantages, Docker donated containerd to the CNCF Foundation in 2017 (where K8s is also a project) and pushed more Docker engine functionality down into containerd, to avoid being marginalized in the open-source community.
The standardization of the container world is constantly advancing.
After Docker launched in 2013 it greatly disrupted the industry and accelerated the containerization of applications. In 2015, under Docker's leadership, the community produced the first industry standard for containers, the OCI (Open Container Initiative) specification, and Docker contributed the first OCI-standard container runtime, runc. As the first OCI runtime, runc is only a reference implementation: it handles just the interaction between the container and the host. Essential functions such as checking container state, monitoring, life-cycle management, I/O management, and signal delivery were left with nowhere to live, so Docker positioned containerd as the production-grade OCI runtime. It takes on most of the necessary functions that runc lacks: life-cycle management, I/O management, and signal management. When the OCI v1 standard was released in 2017, Docker also donated its image format to the OCI specification. Accordingly, in containerd 1.0 (released in 2017) the storage-management functions originally handled by the engine also moved into containerd. (Network management is listed in the roadmap at https://containerd.io, but in a containerd maintainer vote in December 2016, networking was excluded from containerd's scope and left to the layer above; on January 12, 2017, networking was officially removed from containerd's scope in the project's readme on github.)
Containerd
Containerd first officially appeared in Docker 1.11, starting at version 0.x; the 1.x series was only released in 2017. As mentioned above, containerd is a production-ready implementation that leverages the OCI runtime and image formats. The following figure shows how containerd sees its own place in the community.
You can see that containerd acts as a general container-runtime adaptation layer for PaaS tools: it uses existing OCI runtimes (runc, and hcsshim on windows) to hide differences between underlying operating systems and provide generic container support to PaaS platforms. The diagram shows that containerd aims to support all the widely used PaaS platform tools. What can be confirmed today is that AWS ECS, K8s, and Docker use containerd; Mesos and cloud foundry had not decided as of this writing (September 2018).
Architecture diagram of Containerd 1.x
From this architecture diagram you can clearly see that containerd exposes a gRPC API, while a metrics API serves the measurement function. The container adaptation layers of all the orchestration tools can act as containerd clients, using the gRPC API to manipulate containers.
Distribution handles pull and push of container images (this lives in containerd because Docker contributed its image format with the OCI v1 standard, so containerd naturally needs to manage images). The Bundle subsystem handles container storage management; it plays the role of the old graphdriver, unpacking a container image into the bundle the container runtime needs. The Runtime subsystem executes and monitors containers: it drives the runtime directly, sends and receives signals, wires up FIFOs, and collects logs. Content, Metadata, and Snapshots are storage-management components, while Executor and Supervisor are execution components. The whole system is driven by events.
According to containerd on github, the work of the containerd project focuses on the following areas:
Execution: container creation, running, stopping, pausing, resuming, exec, signaling, and deletion.
CoW filesystem support: built-in storage capabilities on overlay, aufs, and other copy-on-write filesystems.
Metrics
Distribution: pull and push of container images; image management and retrieval.
The following is not the scope of work of the containerd project:
Networking: creating and managing networks is left to the layer above.
Build: building images.
Volumes: volume management (mounts, binds, and other volume functions) should be done by the layer above.
Logging
Note that networking has been the focus of debate in the containerd community, but in the maintainer vote at the end of 2016 the majority supported keeping it out of containerd: in most maintainers' view networking is too complex, and network setup often spans nodes, so this functionality is better placed in containerd's clients (Docker Engine or K8s).
One more note on containerd's version-compatibility rule: containerd is compatible across two consecutive minor versions of the same major version. For example, 1.1.0 and 1.2.0 are compatible, while compatibility between 1.0.0 and 1.2.0 is not guaranteed.
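The rule can be sketched as a small shell helper. This is a hypothetical illustration of the stated policy (same major version, minor versions at most one apart), not anything shipped with containerd:

```shell
# compatible A B -> success when A and B share a major version and their
# minor versions differ by at most 1, per the rule stated above.
compatible() {
  a_major=${1%%.*};  b_major=${2%%.*}
  a_minor=$(echo "$1" | cut -d. -f2)
  b_minor=$(echo "$2" | cut -d. -f2)
  [ "$a_major" = "$b_major" ] || return 1      # different major: not compatible
  diff=$((a_minor - b_minor))
  [ "${diff#-}" -le 1 ]                        # |minor difference| <= 1
}
compatible 1.1.0 1.2.0 && echo "1.1.0 and 1.2.0: compatible"
compatible 1.0.0 1.2.0 || echo "1.0.0 and 1.2.0: not guaranteed"
```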
The evolution of Docker
You can see that before Docker 1.11 there was no containerd module; libcontainer drove the host OS interfaces directly.
The following figure shows the docker architecture from Docker 1.11 through Docker 17.10.
As you can see from figure 3-2, the architectural change in this period is the introduction of containerd and runc; container life-cycle management is now done by containerd.
The following figure lists the architecture starting with Docker 17.11 (Docker 17.12 is the true stable version)
It is important to note that the network part of the diagram is not in containerd, but still in docker engine.
A Chronicle of the Evolution of Docker Architecture
Docker 1.9 introduces swarm as a stand-alone tool for cluster management and service orchestration
Docker 1.11 introduces containerd and runc.
Docker 1.12 swarm enters the engine
Docker 17.06: the engine community project is renamed moby, and the docker community version is renamed docker-ce (similar to Red Hat's fedora and rhel)
Docker 17.12 Docker officially introduced containerd 1.0
Docker 18.06 officially adds buildkit to engine. After setting an environment variable, you can use buildkit to complete the docker build process.
Docker 18.06.1 officially introduced containerd 1.1 into docker.
K8s community attitude towards Docker version after 17.03
After Docker 17.03, Docker versioning changed a great deal because of moby, containerd, and other factors. As of September 2018, after the release of K8s 1.11, no Docker version after 17.03 (exclusive) was listed as a K8s-compatible Docker version. From community discussion, the recommendation has been to connect containerd directly to CRI to integrate with K8s; see https://github.com/kubernetes/kubernetes/issues/42926, where cpuguy83 (a primary maintainer of the moby community) recommends using containerd directly, rather than Docker 17.03 and later, to integrate with k8s. Docker compatibility testing in the K8s community was originally done by the sig-node working group, but the discussions show that sig-node does not plan to run compatibility tests for Docker versions after 17.03.
The latest news: with the K8s 1.12 RC released on September 27, 2018, docker 18.06 support was added to kubeadm and CI testing was completed; the 1.12 RC documentation adds docker 17.12, 18.03, and 18.06 to the support list. Within K8s, however, the sig-node working group is responsible for the underlying container interface, and Docker compatibility testing for a K8s release should be completed by that group; sig-node's attitude toward Docker versions after 17.03 remains cool, and they have not been included in any firm work plan. The two chairs of sig-node are from Google and Red Hat, and they are more interested in CRI. In May 2018, sig-node and the containerd community finished merging cri-containerd into containerd 1.1 and declared the containerd integration with kubernetes GA. With that work done, the architecture in which k8s drives containerd is simpler and clearer. The following figure shows the architectures of docker and containerd under k8s:
From the two figures above, containerd 1.1 has a simpler structure under K8s and a simpler driver path. And according to the K8s community's tests, using containerd directly instead of docker yields reduced container start-up time, memory consumption, and cpu consumption. These conclusions come from:
https://kubernetes.io/blog/2018/05/24/kubernetes-containerd-integration-goes-ga/
docker-manager was how the old kubelet drove docker. CRI has been part of K8s since 1.6, and since containerd 1.1 (April 2018) kubelet can drive containerd directly through CRI.
Suggestions on PaaS platform selection
If you are choosing a production environment for a K8s-based PaaS platform, I recommend Docker 17.03: it is mature and stable and has been tested for stability and compatibility with K8s 1.9, 1.10, and 1.11. If you like to try new things, I suggest skipping Docker 17.03 and using containerd 1.1 directly under kubelet. If you prefer a newer docker, try docker 18.06: Docker 17.12 is the most volatile Docker release, while the K8s 1.12 RC adds support for 18.06 in kubeadm, so use the newest. More importantly, docker 18.06.1 uses containerd 1.1, which supports namespace isolation between controlling platforms: simply put, docker and K8s can each drive the same containerd on the same node without being visible to each other. This lets us keep the traditional k8s+docker combination for now while preserving the option of switching to k8s+containerd later, maintaining flexibility for our own platform's evolution.
Appendix
FAQ
Replace the yum source for Redhat
# yum install pam-devel
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Setting up Install Process
Nothing to do
The yum source that comes with redhat by default needs to be registered before it can be updated. Therefore, the yum source for redhat needs to be replaced.
1. Check whether the yum packages are installed
Check to see if yum is installed on RHEL, and if so, what yum packages are available:
[root@localhost ~]# rpm -qa | grep yum
yum-metadata-parser-1.0-8.fc6
yum-3.0.1-5.el5
yum-rhn-plugin-0.4.3-1.el5
yum-updatesd-3.0.1-5.el5
2. Delete the yum packages that come with redhat
Uninstall all yum packages shown above:
[root@localhost ~]# rpm -qa | grep yum | xargs rpm -e --nodeps   (remove the rpm packages without checking dependencies)
Then run the query again:
[root@localhost ~]# rpm -qa | grep yum
[root@localhost ~]#
No output means the uninstall is complete.
3. Download the new yum packages, using the CentOS 6.5 (64-bit) packages
# wget http://mirrors.163.com/centos/6/os/x86_64/Packages/python-urlgrabber-3.9.1-11.el6.noarch.rpm
# wget http://mirrors.163.com/centos/6/os/x86_64/Packages/yum-metadata-parser-1.1.2-16.el6.x86_64.rpm
# wget http://mirrors.163.com/centos/6/os/x86_64/Packages/yum-3.2.29-81.el6.centos.noarch.rpm
# wget http://mirrors.163.com/centos/6/os/x86_64/Packages/yum-plugin-fastestmirror-1.1.30-41.el6.noarch.rpm
Install the yum package
Note: individual installation packages may depend on other packages (for example, yum and yum-fastestmirror depend on each other), so we can put all these packages together and install them at the same time with one command:
# rpm -ivh yum-metadata-parser-1.1.2-16.el6.x86_64.rpm yum-3.2.29-81.el6.centos.noarch.rpm yum-plugin-fastestmirror-1.1.30-41.el6.noarch.rpm
4. Replace the yum source with the 163 source
# cd /etc/yum.repos.d/
# wget http://mirrors.163.com/.help/CentOS6-Base-163.repo
# vi CentOS6-Base-163.repo
Edit the file and replace every $releasever in it with the version number 6, then save.
In vi you can use the command: :%s/$releasever/6/g
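The same substitution can be done non-interactively with sed, sketched here against a throwaway sample file (the real target is the downloaded CentOS6-Base-163.repo):

```shell
# Non-interactive equivalent of :%s/$releasever/6/g, run on a sample file.
repo=/tmp/CentOS6-Base-163.repo
echo 'baseurl=http://mirrors.163.com/centos/$releasever/os/$basearch/' > "$repo"
sed -i 's/\$releasever/6/g' "$repo"
cat "$repo"    # baseurl now points at .../centos/6/os/...
```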
5. Clean up the yum cache
# yum clean all     (clear the old cache)
# yum makecache     (rebuild the cache to speed up searching for and installing software)
6. Test
sudo yum search git
7. Update the system
sudo yum update
Note: this step may take a long time. If you don't want to wait, you can skip it.
This is the end of "what is the installation and use of Docker". Thank you for reading. If you want to learn more about the industry, keep following the site, where the editor will keep producing practical articles for you!