2025-01-19 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/01 Report--
What are the basic theories and key points of Docker? Many newcomers find them confusing, so this article summarizes the core concepts and the technologies behind them. I hope that by the end you will have a clear picture.
Generally speaking, virtualization can be achieved through hardware emulation, as in VMware and KVM: a complete operating system is recreated on top of virtual hardware, and that guest operating system cannot tell whether the hardware underneath it is real. This form is known as hardware-level (full) virtualization.
Now container technology, represented by Docker, has brought innovation and a breakthrough to virtualization.
1. What is Docker?
Docker is an open source project that uses container technology to implement application virtualization.
It is fundamentally different from traditional virtual machine technology. A single virtual machine can host many applications, but it essentially carries an entire extra operating system around with it. Docker, by contrast, is a lightweight container tailored to a single application: one Docker container encapsulates one application together with the environment it needs to run. This is why it is called an "application container", a much finer-grained unit.
2. Why use Docker?
Docker builds directly on features of the host operating system, which lets it achieve a degree of lightweight virtualization far beyond traditional virtual machines. The main advantages are:
1. Faster delivery and deployment: developers can quickly build a development environment from images, testers can test in exactly the same environment, and deployment through Docker becomes simple and efficient.
2. More efficient use of resources: Docker itself consumes very few resources, far less than a traditional virtual machine, so a single server can host many more Docker applications.
3. Easier scaling and migration: Docker runs on almost any platform.
4. Simpler update management: with a Dockerfile, many updates can be made through a simple modification.
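Point 4 can be seen in practice: a one-line change to a Dockerfile is enough to rebuild an image, with all unchanged layers reused from cache. The following sketch is illustrative only; the base image, file names, and tags are assumptions, not part of any real project:

```shell
# A minimal, hypothetical Dockerfile:
$ cat Dockerfile
FROM alpine:3.19
RUN apk add --no-cache curl
COPY app.sh /usr/local/bin/app.sh
CMD ["/usr/local/bin/app.sh"]

# After editing only app.sh, rebuilding reuses the cached FROM and RUN
# layers and re-runs only the COPY step and everything after it:
$ docker build -t myapp:latest .
```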
3. The core concept of Docker
Whenever you use Docker, you are essentially dealing with its three core elements: images, containers, and repositories.
An image is a read-only template, the definition of a container, much like a class in Java: just as an instance is created from a class, a container is created from an image. A container is the application we actually run to provide a service. Committing a container produces a new image, and migrating that image lets you create containers on other servers.
A repository is the place where images are stored. Many base images are currently hosted on Docker's Docker Hub and can be downloaded directly as base templates for applications.
Of course, we can also build a private registry and share our images on an internal network.
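The relationship between the three elements can be sketched with a few everyday commands (the image names and tags here are just examples):

```shell
# Pull an image (a read-only template) from the Docker Hub repository:
$ docker pull nginx:latest

# Start a container -- a running instance of that image:
$ docker run -d --name web -p 8080:80 nginx:latest

# Commit the container's current state as a new image,
# which can then be moved to another server and run there:
$ docker commit web my-nginx:v1
```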
4. Implementation of Docker's core technologies
Docker is a container virtualization technology born on Linux. It relies heavily on several low-level supporting features of the Linux kernel, and through them implements container-level virtualization.
1. Use Namespace to achieve resource isolation
Namespace is a powerful feature of the Linux kernel. Using it, each container gets its own independent namespaces, so the applications running inside behave as if they were in a separate operating-system environment, and the resources of different containers are independent and do not affect each other.
This isolation covers hostnames and domain names, process IDs, inter-process communication, network devices, file systems, and users and user groups. Although all containers share the host's hardware through the operating system, Namespace provides resource isolation at the operating-system level, which is very efficient.
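A quick way to see this isolation, assuming Docker is installed, is to look at the process and hostname view from inside a container:

```shell
# Inside its own PID namespace, the container's first process is PID 1,
# and inside its own UTS namespace it has its own hostname:
$ docker run --rm alpine sh -c 'echo "pid=$$"; hostname'
pid=1
f3a1b2c4d5e6   # the container ID, not the host's hostname (illustrative value)
```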
2. Use Control Groups to achieve resource limits
A control group (cgroup) is also a feature of the Linux kernel; it mainly limits and accounts for shared resources. By controlling the resources allocated to each container, Docker avoids resource contention among containers and between containers and the host system.
The control group mainly provides the following functions:
Resource limiting: a group can be prevented from exceeding a set memory limit.
Prioritization: some groups can be given priority access to more CPU resources.
Resource accounting: the resources consumed by each group can be measured.
Isolation: one group cannot see the resources of another group (including processes, network connections, and file systems).
Control: groups can be suspended, resumed, and otherwise managed.
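These functions surface directly as flags on docker run; a small, hedged example (the container name and limit values are arbitrary):

```shell
# Cap the container at 256 MB of memory and half a CPU core.
# Docker writes these limits into the container's cgroup on the host:
$ docker run -d --name limited --memory=256m --cpus=0.5 nginx:latest

# Report current usage against the configured limits:
$ docker stats --no-stream limited
```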
3. Use a union file system to make image management fast and lightweight
A union file system (UnionFS) is a high-performance layered file system for Linux with several current implementations. It has two basic characteristics:
Each modification is recorded as a commit, and commits are stacked layer upon layer.
Different directories can be mounted under the same virtual file system.
The union file system is a foundational technology of Docker's implementation. Through layering, Docker images support inheritance: users build different application images on top of a common base image. These application images share the base image and only need to record their own layer information, which greatly improves storage efficiency.
We can view the layered composition of an image with the docker history command.
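For example (the image is arbitrary; the IDs and sizes will differ on your machine):

```shell
# One row per layer, newest layer first:
$ docker history nginx:latest
```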
By mounting different directories under the same virtual file system, external data volumes can also be attached.
For a Docker image, the layers that make it up are immutable and read-only. When Docker starts a container from an image, it mounts a new read/write layer on top of the image's file system, and all updates made by the container happen in that layer. When the object being modified lives in a lower layer, it must first be copied up to the top read/write layer; when the data is large, this copy-on-write behavior hurts I/O performance. For this reason, such data is usually mounted as a data volume rather than modified directly in the image.
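A hedged sketch of this practice (the volume name, mount path, and password are assumptions for illustration):

```shell
# Create a named volume and mount it into the container; writes to
# /var/lib/mysql bypass the copy-on-write layer entirely:
$ docker volume create dbdata
$ docker run -d --name db -e MYSQL_ROOT_PASSWORD=example \
    -v dbdata:/var/lib/mysql mysql:8
```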
After reading the above, have you mastered the basic theory and key points of Docker? If you want to learn more, you are welcome to keep following this channel. Thank you for reading!
© 2024 shulou.com SLNews company. All rights reserved.