There is no doubt that "container" has been one of the buzzwords of the IT industry in recent years, but it means different things to different people in different contexts. In this article, I will explain what a container actually is, covering Linux containers, container images, copy-on-write storage, and more.
Linux container
A traditional Linux container is really just an ordinary process (or group of processes) on a Linux system. These process groups are isolated from other process groups by resource constraints (control groups, i.e. cgroups), Linux security constraints (Unix permissions, capabilities, SELinux, AppArmor, seccomp, etc.), and namespaces (PID, network, mount, etc.).
If you boot a Linux system and inspect any process with cat /proc/PID/cgroup, you will see which cgroups that process belongs to. If you look at /proc/PID/status, you will see its capabilities. If you check /proc/self/attr/current, you will see its SELinux label. If you check /proc/PID/ns, you will see the namespaces the process is running in.
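To make this concrete, the commands below (a sketch for any modern Linux system; the SELinux entry exists only where SELinux is enabled) inspect those pieces for the current shell process:

cat /proc/self/cgroup        # the control groups this process belongs to
grep Cap /proc/self/status   # its Linux capabilities (CapInh, CapPrm, CapEff, ...)
cat /proc/self/attr/current  # its SELinux label, on SELinux-enabled systems
ls -l /proc/self/ns          # the namespaces it is running in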
So, if you define a container as a process with resource constraints, Linux security constraints, and namespaces, then by that definition every process on a Linux system runs in a container. This is why some people say "Linux is containers, containers are Linux." A container runtime is a tool that modifies these resource constraints, security constraints, and namespaces, and then launches the container's program.
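In fact, you can see this machinery with nothing more than standard Linux tools. A sketch (assumes util-linux's unshare and root privileges; the hostname is just an example) that runs a shell in its own PID, mount, UTS, and network namespaces:

sudo unshare --fork --pid --mount-proc --uts --net bash -c 'hostname demo-container; hostname; ps -ef'
# Inside, bash is PID 1, /proc shows only the new PID namespace, and the hostname
# change is invisible to the rest of the system.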
Container image
Docker introduced the concept of the container image, which is essentially a TAR file containing two things:
Rootfs (container root filesystem): a directory tree that looks like the root (/) of an ordinary operating system, with directories such as /usr, /var, /home, and so on.
JSON file (container configuration): a file that specifies how to run the rootfs, for example which command or entrypoint should be run when the container starts, which environment variables should be set, what the container's working directory is, and so on.
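One way to see both parts is to save an image to a tarball and list its contents. This is only a sketch; it assumes Docker is installed with access to Docker Hub, and alpine:3.19 is just an example tag:

docker pull alpine:3.19
docker save alpine:3.19 -o alpine.tar
tar tf alpine.tar                 # layer tarballs plus the JSON configuration and a manifest
tar xOf alpine.tar manifest.json  # shows which JSON config belongs to which layer tarballs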
A Docker base image consists of this root filesystem plus the JSON file. You create a new image by installing additional content into the base image's root filesystem, which produces a new image layer, and by updating the JSON configuration.
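As a sketch (assuming Docker and network access; the image and package names are just examples), the following Dockerfile adds one layer on top of a base image and updates its configuration:

cat > Dockerfile <<'EOF'
# Base image: supplies the rootfs and the initial JSON configuration
FROM alpine:3.19
# Installing a package into the rootfs creates a new image layer
RUN apk add --no-cache curl
# These instructions update the JSON configuration of the new image
ENV APP_ENV=production
CMD ["curl", "--version"]
EOF
docker build -t my-curl:latest .
docker history my-curl:latest   # shows the new layer stacked on top of the base image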
The definition of the container image was eventually standardized by the Open Container Initiative (OCI) as the OCI Image Specification.
The tool used to build a container image is called a container image builder. Sometimes the container engine itself builds the image (typically driven by a Dockerfile, as above), and there are also standalone tools that can build container images.
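Buildah is one example of a standalone builder. A minimal sketch (assuming buildah is installed; image and package names are examples) that produces a similar image without a Dockerfile:

ctr=$(buildah from alpine:3.19)                # start a working container from a base image
buildah run "$ctr" -- apk add --no-cache curl  # install into its rootfs (new layer content)
buildah config --env APP_ENV=production --cmd "curl --version" "$ctr"   # update the JSON config
buildah commit "$ctr" my-curl:latest           # commit rootfs + config as a new image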
Docker took these container images (tarballs), uploaded them to a web service from which they can be pulled, and defined a protocol for pulling them. This web service is called a container image registry.
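As a sketch of the registry workflow (assumes Docker is logged in to Docker Hub; "myuser" is a hypothetical account name):

docker pull docker.io/library/alpine:3.19            # download the layers and config over the registry protocol
docker tag alpine:3.19 docker.io/myuser/alpine:3.19  # retag it under your own namespace
docker push docker.io/myuser/alpine:3.19             # upload the layers and config to the registry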
A container engine can pull an image from a container image registry and unpack it into local container storage. The container engine can then launch a container runtime to run the container.
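A minimal sketch of that flow, using Podman as the container engine (any OCI-compatible engine behaves similarly):

podman pull alpine:3.19                  # pull from the registry into local container storage
podman run --rm alpine:3.19 echo hello   # the engine invokes an OCI runtime (runc or crun) to start the container
podman info | grep graphDriverName       # the copy-on-write storage backend in use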
Internals of Linux containers
Copy-on-write (COW)
Container storage is typically a copy-on-write (COW) layered filesystem. When you pull an image from a registry, the engine first extracts the rootfs and places it on disk. If the image consists of multiple layers, each downloaded layer is stored as a separate layer in the COW filesystem. Keeping the layers separate maximizes sharing between layered images. Container engines typically support several storage backends, such as overlay, devicemapper, btrfs, aufs, and zfs.
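To illustrate the COW idea with the overlay backend, here is a sketch (requires root and a kernel with overlayfs; the directory layout is illustrative, not a real engine's layout):

mkdir -p /tmp/demo/{lower,upper,work,merged}
echo "from the base layer" > /tmp/demo/lower/file.txt
mount -t overlay overlay -o lowerdir=/tmp/demo/lower,upperdir=/tmp/demo/upper,workdir=/tmp/demo/work /tmp/demo/merged
echo "modified in the container" > /tmp/demo/merged/file.txt   # triggers a copy-up into "upper"
cat /tmp/demo/lower/file.txt   # the shared lower layer is unchanged
cat /tmp/demo/upper/file.txt   # the private upper layer holds the modified copy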
Container runtime
After the container engine has downloaded the container image into container storage, it needs to create a configuration file for the container runtime. This configuration combines input from the caller/user with the content of the container image's configuration. For example, the caller might want to apply specific security changes to the running container, add environment variables, or mount volumes into the container; all of this is caller input.
This runtime configuration, together with the unpacked rootfs, was also standardized by the OCI as the OCI Runtime Specification.
Finally, the container engine launches a container runtime, which reads the runtime specification, sets up the Linux cgroups, security constraints, and namespaces, and starts the container's command as PID 1 inside the container. At this point, the container engine can relay stdin/stdout back to the caller and control the container (stop, start, attach, and so on).
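As a sketch of driving an OCI runtime by hand (assumes runc is installed and that you have an exported container root filesystem, e.g. from "podman export" or "docker export"):

mkdir -p bundle/rootfs
# ...unpack an exported container root filesystem into bundle/rootfs first...
cd bundle
runc spec            # writes a default OCI runtime config.json into the bundle
sudo runc run demo   # applies the cgroups, security settings, and namespaces, then starts PID 1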
Note that many newer container runtimes use different parts of Linux to isolate containers. Containers can be run with KVM-based separation (think tiny virtual machines), or with other hypervisor strategies (for example, intercepting all system calls made by processes in the container). Because we have a standard runtime specification, all of these tools can be launched by the same container engine. Even Windows can use the OCI Runtime Specification to launch Windows containers.
Container orchestration engine
A container orchestration engine sits a level above the other container tools: container orchestration coordinates the execution of containers across multiple nodes. The orchestration engine manages containers by talking to the container engine on each node, for example to start a container and connect it to a network. It can also monitor containers and launch additional ones when load increases.
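As a sketch with Kubernetes' command-line client (assumes a working kubectl context; the image and names are examples):

kubectl create deployment web --image=nginx:1.25 --replicas=2   # the orchestrator schedules containers onto nodes
kubectl get pods -o wide                                        # shows which node each container landed on
kubectl scale deployment web --replicas=5                       # start more containers when load increases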
Kubernetes is currently the most widely used container orchestration engine. It is used by a large number of small and medium-sized enterprises in development and production environments and has become the recognized standard framework for container orchestration. However, native Kubernetes has a steep learning curve and is difficult for most developers to use directly. Rancher, an open source enterprise Kubernetes management platform, goes a long way toward solving this problem with a simple, intuitive interface and user experience. Rancher also provides centralized deployment and management of Kubernetes clusters across hybrid cloud and on-premises data centers, so clusters running on different infrastructure can be managed in one place. In addition, Rancher puts user security first: after Kubernetes released new patch versions on August 6, Rancher responded quickly and released Rancher 2.2.7 a day later, fixing the recent CVEs and supporting the new Kubernetes versions.
Visit the Rancher GitHub releases page to learn more about the new version:
https://github.com/rancher/rancher/releases