Introduction
Containers, along with container technologies such as Docker and Kubernetes, have increasingly become standard tools in many developers' toolkits. The core goal of containerization is to offer a better way to create, package, and deploy software across different environments in a predictable and manageable manner.

In this article, we will look at what a container is, how it differs from other virtualization technologies, and what advantages it brings to deployment and operations. If you just want a quick overview of the core concepts, you can skip ahead to the Terminology for containers section near the end.
What is a container?
A container is an operating system virtualization technology that packages an application together with its dependencies and runs it in an isolated environment. Across different types of infrastructure, containers provide a lightweight, standardized way to package and deploy applications.
These characteristics make containers very attractive to developers and operators alike. Because containers run consistently on any host that supports them, developers can test the same software locally and later deploy it to a full production environment with ease. The container format also guarantees that the application's dependencies are baked into the image, simplifying the hand-off and release process. And because the hosts and platforms that run containers are generic, infrastructure built on container systems can be managed in a standardized way.

Containers are created from container images, which bundle the system, the application, and the container's environment. A container image acts as a template from which specific containers are created, and the same image can be used to spawn any number of running containers. This is similar to how classes and instances work in object-oriented programming: just as a single class can produce any number of instances, a single container image can produce any number of containers. The metaphor extends to inheritance as well, since a container image can serve as the parent of other, customized container images. Users can download pre-built images from external sources or build custom images as needed.
What is Docker?
Although Linux containers are a general technology that can be implemented and managed in different ways, Docker is by far the most common way to build and run containers. It includes a set of tools for creating container images, pushing images to and pulling them from external image repositories, and running and managing containers in different environments. It is fair to say that the rapid rise of containers on Linux owes much of its momentum to the work Docker has put in since its release in 2013.
The docker command line tool plays a variety of roles. It acts as a process manager for container workloads, running and managing containers. It can create new container images by reading and executing a Dockerfile or by taking a snapshot of a running container. It can also interact with Docker Hub, a container image repository, to pull new container images or to push local images in order to save or publish them.
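To make these roles concrete, here is a minimal sketch in Go that shells out to the docker CLI to pull an image, run a throwaway container from it, and list local images. The alpine:3.19 tag is just an illustrative example; any image name would work.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// docker runs the docker CLI with the given arguments and
// streams its output, mirroring what you would type in a shell.
func docker(args ...string) {
	cmd := exec.Command("docker", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintf(os.Stderr, "docker %v failed: %v\n", args, err)
		os.Exit(1)
	}
}

func main() {
	docker("pull", "alpine:3.19")                         // fetch an image from Docker Hub
	docker("run", "--rm", "alpine:3.19", "echo", "hello") // run a container, removing it on exit
	docker("images")                                      // list the images stored locally
}
```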
Although Docker is only one of many container implementations on Linux, it has made the container world easier to enter and remains the most commonly used solution. Open standards have been developed for containers to ensure interoperability, yet most container-related platforms and tools still use Docker as their primary reference when testing and releasing software. Docker may not always be the highest-performance solution for a given environment, but it is among the most thoroughly tested options. In practice, even though many alternatives exist on Linux, people usually learn Docker first, and with good reason: Docker is ubiquitous and has shaped the terminology, standards, and tooling of the entire container ecosystem.
How does the container work?
Understanding how containers work makes it easier to discuss how they differ from virtual machines.
Virtual machine vs container
Virtual machines, or VMs, are a hardware virtualization technology that lets you fully virtualize the hardware and resources of a computer. A separate guest operating system manages the virtual machine, entirely apart from the OS running on the host system. On the host, a piece of software called a hypervisor is responsible for starting, stopping, and managing the virtual machines. Because virtual machines run as completely distinct computers that, under normal operating conditions, cannot affect the host system or other virtual machines, they offer excellent isolation and security. They have their drawbacks, though. Virtualizing an entire computer means VMs consume a lot of resources, and because each VM runs a separate guest operating system, provisioning and boot times can be quite slow. Moreover, since a virtual machine operates as an independent machine, administrators often need to adopt infrastructure-style management tools and processes to update and run each environment.

In short, virtual machines let you subdivide a machine's resources into smaller individual computers, but the end result is not fundamentally different from managing a fleet of physical computers. As that fleet grows, each host's responsibilities may become more focused, but the tools, policies, and processes you use, and the capabilities of your systems, will probably not change much.
Containers take a different approach: rather than virtualizing the entire computer, they virtualize the operating system directly. A container runs as a specialized process managed by the host operating system's kernel, but with a limited and strictly controlled view of the system's processes, resources, and environment. Containers live on a shared system, yet they run as if they were on computers fully under their own control.

Rather than treating a container as a whole computer the way a virtual machine is treated, it is more common to manage containers in a way that resembles managing an application. For example, although you can bundle an SSH server into a container, this is not a recommended pattern. Instead, debugging is typically done through a logging interface, updates are applied by rolling out new images, and managing individual services inside the container is de-emphasized in favor of managing the container as a whole.

These characteristics mean that containers occupy the space between the strong isolation of virtual machines and the native management of conventional processes. Containers offer compartmentalization and process-focused virtualization, striking a good balance between confinement, flexibility, and speed.
Linux cgroups and namespaces
Linux control groups, or cgroups, are a kernel feature that allows processes and their resources to be grouped, isolated, and managed as a unit. A cgroup is bound to a set of processes, determines their access to resources, and provides mechanisms for managing and monitoring their behavior. Cgroups follow a hierarchical model, so a child process inherits the conditions of its parent and may be placed under further restrictions. In short, cgroups treat processes as a group, bind the required functionality to them, and limit the resources they can access.
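As a rough illustration of the mechanism, the sketch below (Go, Linux only, run as root) creates a control group under the cgroup v2 hierarchy, caps its memory, and moves the current process into it. The path /sys/fs/cgroup/demo and the 100 MiB limit are arbitrary choices for the example, and a cgroup v2 hierarchy with the memory controller enabled is assumed.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strconv"
)

func main() {
	// Create a new control group (assumes a cgroup v2 hierarchy
	// mounted at /sys/fs/cgroup and root privileges).
	cg := "/sys/fs/cgroup/demo"
	if err := os.MkdirAll(cg, 0o755); err != nil {
		panic(err)
	}
	// Limit every process in this group to 100 MiB of memory.
	if err := os.WriteFile(filepath.Join(cg, "memory.max"), []byte("104857600"), 0o644); err != nil {
		panic(err)
	}
	// Move the current process into the group; any children it
	// spawns inherit the membership and therefore the limit.
	pid := strconv.Itoa(os.Getpid())
	if err := os.WriteFile(filepath.Join(cg, "cgroup.procs"), []byte(pid), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("process", pid, "is now confined by", cg)
}
```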
The other kernel feature that containers rely on is the Linux namespace. Namespaces restrict what a process can see of the rest of the system: a process running inside a namespace cannot observe processes running outside it. Because a namespace defines a distinct context separated from the rest of the system, the namespace's process tree needs to reflect that context. Inside the namespace, the main process becomes PID 1 (process ID 1), the PID traditionally reserved for the OS's init system. This strictly constructed virtual process tree lets processes running in a container behave as if they were operating in a normal, unrestricted environment.
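The following Go sketch (again Linux only, run as root) shows the idea: it launches a shell inside fresh UTS, PID, and mount namespaces, and inside that shell `echo $$` prints 1, exactly the PID 1 behavior described above.

```go
package main

import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	// Start /bin/sh inside new UTS, PID, and mount namespaces.
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
	}
	// Inside the shell, `echo $$` prints 1: the process believes
	// it is the first process on its own private system.
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```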
Advantages of containerization
We have covered some of the technologies that make containers possible, so now let's look at their most important characteristics.
Lightweight virtualization
Containers are much lighter-weight than hardware virtualization with virtual machines. First, a container uses the host system's kernel and runs as a partitioned process within that operating system, rather than virtualizing all hardware resources and running a completely independent operating system on top of them.
Second, from the host's point of view, containers run like ordinary processes, which means they can be started and stopped quickly and can be confined to a limited set of resources. A container can view and access only a subset of the host's process space and resources, yet in most respects it behaves like a completely independent operating system.
The container image itself can also be very small. Minimal images enable workflows that depend on pulling the latest image without significant delay, which is a requirement for many fault-tolerant, self-healing distributed systems.
Environmental isolation
Containers are isolated from the host environment through Linux kernel features such as cgroups and namespaces. This provides a degree of functional isolation that keeps container environments from interfering with one another.
Although this isolation is not strong enough to count as a complete security sandbox, it does have real advantages. Because the host and each container keep their software in separate filesystems, dependency and library conflicts are much easier to avoid. Because the network environment can also be separated, applications inside a container can bind to their native ports without worrying about colliding with software on the host system or in other containers. The administrator can then choose how to map the container's network onto the host network as needed.
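For instance, the short Go sketch below (a hypothetical invocation, assuming the docker CLI and the public nginx image are available) publishes a container's port 80 on host port 8080, so software on the host reaches the containerized server at localhost:8080 while other containers remain free to bind their own port 80.

```go
package main

import (
	"os"
	"os/exec"
)

func main() {
	// Map host port 8080 onto the container's port 80 in the
	// background (-d); the port choice is up to the administrator.
	cmd := exec.Command("docker", "run", "-d", "-p", "8080:80", "nginx")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}
```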
Standardized packaging formats and runtime targets
One of the most compelling advantages of containers is that they unify and simplify the process of packaging and deploying software. Container images let you bundle an application and all of its runtime requirements into a single unit that can be deployed across many kinds of infrastructure.
Inside the container, developers can install and use whatever libraries their applications need without worrying about interfering with the host system's libraries. Dependencies are version-locked at the moment the image is created. Because the container runtime serves as a standard, stable deployment platform, developers do not need to know which specific machine a container will run on: as long as the container runtime is operational and sufficient system resources are available, the container runs just as it did in the development environment.
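A hypothetical Dockerfile makes this version-locking visible: the base image tags below are pinned, so anyone building the image gets the same toolchain and runtime regardless of what is installed on their host. The application layout is invented for illustration.

```dockerfile
# Build stage: a pinned Go toolchain, independent of the host's setup.
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Runtime stage: a pinned, minimal base image carrying only the binary.
FROM alpine:3.19
COPY --from=build /app /usr/local/bin/app
CMD ["app"]
```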
Similarly, from an operations point of view, containerization standardizes the requirements of the deployment environment. Rather than provisioning and maintaining environments tailored to each application's language, runtime, and dependencies, administrators can focus on maintaining generic hosts that serve as container platforms and on allocating the pools of resources those machines can access. Binding all application-specific characteristics inside the container creates a natural boundary between the concerns of the application and the concerns of the platform.
Scalability
The paradigms established around containers let you scale applications through relatively simple mechanisms. Lightweight images, fast startup times, the ability to create, test, and deploy "golden images", and standardized runtime environments all make it possible to build highly scalable systems.

How scalable a system can be depends heavily on the application's architecture and on how the container images themselves are built. Designs that work well with the container paradigm take full advantage of the container format to strike a good balance of speed, availability, and manageability. Service-oriented architectures, and microservices in particular, are popular in containerized environments, because decomposing an application into discrete components with focused purposes makes development, scaling, and updating much easier.
Terminology for containers
Before closing, let's review some of the key terms introduced in this article, along with a few new ones you may encounter as you continue learning.
Container: on Linux systems, a container is an operating system virtualization technology that packages an application and its dependencies and runs them in an isolated environment.

Container image: a container image is a static file that defines the filesystem and behavior of a specific container configuration. It is used as a template for creating containers.

Container orchestration: container orchestration describes the processes and tooling required to manage fleets of containers across multiple hosts. It typically uses a container platform to control scaling, fault tolerance, resource allocation, and scheduling.

Container runtime: a container runtime is the component that runs and manages containers on a single host. The most basic requirement is usually the ability to provision a container from a given image, but many runtimes bundle additional functionality such as process management, monitoring, and image management. Docker's docker command includes a container runtime, but many alternatives exist for different use cases.
Docker: Docker was the first technology to successfully popularize the idea of Linux containers. Its ecosystem of tools includes docker, a container runtime with extensive container and image management features; docker-compose, a system for defining and running multi-container applications; and Docker Hub, a container image repository.

Dockerfile: a Dockerfile is a text file that describes how to build a container image. It defines the base image, the commands to run within the system, and how the process inside the container is started and managed. Although the Dockerfile is not the only option, it is the most common format for defining container images, even when Docker's own image-building capabilities are not used.

Kata Containers: Kata Containers is a technology for managing lightweight virtual machines using models, workflows, and tooling that replicate the container experience. It seeks to reap the benefits of containers while providing stronger isolation and security.

Kubernetes: Kubernetes is a powerful container orchestration platform that manages clusters of container hosts and the workloads that run on them. It provides the tooling and abstractions needed to deploy, scale, monitor, and manage containers in highly available production environments.
Linux cgroups: Linux cgroups, or control groups, are a kernel feature that groups processes together and determines their access to resources. Containers on Linux are implemented with cgroups, which make it easy to manage resources and isolate sets of processes.

Linux namespaces: a Linux namespace is a kernel feature that limits what a process or cgroup can see of the rest of the system. Containers on Linux use namespaces to help isolate their workloads and resources from other processes running on the system.

LXC: LXC is a form of Linux containerization that predates Docker and many other technologies while relying on many of the same kernel features. Compared with Docker, LXC is closer to a virtual machine, since it typically virtualizes an entire operating system rather than just the processes needed to run an application.

Virtual machine: a virtual machine, or VM, is a hardware virtualization technology that emulates an entire computer. A complete operating system is installed inside the virtual machine to manage its internal components and access its computing resources.

Virtualization: virtualization is the process of creating, running, and managing virtual environments or computing resources. It is a way of abstracting physical resources and is often used to divide a pool of resources for different purposes.
Conclusion
Containers are not a silver bullet, but they do offer advantages over running software on bare metal or with other virtualization technologies. By providing lightweight, functional isolation and a rich ecosystem of tools to help manage complexity, containers give you great flexibility and control both during development and throughout the operations life cycle.