This article gives a detailed, example-based look at Linux containers. I think it is quite practical, so I'm sharing it here as a reference, and I hope you gain something from reading it.
I'll let you in on a secret: the DevOps and cloud computing machinery that makes my applications available around the world is still a bit of a mystery to me. Over time, though, I've realized that understanding how machines are provisioned at scale and how applications are deployed onto them is very important knowledge for a developer. It's like being a professional musician: you certainly need to know how to play your instrument, but if you don't understand how a recording studio works or how to fit into an orchestra, it's very hard to work in those environments.
In the world of software development, getting your code out into the wider world is just as important as writing it. DevOps matters, and it matters a lot.
So, to bridge the gap between development (Dev) and deployment (Ops), I'll introduce container technology from scratch. Why containers? Because there is strong evidence that containers are the next step in machine abstraction: treating computers as places rather than things. Understanding containers is a journey we can take together.
In this article, I will introduce the concepts behind containerization, including how containers differ from virtual machines, the logic behind how containers are built, and how containers fit into an application's architecture. I'll explore how lightweight Linux operating systems fit into the container ecosystem, and I'll discuss using images to create reusable containers. Finally, I'll show how container clustering enables your application to scale quickly.
In a later article, I'll walk you step by step through containerizing a sample application and creating a managed cluster of the application's containers. I'll also show how to use Deis to deploy the sample application to your local system and to virtual machines from various cloud vendors.
Let's get started.
To understand where containers fit in, you first need to understand their predecessor: the virtual machine.
A virtual machine (VM) is a software abstraction of a computer that runs on a physical host. Provisioning a virtual machine is like buying a computer: you define how many CPUs you want and how much RAM and disk storage you need. Once the machine is provisioned, you load it with an operating system and whatever servers or applications you want the VM to support.
Virtual machines allow you to run multiple emulated computers on a single hardware host. Here is a simple diagram:
Virtual machines let you make full use of your hardware. You can buy one big, rumbling machine and run many virtual machines on it: a database VM here, a cluster of VMs running the same version of your custom application there. You get a lot of scalability out of limited hardware. If you need more VMs and the host hardware has the capacity, you simply add them; and when you no longer need a VM, you can shut it down and delete its image.
Limitations of Virtual Machines
However, virtual machines do have limitations.
As shown above, suppose you create three virtual machines on a single host. The host has 12 CPUs, 48 GB of memory, and 3 TB of storage. Each virtual machine is configured with 4 CPUs, 16 GB of memory, and 1 TB of storage. So far, so good: the host has the capacity.
But there's a flaw here. All of the resources allocated to a virtual machine, whatever they are, are dedicated to that machine. Each VM was allocated 16 GB of memory, but if the first VM never uses more than 1 GB of it, the remaining 15 GB just sits there wasted. If the third VM uses only 100 GB of its allocated 1 TB of storage, the remaining 900 GB becomes wasted space.
There is no fluidity to the resources: each virtual machine owns everything it has been allocated. So, in some ways, we're back to where we were before virtual machines, spending most of our money on resources that go unused.
Virtual machines have another flaw: they take a long time to spin up. If you're in a situation where your infrastructure needs to grow quickly, then even if adding virtual machines is automated, you'll still find a lot of time wasted waiting for machines to come online.
Conceptually, a container is a Linux process that Linux treats as just another running process; the process knows only what it has been told about. In addition, under containerization, the container process is assigned its own IP address. That is important enough to say three times, so here's the second: under containerization, a container process gets its own IP address. Once it has an IP address, the process is an identifiable resource on the host's network. You can then run a command on the container manager to map the container's IP address to a host IP address that is reachable from the public network. Once that mapping is in place, the container is, for all intents and purposes, an accessible, standalone machine on the network, conceptually similar to a virtual machine.
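To make that mapping concrete, here is a minimal sketch using the Docker CLI (nginx is used purely as an example image, and the port numbers are arbitrary):

    # Start a container; the container manager gives it its own IP address
    # on the host's bridge network.
    docker run -d --name web -p 8080:80 nginx

    # Show the container's private IP address.
    docker inspect -f '{{ .NetworkSettings.IPAddress }}' web

    # The -p flag maps the container to the host: traffic to host port 8080
    # is forwarded to port 80 inside the container.
    curl http://localhost:8080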
And here is the third time: a container is an independent Linux process with its own IP address, which makes it identifiable on the network. Below is a schematic diagram:
Containers (processes) share the host's resources dynamically and cooperatively. If a container needs only 1 GB of memory, it uses only 1 GB; if it needs 4 GB, it uses 4 GB. The same goes for CPU and storage. The allocation of CPU, memory, and storage is dynamic, unlike the static allocation typical of a virtual machine, and all of this resource sharing is managed by the container manager.
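The container manager can also cap what a container is allowed to consume. Here is a minimal sketch using Docker's resource flags (the image name and the limits are arbitrary examples):

    # This container may use at most 1 GB of RAM and 2 CPUs; below those caps
    # it consumes only what it actually needs, unlike a VM's fixed allocation.
    docker run -d --name worker --memory=1g --cpus=2 my-app:latest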
Finally, containers can be started very quickly.
So the benefit of containers is that you get the isolation and encapsulation of virtual machines without the drawback of dedicated, static resources. And because containers load into memory quickly, you get better performance when scaling out to many containers.
Hosting, Configuring, and Managing Containers
The computer that hosts containers runs a version of Linux stripped down to the essentials. Today, a popular host operating system for containers is CoreOS; others include Red Hat Atomic Host and Ubuntu Snappy.
The Linux operating system is shared by all of the containers on the host, which reduces duplication and shrinks each container's footprint. A container holds only the parts that are unique to it. Below is a schematic diagram:
You configure a container with the components it needs. A container component is called a layer, and a layer is a container image (you'll see more about container images in the next section). You start with a base layer, which is typically the operating system you want to use inside the container. (The container manager supplies only the parts of that operating system that are not already present on the host.) As you build up your container configuration, you add layers: Apache if you want a web server, for example, or a PHP or Python runtime if the container needs to run scripts.
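As a rough sketch of that layering, here is what a container configuration file can look like as a Dockerfile (the base image, packages, and paths below are illustrative assumptions, not something prescribed by this article):

    # Base layer: the operating system userland inside the container.
    FROM ubuntu:22.04

    # Additional layers: a web server and a scripting runtime.
    RUN apt-get update && apt-get install -y apache2 php

    # The application's own files form the top layer.
    COPY ./src /var/www/html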
Layering is very flexible. If an application or service container needs PHP 5.2, you configure the container accordingly; if another application or service needs PHP 5.6, no problem, you configure that container with PHP 5.6. Unlike a virtual machine, where changing a runtime dependency means a round of configuration and installation work, with containers you just redefine the layers in the container configuration file.
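For example, switching runtime versions is just a change to the base layer in each container's configuration file (the tag below is illustrative and assumes a matching image exists in your registry):

    # This container pins the PHP 5.6 runtime as its base layer.
    FROM php:5.6-apache
    COPY ./src /var/www/html

A second service that needs a different runtime would use an identical file whose FROM line names a different tag; nothing has to be reinstalled or reconfigured on the host.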
All of the container functionality described above is controlled by a piece of software called the container manager. Today, the most popular container managers are Docker and Rocket. The diagram above shows a hosting scenario in which Docker is the container manager and CentOS is the host operating system.
Containers are built from images. When you need to build your application into a container, you compile an image. An image is the template for everything your container needs to do its job. (Images can be built on top of other images, as shown below.) Images are stored in a registry, which is accessed over the network.
Conceptually, a registry is similar to a Maven repository for Java developers or a NuGet server for .NET developers. You create a container configuration file that lists the images your application needs. You then use the container manager to create a container that includes your application's code along with the resources downloaded from the container registry. For example, if your application includes PHP files, the container configuration file declares that the PHP runtime should be obtained from the registry, and it also declares which .php files should be copied into the container's file system. The container manager packages everything your application needs into a single container that runs on a host computer under the container manager's supervision.
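As a rough sketch of that flow with Docker (the image name, registry address, and file layout are made-up examples):

    # Build an image from the container configuration file (a Dockerfile) in
    # the current directory; the layers it names are pulled from the registry.
    docker build -t registry.example.com/myteam/my-php-app:1.0 .

    # Publish the image so that other hosts can retrieve it.
    docker push registry.example.com/myteam/my-php-app:1.0

    # On any host running the container manager, pull the image and run it.
    docker run -d -p 80:80 registry.example.com/myteam/my-php-app:1.0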
Here is a schematic of the concept behind container creation:
Let's take a closer look at this diagram.
(1) represents the container configuration file, which defines what your container needs and how it should be built. When you run the container on the host, the container manager reads the configuration file, fetches the container images you need from a registry in the cloud, and (2) adds those images as layers to your container.
If still more images are needed to make up those images, the container manager fetches them as well and adds them as layers. (3) The container manager then copies the required files into the container.
If you use a provisioning service such as Deis, the application container you just built is turned into an image, and (4) the provisioning service deploys it to the cloud vendor of your choice, for example AWS or Rackspace.
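For reference, a provisioning step with Deis looked roughly like the following; this is recalled from the Deis v1, Heroku-style workflow, so treat the exact commands as an assumption and check the documentation for your version:

    # Create an application on the Deis controller (the app name is hypothetical).
    deis create image-processor

    # Push the code; Deis builds the container image and deploys it to the
    # cluster running at your chosen cloud vendor.
    git push deis master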
At that point, the containers in the cluster are ready to go. This is a good example of how containers provide better configuration flexibility and resource utilization than virtual machines. But that's not all.
The real flexibility of containers shows up in clusters. Remember that each container has its own IP address, which means it can be placed behind a load balancer; and putting containers behind a load balancer takes things up a notch.
You can run a cluster of containers behind a load-balancing container to get higher performance and high-availability computing. Here's an example:
Suppose you've developed a resource-intensive application, such as an image-processing service. Using container provisioning technology like Deis, you can create a container image that includes your image-processing program and all the resources it needs. You can then deploy one or more instances of that container image to hosts behind the load balancer. Once the container image has been built, you can use it whenever you need it, and when the system gets busy, more container instances can be added to keep up with the load.
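As a minimal sketch of that scale-out step, here is one way to do it with Docker Compose (used purely as an illustration; the service name and compose file are assumptions, and the article's own workflow uses Deis):

    # Assuming a docker-compose.yml that defines an "imageproc" service
    # behind a load balancer, start three identical container instances.
    docker compose up -d --scale imageproc=3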
Here's more good news: you don't need to manually reconfigure the load balancer to accept a container instance every time you add one to the environment. You can use service-discovery technology so that each container announces its availability to the balancer; once the balancer learns of the new node, it starts distributing traffic to it.
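One CoreOS-era way to implement that announcement is to have each container register itself in a key-value store such as etcd. Here is a minimal sketch using the etcd v2 command-line client (the key names and addresses are made up):

    # Each instance announces itself under a well-known prefix, with a short
    # TTL so entries for dead instances expire automatically.
    etcdctl set /services/imageproc/instance-1 '10.1.2.3:8080' --ttl 60

    # The load balancer (or a helper such as confd) reads this prefix and
    # rebuilds its backend list as instances come and go.
    etcdctl ls /services/imageproc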
Putting It All Together
Container technology picks up where virtual machines leave off. Host operating systems such as CoreOS, Red Hat Atomic Host, and Ubuntu Snappy, combined with container management technologies such as Docker and Rocket, are making containers increasingly popular.
Although containers are becoming more common, mastering them takes some time. But once you get the hang of them, you can use provisioning technologies like Deis to make container creation and deployment even easier.
About "Linux container sample analysis" This article is shared here, I hope the above content can be of some help to everyone, so that you can learn more knowledge, if you think the article is good, please share it to let more people see.