2025-01-19 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 06/02 Report
1. The underlying (isolation) technologies that Docker (Linux containers) depends on
1 Namespace
Namespaces are used for container isolation. With namespaces, a process inside a docker container sees a complete Linux world of its own; from the host's point of view, the process in the container is just an ordinary host process. Namespaces provide this pid mapping and isolation, so each container lives in its own little world on the host.
Namespaces include: pid namespace, net namespace, ipc namespace, mnt namespace, uts namespace, and user namespace.
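The namespace memberships of any process can be inspected directly under /proc; a minimal sketch on a Linux host (kernel 3.8 or later):

```shell
# Each entry under /proc/<pid>/ns is a symlink whose target,
# e.g. "uts:[4026531838]", names the namespace type and its number.
# Two processes in the same namespace show the same number.
ls -l /proc/$$/ns

# Read a single namespace id, e.g. the UTS namespace of this shell:
readlink /proc/$$/ns/uts
```

Comparing the link targets for two pids tells you whether they share a given namespace.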
For example, run a container and check its process number. You can see that the pid of the container is 3894, and a /proc/3894 directory exists on the host. You can terminate the container by killing this host process.
View the /proc/<pid>/ns files. Starting from kernel 3.8, users can see symlinks under /proc/<pid>/ns pointing to the namespaces the process belongs to, where a number such as 4026531839 is the namespace number.
Run a container and get its pid:
# ls -l /proc/<pid>/ns

2 Cgroup

Cgroups are used to limit the resources available to containers. The relative CPU weight of a container is controlled by cpu.shares, which can be inspected at:

# cat /sys/fs/cgroup/cpu/system.slice/docker-<container id>/cpu.shares

With equal weights, the CPU usage of two competing containers tends toward an even split.
Setting an upper limit on CPU usage per scheduling period
In cgroups, cpu.cfs_period_us and cpu.cfs_quota_us limit the cpu time that all processes in the group can use per unit time. cpu.cfs_period_us is the length of the period; the default is 100000, i.e. 100 milliseconds. cpu.cfs_quota_us is the cpu time that can be used during each period; the default is -1, i.e. unlimited.
cpu.cfs_period_us: sets the period length (in microseconds); it must be used together with cfs_quota_us.
cpu.cfs_quota_us: sets the maximum cpu time (in microseconds) usable within one period; this limit refers to the task's use of a single cpu.
For example, if the container process should use a single CPU for 0.2 seconds out of every second, you can set cpu-period to 1000000 (1 second) and cpu-quota to 200000 (0.2 seconds).
On multi-core machines, if cfs_quota_us is twice cfs_period_us, the group can fully use two CPUs. For example, to allow the container process to fully occupy two CPUs, set cpu-period to 100000 (0.1 s) and cpu-quota to 200000 (0.2 s).
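The effective CPU allowance is simply the quota divided by the period. A small sketch of the arithmetic behind the two examples above (the helper name cpus_pct is our own, used only for illustration):

```shell
# Effective CPU share, as a percentage of one core:
#   100 * cfs_quota_us / cfs_period_us
cpus_pct() { awk -v q="$1" -v p="$2" 'BEGIN { printf "%d\n", 100 * q / p }'; }

cpus_pct 200000 1000000   # quota 0.2s per 1s period   -> 20  (0.2 of one CPU)
cpus_pct 200000 100000    # quota 0.2s per 0.1s period -> 200 (two full CPUs)
```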
Examples of use:
Create a container using the docker run command, then execute top on the host machine. You can see that cpu usage accounts for almost 100%.
The cpu period configuration of the resulting cgroup can be found in the following directory:
/sys/fs/cgroup/cpu/system.slice/docker-<container id>/
Modify the container's cpu.cfs_period_us and cpu.cfs_quota_us values, then execute top again to view cpu usage. You can now see that cpu usage accounts for 50%.
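The modification itself is just writing the two files in the container's cgroup directory. A sketch under the cgroup v1 layout shown above; <container id> is a placeholder to be filled in on a real host, and root privileges are required:

```shell
cd /sys/fs/cgroup/cpu/system.slice/docker-<container id>/
echo 100000 > cpu.cfs_period_us   # 100 ms period
echo  50000 > cpu.cfs_quota_us    # 50 ms quota -> 50% of one CPU
```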
RT scheduling policy: the real-time scheduling policy is similar to the period-based allocation of the fair scheduling policy; it allocates a fixed running time within each period.
cpu.rt_period_us: sets the period length.
cpu.rt_runtime_us: sets the running time within the period.
Cpuset: CPU core binding
For servers with multi-core CPUs, docker can also restrict which cpu cores and memory nodes a container runs on, using the --cpuset-cpus and --cpuset-mems parameters. This is especially useful on servers with a NUMA topology (multiple CPUs, multiple memory nodes) to optimize the performance of containers that require high-performance computing. If the server has only one memory node, configuring --cpuset-mems has basically no observable effect.
Note:
Today's machines have multiple CPUs and multiple memory banks. In the past, we treated memory as one large block that all CPUs accessed with the same cost. As processor counts grew, however, contention on that shared memory became increasingly serious, and performance stopped scaling once memory access became the bottleneck. NUMA (Non-Uniform Memory Access) is a model introduced for such environments. Suppose a machine has 2 processors and 4 memory banks: one processor is paired with two memory banks to form a NUMA node, so the machine has two NUMA nodes. Physically, the processor and the memory banks within a NUMA node are closer together, so access between them is faster. For example, with processors cpu1 and cpu2 on the left and right, each with two memory banks (memory1.1, memory1.2, memory2.1, memory2.2), NUMA node1's cpu1 accesses memory1.1 and memory1.2 faster than memory2.1 and memory2.2. Therefore, if NUMA mode can ensure that the CPUs in a node only access the memory banks in that same node, efficiency is highest.
Examples of use:
The example creates a container that can only use cores 0, 1, and 2. The cpu core configuration of the resulting cgroup is as follows:
cpuset.cpus: this file lists the CPU cores that the cgroup can use. For example, 0-2,16 represents cores 0, 1, 2 and 16.
cpuset.mems: similar to cpuset.cpus; it lists the memory nodes that the cgroup can use, in the same format as above.
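The list syntax in cpuset.cpus can be expanded mechanically. A small helper sketch (expand_cpuset is our own name, not part of docker or the kernel):

```shell
# Expand a cpuset list like "0-2,16" into individual core numbers.
expand_cpuset() {
  echo "$1" | tr ',' '\n' | while IFS=- read -r lo hi; do
    seq "$lo" "${hi:-$lo}"   # a bare number expands to itself
  done | xargs               # join onto one space-separated line
}

expand_cpuset "0-2,16"       # -> 0 1 2 16
```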
Through docker exec <container> taskset -c -p 1 (the first process inside the container usually has pid 1), you can see the binding between the container's process and the CPU cores, confirming that core binding has been achieved.
Summary:
Mixed use of CPU quota control parameters
Of the parameters above, cpu-shares only takes effect when containers compete for time slices on the same core. If cpuset-cpus pins container A to core 0 and container B to core 1, and only these two containers use those cores on the host, each takes all of its core's resources and cpu-shares has no observable effect.
cpu-period and cpu-quota are generally used together. On a single core, or when the container is forced onto one cpu core via cpuset-cpus, setting cpu-quota larger than cpu-period will not let the container use more CPU resources.
cpuset-cpus and cpuset-mems are only meaningful on servers with multiple cores and multiple memory nodes, and must match the actual physical configuration; otherwise resource control will not achieve its purpose.
When the system has multiple CPU cores, binding the container to specific cores with cpuset-cpus makes controlled testing easier.
Memory quota control
Like CPU control, docker provides several parameters to control a container's memory quota, covering aspects such as the container's swap size and available memory. The main parameters are as follows:
Docker provides -m / --memory to limit the memory usage of a container. If -m is not set, container memory is unlimited by default, and the container can use all the free memory on the host.
Example of using memory quota control
Set the memory limit of the container, as in the following reference command:
# docker run -dit --memory 128m image
By default, docker grants the container a swap allowance equal to the memory size specified by --memory; that is, the container created by the above command can actually use up to 256MB, not 128MB. If you need to customize the swap size, control it with the --memory-swap parameter.
You can find that when stress testing with 256MB, the process is killed by OOM (out of memory) because the limit (128MB memory + 128MB swap) is exceeded.
When using 250MB for stress testing, the process runs normally.
Through docker stats, you can see that the memory of the container is fully loaded.
# docker stats test2
For the container created by the above command, you can see in the cgroups configuration that the container's memory size is 128MB (128 × 1024 × 1024 = 134217728 B), and memory plus swap adds up to 256MB (256 × 1024 × 1024 = 268435456 B).
# cat /sys/fs/cgroup/memory/system.slice/docker-<container id>/memory.limit_in_bytes
134217728
# cat /sys/fs/cgroup/memory/system.slice/docker-<container id>/memory.memsw.limit_in_bytes
268435456
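The two cgroup values follow directly from the flag; a one-line check of the arithmetic, assuming the default swap behaviour described above:

```shell
mem=$((128 * 1024 * 1024))   # --memory 128m      -> memory.limit_in_bytes
memsw=$((mem * 2))           # plus an equal swap -> memory.memsw.limit_in_bytes
echo "$mem $memsw"           # 134217728 268435456
```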
Disk IO quota control
It mainly includes the following parameters:
--device-read-bps: limits the read speed (bytes per second) on a given device; units can be kb, mb, or gb.
--device-read-iops: limits the read speed of a given device in read IOs per second.
--device-write-bps: limits the write speed (bytes per second) on a given device; units can be kb, mb, or gb.
--device-write-iops: limits the write speed of a given device in write IOs per second.
--blkio-weight: the container's default disk IO weight. Valid values range from 10 to 1000.
--blkio-weight-device: IO weight control for a specific device, in the format DEVICE_NAME:WEIGHT.
Disk IO quota control example
blkio-weight
Create two containers with different --blkio-weight values using the following command:
Execute the following dd command in both containers at the same time to test.
Note: oflag=direct bypasses the file system cache, so each write request is sent to the disk as a direct IO instruction.
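Since the original commands were shown as screenshots, here is a sketch of what such a test could look like (the image name ubuntu and the specific weights are our own choices, not from the original):

```shell
# Two containers with a 3:1 IO weight ratio (valid range 10-1000).
docker run -dit --name c1 --blkio-weight 300 ubuntu
docker run -dit --name c2 --blkio-weight 900 ubuntu

# In each container (concurrently, e.g. from two shells),
# write 1 GB with direct IO, bypassing the page cache:
docker exec c1 dd if=/dev/zero of=/tmp/test.out bs=1M count=1024 oflag=direct
docker exec c2 dd if=/dev/zero of=/tmp/test.out bs=1M count=1024 oflag=direct
```

When both dd commands run concurrently against the same disk, c2 should receive roughly three times the IO bandwidth of c1, in line with the 900:300 weights.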
3 Chroot
The file system seen inside the container is a complete Linux system, including /etc, /lib, and so on; this is implemented through chroot.
4 Veth
Executing ifconfig inside the container shows an eth0 network card. How does it communicate? In fact, there is a virtual network card (e.g. veth73f7) on the host, bridged to the card inside the container. All traffic leaving the container passes through this host-side virtual card, and so does traffic entering the container.
5 Union FS
For this kind of layered file system, a good implementation is AUFS, which provides copy-on-write at file granularity and makes instant startup of massive numbers of containers possible.
6 Iptables, netfilter
These are mainly used to filter IP packets. For example, they can implement network policies such as: containers cannot communicate with each other; a container cannot access the host's network, but can reach the external network through the host's network card.
2. Having studied Docker for some time and learned its basic implementation principles and usage, here is a summary of some typical application scenarios of Docker.
1. Configuration simplification
This is the main usage scenario for Docker. Write all of an application's configuration work into a Dockerfile, create an image, and then use this image for unlimited deployments. This greatly simplifies application deployment: package once, deploy many times, with no tedious configuration on each deployment. It also speeds up development, since programmers can quickly build development and test environments and devote their energy to development work instead of configuration.
2. Code pipeline management
The code from the development environment to the test environment and then to the production environment needs to go through many intermediate links. Docker provides a consistent environment for the application from development to launch. Developers and testers only need to pay attention to the code of the application, which makes the code pipeline very simple, so that the application can be continuously integrated and released.
3. Rapid deployment
Before virtual machines, introducing new hardware resources could take days. Virtualization reduced this time to minutes, and Docker reduces it to seconds: creating a container only starts a process, with no operating system to boot.
4. Application isolation
Resource isolation is a strong demand for companies that provide shared hosting services. If you use VM, although the isolation is very thorough, the deployment density is relatively low, resulting in increased costs.
The Docker container makes full use of the namespace of the linux kernel to provide resource isolation. Combined with cgroups, you can easily set the resource quota for each container. It can not only meet the needs of resource isolation, but also easily set different levels of quota limits for different levels of users.
5. Server resource integration
Just as multiple applications are integrated through VM, Docker's ability to isolate applications allows Docker to integrate server resources as well. With no additional operating system footprint and the ability to share unused memory among multiple instances, Docker can provide a better server consolidation solution than VM.
Usually, the resource utilization rate of the data center is only 30%. The resource utilization can be improved by using Docker and effective resource allocation.
6. Multi-version hybrid deployment
With the continuous upgrading of products, it is very common for enterprises to deploy multiple applications or multiple versions of the same application on one server. However, when multiple versions of the same software are deployed on a server, file paths, ports and other resources often conflict, resulting in the problem that multiple versions cannot coexist.
If you use docker, the problem will be very simple. Because each container has its own independent file system, there is no file path conflict at all; for port conflicts, you only need to specify a different port mapping when you start the container to solve the problem.
7. Version upgrade rollback
An upgrade is often not only an upgrade of the application itself, but also an upgrade of dependencies. However, the dependencies of new and old software are likely to be different, or even conflicting, so it is generally difficult to roll back in the traditional environment.
If we use docker, we only need to create a new docker image each time the application is upgraded, stop the old container first, and then start the new container. When you need to roll back, stop the new container, and the old one can be started to complete the rollback. The whole process is completed in seconds, which is very convenient.
8. Internal development environment
Before the advent of container technology, companies used to serve as a development test environment by providing one or more virtual machines for each developer. The load of the development and test environment is generally low, and a lot of system resources are wasted on the process of the virtual machine itself.
The Docker container does not have any additional CPU or memory overhead and is ideal for providing an internal development and test environment. And because the Docker image can be easily shared within the company, it is also of great help to the standardization of the development environment.
9. PaaS
Use Docker to build large-scale clusters and provide PaaS. This application is the most promising one, and many startups are already using Docker for PaaS, such as the Skylark Cloud platform. Users only need to submit the code, and all the operation and maintenance work is done by the service company. And for users, the whole application deployment online is one-click, very convenient.
10. Cloud desktop
A graphical desktop runs inside each container, and the user connects to the container through the RDP or VNC protocol. The virtual desktop provided by this scheme is lighter than traditional desktops based on hardware virtualization, and runs much faster. However, the scheme is still experimental and its feasibility is not yet proven; you can refer to the Docker-desktop scheme.