2025-01-28 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 06/02 Report --
The emergence of Docker is one of the biggest revolutions in software engineering in the past decade. Docker can reshape every aspect of software delivery: development, testing, operations, and maintenance. Earlier virtualization technologies such as VMware and OpenStack are generally heavyweight. Take VMware as an example: we first need the VMware software itself, and then install a complete guest operating system on top of it (an Ubuntu image is roughly 1 GB). This consumes a great deal of memory, because every operating system has a notion of "resident memory": as long as the system is running, it consumes part of the memory whether or not any software is running inside it. In a distributed cluster, this adds up to a great consumption of system resources.
The figure below shows the architecture of container technology, as represented by Docker:
From the above, we can see that Docker virtualizes with the support of features in the Linux kernel, specifically namespaces and cgroups. Namespaces provide isolation of process space, networking, and filesystems, while cgroups handle the allocation, control, and accounting of computing resources such as memory and CPU. In other words, the kernel lets us build multiple separate user spaces (userspace). When we write programs such as a server or a database, at run time they only care about userspace, so on top of namespaces and cgroups we can rely on multiple processes to achieve distribution on a single server. Each process is equivalent to a machine, and this is completely transparent to the application: it is not aware that it is running in a Docker-virtualized environment rather than on a real machine.

Because Docker implements an operating-system user space as an ordinary process, its overhead is almost negligible. When we run many machine instances on one physical machine, we can drain the full potential of the hardware: memory, CPU, IO, and so on. From a productivity perspective, container technology as represented by Docker brings out the full value of lightweight virtualization and of the resources invested in cloud computing centers (where the investment is essentially hardware), so we have good reason to choose Docker.

The core differences between traditional virtualization and Docker virtualization are the following. Creation speed: the former is very slow (usually minutes), while the latter is very fast (usually seconds). System performance: the former emulates the hardware layer, which lengthens the system-call chain and causes performance loss, while the latter shares the host kernel with almost no loss.
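On a Linux host, the namespaces a process belongs to can be inspected directly under /proc, which makes the isolation described above visible. A minimal sketch (Linux-only, assuming a standard /proc layout; the function name is illustrative):

```python
import os

def process_namespaces(pid="self"):
    """Return a mapping of namespace type -> namespace identifier
    for the given pid, read from /proc/<pid>/ns (Linux only)."""
    ns_dir = f"/proc/{pid}/ns"
    return {name: os.readlink(os.path.join(ns_dir, name))
            for name in sorted(os.listdir(ns_dir))}

if __name__ == "__main__":
    # Two processes in the same container share these identifiers;
    # processes in different containers see different ones.
    for ns_type, ident in process_namespaces().items():
        print(ns_type, ident)
```

Comparing the output for two pids shows at a glance whether they share a pid, network, or mount namespace, i.e. whether they live in the same "virtual machine" in Docker's sense.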
Resource consumption: the former consumes a lot of memory (for example, resident memory), while the latter consumes very little, so one machine can easily host many containers. Operating-system coverage: traditional virtualization can run Linux, Windows, macOS, and other guests, while Docker currently only supports operating systems that share the Linux kernel, such as Ubuntu and CentOS (except with the help of extra tools).

Docker is currently the most successful representative of container virtualization, and its creation speed is very fast. Suppose we often need to create 100 nodes: with traditional VMware-based virtualization this is troublesome and time-consuming, but with Docker you can simply adjust a few parameters and go from one existing instance on a node to 10 or 100. Traditional virtualization emulates hardware; much as Java adds a virtual-machine layer compared with C, every call from the guest must pass through the emulated hardware layer and the kernel, whereas Docker calls the hardware through the host kernel directly, with almost no performance loss. As for resource consumption, with traditional technology installing a guest system takes at least about 1 GB, and even a guest running no services at all has resident memory; Docker, by contrast, consumes very few resources. A single host (an ordinary PC with 8 or 16 GB of RAM) can easily create many containers: if the services you run in the containers are not very complex, you can comfortably build 20 to 50 such virtual nodes on a PC. To establish a development environment on other systems such as Windows or macOS, you can use a tool like Boot2Docker, but Ubuntu or CentOS is recommended.

One point worth emphasizing again is that container technology as represented by Docker builds on cgroups, namespaces, and related mechanisms in the kernel, as shown in the figure below. For independence, Linux namespaces give us the isolation effect: each namespace is its own secluded little world (a "Peach Blossom Spring").
From the application's point of view, what it sees is a complete operating system; from another point of view, since many containers share one machine and every container uses memory and CPU, the kernel must also limit resources, which is the job of cgroups. In short, the Linux kernel uses the isolation mechanism of namespaces and the resource-control mechanism of cgroups to manage containers, and managing a container largely comes down to resource control.

At run time, cgroups form a tree of control groups (see the figure above), and each subsystem is associated with that tree structure. A subsystem represents one kind of resource: the cpu and memory entries in the diagram are subsystems, and besides these two there is also IO. Each subsystem is attached to specific cgroup instances; in the diagram there is one parent cgroup and two child cgroups (three in total), with two subsystems (cpu and memory) attached. Through a specific cgroup instance, these resources are connected to a specific task, and from the kernel's perspective a task is simply a process. So CPU, memory, and IO are controlled for specific processes via cgroups, and such a task (process) here is a container instance; adding it to a cgroup in the tree structure limits the container's resources.

The relationship between cgroups and tasks is many-to-many: a process itself uses many resources, one cgroup can contain many tasks, and one task can belong to multiple cgroups, although within a single hierarchy a task can only be in one cgroup. For our container, when we create it there may, for example, be a cpuset cgroup for CPU placement; the pid of the specific container is then written into the tasks file of that cpuset cgroup, so the container (the process) joins the cgroup's pool.
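The tree-plus-subsystems structure just described can be modelled in a few lines of Python. This is a conceptual sketch of a cgroups v1 hierarchy, not a kernel API; the class and attribute names are illustrative:

```python
class Cgroup:
    """One node in a cgroup hierarchy (conceptual model of cgroups v1).

    Subsystems (cpu, memory, blkio, ...) are attached to the hierarchy,
    and tasks (pids) are attached to nodes; a task may appear in several
    hierarchies, so the task<->cgroup relation is many-to-many."""

    def __init__(self, name, parent=None, limits=None):
        self.name = name
        self.parent = parent
        self.children = []
        self.limits = limits or {}   # e.g. {"cpu.shares": 512}
        self.tasks = set()           # pids attached to this node
        if parent:
            parent.children.append(self)

    def attach(self, pid):
        # Mirrors writing the pid into the cgroup's `tasks` file.
        self.tasks.add(pid)

    def effective_limits(self):
        # A child inherits its parent's limits unless it overrides them.
        inherited = self.parent.effective_limits() if self.parent else {}
        return {**inherited, **self.limits}

# Root with two children, as in the figure: one parent, two child cgroups.
root = Cgroup("root", limits={"cpu.shares": 1024,
                              "memory.limit_in_bytes": 2**31})
web = Cgroup("web", parent=root, limits={"memory.limit_in_bytes": 2**28})
db = Cgroup("db", parent=root)

web.attach(4321)   # the container's init pid joins the "web" group
```

The `effective_limits` walk mirrors how a child cgroup is constrained by its ancestors: `web` overrides the memory limit but still inherits the cpu setting from `root`.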
A specific container may be added to several cgroups, because besides the cpuset just discussed there are also IO and memory, all of which are part of the kernel. Once many user spaces sit on one kernel, the question arises of what each user space's root directory and file tree look like. Because we run on one specific device, there is only one real root directory; every container is mounted onto a subdirectory under the real physical root, and the tree structure after mounting is entirely up to you to design. The virtual filesystem isolated by chroot is mounted onto the real filesystem, and from the container's point of view, the subdirectory that an application server such as Apache sees is actually the root directory of its own process. One of the outstanding contributions of Linux here is that everything is a file.
© 2024 shulou.com SLNews company. All rights reserved.