2025-04-05 Update From: SLTechnology News&Howtos
Many people still know little about Docker. This article explains why it has become so popular; I hope you find it a useful reference.
Docker is a container management tool
Docker is a lightweight, portable, isolated container, and at the same time an engine that makes it easy to build, ship, and run applications inside containers. Unlike traditional virtualization, the Docker engine does not spin up a virtual machine; it runs containerized applications directly on the host, using the host's kernel and hardware. Thanks to this, the performance gap between an application running in a Docker container and the same application running directly on the host is almost negligible.
But Docker itself is not a container system; it is a tool for creating virtualized environments, built on top of the existing containerization technology LXC. Tools like LXC have been used in production for many years; what Docker adds on top is much friendlier image management and deployment tooling.
Docker is not a virtualization engine
When Docker was first released, many people compared it to virtual machine platforms such as VMware, KVM, and VirtualBox. Although Docker and virtualization technologies aim to solve similar problems, Docker takes a very different approach. A virtual machine is a set of virtualized hardware: disk operations inside the guest actually go through a virtual disk, and when the guest runs CPU-intensive tasks, the guest's CPU instructions must be "translated" into host CPU instructions before they execute. Two disk layers, two process schedulers, and two operating systems all consuming memory add up to considerable overhead; a virtual machine consumes as many resources as the hardware it is allocated, and running too many virtual machines on one host will overload it. Docker has none of these concerns. Docker runs applications as "containers": it uses namespaces and cgroups for isolation and resource limits, shares the kernel with the host, and uses no virtual disks; all container disk operations are really operations under /var/lib/docker/. In short, a Docker container is just a restricted application running on the host.
From the above it is clear that containers and virtual machines are different concepts, and containers cannot replace virtual machines. Virtual machines can do things that are beyond a container's power. For example, if the host is Linux and you need to run Windows, only a virtual machine will do; Docker cannot run Windows in a container on a Linux host. Likewise, at the time of writing Docker cannot run natively on Windows: on a Windows host, Docker actually runs inside a VirtualBox virtual machine.
Docker uses a layered file system
As mentioned earlier, one advantage of Docker over existing container technologies such as LXC is that it provides image management. In Docker, an image is a static, read-only snapshot of a container's file system. But there is more to it: all disk operations in Docker go through a copy-on-write file system. An example will make this clearer.
Suppose we want a container that runs a Java web application; we should start from an image that already has Java installed. In the Dockerfile (the instruction file from which an image is built), we specify that the image is based on a Java image, so Docker downloads the pre-built Java image from the Docker Hub Registry. We then instruct the Dockerfile to download Apache Tomcat and extract it to /opt/tomcat. These instructions do not modify the original Java image at all; they merely add change layers on top of it. When a container starts, all of its layers are assembled: the container runs /usr/bin/java from one layer and invokes /opt/tomcat/bin from another. In fact, every instruction in a Dockerfile produces a new layer, even if it changes only a single file. If you have used Git you will recognize the model: each instruction is like a commit. For Docker, this file system design brings great flexibility and makes applications easier to manage.
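The layering above can be sketched as a Dockerfile. This is a minimal illustration, not the article's actual file: the base image tag, archive name, and paths are assumptions.

```dockerfile
# Sketch of a layered build; image and file names are illustrative.
# Each instruction produces a new read-only layer on top of the previous one.

# Base layer: a pre-built Java image pulled from Docker Hub.
FROM java:8

# New layer: copy the Tomcat archive into the image.
COPY apache-tomcat.tar.gz /tmp/

# New layer: extract Tomcat to /opt/tomcat; the Java layers below are untouched.
RUN tar -xzf /tmp/apache-tomcat.tar.gz -C /opt && \
    mv /opt/apache-tomcat* /opt/tomcat

# Metadata-only layer: the default command the container runs on start.
CMD ["/opt/tomcat/bin/catalina.sh", "run"]
```

Building this with `docker build -t myorg/tomcat .` (the tag name is hypothetical) downloads the base layers once and adds only the small new layers on top.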
Our Spantree team maintains its own image with Tomcat. Releasing a new version is simple: use a Dockerfile to copy the new version into the image, producing a new image, then tag the new image with the version number. The only difference between versions of the image is a 90 MB WAR file; they are all based on the same base image. Maintaining these versions with virtual machines would mean storing many copies of the same operating system on disk, whereas with Docker the extra disk space is tiny. Even when we run many instances of this image at once, we still need only the one base Java/Tomcat image.
Docker can save time
Years ago, when I was developing software for a restaurant chain, I had to write a twelve-page Word document just to describe how to set up the development environment: a local Oracle database, a specific version of Java, and assorted system tools, shared libraries, and packages. The setup cost nearly a full day for every member of our team, which in money terms came to tens of thousands of dollars. Our customers had grown used to this, even seeing it as a necessary cost of bringing in new members and getting their staff accustomed to our software, but we would much rather have spent that time building features that improved their business.
If Docker had existed then, setting up the environment would have been as simple as using automation tools like Puppet, Chef, Salt, or Ansible, and we could have cut the whole setup from a day to a few minutes. But unlike those tools, Docker can do more than build the environment: it can save the entire environment to a disk file and copy it anywhere. Need to compile Node.js from source? Docker can do it, and it can also snapshot the whole environment into an image and store it wherever you like. And because Docker is a container, you never have to worry that whatever runs inside it will affect the host.
Now a newcomer to our team just runs docker-compose up, grabs a cup of coffee, and gets to work.
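A minimal sketch of what such a docker-compose.yml might look like; the service names, images, and ports here are assumptions for illustration, not taken from our actual setup:

```yaml
# Hypothetical development environment: one `docker-compose up`
# brings up the application, its database, and its cache together.
version: "2"
services:
  web:
    build: .            # build the app image from the local Dockerfile
    ports:
      - "8080:8080"     # expose the app on the host
    depends_on:
      - db
      - cache
  db:
    image: postgres:9.4 # pre-built image pulled from Docker Hub
  cache:
    image: redis:3.0
```

With this file checked into the repository, the entire 12-page setup document collapses into one command.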
Docker can save money
Of course, time is money. Beyond time, Docker can also save money on infrastructure hardware. Research from Gartner and McKinsey puts data center utilization at around 6% to 12%. Worse, with virtual machines you must also monitor and provision the CPU, disk, and memory of each one individually, and because of static partitioning those resources can never be fully used. Containers solve this: containers can share memory and disk between instances. You can run several services on the same host; you can limit a container's resources or leave it unconstrained; you can stop a container when it is not needed and restart it later without worrying about excessive startup cost. At three in the morning, when only a handful of people are visiting your website but you need extra capacity for nightly batch jobs, you can shift resources between them with ease.
A virtual machine's memory, disk, and CPU allocations are fixed, and adjusting them generally requires a reboot. With Docker, thanks to cgroups, you can impose resource limits, adjust them dynamically with ease, or apply no limits at all. To the host, applications in Docker containers are simply isolated processes, not virtual machines, so the host can allocate resources on its own.
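These dynamic limits map onto Docker CLI flags. The commands below are an illustrative sketch that needs a running Docker daemon; the image names are hypothetical, and flags such as --cpus and docker update are assumptions about your Docker version:

```shell
# Start a container with hard resource caps (enforced via cgroups).
docker run -d --name web -m 512m --cpus 1.5 myorg/web

# Raise the memory cap on the running container, no restart needed
# (contrast with a VM, which would typically have to be rebooted).
docker update --memory 1g --memory-swap 2g web

# Or run with no limits at all and let the host scheduler decide.
docker run -d --name batch myorg/nightly-batch
```

Because the limit is just a cgroup setting on an ordinary host process, changing it is cheap; there is no virtual hardware to resize.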
Docker has a robust image hosting system
As mentioned earlier, this hosting system is called the Docker Hub Registry. As of April 29th, there were about 14,000 public Docker images on the Internet, most of them hosted on Docker Hub. Just as GitHub has largely become the home of open source projects, Docker's official Docker Hub is already the home of public Docker images. These images can serve as the foundation for your applications and data services.
Thanks to this, you can freely try out the latest technology: chances are someone has already packaged an instance of that graph database as a Docker image and published it there. Take GitLab: building GitLab by hand is difficult enough that the translator does not recommend ordinary users attempt it, yet with the GitLab Docker image a working instance comes up in five seconds. The same goes for a Rails application on a specific Ruby version, or a .NET application on Linux; each can be brought up with a simple Docker command.
All official Docker images carry an official tag and their security can be vouched for, but the security of third-party images cannot be guaranteed, so download them with care. In a production environment, you can instead build the image yourself from only the Dockerfile a third party provides.
Docker GitLab introduction: get a GitLab in 5 seconds. .NET applications and Rails applications on Linux will be covered in detail in future articles.
Docker helps avoid bugs
Spantree has always been a fan of immutable infrastructure. In other words, unless a vulnerability like Heartbleed forces our hand, we try not to upgrade or reconfigure running systems. When adding a new server, we build its system from scratch, import the image, put the server into the load-balancer cluster, health-check the server being retired, and remove it from the cluster once everything checks out. Because Docker images are so easy to import and export, we can minimize incompatibilities caused by environment and version drift, and roll back easily when incompatibilities do occur. With Docker we also get a uniform runtime environment across production, testing, and development. Collaborative development used to produce the classic "it works on my machine, why not on yours?" because everyone's computer was configured differently; Docker has solved that problem for us.
Currently, Docker can only run on Linux
As mentioned earlier, Docker builds on technologies long proven in production, but most of them, such as LXC and cgroups, are specific to Linux. In other words, as of today, only Linux services and applications can run inside Docker containers, and only on Linux. Microsoft is working closely with Docker and has announced that the next version of Windows Server will support Docker containers, reportedly under the name Windows Docker and likely based on Hyper-V Container technology; we expect to see this version in the next few years.
In addition, tools like boot2docker and Docker Machine already let us run Docker in a virtual machine on Mac and Windows.