
How to Get Started with Docker


This article introduces the basics of getting started with Docker. In real-world use, many people run into difficulties here, so let me walk you through how to handle these situations. I hope you will read carefully and get something out of it!

A brief introduction to Docker

Docker is an OS-level virtualization technology and an open source application container engine. It lets developers package an application into a portable container that can run on almost any Linux system (Windows 10 also supports it natively; before Windows 10, a built-in virtual machine was required), which is called "package once, run everywhere".

A Docker container runs in a complete sandbox; containers have no connection to one another (unless you wire them into a cluster yourself). Network, storage, processes, and other resources are isolated not only between different containers but also between the host and its containers, unless you manually map exposed ports or mount storage volumes.

Many people don't understand the difference between Docker and a virtual machine.

Comparing the structure of the two: Docker lacks the guest operating system layer that a virtual machine carries, and Docker applications run directly on the Docker engine. Because a virtual machine needs its own operating system, its size is very large, usually from a few gigabytes to more than ten gigabytes. A virtual machine also usually hosts more than one application, so managing a virtual cluster as a whole is not very friendly, and flexible allocation is hard to achieve. A Docker image, by contrast, is only tens to hundreds of megabytes; generally an image packages just one application, multiple images combine to form a complete project, and images are easy to copy and can run across platforms, which makes deploying and managing a project far more flexible. So Docker comes out ahead of the virtual machine in resource consumption, management, and ease of use; is there any reason not to use such containerization technology?

It's fair to say that the study of containerization technology runs as deep as the sea: from basic image and container operations, to image packaging and container deployment, up to enterprise production-grade container cluster management (Docker's official Swarm, Google's Kubernetes), and not every engineer can learn it all. However, apart from production-grade cluster management, which is genuinely difficult, the rest is actually quite simple to learn and use, and ordinary developers rarely need to touch K8s anyway.

Having said that, many people may feel this is all company-level, ops-level work, and may not see what Docker means for an ordinary developer or what good it does us.

Multiple work environments, one-click deployment. Suppose you have a development environment at the office and another at home: whenever the office environment changes, the home environment has to change with it. If you use Docker and package dependent supporting services such as Redis, ZooKeeper, and MySQL into containers, then no matter where you changed things, you just pull the updated image at run time and you are on the latest setup, with no manual installation or adaptation.

Joint debugging and testing without depending on others. When the backend finishes an external interface, package the backend application into Docker; then whether you are on the front end or in testing, you can start the container for joint debugging and testing at any time, without building the backend environment by hand step by step.

...

The following is a step-by-step explanation of the Docker knowledge that ordinary developers need.

Concept introduction

The first step to learning Docker is to understand the following basic concepts:

Host: the machine Docker runs on, i.e., the system environment in which Docker itself operates.

Image: equivalent to a program template from which many similar containers can be generated. It can be understood like a class in Java: it cannot execute by itself and is an abstract template for objects. Each image can have multiple versions, distinguished by tag, and images can be built through a Dockerfile.

Container: the smallest unit that Docker runs; a runnable object instantiated from an image. Changes made to a container can be committed back to an image to update the container's template.

Repository: used to store and manage images, similar to a Git repository for code; it can hold multiple versions of an image.

The relationship among image, container, and repository, in a word: pull an image from the repository, and use the image to generate containers.

Basic operations

Now that we understand Docker's basic concepts, let's get hands-on. I'll skip the installation process here; just download Docker from the official website, and the installation is essentially a foolproof, guided affair.

Pull the image

With the docker pull ${image_uri}:${image_tag} command, you can pull the desired image from the remote repository (Docker Hub by default).

You can search for the image and version you need on Docker Hub's website. Ubuntu, for example, has several versions available.

Let's pull the ubuntu image at version 16.04, then use the docker images command to check local images; you will find a new ubuntu image there.
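For instance, a minimal sketch (the 16.04 tag matches this example; the tags available on Docker Hub change over time):

    # pull the ubuntu image at tag 16.04 from Docker Hub
    docker pull ubuntu:16.04

    # list local images; the new ubuntu image should now appear
    docker images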

Create, start, stop, and attach to containers

Once you have the image, you can create and start a container with docker run -it ${image_id}.

${image_id} is the image's ID, which you can see via docker images; you can also use the image name (REPOSITORY:TAG) instead.

The -it flags connect you to the container's terminal after startup. Once connected, you can manipulate the container's contents at will.

After you exit the container with the exit command, the container stops automatically. But it still exists; it is merely stopped. (You can detach from the container with Ctrl+P, Ctrl+Q without stopping it.)
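A minimal sketch of this, assuming the ubuntu:16.04 image pulled above:

    # create and start a container with an interactive terminal
    docker run -it ubuntu:16.04 /bin/bash

    # inside the container, 'exit' stops it and returns you to the host;
    # Ctrl+P then Ctrl+Q detaches without stopping the container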

Through docker ps -a, you can see that our container is now in the Exited state.

With docker start ${container_id}, we start the container again. With docker ps (add -a to include stopped containers), you can see that the container's status is Up.

Similarly, we can stop the container with docker stop ${container_id}.

If you do not add the -a parameter to the docker start command, you will not be attached to the container by default. However, you can attach to the container through docker attach ${container_id} after starting it.
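The lifecycle commands, put together (substitute the ID or name shown by docker ps -a for ${container_id}):

    # list all containers, including stopped ones
    docker ps -a

    # start a stopped container (add -a to attach to its output)
    docker start ${container_id}

    # attach to the running container's terminal
    docker attach ${container_id}

    # stop the container again
    docker stop ${container_id}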

With the above basic operations, you can basically use Docker as a virtual machine. If you want to wire up the network and storage between containers and the host, you can look into container settings such as port mapping and volume mounts, for example:
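As one hedged illustration of those settings (the port numbers and paths here are arbitrary examples):

    # map host port 8080 to container port 80, and mount the host
    # directory /data/myapp into the container at /app as a volume
    docker run -it -p 8080:80 -v /data/myapp:/app ubuntu:16.04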

Update the image

In the example above, what we pulled down is only a bare ubuntu image without much in it. Let's install a JDK in a container created from that image, for example as follows.
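A sketch of the steps inside the running ubuntu container (openjdk-8-jdk is the JDK package Ubuntu 16.04 ships; your choice of JDK may differ):

    # inside the container: install a JDK from Ubuntu's package repository
    apt-get update
    apt-get install -y openjdk-8-jdk

    # verify the installation
    java -version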

Now our container has a JDK, but if we create another container from the original ubuntu image, it won't have this JDK. So we need to commit the contents of this container to an image. Commit the container's contents to a local image via docker commit ${container_id} ${repository}:${tag}. Then you have an ubuntu image with a JDK, for example:
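A sketch of this commit step (the repository name and tag below are illustrative):

    # commit the container's current state to a new local image
    docker commit ${container_id} zackku/ubuntu-jdk:1.0

    # the new image now shows up in the local image list
    docker images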

Later, we can use this image to generate containers that come with the JDK. The updates above are limited to local images; if you want to push the image to the cloud, you need the docker push command, and only if you have logged in to the repository and have push access.

Image repositories

As mentioned above, the repository defaults to Docker Hub: both pull and push operate against Docker Hub. But if an image is for internal, private use, there is no need to use Docker Hub; for one thing the network is slow, and for another there is the matter of privacy and security.

There are two solutions to these problems: one is to run your own private registry, and the other is to use a cloud provider's image management platform (such as Aliyun's "Container Registry" service). The former is unnecessary for ordinary developers and also requires setting up authentication, which is more trouble, so I won't go into it here. Here's how to use the Aliyun service as your private repository.

Log in to Aliyun's registry using docker login.

Then tag the jdk image from above (in effect, this is also the process of changing the image's repository source).

Finally, push the image to Aliyun.
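Sketched end to end (the registry host matches the one used later in this article; the namespace and image names are illustrative):

    # log in to the Aliyun registry
    docker login registry.cn-shenzhen.aliyuncs.com

    # re-tag the local jdk image so it points at the private repository
    docker tag zackku/ubuntu-jdk:1.0 registry.cn-shenzhen.aliyuncs.com/zackku/jdk:1.0

    # push the image to the Aliyun repository
    docker push registry.cn-shenzhen.aliyuncs.com/zackku/jdk:1.0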

After the push, you can see this image in your repository on Aliyun.

By using a private repository, we can put the host environment aside entirely: build an image once, run it everywhere.

Build an image with a Dockerfile

From the above, we have learned how to pull an image, modify the contents of a container, and commit the container to build the image we need. But building an image through these manual operations is, for one, too cumbersome, and for another, opaque: there is no way to see at a glance how the image is composed.

A Dockerfile solves this problem well: by writing out the build process, the image can be built in one step. Below, again taking ubuntu as the base and installing a JDK to build a new image as the example, let's see how a Dockerfile is written.
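The original screenshot of the Dockerfile is not reproduced here; a minimal sketch written from the description above (the JDK package name is an assumption):

    # create the Dockerfile in the current directory
    cat > Dockerfile <<'EOF'
    # start from the bare ubuntu image
    FROM ubuntu:16.04
    # install a JDK, then clear the apt cache to keep the layer small
    RUN apt-get update && \
        apt-get install -y openjdk-8-jdk && \
        rm -rf /var/lib/apt/lists/*
    EOF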

Then execute docker build -t registry.cn-shenzhen.aliyuncs.com/zackku/jdk2:1.0 . (note the trailing dot, which specifies the build context) and the image is built.

Advanced Dockerfile techniques

The above is the basic use of a Dockerfile, but in practice we don't build images exactly as described (or at least not only that way). Here are two common principles.

Build in layers. Docker images are in fact layered. Look back at the earlier example of pushing an image to a remote repository.

In the push output, each layer is uploaded as a separate item. If a layer has already been built locally, it does not need to be rebuilt next time; likewise, if the remote end already has a layer, that layer does not need to be pushed again. Layers can also be shared between different images: for example, different Java projects all depend on a JDK runtime environment, so they can share the JDK layers. Based on this feature, we should build images layer by layer and factor out what our images have in common. To do this, we can roughly build an image in two steps: first build a base image containing the layers different images share (say, the common runtime environment of all Java projects), then use the base image to build the application image on top. That is equivalent to packaging the application and dropping it onto the top layer, which tells Docker how to run the program.
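A hedged two-step sketch of this idea (every image name below is illustrative, and app.jar stands in for your own build artifact):

    # step 1: a base image holding the shared Java runtime environment
    cat > Dockerfile.base <<'EOF'
    FROM ubuntu:16.04
    RUN apt-get update && apt-get install -y openjdk-8-jdk && \
        rm -rf /var/lib/apt/lists/*
    EOF
    docker build -t mybase/java:1.0 -f Dockerfile.base .

    # step 2: an application image layered on top of the base image
    cat > Dockerfile.app <<'EOF'
    FROM mybase/java:1.0
    COPY app.jar /app.jar
    ENTRYPOINT ["java", "-jar", "/app.jar"]
    EOF
    docker build -t myapp:1.0 -f Dockerfile.app .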


Keep the base layer small. Image size is an important consideration when using Docker: if an image is too large, updating and pulling it becomes inefficient. This applies especially to the base layer just mentioned: a bloated base layer with too many programs in it is not just big, it also wastes CPU, network, and other resources. In practice, a container generally holds only one program; monitoring, health checks, and the like are handled outside the container by cluster management tools such as K8s, so the container itself only needs to guarantee that its own program can run.

As for the examples above using ubuntu as the base operating system: for the bottom layer of a program's image, it is recommended to use the Alpine operating system instead. Alpine is a lightweight Linux distribution with a small footprint and low runtime resource consumption, which makes it very well suited to Docker containers. On top of the Alpine base we then add the environment we need, such as a Tomcat or a JDK, to build our base image.

In fact, you don't need to write the Dockerfile for such base images yourself: Docker officially provides base images for all kinds of languages and environments, maintained in the docker-library organization on GitHub. If your team has its own runtime requirements, you can modify those Dockerfiles, or abstract one more layer on top of them.
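For example, instead of hand-building the base image above, an application image can sit directly on an official Alpine-based JDK image (openjdk:8-jdk-alpine is one of the docker-library images; app.jar is again a placeholder):

    cat > Dockerfile <<'EOF'
    # official Alpine-based JDK image from docker-library
    FROM openjdk:8-jdk-alpine
    COPY app.jar /app.jar
    ENTRYPOINT ["java", "-jar", "/app.jar"]
    EOF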

As for how to write a Dockerfile and what its syntax looks like, there are plenty of detailed guides online, so for reasons of space I won't go through it here.

Start a group of containers with docker-compose

We have already described how a single container is built and started, but our projects often involve more than one container, and packing every program into one container is not the right approach. So how to manage and start that many containers is a required course. At the enterprise level there are container orchestration tools such as K8s and Swarm, but they are slightly more complex and unnecessary for personal use.

It is recommended to use Docker's official docker-compose, which lets you write the orchestration of all your containers in a single file and then start the whole set according to that file with the docker-compose up command, as sketched below.
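The compose file from the original article is not reproduced here; a hedged reconstruction from the description that follows (the myproject image name is a placeholder):

    cat > docker-compose.yml <<'EOF'
    version: "3"
    services:
      redis:
        image: redis        # default image, default configuration
      mongodb:
        image: mongo        # default image, default configuration
      myproject:
        image: zackku/myproject:1.0   # placeholder for our own project image
        depends_on:
          - redis
          - mongodb
    EOF

    # pull the images automatically and start the whole set
    docker-compose up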

The services section in this example contains the configuration of each container: redis and mongodb use the default images with default configuration, while myproject is our own project. With this composition, our project can connect to redis and mongodb. Finally, docker-compose up pulls the images automatically and runs everything according to the arrangement.

This concludes "how to get started with Docker". Thank you for reading. If you want to learn more about the industry, you can follow this site, where the editor will keep putting out high-quality, practical articles for you!
