2025-02-24 Update From: SLTechnology News&Howtos
Preface
Docker has drawn wide attention from major companies since it was open-sourced. These days, an Internet company whose operations stack does not run on Docker (or Pouch, etc.) is almost embarrassed to call itself an Internet company.
This article will briefly introduce the basic concepts of Docker, entry-level usage, and some scenarios where using Docker can greatly improve efficiency.
Principle
The simplest, and mistaken, perception of Docker is that "Docker is a very good virtual machine."
That view is wrong. Docker is considerably more advanced than traditional virtual machine technology: it does not virtualize a set of hardware on the host and then boot a full operating system on top of it. Instead, the processes in a Docker container run directly on the host, with Docker isolating their files, network, and so on. As a result, containers are lighter, start faster, and more of them can be created on the same host.
There are three core concepts in Docker: Image, Container, and Repository.
Image: the concept will be familiar to anyone who has installed a system from a Windows ISO. Compared with an ISO, however, a Docker image is layered and reusable rather than a simple pile of files (think of the difference between a zipped copy of source code and a git repository).
Container: a container cannot exist without an image; it is the runtime carrier of an image (similar to the relationship between an instance and a class). Relying on Docker's virtualization technology, a container gets its own ports, processes, files, and other "space", isolated from the host machine. Containers and the host can communicate with each other through ports, volumes, networks, and so on.
Repository: a Docker repository is similar to a git repository, with a repository name and tags. Once an image is built locally, it can be distributed through a repository. Commonly used registries include https://hub.docker.com/, https://cr.console.aliyun.com/ and so on.
Common commands
1. Installation
Installing Docker is very easy; there are one-click installers or scripts for macOS, Ubuntu, and other platforms. For details, please refer to the official Docker installation tutorial.
Type docker in the terminal after installation; if the command's help output appears, the installation has in most cases succeeded.
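A quick sanity check can be scripted as well; this is a generic sketch (not tied to any particular installer) that reports the installed version or a hint:

```shell
# Print the Docker version if the CLI is on PATH, otherwise print a hint.
if command -v docker >/dev/null 2>&1; then
  docker --version
else
  echo "docker not found - install it first"
fi
```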
2. Find a base image
Docker Hub and similar sites provide a great many images. Usually we pick one of them as a base image and then carry out our follow-up operations on top of it.
Here we take the ubuntu basic image as an example to configure a node environment.
Because of the long network path, access to Docker Hub from mainland China can be slow; the registry mirror accelerators provided by many domestic vendors can be used instead.
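One common way to use such a mirror (the URL below is a placeholder; substitute the accelerator address your vendor gives you) is to add it to Docker's daemon.json and restart the Docker daemon:

```json
{
  "registry-mirrors": ["https://your-mirror.example.com"]
}
```

On Linux this file typically lives at /etc/docker/daemon.json; after editing it, restart the daemon (for example with systemctl restart docker).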
3. Pull the basic image
You can pull an image from the registry to the local machine with the docker pull command. During the pull you can see that the image is fetched in multiple "layers".
> docker pull ubuntu:18.04
18.04: Pulling from library/ubuntu
c448d9b1e62f: Pull complete
0277fe36251d: Pull complete
6591defe1cd9: Pull complete
2c321da2a3ae: Pull complete
08d8a7c0ac3c: Pull complete
Digest: sha256:2152a8e6c0d13634c14aef08b6cc74cbc0ad10e4293e53d2118550a52f3064d1
Status: Downloaded newer image for ubuntu:18.04
Execute docker images to see all the local images
> docker images
REPOSITORY    TAG      IMAGE ID       CREATED        SIZE
ubuntu        18.04    58c12a55082a   44 hours ago   79MB
4. Create a Docker container
The docker create command creates a container from an image and prints the container id.
> docker create --name ubuntuContainer ubuntu:18.04
0da83bc6515ea1df100c32cccaddc070199b72263663437b8fe424aadccf4778
You can start the created container with docker start.
> docker start ubuntuContainer
You can view the running container with docker ps.
> docker ps
CONTAINER ID   IMAGE          COMMAND       CREATED         STATUS               PORTS   NAMES
9298a27262da   ubuntu:18.04   "/bin/bash"   4 minutes ago   Up About a minute            ubuntuContainer
You can enter the container with docker exec.
> docker exec -it 9298 /bin/bash
root@9298a27262da:/# ls
bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
root@9298a27262da:/# exit
With docker run, you can create and run a container in one step, and then enter the container.
> docker run -it --name runUbuntuContainer ubuntu:18.04 /bin/bash
root@57cdd61d4383:/# ls
bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
root@57cdd61d4383:/#
# in another terminal, docker ps shows that runUbuntuContainer has been successfully run
> docker ps
CONTAINER ID   IMAGE          COMMAND       CREATED         STATUS         PORTS   NAMES
57cdd61d4383   ubuntu:18.04   "/bin/bash"   9 seconds ago   Up 8 seconds           runUbuntuContainer
9298a27262da   ubuntu:18.04   "/bin/bash"   9 minutes ago   Up 6 minutes           ubuntuContainer
5. Install the Node environment in the container
After entering the container, everything works just as in a normal environment. Let's install a simple Node environment.
> apt-get update
> apt-get install wget
> wget -qO- https://raw.githubusercontent.com/creationix/nvm/v0.33.8/install.sh | bash
# after installation, the current session may not pick up the nvm command;
# exit and re-enter the terminal, then continue
> nvm install 8.0.0
> node -v
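If nvm is still not found after reopening the shell, the usual fix is to make sure the bootstrap lines the installer appends to ~/.bashrc are actually sourced. This is a sketch of that standard snippet:

```shell
# Standard nvm bootstrap (normally appended to ~/.bashrc by the install script)
export NVM_DIR="$HOME/.nvm"
if [ -s "$NVM_DIR/nvm.sh" ]; then
  . "$NVM_DIR/nvm.sh"   # load the nvm function into the current shell
fi
```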
6. Commit container, create a new image
Much as with Ghost images on Windows, most of the time we want to customize our own image: install some base environment (such as Node above) and then create the base image we want. This is when docker commit comes in handy.
> docker commit --author "rccoder" --message "curl+node" 9298 rccoder/myworkspace:v1
sha256:68e83119eefa0bfdc8e523ab4d16c8cf76770dbb08bad1e32af1c872735e6f71
# docker images now shows the newly built rccoder/myworkspace
> docker images
REPOSITORY            TAG   IMAGE ID       CREATED          SIZE
rccoder/myworkspace   v1    e0d73563fae8   20 seconds ago   196MB
Next, let's try the newly created image:
> docker run -it --name newWorkSpace rccoder/myworkspace:v1 /bin/bash
root@9109f6985735:/# node -v
v8.0.0
It looks fine.
7. Push the image to Docker Hub
Once the image is built, how can you share it for others to use? Let's take pushing to Docker Hub as an example.
First register an account at Docker Hub, then log in from the terminal and push.
> docker login
> docker push rccoder/myworkspace:v1
The push refers to repository [docker.io/rccoder/myworkspace]
c0913fec0e19: Pushing [=>       ] 2.783MB/116.7MB
bb1eed35aacf: Mounted from library/ubuntu
5fc1dce434ba: Mounted from library/ubuntu
c4f90a44515b: Mounted from library/ubuntu
a792400561d8: Mounted from library/ubuntu
6a4e481d02df: Waiting
8. It's time to use Dockerfile
Have you heard of doing continuous integration with Docker before actually learning Docker? At its core there is nothing unexpected: fetch the code from somewhere and execute it automatically (yes, it sounds a bit like travis-ci).
It's time for Dockerfile to come out!
Dockerfile is a script made up of a bunch of commands and parameters that can be executed using docker build to build an image and do something automatically (similar to .travis.yml in travis-ci).
Every Dockerfile follows the same format:
# Comment
INSTRUCTION arguments
The base image must be specified first, with FROM BASE_IMAGE.
Please refer to the Dockerfile reference for the detailed specification and instructions. Here we take the rccoder/myworkspace:v1 image built above as the base image and, as an example, create a directory in the root directory.
Dockerfile is as follows:
FROM rccoder/myworkspace:v1
RUN mkdir a
Then execute:
> docker build -t newfiledocker:v1 .
Sending build context to Docker daemon 3.584kB
Step 1/2 : FROM rccoder/myworkspace:v1
 ---> 68e83119eefa
Step 2/2 : RUN mkdir a
 ---> Running in 1127aff5fbd3
Removing intermediate container 1127aff5fbd3
 ---> 25a8a5418af0
Successfully built 25a8a5418af0
Successfully tagged newfiledocker:v1
# create a new container based on newfiledocker and open a shell in it;
# the folder a is already there
> docker run -it newfiledocker:v1 /bin/bash
root@e3bd8ca19ffc:/# ls
a bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
With the power of Dockerfile, Docker leaves unlimited possibilities.
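As an illustration, the manual steps from section 5 can be captured in a single Dockerfile. This is a sketch that assumes the nvm v0.33.8 install script is still available at the URL used above; the image name and versions are illustrative:

```dockerfile
FROM ubuntu:18.04

# Tools the nvm install script needs
RUN apt-get update && apt-get install -y wget ca-certificates

# Install nvm, then use it to install Node 8 in the same build step
RUN wget -qO- https://raw.githubusercontent.com/creationix/nvm/v0.33.8/install.sh | bash \
    && bash -c '. "$HOME/.nvm/nvm.sh" && nvm install 8.0.0'
```

Building this with docker build -t my-node-base . yields roughly the same image we produced interactively with docker commit, but in a reproducible, reviewable way.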
What can I do?
Having said all this, what can Docker actually do in a production environment? The common uses are probably the following (additions are welcome in the comments):
1. Deployment switching in multiple environments
Business development often needs to distinguish between a development environment and a production environment. With Docker, the code and its environment can be migrated from development to production intact, and, combined with some automation, releases can be made automatic.
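A hedged sketch of the pattern, using a hypothetical Compose file (the service name "myapp" and the NODE_ENV variable are assumptions, not from the original): the image is identical in both environments and only the injected configuration differs.

```yaml
# docker-compose.yml (illustrative)
services:
  app:
    image: myapp:1.0.0                       # the exact image tested in development
    environment:
      - NODE_ENV=${NODE_ENV:-development}    # switched per environment at deploy time
    ports:
      - "3000:3000"
```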
2. Front-end cloud builds
Because node_modules is such a headache, different developers on the same repository often end up using different package versions without realizing it, which eventually leads to problems after release. With Docker, you can spin up fresh containers in the cloud and build the code remotely, pollution-free and at low cost, so that everyone is guaranteed to build with the same versions.
3. One-click configuration of complex environment
In some scenarios the environment can be extremely complex (for example, setting up a full Java environment for new team members). Here you can use Docker to encapsulate the environment configuration in an image, and simply hand the image to everyone at low cost.
4. Continuous integration unit testing
Similar to travis-ci
5. Multi-version isolation and file isolation of the same application
For example, one project depends on Node 6 while another depends on Node 8 (for this particular case, if the hard drive is big enough, solving it with nodeinstall is recommended); or 100 WordPress instances run on the same server (Docker's isolation keeps them from contaminating each other).
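The isolation idea can be sketched with a hypothetical Compose file that pins each project to its own official node image tag (service names and paths below are illustrative assumptions):

```yaml
# illustrative docker-compose.yml: two apps, two Node versions, one host
services:
  legacy-app:
    image: node:6                      # project that still needs Node 6
    working_dir: /app
    volumes: ["./legacy-app:/app"]
    command: node server.js
  new-app:
    image: node:8                      # project built against Node 8
    working_dir: /app
    volumes: ["./new-app:/app"]
    command: node server.js
```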
6. Save money
Well, the cost savings are easy to oversell (half joking).
Reference link
Use the Docker command line
Dockerfile reference
Best practices for writing Dockerfiles
The above is the whole content of this article. I hope it is helpful to your study, and I hope you will keep supporting us.
© 2024 shulou.com SLNews company. All rights reserved.