This article mainly explains the basic knowledge points of Docker. The approach introduced here is simple, fast, and practical, so readers who are interested may wish to take a look and follow along.
Brief Introduction to Containers
What is a Linux container?
A Linux container is a set of processes that are isolated from the rest of the system and run from a distinct image that provides all the files needed to support those processes.
The image provided by the container contains all the dependencies of the application, so it is portable and consistent throughout the process from development to testing to production.
In more detail, please assume that you are developing an application. You are using a laptop and your development environment has a specific configuration. Other developers may be in a slightly different environment. The application you are developing depends on your current configuration as well as on specific files.
At the same time, your business also has a standardized test and production environment, and has its own configuration and a series of supporting files.
You want to simulate these environments locally as much as possible without the overhead of recreating the server environment.
So how do you ensure that applications run and pass quality checks in these environments without headaches during deployment and without rewriting code and troubleshooting? The answer is to use containers.
The container ensures that your applications have the necessary configurations and files so that they can run smoothly throughout the process from development to testing to production without any problems. In this way, crisis can be avoided and everyone will be happy.
Although this is a simplified example, we can use Linux containers to solve problems in many ways when high portability, configurability, and isolation are required.
Whether the infrastructure is on-premises or in the cloud, or a mix of both, containers can meet your needs.
Isn't the container just virtualization?
Yes and no. Let's think about it in a simple way:
Virtualization allows many operating systems to run on a single system simultaneously.
Containers can share the same operating system kernel, isolating the application process from the rest of the system.
Figure: comparison of general virtualization technology and Docker
What does that mean? First, having multiple operating systems run on a single hypervisor for virtualization does not achieve the same lightweight effect as using containers.
In fact, when you have only limited resources with limited capacity, you need lightweight applications that can be deployed densely.
Linux containers can run from a single operating system and share that operating system across all containers, so applications and services can remain lightweight and run fast in parallel.
A brief History of Container Development
The concept we now call container technology first appeared in 2000, when it was called FreeBSD jail, which partitioned FreeBSD systems into multiple subsystems (also known as Jail).
Jail is developed as a secure environment, and system administrators can share these Jail with multiple users inside or outside the enterprise.
The purpose of a jail is to allow processes to be created in a modified chroot environment, without being able to break out and affect the whole system; in a chroot environment, access to the file system, network, and users is virtualized.
Although the jail implementation had its limitations, people eventually found ways to escape from this isolated environment.
But the concept is very attractive.
In 2001, the implementation of isolated environments entered the Linux world through Jacques Gélinas' VServer project.
As Gélinas put it, the goal of this work was to "run multiple generic Linux servers [sic] in a single environment that is highly independent and secure."
After completing this foundational work for multiple controlled user spaces in Linux, the Linux container began to take shape and eventually evolved into what it is today.
What is Docker?
The term "Docker" refers to several things, including the open source community project; the tools that come out of that project; Docker Inc., the company that primarily supports the project; and the tools the company officially supports. It can be a little confusing that the technology and the company share the same name.
Let's make it simple:
- In IT software, "Docker" refers to the containerization technology that supports the creation and use of Linux containers.
- The open source Docker community works to improve these technologies and make them freely available so that all users can benefit.
- Docker Inc. builds on the work of the Docker community, is mainly responsible for making it more secure, and shares those improvements back with the wider technology community. It also improves and hardens these technologies to serve enterprise customers.
With Docker, you can use containers as lightweight, modular virtual machines. At the same time, you will have a high degree of flexibility to efficiently create, deploy, and replicate containers, and to migrate them smoothly from one environment to another.
How does Docker work?
Docker technology uses the Linux kernel and kernel functions, such as Cgroups and namespaces, to separate processes so that they run independently of each other.
This independence is the purpose of using containers: you can run multiple processes and applications independently, making full use of your infrastructure while preserving the security of separate systems.
Container tools, including Docker, provide an image-based deployment model. This makes it easy to share an application or set of services, with all of their dependencies, across multiple environments. Docker also automates deploying the application (or combined sets of processes that make up an app) inside this container environment.
In addition, because these tools are built on Linux containers, Docker is both easy to use and unique-it provides users with unprecedented levels of application access, rapid deployment, and version control and distribution capabilities.
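The cgroups and namespaces mentioned above are what Docker leans on to constrain and isolate each container. Below is a minimal, hedged sketch of how those kernel limits surface as docker run flags; the container name and limit values are arbitrary examples, not taken from the original text:
# Resource flags that map onto cgroup limits
docker run -d --name limited-nginx --memory 256m --cpus 0.5 --pids-limit 100 nginx
# Confirm the limits the daemon recorded for the container
docker inspect --format '{{.HostConfig.Memory}} {{.HostConfig.NanoCpus}}' limited-nginx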
Is Docker technology the same as traditional Linux containers?
No. Docker technology was originally built on LXC technology (most people would associate this technology with "traditional" Linux containers), but then it gradually got rid of its dependence on this technology.
LXC is very useful in terms of lightweight virtualization, but it doesn't provide a great developer or user experience. In addition to running the container, Docker technology has a number of other functions, including simplifying the process for building the container, transferring the image, and controlling the image version.
Traditional Linux containers use an init system that manages multiple processes, which means all applications run as a whole. In contrast, Docker technology encourages applications to run each of their processes independently and provides the tools to do so. This fine-grained mode of operation has its own advantages.
The goal of docker
The main goal of Docker is "Build, Ship and Run Any App, Anywhere": build, ship, and run everywhere.
**Build:** create a Docker image
**Ship:** docker pull
**Run:** start a container
Each container has its own root file system (rootfs).
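As a hedged illustration of that build-ship-run cycle, here is a minimal sketch; the image name and registry host below are placeholders, not taken from the original text:
# Build: create an image from the Dockerfile in the current directory
docker build -t registry.example.com/myapp:1.0 .
# Ship: push the image to a registry (docker login may be required first)
docker push registry.example.com/myapp:1.0
# Run: start a container from that image on any Docker host
docker run -d --name myapp registry.example.com/myapp:1.0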
Install Docker
Environment description
# Two hosts are used for the installation
[root@docker01 ~]# cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
[root@docker01 ~]# uname -r
3.10.0-327.el7.x86_64
[root@docker01 ~]# hostname -I
10.0.0.100 172.16.1.100
[root@docker02 ~]# hostname -I
10.0.0.101 172.16.1.101
Operate on both nodes
wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.ustc.edu.cn/docker-ce/linux/centos/docker-ce.repo
sed -i 's#download.docker.com#mirrors.ustc.edu.cn/docker-ce#g' /etc/yum.repos.d/docker-ce.repo
yum install docker-ce -y
Modify the configuration in docker01:
# Modify the startup file so the daemon also listens on a remote TCP port
vim /usr/lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock -H tcp://10.0.0.100:2375
systemctl daemon-reload
systemctl enable docker.service
systemctl restart docker.service
# Check with ps -ef whether the daemon started
Testing in docker02
[root@docker02 ~]# docker -H 10.0.0.100 info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: 17.12.0-ce
Storage Driver: devicemapper

Docker basic command operations
View docker related information
[root@docker01 ~]# docker version
Client:
 Version:       17.12.0-ce
 API version:   1.35
 Go version:    go1.9.2
 Git commit:    c97c6d6
 Built: Wed Dec 27 20:10:14 2017
 OS/Arch:       linux/amd64
Server:
 Engine:
  Version:      17.12.0-ce
  API version:  1.35 (minimum version 1.12)
  Go version:   go1.9.2
  Git commit:   c97c6d6
  Built: Wed Dec 27 20:12:46 2017
  OS/Arch:      linux/amd64
  Experimental: false
Configure docker Mirror acceleration
vi /etc/docker/daemon.json
{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}

Launch the first container

[root@docker01 ~]# docker run -d -p 80:80 nginx
Unable to find image 'nginx:latest' locally
latest: Pulling from library/nginx
e7bb522d92ff: Pull complete
6edc05228666: Pull complete
cd866a17e81f: Pull complete
Digest: sha256:285b49d42c703fdf257d1e2422765c4ba9d3e37768d6ea83d7fe2043dad6e63d
Status: Downloaded newer image for nginx:latest
8d8f81da12b5c10af6ba1a5d07f4abc041cb95b01f3d632c3d638922800b0b4d
# After the container is launched, access it in a browser to test
Parameter description
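A brief note on the flags used in the run command above:
# -d        run the container in the background (detached)
# -p 80:80  map host port 80 to container port 80 (host:container)
# nginx     the image to run (defaults to the latest tag)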
Docker image lifecycle
Search the official repository for an image

[root@docker01 ~]# docker search centos
NAME                      DESCRIPTION                     STARS   OFFICIAL   AUTOMATED
centos                    The official build of CentOS.   3992    [OK]
ansible/centos7-ansible   Ansible on Centos7              105                [OK]
List description
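A short legend for the columns in the docker search output above:
NAME: the repository name on Docker Hub
DESCRIPTION: a short description provided by the publisher
STARS: number of stars, a rough measure of popularity
OFFICIAL: [OK] if it is an officially maintained image
AUTOMATED: [OK] if the image is produced by an automated build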
Get the image
Pull the image according to the image name
[root@docker01 ~]# docker pull centos
Using default tag: latest
latest: Pulling from library/centos
af4b0a2388c6: Downloading  34.65MB/73.67MB
View the list of current host images
[root@docker01 ~]# docker image list
REPOSITORY   TAG      IMAGE ID       CREATED       SIZE
centos       latest   ff426288ea90   3 weeks ago   207MB
nginx        latest   3f8a4339aadd   5 weeks ago   108MB
Pull third-party mirroring method
docker pull index.tenxcloud.com/tenxcloud/httpd

Export an image

[root@docker01 ~]# docker image list
REPOSITORY   TAG      IMAGE ID       CREATED       SIZE
centos       latest   ff426288ea90   3 weeks ago   207MB
nginx        latest   3f8a4339aadd   5 weeks ago   108MB
# Export
[root@docker01 ~]# docker image save centos > docker-centos.tar.gz

Delete an image

[root@docker01 ~]# docker image rm centos:latest
[root@docker01 ~]# docker image list
REPOSITORY   TAG      IMAGE ID       CREATED       SIZE
nginx        latest   3f8a4339aadd   5 weeks ago   108MB

Import an image

[root@docker01 ~]# docker image load -i docker-centos.tar.gz
e15afa4858b6: Loading layer  215.8MB/215.8MB
Loaded image: centos:latest
[root@docker01 ~]# docker image list
REPOSITORY   TAG      IMAGE ID       CREATED       SIZE
centos       latest   ff426288ea90   3 weeks ago   207MB
nginx        latest   3f8a4339aadd   5 weeks ago   108MB

View the details of an image

[root@docker01 ~]# docker image inspect centos

Daily container management

Container start / stop
The easiest way to run a container
[root@docker01 ~]# docker run nginx
Create a container in two steps (rarely used)
[root@docker01 ~]# docker create centos:latest /bin/bash
bb7f32368ecf0492adb59e20032ab2e6cf6a563a0e6751e58930ee5f7aaef204
[root@docker01 ~]# docker start stupefied_nobel
stupefied_nobel
Quick start container method
[root@docker01 ~]# docker run centos:latest /usr/bin/sleep 20
The first process in the container must keep running; otherwise the container will be in the exited state!
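A hedged illustration of this behaviour (the sleep duration is arbitrary): the container stays up only while its first process does.
# The container exits as soon as the sleep process finishes
[root@docker01 ~]# docker run -d centos:latest /usr/bin/sleep 20
# Within 20 seconds the container shows as Up in docker ps;
# afterwards it only appears in docker ps -a with an Exited status
[root@docker01 ~]# docker ps
[root@docker01 ~]# docker ps -a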
View running containers
[root@docker01 ~]# docker container ls
or
[root@docker01 ~]# docker ps
CONTAINER ID   IMAGE   COMMAND                  CREATED         STATUS         PORTS    NAMES
8708e93fd767   nginx   "nginx -g 'daemon of…"   6 seconds ago   Up 4 seconds   80/tcp   keen_lewin
Check your container details / ip
[root@docker01 ~]# docker container inspect <container name/id>
Check all your containers (including those that are not running)
[root@docker01 ~]# docker ps -a
CONTAINER ID   IMAGE   COMMAND                  CREATED         STATUS                      PORTS   NAMES
8708e93fd767   nginx   "nginx -g 'daemon of…"   4 minutes ago   Exited (0) 59 seconds ago           keen_lewin
f9f3e6af7508   nginx   "nginx -g 'daemon of…"   5 minutes ago   Exited (0) 5 minutes ago            optimistic_haibt
8d8f81da12b5   nginx   "nginx -g 'daemon of…"   3 hours ago     Exited (0) 3 hours ago              lucid_bohr
Stop the container
[root@docker01 ~]# docker stop <container name/id>
or
[root@docker01 ~]# docker container kill <container name/id>

Ways to enter a container
Method of getting in at startup
[root@docker01 ~]# docker run -it    # parameter -it: interactive terminal
[root@docker01 ~]# docker run -it nginx:latest /bin/bash
root@79241093859e:/#
Exit / leave the container
Ctrl+p & ctrl+q
The method of entering the container after startup
Start a docker
[root@docker01 ~]# docker run -it centos:latest
[root@1bf0f43c4d2f /]# ps -ef
UID    PID   PPID  C  STIME  TTY    TIME      CMD
root   1     0     0  15:47  pts/0  00:00:00  /bin/bash
root   13    1     0  15:47  pts/0  00:00:00  ps -ef
attach connects to the container's pts/0, so every user who enters the container this way sees the same session.
[root@docker01 ~]# docker attach 1bf0f43c4d2f
[root@1bf0f43c4d2f /]# ps -ef
UID    PID   PPID  C  STIME  TTY    TIME      CMD
root   1     0     0  15:47  pts/0  00:00:00  /bin/bash
root   14    1     0  15:49  pts/0  00:00:00  ps -ef
Start a container with a custom name using --name
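A minimal sketch of starting a named container; the image and options are assumptions, but the name clsn1 matches the container used with docker exec below.
# --name assigns a fixed name instead of a randomly generated one
[root@docker01 ~]# docker run -itd --name clsn1 centos:latest /bin/bash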
Exec entry container method (recommended)
[root@docker01 ~]# docker exec -it clsn1 /bin/bash
[root@b20fa75b4b40 /]# # a new terminal (pts/1) is allocated
[root@b20fa75b4b40 /]# ps -ef
UID    PID   PPID  C  STIME  TTY    TIME      CMD
root   1     0     0  16:11  pts/0  00:00:00  /bin/bash
root   13    0     0  16:14  pts/1  00:00:00  /bin/bash
root   26    13    0  16:14  pts/1  00:00:00  ps -ef

Delete all containers

[root@docker01 ~]# docker rm -f `docker ps -a -q`    # -f forces deletion

Port mapping at startup
Port mapping with the -p parameter

[root@docker01 ~]# docker run -d -p 8888:80 nginx:latest
287bec5c60263166c03e1fc5b0b8262fe76507be3dfae4ce5cd2ee2d1e8a89a9
Different ways to specify the mapping
Random mapping
docker run -P    # capital P: map random host ports to the ports the image exposes

Docker data volume management and mounting

Mount a volume
[root@docker01 ~]# docker run -d -p 80:80 -v /data:/usr/share/nginx/html nginx:latest
079786c1e297b5c5031e7a841160c74e91d4ad06516505043c60dbb78a259d09

Site directory in the container: /usr/share/nginx/html
Write data to the host and view
[root@docker01 ~]# echo "http://www.nmtui.com" > /data/index.html
[root@docker01 ~]# curl 10.0.0.100
http://www.nmtui.com
Set up a shared volume and start a new container with the same volume
[root@docker01 ~]# docker run -d -p 8080:80 -v /data:/usr/share/nginx/html nginx:latest
351f0bd78d273604bd0971b186979aa0f3cbf45247274493d2490527babb4e42
[root@docker01 ~]# curl 10.0.0.100:8080
http://www.nmtui.com
View a list of volumes
[root@docker01 ~]# docker volume ls
DRIVER   VOLUME NAME

Create a volume and mount it
Create a volume
[root@docker01 ~]# docker volume create
f3b95f7bd17da220e63d4e70850b8d7fb3e20f8ad02043423a39fdd072b83521
[root@docker01 ~]# docker volume ls
DRIVER   VOLUME NAME
local    f3b95f7bd17da220e63d4e70850b8d7fb3e20f8ad02043423a39fdd072b83521
Specify volume name
[root@docker01 ~]# docker volume create clsn
clsn
[root@docker01 ~]# docker volume ls
DRIVER   VOLUME NAME
local    clsn
local    f3b95f7bd17da220e63d4e70850b8d7fb3e20f8ad02043423a39fdd072b83521
View volume path
[root@docker01 ~]# docker volume inspect clsn
[
    {
        "CreatedAt": "2018-02-01T00:39:25+08:00",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/clsn/_data",
        "Name": "clsn",
        "Options": {},
        "Scope": "local"
    }
]
Create with Volume
[root@docker01 ~]# docker run -d -p 9000:80 -v clsn:/usr/share/nginx/html nginx:latest
1434559cff996162da7ce71820ed8f5937fb7c02113bbc84e965845c219d3503
# Test from the host
[root@docker01 ~]# echo 'blog.nmtui.com' > /var/lib/docker/volumes/clsn/_data/index.html
[root@docker01 ~]# curl 10.0.0.100:9000
blog.nmtui.com

Share the volumes of another container (--volumes-from)

[root@docker01 ~]# docker run -d -P --volumes-from 079786c1e297 nginx:latest
b54b9c9930b417ab3257c6e4a8280b54fae57043c0b76b9dc60b4788e92369fb
View the ports used
[root@docker01 ~]# netstat -lntup
Active Internet connections (only servers)
Proto  Recv-Q  Send-Q  Local Address      Foreign Address   State    PID/Program name
tcp    0       0       0.0.0.0:22         0.0.0.0:*         LISTEN   1400/sshd
tcp    0       0       10.0.0.100:2375    0.0.0.0:*         LISTEN   26218/dockerd
tcp6   0       0       :::9000            :::*              LISTEN   32015/docker-proxy
tcp6   0       0       :::8080            :::*              LISTEN   31853/docker-proxy
tcp6   0       0       :::80              :::*              LISTEN   31752/docker-proxy
tcp6   0       0       :::22              :::*              LISTEN   1400/sshd
tcp6   0       0       :::32769           :::*              LISTEN   32300/docker-proxy
[root@docker01 ~]# curl 10.0.0.100:32769
http://www.nmtui.com

Manually save a container as an image
This is based on the official docker centos 6.8 image.
Official image list:
https://hub.docker.com/explore/
Start a mirror of centos6.8
[root@docker01 ~]# docker pull centos:6.8
[root@docker01 ~]# docker run -it -p 1022:22 centos:6.8 /bin/bash
# Install the sshd service in the container and change the system password
[root@582051b2b92b ~]# yum install openssh-server -y
[root@582051b2b92b ~]# echo "root:123456" | chpasswd
[root@582051b2b92b ~]# /etc/init.d/sshd start

After sshd starts, test the SSH connection, then commit the container as an image.

[root@docker01 ~]# docker commit brave_mcclintock centos6-ssh
Start the container with a new image
[root@docker01 ~]# docker run -d -p 1122:22 centos6-ssh:latest /usr/sbin/sshd -D
5b8161fda2a9f2c39c196c67e2eb9274977e7723fe51c4f08a0190217ae93094
Install the httpd service in the container
[root@5b8161fda2a9 /]# yum install httpd -y
Write a startup script
[root@5b8161fda2a9 /]# cat init.sh
#!/bin/bash
/etc/init.d/httpd start
/usr/sbin/sshd -D
[root@5b8161fda2a9 /]# chmod +x init.sh    # note the execute permission
Submit again as a new mirror
[root@docker01 ~]# docker commit 5b8161fda2a9 centos6-httpd
sha256:705d67a786cac040800b8485cf046fd57b1828b805c515377fc3e9cea3a481c1

Start the new image with port mappings, then test access in a browser

[root@docker01 ~]# docker run -d -p 1222:22 -p 80:80 centos6-httpd /init.sh
46fa6a06644e31701dc019fb3a8c3b6ef008d4c2c10d46662a97664f838d8c2c

Build Docker images automatically with a Dockerfile
Official Dockerfile references

https://github.com/CentOS/CentOS-Dockerfiles
Dockerfile instruction set
The main components of a Dockerfile:

Base image information: FROM centos:6.8
Image build instructions: RUN yum install openssh-server -y
Instruction executed when the container starts: CMD ["/bin/bash"]
Common dockerfile directives:
FROM: who is the image's mother? (specify the base image)
MAINTAINER: tell others who is responsible for it (specify maintainer information; optional)
RUN: what do you want it to do (just put RUN in front of the command)
ADD: give it some start-up capital (copy files into the image; archives are decompressed automatically)
WORKDIR: like cd, set the current working directory
VOLUME: give it a place to store its luggage (define a volume mounted from the host)
EXPOSE: which door does it open to the outside (specify the exposed port)
CMD: run, man! (specify what to run when the container starts)
Other dockerfile directives:
COPY: copy files
ENV: set environment variables
ENTRYPOINT: command executed when the container starts
Create a Dockerfile
Create the first Dockerfile file
# Create a directory
[root@docker01 base]# cd /opt/base
# Create the Dockerfile (note the capitalisation)
[root@docker01 base]# vim Dockerfile
FROM centos:6.8
RUN yum install openssh-server -y
RUN echo "root:123456" | chpasswd
RUN /etc/init.d/sshd start
CMD ["/usr/sbin/sshd", "-D"]
Build a docker image
[root@docker01 base]# docker image build -t centos6.8-ssh .
# -t tags the image; the trailing . means the current path is the build context
Start with a self-built image
[root@docker01 base]# docker run -d -p 2022:22 centos6.8-ssh
dc3027d3c15dac881e8e2aeff80724216f3ac725f142daa66484f7cb5d074e7a

Install KodExplorer using a Dockerfile
Dockerfile file content
FROM centos:6.8
RUN yum install wget unzip php php-gd php-mbstring -y && yum clean all
# Set the working directory; all subsequent instructions run in it
WORKDIR /var/www/html/
RUN wget -c http://static.kodcloud.com/update/download/kodexplorer4.25.zip
RUN unzip kodexplorer4.25.zip && rm -f kodexplorer4.25.zip
RUN chown -R apache.apache .
CMD ["/usr/sbin/apachectl", "-D", "FOREGROUND"]
For more Dockerfile, please refer to the official method.
Mirror layering in Docker
Reference documentation:
http://www.maiziedu.com/wiki/cloud/dockerimage/
Docker supports the creation of new images by extending existing images. In fact, 99% of the images in Docker Hub are built by installing and configuring the required software in the base image.
As you can see from the image above, the new image is generated by overlaying the base image layer by layer. Each time a software is installed, an additional layer is added to the existing image.
Why Docker mirrors are layered
One of the biggest benefits of mirror layering is the sharing of resources.
For example, if multiple images are built from the same base image, Docker Host only needs to save one base image on disk, and only one base image needs to be loaded in memory to serve all containers. And each layer of the mirror can be shared.
If multiple containers share a basic image, when a container modifies the contents of the basic image, such as the file under / etc, the / etc of other containers will not be modified, and the modification will only be limited to a single container. This is the container Copy-on-Write feature.
Writable container layer
When the container starts, a new writable layer is loaded on top of the mirror. This layer is often called the "container layer", and those below the "container layer" are called the "mirror layer".
All changes to the container-whether adding, deleting, or modifying files-occur only in the container layer. Only the container layer is writable, and all mirror layers below the container layer are read-only.
Details of the container layer
The number of image layers may be large, and all layers are combined into a unified file system. If different layers contain a file with the same path, for example /a, the upper /a overwrites the lower /a, so the user can only access the upper file /a. In the container layer, what the user sees is this superimposed file system.

File operations
A copy of the data is copied only when it needs to be modified, a feature called Copy-on-Write. It can be seen that the container layer saves the changed part of the image and does not make any changes to the image itself.
This explains the problem we raised earlier: the container layer records the changes to the image, and all the image layers are read-only and will not be modified by the container, so the image can be shared by multiple containers.
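A hedged way to look at these layers, and at the container layer's changes, on a real system (the image name comes from the earlier examples; output varies):
# List the read-only layers that make up an image
docker history nginx:latest
# Show files added (A), changed (C) or deleted (D) in a container's writable layer
docker diff <container name/id>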
Running zabbix-server with Docker: interconnection between containers

Be sure to understand how containers interconnect before running zabbix.
# Create an nginx container
docker run -d -p 80:80 nginx
# Create a container, link it to the nginx container, and enter it
docker run -it --link quirky_brown:web01 centos-ssh /bin/bash
# The nginx container can now be reached by its alias
ping web01
Command execution process
# Launch an apache container
[root@docker01 ~]# docker run -d httpd:2.4
3f1f7fc554720424327286bd2b04aeab1b084a3fb011a785b0deab6a34e56955
[root@docker01 ~]# docker ps -a
CONTAINER ID   IMAGE       COMMAND              CREATED         STATUS         PORTS    NAMES
3f1f7fc55472   httpd:2.4   "httpd-foreground"   6 seconds ago   Up 5 seconds   80/tcp   determined_clarke
# Pull a busybox image
[root@docker01 ~]# docker pull busybox
# Launch a container linked to the apache container
[root@docker01 ~]# docker run -it --link determined_clarke:web busybox:latest /bin/sh
/ #
# Access the original web container from the new container
/ # ping web
PING web (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.058 ms
^C
--- web ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.058/0.058/0.058 ms

Launch the zabbix containers
1. Start a container for mysql
docker run --name mysql-server -t \
    -e MYSQL_DATABASE="zabbix" \
    -e MYSQL_USER="zabbix" \
    -e MYSQL_PASSWORD="zabbix_pwd" \
    -e MYSQL_ROOT_PASSWORD="root_pwd" \
    -d mysql:5.7 \
    --character-set-server=utf8 --collation-server=utf8_bin
2. Start the java-gateway container monitoring java service
docker run --name zabbix-java-gateway -t \
    -d zabbix/zabbix-java-gateway:latest
3. Start the zabbix-mysql container and use link to connect mysql and java-gateway.
docker run --name zabbix-server-mysql -t \
    -e DB_SERVER_HOST="mysql-server" \
    -e MYSQL_DATABASE="zabbix" \
    -e MYSQL_USER="zabbix" \
    -e MYSQL_PASSWORD="zabbix_pwd" \
    -e MYSQL_ROOT_PASSWORD="root_pwd" \
    -e ZBX_JAVAGATEWAY="zabbix-java-gateway" \
    --link mysql-server:mysql \
    --link zabbix-java-gateway:zabbix-java-gateway \
    -p 10051:10051 \
    -d zabbix/zabbix-server-mysql:latest
4. Start the zabbix web display and use link to connect zabbix-mysql and mysql.
docker run --name zabbix-web-nginx-mysql -t \
    -e DB_SERVER_HOST="mysql-server" \
    -e MYSQL_DATABASE="zabbix" \
    -e MYSQL_USER="zabbix" \
    -e MYSQL_PASSWORD="zabbix_pwd" \
    -e MYSQL_ROOT_PASSWORD="root_pwd" \
    --link mysql-server:mysql \
    --link zabbix-server-mysql:zabbix-server \
    -p 80:80 \
    -d zabbix/zabbix-web-nginx-mysql:latest

About the zabbix API
For more information about zabbix API, please refer to the official documentation:
https://www.zabbix.com/documentation/3.4/zh/manual/api
1. Get the token method
# Get a token
[root@docker02 ~]# curl -s -X POST -H 'Content-Type:application/json' -d '
{
    "jsonrpc": "2.0",
    "method": "user.login",
    "params": {
        "user": "Admin",
        "password": "zabbix"
    },
    "id": 1
}' http://10.0.0.100/api_jsonrpc.php
{"jsonrpc":"2.0","result":"d3be707f9e866ec5d0d1c242292cbebd","id":1}

Docker repository (registry)

Create a plain repository
1. Create a warehouse
docker run -d -p 5000:5000 --restart=always --name registry -v /opt/myregistry:/var/lib/registry registry
2. Modify the configuration file to support http
[root@docker01 ~]# cat /etc/docker/daemon.json
{
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "insecure-registries": ["10.0.0.100:5000"]
}
Restart docker for the changes to take effect
[root@docker01 ~]# systemctl restart docker.service
3. Modify the image label
[root@docker01 ~]# docker tag busybox:latest 10.0.0.100:5000/clsn/busybox:1.0
[root@docker01 ~]# docker images
REPOSITORY                     TAG      IMAGE ID       CREATED        SIZE
centos6-ssh                    latest   3c2b1e57a0f5   18 hours ago   393MB
httpd                          2.4      2e202f453940   6 days ago     179MB
10.0.0.100:5000/clsn/busybox   1.0      5b0d59026729   8 days ago     1.15MB
4. Upload the newly tagged image to the repository
[root@docker01 ~]# docker push 10.0.0.100:5000/clsn/busybox
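As a hedged check that the push worked (same host and tag as above), the image can be listed through the registry's HTTP API and pulled back:
# List repositories stored in the registry
curl http://10.0.0.100:5000/v2/_catalog
# Remove the local copy and pull it back from the private registry
docker rmi 10.0.0.100:5000/clsn/busybox:1.0
docker pull 10.0.0.100:5000/clsn/busybox:1.0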
A repository with basic authentication

1. Install the encryption tool
[root@docker01 clsn]# yum install httpd-tools -y
2. Set the authentication password
mkdir /opt/registry-var/auth/ -p
htpasswd -Bbn clsn 123456 > /opt/registry-var/auth/htpasswd
3. Start the container and pass authentication parameters when starting.
docker run -d -p 5000:5000 \
    -v /opt/registry-var/auth/:/auth/ \
    -e "REGISTRY_AUTH=htpasswd" \
    -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
    -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
    registry
4. Use authenticated user testing
# Log in
[root@docker01 ~]# docker login 10.0.0.100:5000
Username: clsn
Password: 123456
Login Succeeded
# Push the image to the repository
[root@docker01 ~]# docker push 10.0.0.100:5000/clsn/busybox
The push refers to repository [10.0.0.100:5000/clsn/busybox]
4febd3792a1f: Pushed
1.0: digest: sha256:4cee1979ba0bf7db9fc5d28fb7b798ca69ae95a47c5fecf46327720df4ff352d size: 527
# Location of the credentials file
[root@docker01 ~]# cat .docker/config.json
{
    "auths": {
        "10.0.0.100:5000": {
            "auth": "Y2xzbjoxMjM0NTY="
        },
        "https://index.docker.io/v1/": {
            "auth": "Y2xzbjpIenNAMTk5NG="
        }
    },
    "HttpHeaders": {
        "User-Agent": "Docker-Client/17.12.0-ce (linux)"
    }
}
At this point, a simple docker image repository has been built.
The docker-compose orchestration tool
Install docker-compose
# Install pip
yum install -y python2-pip
# Install docker-compose
pip install docker-compose
To speed up pip downloads in China, use a domestic mirror:

http://mirrors.aliyun.com/help/pypi
mkdir ~/.pip/
cat > ~/.pip/pip.conf

When the docker service is restarted, all containers exit. The solution is either to start containers with automatic restart:
docker run --restart=always
or to modify the docker default configuration file and add the following line:
"live-restore": true
Reference for the Docker daemon configuration file /etc/docker/daemon.json
# "graph" moves the data storage directory from the default /var/lib/docker/ to /opt/mydocker/
[root@docker02 ~]# cat /etc/docker/daemon.json
{
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "graph": "/opt/mydocker",
  "insecure-registries": ["10.0.0.100:5000"],
  "live-restore": true
}
A restart is required for the changes to take effect, and they only apply to containers started afterwards.
[root@docker01 ~]# systemctl restart docker.service

Docker network types
Network type of docker
Bridge: the default. Docker's network isolation is based on network namespaces. When a container is created on the physical machine, it is assigned its own network namespace, and the container's IP is bridged onto a virtual bridge on the physical machine.
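A hedged sketch of looking at the default bridge network (the container name is an arbitrary example; output is abbreviated):
# Containers started without --network use the default bridge
docker run -d --name bridged-nginx nginx
# Show the bridge's subnet, gateway and attached containers
docker network inspect bridge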
Do not configure network features for the container
Creating a container in this mode does not configure any network parameters for the container, such as container network card, IP, communication routing, etc., all of which need to be configured by yourself.
[root@docker01 ~]# docker run -it --network none busybox:latest /bin/sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever

Share the network configuration of another container (Container mode)

This mode is very similar to host mode, except that the container shares the IP and ports of another container rather than those of the physical machine. A container in this mode does not configure its own network and ports; after creating it you will find that its IP and ports are those of the container you specified, while everything else, such as processes, remains isolated.

[root@docker01 ~]# docker run -it --network container:mywordpress_db_1 busybox:latest /bin/sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
105: eth0@if106: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:12:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.3/16 brd 172.18.255.255 scope global eth0
       valid_lft forever preferred_lft forever

Use the host network

A container created in this mode has no independent network namespace of its own; it shares a network namespace with the physical machine, including all of the machine's ports and IPs. This mode is considered insecure.

[root@docker01 ~]# docker run -it --network host busybox:latest /bin/sh

View the network list

[root@docker01 ~]# docker network list
NETWORK ID     NAME                  DRIVER   SCOPE
b15e8a720d3b   bridge                bridge   local
345d65b4c2a0   host                  host     local
bc5e2a32bb55   mywordpress_default   bridge   local
ebf76eea91bb   none                  null     local

Use PIPEWORK to configure an independent IP for a docker container
Reference documentation:
Blog.csdn.net/design321/article/details/48264825
Official website:
Github.com/jpetazzo/pipework
Hosting environment: centos7.2
1. Install pipework
wget https://github.com/jpetazzo/pipework/archive/master.zip
unzip master.zip
cp pipework-master/pipework /usr/local/bin/
chmod +x /usr/local/bin/pipework
2. Configure a bridged network card. Install the bridging tool:
yum install bridge-utils.x86_64 -y
Modify the network card configuration to achieve bridging
# Modify the eth0 configuration so it is bridged to br0
[root@docker01 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE=Ethernet
BOOTPROTO=static
NAME=eth0
DEVICE=eth0
ONBOOT=yes
BRIDGE=br0
[root@docker01 ~]# cat /etc/sysconfig/network-scripts/ifcfg-br0
TYPE=Bridge
BOOTPROTO=static
NAME=br0
DEVICE=br0
ONBOOT=yes
IPADDR=10.0.0.100
NETMASK=255.255.255.0
GATEWAY=10.0.0.254
DNS1=223.5.5.5
# Restart the network
[root@docker01 ~]# /etc/init.d/network restart
3. Run a container image test:
pipework br0 $(docker run -d -it -p 6880:80 --name httpd_pw httpd) 10.0.0.220/24@10.0.0.254
Test ports and connectivity on other hosts
[root@docker01 ~]# curl 10.0.0.220
It works!
[root@docker01 ~]# ping 10.0.0.220 -c 1
PING 10.0.0.220 (10.0.0.220) 56(84) bytes of data.
64 bytes from 10.0.0.220: icmp_seq=1 ttl=64 time=0.043 ms
4. Run another container and set the network type to none:
pipework br0 $(docker run -d -it --net=none --name test httpd:2.4) 10.0.0.221/24@10.0.0.254
Conduct access testing
[root@docker01 ~]# curl 10.0.0.221
It works!
5. After restarting the container, you need to specify again:
pipework br0 testduliip 172.16.146.113/24@172.16.146.1
pipework br0 testduliip01 172.16.146.112/24@172.16.146.1
For more information on Docker overlay networks for cross-host communication, please see:
Cnblogs.com/CloudMan6/p/7270551.html
Docker macvlan for cross-host communication
Create a network
[root@docker01 ~]# docker network create --driver macvlan --subnet 10.1.0.0/24 --gateway 10.1.0.254 -o parent=eth0 macvlan_1
33a1f41dcc074f91b5bd45e7dfedabfb2b8ec82db16542f05213839a119b62ca

Set the network card to promiscuous mode

ip link set eth0 promisc on
Create a network container using macvlan
[root@docker02 ~]# docker run -it --network macvlan_1 --ip=10.1.0.222 busybox /bin/sh

Harbor: a Docker enterprise image repository
Container management
[root@docker01 harbor]# pwd
/opt/harbor
[root@docker01 harbor]# docker-compose stop

1. Install docker and docker-compose, then download harbor

cd /opt && wget https://storage.googleapis.com/harbor-releases/harbor-offline-installer-v1.3.0.tgz
tar xf harbor-offline-installer-v1.3.0.tgz
2. Modify the password of the host and web interface
[root@docker01 harbor]# vim harbor.cfg
hostname = 10.0.0.100
harbor_admin_password = Harbor12345
3. Execute the installation script
[root@docker01 harbor]# ./install.sh

Access http://10.0.0.11 in a browser
Add a project
4. Push an image to the specified project in the repository

[root@docker02 ~]# docker tag centos:6.8 10.0.0.100/clsn/centos6.8:1.0
[root@docker02 ~]# docker images
REPOSITORY                  TAG      IMAGE ID       CREATED        SIZE
busybox                     latest   5b0d59026729   8 days ago     1.15MB
10.0.0.100/clsn/centos6.8   1.0      6704d778b3ba   2 months ago   195MB
centos                      6.8      6704d778b3ba   2 months ago   195MB
[root@docker02 ~]# docker login 10.0.0.100
Username: admin
Password:
Login Succeeded
5. Push image
[root@docker02 ~]# docker push 10.0.0.100/clsn/centos6.8
The push refers to repository [10.0.0.100/clsn/centos6.8]
e00c9229b481: Pushing  13.53MB/194.5MB
6. Check in the web interface.
Recommendations for using containers
1. Do not split the release of an application into pieces
2. Do not create large images
3. Do not run multiple processes in a single container
4. Do not save credentials in the image; do not rely on IP addresses
5. Run processes as a non-root user
6. Do not rely only on the "latest" tag
7. Do not create images from running containers
8. Do not use single-layer images
9. Do not store data in a container
About the monitoring of Docker containers
Basic information of the container
Including the number of containers, ID, name, image, startup command, port and other information
The running state of the container
Count the number of containers in each state, including running, paused, stopped, and abnormally exited

Usage information of the container

Statistics on resource usage of the container, such as CPU utilization, memory usage, block device I/O usage, and network usage
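A hedged sketch of collecting this information with built-in Docker commands (output formats vary by version):
# Basic information: ID, name, image, ports and status of all containers
docker ps -a --format 'table {{.ID}}\t{{.Names}}\t{{.Image}}\t{{.Ports}}\t{{.Status}}'
# Count containers by state
docker ps -a --filter status=running -q | wc -l
docker ps -a --filter status=exited -q | wc -l
# Resource usage: CPU, memory, block I/O and network, as a one-shot snapshot
docker stats --no-stream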
At this point, I believe you have a deeper understanding of the basic knowledge points of Docker; you might as well put them into practice.