What are the rules of docker compose writing?


This article mainly introduces the writing rules of docker-compose. The content is simple and easy to understand; I hope you learn something from it and come away with a harvest. Let the editor take you through it below.

This article does not introduce anything related to cluster deployment.

Version constraint

Docker Engine >= 19.03
Docker Compose >= 3.8

Structure introduction

The docker-compose.yaml file is mainly composed of the following top-level sections:

version    # docker compose version
networks   # networks, used for communication between docker containers
x-{name}   # templates; the naming rule is to start with x- so the block can be reused
volumes    # mounted volumes
services   # service module; defines container information whose parameters are equivalent to docker run options
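As a rough illustration of how these sections fit together, here is a minimal sketch (the service name web, the network app-net, the volume app-data and the x-logging template are hypothetical examples, not from the original article):

version: "3.8"

x-logging: &default-logging        # reusable template (extension field), hypothetical name
  driver: json-file
  options:
    max-size: "200k"

services:
  web:                             # hypothetical service
    image: nginx:alpine
    networks:
      - app-net
    logging: *default-logging      # reuse the template via a YAML anchor
    volumes:
      - app-data:/usr/share/nginx/html

networks:
  app-net:

volumes:
  app-data: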

Module introduction

Docker Compose official documentation

Version

Sets the version of the docker-compose.yaml file format.

If you need to upgrade, see the version upgrade reference documentation.

Compose file version    Docker Engine version
3.8                     19.03.0+
3.7                     18.06.0+
3.6                     18.02.0+
3.5                     17.12.0+
3.4                     17.09.0+
3.3                     17.06.0+
3.2                     17.04.0+
3.1                     1.13.1+
3.0                     1.13.0+
2.4                     17.12.0+
2.3                     17.06.0+
2.2                     1.13.0+
2.1                     1.12.0+
2.0                     1.10.0+
1.0                     1.9.1+

Network_mode

Uses the same values as the --network option, plus the special form service:[service name].

Network_mode: "bridge" network_mode: "host" network_mode: "none" network_mode: "service: [service name]" network_mode: "container: [container name/id]"

Networks

Sets the network for the container created by the current docker-compose.yaml file

It does not have to appear at the same level as version; it can also be used in other modules, such as services.

Internal network

services:
  some-service:
    networks:
      - some-network
      - other-network

Public network

Version: "3" networks: default-network:

Aliases (to be added)

Aliases for the service on the network

Version: "3.8" services: web: image: "nginx:alpine" networks:-new worker: image: "my-worker-image:latest" networks:-legacy db: image: mysqlnetworks: new: aliases:-database legacy: aliases:-mysqlnetworks: new: legacy:

Ipv4_address, ipv6_address (to be added)

Version: "3. 8" services: app: image: nginx:alpine networks: app_net: ipv4_address: 172.16.238.10 ipv6_address: 2001:3984:3989::10networks: app_net: ipam: driver: default config:-subnet: "172.16.238.0 driver 24"-subnet: "2001 driver 398449

Services

The most important part, used to configure each service

Build

Used to build an image. When both the build and image fields are present, the name and tag specified by image are used as the name and tag of the built image.

Version: "3.8" # docker compose version services: webapp: # docker-compose defines the service (container) name, which is mainly aimed at the parameters of the docker-compose command and is not necessarily consistent with the container name seen by docker ps. Build: # use Dockerfile to build the image context:. / dir context path The relative path is relative to the compose file path dockerfile: Dockerfile-alternate # specify the Dockerfile file name args: # specify the parameters of the Dockerfile environment variable buildno: 1 # directory and list

Context

It can be a relative path or the URL of a git repository.

build:
  context: ./dir

Dockerfile

Specifies the Dockerfile file name; context must also be specified.

build:
  context: .
  dockerfile: Dockerfile-alternate

Args

Corresponds to the ARG instruction in the Dockerfile; it is used to pass variables to docker build.

ARG buildno
ARG gitcommithash

RUN echo "Build number: $buildno"              # bash-like style
RUN echo "Based on commit: $gitcommithash"

You can use list or map to set args

build:
  context: .
  args:        # map
    buildno: 1
    gitcommithash: cdc3b19

build:
  context: .
  args:        # list
    - buildno=1
    - gitcommithash=cdc3b19

Tips

If you need to pass Boolean values, wrap them in double quotes ("true", "false", "yes", "no", "on", "off") so that the parser treats them as strings.
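For instance, a minimal sketch (the ENABLE_CACHE build argument is a hypothetical name used only for illustration):

build:
  context: .
  args:
    ENABLE_CACHE: "false"   # quoted, so YAML passes the string "false" instead of a boolean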

Cache_from

Specifies images used as cache sources for the build.

build:
  context: .
  cache_from:
    - alpine:latest
    - corp/web_app:3.14

Labels

Sets metadata for the image, like the LABEL instruction in a Dockerfile.

build:
  context: .
  labels:   # map
    com.example.description: "Accounting webapp"
    com.example.department: "Finance"
    com.example.label-with-empty-value: ""

build:
  context: .
  labels:   # list
    - "com.example.description=Accounting webapp"
    - "com.example.department=Finance"
    - "com.example.label-with-empty-value"

Network

Same as the docker --network option: specifies a network for the container during the build, which can be understood as setting up a local area network.

Bridging connects two physical LANs.

Three modes

build:
  context: .
  network: host              # host mode: lowest network latency, same performance as the host

build:
  context: .
  network: custom_network_1  # a network created beforehand

build:
  context: .
  network: none              # no network

Shm_size

Sets the size of the /dev/shm directory in the container.

The /dev/shm directory is important: it is not on the hard disk but in memory, and its default size is half of the available memory. Files stored in it will not be emptied. Sizing this directory inside the container can affect the container's performance to a certain extent.

build:
  context: .
  shm_size: '2gb'     # set the size with a string

build:
  context: .
  shm_size: 10000000  # set the size in bytes

Command

Equivalent to the CMD command in Dockerfile

command: bundle exec thin -p 3000                   # shell-like
command: ["bundle", "exec", "thin", "-p", "3000"]   # json-like

Container_name

Equivalent to docker run --name.

container_name: my-web-container

Depends_on

Used to describe dependencies between services

In docker-compose up, the startup sequence is determined according to depends_on

Version: "3.8" services: web: build: Depends_on: # start db and redis-db-redis redis: image: redis db: image: postgres first

Tips:

docker-compose does not wait for the containers listed in depends_on to reach a "ready" state; it only waits for them to be started, so you need to check the container status yourself after startup completes. The official documentation offers a workaround using a wrapper shell script, which is not repeated here (see the sketch after the quoted note below).

depends_on does not wait for db and redis to be "ready" before starting web, only until they have been started. If you need to wait for a service to be ready, see Controlling startup order for more on this problem and strategies for solving it.

- from https://docs.docker.com/compo...
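A minimal sketch of that kind of workaround, assuming a wait-for-it.sh script has been copied into the web image (the script name, the db:5432 address and the python app.py command are hypothetical examples, not from the original article):

version: "3.8"
services:
  web:
    build: .
    depends_on:
      - db
    # wait until db accepts TCP connections on port 5432, then start the app
    command: ["./wait-for-it.sh", "db:5432", "--", "python", "app.py"]
  db:
    image: postgres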

Devices

Mounts external devices; same function as --device.

Devices:-"/ dev/ttyUSB0:/dev/ttyUSB0"

Dns

Custom DNS server addresses.

dns: 8.8.8.8   # single string value

dns:           # list
  - 8.8.8.8
  - 9.9.9.9

Dns_search

Custom DNS search domains.

dns_search: example.com   # single string value

dns_search:               # list
  - dc1.example.com
  - dc2.example.com

Entrypoint

Overwrite the default entrypoint

entrypoint: /code/entrypoint.sh

It can also be written in the same exec form as in a Dockerfile:

Entrypoint: ["php", "- d", "memory_limit=-1", "vendor/bin/phpunit"]

Tips:

Setting entrypoint in docker-compose.yaml clears the CMD instruction in the Dockerfile and overrides any ENTRYPOINT instructions there.

Env_file

Adds environment variable files to docker-compose.yaml. If the compose file is specified with docker-compose -f FILE, the paths in env_file are relative to the directory containing FILE.

env_file: .env             # single value

env_file:                  # list
  - ./common.env
  - ./apps/web.env
  - /opt/runtime_opts.env

Tips:

.env file format:

# Set Rails/Rack environment
# lines starting with '#' are comments, and blank lines are ignored
RACK_ENV=development       # the format is VAR=VAL

Environment variables in the .env file are not visible during the build process; they are only read by docker-compose.yaml. If you need such a variable during build, pass it through the args sub-option under build, as shown in the sketch below.
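A minimal sketch of that pattern, assuming the .env file defines a variable named APP_VERSION and the Dockerfile declares a matching ARG (both names are hypothetical examples):

# .env
#   APP_VERSION=1.2.3

version: "3.8"
services:
  web:
    build:
      context: .
      args:
        APP_VERSION: ${APP_VERSION}   # forwards the value from .env into the Dockerfile ARG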

When multiple .env files are specified, the official documentation puts it this way:

Keep in mind that the order of files in the list is significant in determining the value assigned to a variable that shows up more than once.

- from https://docs.docker.com/compo...

Literally translated as

Keep in mind that the order of the files in the list is important for determining the values assigned to variables that appear multiple times.

Because environment files are processed from top to bottom, this means that when multiple files define the same variable, the value from the last file wins (see the sketch below).
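A minimal sketch of that behaviour (the file names common.env and override.env and the variable RACK_ENV are hypothetical examples):

# common.env
#   RACK_ENV=development
# override.env
#   RACK_ENV=production

env_file:
  - ./common.env
  - ./override.env    # listed last, so the container sees RACK_ENV=production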

Environment

Adds environment variables.

environment:   # map
  RACK_ENV: development
  SHOW: 'true'
  SESSION_SECRET:

environment:   # list
  - RACK_ENV=development
  - SHOW=true
  - SESSION_SECRET

Tips:

Variables defined here are not visible during the build process; they are only applied by docker-compose.yaml when the container runs. If you need such a variable during build, pass it through the args sub-option under build.

Once the container is running, these environment variables can be read by our own code, for example:

import (
    "fmt"
    "os"
)

func getEnvInfo() string {
    rackEnv := os.Getenv("RACK_ENV")
    fmt.Println(rackEnv)
    return rackEnv
}

// output:
// development

Expose

Exposes ports for communication between services only; the ports are internal and are not published to the host, similar to the EXPOSE instruction in a Dockerfile.

Expose:-"3000"-"8000"

External_links

Links to containers started outside this docker-compose.yaml.

external_links:
  - redis_1
  - project_db_1:mysql
  - project_db_1:postgresql

Tips:

Using networks is officially recommended instead.

Extra_hosts

Adds custom host-to-IP mappings, same as --add-host.

Extra_hosts:-"somehost:162.242.195.82"-"otherhost:50.31.209.229"

The corresponding entries are created in the /etc/hosts file inside the container:

162.242.195.82  somehost
50.31.209.229   otherhost

Healthcheck

Same as the HEALTHCHECK instruction in Dockerfile

Healthcheck: test: ["CMD", "curl", "- f", "http://localhost"] interval: 1m30s timeout: 10s retries: 3 start_period: 40s"

Use disable: true, which is equivalent to test: ["NONE"].

healthcheck:
  disable: true

Image

Specifies the image to pull or use; it can be written as repository/tag or as a partial image ID.

image: redis    # the default tag is latest
image: ubuntu:18.04
image: tutum/influxdb
image: example-registry.com:4000/postgresql
image: a4bc65fd

Init

Runs an init process inside the container that forwards signals and reaps processes.

Version: "3.8" services: web: image: alpine:latest init: true

Tips:

The default init binary used is Tini, and it is installed at /usr/libexec/docker-init on the daemon host. You can configure the daemon to use a custom init binary through the init-path configuration option.

Isolation

Specifies the container isolation technology. Linux supports only the default value; Windows supports the three values default, process and hyperv. For more information, see the Docker Engine docs.
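A minimal sketch, assuming a Windows host (the service name and image are hypothetical examples):

version: "3.8"
services:
  web:
    image: mcr.microsoft.com/windows/servercore:ltsc2019   # hypothetical Windows image
    isolation: process                                      # on Linux, only default is valid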

Labels

Sets metadata for the container, like the LABEL instruction in a Dockerfile.

labels:   # map
  com.example.description: "Accounting webapp"
  com.example.department: "Finance"
  com.example.label-with-empty-value: ""

labels:   # list
  - "com.example.description=Accounting webapp"
  - "com.example.department=Finance"
  - "com.example.label-with-empty-value"

Links

A legacy feature of older versions; not recommended.

Logging

Sets logging parameters for the current service.

logging:
  driver: syslog
  options:
    syslog-address: "tcp://192.168.0.42:123"

The driver parameter is the same as the --log-driver option.

Driver: "json-file" driver: "syslog" driver: "none"

Tips:

Only with the json-file and journald drivers can docker-compose up and docker-compose logs show log output; with any other driver no logs are printed.

Log settings are specified with options, same as docker run --log-opt; the format is key-value pairs.

Driver: "syslog" options: syslog-address: "tcp://192.168.0.42:123"

The default log driver is json-file, which allows you to set storage limits

options:
  max-size: "200k"   # maximum size of a single log file
  max-file: "10"     # maximum number of log files

Tips:

The options above are only supported by the json-file log driver. Different drivers support different parameters; for details, see the list below.

List of supported drivers

Driver        Description
none          No logs are output.
local         Logs are stored in a custom format designed to minimize overhead.
json-file     Logs are formatted as JSON. This is the default log driver.
syslog        Writes log messages to the syslog facility. The syslog daemon must run on the host.
journald      Writes log messages to journald. The journald daemon must run on the host.
gelf          Writes log messages to Graylog Extended Log Format (GELF) endpoints such as Graylog or Logstash.
fluentd       Writes log messages to fluentd (forward input). The fluentd daemon must run on the host.
awslogs       Writes log messages to Amazon CloudWatch Logs.
splunk        Writes log messages to Splunk using the HTTP Event Collector.
etwlogs       Writes log messages as Event Tracing for Windows (ETW) events. Available only on Windows platforms.
gcplogs       Writes log messages to Google Cloud Platform (GCP) Logging.
logentries    Writes log messages to Rapid7 Logentries.

Tips:

Please refer to Configure logging drivers for details.

Ports

Publishes ports to the host.

Short syntax:

Either specify both ports as HOST:CONTAINER, or specify only the container port (an ephemeral host port is chosen).

Ports:-"3000"-"3000-3005"-"8000lance 8000"-"9090-9091RV 8080-8081"-"49100RV 22"-"127.0.0.1RV 8001 8001"-"127.0.0.1RV 5000-50105000 RV-50105000"-"6060:6060/udp"-"12400-12500Rd 1240"

Tips:

When mapping ports in HOST:CONTAINER format, using a container port lower than 60 may give wrong results, because YAML parses numbers in the xx:yy format as base-60 (sexagesimal) values, much like a time. Therefore, it is recommended to always write port mappings explicitly as strings (see the sketch below).
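A minimal sketch of the difference (the host port 127 is a hypothetical example):

ports:
  - 127:22     # unquoted: YAML may read this as the base-60 number 7642 instead of a port mapping
  - "127:22"   # quoted: parsed as the string "127:22", which is what docker-compose expects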

Long syntax

Long syntax allows fields that are not allowed by short syntax.

target: the port inside the container
published: the publicly exposed port
protocol: the port protocol (tcp or udp)
mode: host publishes a host port on each node; ingress load-balances the port in swarm mode

ports:
  - target: 80
    published: 8080
    protocol: tcp
    mode: host

Restart

Container restart policy

Restart: "no" # fail to restart restart: always # always restart restart after failure: on-failure # restart restart only when the error code is on-failure: unless-stopped # do not restart after manual stop

Secrets (to be added)

Volumes

Used to mount data volumes

Short syntax

Short syntax uses the simplest format [SOURCE:]TARGET[:MODE]

SOURCE can be a host path or a volume name. TARGET is the path inside the container. MODE is ro for read-only or rw for read-write (the default).

If a relative host path is used, it is expanded relative to the directory of docker-compose.yaml.

volumes:
  # specify only a path inside the container; docker creates an anonymous volume for it
  - /var/lib/mysql
  # mount an absolute host path
  - /opt/data:/var/lib/mysql
  # mount a path relative to the compose file
  - ./cache:/tmp/cache
  # mount a path relative to the user's home directory, read-only
  - ~/configs:/etc/configs/:ro
  # named volume
  - datavolume:/var/lib/mysql

Long syntax

Long syntax allows the use of fields that cannot be expressed by short syntax

type: the mount type: volume, bind, tmpfs or npipe
source: the mount source, either a host path or a volume name defined in the top-level volumes key; not applicable to tmpfs mounts
target: the mount path inside the container
read_only: mount the path as read-only
bind: additional bind options
  propagation: the propagation mode used for the bind
volume: additional volume options
  nocopy: disable copying of data from the container when the volume is created
tmpfs: additional tmpfs options
  size: the size of the tmpfs mount in bytes
consistency: the consistency requirement of the mount: consistent (host and container have an identical view), cached (read cache; the host's view is authoritative) or delegated (read-write cache; the container's view is authoritative)

version: "3.8"
services:
  web:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - type: volume
        source: mydata
        target: /data
        volume:
          nocopy: true
      - type: bind
        source: ./static
        target: /opt/app/static

networks:
  webnet:

volumes:
  mydata:

The above is all about the writing rules of docker compose. If you have learned some knowledge or skills from it, you can share it for more people to see.
