2025-04-05 Update From: SLTechnology News&Howtos
Shulou(Shulou.com) 06/01 Report
This article shows how to build an ELK cluster with Docker Compose. It is intended as a practical reference.
Planning
The plan is to create three ES instances that form a cluster, plus a Kibana instance connected to it. Each ES instance uses a local configuration file, which makes the configuration easy to save and keep under version control. Kibana's configuration file is also kept locally and mounted into the container by file mapping.
The overall directory structure is as follows:
$ tree .
.
├── docker-compose.yml
├── kibana.yml
├── node1
│   └── es1.yml
├── node2
│   └── es2.yml
└── node3
    └── es3.yml

3 directories, 5 files
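The layout above can be recreated with a short shell sketch. The file and directory names mirror the tree listing; the scratch directory is just for illustration, so adjust the location to taste.

```shell
# Recreate the directory skeleton from the tree listing above.
workdir=$(mktemp -d)   # scratch directory for the sketch
cd "$workdir"
mkdir -p node1 node2 node3
touch docker-compose.yml kibana.yml node1/es1.yml node2/es2.yml node3/es3.yml
find . -type f | sort
```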
Orchestration file
The main orchestration file is docker-compose.yml:
version: "2.2"
services:
  es-node1:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.7.0
    hostname: es-node1
    expose:            # visible to other containers only, not published to the host
      - "9001"
    ports:             # published to the host
      - "9200:9200"
      - "9300:9300"
    volumes:
      - ~/Projects/sh-valley/docker-conf/elasticstack/cluster/node1/es1.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    environment:
      - cluster.name=es-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms256m -Xmx256m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    networks:
      es-cluster-network:
        ipv4_address: 172.21.0.10
  es-node2:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.7.0
    hostname: es-node2
    expose:            # visible to other containers only, not published to the host
      - "9002"
    ports:             # published to the host
      - "9201:9201"
      - "9301:9301"
    volumes:
      - ~/Projects/sh-valley/docker-conf/elasticstack/cluster/node2/es2.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    environment:
      - cluster.name=es-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms256m -Xmx256m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    networks:
      es-cluster-network:
        ipv4_address: 172.21.0.11
  es-node3:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.7.0
    hostname: es-node3
    expose:            # visible to other containers only, not published to the host
      - "9003"
    ports:             # published to the host
      - "9202:9202"
      - "9302:9302"
    volumes:
      - ~/Projects/sh-valley/docker-conf/elasticstack/cluster/node3/es3.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    environment:
      - cluster.name=es-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms256m -Xmx256m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    networks:
      es-cluster-network:
        ipv4_address: 172.21.0.12
  kibana:
    image: docker.elastic.co/kibana/kibana:6.7.0
    ports:
      - "5601:5601"
    volumes:
      - ~/Projects/sh-valley/docker-conf/elasticstack/cluster/kibana.yml:/usr/share/kibana/config/kibana.yml
    environment:
      - ELASTICSEARCH_URL=http://es-node1:9200
    networks:
      - es-cluster-network
networks:
  es-cluster-network:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.21.0.0/16
          gateway: 172.21.0.1
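The three per-node volume mappings differ only in the node number, so a small loop can generate them and avoid copy-paste typos. `CONF_ROOT` below is an assumed stand-in for the host configuration directory used in the compose file:

```shell
# Hypothetical helper: print the three per-node volume mapping lines.
# CONF_ROOT is an assumption standing in for the host config directory.
CONF_ROOT=~/Projects/sh-valley/docker-conf/elasticstack/cluster
volume_lines=$(for i in 1 2 3; do
  echo "      - ${CONF_ROOT}/node${i}/es${i}.yml:/usr/share/elasticsearch/config/elasticsearch.yml"
done)
printf '%s\n' "$volume_lines"
```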
The ES configuration files are similar for each node; es1.yml is shown as an example:
cluster.name: elasticsearch-cluster
node.name: es-node1
network.bind_host: 0.0.0.0
network.publish_host: 172.21.0.10
http.port: 9200
transport.tcp.port: 9300
http.cors.enabled: true
http.cors.allow-origin: "*"
node.master: true
node.data: true
discovery.zen.ping.unicast.hosts: ["172.21.0.10:9300", "172.21.0.11:9301", "172.21.0.12:9302"]
discovery.zen.minimum_master_nodes: 2
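In Elasticsearch 6.x, discovery.zen.minimum_master_nodes should be set to the quorum of master-eligible nodes, (nodes / 2) + 1, to avoid split-brain. For the three nodes here that is 2, which a quick check confirms:

```shell
# Quorum rule for discovery.zen.minimum_master_nodes in ES 6.x:
# floor(master_eligible_nodes / 2) + 1
nodes=3
quorum=$(( nodes / 2 + 1 ))
echo "minimum_master_nodes should be: $quorum"   # → minimum_master_nodes should be: 2
```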
The configuration file for Kibana is as follows:
server.name: kibana
server.host: "0"
elasticsearch.hosts: ["http://es-node1:9200", "http://es-node2:9201", "http://es-node3:9202"]
xpack.monitoring.ui.container.elasticsearch.enabled: false
Start command
Once the configuration files are ready, you can start the cluster:
$ docker-compose up -d
Startup may take a while. Once the cluster is up, you can list its nodes from the command line:
$ curl http://localhost:9200/_cat/nodes
172.21.0.12 51 96 29 6.53 6.43 3.72 md  - es-node3
172.21.0.11 47 96 30 6.53 6.43 3.72 mdi - es-node2
172.21.0.10 49 96 30 6.53 6.43 3.72 mdi * es-node1
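In _cat/nodes output, the next-to-last column marks the elected master with "*" and the last column is the node name. A small sketch that pulls the elected master out of output shaped like the above:

```shell
# Parse sample _cat/nodes output (same shape as above) and report the elected master.
nodes_output='172.21.0.12 51 96 29 6.53 6.43 3.72 md  - es-node3
172.21.0.11 47 96 30 6.53 6.43 3.72 mdi - es-node2
172.21.0.10 49 96 30 6.53 6.43 3.72 mdi * es-node1'
# The next-to-last field is "*" only on the elected master's row.
master=$(printf '%s\n' "$nodes_output" | awk '$(NF-1) == "*" { print $NF }')
echo "elected master: $master"   # → elected master: es-node1
```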
During subsequent use, you can start and stop the services with docker-compose commands. If you no longer need the instances, docker-compose down stops and removes the containers.
Thank you for reading! This concludes the article on building an ELK cluster with Docker Compose. I hope it was helpful; if you found it useful, please share it with others.