How to use docker to quickly deploy Elasticsearch clusters


This article explains in detail how to use docker to quickly deploy an Elasticsearch cluster. The editor finds it very practical and shares it here as a reference; I hope you get something out of it after reading.

Note that version 6.x can no longer specify the location of the configuration file through the -E path.conf parameter. The documentation states:

For the archive distributions, the config directory location defaults to $ES_HOME/config. The location of the config directory can be changed via the ES_PATH_CONF environment variable as follows:

ES_PATH_CONF=/path/to/my/config ./bin/elasticsearch

Alternatively, you can export the ES_PATH_CONF environment variable via the command line or via your shell profile.

In other words, the config path is now set via the ES_PATH_CONF environment variable (see the official documentation). If you deploy multiple instances on a single machine without containers, pay particular attention to this.
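For example, here is a minimal sketch of running two instances on one machine with separate config directories (the paths are illustrative, not from this article):

# each instance reads its own config directory via ES_PATH_CONF
ES_PATH_CONF=/opt/es/instance1/config ./bin/elasticsearch -d
ES_PATH_CONF=/opt/es/instance2/config ./bin/elasticsearch -d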

Preparatory work

Install docker & docker-compose

It is recommended to use daocloud for an accelerated installation:

# docker
curl -sSL https://get.daocloud.io/docker | sh
# docker-compose
curl -L \
  https://get.daocloud.io/docker/compose/releases/download/1.23.2/docker-compose-`uname -s`-`uname -m` \
  > /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
# view the installation result
docker-compose -v

Data directories

# create the data / log directories; here we deploy 3 nodes
mkdir -p /opt/elasticsearch/data/{node0,node1,node2}
mkdir -p /opt/elasticsearch/logs/{node0,node1,node2}
cd /opt/elasticsearch
# permissions: even with privileged containers I still ran into permission issues, so simply 0777
chmod 0777 data/* -R && chmod 0777 logs/* -R
# prevent the JVM from reporting errors at startup
echo vm.max_map_count=262144 >> /etc/sysctl.conf
sysctl -p
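Before starting the containers, you can read the kernel setting back to confirm it took effect (a quick check, not from the original article):

# should print: vm.max_map_count = 262144
sysctl vm.max_map_count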

Orchestrating the service with docker-compose

Create an orchestration file

vim docker-compose.yml

Parameter description

- cluster.name=elasticsearch-cluster

Cluster name

- node.name=node0

- node.master=true

- node.data=true

Node name; whether the node can act as a master node; whether it stores data

- bootstrap.memory_lock=true

Lock the process memory so it cannot be swapped out, which improves performance

- http.cors.enabled=true

- http.cors.allow-origin=*

Enable CORS so that the Head plugin can access the cluster

"ES_JAVA_OPTS=-Xms512m-Xmx512m"

JVM memory size configuration

-"discovery.zen.ping.unicast.hosts=elasticsearch_n0,elasticsearch_n1,elasticsearch_n2"

-"discovery.zen.minimum_master_nodes=2"

Versions after 5.2.1 no longer support multicast discovery, so you must manually list the TCP transport addresses of the cluster's nodes; they are used for node discovery and failover. The default transport port is 9300; if you use another port you must specify it explicitly. Here the nodes can talk to each other directly over the container network, or you could map each node's 9300 to the host and communicate over host ports.

Set the quorum for master election during failover: quorum = nodes / 2 + 1. With 3 master-eligible nodes this gives 3 / 2 + 1 = 2.

Of course, you can also mount your own configuration file. In the ES image the configuration file is /usr/share/elasticsearch/config/elasticsearch.yml; mount it as follows:

volumes:
  - path/to/local/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
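If you go this route, a minimal sketch of a mounted elasticsearch.yml could look like the following; the values simply mirror the environment variables used in this tutorial (adjust node.name per node):

# minimal per-node elasticsearch.yml sketch (values mirror the compose environment below)
cluster.name: elasticsearch-cluster
node.name: node0
node.master: true
node.data: true
bootstrap.memory_lock: true
http.cors.enabled: true
http.cors.allow-origin: "*"
discovery.zen.ping.unicast.hosts: ["elasticsearch_n0", "elasticsearch_n1", "elasticsearch_n2"]
discovery.zen.minimum_master_nodes: 2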

docker-compose.yml

version: '3'
services:
  elasticsearch_n0:
    image: elasticsearch:6.6.2
    container_name: elasticsearch_n0
    privileged: true
    environment:
      - cluster.name=elasticsearch-cluster
      - node.name=node0
      - node.master=true
      - node.data=true
      - bootstrap.memory_lock=true
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=elasticsearch_n0,elasticsearch_n1,elasticsearch_n2"
      - "discovery.zen.minimum_master_nodes=2"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./data/node0:/usr/share/elasticsearch/data
      - ./logs/node0:/usr/share/elasticsearch/logs
    ports:
      - 9200:9200
  elasticsearch_n1:
    image: elasticsearch:6.6.2
    container_name: elasticsearch_n1
    privileged: true
    environment:
      - cluster.name=elasticsearch-cluster
      - node.name=node1
      - node.master=true
      - node.data=true
      - bootstrap.memory_lock=true
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=elasticsearch_n0,elasticsearch_n1,elasticsearch_n2"
      - "discovery.zen.minimum_master_nodes=2"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./data/node1:/usr/share/elasticsearch/data
      - ./logs/node1:/usr/share/elasticsearch/logs
    ports:
      - 9201:9200
  elasticsearch_n2:
    image: elasticsearch:6.6.2
    container_name: elasticsearch_n2
    privileged: true
    environment:
      - cluster.name=elasticsearch-cluster
      - node.name=node2
      - node.master=true
      - node.data=true
      - bootstrap.memory_lock=true
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=elasticsearch_n0,elasticsearch_n1,elasticsearch_n2"
      - "discovery.zen.minimum_master_nodes=2"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./data/node2:/usr/share/elasticsearch/data
      - ./logs/node2:/usr/share/elasticsearch/logs
    ports:
      - 9202:9200

Here we expose host HTTP ports 9200/9201/9202 for node0/node1/node2; the TCP transport of each instance uses the default port 9300 and goes over the container network.

If a multi-host deployment is needed, map the ES transport port (transport.tcp.port: 9300) of each node to a host port and list the address of every host in discovery.zen.ping.unicast.hosts:

# for example, one of the hosts is 192.168.1.100
...
    - "discovery.zen.ping.unicast.hosts=192.168.1.100:9300,192.168.1.101:9300,192.168.1.102:9300"
...
    ports:
      - 9300:9300
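As a fuller sketch (not from the original article), one node's service on the 192.168.1.100 host could look like the following; the network.publish_host line is an addition that is usually needed in multi-host setups so the other hosts reach this node at the host address rather than the container IP:

# sketch: one node of the cluster on host 192.168.1.100; other settings match the compose file above
version: '3'
services:
  elasticsearch_n0:
    image: elasticsearch:6.6.2
    container_name: elasticsearch_n0
    environment:
      - cluster.name=elasticsearch-cluster
      - node.name=node0
      - network.publish_host=192.168.1.100
      - "discovery.zen.ping.unicast.hosts=192.168.1.100:9300,192.168.1.101:9300,192.168.1.102:9300"
      - "discovery.zen.minimum_master_nodes=2"
    ports:
      - 9200:9200
      - 9300:9300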

Create and start the service

[root@localhost elasticsearch]# docker-compose up -d
[root@localhost elasticsearch]# docker-compose ps
      Name                   Command                State                 Ports
-----------------------------------------------------------------------------------------------
elasticsearch_n0   /usr/local/bin/docker-entr ...   Up      0.0.0.0:9200->9200/tcp, 9300/tcp
elasticsearch_n1   /usr/local/bin/docker-entr ...   Up      0.0.0.0:9201->9200/tcp, 9300/tcp
elasticsearch_n2   /usr/local/bin/docker-entr ...   Up      0.0.0.0:9202->9200/tcp, 9300/tcp

# if a service fails to start, check the logs
[root@localhost elasticsearch]# docker-compose logs
# the cause is most often the data directory permissions or the vm.max_map_count setting
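Once the containers are up, each node should answer on its mapped HTTP port; a quick check (the host IP is the one used later in this article):

curl http://192.168.20.6:9200   # node0
curl http://192.168.20.6:9201   # node1
curl http://192.168.20.6:9202   # node2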

View cluster status

192.168.20.6 is my server address

You can view the cluster status by accessing http://192.168.20.6:9200/_cat/nodes?v:

ip         heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
172.25.0.3           36          98  79    3.43    0.88     0.54 mdi       *      node0
172.25.0.2           48          98  79    3.43    0.88     0.54 mdi       -      node2
172.25.0.4           42          98  51    3.43    0.88     0.54 mdi       -      node1
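The same check can be done from the command line, and the cluster health API adds an overall green/yellow/red status (a sketch using this article's address):

curl http://192.168.20.6:9200/_cat/nodes?v
curl http://192.168.20.6:9200/_cluster/health?pretty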

Verify Failover

Check the status through the cluster API

Simulate the master node going offline: the cluster starts electing a new master node, then migrates and re-allocates the shards.

[root@localhost elasticsearch]# docker-compose stop elasticsearch_n0
Stopping elasticsearch_n0 ... done
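While node0 is down you can watch the re-election and shard reallocation from one of the surviving nodes (a sketch; 9201 maps to node1's HTTP port in this compose file):

curl http://192.168.20.6:9201/_cluster/health?pretty
curl http://192.168.20.6:9201/_cat/shards?v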

Cluster status (note that the original master node is offline, so query through a different HTTP port). The downed node still appears in the cluster; if it does not recover within a certain period it will be removed.

ip         heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
172.25.0.2           57          84   5    0.46    0.65     0.50 mdi       -      node2
172.25.0.4           49          84   5    0.46    0.65     0.50 mdi       *      node1
172.25.0.3                                                       mdi       -      node0

Wait for a while.

ip         heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
172.25.0.2           44          84   1    0.10    0.33     0.40 mdi       -      node2
172.25.0.4           34          84   1    0.10    0.33     0.40 mdi       *      node1

Restore Node node0

[root@localhost elasticsearch]# docker-compose start elasticsearch_n0
Starting elasticsearch_n0 ... done

Wait for a while.

ip         heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
172.25.0.2           52          98  25    0.67    0.43     0.43 mdi       -      node2
172.25.0.4           43          98  25    0.67    0.43     0.43 mdi       *      node1
172.25.0.3           40          98  46    0.67    0.43     0.43 mdi       -      node0
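If you want to follow the shards moving back onto node0, the recovery API can be polled (a sketch, not from the original article):

curl http://192.168.20.6:9201/_cat/recovery?v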

Observing with the Head plugin

git clone git://github.com/mobz/elasticsearch-head.git
cd elasticsearch-head
npm install
npm run start
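To have something visible in Head's shard diagram, you can create a small test index first (a sketch; the index name and shard counts are illustrative, not from this article):

# 3 primary shards and 1 replica make the migration easy to see
curl -X PUT http://192.168.20.6:9200/test_index \
  -H 'Content-Type: application/json' \
  -d '{"settings": {"number_of_shards": 3, "number_of_replicas": 1}}'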

The cluster status diagram makes it easier to see the process of automatic data migration.

1. Under normal conditions the cluster's data is distributed safely across the 3 nodes.

2. Take the master node node1 offline; the cluster begins to migrate data

Migrating

Migration completed

3. Restore the node1 node

Notes on problems encountered

Elasticsearch watermark

After deployment, when creating an index you may find some shards stuck in the Unassigned state. This is caused by the Elasticsearch disk watermark limits (low, high, flood_stage): by default, disk usage above 85% triggers the warning. For development you can simply turn the check off and the shards will be allocated across the nodes; in production, decide according to your own situation.
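Before changing any settings, it can help to confirm that disk usage really is the cause; two diagnostic calls (a sketch using this article's address):

curl http://192.168.20.6:9200/_cat/allocation?v
curl http://192.168.20.6:9200/_cluster/allocation/explain?pretty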

curl -X PUT http://192.168.20.6:9201/_cluster/settings \
  -H 'Content-Type: application/json' \
  -d '{"transient": {"cluster.routing.allocation.disk.threshold_enabled": false}}'

This is the end of the article on "how to use docker to quickly deploy Elasticsearch clusters". I hope the content above has been helpful and that you have learned something from it. If you found the article useful, please share it so more people can see it.
