2025-01-26 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 06/03 Report
Three virtual machines are used in this experiment:
192.168.209.168
192.168.209.169
192.168.209.170
Back up and then edit the configuration file:
cp /usr/elasticsearch-6.2.3/config/elasticsearch.yml /usr/elasticsearch-6.2.3/config/elasticsearch.yml.bak
vi /usr/elasticsearch-6.2.3/config/elasticsearch.yml
cluster.name: ES_Cluster_Pcdog
node.name: 192.168.209.168
path.data: /usr/elasticsearch-6.2.3/data
path.logs: /usr/elasticsearch-6.2.3/logs
network.host: 192.168.209.168
node.master: true
node.data: true
discovery.zen.ping.unicast.hosts: ["192.168.209.168", "192.168.209.169", "192.168.209.170"]
discovery.zen.minimum_master_nodes: 3
discovery.zen.fd.ping_timeout: 120s
discovery.zen.fd.ping_retries: 6
discovery.zen.fd.ping_interval: 30s
cluster.routing.allocation.cluster_concurrent_rebalance: 40
cluster.routing.allocation.node_concurrent_recoveries: 40
cluster.routing.allocation.node_initial_primaries_recoveries: 40
bootstrap.memory_lock: false
bootstrap.system_call_filter: false
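A note on `discovery.zen.minimum_master_nodes`: the value is normally derived from the quorum formula N/2 + 1, which for this three-node cluster gives 2, not 3 (setting it to 3 means no master can be elected unless all three nodes are up). A quick shell sketch of the calculation:

```shell
#!/bin/sh
# Quorum formula for discovery.zen.minimum_master_nodes:
# with N master-eligible nodes, a majority is N/2 + 1 (integer division).
master_eligible_nodes=3
quorum=$(( master_eligible_nodes / 2 + 1 ))
echo "minimum_master_nodes should be: $quorum"   # prints: minimum_master_nodes should be: 2
```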
Check the configuration, filtering out comment lines:
cat elasticsearch.yml | grep -v "^#"
Start node 168:
./bin/elasticsearch
The cluster status is yellow because the other two nodes have not been started yet.
[2018-04-20T11:36:16,000][INFO ][o.e.c.r.a.AllocationService] [192.168.209.168] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[logstash-2018.04.18][1]] ...]).
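Cluster health can also be confirmed over HTTP with the `_cluster/health` API; a minimal sketch, assuming the node listens on the default HTTP port 9200 (the parsing below is shown against a captured sample response, not a live call):

```shell
#!/bin/sh
# On the real node you would run:
#   curl -s http://192.168.209.168:9200/_cluster/health?pretty
# Here we extract the status field from a captured sample response:
response='{"cluster_name":"ES_Cluster_Pcdog","status":"yellow","number_of_nodes":1}'
status=$(echo "$response" | sed 's/.*"status":"\([a-z]*\)".*/\1/')
echo "cluster status: $status"   # prints: cluster status: yellow
```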
Copy the configuration file to the other two nodes:
scp elasticsearch.yml 192.168.209.169:/usr/elasticsearch-6.2.3/config
scp elasticsearch.yml 192.168.209.170:/usr/elasticsearch-6.2.3/config
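Note that `node.name` and `network.host` in the copied file still say 192.168.209.168, so they must be adjusted on each target node before starting it. A small `sed` sketch (shown against an inline sample; on the real node the target file would be the copied elasticsearch.yml). Anchoring the patterns with `^node.name:` and `^network.host:` keeps the `discovery.zen.ping.unicast.hosts` list, which legitimately contains 192.168.209.168, untouched:

```shell
#!/bin/sh
# Rewrite the node-specific fields after copying the config, e.g. on node 169.
old_ip=192.168.209.168
new_ip=192.168.209.169
printf 'node.name: %s\nnetwork.host: %s\n' "$old_ip" "$old_ip" \
  | sed "s/^node.name: $old_ip/node.name: $new_ip/; s/^network.host: $old_ip/network.host: $new_ip/"
```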
Then start elasticsearch on each of the remaining nodes, node 002 and node 003, in the same way.
In the master node's log directory /usr/elasticsearch-6.2.3/logs you can follow the cluster log:
tail -f ES_Cluster_Pcdog.log
But the log is not updating. Checking the other two nodes, both have reported errors. Is it because I cloned the machines? Time to track down the mistake.
[2018-04-20T11:49:10,794][INFO ][o.e.d.z.ZenDiscovery] [192.168.209.169] failed to send join request to master [{192.168.209.168}{-aU5102ETMW8isf85FWEHA}{192.168.209.168}{192.168.209.168:9300}], reason [RemoteTransportException[[192.168.209.168][internal:discovery/zen/join]]; nested: IllegalArgumentException[can't add node {192.168.209.169}{-aU5102ETMW8isf85FWEHA}{192.168.209.169}{192.168.209.169:9300}, found existing node {192.168.209.168}{-aU5102ETMW8isf85FWEHA}{192.168.209.168}{192.168.209.168:9300} with the same id but is a different node instance]]
It turns out the cloned data directory still contains the original node's state. Delete it:
[pactera@ELK_002 data]$ pwd
/usr/elasticsearch-6.2.3/data
[pactera@ELK_002 data]$ rm -rf nodes/
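The "same id but is a different node instance" error happens because each data directory stores a persistent node id, and VMs cloned from the same image all carry the same one. A sketch of the cleanup against a throwaway directory (on the real node the path is /usr/elasticsearch-6.2.3/data, and elasticsearch should be stopped first):

```shell
#!/bin/sh
# Simulate the cloned data directory, then remove the stale node state.
datadir=$(mktemp -d)
mkdir -p "$datadir/nodes/0"
rm -rf "$datadir/nodes"
ls "$datadir"        # prints nothing: the stale node state is gone
rmdir "$datadir"
```

A fresh node id is generated on the next startup, so the node can join the cluster cleanly.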
After restarting, all three nodes are normal: nodes 169 and 170 discover node 168 as master and join it, and the cluster status changes from yellow to green.
Later I want to study how the master election process works. In this cluster all three nodes are both master-eligible and data nodes; in actual production it would be dedicated master nodes plus multiple data nodes.