2025-01-19 Update — SLTechnology News&Howtos > Servers
By separating Elasticsearch's data, ingest, and master roles, we build a high-performance, highly available ES architecture.
Author: "Little Wolf". Reprints and contributions are welcome.
Catalogue
▪ Usage
▪ Architecture
▪ Step instructions
▪ elasticsearch-data deployment
▪ elasticsearch-ingest deployment
▪ elasticsearch-master deployment
Usage
In the first article, "EFK Tutorial - Quick Start Guide", we covered the installation and deployment of EFK. There the ES architecture was three nodes, each of which carried the master, ingest, and data roles at the same time.
In this article we deploy the roles separately, with three nodes per role, to maximize performance while preserving high availability.
▷ elasticsearch master nodes: handle cluster coordination and scheduling; deploy on servers of ordinary performance
▷ elasticsearch ingest nodes: handle data preprocessing; deploy on high-performance servers
▷ elasticsearch data nodes: persist data to disk; deploy on servers with good storage performance
If you cannot locate the "EFK Tutorial - Quick Start Guide", search a mainstream search engine for "EFK tutorial quick start guide" or for the EFK installation and deployment tutorial based on a multi-node ES architecture.
Server configuration
Note: this architecture extends the previous article, "EFK Tutorial - Quick Start Guide", so please complete that deployment first.
Step description
1️⃣ Deploy 3 data nodes and join them to the existing cluster
2️⃣ Deploy 3 ingest nodes and join them to the existing cluster
3️⃣ Migrate the existing ES indexes to the data nodes
4️⃣ Convert the existing ES nodes into dedicated master nodes
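The only difference between the three node types is the combination of `node.master`, `node.ingest`, and `node.data` flags in elasticsearch.yml. As a minimal sketch (the flag values are the ES 7.x settings used in this article; `ROLE_FLAGS` and `role_yaml` are illustrative names of my own), the three profiles can be laid out side by side:

```python
# Illustrative sketch: the node.* role flags for each dedicated node type,
# matching the elasticsearch.yml settings used throughout this article.
ROLE_FLAGS = {
    "data":   {"node.master": False, "node.ingest": False, "node.data": True},
    "ingest": {"node.master": False, "node.ingest": True,  "node.data": False},
    "master": {"node.master": True,  "node.ingest": False, "node.data": False},
}

def role_yaml(role):
    """Render the node.* flags for one role as elasticsearch.yml lines."""
    return "\n".join(f"{key}: {str(val).lower()}"
                     for key, val in ROLE_FLAGS[role].items())

print(role_yaml("data"))
# node.master: false
# node.ingest: false
# node.data: true
```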
Elasticsearch-data deployment
With the basic Elasticsearch architecture in place, we now add three new storage nodes to the cluster, with the master and ingest roles turned off.
Elasticsearch-data installation: perform the same installation steps on all 3 servers
tar -zxvf elasticsearch-7.3.2-linux-x86_64.tar.gz
mv elasticsearch-7.3.2 /opt/elasticsearch
useradd elasticsearch -d /opt/elasticsearch -s /sbin/nologin
mkdir -p /opt/logs/elasticsearch
chown elasticsearch.elasticsearch /opt/elasticsearch -R
chown elasticsearch.elasticsearch /opt/logs/elasticsearch -R
# The data disk must be writable by elasticsearch
chown elasticsearch.elasticsearch /data/SAS -R
# Raise the per-process limit on VMAs (virtual memory areas) above 262144,
# otherwise elasticsearch reports: max virtual memory areas vm.max_map_count [65535] is too low, increase to at least [262144]
echo "vm.max_map_count = 655350" >> /etc/sysctl.conf
sysctl -p
Elasticsearch-data configuration
▷ 192.168.1.51 /opt/elasticsearch/config/elasticsearch.yml
cluster.name: my-application
node.name: 192.168.1.51
# Data disk location; if there are multiple disks, separate the paths with commas
path.data: /data/SAS
path.logs: /opt/logs/elasticsearch
network.host: 192.168.1.51
discovery.seed_hosts: ["192.168.1.31", "192.168.1.32", "192.168.1.33"]
cluster.initial_master_nodes: ["192.168.1.31", "192.168.1.32", "192.168.1.33"]
http.cors.enabled: true
http.cors.allow-origin: "*"
# Turn off the master role
node.master: false
# Turn off the ingest role
node.ingest: false
# Turn on the data role
node.data: true
▷ 192.168.1.52 /opt/elasticsearch/config/elasticsearch.yml
cluster.name: my-application
node.name: 192.168.1.52
# Data disk location; if there are multiple disks, separate the paths with commas
path.data: /data/SAS
path.logs: /opt/logs/elasticsearch
network.host: 192.168.1.52
discovery.seed_hosts: ["192.168.1.31", "192.168.1.32", "192.168.1.33"]
cluster.initial_master_nodes: ["192.168.1.31", "192.168.1.32", "192.168.1.33"]
http.cors.enabled: true
http.cors.allow-origin: "*"
# Turn off the master role
node.master: false
# Turn off the ingest role
node.ingest: false
# Turn on the data role
node.data: true
▷ 192.168.1.53 /opt/elasticsearch/config/elasticsearch.yml
cluster.name: my-application
node.name: 192.168.1.53
# Data disk location; if there are multiple disks, separate the paths with commas
path.data: /data/SAS
path.logs: /opt/logs/elasticsearch
network.host: 192.168.1.53
discovery.seed_hosts: ["192.168.1.31", "192.168.1.32", "192.168.1.33"]
cluster.initial_master_nodes: ["192.168.1.31", "192.168.1.32", "192.168.1.33"]
http.cors.enabled: true
http.cors.allow-origin: "*"
# Turn off the master role
node.master: false
# Turn off the ingest role
node.ingest: false
# Turn on the data role
node.data: true
Elasticsearch-data start
sudo -u elasticsearch /opt/elasticsearch/bin/elasticsearch
Elasticsearch cluster status
curl "http://192.168.1.31:9200/_cat/health?v"
Elasticsearch-data status
curl "http://192.168.1.31:9200/_cat/nodes?v"
Elasticsearch-data parameter description
status: green    # cluster health status
node.total: 6    # 6 nodes in the cluster
node.data: 6     # 6 nodes storing data
node.role: d     # data role only
node.role: i     # ingest role only
node.role: m     # master role only
node.role: mid   # master, ingest, and data roles combined
Elasticsearch-ingest deployment
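The node.role column can also be tallied mechanically rather than by eye. A sketch under the assumption that the input is the text returned by `GET /_cat/nodes?v` (the sample rows below are illustrative and abbreviated, not captured from this cluster):

```python
# Illustrative sketch: count node roles from _cat/nodes?v output.
from collections import Counter

def count_roles(cat_nodes_text):
    """Tally the node.role column of _cat/nodes?v output."""
    lines = cat_nodes_text.strip().splitlines()
    role_idx = lines[0].split().index("node.role")
    return Counter(line.split()[role_idx] for line in lines[1:])

# Abbreviated sample output (illustrative):
sample = """\
ip           heap.percent cpu node.role master name
192.168.1.51           12   1 d         -      192.168.1.51
192.168.1.41            9   1 i         -      192.168.1.41
192.168.1.31           15   1 m         *      192.168.1.31
"""
print(count_roles(sample))
```

After the full deployment in this article, the tally should show 3 of each of `d`, `i`, and `m`.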
Now add three new ingest nodes to the cluster, with the master and data roles turned off.
Elasticsearch-ingest installation: perform the same installation steps on all three servers
tar -zxvf elasticsearch-7.3.2-linux-x86_64.tar.gz
mv elasticsearch-7.3.2 /opt/elasticsearch
useradd elasticsearch -d /opt/elasticsearch -s /sbin/nologin
mkdir -p /opt/logs/elasticsearch
chown elasticsearch.elasticsearch /opt/elasticsearch -R
chown elasticsearch.elasticsearch /opt/logs/elasticsearch -R
# Raise the per-process limit on VMAs (virtual memory areas) above 262144,
# otherwise elasticsearch reports: max virtual memory areas vm.max_map_count [65535] is too low, increase to at least [262144]
echo "vm.max_map_count = 655350" >> /etc/sysctl.conf
sysctl -p
Elasticsearch-ingest configuration
▷ 192.168.1.41 /opt/elasticsearch/config/elasticsearch.yml
cluster.name: my-application
node.name: 192.168.1.41
path.logs: /opt/logs/elasticsearch
network.host: 192.168.1.41
discovery.seed_hosts: ["192.168.1.31", "192.168.1.32", "192.168.1.33"]
cluster.initial_master_nodes: ["192.168.1.31", "192.168.1.32", "192.168.1.33"]
http.cors.enabled: true
http.cors.allow-origin: "*"
# Turn off the master role
node.master: false
# Turn on the ingest role
node.ingest: true
# Turn off the data role
node.data: false
▷ 192.168.1.42 /opt/elasticsearch/config/elasticsearch.yml
cluster.name: my-application
node.name: 192.168.1.42
path.logs: /opt/logs/elasticsearch
network.host: 192.168.1.42
discovery.seed_hosts: ["192.168.1.31", "192.168.1.32", "192.168.1.33"]
cluster.initial_master_nodes: ["192.168.1.31", "192.168.1.32", "192.168.1.33"]
http.cors.enabled: true
http.cors.allow-origin: "*"
# Turn off the master role
node.master: false
# Turn on the ingest role
node.ingest: true
# Turn off the data role
node.data: false
▷ 192.168.1.43 /opt/elasticsearch/config/elasticsearch.yml
cluster.name: my-application
node.name: 192.168.1.43
path.logs: /opt/logs/elasticsearch
network.host: 192.168.1.43
discovery.seed_hosts: ["192.168.1.31", "192.168.1.32", "192.168.1.33"]
cluster.initial_master_nodes: ["192.168.1.31", "192.168.1.32", "192.168.1.33"]
http.cors.enabled: true
http.cors.allow-origin: "*"
# Turn off the master role
node.master: false
# Turn on the ingest role
node.ingest: true
# Turn off the data role
node.data: false
Elasticsearch-ingest start
sudo -u elasticsearch /opt/elasticsearch/bin/elasticsearch
Elasticsearch cluster status
curl "http://192.168.1.31:9200/_cat/health?v"
Elasticsearch-ingest status
curl "http://192.168.1.31:9200/_cat/nodes?v"
Elasticsearch-ingest parameter description
status: green    # cluster health status
node.total: 9    # 9 nodes in the cluster
node.data: 6     # 6 nodes storing data
node.role: d     # data role only
node.role: i     # ingest role only
node.role: m     # master role only
node.role: mid   # master, ingest, and data roles combined
Elasticsearch-master deployment
First, the three ES nodes deployed in the earlier "EFK Tutorial - Quick Start Guide" (192.168.1.31, 192.168.1.32, 192.168.1.33) will be converted to master-only nodes. Before that, the index data on those three servers must be migrated to the new data nodes.
1️⃣ Index migration: this step is mandatory; it moves the existing indexes onto the data nodes
curl -X PUT "192.168.1.31:9200/*/_settings?pretty" -H 'Content-Type: application/json' -d'
{
  "index.routing.allocation.include._ip": "192.168.1.51,192.168.1.52,192.168.1.53"
}'
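One detail of this request is easy to get wrong: the node IPs must be joined into a single comma-separated string, not a JSON array. A small sketch (the helper name is mine, not part of the original article) that builds the body:

```python
# Illustrative sketch: build the allocation-filter body used above.
import json

def allocation_filter_body(ips):
    """Body for PUT /*/_settings that pins shard allocation to the given IPs."""
    return json.dumps({"index.routing.allocation.include._ip": ",".join(ips)})

print(allocation_filter_body(["192.168.1.51", "192.168.1.52", "192.168.1.53"]))
# {"index.routing.allocation.include._ip": "192.168.1.51,192.168.1.52,192.168.1.53"}
```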
2️⃣ Confirm the current index storage locations: verify that no index remains on the 192.168.1.31, 192.168.1.32, or 192.168.1.33 nodes
curl "http://192.168.1.31:9200/_cat/shards?h=n"
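A sketch of that confirmation check, assuming the input is the node-name column returned by `_cat/shards?h=n` (the function name and sample input are mine):

```python
# Illustrative sketch: verify no shard remains on the old nodes.
OLD_NODES = {"192.168.1.31", "192.168.1.32", "192.168.1.33"}

def shards_still_on(cat_shards_text, nodes=OLD_NODES):
    """Return which of the old nodes still hold at least one shard."""
    names = {line.strip() for line in cat_shards_text.splitlines() if line.strip()}
    return names & nodes

print(shards_still_on("192.168.1.51\n192.168.1.52\n"))  # set() -> migration done
```

Only proceed to the master conversion once this check comes back empty.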
Elasticsearch-master configuration
Note: modify the configuration and restart the processes one node at a time; confirm each node has rejoined the cluster successfully before moving on to the next. As for restarting: since the command in the previous article, "EFK Tutorial - Quick Start Guide", was run in the foreground, you can simply press Ctrl-C to exit and then start it again. The startup command is as follows
sudo -u elasticsearch /opt/elasticsearch/bin/elasticsearch
▷ 192.168.1.31 /opt/elasticsearch/config/elasticsearch.yml
cluster.name: my-application
node.name: 192.168.1.31
path.logs: /opt/logs/elasticsearch
network.host: 192.168.1.31
discovery.seed_hosts: ["192.168.1.31", "192.168.1.32", "192.168.1.33"]
cluster.initial_master_nodes: ["192.168.1.31", "192.168.1.32", "192.168.1.33"]
http.cors.enabled: true
http.cors.allow-origin: "*"
# Turn on the master role
node.master: true
# Turn off the ingest role
node.ingest: false
# Turn off the data role
node.data: false
▷ 192.168.1.32 /opt/elasticsearch/config/elasticsearch.yml
cluster.name: my-application
node.name: 192.168.1.32
path.logs: /opt/logs/elasticsearch
network.host: 192.168.1.32
discovery.seed_hosts: ["192.168.1.31", "192.168.1.32", "192.168.1.33"]
cluster.initial_master_nodes: ["192.168.1.31", "192.168.1.32", "192.168.1.33"]
http.cors.enabled: true
http.cors.allow-origin: "*"
# Turn on the master role
node.master: true
# Turn off the ingest role
node.ingest: false
# Turn off the data role
node.data: false
▷ 192.168.1.33 /opt/elasticsearch/config/elasticsearch.yml
cluster.name: my-application
node.name: 192.168.1.33
path.logs: /opt/logs/elasticsearch
network.host: 192.168.1.33
discovery.seed_hosts: ["192.168.1.31", "192.168.1.32", "192.168.1.33"]
cluster.initial_master_nodes: ["192.168.1.31", "192.168.1.32", "192.168.1.33"]
http.cors.enabled: true
http.cors.allow-origin: "*"
# Turn on the master role
node.master: true
# Turn off the ingest role
node.ingest: false
# Turn off the data role
node.data: false
Elasticsearch cluster status
curl "http://192.168.1.31:9200/_cat/health?v"
Elasticsearch-master status
curl "http://192.168.1.31:9200/_cat/nodes?v"
At this point, once "mid" no longer appears in the node.role column for any server, the role separation is complete.
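That completion criterion can be expressed as a small check over the same `_cat/nodes?v` text (a sketch; the sample row layout below is illustrative):

```python
# Illustrative sketch: the "no more mid roles" completion check.
def all_roles_dedicated(cat_nodes_text):
    """True when every node advertises exactly one of the roles m, i, d."""
    lines = cat_nodes_text.strip().splitlines()
    role_idx = lines[0].split().index("node.role")
    return all(line.split()[role_idx] in ("m", "i", "d") for line in lines[1:])

mixed = "ip node.role name\n192.168.1.31 mid 192.168.1.31\n"
print(all_roles_dedicated(mixed))  # False -> separation not finished yet
```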