Based on a multi-instance ElasticSearch architecture, this article shows how to allocate resources rationally and separate hot and cold data.
Author: "Little Wolf". Welcome to reprint and contribute.
Contents
▪ Usage
▪ Architecture
▪ 192.168.1.51 elasticsearch-data dual-instance deployment
▪ 192.168.1.52 elasticsearch-data dual-instance deployment
▪ 192.168.1.53 elasticsearch-data dual-instance deployment
▪ Testing
Usage
Previously in this series:
▷ The first article, "EFK Tutorial - Quick Start Guide", described the installation and deployment of EFK, with a three-node ES architecture in which the master, ingest, and data roles all run on the same three servers.
▷ The second article, "EFK Tutorial - ElasticSearch High-Performance and High-Availability Architecture", described separating the data/ingest/master roles onto dedicated nodes to maximize performance while ensuring high availability.
In the first two articles, each server in the ES cluster ran a single instance. In this article, multiple ES instances are deployed per server to allocate resources rationally. For example, a data server may contain both SSD and SAS drives; hot data can then be stored on the SSD and cold data on the SAS disk, separating hot and cold data.
Concretely, we will create two instances on each data server, one backed by the SSD and one backed by the SAS disk, and place the September nginx indices on the SAS disk while everything else stays on the SSD.
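The mechanism behind this separation is ElasticSearch's shard allocation filtering: per-index routing settings restrict which nodes may hold an index's shards. As a minimal preview of what the test section below does, assuming an illustrative index named nginx_access_2019.09 and the node-naming convention introduced later:
# pin one index's shards to nodes whose names end in -SAS
curl -X PUT "192.168.1.31:9200/nginx_access_2019.09/_settings" -H 'Content-Type: application/json' -d'
{"index.routing.allocation.require._name": "*-SAS"}'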
Architecture
Architecture diagram (figure not reproduced here)
Server configuration (figure not reproduced here)
192.168.1.51 elasticsearch-data dual-instance deployment
Index migration (do not skip this step): move the indices currently on 192.168.1.51 to the other two data nodes.
curl -X PUT "192.168.1.31:9200/*/_settings?pretty" -H 'Content-Type: application/json' -d'
{
  "index.routing.allocation.include._ip": "192.168.1.52,192.168.1.53"
}'
Confirm the current index storage locations: verify that no indices remain on the 192.168.1.51 node.
Curl "http://192.168.1.31:9200/_cat/shards?h=n"
Stop the ES process on 192.168.1.51, then adjust the directory structure and configuration. Mount the data disks by type beforehand: the SSD at /data/SSD and the SAS disk at /data/SAS.
# For downloading and deploying the installation package, refer to the first article, "EFK Tutorial - Quick Start Guide"
cd /opt/software/
tar -zxvf elasticsearch-7.3.2-linux-x86_64.tar.gz
mv /opt/elasticsearch /opt/elasticsearch-SAS
mv elasticsearch-7.3.2 /opt/
mv /opt/elasticsearch-7.3.2 /opt/elasticsearch-SSD
chown elasticsearch.elasticsearch /opt/elasticsearch-* -R
rm -rf /data/SAS/*
chown elasticsearch.elasticsearch /data/* -R
mkdir -p /opt/logs/elasticsearch-SAS
mkdir -p /opt/logs/elasticsearch-SSD
chown elasticsearch.elasticsearch /opt/logs/* -R
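Both instances keep their data under /data, so before continuing it is worth a quick sanity check, not part of the original steps, that the two mount points actually exist:
# confirm the SSD and SAS data disks are mounted where path.data will point
df -h /data/SSD /data/SAS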
SAS instance configuration: /opt/elasticsearch-SAS/config/elasticsearch.yml
cluster.name: my-application
node.name: 192.168.1.51-SAS
path.data: /data/SAS
path.logs: /opt/logs/elasticsearch-SAS
network.host: 192.168.1.51
http.port: 9200
transport.port: 9300
# discovery.seed_hosts and cluster.initial_master_nodes must include port numbers,
# otherwise the http.port and transport.port values from this file are used
discovery.seed_hosts: ["192.168.1.31:9300", "192.168.1.32:9300", "192.168.1.33:9300"]
cluster.initial_master_nodes: ["192.168.1.31:9300", "192.168.1.32:9300", "192.168.1.33:9300"]
http.cors.enabled: true
http.cors.allow-origin: "*"
node.master: false
node.ingest: false
node.data: true
# allow at most 2 instances to be started on this machine
node.max_local_storage_nodes: 2
SSD instance configuration: /opt/elasticsearch-SSD/config/elasticsearch.yml
cluster.name: my-application
node.name: 192.168.1.51-SSD
path.data: /data/SSD
path.logs: /opt/logs/elasticsearch-SSD
network.host: 192.168.1.51
http.port: 9201
transport.port: 9301
# discovery.seed_hosts and cluster.initial_master_nodes must include port numbers,
# otherwise the http.port and transport.port values from this file are used
discovery.seed_hosts: ["192.168.1.31:9300", "192.168.1.32:9300", "192.168.1.33:9300"]
cluster.initial_master_nodes: ["192.168.1.31:9300", "192.168.1.32:9300", "192.168.1.33:9300"]
http.cors.enabled: true
http.cors.allow-origin: "*"
node.master: false
node.ingest: false
node.data: true
# allow at most 2 instances to be started on this machine
node.max_local_storage_nodes: 2
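The SAS and SSD configurations differ only in node.name, path.data, path.logs, http.port, and transport.port. A quick diff, an optional check rather than an original step, makes any accidental drift between the two files obvious:
diff /opt/elasticsearch-SAS/config/elasticsearch.yml /opt/elasticsearch-SSD/config/elasticsearch.yml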
Starting the SAS and SSD instances
sudo -u elasticsearch /opt/elasticsearch-SAS/bin/elasticsearch
sudo -u elasticsearch /opt/elasticsearch-SSD/bin/elasticsearch
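These commands run both instances in the foreground, which is convenient for a first test. If you would rather daemonize them, Elasticsearch's -d and -p flags are one option (a variant, not the article's method):
# -d daemonizes the process, -p writes a pid file for later shutdown
sudo -u elasticsearch /opt/elasticsearch-SAS/bin/elasticsearch -d -p /opt/logs/elasticsearch-SAS/pid
sudo -u elasticsearch /opt/elasticsearch-SSD/bin/elasticsearch -d -p /opt/logs/elasticsearch-SSD/pid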
Confirm that both the SAS and SSD instances have started
Curl "http://192.168.1.31:9200/_cat/nodes?v"
192.168.1.52 elasticsearch-data dual-instance deployment
Index migration (do not skip this step): move the indices currently on 192.168.1.52 to the other two data nodes.
curl -X PUT "192.168.1.31:9200/*/_settings?pretty" -H 'Content-Type: application/json' -d'
{
  "index.routing.allocation.include._ip": "192.168.1.51,192.168.1.53"
}'
Confirm the current index storage locations: verify that no indices remain on the 192.168.1.52 node.
Curl "http://192.168.1.31:9200/_cat/shards?h=n"
Stop the ES process on 192.168.1.52, then adjust the directory structure and configuration. Mount the data disks by type beforehand: the SSD at /data/SSD and the SAS disk at /data/SAS.
# For downloading and deploying the installation package, refer to the first article, "EFK Tutorial - Quick Start Guide"
cd /opt/software/
tar -zxvf elasticsearch-7.3.2-linux-x86_64.tar.gz
mv /opt/elasticsearch /opt/elasticsearch-SAS
mv elasticsearch-7.3.2 /opt/
mv /opt/elasticsearch-7.3.2 /opt/elasticsearch-SSD
chown elasticsearch.elasticsearch /opt/elasticsearch-* -R
rm -rf /data/SAS/*
chown elasticsearch.elasticsearch /data/* -R
mkdir -p /opt/logs/elasticsearch-SAS
mkdir -p /opt/logs/elasticsearch-SSD
chown elasticsearch.elasticsearch /opt/logs/* -R
SAS instance configuration: /opt/elasticsearch-SAS/config/elasticsearch.yml
cluster.name: my-application
node.name: 192.168.1.52-SAS
path.data: /data/SAS
path.logs: /opt/logs/elasticsearch-SAS
network.host: 192.168.1.52
http.port: 9200
transport.port: 9300
# discovery.seed_hosts and cluster.initial_master_nodes must include port numbers,
# otherwise the http.port and transport.port values from this file are used
discovery.seed_hosts: ["192.168.1.31:9300", "192.168.1.32:9300", "192.168.1.33:9300"]
cluster.initial_master_nodes: ["192.168.1.31:9300", "192.168.1.32:9300", "192.168.1.33:9300"]
http.cors.enabled: true
http.cors.allow-origin: "*"
node.master: false
node.ingest: false
node.data: true
# allow at most 2 instances to be started on this machine
node.max_local_storage_nodes: 2
SSD instance configuration: /opt/elasticsearch-SSD/config/elasticsearch.yml
cluster.name: my-application
node.name: 192.168.1.52-SSD
path.data: /data/SSD
path.logs: /opt/logs/elasticsearch-SSD
network.host: 192.168.1.52
http.port: 9201
transport.port: 9301
# discovery.seed_hosts and cluster.initial_master_nodes must include port numbers,
# otherwise the http.port and transport.port values from this file are used
discovery.seed_hosts: ["192.168.1.31:9300", "192.168.1.32:9300", "192.168.1.33:9300"]
cluster.initial_master_nodes: ["192.168.1.31:9300", "192.168.1.32:9300", "192.168.1.33:9300"]
http.cors.enabled: true
http.cors.allow-origin: "*"
node.master: false
node.ingest: false
node.data: true
# allow at most 2 instances to be started on this machine
node.max_local_storage_nodes: 2
Starting the SAS and SSD instances
sudo -u elasticsearch /opt/elasticsearch-SAS/bin/elasticsearch
sudo -u elasticsearch /opt/elasticsearch-SSD/bin/elasticsearch
Confirm that both the SAS and SSD instances have started
Curl "http://192.168.1.31:9200/_cat/nodes?v"
192.168.1.53 elasticsearch-data dual-instance deployment
Index migration (do not skip this step): move the indices currently on 192.168.1.53 to the other two data nodes.
curl -X PUT "192.168.1.31:9200/*/_settings?pretty" -H 'Content-Type: application/json' -d'
{
  "index.routing.allocation.include._ip": "192.168.1.51,192.168.1.52"
}'
Confirm the current index storage locations: verify that no indices remain on the 192.168.1.53 node.
Curl "http://192.168.1.31:9200/_cat/shards?h=n"
Stop the ES process on 192.168.1.53, then adjust the directory structure and configuration. Mount the data disks by type beforehand: the SSD at /data/SSD and the SAS disk at /data/SAS.
# For downloading and deploying the installation package, refer to the first article, "EFK Tutorial - Quick Start Guide"
cd /opt/software/
tar -zxvf elasticsearch-7.3.2-linux-x86_64.tar.gz
mv /opt/elasticsearch /opt/elasticsearch-SAS
mv elasticsearch-7.3.2 /opt/
mv /opt/elasticsearch-7.3.2 /opt/elasticsearch-SSD
chown elasticsearch.elasticsearch /opt/elasticsearch-* -R
rm -rf /data/SAS/*
chown elasticsearch.elasticsearch /data/* -R
mkdir -p /opt/logs/elasticsearch-SAS
mkdir -p /opt/logs/elasticsearch-SSD
chown elasticsearch.elasticsearch /opt/logs/* -R
SAS instance configuration: /opt/elasticsearch-SAS/config/elasticsearch.yml
cluster.name: my-application
node.name: 192.168.1.53-SAS
path.data: /data/SAS
path.logs: /opt/logs/elasticsearch-SAS
network.host: 192.168.1.53
http.port: 9200
transport.port: 9300
# discovery.seed_hosts and cluster.initial_master_nodes must include port numbers,
# otherwise the http.port and transport.port values from this file are used
discovery.seed_hosts: ["192.168.1.31:9300", "192.168.1.32:9300", "192.168.1.33:9300"]
cluster.initial_master_nodes: ["192.168.1.31:9300", "192.168.1.32:9300", "192.168.1.33:9300"]
http.cors.enabled: true
http.cors.allow-origin: "*"
node.master: false
node.ingest: false
node.data: true
# allow at most 2 instances to be started on this machine
node.max_local_storage_nodes: 2
SSD instance configuration: /opt/elasticsearch-SSD/config/elasticsearch.yml
cluster.name: my-application
node.name: 192.168.1.53-SSD
path.data: /data/SSD
path.logs: /opt/logs/elasticsearch-SSD
network.host: 192.168.1.53
http.port: 9201
transport.port: 9301
# discovery.seed_hosts and cluster.initial_master_nodes must include port numbers,
# otherwise the http.port and transport.port values from this file are used
discovery.seed_hosts: ["192.168.1.31:9300", "192.168.1.32:9300", "192.168.1.33:9300"]
cluster.initial_master_nodes: ["192.168.1.31:9300", "192.168.1.32:9300", "192.168.1.33:9300"]
http.cors.enabled: true
http.cors.allow-origin: "*"
node.master: false
node.ingest: false
node.data: true
# allow at most 2 instances to be started on this machine
node.max_local_storage_nodes: 2
Starting the SAS and SSD instances
sudo -u elasticsearch /opt/elasticsearch-SAS/bin/elasticsearch
sudo -u elasticsearch /opt/elasticsearch-SSD/bin/elasticsearch
Confirm that both the SAS and SSD instances have started
Curl "http://192.168.1.31:9200/_cat/nodes?v"
Testing
Move all indices to the SSD drives
# The following parameters will be explained in a later article; for now, copy them as-is
curl -X PUT "192.168.1.31:9200/*/_settings?pretty" -H 'Content-Type: application/json' -d'
{
  "index.routing.allocation.include._host_ip": "",
  "index.routing.allocation.include._host": "",
  "index.routing.allocation.include._name": "",
  "index.routing.allocation.include._ip": "",
  "index.routing.allocation.require._name": "*-SSD"
}'
Verify that all indices are on the SSD drives
Curl "http://192.168.1.31:9200/_cat/shards?h=n"
Migrate the September nginx log indices to the SAS disks
curl -X PUT "192.168.1.31:9200/nginx_*_2019.09/_settings?pretty" -H 'Content-Type: application/json' -d'
{
  "index.routing.allocation.require._name": "*-SAS"
}'
Confirm that the September nginx log indices have migrated to the SAS disks.
Curl "http://192.168.1.31:9200/_cat/shards"