Building an ElasticSearch Cluster
I. Preface
First, let's introduce several core concepts of ElasticSearch.
Cluster (cluster):
A cluster consists of one or more nodes that together hold your entire data set and provide indexing and search capabilities across all of it. A cluster is identified by a unique name, which defaults to "elasticsearch". This name is important, because a node can only join a cluster by specifying that cluster's name.
Node (node):
A node is a server in your cluster that, as part of the cluster, stores your data and participates in the cluster's indexing and search functions. Like a cluster, a node is identified by a name, which by default is a random name assigned to the node at startup. This name matters for administration, because it lets you determine which servers in the network correspond to which nodes in the ElasticSearch cluster.
A node joins a specific cluster by configuring the cluster name. By default, every node tries to join a cluster called "elasticsearch", which means that if you start several nodes in your network and they can discover each other, they will automatically form and join a cluster named "elasticsearch".
A cluster can contain as many nodes as you want. Moreover, if no ElasticSearch node is currently running on your network, starting one will by default create and join a new cluster called "elasticsearch".
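As a quick illustration of these concepts (assuming a node is already running locally on the default HTTP port 9200; none has been started yet at this point in the walkthrough), the cluster and node names can be inspected through the REST API:
curl http://localhost:9200/ # returns the node name and cluster_name
curl http://localhost:9200/_cat/nodes?v # lists every node that has joined the cluster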
II. Preparatory work
We will set up a cluster with 3 nodes, so prepare 3 servers:
192.168.2.86
192.168.2.87
192.168.2.88
Download the installation package from the official website (version 5.3.0, matching the tarball extracted below):
https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.3.0.tar.gz
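For example, it can be fetched directly on each server with wget (assuming wget is installed):
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.3.0.tar.gz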
Install the third-party EPEL repository
rpm -ivh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
Install the JDK environment (all machines)
http://120.52.72.24/download.oracle.com/c3pr90ntc0td/otn-pub/java/jdk/8u131-b11/d54c1d3a095b4ff2b6607d096fa80163/jdk-8u131-linux-x64.tar.gz
cd /usr/local
tar -zxf jdk-8u131-linux-x64.tar.gz
ln -sv jdk1.8.0_131/ jdk
vi /etc/profile.d/jdk.sh
Add the following
export JAVA_HOME=/usr/local/jdk
export PATH=$PATH:/usr/local/jdk/bin
chmod 755 /etc/profile.d/jdk.sh
. /etc/profile.d/jdk.sh
Verify the Java environment
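For example:
java -version
If the environment variables are in effect, this should report version 1.8.0_131.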
Modify ulimit limits (all machines)
vi /etc/security/limits.d/90-nproc.conf
* soft nproc 10240
* hard nproc 10240
* soft nofile 65536
* hard nofile 65536
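These limits only apply to new login sessions; after logging in again, a quick sanity check:
ulimit -u # max user processes, should now be 10240
ulimit -n # max open files, should now be 65536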
vi /etc/sysctl.conf
Add the following
vm.max_map_count = 262144
Then execute the following command
sysctl -p
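ElasticSearch 5.x checks vm.max_map_count at startup and refuses to start in production mode if it is too low, so it is worth confirming the new value:
sysctl vm.max_map_count # should print vm.max_map_count = 262144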
III. Install and configure the cluster
Create an elk directory in which to install ElasticSearch.
[root@localhost local]# mkdir elk
[root@localhost local]# cd elk/
Install the Head plugin
First install the Head plugin, a tool for managing ElasticSearch clusters (this step only needs to be done on 192.168.2.86):
yum install npm git # install node.js
git clone git://github.com/mobz/elasticsearch-head.git
cd elasticsearch-head
npm install
npm run start & # or start it with: grunt server
Then open http://192.168.2.86:9100/ in a browser to view it.
Install ElasticSearch
Extract the ElasticSearch installation package to the ELK directory
[root@localhost local]# tar -zxf elasticsearch-5.3.0.tar.gz
Next, configure the ElasticSearch cluster by editing the configuration file:
[unilife@localhost config]$ pwd
/home/unilife/elk/elasticsearch-cluster2/config
[unilife@localhost config]$ vi elasticsearch.yml
Add the following configuration (this is 192.168.2.86, node-1):
cluster.name: unilifemedia
node.name: node-1
path.data: /tmp/elasticsearch/data
path.logs: /tmp/elasticsearch/logs
network.host: 0.0.0.0
http.port: 19200
transport.tcp.port: 19300
http.cors.enabled: true
http.cors.allow-origin: "*"
discovery.zen.ping.unicast.hosts: ["192.168.2.86", "192.168.2.87", "192.168.2.88"]
Configuration options explained:
cluster.name: the cluster name. A node can only join a cluster whose name matches this value.
node.name: the node name.
path.data: the data directory.
path.logs: the log directory.
network.host: the listening address; 0.0.0.0 binds to all interfaces.
http.port: the HTTP port for external services; the default is 9200.
transport.tcp.port: the TCP port for inter-node communication; the default is 9300.
http.cors.enabled: enables cross-origin requests; without it, the Head plugin cannot connect to the cluster.
http.cors.allow-origin: "*" allows requests from any origin.
discovery.zen.ping.unicast.hosts: the initial list of hosts contacted for unicast discovery, through which newly started nodes find and join the cluster.
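Note: the configuration above does not set it, but for a three-node 5.x cluster it is common to also require a majority of master-eligible nodes, to guard against split-brain (an optional extra, not part of this walkthrough's configuration):
discovery.zen.minimum_master_nodes: 2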
The elasticsearch.yml configuration for 192.168.2.87 is as follows:
cluster.name: unilifemedia
node.name: node-2
path.data: /tmp/elasticsearch/data
path.logs: /tmp/elasticsearch/logs
network.host: 0.0.0.0
http.port: 19200
transport.tcp.port: 19300
http.cors.enabled: true
http.cors.allow-origin: "*"
discovery.zen.ping.unicast.hosts: ["192.168.2.86", "192.168.2.87", "192.168.2.88"]
The elasticsearch.yml configuration for 192.168.2.88 is as follows:
cluster.name: unilifemedia
node.name: node-3
path.data: /tmp/elasticsearch/data
path.logs: /tmp/elasticsearch/logs
network.host: 0.0.0.0
http.port: 19200
transport.tcp.port: 19300
http.cors.enabled: true
http.cors.allow-origin: "*"
discovery.zen.ping.unicast.hosts: ["192.168.2.86", "192.168.2.87", "192.168.2.88"]
Start ElasticSearch on each node separately (note that ElasticSearch 5.x refuses to run as root, hence the unprivileged unilife user):
[unilife@localhost bin]$ ./elasticsearch &
View cluster status through the head plug-in
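Alternatively, cluster health can be checked over the REST API, using the http.port configured above:
curl http://192.168.2.86:19200/_cluster/health?pretty
A "status" of "green" and "number_of_nodes": 3 confirm that all three nodes have joined.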
The cluster has been built.