How to Set Up a Multi-Node Elastic Stack Cluster on CentOS 8


Many newcomers are unclear on how to set up a multi-node Elastic Stack cluster on CentOS 8. To help with that, this article walks through the process in detail, step by step. I hope you find it useful.

Elasticsearch:

Three servers with minimal installation of RHEL 8 / CentOS 8

IP & Hostname - 192.168.56.40 (elasticsearch2.linuxtechi.local), 192.168.56.50 (elasticsearch3.linuxtechi.local), 192.168.56.60 (elasticsearch4.linuxtechi.local)

Logstash:

Two servers with minimal installation of RHEL 8 / CentOS 8

IP & Hostname - 192.168.56.20 (logstash2.linuxtechi.local), 192.168.56.30 (logstash3.linuxtechi.local)

Kibana:

One server with minimal installation of RHEL 8 / CentOS 8

IP & Hostname - 192.168.56.10 (kibana.linuxtechi.local)

Filebeat:

One server with minimal installation of CentOS 7

IP & Hostname - 192.168.56.70 (web-server)

Let's start by setting up an Elasticsearch cluster

Set up 3-node Elasticsearch cluster

As mentioned above, to set up the Elasticsearch cluster nodes, log in to each node, set the hostname, and configure the yum/dnf repositories.

Use the command hostnamectl to set the hostname on each node:

On the first node:

[root@linuxtechi ~]# hostnamectl set-hostname "elasticsearch2.linuxtechi.local"
[root@linuxtechi ~]# exec bash

On the second node:

[root@linuxtechi ~]# hostnamectl set-hostname "elasticsearch3.linuxtechi.local"
[root@linuxtechi ~]# exec bash

On the third node:

[root@linuxtechi ~]# hostnamectl set-hostname "elasticsearch4.linuxtechi.local"
[root@linuxtechi ~]# exec bash

For CentOS 8 systems we do not need to configure any operating system package repositories. For RHEL 8 servers, if you have a valid subscription, use the Red Hat subscription to get the package repositories. If you want to configure a local yum/dnf repository for the operating system packages instead, refer to the following article:

How to use DVD or ISO files to set up a local Yum / DNF repository on a RHEL 8 server
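For reference, such a local repository definition typically looks something like the following (a minimal sketch, assuming the RHEL 8 installation ISO is mounted at /mnt; the file name, section names, and mount point are illustrative and should be adjusted to your environment):

~]# vi /etc/yum.repos.d/local.repo

# Example only: adjust the mount point and section names to your setup
[Local-BaseOS]
name=Local BaseOS
baseurl=file:///mnt/BaseOS
gpgcheck=0
enabled=1

[Local-AppStream]
name=Local AppStream
baseurl=file:///mnt/AppStream
gpgcheck=0
enabled=1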

Configure the Elasticsearch package repository on all nodes: create an elastic.repo file under the /etc/yum.repos.d/ directory with the following content:

~]# vi /etc/yum.repos.d/elastic.repo

[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

Save the file and exit.

Use the rpm command to import the Elastic public signature key on all three nodes.

~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Add the following line to the / etc/hosts file for all three nodes:

192.168.56.40   elasticsearch2.linuxtechi.local
192.168.56.50   elasticsearch3.linuxtechi.local
192.168.56.60   elasticsearch4.linuxtechi.local

Use the yum/dnf command to install Java on all three nodes:

[root@linuxtechi ~]# dnf install java-openjdk -y

Use the yum/dnf command to install Elasticsearch on all three nodes:

[root@linuxtechi ~]# dnf install elasticsearch -y

Note: if the operating system firewall is enabled and running on each Elasticsearch node, use the firewall-cmd command to open the following ports:

~]# firewall-cmd --permanent --add-port=9300/tcp
~]# firewall-cmd --permanent --add-port=9200/tcp
~]# firewall-cmd --reload

To configure Elasticsearch, edit the file /etc/elasticsearch/elasticsearch.yml on all nodes and add the following:

~]# vim /etc/elasticsearch/elasticsearch.yml

cluster.name: opn-cluster
node.name: elasticsearch2.linuxtechi.local
network.host: 192.168.56.40
http.port: 9200
discovery.seed_hosts: ["elasticsearch2.linuxtechi.local", "elasticsearch3.linuxtechi.local", "elasticsearch4.linuxtechi.local"]
cluster.initial_master_nodes: ["elasticsearch2.linuxtechi.local", "elasticsearch3.linuxtechi.local", "elasticsearch4.linuxtechi.local"]

Note: on each node, fill in the correct hostname in node.name and the correct IP address in network.host; the other parameters stay the same.
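For example, on the second node (192.168.56.50) only these two lines would differ, based on the host plan above; the cluster name, port, discovery, and initial master settings stay identical:

node.name: elasticsearch3.linuxtechi.local
network.host: 192.168.56.50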

Now use the systemctl command to start and enable the Elasticsearch service on all three nodes:

~]# systemctl daemon-reload
~]# systemctl enable elasticsearch.service
~]# systemctl start elasticsearch.service

Verify that each Elasticsearch node is listening on port 9200 using the following ss command:

[root@linuxtechi ~]# ss -tunlp | grep 9200
tcp   LISTEN   0   128   [::ffff:192.168.56.40]:9200   *:*   users:(("java",pid=2734,fd=256))
[root@linuxtechi ~]#

Use the following curl command to verify the Elasticsearch cluster status:

[root@linuxtechi ~]# curl http://elasticsearch2.linuxtechi.local:9200
[root@linuxtechi ~]# curl -X GET http://elasticsearch3.linuxtechi.local:9200/_cluster/health?pretty

The output of the command is as follows:

[Screenshot: Elasticsearch cluster status on RHEL 8]
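For reference, on a healthy cluster the _cluster/health endpoint returns JSON along these lines (an abbreviated, illustrative sketch; exact values will differ in your environment):

{
  "cluster_name" : "opn-cluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_shards_percent_as_number" : 100.0
}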

The above output shows that we have successfully created a 3-node Elasticsearch cluster and the status of the cluster is green.

Note: if you want to modify the JVM heap size, you can edit the file / etc/elasticsearch/jvm.options and change the following parameters according to your environment:

-Xms1g

-Xmx1g
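For example, to raise the heap to 4 GB per node (assuming the servers have enough free RAM; keep -Xms and -Xmx equal), the two lines would become:

-Xms4g
-Xmx4g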

Now let's go to the Logstash node.

Install and configure Logstash

Perform the following steps on both Logstash nodes.

Log in to both nodes and use the hostnamectl command to set the hostname:

On the first Logstash node:

[root@linuxtechi ~]# hostnamectl set-hostname "logstash2.linuxtechi.local"
[root@linuxtechi ~]# exec bash

On the second Logstash node:

[root@linuxtechi ~]# hostnamectl set-hostname "logstash3.linuxtechi.local"
[root@linuxtechi ~]# exec bash

Add the following entries to the /etc/hosts file on both logstash nodes:

~]# vi /etc/hosts

192.168.56.40   elasticsearch2.linuxtechi.local
192.168.56.50   elasticsearch3.linuxtechi.local
192.168.56.60   elasticsearch4.linuxtechi.local

Save the file and exit.

Configure the Logstash repository on both nodes: create a file logstash.repo under the /etc/yum.repos.d/ directory with the following content:

~]# vi /etc/yum.repos.d/logstash.repo

[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

Save and exit the file, and run the rpm command to import the signature key:

~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Use the yum/dnf command to install Java OpenJDK on both nodes:

~]# dnf install java-openjdk -y

Run the yum/dnf command on both nodes to install logstash:

[root@linuxtechi ~]# dnf install logstash -y

Now configure logstash: perform the following steps on both logstash nodes to create a logstash configuration file. First, copy the logstash sample file into /etc/logstash/conf.d/:

# cd /etc/logstash/
# cp logstash-sample.conf conf.d/logstash.conf

Edit the configuration file and update the following:

# vi conf.d/logstash.conf

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["http://elasticsearch2.linuxtechi.local:9200", "http://elasticsearch3.linuxtechi.local:9200", "http://elasticsearch4.linuxtechi.local:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }
}

Under the output section, specify the FQDN of all three Elasticsearch nodes in the hosts parameter, leaving the other parameters unchanged.
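Optionally, you can check the pipeline syntax before starting the service (a quick sanity check, not part of the original steps; the binary path assumes the default RPM install location):

# /usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/logstash.conf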

Use the firewall-cmd command to allow logstash port "5044" in the operating system firewall:

~]# firewall-cmd --permanent --add-port=5044/tcp
~]# firewall-cmd --reload

Now, run the following systemctl command on each node to start and enable the Logstash service:

~]# systemctl start logstash
~]# systemctl enable logstash

Use the ss command to verify that the logstash service starts listening on port 5044:

[root@linuxtechi ~]# ss -tunlp | grep 5044
tcp   LISTEN   0   128   *:5044   *:*   users:(("java",pid=2416,fd=96))
[root@linuxtechi ~]#

The above output indicates that logstash has been successfully installed and configured. Let's go to the Kibana installation.

Install and configure Kibana

Log in to the Kibana node and set the hostname using the hostnamectl command:

[root@linuxtechi ~]# hostnamectl set-hostname "kibana.linuxtechi.local"
[root@linuxtechi ~]# exec bash

Edit the / etc/hosts file and add the following line:

192.168.56.40   elasticsearch2.linuxtechi.local
192.168.56.50   elasticsearch3.linuxtechi.local
192.168.56.60   elasticsearch4.linuxtechi.local

Use the following command to set up the Kibana repository:

[root@linuxtechi ~]# vi /etc/yum.repos.d/kibana.repo

[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

[root@linuxtechi ~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Execute the yum/dnf command to install kibana:

[root@linuxtechi ~]# yum install kibana -y

Configure Kibana by editing the / etc/kibana/kibana.yml file:

[root@linuxtechi ~]# vim /etc/kibana/kibana.yml
...
server.host: "kibana.linuxtechi.local"
server.name: "kibana.linuxtechi.local"
elasticsearch.hosts: ["http://elasticsearch2.linuxtechi.local:9200", "http://elasticsearch3.linuxtechi.local:9200", "http://elasticsearch4.linuxtechi.local:9200"]
...

Enable and start the kibana service:

[root@linuxtechi ~]# systemctl start kibana
[root@linuxtechi ~]# systemctl enable kibana

Allow Kibana port "5601" on the system firewall:

[root@linuxtechi ~]# firewall-cmd --permanent --add-port=5601/tcp
success
[root@linuxtechi ~]# firewall-cmd --reload
success
[root@linuxtechi ~]#

Use the following URL to access the Kibana interface: http://kibana.linuxtechi.local:5601
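If the page does not load, a simple reachability check from another host can help narrow things down (an optional sketch, not part of the original steps):

~]# curl -I http://kibana.linuxtechi.local:5601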

[Screenshot: Kibana dashboard on RHEL 8]

From the panel, we can check the status of the Elastic Stack cluster.

[Screenshot: Stack Monitoring overview in Kibana]

This proves that we have successfully installed and set up a multi-node Elastic Stack cluster on RHEL 8 / CentOS 8.

Now let's send some logs from other Linux servers to the logstash node through filebeat. In my example, I have a CentOS 7 server, and I will push all the important logs from that server to logstash through filebeat.

Log in to the CentOS 7 server and use the yum/rpm command to install the filebeat package:

[root@linuxtechi ~]# rpm -ivh https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.3.1-x86_64.rpm
Retrieving https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.3.1-x86_64.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:filebeat-7.3.1-1                 ################################# [100%]
[root@linuxtechi ~]#

Edit the / etc/hosts file and add the following:

192.168.56.20   logstash2.linuxtechi.local
192.168.56.30   logstash3.linuxtechi.local

Now configure filebeat so that it sends logs to both logstash nodes with load balancing. Edit the file /etc/filebeat/filebeat.yml and update the following parameters:

In the filebeat.inputs: section, change enabled: false to enabled: true and, under the paths parameter, list the log files that should be shipped to logstash. Comment out output.elasticsearch: and its hosts: parameter. Uncomment output.logstash: and hosts:, add both logstash nodes to the hosts parameter, and set loadbalance: true.

[root@linuxtechi ~]# vi /etc/filebeat/filebeat.yml

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/messages
    - /var/log/dmesg
    - /var/log/maillog
    - /var/log/boot.log

#output.elasticsearch:
#  hosts: ["localhost:9200"]

output.logstash:
  hosts: ["logstash2.linuxtechi.local:5044", "logstash3.linuxtechi.local:5044"]
  loadbalance: true
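Before starting the service, you can optionally verify the configuration and the connection to the Logstash outputs (a quick sanity check, not part of the original steps; the filebeat test subcommands are available in the 7.x releases):

[root@linuxtechi ~]# filebeat test config
[root@linuxtechi ~]# filebeat test output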

Start and enable the filebeat service using the following two systemctl commands:

[root@linuxtechi ~]# systemctl start filebeat
[root@linuxtechi ~]# systemctl enable filebeat

Now go to the Kibana user interface and verify that the new index is visible.

Select the management option from the left column, and then click Index Management under Elasticsearch:

[Screenshot: Elasticsearch Index Management in Kibana]
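The same check can also be done from the command line on any Elasticsearch node using the standard _cat API (an optional alternative to the Kibana view):

[root@linuxtechi ~]# curl http://elasticsearch2.linuxtechi.local:9200/_cat/indices?v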

As we can see above, the index is now visible, so let's create the index pattern.

Click "Index Patterns" in the Kibana section, which will prompt us to create a new model, click "Create Index Pattern", and specify the schema name as "filebeat":

[Screenshot: Define index pattern in Kibana on RHEL 8]

Click next.

Select "Timestamp" as the time filter for the index model, and click "Create index pattern":

[Screenshot: Time filter selection for the index pattern in Kibana]

[Screenshot: Filebeat index pattern overview in Kibana]

You can now click through and view real-time data for the filebeat index pattern.
