This article focuses on how to install the Elastic Stack on CentOS 7. The method introduced here is simple, fast, and practical. Let's walk through installing the Elastic Stack on CentOS 7.
Elasticsearch is an open source search engine based on Lucene and developed in Java. It provides a distributed, multi-tenant full-text search engine (LCTT note: multi-tenancy is a software architecture in which multiple users share the same system or program components while data isolation between users is still guaranteed) with an HTTP dashboard web interface (Kibana). Data is queried, retrieved, and stored as JSON documents. Elasticsearch is a scalable search engine that can be used to search all kinds of text documents, including log files. Elasticsearch is the heart of the Elastic Stack, which is also known as the ELK Stack.
Logstash is an open source tool for managing events and logs. It provides a real-time pipeline for data collection. Logstash collects your log data, converts it into JSON documents, and stores them in Elasticsearch.
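To make that concrete, here is a minimal Python sketch (purely illustrative, not part of the installation) of what Logstash conceptually does with a syslog line: parse it into named fields and emit a JSON document. The field names mirror the syslog filter configured later in this tutorial, but the regex is a simplified stand-in for grok.

```python
import json
import re

# Simplified stand-in for Logstash's syslog parsing: extract timestamp,
# host, program, optional pid, and message from one raw syslog line.
SYSLOG_RE = re.compile(
    r"(?P<syslog_timestamp>\w{3}\s+\d+ \d{2}:\d{2}:\d{2}) "
    r"(?P<syslog_hostname>\S+) "
    r"(?P<syslog_program>[\w./-]+)(?:\[(?P<syslog_pid>\d+)\])?: "
    r"(?P<syslog_message>.*)"
)

def to_json_document(line: str) -> str:
    """Convert one raw syslog line into a JSON document."""
    match = SYSLOG_RE.match(line)
    if match is None:
        # Pass unparsed lines through as a bare message field.
        return json.dumps({"message": line})
    return json.dumps(match.groupdict())

raw = "Feb  8 10:00:01 elk-client1 sshd[1234]: Failed password for root"
print(to_json_document(raw))
```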
Kibana is an open source data visualization tool for Elasticsearch. Kibana provides a beautiful dashboard Web interface. You can use it to manage and visualize data from Elasticsearch. It is not only beautiful, but also powerful.
In this tutorial, I will show you how to install and configure Elastic Stack on a CentOS 7 server to monitor server logs. Then I'll show you how to install "Elastic beats" on clients with CentOS 7 and Ubuntu 16 operating systems.
Prerequisites
64-bit CentOS 7, 4 GB memory - ELK host
64-bit CentOS 7, 1 GB memory - client 1
64-bit Ubuntu 16, 1 GB memory - client 2
Step 1 - Prepare the operating system
In this tutorial, we will disable SELinux on the CentOS 7 server. Edit the SELinux configuration file.
vim /etc/sysconfig/selinux
Change the value of SELINUX from enforcing to disabled
SELINUX=disabled
Then restart the server:
reboot
Log in to the server again and check the SELinux status.
getenforce
Make sure the result is disabled.
Step 2 - Install Java
The Elastic Stack deployment depends on Java. Elasticsearch requires Java 8, and Oracle JDK 1.8 is recommended. I will install Java 8 from the official Oracle rpm package.
Use the wget command to download JDK for Java 8.
wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http:%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u77-b02/jdk-8u77-linux-x64.rpm"
Then use the rpm command to install:
rpm -ivh jdk-8u77-linux-x64.rpm
Finally, check the Java JDK version to make sure it works properly.
java -version
You will see the Java version of the server.
Step 3 - Install and configure Elasticsearch
In this step, we will install and configure Elasticsearch. Install Elasticsearch from the rpm package provided by the elastic.co website and configure it to run on localhost (to ensure that the program is secure and inaccessible from the outside).
Before installing Elasticsearch, add the key for elastic.co to the server.
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
Next, download Elasticsearch 5.1 using wget, and then install it.
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.1.1.rpm
rpm -ivh elasticsearch-5.1.1.rpm
Elasticsearch has been installed. Now go to the configuration directory and edit the elasticsearch.yml configuration file.
cd /etc/elasticsearch/
vim elasticsearch.yml
Remove the comment on line 40 and enable Elasticsearch's memory lock. This disables memory swapping for Elasticsearch.
bootstrap.memory_lock: true
In the Network block, uncomment the network.host and http.port lines.
network.host: localhost
http.port: 9200
Save the file and exit the editor.
Now edit the memory lock configuration for the elasticsearch.service file.
vim /usr/lib/systemd/system/elasticsearch.service
Remove the comment on line 60 and make sure the value is unlimited.
MAX_LOCKED_MEMORY=unlimited
Save and exit.
This ends the Elasticsearch configuration. Elasticsearch will run on port 9200 on this machine, and we disable memory swapping by enabling mlockall on the CentOS server. Reload systemd, set Elasticsearch to boot, and then start the service.
sudo systemctl daemon-reload
sudo systemctl enable elasticsearch
sudo systemctl start elasticsearch
Wait for Elasticsearch to start successfully, then check the open ports on the server to make sure that the status of port 9200 is LISTEN.
netstat -plntu
Then check the memory lock to make sure mlockall is enabled, and use the following command to check if Elasticsearch is running.
curl -XGET 'localhost:9200/_nodes?filter_path=**.mlockall&pretty'
curl -XGET 'localhost:9200/?pretty'
You will see the following results.
Step 4 - Install and configure Kibana and Nginx
In this step, we will install and configure Kibana on the Nginx Web server. Kibana listens on localhost, while Nginx acts as a reverse proxy for Kibana.
Download Kibana 5.1 with wget and install it using the rpm command:
wget https://artifacts.elastic.co/downloads/kibana/kibana-5.1.1-x86_64.rpm
rpm -ivh kibana-5.1.1-x86_64.rpm
Edit the Kibana configuration file.
vim /etc/kibana/kibana.yml
Remove the comments from the three lines server.port, server.host, and elasticsearch.url in the configuration file.
server.port: 5601
server.host: "localhost"
elasticsearch.url: "http://localhost:9200"
Save and exit.
Set Kibana to boot and start Kibana.
sudo systemctl enable kibana
sudo systemctl start kibana
Kibana will run as a node application on port 5601.
netstat -plntu
This completes the Kibana installation. Now we need to install Nginx and configure it as a reverse proxy so that we can access Kibana from the public IP address.
Nginx is available in the EPEL repository, so install epel-release with yum.
yum -y install epel-release
Then install the Nginx and httpd-tools packages.
yum -y install nginx httpd-tools
The httpd-tools package contains utilities for web servers; we will use its htpasswd tool to add basic authentication for Kibana.
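For background, HTTP basic authentication (which the auth_basic directive enforces) simply means the browser sends an Authorization header containing base64("user:password"). A small illustrative Python sketch, with hypothetical credentials:

```python
import base64

def basic_auth_header(user: str, password: str) -> str:
    # Encode "user:password" as base64, the way a browser builds
    # the Authorization header for HTTP basic authentication.
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"

# Example credentials, for illustration only:
print(basic_auth_header("admin", "secret"))
```

Note that the credentials are only encoded, not encrypted, which is why basic authentication should be combined with HTTPS or restricted to a trusted network.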
Edit the Nginx configuration file and delete the server {} block so that we can add a new virtual host configuration.
cd /etc/nginx/
vim nginx.conf
Delete server {} block.
Save and exit.
Now we need to create a new virtual host configuration file in the conf.d directory. Create a new file kibana.conf with vim.
Vim / etc/nginx/conf.d/kibana.conf
Copy the configuration below.
server {
    listen 80;
    server_name elk-stack.co;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/.kibana-user;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
Save and exit.
Then use the htpasswd command to create a new basic authentication file.
sudo htpasswd -c /etc/nginx/.kibana-user admin
Enter your password at the prompt.
Test the Nginx configuration to make sure there are no errors. Then set Nginx to boot and start Nginx.
nginx -t
systemctl enable nginx
systemctl start nginx
Step 5 - Install and configure Logstash
In this step, we will install Logstash and configure it to centralize server logs from clients running Filebeat, then filter and transform the syslog data and move it into the store (Elasticsearch).
Download Logstash and install it using rpm.
wget https://artifacts.elastic.co/downloads/logstash/logstash-5.1.1.rpm
rpm -ivh logstash-5.1.1.rpm
Generate a new SSL certificate file so that the client can identify the elastic server.
Go to the tls directory and edit the openssl.cnf file.
cd /etc/pki/tls
vim openssl.cnf
Add the server identity in the [v3_ca] section.
[ v3_ca ]
# Server IP Address
subjectAltName = IP: 10.0.15.10
Save and exit.
Use the openssl command to generate a certificate file.
openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout /etc/pki/tls/private/logstash-forwarder.key -out /etc/pki/tls/certs/logstash-forwarder.crt
The certificate and key files can be found in the /etc/pki/tls/certs/ and /etc/pki/tls/private/ directories.
Next, we will create new configuration files for Logstash: a filebeat-input.conf file to configure the log source for Filebeat, a syslog-filter.conf file to process syslog data, and an output-elasticsearch.conf file to define the log data output to Elasticsearch.
Go to the logstash configuration directory and create a new configuration file in the conf.d subdirectory.
cd /etc/logstash/
vim conf.d/filebeat-input.conf
Paste the following configuration:
input {
  beats {
    port => 5443
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
Save and exit.
Create a syslog-filter.conf file.
vim conf.d/syslog-filter.conf
Paste the following configuration:
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
We use a filter plug-in named grok to parse the syslog file.
Save and exit.
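A note on the date filter above: it lists two match formats because syslog pads single-digit days with an extra space ("MMM  d") while double-digit days use "MMM dd", and syslog timestamps carry no year. A small illustrative Python sketch of the same parsing, not part of the Logstash setup:

```python
from datetime import datetime

def parse_syslog_timestamp(stamp: str, year: int) -> datetime:
    # Handles both "MMM  d HH:mm:ss" and "MMM dd HH:mm:ss": Python's
    # strptime treats one space in the format as "one or more" whitespace
    # characters, so the padded single-digit day parses too. Syslog
    # omits the year, so the caller must supply one.
    return datetime.strptime(stamp, "%b %d %H:%M:%S").replace(year=year)

print(parse_syslog_timestamp("Feb  8 10:00:01", 2017))  # single-digit day
print(parse_syslog_timestamp("Feb 18 10:00:01", 2017))  # double-digit day
```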
Create an output profile output-elasticsearch.conf.
vim conf.d/output-elasticsearch.conf
Paste the following configuration:
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
Save and exit.
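The index setting above expands to one index per beat per day: %{[@metadata][beat]} is the name of the shipper and %{+YYYY.MM.dd} is the event date. A minimal Python sketch of the same naming scheme (the beat name and date are illustrative):

```python
from datetime import date

def daily_index(beat: str, day: date) -> str:
    # Mirrors Logstash's index pattern "%{[@metadata][beat]}-%{+YYYY.MM.dd}":
    # one index per shipper per day, which keeps old logs easy to drop.
    return f"{beat}-{day:%Y.%m.%d}"

print(daily_index("filebeat", date(2017, 1, 11)))  # filebeat-2017.01.11
```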
Finally, set Logstash to start at boot and start the service.
sudo systemctl enable logstash
sudo systemctl start logstash
Step 6 - Install and configure Filebeat on the CentOS client
A beat is a lightweight data shipper, an agent that can be installed on client nodes to send large amounts of data from the client machine to the Logstash or Elasticsearch server. There are four beats available: Filebeat to send log files, Metricbeat to send metrics, Packetbeat to send network data, and Winlogbeat to send Windows event logs.
In this tutorial, I'll show you how to install and configure Filebeat to transfer log files to the Logstash server over an SSL connection.
Log in to the client 1 server, then copy the certificate file from the elastic server to it.
ssh root@client1IP
Use the scp command to copy the certificate file.
scp root@elk-serverIP:~/logstash-forwarder.crt .
Enter the elk-server password when prompted.
Create a new directory and move the certificate to this directory.
sudo mkdir -p /etc/pki/tls/certs/
mv ~/logstash-forwarder.crt /etc/pki/tls/certs/
Next, import the elastic key on the client 1 server.
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
Download Filebeat and install it with the rpm command.
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.1.1-x86_64.rpm
rpm -ivh filebeat-5.1.1-x86_64.rpm
Filebeat has been installed. Go to the configuration directory and edit the filebeat.yml file.
cd /etc/filebeat/
vim filebeat.yml
In the paths section around line 21, add the new log files. We will monitor two files: /var/log/secure for SSH activity and /var/log/messages for server logs.
  paths:
    - /var/log/secure
    - /var/log/messages
Add a new configuration on line 26 to define a file of type syslog.
document_type: syslog
Filebeat uses Elasticsearch as the output destination by default. In this tutorial, we change it to Logstash. Comment out lines 83 and 85 to disable the Elasticsearch output.
Disable Elasticsearch output:
#-------------------------- Elasticsearch output ------------------------------
# output.elasticsearch:
  # Array of hosts to connect to.
  # hosts: ["localhost:9200"]
Now add a new logstash output configuration. Uncomment the logstash output configuration and change all values to the values in the configuration below.
output.logstash:
  # The Logstash hosts
  hosts: ["10.0.15.10:5443"]
  bulk_max_size: 1024
  ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
  template.name: "filebeat"
  template.path: "filebeat.template.json"
  template.overwrite: false
Save the file and exit vim.
Set Filebeat to start at boot and start the service.
sudo systemctl enable filebeat
sudo systemctl start filebeat
Step 7 - Install and configure Filebeat on the Ubuntu client
Use ssh to connect to the server.
ssh root@ubuntu-clientIP
Use the scp command to copy the certificate file.
scp root@elk-serverIP:~/logstash-forwarder.crt .
Create a new directory and move the certificate to this directory.
sudo mkdir -p /etc/pki/tls/certs/
mv ~/logstash-forwarder.crt /etc/pki/tls/certs/
Import the elastic key on the server.
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
Download the Filebeat .deb package and install it using the dpkg command.
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.1.1-amd64.deb
dpkg -i filebeat-5.1.1-amd64.deb
Go to the configuration directory and edit the filebeat.yml file.
cd /etc/filebeat/
vim filebeat.yml
Add a new log file path in the path configuration section.
  paths:
    - /var/log/auth.log
    - /var/log/syslog
Set the document type to syslog.
document_type: syslog
Comment out the following lines and disable output to Elasticsearch.
#-------------------------- Elasticsearch output ------------------------------
# output.elasticsearch:
  # Array of hosts to connect to.
  # hosts: ["localhost:9200"]
Enable logstash output, uncomment the configuration below and change the value as shown below.
output.logstash:
  # The Logstash hosts
  hosts: ["10.0.15.10:5443"]
  bulk_max_size: 1024
  ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
  template.name: "filebeat"
  template.path: "filebeat.template.json"
  template.overwrite: false
Save and exit vim.
Set Filebeat to start at boot and start the service.
sudo systemctl enable filebeat
sudo systemctl start filebeat
Check the service status:
systemctl status filebeat
Step 8 - Testing
Open your web browser and visit the elastic stack domain name you configured in Nginx. Mine is "elk-stack.co". Log in as the admin user with your password to reach the Kibana dashboard.
Create a new default index pattern filebeat-* and click the Create button.
The default index pattern has been created. If more than one beat is shipping data to the elastic stack, you can set the default one with a single click on the Star button.
Go to the Discover menu and you can see all the log files from the elk-client1 and elk-client2 servers.
Here is an example of the JSON output for a failed SSH login in the elk-client1 server log.
With other options, you can do more with the Kibana dashboard.
Elastic Stack is installed on the CentOS 7 server. Filebeat is installed on CentOS 7 and Ubuntu clients.
At this point, you should have a solid understanding of how to install the Elastic Stack on CentOS 7. You might as well try it in practice.