2025-01-19 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 06/01 Report --
1 Overview
The ELK stack refers to the combination of ElasticSearch, Logstash, and Kibana. Together, these three pieces of software form a log analysis and monitoring toolkit.
Because each of the three projects is versioned independently, it is best to use the combination recommended on ElasticSearch's official website: http://www.elasticsearch.org/overview/elkdownloads/
2 Environmental preparation
2.1 Software requirements
The specific version requirements are as follows:
Operating system version: CentOS 6.4
JDK version: 1.7.0
Logstash version: 1.4.2
ElasticSearch version: 1.4.2
Kibana version: 3.1.2
2.2 Firewall configuration
To expose the HTTP services, either stop the firewall entirely:
# service iptables stop
or leave the firewall running and open the relevant ports in iptables:
# vim /etc/sysconfig/iptables
-A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 9200 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 9292 -j ACCEPT
# service iptables restart
3 Install JDK
ElasticSearch and Logstash depend on the JDK, so install it first:
# yum -y install java-1.7.0-openjdk*
# java -version
4 Install ElasticSearch
ElasticSearch serves HTTP on port 9200 by default; node-to-node communication uses TCP port 9300.
Download ElasticSearch:
# mkdir -p /opt/software && cd /opt/software
# sudo wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.4.2.tar.gz
# sudo tar -zxvf elasticsearch-1.4.2.tar.gz -C /usr/local/
# ln -s /usr/local/elasticsearch-1.4.2 /usr/local/elasticsearch
Install elasticsearch-servicewrapper and start the ElasticSearch service:
# sudo wget https://github.com/elasticsearch/elasticsearch-servicewrapper/archive/master.tar.gz
# sudo tar -zxvf master.tar.gz
# mv /opt/software/elasticsearch-servicewrapper-master/service /usr/local/elasticsearch/bin/
# /usr/local/elasticsearch/bin/service/elasticsearch start
Test whether the ElasticSearch service is up; a 200 status code should be returned:
# curl -X GET http://localhost:9200
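If only the numeric status is needed, it can be pulled out of the response JSON. A minimal sketch: the response body below is a hard-coded sample of the shape an ElasticSearch 1.4 root endpoint returns, so the check runs without a live node; against a real node you would set it with `response=$(curl -s http://localhost:9200)` instead.

```shell
# Hard-coded sample of the JSON shape ES 1.4 returns on its root endpoint
# (values illustrative); replace with a live curl call in practice.
response='{"status":200,"name":"Node-1","version":{"number":"1.4.2"},"tagline":"You Know, for Search"}'

# Extract the numeric "status" field without requiring a JSON parser.
status=$(printf '%s' "$response" | grep -o '"status":[0-9]*' | cut -d: -f2)
echo "$status"
```

Anything other than 200 here means the node is not healthy and the service wrapper logs are worth checking.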
5 Install Logstash
The default port for Logstash's embedded web service is 9292.
Download Logstash:
# sudo wget https://download.elasticsearch.org/logstash/logstash/logstash-1.4.2.tar.gz
# sudo tar -zxvf logstash-1.4.2.tar.gz -C /usr/local/
# ln -s /usr/local/logstash-1.4.2 /usr/local/logstash
First, check that the Logstash service works at all; whatever you type should be echoed back as a simple log line:
# /usr/local/logstash/bin/logstash -e 'input { stdin { } } output { stdout { } }'
Create a Logstash configuration file and test again; this time the input should be printed as a structured log:
# mkdir -p /usr/local/logstash/etc
# vim /usr/local/logstash/etc/hello_search.conf
input {
    stdin {
        type => "human"
    }
}
output {
    stdout {
        codec => rubydebug
    }
    elasticsearch {
        host => "10.111.121.22"
        port => 9300
    }
}
# /usr/local/logstash/bin/logstash -f /usr/local/logstash/etc/hello_search.conf
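Typing a line such as `hello` into this session should produce rubydebug output along these lines (the timestamp and host values are illustrative, not taken from the original guide):

```
{
       "message" => "hello",
      "@version" => "1",
    "@timestamp" => "2015-01-19T00:00:00.000Z",
          "type" => "human",
          "host" => "localhost"
}
```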
6 Install Kibana
CentOS ships with Apache preinstalled, so the Kibana code can be copied directly into a directory that Apache serves:
# sudo wget https://download.elasticsearch.org/kibana/kibana/kibana-3.1.2.tar.gz
# sudo tar -zxvf kibana-3.1.2.tar.gz
# mv kibana-3.1.2 /var/www/html/kibana
Edit Kibana's configuration file, replacing the elasticsearch line with the following:
# vim /var/www/html/kibana/config.js
elasticsearch: "http://10.111.121.22:9200",
Start the HTTP service:
# service httpd start
Modify the ElasticSearch configuration file, append the following line, and restart the ElasticSearch service:
# vim /usr/local/elasticsearch/config/elasticsearch.yml
http.cors.enabled: true
# /usr/local/elasticsearch/bin/service/elasticsearch restart
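Note that ElasticSearch 1.4 disables CORS by default and, beyond enabling it, also restricts which origins are allowed, so Kibana 3 running in the browser may additionally need an allowed-origin pattern. A possible addition to elasticsearch.yml (the catch-all regex is an example, not part of the original guide, and should be narrowed in production):

```
http.cors.enabled: true
http.cors.allow-origin: "/.*/"
```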
Kibana can then be accessed through a browser:
http://10.111.121.22/kibana
Now, type any characters into the earlier Logstash session and the corresponding log entries appear in Kibana.
7 Configure Logstash
Create another Logstash configuration file. This time the HTTP logs and the system log are the inputs, and the output goes straight to ElasticSearch instead of being printed to the console:
# vim /usr/local/logstash/etc/logstash_agent.conf
input {
    file {
        type => "http.access"
        path => ["/var/log/httpd/access_log"]
    }
    file {
        type => "http.error"
        path => ["/var/log/httpd/error_log"]
    }
    file {
        type => "messages"
        path => ["/var/log/messages"]
    }
}
output {
    elasticsearch {
        host => "123.206.211.52"
        port => 9300
    }
}
# /usr/local/logstash/bin/logstash -f /usr/local/logstash/etc/logstash_agent.conf &
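By default, Logstash's file input tails each log from the end, so lines written before Logstash started are skipped. If historical entries should also be indexed on the first run, the input's `start_position` option can be set; a sketch for the messages input above:

```
file {
    type => "messages"
    path => ["/var/log/messages"]
    start_position => "beginning"
}
```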
Now, a simple log analysis and monitoring platform is set up and can be viewed using Kibana.