
Filebeat+kafka+ELK5.4 installation and deployment

2025-04-05 Update From: SLTechnology News&Howtos


Shulou (Shulou.com) 06/02 Report

Build a powerful log analysis platform with ELK. The topology is: Filebeat → Kafka → Logstash → Elasticsearch → Kibana.

Here we will deploy Kafka + Filebeat + ELK 5.4.

Software versions:

jdk-8u131-linux-i586.tar.gz
filebeat-5.4.0-linux-x86_64.tar.gz
elasticsearch-5.4.0.tar.gz
kibana-5.4.0-linux-x86_64.tar.gz
logstash-5.4.0.tar.gz
kafka_2.11-0.10.0.0.tgz

1. JDK installation and configuration (skipped here)

2. Installation and configuration of ELK

1. Elasticsearch configuration

Create an elk user, extract the archive as that user, and edit the config:

[elk@localhost elasticsearch-5.4.0]$ vi config/elasticsearch.yml

network.host: 192.168.12.109
# Set a custom port for HTTP:
#http.port: 9200

Save, then start:

[elk@localhost elasticsearch-5.4.0]$ nohup bin/elasticsearch &

Verify:

[elk@localhost elasticsearch-5.4.0]$ curl http://192.168.12.109:9200
{
  "name": "aCA2ApK",
  "cluster_name": "elasticsearch",
  "cluster_uuid": "Ea4_9kXZSaeDL1fYt4lUUQ",
  "version": {
    "number": "5.4.0",
    "build_hash": "780f8c4",
    "build_date": "2017-04-28T17:43:27.229Z",
    "build_snapshot": false,
    "lucene_version": "6.5.0"
  },
  "tagline": "You Know, for Search"
}

2. Kibana installation and configuration

[elk@localhost kibana-5.4.0-linux-x86_64]$ vi config/kibana.yml

# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601
# Specifies the address to which the Kibana server will bind. IP addresses and
# host names are both valid values. The default is 'localhost', which usually
# means remote machines will not be able to connect. To allow connections from
# remote users, set this parameter to a non-loopback address.
server.host: "192.168.12.109"
# The URL of the Elasticsearch instance to use for all your queries.
elasticsearch.url: "http://192.168.12.109:9200"

[elk@localhost kibana-5.4.0-linux-x86_64]$ nohup bin/kibana &

Kibana can now be accessed from the browser.
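The curl check above returns a JSON document describing the node. As a minimal sketch, here is how you might verify the fields that matter (the sample body below is copied from the response shown above; in practice you would fetch it from the node):

```python
import json

# Sample response body from `curl http://192.168.12.109:9200`
# (values copied from the output above).
response = '''{
  "name": "aCA2ApK",
  "cluster_name": "elasticsearch",
  "cluster_uuid": "Ea4_9kXZSaeDL1fYt4lUUQ",
  "version": {
    "number": "5.4.0",
    "build_hash": "780f8c4",
    "build_date": "2017-04-28T17:43:27.229Z",
    "build_snapshot": false,
    "lucene_version": "6.5.0"
  },
  "tagline": "You Know, for Search"
}'''

info = json.loads(response)
# A healthy node should report the version we deployed.
assert info["version"]["number"] == "5.4.0"
print(info["cluster_name"], info["version"]["number"])
```

If the version or cluster name is wrong, the wrong node answered or an old install is still running.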

3. Installation and configuration of kafka

Here we deploy only a single Kafka node on 192.168.12.105; see the separate post "centos kafka single package stand-alone deployment".

4. Installation and configuration of logstash

Create a new configuration file (the Kafka broker port below is assumed to be the default 9092):

[elk@localhost logstash-5.4.0]$ vi nginx.conf

input {
  kafka {
    codec => "json"
    topics_pattern => "logstash-.*"
    bootstrap_servers => "192.168.12.105:9092"
    auto_offset_reset => "latest"
    group_id => "logstash-g1"
  }
}
filter {
  if "nginx-accesslog" in [tags] {
    grok {
      match => { "message" => "%{IPORHOST:http_host} %{IPORHOST:clientip} - %{USERNAME:remote_user} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:http_verb} %{NOTSPACE:http_request}(?: HTTP/%{NUMBER:http_version})?|%{DATA:raw_http_request})\" %{NUMBER:response} (?:%{NUMBER:bytes_read}|-) %{QS:referrer} %{QS:agent} %{QS:xforwardedfor} %{NUMBER:request_time:float} %{GREEDYDATA:traceID}" }
    }
    mutate {
      convert => ["status", "integer"]
      convert => ["body_bytes_sent", "integer"]
      convert => ["request_time", "float"]
    }
    geoip { source => "remote_addr" }
    date { match => ["timestamp", "dd/MMM/YYYY:HH:mm:ss Z"] }
    useragent { source => "http_user_agent" }
  }
  if "tomcat-accesslog" in [tags] {
    grok {
      match => { "message" => "%{IPORHOST:clientip} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:http_verb} %{NOTSPACE:http_request}(?: HTTP/%{NUMBER:http_version})?|%{DATA:raw_http_request})\" %{NUMBER:response} (?:%{NUMBER:bytes_read}|-) %{QS:referrer} %{NUMBER:request_time:float} %{GREEDYDATA:traceID}" }
    }
    date { match => ["timestamp", "dd/MMM/YYYY:HH:mm:ss Z"] }
  }
}
output {
  elasticsearch {
    hosts => ["192.168.12.109:9200"]
    index => "logstash-%{type}-%{+YYYY.MM.dd}"
    document_type => "%{type}"
  }
  # stdout { codec => rubydebug }
}

Save and start:

[elk@localhost logstash-5.4.0]$ nohup bin/logstash -f nginx.conf &
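The nginx grok pattern above is dense; as a rough illustration only (not the grok engine, and with grok's IPORHOST/QS sub-patterns simplified to generic matchers), here is an equivalent Python regex applied to a hypothetical access-log line in the same layout:

```python
import re

# Simplified Python equivalent of the nginx grok pattern above.
LOG_RE = re.compile(
    r'(?P<http_host>\S+) (?P<clientip>\S+) - (?P<remote_user>\S+) '
    r'\[(?P<timestamp>[^\]]+)\] '
    r'"(?P<http_verb>\S+) (?P<http_request>\S+)(?: HTTP/(?P<http_version>[\d.]+))?" '
    r'(?P<response>\d+) (?P<bytes_read>\d+|-) '
    r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)" "(?P<xforwardedfor>[^"]*)" '
    r'(?P<request_time>[\d.]+) (?P<traceID>.*)'
)

# A hypothetical access-log line in the format the pattern expects.
line = ('example.com 192.168.12.50 - - [05/Jun/2017:10:00:01 +0800] '
        '"GET /index.html HTTP/1.1" 200 612 "-" "curl/7.29.0" "-" 0.003 -')

m = LOG_RE.match(line)
fields = m.groupdict()
print(fields["clientip"], fields["response"], fields["request_time"])
```

Each named group here corresponds to one grok capture (http_host, clientip, timestamp, and so on), which is what ends up as a field on the Elasticsearch document.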

5. Installation and configuration of filebeat

Copy filebeat to each server whose logs need to be collected and extract it. Here we collect Nginx and Tomcat access logs respectively.

Nginx server

[user@localhost filebeat-5.4.0-linux-x86_64]$ vi filebeat.yml

filebeat.prospectors:
- input_type: log
  paths:
    - /data/programs/nginx/logs/access.log
  tags: ["nginx-accesslog", "nginx-test-194"]
  document_type: nginxaccess
output.kafka:
  enabled: true
  hosts: ["192.168.12.105:9092"]
  topic: 'logstash-%{[type]}'

[user@localhost filebeat-5.4.0-linux-x86_64]$ nohup ./filebeat -c filebeat.yml &

Tomcat server

[user@localhost filebeat-5.4.0-linux-x86_64]$ vi filebeat.yml

filebeat.prospectors:
- input_type: log
  paths:
    - /data/tomcat/logs/localhost_access_log*
  tags: ["tomcat-accesslog", "tomcat103"]
  document_type: tomcataccess
output.kafka:
  enabled: true
  hosts: ["192.168.12.105:9092"]
  topic: 'logstash-%{[type]}'

[user@localhost filebeat-5.4.0-linux-x86_64]$ nohup ./filebeat -c filebeat.yml &
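In both filebeat configs, the Kafka topic is `logstash-%{[type]}`, where `%{[type]}` expands to the event's `document_type`; the logstash input then subscribes with `topics_pattern => "logstash-.*"`. A small sketch of that naming contract (the helper function is illustrative, not part of either tool):

```python
import re

def kafka_topic(document_type):
    # Mirrors filebeat's `topic: logstash-%{[type]}` setting.
    return "logstash-%s" % document_type

# Mirrors the logstash kafka input's `topics_pattern => "logstash-.*"`.
TOPICS_PATTERN = re.compile(r"logstash-.*")

for doc_type in ("nginxaccess", "tomcataccess"):
    topic = kafka_topic(doc_type)
    # Every topic filebeat produces must match the logstash subscription.
    assert TOPICS_PATTERN.fullmatch(topic)
    print(topic)
```

This is why adding a new log source only requires a new `document_type` on the filebeat side: the resulting topic is picked up automatically without touching the logstash input.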

With the above complete, the platform is set up. Next, create the index patterns in Kibana.

For the Nginx logs, enter: logstash-nginxaccess*

For the Tomcat logs, enter: logstash-tomcataccess*
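These two patterns work because the logstash output writes daily indices named `logstash-%{type}-%{+YYYY.MM.dd}`. A sketch of how the generated index names fall under the Kibana patterns (the date is an arbitrary example, and the helper mirrors the output setting rather than logstash itself):

```python
import fnmatch
from datetime import date

def index_name(doc_type, day):
    # Mirrors the logstash output's `index => "logstash-%{type}-%{+YYYY.MM.dd}"`.
    return "logstash-%s-%s" % (doc_type, day.strftime("%Y.%m.%d"))

day = date(2017, 6, 2)  # arbitrary example date
nginx_idx = index_name("nginxaccess", day)
tomcat_idx = index_name("tomcataccess", day)

# Each daily index matches its Kibana index pattern.
assert fnmatch.fnmatch(nginx_idx, "logstash-nginxaccess*")
assert fnmatch.fnmatch(tomcat_idx, "logstash-tomcataccess*")
print(nginx_idx, tomcat_idx)
```

Rolling a new index per day keeps each Kibana pattern matching every day's data while letting old indices be dropped individually.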

Data now flows successfully from Filebeat through Kafka into ELK and is displayed in Kibana.

[Screenshot: the finished Kibana dashboard]
