
How to import nginx logs into elasticsearch


This article shows how to import nginx logs into elasticsearch. The method described here is simple, fast, and practical, so read on if the topic interests you.

The nginx logs are collected by filebeat and forwarded to logstash, which processes them and writes them into elasticsearch. Filebeat is responsible only for collection; logstash handles log formatting, field substitution and splitting, and decides the name of the index each log is written to in elasticsearch.

1. Configure the nginx log format

log_format main '$remote_addr $http_x_forwarded_for [$time_local] $server_name $request '
                '$status $body_bytes_sent $http_referer '
                '"$http_user_agent", "$connection", "$http_cookie" '
                '$request_time $upstream_response_time';
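With this format, a single line in the access log looks roughly like the following (all values here are invented purely for illustration):

1.2.3.4 5.6.7.8, 9.10.11.12 [20/May/2019:21:05:56 +0800] blog.cnfol.com GET /index.html HTTP/1.1 200 512 http://example.com/ "Mozilla/5.0 (X11; Linux x86_64)", "12345", "uid=1" 0.005 0.003

This is the shape that the grok pattern in the logstash pipeline below has to match.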

2. Install and configure filebeat and enable nginx module

tar -zxvf filebeat-6.2.4-linux-x86_64.tar.gz -C /usr/local
cd /usr/local; ln -s filebeat-6.2.4-linux-x86_64 filebeat
cd /usr/local/filebeat

Enable the nginx module

./filebeat modules enable nginx

View the modules

./filebeat modules list
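If the nginx module was enabled successfully, it should appear in the enabled section of the list. The output looks roughly like this (abridged):

Enabled:
nginx

Disabled:
apache2
auditd
...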

Create a configuration file

vim /usr/local/filebeat/blog_module_logstash.yml

filebeat.modules:
- module: nginx
  access:
    enabled: true
    var.paths: ["/home/weblog/blog.cnfol.com_access.log"]
  #error:
  #  enabled: true
  #  var.paths: ["/home/weblogerr/blog.cnfol.com_error.log"]
output.logstash:
  hosts: ["192.168.15.91:5044"]
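Before starting filebeat, the configuration and the connection to logstash can be sanity-checked with filebeat's built-in test subcommands (available in filebeat 6.x):

./filebeat test config -c blog_module_logstash.yml
./filebeat test output -c blog_module_logstash.yml

The second command will only succeed once logstash is listening on 192.168.15.91:5044, so it may be more useful to run it after step 3.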

Start filebeat

./filebeat -c blog_module_logstash.yml -e

3. Configure logstash

tar -zxvf logstash-6.2.4.tar.gz -C /usr/local
cd /usr/local; ln -s logstash-6.2.4 logstash

Create a pipeline file for the nginx log:

cd /usr/local/logstash

Logstash's built-in grok pattern directory:

vendor/bundle/jruby/2.3.0/gems/logstash-patterns-core-4.1.2/patterns

Edit grok-patterns and add a pattern that supports multiple IPs:

FORWORD (?:%{IPV4}[,]?[ ]?)+|%{WORD}
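The FORWORD pattern matches either a comma-separated list of IPv4 addresses (what $http_x_forwarded_for looks like behind a CDN) or a single word such as "-". If you prefer not to edit the bundled grok-patterns file, the same definition can be kept in a separate directory and referenced from the grok filter with the patterns_dir option; the path below is only an example:

# contents of /usr/local/logstash/patterns/extra (hypothetical file):
FORWORD (?:%{IPV4}[,]?[ ]?)+|%{WORD}

# then, in the pipeline's grok filter:
grok {
  patterns_dir => ["/usr/local/logstash/patterns"]
  match => { "message" => "..." }
}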

The official grok patterns shipped with logstash-patterns-core can be used for reference.

Create a logstash pipeline configuration file

#input {
#  stdin {}
#}
# accept data from filebeat
input {
  beats {
    port => 5044
    host => "0.0.0.0"
  }
}
filter {
  # add a debug switch
  mutate { add_field => { "[@metadata][debug]" => true } }
  grok {
    # parse the nginx log
    #match => { "message" => "%{NGINXACCESS_TEST2}" }
    match => { "message" => '(?:%{IPORHOST:clientip}|-) %{FORWORD:http_x_forwarded_for} \[%{HTTPDATE:[@metadata][webtime]}\] (?:%{HOSTNAME:hostname}|-) %{WORD:method} %{URIPATHPARAM:request} HTTP/%{NUMBER:httpversion} %{NUMBER:response} (?:%{NUMBER:bytes}|-) (?:"(?:%{NOTSPACE:referrer}|-)"|%{NOTSPACE:referrer}|-) %{QS:agent} (?:"(?:%{NUMBER:connection}|-)"|%{NUMBER:connection}|-) %{QS:cookie} %{NUMBER:request_time:float} (?:%{NUMBER:upstream_response_time:float}|-)' }
  }
  # copy the default @timestamp (the time beats collected the log) into a new field @read_timestamp
  ruby {
    #code => "event.set('@read_timestamp', event.get('@timestamp'))"
    # shift the time zone to UTC+8
    code => "event.set('@read_timestamp', event.get('@timestamp').time.localtime + 8*60*60)"
  }
  # parse the time in the nginx log, e.g. 20/May/2015:21:05:56 +0000
  date {
    locale => "en"
    match => ["[@metadata][webtime]", "dd/MMM/yyyy:HH:mm:ss Z"]
  }
  # convert the bytes field from a string to an integer
  mutate { convert => { "bytes" => "integer" } }
  # normalize the cookie field before parsing it as json
  #mutate {
  #  gsub => ["cookies", '\;', ',']
  #}
  # when CDN acceleration is used, http_x_forwarded_for contains several IPs; the first one is the user's real IP
  if [http_x_forwarded_for] =~ "," {
    ruby {
      code => 'event.set("http_x_forwarded_for", event.get("http_x_forwarded_for").split(",")[0])'
    }
  }
  # look up the geographic location of the IP
  geoip {
    source => "http_x_forwarded_for"
    # only keep the location (longitude/latitude), country, city and region of the IP
    fields => ["location", "country_name", "city_name", "region_name"]
  }
  # parse the agent field to get the browser, system version and other details
  useragent {
    source => "agent"
    target => "useragent"
  }
  # fields to drop
  #mutate { remove_field => ["message"] }
  # use the log file name as the prefix of the index name
  ruby {
    code => 'event.set("[@metadata][index_pre]", event.get("source").split("/")[-1])'
  }
  # format @timestamp as 2019.04.23
  ruby {
    code => 'event.set("[@metadata][index_day]", event.get("@timestamp").time.localtime.strftime("%Y.%m.%d"))'
  }
  # set the default index name for the output
  mutate {
    add_field => {
      #"[@metadata][index]" => "%{[@metadata][index_pre]}_%{+YYYY.MM.dd}"
      "[@metadata][index]" => "%{[@metadata][index_pre]}_%{[@metadata][index_day]}"
    }
  }
  # parse the cookies field into json
  #mutate {
  #  gsub => ["cookies", ";", ",", "cookies", "=", ":"]
  #  split => { "cookies" => "," }
  #}
  #json_encode {
  #  source => "cookies"
  #  target => "cookies_json"
  #}
  #mutate {
  #  gsub => ["cookies_json", ',', '","', "cookies_json", ':', '":"']
  #}
  #json {
  #  source => "cookies_json"
  #  target => "cookies2"
  #}
  # if grok parsing failed, write the event to a failure index
  if "_grokparsefailure" in [tags] {
  #if "_dateparsefailure" in [tags] {
    mutate {
      replace => {
        #"[@metadata][index]" => "%{[@metadata][index_pre]}_failure_%{+YYYY.MM.dd}"
        "[@metadata][index]" => "%{[@metadata][index_pre]}_failure_%{[@metadata][index_day]}"
      }
    }
  # if there was no error, drop the raw message
  } else {
    mutate { remove_field => ["message"] }
  }
}
output {
  if [@metadata][debug] {
    # print to rubydebug, including the metadata
    stdout { codec => rubydebug { metadata => true } }
  } else {
    # print a "." for each event
    stdout { codec => dots }
    # write to the specified elasticsearch
    elasticsearch {
      hosts => ["192.168.15.160:9200"]
      index => "%{[@metadata][index]}"
      document_type => "doc"
    }
  }
}
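Before starting the pipeline for real, the configuration can be checked for syntax errors with logstash's --config.test_and_exit flag (test_pipline2.conf is the file name used in the start command below):

bin/logstash -f test_pipline2.conf --config.test_and_exit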

Start logstash

nohup bin/logstash -f test_pipline2.conf &
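Once logstash is running and filebeat is shipping events, you can confirm that the daily indices are actually being created by querying elasticsearch directly; 192.168.15.160:9200 is the host configured in the output section above:

curl -XGET 'http://192.168.15.160:9200/_cat/indices?v'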

At this point you should have a good grasp of how to import nginx logs into elasticsearch. Give it a try yourself!
