
Analyzing nginx Logs with the ELK Stack


An ELK system, once installed, still needs the relevant log files fed into it. This article walks through processing nginx log files and visualizing them to meet day-to-day business needs. To make a long story short, let me introduce my environment.

The network topology diagram is as follows (figure omitted):

The specific configuration is as follows:

Server          OS                          Version              IP              Remarks
logstash        CentOS release 6.4 (Final)  logstash 2.2.2       192.168.180.2   logstash client used to collect logs
elasticsearch   CentOS release 6.4 (Final)  elasticsearch 2.2.1  192.168.180.3
kibana          CentOS release 6.4 (Final)  kibana 4.4.2         192.168.180.3

The specific steps are as follows:

(1) Configure the nginx server log format (log_format) into a unified format:

[root@ittestserver1 local]# vim /usr/local/nginx2/conf/nginx.conf
http {
    include       mime.types;
    default_type  application/octet-stream;
    # include proxy.conf
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log logs/access.log main;
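Note that the logstash input in step (2) reads this access log with codec => "json", while the main format above writes plain text; that mismatch is what produces the "JSON parse failure" messages seen later in the test output. If you want the codec to parse each line directly, nginx has to emit JSON. A minimal sketch of such a format (the field names are illustrative assumptions, chosen to match the clientip field the geoip filter expects):

log_format logstash_json '{ "@timestamp": "$time_iso8601", '
                         '"clientip": "$remote_addr", '
                         '"request": "$request", '
                         '"status": "$status", '
                         '"body_bytes_sent": "$body_bytes_sent", '
                         '"http_referer": "$http_referer", '
                         '"http_user_agent": "$http_user_agent" }';
access_log logs/access.log logstash_json;

One caveat with this hand-built approach: nginx of this vintage does not escape quotes inside logged values, so a user agent containing a double quote will still break JSON parsing.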

(2) Install and configure logstash:

1. The installation steps are omitted here; refer to the previous article.

2. Configuration is as follows:

[root@ittestserver1 local]# vim /usr/local/logstash/etc/nginx-access0518.conf
input {
    file {
        path => "/usr/local/nginx2/logs/access.log"
        type => "nginx-access"
        start_position => "beginning"
        sincedb_path => "/usr/local/logstash/sincedb"
        codec => "json"
    }
}
filter {
    if [type] == "nginx-access" {
        geoip {
            source => "clientip"
            target => "geoip"
            database => "/usr/local/logstash/etc/GeoLiteCity.dat"
            add_field => ["[geoip][coordinates]", "%{[geoip][longitude]}"]
            add_field => ["[geoip][coordinates]", "%{[geoip][latitude]}"]
        }
        mutate {
            convert => ["[geoip][coordinates]", "float"]
        }
    }
}
output {
    if [type] == "nginx-access" {
        elasticsearch {
            hosts => ["192.168.180.3:9200"]
            manage_template => true
            index => "logstash-nginx-access-%{+YYYY-MM}"
        }
    }
}

The specific explanation is as follows:

Logstash functionality is divided among several kinds of plugins: input, output, filter, and codec.

Input: data input sources. A variety of plugins are supported, such as beats, file, graphite, http, kafka, redis, exec, etc.; the full list is on the ELK official website.

Output: data output destinations. A variety of plugins are supported here too, such as elasticsearch, used in this article and probably the most common output, along with exec, the stdout terminal, graphite, http, zabbix, nagios, redmine, etc.

Filter: processes and filters data events according to the characteristics of the log event before output. Supported filters include grok, date, geoip, mutate, ruby, json, kv, csv, checksum, dns, drop, xml, etc.

Codec: encoding plugins that change the representation of event data; a codec runs inside an input or output rather than standalone. Examples include rubydebug, graphite, fluent, nmap, etc. All four plugin categories appear together in the sketch below.
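To make the four categories concrete, here is a minimal throwaway pipeline, assuming a stdin/stdout setup purely for illustration (not part of this article's environment):

input {
    stdin { codec => "json" }                          # input plugin; the codec decodes each line
}
filter {
    mutate { add_field => { "pipeline" => "demo" } }   # filter plugin modifying the event
}
output {
    stdout { codec => rubydebug }                      # output plugin; the codec formats the event
}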

Details of all the above plugins are on the official website, where the introduction is very detailed. The following explains the meaning of the configuration file used in this article:

Input segment:

File: use a file as the input source.

Path: the log path; both a glob such as /var/log/*.log and an array such as ["/var/log/messages", "/var/log/*.log"] are supported.

Start_position: read events from the beginning of the file; the alternative value is end, which reads only newly appended lines.
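A short sketch of those file-input options together (the paths are illustrative, not from this article's setup):

input {
    file {
        path => ["/var/log/messages", "/var/log/*.log"]   # an explicit file plus a glob
        start_position => "end"                           # follow only newly written lines
    }
}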

Filter segment:

Grok: a tool for converting unstructured data into structured data.

Match: the match condition; it takes the nginx log line as the message field and applies the NGINXACCESS grok pattern to convert it.

Geoip: this filter looks up an IP field in the GeoIP database and adds the IP's geographic location to the event.

Source: the field holding the source IP. Here we select the last field in the log line; with the default nginx log format, select the first field.

Target: the field under which the geoip lookup result is stored, here geoip.

Database: the storage path of the GeoIP database file.

Add_field: adds a field carrying the coordinate longitude.

Add_field: adds a field carrying the coordinate latitude.

Mutate: modifies, deletes, or type-converts event data.

Convert: converts the coordinates to the float type.

Convert: converts the HTTP response code field to int.

Convert: converts the HTTP transferred-bytes field to int.

Replace: replaces the contents of a field.

Remove_field: removes the message field. Since the data has already been parsed into separate fields, keeping message here would effectively store two copies of each event.

Date: time handling. This plugin is very useful: it converts the time recorded in your log events into the event timestamp, which is essential when importing old data. It puzzled me for a long time; don't fall into the same pit.

Match: parses the timestamp field using the format dd/MMM/yyyy:HH:mm:ss Z.

Mutate: data modification.

Remove_field: removes the timestamp field once it has been parsed.
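Several of the options explained above (grok, the int conversions, replace, remove_field, date) do not appear in the short configuration shown in step 2. A hedged sketch of the fuller filter block they describe, assuming the NGINXACCESS pattern and field names defined in step 3:

filter {
    grok {
        patterns_dir => ["/usr/local/logstash/patterns"]    # where NGINXACCESS is defined (step 3)
        match => { "message" => "%{NGINXACCESS}" }
    }
    geoip {
        source => "remote_addr"
        target => "geoip"
        database => "/usr/local/logstash/etc/GeoLiteCity.dat"
        add_field => ["[geoip][coordinates]", "%{[geoip][longitude]}"]
        add_field => ["[geoip][coordinates]", "%{[geoip][latitude]}"]
    }
    mutate {
        convert => ["[geoip][coordinates]", "float"]
        convert => ["status", "integer"]                    # HTTP response code to int
        convert => ["body_bytes_sent", "integer"]           # transferred bytes to int
        remove_field => ["message"]                         # the parsed fields already hold this data
    }
    date {
        match => ["time_local", "dd/MMM/yyyy:HH:mm:ss Z"]   # use the log's own time as the event timestamp
    }
    mutate {
        remove_field => ["time_local"]
    }
}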

Output segment:

Elasticsearch: output to ES.

Hosts: the IP and port of the ES server (or its FQDN and port).

Index: the index the logs are written to, logstash-nginx-access-%{+YYYY-MM}; this is the name used later when the index pattern logstash-nginx-access-* is added to kibana.

3. After creating the logstash configuration file, we still need to define the pattern that grok uses, because the conversion syntax referenced in the logstash configuration has to be defined somewhere. Go to the logstash installation directory (default location: /usr/local/logstash/) and create a directory named patterns:

[root@ittestserver1 local]# mkdir -pv /usr/local/logstash/patterns

The pattern definitions placed in that directory:

NGUSERNAME [a-zA-Z\.\@\-\+_%]+
NGUSER %{NGUSERNAME}
NGINXACCESS %{IPORHOST:remote_addr} - - \[%{HTTPDATE:time_local}\] "%{WORD:method} %{URIPATHPARAM:request} HTTP/%{NUMBER:httpversion}" %{INT:status} %{INT:body_bytes_sent} %{QS:http_referer} %{QS:http_user_agent}
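To sanity-check the pattern before wiring it into the real pipeline, you can paste a single access-log line into a throwaway config that reads from stdin (a sketch, assuming the paths above):

input { stdin { } }
filter {
    grok {
        patterns_dir => ["/usr/local/logstash/patterns"]
        match => { "message" => "%{NGINXACCESS}" }
    }
}
output { stdout { codec => rubydebug } }

If the pattern matches, rubydebug prints each named field (remote_addr, time_local, status, and so on); if not, the event is tagged _grokparsefailure.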

4. Test whether the configuration is valid, then start the service. Once started, the service scans the log continuously.

[root@ittestserver1 log]# /usr/local/logstash/bin/logstash -t -f /usr/local/logstash/etc/nginx-access0518.conf
Configuration OK
[root@ittestserver1 log]# /usr/local/logstash/bin/logstash -f /usr/local/logstash/etc/nginx-access0518.conf
...nux; Android 5.1.1; R7Plusm Build/LMY47V) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/37.0.0.0 Mobile MQQBrowser/6.9 TBS/036906 Safari/537.36 hsp\" \"-\"", :level=>:error}
JSON parse failure. Falling back to plain-text {:error=>#, :data=>"175.155.181.151 - - [12/Jan/2017:01:28:05 +0800] \"GET /resources/p_w_picpaths/m/landing/landingm20161221/header.png HTTP/1.1\" 200 233816 \"https://m.guojinbao.com/landingm.html?s=NDg4MDU2\" \"Mozilla/5.0 (iPhone; CPU iPhone OS 9_1 like Mac OS X) AppleWebKit/601.1.46 (KHTML, like Gecko) Mobile/13B143\" \"-\"", :level=>:error}
JSON parse failure. Falling back to plain-text {:error=>#, :data=>"223.67.223.215 - - [12/Jan/2017:01:28:05 +0800] \"GET /resources/js/Crypto.js HTTP/1.1\" 200 8703 \"https://m.guojinbao.com/landingm.html?s=MzIxODQ=\" \"Mozilla/5.0 (Linux; Android 5.1.1; R7Plusm Build/LMY47V) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/37.0.0.0 Mobile MQQBrowser/6.9 TBS/036906 Safari/537.36 hsp\" \"-\"", :level=>:error}
JSON parse failure. Falling back to plain-text {:error=>#, :data=>"223.67.223.215 - - [12/Jan/2017:01:28:05 +0800] \"GET /resources/p_w_picpaths/m/landing/landingm20161221/btn1.png HTTP/1.1\" 200 10194 \"https://m.guojinbao.com/resources/css/landingm20161221.css\" \"Mozilla/5.0 (Linux; Android 5.1.1; R7Plusm Build/LMY47V) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/37.0.0.0 Mobile MQQBrowser/6.9 TBS/036906

The JSON parse failures above are expected with a plain-text access log: the input's codec => "json" tries to decode each line as JSON and falls back to plain text when it cannot (see the note on a JSON log_format in step (1)).

(3) Kibana configuration

(1) Log in to Kibana on 192.168.180.3.

(2) Add an index whose name is the one imported into ES earlier; in this article it is logstash-nginx-access-*.
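Before adding it in Kibana, you can confirm that the index actually exists in elasticsearch (a quick check, assuming the default ES HTTP port):

[root@ittestserver1 log]# curl 'http://192.168.180.3:9200/_cat/indices?v' | grep logstash-nginx-access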

Check the indices: there are currently three. Star the one you want as the default, and it becomes the index that Discover uses by default.

(3) Click Discover to see the log data we imported.
