2025-02-27 Update From: SLTechnology News & Howtos
Shulou (Shulou.com) 06/02 Report --
(1) Test environment:
Logstash agent: 192.168.180.22
ES: 192.168.180.23
Kibana: 192.168.180.23
Topology: logstash --> ES --> kibana
(2) Implementation steps:
(1) Logstash configuration:
1. Configure the nginx log format with log_format:
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent "$http_referer" '
                '"$http_user_agent" "$http_x_forwarded_for"';
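As a quick sanity check, the fields this log format produces can be extracted with a regular expression roughly equivalent to the grok pattern configured later in nginx-access.conf. The following Python sketch is illustrative only; the sample log line and the regex are assumptions, not taken from the real access log:

```python
import re

# Regex approximating the grok pattern for the nginx "main" log format above.
LOG_RE = re.compile(
    r'(?P<remote_addr>\S+) - (?P<remote_user>\S+) \[(?P<time_local>[^\]]+)\] '
    r'"(?P<method>\w+) (?P<request>\S+) HTTP/(?P<httpversion>[\d.]+)" '
    r'(?P<status>\d+) (?P<body_bytes_sent>\d+) '
    r'"(?P<http_referer>[^"]*)" "(?P<http_user_agent>[^"]*)"'
)

# A made-up sample line in the "main" format (illustrative values).
line = ('203.0.113.10 - - [20/Jun/2017:22:55:23 +0800] '
        '"GET /index.html HTTP/1.1" 200 612 "-" "curl/7.29.0" "-"')

m = LOG_RE.match(line)
print(m.group("remote_addr"), m.group("status"))  # → 203.0.113.10 200
```

The named groups mirror the field names the grok filter emits, so a line that this regex rejects would also fail grok parsing.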
2. On the logstash server, download the GeoIP city database and decompress it (the geoip filter needs the unpacked .mmdb file):
[root@localhost config]# cd /usr/local/logstash/config/
[root@localhost config]# wget http://geolite.maxmind.com/download/geoip/database/GeoLite2-City.mmdb.gz
[root@localhost config]# gunzip GeoLite2-City.mmdb.gz
3. Configure the logstash client (note: a %{USER:remote_user} term is included so the grok pattern matches the $remote_user field emitted by the log format above):
[root@localhost config]# vim /usr/local/logstash/config/nginx-access.conf
input {
  file {
    path => "/opt/access.log"
    type => "nginx"
    start_position => "beginning"
  }
}
filter {
  grok {
    match => { "message" => "%{IPORHOST:remote_addr} - %{USER:remote_user} \[%{HTTPDATE:time_local}\] \"%{WORD:method} %{URIPATHPARAM:request} HTTP/%{NUMBER:httpversion}\" %{INT:status} %{INT:body_bytes_sent} %{QS:http_referer} %{QS:http_user_agent}" }
  }
  geoip {
    source => "remote_addr"
    target => "geoip"
    database => "/usr/local/logstash/config/GeoLite2-City.mmdb"
    add_field => ["[geoip][coordinates]", "%{[geoip][longitude]}"]
    add_field => ["[geoip][coordinates]", "%{[geoip][latitude]}"]
  }
}
output {
  elasticsearch {
    hosts => ["192.168.180.23:9200"]
    manage_template => true
    index => "logstash-map-%{+YYYY-MM}"
  }
}
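The index => "logstash-map-%{+YYYY-MM}" setting stamps the index name with the event's year and month, so a fresh index is created each month. A hedged Python sketch of how the name resolves (the resolve_index helper is hypothetical, for illustration only):

```python
from datetime import datetime

def resolve_index(ts: datetime) -> str:
    """Hypothetical helper: mimic logstash's %{+YYYY-MM} index naming."""
    return ts.strftime("logstash-map-%Y-%m")

print(resolve_index(datetime(2017, 6, 20)))  # → logstash-map-2017-06
```

Monthly indices keep each index small and make it easy to drop old data by deleting whole indices.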
Notes:
geoip: the IP lookup plug-in.
source: the field the geoip plug-in should process, usually an IP. If you feed input manually through the console (stdin), use message directly; when querying real nginx access logs in a production environment, first extract the client IP with grok, then use remote_addr here, as above.
target: the field the parsed GeoIP data is stored in; the default is geoip.
database: the path to the database file downloaded earlier.
add_field: these two lines append the longitude and the latitude; the map regions are drawn from these coordinates.
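The effect of the two add_field lines can be sketched as follows: longitude is appended first and latitude second, so [geoip][coordinates] ends up as a [lon, lat] array, which is the order Elasticsearch geo_point arrays (and therefore the Kibana map) expect. Illustrative Python only; the dict shape and values are assumptions:

```python
def build_coordinates(geoip: dict) -> list:
    """Mimic the two add_field calls: append longitude, then latitude."""
    coords = []
    coords.append(geoip["longitude"])  # first add_field line
    coords.append(geoip["latitude"])   # second add_field line
    return coords

# Made-up sample lookup result for illustration.
sample = {"longitude": 116.3883, "latitude": 39.9289}
print(build_coordinates(sample))  # → [116.3883, 39.9289]
```

Reversing the two add_field lines would swap the array order and place points at the wrong map locations, so the longitude-first order matters.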
4. Start the logstash client and load the configuration file just created. If it starts normally, the geoip-related fields will be visible in kibana.
[root@localhost config]# /usr/local/logstash/bin/logstash -f nginx-access.conf
Sending Logstash's logs to /usr/local/logstash/logs which is now configured via log4j2.properties
[2017-06-20T22:55:23,801][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://192.168.180.23:9200/]}}
[2017-06-20T22:55:23,805][INFO ][logstash.outputs.elasticsearch] Running healthcheck to see if an Elasticsearch connection is working {:healthcheck_url=>http://192.168.180.23:9200/, :path=>"/"}
[2017-06-20T22:55:23,901][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#}
[2017-06-20T22:55:23,909][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2017-06-20T22:55:23,947][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword"}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2017-06-20T22:55:23,955][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#]}
[2017-06-20T22:55:24,065][INFO ][logstash.filters.geoip] Using geoip database {:path=>"/usr/local/logstash/config/GeoLite2-City.mmdb"}
[2017-06-20T22:55:24,094][INFO ][logstash.pipeline] Starting pipeline {"id"=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>500}
[2017-06-20T22:55:24,275][INFO ][logstash.pipeline] Pipeline main started
[2017-06-20T22:55:24,369][INFO ][logstash.agent] Successfully started Logstash API endpoint {:port=>9600}
(2) Kibana configuration.
1. Edit kibana's configuration file kibana.yml and add the following at the end:
# The default locale. This locale can be used in certain circumstances to substitute any missing
# translations.
#i18n.defaultLocale: "en"
tilemap.url: 'http://webrd02.is.autonavi.com/appmaptile?lang=zh_cn&size=1&scale=1&style=7&x={x}&y={y}&z={z}'
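Kibana fills the {x}, {y} and {z} placeholders in tilemap.url with the tile column, tile row and zoom level when fetching map tiles. A small illustrative sketch of that substitution (the resolve_tile helper and the sample coordinates are hypothetical):

```python
# Tile URL template as configured in kibana.yml above.
TILEMAP_URL = ("http://webrd02.is.autonavi.com/appmaptile"
               "?lang=zh_cn&size=1&scale=1&style=7&x={x}&y={y}&z={z}")

def resolve_tile(x: int, y: int, z: int) -> str:
    """Hypothetical helper: substitute tile coordinates into the template."""
    return TILEMAP_URL.format(x=x, y=y, z=z)

# Sample tile request at zoom level 9 (illustrative coordinates).
print(resolve_tile(420, 193, 9))
```

Any tile server that accepts this x/y/z URL scheme can be used here; this example points at the AutoNavi server from the config above, which serves Chinese-language map tiles.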
2. Restart kibana service.
[root@localhost bin]# /usr/local/kibana/bin/kibana &
[1] 10631
[root@localhost bin]# ps -ef | grep kibana
root 10631 7795 21 10:52 pts/0 00:00:02 /usr/local/kibana/bin/../node/bin/node --no-warnings /usr/local/kibana/bin/../src/cli
root 10643 7795 0 10:52 pts/0 00:00:00 grep --color=auto kibana
[root@localhost bin]#
log [...] [info][status][plugin:kibana@5.4.0] Status changed from uninitialized to green - Ready
log [02:52:59.445] [info][status][plugin:elasticsearch@5.4.0] Status changed from uninitialized to yellow - Waiting for Elasticsearch
log [...] [info][status][plugin:console@5.4.0] Status changed from uninitialized to green - Ready
log [02:52:59.512] [info][status][plugin:elasticsearch@5.4.0] Status changed from yellow to green - Kibana index ready
log [02:52:59.513] [info][status][plugin:metrics@5.4.0] Status changed from uninitialized to green - Ready
log [...] [info][status][plugin:timelion@5.4.0] Status changed from uninitialized to green - Ready
log [02:53:00.080] [info][listening] Server running at http://192.168.180.23:5601
log [02:53:00.081] [info][status][ui settings] Status changed from uninitialized to green - Ready
3. Create the nginx access index pattern logstash-map*. The specific steps: browse to ip:5601 ---> Management ---> Index Patterns ---> + ---> enter logstash-map* in "Index name or pattern" ---> Create.
4. Create a Visualize. The specific steps: Visualize ---> + ---> Maps (Tile Map).
With that, the setup is complete; the steps are quite simple.