What is ELK?
ELK is a complete log collection and front-end display solution provided by Elastic; the name is an acronym of its three products: ElasticSearch, Logstash and Kibana.
Among them, Logstash is responsible for processing logs, such as filtering and formatting them; ElasticSearch has strong full-text search capability, so it serves as the storage container for the logs; and Kibana is responsible for the front-end display.
The ELK architecture is shown below:
Filebeat is added to collect logs from different clients and pass them to Logstash for unified processing.
Construction of ELK
Because ELK consists of three products, you could install each of the three in turn.
Here, ELK is installed using Docker instead.
With Docker you could also pull and run a separate image for each of the three products, but this time the all-in-one elk image is used.
Therefore, first make sure you already have a working Docker environment. For setting up Docker, see: https://blog.csdn.net/qq13112....
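If Docker is already installed, a quick sanity check (a minimal example; any reasonably recent Docker version works) is to ask the daemon for its version and status:

docker version
docker info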
Pull the image
Once you have the Docker environment, run the command on the server:
docker pull sebp/elk
This command downloads the ELK three-in-one image from the Docker registry, more than 2 GB in total. If the download is too slow, you can replace the Docker registry address with a domestic mirror, as sketched below.
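As a rough sketch of that mirror switch (the mirror URL below is only a placeholder, not a recommendation), add a registry-mirrors entry to /etc/docker/daemon.json:

# /etc/docker/daemon.json  (the mirror URL is a placeholder)
{
  "registry-mirrors": ["https://your-mirror-address"]
}

Then restart the Docker daemon so the change takes effect:

sudo systemctl restart docker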
After the download is complete, view the image:
docker images
Logstash configuration
Create a new beats-input.conf under the /usr/config/logstash directory for log input:
input {
  beats {
    port => 5044
  }
}
Create a new output.conf for the output from Logstash to ElasticSearch:
output {
  elasticsearch {
    hosts => ["localhost"]
    manage_template => false
    index => "%{[@metadata][beat]}"
  }
}
Here index is the name of the index that the data is written to in ElasticSearch.
Run the container
After you have the image, you can start it directly:
docker run -d -p 5044:5044 -p 5601:5601 -p 9203:9200 \
  -v /var/data/elk:/var/lib/elasticsearch \
  -v /usr/config/logstash:/etc/logstash/conf.d \
  --name=elk sebp/elk
-d means run the container in the background.
-p means host port:container port, that is, a port used inside the container is mapped to a port on the host. The default ports of ElasticSearch are 9200 and 9300; since three ElasticSearch instances are already running on my machine, the mapped port has been changed here.
-v means host file or folder:container file or folder. Here the ElasticSearch data inside the container is mounted to /var/data/elk on the host so that data is not lost when the container restarts, and the Logstash configuration is mounted from the host's /usr/config/logstash directory.
--name gives the container a name, which makes it easier to operate on later.
If you have ever built ElasticSearch directly on a host, you will know that all kinds of errors can appear during setup; building ELK with Docker avoids them.
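One caveat worth knowing about: ElasticSearch requires the kernel setting vm.max_map_count to be at least 262144, so if the ElasticSearch inside the container refuses to start, raising that limit on the host usually fixes it:

# takes effect immediately
sudo sysctl -w vm.max_map_count=262144
# persist the setting across reboots
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf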
View the container after running:
docker ps
View the container log:
docker logs -f elk
Enter the container:
docker exec -it elk /bin/bash
Restart the container after modifying the configuration:
docker restart elk
View Kibana
Open http://my_host:5601/ in a browser
and you can see the Kibana interface. At this point there is no data in ElasticSearch yet, so Filebeat needs to be installed to collect data into ELK.
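You can also confirm that the ElasticSearch inside the container is reachable (assuming the 9203:9200 port mapping used above); its root endpoint returns a small JSON document with the cluster name and version:

curl http://localhost:9203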
Building Filebeat
Filebeat is used to collect data and report it to Logstash or ElasticSearch. Download Filebeat onto the server whose logs need to be collected, and decompress it to use it.
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.2.1-linux-x86_64.tar.gz
tar -zxvf filebeat-6.2.1-linux-x86_64.tar.gz
Modify the configuration file
Enter the filebeat directory and modify filebeat.yml:
filebeat.prospectors:
- type: log
  # must be set to true for this prospector to take effect
  enabled: true
  paths:
    # the log paths to collect
    - /var/log/*.log
  # optionally add a tag that can be filtered on later
  tags: ["my_tag"]
  # type, corresponding to the type in ElasticSearch
  document_type: my_type

setup.kibana:
  # the ip and port of Kibana, i.e. kibana_host:5601
  host: ""

output.logstash:
  # the ip and port of Logstash, i.e. logstash_host:5044
  hosts: [""]
  # must be set to true, otherwise the output will not take effect
  enabled: true

# To collect data from Filebeat directly into ElasticSearch instead, configure output.elasticsearch.
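Before starting Filebeat it can be worth validating this file; Filebeat 6.x provides a test subcommand for that (run from the unpacked filebeat directory):

# check that filebeat.yml is syntactically valid
./filebeat test config -c filebeat.yml
# check that the configured Logstash output is reachable
./filebeat test output -c filebeat.yml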
Run Filebeat
Run:
./filebeat -e -c filebeat.yml -d "publish"
At this point you can see that Filebeat sends the logs under the configured paths to Logstash; inside ELK, Logstash processes the data and then sends it on to ElasticSearch. But what we want is to analyze the data through ELK, so the data imported into ElasticSearch must be in JSON format.
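A quick way to check that data is actually arriving (again assuming the 9203:9200 mapping) is to list the indices in ElasticSearch; with the output.conf above, the index name comes from %{[@metadata][beat]}, which for Filebeat is normally just filebeat:

curl 'http://localhost:9203/_cat/indices?v'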
This is what a single log line of mine looked like before:
2019-10-22 10:44:03.441 INFO rmjk.interceptors.IPInterceptor Line:248 - {"clientType": "1", "deCode": "0fbd93a286533d071", "eaType": 2, "eaid": 191970823383420928, "ip": "xx.xx.xx.xx", "model": "HONOR STF-AL10", "osType": "9", "path": "/applicationEnter", "result": 5, "session": "ef0a5c4bca424194b29e2ff31632ee5c", "timestamp": 1571712242326, "uid": "130608957659402240", "v": "2.2.4"}
It is difficult to analyze after import. I first thought of using grok in Logstash's filter to process each log line into JSON before importing it into ElasticSearch, but because the parameters in my logs are not fixed, that turned out to be too difficult. Instead I switched to Logback, formatting the logs directly as JSON and then shipping them with Filebeat.
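For reference, a grok-based filter for the old log format might have looked roughly like the sketch below (the field names are illustrative, not a configuration I actually used): the fixed prefix is parsed with grok and the trailing JSON payload with the json filter.

filter {
  grok {
    # timestamp, level, class and line number, then the JSON body
    match => { "message" => "%{TIMESTAMP_ISO8601:logTime} %{LOGLEVEL:level} %{JAVACLASS:class} Line:%{NUMBER:line} - %{GREEDYDATA:body}" }
  }
  json {
    # parse the JSON part into separate fields
    source => "body"
  }
}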
Logback configuration
My project uses Spring Boot; add the dependency to the project:
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>5.2</version>
</dependency>
Then add logback.xml under the resources directory of the project:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <contextName>Service</contextName>
    <!-- The appender names and class attributes below are the usual Logback ones;
         adjust them to your own project as needed. -->

    <!-- ERROR log, rolled daily and at 2MB -->
    <appender name="ERROR_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${LOG_PATH}/${APPDIR}/log_error.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
            <fileNamePattern>${LOG_PATH}/${APPDIR}/error/log-error-%d{yyyy-MM-dd}.%i.log</fileNamePattern>
            <maxFileSize>2MB</maxFileSize>
        </rollingPolicy>
        <append>true</append>
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} %-5level %logger Line:%-3L - %msg%n</pattern>
            <charset>utf-8</charset>
        </encoder>
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>error</level>
            <onMatch>ACCEPT</onMatch>
            <onMismatch>DENY</onMismatch>
        </filter>
    </appender>

    <!-- The WARN and INFO appenders have the same structure, writing to
         ${LOG_PATH}/${APPDIR}/log_warn.log and ${LOG_PATH}/${APPDIR}/log_info.log
         with level filters warn / info respectively. -->

    <!-- JSON log for the IP interceptor: every MDC field becomes a JSON field -->
    <appender name="IP_INTERCEPTOR_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${LOG_PATH}/${APPDIR}/log_IPInterceptor.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${LOG_PATH}/${APPDIR}/log_IPInterceptor.%d{yyyy-MM-dd}.log</fileNamePattern>
            <maxHistory>10</maxHistory>
        </rollingPolicy>
        <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
            <providers>
                <pattern>
                    <pattern>
                        {
                        "timestamp": "%date{ISO8601}",
                        "uid": "%mdc{uid}",
                        "requestIp": "%mdc{ip}",
                        "id": "%mdc{id}",
                        "clientType": "%mdc{clientType}",
                        "v": "%mdc{v}",
                        "deCode": "%mdc{deCode}",
                        "dataId": "%mdc{dataId}",
                        "dataType": "%mdc{dataType}",
                        "vid": "%mdc{vid}",
                        "did": "%mdc{did}",
                        "cid": "%mdc{cid}",
                        "tagId": "%mdc{tagId}"
                        }
                    </pattern>
                </pattern>
            </providers>
        </encoder>
    </appender>

    <!-- console output -->
    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>${CONSOLE_LOG_PATTERN}</pattern>
            <charset>utf-8</charset>
        </encoder>
    </appender>

    <!-- the logger obtained with LoggerFactory.getLogger("IPInterceptor") writes to the JSON file -->
    <logger name="IPInterceptor" level="info" additivity="false">
        <appender-ref ref="IP_INTERCEPTOR_FILE"/>
    </logger>

    <root level="debug">
        <appender-ref ref="CONSOLE"/>
        <appender-ref ref="ERROR_FILE"/>
    </root>
</configuration>
The key points are:
Introduce the slf4j Logger in the classes that need to print logs:
private static final Logger LOG = LoggerFactory.getLogger("IPInterceptor");
Put the information you need to print in the MDC:
MDC.put("ip", ipAddress);
MDC.put("path", servletPath);
MDC.put("uid", paramMap.get("uid") == null ? "" : paramMap.get("uid").toString());
If LOG.info("msg") is used at this point, the printed content is placed into the message field of the JSON log.
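Putting the pieces together, here is a minimal sketch of how this can look inside a request interceptor (the class and method below are illustrative, not the article's actual code):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class IpLoggingSketch {
    // same logger name as the one referenced in logback.xml
    private static final Logger LOG = LoggerFactory.getLogger("IPInterceptor");

    public void logRequest(String ipAddress, String servletPath, String uid) {
        try {
            // values put into the MDC are picked up by the %mdc{...} placeholders in logback.xml
            MDC.put("ip", ipAddress);
            MDC.put("path", servletPath);
            MDC.put("uid", uid == null ? "" : uid);
            LOG.info("applicationEnter");
        } finally {
            // clear the MDC so values do not leak into other requests handled by the same thread
            MDC.clear();
        }
    }
}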
Modify Logstash configuration
Modify the beats-input.conf under the /usr/config/logstash directory:
input {
  beats {
    port => 5044
    codec => "json"
  }
}
Only the line codec => "json" is added, so that Logstash parses the input as JSON.
Since the configuration has been modified, restart ELK:
docker restart elk
In this way, when our logs are generated, they can be imported into ELK with Filebeat, and log analysis can then be done through Kibana.
The above is the whole content of this article; I hope it is helpful to your study.