2025-01-16 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/01 Report--
This article explains how to do open source log management with Logstash. The method described is simple, fast, and practical; if you are interested, read on and follow along.
Logstash is an open source data collection engine with real-time pipelining capabilities. It can dynamically unify data from different sources and normalize it to the output destination of your choice, and it provides a large number of plug-ins that help parse, enrich, transform, and buffer data of any type.
I. How It Works
Input
Input extracts data from files, storage systems, and databases. From there, events take one of two paths: they are handed to Filter for parsing and pruning, or passed directly to Output.
Filter
Filter transforms and parses data dynamically, letting you filter and trim event data in custom ways.
Output
With a wide range of output plug-ins, Output can send data wherever you specify, flexibly unlocking many downstream use cases.
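The three stages described above can be sketched as a toy pipeline in Python. This is purely illustrative (Logstash itself is a Java/JRuby application), and the `length` field added by the filter stage is an invented example, not something Logstash adds by default:

```python
def input_stage(path):
    """Input: read raw events from a file (the role of the 'file' input plug-in)."""
    with open(path) as f:
        for line in f:
            yield {"message": line.rstrip("\n")}

def filter_stage(events):
    """Filter: parse and enrich each event in a custom way."""
    for event in events:
        event["length"] = len(event["message"])  # invented enrichment for illustration
        yield event

def output_stage(events):
    """Output: deliver events downstream (a list here; stdout or Elasticsearch in Logstash)."""
    return list(events)
```

Events flow through the stages lazily, one at a time, which mirrors Logstash's streaming pipeline: `output_stage(filter_stage(input_stage("/var/log/httpd/access_log")))`.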
II. Install and Use
1. Install
wget https://artifacts.elastic.co/downloads/logstash/logstash-6.0.1.rpm
yum install -y ./logstash-6.0.1.rpm
2. Logstash configuration file
vim /etc/logstash/logstash.yml
path.data: /var/lib/logstash              # data storage path
path.config: /etc/logstash/conf.d/*.conf  # configuration files for the input, output, and filter plug-ins
path.logs: /var/log/logstash              # log storage path
3. JVM configuration file
Logstash is a Java-based program that runs in the JVM, which can be tuned through jvm.options: the minimum and maximum heap size, the garbage collector, and so on. Only the two most commonly used settings are shown here.
The JVM heap allocation should be neither too large nor too small: too large slows down the operating system; too small and Logstash may fail to start.
vim /etc/logstash/jvm.options   # Logstash JVM settings
-Xms256m   # minimum heap used by Logstash
-Xmx1g     # maximum heap used by Logstash
4. The simplest log collection configuration
Install httpd for testing and configure Logstash to collect Apache's access_log file.
yum install httpd
echo "Hello world" > /var/www/html/index.html   # install httpd and create a home page for testing
vim /etc/logstash/conf.d/test.conf
input {                                  # use a file as the data input
    file {
        path => ['/var/log/httpd/access_log']   # set the path to read data from
        start_position => "beginning"           # beginning reads from the start of the file; end reads from the end
    }
}
output {                                 # set the output destination
    stdout {
        codec => rubydebug               # print to the screen
    }
}
5. Test the configuration file
The logstash command ships with the package but is not in the environment's PATH, so it must be invoked by absolute path.
/usr/share/logstash/bin/logstash -t -f /etc/logstash/conf.d/test.conf   # test the configuration file; -t must precede -f
Configuration OK   # indicates the test passed
6. Start Logstash
After running logstash, do not close the current session; call it session 1, and open a new window, session 2.
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/test.conf
After startup, test with the curl command in session 2:
curl 172.18.68.14
Then switch back to session 1 to see the output:
{"@ version" = > "1", "host" = > "logstash.shuaiguoxia.com", "path" = > "/ var/log/httpd/access_log", "@ timestamp" = > 2017-12-10T14:07:07.682Z "message" = > "172.18.68.14-[10/Dec/2017:22:04:44 + 0800]\" GET / HTTP/1.1\ "200 12\" -\ "\" curl/7.29.0\ ""}
At this point the simplest Logstash configuration is complete; the collected events are output directly, without any filtering or trimming.
III. Elasticsearch and Logstash
In the configuration above, Logstash extracts data from the log file and outputs it to the screen. In production, the extracted data is usually filtered and then output to Elasticsearch. This section explains how to combine Logstash with Elasticsearch.
Logstash extracts httpd's access_log file, filters (structures) it, and outputs it to an Elasticsearch cluster, where the extracted data can be inspected with the Head plug-in. (For the Elasticsearch cluster and the Head plug-in, see the previous two articles.)
Configure Logstash
vim /etc/logstash/conf.d/test.conf
input {
    file {
        path => ['/var/log/httpd/access_log']
        start_position => "beginning"
    }
}
filter {
    grok {
        match => { "message" => "%{COMBINEDAPACHELOG}" }
        remove_field => "message"
    }
}
output {
    elasticsearch {
        hosts => ["http://172.18.68.11:9200", "http://172.18.68.12:9200", "http://172.18.68.13:9200"]
        index => "logstash-%{+YYYY.MM.dd}"
        action => "index"
        document_type => "apache_logs"
    }
}
Start Logstash
/usr/share/logstash/bin/logstash -t -f /etc/logstash/conf.d/test.conf   # test the configuration file
Configuration OK
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/test.conf      # start Logstash
Test
On 172.18.68.14, the host running Logstash, execute each of these 10 times:
curl 127.0.0.1
curl 172.18.68.14
Validate data
Use a browser to visit 172.18.68.11:9100 (the host where the Elasticsearch Head plug-in is installed, as mentioned in the previous article).
Select today's date and you can see all the data accessed in one day.
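Selecting today's date works because the `index => "logstash-%{+YYYY.MM.dd}"` setting creates one index per day, named from each event's @timestamp. A minimal Python sketch of that naming logic (Logstash does this internally via its sprintf date format; this is only an illustration):

```python
from datetime import datetime, timezone

def daily_index(ts, prefix="logstash-"):
    # Mimics index => "logstash-%{+YYYY.MM.dd}": the event's @timestamp
    # selects the target index, yielding one index per day.
    return prefix + ts.strftime("%Y.%m.%d")

# The @timestamp from the earlier rubydebug output lands in:
name = daily_index(datetime(2017, 12, 10, 14, 7, 7, tzinfo=timezone.utc))
# name == "logstash-2017.12.10"
```

Daily indices make it cheap to query or delete one day's logs at a time, which is why this naming scheme is the Logstash default.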
IV. Monitoring Other Services
Monitoring Nginx Logs
Only the filter block is listed; for input and output, refer to the earlier configuration.
filter {
    grok {
        match => { "message" => "%{HTTPD_COMBINEDLOG} \"%{DATA:realclient}\"" }
        remove_field => "message"
    }
    date {
        match => ["timestamp", "dd/MMM/YYYY:H:m:s Z"]
        remove_field => "timestamp"
    }
}
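The extra \"%{DATA:realclient}\" in this pattern captures one additional quoted field that the Nginx log format appends after the standard combined fields, typically the real client address forwarded by a proxy. A rough Python equivalent of that trailing capture (the address 10.0.0.99 is an invented sample value):

```python
import re

# Illustrative only: grab a trailing quoted field appended after the standard
# combined-log fields, the way %{DATA:realclient} does in the grok pattern.
TRAILING_QUOTED = re.compile(r'"(?P<realclient>[^"]*)"\s*$')

line = ('172.18.68.14 - - [10/Dec/2017:22:04:44 +0800] '
        '"GET / HTTP/1.1" 200 12 "-" "curl/7.29.0" "10.0.0.99"')
realclient = TRAILING_QUOTED.search(line).group("realclient")
```

The date filter then parses the bracketed request timestamp into @timestamp and drops the now-redundant raw field, so events are indexed by when the request happened rather than when Logstash read the line.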
Monitoring Tomcat
Only the filter block is listed; for input and output, refer to the earlier configuration.
filter {
    grok {
        match => { "message" => "%{HTTPD_COMMONLOG}" }
        remove_field => "message"
    }
    date {
        match => ["timestamp", "dd/MMM/YYYY:H:m:s Z"]
        remove_field => "timestamp"
    }
}
V. Filebeat
So far we have installed Logstash on each node and shipped data to Elasticsearch. But Logstash is Java-based and must run in a JVM, making it a heavyweight collection tool; using Logstash on a node just to collect logs is overkill. Instead, a lightweight collector, Filebeat, can gather the log information and send it to Logstash for filtering, which then forwards it to Elasticsearch. That will be explained in the next article; for now, here is the architecture diagram.
By now you should have a deeper understanding of open source log management with Logstash; go ahead and try it in practice. Follow us for more related content and to keep learning.
© 2024 shulou.com SLNews company. All rights reserved.