2025-03-06 Update — SLTechnology News&Howtos
This article introduces the basic use of Logstash. It is meant as a practical reference; I hope you learn something useful from it.
Logstash is a data collector: it collects and parses data from various formats and channels, then formats it and outputs it to Elasticsearch. Finally, Kibana provides a friendly Web interface for aggregating, analyzing, and searching the collected data.
I. Principles
Input
Extracts data from files, storage, and databases. An input event then goes one of two ways: it is handed to Filter for filtering and pruning, or passed directly to Output.
Filter
Dynamically transforms and parses data. Data can be filtered and trimmed in custom ways.
Output
Offers a wide range of output options, so data can be sent wherever you specify, which unlocks many downstream use cases.
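Putting the three stages together, every pipeline definition follows the same skeleton. Below is a minimal sketch using the standard stdin/stdout plug-ins, chosen here purely for illustration (the article's real configuration appears in section II):

```
input {
    stdin { }                                       # read events from standard input
}
filter {
    mutate { add_field => { "stage" => "demo" } }   # example transformation
}
output {
    stdout { codec => rubydebug }                   # print structured events to the screen
}
```

The filter block is optional; an input can feed an output directly.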
II. Installation and Use
1. Installation
wget https://artifacts.elastic.co/downloads/logstash/logstash-6.0.1.rpm
yum install -y ./logstash-6.0.1.rpm
2. Logstash configuration file
vim /etc/logstash/logstash.yml
path.data: /var/lib/logstash              # data storage path
path.config: /etc/logstash/conf.d/*.conf  # plug-in configuration files (input, output, filter, etc.)
path.logs: /var/log/logstash              # log storage path
3. Logstash JVM configuration
Logstash is a Java-based program that runs in a JVM, which can be tuned through jvm.options: maximum and minimum heap memory, the garbage collector, and so on. Only the two most commonly used settings are shown here. The JVM heap allocation should be neither too large nor too small: too large slows down the operating system, while too small may prevent Logstash from starting.
vim /etc/logstash/jvm.options   # JVM settings for Logstash
-Xms256m   # minimum heap memory
-Xmx1g     # maximum heap memory
4. The simplest log collection configuration
Install httpd for testing and configure Logstash to collect Apache's access_log file.
yum install httpd
echo "Hello world" > /var/www/html/index.html   # install httpd and create a home page for testing

vim /etc/logstash/conf.d/test.conf
input {
    file {                                        # use a file as the data input
        path => ['/var/log/httpd/access_log']     # path to read data from
        start_position => "beginning"             # read from the beginning of the file; "end" reads from the end
    }
}
output {                                          # set the output destination
    stdout {
        codec => rubydebug                        # print to the screen
    }
}
5. Test the configuration file
logstash is a bundled command but is not on the PATH, so it must be invoked by its absolute path.
/usr/share/logstash/bin/logstash -t -f /etc/logstash/conf.d/test.conf   # test the configuration file; -t must precede -f
Configuration OK   # indicates the test passed
6. Start logstash
After running Logstash, do not close the current session (call it session 1); open a new window (session 2).
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/test.conf
After startup, use the curl command in session 2 to test:
curl 172.18.68.14
Then switch back to session 1 and you can see the output:
{
      "@version" => "1",
          "host" => "logstash.shuaiguoxia.com",
          "path" => "/var/log/httpd/access_log",
    "@timestamp" => 2017-12-10T14:07:07.682Z,
       "message" => "172.18.68.14 - - [10/Dec/2017:22:04:44 +0800] \"GET / HTTP/1.1\" 200 12 \"-\" \"curl/7.29.0\""
}
At this point the simplest Logstash configuration is done; the collected data is output directly, without any filtering or trimming.
III. Elasticsearch and Logstash
In the configuration above, Logstash extracts data from a log file and prints it to the screen. In production, the extracted data is usually filtered and then output to Elasticsearch. Let's walk through combining Elasticsearch and Logstash.
Logstash extracts httpd's access_log file, filters (structures) it, and outputs it to an Elasticsearch cluster, where the extracted data can be viewed with the Head plug-in. (For the Elasticsearch cluster and the Head plug-in, see the previous two articles.)
Configure Logstash
vim /etc/logstash/conf.d/test.conf
input {
    file {
        path => ['/var/log/httpd/access_log']
        start_position => "beginning"
    }
}
filter {
    grok {
        match => { "message" => "%{COMBINEDAPACHELOG}" }
        remove_field => "message"
    }
}
output {
    elasticsearch {
        hosts => ["http://172.18.68.11:9200", "http://172.18.68.12:9200", "http://172.18.68.13:9200"]
        index => "logstash-%{+YYYY.MM.dd}"
        action => "index"
        document_type => "apache_logs"
    }
}
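To see what %{COMBINEDAPACHELOG} does to an event, here is a rough Python stand-in (an illustration only, not Logstash's actual grok engine; the regex merely approximates the real composed pattern):

```python
import re

# Simplified stand-in for Logstash's %{COMBINEDAPACHELOG} grok pattern.
# The real pattern is composed from many named sub-patterns; this regex
# only approximates it for a well-formed Apache "combined" log line.
COMBINED = re.compile(
    r'(?P<clientip>\S+) (?P<ident>\S+) (?P<auth>\S+) '
    r'\[(?P<timestamp>[^\]]+)\] '
    r'"(?P<verb>\S+) (?P<request>\S+) HTTP/(?P<httpversion>\S+)" '
    r'(?P<response>\d{3}) (?P<bytes>\d+|-) '
    r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"'
)

line = ('172.18.68.14 - - [10/Dec/2017:22:04:44 +0800] '
        '"GET / HTTP/1.1" 200 12 "-" "curl/7.29.0"')

# Extract named fields, just as grok turns the raw line into structured fields.
fields = COMBINED.match(line).groupdict()
print(fields["clientip"], fields["verb"], fields["response"])
```

The grok filter performs exactly this kind of field extraction, after which remove_field drops the now-redundant raw message.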
Start Logstash
/usr/share/logstash/bin/logstash -t -f /etc/logstash/conf.d/test.conf   # test the configuration file
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/test.conf      # start Logstash
Test
172.18.68.14 is the address of the Logstash host. Execute each command 10 times:
curl 127.0.0.1
curl 172.18.68.14
Validate data
Use a browser to visit 172.18.68.11:9100 (the address of the Head plug-in on the Elasticsearch host, as mentioned in the previous article).
Select today's date and you can see all the data accessed in one day.
IV. Monitoring Other Services
Monitoring Nginx Logs
Only the filter block is listed; for input and output, refer to the previous configuration.
filter {
    grok {
        match => { "message" => "%{HTTPD_COMBINEDLOG} \"%{DATA:realclient}\"" }
        remove_field => "message"
    }
    date {
        match => ["timestamp", "dd/MMM/YYYY:H:m:s Z"]
        remove_field => "timestamp"
    }
}
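The date filter's pattern "dd/MMM/YYYY:H:m:s Z" is Joda-style notation. For intuition, the equivalent parse in Python (an illustration only, using the timestamp from the earlier sample event) looks like:

```python
from datetime import datetime

# The Logstash date filter pattern "dd/MMM/YYYY:H:m:s Z" (Joda-style)
# corresponds to this strptime format string in Python.
ts = datetime.strptime("10/Dec/2017:22:04:44 +0800", "%d/%b/%Y:%H:%M:%S %z")
print(ts.isoformat())  # timezone-aware datetime, offset +08:00
```

The date filter sets the event's @timestamp from the parsed value, and remove_field then drops the original string field.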
Monitoring Tomcat
Only the filter block is listed; for input and output, refer to the previous configuration.
filter {
    grok {
        match => { "message" => "%{HTTPD_COMMONLOG}" }
        remove_field => "message"
    }
    date {
        match => ["timestamp", "dd/MMM/YYYY:H:m:s Z"]
        remove_field => "timestamp"
    }
}

V. Filebeat
So far we have installed Logstash on the node and sent data to Elasticsearch. But Logstash is developed in Java and must run in a JVM, making it a heavyweight collection tool; for a node that only collects logs, Logstash is too heavy. Instead, a lightweight log collector, Filebeat, can gather the log information and hand it to Logstash for filtering before it reaches Elasticsearch. This will be explained in the next article.
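As a preview, a minimal Filebeat configuration shipping the same Apache log to Logstash might look like the sketch below. Assumptions, not from this article: Filebeat 6.3+ field names (earlier 6.x used filebeat.prospectors), and a Logstash beats input listening on port 5044 on the Logstash host.

```
filebeat.inputs:
  - type: log
    paths:
      - /var/log/httpd/access_log   # same Apache log as above
output.logstash:
  hosts: ["172.18.68.14:5044"]      # assumed beats input on the Logstash host
```

Filebeat tails the file and forwards raw lines; the grok/date filtering still happens on the Logstash side.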
Thank you for reading. I hope this introduction to Logstash has been helpful.