
How many ways can logstash use the date plug-in to handle time?


How many ways can logstash use the date plug-in to handle time? To answer this question, this article sums up the common approaches, hoping to help readers facing the same problem find a simple, workable solution.

1. Customize the time format directly in the application's configuration file.

Take the log time configuration in the Tomcat configuration file as an example: a custom time pattern is set there, and the timestamps in the output log are then written in that format.

Then configure the matching side in logstash accordingly, for example as sketched below.
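A minimal sketch, assuming the server writes timestamps as "yyyy-MM-dd HH:mm:ss" at the start of each line; the field name access_time and the surrounding grok pattern are illustrative choices, not taken from the original configuration:

filter {
  grok {
    # capture a leading timestamp such as "2018-02-07 16:24:19" into access_time
    match => { "message" => "%{TIMESTAMP_ISO8601:access_time}%{GREEDYDATA:rest}" }
  }
  date {
    # parse the custom format and write the result into @timestamp
    match  => ["access_time", "yyyy-MM-dd HH:mm:ss"]
    target => "@timestamp"
  }
}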

At this point, logstash will not report the "_dateparsefailure" error.

This approach is easiest to set up in nginx, Apache, and other web servers, and it also makes the logs more convenient to analyze.

2. Time wrapped in square brackets (in fact, the same idea as above)

The log output itself looks like this:

[07/Feb/2018:16:24:19 +0800]

that is, with a pair of square brackets around the time.

Then define it like this in the grok plug-in:

\[%{HTTPDATE:timestamp}\]

The date plug-in can then convert it directly, as follows:
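A minimal sketch of that date filter, assuming the grok line above stored the bracketed time in the timestamp field; the Joda pattern below is the standard one for HTTPDATE-style times:

date {
  # HTTPDATE looks like 07/Feb/2018:16:24:19 +0800
  match  => ["timestamp", "dd/MMM/yyyy:HH:mm:ss Z"]
  target => "@timestamp"
}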

The final effect is that the time parses correctly; configured this way, nothing goes wrong.

3. ISO8601 form

The raw log line in the log file looks like this:

2019-03-19 13:08:07.782

The key point is the trailing ".782", which is the milliseconds part.

Then the matching rules can be defined in the grok plug-in as follows:
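A minimal sketch of such a grok rule, assuming the time sits at the start of the line and using access_time as the field name so it lines up with the date filter below; both choices are illustrative:

grok {
  # capture "2019-03-19 13:08:07.782" (including the milliseconds) into access_time
  match => { "message" => "%{TIMESTAMP_ISO8601:access_time}%{GREEDYDATA:rest}" }
}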

At this point the date plug-in can be defined as follows:

date { match => ["access_time", "MMM d HH:mm:ss", "MMM dd HH:mm:ss", "ISO8601"] }

The form above is correct as written; just copy and paste it into the configuration file.

The final match succeeds, and the "_dateparsefailure" error is not reported this way.

Finally, a table of time matching rules is attached, which can be used as a reference.

4. Convert the time based on a UNIX timestamp.

When collecting the MySQL slow query log, the event time can in some cases only be determined from a numeric timestamp, so a conversion is required: the UNIX-style timestamp has to be parsed into the event time. For example:

The time in the MySQL slow query log looks like the following.
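Here is a sketch of a typical entry (the values are made up for illustration); note that the event time is carried as a numeric UNIX timestamp on the SET line:

# Time: 2018-02-09T10:57:42.428076+08:00
# Query_time: 3.000245  Lock_time: 0.000000 Rows_sent: 1  Rows_examined: 0
SET timestamp=1518145062;
select sleep(3);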

So the date plug-in in logstash's configuration file is written like this:
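A minimal sketch, assuming the numeric timestamp has already been captured into the timestamp_mysql field by grok (as shown just below):

date {
  # "UNIX" parses seconds-since-epoch values such as 1518145062
  match  => ["timestamp_mysql", "UNIX"]
  target => "@timestamp"
}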

In this way, the match will be successful.

The "timestamp_mysql" above is my matching time assignment in the grok plug-in, such as:

%{NUMBER:timestamp_mysql}

5. ISO8601 form with timezone

Time log:

2018-02-09T10:57:42+08:00

You can write this in grok at this point:

grok { match => { "message" => "%{TIMESTAMP_ISO8601:localtime}" } }

There are two ways to write the date filter that converts this time into @timestamp.

Date {match = > ["localtime", "yyyy-MM-dd'T'HH:mm:ssZZ"] target = > "@ timestamp"}

Or

Date {match = > ["localtime", "ISO8601"]}

Either form matches the ISO8601 time format.
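Putting the pieces together, a minimal end-to-end filter block for this case might look like the following; the grok pattern and the optional removal of the intermediate field are illustrative choices:

filter {
  grok {
    # capture "2018-02-09T10:57:42+08:00" into localtime
    match => { "message" => "%{TIMESTAMP_ISO8601:localtime}%{GREEDYDATA:rest}" }
  }
  date {
    # either "ISO8601" or "yyyy-MM-dd'T'HH:mm:ssZZ" works here
    match  => ["localtime", "ISO8601"]
    target => "@timestamp"
  }
  mutate {
    # optional: drop the intermediate field once @timestamp is set
    remove_field => ["localtime"]
  }
}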

After reading the above, have you mastered how logstash uses the date plug-in to deal with time? If you want to learn more skills or dig deeper, you are welcome to follow the industry information channel. Thank you for reading!
