Abstract
A brief description of the common Logstash plug-ins, with simple use cases.
One: Basic operations
It is recommended to use supervisor to manage the components of the ELK stack, which makes unified management easier.
Installation instructions are available at https://www.aolens.cn/?p=809
A common configuration:
[program:logstash]
command=/opt/logstash/bin/logstash -f /opt/logstash/conf/index.conf
numprocs=1                                         ; number of processes to start
directory=/opt/logstash
user=root                                          ; run as this user
stdout_logfile=/opt/logstash/logs/logstash.log
stdout_logfile_maxbytes=1MB                        ; max size of each log file
stdout_logfile_backups=10                          ; keep 10 rotated log files
stderr_logfile=/opt/logstash/logs/logstash_err.log
stderr_logfile_maxbytes=1MB
stderr_logfile_backups=10
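Once this program block is in place, supervisor can pick it up and control the process. A minimal sketch of the typical workflow, using standard supervisorctl commands:
supervisorctl reread            # re-read configuration files for new or changed programs
supervisorctl update            # apply the changes and start the logstash program
supervisorctl status logstash   # check whether logstash is running
supervisorctl restart logstash  # restart after editing the pipeline configuration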
Operation parameters:
Start the logstash service (commonly daemonized under supervisor):
./bin/logstash -f /etc/logstash/conf.d/* -t                      # -t checks whether the configuration files are valid
./bin/logstash -f conf.d/nginx.conf -w 5 -l /var/log/logstash/   # -w sets the number of workers, -l sets the log directory
Two: Configuration syntax
1. Sections
Logstash uses {} to define sections. A section can contain multiple plug-in blocks, and key-value pairs are defined inside each plug-in block.
Eg:
input {    # input data
  file {
    path => ["/var/log/messages", "/var/log/*.log"]
    type => "system"
    start_position => "beginning"
  }
}
filter {   # data filtering and processing
  if [type] == "system" {
    grok {
      match => ["message", "%{COMBINEDAPACHELOG}"]
    }
  }
}
output {   # processed data output
  stdout {
    codec => rubydebug
  }
}
2. Data types
String: an ordinary string.
name => "Hello world"
name => 'It\'s a beautiful day'
Array: an array can hold one or more string values.
path => ["/var/log/messages", "/var/log/*.log"]
path => "/data/mysql/mysql.log"
Hash: key-value pairs; note that multiple key-value pairs are separated by whitespace, not commas.
match => {
  "field1" => "value1"
  "field2" => "value2"
  ...
}
Codec: names the codec used to encode or decode the data. Usable in the input and output sections; convenient for data processing.
codec => "json"
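Codecs can also take parameters. A minimal sketch using the standard multiline codec (shipped with Logstash) to merge indented continuation lines into the previous event:
input {
  stdin {
    codec => multiline {
      pattern => "^\s"      # lines starting with whitespace...
      what => "previous"    # ...are appended to the previous line
    }
  }
}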
Number: must be a valid numeric value, floating point or integer.
port => 33
Boolean: must be either true or false.
ssl_enable => true
Bytes: a string that specifies a size. The default unit is bytes.
my_bytes => "1113"     # 1113 bytes
my_bytes => "10MiB"    # 10485760 bytes; binary units (Ki, Mi, Gi, Ti, Pi, Ei, Zi, Yi) are base 1024
my_bytes => "100kib"   # 102400 bytes
my_bytes => "180 mb"   # 180000000 bytes; SI units are base 1000
Password: a string with a single value that is not logged or printed.
my_password => "password"
Path: a valid operating system path.
my_path => "/tmp/logstash"
3. Field references
To use the value of a field in the Logstash configuration, write the field name in square brackets []; nested fields chain brackets together. Any field produced by the input can be referenced, as the sketch after the example shows.
Eg:
[geoip][location][-1]
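Field references can also be embedded in strings via the sprintf format %{...}. A minimal sketch (the path is illustrative):
output {
  file {
    # %{type} expands to the event's type field; %{+yyyy.MM.dd} formats the event timestamp
    path => "/var/log/logstash/%{type}.%{+yyyy.MM.dd}.log"
  }
}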
4. Conditionals
Operators supported in expressions (a worked example follows the list):
== (equal to), != (not equal to), < (less than), > (greater than), <= (less than or equal to), >= (greater than or equal to)
=~ (matches regex), !~ (does not match regex)
in (contained in), not in (not contained in)
and, or, nand (not and), xor (exclusive or)
() (compound expression), !() (negate the result of a compound expression)
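A minimal sketch combining several of these operators (the status field and the nginx type are illustrative):
filter {
  # tag nginx events whose HTTP status code is in the 5xx range
  if [type] == "nginx" and [status] =~ /^5\d\d/ {
    mutate {
      add_tag => ["server_error"]
    }
  }
}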
Three: Common plug-ins
1. Plug-in management
./bin/plugin -h
    install
    uninstall
    update
    list
Eg:
bin/plugin install logstash-output-webhdfs
bin/plugin update logstash-input-tcp
2. Common plug-in types: input, output, filter, codec
2.1 Input plug-ins
stdin: standard input, commonly used for testing
input {
  stdin {
    type => "string"
    tags => ["add"]
    codec => "plain"
  }
}
file: reads files from the file system, similar to tail -f on linux. The most commonly used input.
input {
  file {
    path => ["/var/log/*.log", "/var/log/messages"]   # logstash only supports absolute file paths
    type => "system"          # type records the file type; the field is global, so other plug-ins can reference it
    start_position => "beginning"
  }
}
redis: reads from a redis server; both the redis channel and the redis list types are used.
input {
  redis {
    data_type => "list"
    key => "logstash-nginx"
    host => "192.168.1.250"
    port => 6379
    db => 1
    threads => 5
  }
}
Writing the source data into redis:
output {
  redis {
    host => "192.168.1.250"
    port => 6379
    db => 1
    data_type => "list"
    key => "logstash-nginx"
  }
}
TCP/UDP: network input
# nc 127.0.0.1 8888 < /var/log/nginx/access.json    # a json file can be fed in directly
# echo '{"name": "liuziping", "age": "18"}' | nc 127.0.0.1 8888
input {
  tcp {
    port => 8888      # the TCP port to listen on
    codec => "json"   # incoming data is json; the key-value structure is convenient to analyze
    mode => "server"
  }
}
syslog: listens for syslog messages on port 514 and parses them according to RFC3164.
input {
  syslog {
    port => "514"
  }
}
beats: receives events sent by Filebeat.
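A minimal sketch of the beats input (5044 is the conventional port):
input {
  beats {
    port => 5044    # filebeat ships events to this port
  }
}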
2.2 Output plug-ins
stdout: standard output
output {
  stdout {
    codec => rubydebug
    workers => 2
  }
}
file: saves events to a file
output {
  file {
    path => "/path/to/%{+yyyy/MM/dd/HH}/%{host}.log.gz"
    message_format => "%{message}"
    gzip => true
  }
}
elasticsearch: saves events into elasticsearch; the most important output.
output {
  elasticsearch {
    hosts => ["192.168.0.2:9200"]                # or, in older versions, cluster => "ClusterName"
    index => "logstash-%{type}-%{+YYYY.MM.dd}"   # index name; a unified format makes importing into kibana easier, and %{type} pulls in the type value set in the input section so logs of the same type share an index
    document_type => "nginx"
    workers => 1                                 # number of worker processes to start
    flush_size => 20000                          # buffer 20000 events, then send them to ES in one bulk request
    idle_flush_time => 10                        # if 20000 events have not accumulated within 10 seconds, send anyway (default 1s)
    template_overwrite => true
  }
}
redis: saving to redis was covered above under the redis input plug-in.
TCP: output over TCP
output {
  tcp {
    host => "192.168.0.2"
    port => 8888
    codec => json_lines
  }
}
email: sends mail
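A minimal sketch of the email output (the addresses, subject, and SMTP server are illustrative placeholders):
output {
  email {
    to => "ops@example.com"            # hypothetical recipient
    from => "logstash@example.com"     # hypothetical sender
    subject => "logstash alert from %{host}"
    body => "event message: %{message}"
    address => "smtp.example.com"      # hypothetical SMTP server
    port => 25
  }
}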
exec: runs a command for each event
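A minimal sketch of the exec output (the command is illustrative; field references are expanded per event):
output {
  exec {
    # append each event's message field to a file via the shell
    command => "echo '%{message}' >> /tmp/logstash-exec.log"
  }
}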