
Set up an ElasticSearch log analysis system for office environment


The plan is to bring the company's firewall, switches, servers (CentOS 7), VMware hosts, and Windows servers under monitoring, so the ELK journey begins here.

This article builds on the ELK stack. Even the tallest building rises from the ground: the initial setup is fairly simple, and the log analysis system will keep improving over time.

The overall data flows are as follows:

Hillstone: syslog → logstash → elasticsearch → kibana

H3C: syslog → logstash → elasticsearch → kibana

ESXi: syslog → logstash → elasticsearch → kibana

vCenter: syslog → logstash → elasticsearch → kibana

Windows server: winlogbeat → logstash → elasticsearch → kibana

Linux server: filebeat → elasticsearch → kibana

ELK nodes:

ELK1: 192.168.20.18:9200

ELK2: 192.168.20.19:9200

Planning:

Logstash: 192.168.20.18

Each service is received on its own port and tagged with its own type, and each type gets its own index.
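A minimal sketch of that scheme (the ports and type names are the ones used in the sections below; the single catch-all output here is only to illustrate the index-naming idea, the real per-type outputs appear in section 6):

input {
  udp   { port => 518  type => "hillstone" }   # Hillstone firewall
  udp   { port => 516  type => "h4c" }         # H3C switches
  udp   { port => 514  type => "vmware" }      # ESXi hosts
  udp   { port => 1514 type => "vcenter" }     # vCenter (VCSA)
  beats { port => 5044 type => "windows" }     # Winlogbeat
}
output {
  elasticsearch {
    hosts => "192.168.20.18:9200"
    index => "logstash-%{type}-%{+YYYY.MM.dd}"  # one index family per type
  }
}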

1 Hillstone part

1.1 Hillstone configuration

1.1.1 Configuration steps

This part is configured through the web interface; it can also be done from the command line. See the reference links for details.

In StoneOS, go to Log Management > Log Configuration > Log Manager and configure the log server:

Hostname: 192.168.20.18

Binding method: virtual router trust-vr

Protocol: UDP

Port: 518

// I run Logstash as root; a non-root account can only bind to ports above 1024.
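If you would rather not run Logstash as root, one common workaround (my own suggestion, not part of the original setup) is to listen on a high port and redirect the privileged syslog port to it with iptables:

# Devices keep sending to UDP 514; Logstash listens on 1514 instead (example values)
iptables -t nat -A PREROUTING -p udp --dport 514 -j REDIRECT --to-ports 1514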

1.1.2 reference links

Elk collects data center network device logs

Hillstone common configuration commands

1.2 Add a Logstash profile

1.2.1 Create a test configuration file

cat > /data/config/test-hillstone.config << EOF
input {
  udp {
    port => 518
    type => "Hillstone"
  }
}
output {
  stdout { codec => rubydebug }
}
EOF
logstash -f test-hillstone.config

1.2.2 Select one of the logs for analysis and debugging

Nov 29 17:24:52 1404726150004842(root) 44243624 Traffic@FLOW: SESSION: 10.6.2.43:49608->192.168.20.160:11800(TCP), application TCP-ANY, interface tunnel6, vr trust-vr, policy 1, user -@-, host -, send packets 1, send bytes 74, receive packets 1, receive bytes 110, start time 2019-11-29 17:14:40, close time 2019-11-29 17:14:49, session end, TCP RST\n\u0000

1.2.3 Analyze the log with grok

A first pass can be matched automatically with the grok debug site (https://grokdebug.herokuapp.com/discover?#) and then refined against the actual log format. There are plenty of similar cases online; borrow ideas from them first and then add your own. For a detailed explanation of grok, see https://coding.imooc.com/class/181.html; the teacher explains it very well, and free copies can be found on the 52pojie forum and bilibili.

1.2.4 Logstash integrated configuration

Only the session and NAT logs are handled here.

cat > /data/config/hillstone.config << EOF
input {
  udp {
    port => 518
    type => "hillstone"
  }
}
filter {
  grok {
    # Traffic log: session end, session start, SNAT, DNAT
    match => { "message" => [
      "%{SYSLOGTIMESTAMP:timestamp} %{BASE10NUM:serial}\(%{WORD:ROOT}\) %{DATA:logid} %{DATA:Sort}@%{DATA:Class}: %{DATA:module}: %{IPV4:srcip}:%{BASE10NUM:srcport}->%{IPV4:dstip}:%{WORD:dstport}\(%{DATA:protocol}\), application %{USER:app}, interface %{DATA:interface}, vr %{USER:vr}, policy %{DATA:policy}, user %{USERNAME:user}@%{DATA:AAAserver}, host %{USER:HOST}, send packets %{BASE10NUM:sendPackets}, send bytes %{BASE10NUM:sendBytes}, receive packets %{BASE10NUM:receivePackets}, receive bytes %{BASE10NUM:receiveBytes}, start time %{TIMESTAMP_ISO8601:startTime}, close time %{TIMESTAMP_ISO8601:closeTime}, session %{WORD:state}, %{GREEDYDATA:reason}",
      "%{SYSLOGTIMESTAMP:timestamp} %{BASE10NUM:serial}\(%{WORD:ROOT}\) %{DATA:logid} %{DATA:Sort}@%{DATA:Class}: %{DATA:module}: %{IPV4:srcip}:%{BASE10NUM:srcport}->%{IPV4:dstip}:%{WORD:dstport}\(%{DATA:protocol}\), interface %{DATA:interface}, vr %{DATA:vr}, policy %{DATA:policy}, user %{USERNAME:user}@%{DATA:AAAserver}, host %{USER:HOST}, session %{WORD:state}%{GREEDYDATA:reason}",
      "%{SYSLOGTIMESTAMP:timestamp} %{BASE10NUM:serial}\(%{WORD:ROOT}\) %{DATA:logid} %{DATA:Sort}@%{DATA:Class}: %{DATA:module}: %{IPV4:srcip}:%{BASE10NUM:srcport}->%{IPV4:dstip}:%{WORD:dstport}\(%{DATA:protocol}\), %{WORD:state} to %{IPV4:snatip}:%{BASE10NUM:snatport}, vr %{DATA:vr}, user %{USERNAME:user}@%{DATA:AAAserver}, host %{DATA:HOST}, rule %{BASE10NUM:rule}",
      "%{SYSLOGTIMESTAMP:timestamp} %{BASE10NUM:serial}\(%{WORD:ROOT}\) %{DATA:logid} %{DATA:Sort}@%{DATA:Class}: %{DATA:module}: %{IPV4:srcip}:%{BASE10NUM:srcport}->%{IPV4:dstip}:%{WORD:dstport}\(%{DATA:protocol}\), %{WORD:state} to %{IPV4:dnatip}:%{BASE10NUM:dnatport}, vr %{DATA:vr}, user %{USERNAME:user}@%{DATA:AAAserver}, host %{DATA:HOST}, rule %{BASE10NUM:rule}"
    ] }
  }
  mutate {
    lowercase => [ "module" ]
    remove_field => [ "host", "message", "ROOT", "HOST", "serial", "syslog_pri", "timestamp", "mac", "AAAserver", "user" ]
  }
}
output {
  elasticsearch {
    hosts => "192.168.20.18:9200"    # elasticsearch service address
    index => "logstash-hillstone-%{module}-%{state}-%{+YYYY.MM.dd}"
  }
}
EOF

1.2.5 Reference files

Logstash configuration reference in hillstone

Elk collects data center network device logs

Hillstone Logstash configuration process

ELK from beginner to practice
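With the full configuration running, a quick sanity check is to list the matching indices in Elasticsearch:

curl 'http://192.168.20.18:9200/_cat/indices/logstash-hillstone-*?v'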

2 H3C switch part

2.1 H3C syslog forwarding

2.1.1 H3C switch configuration

This part is configured on the command line; see the reference links for details.

Set the switch clock correctly:

clock datetime hh:mm:ss year/month/day
save force

Set up syslog forwarding on the switch:

system-view
info-center enable                                          // enable the info-center
info-center loghost 192.168.20.18 port 516 facility local8  // set the log host, port and facility
info-center source default loghost level informational      // set the log level
save force

2.1.2 Reference links

H3C setting time

H3C Web log forwarding

H3C configure log host
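Before adding the Logstash side, it helps to confirm that the switch really is sending syslog packets to the collector; a quick capture on the ELK host (assuming tcpdump is installed) is enough:

tcpdump -n -i any udp port 516 -c 5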

2.2 Add a Logstash profile

2.2.1 Create a test configuration file

cat > /data/config/test-h4c.config << EOF
input {
  udp {
    port => 516
    type => "h4c"
  }
}
output {
  stdout { codec => rubydebug }
}
EOF

2.2.2 Select a log for analysis and debugging

Nov 30 16:27:23 1404726150004842(root) 44243622 Traffic@FLOW: SESSION: 10.6.4.178:48150->192.168.20.161:11800(TCP), interface tunnel6, vr trust-vr, policy 1, user -@-, host -, session start\n\u0000

2.2.3 Basic usage instructions

Reference link: https://blog.csdn.net/qq_34624315/article/details/83013531

2.2.4 Comprehensive configuration

cat > H3C.conf << EOF
input {
  udp {
    port => 516
    type => "h4c"
  }
}
filter {
  grok {
    match => { "message" => "%{SYSLOGTIMESTAMP:timestamp} %{DATA:year} %{DATA:hostname} %{DATA:ddModuleName}/%{POSINT:severity}/%{DATA:brief}:%{GREEDYDATA:reason}" }
    add_field => { "severity_code" => "%{severity}" }
  }
  mutate {
    gsub => [
      "severity", "0", "Emergency",
      "severity", "1", "Alert",
      "severity", "2", "Critical",
      "severity", "3", "Error",
      "severity", "4", "Warning",
      "severity", "5", "Notice",
      "severity", "6", "Informational",
      "severity", "7", "Debug"
    ]
    remove_field => [ "message", "syslog_pri" ]
  }
}
output {
  stdout { codec => rubydebug }
  # elasticsearch {
  #   hosts => "192.168.20.18:9200"    # elasticsearch service address
  #   index => "logstash-h4c-%{+YYYY.MM.dd}"
  # }
}
EOF

2.2.5 Reference files

Logstash configuration of network devices such as switching routing

Logstash profile

3 VMware part

3.1 ESXi log collection

This section mainly collects ESXi host logs to facilitate security log analysis.

Logs are collected via syslog and then parsed by Logstash from the ELK stack.

ESXi: syslog → logstash → elasticsearch

3.1.1 ESXi syslog server configuration

3.1.1.1 Configuration steps

This article configures it through the ESXi client; the web UI works in much the same way. See the reference links for details.

Enable the syslog service:

Open the ESXi client > select the host > Host configuration > Advanced Settings > Syslog, and set the remote syslog server to udp://192.168.20.18:514.

Allow it through the ESXi firewall:

Open the ESXi client > select the host > Host configuration > Security profile > Firewall > Edit, tick the syslog server rule, and click OK.
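For reference, the same settings can be applied from the ESXi shell with esxcli; a sketch using the standard syslog and firewall namespaces (double-check the options against your ESXi version):

esxcli system syslog config set --loghost='udp://192.168.20.18:514'     # point syslog at the collector
esxcli system syslog reload                                             # apply the new configuration
esxcli network firewall ruleset set --ruleset-id=syslog --enabled=true  # allow outbound syslog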

3.1.1.2 reference links

Vmware Esxi syslog configuration

Configure syslog on Esxi

Monitoring VMWare ESXi with the ELK Stack

3.1.2 Add a Logstash configuration file

3.1.2.1 Create a test configuration file

cat > /data/config/test-vmware.config << EOF
input {
  udp {
    port => 514
    type => "vmware"
  }
}
output {
  stdout { codec => rubydebug }
}
EOF
logstash -f test-vmware.config

3.1.2.2 Select one of the logs for analysis and debugging

2019-12-03T07:36:11.689Z localhost.localdomain Vpxa: verbose vpxa[644C8B70] [Originator@6876 sub=VpxaHalCnxHostagent opID=WFU-2d14bc3d] [WaitForUpdatesDone] Completed callback\n

3.1.2.3 Build the full configuration from the test file, drawing on examples found online

cat > vmware.conf << EOF
input {
  udp {
    port => 514
    type => "vmware"
  }
}
filter {
  if "vmware" in [type] {
    grok {
      break_on_match => true
      match => [
        "message", "%{TIMESTAMP_ISO8601:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{SYSLOGPROG:syslog_program}: (?<message_system_info>(?:\[%{DATA:message_thread_id} %{DATA:syslog_level} \'%{DATA:message_service}\' ?%{DATA:message_opID}])) \[%{DATA:message_service_info}] (?<syslog_message>(%{GREEDYDATA}))",
        "message", "%{TIMESTAMP_ISO8601:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{SYSLOGPROG:syslog_program}: (?<message_system_info>(?:\[%{DATA:message_thread_id} %{DATA:syslog_level} \'%{DATA:message_service}\' ?%{DATA:message_opID}])) (?<syslog_message>(%{GREEDYDATA}))",
        "message", "%{TIMESTAMP_ISO8601:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{SYSLOGPROG:syslog_program}: %{GREEDYDATA:syslog_message}"
      ]
    }
    date {
      match => [ "syslog_timestamp", "YYYY-MM-dd HH:mm:ss", "ISO8601" ]
    }
    mutate {
      replace => [ "@source_host", "%{syslog_hostname}" ]
    }
    mutate {
      replace => [ "@message", "%{syslog_message}" ]
    }
    mutate {
      remove_field => [ "@source_host", "program", "@timestamp", "syslog_hostname", "@message" ]
    }
    if "Device naa" in [message] {
      grok {
        break_on_match => false
        match => [
          "message", "Device naa.%{WORD:device_naa} performance has %{WORD:device_status}%{GREEDYDATA} of %{INT:datastore_latency_from}%{GREEDYDATA} to %{INT:datastore_latency_to}",
          "message", "Device naa.%{WORD:device_naa} performance has %{WORD:device_status}%{GREEDYDATA} from %{INT:datastore_latency_from}%{GREEDYDATA} to %{INT:datastore_latency_to}"
        ]
      }
    }
    if "connectivity issues" in [message] {
      grok {
        match => [ "message", "Hostd: %{GREEDYDATA} : %{DATA:device_access} to volume %{DATA:device_id} %{DATA:datastore} (following|due to)" ]
      }
    }
    if "WARNING" in [message] {
      grok {
        match => [ "message", "WARNING: %{GREEDYDATA:vmware_warning_msg}" ]
      }
    }
  }
}
output {
  elasticsearch {
    hosts => "192.168.20.18:9200"    # elasticsearch service address
    index => "logstash-vmware-%{+YYYY.MM.dd}"
  }
  # stdout { codec => rubydebug }
}
EOF

3.1.2.4 Reference files

Basic usage of mutate

Basic Logstash profile reference

Vmware and syslog

Logstash VCSA6.0

Filter plugins

3.2 vCenter log collection

3.2.1 vCenter syslog server configuration

This section mainly collects VCSA logs to facilitate security log analysis.

Logs are collected via syslog and then parsed by Logstash from the ELK stack.

VCSA: syslog → logstash → elasticsearch

3.2.1.1 configuration steps

Open the VCSA management interface at http://192.168.20.90:5480, log in with the root account and password, click the syslog configuration section, and fill in the syslog server information.

3.2.1.2 reference links

Vmware Esxi syslog configuration

VCSA 6.5 forward to multiple syslog

VCSA syslog

3.2.2 Add a Logstash configuration file

3.2.2.1 Create a test configuration

input {
  udp {
    port => 1514
    type => "vcenter"
  }
}
output {
  stdout { codec => rubydebug }
}

3.2.2.2 Select a log to analyze and debug

1 2019-12-05T02:44:17.640474+00:00 photon-machine vpxd 4035 - - Event [4184629] [1-1] [2019-12-05T02:44:00.017928Z] [vim.event.UserLoginSessionEvent] [info] [root] [Datacenter] [4184629] [User root@192.168.20.17 logged in as pyvmomi Python/3.6.8 (Linux; 3.10.0-957.el7.x86_64; x86_64)]\n

3.2.2.3 Build the full configuration from the test file, drawing on examples found online

cat > vcenter.conf << EOF
input {
  udp {
    port => 1514
    type => "vcenter"
  }
}
filter {
  if "vcenter" in [type] {
    grok {
      break_on_match => true
      match => [ "message", "%{NONNEGINT:syslog_ver} +(?:%{TIMESTAMP_ISO8601:syslog_timestamp}|-) +(?:%{HOSTNAME:syslog_hostname}|-) +(-|%{SYSLOG5424PRINTASCII:syslog_program}) +(-|%{SYSLOG5424PRINTASCII:syslog_msgid}) +(?:%{SYSLOG5424SD:syslog_sd}|-|) +%{GREEDYDATA:syslog_msg}" ]
    }
    date {
      match => [ "syslog_timestamp", "YYYY-MM-dd HH:mm:ss,SSS", "YYYY-MM-dd HH:mm:ss,SSS", "ISO8601" ]
      # timezone => "UTC"            # for the vCenter Appliance
      # timezone => "Asia/Shanghai"
    }
    mutate {
      remove_field => [ "syslog_ver", "syslog_pri" ]
    }
  }
}
output {
  elasticsearch {
    hosts => "192.168.20.18:9200"    # elasticsearch service address
    index => "logstash-vcenter-%{+YYYY.MM.dd}"
  }
  # stdout { codec => rubydebug }
}
EOF

3.2.2.4 Reference files

Basic usage of mutate

Basic Logstash profile reference

Vmware and syslog

Logstash VCSA6.0
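Before moving on to Windows, the vCenter pipeline can be checked end to end by sending a test datagram to the listener from any Linux host (util-linux logger is assumed; the values match the setup above):

logger --server 192.168.20.18 --port 1514 --udp "vcenter pipeline test"

The message will be tagged _grokparsefailure, since it is not a real vCenter event, but it confirms that UDP 1514 reaches Logstash and that documents are being indexed.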

4 Windows server part

4.1 Windows server log collection

This section mainly collects AD domain logs to facilitate security log analysis.

Collection is done with Winlogbeat from the ELK stack.

winlogbeat → logstash → elasticsearch

4.1.1 Windows server configuration

Download:

Official download link: https://www.elastic.co/cn/downloads/beats/winlogbeat

Configuration:

Extract the archive into "C:\Program Files" and rename the folder to Winlogbeat.

Installation:

Install it as a Windows service from PowerShell:
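Winlogbeat ships with an install script; the usual invocation from the official manual (assuming the folder from the previous step) looks like this:

PS C:\Program Files\Winlogbeat> PowerShell.exe -ExecutionPolicy UnRestricted -File .\install-service-winlogbeat.ps1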

Edit:

Edit the winlogbeat.yml file

winlogbeat.event_logs:
  - name: Application
  - name: Security
  - name: System

output.logstash:
  enabled: true
  hosts: ["192.168.20.18:5044"]

logging.to_files: true
logging.files:
  path: d:\ProgramData\winlogbeat\Logs
logging.level: info

Test the configuration file:

PS C:\Program Files\Winlogbeat> .\winlogbeat.exe test config -c .\winlogbeat.yml -e

Launch:

Start winlogbeat

Start it from the PowerShell command line:

PS C:\Program Files\Winlogbeat> Start-Service winlogbeat

Stop it from the PowerShell command line:

PS C:\Program Files\Winlogbeat> Stop-Service winlogbeat

Import the index template:

Because output goes through Logstash rather than directly to Elasticsearch, the Winlogbeat index template has to be loaded manually.

PS> .\winlogbeat.exe setup --index-management -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["192.168.20.18:9200"]'

Import the dashboards:

For the same reason, the Kibana dashboards also have to be imported manually.

PS> .\winlogbeat.exe setup -e -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["192.168.20.18:9200"]' -E setup.kibana.host=192.168.20.18:5601

4.1.2 Reference links

Winlogbeat installation on Windows

Analysis of the official manual

4.2 Add a Logstash profile

4.2.1 Create a test profile

cat > /data/config/test-windows.config << EOF
input {
  beats {
    port => 5044
  }
}
output {
  stdout { codec => rubydebug }
}
EOF
logstash -f test-windows.config

4.2.2 Create the profile

Create the formal profile and check its content (since the index template has already been imported, little else needs to change):

cat > /data/config/windows.config << EOF
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["http://192.168.20.18:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}"
  }
}
EOF
logstash -f windows.config

4.2.3 Reference files

Beats input plugin
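Once events are flowing, a quick sanity check is to ask Elasticsearch how many documents the Winlogbeat index already holds:

curl 'http://192.168.20.18:9200/winlogbeat-*/_count?pretty'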

4.3 View Discover

Because Winlogbeat is a standard module of the ELK stack, the index pattern is already defined and nothing needs to be defined by hand.

Simply open Discover and search the winlogbeat-* index.

4.4 View dashboard

Because Winlogbeat is a standard module of the ELK stack, the dashboards are already provided and nothing needs to be built by hand.

Simply open the imported Winlogbeat dashboards.

5 Linux server part

5.1 Linux syslog collection

This section mainly collects the system log and security log on Linux.

Collection is done with the Filebeat system module from the ELK stack.

filebeat (system module) → elasticsearch

5.1.1 Installation and deployment

Download:

Official download link: https://www.elastic.co/cn/downloads/beats/filebeat

Installation:

Command line installation

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.5.0-linux-x86_64.tar.gz
tar xzvf filebeat-7.5.0-linux-x86_64.tar.gz

View the layout:

View filebeat directory layout

Edit:

Edit the filebeat.yml file

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*

output.elasticsearch:
  hosts: ["192.168.20.18:9200"]

setup.kibana:
  host: "192.168.20.18:5601"

Start:

Run the filebeat file

./filebeat -c filebeat.yml -e

5.1.2 Reference links

Winlogbeat installation on windows

Official manual analysis

5.2 Modify the Filebeat configuration file

5.2.1 Index name

Turn off ILM (index lifecycle management):

setup.ilm.enabled: false

Change the index name:

setup.template.overwrite: true
output.elasticsearch.index: "systemlog-7.3.0-%{+yyyy.MM.dd}"
setup.template.name: "systemlog"
setup.template.pattern: "systemlog-*"

Point the pre-built Kibana dashboards at the new index:

setup.dashboards.index: "systemlog-*"

5.2.2 Enable the system module

./filebeat modules enable system
./filebeat modules list

5.2.3 Reinitialize the environment

./filebeat setup --template -e -c filebeat.yml

5.2.4 Configure the module

Modify the configuration file:

filebeat.modules:
  - module: system
    syslog:
      enabled: true
      # default locations: /var/log/messages* /var/log/syslog*
    auth:
      enabled: true
      # default locations: /var/log/auth.log* /var/log/secure*

output.elasticsearch:
  hosts: ["192.168.20.18:9200"]

setup.kibana:
  host: "192.168.20.18:5601"

5.2.5 Restart Filebeat and initialize the module

./filebeat setup -e -c filebeat.yml

5.2.6 Reference files

Beats input plugin

Filebeat module and configuration

System module
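Filebeat also has built-in test subcommands that confirm the configuration and the connection to Elasticsearch before you go looking in Kibana:

./filebeat test config -c filebeat.yml
./filebeat test output -c filebeat.yml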

5.3 View discover

5.4 View dashboard

6 Integrated Logstash configuration

6.1 Integrated Logstash configuration file

input {
  udp   { port => 516  type => "h4c" }
  udp   { port => 518  type => "hillstone" }
  udp   { port => 514  type => "vmware" }
  udp   { port => 1514 type => "vcenter" }
  beats { port => 5044 type => "windows" }
}
filter {
  if [type] == "hillstone" {
    grok {
      # session end, session start, SNAT, DNAT
      match => { "message" => [
        "%{SYSLOGTIMESTAMP:timestamp} %{BASE10NUM:serial}\(%{WORD:ROOT}\) %{DATA:logid} %{DATA:Sort}@%{DATA:Class}: %{DATA:module}: %{IPV4:srcip}:%{BASE10NUM:srcport}->%{IPV4:dstip}:%{WORD:dstport}\(%{DATA:protocol}\), application %{USER:app}, interface %{DATA:interface}, vr %{USER:vr}, policy %{DATA:policy}, user %{USERNAME:user}@%{DATA:AAAserver}, host %{USER:HOST}, send packets %{BASE10NUM:sendPackets}, send bytes %{BASE10NUM:sendBytes}, receive packets %{BASE10NUM:receivePackets}, receive bytes %{BASE10NUM:receiveBytes}, start time %{TIMESTAMP_ISO8601:startTime}, close time %{TIMESTAMP_ISO8601:closeTime}, session %{WORD:state}, %{GREEDYDATA:reason}",
        "%{SYSLOGTIMESTAMP:timestamp} %{BASE10NUM:serial}\(%{WORD:ROOT}\) %{DATA:logid} %{DATA:Sort}@%{DATA:Class}: %{DATA:module}: %{IPV4:srcip}:%{BASE10NUM:srcport}->%{IPV4:dstip}:%{WORD:dstport}\(%{DATA:protocol}\), interface %{DATA:interface}, vr %{DATA:vr}, policy %{DATA:policy}, user %{USERNAME:user}@%{DATA:AAAserver}, host %{USER:HOST}, session %{WORD:state}%{GREEDYDATA:reason}",
        "%{SYSLOGTIMESTAMP:timestamp} %{BASE10NUM:serial}\(%{WORD:ROOT}\) %{DATA:logid} %{DATA:Sort}@%{DATA:Class}: %{DATA:module}: %{IPV4:srcip}:%{BASE10NUM:srcport}->%{IPV4:dstip}:%{WORD:dstport}\(%{DATA:protocol}\), %{WORD:state} to %{IPV4:snatip}:%{BASE10NUM:snatport}, vr %{DATA:vr}, user %{USERNAME:user}@%{DATA:AAAserver}, host %{DATA:HOST}, rule %{BASE10NUM:rule}",
        "%{SYSLOGTIMESTAMP:timestamp} %{BASE10NUM:serial}\(%{WORD:ROOT}\) %{DATA:logid} %{DATA:Sort}@%{DATA:Class}: %{DATA:module}: %{IPV4:srcip}:%{BASE10NUM:srcport}->%{IPV4:dstip}:%{WORD:dstport}\(%{DATA:protocol}\), %{WORD:state} to %{IPV4:dnatip}:%{BASE10NUM:dnatport}, vr %{DATA:vr}, user %{USERNAME:user}@%{DATA:AAAserver}, host %{DATA:HOST}, rule %{BASE10NUM:rule}"
      ] }
    }
    mutate {
      lowercase => [ "module" ]
      remove_field => [ "host", "message", "ROOT", "HOST", "serial", "syslog_pri", "timestamp", "mac", "AAAserver", "user" ]
    }
  }
  if [type] == "h4c" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:timestamp} %{DATA:year} %{DATA:hostname} %{DATA:ddModuleName}/%{POSINT:severity}/%{DATA:brief}:%{GREEDYDATA:reason}" }
      add_field => { "severity_code" => "%{severity}" }
    }
    mutate {
      gsub => [
        "severity", "0", "Emergency",
        "severity", "1", "Alert",
        "severity", "2", "Critical",
        "severity", "3", "Error",
        "severity", "4", "Warning",
        "severity", "5", "Notice",
        "severity", "6", "Informational",
        "severity", "7", "Debug"
      ]
      remove_field => [ "message", "syslog_pri" ]
    }
  }
  if [type] == "vmware" {
    grok {
      break_on_match => true
      match => [
        "message", "%{TIMESTAMP_ISO8601:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{SYSLOGPROG:syslog_program}: (?<message_system_info>(?:\[%{DATA:message_thread_id} %{DATA:syslog_level} \'%{DATA:message_service}\' ?%{DATA:message_opID}])) \[%{DATA:message_service_info}] (?<syslog_message>(%{GREEDYDATA}))",
        "message", "%{TIMESTAMP_ISO8601:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{SYSLOGPROG:syslog_program}: (?<message_system_info>(?:\[%{DATA:message_thread_id} %{DATA:syslog_level} \'%{DATA:message_service}\' ?%{DATA:message_opID}])) (?<syslog_message>(%{GREEDYDATA}))",
        "message", "%{TIMESTAMP_ISO8601:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{SYSLOGPROG:syslog_program}: %{GREEDYDATA:syslog_message}"
      ]
    }
    date {
      match => [ "syslog_timestamp", "YYYY-MM-dd HH:mm:ss", "ISO8601" ]
    }
    mutate { replace => [ "@source_host", "%{syslog_hostname}" ] }
    mutate { replace => [ "@message", "%{syslog_message}" ] }
    mutate { remove_field => [ "@source_host", "program", "syslog_hostname", "@message" ] }
  }
  if [type] == "vcenter" {
    grok {
      break_on_match => true
      match => [ "message", "%{NONNEGINT:syslog_ver} +(?:%{TIMESTAMP_ISO8601:syslog_timestamp}|-) +(?:%{HOSTNAME:syslog_hostname}|-) +(-|%{SYSLOG5424PRINTASCII:syslog_program}) +(-|%{SYSLOG5424PRINTASCII:syslog_msgid}) +(?:%{SYSLOG5424SD:syslog_sd}|-|) +%{GREEDYDATA:syslog_msg}" ]
    }
    date {
      match => [ "syslog_timestamp", "YYYY-MM-dd HH:mm:ss,SSS", "YYYY-MM-dd HH:mm:ss,SSS", "ISO8601" ]
    }
    mutate { remove_field => [ "syslog_ver", "syslog_pri" ] }
  }
}
output {
  if [type] == "hillstone" {
    elasticsearch {
      hosts => "192.168.20.18:9200"
      index => "hillstone-%{module}-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "h4c" {
    elasticsearch {
      hosts => "192.168.20.18:9200"
      index => "h4c-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "vmware" {
    elasticsearch {
      hosts => "192.168.20.18:9200"
      index => "vmware-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "vcenter" {
    elasticsearch {
      hosts => "192.168.20.18:9200"
      index => "vcenter-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "windows" {
    elasticsearch {
      hosts => "192.168.20.18:9200"
      index => "%{[@metadata][beat]}-%{[@metadata][version]}"
    }
  }
}
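Before deploying the combined file as a service, it can be syntax-checked with Logstash's built-in config test (assuming it is saved as mix.config, as in the next step):

logstash -f mix.config --config.test_and_exit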

6.2 Configure the Logstash service

To run Logstash as a service, rename mix.config to logstash.conf, place it in the /etc/logstash directory, and then enable and start the service:

systemctl enable logstash
systemctl start logstash

To be continued. For further reading, the English documentation is recommended.

A tip for reading English sites: put the cursor in the address bar, prepend icopy.site/ to the URL, and press Enter.

"icopy.site/" + "https://www.elastic.co"

For example, source URL: https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html

Translated URL: https://s0www0elastic0co.icopy.site/guide/en/logstash/current/plugins-filters-grok.html

Reasons for the recommendation:

1. It is more accurate than Google's full-page translation, and code snippets are left untranslated.

2. If your English is weak, or you are tired of wading through English documentation, it is worth a try.

Recommended learning video:

ELK from beginner to practice
