
How to deploy an ELK log analysis system on CentOS 7.6


This article explains in detail how to deploy an ELK log analysis system on CentOS 7.6. It is shared here as a practical reference, and I hope you get something useful out of it.

Download elasticsearch

Create the elk user and grant permissions

useradd elk
chown -R elk:elk /home/elk/elasticsearch
chown -R elk:elk /home/elk/elasticsearch2
chown -R elk:elk /home/elk/elasticsearch3
mkdir -p /home/eladata
mkdir -p /var/log/elk
chown -R elk:elk /home/eladata
chown -R elk:elk /var/log/elk

Primary node (master)

Decompress elasticsearch and modify the configuration file

/home/elk/elasticsearch/config
[root@localhost config]# grep -v "^#" elasticsearch.yml
cluster.name: my-application
node.name: node0
node.master: true
node.attr.rack: r1
node.max_local_storage_nodes: 3
path.data: /home/eladata
path.logs: /var/log/elk
http.cors.enabled: true
http.cors.allow-origin: "*"
network.host: 192.168.1.70
http.port: 9200
transport.tcp.port: 9301
discovery.zen.minimum_master_nodes: 1
cluster.initial_master_nodes: ["node0"]

Start manually

su elk -l -c '/home/elk/elasticsearch/bin/elasticsearch -d'
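Once the master node is up, a quick sanity check is to query its HTTP API (a minimal sketch; adjust the address if your network.host differs from the configuration above):

curl http://192.168.1.70:9200
curl http://192.168.1.70:9200/_cluster/health?pretty

The first call should return the node and cluster name; the health endpoint reports the overall cluster status.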

Systemd startup file elasticsearch.service

[root@localhost system]# pwd
/lib/systemd/system
[root@localhost system]# cat elasticsearch.service
[Unit]
Description=Elasticsearch
Documentation=http://www.elastic.co
Wants=network-online.target
After=network-online.target

[Service]
RuntimeDirectory=elasticsearch
PrivateTmp=true
Environment=ES_HOME=/home/elk/elasticsearch
Environment=ES_PATH_CONF=/home/elk/elasticsearch/config
Environment=PID_DIR=/var/run/elasticsearch
EnvironmentFile=-/etc/sysconfig/elasticsearch
WorkingDirectory=/home/elk/elasticsearch
User=elk
Group=elk
ExecStart=/home/elk/elasticsearch/bin/elasticsearch -p ${PID_DIR}/elasticsearch.pid --quiet
StandardOutput=journal
StandardError=inherit
LimitNOFILE=65536
LimitNPROC=4096
LimitAS=infinity
LimitFSIZE=infinity
TimeoutStopSec=0
KillSignal=SIGTERM
KillMode=process
SendSIGKILL=no
SuccessExitStatus=143

[Install]
WantedBy=multi-user.target
[root@localhost system]#

Node1

/home/elk/elasticsearch2/config
[root@localhost config]# grep -v "^#" elasticsearch.yml
cluster.name: my-application
node.name: node1
node.master: false
node.attr.rack: r1
node.max_local_storage_nodes: 3
path.data: /home/eladata
path.logs: /var/log/elk
http.cors.enabled: true
http.cors.allow-origin: "*"
network.host: 192.168.1.70
transport.tcp.port: 9303
http.port: 9302
discovery.zen.ping.unicast.hosts: ["192.168.1.70:9301"]
[root@localhost config]#

Start manually

su elk -l -c '/home/elk/elasticsearch2/bin/elasticsearch -d'

Systemd startup file elasticsearch2.service

[root@localhost system]# pwd
/lib/systemd/system
[root@localhost system]# cat elasticsearch2.service
[Unit]
Description=Elasticsearch
Documentation=http://www.elastic.co
Wants=network-online.target
After=network-online.target

[Service]
RuntimeDirectory=elasticsearch2
PrivateTmp=true
Environment=ES_HOME=/home/elk/elasticsearch2
Environment=ES_PATH_CONF=/home/elk/elasticsearch2/config
Environment=PID_DIR=/var/run/elasticsearch
EnvironmentFile=-/etc/sysconfig/elasticsearch
WorkingDirectory=/home/elk/elasticsearch2
User=elk
Group=elk
ExecStart=/home/elk/elasticsearch2/bin/elasticsearch -p ${PID_DIR}/elasticsearch.pid --quiet
StandardOutput=journal
StandardError=inherit
LimitNOFILE=65536
LimitNPROC=4096
LimitAS=infinity
LimitFSIZE=infinity
TimeoutStopSec=0
KillSignal=SIGTERM
KillMode=process
SendSIGKILL=no
SuccessExitStatus=143

[Install]
WantedBy=multi-user.target
[root@localhost system]#

Node2

/home/elk/elasticsearch3/config
[root@localhost config]# grep -v "^#" elasticsearch.yml
cluster.name: my-application
node.name: node2
node.attr.rack: r1
node.master: false
node.max_local_storage_nodes: 3
path.data: /home/eladata
path.logs: /var/log/elk
http.cors.enabled: true
http.cors.allow-origin: "*"
network.host: 192.168.1.70
http.port: 9203
transport.tcp.port: 9304
discovery.zen.ping.unicast.hosts: ["192.168.1.70:9301"]
discovery.zen.minimum_master_nodes: 1
[root@localhost config]#

Start manually

su elk -l -c '/home/elk/elasticsearch3/bin/elasticsearch -d'
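After all three instances have been started, you can confirm that they joined the same cluster (a hedged check against the master's HTTP port configured earlier):

curl 'http://192.168.1.70:9200/_cat/nodes?v'

node0, node1 and node2 should all appear in the output.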

Systemd startup file elasticsearch3.service

[root@localhost system]# pwd
/lib/systemd/system
[root@localhost system]# cat elasticsearch3.service
[Unit]
Description=Elasticsearch
Documentation=http://www.elastic.co
Wants=network-online.target
After=network-online.target

[Service]
RuntimeDirectory=elasticsearch3
PrivateTmp=true
Environment=ES_HOME=/home/elk/elasticsearch3
Environment=ES_PATH_CONF=/home/elk/elasticsearch3/config
Environment=PID_DIR=/var/run/elasticsearch
EnvironmentFile=-/etc/sysconfig/elasticsearch
WorkingDirectory=/home/elk/elasticsearch3
User=elk
Group=elk
ExecStart=/home/elk/elasticsearch3/bin/elasticsearch -p ${PID_DIR}/elasticsearch.pid --quiet
StandardOutput=journal
StandardError=inherit
LimitNOFILE=65536
LimitNPROC=4096
LimitAS=infinity
LimitFSIZE=infinity
TimeoutStopSec=0
KillSignal=SIGTERM
KillMode=process
SendSIGKILL=no
SuccessExitStatus=143

[Install]
WantedBy=multi-user.target
[root@localhost system]#
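With the three unit files in place, the nodes can also be managed through systemd instead of the su commands above (a minimal sketch; the unit names match the files shown above, which all carry an [Install] section):

systemctl daemon-reload
systemctl enable elasticsearch elasticsearch2 elasticsearch3
systemctl start elasticsearch elasticsearch2 elasticsearch3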

Download logstash

The directory is as follows, and the default configuration is fine.

[root@localhost logstash]# pwd
/home/elk/logstash
[root@localhost logstash]#

Start manually

./logstash -f ../dev.conf
nohup ./logstash -f ../dev.conf &
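Before leaving logstash running in the background, it can help to validate the pipeline file first (a sketch using standard logstash 6.x flags; dev.conf is the file written later in this article):

./logstash -f ../dev.conf --config.test_and_exit
./logstash -f ../dev.conf --config.reload.automatic

The first command only checks the configuration syntax and exits; the second starts logstash and reloads the pipeline automatically whenever dev.conf changes.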

Download kibana

The configuration file is as follows

[root@localhost config]# pwd
/home/elk/kibana/config
[root@localhost config]# grep -v "^#" kibana.yml
server.host: "192.168.1.70"
elasticsearch.hosts: ["http://192.168.1.70:9200"]
kibana.index: ".kibana"
i18n.locale: "zh-CN"

Start manually

./kibana
nohup ./kibana &

Kibana startup file

[root@localhost system]# pwd
/lib/systemd/system
[root@localhost system]# cat kibana.service
[Unit]
Description=Kibana Server Manager

[Service]
ExecStart=/home/elk/kibana/bin/kibana

[Install]
WantedBy=multi-user.target
[root@localhost system]#

Port: 5601. Access: http://192.168.1.70:5601
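The kibana unit can be registered with systemd as well (a sketch; note that, as written above, the unit runs kibana as root because no User= line is set):

systemctl daemon-reload
systemctl enable kibana
systemctl start kibana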

Install elasticsearch-head

yum install git npm
git clone https://github.com/mobz/elasticsearch-head.git
[root@localhost elasticsearch-head]# pwd
/home/elk/elasticsearch-head
[root@localhost elasticsearch-head]#

Start

npm install
npm run start
nohup npm run start &

elasticsearch-head can then be accessed by visiting 192.168.2.67:9100.

Download kafka

Modify the configuration file as follows

[root@localhost config]# pwd
/home/elk/kafka/config
[root@localhost config]# grep -v "^#" server.properties
broker.id=0
listeners=PLAINTEXT://192.168.1.70:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/var/log/kafka-logs
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
delete.topic.enable=true
[root@localhost config]#

Start zookeeper

Manual start mode

[root@localhost bin]# pwd
/home/elk/kafka/bin
[root@localhost bin]# ./zookeeper-server-start.sh ../config/zookeeper.properties
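To verify zookeeper is actually listening (a hedged check; nc must be installed, and the ruok four-letter command may need to be whitelisted on newer ZooKeeper releases):

ss -lntp | grep 2181
echo ruok | nc 127.0.0.1 2181

The second command should print imok if zookeeper is healthy.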

Start zookeeper with systemctl

[root@localhost system]# pwd
/lib/systemd/system
[root@localhost system]# cat zookeeper.service
[Service]
Type=forking
SyslogIdentifier=zookeeper
Restart=always
RestartSec=0s
ExecStart=/home/elk/kafka/bin/zookeeper-server-start.sh -daemon /home/elk/kafka/config/zookeeper.properties
ExecStop=/home/elk/kafka/bin/zookeeper-server-stop.sh
[root@localhost system]#

Start the kafka service

Manual start mode

./kafka-server-start.sh ../config/server.properties

Start kafka with systemctl

[root@localhost system]# pwd
/lib/systemd/system
[root@localhost system]# cat kafka.service
[Unit]
Description=Apache kafka
After=network.target

[Service]
Type=simple
Restart=always
RestartSec=0s
ExecStart=/home/elk/kafka/bin/kafka-server-start.sh /home/elk/kafka/config/server.properties
ExecStop=/home/elk/kafka/bin/kafka-server-stop.sh
[root@localhost system]#
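Both services can then be started through systemd (a sketch). Note that, as written above, neither zookeeper.service nor kafka.service contains an [Install] section, so systemctl enable will refuse them until a WantedBy=multi-user.target line is added under [Install]:

systemctl daemon-reload
systemctl start zookeeper
systemctl start kafka
# enabling at boot requires adding an [Install] section to each unit first
systemctl enable zookeeper kafka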

Test kafka

Create a new topic named test

./kafka-topics.sh --create --zookeeper 192.168.1.70:2181 --replication-factor 1 --partitions 1 --topic test
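To double-check how the topic was created, the same bundled script can describe it (an optional step):

./kafka-topics.sh --describe --zookeeper 192.168.1.70:2181 --topic test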

List the topics in kafka

./kafka-topics.sh --list --zookeeper 192.168.1.70:2181

Produce messages to the kafka topic test

./kafka-console-producer.sh --broker-list 192.168.1.70:9092 --topic test

Consume messages from the kafka topic test

bin/kafka-console-consumer.sh --bootstrap-server 192.168.1.70:9092 --topic test --from-beginning

If the messages sent by the producer are received by the consumer, kafka is working correctly.

Install filebeat on the target machine

Install version 6.5

[root@localhost filebeat]# pwd
/usr/local/filebeat
[root@localhost filebeat]# cat filebeat.yml
filebeat.prospectors:
- type: log
  paths:
    - /opt/logs/workphone-tcp/catalina.out
  fields:
    tag: 54_tcp_catalina_out
- type: log
  paths:
    - /opt/logs/workphone-webservice/catalina.out
  fields:
    tag: 54_web_catalina_out

name: 192.168.1.54

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 3

output.kafka:
  hosts: ["192.168.1.70:9092"]
  topic: "filebeat-log"
  partition.hash:
    reachable_only: true
  compression: gzip
  max_message_bytes: 1000000
  required_acks: 1
[root@localhost filebeat]#
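filebeat can then be validated and started from the same directory (a sketch using standard filebeat 6.x subcommands; paths as configured above):

./filebeat test config -c filebeat.yml
./filebeat test output -c filebeat.yml
nohup ./filebeat -c filebeat.yml &

test config checks the YAML and test output confirms that the kafka broker at 192.168.1.70:9092 is reachable before filebeat is left running in the background.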

After the installation is complete, go to logstash and edit the configuration file.

Logstash operation

[root@localhost logstash]# pwd
/home/elk/logstash
[root@localhost logstash]# cat dev.conf
input {
  kafka {
    bootstrap_servers => "192.168.1.70:9092"
    topics => ["filebeat-log"]
    codec => "json"
  }
}

filter {
  if [fields][tag] == "jpwebmap" {
    json {
      source => "message"
      remove_field => "message"
    }
    geoip {
      source => "client"
      target => "geoip"
      add_field => ["[geoip][coordinates]", "%{[geoip][longitude]}"]
      add_field => ["[geoip][coordinates]", "%{[geoip][latitude]}"]
    }
    mutate {
      convert => ["[geoip][coordinates]", "float"]
    }
  }
  if [fields][tag] == "54_tcp_catalina_out" {
    grok {
      match => ["message", "%{TIMESTAMP_ISO8601:logdate}"]
    }
    date {
      match => ["logdate", "ISO8601"]
    }
    mutate {
      remove_field => ["logdate"]
    }
  }
  if [fields][tag] == "54_web_catalina_out" {
    grok {
      match => ["message", "%{TIMESTAMP_ISO8601:logdate}"]
    }
    date {
      match => ["logdate", "ISO8601"]
    }
    mutate {
      remove_field => ["logdate"]
    }
  }
  if [fields][tag] == "55_tcp_catalina_out" {
    grok {
      match => ["message", "%{TIMESTAMP_ISO8601:logdate}"]
    }
    date {
      match => ["logdate", "ISO8601"]
    }
    mutate {
      remove_field => ["logdate"]
    }
  }
  if [fields][tag] == "55_web_catalina_out" {
    grok {
      match => ["message", "%{TIMESTAMP_ISO8601:logdate}"]
    }
    date {
      match => ["logdate", "ISO8601"]
    }
    mutate {
      remove_field => ["logdate"]
    }
  }
  if [fields][tag] == "51_nginx80_access_log" {
    mutate {
      add_field => { "spstr" => "%{[log][file][path]}" }
    }
    mutate {
      split => ["spstr", "/"]
      # save the last element of the array as the api_method
      add_field => ["src", "%{[spstr][-1]}"]
    }
    mutate {
      remove_field => ["friends", "ecs", "agent", "spstr"]
    }
    grok {
      match => { "message" => "%{IPORHOST:remote_addr} - %{DATA:remote_user} \[%{HTTPDATE:time}\] \"%{WORD:method} %{DATA:url} HTTP/%{NUMBER:http_version}\" %{NUMBER:response_code} %{NUMBER:body_sent:bytes} \"%{DATA:referrer}\" \"%{DATA:agent}\" \"%{DATA:x_forwarded_for}\" \"%{NUMBER:request_time}\" \"%{DATA:upstream_addr}\" \"%{DATA:upstream_status}\"" }
      remove_field => "message"
    }
    date {
      match => ["time", "dd/MMM/yyyy:HH:mm:ss Z"]
      target => "@timestamp"
    }
    geoip {
      source => "x_forwarded_for"
      target => "geoip"
      database => "/home/elk/logstash/GeoLite2-City.mmdb"
      add_field => ["[geoip][coordinates]", "%{[geoip][longitude]}"]
      add_field => ["[geoip][coordinates]", "%{[geoip][latitude]}"]
    }
    mutate {
      convert => ["[geoip][coordinates]", "float"]
    }
  }
}

output {
  if [fields][tag] == "wori" {
    elasticsearch {
      hosts => ["192.168.1.70:9200"]
      index => "zabbix"
    }
  }
  if [fields][tag] == "54_tcp_catalina_out" {
    elasticsearch {
      hosts => ["192.168.1.70:9200"]
      index => "54_tcp_catalina_out"
    }
  }
  if [fields][tag] == "54_web_catalina_out" {
    elasticsearch {
      hosts => ["192.168.1.70:9200"]
      index => "54_web_catalina_out"
    }
  }
  if [fields][tag] == "55_tcp_catalina_out" {
    elasticsearch {
      hosts => ["192.168.1.70:9200"]
      index => "55_tcp_catalina_out"
    }
  }
  if [fields][tag] == "55_web_catalina_out" {
    elasticsearch {
      hosts => ["192.168.1.70:9200"]
      index => "55_web_catalina_out"
    }
  }
  if [fields][tag] == "51_nginx80_access_log" {
    stdout {}
    elasticsearch {
      hosts => ["192.168.1.70:9200"]
      index => "51_nginx80_access_log"
    }
  }
}
[root@localhost logstash]#
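Once filebeat, kafka and logstash are all running, new indices should appear in elasticsearch within a minute or two (a hedged check; the index names follow the output section above):

curl 'http://192.168.1.70:9200/_cat/indices?v'

Indices such as 54_tcp_catalina_out and 54_web_catalina_out should be listed; they can then be added as index patterns in Kibana under Management.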

Other configuration files

index.conf

filter {
  mutate {
    add_field => { "spstr" => "%{[log][file][path]}" }
  }
  mutate {
    split => ["spstr", "/"]
    # save the last element of the array as the api_method
    add_field => ["src", "%{[spstr][-1]}"]
  }
  mutate {
    remove_field => ["friends", "ecs", "agent", "spstr"]
  }
}

java.conf

filter {
  if [fields][tag] == "java" {
    grok {
      match => ["message", "%{TIMESTAMP_ISO8601:logdate}"]
    }
    date {
      match => ["logdate", "ISO8601"]
    }
    mutate {
      remove_field => ["logdate"]
    }
  } # End if
}

kafkainput.conf

input {
  kafka {
    bootstrap_servers => "172.16.11.68:9092"
    # topics => ["ql-prod-tomcat"]
    topics => ["ql-prod-dubbo", "ql-prod-nginx", "ql-prod-tomcat"]
    codec => "json"
    consumer_threads => 5
    decorate_events => true
    # auto_offset_reset => "latest"
    group_id => "logstash"
    # client_id => ""
    ##### HELK Optimizing Latency #####
    fetch_min_bytes => "1"
    request_timeout_ms => "305000"
    ##### HELK Optimizing Availability #####
    session_timeout_ms => "10000"
    max_poll_records => "10000"
    max_poll_interval_ms => "300000"
  }
}

#input {
#  kafka {
#    bootstrap_servers => "172.16.11.68:9092"
#    topics => ["ql-prod-java-dubbo", "ql-prod", "ql-prod-java"]
#    codec => "json"
#    consumer_threads => 5
#    decorate_events => true
#    auto_offset_reset => "latest"
#    group_id => "logstash-1"
#    ##### HELK Optimizing Latency #####
#    fetch_min_bytes => "1"
#    request_timeout_ms => "305000"
#    ##### HELK Optimizing Availability #####
#    session_timeout_ms => "10000"
#    max_poll_records => "10000"
#    max_poll_interval_ms => "300000"
#  }
#}
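These split files (kafkainput.conf, java.conf, nginx.conf, output.conf and so on) only take effect together if logstash loads them as one pipeline. One way to do that is to keep them in a single directory and point -f at it, since logstash concatenates every file it finds there (a sketch; the conf.d directory name is an assumption, not part of the original setup):

mkdir -p /home/elk/logstash/conf.d   # hypothetical directory holding the split config files
./logstash -f /home/elk/logstash/conf.d/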

nginx.conf

filter {
  if [fields][tag] == "nginx-access" {
    mutate {
      add_field => { "spstr" => "%{[log][file][path]}" }
    }
    mutate {
      split => ["spstr", "/"]
      # save the last element of the array as the api_method
      add_field => ["src", "%{[spstr][-1]}"]
    }
    mutate {
      remove_field => ["friends", "ecs", "agent", "spstr"]
    }
    grok {
      match => { "message" => "%{IPORHOST:remote_addr} - %{DATA:remote_user} \[%{HTTPDATE:time}\] \"%{WORD:method} %{DATA:url} HTTP/%{NUMBER:http_version}\" %{NUMBER:response_code} %{NUMBER:body_sent:bytes} \"%{DATA:referrer}\" \"%{DATA:agent}\" \"%{DATA:x_forwarded_for}\" \"%{NUMBER:request_time}\" \"%{DATA:upstream_addr}\" \"%{DATA:upstream_status}\"" }
      remove_field => "message"
    }
    date {
      match => ["time", "dd/MMM/yyyy:HH:mm:ss Z"]
      target => "@timestamp"
    }
    geoip {
      source => "x_forwarded_for"
      target => "geoip"
      database => "/opt/logstash-6.2.4/GeoLite2-City.mmdb"
      add_field => ["[geoip][coordinates]", "%{[geoip][longitude]}"]
      add_field => ["[geoip][coordinates]", "%{[geoip][latitude]}"]
    }
    mutate {
      convert => ["[geoip][coordinates]", "float"]
    }
  } # end if
}

output.conf

output {
  if [fields][tag] == "nginx-access" {
    stdout {}
    elasticsearch {
      user => elastic
      password => WR141bp2sveJuGFaD4oR
      hosts => ["172.16.11.67:9200"]
      index => "logstash-%{[fields][proname]}-%{+YYYY.MM.dd}"
    }
  }
  # stdout {}
  if [fields][tag] == "java" {
    elasticsearch {
      user => elastic
      password => WR141bp2sveJuGFaD4oR
      hosts => ["172.16.11.66:9200", "172.16.11.68:9200"]
      index => "%{[host][name]}-%{[src]}"
    }
  }
}

This is the end of "How to deploy an ELK log analysis system on CentOS 7.6". I hope the content above is helpful and that you can learn something from it. If you think the article is good, please share it so more people can see it.
