ELK Log Analysis System (Hands-On)

2025-04-05 Update From: SLTechnology News&Howtos


Introduction: the log server

Improved security

Centralized log storage

Drawback: the logs are difficult to analyze

ELK log analysis system

Elasticsearch: storage and indexing

Logstash: log collector

Kibana: data visualization

Log processing steps

1. Centralize log management

2. Logstash formats the logs and outputs them to Elasticsearch

3. Elasticsearch indexes and stores the formatted data

4. Kibana displays the data on the front end
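Taken together, these steps map directly onto the three sections of a Logstash pipeline definition. The following is a minimal illustrative sketch only; the file path and host address are placeholders, not settings from the deployment later in this article:

```conf
input {                        # steps 1-2: collect the log file
  file {
    path => "/var/log/messages"
    start_position => "beginning"
  }
}
filter { }                     # optional format processing (parsing, rewriting)
output {
  elasticsearch {              # step 3: index and store in Elasticsearch
    hosts => ["localhost:9200"]
  }
}                              # step 4: Kibana then reads these indices
```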

Overview of Elasticsearch

Elasticsearch provides a distributed, multi-user full-text search engine.

Elasticsearch core concepts

Near real-time

Cluster

Node

Index hierarchy: index (database) -> type (table) -> document (record)

Shards and replicas
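Shard and replica counts are fixed per index when it is created. A hedged illustration of the request body sent with `PUT /my-index` (the index name and counts here are arbitrary examples; 5 shards and 1 replica are the Elasticsearch 5.x defaults):

```json
{
  "settings": {
    "number_of_shards": 5,
    "number_of_replicas": 1
  }
}
```

Each primary shard holds a slice of the index; each replica is a redundant copy kept on a different node for failover and extra read throughput.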

Logstash introduction

A powerful data-processing tool that handles data transport, format processing, and formatted output

Its pipeline covers data input, data processing (such as filtering and rewriting), and data output

Main components of Logstash

Shipper

Indexer

Broker

Search and Storage

Web Interface

Kibana introduction

An open source analysis and visualization platform for Elasticsearch

Search and view the data stored in the Elasticsearch index

Advanced data analysis and presentation through various charts

Main functions of Kibana

Seamless integration with Elasticsearch

Data integration and complex data analysis

Accessible to more team members

Flexible interface that is easy to share

Simple configuration and visualization of multiple data sources

Simple data export

Experimental environment

1. Install Elasticsearch on node1 and node2 (the steps are identical; only node1 is shown)

[root@node1 ~]# vim /etc/hosts    ## configure name resolution
192.168.52.133 node1
192.168.52.134 node2
[root@node1 ~]# systemctl stop firewalld.service    ## stop the firewall
[root@node1 ~]# setenforce 0    ## put SELinux in permissive mode
[root@node1 ~]# java -version    ## check that Java is available
[root@node1 ~]# mount.cifs //192.168.100.100/tools /mnt/tools/    ## mount the tools share
Password for root@//192.168.100.100/tools:
[root@node1 ~]# cd /mnt/tools/elk/
[root@node1 elk]# rpm -ivh elasticsearch-5.5.0.rpm    ## install
warning: elasticsearch-5.5.0.rpm: header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
preparing...                          ################################# [100%]
Creating elasticsearch group... OK
Creating elasticsearch user... OK
upgrading/installing...
   1:elasticsearch-0:5.5.0-1          ################################# [100%]
### NOT starting on installation, please execute the following statements to configure elasticsearch service to start automatically using systemd
 sudo systemctl daemon-reload
 sudo systemctl enable elasticsearch.service
### You can start elasticsearch service by executing
 sudo systemctl start elasticsearch.service
[root@node1 elk]# systemctl daemon-reload    ## reload systemd units
[root@node1 elk]# systemctl enable elasticsearch.service    ## enable at boot
Created symlink from /etc/systemd/system/multi-user.target.wants/elasticsearch.service to /usr/lib/systemd/system/elasticsearch.service.
[root@node1 elk]# cd /etc/elasticsearch/
[root@node1 elasticsearch]# cp elasticsearch.yml elasticsearch.yml.bak    ## back up the configuration
[root@node1 elasticsearch]# vim elasticsearch.yml    ## edit the configuration file
cluster.name: my-elk-cluster    ## cluster name
node.name: node1    ## node name (node2 on the second node)
path.data: /data/elk_data    ## data directory
path.logs: /var/log/elasticsearch/    ## log directory
bootstrap.memory_lock: false    ## do not lock memory at startup
network.host: 0.0.0.0    ## bind the service to all addresses
http.port: 9200    ## listening port
discovery.zen.ping.unicast.hosts: ["node1", "node2"]    ## cluster discovery via unicast
[root@node1 elasticsearch]# grep -v "^#" /etc/elasticsearch/elasticsearch.yml    ## verify the configuration
cluster.name: my-elk-cluster
node.name: node1
path.data: /data/elk_data
path.logs: /var/log/elasticsearch/
bootstrap.memory_lock: false
network.host: 0.0.0.0
http.port: 9200
discovery.zen.ping.unicast.hosts: ["node1", "node2"]
[root@node1 elasticsearch]# mkdir -p /data/elk_data    ## create the data directory
[root@node1 elasticsearch]# chown elasticsearch.elasticsearch /data/elk_data/    ## grant ownership
[root@node1 elasticsearch]# systemctl start elasticsearch.service    ## start the service
[root@node1 elasticsearch]# netstat -ntap | grep 9200    ## confirm it is listening
tcp6       0      0 :::9200       :::*      LISTEN      83358/java
[root@node1 elasticsearch]#

View node1 node information

View node2 node information

2. Check cluster health and status in a browser

Node1 health check

Node2 health check

Node1 status

Node2 status
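Health can also be checked from the command line with `curl 'localhost:9200/_cluster/health?pretty'`. As a sketch of reading the response, the snippet below extracts the `status` field from a hard-coded sample (not live cluster output):

```shell
# Sample cluster-health response, hard-coded for illustration
health='{"cluster_name":"my-elk-cluster","status":"green","number_of_nodes":2}'

# Pull out the "status" field: green = all shards allocated,
# yellow = replicas unassigned, red = primary shards missing
status=$(echo "$health" | grep -o '"status":"[a-z]*"' | cut -d'"' -f4)
echo "$status"
```

The same extraction works on the real `curl` output once the cluster is up.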

3. Install the Node.js dependency on node1 and node2 (the steps are identical; only node1 is shown)

[root@node1 elasticsearch]# yum install gcc gcc-c++ make -y    ## install build tools
[root@node1 elasticsearch]# cd /mnt/tools/elk/
[root@node1 elk]# tar xf node-v8.2.1.tar.gz -C /opt/    ## unpack the source
[root@node1 elk]# cd /opt/node-v8.2.1/
[root@node1 node-v8.2.1]# ./configure    ## configure
[root@node1 node-v8.2.1]# make && make install    ## compile and install

4. Install the phantomjs front-end framework on node1 and node2

[root@node1 node-v8.2.1]# cd /mnt/tools/elk/
[root@node1 elk]# tar xf phantomjs-2.1.1-linux-x86_64.tar.bz2 -C /usr/local/src/    ## unpack to /usr/local/src
[root@node1 elk]# cd /usr/local/src/phantomjs-2.1.1-linux-x86_64/bin/
[root@node1 bin]# cp phantomjs /usr/local/bin/    ## make the binary visible to the shell

5. Install the elasticsearch-head data-visualization plug-in on node1 and node2

[root@node1 bin]# cd /mnt/tools/elk/
[root@node1 elk]# tar xf elasticsearch-head.tar.gz -C /usr/local/src/    ## unpack
[root@node1 elk]# cd /usr/local/src/elasticsearch-head/
[root@node1 elasticsearch-head]# npm install    ## install dependencies
npm WARN elasticsearch-head@0.0.0 license should be a valid SPDX license expression
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@1.2.11 (node_modules/fsevents)
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for fsevents@1.2.11: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})
added 71 packages in 7.262s
[root@node1 elasticsearch-head]#

6. Modify the configuration file

[root@node1 elasticsearch-head]# cd ~
[root@node1 ~]# vim /etc/elasticsearch/elasticsearch.yml    ## append at the end
http.cors.enabled: true    ## enable cross-origin access support (default false)
http.cors.allow-origin: "*"    ## origins allowed for cross-origin access
[root@node1 ~]# systemctl restart elasticsearch.service    ## restart
[root@node1 ~]# cd /usr/local/src/elasticsearch-head/
[root@node1 elasticsearch-head]# npm run start &    ## run the visualization service in the background
[1] 83664
> elasticsearch-head@0.0.0 start /usr/local/src/elasticsearch-head
> grunt server
Running "connect:server" (connect) task
Waiting forever...
Started connect web server on http://localhost:9100
[root@node1 elasticsearch-head]# netstat -ntap | grep 9200
tcp6       0      0 :::9200       :::*      LISTEN      83358/java
[root@node1 elasticsearch-head]# netstat -ntap | grep 9100
tcp        0      0 0.0.0.0:9100  0.0.0.0:*  LISTEN      83674/grunt
[root@node1 elasticsearch-head]#

7. Connect with a browser and check the cluster health value

Node1

Node2

8. Create an index on node1

[root@node1 ~]# curl -XPUT 'localhost:9200/index-demo/test/1?pretty&pretty' -H 'Content-Type: application/json' -d '{"user": "zhangsan", "mesg": "hello world"}'    ## create index information
{
  "_index" : "index-demo",
  "_type" : "test",
  "_id" : "1",
  "_version" : 1,
  "result" : "created",
  "_shards" : {
    "total" : 2,
    "successful" : 2,
    "failed" : 0
  },
  "created" : true
}
[root@node1 ~]#
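The response is plain JSON, so success can be checked in a script. A small sketch that pulls the `result` field out of a hard-coded copy of the response (rather than querying a live cluster):

```shell
# Hard-coded copy of the indexing response, for illustration only
resp='{"_index":"index-demo","_type":"test","_id":"1","_version":1,"result":"created"}'

# "created" means the document was newly indexed; a repeat PUT would say "updated"
result=$(echo "$resp" | grep -o '"result":"[a-z]*"' | cut -d'"' -f4)
echo "$result"
```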

9. Install Logstash on the Apache server and connect it to Elasticsearch

[root@apache ~]# systemctl stop firewalld.service
[root@apache ~]# setenforce 0
[root@apache ~]# yum install httpd -y    ## install the Apache service
[root@apache ~]# systemctl start httpd.service    ## start the service
[root@apache ~]# java -version
[root@apache ~]# mount.cifs //192.168.100.100/tools /mnt/tools/    ## mount the tools share
Password for root@//192.168.100.100/tools:
[root@apache ~]# cd /mnt/tools/elk/
[root@apache elk]# rpm -ivh logstash-5.5.1.rpm    ## install logstash
warning: logstash-5.5.1.rpm: header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
preparing...                          ################################# [100%]
upgrading/installing...
   1:logstash-1:5.5.1-1               ################################# [100%]
Using provided startup.options file: /etc/logstash/startup.options
Successfully created system startup script for Logstash
[root@apache elk]# systemctl start logstash.service    ## start the service
[root@apache elk]# systemctl enable logstash.service    ## enable at boot
Created symlink from /etc/systemd/system/multi-user.target.wants/logstash.service to /etc/systemd/system/logstash.service.
[root@apache elk]# ln -s /usr/share/logstash/bin/logstash /usr/local/bin/    ## make the command visible system-wide
[root@apache elk]#

10. Output the system log to Elasticsearch

[root@apache elk]# chmod o+r /var/log/messages    ## allow other users to read the log
[root@apache elk]# vim /etc/logstash/conf.d/system.conf    ## create the pipeline file
input {
  file {
    path => "/var/log/messages"    ## source log file
    type => "system"
    start_position => "beginning"
  }
}
output {
  elasticsearch {
    hosts => ["192.168.13.129:9200"]    ## address of the node1 node
    index => "system-%{+YYYY.MM.dd}"
  }
}
[root@apache elk]# systemctl restart logstash.service    ## restart the service
## the details can also be inspected through data browsing in elasticsearch-head

11. Install the Kibana data-visualization tool on the node1 server

[root@node1 ~]# cd /mnt/tools/elk/
[root@node1 elk]# rpm -ivh kibana-5.5.1-x86_64.rpm    ## install
warning: kibana-5.5.1-x86_64.rpm: header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
preparing...                          ################################# [100%]
upgrading/installing...
   1:kibana-5.5.1-1                   ################################# [100%]
[root@node1 elk]# cd /etc/kibana/
[root@node1 kibana]# cp kibana.yml kibana.yml.bak    ## back up
[root@node1 kibana]# vim kibana.yml    ## edit the configuration file
server.port: 5601    ## listening port
server.host: "0.0.0.0"    ## listen on any interface
elasticsearch.url: "http://192.168.13.129:9200"    ## address of the node1 node
kibana.index: ".kibana"    ## index name
[root@node1 kibana]# systemctl start kibana.service    ## start the service
[root@node1 kibana]# systemctl enable kibana.service    ## enable at boot
Created symlink from /etc/systemd/system/multi-user.target.wants/kibana.service to /etc/systemd/system/kibana.service.
[root@node1 elk]# netstat -ntap | grep 5601    ## check the port
tcp        0      0 127.0.0.1:5601    0.0.0.0:*    LISTEN
[root@node1 elk]#

12. Access Kibana from a browser
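The index name `system-%{+YYYY.MM.dd}` used above is date-stamped, so each day's events land in a fresh index. A sketch of what the expanded name looks like, computed with `date` outside Logstash (the mapping of `%{+YYYY.MM.dd}` to `date`'s `+%Y.%m.%d` is an illustration of the pattern, not Logstash itself):

```shell
# Logstash's %{+YYYY.MM.dd} sprintf pattern corresponds to this date format
idx="system-$(date +%Y.%m.%d)"
echo "$idx"    # e.g. system-2025.04.05
```

Daily indices keep each index small and make it easy to expire old logs by deleting whole indices.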

13. Ingest the Apache log files on the Apache server for statistics

[root@apache elk]# vim /etc/logstash/conf.d/apache_log.conf    ## create the pipeline file
input {
  file {
    path => "/etc/httpd/logs/access_log"    ## access log
    type => "access"
    start_position => "beginning"
  }
  file {
    path => "/etc/httpd/logs/error_log"     ## error log
    type => "error"
    start_position => "beginning"
  }
}
output {
  if [type] == "access" {    ## route output by log type
    elasticsearch {
      hosts => ["192.168.13.129:9200"]
      index => "apache_access-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "error" {
    elasticsearch {
      hosts => ["192.168.13.129:9200"]
      index => "apache_error-%{+YYYY.MM.dd}"
    }
  }
}
[root@apache elk]# logstash -f /etc/logstash/conf.d/apache_log.conf    ## run logstash with this pipeline

14. Open the web pages and view the Kibana statistics

Only error logs

Browsers access Apache services

Generate access log

## Select Management > Index Patterns > Create Index Pattern to create index patterns for the two Apache logs

Create the access log index pattern in Kibana

Create the error log index pattern in Kibana

View access log statistics

View error log statistics

The experiment was successful!

© 2024 shulou.com SLNews company. All rights reserved.