Build Log Analysis system with Elasticsearch+Logstash+Kibana


Brief introduction:

ELK is a collection of three tools, Elasticsearch + Logstash + Kibana, which together form a practical and easy-to-use monitoring architecture. Many companies use it to build visual platforms for analyzing massive volumes of logs.

ElasticSearch is a Lucene-based search server. It provides a distributed, multi-user full-text search engine exposed through a RESTful web interface. Elasticsearch is developed in Java, released as open source under the Apache license, and is currently a popular enterprise search engine. Designed for cloud computing, it delivers real-time search and is stable, reliable, fast, and easy to install and use.
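Because the interface is RESTful HTTP, any HTTP client can talk to it. As a minimal sketch, assuming Elasticsearch is already running on the local host's default port 9200:

curl -XGET 'http://localhost:9200/?pretty'    # returns the node name, version and tagline as JSON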

Logstash is a platform for collecting, transmitting, processing, managing, and searching application logs and events. You can use it to unify the collection and management of application logs, and it provides a web interface for queries and statistics.

Kibana is a web interface for analyzing the logs handled by Logstash and ElasticSearch. It supports efficient search, visualization, and analysis of log data.

A centralized log server:

Improves security

Stores logs centrally

Drawback: the logs are difficult to analyze

ELK log processing steps:

1. Manage logs centrally (collect them in one place)

2. Format the logs (Logstash) and output them to Elasticsearch

3. Index and store the formatted data (Elasticsearch)

4. Display the data in a front end (Kibana)

Build an ELK log analysis system

Configure the elasticsearch environment

Modify two hostnames: node1 and node2

Modify the hosts file:

vim /etc/hosts

Add the following hostnames and host IP addresses (required on both nodes):

192.168.220.129 node1

192.168.220.130 node2
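To confirm that name resolution works, a quick check from node1 (assuming the addresses above are reachable; run the mirror-image test from node2):

ping -c 3 node2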

Turn off the firewall and SELinux on both nodes:

systemctl stop firewalld.service

setenforce 0

Deploy and install elasticsearch software

rpm -ivh elasticsearch-5.5.0.rpm    # install

systemctl daemon-reload    # reload the service unit files

systemctl enable elasticsearch.service    # enable start on boot

cp /etc/elasticsearch/elasticsearch.yml /etc/elasticsearch/elasticsearch.yml.bak

Modify the configuration file:

Note: the configuration on the second node is the same as on the first; only the node name and IP address differ.

vim /etc/elasticsearch/elasticsearch.yml

cluster.name: my-elk-cluster                          # cluster name (custom)
node.name: node-1                                     # node name
path.data: /data/elk_data                             # data storage path
path.logs: /var/log/elasticsearch/                    # log storage path
bootstrap.memory_lock: false                          # do not lock memory at startup
network.host: 0.0.0.0                                 # bind address for the service
http.port: 9200                                       # HTTP port
discovery.zen.ping.unicast.hosts: ["node1", "node2"]  # cluster discovery via unicast
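To double-check the edits before starting the service, you can print only the active (non-comment, non-empty) lines of the file:

grep -Ev '^#|^$' /etc/elasticsearch/elasticsearch.yml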

Create the data storage path, give elasticsearch ownership of it, and start the service:

mkdir -p /data/elk_data

chown elasticsearch:elasticsearch /data/elk_data/

systemctl start elasticsearch.service

netstat -natp | grep 9200

Open http://192.168.50.142:9200 in a browser; the node information below is returned:

{
  "name" : "node1",
  "cluster_name" : "my-elk-cluster",
  "cluster_uuid" : "47bm_xHBSfSvB-mg5qaLWg",
  "version" : {
    "number" : "5.5.0",
    "build_hash" : "260387d",
    "build_date" : "2017-06-30T23:16:05.735Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.0"
  },
  "tagline" : "You Know, for Search"
}

Browse to http://192.168.50.142:9200/_cluster/health?pretty to check the health of the cluster:

{
  "cluster_name" : "my-elk-cluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}

Browse to http://192.168.50.142:9200/_cluster/state?pretty to check the cluster state information:

{
  "cluster_name" : "my-elk-cluster",
  "version" : 3,
  "state_uuid" : "vU5I5cttQiiedAu38QwWEQ",
  "master_node" : "3wHS1VEBQ_q0FxZs2T5IiA",
  "blocks" : { },
  "nodes" : {
    "V0n8BHRfS1CA75dbo312HA" : {
      "name" : "node2",
      "ephemeral_id" : "2rC6LV1qQcaZNEFRc8uIBg",
      "transport_address" : "192.168.50.139:9300",
      "attributes" : { }
    },
    "3wHS1VEBQ_q0FxZs2T5IiA" : {
      "name" : "node1",
      "ephemeral_id" : "d2nLeY3RSYaI7g7jpzNkMA",
      "transport_address" : "192.168.50.142:9300",
      "attributes" : { }
    }
  },
  "metadata" : {
    "cluster_uuid" : "47bm_xHBSfSvB-mg5qaLWg",
    "templates" : { },
    "indices" : { },
    "index-graveyard" : {
      "tombstones" : [ ]
    }
  },
  "routing_table" : {
    "indices" : { }
  },
  "routing_nodes" : {
    "unassigned" : [ ],
    "nodes" : {
      "3wHS1VEBQ_q0FxZs2T5IiA" : [ ],
      "V0n8BHRfS1CA75dbo312HA" : [ ]
    }
  }
}

Install the elasticsearch-head plug-in

yum install gcc gcc-c++ make -y

Compile and install the node component:

tar zxvf node-v8.2.1.tar.gz -C /opt/

cd /opt/node-v8.2.1/

./configure

make -j3    # this step takes a long time; be patient

make install
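A quick sanity check that the node component was built and installed correctly:

node -v    # should print v8.2.1

npm -v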

Install the phantomjs front-end framework:

tar jxvf phantomjs-2.1.1-linux-x86_64.tar.bz2 -C /opt/

cd /opt/phantomjs-2.1.1-linux-x86_64/bin

cp phantomjs /usr/local/bin/
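Verify that phantomjs is now on the PATH:

phantomjs --version    # should print 2.1.1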

Install the elasticsearch-head data visualization tool:

tar zxvf elasticsearch-head.tar.gz -C /opt/

cd /opt/elasticsearch-head/

npm install

Modify the main configuration file:

vim /etc/elasticsearch/elasticsearch.yml

Insert the following two lines of code at the end:

http.cors.enabled: true    # enable cross-origin access

http.cors.allow-origin: "*"

Then restart Elasticsearch so the change takes effect:

systemctl restart elasticsearch.service

Start elasticsearch-head

cd /opt/elasticsearch-head/

npm run start &    # run in the background
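Confirm that elasticsearch-head is up by checking that port 9100 is listening:

netstat -natp | grep 9100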

Open http://192.168.50.142:9100/ in a browser; the page shows that the cluster is healthy (green).

Enter http://192.168.50.142:9200 in the Elasticsearch field,

then click Connect; it reports: cluster health: green (0 of 0)

Both node1 and node2 are listed, each with its information and actions.

Create an index

You can create a new index directly in the elasticsearch-head interface.

You can also enter the following command to create an index:

curl -XPUT '192.168.50.142:9200/index-demo/test/1?pretty' -H 'Content-Type: application/json' -d '{"user":"zhangsan","mesg":"hello world"}'

# the index name is index-demo and the type is test

After refreshing the browser, you will see the index you just created. By default the index has five primary shards and one replica.
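The same information is available from the command line; a sketch using the _cat API:

curl -XGET 'http://192.168.50.142:9200/_cat/indices?v'    # index-demo should show pri 5, rep 1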

Install logstash, collect some logs, and output them to elasticsearch

Modify hostname

hostnamectl set-hostname apache

Install the Apache service

systemctl stop firewalld.service

setenforce 0

yum install httpd -y

systemctl start httpd.service

Install logstash

rpm -ivh logstash-5.5.1.rpm

systemctl start logstash

systemctl enable logstash

ln -s /usr/share/logstash/bin/logstash /usr/local/bin/    # create a symbolic link so the logstash binary is on the PATH

To check whether logstash (on the Apache host) and elasticsearch (on the nodes) are working properly, run a docking test:

You can test with the logstash command:

[root@apache bin] # logstash

-f: specify the logstash configuration file; logstash is configured according to that file

-e: followed by a string that serves as the logstash configuration (if the string is empty, stdin is used as the input and stdout as the output by default)

-t: test whether the configuration file is correct, then exit
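For example, a minimal sketch of a stdin-to-stdout pipeline that confirms logstash itself runs (type a line and it is echoed back as an event):

logstash -e 'input { stdin{} } output { stdout{} }'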

Next, take standard input as the input and write the output to Elasticsearch:

logstash -e 'input { stdin{} } output { elasticsearch { hosts=>["192.168.220.136:9200"] } }'

At this point, when the browser accesses http://192.168.50.142:9200/ to view the index information, a new logstash-2019.12.17 index appears.

Log in to the Apache host and configure the docking:

The logstash configuration file mainly consists of three parts: input, output, and filter (use a filter as appropriate).
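As an aside (a hedged sketch, not used in this walkthrough), the filter section is where parsing happens; for instance, the standard grok plugin can split raw Apache access-log lines into structured fields:

filter {
    grok {
        match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
}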

Grant read permission on the log file:

chmod o+r /var/log/messages    # give other users read access

ll /var/log/messages

-rw----r--. 1 root root 488359 Dec 17 14:52 /var/log/messages

Create and edit the configuration file:

vim /etc/logstash/conf.d/system.conf

input {
    file {
        path => "/var/log/messages"
        type => "system"
        start_position => "beginning"
    }
}

output {
    elasticsearch {
        hosts => ["192.168.50.142:9200"]
        index => "system-%{+YYYY.MM.dd}"
    }
}
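Before restarting, you can validate the file with the -t option described earlier:

logstash -f /etc/logstash/conf.d/system.conf -t    # checks the syntax and exits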

Restart the service:

systemctl restart logstash.service

View the index information in the browser: a new system-2019.12.17 index appears.

Install kibana on the node1 host

rpm -ivh kibana-5.5.1-x86_64.rpm

cd /etc/kibana/

cp kibana.yml kibana.yml.bak

vim kibana.yml

Uncomment and modify the following settings:

server.port: 5601    # port kibana listens on

server.host: "0.0.0.0"    # listen on all addresses

elasticsearch.url: "http://192.168.50.142:9200"    # address of the elasticsearch instance

kibana.index: ".kibana"    # kibana's own index in elasticsearch

Start the service:

systemctl start kibana.service
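Confirm that kibana is listening:

netstat -natp | grep 5601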

Browser access: 192.168.50.142:5601

Next, create an index pattern in the visual interface: system-* (docking the system log files)

Dock the Apache log files of the Apache host (both normal accesses and errors)

cd /etc/logstash/conf.d/

vim apache_log.conf    # create the configuration file and add the following:

input {
    file {
        path => "/etc/httpd/logs/access_log"
        type => "access"
        start_position => "beginning"
    }
    file {
        path => "/etc/httpd/logs/error_log"
        type => "error"
        start_position => "beginning"
    }
}

output {
    if [type] == "access" {
        elasticsearch {
            hosts => ["192.168.220.136:9200"]
            index => "apache_access-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "error" {
        elasticsearch {
            hosts => ["192.168.220.136:9200"]
            index => "apache_error-%{+YYYY.MM.dd}"
        }
    }
}
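Restart logstash so the new pipeline in conf.d is picked up, then generate entries for both log files (run on the apache host; /nonexistent is just an example path that triggers a 404, which under Apache's default configuration also writes a "File does not exist" line to error_log):

systemctl restart logstash.service

curl http://localhost/    # appends to access_log

curl http://localhost/nonexistent    # appends to error_log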

Then, in the visual interface, create two index patterns:

1 、 apache_access-*

2 、 apache_error-*
