
Build an ELK real-time log analysis platform


I. A Brief Introduction to ELK

ELK is a complete log analysis solution composed of three open source projects: ElasticSearch, Logstash, and Kibana. ElasticSearch is a distributed storage and retrieval engine built on Lucene, used to store all kinds of logs; Logstash collects, parses, and stores logs for later use; Kibana is a Node.js-based visualization tool that provides a Web interface for Logstash and ElasticSearch, helping to summarize, analyze, and search important log data.

ELK works as follows:

Deploy Logstash on every server whose logs need to be collected. Acting as an agent, Logstash monitors and filters the collected logs, consolidates the filtered content, and ships it to the ElasticSearch search engine. ElasticSearch then supports custom searches, and Kibana generates charts from those search results to visualize the log data.

II. Build the Environment

Two CentOS 7 hosts

IP 192.168.80.100, installed with: elasticsearch, logstash, kibana

IP 192.168.80.110, installed with: elasticsearch

III. Deploy ElasticSearch

Elasticsearch is a real-time full-text search and analysis engine that provides three core capabilities: collecting, analyzing, and storing data. It exposes an open REST and Java API architecture, offering efficient search and a scalable distributed system. It is built on top of the Apache Lucene search engine library.

1. Install ElasticSearch

rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch    # import the elasticsearch yum repo key
vi /etc/yum.repos.d/elasticsearch.repo    # configure the elasticsearch yum repo

[elasticsearch-2.x]
name=Elasticsearch repository for 2.x packages
baseurl=http://packages.elastic.co/elasticsearch/2.x/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1

yum install elasticsearch -y    # install elasticsearch
yum install java -y    # install the java environment (must be 1.8 or above)
java -version    # check the java version

vi /etc/elasticsearch/elasticsearch.yml    # modify the configuration file

Line 17, the cluster name:

cluster.name: abner

Line 23, the node name:

node.name: linux-node1

Line 33, the paths where data and logs are stored:

path.data: /data/es-data
path.logs: /var/log/elasticsearch/

Line 43, lock memory to prevent swapping:

bootstrap.memory_lock: true

Line 54, listen on all interfaces:

network.host: 0.0.0.0

Line 58, the listening port:

http.port: 9200

mkdir -p /data/es-data    # create the data storage directory
chown -R elasticsearch:elasticsearch /data/es-data    # fix the directory ownership
systemctl start elasticsearch    # start the service
netstat -ntap | grep 9200    # check the port status
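If elasticsearch fails to start with bootstrap.memory_lock: true, the memlock limit is the usual cause. A hedged sketch of the common fix, using the environment file shipped with the RPM (the unlimited value is an assumption to adapt as needed):

vi /etc/sysconfig/elasticsearch    # environment file installed by the RPM
# set: MAX_LOCKED_MEMORY=unlimited    (assumed value; allows the heap to be locked in RAM)
systemctl restart elasticsearch    # apply the change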

Access the test page in a browser: http://192.168.80.100:9200

2. Interacting with ElasticSearch

First: Java API

Second: RESTful API (interaction via JSON)

curl -i -XGET 'http://192.168.80.100:9200/_count?pretty' -d '{"query": {"match_all": {}}}'
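To verify the RESTful API further, a document can be indexed and fetched back. A minimal sketch, assuming the service above is reachable (the index-demo index name and the document body are made up for illustration):

curl -XPUT 'http://192.168.80.100:9200/index-demo/test/1?pretty' -d '{"user": "zhangsan", "mesg": "hello world"}'    # index a test document
curl -XGET 'http://192.168.80.100:9200/index-demo/test/1?pretty'    # fetch it back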

3. Install the elasticsearch-head plug-in

/usr/share/elasticsearch/bin/plugin install mobz/elasticsearch-head    # install the plug-in; it is pulled from GitHub, or you can use docker pull

Test in a browser whether the plug-in installed successfully:

http://192.168.80.100:9200/_plugin/head/

4. ElasticSearch cluster deployment (installed on another virtual machine)

1) install ElasticSearch (same as above)

2) Enable the cluster auto-discovery mechanism on linux-node1

vi /etc/elasticsearch/elasticsearch.yml    # modify the configuration file
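The article does not show the exact change; since multicast discovery was removed in ElasticSearch 2.x, a plausible sketch is to list both nodes for unicast discovery (the host list is an assumption based on the build environment above):

discovery.zen.ping.unicast.hosts: ["192.168.80.100", "192.168.80.110"]    # assumed unicast host list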

systemctl restart elasticsearch    # restart the node1 service

3) Enable the cluster auto-discovery mechanism on linux-node2

vi /etc/elasticsearch/elasticsearch.yml    # modify the configuration file
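On node2 the same assumed discovery list applies; the cluster name must match node1 while the node name must be unique (linux-node2 is an assumption following the naming above):

cluster.name: abner    # must match node1
node.name: linux-node2    # unique per node (assumed)
discovery.zen.ping.unicast.hosts: ["192.168.80.100", "192.168.80.110"]    # assumed unicast host list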

systemctl restart elasticsearch    # restart the node2 service

4) When visiting in a browser, you will now see both the primary node and the secondary node:

http://192.168.80.100:9200/_plugin/head/
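Cluster state can also be checked from the command line via the standard health API; number_of_nodes should report 2:

curl 'http://192.168.80.100:9200/_cluster/health?pretty'    # check cluster status and node count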

5. Install the monitoring component

/usr/share/elasticsearch/bin/plugin install lmenezes/elasticsearch-kopf    # install the kopf monitoring plug-in

Access the monitoring page in a browser:

http://192.168.80.100:9200/_plugin/kopf/#!/cluster

IV. Deploy Logstash

Logstash is a tool for collecting, parsing, and filtering logs. It supports almost any type of log, including syslog, error logs, and custom application logs. It can receive logs from many sources, including syslog, messaging systems (such as RabbitMQ), and JMX, and can output data in a variety of ways, including e-mail, websockets, and Elasticsearch.

1. Install Logstash

rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch    # import the yum repo key
vim /etc/yum.repos.d/logstash.repo    # configure the logstash yum repo

[logstash-2.1]
name=Logstash repository for 2.1.x packages
baseurl=http://packages.elastic.co/logstash/2.1/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1

yum install logstash -y    # install logstash

2. Using Logstash

ln -s /opt/logstash/bin/logstash /usr/bin/    # create a soft link for the logstash command
logstash -e 'input { stdin {} } output { stdout {} }'    # run logstash with input and output streams defined on the command line, similar to a pipe

Note:

-e: execute the configuration given on the command line

stdin: standard input; stdin {} is an input plug-in

stdout: standard output; stdout {} is an output plug-in

logstash -e 'input { stdin {} } output { stdout { codec => rubydebug } }'    # output more detailed information via the rubydebug codec

logstash -e 'input { stdin {} } output { elasticsearch { hosts => ["192.168.80.100:9200"] } stdout { codec => rubydebug } }'    # also write the input into elasticsearch

Visit http://192.168.80.100:9200/_plugin/head/ to view the data in elasticsearch.

3. Using Logstash configuration files

vi 01-logstash.conf    # edit a configuration file that collects the system log

input {
    file {
        path => "/var/log/messages"
        type => "system"
        start_position => "beginning"
    }
}

output {
    elasticsearch {
        hosts => ["192.168.80.100:9200"]
        index => "system-%{+YYYY.MM.dd}"
    }
}

logstash -f 01-logstash.conf    # run logstash with the specified configuration file to filter and match

vi 02-logstash.conf    # edit a configuration file that collects both the system log and the security log

input {
    file {
        path => "/var/log/messages"
        type => "system"
        start_position => "beginning"
    }
    file {
        path => "/var/log/secure"
        type => "secure"
        start_position => "beginning"
    }
}

output {
    if [type] == "system" {
        elasticsearch {
            hosts => ["192.168.80.100:9200"]
            index => "nagios-system-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "secure" {
        elasticsearch {
            hosts => ["192.168.80.100:9200"]
            index => "nagios-secure-%{+YYYY.MM.dd}"
        }
    }
}

logstash -f 02-logstash.conf    # run logstash with the specified configuration file to filter and match

V. Deploy Kibana

Kibana is a Web-based graphical interface for searching, analyzing, and visualizing log data stored in Elasticsearch indices. It uses Elasticsearch's REST interface to retrieve the data, allowing users not only to create custom dashboard views of their own data but also to query and filter the data in ad-hoc ways.

1. Download and install Kibana

wget https://download.elastic.co/kibana/kibana/kibana-4.3.1-linux-x64.tar.gz    # download the kibana package
tar zxvf kibana-4.3.1-linux-x64.tar.gz    # extract it
mv kibana-4.3.1-linux-x64/ /usr/local/kibana/    # move the directory and rename it
vi /usr/local/kibana/config/kibana.yml    # modify the configuration file

# service port
server.port: 5601

# service address
server.host: "0.0.0.0"

# address and port of elasticsearch
elasticsearch.url: "http://192.168.80.100:9200"

# index in which kibana stores its own configuration
kibana.index: ".kibana"

yum install screen -y    # install screen so kibana can run in the background (other methods work too)
/usr/local/kibana/bin/kibana    # start kibana
netstat -antp | grep 5601    # check the listening port
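As a sketch of the screen approach, the session can be started detached so the terminal stays free (the session name kibana is arbitrary):

screen -dmS kibana /usr/local/kibana/bin/kibana    # start kibana in a detached screen session
screen -r kibana    # reattach to the session when needed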

2. Visit http://192.168.80.100:5601 in a browser

Fill in the corresponding log index

Click Discover to filter the logs based on the time selector

VI. ELK in Practice

Ship the nginx, apache, messages, and secure logs to the front end for display.

1. Edit the nginx configuration file and add the following under the http block:

log_format json '{"@timestamp":"$time_iso8601",'
                '"@version":"1",'
                '"url":"$uri",'
                '"status":"$status",'
                '"domain":"$host",'
                '"host":"$server_addr",'
                '"size":"$body_bytes_sent",'
                '"responsetime":"$request_time",'
                '"referer":"$http_referer",'
                '"ua":"$http_user_agent"'
                '}';

Then set access_log to use the json format just defined:

access_log logs/elk.access.log json;
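Before reloading, a quick syntax check avoids breaking the running server. A sketch assuming a source-built nginx under /usr/local/nginx (consistent with the log path used later in full.conf):

/usr/local/nginx/sbin/nginx -t    # test the configuration syntax
/usr/local/nginx/sbin/nginx -s reload    # reload to apply the new log format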

2. Modify the configuration file of apache

LogFormat "{ \
    \"@timestamp\": \"%{%Y-%m-%dT%H:%M:%S%z}t\", \
    \"@version\": \"1\", \
    \"tags\": [\"apache\"], \
    \"message\": \"%h %l %u %t \\\"%r\\\" %>s %b\", \
    \"clientip\": \"%a\", \
    \"duration\": %D, \
    \"status\": %>s, \
    \"request\": \"%U%q\", \
    \"urlpath\": \"%U\", \
    \"urlquery\": \"%q\", \
    \"bytes\": %B, \
    \"method\": \"%m\", \
    \"site\": \"%{Host}i\", \
    \"referer\": \"%{Referer}i\", \
    \"useragent\": \"%{User-agent}i\" \
}" ls_apache_json

Likewise, point the access log at the json format defined above:

CustomLog logs/access_log ls_apache_json
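As with nginx, it is worth checking the syntax before restarting (apachectl ships with httpd; the httpd service name matches the access_log path used in full.conf):

apachectl configtest    # verify the configuration syntax
systemctl restart httpd    # restart apache to apply the new log format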

3. Edit the logstash configuration file for log collection

vi full.conf    # edit the log collection configuration file

input {
    file {
        path => "/var/log/messages"
        type => "system"
        start_position => "beginning"
    }
    file {
        path => "/var/log/secure"
        type => "secure"
        start_position => "beginning"
    }
    file {
        path => "/var/log/httpd/access_log"
        type => "http"
        start_position => "beginning"
    }
    file {
        path => "/usr/local/nginx/logs/elk.access.log"
        type => "nginx"
        start_position => "beginning"
    }
}

output {
    if [type] == "system" {
        elasticsearch {
            hosts => ["192.168.80.100:9200"]
            index => "nagios-system-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "secure" {
        elasticsearch {
            hosts => ["192.168.80.100:9200"]
            index => "nagios-secure-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "http" {
        elasticsearch {
            hosts => ["192.168.80.100:9200"]
            index => "nagios-http-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "nginx" {
        elasticsearch {
            hosts => ["192.168.80.100:9200"]
            index => "nagios-nginx-%{+YYYY.MM.dd}"
        }
    }
}

logstash -f /etc/logstash/conf.d/full.conf    # run logstash to filter the logs, then visit the elasticsearch head page to view the results
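To confirm that all four indices were created, the standard _cat API gives a quick listing:

curl 'http://192.168.80.100:9200/_cat/indices?v'    # the four nagios-* indices should appear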
