
Deploying an ELK Log Analysis Platform on CentOS 8


Requirements

1. Developers cannot log in to the production servers to view logs.

2. Each system produces its own logs, which are scattered and hard to find.

3. The volume of log data is large, searches are slow, and the data is not available in real time.

Solution: deploy an ELK platform

Introduction to ELK

ELK is the abbreviation of three open source projects: Elasticsearch, Logstash, and Kibana. A fourth component, Filebeat, is often added: a lightweight log collection and shipping agent. Filebeat uses few resources and is well suited to collecting logs on each server and forwarding them to Logstash.
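Filebeat itself is not installed in the steps below. Purely as an illustration, a minimal filebeat.yml that ships logs to Logstash might look like this sketch (the path, host, and port are placeholders, not taken from this article, and key names differ between Filebeat versions):

filebeat.inputs:
  - type: log
    paths:
      - /var/log/messages              # example log file to ship
output.logstash:
  hosts: ["192.168.0.102:5044"]        # assumes a Logstash beats input listening on 5044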

ELK architecture diagram

Introduction to Elasticsearch:

Elasticsearch is an open source distributed search engine that provides three functions: collecting, analyzing and storing data.

Features: distributed, zero configuration, automatic discovery, automatic index sharding, index replica mechanism, RESTful interface, multiple data sources, automatic search load balancing, and more.

Deploy Elasticsearch

1. Configure the yum repository

rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch    # import the signing key

vim /etc/yum.repos.d/elasticsearch.repo    # configure the yum repository

[elasticsearch-2.x]
name=Elasticsearch repository for 2.x packages
baseurl=http://packages.elastic.co/elasticsearch/2.x/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1

2. Install elasticsearch

yum install elasticsearch -y    # install elasticsearch
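Note that the elasticsearch package needs a Java runtime. If one is not already present, installing OpenJDK from the CentOS repositories is one option (an assumption, not part of the original steps):

yum install java-1.8.0-openjdk -y    # Elasticsearch requires a JRE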

3. Configure Elasticsearch

vim /etc/elasticsearch/elasticsearch.yml

cluster.name: yltx                    # line 17: cluster name
node.name: node1                      # line 23: node name
path.data: /data/es-data              # line 33: data directory
path.logs: /var/log/elasticsearch     # line 37: log directory
bootstrap.memory_lock: true           # line 43: prevent swapping
network.host: 0.0.0.0                 # line 54: listen address
http.port: 9200                       # line 58: port

mkdir -p /data/es-data

chown -R elasticsearch:elasticsearch /data/es-data/

4. Memory locking and file descriptor limits

These must be adjusted in a production environment (note).

vim /etc/security/limits.conf

Append at the end of the file:

elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
* soft nofile 65535
* hard nofile 65535
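On CentOS 8 the service is managed by systemd, and limits.conf does not always apply to systemd services; a drop-in like the following sketch is commonly added as well (assuming the elasticsearch.service unit installed by the RPM above):

mkdir -p /etc/systemd/system/elasticsearch.service.d
cat > /etc/systemd/system/elasticsearch.service.d/override.conf <<'EOF'
[Service]
LimitMEMLOCK=infinity
EOF
systemctl daemon-reload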

systemctl start elasticsearch.service    # start the service

netstat -ntap | grep 9200

ps -ef | grep elasticsearch

Web page testing: http://192.168.0.102:9200/
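The same check can also be done from the command line, for example:

curl 'http://192.168.0.102:9200/'                       # basic node and cluster info
curl 'http://192.168.0.102:9200/_cluster/health?pretty' # cluster health status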

Install the Elasticsearch-head plug-in

/usr/share/elasticsearch/bin/plugin install mobz/elasticsearch-head

Web access:

http://192.168.0.102:9200/_plugin/head/

Logstash introduction:

Logstash is mainly used to collect, parse, and filter logs, and supports a large number of data acquisition methods. It usually works in a client/server architecture: the client is installed on each host whose logs need to be collected, and the server filters and transforms the logs received from each node and forwards them to Elasticsearch.

Basic flow of Logstash log collection: input -> codec -> filter -> codec -> output

1. input: where the logs are collected from.

2. filter: filtering and processing before the data is sent on.

3. output: output to Elasticsearch or to a Redis message queue.

4. codec: output to the console (foreground), convenient for testing while experimenting.

5. Logs with a small data volume can be collected monthly. A minimal pipeline combining these stages is sketched after this list.
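Putting the stages together, a pipeline that reads a file, parses it with a grok filter, and writes to both Elasticsearch and the console might look like this sketch (the path, grok pattern, and host are illustrative assumptions, not from the original):

input {
    file {
        path => "/var/log/messages"
        type => "system"
    }
}
filter {
    grok {
        match => { "message" => "%{SYSLOGLINE}" }    # parse standard syslog lines
    }
}
output {
    elasticsearch { hosts => ["192.168.0.102:9200"] }
    stdout { codec => rubydebug }                    # also print events to the console
}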

Deploy Logstash

1. Configure the yum repository

vim /etc/yum.repos.d/logstash.repo

[logstash-2.1]
name=Logstash repository for 2.1.x packages
baseurl=http://packages.elastic.co/logstash/2.1/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1

2. Download and install logstash

yum install logstash -y

Test Logstash

Basic logstash syntax:

input {
    # specify input
}

output {
    # specify output
}

1. Test standard input and output

Use the rubydebug codec to display output in the foreground for testing

/opt/logstash/bin/logstash -e 'input { stdin { } } output { stdout { codec => rubydebug } }'

hello    # type hello to test
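If everything is working, the console prints the event in rubydebug form, roughly like the following (the timestamp and host will of course differ):

{
       "message" => "hello",
      "@version" => "1",
    "@timestamp" => "2020-02-17T08:30:00.000Z",
          "host" => "localhost.localdomain"
}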

2. Test output to a file

/opt/logstash/bin/logstash -e 'input { stdin { } } output { file { path => "/tmp/test-%{+YYYY.MM.dd}.log" } }'

cat /tmp/test-2020.02.17.log

3. Turn on log compression

/opt/logstash/bin/logstash -e 'input { stdin { } } output { file { path => "/tmp/test-%{+YYYY.MM.dd}.log.tar.gz" gzip => true } }'

ll /tmp/

4. Test output to elasticsearch

/opt/logstash/bin/logstash -e 'input { stdin { } } output { elasticsearch { hosts => ["192.168.0.102:9200"] index => "logstash-test-%{+YYYY.MM.dd}" } }'

ll /data/es-data/yltx/nodes/0/indices
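The new index can also be queried directly over the REST interface, for example (a sketch; the index name follows the pattern used above):

curl 'http://192.168.0.102:9200/logstash-test-*/_search?pretty'    # search the test index
curl 'http://192.168.0.102:9200/_cat/indices?v'                    # list all indices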

5. Web page verification

Introduction to Kibana

Kibana is also an open source, free tool. It provides Logstash and Elasticsearch with a friendly web interface for log analysis, helping to summarize, analyze, and search important log data.

Deploy Kibana

1. Download and install kibana

wget https://artifacts.elastic.co/downloads/kibana/kibana-7.6.0-linux-x86_64.tar.gz

tar zxvf kibana-7.6.0-linux-x86_64.tar.gz -C /opt/

mv /opt/kibana-7.6.0-linux-x86_64/ /usr/local/kibana

2. Modify the configuration

vim /usr/local/kibana/config/kibana.yml

server.port: 5601                                     # line 2: access port
server.host: "0.0.0.0"                                # line 5: listen address
elasticsearch.hosts: ["http://192.168.0.102:9200"]    # line 12: Elasticsearch address (elasticsearch.url in older Kibana releases)
kibana.index: ".kibana"                               # line 20

3. Start the service

/usr/local/kibana/bin/kibana &

netstat -ntap | grep 5601    # check the port number
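Starting Kibana with & does not survive a logout or reboot. One alternative is a small systemd unit such as the sketch below (the kibana service account is an assumption introduced here, not part of the original steps; Kibana 7.x refuses to run as root):

useradd -r -s /sbin/nologin kibana                 # hypothetical service account
chown -R kibana:kibana /usr/local/kibana
cat > /etc/systemd/system/kibana.service <<'EOF'
[Unit]
Description=Kibana
After=network.target

[Service]
User=kibana
ExecStart=/usr/local/kibana/bin/kibana
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable --now kibana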

4. Web page validation:

http://192.168.0.102:5601/

Test the ELK platform

Collect system logs and java exception logs

1. Modify the logstash configuration file:

vim /root/file.conf

input {
    file {
        path => "/var/log/messages"                  # collect system logs
        type => "system"
        start_position => "beginning"
    }
    file {
        path => "/var/log/elasticsearch/yltx.log"    # collect Java exception logs
        type => "es-error"
        start_position => "beginning"
        codec => multiline {
            pattern => "^\["
            negate => true
            what => "previous"
        }
    }
}
output {
    if [type] == "system" {
        elasticsearch {
            hosts => ["192.168.0.102:9200"]
            index => "system-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "es-error" {
        elasticsearch {
            hosts => ["192.168.0.102:9200"]
            index => "es-error-%{+YYYY.MM.dd}"
        }
    }
}

2. Write to Elasticsearch

/opt/logstash/bin/logstash -f /root/file.conf

3. View Elasticsearch
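One way to check for the new indices from the command line (a supplementary sketch, not part of the original steps):

curl 'http://192.168.0.102:9200/_cat/indices?v' | grep -E 'system|es-error'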

4. View Kibana

Related materials

ELK official website: https://www.elastic.co/cn/

Chinese ELK Stack guide: https://www.gitbook.com/book/chenryn/elk-stack-guide-cn/details
