Introduction and deployment method of ELK Log system

2025-01-18 Update From: SLTechnology News&Howtos



This article explains what the ELK log system is and how to deploy it. The content is straightforward and easy to follow; let's work through it step by step.

I. ELK application scenarios

In a complex enterprise service landscape, each application records logs in its own way, which makes archiving and log monitoring difficult. Neither developers nor operations staff can accurately locate problems on a given service or server, and there is no efficient way to search log content and pinpoint issues quickly. We therefore need a centralized, independent system that collects and manages the log information of all services and servers, and provides a good UI for displaying, processing, and analyzing the data.

ELK provides an open source solution that meets the above requirements efficiently and conveniently.

II. Introduction to the ELK log system

1. ELK is an abbreviation of three open source frameworks: Elasticsearch, Logstash, and Kibana.

Each framework, and its role in ELK:

Elasticsearch: an open source distributed search engine providing storage, analysis, and search. Features: distributed, RESTful, supports near-real-time search under massive data volume and high concurrency, and is stable, reliable, fast, and easy to use. In ELK it receives the structured log data collected by Logstash and serves it to Kibana for query and analysis.

Logstash: an open source log collection, parsing, and filtering framework that supports many kinds of data input and output. In ELK it collects logs, filters them into structured data, and forwards them to Elasticsearch.

Kibana: an open source log reporting and visualization system with good web support for Elasticsearch and Logstash. In ELK it analyzes and displays the data provided by Elasticsearch.

2. The classic ELK architecture is as follows

[Figure: ELK classic architecture]

As shown in the figure:

Logstash is deployed on each service host to collect, filter, and push the logs of that service.

Elasticsearch stores the structured data sent by Logstash and serves it to Kibana.

Kibana provides users with a web UI for displaying and analyzing the data, producing charts and so on.

Note: "logs" here refers to all kinds of log files and log information: windows, nginx, tomcat, web server logs, and so on.

3. Improving ELK

Because Logstash consumes a lot of resources, and server resources are valuable, a lightweight log collection framework, Beats, was introduced. It includes the following six members:

Packetbeat is used to collect network traffic data.

Heartbeat is used for uptime monitoring.

Filebeat is used to collect log file data.

Winlogbeat is used to collect Windows event log data.

Metricbeat is used to collect metrics data.

Auditbeat is used to collect audit data.

[Figure: improved ELK architecture]

4. Further thinking

Traditional web projects usually do their logging with mature logging frameworks such as log4j or logback (higher performance). Can these be combined with ELK for an even better solution?

[Figure: ELK upgrade 1.0]

As shown in the figure:

Logback is added to the collection path so that logs are sent directly to Logstash. With this approach, the web service can drop part of the configuration for generating log files, and real-time performance and log push efficiency both improve.

5. High concurrency scenarios

Because Logstash is performance-hungry, high concurrency scenarios tend to hit traffic bottlenecks even with a Logstash cluster, so middleware can be added in front of it to buffer the logs. Since Logstash supports many kinds of data sources, there are also many middleware choices; the common ones are Kafka and Redis.

[Figure: ELK upgrade 2.0]

As shown in the figure:

Host1, the middleware, and Host2 would all be highly available service clusters; they are drawn as single nodes for simplicity.

Business log data produced by logback is buffered by writing it to middleware such as Redis or Kafka, and then drained into Logstash under a reasonable flow-rate limit.

For logs collected by Beats (e.g. filebeat), if there is no real-time requirement, the traffic Beats forwards can be limited by controlling how fast the log files are updated.
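The "flow threshold" idea above can be sketched as a token bucket. A minimal Python sketch of the throttling concept (the class and parameter names are illustrative, not from Logstash, Kafka, or Redis):

```python
import time


class TokenBucket:
    """Minimal token-bucket rate limiter (illustrative only)."""

    def __init__(self, rate, capacity):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        # Refill proportionally to elapsed time, then spend one token if available.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False


if __name__ == "__main__":
    # e.g. forward at most ~100 events per second toward Logstash
    bucket = TokenBucket(rate=100, capacity=10)
    forwarded = sum(1 for _ in range(1000) if bucket.allow())
    print("forwarded", forwarded)
```

A consumer draining Redis or Kafka would call `allow()` before handing each event to Logstash and leave the event in the buffer when it returns False.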

III. Building ELK (non-cluster)

1. Download ELK (keep all versions consistent):

Elasticsearch: elasticsearch-6.3.0.tar from the official website (see the elasticsearch documentation)

Kibana: kibana-6.3.0 Linux 64-bit from the official website (see the kibana documentation)

Logstash: logstash-6.3.0.tar from the official website (see the logstash documentation)

Filebeat: filebeat-6.3.0 Linux 64-bit from the official website (see the beats documentation)

Note: the demo uses centos7, i.e. the Linux versions. Adjust according to your actual needs.

Upload the packages to the centos7 virtual machine with the rz command.

2. Decompression

tar -zxvf elasticsearch-6.3.0.tar.gz

tar -zxvf kibana-6.3.0-linux-x86_64.tar.gz

tar -zxvf filebeat-6.3.0-linux-x86_64.tar.gz

tar -zxvf logstash-6.3.0.tar.gz

Note: you can extract straight into a target directory with tar's -C option, or move the extracted directories afterwards with the mv command. This tutorial moves them to the /home directory.

3. Set up the Java environment

jdk 1.8 is recommended; see a jdk environment configuration guide.

4. Install elasticsearch

Modify the configuration file

vi /home/elasticsearch-6.3.0/config/elasticsearch.yml

#---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0   ## server ip (0.0.0.0 listens on all interfaces)
#
# Set a custom port for HTTP:
#
http.port: 9200   ## service port
#
# For more information, consult the network module documentation.
#

Start elasticsearch

/home/elasticsearch-6.3.0/bin/elasticsearch      # run in the command window

/home/elasticsearch-6.3.0/bin/elasticsearch -d   # run in the background

Stop elasticsearch

Ctrl+C                  # stop when running in the command window

ps -ef | grep elastic   # find the background process

kill -9 4442            ## 4442 is the pid found by the previous command

See the FAQ for common problems when starting elasticsearch.

Verify elasticsearch startup
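One way to verify the startup is to query elasticsearch's cluster-health endpoint. A small Python sketch (the URL assumes the default port configured above):

```python
import json
import urllib.request


def health_ok(health):
    # A single-node setup normally reports "yellow" (no replicas),
    # so accept green or yellow as healthy.
    return health.get("status") in ("green", "yellow")


def check(url="http://localhost:9200/_cluster/health"):
    # Fetch the cluster health JSON and evaluate it.
    with urllib.request.urlopen(url, timeout=5) as resp:
        return health_ok(json.load(resp))


if __name__ == "__main__":
    print("elasticsearch healthy:", check())
```

A plain `curl http://localhost:9200` also works if you just want to see the banner JSON.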

5. Install kibana

Modify the configuration file

vi /home/kibana-6.3.0-linux-x86_64/config/kibana.yml

server.port: 5601   ## service port

server.host: "0.0.0.0"   ## server ip (0.0.0.0 listens on all interfaces)

elasticsearch.url: "http://localhost:9200"   ## address of the elasticsearch service

Start kibana

/home/kibana-6.3.0-linux-x86_64/bin/kibana           # start in the command window

nohup /home/kibana-6.3.0-linux-x86_64/bin/kibana &   # start in the background

Stop kibana

Ctrl+C                 # stop when running in the command window

ps -ef | grep kibana   # find the background process

kill -9 4525           ## 4525 is the pid found by the previous command

Note: the most common problems are port conflicts and missing directory permissions. Run kibana as the same non-root user that owns the elasticsearch directory, not as root.

Verify kibana startup

6. Install logstash

Create a new configuration file

vi /home/logstash-6.3.0/config/logback-es.conf

input {
  tcp {
    port => 9601
    codec => json_lines
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
  }
  stdout { codec => rubydebug }
}

Note: when copying the configuration above, remove any stray spaces so the file stays syntactically valid.

Note: the annotated version below explains the configuration part by part.

input {   ## input source configuration
  tcp {   ## use the tcp input; see the official website for detailed documentation
    port => 9601   ## the server listens on port 9601; the default bind address is localhost
    codec => json_lines   ## parse logs as json lines; requires the json_lines codec plug-in
  }
}
filter {   ## data processing
}
output {   ## data output configuration
  elasticsearch {   ## send to elasticsearch
    hosts => "localhost:9200"   ## separate multiple cluster addresses with commas
  }
  stdout { codec => rubydebug }   ## also print to the command window
}

See the official logstash documentation for supported input plug-ins and downloads.

Install the logstash json_lines codec plug-in

/home/logstash-6.3.0/bin/logstash-plugin install logstash-codec-json_lines

Start logstash

/home/logstash-6.3.0/bin/logstash -f /home/logstash-6.3.0/config/logback-es.conf          ## run in the command window

nohup /home/logstash-6.3.0/bin/logstash -f /home/logstash-6.3.0/config/logback-es.conf &  ## run in the background

Stop logstash

Ctrl+C                   # stop when running in the command window

ps -ef | grep logstash   # find the background process

kill -9 4617             ## 4617 is the pid found by the previous command
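Before wiring up logback, you can push a hand-made event into the tcp/json_lines input to confirm the pipeline works end to end. A minimal Python sketch (host and port taken from the configuration above):

```python
import json
import socket


def encode_json_lines(event):
    # The json_lines codec expects one JSON object per line, newline-terminated.
    return (json.dumps(event) + "\n").encode("utf-8")


def send_event(event, host="localhost", port=9601):
    # Open a TCP connection to the logstash tcp input and write one event.
    with socket.create_connection((host, port), timeout=5) as conn:
        conn.sendall(encode_json_lines(event))


if __name__ == "__main__":
    send_event({"message": "hello from test script", "level": "INFO"})
```

If logstash is running with the stdout/rubydebug output above, the event should appear in its command window and in elasticsearch.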

7. Use logback to send logs to logstash

Set up a springboot project (for quick testing).

Pom file dependency:

<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>4.11</version>
</dependency>

logback.xml (the XML markup was lost in extraction; the logstash destination is 192.168.253.6:9601)
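Since the logback.xml markup above did not survive extraction, here is a minimal sketch assuming the standard LogstashTcpSocketAppender and LogstashEncoder that ship with logstash-logback-encoder, pointing at the destination shown above:

```xml
<configuration>
    <!-- send logs over TCP as JSON lines to the logstash tcp input -->
    <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>192.168.253.6:9601</destination>
        <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
    </appender>
    <root level="INFO">
        <appender-ref ref="LOGSTASH"/>
    </root>
</configuration>
```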

SpringbootLogbackApplication.java test class (the tail of the original listing was truncated; the loop body is a minimal reconstruction that simply logs a message every second):

package com.zyj;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class SpringbootLogbackApplication {

    private final static Logger logger = LoggerFactory.getLogger(SpringbootLogbackApplication.class);

    public static void main(String[] args) {
        new Thread(() -> {
            // Loop reconstructed: the original listing was cut off here.
            for (int i = 0; i < 100; i++) {
                logger.info("test message " + i);
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        }).start();
        SpringApplication.run(SpringbootLogbackApplication.class, args);
    }
}
