Construction and deployment of Open Source Log Analysis system ELK platform

2025-01-17 Update From: SLTechnology News&Howtos


I. Preface

Logs mainly include system logs, application logs, and security logs. Through logs, operations staff and developers can learn about a server's software and hardware state and check for configuration errors and their causes. Analyzing logs regularly reveals the server's load, performance, and security posture, so that problems can be corrected in time.

Usually, logs are scattered across different devices. If you manage dozens or hundreds of servers and still check logs with the traditional method of logging in to each machine in turn, the process is tedious and inefficient. The remedy is centralized log management, for example using open source syslog to collect and aggregate the logs from all servers.

Once logs are centralized, statistics and retrieval become the next challenge. Linux commands such as grep, awk and wc can handle simple searches and counts, but they fall short for more demanding querying, sorting and statistics, especially across a large number of machines.
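As a concrete toy illustration of the traditional approach, the snippet below creates two sample log files (the paths and log contents are invented for this example) and then uses grep, wc and awk to count errors across them:

```shell
# Create two fake "server" logs (sample data for illustration only)
mkdir -p /tmp/demo-logs
printf '2017-04-01 10:00:01 ERROR disk full\n2017-04-01 10:00:02 INFO ok\n' > /tmp/demo-logs/web1.log
printf '2017-04-01 10:00:03 ERROR upstream timeout\n' > /tmp/demo-logs/web2.log

# grep + wc: total ERROR lines across all logs
grep -h ERROR /tmp/demo-logs/*.log | wc -l        # prints 2

# awk: ERROR count per file
awk '/ERROR/ {n[FILENAME]++} END {for (f in n) print f, n[f]}' /tmp/demo-logs/*.log
```

This is workable for a handful of machines, but repeating it over hundreds of hosts is exactly the pain point ELK removes.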

The open source real-time log analysis platform ELK solves all of the above problems. ELK is composed of three open source tools: ElasticSearch, Logstash and Kibana. Official website: https://www.elastic.co/products

Elasticsearch is an open source distributed search engine. Its characteristics include: distributed operation, zero configuration, automatic discovery, automatic index sharding, an index replica mechanism, a RESTful interface, multiple data sources, and automatic search load balancing.

Logstash is a completely open source tool that collects, analyzes, and stores your logs for later use (e.g., search).

Kibana is also an open source and free tool. It provides a friendly web interface for the log data collected by Logstash and stored in ElasticSearch, helping you aggregate, analyze, and search important logs.

II. Preparatory work

Prepare 3 machines:

192.168.2.61 (install Elasticsearch, Kibana, Logstash)

192.168.2.83 (collect umember logs)

192.168.2.93 (collect Nginx logs, install Logstash)

Operating system: CentOS 6.5 x64

Download the installation package

Elasticsearch:

https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.3.0.tar.gz

Logstash:

https://artifacts.elastic.co/downloads/logstash/logstash-5.3.0.tar.gz

Kibana:

https://artifacts.elastic.co/downloads/kibana/kibana-5.3.0-linux-x86_64.tar.gz

Install the third-party EPEL repository

rpm -ivh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

Install the JDK environment (all machines)

http://120.52.72.24/download.oracle.com/c3pr90ntc0td/otn-pub/java/jdk/8u131-b11/d54c1d3a095b4ff2b6607d096fa80163/jdk-8u131-linux-x64.tar.gz

cd /usr/local
tar -zxf jdk-8u131-linux-x64.tar.gz
ln -sv jdk1.8.0_131/ jdk
vi /etc/profile.d/jdk.sh

Add the following

export JAVA_HOME=/usr/local/jdk
export PATH=$PATH:/usr/local/jdk/bin


chmod 755 /etc/profile.d/jdk.sh
. /etc/profile.d/jdk.sh

Check to see if it is effective

java -version

Modify ulimit restrictions

vi /etc/security/limits.d/90-nproc.conf

* soft nproc 10240
* hard nproc 10240
* soft nofile 65536
* hard nofile 65536
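These limits only apply to new login sessions. From a fresh shell you can confirm what the current session actually allows (the values printed depend on your system, so treat this as a sanity check only):

```shell
ulimit -n   # max open file descriptors for this session
ulimit -u   # max user processes for this session
```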

vi /etc/sysctl.conf

Add the following

vm.max_map_count = 262144

Then execute the following command

sysctl -p

III. ElasticSearch installation and configuration

Create an ELK directory and put all the installation packages in this directory.

[unilife@cdh4 ~]$ mkdir elk
[unilife@cdh4 ~]$ cd elk/

Extract the ElasticSearch installation package

[unilife@cdh4 elk]$ tar -zxf elasticsearch-5.3.0.tar.gz

Install the Head plug-in

yum install npm git          # install node.js and git
git clone git://github.com/mobz/elasticsearch-head.git
cd elasticsearch-head
npm install
npm run start &              # or start with: grunt server

Log in through http://192.168.2.61:9100/

Then edit the configuration file for ES:

vi config/elasticsearch.yml

Modify the following configuration items:

cluster.name: my-application
node.name: node-1
path.data: /tmp/elasticsearch/data
path.logs: /tmp/elasticsearch/logs
network.host: 0.0.0.0
http.port: 9200
http.cors.enabled: true
http.cors.allow-origin: "*"

Leave the other options by default, and then start ES:

[unilife@cdh4 elk]$ /home/unilife/elk/elasticsearch-5.3.0/bin/elasticsearch &

As you can see, the transport port used to communicate with other nodes is 9300, and the port that accepts HTTP requests is 9200.

Then, open http://192.168.2.61:9200/ through the web page, and you can see the following

The returned information shows the configured cluster_name and node name, as well as the version of the installed ES.

IV. Logstash installation

The Logstash functions are as follows:

Logstash is essentially a collector: we specify an Input and an Output for it (both can be multiple). Since we need to ship the Log4j logs of a Java project to ElasticSearch, the Input here is Log4j and the Output is ElasticSearch.

tar -zxf logstash-5.3.0.tar.gz
cd logstash-5.3.0

Write a configuration file

vi config/log4j_to_es.conf

# For detail structure of this file
# See: https://www.elastic.co/guide/en/logstash/current/configuration-file-structure.html
input {
  # For detail config for log4j as input
  # See: https://www.elastic.co/guide/en/logstash/current/plugins-inputs-log4j.html
  log4j {
    mode => "server"
    host => "192.168.2.61"
    port => 4567
  }
}
filter {
  # Only matched data are sent to output.
}
output {
  # For detail config for elasticsearch as output
  # See: https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html
  elasticsearch {
    action => "index"              # The operation on ES
    hosts  => "192.168.2.61:9200"  # ElasticSearch host, can be array.
    index  => "dubbo"              # The index to write data to, can be any string.
  }
}

Start Logstash

[unilife@cdh4 logstash-5.3.0]$ ./bin/logstash -f config/log4j_to_es.conf &

The -f option specifies the configuration file.

Modify the log4j.properties of the Java project so that Log4j sends its output to a SocketAppender

log4j.rootCategory=debug, stdout, R, E, socket

# appender socket
log4j.appender.socket=org.apache.log4j.net.SocketAppender
log4j.appender.socket.Port=4567
log4j.appender.socket.RemoteHost=192.168.2.61
log4j.appender.socket.layout=org.apache.log4j.PatternLayout
log4j.appender.socket.layout.ConversionPattern=%d [%-5p] [%l] %m%n
log4j.appender.socket.ReconnectionDelay=10000

Finally restart the Java service

Use the Head plug-in to view ES status and content

The above uses ES's Head plugin to observe the status and data of the ES cluster, but Head is just a simple page for interacting with ES; it cannot generate reports or charts. Next we use Kibana to search the data and build charts.

V. Kibana installation

Extract the installation package

tar -zxf kibana-5.3.0.tar.gz
cd kibana-5.3.0

Configure kibana

[unilife@cdh4 kibana-5.3.0]$ vi config/kibana.yml

Modify the following

server.host: "0.0.0.0"
elasticsearch.url: "http://192.168.2.61:9200"

Start Kibana

[unilife@cdh4 kibana-5.3.0]$ ./bin/kibana &

Access Kibana through http://192.168.2.61:5601/

Before using Kibana, you need to configure at least one index name or pattern, which tells Kibana which ES index to analyze. Here I enter the previously configured index name dubbo; Kibana automatically loads the fields of the documents under that index and selects an appropriate field as the time field:

Then switch to the Discover tag and you can see the data in ES:

VI. Logstash collects logs

6.1. Logstash collects Nginx logs

Operation on 192.168.2.93

mkdir /home/unilife/elk
cd /home/unilife/elk

Extract the file

tar -zxf logstash-5.3.0.tar.gz
cd logstash-5.3.0

Write a configuration file

[unilife@localhost bin]$ vi /home/unilife/elk/logstash-5.3.0/config/nginx_to_es.conf

Add the following

input {
  file {
    type => "nginx_access"
    path => ["/usr/local/nginx/logs/access.log"]
  }
}
filter {
  # Only matched data are sent to output.
}
output {
  # For detail config for elasticsearch as output,
  # See: https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html
  elasticsearch {
    action => "index"              # The operation on ES
    hosts  => "192.168.2.61:9200"  # ElasticSearch host, can be array.
    index  => "nginx"              # The index to write data to, can be any string.
  }
}
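The filter block above is left empty, so each Nginx line reaches ES as one unparsed message string. If you want structured fields (client IP, status code, bytes sent), a grok filter can parse the access log; this is only a sketch, and it assumes your Nginx access_log uses the stock combined format:

```
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}
```

Before restarting Logstash with a changed file, `./bin/logstash -f config/nginx_to_es.conf --config.test_and_exit` checks the configuration syntax without starting the pipeline.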

Start Logstash

[unilife@localhost bin]$ ./bin/logstash -f config/nginx_to_es.conf &

Use ElasticSearch's Head plug-in to view ES status and content

You can see that the nginx log has been stored in ES

Then create an index pattern for nginx in Kibana

Nginx data can be seen on Kibana.

6.2. Logstash collects logs through Kafka

Edit configuration file

[unilife@localhost bin]$ vi /home/unilife/elk/logstash-5.3.0/config/kafka_to_elasticsearch.conf

Add the following

input {
  kafka {
    topics => "unilife_nginx_production"
    group_id => "flume_unilife_nginx_production"
    bootstrap_servers => "192.168.2.240:9092,192.168.2.241:9093,192.168.2.242:9094,192.168.2.243:9095,192.168.2.244:9096"
  }
}
output {
  elasticsearch {
    action => "index"
    hosts => ["192.168.2.240:9200", "192.168.2.241:9200", "192.168.2.242:9200", "192.168.2.243:9200", "192.168.2.244:9200"]
  }
}

Start Logstash

[unilife@localhost bin]$ ./bin/logstash -f config/kafka_to_elasticsearch.conf &
