(practical application) Construction of ELK environment


What we bring to you today is the open source real-time log analysis stack ELK. ELK is made up of three open source tools: Elasticsearch, Logstash and Kibana. Official website: https://www.elastic.co

The three tools are:

Elasticsearch is an open source distributed search engine. Its features include: distributed operation, zero configuration, automatic discovery, automatic index sharding, an index replica mechanism, a RESTful interface, multiple data sources, and automatic search load balancing.

Logstash is a completely open source tool that collects, analyzes, and stores your logs for later use (e.g., search).

Kibana is also an open source and free tool. It provides a friendly web interface for log analysis on top of Logstash and Elasticsearch, helping you summarize, analyze and search important log data.
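As a small illustration of that RESTful interface, once a node is running you can query it with plain curl (a minimal sketch, assuming a node on the default port 9200):

curl http://localhost:9200/    # basic node and cluster information as JSON
curl 'http://localhost:9200/_cluster/health?pretty'    # cluster health: status, node count, shard allocation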

Environment:

System      Software to install   Hostname  IP               Description
CentOS 6.5  Elasticsearch         test5     192.168.253.210  search and log storage
CentOS 6.5  Elasticsearch         test4     192.168.253.200  search and log storage
CentOS 6.5  Logstash, nginx       test1     192.168.253.150  collects logs and ships them to the nodes above
CentOS 6.5  Kibana, nginx         test2     192.168.253.100  web display

Architecture schematic (figure not reproduced here).

First, install the Elasticsearch cluster and test it before installing the other software.

Install elasticsearch-2.3.3.rpm on both test5 and test4. Java 1.8 is required first, so run the following steps on each node:

yum remove java-1.7.0-openjdk
rpm -ivh jdk-8u51-linux-x64.rpm
java -version
yum localinstall elasticsearch-2.3.3.rpm -y
service elasticsearch start
cd /etc/elasticsearch/
vim elasticsearch.yml

Modify the configuration as follows:

cluster.name: myelk          # name of the cluster; it must be the same on every node in the cluster
node.name: test5             # name of this node; every node must have a different name
path.data: /path/to/data     # where the data is stored; on production machines put this on a dedicated large partition
path.logs: /path/to/logs     # log directory
bootstrap.mlockall: true     # lock enough memory at startup; performance is much better, though I won't enable it for this test
network.host: 0.0.0.0        # listening IP address; 0.0.0.0 means all addresses
http.port: 9200              # listening port
discovery.zen.ping.unicast.hosts: ["hostip", "hostip"]    # the IPs of the cluster members; without a cluster a single node works on its own
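For example, with the addresses from the environment table above, the relevant lines on test5 would look like this (a sketch; the data and log paths are kept as in this article, and on test4 only node.name changes):

cluster.name: myelk
node.name: test5
path.data: /path/to/data
path.logs: /path/to/logs
network.host: 0.0.0.0
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.253.210", "192.168.253.200"]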

mkdir -pv /path/to/{data,logs}
chown -R elasticsearch.elasticsearch /path

Start the service with service elasticsearch start and check that the listening ports are up:

[root@test4 ~]# ss -tln
State      Recv-Q Send-Q    Local Address:Port      Peer Address:Port
LISTEN     0      128                  :::54411                :::*
LISTEN     0      128                  :::111                  :::*
LISTEN     0      128                   *:111                   *:*
LISTEN     0      50                   :::9200                 :::*
LISTEN     0      50                   :::9300                 :::*
LISTEN     0      128                  :::22                   :::*
LISTEN     0      128                   *:22                    *:*
LISTEN     0      128                   *:51574                 *:*
LISTEN     0      128           127.0.0.1:631                   *:*
LISTEN     0      128                 ::1:631                  :::*
LISTEN     0      100                 ::1:25                   :::*
LISTEN     0      100           127.0.0.1:25                    *:*

The configuration on both nodes is the same; only the IP addresses and node names above need to differ.

[root@test5 ~]# ss -tln
State      Recv-Q Send-Q    Local Address:Port      Peer Address:Port
LISTEN     0      128                   *:45822                 *:*
LISTEN     0      128                  :::39620                :::*
LISTEN     0      128                  :::111                  :::*
LISTEN     0      128                   *:111                   *:*
LISTEN     0      50                   :::9200                 :::*
LISTEN     0      50                   :::9300                 :::*
LISTEN     0      128                  :::22                   :::*
LISTEN     0      128                   *:22                    *:*
LISTEN     0      128           127.0.0.1:631                   *:*
LISTEN     0      128                 ::1:631                  :::*
LISTEN     0      100                 ::1:25                   :::*
LISTEN     0      100           127.0.0.1:25                    *:*
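With both nodes up, a quick sanity check (a sketch, using the IPs above) is to list the cluster members from either node; both test4 and test5 should appear:

curl 'http://192.168.253.210:9200/_cat/nodes?v'    # one row per node that has joined the cluster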

After installing the head and kopf plug-ins, visit ip:9200/_plugin/head and ip:9200/_plugin/kopf (these plug-ins give a graphical view of Elasticsearch's status and let you create and delete indexes):

/usr/share/elasticsearch/bin/plugin install lmenezes/elasticsearch-kopf
/usr/share/elasticsearch/bin/plugin install mobz/elasticsearch-head

[root@test5]# /usr/share/elasticsearch/bin/plugin install lmenezes/elasticsearch-kopf
-> Installing lmenezes/elasticsearch-kopf...
Trying https://github.com/lmenezes/elasticsearch-kopf/archive/master.zip ...
Downloading .................. DONE
Verifying https://github.com/lmenezes/elasticsearch-kopf/archive/master.zip checksums if available ...
NOTE: Unable to verify checksum for downloaded plugin (unable to find .SHA1 or .MD5 file to verify)
Installed kopf into /usr/share/elasticsearch/plugins/kopf

[root@test5]# /usr/share/elasticsearch/bin/plugin install mobz/elasticsearch-head
-> Installing mobz/elasticsearch-head...
Trying https://github.com/mobz/elasticsearch-head/archive/master.zip ...
Downloading ...... DONE
Verifying https://github.com/mobz/elasticsearch-head/archive/master.zip checksums if available ...
NOTE: Unable to verify checksum for downloaded plugin (unable to find .SHA1 or .MD5 file to verify)
Installed head into /usr/share/elasticsearch/plugins/head

Second, install the nginx and Logstash software

yum -y install zlib zlib-devel openssl openssl-devel pcre pcre-devel

The packages used are:

nginx-1.8.1-1.el6.ngx.x86_64.rpm
logstash-2.3.3-1.noarch.rpm
jdk-8u51-linux-x64.rpm

Install the nginx service on test1; its logs are what we will collect. The log is at /var/log/nginx/access.log.

Then install logstash-2.3.3-1.noarch.rpm on test1:

yum remove java-1.7.0-openjdk
rpm -ivh jdk-8u91-linux-x64.rpm
rpm -ivh logstash-2.3.3-1.noarch.rpm
/etc/init.d/logstash start    # start the service

To check the environment, run the command below; it tests whether Logstash works normally. After it starts, type something and the event should be echoed back:

/opt/logstash/bin/logstash -e 'input { stdin {} } output { stdout { codec => rubydebug } }'

Settings: Default pipeline workers: 1
Pipeline main started
hello world
{
       "message" => "hello world",
      "@version" => "1",
    "@timestamp" => "2017-05-24T08:04:46.993Z",
          "host" => "0.0.0.0"
}

Then enter:

/opt/logstash/bin/logstash -e 'input { stdin {} } output { elasticsearch { hosts => ["192.168.253.200:9200"] index => "test" } }'

Now whatever you type is sent to the Elasticsearch node at 253.200, which creates an index with your chosen name, test, under /path/to/data/myelk/nodes/0/indices. Type a few more lines, then check on 253.200 that the files exist, which proves everything works:

[root@test4 ~]# ls /path/to/data/myelk/nodes/0/indices/
test
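Alternatively, the index list can be read over the REST interface instead of the filesystem (a quick sketch against the same node):

curl 'http://192.168.253.200:9200/_cat/indices?v'    # lists every index with health, size and document count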

Then create a configuration file ending in .conf under /etc/logstash/conf.d/ on test1. Since it collects nginx logs, I call it nginx.conf. The contents are as follows:

[root@test1 nginx]# cd /etc/logstash/conf.d/
[root@test1 conf.d]# ls
nginx.conf
[root@test1 conf.d]# cat nginx.conf

input {
  file {
    type => "accesslog"
    path => "/var/log/nginx/access.log"
    start_position => "beginning"
  }
}
output {
  if [type] == "accesslog" {
    elasticsearch {
      hosts => ["192.168.253.200"]
      index => "nginx-access-%{+YYYY.MM.dd}"
    }
  }
}

/etc/init.d/logstash configtest    # validate the configuration
ps -ef | grep java                 # confirm the Logstash Java process is running
/opt/logstash/bin/logstash -f nginx.conf

Then check Elasticsearch to see whether the index has been generated; access the nginx service a few times first so there are log entries. If no index appears, modify the init script:

vi /etc/init.d/logstash

LS_USER=root    # change this to root, or grant the logstash user read permission on the collected log; then restart the service and the index is generated
LS_GROUP=root
LS_HOME=/var/lib/logstash
LS_HEAP_SIZE="1g"
LS_LOG_DIR=/var/log/logstash
LS_LOG_FILE="${LS_LOG_DIR}/$name.log"
LS_CONF_DIR=/etc/logstash/conf.d
LS_OPEN_FILES=16384
LS_NICE=19
KILL_ON_STOP_TIMEOUT=${KILL_ON_STOP_TIMEOUT-0}    # defaults to zero but can be changed by the user
LS_OPTS=""
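Running Logstash as root works, but a less drastic alternative (a sketch, not from the article) is to keep LS_USER=logstash and grant that user read access to the log instead, for example with a POSIX ACL:

# Let the logstash user read the nginx access log (assumes the filesystem supports ACLs).
setfacl -m u:logstash:r /var/log/nginx/access.log
# The user also needs search permission on the directory that contains it.
setfacl -m u:logstash:x /var/log/nginx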

View on test4:

[root@test4 ~]# ls /path/to/data/myelk/nodes/0/indices/
nginx-access-2017.05.23  test

[root@test1 logstash]# cat logstash.log
{:timestamp=>"2017-05-24T16:05:19.659000+0800", :message=>"Pipeline main started"}

Third, install the Kibana software

Install Kibana on test2 after the above installation is complete:

rpm -ivh kibana-4.5.1-1.x86_64.rpm

Edit the configuration file /opt/kibana/config/kibana.yml and modify only the following items:

server.port: 5601                                   # listening port
server.host: "0.0.0.0"                              # listening address
elasticsearch.url: "http://192.168.253.200:9200"    # elasticsearch address

/etc/init.d/kibana start    # start the service

Visit Kibana at http://ip:5601

Add the index pattern shown, i.e. the nginx-access-2017.05.23 index defined above.

Next, configure Logstash on the Kibana server to collect its nginx logs as well. Install Logstash there (following the earlier steps), then write a configuration file under /etc/logstash/conf.d/:

[root@test2 conf.d]# vim nginx.conf

input {
  file {
    type => "accesslog"
    path => "/var/log/nginx/access.log"
    start_position => "beginning"
  }
}
output {
  if [type] == "accesslog" {
    elasticsearch {
      hosts => ["192.168.253.200"]
      index => "nginx-access-%{+MM.dd.YYYY}"
    }
  }
}

/opt/logstash/bin/logstash -f nginx.conf

Check that a new month-day-year nginx access index has appeared in Elasticsearch:

[root@test4 ~]# ls /path/to/data/myelk/nodes/0/indices/
nginx-access-05.23.2017  nginx-access-2017.05.23  test

Then add the new index pattern in Kibana in the browser, just as before.

Fourth, some other configurations.

Accessing Kibana directly is relatively insecure, so we put nginx in front of it as a proxy and require a username and password. First install nginx on the Kibana server, then add the following to the nginx configuration:

server
{
    listen 80;
    server_name localhost;

    auth_basic "Restricted Access";
    auth_basic_user_file /usr/local/nginx/conf/htpasswd.users;    # users and passwords

    location / {
        proxy_pass http://localhost:5601;    # proxy Kibana's port 5601 so it can be reached directly on port 80
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header REMOTE-HOST $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
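Validate the configuration and restart nginx before relying on it (standard commands; the init script path assumes the nginx package used above):

nginx -t                     # check the configuration syntax
/etc/init.d/nginx restart    # apply the new server block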

Create the password and user file htpasswd.users. You need to install the httpd-tools package first.

htpasswd -bc /usr/local/nginx/conf/htpasswd.users admin paswdadmin    # the user comes first, followed by the password

After that, access requires the username and password, and everything goes through port 80.
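A quick way to verify the protection from another machine (a sketch, using test2's IP and the credentials created above):

curl -I http://192.168.253.100/                        # should return 401 Unauthorized
curl -I -u admin:paswdadmin http://192.168.253.100/    # should return 200 and the Kibana page headers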
