
Example Analysis of ELK 5.0.1 + Filebeat 5.0.1 on Linux RHEL 6.6 Monitoring MongoDB Logs

2025-04-07 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)05/31 Report--

This article walks through an example of using ELK 5.0.1 + Filebeat 5.0.1 on Linux RHEL 6.6 to monitor MongoDB logs. The setup is concise and easy to follow; I hope you get something out of the detailed introduction below.

The tools used to build ELK5.0.1 are:

filebeat-5.0.1-linux-x86_64.tar.gz

logstash-5.0.1.tar.gz

elasticsearch-5.0.1.tar.gz

kibana-5.0.1-linux-x86_64.tar.gz

All four packages can be found among the past releases at https://www.elastic.co/downloads.

In addition, ELK 5.0.1 has a requirement on the operating system kernel: the Linux kernel must be newer than 3.5. The operating system used in this experiment is Oracle Linux 6.6.

There is also a Java JDK version requirement; it is best to install JDK 8 (jdk-8u111-linux-x64.tar.gz), which can be downloaded free of charge from the official Oracle website.

The configuration of the linux host that needs to be modified is:

vi /etc/sysctl.conf

vm.max_map_count = 262144

vi /etc/security/limits.conf

* soft nofile 65536

* hard nofile 131072

* soft nproc 2048

* hard nproc 4096
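Elasticsearch refuses to boot when these limits or the kernel version fall short, so it is worth checking before the first start. One detail that trips people up is that kernel versions must be compared numerically, not as strings (a lexical compare ranks "3.10" below "3.5"). A minimal sketch in Python; the helper name is my own:

```python
import re

def kernel_at_least(release, major, minor):
    """Parse a `uname -r` release string such as
    '3.8.13-44.1.1.el6uek.x86_64' and compare it numerically."""
    m = re.match(r"(\d+)\.(\d+)", release)
    if not m:
        raise ValueError("unrecognized kernel release: %r" % release)
    return (int(m.group(1)), int(m.group(2))) >= (major, minor)

# The UEK kernel used in this walkthrough meets the ES 5 requirement:
print(kernel_at_least("3.8.13-44.1.1.el6uek.x86_64", 3, 5))  # True
# A stock RHEL 6 kernel would not:
print(kernel_at_least("2.6.32-431.el6.x86_64", 3, 5))        # False
```

This is why Oracle Linux 6.6 with the UEK kernel works here even though stock RHEL 6 kernels (2.6.32) do not.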

ELK works as follows: Filebeat runs on the database server, tails the MongoDB log, captures log updates in real time, and ships them to Logstash.

Logstash filters and parses the data sent by Filebeat according to pre-written patterns and filter conditions, then forwards the processed events to the Elasticsearch engine.

Kibana displays the data held in Elasticsearch: classification, aggregation, queries, tables, charts, and so on.

The installation process is:

1. Elasticsearch 5.0.1 installation

Make sure the operating system kernel is newer than 3.5 (to restate: ES 5 requires a kernel newer than 3.5, otherwise it cannot boot):

[root@rhel6 ~]# uname -a

Linux rhel6 3.8.13-44.1.1.el6uek.x86_64 #2 SMP Wed Sep 10 06:10:25 PDT 2014 x86_64 GNU/Linux

[root@rhel6 ~] #

Make sure the system JAVA version is 1.8

[root@rhel6 ~]# java -version

java version "1.8.0_111"

Java(TM) SE Runtime Environment (build 1.8.0_111-b14)

Java HotSpot(TM) 64-Bit Server VM (build 25.111-b14, mixed mode)

[root@rhel6 ~] #

Create the es group, the elasticsearch user, and the ES installation directory (note: ES 5 cannot be started as root; attempting to do so fails with an error).

Software installation directory:

/home/elasticsearch/elasticsearch-5.0.1

Data and log storage directory:

/opt/es5.0.1

[root@rhel6 opt]# ls -l

total 20

drwxr-xr-x. 4 elasticsearch es 4096 Feb 13 19:47 es5.0.1

[root@rhel6 opt]# id elasticsearch

uid=700(elasticsearch) gid=700(es) groups=700(es)

[root@rhel6 opt]#

Next, extract elasticsearch-5.0.1.tar.gz into the /home/elasticsearch/elasticsearch-5.0.1 directory and fix the ownership and permissions.

Modify the configuration file for es:

[root@rhel6 config]# vi elasticsearch.yml

path.data: /opt/es5.0.1/data

path.logs: /opt/es5.0.1/logs

network.host: 192.168.144.230 # the local IP address

http.port: 9200 # the ES HTTP service port

Start es5 using the elasticsearch user:

[elasticsearch@rhel6 bin]$ ./elasticsearch

[2017-02-13T19:50:49,111][INFO ][o.e.n.Node] [] initializing ...

[2017-02-13T19:50:49,362][INFO ][o.e.e.NodeEnvironment] [58P-l3h] using [1] data paths, mounts [[(/dev/sda3)]], net usable_space [16.3gb], net total_space [23.4gb], spins? [possibly], types [ext4]

[2017-02-13T19:50:49,363][INFO ][o.e.e.NodeEnvironment] [58P-l3h] heap size [1.9gb], compressed ordinary object pointers [true]

[2017-02-13T19:50:49,365][INFO ][o.e.n.Node] [58P-l3h] node name [58P-l3h] derived from node ID; set [node.name] to override

[2017-02-13T19:50:49,390][INFO ][o.e.n.Node] [58P-l3h] version[5.0.1], pid[3644], build[080bb47/2016-11-11T22:08:49.812Z], OS[Linux/3.8.13-44.1.1.el6uek.x86_64/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_111/25.111-b14]

[2017-02-13T19:50:52,449][INFO ][o.e.p.PluginsService] [58P-l3h] loaded module [aggs-matrix-stats]

[2017-02-13T19:50:52,450][INFO ][o.e.p.PluginsService] [58P-l3h] loaded module [ingest-common]

[2017-02-13T19:50:52,450][INFO ][o.e.p.PluginsService] [58P-l3h] loaded module [lang-expression]

[2017-02-13T19:50:52,450][INFO ][o.e.p.PluginsService] [58P-l3h] loaded module [lang-groovy]

[2017-02-13T19:50:52,450][INFO ][o.e.p.PluginsService] [58P-l3h] loaded module [lang-mustache]

[2017-02-13T19:50:52,450][INFO ][o.e.p.PluginsService] [58P-l3h] loaded module [lang-painless]

[2017-02-13T19:50:52,451][INFO ][o.e.p.PluginsService] [58P-l3h] loaded module [percolator]

[2017-02-13T19:50:52,451][INFO ][o.e.p.PluginsService] [58P-l3h] loaded module [reindex]

[2017-02-13T19:50:52,452][INFO ][o.e.p.PluginsService] [58P-l3h] loaded module [transport-netty3]

[2017-02-13T19:50:52,452][INFO ][o.e.p.PluginsService] [58P-l3h] loaded module [transport-netty4]

[2017-02-13T19:50:52,460][INFO ][o.e.p.PluginsService] [58P-l3h] no plugins loaded

[2017-02-13T19:50:56,213][INFO ][o.e.n.Node] [58P-l3h] initialized

[2017-02-13T19:50:56,213][INFO ][o.e.n.Node] [58P-l3h] starting ...

[2017-02-13T19:50:56,637][INFO ][o.e.t.TransportService] [58P-l3h] publish_address {192.168.144.230:9300}, bound_addresses {192.168.144.230:9300}

[2017-02-13T19:50:56,642][INFO ][o.e.b.BootstrapCheck] [58P-l3h] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks

[2017-02-13T19:50:59,864][INFO ][o.e.c.s.ClusterService] [58P-l3h] new_master {58P-l3h}{58P-l3hGTqm7e9QzXWn0eA}{J3O-p0wfSMeS4evTxfTmVA}{192.168.144.230}{192.168.144.230:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)

[2017-02-13T19:50:59,902][INFO ][o.e.h.HttpServer] [58P-l3h] publish_address {192.168.144.230:9200}, bound_addresses {192.168.144.230:9200}

[2017-02-13T19:50:59,902][INFO ][o.e.n.Node] [58P-l3h] started

[2017-02-13T19:50:59,930][INFO ][o.e.g.GatewayService] [58P-l3h] recovered [0] indices into cluster_state

Visit http://192.168.144.230:9200/?pretty in a browser; output similar to the following indicates that ES started successfully and is serving requests normally:

{
  "name" : "58P-l3h",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "mO7oaIXJQyiwCEA-jsSueg",
  "version" : {
    "number" : "5.0.1",
    "build_hash" : "080bb47",
    "build_date" : "2016-11-11T22:08:49.812Z",
    "build_snapshot" : false,
    "lucene_version" : "6.2.1"
  },
  "tagline" : "You Know, for Search"
}
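Instead of eyeballing the banner in a browser, the same check can be scripted: fetch the root endpoint and inspect the JSON fields (with urllib, curl, or any HTTP client). A minimal sketch that parses the response shown above, embedded as a string so it runs offline:

```python
import json

# The body that http://192.168.144.230:9200/?pretty returned above,
# pasted here so the check needs no running cluster:
banner = '''
{
  "name" : "58P-l3h",
  "cluster_name" : "elasticsearch",
  "version" : { "number" : "5.0.1", "lucene_version" : "6.2.1" },
  "tagline" : "You Know, for Search"
}
'''

info = json.loads(banner)
# The node is usable for this walkthrough if it reports an ES 5.x version:
assert info["version"]["number"].startswith("5."), "expected an ES 5.x node"
print(info["cluster_name"], info["version"]["number"])  # elasticsearch 5.0.1
```

Against a live node you would fetch the same body from http://192.168.144.230:9200/ and apply the same checks.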

2. Logstash 5.0.1 installation

Create a software installation directory: /opt/logstash-5.0.1

Extract logstash-5.0.1.tar.gz to the installation directory

Edit the logstash.conf startup configuration file:

[root@rhel6 config]# cat logstash.conf

#input {
#  stdin {}
#}

input {
  beats {
    host => "0.0.0.0"
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["192.168.144.230:9200"]
    index => "test"
  }
  stdout {
    codec => rubydebug
  }
}

[root@rhel6 config]#

Start Logstash 5:

./logstash -f /opt/logstash-5.0.1/config/logstash.conf

You can see the following output, which indicates that logstash started successfully:

[root@rhel6 bin]# ./logstash -f /opt/logstash-5.0.1/config/logstash.conf

Sending Logstash's logs to /opt/logstash-5.0.1/logs which is now configured via log4j2.properties

[2017-02-14T01:03:25,860][INFO ][logstash.inputs.beats] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}

[2017-02-14T01:03:25,965][INFO ][org.logstash.beats.Server] Starting server on port: 5044

[2017-02-14T01:03:26,305][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>["http://192.168.144.230:9200"]}}

[2017-02-14T01:03:26,307][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}

[2017-02-14T01:03:26,460][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword"}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}

[2017-02-14T01:03:26,483][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["192.168.144.230:9200"]}

[2017-02-14T01:03:26,492][INFO ][logstash.pipeline] Starting pipeline {"id"=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>250}

[2017-02-14T01:03:26,500][INFO ][logstash.pipeline] Pipeline main started

[2017-02-14T01:03:26,552][INFO ][logstash.agent] Successfully started Logstash API endpoint {:port=>9600}

3. Kibana 5.0.1 installation

Create a software installation directory:

[root@rhel6 kibana-5.0.1]# pwd

/opt/kibana-5.0.1

[root@rhel6 kibana-5.0.1]#

Extract kibana-5.0.1-linux-x86_64.tar.gz to the installation directory and modify the configuration file

vi /opt/kibana-5.0.1/config/kibana.yml

server.port: 5601

server.host: "192.168.144.230"

server.name: "rhel6"

elasticsearch.url: "http://192.168.144.230:9200" # the Elasticsearch HTTP endpoint Kibana reads data from

pid.file: /var/run/kibana.pid

Start Kibana 5.0.1 as root; the following output indicates that Kibana started successfully and connected to Elasticsearch:

[root@rhel6 bin]# ./kibana

  log   [info][status][plugin:kibana@5.0.1] Status changed from uninitialized to green - Ready

  log   [13:04:52.657] [info][status][plugin:elasticsearch@5.0.1] Status changed from uninitialized to yellow - Waiting for Elasticsearch

  log   [13:04:52.693] [info][status][plugin:console@5.0.1] Status changed from uninitialized to green - Ready

  log   [13:04:52.947] [info][status][plugin:timelion@5.0.1] Status changed from uninitialized to green - Ready

  log   [info][listening] Server running at http://192.168.144.230:5601

  log   [13:04:52.970] [info][status][ui settings] Status changed from uninitialized to yellow - Elasticsearch plugin is yellow

  log   [info][status][plugin:elasticsearch@5.0.1] Status changed from yellow to yellow - No existing Kibana index found

  log   [info][status][plugin:elasticsearch@5.0.1] Status changed from yellow to green - Kibana index ready

  log   [info][status][ui settings] Status changed from yellow to green - Ready

4. Filebeat 5.0.1 installation

Create a software installation directory:

/opt/filebeat-5.0.1

Extract the filebeat-5.0.1-linux-x86_64.tar.gz package to the software installation directory and modify the configuration file

[root@rhel6 filebeat-5.0.1]# vi filebeat.yml

paths:
  - /opt/logs/*.log # the log directory to monitor

output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]
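Filebeat tails each file matching the paths pattern and remembers a per-file byte offset in its registry file, so restarts resume where they left off instead of resending the whole log. A greatly simplified sketch of that idea (the helper is my own, not Filebeat code):

```python
import os
import tempfile

def read_new_lines(path, registry):
    """Return lines appended since the last call, tracking a per-file
    byte offset the way Filebeat's registry file does (simplified)."""
    offset = registry.get(path, 0)
    with open(path, "r") as f:
        f.seek(offset)               # skip everything already shipped
        lines = f.read().splitlines()
        registry[path] = f.tell()    # remember where we stopped
    return lines

registry = {}
with tempfile.TemporaryDirectory() as d:
    log = os.path.join(d, "firstset.log")
    with open(log, "a") as f:
        f.write("first event\n")
    print(read_new_lines(log, registry))   # ['first event']
    with open(log, "a") as f:
        f.write("second event\n")
    print(read_new_lines(log, registry))   # ['second event'] -- only the new line
```

This is also why the real registry file appears under /opt/filebeat-5.0.1/data/registry in the startup log below.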

Start Filebeat 5 as root:

[root@rhel6 filebeat-5.0.1]# ./filebeat -e -c filebeat.yml -d "publish"

2017-02-13 15:45:47.498852 beat.go:264: INFO Home path: [/opt/filebeat-5.0.1] Config path: [/opt/filebeat-5.0.1] Data path: [/opt/filebeat-5.0.1/data] Logs path: [/opt/filebeat-5.0.1/logs]

2017-02-13 15:45:47.498913 beat.go:174: INFO Setup Beat: filebeat; Version: 5.0.1

2017-02-13 15:45:47.498966 logstash.go:90: INFO Max Retries set to: 3

2017-02-13 15:45:47.499008 outputs.go:106: INFO Activated logstash as output plugin.

2017-02-13 15:45:47.499055 publish.go:291: INFO Publisher name: rhel6

2017-02-13 15:45:47.499169 async.go:63: INFO Flush Interval set to: 1s

2017-02-13 15:45:47.499180 async.go:64: INFO Max Bulk Size set to: 2048

2017-02-13 15:45:47.499241 beat.go:204: INFO filebeat start running.

2017-02-13 15:45:47.499251 registrar.go:66: INFO Registry file set to: /opt/filebeat-5.0.1/data/registry

2017-02-13 15:45:47.499309 registrar.go:99: INFO Loading registrar data from /opt/filebeat-5.0.1/data/registry

2017-02-13 15:45:47.499337 registrar.go:122: INFO States Loaded from registrar: 0

2017-02-13 15:45:47.499346 crawler.go:34: INFO Loading Prospectors: 1

2017-02-13 15:45:47.499381 logp.go:219: INFO Metrics logging every 30s

2017-02-13 15:45:47.499386 prospector_log.go:40: INFO Load previous states from registry into memory

2017-02-13 15:45:47.499431 prospector_log.go:67: INFO Previous states loaded: 0

2017-02-13 15:45:47.499479 crawler.go:46: INFO Loading Prospectors completed. Number of prospectors: 1

2017-02-13 15:45:47.499487 crawler.go:61: INFO All prospectors are initialised and running with 0 states to persist

2017-02-13 15:45:47.499501 prospector.go:106: INFO Starting prospector of type: log

2017-02-13 15:45:47.499630 log.go:84: INFO Harvester started for file: /opt/logs/firstset.log

Under /opt/logs/ I placed a MongoDB log file, which is static for now and can be appended to later. The contents of firstset.log:

[root@rhel6 logs]# cat firstset.log

2017-02-11T06:44:42.954+0000 I COMMAND [conn6] command wangxi.t command: insert { insert: "t", documents: [ { _id: ObjectId('589eb2da39e265f288b9d9ae'), name: "wangxi" } ], ordered: true } ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:25 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 1, W: 1 } }, Collection: { acquireCount: { W: 1 } } } protocol:op_command 7ms

2017-02-11T06:45:59.907+0000 I COMMAND [conn7] command wangxi.t command: find { find: "t", filter: { name: "wangxi" } } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:141 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms

[root@rhel6 logs]#
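The Logstash filter stage would typically split each of these lines into fields with a grok pattern. It helps to prototype the equivalent regular expression against a sample line before writing the grok; a sketch in Python, with capture names of my own choosing (a real grok pattern would name its fields similarly):

```python
import re

# Rough shape of a MongoDB 3.x log line:
#   <timestamp> <severity> <component> [<context>] <message>
LOG_RE = re.compile(
    r"^(?P<ts>\S+)\s+(?P<severity>\w)\s+(?P<component>\w+)\s+"
    r"\[(?P<context>[^\]]+)\]\s+(?P<message>.*)$"
)

line = ('2017-02-11T06:45:59.907+0000 I COMMAND [conn7] '
        'command wangxi.t command: find { find: "t" } ... 0ms')
m = LOG_RE.match(line)
print(m.group("ts"))                       # 2017-02-11T06:45:59.907+0000
print(m.group("severity"), m.group("component"), m.group("context"))
# I COMMAND conn7
```

Once the regex behaves on the samples, the same structure can be transcribed into a grok filter in logstash.conf; the config shown earlier ships the raw message through unparsed.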

Then observe the following output in the Logstash window (showing that Filebeat read /opt/logs/firstset.log and sent it to Logstash):

[2017-02-14T01:21:29,779][INFO ][logstash.agent] Successfully started Logstash API endpoint {:port=>9600}

{
    "@timestamp" => 2017-02-13T17:22:08.837Z,
        "offset" => 413,
      "@version" => "1",
    "input_type" => "log",
          "beat" => {
        "hostname" => "rhel6",
            "name" => "rhel6",
         "version" => "5.0.1"
    },
          "host" => "rhel6",
        "source" => "/opt/logs/firstset.log",
       "message" => "2017-02-11T06:44:42.954+0000 I COMMAND [conn6] command wangxi.t command: insert { insert: \"t\", documents: [ { _id: ObjectId('589eb2da39e265f288b9d9ae'), name: \"wangxi\" } ], ordered: true } ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:25 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 1, W: 1 } }, Collection: { acquireCount: { W: 1 } } } protocol:op_command 7ms",
          "type" => "log",
          "tags" => [
        [0] "beats_input_codec_plain_applied"
    ]
}
{
    "@timestamp" => 2017-02-13T17:22:08.837Z,
        "offset" => 816,
      "@version" => "1",
    "input_type" => "log",
          "beat" => {
        "hostname" => "rhel6",
            "name" => "rhel6",
         "version" => "5.0.1"
    },
          "host" => "rhel6",
        "source" => "/opt/logs/firstset.log",
       "message" => "2017-02-11T06:45:59.907+0000 I COMMAND [conn7] command wangxi.t command: find { find: \"t\", filter: { name: \"wangxi\" } } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:141 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 0ms",
          "type" => "log",
          "tags" => [
        [0] "beats_input_codec_plain_applied"
    ]
}

Then, visit http://192.168.144.230:5601/app/kibana#/management/kibana/indices/test?_g=()&_a=(tab:indexedFields) to create the test index pattern (the index name here is the index set in the Logstash configuration file):

[root@rhel6 config]# cat logstash.conf

#input {
#  stdin {}
#}

input {
  beats {
    host => "0.0.0.0"
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["192.168.144.230:9200"]
    index => "test"
  }
  stdout {
    codec => rubydebug
  }
}

[root@rhel6 config]#

Then, you can access http://192.168.144.230:5601/app/kibana#/dev_tools/console?_g=() and enter the following query statement:

GET _search
{
  "query": {
    "match_phrase": {
      "message": "wangxi"
    }
  }
}
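The same match_phrase query works outside the Kibana console as well: it is just a JSON body POSTed to a _search endpoint (for example http://192.168.144.230:9200/test/_search) with any HTTP client. A small sketch that builds the body shown above; the helper name is my own:

```python
import json

def match_phrase(field, phrase):
    """Build the query body used in the Kibana console example."""
    return {"query": {"match_phrase": {field: phrase}}}

body = match_phrase("message", "wangxi")
print(json.dumps(body))
# {"query": {"match_phrase": {"message": "wangxi"}}}
```

Serializing the dict with json.dumps gives exactly the payload to send; swapping the field lets you search other indexed fields such as host or source.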

The MongoDB log entries we imported are found.

The above is an example of using ELK 5.0.1 + Filebeat 5.0.1 on Linux RHEL 6.6 to monitor MongoDB logs. Have you picked up some knowledge or skills? If you want to learn more, you are welcome to follow the industry information channel.
