ELK log file analysis system: basic deployment

ELK overview
ELK is an acronym formed from the initial letters of Elasticsearch, Logstash, and Kibana. When we deploy a cluster of servers, log files end up scattered across multiple machines; to view the log information you would have to log in to each server and check it there. ELK instead gathers these log files in one place for unified management.
Elasticsearch is an open source distributed search engine. Its characteristics include: distributed operation, zero configuration, automatic discovery, automatic index sharding, an index replica mechanism, a RESTful interface, multiple data sources, and automatic search load balancing.
Logstash is a completely open source tool that collects, filters, and stores your logs for later use (e.g., search).
Kibana is also an open source, free tool. It provides a friendly web interface for log analysis on top of Logstash and Elasticsearch, helping you aggregate, analyze, and search important log data.
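To preview how the three pieces fit together: Logstash collects and filters the logs and writes them into Elasticsearch, and Kibana queries Elasticsearch for display. Below is a minimal Logstash pipeline sketch of that flow; the path and address here are placeholders, not the values used later in this experiment.

input {
    file { path => "/var/log/messages" }              # a log file to collect (placeholder path)
}
output {
    elasticsearch { hosts => ["127.0.0.1:9200"] }     # hand the events to Elasticsearch (placeholder address)
}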
Preparation before the experiment:

Name        Role              Address
centos7-1   node1 + Kibana    192.168.142.221
centos7-2   node2             192.168.142.132
centos7-3   Logstash + web    192.168.142.136
Here I install the web side and the log collection system on the same machine; you can of course separate them according to your own situation. (Running too many virtual machines is more than my computer can handle.)
Step 1: deploy the Elasticsearch service. Note: the operation is the same on both nodes.
Add domain name resolution, which is convenient for later use:
[root@node1 ~]# vim /etc/hosts
// add:
192.168.142.221 node1
192.168.142.132 node2
Check the Java version (if it is not installed, it can be installed with yum install java):
[root@node1 ~]# java -version
openjdk version "1.8.0_131"
OpenJDK Runtime Environment (build 1.8.0_131-b12)
OpenJDK 64-Bit Server VM (build 25.131-b12, mixed mode)
Deploy the elasticsearch service (port number: 9200)
// deploy the elasticsearch service
[root@node1 ~]# rpm -ivh elasticsearch-5.5.0.rpm
// reload the system services
[root@node1 ~]# systemctl daemon-reload
[root@node1 ~]# systemctl enable elasticsearch.service
Modify ES configuration file
[root@node1 ~]# vim /etc/elasticsearch/elasticsearch.yml
// uncomment and edit the following lines
17  cluster.name: my-elk-cluster          // cluster name (must be identical on all nodes)
23  node.name: node1                      // node name (different on each node)
33  path.data: /data/elk_data             // data storage path
37  path.logs: /var/log/elasticsearch/    // log storage path
43  bootstrap.memory_lock: false          // do not lock memory at startup
55  network.host: 0.0.0.0                 // IP address the service binds to; 0.0.0.0 means all addresses
59  http.port: 9200                       // listening port
68  discovery.zen.ping.unicast.hosts: ["node1", "node2"]    // cluster discovery via unicast
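Not part of the original steps, but a quick sanity check is to print the effective (non-comment) lines of the file and confirm the edits took:

[root@node1 ~]# grep -vE '^\s*(#|$)' /etc/elasticsearch/elasticsearch.yml    // show only the active settings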
Create a directory for storing data files, and start the service
[root@node1 ~]# mkdir -p /data/elk_data
[root@node1 ~]# chown -R elasticsearch:elasticsearch /data/elk_data    // change the owner and group
[root@node1 ~]# systemctl start elasticsearch.service
[root@node1 ~]# netstat -atnp | grep 9200
Verify that the service is enabled
You can get the corresponding node information by visiting http://192.168.142.221:9200 from the host's browser (and http://192.168.142.132:9200 for node2), as shown below:
{
  "name": "node1",
  "cluster_name": "my-elk-cluster",
  "cluster_uuid": "mi3-z72CRqS-ofc4NhjXdQ",
  "version": {
    "number": "5.5.0",
    "build_hash": "260387d",
    "build_date": "2017-06-30T23:16:05.735Z",
    "build_snapshot": false,
    "lucene_version": "6.6.0"
  },
  "tagline": "You Know, for Search"
}
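The same check can be done with curl instead of a browser. Once both nodes are up, the _cluster/health endpoint is also worth a look; the addresses below assume the layout from the table above.

[root@node1 ~]# curl http://192.168.142.221:9200/    // same node information JSON as above
[root@node1 ~]# curl 'http://192.168.142.221:9200/_cluster/health?pretty'    // "status" should be green once both nodes have joined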
Step 2: install the management plug-in elasticsearch-head (port: 9100). Note: the operation is the same on both nodes.
Extract, compile, and install
// install the node component package
[root@node1 ~]# yum install gcc gcc-c++ make -y
[root@node1 ~]# tar zxf node-v8.2.1.tar.gz -C /opt
// in the node directory
[root@node1 ~]# cd /opt/node-v8.2.1/
[root@node1 node-v8.2.1]# ./configure
[root@node1 node-v8.2.1]# make       // compiling takes a long time, about 20 min
[root@node1 node-v8.2.1]# make install
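To confirm the build succeeded, you can check that the node binary and its bundled npm ended up on the PATH:

[root@node1 node-v8.2.1]# node -v    // should print v8.2.1
[root@node1 node-v8.2.1]# npm -v     // npm ships with the node package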
Install phantomjs front-end framework
[root@node1 ~]# tar jxf phantomjs-2.1.1-linux-x86_64.tar.bz2 -C /usr/local/src
[root@node1 ~]# cp /usr/local/src/phantomjs-2.1.1-linux-x86_64/bin/phantomjs /usr/local/bin
Install the elasticsearch-head data visualization tool
[root@node1 ~]# tar zxf elasticsearch-head.tar.gz -C /usr/local/src
// in the elasticsearch-head directory
[root@node1 ~]# cd /usr/local/src/elasticsearch-head/
[root@node1 elasticsearch-head]# npm install    // install with the npm tool (bundled with the node components)
Modify the elasticsearch configuration file so that elasticsearch-head can access ES
[root@node1 ~]# vim /etc/elasticsearch/elasticsearch.yml
// add:
http.cors.enabled: true        // enable cross-origin access support
http.cors.allow-origin: "*"    // origins allowed for cross-origin access; "*" allows any origin
Start the elasticsearch-head tool
[root@node1 ~]# systemctl restart elasticsearch.service    // restart the ES service
[root@node1 ~]# cd /usr/local/src/elasticsearch-head/
[root@node1 elasticsearch-head]# npm run start &    // keep it running in the background
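A quick way to confirm the plug-in and the CORS setting are working (my own check, not part of the original steps) is to look for port 9100 and send a request with an arbitrary Origin header:

[root@node1 elasticsearch-head]# netstat -atnp | grep 9100    // elasticsearch-head listens on 9100
[root@node1 elasticsearch-head]# curl -I -H "Origin: http://example.com" http://192.168.142.221:9200/    // the response headers should include Access-Control-Allow-Origin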
Step 3: install the Logstash log collection system
Install the web service
Any web service will do (Apache, Nginx, Tomcat); it is not explained in detail here. Apache is used in this example.
Install the logstash system
[root@apache ~]# rpm -ivh logstash-5.5.1.rpm
[root@apache ~]# systemctl start logstash.service     // start logstash
[root@apache ~]# systemctl enable logstash.service    // start on boot
[root@apache ~]# ln -s /usr/share/logstash/bin/logstash /usr/local/bin    // create a soft link for ease of use
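Since the soft link was just created, a version check is a simple way to confirm logstash can now be called from anywhere:

[root@apache ~]# logstash --version    // should print logstash 5.5.1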
Set logstash to dock with elasticsearch
The logstash configuration file consists of three parts: input, filter, and output.
[root@apache log]# chmod o+r messages    // allow others to read the log
[root@apache log]# ll /var/log/messages
-rw----r--. 1 root root 483737 Dec 18 17:54 /var/log/messages
[root@apache log]# vim /etc/logstash/conf.d/system.conf
// add manually
input {
    file {
        path => "/var/log/messages"          // path of the log to collect
        type => "system"                     // tag
        start_position => "beginning"        // "beginning" means collect from the start of the file
    }
}
output {
    elasticsearch {
        hosts => ["192.168.142.221:9200"]    // es service address (an ES node)
        index => "system-%{+YYYY.MM.dd}"     // index name
    }
}
[root@apache log]# systemctl restart logstash.service    // restart the service
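Before restarting the service, the file can be syntax-checked with the -t option described in the next section; this step is a precaution, not part of the original procedure:

[root@apache log]# logstash -t -f /etc/logstash/conf.d/system.conf    // prints "Configuration OK" if the syntax is valid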
Result verification
Step 4: basic usage of logstash (largely unrelated to this experiment; friends who are not interested can skip it)
Logstash command test
Option descriptions:
-f    specify the logstash configuration file; logstash is configured according to that file
-e    the string that follows is used as the logstash configuration (if "", stdin is the input and stdout the output by default)
-t    test whether the configuration file is correct, then exit
1. Both input and output are in standard mode
[root@apache ~]# logstash -e 'input { stdin{} } output { stdout{} }'
## input and output go directly to the screen
Input: www.baidu.com
Output: apache www.baidu.com
2. Use the rubydebug codec to display detailed output
[root@apache ~]# logstash -e 'input { stdin{} } output { stdout{ codec => rubydebug } }'
## output in the specified encoding format
Input: www.baidu.com
Output:
{
    "@timestamp" => 2018-10-12T02:15:39.136Z,    # time
      "@version" => "1",                         # version
          "host" => "apache",                    # host running the Apache service
       "message" => "www.baidu.com"              # the page visited
}
3. Write the information into elasticsearch
[root@apache ~]# logstash -e 'input { stdin{} } output { elasticsearch { hosts => ["192.168.142.221:9200"] } }'
## an index will be generated in elasticsearch
Input: www.baidu.com
Output: a logstash-2019.12.17 index is generated in elasticsearch
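To confirm the index really landed, the _cat API can list it (same address as above):

[root@apache ~]# curl 'http://192.168.142.221:9200/_cat/indices?v'    // the logstash-2019.12.17 index should appear in the list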
Step 5: install the Kibana visualization tool (port: 5601)
Install in node1
[root@node1 ~]# cd /abc/elk/
[root@node1 elk]# rpm -ivh kibana-5.5.1-x86_64.rpm
[root@node1 elk]# cd /etc/kibana/
[root@node1 kibana]# cp kibana.yml kibana.yml.bak
Modify Kibana configuration file
[root@node1 kibana]# vim kibana.yml
// modify:
2   server.port: 5601                                   // open port
7   server.host: "0.0.0.0"                              // listening address
21  elasticsearch.url: "http://192.168.142.221:9200"    // connect to elasticsearch
30  kibana.index: ".kibana"                             // add a .kibana index to elasticsearch
Start Kibana
[root@node1 kibana]# systemctl start kibana.service     // start the kibana service
[root@node1 kibana]# systemctl enable kibana.service    // start on boot
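As with elasticsearch, a port check confirms the service actually came up:

[root@node1 kibana]# netstat -atnp | grep 5601    // kibana listens on 5601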
After the web service is docked with elasticsearch, kibana is used for visualization
[root@apache log]# cd /etc/logstash/conf.d/
// create the configuration file (it is empty; add the content manually)
[root@apache conf.d]# vim apache_log.conf
input {
    file {
        path => "/etc/httpd/logs/access_log"    // Apache access log
        type => "access"
        start_position => "beginning"
    }
    file {
        path => "/etc/httpd/logs/error_log"     // Apache error log
        type => "error"
        start_position => "beginning"
    }
}
output {
    if [type] == "access" {
        elasticsearch {
            hosts => ["192.168.142.221:9200"]    // elasticsearch address
            index => "apache_access-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "error" {
        elasticsearch {
            hosts => ["192.168.142.221:9200"]    // elasticsearch address
            index => "apache_error-%{+YYYY.MM.dd}"
        }
    }
}
[root@apache conf.d]# /usr/share/logstash/bin/logstash -f apache_log.conf    // run logstash against this configuration file only
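Once a few requests have hit the Apache site, the two new indices can be verified the same way as before; the _cat API accepts an index pattern:

[root@apache conf.d]# curl 'http://192.168.142.221:9200/_cat/indices/apache_*?v'    // apache_access-* and apache_error-* should be listed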
Configure in the visual interface
Enter http://192.168.142.221:5601/ in the browser
Create an index pattern when you log in for the first time (docking with the collected log files):
Input: access-*
Click the create button to create
Click the Discover button and you will find access-* information.
Problems that may be encountered during installation
Fault:
19:43:59.210 [LogStash::Runner] FATAL logstash.runner - Logstash could not be started because there is already another instance using the configured data directory. If you wish to run multiple instances, you must change the "path.data" setting.
Solution
// check the logstash configuration file
vim /etc/logstash/logstash.yml
// find the path.data path: /var/lib/logstash/
// delete the cache lock file
cd /var/lib/logstash/
rm -rf .lock
// restart logstash with path.data specified explicitly
logstash -f /etc/logstash/conf.d/nginx_log.conf --path.data=/var/lib/logstash

Thanks for reading!