2025-03-29 Update. From: SLTechnology News&Howtos > Database
Shulou (Shulou.com) 06/01 Report
Logs mainly include system logs, application logs, and security logs. From logs, operators and developers can learn about a server's software and hardware, spot configuration errors, and trace their causes. Analyzing logs regularly reveals the server's load, performance, and security posture, so that problems can be corrected promptly.
Usually, logs are scattered across different devices. If you manage dozens or hundreds of servers and still check logs the traditional way, logging in to each machine in turn, the work quickly becomes tedious and inefficient. The first priority is therefore centralized log management, for example with open-source syslog, to collect and aggregate the logs from all servers.
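As an illustration of such centralized collection, each client can forward its syslog stream to one collector host. A minimal rsyslog sketch; the collector address 10.2.8.45 and port 514 are assumptions for illustration, not values fixed by this article:

```
# /etc/rsyslog.conf on each client: forward every facility/priority
# to the central collector (single @ = UDP, @@ = TCP)
*.* @10.2.8.45:514
```

The collector then holds one merged stream, which is exactly the situation the rest of this article starts from.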
Once logs are centralized, statistics and retrieval become the next headache. Linux commands such as grep, awk, and wc can handle simple searches and counts, but for more demanding querying, sorting, and statistics across a large number of machines, this approach falls short.
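For a concrete sense of the ad-hoc approach mentioned above, here is a small sketch; the file name and log contents below are made up for illustration:

```shell
# create a small sample log to work on
cat > /tmp/sample.log <<'EOF'
Sep  6 10:00:01 web1 app: ERROR timeout talking to db
Sep  6 10:00:02 web1 app: INFO request served
Sep  6 10:00:03 web2 app: ERROR disk full
Sep  6 10:00:04 web1 app: INFO request served
EOF

# retrieval: count error lines
grep -c 'ERROR' /tmp/sample.log            # prints 2

# statistics: events per host (field 4 is the hostname)
awk '{print $4}' /tmp/sample.log | sort | uniq -c | sort -rn

# total number of log lines
wc -l < /tmp/sample.log
```

This works for one file on one box; it is exactly the per-machine, per-query style that stops scaling once many servers are involved.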
The open-source real-time log analysis platform ELK solves these problems nicely. ELK consists of three open-source tools: Elasticsearch, Logstash, and Kibana. Official website: https://www.elastic.co
Elasticsearch is an open-source distributed search engine. Its features include distributed operation, zero configuration, automatic discovery, automatic index sharding, index replication, a RESTful interface, multiple data sources, and automatic search load balancing.
Logstash is a completely open source tool that can collect, analyze, and store your logs for future use (e.g., search).
Kibana is also an open-source, free tool. It provides a friendly web interface for log analysis on top of Logstash and Elasticsearch, helping you aggregate, analyze, and search important log data.
Deployment of the open-source real-time log analysis ELK platform:
1. Install the Logstash dependency package JDK
Logstash depends on a Java runtime environment. Logstash 1.5 and later requires Java 7 or higher, and the latest Java version is recommended. Since we only need to run Java programs rather than develop them, the JRE is enough. First, download the latest JRE from Oracle at http://www.oracle.com/technetwork/java/javase/downloads/jre8-downloads-2133155.html
# tar -zxf jdk-8u45-linux-x64.tar.gz -C /usr/local/

Set the environment variables in /etc/profile by adding:

export JAVA_HOME=/usr/local/jdk1.8.0_45
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/dt.jar:$CLASSPATH

# source /etc/profile

Test whether the installation succeeded:

# java -version
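Before touching the real /etc/profile, you can rehearse those exports in a throwaway file to confirm they behave as intended. The temp-file approach here is only an illustration; the JDK path matches the one used above:

```shell
# write the same exports to a scratch file instead of /etc/profile
profile=$(mktemp)
cat > "$profile" <<'EOF'
export JAVA_HOME=/usr/local/jdk1.8.0_45
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/dt.jar:$CLASSPATH
EOF

# source it and confirm the variables landed
. "$profile"
echo "$JAVA_HOME"        # prints /usr/local/jdk1.8.0_45
echo "$PATH" | grep -q "$JAVA_HOME/bin" && echo "PATH ok"
```

Once this looks right, apply the same lines to /etc/profile and run source /etc/profile for real.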
2. Install Logstash (log collection, analysis, and storage for future use)
Download and install Logstash. Installing Logstash only requires extracting the archive into a directory of your choice, for example /usr/local:
Download address https://www.elastic.co/downloads/logstash
# wget https://download.elastic.co/logstash/logstash/logstash-2.4.0.tar.gz
# tar -zxf logstash-2.4.0.tar.gz -C /usr/local/
After the installation is complete, run the following command to start logstash:
Normal startup
# logstash -e 'input { stdin { } } output { stdout { } }'
Settings: Default pipeline workers: 2
Pipeline main started

Waiting for input, type:
hello world
2016-09-06T06:22:50.326Z localhost.localdomain hello world

We can see that whatever we type, Logstash echoes back in a certain format. The -e flag lets Logstash take its configuration directly from the command line, which is particularly handy for quickly and repeatedly testing whether a configuration is correct without writing a configuration file. Press Ctrl-C to exit the running Logstash.
Specifying the configuration on the command line with the -e flag is common, but it becomes unwieldy when more settings are needed.
In that case, we first create a simple configuration file and tell Logstash to use it.
For example, create a "basic configuration" test file logstash-test.conf under the logstash installation directory, with the following contents:
# cat logstash-test.conf
input { stdin { } }
output { stdout { codec => rubydebug } }
When collecting logs, Logstash defines its inputs and outputs in the input and output sections of the configuration. Here, input defines an input called stdin, and output defines an output called stdout.
Whatever characters we enter, Logstash returns them in a structured format; the output is defined as stdout, and the codec parameter specifies the format Logstash uses for that output.
Starting in debug mode
Use the -f flag to make Logstash read the configuration file, then start it. Run the following to test (start in debug mode):
# /usr/local/logstash-2.3.4/bin/logstash agent -f /usr/local/logstash-2.3.4/conf/logstash-test.conf
Logstash startup completed
Tue Jul 14 18:07:07 EDT 2015 hello World    # this line is the result of echo "`date` hello World", pasted in here

It then outputs the following:

{
    "message"    => "Tue Sep 6 14:25:52 CST 2016 hello world",
    "@version"   => "1",
    "@timestamp" => "2016-09-06T06:26:31.270Z",
    "host"       => "localhost.localdomain"
}
2.1 Exporting Logstash output to the redis database
So far we have displayed information directly on the screen; now we will save Logstash's output to a redis database.
When Logstash output is stored in redis, redis actually acts as a message queue rather than as storage, holding events until Elasticsearch consumes them.
In this example redis runs on the local machine, so the next step is to install the redis database.
# cat /usr/local/logstash-2.3.4/conf/logstash_index_redis.conf

Stores information in redis:

input { stdin { } }
output {
    stdout { codec => rubydebug }
    redis {
        host      => '10.2.8.45'
        port      => 6379
        data_type => 'list'
        key       => 'logstash'
    }
}
Logstash's default port is 9301:
# netstat -tnlp | grep 9301
Server side: the configuration file that consumes logs from redis
# cat logstash_index_redis.conf

Reads information from redis and sends it to elasticsearch:

input {
    redis {
        host      => "10.2.8.45"
        port      => 6379
        data_type => "list"
        key       => "logstash"
        codec     => "json"
    }
}
output {
    stdout { codec => rubydebug }
    elasticsearch { hosts => "10.2.8.45" }
}
Client side: the configuration file that stores collected information in redis
# cat logstash_agent.conf
input {
    file {
        path           => ["/var/log/messages", "/var/log/*.log"]
        type           => "system"
        start_position => beginning
    }
}
output {
    stdout { codec => rubydebug }
    redis {
        host      => "10.2.8.45"
        port      => 6379
        data_type => "list"
        key       => "logstash"
    }
    # elasticsearch { hosts => "10.2.8.45" }
}
3. Install Elasticsearch
Download address https://www.elastic.co/downloads/elasticsearch
Create a regular user to run elasticsearch (by default the root user cannot start elasticsearch):
# groupadd elk
# useradd elasticsearch -g elk
Installation
# wget https://download.elastic.co/elasticsearch/release/org/elasticsearch/distribution/tar/elasticsearch/2.4.0/elasticsearch-2.4.0.tar.gz
# tar zxvf elasticsearch-2.4.0.tar.gz -C /usr/local/

Modify the configuration file /usr/local/elasticsearch-2.3.5/config/elasticsearch.yml:

network.host: 10.2.8.45    # IP address the port binds to
http.port: 9200
Start (note: by default, do not start as root; use the regular user):
# su - elasticsearch
# nohup /usr/local/elasticsearch-2.3.5/bin/elasticsearch > nohup.out &
# netstat -anp | grep :9200
# curl http://localhost:9200    # view the current elasticsearch status
Next, we create a test file logstash_elasticsearch.conf under the logstash installation directory, used to test Logstash with Elasticsearch as its back end.
Both stdout and elasticsearch are defined as outputs in this file; such "multiple outputs" ensure the data is displayed on the screen and also sent to elasticsearch.
# cat /usr/local/logstash-2.3.4/conf.d/logstash_elasticsearch.conf
input { stdin { } }                            # manual input
output {
    elasticsearch { hosts => "localhost" }     # output to elasticsearch
    stdout { codec => rubydebug }              # output to the screen
}
Execute the following command
# logstash agent -f /usr/local/logstash-2.3.4/conf.d/logstash_elasticsearch.conf
hello logstash    # typed input, which produces the following result
{
    "message"    => "hello logstash",
    "@version"   => "1",
    "@timestamp" => "2016-09-06T06:49:39.654Z",
    "host"       => "localhost.localdomain"
}
We can use the curl command to send a request to see if ES received the data:
# curl http://localhost:9200/_search?pretty
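If all you need from that response is the hit count, a little shell filtering works. A sketch against a canned sample; the JSON below is a hand-made stand-in for an ES 2.x _search response, not real output from this cluster:

```shell
# canned stand-in for: curl http://localhost:9200/_search?pretty
cat > /tmp/es_search.json <<'EOF'
{
  "took" : 3,
  "timed_out" : false,
  "hits" : {
    "total" : 2,
    "hits" : [ ]
  }
}
EOF

# pull out the total hit count
total=$(grep '"total"' /tmp/es_search.json | grep -o '[0-9]\+')
echo "documents indexed: $total"     # prints: documents indexed: 2
```

A nonzero total confirms Logstash is actually writing events into Elasticsearch; zero means the pipeline between them is broken.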
So far, Elasticsearch and Logstash have been successfully used to collect log data.
Install the elasticsearch plug-in
The elasticsearch-kopf plug-in lets you browse the data stored in Elasticsearch. Install it by running the following in the Elasticsearch installation directory:
# cd /usr/local/elasticsearch-2.3.5/bin
# ./plugin install lmenezes/elasticsearch-kopf

If the installation fails, it is most likely a network problem. In that case, download the plug-in manually instead of using the plugin install command:

# cd /usr/local/elasticsearch-2.3.5/plugins
# wget https://github.com/lmenezes/elasticsearch-kopf/archive/master.zip
# unzip master.zip
# mv elasticsearch-kopf-master kopf

After installation you will see the kopf directory under plugins. Visit the kopf page in a browser to view the data stored by elasticsearch: http://10.2.8.45:9200/_plugin/kopf/
4. Install Kibana
# wget https://download.elastic.co/kibana/kibana/kibana-4.6.0-linux-x86_64.tar.gz

Modify the configuration file /usr/local/kibana-4.6.0-linux-x86_64/config/kibana.yml:

elasticsearch.url: "http://10.2.8.45:9200"

Launch (background startup):

# nohup /usr/local/kibana-4.6.0-linux-x86_64/bin/kibana > nohup.out &

The default port is 5601:

# netstat -tnlp | grep 5601

Default ELK ports:
logstash        9301
elasticsearch   9200, 9300
kibana          5601
Access http://10.2.8.45:5601 in a browser.
Possible page errors:
1. "This version of Kibana requires Elasticsearch ^2.4.0 on all nodes. I found the following incompatible nodes in your cluster: Elasticsearch v2.3.5 @ 10.2.8.45:9200 (10.2.8.45)". This means the Kibana version does not match the elasticsearch version; check the official website and install a matching version.
2. "Unable to fetch mapping". This means Logstash has not written any logs into elasticsearch; check whether Logstash and elasticsearch can communicate. This is the most common problem. Restart the application after fixing it.
Use http://kibanaServerIP:5601 to access Kibana. After opening it, first configure an index. By default, Kibana points at the data in Elasticsearch, using the default index name logstash-* on a time basis. Click "Create".
Then click "Discover" to search and browse the data in Elasticsearch. The default search covers the last 15 minutes; you can select a custom time range.
At this point, it means that your ELK platform installation and deployment is complete.
Client installation
The client only needs Logstash installed, as in steps 1 and 2 above (the JDK is required).
Then add the logstash_agent.conf configuration file to / usr/local/logstash-2.3.4/conf/ (which can be modified as needed)
# cat logstash_agent.conf
input {
    file {
        path           => ["/var/log/messages", "/var/log/*.log"]    # log files to collect
        type           => "system"                                   # custom type
        start_position => beginning
    }
}
output {
    stdout { codec => rubydebug }
    redis {
        host      => "10.2.8.45"
        port      => 6379
        data_type => "list"
        key       => "logstash"
    }
    # elasticsearch { hosts => "10.2.8.45" }
}

Run in the background:

# nohup /usr/local/logstash-2.3.4/bin/logstash agent -f /usr/local/logstash-2.3.4/conf/logstash_agent.conf > /usr/local/logstash-2.3.4/logstash.log 2>&1 &

View the process:

# ps -ef | grep java