2025-02-25 Update From: SLTechnology News & Howtos
Shulou (Shulou.com) 05/31 Report
This article explains what ELK is and walks through setting it up step by step. The content is meant to be easy to follow; hopefully it resolves your doubts.
I. What is ELK
ELK is an acronym for Elasticsearch, Logstash, and Kibana, all of which are open-source products.
Elasticsearch (ES for short) is a real-time distributed search and analytics engine that can be used for full-text search, structured search, and analytics.
Logstash is a data collection engine, mainly used to collect and parse data and send it to ES. Supported data sources include local files, Elasticsearch, MySQL, Kafka, and so on.
Kibana provides an analysis and visualization web interface for Elasticsearch and can generate tables and charts across various dimensions.
II. Set up ELK
Environment dependencies: CentOS 7.5, JDK 1.8, Elasticsearch 7.9.3, Logstash 7.9.3, Kibana 7.9.3.
2.1 Install Elasticsearch
First, download the installation package from the official website, then extract it with the tar -zxvf command.
Locate the elasticsearch.yml file in the config directory and modify the configuration (the keys must be lowercase; YAML is case-sensitive):
cluster.name: es-application
node.name: node-1
# listen on all IPs
network.host: 0.0.0.0
# HTTP port number
http.port: 9200
# Elasticsearch data directory
path.data: /usr/elasticsearch-7.9.3/data
# Elasticsearch log directory
path.logs: /usr/elasticsearch-7.9.3/logs
After configuring, create a user, because Elasticsearch cannot be started as root:
# create a user
useradd yehongzhi
# set a password
passwd yehongzhi
# give the user ownership of the installation directory
chown -R yehongzhi:yehongzhi /usr/elasticsearch-7.9.3/
Then switch to that user and start it:
# switch user
su yehongzhi
# start; -d means run in the background
./bin/elasticsearch -d
Use the command netstat -nltp to check the port number:
When you visit http://192.168.0.109:9200/, you should see a response like the following, indicating that the installation succeeded.
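If no browser is handy, the same check can be done from the shell with curl; both endpoints below are standard Elasticsearch REST APIs (host and port assumed to match the setup above):

```shell
# Basic node info: node name, cluster name, and version, as JSON
curl -s http://192.168.0.109:9200/

# Cluster health: status should be "green" or "yellow"
curl -s http://192.168.0.109:9200/_cluster/health?pretty
```

A single-node cluster typically reports "yellow" rather than "green", because replica shards have nowhere to be allocated.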
2.2 Install Logstash
First download the package from the official website and extract it, then find the logstash-sample.conf file in the config directory and modify the configuration:
input {
  file {
    path => ["/usr/local/user/*.log"]
    type => "user_log"
    start_position => "beginning"
  }
}
output {
  elasticsearch {
    hosts => ["http://192.168.0.109:9200"]
    index => "user-%{+YYYY.MM.dd}"
  }
}
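One detail worth noting in the output section: user-%{+YYYY.MM.dd} is a Logstash sprintf date pattern, so a new index is created per day. A minimal sketch of the name it produces for "today", using plain date(1):

```shell
# Preview today's index name, as Logstash's user-%{+YYYY.MM.dd} pattern would build it
index="user-$(date +%Y.%m.%d)"
echo "$index"
```

Logstash also offers a --config.test_and_exit flag to syntax-check the pipeline file before starting it for real.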
input defines the input source and output the output destination; a filter section can also be configured. The architecture is as follows:
After configuring, you need a data source, i.e., a log file. Prepare a user.jar application, start it in the background, and redirect its output to the log file user.log. The command is as follows:
nohup java -jar user.jar > /usr/local/user/user.log &
Then start Logstash in the background with the following command:
nohup ./bin/logstash -f /usr/logstash-7.9.3/config/logstash-sample.conf &
After starting, use the jps command; you should see two processes running:
2.3 Install Kibana
First download the archive from the official website and extract it, then find the kibana.yml file in the config directory and modify the configuration:
server.port: 5601
server.host: "192.168.0.111"
elasticsearch.hosts: ["http://192.168.0.109:9200"]
Like Elasticsearch, Kibana cannot be started as root, so create a user:
# create a user
useradd kibana
# set a password
passwd kibana
# give the user ownership of the installation directory
chown -R kibana:kibana /usr/kibana/
Then start it:
# switch user
su kibana
# foreground start; closing the shell window stops it
./bin/kibana
# background start
nohup ./bin/kibana &
After startup, open http://192.168.0.111:5601 in a browser to see Kibana's web interface:
2.4 Effect display
With everything started, the whole pipeline looks like this. Let's take a look:
Open http://192.168.0.111:5601 in a browser, go to the management interface, and click "Index Management"; you should see an index named user-2020.10.31.
Click the "Index Patterns" menu, then create a pattern and name it user-*.
Finally, go to the Discover tab, select the user-* index pattern, and search for keywords to find the relevant logs.
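The same search can be cross-checked against Elasticsearch directly, bypassing Kibana; both calls below use standard Elasticsearch APIs, and the query word "error" is only an example:

```shell
# Full-text search across all user-* indices for the word "error"
curl -s "http://192.168.0.109:9200/user-*/_search?q=error&pretty"

# Quick document count per user-* index
curl -s "http://192.168.0.109:9200/_cat/count/user-*?v"
```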
III. Improvement and optimization
The ELK stack above, built from just the three core components, has a drawback: if Logstash needs a new plug-in, every server running Logstash has to install it, which does not scale well. Hence FileBeat, which uses fewer resources and is responsible only for collecting logs, doing nothing else. Each server then runs a lightweight shipper, while Logstash is pulled out to a central node to do filtering and other processing.
FileBeat is also the officially recommended log collector. First download the Linux package:
https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.9.3-linux-x86_64.tar.gz
After the download completes, extract it, then modify the filebeat.yml configuration file:
# input source
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /usr/local/user/*.log
# output: the address of the Logstash server
output.logstash:
  hosts: ["192.168.0.110:5044"]
# output: use this section instead to send directly to Elasticsearch
# output.elasticsearch:
#   hosts: ["localhost:9200"]
#   protocol: "https"
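Filebeat ships with built-in self-checks; test config and test output are standard Filebeat subcommands, worth running from the extracted directory before the background start:

```shell
# Verify that filebeat.yml parses correctly; prints "Config OK" on success
./filebeat test config -c filebeat.yml

# Verify that Filebeat can actually reach the configured Logstash host
./filebeat test output -c filebeat.yml
```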
The Logstash configuration file logstash-sample.conf must also be changed:
# change the input source to beats
input {
  beats {
    port => 5044
    codec => "json"
  }
}
Then start FileBeat:
# background start command
nohup ./filebeat -e -c filebeat.yml > /dev/null 2>&1 &
Start Logstash again:
# background start command
nohup ./bin/logstash -f /usr/logstash-7.9.3/config/logstash-sample.conf &
How do you know whether startup succeeded? Check the logstash-plain.log file in the logs directory of the Logstash installation:
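Besides reading the log file, a quick end-to-end check is to ask Elasticsearch whether the daily index is receiving documents; _cat/indices is a standard API (host and port as in the setup above):

```shell
# List the user-* indices with health, document count, and size
curl -s "http://192.168.0.109:9200/_cat/indices/user-*?v"
```

If the document count grows as the application writes logs, the FileBeat → Logstash → Elasticsearch chain is working.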
That is the full content of "What is ELK". Thank you for reading!