EFK is not a single piece of software but a solution. EFK is the abbreviation of three open-source projects: Elasticsearch, Filebeat, and Kibana. Elasticsearch is responsible for log analysis and storage, Filebeat is responsible for log collection, and Kibana is responsible for the web interface. Working together, they efficiently cover a wide range of use cases and form one of the mainstream log analysis solutions today.
There is only one difference between EFK and ELK: the log-collecting component, Logstash, is replaced by Filebeat, because Filebeat has two advantages over Logstash:
1. Low intrusion: it requires no changes to the Elasticsearch or Kibana configuration.
2. High performance: its I/O footprint is much smaller than Logstash's.
For ELK, please refer to: https://blog.51cto.com/14227204/2442249
Of course, Logstash also has advantages over Filebeat. In particular, Logstash can parse and format logs, whereas Filebeat essentially just reads lines from log files. Filebeat can do some light processing if the collected logs already have a regular format, but its capabilities are much weaker than Logstash's.
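As an illustration of that formatting ability (a minimal sketch; the port, file path, and addresses are assumptions, not part of this deployment), a Logstash pipeline can receive events from Filebeat and parse Apache access-log lines into structured fields with the grok filter:
# logstash pipeline sketch, e.g. /etc/logstash/conf.d/httpd.conf (illustrative path)
input {
  beats {
    port => 5044                  # listen for events forwarded by Filebeat
  }
}
filter {
  grok {
    # split each Apache combined-format line into named fields (clientip, verb, response, ...)
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]   # index the structured events
  }
}
Filebeat has no built-in equivalent of grok, which is why heavier parsing is usually left to Logstash.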
Filebeat belongs to the Beats family, which currently includes six tools:
Packetbeat (collects network traffic data)
Metricbeat (collects system-, process-, and filesystem-level metrics such as CPU and memory usage)
Filebeat (collects file data)
Winlogbeat (collects Windows event log data)
Auditbeat (a lightweight audit log collector)
Heartbeat (a lightweight server health collector)
In addition, the components of the EFK stack consume a lot of memory, so make sure the server has sufficient RAM and disk space. The architecture can later be expanded according to business needs: when Filebeat collects more and more logs, Redis can be introduced as a buffer to prevent data loss, and Elasticsearch can be expanded into a cluster managed with the Head plug-in.
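As a sketch of the Redis buffering idea (the Redis address and key name are illustrative assumptions, not part of this deployment): Filebeat supports a Redis output, so events can be pushed onto a Redis list and consumed from there before indexing. Filebeat allows only one output at a time, so this would replace the output.elasticsearch section:
# filebeat.yml -- hypothetical buffering setup, assuming a Redis instance at 192.168.171.135
output.redis:
  hosts: ["192.168.171.135:6379"]   # Redis server that buffers the events
  key: "filebeat"                   # Redis list the events are pushed onto
  db: 0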
I. Start deployment
1. Install Elasticsearch:
[root@localhost /]# mkdir efk    # personal habit
[root@localhost /]# cd efk/
[root@localhost efk]# wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.2.4.tar.gz
[root@localhost efk]# tar zxf elasticsearch-6.2.4.tar.gz -C /usr/local/
[root@localhost efk]# cd /usr/local/
[root@localhost local]# useradd es
[root@localhost local]# mv elasticsearch-6.2.4/ es/
[root@localhost local]# chown -R es:es /usr/local/es    # Elasticsearch refuses to run as root, so the es user must own the files
[root@localhost local]# sed -i 's/#network.host: 192.168.0.1/network.host: 0.0.0.0/g' /usr/local/es/config/elasticsearch.yml
[root@localhost local]# sed -i 's/#http.port: 9200/http.port: 9200/g' /usr/local/es/config/elasticsearch.yml
[root@localhost local]# su es    # switch to the es user to start the service
[es@localhost local]$ /usr/local/es/bin/elasticsearch -d    # run the service in the background
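A quick way to verify that Elasticsearch started correctly (it can take a few seconds to come up) is to query its HTTP port; a healthy node answers with a small JSON document containing the node name, cluster name, and version:
[es@localhost local]$ curl http://localhost:9200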
If you encounter the error "max file descriptors [65535] for elasticsearch process is too low, increase to at least [65536]", raise the system limits:
[root@localhost local]# echo '* soft nofile 819200' >> /etc/security/limits.conf
[root@localhost local]# echo '* hard nofile 819200' >> /etc/security/limits.conf
[root@localhost local]# echo '* soft nproc 2048' >> /etc/security/limits.conf
[root@localhost local]# echo '* hard nproc 4096' >> /etc/security/limits.conf
[root@localhost local]# echo 'vm.max_map_count=655360' >> /etc/sysctl.conf
[root@localhost local]# sysctl -p
vm.max_map_count = 655360
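The limits.conf changes only apply to sessions opened after the change, so switch to the es user again before restarting Elasticsearch. A quick sanity check (expected values match what was written above):
[root@localhost local]# sysctl vm.max_map_count    # should print 655360
[root@localhost local]# su es
[es@localhost local]$ ulimit -n    # should print 819200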
2. Install Kibana:
[root@localhost local]# cd /efk/
[root@localhost efk]# wget https://artifacts.elastic.co/downloads/kibana/kibana-6.2.4-linux-x86_64.tar.gz
[root@localhost efk]# tar zxf kibana-6.2.4-linux-x86_64.tar.gz -C /usr/local/
[root@localhost efk]# cd /usr/local/
[root@localhost local]# mv kibana-6.2.4-linux-x86_64/ kibana/
[root@localhost local]# sed -i 's/#kibana.index: ".kibana"/kibana.index: ".kibana"/g' /usr/local/kibana/config/kibana.yml
[root@localhost local]# sed -i 's/#server.port: 5601/server.port: 5601/g' /usr/local/kibana/config/kibana.yml
[root@localhost local]# sed -i 's/#server.host: "localhost"/server.host: "0.0.0.0"/g' /usr/local/kibana/config/kibana.yml
[root@localhost local]# sed -i 's|#elasticsearch.url: "http://localhost:9200"|elasticsearch.url: "http://localhost:9200"|g' /usr/local/kibana/config/kibana.yml
[root@localhost /]# /usr/local/kibana/bin/kibana &    # run Kibana in the background
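A minimal check that Kibana is up (it takes a little while to initialize; the port is the one configured above):
[root@localhost /]# ss -lnpt | grep 5601    # the node process should be listening on 0.0.0.0:5601
[root@localhost /]# curl -s http://localhost:5601/api/status    # returns a JSON status report once Kibana is ready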
3. Install Filebeat:
[root@localhost /]# cd /efk/
[root@localhost efk]# wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.2.4-linux-x86_64.tar.gz
[root@localhost efk]# tar zxf filebeat-6.2.4-linux-x86_64.tar.gz -C /usr/local/
[root@localhost efk]# cd /usr/local/
[root@localhost local]# mv filebeat-6.2.4-linux-x86_64/ filebeat/
[root@localhost local]# vim /usr/local/filebeat/filebeat.yml    # find the entries below and modify them
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /etc/httpd/logs/*_log    # location of the log files to collect
  multiline.pattern: ^\[
  multiline.negate: true
  multiline.match: after
setup.kibana:
  host: "192.168.171.134"
output.elasticsearch:
  hosts: ["192.168.171.134:9200"]
Pay close attention to the format: child keys are indented with 2 spaces, and all of these settings already exist in the configuration file; only the parts to modify are listed here. enabled defaults to false and must be changed to true before logs are collected. Change the paths entry to your own log path, and note the space after the "-"; if there are multiple paths, add one line per path, keeping the 4 spaces of indentation. The multiline settings can simply be uncommented; they make Filebeat handle multi-line log entries. Uncomment host under setup.kibana and set the address according to your actual situation, and do the same for hosts under output.elasticsearch.
[root@localhost local]# ./filebeat/filebeat -c /usr/local/filebeat/filebeat.yml &    # start the service
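Once Filebeat is running and the httpd logs are being read, Elasticsearch creates a daily index named after the Filebeat version. You can confirm that data is arriving (adjust the address to your environment):
[root@localhost local]# curl 'http://192.168.171.134:9200/_cat/indices?v'    # look for an index like filebeat-6.2.4-yyyy.MM.dd with a growing docs.count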
II. Configure Kibana
Browser access: http://192.168.171.134:5601 (the server.host and server.port configured above). In Kibana, create an index pattern matching the Filebeat indices (for example filebeat-*) under Management. With that, EFK is fully set up, and you can click Discover to view the collected log information.