2025-01-19 Update From: SLTechnology News&Howtos
This article walks through, step by step, how to deploy an ELK and Filebeat log center in Docker. The steps are laid out clearly and the details are handled carefully; I hope it helps resolve any doubts you have about the topic.
ELK is not a single piece of software but an acronym for Elasticsearch, Logstash and Kibana. All three are open-source projects maintained by Elastic.co and are usually used together, so they are referred to as the ELK Stack for short. According to Google Trends, the ELK Stack has become the most popular centralized logging solution.
Current environment
1. System: CentOS 7
2. Docker 1.12.1
Introduction
ElasticSearch
Elasticsearch is a real-time distributed search and analytics engine that supports full-text search as well as structured search and analytics. It is built on Apache Lucene and written in Java.
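As an illustration of the full-text search just mentioned, queries are sent to Elasticsearch as JSON request bodies (a minimal sketch only; the index pattern `logstash-*` and the `message` field are assumptions based on the default Logstash output, not something this article configures explicitly). For example, a `match` query posted to `/logstash-*/_search`:

```
{
  "query": {
    "match": {
      "message": "GET"
    }
  }
}
```

This would return any indexed log documents whose `message` field contains the term "GET".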
Logstash
Logstash is a data-collection engine with real-time pipeline capabilities. It is mainly used to collect and parse logs and store them in Elasticsearch.
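To illustrate the parsing side, Logstash can break raw nginx access-log lines into structured fields with a grok filter. This is a sketch only; the deployment configuration later in this article does not include a filter block, and `COMBINEDAPACHELOG` assumes the logs use the standard combined access-log format:

```
filter {
  grok {
    # Parse a combined-format access-log line into named fields
    # (clientip, timestamp, verb, request, response, bytes, agent, ...)
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}
```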
Kibana
Kibana is an Apache-licensed open-source web platform, written in JavaScript, that provides analysis and visualization for Elasticsearch. It can search and interact with data stored in Elasticsearch indices and generate charts and tables across various dimensions.
Filebeat
Filebeat is introduced as the log collector mainly to address Logstash's high resource overhead. Compared with Logstash, Filebeat's CPU and memory footprint on the host is almost negligible.
Architecture
Without Filebeat
With Filebeat
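The original architecture diagrams are not reproduced here, but based on the components described above, the two pipelines can be sketched as:

```
Without Filebeat:
  application logs -> Logstash -> Elasticsearch -> Kibana

With Filebeat:
  application logs -> Filebeat -> Logstash -> Elasticsearch -> Kibana
```

In the second variant, the lightweight Filebeat agent sits on the machines producing logs and ships them to a central Logstash for parsing.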
Deployment
Start ElasticSearch
docker run -d -p 9200:9200 --name elasticsearch elasticsearch
Start Logstash
# 1. Create a new configuration file, logstash.conf
input {
  beats {
    port => 5044
  }
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    # Enter the access IP of the actual Elasticsearch instance. Because this is
    # cross-container access, use the private-network or public-network IP;
    # do not enter 127.0.0.1 or localhost.
    hosts => ["{$ELASTIC_IP}:9200"]
  }
}

# 2. Start the container, expose and map the port, and mount the configuration file
docker run -d --expose 5044 -p 5044:5044 --name logstash -v "$PWD":/config-dir logstash -f /config-dir/logstash.conf
Start Filebeat
Download address: https://www.elastic.co/downloads/beats/filebeat
# 1. Download the Filebeat package
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.2.2-linux-x86_64.tar.gz
# 2. Extract the file
tar -xvf filebeat-5.2.2-linux-x86_64.tar.gz
# 3. Create a new configuration file, filebeat.yml
filebeat:
  prospectors:
    - paths:
        - /tmp/test.log   # log file path
      input_type: log     # read from a file
      tail_files: true    # start reading at the end of the file
output:
  logstash:
    hosts: ["{$LOGSTASH_IP}:5044"]  # fill in the access IP of Logstash
# 4. Run Filebeat
./filebeat-5.2.2-linux-x86_64/filebeat -e -c filebeat.yml
Start Kibana
docker run -d --name kibana -e ELASTICSEARCH_URL=http://{$ELASTIC_IP}:9200 -p 5601:5601 kibana
Simulated log data
# 1. Create the log file
touch /tmp/test.log
# 2. Write an nginx access-log line to the log file
echo '127.0.0.1 - - [13/Mar/2017:22:57:14 +0800] "GET / HTTP/1.1" 200 3700 "-" "Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko)" "-"' > /tmp/test.log
Visit http://{$KIBANA_IP}:5601 in a browser to view the collected logs in Kibana.
This concludes the walkthrough of deploying an ELK and Filebeat log center in Docker. To really master the material, practice the steps yourself and experiment with the configuration.