This article gives a quick start to EFK by deploying elasticsearch (three nodes) + filebeat + kibana and building a working demo environment to test the result.
Author: "Little Wolf". Welcome to reprint and contribute.
Catalogue
▪ Use
▪ Experimental Architecture
▪ EFK Software installation
▪ elasticsearch configuration
▪ filebeat configuration
▪ kibana configuration
▪ Startup Service
▪ kibana interface configuration
▪ Test
▪ Follow-up articles
Use
▷ collect nginx access logs in real time through filebeat
▷ transfer the logs collected by filebeat to the elasticsearch cluster
▷ display the logs through kibana
Experimental architecture
▷ server configuration: 192.168.1.31, 192.168.1.32 and 192.168.1.33 run elasticsearch; 192.168.1.21 runs kibana; 192.168.1.11 runs nginx + filebeat
▷ architecture: nginx logs → filebeat (192.168.1.11) → elasticsearch cluster (192.168.1.31-33) → kibana (192.168.1.21)
EFK software installation
Version description
▷ elasticsearch 7.3.2
▷ filebeat 7.3.2
▷ kibana 7.3.2
Notes
▷ The versions of the three components must be identical.
▷ elasticsearch needs at least 3 nodes, and the total number of nodes must be odd (so the cluster can always form a majority quorum for master election).
Installation path
▷ /opt/elasticsearch
▷ /opt/filebeat
▷ /opt/kibana
Elasticsearch installation: all three es nodes perform the same installation steps
mkdir -p /opt/software && cd /opt/software
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.3.2-linux-x86_64.tar.gz
tar -zxvf elasticsearch-7.3.2-linux-x86_64.tar.gz
mv elasticsearch-7.3.2 /opt/elasticsearch
useradd elasticsearch -d /opt/elasticsearch -s /sbin/nologin
mkdir -p /opt/logs/elasticsearch
chown elasticsearch.elasticsearch /opt/elasticsearch -R
chown elasticsearch.elasticsearch /opt/logs/elasticsearch -R

# A process must be allowed more than 262144 VMAs (virtual memory areas),
# otherwise elasticsearch reports "max virtual memory areas vm.max_map_count [65535] is too low, increase to at least [262144]"
echo "vm.max_map_count = 655350" >> /etc/sysctl.conf
sysctl -p
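A quick optional check (not part of the original steps) to confirm the new kernel setting is active:

# Should print: vm.max_map_count = 655350
sysctl vm.max_map_count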
Filebeat installation
mkdir -p /opt/software && cd /opt/software
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.3.2-linux-x86_64.tar.gz
mkdir -p /opt/logs/filebeat/
tar -zxvf filebeat-7.3.2-linux-x86_64.tar.gz
mv filebeat-7.3.2-linux-x86_64 /opt/filebeat
Kibana installation
mkdir -p /opt/software && cd /opt/software
wget https://artifacts.elastic.co/downloads/kibana/kibana-7.3.2-linux-x86_64.tar.gz
tar -zxvf kibana-7.3.2-linux-x86_64.tar.gz
mv kibana-7.3.2-linux-x86_64 /opt/kibana
useradd kibana -d /opt/kibana -s /sbin/nologin
chown kibana.kibana /opt/kibana -R
Nginx installation (only on 192.168.1.11; used to generate logs for filebeat to collect)
# install
yum install -y nginx
# start nginx
/usr/sbin/nginx -c /etc/nginx/nginx.conf

elasticsearch configuration
▷ 192.168.1.31 /opt/elasticsearch/config/elasticsearch.yml
# Cluster name
cluster.name: my-application
# Node name
node.name: 192.168.1.31
# Log location
path.logs: /opt/logs/elasticsearch
# IP this node listens on
network.host: 192.168.1.31
# HTTP port of this node
http.port: 9200
# Transport port of this node
transport.port: 9300
# List of the other hosts in the cluster
discovery.seed_hosts: ["192.168.1.31", "192.168.1.32", "192.168.1.33"]
# When starting a brand-new Elasticsearch cluster for the first time, the set of master-eligible nodes whose votes are counted in the first election
cluster.initial_master_nodes: ["192.168.1.31", "192.168.1.32", "192.168.1.33"]
# Enable cross-origin resource sharing
http.cors.enabled: true
http.cors.allow-origin: "*"
# Recovery can start as soon as 2 data or master nodes have joined the cluster
gateway.recover_after_nodes: 2
▷ 192.168.1.32 /opt/elasticsearch/config/elasticsearch.yml
# Cluster name
cluster.name: my-application
# Node name
node.name: 192.168.1.32
# Log location
path.logs: /opt/logs/elasticsearch
# IP this node listens on
network.host: 192.168.1.32
# HTTP port of this node
http.port: 9200
# Transport port of this node
transport.port: 9300
# List of the other hosts in the cluster
discovery.seed_hosts: ["192.168.1.31", "192.168.1.32", "192.168.1.33"]
# When starting a brand-new Elasticsearch cluster for the first time, the set of master-eligible nodes whose votes are counted in the first election
cluster.initial_master_nodes: ["192.168.1.31", "192.168.1.32", "192.168.1.33"]
# Enable cross-origin resource sharing
http.cors.enabled: true
http.cors.allow-origin: "*"
# Recovery can start as soon as 2 data or master nodes have joined the cluster
gateway.recover_after_nodes: 2
▷ 192.168.1.33 /opt/elasticsearch/config/elasticsearch.yml
# Cluster name
cluster.name: my-application
# Node name
node.name: 192.168.1.33
# Log location
path.logs: /opt/logs/elasticsearch
# IP this node listens on
network.host: 192.168.1.33
# HTTP port of this node
http.port: 9200
# Transport port of this node
transport.port: 9300
# List of the other hosts in the cluster
discovery.seed_hosts: ["192.168.1.31", "192.168.1.32", "192.168.1.33"]
# When starting a brand-new Elasticsearch cluster for the first time, the set of master-eligible nodes whose votes are counted in the first election
cluster.initial_master_nodes: ["192.168.1.31", "192.168.1.32", "192.168.1.33"]
# Enable cross-origin resource sharing
http.cors.enabled: true
http.cors.allow-origin: "*"
# Recovery can start as soon as 2 data or master nodes have joined the cluster
gateway.recover_after_nodes: 2

filebeat configuration
192.168.1.11 /opt/filebeat/filebeat.yml
# File input
filebeat.inputs:
  # File input type
  - type: log
    # Enable this input
    enabled: true
    # File location
    paths:
      - /var/log/nginx/access.log
    # Custom fields
    fields:
      # The type is nginx_access, matched by fields.type below
      type: nginx_access

# Output to elasticsearch
output.elasticsearch:
  # elasticsearch cluster
  hosts: ["http://192.168.1.31:9200",
          "http://192.168.1.32:9200",
          "http://192.168.1.33:9200"]

  # Index configuration
  indices:
    # Index name
    - index: "nginx_access_%{+yyy.MM}"
      # Use this index when the type is nginx_access
      when.equals:
        fields.type: "nginx_access"

# Disable the built-in template
setup.template.enabled: false

# Enable logging to files
logging.to_files: true
# Log level
logging.level: info
# Log files
logging.files:
  # Log location
  path: /opt/logs/filebeat/
  # Log file name
  name: filebeat
  # Number of rotated log files to keep, must be between 2 and 1024
  keepfiles: 7
  # Log file permissions
  permissions: 0600
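Before starting the service, filebeat's built-in config checker can catch YAML mistakes early. This is an optional check, not part of the original steps, and assumes the paths used above:

# Validate the configuration file syntax
/opt/filebeat/filebeat test config -c /opt/filebeat/filebeat.yml
# Verify that the elasticsearch output is reachable
/opt/filebeat/filebeat test output -c /opt/filebeat/filebeat.yml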
kibana configuration
192.168.1.21 /opt/kibana/config/kibana.yml
# Access port of this node
server.port: 5601
# IP of this node
server.host: "192.168.1.21"
# Name of this node
server.name: "192.168.1.21"
# elasticsearch cluster IPs
elasticsearch.hosts: ["http://192.168.1.31:9200", "http://192.168.1.32:9200", "http://192.168.1.33:9200"]

Startup Service
# elasticsearch startup (start all 3 es nodes)
sudo -u elasticsearch /opt/elasticsearch/bin/elasticsearch

# filebeat startup
/opt/filebeat/filebeat -e -c /opt/filebeat/filebeat.yml -d "publish"

# kibana startup
sudo -u kibana /opt/kibana/bin/kibana -c /opt/kibana/config/kibana.yml
The above commands run the services in the foreground. A systemd configuration will be provided in subsequent articles of the "EFK tutorial" series, so stay tuned!
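Once all three elasticsearch nodes are up, the cluster state can be queried from any node via the standard REST API. This is a minimal optional sanity check, not part of the original steps:

# Expect "number_of_nodes" : 3 and a green (or yellow) status
curl -s "http://192.168.1.31:9200/_cluster/health?pretty"
# List the cluster members
curl -s "http://192.168.1.31:9200/_cat/nodes?v"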
Kibana interface configuration
1️⃣ Use a browser to visit 192.168.1.21:5601; the following interface indicates that startup was successful
2️⃣ Click "Try our sample data"
3️⃣ For "Help us improve the Elastic Stack by providing usage statistics for basic features. We will not share this data outside of Elastic", click "no"
4️⃣ On "Add Data to kibana", click "Add data"
5️⃣ Enter the view
Test
Access nginx to generate logs
curl -I "http://192.168.1.11"
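If kibana shows no data, it can help to confirm on the elasticsearch side that filebeat has actually created the monthly index. This is an optional check, not part of the original steps, and assumes the index name pattern configured in filebeat.yml:

# List the nginx_access indices and their document counts
curl -s "http://192.168.1.31:9200/_cat/indices/nginx_access_*?v"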
View data on kibana
1️⃣ Create an index template
2️⃣ Enter the name of the index template you want to create (it should match the index written by filebeat, e.g. nginx_access_*)
3️⃣ View the data from the earlier curl request
Follow-up articles
This article is the first in the "EFK tutorial" series. Subsequent EFK articles will be released one after another, covering role separation, performance optimization and other practical topics.