

EFK Building Process and ES Lifecycle Management





Overview

This article covers the construction of an EFK platform and index lifecycle management in ES. The platform uses the EFK architecture (ElasticSearch-6.6.1 + FileBeat-6.6.2 + Kibana-6.6.1); it is recommended that the major and minor versions of the three components stay the same.

EFK concept

EFK is a centralized log management architecture made up of the following components:

Elasticsearch: an open-source distributed search engine that collects, analyzes, and stores data. Its features include distributed operation, zero configuration, automatic discovery, automatic index sharding, an index replica mechanism, a RESTful interface, multiple data sources, and automatic search load balancing.

Kibana: provides a friendly web interface for analyzing logs from Logstash, Beats, and Elasticsearch, helping to summarize, analyze, and search important log data.

Filebeat: a lightweight log collector. Filebeat is configured on each application server to collect logs and ship them to Elasticsearch.

I. ElasticSearch

# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
# vi /etc/yum.repos.d/elasticsearch.repo
[elasticsearch-6.x]
name=Elasticsearch repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

# yum install elasticsearch
# vim /etc/elasticsearch/elasticsearch.yml
network.host: 0.0.0.0

# service elasticsearch restart
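Once Elasticsearch is up, a quick sanity check (assuming the default port 9200 and no security enabled, as in this minimal setup) is to query the node and the cluster health:

curl http://localhost:9200
curl http://localhost:9200/_cat/health?v

The first call returns the node name and version information; the second reports the cluster status (green / yellow / red).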

II. Kibana

1. Deployment

# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
# vim /etc/yum.repos.d/kibana.repo
[kibana-6.x]
name=Elasticsearch repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

# yum install kibana
# vim /etc/kibana/kibana.yml
server.host: "kibana server IP"
elasticsearch.hosts: ["http://ES server IP:9200"]
# If Kibana is accessed through a reverse proxy, the following is also needed;
# the actual value of the path depends on your setup.
server.basePath: "/kibana"
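After starting Kibana, its status endpoint can be used to verify that it is reachable and connected to Elasticsearch (assuming the default Kibana port 5601; the host below is a placeholder):

curl "http://<kibana server IP>:5601/api/status"

The response is a JSON document describing Kibana's overall state and the status of its plugins.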

2. Download the Chinese language pack and copy it to the specified directory

# wget https://codeload.github.com/anbai-inc/Kibana_Hanization/zip/master
# unzip master
# cp -r Kibana_Hanization-master/translations/ /usr/share/kibana/src/legacy/core_plugins/kibana/

# Modify the language configuration
# vim /etc/kibana/kibana.yml
i18n.locale: "zh_CN"

3. Restart the service

service kibana restart

III. FileBeat

Filebeat belongs to the Beats family. The Beats family currently contains six tools:

Packetbeat (collect network traffic data)

Metricbeat (collects data such as CPU and memory usage at the system, process, and file system levels)

Filebeat (collect file data)

Winlogbeat (collect Windows event log data)

Auditbeat (lightweight audit log collector)

Heartbeat (lightweight server health collector)

1. Deployment

# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
# vim /etc/yum.repos.d/filebeat.repo
[filebeat-6.x]
name=Elasticsearch repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

# yum install filebeat

2. Configuration

/etc/filebeat/filebeat.yml

filebeat.inputs:
# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.
- type: log
  # Change to true to enable this input configuration.
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - d:/ams_logs/*.log
  encoding: gbk

# Output configuration
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["ES Server IP:9200"]
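Before restarting the service, Filebeat's built-in test commands can be used to validate the configuration and the connection to Elasticsearch (paths assume the RPM install described above):

filebeat test config -c /etc/filebeat/filebeat.yml
filebeat test output -c /etc/filebeat/filebeat.yml

The first command checks the YAML for errors; the second attempts to reach the configured Elasticsearch hosts and reports whether the connection succeeds.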

3. Restart the service

service filebeat restart
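After the restart, new Filebeat indexes should begin to appear in Elasticsearch. A simple check (index names assume Filebeat's default filebeat-* naming, before the custom naming configured in section IV; the host is a placeholder):

curl "http://<ES server IP>:9200/_cat/indices/filebeat-*?v"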

The collected logs can then be viewed in Kibana (original screenshot omitted).

IV. ES Lifecycle Management

For log data, because a single index quickly becomes a storage bottleneck, Elasticsearch generally recommends creating multiple time-suffixed indexes for the same log stream, and users then rely on a scheduled job to delete expired indexes. Starting with version 6.6, x-pack introduces index lifecycle management (ILM) APIs that simplify and enhance the management of such log indexes. ILM divides index data into four time-based phases: Hot, Warm, Cold, and Delete, applies different handling to each phase, and thereby realizes lifecycle management for log data.

1. Policy configuration

In Kibana, go to Management → Index Lifecycle Policies and click Create Policy.
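The same policy can also be created through the ILM API instead of the Kibana UI. The request below is only an illustrative sketch: the phase thresholds and actions are example values, not taken from this article, while the policy name matches the beats-default-policy used in the next section:

curl -X PUT "http://<ES server IP>:9200/_ilm/policy/beats-default-policy" \
  -H 'Content-Type: application/json' -d'
{
  "policy": {
    "phases": {
      "hot":    { "actions": { "rollover": { "max_size": "50gb", "max_age": "1d" } } },
      "delete": { "min_age": "7d", "actions": { "delete": {} } }
    }
  }
}'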

2. Log generation

Filebeat can generate the target indexes in two ways; in general, the default generation policy is recommended.

2.1. Default generation policy

Open the Filebeat configuration file and add the following. Note that with this scheme the ILM policy name must be beats-default-policy.

output.elasticsearch:
  hosts: ["ES Server IP:9200"]
  ilm.enabled: true
  ilm.rollover_alias: "fsl.ams"
  ilm.pattern: "{now/d}-000001"
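To confirm that the rollover alias and the lifecycle policy have actually been applied (alias and index names follow the fsl.ams configuration above), the alias listing and the ILM explain API can be queried:

curl "http://<ES server IP>:9200/_cat/aliases/fsl.ams?v"
curl "http://<ES server IP>:9200/fsl.ams-*/_ilm/explain?pretty"

The explain output shows, for each index, which policy is attached and which phase the index is currently in.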

2.2. Advanced generation strategy

Open the Filebeat configuration file and add the following. On version 6.6.1, create an appropriate index template in ES before using this scheme; otherwise the index that Filebeat generates directly will have no alias (a suspected bug), and the lifecycle policy cannot be applied.

output.elasticsearch:
  hosts: ["ES server IP:9200"]
  index: "fsl.ams-%{+yyyy.MM.dd}"

setup.template.name: "fsl.ams"
setup.template.pattern: "fsl.ams-*"
setup.template.settings.index.lifecycle.rollover_alias: "fsl.ams"
setup.template.settings.index.lifecycle.name: "beats-default-policy"
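For the index template mentioned in the note above, the request below is a minimal sketch of what such a template might look like (illustrative only: it reuses the fsl.ams naming from this section, and whether the alias additionally needs a write-index flag depends on how rollover is driven in your setup):

curl -X PUT "http://<ES server IP>:9200/_template/fsl.ams" \
  -H 'Content-Type: application/json' -d'
{
  "index_patterns": ["fsl.ams-*"],
  "settings": {
    "index.lifecycle.name": "beats-default-policy",
    "index.lifecycle.rollover_alias": "fsl.ams"
  },
  "aliases": {
    "fsl.ams": {}
  }
}'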



