

How to use Elasticsearch+Fluentd+Kafka to build log system

2025-01-18 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/01 Report--

This article introduces how to use Elasticsearch+Fluentd+Kafka to build a log system. It walks through a practical setup step by step; I hope you read it carefully and get something out of it!

Preface

Because Logstash consumes a lot of memory and is comparatively inflexible, ELK is gradually being replaced by EFK. The EFK discussed in this article is Elasticsearch+Fluentd+Kafka. Strictly speaking, the K in EFK usually stands for Kibana, which handles log display; that part is not shown here. This article covers only the data collection pipeline.

Premise

Docker

Docker-compose

Apache kafka service

Architecture and data collection flow

cadvisor collects the monitoring data of each container and pushes it to Kafka.

The data transmission link is as follows: cadvisor -> Kafka -> Fluentd -> Elasticsearch

Each service can be scaled horizontally to add capacity to the logging system.

Configuration file

docker-compose.yml

version: "3.7"
services:
  elasticsearch:
    image: elasticsearch:7.5.1
    environment:
      - discovery.type=single-node   # start in single-node mode
    ports:
      - 9200:9200
  cadvisor:
    image: google/cadvisor
    command:
      - -storage_driver=kafka
      - -storage_driver_kafka_broker_list=192.168.1.60:9092   # kafka service IP:PORT
      - -storage_driver_kafka_topic=kafeidou
    depends_on:
      - elasticsearch
  fluentd:
    image: lypgcs/fluentd-es-kafka:v1.3.2
    volumes:
      - ./:/etc/fluent
      - /var/log/fluentd:/var/log/fluentd

Where:

The data generated by cadvisor is sent to the kafka service at 192.168.1.60, with the topic kafeidou.

Elasticsearch is started in single-node mode (the discovery.type=single-node environment variable); single-node mode keeps the experiment simple and easy to observe end to end.

fluent.conf

<source>
  @type kafka
  brokers 192.168.1.60:9092
  format json
  topic kafeidou
</source>

# <source>
#   @type http
#   port 8888
# </source>

<match **>
  @type copy
  # <store>
  #   @type stdout
  # </store>
  <store>
    @type elasticsearch
    host 192.168.1.60
    port 9200
    logstash_format true
    # target_index_key machine_name
    logstash_prefix kafeidou
    logstash_dateformat %Y.%m.%d
    flush_interval 10s
  </store>
</match>

Where:

The copy plugin (@type copy) duplicates the data received by fluentd, which is convenient for debugging: the copies can be printed to the console or stored in a file. These extra outputs are commented out by default, and only the required elasticsearch output plugin is enabled.

You can uncomment the @type stdout section when needed to check whether data is being received.

The input side is also configured with an http input, turned off by default, which is likewise used for debugging by posting data into fluentd directly.

With the http input enabled, you can execute the following command on Linux:

curl -i -X POST -d 'json={"action":"write","user":"kafeidou"}' http://localhost:8888/mytag

The target_index_key parameter takes the value of a field in the record as the es index. For example, this configuration file (when the line is uncommented) uses the value of the machine_name field as the es index.
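To make the indexing behavior concrete, here is a small Python sketch of the naming rules described above. This is an illustrative model, not code from fluent-plugin-elasticsearch: with logstash_format enabled the index becomes prefix-date, and when target_index_key is set, the named field's value is used instead.

```python
from datetime import date

def choose_index(record: dict, target_index_key=None,
                 logstash_prefix: str = "kafeidou",
                 logstash_dateformat: str = "%Y.%m.%d",
                 day: date = date(2025, 1, 18)) -> str:
    """Illustrative model of how the es index name is chosen (not the plugin's API)."""
    if target_index_key and target_index_key in record:
        # target_index_key wins: the field's value becomes the index name.
        return record[target_index_key]
    # Otherwise logstash_format builds <logstash_prefix>-<formatted date>.
    return f"{logstash_prefix}-{day.strftime(logstash_dateformat)}"

print(choose_index({}))                              # → kafeidou-2025.01.18
print(choose_index({"machine_name": "55a4a25feff6"},
                   target_index_key="machine_name")) # → 55a4a25feff6
```

This is why the verification step later in the article shows an index named after a container id rather than kafeidou-YYYY.MM.DD.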

Start deployment

Execute in the directory containing the docker-compose.yml file and the fluent.conf file:

docker-compose up -d

After checking that all containers are running properly, you can verify the setup by checking whether elasticsearch has received the expected data. Here, check whether the es index has been generated and how many documents it holds:

[root@master kafka]# curl http://192.168.1.60:9200/_cat/indices?v
health status index        uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   55a4a25feff6 Fz_5v3suRSasX_Olsp-4tA   1   1          1            0        4kb            4kb

You can also enter http://192.168.1.60:9200/_cat/indices?v directly in the browser to view the results, which will be more convenient.

You can see that I used the machine_name field as the index value here. The query result shows that an index named 55a4a25feff6 was generated, containing one document (docs.count).
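If you want to script this verification instead of reading the table by eye, the tabular _cat output is easy to parse. The helper below is a hypothetical convenience function, not part of any Elasticsearch client; the sample string is the output shown above.

```python
def parse_cat_indices(text: str) -> list:
    """Parse the whitespace-aligned output of GET /_cat/indices?v into dicts."""
    lines = [line for line in text.strip().splitlines() if line.strip()]
    header = lines[0].split()
    # Each data row maps header column -> token (all values stay strings).
    return [dict(zip(header, line.split())) for line in lines[1:]]

sample = """health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
yellow open 55a4a25feff6 Fz_5v3suRSasX_Olsp-4tA 1 1 1 0 4kb 4kb"""

rows = parse_cat_indices(sample)
print(rows[0]["index"], rows[0]["docs.count"])  # → 55a4a25feff6 1
```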

So far, a kafka -> fluentd -> es log collection pipeline has been built.

Of course, the architecture is not fixed. You can also collect data as fluentd -> kafka -> es. That is not demonstrated here; it is just a matter of modifying the fluent.conf configuration file and swapping the es- and kafka-related configuration between the input and output sides.
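As a sketch of that reversed direction, the output side of fluent.conf would use a kafka output (here the kafka2 output from fluent-plugin-kafka) instead of elasticsearch. The tail path and tag below are hypothetical placeholders, not from the original setup:

```
# Hypothetical input: tail application logs instead of consuming from kafka.
<source>
  @type tail
  path /var/log/containers/*.log
  tag app.logs
  <parse>
    @type json
  </parse>
</source>

# Output to kafka; a separate consumer would then ship the topic into es.
<match app.**>
  @type kafka2
  brokers 192.168.1.60:9092
  default_topic kafeidou
  <format>
    @type json
  </format>
</match>
```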

You are encouraged to read the official documentation. The fluentd-es and fluentd-kafka plugins can be found on GitHub or on fluentd's official website.

This is the end of "how to use Elasticsearch+Fluentd+Kafka to build a log system". Thank you for reading!
