This article introduces the architecture of the ELK log system. Many people run into questions about it in real projects, so let's walk through the common setups and how to handle them. I hope you read carefully and find it useful!
Log data processing
Systems produce a large volume of logs, and operations (O&M) teams need to collect, filter, analyse, and visualise them. How can these functions be implemented?
There are many options. For example, the ELK stack (Elasticsearch, Logstash, Kibana) makes it easy to collect, analyse, and ship log data in real time and to display it graphically.
So how should ELK be used? The appropriate ELK architecture depends on the log volume. The following are common architectures:
ELK Architecture 1
Logstash is deployed on each node to collect the relevant logs and data; after parsing and filtering, the data is sent to Elasticsearch on a remote server for storage.
Elasticsearch compresses and stores the data in shards and exposes a variety of APIs for querying and operating on it. Kibana then lets users query the logs through a web interface and generate reports on demand.
The advantage of this architecture is that it is simple to build and easy to use. The disadvantage is that Logstash is resource-hungry: it consumes significant CPU and memory on every node it runs on. In addition, with no message queue acting as a buffer, there is a risk of data loss. This architecture is therefore recommended for beginners or for low-volume environments.
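To make the "variety of APIs" concrete, here is a minimal sketch of querying logs stored in Elasticsearch through its REST search API from Python. The host, index pattern, and field names are placeholders assumed for illustration, not values taken from the article.

import json
import requests

ES_URL = "http://localhost:9200"   # assumed Elasticsearch address
INDEX = "logstash-*"               # assumed index pattern written by Logstash

# Fetch the 10 most recent log entries from the last hour that mention "error".
query = {
    "size": 10,
    "sort": [{"@timestamp": {"order": "desc"}}],
    "query": {
        "bool": {
            "must": [{"match": {"message": "error"}}],
            "filter": [{"range": {"@timestamp": {"gte": "now-1h"}}}],
        }
    },
}

resp = requests.post(
    f"{ES_URL}/{INDEX}/_search",
    headers={"Content-Type": "application/json"},
    data=json.dumps(query),
    timeout=10,
)
resp.raise_for_status()

for hit in resp.json()["hits"]["hits"]:
    source = hit["_source"]
    print(source.get("@timestamp"), source.get("message"))

In practice Kibana issues queries of exactly this shape on your behalf; the sketch just shows that the same data is reachable programmatically.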
ELK Architecture 2
This leads to a second architecture:
The main feature of this architecture is the introduction of a message queue. A Logstash agent on each node (the first-tier Logstash, used mainly to ship data) first writes the data to a message queue (commonly Kafka or Redis).
Then a Logstash server (the second-tier Logstash, used mainly to pull data from the queue and to filter and parse it) passes the formatted data to Elasticsearch for storage.
Finally, Kibana presents the logs and data to users. Because Kafka (or Redis) buffers the data, even if the remote Logstash server goes down, no data is lost: it remains in the queue until the server recovers.
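A minimal sketch of this queue-buffered flow, using Redis as the buffer: an agent pushes raw log lines onto a Redis list, and an indexer pops them off and writes them into Elasticsearch. The host names, the list key "log-queue", and the index "app-logs" are assumptions made for the example; a real deployment would use Logstash or Filebeat rather than hand-written code.

import json
import redis
import requests

r = redis.Redis(host="localhost", port=6379)   # assumed Redis address

def agent_ship(line: str) -> None:
    """First-tier agent: push a raw log line onto the queue and return immediately."""
    r.rpush("log-queue", line)

def indexer_loop() -> None:
    """Second-tier indexer: pull lines off the queue, parse, and store them in Elasticsearch."""
    while True:
        _, raw = r.blpop("log-queue")            # blocks until a line is available
        doc = {"message": raw.decode("utf-8")}   # a real pipeline would grok/parse here
        requests.post(
            "http://localhost:9200/app-logs/_doc",   # assumed index name
            headers={"Content-Type": "application/json"},
            data=json.dumps(doc),
            timeout=10,
        )

The point of the design is visible in the split: agent_ship only touches the queue, so the node producing logs is never slowed down or endangered by problems on the Elasticsearch side.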
This architecture suits large clusters and high data volumes. Replacing the front-end Logstash agent with Filebeat further reduces the resources that log collection consumes on the business hosts.
At the same time, running the message queue as a Kafka cluster keeps the collected data safe and the pipeline stable, while deploying the back-end Logstash and Elasticsearch as clusters improves the efficiency, scalability, and throughput of the ELK system as a whole.
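For illustration, here is a sketch of what the second tier does in the Filebeat -> Kafka -> Elasticsearch flow, written with the kafka-python client in place of Logstash so the mechanics are visible. The broker list, the topic "app-logs", and the index name are assumptions, and a production setup would simply configure Logstash's kafka input instead.

import json
import requests
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "app-logs",                               # assumed topic that Filebeat publishes to
    bootstrap_servers=["localhost:9092"],     # assumed broker list
    group_id="log-indexer",
    value_deserializer=lambda v: v.decode("utf-8"),
)

BULK_URL = "http://localhost:9200/_bulk"      # assumed Elasticsearch bulk endpoint

batch = []
for msg in consumer:
    # Each bulk action is two NDJSON lines: an action header and the document itself.
    batch.append(json.dumps({"index": {"_index": "app-logs"}}))
    batch.append(json.dumps({"message": msg.value}))
    if len(batch) >= 1000:                    # flush every 500 documents
        body = "\n".join(batch) + "\n"
        requests.post(
            BULK_URL,
            headers={"Content-Type": "application/x-ndjson"},
            data=body,
            timeout=30,
        )
        batch = []

Batching through the _bulk API is what gives this tier its throughput; consuming from Kafka in a consumer group is what lets it scale out horizontally.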
Operation and maintenance monitoring with big data thinking
Big data analysis originated in the log analysis done by operations staff and gradually expanded to all kinds of business analysis, as people discovered how much value this data held.
So how do we do operations with big data thinking? One idea behind big data architecture is to give operations a platform on which these problems can be solved conveniently, rather than expecting the big data platform itself to solve every problem that arises.
A basic big data operations architecture looks like this:
For operation and maintenance monitoring, applying big data thinking involves three steps:
1. Acquire the required data.
2. Filter out abnormal data.
3. Set alarm thresholds and raise alarms through a third-party monitoring platform.
The most reliable signal any system gives is its log output: it tells us whether the system is healthy and what has happened. We used to check the logs by hand when a problem occurred, or write scripts to analyse them periodically. Now that this work can be folded into an existing platform, the only thing left for us to do is define the logic for analysing the logs.
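A minimal sketch of what "defining the logic for analysing logs" can look like: count error-level lines per component in a log file and alert when a threshold is crossed. The file path, the threshold of 50, the log line layout, and the alert hook are all assumptions for illustration; on a real platform this logic would run against the data already collected by the pipeline above.

import re
from collections import Counter

ERROR_PATTERN = re.compile(r"\b(ERROR|FATAL)\b")
THRESHOLD = 50                                 # assumed alert threshold

def analyse(path: str) -> Counter:
    """Scan a log file and count error-level lines per component."""
    counts = Counter()
    with open(path, encoding="utf-8", errors="replace") as f:
        for line in f:
            if ERROR_PATTERN.search(line):
                parts = line.split()
                # Assumes the third whitespace-separated field names the component.
                component = parts[2] if len(parts) > 2 else "unknown"
                counts[component] += 1
    return counts

def alert(component: str, n: int) -> None:
    """Stand-in for a real notification channel (mail, webhook, monitoring API)."""
    print(f"ALERT: {component} produced {n} error lines, threshold is {THRESHOLD}")

if __name__ == "__main__":
    for component, n in analyse("/var/log/app/app.log").items():   # assumed path
        if n >= THRESHOLD:
            alert(component, n)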
"ELK log system architecture is what" the content is introduced here, thank you for reading. If you want to know more about industry-related knowledge, you can pay attention to the website. Xiaobian will output more high-quality practical articles for everyone!