2025-01-31 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 06/02 Report
Elastic Stack introduction
In recent years, the Internet has been generating data at an ever-increasing rate. To help users find what they want more quickly and accurately, in-site search and in-application search have become indispensable features. At the same time, the data accumulated by enterprises keeps growing, and with it the demand for analyzing, processing, and visualizing massive data.
In this field, the open source project Elasticsearch has attracted strong market attention. For example, last year Elastic entered a partnership with Alibaba Cloud (Aliyun) to provide the Aliyun Elasticsearch cloud service; Elastic went public in October of this year; the Elastic China Developer Conference was held in November; and almost all major cloud vendors offer Elasticsearch-based cloud search services. All of these events reflect the growing popularity and importance of Elasticsearch in enterprises.
First, let's look at the introduction on the official website. The core keywords: search and analytics.
Elasticsearch is a distributed, RESTful search and analytics engine
capable of solving a growing number of use cases. As the heart of the
Elastic Stack, it centrally stores your data so you can discover the
expected and uncover the unexpected.
In other words, Elasticsearch is a distributed, RESTful-style search and data analysis engine that can address a growing number of use cases. As the core of the Elastic Stack, it centrally stores your data and helps you discover the expected and uncover the unexpected.
Product advantages: speed, scalability, resiliency, and flexibility.
In some application scenarios, Elasticsearch is used together with other Elastic products such as Kibana and Logstash. The common term ELK refers to Elasticsearch, Logstash, and Kibana, while Elastic Stack refers to all of Elastic's open source products.
Application scenarios: (screenshot from the Elastic official website omitted)
Hands-on scenario
Next, let's work through an application scenario.
Scenario: a back-end application is deployed on a cloud virtual machine and writes its logs to a file. The requirements: collect the log contents and parse each line into structured data, making it easy to search, process, and visualize.
Solution: use Filebeat to forward the logs to Logstash, which parses or transforms the data and then forwards it to Elasticsearch for storage; from there, the data is yours to process. Here we visualize the log data according to certain requirements, and that visualization work is left to Kibana. (An alternative is to forward the log data directly from Filebeat to Elasticsearch and let an Elasticsearch ingest node handle the data processing.)
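The ingest-node alternative mentioned in parentheses can be sketched as an ingest pipeline defined directly in Elasticsearch. This is only an illustrative sketch, assuming an Elasticsearch node at 192.168.0.26:9200 as in the setup below; the pipeline name java-log-pipeline is hypothetical, not something from this article.

```shell
# Sketch of the Filebeat -> Elasticsearch ingest-node alternative:
# an ingest pipeline whose grok processor does the parsing that
# Logstash performs in the main setup. Pipeline name is hypothetical.
curl -X PUT 'http://192.168.0.26:9200/_ingest/pipeline/java-log-pipeline' \
  -H 'Content-Type: application/json' -d '
{
  "description": "Split java log lines into five fields",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": ["%{GREEDYDATA:Timestamp}\\|%{GREEDYDATA:ThreadName}\\|%{WORD:LogLevel}\\|%{GREEDYDATA:TextInformation}\\|%{GREEDYDATA:ClassName}"]
      }
    }
  ]
}'
```

Filebeat would then ship directly to Elasticsearch and reference this pipeline via the pipeline setting of its Elasticsearch output.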
Required software
The product versions used in this case are as follows:
System: CentOS. The components are deployed on separate hosts here, but they can also run on a single machine.
1. Kibana_v6.2.3 (IP: 192.168.0.26)
2. Elasticsearch_v6.2.3 (IP: 192.168.0.26)
3. Filebeat_v6.2.3 (IP: 192.168.0.25)
4. Logstash_v6.2.3 (IP: 192.168.0.25)
Log content
Suppose one line of the log reads as follows (the log files are placed in the /root/logs directory):
2018-11-08 20:46:25.949|https-jsse-nio-10.44.97.19-8979-exec-11|INFO|CompatibleClusterServiceImpl.getClusterResizeStatus.resizeStatus=|com.huawei.hwclouds.rds.trove.api.service.impl.CompatibleClusterServiceImpl.getResizeStatus(CompatibleClusterServiceImpl.java:775)

Splitting on "|" yields five fields from one line of log:
2018-11-08 20:46:25.949                                                # time
https-jsse-nio-10.44.97.19-8979-exec-11                                # thread name
INFO                                                                   # log level
CompatibleClusterServiceImpl.getClusterResizeStatus.resizeStatus=      # log content
com.huawei.hwclouds.rds.trove.api.service.impl.CompatibleClusterServiceImpl.getResizeStatus(CompatibleClusterServiceImpl.java:775)  # class name
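The split described above can be illustrated with a minimal Python sketch (for illustration only; this is not part of the stack itself):

```python
# Split one application log line on "|" into its five fields.
line = ("2018-11-08 20:46:25.949|https-jsse-nio-10.44.97.19-8979-exec-11|INFO|"
        "CompatibleClusterServiceImpl.getClusterResizeStatus.resizeStatus=|"
        "com.huawei.hwclouds.rds.trove.api.service.impl."
        "CompatibleClusterServiceImpl.getResizeStatus(CompatibleClusterServiceImpl.java:775)")

# Unpack the five "|"-separated fields, trimming stray whitespace.
timestamp, thread_name, log_level, text_information, class_name = (
    field.strip() for field in line.split("|"))

print(log_level)    # INFO
print(timestamp)    # 2018-11-08 20:46:25.949
```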
The file directory layout is as follows (Elasticsearch and Kibana are on the other server and already started):
Logstash configuration
The logs directory stores the application logs to be collected, and logstash.conf is the configuration file prepared for Logstash.
The logstash.conf content is as follows:
input {
  beats {
    port => 5044
  }
}

filter {
  grok {
    match => { "message" => "%{GREEDYDATA:Timestamp}\|%{GREEDYDATA:ThreadName}\|%{WORD:LogLevel}\|%{GREEDYDATA:TextInformation}\|%{GREEDYDATA:ClassName}" }
  }
  date {
    match => ["Timestamp", "yyyy-MM-dd HH:mm:ss.SSS"]
  }
}

output {
  elasticsearch {
    hosts => "192.168.0.26:9200"
    manage_template => false
    index => "java_log"
  }
}
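To preview what the grok filter extracts, here is a rough Python equivalent of that pattern (an approximation for illustration only: grok's GREEDYDATA and WORD are mapped to .*? and \w+ here, and this is not how Logstash runs internally):

```python
import re

# Approximate Python equivalent of the grok pattern in logstash.conf.
LOG_PATTERN = re.compile(
    r"(?P<Timestamp>.*?)\|(?P<ThreadName>.*?)\|(?P<LogLevel>\w+)\|"
    r"(?P<TextInformation>.*?)\|(?P<ClassName>.*)")

line = ("2018-11-08 20:46:25.949|https-jsse-nio-10.44.97.19-8979-exec-11|INFO|"
        "CompatibleClusterServiceImpl.getClusterResizeStatus.resizeStatus=|"
        "com.huawei.hwclouds.rds.trove.api.service.impl."
        "CompatibleClusterServiceImpl.getResizeStatus(CompatibleClusterServiceImpl.java:775)")

match = LOG_PATTERN.fullmatch(line)
print(match.group("LogLevel"))   # INFO
print(match.group("Timestamp"))  # 2018-11-08 20:46:25.949
```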
Next, start Logstash (for example, bin/logstash -f logstash.conf from the Logstash installation directory). Once it starts successfully, it listens on port 5044 and waits for log data to be passed in.
Filebeat configuration
Next, take a look at the configuration file of Filebeat:
filebeat.prospectors:
- type: log
  enabled: true
  # configure the path to the log directory or the log files
  paths:
    - /root/logs/*.log
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false
setup.kibana:
  host: "192.168.0.26:5601"
# configure output to Logstash
output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]
Start Filebeat (for example, ./filebeat -e -c filebeat.yml). When the log file is updated, Filebeat picks up the new lines and forwards them.
Log query and visualization
Finally, let's use Kibana to visualize the logs. Create an index pattern in Kibana named java_log, then query the log data on the Discover page.
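The stored documents can also be checked directly against the Elasticsearch REST API before touching Kibana. A sketch for illustration only, assuming Elasticsearch is reachable at 192.168.0.26:9200 and the index is named java_log as configured above:

```shell
# Search the java_log index for INFO-level entries (illustrative only;
# requires the Elasticsearch node from this setup to be running).
curl -s 'http://192.168.0.26:9200/java_log/_search?pretty' \
  -H 'Content-Type: application/json' -d '
{
  "query": { "match": { "LogLevel": "INFO" } },
  "size": 5
}'
```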
Create visual graphics in Visualize.
Assemble our graphics in Dashboard.
At this point, a simple log collection, analysis, and visualization pipeline is complete.
Elastic Stack has many more powerful features; we will look at an in-application search case later.
References
https://www.elastic.co/cn/blog/alibaba-cloud-to-offer-elasticsearch-kibana-and-x-pack-in-china (Aliyun partners with Elastic)
https://www.elastic.co/guide/en/beats/libbeat/6.2/getting-started.html (Getting started with Beats)
https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html (grok filter plugin)