This article shows how to use Sleuth together with ELK to collect and search distributed tracing logs. I hope you find something useful in the detailed walkthrough below.
We have already implemented distributed tracing across service calls, but the resulting logs are scattered over many machines. Whenever a problem occurs, we have to gather the logs from each machine before we can investigate.
This is where a log analysis system such as ELK comes in. It collects log information from multiple servers into one place, so that when something goes wrong we can simply search for the whole request chain by its traceId.
Introduction to ELK
ELK consists of three components:
Elasticsearch is an open-source distributed search engine. Its notable features include a distributed architecture, zero configuration, automatic discovery, automatic index sharding, index replication, a RESTful interface, support for multiple data sources, and automatic search load balancing.
Logstash is a fully open-source tool that collects, parses, and stores logs for later use.
Kibana is a free, open-source tool that provides a friendly web interface for log analysis on top of Logstash and Elasticsearch, letting you aggregate, analyze, and search important log data.
Output logs in JSON format
You can have Logback output logs in JSON format, let Logstash collect them and store them in Elasticsearch, and then view them in Kibana. To produce JSON output you need to add a dependency, as shown below.
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>5.2</version>
</dependency>
Then create a logback-spring.xml file and configure the JSON format that Logstash will collect, as follows:
<appender name="logstash" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>${LOG_FILE}.json</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
        <fileNamePattern>${LOG_FILE}.json.%d{yyyy-MM-dd}.gz</fileNamePattern>
        <maxHistory>7</maxHistory>
    </rollingPolicy>
    <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
        <providers>
            <timestamp>
                <timeZone>UTC</timeZone>
            </timestamp>
            <pattern>
                <pattern>
                    {
                        "severity": "%level",
                        "service": "${springAppName:-}",
                        "trace": "%X{X-B3-TraceId:-}",
                        "span": "%X{X-B3-SpanId:-}",
                        "parent": "%X{X-B3-ParentSpanId:-}",
                        "exportable": "%X{X-Span-Export:-}",
                        "pid": "${PID:-}",
                        "thread": "%thread",
                        "class": "%logger{40}",
                        "rest": "%message"
                    }
                </pattern>
            </pattern>
        </providers>
    </encoder>
</appender>
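Two details are easy to miss here. The ${springAppName:-} placeholder only resolves if the Spring application name is exposed to Logback, and the appender still has to be attached to a logger before anything is written. Below is a minimal sketch of both, assuming the appender name "logstash" from the snippet above; these go in the same logback-spring.xml:

<!-- expose spring.application.name to Logback as springAppName (declare before use) -->
<springProperty scope="context" name="springAppName" source="spring.application.name"/>

<!-- attach the JSON file appender so log events are actually written -->
<root level="INFO">
    <appender-ref ref="logstash"/>
</root>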
After the integration you will see a log file ending in ".json" in the log output directory. Its contents are JSON and can be collected directly by Logstash. A sample line looks like this:
{"@ timestamp": "2019-11-30T01:48:32.221+00:00", "severity": "DEBUG", "service": "fsh-substitution", "trace": "41b5a575c26eeea1", "span": "41b5a575c26eeea1", "parent": "41b5a575c26eeea1", "exportable": "false", "pid": "12024", "thread": "hystrix-fsh-house-10", "class": "c.f.a.client.fsh.house.HouseRemoteClient" "rest": "[HouseRemoteClient#hosueInfo]