2025-01-16 Update From: SLTechnology News&Howtos > Development
This article explains how to implement log link tracing in a microservice distributed architecture. The explanation is straightforward and easy to follow; let's study how it is done.
Background
The most common way for developers to troubleshoot system problems is to check the system logs. In a distributed environment, ELK is generally used to collect logs centrally, but when concurrency is high it is still troublesome to locate a problem from the logs. Consider the following figure:

In the figure, a user requests a URL and the request flows through the whole link shown. Every processing layer produces its own logs, so how do we string these logs together into a full-path log for a single request?

In an existing system, logs from many other users and threads are interleaved in the same output, so it is difficult to filter out all the logs related to one specific request. How do we deal with this?
Solution idea
We can assign a unique ID to each request, print that ID in every log line, and pass the ID on to downstream services; downstream services then print their logs with the same ID. All the logs of one request can thus be tracked and displayed across the whole link.

What is the technical implementation? We should intrude into business code as little as possible: using Logback's MDC mechanism, add a traceId field to the log template and reference it in the pattern as %X{traceId}.
What is MDC?
MDC (Mapped Diagnostic Context) is a facility provided by log4j and logback to make logging easier under multithreaded conditions. MDC can be thought of as a Map bound to the current thread, to which you can add key-value pairs.

The contents of the MDC can be accessed by any code executing in the same thread, and a child thread inherits a copy of its parent thread's MDC. When you need to log, you simply read the information you need from the MDC. The program saves the contents of the MDC at an appropriate time; for a web application, this is usually at the beginning of request processing.
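Conceptually, MDC is just a map bound to the current thread. A minimal pure-JDK sketch of the idea (this is not the real slf4j implementation; the class name is illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class MdcSketch {
    // One map per thread, lazily created; this is the essence of MDC
    private static final ThreadLocal<Map<String, String>> CONTEXT =
            ThreadLocal.withInitial(HashMap::new);

    public static void put(String key, String value) {
        CONTEXT.get().put(key, value);
    }

    public static String get(String key) {
        return CONTEXT.get().get(key);
    }

    public static void remove(String key) {
        CONTEXT.get().remove(key);
    }

    public static void main(String[] args) {
        put("traceId", "abc123");
        System.out.println(get("traceId")); // prints "abc123"
    }
}
```

The real logback adapter additionally copies the map to child threads on inheritance, which is exactly the behavior the rest of this article extends.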
Solution implementation
Since MDC uses ThreadLocal internally, its values are only visible in the current thread and are lost in child threads and in downstream services. The main difficulty of the solution is therefore value propagation, which breaks down into the following parts:

How the API gateway passes MDC data to downstream services

How a service receives the data and continues to pass it on when invoking other remote services

How to pass the data to child threads in asynchronous (thread pool) scenarios
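The third point can be reproduced with plain JDK classes: a pooled worker thread is created once and reused, so even an InheritableThreadLocal (which copies values only at thread creation time) never sees values set afterwards. A small demonstration (class name is illustrative):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadLocalPoolDemo {
    // InheritableThreadLocal copies the parent's value only when a thread is created
    static final InheritableThreadLocal<String> CTX = new InheritableThreadLocal<>();

    // Force the pool to create its worker thread now
    public static void warmUp(ExecutorService pool) {
        try {
            pool.submit(() -> { }).get();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // Ask the pooled thread what value it sees
    public static String valueSeenInPool(ExecutorService pool) {
        Callable<String> task = CTX::get;
        try {
            return pool.submit(task).get();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(1);
        warmUp(pool);            // the worker thread exists before set() is called
        CTX.set("traceId-123");
        System.out.println(valueSeenInPool(pool)); // prints "null"
        pool.shutdown();
    }
}
```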
Modify the log template

Add the traceId identity to the log format in the logback configuration file:
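A minimal sketch of such a logback configuration (the appender name and the rest of the pattern are illustrative; the key point is the %X{traceId} placeholder):

```xml
<configuration>
  <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <!-- %X{traceId} pulls the trace id out of the MDC for every log line -->
      <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] [%X{traceId}] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
  <root level="INFO">
    <appender-ref ref="CONSOLE"/>
  </root>
</configuration>
```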
Add a filter to the gateway

This filter solves how the gateway passes MDC data to downstream services:

Generate a traceId and pass it to downstream services through a request header
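A minimal sketch of such a filter, assuming Spring Cloud Gateway (the class name is illustrative; the header and MDC key names follow the constants shown below). Note that in a reactive gateway, putting the id into the gateway's own MDC is best-effort, since later reactor operators may run on other threads:

```java
import java.util.UUID;

import org.slf4j.MDC;
import org.springframework.cloud.gateway.filter.GatewayFilterChain;
import org.springframework.cloud.gateway.filter.GlobalFilter;
import org.springframework.core.Ordered;
import org.springframework.stereotype.Component;
import org.springframework.web.server.ServerWebExchange;
import reactor.core.publisher.Mono;

@Component
public class TraceIdFilter implements GlobalFilter, Ordered {

    private static final String TRACE_ID_HEADER = "x-traceId-header";
    private static final String LOG_TRACE_ID = "traceId";

    @Override
    public Mono<Void> filter(ServerWebExchange exchange, GatewayFilterChain chain) {
        String traceId = UUID.randomUUID().toString().replace("-", "");
        // Put the id into the gateway's own MDC so the gateway logs carry it too
        MDC.put(LOG_TRACE_ID, traceId);
        // Pass the id to the downstream service through a request header
        ServerWebExchange mutated = exchange.mutate()
                .request(r -> r.headers(h -> h.set(TRACE_ID_HEADER, traceId)))
                .build();
        return chain.filter(mutated);
    }

    @Override
    public int getOrder() {
        // Run before other filters so every log line downstream carries the id
        return Ordered.HIGHEST_PRECEDENCE;
    }
}
```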
The MDC in the code above is org.slf4j.MDC, and the constants have the following values:
```java
/** Log link tracking id message header */
String TRACE_ID_HEADER = "x-traceId-header";

/** Log link tracking id log flag */
String LOG_TRACE_ID = "traceId";
```
Add a Spring interceptor to the downstream service
Receive and save the value of traceId:
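A minimal sketch of such an interceptor, assuming Spring MVC (the class name is illustrative; it still needs to be registered through a WebMvcConfigurer):

```java
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.slf4j.MDC;
import org.springframework.web.servlet.HandlerInterceptor;

public class TraceIdInterceptor implements HandlerInterceptor {

    private static final String TRACE_ID_HEADER = "x-traceId-header";
    private static final String LOG_TRACE_ID = "traceId";

    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response,
                             Object handler) {
        // Read the trace id the gateway put into the header and save it into the MDC
        String traceId = request.getHeader(TRACE_ID_HEADER);
        if (traceId != null && !traceId.isEmpty()) {
            MDC.put(LOG_TRACE_ID, traceId);
        }
        return true;
    }

    @Override
    public void afterCompletion(HttpServletRequest request, HttpServletResponse response,
                                Object handler, Exception ex) {
        // Clean up so the id does not leak into the next request on a reused thread
        MDC.remove(LOG_TRACE_ID);
    }
}
```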
Add a Feign interceptor to the downstream service

Continue to pass the current service's traceId value to the next downstream service:
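A minimal sketch of such a Feign interceptor (the class name is illustrative):

```java
import org.slf4j.MDC;
import org.springframework.stereotype.Component;

import feign.RequestInterceptor;
import feign.RequestTemplate;

@Component
public class FeignTraceIdInterceptor implements RequestInterceptor {

    private static final String TRACE_ID_HEADER = "x-traceId-header";
    private static final String LOG_TRACE_ID = "traceId";

    @Override
    public void apply(RequestTemplate template) {
        // Forward the current trace id to the next service through the same header
        String traceId = MDC.get(LOG_TRACE_ID);
        if (traceId != null) {
            template.header(TRACE_ID_HEADER, traceId);
        }
    }
}
```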
Solve the problem of parent-child thread value passing

Thread pools are widely used in business code (asynchronous and parallel processing), and Spring's @Async annotation also runs tasks on a thread pool. Solving this problem takes the following two steps:
Rewrite logback's LogbackMDCAdapter

Since logback's MDC implementation internally uses a ThreadLocal that cannot pass values to pooled child threads, it needs to be rewritten to use Alibaba's TransmittableThreadLocal instead.

TransmittableThreadLocal is Alibaba's open-source extension of InheritableThreadLocal that solves the problem of passing ThreadLocal values when using components that cache threads, such as thread pools. To pass values between the main thread and a thread pool with TransmittableThreadLocal, tasks must be wrapped with TtlRunnable or TtlCallable.
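A small sketch of that wrapping, using the transmittable-thread-local library's TtlRunnable (the class name is illustrative):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import com.alibaba.ttl.TransmittableThreadLocal;
import com.alibaba.ttl.TtlRunnable;

public class TtlDemo {
    static final TransmittableThreadLocal<String> CTX = new TransmittableThreadLocal<>();

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(1);
        CTX.set("traceId-123");
        // TtlRunnable.get captures the caller's TTL values at wrap time
        // and replays them inside the pooled thread, even a reused one
        pool.submit(TtlRunnable.get(() -> System.out.println(CTX.get())));
        pool.shutdown();
    }
}
```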
The rest of the adapter code can follow ch.qos.logback.classic.util.LogbackMDCAdapter as-is; you only need to replace its copyOnInheritThreadLocal variable with a TransmittableThreadLocal.
The TtlMDCAdapterInitializer class loads your own mdcAdapter implementation when the program starts:
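A sketch of such an initializer. It assumes TtlMDCAdapter is your rewritten LogbackMDCAdapter and that it exposes a getInstance() hook that installs itself as the slf4j MDC adapter; both names beyond TtlMDCAdapterInitializer itself are assumptions based on the article's description:

```java
import org.springframework.context.ApplicationContextInitializer;
import org.springframework.context.ConfigurableApplicationContext;

public class TtlMDCAdapterInitializer
        implements ApplicationContextInitializer<ConfigurableApplicationContext> {

    @Override
    public void initialize(ConfigurableApplicationContext context) {
        // Replace the default LogbackMDCAdapter before any logging happens
        TtlMDCAdapter.getInstance();
    }
}
```

For the initializer to run early enough, it is typically registered under the org.springframework.context.ApplicationContextInitializer key in META-INF/spring.factories.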
Extend the thread pool implementation

Add the TtlRunnable and TtlCallable wrappers:
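A minimal sketch, assuming Spring's ThreadPoolTaskExecutor as the base class (the class name is illustrative):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.Future;

import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

import com.alibaba.ttl.TtlCallable;
import com.alibaba.ttl.TtlRunnable;

// Wraps every submitted task with TTL so MDC values stored in a
// TransmittableThreadLocal survive the thread pool boundary
public class TtlThreadPoolTaskExecutor extends ThreadPoolTaskExecutor {

    @Override
    public void execute(Runnable task) {
        super.execute(TtlRunnable.get(task));
    }

    @Override
    public Future<?> submit(Runnable task) {
        return super.submit(TtlRunnable.get(task));
    }

    @Override
    public <T> Future<T> submit(Callable<T> task) {
        return super.submit(TtlCallable.get(task));
    }
}
```

To make @Async methods use it, expose an instance of this class as the application's async executor (for example, as the "taskExecutor" bean or via an AsyncConfigurer).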
Scenario testing
The test code is as follows:
```java
log.info("Test");

@Async
public void test() {
    log.info("Test 1");
}

userService.findByUserName("gu");
```
Logs printed by the API gateway

ELK aggregates the logs, and the entire link's logs can be queried through the traceId

When an exception occurs in the system, you can take the traceId from the exception log and query all log entries for that request directly in the log center, as shown below:
Thank you for reading. The above covers how to implement log link tracing in a microservice distributed architecture. After studying this article, I believe you have a deeper understanding of the topic; the specific usage still needs to be verified in practice. Welcome to follow for more related articles!