2025-01-19 Update From: SLTechnology News&Howtos
In this article, the editor explains in detail how to implement method-level call tracking in Istio. The content is detailed and the steps are clear; I hope this article helps you resolve your doubts.
What are distributed call tracing and the OpenTracing specification?
Compared with a traditional monolithic application, one of the major changes brought by microservices is that the modules of an application are split into independent processes. Under a microservice architecture, what used to be an in-process method call becomes a cross-process RPC call. Compared with method calls inside a single process, cross-process calls are much harder to debug and analyze for faults, and a traditional debugger or log printing is of little help for examining a distributed call.
As shown in the figure above, a request from the client passes through multiple microservice processes. To analyze such a request, the relevant information from every service it traverses must be collected and correlated together; this is called distributed call tracing.
What is OpenTracing?
OpenTracing is a project under the CNCF (Cloud Native Computing Foundation) that includes a standard specification for distributed call tracing, together with APIs, programming frameworks, and libraries in various languages. The purpose of OpenTracing is to define a standard for distributed call tracing so that the various implementations can interoperate. There are already a large number of tracers that support the OpenTracing specification, including Jaeger, SkyWalking, and LightStep. By using the OpenTracing API to implement distributed call tracing in a microservice application, you can avoid vendor lock-in and integrate with any OpenTracing-compatible infrastructure at minimal cost.
OpenTracing conceptual model
The conceptual model of OpenTracing is shown in the following figure:
The picture is from https://opentracing.io/
As shown in the figure, OpenTracing mainly contains the following concepts:
Trace: describes an end-to-end transaction in a distributed system, such as a request from a client.
Span: an operation with a name and a duration, such as a REST call or a database operation. A Span is the smallest unit of distributed call tracing; a Trace is composed of multiple Spans.
Span context: the context information of a distributed call, including the Trace id, the Span id, and any other content that needs to be passed to downstream services. An OpenTracing implementation needs to pass the Span context across process boundaries through some serialization mechanism (Wire Protocol) in order to associate the Spans in different processes with the same Trace. The Wire Protocol can be text based, such as HTTP headers, or a binary protocol.
OpenTracing data model
A Trace can be regarded as a directed acyclic graph (DAG) composed of multiple interrelated Spans. The following figure shows a Trace consisting of eight Spans:
        [Span A]  ←←←(the root span)
            |
     +------+------+
     |             |
 [Span B]      [Span C] ←←←(Span C is a `ChildOf` Span A)
     |             |
 [Span D]      +---+-------+
               |           |
           [Span E]    [Span F] >>> [Span G] >>> [Span H]
                                       ↑
                         (Span G `FollowsFrom` Span F)
The trace in the figure above can also be represented in chronological order as follows:
––|–––––––|–––––––|–––––––|–––––––|–––––––|–––––––|–––––––|–> time

 [Span A···················································]
   [Span B··············································]
      [Span D··········································]
    [Span C········································]
         [Span E·······]        [Span F··] [Span G··] [Span H··]
The data structure of Span contains the following:
Name: the name of the operation represented by Span, such as the name of the resource corresponding to the REST interface.
Start timestamp: the start time of the operation represented by the Span.
Finish timestamp: the end time of the operation represented by the Span.
Tags: a series of tags, each a key-value pair. A tag can carry any information that is useful for analyzing the call, such as a method name or a URL.
SpanContext: used to pass Span-related information across process boundaries and needs to be used in conjunction with a serialization protocol (Wire Protocol).
References: the other Spans referenced by this Span. There are two kinds of reference relationships: ChildOf and FollowsFrom.
ChildOf: the most commonly used reference relationship, indicating a direct dependency between the Parent Span and the Child Span, for example the relationship between an RPC server Span and its RPC client Span, or between a SQL insert Span and the ORM save-action Span that issued it.
FollowsFrom: if the Parent Span does not depend on the execution result of the Child Span, the relationship can be expressed as FollowsFrom. For example, an online store may send an email notification to the user after a purchase is paid, but whether or not the notification is delivered successfully does not affect the payment's success; this case is suitable for FollowsFrom.
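The two reference types can be sketched with a minimal, self-contained model. Note this is an illustration only, not the OpenTracing API: the class and method names (SpanNode, childOf, followsFrom) are invented for this sketch, and real Spans are created through an OpenTracing-compatible tracer.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative model only; real Spans come from an OpenTracing tracer.
public class TraceDag {
    enum RefType { CHILD_OF, FOLLOWS_FROM }

    static class Ref {
        final RefType type;
        final SpanNode target;
        Ref(RefType type, SpanNode target) { this.type = type; this.target = target; }
    }

    static class SpanNode {
        final String name;
        final List<Ref> references = new ArrayList<>();
        SpanNode(String name) { this.name = name; }

        // The parent depends on this span's result (e.g. an RPC call).
        SpanNode childOf(SpanNode parent) {
            references.add(new Ref(RefType.CHILD_OF, parent));
            return this;
        }

        // The predecessor does not depend on this span's result
        // (e.g. a fire-and-forget email notification).
        SpanNode followsFrom(SpanNode predecessor) {
            references.add(new Ref(RefType.FOLLOWS_FROM, predecessor));
            return this;
        }
    }

    public static void main(String[] args) {
        // Rebuild a slice of the eight-Span Trace from the figure above.
        SpanNode a = new SpanNode("Span A");
        SpanNode c = new SpanNode("Span C").childOf(a);     // Span C is a ChildOf Span A
        SpanNode f = new SpanNode("Span F").childOf(c);
        SpanNode g = new SpanNode("Span G").followsFrom(f); // Span G FollowsFrom Span F
        System.out.println(g.references.get(0).type);
    }
}
```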
Cross-process call information propagation
SpanContext is a somewhat confusing concept in OpenTracing. The conceptual model says SpanContext is used to pass the context of a distributed call across process boundaries, but in fact OpenTracing only defines SpanContext as an abstract interface. It encapsulates the context of a Span in a distributed call, including the Trace id and Span id the Span belongs to, plus any other information that needs to be passed to downstream services. SpanContext by itself cannot cross a process boundary: a Tracer (an implementation that follows the OpenTracing specification, such as the tracers of Jaeger or SkyWalking) must serialize the SpanContext, pass it to the next process through a Wire Protocol, and deserialize it there to obtain the context needed to generate a Child Span.
To give concrete implementations maximum flexibility, OpenTracing only requires that the SpanContext be passed across processes; it does not specify how the SpanContext is serialized and transmitted over the network. Different tracers can use different Wire Protocols to pass the SpanContext according to their own situation.
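The inject/extract round trip described above can be sketched with a toy SpanContext and a text-map carrier. This is a sketch under stated assumptions: the ToySpanContext class and its inject/extract methods are invented for illustration, and real SpanContext types are tracer-specific.

```java
import java.util.HashMap;
import java.util.Map;

// Toy SpanContext for illustration only; real implementations are tracer-specific.
public class WireDemo {
    static class ToySpanContext {
        final String traceId;
        final String spanId;

        ToySpanContext(String traceId, String spanId) {
            this.traceId = traceId;
            this.spanId = spanId;
        }

        // "Inject": serialize the context into a text-map carrier (e.g. HTTP headers).
        Map<String, String> inject() {
            Map<String, String> carrier = new HashMap<>();
            carrier.put("x-b3-traceid", traceId);
            carrier.put("x-b3-spanid", spanId);
            return carrier;
        }

        // "Extract": rebuild the context on the downstream side of the process boundary.
        static ToySpanContext extract(Map<String, String> carrier) {
            return new ToySpanContext(carrier.get("x-b3-traceid"),
                                      carrier.get("x-b3-spanid"));
        }
    }

    public static void main(String[] args) {
        ToySpanContext upstream = new ToySpanContext("80f198ee56343ba8", "05e3ac9a4f6e3b90");
        // The carrier travels with the request; the downstream service extracts it
        // and uses the ids to create a Child Span in the same Trace.
        ToySpanContext downstream = ToySpanContext.extract(upstream.inject());
        System.out.println(downstream.traceId);
    }
}
```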
In distributed calls based on the HTTP protocol, HTTP headers are usually used to carry the contents of the SpanContext. Common Wire Protocols include the b3 HTTP headers used by Zipkin, the uber-trace-id HTTP header used by Jaeger, the x-ot-span-context HTTP header used by LightStep, and so on. Istio/Envoy supports the b3 headers and the x-ot-span-context header, and can therefore integrate with Zipkin, Jaeger, and LightStep. An example of the b3 HTTP headers is as follows:
X-B3-TraceId: 80f198ee56343ba864fe8b2a57d3eff7
X-B3-ParentSpanId: 05e3ac9a4f6e3b90
X-B3-SpanId: e457b5a2e4d86bd1
X-B3-Sampled: 1
Istio support for distributed call tracking
Istio/Envoy provides out-of-the-box distributed call tracing for microservices. In a microservice system with Istio and Envoy installed, Envoy intercepts the inbound and outbound requests of each service and automatically generates call-tracing data for every request to a microservice. By connecting a distributed tracing back end such as Zipkin or Jaeger to the service mesh, you can view the details of a distributed request, for example which services the request passed through, which REST interfaces were called, and how long each REST interface took.
It is important to note that although Istio/Envoy does most of the work in this process, it still requires a small change to the application code: the b3 headers of the inbound HTTP request must be copied into the headers of any downstream HTTP request, so that the call-tracing context is passed to the downstream service. Envoy cannot do this part, because it does not know the business logic of the service it proxies and cannot associate inbound and outbound requests accordingly. Although the code involved is small, it must be added everywhere an HTTP request is initiated, which is tedious and easy to miss. Of course, the work can be simplified by wrapping the HTTP request code in a library for the business modules to use.
Here, a simple online store example program shows how Istio provides distributed call tracing. The sample program consists of the eshop, inventory, billing, and delivery microservices, structured as shown in the following figure:
The eshop microservice receives the request from the client and then calls the REST interfaces of the inventory, billing, and delivery back-end microservices to implement the checkout business logic. The code for this example can be downloaded from GitHub: https://github.com/aeraki-framework/method-level-tracing-with-istio
As shown in the following code, we need to pass the b3 HTTP headers on in the application code of the eshop microservice.
@RequestMapping(value = "/checkout")
public String checkout(@RequestHeader HttpHeaders headers) {
    String result = "";
    // Use HTTP GET in this demo. In a real world use case, we should use HTTP POST instead.
    // The three services are bundled in one jar for simplicity. To make it work,
    // define three services in Kubernetes.
    result += restTemplate.exchange("http://inventory:8080/createOrder", HttpMethod.GET,
            new HttpEntity(passTracingHeader(headers)), String.class).getBody();
    result += "<br>";
    result += restTemplate.exchange("http://billing:8080/payment", HttpMethod.GET,
            new HttpEntity(passTracingHeader(headers)), String.class).getBody();
    result += "<br>";
    result += restTemplate.exchange("http://delivery:8080/arrangeDelivery", HttpMethod.GET,
            new HttpEntity(passTracingHeader(headers)), String.class).getBody();
    return result;
}

private HttpHeaders passTracingHeader(HttpHeaders headers) {
    HttpHeaders tracingHeaders = new HttpHeaders();
    extractHeader(headers, tracingHeaders, "x-request-id");
    extractHeader(headers, tracingHeaders, "x-b3-traceid");
    extractHeader(headers, tracingHeaders, "x-b3-spanid");
    extractHeader(headers, tracingHeaders, "x-b3-parentspanid");
    extractHeader(headers, tracingHeaders, "x-b3-sampled");
    extractHeader(headers, tracingHeaders, "x-b3-flags");
    extractHeader(headers, tracingHeaders, "x-ot-span-context");
    return tracingHeaders;
}
Let's test the eshop example program. We could build a Kubernetes cluster ourselves and install Istio for testing. For convenience, we instead use TCM, the fully managed service mesh provided on Tencent Cloud, and add a TKE cluster to the created mesh for testing.
Deploy the program in the TKE cluster to see the effect of Istio distributed call tracking.
git clone git@github.com:aeraki-framework/method-level-tracing-with-istio.git
cd method-level-tracing-with-istio
git checkout without-opentracing
kubectl apply -f k8s/eshop.yaml
Open http://${INGRESS_EXTERNAL_IP}/checkout in a browser to trigger a call to the eshop sample program's REST interface.
Open the TCM interface in the browser to view the generated distributed call tracking information.
The TCM graphical interface visually shows the details of the call. We can see that the client request enters the system through the ingress gateway and then calls the checkout interface of the eshop microservice. The checkout call has three child Spans, corresponding to the REST interfaces of the inventory, billing, and delivery microservices.
Using OpenTracing to pass distributed trace context
OpenTracing provides Spring-based instrumentation, so we can use the OpenTracing Spring framework to handle the HTTP header propagation and avoid this hard-coding effort. Using OpenTracing to pass the distributed tracing context in Spring is very simple and requires only the following two steps:
Declare the related dependencies in the Maven POM file: the OpenTracing Spring Cloud Starter and, because Istio uses Zipkin's reporting interface, the related Zipkin dependencies.
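As a sketch, the dependency section could look like the following. The artifact coordinates shown are the commonly used ones for OpenTracing Spring Cloud and the Brave/Zipkin bridge; check the demo's pom.xml for the exact coordinates and versions.

```xml
<!-- OpenTracing instrumentation for Spring Cloud -->
<dependency>
    <groupId>io.opentracing.contrib</groupId>
    <artifactId>opentracing-spring-cloud-starter</artifactId>
</dependency>
<!-- Bridge between the Brave (Zipkin) tracer and the OpenTracing API -->
<dependency>
    <groupId>io.zipkin.brave</groupId>
    <artifactId>brave-opentracing</artifactId>
</dependency>
<!-- Sender that reports spans to a Zipkin-compatible endpoint over HTTP -->
<dependency>
    <groupId>io.zipkin.reporter2</groupId>
    <artifactId>zipkin-sender-okhttp3</artifactId>
</dependency>
```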
Declare a Tracer bean in the Spring application. As shown below, note that we need to point the OkHttpSender at the Zipkin reporting address used in Istio.
@Bean
public io.opentracing.Tracer zipkinTracer() {
    String zipkinEndpoint = System.getenv("ZIPKIN_ENDPOINT");
    if (zipkinEndpoint == null || zipkinEndpoint.isEmpty()) {
        zipkinEndpoint = "http://zipkin.istio-system:9411/api/v2/spans";
    }
    // Report spans asynchronously to the Zipkin-compatible endpoint used by Istio
    OkHttpSender sender = OkHttpSender.create(zipkinEndpoint);
    Reporter<zipkin2.Span> spanReporter = AsyncReporter.create(sender);
    // Use B3 propagation so the spans join the trace started by Envoy
    Tracing braveTracing = Tracing.newBuilder()
            .localServiceName("spring-boot")
            .spanReporter(spanReporter)
            .propagationFactory(B3Propagation.FACTORY)
            .traceId128Bit(true)
            .sampler(Sampler.ALWAYS_SAMPLE)
            .build();
    return BraveTracer.create(braveTracing);
}
Deploy the version of the program that uses OpenTracing for HTTP header propagation; its call-tracing information looks as follows. As the figure shows, compared with passing the HTTP headers directly in the application code, the same call now contains seven additional Spans whose names are prefixed with spring-boot. These are generated by the OpenTracing tracer: although we did not create them explicitly in the code, the OpenTracing instrumentation automatically generates a Span for each REST request and associates them according to the call relationships.
These OpenTracing-generated Spans give us more fine-grained distributed tracing information, from which we can break down how long an HTTP call spends at each hop: from the client application code, through the client-side Envoy, to the server-side Envoy, and finally to the server receiving the request. As the figure shows, Envoy forwarding takes about 1 millisecond, which is very short compared with the processing time of the business code. For this application, Envoy's processing and forwarding have essentially no impact on the efficiency of business request handling.
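To make the latency breakdown concrete: the client-side span covers the whole round trip, while the server-side span covers only the server's processing, so their difference is the time spent in the two Envoys plus the network. A minimal sketch with made-up timestamps (the method name and the numbers are invented for illustration):

```java
public class EnvoyOverhead {
    // Estimate proxy + network overhead from client-side and server-side span timestamps.
    static long overheadMicros(long clientStart, long clientFinish,
                               long serverStart, long serverFinish) {
        long clientDuration = clientFinish - clientStart;   // whole round trip
        long serverDuration = serverFinish - serverStart;   // server processing only
        return clientDuration - serverDuration;             // Envoys + network
    }

    public static void main(String[] args) {
        // Hypothetical timestamps in microseconds
        long overhead = overheadMicros(0, 10_500, 1_000, 9_500);
        System.out.println(overhead + " microseconds spent outside the server code");
    }
}
```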
Add method-level call tracking information to the Istio call tracking chain
Istio/Envoy provides call-chain information across service boundaries. In most cases, service-granularity call-chain information is sufficient for analyzing system performance and faults. However, for some services, finer-grained call information is needed, for example the time spent in business logic and database access within a single REST request. In this case we need to instrument the service code and associate the tracing data reported from the service code with the tracing data generated by Envoy, so that the call data generated in Envoy and in the service code can be presented together.
The code for adding call tracing to each method is similar, so we implement it with AOP and an annotation to avoid repetition. First, define a Traced annotation and the corresponding AOP logic:
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@Documented
public @interface Traced {
}

@Aspect
@Component
public class TracingAspect {
    @Autowired
    Tracer tracer;

    @Around("@annotation(com.zhaohuabing.demo.instrument.Traced)")
    public Object aroundAdvice(ProceedingJoinPoint jp) throws Throwable {
        String class_name = jp.getTarget().getClass().getName();
        String method_name = jp.getSignature().getName();
        Span span = tracer.buildSpan(class_name + "." + method_name)
                .withTag("class", class_name)
                .withTag("method", method_name)
                .start();
        Object result = jp.proceed();
        span.finish();
        return result;
    }
}
Then add the Traced annotation to the methods that need to be traced:
@Component
public class DBAccess {
    @Traced
    public void save2db() {
        try {
            Thread.sleep((long) (Math.random() * 100));
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}

@Component
public class BankTransaction {
    @Traced
    public void transfer() {
        try {
            Thread.sleep((long) (Math.random() * 100));
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
The master branch of the demo program already includes the method-level tracing code and can be deployed directly.
git checkout master
kubectl apply -f k8s/eshop.yaml
As shown in the figure below, two method-level Spans, transfer and save2db, have been added to the trace. You can open a method's Span to view its details, including the Java class and method name; the AOP code can also be extended to record information such as the exception stack when an exception occurs.
After reading this, the article "how to implement method-level call tracking in Istio" has been fully introduced. To truly master its techniques, you still need to practice them yourself. If you want to read more related articles, welcome to follow the industry information channel.