How Istio Connects, Manages, and Protects Microservices 2.0

2025-04-06 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)05/31 Report--

In this issue, the editor walks through how Istio connects, manages, and protects microservices 2.0. The article is rich in content and approaches the topic from a professional angle; I hope you take something away from it.

I. The Eight Fallacies of Distributed Computing

Peter Deutsch put forward the eight fallacies of distributed computing many years ago, and developers still tend to ignore them. Almost all eight concern the network: for example, when we make a network request, we retry and wait repeatedly to make sure the message is actually delivered.

A microservice system is an even more complex distributed system. Earlier SOA or monolithic client/server architectures were coarser-grained; with microservices the granularity becomes much finer. In the article "MicroservicePrerequisites" on Martin Fowler's blog, the point is made that "you must be this tall" to use microservices, and being that tall is not easy. Many companies have built frameworks and platforms precisely in order to become tall enough.

II. Service governance

1 Technical points of microservice governance

To become "tall enough," we need to make some changes to our microservices. Listed here are the technical points of service governance, restricted to the ones relevant to the service mesh: service discovery, load balancing, request routing, authentication and authorization, observability, health checking, fault tolerance, and so on. The mainstream solutions today are the well-known Spring Cloud, Dubbo, and the like, which package the service-governance concerns into libraries for business code to call.

2 Problems with libraries and frameworks

But libraries and frameworks bring problems of their own:

1. Learning cost. For developers, one more library means more to learn, and business development becomes constrained by the library or framework.

2. Intrusiveness into business code. Using Spring Cloud, for example, adds a lot of extra content to the code, such as an annotation on each request to pull in some service-governance feature.

3. Maintenance, upgrade, and retirement of the base library. If governance is delivered to services as a library, then maintaining, upgrading, or retiring the library means maintaining, upgrading, or retiring every service that embeds it.

4. Different languages and technology stacks. One important promise of microservices is that each service can be built on a different technology stack; but if we are tied to a framework we can only use its language, which is a real headache: the cross-language strength of microservices cannot be exercised.

These problems are serious and make developers increasingly uncomfortable. What would the ideal service-governance solution look like? Ideally, everything related to service governance would be completely isolated from the business logic; service-to-service interaction would not have to consider governance at all, and it would preferably be invisible to business developers.

How can that be done? This leads to a classic container deployment pattern: the Sidecar.

3 The Sidecar pattern

The Sidecar pattern is usually used with containers, though it does not have to be: without containers it is simply two separate processes; with containers it is container-based. The pattern gives physical isolation and is language-independent. Because the sidecar is an independent unit, it can be released independently, which matches the microservice philosophy. The two are deployed on the same host, so they can reach each other's resources, and because they sit together the added communication latency is negligible. Any function unrelated to the business can be moved out into sidecars, even several of them; today the sidecar mainly carries the service-governance concerns. The software-design idea at work is separation of concerns.

Service governance built on the Sidecar pattern then determines how connections are actually made. As shown in the figure, Service A does not know it is talking to a sidecar: it still sends messages to Service B and calls the communication interface as usual, but the message is captured by the sidecar and forwarded on through it.
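The transparent interception described above can be sketched in a few lines of Python. This is a minimal in-process illustration with hypothetical names (`Service`, `Sidecar`), not any real Istio or Envoy API: the caller addresses Service B as usual, while the sidecar transparently collects telemetry before forwarding.

```python
class Service:
    """Plain business service; knows nothing about governance."""
    def __init__(self, name):
        self.name = name
        self.inbox = []

    def receive(self, msg):
        self.inbox.append(msg)


class Sidecar:
    """Wraps a destination service; the caller is unaware of it."""
    def __init__(self, target):
        self.target = target
        self.requests_seen = 0       # telemetry lives outside business code

    def receive(self, msg):
        self.requests_seen += 1      # observe traffic transparently
        self.target.receive(msg)     # then forward unchanged


b = Service("B")
proxy_b = Sidecar(b)                 # deployment wires B behind its sidecar

# Service A "sends to B" as usual; the call actually hits the sidecar.
proxy_b.receive("hello")
```

The business payload reaches Service B untouched, while the request count accumulates in the sidecar, which is exactly the separation of concerns the pattern is after.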

III. Service Mesh

The two newest control planes: Istio, developed by Google, IBM, and Lyft; and Conduit, developed by Buoyant, the same company behind the data plane Linkerd, whose performance was poor. Buoyant built Conduit after Linkerd specifically to solve that performance problem, so Conduit's performance figures are already out: microsecond-level P99 latency. Istio has not yet published performance figures; it is still in the feature-building stage. Because Istio aims at more functionality, supporting many platforms and kinds of filtering, its whole architecture is very flexible, which also makes it comparatively heavyweight, whereas Conduit supports only Kubernetes.

As for programming languages, the control planes of both Istio and Conduit are written in Go, but on the data plane Istio uses C++ while Conduit uses Rust. Both are efficient languages, and an efficient language is a must on the data plane.

IV. Istio architecture

Istio is defined in one sentence: an open platform to connect, manage, and secure microservices. As just mentioned, Istio is developed by Google, so its momentum will surely keep growing, and Kubernetes has integrated it as an optional component.

For connecting, Istio mainly covers resilience, service discovery, and load balancing; managing means traffic control, policy enforcement, and monitoring; and security is considered as well, with Istio providing end-to-end authentication and authorization, described in detail later.

Key features of Istio:

1. Intelligent routing and load balancing. Routing and balancing here are more advanced than traditional simple random load balancing: decisions can be based on the content of the packets themselves.

2. Resilience across languages and platforms. Istio supports a wide variety of platforms, and its advanced traffic features enable A/B testing, canary releases, and blue-green deployments.

3. Comprehensive policy enforcement. Istio has a dedicated component responsible for making sure policies are distributed down to the data plane.

4. Telemetry and reporting: concrete measurement, traffic monitoring, and so on.

This is Istio's overall system architecture. As shown in the figure, the control plane sits above; below it, many Envoy proxies together form the data plane. Each piece is described in detail later.

This is a schematic of an Istio deployment on Kubernetes. The key point is that every service and component gets a proxy; that is mandatory, including for the control plane. Two things are not drawn: the Ingress and the Initializer, the latter being mainly responsible for proxy injection. Interaction between the proxies is encrypted with TLS.

V. Istio components

1 Data plane: Envoy

Let's introduce Istio's core components, starting with its data plane, Envoy.

Envoy's goal is to transparently handle the ingress and egress traffic between services inside the cluster, and between services and external services. Envoy is written in C++ and is highly parallel and non-blocking. Its pluggable L3/L4 filters and L7 filters compose into a filter chain that manages and controls traffic. Envoy is configured dynamically through xDS.
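The filter-chain idea can be sketched as follows. This is a Python analogy with invented filter names, not Envoy's actual C++ filter API: each filter may transform the request or terminate it, and the chain applies them in order.

```python
def rate_limit_filter(req):
    """Reject the request outright when it is over its limit."""
    if req.get("over_limit"):
        return None                 # terminate the chain: request rejected
    return req

def header_filter(req):
    """An L7-style filter that mutates headers before forwarding."""
    req = dict(req)                 # avoid mutating the caller's dict
    req["x-proxied"] = "true"
    return req

def run_chain(filters, req):
    """Pass the request through each filter; None means 'stopped'."""
    for f in filters:
        req = f(req)
        if req is None:
            return None
    return req

chain = [rate_limit_filter, header_filter]
out = run_chain(chain, {"path": "/svc"})
```

A normal request emerges with the extra header, while an over-limit request never reaches the later filters, mirroring how Envoy filters can short-circuit traffic.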

Envoy's built-in functions include service discovery and health checking. Health checking is done at fine granularity, in both active and passive modes. An active health check is the familiar periodic probe: actively sending a request to a health-check API and inspecting the answer. A passive health check instead infers health from the return values of ordinary requests to other services. Envoy also includes advanced load balancing, monitoring, and tracing.
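The active/passive distinction can be illustrated with a small sketch. The class and thresholds below are hypothetical, not Envoy configuration: active checking simulates a periodic probe of a health endpoint, and passive checking marks a host unhealthy after several consecutive failures observed on real traffic (outlier detection).

```python
class Endpoint:
    def __init__(self, name, ping_ok=True):
        self.name = name
        self.ping_ok = ping_ok            # what a health probe would return
        self.consecutive_failures = 0
        self.healthy = True

    def active_check(self):
        # Active: like hitting a /health endpoint on a timer.
        self.healthy = self.ping_ok

    def record_response(self, ok, threshold=3):
        # Passive: infer health from responses to real requests.
        self.consecutive_failures = 0 if ok else self.consecutive_failures + 1
        if self.consecutive_failures >= threshold:
            self.healthy = False


e1 = Endpoint("svc-b-1")
e1.record_response(False)
e1.record_response(False)
e1.record_response(False)                 # third failure trips the threshold

e2 = Endpoint("svc-b-2", ping_ok=False)
e2.active_check()                         # probe reports the host down
```

Real Envoy combines both signals; the sketch only shows why each mode exists.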

The three most critical points of Envoy:

High performance. As emphasized before, a data plane must be high-performance, because it is deployed alongside the business services.

Scalable.

Configurable. It has the characteristic of dynamic configuration.

How does Envoy achieve high performance?

Envoy's threading model has three kinds of threads. If you have done C++ development, this single-process, multi-threaded architecture is very common. Envoy has a main thread, worker threads, and a file-flush thread; the main thread is responsible for managing and scheduling the other two. The worker threads handle listening, filtering, and forwarding: each worker contains a listener, and when a request arrives, the data passes through the filter chain introduced above. The first two kinds of thread are non-blocking; the only blocking one is the file-flush thread, which performs I/O, continuously flushing cached data out of memory.

Another capability the service mesh emphasizes is dynamic configuration without restart. In fact Envoy does restart: it starts a new process, applies the new policy and initialization inside it, and has it listen on a copy of the old process's sockets. Once the old process has closed its connections and exited, the new one takes over, achieving a hot restart that users never notice.

This is its xDS, as shown in the figure; you can see the granularity is very fine.

The Endpoint Discovery Service (EDS) is effectively the service-discovery piece.

Cluster Discovery Service (CDS) is used to discover clusters.

Route Discovery Service (RDS) is designed to do some processing on routes.

The Listener Discovery Service (LDS) dynamically adds, updates, and deletes listeners, including managing and maintaining their filter chains.

In addition, there is the Health Discovery Service (HDS) for the health checking mentioned just now.

And ADS, the Aggregated Discovery Service, aggregates the other discovery services into a single interface.

Finally, the Secret Discovery Service (SDS), which distributes keys and certificates, is the security-related one.

When a request arrives, it first goes to a worker thread as just described; the worker's listener receives it, processes it, and sends it on. At that point route discovery is invoked to find the corresponding cluster; through the cluster the matching service endpoints are found, and the call proceeds layer by layer.
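The layered lookup in that request path can be sketched with plain dictionaries. The data below is invented, not a real xDS payload: RDS maps a route to a cluster name, CDS knows the clusters, and EDS lists each cluster's endpoints.

```python
routes = {"/api": "service-b"}                              # RDS view
clusters = {"service-b": {"lb_policy": "round_robin"}}      # CDS view
endpoints = {"service-b": ["10.0.0.1:80", "10.0.0.2:80"]}   # EDS view

def resolve(path):
    """Route -> cluster -> endpoint, as in the Envoy request path."""
    cluster = routes[path]               # route discovery
    if cluster not in clusters:          # cluster discovery
        raise KeyError(cluster)
    return endpoints[cluster][0]         # endpoint discovery (pick first)

target = resolve("/api")
```

In Envoy the endpoint choice at the last step is made by the cluster's load-balancing policy rather than "pick first"; the sketch only shows the order of the lookups.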

2 Control plane: Pilot

Next, the three major components of the control plane; the first is Pilot.

Pilot configures the proxies at run time. The xDS just described is served by Pilot, which is responsible for pushing the corresponding policies and failure-recovery features. Pilot manages all the proxies; proxies and their management both go through Pilot. It maintains a platform-independent topology model, with Kubernetes the main platform supported at present.

Pilot keeps its abstract model independent of the underlying platform so that multiple diverse platforms can be managed through one architecture: an adapter architecture. Platform-specific adapters are responsible for filling in that model; they turn each platform's particulars into the canonical specification, which is combined with the general model to generate the data that must be sent to Envoy. Pilot then performs the configuration and the push.

This is Pilot's overall service discovery and load balancing. As shown in the figure, access goes through the domain name provided by Kubernetes DNS. First the URL of Service B is called; Envoy intercepts and processes the call, then finds the appropriate load-balancing pool and picks one instance to send the traffic to. Pilot defines many kinds of load-balancing methods, but only three are implemented at present: random, round robin, and least-request, the last of which finds whichever instance currently has the fewest calls and routes there.
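The three implemented policies can be sketched side by side. The `Pool` class and its in-flight counter are illustrative inventions, not Pilot or Envoy APIs:

```python
import itertools
import random

class Pool:
    """A load-balancing pool over a fixed set of hosts."""
    def __init__(self, hosts):
        self.hosts = hosts
        self.active = {h: 0 for h in hosts}   # in-flight request counts
        self._rr = itertools.cycle(hosts)

    def pick_random(self):
        return random.choice(self.hosts)

    def pick_round_robin(self):
        return next(self._rr)

    def pick_least_request(self):
        # Choose the host with the fewest in-flight requests.
        return min(self.hosts, key=lambda h: self.active[h])


pool = Pool(["b1", "b2", "b3"])
pool.active["b1"] = 5
pool.active["b2"] = 1
pool.active["b3"] = 2
```

With those in-flight counts, least-request picks `b2`, while round robin simply walks the list in order regardless of load.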

As for Pilot's rule configuration, Pilot is essentially responsible for the management and distribution of rules:

Routing rules.

Destination policies. These mainly cover the load-balancing algorithm and the abstract policies built around it.

Egress rules.

The advantage of dividing the rules into these three categories is that templates can be generated for each of them.

Traffic splitting here is label-based: labels such as version and environment are configured, and traffic is distributed according to the rule. For example, 99% goes to the previous version while the new version first takes 1% for testing, similar to an A/B test.
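A 99/1 weighted split can be sketched as below. The rule shape is hypothetical, not Istio's actual route-rule format: each request draws a number in [0, 100) and falls into a version's weight band.

```python
import random

split = [("v1", 99), ("v2-canary", 1)]   # weights sum to 100

def choose_version(split, r=None):
    """Assign a request to a version band by cumulative weight."""
    if r is None:
        r = random.uniform(0, 100)       # one draw per request
    upper = 0.0
    for version, weight in split:
        upper += weight
        if r < upper:
            return version
    return split[-1][0]                  # guard against rounding at 100
```

Passing an explicit `r` makes the banding visible: a draw of 50 lands in the 99% band, a draw of 99.5 in the 1% canary band.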

A more advanced form is content-based splitting. Because this is L7 filtering, traffic can be filtered on content, with expression support: for example, directing all iPhone traffic to the new services carrying a given label, where the version must be the canary version.
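The iPhone example can be sketched as a header match. The rule below is a hypothetical simplification, not Istio's match syntax: L7 routing inspects the User-Agent header and steers matching traffic to the canary label.

```python
def route(headers):
    """Content-based routing: iPhone traffic goes to the canary."""
    ua = headers.get("user-agent", "")
    if "iPhone" in ua:                       # L7 match on header content
        return {"service": "reviews", "label": "canary"}
    return {"service": "reviews", "label": "stable"}

iphone = route({"user-agent": "Mozilla/5.0 (iPhone; CPU iPhone OS 16_0)"})
desktop = route({"user-agent": "Mozilla/5.0 (X11; Linux x86_64)"})
```

This is only possible because the proxy sees the full L7 request; an L4 balancer could never make this distinction.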

3 Mixer

Mixer provides an intermediate layer between application code and infrastructure, isolating Envoy from the backend infrastructure, meaning systems such as Prometheus and ELK.

Mixer has three core features:

Precondition checking. Responsible for whitelist and ACL checks.

Quota management. Responsible for controlling frequency of use.

Telemetry report.

Accordingly, the configuration templates fall into two categories: the first responsible for checking (check), the second responsible for reporting (report).
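The check side (preconditions plus quota) can be sketched together. The whitelist, the fixed-window counter, and the names below are all invented for illustration; Mixer's real quota handling is adapter-specific.

```python
whitelist = {"service-a", "service-c"}
quota = {"limit": 2, "used": {}}             # per-caller calls this window

def check(caller):
    """Mixer-style check: precondition first, then quota."""
    # Precondition: the caller must be on the whitelist.
    if caller not in whitelist:
        return False
    # Quota: at most `limit` calls per caller in this window.
    used = quota["used"].get(caller, 0)
    if used >= quota["limit"]:
        return False
    quota["used"][caller] = used + 1
    return True
```

A whitelisted caller passes until its quota is exhausted; a non-whitelisted caller is rejected before quota is even consulted.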

Mixer achieves high extensibility through a general plug-in model; the plug-ins are called adapters. Operations staff send configuration to Mixer, built from templates created by template developers. Istio ships many general-purpose templates that can be adopted with minor modification, and there are many kinds of adapters, covering all sorts of backends and platforms.

Mixer abstracts away the details of the different backend policy and telemetry systems, forming one set of abstractions over both.

Having introduced the adapter, let's talk about the handler: it is a configured adapter, set up by operations staff, and it is the thing that actually sends data to the backend.

It encapsulates the logic required to interface with the backend and specifies the adapter's configuration schema, including the operating parameters needed to exchange messages with that backend. On the right are two templates. The first is a Prometheus handler, showing some of its metrics; the other is a whitelist-checking template, giving the URL it queries and the return value of that interface.

That covers how the path to the backend is assembled, via adapters and handlers. The next question is how the parameters are generated: they are the result of mapping request attributes onto a template.

Mixer receives a large number of attributes from Envoy with each request; the data carried in a request is called its attributes. Templates turn those attributes into the corresponding concrete parameters, as in the two examples just shown: the first for metrics collection, the second for the whitelist.

Here is a telemetry-report example of what happens when a request is received. The data arrives in the form of attributes. Note the Zipkin box: it is reported to directly, because Envoy natively supports Zipkin; it is the one monitoring backend Envoy can send to without help. Everything else is forwarded through Mixer.

First, on receiving the attributes, Mixer generates concrete instances from them according to the configuration provided by operations staff. Mixer then distributes the generated instances to a set of adapters, each of which processes them per its own configuration and sends them on. Once the data reaches the backend, it can be further analyzed and processed.

4 Key management

Finally, the security-related piece: the Certificate Authority, shown here in the overall key-management schematic. You can see that a service and its Envoy interact over plain TCP, but Envoy-to-Envoy traffic is encrypted with mutual TLS (mTLS). The benefit is that encryption happens between every pair of service nodes inside the mesh; it is optional, depending on performance requirements.

Key management breaks down into four steps, which are also its four features: first, generate keys and certificates from the service account; second, deploy the keys and certificates to the Envoys; it also supports periodic certificate rotation; and finally, revocation.

These are its two deployment modes. One is the Kubernetes deployment mode; on a virtual machine there is instead a separate node agent. The node agent sends a signing request to the CA, the CA generates the key and certificate and returns them to the agent, and the agent is responsible for deploying them to the Envoy.

At run time, each side holds its own certificate and begins mutual TLS. During the handshake, the secure naming information is queried to verify whether the backend is authorized to be accessed. If so, the connection proceeds; if not, it ends there.

Third, when a request comes in from a client, Mixer is consulted to judge it against the whitelist or the blacklist, which also affects whether the handshake succeeds. In the end, the two sides form a channel for secure interaction.

That is how Istio connects, manages, and protects microservices 2.0, as shared by the editor. If you have similar questions, the analysis above should help you understand; to learn more, you are welcome to follow the industry information channel.


© 2024 shulou.com SLNews company. All rights reserved.
