This article explains in detail how to choose the best microservice proxy architecture. The editor shares it here for reference and hopes you will come away with a better understanding of the topic after reading it.
Microservice architecture has been very popular over the past two years, and many companies are working hard to build their own. Because microservices enable faster release cycles, modular applications, flexible scaling, and application portability, they have become an unavoidable part of enterprise digitalization. However, the agility they bring also adds considerable complexity to application delivery, and this impact is often underestimated.
What is the solution to this?
Choosing the right proxy architecture and application delivery controllers (ADCs) is critical for giving end users the best experience. The architecture must provide the appropriate level of security, observability, advanced traffic management, and troubleshooting capability, and be compatible with your open source tools. In addition, it must serve both north-south traffic and the east-west traffic between microservices.
Load balancing a monolithic application is straightforward; for microservice-based applications it is considerably more complex.
Four proxy architectures are described below, and several of them are evaluated against seven key criteria for microservice application delivery.
Weighing benefits against complexity
First, we need to agree that microservice architecture is genuinely complex. Driven by open source innovation, best practices evolve quickly as the technology advances. Different architectures offer different benefits, but they also bring different degrees of complexity. Often we have to choose between the benefits we actually need (such as security and observability) and the complexity we can absorb, especially once we consider the skills required to implement a particular architecture and the features that must be added to meet broader requirements.
Balancing the diverse needs of key stakeholders
In fact, choosing an architecture is more complex than it first appears, because different stakeholders have different concerns and therefore different evaluation criteria. On the microservice journey, the platform team acts as the link between departments in the organization and cares about Kubernetes governance, operational efficiency, and developer agility. The DevOps team cares about faster releases, automation, canary testing, and incremental deployment. SREs care most about application availability, observability, and incident response. DevSecOps focuses on the security and automation of applications and infrastructure. The NetOps team cares about network management, visibility, policy enforcement, and compliance. The delivery architecture for microservice applications must therefore balance all of these requirements.
Choosing the right proxy architecture is by no means easy. When making any decision, take a long-term view and evaluate the architectural options against seven key criteria, applied to both north-south and east-west traffic:
Application security
Observability
Continuous deployment
Elastic scaling and performance
Integration with open source tools
Support for the open source Istio control plane
Required IT skills
In this way, enterprises can ensure that they can deliver applications securely and reliably, now and in the future, with a world-class operations experience.
Proxy Architecture Types
Today there are four proxy architectures to consider:
Two-tier ingress
Unified ingress
Service mesh
Service mesh lite
Two-Tier Ingress
For both cloud-native newcomers and experts, the two-tier Ingress proxy architecture is the easiest and fastest way to deploy applications to production. Load balancing of north-south traffic is split into two layers to give the platform and network teams a clean division of responsibility. Load balancing between microservice pods (that is, east-west traffic), on the other hand, relies on the simple open source L4 kube-proxy. Two-tier Ingress provides good security, traffic management, and observability for north-south traffic, but east-west traffic is not well served.
As shown in the figure above, the two-tier Ingress proxy architecture uses two layers of application delivery controllers (ADCs) for north-south traffic. The first ADC (green in the figure) is mainly used for L4 load balancing of inbound traffic and for north-south security features such as SSL termination and a Web Application Firewall (WAF). It is usually managed by network team members who are familiar with Internet-facing traffic. The green ADC can also provide L4 load balancing, SSL termination, and WAF functions for any monolithic applications running alongside the microservices.
The second ADC, shown in blue in the figure, handles L7 load balancing of north-south traffic. It is typically managed by the platform team and directs traffic to the correct node within the Kubernetes cluster. Layer-7 attributes, such as information in the URL and HTTP headers, can drive load-balancing decisions. The blue ADC continuously receives updates about the availability of microservice pods in the Kubernetes cluster and their IP addresses, and can determine which pod is best placed to handle a request.
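To make the idea of L7 routing concrete, here is a minimal Go sketch of routing requests by URL path or HTTP header. The backend addresses and the "X-Service" header are assumptions for illustration only; this is not the implementation of any particular ADC.

```go
// Hypothetical L7 router: forwards requests to different upstreams based on
// path prefix or a request header. Addresses and header names are invented.
package main

import (
	"net/http"
	"net/http/httputil"
	"net/url"
	"strings"
)

func proxyTo(rawURL string) *httputil.ReverseProxy {
	target, _ := url.Parse(rawURL)
	return httputil.NewSingleHostReverseProxy(target)
}

func main() {
	// Assumed upstream pod/node addresses for illustration only.
	catalog := proxyTo("http://10.0.0.11:8080")
	checkout := proxyTo("http://10.0.0.12:8080")

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		switch {
		case strings.HasPrefix(r.URL.Path, "/checkout"),
			r.Header.Get("X-Service") == "checkout": // header-based rule
			checkout.ServeHTTP(w, r)
		default:
			catalog.ServeHTTP(w, r)
		}
	})
	http.ListenAndServe(":80", nil)
}
```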
East-west traffic between microservice pods is handled by the open source kube-proxy, a basic L4 load balancer that distributes connections across pod IP addresses using simple round-robin or least-connection scheduling. Because kube-proxy lacks advanced features such as L7 load balancing, security, and observability, east-west traffic is not well managed in this architecture.
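As a rough illustration of the selection logic (not kube-proxy's actual iptables/IPVS data path), the following Go sketch picks pod endpoints in round-robin order; the IP addresses are placeholders.

```go
// Minimal round-robin endpoint picker over a set of pod IPs; a sketch of the
// selection logic only, not kube-proxy's real implementation.
package main

import (
	"fmt"
	"sync/atomic"
)

type roundRobin struct {
	endpoints []string // pod IP:port pairs (illustrative values)
	counter   atomic.Uint64
}

func (rr *roundRobin) next() string {
	n := rr.counter.Add(1)
	return rr.endpoints[(n-1)%uint64(len(rr.endpoints))]
}

func main() {
	rr := &roundRobin{endpoints: []string{
		"10.1.0.4:8080", "10.1.0.5:8080", "10.1.0.6:8080",
	}}
	for i := 0; i < 6; i++ {
		fmt.Println("forward connection to", rr.next())
	}
}
```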
Unified Ingress
Compared with two-tier Ingress, unified Ingress is relatively simple to implement for a platform team that is well versed in networking. Unified Ingress removes one north-south proxy layer and so eliminates a hop of latency. East-west traffic between microservice pods, on the other hand, still relies on the simple open source L4 kube-proxy. The architecture suits internal applications, with the option of adding a Web application firewall and SSL termination later for externally facing applications. Like two-tier Ingress, unified Ingress provides excellent security, traffic management, and observability for north-south traffic, but east-west traffic is still not well served.
In fact, the advantages and disadvantages of unified Ingress and two-tier Ingress are very similar; the difference lies in the skills required to implement them. With unified Ingress, both the ADC for north-south traffic and the kube-proxy for east-west traffic are managed by the platform team, so its members must be very proficient in networking to implement and manage this architecture.
The unified Ingress proxy architecture participates directly in the Kubernetes cluster's overlay network, which lets it communicate directly with microservice pods. The platform team must therefore understand layers 3-7 of the network stack to take full advantage of this architecture.
Compared with a service mesh, the unified Ingress proxy architecture is fairly simple to deploy and offers excellent functionality for north-south traffic. However, because of kube-proxy's limitations and the need for a networking-proficient platform team, its east-west capabilities are very limited.
Service Mesh
The service mesh is a framework that has emerged only in the past two years, and it is the most advanced and complex of the four. A service mesh attaches a sidecar to each microservice pod and inspects and manages east-west traffic as it enters and leaves the pod. The service mesh can therefore provide the highest level of observability, security, and fine-grained traffic management between microservices. In addition, repetitive microservice functions, such as encryption, can be offloaded to the sidecar. It is important to emphasize, however, that because the service mesh is a very complex architecture, the learning curve for the platform team is steep.
For north-south traffic, a typical service mesh architecture looks much like the two-tier Ingress proxy architecture and has the benefits described above. The key difference between two-tier Ingress and the service mesh, and also the mesh's main value, is that the service mesh runs a lightweight ADC as a sidecar alongside every microservice pod to handle east-west traffic. Microservices do not communicate with each other directly; all traffic goes through the sidecars, so it can be inspected and managed as it enters and leaves each pod.
By using proxy sidecars, the service mesh provides the highest levels of observability, security, and fine-grained traffic management and control between microservices. Repetitive microservice functions such as retries and encryption can also be moved into the sidecar. Although each sidecar needs its own memory and CPU, sidecars are usually very lightweight.
You can choose an open source solution such as Envoy for the sidecar. The sidecars are generally managed by the platform team and attached to every pod, creating a highly scalable distributed architecture, but also an extremely complex one, because many moving parts are added.
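To show the basic role a sidecar plays, here is a toy Go sketch of a TCP proxy that sits in front of a local microservice and forwards its traffic. The ports are assumptions; real sidecars such as Envoy do far more (mTLS, retries, telemetry), but the interception point is the same idea.

```go
// Toy TCP sidecar: accepts traffic destined for the local microservice and
// forwards it, giving a single point where east-west traffic could be
// inspected, encrypted, or rate limited. A sketch only.
package main

import (
	"io"
	"log"
	"net"
)

func main() {
	const listenAddr = ":15001"      // assumed sidecar inbound port
	const appAddr = "127.0.0.1:8080" // assumed local microservice port

	ln, err := net.Listen("tcp", listenAddr)
	if err != nil {
		log.Fatal(err)
	}
	for {
		client, err := ln.Accept()
		if err != nil {
			continue
		}
		go func(client net.Conn) {
			defer client.Close()
			upstream, err := net.Dial("tcp", appAddr)
			if err != nil {
				return
			}
			defer upstream.Close()
			// Inspection, mTLS, retries, or metrics could be added here.
			go io.Copy(upstream, client)
			io.Copy(client, upstream)
		}(client)
	}
}
```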
Next, let's evaluate the service mesh proxy architecture against the seven criteria.
Application security
Sidecars provide the best security for east-west traffic between microservices. In essence, every API call between microservices is proxied through a sidecar, which improves security. Sidecars can also perform authentication between microservices and apply policies and controls to prevent abuse, and traffic between microservices can be inspected for security vulnerabilities.
In addition, encryption can be enforced for communication between microservices, with the encryption work offloaded to the sidecar. To prevent microservices from being overloaded or failing, traffic between them can also be limited; for example, if a microservice can handle 100 calls per second, you can set a rate limit at that threshold.
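The following Go sketch shows one way such a 100-calls-per-second limit could work, using a simple token bucket as HTTP middleware. It is illustrative only and does not reflect any particular mesh's rate-limiting API.

```go
// Token-bucket sketch of the 100-calls-per-second limit mentioned above,
// applied as HTTP middleware in front of a service; illustrative only.
package main

import (
	"net/http"
	"sync"
	"time"
)

type bucket struct {
	mu       sync.Mutex
	tokens   float64
	max      float64
	rate     float64 // tokens refilled per second
	lastFill time.Time
}

func newBucket(ratePerSec float64) *bucket {
	return &bucket{tokens: ratePerSec, max: ratePerSec, rate: ratePerSec, lastFill: time.Now()}
}

func (b *bucket) allow() bool {
	b.mu.Lock()
	defer b.mu.Unlock()
	now := time.Now()
	b.tokens += now.Sub(b.lastFill).Seconds() * b.rate
	if b.tokens > b.max {
		b.tokens = b.max
	}
	b.lastFill = now
	if b.tokens >= 1 {
		b.tokens--
		return true
	}
	return false
}

func main() {
	limit := newBucket(100) // 100 calls per second, as in the example above
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		if !limit.allow() {
			http.Error(w, "rate limit exceeded", http.StatusTooManyRequests)
			return
		}
		w.Write([]byte("ok"))
	})
	http.ListenAndServe(":8080", nil)
}
```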
With a service mesh, north-south security is very good, comparable to that of the two-tier architecture. For applications with strict regulatory or advanced security requirements, such as those in the financial and defense industries, the service mesh architecture is the best choice.
Observability
The service mesh provides very good observability for east-west traffic between microservices, because all traffic between pods is visible to the sidecars. Sidecar telemetry can then be analyzed with open source or vendor-provided analytics tools to gain a better picture for faster troubleshooting and capacity planning. Observability of north-south traffic is also excellent in the service mesh architecture, comparable to the two-tier Ingress architecture.
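As a small sketch of the kind of telemetry a sidecar can emit, the Go program below records a request count and total latency around each proxied call and exposes them in Prometheus-compatible text format. The metric names and port are assumptions, not Istio's or Envoy's actual metric set.

```go
// Sketch of sidecar-style telemetry: per-request latency recorded around each
// proxied call and exposed on a /metrics endpoint that Prometheus could poll.
package main

import (
	"fmt"
	"net/http"
	"sync"
	"sync/atomic"
	"time"
)

var (
	requestsTotal atomic.Uint64
	mu            sync.Mutex
	latencySum    time.Duration
)

func observe(start time.Time) {
	requestsTotal.Add(1)
	mu.Lock()
	latencySum += time.Since(start)
	mu.Unlock()
}

func main() {
	http.HandleFunc("/call", func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		defer observe(start)
		// ... the sidecar would proxy the request to the local service here ...
		w.Write([]byte("proxied"))
	})
	http.HandleFunc("/metrics", func(w http.ResponseWriter, r *http.Request) {
		mu.Lock()
		sum := latencySum.Seconds()
		mu.Unlock()
		fmt.Fprintf(w, "sidecar_requests_total %d\n", requestsTotal.Load())
		fmt.Fprintf(w, "sidecar_request_duration_seconds_sum %f\n", sum)
	})
	http.ListenAndServe(":9090", nil)
}
```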
Continuous deployment
With a service mesh, both north-south and east-west traffic support advanced traffic management for continuous deployment, such as automated canary deployments, blue-green deployments, and rollbacks. Unlike kube-proxy, sidecars expose rich APIs that let them integrate with CI/CD solutions such as Spinnaker.
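The traffic split behind a canary rollout can be pictured with the short Go sketch below, which sends roughly 10% of requests to a new version and the rest to the stable one. The weight and service addresses are invented for illustration; a real mesh would express this as proxy configuration rather than custom code.

```go
// Weighted canary split: ~10% of requests go to the new version. Addresses
// and the 10% weight are assumptions for illustration.
package main

import (
	"math/rand"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func proxyTo(rawURL string) *httputil.ReverseProxy {
	u, _ := url.Parse(rawURL)
	return httputil.NewSingleHostReverseProxy(u)
}

func main() {
	stable := proxyTo("http://orders-v1.default.svc:8080") // assumed services
	canary := proxyTo("http://orders-v2.default.svc:8080")

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		if rand.Intn(100) < 10 { // 10% canary weight
			canary.ServeHTTP(w, r)
			return
		}
		stable.ServeHTTP(w, r)
	})
	http.ListenAndServe(":80", nil)
}
```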
Elastic scaling and performance
The service mesh scales east-west traffic well because it is a distributed architecture, and it scales observability, security, and advanced traffic management and control along with it.
Performance depends on the choice of sidecar, as performance and latency vary between sidecar implementations. Because east-west traffic is proxied by sidecars, every inter-pod request takes two additional hops, which increases overall latency. If you use the Istio control plane, calls to Istio Mixer for policy enforcement add yet another hop and further latency. Running a sidecar on every pod also consumes memory and CPU, and with hundreds of pods this adds up quickly.
For north-south traffic, the service mesh provides excellent elastic scaling and performance, comparable to two-tier Ingress.
Integration with open source tools
The ADC for north-south traffic and the sidecars for east-west traffic integrate with mainstream open source tools such as Prometheus, Grafana, Spinnaker, Elasticsearch, Fluentd, and Kibana. Most sidecars also expose APIs that allow integration with additional tools.
Support for the open source Istio control plane
Both the ADC for north-south traffic and the sidecars for east-west traffic integrate well with the open source Istio control plane. Note that the extra hop to Istio Mixer, which provides policy enforcement for east-west traffic, adds latency.
Required IT skills
The service mesh is extremely complex, and managing hundreds of sidecars is a real challenge. This new distributed proxy architecture has a steep learning curve for IT staff. Perhaps the main challenge for the platform team is managing the many moving parts the sidecars introduce: they must meet latency and performance requirements while being able to troubleshoot any number of distributed proxies, along with problems in both the data plane and the Istio control plane components.
Service Mesh Lite
For users who want the extra security, observability, and advanced traffic management of a service mesh but prefer a simpler architecture, service mesh lite is a viable choice. Instead of a sidecar on each pod, this architecture deploys a small set of proxies inside the Kubernetes cluster (for example, one proxy per node), and all traffic between pods flows through them. Service mesh lite has a gentler learning curve for platform and network teams and offers an easy transition from a two-tier Ingress architecture.
In the service mesh lite architecture, the application delivery controller shown in green in the figure handles L4-L7 load balancing of north-south traffic, receiving inbound requests and load balancing them to the correct Kubernetes cluster. The green ADC can also perform SSL termination, Web application firewalling, authentication, and other network services.
Depending on isolation and scale requirements, the service mesh lite proxy architecture uses one or more ADCs (the pink boxes in the figure) to proxy communication between microservice pods and manage east-west traffic, rather than attaching a sidecar to every pod. The proxies are deployed on each node.
Service mesh lite provides many of the advantages of a service mesh, but because inter-pod communication is handled by a small number of ADC instances rather than a sidecar per pod, overall complexity is reduced. The end result is that, with all traffic passing through one or a few ADCs, you get the same high-level policy control, security, and fine-grained traffic management as the service mesh proxy architecture, without the same complexity.
Let's evaluate the service mesh lite proxy architecture against the seven key criteria:
Application security
The security advantages of service mesh lite are similar to those of the service mesh. The green ADC provides excellent security for north-south traffic, and because everything flows through the pink ADC, it can provide strong security features such as policy enforcement, network segmentation, rate limiting, and API protection. However, if encryption is required, it must be implemented in each individual microservice, because there is no sidecar to encrypt traffic automatically as in a service mesh. Open source projects such as SPIFFE are expected to make this easier.
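To illustrate what east-west policy enforcement at a shared proxy might look like, here is a minimal Go sketch that allows only listed caller-to-callee pairs. The identity headers and service names are assumptions; a real deployment would typically derive identity from mTLS certificates rather than headers.

```go
// Sketch of east-west policy enforcement at a shared (per-node) proxy:
// only listed caller->callee pairs are allowed. Headers and names invented.
package main

import "net/http"

// allowed caller -> set of permitted callees
var allowed = map[string]map[string]bool{
	"frontend": {"orders": true, "catalog": true},
	"orders":   {"payments": true},
}

func authorize(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		caller := r.Header.Get("X-Source-Service") // illustrative identity
		callee := r.Header.Get("X-Target-Service")
		if !allowed[caller][callee] {
			http.Error(w, "policy: call not permitted", http.StatusForbidden)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	backend := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("forwarded"))
	})
	http.ListenAndServe(":8080", authorize(backend))
}
```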
Observability
Because the ADCs see both north-south and east-west application traffic flowing through them, visibility is excellent, essentially comparable to a service mesh.
Continuous deployment
Just like a service mesh, both north-south and east-west traffic support advanced traffic management for continuous deployment, such as automated canary deployments, progressive delivery, blue-green deployments, and rollbacks. CI/CD tools such as Spinnaker can also be integrated for east-west traffic.
Elastic expansion and performance
Like a service mesh, this architecture easily scales both north-south and east-west traffic while retaining advanced observability, security, and traffic management. A further advantage of service mesh lite is that its east-west latency is lower than that of a full service mesh.
Integration with open source tools
Service mesh lite integrates with third-party tools in exactly the same way as a service mesh, working with mainstream open source tools such as Prometheus, Grafana, Spinnaker, Elasticsearch, Fluentd, and Kibana.
Istio support
Service mesh lite supports Istio integration for north-south traffic, while support for east-west traffic is not yet complete; the gap, however, is narrowing.
Fewer IT skills required
The main advantage of service mesh lite is that it requires far fewer IT skills to implement and manage than a service mesh. As with two-tier Ingress, the network team can manage the green ADC while the platform team manages the pink ADC, so both teams can work at their own pace without spending extra time and money on learning new skills.
The service mesh lite proxy architecture delivers functionality similar to a service mesh without the added complexity. It also offers an easy transition from two-tier Ingress, bringing better observability, stronger security, better integration with open source tools, and support for continuous deployment of east-west traffic.
There is no absolutely right or wrong choice of architecture; you need to choose the one that fits your actual situation.
Cloud-native newcomers who want the fastest and easiest route to production can start with two-tier Ingress. If you need full control over microservice-based applications, with visibility, security, and integrated management of both north-south and east-west traffic, the best architecture is the service mesh, though it is worth repeating that it is very complex. If a team wants the functionality of a service mesh without shouldering its complexity, service mesh lite is appropriate, or it can start with two-tier Ingress and migrate to service mesh lite as its skills mature.
That concludes this overview of how to choose the best microservice proxy architecture. I hope the content above has been of some help; if you found the article useful, feel free to share it so more people can see it.