2025-02-28 Update From: SLTechnology News&Howtos

Shulou(Shulou.com) 06/03 Report--
Envoy is a high-performance distributed proxy written in C++, designed for single services and applications. As an important component of a service mesh, fully understanding its configuration is particularly important. This article lists the reasons for using Envoy instead of other proxies, gives the configuration of Envoy and its services, and then interprets that configuration in detail to help readers understand and master Envoy.
A service mesh is the communication layer in a microservice setup, which means that all requests to and from each service pass through the mesh. The service mesh thus becomes an infrastructure layer in the microservice setup that makes communication between services secure and reliable. We have covered the basics of service meshes in detail in a previous article.
Each service has its own proxy (a sidecar), and together all of these proxies form the service mesh. The sidecars handle communication between services: all traffic passes through the mesh, and this transparent layer controls how services interact with each other.
A service mesh provides observability, service discovery, and load balancing through components controlled by an API.
In practice, when one service invokes another, it does not call the target service directly. Instead, the request is routed to the local proxy, which then routes it to the target service. This means a service instance has no direct contact with other services; it only communicates with its local proxy.
According to the ThoughtWorks Technology Radar, a semi-annual publication that assesses the risks and benefits of existing and emerging technologies, "the service mesh offers consistent discovery, security, tracing, monitoring, and failure handling without the need for a shared asset such as an API gateway or ESB. A typical implementation involves lightweight reverse-proxy processes deployed alongside each service process, perhaps in a separate container."
When it comes to service meshes, it is inevitable to talk about "sidecars": the proxies that run alongside each service instance. Each sidecar is responsible for managing one instance of one service. We discuss sidecars in more detail in this article.
What can a service mesh deliver?
More and more enterprises and organizations are turning to a microservice architecture, and such organizations need the capabilities described above. Getting them from a decoupled mesh layer, rather than baking them into every service through libraries or custom code, is undoubtedly the winning approach.
Why use Envoy?
Envoy is not the only choice for building a service mesh; there are other proxies on the market, such as NGINX and Traefik. I chose Envoy, a high-performance proxy written in C++, for its light weight, powerful routing, observability, and extensibility.
Let's first build a service grid setup with three services, which is the architecture we are trying to build:
Front Envoy
Front Envoy is an edge proxy in our setup, where we usually perform operations such as TLS termination, authentication, request header generation, and so on.
Let's look at the Front Envoy configuration:
admin:
  access_log_path: "/tmp/admin_access.log"
  address:
    socket_address:
      address: "127.0.0.1"
      port_value: 9901

static_resources:
  listeners:
    - name: "http_listener"
      address:
        socket_address:
          address: "0.0.0.0"
          port_value: 80
      filter_chains:
        - filters:
            - name: "envoy.http_connection_manager"
              config:
                stat_prefix: "ingress"
                route_config:
                  name: "local_route"
                  virtual_hosts:
                    - name: "http-route"
                      domains:
                        - "*"
                      routes:
                        - match:
                            prefix: "/"
                          route:
                            cluster: "service_a"
                http_filters:
                  - name: "envoy.router"

  clusters:
    - name: "service_a"
      connect_timeout: "0.25s"
      type: "strict_dns"
      lb_policy: "ROUND_ROBIN"
      hosts:
        - socket_address:
            address: "service_a_envoy"
            port_value: 8786
The Envoy configuration mainly consists of the following parts:
1. Listeners
2. Routes
3. Clusters
4. Endpoints
Listeners
A single Envoy instance can run one or more listeners. The listeners section of the configuration above specifies the address and port of the current listener. Each listener can also have one or more network filters, which enable activities such as routing, TLS termination, traffic shifting, and so on. envoy.http_connection_manager is one of Envoy's built-in filters; Envoy has several other filters besides it.
Routes
The route_config section of the filter holds the routing specification: the domains for which we accept requests, and a route matcher that checks each request against the configured rules and forwards it to the appropriate cluster.
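Route matchers can do more than prefix matching. As an illustration only (not part of this demo's setup), a hypothetical rule that sends requests carrying a particular header to a separate canary cluster might look like this, assuming a "service_a_canary" cluster is also defined:

```yaml
route_config:
  name: "local_route"
  virtual_hosts:
    - name: "http-route"
      domains:
        - "*"
      routes:
        # Requests with the header "x-canary: true" go to the canary cluster.
        - match:
            prefix: "/"
            headers:
              - name: "x-canary"
                exact_match: "true"
          route:
            cluster: "service_a_canary"
        # Everything else falls through to the default cluster.
        - match:
            prefix: "/"
          route:
            cluster: "service_a"
```

Rules are evaluated in order, so the more specific match must come first.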
Clusters
A cluster is the specification of an upstream service to which Envoy routes traffic. The clusters section defines "Service A", the only upstream that Front Envoy talks to. connect_timeout is the time limit for establishing a connection to the upstream service before returning a 503.
Typically there are multiple instances of "Service A", and Envoy supports several load-balancing algorithms for routing traffic among them. In this example we use simple round robin.
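The round-robin policy named by lb_policy can be sketched in a few lines of Python; the host names here are invented purely for illustration:

```python
from itertools import cycle

# Round-robin load balancing: each request is handed to the next
# host in the list, wrapping around when the list is exhausted.
hosts = ["service_a_envoy_1", "service_a_envoy_2", "service_a_envoy_3"]
next_host = cycle(hosts)

picks = [next(next_host) for _ in range(4)]
print(picks)
# → ['service_a_envoy_1', 'service_a_envoy_2', 'service_a_envoy_3', 'service_a_envoy_1']
```

Envoy's real implementation adds weighting and health awareness on top of this basic rotation.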
Endpoints
"host" specifies the instance of Service A to which we want to route traffic. In this case, we have only one instance.
Instead of communicating directly with Service A, line 47 communicates with the Envoy agent instance of Service A, which will route to the local Service An instance.
We can also use a service name that returns all the instances of Service A, such as a headless service in Kubernetes, and perform client-side load balancing. Envoy caches all the hosts of Service A and refreshes the host list every 5 seconds.
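That 5-second interval is Envoy's default DNS refresh rate, and it is configurable per cluster. A sketch of the cluster above with the v2 API's dns_refresh_rate field made explicit (the field name is from the v2 cluster API, not from this article's configs):

```yaml
clusters:
  - name: "service_a"
    connect_timeout: "0.25s"
    type: "strict_dns"        # resolve the name and use every returned IP as a host
    dns_refresh_rate: "5s"    # how often the host list is re-resolved
    lb_policy: "ROUND_ROBIN"
    hosts:
      - socket_address:
          address: "service_a_envoy"
          port_value: 8786
```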
In addition, Envoy supports both active and passive health checking. If we want active health checks, we configure them in the cluster configuration section.
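A minimal active HTTP health check on a cluster might look like the following sketch; the /health path and the threshold values are illustrative assumptions, not part of this article's setup:

```yaml
clusters:
  - name: "service_a"
    connect_timeout: "0.25s"
    type: "strict_dns"
    lb_policy: "ROUND_ROBIN"
    health_checks:
      - timeout: "1s"
        interval: "10s"
        unhealthy_threshold: 3   # take a host out of rotation after 3 failed checks
        healthy_threshold: 2     # put it back after 2 successful checks
        http_health_check:
          path: "/health"        # assumed health endpoint exposed by the upstream
    hosts:
      - socket_address:
          address: "service_a_envoy"
          port_value: 8786
```

With this in place, unhealthy hosts are excluded from load balancing until they pass checks again.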
Others
The admin section configures Envoy's admin server, which can be used to view the configuration, change log levels, view statistics, and so on.
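With an Envoy instance running, the admin server can be queried over plain HTTP. These are standard Envoy admin endpoints, shown here against the 127.0.0.1:9901 address configured above; they require a live Envoy instance:

```shell
# Dump current statistics (connections, requests, per-cluster counters, ...)
curl http://127.0.0.1:9901/stats

# Dump the full active configuration
curl http://127.0.0.1:9901/config_dump

# Change the log level at runtime, e.g. to debug
curl -X POST "http://127.0.0.1:9901/logging?level=debug"

# Show the status of all upstream clusters and their hosts
curl http://127.0.0.1:9901/clusters
```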
The static_resources section loads all of the configuration statically, at boot time. We discuss below how to do this dynamically instead.
There are many more configuration options than those described here; our goal is not to give a comprehensive overview of them all, but to start with a minimal configuration.
Service A
This is the Envoy configuration for Service A:
admin:
  access_log_path: "/tmp/admin_access.log"
  address:
    socket_address:
      address: "127.0.0.1"
      port_value: 9901

static_resources:
  listeners:
    - name: "service-a-svc-http-listener"
      address:
        socket_address:
          address: "0.0.0.0"
          port_value: 8786
      filter_chains:
        - filters:
            - name: "envoy.http_connection_manager"
              config:
                stat_prefix: "ingress"
                codec_type: "AUTO"
                route_config:
                  name: "service-a-svc-http-route"
                  virtual_hosts:
                    - name: "service-a-svc-http-route"
                      domains:
                        - "*"
                      routes:
                        - match:
                            prefix: "/"
                          route:
                            cluster: "service_a"
                http_filters:
                  - name: "envoy.router"

    - name: "service-b-svc-http-listener"
      address:
        socket_address:
          address: "0.0.0.0"
          port_value: 8788
      filter_chains:
        - filters:
            - name: "envoy.http_connection_manager"
              config:
                stat_prefix: "egress"
                codec_type: "AUTO"
                route_config:
                  name: "service-b-svc-http-route"
                  virtual_hosts:
                    - name: "service-b-svc-http-route"
                      domains:
                        - "*"
                      routes:
                        - match:
                            prefix: "/"
                          route:
                            cluster: "service_b"
                http_filters:
                  - name: "envoy.router"

    - name: "service-c-svc-http-listener"
      address:
        socket_address:
          address: "0.0.0.0"
          port_value: 8791
      filter_chains:
        - filters:
            - name: "envoy.http_connection_manager"
              config:
                stat_prefix: "egress"
                codec_type: "AUTO"
                route_config:
                  name: "service-b-svc-http-route"
                  virtual_hosts:
                    - name: "service-b-svc-http-route"
                      domains:
                        - "*"
                      routes:
                        - match:
                            prefix: "/"
                          route:
                            cluster: "service_c"
                http_filters:
                  - name: "envoy.router"

  clusters:
    - name: "service_a"
      connect_timeout: "0.25s"
      type: "strict_dns"
      lb_policy: "ROUND_ROBIN"
      hosts:
        - socket_address:
            address: "service_a"
            port_value: 8081
    - name: "service_b"
      connect_timeout: "0.25s"
      type: "strict_dns"
      lb_policy: "ROUND_ROBIN"
      hosts:
        - socket_address:
            address: "service_b_envoy"
            port_value: 8789
    - name: "service_c"
      connect_timeout: "0.25s"
      type: "strict_dns"
      lb_policy: "ROUND_ROBIN"
      hosts:
        - socket_address:
            address: "service_c_envoy"
            port_value: 8790
The first listener ("service-a-svc-http-listener") routes traffic to the actual Service A instance; the corresponding cluster definition for service_a appears in the clusters section at the bottom.
Service A also communicates with Service B and Service C, so two more listeners and two more clusters are defined. In this example we have separate listeners for each upstream (localhost, Service B, and Service C); an alternative approach is a single listener that routes to the appropriate upstream based on the URL or request headers.
Service B and Service C
Service B and Service C are at the leaf level and do not communicate with any upstream other than the localhost service instance, so their configuration is very simple: just a single listener and a single cluster each, in the same shape as the configuration above.
Once the configuration is complete, we can deploy this setup to Kubernetes or test it with docker-compose. Run docker-compose build and docker-compose up, then hit localhost:8080 to check whether a request passes successfully through all the services and Envoy proxies. The logs can be used to verify it.
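A docker-compose sketch of the Front Envoy piece might look like the following; the Dockerfile names and the 8080-to-80 port mapping are assumptions about the repository layout, not taken from it:

```yaml
version: "3"
services:
  front-envoy:
    build:
      context: .
      dockerfile: Dockerfile-frontenvoy        # assumed filename
    ports:
      - "8080:80"      # host port 8080 -> Front Envoy's listener on port 80
      - "9901:9901"    # Envoy admin interface
  service_a_envoy:
    build:
      context: .
      dockerfile: Dockerfile-service-a-envoy   # assumed filename
    expose:
      - "8786"         # the port Front Envoy's service_a cluster targets
```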
Envoy xDS
We provide a configuration for each sidecar, and the configuration varies from service to service. While hand-crafting and managing sidecar configurations may be fine for the first two or three services, it becomes very complex as the number of services grows. Moreover, every time a sidecar configuration changes, the Envoy instance must be restarted for the change to take effect.
As discussed above, we can avoid manually configuring and loading all of the components by using an API server: clusters (CDS), endpoints (EDS), listeners (LDS), and routes (RDS). Each sidecar communicates with the API server and receives its configuration; when new configuration is pushed to the API server, it is automatically reflected in the Envoy instances, avoiding a restart.
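With dynamic configuration, the bootstrap file shrinks to little more than a pointer at the xDS server. A sketch of a v2 bootstrap (the xds-server host name and port are assumptions; the field names are from the v2 bootstrap API):

```yaml
dynamic_resources:
  lds_config:                        # listeners (and, via them, routes) come from the server
    api_config_source:
      api_type: GRPC
      grpc_services:
        - envoy_grpc:
            cluster_name: "xds_cluster"
  cds_config:                        # clusters (and, via them, endpoints) come from the server
    api_config_source:
      api_type: GRPC
      grpc_services:
        - envoy_grpc:
            cluster_name: "xds_cluster"

static_resources:
  clusters:
    - name: "xds_cluster"            # the management (API) server itself stays static
      connect_timeout: "0.25s"
      type: "strict_dns"
      lb_policy: "ROUND_ROBIN"
      http2_protocol_options: {}     # xDS over gRPC requires HTTP/2
      hosts:
        - socket_address:
            address: "xds-server"    # assumed hostname of the xDS server
            port_value: 18000
```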
You can learn more about dynamic configuration at the following link:
https://www.envoyproxy.io/docs/envoy/latest/configuration/overview/v2_overview#dynamic
Here is a simple xDS server:
https://github.com/tak2siva/Envoy-Pilot
How to implement it in Kubernetes
This section discusses how to implement the setup described above in Kubernetes. The following is the architecture diagram:
To do that, we will need to change:
Pods
Services
Deploying the Pods
Usually only one container is defined in a Pod specification, but by definition a Pod can hold one or more containers. To run a sidecar proxy with each service instance, we add an Envoy container to every Pod. The service container talks to the Envoy container over localhost, and Envoy communicates with the outside world on its behalf.
The Envoy sidecar configuration for Service B, which we store in a ConfigMap, looks like this:
admin:
  access_log_path: "/tmp/admin_access.log"
  address:
    socket_address:
      address: "127.0.0.1"
      port_value: 9901

static_resources:
  listeners:
    - name: "service-b-svc-http-listener"
      address:
        socket_address:
          address: "0.0.0.0"
          port_value: 8789
      filter_chains:
        - filters:
            - name: "envoy.http_connection_manager"
              config:
                stat_prefix: "ingress"
                codec_type: "AUTO"
                route_config:
                  name: "service-b-svc-http-route"
                  virtual_hosts:
                    - name: "service-b-svc-http-route"
                      domains:
                        - "*"
                      routes:
                        - match:
                            prefix: "/"
                          route:
                            cluster: "service_b"
                http_filters:
                  - name: "envoy.router"

  clusters:
    - name: "service_b"
      connect_timeout: "0.25s"
      type: "strict_dns"
      lb_policy: "ROUND_ROBIN"
      hosts:
        - socket_address:
            address: "service_b"
            port_value: 8082
In the deployment below, an Envoy sidecar is added to the containers section, and the Envoy configuration file is mounted into the Envoy container from the ConfigMap.
Change service
A Kubernetes Service is responsible for maintaining the list of Pod endpoints that can receive traffic. Normally kube-proxy load balances between those Pod endpoints, but in our case we are doing client-side load balancing and do not want kube-proxy in the path. Instead, we want to extract the list of Pod endpoints and load balance across it ourselves. For this we use a headless service, which returns the list of endpoints directly.
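A headless Service for Service A might be sketched like this; the name, selector, and port names are assumptions that mirror the deployment's labels:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: servicea
spec:
  clusterIP: None        # headless: DNS returns all Pod IPs instead of a single virtual IP
  selector:
    app: servicea
  ports:
    - name: envoy-web
      port: 8786         # map the Service port to the Envoy listener port,
      targetPort: 8786   # not to the application's own port (8081)
      protocol: TCP
```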
The deployment file, with the Envoy sidecar, is as follows:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: servicea
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: servicea
    spec:
      containers:
        - name: servicea
          image: dnivra26/servicea:0.6
          ports:
            - containerPort: 8081
              name: svc-port
              protocol: TCP
        - name: envoy
          image: envoyproxy/envoy:latest
          ports:
            - containerPort: 9901
              protocol: TCP
              name: envoy-admin
            - containerPort: 8786
              protocol: TCP
              name: envoy-web
          volumeMounts:
            - name: envoy-config-volume
              mountPath: /etc/envoy-config/
          command: ["/usr/local/bin/envoy"]
          args: ["-c", "/etc/envoy-config/config.yaml",
                 "--v2-config-only", "-l", "info",
                 "--service-cluster", "servicea",
                 "--service-node", "servicea",
                 "--log-format", "[METADATA][%Y-%m-%d %T.%e][%t][%l][%n] %v"]
      volumes:
        - name: envoy-config-volume
          configMap:
            name: sidecar-config
            items:
              - key: envoy-config
                path: config.yaml
There are two things to note. One, the service is made headless; two, instead of mapping the Kubernetes service port to the application's service port, we map it to the Envoy listener port. This means traffic is directed to Envoy first. Even so, Kubernetes works perfectly.
In this article, we saw how to build a service mesh using Envoy proxies. With this setup, all communication between services passes through the mesh, so the mesh not only holds a wealth of data about the traffic but also has control over it.
You can get the configuration and code discussed in this article at the following link:
https://github.com/dnivra26/envoy_servicemesh
Original article:
https://www.thoughtworks.com/insights/blog/building-service-mesh-envoy-0