This article looks at a problem frequently encountered when migrating applications to Istio: mTLS failures when accessing a Kubernetes headless service. It walks through a concrete troubleshooting case, from the failure symptom to the Envoy configuration analysis, and shows how to resolve it.
What is "headless service"?
"headless service" is the Headless Service in Kubernetes. Service is the logical abstraction and access entry of Kubernetes to a set of Pod that provide the same service at the back end. Kubernetes assigns a running node to Pod according to the scheduling algorithm, and randomly assigns an IP address; in many cases, we also scale Pod horizontally to start multiple Pod to provide the same service. When there are multiple Pod and the Pod IP address is not fixed, it is difficult for the client to access directly through the IP address of the Pod. To solve this problem, Kubernetes uses Service resources to represent a set of Pod that provide the same service.
By default, Kubernetes assigns a Cluster IP to a Service. No matter how the backend Pod IPs change, the Service's Cluster IP stays fixed, so the client can reach the service through this Cluster IP without caring about the real backend Pod IPs. We can think of a Service as a load balancer placed in front of a set of Pods, with the Cluster IP as the load balancer's address: it watches the backend Pods for changes and forwards requests sent to the Cluster IP to those Pods. (Note: this is only a simplified description of Service; if you are interested in the internal implementation of Service, you can refer to the related article on choosing an ingress gateway for a service mesh.)
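To make this concrete, here is a minimal sketch of an ordinary Service with a Cluster IP; the service name redis, the app: redis selector and the port are assumptions for this example rather than configuration taken from the case below:

apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: default
spec:
  selector:
    app: redis
  ports:
  - name: tcp-redis
    port: 6379
    targetPort: 6379

Kubernetes allocates a Cluster IP for this Service and load-balances connections to that IP across the Pods selected by app: redis.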
For stateless applications, the client does not care which Pod it connects to, and a normal Service works fine. In some special cases, however, this is not enough. For example, if the backend Pods are stateful and the client needs to pick a specific Pod according to some application-level algorithm, or if the client needs to connect to all of the backend Pods, we cannot put a load balancer in front of this set of Pods. In these cases we need a Headless Service, i.e. a "headless" service (the name likens the load balancer in front of the Pods to the service's "head"). When defining a Headless Service, we explicitly set the Service's clusterIP to None, so that when the Service name is resolved, Kubernetes DNS returns the IP addresses of the backend Pods instead of a single Cluster IP.
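A headless variant of the same Service differs only in that clusterIP is explicitly set to None (again a sketch with the same assumed names):

apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: default
spec:
  clusterIP: None
  selector:
    app: redis
  ports:
  - name: tcp-redis
    port: 6379
    targetPort: 6379

With this definition, a DNS lookup of redis.default.svc.cluster.local returns the individual Pod IPs rather than a single virtual IP.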
Suppose a client accesses a Redis cluster, first through a normal Service with a Cluster IP and then through a Headless Service, as shown in the following figure:
mTLS failure of a "headless service" in Istio
Because of the particularities of Headless Services, Istio handles them differently from ordinary Services, and problems caused by Headless Services are often encountered when migrating applications to Istio. Below we illustrate this with a typical case: an mTLS failure caused by a Headless Service.
Failure symptom: the operations team reported that the Redis server could not be reached from a Pod with an Envoy sidecar, while it could be accessed normally from a Pod without a sidecar.
When outbound access fails, the first step is to look at the Envoy access log. Run the following command to view the log of the Envoy sidecar in the client Pod:
kubectl logs -f redis-client-6d4c6c975f-bm5w6 -c istio-proxy
The access to Redis is recorded in the log as follows. The response flags UF,URX indicate an upstream connection failure, i.e. the connection to the upstream Redis server failed:
[2020-09-12T13:38:23.077Z] "- - -" 0 UF,URX "-" "-" 0 0 1001 - "-" "-" "-" "-" "10.1.1.24:6379" outbound|6379||redis.default.svc.cluster.local - 10.1.1.24:6379 10.1.1.25:45940 - -
We can export its xDS configuration through the Envoy management interface to further analyze the cause of its failure.
kubectl exec redis-client-6d4c6c975f-bm5w6 -c istio-proxy -- curl http://127.0.0.1:15000/config_dump
Since the error occurs on outbound access, we focus on the client-side configuration of the Cluster being accessed. In the exported xDS configuration we can find the configuration of the Redis Cluster, shown in the following snippet (irrelevant content has been removed for readability):
{"version_info": "2020-09-13T00:33:43Z/5", "cluster": {"@ type": "type.googleapis.com/envoy.api.v2.Cluster", "name": "outbound | 6379 | redis.default.svc.cluster.local", "type": "ORIGINAL_DST", "connect_timeout": "1s", "lb_policy": "CLUSTER_PROVIDED" "circuit_breakers": {...}, # mTLS related settings "transport_socket": {"name": "envoy.transport_sockets.tls", "typed_config": {"@ type": "type.googleapis.com/envoy.api.v2.auth.UpstreamTlsContext" "common_tls_context": {"alpn_protocols": ["istio-peer-exchange", "istio"], # access the client certificate "tls_certificate_sds_secret_configs" used by Redis: [{"name": "default" "sds_config": {"api_config_source": {"api_type": "GRPC" "grpc_services": [{"envoy_grpc": {"cluster_name": "sds-grpc"}]}] "combined_validation_context": {"default_validation_context": {# spiffe indentity "verify_subject_alt_name": ["spiffe://cluster.local/ns/default/sa/default"]} used to verify the identity of the Redis server # Root certificate "validation_context_sds_secret_config": {"name": "ROOTCA", "sds_config": {"api_config_source": {"api_type": "GRPC" used to verify the Redis server "grpc_services": [{"envoy_grpc": {"cluster_name": "sds-grpc"}]} "sni": "outbound_.6379_._.redis.default.svc.cluster.local"}}, "filters": [{...}]}, "last_updated": "2020-09-13T00:33:43.862Z"}
In the transport_socket section we can see that Envoy is configured with the TLS certificate information used to access the Redis Cluster, including the client certificate used by the Envoy sidecar to access Redis, the root certificate used to validate the Redis server certificate, and the server identity (in SPIFFE format) to be verified. The certificates themselves are obtained via SDS (Secret Discovery Service) in the xDS protocol, which is beyond the scope of this article; if you want to understand Istio's certificate and SDS mechanisms, refer to the article on how the certificate machinery works in Istio. From this configuration we can see that when the Envoy sidecar in the client Pod receives a request from the Redis client, it initiates an mTLS connection to the Redis server.
There seems to be nothing wrong with the mTLS configuration of the Envoy sidecar on the Redis client side. However, we already know that the Redis server does not have an Envoy sidecar, so it can only accept plain TCP connections. As a result, the client-side Envoy sidecar fails to establish a connection to the Redis server.
The Redis client thinks it goes like this:
But it actually goes like this:
If the server side has no Envoy sidecar and does not support mTLS, the client-side Envoy should not initiate the connection with mTLS. So what is going on? Let's compare this with the configuration of other Clusters in the client Envoy.
The mTLS-related configuration for accessing an ordinary Cluster looks like this:
{"version_info": "2020-09-13T00:32:39Z/4", "cluster": {"@ type": "type.googleapis.com/envoy.api.v2.Cluster", "name": "outbound | 8080 | | awesome-app.default.svc.cluster.local", "type": "EDS" "eds_cluster_config": {"eds_config": {"ads": {}}, "service_name": "outbound | 8080 | | awesome-app.default.svc.cluster.local"}, "connect_timeout": "1s", "circuit_breakers": {.} ... # mTLS-related configuration "transport_socket_matches": [{"name": "tlsMode-istio", "match": {"tlsMode": "istio" # pairs of endpoint with "tlsMode": "istio" lable Enable mTLS}, "transport_socket": {"name": "envoy.transport_sockets.tls", "typed_config": {"@ type": "type.googleapis.com/envoy.api.v2.auth.UpstreamTlsContext", "common_tls_context": {"alpn_protocols": ["istio-peer-exchange" "istio", "h3"], "tls_certificate_sds_secret_configs": [{"name": "default", "sds_config": {"api_config_source": {"api_type": "GRPC" "grpc_services": [{"envoy_grpc": {"cluster_name": "sds-grpc"}]}] "combined_validation_context": {"default_validation_context": {}, "validation_context_sds_secret_config": {"name": "ROOTCA", "sds_config": {"api_config_source": {"api_type": "GRPC" "grpc_services": [{"envoy_grpc": {"cluster_name": "sds-grpc"}]} "sni": "outbound_.6379_._.redis1.dubbo.svc.cluster.local"}, {"name": "tlsMode-disabled", "match": {}, # for all other enpoint Do not enable mTLS, use plain TCP to connect "transport_socket": {"name": "envoy.transport_sockets.raw_buffer"}]}, "last_updated": "2020-09-13T00:32:39.535Z"}
As you can see from the configuration, an ordinary Cluster has two mTLS-related entries: tlsMode-istio and tlsMode-disabled. The tlsMode-istio entry is similar to the Redis Cluster configuration, but it contains a match condition (the match section), which means mTLS is enabled only for endpoints carrying the "tlsMode": "istio" label. For endpoints without this label, the tlsMode-disabled entry applies and raw_buffer, i.e. plain TCP, is used for the connection.
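To make the match condition more concrete, the endpoints Istio pushes for such a Cluster carry the tlsMode marker in their metadata under the envoy.transport_socket_match namespace, which is what the match section is evaluated against. The following is a simplified sketch of a single endpoint entry, rendered as YAML for readability; the address and port are assumptions and the real EDS payload contains more fields:

- endpoint:
    address:
      socket_address:
        address: 10.1.1.30      # assumed Pod IP for illustration
        port_value: 8080
  metadata:
    filter_metadata:
      envoy.transport_socket_match:
        tlsMode: istio          # this is what the tlsMode-istio match keys on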
Looking at the relevant Istio source code, we can see that when the Istio webhook injects the Envoy sidecar into a Pod, it also adds a set of labels to the Pod, including the "tlsMode": "istio" label, as shown in the following code snippet:
patchLabels := map[string]string{
	label.TLSMode:                                model.IstioMutualTLSModeLabel,
	model.IstioCanonicalServiceLabelName:         canonicalSvc,
	label.IstioRev:                               revision,
	model.IstioCanonicalServiceRevisionLabelName: canonicalRev,
}
Because the Pod is labeled at the same time the Envoy sidecar is injected, when the client-side Envoy sidecar initiates a connection to such a Pod, the label on the endpoint matches the tlsMode-istio configuration and mTLS is used. If a Pod has not been injected with the Envoy sidecar, it does not carry this label, the match condition shown above is not satisfied, and the client-side Envoy sidecar connects to that endpoint with plain TCP according to the tlsMode-disabled configuration. This is how Istio stays compatible with servers that do and do not support mTLS.
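For reference, a Pod that has had the sidecar injected typically carries labels along the following lines. This is a sketch only; the exact label keys depend on the Istio version, and the canonical-name and canonical-revision values here are assumptions for illustration:

metadata:
  labels:
    security.istio.io/tlsMode: istio              # the label the tlsMode match relies on
    service.istio.io/canonical-name: redis-client # assumed workload name
    service.istio.io/canonical-revision: latest

A Pod without the sidecar has no security.istio.io/tlsMode label, so only the catch-all tlsMode-disabled entry matches and the client-side Envoy falls back to plain TCP.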
The following figure shows how Istio uses endpoint labels to support both mTLS and plain TCP at the same time.
Comparing with an ordinary Cluster, we can see what is wrong with the Redis Cluster configuration: it should likewise use the tlsMode label on the endpoint to decide whether the client-side Envoy sidecar connects to the Redis server with mTLS or with plain TCP. In reality, however, the Redis Cluster contains only the mTLS configuration, which leads to the connection failure we saw earlier.
Redis is a Headless Service. Searching the community for related information shows that Istio versions before 1.6 had a bug in handling Headless Services, which is what caused this failure. See the issue "Istio 1.5 prevents all connection attempts to Redis (headless) service #21964".
Solution
Once the cause of the failure is found, the problem is easy to solve: we can disable mTLS for the Redis Service with a DestinationRule, as shown in the following yaml fragment:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: redis-disable-mtls
spec:
  host: redis.default.svc.cluster.local
  trafficPolicy:
    tls:
      mode: DISABLE
Looking at the Redis Cluster configuration in the client Envoy again, we can see that mTLS has been disabled and the Cluster no longer contains any mTLS-related certificate configuration.
{"version_info": "2020-09-13T09:02:28Z/7", "cluster": {"@ type": "type.googleapis.com/envoy.api.v2.Cluster", "name": "outbound | 6379 | redis.dubbo.svc.cluster.local", "type": "ORIGINAL_DST", "connect_timeout": "1s", "lb_policy": "CLUSTER_PROVIDED" "circuit_breakers": {...}, "metadata": {"filter_metadata": {"istio": {"config": "/ apis/networking.istio.io/v1alpha3/namespaces/dubbo/destination-rule/redis-disable-mtls"} "filters": [{"name": "envoy.filters.network.upstream.metadata_exchange", "typed_config": {"@ type": "type.googleapis.com/udpa.type.v1.TypedStruct", "type_url": "type.googleapis.com/envoy.tcp.metadataexchange.config.MetadataExchange" "value": {"protocol": "istio-peer-exchange"}]}, "last_updated": "2020-09-13T09:02:28.514Z"}
Accessing the Redis server from the client again at this point, everything works as expected.