This article shows how to use Istio for multi-cluster deployment management with a single control plane VPN connection topology. The content is concise and easy to follow, and I hope the detailed walkthrough below gives you something useful.
As a dedicated infrastructure layer for improving service-to-service communication, the service mesh is one of the hottest topics in the cloud native space. As containers become more popular, service topologies change frequently, which demands better networking capabilities. A service mesh helps us manage network traffic through service discovery, routing, load balancing, health checking, and observability. It attempts to provide a standardized solution to the irregular and complex networking problems that come with containers.
Service meshes can also be used in chaos engineering, "a discipline that experiments on distributed systems in order to build systems that can reliably withstand extreme conditions." A service mesh can inject delays and errors into the environment without having to install a daemon on each host.
Containers are the cornerstone of cloud native applications. Containerizing an application makes development and deployment more agile and migration more flexible, and all of this is built on standardization. Container orchestration goes a step further, allowing resources to be allocated, scheduled, and utilized more efficiently. In the cloud native era, a Kubernetes infrastructure combined with the Istio service mesh provides multi-cloud and hybrid cloud support, effective governance for microservices, and dedicated support for specific application workloads on top of Kubernetes and Istio, such as traffic governance for Kubeflow services and routing management for Knative workloads.
Although the use of Service Mesh in cloud native systems has grown rapidly, there is still plenty of room for improvement. Serverless computing needs the naming and linking model of Service Mesh, which highlights the role of Service Mesh in the cloud native ecosystem. Service identity and access policies are still rudimentary in cloud native environments, and Service Mesh will undoubtedly be an indispensable foundation for them. Like TCP/IP, Service Mesh will take the underlying infrastructure a step further.
Hybrid clouds can take many forms. Typically, hybrid cloud refers to running across public and private (on-premises) clouds, while multi-cloud means running across multiple public cloud platforms.
Adopting a hybrid or multi-cloud architecture can bring many benefits to your organization. For example, using multiple cloud providers helps you avoid vendor lock-in and lets you choose the cloud services that best fit your goals. With both cloud and on-premises environments, you can enjoy the advantages of the cloud (flexibility, scalability, cost reduction) as well as the benefits of running locally (security, low latency, hardware reuse). If you are migrating to the cloud for the first time, a hybrid cloud approach lets you do so at your own pace and in the way that best suits your business.
Based on our practical experience in the public cloud and the feedback we have received from customers, we believe that adopting a hybrid service mesh is the key to simplifying application management, security, and reliability across cloud and on-premises environments, whether your applications run in containers or in virtual machines.
A key feature of Istio is that it provides a service abstraction for your workloads (such as pods, jobs, and VM-based applications). This abstraction becomes even more important when you move to a hybrid topology, because you now need to focus not on one environment but on several.
When you use Istio on a single Kubernetes cluster, you gain all the management benefits for microservices, including visibility, fine-grained traffic policies, unified telemetry, and security. But when you use Istio across multiple environments, it effectively gives your applications a new superpower. Istio is not only a service abstraction for Kubernetes; it is also a way to standardize the network across the entire environment. It is a way to centralize API management and decouple JWT validation from application code. It is a fast path to a secure, zero-trust network across cloud providers.
So how does all this magic happen? A hybrid Istio deployment is a set of Istio sidecar proxies: one Envoy proxy sits next to each service, whether that service runs in a virtual machine or in a container in a different environment, and these sidecar proxies know how to interact with each other across boundaries. The Envoy sidecars may be managed by a single central Istio control plane, or by multiple control planes running in each environment.
Multi-cluster deployment management
A service mesh essentially combines a group of individual microservices into a single controllable composite application. Istio, as a service mesh, is likewise designed to monitor and manage a network of cooperating microservices under a single administrative domain. For an application of a given size, all of its microservices can run on a single orchestration platform such as one Kubernetes cluster. However, due to growing scale or the need for redundancy, most applications eventually need to distribute some of their services to run elsewhere.
The community is increasingly interested in running workloads on multiple clusters to achieve better scalability and fault isolation, thereby improving application agility. Istio 1.0 began to support some multi-cluster features, and new capabilities have been added in later releases.
The Istio service mesh supports many possible topologies for distributing application services beyond a single cluster, and there are two common patterns or use cases: single mesh and mesh federation. As the name implies, a single mesh combines multiple clusters into one unit managed by one Istio control plane; this can be implemented as a single physical control plane or as a set of control planes kept in sync by replicating configuration. Mesh federation, on the other hand, keeps multiple clusters as separate administrative domains, selectively connects the clusters, and exposes only a subset of services to the other clusters; naturally, its implementation involves multiple control planes.
Specifically, these different topologies include the following aspects:
Services within a mesh can use service entries (ServiceEntry) to access independent external services, or services exposed by another loosely coupled service mesh; this is often referred to as mesh federation (Mesh Federation). This topology is suitable for multi-cluster scenarios where the clusters are independent and isolated from each other and can only interact over the public network (see the ServiceEntry sketch after this list).
The service mesh can be extended to cover services running on virtual machines or physical bare-metal hosts, often referred to as mesh expansion (Mesh Expansion). In the previous section, we described this hybrid deployment scenario across Kubernetes clusters, virtual machines, and bare-metal hosts.
Services from multiple clusters can be combined into a single service mesh, often referred to as a multi-cluster mesh (Multicluster Mesh). Depending on the network topology, a multi-cluster mesh is usually divided into single control plane VPN connection, single control plane gateway connection, and multiple control plane topologies.
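For the mesh federation case above, access to a service exposed by another mesh is declared with a ServiceEntry. A minimal sketch, assuming a hypothetical host svc2.example.com exposed over HTTPS by the other mesh (the name and host are placeholders for illustration):

apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: svc2-external          # hypothetical name
spec:
  hosts:
  - svc2.example.com           # hypothetical host exposed by the other mesh
  location: MESH_EXTERNAL      # the service lives outside this mesh
  ports:
  - number: 443
    name: https
    protocol: HTTPS
  resolution: DNS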
Single control plane VPN connection topology
As a baseline, prior to Istio 1.1, Istio 1.0 multi-cluster support only covered the single-mesh design. It allows multiple clusters to join one mesh, but all clusters must be on a single shared network. In other words, the IP addresses of all pods and services in all clusters must be directly routable without conflicts, which means an IP address assigned in one cluster must never be reused in another cluster at the same time.
In this topology, a single Istio control plane runs on one of the clusters. The control plane's Pilot manages services on the local and remote clusters and configures the Envoy proxies for all clusters. This approach works best in environments where all participating clusters have VPN connectivity, so that every pod in the mesh can be reached from anywhere else using the same IP address.
In this configuration, the Istio control plane is deployed on one of the clusters, while all other clusters run a lighter-weight remote Istio configuration that connects them to the single control plane, which manages all the Envoy proxies as one mesh. IP addresses of the individual clusters must not overlap, and DNS resolution of services on remote clusters is not automatic: users need to replicate service definitions on each participating cluster so that the Kubernetes services and applications in each cluster can expose their internal Kubernetes network to the other clusters. Once one or more remote Kubernetes clusters are connected to the Istio control plane, the Envoy proxies can communicate with the single control plane and form a mesh network across multiple clusters.
Prerequisites
In practice, there are various constraints among the mesh, the clusters, and the network; for example, in some environments the network and the clusters are directly related. The single control plane VPN connection topology of Istio's single-mesh design must meet the following conditions:
Two or more clusters running Kubernetes 1.9 or later
Ability to deploy Istio control plane on one of the clusters
RFC 1918 networking, a VPN, or a more advanced network technology that meets the following requirements:
The pod CIDR range and service CIDR range of each cluster must be unique in the multi-cluster environment and must not overlap
All pod CIDRs in each cluster must be routable to each other
All Kubernetes control plane API servers must be routable to each other.
In addition, in order to support cross-cluster DNS name resolution, you must ensure that the corresponding namespaces, services, and service accounts are defined in every cluster that needs to make cross-cluster service calls. For example, if service service1 in namespace ns1 of cluster cluster1 needs to call service service2 in namespace ns2 of cluster cluster2, then to support DNS resolution of the service name inside cluster1 you also need to create namespace ns2 and the service service2 under that namespace in cluster cluster1 (see the sketch below).
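A minimal sketch of the stub definitions that could be created in cluster cluster1 for this example; the port value and the omission of a selector are illustrative assumptions, and in practice you would usually replicate the real service2 definition from cluster2:

apiVersion: v1
kind: Namespace
metadata:
  name: ns2
---
apiVersion: v1
kind: Service
metadata:
  name: service2
  namespace: ns2
spec:
  ports:
  - name: http
    port: 80        # assumed port; keep it identical to service2 in cluster2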
The following example assumes that the networks of the two Kubernetes clusters already meet the above requirements and that the pods in each cluster can route to each other; that is, the network is reachable and the ports are accessible. (If you are using a public cloud service such as Alibaba Cloud, make sure the relevant ports are allowed by the security group rules; otherwise, calls between services will fail.)
The pod CIDR scope and service CIDR scope definitions for the two Kubernetes clusters are shown in the following table:
(Table: pod CIDR range and service CIDR range definitions for cluster1 and cluster2)
Topology architecture
(Figure: invocation relationships supported across multiple clusters in Istio's single control plane VPN connection topology)
As the figure shows, the Istio control plane is installed on only one Kubernetes cluster in the entire multi-cluster topology. The cluster that hosts the Istio control plane is usually called the local cluster, and all the other clusters are called remote clusters.
These remote clusters only need to install Istio's Citadel and the Sidecar Injector admission controller, so their Istio footprint is small: Citadel handles security management for the remote cluster, and the Sidecar Injector admission controller handles automatic sidecar injection for the workloads in the data plane.
In this architecture, Pilot has access to the Kubernetes API servers of all clusters, so it has a global view of the network. Citadel and the Sidecar Injector admission controller operate only within the scope of their own cluster. Each cluster has unique pod and service CIDRs, and in addition there is a shared flat network between the clusters, which guarantees direct routing to any workload, including to the Istio control plane itself. For example, an Envoy proxy on a remote cluster needs to obtain configuration from Pilot, and to perform checks against and report telemetry to Mixer.
Enable mutual TLS communication
If mutual TLS communication across clusters is to be enabled, each cluster must be deployed and configured as follows. First, an intermediate CA certificate is generated for each cluster's Citadel from a shared root CA; the shared root CA is what enables mutual TLS communication across the different clusters. For illustration purposes, we use the sample root CA certificate provided in the Istio installation under the samples/certs directory for both clusters. In a real deployment you would likely use a different intermediate CA certificate for each cluster, all signed by a common root CA.
Create the secret in each Kubernetes cluster, i.e. clusters cluster1 and cluster2 in this example. Use the following commands to create a Kubernetes secret from the generated CA certificates:
kubectl create namespace istio-system
kubectl create secret generic cacerts -n istio-system \
  --from-file=samples/certs/ca-cert.pem \
  --from-file=samples/certs/ca-key.pem \
  --from-file=samples/certs/root-cert.pem \
  --from-file=samples/certs/cert-chain.pem
Of course, if your environment is only for development or testing, or you do not need mutual TLS communication, the steps above can be skipped entirely.
Deploy the local control plane
Installing the Istio control plane on the so-called local cluster is not much different from installing Istio on a single cluster. One thing to pay attention to is how to configure the Envoy proxies so that the right IP ranges are routed through the mesh rather than accessed directly as external services. If you install Istio with Helm, there is a Helm variable named global.proxy.includeIPRanges; make sure this variable is either "*" or includes the pod CIDR ranges and service CIDR ranges of the local cluster and of all remote clusters.
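A minimal sketch of passing this variable on the Helm command line; the CIDR values here are placeholders for the pod and service CIDRs of cluster1 and cluster2 and must be replaced with your own, and the flag would normally be appended to the helm template command used later in this article:

# CIDR values below are placeholders for illustration only
helm template --namespace=istio-system \
  --values install/kubernetes/helm/istio/values.yaml \
  --set global.proxy.includeIPRanges="172.16.0.0/16\,172.19.0.0/20\,172.20.0.0/16\,172.21.0.0/20" \
  install/kubernetes/helm/istio > istio.yaml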
You can verify the global.proxy.includeIPRanges setting by checking the traffic.sidecar.istio.io/includeOutboundIPRanges entry in the istio-sidecar-injector configmap in the istio-system namespace, as shown below:
kubectl get configmap istio-sidecar-injector -n istio-system -o yaml | grep includeOutboundIPRanges
The output should show "*" or the CIDR list you configured.
In the cluster cluster1 where the Istio control plane components are deployed, follow these steps.
If mutual TLS communication is enabled, the following configuration parameters are required:
helm template --namespace=istio-system \
  --values install/kubernetes/helm/istio/values.yaml \
  --set global.mtls.enabled=true \
  --set security.selfSigned=false \
  --set global.controlPlaneSecurityEnabled=true \
  install/kubernetes/helm/istio > istio-auth.yaml
kubectl apply -f istio-auth.yaml
If you do not need to enable mutual TLS communication, modify the configuration parameters as follows:
helm template --namespace=istio-system \
  --values install/kubernetes/helm/istio/values.yaml \
  --set global.mtls.enabled=false \
  --set security.selfSigned=true \
  --set global.controlPlaneSecurityEnabled=false \
  install/kubernetes/helm/istio > istio-noauth.yaml
kubectl apply -f istio-noauth.yaml
Change the type of the Istio services istio-pilot, istio-telemetry, istio-policy, and zipkin to an internal (private network) load balancer, so that these services are exposed to the remote clusters over the intranet. Different cloud vendors have different mechanisms for this, but most of them are implemented by adding an annotation. For Alibaba Cloud Container Service, setting up an internal load balancer is very simple: you only need to add the following annotation to the YAML definition of the service: service.beta.kubernetes.io/alicloud-loadbalancer-address-type: intranet (see the sketch below).
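A minimal sketch of what the istio-pilot Service might look like after this change; the port list is abbreviated for illustration, and the annotation shown is the Alibaba Cloud one mentioned above, so adjust both for your own provider and Istio version:

apiVersion: v1
kind: Service
metadata:
  name: istio-pilot
  namespace: istio-system
  annotations:
    # exposes the load balancer on the private network only (Alibaba Cloud)
    service.beta.kubernetes.io/alicloud-loadbalancer-address-type: intranet
spec:
  type: LoadBalancer          # changed from ClusterIP
  ports:
  - name: grpc-xds            # abbreviated; keep the service's full port list
    port: 15010
  selector:
    istio: pilot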
In addition, you need to configure the ports of each of these services according to the definitions shown below.
(Figure: service settings)
The istio-pilot service ports are shown in Table 1.
(Table 1: istio-pilot service port description)
The istio-telemetry service ports are shown in Table 2.
(Table 2: istio-telemetry service port description)
The istio-policy service ports are shown in Table 3.
(Table 3: istio-policy service port description)
The zipkin service ports are shown in Table 4.
(Table 4: zipkin service port description)
Install istio-remote
After the control plane has been installed in the local cluster, the istio-remote component must be deployed to each remote Kubernetes cluster. Wait for the Istio control plane to finish initializing before performing the steps in this section. You must run the endpoint-gathering commands against the Istio control plane cluster to capture the control plane service endpoints, i.e. the addresses of the istio-pilot, istio-telemetry, istio-policy, and zipkin services mentioned above.
To deploy the Istio-remote component in the remote cluster cluster2, follow these steps:
1. Use the following command to set the environment variables on the local cluster:
export PILOT_IP=$(kubectl -n istio-system get service istio-pilot -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
export POLICY_IP=$(kubectl -n istio-system get service istio-policy -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
export TELEMETRY_IP=$(kubectl -n istio-system get service istio-telemetry -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
export ZIPKIN_IP=$(kubectl -n istio-system get service zipkin -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo $PILOT_IP $POLICY_IP $TELEMETRY_IP $ZIPKIN_IP
2. If mutual TLS communication across clusters is enabled, the CA secret must also be deployed and configured in this cluster.
Of course, if your environment is only for development or testing, or you do not need mutual TLS communication, this step can be skipped entirely. Run the following commands on the remote Kubernetes cluster cluster2 to create a Kubernetes secret from the generated CA certificates:
kubectl create namespace istio-system
kubectl create secret generic cacerts -n istio-system \
  --from-file=samples/certs/ca-cert.pem \
  --from-file=samples/certs/ca-key.pem \
  --from-file=samples/certs/root-cert.pem \
  --from-file=samples/certs/cert-chain.pem
3. On the remote Kubernetes cluster cluster2, create the Istio remote deployment YAML file using Helm by executing the following command.
If mutual TLS communication is enabled, the following configuration parameters are required:
helm template install/kubernetes/helm/istio \
  --name istio-remote \
  --namespace istio-system \
  --values install/kubernetes/helm/istio/values-istio-remote.yaml \
  --set global.mtls.enabled=true \
  --set security.selfSigned=false \
  --set global.controlPlaneSecurityEnabled=true \
  --set global.remotePilotCreateSvcEndpoint=true \
  --set global.remotePilotAddress=${PILOT_IP} \
  --set global.remotePolicyAddress=${POLICY_IP} \
  --set global.remoteTelemetryAddress=${TELEMETRY_IP} \
  --set global.remoteZipkinAddress=${ZIPKIN_IP} > istio-remote-auth.yaml
Then deploy the Istio remote component to cluster2, as follows:
kubectl apply -f ./istio-remote-auth.yaml
If you do not need to enable mutual TLS communication, modify the configuration parameters as follows:
helm template install/kubernetes/helm/istio \
  --name istio-remote \
  --namespace istio-system \
  --values install/kubernetes/helm/istio/values-istio-remote.yaml \
  --set global.mtls.enabled=false \
  --set security.selfSigned=true \
  --set global.controlPlaneSecurityEnabled=false \
  --set global.remotePilotCreateSvcEndpoint=true \
  --set global.remotePilotAddress=${PILOT_IP} \
  --set global.remotePolicyAddress=${POLICY_IP} \
  --set global.remoteTelemetryAddress=${TELEMETRY_IP} \
  --set global.remoteZipkinAddress=${ZIPKIN_IP} > istio-remote-noauth.yaml
Then deploy the Istio remote component to cluster2, as follows:
kubectl apply -f ./istio-remote-noauth.yaml
Ensure that the above steps have completed successfully in the remote Kubernetes cluster; a quick check is shown below.
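A simple sanity check, assuming your kubectl context currently points at cluster2: the components installed by the istio-remote chart, such as istio-citadel and the istio-sidecar-injector, should be in the Running state.

# run against the remote cluster cluster2
kubectl get pods -n istio-system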
4. Create the Kubeconfig of the cluster cluster2.
After the istio-remote Helm chart is installed, a Kubernetes service account named istio-multi is created in the remote cluster with minimal RBAC access. The corresponding ClusterRole is defined as follows:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: istio-reader
rules:
- apiGroups: ['']
  resources: ['nodes', 'pods', 'services', 'endpoints']
  verbs: ['get', 'watch', 'list']
The following procedure generates a kubeconfig file for the remote cluster using the istio-multi service account credentials described above. Create a kubeconfig for the service account istio-multi on cluster cluster2 and save it as the file n2-k8s-config:
CLUSTER_NAME="cluster2"
SERVER=$(kubectl config view --minify=true -o "jsonpath={.clusters[].cluster.server}")
SECRET_NAME=$(kubectl get sa istio-multi -n istio-system -o jsonpath='{.secrets[].name}')
CA_DATA=$(kubectl get secret ${SECRET_NAME} -n istio-system -o "jsonpath={.data['ca\.crt']}")
TOKEN=$(kubectl get secret ${SECRET_NAME} -n istio-system -o "jsonpath={.data['token']}" | base64 --decode)
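The cat command that writes these values into the n2-k8s-config file is truncated in the original text. A minimal sketch of that file generation, with the field layout assumed from the standard Istio multicluster setup:

cat <<EOF > n2-k8s-config
apiVersion: v1
kind: Config
clusters:
  - cluster:
      certificate-authority-data: ${CA_DATA}
      server: ${SERVER}
    name: ${CLUSTER_NAME}
contexts:
  - context:
      cluster: ${CLUSTER_NAME}
      user: ${CLUSTER_NAME}
    name: ${CLUSTER_NAME}
current-context: ${CLUSTER_NAME}
preferences: {}
users:
  - name: ${CLUSTER_NAME}
    user:
      token: ${TOKEN}
EOF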