This article shows how to achieve high availability for Kubernetes services deployed across multiple clusters. It is a practical walkthrough, so follow along step by step.
With Kubernetes 1.3, we want to reduce the management and operational difficulties associated with cross-cluster and cross-regional service deployment.
Note: although the examples in this article use Google Container Engine (GKE) to provide the Kubernetes clusters, you can deploy Kubernetes in any other environment.
Here we go. The first step is to use GKE to create Kubernetes clusters in four Google Cloud Platform zones:

asia-east1-b, europe-west1-b, us-east1-b, us-central1-b
Create the clusters with:

gcloud container clusters create gce-asia-east1 \
    --scopes cloud-platform \
    --zone=asia-east1-b
gcloud container clusters create gce-europe-west1 \
    --scopes cloud-platform \
    --zone=europe-west1-b
gcloud container clusters create gce-us-east1 \
    --scopes cloud-platform \
    --zone=us-east1-b
gcloud container clusters create gce-us-central1 \
    --scopes cloud-platform \
    --zone=us-central1-b
Verify that the clusters were created:

gcloud container clusters list
NAME              ZONE            MASTER_VERSION  MASTER_IP        NUM_NODES  STATUS
gce-asia-east1    asia-east1-b    1.2.4           104.XXX.XXX.XXX  3          RUNNING
gce-europe-west1  europe-west1-b  1.2.4           130.XXX.XX.XX    3          RUNNING
gce-us-central1   us-central1-b   1.2.4           104.XXX.XXX.XX   3          RUNNING
gce-us-east1      us-east1-b      1.2.4           104.XXX.XX.XXX   3          RUNNING
The next step is to start the federation and deploy the federation control plane in one of the clusters you created. You can follow Kelsey Hightower's example to configure this step.
Federated services

Federated services are created through the federation API, so you define a service's properties against the federation API.
When the service is created, the federated service automatically does the following:
Create services in all Kubernetes clusters registered in the federation
Monitor the health of the service shards (and the clusters they run in)
Create DNS records with a public DNS provider (such as Google Cloud DNS or AWS Route 53) to ensure that clients of your service can seamlessly reach a healthy service endpoint at any time, even during an outage of a cluster, availability zone, or region.
Service clients inside a federated Kubernetes cluster prefer the local service shard when a healthy one exists; otherwise they are directed to a healthy shard in the nearest cluster.
A federation of Kubernetes clusters can span different cloud providers (such as GCP and AWS) and private clouds (such as OpenStack). You just need to create your clusters with the appropriate providers in the appropriate locations and register each cluster's API server address and credentials with the federation.
In our example, we created clusters in four zones and deployed the federation control plane in one of them; we will use that control plane to deploy our services. Please refer to the following figure for details.
Create federated services
Let's first look at all the registered clusters in the federation:
kubectl --context=federation-cluster get clusters
NAME               STATUS    VERSION   AGE
gce-asia-east1     Ready               1m
gce-europe-west1   Ready               57s
gce-us-central1    Ready               47s
gce-us-east1       Ready               34s
Create a federated service:
kubectl --context=federation-cluster create -f services/nginx.yaml

The --context=federation-cluster parameter tells kubectl to submit the request, with the corresponding credentials, to the federation API server. The federated service then automatically creates matching Kubernetes services in every cluster of the federation.
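The article does not reproduce services/nginx.yaml itself. As a sketch only (the exact field values are assumptions, not taken from the article), a LoadBalancer service for the NGINX pods created later might look like this:

```shell
# Hypothetical reconstruction of services/nginx.yaml -- the manifest is not
# shown in the article, so the fields below are illustrative assumptions.
cat > nginx.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    run: nginx
spec:
  type: LoadBalancer      # gives each cluster's shard an externally visible IP
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: nginx            # matches pods created with `kubectl run nginx`
EOF
```

The LoadBalancer type matters here: as explained below, each shard needs an externally visible IP for cross-cluster access to work.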
You can verify this by checking the services in each cluster, such as:
kubectl --context=gce-asia-east1a get svc nginx
NAME      CLUSTER-IP     EXTERNAL-IP      PORT(S)   AGE
nginx     10.63.250.98   104.199.136.89   80/TCP    9m
The above command assumes you have a kubectl context named gce-asia-east1a configured for a cluster in that zone. The name and namespace of the underlying Kubernetes services automatically match those of the federated service you created above.
The status of the federated service reflects the real-time status of the underlying Kubernetes services, for example:

kubectl --context=federation-cluster describe services nginx

Name:                   nginx
Namespace:              default
Labels:                 run=nginx
Selector:               run=nginx
Type:                   LoadBalancer
IP:
LoadBalancer Ingress:   104.XXX.XX.XXX, 104.XXX.XXX.XX
Port:                   http    80/TCP
Endpoints:
Session Affinity:       None
No events.
The LoadBalancer Ingress address of the federated service aggregates the LoadBalancer Ingress addresses of the services in all registered Kubernetes clusters. For a federated service to work correctly across otherwise disconnected clusters and across different cloud providers, your services need externally visible IP addresses; the LoadBalancer service type is the common way to get them.
Note that we have not yet deployed any backend pods to receive the network traffic directed to these addresses (i.e. service endpoints), so the federated service does not yet consider these shards healthy, and therefore has not yet created DNS records for the service.
Adding backend pods

To make the underlying service shards healthy, we need to add backend pods to the service. This is currently done directly against the API servers of the underlying clusters (in the future, a single command against the federation server will do all of this for you). For example, to create backend pods in the underlying clusters:

for CLUSTER in asia-east1-a europe-west1-a us-east1-a us-central1-a
do
  kubectl --context=$CLUSTER run nginx --image=nginx:1.11.1-alpine --port=80
done
Verify the public DNS records

Once the pods are up and listening for connections, each cluster reports (via health checks) the healthy endpoints of the service to the federation. The federation in turn considers those service shards healthy and creates the corresponding public DNS records. You can verify this through your DNS provider's tooling. For example, if your federation is configured with Google Cloud DNS and your DNS domain is 'example.com':
$ gcloud dns managed-zones describe example-dot-com
creationTime: '2016-06-26T18:18:39.229Z'
description: Example domain for Kubernetes Cluster Federation
dnsName: example.com.
id: '3229332181334243121'
kind: dns#managedZone
name: example-dot-com
nameServers:
- ns-cloud-a1.googledomains.com.
- ns-cloud-a2.googledomains.com.
- ns-cloud-a3.googledomains.com.
- ns-cloud-a4.googledomains.com.

$ gcloud dns record-sets list --zone example-dot-com
NAME                                                           TYPE   TTL    DATA
example.com.                                                   NS     21600  ns-cloud-e1.googledomains.com., ns-cloud-e2.googledomains.com.
example.com.                                                   SOA    21600  ns-cloud-e1.googledomains.com. cloud-dns-hostmaster.google.com. 1 21600 3600 1209600 300
nginx.mynamespace.myfederation.svc.example.com.                A      180    104.XXX.XXX.XXX, 130.XXX.XX.XXX, 104.XXX.XX.XXX, 104.XXX.XXX.XX
nginx.mynamespace.myfederation.svc.us-central1-a.example.com.  A      180    104.XXX.XXX.XXX
nginx.mynamespace.myfederation.svc.us-central1.example.com.    A      180    104.XXX.XXX.XXX, 104.XXX.XXX.XXX, 104.XXX.XXX.XXX
nginx.mynamespace.myfederation.svc.asia-east1-a.example.com.   A      180    130.XXX.XX.XXX
nginx.mynamespace.myfederation.svc.asia-east1.example.com.     A      180    130.XXX.XX.XXX, 130.XXX.XX.XXX
nginx.mynamespace.myfederation.svc.europe-west1.example.com.   CNAME  180    nginx.mynamespace.myfederation.svc.example.com.
... etc.
Note: if your federation is configured with AWS Route 53, use the equivalent AWS tools, for example:

$ aws route53 list-hosted-zones
$ aws route53 list-resource-record-sets --hosted-zone-id Z3ECL0L9QLOVBX
Whichever DNS provider you use, you can inspect the DNS records the federation created for you with any DNS query tool (such as dig or nslookup).
Discovering a federated service from pods inside the federated clusters

By default, Kubernetes clusters come with a cluster-local DNS server (KubeDNS) and a cleverly constructed DNS search path, so that names like "myservice", "myservice.mynamespace", or "bobsservice.othernamespace" issued by applications running in pods are automatically expanded and resolved to the appropriate service IP in the local cluster.
With the introduction of federated services and cross-cluster service discovery, this concept is extended globally across all clusters in your federation. To take advantage of it, you simply resolve a slightly different service name (for example, myservice.mynamespace.myfederation).
Using a different DNS name also prevents your existing services from unexpectedly resolving across zone or region boundaries without explicit configuration and opt-in, which could incur extra network charges or latency.
So, using the NGINX service above and the federated service DNS name just introduced, consider an example: a pod in a cluster in availability zone us-central1-a needs to reach our NGINX service. Instead of using the traditional cluster-local DNS name ("nginx.mynamespace", which automatically expands to "nginx.mynamespace.svc.cluster.local"), it can now use the federated service name "nginx.mynamespace.myfederation". This name is automatically expanded and resolved to the nearest healthy shard of the NGINX service, wherever in the world that may be. If a healthy shard exists in the local cluster, the cluster-local IP address (usually 10.x.y.z) is returned by the cluster-local KubeDNS. This is exactly the same as non-federated service resolution.
If the service does not exist in the local cluster (or exists but has no healthy backend pods), the DNS query automatically expands to "nginx.mynamespace.myfederation.svc.us-central1-a.example.com". In effect, this finds the external IP of the service shard closest to the current availability zone. KubeDNS triggers this expansion automatically and returns the relevant CNAME record, which walks down the DNS records in the example above until it reaches the external IPs of the federated service in the local us-central1 region.
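To make the name-expansion order concrete, here is a small shell sketch of the candidate names that resolution falls through, from zone to region to global scope. The helper name fed_dns_name is mine for illustration, not part of Kubernetes or KubeDNS:

```shell
# Build the federated DNS names that resolution falls through, most specific
# first. The helper and its arguments are illustrative, not a Kubernetes API.
fed_dns_name() {
  local svc=$1 ns=$2 fed=$3 scope=$4 domain=$5
  if [ -n "$scope" ]; then
    echo "${svc}.${ns}.${fed}.svc.${scope}.${domain}"
  else
    echo "${svc}.${ns}.${fed}.svc.${domain}"
  fi
}

# Candidates for a pod in zone us-central1-a of region us-central1:
fed_dns_name nginx mynamespace myfederation us-central1-a example.com
fed_dns_name nginx mynamespace myfederation us-central1   example.com
fed_dns_name nginx mynamespace myfederation ""            example.com
```

The three printed names correspond to the zone, region, and global A records shown in the gcloud dns record-sets listing above.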
By explicitly specifying the DNS name, you can point directly at service shards in non-local availability zones (AZs) and regions without relying on automatic DNS expansion. For example, "nginx.mynamespace.myfederation.svc.europe-west1.example.com" resolves to all healthy service shards in Europe, even if the client doing the lookup is in the United States, and regardless of whether there are healthy shards of the service in the United States. This is useful for remote monitoring and similar applications.
Discovering federated services from outside the federated cluster
For external clients, the automatic DNS expansion described above does not apply. External clients must specify the fully qualified DNS name (FQDN) of the federated service, which can be a zone, region, or global name. For convenience, it is often worth manually creating CNAME records for your service, such as:

eu.nginx.acme.com    CNAME    nginx.mynamespace.myfederation.svc.europe-west1.example.com.
us.nginx.acme.com    CNAME    nginx.mynamespace.myfederation.svc.us-central1.example.com.
nginx.acme.com       CNAME    nginx.mynamespace.myfederation.svc.example.com.
That way your clients can always use the short form on the left and be automatically routed to the nearest healthy shard on their home continent, while the cluster federation handles failover for them.
Handling failures of backend pods and whole clusters

Standard Kubernetes services already ensure that unresponsive pods are promptly removed from service behind the cluster IP. The Kubernetes federation system extends this by automatically monitoring the health of the clusters and of the endpoints behind every shard of every federated service, adding and removing shards as needed.
Because of DNS caching (the TTL of federated service DNS records defaults to three minutes, but is configurable), it may take up to that long for client requests to fail over to an available cluster when a cluster or service shard fails completely. However, since each regional or global DNS name can return several IP addresses (the us-central1 name above, for example, has three), many clients will automatically fail over to one of the alternative addresses with little or no delay.
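A client can exploit those multiple A records by trying each address until one answers. A minimal sketch, where the probe function is a stand-in for a real connectivity check (such as curl with a short timeout) and the IPs are placeholders:

```shell
# Stand-in for a real health check; here we pretend only 104.0.0.3 is reachable.
probe() {
  [ "$1" = "104.0.0.3" ]
}

# Walk the addresses returned for the service and print the first healthy one.
pick_healthy() {
  local ip
  for ip in "$@"; do
    if probe "$ip"; then
      echo "$ip"
      return 0
    fi
  done
  return 1   # no healthy address found
}

pick_healthy 104.0.0.1 104.0.0.2 104.0.0.3   # prints 104.0.0.3
```

This is essentially what resolver libraries and HTTP clients that iterate over all returned addresses do for you, which is why multi-address DNS answers mask most shard failures even before the TTL expires.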
That is how to achieve high availability for Kubernetes services across clusters. We hope you found something here that you can apply in your daily work.