
How Rancher + VMware PKS Enables Edge Cluster Management for Hundreds of Sites Worldwide


Many people who are new to this topic don't know how Rancher + VMware PKS can deliver edge cluster management for hundreds of sites around the world, so this article walks through the problem and its solution. I hope that after reading it you will be able to solve this problem yourself.

Rancher is a container orchestration and management platform that has been working in this field for several years and offers many easy-to-use, practical features. In recent years it has been refactored to fully embrace Kubernetes. In this article I'll focus on how to build an enterprise-grade, highly available, and secure Rancher Server installation on top of VMware Enterprise PKS. Along the way, I'll answer the question "Why use Rancher on PKS?"

Why deploy Rancher on PKS?

I know this is not a common pairing, especially since there is a degree of competition between the two solutions. Both can deploy Kubernetes clusters across different cloud providers and locally on stacks such as vSphere. Each has its own strengths and weaknesses, but they do not overlap in the management space. Enterprise PKS uses BOSH to deploy Kubernetes clusters and continues to use it for lifecycle events such as scaling, patching, and upgrades. Rancher, on the other hand, uses its own drivers for various cloud PaaS solutions, and even uses vSphere for local deployments, to perform similar tasks. However, Rancher can import externally provisioned clusters and provide visibility into them, something PKS has not yet implemented. At VMworld 2019, VMware announced a product called Tanzu Mission Control, which may eventually provide these existing Rancher capabilities. Until then, however, Rancher remains an excellent piece of technology for managing and aggregating different Kubernetes clusters into a single system.

Where did this case come from?

Like many of my previous articles, this one grew out of a client project. Since I couldn't find an Enterprise PKS guide on this topic, I decided to write one myself. The customer plans to deploy Kubernetes at the edge, on bare metal and small-form-factor hardware, across hundreds of sites around the world. They need to unify all of these clusters in one tool to simplify management, monitoring, and application deployment. They already run Enterprise PKS in the primary data center to deploy Kubernetes clusters for other business requirements. Because these sites connect back to their data centers around the world, the clusters need to be managed from a tool that can run locally with high availability, security, and performance. Using Enterprise PKS for the underlying, dedicated Kubernetes cluster, and then layering Rancher Server on top in HA mode, gives us the best of both worlds: networking and security through NSX-T, cluster lifecycle maintenance through BOSH, image registry replication through Harbor, and unified management of dispersed Kubernetes clusters in a single pane of glass through Rancher.

Architecture

Let's first look at the architecture diagram.

At the bottom we have a vSphere stack, hosting the environment in our on-premises SDDC. On top of that, we build a Kubernetes cluster with three masters and three workers through PKS. In addition to many other system containers (not shown in the figure above), the workers host the Rancher pods and nginx pods. On top of that, we use the load balancer provided by NSX-T to expose two main virtual servers: one for control plane access and the other for ingress access.

However, after a lot of trial and error, I found that NSX-T's L7 load balancer does not support the request headers that Rancher needs to function properly. Specifically, Rancher needs four request headers to be passed to its pods with each HTTP request. These headers identify the source of the inbound request and ensure that appropriate TLS termination has already occurred outside the application. One of them, X-Forwarded-Proto, tells Rancher which protocol was used to communicate with the load balancer, but NSX-T's load balancer cannot pass these headers along. For this reason, we must use a third-party ingress controller, nginx, which is currently one of the most popular options, provides broad support for most client scenarios, and passes the request headers straight through.

Let's move down a level and go inside Kubernetes (as shown in the following figure). Remember, we are looking specifically at the workers here, while abstracting away the physical layer of nodes.

External requests enter the cluster through the LoadBalancer-type service in front of the nginx controller. From the endpoint list built by that service, one nginx controller pod is selected, and the request is matched against the rules in the ingress resource. The request is then forwarded to the Rancher service, and finally to a pod matching the label selector on that service.

We will go through the installation process step by step, and you will have a deeper understanding after reading it.

Installation

Keeping the architecture above in mind, we will begin putting the pieces together. There are six major steps, and each has detailed sub-steps, which I will explain.

1. Configure a new custom PKS cluster

As a first step, we will create a plan specifically for Rancher, and then build a cluster dedicated to running Rancher in HA mode. We will also create a custom load balancer profile to reference during the build.

2. Prepare the cluster using Helm

After the cluster is built, we need to prepare it so that we can install packages with Helm, since we will use Helm to install the ingress controller and Rancher.

3. Generate certificates and secrets

This is an enterprise deployment, which means using custom certificates. So we need to create those certificates and make them available to Kubernetes through the appropriate API resources.

4. Create and configure Ingress

Since we cannot use NSX-T as the ingress controller, we must deploy another one and configure it appropriately. Then we will use an NSX-T Layer 4 load balancer to send traffic to the ingress.

5. Install Rancher

Next, we use Helm to install Rancher.

6. Configure the infrastructure

Once everything is in place, we need to perform some post-deployment configuration tasks before completing all the steps.

Next, I will explain in detail how to do each step.

Configure a new custom PKS cluster

The first thing we need to do is configure the PKS infrastructure so that we can deploy the Kubernetes cluster the way we want. Two basic things need to be done: create a new plan and create a medium load balancer. Our plan needs to specify three masters to make the control plane highly available and three workers for high availability at the workload level. Since we are preparing to register a large number of clusters into Rancher, we should go ahead and specify a medium load balancer; otherwise PKS will give us a very small one. This load balancer will provide a virtual server for control plane/API access and send traffic to Rancher.

In the PKS tile, I create a new plan using the following configuration.

Create the plan according to your own needs, but keep the number of masters and workers in mind. If you plan to put this into production, I suggest increasing the size of the master persistent disk. Since each master node is a combined node containing both control plane components and etcd, and Rancher will make heavy use of the Kubernetes cluster's etcd installation as its datastore, we need to make sure there is enough space.

You can use the add-ons section at the bottom to automatically load any Kubernetes manifest so that it runs during the cluster build process. I'll use pks-dns, which automatically creates DNS records for the control plane after it comes up. If you haven't used it before, I strongly suggest you try it.

Finally, in order for the Rancher agents to run correctly, be sure to enable the "Allow Privileged" mode on this cluster.

Now, having saved the plan and applied the changes, you should be able to run pks plans and see the new plan.

$ pks plans

Name        ID                                     Description
dev-small   8A0E21A8-8072-4D80-B365-D1F502085560   1 master; 2 workers (max 4)
dev-large   58375a45-17f7-4291-acf1-455bfdc8e371   1 master; 4 workers (max 8)
prod-large  241118e5-69b2-4ef9-b47f-4d2ab071aff5   3 masters; 10 workers (20 max)
dev-tiny    2fa824527-318d-4253-9f8e-0025863b8c8a  1 master; 1 worker (max 2) Auto-adds DNS record upon completion.
rancher     fa824527-318d-4253-9f8e-0025863b8c8a   Deploys HA configuration with 3 workers for testing Rancher HA builds.

With that in place, we can now create the custom load balancer profile. Currently, the only way to do this is through the API, using the pks CLI tool and a custom JSON specification file. Save the following to a file named lb-medium.json, replacing the values of "name" and "description" to suit your needs.

{"name": "lb-medium", "description": "Network profile for medium NSX-T load balancer", "parameters": {"lb_size": "medium"}}

Run the following command to create a new network profile:

$ pks create-network-profile lb-medium.json

Check again to make sure the profile exists.

$ pks network-profiles

Name       Description
lb-medium  Network profile for medium NSX-T load balancer

Now create a new PKS cluster using the plan and the medium load balancer profile.

$ pks create-cluster czpksrancher05 -e czpksrancher05.sovsystems.com -p rancher --network-profile lb-medium

Building a cluster will take a few minutes, and you can take a break. You can monitor the cluster creation process to see when the creation is complete.

$ watch -n 5 pks cluster czpksrancher05

Every 5.0s: pks cluster czpksrancher05

Name:                     czpksrancher05
Plan Name:                rancher
UUID:                     3373eb33-8c8e-4c11-a7a8-4b25fe17722d
Last Action:              CREATE
Last Action State:        in progress
Last Action Description:  Instance provisioning in progress
Kubernetes Master Host:   czpksrancher05.sovsystems.com
Kubernetes Master Port:   8443
Worker Nodes:             3
Kubernetes Master IP(s):  In Progress
Network Profile Name:     lb-medium

After the cluster is created, if you used the pks-dns tool I recommended earlier, your DNS record should already have been created.

$ nslookup czpksrancher05

Server:   10.10.30.13
Address:  10.10.30.13#53

Name:     czpksrancher05.sovsystems.com
Address:  10.50.0.71

Now that all the necessary work is done, we can access the cluster. Let's configure kubeconfig first.

$ pks get-credentials czpksrancher05

Then confirm that we have access.

$ kubectl cluster-info

Kubernetes master is running at https://czpksrancher05.sovsystems.com:8443
CoreDNS is running at https://czpksrancher05.sovsystems.com:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

Prepare the cluster using Helm

Now that we have access to our cluster, we need to prepare it with Helm. Helm is a tool that deploys entire applications to Kubernetes using a packaging format called charts. These charts package the various Kubernetes manifests needed to fully deploy an application. There are already plenty of articles on the Internet introducing Helm, so I won't repeat them; I'll assume you already know this and have added the Helm binary to your PATH. We need to create the objects Kubernetes requires for Tiller (Helm's server-side component) to run: a ServiceAccount and a ClusterRoleBinding, and then we initialize Helm.

$ kubectl -n kube-system create serviceaccount tiller

serviceaccount/tiller created

Create a binding that makes the Tiller account a cluster administrator. Once the installation is complete, you can remove the binding. In general, binding service accounts to the cluster-admin role is not a good idea unless they really require cluster-wide access; you can restrict Tiller to a single namespace if you want. Fortunately, Helm 3 removes Tiller entirely, which will further simplify deploying Helm charts and improve security.

$ kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller

clusterrolebinding.rbac.authorization.k8s.io/tiller created
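If you prefer a declarative approach, the two objects above can also be expressed as a single manifest and applied with kubectl apply -f. This is only a sketch equivalent to the two commands we just ran:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system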

Next, initialize Helm with the service account, which creates the Tiller pod in the kube-system namespace.

$ helm init --service-account tiller

$HELM_HOME has been configured at /home/chip/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

You should now be able to check and see that a new Tiller pod has been created and is running.

$ kubectl -n kube-system get po -l app=helm

NAME                            READY   STATUS    RESTARTS   AGE
tiller-deploy-5d6cc99fc-sv6z6   1/1     Running   0          5m25s

Running helm version should confirm that the client binary can communicate with the newly created Tiller pod.

$ helm version

Client: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}

The second step is complete; next we need to do some certificate work.

Generate certificates and secrets

Now we have to create our custom certificates and make them available to Kubernetes. In this case, I'm using a certificate signed by an internal enterprise certificate authority (CA), but you could just as well use a generic certificate signed by a third-party root. Rancher can use either of these, or a self-signed certificate. Since this is an enterprise deployment, we want an established chain of trust, so we will use the enterprise CA.

The certificate creation process varies from environment to environment, so I can't cover everything; I'll only briefly touch on the certificate creation and generation steps relevant to this example. In any case, we need a hostname through which we can access the Rancher application running in the cluster. Don't confuse this with the external hostname we used when creating the PKS cluster; they are two separate addresses and serve different purposes. I named this Rancher installation "czrancherblog" and generated and signed my certificate accordingly.

With a little Internet magic, we can fast-forward through the generation process to the point where we have the certificate, key, and root CA certificate files.
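Your tooling will differ, but as a rough sketch of what this can look like with openssl (the file names are simply chosen to match the ones used below, and the signing step depends entirely on your own PKI):

# Generate a private key and a CSR for the Rancher hostname
$ openssl genrsa -out czrancherblog.key 2048
$ openssl req -new -key czrancherblog.key -out czrancherblog.csr -subj "/CN=czrancherblog.sovsystems.com"

# Submit czrancherblog.csr to your enterprise CA, then save the issued certificate
# as czrancherblog.cer and the CA's root certificate as cacerts.pem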

Make sure the files are accessible to the system running kubectl before continuing.

We will create a new namespace for Rancher, and within it we need to create two secrets: one for the root CA certificate that signed our host certificate, and one for the generated certificate and its corresponding private key.

$ kubectl create ns cattle-system

namespace/cattle-system created

Before creating the first secret, make sure the root CA certificate file is named cacerts.pem; that exact name must be used, or the Rancher pods will fail to start.

$ kubectl -n cattle-system create secret generic tls-ca --from-file=cacerts.pem

secret/tls-ca created

Next, create a TLS secret that will hold the host certificate for the czrancherblog site. The nginx ingress controller pods created in the next step will also need this secret in order to correctly serve client requests over TLS.

$ kubectl -n cattle-system create secret tls czrancherblog-secret --cert=czrancherblog.cer --key=czrancherblog.key

secret/czrancherblog-secret created

Verify that the two secrets have been created:

$ kubectl -n cattle-system get secret

NAME                   TYPE                                  DATA   AGE
czrancherblog-secret   kubernetes.io/tls                     2      2m7s
default-token-4b7xj    kubernetes.io/service-account-token   3      5m1s
tls-ca                 Opaque                                1      3m26s

You will notice that in creating these secrets you actually created two different types. The tls-ca secret is of type Opaque, while czrancherblog-secret is a TLS secret. If you compare the two, you will see that tls-ca lists a base64-encoded certificate for cacerts.pem in its data section, while czrancherblog-secret lists two entries, one for each file. You will also notice that no matter which input files you provided, their values are stored as tls.crt and tls.key (base64-encoded, which obscures them).
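If you want to compare them yourself, you can dump both secrets as YAML and look at the data section of each:

$ kubectl -n cattle-system get secret tls-ca -o yaml
$ kubectl -n cattle-system get secret czrancherblog-secret -o yaml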

Now that we have finished with the certificates, let's move on to the nginx ingress section.

Create and configure Ingress

Now that the secrets and namespace have been created, let's install the nginx-ingress controller. As mentioned above, although Enterprise PKS can and will use NSX-T as an out-of-the-box ingress controller, Rancher has some special requirements that this controller cannot meet. So in this case we will use a different controller. Nginx has a widely used and technically mature ingress controller that fits our needs. Note that in this case I am using kubernetes/ingress-nginx rather than nginxinc/kubernetes-ingress. Although they differ slightly, either could actually be used here.

On your terminal, run the following Helm command to install the kubernetes/ingress-nginx controller from the stable repository:

$ helm install stable/nginx-ingress --name nginx --namespace cattle-system --set controller.kind=DaemonSet

In this command, we ask Helm to put the nginx objects in the cattle-system namespace and to run the pods as a DaemonSet rather than a Deployment. We want a controller on every Kubernetes node so that a node failure does not eliminate the ingress data path to the Rancher pods. Rancher accomplishes something similar, but uses a Deployment with pod anti-affinity rules (which work much like vSphere DRS anti-affinity rules).
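For reference, a preferred pod anti-affinity rule of the kind described above looks roughly like this inside a Deployment's pod template (only a sketch, assuming the app: rancher label; the exact rule shipped in the Rancher chart may differ):

affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        topologyKey: kubernetes.io/hostname
        labelSelector:
          matchLabels:
            app: rancher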

After the command completes, Helm returns a list of all the objects it created. It is worth pointing out that several of these API objects are ConfigMaps, which store the nginx controller's configuration, as well as services.

==> v1/Service
NAME                                  TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
nginx-nginx-ingress-controller        LoadBalancer   10.100.200.48   <pending>     80:32013/TCP,443:32051/TCP   0s
nginx-nginx-ingress-default-backend   ClusterIP      10.100.200.45   <none>        80/TCP                       0s

The first service, named nginx-nginx-ingress-controller, is of type LoadBalancer. From an application standpoint, this will be the entry point into the Rancher cluster. If you look at the EXTERNAL-IP column, you will notice that it initially reports only a pending status. Give your Kubernetes cluster some time to pull the images, then check the namespace's services again.
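If you would rather not re-run the command by hand, you can also watch the service until the external IP appears:

$ kubectl -n cattle-system get svc nginx-nginx-ingress-controller -w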

# kubectl -n cattle-system get svc

NAME                                  TYPE           CLUSTER-IP      EXTERNAL-IP                 PORT(S)                      AGE
nginx-nginx-ingress-controller        LoadBalancer   10.100.200.48   10.50.0.79,100.64.128.125   80:32013/TCP,443:32051/TCP   6m35s
nginx-nginx-ingress-default-backend   ClusterIP      10.100.200.45   <none>                      80/TCP                       6m35s

This time you will find that it has been allocated two IPs, because the NSX Container Plugin (NCP) noticed the creation of the object and communicated with NSX-T to create a new virtual server for us on the medium load balancer specified for this cluster.

If you jump over to the Server Pools tab in the NSX-T Manager interface and find the pool referenced here, you can check the Pool Members tab and see that it has automatically added the endpoints referenced by the service.

The Name column is truncated, but the IP column is still visible. Let's check Kubernetes again.

$ kubectl -n cattle-system get po -o wide -L app

NAME                                                  READY   IP           NODE                                   APP
nginx-nginx-ingress-controller-wjn2x                  1/1     10.11.54.2   9aa90453-2aff-4655-a366-0ea8e366de4a   nginx-ingress
nginx-nginx-ingress-controller-wkgms                  1/1     10.11.54.3   f35d32a0-4dd3-42e4-995b-5daffe36dfee   nginx-ingress
nginx-nginx-ingress-controller-wpbtp                  1/1     10.11.54.4   e832528a-899c-4018-8d12-a54790aa4e15   nginx-ingress
nginx-nginx-ingress-default-backend-8fc6b98c6-6r2jh   1/1     ...          f35d32a0-4dd3-42e4-995b-5daffe36dfee   nginx-ingress

The output has been slightly modified to remove unnecessary columns, but you can clearly see the pods, their status, their IP addresses, and the nodes they are running on.

With the IP address of the new virtual server in hand, we next need to create a DNS record that corresponds to the hostname of our Rancher HA installation. I decided to name it "czrancherblog", so I will create an A record in DNS pointing czrancherblog to 10.50.0.79.

$ nslookup czrancherblog

Server:   10.10.30.13
Address:  10.10.30.13#53

Name:     czrancherblog.sovsystems.com
Address:  10.50.0.79

The final step in this phase is to create the ingress resource on Kubernetes. Although we already have a LoadBalancer service type, we still need an ingress resource to route traffic to the Rancher service. That service doesn't exist yet, but it will be created soon.

Create a new manifest named ingress.yaml with the following content, and apply it using kubectl create -f ingress.yaml:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  namespace: cattle-system
  name: rancher-ingress
spec:
  rules:
  - host: czrancherblog.sovsystems.com
    http:
      paths:
      - backend:
          serviceName: rancher
          servicePort: 80
  tls:
  - hosts:
    - czrancherblog.sovsystems.com
    secretName: czrancherblog-secret

Now let me explain this manifest. First, the kind is Ingress, and then we have an annotation. The annotation does two things: it instructs NCP not to tell NSX-T to create and configure any objects, and it tells the nginx controller to handle this ingress, routing traffic to a service called rancher (which does not yet exist) on port 80. Finally, whenever traffic arrives with a host value of czrancherblog.sovsystems.com, the controller applies the TLS secret we created with our certificate and key.
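Once applied, a quick way to confirm that the ingress exists and carries the host, backend, and TLS settings you expect is:

$ kubectl -n cattle-system get ing rancher-ingress
$ kubectl -n cattle-system describe ing rancher-ingress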

Although we hope there will be no errors, we still need to put the address in a web browser to see what is returned.

We get two important pieces of information. First, the very obvious "503 Service Temporarily Unavailable" message; given that no traffic is passing through the nginx controller to a backend yet, this is exactly what we expect. Second, we see a green lock icon in the address bar, which means the TLS secret containing our certificate has been accepted and applied to the host rule.
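You can run the same check from the command line. With the enterprise root CA supplied, curl should validate the certificate and, for now, return the nginx 503 page (a quick sketch using the cacerts.pem file from earlier):

$ curl --cacert cacerts.pem -i https://czrancherblog.sovsystems.com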

So far, it's all going in the desired direction, so let's move on to the next step.

Install Rancher

Finally, it's the long-awaited moment to install Rancher. Now that we have everything ready, let's start the installation!

Add the Rancher stable repository to Helm, then update it and pull the latest chart.

$ helm repo add rancher-stable https://releases.rancher.com/server-charts/stable

"rancher-stable" has been added to your repositories

$ helm repo list

NAME            URL
stable          https://kubernetes-charts.storage.googleapis.com
local           http://127.0.0.1:8879/charts
rancher-stable  https://releases.rancher.com/server-charts/stable

$ helm repo update

Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "rancher-stable" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete.

Why didn't we need to add a repository before installing nginx-ingress? Because that chart comes from the default "stable" repository hosted by Google, so there was no custom repository to add. For Rancher, we must add the stable or latest Rancher repository in order to access its charts.
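If you want to confirm the chart is now visible before installing, a quick search should show it (Helm 2 syntax, matching the client used here):

$ helm search rancher-stable/rancher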

If the update succeeded, let's get Rancher going. Install the latest chart from the stable repo and check the output.

$ helm install rancher-stable/rancher --name rancher --namespace cattle-system --set hostname=czrancherblog.sovsystems.com --set ingress.tls.source=secret --set privateCA=true

NAME:   rancher
LAST DEPLOYED: Sun Sep  8 18:11:18 2019
NAMESPACE: cattle-system
STATUS: DEPLOYED

RESOURCES:
==> v1/ClusterRoleBinding
NAME     AGE
rancher  1s

==> v1/Deployment
NAME     READY  UP-TO-DATE  AVAILABLE  AGE
rancher  0/3    3           0          1s

==> v1/Pod(related)
NAME                      READY  STATUS             RESTARTS  AGE
rancher-6b774d9468-2svww  0/1    ContainerCreating  0         0s
rancher-6b774d9468-5n8n7  0/1    Pending            0         0s
rancher-6b774d9468-dsq4t  0/1    Pending            0         0s

==> v1/Service
NAME     TYPE       CLUSTER-IP     EXTERNAL-IP  PORT(S)  AGE
rancher  ClusterIP  10.100.200.15  <none>       80/TCP   1s

==> v1/ServiceAccount
NAME     SECRETS  AGE
rancher  1        1s

==> v1beta1/Ingress
NAME     HOSTS                         ADDRESS  PORTS    AGE
rancher  czrancherblog.sovsystems.com           80, 443  1s

NOTES:
Rancher Server has been installed.

NOTE: Rancher may take several minutes to fully initialize. Please standby while Certificates are being issued and Ingress comes up.

Check out our docs at https://rancher.com/docs/rancher/v2.x/en/

Browse to https://czrancherblog.sovsystems.com

Happy Containering!

Well, the chart has now deployed a large number of Kubernetes objects for us. Note, however, that the last object is another ingress. We don't need it, so just delete it.

$ kubectl -n cattle-system delete ing rancher

ingress.extensions "rancher" deleted

Check the pods to make sure they are running now.

$ kubectl -n cattle-system get po -l app=rancher

NAME                       READY   STATUS    RESTARTS   AGE
rancher-6b774d9468-2svww   1/1     Running   2          5m32s
rancher-6b774d9468-5n8n7   1/1     Running   2          5m32s
rancher-6b774d9468-dsq4t   1/1     Running   0          5m32s

Great, everything is up. If the value you see in the RESTARTS column is not zero, don't panic; Kubernetes will eventually reconcile through its controllers and monitoring loops, so as long as the appropriate actions are taken, the actual state will converge to the desired state.
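If you prefer to watch that reconciliation happen rather than polling, the rollout status of the deployment is a convenient way to block until all replicas are ready:

$ kubectl -n cattle-system rollout status deploy/rancher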

Now, after all this work is done, let's check our web page again to see what happened.

As we hoped, we have reached our Rancher cluster! Let's set the initial password, and then set the server URL on the next page. If you leave it unset, it will automatically be populated with our hostname.

Now that this step is complete, let's move on to the final step.

Configure the infrastructure

Now that we've got everything up and running, let's do something.

First of all, you may have noticed that your "local" cluster is stuck in the Provisioning state, waiting for a hostname to be set. This usually resolves itself automatically within a few minutes, but in this case you need to perform a few simple steps.

The quickest fix is to click the settings button on the far right to edit the cluster.

Don't make any changes now, just click Save.

Then go back to the home page, it should be up.

Enter the cluster again and make sure your node is reporting status.

Next, we need to distribute our underlying PKS Kubernetes nodes across the hypervisor hosts. If your plan spans multiple availability zones, you may already have this covered. But if, like me, you don't have many availability zones, you will need some way to make sure your masters and your workers are spread across the ESXi hosts. If you have heard of my Optimize-VMware-PKS project, I strongly recommend using it; it can handle the masters for you automatically, but we still need to separate the workers manually. Remember, we need high availability not only for the API and control plane, but also for the Rancher data plane. This means that a hypervisor failure should not affect the accessibility of our application.

After you run Optimize-VMware-PKS with the -ProcessDRSRules flag, it should detect the masters of the cluster and create a DRS anti-affinity rule for them. You then need to manually create a new rule for the worker nodes.

Create a new anti-affinity rule for the workers and add all of them to it. Because BOSH deploys them with UUIDs rather than friendly names, they are hard to find in the list, but you can locate them in the vSphere VMs and Templates inventory view (if you ran Optimize-VMware-PKS with the -ProcessDRSRules flag), or you can get the names from the BOSH CLI by querying the deployment.

$ bosh -d service-instance_3373eb33-8c8e-4c11-a7a8-4b25fe17722d vms

Using environment 'czpcfbosh.sovsystems.com' as client 'ops_manager'

Task 55540. Done

Deployment 'service-instance_3373eb33-8c8e-4c11-a7a8-4b25fe17722d'

Instance                                     Process State  AZ   IPs        VM CID                                   VM Type      Active
master/00be5bab-649d-4226-a535-b2a8b15582ee  running        AZ1  10.50.8.3  vm-a8a0c53e-3358-46a9-b0ff-2996e8c94c26  medium.disk  true
master/1ac3d2df-6a94-48d9-89e7-867c1f18bc1b  running        AZ1  10.50.8.2  vm-a6e54e16-e216-4ae3-8a99-db9100cf40c8  medium.disk  true
master/c866e776-aba3-49c5-afe0-8bf7a128829e  running        AZ1  10.50.8.4  vm-33eea584-ff26-45ed-bce3-0630fe74f88a  medium.disk  true
worker/113d6856-6b4e-43ef-92ad-1fb5b610d28d  running        AZ1  10.50.8.6  vm-5508aaec-4253-4458-b2de-26675a1f049f  medium.disk  true
worker/c0d00231-4afb-4a4a-9a38-668281d9411e  running        AZ1  10.50.8.5  vm-e4dfc491-d69f-4404-8ab9-81d2f1f4bd0d  medium.disk  true
worker/ff67a937-8fea-4c13-8917-3d92533eaade  running        AZ1  10.50.8.7  vm-dcc29000-16c2-4f5a-b8f4-a5420f425400  medium.disk  true

6 vms

Succeeded

Whichever method you choose, just make sure the rules get created.

Now that you have anti-affinity rules for both the masters and the workers, you have high availability assured in several ways. If you want to test it, fail a worker node (or its associated master node), or delete a Rancher pod, and then watch Kubernetes create a new one. Throughout all of this, Rancher should keep working.
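A simple way to run that last test is to delete one of the Rancher pods and watch the ReplicaSet recreate it while the UI stays reachable (the pod name here is just an example taken from the earlier output; yours will differ):

$ kubectl -n cattle-system delete po rancher-6b774d9468-2svww
$ kubectl -n cattle-system get po -l app=rancher -w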

You have now seen how to build a highly available, fully functional Rancher Server from scratch, and you have taken steps to ensure it is spread safely across the underlying infrastructure. There is one important consideration to keep in mind when running Rancher on Enterprise PKS, and it has to do with Kubernetes namespaces. When Rancher is installed on Kubernetes, it integrates deeply with the platform by creating CRDs and other objects. Whenever a user creates a new project or imports a new cluster, Kubernetes creates a new namespace in the background. In other Kubernetes environments this is perfectly fine, but in Enterprise PKS a new namespace means a new Tier-1 router, a new logical network segment, a new pod IP block, and so on. If you import a large number of clusters and create many projects without planning your PKS deployment for it, those namespaces can quickly exhaust NSX-T objects such as pod IP blocks. If you are considering this architecture for production, keep this in mind; at present you cannot tell NCP to ignore these namespace creation requests so that it does not generate the corresponding objects in NSX-T.
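You can get a feel for this growth by watching the namespaces that appear in the local cluster as you import clusters and create projects. Rancher typically generates namespaces with c- and p- style prefixes for clusters and projects, although that naming convention is an assumption here and your IDs will differ:

$ kubectl get ns
$ kubectl get ns | grep -E '^(c-|p-)'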

Having read the above, have you mastered how Rancher + VMware PKS enables edge cluster management for hundreds of sites around the world? If you want to learn more skills or read more on related topics, feel free to follow the industry information channel. Thank you for reading!
