Today I will talk about how to migrate Rancher Server. Many people may not know much about it, so I have summarized the following content to help you understand it better. I hope you get something out of this article.
Rancher provides two installation methods: single-node installation and high-availability installation. A single-node installation lets users deploy quickly and is suitable for short-term development and testing. A high-availability deployment is clearly more appropriate for long-term use of Rancher.
In practice, you may need to migrate Rancher Server to other nodes or to a local cluster for management. Although you could simply re-import the clusters, the problem is that subsequent cluster management, upgrades, and maintenance would no longer be possible, and some of the relationships between namespaces and projects would be lost. This article therefore focuses on how to migrate Rancher Server to other nodes or local clusters.
This article explains how to migrate Rancher Server in three scenarios:
Rancher single node installation migrates to another host
Rancher single-node installation migrates to highly available installation
Rancher high-availability installation migrates to another Local cluster
Important notes:
The Rancher official documentation does not state that migration is supported in these scenarios; this article simply uses some existing features of Rancher and RKE to implement the migration.
If you encounter problems during this process, you should be familiar with Rancher architecture and troubleshooting.
Migration is very risky. Be sure to make a backup immediately before migrating, so that an accident does not leave you unable to recover.
You should be familiar with the architectural differences between single-node installations and highly available installations
This article is based on tests with Rancher 2.4.x; other versions may behave slightly differently.
This document focuses on the migration of Rancher Server, which will not affect the use of business clusters.
Prepare direct-connect kubeconfig files for the clusters
By default, the kubeconfig copied from the Rancher UI connects to the Kubernetes cluster through the cluster agent proxy. Changing the Rancher Server will cause the cluster agent to lose its connection to Rancher Server, and kubectl will then no longer be able to operate the Kubernetes cluster with the kubeconfig copied from the Rancher UI. However, you can use the kubectl context <cluster_name>-fqdn to connect to the Kubernetes cluster directly, so please prepare direct-connect kubeconfig files for all clusters before performing the migration.
For Rancher v2.2.2 and later, you can download the kubeconfig file directly from the UI.
For versions prior to Rancher v2.2.2, please refer to: restore kubectl configuration files
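As a quick illustration, here is a minimal sketch assuming the business cluster is named demo and its downloaded kubeconfig is saved as demo.yaml (your cluster and file names will differ); the "-fqdn" context is the one that bypasses the cluster agent:
# list the contexts contained in the downloaded kubeconfig
kubectl --kubeconfig demo.yaml config get-contexts
# use the direct-connect context to reach the cluster without going through Rancher
kubectl --kubeconfig demo.yaml --context demo-fqdn get nodes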
Scenario 1: Rancher single-node installation migrates to another host
To migrate a Rancher single-node installation to another host, you only need to package the /var/lib/rancher directory of the old Rancher Server container, extract it to the corresponding directory on the new Rancher Server host, start the new Rancher Server container, and then update the agent configuration.
1. Rancher single node installation
Tip: the following steps create a Rancher single-node environment to demonstrate the migration. If you are migrating a production environment, you can skip this step.
Execute the following docker command to run the single-node Rancher Server service
docker run -itd -p 80:80 -p 443:443 --restart=unless-stopped rancher/rancher:v2.4.3
After the container is initialized, access the Rancher Server UI through the node IP, set the password and log in.
2. Create a custom cluster
Tip: the following steps create a business cluster for demonstration, to verify that no data is lost after the Rancher migration. You can skip this step if you are migrating a production environment.
After logging in to Rancher UI, add a custom cluster
Enable the Authorized Cluster Endpoint option; the FQDN and certificate can be left blank.
Note:
This step is crucial. After Rancher is migrated, any change of address, token, or certificate will prevent the agent from connecting to Rancher Server; after the migration you will need to edit the configuration and update some agent-related parameters through kubectl. The kubeconfig file from the default UI connects to Kubernetes through the agent proxy, and if the agent cannot connect to Rancher Server, the Kubernetes cluster cannot be accessed through that kubeconfig file. Enabling the Authorized Cluster Endpoint generates additional contexts in the kubeconfig that connect to Kubernetes directly rather than through the agent proxy. If a business cluster does not have this feature enabled, you can enable it later by editing the cluster.
Click next, select the desired role according to the pre-assigned node role, and then copy the command to the host terminal for execution.
After the cluster deployment is complete, go to the cluster's home page and click the Kubeconfig File button. Copy the kubeconfig file from the pop-up page and save it as a backup.
3. Deploy test applications
Deploy a nginx workload. Then deploy a test application from the app store.
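If you prefer the command line, here is a minimal equivalent sketch (an assumption for illustration: the downloaded kubeconfig is in use and its direct-connect context is named demo-fqdn); the same can of course be done from the Rancher UI and app catalog:
# create a simple nginx deployment and expose it inside the cluster
kubectl --context demo-fqdn create deployment nginx --image=nginx
kubectl --context demo-fqdn expose deployment nginx --port=80
# confirm the pod is running
kubectl --context demo-fqdn get pods -l app=nginx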
4. Back up the single-node Rancher Server data
docker create --volumes-from <RANCHER_CONTAINER_NAME> --name rancher-data rancher/rancher:<RANCHER_CONTAINER_TAG>
docker run --volumes-from rancher-data -v $PWD:/backup:z busybox tar pzcvf /backup/rancher-data-backup-<RANCHER_VERSION>-<DATE>.tar.gz -C /var/lib rancher
For more information, please refer to the Rancher Chinese official website single-node backup guide.
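Before copying the archive to the new node, an optional sanity check (the placeholder names below are the ones generated by the backup command above):
# list the first entries of the backup archive to confirm it is readable
tar -tzf rancher-data-backup-<RANCHER_VERSION>-<DATE>.tar.gz | head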
5. Copy the generated rancher-data-backup-<RANCHER_VERSION>-<DATE>.tar.gz to the new Rancher Server node
scp rancher-data-backup-<RANCHER_VERSION>-<DATE>.tar.gz root@<new_rancher_server_ip>:/opt/
6. Start Rancher Server on the new node with the backup data
If the original Rancher Server was installed with an existing self-signed certificate or an existing trusted certificate, copy the certificate to the new Rancher Server when migrating and use the same startup command, mounting both the certificate and the backup data, to start Rancher Server.
cd /opt && tar -xvzf rancher-data-backup-<RANCHER_VERSION>-<DATE>.tar.gz
docker run -itd -p 80:80 -p 443:443 -v /opt/rancher:/var/lib/rancher --restart=unless-stopped rancher/rancher:v2.4.3
7. Update Rancher Server IP or domain name
Note:
If your environment uses a self-signed certificate or a Let's Encrypt certificate and a domain name is configured to access Rancher Server, the cluster status will already be Active after migration; in that case, skip to step 9 to verify the cluster.
At this point, you can see the managed Kubernetes cluster by visiting the new Rancher Server, but the cluster status is unavailable. Because the agent is still connected to the old Rancher Server, you need to update the agent information.
Go to Global > Settings and scroll down to find the server-url setting
Click the ellipsis menu on the right and select Edit
Change the server-url to the address of the new Rancher Server
Save
8. Update agent configuration
Log in to Rancher Server through a new domain name or IP
Find the cluster ID in the browser address bar; the field starting with c- is the cluster ID. In this example, the cluster ID is c-4wzvf.
Visit https://<new_rancher_server_url>/v3/clusters/<cluster_id>/clusterregistrationtokens
After the clusterRegistrationTokens page opens, navigate to the data field, find the insecureCommand field, and copy the YAML link for later use.
There may be multiple groups of "baseType": "clusterRegistrationToken". In that case, use the group with the largest (newest) createdTS, which is usually the last group.
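If you prefer the command line to the browser, here is a rough sketch of the same lookup; the bearer-token variable, the use of jq, and sorting by createdTS are assumptions based on the fields described above, not an official procedure:
# $TOKEN holds an API token created in the Rancher UI (user:key form)
curl -sk -u "$TOKEN" "https://<new_rancher_server_url>/v3/clusters/c-4wzvf/clusterregistrationtokens" | jq -r '.data | sort_by(.createdTS) | last | .insecureCommand'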
Using the kubectl tool, execute the following command to update the agent-related configuration through the directly connected kubeconfig configuration file prepared earlier and the YAML file obtained in the previous step.
curl --insecure -sfL <YAML link> | kubectl --context=xxx apply -f -
For instructions on --context=xxx, please refer to the documentation on authenticating directly with downstream clusters.
9. Verification
After a while, the cluster changes to the Active state, and then verifies that the previously deployed application is available.
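One way to confirm the agents have reconnected, as a sketch assuming the direct-connect context demo-fqdn from earlier:
# agent pods in cattle-system should be Running
kubectl --context demo-fqdn -n cattle-system get pods
# the CATTLE_SERVER environment variable should now point at the new Rancher Server URL
kubectl --context demo-fqdn -n cattle-system describe deployment cattle-cluster-agent | grep CATTLE_SERVER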
Scenario 2: Rancher single-node installation migrates to a high-availability installation
The process of migrating from a single node to a high-availability installation of Rancher can be summarized as follows:
On a single node instance of Rancher:
Backup Rancher single-node container
Backup etcd Snapshot
Stop the old Rancher single-node container
On the RKE Local cluster:
Start the Rancher Local cluster using RKE
Restore the etcd snapshot backed up by a single node to RKE HA with rke etcd snapshot-restore
Install Rancher in a RKE Local cluster
Update the configuration of the Local cluster and the business cluster so that agent can connect to the correct Rancher Server
Operations on the single-node Rancher Server
1. Rancher single node installation
Tip: the following steps create a Rancher environment to demonstrate the migration; you can skip this step if you are migrating a production environment.
Execute the following docker command to run the single-node Rancher Server service
docker run -itd -p 80:80 -p 443:443 --restart=unless-stopped rancher/rancher:v2.4.3
After the container is initialized, access the Rancher Server UI through the node IP, set the password and log in.
2. Create a custom cluster
Tip: the following steps create a business cluster for demonstration, to verify that no data is lost after the Rancher migration. You can skip this step if you are migrating a production environment.
After logging in to Rancher UI, add a custom cluster
Enable the Authorized Cluster Endpoint option; the FQDN and certificate can be left blank.
Note: this step is critical. After Rancher is migrated, any change of address, token, or certificate will prevent the agent from connecting to Rancher Server; after the migration you will need to edit the configuration and update some agent-related parameters through kubectl. The kubeconfig file from the default UI connects to Kubernetes through the agent proxy, and if the agent cannot connect to Rancher Server, the Kubernetes cluster cannot be accessed through that kubeconfig file. Enabling the Authorized Cluster Endpoint generates additional contexts in the kubeconfig that connect to Kubernetes directly rather than through the agent proxy. If a business cluster does not have this feature enabled, you can enable it later by editing the cluster.
Click next, select the desired role according to the pre-assigned node role, and then copy the command to the host terminal for execution.
After the cluster deployment is complete, go to the cluster's home page and click the Kubeconfig File button. Copy the kubeconfig file from the pop-up page and save it as a backup.
3. Deploy test applications
Deploy a nginx workload. Then deploy a test application from the app store.
4. Create a single-node etcd snapshot
docker exec -it <RANCHER_CONTAINER_NAME> bash
root@78efdcbe08a6:/# cd /
root@78efdcbe08a6:/# ETCDCTL_API=3 etcdctl snapshot save single-node-etcd-snapshot
root@78efdcbe08a6:/# exit
docker cp <RANCHER_CONTAINER_NAME>:/single-node-etcd-snapshot .
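Optionally, you can sanity-check the snapshot before stopping the old server; this is a sketch assuming etcdctl is available where the file sits (for example, back inside the Rancher container):
# prints the snapshot hash, revision, total keys and size
ETCDCTL_API=3 etcdctl snapshot status single-node-etcd-snapshot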
5. Stop the single-node Rancher Server
docker stop <RANCHER_CONTAINER_NAME>
Operations on the RKE Local cluster
1. Deploy the Local Kubernetes cluster with RKE
Create an RKE configuration file cluster.yml based on the RKE sample configuration:
nodes:
  - address: 99.79.49.94
    internal_address: 172.31.13.209
    user: ubuntu
    role: [controlplane, worker, etcd]
  - address: 35.183.174.120
    internal_address: 172.31.8.28
    user: ubuntu
    role: [controlplane, worker, etcd]
  - address: 15.223.49.238
    internal_address: 172.31.0.199
    user: ubuntu
    role: [controlplane, worker, etcd]
Execute the rke command to create a Local Kubernetes cluster
rke up --config cluster.yml
Check the running status of the Kubernetes cluster
Use kubectl to check the node status and confirm that the node status is Ready
kubectl get nodes
NAME             STATUS   ROLES                      AGE   VERSION
15.223.49.238    Ready    controlplane,etcd,worker   93s   v1.17.6
35.183.174.120   Ready    controlplane,etcd,worker   92s   v1.17.6
99.79.49.94      Ready    controlplane,etcd,worker   93s   v1.17.6
Check that all necessary Pods and containers are healthy before proceeding
kubectl get pods --all-namespaces
NAMESPACE       NAME                                      READY   STATUS      RESTARTS   AGE
ingress-nginx   default-http-backend-67cf578fc4-9vjq4     1/1     Running     0          67s
ingress-nginx   nginx-ingress-controller-8g7kq            1/1     Running     0          67s
ingress-nginx   nginx-ingress-controller-8jvsd            1/1     Running     0          67s
ingress-nginx   nginx-ingress-controller-lrt57            1/1     Running     0          67s
kube-system     canal-68j4r                               2/2     Running     0          100s
kube-system     canal-ff4qg                               2/2     Running     0          100s
kube-system     canal-wl9hd                               2/2     Running     0          100s
kube-system     coredns-7c5566588d-bhbmm                  1/1     Running     0          64s
kube-system     coredns-7c5566588d-rhjpv                  1/1     Running     0          87s
kube-system     coredns-autoscaler-65bfc8d47d-tq4gj       1/1     Running     0          86s
kube-system     metrics-server-6b55c64f86-vg7qs           1/1     Running     0          79s
kube-system     rke-coredns-addon-deploy-job-fr2bx        0/1     Completed   0          92s
kube-system     rke-ingress-controller-deploy-job-vksrk   0/1     Completed   0          82s
kube-system     rke-metrics-addon-deploy-job-d9hlv        0/1     Completed   0          72s
kube-system     rke-network-plugin-deploy-job-kf8bn       0/1     Completed   0          103s
2. Transfer the generated single-node etcd snapshot from the Rancher single-node instance to the RKE Local cluster node
Create a / opt/rke/etcd-snapshots directory on the RKE HA Local node and copy the single-node-etcd-snapshot file to that directory:
mkdir -p /opt/rke/etcd-snapshots
scp root@<old_rancher_server_ip>:/root/single-node-etcd-snapshot /opt/rke/etcd-snapshots
3. Restore a single-node etcd snapshot to a new HA node using RKE
rke etcd snapshot-restore --name single-node-etcd-snapshot --config cluster.yml
4. Rancher HA installation
Refer to the installation documentation to install Rancher HA.
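For reference, an abbreviated sketch of the usual Helm-based installation; the stable chart repository, the hostname rancher.my.org, and the chart version matching the Rancher version being migrated (v2.4.3 here) are assumptions, and cert-manager/certificate options are omitted, so see the linked documentation for the full procedure:
helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
helm repo update
kubectl create namespace cattle-system
helm install rancher rancher-stable/rancher --namespace cattle-system --set hostname=rancher.my.org --version 2.4.3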
5. Configure NGINX load balancing for Rancher HA
Refer to the NGINX configuration example to configure load balancing for Rancher HA.
Nginx configuration:
worker_processes 4;
worker_rlimit_nofile 40000;

events {
    worker_connections 8192;
}

stream {
    upstream rancher_servers_http {
        least_conn;
        server 172.31.11.95:80 max_fails=3 fail_timeout=5s;
        server 172.31.0.201:80 max_fails=3 fail_timeout=5s;
        server 172.31.15.236:80 max_fails=3 fail_timeout=5s;
    }
    server {
        listen 80;
        proxy_pass rancher_servers_http;
    }

    upstream rancher_servers_https {
        least_conn;
        server 172.31.11.95:443 max_fails=3 fail_timeout=5s;
        server 172.31.0.201:443 max_fails=3 fail_timeout=5s;
        server 172.31.15.236:443 max_fails=3 fail_timeout=5s;
    }
    server {
        listen 443;
        proxy_pass rancher_servers_https;
    }
}
After Nginx starts, we can access the Rancher UI through the configured domain name/IP. The business cluster demo shows as Unavailable, and although the local cluster is Active, its cluster-agent and node-agent fail to start.
Both problems occur because the agents are still pointing at the old Rancher Server.
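To see why the agents fail, here is a sketch using the kubeconfig generated by rke up (typically kube_config_cluster.yml next to cluster.yml):
kubectl --kubeconfig kube_config_cluster.yml -n cattle-system get pods
# the log shows the agent still trying to reach the old Rancher Server URL
kubectl --kubeconfig kube_config_cluster.yml -n cattle-system logs deploy/cattle-cluster-agent --tail=20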
6. Update Rancher Server IP or domain name
Go to Global > Settings and scroll down to find the server-url setting
Click the ellipsis menu on the right and select Edit
Change the server-url to the address of the new Rancher Server
Save
7. Update the agent configuration of local clusters and business clusters
Log in to Rancher Server through a new domain name or IP
Find the cluster ID in the browser address bar; the field starting with c- is the cluster ID. In this example, the cluster ID is c-hftcn.
Visit https://<new_rancher_server_url>/v3/clusters/<cluster_id>/clusterregistrationtokens
After the clusterRegistrationTokens page opens, navigate to the data field, find the insecureCommand field, and copy the YAML link for later use.
There may be multiple groups of "baseType": "clusterRegistrationToken". In that case, use the group with the largest (newest) createdTS, which is usually the last group.
Using the kubectl tool, execute the following command to update the agent-related configuration through the directly connected kubeconfig configuration file prepared earlier and the YAML file obtained in the previous step.
Note:
The kubeconfig used to update the local cluster is different from the one used for the business cluster; please select the appropriate kubeconfig for each cluster.
For instructions on --context=xxx, please refer to the documentation on authenticating directly with downstream clusters.
curl --insecure -sfL <YAML link> | kubectl --context=xxx apply -f -
After the business cluster agent is updated successfully, update the local cluster agent configuration using the same method.
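As a concrete illustration (the context and file names below are examples; each cluster has its own YAML link from the previous step):
# business cluster: use its direct-connect context from the downloaded kubeconfig
curl --insecure -sfL <business cluster YAML link> | kubectl --context demo-fqdn apply -f -
# local cluster: use the kubeconfig generated by rke up
curl --insecure -sfL <local cluster YAML link> | kubectl --kubeconfig kube_config_cluster.yml apply -f -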
8. Verification
After a while, both the local and demo clusters become Active:
Cluster-agent and node-agent of Local cluster started successfully
Cluster-agent and node-agent of Demo cluster started successfully
Then verify that the previously deployed application is available.
Scenario 3: Rancher high-availability installation migrates to another Local cluster
A Rancher high-availability installation can be migrated to another Local cluster with the help of RKE's update feature. First use RKE to expand the original 3-node local cluster to 6 nodes; the etcd data is then automatically synchronized across all 6 nodes of the local cluster. Then use RKE to remove the original 3 nodes and update again. In this way, Rancher Server is smoothly migrated to the new Rancher local cluster.
1. Deploy the Local Kubernetes cluster with RKE
Create an RKE configuration file cluster.yml based on the RKE sample configuration:
nodes:
  - address: 3.96.52.186
    internal_address: 172.31.11.95
    user: ubuntu
    role: [controlplane, worker, etcd]
  - address: 35.183.186.213
    internal_address: 172.31.0.201
    user: ubuntu
    role: [controlplane, worker, etcd]
  - address: 35.183.130.12
    internal_address: 172.31.15.236
    user: ubuntu
    role: [controlplane, worker, etcd]
Execute the rke command to create a Local Kubernetes cluster
rke up --config cluster.yml
Check the running status of the Kubernetes cluster
Use kubectl to check the node status and confirm that the node status is Ready
kubectl get nodes
NAME             STATUS   ROLES                      AGE   VERSION
3.96.52.186      Ready    controlplane,etcd,worker   71s   v1.17.6
35.183.130.12    Ready    controlplane,etcd,worker   72s   v1.17.6
35.183.186.213   Ready    controlplane,etcd,worker   72s   v1.17.6
Check that all necessary Pods and containers are healthy before proceeding
kubectl get pods --all-namespaces
NAMESPACE       NAME                                      READY   STATUS      RESTARTS   AGE
ingress-nginx   default-http-backend-67cf578fc4-gnt5c     1/1     Running     0          72s
ingress-nginx   nginx-ingress-controller-47p4b            1/1     Running     0          72s
ingress-nginx   nginx-ingress-controller-85284            1/1     Running     0          72s
ingress-nginx   nginx-ingress-controller-9qbdz            1/1     Running     0          72s
kube-system     canal-9bx8k                               2/2     Running     0          97s
kube-system     canal-l2fjb                               2/2     Running     0          97s
kube-system     canal-v7fzs                               2/2     Running     0          97s
kube-system     coredns-7c5566588d-7kv7b                  1/1     Running     0          67s
kube-system     coredns-7c5566588d-t4jfm                  1/1     Running     0          90s
kube-system     coredns-autoscaler-65bfc8d47d-vnrzc       1/1     Running     0          90s
kube-system     metrics-server-6b55c64f86-r4p8w           1/1     Running     0          79s
kube-system     rke-coredns-addon-deploy-job-lx667        0/1     Completed   0          94s
kube-system     rke-ingress-controller-deploy-job-r2nw5   0/1     Completed   0          84s
kube-system     rke-metrics-addon-deploy-job-4bq76        0/1     Completed   0          74s
kube-system     rke-network-plugin-deploy-job-gjpm8       0/1     Completed   0          99s
2. Rancher HA installation
Refer to the installation documentation to install Rancher HA.
3. Configure NGINX load balancing for Rancher HA
Refer to the NGINX configuration example to configure load balancing for Rancher HA.
Nginx configuration:
worker_processes 4;
worker_rlimit_nofile 40000;

events {
    worker_connections 8192;
}

stream {
    upstream rancher_servers_http {
        least_conn;
        server 172.31.11.95:80 max_fails=3 fail_timeout=5s;
        server 172.31.0.201:80 max_fails=3 fail_timeout=5s;
        server 172.31.15.236:80 max_fails=3 fail_timeout=5s;
    }
    server {
        listen 80;
        proxy_pass rancher_servers_http;
    }

    upstream rancher_servers_https {
        least_conn;
        server 172.31.11.95:443 max_fails=3 fail_timeout=5s;
        server 172.31.0.201:443 max_fails=3 fail_timeout=5s;
        server 172.31.15.236:443 max_fails=3 fail_timeout=5s;
    }
    server {
        listen 443;
        proxy_pass rancher_servers_https;
    }
}
After Nginx starts, we can access the Rancher UI through the configured domain name/IP. Navigate to local > Nodes to view the status of the three nodes in the local cluster:
4. Deploy test clusters and applications
Add a test cluster; for Node Role, select etcd, Control Plane, and Worker at the same time
After waiting for the test cluster to be added successfully, deploy a nginx workload. Then deploy a test application from the app store.
5. Add the nodes of the new cluster to the Local cluster
Modify the RKE configuration file you just used to create the local cluster, adding the configuration for the new nodes.
cluster.yml:
nodes:
  - address: 3.96.52.186
    internal_address: 172.31.11.95
    user: ubuntu
    role: [controlplane, worker, etcd]
  - address: 35.183.186.213
    internal_address: 172.31.0.201
    user: ubuntu
    role: [controlplane, worker, etcd]
  - address: 35.183.130.12
    internal_address: 172.31.15.236
    user: ubuntu
    role: [controlplane, worker, etcd]
# the following is the configuration of the new nodes
  - address: 52.60.116.56
    internal_address: 172.31.14.146
    user: ubuntu
    role: [controlplane, worker, etcd]
  - address: 99.79.9.244
    internal_address: 172.31.15.215
    user: ubuntu
    role: [controlplane, worker, etcd]
  - address: 15.223.77.84
    internal_address: 172.31.8.64
    user: ubuntu
    role: [controlplane, worker, etcd]
Update the cluster to expand the local cluster to 6 nodes
rke up --config cluster.yml
Check the running status of the Kubernetes cluster
Use kubectl to check connectivity and verify that both the original nodes (3.96.52.186, 35.183.186.213, 35.183.130.12) and the new nodes (52.60.116.56, 99.79.9.244, 15.223.77.84) are in the Ready state
kubectl get nodes
NAME             STATUS   ROLES                      AGE    VERSION
15.223.77.84     Ready    controlplane,etcd,worker   33s    v1.17.6
3.96.52.186      Ready    controlplane,etcd,worker   88m    v1.17.6
35.183.130.12    Ready    controlplane,etcd,worker   89m    v1.17.6
35.183.186.213   Ready    controlplane,etcd,worker   89m    v1.17.6
52.60.116.56     Ready    controlplane,etcd,worker   101s   v1.17.6
99.79.9.244      Ready    controlplane,etcd,worker   67s    v1.17.6
Check that all necessary Pods and containers are healthy before proceeding
kubectl get pods --all-namespaces
NAMESPACE       NAME                                      READY   STATUS      RESTARTS   AGE
cattle-system   cattle-cluster-agent-68898b5c4d-lkz5m     1/1     Running     0          46m
cattle-system   cattle-node-agent-9xrbs                   1/1     Running     0          109s
cattle-system   cattle-node-agent-lvdlf                   1/1     Running     0          46m
cattle-system   cattle-node-agent-mnk76                   1/1     Running     0          46m
cattle-system   cattle-node-agent-qfwcm                   1/1     Running     0          75s
cattle-system   cattle-node-agent-tk66h                   1/1     Running     0          2m23s
cattle-system   cattle-node-agent-v2vpf                   1/1     Running     0          46m
cattle-system   rancher-749fd64664-8cg4w                  1/1     Running     1          58m
cattle-system   rancher-749fd64664-fms8x                  1/1     Running     1          58m
cattle-system   rancher-749fd64664-rb5pt                  1/1     Running     1          58m
ingress-nginx   default-http-backend-67cf578fc4-gnt5c     1/1     Running     0          89m
ingress-nginx   nginx-ingress-controller-44c5z            1/1     Running     0          61s
ingress-nginx   nginx-ingress-controller-47p4b            1/1     Running     0          89m
ingress-nginx   nginx-ingress-controller-85284            1/1     Running     0          89m
ingress-nginx   nginx-ingress-controller-9qbdz            1/1     Running     0          89m
ingress-nginx   nginx-ingress-controller-kp7p6            1/1     Running     0          61s
ingress-nginx   nginx-ingress-controller-tfjrw            1/1     Running     0          61s
kube-system     canal-9bx8k                               2/2     Running     0          89m
kube-system     canal-fqrqv                               2/2     Running     0          109s
kube-system     canal-kkj7q                               2/2     Running     0          75s
kube-system     canal-l2fjb                               2/2     Running     0          89m
kube-system     canal-v7fzs                               2/2     Running     0          89m
kube-system     canal-w7t58                               2/2     Running     0          2m23s
kube-system     coredns-7c5566588d-7kv7b                  1/1     Running     0          89m
kube-system     coredns-7c5566588d-t4jfm                  1/1     Running     0          89m
kube-system     coredns-autoscaler-65bfc8d47d-vnrzc       1/1     Running     0          89m
kube-system     metrics-server-6b55c64f86-r4p8w           1/1     Running     0          89m
kube-system     rke-coredns-addon-deploy-job-lx667        0/1     Completed   0          89m
kube-system     rke-ingress-controller-deploy-job-r2nw5   0/1     Completed   0          89m
kube-system     rke-metrics-addon-deploy-job-4bq76        0/1     Completed   0          89m
kube-system     rke-network-plugin-deploy-job-gjpm8       0/1     Completed   0          89m
From the above information, you can confirm that the local cluster has been expanded to 6 nodes and that all workloads are running normally.
6. Update the cluster again to remove the original Local cluster nodes
Modify the rke configuration file used by the local cluster again to comment out the original local cluster node configuration.
cluster.yml:
nodes:
#  - address: 3.96.52.186
#    internal_address: 172.31.11.95
#    user: ubuntu
#    role: [controlplane, worker, etcd]
#  - address: 35.183.186.213
#    internal_address: 172.31.0.201
#    user: ubuntu
#    role: [controlplane, worker, etcd]
#  - address: 35.183.130.12
#    internal_address: 172.31.15.236
#    user: ubuntu
#    role: [controlplane, worker, etcd]
# the following are the new nodes
  - address: 52.60.116.56
    internal_address: 172.31.14.146
    user: ubuntu
    role: [controlplane, worker, etcd]
  - address: 99.79.9.244
    internal_address: 172.31.15.215
    user: ubuntu
    role: [controlplane, worker, etcd]
  - address: 15.223.77.84
    internal_address: 172.31.8.64
    user: ubuntu
    role: [controlplane, worker, etcd]
Update the cluster to complete the migration.
rke up --config cluster.yml
Check the running status of the Kubernetes cluster
Use kubectl to check that the node status is Ready; you can see that the local cluster nodes have been replaced with the following three:
kubectl get nodes
NAME           STATUS   ROLES                      AGE   VERSION
15.223.77.84   Ready    controlplane,etcd,worker   11m   v1.17.6
52.60.116.56   Ready    controlplane,etcd,worker   13m   v1.17.6
99.79.9.244    Ready    controlplane,etcd,worker   12m   v1.17.6
Check that all necessary Pods and containers are healthy before proceeding
kubectl get pods --all-namespaces
NAMESPACE       NAME                                    READY   STATUS      RESTARTS   AGE
cattle-system   cattle-cluster-agent-68898b5c4d-tm6db   1/1     Running     0          3m14s
cattle-system   cattle-node-agent-9xrbs                 1/1     Running     0          14m
cattle-system   cattle-node-agent-qfwcm                 1/1     Running     0          14m
cattle-system   cattle-node-agent-tk66h                 1/1     Running     0          15m
cattle-system   rancher-749fd64664-47jw2                1/1     Running     0          3m14s
ingress-nginx   default-http-backend-67cf578fc4-4668g   1/1     Running     0          3m14s
ingress-nginx   nginx-ingress-controller-44c5z          1/1     Running     0          13m
ingress-nginx   nginx-ingress-controller-kp7p6          1/1     Running     0          13m
ingress-nginx   nginx-ingress-controller-tfjrw          1/1     Running     0          13m
kube-system     canal-fqrqv                             2/2     Running     0          14m
kube-system     canal-kkj7q                             2/2     Running     0          14m
kube-system     canal-w7t58                             2/2     Running     0          15m
kube-system     coredns-7c5566588d-nmtrn                1/1     Running     0          3m13s
kube-system     coredns-7c5566588d-q6hlb                1/1     Running     0          3m13s
kube-system     coredns-autoscaler-65bfc8d47d-rx7fm     1/1     Running     0          3m14s
kube-system     metrics-server-6b55c64f86-mcx9z         1/1     Running     0          3m14s
From the above information, you can confirm that the local cluster has been migrated successfully and that all workloads are running normally.
Modify the nginx load balancer configuration to update it with the addresses of the new nodes:
worker_processes 4;
worker_rlimit_nofile 40000;

events {
    worker_connections 8192;
}

stream {
    upstream rancher_servers_http {
        least_conn;
        server 172.31.14.146:80 max_fails=3 fail_timeout=5s;
        server 172.31.8.64:80 max_fails=3 fail_timeout=5s;
        server 172.31.15.215:80 max_fails=3 fail_timeout=5s;
    }
    server {
        listen 80;
        proxy_pass rancher_servers_http;
    }

    upstream rancher_servers_https {
        least_conn;
        server 172.31.14.146:443 max_fails=3 fail_timeout=5s;
        server 172.31.8.64:443 max_fails=3 fail_timeout=5s;
        server 172.31.15.215:443 max_fails=3 fail_timeout=5s;
    }
    server {
        listen 443;
        proxy_pass rancher_servers_https;
    }
}
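After updating the configuration, validate it and reload nginx so the new upstream addresses take effect (assuming nginx runs directly on the load balancer host; adjust accordingly if it runs in a container):
nginx -t && nginx -s reload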
7. Verification
Confirm that the status of local cluster and business cluster is Active
Confirm that the Local cluster node has been replaced
The IPs of the original cluster nodes were 3.96.52.186, 35.183.186.213, and 35.183.130.12.
Then verify that the previously deployed application is available.
Open source has always been part of Rancher's product philosophy, and we have always valued communicating with users in the open source community. To this end, we have created 20 WeChat communication groups. This article grew out of many conversations with community users, where we found that many Rancher users had similar problems, so I summarized these three scenarios and tested them repeatedly to complete this tutorial. We also welcome Rancher users to share their experiences in various forms and build a happy open source community together.