How to deploy an Ubuntu 20.04 + k8s 1.21.0 development environment


This article shows how to deploy an Ubuntu 20.04 + k8s 1.21.0 development environment. The content is straightforward and clear, and should help resolve your doubts; let the editor lead you through studying "how to deploy an Ubuntu 20.04 + k8s 1.21.0 development environment".

Kubeflow deployment (using kfctl_k8s_istio)

Some guidelines for installing Kubeflow, deployed to an existing Kubernetes cluster using the kfctl_k8s_istio configuration. This configuration creates a deployment of Kubeflow's core components but does not include external dependencies; it can be adjusted to the needs of your environment. Here, Kubeflow 1.2.0 is deployed to Ubuntu 20.04 and k8s 1.21.0; details may differ on other platforms.

The Kubeflow deployment requires a StorageClass with a dynamic volume provisioner. Check the provisioner field of the default StorageClass. If no provisioner is configured, set up volume provisioning in the Kubernetes cluster as described below (see "Provisioning of Persistent Volumes in Kubernetes").
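For example, you can confirm this with kubectl (no assumptions beyond a working cluster):

kubectl get storageclass
# The default StorageClass is marked "(default)"; the PROVISIONER column
# shows which dynamic volume provisioner it uses.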

When installing with the kfctl_k8s_istio.v1.2.0.yaml configuration, you need to consider the following options:

Disabling the Istio installation: if the Kubernetes cluster already has Istio installed, you can choose not to install Istio by removing the Istio-related application entries (istio-crds and istio-install in older configs; istio-stack, cluster-local-gateway, and istio in the v1.2.0 KfDef shown later in this article) from the configuration file kfctl_k8s_istio.v1.2.0.yaml.
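Illustratively, that means deleting entries from the applications list of the KfDef; the entry below follows the v1.2.0 KfDef shown later in this article:

# Remove (or comment out) application entries like this one if Istio already runs in the cluster:
- kustomizeConfig:
    repoRef:
      name: manifests
      path: istio/istio/base
  name: istio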

Prepare the environment

Download the Kubeflow CLI tool kfctl and set the environment variables manually:

Download the kfctl v1.2.0 release from the Kubeflow releases page.

wget https://github.com/kubeflow/kfctl/releases/download/v1.2.0/kfctl_v1.2.0-0-gbc038f9_linux.tar.gz
tar -vxf kfctl_v1.2.0-0-gbc038f9_linux.tar.gz
sudo cp kfctl /usr/bin/
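A quick sanity check that the binary is installed and on the PATH:

kfctl version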

Create environment variables to simplify the deployment process:

# The following command is optional. It adds the kfctl binary to your path.
# If you don't add kfctl to your path, you must use the full path
# each time you run kfctl.
# Use only alphanumeric characters or - in the directory name.
export PATH=$PATH:"<path to kfctl>"
## actually as follows:
## export PATH=$PATH:"/home/supermap/openthings/kubeflow"

# Set KF_NAME to the name of your Kubeflow deployment. You also use this
# value as directory name when creating your configuration directory.
# For example, your deployment name can be 'my-kubeflow' or 'kf-test'.
export KF_NAME=<name of your Kubeflow deployment>
## actually as follows:
## export KF_NAME="kubeflow"

# Set the path to the base directory where you want to store one or more
# Kubeflow deployments. For example, /opt/.
# Then set the Kubeflow application directory for this deployment.
export BASE_DIR=<path to a base directory>
export KF_DIR=${BASE_DIR}/${KF_NAME}
## actually as follows:
## export BASE_DIR="/home/supermap/openthings/"
## export KF_DIR=${BASE_DIR}/${KF_NAME}

# Set the configuration file to use when deploying Kubeflow.
# The following configuration installs Istio by default. Comment out
# the Istio components in the config file to skip Istio installation.
# See https://github.com/kubeflow/kubeflow/pull/3663
export CONFIG_URI="https://raw.githubusercontent.com/kubeflow/manifests/v1.2-branch/kfdef/kfctl_k8s_istio.v1.2.0.yaml"
## actually as follows:
## export CONFIG_URI=${BASE_DIR}/${KF_NAME}/kfctl_k8s_istio.v1.2.0.yaml

Note:

${KF_NAME} - the name of the Kubeflow deployment. If you want to customize the deployment name, specify it through this parameter, for example my-kubeflow or kf-test. The name must consist of lowercase letters, digits, or '-', must begin and end with a letter, and cannot exceed 25 characters. It must contain only a name, not a directory path. It is also used as the name of the directory created to hold the Kubeflow configuration, i.e. the Kubeflow application directory.

${KF_DIR} - the full path of the Kubeflow application directory.

${CONFIG_URI} - the GitHub address of the configuration, here https://raw.githubusercontent.com/kubeflow/manifests/v1.2-branch/kfdef/kfctl_k8s_istio.v1.2.0.yaml. When you run kfctl apply or kfctl build (see the next step), kfctl creates a local version of the YAML file, which you can customize further.

⚠️ Note:

When you run kfctl build or kfctl apply -V -f xxx, the manifest download may fail. In that case, modify the following section of kfctl_k8s_istio.v1.2.0.yaml so that the manifest points to a local path:

repos:
- name: manifests
  uri: /home/supermap/openthings/kubeflow/v1.2.0.tar.gz
  version: v1.2-branch
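If you take this route, the tarball has to exist locally first. A hedged sketch of fetching it; the GitHub archive URL is an assumption based on the kubeflow/manifests v1.2.0 tag, and the destination path should match the uri above:

# Assumption: GitHub's auto-generated source archive for the v1.2.0 tag.
wget https://github.com/kubeflow/manifests/archive/v1.2.0.tar.gz \
  -O /home/supermap/openthings/kubeflow/v1.2.0.tar.gz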

Set up and deploy Kubeflow

To set up and deploy Kubeflow with the default settings, run kfctl apply as follows:

mkdir -p ${KF_DIR}
cd ${KF_DIR}
kfctl apply -V -f ${CONFIG_URI}

Check resources deployed in namespace kubeflow:

kubectl -n kubeflow get all

Optionally, set the configuration parameters for future deployment:

If you need to customize the installation parameters when deploying Kubeflow, edit the configuration file and then run the deployment commands:

Run the kfctl build command to set the installation parameters:

mkdir -p ${KF_DIR}
cd ${KF_DIR}
kfctl build -V -f ${CONFIG_URI}

Edit the configuration file to customize your Kubeflow deployment, as described above.

Set the environment variable to point to the local configuration file:

export CONFIG_FILE=${KF_DIR}/kfctl_k8s_istio.v1.2.0.yaml

Run the kfctl apply command to carry out the Kubeflow deployment:

kfctl apply -V -f ${CONFIG_FILE}

An error occurred:

2021-04-28 10:24:44 absolute path error in '/home/supermap/openthings/kubeflow/.cache/manifests/namespaces/base': evalsymlink failure on '/home/supermap/openthings/kubeflow/.cache/manifests/namespaces/base': lstat /home/supermap/openthings/kubeflow/.cache/manifests/namespaces: no such file or directory
ERRO[0000] Error evaluating kustomization manifest for namespaces: accumulating resources: accumulating resources from '../../.cache/manifests/namespaces/base': open /home/supermap/openthings/kubeflow/.cache/manifests/namespaces/base: no such file or directory  filename="kustomize/kustomize.go:155"
Error: failed to apply: (kubeflow.error): Code 500 with message: kfApp Apply failed for kustomize: (kubeflow.error): Code 500 with message: error evaluating kustomization manifest for namespaces: accumulating resources: accumulating resources from '../../.cache/manifests/namespaces/base': open /home/supermap/openthings/kubeflow/.cache/manifests/namespaces/base: no such file or directory

Looking at the .cache directory shows that the manifests actually live under ~/openthings/kubeflow/.cache/manifests/manifests-1.2.0 instead of directly under the manifests directory referenced above.

Move all the files under manifests-1.2.0 up one level, into the manifests directory, and run kfctl apply again:

cd manifests-1.2.0
mv * ../

However, running kfctl apply deletes and recreates the .cache directory, which invalidates the file-moving workaround above.

Instead, modify the configuration file directly and fix the paths of all the kustomize resources. The modified configuration file is as follows:

apiVersion: kfdef.apps.kubeflow.org/v1
kind: KfDef
metadata:
  creationTimestamp: null
  namespace: kubeflow
spec:
  applications:
  - kustomizeConfig:
      repoRef:
        name: manifests
        path: manifests-1.2.0/namespaces/base
    name: namespaces
  - kustomizeConfig:
      repoRef:
        name: manifests
        path: manifests-1.2.0/application/v3
    name: application
  - kustomizeConfig:
      repoRef:
        name: manifests
        path: manifests-1.2.0/stacks/kubernetes/application/istio-1-3-1-stack
    name: istio-stack
  - kustomizeConfig:
      repoRef:
        name: manifests
        path: manifests-1.2.0/stacks/kubernetes/application/cluster-local-gateway-1-3-1
    name: cluster-local-gateway
  - kustomizeConfig:
      repoRef:
        name: manifests
        path: manifests-1.2.0/istio/istio/base
    name: istio
  - kustomizeConfig:
      repoRef:
        name: manifests
        path: manifests-1.2.0/stacks/kubernetes/application/cert-manager-crds
    name: cert-manager-crds
  - kustomizeConfig:
      repoRef:
        name: manifests
        path: manifests-1.2.0/stacks/kubernetes/application/cert-manager-kube-system-resources
    name: cert-manager-kube-system-resources
  - kustomizeConfig:
      repoRef:
        name: manifests
        path: manifests-1.2.0/stacks/kubernetes/application/add-anonymous-user-filter
    name: add-anonymous-user-filter
  - kustomizeConfig:
      repoRef:
        name: manifests
        path: manifests-1.2.0/metacontroller/base
    name: metacontroller
  - kustomizeConfig:
      repoRef:
        name: manifests
        path: manifests-1.2.0/admission-webhook/bootstrap/overlays/application
    name: bootstrap
  - kustomizeConfig:
      repoRef:
        name: manifests
        path: manifests-1.2.0/stacks/kubernetes/application/spark-operator
    name: spark-operator
  - kustomizeConfig:
      repoRef:
        name: manifests
        path: manifests-1.2.0/stacks/kubernetes
    name: kubeflow-apps
  - kustomizeConfig:
      repoRef:
        name: manifests
        path: manifests-1.2.0/knative/installs/generic
    name: knative
  - kustomizeConfig:
      repoRef:
        name: manifests
        path: manifests-1.2.0/kfserving/installs/generic
    name: kfserving
  - kustomizeConfig:
      repoRef:
        name: manifests
        path: manifests-1.2.0/stacks/kubernetes/application/spartakus
    name: spartakus
  repos:
  - name: manifests
    uri: /home/supermap/openthings/kubeflow/v1.2.0.tar.gz
    version: v1.2-branch
status: {}

Delete the kustomize directory and rerun kfctl build and kfctl apply.
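Concretely, the retry looks like this (a sketch, assuming the ${KF_DIR} and ${CONFIG_FILE} variables set earlier):

cd ${KF_DIR}
rm -rf kustomize
kfctl build -V -f ${CONFIG_FILE}
kfctl apply -V -f ${CONFIG_FILE}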

KubeFlow pulls a large number of images and takes a long time to start fully, so wait patiently.

After a while, some of the pods have started and the main interface can be accessed.

Checking the status shows problems with some images and services, including image downloads and storage volume settings; these will be resolved later.

(base) supermap@xriver02:~$ kubectl get pod -n kubeflow
NAME                                                     READY   STATUS             RESTARTS   AGE
admission-webhook-bootstrap-stateful-set-0               0/1     ImagePullBackOff   0          36h
admission-webhook-deployment-5cd7dc96f5-l9rxl            1/1     Running            0          36h
application-controller-stateful-set-0                    0/1     ImagePullBackOff   0          36h
argo-ui-657cf69ff5-kn966                                 1/1     Running            0          36h
cache-deployer-deployment-5f4979f45-q6psq                1/2     ImagePullBackOff   0          36h
cache-server-7859fd67f5-kx8zm                            0/2     Init:0/1           0          36h
centraldashboard-86744cbb7b-44rbc                        1/1     Running            0          36h
jupyter-web-app-deployment-8486d5ffff-9czzl              1/1     Running            0          36h
katib-controller-7fcc95676b-tsbzx                        1/1     Running            1          36h
katib-db-manager-67867f5498-jzrgh                        0/1     Running            442        36h
katib-mysql-6b5d848bf5-gs95h                             0/1     Pending            0          36h
katib-ui-65dc4cf6f5-pqj5p                                1/1     Running            0          36h
kfserving-controller-manager-0                           1/2     ImagePullBackOff   0          36h
kubeflow-pipelines-profile-controller-797fb44db9-vznlv   1/1     Running            0          36h
metacontroller-0                                         1/1     Running            0          36h
metadata-db-c65f4bc75-m2ggv                              0/1     Pending            0          36h
metadata-envoy-deployment-67bd5954c-jl7pn                1/1     Running            0          36h
metadata-grpc-deployment-577c67c96f-29dwx                0/1     CrashLoopBackOff   433        36h
metadata-writer-756dbdd478-tlrpw                         1/2     Running            325        36h
minio-54d995c97b-jrmqq                                   0/1     Pending            0          36h
ml-pipeline-8d6749d9c-drv2h                              1/2     CrashLoopBackOff   662        36h
ml-pipeline-persistenceagent-d984c9585-mhstn             2/2     Running            0          36h
ml-pipeline-scheduledworkflow-5ccf4c9fcc-wqg4d           2/2     Running            0          36h
ml-pipeline-ui-8ccbf585c-77krb                           2/2     Running            0          36h
ml-pipeline-viewer-crd-56c68f6c85-bssgc                  1/2     ImagePullBackOff   0          36h
ml-pipeline-visualizationserver-7446b96877-ffs7b         2/2     Running            0          36h
mpi-operator-d5bfb8489-75m6b                             1/1     Running            0          36h
mxnet-operator-7576d697d6-jwks8                          1/1     Running            0          36h
mysql-74f8f99bc8-ndzqg                                   0/1     Pending            0          36h
notebook-controller-deployment-dd4c74b47-k9fng           0/1     ImagePullBackOff   0          36h
profiles-deployment-65f54cb5c4-9xtws                     0/1     ImagePullBackOff   0          36h
pytorch-operator-847c8d55d8-x6l4t                        0/1     ImagePullBackOff   0          36h
seldon-controller-manager-6bf8b45656-d7rvf               1/1     Running            0          36h
spark-operatorsparkoperator-fdfbfd99-cst9l               0/1     ImagePullBackOff   0          36h
spartakus-volunteer-558f8bfd47-tcvpn                     1/1     Running            0          36h
tf-job-operator-58477797f8-wr79t                         1/1     Running            0          36h
workflow-controller-64fd7cffc5-m6gkc                     1/1     Running            0          36h

Access the Kubeflow user interface (UI)

After the Kubeflow deployment is complete, the Kubeflow Dashboard is accessed through the istio-ingressgateway service. If no LoadBalancer is available in the environment, NodePort or port forwarding can be used to reach the Kubeflow Dashboard (see the port-forwarding sketch after the list below); refer to the Ingress Gateway guide or:

Create a LoadBalancer service for a private Kubernetes cluster

Kubernetes dashboard provides HTTPS access through Ingress

Kubernetes load Balancer-Nginx ingress installation
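For a quick look without a LoadBalancer, port forwarding is the simplest route. A minimal sketch, assuming the istio-ingressgateway service sits in the istio-system namespace and listens on port 80:

kubectl port-forward -n istio-system svc/istio-ingressgateway 8080:80
# Then open http://localhost:8080 in a browser.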

Delete Kubeflow

Run the following commands to delete the deployment and reclaim resources:

cd ${KF_DIR}
# If you want to delete all the resources, run:
kfctl delete -f ${CONFIG_FILE}

Understand the deployment process

The kfctl deployment process includes the following commands:

kfctl build - (optional) creates the configuration files. Run kfctl build before kfctl apply only if you need to modify the configuration parameters yourself.

kfctl apply - creates or updates the resources.

kfctl delete - deletes the resources.

Layout of the application

Your Kubeflow application directory ${KF_DIR} contains the following files and directories:

${CONFIG_FILE} is a YAML file that defines the parameters of the Kubeflow deployment:

This file is a copy of the GitHub-based configuration YAML file, located at https://raw.githubusercontent.com/kubeflow/manifests/v1.2-branch/kfdef/kfctl_k8s_istio.v1.2.0.yaml. If it cannot be fetched successfully during deployment, download it from GitHub and save it locally first, then modify the URI to point to the local file.

When you run kfctl apply or kfctl build, kfctl creates a local copy of the configuration file ${CONFIG_FILE}, which you can then edit and customize.

kustomize is a directory that contains the kustomize packages for the Kubeflow applications. Reference: how Kubeflow uses kustomize.

This directory is created when you run kfctl build or kfctl apply.

You can customize the Kubernetes resources by modifying the manifests in this directory and then rerunning kfctl apply to deploy the updates.

It is recommended that the contents of the ${KF_DIR} directory be kept under version control.
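For example, a minimal sketch using git (assuming git is available on the host):

cd ${KF_DIR}
git init
git add .
git commit -m "kubeflow deployment configuration"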

Provisioning of Persistent Volumes in Kubernetes

If you already have a dynamic volume provisioner, you can skip this step.

Troubleshooting: Persistent Volume Claims stuck in the Pending state

Check whether the PersistentVolumeClaims are Bound to PersistentVolumes, as follows:

kubectl -n kubeflow get pvc

If any PersistentVolumeClaims (PVCs) are in the Pending state after deployment and are not bound to PersistentVolumes (PVs), you either need to manually create a PV for each PVC, or install dynamic volume provisioning to create PVs on demand; then delete the existing PVCs and redeploy Kubeflow.
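As one example of the dynamic-provisioning route, Rancher's local-path-provisioner can be installed and marked as the default StorageClass. The manifest URL below follows that project's README and may change, so treat this as a sketch rather than the definitive method:

# Install the local-path provisioner (URL per the rancher/local-path-provisioner README).
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
# Mark the resulting StorageClass as the cluster default.
kubectl patch storageclass local-path -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'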

That is the whole of "how to deploy an Ubuntu 20.04 + k8s 1.21.0 development environment". Thank you for reading! Hopefully sharing this content has helped you; if you want to learn more, welcome to follow the industry information channel!
