This article focuses on the deployment and basic use of Helm. The approach described here is simple, fast and practical.
By packaging software, Helm supports release version management and control, which greatly simplifies deploying and managing Kubernetes applications.
With the containerization of business workloads and the move to a micro-service architecture, a huge monolithic application is decomposed into many services, breaking its complexity apart so that each micro-service can be deployed and scaled independently, enabling agile development and rapid iteration. But everything has two sides: while micro-services bring a lot of convenience, splitting an application into many components greatly increases the number of services. For Kubernetes orchestration, each component has its own resource files and can be deployed and scaled independently, which creates several challenges when orchestrating applications with Kubernetes:
Manage, edit and update a large number of K8s configuration files
Deploy a complex K8s application with a large number of configuration files
Share and reuse K8s configuration and application
Parameterized configuration templates support multiple environments
Manage the release of applications: rollback, diff, and view release history
Control certain stages of the deployment cycle
Post-release verification
Helm helps solve exactly these problems.
Helm packages Kubernetes resources (such as deployments, services, ingress, etc.) into a chart, and charts are kept in chart repositories, through which they are stored and shared. Helm makes releases configurable, supports version management of release configuration, and simplifies versioning, packaging, releasing, deleting and updating Kubernetes applications.
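To make "chart" concrete, the listing below is a minimal sketch of what a chart contains; the hello-chart name reappears in the packaging example at the end of this article, and the layout matches what helm create generates:
hello-chart/
  Chart.yaml      # chart name, version and description
  values.yaml     # default, overridable configuration values
  charts/         # optional sub-charts this chart depends on
  templates/      # Kubernetes resource templates (deployment.yaml, service.yaml, ...)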
This article briefly introduces Helm's functions, architecture, installation and usage.
Functions
As a package management tool for Kubernetes, Helm has the following functions:
Create a new chart
Package a chart into tgz format
Upload charts to a chart repository, or download charts from a repository
Install or uninstall chart in a Kubernetes cluster
Manage the release cycle of chart installed with Helm
Helm has three important concepts:
Chart: contains the information needed to create an instance of a Kubernetes application
Config: contains the configuration information for an application release
Release: a running instance of a chart combined with its config
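The relationship between the three can be seen in a single install command (a sketch that anticipates the mysql deployment later in this article):
# chart:   aliyun/mysql                      -> the package
# config:  --set mysqlRootPassword=hgfgood   -> values that override the chart defaults
# release: mysql-dev                         -> the running instance created from chart + config
helm install --name mysql-dev --set mysqlRootPassword=hgfgood aliyun/mysql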
Architecture
Components
Helm has the following two components:
Helm Client is a user command line tool, which is mainly responsible for the following:
Local chart development
Repository management
Interact with Tiller server
Send the chart to be installed
Query release information
Request upgrades or uninstalls of existing releases
Tiller server is a server deployed within the Kubernetes cluster. It interacts with Helm client and Kubernetes API server, and is mainly responsible for the following:
Listen for requests from Helm client
Build a release through chart and its configuration
Install chart to the Kubernetes cluster and track subsequent releases
Upgrade or uninstall chart by interacting with Kubernetes
Simply put, the client manages charts, while the server manages releases.
Implementation
Helm client
Helm client is written in Go and talks to Tiller server over the gRPC protocol.
Helm server
Tiller server is also written in Go. It exposes a gRPC server to interact with the helm client and uses the Kubernetes client library to communicate with Kubernetes; that library currently uses the REST+JSON format.
Tiller server does not have its own database, and currently uses Kubernetes's ConfigMaps to store relevant information.
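Because releases live in ConfigMaps, you can inspect them directly. A quick sketch, assuming Tiller runs in kube-system and labels its ConfigMaps with OWNER=TILLER as Helm v2 does:
kubectl get configmaps --namespace kube-system -l "OWNER=TILLER"   # one ConfigMap per release revision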
Note: configuration files should use YAML format wherever possible.
Installation
If your environment differs from the one described here, read the official quick start guide, which covers the core installation process and a variety of scenarios.
Helm Release address
Prerequisites
Kubernetes cluster
Understand the Context security mechanism of kubernetes
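To confirm which cluster and context helm will talk to, it helps to check the kubeconfig first (a quick sketch):
kubectl config current-context   # the context helm/Tiller operations will use by default
kubectl config get-contexts      # all configured contexts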
Download the helm installation package: wget https://storage.googleapis.com/kubernetes-helm/helm-v2.11.0-linux-amd64.tar.gz
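After downloading, the client binary only needs to be unpacked and put on the PATH; a minimal sketch (installing into /usr/local/bin is an assumption, any directory on the PATH works):
tar -zxvf helm-v2.11.0-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/helm
helm version --client    # verify the client installation (does not require Tiller yet)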
Configure ServiceAccount and rules
My environment uses RBAC (Role-Based Access Control) authorization, which requires configuring ServiceAccount and rules before installing helm. The official configuration refers to the Role-based Access Control documentation.
Configure helm full cluster permissions
RBAC configuration (rbac-config.yaml):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
cluster-admin is a role that Kubernetes creates by default, so there is no need to define it again.
Install helm:
$ kubectl create -f rbac-config.yaml
serviceaccount "tiller" created
clusterrolebinding "tiller" created
$ helm init --service-account tiller
Running result:
$HELM_HOME has been configured at /root/.helm.
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!
In a lab environment this is the recommended way to install; system components such as ingress-nginx can then be installed on top of it.
Configure helm in one namespace and manage another namespace
Configure helm to be installed in helm-system namespace, allowing Tiller to publish applications to kube-public namespace.
Create Tiller install namespace and ServiceAccount
Create a helm-system namespace, using the command kubectl create namespace helm-system
Define ServiceAccount
kind: ServiceAccount
apiVersion: v1
metadata:
  name: tiller
  namespace: helm-system
Roles and permissions for the namespace managed by Tiller
Create a Role with all the permissions of namespace kube-public. Bind the ServiceAccount of Tiller to this role, allowing Tiller to manage all the resources of kube-public namespace.
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tiller-manager
  namespace: kube-public
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["*"]
  verbs: ["*"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tiller-binding
  namespace: kube-public
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: helm-system
roleRef:
  kind: Role
  name: tiller-manager
  apiGroup: rbac.authorization.k8s.io
Tiller release information management
Helm stores release information in ConfigMaps in the namespace where Tiller is installed (here helm-system), so Tiller must be allowed to operate on the ConfigMaps in helm-system. Create a Role tiller-manager in helm-system and bind it to the ServiceAccount helm-system/tiller:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: helm-system
  name: tiller-manager
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["configmaps"]
  verbs: ["*"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tiller-binding
  namespace: helm-system
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: helm-system
roleRef:
  kind: Role
  name: tiller-manager
  apiGroup: rbac.authorization.k8s.io
Init helm
Install helm with the command `helm init --service-account tiller --tiller-namespace helm-system`.
Helm init parameter description:
--service-account: specify the ServiceAccount for Tiller; required on clusters with Kubernetes RBAC enabled.
--tiller-namespace: install Tiller into the specified namespace.
--tiller-image: specify the Tiller image.
--kube-context: install Tiller into a specific Kubernetes cluster (kubeconfig context).
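Putting the flags together, an install into a non-default namespace with a privately mirrored image would look roughly like this (the registry in the image name is only a placeholder for whatever mirror you use):
helm init --service-account tiller \
          --tiller-namespace helm-system \
          --tiller-image registry.example.com/kubernetes-helm/tiller:v2.11.0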
There is a problem with the first run:
[root@kuber24 helm]# helm init --service-account tiller --tiller-namespace helm-system
Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Error: Looks like "https://kubernetes-charts.storage.googleapis.com" is not a valid chart repository or cannot be reached: Get https://kubernetes-charts.storage.googleapis.com/index.yaml: EOF
This happens because Google services are blocked. Modify the hosts file so that storage.googleapis.com resolves to a reachable IP. An up-to-date hosts configuration for Google services reachable from mainland China can be found in the hosts-files/hosts file of the GitHub project googlehosts/hosts.
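A sketch of the kind of /etc/hosts entry involved; the IP below is purely illustrative and must be replaced with a currently working address taken from the googlehosts/hosts project:
# /etc/hosts -- illustrative entry only, look up a reachable IP yourself
203.0.113.10    storage.googleapis.com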
Run the init helm command again and install it successfully.
[root@kuber24 helm]# helm init --service-account tiller --tiller-namespace helm-system
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /root/.helm.
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!
When viewing the Pod status of Tiller, an error ImagePullBackOff is found in Pod, as follows:
[root@kuber24 resources]# kubectl get pods --all-namespaces | grep tiller
helm-system   tiller-deploy-cdcd5dcb5-fqm57   0/1   ImagePullBackOff   0   13m
Check the pod details with kubectl describe pod tiller-deploy-cdcd5dcb5-fqm57 -n helm-system: the Pod depends on the image gcr.io/kubernetes-helm/tiller:v2.11.0, which cannot be pulled.
Check whether someone has copied and re-pushed the image to Docker Hub, as shown below:
[root@kuber24]# docker search tiller:v2.11.0
INDEX       NAME                           DESCRIPTION                                      STARS   OFFICIAL   AUTOMATED
docker.io   docker.io/jay1991115/tiller    gcr.io/kubernetes-helm/tiller:v2.11.0            1                  [OK]
docker.io   docker.io/luyx30/tiller        tiller:v2.11.0                                   1                  [OK]
docker.io   docker.io/1017746640/tiller    FROM gcr.io/kubernetes-helm/tiller:v2.11.0       0                  [OK]
docker.io   docker.io/724399396/tiller     gcr.io/kubernetes-helm/tiller:v2.11.0-rc.2...    0                  [OK]
docker.io   docker.io/fengzos/tiller       gcr.io/kubernetes-helm/tiller:v2.11.0            0                  [OK]
docker.io   docker.io/imwower/tiller       tiller from gcr.io/kubernetes-helm/tiller:...    0                  [OK]
docker.io   docker.io/xiaotech/tiller      FROM gcr.io/kubernetes-helm/tiller:v2.11.0       0                  [OK]
docker.io   docker.io/yumingc/tiller       tiller:v2.11.0                                   0                  [OK]
docker.io   docker.io/zhangc476/tiller     gcr.io/kubernetes-helm/tiller/kubernetes-h...    0                  [OK]
Alternatively, use the mirrorgooglecontainers mirrors of Google images on hub.docker.com and re-tag the image with its original name. This has to be done on every node.
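A minimal sketch of the pull-and-retag workflow on each node, assuming the jay1991115/tiller mirror from the search output above carries a v2.11.0 tag:
docker pull docker.io/jay1991115/tiller:v2.11.0
# give the image the name the Tiller Deployment expects
docker tag docker.io/jay1991115/tiller:v2.11.0 gcr.io/kubernetes-helm/tiller:v2.11.0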
Installation problems
Image problem
Images cannot be downloaded: use images that others have synchronized to Docker Hub; find them with docker search $NAME:$VERSION.
Repo not reachable when installing helm
Work around it with the hosts-file change described above.
Chart download problem
Error message:
[root@kuber24 ~]# helm install nginx --tiller-namespace helm-system --namespace kube-public
Error: failed to download "nginx" (hint: running `helm repo update` may help)
Using helm repo update did not solve the problem.
As follows:
[root@kuber24 ~]# helm install nginx --tiller-namespace helm-system --namespace kube-public
Error: failed to download "nginx" (hint: running `helm repo update` may help)
[root@kuber24 ~]# helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming! ⎈
[root@kuber24 ~]# helm install nginx --tiller-namespace helm-system --namespace kube-public
Error: failed to download "nginx" (hint: running `helm repo update` may help)
Possible reasons:
There is no chart named nginx: use helm search nginx to check the available nginx charts.
There is a network problem and the chart cannot be downloaded; in that case helm only reports the error after a timeout.
Usage
Add common repositories
Add aliyun, github and official incubator charts repository.
helm repo add gitlab https://charts.gitlab.io/
helm repo add aliyun https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
helm repo add incubator https://kubernetes-charts-incubator.storage.googleapis.com/
Daily use of helm
In this section $NAME stands for helm's repo/chart_name.
Search charts: helm search $NAME
List releases: helm ls [--tiller-namespace $TILLER_NAMESPACE]
Show package information: helm inspect $NAME
Show the options a package supports: helm inspect values $NAME
Deploy a chart: helm install $NAME [--tiller-namespace $TILLER_NAMESPACE] [--namespace $CHART_DEPLOY_NAMESPACE]
Delete a release: helm delete $RELEASE_NAME [--purge] [--tiller-namespace $TILLER_NAMESPACE]
Update: helm upgrade --set $PARAM_NAME=$PARAM_VALUE $RELEASE_NAME $NAME [--tiller-namespace $TILLER_NAMESPACE]
Rollback: helm rollback $RELEASE_NAME $REVISION [--tiller-namespace $TILLER_NAMESPACE]
When deleting a release without the --purge parameter, only the deployed pods are removed; the release's basic information is kept, and a new release with the same name cannot be created.
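For example, to remove a release completely so that its name can be reused (a sketch using the mysql-dev release deployed in the next section):
helm delete mysql-dev --purge --tiller-namespace helm-system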
Deploy RELEASE
When deploying mysql, first query the configurable parameters and then set the ones you need.
Query configurable parameters:
[root@kuber24 charts]# helm inspect values aliyun/mysql
## mysql image version
## ref: https://hub.docker.com/r/library/mysql/tags/
##
image: "mysql"
imageTag: "5.7.14"

## Specify password for root user
##
## Default: random 10 character string
# mysqlRootPassword: testing

## Create a database user
##
# mysqlUser:
# mysqlPassword:

## Allow unauthenticated access, uncomment to enable
##
# mysqlAllowEmptyPassword: true

## Create a database
##
# mysqlDatabase:

## Specify an imagePullPolicy (Required)
## It's recommended to change this to 'Always' if the image tag is 'latest'
## ref: http://kubernetes.io/docs/user-guide/images/#updating-images
##
imagePullPolicy: IfNotPresent

livenessProbe:
  initialDelaySeconds: 30
  periodSeconds: 10
  timeoutSeconds: 5
  successThreshold: 1
  failureThreshold: 3

readinessProbe:
  initialDelaySeconds: 5
  periodSeconds: 10
  timeoutSeconds: 1
  successThreshold: 1
  failureThreshold: 3

## Persist data to a persistent volume
persistence:
  enabled: true
  ## database data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  # storageClass: "-"
  accessMode: ReadWriteOnce
  size: 8Gi

## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
  requests:
    memory: 256Mi
    cpu: 100m

# Custom mysql configuration files used to override default mysql settings
configurationFiles:
#  mysql.cnf: |-
#    [mysqld]
#    skip-name-resolve

## Configure the service
## ref: http://kubernetes.io/docs/user-guide/services/
service:
  ## Specify a service type
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services---service-types
  type: ClusterIP
  port: 3306
  # nodePort: 32000
For example, to configure the root password for mysql, use the --set option directly, e.g. --set mysqlRootPassword=hgfgood.
From the persistence parameter in the description of the mysql option, you can see that mysql needs persistent storage, so you need to configure the persistent storage volume PV for kubernetes.
Create a PV:
[root@kuber24 resources]# cat local-pv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
  namespace: kube-public
spec:
  capacity:
    storage: 30Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: /home/k8s
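Apply it before installing the chart (assuming the manifest is saved as local-pv.yml as shown):
kubectl create -f local-pv.yml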
The complete command to release the chart is: helm install --name mysql-dev --set mysqlRootPassword=hgfgood aliyun/mysql --tiller-namespace helm-system --namespace kube-public
View the list of chart that have been release:
[root@kuber24 charts]# helm ls --tiller-namespace=helm-system
NAME        REVISION   UPDATED                    STATUS     CHART         APP VERSION   NAMESPACE
mysql-dev   1          Fri Oct 26 10:35:55 2018   DEPLOYED   mysql-0.3.5                 kube-public
Normally, the dashboard shows the mysql-dev pods running without errors.
Running this mysql chart also requires the busybox image (used by an init container). Occasionally the pull fails because docker tries to reach the foreign Docker Hub by default, so download the busybox image on the nodes first.
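A sketch of the pre-pull, using the busybox:1.25.0 tag that the chart's init container references (visible in the rendered manifest later in this article):
# run on the node(s) that may schedule the mysql pod
docker pull busybox:1.25.0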
Update and rollback
In the example above, mysql was installed with root password hgfgood. In this example, update it to hgf and then roll back to the original password hgfgood.
Query the password after mysql installation:
[root@kuber24 charts]# kubectl get secret --namespace kube-public mysql-dev-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo
hgfgood
Update the root password of mysql: helm upgrade --set mysqlRootPassword=hgf mysql-dev mysql --tiller-namespace helm-system
Query the root user password of mysql again after the update is completed
[root@kuber24 charts]# kubectl get secret --namespace kube-public mysql-dev-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo
hgf
View RELEASE's information:
[root@kuber24 charts]# helm ls --tiller-namespace helm-system
NAME        REVISION   UPDATED                    STATUS     CHART         APP VERSION   NAMESPACE
mysql-dev   2          Fri Oct 26 11:26:48 2018   DEPLOYED   mysql-0.3.5                 kube-public
Looking at REVISION, you can see that mysql-dev now has two revisions.
Roll back to version 1:
[root@kuber24 charts]# helm rollback mysql-dev 1 --tiller-namespace helm-system
Rollback was a success! Happy Helming!
[root@kuber24 charts]# kubectl get secret --namespace kube-public mysql-dev-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo
hgfgood
From the above output, you can see that RELEASE has been rolled back.
Common problems
Error: could not find tiller. When the helm client needs to talk to Tiller, it must be told which namespace Tiller lives in with --tiller-namespace helm-system; the parameter defaults to kube-system.
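To avoid typing the flag every time, the Helm 2 client also honours the TILLER_NAMESPACE environment variable (a convenience sketch):
export TILLER_NAMESPACE=helm-system
helm ls    # now equivalent to: helm ls --tiller-namespace helm-system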
Chart downloads failing in mainland China
Downloads fail because of network problems, for example:
[root@kuber24 ~]# helm install stable/mysql --tiller-namespace helm-system --namespace kube-public --debug
[debug] Created tunnel using local port: '32774'
[debug] SERVER: "127.0.0.1:32774"
[debug] Original chart version: ""
Error: Get https://kubernetes-charts.storage.googleapis.com/mysql-0.10.2.tgz: read tcp 10.20.13.241:56594->216.58.221.240:443: read: connection reset by peer
The workaround: change into the directory where local charts are saved,
then fetch the corresponding chart from the Aliyun repository with helm fetch.
For example, install mysql.
helm fetch aliyun/mysql --untar
[root@kuber24 charts]# ls
mysql
[root@kuber24 charts]# ls mysql/
Chart.yaml  README.md  templates  values.yaml
Then run helm install again to install mysql chart.
helm install mysql --tiller-namespace helm-system --namespace kube-public
You can use the --debug parameter to print debug information.
[root@kuber24 charts]# helm install mysql --tiller-namespace helm-system --namespace kube-public --debug
[debug] Created tunnel using local port: '41905'
[debug] SERVER: "127.0.0.1:41905"
[debug] Original chart version: ""
[debug] CHART PATH: /root/Downloads/charts/mysql

NAME:   kissable-bunny
REVISION: 1
RELEASED: Thu Oct 25 20:20:23 2018
CHART: mysql-0.3.5
USER-SUPPLIED VALUES: {}
COMPUTED VALUES:
configurationFiles: null
image: mysql
imagePullPolicy: IfNotPresent
imageTag: 5.7.14
livenessProbe:
  failureThreshold: 3
  initialDelaySeconds: 30
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 5
persistence:
  accessMode: ReadWriteOnce
  enabled: true
  size: 8Gi
readinessProbe:
  failureThreshold: 3
  initialDelaySeconds: 5
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 1
resources:
  requests:
    cpu: 100m
    memory: 256Mi
service:
  port: 3306
  type: ClusterIP

HOOKS:
MANIFEST:

---
# Source: mysql/templates/secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: kissable-bunny-mysql
  labels:
    app: kissable-bunny-mysql
    chart: "mysql-0.3.5"
    release: "kissable-bunny"
    heritage: "Tiller"
type: Opaque
data:
  mysql-root-password: "TzU5U2tScHR0Sg=="
  mysql-password: "RGRXU3Ztb3hQNw=="
---
# Source: mysql/templates/pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: kissable-bunny-mysql
  labels:
    app: kissable-bunny-mysql
    chart: "mysql-0.3.5"
    release: "kissable-bunny"
    heritage: "Tiller"
spec:
  accessModes:
  - "ReadWriteOnce"
  resources:
    requests:
      storage: "8Gi"
---
# Source: mysql/templates/svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: kissable-bunny-mysql
  labels:
    app: kissable-bunny-mysql
    chart: "mysql-0.3.5"
    release: "kissable-bunny"
    heritage: "Tiller"
spec:
  type: ClusterIP
  ports:
  - name: mysql
    port: 3306
    targetPort: mysql
  selector:
    app: kissable-bunny-mysql
---
# Source: mysql/templates/deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kissable-bunny-mysql
  labels:
    app: kissable-bunny-mysql
    chart: "mysql-0.3.5"
    release: "kissable-bunny"
    heritage: "Tiller"
spec:
  template:
    metadata:
      labels:
        app: kissable-bunny-mysql
    spec:
      initContainers:
      - name: "remove-lost-found"
        image: "busybox:1.25.0"
        imagePullPolicy: "IfNotPresent"
        command: ["rm", "-fr", "/var/lib/mysql/lost+found"]
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
      containers:
      - name: kissable-bunny-mysql
        image: "mysql:5.7.14"
        imagePullPolicy: "IfNotPresent"
        resources:
          requests:
            cpu: 100m
            memory: 256Mi
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: kissable-bunny-mysql
              key: mysql-root-password
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              name: kissable-bunny-mysql
              key: mysql-password
        - name: MYSQL_USER
          value: ""
        - name: MYSQL_DATABASE
          value: ""
        ports:
        - name: mysql
          containerPort: 3306
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - "mysqladmin ping -u root -p${MYSQL_ROOT_PASSWORD}"
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 3
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - "mysqladmin ping -u root -p${MYSQL_ROOT_PASSWORD}"
          initialDelaySeconds: 5
          periodSeconds: 10
          timeoutSeconds: 1
          successThreshold: 1
          failureThreshold: 3
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: kissable-bunny-mysql

LAST DEPLOYED: Thu Oct 25 20:20:23 2018
NAMESPACE: kube-public
STATUS: DEPLOYED

RESOURCES:
==> v1/Pod(related)
NAME                                   READY   STATUS    RESTARTS   AGE
kissable-bunny-mysql-c7df69d65-lmjzn   0/1     Pending   0          0s

==> v1/Secret
NAME                   AGE
kissable-bunny-mysql   1s

==> v1/PersistentVolumeClaim
kissable-bunny-mysql   1s

==> v1/Service
kissable-bunny-mysql   1s

==> v1beta1/Deployment
kissable-bunny-mysql   1s

NOTES:
MySQL can be accessed via port 3306 on the following DNS name from within your cluster:
kissable-bunny-mysql.kube-public.svc.cluster.local

To get your root password run:
    MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace kube-public kissable-bunny-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo)

To connect to your database:
1. Run an Ubuntu pod that you can use as a client:
    kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il
2. Install the mysql client:
    $ apt-get update && apt-get install mysql-client -y
3. Connect using the mysql cli, then provide your password:
    $ mysql -h kissable-bunny-mysql -p

To connect to your database directly from outside the K8s cluster:
    MYSQL_HOST=127.0.0.1
    MYSQL_PORT=3306
    # Execute the following commands to route the connection:
    export POD_NAME=$(kubectl get pods --namespace kube-public -l "app=kissable-bunny-mysql" -o jsonpath="{.items[0].metadata.name}")
    kubectl port-forward $POD_NAME 3306:3306
    mysql -h ${MYSQL_HOST} -P${MYSQL_PORT} -u root -p${MYSQL_ROOT_PASSWORD}
Package a chart
# create a new chart
helm create hello-chart
# validate the chart
helm lint hello-chart
# package the chart into a tgz archive
helm package hello-chart
At this point you should have a better understanding of the deployment and basic use of Helm; the best way to consolidate it is to try it out in practice.