A preliminary study on the use of helm-v3, a native micro-service management tool for K8s (1)

2025-04-14 Update From: SLTechnology News&Howtos


Helm v3 Application Package Manager

3.1 Why do I need Helm?

Application objects on K8s are composed of specific resource descriptions, including Deployment, Service, and so on. These are saved in separate files or written into one combined configuration file, and then deployed with kubectl apply -f.

Why use helm?

In K8s, we usually release applications such as microservices with yaml files: deployment, service, configmap, ingress, and so on. But with hundreds of microservices, every release means editing many files, which is inflexible and hard to manage. In fact, K8s lacks a higher-level, application-level view; it would be better if all the yaml for one application were kept together and managed as a unit.

What is the problem with using these yaml files?

Because released applications lack version management and control, maintaining and updating applications on Kubernetes raises several challenges, mainly the following:

How to manage these services as a whole? Every deployment means applying the same set of yaml files one by one, with no unified management.

How to reuse these resource files efficiently? Many of the yaml files are nearly identical; ideally one set of templates could publish multiple applications, with only small changes needed per release.

There is no application-level version management, so with so much yaml, how do you manage at the application level? Helm came into being to solve exactly these problems.

3.2 Helm introduction

Helm is a package management tool for Kubernetes, analogous to package managers on Linux such as yum or apt: it lets you easily deploy pre-packaged collections of yaml files to Kubernetes.

Like yum, it mainly solves the dependency problem: installing one package may pull in many dependencies. Helm plays the same role for Kubernetes applications that yum plays for packages, installing a whole application at once.

Helm has three important concepts:

Helm: a command-line client tool for creating, packaging, publishing, and managing Kubernetes application charts.

Chart: application description, a collection of files related to K8s resources.

Release: a deployment instance of a Chart. When a chart is run by Helm, a corresponding release is generated and the actual running resource objects are created in K8s.

3.3 Helm v3 Changes

On November 13, 2019, the Helm team released the first stable version of Helm v3.

The main changes in this version are as follows:

1. Architecture change

The most obvious change is that the architecture is much simpler. In v2, Tiller was deployed as a pod in the cluster and acted as a server: it received requests from the helm client and forwarded them to the API server. Tiller had to be deployed separately, and it had to be authorized for each namespace it was allowed to operate on and for each permission it needed, such as deletion.

In v3, helm uses the kubeconfig file to connect directly to the apiserver, just as kubectl does, and then deploys the chart packages. This is much simpler: a v2 deployment needed Tiller working before Helm could do anything, while v3 only needs the helm client binary to be downloaded. Permission management is also simpler. Previously Tiller had to handle authorization, which was troublesome; Tiller was really a superfluous component, a product of the design at the time. With kubeconfig, authorization and API access work through the native credentials, so there is no need for a separate in-cluster component, and the pieces that remain can live in the helm client. This was a change of thinking in the community and really made helm a simple, useful tool.

2. Release names can now be reused across namespaces. Previously release names were maintained globally by Tiller and stored in a single namespace, so deploying a release named web in the default namespace prevented using the name web under the kube-system namespace. Now release information is stored in the release's own namespace, so names only need to be unique within a namespace.

3. Charts can be pushed to a Docker image registry; that is, a chart can be pushed to a Harbor repository. Previously a dedicated chart storage tool was needed; now one registry such as Harbor can store multiple artifact types, holding both our images and our chart packages.

4. JSONSchema can be used to validate chart values, i.e. to validate the variables in the values file.

5. Other

1) To align better with the wording of other package managers, some Helm CLI commands were renamed:

helm delete was renamed to helm uninstall

helm inspect was renamed to helm show

helm fetch was renamed to helm pull

However, the above old commands can still be used at present.

2) the helm serve command for local temporary Chart Repository has been removed.

3) create namespaces automatically

When you create a release in a non-existent namespace, Helm 2 would create the namespace for you. Helm 3 follows the behavior of other Kubernetes objects and returns an error if the namespace does not exist.

4) requirements.yaml is no longer needed; dependencies are defined directly in Chart.yaml.
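As a minimal sketch of this change (the chart name and dependency here are illustrative, not from the original), a Helm 3 Chart.yaml declares its dependencies inline:

```yaml
apiVersion: v2          # v2 marks a Helm 3 chart
name: my-app
version: 0.1.0
# Helm 3: dependencies live here; requirements.yaml is gone
dependencies:
  - name: mysql
    version: "1.6.2"
    repository: "http://mirror.azure.cn/kubernetes/charts"
```

Running helm dependency update then downloads the listed charts into the chart's charts/ directory.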

In short, the v3 version of helm is essentially a refactoring of the codebase.

3.4 Helm client

1. Deploy Helm client

Helm client download address: https://github.com/helm/helm/releases

Extract it and move the binary to the /usr/bin/ directory.

wget https://get.helm.sh/helm-v3.0.0-linux-amd64.tar.gz

tar zxvf helm-v3.0.0-linux-amd64.tar.gz

mv linux-amd64/helm /usr/bin/
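As a quick sanity check (a sketch, assuming the v3.0.0 binary above is now on the PATH):

```
helm version
```

In v3 this prints only the client build information; there is no longer a Tiller server version to query.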

2. Helm common commands

3. Configure domestic Chart warehouse

After you have prepared the client tools, you need to configure the Chart repository. Chart is an application package.

The Microsoft repository (http://mirror.azure.cn/kubernetes/charts/) is recommended; it carries essentially all the charts from the official site.

Ali Cloud Warehouse (https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts)

Official repository (https://hub.kubeapps.com/charts/incubator): the official chart repository; access from China is slow.

Add a repository:

helm repo add azure http://mirror.azure.cn/kubernetes/charts
helm repo add aliyun https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
helm repo update

View the configured repository:

You can add multiple repositories here, such as Aliyun and Microsoft; helm search will then list results from all of them.

[root@k8s-master1 ~]# helm repo list
NAME   URL
stable http://mirror.azure.cn/kubernetes/charts
[root@k8s-master1 ~]# helm search repo mysql

Delete the repository:

helm repo remove aliyun

3.5 basic use of Helm

There are three main commands:

install: install a chart
upgrade: upgrade a release
rollback: roll back a release
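A minimal lifecycle sketch tying the three commands together; the release name demo-db is illustrative, and the commands assume the azure repository added earlier:

```
# install a chart as a release named demo-db
helm install demo-db azure/mysql
# upgrade the release, overriding one value in place
helm upgrade demo-db azure/mysql --set imageTag=5.7.28
# roll back to revision 1 if the upgrade misbehaves
helm rollback demo-db 1
```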

1. Deploy an application using chart

Find chart:

helm search repo
helm search repo mysql

Why does mariadb also appear in the list? Because it is related to mysql.

View chart information:

helm show chart azure/mysql

Install the package; db-1 is the custom release name:

[root@k8s-master1 ~]# helm install db-1 azure/mysql
NAME: db-1
LAST DEPLOYED: Tue Dec 17 10:24:07 2019
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:

View the status of the release:

helm status db-1

List the releases:

[root@k8s-master1 ~]# helm list
NAME NAMESPACE REVISION UPDATED                                 STATUS   CHART       APP VERSION
db-1 default   1        2019-12-17 10:24:07.593783822 +0800 CST deployed mysql-1.6.2 5.7.28

View the deployment status of pod

[root@k8s-master1 ~]# kubectl get pod
NAME                        READY STATUS  RESTARTS AGE
db-1-mysql-765759d7d8-n65x6 0/1   Pending 0        3m47s

Check the events. The pod is Pending, so describe it to find out why it cannot run. The events show the pvc cannot be bound, which means there is no matching pv.

[root@k8s-master1 ~]# kubectl describe pod db-1-mysql-765759d7d8-n65x6
Events:
  Type     Reason            Age  From               Message
  ----     ------            ---  ----               -------
  Warning  FailedScheduling       default-scheduler  pod has unbound immediate PersistentVolumeClaims (repeated 3 times)
  Warning  FailedScheduling       default-scheduler  pod has unbound immediate PersistentVolumeClaims (repeated 3 times)

Check the pvc. The binding has not succeeded, meaning no suitable pv has been found; once one matches, the pod will run.

[root@k8s-master1 ~]# kubectl get pvc
NAME       STATUS  VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
db-1-mysql Pending                                           7m54s

Let's create a pv for it to bind automatically. Here I still use nfs for network storage. After creating it, the pvc shows as bound; check the pod status:

[root@k8s-master1 ~]# cat pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0003
spec:
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    path: /opt/k8s/db
    server: 10.4.7.200
[root@k8s-master1 ~]# kubectl get pod
NAME                        READY STATUS  RESTARTS AGE
db-1-mysql-765759d7d8-n65x6 1/1   Running 0        24m

View deployed applications through helm list

[root@k8s-master1 ~]# helm list
NAME NAMESPACE REVISION UPDATED                                 STATUS   CHART       APP VERSION
db-1 default   1        2019-12-17 10:24:07.593783822 +0800 CST deployed mysql-1.6.2 5.7.28

Check the details, which will tell you how to connect to mysql

[root@k8s-master1 ~] # helm status db-1

The notes say a random root password was generated; this command extracts its value from the secret:

To get your root password run:

MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace default db-1-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo)

The notes then give the command for connecting to the database:

Connect using the mysql cli, then provide your password:

mysql -h db-1-mysql -p
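The kubectl half of the password command above needs a cluster, but the base64 half can be seen on its own. A small stand-alone sketch (the encoded value here is made up for illustration, not read from a real secret):

```shell
# secret .data fields are base64-encoded; "dGVzdGluZw==" encodes "testing"
ENCODED="dGVzdGluZw=="
MYSQL_ROOT_PASSWORD=$(echo "$ENCODED" | base64 --decode)
echo "$MYSQL_ROOT_PASSWORD"
```

This mirrors what the NOTES.txt one-liner does after kubectl has fetched the encoded value.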

First enter the container, then connect as instructed, and test by creating a database:

[root@k8s-master1 ~]# kubectl exec -it db-1-mysql-765759d7d8-n65x6 /bin/bash
root@db-1-mysql-765759d7d8-n65x6:/# mysql -h db-1-mysql -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 73
Server version: 5.7.28 MySQL Community Server (GPL)

mysql> create database db;
Query OK, 1 row affected (0.07 sec)

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| db                 |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
5 rows in set (0.02 sec)

Now there is a question: NFS dynamic provisioning is available, so how do I make the chart use my own pv provisioner?

This is where the chart's configuration options come in, because some charts have dependencies; for example mysql depends on a pv, and the chart cannot know which storage class we use. There are two ways to customize configuration options before installing a chart. The first is to use --values to override with a yaml file. The helm show values azure/mysql command we just used prints the values yaml of that chart.

Let's redirect the chart's values to a file:

[root@k8s-master1 ~]# helm show values azure/mysql > volues.yaml
[root@k8s-master1 ~]# cat volues.yaml
mysqlRootPassword: testing
mysqlUser: k8s
mysqlPassword: k8s123
mysqlDatabase: k8s
persistence:
  enabled: true
  storageClass: "managed-nfs-storage"
  accessMode: ReadWriteOnce
  size: 8Gi

I created this storage class earlier, so instead of demonstrating that here, I simply specify our storage class in volues.yaml.

[root@k8s-master1 ~]# kubectl get storageclass
NAME                PROVISIONER    AGE
managed-nfs-storage fuseim.pri/ifs 3d23h

Let's create another database release based on our values file; now it binds our storage class directly and the pod starts right away.

[root@k8s-master1 ~]# helm install db-2 -f volues.yaml azure/mysql
[root@k8s-master1 ~]# helm list
NAME NAMESPACE REVISION UPDATED                                 STATUS   CHART       APP VERSION
db-1 default   1        2019-12-17 10:24:07.593783822 +0800 CST deployed mysql-1.6.2 5.7.28
db-2 default   1        2019-12-17 11:37:31.852808375 +0800 CST deployed mysql-1.6.2 5.7.28
[root@k8s-master1 ~]# kubectl get pv
NAME                                     CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM                                    STORAGECLASS        REASON AGE
pv0003                                   8Gi      RWO          Retain         Bound  default/db-1-mysql                                                  52m
pvc-0baaf69a-0a3b-4d05-adb5-515057bda753 8Gi      RWO          Delete         Bound  default/db-2-mysql                       managed-nfs-storage        18s
pvc-16725fa9-3fe5-4e87-a2f8-f3f1e7df56b3 16Gi     RWO          Delete         Bound  kube-system/prometheus-data-prometheus-0 managed-nfs-storage        3d23h
pvc-30244364-8bcd-43af-b1a9-d36e044c83c4 1Gi      RWO          Delete         Bound  kube-system/grafana-data-grafana-0       managed-nfs-storage        3d23h
[root@k8s-master1 ~]# kubectl get pod
NAME                        READY STATUS  RESTARTS AGE
db-1-mysql-765759d7d8-n65x6 1/1   Running 0        74m
db-2-mysql-69dc64b75f-b2cxb 1/1   Running 0        59s

Now let's test the database. The password was also defined in values, so log in directly and look at the database we created:

root@db-2-mysql-69dc64b75f-b2cxb:/# echo ${MYSQL_ROOT_PASSWORD}
testing
root@db-2-mysql-69dc64b75f-b2cxb:/# mysql -uroot -p${MYSQL_ROOT_PASSWORD}
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 52
Server version: 5.7.28 MySQL Community Server (GPL)

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| k8s                |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
5 rows in set (0.06 sec)

View the users; the k8s user has been created:

mysql> select user from mysql.user;
+---------------+
| user          |
+---------------+
| k8s           |
| root          |
| mysql.session |
| mysql.sys     |
| root          |
+---------------+
5 rows in set (0.04 sec)

2. Customize chart configuration options before installation

If you install an official chart directly with install, anything it depends on must be prepared in advance, such as a pv.

There are two ways: keep the modified settings in a file and reference that file, or use --set to override variables on the command line.

Anything that can be written in the configuration file can also be set on the command line, and it works the same way.

--values (or -f): specify a YAML file with overrides. This can be given multiple times; the rightmost file takes precedence.

--set: specify an override on the command line. If both are used, --set takes priority.
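The two styles can also be combined in one command (the release name db-4 and the extra password override are illustrative additions); since --set wins, its values override anything in the file:

```
helm install db-4 -f volues.yaml \
  --set persistence.storageClass="managed-nfs-storage" \
  --set mysqlRootPassword="s3cret" \
  azure/mysql
```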

[root@k8s-master1 ~]# helm install db-3 --set persistence.storageClass="managed-nfs-storage" azure/mysql
[root@k8s-master1 ~]# kubectl get pvc
NAME       STATUS VOLUME                                   CAPACITY ACCESS MODES STORAGECLASS        AGE
db-1-mysql Bound  pv0003                                   8Gi      RWO                              4h23m
db-2-mysql Bound  pvc-0baaf69a-0a3b-4d05-adb5-515057bda753 8Gi      RWO          managed-nfs-storage 3h
db-3-mysql Bound  pvc-2bf895a8-075b-43d9-ade9-fe9b7ae67b1b 8Gi      RWO          managed-nfs-storage
[root@k8s-master1 ~]# kubectl get pod
NAME                        READY STATUS  RESTARTS AGE
db-1-mysql-765759d7d8-n65x6 1/1   Running 0        4h23m
db-2-mysql-69dc64b75f-b2cxb 1/1   Running 0        179m
db-3-mysql-679888dd7b-9m5cm 1/1   Running 0        85s

Or, if you want to see how the official chart is written, you can pull it directly and inspect it:

[root@k8s-master1 ~]# helm pull azure/mysql --untar

The chart comes down as a compressed package; --untar unpacks it as it is pulled.

The values.yaml here is the same file we redirected just now; the rest stays unchanged. Looking under templates, you will find that deploying a chart makes things much easier: I can quickly spin up multiple instances, deploying multiple such pods by feeding in parameters dynamically. You can also separate production and test environments by defining different namespaces in the values file.

[root@k8s-master1 ~]# cd mysql
[root@k8s-master1 mysql]# ls
Chart.yaml  README.md  templates  values.yaml

The helm install command can install a chart from several sources:

A chart repository (helm install azure/mysql)

A local chart archive (helm install foo-0.1.1.tgz), such as the mysql package we pulled just now

An unpacked chart directory (helm install path/to/foo)

A full URL (helm install https://example.com/charts/foo-1.2.3.tgz)

3. Build a Helm Chart

A chart skeleton is generated with helm create plus a custom name, producing the directory structure:

[root@k8s-master1 test-helm]# helm create chart
Creating chart
[root@k8s-master1 test-helm]# ls
chart
[root@k8s-master1 test-helm]# cd chart/
[root@k8s-master1 chart]# ls
charts  Chart.yaml  templates  values.yaml

Install the chart we just created, which starts the pod; helm install takes a custom release name followed by the chart directory:

[root@k8s-master1 test-helm]# helm install my-chart chart/
NAME: my-chart
LAST DEPLOYED: Tue Dec 17 15:09:10 2019
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=chart,app.kubernetes.io/instance=my-chart" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace default port-forward $POD_NAME 8080

Let's look at what service the launched pod provides. This default chart is an official template; in values you can see that the image it pulls is nginx.

[root@k8s-master1 test-helm]# kubectl get pod -o wide
my-chart-94997cb67-c2zxx 1/1 Running 0 10m 10.244.0.43 k8s-node2
[root@k8s-master1 chart]# curl -I 10.244.0.43
HTTP/1.1 200 OK
Server: nginx/1.16.0
Date: Tue, 17 Dec 2019 07:22:57 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 23 Apr 2019 10:18:21 GMT
Connection: keep-alive
ETag: "5cbee66d-264"
Accept-Ranges: bytes
[root@k8s-master1 chart]# helm list
NAME     NAMESPACE REVISION UPDATED                                 STATUS   CHART       APP VERSION
db-1     default   1        2019-12-17 10:24:07.593783822 +0800 CST deployed mysql-1.6.2 5.7.28
db-2     default   1        2019-12-17 11:37:31.852808375 +0800 CST deployed mysql-1.6.2 5.7.28
db-3     default   1        2019-12-17 14:... +0800 CST             deployed mysql-1.6.2 5.7.28
my-chart default   1        2019-12-17 15:09:10.164272986 +0800 CST deployed chart-0.1.0 1.16.0

Take a look at the directory structure of this file

[root@k8s-master1 test-helm]# tree .
.
└── chart
    ├── charts
    ├── Chart.yaml
    ├── templates
    │   ├── deployment.yaml
    │   ├── _helpers.tpl
    │   ├── ingress.yaml
    │   ├── NOTES.txt
    │   ├── serviceaccount.yaml
    │   ├── service.yaml
    │   └── tests
    │       └── test-connection.yaml
    └── values.yaml

4 directories, 9 files

Chart.yaml: basic information used to describe the Chart, including name, description, version, and so on.

values.yaml: stores the values of the variables used by the template files in the templates directory.

templates: the directory holding all yaml template files.

charts: the directory holding all sub-charts this chart depends on.

NOTES.txt: used to introduce Chart help information, which is shown to users after helm install deployment. For example: how to use this Chart, list the default settings, etc.

_helpers.tpl: the place for template helpers, which can be reused throughout the chart.
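As a hedged sketch of what a helper definition looks like (the helper name chart.labels is illustrative, not from the generated chart):

```
{{/* _helpers.tpl: a reusable label block */}}
{{- define "chart.labels" -}}
app: {{ .Values.name }}
{{- end -}}
```

Any template in the chart can then write {{ include "chart.labels" . | nindent 4 }} instead of repeating the labels by hand.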

Now let's build a chart template of our own and release a simple microservice.

[root@k8s-master1 chart]# tree .
.
├── charts
├── Chart.yaml
├── templates
└── values.yaml

Create a new deployment yaml, with nginx as the image:

[root@k8s-master1 templates]# kubectl create deployment app-1 --image=nginx -o yaml --dry-run > deployment.yaml
[root@k8s-master1 templates]# ls
deployment.yaml

Delete the unneeded fields and null values from the generated yaml.

Let's modify the yaml to render with simple variable substitution from values, then release two microservices using the nginx image as a small example; a complete microservice (a dubbo or spring cloud application) would be released the same way.

[root@k8s-master1 chart]# cat templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: {{ .Values.name }}
  name: {{ .Values.name }}
spec:
  replicas: {{ .Values.replicas }}
  selector:
    matchLabels:
      app: {{ .Values.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.name }}
    spec:
      containers:
      - image: {{ .Values.image }}:{{ .Values.imageTag }}
        name: {{ .Values.image }}

When we publish a microservice, the variables from the values file are used to render the yaml of the application being released. This is Helm's advantage: to publish a service with native K8s yaml, the yaml format itself does not support variable injection, and helm arose mainly to solve this problem. Releasing multiple services from one template by changing just a few values makes each release very fast and saves a lot of time.
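Rendering can also be checked locally before anything is installed; helm template renders the chart client-side without touching the cluster (a sketch assuming the chart/ directory built above, with illustrative override values):

```
# render templates locally, overriding values on the command line
helm template my-test chart/ --set name=base-user-test --set replicas=1
```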

[root@k8s-master1 chart]# cat ./values.yaml
name: base-user-devops
image: nginx
imageTag: 1.15
replicas: 2
[root@k8s-master1 ~]# kubectl get pod
NAME                              READY STATUS  RESTARTS AGE
base-user-common-58b7bc9c56-2nmcb 1/1   Running 0        12m
base-user-common-58b7bc9c56-2tgpg 1/1   Running 0        12m
base-user-devops-7cf5c99485-rr295 1/1   Running 0        10m
base-user-devops-7cf5c99485-s2jbb 1/1   Running 0        10m
[root@k8s-master1 test-helm]# helm list
NAME             NAMESPACE REVISION UPDATED                                 STATUS   CHART       APP VERSION
base-user-common default   1        2019-12-17 16:29:01.587768045 +0800 CST deployed chart-0.1.0 1.16.0
base-user-devops default   1        2019-12-17 16:27:11.757082258 +0800 CST deployed chart-0.1.0 1.16.0

See what the rendering looks like: our variables have been substituted into the yaml, which then starts the pod.

[root@k8s-master1 test-helm]# helm get manifest base-user-common
---
# Source: chart/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: base-user-common
  name: base-user-common
spec:
  replicas: 2
  selector:
    matchLabels:
      app: base-user-common
  template:
    metadata:
      labels:
        app: base-user-common
    spec:
      containers:
      - image: nginx:1.16
        name: nginx

4. Upgrade, rollback and delete

When you release a new version of a chart, or want to change the configuration of an existing release, use the helm upgrade command.

For example, replace the image of our base-user-common service with another one. During microservice development the image is rebuilt via a Dockerfile whenever the code changes; the principle here is the same. To release a new version of the service, we replace the old image with the new one.

[root@k8s-master1 test-helm] # vim chart/values.yaml

Modify the image tag to 1.15, then update. Run upgrade with the release name of our microservice (the name is defined per project, since each microservice is one piece of the split-up application), followed by the chart template directory:

[root@k8s-master1 test-helm]# helm upgrade base-user-common chart/
Release "base-user-common" has been upgraded. Happy Helming!
NAME: base-user-common
LAST DEPLOYED: Tue Dec 17 16:47:55 2019
NAMESPACE: default
STATUS: deployed
REVISION: 2
TEST SUITE: None

Test: check that the image has been successfully replaced with version 1.15.

[root@k8s-master1 test-helm]# curl -I 10.244.2.24
HTTP/1.1 200 OK
Server: nginx/1.15.12
Date: Tue, 17 Dec 2019 08:48:34 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 16 Apr 2019 13:08:19 GMT
Connection: keep-alive
ETag: "5cb5d3c3-264"
Accept-Ranges: bytes

For example, roll back the application to the first revision, returning to the 1.16 image:

[root@k8s-master1 ~]# helm rollback base-user-common
Rollback was a success! Happy Helming!
[root@k8s-master1 ~]# curl -I 10.244.1.20
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Tue, 17 Dec 2019 09:44:44 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 13 Aug 2019 10:05:00 GMT
Connection: keep-alive
ETag: "5d528b4c-264"
Accept-Ranges: bytes

You can also view the version of history.

[root@k8s-master1 chart]# helm history base-user-common
REVISION UPDATED                  STATUS     CHART       APP VERSION DESCRIPTION
1        Tue Dec 17 16:29:01 2019 superseded chart-0.1.0 1.16.0      Install complete
2        Tue Dec 17 16:47:55 2019 superseded chart-0.1.0 1.16.0      Upgrade complete
3        Tue Dec 17 17:43:23 2019 deployed   chart-0.1.0 1.16.0      Rollback to 1

You can also package the chart and push it to a chart repository to share it with others:

[root@k8s-master1 test-helm] # helm package chart

Uninstall a release with helm uninstall (helm delete also still works); this removes the pods as well.

[root@k8s-master1 test-helm]# helm uninstall base-user-common
release "base-user-common" uninstalled
