Helm for Kubernetes


The package manager for Kubernetes.

Every successful software platform has an excellent packaging system: Debian and Ubuntu have apt, Red Hat and CentOS have yum. Helm is the package manager for Kubernetes.

This article covers why Helm exists, its architecture and components, and how to use it.

Why Helm

What exactly did Helm solve? Why does Kubernetes need Helm?

The answer is: Kubernetes is good at organizing and orchestrating containers, but it lacks a higher-level application packaging tool, and Helm is here to do this.

Let's take a look at an example.

For example, for a MySQL service, Kubernetes needs to deploy the following objects:

Service, so that the outside world can access MySQL.

apiVersion: v1
kind: Service
metadata:
  name: my-mysql
  labels:
    app: my-mysql
spec:
  ports:
  - name: mysql
    port: 3306
    targetPort: mysql
  selector:
    app: my-mysql

Secret, which defines the password for MySQL.

apiVersion: v1
kind: Secret
metadata:
  name: my-mysql
  labels:
    app: my-mysql
type: Opaque
data:
  mysql-root-password: "M0MzREhRQWRjeQ=="
  mysql-password: "eGNXZkpMNmlkSw=="

PersistentVolumeClaim, which requests persistent storage for MySQL.

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-mysql
  labels:
    app: my-mysql
spec:
  accessModes:
  - "ReadWriteOnce"
  resources:
    requests:
      storage: 8Gi

Deployment, which deploys the MySQL Pod and references the supporting objects above.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-mysql
  labels:
    app: my-mysql
spec:
  template:
    metadata:
      labels:
        app: my-mysql
    spec:
      containers:
      - name: my-mysql
        image: mysql:5.7
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: my-mysql
              key: mysql-root-password
        - name: MYSQL_USER
          value: ""
        - name: MYSQL_DATABASE
          value: ""
        ports:
        - name: mysql
          containerPort: 3306
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: my-mysql

We can save each of these configurations in its own file, or put them together in a single file, and then deploy with kubectl apply -f.

Kubernetes supports deploying such services quite well; if an application consists of only one or a few of them, the approach above is sufficient.

However, a microservice-architecture application may be made up of a dozen or even dozens of services, and this way of organizing and managing them runs into problems:

It is difficult to manage, edit, and maintain so many services. Each service has several configuration objects, and there is no higher-level tool to organize them.

It is not easy to publish these services as a whole. The deployer has to know which services the application contains and run kubectl apply in the right logical order. What is missing is a tool that defines the application, its services, and the dependencies between them.

Services cannot be shared and reused efficiently. For example, two applications may both need a MySQL service with different configuration parameters; today each one has to copy a standard set of MySQL configuration files, modify them, and apply them with kubectl apply. In other words, parameterized configuration and multi-environment deployment are not supported.

Application-level version management is not supported. kubectl rollout undo can roll back a single Deployment, but not the application as a whole.

Verifying the state of a deployed application is not supported, for example checking whether MySQL can be reached with a predefined account. Kubernetes health checks target individual containers, while what we need is a health check at the application (service) level.

Helm solves these problems and helps make Kubernetes an ideal deployment platform for microservice applications.

The architecture of Helm

Helm has two important concepts: chart and release.

A chart is the collection of information needed to create an application: configuration templates for the various Kubernetes objects, parameter definitions, dependencies, documentation, and so on. A chart is the self-contained logical unit of application deployment; think of it as the equivalent of an installation package in apt or yum.

Release is a running instance of chart and represents a running application. When chart is installed into the Kubernetes cluster, a release is generated. Chart can be installed to the same cluster multiple times, with each installation being a release.

Helm is a package management tool, and the package here refers to chart. Helm can:

Create a new chart from scratch.
Interact with chart repositories to pull, save, and update charts.
Install and uninstall releases in a Kubernetes cluster.
Update, roll back, and test releases.
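In command form, that lifecycle looks roughly like the sketch below (the chart and release names are illustrative; each command is covered in detail later in this article):

helm create mychart            # scaffold a new chart
helm package mychart           # package the chart into a .tgz archive
helm repo update               # refresh the local repository index
helm install stable/mysql      # install a chart, which creates a release
helm upgrade my stable/mysql   # upgrade the release named "my"
helm rollback my 1             # roll the release back to revision 1
helm delete my                 # uninstall the release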

Helm consists of two components: the Helm client and the Tiller server.

The Helm client is a command-line tool used by end users, and users can:

Develop charts locally.
Manage chart repositories.
Interact with the Tiller server.
Install charts into a remote Kubernetes cluster.
View release information.
Upgrade or uninstall existing releases.

The Tiller server, which runs in a Kubernetes cluster, processes requests from Helm clients and interacts with Kubernetes API Server. The Tiller server is responsible for:

Listening for requests from Helm clients.
Building releases from charts.
Installing charts into Kubernetes and tracking the state of each release.
Upgrading or uninstalling releases via the API Server.

To put it simply: the Helm client manages charts; the Tiller server manages releases.

Deploy Helm

Install and deploy the Helm client and the Tiller server.

Helm client

Typically, we install the Helm client on a node that can execute the kubectl command, requiring only the following command:

curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash

Run helm version to verify the installation.

[root@k8s-master ~]# helm version
Client: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Error: could not find tiller

Currently, you can only see the version of the client, and the server is not installed yet.

Helm has many subcommands and parameters. In order to improve the efficiency of using the command line, it is usually recommended to install helm's bash command completion script as follows:

helm completion bash > ~/.helmrc
echo "source ~/.helmrc" >> ~/.bashrc

After logging in again, you can use the Tab key to complete the helm subcommands and parameters.

[root@k8s-master ~]# helm
completion  history  list     search    verify
create      home     package  serve     version
delete      init     plugin   status
dependency  inspect  repo     template
fetch       install  reset    test
get         lint     rollback upgrade
[root@k8s-master ~]# helm install --
--atomic           --render-subchart-notes
--ca-file=         --replace
--cert-file=       --repo=
--debug            --set=
--dep-up           --set-file=
--description=     --set-string=
--devel            --tiller-connection-timeout=
--dry-run          --tiller-namespace=
--home=            --timeout=
--host=            --tls
--key-file=        --tls-ca-cert=
--keyring=         --tls-cert=
--kubeconfig=      --tls-hostname=
--kube-context=    --tls-key=
--name=            --tls-verify
--namespace=       --username=
--name-template=   --values=
--no-crd-hook      --verify
--no-hooks         --version=
--password=        --wait

Tiller server

The installation of the Tiller server is very simple, as long as you execute helm init:

[root@k8s-master ~]# helm init
Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation

Tiller itself runs inside the Kubernetes cluster as a containerized application:

[root@k8s-master ~]# kubectl get --namespace=kube-system svc tiller-deploy
NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE
tiller-deploy   ClusterIP   10.104.165.164   <none>        44134/TCP   30s
# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.14.3
# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.14.3 gcr.io/kubernetes-helm/tiller:v2.14.3

You can see the Service, Deployment, and Pod of Tiller.

[root@k8s-master ~]# kubectl get --namespace=kube-system svc tiller-deploy
NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE
tiller-deploy   ClusterIP   10.104.165.164   <none>        44134/TCP   19m
[root@k8s-master ~]# kubectl get -n kube-system deployments tiller-deploy
NAME            READY   UP-TO-DATE   AVAILABLE   AGE
tiller-deploy   1/1     1            1           16m
[root@k8s-master ~]# kubectl get -n kube-system pod tiller-deploy-75f6c87b87-qlw4h
NAME                             READY   STATUS    RESTARTS   AGE
tiller-deploy-75f6c87b87-qlw4h   1/1     Running   0          17m

Now helm version shows the server version as well.

[root@k8s-master ~]# helm version
Client: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}

Use Helm

After the Helm installation is successful, execute helm search to view the currently installable chart.

# helm search

Helm manages charts much as apt and yum manage packages. Just as apt and yum packages live in repositories, so do Helm charts.

[root@k8s-master ~]# helm repo list
NAME    URL
stable  https://kubernetes-charts.storage.googleapis.com
local   http://127.0.0.1:8879/charts

Helm comes with two repositories configured by default: stable and local. stable is the official repository; local is a local repository where users store charts they develop themselves.

helm search shows which repository a chart lives in, for example local/cool-chart or stable/acs-engine-autoscaler.

More repositories, such as a company's private repository, can be added with helm repo add. For repository management and maintenance, see the official documentation at https://docs.helm.sh.
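For example, adding a repository is a single command (the repository name and URL here are placeholders for your own):

helm repo add myrepo https://charts.example.com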

Like apt and yum, helm supports keyword search:

[root@k8s-master ~]# helm search mysql
NAME                               CHART VERSION   APP VERSION   DESCRIPTION
stable/mysql                       1.4.0           5.7.27        Fast, reliable, scalable, and easy to use open-source rel...
stable/mysqldump                   2.6.0           2.4.1         A Helm chart to help backup MySQL databases using mysqldump
stable/prometheus-mysql-exporter   0.5.1           v0.11.0       A Helm chart for prometheus mysql exporter with cloudsqlp...
stable/percona                     1.2.0           5.7.17        free, fully compatible, enhanced, open source drop-in rep...
stable/percona-xtradb-cluster      1.0.2           5.7.19        free, fully compatible, enhanced, open source drop-in rep...
stable/phpmyadmin                  3.0.7           4.9.1         phpMyAdmin is an mysql administration frontend
stable/gcloud-sqlproxy             0.6.1           1.11          DEPRECATED Google Cloud SQL Proxy
stable/mariadb                     6.11.1          10.3.18       Fast, reliable, scalable, and easy to use open-source rel...

Any chart whose information, including the DESCRIPTION field, matches the keyword appears in the results.

Installing chart is also easy, and you can install MySQL by executing the following command.

helm install stable/mysql

If you see the following error, it is usually due to insufficient permissions on the Tiller server.

[root@k8s-master ~]# helm install stable/mysql
Error: failed to download "stable/mysql" (hint: running `helm repo update` may help)

Run the following commands to grant the required permissions:

kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'

[root@k8s-master ~]# kubectl create serviceaccount -n kube-system tiller
serviceaccount/tiller created
[root@k8s-master ~]# kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
clusterrolebinding.rbac.authorization.k8s.io/tiller-cluster-rule created
[root@k8s-master ~]# kubectl patch deploy -n kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
deployment.extensions/tiller-deploy patched

Then execute it again.

[root@k8s-master ~]# helm install stable/mysql
NAME:   olfactory-bird
LAST DEPLOYED: Tue Oct 15 17:36:02 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME                       DATA  AGE
olfactory-bird-mysql-test  1     0s

==> v1/Deployment
NAME                  READY  UP-TO-DATE  AVAILABLE  AGE
olfactory-bird-mysql  0/1    1           0          0s

==> v1/PersistentVolumeClaim
NAME                  STATUS   VOLUME  CAPACITY  ACCESS MODES  STORAGECLASS  AGE
olfactory-bird-mysql  Pending                                                0s

==> v1/Pod(related)
NAME                                  READY  STATUS   RESTARTS  AGE
olfactory-bird-mysql-5cd5bc6b7-qmbj6  0/1    Pending  0         0s

==> v1/Secret
NAME                  TYPE    DATA  AGE
olfactory-bird-mysql  Opaque  2     0s

==> v1/Service
NAME                  TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)   AGE
olfactory-bird-mysql  ClusterIP  10.102.142.220  <none>       3306/TCP  0s

NOTES:
MySQL can be accessed via port 3306 on the following DNS name from within your cluster:
olfactory-bird-mysql.default.svc.cluster.local

To get your root password run:

    MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace default olfactory-bird-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo)

To connect to your database:

1. Run an Ubuntu pod that you can use as a client:

    kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il

2. Install the mysql client:

    $ apt-get update && apt-get install mysql-client -y

3. Connect using the mysql cli, then provide your password:
    $ mysql -h olfactory-bird-mysql -p

To connect to your database directly from outside the K8s cluster:
    MYSQL_HOST=127.0.0.1
    MYSQL_PORT=3306

    # Execute the following command to route the connection:
    kubectl port-forward svc/olfactory-bird-mysql 3306

    mysql -h ${MYSQL_HOST} -P${MYSQL_PORT} -u root -p${MYSQL_ROOT_PASSWORD}

The output is divided into three parts:

① A summary of this chart deployment:

NAME is the name of the release. Because we did not specify one with the -n parameter, Helm generated a random name, here olfactory-bird.

NAMESPACE is the namespace the release is deployed into. It defaults to default and can be set with --namespace.

STATUS is DEPLOYED, which means that chart has been deployed to the cluster.

② The resources the release currently contains: a Service, a Deployment, a Secret, and a PersistentVolumeClaim, all named olfactory-bird-mysql, following the ReleaseName-ChartName format.

③ The NOTES section explains how to use the release: how to access the Service, how to retrieve the database password, how to connect to the database, and so on.

With kubectl get, you can view the individual objects that make up release:

[root@k8s-master ~]# kubectl get service olfactory-bird-mysql
NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
olfactory-bird-mysql   ClusterIP   10.102.142.220   <none>        3306/TCP   4m14s
[root@k8s-master ~]# kubectl get deployment olfactory-bird-mysql
NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
olfactory-bird-mysql   0/1     1            0           4m29s
[root@k8s-master ~]# kubectl get pod olfactory-bird-mysql-5cd5bc6b7-qmbj6
NAME                                   READY   STATUS    RESTARTS   AGE
olfactory-bird-mysql-5cd5bc6b7-qmbj6   0/1     Pending   0          4m40s
[root@k8s-master ~]# kubectl get pvc olfactory-bird-mysql
NAME                   STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
olfactory-bird-mysql   Pending                                                     4m56s

Because no PersistentVolume has been prepared yet, the release is not currently available.

helm list shows the deployed releases, and helm delete removes a release.

[root@k8s-master ~]# helm list
NAME            REVISION  UPDATED                   STATUS    CHART        APP VERSION  NAMESPACE
olfactory-bird  1         Tue Oct 15 17:36:02 2019  DEPLOYED  mysql-1.4.0  5.7.27       default

Helm is used much like apt and yum, and it is very convenient to use Helm to manage Kubernetes applications.

Chart directory structure

Chart is the application packaging format of Helm. Chart consists of a series of files that describe the resources needed for Kubernetes to deploy applications, such as Service, Deployment, PersistentVolumeClaim, Secret, ConfigMap, and so on.

A single chart can be very simple, deploying just one service such as Memcached; it can also be complex enough to deploy an entire application, including HTTP servers, databases, message middleware, caches, and so on.

Chart places these files in a predefined directory structure, and the entire chart is usually packed into tar packages and marked with version information to facilitate Helm deployment.

Below we will discuss in detail the directory structure of chart and the various files it contains.

Chart directory structure

Take the previous MySQL chart as an example. Once a chart is installed, we can find its tar package in ~/.helm/cache/archive.

# ls ~/.helm/cache/archive/
mysql-1.4.0.tgz

After decompression, the MySQL chart directory structure is as follows:

[root@k8s-master ~]# tree mysql
mysql
├── Chart.yaml
├── README.md
├── templates
│   ├── configurationFiles-configmap.yaml
│   ├── deployment.yaml
│   ├── _helpers.tpl
│   ├── initializationFiles-configmap.yaml
│   ├── NOTES.txt
│   ├── pvc.yaml
│   ├── secrets.yaml
│   ├── servicemonitor.yaml
│   ├── svc.yaml
│   └── tests
│       ├── test-configmap.yaml
│       └── test.yaml
└── values.yaml

2 directories, 14 files

The directory name is the chart name (without version information), here mysql. It contains the following:

Chart.yaml

A YAML file describing the chart's summary information.

apiVersion: v1
appVersion: 5.7.27
description: Fast, reliable, scalable, and easy to use open-source relational database
  system.
engine: gotpl
home: https://www.mysql.com/
icon: https://cache.yisu.com/upload/information/20200309/33/54090.jpg
keywords:
- mysql
- database
- sql
maintainers:
- email: o.with@sportradar.com
  name: olemarkus
- email: viglesias@google.com
  name: viglesiasce
name: mysql
sources:
- https://github.com/kubernetes/charts
- https://github.com/docker-library/mysql
version: 1.4.0

Name and version are required, others are optional.

README.md

README file in Markdown format, equivalent to chart usage document, this file is optional.

LICENSE

A text file that describes the license information for chart. This file is optional.

requirements.yaml

A chart may depend on other charts, and these dependencies are declared in requirements.yaml, for example:
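The sketch below is purely illustrative; the dependency name, version, and repository URL are placeholders rather than values taken from the MySQL chart:

dependencies:
- name: mariadb
  version: 7.x.x
  repository: https://kubernetes-charts.storage.googleapis.com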

During the installation process, the dependent chart is also installed.

values.yaml

A chart can be customized with parameters at installation time; values.yaml provides the default values for those parameters.

Templates directory

The configuration templates for all kinds of Kubernetes resources are placed here. Helm injects the parameter values from values.yaml into the template to generate a standard YAML configuration file.

Templates are the most important part of chart and the most powerful part of Helm. Templates increase the flexibility of application deployment and can be applied to different environments, which we will discuss in more detail later.
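As a minimal illustration (a hypothetical fragment, not taken from the MySQL chart), a parameter defined in values.yaml can be referenced from a template like this:

# values.yaml
image:
  repository: nginx
  tag: stable

# templates/deployment.yaml (fragment)
      containers:
      - name: web
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"

When the chart is installed, Helm renders the image line as image: "nginx:stable", unless the user overrides these values.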

templates/NOTES.txt

A short usage note for the chart, displayed after the chart is installed successfully.

Like the templates, NOTES.txt can contain configuration parameters, and Helm injects their values dynamically.

Chart template

Helm uses templates to create resource configuration files in YAML format that Kubernetes can understand, and we will learn how to use templates through examples.

Take templates/secrets.yaml as an example:

{{- if not .Values.existingSecret }}
apiVersion: v1
kind: Secret
metadata:
  name: {{ template "mysql.fullname" . }}
  namespace: {{ .Release.Namespace }}
  labels:
    app: {{ template "mysql.fullname" . }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: "{{ .Release.Name }}"
    heritage: "{{ .Release.Service }}"
type: Opaque
data:
  {{ if .Values.mysqlRootPassword }}
  mysql-root-password: {{ .Values.mysqlRootPassword | b64enc | quote }}
  {{ else }}
  mysql-root-password: {{ randAlphaNum 10 | b64enc | quote }}
  {{ end }}
  {{ if .Values.mysqlPassword }}
  mysql-password: {{ .Values.mysqlPassword | b64enc | quote }}
  {{ else }}
  mysql-password: {{ randAlphaNum 10 | b64enc | quote }}
  {{ end }}
{{- if .Values.ssl.enabled }}
{{- range .Values.ssl.certificates }}
---
apiVersion: v1
kind: Secret
metadata:
  name: {{ .name }}
  labels:
    app: {{ template "mysql.fullname" $ }}
    chart: "{{ $.Chart.Name }}-{{ $.Chart.Version }}"
    release: "{{ $.Release.Name }}"
    heritage: "{{ $.Release.Service }}"
type: Opaque
data:
  ca.pem: {{ .ca | b64enc }}
  server-cert.pem: {{ .cert | b64enc }}
  server-key.pem: {{ .key | b64enc }}
{{- end }}
{{- end }}
{{- end }}

Structurally, the file looks very much like an ordinary Secret configuration, except that most of the property values have become {{ ... }} expressions. These expressions are template syntax: Helm charts are written with Go templates, which are very powerful and support variables, objects, functions, flow control, and more. Let's walk through templates/secrets.yaml to get a quick feel for the template language.

① {{ template "mysql.fullname" . }} defines the name of the Secret.

The function of the keyword template is to reference a subtemplate mysql.fullname. This subtemplate is defined in the templates/_helpers.tpl file.

This definition is fairly complex because it uses objects, functions, and flow control from the template language. Don't worry if it isn't clear yet; the point here is that information needed by several templates can be defined once as a sub-template in templates/_helpers.tpl and then referenced with the template keyword.

Here mysql.fullname is the release name and the chart name concatenated together.

Chart best practice is to give all resources the same name; in this chart the Secret, Deployment, PersistentVolumeClaim, and Service all take their name from the mysql.fullname sub-template.
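A simplified sketch of such a definition in templates/_helpers.tpl could look like the following; the real chart's helper is more elaborate (it also handles name overrides, for instance):

{{- define "mysql.fullname" -}}
{{- printf "%s-%s" .Release.Name .Chart.Name | trunc 63 | trimSuffix "-" -}}
{{- end -}}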

② Chart and Release are predefined objects for Helm, and each object has its own properties that can be used in templates. If you install chart using the following command:

helm install stable/mysql -n my

So:

The value of {{.Chart.Name}} is mysql.

The value of {{ .Chart.Version }} is 1.4.0.

The value of {{.Release.Name}} is my.

{{.Release.Service}} always takes the value Tiller.

{{ template "mysql.fullname" . }} evaluates to my-mysql.

③ This is where the value of mysql-root-password is set, using if-else flow control. The logic is:

If .Values.mysqlRootPassword has a value, it is base64-encoded; otherwise a 10-character string is randomly generated and encoded.

Values is another predefined object; it represents the chart's values.yaml file. .Values.mysqlRootPassword is the mysqlRootPassword parameter defined in values.yaml:
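The relevant fragment of the chart's values.yaml (the same text appears in the helm inspect values output later in this article) is:

## Specify password for root user
##
## Default: random 10 character string
# mysqlRootPassword: testing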

Because mysqlRootPassword is commented out and has no value, the if check falls through to the else branch, so a random password is generated.

randAlphaNum, b64enc, and quote are all functions available in the template language, and they can be chained together with pipes. {{ randAlphaNum 10 | b64enc | quote }} generates a random string of length 10, base64-encodes it, and finally wraps it in double quotes.
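For instance, with an illustrative value that is not taken from the chart:

{{ "secret" | b64enc | quote }}

renders as "c2VjcmV0", i.e. the base64 encoding of secret wrapped in double quotes.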

The templates/secrets.yaml example shows the main capabilities of chart templates. The key takeaway is that templates parameterize the chart, so the application can be customized flexibly through values.yaml.

No matter how complex the application, it can be described with Go templates; more complex charts simply use more functions, objects, and flow control. For beginners, my advice is to study the official charts as much as possible. Following the 80/20 rule, they already cover most use cases and embody best practices. When you run into functions, objects, or syntax you don't understand, consult the official documentation at https://docs.helm.sh.

Practice: the MySQL chart

Preparation before installing the chart

Before installing a chart you need to know how to use it. That information usually lives in values.yaml and README.md. Besides downloading the chart source and reading those files, running helm inspect values may be more convenient.

[root@k8s-master ~]# helm inspect values stable/mysql
## mysql image version
## ref: https://hub.docker.com/r/library/mysql/tags/
##
image: "mysql"
imageTag: "5.7.14"

busybox:
  image: "busybox"
  tag: "1.29.3"

testFramework:
  enabled: true
  image: "dduportal/bats"
  tag: "0.4.0"

## Specify password for root user
##
## Default: random 10 character string
# mysqlRootPassword: testing

## Create a database user
##
# mysqlUser:
## Default: random 10 character string
# mysqlPassword:

## Allow unauthenticated access, uncomment to enable
##
# mysqlAllowEmptyPassword: true

## Create a database
##
# mysqlDatabase:

## Specify an imagePullPolicy (Required)
## It's recommended to change this to 'Always' if the image tag is 'latest'
## ref: http://kubernetes.io/docs/user-guide/images/#updating-images
##
imagePullPolicy: IfNotPresent

## Additionnal arguments that are passed to the MySQL container.
## For example use --default-authentication-plugin=mysql_native_password if older clients need to
## connect to a MySQL 8 instance.
args: []

extraVolumes: |
  # - name: extras
  #   emptyDir: {}

extraVolumeMounts: |
  # - name: extras
  #   mountPath: /usr/share/extras
  #   readOnly: true

extraInitContainers: |
  # - name: do-something
  #   image: busybox
  #   command: ['do', 'something']

# Optionally specify an array of imagePullSecrets.
# Secrets must be manually created in the namespace.
# ref: https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod
# imagePullSecrets:
# - name: myRegistryKeySecretName

## Node selector
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
nodeSelector: {}

## Tolerations for pod assignment
## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []

livenessProbe:
  initialDelaySeconds: 30
  periodSeconds: 10
  timeoutSeconds: 5
  successThreshold: 1
  failureThreshold: 3

readinessProbe:
  initialDelaySeconds: 5
  periodSeconds: 10
  timeoutSeconds: 1
  successThreshold: 1
  failureThreshold: 3

## Persist data to a persistent volume
persistence:
  enabled: true
  ## database data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  # storageClass: "-"
  accessMode: ReadWriteOnce
  size: 8Gi
  annotations: {}

## Use an alternate scheduler, e.g. "stork".
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
# schedulerName:

## Security context
securityContext:
  enabled: false
  runAsUser: 999
  fsGroup: 999

## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
  requests:
    memory: 256Mi
    cpu: 100m

# Custom mysql configuration files path
configurationFilesPath: /etc/mysql/conf.d/

# Custom mysql configuration files used to override default mysql settings
configurationFiles: {}
#  mysql.cnf: |-
#    [mysqld]
#    skip-name-resolve
#    ssl-ca=/ssl/ca.pem
#    ssl-cert=/ssl/server-cert.pem
#    ssl-key=/ssl/server-key.pem

# Custom mysql init SQL files used to initialize the database
initializationFiles: {}
#  first-db.sql: |-
#    CREATE DATABASE IF NOT EXISTS first DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
#  second-db.sql: |-
#    CREATE DATABASE IF NOT EXISTS second DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;

metrics:
  enabled: false
  image: prom/mysqld-exporter
  imageTag: v0.10.0
  imagePullPolicy: IfNotPresent
  resources: {}
  annotations: {}
    # prometheus.io/scrape: "true"
    # prometheus.io/port: "9104"
  livenessProbe:
    initialDelaySeconds: 15
    timeoutSeconds: 5
  readinessProbe:
    initialDelaySeconds: 5
    timeoutSeconds: 1
  flags: []
  serviceMonitor:
    enabled: false
    additionalLabels: {}

## Configure the service
## ref: http://kubernetes.io/docs/user-guide/services/
service:
  annotations: {}
  ## Specify a service type
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services---service-types
  type: ClusterIP
  port: 3306
  # nodePort: 32000
  # loadBalancerIP:

ssl:
  enabled: false
  secret: mysql-ssl-certs
  certificates:
#  - name: mysql-ssl-certs
#    ca: |-
#      -----BEGIN CERTIFICATE-----
#      ...
#      -----END CERTIFICATE-----
#    cert: |-
#      -----BEGIN CERTIFICATE-----
#      ...
#      -----END CERTIFICATE-----
#    key: |-
#      -----BEGIN RSA PRIVATE KEY-----
#      ...
#      -----END RSA PRIVATE KEY-----

## Populates the 'TZ' system timezone environment variable
## ref: https://dev.mysql.com/doc/refman/5.7/en/time-zone-support.html
##
## Default: nil (mysql will use image's default timezone, normally UTC)
## Example: 'Australia/Sydney'
# timezone:

# Deployment Annotations
deploymentAnnotations: {}

# To be added to the database server pod(s)
podAnnotations: {}
podLabels: {}

## Set pod priorityClassName
# priorityClassName: {}

## Init container resources defaults
initContainer:
  resources:
    requests:
      memory: 10Mi
      cpu: 10m

The output is essentially the content of values.yaml. Reading the comments tells you which parameters the MySQL chart supports and what needs to be prepared before installation. One part concerns storage:

## Persist data to a persistent volume
persistence:
  enabled: true
  ## database data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  # storageClass: "-"
  accessMode: ReadWriteOnce
  size: 8Gi
  annotations: {}

The chart defines a PersistentVolumeClaim that requests an 8Gi PersistentVolume. Since our lab environment does not support dynamic provisioning, we have to create the PV in advance. The configuration file mysql-pv.yml is as follows:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 8Gi
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /nfsdata/mysql-pv
    server: 192.168.77.10

Create a PV mysql-pv:

[root@k8s-master ~]# kubectl apply -f mysql-pv.yml
persistentvolume/mysql-pv created
[root@k8s-master ~]# kubectl get pv
NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
mysql-pv   8Gi        RWO            Retain           Available                                   5s

Then you can install chart.

Customized installation of chart

Instead of accepting the defaults in values.yaml, we can customize the chart, for example by setting mysqlRootPassword.

Helm has two ways to pass configuration parameters:

Specify your own values file.

The usual practice is to generate a values file with helm inspect values mysql > myvalues.yaml, set mysqlRootPassword in it, and then run helm install --values=myvalues.yaml mysql.

Pass parameter values directly with --set, for example:

[root@k8s-master ~]# helm install stable/mysql --set mysqlRootPassword=abc123 -n my
NAME:   my
LAST DEPLOYED: Tue Oct 15 21:52:35 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME           DATA  AGE
my-mysql-test  1     0s

==> v1/Deployment
NAME      READY  UP-TO-DATE  AVAILABLE  AGE
my-mysql  0/1    1           0          0s

==> v1/PersistentVolumeClaim
NAME      STATUS   VOLUME  CAPACITY  ACCESS MODES  STORAGECLASS  AGE
my-mysql  Pending                                                0s

==> v1/Pod(related)
NAME                       READY  STATUS   RESTARTS  AGE
my-mysql-6dcc9b7d67-qh7m4  0/1    Pending  0         0s

==> v1/Secret
NAME      TYPE    DATA  AGE
my-mysql  Opaque  2     0s

==> v1/Service
NAME      TYPE       CLUSTER-IP    EXTERNAL-IP  PORT(S)   AGE
my-mysql  ClusterIP  10.105.14.41  <none>       3306/TCP  0s

NOTES:
MySQL can be accessed via port 3306 on the following DNS name from within your cluster:
my-mysql.default.svc.cluster.local

To get your root password run:

    MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace default my-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo)

To connect to your database:

1. Run an Ubuntu pod that you can use as a client:

    kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il

2. Install the mysql client:

    $ apt-get update && apt-get install mysql-client -y

3. Connect using the mysql cli, then provide your password:
    $ mysql -h my-mysql -p

To connect to your database directly from outside the K8s cluster:
    MYSQL_HOST=127.0.0.1
    MYSQL_PORT=3306

    # Execute the following command to route the connection:
    kubectl port-forward svc/my-mysql 3306

    mysql -h ${MYSQL_HOST} -P${MYSQL_PORT} -u root -p${MYSQL_ROOT_PASSWORD}

mysqlRootPassword is set to abc123. In addition, -n names the release my, and the resources are all named my-mysql accordingly.

You can check the release's current status with helm list and helm status.

[root@k8s-master ~]# helm list
NAME  REVISION  UPDATED                   STATUS    CHART        APP VERSION  NAMESPACE
my    1         Tue Oct 15 21:52:35 2019  DEPLOYED  mysql-1.4.0  5.7.27       default
[root@k8s-master ~]# helm status my
LAST DEPLOYED: Tue Oct 15 21:52:35 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME           DATA  AGE
my-mysql-test  1     10m

==> v1/Deployment
NAME      READY  UP-TO-DATE  AVAILABLE  AGE
my-mysql  1/1    1           1          10m

==> v1/PersistentVolumeClaim
NAME      STATUS  VOLUME    CAPACITY  ACCESS MODES  STORAGECLASS  AGE
my-mysql  Bound   mysql-pv  8Gi       RWO                         10m

==> v1/Pod(related)
NAME                       READY  STATUS   RESTARTS  AGE
my-mysql-6dcc9b7d67-qh7m4  1/1    Running  0         10m

==> v1/Secret
NAME      TYPE    DATA  AGE
my-mysql  Opaque  2     10m

==> v1/Service
NAME      TYPE       CLUSTER-IP    EXTERNAL-IP  PORT(S)   AGE
my-mysql  ClusterIP  10.105.14.41  <none>       3306/TCP  10m

NOTES:
MySQL can be accessed via port 3306 on the following DNS name from within your cluster:
my-mysql.default.svc.cluster.local

To get your root password run:

    MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace default my-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo)

To connect to your database:

1. Run an Ubuntu pod that you can use as a client:

    kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il

2. Install the mysql client:

    $ apt-get update && apt-get install mysql-client -y

3. Connect using the mysql cli, then provide your password:
    $ mysql -h my-mysql -p

To connect to your database directly from outside the K8s cluster:
    MYSQL_HOST=127.0.0.1
    MYSQL_PORT=3306

    # Execute the following command to route the connection:
    kubectl port-forward svc/my-mysql 3306

    mysql -h ${MYSQL_HOST} -P${MYSQL_PORT} -u root -p${MYSQL_ROOT_PASSWORD}

The PVC is now Bound and the Deployment is AVAILABLE.

Upgrade and roll back release

Once a release has been published, it can be upgraded with helm upgrade, applying new configuration through --values or --set. For example, to upgrade the current MySQL version to 5.7.15:

[root@k8s-master ~]# helm upgrade --set imageTag=5.7.15 my stable/mysql
Release "my" has been upgraded.
LAST DEPLOYED: Tue Oct 15 22:13:15 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME           DATA  AGE
my-mysql-test  1     20m

==> v1/Deployment
NAME      READY  UP-TO-DATE  AVAILABLE  AGE
my-mysql  1/1    1           1          20m

==> v1/PersistentVolumeClaim
NAME      STATUS  VOLUME    CAPACITY  ACCESS MODES  STORAGECLASS  AGE
my-mysql  Bound   mysql-pv  8Gi       RWO                         20m

==> v1/Pod(related)
NAME                       READY  STATUS    RESTARTS  AGE
my-mysql-67f47db69b-tx2kt  0/1    Init:0/1  0         0s
my-mysql-6dcc9b7d67-qh7m4  1/1    Running   0         20m

==> v1/Secret
NAME      TYPE    DATA  AGE
my-mysql  Opaque  2     20m

==> v1/Service
NAME      TYPE       CLUSTER-IP    EXTERNAL-IP  PORT(S)   AGE
my-mysql  ClusterIP  10.105.14.41  <none>       3306/TCP  20m

NOTES:
MySQL can be accessed via port 3306 on the following DNS name from within your cluster:
my-mysql.default.svc.cluster.local

To get your root password run:

    MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace default my-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo)

To connect to your database:

1. Run an Ubuntu pod that you can use as a client:

    kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il

2. Install the mysql client:

    $ apt-get update && apt-get install mysql-client -y

3. Connect using the mysql cli, then provide your password:
    $ mysql -h my-mysql -p

To connect to your database directly from outside the K8s cluster:
    MYSQL_HOST=127.0.0.1
    MYSQL_PORT=3306

    # Execute the following command to route the connection:
    kubectl port-forward svc/my-mysql 3306

    mysql -h ${MYSQL_HOST} -P${MYSQL_PORT} -u root -p${MYSQL_ROOT_PASSWORD}

Wait some time for the upgrade to succeed.

[root@k8s-master ~]# kubectl get deployments my-mysql -o wide
NAME       READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES         SELECTOR
my-mysql   1/1     1            1           21m   my-mysql     mysql:5.7.15   app=my-mysql,release=my

helm history lists all revisions of a release, and helm rollback rolls back to any of them.

[root@k8s-master ~]# helm history my
REVISION  UPDATED                   STATUS      CHART        DESCRIPTION
1         Tue Oct 15 21:52:35 2019  SUPERSEDED  mysql-1.4.0  Install complete
2         Tue Oct 15 22:13:15 2019  DEPLOYED    mysql-1.4.0  Upgrade complete
[root@k8s-master ~]# helm rollback my 1
Rollback was a success.

The rollback succeeded and MySQL reverted to 5.7.14.

[root@k8s-master ~]# helm history my
REVISION  UPDATED                   STATUS      CHART        DESCRIPTION
1         Tue Oct 15 21:52:35 2019  SUPERSEDED  mysql-1.4.0  Install complete
2         Tue Oct 15 22:13:15 2019  SUPERSEDED  mysql-1.4.0  Upgrade complete
3         Tue Oct 15 22:16:21 2019  DEPLOYED    mysql-1.4.0  Rollback to 1
[root@k8s-master ~]# kubectl get deployments my-mysql -o wide
NAME       READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES         SELECTOR
my-mysql   1/1     1            1           24m   my-mysql     mysql:5.7.14   app=my-mysql,release=my

Develop your own chart

The community provides many official charts, but to deploy our own microservice applications we still need to develop our own chart.

Create chart

Run helm create mychart to create a chart named mychart:

[root@k8s-master ~]# helm create mychart
Creating mychart
[root@k8s-master ~]# tree mychart/
mychart/
├── charts
├── Chart.yaml
├── templates
│   ├── deployment.yaml
│   ├── _helpers.tpl
│   ├── ingress.yaml
│   ├── NOTES.txt
│   ├── service.yaml
│   └── tests
│       └── test-connection.yaml
└── values.yaml

3 directories, 8 files

Helm creates the mychart directory and generates the skeleton chart files, and we can develop our own chart on top of them.

The newly created chart contains an example nginx application by default. Its values.yaml is as follows:

# Default values for mychart.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 1

image:
  repository: nginx
  tag: stable
  pullPolicy: IfNotPresent

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

service:
  type: ClusterIP
  port: 80

ingress:
  enabled: false
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: chart-example.local
      paths: []
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

nodeSelector: {}

tolerations: []

affinity: {}

When developing, it is recommended to refer to the templates, values.yaml, and Chart.yaml of the official charts, which contain a large number of best practices and the most commonly used functions and flow control.

Debug chart

Any program can have bugs, and charts are no exception. Helm provides two debugging tools: helm lint and helm install --dry-run --debug.

helm lint checks a chart's syntax, reports errors, and gives suggestions.

For example, suppose we deliberately leave out a colon somewhere in values.yaml:
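One illustrative way to introduce such an error (hypothetical; the article does not show the exact line) is to drop the colon after pullPolicy in the image section:

image:
  repository: nginx
  tag: stable
  pullPolicy IfNotPresent    # the missing ':' makes this line invalid YAML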

Helm lint mychart will point out this syntax error.

[root@k8s-master ~]# helm lint mychart
==> Linting mychart
[INFO] Chart.yaml: icon is recommended
[ERROR] values.yaml: unable to parse YAML
	error converting YAML to JSON: yaml: line 12: could not find expected ':'

Error: 1 chart(s) linted, 1 chart(s) failed

The mychart directory is passed to helm lint as an argument. Once the error is fixed, the chart passes the check.

helm install --dry-run --debug simulates installing the chart and prints the YAML generated from each template.

[root@k8s-master ~]# helm install --dry-run --debug mychart
[debug] Created tunnel using local port: '37754'
[debug] SERVER: "127.0.0.1:37754"
[debug] Original chart version: ""
[debug] CHART PATH: /root/mychart

NAME:   quieting-quetzal
REVISION: 1
RELEASED: Wed Oct 16 23:12:34 2019
CHART: mychart-0.1.0
USER-SUPPLIED VALUES:
{}

COMPUTED VALUES:
affinity: {}
fullnameOverride: ""
image:
  pullPolicy: IfNotPresent
  repository: nginx
  tag: stable
imagePullSecrets: []
ingress:
  annotations: {}
  enabled: false
  hosts:
  - host: chart-example.local
    paths: []
  tls: []
nameOverride: ""
nodeSelector: {}
replicaCount: 1
resources: {}
service:
  externalPort: 80
  internalPort: 80
  type: ClusterIP
tolerations: []

HOOKS:
---
# quieting-quetzal-mychart-test-connection
apiVersion: v1
kind: Pod
metadata:
  name: "quieting-quetzal-mychart-test-connection"
  labels:
    app.kubernetes.io/name: mychart
    helm.sh/chart: mychart-0.1.0
    app.kubernetes.io/instance: quieting-quetzal
    app.kubernetes.io/version: "1.0"
    app.kubernetes.io/managed-by: Tiller
  annotations:
    "helm.sh/hook": test-success
spec:
  containers:
    - name: wget
      image: busybox
      command: ['wget']
      args: ['quieting-quetzal-mychart:']
  restartPolicy: Never
MANIFEST:
---
# Source: mychart/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: quieting-quetzal-mychart
  labels:
    app.kubernetes.io/name: mychart
    helm.sh/chart: mychart-0.1.0
    app.kubernetes.io/instance: quieting-quetzal
    app.kubernetes.io/version: "1.0"
    app.kubernetes.io/managed-by: Tiller
spec:
  type: ClusterIP
  ports:
    - port: 
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: mychart
    app.kubernetes.io/instance: quieting-quetzal
---
# Source: mychart/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: quieting-quetzal-mychart
  labels:
    app.kubernetes.io/name: mychart
    helm.sh/chart: mychart-0.1.0
    app.kubernetes.io/instance: quieting-quetzal
    app.kubernetes.io/version: "1.0"
    app.kubernetes.io/managed-by: Tiller
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: mychart
      app.kubernetes.io/instance: quieting-quetzal
  template:
    metadata:
      labels:
        app.kubernetes.io/name: mychart
        app.kubernetes.io/instance: quieting-quetzal
    spec:
      containers:
        - name: mychart
          image: "nginx:stable"
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: http
          readinessProbe:
            httpGet:
              path: /
              port: http
          resources: {}

We can examine these outputs to see if they are in line with expectations.

Again, the mychart directory is passed as an argument to helm install --dry-run --debug.

Manage and install chart

Install chart

When the chart is ready, we can install it. Helm supports four installation methods:

Install a chart from a repository, for example: helm install stable/nginx

Install from a tar archive, for example: helm install ./nginx-1.2.3.tgz

Install from a local chart directory, for example: helm install ./nginx

Install from a URL, for example: helm install https://example.com/charts/nginx-1.2.3.tgz

Here we use the local directory to install:

[root@k8s-master ~]# helm install mychart
NAME:   exasperated-olm
LAST DEPLOYED: Wed Oct 16 23:49:18 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Deployment
NAME                     READY  UP-TO-DATE  AVAILABLE  AGE
exasperated-olm-mychart  0/1    1           0          0s

==> v1/Pod(related)
NAME                                      READY  STATUS             RESTARTS  AGE
exasperated-olm-mychart-6845d8bb6c-mflfj  0/1    ContainerCreating  0         0s

==> v1/Service
NAME                     TYPE       CLUSTER-IP     EXTERNAL-IP  PORT(S)  AGE
exasperated-olm-mychart  ClusterIP  10.109.135.88  <none>       80/TCP   0s

NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=mychart,app.kubernetes.io/instance=exasperated-olm" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl port-forward $POD_NAME 8080:80

When chart is deployed to a Kubernetes cluster, it can be tested more comprehensively.

Add the chart to a repository

Once the chart passes testing, it can be added to a repository so the rest of the team can use it. Any HTTP server can serve as a chart repository; here we will set one up on k8s-node1.

Start an httpd container on k8s-node1.

[root@k8s-node1 ~]# mkdir /var/www
[root@k8s-node1 ~]# docker run -d -p 8080:80 -v /var/www:/usr/local/apache2/htdocs/ httpd
f571b574a59017de31b615402ae7d6886cde18907bb14c22fe82b8a68757e859

Package mychart with helm package.

[root@k8s-master ~]# helm package mychart
Successfully packaged chart and saved it to: /root/mychart-0.1.0.tgz

Run helm repo index to generate the repository's index file.

[root@k8s-master ~]# mkdir myrepo
[root@k8s-master ~]# mv mychart-0.1.0.tgz myrepo
[root@k8s-master ~]# helm repo index myrepo/ --url http://192.168.77.20:8080/charts
[root@k8s-master ~]# ls myrepo/
index.yaml  mychart-0.1.0.tgz

Helm scans all the .tgz packages in the myrepo directory and generates index.yaml. --url specifies the URL of the new repository. The generated index.yaml records information about every chart in the repository:

apiVersion: v1
entries:
  mychart:
  - apiVersion: v1
    appVersion: "1.0"
    created: "2019-10-16T23:58:54.835722169+08:00"
    description: A Helm chart for Kubernetes
    digest: 31c8cc4336c1afd09be0094a6bbb5d4c37abb20bbffbcc0c3e72101f6f0635b6
    name: mychart
    urls:
    - http://192.168.77.20:8080/charts/mychart-0.1.0.tgz
    version: 0.1.0
generated: "2019-10-16T23:58:54.834212265+08:00"

Currently, there is only one chart, mychart.

Upload mychart-0.1.0.tgz and index.yaml to the /var/www/charts directory on k8s-node1.
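One way to do this (assuming SSH access from k8s-master to k8s-node1; the exact commands are illustrative, not part of the original walkthrough):

[root@k8s-node1 ~]# mkdir -p /var/www/charts
[root@k8s-master ~]# scp myrepo/index.yaml myrepo/mychart-0.1.0.tgz 192.168.77.20:/var/www/charts/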

Add the new repository to Helm with helm repo add.

[root@k8s-master ~]# helm repo add newrepo http://192.168.77.20:8080/charts
"newrepo" has been added to your repositories
[root@k8s-master ~]# helm repo list
NAME     URL
stable   https://kubernetes-charts.storage.googleapis.com
local    http://127.0.0.1:8879/charts
newrepo  http://192.168.77.20:8080/charts

The repository is named newrepo, and Helm downloads its index.yaml from it.

Now helm search can find mychart.

[root@k8s-master ~]# helm search mychart
NAME             CHART VERSION  APP VERSION  DESCRIPTION
local/mychart    0.1.0          1.0          A Helm chart for Kubernetes
newrepo/mychart  0.1.0          1.0          A Helm chart for Kubernetes

Besides newrepo/mychart there is also local/mychart, because the earlier packaging step also synchronized mychart into the local repository.

You can install mychart directly from the new repository.

[root@k8s-master ~]# helm install newrepo/mychart
NAME:   ardent-pug
LAST DEPLOYED: Thu Oct 17 00:06:23 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Deployment
NAME                READY  UP-TO-DATE  AVAILABLE  AGE
ardent-pug-mychart  0/1    1           0          0s

==> v1/Pod(related)
NAME                               READY  STATUS             RESTARTS  AGE
ardent-pug-mychart-7858fd5f-j8mpb  0/1    ContainerCreating  0         0s

==> v1/Service
NAME                TYPE       CLUSTER-IP   EXTERNAL-IP  PORT(S)  AGE
ardent-pug-mychart  ClusterIP  10.98.167.6  <none>       80/TCP   0s

NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=mychart,app.kubernetes.io/instance=ardent-pug" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl port-forward $POD_NAME 8080:80

If new charts are added to the repository later, update the local index with helm repo update.

[root@k8s-master ~]# helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "newrepo" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete.

This operation is equivalent to the yum update of CentOS.

Summary

Helm enables us to install, deploy, upgrade, and remove containerized applications in the same way that apt manages deb packages.

Helm consists of a client and a Tiller server. The client is responsible for managing chart and the server is responsible for managing release.

Chart is the application packaging format of Helm, which consists of a set of files and directories. The most important one is the template, which defines the configuration information of all kinds of Kubernetes resources, and Helm instantiates the template through values.yaml during deployment.

Helm allows users to develop their own chart and provides users with debugging tools. Users can build their own chart repositories and share chart with the team.

Helm helps users run and manage microservice applications on Kubernetes efficiently, which makes it a very important tool.
