
A Preliminary Look at Helm v3, a Native Microservice Management Tool for K8s (2)

2025-03-28 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/03 Report--

Table of contents:

Debug the application according to the microservice's release requirements, and publish the microservice using a chart template

1. A production-style Helm template for releasing a Dubbo microservice

Template address:

git clone git@gitee.com:zhaocheng172/helm-dubbo.git

(Please send me your public key first; otherwise you will not be able to pull the repository.)

3.6 Chart template

The core of Helm is the template: templated Kubernetes manifest files.

It is essentially a Go template. Helm adds a number of things on top of Go templates, such as custom metadata, an extended function library, and programming-style constructs such as conditionals and pipelines. All of this makes our templates much more expressive.

As mentioned above, the most important thing in Helm is template rendering. In the YAML fields that change frequently we place variables, and those variables can be overridden from the helm command line or from a values file, so the YAML is rendered dynamically; the key piece is the values file. What Helm does for us is manage YAML centrally and render it dynamically, because a manifest has many fields, some of which change at every later deployment, and those changing fields need to be modified in batches.

Before Helm, we would typically design a generic template and use sed to substitute the frequently-changing values, for example replacing the image (usually an address on Harbor, built in advance from a Dockerfile) and the application name to deploy a new application, then immediately running kubectl apply. You end up writing a lot of substitution commands, which is clearly inflexible: as files accumulate, the management cost grows. The better way is to keep the variable fields out of the individual files and let all of the YAML read them from one place, rendering them into the final manifests. That is exactly what Helm does, and it is Helm's core function.
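As a sketch of that pre-Helm workflow (the file, application, and image names are made up for illustration), the sed-based substitution looks like this:

```shell
# Pre-Helm approach: keep a generic manifest with placeholder fields,
# substitute the frequently-changing values with sed, then apply.
cat > deployment.tpl.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: __NAME__
spec:
  replicas: __REPLICAS__
  template:
    spec:
      containers:
      - name: __NAME__
        image: __IMAGE__
EOF

# Replace the placeholders for this particular release
sed -e 's/__NAME__/myapp/g' \
    -e 's/__REPLICAS__/2/g' \
    -e 's|__IMAGE__|harbor.example.com/library/nginx:1.17|g' \
    deployment.tpl.yaml > deployment.yaml

grep image: deployment.yaml
# (in a real cluster you would now run: kubectl apply -f deployment.yaml)
```

Every new application means another copy of these sed commands; that duplication is exactly what Helm's values rendering removes.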

1. Template

With templates, how do we feed in our configuration? This is where the values file comes in. Together, these two parts are the core functions of a chart.

When we deploy an application, such as releasing a microservice, we first need a chart. The chart can come from the Internet, be shared by someone else, or be made by ourselves. The core of the chart is the template, which is rendered by Go's template engine; Helm just adds extras on top of Go templates to make them more flexible, such as conditionals.

Next, deploy an nginx application to get familiar with template usage. First delete all files under the templates directory; we will create template files ourselves, and the syntax and control structures exist to satisfy the broader requirements of templating.

For example, first create a chart. It contains four parts. templates/ holds the YAML needed to deploy an application, such as the Deployment, Service, and Ingress; we turn the frequently-changing fields into variables and define their values in values.yaml. When you run helm install, the values are rendered into the templates. There is also _helpers.tpl, which holds named templates reusable by the Deployment, Service, and so on, such as common fields. NOTES.txt is the message shown after deploying an application, for example the address to access it. Finally, the tests/ directory is for checking whether a deployed application works properly.

[root@k8s-master1 one_chart]# helm create one
Creating one
[root@k8s-master1 one_chart]# ls
one
[root@k8s-master1 one_chart]# cd one/
[root@k8s-master1 one]# ls
charts  Chart.yaml  templates  values.yaml
[root@k8s-master1 one]# tree .
.
├── charts
├── Chart.yaml
├── templates
│   ├── deployment.yaml
│   ├── _helpers.tpl
│   ├── ingress.yaml
│   ├── NOTES.txt
│   ├── serviceaccount.yaml
│   ├── service.yaml
│   └── tests
│       └── test-connection.yaml
└── values.yaml

First prepare two yaml, and then we will render some fields that change frequently.

[root@k8s-master1 templates]# kubectl create deployment application --image=nginx --dry-run -o yaml > deployment.yaml
[root@k8s-master1 templates]# kubectl expose deployment application --port=80 --target-port=80 --dry-run -o yaml > service.yaml

Then we release the service and test whether it can be accessed normally.

Normally we would release such a service with kubectl apply -f. Now let's publish it with Helm instead. The effect is the same; published this way there is as yet no difference from apply -f. Helm's core value is that it lets us render variables effectively, making microservice releases more flexible: through the template we can render the image address, the service name, the replica count, and so on, introduce them dynamically, and quickly release multiple sets of microservices. In short, one generic template can deploy many regular applications.

[root@k8s-master1 one_chart]# helm install application one/
NAME: application
LAST DEPLOYED: Wed Dec 18 11:44:21 2019
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
[root@k8s-master1 templates]# kubectl get pod,svc
NAME                               READY   STATUS    RESTARTS   AGE
pod/application-6c45f48b87-2gl95   1/1     Running   0          10s
NAME                  TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/application   ClusterIP   10.0.0.221   <none>        80/TCP    9s
[root@k8s-master1 templates]# curl -I 10.0.0.221
HTTP/1.1 200 OK
Server: nginx/1.17.6
Date: Wed, 18 Dec 2019 03:35:22 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 19 Nov 2019 12:50:08 GMT
Connection: keep-alive
ETag: "5dd3e500-2019"
Accept-Ranges: bytes

That is, with one chart template you can deploy many regular applications; the fields that change most often are the name, the replica count, and the image.

2. Debugging

Helm also provides the --dry-run and --debug parameters to help you verify a template's correctness. Adding these two parameters to helm install prints the computed values and the rendered resource manifests instead of actually deploying a release.

For example, let's debug the chart package created above:

helm install pod-nodejs-tools --dry-run /root/one

3. Built-in objects

{{ .Release.Name }} is a built-in variable: it is the name we gave the application during helm install, passed in automatically, so we can use it directly as the name of the deployed resources.

{{ .Chart.Name }} is also a Helm built-in variable; after we create a chart, its value is taken from the Chart.yaml file. Of course, project names are usually kept uniform, and we can also define the name ourselves with {{ .Values.name }}, that is, in values.yaml.

After wiring up the variables, we can render the chart to check that the output is as expected.

Here pod-base-common is the release name that {{ .Release.Name }} takes effect with, --dry-run renders without deploying, and one is my chart directory.

Commonly used Release and Chart built-in variables can be seen directly in the chart package:

[root@k8s-master1 one]# cat templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    chart: {{ .Chart.Name }}
    app: {{ .Chart.Name }}
  name: {{ .Release.Name }}
spec:
  replicas: {{ .Values.replicas }}
  selector:
    matchLabels:
      app: {{ .Values.label }}
  template:
    metadata:
      labels:
        app: {{ .Values.label }}
    spec:
      containers:
      - image: {{ .Values.image }}:{{ .Values.imagetag }}
        name: {{ .Release.Name }}
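For reference, here is a sketch (not actual output) of roughly what that template renders to when installed with the release name pod-base-common and the values replicas: 2, image: nginx, imagetag: 1.16, label: nginx:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    chart: one              # from .Chart.Name
    app: one
  name: pod-base-common     # from .Release.Name
spec:
  replicas: 2               # from .Values.replicas
  selector:
    matchLabels:
      app: nginx            # from .Values.label
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.16   # .Values.image and .Values.imagetag
        name: pod-base-common
```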

4. Values: custom variables

The Values object provides values to the chart templates. Its content comes from four sources:

The values.yaml file in the chart package

The values.yaml file of a parent chart package

A custom YAML file passed with the -f or --values parameter of helm install or helm upgrade

Values passed with the --set parameter

[root@k8s-master1 one]# helm install pod-mapper-service --set replicas=1 ../one/

The --set flag overrides the value from values.yaml, so one replica is created instead of two.

Values provided by a chart's values.yaml can be overridden by a user-supplied values file, which in turn can be overridden by --set parameters.
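A sketch of that precedence (the file name my-values.yaml is hypothetical); later sources win:

```yaml
# chart's values.yaml (lowest precedence)
replicas: 2

# my-values.yaml, passed with: helm install app ./one -f my-values.yaml
replicas: 3

# command line: helm install app ./one -f my-values.yaml --set replicas=1
# --set has the highest precedence, so the release is rendered with:
# replicas: 1
```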

[root@k8s-master1 one]# cat values.yaml
replicas: 2
image: nginx
imagetag: 1.16
label: nginx
[root@k8s-master1 one_chart]# helm install pod-base-common --dry-run one/
[root@k8s-master1 ~]# helm install pod-css-commons /root/one_chart/one/
NAME: pod-css-commons
LAST DEPLOYED: Wed Dec 18 14:41:02 2019
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None

After installing, view the rendered result with helm get manifest plus the project name. With helm ls you can see the releases created by Helm.

[root@k8s-master1 ~]# helm get manifest pod-css-commons
[root@k8s-master1 one]# helm ls
NAME             NAMESPACE  REVISION  UPDATED                                  STATUS    CHART              APP VERSION
pod-css-commons  default    1         2019-12-18 14:41:02.800570406 +0800 CST  deployed  application-0.1.0  1.16.0

Now suppose our code is updated and a new image is built from the Dockerfile; we then need to roll out the new image. How?

Simply change the image value defined in values.yaml to the new image address. For this demo it is written as nginx 1.17.

Then roll out the new image with helm upgrade:

[root@k8s-master1 one]# helm upgrade pod-css-commons ../one/
Release "pod-css-commons" has been upgraded. Happy Helming!
NAME: pod-css-commons
LAST DEPLOYED: Wed Dec 18 15:00:14 2019
NAMESPACE: default
STATUS: deployed
REVISION: 2
TEST SUITE: None

You can view the rendered image address with get manifest. Generally, to keep project names consistent across microservice releases, {{ .Values.name }} is used uniformly, with the variable set by yourself.

[root@k8s-master1 one]# helm get manifest pod-css-commons

For example, now there is a microservice to be released.

So what we usually replace is the name of the service and the image.

Modify values.yaml directly to change the new image address and the project name, test with --dry-run, and then release.

[root@k8s-master1 one]# cat values.yaml
replicas: 2
image: nginx
imagetag: 1.15
name: pod-base-user-service
app: pod-base-user-service
port: 80
targetPort: 80
[root@k8s-master1 one]# helm install pod-base-user-service ../one/
[root@k8s-master1 one]# kubectl get pod
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-6f54fc894d-dbvmk   1/1     Running   0          5d3h
pod-base-ec-service-664987f9c6-5f9vl      1/1     Running   0          7m18s
pod-base-ec-service-664987f9c6-mw4jb      1/1     Running   0          7m18s
pod-base-user-service-6b7d9d47b8-qqcbp    1/1     Running   0          7s
pod-base-user-service-6b7d9d47b8-r5f96    1/1     Running   0          8s

5. Pipes and functions

The values and built-in objects above simply pass data to the template engine for rendering. Beyond that, the engine also supports processing the data further: instead of using a value from values.yaml as-is, you can transform it in the template, for example capitalizing the first letter or wrapping the string in double quotes. Such transformations use functions, which the template engine supports. They are not especially common, but they do get used; for example in a Deployment's labels you may need to fetch a value and add double quotes, since some YAML values must be quoted. The implementation is simple: just add quote.

Double quotation marks

labels:
  app: {{ quote .Values.name }}

In the test output the double quotes have been added. This is secondary processing done for us by the quote function; whenever specific values need double quotes, the quote function achieves it directly.

[root@k8s-master1 one]# helm install pod-tools-service --dry-run ../one/
  labels:
    app: "pod-mapper-service"

Another example is supplying a default directly in the template rather than through values. Suppose the env field I defined has no value by default; it is rendered with {{ default "xxxx" .Values.env }}, where the first argument is the default value, so the field works without having to be set.

spec:
  nodeSelector:
    team: {{ .Values.team }}
  env: {{ default "JAVA_OPTS" .Values.env }}

So if the value exists in values.yaml, it is used; if not, default supplies the fallback value.
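A loose shell analogy (not Helm itself) may help: the `${VAR:-fallback}` parameter expansion behaves like Helm's default function, using the variable when it is set and non-empty, and the fallback otherwise.

```shell
# Analogy only: shell parameter expansion vs. Helm's `default` function
JAVA_OPTS=""                          # empty, like a missing values.yaml entry
printf '%s\n' "${JAVA_OPTS:--Xmx512m}"    # falls back to the default

JAVA_OPTS="-Xmx1024m"                 # now provided, like setting it in values.yaml
printf '%s\n' "${JAVA_OPTS:--Xmx512m}"    # uses the provided value
```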

Similarly with indentation: YAML itself is defined by hierarchy, so we sometimes need functions to render the correct nesting level.

Other functions:

indent: {{ .Values.resources | indent 12 }}
upper:  {{ upper .Values.resources }}
title:  {{ title .Values.resources }}

6. Process control

Process control provides the template with the ability to meet more complex data logic processing.

The Helm template language provides the following flow control statements:

if/else — condition blocks

with — scope specification

range — loop blocks

Process control, like indentation, is commonly used; for example if/else handles more complex logic.

Define this parameter under values.yaml:

test: "123"

In templates/deployment.yaml define the condition: if test equals "123", output test: a; otherwise, for any other value, test: b is printed here. You can define such conditions for your own application scenarios, although this kind of situation is rare.

spec:
  nodeSelector:
    team: {{ .Values.team }}
  env: {{ default "JAVA_OPTS" .Values.env }}
  {{ if eq .Values.test "123" }}
  test: a
  {{ else }}
  test: b
  {{ end }}

If you render this as-is, blank lines are left where the {{ }} directives stood; they can be removed by adding - inside the braces.

The eq operator tests equality; operators such as ne, lt, gt, and, or are also supported.

{{- if eq .Values.test "123" }}
  test: a
{{- else }}
  test: b
{{- end }}
      containers:
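As a sketch (the resources.enabled and env fields here are hypothetical), operators are written in prefix form and can be combined:

```yaml
# Render resource limits only when the switch is on AND env is not "dev"
{{- if and .Values.resources.enabled (ne .Values.env "dev") }}
resources:
  limits:
    cpu: 100m
{{- end }}
```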

Conditional judgment tests whether a condition is true; a value is false when it is:

A Boolean false

The number zero

An empty string

A nil (empty or null)

An empty collection (map, slice, tuple, dict, array)

Except for the above, all the other conditions are true.

For example, if a value in values.yaml is set to false, the condition is not true.

If the value is set to 0, it also evaluates to false, so the condition is not true.

If it is empty, it is also false, and so is an empty collection.

So let's set a 0 in values.yaml and test; test: b is printed here, meaning the condition was false.

test: 0
test: ""
[root@k8s-master1 one]# helm install pod-mapper-service --dry-run ../one/
    spec:
      nodeSelector:
        team: team1
      env: JAVA_OPTS
      test: b
      containers:
      - image: nginx:1.15
        name: pod-mapper-service

Now let's create an application using the structure of the official Helm default values.yaml. It supports nested, serializable structures; for example the image key can carry several attributes below it, such as the repository address, the tag name, and the image pull policy.

image:
  repository: nginx
  tag: 1.17
  pullPolicy: IfNotPresent
[root@k8s-master1 one]# cat templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: {{ .Values.name }}
  name: {{ .Values.name }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Values.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.name }}
    spec:
      nodeSelector:
        team: {{ .Values.team }}
      containers:
      - image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
        name: {{ .Values.name }}
[root@k8s-master1 one]# cat templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: {{ .Values.name }}
  name: {{ .Values.name }}
spec:
  ports:
  - port: {{ .Values.port }}
    protocol: TCP
    targetPort: {{ .Values.port }}
  selector:
    app: {{ .Values.name }}
[root@k8s-master1 one]# vim values.yaml
app: pod-base-jss
name: pod-base-jss
replicaCount: 3
image:
  repository: nginx
  tag: 1.17
  pullPolicy: IfNotPresent
team: team3
[root@k8s-master1 one]# helm install pod-base-jss ../one/
[root@k8s-master1 one]# helm ls
NAME          NAMESPACE  REVISION  UPDATED                                STATUS    CHART
pod-base-jss  default    1         2019-12-19 13:49:08.1954736 +0800 CST  deployed  application-0.1.0

Now add another resource limit.

Use a condition on whether the value is false or true to decide whether resource limits are applied: if resources is defined as false, the resources block inside is not used; if it is true (or simply set), it is used, without having to comment the resource block in and out.

So judge whether resources is truthy: if it is, render the resources block and put one more resource restriction on the pod; if it is false, apply no restriction. The judgment is simply {{ if .Values.resources }}.

Here I test the truthy case, and the condition adds the block to the output.

[root@k8s-master1 one]# cat templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: {{ .Values.name }}
  name: {{ .Values.name }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Values.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.name }}
    spec:
      nodeSelector:
        team: {{ .Values.team }}
      containers:
      - image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
        name: {{ .Values.name }}
        {{- if .Values.resources }}
        resources:
          limits:
            cpu: {{ .Values.resources.limits.cpu }}
            memory: {{ .Values.resources.limits.memory }}
          requests:
            cpu: {{ .Values.resources.requests.cpu }}
            memory: {{ .Values.resources.requests.memory }}
        {{- else }}
        resources: {}
        {{- end }}

The template references the variables below. If we have no such requirement, we can set resources to 0, "" or false, and the block will not be rendered; this acts as a switch that manages the application very well.

[root@k8s-master1 one]# cat values.yaml
resources:
  limits:
    cpu: 100m
    memory: 128Mi
  requests:
    cpu: 100m
    memory: 128Mi

View the test results

[root@k8s-master1 one]# helm upgrade pod-base-jss --dry-run ../one/
Release "pod-base-jss" has been upgraded. Happy Helming!
NAME: pod-base-jss
LAST DEPLOYED: Thu Dec 19 14:36:37 2019
NAMESPACE: default
STATUS: pending-upgrade
REVISION: 2
TEST SUITE: None
HOOKS:
MANIFEST:
---
# Source: application/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: pod-base-jss
  name: pod-base-jss
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: pod-base-jss
---
# Source: application/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: pod-base-jss
  name: pod-base-jss
spec:
  replicas: 3
  selector:
    matchLabels:
      app: pod-base-jss
  template:
    metadata:
      labels:
        app: pod-base-jss
    spec:
      nodeSelector:
        team: team3
      containers:
      - image: nginx:1.17
        name: pod-base-jss
        resources:
          limits:
            cpu: 100m
            memory: 128Mi
          requests:
            cpu: 100m
            memory: 128Mi

Alternatively, add an enabled: false field under resources in values.yaml, meaning off; on execution the enabled flag is consulted first.

resources:
  enabled: false
  limits:
    cpu: 100m
    memory: 128Mi
  requests:
    cpu: 100m
    memory: 128Mi

Then gate the template with {{ if .Values.resources.enabled }}: true means the block is used, false means it is not.

[root@k8s-master1 one]# cat templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: {{ .Values.name }}
  name: {{ .Values.name }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Values.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.name }}
    spec:
      nodeSelector:
        team: {{ .Values.team }}
      containers:
      - image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
        name: {{ .Values.name }}
        {{- if .Values.resources.enabled }}
        resources:
          limits:
            cpu: {{ .Values.resources.limits.cpu }}
            memory: {{ .Values.resources.limits.memory }}
          requests:
            cpu: {{ .Values.resources.requests.cpu }}
            memory: {{ .Values.resources.requests.memory }}
        {{- else }}
        resources: {}
        {{- end }}

There are many requirements like this. Some microservices need an Ingress and some do not; others skip Ingress as the cluster-edge load balancer entirely, instead deploying a few nginx load balancers that forward to internal services via the Service's ClusterIP, exposed through an SLB. Let's implement both options.

values.yaml has the same kind of enabled switch as before: set it to false and the Ingress rule is not created; set it to true and it is.

First gate the Service with process control; for now the Service is not used.

In values.yaml, set the service switch enabled to false.

[root@k8s-master1 one]# cat values.yaml
app: pod-base-tools
name: pod-base-tools
replicaCount: 3
image:
  repository: nginx
  tag: 1.17
  pullPolicy: IfNotPresent
serviceAccount:
  create: true
  name:
service:
  enabled: false
  port: 80
  targetPort: 80
ingress:
  enabled: false
  annotations: {}
  hosts:
  - host: chart-example.local
    paths: []
  tls: []
resources:
  enabled: true
  limits:
    cpu: 100m
    memory: 128Mi
  requests:
    cpu: 100m
    memory: 128Mi
nodeSelector:
  team: team2
[root@k8s-master1 templates]# cat service.yaml
{{- if .Values.service.enabled }}
apiVersion: v1
kind: Service
metadata:
  labels:
    app: {{ .Values.name }}
  name: {{ .Values.name }}
spec:
  ports:
  - port: {{ .Values.service.port }}
    protocol: TCP
    targetPort: {{ .Values.service.targetPort }}
  selector:
    app: {{ .Values.name }}
{{- end }}
[root@k8s-master1 templates]# cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: {{ .Values.name }}
  name: {{ .Values.name }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Values.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.name }}
    spec:
      nodeSelector:
        team: {{ .Values.nodeSelector.team }}
      containers:
      - image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
        name: {{ .Values.name }}
        {{- if .Values.resources.enabled }}
        resources:
          limits:
            cpu: {{ .Values.resources.limits.cpu }}
            memory: {{ .Values.resources.limits.memory }}
          requests:
            cpu: {{ .Values.resources.requests.cpu }}
            memory: {{ .Values.resources.requests.memory }}
        {{- else }}
        resources: {}
        {{- end }}

After execution the Service is not created, because the if condition I set is false; flip the switch to true and the Service is created directly.

[root@k8s-master1 templates]# helm install pod-base-tools --dry-run ../../one/

Now create an ingress and set a switch. In fact, the method is the same.

[root@k8s-master1 one]# cat values.yaml
app: pod-base-user
name: pod-base-user
replicaCount: 3
image:
  repository: nginx
  tag: 1.17
  pullPolicy: IfNotPresent
serviceAccount:
  create: true
  name:
service:
  enabled: false
  port: 80
  targetPort: 80
ingress:
  enabled: true
  annotations: {}
  hosts:
  - host: chart-example.local
    paths: []
  tls: []
resources:
  enabled: true
  limits:
    cpu: 100m
    memory: 128Mi
  requests:
    cpu: 100m
    memory: 128Mi
nodeSelector:
  team: team2
[root@k8s-master1 templates]# cat ingress.yaml
{{- if .Values.ingress.enabled }}
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        backend:
          serviceName: test
          servicePort: 80
{{- end }}
[root@k8s-master1 templates]# helm install pod-base-user ../one/
[root@k8s-master1 templates]# kubectl get ing
NAME                              HOSTS   ADDRESS   PORTS   AGE
ingress.extensions/test-ingress   *                 80      39s

With

with controls the scope of variables.

In {{ .Release.xxx }} or {{ .Values.xxx }}, the leading dot is a reference to the current scope; .Values tells the template to look up the Values object in the current scope. The with statement can change that scope, and its syntax is similar to a simple if statement.

A small detail: when we write variable references we always add a dot in front. The dot says which scope the lookup starts from; a bare leading dot follows the structure the template is rendered with.

Let's use the nodeSelector value again. In real scenarios we usually schedule nodes in groups, to better manage the placement of distributed microservices across nodes.

This can be judged with the if syntax used earlier.

It can also be configured with a switch, with with, or with the toYaml function.

spec:
  {{- if .Values.nodeSelector.enabled }}
  nodeSelector:
    team: {{ .Values.nodeSelector.team }}
  {{- end }}
[root@k8s-master1 one]# cat values.yaml
nodeSelector:
  enabled: true
  team: team2
[root@k8s-master1 templates]# cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: {{ .Values.name }}
  name: {{ .Values.name }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Values.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.name }}
    spec:
      {{- if .Values.nodeSelector.enabled }}
      nodeSelector:
        team: {{ .Values.nodeSelector.team }}
      {{- end }}
      containers:
      - image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
        name: {{ .Values.name }}
        {{- if .Values.resources.enabled }}
        resources:
          limits:
            cpu: {{ .Values.resources.limits.cpu }}
            memory: {{ .Values.resources.limits.memory }}
          requests:
            cpu: {{ .Values.resources.requests.cpu }}
            memory: {{ .Values.resources.requests.memory }}
        {{- else }}
        resources: {}
        {{- end }}

Or drop the switch and use with to read the parameter directly.

[root@k8s-master1 one]# tail -4 values.yaml
nodeSelector:
  team: team2

Define the field in the deployment; with scopes into .Values.nodeSelector, so .team reads the corresponding value:

spec:
  {{- with .Values.nodeSelector }}
  nodeSelector:
    team: {{ .team }}
  {{- end }}

Use toYaml mode

spec:
  {{- with .Values.nodeSelector }}
  nodeSelector:
    {{- toYaml . | nindent 8 }}
  {{- end }}

with restricts the scope: inside the block, the dot is the value of .Values.nodeSelector, and toYaml converts it to YAML.

The dot after toYaml is that current value, i.e. .Values.nodeSelector.

Range

In the Helm template language, use the range keyword for looping operations.

range is needed when writing multiple elements; like toYaml and with, it handles multi-level structures, and it is a good fit for things like env.

Add a variable list to the values.yaml file:

cat values.yaml
test:
  - 1
  - 2
  - 3

Print the list in a loop:

apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}
data:
  test: |
    {{- range .Values.test }}
    {{ . }}
    {{- end }}

Inside the loop we use a dot: because the current scope is the loop, the dot refers to the element currently being read.
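range is also handy for lists of objects such as env (a sketch; the values shown are hypothetical):

```yaml
# values.yaml:
# env:
# - name: JAVA_OPTS
#   value: -Xmx1024m
containers:
- name: app
  env:
  {{- range .Values.env }}
  - name: {{ .name }}
    value: {{ .value | quote }}
  {{- end }}
```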

7. Variables

Variables are rarely used in templates, but they can simplify code and make better use of with and range. Inside a with block the scope has changed, so top-level values are not directly reachable. There are two ways to reference root-scope built-in variables inside with: one is Helm's variable assignment, the other is the $ root reference.

To test it, first without $:

spec:
  {{- with .Values.nodeSelector }}
  nodeSelector:
    app: {{ .Values.name }}

The result of the execution is as follows

[root@k8s-master1 templates]# helm install pod-base-user --dry-run ../../one/
Error: template: application/templates/deployment.yaml:19:23: executing "application/templates/deployment.yaml" at <.Values.name>: nil pointer evaluating interface {}.name

If you add $ ($.Values.name), the output is normal:

spec:
  nodeSelector:
    app: pod-base-user
    team: team2

You can also use another form to output

spec:
  {{- $releaseName := .Release.Name -}}
  {{- with .Values.nodeSelector }}
  nodeSelector:
    app: {{ $releaseName }}

Notice the line {{- $releaseName := .Release.Name -}} added before the with statement. $releaseName is a variable of the form $name referencing the object that follows, and assignment uses :=, so the $releaseName variable inside the with block still points to .Release.Name.

In addition, when deploying a microservice or Java project we usually set the Java heap size, so this is a fairly common option. How do we add it? There are several ways; toYaml works well.

Let's go to values to define it.

[root@k8s-master1 one]# tail -4 values.yaml
env:
- name: JAVA_OPTS
  value: -Xmx1024m -Xms1014m

It can also be printed by the method just now.

{{- with .Values.env }}
env:
  {{- toYaml . | nindent 8 }}
{{- end }}

8. Named templates

Named templates are defined with define and included with template. By default, files beginning with an underscore in the templates directory hold common templates (_helpers.tpl). If one or two pieces of a YAML are not well served by toYaml or an if/else switch, a named template is often the right tool.

For example, if resource names all follow the same logic, you can define a named template: write the shared block once and let each YAML reference it by name, for example labels and label selectors (a controller matches its pods by the label selector). This block goes in _helpers.tpl, the place where common templates are stored; define with define, include with template.

cat _helpers.tpl
{{- define "demo.fullname" -}}
{{- .Chart.Name -}}
{{- end -}}
cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "demo.fullname" . }}
...

The template directive includes one template inside another. However, template is a statement and cannot be used in Go template pipelines; to solve this, Helm adds the include function.

cat _helpers.tpl
{{- define "demo.labels" -}}
app: {{ template "demo.fullname" . }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
release: "{{ .Release.Name }}"
{{- end -}}
cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "demo.fullname" . }}
  labels:
    {{- include "demo.labels" . | nindent 4 }}
...

This includes the template named demo.labels, passes the current scope (the dot) to it, and finally pipes the template's output to the nindent function.
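As a sketch of the result (the chart version and release name here are hypothetical), the include above renders to:

```yaml
metadata:
  name: demo
  labels:
    app: demo
    chart: "demo-0.1.0"
    release: "pod-base-user"
```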

3.7 Developing your own chart: a Dubbo microservice application as an example

Create a template first

helm create dubbo

Modify Chart.yaml and values.yaml to add commonly used variables.

In the templates directory, create the YAML files required to deploy the image, turning the frequently-changing fields into variable references.

The code is in my git repository. If you want to use it, please send me your public key first.

git clone git@gitee.com:zhaocheng172/helm-dubbo.git


© 2024 shulou.com SLNews company. All rights reserved.
