Kubernetes Native Pipeline (Tekton) Example Analysis

Many people new to the Kubernetes-native pipeline are not sure where to start, so this article walks through Tekton with examples and, hopefully, answers those questions.

Tekton Pipeline is the Kubernetes-native pipeline: tasks run in Pods, and tasks and workflows are managed through custom CRDs. After working through Tekton, I find it very powerful, but somewhat over-designed; it lacks the simple, straightforward flexibility of Drone.
Task
The main goal of Tekton Pipelines is to run your tasks, either individually or as part of a pipeline. Each task runs as a Pod on the Kubernetes cluster, with each step as its own container. This is essentially the core idea of Drone; in fact, Drone also plans to use Kubernetes as a task execution engine, but has not gone further than that.
A Task defines the work that needs to be performed. The following is a simple Task:
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: echo-hello-world
spec:
  steps:
  - name: echo
    image: ubuntu
    command:
    - echo
    args:
    - "hello world"
steps is the list of commands that the Task executes in order, and its configuration is almost identical to Drone's. Defining a Task does not execute anything; nothing runs until a TaskRun is created, which is reasonable since the TaskRun acts as a trigger.
apiVersion: tekton.dev/v1alpha1
kind: TaskRun
metadata:
  name: echo-hello-world-task-run
spec:
  taskRef:
    name: echo-hello-world
  trigger:
    type: manual

Apply it:

kubectl apply -f <name-of-file.yaml>

Check the TaskRun:

kubectl get taskruns/echo-hello-world-task-run -o yaml
apiVersion: tekton.dev/v1alpha1
kind: TaskRun
metadata:
  creationTimestamp: 2018-12-11T15:49:13Z
  generation: 1
  name: echo-hello-world-task-run
  namespace: default
  resourceVersion: "6706789"
  selfLink: /apis/tekton.dev/v1alpha1/namespaces/default/taskruns/echo-hello-world-task-run
  uid: 4e96e9c6-fd5c-11e8-9129-42010a8a0fdc
spec:
  generation: 1
  inputs: {}
  outputs: {}
  taskRef:
    name: echo-hello-world
  taskSpec: null
  trigger:
    type: manual
status:
  conditions:
  - lastTransitionTime: 2018-12-11T15:50:09Z
    status: "True"
    type: Succeeded
  podName: echo-hello-world-task-run-pod-85ca51
  startTime: 2018-12-11T15:49:39Z
  steps:
  - terminated:
      containerID: docker://fcfe4a004...6729d6d2ad53faff41
      exitCode: 0
      finishedAt: 2018-12-11T15:50:01Z
      reason: Completed
      startedAt: 2018-12-11T15:50:01Z
  - terminated:
      containerID: docker://fe86fc5f7...eb429697b44ce4a5b
      exitCode: 0
      finishedAt: 2018-12-11T15:50:02Z
      reason: Completed
      startedAt: 2018-12-11T15:50:02Z
The status Succeeded = True shows that the task ran successfully.
Task input and output
In more common scenarios, tasks need multiple steps that work on input and output resources. For example, a Task can fetch source code from a GitHub repository and build a Docker image from it.
PipelineResources are used to define the inputs (such as source code) and outputs (such as a Docker image) of a Task. There are several system-defined resource types; here are two kinds of resources that are commonly needed.
The git resource represents the code you want to build:
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: skaffold-git
spec:
  type: git
  params:
  - name: revision
    value: master
  - name: url
    value: https://github.com/GoogleContainerTools/skaffold
The image resource represents the image that the task will build:
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: skaffold-image-leeroy-web
spec:
  type: image
  params:
  - name: url
    value: gcr.io//leeroy-web
The following are the Task's inputs and outputs: the input resource is the GitHub repository, and the output is the image built from that source. The Task's command arguments support templating, so the Task definition stays constant while parameter values can change at run time.
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: build-docker-image-from-git-source
spec:
  inputs:
    resources:
    - name: docker-source
      type: git
    params:
    - name: pathToDockerFile          # these parameters are customizable
      description: The path to the dockerfile to build
      default: /workspace/docker-source/Dockerfile
    - name: pathToContext
      description: The build context used by Kaniko (https://github.com/GoogleContainerTools/kaniko#kaniko-build-contexts)
      default: /workspace/docker-source
  outputs:
    resources:
    - name: builtImage
      type: image
  steps:
  - name: build-and-push
    image: gcr.io/kaniko-project/executor   # a purpose-built image that performs the docker build
    command:
    - /kaniko/executor
    args:
    - --dockerfile=${inputs.params.pathToDockerFile}   # pathToDockerFile is the parameter defined above
    - --destination=${outputs.resources.builtImage.url}
    - --context=${inputs.params.pathToContext}
A TaskRun binds the inputs and outputs to concrete PipelineResources and supplies the parameter values used for templating, in addition to actually running the Task's steps.
apiVersion: tekton.dev/v1alpha1
kind: TaskRun
metadata:
  name: build-docker-image-from-git-source-task-run
spec:
  taskRef:
    name: build-docker-image-from-git-source
  trigger:
    type: manual
  inputs:
    resources:
    - name: docker-source
      resourceRef:
        name: skaffold-git
    params:                 # parameters are passed to the Task at run time, so the Task never needs to be redefined;
                            # to build another project you only add inputs, outputs and a TaskRun, which decouples
                            # things better than Drone and lets the Task be reused
    - name: pathToDockerFile
      value: Dockerfile
    - name: pathToContext
      value: /workspace/docker-source/examples/microservices/leeroy-web   # configure: may change according to your source
  outputs:
    resources:
    - name: builtImage
      resourceRef:
        name: skaffold-image-leeroy-web                                   # the image resource defined above
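To illustrate the reuse point in the comment above: building a different project only needs a new git resource and a new TaskRun, while the Task itself stays untouched. This is a minimal sketch, not part of the original tutorial; the repository URL and resource names are hypothetical:

apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: another-app-git                         # hypothetical resource name
spec:
  type: git
  params:
  - name: revision
    value: master
  - name: url
    value: https://github.com/example/another-app   # hypothetical repository
---
apiVersion: tekton.dev/v1alpha1
kind: TaskRun
metadata:
  name: build-another-app-task-run              # hypothetical TaskRun name
spec:
  taskRef:
    name: build-docker-image-from-git-source    # the same Task defined above, reused unchanged
  trigger:
    type: manual
  inputs:
    resources:
    - name: docker-source
      resourceRef:
        name: another-app-git
  outputs:
    resources:
    - name: builtImage
      resourceRef:
        name: skaffold-image-leeroy-web         # in practice this would point at a different image resource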
PS: resources should not be limited to the two names inputs and outputs; any name ought to work, as long as parameters are supported. For example, define a resource called build that specifies the image used for docker build:
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: skaffold-image-leeroy-web
spec:
  type: image
  params:
  - name: url
    value: docker-in-docker:latest
In Task:
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: build-docker-image-from-git-source
spec:
  build:
    resources:
    - name: build
      type: image
    params:
    - name: build-image
      default: docker-in-docker:latest
  steps:
  - name: build-and-push
    image: ${build.params.build-image}
I think it should be possible to extend it like this; inputs and outputs alone feel too narrow.
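For comparison, a similar effect can already be approximated within the v1alpha1 inputs section by making the builder image an ordinary parameter with a default, assuming the ${inputs.params.…} substitution shown above also applies to a step's image field. This is only a minimal sketch; the Task and parameter names are hypothetical:

apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: parameterized-builder-task          # hypothetical name, for illustration only
spec:
  inputs:
    params:
    - name: builder-image                   # hypothetical parameter; the default can be overridden per TaskRun
      description: Image used to run the build step
      default: docker-in-docker:latest
  steps:
  - name: build
    image: ${inputs.params.builder-image}   # templated like the other parameters in the Tasks above
    command:
    - echo
    args:
    - "building with a parameterized image"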
Get all of the pipeline objects created so far:

kubectl get build-pipeline
NAME                                                    AGE
taskruns/build-docker-image-from-git-source-task-run   30s

NAME                                          AGE
pipelineresources/skaffold-git                6m
pipelineresources/skaffold-image-leeroy-web   7m

NAME                                       AGE
tasks/build-docker-image-from-git-source   7m
To view the output of TaskRun, use the following command:
kubectl get taskruns/build-docker-image-from-git-source-task-run -o yaml

apiVersion: tekton.dev/v1alpha1
kind: TaskRun
metadata:
  creationTimestamp: 2018-12-11T18:14:29Z
  generation: 1
  name: build-docker-image-from-git-source-task-run
  namespace: default
  resourceVersion: "6733537"
  selfLink: /apis/tekton.dev/v1alpha1/namespaces/default/taskruns/build-docker-image-from-git-source-task-run
  uid: 99d297fd-fd70-11e8-9129-42010a8a0fdc
spec:
  generation: 1
  inputs:
    params:
    - name: pathToDockerFile
      value: Dockerfile
    - name: pathToContext
      value: /workspace/git-source/examples/microservices/leeroy-web   # configure: may change depending on your source
    resources:
    - name: git-source
      paths: null
      resourceRef:
        name: skaffold-git
  outputs:
    resources:
    - name: builtImage
      paths: null
      resourceRef:
        name: skaffold-image-leeroy-web
  taskRef:
    name: build-docker-image-from-git-source
  taskSpec: null
  trigger:
    type: manual
status:
  conditions:
  - lastTransitionTime: 2018-12-11T18:15:09Z
    status: "True"
    type: Succeeded
  podName: build-docker-image-from-git-source-task-run-pod-24d414
  startTime: 2018-12-11T18:14:29Z
  steps:
  - terminated:
      containerID: docker://138ce30c722eed....c830c9d9005a0542
      exitCode: 0
      finishedAt: 2018-12-11T18:14:47Z
      reason: Completed
      startedAt: 2018-12-11T18:14:47Z
  - terminated:
      containerID: docker://4a75136c029fb1....4c94b348d4f67744
      exitCode: 0
      finishedAt: 2018-12-11T18:14:48Z
      reason: Completed
      startedAt: 2018-12-11T18:14:48Z
A condition of type Succeeded with status True shows that the Task ran successfully, and you can also verify that the Docker image was built.
Pipeline
A Pipeline defines a list of tasks to execute in order. It also indicates, via the from field, whether any output should be used as an input to a subsequent task, and specifies the order of execution (using the runAfter and from fields; a small runAfter sketch appears at the end of this section). The same templating used in Tasks can also be used in Pipelines.
apiVersion: tekton.dev/v1alpha1
kind: Pipeline
metadata:
  name: tutorial-pipeline
spec:
  resources:
  - name: source-repo
    type: git
  - name: web-image
    type: image
  tasks:
  - name: build-skaffold-web                  # task that builds the image from source
    taskRef:
      name: build-docker-image-from-git-source
    params:
    - name: pathToDockerFile
      value: Dockerfile
    - name: pathToContext
      value: /workspace/examples/microservices/leeroy-web   # configure: may change according to your source
    resources:
      inputs:
      - name: workspace
        resource: source-repo
      outputs:
      - name: image
        resource: web-image
  - name: deploy-web                          # deployment task
    taskRef:
      name: deploy-using-kubectl              # references a Task that deploys via kubectl; its definition follows below
    resources:
      inputs:                                 # the inputs here are really the outputs of the previous task
      - name: workspace
        resource: source-repo
      - name: image
        resource: web-image
        from:                                 # 'from' feeds the output of the previous task into this one, just like a pipe
        - build-skaffold-web
    params:                                   # these parameters are passed to the Task template and override the defaults in its inputs
    - name: path
      value: /workspace/examples/microservices/leeroy-web/kubernetes/deployment.yaml   # configure: may change according to your source
    - name: yqArg
      value: "-d1"
    - name: yamlPathToImage
      value: "spec.template.spec.containers[0].image"
The above Pipeline references a Task called deploy-using-kubectl:
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: deploy-using-kubectl
spec:
  inputs:
    resources:
    - name: workspace
      type: git
    - name: image
      type: image
    params:
    - name: path
      description: Path to the manifest to apply
    - name: yqArg
      description: Okay this is a hack, but I didn't feel right hard-coding `-d1` down below
    - name: yamlPathToImage
      description: The path to the image to replace in the yaml manifest (arg to yq)
  steps:
  - name: replace-image                  # first step: replace the image in the manifest
    image: mikefarah/yq                  # a purpose-built image, the same idea as in Drone; this step is essentially template rendering
    command: ["yq"]
    args:
    - "w"
    - "-i"
    - "${inputs.params.yqArg}"
    - "${inputs.params.path}"
    - "${inputs.params.yamlPathToImage}"
    - "${inputs.resources.image.url}"
  - name: run-kubectl                    # second step: kubectl apply
    image: lachlanevenson/k8s-kubectl
    command: ["kubectl"]
    args:
    - "apply"
    - "-f"
    - "${inputs.params.path}"            # the location of the yaml manifest
To run Pipeline, create a PipelineRun as follows:
apiVersion: tekton.dev/v1alpha1
kind: PipelineRun
metadata:
  name: tutorial-pipeline-run-1
spec:
  pipelineRef:
    name: tutorial-pipeline
  trigger:
    type: manual
  resources:
  - name: source-repo
    resourceRef:
      name: skaffold-git
  - name: web-image
    resourceRef:
      name: skaffold-image-leeroy-web
Apply it and check the PipelineRun:

kubectl apply -f <name-of-file.yaml>

kubectl get pipelineruns/tutorial-pipeline-run-1 -o yaml
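The tutorial pipeline above orders its two tasks purely through from. When one task should simply wait for another without consuming its output, the runAfter field mentioned earlier expresses that ordering. A minimal sketch, with hypothetical pipeline and task names:

apiVersion: tekton.dev/v1alpha1
kind: Pipeline
metadata:
  name: ordering-only-pipeline       # hypothetical example, not part of the tutorial
spec:
  tasks:
  - name: run-tests
    taskRef:
      name: run-tests                # hypothetical Task
  - name: notify
    taskRef:
      name: send-notification       # hypothetical Task
    runAfter:                        # wait for run-tests even though no resource flows between them
    - run-tests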
Summary
Beginners may find the design a bit roundabout, but it is done this way for the sake of decoupling. In my view the advantages and disadvantages are as follows:
Advantages
A Kubernetes cluster can serve as the task execution engine, which makes better use of resources, for example running tasks on idle overnight capacity, building images, doing offline analysis, or even machine learning. The decoupling is also better: task templates can be reused rather than redefined by everyone, and with the input/output concept the output of one task naturally becomes the input of another.
Disadvantages
It is a bit over-designed, and for simple scenarios the configuration can feel roundabout. Input and output also depend on external systems: in Drone the containers in a pipeline share a data volume, so files produced by one task are easily used by the next, whereas cluster-based tasks may have to pass data through a git repository or a Docker image registry, which is somewhat troublesome. A good workaround is to use Kubernetes distributed storage to give the pipeline a shared volume, making data transfer between tasks easier. Overall the direction is right, and there are still many scenarios where it fits.
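As one possible illustration of that workaround (not part of the original article's examples), the sketch below creates a PersistentVolumeClaim backed by shared storage and mounts it into a Task's steps, assuming the v1alpha1 Task spec accepts pod-style volumes; all names are hypothetical:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pipeline-shared-workspace            # hypothetical PVC backed by a distributed storage class
spec:
  accessModes:
  - ReadWriteMany                            # needed if tasks may be scheduled on different nodes
  resources:
    requests:
      storage: 1Gi
---
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: task-using-shared-volume             # hypothetical Task, for illustration only
spec:
  steps:
  - name: produce
    image: ubuntu
    command: ["bash", "-c"]
    args: ["echo artifact > /shared/out.txt"]
    volumeMounts:
    - name: shared
      mountPath: /shared
  volumes:                                   # assumption: steps and volumes behave like pod containers and volumes
  - name: shared
    persistentVolumeClaim:
      claimName: pipeline-shared-workspace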