How to analyze the workflow engine of Argo Workflows-Kubernetes

2025-01-19 Update From: SLTechnology News&Howtos

Many newcomers are not clear about Argo Workflows, the Kubernetes workflow engine. To help with that, this article explains it in detail; anyone who needs it can follow along, and I hope you gain something from it.

What is Argo Workflows?

Argo Workflows is an open source project that provides container-native workflows for Kubernetes, implemented primarily as Kubernetes CRDs (Custom Resource Definitions).

The features are as follows:

Every step of the workflow is a container.

Model a multi-step workflow as a sequence of tasks, or use a directed acyclic graph (DAG) to describe the dependencies between tasks.

Easily run compute-intensive jobs for machine learning or data processing in a short period of time.

Run CI/CD pipelines on Kubernetes without complex software configuration.

Installation

Install the controller side

The installation of Argo Workflows is very simple; just use the following commands.

kubectl create ns argo
kubectl apply -n argo -f https://raw.githubusercontent.com/argoproj/argo-workflows/stable/manifests/quick-start-postgres.yaml

After the installation is complete, the following four pods are created.

# kubectl get po -n argo
NAME                                   READY   STATUS    RESTARTS   AGE
argo-server-574ddc66b-62rjc            1/1     Running   4          4h35m
minio                                  1/1     Running   0          4h35m
postgres-56fd897cf4-k8fwd              1/1     Running   0          4h35m
workflow-controller-77658c77cc-p25ll   1/1     Running   1          4h35m

Where:

argo-server is the Argo API server.

minio is the artifact repository.

postgres is the database.

workflow-controller is the workflow controller.

Then configure an Ingress for the server side to access the UI. The configuration manifest is as follows (I use Traefik here):

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: argo-ui
  namespace: argo
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`argowork-test.explorops.cn`)
      kind: Rule
      services:
        - name: argo-server
          port: 2746

The UI interface is as follows:

Configure another Ingress for MinIO. The configuration manifest is as follows:

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: minio
  namespace: argo
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`minio-test.explorops.cn`)
      kind: Rule
      services:
        - name: minio
          port: 9000

The UI interface is as follows (the default username and password are admin:password):

Install the client side

Argo Workflows provides the Argo CLI, which is also very easy to install. On a Linux system:

# Download the binary
curl -sLO https://github.com/argoproj/argo/releases/download/v3.0.0-rc4/argo-linux-amd64.gz
# Unzip
gunzip argo-linux-amd64.gz
# Make binary executable
chmod +x argo-linux-amd64
# Move binary to path
mv ./argo-linux-amd64 /usr/local/bin/argo

After the installation is complete, use the following command to verify that the installation is successful.

# argo version
argo: v3.0.0-rc4
  BuildDate: 2021-03-02T21:42:55Z
  GitCommit: ae5587e97dad0e4806f7a230672b998fe140a767
  GitTreeState: clean
  GitTag: v3.0.0-rc4
  GoVersion: go1.13
  Compiler: gc
  Platform: linux/amd64

Its main commands are:

list: list workflows
logs: view the logs of a workflow
submit: create a workflow
watch: watch a workflow in real time
get: display the details of a workflow
delete: delete a workflow
stop: stop a workflow

More commands can be viewed with argo --help.

You can then run a simple hello-world Workflow, as follows:

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-world-
  labels:
    workflows.argoproj.io/archive-strategy: "false"
spec:
  entrypoint: whalesay
  templates:
    - name: whalesay
      container:
        image: docker/whalesay:latest
        command: [cowsay]
        args: ["hello world"]

Use the following command to create and observe the workflow.

$ argo submit -n argo helloworld.yaml --watch

Then you can see the following output.

Name:                hello-world-9pw7v
Namespace:           argo
ServiceAccount:      default
Status:              Succeeded
Conditions:          Completed True
Created:             Mon Mar 08 14:51:35 +0800 (10 seconds ago)
Started:             Mon Mar 08 14:51:35 +0800 (10 seconds ago)
Finished:            Mon Mar 08 14:51:45 +0800 (now)
Duration:            10 seconds
Progress:            1/1
ResourcesDuration:   4s*(1 cpu),4s*(100Mi memory)

STEP                  TEMPLATE  PODNAME            DURATION  MESSAGE
 ✔ hello-world-9pw7v  whalesay  hello-world-9pw7v  5s

You can also view the status through argo list, as follows:

# argo list -n argo
NAME                STATUS     AGE   DURATION   PRIORITY
hello-world-9pw7v   Succeeded  1m    10s        0

Use argo logs to view the specific log, as follows:

# argo logs -n argo hello-world-9pw7v
hello-world-9pw7v:  _____________
hello-world-9pw7v: < hello world >
hello-world-9pw7v:  -------------
hello-world-9pw7v:     \
hello-world-9pw7v:      \
hello-world-9pw7v:       \
hello-world-9pw7v:                     ##         .
hello-world-9pw7v:               ## ## ##        ==
hello-world-9pw7v:            ## ## ## ##       ===
hello-world-9pw7v:        /""""""""""""""""___/ ===
hello-world-9pw7v:   ~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~ /  ===- ~~~
hello-world-9pw7v:        \______ o          __/
hello-world-9pw7v:         \    \        __/
hello-world-9pw7v:           \____\______/

Core Concepts

Workflow

Workflow is the most important resource in Argo, and it has two main functions:

It defines the workflow to be executed

It stores the status of the workflow

The workflow to be executed is defined in the Workflow.spec field, which mainly includes templates and entrypoint, as follows:

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-world-    # Workflow name prefix
spec:
  entrypoint: whalesay          # the template to run first
  templates:
    - name: whalesay            # defines the whalesay template, matching entrypoint
      container:                # defines a container that outputs "hello world"
        image: docker/whalesay
        command: [cowsay]
        args: ["hello world"]

Templates

templates is a list whose entries fall into two main categories:

Define a specific workflow

Call other templates to provide parallel control

Define a specific workflow

There are four categories for defining specific workflows, as follows:

Container

Script

Resource

Suspend

Container

Container is the most commonly used template type. It schedules a container, and the template's specification is the same as the Kubernetes container specification, as follows:

- name: whalesay
  container:
    image: docker/whalesay
    command: [cowsay]
    args: ["hello world"]

Script

Script is another wrapper around Container. It is defined in the same way as Container, except that a source field is added for the custom script, as follows:

- name: gen-random-int
  script:
    image: python:alpine3.6
    command: [python]
    source: |
      import random
      i = random.randint(1, 100)
      print(i)

The output of the script is automatically exported into {{tasks.<NAME>.outputs.result}} or {{steps.<NAME>.outputs.result}}, depending on how it was called.
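Conceptually, Argo runs the script in its container and captures the standard output as the result. The following Python sketch (illustrative only, not Argo code) shows the same capture mechanism:

```python
import subprocess
import sys

# The same kind of script a Script template wraps.
script_source = "import random\nprint(random.randint(1, 100))"

# Run it and capture stdout, the way Argo exposes it as outputs.result.
proc = subprocess.run(
    [sys.executable, "-c", script_source],
    capture_output=True, text=True, check=True,
)
result = proc.stdout.strip()  # what {{steps.<NAME>.outputs.result}} would hold
print(result)
```

Note that only standard output is captured; anything written to stderr is not part of the result.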

Resource

Resource performs cluster resource operations directly on the Kubernetes cluster: it can get, create, apply, delete, replace, and patch cluster resources. For example, to create a ConfigMap in the cluster:

- name: k8s-owner-reference
  resource:
    action: create
    manifest: |
      apiVersion: v1
      kind: ConfigMap
      metadata:
        generateName: owned-eg-
      data:
        some: value

Suspend

Suspend pauses execution, either for a fixed duration or until resumed manually with argo resume. The definition format is as follows:

- name: delay
  suspend:
    duration: "20s"

Call other templates to provide parallel control

There are also two categories for invoking other templates:

Steps

Dag

Steps

Steps defines tasks as a series of steps. Its structure is a "list of lists": the outer list runs sequentially, and the items of each inner list run in parallel. As follows:

- name: hello-hello-hello
  steps:
    - - name: step1
        template: prepare-data
    - - name: step2a
        template: run-data-first-half
      - name: step2b
        template: run-data-second-half

Here step1 runs first; once it finishes, step2a and step2b run in parallel.
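The "list of lists" execution order can be sketched in plain Python (an illustrative stand-in, not Argo's implementation): each outer entry runs after the previous one finishes, and the entries inside an inner list run concurrently.

```python
from concurrent.futures import ThreadPoolExecutor

def run_step(name):
    # Stand-in for launching a step's container.
    return f"{name} done"

# Outer list: sequential groups; inner lists: parallel steps.
steps = [
    ["step1"],
    ["step2a", "step2b"],
]

results = []
for group in steps:  # groups run one after another
    with ThreadPoolExecutor() as pool:  # members of a group run concurrently
        results.extend(pool.map(run_step, group))

print(results)  # ['step1 done', 'step2a done', 'step2b done']
```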

Conditions can also be evaluated with when. As follows:

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: coinflip-
spec:
  entrypoint: coinflip
  templates:
    - name: coinflip
      steps:
        - - name: flip-coin
            template: flip-coin
        - - name: heads
            template: heads
            when: "{{steps.flip-coin.outputs.result}} == heads"
          - name: tails
            template: tails
            when: "{{steps.flip-coin.outputs.result}} == tails"
    - name: flip-coin
      script:
        image: python:alpine3.6
        command: [python]
        source: |
          import random
          result = "heads" if random.randint(0, 1) == 0 else "tails"
          print(result)
    - name: heads
      container:
        image: alpine:3.6
        command: [sh, -c]
        args: ["echo \"it was heads\""]
    - name: tails
      container:
        image: alpine:3.6
        command: [sh, -c]
        args: ["echo \"it was tails\""]

Submit this Workflow, and the execution effect is as follows:

In addition to condition evaluation with when, you can also perform loops. The sample code is as follows:

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: loops-
spec:
  entrypoint: loop-example
  templates:
    - name: loop-example
      steps:
        - - name: print-message
            template: whalesay
            arguments:
              parameters:
                - name: message
                  value: "{{item}}"
            withItems:
              - hello world
              - goodbye world
    - name: whalesay
      inputs:
        parameters:
          - name: message
      container:
        image: docker/whalesay:latest
        command: [cowsay]
        args: ["{{inputs.parameters.message}}"]

Submit the Workflow, and the output is as follows:

Dag

Dag defines the dependencies between tasks: you can require that other tasks complete before a given task starts, and tasks without any dependencies run immediately. As follows:

- name: diamond
  dag:
    tasks:
      - name: A
        template: echo
      - name: B
        dependencies: [A]
        template: echo
      - name: C
        dependencies: [A]
        template: echo
      - name: D
        dependencies: [B, C]
        template: echo

Here A runs immediately, B and C depend on A, and D depends on both B and C.
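The same ordering rule can be checked with a small Python topological sketch (illustrative only): a task becomes runnable once all of its dependencies have finished.

```python
# Diamond DAG from the example: A first, then B and C, then D.
dependencies = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

finished = set()
order = []
while len(finished) < len(dependencies):
    # All tasks whose dependencies are met start in the same round.
    runnable = [t for t, deps in dependencies.items()
                if t not in finished and all(d in finished for d in deps)]
    order.append(sorted(runnable))
    finished.update(runnable)

print(order)  # [['A'], ['B', 'C'], ['D']]
```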

Then run an example to see the effect. The example is as follows:

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: dag-diamond-
spec:
  entrypoint: diamond
  templates:
    - name: diamond
      dag:
        tasks:
          - name: A
            template: echo
            arguments:
              parameters: [{name: message, value: A}]
          - name: B
            dependencies: [A]
            template: echo
            arguments:
              parameters: [{name: message, value: B}]
          - name: C
            dependencies: [A]
            template: echo
            arguments:
              parameters: [{name: message, value: C}]
          - name: D
            dependencies: [B, C]
            template: echo
            arguments:
              parameters: [{name: message, value: D}]
    - name: echo
      inputs:
        parameters:
          - name: message
      container:
        image: alpine:3.7
        command: [echo, "{{inputs.parameters.message}}"]

Submit the Workflow:

argo submit -n argo dag.yaml --watch


Variables

Variables can be used in an Argo Workflow, as follows:

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-world-parameters-
spec:
  entrypoint: whalesay
  arguments:
    parameters:
      - name: message
        value: hello world
  templates:
    - name: whalesay
      inputs:
        parameters:
          - name: message
      container:
        image: docker/whalesay
        command: [cowsay]
        args: ["{{inputs.parameters.message}}"]

First, arguments is defined in the spec field, declaring the variable message with the value hello world. Then an inputs field is defined under the template to declare the template's input parameters, and the variable is referenced with the "{{ }}" syntax.
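The "{{ }}" references resolve by simple text substitution before the step's pod is created. A rough Python sketch of the idea (illustrative only, not Argo's resolver):

```python
import re

def substitute(template: str, params: dict) -> str:
    # Replace {{inputs.parameters.<name>}} tokens with their values,
    # loosely mimicking how Argo resolves variables before running a step.
    return re.sub(
        r"\{\{inputs\.parameters\.([\w-]+)\}\}",
        lambda m: params[m.group(1)],
        template,
    )

args = 'cowsay "{{inputs.parameters.message}}"'
print(substitute(args, {"message": "hello world"}))  # cowsay "hello world"
```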

Variables also support a number of functions, mainly:

filter: filter a list

asInt: convert to an integer

asFloat: convert to a float

string: convert to a string

toJson: convert to JSON

Example:

filter([1, 2], { # > 1 })
asInt(inputs.parameters["my-int-param"])
asFloat(inputs.parameters["my-float-param"])
string(1)
toJson([1, 2])

More of the syntax can be learned at https://github.com/antonmedv/expr/blob/master/docs/Language-Definition.md.
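For intuition, the expr functions above correspond roughly to the following Python operations (an analogy only, not Argo or expr code):

```python
import json

# filter([1, 2], { # > 1 })  ->  keep items matching a predicate
filtered = [x for x in [1, 2] if x > 1]

# asInt / asFloat  ->  numeric conversion of a string parameter
as_int = int("7")
as_float = float("2.5")

# string(1)  ->  string conversion
as_string = str(1)

# toJson([1, 2])  ->  JSON encoding
as_json = json.dumps([1, 2])

print(filtered, as_int, as_float, as_string, as_json)
```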

Artifact repository

When Argo was installed, MinIO was installed with it as the artifact repository. So how is it used?

Let's take a look at an official example, as follows:

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: artifact-passing-
spec:
  entrypoint: artifact-example
  templates:
    - name: artifact-example
      steps:
        - - name: generate-artifact
            template: whalesay
        - - name: consume-artifact
            template: print-message
            arguments:
              artifacts:
                - name: message
                  from: "{{steps.generate-artifact.outputs.artifacts.hello-art}}"
    - name: whalesay
      container:
        image: docker/whalesay:latest
        command: [sh, -c]
        args: ["sleep 1; cowsay hello world | tee /tmp/hello_world.txt"]
      outputs:
        artifacts:
          - name: hello-art
            path: /tmp/hello_world.txt
    - name: print-message
      inputs:
        artifacts:
          - name: message
            path: /tmp/message
      container:
        image: alpine:latest
        command: [sh, -c]
        args: ["cat /tmp/message"]

It has two steps:

First, generate the artifact.

Then, consume the artifact.
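The two-step artifact hand-off can be mimicked in plain Python (an illustrative sketch only; in the real flow the file goes through the MinIO artifact repository between the two steps):

```python
import os
import tempfile

workdir = tempfile.mkdtemp()

# Step 1 (generate-artifact): the step writes its output to the path
# declared as an output artifact; Argo would upload this file to MinIO.
hello_art = os.path.join(workdir, "hello_world.txt")
with open(hello_art, "w") as f:
    f.write("hello world\n")

# Step 2 (consume-artifact): Argo downloads the artifact to the declared
# input path before the container starts; the step then just reads it.
message_path = os.path.join(workdir, "message")
with open(hello_art) as src, open(message_path, "w") as dst:
    dst.write(src.read())  # stand-in for the upload/download round trip

print(open(message_path).read().strip())  # hello world
```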

Submit the Workflow, and the running result is as follows:

Then you can see the generated artifact in MinIO. The artifact has been compressed, as follows:

WorkflowTemplate

WorkflowTemplate is a template for Workflow objects, and it can be referenced from within a WorkflowTemplate or from other Workflows and WorkflowTemplates in the cluster.

The difference between WorkflowTemplate and template:

A template is just a single task under templates in a Workflow; every Workflow must define at least one template.

A WorkflowTemplate is a Workflow definition that resides in the cluster; the templates it contains can be referenced from within the WorkflowTemplate itself or from other Workflows and WorkflowTemplates in the cluster.

Since version 2.7, a WorkflowTemplate is defined in the same way as a Workflow; we can simply change kind: Workflow to kind: WorkflowTemplate. For example:

apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: workflow-template-1
spec:
  entrypoint: whalesay-template
  arguments:
    parameters:
      - name: message
        value: hello world
  templates:
    - name: whalesay-template
      inputs:
        parameters:
          - name: message
      container:
        image: docker/whalesay
        command: [cowsay]
        args: ["{{inputs.parameters.message}}"]

Create the WorkflowTemplate as follows:

argo template create workflowtemplate.yaml

Then reference it in a Workflow, as follows:

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: workflow-template-hello-world-
spec:
  entrypoint: whalesay
  templates:
    - name: whalesay
      steps:                              # a referenced template must appear under steps/dag
        - - name: call-whalesay-template
            templateRef:                  # the template reference field
              name: workflow-template-1   # WorkflowTemplate name
              template: whalesay-template # specific template name
            arguments:                    # parameters
              parameters:
                - name: message
                  value: "hello world"

ClusterWorkflowTemplate

ClusterWorkflowTemplate creates a cluster-scoped WorkflowTemplate that can be referenced by any Workflow.

Define a ClusterWorkflowTemplate as follows.

apiVersion: argoproj.io/v1alpha1
kind: ClusterWorkflowTemplate
metadata:
  name: cluster-workflow-template-whalesay-template
spec:
  templates:
    - name: whalesay-template
      inputs:
        parameters:
          - name: message
      container:
        image: docker/whalesay
        command: [cowsay]
        args: ["{{inputs.parameters.message}}"]

Then use templateRef in workflow to reference it, as follows:

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: workflow-template-hello-world-
spec:
  entrypoint: whalesay
  templates:
    - name: whalesay
      steps:
        - - name: call-whalesay-template
            templateRef:                  # reference the template
              name: cluster-workflow-template-whalesay-template  # ClusterWorkflowTemplate name
              template: whalesay-template # specific template name
              clusterScope: true          # indicates a ClusterWorkflowTemplate reference
            arguments:                    # parameters
              parameters:
                - name: message
                  value: "hello world"

The above covers the basic theory of Argo; more can be learned from the official documentation.

Now let's walk through a simple CI/CD practice to see how to use Argo Workflows.

The whole CI/CD process is very simple: pull code -> compile -> build image -> upload image -> deploy.
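The five stages can be thought of as a strictly ordered pipeline where any failure stops everything downstream, just as a failed step fails the Workflow. A minimal Python sketch of that control flow (illustrative only; in Argo each stage is a container step):

```python
# The five CI/CD stages, run strictly in order.
stages = ["pull code", "compile", "build image", "upload image", "deploy"]

def run_stage(name: str) -> bool:
    # Stand-in for launching the stage's container and checking its exit code.
    print(f"running: {name}")
    return True

completed = []
for stage in stages:
    if not run_stage(stage):
        raise SystemExit(f"pipeline failed at: {stage}")
    completed.append(stage)

print("pipeline succeeded")
```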

Define a WorkflowTemplate as follows:

apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  annotations:
    workflows.argoproj.io/description: |
      Checkout out from Git, build and deploy application.
    workflows.argoproj.io/maintainer: '@joker'
    workflows.argoproj.io/tags: java, git
    workflows.argoproj.io/version: '>= 2.9.0'
  name: devops-java
spec:
  entrypoint: main
  arguments:
    parameters:
      - name: repo
        value: gitlab-test.coolops.cn/root/springboot-helloworld.git
      - name: branch
        value: master
      - name: image
        value: registry.cn-hangzhou.aliyuncs.com/rookieops/myapp:202103101613
      - name: cache-image
        value: registry.cn-hangzhou.aliyuncs.com/rookieops/myapp
      - name: dockerfile
        value: Dockerfile
      - name: devops-cd-repo
        value: gitlab-test.coolops.cn/root/devops-cd.git
      - name: gitlabUsername
        value: devops
      - name: gitlabPassword
        value: devops123456
  templates:
    - name: main
      steps:
        - - name: Checkout
            template: Checkout
        - - name: Build
            template: Build
        - - name: BuildImage
            template: BuildImage
        - - name: Deploy
            template: Deploy
    # Pull the code
    - name: Checkout
      script:
        image: registry.cn-hangzhou.aliyuncs.com/rookieops/maven:3.5.0-alpine
        workingDir: /work
        command:
          - sh
        source: |
          git clone --branch {{workflow.parameters.branch}} http://{{workflow.parameters.gitlabUsername}}:{{workflow.parameters.gitlabPassword}}@{{workflow.parameters.repo}} .
        volumeMounts:
          - mountPath: /work
            name: work
    # Compile and package
    - name: Build
      script:
        image: registry.cn-hangzhou.aliyuncs.com/rookieops/maven:3.5.0-alpine
        workingDir: /work
        command:
          - sh
        source: |
          mvn -B clean package -Dmaven.test.skip=true -Dautoconfig.skip
        volumeMounts:
          - mountPath: /work
            name: work
    # Build the image
    - name: BuildImage
      volumes:
        - name: docker-config
          secret:
            secretName: docker-config
      container:
        image: registry.cn-hangzhou.aliyuncs.com/rookieops/kaniko-executor:v1.5.0
        workingDir: /work
        args:
          - --context=.
          - --dockerfile={{workflow.parameters.dockerfile}}
          - --destination={{workflow.parameters.image}}
          - --skip-tls-verify
          - --reproducible
          - --cache=true
          - --cache-repo={{workflow.parameters.cache-image}}
        volumeMounts:
          - mountPath: /work
            name: work
          - name: docker-config
            mountPath: /kaniko/.docker/
    # Deploy
    - name: Deploy
      script:
        image: registry.cn-hangzhou.aliyuncs.com/rookieops/kustomize:v3.8.1
        workingDir: /work
        command:
          - sh
        source: |
          git remote set-url origin http://{{workflow.parameters.gitlabUsername}}:{{workflow.parameters.gitlabPassword}}@{{workflow.parameters.devops-cd-repo}}
          git config --global user.name "Administrator"
          git config --global user.email "coolops@163.com"
          git clone http://{{workflow.parameters.gitlabUsername}}:{{workflow.parameters.gitlabPassword}}@{{workflow.parameters.devops-cd-repo}} /work/devops-cd
          cd /work/devops-cd
          git pull
          cd /work/devops-cd/devops-simple-java
          kustomize edit set image {{workflow.parameters.image}}
          git commit -am 'image update'
          git push origin master
        volumeMounts:
          - mountPath: /work
            name: work
  volumeClaimTemplates:
    - name: work
      metadata:
        name: work
      spec:
        storageClassName: nfs-client-storageclass
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi

Description:

1. kaniko is used to build the image. It does not need docker.sock mounted, but it does need config.json to push the image, so you must create a secret first, as follows:

kubectl create secret generic docker-config --from-file=.docker/config.json -n argo

2. Prepare a storageClass; you can also simply use an emptyDir instead. A persistent volume, however, lets you keep cache files and speed up the build (I did not do that above).

3. Create a WorkflowTemplate with the following command:

argo template create -n argo devops-java.yaml

4. Create a Workflow, which can be created manually, as follows:

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: workflow-template-devops-java-
spec:
  workflowTemplateRef:
    name: devops-java

You can also create it directly in the UI, which is what I do here: select the WorkflowTemplate just created and click Create, as follows:

A Workflow is then generated, as follows:

Click on it and you can see each specific step, as follows

Click on each specific step to see the log, as follows:

You can also see the execution result of Workflow in the command line interface, as follows:

Was the above content helpful to you? If you want to learn more related knowledge or read more related articles, please follow the industry information channel. Thank you for your support.
