How to use Argo Events' cloud-native event-driven architecture


Many readers may not be familiar with how to use Argo Events' cloud-native event-driven architecture, so this article summarizes the topic with detailed content and clear steps. I hope you get something out of it; let's take a look.

1. Introduction to Argo Events

Events should be familiar to everyone; Kubernetes itself is essentially event-driven. Each controller in the controller manager is, at heart, a different event handler. For example, all ReplicaSet objects are managed by the ReplicaSetController, which listens for changes to ReplicaSets and their associated Pods through Informers and keeps the running state consistent with the declared spec.

Every Kubernetes controller, however, listens for and handles internal events. At the application layer we also have many external events, such as CI/CD events, webhook events, log events, and so on. How do we handle these? Kubernetes currently has no native answer.

You could of course implement your own event handler and run it on Kubernetes, but that is hard to do well. The Argo Events component solves this problem neatly.

The official Argo Events flow chart illustrates the architecture:

First, the event source (EventSource) can be a webhook, S3, GitHub, SQS, and so on. In the middle sits a component once called Gateway and now called EventBus. More precisely, the configuration duties of the old Gateway have been merged into EventSource, while EventBus is a newly introduced component whose backend defaults to the high-performance distributed messaging middleware NATS [1]; other middleware such as Kafka can also be used.

The EventBus can be thought of as a message queue for events, with EventSource as the producer and Sensor as the consumer. In more detail, the EventSource publishes events to the EventBus, the Sensor subscribes to the EventBus, and the EventBus forwards each event to the Sensors that subscribed to it. The EventBus itself is not shown in the figure above. For the detailed design, see the Argo Events Enhancement Proposals [2].

Some may ask why the EventBus does not feed Triggers directly, and why a Sensor sits in the middle. There are two main reasons: first, to loosely couple event forwarding from event handling; second, to parameterize Trigger events. A Sensor can not only filter events but also parameterize them. For example, if the downstream Trigger creates a Kubernetes Pod, the Pod's metadata and env can be populated from the event content.

A Sensor registers one or more Triggers, which can invoke AWS Lambda functions, create Argo Workflows, create Kubernetes objects, and so on. Simply put, you can execute Lambda functions, dynamically create Kubernetes objects, or create the Workflows described earlier.

Remember the Argo Rollouts introduced earlier, where we demonstrated a manual promote to release or roll back an application? With Argo Events, this can be combined seamlessly with a test platform or CI/CD events to release or roll back applications automatically.

2. A simple webhook example

Deploying Argo Events is very simple, either via kubectl apply or via Helm; refer to the Installation document [3]. I won't repeat it here.

Note that after deploying Argo Events you also need to deploy an EventBus. NATS is the officially recommended middleware, and the documentation shows how to deploy it as a StatefulSet.
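A minimal EventBus manifest using the default native NATS backend looks roughly like this (a sketch based on the official quick start; the replica count and token auth strategy are assumptions):

apiVersion: argoproj.io/v1alpha1
kind: EventBus
metadata:
  # Sensors and EventSources resolve the bus named "default" automatically
  name: default
spec:
  nats:
    native:
      # three-node NATS cluster managed by the EventBus controller
      replicas: 3
      # clients authenticate with an auto-generated token
      auth: token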

Let's take the simplest webhook event as an example to understand the function and usage of each Argo Events component.

First, following the introduction above, we define the EventSource:

apiVersion: argoproj.io/v1alpha1
kind: EventSource
metadata:
  name: webhook
spec:
  service:
    ports:
      - port: 12000
        targetPort: 12000
  webhook:
    webhook_example:
      port: "12000"
      endpoint: /webhook
      method: POST

This EventSource defines a webhook named webhook_example on port 12000 with path /webhook. Webhooks generally use the POST method, so we configure the webhook handler to accept only POST requests.

To expose this webhook EventSource, it also creates a Service on port 12000.

At this point we can manually curl the Service:

# kubectl get svc -l eventsource-name=webhook
NAME                      TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)     AGE
webhook-eventsource-svc   ClusterIP   10.96.93.24   <none>        12000/TCP   5m49s
# curl -X POST -d '{}' 10.96.93.24:12000/webhook
success

Of course, nothing happens at this point, because no Sensor has been registered yet.

Next we define the Sensor:
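A minimal Sensor along these lines is sketched below; the embedded Workflow spec (the whalesay container) and the trigger name are illustrative, borrowed from the official webhook example:

apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: webhook
spec:
  dependencies:
    - name: test-dep
      eventSourceName: webhook
      eventName: webhook_example
  triggers:
    - template:
        name: webhook-workflow-trigger
        k8s:
          operation: create
          source:
            resource:
              apiVersion: argoproj.io/v1alpha1
              kind: Workflow
              metadata:
                generateName: webhook-
              spec:
                entrypoint: whalesay
                arguments:
                  parameters:
                    - name: message
                      value: hello    # overridden by the event payload below
                templates:
                  - name: whalesay
                    container:
                      image: docker/whalesay:latest
                      command: [cowsay]
                      args: ["{{workflow.parameters.message}}"]
          parameters:
            - src:
                dependencyName: test-dep
              dest: spec.arguments.parameters.0.value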

First, dependencies defines which EventSource and which specific webhook to subscribe to. Because one EventSource can define multiple webhooks, the eventSourceName and eventName parameters must both be specified.

In triggers, we define a Workflow trigger whose operation (the Action) is create; the spec of the Workflow it creates is configured under resource.

The final parameters section defines the Workflow's parameters, which are taken from the event. Here we pass the entire event as the Workflow's input; alternatively, you can extract just part of the body with a dataKey, e.g. dataKey: body.message.

Now we send the webhook event again with curl:

curl -X POST -d '{"message": "HelloWorld!"}' 10.96.93.24:12000/webhook

Listing the Argo Workflows shows that a new instance has been created:

# argo list
NAME            STATUS      AGE   DURATION   PRIORITY
webhook-8xt4s   Succeeded   1m    18s        0

Viewing the workflow output, we see that because we sent the whole event as the workflow input, the data content is base64-encoded.
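It can be decoded with the standard base64 tool; the encoded string below is truncated for illustration:

echo 'eyJoZWFkZXIiOnsi...' | base64 -d

The decoded content looks as follows: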

{"header": {"Accept": ["* / *"], "Content-Length": ["26"], "Content-Type": ["application/x-www-form-urlencoded"], "User-Agent": ["curl/7.58.0"]}, "body": {"message": "HelloWorld!"}}

From this we can also see that an event consists of two parts, context and data; data in turn contains a header part and a body part, and any part of the content can be referenced by key in parameters.

The webhook above was triggered by a manual curl, but you can just as easily register it as a webhook on GitHub or Bitbucket so that the event fires whenever code is pushed; a sketch follows.
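For reference, a GitHub EventSource looks roughly like this (based on the official example; the repository owner and name, the public URL, and the Secret name are placeholders):

apiVersion: argoproj.io/v1alpha1
kind: EventSource
metadata:
  name: github
spec:
  github:
    push_example:
      repositories:
        - owner: my-org            # placeholder
          names:
            - my-repo              # placeholder
      webhook:
        endpoint: /push
        port: "12000"
        method: POST
        # externally reachable URL that GitHub can deliver to
        url: https://my-public-endpoint.example.com
      events:
        - push
      apiToken:
        # Secret holding a GitHub API token used to register the webhook
        name: github-access        # placeholder
        key: token
      contentType: json
      active: true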

3. Triggering AWS Lambda from Kubernetes events

The EventSource in the previous example is a webhook, but Argo Events supports many EventSource types besides webhooks, for example:

amqp

aws-sns

aws-sqs

github/gitlab

hdfs

kafka

redis

Kubernetes resource

...

Triggers likewise come in many varieties, for example:

AWS Lambda

amqp

kafka

...

The project provides rich official examples of all of these; refer to the argo-events examples [4].

Here we take the Kubernetes resource event source as an example. It listens for the state of Kubernetes resources, such as Pod creation and deletion. We use Pod creation as the example:

apiVersion: argoproj.io/v1alpha1
kind: EventSource
metadata:
  name: k8s-resource-demo
spec:
  template:
    serviceAccountName: argo-events-sa
  resource:
    pod_demo:
      namespace: argo-events
      version: v1
      resource: pods
      eventTypes:
        - ADD
      filter:
        afterStart: true
        labels:
          - key: app
            operation: "="
            value: my-pod

The example above listens for the ADD event of Pods, i.e. Pod creation, and the filter keeps only Pods carrying the app=my-pod label. Note that the service account argo-events-sa used here must have list and watch permissions on Pods.
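A minimal Role and RoleBinding granting those permissions might look like this (a sketch; the Role name is an assumption, and the namespace follows the example above):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-watcher            # hypothetical name
  namespace: argo-events
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: argo-events-sa-pod-watcher
  namespace: argo-events
subjects:
  - kind: ServiceAccount
    name: argo-events-sa
    namespace: argo-events
roleRef:
  kind: Role
  name: pod-watcher
  apiGroup: rbac.authorization.k8s.io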

Next we use an AWS Lambda trigger. The Lambda function has been created in AWS in advance; it is trivial and simply returns the event it receives.

Create the Sensor as follows:

apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: aws-lambda-trigger-demo
spec:
  template:
    serviceAccountName: argo-events-sa
  dependencies:
    - name: test-dep
      eventSourceName: k8s-resource-demo
      eventName: pod_demo
  triggers:
    - template:
        name: lambda-trigger
        awsLambda:
          functionName: hello
          accessKey:
            name: aws-secret
            key: accesskey
          secretKey:
            name: aws-secret
            key: secretkey
          namespace: argo-events
          region: cn-northwest-1
          payload:
            - src:
                dependencyName: test-dep
                dataKey: body.name
              dest: name

As shown above, the AWS access key and access secret need to be stored in the aws-secret Secret in advance.

Now we create a new Pod, my-pod:

apiVersion: v1
kind: Pod
metadata:
  labels:
    app: my-pod
  name: my-pod
spec:
  containers:
    - image: nginx
      name: my-pod
  dnsPolicy: ClusterFirst
  restartPolicy: Always

When the Pod starts, we can see that the AWS Lambda function is triggered.

4. Event filter

In the example above, every webhook event triggers the Sensor. Sometimes we don't want to handle every event; Argo Events supports filtering on both data and context. For example, to handle only events whose message is hello or hey and ignore everything else, we only need to add a filter to the original test-dep dependency:

dependencies:
  - name: test-dep
    eventSourceName: webhook
    eventName: webhook_example
    filters:
      - name: data-filter
        data:
          - path: body.message
            type: string
            value:
              - "hello"
              - "hey"

The filter specifies data-based filtering: the field filtered on is body.message, and the matching values are hello and hey.

5. Trigger policy

The trigger policy determines whether a trigger's execution ultimately counts as success or failure. If the trigger creates a Kubernetes resource such as a Workflow, the trigger's result can be judged from the resource's final state; if it invokes HTTP or AWS Lambda, you must define the success statuses yourself.
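For the Kubernetes resource case, a hedged sketch based on the official trigger-policy example (the label key and backoff values are assumptions) would sit under the trigger template:

policy:
  k8s:
    # consider the trigger successful once the created Workflow
    # carries the label below (set when it reaches the Succeeded phase)
    labels:
      workflows.argoproj.io/phase: Succeeded
    backoff:
      duration: 10    # seconds between status checks (assumption)
      factor: 2
      steps: 5
    # fail the trigger if the label never appears within the backoff window
    errorOnBackoffTimeout: true

For HTTP or AWS Lambda triggers, by contrast, the allowed response status codes are declared explicitly: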

awsLambda:
  functionName: hello
  accessKey:
    name: aws-secret
    key: accesskey
  secretKey:
    name: aws-secret
    key: secretkey
  namespace: argo-events
  region: us-east-1
  payload:
    - src:
        dependencyName: test-dep
        dataKey: body.message
      dest: message
  policy:
    status:
      allow:
        - 200
        - 201

As shown above, the Trigger is considered successful when the AWS Lambda call returns 200 or 201.

That's all for "How to use Argo Events' cloud-native event-driven architecture". I hope the content shared here has been helpful to you.
