
How to Implement Service Catalog Integration Based on CloudEvents


This article introduces how to implement service catalog integration based on CloudEvents. The content is fairly detailed; we hope interested readers find it a useful reference.

Event-driven architecture has long been common in day-to-day platform development: events are sent through a message queue, and the corresponding producers and consumers are built on either side. In the traditional development model, however, different events have different formats, different producers generate events in different formats, and the formats consumers can handle also vary widely. In essence, events, producers and consumers remain coupled. How do we solve this problem? That is the subject of today's topic: CloudEvents.

Introduction to CloudEvents

From the description on the official CloudEvents website, we can see that CloudEvents is essentially a specification for describing event data. So when learning CloudEvents, we should focus on understanding its design rationale rather than just the data structure it presents. It is like learning the TCP protocol: the point is not to memorize what each field is called, but to understand why the field exists and which problem it solves.

How to decouple

To learn CloudEvents, the author takes a top-down approach: first understand how CloudEvents decouples events, consumers and producers on a platform, and then think about the details of the underlying fields.

The life cycle of an event usually includes three stages: production, transmission and consumption. Below we walk through each stage and compare CloudEvents with the traditional event development model.

Event production

Under the traditional development model, the events produced by different businesses differ, and an event usually carries relatively little data; it acts more like a signal, notifying the back-end service that a certain type of event has occurred, after which the corresponding system assembles the event's context data for business-logic processing. CloudEvents, by contrast, pays more attention to the consistency and completeness of events. To ensure that events can be uniformly distributed, parsed and processed, CloudEvents adopts a layered event encapsulation: an "event protocol" layer and an "event data" layer. The event protocol layer means that CloudEvents defines the format of the underlying event envelope, i.e. everyone encapsulates events according to one set of standard attributes so that events can be handled and routed uniformly. The event-specific payload is stored in the corresponding data field.

Completeness, in my personal understanding, means that the events we build in a CloudEvents environment should contain the complete context of the change at the moment it happened, so that downstream systems have enough information for business-logic processing and decision-making. This avoids requiring the back-end system to re-assemble the event's context after receiving it, which mainly solves the problem that, due to transmission delay, the related data may no longer reflect the state at the time of the event and inconsistent states may appear.
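To make the two layers concrete, here is a minimal sketch in Go using the CloudEvents Go SDK (sdk-go v2). The event type, source, and the disk-attachment payload fields are hypothetical names chosen for this example, not values taken from the article.

package main

import (
	"fmt"
	"time"

	cloudevents "github.com/cloudevents/sdk-go/v2"
	"github.com/google/uuid"
)

// DiskAttachedData is the "event data" layer: the full business context of
// the change, so downstream systems do not have to re-assemble it.
// The field names here are hypothetical.
type DiskAttachedData struct {
	VMID     string `json:"vm_id"`
	DiskID   string `json:"disk_id"`
	SizeGB   int    `json:"size_gb"`
	Operator string `json:"operator"`
}

func main() {
	// The "event protocol" layer: standard CloudEvents context attributes.
	e := cloudevents.NewEvent()
	e.SetID(uuid.New().String())
	e.SetSource("service-catalog/vm")     // which system produced the event
	e.SetType("catalog.disk.attached")    // hypothetical structured type
	e.SetTime(time.Now())

	// The "event data" layer: the complete context of this change.
	_ = e.SetData(cloudevents.ApplicationJSON, DiskAttachedData{
		VMID:     "vm-123",
		DiskID:   "disk-456",
		SizeGB:   200,
		Operator: "alice",
	})

	fmt.Println(e) // prints the event in its canonical textual form
}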

Event transmission

After an event is generated, it is usually sent to a message broker for temporary storage. In traditional systems, a specific messaging protocol is chosen for transmission, which usually involves two parts: serialization and the transport protocol. On the transport side, CloudEvents supports common industry-standard protocols such as HTTP, AMQP and MQTT, and is supported by common vendors and platforms such as Kafka, AWS Kinesis, Azure Event Grid and Alibaba Cloud EventBridge. Users can choose the appropriate vendor to distribute their events according to their own scenario.

On the serialization side, CloudEvents provides ready-made bindings for HTTP, AMQP, Kafka and other common protocols, so users do not have to implement the serialization for each protocol by hand.
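As a sketch of how a binding does the serialization for you: with the Go SDK, sending an event over HTTP only requires a client and a target URL, and the on-the-wire encoding is handled by the SDK. The target URL below is a placeholder, not an endpoint from the article.

package main

import (
	"context"
	"log"

	cloudevents "github.com/cloudevents/sdk-go/v2"
)

func send(e cloudevents.Event) {
	// The HTTP protocol binding serializes the event for us;
	// we never marshal the envelope by hand.
	c, err := cloudevents.NewClientHTTP()
	if err != nil {
		log.Fatalf("failed to create client: %v", err)
	}

	// Placeholder endpoint standing in for the company's message queue gateway.
	ctx := cloudevents.ContextWithTarget(context.Background(), "http://mq.example.internal/events")

	if result := c.Send(ctx, e); cloudevents.IsUndelivered(result) {
		log.Printf("failed to send event: %v", result)
	}
}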

Event consumption

A consumer is usually only interested in particular event types. Because the message format is uniform, the platform can easily route a message based on the contents of the message body and deliver it to the right consumer; the consumer is only responsible for receiving the events it cares about and ignores everything else.
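A minimal consumer sketch, again with the Go SDK: the receiver exposes an HTTP endpoint and reacts only to the event types it cares about. The type string and data struct are hypothetical.

package main

import (
	"context"
	"log"

	cloudevents "github.com/cloudevents/sdk-go/v2"
)

// DiskAttachedData mirrors the hypothetical payload from the producer sketch.
type DiskAttachedData struct {
	VMID   string `json:"vm_id"`
	DiskID string `json:"disk_id"`
}

func receive(ctx context.Context, e cloudevents.Event) error {
	// Route purely on the standard "type" attribute.
	switch e.Type() {
	case "catalog.disk.attached": // hypothetical type
		var data DiskAttachedData
		if err := e.DataAs(&data); err != nil {
			return err
		}
		log.Printf("disk %s attached to vm %s", data.DiskID, data.VMID)
	default:
		// Events we are not interested in are simply ignored.
	}
	return nil
}

func main() {
	c, err := cloudevents.NewClientHTTP()
	if err != nil {
		log.Fatalf("failed to create client: %v", err)
	}
	// StartReceiver serves an HTTP endpoint and decodes incoming CloudEvents.
	log.Fatal(c.StartReceiver(context.Background(), receive))
}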

We will share more about CloudEvents itself in a later article; next we introduce how we designed our system on top of CloudEvents.

Getting started: our scenario

In a previous article we introduced our service catalog system. The service catalog needs to integrate different basic services, each with its own formats, as well as billing, efficiency-statistics and other systems; later we may also connect to the company's event-streaming platform. So how do we uniformly distribute and process the heterogeneous data flowing between these heterogeneous modules? The message processing flow is as follows:

Message processing flow

First, on the sending side, we build the message based on CloudEvents, encapsulate the context data of the current change into the data field, and send it to the company's message queue system. Event distribution and routing are handled by the company's message queue: an event receiver only needs to declare which events it cares about and expose an HTTP interface that accepts messages, instead of listening to a specific MQ topic. The message queue performs the routing and delivery, and the consumer parses the event passed by the queue, extracts the corresponding data structure, and then carries out its business processing. Later, if a module is added or a new event-consumption requirement appears, only the corresponding logic needs to be implemented.

Data structure

Combining the CloudEvents specification with our own needs, we defined the data structure of our internal events. The main structure is as follows:
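The original structure diagram is not reproduced here, so the following is only a rough Go sketch reconstructed from the field descriptions below; the names of the non-standard fields are assumptions for illustration.

package catalog

import (
	"encoding/json"
	"time"
)

// InternalEvent sketches our internal event: the standard CloudEvents
// context attributes plus the extra fields described below.
type InternalEvent struct {
	// Standard CloudEvents context attributes.
	SpecVersion     string    `json:"specversion"`
	ID              string    `json:"id"`
	Source          string    `json:"source"`
	Type            string    `json:"type"`
	Time            time.Time `json:"time"`
	DataContentType string    `json:"datacontenttype"`

	// Extension attribute: the list of resources changed by this event.
	Resources []Resource `json:"resources"`

	// Event data: the complete business context of the change.
	Data json.RawMessage `json:"data"`
}

// Resource identifies one resource affected by the event.
type Resource struct {
	Kind string `json:"kind"` // e.g. "vm" or "disk"
	ID   string `json:"id"`
}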

Here we mainly introduce the additional fields we defined for our own scenario:

Type

On the surface, both source and type describe the system where the event occurred, but the difference is that type is structured data. Based on this structure, our billing and efficiency-statistics modules can pick up the event and run the relevant branch logic.
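For example, a structured type could encode system, resource and action, which the billing module can split and branch on. The "system.resource.action" pattern below is an assumed format, since the article does not spell out the exact layout of the type string.

package main

import (
	"fmt"
	"strings"
)

// handleForBilling branches on the structured type string.
func handleForBilling(eventType string) {
	parts := strings.Split(eventType, ".")
	if len(parts) < 3 {
		fmt.Println("unrecognized type:", eventType)
		return
	}
	resource, action := parts[1], parts[2]

	switch {
	case resource == "vm" && action == "created":
		fmt.Println("start billing for a new VM")
	case resource == "disk" && action == "attached":
		fmt.Println("add disk capacity to the bill")
	default:
		fmt.Println("no billing impact for", eventType)
	}
}

func main() {
	handleForBilling("catalog.disk.attached")
}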

Resources: the list of changed resources

This field identifies which resources were affected by the current event. For example, adding a hard disk to a virtual machine actually involves two kinds of resources: the virtual machine and the corresponding disk.
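For that "attach a disk to a VM" example, the resources list would carry both affected resources; a tiny sketch, reusing the assumed Resource shape from above with placeholder IDs:

package main

import "fmt"

// Resource mirrors the sketched structure above.
type Resource struct {
	Kind string
	ID   string
}

func main() {
	// "Attach a disk to a VM" touches two resources.
	resources := []Resource{
		{Kind: "vm", ID: "vm-123"},
		{Kind: "disk", ID: "disk-456"},
	}
	fmt.Println(resources)
}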

Integrating the service catalog

We mentioned earlier that we implemented a service proxy module based on the service catalog and the provided API specification. The main usage scenarios of CloudEvents in the service proxy module are as follows:

1. After the service catalog receives a service change request and saves it to the database, it publishes the corresponding CloudEvents event to the message queue.
2. Routing and forwarding rules configured in the message queue deliver the event to the service proxy module.
3. The service proxy module parses the type field, obtains the address of the corresponding back-end service, extracts the data from the message, and sends it to the real back-end service.
4. After receiving the structured data, the real back-end service processes its own business logic and emits the corresponding event.
5. The service proxy module parses the related resources from the event, calls the corresponding platform to obtain the current state of those resources, and generates a new event.
6. The service catalog module receives the service instance data and stores it in its own database.

If there are subsequent changes, you only need to publish the corresponding events to the message queue, and steps 5-6 are repeated.
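A rough sketch of step 3 in the list above: the service proxy resolves the back-end address from the event type and forwards the event data. The registry map, type strings and URLs are placeholders invented for this example.

package main

import (
	"bytes"
	"context"
	"fmt"
	"log"
	"net/http"

	cloudevents "github.com/cloudevents/sdk-go/v2"
)

// backends maps event types to back-end service addresses (placeholders).
var backends = map[string]string{
	"catalog.vm.created":    "http://vm-service.internal/api/create",
	"catalog.disk.attached": "http://disk-service.internal/api/attach",
}

// proxy receives a CloudEvent from the message queue, resolves the back end
// from the type field and forwards the raw event data to it.
func proxy(ctx context.Context, e cloudevents.Event) error {
	target, ok := backends[e.Type()]
	if !ok {
		return fmt.Errorf("no backend registered for type %q", e.Type())
	}

	req, err := http.NewRequestWithContext(ctx, http.MethodPost, target, bytes.NewReader(e.Data()))
	if err != nil {
		return err
	}
	req.Header.Set("Content-Type", e.DataContentType())

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	return nil
}

func main() {
	c, err := cloudevents.NewClientHTTP()
	if err != nil {
		log.Fatalf("failed to create client: %v", err)
	}
	log.Fatal(c.StartReceiver(context.Background(), proxy))
}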

Although the chain is a bit long, the design of each system along it is actually very simple: communication, reliability, fault tolerance and coupling between systems do not need to be handled by the modules themselves (the message queue service guarantees them). To consume a new event, a module just implements a new interface and edits the routing rules, and the new module is connected.

That is all we have to share on implementing service catalog integration based on CloudEvents. We hope the content above is helpful and that you learned something from it. If you found the article useful, feel free to share it with others.
