This article explains how KubeEdge and Kuiper together address streaming data processing at the edge.
KubeEdge is an open-source edge computing platform. It builds on the native container orchestration and scheduling capabilities of Kubernetes and extends them with cloud-edge collaboration, computation offloading to the edge, management of massive numbers of edge devices, edge autonomy, and more. KubeEdge will also support scenarios such as 5G MEC and AI cloud-edge collaboration through plug-ins, and it has already been adopted in many fields.
Kuiper, a product for processing streaming data at the edge
The Kuiper project started in early 2019, released its first version in October 2019, and has been iterating ever since. Its overall architecture is a classic streaming-processing architecture.
Product design goal: bring the kind of streaming processing that runs in the cloud, such as Spark and Flink, to the edge
Kuiper architecture diagram
The overall architecture can be divided into three parts. On the left are the sources, which represent where the data comes from: a source may be the edge MQTT broker in KubeEdge, or it may be a file or a database.
On the right are the sinks, which represent where the data goes after processing, that is, the target systems. A target can be MQTT, a file, a database, or an HTTP service.
The middle part is divided into several layers. The top layer handles the business logic of data processing: it provides the SQL statement and rule parsers and the SQL processors that convert statements into SQL plans. The layer below is the streaming runtime and SQL runtime, which execute those plans. The bottom layer is storage, used to persist messages.
Kuiper usage scenarios
Streaming: real-time stream processing at the edge
Rule engine: flexibly define rules to implement alerting and message forwarding
Data format and protocol conversion: flexibly convert between different data formats and heterogeneous protocols between the edge and the cloud, enabling IT/OT convergence
Integration of KubeEdge and Kuiper
Partial architecture diagram
Kuiper is installed behind the KubeEdge MQTT broker, and the whole stack runs at the edge. At the bottom are different Mappers, which provide access to devices speaking a variety of protocols; the edge MQTT broker is used to exchange messages between them.
Data type handling:
Obtain the type definitions from the device model file
Convert incoming data to Kuiper data types
When creating a stream, a schema-less stream definition can be used
Supported data types: int, string, bool, float (see the sketch after this list)
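As a rough illustration of the two options, below is a sketch of a typed stream followed by a schema-less one. The stream and field names are made up, and the exact WITH options may vary by Kuiper version; in the schema-less case the field types are resolved from the KubeEdge model file instead.

```sql
CREATE STREAM typed_demo (temperature FLOAT, humidity BIGINT) WITH (
    DATASOURCE = "devices/sensor1", FORMAT = "JSON", TYPE = "mqtt"
);

CREATE STREAM schemaless_demo () WITH (
    DATASOURCE = "devices/sensor1", FORMAT = "JSON", TYPE = "mqtt"
);
```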
KubeEdge model files and configuration
The figure below shows part of a model file, including the device name, its properties, and each property's name, data type, and description.
Partial model file
Save the device model file
Configure the model file information in etc/mqtt_source.yaml
kubeedgeVersion: not currently used; reserved for future versions of the model file
kubeedgeModelFile: path to the model file
Distribute the configuration through a config-map and save it to the relevant directory (a configuration sketch follows)
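A minimal sketch of the two pieces of configuration described above; the file paths, device name, and property names are illustrative, and the exact keys should be checked against the Kuiper documentation for the version in use. First the source configuration:

```yaml
# etc/mqtt_source.yaml (excerpt)
default:
  qos: 1
  servers:
    - tcp://127.0.0.1:1883
  kubeedgeVersion: ""                      # not used yet; reserved for future model file versions
  kubeedgeModelFile: "jetlink_model.json"  # path to the device model file
```

And a device model file in roughly the shape shown in the figure above:

```json
{
  "deviceModels": [
    {
      "name": "device1",
      "properties": [
        { "name": "temperature", "dataType": "int" },
        { "name": "humidity", "dataType": "int" }
      ]
    }
  ]
}
```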
Kuiper usage process
1) Define a stream: similar to defining a table in a database (see the sketch below)
DATASOURCE="$hw/events/device/+/twin/update" is the device twin topic defined by KubeEdge
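A minimal sketch of such a schema-less stream definition; the stream name demo is made up, and the topic is the KubeEdge device twin topic quoted above.

```sql
CREATE STREAM demo () WITH (
    DATASOURCE = "$hw/events/device/+/twin/update",
    FORMAT = "JSON",
    TYPE = "mqtt"
);
```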
2) Define and submit a rule
Express the business logic in SQL and send the results to the specified targets (a rule sketch follows the SQL feature list below)
Supported SQL
SELECT/FROM/WHERE/ORDER
JOIN/GROUP/HAVING
4 types of time windows + 1 counting window
60+ SQL functions
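A minimal sketch of a rule as it might be submitted to Kuiper; the rule id, the threshold, the broker address, and the result topic are all illustrative, and the stream demo is the one defined above.

```json
{
  "id": "rule1",
  "sql": "SELECT temperature FROM demo WHERE temperature > 30",
  "actions": [
    {
      "mqtt": {
        "server": "tcp://127.0.0.1:1883",
        "topic": "devices/result"
      }
    }
  ]
}
```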
3) Run
Deploy Kuiper rules in KubeEdge
1) Use the Kuiper-Kubernetes-tool
2) The tool is a utility program that runs separately in a container and executes command files issued through a config-map
Its configuration file specifies information such as the address and port of the Kuiper service and the directory where the command files are located
3) Command files are issued through a config-map; the tool scans the directory periodically and executes the commands it finds (a sketch of such a command file follows)
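The exact command-file format depends on the tool version; the sketch below assumes a JSON file that lists REST calls to be replayed against the Kuiper API, with the url, method, and data fields being assumptions based on that model.

```json
{
  "commands": [
    {
      "url": "/streams",
      "method": "post",
      "data": {
        "sql": "CREATE STREAM demo () WITH (DATASOURCE=\"$hw/events/device/+/twin/update\", FORMAT=\"JSON\", TYPE=\"mqtt\")"
      }
    },
    {
      "url": "/rules",
      "method": "post",
      "data": {
        "id": "rule1",
        "sql": "SELECT temperature FROM demo WHERE temperature > 30",
        "actions": [ { "log": {} } ]
      }
    }
  ]
}
```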
Kuiper-manager: cloud-edge collaborative management console
Another approach is to manage many Kuiper instances through a management console, since Kuiper can run on many nodes.
For example, in an Internet of Vehicles scenario Kuiper runs in the on-board box of each of a great many cars; Kuiper-manager can access all of these instances and update their rules uniformly.
The first step is to install plug-ins. Some plug-ins are provided out of the box, for example for accessing different sources; if a source is not supported, you can write your own plug-in and install it. Once installed, the console exposes an interface through which the plug-in can be used.
Next, create the stream definition.
The following figure shows where the processed data is stored; in this example the data is saved to the file system under a specified path.
The last figure shows the visual editing interface, which can be used to write rules.
Application case: National Industrial Internet Big Data Center
This case is a very typical usage scenario. K8s and CloudCore are deployed in the cloud, and rules are distributed to Kuiper through the management channel; Kuiper takes the data from the edge MQTT broker, defines it as streams, and cleans it. There are currently two channels for the results. The first sends the processed messages to the cloud MQTT broker. The second persists data locally, for example in InfluxDB, so that third-party applications running at the edge can read the data from InfluxDB directly and visualize it. At the bottom, Mappers connect the different devices and protocols. A sketch of a rule serving both channels follows.
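A minimal sketch of such a rule, assuming an InfluxDB sink plug-in is installed; the stream and field names, broker addresses, database and measurement names, and the exact sink property names are illustrative.

```json
{
  "id": "cleanAndForward",
  "sql": "SELECT deviceId, temperature FROM factoryStream WHERE temperature > 0",
  "actions": [
    {
      "mqtt": {
        "server": "tcp://cloud-broker:1883",
        "topic": "factory/cleaned"
      }
    },
    {
      "influx": {
        "addr": "http://127.0.0.1:8086",
        "databasename": "factory",
        "measurement": "sensor_data"
      }
    }
  ]
}
```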
Usage scenario of the Kuiper rule engine
Kuiper serves as the built-in rule engine of LF EdgeX Foundry, officially released with the Geneva version in April 2020.
Application case: data format conversion between heterogeneous systems
To exchange data with ERP, MES, and other IT systems, Kuiper provides very flexible extension capabilities. First, heterogeneous data collected through extension plug-ins can be processed quickly and flexibly with built-in or extended SQL functions. Second, once the analysis results are available, the data template of a sink can convert them into the data formats and protocols required by each target system. For example, the same rule, "temperature greater than 30 degrees Celsius", may need to both send a control instruction to a device and push a notification to WeChat; the two target systems require different interfaces and payloads, but the rule is the same, so one trigger can send data to different topics with different templates, without any extra programming (see the sketch below). Third, the SAP NetWeaver RFC SDK can be used to read data from SAP, process and convert it, and send it on to other heterogeneous systems.
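A minimal sketch of the "one rule, two targets" idea using sink data templates; the topics, URL, template contents, and field names are illustrative, and sendSingle is assumed so that the template is applied to each result record.

```json
{
  "id": "highTemperature",
  "sql": "SELECT deviceId, temperature FROM demo WHERE temperature > 30",
  "actions": [
    {
      "mqtt": {
        "server": "tcp://127.0.0.1:1883",
        "topic": "device/control",
        "sendSingle": true,
        "dataTemplate": "{\"cmd\": \"cool_down\", \"device\": \"{{.deviceId}}\"}"
      }
    },
    {
      "rest": {
        "url": "https://example.com/wechat/notify",
        "method": "post",
        "sendSingle": true,
        "dataTemplate": "{\"msg\": \"Device {{.deviceId}} is at {{.temperature}} degrees, above the 30-degree threshold\"}"
      }
    }
  ]
}
```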
Performance data
Kuiper supports running thousands of rules concurrently
8000 rules × 0.1 messages/second per rule, for a total of 800 messages/second
Rule definition
Source: MQTT
SQL: SELECT temperature FROM source WHERE temperature > 20 (90% of the data is filtered out)
Target: log
Configuration
AWS: 2 cores * 4 GB
Ubuntu
Resource usage
Memory: 89%~72%; about 0.4 MB per rule
CPU: 25%
On an AWS t2.micro instance: 10k+ messages/second throughput
That is how KubeEdge and Kuiper address streaming data processing at the edge. Hopefully this article has given you a practical picture of how the two projects work together and something you can apply in your own work.