Edge computing workload: virtual machine, container or bare metal?

2025-04-10 Update From: SLTechnology News&Howtos


Shulou (Shulou.com) 06/02 report

We live in an age of connected, smart devices. As the number of smart devices grows, the volume of data has surged to new heights. This data travels from end users to the cloud or data center for processing, storage, and analysis, which inevitably introduces latency and bandwidth problems. As Nati Shalom wrote in his blog post "What is edge computing?", edge computing essentially shifts processing power to the edge of the network, closer to the data source. This gives organizations a significant advantage in data-access speed and bandwidth consumption.

Because the edge plays such a critical role, it is equally important to consider the infrastructure technology on which edge workloads run.

Technologies for running edge workloads

Infrastructure technology has undergone an entire paradigm shift: from physical servers to the birth of virtual machines (VMs), and now, most recently, containers. Although VMs have served well over the past decade or so, containers offer inherent advantages over VMs and are also well suited to running edge workloads.

The following figure compares how containers and VMs work.

Each virtual machine runs its own operating system on top of a shared hypervisor (a software or firmware layer), providing "hardware-level virtualization". Containers, by contrast, run directly on the physical infrastructure and share the same kernel, providing "operating-system-level virtualization".

Sharing the operating system keeps container images in the megabyte range, making them very "light" and flexible and cutting startup time to a few seconds, compared with a few minutes for a VM. Because containers share the same operating system, administrative tasks (patches, upgrades, and so on) are also reduced. On the other hand, with containers a single kernel vulnerability can bring down the entire host, whereas an attacker must get through both the host kernel and the hypervisor before reaching a VM's kernel, so VMs remain the better choice where isolation matters most.

Today, much work is aimed at bringing bare-metal capabilities to edge workloads. Packet is one such organization, dedicated to a unique proposition for meeting low-latency and local-processing needs.

Containers on virtual machines or bare metal?

CenturyLink ran an interesting study comparing Kubernetes clusters on bare metal and on virtual machines. The test used an open-source utility called netperf to measure network latency in both clusters.

Because a physical server carries no hypervisor overhead, the results were as expected: Kubernetes and containers running on bare-metal servers showed significantly lower latency, in fact three times lower than Kubernetes running on VMs. In addition, CPU consumption was significantly higher when running the cluster on VMs than on bare metal.
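To make the kind of measurement concrete: netperf's TCP_RR test reports request/response round-trip latency. The Python sketch below imitates that idea with a loopback echo server. It is only an illustration of the methodology, not a substitute for netperf, and every name in it is invented for this example.

```python
import socket
import statistics
import threading
import time

def echo_server(srv: socket.socket) -> None:
    """Accept one connection and echo every byte back (TCP_RR-style peer)."""
    conn, _ = srv.accept()
    with conn:
        while data := conn.recv(64):
            conn.sendall(data)

# Loopback server on an ephemeral port.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]
threading.Thread(target=echo_server, args=(srv,), daemon=True).start()

# Time 1,000 one-byte request/response round trips.
cli = socket.create_connection(("127.0.0.1", port))
cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # no batching
rtts = []
for _ in range(1000):
    t0 = time.perf_counter()
    cli.sendall(b"x")
    cli.recv(64)
    rtts.append(time.perf_counter() - t0)
cli.close()

median_us = statistics.median(rtts) * 1e6
print(f"median round-trip: {median_us:.1f} us")
```

Run the same measurement against a remote peer from a container on a VM and from a container on bare metal, and the hypervisor's extra hop shows up directly in the round-trip numbers.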

Should all edge workloads run on bare metal?

While databases, analytics, machine-learning algorithms, and other data-intensive enterprise applications are ideal candidates for containers on bare metal, running containers on VMs has its own advantages. Compared with a bare-metal environment, VMs provide useful features out of the box, such as moving workloads from one host to another, rolling back to a previous configuration when problems occur, and simpler software upgrades.

Therefore, as mentioned earlier, lightweight containers with fast start and stop times are ideal for edge workloads; the choice between bare metal and VMs is always a trade-off.

Public clouds and edge workloads

Most public clouds, including Microsoft Azure and Amazon Web Services, offer containers as a service (CaaS). Both offerings are built on the existing infrastructure layer, on top of virtual machines, and provide the portability and flexibility that edge computing needs.

AWS has also launched Greengrass, a software layer that extends cloud-like functionality to the edge so that information can be collected and acted upon locally.

Let's see how it works.

A Greengrass group contains two components. The first is the Greengrass Core, which runs AWS Lambda functions, messaging, and security locally. The second is IoT devices running the AWS IoT Device SDK, which communicate with the Greengrass Core over the local network. If the Greengrass Core loses its connection to the cloud, it still maintains communication with the other local devices.
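The store-and-forward behavior described above can be sketched in a few lines of Python. This is a hypothetical model (the EdgeMessenger class and its methods are invented for illustration), not the AWS SDK:

```python
from collections import deque

class EdgeMessenger:
    """Hypothetical sketch of Greengrass-style local-first messaging:
    deliver to local subscribers immediately; queue cloud-bound messages
    while the uplink is down and flush the backlog on reconnect."""

    def __init__(self):
        self.cloud_online = True
        self.pending = deque()   # messages awaiting the cloud
        self.cloud_log = []      # stand-in for the cloud endpoint
        self.local_log = []      # stand-in for peer devices on the LAN

    def publish(self, topic, payload):
        self.local_log.append((topic, payload))  # LAN delivery never blocks
        if self.cloud_online:
            self.cloud_log.append((topic, payload))
        else:
            self.pending.append((topic, payload))

    def set_cloud_online(self, online):
        self.cloud_online = online
        while online and self.pending:
            self.cloud_log.append(self.pending.popleft())

m = EdgeMessenger()
m.publish("sensors/temp", 21.5)
m.set_cloud_online(False)
m.publish("sensors/temp", 22.0)   # uplink down: queued, LAN still served
m.set_cloud_online(True)          # reconnect: backlog flushed to the cloud
```

The key property is that local delivery never depends on the uplink, which is exactly the resilience described above.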

Enterprise adoption and challenges involved

Thanks to the speed, density, and flexibility they provide, containers are one of the hottest technologies today. Security, however, can be an obstacle for enterprises adopting containers for edge workloads. Two of the main problems are:

Denial of service: a running application may consume most of the operating system's resources, starving other applications of the minimum resources they need to keep running and ultimately forcing the operating system to shut down.

Kernel exploits: containers share the same kernel, so an attacker who gains access to the host operating system can reach every application running on that host.
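As a rough illustration of the denial-of-service concern, the sketch below caps a child process's address space before it runs, similar in spirit to the memory limits container runtimes enforce through cgroups. It is POSIX-only, and run_with_memory_cap is a hypothetical helper invented here, not part of any container runtime:

```python
import resource
import subprocess
import sys

def run_with_memory_cap(code: str, max_bytes: int) -> subprocess.CompletedProcess:
    """Run a Python snippet in a child process whose address space is capped."""
    def apply_limit():
        resource.setrlimit(resource.RLIMIT_AS, (max_bytes, max_bytes))
    return subprocess.run(
        [sys.executable, "-c", code],
        preexec_fn=apply_limit,  # applied in the child, after fork
        capture_output=True,
    )

CAP = 512 * 1024**2  # 512 MiB

# A greedy child that grabs ~2 GiB dies under the cap;
# a well-behaved child allocating ~10 MiB runs fine.
greedy = run_with_memory_cap("x = bytearray(2 * 1024**3)", CAP)
polite = run_with_memory_cap("x = bytearray(10 * 1024**2)", CAP)
```

Without such a cap, the greedy allocation would succeed and squeeze every other tenant on the host; with it, only the offending process fails.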

The way forward: the latest developments

Among the various developments in infrastructure technology, New York-based startup Hyper is striving to offer the best of both the VM and container worlds. With HyperContainers, as Hyper calls them, we see a fusion of the two: they provide the speed and flexibility of containers, launching instances with a minimal resource footprint in under a second, while offering the security and isolation of VMs, using hardware-enforced isolation to avoid the container's shared-kernel problem.

This article is from Cloud Community partner "SDNLAB", original link: https://yq.aliyun.com/articles/625871?spm=a2c4e.11153940.bloghomeflow.217.204a291arW997Y



