
How to turn monolithic / microservice applications into Serverless applications


This article explains how to turn monolithic or microservice applications into Serverless applications. The content is quite detailed; interested readers can use it as a reference, and we hope you find it helpful.

I. Serverless is naturally cloud native

1. The cloud native era

With the rise of container technology represented by Docker in 2013, the CNCF foundation, and Kubernetes (K8s), cloud native has become familiar to developers. Before the cloud native era there were two stages: the first was building self-hosted IDC server rooms, and the second was simply lifting existing applications onto the cloud. With a self-built IDC room it is hard to achieve high availability, high scalability, and efficient operations. The second stage, the cloud computing era, was a step forward from IDC, but most workloads still used the cloud in a fairly primitive way, so it was hard to use the cloud well: resources were nearly unlimited, yet approaches based on virtual machines and various self-built services still left much to be desired.

The cloud native era means designing applications on the assumption that they will run in a cloud environment, making full use of the advantages of cloud resources, such as the elasticity and distributed nature of cloud services. As shown in the figure above, cloud native can be divided into several parts:

First, cloud native technologies, including containers, K8s, microservices, and DevOps. These technologies are only tools; to really use them well you need best practices and ways of combining them, and that is cloud native architecture.

Cloud native architecture is a collection of architectural principles and design patterns based on cloud native technology. It provides guiding principles, for example that applications must be observable, since elasticity can only be built on top of observability, and that high-availability work and infrastructure should sink into the platform so that non-business code is stripped away as far as possible. Guided by these technologies and architectural designs, cloud native applications can then be built.

Cloud native applications are lightweight, agile, and highly automated, which lets them fully exploit the advantages of the cloud and adapt better to business development and change in the era of digital transformation.

2. Why Serverless is naturally cloud native

Why is Serverless naturally cloud native? Serverless actually appeared slightly earlier than the term cloud native: AWS took the lead with Lambda, the original Serverless product, which bills per request and scales extremely well. This matches the definition of cloud native very closely, for example infrastructure sinking: with Lambda there is no server to manage, and the platform scales servers on demand, achieving a high degree of automation. It also organizes code as functions, which are lighter and start faster than whole applications. The drawback of this model is the high cost of transformation, because many existing applications are large monoliths or microservice applications that are hard to rework into functions.

3. Getting to know SAE

The concept of Serverless and its related products have now been around for almost seven years, and over that time cloud native technologies such as Docker and K8s have also matured. In 2018 Alibaba Cloud began to explore another form of Serverless, the Serverless application, which became SAE. It was launched in September 2018 and commercialized in 2019.

Characteristics of SAE:

Immutable infrastructure, observable, automatic recovery

SAE is built on a K8s base: images serve as immutable infrastructure, and the platform is observable and self-healing. If requests to an instance are detected to fail, traffic is automatically cut away from it or the instance is restarted.

O&M-free, extreme elasticity, extreme cost efficiency

Server resources are hosted by the platform, so users do not need to operate and maintain their own servers, and they gain extreme elasticity and extreme cost efficiency accordingly.

Easy to use, zero transformation, deep integration

As shown in the figure above, the top layer is what customers perceive: an aPaaS, that is, an application PaaS. After more than three years of practice it has reached the point where users find it genuinely easy to use with zero transformation, and it has been deeply integrated with surrounding services.

SAE, a product with K8s as its base, Serverless characteristics, and an aPaaS form, fully matches the characteristics of cloud native. At the technical level, the underlying layer uses containers and K8s and integrates microservices and various DevOps tools. At the architecture level, because the platform already provides these technologies, it is easy for users to design their applications according to cloud native architecture principles, so that customer applications can reap the cloud native dividend to the greatest extent, becoming lightweight, agile, and highly automated, and the threshold for entering the cloud native era drops dramatically.

SAE product architecture diagram

SAE is an application-oriented Serverless PaaS. Zero transformation, zero threshold, and zero container knowledge required are its hallmarks, letting users easily enjoy the technology dividends brought by Serverless, K8s, and microservices. It also supports multiple microservice frameworks, multiple deployment channels (including UI deployment, Cloud Effect, Jenkins, plug-in deployment, etc.), and multiple deployment formats (including WAR, JAR, and image deployment).

The bottom layer is IaaS resources, and above it sits a K8s cluster; both are transparent to users, who need neither purchase servers nor understand K8s. The top layer provides two core capabilities: one is application hosting, the other is microservice governance. Application hosting covers the application life cycle; microservice governance covers service discovery, graceful offline, and so on, all of which are well integrated into SAE.

The core features of SAE can be summarized in three numbers: zero code transformation, 15-second elasticity, and 57% cost reduction.

II. SAE design concepts

1. The Kubernetes base

Container

In the K8s container-orchestration ecosystem, the most basic element is the container, or image. By relying on images, users effectively get immutable infrastructure: an image can be distributed and replicated anywhere, which means portability without vendor lock-in. In addition, for users who are not familiar with images or do not want to deal with that complexity, we also provide WAR/JAR-level deployment, which greatly lowers the bar for enjoying these dividends.

Desired-state orientation

In traditional operations there are many problems that are hard to solve, for example a server suddenly hitting high load or high CPU for any of a variety of reasons, which usually requires a large amount of manual work. In the K8s world, combined with observability and health checks, you only need to configure liveness and readiness probes: K8s automatically cuts traffic away from unhealthy instances and reschedules them, so operations become automatic and O&M costs drop dramatically.
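For context, the liveness and readiness checks mentioned above usually probe an HTTP endpoint exposed by the application itself. Below is a minimal sketch, assuming a Spring Boot application with the Actuator dependency; the `downstreamAvailable` check is an illustrative placeholder, not part of SAE or K8s:

```java
import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.stereotype.Component;

// Custom health signal exposed through Spring Boot Actuator's /actuator/health.
// A readiness probe pointed at this endpoint stops routing traffic to the
// instance while it reports DOWN, and a liveness probe can trigger a restart,
// which is the "cut traffic / restart" behaviour described above.
@Component
public class DownstreamHealthIndicator implements HealthIndicator {

    @Override
    public Health health() {
        if (downstreamAvailable()) {
            return Health.up().build();
        }
        return Health.down()
                .withDetail("reason", "downstream dependency unreachable")
                .build();
    }

    // Illustrative placeholder: replace with a real check (DB ping, registry call, ...).
    private boolean downstreamAvailable() {
        return true;
    }
}
```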

Resource hosting

Not only the ECS machines but also the K8s cluster itself is hosted internally. Customers do not need to buy servers, buy or operate K8s, or even understand K8s, which greatly lowers the entry threshold and the staffing burden.

2. Serverless characteristics

Extreme elasticity

We have achieved end-to-end elasticity of 15 seconds, that is, 15 seconds from creating a pod to the user's application being started. For scaling triggers we support basic metrics (such as CPU and memory), business-metric conditions (such as QPS and RT), and scheduled elasticity. Setting elasticity metrics by hand still carries a threshold and a burden, because customers do not always know what values to set. With that in mind, we are also working on intelligent elasticity that automatically calculates metric settings and recommends them to users, lowering the threshold further.
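To make metric-based scaling concrete, the sketch below applies the standard desired-replicas formula used by Kubernetes' Horizontal Pod Autoscaler, desired = ceil(current × currentMetric / targetMetric); the QPS numbers are made up for illustration and are not SAE defaults:

```java
public class ScalingFormula {

    // Standard HPA-style calculation: scale replicas in proportion to how far
    // the observed metric is from its per-instance target, rounding up.
    static int desiredReplicas(int currentReplicas, double currentMetric, double targetMetric) {
        return (int) Math.ceil(currentReplicas * currentMetric / targetMetric);
    }

    public static void main(String[] args) {
        // Example: 2 instances currently serve 450 QPS each,
        // and the scaling rule targets 150 QPS per instance.
        int replicas = desiredReplicas(2, 450.0, 150.0);
        System.out.println("desired replicas = " + replicas); // prints 6
    }
}
```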

Lean cost

SAE eliminates the cost of resource hosting and operations. Previously, customers had to operate large fleets of ECS servers, and security upgrades and bug fixes, especially with high-density deployment, were very expensive. In addition, SAE bills by the minute, so users can achieve genuinely lean costs, for example scaling out to 10 instances for an hour at the business peak and back down to 2 instances once the peak has passed.
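A rough illustration of the per-minute (pay-for-what-you-run) billing point, using the 10-instance peak / 2-instance off-peak pattern from the paragraph above; the unit price is a made-up placeholder, not an SAE price:

```java
public class LeanCostExample {
    public static void main(String[] args) {
        double pricePerInstanceHour = 0.10; // hypothetical unit price, for illustration only

        // Pay-as-you-go: 10 instances for the 1-hour peak, 2 instances for the other 23 hours.
        double elasticHours = 10 * 1 + 2 * 23;   // 56 instance-hours
        // Fixed provisioning sized for the peak: 10 instances kept all day.
        double fixedHours = 10 * 24;             // 240 instance-hours

        System.out.printf("elastic: %.2f, fixed: %.2f, saving: %.0f%%%n",
                elasticHours * pricePerInstanceHour,
                fixedHours * pricePerInstanceHour,
                100 * (1 - elasticHours / fixedHours)); // ~77% saving in this toy example
    }
}
```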

Language enhancement

In the area of elasticity we have also made language-specific enhancements. For Java, drawing on Alibaba's large-scale Java practice, Alibaba's JDK, Dragonwell 11, can improve Java application startup speed by about 40% compared with other open-source JDKs. We will explore more possibilities for other languages in the future.

3. (Application) PaaS product form

Application hosting

Application hosting amounts to managing the application life cycle, including release, restart, scaling, grayscale release, and so on. It follows the same mental model as other application or PaaS platforms, so the learning curve is very low.

Integration

There are hundreds of cloud products, and using each one well is an extra cost. We have therefore integrated the most commonly used cloud services, including basic monitoring, ARMS application monitoring, NAS storage, and SLS log collection, to lower the barrier to using the product.

In addition, we have made extra microservice enhancements, including a hosted registry, graceful online/offline, and microservice governance. Because microservices usually require a registry, SAE has a hosted registry built in; users do not need to purchase one separately and can register applications directly, further reducing threshold and cost.
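As an illustration of how an application registers with a hosted registry, here is a minimal Spring Cloud sketch; Nacos is used as a stand-in for whichever registry is hosted, and the application name and server address are assumptions you would replace with your own configuration:

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;

// With spring-cloud-starter-alibaba-nacos-discovery on the classpath, this service
// registers itself with the configured registry on startup and deregisters on
// graceful shutdown, so callers discover it by name instead of by IP.
//
// application.properties (illustrative values):
//   spring.application.name=user-center
//   spring.cloud.nacos.discovery.server-addr=127.0.0.1:8848
@SpringBootApplication
@EnableDiscoveryClient
public class UserCenterApplication {
    public static void main(String[] args) {
        SpringApplication.run(UserCenterApplication.class, args);
    }
}
```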

By combining these capabilities, SAE lets users migrate traditional monolithic or microservice applications with essentially zero transformation, and enjoy the technology dividend behind the product with zero threshold.

III. SAE technical architecture

1. SAE technical architecture diagram

The technical architecture behind SAE's hosted K8s is shown in the figure above. The top layer is SAE's PaaS interface; the second layer is the K8s master (API server and so on); and the bottom layer is the hosts on which K8s actually runs the resources, all of them hosted by SAE. Pod resources only need to be created in the user's own VPC or network segment and connected, and the application runs normally.

There are two core issues:

The first is escape prevention. Our pods and containers use traditional container technology such as Docker, and running two public-cloud users, A and B, on the same physical machine carries a very high security risk: user B could break into user A's container and obtain A's data. The core requirement is therefore to limit what a user can do and prevent escape.

The second is connecting to the user's network and cloud systems. We need to interconnect with the user's network so that security groups, security rules, RDS, and so on work easily, which is also a core issue.

2. Secure containers

Let us expand on escape prevention. The table above lists the secure-container technologies most widely discussed today. A secure container can be understood simply as applying the ideas of virtual machines: with traditional container technology such as Docker it is difficult to achieve strong security protection or isolation, whereas a secure container is effectively a lightweight virtual machine, combining the startup speed of a container with the security of a VM.

Today secure containers go beyond security: in addition to security isolation they also provide performance isolation and fault isolation. Take fault isolation as an example. With Docker container technology, a kernel problem triggered by one container can affect other users and even the whole host; with secure-container technology, this does not happen.

SAE adopts the Kata secure-container technology. Judging from its history and the open-source landscape, Kata is the merger of runV and Clear Containers and is more mature than Firecracker and gVisor.

IV. SAE best practices

Best practice 1: low-threshold microservice architecture transformation

Anyone familiar with microservices knows that operating a microservice architecture involves many concerns, not only at the open-source framework level but also at the resource level and in subsequent troubleshooting, including the registry, distributed tracing, monitoring, service governance, and so on. As shown on the left of the figure above, in the traditional development mode users have to host and operate all of these capabilities themselves.

With SAE, users can hand the business-independent parts over to the platform and focus only on their own business, such as the user-center and group-center microservices, and integrate with SAE's CI/CD tooling to implement a microservice architecture quickly.

Best practice 2: one-click start/stop of dev/test environments to cut cost and improve efficiency

Some medium and large enterprises have multiple test environments that usually sit idle at night. In the ECS model these application instances must be kept running around the clock, so the cost of idle waste is relatively high.

In SAE you can combine namespaces with one-click or scheduled start/stop: put all test applications into a test-environment namespace, then configure it to start every instance in that namespace at 8:00 in the morning and stop them all at 8:00 in the evening. Nothing is billed while they are stopped, so users minimize cost.

By our calculation, in the extreme case this can save roughly two-thirds of the user's hardware cost (for example, running only 12 hours on weekdays is about 60 of the 168 hours in a week), with no extra operations cost beyond configuring the scheduled start/stop rules.

Best practice 3: precise capacity plus extreme elasticity

During this year's epidemic, large numbers of students studied online from home, and many customers in the online-education industry faced a seven- to eight-fold surge in traffic. On a self-operated ECS architecture, users would have had to upgrade in a very short time, not only the operations architecture but also the application architecture, which is a great challenge in both cost and effort.

This becomes much easier by relying on SAE's many integrations and a highly automated platform such as the underlying K8s. For example, you can evaluate the capacity watermark with the PTS load-testing tool; if the load test uncovers a problem, you can combine basic monitoring and application monitoring, including call chains and diagnostic reports, to find the bottleneck and judge whether it can be fixed quickly; and if the bottleneck is hard to fix in time, you can use the application high-availability service to apply rate limiting and degradation so that the business does not collapse under a sudden traffic peak.
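The rate-limiting and degradation step can be pictured with the open-source Sentinel library as a stand-in for whatever flow-control mechanism is used; the resource name and QPS threshold below are illustrative assumptions, not values from the article:

```java
import com.alibaba.csp.sentinel.Entry;
import com.alibaba.csp.sentinel.SphU;
import com.alibaba.csp.sentinel.slots.block.BlockException;
import com.alibaba.csp.sentinel.slots.block.RuleConstant;
import com.alibaba.csp.sentinel.slots.block.flow.FlowRule;
import com.alibaba.csp.sentinel.slots.block.flow.FlowRuleManager;

import java.util.Collections;

public class FlowControlSketch {

    public static void main(String[] args) {
        // Limit the "courseQuery" resource to 200 QPS; excess requests are rejected
        // instead of overwhelming the backend during a traffic peak.
        FlowRule rule = new FlowRule();
        rule.setResource("courseQuery");
        rule.setGrade(RuleConstant.FLOW_GRADE_QPS);
        rule.setCount(200);
        FlowRuleManager.loadRules(Collections.singletonList(rule));

        // Guard the business logic with the rule defined above.
        Entry entry = null;
        try {
            entry = SphU.entry("courseQuery");
            // ... normal business logic ...
        } catch (BlockException e) {
            // Request was rate-limited: return a degraded / fallback response.
        } finally {
            if (entry != null) {
                entry.exit();
            }
        }
    }
}
```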

Finally, SAE can be configured with a matching elasticity policy based on the load-test model, for example setting rules on CPU, memory, RT, or QPS according to the capacity model, so that scaling fits actual usage closely and the architecture is upgraded at the lowest cost and to the greatest effect.

V. Summary

Whether driven by the times or by the epidemic, digital transformation has penetrated every industry. In this transformation, enterprises need the ability to use the cloud to cope with rapid business change and with high-peak, high-traffic scenarios. The journey includes several stages: Rehost (re-hosting), Re-platform (new platform), and Refactor (new architecture). The deeper the architectural transformation, the higher the cloud value an enterprise can obtain, but also the higher the cost of migration and transformation. Simply re-hosting an application on the cloud makes it hard to obtain the cloud's elasticity and to deal with problems in a timely way.

Through SAE, we hope to let users enjoy the value dividend of Serverless + K8s + microservices with zero transformation, zero threshold, and zero container knowledge, and ultimately face their business challenges better.

That is all we have to share on turning monolithic or microservice applications into Serverless applications. We hope the content above is helpful and that you learned something from it. If you found the article useful, feel free to share it so more people can see it.
