This article shares an analysis of how application architecture has evolved toward Serverless. The editor finds it very practical and hopes you will take something useful away from it.
Serverless reduces the total cost of maintaining an application and makes it faster to build more business logic. The Serverless Framework is a command-line tool that provides scaffolding, workflow automation, and best practices for developing and deploying serverless architectures, and it can be fully extended through plugins.
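To make that idea concrete, below is a minimal sketch of the kind of function a serverless platform runs on your behalf. It assumes AWS Lambda and the aws-lambda-go library, and the event shape is invented for the example; the original article does not name a specific platform. A tool in the spirit of the Serverless Framework would package and deploy such a function from a project configuration file.

```go
// A minimal sketch of a serverless function, assuming AWS Lambda and the
// github.com/aws/aws-lambda-go library; the event shape is hypothetical.
package main

import (
	"context"
	"fmt"

	"github.com/aws/aws-lambda-go/lambda"
)

// greetEvent is a made-up input payload, used only for illustration.
type greetEvent struct {
	Name string `json:"name"`
}

// handler contains only business logic; servers, scaling, and patching are
// handled by the platform, which is where the maintenance cost drops.
func handler(ctx context.Context, evt greetEvent) (string, error) {
	return fmt.Sprintf("hello, %s", evt.Name), nil
}

func main() {
	// lambda.Start wires the handler into the platform's invocation loop.
	lambda.Start(handler)
}
```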
The traditional monolithic application architecture

More than a decade ago, the mainstream application architecture was the monolith, deployed as a single server plus a database. Under this architecture, operations staff carefully maintained that server to keep the service available.
Problems faced by the monolithic architecture
As the business grows, this simplest monolithic architecture quickly runs into two problems. First, there is only one server: if it fails, for example through hardware damage, the whole service becomes unavailable. Second, once business volume increases, a single server's resources can no longer carry all the traffic.
The most direct way to solve both problems is to add a load balancer at the traffic entrance and deploy the same monolith on several servers at once. The server is no longer a single point of failure, and the monolith gains the ability to scale horizontally.
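As an illustration of the idea rather than a production setup, the sketch below puts a tiny round-robin reverse proxy in front of two identical application servers. The backend addresses are placeholders; in practice a dedicated load balancer (hardware or a cloud service) would normally play this role.

```go
// A minimal round-robin load balancer sketch: one traffic entrance in front
// of several identical instances of the same monolithic application.
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync/atomic"
)

func main() {
	// Hypothetical backend instances of the same application.
	backends := []*url.URL{
		mustParse("http://10.0.0.11:8080"),
		mustParse("http://10.0.0.12:8080"),
	}

	var next uint64
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// Pick the next backend in round-robin order.
		target := backends[atomic.AddUint64(&next, 1)%uint64(len(backends))]
		httputil.NewSingleHostReverseProxy(target).ServeHTTP(w, r)
	})

	log.Fatal(http.ListenAndServe(":80", nil))
}

func mustParse(raw string) *url.URL {
	u, err := url.Parse(raw)
	if err != nil {
		log.Fatal(err)
	}
	return u
}
```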
Microservice architecture
1. The emergence of the microservice architecture
As the business grows further, more and more developers join the team to build features on the monolith. Because the code in a monolith has no clear physical boundaries, all kinds of conflicts soon appear; they require manual coordination and a large number of conflict-merge operations, and R&D efficiency drops sharply.
So the monolith is split into microservices that can be developed, tested, and deployed independently, with services communicating through APIs such as HTTP, gRPC, or Dubbo. A microservice architecture split along the Bounded Contexts of domain-driven design can greatly improve the R&D efficiency of medium and large teams.
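The sketch below shows what that API-based communication might look like in its simplest form: a hypothetical order service fetching a user over plain HTTP instead of reaching into another service's database. The service name, port, and path are invented for the example.

```go
// A minimal sketch of one microservice calling another over a plain HTTP API;
// the names stand in for the HTTP/gRPC/Dubbo calls mentioned above.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// User is the contract exposed by a hypothetical "user" service.
type User struct {
	ID   string `json:"id"`
	Name string `json:"name"`
}

// fetchUser is what an "order" service might do: stay inside its own Bounded
// Context and call the user service's API instead of its database.
func fetchUser(baseURL, id string) (*User, error) {
	resp, err := http.Get(fmt.Sprintf("%s/users/%s", baseURL, id))
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("user service returned %d", resp.StatusCode)
	}
	var u User
	if err := json.NewDecoder(resp.Body).Decode(&u); err != nil {
		return nil, err
	}
	return &u, nil
}

func main() {
	u, err := fetchUser("http://user-service:8080", "42")
	if err != nil {
		fmt.Println("call failed:", err)
		return
	}
	fmt.Println("got user:", u.Name)
}
```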
2. The microservice architecture brings new challenges to operations
The application has evolved from a monolith into microservices, and from a physical point of view distribution has become the default, so application architects must face the new challenges that distribution brings. Along the way, teams start to use distributed services and frameworks: the cache service Redis, the configuration service ACM, the coordination service ZooKeeper, the message service Kafka, communication frameworks such as gRPC or Dubbo, and distributed tracing systems.
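As one small example of leaning on such a distributed service, the sketch below uses Redis as a shared cache in the common cache-aside pattern. It assumes the go-redis client and a local Redis address, neither of which the article specifies.

```go
// A minimal cache-aside sketch with Redis as the shared cache service,
// assuming the github.com/redis/go-redis/v9 client; addresses and keys
// are illustrative.
package main

import (
	"context"
	"errors"
	"fmt"
	"time"

	"github.com/redis/go-redis/v9"
)

// loadProfile stands in for a slow database or downstream service call.
func loadProfile(id string) string {
	return "profile-of-" + id
}

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

	key := "profile:42"
	// Try the shared cache first; every service instance sees the same data.
	val, err := rdb.Get(ctx, key).Result()
	if errors.Is(err, redis.Nil) {
		// Cache miss: load from the source of truth and cache it with a TTL.
		val = loadProfile("42")
		if err := rdb.Set(ctx, key, val, time.Minute).Err(); err != nil {
			fmt.Println("cache write failed:", err)
		}
	} else if err != nil {
		fmt.Println("cache read failed:", err)
		return
	}
	fmt.Println("profile:", val)
}
```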
Beyond the distributed environment itself, the microservice architecture also creates new operational work. Developers who once operated a single application may now operate ten or more, so the workload of security patching, capacity assessment, fault diagnosis, and similar tasks multiplies. At this point, standards for application distribution, lifecycle, and observability, together with automation and elasticity, become far more important.
Cloud native
1. An architecture built on cloud products
Whether an architecture is cloud native depends on whether it grew up on the cloud; that is a simple reading of "cloud native". "Growing up on the cloud" does not merely mean using IaaS-layer services such as ECS compute or OSS storage. It means using the cloud's distributed services, such as Redis and Kafka, which directly shape the business architecture. Under a microservice architecture, distributed services are a necessity; teams used to build them in house or operate open-source versions themselves, but in the cloud native era the business can consume them directly as cloud services.
Two other technologies have to be mentioned: Docker and Kubernetes. Docker standardizes application distribution: whether an application is written with Spring Boot or Node.js, it is distributed as a container image. Kubernetes builds on that and defines a standard application lifecycle: every application follows the same path from startup, to going online, to health checks, to going offline.
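The sketch below shows what honoring that lifecycle contract can look like inside the application itself, assuming a Kubernetes-style orchestrator: it exposes a health-check endpoint for probes and drains in-flight requests when the platform signals it to go offline. The port and paths are illustrative.

```go
// A minimal sketch of an application that follows the lifecycle contract
// described above: health checks while online, graceful drain when asked
// to go offline.
package main

import (
	"context"
	"log"
	"net/http"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	mux := http.NewServeMux()
	// The orchestrator probes this endpoint to decide whether the container
	// is healthy and may receive traffic.
	mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello"))
	})

	srv := &http.Server{Addr: ":8080", Handler: mux}

	// Going offline: the platform sends SIGTERM before removing the container.
	ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM)
	defer stop()

	go func() {
		if err := srv.ListenAndServe(); err != http.ErrServerClosed {
			log.Fatal(err)
		}
	}()

	<-ctx.Done()
	// Finish in-flight requests before exiting.
	shutdownCtx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	_ = srv.Shutdown(shutdownCtx)
}
```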
2. Application lifecycle hosting
With standards for application distribution and lifecycle in place, the cloud can provide standardized application hosting services covering version management, release, post-release observation, self-healing, and so on. For a stateless application, for example, the failure of an underlying physical node does not affect the development team at all: based on the standardized lifecycle, the hosting service completes the migration automatically, taking the application containers on the failed node offline and starting the same number of containers on new nodes. Cloud native thus releases a further dividend of value.
On top of this, because the hosting service can observe the application's runtime data, such as request concurrency, CPU load, and memory footprint, the business can configure scaling rules on these metrics, and the platform executes them, increasing or decreasing the number of containers to match actual traffic. This is the most basic form of auto scaling. It keeps users from holding reserved resources through the business trough, saving cost and improving operational efficiency.
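The sketch below captures the shape of such a scaling rule: given an observed metric and a target, compute how many containers should run, clamped between a minimum and a maximum. The proportional formula mirrors the one commonly used by container autoscalers; the numbers are illustrative.

```go
// A minimal sketch of a metric-driven scaling rule: scale the replica count
// in proportion to how far the observed metric is from its target.
package main

import (
	"fmt"
	"math"
)

// desiredReplicas returns the container count implied by the observed metric
// (e.g. average CPU utilization) versus its target, clamped to [min, max].
func desiredReplicas(current int, observed, target float64, min, max int) int {
	if target <= 0 || current <= 0 {
		return current
	}
	desired := int(math.Ceil(float64(current) * observed / target))
	if desired < min {
		return min
	}
	if desired > max {
		return max
	}
	return desired
}

func main() {
	// Ten containers at 85% average CPU with a 50% target -> scale out to 17.
	fmt.Println(desiredReplicas(10, 85, 50, 2, 50))
	// Ten containers at 10% average CPU -> scale in to the minimum of 2.
	fmt.Println(desiredReplicas(10, 10, 50, 2, 50))
}
```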
The above is an analysis of the evolution of the Serverless architecture. The editor believes it covers knowledge points you may encounter or use in daily work, and hopes you have learned something more from this article.