This article explains what Serverless is, in a way that is meant to be simple, quick and practical.
One of the hottest topics in software architecture right now is Serverless, which we usually translate as "serverless architecture".
Despite the name, this architecture obviously does not do away with servers altogether; it differs from traditional architectures in a different way.
It chooses to "hide" the management of infrastructure such as servers, so that computing resources appear as services rather than as machines.
It is event-triggered, ephemeral and fully managed by a third party, which means developers only need to pay attention to business logic.
Serverless first entered technologists' field of vision back in 2012.
Only two years after the concept surfaced, AWS, one of the "3A giants" of cloud computing, officially launched Lambda at the end of 2014, marking the start of the commercialization of Serverless in earnest.
At the time, Lambda was described as a compute service that runs user code in response to events, without the user having to manage the underlying compute resources.
Measured from 2012 to 2014, Lambda was hardly early to arrive.
But just as PaaS promised in the infancy of cloud computing that users need only care about the business and can leave the underlying IaaS to the platform, the idea behind Serverless is strikingly similar.
In the following years, the success of Google Cloud Functions and Microsoft Azure Functions in the technology community naturally pushed Serverless into the spotlight.
Understanding Serverless through the evolution of application architecture
For many developers, knowing only that "Serverless means serverless architecture" is clearly not enough. How can we make the idea more concrete?
We probably still have to start from the evolution of software application architecture.
As you may know, more than a decade ago the monolithic application was widely recognized as the most mainstream form of application architecture.
With one server plus one database, service availability was about as good as it got.
However, as servers age, degrade and sometimes fail outright, and as the business keeps expanding, the monolithic architecture can no longer cover every situation with a single move.
Even adding a load balancer at the traffic entrance, so that a single application can be deployed on multiple servers for more flexibility, cannot fully resolve the many conflicts caused by code that has no physical boundaries.
At this point, the monolithic architecture took its first evolutionary step toward microservices, and architects had to face the new challenges brought by distribution.
Think of the supporting systems of those years: the caching service Redis, the coordination service ZooKeeper, the messaging service Kafka, and so on.
Put simply, a large system is split into multiple business modules, the modules are deployed on different servers, and each module exchanges data with the others through interfaces. It is not as simple as it sounds.
Beyond the peculiarities of the distributed environment, the microservice architecture also changed operations profoundly.
In practice, because microservices may be deployed on different servers, or on the same server in different containers, capabilities such as application distribution standards, life-cycle standards and automated elasticity became important one by one.
Then, in the blink of an eye, came the familiar cloud-native era: workloads can not only move directly to the cloud, the cloud can also provide standardized application hosting, including version management, release, post-launch observation, self-healing and so on, and the value dividend became even clearer.
It is against this wave of technology dividends that Serverless broke into the public's field of vision and drew attention.
Throughout this evolution, both developers and operators have gradually shifted their focus from individual machines to the platform, instead of relying purely on people to manage things, and that may be the plainest explanation of what Serverless is about.
To sum up, Serverless emerged to fold host management, operating system management, resource allocation and even parts of the application logic into services.
Placed in today's cloud computing landscape, it cannot be narrowly understood as "not caring about servers": resources on the cloud include not only basic compute, storage and network resources, but also higher-level offerings such as databases, caches and message queues.
Serverless is often equated with FaaS, so what does the solution actually look like?
When it comes to Serverless, many people's first reaction is "FaaS + BaaS".
Indeed, this is one implementation of Serverless, and it is also the more mainstream understanding.
The so-called "FaaS + BaaS" is simply the combination of Function as a Service and Backend as a Service.
Specifically, BaaS (Backend as a Service) can be read as consuming the back end as a service.
In general, program logic already implemented by the back end, or by someone else, is invoked through APIs, and it is usually used to manage data.
For example, Amazon RDS can replace a self-deployed MySQL instance, and a variety of other database and middleware capabilities are offered in the same way.
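As a loose illustration of that idea (the environment variables, database name and table below are hypothetical, not taken from the article), the application code simply connects to a managed endpoint and never administers the database server itself:

```python
# Minimal sketch of "backend as a service": the database is consumed through
# a managed endpoint instead of a self-operated MySQL server.
# The endpoint, credentials and table names are hypothetical examples.
import os
import pymysql  # third-party MySQL driver, installed separately


def query_orders(customer_id):
    # The managed endpoint is just configuration, not a machine we administer.
    conn = pymysql.connect(
        host=os.environ["RDS_ENDPOINT"],
        user=os.environ["DB_USER"],
        password=os.environ["DB_PASSWORD"],
        database="shop",
    )
    try:
        with conn.cursor() as cur:
            cur.execute(
                "SELECT id, total FROM orders WHERE customer_id = %s",
                (customer_id,),
            )
            return cur.fetchall()
    finally:
        conn.close()
```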
FaaS (Function as a Service) means functions as a service, a form of serverless computing whose most widely used implementation is AWS Lambda.
After long practice, it is fair to say that the Serverless architecture offers a more "fragmented" unit of software: the so-called "function" is a program unit smaller than a microservice.
Going one step further, how should we understand "function as a service"?
Roughly speaking, developers encapsulate the function definition, and the function in turn calls services such as back-end storage.
In essence, FaaS is an event-driven service triggered by messages.
Traditional server-side software, once deployed to a virtual machine or container with an operating system, normally keeps running in that operating system for a long time.
A FaaS program, by contrast, is deployed directly to the platform, executed when an event triggers it, and destroyed once execution completes.
Just as important, FaaS products do not require code to be written against any specific framework or library.
Taking AWS Lambda as the example again, functions can be implemented in JavaScript, Python, Go, any JVM language (Java, Clojure, Scala and so on) or a .NET language; at the same time, a Lambda function can also execute another process bundled with its deployment artifact.
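As a rough sketch of that execution model (the S3 trigger and the trivial business logic are illustrative assumptions, not part of the article), an AWS Lambda style handler in Python is just a function that receives an event and a context; the platform invokes it when the event fires and reclaims the environment afterwards:

```python
# Minimal sketch of an event-driven FaaS handler in the AWS Lambda style.
# No web framework and no long-running server process: the platform calls
# this function when an event (here, an S3 object upload) arrives.
import json


def lambda_handler(event, context):
    # For an S3 trigger, the event carries the bucket name and object key.
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]

    # Business logic goes here; this example only logs what arrived.
    print(f"received object {key} from bucket {bucket}")

    return {
        "statusCode": 200,
        "body": json.dumps({"processed": key}),
    }
```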
In the FaaS environment, the user uploads the function code to the FaaS provider, and horizontal scaling is fully automatic and elastic.
A "function" can also represent each operation the customer wants to perform: each function completes one relatively simple piece of business logic, and a complete application is composed of several such functions, covering create, read, update, delete and so on.
At present, Function as a Service (FaaS) is the technical foundation of Serverless implementations.
Because FaaS and Serverless are so closely related, the characteristics of FaaS are often treated as the characteristics of a Serverless platform, but it would be too narrow to simply equate Serverless with FaaS.
In the BaaS era, only the back-end services an application depends on were exposed through APIs; in the FaaS era, users and developers stop paying attention to the underlying layer altogether, so it is no surprise that Serverless is booming.
Using Serverless is also a double-edged sword.
In practice, enterprises adopt Serverless for several reasons, among which "reducing operating costs" is considered one of the most direct and effective.
Indeed, with Serverless a company no longer has to buy racks of servers that may sit idle most of the time just to cover potential traffic peaks; capacity scales automatically with traffic, and billing follows a flexible pay-per-request model.
"Automatic on-demand scaling" can likewise be pushed to the extreme: capacity expands to match current usage at any moment, which removes the trouble of unexpected or seasonal traffic peaks. A rough comparison of the two cost models is sketched below.
What is more, with Serverless developers do not need to worry about issues such as memory leaks on long-lived servers, and the supporting facilities, including cloud databases, cloud message queues and other services, are complete, which greatly reduces the workload.
Even if most of an enterprise's developers come from a pure software background and are not good at maintenance, protection and administration, they can still focus on software development; Serverless copes perfectly well.
For these reasons, many enterprises at home and abroad, especially the large public cloud vendors, have committed to offering services built on the Serverless model, and acceptance keeps rising across the board.
Take the landmark product, AWS Lambda, as an example.
As the FaaS cloud service AWS launched for the Serverless architecture, Lambda has received wide attention since its release in 2014; besides meeting people's expectations for Serverless, an even bigger factor is the success of the AWS platform itself.
The advantages of AWS Lambda can be summarized as follows:
High maturity: the first Serverless FaaS platform on a mainstream public cloud, with several years of development behind it.
Large user base: AWS Lambda has accumulated a large user base and many active community reference cases.
Open-source ecosystem: the open-source community hosts many projects built around AWS Lambda.
AWS integration: AWS Lambda integrates naturally and well with the other services on the AWS platform.

Microsoft Azure followed in 2016 with Azure Functions, an event-driven function compute service.
It lets users develop functions in a variety of languages, including Java, Node.js, PHP, C#, F#, Bash and Microsoft Windows PowerShell scripts.
In addition to the public cloud version, Azure Functions also offers Azure Functions Runtime for on-premises deployment.
The product features are also noteworthy:
Completeness: Azure Functions is a relatively full-featured Serverless FaaS platform.
Integration: Azure Functions integrates naturally with the various services on the Azure cloud platform; for enterprises that build their IT capability on Microsoft products and tools, Azure Functions is the natural first choice for a Serverless transformation.
Private deployment: a privately deployable version with commercial support is available, meeting the needs of different classes of users.

In 2016, Google Cloud Platform launched Google Cloud Functions and joined the competition in the Serverless field as well.
As a FaaS platform, what is the biggest functional difference between Google Cloud Functions and AWS Lambda or Microsoft Azure Functions?
On close inspection, it may be that Google Cloud Functions currently supports only JavaScript as the function development language, running on Node.js.
In July 2018, Google announced the open-source project Knative, positioned as the Serverless add-on for Kubernetes, with strong support from Pivotal, IBM and Red Hat.
While vendors abroad compete fiercely, the domestic race is just as crowded. Alibaba Cloud was one of the first public cloud vendors in China to launch a Serverless platform; its FaaS product is called Alibaba Cloud Function Compute.
The product has plenty of details worth noting in terms of event triggers, supported languages and user experience:
Event triggers: Alibaba Cloud Function Compute can be triggered by service events on Alibaba Cloud, such as Alibaba Cloud Object Storage (OSS).
Supported languages: it currently supports Node.js, with support for Java and Python planned.
Deployment limits: the deployment package of a function cannot exceed 50MB, and the unzipped code cannot exceed 250MB.
User experience: a Web-based console and an SDK are provided; users can manage function applications through the Web console or through an interactive command line.
Service specification: a service contains at most 50 functions and 10 triggers; at run time, the longest a function may run is 300s, i.e. 5 minutes, and the maximum concurrency of a function is 100.

Tencent Cloud, another leader in the domestic cloud computing race, offers Serverless Cloud Function (SCF), a function compute platform that, according to official information, was released on April 26, 2017.
The characteristics of the Tencent Cloud Serverless platform can be summarized as follows:
Function runtime: Tencent Cloud SCF currently supports Python, Java and Node.js.
Code upload: users can upload code locally as a compressed package, or reference code files stored in Tencent Cloud Object Storage.
Event triggers: Tencent Cloud SCF currently supports triggers such as Tencent Cloud Object Storage (COS), timers and Tencent Cloud Message Queue (CMQ), and functions can also be triggered manually through the API and the console.
Service specification: each function executes in a CentOS Linux-based environment; function memory ranges from 128MB to 1536MB; a single region supports at most 20 function definitions; the maximum execution time of a function is 300 seconds; and the maximum concurrency is more than 5.

What we have been discussing so far is essentially the Serverless practice of the large public cloud providers.
In fact, compared with the public cloud, there are not many technical obstacles to building a Serverless platform in a private environment, and there are already plenty of pioneering attempts, which deserve a separate, detailed discussion.
What is also apparent is that even the globally influential public cloud providers lack a unified understanding of Serverless and corresponding standards: the supported development languages differ, the event trigger mechanisms differ, and none of it carries over cleanly across cloud platforms.
After all, Serverless has never been a single product or tool, but a collection of capabilities.
In practice there are still real difficulties: making workloads lightweight is hard, scaling business instances within seconds or even milliseconds is hard, and when the infrastructure cannot respond quickly enough, problems surface in service discovery, logging and monitoring.
One can also foresee industry effects: many traditional web hosting providers may lose ground, many SaaS platforms will feel the impact, and the room left for operations and implementation staff will shrink further.
Even so, it is hard to deny that the rise of the Serverless architecture has made "de-serverization" genuinely beneficial to developers and has opened new opportunities for infrastructure management.
At this point, you should have a deeper understanding of what Serverless is; the natural next step is to try it out in practice.