Original address: https://martinfowler.com/articles/serverless.html
Authors: Martin Fowler, Mike Roberts
4. Advantages
So far we have only been trying to define and explain what "serverless architecture" means. Now I'll discuss some of the benefits and drawbacks of this approach to designing and deploying applications. You should never decide to use a serverless architecture without fully considering and weighing these trade-offs.
Let's start in the land of rainbows and unicorns and look at the benefits of serverless architecture.
4.1. Reduce operating costs
In the simplest case, a serverless architecture is an outsourcing solution. It lets you pay someone else to manage servers, databases, and even application logic that you might otherwise manage yourself. Because you're using predefined services that many other people will also use, we see an economy-of-scale effect: you pay less for your managed database because one vendor is running thousands of very similar databases.
The reduced costs come from two sides: an infrastructure cost benefit, which comes purely from sharing infrastructure (such as hardware and networking) with others, and a labor cost benefit: you can spend less time on an outsourced serverless system than on an equivalent system you develop and host yourself.
So far, however, this benefit differs little from what you get from Infrastructure as a Service (IaaS) or Platform as a Service (PaaS). With a serverless architecture, the advantage is extended through BaaS and FaaS.
4.2. BaaS: reduce development costs
The premise behind IaaS and PaaS is that server and operating-system management can be commoditized. BaaS, by contrast, is the result of commoditizing entire application components.
Authentication is a good example. Many applications write their own authentication code, which typically includes registration, login, password management, and integration with other authentication providers. On the whole this logic is very similar across most applications, and services like Auth0 have been created so that we can integrate ready-built authentication functionality into our application without having to develop it ourselves.
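As a hedged illustration (mine, not the article's) of what such integration can look like, server-side code mostly just verifies tokens the provider issues rather than implementing login flows itself. The issuer, audience, and key handling below are placeholder assumptions.

```python
# Verifying a JWT issued by a hosted auth provider such as Auth0, using
# the PyJWT library (pip install "pyjwt[crypto]").
import jwt

ISSUER = "https://example-tenant.auth0.com/"  # hypothetical tenant
AUDIENCE = "https://api.example.com"          # hypothetical API identifier

def verify_token(token, public_key):
    # The provider signs the token; we only validate the signature and
    # the standard claims instead of writing auth logic ourselves.
    return jwt.decode(
        token,
        public_key,
        algorithms=["RS256"],
        issuer=ISSUER,
        audience=AUDIENCE,
    )
```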
BaaS databases, such as Firebase's database service, are in the same vein. Some mobile application teams have found it makes sense to have the client communicate directly with a server-side database. A BaaS database removes much of the database administration overhead, and typically provides mechanisms to perform appropriate authorization for different types of users, in the patterns expected of a serverless application.
Depending on your background, these ideas may make you uncomfortable (we'll come back to why in the drawbacks section), but there is no denying that many successful companies have produced compelling products with almost no server-side code of their own.
4.3. FaaS: scaling cost
One of the joys of serverless FaaS, as I mentioned earlier in this article, is that "horizontal scaling is completely automatic, elastic, and managed by the provider." The biggest benefit is that you pay only for the compute you need. For example, AWS Lambda bills compute in 100 ms increments, so depending on the shape of your traffic this can be a very good deal.
4.3.1. Example 1: occasional request
Suppose you're running a server application that processes only one request per minute, taking 50 ms to handle each request; your average CPU usage over an hour is about 0.1%. If this application is deployed to its own dedicated host, that is wildly inefficient: thousands of similar applications could share the one machine.
Serverless FaaS captures this inefficiency and hands the benefit back to you as lower cost. In the example application above, you pay for only 100 ms of compute every minute, which is 0.15% of the overall time.
This has a couple of knock-on effects:
For would-be microservices that individually need only a small load, it makes breaking components apart by logic or domain cost-effective. And if a company or team wants to try something new, the operational cost of meeting its compute needs with FaaS is tiny. In fact, if your total workload is relatively small (though not entirely trivial), you may not need to pay for any compute at all, since some FaaS vendors offer a "free tier".
4.3.2. Example 2: inconsistent traffic
Let's look at another example. Suppose your traffic profile is very spiky: your baseline traffic is 20 requests per second, but every five minutes you receive 200 requests per second (10 times the usual rate) for 10 seconds. If you don't want to degrade response times during the peaks, how do you solve this?
In a traditional environment you might need to provision ten times as much hardware to handle the peaks, even though they account for less than 4% of total machine uptime. Auto-scaling is likely not a good option here because of how long new server instances take to start: by the time they have come up, the spike is over.
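A quick back-of-envelope check of these numbers, as a sketch in Python using only the figures from the scenario above:

```python
# Sanity-check the spiky-traffic example; all figures come from the
# scenario in the text, not from any real system.
baseline_rps = 20
peak_rps = 200
peak_seconds = 10
cycle_seconds = 5 * 60  # one 10-second spike every 5 minutes

peak_fraction = peak_seconds / cycle_seconds
print(f"share of time at peak: {peak_fraction:.1%}")   # 3.3%, under 4%
print(f"over-provisioning factor: {peak_rps // baseline_rps}x")  # 10x
```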
For serverless FaaS, however, this is a non-issue. You do nothing differently than if your traffic were uniform, and you pay only for the extra compute used during the peaks.
Obviously I've deliberately picked examples where serverless FaaS saves a lot of money, but the point stands: from a scaling viewpoint, unless you have a very stable traffic pattern that consistently uses the full capacity of your server hosts, FaaS can save you money.
4.3.3. Optimization is the source of some cost savings
There is one further interesting aspect of FaaS costs: any performance optimization you make to your code not only speeds up the application but also directly reduces your operating cost, to the granularity of the vendor's billing scheme. For example, suppose an application initially takes one second to process an event. If code optimization cuts this to 200 ms, the compute cost (on AWS Lambda) immediately drops by 80% without any infrastructure change.
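To make that arithmetic concrete, here is a minimal Python sketch of duration-based billing. The per-GB-second price is an assumed illustrative figure, and real Lambda billing also adds a small per-request charge.

```python
# Duration-based billing: cost scales with execution time, so a code
# optimization translates directly into a cost reduction.
price_per_gb_second = 0.0000166667  # assumed illustrative rate, not a quote
memory_gb = 1.0
events = 1_000_000  # events processed per month, for example

def compute_cost(duration_seconds):
    return duration_seconds * memory_gb * price_per_gb_second * events

before = compute_cost(1.0)  # one second per event
after = compute_cost(0.2)   # 200 ms per event after optimization
print(f"before=${before:.2f} after=${after:.2f} "
      f"saving={1 - after / before:.0%}")  # 80% saving
```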
4.4. Easier operational management
On the serverless BaaS side, it's fairly easy to see why operational management is simpler than for other architectures: fewer components to support equals less work.
4.4.1. The scaling benefits of FaaS go beyond infrastructure costs
While scaling is fresh in our minds from the previous section, it's worth noting that FaaS scaling not only reduces compute cost, it also reduces operational management, because the scaling is automatic.
If your scaling process is manual, for example adding and removing instances to an array of servers by hand, with FaaS you can happily forget about that and let the FaaS vendor scale your application for you.
Even if you've gotten to the point of using auto-scaling in a non-FaaS architecture, that setup still requires installation and maintenance. With FaaS the work is no longer necessary.
4.4.2. Reduce the complexity of packaging and deployment
Packaging and deploying a FaaS function is simple compared with deploying an entire server. All you're doing is packaging your code into a zip file and uploading it. No Puppet or Chef, no start/stop shell scripts, no decisions about whether to deploy one or more containers on a machine. If you're just getting started, you don't even need to package anything: you can write the code right in the vendor's console (obviously, this is not recommended!).
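As a hedged sketch of this "zip it and upload it" flow, the following uses boto3 to create a function from an in-memory zip. The function name, role ARN, runtime, and file name are hypothetical placeholders; in practice teams usually script this through a deployment tool rather than calling the API directly.

```python
# Package a single handler file into a zip and create a Lambda function.
import io
import zipfile

import boto3

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.write("handler.py")  # assumes handler.py sits next to this script
buf.seek(0)

lambda_client = boto3.client("lambda")
lambda_client.create_function(
    FunctionName="hello-serverless",                    # hypothetical
    Runtime="python3.9",
    Role="arn:aws:iam::123456789012:role/lambda-exec",  # hypothetical
    Handler="handler.handler",  # module "handler", function "handler"
    Code={"ZipFile": buf.read()},
)
```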
The process doesn't take long to describe, but for some teams, the benefit can be absolutely huge: a completely serverless solution requires zero system management.
PaaS solutions have similar deployment benefits, but as we saw earlier, FaaS has the edge over PaaS when it comes to scaling.
4.4.3. Time to market and continuous experimentation
Easier operational management is a benefit we engineers understand, but what does it mean for the business?
The obvious argument is cost: less time spent on operations means fewer people needed for operations, as described already. But in my view the far more important point is time to market. As our teams and products increasingly embrace lean and agile processes, we want to continually try new things and rapidly update existing systems. While simple redeployment in a continuous delivery context allows rapid iteration of stable projects, a strong deploy-fast capability lets us try new experiments smoothly and at low cost.
While the cost benefits are the most obvious improvement of serverless architecture, it is this reduced time to market that excites me most. It enables a product development mindset of continuous experimentation, and that is a real revolution in how we deliver software.
4.5. "green" computing?
In the past few decades, the number and scale of data centers in the world have grown tremendously. The energy demand associated with the use of physical resources to build these centers is so large that it will have a huge environmental impact.
on the one hand, cloud infrastructure may have helped reduce this impact, because companies can only "buy" more servers when they absolutely need them, rather than providing all the servers they might need long in advance. However, it may also be said that the convenience of provisioning servers may make the situation worse if many servers are not adequately managed.
whether we are using a self-managed server, IaaS, or PaaS infrastructure solution, we still need to manually determine the capacity of the application, which usually takes months or years. In general, we are right to be cautious about management capacity, but this leads to oversupply and inefficiency. Using a serverless approach, you no longer calculate your own capacity-- making the computing power provided by serverless vendors just meet your real-time needs. The supplier can make capacity calculations based on the common needs of his customers.
This difference in will improve the efficient use of resources across data centers, so it can reduce the environmental impact compared to traditional capacity management methods.
5. Drawbacks
5.1. Inherent drawbacks
5.1.1. Vendor control
With any outsourcing strategy, you hand control of part of your system to a third-party vendor. That loss of control can show up as system downtime, unexpected limits, cost changes, loss of functionality, forced API upgrades, and more.
5.1.2. Multi-tenancy problems
Multi-tenancy refers to running software instances for several different customers (or tenants) on the same machine, and possibly within the same hosting application, as a strategy for achieving economies of scale. Vendors try hard to make each customer feel they are the only one using the system, and good vendors generally do this well. But no multi-tenant solution is perfect, and problems occasionally surface in security (one customer seeing another's data), robustness (a bug in one customer's software causing a failure in another's), and performance (a high-load customer slowing another down).
These problems are not unique to serverless systems; they exist in many other service offerings that use multi-tenancy. AWS Lambda is now mature enough that we don't expect to see such problems with it, but you should watch for them with any less mature service, whether from AWS or another vendor.
5.1.3. Vendor lock-in
Any serverless feature you use from one vendor is very likely implemented differently by another vendor. If you want to switch vendors, you will almost certainly need to update your operational tools (deployment, monitoring, etc.), you will probably need to change your code (for example, to satisfy a different FaaS interface), and you may even need to change your design or architecture.
Even if you can easily migrate one part of your ecosystem, another architectural component may have a bigger impact. Suppose you're using AWS Lambda to respond to events on an AWS Kinesis message bus. Although the differences between AWS Lambda, Google Cloud Functions, and Microsoft Azure Functions may be relatively small, you still can't hook the latter two up directly to an AWS Kinesis stream. So code cannot be moved from one solution to another without also moving the rest of the infrastructure.
A lot of people are spooked by this idea: if the cloud vendor you choose today needs to change tomorrow, a lot of work follows, and the experience is genuinely painful. Because of this, some adopt a "multi-cloud" approach, developing and operating applications agnostically of which cloud vendor is actually in use. But this is usually more costly than a single-cloud approach, so while vendor lock-in is a legitimate concern, I still recommend picking a vendor you're happy with and exploiting their capabilities as much as you can.
5.1.4. Security considerations
Adopting a serverless approach opens you up to a large number of security questions. Here are just a few things to consider:
Each serverless vendor you use adds a different security implementation to your system, which increases the surface area for malicious attack.
If you use a BaaS database directly from your mobile platforms, you lose the protective barrier that a server-side application provides in a traditional application. This isn't a deal-breaker, but it demands great care in designing and developing your applications.
As your organization embraces FaaS, you may experience an explosion of FaaS functions across the company. In AWS Lambda, for example, each Lambda function typically comes with its own configured IAM policy, and these are easy to get wrong. This is neither a simple topic nor one to ignore: IAM management needs careful thought, at least in production AWS accounts. A minimal illustration follows.
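As a hedged sketch of what a per-function, least-privilege IAM policy looks like, here is one expressed as a Python dictionary. The table name, account ID, and region are hypothetical placeholders; the point is to scope each function's policy to only the actions and resources it genuinely needs.

```python
# A minimal least-privilege IAM policy for a single Lambda function.
import json

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Allow writing logs, which almost every function needs.
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents",
            ],
            "Resource": "arn:aws:logs:*:*:*",
        },
        {
            # Allow reading one specific DynamoDB table, and nothing else.
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:Query"],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/orders",
        },
    ],
}

print(json.dumps(policy, indent=2))
```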
uses the "complete" BaaS architecture, and the server side does not write custom logic-- all in the client. This may be good for your first client platform, but once you need to switch to another platform, you need to repeatedly implement a subset of the logic, but you don't need such repetition in a more traditional architecture. For example, if you use a BaaS database in such a system, then all your client applications (possibly web, native iOS, and native Android) now need to be able to communicate with the vendor database and need to know how to map from the database schema to the application logic.
5.1.6. Server optimization lost
With 's complete BaaS architecture, there is no opportunity to optimize server design. The "back-to-front" pattern exists to abstract some of the underlying knowledge of the entire system from the server, in part to enable clients to perform operations faster and use less battery power in mobile applications. This mode does not apply to a full BaaS.
has another drawback to the complete BaaS architecture: in this architecture, all customization logic is located in the client and only provides back-end services provided by the vendor. One way to alleviate these two situations is to adopt FaaS or other types of lightweight server-side patterns and transfer some logic to the server.
5.1.7. No in-server state for FaaS
I said earlier that FaaS functions have significant restrictions when it comes to local (machine- or instance-bound) state: you should not assume that state from one invocation of a function will be available to another invocation of the same function.
The reason for this assumption is that with FaaS we typically have no control over when a function's host container starts or stops.
So where does state go in FaaS if you can't keep it in memory? In many cases, the options are a fast NoSQL database, an out-of-process cache (such as Redis), or external object/file storage (such as S3). But all of these are far slower than in-memory or on-machine persistence. You'll need to consider whether your application can tolerate that.
A related concern is in-memory caching. Many applications that read from a large externally stored data set keep an in-memory cache of part of that set. You may read from "reference data" tables in a database and use something like Ehcache, or read from an HTTP service that sets cache headers, in which case an in-memory HTTP client can provide a local cache.
FaaS does allow some use of local caching, which may be useful if your functions are invoked frequently enough. With AWS Lambda, for instance, we can typically expect a function instance to stick around for a few hours as long as it's used at least once every few minutes. That means we can use the (configurable, up to 3 GB) RAM or the 512 MB of local "/tmp" space that Lambda provides, which may be enough for some caches. Otherwise you should no longer assume in-process caching and will need to fall back on a low-latency external cache such as Redis or Memcached, which requires extra work and may be unacceptably slow depending on your use case.
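As a hedged sketch (my example, not the article's), here is what that two-level pattern can look like in a Python Lambda handler: a module-level dictionary serves warm invocations, with Redis as the shared fallback. The cache host and key names are illustrative assumptions.

```python
# Two-level caching in a Lambda function: in-process for warm instances,
# Redis (pip install redis) as a shared external fallback.
import os

import redis

# Module-level state survives across invocations on the same warm
# instance; a cold start begins with an empty dict again.
_local_cache = {}

_redis = redis.Redis(host=os.environ.get("CACHE_HOST", "localhost"), port=6379)

def get_reference_data(key):
    # 1. Fastest path: in-process cache on a warm instance.
    if key in _local_cache:
        return _local_cache[key]
    # 2. Slower, but shared by all instances: the external Redis cache.
    value = _redis.get(key)
    if value is not None:
        value = value.decode("utf-8")
        _local_cache[key] = value
    return value

def handler(event, context):
    # Hypothetical Lambda entry point using the two-level cache.
    return {"value": get_reference_data(event.get("key", ""))}
```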
5.2. Implementation drawbacks
The drawbacks described so far are likely to always exist with serverless. The mitigating solutions will improve, but the drawbacks themselves will remain.
The remaining drawbacks, however, come down purely to the current state of the art. With the will and investment of vendors and/or a heroic community, these can all be wiped out. In fact, the list has shrunk since the first version of this article.
5.2.1. Configuration
When I wrote the first version of this article, AWS offered very little in the way of configuration for Lambda functions. That has since been resolved, but it's still worth checking if you use a less mature platform.
5.2.2. DoS yourself
Here's an example of why "caveat emptor" is a key phrase whenever you deal with FaaS. AWS Lambda limits how many concurrent executions of your Lambda functions can run at a given time. The default limit is 1000, meaning that at any moment you may have at most a thousand function instances executing. If you need to go above that, you may start running into exceptions, queueing, and/or general slowdown.
The problem is that this limit applies across your entire AWS account. Some organizations use the same AWS account for both production and testing. That means that if someone, somewhere in your organization, runs a new kind of load test that tries to execute 1000 concurrent Lambda functions, your production applications will unexpectedly fall over.
Even with separate AWS accounts for production and development, one overloaded production Lambda (say, one processing batch uploads from customers) can cause a separate, real-time, Lambda-backed production API to become unresponsive.
Amazon provides some protection here through reserved concurrency. Reserved concurrency lets you limit the concurrency of a Lambda function so it can't blow up the rest of your account, while also guaranteeing it always has capacity available, no matter what the other functions in the account are doing. However, reserved concurrency is not set by default on an account, and it needs careful management.
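As a hedged sketch, reserved concurrency can be set with a single boto3 call; the function name and the chosen limit below are illustrative assumptions.

```python
# Reserve (and cap) concurrency for one function so it can neither
# starve, nor be starved by, the rest of the account.
import boto3

lambda_client = boto3.client("lambda")

lambda_client.put_function_concurrency(
    FunctionName="batch-upload-processor",  # hypothetical function name
    ReservedConcurrentExecutions=100,       # guaranteed and capped at 100
)
```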
5.2.3. Execution time
I mentioned earlier in this article that AWS Lambda functions are aborted if they run for longer than five minutes. That limit has been consistent for a couple of years now, and AWS has shown no sign of changing it.
5.2.4. Startup latency
I talked about cold starts earlier and mentioned my article on the subject. AWS has improved this area over time, but significant problems remain, especially for JVM-implemented functions that are triggered only occasionally and/or functions that need access to VPC resources. Expect continued improvement here.
All right, that's enough picking on AWS. I'm sure the other vendors have their own problems too.
5.2.5. Testing
Unit testing serverless applications is fairly simple, for reasons I discussed earlier: any code you write is "just code", and for the most part there aren't piles of custom libraries you have to use or interfaces you have to implement.
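To make that concrete, here is a hedged sketch of a unit test for a hypothetical Lambda handler using pytest; note that nothing AWS-specific is needed.

```python
# The handler is ordinary code, so it can be tested as ordinary code.
def handler(event, context):
    # Hypothetical business logic: double a number from the event.
    return {"result": event["value"] * 2}

def test_handler_doubles_value():
    # The context argument is unused by this handler, so None suffices.
    assert handler({"value": 21}, None) == {"result": 42}
```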
Integration testing serverless applications, on the other hand, is hard. In the BaaS world you depend on externally provided systems rather than, say, your own database. So should your integration tests also use the external systems? If so, how amenable are those systems to testing scenarios? Can you easily set up and tear down state? Can your vendor give you a different billing strategy for load testing?
If you want to stub out those external systems for integration testing, does the vendor provide a local stub simulation? If so, how good is the stub's fidelity? If the vendor doesn't supply a stub, how will you write one yourself?
The same problems exist in FaaS-land, although this area has improved. You can now run FaaS functions locally for both Lambda and Microsoft Azure. However, no local environment can fully simulate the cloud environment, and I don't recommend relying solely on local FaaS environments. In fact, I'd go further and suggest that your canonical environment for running automated integration tests, at least as part of a deployment pipeline, should be the cloud, and that the local test environments should be used mainly for interactive development and debugging. These local test environments keep improving.
Remember the account-wide execution limits I mentioned a couple of sections ago? When running integration tests in the cloud, you probably want to at least isolate such tests from your production cloud accounts.
One important reason to take integration testing seriously: with serverless FaaS, each function is far smaller than in other architectures, so we rely on integration testing more than with other architectural styles.
Depending on a cloud-based test environment rather than running everything locally on my laptop came as quite a shock to me. But times change, and the capabilities we get from the cloud today are similar to what engineers at companies like Google have had for over a decade. Amazon now even lets you run your IDE in the cloud. I haven't quite made that leap yet, but it may well come.
5.2.6. Debugging
Debugging with FaaS is an interesting area. There has been progress here, mostly related to running FaaS functions locally. Microsoft provides excellent debugging support for functions run locally but triggered by remote events. Amazon offers something similar, but without triggering by production events.
Debugging functions actually running in the cloud is a different story. Lambda, at least, has no support for that yet, though it would be great to see.
5.2.7. Deployment, packaging, and version control
This is an area of active improvement. AWS has made great strides here, and I'll discuss it further in the later section on the future of serverless architecture.
5.2.8. Discovery
"Discovery" is a much-discussed topic in the microservices world: it's the question of how one service can call the correct version of another service. In the serverless world there is little discussion of discovery. At first this worried me, but now I'm less concerned. Much serverless usage is inherently event-driven, and there the consumer of an event is usually self-registered to some extent. For API-oriented uses of FaaS, we usually put the functions behind an API gateway, with DNS in front of the gateway and automated deployment/traffic shifting behind it. You can even put further layers in front of the API gateway (for example, AWS CloudFront) to support flexible cross-region routing.
5.2.9. Monitoring and observability
Monitoring is a tricky area for FaaS because of the ephemeral nature of the containers. Most cloud vendors give you some monitoring support, and we're seeing a lot of third-party work from traditional monitoring vendors too. But whatever they, or you, can ultimately do depends on the fundamental data the vendor makes available. In some cases this may be fine, but for AWS Lambda, at least, it is very basic. What we really need in this area are open APIs and the ability for third-party services to help out more.
5.2.10. API gateway definition, and over-ambitious API gateways
ThoughtWorks has discussed over-ambitious API gateways in its Technology Radar. While that entry refers to API gateways in general (for example, those fronting traditionally deployed microservices), it certainly applies to their use as HTTP frontends to FaaS functions. The problem is that API gateways offer the opportunity to perform a lot of application-specific logic within their own configuration. That logic is typically hard to test and version-control, and sometimes hard even to define. In general, such logic is better kept in program code, like the rest of the application.
On the other hand, if we treat an API gateway as a BaaS, it's worth considering whether all the options it gives us are valuable precisely because they save us work. And if we pay per request when using an API gateway, rather than per CPU utilization, isn't it cost-effective to make the most of the gateway's functionality?
My advice is to use enhanced API gateway functionality judiciously, and only if it really does save you work in the long run. Never use API gateway features that can't be expressed in a source-controlled configuration file or deployment script.
5.2.11. Deferring of operations
I mentioned earlier that serverless is not "No Ops": plenty of work remains from a monitoring, architectural scaling, security, and networking point of view. But it's easy to ignore operations when you're getting started. The danger is being lulled into a false sense of security. Maybe your application is up and running, but then it unexpectedly appears in the news, and suddenly you have ten times the traffic to handle, and oops! You have no idea how to deal with it.
The fix here is education. Teams using serverless systems need to consider operational activities early, and vendors and the community should provide the teaching to help them understand what that means.
6. The future of serverless architecture
We are nearing the end of our exploration of the world of serverless architecture. Finally, I will discuss several areas where I think the serverless world is likely to develop in the coming months and years.
6.1. Mitigating the drawbacks
Serverless is still a fairly new world, which is why the preceding section on drawbacks was so extensive; it didn't even cover everything I could think of. The most important near-term developments in serverless will be to mitigate the inherent drawbacks and to remove or improve the implementation ones.
6.1.1. Tooling
Tooling remains a problem for serverless, because so many of the technologies and techniques are new. Deployment/application bundling and configuration have both improved over the last two years, with the Serverless Framework and Amazon's Serverless Application Model leading the way. That said, the "first 10 minutes" experience still isn't as magical as it could be, although Amazon and Google could look to Microsoft and Auth0 for further inspiration.
One area I've been pleased to see cloud vendors actively address is higher-level release approaches. In traditional systems, teams often had to write their own processes to handle "traffic-shifting" ideas such as blue-green deployment and canary releases. On this front, Amazon supports automatic traffic shifting for both Lambda and API Gateway. Such concepts are even more useful in serverless systems, where an atomic release of a system may consist of 100 Lambda functions, each deployed separately. In fact, Nat Pryce described to me a "mixing desk" approach, one where we could gradually fade groups of components in and out of a traffic flow.
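As a hedged illustration of what such traffic shifting looks like in practice, here is a sketch using Lambda's weighted aliases through boto3. The function name, alias name, and version numbers are hypothetical.

```python
# Canary-style release: keep the "live" alias on version 41 but route
# 10% of invocations to the new version 42, widening as confidence grows.
import boto3

lambda_client = boto3.client("lambda")

lambda_client.update_alias(
    FunctionName="checkout-api",   # hypothetical function
    Name="live",                   # hypothetical alias
    FunctionVersion="41",          # the current primary version
    RoutingConfig={"AdditionalVersionWeights": {"42": 0.10}},
)
```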
Distributed monitoring is probably the area most in need of improvement. We've seen early work from Amazon's X-Ray and various third-party products, but this is definitely not a solved problem. Remote debugging is also something I'd like to see more widely available. Microsoft Azure Functions supports it, but Lambda does not. Being able to set a breakpoint in a remotely running function is a very powerful capability.
Finally, I'd like to see improved tooling for "meta-operations": managing hundreds of FaaS functions, configured services, and so on, more effectively.
6.1.2. State management
The lack of persistent in-server state in FaaS is fine for a good number of applications, but for many others it is a deal-breaker, whether for large cache sets or for fast access to session state.
For high-throughput applications, one workaround may be for vendors to keep function instances alive a little longer between events, letting regular in-process caching approaches work. This wouldn't be 100% effective, since the cache wouldn't be warm for every event, but the same concern already exists for auto-scaled applications deployed traditionally.
A better solution would be very low-latency access to out-of-process data, for example the ability to query a Redis database with very low network overhead. This doesn't seem far-fetched, given that Amazon already offers a hosted Redis solution in its ElastiCache product and already allows the relative co-location of EC2 (server) instances using placement groups.
More likely, though, I think we'll see different kinds of hybrid (serverless plus non-serverless) application architectures adopted that take the externalized-state constraint into account. For low-latency applications, for example, a common approach might be a long-lived server that handles an initial request, gathers all the context needed to process it from its local and external state, and then hands a fully-contextualized request off to a farm of FaaS functions that don't need to look up data externally.
6.1.3. Platform improvements
Some of the current drawbacks of serverless FaaS come down to how the platforms are implemented. Execution duration limits, startup latency, and account-wide limits across functions are three obvious ones. These will likely be addressed by new solutions, or by workarounds with added cost. For example, I imagine startup latency could be mitigated by letting a customer request that two instances of a FaaS function always be kept available at low latency, with the customer paying for that availability. Microsoft Azure Functions has elements of this idea with Durable Functions and with App Service plan-hosted functions.
Of course we'll see platform improvements that go beyond fixing today's shortcomings, and those will be exciting too.
6.1.4. Education
Many of the vendor-specific inherent drawbacks of serverless are being mitigated through education. What does it mean to have one or more application vendors hosting so much of your ecosystem? Everyone using such platforms needs to think actively about this. We need to consider questions like "Do we want to design parallel solutions across different vendors in case one becomes unavailable?" and "How does the application degrade gracefully in a partial outage?"
Another area of education is technical operations. Many teams now have fewer sysadmins than they used to, and serverless will accelerate that change. But sysadmins do more than configure Unix boxes and Chef scripts: they are often the front-line people for support, networking, security, and the like.
A true DevOps culture becomes even more important in a serverless world, because those non-sysadmin activities still need to be done, and now it's often developers who are responsible for them. These activities don't come naturally to many developers and technical leads, so education, and close collaboration with operations folk, matter a great deal.
6.1.5. Increased transparency and clearer vendor expectations
Finally, on the subject of mitigation: vendors will have to be clearer about what we can expect of their platforms, because we are relying on them for more of our hosting capability. Migrating platforms is hard, but it isn't impossible, and untrustworthy vendors will see their customers take their business elsewhere.
6.2. The emergence of patterns
Our understanding of how and when to use serverless architecture is still in its infancy. Right now, teams are throwing all kinds of ideas at serverless platforms to see what sticks. Thank goodness for pioneers! We're starting to see recommended-practice patterns emerge, and that knowledge will only grow.
Some of the patterns we're seeing are in application architecture. For example, how big can a FaaS function become before it gets unwieldy? Assuming we can atomically deploy a group of FaaS functions, what are good ways to create those groupings? Do they map closely to how we currently cluster logic into microservices, or does the difference in architecture push us in a different direction?
One particularly interesting area of active discussion in serverless application architecture is how it interacts with event-thinking. Ajay Nair, AWS Lambda's head of product, gave a great talk on this in 2017, and it is one of the main areas of discussion in the CNCF Serverless Working Group.
Going further, what are good ways to create hybrid architectures from FaaS and traditional "always on" persistent server components? What are good ways to introduce BaaS into an existing ecosystem? And, conversely, what are the warning signs that a fully or mostly BaaS system needs to start embracing or using more custom server-side code?
We're also seeing many more usage patterns discussed. One standard example for FaaS is media conversion: whenever a large media file is stored in an S3 bucket, a process automatically runs to create smaller versions in another bucket. But we now also see significant use of serverless in data-processing pipelines, highly scalable web APIs, and as general-purpose "glue" code in operations. Some of these patterns can be implemented as generic components deployable directly into an organization; I've written about Amazon's Serverless Application Repository, an early form of this idea.
Finally, as the tooling improves, we're starting to see recommended operational patterns. How do we sensibly aggregate logging for a hybrid architecture of FaaS, BaaS, and traditional servers? How do we most effectively debug FaaS functions? Many of the answers to these questions, and the emerging patterns, are coming from the cloud vendors themselves, and I expect activity here to grow.
6.3. Globally distributed architectures
In the pet store example we saw earlier, the single pet store server was broken up into several server-side components plus some logic that moved to the client. But fundamentally this was still a client-centric architecture with remote services in known locations.
What we're starting to see in the serverless world is a fuzzier distribution of responsibility. One example is Amazon's Lambda@Edge product: a way to run Lambda functions in Amazon's CloudFront content delivery network. With Lambda@Edge, a Lambda function can be distributed globally; a single upload by an engineer means the function is deployed to over 100 data centers around the world.
Furthermore, Lambda functions can run on devices, machine learning models can run on mobile clients, and before you know it, the bifurcation of "client side" and "server side" no longer seems meaningful. Serverless will become regionless.
6.4. Beyond "FaaSification"
Most of the uses of FaaS I've seen so far are "FaaSifications" of existing code and design ideas: converting them into sets of stateless functions. This is powerful, but I expect we'll start to see more abstractions, and possibly languages, that use FaaS as an underlying implementation, giving developers the benefits of FaaS without their having to think of an application as a set of discrete functions.
For example, I don't know whether Google's Dataflow product is implemented with FaaS, but I can imagine someone creating a product or open source project that does something similar and uses FaaS as the implementation. A comparison here is something like Apache Spark, a tool for large-scale data processing that offers very high-level abstractions and can use Amazon EMR and Hadoop as its underlying platform.
6.5. Testing
I think there's still a lot of work to be done on integration and acceptance testing of serverless systems, but much of that work is shared with "cloud-native" microservice systems developed in more traditional ways.
One radical idea here is to embrace testing in production and monitoring-driven development: once code passes basic unit-test validation, deploy it to a subset of traffic and compare its behavior with the previous version. This combines well with the traffic-shifting tools I mentioned earlier. It doesn't fit every context, but for many teams it is a surprisingly effective tool.
6.6. Portable implementations
There are a couple of ways teams can use serverless while reducing their dependence on specific cloud vendors.
6.6.1. Abstractions over vendor implementations
The Serverless Framework exists primarily to ease the operational tasks of serverless applications, but it also offers some neutrality about where and how such applications are deployed. For example, it would be great to be able to switch easily, even today, between AWS API Gateway + Lambda and Auth0 Webtask, depending on the operational capabilities of each platform.
A tricky aspect of this is modeling an abstracted FaaS coding interface, for which there is currently no standardized guidance, but that is precisely the work the CNCF Serverless Working Group is doing with CloudEvents.
However, once operational complexity rears its head, it's questionable how much value an abstracted deployment across multiple platforms really provides. Getting security right in one cloud, for example, may work differently in another.
6.6.2. Deployable implementations
It may sound strange to suggest using serverless techniques without a third-party vendor, but consider these thoughts:
Maybe we're a large technical organization that wants to offer a Firebase-like database experience to all of our mobile application development teams, but backed by our existing database architecture. And I've talked before about "Serverful" FaaS platforms: the ability to use FaaS-style application architecture for some of our projects.
In both of these cases there are still many benefits to a serverless approach without vendor hosting. There is a precedent here: consider Platform as a Service (PaaS). The initially popular PaaSes were all cloud-based (e.g., Heroku), but people soon saw the benefits of running a PaaS environment on their own systems, a so-called "private" PaaS (e.g., Cloud Foundry, which I mentioned earlier in this article).
Like private PaaS implementations, I can imagine open source and commercial implementations of both the BaaS and FaaS concepts becoming popular, especially those integrated with container platforms such as Kubernetes.
6.7. Community
7. Conclusion
Despite the confusing name, serverless is an architectural style in which we rely to a smaller extent than usual on running our own server-side systems as part of our applications. We do this through two techniques: BaaS, which tightly integrates third-party remote application services directly into the frontend of an application, and FaaS, which moves server-side code from long-running components to ephemeral function instances.
Serverless is not the right approach for every problem, so be wary of anyone who says it will replace all of your existing architectures. And be careful if you take the plunge into serverless systems now, especially in the FaaS realm. While there are riches to be had (scaling, and deployment effort saved), there are also debugging and monitoring dragons lurking right around the corner.
Those riches shouldn't be dismissed too quickly, though, since serverless architecture has many positive aspects, including reduced operational and development costs, easier operational management, and reduced environmental impact. The most important benefit to me, however, is the shortened feedback loop for creating new application components. I'm a big fan of "lean" approaches, largely because I think there is great value in getting technology in front of end users as early as possible for quick feedback, and the reduced time to market of serverless fits that philosophy perfectly.
As of now (May 2018), serverless services, and our understanding of them, are in a "slightly awkward adolescence." There will be many advances in the field over the coming years, and it will be fascinating to see how serverless fits into our architectural toolkit.