2025-04-06 Update | SLTechnology News & Howtos > Servers
Recently, one of the more novel concepts in system design is "serverless architecture". The name is admittedly a bit of an exaggeration (there are still servers involved), but it reflects a genuinely different way of thinking about them.
The potential upside of serverless
Imagine a simple web application that handles requests from HTTP clients. Instead of keeping a program running that waits for requests to arrive and dispatches them to handler functions, what if we could start each function on demand, run it, and then discard it? We would no longer need to worry about how many running servers can accept connections, or wrestle with complex configuration management systems to build new instances of the application as we scale. We would also sidestep common state-management problems such as memory leaks and segmentation faults.
Perhaps most importantly, invoking functions on demand lets us scale each function to match its request volume and process requests in parallel. Each "customer" gets a dedicated process for their request, and the number of processes is limited only by the compute available to you. On a large cloud provider, where the available compute far exceeds your usage, serverless removes a great deal of complexity from scaling an application.
Potential shortcomings
Admittedly, there is still the challenge of added latency when a process is built for each request. Serverless will never be as fast as pre-allocated processes and memory; however, the question is not whether it is faster, but whether it is fast enough. In theory, we accept the serverless delay because of the rewards above, but this tradeoff needs to be based on a careful assessment of the situation at hand.
Implementing serverless with Rancher and open source tools
Docker provides a number of tools for implementing this serverless concept and gave a good demonstration of it at the recent DockerCon. Rancher builds on these capabilities: because the platform manages your container infrastructure, adding and removing compute capacity is a single API call away. This software-defined stack lets users automate their applications end to end.
The next layer in the stack is a framework for writing code in a serverless system. You could write or extend middleware to handle this yourself, but a number of open source projects provide tools that simplify the process. One such project is IronFunctions from Iron.io. I did a quick proof of concept on Rancher and found it easy to use. The compose files from the repo let you quickly stand up this setup in Rancher.
To use these files, copy and paste the docker-compose.yml and rancher-compose.yml files from the repo into the Add Stack section of the Rancher UI. Alternatively, from the Rancher CLI, simply run "rancher up" (be sure to set the RANCHER_URL, RANCHER_ACCESS_KEY, and RANCHER_SECRET_KEY environment variables first).
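If you go the CLI route, the steps above might look like the following sketch; the URL and keys are placeholders, so substitute the values from your own Rancher server's API settings:

```shell
# Placeholder values -- replace with your Rancher server URL and an
# API key pair generated in the Rancher UI.
export RANCHER_URL="http://rancher.example.com:8080"
export RANCHER_ACCESS_KEY="your-access-key"
export RANCHER_SECRET_KEY="your-secret-key"

# From the directory containing docker-compose.yml and rancher-compose.yml,
# launch the stack.
rancher up
```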
When the stack starts, you should see it in the Rancher UI. You can also find the URLs of the IronFunctions API endpoint and UI by clicking the "i" icon next to the first item in the stack ("API-lb").
The running serverless stack after deployment completes
Find the URL of your IronFunctions endpoint
Once the stack is running, follow the "Write a Function" instructions in Iron.io's GitHub repo. It may take some time to adjust, because writing applications this way requires a slight change of approach: your functions have no shared state to reference, and things like libraries can be difficult and expensive to use. In my example, I chose a simple Golang function from Iron.io:
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type Person struct {
	Name string
}

func main() {
	p := &Person{Name: "World"}
	json.NewDecoder(os.Stdin).Decode(p)
	fmt.Printf("Hello %v!", p.Name)
}
The next step is to deploy the function to the IronFunctions instance we set up in Rancher. To make this easier to try, I have written a script that performs all the steps for you; refer to the README in the repo. Once the function is deployed, you should be able to see it in the UI and try it out:
The IronFunctions dashboard
The result of executing your function
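Once deployed, the function is also reachable over plain HTTP through the API load balancer. Assuming the deploy script registered it as an app named "myapp" with a route "/hello" (these names and the host are placeholders; adjust them to match what the script actually created), invoking it could look like:

```shell
# <api-lb-host> is the address shown for the "API-lb" service in Rancher.
# IronFunctions serves deployed functions under /r/<app>/<route>.
curl -s -X POST -d '{"name": "World"}' \
  "http://<api-lb-host>:8080/r/myapp/hello"
```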
From within Rancher, you can scale the number of workers up or down as needed. Rancher places them on hosts and connects them to the load balancer. Following the best-practices guide, you can simply use the "wait_time" metric to keep scaling operations relatively simple.
If you have ever thought about building applications this way, I hope the walkthrough in this article proves a useful starting point. If you have any comments or feedback, do not hesitate to contact us; we look forward to hearing from you as always!