How to automatically scale Kubeless based on CPU

2025-04-02 Update From: SLTechnology News&Howtos > Servers


Shulou (Shulou.com) 05/31 Report --

This article analyzes in detail how Kubeless automatically scales based on CPU and walks through a working example, in the hope of helping readers who want to solve this problem find a simple approach.

Auto scaling is one of the biggest selling points of Serverless.

The automatic scaling feature of Kubeless is built on the HPA (HorizontalPodAutoscaler) feature of Kubernetes.

Currently, functions in Kubeless support automatic scaling based on the CPU and QPS metrics.

This article demonstrates automatic scaling based on the CPU metric.

Environment description

Operating system: macOS

Kubernetes version: v1.15.5

Kubeless version: v1.0.7

Learn how to set up autoscale

You can first learn how to use autoscale from the kubeless command line.

The kubeless autoscale command help documentation is as follows:

```
$ kubeless help autoscale
autoscale command allows user to list, create, delete autoscale rule for function on Kubeless

Usage:
  kubeless autoscale SUBCOMMAND [flags]
  kubeless autoscale [command]

Available Commands:
  create      automatically scale function based on monitored metrics
  delete      delete an autoscale from Kubeless
  list        list all autoscales in Kubeless

Flags:
  -h, --help   help for autoscale

Use "kubeless autoscale [command] --help" for more information about a command.
```

The kubeless autoscale create command help documentation is as follows:

```
$ kubeless autoscale create --help
automatically scale function based on monitored metrics

Usage:
  kubeless autoscale create FLAG [flags]

Flags:
  -h, --help               help for create
      --max int32          maximum number of replicas (default 1)
      --metric string      metric to use for calculating the autoscale. Supported metrics: cpu, qps (default "cpu")
      --min int32          minimum number of replicas (default 1)
  -n, --namespace string   Specify namespace for the autoscale
      --value string       value of the average of the metric across all replicas. If metric is cpu, value is a number represented as percentage. If metric is qps, value must be in format of Quantity
```

Install Metrics Server

To use HPA, you need to install the Metrics Server service in the cluster; otherwise HPA cannot obtain metrics and will not be able to scale up or down.

You can use the following command to check if Metrics Server is installed, and if not, you need to install it.

```
$ kubectl api-versions | grep metrics
```

1. First download the components.yaml of metrics-server:

```
$ curl -L https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml --output components.yaml
```

2. Then add the parameter --kubelet-insecure-tls under args on line 88 of the components.yaml file; otherwise metrics-server will report an error on startup:
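The relevant fragment of components.yaml should end up looking roughly like the sketch below. The line number and the other args may differ between metrics-server releases; only the added --kubelet-insecure-tls flag comes from this article, the surrounding fields are an assumption based on the v0.3.6 manifest:

```yaml
# components.yaml (metrics-server Deployment, excerpt)
      containers:
      - name: metrics-server
        image: k8s.gcr.io/metrics-server-amd64:v0.3.6
        args:
          - --cert-dir=/tmp
          - --secure-port=4443
          - --kubelet-insecure-tls   # added: skip TLS verification of kubelet serving certs
```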

3. Finally, use the kubectl apply command to install Metrics Server:

```
$ kubectl apply -f components.yaml
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
deployment.apps/metrics-server created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
```

4. Confirm whether metrics-server is installed successfully:

```
$ kubectl api-versions | grep metrics
metrics.k8s.io/v1beta1
```

Automatically scale based on CPU

Still use the familiar Python code:

```
# test.py
def hello(event, context):
    print event
    return event['data']
```
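For reference, a Python 3 equivalent of this handler would look like the sketch below. The article itself uses the python2.7 runtime; the python3.7 runtime name is an assumption about what your Kubeless install provides:

```python
# test3.py - hypothetical Python 3 version of the handler
def hello(event, context):
    # Kubeless passes the HTTP request payload in event['data']
    print(event)
    return event['data']
```

To deploy it, you would pass `--runtime python3.7` instead of `python2.7` to `kubeless function deploy`.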

Create the hello function with the --cpu and --memory parameters so that HPA can scale up and down according to the CPU metric (HPA's CPU percentage target is measured relative to the Pod's CPU request, so a request must be set):

```
$ kubeless function deploy hello --runtime python2.7 --from-file test.py --handler test.hello --cpu 200m --memory 200M
INFO[0000] Deploying function...
INFO[0000] Function hello submitted for deployment
INFO[0000] Check the deployment status executing 'kubeless function ls hello'
```

View the function status:

```
$ kubeless function ls hello
NAME    NAMESPACE   HANDLER      RUNTIME     DEPENDENCIES   STATUS
hello   default     test.hello   python2.7                  1/1 READY
```

Use kubeless to create an autoscale for the function hello:

```
$ kubeless autoscale create hello --metric=cpu --min=1 --max=20 --value=60
INFO[0000] Adding autoscaling rule to the function...
INFO[0000] Autoscaling rule for hello submitted for deployment
```
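Under the hood, this command creates a standard HorizontalPodAutoscaler object targeting the function's Deployment. A roughly equivalent manifest is sketched below; the exact apiVersion, labels, and field names depend on your Kubernetes and Kubeless versions, so treat this as illustrative rather than the object Kubeless literally emits:

```yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: hello
  namespace: default
spec:
  scaleTargetRef:          # the Deployment created for the function
    apiVersion: apps/v1
    kind: Deployment
    name: hello
  minReplicas: 1           # --min
  maxReplicas: 20          # --max
  metrics:
  - type: Resource
    resource:
      name: cpu            # --metric=cpu
      targetAverageUtilization: 60   # --value=60 (percent of CPU request)
```

You can inspect what was actually created with `kubectl get hpa hello -o yaml`.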

Use kubectl proxy to create a reverse proxy so that the function can be accessed over HTTP:

```
$ kubectl proxy -p 8080
```

Next, the function is stress tested. ab, the benchmarking tool bundled with Apache, is used here; macOS ships with Apache by default, so ab can be used directly.

Use the ab tool for stress testing:

```
$ ab -n 3000 -c 8 -t 300 -k -r "http://127.0.0.1:8080/api/v1/namespaces/default/services/hello:http-function-port/proxy/"
```

Using the kubectl get hpa -w command to watch the status of the HPA, you can see the replica count change with the metric: when the load is high the number of replicas increases, and when the load drops it decreases:

```
$ kubectl get hpa -w
NAME    REFERENCE          TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
hello   Deployment/hello   0%/60%     1         20        1          30m
hello   Deployment/hello   95%/60%    1         20        1          32m
hello   Deployment/hello   95%/60%    1         20        2          33m
hello   Deployment/hello   77%/60%    1         20        2          33m
hello   Deployment/hello   77%/60%    1         20        3          34m
hello   Deployment/hello   63%/60%    1         20        3          34m
hello   Deployment/hello   62%/60%    1         20        3          36m
hello   Deployment/hello   71%/60%    1         20        3          37m
hello   Deployment/hello   71%/60%    1         20        4          37m
hello   Deployment/hello   0%/60%     1         20        4          38m
hello   Deployment/hello   0%/60%     1         20        4          42m
hello   Deployment/hello   0%/60%     1         20        1          43m
```
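The replica counts above follow HPA's scaling rule, desiredReplicas = ceil(currentReplicas * currentMetricValue / targetValue), clamped to the min/max bounds. A minimal sketch of that calculation (illustrative only; the real controller also applies a tolerance band and scale-down stabilization, which is why the 0% readings take a few minutes to shrink the Deployment):

```python
import math

def desired_replicas(current_replicas, current_utilization, target_utilization,
                     min_replicas=1, max_replicas=20):
    """Core HPA rule: scale proportionally to metric/target, then clamp to bounds."""
    desired = math.ceil(current_replicas * current_utilization / target_utilization)
    return max(min_replicas, min(max_replicas, desired))

# At 95% average CPU against the 60% target, one replica becomes two:
print(desired_replicas(1, 95, 60))   # -> 2
# When the load stops, utilization falls to 0% and HPA scales back to the minimum:
print(desired_replicas(4, 0, 60))    # -> 1
```

This matches the transitions in the watch output, e.g. 3 replicas at 71% utilization scale to ceil(3 * 71 / 60) = 4.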

Using the kubectl get pod -w command, you can also see the changes in the number and status of Pods during automatic scaling:

```
$ kubectl get pod -w
NAME                     READY   STATUS            RESTARTS   AGE
hello-67b44c7585-5t9g4   1/1     Running           0          21h
hello-67b44c7585-d9w7j   0/1     Pending           0          0s
hello-67b44c7585-d9w7j   0/1     Init:0/1          0          0s
hello-67b44c7585-d9w7j   0/1     PodInitializing   0          2s
hello-67b44c7585-d9w7j   1/1     Running           0          6s
hello-67b44c7585-fctgq   0/1     Pending           0          0s
hello-67b44c7585-fctgq   0/1     Init:0/1          0          0s
hello-67b44c7585-fctgq   0/1     PodInitializing   0          2s
hello-67b44c7585-fctgq   1/1     Running           0          3s
hello-67b44c7585-ht784   0/1     Pending           0          0s
hello-67b44c7585-ht784   0/1     Init:0/1          0          0s
hello-67b44c7585-ht784   0/1     PodInitializing   0          2s
hello-67b44c7585-ht784   1/1     Running           0          3s
hello-67b44c7585-wfcg9   0/1     Pending           0          0s
hello-67b44c7585-wfcg9   0/1     Init:0/1          0          0s
hello-67b44c7585-wfcg9   0/1     PodInitializing   0          2s
hello-67b44c7585-wfcg9   1/1     Running           0          3s
hello-67b44c7585-fctgq   1/1     Terminating       0          8m53s
hello-67b44c7585-ht784   1/1     Terminating       0          7m52s
hello-67b44c7585-wfcg9   1/1     Terminating       0          5m50s
hello-67b44c7585-d9w7j   1/1     Terminating       0          9m54s
```

This concludes the walkthrough of how Kubeless automatically scales based on CPU. I hope the above content helps to some extent; if you still have questions, you can follow the industry information channel to learn more related knowledge.
