Beginner's guide: how to use kubectl and HPA to scale Kubernetes applications

2025-01-20 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/02 Report--

Kubernetes has fundamentally changed how software is developed. As an open-source platform for managing containerized workloads and services, Kubernetes is portable and extensible, promotes declarative configuration and automation, and has proven itself a major player in managing complex microservices. Kubernetes is widely adopted across the industry because it caters to the following needs:

Enterprises want to keep costs low while growing.

DevOps wants a stable platform that can run applications on a large scale.

Developers want a reliable and repeatable process for writing, testing, and debugging code.

But have you ever considered how to get a powerful container orchestration platform while using only the resources you actually need? The key to optimal resource utilization is knowing which applications to scale and when to scale them. In this article, we will therefore discuss how to scale Kubernetes containers, paying special attention to two tools: kubectl and the Horizontal Pod Autoscaler (HPA).

Kubectl

In most cases, you interact with Kubernetes through a command-line tool called kubectl. kubectl is primarily used to communicate with the Kubernetes API to create, update, and delete workloads in Kubernetes. The following sections provide some common commands you can use to start managing Kubernetes.

Most kubectl commands specify an operation to perform, such as create or delete. This approach usually involves interpreting files (YAML or JSON) that describe objects (pods, services, resources, and so on) in Kubernetes. These files can serve as templates and as persistent records of the environment, and they help keep Kubernetes focused on declarative configuration. The operations specified on the command line are passed to the API server, which then communicates with back-end services inside Kubernetes as needed. The following notes can help you install kubectl:
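As a sketch of such a declarative file, a minimal pod definition might look like the following (the object name, labels, and image here are purely illustrative):

```yaml
# pod.yaml - a minimal declarative Kubernetes object definition
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  labels:
    app: example
spec:
  containers:
  - name: example-container
    image: nginx:1.25
    ports:
    - containerPort: 80
```

A file like this can be applied with `kubectl apply -f pod.yaml`, kept under version control, and re-applied whenever the desired state changes.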

Note that the recommended kubectl binary for Windows changes with each new release. To find the most suitable binary currently available, check the latest stable version at:

https://storage.googleapis.com/kubernetes-release/release/stable.txt

and adjust the download URL accordingly.

Kubectl syntax

The syntax of kubectl is as follows:

kubectl [command] [TYPE] [NAME] [flags]

command: the operation you want to perform (create, delete, etc.)

TYPE: the type of resource against which you want to execute the command (pod, service, etc.)

NAME: the name of the resource object (case-sensitive). If you do not specify a name, the command returns information for all resources of that type that match your command.

flags: optional, but useful when you need to target a specific resource. For example, --namespace lets you specify the namespace in which to perform the operation.

Kubectl operation

The following example can help you familiarize yourself with running common kubectl operations:

kubectl apply - Apply or update a resource from a file or stdin.

# Create a service using the definition in example-service.yaml.
kubectl apply -f example-service.yaml

kubectl get - List one or more resources.

# List all pods in plain-text output format.
kubectl get pods

# List all pods in plain-text output format and include additional information (such as node name).
kubectl get pods -o wide

kubectl describe - Display the detailed state of one or more resources, including uninitialized ones by default.

# Display the details of the node with name <node-name>.
kubectl describe nodes <node-name>

kubectl delete - Delete resources from a file, from stdin, or by specifying label selectors, names, resource selectors, or resources.

# Delete a pod using the type and name specified in the pod.yaml file.
kubectl delete -f pod.yaml

# Delete all the pods and services that have the label name=<label>.
kubectl delete pods,services -l name=<label>

kubectl logs - Print the logs for a container in a pod.

# Return a snapshot of the logs from pod <pod-name>.
kubectl logs <pod-name>

# Start streaming the logs from pod <pod-name>. This is similar to the 'tail -f' Linux command.
kubectl logs -f <pod-name>
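The apply example above references an example-service.yaml file; a minimal manifest it might contain could look like the following (the service name, selector, and ports are hypothetical):

```yaml
# example-service.yaml - hypothetical service definition for the apply example
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  selector:
    app: example
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
```

Running `kubectl apply -f example-service.yaml` creates the service if it does not exist, or updates it in place if it does.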

The above are common kubectl operations. To learn more, check out the official kubectl guide. We have also covered this in previous articles:

Seven kubectl commands that you are sure to use

A full-solution tutorial for managing Kubernetes with Kubectl

Horizontal Pod Autoscaler (HPA)

The Horizontal Pod Autoscaler (HPA) is an important Kubernetes feature that lets you configure a cluster to automatically scale running services. HPA is implemented as a Kubernetes API resource and a controller. The resource determines the behavior of the controller, and the controller periodically adjusts the number of replicas in a replication controller or deployment so that the observed average CPU utilization matches a user-specified target.

HPA is implemented as a control loop whose period is controlled by the --horizontal-pod-autoscaler-sync-period flag of the controller manager (the default is 30 seconds).

During each cycle, the controller manager queries resource utilization against the metrics specified in each HPA definition. The controller manager obtains metrics from either the resource metrics API (for per-pod resource metrics) or the custom metrics API (for all other metrics).

For per-pod resource metrics such as CPU, the controller fetches the metrics from the resource metrics API for each pod targeted by the HPA. If a target utilization value is set, the controller calculates the utilization as a percentage of the equivalent resource request on the containers in each pod. If a target raw value is set, the raw metric values are used directly. The controller then averages the utilization or raw values across all targeted pods (depending on the type of target specified) and produces a ratio used to scale the desired number of replicas.
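As a rough illustration of that ratio, the upstream HPA algorithm computes desiredReplicas = ceil(currentReplicas × currentMetricValue / targetMetricValue). A minimal shell sketch of this arithmetic follows (the sample numbers are made up):

```shell
#!/bin/sh
# Toy calculation of the HPA scaling ratio:
# desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric)
current_replicas=3
current_cpu=90   # observed average CPU utilization, percent
target_cpu=50    # target utilization set on the HPA, percent

# Integer ceiling division: ceil(a/b) == (a + b - 1) / b
desired=$(( (current_replicas * current_cpu + target_cpu - 1) / target_cpu ))
echo "desired replicas: $desired"
```

With these sample values, ceil(3 × 90 / 50) = ceil(5.4) = 6, so the controller would scale the workload up to 6 replicas.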

For per-pod custom metrics, the controller functions similarly to per-pod resource metrics, except that it works with raw values rather than utilization values.

For object metrics, a single metric describing the object in question is fetched and compared with the target value to produce a ratio for scaling the number of replicas.

The HPA controller can obtain metrics in two different ways: direct Heapster access and REST client access. With direct Heapster access, HPA queries Heapster through the API server's service proxy subresource. Note that Heapster must be deployed on the cluster and running in the kube-system namespace.

The workflow of HPA consists of the following four steps, as shown in the figure:

During the default 30-second interval, HPA continuously checks the metric values you have configured.

If the specified threshold is reached, HPA attempts to increase the number of pods.

HPA primarily updates the number of replicas in the deployment or replication controller.

The deployment or replication controller then rolls out any additional required pods.
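The four steps above can be sketched as a toy loop. The metric samples, threshold, and fixed +1 increment below are invented for illustration; a real HPA queries the metrics API each cycle and uses the ratio formula rather than a fixed step:

```shell
#!/bin/sh
# Toy HPA-style loop: check a fake metric once per "interval" and
# bump the replica count whenever it exceeds the threshold.
replicas=2
target=50                    # threshold, e.g. CPU percent
for metric in 40 55 70; do   # fake samples, one per 30-second check
  if [ "$metric" -gt "$target" ]; then
    replicas=$((replicas + 1))   # step 3: update the replica count
  fi
done
echo "replicas: $replicas"
```

Here the first sample (40) is under the threshold, while the next two (55, 70) each trigger a scale-up, leaving 4 replicas.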

Consider the following factors when you enable HPA:

The default HPA check interval is 30 seconds; it can be configured with the --horizontal-pod-autoscaler-sync-period flag of the controller manager.

The default tolerance on HPA-related metrics is 10%.

After the last scale-up event, HPA waits 3 minutes for the metrics to stabilize. This wait period can be configured with the --horizontal-pod-autoscaler-upscale-delay flag.

After the last scale-down event, HPA waits 5 minutes to avoid autoscaler thrashing. This can be configured with the --horizontal-pod-autoscaler-downscale-delay flag.

HPA works best with deployment objects and pod metrics rather than with replication controllers directly, and it does not work with rolling updates that directly manipulate replication controllers; with a deployment, the size of the underlying replica set is managed through the deployment object.

When HPA is used with custom metrics, such as pod metrics or object metrics, you need to decide when to scale. Because Kubernetes supports multiple metrics, you can use several metrics at the same time to decide when to scale; note that Kubernetes considers each metric in order.

For more examples, see:

https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale
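To make the configuration concrete, a simple HPA object might be declared as follows. This sketch uses the older autoscaling/v1 API that matches the era of this article, and the deployment name and replica bounds are hypothetical:

```yaml
# hpa.yaml - hypothetical HPA targeting a deployment named example-deployment
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: example-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-deployment
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
```

An equivalent object can also be created imperatively with `kubectl autoscale deployment example-deployment --cpu-percent=50 --min=2 --max=10`.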

Conclusion

In this article, we discussed two key tools for scaling Kubernetes applications, both of which are core components of any Kubernetes service. We saw how to install kubectl and use operations such as apply, get, delete, describe, and logs. We also reviewed the Horizontal Pod Autoscaler: how it works and why it matters for any Kubernetes service. Both kubectl and HPA are important Kubernetes features for scaling microservice applications.

Rancher 2.3, released last month, integrates HPA functionality, which can be used in Rancher through the UI. Rancher 2.3 is now also stable. If you want to learn more about Rancher 2.3, follow our Rancher K8S cloud class next Wednesday night.

