How do you visualize Kubernetes (K8s) resource cost monitoring? This article analyzes the problem in detail and walks through a practical solution, in the hope of helping readers facing the same question find a simpler, more workable approach.
Complexity of calculating Kubernetes cost
Adopting Kubernetes and a service-based architecture can bring many benefits to an enterprise: teams move faster and applications scale more easily. But the shift also introduces complexity, notably around cloud cost visibility. This is because applications and their resource requirements are often dynamic, and teams share core resources without transparent prices tied to individual workloads. In addition, enterprises that take full advantage of Kubernetes often run workloads on many different machine types, and sometimes across multiple cloud providers.
We will look at best practices and different ways to implement cost monitoring for showback/chargeback programs in the enterprise, and at how to empower users to act on this information. We'll start with Kubecost, which provides an open source way to ensure consistent and accurate visibility across all Kubernetes workloads.
Let's take a closer look at best practices for accurately allocating and monitoring Kubernetes workload costs, together with spend on the related managed services.
Cost allocation
Precise allocation of resource costs is the first step toward creating cost visibility and using resources efficiently in a Kubernetes environment.
To do this correctly, you need to allocate costs at the level of individual containers within each workload. Once workload-level allocation is complete, costs can be attributed correctly to teams, departments, and even individual developers by aggregating different sets of workloads. The workload-level cost allocation framework is as follows:
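The original article illustrates this framework with a diagram; as a minimal, hedged sketch (the function and variable names below are illustrative and not part of any Kubecost or Kubernetes API), it boils down to multiplying, for every resource a workload uses, the amount consumed, the period of consumption, and the unit price, and then summing the results:

    # Minimal sketch of the framework described below: for each resource a workload
    # consumes, cost = amount consumed x time period of consumption x unit price,
    # summed over all resources (CPU, memory, GPU, persistent volumes, and so on).
    # Names are illustrative only, not part of any Kubecost API.

    def resource_cost(amount, hours, hourly_unit_price):
        """Cost of one resource over the measured period."""
        return amount * hours * hourly_unit_price

    def workload_cost(resources):
        """resources: iterable of (amount, hours, hourly_unit_price) tuples."""
        return sum(resource_cost(a, h, p) for a, h, p in resources)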
Let's break the framework down piece by piece.
The average amount of resources consumed is calculated from the Kubernetes scheduler or provided by the cloud provider, depending on the specific resource being measured. We recommend calculating memory and CPU allocations from the maximum of request and usage, which reflects the amount of resources the Kubernetes scheduler itself reserves. Resources such as load balancers and persistent volumes, on the other hand, are based strictly on the quantity provisioned by the provider.
The time period of resource consumption can be calculated directly from the Kubernetes API. It is determined by the amount of time a resource (memory, CPU, GPU, and so on) spends in the Running state. For the data to be accurate enough for cloud chargeback, we recommend that teams reconcile it against the time recorded by the cloud provider for the specific cloud resources involved (such as nodes). We cover this in more detail in a later section.
The price of each resource is determined by observing the cost of that particular resource in your environment. For example, the hourly CPU price of an m5.xlarge spot instance in the AWS us-east-1 region differs from the on-demand price of the same instance.
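Putting the three factors together, a small worked example might look like the following. The request and usage figures, the 720-hour window, and the hourly prices are all invented for illustration rather than taken from real cloud rate cards, and the helper is not part of Kubecost or the Kubernetes API:

    # Hypothetical per-container cost for one 30-day (720-hour) window.
    # Prices are illustrative placeholders, not real cloud rates.

    CPU_PRICE_PER_CORE_HOUR = 0.031   # e.g. an on-demand rate; a spot rate would be lower
    RAM_PRICE_PER_GIB_HOUR = 0.004

    def container_cost(cpu_request, cpu_usage, ram_request_gib, ram_usage_gib, hours_running):
        cpu_cores = max(cpu_request, cpu_usage)        # bill on max(request, usage), as above
        ram_gib = max(ram_request_gib, ram_usage_gib)
        return (cpu_cores * CPU_PRICE_PER_CORE_HOUR
                + ram_gib * RAM_PRICE_PER_GIB_HOUR) * hours_running

    # Requested 2 cores / 4 GiB, actually used 1.5 cores / 5 GiB, ran for 720 hours:
    print(round(container_cost(2.0, 1.5, 4.0, 5.0, 720), 2))   # -> 59.04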
Using this framework, costs can be appropriately allocated across workloads and then easily aggregated by any Kubernetes concept, such as namespace, label, annotation, or controller.
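As a sketch of that aggregation step, per-workload costs can be rolled up by namespace (or any label) in a few lines; the workload records below are invented for illustration:

    from collections import defaultdict

    # Invented per-workload cost records; in practice these come from the allocation
    # step above plus metadata (namespace, labels, controller) from the Kubernetes API.
    workloads = [
        {"namespace": "checkout", "controller": "checkout-api",  "cost": 412.50},
        {"namespace": "checkout", "controller": "checkout-jobs", "cost": 96.20},
        {"namespace": "search",   "controller": "search-api",    "cost": 288.75},
    ]

    costs_by_namespace = defaultdict(float)
    for w in workloads:
        costs_by_namespace[w["namespace"]] += w["cost"]

    for namespace, cost in sorted(costs_by_namespace.items()):
        print(f"{namespace}: ${cost:.2f}")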
Kubernetes cost monitoring
By allocating costs along Kubernetes concepts such as pods or controllers, you can begin to map expenses accurately onto any internal business dimension, such as team, product, department, or cost center. A common enterprise practice is to separate team workloads by Kubernetes namespace, while others use Kubernetes labels or annotations to identify which team a workload belongs to.
Another key factor in monitoring costs across applications and teams is deciding who should pay for idle capacity: cluster resources that are billed to the enterprise but not actually used. Typically, these costs are either charged to a central infrastructure cost center or distributed pro rata to the application teams. Allocating them to the teams responsible for provisioning decisions aligns incentives toward efficiently sized clusters and tends to produce positive results.
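A hedged sketch of the pro rata option, assuming you already know the total billed cluster cost and each team's allocated spend (all figures below are illustrative):

    # Idle cost is what the cluster bills minus what the workloads account for.
    # Here it is distributed pro rata to teams based on their allocated spend;
    # charging it to a central infrastructure cost center is the alternative.

    cluster_cost = 10_000.00
    team_allocated = {"checkout": 4_000.00, "search": 2_500.00, "platform": 1_500.00}

    total_allocated = sum(team_allocated.values())
    idle_cost = cluster_cost - total_allocated        # 2,000.00 billed but unused

    for team, allocated in team_allocated.items():
        share = allocated / total_allocated
        print(f"{team}: allocated ${allocated:,.2f}, idle share ${idle_cost * share:,.2f}")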
Reconciling with the cloud bill
Kubernetes provides a large amount of real-time data, which gives developers direct access to cost metrics. Although this real-time data is usually accurate, it may not match the cloud provider's billing data exactly. For example, when determining the hourly rate of an AWS spot node, you have to wait for the spot data feed or the Cost and Usage Report to learn the exact rate. For billing and chargeback purposes, you should reconcile the data against the actual bill.
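One simple, hedged way to perform that reconciliation is to scale the real-time estimates so that their total matches the invoiced amount for the same period; the figures below are placeholders, and this is a sketch rather than a description of any provider's or Kubecost's actual reconciliation logic:

    # Scale real-time estimates so their total matches the provider's invoice
    # for the same billing period. All figures are illustrative.

    estimated_costs = {"checkout": 4_100.00, "search": 2_450.00, "platform": 1_530.00}
    actual_billed_total = 8_484.00     # taken from the cost and usage report (illustrative)

    adjustment = actual_billed_total / sum(estimated_costs.values())
    reconciled = {team: cost * adjustment for team, cost in estimated_costs.items()}

    for team, cost in reconciled.items():
        print(f"{team}: ${cost:,.2f}")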
Better visibility and governance through Kubecost
We have seen how to work with this data to calculate the cost of Kubernetes workloads. Another approach is to leverage Kubecost, an open-source-based cost and capacity management solution that provides visibility across the entire Kubernetes environment. Kubecost provides cost visibility and insight for Kubernetes workloads and the related managed services they consume, such as S3 or RDS. The product collects real-time data from Kubernetes and can be reconciled with your cloud billing data to reflect the prices you actually pay.
With solutions like Kubecost, you can empower application engineers to make informed real-time decisions and put in place both immediate and long-term practices for optimizing and governing cloud spend. This includes adopting cost-optimized solutions, implementing Kubernetes budgets and alerts, running showback/chargeback programs, and even cost-based automation, all without compromising performance.
Kubecost Community Edition is available for free and includes all of the features described above. You can find the Kubecost Helm chart in the Rancher App Store for easy deployment. Rancher gives you excellent visibility into and control over your Kubernetes clusters, while Kubecost gives you a direct view of your spend and of how to optimize it. Together, they provide a complete cost management solution for teams using Kubernetes.
This concludes the analysis of how to visualize Kubernetes resource cost monitoring. I hope the content above has been helpful; if you still have questions, you can follow the industry information channel for more related knowledge.