This article introduces container resource requests and resource limits in Kubernetes, along with an example analysis of HeapSter. It should serve as a useful reference; I hope you learn a lot from it as the editor walks you through the topic.
Resource requests and resource limits of the container
requests: the demand, i.e. the minimum guarantee. When scheduling, a node must be able to satisfy the resources specified in requests before the pod can be placed on it.
limits: the hard limit. No matter how the container runs, it is not allowed to exceed the values set in limits.
CPU: one CPU in K8s corresponds to one logical CPU on the host. A logical CPU can be divided into 1000 millicores, so 500m = 0.5 CPU, i.e. half a core.
Units of measurement for memory: E, P, T, G, M, K, or their power-of-two counterparts Ei, Pi, Ti, Gi, Mi, Ki.
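As a quick illustration (the values here are hypothetical, not taken from the manifest below), the same half-CPU / 128 MiB request can be written in several equivalent notations:

resources:
  requests:
    cpu: "500m"            # 500 millicores = 0.5 CPU
    memory: "128Mi"        # 128 MiB, a power-of-two unit
  limits:
    cpu: "0.5"             # same meaning as 500m
    memory: "134217728"    # plain bytes are also accepted (equals 128Mi)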
[root@master scheduler]# kubectl explain pods.spec.containers.resources.requests
[root@master scheduler]# kubectl explain pods.spec.containers.resources.limits
[root@master metrics]# cat pod-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/stress-ng:v1
    command: ["/usr/bin/stress-ng", "-m", "1", "-c", "1", "--metrics-brief"]
    # -m 1 starts one child process to stress test memory; -c 1 starts one child process
    # to stress test the CPU. By default one stress-ng child process uses 256m of memory.
    resources:
      requests:
        cpu: "200m"
        memory: "128Mi"
      limits:
        cpu: "1"          # no unit means one whole CPU
        memory: "200Mi"
[root@master metrics]# kubectl apply -f pod-demo.yaml
pod/pod-demo created
[root@master metrics]# kubectl get pods -o wide
NAME       READY   STATUS    RESTARTS   AGE   IP            NODE
pod-demo   1/1     Running   0          6m    10.244.2.48   node2
[root@master metrics]# kubectl exec -it pod-demo -- /bin/sh
/ # top
Mem: 3542328K used, 339484K free, 123156K shrd, 3140K buff, 1737252K cached
CPU:  21% usr   4% sys   0% nic  73% idle   0% io   0% irq   0% sirq
Load average: 1.31 1.00 0.74 4/968 1405
  PID  PPID USER     STAT   VSZ %VSZ CPU %CPU COMMAND
    8     1 root     R     6884   0%   0  15% {stress-ng-cpu} /usr/bin/stress-ng -m 1 -c 1 --metrics-brief
 1404     9 root     R     262m   7%   3  12% {stress-ng-vm} /usr/bin/stress-ng -m 1 -c 1 --metrics-brief
    9     1 root     S     6244   0%   0  20% {stress-ng-vm} /usr/bin/stress-ng -m 1 -c 1 --metrics-brief
    1     0 root     S     6244   0%   0   0% /usr/bin/stress-ng -m 1 -c 1 --metrics-brief
 1202     0 root     S     1508   0%   0   0% /bin/sh
 1405  1202 root     R     1500   0%   0   0% top
After we set resource requests and limits for a container, K8s automatically assigns the pod a QoS (quality of service) class. You can see this field with kubectl describe pods xxx.
QoS is divided into three classes (see the sketch after this list):
Guaranteed: every container in the pod has its CPU and memory requests set equal to the corresponding limits, i.e. cpu.requests = cpu.limits and memory.requests = memory.limits. Guaranteed pods have the highest priority and are run first, even when resources on the node are tight.
Burstable: at least one container in the pod has a requests attribute set for CPU or memory, while the limits attribute may be left undefined; this type of pod has medium priority.
BestEffort: no container has any requests or limits set, so this type of pod has the lowest priority. When resources run short, BestEffort containers are terminated first to free resources so that pods of the other two classes can keep running normally.
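As a minimal sketch (the pod name, image, and values below are made up for illustration), a spec whose requests equal its limits for both resources is classified as Guaranteed:

apiVersion: v1
kind: Pod
metadata:
  name: qos-demo                  # hypothetical name
spec:
  containers:
  - name: app
    image: ikubernetes/myapp:v1   # hypothetical image
    resources:
      requests:
        cpu: "500m"
        memory: "256Mi"
      limits:
        cpu: "500m"               # equal to requests for every resource -> Guaranteed
        memory: "256Mi"

Then kubectl describe pod qos-demo | grep "QoS Class" should report QoS Class: Guaranteed.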
HeapSter
The function of HeapSter is to collect the resource usage of the pods on each node and then present it to the user in a graphical interface.
The cAdvisor built into kubelet is responsible for collecting resource usage on each node; HeapSter aggregates that information and persists it to the InfluxDB database, and then we can present the data graphically with Grafana.
Generally speaking, the metrics we monitor include the K8s cluster's system metrics, container metrics, and application metrics.
By default, InfluxDB uses emptyDir as its storage volume, so the data is lost as soon as the container stops. In production we need to replace it with a persistent storage volume such as GlusterFS.
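A minimal sketch of that change (the claim name is an assumption, the PVC itself, e.g. backed by GlusterFS, must be created separately, and the volume name should match whatever your copy of influxdb.yaml uses): in the monitoring-influxdb Deployment, replace the emptyDir volume with a persistentVolumeClaim:

volumes:
- name: influxdb-storage
  # default upstream definition, data is lost when the pod goes away:
  # emptyDir: {}
  # replaced with a pre-created PVC (hypothetical claim name):
  persistentVolumeClaim:
    claimName: influxdb-pvc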
InfluxDB: https://github.com/kubernetes/heapster/blob/master/deploy/kube-config/influxdb/influxdb.yaml
[root@master metrics]# wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/influxdb.yaml
[root@master metrics]# kubectl apply -f influxdb.yaml
deployment.extensions/monitoring-influxdb created
service/monitoring-influxdb created
[root@master metrics]# kubectl get svc -n kube-system
NAME                  TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
monitoring-influxdb   ClusterIP   10.100.80.21   <none>        8086/TCP   17s
[root@master metrics]# kubectl get pods -n kube-system
NAME                                   READY   STATUS    RESTARTS   AGE
monitoring-influxdb-848b9b66f6-ks69q   1/1     Running   0          10m
[root@master metrics]# kubectl logs monitoring-influxdb-848b9b66f6-ks69q -n kube-system
In this way we have deployed influxdb.
Let's start deploying HeapSter; however, HeapSter depends on RBAC, so we deploy the RBAC configuration first. See https://github.com/kubernetes/heapster/tree/master/deploy/kube-config/rbac
[root@master metrics]# kubectl apply -f https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/rbac/heapster-rbac.yaml
clusterrolebinding.rbac.authorization.k8s.io/heapster created
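For reference, the manifest applied above is essentially a single ClusterRoleBinding that binds the heapster ServiceAccount in kube-system to the built-in system:heapster ClusterRole; a sketch of its content (check the URL above for the authoritative version):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: heapster
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:heapster
subjects:
- kind: ServiceAccount
  name: heapster
  namespace: kube-system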
Now we can deploy HeapSter.
Visit https://github.com/kubernetes/heapster/blob/master/deploy/kube-config/influxdb/heapster.yaml
[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/heapster.yaml
serviceaccount/heapster created
deployment.extensions/heapster created
service/heapster created
[root@master ~]# kubectl get svc -n kube-system
NAME       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
heapster   ClusterIP   10.100.35.112   <none>        80/TCP    1m
[root@master ~]# kubectl get pods -n kube-system -o wide
NAME                        READY   STATUS    RESTARTS   AGE   IP            NODE
heapster-84c9bc48c4-8h7vf   1/1     Running   0          9m    10.244.1.63   node1
[root@master ~]# kubectl logs heapster-84c9bc48c4-8h7vf -n kube-system
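It is worth noting how HeapSter finds its data source and its sink: the upstream heapster.yaml passes them as command-line flags to the container, roughly as follows (verify against the URL above, since the manifest and image tag may differ):

containers:
- name: heapster
  image: k8s.gcr.io/heapster-amd64:v1.5.4   # tag may differ in your copy
  command:
  - /heapster
  - --source=kubernetes:https://kubernetes.default
  - --sink=influxdb:http://monitoring-influxdb.kube-system.svc:8086

The --sink flag points at the monitoring-influxdb service we created earlier, which is why InfluxDB had to be deployed first.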
With that, the HeapSter components are installed; next we install Grafana.
Visit https://github.com/kubernetes/heapster/blob/master/deploy/kube-config/influxdb/grafana.yaml
In order to access Grafana from outside the cluster, we need a NodePort service, so add type: NodePort at the end of the grafana.yaml file.
[root@master ~]# wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/grafana.yaml
[root@master ~]# tail grafana.yaml
  # or through a public IP.
  # type: LoadBalancer
  # You could also use NodePort to expose the service at a randomly-generated port
  # type: NodePort
  ports:
  - port: 80
    targetPort: 3000
  selector:
    k8s-app: grafana
  type: NodePort
[root@master ~]# kubectl apply -f grafana.yaml
deployment.extensions/monitoring-grafana created
service/monitoring-grafana created
[root@master ~]# kubectl get svc -n kube-system
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
heapster               ClusterIP   10.100.35.112   <none>        80/TCP          22m
kube-dns               ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP   37d
kubernetes-dashboard   NodePort    10.104.8.78     <none>        443:31647/TCP   16d
monitoring-grafana     NodePort    10.96.150.141   <none>        80:30357/TCP    2m
monitoring-influxdb    ClusterIP   10.100.80.21    <none>        8086/TCP        11h
[root@master ~]# kubectl get pods -n kube-system
NAME                                  READY   STATUS    RESTARTS   AGE
monitoring-grafana-555545f477-qhb28   1/1     Running   0          5m
Open a browser and visit the host ip: http://172.16.1.100:30357
It is said that HeapSter has been completely deprecated as of v1.12.
[root@master ~]# kubectl top nodes
[root@master ~]# kubectl top pod
In theory, the two commands above should return results, but since k8s v1.11 they can no longer be used this way, and there is nothing we can do about that here.
Thank you for reading this article carefully. I hope this analysis of container resource requests, resource limits, and HeapSter has been helpful to you.