Tutorial on Deploying Dashboard and Prometheus in Kubernetes

I. Dashboard

1) Obtain the yaml file, modify it, and apply it

As shown in the figure:

[root@master ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc5/aio/deploy/recommended.yaml
[root@master ~]# vim recommended.yaml +39
# navigate to line 39 and modify the service resource
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 31001
  selector:
    k8s-app: kubernetes-dashboard
# By default the service type is ClusterIP, which needs to be changed to NodePort
# for easy access; it can also be mapped to a specified port.
[root@master ~]# kubectl apply -f recommended.yaml
[root@master ~]# kubectl get pod -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-7f5767668b-dd7ml   1/1     Running   0          28s
kubernetes-dashboard-57b4bcc994-vrzcp        1/1     Running   0          28s
# make sure the Pods created by the yaml file are running normally
[root@master ~]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.110.63.144   <none>        8000/TCP        86s
kubernetes-dashboard        NodePort    10.111.65.9     <none>        443:31001/TCP   87s
# the service resources are also normal and mapped to the specified port

2) Client access test

Versions of dashboard prior to this one could only be accessed with Firefox; this version has no such browser restriction.
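As a quick sanity check before opening the browser (a minimal sketch; the node IP below is a placeholder, substitute one of your own nodes):

[root@master ~]# NODE_IP=192.168.1.10   # placeholder: use a real node IP from your cluster
[root@master ~]# curl -k https://$NODE_IP:31001
# -k skips certificate verification, since the dashboard serves a self-signed certificate;
# any HTML response confirms the NodePort mapping works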

3) Log in using a token

[root@master ~]# kubectl create serviceaccount dashboard-admin -n kube-system
# create an administrative user for dashboard
[root@master ~]# kubectl create clusterrolebinding dashboard-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
# bind the newly created dashboard user to the cluster-admin role
[root@master ~]# kubectl get secrets -n kube-system | grep dashboard
dashboard-admin-token-88gxw   kubernetes.io/service-account-token   3   22s
# get the token name of the user just created
[root@master ~]# kubectl describe secrets -n kube-system dashboard-admin-token-88gxw
# view the details of the token

As shown in the figure:

Browser access:

If this page appears, the access was successful!

4) Log in using kubeconfig

Building on the token created above, do the following:

[root@master ~]# kubectl get secrets -n kube-system | grep dashboard
dashboard-admin-token-88gxw   kubernetes.io/service-account-token   3   22m
# view the token created earlier
[root@master ~]# kubectl describe secrets -n kube-system dashboard-admin-token-88gxw
# view the details of the token; this prints the token itself
[root@master ~]# DASH_TOKEN=$(kubectl get secrets -n kube-system dashboard-admin-token-88gxw -o jsonpath='{.data.token}' | base64 -d)
# store the decoded token in a variable
[root@master ~]# kubectl config set-cluster kubernetes --kubeconfig=/root/.dashboard-admin.conf
# write the k8s cluster configuration to a file (the file name is customizable)
[root@master ~]# kubectl config set-credentials dashboard-admin --token=${DASH_TOKEN} --kubeconfig=/root/.dashboard-admin.conf
# write the token information to the same file
[root@master ~]# kubectl config set-context dashboard-admin@kubernetes --cluster=kubernetes --user=dashboard-admin --kubeconfig=/root/.dashboard-admin.conf
# write the user and context information to the same file
[root@master ~]# kubectl config use-context dashboard-admin@kubernetes --kubeconfig=/root/.dashboard-admin.conf
# record the current context in the same file
[root@master ~]# sz /root/.dashboard-admin.conf
# finally, transfer the config file to the client (sz sends it over the terminal session)
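Before handing the file to a client, it is worth confirming what was written (a quick sanity check, not part of the original walkthrough; kubectl config view only prints the file, so it works even offline):

[root@master ~]# kubectl config view --kubeconfig=/root/.dashboard-admin.conf
# lists the cluster, user, and context entries written above; the token is shown as REDACTED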

Browser access:

If the above page appears, the login was successful!

That concludes this brief introduction to Dashboard!

II. Weave Scope

1) Obtain the yaml file, modify it, and apply it

As shown in the figure:

[root@master ~]# wget https://cloud.weave.works/k8s/scope.yaml
[root@master ~]# vim scope.yaml +212
# navigate to line 212, change the service type to NodePort, and specify the port
spec:
  type: NodePort
  ports:
    - name: app
      port: 80
      protocol: TCP
      targetPort: 4040
      nodePort: 31002
[root@master ~]# kubectl apply -f scope.yaml
[root@master ~]# kubectl get pod -n weave
# make sure all Pods are in the Running state
NAME                                         READY   STATUS    RESTARTS   AGE
weave-scope-agent-7t4qc                      1/1     Running   0          8m57s
weave-scope-agent-r78fz                      1/1     Running   0          8m57s
weave-scope-agent-t8j66                      1/1     Running   0          8m57s
weave-scope-app-78cff98cbc-cs4gs             1/1     Running   0          8m57s
weave-scope-cluster-agent-7cc889fbbf-pz6ft   1/1     Running   0          8m57s
[root@master ~]# kubectl get svc -n weave
# view the port mapped by the corresponding svc resource
NAME              TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
weave-scope-app   NodePort   10.102.221.220   <none>        80:31002/TCP   11m
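Optionally, confirm the Scope front end answers on its NodePort first (again, the node IP is a placeholder):

[root@master ~]# NODE_IP=192.168.1.10   # placeholder: use a real node IP from your cluster
[root@master ~]# curl -s -o /dev/null -w "%{http_code}\n" http://$NODE_IP:31002
# an HTTP 200 here means the Scope app is reachable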

Browser access:

III. Prometheus

Because the experimental environment has limited resources, the Dashboard and Weave Scope deployed above were deleted before deploying Prometheus!
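A minimal way to do that cleanup, assuming the yaml files from the earlier steps are still in /root:

[root@master ~]# kubectl delete -f recommended.yaml   # removes the dashboard resources
[root@master ~]# kubectl delete -f scope.yaml         # removes the Weave Scope resources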

Before you actually deploy Prometheus, you should understand the relationships and roles of the various components of Prometheus:

1) MetricsServer: an aggregator of resource-usage data in the K8s cluster, collecting data for components such as kubectl, hpa, and the scheduler (a quick check follows this list)

2) PrometheusOperator: a system monitoring and alerting toolkit, used to store monitoring data

3) NodeExporter: exposes key metric and status data for each node

4) KubeStateMetrics: collects resource-object data from the K8s cluster and specifies alerting rules

5) Prometheus: collects data from the apiserver, scheduler, controller-manager, and kubelet components in pull mode and transmits it over HTTP

6) Grafana: a platform for visualizing data statistics and monitoring
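Once the stack is running, a quick way to see the metrics pipeline in action (this assumes the cluster's metrics API is being served, e.g. by metrics-server or kube-prometheus's adapter):

[root@master ~]# kubectl top node               # per-node CPU and memory usage
[root@master ~]# kubectl top pod -n monitoring  # per-Pod usage in the monitoring namespace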

1) Obtain the yaml file, modify it, and apply it

Note: the Prometheus deployed here is not the version distributed on the official Prometheus website; it uses the kube-prometheus project provided by CoreOS.

As shown in the figure:

[root@master ~]# git clone https://github.com/coreos/kube-prometheus.git
# clone the project locally
[root@master ~]# cd kube-prometheus/manifests/
[root@master manifests]# vim grafana-service.yaml
# change the type and mapped port of the service resource for grafana
apiVersion: v1
kind: Service
metadata:
  labels:
    app: grafana
  name: grafana
  namespace: monitoring
spec:
  type: NodePort        # add type: NodePort
  ports:
    - name: http
      port: 3000
      targetPort: http
      nodePort: 31010   # custom mapped port
  selector:
    app: grafana
[root@master manifests]# vim alertmanager-service.yaml
# change the type and mapped port of the service resource for alertmanager
apiVersion: v1
kind: Service
metadata:
  labels:
    alertmanager: main
  name: alertmanager-main
  namespace: monitoring
spec:
  type: NodePort        # add type: NodePort
  ports:
    - name: web
      port: 9093
      targetPort: web
      nodePort: 31020   # custom mapped port
  selector:
    alertmanager: main
    app: alertmanager
  sessionAffinity: ClientIP
[root@master manifests]# vim prometheus-service.yaml
# change the type and mapped port of the service resource for prometheus
apiVersion: v1
kind: Service
metadata:
  labels:
    prometheus: k8s
  name: prometheus-k8s
  namespace: monitoring
spec:
  type: NodePort        # add type: NodePort
  ports:
    - name: web
      port: 9090
      targetPort: web
      nodePort: 31030   # custom mapped port
  selector:
    app: prometheus
    prometheus: k8s
  sessionAffinity: ClientIP
[root@master manifests]# pwd
/root/kube-prometheus/manifests
[root@master manifests]# kubectl apply -f setup/
# the yaml files in the setup directory must be applied first
[root@master manifests]# cd ..
[root@master kube-prometheus]# pwd
/root/kube-prometheus
[root@master kube-prometheus]# kubectl apply -f manifests/
# apply all yaml files in this directory
[root@master kube-prometheus]# kubectl get pod -n monitoring
# make sure all Pods in this namespace are in the Running state
[root@master kube-prometheus]# kubectl get svc -n monitoring | grep grafana
# view the port mapped by the grafana service
grafana   NodePort   10.97.56.230   <none>   3000:31010/TCP   16m

The Pod images are slow to download; in short, as long as the network is stable, this is not a problem!
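While waiting for the images, you can watch the Pods come up with plain kubectl (nothing specific to this project):

[root@master ~]# kubectl get pod -n monitoring -w
# -w streams status changes; press Ctrl-C once every Pod reports Running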

Browser access:

The built-in template is a bit plain, so import a nicer template next!

Template download address: https://grafana.com/dashboards (browse it yourself!)
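One way to fetch a community template from that site (a sketch only: dashboard ID 315 is just a popular Kubernetes example, and the download URL pattern is taken from Grafana's provisioning docs, so verify it for your version):

[root@master ~]# curl -sL https://grafana.com/api/dashboards/315/revisions/latest/download -o k8s-monitor.json
# then import k8s-monitor.json in the Grafana UI via Dashboards -> Import -> Upload JSON file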

It's done, and you can see the cool monitoring page.
