Shulou (Shulou.com) 06/03 Report (SLTechnology News & Howtos, updated 2025-01-19)
I. UI access interface of the K8s Dashboard
Although resources can be created, deleted, and modified in Dashboard, it is usually treated as a tool for checking the health of a K8s cluster.
As the web user interface of Kubernetes, Dashboard lets users deploy containerized applications in a Kubernetes cluster, manage those applications, and manage the cluster itself. Through Dashboard, users can view the running status of applications in the cluster, and can also create or modify deployments, jobs, services, and other Kubernetes resources. Through the deployment wizard, users can scale a deployment, perform rolling updates, restart Pods, and deploy new applications. Dashboard can also be used to view the status of Kubernetes resources.
1. Functions provided by Dashboard
By default, Dashboard displays objects in the default namespace; other namespaces can be chosen through the namespace selector. Most of the cluster's object types can be displayed in the Dashboard user interface.
1) Cluster management
The cluster management view is used to manage nodes, namespaces, persistent volumes, roles, and storage classes. The node view shows each node's CPU and memory usage, as well as its creation time and running status. The namespaces view shows which namespaces exist in the cluster and their running status. The roles view lists the roles that exist in the cluster, their types, and their namespaces. Persistent volumes are displayed as a list showing each volume's total capacity, access mode, usage status, and other information; administrators can also delete persistent volumes and edit their YAML files.
2) Workloads
The workload view shows all workload types, such as Deployments, ReplicaSets, StatefulSets, and so on, with the workloads organized by type. The detail view for a workload shows the application's details and status information, as well as the relationships between objects.
3) Service discovery and load balancing
The service discovery view can expose services inside the cluster to applications outside it. Applications inside and outside the cluster can then invoke each other through the exposed services: external applications use external endpoints, and internal applications use internal endpoints.
4) Storage
The storage view shows the PersistentVolumeClaim resources that applications use to store data.
5) Configuration
The configuration view shows the configuration information used by applications running in the cluster. Kubernetes provides ConfigMaps and Secrets for this; through the configuration view, you can edit and manage these configuration objects and reveal hidden sensitive information.
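As a concrete illustration of the objects this view manages, here is a minimal, hypothetical ConfigMap sketch (the name and keys are invented for the example):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config         # hypothetical name
  namespace: default
data:
  LOG_LEVEL: "info"         # plain key/value pairs, editable from the configuration view
  app.properties: |
    color=blue
    mode=production
```

A Secret looks similar, but its `data` values are stored base64-encoded, which is why Dashboard offers to reveal the decoded, hidden values.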
6) Log view
The Pod list and detail pages link to a log viewer, where you can inspect not only a Pod's logs but also the logs of the Pod's individual containers. Through Dashboard you can create and deploy a containerized application with the wizard, enter the application's details manually, or create applications by uploading YAML or JSON files.
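For example, a minimal Deployment manifest like the following (a hypothetical nginx example, not one from this walkthrough) could be pasted into or uploaded through Dashboard's create dialog:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-nginx          # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-nginx
  template:
    metadata:
      labels:
        app: demo-nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.17
          ports:
            - containerPort: 80
```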
2. Download the required yaml file and image, and modify recommended.yaml

```shell
[root@master https]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc5/aio/deploy/recommended.yaml
[root@master https]# docker pull kubernetesui/dashboard:v2.0.0-rc5
[root@master https]# vim recommended.yaml
```

```yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort        # add this line (around line 40)
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
```

Execute:

```shell
[root@master https]# kubectl apply -f recommended.yaml
```

Check:

```shell
[root@master https]# kubectl get svc -n kubernetes-dashboard
```
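Because the Service above does not pin a `nodePort`, Kubernetes assigns one from its default NodePort range, 30000-32767 (the 32306 used below is one such assignment). A small shell sketch, with example port values, to check whether a port falls in that range:

```shell
# check whether a port lies in Kubernetes' default NodePort range (30000-32767)
in_nodeport_range() {
  port=$1
  [ "$port" -ge 30000 ] && [ "$port" -le 32767 ] && echo yes || echo no
}

in_nodeport_range 32306   # prints yes
in_nodeport_range 8443    # prints no (8443 is the container's targetPort, not a NodePort)
```

The range can be changed with the apiserver's `--service-node-port-range` flag, so a cluster configured differently may hand out ports outside this default window.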
3. Access https://192.168.1.21:32306 in a browser
PS: with older versions of dashboard, logging in from Google Chrome may fail; switch to another browser, such as Firefox.
4. Log in to dashboard based on a token.

Create a dashboard admin user and obtain its token:

```shell
# create the dashboard management service account
[root@master https]# kubectl create serviceaccount dashboard-admin -n kube-system
# bind the service account to the cluster-admin cluster role
[root@master https]# kubectl create clusterrolebinding dashboard-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
# get the name of the token secret first
[root@master https]# kubectl get secrets -n kube-system | grep dashboard-admin
# describe the secret found above to obtain the token
[root@master https]# kubectl describe secrets -n kube-system dashboard-admin-token-62bh9
```
Use the token to log in from the browser.
Create a resource
Check to see if the creation is successful
5. Log in to dashboard based on a kubeconfig configuration file.

Get the token:

```shell
# get the name of the token secret first
[root@master https]# kubectl get secrets -n kube-system | grep dashboard-admin
# describe the secret found above to obtain the token
[root@master https]# kubectl describe secrets -n kube-system dashboard-admin-token-62bh9
```

Generate a kubeconfig configuration file. First set an environment variable holding the obtained token:

```shell
[root@master https]# DASH_TOKEN=$(kubectl get secrets -n kube-system dashboard-admin-token-62bh9 -o jsonpath={.data.token} | base64 -d)
```

Write the K8s cluster's configuration information into the kubeconfig file:

```shell
[root@master https]# kubectl config set-cluster kubernetes --server=192.168.1.21:6443 --kubeconfig=/root/.dashboard-admin.conf
[root@master https]# kubectl config set-credentials dashboard-admin --token=$DASH_TOKEN --kubeconfig=/root/.dashboard-admin.conf
[root@master https]# kubectl config set-context dashboard-admin@kubernetes --cluster=kubernetes --user=dashboard-admin --kubeconfig=/root/.dashboard-admin.conf
[root@master https]# kubectl config use-context dashboard-admin@kubernetes --kubeconfig=/root/.dashboard-admin.conf
```

Export and save the generated /root/.dashboard-admin.conf file:

```shell
[root@master https]# sz /root/.dashboard-admin.conf    # export it to a location of your choice
```

Then select the kubeconfig login method in the browser and import the configuration file.
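The `base64 -d` in the DASH_TOKEN line is needed because Kubernetes stores Secret values base64-encoded, and `-o jsonpath={.data.token}` returns the raw stored value. A standalone sketch of that decode step, using a made-up token value instead of a real Secret:

```shell
# a Secret stores its values base64-encoded; simulate that with a made-up token
encoded=$(printf 'my-demo-token' | base64)
echo "$encoded"                        # the encoded form jsonpath would return
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "$decoded"                        # prints my-demo-token
```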
II. Deploy weave-scope to monitor the K8s cluster
Weave Scope is a visual monitoring tool for Docker and Kubernetes. It provides a complete top-down view of the cluster infrastructure and applications, letting users easily monitor and diagnose problems in distributed containerized applications in real time.
Using Scope:

- Topology: Scope automatically builds the logical topology of applications and clusters. For example, clicking PODS at the top shows all Pods and the dependencies between them; clicking HOSTS shows the relationships between nodes.
- Real-time resource monitoring: Scope shows the CPU and memory usage of resources; supported resource types are Host, Pod, and Container.
- Online operations: Scope also provides convenient online operations. For example, select a Host and click the >_ button to open that node's command-line terminal directly in the browser; click a Deployment's + button to perform a Scale Up; view Pod logs; and attach to, restart, or stop containers, making it possible to troubleshoot directly in Scope.
- Powerful search: Scope supports keyword search and locating resources, as well as conditional searches, such as finding and locating Pods with MEMORY > 100M.

1. Find the yaml file of scope on GitHub

(1) Search for scope on GitHub
(2) Open the description of deploying Scope on k8s
(3) Choose the deployment for k8s
(4) Copy the link above and download the yaml file:

```shell
[root@master https]# wget https://cloud.weave.works/k8s/scope.yaml
```

2. Modify the downloaded yaml file and run it

```shell
[root@master ~]# vim scope.yaml    # edit the yaml file; jump to around line 213 and modify the Service's port type
```

```yaml
spec:
  type: NodePort        # change the type to NodePort
  ports:
    - name: app
      port: 80
      protocol: TCP
      targetPort: 4040
```

(1) Execute:

```shell
[root@master https]# kubectl apply -f scope.yaml
```

(2) Check the containers and make sure they are running normally:

```shell
[root@master https]# kubectl get pod -o wide -n weave
```
The DaemonSet weave-scope-agent is the Scope agent that runs on every node of the cluster and is responsible for collecting data. The Deployment weave-scope-app is the Scope application; it takes data from the agents, displays it through the web UI, and interacts with users. The Service weave-scope-app is ClusterIP by default; it was changed to NodePort here (this can also be done by adding the parameter k8s-service-type=NodePort when generating the yaml file).

```shell
[root@master https]# kubectl get svc -n weave
```
# DaemonSet resource object: weave-scope-agent (proxy): responsible for collecting node information
# deployment resource object: weave-scope-app (App): get data from agent, display it through web UI and interact with users
# Compared with a Deployment, the distinguishing feature of a DaemonSet is that it runs exactly one Pod on each node.
# Because every node needs to be monitored, a DaemonSet resource object is used here.
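A DaemonSet manifest is shaped like the sketch below (a hypothetical, stripped-down agent example, not the actual weave-scope-agent spec). Note that there is no `replicas` field: the controller schedules one Pod from the template onto every node.

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: demo-agent            # hypothetical name
  namespace: demo
spec:
  selector:
    matchLabels:
      app: demo-agent
  template:                   # one Pod from this template runs on each node
    metadata:
      labels:
        app: demo-agent
    spec:
      containers:
        - name: agent
          image: busybox:1.31
          command: ["sh", "-c", "sleep 3600"]
```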
3. Visit http://192.168.1.21:31841/ in a browser
In Scope's web interface you can view many things, such as Pods and nodes, and you can also open a container's terminal, view its log information, and so on.
Summary
With its concise visualization, Weave Scope vividly shows the management of resource objects such as Services, controllers, and Pods, and its simple web UI operations make it convenient to troubleshoot and locate problems in time.
Weave Scope's web UI currently lacks login authentication, so access should be secured by other means, such as authentication at the web server.
III. Deploy Prometheus services
PS: the Prometheus deployed here is not the one provided on Prometheus's official website; it uses the kube-prometheus project provided by CoreOS.
Before deploying, let's look at the role of each Prometheus component:

- MetricsServer: the aggregator of resource usage in the K8s cluster; it collects data for use by K8s components such as kubectl, hpa, and the scheduler.
- Prometheus Operator: a system monitoring and alerting toolkit, used to store monitoring data.
- Prometheus node-exporter: collects data on K8s cluster resources according to the specified alert rules.
- Prometheus: collects data from the apiserver, scheduler, controller-manager, and kubelet components and transmits it over the HTTP protocol.
- Grafana: a platform for visual data statistics and monitoring.

Features
Compared with other traditional monitoring tools, Prometheus mainly has the following characteristics:
- A multidimensional data model, with time series data identified by metric name and key/value pairs.
- A flexible query language.
- No reliance on distributed storage; only local disks are involved.
- Time series data is pulled over HTTP; pushing time series data is also supported.
- Targets are discovered through service discovery or static configuration.
- Support for multiple kinds of graphs and dashboards.

1. Search for coreos/prometheus on GitHub
Copy Link
2. Clone the prometheus project from GitHub:

```shell
[root@master promethes]# yum -y install git    # install the git command
[root@master promethes]# git clone https://github.com/coreos/kube-prometheus.git    # clone the project from GitHub
```
3. Modify the grafana-service.yaml file to expose the service as a NodePort on port 31001.

```shell
[root@master promethes]# cd kube-prometheus/manifests/    # enter kube-prometheus's manifests directory
[root@master manifests]# vim grafana-service.yaml         # modify grafana's yaml file
```

```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: grafana
  name: grafana
  namespace: monitoring
spec:
  type: NodePort        # change to NodePort type
  ports:
    - name: http
      port: 3000
      targetPort: http
      nodePort: 31001   # map to host port 31001
  selector:
    app: grafana
```

Modify the prometheus-service.yaml file in the same way, exposing port 31002:

```shell
[root@master manifests]# vim prometheus-service.yaml    # modify prometheus's yaml file
```

```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    prometheus: k8s
  name: prometheus-k8s
  namespace: monitoring
spec:
  type: NodePort        # change to NodePort type
  ports:
    - name: web
      port: 9090
      targetPort: web
      nodePort: 31002   # map to host port 31002
  selector:
    app: prometheus
    prometheus: k8s
  sessionAffinity: ClientIP
```

4. Modify the alertmanager-service.yaml file to expose the service as a NodePort on port 31003.

```shell
[root@master manifests]# vim alertmanager-service.yaml    # modify alertmanager's yaml file
```

```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    alertmanager: main
  name: alertmanager-main
  namespace: monitoring
spec:
  type: NodePort        # change to NodePort type
  ports:
    - name: web
      port: 9093
      targetPort: web
      nodePort: 31003   # map to host port 31003
  selector:
    alertmanager: main
    app: alertmanager
  sessionAffinity: ClientIP
```

5. Run all the yaml files in the setup directory; they configure the underlying environment required by the yaml files that follow.

```shell
[root@master manifests]# kubectl apply -f setup/    # run all the yaml files in the setup directory
```

6. Run all the yaml files in the home directory (kube-prometheus).
When executing the following yaml files, every node downloads many images from the Internet. To keep the downloads from taking too long, you can download the images I provide, import them on each node, and then execute the yaml files, which saves some time. (If you use the images I provide, it is recommended to write a script to import them, to avoid errors caused by importing manually.)
```shell
[root@master manifests]# cd ..    # return to the previous directory (kube-prometheus)
[root@master kube-prometheus]# kubectl apply -f manifests/    # run all the yaml files in the manifests directory
```

Check:

```shell
[root@master ~]# kubectl get pod -n monitoring
```
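Prometheus pulls metrics from these targets over HTTP in a plain-text exposition format, where each time series is identified by a metric name plus key/value label pairs, exactly the data model described above. A sketch that parses one made-up sample line with awk (the metric name, labels, and value are invented for illustration):

```shell
# a hypothetical line in Prometheus exposition format: name{labels} value
line='node_cpu_seconds_total{cpu="0",mode="idle"} 12345.6'

metric=$(printf '%s\n' "$line" | awk -F'{' '{print $1}')   # metric name, before the label braces
value=$(printf '%s\n' "$line" | awk '{print $NF}')         # sample value, the last whitespace-separated field

echo "$metric"   # prints node_cpu_seconds_total
echo "$value"    # prints 12345.6
```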
7. Visit http://192.168.1.21:31001 in a browser
When a client accesses the IP of any node in the cluster on port 31001, the following interface appears (the default username and password are both admin).
Change the password when prompted:
(1) add a template
Click "import" to import the following three templates:
(2) Click the following to view the monitoring status within the cluster
You can see the monitoring status below
8. Import the monitoring template
Search for a template on Grafana's website, https://grafana.com/
Copy the id of the following template
Now you can see the monitoring graphs.