Many newcomers are unclear about how to build a monitoring platform based on Prometheus and Grafana. To help with this, the following article walks through the setup in detail. Readers who need it can follow along; hopefully you will take something away from it.
Related concepts
Monitoring in microservices is commonly divided into three categories according to its area of concern: Logging, Tracing, and Metrics.
Logging - used to record discrete events, such as an application's debug or error messages. It is the basis for diagnosing problems; the ELK stack, for example, is built around Logging.
Metrics - used to record aggregable data. For example, the current depth of a queue can be defined as a gauge that is updated when elements are enqueued or dequeued, and the number of HTTP requests can be defined as a counter that is incremented when a new request arrives. Prometheus specializes in the Metrics area (see the query sketch after this list).
Tracing - used to record information scoped to a single request, such as the call chain and latency of a remote method invocation. It is a sharp tool for troubleshooting system performance problems. The most commonly used tools are SkyWalking, Pinpoint, and Zipkin.
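As a small illustration of the Metrics category, a counter such as the number of HTTP requests is normally consumed through a PromQL query once Prometheus is running. The sketch below is only meant to show the idea: it assumes a Prometheus server on localhost:9090 and a hypothetical counter named http_requests_total.
# Ask Prometheus for the per-second request rate over the last 5 minutes (PromQL rate()).
# http_requests_total is a hypothetical counter; substitute a metric your exporters actually expose.
curl -s -G 'http://localhost:9090/api/v1/query' \
  --data-urlencode 'query=rate(http_requests_total[5m])'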
Today we focus on Prometheus monitoring. First, let's look at the key components involved.
Prometheus
Prometheus is an open source monitoring and alerting system and time series database (TSDB) originally developed at SoundCloud. Prometheus is written in Go and can be seen as an open source counterpart of Google's BorgMon monitoring system.
The basic principle of Prometheus is to periodically scrape the status of monitored components over HTTP; any component can be monitored as long as it exposes the corresponding HTTP interface, with no SDK or other integration required. The HTTP endpoint that exposes a monitored component's information is called an exporter. Exporters are already available for most components commonly used in development, such as Nginx, MySQL, Linux system information, MongoDB, Elasticsearch, and so on.
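To make the scraping model concrete, here is what talking to an exporter looks like from the shell. The host and the sample output below are illustrative (they match what the node_exporter installed later in this article typically exposes on port 9100); the point is that the data is plain text served over HTTP.
# Scrape an exporter's /metrics endpoint and show the first few lines.
curl -s http://192.168.249.129:9100/metrics | head
# Output is in the Prometheus text exposition format, for example:
# # HELP node_cpu_seconds_total Seconds the CPUs spent in each mode.
# # TYPE node_cpu_seconds_total counter
# node_cpu_seconds_total{cpu="0",mode="idle"} 312763.26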
Exporter
Prometheus can be understood as a database plus a data-scraping tool: it scrapes data from many places and stores it in its time series database. How is the data format kept uniform across all those places? Through exporters. Exporter is the general name for a class of data collection components. An exporter is responsible for collecting data from its target and converting it into the format Prometheus supports, and it exposes an HTTP interface so that Prometheus can scrape the data. Unlike traditional data collection components, an exporter does not push data to a central server; instead it waits for the central server (such as Prometheus) to pull it. https://github.com/prometheus hosts many ready-made exporters that can be downloaded and used directly.
Grafana
Grafana is a visualization tool. It can read data from many kinds of data sources (such as Prometheus) and display it with attractive charts, and there are many open source dashboards available, so you can quickly build a very good-looking monitoring platform. Its relationship to Prometheus is similar to that of Kibana to Elasticsearch.
Environment preparation
Before starting the configuration, please download the following software (downloading directly from GitHub or the Grafana official website can be painfully slow and prone to failure, so a download manager such as Thunder (Xunlei) is recommended).
Prometheus
Grafana
Node_exporter
Installation
Prepare two servers: one to install prometheus and grafana, and one to host the exporter component. Create an application folder and upload the relevant packages to each server.
192.168.249.131 prometheus,grafana
192.168.249.129 exporter
Prometheus
Install and start using the following shell command
tar zxvf prometheus-2.13.1.linux-amd64.tar.gz
mv prometheus-2.13.1.linux-amd64 prometheus
cd prometheus
nohup ./prometheus &
After startup, open http://192.168.249.131:9090 in a browser. The result looks like this:
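If you prefer to verify from the shell instead of a browser, Prometheus also exposes a simple health endpoint on the same port; a quick sanity check (assuming the default port 9090) could look like this:
# Check that the Prometheus server is up and answering requests.
curl -s http://192.168.249.131:9090/-/healthy
# Expected response is along the lines of: Prometheus is Healthy.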
Grafana
Install and start using the following shell command
tar zxvf grafana-6.4.3.linux-amd64.tar.gz
cd grafana-6.4.3
nohup ./bin/grafana-server &
After startup, open http://192.168.249.131:3000 in a browser. The default account and password are admin/admin; you must change the password on first login. The login page looks like this:
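Grafana can likewise be checked from the shell; /api/health is a standard Grafana endpoint and 3000 is the default port:
# Grafana's health endpoint returns a small JSON document when the server is up.
curl -s http://192.168.249.131:3000/api/health
# Example response: {"database": "ok", "version": "6.4.3", ...}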
Node_exporter
Install and start using the following shell command
tar zxvf node_exporter-0.18.1.linux-amd64.tar.gz
mv node_exporter-0.18.1.linux-amd64 node_exporter
cd node_exporter
nohup ./node_exporter &
Node exporter uses port 9100 by default; you can specify a different port with --web.listen-address=":9200". After startup, open http://192.168.249.129:9100/ in a browser. It looks like this:
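To confirm from the shell that node_exporter is serving metrics (default port 9100 assumed), something like this works:
# Confirm that node_exporter answers and exposes system metrics, e.g. the 1-minute load average.
curl -s http://192.168.249.129:9100/metrics | grep '^node_load1'
# To run node_exporter on a different port instead, start it like this:
# nohup ./node_exporter --web.listen-address=":9200" &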
Configure prometheus
Enter the prometheus installation directory and modify the prometheus.yml file, adding a scrape job for the server 192.168.249.129. The complete configuration is as follows:
# my global config
global:
  scrape_interval: 15s     # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
  - static_configs:
    - targets:
      # - alertmanager:9093

rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    static_configs:
    - targets: ['localhost:9090']

  - job_name: '192.168.249.129'
    static_configs:
    - targets: ['192.168.249.129:9100']
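Before restarting, the configuration file can be validated with promtool, which ships in the same Prometheus tarball (run it from the prometheus directory):
# Validate the syntax of prometheus.yml before restarting the server.
./promtool check config prometheus.yml
# On success it prints something like: Checking prometheus.yml  SUCCESS: 0 rule files found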
Restart prometheus after the configuration change and check the target status (for example on the Status > Targets page of the web UI, or from the shell as sketched below).
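A minimal restart-and-verify sequence from the shell, assuming prometheus was started with nohup as above; /api/v1/targets is part of the standard Prometheus HTTP API:
# Stop the running prometheus process and start it again with the new configuration.
pkill prometheus
nohup ./prometheus &
# After a few seconds, ask Prometheus which targets it scrapes and whether they are healthy.
curl -s http://192.168.249.131:9090/api/v1/targets
# The JSON response should list 192.168.249.129:9100 with "health": "up".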
Grafana
Configure prometheus data sources
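The data source is normally added through the Grafana web UI (Add data source > Prometheus, with the URL http://192.168.249.131:9090). As a shell alternative, the same thing can be done through Grafana's HTTP API; the sketch below assumes admin credentials, where your-password is a placeholder for the password you set at first login.
# Register the Prometheus server as the default Grafana data source via the HTTP API.
# Replace your-password with the admin password chosen at first login.
curl -s -X POST http://192.168.249.131:3000/api/datasources \
  -u admin:your-password \
  -H 'Content-Type: application/json' \
  -d '{"name":"Prometheus","type":"prometheus","url":"http://192.168.249.131:9090","access":"proxy","isDefault":true}'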
Go to the Grafana official website to find a suitable dashboard; here we choose a Node Exporter monitoring dashboard.
Import the dashboard in Grafana
View the monitoring results