How to use Monasca, a high performance monitoring tool for OpenStack


In this article, the editor shares how to use Monasca, a high-performance monitoring tool for OpenStack. Most people do not know much about it, so this article is shared for your reference; I hope you learn a lot from reading it.

Introduction

Monasca is a multi-tenant monitoring-as-a-service tool that helps IT teams analyze log data and set alerts and notifications.

The monitoring requirements in the OpenStack environment are huge, diverse and highly complex. Monasca's project mission is to provide a multi-tenant, highly scalable, high-performance and fault-tolerant monitoring as a service solution.

Monasca provides an extensible platform for advanced monitoring that operators and tenants can use to obtain the operational status of their infrastructure and applications.

Monasca uses a REST API for high-speed metrics processing and querying, and integrates a streaming alarm engine, a notification engine and an aggregation engine.

There are a wide variety of use cases that you can implement using Monasca. Monasca follows the microservices architecture, where several services are distributed across multiple repositories. Each module is designed to provide discrete services for the entire monitoring solution and can be deployed according to the needs of operators / customers.

It uses a REST API to store and query performance and historical data. Unlike other monitoring tools that rely on special protocols and transports, such as Nagios' NSCA, Monasca uses only HTTP.

Multi-tenant authentication: metrics submission and authentication use the Keystone component, and metrics are stored with the associated tenant ID.

Metrics are defined using (key, value) pairs called dimensions.

Real-time thresholding and alarming on metrics.

Compound alarms use a simple syntax made up of sub-alarm expressions and logical operators (an example follows this list).

The monitoring agent supports built-in system and service checks, as well as Nagios checks and statsd.

An open source monitoring solution built on open source technologies.
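To make the metric model and alarm syntax concrete, here is a minimal sketch. The JSON below is the general shape of a metric as submitted to the Monasca API (POST /v2.0/metrics), and the expression is a compound alarm; the metric names, dimension values and thresholds are illustrative, not taken from this article.

{
  "name": "cpu.user_perc",
  "dimensions": {
    "hostname": "controller01",
    "service": "monitoring"
  },
  "timestamp": 1485446400000,
  "value": 42.0
}

avg(cpu.user_perc{hostname=controller01}) > 85 or avg(mem.free_mb{hostname=controller01}) < 512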

Architecture

Monasca's metrics pipeline and the interaction of the components involved are outlined below.

Core components

Monasca-agent: the monitoring agent, written in Python, made up of multiple sub-components. It collects system metrics such as CPU utilization and available memory, supports Nagios plug-ins and statsd, and monitors many services such as MySQL, RabbitMQ, etc.

Monasca-api: a RESTful API for monitoring, covering the following concepts and areas:

Metrics: storage and query of a large number of metrics in real time

Statistics: query statistics for metrics

Alarm definition: addition, deletion, query and modification of alarm definition

Alarms: query and delete alarm history

Notification methods: create and delete notification methods, so that when an alarm changes state the user can be notified directly, for example by email. The monasca API is implemented in both Python and Java.

Monasca-persister: consumes metrics and alarms delivered on the message queue (a consumer in the messaging sense) and stores them in the corresponding databases.

Monasca-transform: a transformation aggregation engine that converts the names and values of metrics, generates new metrics and passes them to message queues

Anomaly and Prediction Engine: it is still in the prototype stage.

Monasca-thresh: computes thresholds on metrics and publishes an alarm to the message queue when a threshold is exceeded; based on the Apache Storm project (an open source real-time distributed computing system).

Monasca-notification: accepts alarms from message queues and sends notifications, such as sending alarm messages. Notification Engine is based on Python.

Monasca-analytics: analysis engine that accepts alarms from message queues, detects anomalies and correlates alarms

Message queuing: RabbitMQ was previously supported, but moved to Kafka due to performance, scale, persistence, and high availability limitations

Metrics and Alarms Database: supports Vertica and InfluxDB; support for Cassandra is in progress.

Config Database: configuration information database, currently MySQL; support for PostgreSQL is in progress.

python-monascaclient: command line client, implemented in Python, for working with the monasca API.

Monitoring UI: a Horizon dashboard for visualization.

Ceilometer publisher: a multi-publisher plug-in for Ceilometer
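Everything above ultimately goes through the REST API, so a metrics query can be issued directly over HTTP. A minimal sketch, assuming a valid Keystone token in $TOKEN and the API host and port used later in this article (both are placeholders to adapt):

curl -H "X-Auth-Token: $TOKEN" "http://192.168.1.143:8082/v2.0/metrics?name=cpu.user_perc"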

In addition to sending requests directly to API, you can use the following tools to interact with Monasca:

Monasca client:CLI and Python client

Horizon plugin: this plug-in adds the monitoring panel to Horizon

The Grafana app:Grafana plug-in can view and configure alarm definitions, alerts, and notifications

Libraries:

monasca-common: common code used across the Monasca components

Monasca-statsd:StatsD-compliant library for sending metrics from detected applications

Grafana Integration:

Monasca-grafana-datasource: multi-tenant Monasca data source for Grafana

grafana: a fork of Grafana 4.1.2 with Keystone authentication added

Third-party technologies and tools

Monasca uses a variety of third-party technologies:

Internal processing and middleware

Apache Kafka (http://kafka.apache.org): a distributed, partitioned, multi-replica, multi-subscriber log system coordinated by ZooKeeper (it can also be used as a message queue). It can be used for web/nginx logs, access logs, message services, etc.

Apache Storm (http://storm.incubator.apache.org/): a free, open source distributed real-time computation system. With Storm, unbounded data streams can be processed easily and reliably in real time, much as Hadoop does for batch processing.

ZooKeeper (http://zookeeper.apache.org/): used by Kafka and Storm

Apache Spark: used by Monasca Transform as the aggregation engine

Config database:

MySQL: supports MySQL as a configuration database

PostgreSQL: supported as the config database through Hibernate and SQLAlchemy

Vagrant (http://www.vagrantup.com/): provides easy-to-configure, reproducible, portable work environments built on industry-standard technology and controlled by a consistent workflow, to help maximize productivity and flexibility.

Dropwizard (https://dropwizard.github.io/dropwizard/): brings together stable, mature libraries from the Java ecosystem into a simple, lightweight package so that you can focus on your own tasks. Dropwizard provides out-of-the-box support for complex configuration, application metrics, logging, operational tools, and more, so that you and your team can ship high-quality web services in the shortest possible time.

Time series database:

InfluxDB (http://influxdb.com/): an open source distributed time series database with no external dependencies. InfluxDB is supported as the metrics database.

Vertica (http://www.vertica.com): a highly scalable commercial enterprise-grade SQL analytics database. It provides built-in automatic high availability and excels at in-database analytics and at compressing and storing large amounts of data. A free community edition of Vertica is available at https://my.vertica.com/community/, which can store up to 1 TB of data with no time limit. Although Vertica is no longer used frequently, it is supported as the metrics database.

Cassandra (https://cassandra.apache.org): support for Cassandra as the metrics database is in progress.

Installation

Manual installation

All components of monasca can be installed on a single node, such as an openstack controller node, or deployed across multiple nodes. In this article, monasca-api is installed in a new VM created in my openstack cluster, which has an associated floating IP. Monasca-agent is installed on the controller node. The agent node publishes metrics to the api node through the floating IP; the two nodes are in the same subnet.

Install the packages and tools we need

apt-get install -y git
apt-get install -y openjdk-7-jre-headless python-pip python-dev

Install the MySQL database. If monasca-api is installed on the openstack controller node, you can skip this step and reuse the MySQL already installed for the openstack services.

apt-get install -y mysql-server

Create the monasca database schema; download mon.sql from https://raw.githubusercontent.com/stackforge/cookbook-monasca-schema/master/files/default/mysql/mon.sql

mysql -uroot -ppassword < mon_mysql.sql

Install Zookeeper

Install Zookeeper and restart it. I use the localhost interface and have only one Zookeeper, so the default configuration file needs no changes.

apt-get install -y zookeeper zookeeperd zookeeper-bin
service zookeeper restart

Install and configure kafka

wget http://apache.mirrors.tds.net/kafka/0.8.1.1/kafka_2.9.2-0.8.1.1.tgz
mv kafka_2.9.2-0.8.1.1.tgz /opt
cd /opt
tar zxf kafka_2.9.2-0.8.1.1.tgz
ln -s /opt/kafka_2.9.2-0.8.1.1/ /opt/kafka
ln -s /opt/kafka/config /etc/kafka

Create the kafka system user; the kafka service will run as this user.

useradd kafka -U -r

Create the kafka startup script in /etc/init/kafka.conf: copy the following into /etc/init/kafka.conf and save it.

description "Kafka"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
limit nofile 32768 32768
# If zookeeper is running on this box also give it time to start up properly
pre-start script
  if [ -e /etc/init.d/zookeeper ]; then
    /etc/init.d/zookeeper restart
  fi
end script
# Rather than using setuid/setgid sudo is used because the pre-start task must run as root
exec sudo -Hu kafka -g kafka KAFKA_HEAP_OPTS="-Xmx1G -Xms1G" JMX_PORT=9997 /opt/kafka/bin/kafka-server-start.sh /etc/kafka/server.properties

Configure kafka: vim /etc/kafka/server.properties and make sure the following are set:

host.name=localhost
advertised.host.name=localhost
log.dirs=/var/kafka

Create the kafka log directories

mkdir /var/kafka
mkdir /var/log/kafka
chown -R kafka. /var/kafka/
chown -R kafka. /var/log/kafka/

Start the kafka service

service kafka start

The next step is to create the kafka topics

/opt/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 64 --topic metrics
/opt/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 12 --topic events
/opt/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 12 --topic raw-events
/opt/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 12 --topic transformed-events
/opt/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 12 --topic stream-definitions
/opt/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 12 --topic transform-definitions
/opt/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 12 --topic alarm-state-transitions
/opt/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 12 --topic alarm-notifications
/opt/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 12 --topic stream-notifications
/opt/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 3 --topic retry-notifications

Install and configure influxdb

curl -sL https://repos.influxdata.com/influxdb.key | apt-key add -
echo "deb https://repos.influxdata.com/ubuntu trusty stable" > /etc/apt/sources.list.d/influxdb.list
apt-get update
apt-get install -y apt-transport-https
apt-get install -y influxdb
service influxdb start
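As an optional sanity check, the topics created above can be listed with the same kafka-topics.sh tool:

/opt/kafka/bin/kafka-topics.sh --list --zookeeper localhost:2181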

Create the influxdb database, user, password and retention policy (change the password to your own value).

influx
CREATE DATABASE mon
CREATE USER monasca WITH PASSWORD 'tyun'
CREATE RETENTION POLICY persister_all ON mon DURATION 90d REPLICATION 1 DEFAULT
exit
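Optionally, verify the database and retention policy from the influx shell (the exact SHOW syntax may vary slightly between InfluxDB versions):

influx
SHOW DATABASES
SHOW RETENTION POLICIES ON mon
exit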

Install and configure storm

wget http://apache.mirrors.tds.net/storm/apache-storm-0.9.6/apache-storm-0.9.6.tar.gz
mkdir /opt/storm
cp apache-storm-0.9.6.tar.gz /opt/storm/
cd /opt/storm/
tar xzf apache-storm-0.9.6.tar.gz
ln -s /opt/storm/apache-storm-0.9.6 /opt/storm/current
useradd storm -U -r
mkdir /var/storm
mkdir /var/log/storm
chown -R storm. /var/storm/
chown -R storm. /var/log/storm/

Modify storm.yaml: vim /opt/storm/current/conf/storm.yaml

# base
java.library.path: "/usr/local/lib:/opt/local/lib:/usr/lib"
storm.local.dir: "/var/storm"
# zookeeper.*
storm.zookeeper.servers:
  - "localhost"
storm.zookeeper.port: 2181
storm.zookeeper.retry.interval: 5000
storm.zookeeper.retry.times: 29
storm.zookeeper.root: "/storm"
storm.zookeeper.session.timeout: 30000
# supervisor.* configs are for node supervisors
supervisor.slots.ports:
  - 6701
  - 6702
  - 6703
  - 6704
supervisor.childopts: "-Xmx1024m"
# worker.* configs are for task workers
worker.childopts: "-Xmx1280m -XX:+UseConcMarkSweepGC -Dcom.sun.management.jmxremote"
# nimbus.* configs are for the master
nimbus.host: "localhost"
nimbus.thrift.port: 6627
nimbus.childopts: "-Xmx1024m"
# ui.* configs are for the master
ui.host: 127.0.0.1
ui.port: 8078
ui.childopts: "-Xmx768m"
# drpc.* configs
# transactional.* configs
transactional.zookeeper.servers:
  - "localhost"
transactional.zookeeper.port: 2181
transactional.zookeeper.root: "/storm-transactional"
# topology.* configs are for specific executing storms
topology.acker.executors: 1
topology.debug: false
logviewer.port: 8077
logviewer.childopts: "-Xmx128m"

Create a storm supervisor startup script: vim /etc/init/storm-supervisor.conf

# Startup script for Storm Supervisor
description "Storm Supervisor daemon"
start on runlevel [2345]
console log
respawn
kill timeout 240
respawn limit 25 5
setgid storm
setuid storm
chdir /opt/storm/current
exec /opt/storm/current/bin/storm supervisor

Create a Storm nimbus startup script: vim /etc/init/storm-nimbus.conf

# Startup script for Storm Nimbus
description "Storm Nimbus daemon"
start on runlevel [2345]
console log
respawn
kill timeout 240
respawn limit 25 5
setgid storm
setuid storm
chdir /opt/storm/current
exec /opt/storm/current/bin/storm nimbus

Start supervisor and nimbus

service storm-supervisor start
service storm-nimbus start
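Storm should now be running a nimbus and a supervisor. A quick check with Storm's own CLI (the topology list will stay empty until monasca-thresh is installed later):

/opt/storm/current/bin/storm list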

Install the monasca api python package

Some monasca components provide both Python and Java implementations; here I chose the Python code for deployment.

pip install monasca-common
pip install gunicorn
pip install greenlet  # Required for both
pip install eventlet  # For eventlet workers
pip install gevent    # For gevent workers
pip install monasca-api
pip install influxdb

vim /etc/monasca/api-config.ini, and modify the host to your IP address:

[DEFAULT]
name = monasca_api

[pipeline:main]
# Add validator in the pipeline so the metrics messages can be validated.
pipeline = auth keystonecontext api

[app:api]
paste.app_factory = monasca_api.api.server:launch

[filter:auth]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory

[filter:keystonecontext]
paste.filter_factory = monasca_api.middleware.keystone_context_filter:filter_factory

[server:main]
use = egg:gunicorn#main
host = 192.168.2.23
port = 8082
workers = 1
proc_name = monasca_api

vim /etc/monasca/api-config.conf, and modify the following:

[DEFAULT]
# logging, make sure that the user under whom the server runs has permission
# to write to the directory.
log_file = monasca-api.log
log_dir = /var/log/monasca/api/
debug = False
region = RegionOne

[security]
# The roles that are allowed full access to the API.
default_authorized_roles = admin, user, domainuser, domainadmin, monasca-user
# The roles that are allowed to only POST metrics to the API. This role would be used by the Monasca Agent.
agent_authorized_roles = admin
# The roles that are allowed to only GET metrics from the API.
read_only_authorized_roles = admin
# The roles that are allowed to access the API on behalf of another tenant.
# For example, a service can POST metrics to another tenant if they are a member of the "delegate" role.
delegate_authorized_roles = admin

[kafka]
# The endpoint to the kafka server
uri = localhost:9092

[influxdb]
# Only needed if the InfluxDB database is used for the backend.
# The IP address of the InfluxDB service.
ip_address = localhost
# The port number that the InfluxDB service is listening on.
port = 8086
# The username to authenticate with.
user = monasca
# The password to authenticate with.
password = tyun
# The name of the InfluxDB database to use.
database_name = mon

[database]
url = "mysql+pymysql://monasca:tyun@127.0.0.1/mon"

[keystone_authtoken]
identity_uri = http://192.168.1.11:35357
auth_uri = http://192.168.1.11:5000
admin_password = tyun
admin_user = monasca
admin_tenant_name = service
cafile =
certfile =
keyfile =
insecure = false

Comment out the [mysql] section and leave the rest by default.

Create the monasca system user and log directories

useradd monasca -U -r
mkdir /var/log/monasca
mkdir /var/log/monasca/api
chown -R monasca. /var/log/monasca/

On the openstack controller node, create the monasca user with a password, assign the admin role to the monasca user in the service project, and register the service and endpoints.

openstack user create --domain default --password tyun monasca
openstack role add --project service --user monasca admin
openstack service create --name monasca --description "Monasca monitoring service" monitoring
openstack endpoint create --region RegionOne monasca public http://192.168.1.143:8082/v2.0
openstack endpoint create --region RegionOne monasca internal http://192.168.1.143:8082/v2.0
openstack endpoint create --region RegionOne monasca admin http://192.168.1.143:8082/v2.0

192.168.1.143 is the floating IP of my api virtual machine; change it to your own IP.
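To confirm the Keystone registration, you can list the endpoints for the monitoring service type created above (standard openstack CLI):

openstack endpoint list --service monitoring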

Create a monasca api startup script: vim /etc/init/monasca-api.conf

# Startup script for the Monasca API
description "Monasca API Python app"
start on runlevel [2345]
console log
respawn
setgid monasca
setuid monasca
exec /usr/local/bin/gunicorn -n monasca-api -k eventlet --worker-connections=2000 --backlog=1000 --paste /etc/monasca/api-config.ini
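The upstart job above still has to be started. A minimal smoke test, assuming a valid Keystone token in $TOKEN and the floating IP used in this article:

service monasca-api start
curl -H "X-Auth-Token: $TOKEN" http://192.168.1.143:8082/v2.0/metrics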

Install monasca-persister

Create a monasca-persister startup script

vim /etc/init/monasca-persister.conf

# Startup script for the Monasca Persister
description "Monasca Persister"
start on runlevel [2345]
console log
respawn
setgid monasca
setuid monasca
exec /usr/bin/java -Dfile.encoding=UTF-8 -cp /opt/monasca/monasca-persister.jar monasca.persister.PersisterApplication server /etc/monasca/persister-config.yml

Start monasca-persister

service monasca-persister start

Install monasca-notification

pip install --upgrade monasca-notification
apt-get install sendmail

Copy notification.yaml to /etc/monasca/, then create the startup script: vim /etc/init/monasca-notification.conf

# Startup script for the monasca_notification
description "Monasca Notification daemon"
start on runlevel [2345]
console log
respawn
setgid monasca
setuid monasca
exec /usr/bin/python /usr/local/bin/monasca-notification

Start the notification service

service monasca-notification start
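With the notification engine running, a notification method can be registered so alarms have somewhere to go. An illustrative example using the monasca CLI (the method name and email address are examples; python-monascaclient and loaded OpenStack credentials are assumed):

monasca notification-create ops-email EMAIL ops@example.com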

Install monasca-thresh: copy monasca-thresh to /etc/init.d/, copy monasca-thresh.jar to /opt/monasca-thresh/, copy thresh-config.yml to /etc/monasca/ and modify its host and database information, then start monasca-thresh:

service monasca-thresh start

Install monasca-agent

Install monasca-agent on the openstack controller node so that it can monitor the openstack service process.

sudo pip install --upgrade monasca-agent

Set up monasca-agent with monasca-setup; set the user domain ID and project domain ID to those of your default domain.

monasca-setup -u monasca -p tyun --user_domain_id e25e0413a70c41449d2ccc2578deb1e4 --project_domain_id e25e0413a70c41449d2ccc2578deb1e4 \
  --project_name service -s monitoring --keystone_url http://192.168.1.11:35357/v3 --monasca_url http://192.168.1.143:8082/v2.0 \
  --config_dir /etc/monasca/agent --log_dir /var/log/monasca/agent --overwrite

Source the authentication script admin-rc.sh, and then run monasca metric-list to verify that metrics are arriving.
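If monasca metric-list returns without errors, the pipeline can be exercised end to end. An illustrative sketch using the monasca CLI (the metric name, value and alarm threshold are examples only):

monasca metric-create --dimensions hostname=controller01 test.metric 1.0
monasca alarm-definition-create "high cpu" "avg(cpu.user_perc{hostname=controller01}) > 85"
monasca alarm-definition-list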

DevStack installation

At least one host with 10GB RAM is required to run Monasca DevStack.

Instructions for installing and running Devstack can be found here:

https://docs.openstack.org/devstack/latest/

To run Monasca in DevStack, perform the following three steps.

Clone the DevStack code base.

git clone https://git.openstack.org/openstack-dev/devstack

Add the following to the DevStack local.conf file in the root of the devstack directory. If local.conf does not exist, you may need to create it.

# BEGIN DEVSTACK LOCAL.CONF CONTENTS
[[local|localrc]]
DATABASE_PASSWORD=secretdatabase
RABBIT_PASSWORD=secretrabbit
ADMIN_PASSWORD=secretadmin
SERVICE_PASSWORD=secretservice
SERVICE_TOKEN=111222333444
LOGFILE=$DEST/logs/stack.sh.log
LOGDIR=$DEST/logs
LOG_COLOR=False
# The following two variables allow switching between Java and Python for the implementations
# of the Monasca API and the Monasca Persister. If these variables are not set, then the
# default is to install the Python implementations of both the Monasca API and the Monasca Persister.
# Uncomment one of the following two lines to choose Java or Python for the Monasca API.
MONASCA_API_IMPLEMENTATION_LANG=${MONASCA_API_IMPLEMENTATION_LANG:-java}
# MONASCA_API_IMPLEMENTATION_LANG=${MONASCA_API_IMPLEMENTATION_LANG:-python}
# Uncomment one of the following two lines to choose Java or Python for the Monasca Persister.
MONASCA_PERSISTER_IMPLEMENTATION_LANG=${MONASCA_PERSISTER_IMPLEMENTATION_LANG:-java}
# MONASCA_PERSISTER_IMPLEMENTATION_LANG=${MONASCA_PERSISTER_IMPLEMENTATION_LANG:-python}
# Uncomment one of the following two lines to choose either InfluxDB or Vertica.
# By default "influxdb" is selected as the metrics DB.
MONASCA_METRICS_DB=${MONASCA_METRICS_DB:-influxdb}
# MONASCA_METRICS_DB=${MONASCA_METRICS_DB:-vertica}
# This line will enable all of Monasca.
enable_plugin monasca-api https://git.openstack.org/openstack/monasca-api
# END DEVSTACK LOCAL.CONF CONTENTS

Run ". / stack.sh" from the root of the devstack directory.

If you want to run Monasca with the fewest OpenStack components, you can add the following two lines to the local.conf file.

disable_all_services
enable_service rabbit mysql key

If you also want to install Tempest tests, add tempest

enable_service rabbit mysql key tempest

To enable Horizon and Monasca UI, add horizon

enable_service rabbit mysql key horizon tempest

Use Vagrant

Vagrant can be used to deploy a VM running DevStack and Monasca using the provided Vagrantfile. After installing Vagrant, simply run vagrant up in the ../monasca-api/devstack directory.

To use a local code base in the devstack installation, commit your changes to the master branch of the local repository, and then point the corresponding variable in the configuration file at that repository using file://my/local/repo/location. For example, to use a local instance of the monasca-api repo, change enable_plugin monasca-api https://git.openstack.org/openstack/monasca-api to enable_plugin monasca-api file://my/repo/is/here. These settings take effect only when the devstack VM is rebuilt.

1. Use Vagrant to enable Vertica as Metrics DB

Monasca supports using both InfluxDB and Vertica to store metrics and alarm status history. By default, InfluxDB is enabled in the DevStack environment.

Vertica is a commercial database from Hewlett Packard Enterprise, and a free Community Edition (CE) installer can be downloaded. To enable Vertica:

Register and download the Vertica Debian installer from https://my.vertica.com/download/vertica/community-edition/ and place it in your home directory. Unfortunately, the DevStack installer has no URL it can fetch automatically, so you must download the installer separately and place it where it can be found when the installer runs; the setup assumes this location is your home directory. When using Vagrant, your home directory is usually mounted in the VM as "/vagrant_home".

Modify the MONASCA_METRICS_DB variable in local.conf to configure Vertica support, as follows:

MONASCA_METRICS_DB=${MONASCA_METRICS_DB:-vertica}

2. Use PostgreSQL or MySQL

Monasca supports both PostgreSQL and MySQL, and so does the devstack plug-in; enable either postgresql or mysql.

To set up the environment using MySQL, use:

enable_service mysql

Alternatively, for PostgreSQL, use:

enable_service postgresql

3. Use ORM support

ORM support can be controlled through the MONASCA_DATABASE_USE_ORM variable. However, if PostgreSQL is enabled as the database backend, ORM support is enforced:

enable_service postgresql

4. Override the Apache mirror

If the default APACHE_MIRROR cannot be used for some reason, a specific mirror can be forced as follows:

APACHE_MIRROR=http://www-us.apache.org/dist/

5. Use WSGI

Monasca-api can be deployed behind Apache using either uwsgi or gunicorn. By default, monasca-api runs under uwsgi. If you want to use gunicorn, make sure that devstack/local.conf contains:

MONASCA_API_USE_MOD_WSGI=False

Usage

Monasca Dashboard

After installing the Monasca dashboard plugin, you can view and manage the corresponding monitoring and alarms through the web console.

In the "Monitoring" column of the Operations console, click "Launch Monitoring Dashboard", which opens a dedicated OpenStack Horizon portal running on the management node.

In this panel, you can:

Click the OpenStack service name to view service alerts.

Click the server name to view alerts for related devices.

Monitoring information is stored in two databases (Vertica/InfluxDB and MySQL). When you back up monitoring data, both databases are backed up at the same time.

The monitoring metrics are stored in Vertica for 7 days.

Configuration settings are stored in MySQL.

If the services on the monitoring node are stopped while under high load, such as 15 controller nodes and 200 compute nodes, the message queue will begin to purge in approximately 6 hours.

View monitoring information

In the Operations console, open the monitoring UI by selecting Monitoring Dashboard from the main menu.

Click Launch Monitoring Dashboard.

The "Monitoring" dashboard in the OpenStack Horizon on the management device opens.

Log in with the username and password you set for the Operations console during the first installation.

Check the alarm. You can filter the results on the screen.

Click Alarms in the left navigation to see all the services and devices that have raised alarms.

On the actions menu on the right side of each line, you can click Graph metrics to view the alarm details, and you can display the history and definition of the alarm. You can also see the metric name at the top of the graph for the alarm.

Click the OpenStack service name to view the service alerts.

Click the server name to view alerts about the device.

Click Alarm Definitions in the left navigation to view and edit the types of alerts that have been enabled.

Note: do not change or delete any default alarm definitions. However, you can add a new alert definition.

You can change the name, expression, and other details of the alarm.

If you receive too many or too few alerts, you may need to raise or lower the alarm threshold.

See the Monasca documentation for information about writing alarm expressions.
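For illustration, a typical alarm expression has the form function(metric{dimensions}, period) operator threshold, optionally followed by "times N"; the metric, dimension and threshold below are examples only:

avg(cpu.user_perc{hostname=controller01}, 120) > 90 times 3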

Optional: click Dashboard.

The OpenStack dashboard (Grafana) opens. From this dashboard, you can see the health of the OpenStack service and a graphical representation of the CPU and database usage of each node.

Click the title of a graph (for example, CPU), and then click Edit.

Change the function to view other types of information in the graph.

Optional: click Monasca Health.

The Monasca Services Dashboard opens. On this dashboard, you can see a graphical representation of the health of the Monasca service.

That is all of the article "How to use Monasca, a high-performance monitoring tool for OpenStack". Thank you for reading! I hope the content shared here has been helpful.
