Ceilometer data processing and pipeline configuration


This article explains Ceilometer's data processing and pipeline configuration. The content is straightforward and easy to follow; read along with the editor's walkthrough to study how Ceilometer processes data and how to configure its pipelines.

Ceilometer's mechanism for processing data is called a pipeline. At the configuration level, a pipeline describes the coupling between sources of data and the corresponding sinks that transform and publish that data. This functionality is handled by the notification agent.

A source is a producer of data: samples or events. In effect, it is a set of notification handlers that emit data points for a matched set of meters and event types.

Each source configuration encapsulates name matching and mapping to one or more sinks for publication.

A sink, on the other hand, is a consumer of data, providing the logic for transforming and publishing the data received from related sources.

In effect, a sink describes a chain of handlers. The chain starts with zero or more transformers and ends with one or more publishers. The first transformer in the chain receives data from the corresponding source, takes some action (such as deriving a rate of change, performing unit conversion, or aggregating), and then passes the result on toward publication.

Pipeline configuration

The notification agent supports two kinds of pipelines: one handles samples, the other handles events. Pipelines can be enabled and disabled via the pipelines option in the [notification] section.

By default, the actual configuration of each pipeline is stored in separate configuration files: pipeline.yaml and event_pipeline.yaml. The locations of the configuration files can be set with the pipeline_cfg_file and event_pipeline_cfg_file options; the configuration file templates can be viewed under Ceilometer Configuration Options.
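As an illustration only, a minimal sketch of how these options might appear in ceilometer.conf. The paths shown are the conventional locations and the option group can vary between releases, so treat this as an assumption to check against Ceilometer Configuration Options:

[DEFAULT]
# Illustrative locations of the pipeline definition files (assumed defaults).
pipeline_cfg_file = /etc/ceilometer/pipeline.yaml
event_pipeline_cfg_file = /etc/ceilometer/event_pipeline.yaml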

A meter pipeline is defined as follows:

---
sources:
  - name: 'source name'
    meters:
      - 'meter filter'
    sinks:
      - 'sink name'
sinks:
  - name: 'sink name'
    transformers: 'definition of transformers'
    publishers:
      - 'list of publishers'

There are several ways to define the meters list of a pipeline source. The list of valid meters can be found in Measurements. You can either define all the meters, or include or filter out only part of them. A source configuration should handle meters in one of the following ways:

To include all meters, use the * wildcard. It is wise, however, to select only the meters you intend to use, to avoid flooding the metering database with unused data.

To define a meters list, use any of the following:

To include part of the meters, use the meter_name syntax.

To filter out part of the meters, use the !meter_name syntax.

Note: the OpenStack Telemetry service performs no duplicate checks across pipelines; if you add a meter to multiple pipelines, the duplication is assumed to be intentional and the data may be stored multiple times, depending on the specified sinks.

The above definitions can be used in the following combinations (a sketch follows the notes below):

Use the wildcard symbol only.

Use included meters.

Use excluded meters.

Use the wildcard in conjunction with excluded meters.

Note: at least one of the above variations must appear in the meters section. Included and excluded meters cannot coexist in the same pipeline, and the wildcard and included meters cannot coexist in the same pipeline definition section.
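As a sketch of those combinations, the source definitions below use illustrative meter names (assumptions for demonstration; check Measurements for valid names):

sources:
  - name: all_meters_source          # wildcard only
    meters:
      - "*"
    sinks:
      - meter_sink
  - name: included_meters_source     # included meters only
    meters:
      - "cpu"
      - "memory.usage"
    sinks:
      - meter_sink
  - name: excluded_meters_source     # excluded meters only
    meters:
      - "!disk.read.bytes"
    sinks:
      - meter_sink
  - name: wildcard_excluded_source   # wildcard combined with excluded
    meters:
      - "*"
      - "!disk.write.requests"
    sinks:
      - meter_sink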

The transformers section of a pipeline sink provides the possibility to add a list of transformer definitions. The existing transformers are:

Name of transformer      Reference name for configuration
Accumulator              accumulator
Aggregator               aggregator
Arithmetic               arithmetic
Rate of change           rate_of_change
Unit conversion          unit_conversion
Delta                    delta

The publishers section contains the list of publishers to which the sample data should be sent after the possible transformations.

Similarly, the event pipeline definition looks like this:

---
sources:
  - name: 'source name'
    events:
      - 'event filter'
    sinks:
      - 'sink name'
sinks:
  - name: 'sink name'
    publishers:
      - 'list of publishers'

Event filters use the same filtering logic as meter pipelines.
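For example, a hedged sketch of an event pipeline that keeps all events except one filtered-out type (the event type name and sink name are illustrative assumptions):

---
sources:
  - name: event_source
    events:
      - "*"
      - "!compute.instance.exists"
    sinks:
      - event_sink
sinks:
  - name: event_sink
    publishers:
      - panko://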

Transformers

Note: Transformers maintain data in memory, so durability cannot be guaranteed in certain scenarios. A more durable and efficient solution can be achieved with a service such as Gnocchi.

The definition of a transformer can contain the following fields:

name: the name of the transformer

parameters: the parameters of the transformer

The parameters section can contain transformer-specific fields. In the rate of change transformer's case, for example, there are source and target fields that contain different subfields depending on the transformer's implementation.

Here are the supported transformers:

Rate of change transformer

This transformer calculates the change between two data points over time. The transformer example below defines the cpu_util meter:

Transformers:-name: "rate_of_change" parameters: target: name: "cpu_util" unit: "%" type: "gauge" scale: "100.0 / (10 minutes 9 * (resource_metadata.cpu_number or 1))"

The rate of change transformer generates the cpu_util meter from the sample values of the cpu counter, which represents cumulative CPU time in nanoseconds. The transformer definition above defines a scale factor (for nanoseconds and multiple CPUs), which is applied before the transformation derives a sequence of gauge samples with unit %, from the sequential values of the cpu meter.

The definition of the disk I/O rate, also generated by the rate of change transformer:

Transformers:-name: "rate_of_change" parameters: source: map_from: name: "disk\\. (read | write)\\. (bytes | requests)" unit: "(B | request)" target: map_to: name: "disk.\ 1.\ \ 2.rate "unit:"\\ 1Universe "type:" gauge "Unit conversion transformer

This transformer applies unit conversion. It takes the volume of the meter and multiplies it by the given scale expression. Like the rate of change transformer, it also supports the map_from and map_to fields.

Sample configuration:

Transformers:-name: "unit_conversion" parameters: target: name: "disk.kilobytes" unit: "KB" scale: "volume * 1.0 / 1024.0"

Use map_from and map_to:

Transformers:-name: "unit_conversion" parameters: source: map_from: name: "disk\\. (read | write)\\ .bytes" target: map_to: name: "disk.\\ 1.kilobytes" scale: "volume * 1.0 / 1024.0 "unit:" KB "Aggregator transformer

This transformer aggregates the incoming samples until either enough samples have arrived or a timeout is reached.

A timeout can be specified with the retention_time option. If you want the aggregation to flush after a given number of samples have been aggregated, specify the size parameter.

The volume of the created sample is the sum of the volumes of the samples that came into the transformer. Samples can be aggregated by the project_id, user_id, and resource_metadata attributes. To aggregate by the chosen attributes, specify them in the configuration and set which value of the attribute to take for the new sample (first to take the first sample's attribute, last to take the last sample's attribute, and drop to discard the attribute).

To aggregate the sample volume by resource_metadata over 60 seconds, keeping the resource_metadata of the latest received sample:

Transformers:-name: "aggregator" parameters: retention_time: 60 resource_metadata: last

To aggregate the samples in batches of 15 by user_id and resource_metadata, keeping the user_id of the first received sample and dropping the resource_metadata:

Transformers:-name: "aggregator" parameters: size: 15 user_id: first resource_metadata: dropAccumulator transformer

This transformer simply caches the samples until enough of them have arrived, and then flushes them all down the pipeline at once:

Transformers:-name: "accumulator" parameters: size: 15Multi meter arithmetic transformer

This transformer enables us to perform arithmetic calculations over one or more meters and/or their metadata, for example:

memory_util = 100 * memory.usage / memory

A new sample is created with the properties described in the target section of the transformer's configuration. The sample's volume is the result of the provided expression. The calculation is performed on samples from the same resource.

Note: the scope of the calculation is limited to meters with the same interval.

Example configuration file:

Transformers:-name: "arithmetic" parameters: target: name: "memory_util" unit: "%" type: "gauge" expr: "100 * $(memory.usage) / $(memory)"

To demonstrate the use of metadata, the following implementation of a novel meter shows the average CPU time per core:

Transformers:-name: "arithmetic" parameters: target: name: "avg_cpu_per_core" unit: "ns" type: "cumulative" expr: "$(cpu) / $(cpu) .resource_metadata.cpu_number or 1)"

Note: expression evaluation gracefully handles NaNs and exceptions. In such cases it does not create a new sample but only logs a warning.

Delta transformer

This transformer calculates the change between two sample data points of a resource. It can be configured to capture only positive deltas (growth).

Example configuration:

Transformers:-name: "delta" parameters: target: name: "cpu.delta" growth_only: TruePublishers

The Telemetry service provides several transport methods to transfer the collected data to an external system. The consumers of this data differ greatly: for a monitoring system, data loss is acceptable, but a billing system requires reliable data delivery. Telemetry provides methods that meet the requirements of both kinds of systems.

The publisher component makes it possible to save the data into persistent storage through the message bus, or to send it to one or more external consumers. One chain can contain multiple publishers.

To solve this problem, multiple publishers can be configured for each pipeline in Telemetry, allowing the same meter or event to be published to multiple destinations, each potentially using a different transport.

The following publisher types are supported:

Gnocchi (default)

When the gnocchi publisher is enabled, measurement and resource information is pushed to gnocchi for time-series-optimized storage. Gnocchi must be registered in the Identity service, as Ceilometer discovers the exact path through the Identity service.

More details on how to enable and configure gnocchi can be found on its official documentation page.

Panko

Event data in the cloud can be stored in panko, which provides an HTTP REST interface for querying system events in OpenStack. To push data to panko, set the publisher to panko://.

Notifier

The notifier publisher can be specified in the form notifier://?option1=value1&option2=value2. It emits data over AMQP using oslo.messaging; any consumer can then subscribe to the published topic for additional processing.

The following customization options are available:

per_meter_topic

The value of this parameter is 1. It is used to publish samples on additional metering_topic.sample_name topic queues, besides the default metering_topic queue.
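For instance, a minimal sketch of enabling it in a pipeline's publisher URL (an illustrative, optional setting):

publishers:
  - notifier://?per_meter_topic=1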

policy

Used to configure the behavior for the case when the publisher fails to send the samples. The possible predefined values are:

default: wait and block until the samples have been sent.

drop: discard the samples that failed to be sent.

queue: create an in-memory queue and retry sending the samples on the next sample publication (the queue length can be configured with the max_queue_length attribute; the default is 1024).

topic

The topic name of the queue to publish to. Setting this option overrides the default topics set by metering_topic and event_topic. This option can be used to support multiple consumers.

udp

This publisher can be specified in the form udp://<host>:<port>/. It emits metering data over UDP.

file

This publisher can be specified in the form file://path?option1=value1&option2=value2. It records metering data into a file.

Note: if a file name and location are not specified, the file publisher does not log any meters; instead, it logs a warning message in the configured log file for Telemetry.

The following options are available for the file publisher:

max_bytes: when this option is greater than zero, it causes a rollover. When the file is about to exceed the specified size, it is closed and a new file is silently opened for output. If the value is zero, rollover never occurs.

backup_count: if this value is non-zero, an extension is appended to the filename of the old log, such as .1, .2, and so on, until the specified value is reached. The file being written to, which contains the newest data, is always the one without any appended extension.
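Putting the two options together, a sketch of a file publisher with rollover as it might appear in a pipeline's publishers list (the path is an assumption for illustration):

publishers:
  - file:///var/log/ceilometer/samples.log?max_bytes=10000000&backup_count=5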

http

The Telemetry service supports sending samples to an external HTTP target. The samples are published without any modification. To set this option as the notification agent's target, set http:// as a publisher endpoint in the pipeline definition file. The HTTP target should be set along with the publisher declaration; additional configuration options can be passed, for example, as http://localhost:80/?option1=value1&option2=value2.

The following options are available:

timeout: the number of seconds before the HTTP request times out.

max_retries: the number of times to retry the request before failing.

batch: if false, the publisher sends each sample and event separately, regardless of whether the notification agent is configured to batch.

verify_ssl: if false, SSL certificate verification is disabled.
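Combining these options, a hedged sketch of an HTTP publisher entry (the host and values are illustrative assumptions):

publishers:
  - http://localhost:80/?timeout=10&max_retries=2&batch=False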

The default publisher is gnocchi, without any additional options. A sample publishers section in the /etc/ceilometer/pipeline.yaml configuration file looks like the following:

publishers:
  - gnocchi://
  - panko://
  - udp://10.0.0.2:1234
  - notifier://?policy=drop&max_queue_length=512&topic=custom_target

Pipeline partitioning

Note: partitioning is required only if a pipeline contains transformations; it has the secondary benefit of supporting batching in certain publishers. Under large workloads, multiple notification agents can be deployed to handle the flood of incoming messages from the monitored services. If transformations are enabled in a pipeline, the notification agents must be coordinated to ensure that related messages are routed to the same agent. To enable coordination, set the workload_partitioning value in the [notification] section.

To distribute messages across agents, the pipeline_processing_queues option should be set. This value defines how many pipeline queues to create, which are then distributed to the active notification agents. It is recommended that the number of processing queues at the very least match the number of agents.

Increasing the number of processing queues improves the distribution of messages among the agents. It also helps minimize requests to the Gnocchi storage backend. However, it increases the load on the message queue, since queues are used to shard the data.

Warning: decreasing the number of processing queues may result in data loss, because previously created queues may no longer be assigned to active agents. Only increasing the number of processing queues is recommended.
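A minimal sketch of the corresponding ceilometer.conf settings (the values and the coordination backend URL are illustrative assumptions):

[notification]
# Enable coordinated processing across notification agents.
workload_partitioning = True
# Number of pipeline queues to shard data across; only increase this value.
pipeline_processing_queues = 30

[coordination]
# A coordination backend (via tooz) is typically required for agents to
# coordinate; the URL below is an example.
backend_url = redis://controller:6379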

Thank you for reading. That concludes "Ceilometer data processing and pipeline configuration". After studying this article, you should have a deeper understanding of how Ceilometer processes data and how to configure its pipelines; the specifics still need to be verified in practice.
