
The method of optimizing elasticsearch Storage in Zabbix

2025-01-20 Update From: SLTechnology News&Howtos

Shulou(Shulou.com)06/03 Report--

Scenario analysis

The company's Zabbix history is stored in Elasticsearch, and there is a requirement to keep monitoring history as long as possible, ideally one year. Currently there are three ES nodes, and about 5 GB of history is generated per day. Data can be kept for at most one month; indices older than 30 days are deleted on a schedule. Each node has 8 GB of memory and uses mechanical hard drives; indices use 5 primary shards and 1 replica. Queries usually fetch only the past week of history, and occasionally one to two months.
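As a quick sanity check on these numbers (a sketch using the 5 GB/day and single-replica figures from the scenario; the helper name is illustrative):

```python
# Rough capacity estimate for Zabbix history in ES.
# Figures from the scenario above: 5 GB of new history per day, 1 replica.
DAILY_GB = 5
REPLICAS = 1

def required_gb(days, replicas=REPLICAS):
    """Total storage needed: primary data plus replica copies for the retention window."""
    return DAILY_GB * days * (1 + replicas)

print(required_gb(30))   # current 30-day retention -> 300 GB
print(required_gb(365))  # one-year target -> 3650 GB
```

This is why one year of history cannot fit in the original layout and why compression, shard reduction, and tiered storage are needed.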

Node planning

To let ES store a longer history, and to allow for data growth as more monitoring items are added later, I increased the cluster to 4 nodes, raised the memory on some nodes, and moved some nodes to SSD storage.

192.168.179.133  200G SSD  4G memory   tag:hot   node.name=es1
192.168.179.134  200G SSD  4G memory   tag:hot   node.name=es2
192.168.179.135  1T HDD    32G memory  tag:cold  node.name=es3  node.master=false
192.168.179.136  1T HDD    32G memory  tag:cold  node.name=es4  node.master=false

Optimization

The data mappings are remodeled so that str-type values are not analyzed (tokenized), and hot and cold nodes are used to tier the data. Indices for the most recent seven days use two primary shards and one replica and live on the hot nodes. Data older than seven days is moved to the cold nodes, and indices older than 30 days are set to 2 primary shards and 0 replicas; ES provides the shrink API for this compaction. Because ES is a Lucene-based search engine, a Lucene index consists of multiple segments, and each segment consumes file handles, memory, and CPU cycles; too many segments increase resource consumption and slow searches. I therefore force-merge the previous day's indices down to one segment per shard, and raise the refresh interval to 60s to reduce how often new segments are created. Indices older than 3 months are closed. All of the above operations are scheduled with curator, an index management tool for ES.
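The tiering rules above can be summarized as a function of index age (a minimal sketch of the policy described, with hypothetical tier labels; not an ES API):

```python
def lifecycle(age_days):
    """Map an index's age in days to (tier, primary shards, replicas)
    per the scheme described above."""
    if age_days > 365:
        return ("deleted", None, None)   # removed entirely after one year
    if age_days > 90:
        return ("closed", 2, 0)          # closed after 3 months
    if age_days > 30:
        return ("cold", 2, 0)            # shrunk: 2 primaries, 0 replicas
    if age_days > 7:
        return ("cold", 2, 1)            # moved off hot nodes, replicas kept
    return ("hot", 2, 1)                 # 2 primaries, 1 replica on SSD nodes

print(lifecycle(3))
print(lifecycle(45))
```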

Connecting Zabbix to ES

1. Modify /etc/zabbix/zabbix_server.conf and add the following

The ES address can point at any node in the cluster.

HistoryStorageURL=192.168.179.133:9200
HistoryStorageTypes=str,text,log,uint,dbl
HistoryStorageDateIndex=1

2. Modify /etc/zabbix/web/zabbix.conf.php and add the following content:

global $DB, $HISTORY;
$HISTORY['url'] = 'http://192.168.179.133:9200';
// Value types stored in Elasticsearch.
$HISTORY['types'] = ['str', 'text', 'log', 'uint', 'dbl'];

3. Modify the ES configuration file to add labels for hot and cold nodes

vim elasticsearch.yml

Hot node configuration

node.attr.box_type: hot

Cold node configuration

node.attr.box_type: cold

4. Create templates and pipelines on ES

A template needs to be created for each data type; the mapping information can be obtained from the elasticsearch.map file. Each template defines the matching index pattern, the primary and replica shard counts, the refresh interval, the allocation of new indices to hot nodes, and the field mappings. Here I only take the uint and str indices as examples.

PUT _template/uint_template
{
  "template": "uint*",
  "index_patterns": ["uint*"],
  "settings": {
    "index": {
      "routing.allocation.require.box_type": "hot",
      "refresh_interval": "60s",
      "number_of_replicas": 1,
      "number_of_shards": 2
    }
  },
  "mappings": {
    "values": {
      "properties": {
        "itemid": {"type": "long"},
        "clock": {"format": "epoch_second", "type": "date"},
        "value": {"type": "long"}
      }
    }
  }
}

PUT _template/str_template
{
  "template": "str*",
  "index_patterns": ["str*"],
  "settings": {
    "index": {
      "routing.allocation.require.box_type": "hot",
      "refresh_interval": "60s",
      "number_of_replicas": 1,
      "number_of_shards": 2
    }
  },
  "mappings": {
    "values": {
      "properties": {
        "itemid": {"type": "long"},
        "clock": {"format": "epoch_second", "type": "date"},
        "value": {"index": false, "type": "keyword"}
      }
    }
  }
}

The purpose of defining a pipeline is to preprocess the data before it is written, so that a new index is produced each day.

PUT _ingest/pipeline/uint-pipeline
{
  "description": "daily uint index naming",
  "processors": [
    {
      "date_index_name": {
        "field": "clock",
        "date_formats": ["UNIX"],
        "index_name_prefix": "uint-",
        "date_rounding": "d"
      }
    }
  ]
}

PUT _ingest/pipeline/str-pipeline
{
  "description": "daily str index naming",
  "processors": [
    {
      "date_index_name": {
        "field": "clock",
        "date_formats": ["UNIX"],
        "index_name_prefix": "str-",
        "date_rounding": "d"
      }
    }
  ]
}
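To illustrate what these pipelines do, here is a small sketch that derives the same daily index name from a clock timestamp as the date_index_name processor with date_rounding "d" (the daily_index helper is hypothetical, for illustration only):

```python
from datetime import datetime, timezone

def daily_index(prefix, clock_epoch):
    """Round a UNIX timestamp down to the day and build the index name,
    mirroring the date_index_name processor with date_rounding 'd'."""
    day = datetime.fromtimestamp(clock_epoch, tz=timezone.utc).strftime("%Y-%m-%d")
    return f"{prefix}{day}"

print(daily_index("uint-", 0))      # -> uint-1970-01-01
print(daily_index("str-", 86400))   # -> str-1970-01-02
```

This daily naming is what lets curator later select indices by age using the date embedded in the name.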

5. Restart Zabbix after the changes and check that Zabbix is receiving data

systemctl restart zabbix-server

Using curator to operate on the indices

The address of the official curator document is as follows

https://www.elastic.co/guide/en/elasticsearch/client/curator/5.8/installation.html

1. Install curator

pip install -U elasticsearch-curator

2. Create the curator configuration file

mkdir /root/.curator
vim /root/.curator/curator.yml

---
client:
  hosts:
    - 192.168.179.133
    - 192.168.179.134
  port: 9200
  url_prefix:
  use_ssl:
  client_cert:
  client_key:
  ssl_no_validate:
  timeout: 30
  master_only: False
logging:
  loglevel: INFO
  logfile:
  logformat: default
  blacklist: ['elasticsearch', 'urllib3']

3. Edit action.yml to define the actions

Assign indices older than 7 days to the cold nodes

1:
  action: allocation
  description: "Apply shard allocation filtering rules to the specified indices"
  options:
    key: box_type
    value: cold
    allocation_type: require
    wait_for_completion: true
    timeout_override:
    continue_if_exception: false
    disable_action: false
  filters:
    - filtertype: pattern
      kind: regex
      value: '^(uint-|dbl-|str-).*$'
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y-%m-%d'
      unit: days
      unit_count: 7
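The pattern and age filters used throughout these actions can be sketched as follows (an illustration of the selection logic, not curator's implementation; select_older_than is a hypothetical helper):

```python
import re
from datetime import datetime, timedelta

# Same regex as the curator filters: match the uint-, dbl-, and str- indices.
PATTERN = re.compile(r'^(uint-|dbl-|str-).*$')

def select_older_than(indices, days, now=None):
    """Return index names matching the pattern whose %Y-%m-%d date
    (parsed from the name) is older than `days` days."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=days)
    picked = []
    for name in indices:
        if not PATTERN.match(name):
            continue
        date_part = name.split('-', 1)[1]          # e.g. "uint-2020-01-01" -> "2020-01-01"
        if datetime.strptime(date_part, '%Y-%m-%d') < cutoff:
            picked.append(name)
    return picked
```

Each action below applies the same selection with a different unit_count, then performs its operation on the matching indices.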

Force-merge the previous day's indices down to 1 segment per shard.

2:
  action: forcemerge
  description: "Perform a forceMerge on selected indices to 'max_num_segments' per shard"
  options:
    max_num_segments: 1
    delay:
    timeout_override: 21600
    continue_if_exception: false
    disable_action: false
  filters:
    - filtertype: pattern
      kind: regex
      value: '^(uint-|dbl-|str-).*$'
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y-%m-%d'
      unit: days
      unit_count: 1

For indices older than 30 days, the number of primary shards is changed to 2 and the number of replicas to 0. The node performing the shrink operation cannot be a master node.

3:
  action: shrink
  description: "Change the number of primary shards to two, and the number of replicas to 0"
  options:
    ignore_empty_list: True
    shrink_node: DETERMINISTIC
    node_filters:
      permit_masters: False
      exclude_nodes: ['es1', 'es2']
    number_of_shards: 2
    number_of_replicas: 0
    shrink_prefix:
    shrink_suffix: '-shrink'
    delete_after: True
    post_allocation:
      allocation_type: include
      key: box_type
      value: cold
    wait_for_active_shards: 1
    extra_settings:
      settings:
        index.codec: best_compression
    wait_for_completion: True
    wait_for_rebalance: True
    wait_interval: 9
    max_wait: -1
  filters:
    - filtertype: pattern
      kind: regex
      value: '^(uint-|dbl-|str-).*$'
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y-%m-%d'
      unit: days
      unit_count: 30
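One constraint to keep in mind: the shrink API only accepts a target primary-shard count that is a smaller factor of the source index's shard count. A quick sketch of that rule (valid_shrink_targets is a hypothetical helper):

```python
def valid_shrink_targets(source_shards):
    """Primary-shard counts a source index can be shrunk to:
    factors of the source count that are smaller than it."""
    return [n for n in range(1, source_shards) if source_shards % n == 0]

print(valid_shrink_targets(6))  # a 6-shard index can shrink to 1, 2, or 3
print(valid_shrink_targets(2))  # a 2-shard index can only shrink to 1
```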

Close indices that are more than 3 months old

4:
  action: close
  description: "Close selected indices"
  options:
    delete_aliases: false
    skip_flush: false
    ignore_sync_failures: false
  filters:
    - filtertype: pattern
      kind: regex
      value: '^(uint-|dbl-|str-).*$'
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y-%m-%d'
      unit: days
      unit_count: 90

Delete indices older than one year

5:
  action: delete_indices
  description: "Delete selected indices"
  options:
    continue_if_exception: False
  filters:
    - filtertype: pattern
      kind: regex
      value: '^(uint-|dbl-|str-).*$'
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y-%m-%d'
      unit: days
      unit_count: 365

4. Run curator to test the actions

curator action.yml

5. Add the curator run to a scheduled task so it executes once a day:

crontab -e
0 0 * * * curator /root/action.yml
