2025-01-16 Update From: SLTechnology News&Howtos
Current architecture:
N Filebeat clients ship logs from each application to Kafka; 3 Kafka brokers form the cluster that serves as the log queue; 4 Elasticsearch nodes form the ES cluster, with the first two storing hot data (logs from roughly the last two days) and the other two storing historical logs older than two days, retained for one month. The total data volume is currently 4.4 billion documents, about 6 TB in size. Logstash, Kibana, and ES run on the same machines, and the Kibana domain name points to the three back-end Kibana instances in round-robin.
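The hot/historical split described above is typically implemented with node attributes plus index-level allocation filtering. A minimal sketch, assuming an attribute named "box_type" (the attribute name and index name are illustrative, not taken from this setup):

```yaml
# elasticsearch.yml on the two hot-data nodes
# ("box_type" is an assumed attribute name; any name works)
node.attr.box_type: hot

# elasticsearch.yml on the two historical-data nodes
node.attr.box_type: warm
```

An index older than two days would then be migrated with a settings call such as `PUT logstash-nginx-2017.08.01/_settings` with body `{"index.routing.allocation.require.box_type": "warm"}`, after which ES relocates its shards to the warm nodes.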
There are several performance issues:
1. Only the first node in the cluster carries a high load; the other nodes stay lightly loaded. Occasionally the second node, which is also a hot-data node, runs slightly hotter.
2. Queues are often blocked. The Kafka topics for the three environments (uat, pet, prd) are all consumed by the same default Logstash consumer group, so as soon as one environment's queue backs up, the queues of the other environments cannot be consumed either.
3. After logging into Kibana, it takes at least half a minute for the home page to open, and log queries are also very slow, taking at least a few minutes to return results.
4. ES nodes often leave the cluster under high load, triggering redistribution of the cluster's shard data; the cluster status turns RED and the Kibana page shows a Red error. Kibana has been intermittently unreachable for about a week or two.
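Problem 2 follows from all three topics sharing one consumer group. A common fix is a separate Kafka input (or at least a distinct group_id) per environment, so a backlog in one topic cannot stall the others. A sketch of one such Logstash input; the broker addresses, topic name, and group name are illustrative assumptions:

```conf
# Logstash kafka input for the prd environment only;
# uat and pet would each get their own input block and group_id.
input {
  kafka {
    bootstrap_servers => "kafka1:9092,kafka2:9092,kafka3:9092"
    topics            => ["prd-logs"]       # assumed topic name
    group_id          => "logstash-prd"     # distinct consumer group per environment
    consumer_threads  => 3                  # roughly one thread per partition
  }
}
```

With distinct group IDs, Kafka tracks each environment's consumer offsets independently, so consumption in one environment no longer blocks the others.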
Some index queries in ELK were found to be slow, so we enabled the ES search slow log to record slow queries, then analyzed the slow log to locate the problem. The slow log entries look like this:
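Search slow logging is enabled per index through the index settings API. A minimal sketch of the thresholds; the index pattern and threshold values are illustrative:

```json
PUT /logstash-nginx-*/_settings
{
  "index.search.slowlog.threshold.query.warn": "10s",
  "index.search.slowlog.threshold.query.info": "5s",
  "index.search.slowlog.threshold.fetch.warn": "1s"
}
```

Queries whose query or fetch phase exceeds a threshold are then written to the per-node search slow log at the corresponding level.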
[2017-08-28T11:21:02,377][WARN ][index.search.slowlog.query] [logstash-nginx-2017.08.01][4] took[15s], took_millis[15029], types[], stats[], search_type[QUERY_THEN_FETCH], total_shards[140], source[{"size": 0, "query": {"bool": {"filter": [{"match_none": {"boost": 1.0}}, {"query_string": {"query": "NOT status:200 OR NOT status:304", "fields": [], "use_dis_max": true, "tie_breaker": 0.0, "default_operator": "or", "auto_generate_phrase_queries": false, "max_determinized_states": 10000, "enable_position_increments": true, "fuzziness": "AUTO", "fuzzy_prefix_length": 0, "fuzzy_max_expansions": 50, "phrase_slop": 0, "analyze_wildcard": true, "escape": false, "split_on_whitespace": true, "boost": 1.0}}], "disable_coord": false, "adjust_pure_negative": true, "boost": 1.0}}, "aggregations": {"3": {"terms": {"field": "status", "size": 5, "min_doc_count": 0, "shard_min_doc_count": 0, "show_term_doc_count_error": false, "order": [{"_count": "desc"}, {"_term": "asc"}]}, "aggregations": {"2": {"date_histogram": {"field": "@timestamp", "format": "epoch_millis", "interval": "20m", "offset": 0, "order": {"_key": "asc"}, "keyed": false, "min_doc_count": 0, "extended_bounds": {"min": "1503886846372", "max": "1503890446372"}}}}}}}]
[2017-08-28T11:21:02,377][WARN ][index.search.slowlog.query] [node-3] [logstash-nginx-2017.08.01][2] took[15.7s], took_millis[15787], types[], stats[], search_type[QUERY_THEN_FETCH], total_shards[140], source[{"size": 0, "query": {"bool": {"filter": [{"match_none": {"boost": 1.0}}, {"query_string": {"query": "NOT status:200 OR NOT status:304", "fields": [], "use_dis_max": true, "tie_breaker": 0.0, "default_operator": "or", "auto_generate_phrase_queries": false, "max_determinized_states": 10000, "enable_position_increments": true, "fuzziness": "AUTO", "fuzzy_prefix_length": 0, "fuzzy_max_expansions": 50, "phrase_slop": 0, "analyze_wildcard": true, "escape": false, "split_on_whitespace": true, "boost": 1.0}}], "disable_coord": false, "adjust_pure_negative": true, "boost": 1.0}}, "aggregations": {"3": {"terms": {"field": "status", "size": 5, "min_doc_count": 0, "shard_min_doc_count": 0, "show_term_doc_count_error": false, "order": [{"_count": "desc"}, {"_term": "asc"}]}, "aggregations": {"2": {"date_histogram": {"field": "@timestamp", "format": "epoch_millis", "interval": "20m", "offset": 0, "order": {"_key": "asc"}, "keyed": false, "min_doc_count": 0, "extended_bounds": {"min": "1503886846372", "max": "1503890446372"}}}}}}}]
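When the slow log grows large, it helps to extract the index name and latency from each entry so queries can be sorted by cost. A minimal sketch; the regular expression is an assumption based on the log format shown above, not an official parser:

```python
import re

# Match "[index-name][shard] ... took_millis [12345]" from an ES 5.x
# search slowlog line. The pattern is a best-effort assumption.
SLOWLOG_RE = re.compile(
    r"\[(?P<index>[\w.\-]+)\]\s*\[\d+\].*?took_millis\s*\[(?P<millis>\d+)\]"
)

def parse_slowlog_line(line):
    """Return (index_name, took_millis) for a slowlog line, or None."""
    m = SLOWLOG_RE.search(line)
    if not m:
        return None
    return m.group("index"), int(m.group("millis"))

sample = ('[2017-08-28T11:21:02,377][WARN ][index.search.slowlog.query] '
          '[logstash-nginx-2017.08.01][4] took[15s], took_millis[15029], '
          'types[], stats[], search_type[QUERY_THEN_FETCH], total_shards[140]')
print(parse_slowlog_line(sample))  # → ('logstash-nginx-2017.08.01', 15029)
```

Feeding every line of the slow log through this function and sorting by the millisecond value surfaces the heaviest queries first, which is the starting point for the analysis below.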
The following is an analysis:
To be continued