This article covers how to set shard sizes and plan capacity in Elasticsearch. The advice is practical, and hopefully you will take something useful away from it.
Based on Elasticsearch 7.9.2.
Shard size
Log-type indices: no more than 50 GB per shard
Search-type indices: no more than 20 GB per shard
Estimate the total data volume first, then derive the number of shards from the per-shard size limit.
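As a rough illustration of that calculation, here is a minimal Python sketch; the input sizes are assumed example values rather than figures from the article:

import math

# Assumed example inputs: an estimated 600 GB of log data and the
# 50 GB-per-shard cap suggested above (use 20 GB for search-type indices).
estimated_index_size_gb = 600
max_shard_size_gb = 50

# Number of primary shards needed so that no single shard exceeds the cap.
primary_shards = math.ceil(estimated_index_size_gb / max_shard_size_gb)
print(f"primary shards: {primary_shards}")  # -> 12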
Capacity planning
Factors to consider when planning capacity:
The software and hardware configuration of the machine
The size of a single document, the total number of documents, the index size, and the number of shards and replicas
How documents are written, for example the batch size used with the bulk API (index settings and a bulk request are sketched after this list)
The complexity of the documents
How documents are read, for example which queries and aggregations are run
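To make the shard, replica, and bulk-write factors concrete, here is a hedged sketch against a local Elasticsearch 7.x node using Python's requests library; the index name, shard counts, and document contents are assumed example values, not anything prescribed by the article:

import json
import requests

ES = "http://localhost:9200"  # assumed local node

# Shard and replica counts are fixed in the index settings at creation time.
requests.put(f"{ES}/logs-demo", json={
    "settings": {"number_of_shards": 3, "number_of_replicas": 1}
})

# Bulk writes: each batch is NDJSON, alternating an action line and a document.
docs = [{"message": f"log line {i}"} for i in range(1000)]
lines = []
for doc in docs:
    lines.append(json.dumps({"index": {"_index": "logs-demo"}}))
    lines.append(json.dumps(doc))
body = "\n".join(lines) + "\n"

requests.post(f"{ES}/_bulk", data=body,
              headers={"Content-Type": "application/x-ndjson"})

The batch size (1,000 documents here) is exactly the kind of knob the bulk-write factor refers to; larger batches raise throughput up to a point, at the cost of memory and per-request latency.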
Steps for capacity planning:
First, evaluate the performance requirements: for example, how many writes per second, how many reads per second, and how much latency is acceptable when reading a single document.
Then look at the data: what the mapping looks like and what kinds of queries and aggregations are needed (sketched below).
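A hedged sketch of that second step, again using Python's requests library; the index name and field names are hypothetical and carry over from the earlier example:

import requests

ES = "http://localhost:9200"  # assumed local node
INDEX = "logs-demo"           # hypothetical index from the earlier sketch

# What the mapping looks like.
mapping = requests.get(f"{ES}/{INDEX}/_mapping").json()
print(mapping)

# What a representative query plus aggregation looks like: a full-text match
# on an assumed "message" field, bucketed per day on an assumed "@timestamp".
search_body = {
    "query": {"match": {"message": "error"}},
    "aggs": {"per_day": {"date_histogram": {"field": "@timestamp",
                                            "calendar_interval": "day"}}},
    "size": 0,
}
resp = requests.post(f"{ES}/{INDEX}/_search", json=search_body)
print(resp.json().get("aggregations"))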
Two typical scenarios:
Search: data grows slowly.
Logs: data grows quickly, needs hot-cold separation, and needs automatic deletion (an index lifecycle policy along these lines is sketched below).
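Elasticsearch 7.x covers both of those needs with index lifecycle management (ILM). The following is a minimal sketch rather than a recommended policy: the policy name, the rollover and retention ages, and the "box_type" node attribute used for hot-warm allocation are all assumptions:

import requests

ES = "http://localhost:9200"  # assumed local node

# Hot phase rolls over around the 50 GB-per-shard cap, the warm phase moves
# data to nodes tagged with a hypothetical node.attr.box_type: warm setting,
# and the delete phase removes indices after 30 days.
policy = {
    "policy": {
        "phases": {
            "hot": {"actions": {"rollover": {"max_size": "50gb",
                                             "max_age": "1d"}}},
            "warm": {"min_age": "7d",
                     "actions": {"allocate": {"require": {"box_type": "warm"}}}},
            "delete": {"min_age": "30d", "actions": {"delete": {}}},
        }
    }
}
requests.put(f"{ES}/_ilm/policy/logs-demo-policy", json=policy)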
Hardware configuration:
For high-performance scenarios such as search, use SSDs, with a memory-to-disk ratio of roughly 1:10.
For logs and other scenarios where concurrency is not high, mechanical hard disks are acceptable, with a memory-to-disk ratio of roughly 1:50.
Keep the data volume on a single node below 2 TB, and never let it exceed 5 TB.
Set the JVM heap to half of the machine's memory, and no more than 32 GB.
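A minimal sketch of that heap-sizing rule (the machine size is an assumed example; in practice the resulting values go into Elasticsearch's config/jvm.options file):

# Half of the machine's RAM, capped at 32 GB.
machine_ram_gb = 128  # assumed example machine
heap_gb = min(machine_ram_gb // 2, 32)
print(f"-Xms{heap_gb}g -Xmx{heap_gb}g")  # -> -Xms32g -Xmx32g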
That covers shard size setting and capacity planning in Elasticsearch. Hopefully some of these points prove useful in day-to-day work.