This article explains how Elasticsearch's Refresh and Flush operations work, what each one guarantees, and how they differ.
At first contact, the two concepts seem interchangeable: both exist to make newly indexed data searchable in near real time. They differ in important ways, however.
Elasticsearch is built on top of Lucene, so we first introduce Lucene's segments and its Reopen and Commit operations.
Segment
In ES, the basic storage unit is the shard, but things look slightly different one level down in Lucene. Each ES shard is a Lucene index, and a Lucene index is composed of multiple segments. Each segment is an inverted index over ES documents, containing mappings from terms to the documents that contain them.
Newly indexed documents are written into new segments; existing segments are never modified. Deleting a document simply marks it as deleted in the segment that contains it, without physically erasing it from disk. An update works the same way: the old version of the document is marked as logically deleted in its segment, and the new version is written to a new segment.
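You can observe segments and logically deleted documents directly. Below is a minimal sketch using the official Python elasticsearch client (8.x API assumed; the index name demo is hypothetical): the _cat/segments API lists each shard's Lucene segments, and its docs.deleted column counts documents that are merely marked deleted.

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# List every Lucene segment behind the "demo" index. The docs.deleted
# column shows documents that are only marked as deleted; they are not
# physically removed until segments are merged.
print(es.cat.segments(index="demo", v=True))
```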
Lucene Reopen
The purpose of Reopen is to make newly written data searchable. Note that although the data can be searched after a Reopen, it is not guaranteed to have been persisted to disk.
Lucene Commit
Commit is what makes data durable: on each Commit, the data from the in-memory segments is persisted to disk. Although this makes the data safer, every Commit consumes system resources and generates a large amount of I/O.
Translog
To avoid paying the cost of a Commit on every write, ES introduces another persistence mechanism: the translog (transaction log). After a document is indexed, it is added to the in-memory buffer and also appended to the translog.
Refresh of ES
By default, ES performs a refresh once per second. Each refresh copies the contents of the in-memory buffer into a newly created segment. This step happens entirely in memory (no fsync to disk), and at that point the new documents become searchable. This is why ES is described as a near-real-time search engine: data becomes searchable roughly one second after it is indexed.
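This near-real-time behavior is easy to observe. Here is a minimal sketch using the official Python elasticsearch client (8.x API assumed; the index name demo and the document body are hypothetical): a freshly indexed document does not show up in search results until a refresh happens, and forcing one makes it visible immediately.

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

es.index(index="demo", id="1", document={"msg": "hello"})

# Likely 0 hits: the document is still in the in-memory buffer and has
# not been copied into a searchable segment yet.
print(es.search(index="demo", query={"match": {"msg": "hello"}})["hits"]["total"])

# Force a refresh instead of waiting for the ~1s interval.
es.indices.refresh(index="demo")

# Now the document is searchable (though not necessarily on disk yet).
print(es.search(index="demo", query={"match": {"msg": "hello"}})["hits"]["total"])
```

The one-second default can be changed per index through the index.refresh_interval setting, for example raising it (or disabling it with -1) during bulk loads to reduce refresh overhead.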
Flush of ES
A Flush writes all documents remaining in the in-memory buffer into a new Lucene segment, commits all in-memory segments to disk (a Lucene Commit), and then clears the translog.
Flushes happen at a much longer interval than refreshes: by default every 30 minutes, or whenever the translog reaches a certain size.
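Both triggers can be exercised explicitly. Below is a minimal sketch (Python elasticsearch client, 8.x API assumed; index name demo is hypothetical) of forcing a flush manually and of the translog size threshold that triggers one automatically:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Commit in-memory segments to disk and clear the translog now, rather
# than waiting for the periodic flush.
es.indices.flush(index="demo")

# Flush automatically once the translog grows past this size
# (512mb is the documented default).
es.indices.put_settings(
    index="demo",
    settings={"index.translog.flush_threshold_size": "512mb"},
)
```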
Summary
Put simply, the refresh operation exists to make the latest data searchable immediately, while the flush operation persists data to disk. Because ES serves searches from segments in memory, a Flush has no effect on whether data can be searched.
The translog is emptied during a flush, once the data it protects has been committed to disk. As of version 6.x, the translog itself is fsynced to disk after every write request by default, and this behavior can be tuned through the index.translog.* settings.
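As an illustration of those settings, here is a minimal sketch (Python elasticsearch client, 8.x API assumed; index name demo is hypothetical) that trades some durability for write throughput by fsyncing the translog on a timer instead of on every request:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

es.indices.put_settings(
    index="demo",
    settings={
        # Default is "request": fsync the translog before acknowledging
        # each write. "async" fsyncs on a timer instead, risking the loss
        # of up to sync_interval worth of writes on a crash.
        "index.translog.durability": "async",
        "index.translog.sync_interval": "5s",
    },
)
```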
That covers how Elasticsearch's Refresh and Flush operations work; these details come up regularly when tuning indexing performance and durability in day-to-day work.