This article explains the principle behind splitting logs in distributed system design: the Segmented Log pattern, the problem it solves, and how it works. The situations it covers come up frequently in real systems, so the walkthrough below should be directly usable in practice.
Segmented Log (split log)
Split a large log file into smaller segment files that are easier to handle.
Problem background
A single log file can grow so large that reading it at program startup becomes a performance bottleneck. Old log entries also need to be cleaned up periodically, and that is hard to do with one huge file.
Solution
Divide the single log into multiple segments; when the current segment reaches a certain size, the log rolls over to a new file and writing continues there.
// Write a log entry
public Long writeEntry(WALEntry entry) {
    // Roll over to a new file if needed
    maybeRoll();
    // Write to the currently open segment
    return openSegment.writeEntry(entry);
}

private void maybeRoll() {
    // If the current segment exceeds the maximum log file size
    if (openSegment.size() >= config.getMaxLogSize()) {
        // Force a flush to disk
        openSegment.flush();
        // Add the closed segment to the sorted list of saved segments
        sortedSavedSegments.add(openSegment);
        // Get the id of the last log entry in the segment
        long lastId = openSegment.getLastLogEntryId();
        // Create and open a new segment file starting from that id
        openSegment = WALSegment.open(lastId, config.getWalDir());
    }
}
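The writeEntry/maybeRoll code above relies on a WALSegment abstraction whose implementation the article does not show. The following is a minimal sketch of what such a segment could look like, assuming a FileChannel-backed file, a File-typed WAL directory, sequentially assigned entry ids, and a WALEntry.serialize() method; none of these details come from the article.

import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.io.UncheckedIOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

// Hypothetical minimal segment wrapper, for illustration only.
public class WALSegment {
    private final long baseOffset;     // id of the first entry belonging to this file
    private final FileChannel channel; // backing file
    private long lastLogEntryId;       // id of the most recently written entry

    private WALSegment(long baseOffset, FileChannel channel) {
        this.baseOffset = baseOffset;
        this.channel = channel;
        this.lastLogEntryId = baseOffset;
    }

    // Open (or create) a segment file; a real implementation would use the
    // createFileName convention shown later in this article.
    public static WALSegment open(long startIndex, File walDir) {
        try {
            File file = new File(walDir, "wal_" + startIndex + "_.log"); // assumed naming
            FileChannel channel = new RandomAccessFile(file, "rw").getChannel();
            return new WALSegment(startIndex, channel);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Append a serialized entry and assign it the next id (ids assumed sequential)
    public Long writeEntry(WALEntry entry) {
        try {
            channel.write(ByteBuffer.wrap(entry.serialize())); // serialize() is assumed
            lastLogEntryId = lastLogEntryId + 1;
            return lastLogEntryId;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Current size of the segment file in bytes
    public long size() {
        try {
            return channel.size();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Force buffered writes to disk
    public void flush() {
        try {
            channel.force(true);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public long getLastLogEntryId() { return lastLogEntryId; }

    public long getBaseOffset() { return baseOffset; }
}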
Once the log is segmented, you need a mechanism to quickly map a log position (or log sequence number) to the file that contains it. This can be done in two ways:
Each log segment file name contains a fixed prefix and the starting log offset (or log sequence number) of the entries in that file.
Each log sequence number encodes the file name and the offset of the transaction within that file.
The code below takes the first approach:
// Create a segment file name: log prefix _ start offset _ log suffix
public static String createFileName(Long startIndex) {
    return logPrefix + "_" + startIndex + "_" + logSuffix;
}

// Extract the starting log offset from a file name
public static Long getBaseOffsetFromFileName(String fileName) {
    String[] nameAndSuffix = fileName.split(logSuffix);
    String[] prefixAndOffset = nameAndSuffix[0].split("_");
    if (prefixAndOffset[0].equals(logPrefix))
        return Long.parseLong(prefixAndOffset[1]);
    return -1L;
}
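To make the round trip concrete: assuming, for example, that logPrefix is "wal" and logSuffix is ".log" (the article does not give the actual values), the two helpers behave as follows.

// Example values only; logPrefix and logSuffix are not specified in the article.
String fileName = createFileName(2000L);               // "wal_2000_.log"
Long baseOffset = getBaseOffsetFromFileName(fileName); // 2000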
With this information encoded in the file name, a read operation takes two steps:
Given an offset (or transaction id), find all segment files containing log entries greater than that offset.
Read all log entries greater than that offset from those files.
// Given a start offset, read all log entries greater than that offset
public List<WALEntry> readFrom(Long startIndex) {
    List<WALSegment> segments = getAllSegmentsContainingLogGreaterThan(startIndex);
    return readWalEntriesFrom(startIndex, segments);
}

// Given a start offset, get all segment files containing log entries greater than that offset
private List<WALSegment> getAllSegmentsContainingLogGreaterThan(Long startIndex) {
    List<WALSegment> segments = new ArrayList<>();
    // Start from the last segment and walk back to the first segment whose
    // starting offset is less than or equal to startIndex.
    // This collects all the segments which have log entries greater than startIndex.
    for (int i = sortedSavedSegments.size() - 1; i >= 0; i--) {
        WALSegment walSegment = sortedSavedSegments.get(i);
        segments.add(walSegment);
        if (walSegment.getBaseOffset() <= startIndex) {
            break;
        }
    }
    return segments;
}
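readWalEntriesFrom is called above but not shown in the article. A possible sketch, assuming a hypothetical WALSegment.readAll() that returns the segment's entries and a WALEntry.getEntryId() accessor (neither appears in the article), could look like this:

// Hypothetical sketch; method of the same write-ahead log class as readFrom above.
private List<WALEntry> readWalEntriesFrom(Long startIndex, List<WALSegment> segments) {
    List<WALEntry> entries = new ArrayList<>();
    // Segments were collected newest-first above, so walk them in reverse
    // to return entries in ascending id order.
    for (int i = segments.size() - 1; i >= 0; i--) {
        for (WALEntry entry : segments.get(i).readAll()) { // readAll() is assumed
            // Skip entries at or below the requested offset
            if (entry.getEntryId() > startIndex) {
                entries.add(entry);
            }
        }
    }
    return entries;
}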