preface
Any database has a variety of logs, and MongoDB is no exception. MongoDB has four main types of logs: the system log, the journal, the oplog (replication log), and the slow query log. Each traces a different aspect of the database. This article describes each of them in turn.
system log
The system log is important in MongoDB: it records mongod startup and shutdown operations, as well as any exceptions that occur while the server is running.
Configuring the system log is straightforward: specify the logpath parameter when starting mongod.
mongod --logpath /data/log/mongodb/serverlog.log --logappend
With the logappend option, the system log is appended to the file specified by logpath instead of being overwritten on each restart.
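The same settings can also be expressed in mongod's YAML configuration file; a minimal sketch, using the same illustrative log path:
systemLog:
  destination: file
  path: /data/log/mongodb/serverlog.log
  logAppend: true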
Journal
Journaling is a very important feature of MongoDB: it preserves the integrity of the server's data in the face of unexpected power failures, natural disasters, and the like. It adds reliability to MongoDB through a write-ahead redo log. When the feature is enabled, MongoDB creates a journal record for each write, containing the disk address and the bytes changed by the write operation. If the server goes down unexpectedly, the journal can be replayed at startup to re-execute the writes that had not been flushed to disk before the outage.
When MongoDB is configured with the WiredTiger engine, it buffers journal records in memory, and WiredTiger synchronizes the buffered records to disk at the following intervals or conditions:
- Starting with MongoDB 3.2, buffered journal data is synchronized to disk every 50 ms.
- WiredTiger forces a synchronization of the journal files if j: true is set on a write operation.
- Because MongoDB limits journal files to 100 MB, WiredTiger creates a new journal file roughly every 100 MB of data; whenever it creates a new journal file, it synchronizes the previous one.
When one of these conditions is reached, MongoDB commits the buffered updates to the journal. In other words, MongoDB commits changes in batches rather than flushing each write to disk immediately. By default, however, a system crash can lose at most 50 ms of written data.
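For example, a write can explicitly request journal acknowledgment through the j write-concern option; a minimal sketch, with an illustrative collection and document:
db.orders.insertOne(
    { item: "abc", qty: 1 },
    { writeConcern: { w: 1, j: true } }  // do not acknowledge until the journal is on disk
)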
By default, data files are flushed to disk every 60 seconds, so the journal only needs to hold about 60 seconds of written data. The logging system preallocates several empty files for this purpose, stored in the /data/db/journal directory with names such as _j.0, _j.1, and so on.
After MongoDB has been running for a long time, files like _j.6217 and _j.6218 appear in the journal directory; the numeric suffixes keep growing the longer MongoDB runs. After a clean database shutdown, the journal files are purged, since they are no longer needed.
In short, data written to MongoDB goes to memory first and is flushed to the data files every 60 s; journal records likewise go to an in-memory buffer first and are flushed to the journal files every 50 ms.
With WiredTiger, MongoDB can recover to the last checkpoint even without journaling; to recover changes made since the last checkpoint, however, the journal is still required.
After a system crash, or after the database is forcibly terminated with kill -9, mongod replays the journal files at startup, printing a large amount of validation information.
The above applies to the WiredTiger engine. The MMAPv1 engine differs slightly: it flushes the journal every 100 ms, and writes reach the journal files through a private view and the data files through a shared view. We won't go into it further here, since MongoDB 4.0 no longer recommends this storage engine.
WiredTiger has been MongoDB's default storage engine since version 3.2.
Note that if clients write faster than the journal can be flushed, mongod throttles writes until the journal finishes writing to disk. This is the only situation in which mongod throttles writes.
Capped Collection
Before discussing the next two types of logs, we first need to understand capped collections.
Regular collections in MongoDB are created dynamically and grow automatically to accommodate more data. MongoDB also has a different type of collection, the capped collection, which must be created in advance and whose size is fixed. A capped collection behaves like a circular queue: when there is no space left, the oldest documents are deleted to free up space, and newly inserted documents take their place.
To create a capped collection:
db.createCollection("collectionName",{"capped":true, "size":100000, "max":100})
This creates a capped collection limited to 100000 bytes and to 100 documents. Whichever limit is reached first takes effect: documents inserted later crowd the oldest documents out, so the collection can exceed neither its document count limit nor its size limit.
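A quick illustration of the eviction behavior; a sketch with illustrative names and limits:
db.createCollection("log_events", { capped: true, size: 100000, max: 3 })
db.log_events.insertMany([ { n: 1 }, { n: 2 }, { n: 3 }, { n: 4 } ])
db.log_events.find()  // returns n: 2, 3, 4 — the oldest document (n: 1) was evicted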
A capped collection cannot be changed after it is created, and it cannot be converted back into a regular collection; a regular collection, however, can be converted into a capped one:
db.runCommand({"convertToCapped": "test", "size" : 10000});
Capped collections support a special sort called natural sort, which returns documents in the order they are laid out on disk. For a capped collection, natural order is insertion order, so a natural sort returns documents from oldest to newest. You can also reverse it, from newest to oldest:
db.my_capped_collection.find().sort({"$natural": -1});
oplog master-slave log
Replica sets are used to keep copies of data on multiple servers. MongoDB replication is implemented with the oplog, which contains every write performed on the primary node. The oplog is a capped collection in the primary node's local database, and backup nodes query this collection for the operations they need to replicate.
All databases in a mongod instance share a single oplog; that is, the operation logs (inserts, deletes, updates) of every database are recorded in the same oplog.
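An oplog entry is itself just a document; a simplified sketch of an insert entry, with illustrative values (real entries carry a few more version-dependent fields):
{
    ts: Timestamp(1622476800, 1),             // when the operation occurred
    op: "i",                                  // i=insert, u=update, d=delete, c=command, n=no-op
    ns: "test.student",                       // namespace: database.collection
    o: { _id: ObjectId("..."), name: "abc" }  // the document or operation body
}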
Each backup node also maintains its own oplog, recording every operation it replicates from the primary node. This way, any member can serve as a synchronization source for the others.
A backup node fetches the operations to be performed from its current synchronization source, applies them to its own data set, and then writes them to its own oplog. If an operation fails (which happens only when the synchronization source's data is corrupted or inconsistent with the primary node), the backup node stops replicating from that source.
The oplog stores all write operations in the order they were performed. Every member of the replica set maintains its own oplog, and each member's oplog should be identical to the primary node's (possibly with some delay).
If a backup node goes down for some reason, when it restarts it automatically resumes synchronizing from the last operation in its own oplog. Because replication works by applying operations and then writing them to the oplog, a backup node may re-apply operations that have already been synchronized. MongoDB was designed with this in mind: applying the same oplog operation multiple times has the same effect as applying it once.
Since the oplog is fixed in size, it can hold only a limited number of operations. In general, the oplog grows at roughly the same rate as write traffic: if the primary node handles 1 KB of write requests per minute, the oplog will write roughly 1 KB of log entries per minute as well.
There are exceptions, however: a single request that affects multiple documents (such as a multi-document delete or update) produces multiple oplog entries, one per affected document. So if db.student.remove({}) deletes 100,000 documents, the oplog receives 100,000 operation entries, one for each deleted document. Heavy batch operations can therefore fill the oplog very quickly.
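On a replica set member, the oplog can be inspected directly; it lives in the local database as the capped collection oplog.rs:
use local
db.oplog.rs.find().sort({ $natural: -1 }).limit(5)  // the five most recent operations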
slow query log
MongoDB uses a system profiler to find operations that take too long. The profiler records these operations in the capped collection system.profile and provides a lot of information about slow operations, but it also degrades the overall performance of the mongod. So the profiler is usually enabled only periodically to gather information.
By default, the profiler is off and nothing is recorded. You can enable it from the shell with db.setProfilingLevel():
db.setProfilingLevel(level, slowms)  // level: 0=off, 1=slow operations, 2=all operations
The first parameter is the profiling level: 0 turns the profiler off, 1 records operations that take longer than 100 ms by default, and 2 records all operations. The optional second parameter customizes the "too slow" threshold; for example, to record all operations that take longer than 500 ms:
db.setProfilingLevel(1,500);
If the profiler is enabled and the system.profile collection does not yet exist, MongoDB creates it as a capped collection a few megabytes in size. If you want the profiler to run for longer, you may need more space to record more operations. In that case you can turn the profiler off, drop system.profile, recreate it as a capped collection of the required size, and then re-enable the profiler on the database.
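A sketch of that procedure in the shell, with an illustrative 50 MB size:
db.setProfilingLevel(0)                                                           // turn the profiler off
db.system.profile.drop()                                                          // discard the old capped collection
db.createCollection("system.profile", { capped: true, size: 50 * 1024 * 1024 })   // recreate it larger
db.setProfilingLevel(1, 500)                                                      // re-enable profiling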
You can view the maximum size of a collection via db.system.profile.stats().
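Once profiling is on, slow operations can also be examined by querying the collection directly; millis and ts are standard fields of profile documents:
db.system.profile.find({ millis: { $gt: 500 } }).sort({ ts: -1 }).limit(10)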
summary
That covers the four types of MongoDB logs. I hope this article provides useful reference material for your study or work; thank you for reading.