The main mongostat columns, with a one-liner to extract each from a single sample:
insert: number of inserts per second. mongostat --port 21000 --rowcount=1 | grep -v insert | awk '{print $1}'
query: number of queries per second. mongostat --port 21000 --rowcount=1 | grep -v insert | awk '{print $2}'
update: number of updates per second. mongostat --port 21000 --rowcount=1 | grep -v insert | awk '{print $3}'
delete: number of deletes per second. mongostat --port 21000 --rowcount=1 | grep -v insert | awk '{print $4}'
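The same four rates can be derived without parsing mongostat output at all: serverStatus keeps cumulative opcounters, so sampling twice and taking the delta gives operations per second. A minimal sketch, assuming pymongo and a mongod listening on port 21000 as in the commands above:

import time
from pymongo import MongoClient

client = MongoClient("localhost", 21000)

def op_rates(interval=1.0):
    # opcounters values are cumulative since startup, so sample twice
    # and divide the delta by the interval to get per-second rates
    before = client.admin.command("serverStatus")["opcounters"]
    time.sleep(interval)
    after = client.admin.command("serverStatus")["opcounters"]
    return {op: (after[op] - before[op]) / interval
            for op in ("insert", "query", "update", "delete")}

print(op_rates())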
getmore: number of cursor getmore operations per second, issued when a query fetches additional batches.
command: number of commands executed per second. Note that a bulk insert counts as just one command, so this figure alone does not mean much.
flushes: number of fsync flushes per second. Flushing is expensive; if it happens frequently, find out why.
mapped: total amount of data mmap-ed by the process, in MB.
vsize: virtual memory usage. With journaling enabled it is normally about twice the mapped size; if it grows to three times mapped or more, there may be a memory leak.
res: physical memory usage. res normally climbs slowly; if it often drops suddenly, check whether other programs are eating the memory.
faults: page faults per second (Linux only), meaning data has been swapped out of physical memory into swap. This should not exceed 100; a higher value means the machine has too little memory and is swapping frequently, so upgrade the memory or scale out.
idx miss %: index miss percentage. Normally every query should go through an index; if this value is large, check whether an index is missing.
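The metrics that follow are annotated with their path into the serverStatus document, read from a variable named moninfo. A minimal sketch of how such a snapshot might be taken, again assuming pymongo and port 21000:

from pymongo import MongoClient

client = MongoClient("localhost", 21000)
# one point-in-time snapshot of server state; every moninfo[...] path
# in the annotations below indexes into this document
moninfo = client.admin.command("serverStatus")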
"total": 0, # queue moninfo currently waiting for the lock to be acquired ["globalLock"] ["currentQueue"] ["total"]
"readers": 0, # queue moninfo currently waiting for the read lock to be acquired ["globalLock"] ["currentQueue"] ["readers"]
"writers": 0 # queue moninfo currently waiting for the write lock to be acquired ["globalLock"] ["currentQueue"] ["writers"]
"total": 0, # current active connections moninfo ["globalLock"] ["activeClients"] ["total"]
"readers": 0, # current active read connections moninfo ["globalLock"] ["activeClients"] ["readers"]
"writers": 0 # number of write connections currently active moninfo ["globalLock"] ["activeClients"] ["writers"]
"current": 2050, # current number of connections moninfo ["connections"] ["current"]
"available": 14350 # number of connections also available moninfo ["connections"] ["available"]
"flushes": 250852, # number of times the database refreshes data to disk moninfo ["backgroundFlushing"] ["flushes"]
"total_ms": 52897489 # time it takes for a database to refresh data to disk (in milliseconds moninfo ["backgroundFlushing"] ["total_ms"]
"average_ms": 210.871306587151, # average time spent per disk refresh, in milliseconds. Moninfo ["backgroundFlushing"] ["average_ms"]
"last_ms": 797,# the last time it took to refresh the disk (in milliseconds). Moninfo ["backgroundFlushing"] ["last_ms"]
"commits": 27, # the number of commit occurrences in journal logs in the last interval moninfo ["dur"] ["commits"]
"journaledMB": 0.114688, # the amount of data generated by journal logs in the previous interval moninfo ["dur"] ["journaledMB"]
"writeToDataFilesMB": 0.13708, # the amount of data written to disk by journal logs in the last interval moninfo ["dur"] ["writeToDataFilesMB"]
"compression": 0.8158085672418944, # journal log compression ratio moninfo ["dur"] ["compression"]
"commitsInWriteLock": 0, # how many times there are write locks when the journal log is submitted. Moninfo ["dur"] ["commitsInWriteLock"]
"earlyCommits": 0Jing # how many times have you been asked to commit moninfo ["dur"] ["earlyCommits"] before automatic commit
"dt": 3087, # time spent counting timeMs data in milliseconds moninfo ["dur"] ["timeMs"] ["dt"]
"prepLogBuffer": 0Jing # time spent preparing to write journal logs (in milliseconds). The less the performance, the better moninfo ["dur"] ["timeMs"] ["prepLogBuffer"]
"writeToJournal": 246 moninfo # time spent writing journal logs in milliseconds ["dur"] ["timeMs"] ["writeToJournal"]
"writeToDataFiles": 5 moninfo # the time spent writing data to the data file after writing the journal log ["dur"] ["timeMs"] ["writeToDataFiles"]
"remapPrivateView": the time it takes to remap data, the shorter the time, the better performance. Moninfo ["dur"] ["timeMs"] ["remapPrivateView"]