2025-03-26 Update From: SLTechnology News&Howtos
1. Use the slow query log (system.profile) to find statements that take longer than 500 ms:
mongos> db.setProfilingLevel(1, 500)
2. Then use .explain() to see how many documents the statement scans and why it exceeds 500 ms (that is, inspect the execution plan).
3. Determine whether a missing index is the cause.
# View status: level and slowms threshold
PRIMARY> db.getProfilingStatus()
{ "was" : 1, "slowms" : 200 }
# View level only
PRIMARY> db.getProfilingLevel()
1
# Set level
PRIMARY> db.setProfilingLevel(2)
{ "was" : 1, "slowms" : 100, "ok" : 1 }
# Set level and threshold
PRIMARY> db.setProfilingLevel(1, 200)
{ "was" : 2, "slowms" : 100, "ok" : 1 }
Profiling level description
Parameters:
0: off; no data is collected.
1: collect data only for slow operations. The default threshold is 100 ms.
2: collect data for all operations.
Note:
1. The setting applies per database: run it under the database whose operations you want to profile. To make it effective for the whole instance, set it in every database, or enable it at startup.
2. The value returned by each set call is the status before the modification (level and slowms).
2: Without the mongo shell
Set it when mongod starts:
mongod --profile=1 --slowms=200
Or add two lines to the configuration file:
profile = 1
slowms = 200
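On MongoDB versions that use the YAML configuration file format (2.6 and later), the same settings live under the operationProfiling section; a sketch of the equivalent:

```yaml
# Equivalent YAML configuration (MongoDB 2.6+ config file format)
operationProfiling:
  mode: slowOp            # corresponds to profile = 1 (profile only slow operations)
  slowOpThresholdMs: 200  # corresponds to slowms = 200
```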
3: Close Profiling
# close
PRIMARY> db.setProfilingLevel(0)
{ "was" : 1, "slowms" : 200, "ok" : 1 }
4: Modify the size of the slow query log (system.profile)
# Turn off profiling
PRIMARY> db.setProfilingLevel(0)
{ "was" : 0, "slowms" : 200, "ok" : 1 }
# Drop the system.profile collection
PRIMARY> db.system.profile.drop()
# Recreate system.profile as a capped collection of the desired size (here 4 MB, in bytes)
PRIMARY> db.createCollection("system.profile", { capped: true, size: 4194304 })
# Turn profiling back on
PRIMARY> db.setProfilingLevel(1)
Slow query (system.profile) analysis
If you find that the millis value is large, the statement needs optimization.
1. If nscanned is large, or close to the total number of documents in the collection, the query likely performed a full collection scan instead of using an index.
2. If nscanned is much higher than nreturned, the database scanned many documents in order to find the target documents. Consider creating an index to improve efficiency.
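The checks above can be sketched as a small helper that flags a system.profile document. The field names (millis, nscanned, nreturned) follow the profile document format described here; the thresholds are illustrative assumptions, not MongoDB defaults:

```javascript
// Heuristic flagging of one system.profile document, following the rules above.
// slowMs and scanRatio are illustrative assumption values, not MongoDB defaults.
function flagProfileDoc(doc, totalDocs, slowMs = 200, scanRatio = 10) {
  const flags = [];
  if (doc.millis >= slowMs) flags.push("slow");                  // large millis: needs optimization
  if (doc.nscanned >= 0.9 * totalDocs) flags.push("full-scan");  // rule 1: nscanned near collection size
  if (doc.nreturned > 0 && doc.nscanned / doc.nreturned >= scanRatio) {
    flags.push("needs-index");                                   // rule 2: nscanned much higher than nreturned
  }
  return flags;
}
```

For example, flagProfileDoc({ millis: 350, nscanned: 9000, nreturned: 10 }, 10000) reports all three problems, while a fast, index-covered query returns an empty list.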
The possible values of the 'stage' field in the returned plan are as follows:
COLLSCAN # full collection scan
IXSCAN # index scan
FETCH # retrieve documents pointed to by the index
SHARD_MERGE # merge the results returned by each shard
SORT # sorted in memory (equivalent to scanAndOrder: true in earlier versions)
LIMIT # the number of returned documents is limited with limit
SKIP # documents are skipped with skip
IDHACK # query by _id
SHARDING_FILTER # filter out orphaned documents when querying sharded data through mongos
COUNT # count operation, e.g. db.coll.explain().count()
COUNTSCAN # stage returned when count does not use an index
COUNT_SCAN # stage returned when count uses an index
SUBPLAN # stage returned for a $or query that does not use an index
TEXT # stage returned when querying with a full-text index
PROJECTION # stage returned when limiting which fields are returned
For normal queries, the combinations we would most like to see are:
FETCH + IDHACK
FETCH + IXSCAN
LIMIT + (FETCH + IXSCAN)
PROJECTION + IXSCAN
SHARDING_FILTER + IXSCAN
etc.
You do not want to see stages such as:
COLLSCAN (full collection scan), SORT (sort without a supporting index), unreasonable SKIP, SUBPLAN ($or without an index)
For count queries, you want to see:
COUNT_SCAN
What you don't want to see is:
COUNTSCAN
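To see which of the stages above a query actually used, you can walk the winningPlan tree returned by explain(). The nested inputStage/inputStages shape below matches explain output; the sample plan object itself is made up for illustration:

```javascript
// Collect all stage names from an explain() winningPlan tree, outermost first.
// Plans nest through inputStage (single child) or inputStages (multiple children,
// e.g. under a $or / SUBPLAN).
function collectStages(plan) {
  if (!plan) return [];
  let stages = [plan.stage];
  if (plan.inputStage) stages = stages.concat(collectStages(plan.inputStage));
  if (plan.inputStages) {
    for (const child of plan.inputStages) stages = stages.concat(collectStages(child));
  }
  return stages;
}

// A desirable plan: FETCH + IXSCAN (sample object, not real explain output)
const winningPlan = { stage: "FETCH", inputStage: { stage: "IXSCAN", indexName: "age_1" } };
```

collectStages(winningPlan) yields ["FETCH", "IXSCAN"]; a result containing COLLSCAN signals a full collection scan.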
4: For performance analysis with explain(), refer to the official MongoDB documentation.