Preface
This article gives a detailed analysis of index building in a MongoDB database.
Overview
Creating an index speeds up the queries that can use it, but it also consumes disk space and reduces write performance. It is therefore necessary to judge whether the current indexes are reasonable. There are four tools for doing this:
1. The mongostat tool
2. The profile collection
3. The log
4. explain analysis
Mongostat
mongostat is a status-monitoring tool that ships with mongodb and is used from the command line. It samples the current running state of mongodb at a fixed interval and prints it. If the database suddenly slows down or shows other problems, the first step should be to check its status with mongostat.
It is invoked as follows:
mongostat -h ip:port
[field description]
insert/s: number of objects inserted into the database per second. On a slave, the value is preceded by a *, indicating a replicated operation
query/s: number of query operations per second
update/s: number of update operations per second
delete/s: number of delete operations per second
getmore/s: number of getmore operations per second when querying a cursor
command/s: number of commands executed per second; in a master-slave system two values are shown (for example, 3|0), representing local | replicated commands
dirty: percentage of the cache holding dirty bytes
used: percentage of the cache in use
flushes: number of times a checkpoint was triggered during the polling interval. It is usually 0, with an occasional 1; by measuring the interval between two 1s you can roughly estimate how long a flush takes. Flushes are expensive, so if they are frequent you should find out why
vsize: virtual memory usage, in MB
res: physical memory usage, in MB. res rises slowly; if it often drops suddenly, check whether another program is eating memory
qr: length of the queue of clients waiting to read data from the MongoDB instance
qw: length of the queue of clients waiting to write data to the MongoDB instance
ar: number of active clients performing read operations
aw: number of active clients performing write operations. If ar or aw is large, the DB is blocked and cannot keep up with requests; check for expensive slow queries. If the queries are fine and the load really is that heavy, you may need to add capacity
netIn: network inbound traffic of the MongoDB instance
netOut: network outbound traffic of the MongoDB instance
conn: total number of open connections, the sum of qr, qw, ar, and aw
time: current time
[example]
Insert 100,000 documents and run mongostat to observe the running state of mongodb, as sketched below.
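A minimal sketch of this test, assuming a collection named test and a local instance on the default port (both are placeholders). In the mongo shell:
for (var i = 0; i < 100000; i++) { db.test.insert({ num: i }); }
Then, in another terminal:
mongostat -h 127.0.0.1:27017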
In the mongostat output, the insert value rises sharply while the data is being inserted and drops back to 0 afterwards. The interval between two 1s in the flushes column is very long, which indicates good performance; res rises slowly with no sudden drops, meaning no other program is consuming a large amount of memory; the qr/qw and ar/aw values are very small, indicating that database reads and writes are normal and the load is light. Overall, the mongodb instance is in good health.
Profile
MongoDB can use the profiler to monitor operations and guide optimization.
[level]
First, check whether profiling is enabled.
The following command returns the profiling level, which can be 0, 1, or 2: 0 means off, i.e. no operations are recorded; 1 means slow operations are recorded (the default threshold is 100ms, i.e. operations that take longer than 100ms are logged); 2 means all operations are recorded.
db.getProfilingLevel()
Use the following command to set the profiling level:
db.setProfilingLevel()
Profiling is off by default. The setProfilingLevel() method can enable it with, for example, a 50ms slow-operation threshold:
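A minimal sketch, using the 50ms threshold mentioned above:
db.setProfilingLevel(1, 50)    // level 1, slow-operation threshold of 50ms
db.getProfilingStatus()        // e.g. { "was" : 1, "slowms" : 50 }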
[status]
Operations are logged to the system.profile collection.
View the current profiling log with db.system.profile.find()
op: operation type
ns: namespace
query: query statement
responseLength: length of the returned result
ts: timestamp
millis: execution time (ms)
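For example, a sketch of listing the most recent operations slower than 50ms from the profiler collection (field names as above):
db.system.profile.find({ millis: { $gt: 50 } }).sort({ ts: -1 }).limit(10).pretty()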
[use]
Once profiling is enabled, a very large volume of recorded data will significantly reduce system performance. Profiling is therefore typically used during the testing phase before a new system goes live, and during the observation phase right after launch, to check whether the database design and the application's usage patterns are sound. If the profiler records a large number of slow operations, you need to adjust the schema, indexes, and so on, to reduce them.
Journal
When configuring the log file, you can use the verbose parameter to set the level of detail. The value ranges from 'v' to 'vvvvv'; the more v's, the more detailed the log.
The log records the running status of mongodb, including connection time, current operations, etc.
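A hedged example of raising the verbosity when starting mongod (the paths are placeholders):
mongod --dbpath /data/db --logpath /data/log/mongod.log -vvv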
Explain
MongoDB provides the explain command to show how the system handles a query request. Using explain, you can observe whether and how indexes are used to speed up retrieval, and optimize the indexes accordingly.
There are three modes of explain: queryPlanner, executionStats, and allPlansExecution. In real development, executionStats mode is commonly used.
First, insert 100,000 documents
Create an index on the time field
Next, find documents whose time is between 100 and 200, and run explain(), as sketched below
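A minimal sketch of these steps, assuming a collection named test (the collection name is a placeholder):
for (var i = 0; i < 100000; i++) { db.test.insert({ time: i }); }
db.test.createIndex({ time: 1 })
db.test.find({ time: { $gte: 100, $lte: 200 } }).explain("executionStats")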
The result is divided into three parts: queryPlanner, executionStats, and serverInfo. Each part is analyzed in detail below.
[queryPlanner]
queryPlanner.plannerVersion: version
queryPlanner.namespace: the queried collection
queryPlanner.indexFilterSet: whether an index filter exists for the query
queryPlanner.parsedQuery: the query condition
queryPlanner.winningPlan: details of the optimal execution plan returned by the query optimizer for this query
queryPlanner.winningPlan.stage: the stage of the optimal execution plan
queryPlanner.winningPlan.inputStage: describes the child stage, which provides documents and index keys to its parent stage
queryPlanner.winningPlan.inputStage.stage: here it is IXSCAN, indicating an index scan
queryPlanner.winningPlan.inputStage.keyPattern: the index key pattern
queryPlanner.winningPlan.inputStage.indexName: the index name
queryPlanner.winningPlan.inputStage.isMultiKey: whether this is a multikey index; the value here is false. If the index is built on an array field, this will be true
queryPlanner.winningPlan.inputStage.direction: the scan order, here forward; if .sort({time: -1}) is used, backward is shown
queryPlanner.winningPlan.inputStage.indexBounds: the range of the index scanned
queryPlanner.rejectedPlans: the other candidate execution plans
[executionStats]
executionStats.executionSuccess: whether the execution succeeded
executionStats.nReturned: the number of documents returned by the query
executionStats.totalKeysExamined: the number of index entries scanned
executionStats.totalDocsExamined: the number of documents scanned
executionStats.executionStages.stage: the scan type
executionStats.executionTimeMillis: the overall query time
executionStats.executionStages.executionTimeMillisEstimate: the time taken to retrieve documents and obtain data via the index
executionStats.executionStages.inputStage.executionTimeMillisEstimate: the time taken to scan the index
[serverInfo]
serverInfo.host: hostname
serverInfo.port: port
serverInfo.version: MongoDB version
serverInfo.gitVersion: git version
[performance Analysis]
1. Execution time
The smaller the executionTimeMillis value, the better.
2. Number of entries
The ideal state is nReturned = totalKeysExamined = totalDocsExamined, i.e. every scanned index entry and document ends up in the result.
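A hedged sketch of reading these counters for the query above (collection name as before):
var stats = db.test.find({ time: { $gte: 100, $lte: 200 } }).explain("executionStats").executionStats
print(stats.nReturned, stats.totalKeysExamined, stats.totalDocsExamined, stats.executionTimeMillis)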
3. Stage type
The types of stage are listed below:
COLLSCAN: full collection scan
IXSCAN: index scan
FETCH: retrieve the specified documents according to the index
SHARD_MERGE: merge the data returned by each shard
SORT: sorting in memory
LIMIT: use limit to limit the number of returned documents
SKIP: use skip to skip documents
IDHACK: query by _id
SHARDING_FILTER: query sharded data through mongos
COUNT: perform a count operation, such as db.coll.explain().count()
COUNTSCAN: the stage returned when count does not use an index
COUNT_SCAN: the stage returned when count uses an index
SUBPLAN: the stage returned for an $or query that does not use an index
TEXT: the stage returned for a query using a full-text index
PROJECTION: the stage returned when limiting which fields to return
You do not want to see a stage containing the following:
COLLSCAN (full collection scan)
SORT (using sort without an index)
an unreasonably large SKIP
SUBPLAN (an $or query that does not use an index)
COUNTSCAN (a count that does not use an index)
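For example, a sketch of spotting an undesirable stage on a field that has no index (the collection and field names are illustrative):
db.test.find({ name: "foo" }).explain().queryPlanner.winningPlan.stage    // typically "COLLSCAN" when no index on name is usable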
This concludes the detailed analysis of index building in MongoDB. I hope it serves as a useful reference.