This is purely a hands-on write-up: it only summarizes the various ways to check MongoDB's memory usage, and it does not offer an effective way to reduce that usage.
MongoDB is said to be a memory hog, and it really is. My data set has reached about 40 million documents, roughly 140 GB of data, and within a few days it pushes the machine's 130 GB of memory to 60% usage. Until I find a better approach I just keep restarting mongodb. As a novice, I am not sure how this data volume ranks or what a good solution would look like; if you know one, please tell me, thank you.
Why does it eat so much memory? MongoDB uses a memory-mapped storage engine, the Memory Mapped Storage Engine (MMAP). MMAP maps part or all of a disk file directly into memory, so every location in the file has a corresponding address in the process's address space, turning disk I/O operations into memory operations.
The downside is that there is no convenient way to control how much memory MongoDB uses; in practice it takes all available memory, so it is best not to run other services on the same machine as MongoDB. The reason for this write-up is that a web service was running alongside mongodb, the load rose intermittently, and the web service kept being restarted automatically by the server.
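As a rough cross-check of how much is actually mapped and resident, you can ask the operating system directly. A minimal sketch, assuming a Linux host where the process is named mongod and the procps tools (pidof, pmap) are installed:
$ pmap -x $(pidof mongod) | tail -n 1
The last line of pmap's extended output totals the mapped (virtual) size and the resident size of the process in kilobytes.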
If you want to know the memory usage of mongodb, you can do a few things:
1.top
Press Shift+m to sort by memory; barring surprises, mongodb should be in first place.
VIRT: virtual memory
RES: actual memory usage
%MEM: memory usage percentage
Here mongodb has just been restarted, so it does not take up much memory yet.
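To watch only the mongod process rather than the full list, a sketch assuming a Linux host where the process is named mongod:
$ top -p $(pidof mongod)
The VIRT, RES and %MEM columns are the fields described above.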
2.mongostat
mapped: data size mapped into memory
vsize: virtual memory, roughly twice the size of mapped
res: actual memory used; if res often drops suddenly, check whether another program is hogging memory
conn: current number of connections
The vsize and res here match what top reports. A consistently high conn count is also a problem, and if locked db is very high (frequently so), something is seriously wrong. Once, because many Python scripts could not finish running in time and kept opening connection after connection, locked db eventually reached 50%.
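A typical way to keep it running is a sketch like the following; the host, port and 5-second polling interval are placeholders for your own setup:
$ mongostat --host 127.0.0.1 --port 27017 5
Watch how mapped, vsize, res, conn and locked db change over time rather than judging from a single sample.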
3.db.stats()
db: name of the current database
collections: how many collections (tables) the current database has
objects: total number of documents across all collections in the current database
dataSize: total size of all data
storageSize: disk space occupied by all data
indexes: number of indexes
indexSize: total index size
fileSize: size of the files pre-allocated to the database
Strictly speaking, this does not show what is happening in memory; it only shows the overall status of the database.
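To run it, switch to the database you care about in the mongo shell; mydb below is a placeholder name, and the optional scale argument (a standard parameter) reports sizes in the unit you pass:
> use mydb
> db.stats(1024*1024)
Passing 1024*1024 reports dataSize, storageSize and indexSize in megabytes instead of bytes.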
4.db.serverStatus()
This command returns nothing useful when run from an ordinary database; it needs to be run against the admin database. It prints a lot of information; for example, under connections, current is the number of open connections and available is the number of connections still available.
If we use it often, we do not want to display all of that; we usually show only what we need, such as memory usage.
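For example, to print only the connection counters instead of the whole document, a sketch run from the admin database as noted above:
> use admin
> db.serverStatus().connections
This returns just the small sub-document with current and available.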
5.db.serverStatus().mem
virtual: virtual memory size
mapped: data size mapped into memory
The virtual memory here is about twice the mapped size because journaling is enabled, which requires the data to be mapped a second time, roughly doubling the figure. If journaling is turned off, the virtual size will be about the same as the mapped size.
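In the mongo shell that is simply (again from the admin database):
> db.serverStatus().mem
The result contains the virtual and mapped fields described above, reported in megabytes; on MMAPv1 builds it also includes mappedWithJournal, which reflects the extra mapping added by the journal.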
I mostly just use top and mongostat. Whenever memory usage grows large, I restart the service, but that does not solve the problem, it only postpones it. If anyone has a real solution, please let me know. Thank you.