MongoDB performance Test and Python Test Code

2025-04-10 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/01 Report--

I recently joined a company project that needs to serve large-scale queries on an online platform with fast response times. The estimated total data volume is about 200-300 million records, with database concurrency of about 1,500 queries per second now and about 3,000 per second expected a year after launch. After a difficult choice between Redis and MongoDB, I settled on MongoDB, mainly for its horizontal scalability and for Map/Reduce on GridFS.

Personally, I actually prefer Redis: its concurrent query throughput, which exceeds even memcached, is very attractive. But its persistence and cluster scalability were not a good fit for the business requirements, so I chose MongoDB in the end.

Below are the MongoDB test code and results. The company standardizes on CentOS, but since I am a FreeBSD supporter, I ran the tests on both FreeBSD and CentOS.

The insert script was copied from the Internet; the query script I wrote myself.

Insert program

#!/usr/bin/env python
from pymongo import Connection
import time, datetime

connection = Connection('127.0.0.1', 27017)
db = connection['hawaii']

# time recorder
def func_time(func):
    def _wrapper(*args, **kwargs):
        start = time.time()
        func(*args, **kwargs)
        print func.__name__, 'run:', time.time() - start
    return _wrapper

@func_time
def insert(num):
    posts = db.userinfo
    for x in range(num):
        post = {"_id": str(x),
                "author": str(x) + "Mike",
                "text": "My first blog post!",
                "tags": ["mongodb", "python", "pymongo"],
                "date": datetime.datetime.utcnow()}
        posts.insert(post)

if __name__ == "__main__":
    # loop 5 million times
    num = 5000000
    insert(num)
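For comparison, the same insert loop can be sketched with a modern pymongo (3.x+) API, where MongoClient replaces Connection and insert_many batches documents instead of paying one round trip per insert. The helper names (make_post, insert_batched) and the batch size are my own; the connection details are taken from the script above.

```python
import datetime

def make_post(x):
    # Build the same document shape as the benchmark script above.
    return {"_id": str(x),
            "author": str(x) + "Mike",
            "text": "My first blog post!",
            "tags": ["mongodb", "python", "pymongo"],
            "date": datetime.datetime.utcnow()}

def insert_batched(collection, num, batch_size=1000):
    # insert_many sends documents in batches, avoiding one network
    # round trip per document (most of the cost of the loop above).
    batch = []
    for x in range(num):
        batch.append(make_post(x))
        if len(batch) == batch_size:
            collection.insert_many(batch)
            batch = []
    if batch:
        collection.insert_many(batch)

# Usage against a live server (assumed to match the script above):
#   from pymongo import MongoClient
#   client = MongoClient("127.0.0.1", 27017)
#   insert_batched(client["hawaii"]["userinfo"], 5000000)
```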

Query program

#!/usr/bin/env python
from pymongo import Connection
import time, datetime
import random

connection = Connection('127.0.0.1', 27017)
db = connection['hawaii']

def func_time(func):
    def _wrapper(*args, **kwargs):
        start = time.time()
        func(*args, **kwargs)
        print func.__name__, 'run:', time.time() - start
    return _wrapper

# @func_time
def randy():
    rand = random.randint(1, 5000000)
    return rand

@func_time
def mread(num):
    find = db.userinfo
    for i in range(num):
        rand = randy()
        # query by a random author key
        find.find({"author": str(rand) + "Mike"})

if __name__ == "__main__":
    # loop 1 million times
    num = 1000000
    mread(num)
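Two details of the query script are worth noting. First, pymongo's find() only builds a lazy cursor; find_one(), or iterating the cursor, is what actually forces a round trip to the server. Second, the insert script never creates a secondary index on "author", so each lookup scans the collection unless one is added. A sketch of both points, with the server address and collection names taken from the scripts above and the helper names my own:

```python
import random

def random_author_key(rand):
    # The synthetic key the benchmark looks up: "<n>Mike".
    return str(rand) + "Mike"

def random_query_filter():
    # Same distribution as the script's randy(): 1..5000000 inclusive.
    return {"author": random_author_key(random.randint(1, 5000000))}

# Usage against a live server (names assumed from the scripts above):
#   from pymongo import MongoClient
#   db = MongoClient("127.0.0.1", 27017)["hawaii"]
#   db.userinfo.create_index("author")  # avoids a collection scan per lookup
#   doc = db.userinfo.find_one(random_query_filter())  # forces the round trip
```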

Delete program

#!/usr/bin/env python
from pymongo import Connection
import time, datetime

connection = Connection('127.0.0.1', 27017)
db = connection['hawaii']

def func_time(func):
    def _wrapper(*args, **kwargs):
        start = time.time()
        func(*args, **kwargs)
        print func.__name__, 'run:', time.time() - start
    return _wrapper

@func_time
def remove():
    posts = db.userinfo
    print 'count before remove:', posts.count()
    posts.remove({})
    print 'count after remove:', posts.count()

if __name__ == "__main__":
    remove()

Result set

Operation                 CentOS    FreeBSD
Insert 5 million          394 s     431 s
Random query 1 million    28 s      18 s
Delete 5 million          224 s     278 s
CPU usage                 25-30%    20-22%

CentOS wins on inserts and deletes; FreeBSD, helped by UFS2, wins on reads. Since the machine will serve as a query server, fast reads are the advantage that matters, but the decision was not mine to make, and in the end CentOS was chosen.

mongostat was running throughout the test, and the two systems showed similar concurrency. We also tested inserts mixed with concurrent queries; the results were similar, with combined throughput of roughly 15,000-25,000 operations per second. Performance is still very good.

That said, insert performance does degrade noticeably at larger data volumes. On CentOS, inserting 50 million records took nearly two hours, about 6,300 seconds, almost 50 per cent slower than the 5-million-record insert rate. Query speed, however, stayed about the same.
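A quick sanity check on those throughput figures, using the timings quoted above (394 s for 5 million records, about 6,300 s for 50 million); the helper and the derived percentages are mine:

```python
def docs_per_second(num_docs, seconds):
    # Simple throughput: documents inserted per second.
    return num_docs / float(seconds)

rate_5m = docs_per_second(5000000, 394)      # roughly 12,700 docs/s
rate_50m = docs_per_second(50000000, 6300)   # roughly 7,900 docs/s
slowdown = 1 - rate_50m / rate_5m            # about a 37% drop in throughput

# In time terms: ten runs at the 5-million pace would take 3,940 s,
# so 6,300 s is roughly 60% more time.
```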

The results may serve as a reference for anyone who needs them.

However, the comparison is not entirely fair: the FreeBSD machine has a weaker configuration.

CentOS: 16 GB of memory, two 8-core Xeon 5606 CPUs, a Dell brand server.

FreeBSD: 8 GB of memory, one 4-core Xeon 5506, a no-name 1U server.

On identical hardware, I expect FreeBSD would perform even better.

