NoSQL stores keep their data in memory, whereas in a traditional RDBMS, the first time a select runs, any pages it needs that are not already in memory (e.g. MySQL's InnoDB buffer pool) must first be read from disk before computation starts. In principle this has to be slower than NoSQL, but how much slower? A full table scan with grouped aggregation makes a good head-to-head test.
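Whether a given run was served from memory or from disk can be verified with InnoDB's status counters: Innodb_buffer_pool_reads counts the logical reads that missed the buffer pool and had to go to disk. A minimal sketch of that check, assuming the era-appropriate MySQLdb driver and hypothetical connection credentials (neither appears in the original test):

# Sketch: compare InnoDB disk reads before and after the query to see
# whether it was served from the buffer pool or from disk.
# Assumes the MySQLdb driver and hypothetical credentials.
import MySQLdb

def disk_reads(cur):
    # Innodb_buffer_pool_reads = logical reads that missed the buffer
    # pool and were read from disk
    cur.execute("SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_reads'")
    return int(cur.fetchone()[1])

conn = MySQLdb.connect(host="127.0.0.1", user="root", passwd="secret", db="db_test")
cur = conn.cursor()
before = disk_reads(cur)
cur.execute("SELECT imsi, COUNT(*) FROM ul_inbound GROUP BY imsi HAVING COUNT(*) > 5000")
cur.fetchall()
# A large delta means the scan came from disk rather than from memory
print "disk reads during query: %d" % (disk_reads(cur) - before)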
The test environment is as follows:
Server: Aliyun CVM (Ubuntu 14.04, 1-core CPU, 1 GB memory)
MySQL: 5.5.41
MySQL server parameters: innodb_buffer_pool_size = 512M (everything else left at defaults)
MongoDB: 2.4.9
The test object is a table of more than 10 million rows with a single column, imsi.
Create the test objects in MySQL and load the table (a sketch for generating a comparable test file follows the SQL):
create database db_test;
create table ul_inbound (imsi varchar(15));
load data infile '/tmp/inbound.sub.log' into table ul_inbound (imsi);
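The log file itself is not published with the article, so reproducing the test requires building one. A sketch that fabricates a comparable file of 15-digit IMSIs (the pool size, row count and distribution are all assumptions, chosen so that some IMSIs repeat often enough to pass the count(*) > 5000 filter):

# Sketch: generate a ~10-million-line test file of random 15-digit IMSIs.
# The original log file is not included in the article, so the pool size
# and row count here are assumptions.
import random

pool = [str(random.randint(10 ** 14, 10 ** 15 - 1)) for _ in range(2000)]
with open('/tmp/inbound.sub.log', 'w') as f:
    for _ in xrange(10 * 1000 * 1000):
        # drawing from a small pool makes individual IMSIs repeat
        # thousands of times, as in the real data
        f.write(random.choice(pool) + '\n')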
Test results:
mysql> select imsi, count(*) from ul_inbound group by imsi having count(*) > 5000;
+-----------------+----------+
| imsi            | count(*) |
+-----------------+----------+
| 250017831851018 |     5166 |
| 283100106389033 |    21291 |
| 302720304966683 |    41598 |
| 302720502859575 |     8787 |
| 302720505260881 |     7932 |
| 310170801568405 |     6018 |
| 310170802085117 |    13452 |
| 310170802299726 |    13824 |
| 310410577936479 |     5772 |
| 310410610359421 |     5790 |
| 310410661857060 |     7038 |
| 310410669731926 |     7788 |
| 310410671702203 |     6705 |
| 310410673812082 |     5403 |
...
53 rows in set (1 min 47.73 sec)
It took 107 seconds.
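After this first run the scanned pages are sitting in the buffer pool, so repeating the query shows how much of those 107 seconds was disk I/O. A timing sketch under the same assumed driver and credentials as above:

# Sketch: time the same GROUP BY twice; the second run can be served
# from the now-warm buffer pool, isolating the disk-read cost.
import time
import MySQLdb

conn = MySQLdb.connect(host="127.0.0.1", user="root", passwd="secret", db="db_test")
cur = conn.cursor()
query = "SELECT imsi, COUNT(*) FROM ul_inbound GROUP BY imsi HAVING COUNT(*) > 5000"
for run in (1, 2):
    start = time.time()
    cur.execute(query)
    cur.fetchall()
    print "run %d: %.2f seconds" % (run, time.time() - start)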
Import the data into MongoDB (into the sub collection of the my_mongodb database; a batched variant follows the script):
#!/usr/bin/python
# Python: 2.7.6
# Filename: mongodb.py
import pymongo

# Connection() with no arguments defaults to localhost:27017
conn = pymongo.Connection("127.0.0.1", 27017)
db = conn.my_mongodb

# insert one document per line of the log file
for imsi in open('inbound.sub.log'):
    imsi = imsi.strip('\n')
    db.sub.insert({'imsi': imsi})
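Loading ten million documents one at a time spends most of its time on client-server round trips. In the pymongo 2.x driver, insert() also accepts a list of documents, so a batched sketch of the same loader (the 1000-document batch size is an arbitrary choice) should be considerably faster:

# Sketch: batched variant of the loader above. pymongo 2.x insert()
# accepts a list of documents; the batch size is an arbitrary choice.
import pymongo

conn = pymongo.Connection("127.0.0.1", 27017)
db = conn.my_mongodb
batch = []
for imsi in open('inbound.sub.log'):
    batch.append({'imsi': imsi.strip('\n')})
    if len(batch) == 1000:
        db.sub.insert(batch)
        batch = []
if batch:
    db.sub.insert(batch)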
> use my_mongodb
switched to db my_mongodb
> db.sub.aggregate([{$group: {_id: "$imsi", count: {$sum: 1}}}, {$match: {count: {$gt: 5000}}}])
{
    "result" : [
        {
            "_id" : "401025006559964",
            "count" : 17982
        },
        {
            "_id" : "310410757405261",
            "count" : 7269
        },
        ...
It's done in about 10 seconds.
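The same pipeline can also be issued from Python. A sketch assuming the same pymongo 2.x driver as the loader, where aggregate() returns the raw command response and the matching groups sit under its 'result' key:

# Sketch: the same aggregation from Python with a pymongo 2.x driver
# against MongoDB 2.4; groups are under the 'result' key.
import pymongo

conn = pymongo.Connection("127.0.0.1", 27017)
db = conn.my_mongodb
out = db.sub.aggregate([
    {'$group': {'_id': '$imsi', 'count': {'$sum': 1}}},
    {'$match': {'count': {'$gt': 5000}}},
])
for doc in out['result']:
    print doc['_id'], doc['count']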
Note:
1) This only tests full-table-scan ability; real-world MySQL query performance is also shaped by indexes, server parameters and many other factors.
2) To use MongoDB you must have enough memory; once it runs short and has to fall back on swap, performance degrades badly (a quick way to watch for this is sketched below).
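One way to keep an eye on that from the driver is the serverStatus command, whose mem section reports MongoDB's resident and virtual sizes in megabytes:

# Sketch: check whether the working set still fits in RAM by reading
# the mem section of serverStatus (values are in megabytes).
import pymongo

conn = pymongo.Connection("127.0.0.1", 27017)
db = conn.my_mongodb
mem = db.command('serverStatus')['mem']
print "resident: %d MB, virtual: %d MB" % (mem['resident'], mem['virtual'])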