
DB insert performance melee: PostgreSQL vs MySQL vs MongoDB vs Oracle


I came across Dege's comparison of PostgreSQL and MongoDB inserting 1 million records and wanted to verify it myself, but I couldn't make sense of the conclusion at all!

First of all, I tested on a virtual machine: 4 cores, 8 GB RAM plus storage, CentOS 7.2, MongoDB 3.2.9, and a PostgreSQL 9.6 development build, all the latest versions at the time, freshly installed with no tuning.

Everything was driven by Python. MongoDB's result was 371 s, close to Dege's figure.

But PG was strange: after running for more than ten minutes, the count had only reached 560,000 records, astonishingly slow.

My guess was a storage limitation or a bug in the development build, something a test on a physical machine could rule out.

Hardware: an ordinary PC running Windows 10.

PG: the 9.5.2 Windows build downloaded from the official site, freshly installed with no settings changed:

This run did finish, but at 239 seconds there is still a gap to Dege's result. One difference is in the script: whether the connection goes over a unix_socket (I don't know where to find the unix_socket connection name).
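
For what it's worth, psycopg2 connects over a Unix-domain socket when the host parameter is a directory path rather than an IP address; a minimal sketch, assuming the common Linux socket directory /var/run/postgresql:

import psycopg2

# Assumed socket directory; the actual value is unix_socket_directories in postgresql.conf
conn = psycopg2.connect(database="test1", user="postgres",
                        host="/var/run/postgresql", port="5432")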

MongoDB wasn't installed on this machine yet, but I had a portable MySQL build downloaded long ago, version number 6.0.5, unofficial, but it works:

A quick test ran in an unexpected 102 seconds! (I later installed MariaDB 5.5.50 on the virtual machine, and that test also came in at 102 seconds!)

This is the test script for PG:

import psycopg2
import time

conn = psycopg2.connect(database="test1", user="postgres", password="", host="127.0.0.1", port="5432")
conn.autocommit = True   # autocommit on: every insert commits in its own transaction
cur = conn.cursor()

start_t = time.time()
print("START: " + str(start_t))
for i in range(0, 1000000):   # 1 million single-row inserts
    cur.execute("insert into tt values (%(id)s, 'digoal.zhou', 32, 'digoal@126.com', '276732431')", {"id": i})
stop_t = time.time()
print("STOP: " + str(stop_t))
print(stop_t - start_t)

This is the test script for MySQL:

import MySQLdb
import time

conn = MySQLdb.connect(host="localhost", user="root", passwd="", db="test")
conn.autocommit(True)    # MySQLdb exposes autocommit as a method, not an attribute
cur = conn.cursor()

start_t = time.time()
print("START: " + str(start_t))
for i in range(0, 1000000):   # 1 million single-row inserts
    cur.execute("insert into tt values (%s, %s, %s, %s, %s)",
                (i, "digoal.zhou", 32, "digoal@126.com", "276732431"))
stop_t = time.time()
print("STOP: " + str(stop_t))
print(stop_t - start_t)

Unified table creation script (runs on Oracle, MySQL, and PG):

create table tt (id int, username varchar(20), age int, email varchar(20), qq varchar(20))

MongoDB test result (MongoDB 3.2.9 Windows build, via pymongo):

297 seconds, better than on the virtual machine. My machine's performance isn't bad after all ^_^
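
The MongoDB script itself isn't shown in the original; a sketch of the equivalent pymongo loop, assuming a database named test and a collection tt mirroring the table, would be:

import time
import pymongo

client = pymongo.MongoClient("localhost", 27017)
coll = client.test.tt    # assumed database "test", collection "tt"
start_t = time.time()
for i in range(0, 1000000):
    coll.insert_one({"id": i, "username": "digoal.zhou", "age": 32,
                     "email": "digoal@126.com", "qq": "276732431"})
print(time.time() - start_t)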

Since Oracle 12.1 happens to be installed on this machine, it joined the battle as well:

Unexpectedly, the result was 368 seconds! Test results attached (python + cx_Oracle):
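
The cx_Oracle script isn't shown either; the equivalent loop would look roughly like this, with hypothetical connection credentials and service name:

import time
import cx_Oracle

conn = cx_Oracle.connect("test/test@localhost/orcl")   # hypothetical credentials/service
conn.autocommit = True
cur = conn.cursor()
start_t = time.time()
for i in range(0, 1000000):
    cur.execute("insert into tt values (:1, :2, :3, :4, :5)",
                (i, "digoal.zhou", 32, "digoal@126.com", "276732431"))
print(time.time() - start_t)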

How to explain this? Frankly, my private ranking was Oracle > PG > MySQL, but MySQL's strength slapped me in the face ^_^

Analyzing Oracle's result, autocommit appears to be the expensive part; the problem is that the other databases were committing automatically too.

In PL/SQL, with a commit per insert, the result is similar to PG's at 244 seconds:

Changing to a single commit at the end brings it down to 85 seconds:
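
The PL/SQL used isn't shown; a sketch of what such a loop presumably looks like, with the commit placement deciding which variant you get:

begin
  for i in 1..1000000 loop
    insert into tt values (i, 'digoal.zhou', 32, 'digoal@126.com', '276732431');
    -- commit here instead for the per-insert variant (the 244 s run)
  end loop;
  commit;   -- single commit at the end (the 85 s run)
end;
/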

However, with Oracle at "full firepower", the best result is 1.28 s. Of course, that is for reference only; the plain row-by-row style is the fairer comparison:
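
The article doesn't say what "full firepower" means here. One standard Oracle technique in this speed class is FORALL bulk binding, sketched below purely as an assumption:

declare
  type t_ids is table of pls_integer index by pls_integer;
  ids t_ids;
begin
  for i in 1..1000000 loop
    ids(i) := i;
  end loop;
  forall i in 1..1000000   -- one round trip to the SQL engine for all rows
    insert into tt values (ids(i), 'digoal.zhou', 32, 'digoal@126.com', '276732431');
  commit;
end;
/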

Back on the virtual machine, the Python test took 3001 seconds to insert 1 million records! I can't explain that for now. Testing with PL/pgSQL instead took only 5 seconds, which restored a little confidence:

PL/pgSQL code:

create or replace function pgins2(num bigint) returns bigint as $$
declare
    ii integer;
    beg_tm timestamp;
    end_tm timestamp;
    time_consuming bigint;
begin
    beg_tm := clock_timestamp();   -- now() is frozen at transaction start, so it cannot time the loop
    for ii in 1..num loop
        insert into tt values (ii, 'digoal.zhou', 32, 'digoal@126.com', '276732431');
    end loop;
    end_tm := clock_timestamp();
    select round(extract(epoch from end_tm) - extract(epoch from beg_tm)) into time_consuming;
    return time_consuming;
end;
$$ language plpgsql;
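
Once created, the function is invoked as plain SQL:

select pgins2(1000000);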

2017-02-21 Update:

I retested PG 9.2.8 and the latest official release, 9.6.2, on a virtual machine. Both take more than 2500 seconds with autocommit on, while MariaDB on the same machine takes only 159 s.

But with autocommit turned off, committing once after all the inserts, PG 9.6 took only 129 seconds:
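
Concretely, assuming the psycopg2 script above, that change is just:

conn.autocommit = False   # back to psycopg2's default: one open transaction
for i in range(0, 1000000):
    cur.execute("insert into tt values (%(id)s, 'digoal.zhou', 32, 'digoal@126.com', '276732431')", {"id": i})
conn.commit()             # single commit after all the inserts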

That is a much more normal result. Why autocommit mode differs so much between the virtual and physical machines is unclear; it may have to do with PG's transaction behavior, since under autocommit every insert is its own transaction that must be flushed to the WAL at commit.

To be fair, I removed autocommit from the MariaDB script as well and ran it again:

There seems to be little difference between autocommit and non-autocommit in MariaDB; perhaps it optimizes this internally.

All in all, I think both PG and MySQL performed very well.
