postgresql modules and performance

Mage

Hello,

I started to write my PostgreSQL layer. I tried pyPgSQL and PyGreSQL. I
made a *very minimal* performance test and comparison with the same
thing in PHP. Table "movie" has 129 records and many fields.

I found PyGreSQL / DB-API / fetchall horribly slow (32 sec in my test).
PHP did 13 secs and gave the result in an associative array. Maybe I did
something wrong.
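
For comparison, PyGreSQL's classic pg interface (as opposed to the pgdb
DB-API wrapper) can return rows as dictionaries directly; a minimal,
untested sketch, assuming the same database:

import pg

db = pg.connect(dbname='test', host='localhost')
# dictresult() returns a list of dicts keyed by field name,
# much like PHP's associative arrays
rows = db.query('select * from movie').dictresult()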

pyPgSQL / DB-API raised a FutureWarning and didn't work.
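
If the FutureWarning itself is what broke it (some setups escalate
warnings to errors), filtering it out before the import might be worth
trying; a sketch using the standard warnings module:

import warnings
# silence FutureWarning before pyPgSQL is imported
warnings.filterwarnings('ignore', category=FutureWarning)

from pyPgSQL import PgSQL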

pyPgSQL / libpq worked well. I can create a PHP-like dictionary result
in 14 secs.

Here is the code. It's only a test...

Mage

------ test.py --------
import datetime

import mpypg

print 'This is a python postgresql module'

db = mpypg.mpypgdb('dbname=test host=localhost')

def test():
    # query only; the dict conversion cost is what we are timing
    res = db.query('select * from movie')

def test2():
    # same query, but show the shape of the result
    res = db.query('select * from movie')
    #print res['fields']
    print len(res['rows'])
    print res['rows'][1]

start = datetime.datetime.now()

for i in range(100):
    test()

end = datetime.datetime.now()

print end - start

test2()
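
As an aside, the standard library's timeit module is a more robust way
to time loops like the one above; a minimal sketch using the test()
function from this file:

import timeit

t = timeit.Timer('test()', 'from __main__ import test')
print t.timeit(number=100)  # total seconds for 100 calls, as a float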

------ mpypg.py --------
from pyPgSQL import libpq
import sys
import pgdb

mpypg_connect_error_message = 'Could not connect to the database'
mpypg_query_error_message = 'Could not run the query'


class mpypgdb:
    'my database class (pyPgSQL low-level libpq interface)'

    def __init__(self, conninfo):
        try:
            self.database = libpq.PQconnectdb(conninfo)
        except Exception:
            mpypg_error_msg(mpypg_connect_error_message)

    def query(self, query):
        try:
            res = self.database.query(query)
            fields = tuple([res.fname(i) for i in range(res.nfields)])

            # build a PHP-style result: one dict per row, keyed by field
            # name (column-oriented variants keyed by field name were
            # also tried here, with similar timings)
            rows = []
            for i in range(res.ntuples):
                rows.append(dict([(fields[j], res.getvalue(i, j))
                                  for j in range(len(fields))]))

            res.clear()
            return {'fields': fields, 'rows': rows}
        except Exception:
            mpypg_error_msg(mpypg_query_error_message)


class mpgdb:
    'my database class (PyGreSQL DB-API interface)'

    def __init__(self, conninfo):
        try:
            # conninfo is ignored here; connects to the test database directly
            self.database = pgdb.connect(database='test')
        except Exception:
            mpypg_error_msg(mpypg_connect_error_message)

    def query(self, query):
        try:
            cursor = self.database.cursor()
            cursor.execute(query)  # DB-API execute() returns nothing useful

            #result = cursor.fetchall()
            result = [cursor.fetchone() for i in range(cursor.rowcount)]
            return result
        except Exception:
            mpypg_error_msg(mpypg_query_error_message)


def mpypg_error_msg(message):
    'Database error handler'
    sys.exit(message)
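
For the DB-API path, the column names are available from the standard
cursor.description attribute, so a PHP-style associative result can be
built there as well; a minimal sketch (the dict_rows helper name is
mine):

def dict_rows(cursor):
    'Return all remaining rows as dicts keyed by column name.'
    names = [d[0] for d in cursor.description]
    return [dict(zip(names, row)) for row in cursor.fetchall()]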
 
Reinhold Birkenfeld

Mage said:
I found PyGreSQL / DB-API / fetchall horribly slow (32 sec in my test).
PHP did 13 secs and gave the result in an associative array. Maybe I did
something wrong.

Have you tried psycopg?
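
For reference, a minimal sketch of that suggestion using the
present-day psycopg2 API (the psycopg of this era had a different API;
connection parameters are assumed):

import psycopg2
import psycopg2.extras

conn = psycopg2.connect('dbname=test host=localhost')
# RealDictCursor makes fetchall() return dicts keyed by column name
cur = conn.cursor(cursor_factory=psycopg2.extras.RealDictCursor)
cur.execute('select * from movie')
rows = cur.fetchall()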

Reinhold
 
