Performance Issues of MySQL with Python

sandy

Hi All,

I am a newbie to MySQL and Python. First of all, I would like to
know what the general performance issues (if any) are of using MySQL
with Python.

By performance, I mean: how fast will it be, what memory overhead is
involved, etc. during database-specific operations (retrieval, update,
insert, etc.) when MySQL is used with Python.

Any solutions to overcome these issues (again, if any)?

Thanks and Regards,
Sandeep
 
Larry Bates

Wow, you give us too much credit out here. From your
post we can't determine anything about what you plan
to do (how is your data structured, how much data do
you have, can it be indexed to speed up searching...).

Python and MySQL work together beautifully. ANY SQL
database's performance is more about properly defining
the tables and indexes where appropriate than about
the language used to talk to it. You can write compiled C (or any
other language for that matter) that calls a poorly
designed database and get terrible performance. A
well-thought-out database structure with good choices
for indexes can give you outstanding performance when
called from any language. Ultimately it comes down
to building a SQL query, passing it to the SQL
database, and getting back results. The front-end language
isn't all that important (unless you must post-process
the data in the program a lot). It is not uncommon
to get 100x or 1000x speed increases by adding
proper indexes to tables or refactoring master-detail
table relationships in any SQL database. You can't
get that by changing languages or even purchasing
faster hardware.
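The point about indexes can be made concrete in a few lines. This sketch uses Python's built-in sqlite3 module rather than MySQL (the table and index names are made up for illustration), but the principle is the same in any SQL database: once an index exists, the query planner switches from a full table scan to a direct index lookup.

```python
import sqlite3

# Illustration of the index effect using Python's built-in sqlite3;
# the same principle applies to MySQL or any other SQL database.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (customer_id INTEGER, total REAL)")
con.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [(i % 1000, i * 1.5) for i in range(10000)],
)

query = "SELECT * FROM orders WHERE customer_id = 42"

# Without an index, the planner must scan every row:
before = con.execute("EXPLAIN QUERY PLAN " + query).fetchall()

con.execute("CREATE INDEX idx_customer ON orders (customer_id)")

# With the index, the planner does a direct lookup instead:
after = con.execute("EXPLAIN QUERY PLAN " + query).fetchall()

print(before[0][-1])  # e.g. a SCAN of the whole table
print(after[0][-1])   # e.g. a SEARCH using idx_customer
```

The speed-up is invisible on 10,000 rows but becomes the 100x-1000x difference Larry describes as tables grow, because a scan is O(n) per query while an index lookup is roughly O(log n).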

MySQL is particularly good when your read operations
outnumber your writes. The US Census Bureau uses MySQL
because they have static data that gets read over and
over (even though my understanding is that they have
an Oracle site license). Applications that are transaction
oriented (e.g. accounting) can sometimes benefit
from the stronger transactional support of Oracle,
DB2, or Postgres. Later versions of MySQL have added
transactions, but the support is IMHO a step behind
the big guys in this area. Also, if you want to be
highly scalable so as to provide for clustering of
database servers, etc., MySQL doesn't do that well
in this area, YET.

I hope my random thoughts are helpful.

Larry Bates
 
Thomas Bartkus

sandy said:
> Hi All,
>
> I am a newbie to MySQL and Python. First of all, I would like to
> know what the general performance issues (if any) are of using MySQL
> with Python.
>
> By performance, I mean: how fast will it be, what memory overhead is
> involved, etc. during database-specific operations (retrieval, update,
> insert, etc.) when MySQL is used with Python.
>
> Any solutions to overcome these issues (again, if any)?

There are no "general performance issues" with respect to "using MySQL with
Python".

The use of Python as a programming front end does not impact the performance
of whatever database server you might select. The choice of MySQL as your
database server does not impact the effectiveness of whatever front-end
programming language you select. The two functions, database server and
programming language, do not interact in ways that raise unique performance
issues.

You can choose each one without worrying about the other. They are two quite
separate design choices.

Thomas Bartkus
 
Andy Dustman

There aren't any "issues", but there are a few things to keep in mind.

First of all, prior to 4.1, MySQL does no parameter binding, which
means that the parameters must be inserted into your SQL statements as
literals. MySQLdb will do this for you automatically, but keep in mind
that you will be creating a string that is as big as your original SQL
statement plus the size of all the parameters. If you are doing a large
INSERT (via executemany()), this could be pretty big. However, this is
no worse a problem with Python than it is with anything else.
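To make the string blow-up concrete, here is a rough pure-Python sketch of the interpolation the driver performs before 4.1. The `literal` helper below is a simplified stand-in for MySQLdb's real escaping (which also handles NULL, dates, bytes, and so on); the table and data are made up for illustration.

```python
def literal(value):
    """Simplified stand-in for how a driver escapes one parameter."""
    if value is None:
        return "NULL"
    if isinstance(value, str):
        return "'" + value.replace("\\", "\\\\").replace("'", "\\'") + "'"
    return str(value)

sql = "INSERT INTO people (name, age) VALUES (%s, %s)"
rows = [("O'Brien", 42), ("sandy", 25)]

# Without server-side parameter binding, each parameter becomes literal
# text spliced into the statement, so the final string grows with the data:
statements = [sql % tuple(literal(v) for v in row) for row in rows]

print(statements[0])
total_size = sum(len(s) for s in statements)  # scales with the data volume
```

Note that the quote in "O'Brien" gets escaped rather than terminating the string, which is exactly why you should let the driver do this interpolation instead of formatting SQL by hand.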

MySQL-4.1 *does* support parameter binding, but MySQLdb does not yet.
The next major release will, but that is months off.

The other factor to account for is your result set. By default, MySQLdb
uses the mysql_store_result() C API function, which fetches the entire
result set into the client. The bigger this is, the longer it will take
for your query to run. You can also use a different cursor type
which uses mysql_use_result(), which fetches the result set row by row.
The drawback to this is that you must fetch the entire result set
before you can issue another query. But again, this is not an issue
specific to Python.
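The trade-off can be sketched in plain Python. The function below is illustrative, not MySQLdb's API (the real streaming cursor is MySQLdb.cursors.SSCursor): a buffered fetch materializes every row at once, while a streaming fetch yields rows one at a time and holds only one in client memory.

```python
def fake_server_rows(n):
    """Stand-in for a result set still held on the server."""
    for i in range(n):
        yield (i, "row-%d" % i)

# mysql_store_result() style: the whole result set lands in client
# memory before you see the first row.
buffered = list(fake_server_rows(100000))

# mysql_use_result() style: rows stream one at a time, so memory use is
# constant -- but you must exhaust the stream before the connection can
# run another query.
streaming = fake_server_rows(100000)
first = next(streaming)
```

The buffered style is simpler and fine for modest result sets; the streaming style is the one to reach for when a query can return millions of rows.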

Make sure you read PEP-249 and then the User's Guide.
 
Haibao Tang

There is no performance overhead except when you are dragging a huge
chunk of information out of the database; in that case, Python converts
the data to its tuple data type, which adds one more processing step.

I found this when I didn't have the privilege to do "mysql> SELECT *
FROM TBL INTO OUTFILE;". I used Python MySQLdb first, which I later
found noticeably slower than using >>> system("echo 'USE db;
SELECT * FROM TBL;' | mysql > outfile")

But this is a minor case.
 
Andy Dustman

Well, it does more than that. It converts each column from a string
(because MySQL returns all columns as strings) into the appropriate
Python type. Then you were converting all the Python types back into
strings. So it's no mystery that using the command line client is
faster, since it would take the string results and write them out
directly. (I assume it does this; there's no rational reason for it to
do otherwise.)
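A small sketch of the round trip Andy describes (the values are made up for illustration): MySQL delivers every column as text, the driver converts it to a Python type, and dumping it to a file converts it straight back to text. The command-line client skips both conversions, which is the whole speed difference.

```python
# What the wire protocol delivers: every column as a string.
raw_row = ("42", "3.14", "2005-01-01")

# Driver-style conversion into proper Python types (simplified):
typed_row = (int(raw_row[0]), float(raw_row[1]), raw_row[2])

# Dumping to a file turns everything back into strings, so the int and
# float made a pointless round trip through Python types.
dumped = "\t".join(str(v) for v in typed_row)
```

If all you need is a raw text export, piping through the command-line client avoids both conversions; the Python types only pay for themselves when you actually compute with the values.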
 
