MySQLdb - Query/fetch doesn't return a result when it *theoretically* should


Bernard Lebel

Hello,

I've stumbled onto a serious problem and, quite frankly, I'm getting
desperate. This is a rather long-winded one, so I'll try to get
straight to the point.

I have a Python program that performs MySQL queries against a database.
These queries run at regular intervals. Basically, the program is
looking for rows that match certain criteria; such rows are not always
present. When the program finds one, it takes an action; if nothing is
found, it just keeps this query-fetch cycle going forever.

I don't know where the source of the problem is, but the symptoms are as follows:

When the program starts, the first time it performs a certain
query/fetch, a result is fetched. Later on, once the action triggered
by that first fetch has completed, the program starts querying/fetching
again, but it seems unable to fetch anything else with the same query.

If an entry that the query should return is inserted into the database
after the program has started, the program just can't find it.

Here is a more schematic representation of the symptoms.

What it should do:
1. program starts
2. enter query-fetch cycle
3. find specific entry, take action
4. action done
5. resume query-fetch cycle
6. find specific entry, take action
7. action done
8. resume query-fetch cycle
9. there was nothing to be found, continue cycle
10. find specific entry, take action
11. action done
12. resume query-fetch cycle...........

What it does now:
1. program starts
2. enter query-fetch cycle
3. find specific entry, take action
4. action done
5. resume query-fetch cycle
6. no more entry fetched despite valid entries being in the database

What it also does now:
1. program starts
2. enter query-fetch cycle
3. there was nothing to be found, continue cycle
4. valid entry added by myself, manually, and successfully committed
5. query-fetch cycle continues, but the entry just added is never found

I have looked at the connection and cursor objects, and I just can't
seem to find the cause of this behavior.

Meanwhile, if I run the same queries individually in a command-line
shell, I fetch the entries as expected.
The only way I have found to force the program to find the new entry
is to close the connection and create a new one every time it
performs a transaction. Not very efficient.



To give a little more detail about the implementation: when the
program starts, it launches a little queue server in a separate thread.
This queue server is nothing more than a list. Each time a query has
to be performed, it is added to the queue, and the queue server keeps
checking the queue to see whether it has something to do.

When it finds something, it carries out the entire operation, from
query to fetch/commit. It then stores the result in a dictionary, using
a unique ID as the key, and at that point the element is removed from
the list.

Meanwhile, the function that submitted the query to the queue keeps
checking the dictionary until the result of the operation shows up.
The result is then returned to the original function that submitted
the database transaction.
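
In a nutshell, the mechanism boils down to something like this (a
stripped-down sketch, not the actual fcSql code; the names runQueue,
submitQuery, lQueue and dResults are made up for illustration):

import threading
import time
import uuid

lQueue = []      # pending (unique ID, query string) pairs
dResults = {}    # finished results, keyed by unique ID

def runQueue( oConnection ):
    # Runs in its own thread: pops queued queries and executes them
    # on the one shared connection.
    while 1:
        if lQueue:
            sId, sQuery = lQueue.pop( 0 )
            oCursor = oConnection.cursor()
            oCursor.execute( sQuery )
            dResults[ sId ] = oCursor.fetchall()
            oCursor.close()
        time.sleep( 0.1 )

def submitQuery( sQuery ):
    # Called by the rest of the program: queue the query, then poll the
    # result dictionary until the queue server has filled it in.
    sId = str( uuid.uuid4() )
    lQueue.append( ( sId, sQuery ) )
    while sId not in dResults:
        time.sleep( 0.1 )
    return dResults.pop( sId )

if __name__ == '__main__':
    import MySQLdb
    oConnection = MySQLdb.connect( host = '192.168.10.101', user = 'render',
                                   passwd = 'rnrender', db = 'RenderFarm_BETA' )
    oThread = threading.Thread( target = runQueue, args = ( oConnection, ) )
    oThread.setDaemon( True )   # let the program exit when the main thread is done
    oThread.start()
    print submitQuery( "SELECT LogLevel FROM TB_RENDERNODES WHERE ID = 108" )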



I don't know what other details to provide, other than the sources
themselves...
farmclient_2.0_beta05.py is the top program file.
The other .py files are the imported modules:
- fcSql is the actual database transaction management module
- fcJob has a function called readyGetJob(), which is where the MySQL
query originates. The actual query being used is located on line 202.
The .sql file is used to create the database.

Thanks for any help; let me know if you need more details.

Bernard
 

Alan Franzoni

On Wed, 18 Jan 2006 14:39:09 -0500, Bernard Lebel wrote:

[cut]

1) It would be great if you didn't post four messages in less than an hour
^_^
2) Your code is very long! You can't expect many people to run and read it
all! You should post a very small demo program with the very same problem
as your main software. It'll help us a lot.
3) IMHO your problem looks like something related to isolation levels. You
should check with your DB-adapter docs about the issue. You may need to
manually sync/commit the connection or the cursor. Instead of re-creating
the connection, have you tried just creating a new cursor object at every
query?
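
For example, to see which isolation level the session is running at and
to try a less strict one, something like this should do (a rough,
untested sketch, assuming a freshly opened MySQLdb connection called
oConnection):

oCursor = oConnection.cursor()

# READ COMMITTED makes each new SELECT see freshly committed rows;
# InnoDB's default is REPEATABLE READ, which keeps re-reading the same
# snapshot until the transaction ends.
oCursor.execute( "SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED" )

# Verify what the session is now running at.
oCursor.execute( "SELECT @@tx_isolation" )
print oCursor.fetchone()

oCursor.close()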

If you want to do a quick-test, try any ORM, like sqlalchemy or sqlobject,
and check the results.

--
Alan Franzoni <[email protected]>
-
Togli .xyz dalla mia email per contattarmi.
To contact me, remove .xyz from my email address.
-
GPG Key Fingerprint:
5C77 9DC3 BD5B 3A28 E7BC 921A 0255 42AA FE06 8F3E
 

Bernard Lebel

Hi Alan,

On Wed, 18 Jan 2006 14:39:09 -0500, Bernard Lebel wrote:
1) It would be great if you didn't post four messages in less than an hour
^_^

Yeah, I know :)
But like I said, I've been stuck on this for 3 days, so I needed to get
things off my chest. Sorry to anyone who got a whiff of spam :)

2) Your code is very long! You can't expect many people to run and read it
all! You should post a very small demo program with the very same problem
as your main software. It'll help us a lot.

Okay, thanks for the advice. I have done what you suggested.


In this first example, I fetch an integer value from a database table.
So far so good. However, if I change this value in the database, the
script keeps printing the old value over and over, forever. Notice that
every time the while loop performs an iteration, a new cursor object is
created, as you suggested.

import time
import MySQLdb

if __name__ == '__main__':

    oConnection = MySQLdb.connect( host = '192.168.10.101', user = 'render',
                                   passwd = 'rnrender', db = 'RenderFarm_BETA' )
    sQuery = "SELECT LogLevel FROM TB_RENDERNODES WHERE ID = 108"

    while 1:

        oCursor = oConnection.cursor()
        oCursor.execute( sQuery )
        oResult = oCursor.fetchone()
        print oResult
        oCursor.close()

        print 'waiting 5 seconds'
        time.sleep( 5 )




In the next one, I close the connection and create a new one on every
iteration. This time the script prints the right value when I change it
in the database.


import time
import MySQLdb

if __name__ == '__main__':

    sQuery = "SELECT LogLevel FROM TB_RENDERNODES WHERE ID = 108"

    while 1:
        oConnection = MySQLdb.connect( host = '192.168.10.101', user = 'render',
                                       passwd = 'rnrender', db = 'RenderFarm_BETA' )
        oCursor = oConnection.cursor()
        oCursor.execute( sQuery )
        oResult = oCursor.fetchone()
        print oResult
        oCursor.close()
        oConnection.close()

        print 'waiting 5 seconds'
        time.sleep( 5 )




So I suspected it had something to do with the threaded queue, but I
can see that it doesn't, since the examples above don't use it at all.



Btw, I did not expect anyone to go through the whole code; I just
posted it in case someone spotted something fishy... :)




3) IMHO your problem looks like something related to isolation levels. You
should check with your DB-adapter docs about the issue. You may need to
manually sync/commit the connection or the cursor. Instead of re-creating
the connection, have you tried just creating a new cursor object at every
query?

If you want to do a quick-test, try any ORM, like sqlalchemy or sqlobject,
and check the results.


Okay I'll check these out.


Thanks
Bernard
 

Stephen Prinster

Have you tried doing a "connection.commit()" after each query attempt?
I believe mysqldb also has a connection.autocommit feature.
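
Something like this, based on your first example (an untested sketch):

import time
import MySQLdb

oConnection = MySQLdb.connect( host = '192.168.10.101', user = 'render',
                               passwd = 'rnrender', db = 'RenderFarm_BETA' )
# Alternatively: oConnection.autocommit( True ), if your MySQLdb version has it.
sQuery = "SELECT LogLevel FROM TB_RENDERNODES WHERE ID = 108"

while 1:
    oCursor = oConnection.cursor()
    oCursor.execute( sQuery )
    print oCursor.fetchone()
    oCursor.close()

    # End the implicit transaction so the next SELECT gets a fresh snapshot.
    oConnection.commit()

    print 'waiting 5 seconds'
    time.sleep( 5 )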
 

Bernard Lebel

I'm absolutely flabbergasted.

Your suggestion worked: the loop now picks up the changed values,
without the need to reconnect.

It's the first time I've had to commit after a query. Up until I wrote
this program, I used commit() only for UPDATE/INSERT types of commands,
and always got proper fetch results.


Thanks
Bernard
 

Alan Franzoni

Bernard Lebel on comp.lang.python said:
I'm absolutely flabbergasted.

As I told you, I think this is related to the isolation level - I really
think it's a feature of the DB you're using. Until you commit, the
database appears to stay in the very same state on the same connection.
Suppose you make one query, then another, and based on those results you
decide to take a certain action on your DB; in the meantime (between Q1
and Q2) another user/program/thread/connection might have done a
different operation on the very same DB. Without that isolation you
would be getting Q1 results from one state of the DB and Q2 results from
a different one, misleading you into taking an inopportune action.

Committing the connection syncs the state you're looking at with the
actual DB state.
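
To make that concrete, on a single connection it goes roughly like this
(a sketch, assuming an open MySQLdb connection called oConnection):

oCursor = oConnection.cursor()

# Q1: the first consistent read opens a snapshot of the DB.
oCursor.execute( "SELECT LogLevel FROM TB_RENDERNODES WHERE ID = 108" )
print oCursor.fetchone()

# Rows committed by other connections from now on stay invisible to
# further SELECTs here, so you don't mix two different DB states...

# ...until you end the transaction, which throws the old snapshot away.
oConnection.commit()

# Q2: a new transaction, a new snapshot, so it sees the current DB state.
oCursor.execute( "SELECT LogLevel FROM TB_RENDERNODES WHERE ID = 108" )
print oCursor.fetchone()

oCursor.close()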

--
Alan Franzoni <[email protected]>
-
Togli .xyz dalla mia email per contattarmi.
Remove .xyz from my address in order to contact me.
-
GPG Key Fingerprint:
5C77 9DC3 BD5B 3A28 E7BC 921A 0255 42AA FE06 8F3E
 

Mark Hertel

I'm absolutely flabbergasted.

Your suggestion worked: the loop now picks up the changed values,
without the need to reconnect.

It's the first time I've had to commit after a query. Up until I wrote
this program, I used commit() only for UPDATE/INSERT types of commands,
and always got proper fetch results.


I found that a similar problem occurred when I upgraded MySQL to some of
the 4.1.x versions and the newest 5.x. The default table type now seems
to be InnoDB, which enables transactions, so autocommit has to be turned
on in MySQLdb or explicit commits have to be placed in the code.
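
For anyone hitting the same thing, a quick way to check and work around
it might look like this (a rough sketch, reusing the connection details
and table from earlier in the thread):

import MySQLdb

oConnection = MySQLdb.connect( host = '192.168.10.101', user = 'render',
                               passwd = 'rnrender', db = 'RenderFarm_BETA' )

# SHOW TABLE STATUS reports the storage engine for the table: InnoDB
# means transactional, MyISAM means the old non-transactional behaviour.
oCursor = oConnection.cursor()
oCursor.execute( "SHOW TABLE STATUS LIKE 'TB_RENDERNODES'" )
print oCursor.fetchone()
oCursor.close()

# If the table is InnoDB, either commit after every query as discussed
# above, or turn autocommit on for the whole connection:
oConnection.autocommit( True )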



--Mark
 
