Very weird behavior in MySQLdb "execute"

John Nagle

This has me completely mystified. Some SELECT operations performed through
MySQLdb produce different results than with the MySQL graphical client.
This failed on a Linux server running Python 2.5, and I can reproduce it
on a Windows client running Python 2.4. Both are running MySQL 5.x.

The table involved is:

CREATE TABLE domaincache
(
domain VARCHAR(255) NOT NULL UNIQUE PRIMARY KEY,
rating CHAR(1) NULL,
rating_info ENUM
('error','no_domain','no_website','blocked','no_location','negative_info','non_commercial','unverified')
NULL,
special_case ENUM('normal','special'),
rating_id BIGINT UNSIGNED NULL,
last_update_end TIMESTAMP NOT NULL,
version SMALLINT UNSIGNED NOT NULL,
INDEX (rating_id)
);

Nothing exciting there.

In the MySQL query browser, I can do either

select * from domaincache where domain = "adwords.google.com"
or
select * from domaincache where domain = "google.com"

and I get one record back from each, with the correct info. That's correct.

Querying the database from Python gives different results. The database
connection is made with:

db = MySQLdb.connect(host="localhost",
use_unicode = True, charset = "utf8",
user=username, passwd=password, db=database)

When I make the same queries from Python, via IDLE, typing in commands:

cursor.execute('SELECT * FROM domaincache WHERE domain="adwords.google.com"')

returns 0L, no rows, which is wrong. That domain is in the database, and
a SELECT from the graphical client will find it.

But

cursor.execute('SELECT * FROM domaincache WHERE domain="google.com"')

returns 1L, one row, which is correct.

The database is InnoDB, and CHECK TABLE says the database is valid.

Restarting the MySQL instance changes the database. The entry "google.com"
disappears, and is replaced by "www.google.com". This must indicate a hanging
transaction that wasn't committed.

But that transaction didn't come from the Python IDLE session I've been
making test calls from. Those queries should match the graphical client
exactly.

So why don't they agree?
 
Paul Boddie

This has me completely mystified. Some SELECT operations performed through
MySQLdb produce different results than with the MySQL graphical client.
This failed on a Linux server running Python 2.5, and I can reproduce it
on a Windows client running Python 2.4. Both are running MySQL 5.x.

I'm not actively using MySQL at the moment, so my comments are just
passing remarks that may or may not help.
The table involved is:

CREATE TABLE domaincache
(
domain VARCHAR(255) NOT NULL UNIQUE PRIMARY KEY,

Passing remark #1: I believe that "domain" is a reserved word in SQL.
Of course, this is MySQL we're talking about, so you might get away
with this, but bizarre side-effects are sometimes a hallmark of MySQL,
too.
rating CHAR(1) NULL,
rating_info ENUM
('error','no_domain','no_website','blocked','no_location','negative_info','non_commercial','unverified')
NULL,
special_case ENUM('normal','special'),
rating_id BIGINT UNSIGNED NULL,
last_update_end TIMESTAMP NOT NULL,
version SMALLINT UNSIGNED NOT NULL,
INDEX (rating_id)
);

Nothing exciting there.

In the MySQL query browser, I can do either

select * from domaincache where domain = "adwords.google.com"
or
select * from domaincache where domain = "google.com"

Passing remark #2: In SQL, double quotes are usually used to "protect"
identifiers from being recognised as reserved words. Now, I believe
that MySQL can be quite relaxed about this, but this could be an issue
if some conformance mode gets set somewhere.
and I get one record back from each, with the correct info. That's correct.

Querying the database from Python gives different results. The database
connection is made with:

db = MySQLdb.connect(host="localhost",
use_unicode = True, charset = "utf8",
user=username, passwd=password, db=database)

When I make the same queries from Python, via IDLE, typing in commands:

cursor.execute('SELECT * FROM domaincache WHERE domain="adwords.google.com"')

returns 0L, no rows, which is wrong. That domain is in the database, and
a SELECT from the graphical client will find it.

Try using parameters instead of attempting to put the desired value of
domain directly into the query...
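Something along these lines, say (just a sketch; MySQLdb uses %s
placeholders regardless of the column type, and db is the connection
from your post):

cursor = db.cursor()
# Let MySQLdb do the quoting and escaping of the value itself.
cursor.execute("SELECT * FROM domaincache WHERE domain = %s",
               ("adwords.google.com",))
rows = cursor.fetchall()
print len(rows), "row(s) found"
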
But

cursor.execute('SELECT * FROM domaincache WHERE domain="google.com"')

....or try using single quotes when delimiting the desired value of
domain.
returns 1L, one row, which is correct.

The database is InnoDB, and CHECK TABLE says the database is valid.

Restarting the MySQL instance changes the database. The entry "google.com"
disappears, and is replaced by "www.google.com". This must indicate a hanging
transaction that wasn't committed.

Cheap shot: I guess this is why I'm using PostgreSQL.

Seriously, though, MySQL 5.x can be quite usable provided that you
switch on conformance to standards and do as much inside explicit
transaction blocks as possible. Indeed, this latter point might point
to a solution: the MySQLdb module might automatically put statements
inside transactions (like many other DB-API modules), whereas other
clients might not use transactions unless you use BEGIN and ROLLBACK/
COMMIT - a recommended practice given that I've seen MySQL abort
inserts and leave them half finished.
But that transaction didn't come from the Python IDLE session I've been
making test calls from. Those queries should match the graphical client
exactly.

So why don't they agree?

Transactions?

Paul
 
Carsten Haese

Restarting the MySQL instance changes the database. The entry "google.com"
disappears, and is replaced by "www.google.com". This must indicate a hanging
transaction that wasn't committed.

But that transaction didn't come from the Python IDLE session I've been
making test calls from. Those queries should match the graphical client
exactly.

So why don't they agree?

I don't have a definitive answer, but I do have a piece of generic
advice. If two database sessions receive differing results, the cause
could be any of the following factors:

1) One session is in a transaction, but the other is not.

2) Both sessions are in a transaction, but they are at different
isolation levels.

3) Both sessions are in a transaction, but the transactions were started
at different times.

It's entirely possible that the graphical client was operating outside
of a transaction, showing you phantom rows of a transaction in progress,
whereas the Python IDLE session was inside a transaction at something
like "read last committed" (or whatever MySQL calls it) isolation level.

Any of the factors above determine which "snapshot" of the data will be
visible to the database clients. Unless you are *absolutely certain*
that both clients should have seen the exact same snapshot, it's really
not all that weird that you are seeing discrepancies, especially in
light of the fact that you had an uncommitted transaction hanging around
somewhere.
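If you want to check, each session can be asked what it is actually
doing; a rough sketch on the MySQLdb side (db being the connection, and
tx_isolation being what MySQL 5.x calls the setting):

# Compare this with what the graphical client reports for its session.
cur = db.cursor()
cur.execute("SELECT @@session.tx_isolation, @@autocommit")
print cur.fetchone()    # e.g. ('REPEATABLE-READ', 0L)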

Hope this helps,
 
John Nagle

Carsten said:
I don't have a definitive answer, but I do have a piece of generic
advice. If two database sessions receive differing results, the cause
could be any of the following factors:

1) One session is in a transaction, but the other is not.

2) Both sessions are in a transaction, but they are at different
isolation levels.

3) Both sessions are in a transaction, but the transactions were started
at different times.

I see what's wrong, I think. I haven't been doing a commit after
a SELECT. I've been careful to commit after write-type actions,
but not after read-only actions. I'm using InnoDB in default mode,
which is REPEATABLE READ, and I've recently switched to long-running
processes which keep the database connection open for hours. So the
data view for a given connection never changes, regardless of what's
happening in other threads.

John Nagle
 
Steve Holden

John said:
I see what's wrong, I think. I haven't been doing a commit after
a SELECT. I've been careful to commit after write-type actions,
but not after read-only actions. I'm using InnoDB in default mode,
which is REPEATABLE READ, and I've recently switched to long-running
processes which keep the database connection open for hours. So the
data view for a given connection never changes, regardless of what's
happening in other threads.


I believe you are correct in your belief that a commit() is never
required after SELECT. Once a database change is committed it should
become visible to all other connections (under normal circumstances).

regards
Steve
 
Gabriel Genellina

I believe you are correct in your belief that a commit() is never
required after SELECT. Once a database change is committed it should
become visible to all other connections (under normal circumstances).

As said before, that depends on the transaction isolation level in
use. You describe the "read committed" level, where non-repeatable
reads may occur. A higher level is "repeatable read", where the
original data rows are always included in the result set for the same
query, even if other concurrent transactions modified those same rows
(though new "phantom" rows may appear in later reads).
Unfortunately, the Python DB-API 2.0 does not define any way to set
this parameter.
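You can still change it per session by sending the SQL yourself, though;
a rough sketch with MySQLdb (db is the connection, and the new level
applies to transactions started afterwards):

# Not part of DB-API 2.0 - just plain SQL pushed through a cursor.
cur = db.cursor()
cur.execute("SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED")
db.commit()   # end the current transaction so the next one uses the new level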
 
John Nagle

Steve said:
I believe you are correct in your belief that a commit() is never
required after SELECT. Once a database change is committed it should
become visible to all other connections (under normal circumstances).

No. If you're using InnoDB in default mode, you're in "repeatable read"
mode, which means the same SELECT on the same connection without an
intervening commit will produce the same result, regardless of any
other transactions.

I have one FCGI program which does only SELECT calls, for processing quick
lookup-only requests. This program wasn't picking up changes made by
transactions from other programs, because it's a long-running program
with a persistent database connection, and it didn't do a COMMIT.
Once I made it do a COMMIT after each SELECT, it started picking up
the changes being made elsewhere.

So you really do have to COMMIT after a SELECT, if you are reusing
the database connection. CGI programs usually don't have this issue,
because their connections don't live long, but long-running FCGI (and maybe
Twisted) programs do.
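In outline, the loop now looks something like this (just a sketch;
get_next_request stands in for however the FCGI program actually
receives work, and db is the long-lived MySQLdb connection):

while True:
    domain = get_next_request()     # placeholder for the real request source
    cur = db.cursor()
    cur.execute("SELECT rating FROM domaincache WHERE domain = %s", (domain,))
    row = cur.fetchone()
    cur.close()
    db.commit()     # ends the read-only transaction, so the next SELECT
                    # sees rows committed by other connections in the meantime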

John Nagle
 
John Nagle

Paul said:
I'm not actively using MySQL at the moment, so my comments are just
passing remarks that may or may not help.


Passing remark #1: I believe that "domain" is a reserved word in SQL.

Not any more. DOMAIN stopped being a reserved word in SQL-2003.
See "http://developer.mimer.com/validator/sql-reserved-words.tml"
It is not a reserved word in MySQL.
Passing remark #2: In SQL, double quotes are usually used to "protect"
identifiers from being recognised as reserved words. Now, I believe
that MySQL can be quite relaxed about this, but this could be an issue
if some conformance mode gets set somewhere.

The SQL standard requires single quotes, while MySQL allows both.
Actually, the real code lets the MySQLdb interface do the quoting; this
was just a manual test.
Cheap shot: I guess this is why I'm using PostgreSQL.

Actually, most of the above comments were totally irrelevant. The
real problem (discussed in the Python newsgroup) was a failure to
COMMIT after a SELECT. MySQL InnoDB databases run in "repeatable read" mode,
and if you have a long-running process and don't COMMIT after a SELECT,
the results of redoing a SELECT will not change, regardless of other
updates to the table. So it really is necessary to COMMIT after SELECT
to see new updates to the database, even when not making changes.

John Nagle
 
Paul Boddie

Not any more. DOMAIN stopped being a reserved word in SQL-2003.
See "http://developer.mimer.com/validator/sql-reserved-words.tml"
It is not a reserved word in MySQL.

Well, it was just a passing remark, useful to bear in mind if you have
to use another database system at some point.
The SQL standard requires single quotes, while MySQL allows both.

Yes, that's what standards are for, after all.
Actually, the real code lets the MySQLdb interface do the quoting; this
was just a manual test.

Fair enough.
Actually, most of the above comments were totally irrelevant.

Well, they were just late night suggestions.
The real problem (discussed in the Python newsgroup) was a failure to
COMMIT after a SELECT. MySQL InnoDB databases run in "repeatable read" mode,
and if you have a long-running process and don't COMMIT after a SELECT,
the results of redoing a SELECT will not change, regardless of other
updates to the table. So it really is necessary to COMMIT after SELECT
to see new updates to the database, even when not making changes.

I always find this page to summarise the situation well enough:

http://www.postgresql.org/docs/8.3/static/transaction-iso.html

I think that if you haven't actually updated the database within your
transaction (and your question suggests that you hadn't, since you'd
otherwise already be thinking about the transactional issues), then
merely ending the transaction would let your code "synchronise" its
view of the database with the current state. In other words, a
ROLLBACK would also be satisfactory.

I must admit that I hadn't thought too much about transactions which
merely query the database until I noticed that in PostgreSQL you can
accumulate a number of locks if you have a long-running process which
doesn't terminate transactions in a timely fashion. Inspecting the
behaviour of your favourite DB-API module can be useful to see exactly
what happens under the covers.

Paul
 
Frank Aune

So you really do have to COMMIT after a SELECT, if you are reusing
the database connection. CGI programs usually don't have this issue,
because their connections don't live long, but long-running FCGI (and maybe
Twisted) programs do.

I've experienced the same thing for long-running tasks, even when using
different connections/cursors against the same db: one for queries and one
for a log-writing db handler.

Whenever I did a SELECT() on the first connection, the cursor would
stop "seeing" new entries committed in the log table by the other connection.
I always assumed you needed COMMIT() after adding new content to the
database, not after every single query, but perhaps this indicates otherwise?

Regards,
Frank
 
Paul Boddie

Whenever I did a SELECT() on the first connection, the cursor would
stop "seeing" new entries committed in the log table by the other connection.
I always assumed you needed COMMIT() after adding new content to the
database, not after every single query, but perhaps this indicates otherwise?

No, you just need to be aware that the database module has probably
started a transaction on your behalf and that, as John points out, for
certain database systems in certain configurations, any repetition of
the same query in the same transaction will produce the same results,
thus avoiding the "nonrepeatable read" problem.

Really, the rule is this: always (where the circumstances described
above apply) make sure that you terminate a transaction before
attempting to read committed, updated data. If you religiously perform
a COMMIT (or ROLLBACK) after every query, you'll just create lots of
transactions for no good reason.
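To make that concrete, a rough sketch under REPEATABLE READ (db being
the connection): ending the transaction is what refreshes the snapshot,
and for a read-only transaction a ROLLBACK does the job as well as a
COMMIT.

cur = db.cursor()
cur.execute("SELECT COUNT(*) FROM domaincache")
print cur.fetchone()[0]
db.rollback()               # end the read-only transaction
cur.execute("SELECT COUNT(*) FROM domaincache")
print cur.fetchone()[0]     # now reflects rows committed by other sessions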

Paul
 
Frank Aune

Really, the rule is this: always (where the circumstances described
above apply) make sure that you terminate a transaction before
attempting to read committed, updated data.

How exactly do you terminate a transaction then? Do you terminate a transaction
by closing the cursor? Would this indicate that for each query you perform
against the db, you should:

- Open cursor
- Perform query
- Close cursor

The concept of a cursor in MySQL is apparently very different from what I
originally thought. I always thought the cursor is the "handler" for a
certain connection, and that you needed to commit before closing down this
handler - else changes were automatically rolled back.

Thanks,
Frank
 
Paul Boddie

How exactly do you terminate a transaction then? Do you terminate a transaction
by closing the cursor?

No, the transaction is controlled using the connection object in the
DB-API, specifically by the commit and rollback methods.
Would this indicate that for each query you perform
against the db, you should:

- Open cursor
- Perform query
- Close cursor

I tend to open a separate cursor for each query whose result set I
want to keep around. In other words, if I'm doing a number of queries
whose results will not be manipulated via the DB-API (using fetchone,
fetchmany, fetchall) after the next query is executed, then I only
open one cursor - it's like this:

cr = c.cursor()
try:
    cr.execute(query1)
    # do stuff with cr.fetchone/many/all
    cr.execute(query2)
    # ...
finally:
    cr.close()

As the DB-API indicates, if you want to manipulate more than one
result set, you need more than one cursor.

With PostgreSQL, my impression is that the intended way of using
cursors is not entirely compatible with the DB-API: you declare
cursors only when you know what the query will be, not in advance, and
they can only be used with certain kinds of operations. As far as I
recall, pyPgSQL supports cursors fairly transparently, albeit through
various ad-hoc measures, whereas psycopg2 only does so for "named
cursors" - a concept missing from the DB-API as far as I can see.
The concept of a cursor in MySQL is apparently very different from what I
originally thought. I always thought the cursor is the "handler" for a
certain connection, and that you needed to commit before closing down this
handler - else changes were automatically rolled back.

It's easy to believe this, given the "hierarchical" nature of the API.
However, cursors are just things which keep track of result sets, and
I suppose that they are most useful when you perform a query which
produces a large number of result rows, but where you only want to
read a limited number of those rows at a time.
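For example, something along these lines keeps only one batch of rows
in memory at a time (a sketch; the batch size is arbitrary and process
is just a placeholder for the real per-row work):

cur = db.cursor()
cur.execute("SELECT domain, rating FROM domaincache")
while True:
    rows = cur.fetchmany(500)   # at most 500 rows held in memory per round trip
    if not rows:
        break
    for domain, rating in rows:
        process(domain, rating)
cur.close()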

Paul
 
Steve Holden

Paul said:
No, the transaction is controlled using the connection object in the
DB-API, specifically by the commit and rollback methods.


I tend to open a separate cursor for each query whose result set I
want to keep around. In other words, if I'm doing a number of queries
whose results will not be manipulated via the DB-API (using fetchone,
fetchmany, fetchall) after the next query is executed, then I only
open one cursor - it's like this:

cr = c.cursor()
try:
    cr.execute(query1)
    # do stuff with cr.fetchone/many/all
    cr.execute(query2)
    # ...
finally:
    cr.close()

As the DB-API indicates, if you want to manipulate more than one
result set, you need more than one cursor.
Clearly you need multiple cursors if you want to be reading from the
database on both cursors simultaneously. Like you, though, I tend to use
a single cursor whenever I can get away with it (and now I think about
it, that's probably because of the greater likelihood of seeing db
consistency due to the read repeatability, though I can honestly say I
hadn't really formalized that principle).
With PostgreSQL, my impression is that the intended way of using
cursors is not entirely compatible with the DB-API: you declare
cursors only when you know what the query will be, not in advance, and
they can only be used with certain kinds of operations. As far as I
recall, pyPgSQL supports cursors fairly transparently, albeit through
various ad-hoc measures, whereas psycopg2 only does so for "named
cursors" - a concept missing from the DB-API as far as I can see.
Yes, unfortunately the nomenclature of the DB API conflicts with that of
SQL's DECLARE CURSOR, where the named cursor is effectively a sequence
of query results that (under certain isolation levels and patterns of
usage) can reflect database changes as they occur. Different
implementers have chosen different relationships between DB API cursors
and SQL cursors since it was introduced in the SQL 92 standard.

I believe, without documentary justification, that named cursors were
introduced into SQL to support stored procedures, and therefore weren't
intended to be used for queries whose results were communicated outside
the server.

MySQL (never the most reliable indication of standardized SQL)
restricts them to use inside stored procedures and triggers. PostgreSQL
makes them more generally available, and I suppose it is possible to use
a DB API cursor to execute a FETCH statement to retrieve data from a SQL
CURSOR, though I have never tried to do it.

SQL Server provides two types of cursor, one for use in the standard
manner and one that conforms more closely with the ODBC cursor concept.

I tend to treat the database as a purely relational store, and not use
stored procedures, but this is a luxury not everyone can afford - it's
just necessary to use them for efficiency reasons sometimes, though not
in the applications I am mostly involved with. Consequently I haven't
made much use of SQL CURSORs.
It's easy to believe this, given the "hierarchical" nature of the API.
However, cursors are just things which keep track of result sets, and
I suppose that they are most useful when you perform a query which
produces a large number of result rows, but where you only want to
read a limited number of those rows at a time.
That's true, and your remarks clarify cursor usage in the DB API very
well. Most people most of the time tend to ignore the existence of
cursor.fetchmany() in the DB API, despite the fact that it can provide
huge efficiency gains over both .fetchone() (can slow processing by
requiring too many database interactions per query) and .fetchmany()
(which can generate large memory overhead in the case of huge result sets).

regards
Steve
 
Paul Boddie

That's true, and your remarks clarify cursor usage in the DB API very
well. Most people most of the time tend to ignore the existence of
cursor.fetchmany() in the DB API, despite the fact that it can provide
huge efficiency gains over both .fetchone() (can slow processing by
requiring too many database interactions per query) and .fetchmany()

fetchall ;-)
(which can generate large memory overhead in the case of huge result sets).

Indeed. I managed to run into a problem with DB-API-compliant code and
psycopg2 in this respect: select a large number of rows, watch
PostgreSQL do its work, see the Python process suck down the entire
result set. It's a situation reminiscent of that incident involving a
python and an alligator in the Florida swamps, but where the alligator
survives.

I don't use psycopg2 at the moment, but I did patch it to allow more
transparent usage of cursors, and there's an unapplied patch for this
floating around in the bug tracker. Lately, I've been using pyPgSQL
instead and not really doing huge selects from Python code anyway, but
I'm still using fetchmany for one or two things.

Paul
 
Steve Holden

Paul Boddie wrote:
[...]
I don't use psycopg2 at the moment, but I did patch it to allow more
transparent usage of cursors, and there's an unapplied patch for this
floating around in the bug tracker. Lately, I've been using pyPgSQL
instead and not really doing huge selects from Python code anyway, but
I'm still using fetchmany for one or two things.
[Makes note to look at pyPgSQL]

regards
Steve
 
M.-A. Lemburg

Yes, unfortunately the nomenclature of the DB API conflicts with that of
SQL's DECLARE CURSOR, where the named cursor is effectively a sequence
of query results that (under certain isolation levels and patterns of
usage) can reflect database changes as they occur. Different
implementers have chosen different relationships between DB API cursors
and SQL cursors since it was introduced in the SQL 92 standard.

I believe, without documentary justification, that named cursors were
introduced into SQL to support stored procedures, and therefore weren't
intended to be used for queries whose results were communicated outside
the server.

Cursors defined using DECLARE CURSOR usually live in the
scope of the database engine. They are different from the cursors
defined and used with the database APIs.

MS even warns against mixing them:

http://msdn2.microsoft.com/en-us/library/aa172576(SQL.80).aspx

The Python DB-API is defined at the database API level, so the
same applies in the context of DB-API cursors.

In practice, I've not had problems with accessing named cursors
using DB-API cursors.

The main reason for the MS warning is that
cursors can be used for lots of interesting optimizations
such as auto-updating result sets and scrolling, positioned
updates or deletions, etc. The DB-API doesn't expose all these
nifty features, so it doesn't easily run into trouble.

mxODBC has support for named cursors that you can later
use for positioned updates. You declare the name of the
cursor when creating it:

cursor1 = connection.cursor('mycursor')
cursor1.execute('select id, name, value from mytable')

# Position the mycursor on row 10
cursor1.fetchmany(10)

# Update row 10
cursor2 = connection.cursor()
cursor2.execute('update mytable set value = ? where current of mycursor',
                (new_value,))   # new_value: whatever should be stored in row 10

cursor1.close()
cursor2.close()

However, it's usually better to do updates in the classical
way, ie. by referencing a primary key.
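The classical version of the same update would look roughly like this
(a sketch, assuming mytable has a primary key column id, and that
new_value and row_id hold the data to write):

cursor = connection.cursor()
cursor.execute('update mytable set value = ? where id = ?',
               (new_value, row_id))
connection.commit()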

--
Marc-Andre Lemburg
eGenix.com

 
Carsten Haese

mxODBC has support for named cursors that you can later
use for positioned updates.

Since we're on the topic of shameless plugs, InformixDB has this
feature, too :)
However, it's usually better to do updates in the classical
way, ie. by referencing a primary key.

That depends on the notion of "better". It's entirely possible that a
"WHERE CURRENT OF <cursor>" clause finds the row in question faster than
even an indexed access using a primary key could. If you're processing a
single row, I would agree that using an update cursor is overkill, but
if you're processing a large number of rows, using an update cursor
might be beneficial.
 
