Sqlite3. Substitution of names in query.

Discussion in 'Python' started by Lacrima, Oct 30, 2009.

  1. Lacrima

    Lacrima Guest

    Hello!

    I use the sqlite3 module for my SQLite database. I am trying to substitute
    a table name in an SQL query.

    >>> import sqlite3
    >>> con = sqlite3.connect('mydb')
    >>> cur = con.execute("select * from table where name='Joe'")

    That's ok

    >>> cur = con.execute("select * from table where name=?", ('Joe',))

    That's ok too

    >>> cur = con.execute("select * from ? where name=?", ('table', 'Joe'))

    Traceback (most recent call last):
    File "<string>", line 1, in <fragment>
    sqlite3.OperationalError: near "?": syntax error

    So what's wrong now?
    Is it impossible to substitute table names using the DB API?
    If so, what is the right way to do table name substitutions? Are string
    operations like
    'select * from %s where...' % tablename
    ok in this case?

    Thanks in advance!
    Lacrima, Oct 30, 2009
    #1

  2. Diez B. Roggisch

    Lacrima wrote:
    > Hello!
    >
    > I use sqlite3 module for my sqlite database. I am trying to substitute
    > table name in sql query.
    >
    >>>> import sqlite3
    >>>> con = sqlite3.connect('mydb')
    >>>> cur = con.execute("select * from table where name='Joe'")

    > That's ok
    >
    >>>> cur = con.execute("select * from table where name=?", ('Joe',))

    > That's ok too
    >
    >>>> cur = con.execute("select * from ? where name=?", ('table', 'Joe'))

    > Traceback (most recent call last):
    > File "<string>", line 1, in <fragment>
    > sqlite3.OperationalError: near "?": syntax error
    >
    > So what's wrong now?
    > Is it impossible to substitute table names, using DB API?


    Yes, it is. How should the API discern that you meant the table name
    (which gets passed without quotes, or if at all with double quotes) instead
    of a string literal? There *could* be ways to do so, but in the end all
    this boils down to crossing a border that is better left alone - because
    there are many SQL injection problems lurking if you are basing your
    table/column names on *input*.

    > If so, what is a way to make table name substitutions? Are string
    > operations like
    > 'select * from %s where...' % tablename
    > ok in this case?


    By substituting it with simple string-interpolation. Or even better, by
    not doing it at all - because usually, your datamodel is tied to your
    program, so the need for this kind of dynamicity shouldn't arise in the
    first place.
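
    For illustration, a minimal sketch of that "interpolate only trusted
    identifiers, bind the values" idea with sqlite3 (the table names and the
    helper below are invented for the example):

    import sqlite3

    ALLOWED_TABLES = {"customers", "orders"}      # known-good identifiers only

    def select_by_name(con, table, name):
        if table not in ALLOWED_TABLES:           # never interpolate raw user input
            raise ValueError("unknown table: %r" % table)
        # identifier via interpolation (double-quoted), value via ? binding
        sql = 'select * from "%s" where name = ?' % table
        return con.execute(sql, (name,)).fetchall()

    con = sqlite3.connect(":memory:")
    con.execute("create table customers (name text)")
    con.execute("insert into customers values (?)", ("Joe",))
    print(select_by_name(con, "customers", "Joe"))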

    Diez
    Diez B. Roggisch, Oct 30, 2009
    #2

  3. Matteo

    Matteo Guest

    On Oct 30, 7:10 am, Lacrima <> wrote:
    > Hello!
    >
    > I use sqlite3 module for my sqlite database. I am trying to substitute
    > table name in sql query.
    >
    > >>> import sqlite3
    > >>> con = sqlite3.connect('mydb')
    > >>> cur = con.execute("select * from table where name='Joe'")

    >
    > That's ok
    >
    > >>> cur = con.execute("select * from table where name=?", ('Joe',))

    >
    > That's ok too
    >
    > >>> cur = con.execute("select * from ? where name=?", ('table', 'Joe'))

    >
    > Traceback (most recent call last):
    >   File "<string>", line 1, in <fragment>
    > sqlite3.OperationalError: near "?": syntax error
    >
    > So what's wrong now?
    > Is it impossible to substitute table names, using DB API?
    > If so, what is a way to make table name substitutions? Are string
    > operations like
    > 'select * from %s where...' % tablename
    > ok in this case?
    >
    > Thanks in advance!


    I've found myself wanting this ability too, but alas, it is not
    possible. SQLite statements are compiled into an intermediate bytecode
    so that they can execute very quickly. This bytecode allows for
    placeholders to be used for values, so that the same compiled bytecode
    can be run for a multitude of values (handy for large INSERTS, of
    course) without recompilation.

    As I understand it, the bytecode is specific to the table(s) and
    columns used in the statement. I don't know the specific mechanism,
    but I would suspect that a column name gets converted to an offset
    into a row, or to a pointer to a table's column array, or somesuch. In
    particular, the code generated is probably drastically different
    depending on whether or not a column in a table is indexed.
    Thus, if a placeholder were used for a column, the whole statement
    would have to be recompiled each time it was run, which would do very
    nasty things to efficiency.

    So, if you really need that ability, you must use normal Python string
    interpolation.
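
    A small sketch of that point with sqlite3 (table and column names here are
    invented): the value placeholders let one compiled statement be reused for
    many rows, while the table name has to be fixed in the SQL text up front.

    import sqlite3

    con = sqlite3.connect(":memory:")
    table = "measurements"          # chosen by the program, not by user input
    con.execute("create table %s (n int, squared int)" % table)

    rows = [(n, n * n) for n in range(1000)]
    # one statement, prepared once, run a thousand times with different bound values
    con.executemany("insert into %s values (?, ?)" % table, rows)

    print(con.execute("select count(*) from %s" % table).fetchone())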
    Matteo, Oct 30, 2009
    #3
  4. Matteo

    Matteo Guest

    On Oct 30, 7:27 am, "Diez B. Roggisch" <> wrote:
    [snip]
    > Or even better, by
    > not doing it at all - because usually, your datamodel is tied to your
    > program, so the need for this kind of dynamicity shouldn't arise in the
    > first place.
    >
    > Diez


    Perhaps that is true in the majority of cases, but there are
    exceptions. I can think of a couple of instances where one might need
    to do it:
    1) A general database exploration or visualization application, or

    2) Where one needs to perform a similar operation on several different
    tables. In a current case of mine, I'm converting from several
    externally provided tab-delimited tables to an in-memory sqlite
    database. Most of my app is tied closely to the model, and needs no
    such dynamicity. However, I want to automate the conversion, so I
    don't have to write 20 or so similar functions.
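
    A rough sketch of the kind of generic loader meant here, assuming each
    tab-delimited file's first line holds the column names and that the file
    and table names come from the program rather than from end users (all
    names below are illustrative):

    import csv
    import sqlite3

    def load_tsv(con, table, path):
        with open(path) as f:
            reader = csv.reader(f, delimiter="\t")
            columns = next(reader)
            col_list = ", ".join('"%s"' % c for c in columns)
            marks = ", ".join("?" for _ in columns)
            con.execute('create table "%s" (%s)' % (table, col_list))
            con.executemany(
                'insert into "%s" (%s) values (%s)' % (table, col_list, marks),
                reader)

    con = sqlite3.connect(":memory:")
    # load_tsv(con, "parts", "parts.tsv")   # one call per externally provided file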
    Matteo, Oct 30, 2009
    #4
  5. On Fri, 30 Oct 2009 10:03:43 -0700 (PDT), Matteo <>
    declaimed the following in gmane.comp.python.general:

    > 1) A general database exploration or visualization application, or
    >

    Which should interrogate the database itself to find the
    table/column names, and present them to the user in some form of menu --
    such that the user never enters the table/column name as text, but
    instead as a selection from those displayed. Internally, one converts
    the selection back into a table/column name.

    This way regular string interpolation operations (or whatever Python
    3.x has replaced it with) are safe to construct the SQL, leaving only
    user supplied (or program generated) data values to be passed via the
    DB-API parameter system -- so that they are properly escaped and
    rendered safe.
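
    A sketch of that approach for SQLite (names below are illustrative, and the
    final query assumes, as in the original example, a "name" column): the
    program discovers the table names itself, the user only picks from that
    list, and only data values go through the ?-placeholder mechanism.

    import sqlite3

    con = sqlite3.connect("mydb")
    tables = [row[0] for row in
              con.execute("select name from sqlite_master where type = 'table'")]

    for i, name in enumerate(tables):
        print(i, name)              # present the discovered names as a menu
    choice = tables[0]              # in a real UI this index comes from the menu

    cur = con.execute('select * from "%s" where name = ?' % choice, ("Joe",))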

    > 2) Where one needs to perform a similar operation on several different
    > tables. In a current case of mine, I'm converting from several
    > externally provided tab-delimited tables to an in-memory sqlite
    > database. Most of my app is tied closely to the model, and needs no
    > such dynamicity. However, I want to automate the conversion, so I
    > don't have to write 20 or so similar functions.


    Which again is an application in which the "user" does not have
    direct input of table/column names.
    --
    Wulfraed Dennis Lee Bieber KD6MOG
    HTTP://wlfraed.home.netcom.com/
    Dennis Lee Bieber, Oct 31, 2009
    #5
  6. In message <>, Dennis Lee
    Bieber wrote:

    > This way regular string interpolation operations (or whatever Python
    > 3.x has replaced it with) are safe to construct the SQL, leaving only
    > user supplied (or program generated) data values to be passed via the
    > DB-API parameter system -- so that they are properly escaped and
    > rendered safe.


    Mixing the two is another recipe for confusion and mistakes.
    Lawrence D'Oliveiro, Oct 31, 2009
    #6
  7. Lawrence D'Oliveiro wrote:
    > In message <>, Dennis Lee
    > Bieber wrote:
    >
    >> This way regular string interpolation operations (or whatever Python
    >> 3.x has replaced it with) are safe to construct the SQL, leaving only
    >> user supplied (or program generated) data values to be passed via the
    >> DB-API parameter system -- so that they are properly escaped and
    >> rendered safe.

    >
    > Mixing the two is another recipe for confusion and mistakes.


    Mixing the two is necessary. According to the SQL standard, parameters
    can only take the place of literal values. Parameters can't take the
    place of identifiers or keywords that make up the structure of the query.

    So, you use string manipulation to build the structure of the query, and
    then you use parameter binding to fill values into the query. They are
    two different tools for two fundamentally different jobs. As long as you
    understand what you're doing, there should be no confusion. (And if you
    don't understand what you're doing, you shouldn't be doing it!)
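
    A compact sketch of "structure by string manipulation, values by parameter
    binding" with sqlite3 (the table and the dynamically chosen column are
    invented for the example):

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("create table people (name text, age int)")
    con.execute("insert into people values (?, ?)", ("Joe", 30))

    sort_col = "age"        # structural part: decided by the program
    min_age = 18            # data value: supplied at run time

    sql = 'select name from people where age >= ? order by "%s"' % sort_col
    for (name,) in con.execute(sql, (min_age,)):
        print(name)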

    --
    Carsten Haese
    http://informixdb.sourceforge.net
    Carsten Haese, Oct 31, 2009
    #7
  8. In message <>, Carsten
    Haese wrote:

    > Lawrence D'Oliveiro wrote:
    >
    >> In message <>, Dennis
    >> Lee Bieber wrote:
    >>
    >>> This way regular string interpolation operations (or whatever Python
    >>> 3.x has replaced it with) are safe to construct the SQL, leaving only
    >>> user supplied (or program generated) data values to be passed via the
    >>> DB-API parameter system -- so that they are properly escaped and
    >>> rendered safe.

    >>
    >> Mixing the two is another recipe for confusion and mistakes.

    >
    > Mixing the two is necessary.
    > ...
    > As long as you understand what you're doing, there should be no confusion.
    > (And if you don't understand what you're doing, you shouldn't be doing
    > it!)


    But if you understand what you're doing, you don't need to mix the two.
    Lawrence D'Oliveiro, Oct 31, 2009
    #8
  9. Lawrence D'Oliveiro wrote:
    > In message <>, Carsten
    > Haese wrote:
    >
    >> Lawrence D'Oliveiro wrote:
    >>
    >>> In message <>, Dennis
    >>> Lee Bieber wrote:
    >>>
    >>>> This way regular string interpolation operations (or whatever Python
    >>>> 3.x has replaced it with) are safe to construct the SQL, leaving only
    >>>> user supplied (or program generated) data values to be passed via the
    >>>> DB-API parameter system -- so that they are properly escaped and
    >>>> rendered safe.
    >>> Mixing the two is another recipe for confusion and mistakes.

    >> Mixing the two is necessary.
    >> ...
    >> As long as you understand what you're doing, there should be no confusion.
    >> (And if you don't understand what you're doing, you shouldn't be doing
    >> it!)

    >
    > But if you understand what you're doing, you don't need to mix the two.


    Are we talking about the same thing here? I thought we're talking about
    string interpolation and parameter binding, and I explained that mixing
    those two is necessary if you have a query in which the "movable" bits
    are identifiers or other syntax elements.

    On what grounds are you asserting that it's not necessary to mix the
    two? Please elaborate your point.

    --
    Carsten Haese
    http://informixdb.sourceforge.net
    Carsten Haese, Nov 1, 2009
    #9
  10. In message <>, Carsten
    Haese wrote:

    > Lawrence D'Oliveiro wrote:
    >
    >> In message <>, Carsten
    >> Haese wrote:
    >>
    >>> Lawrence D'Oliveiro wrote:
    >>>
    >>>> In message <>,
    >>>> Dennis Lee Bieber wrote:
    >>>>
    >>>>> This way regular string interpolation operations (or whatever Python
    >>>>> 3.x has replaced it with) are safe to construct the SQL, leaving only
    >>>>> user supplied (or program generated) data values to be passed via the
    >>>>> DB-API parameter system -- so that they are properly escaped and
    >>>>> rendered safe.
    >>>>
    >>>> Mixing the two is another recipe for confusion and mistakes.
    >>>
    >>> Mixing the two is necessary.
    >>> ...
    >>> As long as you understand what you're doing, there should be no
    >>> confusion. (And if you don't understand what you're doing, you shouldn't
    >>> be doing it!)

    >>
    >> But if you understand what you're doing, you don't need to mix the two.

    >
    > On what grounds are you asserting that it's not necessary to mix the
    > two? Please elaborate your point.


    On the grounds that Python has more general and powerful string parameter-
    substitution mechanisms than anything built into any database API.
    Lawrence D'Oliveiro, Nov 1, 2009
    #10
  11. On Sun, 01 Nov 2009 19:08 +1300, Lawrence D'Oliveiro
    <_zealand> declaimed the following in
    gmane.comp.python.general:

    > On the grounds that Python has more general and powerful string parameter-
    > substitution mechanisms than anything built into any database API.


    Really? In the case of MySQLdb, the DB-API /uses/ Python's string
    interpolation -- but it first does all needed escaping of special
    characters and then puts quotes around the data. Of course, MySQL
    didn't have parameterized/prepared queries until recently, so the API
    has to submit complete SQL statements for every operation (this may
    change in the future, but that would mean one cannot run against a v4.x
    MySQL engine).

    Since SQLite uses prepared/parameterized queries, the parameters are
    passed by a totally different means, and the engine itself ensures the
    data can't be jiggered to look like a set of statements.

    In both cases, the only thing that changes in the SQL is the placeholder
    -- for SQLite, the API uses "?", and for MySQL the API uses "%s"... But
    the spec for the API also states that one can retrieve the parameter
    marker dynamically. So one could moderately easily write code that is
    engine agnostic... Something not that easy to do for data items if one
    is going to ignore the DB-API parameter passing scheme totally.
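
    A sketch of that engine-agnostic idea (only the sqlite3 branch is exercised
    here; other DB-API modules differ in their connect() arguments): the DB-API
    spec exposes a module-level paramstyle attribute, so the placeholder can be
    looked up at run time instead of being hard-coded.

    import sqlite3 as db        # could equally be another DB-API module

    MARKERS = {"qmark": "?", "format": "%s"}    # named/pyformat styles take dicts
    marker = MARKERS[db.paramstyle]

    con = db.connect(":memory:")
    cur = con.cursor()
    cur.execute("create table t (name text)")
    cur.execute("insert into t values (%s)" % marker, ("Joe",))
    cur.execute("select * from t")
    print(cur.fetchall())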
    --
    Wulfraed Dennis Lee Bieber KD6MOG
    HTTP://wlfraed.home.netcom.com/
    Dennis Lee Bieber, Nov 1, 2009
    #11
  12. Lawrence D'Oliveiro wrote:
    >> On what grounds are you asserting that it's not necessary to mix the
    >> two? Please elaborate your point.

    >
    > On the grounds that Python has more general and powerful string parameter-
    > substitution mechanisms than anything built into any database API.


    That statement is fundamentally flawed. You are assuming that the
    preferred way of getting a value into a query is by substituting a
    literal into the query string. That is, in general, not true, because
    that would be horribly inefficient. This is also why I despise the term
    "parameter substitution", since it implies incorrectly that passing
    parameters to a query is merely a string formatting exercise. The
    correct term is "parameter binding."

    Most databases actually provide an API for supplying parameters
    separately from the query string. This is more efficient, because it
    eliminates the need to render the parameter value into a literal form on
    the client side and to parse the literal form on the server side. Also,
    it allows the engine to perform the same query multiple times with
    different values without having to re-parse the query.

    Finally, you're assuming that every value that can be supplied to a
    query actually HAS a literal form. That is not true. For example, in
    Informix databases, there are no literals for BYTE-type values. (You'd
    probably call them blobs.) So, if vomiting literals into the query
    string were your only way of conveying values to the database, you'd
    never be able to populate a BYTE column on an Informix database. The
    only way to pass a BYTE value to an Informix database is by parameter
    binding.

    Since parameter binding is in general much more than string
    substitution, it is indeed necessary to mix the two.
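
    A sqlite3 variant of the BYTE/blob point above (Informix isn't to hand
    here, and SQLite does offer X'...' literals, but the binding route
    sidesteps any literal rendering entirely; names are illustrative):

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("create table files (name text, body blob)")

    payload = b"\x00\x01\xff arbitrary bytes with no convenient literal form"
    con.execute("insert into files values (?, ?)",
                ("blob.bin", sqlite3.Binary(payload)))

    (stored,) = con.execute("select body from files").fetchone()
    print(len(bytes(stored)))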

    --
    Carsten Haese
    http://informixdb.sourceforge.net
    Carsten Haese, Nov 1, 2009
    #12
  13. In message <>, Carsten
    Haese wrote:

    > Lawrence D'Oliveiro wrote:
    >
    >> In message <>,
    >> Carsten Haese wrote:

    >
    >>> On what grounds are you asserting that it's not necessary to mix the
    >>> two? Please elaborate your point.

    >>
    >> On the grounds that Python has more general and powerful string
    >> parameter- substitution mechanisms than anything built into any database
    >> API.

    >
    > That statement is fundamentally flawed. You are assuming that the
    > preferred way of getting a value into a query is by substituting a
    > literal into the query string. That is, in general, not true, because
    > that would be horribly inefficient.


    Says someone who hasn't realized where the real inefficiencies are. Remember
    what Tony Hoare told us: "premature optimization is the root of all evil".
    These are databases we're talking about. Real-world databases are large, and
    reside on disk, which is several orders of magnitude slower than RAM. And
    RAM is where string parameter substitutions take place. So a few hundred
    extra RAM accesses isn't going to make any significant difference to the
    speed of database queries.

    > Finally, you're assuming that every value that can be supplied to a
    > query actually HAS a literal form. That is not true. For example, in
    > Informix databases, there are no literals for BYTE-type values.


    Probably why I don't use Informix. What use is a binary data type if you
    can't insert and retrieve binary data values?
    Lawrence D'Oliveiro, Nov 1, 2009
    #13
  14. Lawrence D'Oliveiro wrote:
    > Says someone who hasn't realized where the real inefficiencies are. Remember
    > what Tony Hoare told us: "premature optimization is the root of all evil".
    > These are databases we're talking about. Real-world databases are large, and
    > reside on disk, which is several orders of magnitude slower than RAM. And
    > RAM is where string parameter substitutions take place. So a few hundred
    > extra RAM accesses isn't going to make any significant difference to the
    > speed of database queries.


    You're just not getting it. The cost is not in performing the parameter
    substitutions themselves. The cost is in parsing what's essentially the
    same query one million times over when it could have been parsed only
    once. You might find an increase of seven orders of magnitude
    insignificant, but I don't.

    > Probably why I don't use Informix. What use is a binary data type if you
    > can't insert and retrieve binary data values?


    You CAN insert and retrieve binary data values. You just have to use the
    right tool for the job, and that is parameter binding.

    --
    Carsten Haese
    http://informixdb.sourceforge.net
    Carsten Haese, Nov 1, 2009
    #14
  15. In message <>, Carsten
    Haese wrote:

    > Lawrence D'Oliveiro wrote:
    >
    >> Says someone who hasn't realized where the real inefficiencies are.
    >> Remember what Tony Hoare told us: "premature optimization is the root of
    >> all evil". These are databases we're talking about. Real-world databases
    >> are large, and reside on disk, which is several orders of magnitude
    >> slower than RAM. And RAM is where string parameter substitutions take
    >> place. So a few hundred extra RAM accesses isn't going to make any
    >> significant difference to the speed of database queries.

    >
    > You're just not getting it. The cost is not in performing the parameter
    > substitutions themselves. The cost is in parsing what's essentially the
    > same query one million times over when it could have been parsed only
    > once. You might find an increase of seven orders of magnitude
    > insignificant, but I don't.


    There is no such parsing overhead. I speak from experience.

    Look at the BulkInserter class here
    <http://www.codecodex.com/wiki/Useful_MySQL_Routines>. I have successfully
    used that to insert tens of thousands of records in just a few seconds. Yet
    it makes no use of the parameter-substitution provided by MySQLdb; it is all
    just straight Python, including the SQLString routine (also on that page),
    which goes through every single character of each string value to decide
    what needs escaping. Python can do all that, and do it fast.
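
    A rough, hypothetical sketch of what a character-by-character escaper in
    the spirit of that SQLString routine might look like (the real routine is
    on the linked page and is more thorough than this):

    def sql_string(value):
        out = ["'"]
        for ch in value:
            if ch == "'":
                out.append("''")       # double any quote inside the literal
            elif ch == "\\":
                out.append("\\\\")     # MySQL also treats backslash specially
            else:
                out.append(ch)
        out.append("'")
        return "".join(out)

    print(sql_string("O'Brien"))       # -> 'O''Brien'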

    You don't get to figure out what's efficient and what's not by mere hand-
    waving; you have to do actual real-world tests.
    Lawrence D'Oliveiro, Nov 2, 2009
    #15
  16. In message <>, Dennis Lee Bieber wrote:

    > On Sun, 01 Nov 2009 19:08 +1300, Lawrence D'Oliveiro
    > <_zealand> declaimed the following in
    > gmane.comp.python.general:
    >
    >> On the grounds that Python has more general and powerful string
    >> parameter- substitution mechanisms than anything built into any database
    >> API.

    >
    > Really? In the case of MySQLdb, the DB-API /uses/ Pythons string
    > interpolation ...


    Only a limited subset thereof. For instance, I'm not aware of any database
    API that lets me do this:

    sql.cursor.execute \
      (
            "update numbers set flags = flags | %(setflags)u where projectid = %(projectid)s"
            "%(match_listid)s and number = %(number)s"
        %
            {
                "projectid" : SQLString(ProjectID),
                "match_listid" :
                    ("", " and listid = %s" % SQLString(ListID))[ListID != None],
                "number" : SQLString(number),
                "setflags" : flags,
            }
      )
    Lawrence D'Oliveiro, Nov 2, 2009
    #16
  17. Robert Kern

    Robert Kern Guest

    On 2009-11-02 14:47 PM, Lawrence D'Oliveiro wrote:
    > In message<>, Dennis Lee Bieber wrote:
    >
    >> On Sun, 01 Nov 2009 19:08 +1300, Lawrence D'Oliveiro
    >> <_zealand> declaimed the following in
    >> gmane.comp.python.general:
    >>
    >>> On the grounds that Python has more general and powerful string
    >>> parameter- substitution mechanisms than anything built into any database
    >>> API.

    >>
    >> Really? In the case of MySQLdb, the DB-API /uses/ Pythons string
    >> interpolation ...

    >
    > Only a limited subset thereof. For instance, I'm not aware of any database
    > API that lets me do this:
    >
    > sql.cursor.execute \
    >   (
    >         "update numbers set flags = flags | %(setflags)u where projectid = %(projectid)s"
    >         "%(match_listid)s and number = %(number)s"
    >     %
    >         {
    >             "projectid" : SQLString(ProjectID),
    >             "match_listid" :
    >                 ("", " and listid = %s" % SQLString(ListID))[ListID != None],
    >             "number" : SQLString(number),
    >             "setflags" : flags,
    >         }
    >   )


    When programmatically generating SQL, I like to use SQLAlchemy. This use case is
    handled with .where() on Update expression objects. Personally, I find
    manipulating the SQLAlchemy expression objects clearer, safer, and more portable
    than building the raw SQL through string interpolation.
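
    A small sketch of that SQLAlchemy style (table and column names are
    invented to mirror the earlier raw-SQL example; the optional listid filter
    is added as an expression rather than spliced into a string):

    from sqlalchemy import (Table, Column, Integer, String, MetaData,
                            create_engine, update)

    metadata = MetaData()
    numbers = Table("numbers", metadata,
                    Column("number", String),
                    Column("projectid", String),
                    Column("listid", String),
                    Column("flags", Integer))

    engine = create_engine("sqlite:///:memory:")
    metadata.create_all(engine)

    list_id = None
    stmt = (update(numbers)
            .where(numbers.c.projectid == "p1")
            .where(numbers.c.number == "12345")
            .values(flags=numbers.c.flags.op("|")(4)))
    if list_id is not None:
        stmt = stmt.where(numbers.c.listid == list_id)

    with engine.begin() as conn:
        conn.execute(stmt)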

    --
    Robert Kern

    "I have come to believe that the whole world is an enigma, a harmless enigma
    that is made terrible by our own mad attempt to interpret it as though it had
    an underlying truth."
    -- Umberto Eco
    Robert Kern, Nov 2, 2009
    #17
  18. In message <>, Robert
    Kern wrote:

    > On 2009-11-02 14:47 PM, Lawrence D'Oliveiro wrote:
    >
    >> For instance, I'm not aware of any database API that lets me do this:
    >>
    >> sql.cursor.execute \
    >>   (
    >>         "update numbers set flags = flags | %(setflags)u where projectid = %(projectid)s"
    >>         "%(match_listid)s and number = %(number)s"
    >>     %
    >>         {
    >>             "projectid" : SQLString(ProjectID),
    >>             "match_listid" :
    >>                 ("", " and listid = %s" % SQLString(ListID))[ListID != None],
    >>             "number" : SQLString(number),
    >>             "setflags" : flags,
    >>         }
    >>   )

    >
    > When programmatically generating SQL, I like to use SQLAlchemy. This use
    > case is handled with .where() on Update expression objects. Personally, I
    > find manipulating the SQLAlchemy expression objects clearer, safer, and
    > more portable than building the raw SQL through string interpolation.


    Doesn't seem to support bulk insertions à la this
    <http://www.codecodex.com/wiki/Useful_MySQL_Routines#Bulk_Insertion>. Or
    even literal lists as per the SQLStringList routine on the same page.
    Lawrence D'Oliveiro, Nov 3, 2009
    #18
  19. Lawrence D'Oliveiro wrote:
    > There is no such parsing overhead. I speak from experience.
    >
    > Look at the BulkInserter class here
    > <http://www.codecodex.com/wiki/Useful_MySQL_Routines>. I have successfully
    > used that to insert tens of thousands of records in just a few seconds. Yet
    > it makes no use of the parameter-substitution provided by MySQLdb; it is all
    > just straight Python, including the SQLString routine (also on that page),
    > which goes through every single character of each string value to decide
    > what needs escaping. Python can do all that, and do it fast.


    With all due respect, but if your experience is exclusive to
    MySQL/MySQLdb, your experience means very little for database
    programming practices in general. Throughout most of its history, MySQL
    did not support prepared statements and parameter binding, and MySQLdb
    doesn't use any parameter binding API that might be available, so you're
    comparing your own implementation of string interpolation to MySQLdb's
    implementation of string interpolation. Your experience says nothing
    about databases that have an *actual* parameter binding API.

    > You don't get to figure out what's efficient and what's not by mere hand-
    > waving;


    I'm not handwaving.

    > you have to do actual real-world tests.


    I have.

    See for example the timing test in
    http://informixdb.blogspot.com/2007/07/filling-in-blanks.html . If you'd
    like to try it for yourself, here is a version of the test for SQLite:

    =================================================================
    # querytest.py
    class Tester(object):
        def __init__(self):
            import sqlite3
            conn = sqlite3.connect(":memory:")
            self.cur = conn.cursor()
            self.cur.execute("create temp table t1(a int, b int)")
            self.counter = 0

        def with_params(self):
            self.counter += 1
            self.cur.execute("insert into t1 values(?,?)",
                             (self.counter, self.counter*2))

        def without_params(self):
            self.counter += 1
            self.cur.execute("insert into t1 values(%s,%s)" %
                             (self.counter, self.counter*2))
    =================================================================

    And here are the corresponding results on my laptop:
    $ python -mtimeit -s "from querytest import Tester; t=Tester()" 't.with_params()'
    10000 loops, best of 3: 20.9 usec per loop
    $ python -mtimeit -s "from querytest import Tester; t=Tester()" 't.without_params()'
    10000 loops, best of 3: 36.2 usec per loop

    So, you can say whatever you want, but you will never convince me that
    string interpolation is better than parameter binding for getting
    variable values into a query. Even if you don't accept my proof that it
    is more efficient, you have not proved that parameter binding is less
    efficient.

    In addition to the efficiency factor, parameter binding is inherently
    secure, whereas string interpolation is too easy to use insecurely.
    Finally, parameter binding is the standard method, as defined by the SQL
    standard, of getting variable values into a query.

    You may call it "premature optimization", but I call it "choosing the
    right tool for the job."

    I assume that none of this will convince you, but that's fine. We'll
    just agree to disagree on this.

    --
    Carsten Haese
    http://informixdb.sourceforge.net
    Carsten Haese, Nov 3, 2009
    #19
  20. On Tue, 03 Nov 2009 09:41:10 +1300, Lawrence D'Oliveiro
    <_zealand> declaimed the following in
    gmane.comp.python.general:

    > There is no such parsing overhead. I speak from experience.
    >

    <snort>

    > Look at the BulkInserter class here
    > <http://www.codecodex.com/wiki/Useful_MySQL_Routines>. I have successfully
    > used that to insert tens of thousands of records in just a few seconds. Yet
    > it makes no use of the parameter-substitution provided by MySQLdb; it is all


    You picked the wrong database to use for your argument.

    Prior to MySQL version 5, MySQL ITSELF did not support prepared
    (that is, preparsed and parameterized) SQL with parameters passed
    separately.

    MySQLdb, being compatible with MySQL 3.x and 4.x (along with 5.x),
    therefore, ITSELF creates full SQL statements using Python string
    interpolation operations (the main reason it uses %s as the
    placeholder).

    So your main claim turns into:

    "My Python string interpolation is just as fast as MySQLdb's Python
    string interpolation"

    Don't believe me? Just take a look at the MySQLdb Python code -- the
    execute() method summarizes to (not the real names):

    resultSQL = SQL_with_placeholders % escape_arguments(argument_tuple)
    submit_to_MySQL(resultSQL)

    where escape_arguments() generates a string representation of each
    argument, escapes quotes and other dangerous characters, and then wraps
    the stringified arguments with MySQL quotes (which is why you can't use
    %f to specify a placeholder for floating numerics -- the numeric was
    converted to a quoted string before getting fed to the %f!)
    --
    Wulfraed Dennis Lee Bieber KD6MOG
    HTTP://wlfraed.home.netcom.com/
    Dennis Lee Bieber, Nov 3, 2009
    #20