Dealing with "funny" characters


sophie_newbie

Hi, I want to store Python text strings that contain characters like "é" and "Č"
in a MySQL varchar text field. Now my problem is that MySQL does not
seem to accept these characters. I'm wondering if there is any way I
can somehow "encode" these characters to appear as normal characters
and then "decode" them when I want to get them out of the database
again?

-Thanks.
 

Paul Hankin

Hi, I want to store Python text strings that contain characters like "é" and "Č"
in a MySQL varchar text field. Now my problem is that MySQL does not
seem to accept these characters. I'm wondering if there is any way I
can somehow "encode" these characters to appear as normal characters
and then "decode" them when I want to get them out of the database
again?

Use Unicode strings, and use the 'encode' and 'decode' methods. MySQL
has UTF-8 support for varchars, so that seems a likely encoding to use.
If you're stuck with ASCII, you can use 'unicode_escape' if you're
only accessing it from Python.
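A minimal sketch of that 'unicode_escape' round trip (written in Python-3 syntax, which the thread predates; the stored form is Python-specific, so this only works if Python does both the encoding and the decoding):

```python
# Round-trip non-ASCII text through an ASCII-only channel using the
# 'unicode_escape' codec.
original = u"caf\u00e9 \u010cesko"            # "café Česko"

stored = original.encode("unicode_escape")    # pure-ASCII bytes
# Non-ASCII characters become backslash escapes:
assert stored == b"caf\\xe9 \\u010cesko"

restored = stored.decode("unicode_escape")    # back to the original text
assert restored == original
```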

But first read this excellent article on 'funny' characters:
http://www.joelonsoftware.com/articles/Unicode.html
 

Gert-Jan

sophie_newbie wrote:
Hi, I want to store Python text strings that contain characters like "é" and "Č"
in a MySQL varchar text field. Now my problem is that MySQL does not
seem to accept these characters. I'm wondering if there is any way I
can somehow "encode" these characters to appear as normal characters
and then "decode" them when I want to get them out of the database
again?


It seems you'll have to use Unicode in your program rather than 'plain'
strings.

Before storing a Unicode text string in a database or a file, you must
encode it using an appropriate encoding/codepage, for example:

outputstring = unicodeobject.encode('utf-8')
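The matching decode on the way out looks like this (a minimal sketch in Python-3 syntax; variable names are illustrative):

```python
# Encode before storage, decode after retrieval.
text = u"Libert\u00e9"               # a Unicode text string, "Liberté"

data = text.encode("utf-8")          # bytes suitable for a file or DB column
assert data == b"Libert\xc3\xa9"     # 'é' becomes the two bytes 0xc3 0xa9

back = data.decode("utf-8")          # decode with the SAME encoding
assert back == text
```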


This is not an easy subject, but it isn't very hard either. Please
search the Python documentation using these keywords: Unicode, decoding
and encoding.


But you should read this first:
http://www.joelonsoftware.com/articles/Unicode.html
It's an entertaining introduction to the world of codepages and unicode :)


Regards,
Gert-Jan
 

John Nagle

Gert-Jan said:
sophie_newbie schreef:


It seems you'll have to use Unicode in your program rather than 'plain'
strings.

Before storing a Unicode text string in a database or a file, you must
encode it using an appropriate encoding/codepage, for example:

outputstring = unicodeobject.encode('utf-8')

No, no, that's wrong. MySQL and the Python interface to it understand
Unicode. You don't want to convert data to UTF-8 before putting it in a
database; the database indexing won't work.

Here's how to do it right.

First, tell MySQL, before you create your MySQL tables, that the tables are
to be stored in Unicode:

ALTER database yourdatabasename DEFAULT CHARACTER SET utf8;

You can also do this on a table by table basis, or even for single fields,
but you'll probably get confused if you do.

Then, when you connect to the database in Python, use something like this:

db = MySQLdb.connect(host="localhost",
                     use_unicode=True, charset="utf8",
                     user=username, passwd=password, db=database)

That tells MySQLdb to talk to the database in Unicode, and it tells the database
(via "charset") that you're talking Unicode.

Within Python, you can use Unicode as well. If you have a Unicode text
editor, you can create Python source files in Unicode and have Unicode text
constants in quotes. If you do this, you should put

# -*- coding: UTF-8 -*-

as the first line of Python files. Quoted constants should be written
as

s = u'Test'

rather than

s = 'Test'

Instead of "str()", use "unicode()".

Once everything is set up like this, you can pass Unicode in and out of MySQL
databases freely, and all the SQL commands will work properly on Unicode data.
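That end-to-end round trip is easiest to demonstrate without a server: the sketch below deliberately substitutes the stdlib sqlite3 module for MySQLdb (sqlite3 needs no setup and its DB-API accepts and returns Unicode strings natively), but a MySQLdb connection configured as above behaves the same way. The name used is made up.

```python
import sqlite3

# In-memory database: no server, no configuration needed.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (name TEXT)")

# Parameterized insert: the DB-API handles the Unicode value directly.
conn.execute("INSERT INTO people (name) VALUES (?)",
             (u"Ren\u00e9 \u010c\u00e1p",))

# The value comes back as a Unicode string, unchanged.
(row,) = conn.execute("SELECT name FROM people").fetchall()
assert row[0] == u"Ren\u00e9 \u010c\u00e1p"
conn.close()
```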

John Nagle
 

Diez B. Roggisch

No, no, that's wrong. MySQL and the Python interface to it understand
Unicode. You don't want to convert data to UTF-8 before putting it in a
database; the database indexing won't work.

I doubt that indexing has anything to do with it whatsoever.
Here's how to do it right.

First, tell MySQL, before you create your MySQL tables, that the tables are
to be stored in Unicode:

ALTER database yourdatabasename DEFAULT CHARACTER SET utf8;

You can also do this on a table by table basis, or even for single fields,
but you'll probably get confused if you do.

Then, when you connect to the database in Python, use something like this:

db = MySQLdb.connect(host="localhost",
                     use_unicode=True, charset="utf8",
                     user=username, passwd=password, db=database)

That tells MySQLdb to talk to the database in Unicode, and it tells the
database (via "charset") that you're talking Unicode.

You confuse Unicode with UTF-8 here. And while this appears to be
nitpicking, it is important to write this small program and meditate for
the better part of an hour in front of it while it runs:

while True:
    print "utf-8 is not unicode"


You continue to make that error below, so I snip that.

The important part is this: Unicode is a standard that aims to provide a
codepoint for each and every character that humankind has invented. And
Python unicode objects can represent all of those characters.

However, Unicode as such is an abstraction. Hard disks, network sockets,
databases and the like don't deal with abstractions though - they eat
bytes. Which makes it necessary to encode unicode objects to
byte strings when serializing them. Thus there are the thingies called
encodings: latin1, for example, covers most characters used in western
Europe. But it is limited to 256 characters (actually, even less);
Chinese or Russian customers won't get too happy with it.

So some encodings are defined that are capable of encoding _ALL_ Unicode
codepoints, either by using more than one byte for each character, or by
providing escape mechanisms. An example of the former is UCS4 (4 bytes per
character); the most important member of the latter group is UTF-8, which
uses plain ASCII for the first 128 codepoints and multi-byte sequences for
everything else.
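The size difference is easy to check (a small sketch using Python's codec names; UTF-32 stands in here for the fixed 4-bytes-per-character UCS4):

```python
# Three codepoints: one ASCII, one Latin, one CJK character.
s = u"A\u00e9\u4e2d"

# Fixed-size encoding: always 4 bytes per character.
assert len(s.encode("utf-32-be")) == 3 * 4

# UTF-8: 1 byte for ASCII, 2 for the Latin char, 3 for the CJK char.
assert len(s.encode("utf-8")) == 1 + 2 + 3
```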

Now what does that mean in Python?

First of all, the coding: declaration: it tells Python which encoding to
use when decoding unicode literals, which are the

u"something"

thingies. If you use a coding of latin1, that means that the text

u"ö"

is expected to be one byte long in the source file, with the value that
depicts the German umlaut ö in latin1. Which is 0xf6.

If coding: is set to utf-8, the same string has to consist not of one,
but of two bytes: 0xc3 0xb6.
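Those byte values can be checked directly (Python-3 syntax):

```python
ch = u"\u00f6"                            # LATIN SMALL LETTER O WITH DIAERESIS

assert ch.encode("latin1") == b"\xf6"     # one byte in latin1
assert ch.encode("utf-8") == b"\xc3\xb6"  # two bytes in utf-8
```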

So, when editing files that are supposed to contain "funny" characters,
you have to

- set your editor to save the file in an appropriate encoding

- specify the same encoding in the coding:-declaration

Regarding databases: they store bytes, mostly. Some allow storing
Unicode by means of one of the fixed-size encodings, but you pay a
storage-size penalty for that.

So you were right when you said that one can change the encoding a db
uses, on several levels even.

But that's not all there is to it. Another thing is the encoding in which
the CONNECTION expects byte strings to be passed, and in which it will
render returned strings. The conversion from and to the storage encoding
is done automagically.

It is for example perfectly legal (and unfortunately happens
involuntarily) to have a database that internally uses utf-8 for storage,
and is thus potentially able to store all possible codepoints.

But due to e.g. environment settings, opened connections will deliver
the contents in e.g. latin1. Which of course will lead to problems if
you try to return data from the table with the most common Chinese first
names.

So you can alter the encoding in which the connection delivers and expects
byte strings. In MySQL, this can be done explicitly using

cursor.execute("set names <encoding>")

Or - as you said - as part of a connection-string.

db = MySQLdb.connect(host="localhost",
                     use_unicode=True, charset="utf8",
                     user=username, passwd=password, db=database)


But there is more to it. If the DB-API supports it, then the API itself
will decode the returned strings, using the specified encoding, so that
the user will only deal with "real" unicode objects, greatly reducing
the risk of mixing byte strings with unicode objects. That's what the
use_unicode parameter is for: it makes the API accept and deliver
unicode objects. But it would do so even if the charset parameter were
"latin1". Which makes me repeat the lesson from the beginning:

while True:
    print "utf-8 is not unicode"


Diez
 

John Nagle

Diez said:
I doubt that indexing has anything to do with it whatsoever.

Of course it does. ORDER BY, LIKE, TRIM, and other SQL expressions that
do more than an equality comparison need to know the actual data
representation. If you were to convert to UTF-8 or UCS-2 in the Python
program and send the resulting byte string to MySQL, with MySQL thinking
it was storing ASCII or a BLOB, many SQL functions won't work right. A
database is not a file system; a database looks at the data.
You confuse unicode with utf-8 here.
.... pontification deleted
But due to e.g. environment settings, opened connections will deliver
the contents in e.g. latin1. Which of course will lead to problems if
you try to return data from the table with the most common Chinese first
names. ???

So you can alter the encoding in which the connection delivers and expects
byte strings. In MySQL, this can be done explicitly using

cursor.execute("set names <encoding>")

Or - as you said - as part of a connection-string.

db = MySQLdb.connect(host="localhost",
                     use_unicode=True, charset="utf8",
                     user=username, passwd=password, db=database)


But there is more to it. If the DB-API supports it, then the API itself
will decode the returned strings, using the specified encoding, so that
the user will only deal with "real" unicode objects, greatly reducing
the risk of mixing byte strings with unicode objects. That's what the
use_unicode parameter is for: it makes the API accept and deliver
unicode objects. But it would do so even if the charset parameter were
"latin1".

Yes, and that's taken care of if the connection is set up as above.
For legacy reasons, there are both "use_unicode" and "charset" parameters
to "connect", and they can be set in incompatible ways.
Also, you need MySQL 5 or later to get full Unicode support.

John Nagle
 

Diez B. Roggisch

I doubt that indexing has anything to do with it whatsoever.
Of course it does. ORDER BY, LIKE, TRIM, and other SQL expressions that
do more than an equality comparison need to know the actual data
representation. If you were to convert to UTF-8 or UCS-2 in the Python
program and send the resulting byte string to MySQL, with MySQL thinking
it was storing ASCII or a BLOB, many SQL functions won't work right. A
database is not a file system; a database looks at the data.

Garbage in, garbage out. But putting correctly encoded data into it won't
cause any trouble, so "You don't want to convert data to UTF-8 before
putting it in a database; the database indexing won't work." is utter
nonsense.
... pontification deleted

Pontification in contrast to what - your highly informative posts like this?
http://mail.python.org/pipermail/python-list/2007-October/461375.html

I'm sure there are other daily routines your audience here can't wait to be
informed of in regular intervals.

Just because you write nonsense like
"""
    First, tell MySQL, before you create your MySQL tables, that the tables
are to be stored in Unicode:

        ALTER database yourdatabasename DEFAULT CHARACTER SET utf8;
"""

confusing unicode with an encoding of it doesn't make me pontificate.

Diez
 
