Linguistically correct Python text rendering


David Opstad

I have a question about text rendering I'm hoping someone here can
answer. Is there a way of doing linguistically correct rendering of
Unicode strings in Python? In simple cases like Latin or Japanese I can
just print the string and see the correct results. However, I don't know
how to get Python to do the right thing for writing systems which
require contextual processing.

For example, let's say I create this Unicode string in Arabic:

string1 = u"\u0644\u062e\u0645" # lam khah meem

If I print string1, I get just the characters I specified above, which
is not correct rendering behavior for Arabic. In the simple case, I
should get an output string where the order is reversed, and the codes
have been changed into their contextually correct forms: initial lam
(\ufedf), medial khah (\ufea8) and final meem (\ufee2). Optionally, I
could even ask for a single ligature combining all three of these;
Unicode even encodes this at \ufd85.
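
As a quick sanity check (this only prints the official Unicode names of
these code points; it does no shaping), the standard library's
unicodedata module can be used:

import unicodedata

for code in (0xFEDF, 0xFEA8, 0xFEE2, 0xFD85):
    print "U+%04X %s" % (code, unicodedata.name(unichr(code)))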

The situation is even more complicated for writing systems like
Devanagari (used in Hindi and Marathi). In this case there are rules for
ligatures (called "conjuncts") which are linguistically required but not
encoded in Unicode. Technology was developed 15 years ago to deal with
this: fonts contain tables of extra information allowing access to the
correct conjunct glyphs at rendering time, even though those glyphs have
no Unicode code points.

Apple has software that deals with correctly rendering text in cases
like this (ATSUI); Microsoft does as well. IBM provides the ICU software
classes for this kind of rendering, and I believe FreeType has made a
start on dealing with AAT and OpenType fonts as well. So my question is
this: how do we get this functionality integrated into Python? I'd love
to be able to print any Unicode string and have it come out
linguistically correctly, even if I have to do a little extra formatting
(e.g. something like adding a new string-formatting conversion character
%t for typographically rich output).

Any ideas?

Dave Opstad
 

Josiah Carlson

David Opstad said:
Any ideas?

Perhaps using a GUI call that hooks into Windows or MacOS would be
sufficient for you. I don't know if tkinter or wxPython can handle it,
but I would imagine that standard Windows gui calls wouldn't have a
problem. Check out the pythonwin GUI extensions:
http://www.python.org/windows/pythonwin/
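
For what it's worth, here is a minimal Tkinter sketch of that approach;
whether the Arabic actually comes out shaped correctly depends entirely
on the toolkit and the platform's text engine, not on Python itself:

import Tkinter

root = Tkinter.Tk()
# Hand the raw Unicode string to the toolkit; the platform text
# engine decides whether contextual shaping is applied.
label = Tkinter.Label(root, text=u"\u0644\u062e\u0645", font=("Times", 48))
label.pack()
root.mainloop()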

- Josiah
 

Mike Maxwell

David Opstad said:
my question is this: how do we get this functionality integrated into
Python? I'd love to be able to print any Unicode string and have it
come out linguistically correctly

Isn't this a function of whatever app you're running Python code inside (a
terminal or something)? E.g. could you take the output of your Python
program as a file and display it in Yudit or a Pango app?

Mike Maxwell
 

David Opstad

"Mike Maxwell said:
Isn't this a function of whatever app you're running Python code inside (a
terminal or something)? E.g. could you take the output of your Python
program as a file and display it in Yudit or a Pango app?

In an interactive Python session on the Mac, the terminal window can
display Asian or accented Latin with no problem, so that:

>>> print u"\u4e00"

gives the correct output, the Chinese character "yi". (The terminal's
defaults are for UTF-8 text display.)

All I'm wondering is whether this odd asymmetry (between parts of
Unicode that display correctly with no further work and parts of Unicode
that need more active processing) is something that could be addressed
by adding more sophistication to Python's own output formatting. The
alternative is, as you suggest, to export Unicode text and open it with
another application, but I want Python to shine for all languages by
default!

Dave
 

Michael Hudson

David Opstad said:
In an interactive Python session on the Mac, the terminal window can
display Asian or accented Latin with no problem, so that:


>>> print u"\u4e00"

gives the correct output, the Chinese character "yi". (The terminal's
defaults are for UTF-8 text display.)

But it seems to be impossible to programmatically determine which
encoding the terminal being printed to is using at any given moment
(and the user can change this at run time). If I'm wrong about this,
I'd like to know.
All I'm wondering is whether this odd asymmetry (between parts of
Unicode that display correctly with no further work and parts of Unicode
that need more active processing) is something that could be addressed
by adding more sophistication to Python's own output formatting.

I don't think so. It depends a bit on what you mean by "Python's own
output formatting". Since 2.3, if sys.stdout is a terminal, Python
attempts to determine the encoding in use via the
"locale.nl_langinfo(locale.CODESET)" approach, but whether this
actually works seems to be a bit random. It certainly isn't going to
work on Mac OS X, which AFAICT ignores locales as much as the ISO C
standard lets it get away with.
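
That check can be tried directly; a minimal sketch (note that
nl_langinfo is only available on Unix, and the locale must be set
first):

import sys
import locale

# What Python decided at startup; None when stdout is not a terminal.
print sys.stdout.encoding

# The locale-based guess; unreliable on platforms that ignore locales.
locale.setlocale(locale.LC_ALL, "")
print locale.nl_langinfo(locale.CODESET)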

What more would you have us do?

If you're using some other means to display Unicode text, it's up to
that means to ensure it gets displayed correctly. For PyObjC, It Just
Works, I think.

Cheers,
mwh
 

David Opstad

Michael Hudson said:
But it seems to be impossible to programmatically determine which
encoding the terminal being printed to at a given moment is using (and
the user can fiddle this at run time). If I'm wrong about this, I'd
like to know.

The encoding issue is peripheral to my point; sorry if I wasn't clearer
in my original message. It doesn't matter what the encoding is. The main
issue is that for some writing systems (e.g. Arabic) simply outputting
the characters in a Unicode string, irrespective of encoding, will
produce garbled results.
What more would you have us do?

Well, for those writing systems whose presentation forms are included in
Unicode, how about a further processing step? So that at a minimum, if I
start with an Arabic string like "abc" I can get out an Arabic string
like "CBA" where bidi reordering has happened, and contextual
substitution has been done. Then, outputting the processed Unicode
string using stdout will work without further intervention (assuming a
font for the writing system is present, of course).
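
To make that processing step concrete, here is a deliberately naive
sketch using only the standard library's unicodedata module. It assumes
every letter joins on both sides and fakes bidi by simply reversing the
string; a real shaper must handle non-joining letters (alef, for
example), and the real bidi algorithm is far more involved:

import unicodedata

def naive_shape(s):
    out = []
    for i, ch in enumerate(s):
        # Pick a positional form: first letter initial, last final,
        # everything in between medial.
        if len(s) == 1:
            position = "ISOLATED"
        elif i == 0:
            position = "INITIAL"
        elif i == len(s) - 1:
            position = "FINAL"
        else:
            position = "MEDIAL"
        # Presentation forms have systematic Unicode names, e.g.
        # "ARABIC LETTER LAM INITIAL FORM" is U+FEDF.
        out.append(unicodedata.lookup("%s %s FORM" %
                                      (unicodedata.name(ch), position)))
    # Crude stand-in for bidi reordering: reverse the whole string.
    return u"".join(reversed(out))

print naive_shape(u"\u0644\u062e\u0645")  # lam khah meem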

It's probably irrational of me, I admit, but I'd love to see Python
correctly render *any* Unicode string, not just the subsets requiring no
reordering or contextual processing.

Dave
 

Michael Hudson

David Opstad said:
The encoding issue is peripheral to my point; sorry if I wasn't clearer
in my original message. It doesn't matter what the encoding is. The main
issue is that for some writing systems (e.g. Arabic) simply outputting
the characters in a Unicode string, irrespective of encoding, will
produce garbled results.


Well, for those writing systems whose presentation forms are included in
Unicode, how about a further processing step? So that at a minimum, if I
start with an Arabic string like "abc" I can get out an Arabic string
like "CBA" where bidi reordering has happened, and contextual
substitution has been done. Then, outputting the processed Unicode
string using stdout will work without further intervention (assuming a
font for the writing system is present, of course).

Ah, OK. You are now officially beyond my level of expertise :) You
might want to talk to the i18n-sig.

This sounds very much like the sort of thing that could/should be
developed externally to Python and then perhaps folded in later, a la
CJKCodecs.
It's probably irrational of me, I admit, but I'd love to see Python
correctly render *any* Unicode string, not just the subsets requiring no
reordering or contextual processing.

I still think "render" is probably the wrong word to use here, though.

Cheers,
mwh
 
