Flexible string representation, unicode, typography, ...

wxjmfauth

On Saturday, August 25, 2012 at 11:46:34 UTC+2, Frank Millman wrote:
Here's what I think he is saying. I am posting this to test the water. I
am also confused, and if I have got it wrong hopefully someone will
correct me.

In python 3.3, unicode strings are now stored as follows -
if all characters can be represented by 1 byte, the entire string is
composed of 1-byte characters;
else if all characters can be represented by 1 or 2 bytes, the entire
string is composed of 2-byte characters;
else the entire string is composed of 4-byte characters.

There is an overhead in making this choice, to detect the lowest number
of bytes required.

jmfauth believes that this only benefits 'english-speaking' users, as
the rest of the world will tend to have strings where at least one
character requires 2 or 4 bytes. So they incur the overhead, without
getting any benefit.

Therefore, I think he is saying that he would have preferred that python
standardise on 4-byte characters, on the grounds that the saving in
memory does not justify the performance overhead.



Frank Millman

Very well explained. Thanks.
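
One can even see the mechanism directly. A small sketch (the byte counts
are CPython implementation details and vary by build, so treat the exact
numbers as illustrative only):

import sys

samples = ["a" * 100,            # all ASCII        -> 1 byte per character
           "é" * 100,            # still latin-1    -> 1 byte per character
           "€" * 100,            # needs 2 bytes    -> 2 bytes per character
           "\U00024B62" * 100]   # astral plane     -> 4 bytes per character

for s in samples:
    print(len(s), sys.getsizeof(s))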

More precisely, those affected are not only the 'english-speaking'
users, but all the users who are using non-latin-1 characters.
(See the title of this topic, ... typography).

Being at the same time latin-1 compliant and unicode compliant is
a plain absurdity in the mathematical sense.

---

For those of you who do not know, the Go language has introduced
the rune type. As far as I know, nobody is complaining; I
have not even seen a discussion related to this subject.

100% Unicode compliant from day 0. Congratulations.

jmf
 
Ian Kelly

For those of you who do not know, the Go language has introduced
the rune type. As far as I know, nobody is complaining; I
have not even seen a discussion related to this subject.

Python has that also. We call it "int".

More seriously, strings in Go are not sequences of runes. They're
actually arrays of UTF-8 bytes. That means that they're quite
efficient for ASCII strings, at the expense of other characters, like
Chinese (wait, this sounds familiar for some reason). It also means
that you have to bend over backwards if you want to work with actual
runes instead of bytes. Want to know how many characters are in your
string? Don't call len() on it -- that will only tell you how many
bytes are in it. Don't try to index or slice it either -- that will
(accidentally) work for ASCII strings, but for other strings your
indexes will be wrong. If you're unlucky you might even split up the
string in the middle of a character, and now your string has invalid
characters in it. The right way to do it looks something like this:

len([]rune("白鵬翔")) // get the length of the string in characters
string([]rune("白鵬翔")[0:2]) // get the substring containing the first
two characters

It reminds me of working in Python 2.X, except that instead of an
actual unicode type you just have arrays of ints.
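
For comparison, here is roughly what the same trap looks like if you
deliberately work on the UTF-8 bytes in Python 3 (a sketch, not Go code):

s = "白鵬翔"                  # str: three characters
b = s.encode("utf-8")         # bytes: the UTF-8 encoding, which is what a Go string holds

print(len(s), len(b))         # 3 9
print(s[0:2])                 # 白鵬 -- slicing characters works
print(b[0:2])                 # b'\xe7\x99' -- half of the first character
print(b[0:2].decode("utf-8", "replace"))   # the damage shows up as replacement characters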
 
wxjmfauth

On Sunday, August 26, 2012 at 00:26:56 UTC+2, Ian wrote:
For those of you who do not know, the Go language has introduced
the rune type. As far as I know, nobody is complaining; I
have not even seen a discussion related to this subject.

Python has that also. We call it "int".

More seriously, strings in Go are not sequences of runes. They're
actually arrays of UTF-8 bytes. That means that they're quite
efficient for ASCII strings, at the expense of other characters, like
Chinese (wait, this sounds familiar for some reason). It also means
that you have to bend over backwards if you want to work with actual
runes instead of bytes. Want to know how many characters are in your
string? Don't call len() on it -- that will only tell you how many
bytes are in it. Don't try to index or slice it either -- that will
(accidentally) work for ASCII strings, but for other strings your
indexes will be wrong. If you're unlucky you might even split up the
string in the middle of a character, and now your string has invalid
characters in it. The right way to do it looks something like this:

len([]rune("白鵬翔")) // get the length of the string in characters
string([]rune("白鵬翔")[0:2]) // get the substring containing the first
two characters

It reminds me of working in Python 2.X, except that instead of an
actual unicode type you just have arrays of ints.


Sorry, you do not get it.

The rune is an alias for int32. A sequence of runes is a
sequence of int32's. Go does not spend its time using
machinery to work with, to differentiate, or to keep in memory
this sequence according to the *characters* composing this
"array of code points".

The message is even stronger. Use runes to work comfortably [*]
with unicode:
rune -> int32 -> utf32 -> unicode (the perfect scheme, can't be
better)

[*] Beyond my skill and my knowledge, and if I understood correctly,
this rune is even technically optimized to ensure it is always
an int32.

len() or slices have nothing to do with it here.

My experience with Go is equal to zero + epsilon.

jmf
 
Steven D'Aprano

On Sunday, August 26, 2012 at 00:26:56 UTC+2, Ian wrote:

Actually, it's worse than that. Strings in Go aren't even proper UTF-8.
They are arbitrary bytes, which means you can create strings which are
invalid Unicode.

Go looks like an interesting language, but it seems to me that they have
totally screwed up strings. At least Python had the excuse that it is 20
years old and carrying the old ASCII baggage. Nobody used Unicode in 1992
when Python was invented. What is Google's excuse for getting Unicode
wrong?

In Go, strings are UTF-8 encoded sequences of bytes, except when they're
not, in which case they're arbitrary bytes. You can't tell if a string is
valid UTF-8 unless you carefully inspect every single character and
decide for yourself if it is valid. Don't know the rules for valid UTF-8?
Too bad.

This also means that basic string operations like slicing are both *slow*
and *wrong* -- they are slow, because you have to track character
boundaries yourself. And they are wrong, because most people won't
bother, they'll just assume each character is one byte.

See here for more information:

http://comments.gmane.org/gmane.comp.lang.go.general/56245

Some useful quotes:

- "Strings are *not* required to be UTF-8."

- "If the string must always be valid UTF-8 then relatively expensive
validation is required for many operations. Plus making those
operations able to fail complicates the interface."

- "In almost all cases strings are just byte arrays."

- "Go simply doesn't have 8-bit Unicode strings"

- "Python3 can afford the luxury of storing strings in UCS-2/UCS-4,
Go can't."

I don't question that Go needs a type for arbitrary bytes. But that
should be "bytes", not "string", and it should be there for the advanced
programmers who *need* to worry about bytes. Programmers who want to
include strings in their applications (i.e. all of them) shouldn't need
to care that "$" is one byte, "¢" is two, "€" is three, and "𤭢"
(U+24B62) is four. With Python 3.3, it *just works*. With Go, it doesn't.
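
(A quick Python 3 illustration of those sizes, just for the record:

for ch in "$", "¢", "€", "\U00024B62":
    print(ch, len(ch), len(ch.encode("utf-8")))

prints a length of 1 for every character, and 1, 2, 3 and 4 bytes
respectively for their UTF-8 encodings.)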

In my not-so-humble opinion, Go has made a silly design error. Go
programmers will be paying for this mistake for at least a decade. What
they should have done is create two data types:

1) Strings which are guaranteed to be valid Unicode. That could be UTF-32
or a PEP 393 approach, depending on how much memory you want to use, or
even UTF-16 if you don't mind the complication of surrogate pairs.

2) Bytes which are not guaranteed to be valid Unicode but let the
programmer work with arbitrary bytes.

(If this sounds familiar, it should -- it is exactly what Python 3 does.
We have a string type that guarantees to be valid Unicode, and a bytes
type that doesn't.)

As given, *every single programmer* who wants to use Unicode in Go is now
responsible for doing all the hard work of validating UTF-8, converting
from bytes to strings, etc. Sure, eventually Go will have libraries to do
that, but not yet, and even when it does, many people will not use them
and their code will fail to handle Unicode correctly.

Right now, every Go programmer who wants Unicode has to pay the cost of
the freedom to have arbitrary byte sequences, whether they need those
arbitrary bytes or not. The consequence is that instead of Go making
Unicode as trivial and easy to use as it should be, it will be hard to
get right, annoying, slow and painful. Another generation of programmers
will grow up thinking that Unicode is all too difficult and we should
stick to just plain ASCII.

Since Go doesn't have Unicode strings, you can never trust that a string
is valid UTF-8, you can't slice it efficiently, you can't get the length
in characters, and you can't write it to a file and expect other applications
to be able to read it. Sure, sometimes it will work, and then somebody
will input a Euro sign into your application, and it will blow up.
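
To see the kind of blow-up I mean, in Python terms (a sketch):

data = "café".encode("utf-8")[:-1]   # chop the trailing 'é' in half: invalid UTF-8
data.decode("utf-8")                 # raises UnicodeDecodeError -- Python 3 refuses the
                                     # bad bytes at the boundary instead of letting them
                                     # propagate silently through the program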

Why am I not surprised that JMF misunderstands both Go byte-strings and
Python Unicode strings?

Sorry, you do not get it.

The rune is an alias for int32. A sequence of runes is a sequence of
int32's.

It certainly is not. Runes are variable-width. Here, for example, are a
number of Go functions which return a single rune and its width in bytes:

http://golang.org/pkg/unicode/utf8/

Go does not spend its time using machinery to work with, to
differentiate, or to keep in memory this sequence according to the
*characters* composing this "array of code points".

The message is even stronger. Use runes to work comfortably [*] with
unicode:
rune -> int32 -> utf32 -> unicode (the perfect scheme, can't be better)

Runes are not int32, and int32 is not UTF-32.

Whether UTF-32 is the "perfect scheme" for Unicode is a matter of opinion.
 
Ian Kelly

It certainly is not. Runes are variable-width. Here, for example, are a
number of Go functions which return a single rune and its width in bytes:

http://golang.org/pkg/unicode/utf8/

I think the documentation for those functions is simply badly worded.
The "width in bytes" it returns is not the width of the rune (which as
jmf notes is simply an alias for int32 that stores a single code
point). It means the UTF-8 width of the character, i.e. the number of
UTF-8 bytes the function "consumed", presumably so that the caller can
then reslice the data with that many bytes fewer.
 
Ian Kelly

Sorry, you do not get it.

The rune is an alias for int32. A sequence of runes is a
sequence of int32's. Go does not spend its time using
machinery to work with, to differentiate, or to keep in memory
this sequence according to the *characters* composing this
"array of code points".

The message is even stronger. Use runes to work comfortably [*]
with unicode:
rune -> int32 -> utf32 -> unicode (the perfect scheme, can't be
better)

I understand what rune is. I think you've missed my complaint, which
is that although rune is the basic building block of Unicode strings
-- representing a single Unicode character -- strings in Go are not
built from runes but from bytes. If you want to do any actual work
with Unicode strings, then you have to first convert them to runes or
arrays of runes. The conceptual cost of this is that the object
you're working with is no longer a string.

You call this the "perfect scheme" for working with Unicode. Why does
the "perfect scheme" for Unicode make it *easier* to write buggy code
that only works for ASCII than to write correct code that works for
all characters? This is IMO where Python 3 gets it right. When you
want to work with Unicode strings, you just work with Unicode strings
-- none of this nonsense of first explicitly converting the string to
an array of ints that looks nothing like a string at a high level.
The only place Python 3 makes you worry about converting strings is at
the boundaries of your program, where decoding from bytes to strings
and back is necessary.
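
A minimal sketch of that boundary discipline (the file names here are
just placeholders):

with open("input.txt", "rb") as f:
    text = f.read().decode("utf-8")          # decode once, where bytes enter the program

words = text.upper().split()                 # everything in between is plain str work

with open("output.txt", "wb") as f:
    f.write(" ".join(words).encode("utf-8")) # encode once, where bytes leave again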
 
Steven D'Aprano

I think the documentation for those functions is simply badly worded.
The "width in bytes" it returns is not the width of the rune (which as
jmf notes is simply an alias for int32 that stores a single code point).

Is this documented somewhere?

I can't tell you how long I spent unsuccessfully googling for variations
on "go language runes", which unsurprisingly mostly came back with pages
about Germanic runes and elf runes but not Go runes. I read the golang
FAQs, which mentioned Unicode *once* and runes not at all. Obviously Go
language programmers don't care much about Unicode.

It means the UTF-8 width of the character, i.e. the number of UTF-8
bytes the function "consumed", presumably so that the caller can then
reslice the data with that many bytes fewer.

That makes sense, given the lousy string implementation and API they're
working with.

I note that not all 32-bit ints are valid code points. I suppose I can
see sense in having rune be a 32-bit integer value limited to those valid
code points. (But, dammit, why not call it a code point?) But if rune is
merely an alias for int32, why not just call it int32?
 
Dan Sommers

I note that not all 32-bit ints are valid code points. I suppose I can
see sense in having rune be a 32-bit integer value limited to those
valid code points. (But, dammit, why not call it a code point?) But if
rune is merely an alias for int32, why not just call it int32?

Having a "code point" type is a good idea. If nothing else, human code
readers can tell that you're doing something with characters rather than
something with integers. If your language provides any sort of type
safety, then you get that, too.
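
Even in Python, which has no int32, you can give the concept a name.
Purely as an illustration of the idea (not a proposal):

class CodePoint(int):
    """An int that is documented, to readers and to tools, as a code point."""
    def __repr__(self):
        return "CodePoint(U+%04X)" % int(self)

cp = CodePoint(0x24B62)
print(chr(cp))      # the character itself; cp still behaves like an int where needed
print(repr(cp))     # CodePoint(U+24B62) instead of an anonymous 150370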

Calling your code points int32 is a bad idea for the same reason that it
turned out to be a bad idea to call all my old ASCII characters int8.
Or all my pointers int<n> (or unsigned int<n>), for n in 16, 20, 24, 32,
36, 48, or 64 (or I'm sure other values of n that I never had the pain
or pleasure of using).

Dan
 
wxjmfauth

On Sunday, August 26, 2012 at 22:45:09 UTC+2, Dan Sommers wrote:
Having a "code point" type is a good idea. If nothing else, human code
readers can tell that you're doing something with characters rather than
something with integers. If your language provides any sort of type
safety, then you get that, too.

Calling your code points int32 is a bad idea for the same reason that it
turned out to be a bad idea to call all my old ASCII characters int8.
Or all my pointers int<n> (or unsigned int<n>), for n in 16, 20, 24, 32,
36, 48, or 64 (or I'm sure other values of n that I never had the pain
or pleasure of using).

And this is precisely the concept of rune, a real int which
is a name for a Unicode code point.

Go "has" the integers int32 and int64. A rune ensures
the usage of int32. "Text libs" use runes. Go has only
bytes and runes.

If you do not like the word "perfection", this mechanism
has at least an ideal simplicity (with probably a lot
of positive consequences).

rune -> int32 -> utf32 -> unicode code points.

- Why int32 and not uint32? No idea, I tried to find an
answer without asking.
- I find the name "rune" elegant. "char" would have been
too confusing.

End. This is supposed to be a Python forum.
jmf
 
Ian Kelly

- Why int32 and not uint32? No idea, I tried to find an
answer without asking.

UCS-4 is technically only a 31-bit encoding. The sign bit is not used,
so the choice of int32 vs. uint32 is inconsequential.

(In fact, since they made the decision to limit Unicode to the range 0
- 0x0010FFFF, one might even point out that the *entire high-order
byte* as well as 3 bits of the next byte are irrelevant. Truly,
UTF-32 is not designed for memory efficiency.)
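
(Concretely, the largest code point needs only 21 bits:

print(hex(0x10FFFF), (0x10FFFF).bit_length())   # 0x10ffff 21

so 11 of the 32 bits in every UTF-32 code unit are always zero.)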
 
wxjmfauth

On Monday, August 27, 2012 at 22:14:07 UTC+2, Ian wrote:
UCS-4 is technically only a 31-bit encoding. The sign bit is not used,
so the choice of int32 vs. uint32 is inconsequential.

(In fact, since they made the decision to limit Unicode to the range 0
- 0x0010FFFF, one might even point out that the *entire high-order
byte* as well as 3 bits of the next byte are irrelevant. Truly,
UTF-32 is not designed for memory efficiency.)

I know all this. The question is more: why not a uint32, knowing
there are only positive code points? It seems more "natural" to me.
 
rusi

(e-mail address removed):


     Go's text libraries use UTF-8 encoded byte strings. Not arrays of
runes. See, for example, http://golang.org/pkg/regexp/

    Are you claiming that UTF-8 is the optimum string representation and
therefore should be used by Python?

    Neil




This whole rune/go business is a red-herring.
In an earlier message in this thread, someone said:
OK, that is roughly factor 5. Let's see what I get:

$ python3.2 -m timeit '("€"*100+"€"*100).replace("€", "œ")'
100000 loops, best of 3: 1.8 usec per loop
$ python3.3 -m timeit '("€"*100+"€"*100).replace("€", "œ")'
10000 loops, best of 3: 9.11 usec per loop

That is factor 5, too. So I can replicate your measurement on an AMD64 Linux
system with self-built 3.3 versus system 3.2.


You seem to imply that the slowdown is connected to the inability of latin-1
to encode "œ" and "€" (to take the examples relevant to the above
microbench). So let's repeat with latin-1 characters:

$ python3.2 -m timeit '("ä"*100+"ä"*100).replace("ä", "ß")'
100000 loops, best of 3: 1.76 usec per loop
$ python3.3 -m timeit '("ä"*100+"ä"*100).replace("ä", "ß")'
10000 loops, best of 3: 10.3 usec per loop

Hm, the slowdown is even a tad bigger. So we can safely dismiss your theory
that an unfortunate choice of the 8 bit encoding is causing it. Do you

In summary:
1. The problem is not on jmf's computer
2. It is not windows-only
3. It is not directly related to latin-1 encodable or not

The only question which is not yet clear is this:
Given a typical string operation that is complexity O(n), in more
detail it is going to be O(a + bn)
If only a is worse going from 3.2 to 3.3, it may be a small issue.
If b is worse by even a tiny amount, it is likely to be a significant
regression for some use-cases.
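
One crude way to separate a from b, sketched with the stdlib timeit
module: time the same operation at two very different lengths and see
which part grows.

import timeit

for n in (10, 10000):
    setup = 's = "ä" * %d' % n
    best = min(timeit.repeat('s.replace("ä", "ß")', setup=setup,
                             repeat=5, number=10000))
    print(n, best / 10000)   # per-call time; run under 3.2 and 3.3 and compare
                             # how the n-dependent part (b) changes versus the
                             # flat part (a)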

So doing some arm-chair thinking (I don't know the code and difficulty
involved):

Clearly there are 3 string-engines in the python 3 world:
- 3.2 narrow
- 3.2 wide
- 3.3 (flexible)

How difficult would it be to give the choice of string engine as a
command-line flag?
This would avoid the nuisance of having two binaries -- narrow and
wide.
And it would give the python programmer a choice of efficiency
profiles.
 
Chris Angelico

Clearly there are 3 string-engines in the python 3 world:
- 3.2 narrow
- 3.2 wide
- 3.3 (flexible)

How difficult would it be to give the choice of string engine as a
command-line flag?
This would avoid the nuisance of having two binaries -- narrow and
wide.
And it would give the python programmer a choice of efficiency
profiles.

To what benefit?

3.2 narrow is, I would have to say, buggy. It handles everything up to
\uFFFF without problems, but once you have any character beyond that,
your indexing and slicing are wrong.

3.2 wide is fine but memory-inefficient.

3.3 is never worse than 3.2 except for some tiny checks, and will be
more memory-efficient in many cases.

Supporting narrow would require fixing the handling of surrogates.
Potentially a huge job, and you'll end up with ridiculous performance
in many cases.

So what you're really asking for is a command-line option to force all
strings to have their 'kind' set to 11, UCS-4 storage. That would be
doable, I suppose; it wouldn't require many changes (just a quick
check in string creation functions). But what would be the advantage?
Every string requires 4 bytes per character to store; an optimization
has been lost.

ChrisA
 
Ian Kelly

In summary:
1. The problem is not on jmf's computer
2. It is not windows-only
3. It is not directly related to latin-1 encodable or not

The only question which is not yet clear is this:
Given a typical string operation that is complexity O(n), in more
detail it is going to be O(a + bn)
If only a is worse going from 3.2 to 3.3, it may be a small issue.
If b is worse by even a tiny amount, it is likely to be a significant
regression for some use-cases.

As has been pointed out repeatedly already, this is a microbenchmark.
jmf is focusing on one particular area (string construction) where
Python 3.3 happens to be slower than Python 3.2, ignoring the fact
that real code usually does lots of things other than building
strings, many of which are slower to begin with. In the real-world
benchmarks that I've seen, 3.3 is as fast as or faster than 3.2.
Here's a much more realistic benchmark that nonetheless still focuses
on strings: word counting.

Source: http://pastebin.com/RDeDsgPd


C:\Users\Ian\Desktop>c:\python32\python -m timeit -s "import wc"
"wc.wc('unilang8.htm')"
1000 loops, best of 3: 310 usec per loop

C:\Users\Ian\Desktop>c:\python33\python -m timeit -s "import wc"
"wc.wc('unilang8.htm')"
1000 loops, best of 3: 302 usec per loop

"unilang8.htm" is an arbitrary UTF-8 document containing a broad swath
of Unicode characters that I pulled off the web. Even though this
program is still mostly string processing, Python 3.3 wins. Of
course, that's not really a very good test -- since it reads the file
on every pass, it probably spends more time in I/O than it does in
actual processing. Let's try it again with prepared string data:


C:\Users\Ian\Desktop>c:\python32\python -m timeit -s "import wc; t =
open('unilang8.htm', 'r', encoding
='utf-8').read()" "wc.wc_str(t)"
10000 loops, best of 3: 87.3 usec per loop

C:\Users\Ian\Desktop>c:\python33\python -m timeit -s "import wc; t =
open('unilang8.htm', 'r', encoding
='utf-8').read()" "wc.wc_str(t)"
10000 loops, best of 3: 84.6 usec per loop

Nope, 3.3 still wins. And just for the sake of my own curiosity, I
decided to try it again using str.split() instead of a StringIO.
Since str.split() creates more strings, I expect Python 3.2 might
actually win this time.


C:\Users\Ian\Desktop>c:\python32\python -m timeit -s "import wc; t =
open('unilang8.htm', 'r', encoding
='utf-8').read()" "wc.wc_split(t)"
10000 loops, best of 3: 88 usec per loop

C:\Users\Ian\Desktop>c:\python33\python -m timeit -s "import wc; t =
open('unilang8.htm', 'r', encoding
='utf-8').read()" "wc.wc_split(t)"
10000 loops, best of 3: 76.5 usec per loop

Interestingly, although Python 3.2 performs the splits in about the
same time as the StringIO operation, Python 3.3 is significantly
*faster* using str.split(), at least on this data set.
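
(In case the pastebin link goes away: the functions being timed are
roughly of this shape. This is a reconstruction for illustration, not
the actual posted source.)

import collections
import io

def wc_str(text):
    """Count words, reading the prepared string line by line via StringIO."""
    counts = collections.Counter()
    for line in io.StringIO(text):
        counts.update(line.split())
    return counts

def wc_split(text):
    """Count words by splitting the whole string at once."""
    return collections.Counter(text.split())

def wc(filename):
    """Read and decode the file on every call, then count words."""
    with open(filename, encoding='utf-8') as f:
        return wc_str(f.read())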

So doing some arm-chair thinking (I don't know the code and difficulty
involved):

Clearly there are 3 string-engines in the python 3 world:
- 3.2 narrow
- 3.2 wide
- 3.3 (flexible)

How difficult would it be to give the choice of string engine as a
command-line flag?
This would avoid the nuisance of having two binaries -- narrow and
wide.

Quite difficult. Even if we avoid having two or three separate
binaries, we would still have separate binary representations of the
string structs. It makes the maintainability of the software go down
instead of up.

And it would give the python programmer a choice of efficiency
profiles.

So instead of having just one test for my Unicode-handling code, I'll
now have to run that same test *three times* -- once for each possible
string engine option. Choice isn't always a good thing.

Cheers,
Ian
 
