Chardet, file, ... and the Flexible String Representation

wxjmfauth

Short comment about the "detection" tools from a previous
discussion.

The tools supposed to detect the coding scheme all
work with a simple logical mathematical rule:

p ==> q <==> non q ==> non p .

Shortly -- and as a consequence -- they do not detect *the*
coding scheme, they only detect "a" possible coding scheme.


The Flexible String Representation conceptually has to
face the same problem. It splits "unicode" into chunks and
it has to solve two problems at the same time: the coding
and the handling of multiple "char sets". The problem?
It fails.
"This poor Flexible String Representation does not succeed
in solving the problem it creates itself."

Workaround: add more flags (see PEP 3xx.)

Still thinking "mathematics" (limit). For a given repertoire
of characters one can assume that every char has its own
flag (because of the usage of multiple coding schemes).
Conceptually, one will quickly realize that, in the end, there
will be as many flags as there are characters,
and the only valid solution is to work with a unique set of
encoded code points, where every element of this set *is*
its own flag.
Curiously, that's what the utf-* (and btw other coding schemes
in the byte string world) are doing (with plenty of other
advantages).

As already said, a healthy coding scheme can only work with
a unique set of encoded code points. That's why we have to
live today with all these coding schemes.

jmf
 
Steven D'Aprano

Short comment about the "detection" tools from a previous discussion.

The tools supposed to detect the coding scheme all work with a
simple logical mathematical rule:

p ==> q <==> non q ==> non p .

Incorrect.

chardet does a statistical analysis of the bytes, and tries to guess what
language they are likely to come from. The algorithm is described here:

https://github.com/erikrose/chardet/blob/master/docs/how-it-works.html

(although that's rather inconvenient to read), and here:

http://www-archive.mozilla.org/projects/intl/UniversalCharsetDetection.html


chardet is a Python port of the Mozilla charset guesser, so they use the
same algorithm.

Shortly -- and as a consequence -- they do not detect a coding scheme,
they only detect "a" possible coding scheme.

That at least is correct.
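That much can be shown with nothing but the stdlib. The same bytes often decode validly under several encodings, so a detector can only rule encodings out (the contrapositive, non q ==> non p), never prove one in; a minimal sketch:

```python
# The bytes for UTF-8 "café" are also valid latin-1 and cp1252,
# so decoding success only establishes *a* possible coding scheme.
data = "café".encode("utf-8")  # b'caf\xc3\xa9'

candidates = ["utf-8", "latin-1", "cp1252", "ascii"]
possible = []
for enc in candidates:
    try:
        possible.append((enc, data.decode(enc)))
    except UnicodeDecodeError:
        pass  # ruled out: the bytes are impossible in this encoding

for enc, text in possible:
    print(enc, repr(text))
# utf-8, latin-1 and cp1252 all succeed; only ascii is ruled out.
```

This is why chardet layers statistics on top: decoding alone leaves several candidates standing.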

The Flexible String Representation conceptually has to face the same
problem.

No it doesn't.
 
Ned Batchelder

The Flexible String Representation conceptually has to
face the same problem. It splits "unicode" into chunks and
it has to solve two problems at the same time: the coding
and the handling of multiple "char sets". The problem?
It fails.

Just once, please say *how* it fails. :(

--Ned.
 
Antoon Pardon

On 06-09-13 11:11, (e-mail address removed) wrote:
The Flexible String Representation conceptually has to
face the same problem. It splits "unicode" into chunks and
it has to solve two problems at the same time: the coding
and the handling of multiple "char sets". The problem?

Not true. The FSR always uses the same coding. An "A" is
always coded as 65.
 
Piet van Oostrum

The Flexible String Representation conceptually has to
face the same problem. It splits "unicode" into chunks and
it has to solve two problems at the same time: the coding
and the handling of multiple "char sets". The problem?
It fails.
"This poor Flexible String Representation does not succeed
in solving the problem it creates itself."

The FSR does not split Unicode into chunks. It does not create problems and therefore it doesn't have to solve any.

The FSR simply stores a Unicode string as an array[*] of ints (the Unicode code points of the characters of the string). That's it. Then it uses a memory-efficient way to store this array of ints. But that has nothing to do with character sets. The same principle could be used for any array of ints.
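That per-character narrowing is visible from Python with sys.getsizeof. A sketch, assuming CPython 3.3 or later; the exact totals vary by build and version, but the ordering does not:

```python
import sys

# CPython >= 3.3 picks 1, 2 or 4 bytes per code point, depending on the
# widest character anywhere in the string.
n = 1000
narrow = "a" * n           # all code points < 256   -> 1 byte each
medium = "\u20ac" * n      # BMP code points         -> 2 bytes each
wide = "\U00010000" * n    # astral code points      -> 4 bytes each

print(sys.getsizeof(narrow))
print(sys.getsizeof(medium))
print(sys.getsizeof(wide))
```

All three are the same array-of-ints idea; only the element width changes.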

So you are seeking problems where there are none. And you would have a lot more peace of mind if you stopped doing this.

[*] array in the C sense.
 
Chris Angelico

The FSR simply stores a Unicode string as an array[*] of ints (the Unicode code points of the characters of the string). That's it. Then it uses a memory-efficient way to store this array of ints. But that has nothing to do with character sets. The same principle could be used for any array of ints.

Python does, in fact, store integers in different-sized blocks of
memory according to size - though not for anything smaller than
32-bit.

So why this is suddenly a bad thing for characters is a mystery none
but he can comprehend.
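The point about integers can be checked directly; a sketch (exact figures depend on the build and version, so only the ordering is asserted):

```python
import sys

# CPython allocates ints in variable-sized blocks: a bigger magnitude
# needs more internal digits, hence more memory.
small = 1
big = 2 ** 100

print(sys.getsizeof(small))  # e.g. 28 on a 64-bit build
print(sys.getsizeof(big))    # noticeably larger
```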

ChrisA
 
random832

The FSR does not split Unicode into chunks. It does not create problems
and therefore it doesn't have to solve this.

The FSR simply stores a Unicode string as an array[*] of ints (the
Unicode code points of the characters of the string). That's it. Then it
uses a memory-efficient way to store this array of ints. But that has
nothing to do with character sets. The same principle could be used for
any array of ints.

I think the source of the confusion is that it is described in terms of
UCS-2 and Latin-1, which people often think of (especially latin-1) as
different encodings rather than merely storing code points in a narrower
type.

----

Incidentally, how does all this interact with ctypes unicode_buffers,
which slice as strings and must be UTF-16 on windows? This was fine
pre-FSR when unicode objects were UTF-16, but I'm not sure how it would
work now.
 
Chris Angelico

Incidentally, how does all this interact with ctypes unicode_buffers,
which slice as strings and must be UTF-16 on windows? This was fine
pre-FSR when unicode objects were UTF-16, but I'm not sure how it would
work now.

That would be pre-FSR *with a Narrow build*, which was the default on
Windows but not everywhere. But I don't know or use ctypes, so an
answer to your actual question will have to come from someone else.

ChrisA
 
wxjmfauth

On Friday, September 6, 2013 at 17:46:14 UTC+2, Piet van Oostrum wrote:
(e-mail address removed) writes:

The Flexible String Representation conceptually has to
face the same problem. It splits "unicode" into chunks and
it has to solve two problems at the same time: the coding
and the handling of multiple "char sets". The problem?
It fails.
"This poor Flexible String Representation does not succeed
in solving the problem it creates itself."

The FSR does not split Unicode into chunks. It does not create problems and therefore it doesn't have to solve this.

The FSR simply stores a Unicode string as an array[*] of ints (the Unicode code points of the characters of the string). That's it. Then it uses a memory-efficient way to store this array of ints. But that has nothing to do with character sets. The same principle could be used for any array of ints.

So you are seeking problems where there are none. And you would have a lot more peace of mind if you stopped doing this.

[*] array in the C sense.

--
Piet van Oostrum <[email protected]>
WWW: http://pietvanoostrum.com/
PGP key: [8DAE142BE17999C4]

----------


Due to its nature, a character can't be handled in the
same way as any other type. That's the purpose of the UTF.

-----

Chunk latin-1, performance

ref:
0.13144639994075646
0.13780295544393084

Chunk ucs2, performance
0.23505392241617074

Chunk ucs4, performance
0.26266673650735584

Comment: Such differences never happen with utf.

-----

Chunk latin-1, memory
26

Chunk ucs2, memory
40

Comment: 14 bytes more than latin-1

Chunk ucs4, memory
44

Comment: 18 bytes more than latin-1

Comment: With utf, a char (in a string or not) never exceeds 4
bytes.

-----

'a' + '€' in utf, conceptually

Concatenate the *unicode transformation units*.
Some kind of a real direct 'a' + '€'.


'a' + '€' in FSR, conceptually

1) Check the "internal coding" of 'a'
2) Check the "internal coding" of '€'
3) Compare these codings

4a) If they match, concatenate the bytes

4b) If they do not match:
5) Reencode the string that needs it
6) Concatenate
7) Set the "internal coding" status for
further processing
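The widening in steps 4b-7 can be observed from the outside with sys.getsizeof; a sketch, assuming CPython 3.3 or later (overheads vary by build, so the per-character width is estimated by sizing two lengths of the same string):

```python
import sys

# Estimate bytes per code point: overhead cancels between the two sizes.
def bytes_per_char(ch):
    return (sys.getsizeof(ch * 2000) - sys.getsizeof(ch * 1000)) // 1000

print(bytes_per_char("a"))       # 1 (latin-1 chunk)
print(bytes_per_char("\u20ac"))  # 2 (ucs2 chunk)

# Concatenating one wide character re-stores the *whole* result at the
# wider width (the "reencode" step above):
ascii_only = "a" * 1000
mixed = ascii_only + "\u20ac"
print(sys.getsizeof(mixed) > sys.getsizeof(ascii_only + "b"))  # True
```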

-----

Complicated and full of side effects, e.g.:
39

Is not a latin-1 "é" supposed to count as a latin-1 "a"?

----

I picked up random methods; there may be variations, but basically
this general behaviour is always expected.


jmf
 
Ned Batchelder

On Friday, September 6, 2013 at 17:46:14 UTC+2, Piet van Oostrum wrote:
(e-mail address removed) writes:

[...]

Due to its nature, a character can't be handled in the
same way as any other type. That's the purpose of the UTF.

[...]

I picked up random methods; there may be variations, but basically
this general behaviour is always expected.


jmf

jmf, thanks for your reply. You've calmed my fears that there is
something wrong with the Flexible String Representation. None of the
examples you show demonstrate any behavior contrary to the Unicode spec.

--Ned.
 
Michael Torrie

Comment: Such differences never happen with utf.

But with utf, slicing strings is O(n) (well, that's a simplification;
someone showed an algorithm that is O(log n)), whereas a fixed-width
encoding (Latin-1, UCS-2, UCS-4) is O(1). Do you understand what this
means?
Complicated and full of side effects, e.g.:

39

Why on earth are you doing getsizeof? What are you expecting to prove?
Why are you even trying to concern yourself with implementation
details? As a programmer you should deal with unicode. Period. All
you should care about is that you can properly index or slice a unicode
string and that unicode strings can be operated on at a reasonable speed.

I.e. string[4] should give you the character at position 4. len(string)
should return the length of the string in *characters*.

The byte encoding used behind the scenes is of no consequence other than
speed (and you have not shown any problem with speed).
Is not a latin-1 "é" supposed to count as a latin-1 "a"?

Of course it does. 'aé'[0] == 'a' and 'aé'[1] == 'é'. len('aé') returns 2.
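And the same holds even for characters outside the BMP, which is exactly where a UTF-16 representation would trip up; a minimal check:

```python
# Indexing and len() work in code points, regardless of how the string
# is stored internally -- including astral characters:
s = "a\u00e9\U00010000"   # 'a', 'é', and one astral code point

print(len(s))                 # 3 code points, whatever the storage width
print(s[0])                   # 'a'
print(s[1])                   # 'é'
print(s[2] == "\U00010000")   # True: no surrogate pair leaks through
```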
I picked up random methods, there may be variations, basically
this general behaviour is always expected.

Eh? Can you point to something in the unicode spec that doesn't work?

I don't even know that much about unicode yet it's clear you're either
deliberately muddying the waters with your stupid and pointless
arguments against the FSR or you don't really understand the difference
between unicode and byte encoding. Which is it?
 
random832

That would be pre-FSR *with a Narrow build*, which was the default on
Windows but not everywhere. But I don't know or use ctypes, so an
answer to your actual question will have to come from someone else.

I did a couple of tests - it works as well as can be expected for
reading, but completely breaks for writing (due to sequence size checks
not matching).
 
Ian Kelly

I did a couple tests - it works as well as can be expected for reading,
but completely breaks for writing (due to sequence size checks not
matching)

Do you mean that it breaks when overwriting Python string object buffers,
or when overwriting arbitrary C strings either received from C code or
created with create_unicode_buffer?

If the former, I think that is to be expected since ctypes ultimately can't
know what is the actual type of the pointer it was handed -- much as in C,
that's up to the programmer to get right. I also think it's very bad
practice to be overwriting those anyway, since Python strings are supposed
to be immutable.

If the latter, that sounds like a bug in ctypes to me.
 
random832

Do you mean that it breaks when overwriting Python string object buffers,
or when overwriting arbitrary C strings either received from C code or
created with create_unicode_buffer?

If the former, I think that is to be expected since ctypes ultimately
can't know what is the actual type of the pointer it was handed -- much
as in C, that's up to the programmer to get right. I also think it's
very bad practice to be overwriting those anyway, since Python strings
are supposed to be immutable.

If the latter, that sounds like a bug in ctypes to me.

I was talking about writing to the buffer object from Python, i.e. with
slice assignment.

>>> s = 'Test \U00010000'
>>> len(s)
6
>>> buf = create_unicode_buffer(32)
>>> buf[:6] = s
TypeError: one character unicode string expected
>>> buf[:7] = s
ValueError: Can only assign sequence of same size
>>> buf[:7] = 'Test \ud800\udc00'
>>> buf[:7]
'Test \U00010000'  # len = 6

Assigning with .value works, however, which may be a viable workaround
for most situations. The "one character unicode string expected" message
is a bit cryptic.
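For reference, the .value workaround can be sketched like this. Note the platform assumption: on Windows wchar_t is 16-bit and the astral character occupies two UTF-16 code units inside the buffer, while on most other platforms wchar_t is 32-bit and the round-trip is direct either way:

```python
import ctypes

# .value assignment converts the whole string at once, sidestepping the
# per-element size checks that slice assignment runs into.
s = "Test \U00010000"
buf = ctypes.create_unicode_buffer(32)
buf.value = s            # works where buf[:6] = s raised
print(buf.value == s)    # True
```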
 
Terry Reedy

jmf, thanks for your reply. You've calmed my fears that there is
something wrong with the Flexible String Representation. None of the
examples you show demonstrate any behavior contrary to the Unicode spec.

The goals of the new unicode implementation:
1. one implementation on all platforms, working the same on all platforms.
2. works correctly
3. O(1) indexing
4. save as much space as sensibly possible
5. not too much time penalty for the space saving.

The new implementation succeeded on all points. It exceeded the goal for
5. With much optimization work, there essentially is no overall time
penalty left.

Jmf's size examples show success with respect to goal 4. He apparently
disagrees with that goal and would replace it with something else. At
least some of his time examples show that saving space can save time, as
was predicted when the FSR was being developed.
 
Steven D'Aprano

But with utf, slicing strings is O(n) (well that's a simplification as
someone showed an algorithm that is log n), whereas a fixed-width
encoding (Latin-1, UCS-2, UCS-4) is O(1).

UTF-32 is fixed-width. UTF-16 is not, but if you limit yourself to only
characters in the Basic Multilingual Plane, it is functionally equivalent
to UCS-2 and therefore fixed-width.
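The fixed-width versus variable-width difference is easy to check from the interpreter; a small stdlib-only sketch:

```python
# Encoded size of one astral character under the common UTFs:
ch = "\U00010000"

print(len(ch.encode("utf-8")))      # 4 bytes for this character
print(len(ch.encode("utf-16-le")))  # 4 bytes: a surrogate pair
print(len(ch.encode("utf-32-le")))  # 4 bytes -- but so is *every* char:

print(len("a".encode("utf-8")))     # 1 byte  (variable width)
print(len("a".encode("utf-32-le"))) # 4 bytes (fixed width)
```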

Do you understand what this means?

Talking about "utf" in general as JMF does is a good sign that he
doesn't. Which UTF? I know of at least eight:

UTF-1
UTF-7
UTF-8
UTF-9 # this one is a joke, but it does work
UTF-16 # in two varieties, big-endian and little-endian
UTF-18 # another joke
UTF-32 # likewise two varieties
UTF-EBCDIC


although only 3 (perhaps 4, if you include UTF-7) are in common use.


[...]
I don't even know that much about unicode yet it's clear you're either
deliberately muddying the waters with your stupid and pointless
arguments against FCS or you don't really understand the difference
between unicode and byte encoding. Which is it?

I have been watching JMF get a mad-on about the flexible string
representation since he first noticed it, and in my opinion, his
complaints are based entirely on resentment that ASCII users save more
memory than non-ASCII users. Even if it means everyone is worse off, he
is utterly opposed to giving ASCII users any benefit.

Of course, he neglects to consider that *every single Python user* is an
ASCII user, since most strings in Python are pure ASCII. Names of
builtins, standard library modules, variables, attributes, most of them
are ASCII.
 
random832

On Mon, Sep 9, 2013, at 10:28, (e-mail address removed) wrote:
*time performance differences*
Comment: Such differences never happen with utf.

Why is this bad? Keeping in mind that otherwise they would all be almost
as slow as the UCS-4 case.
44

Comment: 18 bytes more than latin-1

Comment: With utf, a char (in a string or not) never exceeds 4 bytes.

A string is an object and needs to store the length, along with any
overhead relating to object headers. I believe there is also an appended
null character. Also, ASCII strings are stored differently from Latin-1
strings.
4072 = 80 bytes overhead, 4 bytes per character.

(I bet sys.getsizeof('\xa4') will return 38 on your system, so 44 is
only six bytes more, not 18)

If we did not have the FSR, everything would be 4 bytes per character.
We might have less overhead, but a string only has to be 25 characters
long before the savings from the shorter representation outweigh even
having _no_ overhead, and every four bytes of overhead reduces that
number by one. And you have a 32-bit python build, which has less
overhead than mine - in yours, strings only have to be seven characters
long for the FSR to be worth it. Assume the minimum possible overhead is
two words for the object header, a size, and a pointer - i.e. sixteen
bytes, compared to the 25 you've demonstrated for ASCII, and strings
only need to be _two_ characters long for the FSR to be a better deal
than always using UCS4 strings.

The need for four-byte-per-character strings would not go away by
eliminating the FSR, so you're basically saying that everything should
be constrained to the worst-case performance scenario.
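That break-even arithmetic can be checked empirically rather than argued from assumed overheads; a sketch (the exact crossover depends on the build, so the loop just finds it):

```python
import sys

# Smallest length at which a 1-byte-per-char string is smaller than a
# hypothetical always-UCS4 layout with *zero* overhead (4 bytes/char,
# nothing else) -- the most generous possible case for the no-FSR side.
def crossover():
    n = 1
    while sys.getsizeof("a" * n) >= 4 * n:
        n += 1
    return n

print(crossover())  # e.g. 17 on one 64-bit CPython build
```

Past that handful of characters, the flexible representation wins even against an impossibly overhead-free fixed-width scheme.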
 
Serhiy Storchaka

On 09.09.13 22:27, (e-mail address removed) wrote:
[...]

Assigning with .value works, however, which may be a viable workaround
for most situations. The "one character unicode string expected" message
is a bit cryptic.

Please report a bug on http://bugs.python.org/.
 
