ruby 1.9 hates you and me and the encodings we rode in on, so just get used to it.

M

Marnen Laibow-Koser

Benoit said:
Hi,

I think you're being a bit of a pessimist here :)

Until my post on this subject, I had never been complaining (far from
it), and I enjoyed playing with ∑, ∆ and so on.

And I was not complaining, just asking how to solve it. (The fact that
it didn't handle normalization form C seems quite logical to me; no
language would do that easily.)

Huh? Normalization transformations should be pretty easy to implement.
(FWIW, the Unicode Consortium recommends KC for identifiers, although
I'm not sure I agree with that recommendation.)

Best,
 
B

Brian Candler

Marnen said:
Huh? Normalization transformations should be pretty easy to implement.

But the point is, you can't do anything useful with this until you
*transcode* it anyway, which you can do using Iconv (in either 1.8 or
1.9).

ruby 1.9's big flag feature of being able to store a string in its
original form tagged with the encoding doesn't help the OP much, even if
it had been tagged correctly.

I mean, to a degree ruby 1.9 already supports this UTF-8-MAC as an
'encoding'. For example:
decomp = [101, 115, 112, 97, 110, 771, 111, 108, 46, 108, 110, 103].map { |x| x.chr("UTF-8-MAC") }.join
=> "español.lng"
decomp.codepoints.to_a
=> [101, 115, 112, 97, 110, 771, 111, 108, 46, 108, 110, 103]
decomp.encoding
=> #<Encoding:UTF8-MAC>

Notice that the n-accent is displayed as a single character by the
terminal, even though it is two codepoints (110, 771).

So you could argue that Dir[] on the Mac is at fault here, for tagging
the string as UTF-8 when it should be UTF-8-MAC.

But you still need to transcode to UTF-8 before doing anything useful
with this string. Consider a string containing decomposed characters
tagged as UTF-8-MAC:

(1) The regexp /./ should match a series of decomposed codepoints as a
single 'character'; str[n] should fetch the nth 'character'; and so on.
I don't think this would be easy to implement, since finding a character
boundary is no longer a codepoint boundary.

What you actually get is this:
=> ["e", "s", "p", "a", "n", "̃", "o", "l", ".", "l", "n", "g"]

Aside: note that "̃ is actually a single character, a double quote with
the accent applied!
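In later Ruby versions, Onigmo's \X escape matches an extended grapheme cluster, which gets closer to the intuitive notion of a 'character' here. A hedged sketch, assuming a modern interpreter (\X was not available in 1.9's regexp engine):

```ruby
# Decomposed "español.lng": the ñ is two codepoints (n + combining tilde).
decomposed = "espan\u0303ol.lng"

# /./ matches one codepoint at a time, so the tilde comes out separately:
p decomposed.scan(/./).length   # 12

# \X matches an extended grapheme cluster, keeping n + tilde together:
p decomposed.scan(/\X/).length  # 11
```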

(2) The OP wanted to match the regexp containing a single codepoint /ñ/
against the decomposed representation, which isn't going to work anyway.
That is, ruby 1.9 does not automatically transcode strings so they are
compatible; it just raises an exception if they are not.
Encoding::CompatibilityError: incompatible encoding regexp match (UTF-8
regexp with UTF8-MAC string)
from (irb):5
from /usr/local/bin/irb19:12:in `<main>'
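A hedged workaround sketch: normalize to composed form before matching. String#unicode_normalize arrived in Ruby 2.2, well after 1.9, so this assumes a modern interpreter (on 1.9 you would reach for Iconv or a gem instead):

```ruby
# A decomposed filename, as Dir[] might return it on a Mac:
str = "espan\u0303ol.lng"

# Normalize to NFC so n + combining tilde becomes the single codepoint ñ:
nfc = str.unicode_normalize(:nfc)

# Now a regexp written with the precomposed ñ matches:
p nfc =~ /ñ/   # 4 (the match position)
```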

(3) Since ruby 1.9 has a UTF-8-MAC encoding, it *should* be able to
transcode it to UTF-8 without using Iconv. However, this is simply
broken, at least in the version I'm trying here.
ArgumentError: invalid byte sequence in UTF-8
from (irb):10:in `codepoints'
from (irb):10:in `each'
from (irb):10:in `to_a'
from (irb):10
=> 24186
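For what it's worth, the built-in converter does work in later Ruby versions; this sketch assumes a modern interpreter where the breakage above has been fixed:

```ruby
# Tag a decomposed string as UTF-8-MAC, then transcode to plain UTF-8:
decomp = "espan\u0303ol.lng".force_encoding("UTF-8-MAC")
utf8   = decomp.encode("UTF-8")

p utf8.encoding  # #<Encoding:UTF-8>
# The transcoder composes n + combining tilde into a single ñ codepoint,
# so the combining tilde (codepoint 771) is gone from the result:
p utf8.codepoints.include?(771)  # false
```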

(4) If general support for decomposed forms were added as further
'encodings', there would be an explosion of encodings: UTF-8-D,
UTF-16LE-D, UTF-16BE-D, etc., and that's ignoring the "compatibility"
versus "canonical" composed and decomposed forms.

(5) It is going to be very hard (if not impossible) to make a source
code string or regexp literal containing decomposed "n" and "̃"
distinct from a literal containing a composed "ñ". Try it and see.

(In the above paragraph, the decomposed accent is applied to the
double-quote; that is, "̃ is actually a single 'character'). Most
editors are going to display both the composed and decomposed forms
identically.

I think this just shows that ruby 1.9's complexity is not helping in the
slightest. If you have to transcode to UTF-8 composed form, then ruby
1.8 does this just as well (and then you only need to tag the regexp as
UTF-8 using //u).
 
M

Marnen Laibow-Koser

Brian said:
But the point is, you can't do anything useful with this until you
*transcode* it anyway, which you can do using Iconv (in either 1.8 or
1.9).

Wrong. Normalization transformations are useful within one Unicode
encoding. In fact, they have little use in transcoding, as I understand it.

[...]
Notice that the n-accent is displayed as a single character by the
terminal, even though it is two codepoints (110,771)

I don't think it's meaningful to say that something is displayed as a
single character. You can't see characters -- they're abstract ideas.
All you can see is the glyphs that represent those characters.
So you could argue that Dir[] on the Mac is at fault here, for tagging
the string as UTF-8 when it should be UTF-8-MAC.

But you'd be wrong, because UTF-8-MAC is valid UTF-8.
But you still need to transcode to UTF-8 before doing anything useful
with this string. Consider a string containing decomposed characters
tagged as UTF-8-MAC:

(1) The regexp /./ should match a series of decomposed codepoints as a
single 'character'

I am not sure I agree with you.

str[n] should fetch the nth 'character';

Yes, but a combining sequence is not conceptually a character in many
cases.
and so on.
I don't think this would be easy to implement, since finding a character
boundary is no longer a codepoint boundary.

Sure it is. You are confusing characters and combining sequences.
What you actually get is this:
=> ["e", "s", "p", "a", "n", "̃", "o", "l", ".", "l", "n", "g"]

Aside: note that "̃ is actually a single character,

It is nothing of the kind. It is a single combining sequence composed
of two characters. I would expect it to be matched by /../ .
a double quote with
the accent applied!
Right.


(2) The OP wanted to match the regexp containing a single codepoint /ñ/
against the decomposed representation, which isn't going to work anyway.
That is, ruby 1.9 does not automatically transcode strings so they are
compatible; it just raises an exception if they are not.

But UTF-8 NFC and UTF-8 NFD *are* compatible -- they're not even really
separate encodings. At this point I strongly suggest that you read the
article (I think it's UAX #15) on Unicode normalization.
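To make the canonical equivalence concrete, here is a hedged sketch using String#unicode_normalize, which was added in Ruby 2.2, long after this thread (on 1.9 a gem would be needed):

```ruby
composed   = "\u00F1"   # ñ as a single codepoint (NFC)
decomposed = "n\u0303"  # n + combining tilde (NFD)

# Byte-distinct, yet canonically equivalent:
p composed == decomposed                          # false
p composed.unicode_normalize(:nfd) == decomposed  # true
p decomposed.unicode_normalize(:nfc) == composed  # true
```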
Encoding::CompatibilityError: incompatible encoding regexp match (UTF-8
regexp with UTF8-MAC string)

If the only difference between UTF-8 and UTF-8-MAC is normalization,
then this is brain-dead.
from (irb):5
from /usr/local/bin/irb19:12:in `<main>'

(3) Since ruby 1.9 has a UTF-8-MAC encoding, it *should* be able to
transcode it to UTF-8 without using Iconv. However this is simply
broken, at least in the version I'm trying here.

ArgumentError: invalid byte sequence in UTF-8
from (irb):10:in `codepoints'
from (irb):10:in `each'
from (irb):10:in `to_a'
from (irb):10

=> 24186

Yikes! That's bad.
(4) If general support for decomposed forms were added as further
'encodings', there would be an explosion of encodings: UTF-8-D,
UTF-16LE-D, UTF-16BE-D, etc., and that's ignoring the "compatibility"
versus "canonical" composed and decomposed forms.

Right. Different normal forms really aren't separate encodings in the
usual sense.
(5) It is going to be very hard (if not impossible) to make a source
code string or regexp literal containing decomposed "n" and "̃"
distinct from a literal containing a composed "ñ". Try it and see.

And that's probably a good thing. In fact, that's the point of
normalization.
(In the above paragraph, the decomposed accent is applied to the
double-quote; that is, "̃ is actually a single 'character').

Combining sequence.
Most
editors are going to display both the composed and decomposed forms
identically.

And at least in the case of ñ versus n + combining ~, they normalize to
the same thing in all normal forms (precomposed ñ in C and KC; a 2-char
combining sequence in D and KD). Thus, under any normalization, they
are *equivalent* and should be treated as such.
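That claim is easy to check mechanically; a hedged sketch with String#unicode_normalize (Ruby 2.2+, not available back in 1.9):

```ruby
composed   = "\u00F1"   # precomposed ñ
decomposed = "n\u0303"  # n + combining tilde

# Under every normal form, the two representations converge:
%i[nfc nfd nfkc nfkd].each do |form|
  p composed.unicode_normalize(form) == decomposed.unicode_normalize(form)  # true, four times
end
```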
I think this just shows that ruby 1.9's complexity is not helping in the
slightest. If you have to transcode to UTF-8 composed form, then ruby
1.8 does this just as well (and then you only need to tag the regexp as
UTF-8 using //u).

Normalization really isn't transcoding in the usual sense.

Best,
 
M

Marc Heiler

If I had one wish, it would be a compile-time option for ruby 1.9 that
kept the old ruby 1.8 behaviour. Ruby 1.8 simply gave me fewer problems
here.

I am using loads of umlauts like "äöü" in my comments, and ruby 1.8 is
totally happy with it. Ruby 1.9, however, hates it and refuses to run it,
and I don't think I want to add something like "# encoding: ISO-8859-1"
to all my .rb scripts.

I wish there were more than one way to treat encodings, and one of them
should be the ruby 1.8 behaviour, because ruby 1.9 forces me to make all
kinds of changes before my old .rb scripts run again, simply because of
the encoding issue. (There seem to be some other minor changes as well;
I have had problems with case/when code too, but the encoding issue
seems larger.)

This is not really a rant - I am using ruby 1.8.x without any problem,
and I actually LOVE that ruby 1.8.x is not feature frozen. It is also
good that a language can keep evolving.

Personally, however, I don't need UTF-8 or any other exotic encoding, so
the encoding add-on is of no real advantage to me and rather a burden,
as I have to modify my .rb files. I can see that other people have
different needs, though.
 
B

Benoit Daloze

First, thanks for your long and good answers about UTF-8-MAC.

2009/12/30 Marc Heiler said:
If I would have one wish open, I would want to have a compile-time
option for ruby 1.9 where I could keep the old ruby 1.8 behaviour. Ruby
1.8 simply gave me less problems here.

I am using loads of umlauts like "äöü" in my comments and ruby 1.8 is
totally happy with it. Ruby 1.9 however hates it, refuses to run it, and
I dont think I want to add something like "Encoding: ISO-8859-1" to all
my .rb scripts.

Well, I don't think it's so hard to add

# encoding: ISO-8859-1

to your scripts (what do you say to writing a small Ruby script for
that? :p).
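Such a script might look like the following minimal sketch; add_magic_comment is a hypothetical helper, and it assumes the files really are ISO-8859-1 and don't already carry a magic comment:

```ruby
# Hypothetical helper: prepend a magic encoding comment to one .rb file,
# skipping files whose first line already declares a coding.
def add_magic_comment(path, encoding = "ISO-8859-1")
  src = File.binread(path)
  return if src.lines.first.to_s =~ /coding:/
  File.binwrite(path, "# encoding: #{encoding}\n" + src)
end

# Example usage over a whole tree (commented out deliberately):
# Dir["**/*.rb"].each { |f| add_magic_comment(f) }
```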

I think that's something really good! It's not easy to know a file's
encoding if it isn't specified somewhere. Think a little about somebody
working with you on another platform; they will surely run into
encoding problems.

So yes, I think it's something quite useful and good for compatibility.
 
