Counting UTF-8 characters - special characters

Discussion in 'Javascript' started by majna, Sep 19, 2007.

  1. majna

    majna Guest

    I have a character counter for a textarea which counts the characters.
    A special character needs the same space as two normal characters
    because of 16-bit encoding.
    The counter subtracts 2 when a special character is added, like some
    language-specific char.

    How can I count specials as 1 char?
    tnx
     
    majna, Sep 19, 2007
    #1

  2. It doesn't.
    "€".length === 1
    The same way. ECMAScript 3 implementations use UTF-16 encoded strings. RTFM.
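    In other words (a minimal sketch, assuming a Unicode-aware implementation):

    ```javascript
    // In ECMAScript, String.prototype.length counts UTF-16 code units.
    // "€" (U+20AC) lies inside the Basic Multilingual Plane, so it
    // occupies a single code unit:
    var euro = "€";
    console.log(euro.length);       // 1
    console.log(euro === "\u20AC"); // true
    ```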


    PointedEars
     
    Thomas 'PointedEars' Lahn, Sep 19, 2007
    #2

  3. Should have been -1. But even if most implementations were not
    UTF-16-safe, that would not have sufficed. UTF-16 does not mean that
    the representation of a glyph in that encoding always requires only
    16 bits:

    http://www.unicode.org/faq/utf_bom.html#6
    Windows(-1252). Hmpf. Make that "€" any Unicode glyph (such as "₭")
    and it is still true.
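    That caveat can be sketched in code: characters outside the Basic
    Multilingual Plane, such as "𝄞" (U+1D11E), occupy two UTF-16 code units,
    so a counter that should report them as one character has to pair up
    surrogates (the helper name `countCodePoints` is just for illustration):

    ```javascript
    // Count Unicode code points rather than UTF-16 code units.
    function countCodePoints(str) {
      var count = 0;
      for (var i = 0; i < str.length; i++) {
        var code = str.charCodeAt(i);
        // A high surrogate (0xD800-0xDBFF) is followed by a low
        // surrogate; skip the second half of the pair.
        if (code >= 0xD800 && code <= 0xDBFF) {
          i++;
        }
        count++;
      }
      return count;
    }

    console.log("𝄞".length);           // 2 (two code units)
    console.log(countCodePoints("𝄞")); // 1 (one code point)
    console.log(countCodePoints("€")); // 1
    ```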


    PointedEars
     
    Thomas 'PointedEars' Lahn, Sep 19, 2007
    #3
  4. You mean code *unit*, _not_ code point. The latter is a completely
    different thing, the *position* of a Unicode character in the definition tables.

    Et non sequitur, as I have encoded my first followup accidentally with
    Windows-1252, that is not the real code point of that character (it is
    0x80). With UTF-16, you are correct, except that characters beyond
    code point 0xFFFF (about 64k), which require more code units, are
    seldom used.
    Probably due to your SpiderMonkey build. It works just fine since Mozilla/4.0.
    Doesn't matter. The used document encoding is transparent to the
    application. The `value' property of a HTMLTextAreaElement object is of
    type DOMString, which is fully compatible to ECMAScript (UTF-16) strings.
    Most would nowadays. Even Netscape 4.78 yields 1 for "€".length.
    No unique Unicode glyph has more than one code point; that would be a
    major flaw in the standard (which does not exist). However, a glyph may
    be represented by more than one code unit, either because its higher
    code point (position) requires surrogates, or because of composition
    (in which case it consists of several glyphs, each with its own code
    point, whose code units are concatenated according to the encoding used).

    However, that does not matter for implementations of ECMAScript 3.
    Especially, glyph composition is transparent to the application, if it
    supports it.

    http://www.unicode.org/faq/char_combmark.html#2
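    Glyph composition can be seen directly (a small sketch; the precomposed
    and decomposed forms render as the same glyph):

    ```javascript
    // "é" can be a single precomposed code point (U+00E9) or the
    // letter "e" followed by a combining acute accent (U+0301).
    var precomposed = "\u00E9"; // é
    var decomposed  = "e\u0301"; // e + combining acute accent
    console.log(precomposed.length); // 1
    console.log(decomposed.length);  // 2 - two code points, one glyph
    ```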


    PointedEars
     
    Thomas 'PointedEars' Lahn, Sep 19, 2007
    #4
  5. One might argue then that Netscape 4.78 evaluates the Windows-1252 encoded
    version of the respective currency mark which is one byte, and that it does
    not support Unicode. However, "€".charCodeAt() yields 8364 (not 128),
    String.fromCharCode(8365) yields "₭", and both "\u20AC".length and
    String.fromCharCode(8365).length yield 1.
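    The values above can be checked directly (assuming a Unicode-aware
    implementation):

    ```javascript
    // "€" is U+20AC (8364 decimal); 8365 is U+20AD, the kip sign "₭".
    console.log("€".charCodeAt(0));                // 8364
    console.log(String.fromCharCode(8365));        // "₭"
    console.log("\u20AC".length);                  // 1
    console.log(String.fromCharCode(8365).length); // 1
    ```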


    PointedEars
     
    Thomas 'PointedEars' Lahn, Sep 19, 2007
    #5
