The need for Unicode types in C++0x


Ioannis Vranos

Hi, I am currently learning QT, a portable C++ framework which comes
with both a commercial and GPL license, and which provides conversion
operations to its various types to/from standard C++ types.

For example its QString type provides a toWString() that returns a
std::wstring with its Unicode contents.

So, since wstring supports the largest character set, why do we need
explicit Unicode types in C++?

I think what is needed is a "unicode" locale or at the most, some
unicode locales.


I don't consider being compatible with C99 as an excuse.
 

Ioannis Vranos

Correction:


Ioannis said:
Hi, I am currently learning QT, a portable C++ framework which comes
with both a commercial and GPL license, and which provides conversion
operations to its various types to/from standard C++ types.
==> For example its QString type provides a toStdWString() that returns a
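
For illustration, a minimal sketch of that round trip, assuming Qt 4's
documented QString conversions (fromUtf8, toStdWString, fromStdWString):

#include <QString>
#include <string>

int main()
{
    QString qs = QString::fromUtf8("caf\xc3\xa9");  // "café" given as UTF-8 bytes
    std::wstring ws = qs.toStdWString();            // wchar_t string; UTF-16 or
                                                    // UCS-4, depending on sizeof(wchar_t)
    QString back = QString::fromStdWString(ws);     // convert back to QString
    return back == qs ? 0 : 1;                      // 0 if the round trip preserved it
}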
 

REH

Hi, I am currently learning QT, a portable C++ framework which comes
with both a commercial and GPL license, and which provides conversion
operations to its various types to/from standard C++ types.

For example its QString type provides a toWString() that returns a
std::wstring with its Unicode contents.

So, since wstring supports the largest character set, why do we need
explicit Unicode types in C++?

I think what is needed is a "unicode" locale or at the most, some
unicode locales.

I don't consider being compatible with C99 as an excuse.

If I understand what you are asking...

wstring in the standard defines neither the character set, nor the
encoding. Given that Unicode is currently a 21-bit standard, how can
wstring support the largest character set on a system where wchar_t is
16-bits (assuming a one-character-per-element encoding)? You could
only support the BMP (which is exactly what most systems and languages
that "claim" Unicode support are really capable of).

REH
 

Ioannis Vranos

REH said:
If I understand what you are asking...

wstring in the standard defines neither the character set, nor the
encoding. Given that Unicode is currently a 21-bit standard, how can
wstring support the largest character set on a system where wchar_t is
16-bits (assuming a one-character-per-element encoding)? You could
only support the BMP (which is exactly what most systems and languages
that "claim" Unicode support are really capable of).


I do not know much about encodings, only the stuff necessary for me, but
the question does not sound reasonable to me.

If that system supports Unicode as a system-specific type, why can't
wchar_t be made wide enough as that system-specific Unicode type, in
that system?
 

Erik Wikström

I do not know much about encodings, only the stuff necessary for me, but
the question does not sound reasonable to me.

If that system supports Unicode as a system-specific type, why can't
wchar_t be made wide enough as that system-specific Unicode type, in
that system?

Because it has been too narrow for 5 to 10 years and the compiler vendor
does not want to take any chances with backward compatibility, and since
we will get Unicode types it is a good idea to use wchar_t for encodings
that are not the same size as the Unicode types.
 

James Kanze

Hi, I am currently learning QT, a portable C++ framework which
comes with both a commercial and GPL license, and which
provides conversion operations to its various types to/from
standard C++ types.
For example its QString type provides a toWString() that
returns a std::wstring with its Unicode contents.

In what encoding format? And what if the "usual" encoding for
wstring isn't Unicode (the case on many Unix platforms).

So, since wstring supports the largest character set, why do
we need explicit Unicode types in C++?

Because wstring doesn't guarantee Unicode, and implementers
can't change what it does guarantee in their particular
implementation.

I think what is needed is a "unicode" locale or at the most,
some unicode locales.

Well, to begin with, there are only two sizes of character
types; the various Unicode encoding forms come in three sizes,
so you already have a size mismatch. And since wchar_t already
has a meaning, we can't just arbitrarily change it.

I don't consider being compatible with C99 as an excuse.

How about being compatible with C++03?
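
For concreteness, this is roughly what the C++0x drafts add for the three
encoding forms, one type per size; a sketch based on the draft wording as
it stood, so details may still change:

int main()
{
    const char     u8s[]  = u8"caf\u00E9"; // UTF-8: one or more bytes per code point
    const char16_t u16s[] = u"caf\u00E9";  // UTF-16: one or two 16-bit units
    const char32_t u32s[] = U"caf\u00E9";  // UTF-32: one 32-bit unit per code point
    (void)u8s; (void)u16s; (void)u32s;     // silence unused-variable warnings
    return 0;
}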
 

James Kanze

[...]
wstring in the standard defines neither the character set, nor the
encoding. Given that Unicode is currently a 21-bit standard, how can
wstring support the largest character set on a system where wchar_t is
16-bits (assuming a one-character-per-element encoding)? You could
only support the BMP (which is exactly what most systems and languages
that "claim" Unicode support are really capable of).

No. Most systems that claim Unicode support on 16 bits use
UTF-16. Granted, it's a multi-element encoding, but if you're
doing anything serious, effectively, so is UTF-32. (In
practice, I find that UTF-8 works fine for a lot of things.)
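
As a concrete illustration of the "effectively multi-element" point (my
own sketch, plain C++): the same user-perceived character can occupy one
or two 32-bit code points, so counting elements still isn't counting
characters.

#include <cstdio>

int main()
{
    // "é" as one precomposed code point: U+00E9 LATIN SMALL LETTER E WITH ACUTE
    unsigned long composed[]   = { 0x00E9UL };
    // the same "é" decomposed: U+0065 LATIN SMALL LETTER E + U+0301 COMBINING ACUTE ACCENT
    unsigned long decomposed[] = { 0x0065UL, 0x0301UL };

    std::printf("%u element(s) vs %u element(s) for the same character\n",
                (unsigned)(sizeof composed   / sizeof composed[0]),
                (unsigned)(sizeof decomposed / sizeof decomposed[0]));
    return 0;
}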
 

Hendrik Schober

James said:
In what encoding format? And what if the "usual" encoding for
wstring isn't Unicode (the case on many Unix platforms).

<curious>
What are those implementations using for 'wchar_t'?
</curious>

Schobi
 

Ioannis Vranos

Erik said:
Because it has been too narrow for 5 to 10 years and the compiler vendor
does not want to take any chances with backward compatibility,


How will it break backward compatibility if the size of wchar_t changes?


and since
we will get Unicode types it is a good idea to use wchar_t for encodings
not the same size as the Unicode types.


I am talking about not needing those Unicode types since we have wchar_t
and locales.
 

Ioannis Vranos

Pete said:
It can be. But the language definition doesn't require it to be, and
with many implementations it's not.


C++03 mentions:


"Type wchar_t is a distinct type whose values can represent distinct
codes for all members of the *largest* extended character set specified
among the supported *locales* (22.1.1). Type wchar_t shall have the same
size, signedness, and alignment requirements (3.9) as one of the other
integral types, called its underlying type".
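
That wording only ties wchar_t to whatever locales the implementation
chooses to support, which is why its size varies in practice (commonly 2
bytes on Windows, 4 on most Unix systems). A quick check, as a sketch:

#include <cstdio>
#include <cwchar>   // WCHAR_MAX

int main()
{
    std::printf("sizeof(wchar_t) = %u, WCHAR_MAX = %lu\n",
                (unsigned)sizeof(wchar_t),
                (unsigned long)WCHAR_MAX);
    return 0;
}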
 

Ioannis Vranos

James said:
Because wstring doesn't guarantee Unicode, and implementers
can't change what it does guarantee in their particular
implementation.


Again, if the implementers want Unicode, they can add a Unicode locale
and make wchar_t large enough to match it.


In other words, C++0x could require all implementations to provide
specific Unicode locales that will work with existing facilities
(wchar_t, wstring, etc).
 

REH

No.  Most systems that claim Unicode support on 16 bits use
UTF-16.  Granted, it's a multi-element encoding, but if you're
doing anything serious, effectively, so is UTF-32.  (In
practice, I find that UTF-8 works fine for a lot of things.)

The ones I am familiar with only support UCS-2, not UTF-16. Windows,
for example, has WCHAR_T which is not UTF-16 (although Windows does
support MBCS, but I am not sure if that is truly UTF-8).

REH
 

Hendrik Schober

REH said:
No. Most systems that claim Unicode support on 16 bits use
UTF-16. Granted, it's a multi-element encoding, but if you're
doing anything serious, effectively, so is UTF-32. (In
practice, I find that UTF-8 works fine for a lot of things.)
The ones I am familiar with only support UCS-2, not UTF-16. Windows,
for example, has WCHAR_T which is not UTF-16 [...].

TTBOMK, this isn't true anymore. It's UTF-16 now, not UCS-2.

Schobi
 

REH

The ones I am familiar with only support UCS-2, not UTF-16. Windows,
for example, has WCHAR_T which is not UTF-16 [...].

  TTBOMK, this isn't true anymore. It's UTF-16 now, not UCS-2.

Thanks. I guess I need to update my reference material. I haven't done
Windows programming since the NT days.

REH
 

James Kanze

<curious>
What are those implementations using for 'wchar_t'?
</curious>

EUC. EUC (= Extended Unix Codes) is originally a multi-byte
code, but exists as a 32 bit code as well, see
http://docs.sun.com/app/docs/doc/802-1950/6i5us7asn?l=en&a=view.
It's apparently the standard encoding for wchar_t under Solaris
and HP/UX, and perhaps elsewhere as well. Thus, LATIN SMALL
LETTER E WITH ACUTE has the code 0x00E9 in Unicode, but
0x30000069 under Solaris. (printf("%04x\n", (unsigned int)L'é') -- the
compiler apparently recognizes my LC_CTYPE=iso_8859_1 locale for the
file input.)
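
A self-contained version of that experiment (a sketch; the value printed
is implementation-defined, and it assumes the source file is saved in
ISO-8859-1 as described above):

#include <cstdio>

int main()
{
    // Prints 0x000000e9 where wchar_t holds ISO 10646 code points,
    // but 0x30000069 with the Solaris EUC-based wide encoding mentioned above.
    std::printf("%08lx\n", (unsigned long)L'é');
    return 0;
}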
 

James Kanze

Again, if the implementers want Unicode, they can add a
Unicode locale and make wchar_t large enough to match it.

And break their existing code base? They're not that
irresponsible (most of them, anyway). And the basic idea behind
wchar_t is that it is supposed to be locale independent, at least
for the encoding.

In other words, C++0x could require all implementations to
provide specific Unicode locales that will work with existing
facilities (wchar_t, wstring, etc).

It could. It would also be ignored by most major implementors
if it did.
 

Ioannis Vranos

James said:
And break their existing code base? They're not that
irresponsible (most of them, anyway). And the basic idea behind
wchar_t is that it is supposed to be locale independent, at least
for the encoding.



How would they "break" their existing code base, by adding some
additional locales and even changing the size of wchar_t?
 

Ioannis Vranos

Yannick said:
There is no system that supports "Unicode". You should go to:
http://www.unicode.org/standard/WhatIsUnicode.html

Unicode is basically a catalog of glyphs and associated numeric values.
For a computer system, it only makes sense to be precise and talk about
UTF-8, UTF-16 or UTF-32.
http://www.unicode.org/faq/utf_bom.html

I agree so far.

A "Unicode" locale makes no sense because the
locale represents much more than simply the character encoding that is
being used.
http://www.unicode.org/reports/tr35/#Locale


True, but I think Unicode locales could be implemented for characters
only, leaving the rest unchanged (as they are).


For example:


locale::global(locale("english"));


wcin.imbue(locale("UTF16"));
wcout.imbue(locale("UTF16"));


would change only the character set, keeping the rest of the locale
settings as they are, whether they were previously defined or are the
default ones.
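
The locale names above ("english", "UTF16") are hypothetical. The
nearest existing mechanism is imbuing a wide stream with a named locale
whose codecvt facet does the conversion; a rough sketch, noting that the
name "en_US.UTF-8" is platform-specific and that whether the converted
bytes actually reach the terminal intact depends on the implementation:

#include <iostream>
#include <locale>
#include <stdexcept>

int main()
{
    try {
        std::locale utf8("en_US.UTF-8");   // platform-specific name, may not exist
        std::wcout.imbue(utf8);            // wchar_t converted on output via the codecvt facet
        std::wcout << L"caf\u00E9\n";      // é, code point U+00E9
    } catch (std::runtime_error&) {
        std::cout << "that locale is not available on this system\n";
    }
    return 0;
}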
 

Ioannis Vranos

Pete said:
There's nothing there that requires wchar_t to be large enough to hold
Unicode code points. Certainly if an implementation supports a Unicode
locale, wchar_t has to be large enough to handle those characters. But
the language definition doesn't require Unicode locales.


Yes, I am talking about the upcoming Unicode character types in C++0x,
in comparison with the Unicode locales alternative.
 

Erik Wikström

How will it break backward compatibility if the size of wchar_t changes?

Because the user expects to be able to pack 5 wchar_t into a network
message of a fixed size, or read a few characters from a specific
position in a binary file. Or any number of reasons where someone has
made assumptions about the size of wchar_t.
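
A sketch of the kind of code Erik means (my own illustration, not from
the thread): the wire layout hard-codes today's sizeof(wchar_t), so
widening the type silently breaks it.

#include <cstring>

struct Message {
    char header[4];
    char name[5 * 2];   // "5 wchar_t of 2 bytes each" baked into the wire format
};

void pack(Message& m, const wchar_t (&name)[5])
{
    // Only correct while sizeof(wchar_t) == 2 on this implementation;
    // with a 4-byte wchar_t only the first half of the characters gets copied.
    std::memcpy(m.name, name, sizeof m.name);
}

int main()
{
    Message m = {};
    const wchar_t hello[5] = { L'H', L'e', L'l', L'l', L'o' };
    pack(m, hello);
    return 0;
}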
 
