Earl Purple
On VC++.NET, std::char_traits<char>::compare is implemented like this:
static int __cdecl compare
    (
    const _Elem *_First1,
    const _Elem *_First2,
    size_t _Count
    )
    { // compare [_First1, _First1 + _Count) with [_First2, ...)
    return (::memcmp(_First1, _First2, _Count));
    }
i.e. it uses memcmp. But memcmp performs an unsigned comparison,
whereas char is (on this platform) a signed type.

Therefore if I declare one std::string as "\x80" and another as
"\x7f" and compare them, the "\x7f" one is "lower", although if I
compared their first characters directly, the first character of the
"\x80" string would be "lower".

Is this behaviour standard? Is it correct? Is there a formal
definition of what the result of a std::string comparison should be
if one or more of the characters in either of the strings is
"negative"?