Style question: Always use signed integers or not?


Jerry Coffin

[email protected] says... said:
Yes. And on that compiler, SCHAR_MIN _was_ -127. That _means_ that the
implementation made no guarantees about behavior when computations
reach -128 even though experiments or documentation about the computer
architecture suggest you could go there. The point is that SCHAR_MIN _defines_
what counts as overflow for the signed char type.

The problem is that the wording for that part of the standard has
remained essentially the same, and still doesn't require that the values
in limits.h (or equivalents inside of namespace std) be truly accurate
in telling you the limits of the implementation. The implementation
clearly DOES have to provide AT LEAST the range specified, but there is
_nothing_ to prevent it from exceeding that range.
In short: contrary to your claim, you can portably detect overflow (and it
is particularly easy for unsigned types, where issues like the one with
SCHAR_MIN cannot happen).

Yes, it CAN happen for unsigned types. An implementation could (for
example) supply 32-bit integers, but claim that UINT_MAX was precisely 4
billion. The unsigned integer in question would NOT wrap at precisely 4
billion (and, in fact, an implementation that did would not conform).

Contrary to YOUR claim, I did NOT ever claim that you can't portably
detect overflow. What I said was:
Yes, but 1) it doesn't guarantee that the size you need will be present,
and 2) even if the size you need is present, it may not be represented
completely accurately.
and:

Even when/if <limits.h> does contain the correct data, and the right
size is present, you can end up with a rather clumsy ladder to get the
right results.

The fact is that I neither said nor implied that you could not portably
detect "overflow" (i.e. wraparound on an unsigned type).

What I said, and I maintain that it's true (and you've yet to show any
evidence of any sort to the contrary) was that 1) the information in
limits.h can be misleading, and 2) when/if you want wraparound at a
specific size, you're not guaranteed that such a size will exist, and 3)
even when/if it does exist, actually finding and using it can be clumsy.
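
For concreteness, the sort of ladder meant in point 3 might look like this (a sketch of the technique under discussion, not code from the thread; the macro tests shown are only illustrative):

#include <limits.h>

/* Pick an exactly-32-bit unsigned type using only <limits.h>. Real
   code would need more branches and some fallback strategy for when
   no exact match exists. */
#if UINT_MAX == 0xFFFFFFFFUL
typedef unsigned int uint32;
#elif ULONG_MAX == 0xFFFFFFFFUL
typedef unsigned long uint32;
#else
#error "no exactly-32-bit unsigned type found"
#endif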
 

Kai-Uwe Bux

[snip]

Jerry said:
The fact is that I neither said nor implied that you could not portably
detect "overflow" (i.e. wraparound on an unsigned type).

Really? This is from upthread:

Jerry said:
[ ... ]
AFAIK unsigned arithmetic is specified exactly by the standard. This
means for example that a debug implementation cannot detect and assert
the overflow, but has to produce the wrapped result instead:

Unsigned arithmetic is defined in all but one respect: the sizes of the
integer types. IOW, you know how overflow is handled when it does
happen, but you don't (portably) know when it'll happen.

Now, if somebody impersonated you, I apologize.

Whoever made the statement

"IOW, you know how overflow is handled when it does happen, but you don't
(portably) know when it'll happen."

was mistaken. This is the statement I responded to. Snipping it away over
and over again will not change that.


Best

Kai-Uwe Bux
 

Jerry Coffin

[email protected] says... said:
Whoever made the statement

"IOW, you know how overflow is handled when it does happen, but you don't
(portably) know when it'll happen."

was mistaken. This is the statement I responded to. Snipping it away over
and over again will not change that.

It is not what you responded to, and it IS correct. You do NOT know (a
priori) exactly when it'll happen. Nothing in limits.h changes that. I'd
quote the standard, but the problem is that this is simply a situation
in which there's nothing in the standard to quote. If you want to claim
that the number given as (for example) UINT_MAX in limits.h precisely
reflects when arithmetic with an unsigned integer will wrap around, you
have two choices: quote something from the standard that says so, or
else admit that it doesn't exist (or, of course, continue as you are
now, making wild, unsupported accusations!)
 

Kai-Uwe Bux

Jerry said:
It is not what you responded to,

The record clearly shows that this is precisely the statement I responded
to. I would quote my response posting in full, but let me just paste a link
to Google instead:

http://groups.google.com/group/comp.lang.c++/tree/browse_frm/thread/52f4c25b71dee3a5/8911caf6152e4b1e?rnum=11&_done=%2Fgroup%2Fcomp.lang.c%2B%2B%2Fbrowse_frm%2Fthread%2F52f4c25b71dee3a5%3F#doc_5c3e4897ec30f171

If there is any meaning to using bottom-posting as opposed to top-posting,
it should be clear that I responded to no other claim than the above.

and it IS correct. You do NOT know (a
priori) exactly when it'll happen. Nothing in limits.h changes that.

Nothing in <limits> is needed for unsigned types. The standard guarantees
that

(unsigned int)(-1)

is 2^N-1 where N is the bitlength of unsigned int [4.7/2]. The corresponding
statements for other unsigned types are true, too.
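
A minimal sketch of what that guarantee buys you (my illustration, relying on nothing beyond [4.7/2]):

#include <iostream>

int main() {
    // Converting -1 to an unsigned type yields 2^N - 1, the largest
    // value of that type -- no <limits> or <limits.h> needed.
    unsigned int umax = static_cast<unsigned int>(-1);
    std::cout << "unsigned int wraps back to 0 past " << umax << '\n';
    return 0;
}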


I'd
quote the standard, but the problem is that this is simply a situation
in which there's nothing in the standard to quote. If you want to claim
that the number given as (for example) UINT_MAX in limits.h precisely
reflects when arithmetic with an unsigned integer will wrap around, you
have two choices:

Formally, I did not claim that numeric_limits<unsigned int>::max()
gives the same value as (unsigned int)(-1). However, you are correct that I
mentioned <limits> in response to the above, possibly creating the
impression that this was the case.

quote something from the standard that says so, or
else admit that it doesn't exist (or, of course, continue as you are
now, making wild, unsupported accusations!)

I don't make unsupported accusations. I just restore context by quoting.
Also, I disagree with the statement that we cannot know when overflow will
happen for unsigned types. If your case for that statement relies on
<limits> being underspecified, I think the above observation from [4.7/2]
should clear things up.


Best

Kai-Uwe Bux
 

Jerry Coffin

[email protected] says... said:
and it IS correct. You do NOT know (a
priori) exactly when it'll happen. Nothing in limits.h changes that.

Nothing in <limits> is needed for unsigned types. The standard guarantees
that

(unsigned int)(-1)

is 2^N-1 where N is the bitlength of unsigned int [4.7/2]. The corresponding
statements for other unsigned types are true, too.

The point of my statement was that you don't know N ahead of time.
Nothing you've said changes that.

[ ... ]
On the other hand, the main part of my response demonstrated a check for
overflow that does not rely on <limits>.

Yes, but it doesn't tell you ahead of time when it'll happen, because
you don't know what the size of any specific integer type will be.
I don't make unsupported accusations. I just restore context by quoting.

You did make unsupported accusations, and you still haven't quoted a
single thing from the standard to support them. You won't either,
because they AREN'T there.
Also, I disagree with the statement that we cannot know when overflow will
happen for unsigned types. If your case for that statement relies on
<limits> being underspecified, I think the above observation from [4.7/2]
should clear things up.

Then you haven't got a clue what you're talking about! The quote
above does NOT tell you the size of any unsigned integer type, which is
exactly what you need before you know when wraparound will happen.
 

Kai-Uwe Bux

Jerry said:
The problem is that the wording for that part of the standard has
remained essentially the same, and still doesn't require that the values
in limits.h (or equivalents inside of namespace std) be truly accurate
in telling you the limits of the implementation. The implementation
clearly DOES have to provide AT LEAST the range specified, but there is
_nothing_ to prevent it from exceeding that range.


Yes, it CAN happen for unsigned types. An implementation could (for
example) supply 32-bit integers, but claim that UINT_MAX was precisely 4
billion.

Are you sure this can happen on a standard conforming implementation?

In [18.2.1.2/4], the standard describes

numeric_limits<T>::max() throw();

as returning the "maximum finite value". In a footnote, it also requires
that UINT_MAX agrees with numeric_limits<unsigned int>::max(). In the
suggested implementation, 4,000,000,001 would be a valid unsigned int
bigger than numeric_limits<unsigned int>::max(), which contradicts the
standard. (Admittedly, I do not know whether footnotes like that are
normative.) But, in any case, numeric_limits<T>::max() should give you the
maximum finite value portably according to the standard.
The unsigned integer in question would NOT wrap at precisely 4
billion (and, in fact, an implementation that did would not conform).

Agreed, but I think the implementation would be non-conforming anyway.


[snip]


Best

Kai-Uwe Bux
 

Kai-Uwe Bux

Jerry said:
[email protected] says... said:
and it IS correct. You do NOT know (a
priori) exactly when it'll happen. Nothing in limits.h changes that.

Nothing in <limits> is needed for unsigned types. The standard guarantees
that

(unsigned int)(-1)

is 2^N-1 where N is the bitlength of unsigned int [4.7/2]. The
corresponding statements for other unsigned types are true, too.

The point of my statement was that you don't know N ahead of time.
Nothing you've said changes that.

I am utterly confused now. Clearly we have different understandings of what
it means to "know (a priory) exactly when it [overflow] will happen.".

I took your words to mean that I cannot determine at compile time the upper
bound for an unsigned type. The word "portably" in your phrase:

IOW, you know how overflow is handled when it does happen, but you don't
(portably) know when it'll happen.

seemed to indicate that, and your follow up remarks on the unreliability of
limits.h strengthened that interpretation in my mind (as I got the
impression that you maintained that the portability issue arose from
possible inaccurate bounds in <limits>).

Now, it appears that you had something different in mind. I am somewhat at a
loss to see what it might be.

[ ... ]
On the other hand, the main part of my response demonstrated a check for
overflow that does not rely on <limits>.

Yes, but it doesn't tell you ahead of time when it'll happen, because
you don't know what the size of any specific integer type will be.
I don't make unsupported accusations. I just restore context by quoting.

You did make unsupported accusations,

I maintain that

(a) all my "accusations" where supported (they were all of the form "you
claimed ...", "you snipped ..." or some such thing, and all of them were
supported by quotes) and

(b) all my unsupported claims (of which I may have made a few) do not
qualify as accusations (since they are not criticisms targeted toward
anybody).


If you felt offended by any of my remarks, I am sorry.

and you still haven't quoted a single thing from the standard to support
them. You won't either, because they AREN'T there.

How and why would I support a claim of the form "you claimed ..." by a quote
from the standard?

Also, I disagree with the statement that we cannot know when overflow
will happen for unsigned types. If your case for that statement relies on
<limits> being underspecified, I think the above observation from [4.7/2]
should clear things up.

Then you haven't got a clue what you're talking about! The quote
above does NOT tell you the size of any unsigned integer type, which is
exactly what you need before you know when wraparound will happen.

I am pretty certain that I know what I am talking about. It might however be
something that you were not talking about and I may have misinterpreted
your claim(s).

The above quote allows you to portably(!) determine the size of any unsigned
integer type at compile time. If that is not what you mean by "(portably)
know when [overflow will happen]", I was just misunderstanding you.
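
Such a compile-time determination might look like this (a sketch of the technique, not code from the thread; it assumes the type's maximum fits in unsigned long, which holds for the standard unsigned types under discussion):

// Bit length of an unsigned type, computed at compile time by
// recursing on its maximum value, which [4.7/2] pins down exactly.
template <unsigned long Max>
struct bit_count {
    static const unsigned value = 1 + bit_count<(Max >> 1)>::value;
};

template <>
struct bit_count<0> {
    static const unsigned value = 0;
};

// e.g. 32 on a typical 32-bit implementation:
static const unsigned uint_bits = bit_count<(unsigned int)(-1)>::value;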


Best

Kai-Uwe Bux
 

Jerry Coffin

[email protected] says... said:
I am utterly confused now. Clearly we have different understandings of what
it means to "know (a priory) exactly when it [overflow] will happen.".

I took your words to mean that I cannot determine at compile time the upper
bound for an unsigned type. The word "portably" in your phrase:

IOW, you know how overflow is handled when it does happen, but you don't
(portably) know when it'll happen.

seemed to indicate that, and your follow up remarks on the unreliability of
limits.h strengthened that interpretation in my mind (as I got the
impression that you maintained that the portability issue arose from
possible inaccurate bounds in <limits>).

Yes and no -- as I thought my mention of the ladder of #if/#elif's would
make obvious, I was less concerned with compile time than with when you
write the code. Mention had been made previously (for one example) of
cryptographic algorithms that depend on wraparound happening at some
particular size (e.g. 32-bit for SHA-1).

The problem is that (without something like uint32_t, which isn't included
in the current version of C++) you can't easily pick a type that's
guaranteed to give you that. On a currently-typical 32-bit
implementation, unsigned int or unsigned long will do the trick -- but
then again, on a 64-bit implementation (hardly a rarity anymore either)
both of those might give you 64 bits instead of 32, causing an algorithm
that depends on wraparound at 32 bits to fail (unless you explicitly add
something like '& 0xffffffff' at the appropriate places).
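
In code, that masking might look like this (a sketch; the helper name is mine, not from any post in the thread):

typedef unsigned long u32plus;   // guaranteed to be at least 32 bits

// Addition that wraps at 2^32 even when u32plus is wider than 32 bits.
inline u32plus add_mod32(u32plus a, u32plus b) {
    return (a + b) & 0xFFFFFFFFUL;
}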

At least to me, "portability" is mostly something you build into your
code -- i.e. writing the code so it produces correct results on
essentially any conforming implementation of C++ (or at least close to
conforming). An obvious example would be code written 20 years ago that
still works with every current C++ compiler I have handy today (and I'd
venture to guess that any compiler it didn't work with was either badly
broken, or for a language other than C++).

For code like this, learning about wraparound at compile-time is FAR too
late. This code works (just fine) with compilers that didn't exist when
it was written. Unfortunately, writing code that way is a pain -- the
aforementioned #if/#elif ladder being part (but only part) of the
problem.
 

Jerry Coffin

[email protected] says... said:
Yes, it CAN happen for unsigned types. An implementation could (for
example) supply 32-bit integers, but claim that UINT_MAX was precisely 4
billion.

Are you sure this can happen on a standard conforming implementation?

In [18.2.1.2/4], the standard describes

numeric_limits<T>::max() throw();

as returning the "maximum finite value". In a footnote, it also requires
that UINT_MAX agrees with numeric_limits<unsigned int>::max(). In the
suggested implementation, 4,000,000,001 would be a valid unsigned int
bigger than numeric_limits<unsigned int>::max(), which contradicts the
standard. (Admittedly, I do not know whether footnotes like that are
normative.) But, in any case, numeric_limits<T>::max() should give you the
maximum finite value portably according to the standard.

Yes. The problem is that the standard is fairly specific in defining
the words and what they mean. For something to constitute a requirement,
it normally needs to include a word like "shall". As it's worded right
now, you have something that looks like a requirement, and is almost
certainly intended to be one, but really isn't.

Getting things like this right is a real pain. I once sent a list of
problems like this about the then-current draft of the C++0x standard,
and while I spent a fair amount of time on it, I know I didn't catch all
the problems, and probably not even the majority of them. (In case you
care, it's at:

http://groups.google.com/group/comp.std.c++/msg/7d54efb7bacb098c

There are a couple of these changes that _might_ not reflect the
original intent -- but most of them are simply changing non-normative
wording to normative wording. One example that occurred a number of
times was using "must" instead of "shall". In (at least most of) the
cited cases, these were _clearly_ intended to place requirements on the
implementation -- but the rules given in the standard itself made it
open to a LOT of question whether they could be considered requirements
or not.
 

James Kanze

[...]
b) With unsigned integers, you can check for overflow easily:
unsigned int a = ...;
unsigned int b = ...;
unsigned int sum = a + b;
if ( sum < a ) {
    std::cout << "overflow happened.\n";
}

Which is fine for addition, but fails for other operators, like
multiplication.
It's somewhat nice that you can check for the overflow _after_
you did the addition (this does not necessarily work with
signed types). Also, the check is usually a bit cheaper than
for signed types (in the case of addition, subtraction, and
division a single comparison is enough; I did not think too
hard about multiplication).
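
For what it's worth, multiplication does admit a similar after-the-fact test (a sketch of my own, not from the thread):

// After prod = a * b, wraparound occurred exactly when a != 0 and
// dividing the (possibly truncated) product by a fails to recover b.
inline bool mul_wrapped(unsigned a, unsigned b, unsigned prod) {
    return a != 0 && prod / a != b;
}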

Can division overflow? The only possible overflow I can think
of is if you divide by 0 (since no integral representations I
know of support infinity), and that's undefined behavior even on
an unsigned.

My understanding of the motivation behind undefined behavior is
that it allows the implementation to do something reasonable
(crash the program, for example). Regrettably, no
implementations today seem to take advantage of this. Which is
a shame, because at the machine instruction level, it's usually
very easy to check for overflow (signed or unsigned) after the
operation, but there's no way to access these instructions from
C++.
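
As an aside that post-dates this thread: GCC and Clang later exposed exactly those machine-level checks as builtins, so a sketch like the following works there, though as a compiler extension rather than standard C++:

// Returns true if the mathematically exact sum does not fit in an int;
// on success the sum is stored in *out.
bool add_overflows(int a, int b, int* out) {
    return __builtin_add_overflow(a, b, out);
}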
 

James Kanze

[ ... ]
Yes. And on that compiler, SCHAR_MIN _was_ -127. That
_means_ that the implementation made no guarantees about
behavior when computations reach -128 even though
experiments or documentation about the computer architecture
suggest you could go there. The point is that SCHAR_MIN
_defines_ what counts as overflow for the signed char type.
The problem is that the wording for that part of the standard
has remained essentially the same, and still doesn't require
that the values in limits.h (or equivalents inside of
namespace std) be truly accurate in telling you the limits of
the implementation. The implementation clearly DOES have to
provide AT LEAST the range specified, but there is _nothing_
to prevent it from exceeding that range.

What do the words "minimum value for an object of type signed
char" mean, if not the minimum value for an object of type
signed char? I think that the argument concerning the legality
of SCHAR_MIN being -127, even though the machine was 2's
complement, was more along the lines of what Kai-Uwe said: an
implementation could "declare" that -128 was an undefined value
(and reserve the right to trap on it, even if it didn't
currently trap). The 1999 version of C, however, clearly
doesn't allow this, allowing only 2's complement, 1's complement
and signed magnitude. (It does allow trapping on -0 in the
latter two cases.)
Yes, it CAN happen for unsigned types. An implementation could
(for example) supply 32-bit integers, but claim that UINT_MAX
was precisely 4 billion. The unsigned integer in question
would NOT wrap at precisely 4 billion (and, in fact, an
implementation that did would not conform).

Again, what does "maximum value for an object of type xxx" mean?
The current C standard very definitely does require that the
value range of an unsigned integer be 0...2^N-1.

[...]
What I said, and I maintain that it's true (and you've yet to
show any evidence of any sort to the contrary) was that 1) the
information in limits.h can be misleading,

I think that the current C standard pretty much does
limit what a compiler is allowed to do here. And given the time
that this has been established practice, I don't think we have
to worry too much about an error or a lack of conformance here.
(I, too, have seen SCHAR_MIN equal to -127 on a 2's complement
machine. But that was a long, long time ago; I don't think it's
an issue today, and given the tightened up wording in C99, I
would consider it an error.)
 

Kai-Uwe Bux

Jerry said:
[email protected] says... said:
Yes, it CAN happen for unsigned types. An implementation could (for
example) supply 32-bit integers, but claim that UINT_MAX was precisely 4
billion.

Are you sure this can happen on a standard conforming implementation?

In [18.2.1.2/4], the standard describes

numeric_limits<T>::max() throw();

as returning the "maximum finite value". In a footnote, it also requires
that UINT_MAX agrees with numeric_limits<unsigned int>::max(). In the
suggested implementation, 4,000,000,001 would be a valid unsigned int
bigger than numeric_limits<unsigned int>::max(), which contradicts the
standard. (Admittedly, I do not know whether footnotes like that are
normative.) But, in any case, numeric_limits<T>::max() should give you the
maximum finite value portably according to the standard.

Yes. The problem is that the standard is fairly specific in defining
the words and what they mean. For something to constitute a requirement,
it normally needs to include a word like "shall". As it's worded right
now, you have something that looks like a requirement, and is almost
certainly intended to be one, but really isn't.

I do not entirely agree with that interpretation of the standard. It is
correct that the C standard says that "shall" denotes a requirement; I do
not see, however, that one can infer that _only_ sentences
containing "shall" can constitute a requirement. E.g., deque::size() is
defined in [23.1/6] without "shall". Should one maintain that there is no
normative requirement that it returns the number of elements in the
container?

Now, I do see differences: (a) [23.1] is a section entitled "Container
requirements" and on the other hand, (b) [18.2.1.2/4] is not even a
complete sentence. However, many return clauses in the library do not
use "shall" and Section 17.3 that could tell us how they specify
requirements is only informational. I do hesitate to conclude that return
clauses are by and large non-normative. I feel the contention that the lack
of "shall" a priory renders a sentence non-normative is too radical an
interpretation. In any case, I would consider it a very, very lame excuse
of a compiler vendor to point out that there is a "shall" missing when
confronted with the contention that his implementation of
numeric_limits<unsigned>::max() is in violation of the standard.


I will admit that I am on shaky ground here. It appears that the C++
standard incorporates those rules by reference, but I do not even see
that they are explicitly imported from the C standard, so the above might
be completely bogus. I do see that [1.2/2] incorporates all definitions
from ISO/IEC 2382, which I don't have.

Could you provide some more details about how those "shall" rules enter the
picture with C++ and how they are worded?

Getting things like this right is a real pain. I once sent a list of
problems like this about the then-current draft of the C++0x standard,
and while I spent a fair amount of time on it, I know I didn't catch all
the problems, and probably not even the majority of them. (In case you
care, it's at:

http://groups.google.com/group/comp.std.c++/msg/7d54efb7bacb098c

There are a couple of these changes that _might_ not reflect the
original intent -- but most of them are simply changing non-normative
wording to normative wording. One example that occurred a number of
times was using "must" instead of "shall". In (at least most of) the
cited cases, these were _clearly_ intended to place requirements on the
implementation -- but the rules given in the standard itself made it
open to a LOT of question whether they could be considered requirements
or not.

That's an impressive list. It would definitely be better if the standard were
more consistent. Your efforts are to be commended.


Thanks

Kai-Uwe Bux
 

Kai-Uwe Bux

Jerry said:
[email protected] says... said:
I am utterly confused now. Clearly we have different understandings of
what it means to "know (a priory) exactly when it [overflow] will
happen.".

I took your words to mean that I cannot determine at compile time the
upper bound for an unsigned type. The word "portably" in your phrase:

IOW, you know how overflow is handled when it does happen, but you
don't (portably) know when it'll happen.

seemed to indicate that, and your follow up remarks on the unreliability
of limits.h strengthened that interpretation in my mind (as I got the
impression that you maintained that the portability issue arose from
possible inaccurate bounds in <limits>).

Yes and no -- as I thought my mention of the ladder of #if/#elif's would
make obvious, I was less concerned with compile time than with when you
write the code. Mention had been made previously (for one example) of
cryptographic algorithms that depend on wraparound happening at some
particular size (e.g. 32-bit for SHA-1).

Now, I understand. Sorry for the confusion. I didn't see that SHA-1 example
so I was considering the initial statement in isolation and my
interpretation headed down a different path from the beginning.

The problem is that (without something like uint32_t, which isn't included
in the current version of C++) you can't easily pick a type that's
guaranteed to give you that. On a currently-typical 32-bit
implementation, unsigned int or unsigned long will do the trick -- but
then again, on a 64-bit implementation (hardly a rarity anymore either)
both of those might give you 64 bits instead of 32, causing an algorithm
that depends on wraparound at 32 bits to fail (unless you explicitly add
something like '& 0xffffffff' at the appropriate places).

Right. Been there, done that.

At least to me, "portability" is mostly something you build into your
code -- i.e. writing the code so it produces correct results on
essentially any conforming implementation of C++ (or at least close to
conforming). An obvious example would be code written 20 years ago that
still works with every current C++ compiler I have handy today (and I'd
venture to guess that any compiler it didn't work with was either badly
broken, or for a language other than C++).

For code like this, learning about wraparound at compile-time is FAR too
late. This code works (just fine) with compilers that didn't exist when
it was written. Unfortunately, writing code that way is a pain -- the
aforementioned #if/#elif ladder being part (but only part) of the
problem.

Actually, I think it isn't all that bad. What I did once is to define my own
arithmetic 32-bit unsigned integer type (using the &0xffffffff trick). With
compile-time template tricks, you can eliminate the & 0xffffffff on those
platforms where unsigned long or unsigned int happens to have 32 bits. From
then on, I just use the special type for all those algorithms that really
need 32 bits.
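
Such a type might look roughly like this (my sketch, not Kai-Uwe's actual code; the template tricks for dropping the mask are omitted, since the plain mask already costs nothing when the base type is exactly 32 bits, as James notes below):

// A wrapper that wraps around at 2^32 regardless of how wide the
// underlying unsigned long is. Only addition is shown.
class uint32_wrap {
    unsigned long v_;                  // at least 32 bits
    static unsigned long mask(unsigned long x) { return x & 0xFFFFFFFFUL; }
public:
    explicit uint32_wrap(unsigned long x = 0) : v_(mask(x)) {}
    uint32_wrap operator+(uint32_wrap o) const {
        return uint32_wrap(v_ + o.v_); // constructor re-masks the sum
    }
    unsigned long value() const { return v_; }
};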


Best

Kai-Uwe Bux
 

James Kanze

[...]
Actually, I think it isn't all that bad. What I did once is to
define my own arithmetic 32-bit unsigned integer type (using
the &0xffffffff trick). With compile-time template tricks, you
can eliminate the & 0xffffffff on those platforms where
unsigned long or unsigned int happens to have 32 bits.

There's no need for any template tricks. Just specify -O (or
whatever it takes for optimization) in the command line of the
compiler. (For that matter, I expect most compilers optimize
this correctly even without -O.)
 

Jerry Coffin

[email protected] says... said:
I do not entirely agree with that interpretation of the standard. It is
correct that the C standard says that "shall" denotes a requirement; I do
not see, however, that one can infer that _only_ sentences
containing "shall" can constitute a requirement. E.g., deque::size() is
defined in [23.1/6] without "shall". Should one maintain that there is no
normative requirement that it returns the number of elements in the
container?

Unfortunately, yes, at least probably.
Now, I do see differences: (a) [23.1] is a section entitled "Container
requirements" and on the other hand, (b) [18.2.1.2/4] is not even a
complete sentence. However, many return clauses in the library do not
use "shall" and Section 17.3 that could tell us how they specify
requirements is only informational. I do hesitate to conclude that return
clauses are by and large non-normative. I feel the contention that the lack
of "shall" a priory renders a sentence non-normative is too radical an
interpretation. In any case, I would consider it a very, very lame excuse
of a compiler vendor to point out that there is a "shall" missing when
confronted with the contention that his implementation of
numeric_limits<unsigned>::max() is in violation of the standard.

Yet, that's pretty much the argument that was used to justify defining
SCHAR_MIN as -127 on a two's complement machine...

The part that always struck me as odd about it was that 1) the intent
appeared quite obvious (at least to me), and 2) they almost certainly
put more work into arguing that their implementation was allowed than it
would have taken to just fix the header.

The current C++ standard is based on the C95 standard. I believe in this
area C95 is identical to C89/90. Part of its definition of undefined
behavior reads:

If a "shall" or "shall not" requirement that appears outside
of a constraint is violated, the behavior is undefined.

So, at least according to that, the presence (or lack thereof) of the
word "shall" really does govern the meaning of a specific clause. OTOH,
I'll openly admit that even in that definition, it speaks of a "'shall'
or 'shall not' requirement", which implies that some other sort of
requirement is possible -- but nothing is ever said about what it means
if some other requirement is violated.

Specifically, there is nothing to say or imply that violating any other
requirement means anything about whether an implementation is conforming
or not. In the absence of such a definition, I can't see how there's any
real basis for saying to does mean anything about conformance. Under
those circumstances, I'm left with little alternative but to conclude
exactly as I said before: they look like requirements they were probably
intended to be requirements, but they're really not.

Looking at it from a slightly different direction, they're a bit like a
law that said "you must do this", but then went on to say "no penalty of
any sort may be imposed for violating this law." What you have left
isn't much of a law...
I will admit that I am on shaky ground here. It appears that the C++
standard incorporates those rules by reference, but I do not even see
that they are explicitly imported from the C standard, so the above might
be completely bogus. I do see that [1.2/2] incorporates all definitions
from ISO/IEC 2382, which I don't have.

I tend to agree on that -- but without assuming they imported the
requirements from the C standard, we're left with _nothing_ in the C++
standard being normative. Unfortunately, I can't find _anything_ to
support the notion that the parts lacking a "shall" are normative,
beyond a bare assertion that "they obviously should be."

Looking at it from the other end for a moment, it's probably open to
question whether it makes any real difference in the end. AFAIK, there's
no longer anybody who attempts to do formal certification that compilers
conform to any of the language standards (in fact, I don't think any
such a program has ever existed for C++). In the absence of such a
program, we're left with only market forces, which tend to be based more
on overall perceived quality of implementation than on technical details
of whether something constitutes conformance or not. I think it's fair
to say that almost anybody would consider something a lousy
implementation if it didn't implement requirements correctly, even in
the absence of "shall" or "shall not" in the requirement in the
standard. As such, it probably doesn't make much _practical_ difference
either way.
 

Jerry Coffin

[email protected] says... said:
Now, I understand. Sorry for the confusion. I didn't see that SHA-1 example
so I was considering the initial statement in isolation and my
interpretation headed down a different path from the beginning.

I believe the previous mention in the thread was of "cryptographic
algorithms", not specifically of "SHA-1" as such -- I added that as
simply an example of a specific cryptographic algorithm with
requirements like those previously mentioned.

[ ... ]
Actually, I think it isn't all that bad. What I did once is to define my own
arithmetic 32-bit unsigned integer type (using the &0xffffffff trick). With
compile-time template tricks, you can eliminate the & 0xffffffff on those
platforms where unsigned long or unsigned int happens to have 32 bits. From
then on, I just use the special type for all those algorithms that really
need 32 bits.

I've never seen a need for that -- it would be a truly rare compiler
that didn't detect when the '& 0xffffffff' couldn't possibly have any
effect, and just ignore it. I wouldn't worry much about its effect at
runtime, though I can certainly see the possibility of a compiler that
gave a warning about "code has no effect" or something on that order. I
prefer clean compiles, but I think it's more important that the code
looks clean to me than to the compiler. I'm certainly not very excited
about adding a lot of template tricks just to avoid a compiler warning.
 

Jerry Coffin

[ ... ]
I think that the current C standard pretty much does
limit what a compiler is allowed to do here. And given the time
that this has been established practice, I don't think we have
to worry too much about an error or a lack of conformance here.
(I, too, have seen SCHAR_MIN equal to -127 on a 2's complement
machine. But that was a long, long time ago; I don't think it's
an issue today, and given the tightened up wording in C99, I
would consider it an error.)

I agree that we no longer have to worry much about it -- and while I
agree that the wording in C99 has been tightened, I'm not entirely
convinced that it's quite enough to make it a requirement. OTOH, as I
said previously in this thread, I'm not entirely convinced that it
really matters either.
 

Jerry Coffin

[ ... ]
I tend to agree on that -- but without assuming they imported the
requirements from the C standard, we're left with _nothing_ in the C++
standard being normative. Unfortunately, I can't find _anything_ to
support the notion that the parts lacking a "shall" are normative,
beyond a bare assertion that "they obviously should be."

Thinking for a second longer, that's not really true -- as Pete pointed
out, the ISO has guidelines for standards that fairly directly say that
"shall" and "shall not" normally translate to requirements and
prohibitions, respectively.
 
