Strange - a simple assignment statement shows an error in VC++ but works in gcc!


Keith Thompson

CBFalconer said:
And that means that things the C standard calls 'implementation
defined' are undefined in the C standard, and really need to be
treated in the same manner. The only difference is that the actual
effects need to be resolved and documented FOR THAT PARTICULAR
COMPILATION SYSTEM. Nothing more.
[...]

The difference between undefined and implementation-defined behavior
is bigger than you suggest.

For implementation-defined behavior, typically the standard offers a
finite number of possibilities, and each implementation chooses and
documents one of them. In many cases, a program is able to determine
which choice was made; see, for example, the macros in <limits.h> and
<float.h>. Thus even a portable program can use
implementation-defined features; for example, a multiple-precision
arithmetic program might safely represent a given large number as an
array of 20 ints, or 10 longs, or whatever.
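
For illustration only, here is a minimal sketch of that idea (mine, not from the thread; the 640-bit target, the variable names, and the choice of unsigned long are arbitrary assumptions). The point is just that the sizing decision can be computed from what <limits.h> reports:

    #include <limits.h>
    #include <stdio.h>

    #define NEEDED_BITS 640UL   /* arbitrary example target */

    int main(void)
    {
        unsigned long max = ULONG_MAX;
        unsigned bits = 0;
        unsigned blocks;

        /* Count the value bits in unsigned long by shifting its
           maximum value down until it is exhausted. */
        while (max != 0) {
            max >>= 1;
            bits++;
        }

        /* Round up: how many unsigned longs cover NEEDED_BITS bits? */
        blocks = (unsigned) ((NEEDED_BITS + bits - 1) / bits);

        printf("unsigned long holds %u value bits; "
               "a %lu-bit number needs %u of them\n",
               bits, NEEDED_BITS, blocks);
        return 0;
    }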

For undefined behavior, as we all know by now, the standard guarantees
nothing. A carefully written program that relies on certain forms of
implementation-defined behavior can be guaranteed to behave properly;
if it depends on undefined behavior, watch out for nasal demons.
 

CBFalconer

Keith said:
.... snip ...

The difference between undefined and implementation-defined behavior
is bigger than you suggest.

For implementation-defined behavior, typically the standard offers a
finite number of possibilities, and each implementation chooses and
documents one of them.
[...]

You have a valid point. However, it is not universal: the standard does
not always offer a finite set of possibilities for an
implementation-defined choice.
 

hanukas

Golden California Girls said:
Standard" attempts to not impose a restriction it actually does,
because it does require that other standards exist and by reference
those standards become part of "this International Standard."  So if
the manual for the implementation exists and says it generates code
to run on the XYZ processor and the XYZ processor has a manual that
says what happens in the case of integer overflow then "this
International Standard" also says what happens in the case of
integer overflow despite (3.4.3p3) "EXAMPLE An example of undefined
behavior is the behavior on integer overflow."

[...]

I don't think so.

The C standard requires an implementation to document certain choices.
It doesn't require it to document that it generates code to run on the
XYZ processor, or to document, directly or indirectly, what happens on
integer overflow.  A vendor can certainly provide whatever additional
documentation it likes, but that extra documentation doesn't become
part of the C standard.

And even if it did, your conclusion doesn't follow.  For example,
suppose the FooCC implementation's documentation says that it
generates code for the Foo 4200 processor, and the Foo 4200
processor's documentation says that integer overflow wraps around in
the common two's-complement fashion, so that INT_MAX + 1 == INT_MIN.
The FooCC compiler can still perform optimizations and generate code
based on the assumption that integer overflow does not occur.  For
example, for this code fragment:

    int i = INT_MAX;
    i ++;
    puts("Hello");

the compiler could legitimately generate code that never prints
"Hello", because the behavior is undefined.  This would not violate
either the C standard, the FooCC documentation, or the Foo 4200
processor documentation.
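
To make that concrete, here is a small sketch of my own (not from the
thread): the first function's overflow test performs the overflow
before the comparison, which is undefined behaviour, so a compiler may
fold the test to false; the second checks against INT_MAX first and
never overflows.

    #include <limits.h>
    #include <stdio.h>

    /* Overflows first, tests afterwards: i + 1 invokes undefined
       behaviour when i == INT_MAX, so a compiler may assume the
       comparison is always false and remove it -- even on a CPU whose
       add instruction wraps around. */
    int will_overflow_unsafe(int i)
    {
        return i + 1 < i;
    }

    /* Tests before doing any arithmetic: nothing here ever overflows,
       so the result is guaranteed on every conforming implementation. */
    int will_overflow_safe(int i)
    {
        return i == INT_MAX;
    }

    int main(void)
    {
        printf("%d\n", will_overflow_safe(INT_MAX));    /* prints 1 */
        printf("%d\n", will_overflow_unsafe(INT_MAX));  /* no guarantee */
        return 0;
    }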

If the FooCC compiler's documentation states that it doesn't perform
such optimization, but the compiler actually does so, then the
compiler is violating its own documentation -- but that's not a
violation of the C standard.

I am not disagreeing with what you are saying, but I would like to
offer my point of view on the rationale behind leaving something
undefined. The reason makes a lot of sense and is pragmatic:

Let's use the ++ operator as an example. It basically means "increment
the value". The idea is that it should translate to efficient machine
code, ideally a single increment instruction. It might combine into
something bigger with constant propagation and other optimizations that
may or may not be going on, but that is irrelevant, so I won't go down
that tangent in more detail.

The _point_ is that if the C standard required a *specific* behaviour
on overflow, ALL implementations would be mandated to produce that
specific behaviour. This means EVERY FUCKING INCREMENT OPERATOR would
have to add "guard" code for that specific overflow behaviour on
architectures where the native behaviour isn't in line with the
standard's requirements (see the sketch below). That would mean
overhead, and the designers wanted to avoid this overhead being FORCED
by the wording of the standard.
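
A rough sketch of what that guard code could look like (my own
invention; the "wrap to INT_MIN" rule is just a stand-in for whatever
behaviour a hypothetical standard might mandate):

    #include <limits.h>

    /* If the standard mandated, say, wrapping to INT_MIN on overflow,
       then on hardware that traps or saturates instead the compiler
       could no longer emit a bare increment instruction for i++; every
       int increment would have to become something like this. */
    static int guarded_increment(int i)
    {
        if (i == INT_MAX)
            return INT_MIN;   /* the hypothetically mandated result */
        return i + 1;         /* common case still pays a compare+branch */
    }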

It isn't like the standard COULDN'T require a lot of things, but those
requirements were relaxed in favour of performance. This leaves some
burden on the programmer; there is no substitute for knowing what you
are doing. Now, why choose performance then? Why not safety? A good
question. I cannot speak on behalf of the authors of the standard, but
what I do know is that the more compromises are made, the less useful
the language would be for the purposes I feel it was designed for: a
close-to-the-metal abstraction of the underlying machine
architecture(s).

They would have shot themselves in the foot if they compromised too
much; that isn't fact but opinion, and I Might Be Wrong (tm), but that's
how I feel based on various factors that are biased by my personal
experience. xD

Of course, such shortcomings would be "easy" to work around on specific
architectures by writing the code in assembly, but that would defeat
the whole purpose of writing the code in C to begin with.

I should have used fewer words, because this is basically a very simple
issue, at least from my point of view. What I do not understand is why
more contributors to this thread don't see the same very simple point.
I have to conclude that most likely I am full of shit and wrong and
shouldn't have posted. Oh well.
 

James Kuyper

The standard term for the concept GGG is describing is "incorporation by
reference". However, one document is not considered to incorporate
another by reference, just because the first document refers to the
other. The first document must explicitly say that the other document is
being incorporated; otherwise "incorporate by reference" would just be
synonymous with "refer". For example, Section 2p1 says "The following
normative documents contain provisions which, through reference in this
text, constitute provisions of this International Standard." That IS
incorporation by reference.

The standard defines the term "implementation-defined" as "unspecified
behavior where each implementation documents how the choice is made",
but since it contains no wording with respect to such documentation
which is comparable to that used in 2p1, that is NOT incorporation by
reference.
 

hanukas

=) =) I figured out what I was trying to say:

The standard was drafted around the lowest common denominator of the
hardware contemporary at the time, at the cost of everyone paying a
price to cater to a few eccentric implementations.

Of course, with 20/20 hindsight it's easy to see that two's complement
arithmetic (among other things) would have been a fairly reasonable
thing to require. If someone chooses a numeric representation or an
adder implementation that cannot implement the standard efficiently,
that's their headache.
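
For what it's worth, here is a small sketch (mine, not from the post)
of the usual workaround in portable C today: do the arithmetic in
unsigned, where wraparound is fully defined, and convert back, which is
implementation-defined rather than undefined.

    #include <limits.h>
    #include <stdio.h>

    /* Wrapping addition without relying on signed overflow: unsigned
       arithmetic is defined to wrap modulo UINT_MAX + 1, and converting
       the out-of-range result back to int is implementation-defined
       (commonly two's complement wraparound), not undefined. */
    static int add_wrap(int a, int b)
    {
        return (int)((unsigned)a + (unsigned)b);
    }

    int main(void)
    {
        printf("%d\n", add_wrap(INT_MAX, 1));  /* INT_MIN on typical systems */
        return 0;
    }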

Back then things weren't as uniform (in a way) as they are now (of
course, there is diversity in other forms: the ISAs are still
different, and there are endianness differences and similar minor
issues).

OK, so the tools we use today are based on 40 years or so of legacy and
still have occasional bugs and other minor issues. There are very few
"new" languages that are interesting from the same point of view as
C/C++, but for graphics work (among other things) the new kid on the
block is OCL. (I am very biased.)
 

Richard Bos

James Kuyper said:
Richard said:
James Kuyper said:
the strongest statement you can make along those lines is to
replace "has" wit [EF BD 88 0A E2 80 9D EF BD 8D EF BD 89 EF
BD 87 EF BD 88 EF BD 94 E3 80 80 EF BD 88 EF BD 81 EF BD 96 EF BD 85
E2 80 9D EF]

Interesting claim! :)

Sorry about that. The text you're referring to was supposed to say
replace "has" with "might have".

My newsreader has recently acquired a bug which sometimes causes it to
switch into a mode in which characters are displayed with a different
font while I'm composing a message. I have not yet figured out what
triggers this mode,

Undefined behaviour?

Richard
 

Bill Marcum

["Followup-To:" header set to comp.os.linux.development.apps.]
Bad idea. Newsreaders other than GG aren't going to show it that way.
Mine showed me a series of '?' characters in the original article.
slrn (version pre0.9.9-111) showed the typewriter font, but most people
don't use slrn.
 
