Knowing the implementation, do all undefined behaviours become implementation-defined behaviours?


Tim Rentsch

Seebs said:
I do not believe this to be the case.

Okay, I'll read on...
Well, actually. The reason I hold my current belief is that I have had
opportunities to discuss "may be used uninitialized" warnings with gcc
developers.

Interjection: this response is not really on point for what I
was saying (unfortunately hidden because the context snipping was
a bit overzealous), but no matter, I'll try to clear that up
later...
And it turns out that, in fact, spurious warnings are
consistently reported as bugs, and that at any given time, gcc is usually
known both to give spurious warnings and to miss some possible uninitialized
uses. In each case, the goal of the developers seems to be to maximize
the chances that gcc is correct.

It seems worth pointing out that this comment is of the form "I
have some private knowledge, not known to the general public, and
about which I'm not going to give any real specifics, that makes
me think I'm right." It's great if made to win an argument.
Not as good if the point is to communicate reasoning and reach
shared understanding.
Now, maybe that makes gcc "not a good compiler", but it certainly makes gcc
the sort of compiler that customers appear to want, which is one which does
its best to minimize errors rather than accepting a larger number of errors
in order to ensure that they're all of the same sort.

It's not really possible to give a useful response to this,
because there is no knowledge generally publicly available
on which to base a response. Effectively the statement just
stops further communication. Is that what you wanted? Or
were you meaning to do something else?
Keep in mind that, stupid though it may be, many projects have a standing
policy of building with warnings-as-errors, so a spurious warning can cause
a fair bit of hassle.

Unfortunately the discussion got sidetracked onto whether or not
gcc is a decent compiler. Of course no sensible person wants
stupid, bogus, or nearly-information-content-free warnings of the
kind attributed to gcc above (okay that description may be a
little unfair, but it's not completely unfair). However -- and
this is the key point -- that's not what I was talking about; I
tried to make that clear but despite that the discussion got
turned into an assessment of gcc's warning policy.

To get back on track, two things: first, I'm talking about
warnings where "optimizations" are based on changing the expressed
intent by exercising a complete freedom of choice where undefined
behavior is involved; second, warnings about these can be given
exactly because the compiler has available _perfect knowledge_
about whether the condition in question has occurred -- it's not
any kind of heuristic like the case of uninitialized variables.
It's because of this perfect knowledge condition that giving
warnings for this situation is not equivalent to the halting
problem. Therefore a compiler can (and for purposes of the
question under discussion, will) faithfully give exact, informative
warnings about when such things occur. I think any sensible person
should agree that (an option that enables) getting these warnings
is desirable. (If gcc chooses to provide some sort of related-but-
not-quite-the-same warnings, those might or might not be desirable,
but in any case that's a separate discussion.)
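
For a concrete illustration, consider something along these lines (a
minimal sketch, with an invented function):

/* An intended overflow check. Signed overflow is undefined behaviour,
   so the compiler may assume x + 1 never wraps, fold the test to
   "false", and delete the branch entirely -- and it knows with
   certainty whenever it has done so. */
int wraps_if_incremented(int x)
{
    if (x + 1 < x)   /* removable under the no-overflow assumption */
        return 1;
    return 0;
}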

Is my point a little bit clearer now?
 

Seebs

Tim Rentsch said:
It seems worth pointing out that this comment is of the form "I
have some private knowledge, not known to the general public, and
about which I'm not going to give any real specifics, that makes
me think I'm right." It's great if made to win an argument.
Not as good if the point is to communicate reasoning and reach
shared understanding.

I was responding to the allegation that I hadn't thought about it enough,
not necessarily to the underlying substance.

I agree that it's not a persuasive argument that my position is correct;
it is offered only as an argument that my position is not unconsidered.
It's not really possible to give a useful response to this,
because there is no knowledge generally publicly available
on which to base a response. Effectively the statement just
stops further communication. Is that what you wanted? Or
were you meaning to do something else?

You have a fair point here. I guess, to put it another way: You have
an argument (and it is a sound one) that we ought to prefer a compiler
which gives warnings whenever it's not totally sure, rather than risking
not giving a warning. I personally prefer one which does its best to guess
right, even if that means it may be wrong in either direction. I don't
much rely on the warnings (I hardly ever see them anyway, because I'm a
habitual initializer of variables), but when I do see spurious warnings,
they annoy me a great deal.
To get back on track, two things: first, I'm talking about
warnings where "optimizations" are based on changing the expressed
intent by exercising a complete freedom of choice where undefined
behavior is involved; second, warnings about these can be given
exactly because the compiler has available _perfect knowledge_
about whether the condition in question has occurred -- it's not
any kind of heuristic like the case of uninitialized variables.

Ahh! I see. I think we were talking about two separate kinds of cases.
One is warnings about possibly-uninitialized values, where I favor gcc's
policy of trying for the most accurate warnings it can, even though this
means it sometimes omits a warning.

I would not object in the least to a warning flag that, say, requests warnings
whenever gcc optimizes a test out because it's concluded that a pointer is
dereferenced, and therefore non-null. I would even think it should probably
be on by default, because honestly, I can't think of a case where I would
intentionally write code subject to such an optimization.
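
For concreteness, a hypothetical sketch of the pattern (names invented):

/* p is dereferenced before it is tested, so gcc may conclude that
   p != NULL and silently delete the later test. */
int first_element(int *p)
{
    int v = *p;        /* dereference: implies p is non-null */
    if (p == NULL)     /* the test that gets optimized out */
        return -1;
    return v;
}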

Come to think of it, I think I'll go file that as an enhancement request with
our vendor.

(Actually, I think I can. Imagine a function which has as part of its
contract that its argument is non-null... Oh, but wait, there'd be no test
to optimize in that case. Hmm.)
Is my point a little bit clearer now?

I think so.

-s
 

Tim Rentsch

Seebs said:
I was responding to the allegation that I hadn't thought about it enough,
not necessarily to the underlying substance.

Just for the record it wasn't my intention to make such an allegation.
Perhaps a prediction that you would reach a new conclusion upon
consideration of new facts.
I agree that it's not a persuasive argument that my position is correct;
it is offered only as an argument that my position is not unconsidered.

My confusion. I never considered your position to be unconsidered;
sorry if it came across otherwise.

You have a fair point here. I guess, to put it another way: You have
an argument (and it is a sound one) that we ought to prefer a compiler
which gives warnings whenever it's not totally sure, rather than risking
not giving a warning. I personally prefer one which does its best to guess
right, even if that means it may be wrong in either direction. I don't
much rely on the warnings (I hardly ever see them anyway, because I'm a
habitual initializer of variables), but when I do see spurious warnings,
they annoy me a great deal.

I have to admit I don't like spurious warnings either. However there
are (at least) two different modes for acting on warning messages, and
I think that difference may be relevant here. One mode I might call
"lint like", where the warnings are taken as informational. The other
mode is (for lack of a better term) "-Werror like", where warnings
cause compilation to fail. I almost always use -Werror (and use lint
only rarely). Of course this means the set of warnings reported must
be at least somewhat selective, since there certainly are warning
conditions that are more about style than substance. I find it helps
to have the -Werror warnings that are reported be conservative (i.e.,
possibly false positives, but never false negatives), for two reasons.
One, if the compiler has a hard time figuring it out, people sometimes
do also, and even if I can see that a warning isn't strictly necessary
I don't want to assume all my co-workers can also (and also vice
versa). Two, like it or not, when routinely using -Werror to identify
and fail on certain warning conditions, people tend not to think so
much about those particular conditions, expecting the compiler to
catch their oversights. So it seems better to have warning conditions
be "hard" rather than "soft", with the understanding that we're
talking about using -Werror and that the set of warnings being tested
is not everything but a specifically chosen set. So at the end of the
day I still grumble to myself about spurious warnings, but I think
it's better for overall quality and productivity to suffer these
once in a while to get the benefits of having hard-edged warning
conditions.

Ahh! I see. I think we were talking about two separate kinds of cases.
One is warnings about possibly-uninitialized values, where I favor gcc's
policy of trying for the most accurate warnings it can, even though this
means it sometimes omits a warning.

Right, I was talking about a different situation, and that was
probably the most important point of what I was saying.
I would not object in the least to a warning flag that, say, requests warnings
whenever gcc optimizes a test out because it's concluded that a pointer is
dereferenced, and therefore non-null. I would even think it should probably
be on by default, because honestly, I can't think of a case where I would
intentionally write code subject to such an optimization.

I probably could construct such cases if I put my mind to it,
perhaps even "natural" ones (I'm thinking macro calls here),
but I agree, nine times out of ten it's just bad thinking.
Come to think of it, I think I'll go file that as an enhancement request with
our vendor.

(Actually, I think I can. Imagine a function which has as part of its
contract that its argument is non-null... Oh, but wait, there'd be no test
to optimize in that case. Hmm.)


I think so.

It certainly seems so. As predicted, I thought you
would see my point, once you saw my point. :)
 

Seebs

Tim Rentsch said:
Just for the record it wasn't my intention to make such an allegation.
Perhaps a prediction that you would reach a new conclusion upon
consideration of new facts.

Ahh, fair enough. I may or may not. I'm still waffling.
I have to admit I don't like spurious warnings either. However there
are (at least) two different modes for acting on warning messages, and
I think that difference may be relevant here. One mode I might call
"lint like", where the warnings are taken as informational. The other
mode is (for lack of a better term) "-Werror like", where warnings
cause compilation to fail. I almost always use -Werror (and use lint
only rarely).

In the stuff I have to keep building reliably for our build system, I use
-Werror during development, and then disable it for release. We end up
with a lot of code that has to be compiled by an unpredictable variety of
customer compilers, whereupon it becomes essentially impossible to
avoid SOME version of gcc yielding warnings.
Of course this means the set of warnings reported must
be at least somewhat selective, since there certainly are warning
conditions that are more about style than substance. I find it helps
to have the -Werror warnings that are reported be conservative (i.e.,
possibly false positives, but never false negatives), for two reasons.
One, if the compiler has a hard time figuring it out, people sometimes
do also, and even if I can see that a warning isn't strictly necessary
I don't want to assume all my co-workers can also (and also vice
versa).

For something like -Wuninitialized (or however they spell it), I am not
at all confident that there exists a way to be certain of never missing
such a case short of printing the message unconditionally.

In the case of something like the "optimize out null pointer tests when
they're irrelevant", it gets fussier. That said, Code Sourcery gave us
a really good example of a reason for which doing that silently might be
desirable.
Two, like it or not, when routinely using -Werror to identify
and fail on certain warning conditions, people tend not to think so
much about those particular conditions, expecting the compiler to
catch their oversights. So it seems better to have warning conditions
be "hard" rather than "soft", with the understanding that we're
talking about using -Werror and that the set of warnings being tested
is not everything but a specifically chosen set. So at the end of the
day I still grumble to myself about spurious warnings, but I think
it's better for overall quality and productivity to suffer these
once in a while to get the benefits of having hard-edged warning
conditions.

You know, thinking about this, it occurs to me that there's probably a key
difference between your workflow and mine.

I'm usually looking at cases where people have a large pool of open source
software to compile. They either don't want to change it, or are prohibited
by some term in some policy somewhere from changing it. They want it to
compile without hassle. They are not looking to fix the code; they're okay
with taking the risk that something will blow up, in many cases, but they
still want to compile it without failures or spurious warnings.

For an extreme example, we had people who were (for reasons I never quite
understood) committed to using an old version of openssh which relied on
calling function pointers through the wrong interfaces. On PPC, gcc smacked
this behavior down hard... So the ultimate customer requirement was:
* This MUST NOT produce a warning/diagnostic.
* It's fine if the resulting code segfaults instantly.

(Why? Because it was in a segment of code they didn't use.)
Right, I was talking about a different situation, and that was
probably the most important point of what I was saying.
Yup.
I probably could construct such cases if I put my mind to it,
perhaps even "natural" ones (I'm thinking macro calls here),
but I agree, nine times out of ten it's just bad thinking.

Macro calls and inlined functions. The latter is the case that I think
makes it particularly persuasive -- you don't want to get warnings for
that.

And I don't think the optimization occurs at a level where it can tell
you whether there are macros or inlined functions involved.
It certainly seems so. As predicted, I thought you
would see my point, once you saw my point. :)

The first rule of tautology club is the first rule of tautology club.

-s
 

Tim Rentsch

Seebs said:
For something like -Wuninitialized (or however they spell it), I am not
at all confident that there exists a way to be certain of never missing
such a case short of printing the message unconditionally.

Clearly there must be. At the very least, any code where the only
(read-)use of a variable directly follows an assignment to the
variable needn't be flagged. This example is ridiculously
trivial but it illustrates the point (at least I hope it does).
More generally, it's relatively easy to determine all code
paths that might lead to each read-use of a variable. If all
read-uses of a variable have an assignment to the variable
along all leading code paths, the variable is always initialized
before use. So surely _some_ cases can be excluded.
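
For instance (a deliberately trivial sketch):

int f(int n)
{
    int x;           /* uninitialized here... */
    x = n * 2;       /* ...but assigned on the only path... */
    return x + 1;    /* ...before this read: no warning needed */
}

int g(int n)
{
    int x;
    if (n > 0)
        x = n;       /* assigned on only one of two paths */
    return x;        /* may read an uninitialized x: warn here */
}
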
In the case of something like the "optimize out null pointer tests when
they're irrelevant", it gets fussier. That said, Code Sourcery gave us
a really good example of a reason for which doing that silently might be
desirable.

It would be interesting to hear about those.

Two, like it or not, when routinely using -Werror to identify
and fail on certain warning conditions, people tend not to think so
much about those particular conditions, expecting the compiler to
catch their oversights. So it seems better to have warning conditions
be "hard" rather than "soft", with the understanding that we're
talking about using -Werror and that the set of warnings being tested
is not everything but a specifically chosen set. So at the end of the
day I still grumble to myself about spurious warnings, but I think
it's better for overall quality and productivity to suffer these
once in a while to get the benefits of having hard-edged warning
conditions.

You know, thinking about this, it occurs to me that there's probably a key
difference between your workflow and mine. [snip elaboration]

Yes, that probably accounts for our different perspectives.

The first rule of tautology club is the first rule of tautology club.

That's a great cartoon! I should add, though, that my statement
wasn't quite a tautology, its intended meaning resting on two
slightly different interpretations of the phrase "see [or saw] my
point". (I hoped the smiley would be enough of a hint to
express that, but I guess not...)
 

Seebs

Tim Rentsch said:
Clearly there must be. At the very least, any code where the only
(read-)use of a variable directly follows an assignment to the
variable needn't be flagged. This example is ridiculously
trivial but it illustrates the point (at least I hope it does).

You have a very good point.
It would be interesting to hear about those.

Basically, macros and inline functions.

inline int fooplusone(int *p)
{
    if (p)
        return *p + 1;
    else
        return 0;
}

void foo(void)
{
    int i = 0;
    int *ip = &i;
    *ip = 3;
    (void) fooplusone(ip);
}

Obviously, the compiler can optimize out the test.
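
Illustratively, what the optimizer effectively sees after inlining is
something like:

void foo_inlined_view(void)
{
    int i = 0;
    int *ip = &i;
    *ip = 3;
    if (ip) {          /* ip == &i can never be null */
        (void)(*ip + 1);
    }
}

and silently removing that always-true test is exactly what you want.
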
You know, thinking about this, it occurs to me that there's probably a key
difference between your workflow and mine. [snip elaboration]
Yes, that probably accounts for our different perspectives.

And in fact, thinking about it more... I think that, when I'm developing
the bulk of the code, I tend more towards your side of things; I want tons
and tons of warnings and errors. It's just when I am dealing with customers
who just want it to build already that I start wanting more flexibility.
That's a great cartoon! I should add, though, that my statement
wasn't quite a tautology, its intended meaning resting on two
slightly different interpretations of the phrase "see [or saw] my
point". (I hoped the smiley would be enough of a hint to
express that, but I guess not...)

Oh, it was, I was just commenting on phrasing.

-s
 

Tim Rentsch

Seebs said:
You know, thinking about this, it occurs to me that there's probably a key
difference between your workflow and mine. [snip elaboration]
Yes, that probably accounts for our different perspectives.

And in fact, thinking about it more... I think that, when I'm developing
the bulk of the code, I tend more towards your side of things; I want tons
and tons of warnings and errors. It's just when I am dealing with customers
who just want it to build already that I start wanting more flexibility.

You might want to consider taking the warning flags (or some of them
anyway) out of the makefile distributed for customer builds.

I have my own gripe about this. I used to use -Wtraditional in
gcc. At the time it complained of constructs that are present in
both K&R C and ANSI/ISO C but have different semantics. At some
point it changed so it (also?) complained about constructs that
are in ANSI/ISO C but _are not_ in K&R C; the most obvious
example is function prototypes. In other words it changed from
being a pretty useful set of warning conditions to a totally
useless torrent of complaints about every function prototype.
Grrr.... And that's not the only time I've been bitten by gcc
deciding to change what a particular warning flag means.
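
A sketch from memory of the kind of construct the old -Wtraditional
usefully flagged -- valid in both dialects, but with different results
(assuming int is wider than short, as is typical):

#include <stdio.h>

int main(void)
{
    unsigned short us = 1;
    int i = -1;

    /* K&R ("unsigned preserving"): us promotes to unsigned int, i is
       converted to a huge unsigned value, and i < us is false.
       ANSI/ISO ("value preserving"): us promotes to int, the comparison
       is signed, and i < us is true. Same code, different answers. */
    printf("%d\n", i < us);
    return 0;
}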
 

Seebs

Tim Rentsch said:
You might want to consider taking the warning flags (or some of them
anyway) out of the makefile distributed for customer builds.

We have a couple of things like this. There are packages that use -Werror
until we're ready to ship, for instance.

Incidentally, I went through pseudo today with -W -Wextra and cleaned
up the warnings. I'm now using __attribute__((unused)) in a few places,
but I figure that's easy to make go away in the unlikely event that this
ever has cause to be compiled with something that doesn't support it.

(It's still sorta weird to me to be working on a program that is essentially
by definition nonportable.)

-s
 

Nick

Seebs said:
Incidentally, I went through pseudo today with -W -Wextra and cleaned
up the warnings. I'm now using __attribute__((unused)) in a few places,
but I figure that's easy to make go away in the unlikely event that this
ever has cause to be compiled with something that doesn't support it.

I have

#ifdef __GNUC__
#define NEVER_RETURNS __attribute__ ((noreturn))
#define PERHAPS_UNUSED __attribute__((unused))
#else
#define NEVER_RETURNS
#define PERHAPS_UNUSED
#endif

in a "machine specific stuff" header file.
 

Seebs

Nick said:
I have

#ifdef __GNUC__
#define NEVER_RETURNS __attribute__ ((noreturn))
#define PERHAPS_UNUSED __attribute__((unused))
#else
#define NEVER_RETURNS
#define PERHAPS_UNUSED
#endif

in a "machine specific stuff" header file.

Mind if I steal this?

-s
 

Keith Thompson

Nick said:
I have

#ifdef __GNUC__
#define NEVER_RETURNS __attribute__ ((noreturn))
#define PERHAPS_UNUSED __attribute__((unused))
#else
#define NEVER_RETURNS
#define PERHAPS_UNUSED
#endif

in a "machine specific stuff" header file.

You could just do

#ifndef __GNUC__
#define __attribute__(x)
#endif

I think the syntax is specifically designed to allow this. On the
other hand, it's not as flexible if another compiler supports a
different set of __attribute__'s and/or another mechanism for
specifying the same properties.
 

Richard Bos

Nick said:
I have

#ifdef __GNUC__
#define NEVER_RETURNS __attribute__ ((noreturn))
#define PERHAPS_UNUSED __attribute__((unused))
#else
#define NEVER_RETURNS
#define PERHAPS_UNUSED
#endif

in a "machine specific stuff" header file.

Why not simply

#ifndef __GNUC__
#define __attribute__(dummy)
#endif

Richard
 

Nick

Keith Thompson said:
You could just do

#ifndef __GNUC__
#define __attribute__(x)
#endif

I think the syntax is specifically designed to allow this. On the
other hand, it's not as flexible if another compiler supports a
different set of __attribute__'s and/or another mechanism for
specifying the same properties.

You certainly could, and I never thought of that.

I dislike multiple underscores and leading underscores in my code,
though. Confining the stuff that lives in the implementation namespace
to as small a space as possible feels, in some way I find hard to pin
down, like the "better" way of doing it. So whenever I use something
non-standard, I tend to vector it through my own names as rapidly as
possible.

This file is therefore also the one that assigns one of stricmp and
strcasecmp to caseless_strcmp (and provides the space to write your own
if you have neither).
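
Something along these lines (a sketch; the feature tests and the exact
MSVC spelling are from memory):

#if defined(_WIN32)
#include <string.h>                  /* _stricmp is declared here */
#define caseless_strcmp _stricmp
#elif defined(__unix__) || defined(__APPLE__)
#include <strings.h>                 /* strcasecmp is declared here */
#define caseless_strcmp strcasecmp
#else
int caseless_strcmp(const char *a, const char *b);  /* provide your own */
#endif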
 

Nick

Richard Bos said:
Why not simply

#ifndef __GNUC__
#define __attribute__(dummy)
#endif

Well, one reason is that someone reading my code who isn't familiar with
GNU extensions will wonder what it's all about. I, perhaps wrongly, feel
my names make it much more obvious what they mean.
 

Phil Carmody

Richard Bos said:
Why not simply

#ifndef __GNUC__
#define __attribute__(dummy)
#endif

Because you've just invaded every other implementation's
reserved (tautologically always and for every use) namespace?

Phil
 
