Making Fatal Hidden Assumptions


Dik T. Winter

> CBFalconer wrote: ....
>
> Not guaranteed in what way? You are not guaranteed that p will be a
> valid pointer, but you don't require it to be a valid pointer - all that
> is required is that "p = s - 1" followed by "p++" leaves p equal to s.

But the standard allows "p = s - 1" to trap when an invalid pointer is
generated. And this can indeed be the case on segmented architectures.
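
For concreteness, a minimal sketch of the construct under discussion (illustrative only; the names p and s follow the quoted exchange):

#include <stdio.h>

int main(void)
{
    char s[] = "example";
    char *p;

    p = s - 1;   /* undefined behaviour: merely forming a pointer before the
                    start of the array may trap, e.g. on segmented targets */
    p++;         /* even without a trap, the standard does not promise p == s */

    printf("%d\n", p == s);   /* commonly prints 1, but nothing guarantees it */
    return 0;
}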
 

Andrew Reilly

> Consider the alternative.
>
> #define LEN 100
> #define INC 5000
> int arr[LEN];
> int *ptr = arr;
> /* A */
> ptr += 2*INC;
> ptr -= INC;
> ptr -= INC;
> /* B */
> ptr -= INC;
> ptr -= INC;
> ptr += 2*INC;
> /* C */
>
> What you're suggesting, I think, is that ptr==arr should be true at points
> A, B, and C, for any(?) values of LEN and INC. It happens to work out
> that way sometimes (at least in the test program I just tried), but I can
> easily imagine a system where guaranteeing this would place an undue
> burden on the compiler.

That *is* what I'm suggesting. In fact, I'm suggesting that p += a; p -=
a; should leave p as it was originally for any int a and pointer p. To my
mind, and I've been using C for more than 20 years, that is the very
essence of the nature of C. It's what makes pointer-as-cursor algorithms
make sense. Throw it away, and you might as well restrict yourself to
coding p[a], and then you've got Fortran, Pascal or Java.
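
Concretely, the sort of pointer-as-cursor code in question might look like the following hedged sketch (hypothetical, not code from this thread). Near the end of the buffer the intermediate value p + 3 can point more than one element past the end, which is exactly what the standard leaves undefined and what this argument is about:

#include <stddef.h>

static size_t count_matches(const char *buf, size_t len, char key)
{
    const char *p = buf;             /* the cursor */
    size_t hits = 0;

    while (p != buf + len) {         /* net stride is 1, so this terminates */
        if (*p == key)
            hits++;
        p += 3;                      /* overshoot ...                       */
        p -= 2;                      /* ... then settle one element forward */
    }
    return hits;
}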

The fact that hardware can be imagined (or even built) that doesn't match
the conventional processor model that C most naturally fits *shouldn't* be
an argument to dilute or mess around with the C spec. Just use a
different language on those processors, or put up with some inefficiency
or compiler switches. Pascal has always been a pretty nice fit for many
hardware-pointer-checked machines. Such hardware isn't even a good
argument in this case though, since the obvious implementation will
involve base+offset compound pointers anyway, and mucking around with the
offset (as an integer) should neither trap nor cause a performance issue.

I've coded for years on Motorola 56000-series DSPs, and they don't look
anything like the conventional processor that C knows about: you've got
two separate data memory spaces and a third for program memory, pointers
aren't integers, words are 24 bits long and that's the smallest
addressable unit, and so on. Nevertheless,
there have been at least two C compilers for the thing, and they've both
produced *awful* code, and that's OK: they were never used for
performance-critical code. That was always done in assembler. There are
lots of processors (particularly DSPs) that are worse. I know of one that
doesn't have pointers as such at all. That's OK too. There isn't a C
compiler for that.

C is useful, though, and there's a lot of code written in it, so it's no
surprise that most of the more recent DSP designs actually do fit nicely
into the C conventional machine model. And (p + n) - n works in the
obvious fashion for those, too.

Cheers,
 

Dik T. Winter

> It's precisely this sort of tomfoolery on the part of the C standards
> committee that has brought the language into such ill-repute in recent
> years. It's practically unworkable now, compared to how it was in (say)
> the immediately post-ANSI-fication years.

I do not understand this. The very same restriction on pointer arithmetic
was already in the very first ANSI C standard.
 

Dik T. Winter

> OK, I've made enough of a fool of myself already. I'll go and have that
> second cup of coffee for the morning, before I start going on about having
> the standard support non-2's complement integers, or machines that have no
> arithmetic right shifts...

In the time of the first standard, the Cray-1 was still quite important,
and it had no arithmetic right shift. When K&R designed C there were a
large number of machines that did not use 2's complement integers.
 

Rod Pemberton

Gerry Quinn said:
> About as good a reason as the term niggardly, as far as I can tell.
> Perhaps the words are appropriate in a post relating to fatal
> assumptions.

I didn't know what he meant either. Not being racist (at least I hope not),
I went GIYF'ing. I think it might be a reference to some Dungeons and
Dragons persona or something. Unfortunately, he'd need to clarify...


Rod Pemberton
 

Andrew Reilly

Andrew Reilly wrote On 03/07/06 16:41:

> I've been told that IBM AS/400 bollixes the bogus
> arithmetic, at least under some circumstances. A friend told of fixing
> code that did something like
>
> if (buffer_pos + amount_needed > buffer_limit) {
> ... enlarge the buffer ...
> }
> memcpy (buffer_pos, from_somewhere, amount_needed);
> buffer_pos += amount_needed;
>
> This looks innocuous to devotees of flat address spaces (Flat-Earthers?),
> but it didn't work on AS/400. If the sum `buffer_pos + amount_needed'
> went past the end of the buffer, the result was some kind of NaP ("not a
> pointer") and the comparison didn't kick in. Result: the code never
> discovered that the buffer needed enlarging, and merrily tried to memcpy()
> into a too-small area ...
>
> I have no personal experience of the AS/400, and I may
> have misremembered some of what my friend related. Would anybody with
> AS/400 knowledge care to comment?

I haven't used an AS/400 myself, either, but this is almost certainly the
sort of perfectly reasonable code that the standard has arranged to be
undefined, precisely so that it can be said that there's a C compiler for
that system.

Given the hardware behaviour, it would have been vastly preferable for the
compiler to handle pointers as base+offset pairs, so that the specialness
of the hardware pointers didn't interfere with the logic of the program.

Since most coding for AS/400s was (is it still?) done in COBOL and PL/1,
both of which are perfectly suited to the hardware's two-dimensional
memory, any performance degradation would hardly have been noticed. (And
since AS/400s are actually Power processors with a JIT over the top now,
there would likely not be a performance problem from doing it "right"
anyway.) But no, your friend had to go and modify good code, and risk
introducing bugs in the process.
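
For what it's worth, the check in the quoted code can be expressed so that no out-of-range pointer is ever formed, by comparing the space remaining rather than a pointer sum. A hedged sketch follows; the names mirror the quoted fragment, the append() wrapper and its NULL return are hypothetical, and buffer_limit is assumed to point one past the end of the current allocation:

#include <stddef.h>
#include <string.h>

static char *append(char *buffer_pos, char *buffer_limit,
                    const void *from_somewhere, size_t amount_needed)
{
    if ((size_t)(buffer_limit - buffer_pos) < amount_needed) {
        /* ... enlarge the buffer here, as in the quoted code ... */
        return NULL;   /* placeholder: real code would grow the buffer and retry */
    }
    memcpy(buffer_pos, from_somewhere, amount_needed);
    return buffer_pos + amount_needed;   /* new buffer_pos */
}

Written this way, the test never computes buffer_pos + amount_needed, so there is nothing to wrap or to turn into a "not a pointer" value.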
 

David Holland

> [...] but I'm sincerely curious whether anyone knows of an *actual*
> environment where p == s will ever be false after (p = s-1; p++).

The problem is that evaluating s-1 might cause an underflow and a
trap, and then you won't even reach the comparison. You don't
necessarily have to dereference an invalid pointer to get a trap.

You might hit this behavior on any segmented architecture (e.g.,
80286, or 80386+ with segments on) and you are virtually guaranteed to
hit it on any architecture with fine-grained segmentation. comp.std.c
periodically reminisces about the old Burroughs architecture, and
it's always possible something like it might come back sometime.

You will also see this behavior in any worthwhile bounds-checking
implementation.
> Many of the discussions in comp.lang.c seem like they'd be better
> in a new newsgroup:
> comp.lang.i'd_rather_be_a_lawyer
>
> :) :)

Yes, well, that's what comp.lang.c is about...
 

Keith Thompson

Andrew Reilly said:
>> Consider the alternative.
>>
>> #define LEN 100
>> #define INC 5000
>> int arr[LEN];
>> int *ptr = arr;
>> /* A */
>> ptr += 2*INC;
>> ptr -= INC;
>> ptr -= INC;
>> /* B */
>> ptr -= INC;
>> ptr -= INC;
>> ptr += 2*INC;
>> /* C */
>>
>> What you're suggesting, I think, is that ptr==arr should be true at points
>> A, B, and C, for any(?) values of LEN and INC. It happens to work out
>> that way sometimes (at least in the test program I just tried), but I can
>> easily imagine a system where guaranteeing this would place an undue
>> burden on the compiler.
>
> That *is* what I'm suggesting. In fact, I'm suggesting that p += a; p -=
> a; should leave p as it was originally for any int a and pointer p. To my
> mind, and I've been using C for more than 20 years, that is the very
> essence of the nature of C.
[snip]

There's (at least) one more property I forgot to mention. Given:

#define LEN 100
#define INC 5000 /* modify both of these as you like */
int arr[LEN];
int *ptr1 = arr;
int *ptr2 = ptr1 + INC;
/* D */

would you also require that, at point D, ptr2 > ptr1? (If pointer
arithmetic wraps around, this might not be the case even if adding and
subtracting as above always gets you back to the original address.)

And you think that having the standard guarantee this behavior is
worth the cost of making it much more difficult to implement C on
systems where the underlying machine addresses don't meet this
property, yes?

If so, that's a consistent point of view, but I disagree with it.

I'll also mention that none of this stuff has changed significantly
between C90 (the 1990 ISO C standard, equivalent to the original ANSI
standard of 1989) and C99 (the 1999 ISO C standard).

In fact, I just checked my copy of K&R1 (published in 1978). I can't
copy-and-paste from dead trees, so there may be some typos in the
following. This is from Appendix A, the C Reference Manual, section
7.4, Additive operators:

A pointer to an object in an array and a value of any integral
type may be added. [...] The result is a pointer of the same
type as the original pointer, and which points to another object
in the same array, appropriately offset from the original object.

[...]

[... likewise for subtracting an integer from a pointer ...]

If two pointers to objects of the same type are subtracted, the
result is converted [...] to an int representing the number of
objects separating the pointed-to objects. This conversion will
in general give unexpected results unless the pointers point to
objects in the same array, since pointers, even to objects of the
same type, do not necessarily differ by a multiple of the
object-length.

The last quoted paragraph isn't quite as strong as what the current
standard says, since it bases the undefinedness of pointer subtraction
beyond the bounds of an object on alignment, but it covers the same
idea.

The C Reference Manual from May 1975,
<http://cm.bell-labs.com/cm/cs/who/dmr/cman.pdf>, has the same wording
about pointer subtraction, but not about pointer+integer addition.

So if you think that the requirements you advocate are "the very
essence of the nature of C", I'm afraid you're at least 28 years too
late to do anything about it.
 

Andrew Reilly

> It's not always equivalent. The trouble starts with
>
> char a[8];
> char *p;
>
> for ( p = a+1 ; p < a+8 ; p += 2 ) {}
>
> intending that the loop terminates on p == a+9 (since it skips a+8). But
> how do we know that a+9 > a+8 ? If the array is right at the top of some
> kind of segment, the arithmetic might have wrapped round.

a+9 > a+8 because a + 9 - (a + 8) == 1, which is > 0. Doesn't matter if
the signed or unsigned pointer value wrapped around in an intermediate
term. On many machines that's how the comparison is done anyway. You're
suggesting that having the compiler ensure that a+8 doesn't wrap around
wrt a is OK, but a+9 is too hard. I don't buy it.

> To support predictable pointer comparisons out of range, the compiler
> would have to allocate space with a safe buffer zone. Complications are
> setting in.

Only if you put them there. (The real problem is objects larger than half
the address space, where a valid pointer difference computation produces a
ptrdiff value that is out of range for a signed integer.)

> Ints have the nice property that 0 is in the middle and we know how much
> headroom we've got either side. So it's easy for the compiler to make
> the int version work (leaving it to the programmer to take
> responsibility for avoiding overflow, which is no big deal).

Unsigned ints have the nice property that (a + 1) - 1 == a for all a, even
if a + 1 == 0. Overflow is generally no big deal in any case. (Other
than the object larger than half the address space issue.)

> Pointers don't have that property. The compiler can't take sole
> responsibility for avoiding overflow irrespective of what the programmer
> does. If the programmer wants to go out of range and is at the same
> time responsible for avoiding overflow, then he has to start worrying
> about whereabouts his object is and what headroom he's got.

The compiler can't necessarily avoid overflow, but it *can* arrange for
pointer comparisons to work properly.

> Can't see how assembly programmers avoid the same kind of issue. I can
> see how they could ignore it. The code will work most of the time.

Seems like it will work at least as well as the usual unit-stride
algorithm and idiom.
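
If the goal is simply the strided scan from the quoted example without ever forming a pointer past one-past-the-end, the usual dodge is to count in integers and derive the pointer from the index. A hedged sketch with hypothetical names, not code from the thread:

#include <stddef.h>

static void visit_odd_elements(char *a, size_t n)
{
    size_t i;

    for (i = 1; i < n; i += 2) {
        char *p = a + i;    /* i < n is checked first, so p never leaves the array */
        /* ... use *p ... */
        (void)p;            /* placeholder for the real work */
    }
}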
 

Robin Haigh

Andrew Reilly said:
>> K&R answers your question. If pa points to some element of an array,
>> then pa-1 points to the /previous element/. But what's the "previous
>> element" relative to the first element in the array? It doesn't exist. So
>> we have undefined behavior.
>
> Only because the standard says so. Didn't have to be that way. There are
> plenty of logically correct algorithms that could exist that involve
> pointers that point somewhere outside of a[0..N]. As long as there's no
> de-referencing, no harm, no foul. (Consider the simple case of iterating
> through an array at a non-unit stride, using the normal p < s + N
> termination condition. The loop finishes with p > s + N and the standard
> says "pow, you're dead"

and if the arithmetic happens to wrap round after s + N, you really are dead
too.

It doesn't have to be about weird architectures and traps. No
implementation can provide an unlimited range for pointer arithmetic without
some kind of overflow behaviour, such as a wrap round. Granted a wrap-round
needn't affect addition and subtraction, but it will affect comparisons.

Every allocated object comes with a limited range for pointer comparisons to
satisfy p-1<p<p+1. Not because the standard says so, but because the
implementation can't avoid it.

The kind of code you're talking about tends to make careless assumptions
about the valid range with no justification at all. Just because we've all
been doing it for years (I don't mind pleading guilty) doesn't make it
right. Such code is broken, and it doesn't need a standard to say so.

Fortunately some people have learnt a bit since the good old days, and they
work to higher standards now.
 

John Temples

> I haven't used an AS/400 myself, either, but this is almost certainly the
> sort of perfectly reasonable code that the standard has arranged to be
> undefined, precisely so that it can be said that there's a C compiler for
> that system.

There are lots of embedded systems with 8- and 16-bit pointers. With
the right value of buffer_pos, it wouldn't take a very large value of
amount_needed for that addition to wrap and give you an incorrect
comparison.
 

Keith Thompson

Andrew Reilly said:
>> It's not always equivalent. The trouble starts with
>>
>> char a[8];
>> char *p;
>>
>> for ( p = a+1 ; p < a+8 ; p += 2 ) {}
>>
>> intending that the loop terminates on p == a+9 (since it skips a+8). But
>> how do we know that a+9 > a+8 ? If the array is right at the top of some
>> kind of segment, the arithmetic might have wrapped round.
>
> a+9 > a+8 because a + 9 - (a + 8) == 1, which is > 0. Doesn't matter if
> the signed or unsigned pointer value wrapped around in an intermediate
> term. On many machines that's how the comparison is done anyway. You're
> suggesting that having the compiler ensure that a+8 doesn't wrap around
> wrt a is OK, but a+9 is too hard. I don't buy it.

How would you guarantee that a+(i+1) > a+i for all arbitrary values of
i? It's easy enough to do this when the addition doesn't go beyond
the end of the array (plus the case where it points just past the end
of the array), but if you want to support arbitrary arithmetic beyond
the array bounds, it's going to take some extra work, all for the sake
of guaranteeing properties that have *never* been guaranteed by the C
language. (Don't confuse what happens to have always worked for you
with what's actually guaranteed by the language itself.)

[...]
> Unsigned ints have the nice property that (a + 1) - 1 == a for all a, even
> if a + 1 == 0. Overflow is generally no big deal in any case. (Other
> than the object larger than half the address space issue.)

But unsigned ints *don't* have the property that a + 1 > a for all a.
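
For concreteness, a minimal example of the case Keith is pointing at; this behaviour is guaranteed by the standard (unsigned arithmetic is modular), not an implementation accident:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    unsigned int a = UINT_MAX;

    /* unsigned arithmetic wraps modulo UINT_MAX + 1, so a + 1 == 0 here
       and the comparison is false: this prints 0 */
    printf("%d\n", a + 1 > a);
    return 0;
}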
 

CBFalconer

Randy said:
> Andrew Reilly wrote
>
> This is a lot of whining about a specific problem that can
> easily be remedied just by changing the loop construction. The
> whole debate is pretty pointless in that context, unless you
> have some religious reason to insist upon the method in the
> original.

Er, didn't I point that fix out in the original article? That was
the only error in the original sample code, all other problems can
be tied to assumptions, which may be valid on any given piece of
machinery. The point is to avoid making such assumptions, which
requires recognizing their existence in the first place.

--
"If you want to post a followup via groups.google.com, don't use
the broken "Reply" link at the bottom of the article. Click on
"show options" at the top of the article, then click on the
"Reply" at the bottom of the article headers." - Keith Thompson
More details at: <http://cfaj.freeshell.org/google/>
Also see <http://www.safalra.com/special/googlegroupsreply/>
 

msg

> OK, I've made enough of a fool of myself already. I'll go and have that
> second cup of coffee for the morning, before I start going on about having
> the standard support non-2's complement integers, or machines that have no
> arithmetic right shifts...

I get queasy reading the rants against 1's complement architectures; I
wish Seymour Cray were still around to address this.

Michael Grigoni
Cybertheque Museum
 

Michael Mair

Andrew said:
>> K&R answers your question. If pa points to some element of an array,
>> then pa-1 points to the /previous element/. But what's the "previous
>> element" relative to the first element in the array? It doesn't exist. So
>> we have undefined behavior.
>
> Only because the standard says so. Didn't have to be that way. There are
> plenty of logically correct algorithms that could exist that involve
> pointers that point somewhere outside of a[0..N]. As long as there's no
> de-referencing, no harm, no foul. (Consider the simple case of iterating
> through an array at a non-unit stride, using the normal p < s + N
> termination condition. The loop finishes with p > s + N and the standard
> says "pow, you're dead", when the semantically identical code written with
> integer indexes has no legal problems.

I have encountered situations where
free(p);
....
if (p == q)
leads to the platform's equivalent of the much beloved
"segmentation fault". Your theory means that this should
have worked. Assigning NULL or a valid address to p after
freeing avoids the error.
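
A hedged sketch of the pattern being described and the usual defensive fix (hypothetical names; whether the comparison actually faults is platform-specific, as noted above):

#include <stdlib.h>

static void demo(void)
{
    char *p = malloc(16);
    char *q = malloc(16);   /* error checking omitted for brevity */

    free(p);
    /* p is now indeterminate: even just evaluating it in (p == q) is
       undefined behaviour, and on some platforms it can fault */

    p = NULL;               /* the defensive fix: the comparison is now well defined */
    if (p == q) {
        /* not taken while q is a live, non-null pointer */
    }

    free(q);
}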

<OT>Incidentally, in gnu.gcc.help there is a discussion about
much the same situation in C++ where someone gets in trouble
for delete a; .... if (a == b) ...
Happens only for multiple inheritance and only for gcc.
Thread starts at <Eg0Pf.110735$sa3.34921@pd7tw1no>
</OT>

[snip!]

Cheers
Michael
 

msg

> Since most coding for AS/400s was (is it still?) done in COBOL and PL/1

Most coding was in flavors of RPG.

> since AS/400s are actually Power processors with a JIT over the top now

There is a large installed base of CISC (non-PowerPC) AS/400s; they use
a flat memory model reminiscent of the Multics design.

Michael Grigoni
Cybertheque Museum
 

Andrew Reilly

> But unsigned ints *don't* have the property that a + 1 > a for all a.

My last comment on the thread, hopefully:

No, they don't, but when you're doing operations on pointer derivations
that are all in some sense "within the same object", even if hanging
outside it, (i.e., by dint of being created by adding integers to a single
initial pointer), then the loop termination condition is, in a very real
sense, a ptrdiff_t, and *should* be computed that way. The difference can
be both positive and negative.

The unsigned comparison a + n > a fails for some values of a, but the
ptrdiff_t (signed) comparison a + n - a > 0 is indeed true for all a and
n > 0, so that's what should be used. And it *is* what is used on most
processors that do comparison by subtraction (even if that's wrapped in a
non-destructive cmp).
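
As a sketch of the difference-based test being advocated (hypothetical function; note that once the cursor has stepped past the end of the array, the standard defines neither this form nor the direct pointer comparison, which is precisely the disagreement):

#include <stddef.h>

static long sum_strided(const int *a, size_t n, ptrdiff_t stride)
{
    const int *p = a;
    long total = 0;

    /* termination decided from the signed difference, "a + k - a > 0" style,
       rather than from an ordered comparison of possibly wrapped addresses
       (stride is assumed to be positive) */
    while (p - a < (ptrdiff_t)n) {
        total += *p;
        p += stride;    /* may step past a + n on the final pass */
    }
    return total;
}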

I actually agree completely with the piece of K&R that you posted a few
posts ago, where it was pointed out that pointer arithmetic only makes
sense for pointers within the same object (array). Since C doesn't tell
you that the pointer that your function has been given isn't somewhere in
the middle of a real array, my aesthetic sense is that conceptually,
arrays (as pointers, within functions) extend infinitely (or at least to
the range of int) in *both* directions, as far as pointer arithmetic
within a function is concerned. Actually accessing values outside of the
bounds of the real array that has been allocated somewhere obviously
contravenes the "same object" doctrine, and it's up to the logic of the
caller and the callee to avoid that.

Now it has been amply explained that my conception of how pointer
arithmetic ought to work is not the way the standard describes, even
though it *is* the way I have experienced it in all of the C
implementations that it has obviously been my good fortune to encounter.
I consider that to be a pity, and obviously some of my code wouldn't
survive a translation to a Burroughs or AS/400 machine (or perhaps even to
some operating environments on 80286es). Oh, well. I can live with that.
It's not really the sort of code that I'd expect to find there, and I
don't expect to encounter such constraints in the future, but I *will* be
more careful, and will keep my eyes more open.

Thanks to all,
 

Ben Bacarisse

> Why is everybody concentrating on the one real error in the example code,
> and not on the hidden assumptions? Errors are correctable, assumptions
> need to be recognized.

Well I, for one, commented on the hidden assumption that must be made for
what you call "the one real error" to actually be an error -- but it was
not recognised! ;-)

[At the top of your original post you did not, in fact, claim this was an
error, but you call it a "real error" later on.]

I feel that your points would have been better made using other examples.
The context of the code made me read the C as little more than pseudo-code
with the added advantage that a C compiler might, with a following wind,
produce something like the assembler version (which in turn has its own
assumptions but you were not talking about that).

I found Eric Sosman's "if (buffer + space_required > buffer_end) ..."
example more convincing, because I have seen that in programs that are
intended to be portable -- I am pretty sure I have written such things
myself in my younger days. Have you other more general examples of
dangerous assumptions that can sneak into code? A list of the "top 10
things you might be assuming" would be very interesting.
 

Randy Howard

CBFalconer wrote (in article ...):
> Er, didn't I point that fix out in the original article?

Yes, which is precisely why I'm surprised at the ensuing debate
over the original version, as my comments should reflect.
 

Randy Howard

Rod Pemberton wrote (in article ...):
> I didn't know what he meant either. Not being racist (at least I hope not),
> I went GIYF'ing. I think it might be a reference to some Dungeons and
> Dragons persona or something. Unfortunately, he'd need to clarify...

Oh come on. Doesn't anyone own a dictionary anymore, or have a
vocabulary which isn't found solely on digg, slashdot or MTV?

blackguard:
A thoroughly unprincipled person; a scoundrel.
A foul-mouthed person.


Does everything have to become a racism experiment?
 
