Dik T. Winter said:
> Morris Keesan wrote: .... ....
> Using a relative comparison operator on pointers that do not point into
> the same object is UB.
Read again. There is no comparison of pointers.
> Morris Keesan wrote: .... ....
> Using a relative comparison operator on pointers that do not point into
> the same object is UB.
[...]Kenneth Brody said:I would have to agree that there is no UB. However, I would also have
to say that the result of comparing aptr to bptr is "meaningless".
First, you are using the signed "intptr_t", meaning that b could be in
"higher" memory, yet bptr be "lower" because aptr is positive and bptr
is negative.
Ignoring that, and assuming you changed to uintptr_t,
> Imagine a system where addresses are treated as signed. You could
> even have an object covering a range of addresses from, say, -10
> to +10. (A null pointer would have to have a representation other
> than all-bits-zero.)
Isn't the ARM a machine where some addresses were thought to be negative?
Nobody said:At the CPU level, data is neither signed nor unsigned. It's typically the
operations which treat their operands as signed or unsigned.
With two's complement, addition, subtraction and multiplication (but not
division) behave identically for signed or unsigned values. The main
difference is in comparisons.
A signed comparison subtracts two values then checks whether the overflow
flag is set, while an unsigned comparison would check the carry flag
instead.
Apart from division, the only common instruction which has signed and
unsigned variants is a right shift. An arithmetic (signed) right shift
duplicates the topmost bit (i.e. the sign bit) while a logical (unsigned)
shift fills with zeros.
Keith Thompson said:Imagine a system where addresses are treated as signed. You could
even have an object covering a range of addresses from, say, -10
to +10. (A null pointer would have to have a representation other
than all-bits-zero.)
Nobody said:At the CPU level, data is neither signed nor unsigned. It's typically the
operations which treat their operands as signed or unsigned.
With two's complement, addition, subtraction and multiplication (but not
division) behave identically for signed or unsigned values.
The main difference is in comparisons.
A signed comparison subtracts two values then checks whether the overflow
flag is set, while an unsigned comparison would check the carry flag
instead.
Apart from division, the only common instruction which has signed and
unsigned variants is a right shift.
Eric Sosman said:Long ago I used a machine that treated all its CPU registers
as signed magnitude numbers, and did arithmetic accordingly.
Addresses were notionally unsigned; the machine just grabbed the
right number of low-order bits from the appropriate register and
ignored the rest, including the sign bit.
The fun part was that "all CPU registers" included the program
counter, and that "increment" meant "add one." I wasted a fair
amount of time trying to concoct a sequence of instructions that
would execute normally until encountering one that set the PC's sign
bit, then run again in reverse as the PC "incremented" to successively
lower addresses ...
the output might be
":red-segment: :blue-segment: :beige-segment:"
you're comparing pointers to different objects which is UB.
I like the idea that my mind can exhibit undefined behaviour...
Have I reformatted my hard drive just by thinking about this stuff?
hmm. well that's unspecified behaviour. Though we know aptr
and bptr will end up with valid integers.
interesting. The compiler would have to remember that they had been
pointers
Ok, but the issue is addresses.
Suppose a machine has, say, an auto-increment addressing mode (an
idea that goes back at least to the PDP-11), which is useful for
stepping through arrays. Thus something like:
*ptr++ = 0;
might be a single instruction. Assuming for concreteness and
simplicity that addresses are 16 bits, what happens on the machine
level when ptr==0x7FFF? What happens when ptr==0xFFFF?
Can a
single object cover a range of addresses that includes 0x7FFF and
0x8000? What about 0xFFFF and 0x0000 (or, equivalently, -1 and 0)?
What instructions are used to compare addresses?
Full- (or double-, depending on your PoV) width multiplies are different
too. 0xFF * 0xFF = 0x0001 treated as signed (-1 * -1) or 0xFE01 treated
as unsigned (255 * 255).
And multiply.
At the CPU level, data is neither signed nor unsigned. It's typically the
operations which treat their operands as signed or unsigned.
[...]Nobody said:I can't think of a situation where the CPU considers addresses as either
"signed" or "unsigned"; they are just "words".
My PoV is "double-".
In C, int * int -> int, long * long -> long, and so on.
Once the types have been promoted, it makes no difference as to their
signedness. OTOH, the promotion is affected by the signedness.
True for x86's double-width multiply, but how many architectures have that
feature?
[...]Nobody said:I can't think of a situation where the CPU considers addresses as either
"signed" or "unsigned"; they are just "words".
Assuming, as before, 16-bit addresses, if a single 32-byte object
can cover the range of addresses from 0x7FF0 to 0x800F, then
addresses are being treated as unsigned. Similarly, if a single
32-byte object can cover the range of addresses from -16 to +15,
then addresses are being treated as signed
(and a null pointer is not all-bits-zero).
If both are possible then it's a rather odd architecture.
Mark McIntyre said:Firstly, it's my understanding that n1256 is the final draft, not the
edited final version.
Richard said:Yes,
... but for ordinary programmers, the differences between the two are
so small that they might as well not exist. However, it may be relevant
for legal reasons. Someone may be willing to pay money just so their
lawyers can say that they have a copy of the _official_ Standard.
James Kuyper said:No. They started editing from the final officially approved C99
standard, applying all three officially approved Technical Corrigenda.
To get the official standard, you need not only the C99 standard
itself, but also all three officially approved Technical Corrigenda;
n1256.pdf is less official than that set of four documents, but is a
lot more convenient for actual use (and much cheaper, too).
Ben said:x86-64 treats addresses as signed numbers. Usually, user
processes occupy positive addresses and the kernel occupies
negative addresses. I don't think that objects are allowed to
cross 0.
Richard said:GCC has a feature that tracks whether it's possible for a pointer to be
null; if you dereference a pointer, GCC then sets the "notnull"
attribute on it and any future checks for a null pointer are optimized
away.
[...]
I assume that this optimization is to remove redundant tests/branches
and therefore improve performance; presumably it wouldn't be there if it
didn't help in at least some cases.
As I've said before, I wish it would tell you when it's doing
this, as it traditionally has with simpler optimisations such as
always-true comparisons. Being able to remove a chunk of code
can be a sign of a mistake by the programmer, and just removing
it often makes the results of the error even more obscure.