Does setting all bits to zero with memset() set a float/double to zero?


Dan Pop

In said:
This is generating a lot more discussion than it deserves.

And it is entirely due to your pointless remarks.

If all bits zero is an inconvenient representation for the null pointer on
a given platform, no implementor is going to use it and there is no point
in explaining how it could be used, anyway.

Dan
 

Keith Thompson

I've seen you state that it could (and I agree), but I haven't seen you
arguing that it should. And IMO you'd need some pretty good arguments,
because sacrificing speed and safety just to cater for sloppy
programmers is rarely a good idea, IYAM.

My assumption was that there would be no performance penalty in using
data registers for any operations that are valid for null pointers.
And catering to sloppy programmers, unfortunately, can be an effective
way to sell more compilers.
 

Keith Thompson

And it is entirely due to your pointless remarks.

Nonsense. I was merely responding to Malcolm's remarks about a
hypothetical architecture. He said that on a CPU where loading
all-bits-zero into an address register causes a trap, a C
implementation would have to use some other value for null pointers.
I pointed out that, given a few additional assumptions, an
implementation *could* use all-bits-zero for null pointers, and there
are some advantages in doing so. One is that it makes it easy to trap
attempts to dereference null pointers. Another is that it avoids
breaking existing code that incorrectly assumes a null pointer is
all-bits-zero. (You and I might consider this second point to be a
disadvantage, but the sales and marketing departments might differ.)

BTW, none of this is intended as a criticism of Malcolm; it was nothing
more than a friendly correction.

The length of this thread is due to your apparent need to jump in and
refute your misinterpretation of what I actually wrote.
If all bits zero is an inconvenient representation for the null pointer on
a given platform, no implementor is going to use it and there is no point
in explaining how it could be used, anyway.

I'm not convinced of that. There's still a lot of code out there that
passes null pointers to functions without prototypes, and that will
break if null pointers are anything other than all-bits-zero. An
implementer is faced with a choice: use a non-zero value, causing such
code to break (and, we might hope, be fixed), or cater to it and very
likely sell more compilers to users who don't have time to go back and
fix their old code. I'm not necessarily arguing that the second
choice is a good one, but it's one that some implementers might be
likely to make.
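
To make that concrete, here is a rough sketch of the kind of legacy code
I have in mind (the function name is invented, and the old-style
definition is deliberate, so that no prototype is in scope):

#include <stdio.h>

/* Old-style (K&R) definition: callers see no prototype, so no argument
   conversion is performed at the call site. */
void show(p)
char *p;
{
    if (p == NULL)
        puts("(null)");
    else
        puts(p);
}

int main(void)
{
    show("hello");

    /* The bare 0 below is the error in question: the caller passes an
       int, the callee reads a char *.  It happens to "work" only where
       int and char * have the same size, are passed the same way, and
       the null pointer is represented as all-bits-zero.  The correct
       call is show((char *)0). */
    show(0);

    return 0;
}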

But that's really beside the point I was trying to make, which is that
trapping on loading all-bits-zero into an address register doesn't
necessarily make all-bits-zero an inconvenient representation for the
null pointer.

On a typical platform, all-bits-zero and all-bits-one are equally
valid choices for the null pointer. Any compiler implementer could
easily choose to use the latter. (It's not an option if there are
existing libraries that use all-bits-zero, but presumably the choice
is available for new architectures.) And yet, every system I'm
familiar with uses all-bits-zero for null pointers. (Yes, I know
there are a lot of systems I'm not familiar with.) I'm sure a lot of
that is inertia, but I would guess that catering to existing broken
code is part of the motivation for that.
 

Christian Bau

Keith Thompson said:
I'm not convinced of that. There's still a lot of code out there that
passes null pointers to functions without prototypes, and that will
break if null pointers are anything other than all-bits-zero.

That kind of code would also break if arguments are passed on the stack
and int and pointers have different sizes.
 

Richard Bos

Keith Thompson said:
My assumption was that there would be no performance penalty in using
data registers for any operations that are valid for null pointers.

That is, data registers must be used for every possible operation except
dereferencing and comparison for order (there's no exception for
equality comparisons!), if there is even a small chance of any operand
being a null pointer. I don't think that such an architecture is likely
to even have pointer registers - under those constraints, they're nearly
useless.
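
For reference, here is a quick sketch of which operations the language
actually defines for a null pointer and which it doesn't; nothing in it
is architecture-specific, and the names are invented:

#include <stddef.h>

void sketch(int *q)               /* q is assumed to point to a real object */
{
    int *p = NULL;                /* assignment of a null pointer: valid */

    if (p == NULL) { /* ... */ }  /* equality comparison: valid */
    if (p != q)    { /* ... */ }  /* equality comparison: valid */

    /* The following are undefined for a null pointer, so the compiler
       never has to make them "work" in any particular way:
           *p = 0;       dereference
           if (p < q)    relational ("order") comparison
           p + 1         pointer arithmetic
    */
}
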
And catering to sloppy programmers, unfortunately, can be an effective
way to sell more compilers.

Ah, well, that's the difference between "should" and "will have to".
Unfortunately.

Richard
 

Christian Bau

That is, data registers must be used for every possible operation except
dereferencing and comparison for order (there's no exception for
equality comparisons!), if there is even a small chance of any operand
being a null pointer. I don't think that such an architecture is likely
to even have pointer registers - under those constraints, they're nearly
useless.

Address registers could be used for any kind of pointer arithmetic,
which I think is quite common. And of course you could use the address
registers for anything if you can prove that a null pointer will lead to
undefined behavior somewhere along the line; I think that will be quite
common as well.
 

Dan Pop

In said:
(e-mail address removed) (Dan Pop) writes:

The length of this thread is due to your apparent need to jump in and
refute your misinterpretation of what I actually wrote.

Since I wasn't the only one to "misinterpret" what you wrote, the guilty
party must be sought elsewhere.
I'm not convinced of that. There's still a lot of code out there that
passes null pointers to functions without prototypes, and that will
break if null pointers are anything other than all-bits-zero.

You're badly confusing null pointers and the null pointer constant.
There is absolutely nothing wrong with passing a null pointer to a
non-prototyped function, regardless of the representation of the null
pointer.

Passing 0 to a non-prototyped function expecting a pointer is and has
*always* been an error. Read the standard. Read K&R1.
But that's really beside the point I was trying to make, which is that
trapping on loading all-bits-zero into an address register doesn't
necessarily make all-bits-zero an inconvenient representation for the
null pointer.

It does, every time address registers have different properties than
data registers. And even otherwise, as it puts greater pressure on
the data registers. If the architecture has address registers, there
must be a *good* reason for that, and using data registers instead for
pointer operations defeats that good reason.

Dan
 

Keith Thompson

In said:
(e-mail address removed) (Dan Pop) writes: [snip]
If all bits zero is an inconvenient representation for the null pointer on
a given platform, no implementor is going to use it and there is no point
in explaining how it could be used, anyway.

I'm not convinced of that. There's still a lot of code out there that
passes null pointers to functions without prototypes, and that will
break if null pointers are anything other than all-bits-zero.

You're badly confusing null pointers and the null pointer constant.
There is absolutely nothing wrong with passing a null pointer to a
non-prototyped function, regardless of the representation of the null
pointer.

Passing 0 to a non-prototyped function expecting a pointer is and has
*always* been an error. Read the standard. Read K&R1.

Yes, I meant null pointer constants. I didn't badly confuse them; I
made a minor error.

Yes, passing 0 (or NULL, if it's #defined as 0) to a non-prototyped
function expecting a pointer is an error. The fact remains that there
is code in the real world that makes this error, and happens to work
as expected on platforms where pointers are the same size as int and
the null pointer has the same representation as (int)0.

Dan, did you really think that I don't already know this? I've
written enough here about null pointers and null pointer constants
that it should be pretty obvious that I do understand this stuff
reasonably well. If I type "null pointers" when "null pointer
constants" would make more sense, a reasonable reader would assume I
made a minor mistake, not that I have no idea what I'm talking about.
It does, every time address registers have different properties than
data registers. And even otherwise, as it puts greater pressure on
the data registers. If the architecture has address registers, there
must be a *good* reason for that, and using data registers instead for
pointer operations defeats that good reason.

I didn't think about the hypothetical CPU architecture to that level
of detail, because I was only trying to make a minor point about
what's possible.

I don't currently have convenient access to 68k machine code, but the
68k does have separate data and address registers. If I recall
correctly (which I might not), adding a value other than 1, 2, or 4 to
an address requires loading it into a data register and performing
integer arithmetic on it (adding 1, 2, or 4 can be done with
auto-increment addressing modes). There may even be cases where it
makes sense to use address registers for some operations on integers.

Not all CPU architectures are flawless. Once the architecture is
fixed, I would expect a compiler to do whatever is most convenient
and/or efficient, even if it means using data registers for address
operations or vice versa.
 

Dan Pop

In said:
[email protected] (Dan Pop) said:
In said:
(e-mail address removed) (Dan Pop) writes: [snip]
If all bits zero is an inconvenient representation for the null pointer on
a given platform, no implementor is going to use it and there is no point
in explaining how it could be used, anyway.

I'm not convinced of that. There's still a lot of code out there that
passes null pointers to functions without prototypes, and that will
break if null pointers are anything other than all-bits-zero.

You're badly confusing null pointers and the null pointer constant.
There is absolutely nothing wrong with passing a null pointer to a
non-prototyped function, regardless of the representation of the null
pointer.

Passing 0 to a non-prototyped function expecting a pointer is and has
*always* been an error. Read the standard. Read K&R1.

Yes, I meant null pointer constants. I didn't badly confuse them; I
made a minor error.

Yes, passing 0 (or NULL, if it's #defined as 0) to a non-prototyped
function expecting a pointer is an error. The fact remains that there
is code in the real world that makes this error, and happens to work
as expected on platforms where pointers are the same size as int and
the null pointer has the same representation as (int)0.

This is not enough to make it work. If the platform has separate data
and address registers, the value *may* be passed in the "wrong" register
and the code will fail miserably.
Dan, did you really think that I don't already know this? I've

I have no clue about what you know and what you don't, especially
since there are many technical inaccuracies in your posts, such as the
ones pointed out above. It's none of my business to figure out what you
know or what you mean, when what you write is downright incorrect.

I don't know why you keep invoking severely broken code that happens to
work by accident as an argument. Please show us some *concrete* evidence
that implementors care about such code when making their decisions and
that they're ready to sacrifice performance in order to keep broken code
working as intended by its developer.

Dan
 

Wayne Rasmussen

Dan said:
In said:
[email protected] (Dan Pop) said:
But there are no implementations of the C89 or C99 standard
that use anything but all zero bits to represent NULL or +0.0.

The problem is that the standard allows it. For instance, if an
architecture appeared that trapped whenever an illegal address
(including 0) was loaded into an address register, then obviously
NULL would have to be some other value.
[...]

Maybe not. I think the only operation for which it matters is
equality comparison. If the compiler doesn't load a pointer value
into an address register for a pointer comparison, it might still be
able to use all-bits-zero for null pointers.

OTOH, if the CPU has dedicated address registers, it may be that ALL
pointer operations are more efficient when using address registers.

Think about a CPU with 32-bit data registers and 48-bit address registers.

On any CPU, the code generated by a C compiler has to be able to do
assignments and equality comparisons on pointer values, including null
pointer values. If loading a null pointer into an address register
causes a trap, the C implementation won't be able to use the address
registers for pointer assignment and equality comparison (unless it
can recover in the trap handler, but that's likely to be inefficient).

But there is nothing preventing the implementor from using a null pointer
representation that doesn't trap! Even if that representation is not
all bits zero.
If it's possible to perform these operations efficiently without using
the address registers, that's great;

Even then, why bother? If address registers exist in the first place,
there *must* be a reason. Not using them for pointer operations is very
likely to be inefficient, even if they have the same size as data
registers, for the simple reason that the data registers come under
greater pressure while the address registers sit unused. When the sizes
are different, as in my example, the difference is going to be even more
pronounced.
a C compiler will be able to use
all-bits-zero as the null pointer value, and everything will work.

You make it sound as if there is any merit in having null pointers
represented as all bits zero. Why would the implementor try to use
that representation, if it's causing even the slightest inconvenience?
If it isn't, either the C compiler will be forced to use a different
value for null pointers,

The representation of null pointers is supposed to be the most convenient
and efficient for the implementation, so there is no reason to consider
all bits zero as an a priori solution, unless it is a convenient choice.
or it won't be possible to implement C efficiently.

Only an idiot would create an inefficient implementation for the sake of
having null pointers represented as all bits zero. See below.
The same will probably be true for languages other than
C, which probably means that no such CPU will be designed for
general-purpose computers.

Sheer nonsense. There is always an efficient solution: create a
"reserved" object at some address and use its address as the null pointer
representation. This is what most implementations are doing anyway, using
the address 0 for this purpose. But there is nothing magic about this
address, it just happens to be very convenient for this particular
purpose, on most architectures.

Since most modern machines treat addresses as unsigned integers,
address 0 happens to be at the beginning of the address space and it's
traditionally reserved anyway, on non-virtual memory architectures:
it's the address of a GPR or an interrupt vector or the entry point in
the boot loader (possibly ROM space). On virtual memory architectures,
it's simply a matter of either leaving page zero unmapped (best choice,
IMHO) or mapping but not using it (in read only mode, preferably).
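
Roughly, the idea amounts to something like the following, except that a
real compiler would do the equivalent internally rather than in user
code; __null_object and __NULL_REP are invented names for the sketch
(deliberately in the implementation's namespace):

#include <stdio.h>

static char __null_object;   /* reserved; its address is taken, but it is
                                never read or written */
#define __NULL_REP ((void *)&__null_object)

int main(void)
{
    /* The implementation would translate "p = 0" (in pointer context)
       into storing __NULL_REP, and "p == 0" into a comparison against
       __NULL_REP; neither involves the all-bits-zero pattern. */
    void *p = __NULL_REP;

    if (p == __NULL_REP)
        puts("p holds the (sketched) null pointer representation");

    return 0;
}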

Or people could look at the C FAQ to see how this has been done in the
past. At Symbolics we had a tagged architecture to handle this situation.
If you are using a theoretical system, you might as well add tags to it!
BTW, this is not a slam on anyone.

Wayne
 

Keith Thompson

This is not enough to make it work. If the platform has separate data
and address registers, the value *may* be passed in the "wrong" register
and the code will fail miserably.

Fine, I didn't exhaustively list all the criteria necessary for such
code to work. My point (and it was a minor one) was that such code
does happen to "work" on some platforms, and some implementers *might*
cater to such code.

[snip]
I don't know why you keep invoking severely broken code that happens to
work by accident as an argument. Please show us some *concrete* evidence
that implementors care about such code when making their decisions and
that they're ready to sacrifice performance in order to keep broken code
working as intended by its developer.

I have no such concrete examples. I know of no real world
architecture on which there's a significant performance difference
between using all-bits-zero for null pointers and using some other
representation. (Though if using all-bits-zero also makes it easier
to catch errors, that might be a reason to use all-bits-zero even if
there is a slight performance penalty.)

I do have a couple of speculative examples where implementors *might*
have catered to "broken" code, though perhaps not at the cost of a
performance penalty. For most implementations, the argument passing
convention is such that printf() happens to work in many cases even
without a valid prototype. For most implementations, null pointers
are represented as all-bits-zero, even though using a different
representation would only break invalid code. Perhaps that's just
inertia.
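
For instance, something along these lines is invalid, yet it compiles
and runs as expected on most implementations (pre-C23, where empty
parentheses still declare a function without a prototype):

/* Deliberately no #include <stdio.h>.  printf is declared without a
   prototype; calling a variadic function this way is undefined
   behaviour, but the usual argument-passing conventions make it work
   anyway on most implementations. */
int printf();

int main(void)
{
    printf("hello, world\n");
    return 0;
}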


I responded that I'm not convinced, and presented some possible
reasons why some implementors *might* do that. Perhaps nobody
actually would; at this point, I don't particularly care, and I doubt
that anyone else still following this thread does either.
 

Malcolm

Keith Thompson said:
For most implementations, null pointers are represented as all-
bits-zero, even though using a different representation would
only break invalid code. Perhaps that's just inertia.
I think the main reason is psychological. foo(0) passes a NULL pointer to
foo(), so an implementor is going to use all bits zero as the representation
unless there is a really pressing reason to do otherwise. Then there's the
problem of code that uses memset() or calloc() to initialise pointer data.
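
Something along these lines, say; the struct is invented for illustration:

#include <stdlib.h>
#include <string.h>

struct node {
    struct node *next;
    double       value;
};

int main(void)
{
    struct node n;

    /* memset() (like calloc()) produces all-bits-zero storage.  That is
       only guaranteed to read back as a null pointer (or as 0.0 for the
       double) on implementations where those values are represented as
       all bits zero. */
    memset(&n, 0, sizeof n);

    /* The portable initialisation spells the values out instead: */
    n.next  = NULL;
    n.value = 0.0;

    return (n.next == NULL) ? 0 : 1;
}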
 

Richard Bos

Malcolm said:
I think the main reason is psychological. foo(0) passes a NULL pointer to
foo(), so an implementor is going to use all bits zero as the representation
unless there is a really pressing reason to do otherwise.

Huh? Non sequitur, methinks.
Then there's the
problem of code that uses memset() or calloc() to initialise pointer data.

The problem is in that code, not in the non-zero representation of a
null pointer where this is useful.

Richard
 

Dan Pop

In said:
I responded that I'm not convinced, and presented some possible
reasons why some implementors *might* do that. Perhaps nobody
actually would; at this point, I don't particularly care, and I doubt
that anyone else still following this thread does either.

So you're adamantly trying to make a pointless point, even though you
can produce no concrete evidence supporting your position. A "might"
without any kind of support behind it (nothing but pure speculation) is
worthless in any technical debate.

It was one of the extremely rare cases when Malcolm actually made a
sensible statement and you chose to disagree for no *good* reason
(your empty "might"s do not qualify as a good reason).

When all other things are equal, implementors are known to make the choice
that preserves as much broken code as possible. But this was NOT the
case in this discussion, which involved cases where choosing all bits
zero for null pointers has performance costs of one nature or another.

Dan
 
