struct initialization, pointers, doubles

Noob

Hello everyone,

Are the following snippets well-defined?

unsigned u;
memset(&u, 0, sizeof u); /* now u == 0 */

and

int i;
memset(&i, 0, sizeof i); /* now i == 0 */

Is the answer the same in C90 and C99?

AFAIU, on some (many) platforms, the representation of NULL
and 0.0 is all-bits 0, but there is no such guarantee.

Consider

struct foo { int i; void *p; double d; };

struct foo bar = { 0 };

On a platform where NULL and 0.0 are all-bits 0, the compiler
is free to change the statement to

memset(&bar, 0, sizeof bar);

But if that were not the case, the compiler would have to
output the machine-code equivalent of

bar.i = 0;
bar.p = NULL;
bar.d = 0.0;

(which might be much slower if the struct holds e.g. arrays of
pointers and doubles).

Is my understanding correct?

Regards.
 
Ben Bacarisse

Noob said:
Hello everyone,

Are the following snippets well-defined?

unsigned u;
memset(&u, 0, sizeof u); /* now u == 0 */

and

int i;
memset(&i, 0, sizeof i); /* now i == 0 */

Is the answer the same in C90 and C99?

Both are OK in C99 -- at least as modified by TC2. The matter was
raised as a defect in DR263 and TC2 (Technical Corrigendum 2) added
these words:

| For any integer type, the object representation where all the bits are
| zero shall be a representation of the value zero in that type.

to 6.2.6.2 p5.

I think it is safe to say that the code is intended to work as you
expect in C90 since C90 specifies that integers use a pure "binary
numeration system".
AFAIU, on some (many) platforms, the representation of NULL
and 0.0 is all-bits 0, but there is no such guarantee.

Consider

struct foo { int i; void *p; double d; };

struct foo bar = { 0 };

On a platform where NULL and 0.0 are all-bits 0, the compiler
is free to change the statement to

memset(&bar, 0, sizeof bar);

But if that were not the case, the compiler would have to
output the machine-code equivalent of

bar.i = 0;
bar.p = NULL;
bar.d = 0.0;

(which might be much slower if the struct holds e.g. arrays of
pointers and doubles).

Is my understanding correct?

Seems OK to me.
 
Ben Bacarisse

Noob said:
Therefore, it would be in an implementation's best interest to make
NULL all-bits zero, if possible, right?

It would certainly keep things simple but if the implementation could
get hardware assistance for some specific objective like trapping on
NULL pointer uses (and not simply NULL pointer dereference) then there
could be a good reason to pick some special representation for NULL.
The implementation's "best interests" are not always simplicity.
 
Eric Sosman

[... null pointers and zero floating-point as all-bits-zero ...]

Therefore, it would be in an implementation's best interest to make
NULL all-bits zero, if possible, right?

It is in an implementation's interest to do so, since it allows
lots and lots of unclean code to work without the bother of fixing
it. But "best" interest is open to question; it is possible that
the implementors might think some other goal "better" than the goal
of supporting slapdash programmers.
 
Barry Schwarz

Therefore, it would be in an implementation's best interest to make
NULL all-bits zero, if possible, right?

I work on a system where reading location 0 is allowed. I don't know
of any but it wouldn't surprise me if some systems allowed writing to
location 0. In either case, the implementation should not change the
result of an allowed activity.
 
Peter Nilsson

Noob said:
AFAIU, on some (many) platforms, the representation of
NULL and 0.0 is all-bits 0, but there is no such
guarantee.

Nit: NULL is a macro, you're talking about null pointers.
Consider

struct foo { int i; void *p; double d; };

   struct foo bar = { 0 };

On a platform where NULL and 0.0 are all-bits 0, the compiler
is free to change the statement to

   memset(&bar, 0, sizeof bar);

But if that were not the case, the compiler would have to
output the machine-code equivalent of

   bar.i = 0;
   bar.p = NULL;
   bar.d = 0.0;

(which might be much slower if the struct holds e.g. arrays
of pointers and doubles).

What makes you think it might be *much* slower? If you say
it's slower on x86, then my response is *everything* is slower
on x86! ;)

If it's a large array, then the memset is likely going to be
an unrolled loop in any case and a non-all-bits-zero initial
load into a register just adds 1 or 2 cycles at most to 1000.

That said, on many RISC-based CPUs, loading and testing
non-zero values like -1 is no quicker or slower than loading
and testing zero.

But consider FPU instructions. Why would it be quicker or
slower to load, store or test 0.0 if the representation were
not all-bits-zero?
 
Noob

Peter said:
Nit: NULL is a macro, you're talking about null pointers.


What makes you think it might be *much* slower? If you say
it's slower on x86, then my response is *everything* is slower
on x86! ;)

If it's a large array, then the memset is likely going to be
an unrolled loop in any case and a non-all-bits-zero initial
load into a register just adds 1 or 2 cycles at most to 1000.

That said, on many RISC-based CPUs, loading and testing
non-zero values like -1 is no quicker or slower than loading
and testing zero.

Hmmm, let me think about it...
"much slower" does look like an overstatement.

However, consider

struct foo { double d; void *p; };
struct bar { int count; struct foo arr[1000]; };

struct bar toto = { 0 };

On a platform where 0.0 and NULL are all-bits 0,
the compiler would simply output

memset(&toto, 0, sizeof toto);

relying on an optimized memset, with all the tricks needed
to make it fast.

Now consider a platform where (pretending both types are 32 bits
wide) the representation of 0.0 is 0x11223344 and the
representation of a null pointer is 0x55667788.
The compiler would have to output

toto.count = 0;
temp1 = 0x11223344;
temp2 = 0x55667788;
for (i = 0; i < 1000; ++i)
{
    toto.arr[i].d = temp1;
    toto.arr[i].p = temp2;
}

The compiler would likely inline this code everywhere needed.
While the code itself might not be slower than the
straight-forward memset to 0, the instruction cache
pollution might measurably degrade performance.

Or am I waaay off base?
But consider FPU instructions. Why would it be quicker or
slower to load, store or test 0.0 if the representation were
not all-bits-zero?

My "concern" was initialization only.

Regards.
 
