array elements


sophia.agnes

Hi,

is there any guarantee that the elements of an uninitialized array
by default contain the value 0?
 

Malcolm McLean

is there any guarantee that the elements of an uninitialized array
by default contain the value 0?
The implementation sometimes guarantees it. For instance, Microsoft's Visual
C used to initialise all memory to 0xC0 in debug builds and, I think, to zero
in release builds - the idea being that if an integer holding 0xC0C0.... is
used as an index it will generate a large negative value and almost certainly
a crash, whilst 0 will probably keep the show on the road.

However it is incorrect to rely on such details.

There are two exceptions.

int array[10000] = {0};

will initialise all elements of array to zero. This is a concession to save
typing.

in global space

/* global */
int array[1000];

void foo(void)
{

}

array is actually initialised to zero, because it is a global. However it is
poor form to rely on this.
 

santosh

Hi,

is there any guarantee that the elements of an uninitialized array
by default contain the value 0?

Yes, if it is a static array, the compiler initialises the elements to
zero if no explicit initialisation is provided.

For the case of auto arrays, if you initialise at least one member, the
trailing members are initialised to zero. Otherwise the array has
indeterminate valued elements.

/* file.c */

int arr[10]; /* This is initialised to zero by the compiler/loader */

/* file1.c */

void fx(void)
{
    int arr[10] = { 0 }; /* Remaining members are set to zero */
    /* ... */
}
 

Richard Heathfield

Malcolm McLean said:
The implementation sometimes guarantees it.

However it is incorrect to rely on such details.

With you so far...

There are two exceptions.

int array[10000] = {0};

will initialise all elements of array to zero. This is a concession to
save typing.

in global space

/* global */
int array[1000];

void foo(void)
{

}

array is actually initialised to zero, because it is a global. However it
is poor form to rely on this.

Here I must disagree. I would argue that it's poor form to use the file
scope object in the first place, but if you're going to do so, I don't see
why it's poor form to rely on the guarantee that ISO gives in its default
static initialiser rule.
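
For what it's worth, a minimal sketch of a program that relies on that
guarantee (the identifiers are mine, purely illustrative):

#include <assert.h>

/* File scope, no initialiser: the default static initialisation
   rule guarantees every element starts out as zero. */
static unsigned long counts[256];

void record(unsigned char byte)
{
    counts[byte]++;     /* safe: counts[] began life all zero */
}

int main(void)
{
    assert(counts[42] == 0);   /* holds before any call to record() */
    record(42);
    assert(counts[42] == 1);
    return 0;
}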
 

Barry Schwarz

Hi,

is there any guarantee that the elements of an uninitialized array
by default contain the value 0?

For a static array or one defined at file scope, if the elements are
integer they contain the VALUE 0. If they are floating point, they
contain 0.0. If they are pointers, they contain NULL. If the
elements are structures, apply the above recursively to the members.
If the elements are unions, ONLY the first member is initialized. They
will all compare equal to zero but need not be represented by all bits
0.

For an automatic array, the guarantee is just the opposite. The
values are indeterminate and any attempt to evaluate an indeterminate
value invokes undefined behavior.
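
As a rough sketch of those rules in action (the type and member names here
are purely illustrative):

#include <stdio.h>

struct s { int i; double d; char *p; };
union u  { double d; int i; };

/* Static storage duration, no explicit initialiser: i gets the value 0,
   d gets 0.0, p gets a null pointer; for the union only the FIRST
   member (d here) is initialised. */
static struct s sv;
static union u uv;

int main(void)
{
    printf("%d %f %s\n", sv.i, sv.d, sv.p == NULL ? "(null)" : "(not null)");
    printf("%f\n", uv.d);
    return 0;
}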


 

sophia.agnes

They will all compare equal to zero but need not be represented by all
bits 0.

Can you be a bit more precise about the above statement?
Do you mean a signed zero, so that a sign bit of 1 is used to represent
negative values?
 

santosh

They will all compare equal to zero but need not be represented by all
bits 0.

Can you be a bit more precise about the above statement?

I suggest that you read the entirety of the following section of the
comp.lang.c FAQ:

<http://c-faq.com/null/index.html>

Also this subject has been discussed innumerable times in this group. A
quick search on Google Groups will turn up a lot of links to excellent
previous discussions.
Do you mean a signed zero, so that a sign bit of 1 is used to represent
negative values?

No, only for certain architectures.
 

Ben Bacarisse

For an automatic array, the guarantee is just the opposite. The
values are indeterminate and any attempt to evaluate an indeterminate
value invokes undefined behavior.

Nit: except for unsigned char where the values can only be
"unspecified". Unspecified values are always valid for the given
type, so UB does not inevitably follow. Any program that makes use
of this nit is fit only for language lawyering.
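
For the record, a minimal sketch of the kind of program that exercises the
nit, under the C99 reasoning above (a real compiler will very likely warn
about it):

#include <stdio.h>

int main(void)
{
    unsigned char c;   /* never initialised or assigned */

    /* unsigned char has no padding bits, hence no trap
       representations, so c merely holds an unspecified value:
       some valid value in 0..UCHAR_MAX, chosen who knows how. */
    printf("%d\n", (int)c);
    return 0;
}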
 

Richard Tobin

I'm certain you mean were instead of was. Third person subjunctive present
tense.

The subjunctive is in decline, at least in British English. Not many
people will miss it.

-- Richard
 

Barry Schwarz

They will all compare equal to zero but need not be represented by all
bits 0.

Can you be a bit more precise about the above statement?
Do you mean a signed zero, so that a sign bit of 1 is used to represent
negative values?

Please configure your newsgroup software to distinguish between what
you are quoting and what you are adding. And then quote enough of
the message you are replying to so the context is clear. Not everyone
is going to remember what "they" stands for.

For a scalar object x of type T which has been initialized with or
assigned a value, you can print the individual bytes that make up the
object with a function like

#include <stdio.h>

void hexprint(void *v, size_t n) {
    size_t i;
    unsigned char *p = v;
    for (i = 0; i < n; i++)
        printf("%x ", p[i]);
    putchar('\n');
}

You would call this function with a statement like
hexprint(&x, sizeof x);

If we assume the system uses 8-bit bytes, then for any of the integer
types with a value of 0 the output will be a sequence of 0x00. Using
the terminology in question, an integer value of 0 is represented by
all bits zero. However, this need not be true for floating point and
pointer types. (On my system, one of the many valid representations
of 0.0f is 0x40 0x00 0x00 0x00.) In those cases where it is not true,
the compiler is responsible for ensuring that the code generated for
the expression
x == 0
still evaluates to 1. How it does this is implementation dependent.

The point I was trying to make is that the default initialization for
a static object is performed more in the manner of an assignment
than by the technique used by calloc.
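
As a rough sketch of that contrast, assuming the hexprint function above is
linked into the same program:

#include <stdio.h>
#include <stdlib.h>

void hexprint(void *v, size_t n);   /* the function shown above */

static double d;   /* default-initialised "as if assigned" the value 0.0 */

int main(void)
{
    double *pd = calloc(1, sizeof *pd);   /* every byte set to zero */

    hexprint(&d, sizeof d);        /* whatever representation the
                                      implementation picked for 0.0 */
    if (pd != NULL) {
        hexprint(pd, sizeof *pd);  /* literally all bits zero */
        free(pd);
    }
    return 0;
}

On most implementations both calls print the same bytes, because 0.0 happens
to be represented as all bits zero there, but the Standard does not promise
that.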


 

Barry Schwarz

Nit: except for unsigned char where the values can only be
"unspecified". Unspecified values are always valid for the given
type, so UB does not inevitably follow. Any program that makes use
of this nit is fit only for language lawyering.

This makes sense but do you have a reference? 6.2.4-5, 6.2.4-6,
6.7.8-10, and J.2 (bullet 10) don't have any exceptions for unsigned
characters when asserting that accessing an indeterminate value
results in undefined behavior.


 

Harald van Dijk

This makes sense but do you have a reference? 6.2.4-5, 6.2.4-6,
6.7.8-10, and J.2 (bullet 10) don't have any exceptions for unsigned
characters when asserting that accessing an indeterminate value results
in undefined behavior.

J.2 is not normative, and no longer correct. The only possibility for
undefined behaviour for use of an indeterminate value is given by
6.2.6.1p5 (trap representations), and unsigned char isn't allowed to have
trap representations.

This program is also strictly conforming in C99:

#include <limits.h>
int main(void) {
    if (INT_MIN == -0x8000
        && INT_MAX == 0x7FFF
        && sizeof(int) * CHAR_BIT == 16)
    {
        int i;
        return !(i | 1);
    }
    else
        return 0;
}

The indeterminate value of i is only read after it is determined that
trap representations are not possible, so 6.2.6.1p5 doesn't apply, so the
behaviour is defined.
 

Ben Bacarisse

Barry Schwarz said:
This makes sense but do you have a reference?

To add to Harald van Dijk's answer (I am just tidying up marked
messages!), my reasoning follows from 3.17.2 which defines an
indeterminate value as "either an unspecified value or a trap
representation". 6.2.6.2 para. 1 tells us that unsigned char has no
padding bits, so trap representations are not possible. Thus
uninitialised unsigned char variables can only hold, at worst, an
"unspecified value".

Section 3.17.3 defines this as a "valid value of the relevant type
where this International Standard imposes no requirements on which
value is chosen in any instance". Accessing a valid value can't be
the sole cause of undefined behaviour.
 

Barry Schwarz

To add to Harald van Dijk's answer (I am just tidying up marked
messages!), my reasoning follows from 3.17.2 which defines an
indeterminate value as "either an unspecified value or a trap
representation". 6.2.6.2 para. 1 tells us that unsigned char has no
padding bits, so trap representations are not possible. Thus

Does the standard really require an object to have padding bits in
order for a value to be a trap representation?
uninitialised unsigned char variables can only hold, at worst, an
"unspecified value".

Section 3.17.3 defines this as a "valid value of the relevant type
where this International Standard imposes no requirements on which
value is chosen in any instance". Accessing a valid value can't be
the sole cause of undefined behaviour.

By this reasoning, if an int of indeterminate value happens to contain
an unspecified value (as opposed to a trap representation), then
accessing that int would not invoke undefined behavior.

By the way, 6.2.6.1-5 ensures that all character types, not just
unsigned ones, cannot contain a trap representation.

The point I am trying to make is that the object need not contain a
trap representation for its evaluation to yield undefined behavior.
The undefined behavior is caused by the fact that the value is
indeterminate. Surely no one wants to argue that evaluating an
indeterminate 8-bit unsigned char actually leads to one of 256
possible unspecified behaviors.


 

Harald van Dijk

Does the standard really require an object to have padding bits in order
for a value to be a trap representation?

For unsigned integer types, yes. For signed types, the representation
which would correspond to negative zero or -TYPE_MAX-1 (depending on the
representation) is also allowed to be a trap representation. For floating
point or pointer types, trap representations should always be considered
a possibility.
By this reasoning, if an int of indeterminate value happens to contain
an unspecified value (as opposed to a trap representation), then
accessing that int would not invoke undefined behavior.

Correct. However, when it is unspecified whether the behaviour is
defined, effectively, the behaviour is already undefined.
By the way, 6.2.6.1-5 ensures that all character types, not just
unsigned ones, cannot contain a trap representation.

Good point. Unfortunately, it doesn't actually say signed/plain char
can't have trap representations. It merely doesn't say the behaviour is
undefined if you read such a trap representation. This means the
behaviour is still undefined by omission if you add 0 to it, for example,
I believe.
The point I am trying to make is that the object need not contain a trap
representation for its evaluation to yield undefined behavior. The
undefined behavior is caused by the fact that the value is
indeterminate. Surely no one wants to argue that evaluating an
indeterminate 8-bit unsigned char actually leads to one of 256 possible
unspecified behaviors.

Actually, yes, that's what the standard states as of 1999.
 

James Kuyper

Barry Schwarz said:


Does the standard really require an object to have padding bits in
order for a value to be a trap representation?

Not in general. However, the rules for unsigned types require that all
possible bit patterns of the value bits represent valid values.
Therefore, the only way to have a trap representation of an unsigned
type is if there are padding bits. Since unsigned char is not allowed to
have padding bits, this option isn't open.
By this reasoning, if an int of indeterminate value happens to contain
an unspecified value (as opposed to a trap representation), then
accessing that int would not invoke undefined behavior.

Correct. But, since you have no way of knowing or ensuring that this is
the case when you leave an int uninitialized, it doesn't do you much good.
By the way, 6.2.6.1-5 ensures that all character types, not just
unsigned ones, cannot contain a trap representation.

No, it merely restricts the undefined behavior that can result from
creating or reading such a representation to non-character types. The
defining characteristic of a trap representation is not the undefined
behavior; it is the fact that it does not "represent a value of the
object type". I find this a meaningless distinction, and probably
contrary to the intent of the authors, but that is what they actually wrote.
 

somenath

Yes, if it is a static array, the compiler initialises the elements to
zero if no explicit initialisation is provided.

For the case of auto arrays, if you initialise at least one member, the
trailing members are initialised to zero. Otherwise the array has
indeterminate valued elements.


I tried the below-mentioned code to see the array initialization
working.

#include <stdio.h>
int main(void)
{
    int ar[10] = {-1};
    int i = 0;
    for (i = 0; i < 10; i++)
    {
        printf("\n value of ar[%d] = %d \n", i, ar[i]);
    }
    return 0;
}

Output of the program

value of ar[0] = -1

value of ar[1] = 0

value of ar[2] = 0

value of ar[3] = 0

value of ar[4] = 0

But I would like to know what the intention behind this kind of
behavior is. That is, ar[0] is -1, so why are the other members
initialized with 0 and not with -1? What is the idea behind this
behavior?

/* file.c */

int arr[10]; /* This is initialised to zero by the compiler/loader */

/* file1.c */

void fx(void)
{
    int arr[10] = { 0 }; /* Remaining members are set to zero */
    /* ... */
}
 
