I don't get how the computer arrives at 2^31


Keith Thompson

Chuck F. said:
Keith Thompson wrote: [snip]
These? The initial value of i isn't used; the initial value of
sum is.

Mea Culpa. However, I consider the proper initialization point to be in
the for statement, i.e. "for (i = 0, sum = 0; ...)". In other words,
as close as possible to the point of use.

I don't know that I'd want to squeeze the initializations of both i
and sum into the for loop, but it's certainly an option.

In C99, you can declare i at its point of first use:

for (int i = 0; i <= 30; i++) { ... }
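
For comparison, a minimal sketch putting both initializations in the obvious places; the loop body is only a guess at the kind of accumulation being discussed, not the original poster's code:

#include <stdio.h>

int main(void)
{
    long sum = 0;                      /* C90 style: initialized before the loop    */

    for (int i = 0; i <= 30; i++) {    /* C99: i declared at its point of first use */
        sum += 1L << i;                /* illustrative body only                    */
    }
    printf("sum = %ld\n", sum);        /* prints 2147483647, i.e. 2^31 - 1          */
    return 0;
}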
 

Douglas A. Gwyn

Jordan said:
I thought the claim I was disputing was that there could be undefined
behavior without the standard making any explicit statement, not that
the explicit statement could be worded some particular other way.

The words that Wojtek just cited clearly state that one form
of undefined behavior occurs when the standard says nothing
at all about the behavior.

We tried to explicitly call out cases of undefined behavior
that are important to know about.
The standard doesn't define the effects of the phase of the moon on the
program - does that mean running a program while the moon is full is
undefined? How about the first quarter?

The behavior of the moon is undefined according to the C
standard. Fortunately it is not something that a C
programmer or implementor needs to know.
 

Douglas A. Gwyn

Jordan said:
As it happens, a positive signed int is permitted in general for
variadic functions that take an unsigned int [same for an unsigned <
INT_MAX for a signed] - The reason I added comp.std.c was for the
question of whether this same exception would apply to printf.

Yes, the interface semantics for printf are the same as for
any other variadic function.
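
For concreteness, a sketch (not anyone's actual code) of the kind of call being asked about:

#include <stdio.h>

int main(void)
{
    unsigned int u = 42u;
    int n = 42;                 /* signed, but non-negative */

    printf("%u\n", u);          /* type matches %u exactly */
    printf("%u\n", n);          /* the disputed case: a non-negative signed int
                                   passed where %u names unsigned int */
    return 0;
}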
 

Wojtek Lerch

Douglas A. Gwyn said:
Jordan said:
As it happens, a positive signed int is permitted in general for
variadic functions that take an unsigned int [same for an unsigned <
INT_MAX for a signed] - The reason I added comp.std.c was for the
question of whether this same exception would apply to printf.

Yes, the interface semantics for printf are the same as for
any other variadic function.

No, the semantics for any call to printf() are defined by the specifications
of the printf() function and the function call operator. The specification
of printf() refers to the promoted values and types of the arguments of the
function call. It does not talk about the type named in any invocation of
the va_arg() macro or refer to va_arg() in any other way. Even if the
entire contents of <stdarg.h> were removed from the standard, the
description of printf() would still make sense and be useful. And I don't
see any reason to doubt that it would define the same semantics.

The semantics for any non-standard variadic function are defined by various
parts of the standard, depending on details of the C code that defines the
function. If a variadic function completely ignores its variadic arguments,
programs are free to use any types of arguments in calls to that function;
but that doesn't imply that the same freedom applies to calls to printf(),
does it?

If some other variadic function uses va_arg() to fetch the values of the
arguments, it's the program's responsibility to ensure that the requirements
of va_arg() are satisfied. In particular, va_arg() requires that the type
named by its second argument must be compatible with the promoted type of
the corresponding argument of the function call, or one must be the signed
version of the other, or one must be a pointer to void and the other a
pointer to a character type. That's a *restriction* that the standard
places on programs that use va_arg(). It specifically refers to va_arg().
It is *not* a general promise that it's always OK to mix signed with
unsigned or void pointers with char pointers where the standard says that
some two types must be compatible, such as in the description of printf().
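
A sketch of the va_arg() rule described above; the variadic function here is made up purely for illustration:

#include <stdarg.h>
#include <stdio.h>

/* Hypothetical variadic function.  It fetches its extra arguments with
 * va_arg(ap, unsigned int); 7.15.1.1p2 then permits each corresponding
 * argument to be an unsigned int or a signed int with a non-negative value. */
static unsigned sum_u(int count, ...)
{
    va_list ap;
    unsigned total = 0;

    va_start(ap, count);
    for (int i = 0; i < count; i++)
        total += va_arg(ap, unsigned int);
    va_end(ap);
    return total;
}

int main(void)
{
    int pos = 7;                         /* non-negative signed int */

    printf("%u\n", sum_u(2, 5u, pos));   /* prints 12; covered by the va_arg() wording */
    return 0;
}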
 

Douglas A. Gwyn

Wojtek said:
... It does not talk about the type named in any invocation of
the va_arg() macro or refer to va_arg() in any other way.

I didn't say that it did.
Signed and unsigned varieties of an integer type are punnable
(for nonnegative values) in the context of function arguments,
and a variadic function cannot determine which type was actually
used in the invocation.
... If a variadic function completely ignores its variadic arguments,
programs are free to use any types of arguments in calls to that function;
but that doesn't imply that the same freedom applies to calls to printf(),
does it?

Yes, printf can be supplied with unused arguments, and it
often is (not always intentionally).
It is *not* a general promise that it's always OK to mix signed with
unsigned or void pointers with char pointers where the standard says that
some two types must be compatible, such as in the description of printf().

I don't recall the fprintf spec requiring compatible types.
 

Wojtek Lerch

Douglas A. Gwyn said:
I didn't say that it did.
Signed and unsigned varieties of an integer type are punnable
(for nonnegative values) in the context of function arguments,

They're intended to, according to footnote 31; but AFAIK, there's no
normative text that actually makes that promise; is there?
and a variadic function cannot determine which type was actually
used in the invocation.

But of course it can, on any implementation that has a suitable extension to
allow it. C doesn't require printf() to be implemented as strictly
conforming C, does it?
Yes, printf can be supplied with unused arguments, and it
often is (not always intentionally).

I was talking about the *complete* freedom to pass whatever you want to the
variadic arguments, regardless of the values you pass to the non-variadic
arguments. Some variadic functions allow that, but printf() does not.
I don't recall the fprintf spec requiring compatible types.

No, it requires "the correct type" (7.19.6.1#9); my mistake (I copied
"compatible" from va_arg()). That sounds even more restrictive, doesn't it?
Or do you mean that it's clear enough that "the correct type" is just a
shorter way of saying "one of the correct types, as explained in the
description of va_arg(), assuming that the type specified by 7.19.6.1#7 and
8 is used as the second argument to va_arg()"?
 

kuyper

Douglas said:
I didn't say that it did.
Signed and unsigned varieties of an integer type are punnable
(for nonnegative values) in the context of function arguments,
and a variadic function cannot determine which type was actually
used in the invocation.

It can't determine the type by using only those mechanisms which are
defined by the standard; however, a conforming implementation can
provide as an extension additional functionality that does allow such
determination, and it's legal for printf() to be implemented in a way
that makes use of such an extension.

Until such time as the committee chooses to replace "are meant to
imply" with "guarantees", and moves the text into the normative portion
of the standard, a conforming implementation can have different types
that are not interchangeable, despite the fact that they are required
to (and do) have the same representation and same alignment. Such an
implementation would be contrary to the explicitly expressed intent of
the committee, but that doesn't prevent it from being conforming, since
the actual requirements of the standard don't implement that intent.
 

Douglas A. Gwyn

Wojtek said:
... C doesn't require printf() to be implemented as strictly
conforming C, does it?

However, the specification is based on C function semantics,
and the prototype with the ,...) notation is even part of the
spec, so we know what the linkage interface has to be like.
I was talking about the *complete* freedom to pass whatever you want to the
variadic arguments, regardless of the values you pass to the non-variadic
arguments. Some variadic functions allow that, but printf() does not.

That doesn't seem relevant. Obviously any specific function
has some restriction on its arguments, based on the definition
for the function. In the case of printf the arguments have to
match up well enough with the format so that the proper values
can be fetched for formatting.
No, it requires "the correct type" ...
That sounds even more restrictive, doesn't it?

What is "correct" has to be determined in other ways.
 

Douglas A. Gwyn

... Such an implementation would be contrary to the explicitly
expressed intent of the committee, but that doesn't prevent it
from being conforming, since the actual requirements of the
standard don't implement that intent.

That's a spuriously legalistic notion. The C Standard uses a
variety of methods to convey the intent, including examples and
explanatory footnotes. It is evident from this thread that the
actual requirement is not certain (for some readers) from the
normative text alone, but can be clarified by referring to the
footnote that explains what the normative text means.
 

Wojtek Lerch

Douglas A. Gwyn said:
However, the specification is based on C function semantics,
and the prototype with the ,...) notation is even part of the
spec, so we know what the linkage interface has to be like.

The "linkage interface"? What is that, in standardese? What exactly does
the standard says about it? How would it be affected if <stdarg.h> were
removed from the standard? Is it really something that the standard
requires to exist, or is it merely a mechanism that compilers commonly use
to implement the required semantics?

Neither the specification of printf() nor the description of function
semantics has references to a "linkage interface". The standard defines
semantics of printf() in terms of the promoted type of the argument. It
says, for instance, that the %u format requires an argument with type
unsigned int, and that if the argument doesn't have the "correct" type, the
behaviour is undefined. There's no hint anywhere that there might actually
be two different correct types for the %u format. There's no hint anywhere
that it's the description of va_arg() that defines what the set of "correct"
types is. There's no hint anywhere that removing <stdarg.h> from the
language could possibly affect the set of "correct" argument types for %u.

Or are all those hints actually there, and I just managed to miss them?
That doesn't seem relevant. Obviously any specific function
has some restriction on its arguments, based on the definition
for the function.

It was just an illustration of the simple fact that what the restrictions on
the arguments are depends on how the semantics of the function are defined.
If the function doesn't use va_arg() or anything else to get the values,
there's no restriction whatsoever. If the function uses
va_arg(ap,unsigned), the restriction is as described for va_arg(): the
argument must be an unsigned int or a non-negative signed int, or else the
behaviour is undefined. If the function is printf() and the format
specifier is %u, the restriction is as described for printf(): the argument
should be an unsigned int and if it doesn't have the correct type, the
behaviour is undefined.
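
A sketch of the first of those cases, using a function invented here for illustration:

#include <stdio.h>

/* Hypothetical function: the ... arguments are never fetched, so the
 * standard places no requirement on their number or types. */
static int first_only(int first, ...)
{
    return first;
}

int main(void)
{
    /* Fine precisely because nothing ever reads the extra arguments. */
    printf("%d\n", first_only(1, 2.5, "hello", (void *)0));
    return 0;
}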
In the case of printf the arguments have to
match up well enough with the format so that the proper values
can be fetched for formatting.

Yes, but how well is well enough? The standard doesn't say that they're
fetched via va_arg() (or "as if" via va_arg()), only that they must have
"the correct type" (notice the singular -- it doesn't say "one of the
correct types"), and names one type for each format specifier. It doesn't
say anything like "a type that has the same representation and alignment
requirements as the specified type", either.
What is "correct" has to be determined in other ways.

Other than by looking it up in the description of printf()?
 

Wojtek Lerch

Douglas A. Gwyn said:
That's a spuriously legalistic notion. The C Standard uses a
variety of methods to convey the intent, including examples and
explanatory footnotes. It is evident from this thread that the
actual requirement is not certain (for some readers) from the
normative text alone, but can be clarified by referring to the
footnote that explains what the normative text means.

I know of two places in the normative text that describe situations where a
signed type and the corresponding unsigned type are interchangeable as
arguments to functions, and those two places are quite clear already:
6.5.2.2p6 (calls to a function defined without a prototype) and 7.15.1.1p2
(va_arg()). If there are supposed to be more such situations, then I'm
afraid the footnote itself needs to be clarified. In particular, if the
only difference between two function types T1 and T2 is in the signedness of
parameters, was the intent that the two types are compatible, despite
what 6.7.5.3p15 says? If not, which of the following were intended to
apply, if any:

- it's OK to use an expression with type T1 to call a function that was
defined as T2, even though 6.5.2.2p6 says it's undefined behaviour?

- it's OK to declare the function as T1 in one translation unit and define
it as T2 in another translation unit, even though 6.2.7p1 says it's undefined
behaviour?

- it's OK to define the function as T1 and then as T2 in *the same*
translation unit, even though 6.7p4 says it's a constraint violation?

What about interchangeability as return values from a function? I haven't
found any normative text that implies this kind of interchangeability; which
of the above three situations are meant to apply if T1 and T2 have different
return types?
 

Emmanuel Delahaye

Chad a écrit :
Now 'a' is zero! Ahhhhhhhhhhhhh........ I'm even more confused.

You seem to have a hard time matching the types and the format specifiers...

#include <stdio.h>
#include <math.h>

int main(void)
{
    unsigned long a = (unsigned long) pow(2, 32);
    double b = pow(2, 32);

    printf("The value of a is: %lu\n", a);
    printf("The value of b is: %0.0f\n", b);
    printf("The value of int is: %u\n", (unsigned) sizeof(int));
    printf("The value of double is: %u\n", (unsigned) sizeof(double));

    return 0;
}

(Windows XP/Mingw)
The value of a is: 4294967295
The value of b is: 4294967296
The value of int is: 4
The value of double is: 8
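
For what it's worth, a sketch of getting the exact power without going through double; on that MinGW target unsigned long is 32 bits, so converting 4294967296.0 to it is out of range (and therefore undefined), which is consistent with the 4294967295 printed above:

#include <stdio.h>

int main(void)
{
    unsigned long long c = 1ULL << 32;        /* unsigned long long has at least 64 bits */

    printf("The value of c is: %llu\n", c);   /* 4294967296 */
    return 0;
}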
 

Wojtek Lerch

Another thing I just realized is not quite clear: the exact consequences
of the fact that even though the standard guarantees that every signed
representation of a non-negative value is a valid representation of the
same value in the corresponding unsigned type, it doesn't work the other way
around. There may exist other unsigned representations of the same value
that do not represent the same value in the signed type but instead either
are trap representations or represent a negative value. Except for
va_arg(), are there any other situations where the standard guarantees that
storing a value through an unsigned type and then reading it back through
the corresponding signed type produces the original value?
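
A sketch of the operation in question, rereading an unsigned object's bytes through the signed type; whether the 42 is guaranteed is exactly the open question:

#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned int u = 42u;    /* value representable in both int and unsigned int */
    int i;

    /* 6.2.6.2p5 promises the signed-to-unsigned direction; whether this
     * direction must also yield 42 is the question raised above. */
    memcpy(&i, &u, sizeof i);
    printf("%d\n", i);       /* 42 on common implementations */
    return 0;
}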
 

Wojtek Lerch

Wojtek Lerch said:
Another thing I just realized is not quite clear: the exact consequences
of the fact that even though the standard guarantees that every signed
representation of a non-negative value is a valid representation of the
same value in the corresponding unsigned type, it doesn't work the other
way around. There may exist other unsigned representations of the same
value that do not represent the same value in the signed type but instead
either are trap representations or represent a negative value.

To continue this conversation with myself, the above is what 6.2.6.2p5 seems
to imply; on the other hand, 6.2.5p9 simply states that "the representation
of the same value in each type is the same", and has the footnote attached
to it that explains that that's meant to imply interchangeability. I don't
suppose that's meant to override the implication of the much more specific
6.2.6.2p5 and guarantee that the rule works both ways, is it?
 

pete

Wojtek said:
To continue this conversation with myself,
the above is what 6.2.6.2p5 seems
to imply; on the other hand,
6.2.5p9 simply states that "the representation
of the same value in each type is the same",
and has the footnote attached
to it that explains that that's meant to imply interchangeability.

That would have to put constraints on the values
of padding bits then, wouldn't it?


In a case where the type unsigned
had no padding bits and also two more value bits than type int,
CHAR_BIT == 17
UINT_MAX == 131071
INT_MAX == 32767
reading an object of type int with a %u specifier
would interpret the padding bit of the int type object
as an unsigned value bit.

In a case where int and unsigned had the same number
of value bits,
reading an object of type unsigned with a %d specifier
would interpret a padding bit as the sign bit.
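
For concreteness, the bit counts in pete's first hypothetical implementation (not a real one, and assuming sizeof(int) == 1) work out as:

CHAR_BIT == 17, so int and unsigned each occupy 17 bits;
UINT_MAX == 131071 == 2^17 - 1, so unsigned has 17 value bits and no padding bits;
INT_MAX == 32767 == 2^15 - 1, so int has 15 value bits, 1 sign bit, and 1 padding bit.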
 

Dave Thompson

it's not clear that it's undefined - an unsigned int and a signed int
have the same size, and it's not clear that int can have any valid
representations that are trap representations for unsigned.
All magnitude bits in signed must be the same, but the sign bit need
not be an additional (high) magnitude bit for unsigned; it can be a
padding bit, and a representation with that padding bit set may be a
trap representation.

This is why the unprototyped and vararg rules allow for corresponding
signed and unsigned integer types if (and only if) "the value is
representable in both types". (Plus technically it isn't clear the
vararg rules apply to the variadic routines in the standard library;
nothing explicitly says they use va_* or as-if, although presumably
that's the sensible thing for an implementor to do.)

- David.Thompson1 at worldnet.att.net
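
A two-call sketch (invented here) of that "representable in both types" condition:

#include <stdio.h>

int main(void)
{
    printf("%u\n", 1);    /* 1 is representable in both int and unsigned int, so the
                             exception would cover it, if it applies to printf at all */
    printf("%u\n", -1);   /* -1 is not representable in unsigned int, so even that
                             exception would not cover this call */
    return 0;
}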
 

Wojtek Lerch

pete said:
That would have to put constraints on the values
of padding bits then, wouldn't it?

Well yes, 6.2.6.2p5 says that very clearly. Any signed representation of a
non-negative value must represent the same value in the corresponding
unsigned type. If the signed type has padding bits that correspond to value
bits in the unsigned type, those padding bits must be set to zero in all
valid representations of non-negative values.

....
In a case where int and unsigned had the same number
of value bits,
reading an object of type unsigned with a %d specifier
would interpret a padding bit as the sign bit.

That's the situation I'm concerned about. From 6.2.6.2p5 it seems that it's
OK for the padding bit to be ignored. But 6.2.5p9 may be interpreted as
implying that the representation with the padding bit set to one must be
treated as a trap representation, because otherwise you would have a bit
pattern that represents a value in the range of both types when read through
the unsigned type, but a negative value when read through the signed type.
 

Wojtek Lerch

Jordan Abel said:
Stage 8 of translation.

A stage of translation is an interface that implies how printf() must be
implemented? Somehow I doubt that's what he meant; but could you elaborate?
 
