I don't get how the computer arrives at 2^31

kuyper

Jordan Abel said:
"...or by the omission of any explicit definition of behavior"
does not say that anything not defined is undefined behavior?

there's no explicit definition of what effect the phase of the moon has
on programs [which you not only did not reply to, but snipped.]

The standard isn't defining the moon. The behavior of the moon
is indeed undefined by the standard. That doesn't mean that the
behavior of an implementation is dependent on the phase of the
moon.

But it's not *not* permitted to be.

Actually, it is. The standard nowhere specifies how fast a program must
execute; therefore it's permissible for a program's processing speed to
depend upon the phase of the moon. Likewise, the order of evaluation of
f() and g() in the expression f()+g() is unspecified; therefore an
implementation is allowed to use a different order depending upon the
phase of the moon.
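For instance (a minimal sketch; the function bodies are made up purely for
illustration):

#include <stdio.h>

static int f(void) { puts("f"); return 1; }
static int g(void) { puts("g"); return 2; }

int main(void) {
    /* The order in which f() and g() are called here is unspecified;
       a conforming implementation may print "f" then "g" or "g" then
       "f", and need not even be consistent from one run to the next. */
    int x = f() + g();
    printf("x = %d\n", x);
    return 0;
}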
regardless, the question i meant to ask for comp.std.c is still
unanswered - does the rule that allows va_arg to accept an unsigned for
signed and vice versa if it's in the right range, also apply to printf?

No. printf() isn't required to use va_arg(). It must use something that
is binary-compatible with it, to permit separate compilation of code
that accesses printf() only indirectly, through a pointer. However,
whatever method it uses might have additional capabilities beyond those
defined by the standard for <stdarg.h>, capabilities not available to
strictly conforming user code.
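For illustration, a minimal sketch of the indirect-call case (nothing here
beyond standard C):

#include <stdio.h>

int main(void) {
    /* printf is reached only through a pointer, possibly from a
       separately compiled translation unit, so the implementation
       cannot special-case this call; the arguments must be passed
       with the ordinary variadic calling convention. */
    int (*fp)(const char *, ...) = printf;
    fp("hello, %d\n", 42);
    return 0;
}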
 
Skarmander

Tim said:
Presumably you meant 7.19.6.1.
Yes. Neat little shift on the 1 there.
Reading the rules for va_arg in 7.15.1.1, it seems clear that the
Standard intends that an int argument should work for an unsigned int
specifier, if the argument value is representable as an unsigned int.
The way the va_arg rules work make an int argument be "the correct type"
in this case (again, assuming the value is representable as an unsigned
int).
That's a very reasonable interpretation, though the standard should arguably
be clarified at this point with a footnote if the intent is to treat
printf() as "just another va_arg-using function" in this regard.

S.
 
Tim Rentsch

[snip]
regardless, the question i meant to ask for comp.std.c is still
unanswered - does the rule that allows va_arg to accept an unsigned for
signed and vice versa if it's in the right range, also apply to printf?

I believe that's the most sensible interpretation, but to get
an authoritative answer rather than just statements of opinion
probably the best thing to do is submit a Defect Report.
 
Tim Rentsch

Flash Gordon said:
Jordan Abel wrote: [snip]
regardless, the question i meant to ask for comp.std.c is still
unanswered - does the rule that allows va_arg to accept an unsigned for
signed and vice versa if it's in the right range, also apply to printf?

I would argue that it is undefined because in the specific case of
printf it is explicitly stated that if the type is different it is
undefined. After all, printf might not be implemented in C and so might
make assumptions that code written in C is not allowed to make. Although
I can't think how it could manage to break this.

The Standard doesn't say that if the type is different then it's
undefined; what it does say is that if the argument is not of the
correct type then it's undefined. Absent an explicit indication to
the contrary, the most sensible interpretation of "the correct type"
would (IMO) be "the correct type after taking into account the rules
for function argument transmission". Of course, other interpretations
are possible; I just don't find any evidence to support the theory
that any other interpretation is what the Standard intends.

And, as I said in another response, the best way to get an
authoritative statement on the matter is to submit a Defect
Report.
 
Tim Rentsch

Skarmander said:
Yes. Neat little shift on the 1 there.

That's a very reasonable interpretation, though the standard should arguably
be clarified at this point with a footnote if the intent is to treat
printf() as "just another va_arg-using function" in this regard.

Yes, I agree the wording in the Standard needs clarifying here.
 
Tim Rentsch

(e-mail address removed) writes:

[snip]
The behavior of printf() is defined only for those cases where the
types match. The standard nowhere defines what they do when there's a
mismatch.

It doesn't say "where the types match", it says when an argument is
not of the correct type. Since the phrase "of the correct type" isn't
given any specific definition, the most sensible interpretation is
"the correct type after taking into account other rules for function
argument transmission". There isn't any evidence to support your
theory that the Standard intends anything else here. There is,
however, evidence to support the theory that it intends int's to
be usable as unsigned int's (obviously provided that the argument
values are suitable).
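Under that reading, for example:

#include <stdio.h>

int main(void) {
    int n = 42;
    /* n has type int, but its value is representable as unsigned int;
       on the interpretation above, "%u" therefore receives an argument
       of "the correct type" and the behavior is defined. */
    printf("%u\n", n);
    return 0;
}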
 
Antoine Leca

In news:[email protected], Jordan Abel wrote:
191 points in J.2 [yes, i counted them. by hand. these things should
really be numbered]

What a great idea! That way, every time a technical corrigendum introduced
a new explicit case of undefined behaviour, which would be expected to be
inserted at the proper place in the J.2 list, the technical corrigendum
would have to re-list the whole rest of annex J.2 with the new numbers.
As a result, those numbers would instantly become useless, since it would
be a great pain to track the changing numbers.

If you need a numerical scheme to designate them, just use the
subclause and paragraph numbers (chapters and verses), as everybody does.


If it is just to help you count them, assuming you are not proficient in
the use of sed/awk over Nxxxx.txt, just ask Larry, he does.


Antoine
 
tedu

Chad said:
Speaking of long long, I tried this:

#include <stdio.h>
#include <math.h>

int main(void) {

int a = (int)pow(2.0 ,32.0);
double b = pow(2.0 , 32.0);
long long c = 4294967296;

printf("The value of a is: %u\n",a);
printf("The value of b is: %0.0f\n",b);
printf("The value of int is: %d\n", sizeof(int));
printf("The value of double is: %d\n", sizeof(double));

printf("The value of c is: %llu\n",c>>1);
}

The output is:
$gcc pw.c -o pw -lm
pw.c: In function `main':
pw.c:8: warning: integer constant is too large for "long" type
$./pw
The value of a is: 2147483648
The value of b is: 4294967296
The value of int is: 4
The value of double is: 8
The value of c is: 2147483648
$

try long long c = 4294967296LL;
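For instance, a cleaned-up sketch (assuming C99 for %zu, which matches
sizeof's size_t result; the (int)pow(2.0, 32.0) conversion from the
original is dropped, since 2^32 does not fit in a 32-bit int):

#include <stdio.h>
#include <math.h>

int main(void) {
    double b = pow(2.0, 32.0);
    long long c = 4294967296LL;  /* LL suffix: the constant has type long long */

    printf("The value of b is: %.0f\n", b);
    printf("sizeof(int) is: %zu\n", sizeof(int));
    printf("sizeof(double) is: %zu\n", sizeof(double));
    printf("The value of c>>1 is: %lld\n", c >> 1);  /* 2147483648 */
    return 0;
}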
 
Flash Gordon

Tim said:
Flash Gordon said:
Jordan Abel wrote: [snip]
regardless, the question i meant to ask for comp.std.c is still
unanswered - does the rule that allows va_arg to accept an unsigned for
signed and vice versa if it's in the right range, also apply to printf?
I would argue that it is undefined because in the specific case of
printf it is explicitly stated that if the type is different it is
undefined. After all, printf might not be implemented in C and so might
make assumptions that code written in C is not allowed to make. Although
I can't think how it could manage to break this.

The Standard doesn't say that if the type is different then it's
undefined; what it does say is that if the argument is not of the
correct type then it's undefined. Absent an explicit indication to
the contrary, the most sensible interpretation of "the correct type"
would (IMO) be "the correct type after taking into account the rules
for function argument transmission".

I agree that is a reasonable position.
Of course, other interpretations
are possible; I just don't find any evidence to support the theory
that any other interpretation is what the Standard intends.

I'm just an awkward sod sometimes ;-)
And, as I said in another response, the best way to get an
authoritative statement on the matter is to submit a Defect
Report.

I'm not bothered enough by this one.
 
Keith Thompson

Tim Rentsch said:
You mean implementation defined, not undefined. ("Implementation
defined" could mean raising an implementation defined signal in
this case, but still implementation defined.)

No, it's undefined.

C99 6.3.1.4p1:

When a finite value of real floating type is converted to an
integer type other than _Bool, the fractional part is discarded
(i.e., the value is truncated toward zero). If the value of the
integral part cannot be represented by the integer type, the
behavior is undefined.
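So, for example, (int)pow(2.0, 32.0) is undefined on an implementation
with 32-bit int. A conservative sketch of a checked conversion (it rejects
a few borderline values whose truncated part would still fit, which is
fine for illustration):

#include <limits.h>
#include <math.h>
#include <stdio.h>

int main(void) {
    double d = pow(2.0, 32.0);

    /* Converting is undefined unless the truncated value is
       representable (6.3.1.4p1), so test before converting. */
    if (d >= (double)INT_MIN && d <= (double)INT_MAX) {
        printf("fits: %d\n", (int)d);
    } else {
        printf("%.0f does not fit in int\n", d);
    }
    return 0;
}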
 
Keith Thompson

Chad said:
Okay, maybe I'm going a bit off topic here, but, I think I'm missing
it. When I go something like:

#include <stdio.h>
#include <math.h>

int main(void) {
int i = 0;
double sum = 0;

for (i = 0; i <= 30; i++) {
sum = pow(2.0, i) + sum;
}

printf("The value of c is: %0.0f\n",sum);

return 0;

}

The output is:
$./pw
The value of c is: 2147483647 (not 2147483648).

The way I understood this was that for 32 bits, pow(2.0, 31.0) would
look something like the following:

1111 1111 1111 1111 1111 1111 1111 1111

No. The pow() function returns a result of type double; specifically,
it's 2147483648.0. In the code above, nothing is converted to any
32-bit integer type; it's all double, so it doesn't make much sense to
talk about the binary representation.
The first bit would be signed. This means that the value should be the
sum of:
1*2^0 + 1*2^1 + ... + 1*2^30

Why is the value off by one?

The sign bit doesn't enter into this. Floating-point types do
typically have a sign bit, but all the values you're dealing with here
are representable, so all the computed values will match the
mathematical results.

You're computing

1.0 + 2.0 + 4.0 + 8.0 + ... + 1073741824.0

The result is 2147483647.0.
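Worked out, that's the geometric series 2^0 + 2^1 + ... + 2^30, whose sum
is 2^31 - 1 = 2147483647; the loop stops at i == 30 and never adds
2^31 = 2147483648 itself.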

This might be a little clearer if you use a "%0.1f" format. The
"%0.0f" format makes the numbers look like integers; "%0.1f" makes it
clear that they're floating-point.
 
Chuck F.

Chad said:
>
Okay, maybe I'm going a bit off topic here, but, I think I'm
missing it. When I go something like:

#include <stdio.h>
#include <math.h>

int main(void) {
int i = 0;
double sum = 0;

Why initialize these, when the initial values will never be used?
for (i = 0; i <= 30; i++) {
sum = pow(2.0, i) + sum;
}
printf("The value of c is: %0.0f\n",sum);
return 0;
}

The output is:
$./pw
The value of c is: 2147483647 (not 2147483648).

The way I understood this was that for 32 bits, pow(2.0, 31.0)
would look something like the following:

1111 1111 1111 1111 1111 1111 1111 1111

The first bit would be signed. This means that the value should
be the sum of:
1*2^0 + 1*2^1 + ... + 1*2^30

Why is the value off by one?

Because a double is not an integral object. It expresses an
approximation to a value, and the printf format has truncated the
value. Just change the format to "%f\n" to see the difference.
 
lawrence.jones

In comp.std.c Tim Rentsch said:
The Standard doesn't say that if the type is different then it's
undefined; what it does say is that if the argument is not of the
correct type then it's undefined. Absent an explicit indication to
the contrary, the most sensible interpretation of "the correct type"
would (IMO) be "the correct type after taking into account the rules
for function argument transmission".

Indeed. The difficulty of specifying the rules precisely is why the
committee weaseled out and used the fuzzy term "correct" instead of more
explicit language.

-Larry Jones

At times like these, all Mom can think of is how long she was in
labor with me. -- Calvin
 
Jordan Abel

No. printf() isn't required to use va_arg(). It must use something that
is binary-compatible with it, to permit separate compilation of code
that accesses printf() only indirectly, through a pointer. However,
whatever method it uses might have additional capabilities beyond those
defined by the standard for <stdarg.h>, capabilities not available to
strictly conforming user code.

However, it would be reasonable to think that the compatibility between
signed and unsigned integers where they have the same value is a
required part of the binary interface of variadic functions.
 
Keith Thompson

Chuck F. said:
Why initialize these, when the initial values will never be used?

These? The initial value of i isn't used; the initial value of sum is.
 
Wojtek Lerch

Jordan Abel said:
However, it would be reasonable to think that the compatibility between
signed and unsigned integers where they have the same value is a
required part of the binary interface of variadic functions.

It would be unreasonable to think that that wasn't the intention, because
footnote 31 clearly says it was; but would it be reasonable to deny that the
normative text lacks clear words that state that requirement, either
directly or by mentioning va_arg(), either in the description of the
function call operator or in the description of fprintf()? Is it really
reasonable to believe that it's clear enough that a requirement explicitly
stated in the description of one interface (the argument to printf() must
have the correct type) is overridden by a promise in the description of a
different interface (it's OK for the argument to va_arg() to have a slightly
different type), even though the two descriptions don't have any references
to each other?

Think about a subset of C defined by what remains from C99 after removing
footnote 31, all the contents of <stdarg.h>, and the few functions from
<stdio.h> that take a va_list argument. This would significantly reduce the
usefulness of variadic functions defined in programs; but would it change
the semantics of printf()? Do you think it would still be reasonable to
believe that this modified C required printf() to tolerate mixing signed
with unsigned?
 
Tim Rentsch

Keith Thompson said:
No, it's undefined.

C99 6.3.1.4p1:

When a finite value of real floating type is converted to an
integer type other than _Bool, the fractional part is discarded
(i.e., the value is truncated toward zero). If the value of the
integral part cannot be represented by the integer type, the
behavior is undefined.

You're absolutely right. Thank you for the correction.
 
Richard Heathfield

Chuck F. said:
Why initialize these, when the initial values will never be used?

I have read Keith's comment, but I'll address the question as if I had not
noticed it. I, personally, give objects a known, determinate initial value
when defining them because I think it makes a program easier to debug.
Twice now I've let indeterminate values screw up a production environment
under conditions that didn't occur in testing (which is a good indication
that neither the programming nor the testing was up to scratch). Twice is
twice too many. I'm not going to let that happen again.

And now add in Keith's comment. Since the value of sum given above /was/
used, to remove it arbitrarily (as some people may well have been tempted
to do if maintaining the code) after a brief perusal of the code had
*apparently* indicated that it was not used would have introduced a bug
that may well not have been spotted in testing.
 
Chuck F.

Keith said:
These? The initial value of i isn't used; the initial value of
sum is.

Mea culpa. However, I consider the proper initialization point to be
in the for statement, i.e. "for (i = 0, sum = 0; ...)". In other
words, as close as possible to the point of use.
 
kuyper

Chuck said:
Why initialize these, when the initial values will never be used?

The initial value of 'i' isn't used, but the initial value of 'sum'
certainly is.

I personally wouldn't intialize 'i', but some people argue that doing
so is a safety measure. In my personal experience, this "safety"
measure frequently prevents the symptoms of incorrectly written code
from being serious enough to be noticed, which in my opinion is a bad
thing. If my code uses the value of an object before that object has
been given the value it was supposed to have at that point, I'd greatly
prefer it if the value it uses is one that's likely to make my program
fail in an easily noticeable way. 0 is often not such a value. An
indeterminate one is more likely to produce noticeable symptoms. A
well-chosen specific initializer could be even better, except for the
fact that it gives the incorrect impression that the initializing value
was intended to be used. This can be fixed by adding in a comment:

int i = INT_MAX; // intended to guarantee problems if the
                 // program incorrectly uses this value

But I prefer the simplicity of:

int i;
Because a double is not an integral object. It expresses an
approximation to a value, and the printf format has truncated the
value. Just change the format to "%f\n" to see the difference.

Did you try that? I think you'll be surprised by the results. Just to
make things clearer, you might try using long double, and a LOT of
extra digits after the decimal point. If you've got a fully conforming
C99 implementation, it would be even clearer if you write it out in
hexadecimal floating point format.
Hint: it's not the program that's giving the wrong value for the sum of
this series.
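A minimal sketch of that suggestion (assuming a C99 printf with %a
support):

#include <stdio.h>
#include <math.h>

int main(void) {
    int i;
    double sum = 0;

    for (i = 0; i <= 30; i++)
        sum += pow(2.0, i);

    /* Extra digits and C99 hexadecimal floating-point output show the
       sum is exactly 2147483647; printf isn't truncating anything. */
    printf("%.10f\n", sum);  /* 2147483647.0000000000 */
    printf("%a\n", sum);     /* 0x1.fffffffcp+30 */
    return 0;
}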
 
