Code problem


jacob navia

I posted this to comp.std.c, but may be of interest here too:

Consider this:

extern void abort(void);
int main (void)
{
unsigned long long xx;
unsigned long long *x = (unsigned long long *) &xx;

*x = -3;
*x = *x * *x;
if (*x != 9)
abort ();
return(0);
}

lcc-win interprets
*x = -3;
as
*x = 4294967293;
since x points to an UNSIGNED long long.
I cast the 32 bit integer -3 into an unsigned integer
then I cast the result to an unsigned long long.

Apparently gcc disagrees.

Am I doing something wrong somewhere?

I should first cast into a long long THEN into an unsigned
long long?

Thanks for your help.
 

Ben Pfaff

jacob navia said:
int main (void)
{
unsigned long long xx;
unsigned long long *x = (unsigned long long *) &xx;

Why the cast? It should be unnecessary.
*x = -3;
*x = *x * *x;
if (*x != 9)
abort ();
return(0);
}

lcc-win interprets
*x = -3;
as
*x = 4294967293;
since x points to an UNSIGNED long long.

unsigned long long has to be at least 64 bits in size.
18446744073709551613 is the minimum correct value for *x here.
I cast the 32 bit integer -3 into an unsigned integer
then I cast the result to an unsigned long long.

I don't see any cast to unsigned int in the above program.
 

Richard Heathfield

jacob navia said:
I posted this to comp.std.c, but may be of interest here too:

Consider this:

extern void abort(void);
int main (void)
{
unsigned long long xx;
unsigned long long *x = (unsigned long long *) &xx;

Remove the unnecessary cast:

unsigned long long *x = &xx;

See 6.2.5(9).
*x = *x * *x;
if (*x != 9)
abort ();

It is deeply unlikely that *x will be 9 at this point.
return(0);
}

lcc-win interprets
*x = -3;
as
*x = 4294967293;
since x points to an UNSIGNED long long.

That's a bug in lcc-win. *x must have the value ULLONG_MAX - 2, and since
ULLONG_MAX must be at least 18446744073709551615, *x must be at least
18446744073709551613.
I cast the 32 bit integer -3 into an unsigned integer
then I cast the result to an unsigned long long.
Why?

Apparently gcc disagrees.

Am I doing something wrong somewhere?

Yes. If your article is an accurate description of lcc-win's behaviour,
your mistake is in using a non-conforming compiler.
I should first cast into a long long THEN into an unsigned
long long?

No, there is almost certainly no need to cast anything at all. The first
step is to identify the problem you are trying to solve, which is far from
clear. If you want to know the result of multiplying -3 by -3, why use an
unsigned type in the first place?
 

jameskuyper

jacob said:
I posted this to comp.std.c, but may be of interest here too:

Consider this:

extern void abort(void);
int main (void)
{
unsigned long long xx;
unsigned long long *x = (unsigned long long *) &xx;

*x = -3;
*x = *x * *x;
if (*x != 9)
abort ();
return(0);
}

lcc-win interprets
*x = -3;
as
*x = 4294967293;
since x points to an UNSIGNED long long.

The minimum value for ULLONG_MAX is 18446744073709551615, so you
should be getting a value of at least 18446744073709551613
I cast the 32 bit integer -3 into an unsigned integer
then I cast the result to an unsigned long long.

Why did you do that?
Apparently gcc disagrees.

Am I doing something wrong somewhere?
Yes.

I should first cast into a long long THEN into an unsigned
long long?

Why do you think that? Why are you using intermediaries?

You should be converting -3 directly to unsigned long long. You should
neither detour through unsigned int, nor detour through long long. The
result should be ULLONG_MAX+1-3.
 

Ben Pfaff

Richard Heathfield said:
jacob navia said:

It is deeply unlikely that *x will be 9 at this point.

2**64 - 3 == 18446744073709551613
(18446744073709551613)**2 = 340282366920938463352694142989510901769
340282366920938463352694142989510901769 % 2**64 = 9

At least according to the calculator I have here.
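For anyone who wants to check the same arithmetic in C rather than with a
calculator, here is a minimal sketch. The 2**64 figures above assume a 64-bit
unsigned long long; the final value 9 is guaranteed by the standard regardless
of width, since both the conversion and the multiplication are reduced modulo
ULLONG_MAX + 1.

#include <stdio.h>

int main(void)
{
    unsigned long long x = -3;  /* wraps to ULLONG_MAX - 2 (2**64 - 3 on a 64-bit type) */
    x = x * x;                  /* reduced modulo ULLONG_MAX + 1 */
    printf("%llu\n", x);        /* prints 9 */
    return 0;
}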
 

Eric Sosman

jacob said:
I posted this to comp.std.c, but may be of interest here too:

Consider this:

extern void abort(void);
int main (void)
{
unsigned long long xx;
unsigned long long *x = (unsigned long long *) &xx;

What is the cast for? (Hint: What is the type of
the expression `&xx'?)
*x = -3;
*x = *x * *x;
if (*x != 9)
abort ();
return(0);
}

lcc-win interprets
*x = -3;
as
*x = 4294967293;

That cannot possibly be correct. ULLONG_MAX is at
least 18446744073709551615, so ULLONG_MAX-2 (the required
result) is at least 18446744073709551614.
since x points to an UNSIGNED long long.
I cast the 32 bit integer -3 into an unsigned integer
then I cast the result to an unsigned long long.

That would be correct iff UINT_MAX == ULLONG_MAX.

Imagine converting a signed char to an unsigned long
by the analogous route. Let's assume an 8-bit char, a
16-bit int, and a 32-bit long (and because there's no
way to write a literal of type char, I'll need to use
a variable instead):

signed char sc = -3;
unsigned long ul = sc;

The procedure you've outlined would convert sc to an
unsigned int, getting 65533u, and then convert that value
to unsigned long, yielding 65533ul. Yet the correct
result is ULONG_MAX-2 == 4294967293. The intermediate
conversion has lost sign information that affects the
ultimate result.
Apparently gcc disagrees.

Am I doing something wrong somewhere?

I should first cast into a long long THEN into an unsigned
long long?

You should convert the signed int to unsigned long long
by adding or subtracting ULLONG_MAX+1 the appropriate number
of times: in this case, by adding it once.
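A runnable sketch of the difference between the two routes, assuming only that
unsigned int is narrower than unsigned long long (for example a 32-bit int
alongside a 64-bit unsigned long long):

#include <stdio.h>

int main(void)
{
    signed char sc = -3;

    /* Direct conversion: the value -3 wraps modulo ULLONG_MAX + 1. */
    unsigned long long direct = sc;

    /* Detour through unsigned int: the sign is consumed at the narrower
       width, so the later widening cannot recover it. */
    unsigned long long detour = (unsigned int)sc;

    printf("direct = %llu\n", direct);  /* ULLONG_MAX - 2 */
    printf("detour = %llu\n", detour);  /* UINT_MAX - 2, e.g. 4294967293 */
    return 0;
}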
 

jacob navia

Ben said:
Why the cast? It should be unnecessary.


unsigned long long has to be at least 64 bits in size.
18446744073709551613 is the minimum correct value for *x here.


I don't see any cast to unsigned int in the above program.

Of course not. The casts were the result of loading a 32-bit constant
and extending it to a 64-bit constant in assembly.

I was missing the sign extend in the process.

Conceptually however, what is
(unsigned)-3

???

supposing sizeof(int)=4
sizeof(long long)=8

-3 is 4294967293

When you write "-3", that is a signed integer constant; on my system
it is 32 bits, i.e. the above number.

When I do a sign extend, it works.
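Here is a sketch of the zero-extend versus sign-extend distinction expressed in
C, assuming a two's complement machine and the <stdint.h> fixed-width types:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int32_t n = -3;

    /* Zero extension: keep the 32 bits as they are, clear the upper 32. */
    uint64_t zero_ext = (uint32_t)n;           /* 4294967293 */

    /* Sign extension: widen the signed value first, then convert. */
    uint64_t sign_ext = (uint64_t)(int64_t)n;  /* 18446744073709551613 */

    printf("zero-extended: %llu\n", (unsigned long long)zero_ext);
    printf("sign-extended: %llu\n", (unsigned long long)sign_ext);
    return 0;
}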
 

jacob navia

Ben said:
2**64 - 3 == 18446744073709551613
(18446744073709551613)**2 = 340282366920938463352694142989510901769
340282366920938463352694142989510901769 % 2**64 = 9

At least according to the calculator I have here.

Yes, it should be 9. I was missing a sign extend when
converting a signed int into an unsigned long long.
 

jacob navia

jameskuyper said:
The minimum value for ULLONG_MAX is 18446744073709551615, so you
should be getting a value of at least 18446744073709551613


Why did you do that?


Why do you think that? Why are you using intermediaries?

You should be converting -3 directly to unsigned long long. You should
neither detour through unsigned int, nor detour through long long. The
result should be ULLONG_MAX+1-3.

Yes, I was missing a sign extend.

But the abstract problem is still not clear to me. I mean when I see
a number like

-3

unadorned this is a signed integer constant. Since I am assigning it to
an unsigned value, I reinterpret the bits as an unsigned (this is my
mistake probably) and then convert THAT into an unsigned long long.
 

jameskuyper

jacob navia wrote:
[...]
Conceptually however, what is
(unsigned)-3

It is a quantity which is utterly and completely irrelevant to this
code. Its value is UINT_MAX+1-3.
supposing sizeof(int)=4
sizeof(long long)=8

-3 is 4294967293

I can't figure out any sense in which that statement is true. What is
true is that

(unsigned long long)-3 == ULLONG_MAX+1-3

and sizeof(int) has no relevance to that answer whatsoever.
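A quick check of that, assuming only a C99 compiler with ULLONG_MAX available
from <limits.h>:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    printf("(unsigned)-3           = %u\n", (unsigned)-3);              /* UINT_MAX - 2 */
    printf("(unsigned long long)-3 = %llu\n", (unsigned long long)-3);  /* ULLONG_MAX - 2 */
    printf("ULLONG_MAX - 2         = %llu\n", ULLONG_MAX - 2);
    return 0;
}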
 

jacob navia

Richard said:
Yes. If your article is an accurate description of lcc-win's behaviour,
your mistake is in using a non-conforming compiler.

????

Bugs are non-conforming by definition...

:-)

No, there is almost certainly no need to cast anything at all. The first
step is to identify the problem you are trying to solve, which is far from
clear. If you want to know the result of multiplying -3 by -3, why use an
unsigned type in the first place?

Because it is part of a bigger program that I received, and after some
hours of debugging I isolated the code that exposes the bug in my
software (lcc-win)!

I assure you that I know the multiplication tables, but thanks
for the answer anyway.
 

Richard Heathfield

Ben Pfaff said:
2**64 - 3 == 18446744073709551613
(18446744073709551613)**2 = 340282366920938463352694142989510901769
340282366920938463352694142989510901769 % 2**64 = 9

I did the math before posting, and removed an entire paragraph about this
(further down the article), but evidently I omitted to remove the above
sentence. Oops, sorry etc.
 

Richard Heathfield

Eric Sosman said:
ULLONG_MAX is at
least 18446744073709551615, so ULLONG_MAX-2 (the required
result) is at least 18446744073709551614.

ITYM 18446744073709551613
 

Richard Tobin

Is there supposed to be any significance to using *x instead of xx?
The types of *x and xx are the same, so the same conversions should
apply.
jacob navia said:
But the abstract problem is still not clear to me. I mean when I see
a number like

-3

unadorned this is a signed integer constant.

Strictly speaking, there are no negative integer constants. The syntax
for integer constants doesn't allow minus signs. It's a constant
expression of type int (the result of applying unary minus to the
integer constant 3).
Since I am assigning it to
an unsigned value, I reinterpret the bits as an unsigned (this is my
mistake probably) and then convert THAT into an unsigned long long.

Yes, this is your mistake. You have an int that you are assigning to
an unsigned long long, so you should do that conversion in a single
step.

The final answer will be 9 as your code implies, since the result of
the conversion will be equal to -3 (mod N), where log2(N) is the
number of bits in an unsigned long long, and if

a = A (mod N) and b = B (mod N)
then
a*b = A*B (mod N)

-- Richard
 

Richard Tobin

jacob navia said:
Bugs are non-conforming by definition...

Not necessarily - a bug in code that implements undefined behaviour
just results in some other undefined behaviour, which is equally
conformant :)

-- Richard
 

Eric Sosman

jacob said:
[...]
But the abstract problem is still not clear to me. I mean when I see
a number like

-3

unadorned this is a signed integer constant. Since I am assigning it to
an unsigned value, I reinterpret the bits as an unsigned (this is my
mistake probably) and then convert THAT into an unsigned long long.

Yes, that's the error. Conversion from one type to another
involves the *value* being converted, not its representation.
As a related example consider

signed char sc = -1;
int a = sc;
unsigned int b = sc;

In neither case will "reinterpret the bits and then convert"
produce the correct answer. What you're doing is more akin to

unsigned int c = (unsigned char)sc;
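A runnable version of that example, assuming an 8-bit char so that
(unsigned char)sc is 255:

#include <stdio.h>

int main(void)
{
    signed char sc = -1;
    int a = sc;                          /* value preserved: -1 */
    unsigned int b = sc;                 /* value wraps modulo UINT_MAX + 1: UINT_MAX */
    unsigned int c = (unsigned char)sc;  /* bits reinterpreted first, then widened: 255 */

    printf("a = %d\nb = %u\nc = %u\n", a, b, c);
    return 0;
}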
 

Jack Klein

jacob navia said:
I posted this to comp.std.c, but may be of interest here too:

Consider this:

extern void abort(void);
int main (void)
{
unsigned long long xx;
unsigned long long *x = (unsigned long long *) &xx;

*x = -3;
*x = *x * *x;
if (*x != 9)
abort ();
return(0);
}

lcc-win interprets
*x = -3;
as
*x = 4294967293;
since x points to an UNSIGNED long long.
I cast the 32 bit integer -3 into an unsigned integer
then I cast the result to an unsigned long long.

Apparently gcc disagrees.

Am I doing something wrong somewhere?

Yes, I believe you are.

The C standard's wording on initializing scalars (6.7.8 P11) states:

"The initializer for a scalar shall be a single expression, optionally
enclosed in braces. The initial value of the object is that of the
expression (after conversion); the same type constraints and
conversions as for simple assignment apply, taking the type of the
scalar to be the unqualified version of its declared type."

Referring to "Simple assignment", 6.5.16.1 P2:

"In simple assignment (=), the value of the right operand is converted
to the type of the assignment expression and replaces the value stored
in the object designated by the left operand."

Putting these together, the integer constant expression -3, of type
int, is converted to type unsigned long long. There are no
intermediate conversions specified or implied to unsigned int or
signed long long. Your compiler might take these intermediate steps
under the as-if rule, but only if they produce the same result as a
direct conversion.

And the correct result is (ULLONG_MAX + 1) - 3;
I should first cast into a long long THEN into an unsigned
long long?

No, I think not. What would the result be if you had written either
of these:

*x = -3ULL;

...or:

*x = (unsigned long long)-3;

I think you are getting hung up on the details of how you code the
conversion in your compiler, and losing sight of the meaning of the
expression in the language.

The conversion, like all such in C, is defined in terms of value, not
of steps or types to achieve it.

As a practical matter, I suspect the simplest method to get the
correct result would be to convert the signed int constant to signed
long long, then to unsigned long long.

Note the following program, and its output when run in VS 2005
Express, which does not support much of C99 but does support the long
long types:

#include <stdlib.h>
#include <stdio.h>

int main(void)
{
unsigned long long x = (unsigned int)-3;
unsigned long long y = (unsigned long long)-3;
unsigned long long z = -3;
unsigned long long a = (long long)-3;
printf("x = %llu\ny = %llu\nz = %llu\na = %llu\n",
x, y, z, a);
return 0;
}

Output:

x = 4294967293
y = 18446744073709551613
z = 18446744073709551613
a = 18446744073709551613

So I suspect your compiler will generate the proper value using the
signed int to signed long long to unsigned long long series of
conversions.
Thanks for your help.

You're welcome.

--
Jack Klein
Home: http://JK-Technology.Com
FAQs for
comp.lang.c http://c-faq.com/
comp.lang.c++ http://www.parashift.com/c++-faq-lite/
alt.comp.lang.learn.c-c++
http://www.club.cc.cmu.edu/~ajo/docs/FAQ-acllc.html
 

somenath

Ben Pfaff said:
2**64 - 3 == 18446744073709551613
(18446744073709551613)**2 = 340282366920938463352694142989510901769
340282366920938463352694142989510901769 % 2**64 = 9

At least according to the calculator I have here.


I beg your pardon for asking a basic question in the middle of a
high-level technical discussion. I am sorry if it breaks the flow of
the discussion.

My doubt is about the following lines and the result.

1) *x = -3;
2) *x = *x * *x;

After executing line 1), *x will be equal to at least
18446744073709551613. And after executing line 2), *x is 3. But my
doubt is: how is that possible?

My doubt is: while executing *x * *x, is *x again converted to 3? Is
that why 3 * 3 is 9? If yes, why is it required? Because now *x is not
negative, so it is not required to be converted to unsigned.
 

James Kuyper

Actually, it's guaranteed to be 9.
I beg your pardon for asking a basic question in the middle of a
high-level technical discussion. I am sorry if it breaks the flow of
the discussion.

My doubt is about the following lines and the result.

1) *x = -3;
2) *x = *x * *x;

After executing line 1), *x will be equal to at least
18446744073709551613. And after executing line 2), *x is 3. But my
doubt is: how is that possible?

*x isn't 3 at that point. It should be 9.
My doubt is: while executing *x * *x, is *x again converted to 3? Is
that why 3 * 3 is 9? If yes, why is it required?

No. It's a little more interesting than that. All of the following
expressions are intended to be interpreted mathematically, rather than
as C expressions that could (and would) overflow. The value that should
be stored in *x in step 1 is obtained by adding ULLONG_MAX + 1 to -3 as
many times as are needed to generate a value between 0 and ULLONG_MAX,
inclusive. In this case, it only has to be added one time:

ULLONG_MAX + 1 - 3

Now, let's calculate the mathematical value of the square of that value:

(ULLONG_MAX + 1)^2 - 2*3*(ULLONG_MAX + 1) + 9

= (ULLONG_MAX - 5)*(ULLONG_MAX + 1) + 9

The value that is actually stored in *x by step 2 is obtained from that
mathematical value by (conceptually) subtracting ULLONG_MAX + 1 as many
times as needed until the result is between 0 and ULLONG_MAX, inclusive.
I hope it's clear that it needs to be subtracted exactly ULLONG_MAX-5
times, giving a result of 9. This isn't a coincidence, but a normal
consequence of modulus arithmetic. In reality, of course, no
subtractions are actually carried out; the required result is obtained
naturally as a result of properly implemented unsigned multiplication.
The explanation given above can be generalized to prove that

((a mod c) * (b mod c)) mod c = (a*b) mod c

(I hope I got the modulus notation right - it's been nearly three
decades since I last used it)
In this case, a and b are -3, and c is ULLONG_MAX + 1.
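A small numeric check of that congruence for this particular case, taking c to
be ULLONG_MAX + 1 (the names lhs and rhs are just for illustration):

#include <stdio.h>

int main(void)
{
    /* Reduce first: (-3) mod c is ULLONG_MAX - 2, then multiply modulo c. */
    unsigned long long reduced = -3;
    unsigned long long lhs = reduced * reduced;

    /* Multiply first: (-3)*(-3) = 9, which is already in [0, c). */
    unsigned long long rhs = (-3) * (-3);

    printf("lhs = %llu, rhs = %llu\n", lhs, rhs);  /* both print 9 */
    return 0;
}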
 
