C arithmetic question.


88888 Dihedral

Why do we have to condemn some novice kid, who has written C code fewer than ten times, for not understanding that even two positive integers "added" in C can become a negative integer under C's signed integer representations?

Of course, if that comes from a C programmer at some company as a joke,
just take it easy.
 

Keith Thompson

Michael Press said:
I already said what I expect from the % operation.

Yes, and the % operator defined by the C language doesn't meet your
expectations.

What now?
 
Kaz Kylheku

The mod operator does not violate that law.
Your problem is not with the mod operator, it's
that the numbers that are given to the mod operator
are not what you think they are.

You consider that more important than
the mod operator being consistent. I do not.

what do you want % to do?

3 % 4 == 3 /* everyone agrees, hopefully */
3 % -4 == ?
-3 % 4 == ?
-3 % -4 == ?

Common Lisp has two functions: mod and rem. (mod 3 4) and (rem 3 4) are both 3.
But:

[1]> (mod -3 -4)
-3
[2]> (mod -3 4)
1
[3]> (mod 3 -4)
-1
[4]> (rem -3 -4)
-3
[5]> (rem -3 4)
-3
[6]> (rem 3 -4)
3

In some languages, there is a mod which always returns the least positive
residue. (But what does that even mean when the modulus is negative?)
Note that the Lisp mod does give you a positive least residue value if the
modulus is positive.

Would you want the C % operator to be like Lisp's mod?

But note that in Lisp, there are two divisions too: mod pairs with floor (which
truncates toward minus infinity), and rem pairs with truncate (which truncates
toward zero).

Just like in C we have "(a/b)*b + a%b shall equal a", in Lisp we have
a similar relationship between floor and mod, and truncate and rem:

(defun mod-test (a b)
(values (= a (+ (* b (floor a b)) (mod a b)))
(= a (+ (* b (truncate a b)) (rem a b)))))

(mod-test 3 -4) -> T ; T
(mod-test 3 4) -> T ; T
(mod-test -3 -4) -> T ; T
(mod-test -3 4) -> T ; T

As of C99, it appears that C's operators are like truncate and rem.

I think that if you're only going to have one kind of truncating division
and one kind of mod, it's best to have a division which truncates toward zero,
and a modulus which is like the Lisp mod: the result is always nonnegative
if the modulus is positive, giving you a least positive residue.

Does the equality (a/b)*b + a%b == a have any practical value?
 
James Kuyper

James Kuyper said:
[-58ul % 37 == 23]
...
23 cannot be the right answer because
37 does not divide -58 - 23 = -81.

You have the explanation in the other posts. ...
The lesson is to use a library designed to
do arithmetic rather than expecting it from C.

Do you really believe that a%(long)b is so much more complicated than
a%b that a special library is called for?

I would guess that you are probably thinking of this as just one issue
among many, the totality of which justifies use of a special library -
so what are those other issues? How would the behavior of that library
differ from that of ordinary C arithmetic?

I already said what I expect from the % operation.

So, your unmet expectations for the % operation are the totality of your
objections to C arithmetic? It seems overkill to condemn all of C
arithmetic on the basis of one operator; or, for that matter, to favor
using an entire library of functions as a work-around for such a
problem. Using the new C11 _Generic feature, you could create a single
type-generic function to replace that operator.
 
James Kuyper

James Kuyper said:
On 01/18/2012 06:51 PM, Michael Press wrote:

[-58ul % 37 == 23]
...
23 cannot be the right answer because
37 does not divide -58 - 23 = -81.

You have the explanation in the other posts.
...
The lesson is to use a library designed to
do arithmetic rather than expecting it from C.

Do you really believe that a%(long)b is so much more complicated than
a%b that a special library is called for?

I would guess that you are probably thinking of this as just one issue
among many, the totality of which justifies use of a special library -
so what are those other issues? How would the behavior of that library
differ from that of ordinary C arithmetic?

I already said what I expect from the % operation.

So, your unmet expectations for the % operation are the totality of your
objections to C arithmetic? It seems overkill to condemn all of C
arithmetic on the basis of one operator; or, for that matter, to favor
using an entire library of functions as a work-around for such a
problem. Using the new C11 _Generic feature, you could create a single
type-generic function to replace that operator.

Let me expand upon that comment. The key reason for your unmet
expectations in this case was the "usual arithmetic conversions"
(6.3.1.8). Those same conversions apply to many other arithmetic
operators: *, /, +, -, <, <=, >, >=, ==, !=, bitwise &, ^, bitwise |,
?:. The corresponding compound assignment operators, and the ++ and --
operators, all have their behavior defined in terms of those operators, so
they perform the usual arithmetic conversions, too. Is the % operator
really the only one where the usual arithmetic conversions cause a
result that doesn't meet your expectations? That seems quite implausible
to me.
 
Kenny McCormack

Yes, and the % operator defined by the C language doesn't meet your
expectations.

What now?

Well, obviously, fix the C standard.

I can't see how anyone could question that that is what MP would most like
to see happen. Whether or not the rest of us agree isn't really the point,
now, since it seems clear that you somehow don't understand what Mike was
saying.

HTH. HAND.

--
Religion is regarded by the common people as true,
by the wise as foolish,
and by the rulers as useful.

(Seneca the Younger, 65 AD)
 

jacob navia

Le 20/01/12 23:32, James Kuyper a écrit :
Let me expand upon that comment. The key reason for your unmet
expectations in this case was the "usual arithmetic conversions"
(6.3.1.8). Those same conversions apply to many other arithmetic
operators: *, /, +, -, <, <=, >, >=, ==, !=, bitwise &, ^, bitwise |,
?:. The corresponding compound assignment operators, and the ++ and --
operators, all have their behavior defined in terms of those operators, so
they perform the usual arithmetic conversions, too. Is the % operator
really the only one where the usual arithmetic conversions cause a
result that doesn't meet your expectations? That seems quite implausible
to me.

Computer arithmetic has nothing to do with maths. For instance:

A+B is different from A for all B different than zero.

A+B is equal to A if A is a floating point number and B is smaller than
the floating point epsilon for that type.

Or A + B is different than B + A.

And I could go on for a while.

Mathematics is concerned with numbers as abstract entities that have
abstract properties that can't be reproduced in a finite machine with
finite, time-limited operations.

Those are the basics. Now, some systems like Maple or Maxima come closer
to the maths we are used to, and should be used when you do maths.

C is a computer language that doesn't shield you from the concrete
machine you are using. It is the lowest "high level" language in
existence, always just a bit above assembler.

To expect to find "sane" mathematical laws here shows only a certain
ignorance on Mr Press's part, especially if he uses unsigned numbers and
then mixes them with negative values...

Unsigned numbers correspond to the range 0 to UINT_MAX, and can't be used
with negative values. That should be obvious too, but apparently
Mr Press doesn't know that. If he is disappointed with C, I would
recommend that he use Maxima, a free mathematical system available
to all, written in Lisp.

jacob
 
Geoff

$ cat hunh.c
#include <stdio.h>

int
main(void)
{
    long int a = -58;
    unsigned long int b = 37;

    a %= b;
    printf("%ld\n", a);
    return 0;
}
$ cc -std=c99 -Wall -pedantic -o hunh hunh.c
$ ./hunh
23
$ cc -v
Using built-in specs.
Target: i686-apple-darwin9
Configured with: /var/tmp/gcc/gcc-5465~16/src/configure --disable-checking -enable-werror --prefix=/usr --mandir=/share/man --enable-languages=c,objc,c++,obj-c++ --program-transform-name=/^[cg][^.-]*$/s/$/-4.0/ --with-gxx-include-dir=/include/c++/4.0.0 --with-slibdir=/usr/lib --build=i686-apple-darwin9 --with-arch=apple --with-tune=generic --host=i686-apple-darwin9 --target=i686-apple-darwin9
Thread model: posix
gcc version 4.0.1 (Apple Inc. build 5465)

---

23 cannot be the right answer because
37 does not divide -58 - 23 = -81.

Is this a bug in the compiler or a problem with the C standard?

Neither. Your program has a bug. Your program output changes depending
on the implementation. In 32-bit mode where you are compiling on OS X
Tiger, the result is 23. In 64-bit mode on OS X Lion, the result is
28. Neither answer is correct for reasons that others have explained.

The obligation of the programmer is to understand and consider the
rules of the language and the implementation and to code accordingly.
This is the contract between the language and the programmer. The
programmer breaks the contract at his own peril.
 
Michael Press

Kaz Kylheku said:
what do you want % to do?

x % y gives an element of x + yZ.
3 % 4 == 3 /* everyone agrees, hopefully */
3 % -4 == ?
-3 % 4 == ?
-3 % -4 == ?

Common Lisp has two functions: mod and rem. (mod 3 4) and (rem 3 4) are both 3.
But:

[1]> (mod -3 -4)
-3

-4 | -3 - (-3)
[2]> (mod -3 4)
1

4 | -3 - 1
[3]> (mod 3 -4)
-1

-4 | 3 - (-1)
[4]> (rem -3 -4)
-3

-4 | -3 - (-3)
[5]> (rem -3 4)
-3

4 | -3 - (-3)
[6]> (rem 3 -4)
3

-4 | 3 - 3

In some languages, there is a mod which always returns the least positive
residue. (But what does that even mean when the modulus is negative?)
Note that the Lisp mod does give you a positive least residue value if the
modulus is positive.

Would you want the C % operator to be like Lisp's mod?

I do not know.
But note that in Lisp, there are two divisions too: mod pairs with floor (which
truncates toward minus infinity), and rem pairs with truncate (which truncates
toward zero).

There could be a third, truncating toward +oo.
Just like in C we have "(a/b)*b + a%b shall equal a", in Lisp we have
a similar relationship between floor and mod, and truncate and rem:

(defun mod-test (a b)
(values (= a (+ (* b (floor a b)) (mod a b)))
(= a (+ (* b (truncate a b)) (rem a b)))))

(mod-test 3 -4) -> T ; T
(mod-test 3 4) -> T ; T
(mod-test -3 -4) -> T ; T
(mod-test -3 4) -> T ; T

As of C99, it appears that C's operators are like truncate and rem.

I think that if you're only going to have one kind of truncating division
and one kind of mod, it's best to have a division which truncates toward zero,
and a modulus which is like the Lisp mod: the result is always nonnegative
if the modulus is positive, giving you a least positive residue.

Does the equality (a/b)*b + a%b == a have any practical value?

The practical value is that y | (x - (x % y)).
 
Markus Wichmann

Le 20/01/12 23:32, James Kuyper a écrit :

Computer arithmetic has nothing to do with maths. For instance:

A+B is different from A for all B different than zero.

That's mathematically correct, but not what happens in a computer,
since, if A is of type unsigned int, then A + UINT_MAX + 1 is actually
equal to A.
A+B is equal to A if A is a floating point number and B is smaller than
the floating point epsilon for that type.

Not quite. A+B is equal to A if A is a floating point number and abs(B)
is smaller than the floating point epsilon for that type _times_A_. Or
more accurately: the epsilon scaled by the exponent of A.
Or A + B is different than B + A.

Erm... no, that actually can't happen. There was a discussion in the
German C newsgroup about this a year ago or so, which resulted in
finding that it follows from the C standard that the addition is indeed
commutative, but not associative. That is

a + b + c == (a + b) + c == c + (b + a) !=* c + b + a

* = i.e. not always equal. There may be instances of a, b and c where
this holds, but it doesn't have to.
And I could go on for a while.

If you have a signed number s and an unsigned number u, and you want to
know if s is smaller than u, the expression for that is

(s < 0 || s < u)

because if s < 0 the comparison s < u might produce unexpected results.
(Because if s == -1, then you are actually comparing UINT_MAX to u.)
Mathematics is concerned with numbers as abstract entities that have
abstract properties that can't be reproduced in a finite machine with
finite, time-limited operations.

Actually much of it can. Sure, real numbers are a PITA. Like, all my
approximations for the sine can only produce algebraic numbers, while
most values of the sine are actually transcendental. But the computer
doesn't care, since it cannot exactly represent all algebraic or
transcendental numbers anyway.

But the definition of the C comparison operators defies much mathematics
by staying simple: The "usual arithmetic conversions" take place before
applying _any_ operator, so why not on the comparison operators?
Those are the basics. Now, some systems like Maple or Maxima come closer
to the maths we are used to, and should be used when you do maths.

Or perhaps you should think about what you really want from your
computer before writing a program? Just killing the problem by throwing
a library at it without thinking isn't really good style.
C is a computer language that doesn't shield you from the concrete
machine you are using. It is the lowest "high level" language in
existence, always just a bit above assembler.

That argument is so overused: C is actually pretty far removed from the
machine it runs on. I mean, every CPU I know has a flags register, and
if I want to know whether the addition that just executed overflowed,
I can check it. In assembly. Not in C. In C, a signed overflow is IIRC
undefined behaviour and may even result in an interrupt. Even if it does
not, it is a PITA to find out whether the addition overflowed if you
didn't save the values beforehand. Not to mention multiplications!
And really not to mention complex expressions that may still be
critical. AFAIK I'd have to hand-compile them into three-operand form
and do an overflow check between each step!
To expect to find "sane" mathematical laws here shows only a certain
ignorance on Mr Press's part, especially if he uses unsigned numbers and
then mixes them with negative values...

Yeah, that's true.
Unsigned numbers correspond to the range 0 to UINT_MAX, and can't be used
with negative values. That should be obvious too, but apparently
Mr Press doesn't know that.

Perhaps he's just sulking that his compiler should have warned him or
something...
If he is disappointed with C, I would
recommend that he use Maxima, a free mathematical system available
to all, written in Lisp.

If he's disappointed with C, I suggest he use Python or any other
language out there that does a better job of hiding the computer and
emulating the mathematics machine.

Because that's the trouble with C: it actually does exactly what the
programmer tells it to, no more, no less; whereas languages like Python
and C++ do a lot of stuff behind the programmer's back that (they say)
he doesn't need to care about. It's another philosophy.

Ciao,
Markus
 
Ben Pfaff

jacob navia said:
Computer arithmetic has nothing to do with maths. For instance:

A+B is different from A for all B different than zero.

This is also true in C arithmetic, if A and B both have the same
unsigned integer type.
 

Keith Thompson

Markus Wichmann said:
On 20.01.2012 23:52, jacob navia wrote: [...]
Computer arithmetic has nothing to do with maths. For instance:

A+B is different from A for all B different than zero.

That's mathematically correct, but not what happens in a computer,
since, if A is of type unsigned int, then A + UINT_MAX + 1 is actually
equal to A.

But that's really (A + UINT_MAX) + 1, two separate additions. There is
no non-zero value of B for which A + B differs from A. (UINT_MAX + 1
*is* zero.)

[...]
Erm... no, that actually can't happen. There was a discussion in the
German C newsgroup about this a year ago or so, which resulted in
finding that it follows from the C standard that the addition is indeed
commutative, but not associative.

I'm curious how that conclusion was reached. For implementations
that don't define __STDC_IEC_559__, I don't think the floating-point
model is restrictive enough to ensure commutativity. (But I
wouldn't be surprised if addition is commutative in all existing
implementations.)

[...]
Actually much of it can. Sure, real numbers are a PITA. Like, all my
approximations for the sine can only produce algebraic numbers, while
most values of the sine are actually transcendental. But the computer
doesn't care, since it cannot exactly represent all algebraic or
transcendental numbers anyway.

All representable floating-point numbers are not merely algebraic, but
rational. (For example, sqrt(2.0) is algebraic but not rational.)

[snip]
 
Seebs

A small signed, negative value *can't* be converted to unsigned format, no
matter how wide the number.

Sure it can. The conversion happens by mapping it into a positive number. :)

-s
 
Seebs

You consider that more important than
the mod operator being consistent. I do not.

I am really wondering if you are hearing what people are saying.

The mod operator is absolutely consistent with that law. There are no
cases in which the mod operator does not produce the behavior so described...
*BUT*, in some cases the numbers it operates on are not the numbers you see
in the source.

-s
 
jacob navia

Le 21/01/12 19:23, Markus Wichmann a écrit :
Erm... no, that actually can't happen.

Yes, it can, if you are using the x86 with the FPU in 80-bit precision.
If you store the intermediate result a+b in memory, it will be
truncated to a 64-bit double. If you don't, it will retain its
80-bit precision, so when you reload the number from memory
it can be different from the one you started with!

The gcc compiler has an option, -ffloat-store, to avoid
this kind of problem, but it decreases the speed of
floating point calculations.

Welcome to the world of computer arithmetic!

:)
 
Ben Bacarisse

Seebs said:
I am really wondering if you are hearing what people are saying.

The mod operator is absolutely consistent with that law. There are no
cases in which the mod operator does not produce the behavior so described...
*BUT*, in some cases the numbers it operates on are not the numbers you see
in the source.

I don't think this is true. The problematic conversions are part of the
definition of the % operator. Not all operators include it in their
definition (for example, << and >> only do integer promotions) and %
could have been defined differently. That would have been odd, to say
the least, but it's the C definition of % that was causing the OP's
surprise, arithmetic conversions and all.
 

Ben Bacarisse

Robert Wessel said:
I don't know. The conversion rules are not different so much, as
they're subsetted based on the fact that the shift (and some other)
operators only take integral arguments.

I'm not sure what you mean. % is odd in that it takes only integer
arguments yet it performs the usual arithmetic conversions designed to
establish "a common real type" for the operands.

The more I think about it, the odder % looks. It could be defined like
the shift operators to do only integer promotions and to yield a value
whose type is that of the promoted left operand. That would give it
more natural semantics, and, contrary to what I said above, would not be
odd at all.
 
Markus Wichmann

Le 21/01/12 19:23, Markus Wichmann a écrit :

Yes, if you are using the x86 with the FPU in 80 bits precision.
If you store the intermediate result a+b in memory, it will be
truncated to 64 bits float. If you don't, it will retain its
80 bits precision, so when you reload the number from memory
it can be different from the one you started!

Well... so what? The question was whether there are values for A and B
such that A + B != B + A. If the compiler has to store an intermediate in
memory when executing A + B, it will have to do the same when computing
B + A.

No-one ever said that A + B - A == B at all times. However, it should
hold at all times when you restrict yourself to using integers. But,
then again, signed-to-unsigned conversion may ruin your day here.

Also, extended precision only really matters if A and B are long doubles
to begin with, doesn't it? At least with addition and subtraction the
additional bits shouldn't really matter.
The gcc compiler has an option -ffloat-store to avoid
this kind of problems but it decreases the speed of
floating point calculations.

Welcome to the world of computer arithmetic!

:)

Well, there's also -mpc64, setting the precision control of the x87 to
64 bits. However, you don't need to tell me that floating-point
computations are always inexact, I know that myself.

Ciao,
Markus
 
Ben Bacarisse

Markus Wichmann said:
No-one ever said that A + B - A == B at all times. However, it should
hold at all times when you restrict yourself to using integers. But,
then again, signed-to-unsigned conversion may ruin your day here.

C does not guarantee that A + B - A == B even when restricted to integer
types. It does apply when the arithmetic is unsigned, but for signed
integer types, A + B can overflow before A is subtracted, and the
overflow of signed integers is undefined.

<snip>
 

Kaz Kylheku

and "The operands of the % operator shall have integer type / The
usual arithmetic conversions are performed on the operands" ( % )

any different in result?

As there is no "usual" promotion from any non-integer type
(pointers/floats) *to* an integer type, these would appear to be the
same in all cases.

The usual arithmetic conversions can convert one argument to the type
of the other.

Promotions happen in isolation: e.g. char turns to int (or, on
rare implementations, unsigned int).

Consider:

(unsigned long) OP (char)

promotions call for:

(unsigned long) OP (int)

usual conversions:

(unsigned long) OP (unsigned long)

And now a negative int value becomes a positive operand.
 
