Math.abs


Richard F.L.R.Snashall

Oliver said:
This is true assuming we allow standard Java overflow/underflow
behaviour.

It seems that the integers and multiplication do NOT form a group,
as we don't have closure.

But then Ada (apologies if that is a cuss word in this group:-0) calls
it modular arithmetic.
 

Mark Thornton

Chris said:
Eric Sosman wrote:




Probably not "enormously". "Measurably", I think, is a fairer description.
There are programming languages which don't suffer from integer overflow, and
they are not hugely slower than Java.

It's waaaay too late now, but I think that using primitive types /by default/
was a mistake in Java's design (possibly swayed by a desire to run on
resource-limited devices). "Proper", non-overflowing, real object, integers
can be implemented pretty efficiently (as has been known for decades). And if
the Java guys had wanted to allow primitive types /too/, as an optimised form
available to careful and knowledgeable programmers, then I don't see much wrong
with that.

-- chris

There was also a desire to have an easy migration from C/C++. Having
basic arithmetic behave differently would have made this more difficult.
Especially with those algorithms that take deliberate advantage of the
way arithmetic works in such languages.

Mark Thornton
 

Roedy Green

Probably not "enormously". "Measurably", I think, is a fairer description.
There are programming languages which don't suffer from integer overflow, and
they are not hugely slower than Java.

How about "considerably".

quoting from http://mindprod.com/jgloss/overflow.html

Overflow occurs when the result of an integer multiply, add or
subtract cannot fit in 32 bits. Java just quietly drops the high order
bits. There is no exception. If you need to detect overflow, you would
use long operands and examine the high order 32 bits of the result. If
there was no overflow, they would be all 0 or all 1. To detect
overflow from two longs would require testing the range of the
operands before the operation and/or testing the sign of the result.
Why does Java ignore overflow? Most computer hardware has no ability
to automatically generate an interrupt on overflow. And some hardware
has no ability to detect it. Java would have to explicitly test for it
on every add, subtract and multiply, greatly slowing down production.
Further, ex-C programmers are very used to this cavalier ignoring of
overflow, and commonly write code presuming that high order bits will
be quietly dropped.

The Pentium has hardware overflow detect but no hardware interrupt. So
if Java were to support overflow detection, the JVM implementation
would need to add a JO "jump on overflow" instruction after every add
and subtract, and special code to look at the high order bits of the
32x32->64 bit multiply. 64/32->32 bit division might need special
handling.
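The long-operand test the glossary describes might look like this in Java (class and method names are mine, for illustration):

```java
// Sketch of the glossary's technique: do the add in 64 bits, then check
// whether the high-order 32 bits are pure sign extension (all 0s or all
// 1s), i.e. whether the result still fits in an int.
public class OverflowCheck {
    static boolean addOverflows(int a, int b) {
        long result = (long) a + (long) b;   // exact sum, no wrapping
        return result != (int) result;       // true iff the high bits carry information
    }

    public static void main(String[] args) {
        System.out.println(addOverflows(1, 2));                 // prints false
        System.out.println(addOverflows(Integer.MAX_VALUE, 1)); // prints true
    }
}
```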
 

Roedy Green

It could have been handled by defining an overflowing int operation to
produce a long result. But that would have slowed down the basic
operations considerably.
 

Chris Uppal

Mark said:
There was also a desire to have an easy migration from C/C++. Having
basic arithmetic behave differently would have made this more difficult.

That sounds plausible.

Especially with those algorithms that take deliberate advantage of the
way arithmetic works in such languages.

Which makes the (bloody inconvenient -- at best) omission of unsigned types
even harder to comprehend.

/Signed/ bytes -- phah !

-- chris
 

Thomas G. Marshall

Chris Uppal coughed up:
That sounds plausible.



Which makes the (bloody inconvenient -- at best) omission of unsigned
types even harder to comprehend.

/Signed/ bytes -- phah !

-- chris


I agree, however, I have to wonder if they were attempting to (except for
characters) stave off any issues re: signed + unsigned combinations.
{shrug}
 

Googmeister

Roedy said:
It could have been handled by defining an overflow int to be a long
result. But that would have slowed down the basic operations
considerably

In mathematics, (set of all integers, *) is not a group either, because
most elements lack multiplicative inverses. However, (set of all
integers, +, *) is a commutative ring.

In Java, (int, *) is closed. It's not a group because of the
inverse operation. However, (int, +, *) is a commutative
ring since it does integer arithmetic mod 2^32 (where the
elements are named -2^31 to 2^31 - 1 instead of the usual
0 to 2^32 - 1).
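The mod-2^32 behaviour described here is easy to demonstrate; it is also exactly what bites Math.abs, the subject of this thread:

```java
// int arithmetic is mod 2^32, so results wrap instead of growing.
public class ModRing {
    public static void main(String[] args) {
        System.out.println(Integer.MAX_VALUE + 1);       // prints -2147483648: wraps to MIN_VALUE
        System.out.println(65536 * 65536);               // prints 0: 2^32 is congruent to 0 (mod 2^32)
        System.out.println(Math.abs(Integer.MIN_VALUE)); // prints -2147483648: -(-2^31) wraps back to -2^31
    }
}
```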
 

Chris Uppal

Thomas G. Marshall wrote:

[me:]
Which makes the (bloody inconvenient -- at best) omission of unsigned
types even harder to comprehend.
[...]
I agree, however, I have to wonder if they were attempting to (except for
characters) stave off any issues re: signed + unsigned combinations.

I have heard that suggestion before, but for me there's no meat in it. The
(perfectly real) problem arises in C because of silent conversion between
signed and unsigned. But given the Java designers' track record, I don't think
they'd have allowed that. I.e. converting between signed and unsigned
representations of the same width would require an explicit cast, as would
converting from signed to unsigned of any sizes.

Incidentally, another possible reason -- this has just occurred to me as I
write -- is that they might have wanted to avoid doubling the number of
arithmetic bytecodes. Pure speculation, of course...

-- chris
 

Chris Uppal

Roedy said:
How about "considerably".

I doubt it in theory, and there is plenty of counter-evidence in fact.

Consider adding two numbers. The numbers must have come from somewhere
(2 logical machine operations). The result must go somewhere (1
logical machine op). The numbers must be added together (1 logical
machine op). So that's 4 logical operations. Adding a
branch-on-overflow would add one more logical machine operation. So,
at this very abstract level, we see a 25% increase.

Next remember that no application spends all its time adding numbers
together. So you should divide the above marginal cost by an
application dependent scaling factor. I have real difficulty believing
that the appropriate scaling would ever be less than about 2, and would
nearly always be higher -- a /lot/ higher.

Already we are looking at small numbers. But now consider that we are
also working with real machines, with instruction pipelining, speculative
execution, branch prediction (and/or hinting), caching, and so on. I
haven't tried to work through the details for any particular machine
architecture (beyond my competence), but it seems highly unlikely that
the /real/ underlying cost is anything like the 25% I derived above.
In fact, to me it seems plausible that the marginal cost could be
exactly zero in many cases (i.e. specific sequences of JIT-emitted
code), at least unless the numbers actually /did/ overflow (which would
stall the pipeline).
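The extra branch being costed here can be sketched in pure Java; this is the standard two-XOR sign test (years later, Java 8's Math.addExact adopted essentially this form):

```java
// An add followed by one overflow branch: the sum overflowed iff both
// operands have the same sign and the result's sign differs from it.
public class CheckedAdd {
    static int checkedAdd(int a, int b) {
        int sum = a + b;
        if (((a ^ sum) & (b ^ sum)) < 0) {  // sign bit set iff overflow
            throw new ArithmeticException("integer overflow");
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(checkedAdd(1, 2));                 // prints 3
        System.out.println(checkedAdd(Integer.MAX_VALUE, 1)); // throws ArithmeticException
    }
}
```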

I won't buy "considerably" without considerable evidence. I won't buy
"measurably" without measurable evidence. I won't buy "enormously" at
all ;-)

-- chris
 

Roedy Green

Consider adding two numbers.

If a machine has no overflow detect, I have posted the code you need
to detect it. It takes considerable overhead. Can you improve on
that?

I am not talking about the ease of adding overflow detect in hardware.
Obviously that is not that hard since you can do the entire operation
and set the condition codes in one cycle for an add in Pentium.
 

Thomas Hawtin

Incidentally, another possible reason -- this has just occurred to me as I
write -- is that they might have wanted to avoid doubling the number of
arithmetic bytecodes. Pure speculation, of course...

I think we did this a few months ago. Most arithmetic operations don't
care if they work on signed or unsigned operands. You would need some
way to handle testing/branching and conversion. An obvious solution is
to emit the same bytecodes as if the code was written in Java. Array
loads/stores could be doubled up, as load byte/boolean from array
(baload) is.

Tom Hawtin
 

Hendrik Maryns

Chris Uppal wrote:
Mark Thornton wrote:


Which makes the (bloody inconvenient -- at best) omission of unsigned types
even harder to comprehend.

/Signed/ bytes -- phah !

As someone who only knows C(++) from school books and horror stories
about memory leaks, I wonder what the particular advantage of unsigned
types is, except that slightly larger positive numbers fit in them
(which seems not a very big advantage to me)?

H.

--
Hendrik Maryns

==================
www.lieverleven.be
http://aouw.org
 

Chris Uppal

Thomas Hawtin wrote:

[me:]
I think we did this a few months ago. Most arithmetic operations don't
care if they work on signed or unsigned operands.

Good point. Thanks for the correction.

-- chris
 

Roedy Green

As someone who only knows C(++) from school books and horror stories
about memory leaks, I wonder what the particular advantage of unsigned
types is, except that slightly larger positive numbers fit in them
(which seems not a very big advantage to me)?

1. Any 8-bit char processing almost always wants unsigned bytes.

2. Any endian fiddling requires unsigned bytes.

3. Any cryptography work wants unsigned bytes.

I can't think of a time when I used the signed feature of byte even
once in my entire Java programming career.
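What Roedy's first two points look like in practice is the usual masking idiom (a sketch; the values here are arbitrary):

```java
// Java bytes are signed, so "unsigned byte" work means masking with 0xFF
// to strip the sign extension that widening would otherwise perform.
public class UnsignedBytes {
    public static void main(String[] args) {
        byte b = (byte) 0xFF;          // bit pattern 11111111
        System.out.println(b);         // prints -1 (signed view)
        System.out.println(b & 0xFF);  // prints 255 (unsigned view)

        // A typical endian fiddle: assemble a big-endian 16-bit value.
        byte hi = (byte) 0x80, lo = (byte) 0x01;
        int value = ((hi & 0xFF) << 8) | (lo & 0xFF);
        System.out.println(value);     // prints 32769
    }
}
```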
 

Chris Uppal

Hendrik Maryns wrote:

[me:]
As someone who only knows C(++) from school books and horror stories
about memory leaks, I wonder what the particular advantage of unsigned
types is, except that slightly larger positive numbers fit in them
(which seems not a very big advantage to me)?

If you stay entirely within the world of Java then, as you say, it probably
doesn't make too much difference. At least the extra bit's worth of integer
range isn't too important. Personally I really /hate/ having to do any kind of
bit-twiddling with signed quantities -- it's unnatural, confusing, and hard to
debug. Not everyone has to do much bit-twiddling, though, especially if they
/are/ sticking entirely to a closed Java world.

But sticking to the Java world is an unreasonable requirement. It requires
that you never need to exchange any data with the rest of the world, and never
have to implement any kind of data format that was designed by someone not
using Java, and never have to read books/articles/standard/etc which are not
aimed specifically at a Java readership.

The rest of the world can express 4G in 32 bits. So they do so. The rest of
the world can express 255 in 8 bits. So they do so. Dealing with such
situations requires mangled Java code in one way or another.
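One common form of the mangling Chris mentions, sketched for an unsigned 32-bit field (variable names are mine):

```java
// A value the rest of the world reads as an unsigned 32-bit count has to
// be widened to long and masked before Java sees it as a positive number.
public class Unsigned32 {
    public static void main(String[] args) {
        int raw = 0xFFFFFFFF;              // 4294967295 on the wire
        System.out.println(raw);           // prints -1: Java's signed view
        long unsigned = raw & 0xFFFFFFFFL; // mask away the sign extension
        System.out.println(unsigned);      // prints 4294967295
    }
}
```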

-- chris
 

Chris Uppal

I said:
If you stay entirely within the world of Java then, as you say, it
probably doesn't make too much difference. At least the extra bit's
worth of integer range isn't too important. Personally I really /hate/
having to do any kind of bit-twiddling with signed quantities -- it's
unnatural, confusing, and hard to debug.

Forgot to add: also working with unsigned quantities (assuming they are
suitable for the application) avoids problems like the one that gave rise to
this thread. The greater mathematical tractability of unsigned quantities may
be an advantage in some situations. I don't know if such situations occur any
more frequently than the need for bit-twiddling, though.

-- chris
 

Thomas G. Marshall

Chris Uppal coughed up:
Hendrik Maryns wrote:

[me:]
As someone who only knows C(++) from school books and horror stories
about memory leaks, I wonder what the particular advantage of unsigned
types is, except that slightly larger positive numbers fit in them
(which seems not a very big advantage to me)?

If you stay entirely within the world of Java then, as you say, it
probably doesn't make too much difference. At least the extra bit's
worth of integer range isn't too important. Personally I really /hate/
having to do any kind of bit-twiddling with signed quantities -- it's
unnatural, confusing, and hard to debug.

And as horrifying as this may sound, you will often find engineers who
have never even been trained in the notion of sign extension when
changing data type sizes. It can be a real hazard for maintenance.
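The sign-extension trap Thomas mentions is easy to reproduce (a sketch):

```java
// Widening a signed byte copies its top bit into all the new high-order
// bits, which surprises anyone treating the byte as a raw octet.
public class SignExtend {
    public static void main(String[] args) {
        byte b = (byte) 0x80;
        System.out.println(Integer.toHexString(b));        // prints ffffff80: sign-extended on widening
        System.out.println(Integer.toHexString(b & 0xFF)); // prints 80: what was usually intended
    }
}
```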

Not everyone has to do much bit-twiddling, though,
especially if they /are/ sticking entirely to a closed Java world.

But sticking to the Java world is an unreasonable requirement. It
requires that you never need to exchange any data with the rest of the
world, and never have to implement any kind of data format that was
designed by someone not using Java, and never have to read
books/articles/standards/etc which are not aimed specifically at a Java
readership.

The rest of the world can express 4G in 32 bits. So they do so. The
rest of the world can express 255 in 8 bits. So they do so. Dealing
with such situations requires mangled Java code in one way or another.

-- chris



--
Unix users who vehemently argue that the "ln" command has its arguments
reversed do not understand much about the design of the utilities. "ln arg1
arg2" sets the arguments in the same order as "mv arg1 arg2": existing file
argument to non-existing argument. And in fact, mv itself is implemented as
a link followed by an unlink.
 

Oliver Wong

Googmeister said:
In mathematics, (set of all integers, *) is not a group either, because
most elements lack multiplicative inverses.

Yes, I realized that (slapping my forehead), a few minutes after
submitting my post.
In Java, (int, *) is closed. It's not a group because of the
inverse operation. However, (int, +, *) is a commutative
ring since it does integer arithmetic mod 2^32 (where the
elements are named -2^31 to 2^31 - 1 instead of the usual
0 to 2^32 - 1).

So you see? Java's arithmetic doesn't require all sorts of weird new
definitions. You just have to realize that it's working with a finite
ring instead of an infinite one.

- Oliver
 
