5/0.0 = Infinity : NO error is thrown
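The behaviour in the subject line is easy to reproduce. A minimal Java sketch (class name is mine):

```java
public class FloatDivDemo {
    public static void main(String[] args) {
        double d = 5 / 0.0;                        // 5 is promoted to double; no exception
        System.out.println(d);                     // Infinity
        System.out.println(0.0 / 0.0);             // NaN
        System.out.println(Double.isInfinite(d));  // true
    }
}
```

Per IEEE-754 (which Java floating-point arithmetic follows), a nonzero value divided by zero yields a signed infinity and 0.0/0.0 yields NaN; no exception is ever thrown.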


Arne Vajhøj

Mark said:
There are lots of things that hardware supports today which it didn't ten
years ago.

First make it correct, then make it fast.

We are talking every floating point division here.

Arne
 

Mark Space

Arne said:
We are talking every floating point division here.

And?

Seriously, maybe I need to explain a bit more, but I don't see the issue.

First I said the JIT compiler could be made to emit code to check,
or not check, for divide by zero. That should involve setting or
clearing a flag on the floating-point unit, or checking one flag
afterwards. Not a big deal. If you don't care, emit code to ignore the
flag. Easy.

For byte-code, well, you have to have the JVM check. But anything that
isn't compiled by the JIT compiler shouldn't be executed very often, so
again the performance hit should be minimal. It might be worthwhile to
install a trap in the FPU, if the FPU supports them, and just ignore
divides by zero when they happen, if that is the selected option. That
takes NO time other than the initial poke-the-address-in-here at
start-up. (Most of the time I'd expect the OS to do that for you.) I
haven't looked at Intel's or AMD's specs lately, but this is 1990 technology.

For ints, same deal. I think even the original 8086 had a hardware trap
for integer divide by zero; no time during execution of the code was
needed to check for this error.
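The contrast with the floating-point case is visible from Java itself: the JVM surfaces the integer divide-by-zero trap as an `ArithmeticException`. A small sketch (class name is mine):

```java
public class IntDivDemo {
    public static void main(String[] args) {
        int zero = 0;                            // variable, to make the runtime trap explicit
        try {
            int q = 5 / zero;                    // integer divide by zero traps
            System.out.println(q);               // never reached
        } catch (ArithmeticException e) {
            System.out.println(e.getMessage());  // "/ by zero"
        }
    }
}
```

So Java already gets the integer case "for free" from the hardware trap, while floating-point division silently produces Infinity or NaN.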

I don't think these concepts are that hard. Are they being taught in
school these days? Folks, hardware has always bent over backwards to
accommodate the software. This overflow stuff is old, old news. There
are much more exciting issues in modern architectures than number overflow.
 

Martin Gregorie

Arne said:
We are talking every floating point division here.

...and some hardware used to support things better in the past. The
first computer I used, an Elliott 503, was a British purpose-designed
scientific computer with 39-bit words. It was about 10-15% faster for
floating-point calculations than for integer ones.
 

John W. Kennedy

Mark said:
And?

Seriously, maybe I need to explain a bit more, but I don't see the issue.

There are other problems, specifically in hardware and operating-system
support.

Does all hardware support this? If not, is the overhead of simulating it
acceptable?

Do all OSs support toggling the option? If not, is the overhead of
simulating it acceptable?

How is the option controlled by the OS for native programs?
At compile time?
At link time?
At process-start time?
Dynamically, within a process?
At thread-start time?
Dynamically, within a thread?

Is there an agreement? Is there even a consensus? Which models can be
emulated on other models without unacceptable overhead?

And, given all this, is it really worth it? In a great many cases,
letting the result "Infinity" or "Not a Number" go through to the end is
entirely satisfactory, producing correct and useful output (to the
degree that the input allows), whereas checking and/or interrupt
handling adds significant overhead, while also crippling machine-code
optimization. IEEE-754 did not incorporate these features as a joke. If
mere debugging is an issue, use assertions.
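John's point — that letting Infinity or NaN propagate and checking once at the end is often cheaper than per-operation checks — can be sketched in Java (names are mine):

```java
public class PropagateDemo {
    public static void main(String[] args) {
        double[] xs = {2.0, 0.0, 4.0};
        double product = 1.0;
        for (double x : xs) {
            product *= 1.0 / x;          // 1.0/0.0 yields Infinity, which propagates silently
        }
        System.out.println(product);     // Infinity
        // One check at the end replaces a check after every division:
        System.out.println(Double.isFinite(product));  // false
    }
}
```

With assertions enabled (`java -ea`), a debug-time check such as `assert Double.isFinite(product);` would flag the non-finite result, matching John's closing suggestion, while costing nothing in production runs.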
 
