Arithmetic overflow checking

Joshua Cranmer

Wrong.

Ordinary ints aren't (in Java, at least, with no "int *" type) subject
to aliasing and other problems that a mutable Integer-analogue would be.

Then I suppose you'd have no objection to making the Matrix class
immutable and implicitly copying itself every time you do an operation.
I really appreciate that when using Householder transformations to find
eigenvalues of my small 10Kx10K matrix.
 
supercalifragilisticexpialadiamaticonormalizeringe

Then I suppose you'd have no objection to making the Matrix class
immutable and implicitly copying itself every time you do an operation.
I really appreciate that when using Householder transformations to find
eigenvalues of my small 10Kx10K matrix.

We were talking about numbers, not matrices.

For something like matrices, I'd consider it "good enough" to have a
mutable and an immutable version, with mutable ones used inside of
methods as temporaries and the immutable type used in argument and
return types. (Obviously each would have a constructor that accepted the
other type.)
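
To make that concrete, here is a hedged sketch of the two-type split in
Java; all names (Matrix, MutableMatrix) are invented for this example,
not taken from any real library:

/** Immutable matrix: every "operation" returns a new instance. */
final class Matrix {
    private final double[][] cells;

    Matrix(double[][] cells) {
        this.cells = deepCopy(cells);            // defensive copy on the way in
    }

    Matrix(MutableMatrix m) {                    // constructor from the mutable twin
        this(m.cells());
    }

    int rows() { return cells.length; }
    int cols() { return cells[0].length; }
    double get(int r, int c) { return cells[r][c]; }

    Matrix plus(Matrix other) {                  // assumes matching dimensions
        MutableMatrix tmp = new MutableMatrix(this);  // mutate a local temporary...
        tmp.addInPlace(other);
        return new Matrix(tmp);                       // ...then freeze it
    }

    private static double[][] deepCopy(double[][] src) {
        double[][] dst = new double[src.length][];
        for (int i = 0; i < src.length; i++) dst[i] = src[i].clone();
        return dst;
    }
}

/** Mutable matrix: intended only as a method-local temporary. */
final class MutableMatrix {
    private final double[][] cells;

    MutableMatrix(Matrix m) {                    // constructor from the immutable twin
        cells = new double[m.rows()][m.cols()];
        for (int r = 0; r < m.rows(); r++)
            for (int c = 0; c < m.cols(); c++)
                cells[r][c] = m.get(r, c);
    }

    void addInPlace(Matrix other) {              // assumes matching dimensions
        for (int r = 0; r < cells.length; r++)
            for (int c = 0; c < cells[r].length; c++)
                cells[r][c] += other.get(r, c);
    }

    double[][] cells() { return cells; }
}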
 
David Lamb

For something like matrices, I'd consider it "good enough" to have a
mutable and an immutable version, with mutable ones used inside of
methods as temporaries and the immutable type used in argument and
return types.

I'd be reasonably happy with that, too.
 
Joshua Cranmer

Until some enterprising mathematician thinks up something that can be
added and multiplied, but has no sensible correspondence to any subset
of any kind of vectors of real numbers ...

You've never taken abstract algebra, have you? Polynomials are the most
common counterexample I can think of (although they are treatable as an
infinite-dimensional vector space over any finite field). You can also
treat the functions over the real numbers as a ring, and I'd be hard
pressed to come up with a scheme that can sensibly "correspond to any
subset of any kind of vectors of real numbers" yet handle things as
wildly varying as f(x) = 1, f(x) = sin x, f(x) = integral from 0 to x
of exp(-t^2) dt, f(x) = 1 if x is rational and 1/x if x is irrational,
or f(x) = sum from n = 0 to infinity of (1/2)^n cos(9^n pi x) (the
Weierstrass function).

Of course, if you just want addition, I can pull any old group out of a
hat. Symmetries of an n-gon are a particular favorite example for
nonabelian groups, as are permutations of sets.
I think you can cram matrices in there, and complex numbers. Numbers mod
something, too -- though do you quietly truncate when converting to
these, or throw an exception? Of course, primitive int is already
actually an integers-mod-2^32 type so that may be considered already
settled.

What do you mean by "truncate" in modular arithmetic? Mod 10, 1 and 11
are the same thing (the set of numbers {..., -9, 1, 11, ... }).
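
Tying the algebra back to the language-design question: none of these
structures needs to "be" a vector of reals to share an arithmetic API.
A hedged Java sketch, with all names invented for this example:

/** A ring element: anything you can add and multiply. */
interface Ring<T extends Ring<T>> {
    T plus(T other);
    T times(T other);
}

/** Polynomials with integer coefficients: a ring, but no vector of reals. */
final class Polynomial implements Ring<Polynomial> {
    private final int[] coeffs;                  // coeffs[i] multiplies x^i

    Polynomial(int... coeffs) { this.coeffs = coeffs.clone(); }

    public Polynomial plus(Polynomial o) {
        int[] sum = new int[Math.max(coeffs.length, o.coeffs.length)];
        for (int i = 0; i < coeffs.length; i++) sum[i] += coeffs[i];
        for (int i = 0; i < o.coeffs.length; i++) sum[i] += o.coeffs[i];
        return new Polynomial(sum);
    }

    public Polynomial times(Polynomial o) {      // assumes at least one coefficient
        int[] prod = new int[coeffs.length + o.coeffs.length - 1];
        for (int i = 0; i < coeffs.length; i++)
            for (int j = 0; j < o.coeffs.length; j++)
                prod[i + j] += coeffs[i] * o.coeffs[j];
        return new Polynomial(prod);
    }
}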
 
Thomas Richter

On 25.07.2011 09:56, Patricia Shanahan wrote:
Historically, complex numbers were indeed invented by and for
mathematicians for the abstract satisfaction of having solutions for all
quadratic equations.

Not at all! There was simply no need to solve quadratic equations like
x^2 + 4 = 0, because everyone knew, and it was obvious, that this
equation *does not have* a solution.

Complex numbers were invented as a tool to solve *cubic* equations. The
tricky thing is that a cubic equation always *does* have a solution:
one, two or three of them. A closed formula for computing the solutions
was found, but in the three-solution case this formula required you,
somewhere in the middle, to work with roots of negative numbers. All
these strange negative roots then vanished again in the final result,
via rules like sqrt(-1) * sqrt(-1) = -1, and cancelled out, so
everything was "correct" in the end. All that happened then was that
these rules for manipulating the strange temporary = "imaginary"
numbers were written down, and *this* gave the complex numbers.

Thus, complex numbers weren't an abstract invention by crazy
mathematicians trying to solve what wasn't possible to solve, but
rather a toolkit for solving something very practical, namely applying
an algorithm for solving cubic(!) equations. That is very much a useful
task, just as it is useful to describe AC currents with complex
numbers.

That Java lacks operator overloading is a historic decision, probably
made on the basis that C++ started with this feature, and everyone
considered it "so cool" that it was misused for all kinds of nonsense.
Basically, I don't consider the choice of the C++ iostream library to
use the << and >> operators for input and output a sane one, and the
Java language designers probably had the same impression, so they
wanted to avoid it, but then overdid it by disallowing operator
overloading entirely.

Nowadays, one can probably say that operator overloading *is* a useful
tool, but like all useful tools it is one that should be handled with
care, and one that can cause quite some trouble in the hands of fools.
Thus, what C++ overdid, Java "under"-did. From today's perspective, I
*would* prefer to have operator overloading in Java, but would apply it
only where it makes sense and the semantics of the operators are
correct. That is, apply it to numeric types (numbers including complex
numbers, matrices and vectors). Applying it to I/O as C++ does is,
IMHO, not a good idea.

Greetings,
Thomas
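
To make the preceding point concrete: a hedged sketch of the numeric
case in Java as it stands, without overloading. The Complex class is
invented for this example; java.lang has no complex type.

/** Minimal immutable complex number; Java has no built-in equivalent. */
final class Complex {
    final double re, im;

    Complex(double re, double im) { this.re = re; this.im = im; }

    Complex plus(Complex o) { return new Complex(re + o.re, im + o.im); }

    Complex times(Complex o) {
        // (a + bi)(c + di) = (ac - bd) + (ad + bc)i
        return new Complex(re * o.re - im * o.im, re * o.im + im * o.re);
    }

    @Override public String toString() { return re + " + " + im + "i"; }
}

// Without overloading:  z1.plus(z2).times(z3)
// With overloading it could read:  (z1 + z2) * z3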
 
Henderson

[delete bestiary]

Yeesh. Anyone adding operator overloading to Java in a sane way has his
work cut out for him, it seems.
What do you mean by "truncate" in modular arithmetic? Mod 10, 1 and 11
are the same thing (the set of numbers {..., -9, 1, 11, ... }).

As in, if you have a mod10 type, and you assign 36 to it, do you just
silently accept that and store it internally as 6, or do you throw an
exception if the input isn't in the range 0 to 9?
 
Joshua Cranmer

As in, if you have a mod10 type, and you assign 36 to it, do you just
silently accept that and store it internally as 6, or do you throw an
exception if the input isn't in the range 0 to 9?

There is nothing inherently correct about representing the elements of
mod 10 arithmetic by the values 0 to 9, so 36 is as valid in mod 10 as
6 is.
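
Both answers are easy to express in code. A hedged sketch (the Mod10
class is invented for this example) with a silently-reducing factory
and a strict one:

/** Integers mod 10; the stored representative is always in [0, 10). */
final class Mod10 {
    final int value;

    /** Silently reduces: Mod10.of(36).value == 6, Mod10.of(-9).value == 1. */
    static Mod10 of(int n) {
        return new Mod10(Math.floorMod(n, 10));  // floorMod handles negatives correctly
    }

    /** Strict variant: rejects anything outside the canonical range. */
    static Mod10 ofStrict(int n) {
        if (n < 0 || n > 9)
            throw new ArithmeticException(n + " is not in [0, 9]");
        return new Mod10(n);
    }

    private Mod10(int value) { this.value = value; }

    Mod10 plus(Mod10 o)  { return of(value + o.value); }
    Mod10 times(Mod10 o) { return of(value * o.value); }
}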
 
Niklas Holsti

lewbloch said:
Gene said:
lewbloch said:
Martin Gregorie wrote: [snip]

[1] The instrument causing the problem was an unmodified Ariane 4 SRI
which raised an out-of-limits exception when the normal Ariane 5
trajectory exceeded a permitted Ariane 4 horizontal velocity limit.
...the Ariane 5 having more powerful engines.
In other words, this was a case where there *was* an out-of-range
exception, thus it makes the exact opposite point to the one Gene
presumably wanted to support.
The data I read was that the exception was not handled. IIRC,
debugging data got interpreted as navigational data.

Precisely. There was an exception, and it was not handled. Having
the exception was not enough.

There was an exception (overflow in a conversion instruction), and it
was handled. However, the handler was designed for the Ariane 4, where
the designers (after careful analysis of the data ranges for the Ariane
4) decided to assume that an overflow exception indicated a CPU HW
failure, and consequently the handler shut down the CPU. The same
overflow then occurred in the redundant CPU, which also shut down, so
the rocket was left with no guidance.

If the SW had been tested properly, the overflow exception would have
occurred during testing, which would have revealed the flawed
assumptions made in the reuse of the Ariane 4 SW on the Ariane 5. Surely
that would have been better than ignoring the overflow.

The Ariane 501 disaster is a poor guide for this discussion, because of
the difficulty of proper exception handling in that system: you cannot
abort the launch and return to the launch pad. In most applications, it
is much easier to design reasonable and safe exception handlers.
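
As a hedged, Java-flavored illustration of that last point (the
saturating fallback policy here is invented for the example, not taken
from the Ariane report):

// A checked narrowing conversion whose handler degrades gracefully
// instead of shutting the unit down. Math.toIntExact (Java 8+) throws
// ArithmeticException when the long does not fit in an int.
static int toGuidanceValue(long rawVelocity) {
    try {
        return Math.toIntExact(rawVelocity);
    } catch (ArithmeticException overflow) {
        // Saturate and carry on, rather than treating overflow as a
        // hardware fault (fallback policy invented for this sketch).
        return rawVelocity > 0 ? Integer.MAX_VALUE : Integer.MIN_VALUE;
    }
}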
 
Wolfgang Draxinger

On Sun, 10 Jul 2011 01:47:25 -0700 (PDT), tm wrote:
A CPU could (in theory) easily recognize the overflow
and generate an interrupt. This way normal computations
(without overflow) would have no extra cost.

Interrupts are extremely expensive. They force the OS kernel into a
full-blown context switch; registers must be saved and restored, and
finally the process needs to be signaled. Nobody sane would do it that
way.
AFAIK commonly used CPUs do not have this possibility. They
have some FLAG, which is set when an overflow occurred.

And checking a flag is how difficult? It's a simple, cheap conditional
jump. And branch prediction works very efficiently on flags, because
they're integral CPU state.
But there is no way to cause an interrupt when the overflow FLAG is
set. So code which checks for overflow must check this flag after
every computation.

So? Checks are not expensive. Heck, some architectures, like ARM,
execute instructions conditionally, depending on a flag. Reacting to an
overflow flag comes with NO EXTRA COST there.
Needless to say: Normal computations (without overflow)
are slowed down by these checks.

No, they are not. Modern pipelined CPUs have branch prediction and
out-of-order execution, where some code paths are executed
speculatively and their results kept on hold. The instructions reacting
to the overflow flag may already have finished before the offending
operation itself has.
Because of this slowdown most compilers and virtual
machines (AFAIK including the JVM) have no overflow
checking.

Almost every newly designed language implements overflow checks, as
does every generic container library. Overflow checks cost NOTHING
these days.
In other words: A missing hardware feature:

Trigger interrupt when overflow flag is set.

Causes compilers and JVMs to omit overflow checks.

Did you ever think about the kind of infrastructure that's required
for this? It goes down into the bowels of the OS kernel. Interrupts
always cause context switches, which are expensive. Checking a flag
comes practically for free, especially if branch prediction can
determine that a certain code path depends only on a certain flag.

To give you some figures: processing an interrupt takes Linux about 1
to 10 µs. On a 3 GHz CPU that amounts to up to 30000 clock cycles,
and most modern, pipelined CPUs finish at least one instruction per
clock cycle; an Intel Pentium or Core can retire up to 8 finished
instructions per clock cycle for the majority of instructions. In the
time your proposed overflow-check interrupt is processed you can do
about 200000 flag checks.


Wolfgang
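
For the record on the Java side: later JDKs did add checked arithmetic
to the standard library (Math.addExact and friends, Java 8+), which the
JIT can compile down to a branch on the CPU's overflow flag. A minimal
example:

public class OverflowDemo {
    public static void main(String[] args) {
        System.out.println(Math.addExact(1, 2));      // 3, no overflow
        try {
            Math.addExact(Integer.MAX_VALUE, 1);      // wraps with plain +, throws here
        } catch (ArithmeticException e) {
            System.out.println("overflow detected: " + e.getMessage());
        }
    }
}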
 
Wolfgang Draxinger

On Thu, 8 Sep 2011 21:02:43 +0200, Wolfgang Draxinger wrote:
To give you some figures: processing an interrupt takes Linux about 1
to 10 µs. On a 3 GHz CPU that amounts to up to 30000 clock cycles,
and most modern, pipelined CPUs finish at least one instruction per
clock cycle; an Intel Pentium or Core can retire up to 8 finished
instructions per clock cycle for the majority of instructions. In the
time your proposed overflow-check interrupt is processed you can do
about 200000 flag checks.

Well, I was off by a factor of ~10: the typical Linux ISR takes about
2000 clock cycles. Still, going such a long way is a lot slower than
checking the flag, which takes under a cycle.


Wolfgang
 
Willem

Wolfgang Draxinger wrote:
) On Thu, 8 Sep 2011 21:02:43 +0200, Wolfgang Draxinger
) <[email protected]> wrote:
)
)> To give you some figures: processing an interrupt takes Linux about
)> 1 to 10 µs. On a 3 GHz CPU that amounts to up to 30000 clock cycles,
)> and most modern, pipelined CPUs finish at least one instruction per
)> clock cycle; an Intel Pentium or Core can retire up to 8 finished
)> instructions per clock cycle for the majority of instructions. In
)> the time your proposed overflow-check interrupt is processed you can
)> do about 200000 flag checks.
)
) Well, I was off by a factor of ~10: the typical Linux ISR takes about
) 2000 clock cycles. Still, going such a long way is a lot slower than
) checking the flag, which takes under a cycle.

So if an overflow condition typically occurs less than 1 in 10000 times,
the interrupt solution is better.


SaSW, Willem
--
Disclaimer: I am in no way responsible for any of the statements
made in the above text. For all I know I might be
drugged or something..
No I'm not paranoid. You all think I'm paranoid, don't you !
#EOT
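
In editorial terms, the break-even Willem is pointing at (using the
rough cycle counts from upthread, not measurements): a flag check costs
about one cycle on every operation, while an interrupt costs about
C_ISR cycles, but only when an overflow actually occurs. The interrupt
scheme wins when the overflow probability p satisfies

    p \cdot C_{\mathrm{ISR}} < 1 \iff p < \frac{1}{C_{\mathrm{ISR}}}

so with C_ISR somewhere between ~2000 and ~30000 cycles, overflows must
be rarer than roughly one in a few thousand to a few tens of thousands
of operations.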
 
Wolfgang Draxinger

On Thu, 8 Sep 2011 19:15:13 +0000 (UTC), Willem wrote:
So if an overflow condition typically occurs less than 1 in 10000
times, the interrupt solution is better.

Theoretically yes, practically no!

It depends on the architecture. On ARM it doesn't matter at all, since
ALL instructions are conditional.

On x86, due to pipelining and out-of-order execution, the difference is
not measurable either: the overflow-handling code executes in parallel
and may have finished even before the code creating the overflow has.
Also, the overflow path will probably not do any arithmetic, in
contrast to the arithmetic that does overflow, so separate functional
units are at work and the pipeline will execute them in parallel.

Really, there is no cost to speak of in doing overflow checks.


Wolfgang
 
