Mark L Pappin
<puts on Compiler Vendor hat>
I've recently discovered that [at least two of] our compilers don't
make any attempt to handle floating point overflow in add/subtract/
multiply/divide, with the result that programmers who don't range-
check their data can e.g. multiply two very tiny values and end up
with a very large one. This is (AFAICT) quite fine according to the
Standard but is certainly a QoI issue and I'd like to raise ours.
I believe that it's the programmer's job to know what possible ranges
his values could take, and to check they're sane before performing
operations upon them. If this is done, then overflow and underflow
can never happen. Rarely is this done, of course, so the compromise
suggested by TPTB is to offer a safety net in the form of adding
inexpensive checks to the generated code which can propagate status
back to the user.
The approach I'm taking is to detect overflow or underflow and set a
flag (in the implementation namespace) as appropriate, but leave the
[invalid] value in the result. This way, if it really is important to
the user to eke that last bit (pun not intended) of information out of
the operation then the way is clear for them to do so. For example, a
multiplication which overflows will end up with a value pow(2,256)
smaller than the correct [unrepresentable] value, thanks to wraparound
of our 8-bit signed binary exponent, but the mantissa will still have
full precision - some recovery may be possible.
An alternative approach (since we use an IEEE-754-compatible
representation already) is to go the whole hog and implement
Infinities, NaNs, and Denormals. At this time we don't want to do this
because it appears to be a significant cost (see below) for not much
return. In any case, how to deal with those types of value in C is
still not defined, so any solution of this type would still be Not C.
I'm interested to hear the c.l.c set of viewpoints on silent vs. noisy
underflow and overflow - in this case, coercing values to 0 or Inf,
vs. leaving them as is and setting a user-checkable flag.
('Very-noisy' would be throwing an exception of some kind, also
undefined by the Standards.) What practices do you use to avoid
hitting overflow and underflow? Is our plan to let the dodgy value
continue to exist of any conceivable value?
For those who've read this far: we make cross-compilers for [often
tiny; <256 bytes of RAM is common] CPUs commonly used in embedded
systems, and we do our damnedest to make them meet C90 and are inching
toward C99 in places. Floating point has become popular, some
deficiencies have been found, and Muggins put his hand up to fix them.
mlp