max(NaN,0) should be NaN


Nick Maclaren

|> In article <[email protected]>,
|> (e-mail address removed) writes:
|>
|> > This makes no sense, as the outcome of the operation is undefined and
|> > should be NaN.
|> > max(NaN,0.) = NaN
|>
|> Why?
|>
|> > After researching, it appears the first outcome is accepted behavior,
|> > and might be included in the revised IEEE 754 standard, which affects
|> > not only Fortran. The discussion posted at
|> > www.cs.berkeley.edu/~ejr/Projects/ieee754/meeting-minutes/02-11-21.html#minmax
|> > suggests that "There is no mathematical reason to prefer one reason to
|> > another."
|>
|> Don't take this the wrong way. But, the members of the IEEE754
|> committee probably have much more experience than you (and
|> many of the people here in c.l.f) in floating point mathematics.
|> If they came to the conclusion that
|>
|> "There is no mathematical reason to prefer one reason to another."
|>
|> then you may want to pay attention to them, and guard against
|> suspect comparisons.

Well, it is unlikely that any of them have more experience than me in
practical numerical software engineering, though I am less than 1% of
the numerical analyst that Kahan is, to give just one example. Note
that I am not saying that some of them don't have comparable experience,
though I can't think of any offhand.

They are quite simply wrong. There IS a very strong mathematical
argument, and the reasons for making that statement are dogmatism, not
science. I give the reasons for the DECISION below, but that is a
consequent matter. It is the reason for making that STATEMENT I am
talking about above.


The background is that traditional design started by creating a fairly
precise mathematical model, and then deriving the operations to fit in
with that model. This maximises the possibilities of reasoning about
the behaviour of the program (e.g. validation, program proving, software
engineering, etc. etc.) at the expense of restricting flexibility.

The alternative approach is to start with the facilities, maximise
them for flexibility, and let the overall model lie where it falls.
This maximises the flexibility of the design, at the cost of making
static validation somewhere between harder and impossible.

One of Kahan's papers points out that IEEE 754 did the latter, and that
it was a deliberate deviation from the traditional approach.


Jumping aside, the need for missing values occurs almost entirely in
vector or other composite operations, and EVERY language that has
supported them needs BOTH semantics. In particular, the requirement
order for the operations in statistics is:

Count non-missing values in vector
Sum non-missing values in vector
Take minimum/maximum of non-missing values in vector
Take product of non-missing values in vector
Derived operations and more esoteric ones

Also, EVERY language needs BOTH semantics, according to context. For
example, in the following:

top = max(max(vector_A,vector_B))

top should be the maximum of the elements of vector_A and vector_B
where BOTH of a pair are non-missing. Look at any decent book on
statistics or good statistical package for ample evidence.
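
To make the distinction concrete, here is a minimal C sketch (purely
illustrative - the function names are my own invention) of the two
semantics for a vector maximum:

#include <math.h>
#include <stddef.h>
#include <stdio.h>

/* "Missing value" semantics: NaN elements are simply skipped. */
static double max_skipping_nan(const double *v, size_t n)
{
    double best = NAN;               /* stays NaN if every element is NaN */
    for (size_t i = 0; i < n; ++i)
        if (!isnan(v[i]) && (isnan(best) || v[i] > best))
            best = v[i];
    return best;
}

/* "Invalid value" semantics: a single NaN poisons the whole result. */
static double max_propagating_nan(const double *v, size_t n)
{
    double best = -HUGE_VAL;
    for (size_t i = 0; i < n; ++i) {
        if (isnan(v[i]))
            return NAN;
        if (v[i] > best)
            best = v[i];
    }
    return best;
}

int main(void)
{
    double v[] = { 1.0, NAN, 3.0 };
    printf("%g %g\n", max_skipping_nan(v, 3), max_propagating_nan(v, 3));
    /* prints: 3 nan */
    return 0;
}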


IEEE 754 NaNs are VERY clearly indications of 'invalid' values (though
even that has several interpretations, but the subtleties are irrelevant
to this). If they were to be treated as missing values, then it is
immediately clear that NaN+1.23 should be 1.23. No ifs or buts.

I have a paper on this somewhere, which I have circulated but not
published, if anyone is interested.


Regards,
Nick Maclaren.
 

ejko123

Nick said:
Jumping aside, the need for missing values occurs almost entirely in
vector or other composite operations, and EVERY language that has
supported them needs BOTH semantics.

Both Fortran 2003 (and C99 and F95 with TR 15581)
provide standard ways to do what you want, assuming that
NaN indicates a missing value. In Fortran's case,
since IEEE_IS_NAN is elemental, you can simply say for
a vector declared REAL X(N):
Count non-missing values in vector

   non_missing = count(.not.ieee_is_nan(x))

Sum non-missing values in vector

   total = sum(x, mask=.not.ieee_is_nan(x))
   (and similarly for PRODUCT)

Take minimum/maximum of non-missing values in vector

   biggest = maxval(x, mask=.not.ieee_is_nan(x))
   (and similarly for MINVAL)

You're always free to special-case things, as in

if (any(ieee_is_nan(x))) then
   result = ieee_value(result, ieee_quiet_nan)   ! i.e. result is NaN
else
   result = f(x)   ! X contains only non-NaN elements
endif

You can do the same thing (albeit more verbosely) in C99 with the
analogous library functions.
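
For instance, something along the following lines (a sketch only, with
made-up function names; note that C99's fmax already treats a single
NaN operand as missing data, so the masked maximum falls out of a plain
fmax reduction):

#include <math.h>
#include <stddef.h>

/* Rough C99 analogues of COUNT / SUM / MAXVAL with a .not.ieee_is_nan mask. */

static size_t count_non_missing(const double *x, size_t n)
{
    size_t k = 0;
    for (size_t i = 0; i < n; ++i)
        if (!isnan(x[i]))
            ++k;
    return k;
}

static double sum_non_missing(const double *x, size_t n)
{
    double s = 0.0;
    for (size_t i = 0; i < n; ++i)
        if (!isnan(x[i]))            /* explicit mask, as in SUM(x, MASK=...) */
            s += x[i];
    return s;
}

static double maxval_non_missing(const double *x, size_t n)
{
    double m = NAN;                  /* NaN if every element is missing */
    for (size_t i = 0; i < n; ++i)
        m = fmax(m, x[i]);           /* fmax ignores a lone NaN operand */
    return m;
}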

Some applications will want to silently ignore NaN elements, and
in other cases NaNs indicate serious errors that can be indicated
by NaN result values. Seems to me that the current C and Fortran
language standards offer basic support for both alternatives,
but the onus is on the programmer to anticipate the cases where
NaNs are possible and to take the appropriate action.

Things might get tricky if you're also testing/setting the various IEEE
exception flags or if X contains signaling NaNs. I don't know,
for example, whether using the mask functions above can change
the state of the INVALID flag or if the language standards even
address this issue.
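
One way to find out empirically, on a given implementation, is a test
along these lines (just a sketch; whether isnan itself may raise
INVALID for a signaling NaN is exactly the sort of detail I'm not sure
the standards pin down):

#include <fenv.h>
#include <math.h>
#include <stdio.h>

#pragma STDC FENV_ACCESS ON

int main(void)
{
    double x[] = { 1.0, NAN, 3.0 };
    double total = 0.0;

    feclearexcept(FE_ALL_EXCEPT);
    for (int i = 0; i < 3; ++i)
        if (!isnan(x[i]))            /* the masked sum from above */
            total += x[i];

    printf("sum = %g, INVALID %s\n", total,
           fetestexcept(FE_INVALID) ? "raised" : "not raised");
    return 0;
}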

--Eric
 

Nick Maclaren

|>
|> Both Fortran 2003 (and C99 and F95 with TR 15581)
|> provide standard ways to do what you want, ...
|>
|> You can do the same thing (albeit more verbosely) in C99 with the
|> analogous library functions.

Don't bet on it :-( Even Fortran doesn't specify what you imply,
and C99's facilities are so badly specified as to be effectively
useless (and even actively harmful).

|> Some applications will want to silently ignore NaN elements, and
|> in other cases NaNs indicate serious errors that can be indicated
|> by NaN result values. Seems to me that the current C and Fortran
|> language standards offer basic support for both alternatives,
|> but the onus is on the programmer to anticipate the cases where
|> NaNs are possible and to take the appropriate action.

Which is precisely what writing robust, portable code (a.k.a. software
engineering) is NOT about!


Regards,
Nick Maclaren.
 

P.J. Plauger

> |>
> |> Both Fortran 2003 (and C99 and F95 with TR 15581)
> |> provide standard ways to do what you want, ...
> |>
> |> You can do the same thing (albeit more verbosely) in C99 with the
> |> analogous library functions.
>
> Don't bet on it :-( Even Fortran doesn't specify what you imply,
> and C99's facilities are so badly specified as to be effectively
> useless (and even actively harmful).

Easy on the hyperbole. C99 spells out quite clearly, in F.9.9.2,
that fmax must effectively ignore NaNs. I don't like it, but our
C99 implementation does exactly what's specified in the 1999
C Standard. And our test suite looks for that behavior too.
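
For the record, the behavior in question amounts to this (assuming a
library that implements Annex F):

#include <math.h>
#include <stdio.h>

int main(void)
{
    printf("%g\n", fmax(NAN, 0.0));    /* 0 under F.9.9.2, not nan */
    printf("%g\n", fmax(NAN, NAN));    /* nan: no non-NaN operand left */
    return 0;
}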

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 

Nick Maclaren

|> > |>
|> > |> Both Fortran 2003 (and C99 and F95 with TR 15581)
|> > |> provide standard ways to do what you want, ...
|> > |>
|> > |> You can do the same thing (albeit more verbosely) in C99 with the
|> > |> analogous library functions.
|> >
|> > Don't bet on it :-( Even Fortran doesn't specify what you imply,
|> > and C99's facilities are so badly specified as to be effectively
|> > useless (and even actively harmful).
|>
|> Easy on the hyperbole. C99 spells out quite clearly, in F.9.9.2,
|> that fmax must effectively ignore NaNs. I don't like it, but our
|> C99 implementation does exactly what's specified in the 1999
|> C Standard. And our test suite looks for that behavior too.

No, that's not my point. I agree that THAT is clear.

NaN support and fmax/fmin are in the 'open' (mandatory) sections of the
standard, but footnotes 204/205 refer forward to Annex F. However,
Annex F applies only if an implementation sets __STDC_IEC_559__, and
makes little or no sense on many architectures and with many compiler
options (e.g. VAX/Alpha, zOS, ones with IEEE 754 representation but only
partial semantics, etc.).

And then there is the question of exactly how the "as if" rule and
pragma STDC FP_CONTRACT apply (and the default for THAT may be anything).
The only compiler that I have used that supported C99 Annex F continued
to set __STDC_IEC_559__ even when I specified optimisation options that
affect the numeric results (which is forbidden by IEEE 754). So a lot
of code will have __STDC_IEC_559__, but NOT the semantics it expects.
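
For what it is worth, the sort of compile-time guard that Annex F
invites looks like the sketch below (the discriminant function is just
an illustration). The trouble described above is precisely that the
macro can remain defined while the semantics the guard is supposed to
promise have been optimised away.

#ifndef __STDC_IEC_559__
#error "IEC 60559 (IEEE 754) arithmetic required"
#endif

#pragma STDC FP_CONTRACT OFF    /* the default state is implementation-defined */

double discriminant(double a, double b, double c)
{
    /* With FP_CONTRACT ON, a compiler may fuse b*b - 4*a*c into an FMA,
       changing the rounding of the result; the pragma pins it down,
       assuming the compiler honours it at all. */
    return b*b - 4.0*a*c;
}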

If, however, the compiler DIDN'T do that, things would be no better.
There is almost nothing in the C99 standard that says what the "IEEE 754"
parts of the standard that are not shielded by __STDC_IEC_559__ (e.g.
<fenv.h>, chunks of <math.h> and some others) mean if __STDC_IEC_559__
is not set. As you recall, this issue was raised repeatedly during the
standardisation process.

And, of course, fmax/fmin are among the most clearly specified parts of
C99's IEEE 754 support - many of the other aspects are MUCH less clear,
and the flag handling is unbelievable. But let's skip that, on the grounds
of good taste and public decency.


Regards,
Nick Maclaren.
 

Richard Bos

> The background is that traditional design started by creating a fairly
> precise mathematical model, and then deriving the operations to fit in
> with that model. This maximises the possibilities of reasoning about
> the behaviour of the program (e.g. validation, program proving, software
> engineering, etc. etc.) at the expense of restricting flexibility.
>
> The alternative approach is to start with the facilities, maximise
> them for flexibility, and let the overall model lie where it falls.
> This maximises the flexibility of the design, at the cost of making
> static validation somewhere between harder and impossible.
>
> One of Kahan's papers points out that IEEE 754 did the latter, and that
> it was a deliberate deviation from the traditional approach.

And yet, it was IEEE 754 which gained widespread usage, and was
officially adopted in at least one, and if I understand this thread
correctly at least two, programming language standards. IYAM, this
demonstrates the gap between computing science in academe and the
practical art of writing programs that are supposed to be used by
humans.

Richard
 

Nick Maclaren

|> (e-mail address removed) (Nick Maclaren) wrote:
|>
|> > The background is that traditional design started by creating a fairly
|> > precise mathematical model, and then deriving the operations to fit in
|> > with that model. This maximises the possibilities of reasoning about
|> > the behaviour of the program (e.g. validation, program proving, software
|> > engineering, etc. etc.) at the expense of restricting flexibility.
|> >
|> > The alternative approach is to start with the facilities, maximise
|> > them for flexibility, and let the overall model lie where it falls.
|> > This maximises the flexibility of the design, at the cost of making
|> > static validation somewhere between harder and impossible.
|> >
|> > One of Kahan's papers points out that IEEE 754 did the latter, and that
|> > it was a deliberate deviation from the traditional approach.
|>
|> And yet, it was IEEE 754 which gained widespread usage, and was
|> officially adopted in at least one, and if I understand this thread
|> correctly at least two, programming language standards. IYAM, this
|> demonstrates the gap between computing science in academe and the
|> practical art of writing programs that are supposed to be used by
|> humans.

You have made at least three major errors in that, I am afraid.

Firstly, I am older and more experienced than that, and am referring
to the days before the rise of computing science in academia. The
traditional design I referred to was and is used by practical software
engineers (though we weren't called that, then). Yeah, I know that
we are now an endangered species :-(

Secondly, IEEE 754 has NOT gained widespread acceptance, and almost all
"IEEE 754" systems use its format and not its semantics, or go in for
some simplification or variant of it. Many or most don't support
denormalised numbers or exceptions (as it specifies them) in some of
their modes (often their defaults), and some older and embedded systems
didn't/don't support NaNs or infinities.
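
A trivial run-time check shows which behaviour a given mode actually
delivers (a sketch; the volatiles are only there to stop constant
folding):

#include <float.h>
#include <stdio.h>

int main(void)
{
    volatile double tiny = DBL_MIN;     /* smallest normalised double */
    volatile double half = tiny / 2.0;  /* denormal if supported, 0.0 if flushed */
    printf("denormals are %s\n", half > 0.0 ? "supported" : "flushed to zero");
    return 0;
}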

Thirdly, only Java has attempted to include it as part of its specification,
and Kahan has written a diatribe against Java. C99 has some support, but
words fail me when describing how unusable it is. Fortran makes gestures,
but the 'support' is minimal, to put it mildly.

IEEE 754R is still in denial about the reasons for the near-total lack
of uptake in programming languages (after 22 years!) and the people
STILL believe that it is due to inertia. No way, Jose.


Regards,
Nick Maclaren.
 

P.J. Plauger

> Secondly, IEEE 754 has NOT gained widespread acceptance, and almost all
> "IEEE 754" systems use its format and not its semantics, or go in for
> some simplification or variant of it.

You could say the same thing about Standard C, if you're sufficiently
vague about degree of conformance. The fact is that *most* architectures
in widespread use support a pretty good approximation to IEEE 754. If
you want to savor all the small ways they don't quite conform, then ask
Fred Tydeman to give you his list of grievances. But in real life they
hardly matter to the practicing programmer.

> Many or most don't support denormalised numbers

I called you on this in Berlin and you temporized. The statement
IME is simply untrue. And ME involves supporting Standard C99 and
C++ libraries on about a dozen popular architectures, with about
as many floating-point processors.

> or exceptions (as it specifies them)

I guess the parenthetic quibble gives you some wiggle room, but
I still have trouble with it. Our fenv.h tests seem to indicate
that the popular architectures *do* meet the requirements of
IEEE 754 in this area.

> in some of their modes (often their defaults),

Another potential quibble, which I still have trouble believing,
from direct experience.

> and some older and embedded systems didn't/don't support NaNs or infinities.

Now that's true. My former company Whitesmiths, Ltd. provided software
floating-point packages that blew off denormals, infinities, and NaNs,
with no complaints from our customers. But the last packages we shipped
went out the door in 1988. Since then, both hardware and software have
learned to take IEEE 754 a bit more seriously. My book "The Standard C
Library" worried considerably about infinities and NaNs back in 1992.
It looks pretty naive to me now, in many ways, but it got things mostly
right. And IMO it reflected the spirit of the times.

> Thirdly, only Java has attempted to include it as part of its
> specification, and Kahan has written a diatribe against Java.

IIRC, the diatribe did *not* complain that Java slavishly followed
IEEE 754 to its detriment. There was this little thing about favoring
Sun architecture over Intel, then patching things up in a half-assed
manner...

> C99 has some support, but words fail me when describing how unusable it is.

The detailed words always seem to fail you, but you're consistently
quick with the poisonous adjectives. In fact, C99 has two extensive
annexes (F and G) describing in exquisite detail how IEEE 754 and
Standard C should play together. My current company Dinkumware, Ltd.
has endeavored to conform completely to those annexes, and aside
from a few aggressively perverse tests in Tydeman's floating-point
suite we do just fine. While I don't always agree with some of the
more creative choices made in those annexes (particularly G), I had
little trouble following their guidance. Hence, I have to disagree
that C99 support for IEEE 754 is unusable, either to us or our
customers.

> Fortran makes gestures, but the 'support' is minimal, to put it mildly.

I'm glad you're capable of putting something mildly, from time to
time. Now if only you could put some of your jeremiads a bit more
accurately. Or even more precisely, so they're easier to criticize
in detail.

> IEEE 754R is still in denial about the reasons for the near-total lack
> of uptake in programming languages (after 22 years!) and the people
> STILL believe that it is due to inertia. No way, Jose.

From out here, IEEE 754R seems to be suffering more from internecine
warfare than from any denial about their effects on programming
languages. But I'm always willing to cut slack for any war
correspondent. YMMV.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 

Ben Pfaff

P.J. Plauger said:
Since then, both hardware and software have learned to take
IEEE 754 a bit more seriously. My book "The Standard C Library"
worried considerably about infinities and NaNs back in 1992.
It looks pretty naive to me now, in many ways, but it got
things mostly right. And IMO it reflected the spirit of the
times.

I'd be gratified if you'd say a few more words about how the
discussion in "The Standard C Library" treats floating-point
arithmetic (or just infinities and NaNs) naively. An important
part of my own understanding of floating-point and IEEE 754 has
come from reading that book, so I'm now wondering how naive I am
;-)
 

P.J. Plauger

> I'd be gratified if you'd say a few more words about how the
> discussion in "The Standard C Library" treats floating-point
> arithmetic (or just infinities and NaNs) naively. An important
> part of my own understanding of floating-point and IEEE 754 has
> come from reading that book, so I'm now wondering how naive I am
> ;-)

Well, the topic of this thread is one example. I assumed that NaN
always meant, "Abandon hope, all meaning is lost." The idea never
occurred to me that it might simply mean, "Pay no attention to me,
just go compute a useful value from other arguments." (And I still
have trouble with that viewpoint.)

Similarly, I always took Inf to mean "unbounded". But C99 actually
expects you to compute many different angles for atan2, as if two
Inf values were usefully taken as equal sides of a triangle. (And
I still have trouble with that viewpoint.)
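
Concretely (assuming a <math.h> that implements Annex F):

#include <math.h>
#include <stdio.h>

int main(void)
{
    printf("%f\n", atan2(INFINITY, INFINITY));     /* pi/4,   about 0.785398 */
    printf("%f\n", atan2(INFINITY, -INFINITY));    /* 3*pi/4, about 2.356194 */
    return 0;
}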

I was unaware of several of the subtle requirements of IEEE 754,
particularly regarding domain vs. range errors and Inf vs. NaN
return values at mathematical singularities.

The code I presented in that book often botched denormal arguments.
It's all too easy to blow away those tiny values by mixing them
unnecessarily with other values.
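
For example (my illustration here, not code from the book): add an
ordinary value to a denormal and subtract it again, and the tiny value
is gone.

#include <float.h>
#include <stdio.h>

int main(void)
{
    double tiny = DBL_MIN / 4.0;        /* a denormal, about 5.6e-309 */
    double lost = (tiny + 1.0) - 1.0;   /* rounds to 1.0, then back to exactly 0.0 */
    printf("%g %g\n", tiny, lost);      /* prints the denormal, then 0 */
    return 0;
}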

The code was also intentionally cavalier about preserving, and
generating, -0 results. (And I still think that -0 is of dubious
value.)
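
What -0 does buy you, for what it's worth, is a record of the direction
from which zero was reached (a small check, assuming IEEE 754 division
and an Annex F atan2):

#include <math.h>
#include <stdio.h>

int main(void)
{
    printf("%g %g\n", 1.0 / 0.0, 1.0 / -0.0);                /* inf  -inf */
    printf("%f %f\n", atan2(0.0, -1.0), atan2(-0.0, -1.0));  /* pi   -pi  */
    return 0;
}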

All I had back then were the Elefunt tests, which I translated
painfully from Fortran. With the much better testing technology
we've developed in Dinkumware, we've cleaned up quite a few
spots where we lost accuracy. We also eliminated a few spots
where we were trying too hard, to no avail.

Finally, our work in Dinkumware on the special math functions
drove home the importance of sensitivity, both in determining
how hard an implementer should try to get exactly the right
result and in how much a user should trust numeric
approximations to functions in their sensitive regions.

That's all.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 

glen herrmannsfeldt

(snip)
> Well, the topic of this thread is one example. I assumed that NaN
> always meant, "Abandon hope, all meaning is lost." The idea never
> occurred to me that it might simply mean, "Pay no attention to me,
> just go compute a useful value from other arguments." (And I still
> have trouble with that viewpoint.)

The R language, mostly used for statistics work, has both NA and NaN:
NA when you don't know anything about the value, often used in input
data where no data exists, and NaN usually arising as the result of
computations.

I believe the distinction could also make sense in other languages,
though I don't expect it to appear anytime soon.

-- glen
 

James Giles

P.J. Plauger said:
I guess the parenthetic quibble gives you some wiggle room, but
I still have trouble with it. Our fenv.h tests seem to indicate
that the popular architectures *do* meet the requirements of
IEEE 754 in this area.

I suppose you could regard trap handlers as part of "IEEE exceptions".
I've never seen an implementation of IEEE traps.

--
J. Giles

"I conclude that there are two ways of constructing a software
design: One way is to make it so simple that there are obviously
no deficiencies and the other way is to make it so complicated
that there are no obvious deficiencies." -- C. A. R. Hoare
 

David N. Williams

P.J. Plauger said:
> [...]
>
> The code was also intentionally cavalier about preserving, and
> generating, -0 results. (And I still think that -0 is of dubious
> value.)

I hesitate to stick my head up in such august company, being so
nonaugust, but I was impressed by how easy it was to replicate
Kahan's Borda's mouthpiece example in C99-based ANS Forth
implementations (pfe and gforth) of complex mathematics:

http://www-personal.umich.edu/~williams/archive/forth/complex/borda.html

And furthermore how easy it was to get the right answers
generally for the complex elementary functions above and below
their principal value branch cuts:

http://www-personal.umich.edu/~williams/archive/forth/complex/complex-ieee-test.fs

The link just above is not comprehensible outside of a niche of
a niche. :)

-- David
 

Nick Maclaren

|> >
|> > The code was also intentionally cavalier about preserving, and
|> > generating, -0 results. (And I still think that -0 is of dubious
|> > value.)
|>
|> I hesitate to stick my head up in such august company, being so
|> nonaugust, ...

Don't worry - as in politics, it is often the least august people who
see most clearly.

|> but I was impressed by how easy it was to replicate
|> Kahan's Borda's mouthpiece example in C99-based ANS Forth
|> implementations (pfe and gforth) of complex mathematics:
|>
|> http://www-personal.umich.edu/~williams/archive/forth/complex/borda.html
|>
|> And furthermore how easy it was to get the right answers
|> generally for the complex elementary functions above and below
|> their principal value branch cuts:
|>
|> http://www-personal.umich.edu/~williams/archive/forth/complex/complex-ieee-test.fs

Well, yes, but one of the questions is whether that is useful. Now, I
quite agree that a numerical analyst as good as Kahan can use branch
cut information directly in real applications, and I date from the days
when we were taught to do so (though I doubt that I still can), but I
know of nobody under 45 who can.

My experience is that Bill Plauger is understating the case - the IEEE 754
handling of zeroes (and hence infinities) causes a huge number of errors
that would not have occurred in older arithmetics. People REALLY don't
expect the "gotchas" it introduces - and C99 makes that a lot worse.

|> The link just above is not comprehensible outside of a niche of
|> a niche. :)

Very true. In a related niche, I have a simple test that shows up the
chaos with C99's complex division and overflow. It really is VERY
nasty :-(


Regards,
Nick Maclaren.
 

Nick Maclaren

|>
|> I suppose you could regard trap handlers as part of "IEEE exceptions".
|> I've never seen an implementation of IEEE traps.

I have. I have even tested them. They didn't work, because few (if
any) modern systems support interrupt recovery by user code properly.
However, Plauger has mangled what I said so appallingly that I hope
that you don't think that I said what he implied I said.

What I said was:

Many or most don't support denormalised numbers or exceptions
(as it specifies them) in some of their modes (often their defaults),
and some older and embedded systems didn't/don't support NaNs or
infinities.

That statement is true and I stand by it. I did not say that any
modern general-purpose desktop and server systems don't support
IEEE 754 at all[*], though many do not support C99 - for good reasons,
including customer demand.


[*] zOS supports only a subset - see:

http://www-306.ibm.com/software/awdtools/czos/features/


Regards,
Nick Maclaren.
 

Nick Maclaren

|>
|> > Secondly, IEEE 754 has NOT gained widespread acceptance, and almost all
|> > "IEEE 754" systems use its format and not its semantics, or go in for
|> > some simplification or variant of it.
|>
|> You could say the same thing about Standard C, if you're sufficiently
|> vague about degree of conformance. The fact is that *most* architectures
|> in widespread use support a pretty good approximation to IEEE 754. If
|> you want to savor all the small ways they don't quite conform, then ask
|> Fred Tydeman to give you his list of grievances. But in real life they
|> hardly matter to the practicing programmer.

You cannot now say the same about C90, though you could up to about 1995.
You can, indeed, say the same about C99 - and a large part of the reason
for that is customer demand for C90 (for very good reasons).

Unfortunately, they do matter very much to anyone who is attempting to
write robust, portable code or is in the position of assisting users
who develop on one system and then find that their code doesn't work
on another. I have considerable experience with both.

|> > Many or most don't support
|> > denormalised numbers
|>
|> I called you on this in Berlin and you temporized. The statement
|> IME is simply untrue. And ME involves supporting Standard C99 and
|> C++ libraries on about a dozen popular architectures, with about
|> as many floating-point processors.

That is a falsehood. As you have done here, you quoted me out of
context, completely changing the meaning of what I said to something
that I did not say and was false, and I was not given a chance to
respond by the chairman. I wondered whether to object on a point
of order, but that seemed excessive.

|> > or exceptions (as it specifies them)
|>
|> I guess the parenthetic quibble gives you some wiggle room, but
|> I still have trouble with it. Our fenv.h tests seem to indicate
|> that the popular architectures *do* meet the requirements of
|> IEEE 754 in this area.
|>
|> > in some of
|> > their modes (often their defaults),
|>
|> Another potential quibble, which I still have trouble believing,
|> from direct experience.

For heaven's sake! Quoting individual phrases (not even clauses!) out
of context is a low political trick. I have requoted the full sentence
in another posting, and shall not do so here.

My statement was, however, based on detailed investigations of four
Unices on four totally separate architectures, and brief ones on a fair
number of others. In particular, it is true for at least AIX, IRIX,
Solaris and Linux/gcc.

|> > Thirdly, only Java has attempted to include it as part of its
|> > specification,
|> > and Kahan has written a diatribe against Java.
|>
|> IIRC, the diatribe did *not* complain that Java slavishly followed
|> IEEE 754 to its detriment. There was this little thing about favoring
|> Sun architecture over Intel, then patching thing up in a half-assed
|> manner...

Yes, and there was a much larger thing about how it was essential to
provide both the flags and the values in order to get reliable exception
handling.

|> > C99 has some support, but
|> > words fail me when describing how unusable it is.
|>
|> The detailed words always seem to fail you, but you're consistently
|> quick with the poisonous adjectives.

As you know, that is another falsehood. I wrote a great many detailed
descriptions of the problems for the SC22WG14 reflector, often including
solutions, and some of them were raised by the BSI as official National
Body comments. All were either ignored or responded to entirely by
ad hominem attacks, as you are doing here.

I have a fair number of very detailed documents describing the issues,
and often solutions to the problems, which have been widely circulated.
I posted one to the SC22WG21 list, which you managed to get ignored
without consideration at a meeting at which I was not present, apparently
by claiming that it was false. However, I should point out that I gave
an independent reference that my statements were correct (Python).

[ To anyone else: please Email me if you want copies, and tell me
what aspects you are interested in. I don't guarantee to be able to
find everything! ]

|> In fact, C99 has two extensive
|> annexes (F and G) describing in exquisite detail how IEEE 754 and
|> Standard C should play together.

Well, no, and that was one of the reasons that the BSI voted "no"
to C99 and many customers have explicitly specified C90. As I gave
enough reasons in a previous response in this thread, I shan't repeat
them.

|> My current company Dinkumware, Ltd.
|> has endeavored to conform completely to those annexes, and aside
|> from a few aggressively perverse tests in Tydeman's floating-point
|> suite we do just fine. While I don't always agree with some of the
|> more creative choices made in those annexes (particularly G), I had
|> little trouble following their guidance. Hence, I have to disagree
|> that C99 support for IEEE 754 is unusable, either to us or our
|> customers.

There is a difference between "unimplementable" and "unusable". I
never said that Annex F or G were unimplementable on systems with
hardware that supports IEEE 754 in at least one mode. They are
unusable for real applications that need robustness, portability
and efficiency (and, to some extent, any of those).



Enough, already. I am not going to respond to your attacks further.
If you ask me to justify what I have actually said, I may respond, but
that is unlikely if you misquote to the above extent.


Regards,
Nick Maclaren.
 

Keith Thompson

> Very true. In a related niche, I have a simple test that shows up the
> chaos with C99's complex division and overflow. It really is VERY
> nasty :-(

Can you post it, or a link to it if it's too large? I think a lot of
comp.lang.c folks would be interested (not sure about
comp.lang.fortran).
 

P.J. Plauger

...
> |> I called you on this in Berlin and you temporized. The statement
> |> IME is simply untrue. And ME involves supporting Standard C99 and
> |> C++ libraries on about a dozen popular architectures, with about
> |> as many floating-point processors.

> That is a falsehood.

That's one.
...
> As you have done here, you quoted me out of
> context, completely changing the meaning of what I said to something
> that I did not say and was false, and I was not given a chance to
> respond by the chairman. I wondered whether to object on a point
> of order, but that seemed excessive.

You're responding now, by questioning my integrity and motives.
...
> |> Another potential quibble, which I still have trouble believing,
> |> from direct experience.

> For heaven's sake! Quoting individual phrases (not even clauses!) out
> of context is a low political trick.

That's two.
...
> |> > C99 has some support, but
> |> > words fail me when describing how unusable it is.
> |>
> |> The detailed words always seem to fail you, but you're consistently
> |> quick with the poisonous adjectives.

> As you know, that is another falsehood.

That's three strikes, and you're completely out of line. It's one
thing to accuse me of stating falsehoods, and of various other
tricks; it's quite another to say that I deliberately made a false
statement.
...
> Enough, already. I am not going to respond to your attacks further.
> If you ask me to justify what I have actually said, I may respond, but
> that is unlikely if you misquote to the above extent.

Not to worry. The conversation is over.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 

David N. Williams

Nick said:
> In article <[email protected]>,
> [...]
> |> but I was impressed by how easy it was to replicate
> |> Kahan's Borda's mouthpiece example in C99-based ANS Forth
> |> implementations (pfe and gforth) of complex mathematics:
> |>
> |> http://www-personal.umich.edu/~williams/archive/forth/complex/borda.html
> |>
> |> And furthermore how easy it was to get the right answers
> |> generally for the complex elementary functions above and below
> |> their principal value branch cuts:
> |>
> |> http://www-personal.umich.edu/~williams/archive/forth/complex/complex-ieee-test.fs
>
> Well, yes, but one of the questions is whether that is useful. [...]
Indeed!

> My experience is that Bill Plauger is understating the case - the IEEE 754
> handling of zeroes (and hence infinities) causes a huge number of errors
> that would not have occurred in older arithmetics. People REALLY don't
> expect the "gotchas" it introduces - and C99 makes that a lot worse.

I really don't have much experience here.
> |> The link just above is not comprehensible outside of a niche of
> |> a niche. :)
>
> Very true. In a related niche, I have a simple test that shows up the
> chaos with C99's complex division and overflow. It really is VERY
> nasty :-(

I'd be curious to see it. The stuff I was talking about doesn't
use the C99 complex math library at all, just the underlying
gcc support for IEEE 754.

-- David
 

Nick Maclaren

|> >
|> > Very true. In a related niche, I have a simple test that shows up the
|> > chaos with C99's complex division and overflow. It really is VERY
|> > nasty :-(
|>
|> I'd be curious to see it. The stuff I was talking about doesn't
|> use the C99 complex math library at all, just the underlying
|> gcc support for IEEE 754.

Mine doesn't, either. Yes, it was designed with malice aforethought,
but I was checking whether the situation was really as bad as a look
at the code indicated it was. It shows what happens if you use the
'native' complex divide, the example in Annex G, or calculate the real
and imaginary parts separately. The results on several systems are
along the lines of:

0.00e+00 (1.000,1.000) (1.000,1.000) (1.000,1.000)
1.00e+307 (1.089,0.891) (1.089,0.891) (1.089,0.891)
2.00e+307 (1.154,0.769) (1.154,0.769) (1.154,0.769)
3.00e+307 (1.193,0.642) (1.193,0.642) (1.193,0.642)
4.00e+307 (1.207,0.517) (1.207,0.517) (1.207,0.517)
5.00e+307 (1.200,0.400) (1.200,0.400) (1.200,0.400)
6.00e+307 (1.176,0.294) (1.176,0.294) (1.176,0.294)
7.00e+307 (1.141,0.201) (inf,nan) (1.141,0.201)
8.00e+307 (inf,nan) (inf,nan) (inf,0.122)
9.00e+307 (nan,nan) (inf,nan) (nan,0.000)
1.00e+308 (nan,nan) (inf,nan) (nan,0.000)
1.10e+308 (nan,nan) (inf,nan) (nan,-0.000)
1.20e+308 (nan,nan) (inf,nan) (nan,-0.000)
1.30e+308 (0.000,-0.000) (inf,nan) (0.000,-0.000)
1.40e+308 (0.000,-0.000) (inf,nan) (0.000,-0.000)
1.50e+308 (0.000,-0.000) (inf,nan) (0.000,-0.000)
1.60e+308 (0.000,-0.000) (inf,nan) (0.000,-0.000)
1.70e+308 (0.000,-0.000) (nan,nan) (0.000,-0.000)
inf (nan,nan) (nan,nan) (0.000,-0.000)
inf (nan,nan) (nan,nan) (0.000,-0.000)

Note that the result should be decreasing, but becomes infinite before
it drops to zero. The example code does rather better, but the rule
that (inf,nan) is an infinity causes significant trouble - as I knew it
would, and was investigating. I don't know how much better is possible,
because this sort of issue is very nasty.


Regards,
Nick Maclaren.




#pragma STDC CX_LIMITED_RANGE OFF
#pragma STDC FP_CONTRACT OFF
#include <math.h>
#include <stdio.h>
#include <complex.h>

#ifndef TRUST_C99
/* Fallbacks for implementations without full C99 <math.h>/<complex.h> support. */
#define creal(x) ((double)(x))
#define cimag(x) ((double)(-I*(x)))
#define INFINITY (1.0/0.0)
#define isfinite(x) (! isinf(x) && ! isnan(x))
extern double fmax(double, double);
#endif

/* Complex division following the example code in C99 Annex G. */
double complex cdivd(double complex z, double complex w)
{
    double a, b, c, d, logbw, denom, x, y;
    int ilogbw = 0;

    a = creal(z); b = cimag(z);
    c = creal(w); d = cimag(w);
    logbw = logb(fmax(fabs(c), fabs(d)));
    if (isfinite(logbw)) {
        ilogbw = (int)logbw;
        c = scalbn(c, -ilogbw); d = scalbn(d, -ilogbw);
    }
    denom = c * c + d * d;
    x = scalbn((a * c + b * d) / denom, -ilogbw);
    y = scalbn((b * c - a * d) / denom, -ilogbw);
    /* Recover infinities and zeros that were computed as NaN + I*NaN. */
    if (isnan(x) && isnan(y)) {
        if ((denom == 0.0) &&
                (!isnan(a) || !isnan(b))) {
            x = copysign(INFINITY, c) * a;
            y = copysign(INFINITY, c) * b;
        }
        else if ((isinf(a) || isinf(b)) &&
                isfinite(c) && isfinite(d)) {
            a = copysign(isinf(a) ? 1.0 : 0.0, a);
            b = copysign(isinf(b) ? 1.0 : 0.0, b);
            x = INFINITY * ( a * c + b * d );
            y = INFINITY * ( b * c - a * d );
        }
        else if (isinf(logbw) &&
                isfinite(a) && isfinite(b)) {
            c = copysign(isinf(c) ? 1.0 : 0.0, c);
            d = copysign(isinf(d) ? 1.0 : 0.0, d);
            x = 0.0 * ( a * c + b * d );
            y = 0.0 * ( b * c - a * d );
        }
    }
    return x + I * y;
}

/* Compare (1e308 + 1e308*I) / (1e308 + d*I) computed three ways: the
   native complex divide, cdivd() above, and the real and imaginary
   parts calculated separately. */
int main() {
    int i;
    double d, r, z;
    double complex c1, c2;

    for (i = 0; i < 20; ++i) {
        d = i*0.1e308;
        c1 = (1.0e308+I*1.0e308)/(1.0e308+I*d);
        c2 = cdivd(1.0e308+I*1.0e308, 1.0e308+I*d);
        if (1.0e308 > d) {
            r = d/1.0e308;
            z = 1.0e308+d*r;
            printf("%.2e (%.3f,%.3f) (%.3f,%.3f) (%.3f,%.3f)\n", d,
                creal(c1), cimag(c1), creal(c2), cimag(c2),
                (1.0e308+1.0e308*r)/z, (1.0e308-1.0e308*r)/z);
        } else {
            r = 1.0e308/d;
            z = d+1.0e308*r;
            printf("%.2e (%.3f,%.3f) (%.3f,%.3f) (%.3f,%.3f)\n", d,
                creal(c1), cimag(c1), creal(c2), cimag(c2),
                (1.0e308*r+1.0e308)/z, (1.0e308*r-1.0e308)/z);
        }
    }
    return 0;
}
 
