P.J. Plauger said:
Exactly. That's *so* much more work than simply fleshing out good
support for 754R *if present* that it's really worth avoiding, if
at all possible. I want to explore thoroughly the implications of
*not* having Standard C support multiple floating-point formats
simultaneously, before we commit to adding all that complexity to C.
This is a very valid, and important, question, which we (IBM)
spent some time considering before proposing the addition of
new types. Here's a slightly edited extract from a note Raymond
Mak wrote describing some of the main points [additional comments
by me are in square brackets]:
... there was a question about using a pragma to switch the
meaning of the floating-point types [between base 10 and base
2].
Yes, in principle it can be done, and on the surface it might
seem it would limit complexity. But after some code
prototyping, and thinking it through more carefully, we found
that using a pragma has a number of disadvantages.
The main points are quickly summarized below:
1/ The fact that there are two sets of floating-point types in
itself does not mean the language would become more complex.
The complexity question should be answered from the
perspective of the user's program - that is, do the new data
types add complexity to the user's code? My answer is no,
except for the issues surrounding implicit conversions, which
I will address below. For a program that uses only binary
floating-point [FP] types, or uses only decimal FP types,
the programmer is still working with at most three FP
types. We are not making the program more difficult to
write, understand, or maintain.
2/ Implicit conversions can be handled by simply disallowing them
(except maybe for cases that involve literals). If we do this,
for compilation units that have both binary and decimal FP types, the
code is still clean and easy to understand. In a large
source file, with std pragma flipping the meaning of the
types back and forth, the code is actually a field of land
mines for the maintenance programmer, who might not be
immediately aware of the context of the piece of code.
[For example, if a piece of code expected to be doing
'safe' exact decimal calculations were accidentally
switched to use binary, the change could be very hard to
detect, or only cause occasional failure.]
3/ Giving two meanings to one data type hurts type safety. A
program may bind by mistake to the wrong library, causing
runtime errors that are difficult to trace. It is always
preferable to detect errors at compile time. Overloading
the meaning of a data type makes the language more
complicated, not simpler.
4/ A related advantage of using separate types is that it
facilitates the use of source checking/scanning utilities (or
scripts). They can easily detect which FP types are used
in a piece of code with just local processing. If a std
pragma can change the representation of a type, the use of
grep, for example, as an aid to understand and to search
program text would become very difficult.
Comparatively speaking, this is not so much a technical issue
for the implementation as it might seem on the surface --
i.e., it might seem easier to just attach new meaning to
existing types -- but an issue of usability for the
programmer. The meaning of a piece of code can become
obscure if we reuse the float/double/long double types.
Also, I feel that we have a chance here to bind the C
behavior directly with [the new] IEEE types, reducing the
number of variations among implementations. This would help
programmers write portable code, with one source tree
building on multiple platforms. Using a new set of data
types is the cleanest way to achieve this.
To this I would add (at least) a few more problems with
the 'overloading' approach:
5/ There would be no way for a programmer in a 'decimal'
program to invoke routines in existing (binary) libraries.
Every existing routine and library would need to be
rewritten for decimal floating-point, whereas in many
(most?) cases the binary value from an existing library
would have been perfectly adequate.
6/ Similarly, any new routine that was written using decimal FP
would be inaccessible to programmers writing programs which
primarily used binary FP.
7/ There would be no way to modify existing programs (using
binary FP calculation) to cleanly access data in the new
IEEE 754 decimal formats.
8/ There would be no way to have both binary and decimal
FP variables in the same data structure.
9/ Debuggers would have no way of detecting whether an FP number
is decimal or binary and so would be unable to display the
value in a human-readable form. The datatypes need to be
distinguished at the language level and below.
The new decimal types are true primitives, which will exist at
the hardware level. Unlike compound types (such as Complex),
which are built from existing primitives, they are first class
primitives in their own right. As such, they are in the same
category as ints and doubles, and should be treated similarly and
distinctly.
Mike Cowlishaw