why still use C?


glen herrmannsfeldt

Dan said:
What for? Given the intrinsic nature of floating point, if the base is
relevant to your application, then you should review the application.

If the ability to represent $3.11 *exactly* is important, simply do all
your computing in pennies and the base used by your floating point
representation no longer matters (only the precision is relevant).
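For instance, the whole computation can be carried in integer pennies; a minimal sketch (the helper names here are invented for illustration):

#include <stdio.h>

/* Hold money as a count of pennies: $3.11 is the exact integer 311. */
typedef long long cents_t;

static void print_price(cents_t c)
{
    printf("$%lld.%02lld\n", c / 100, c % 100);
}

int main(void)
{
    cents_t price = 311;          /* $3.11, represented exactly */
    cents_t total = price + price;
    print_price(total);           /* prints $6.22 */
    return 0;
}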

I think I agree 100% with these statements, but not everyone else does,
and not everyone even understands what they mean.

To me, it would almost be worth it for the reduction in posts asking why
some operation comes out 24.299999 when it obviously should be 24.3.
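On a typical IEEE 754 binary implementation the effect is easy to reproduce; a minimal sketch:

#include <stdio.h>

int main(void)
{
    float f = 24.3f;    /* nearest binary float to 24.3 */
    printf("%f\n", f);  /* prints 24.299999: the value actually stored */
    return 0;
}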

Maybe if a license was required before one was allowed to use floating
point arithmetic, but that isn't likely anytime soon.

-- glen
 

Dan Pop

In said:
Certainly the proposal accepted by both the C and C++ committees
calls for a parallel set of FP capabilities, but I confess I'm
a long way from being convinced that such added complexity is
either necessary or desirable. I *am* convinced that fleshing
out IEEE 754R decimal FP arithmetic as an alternative representation
for the existing three FP builtin types is quite viable.

Can this be done without changing the current floating point model of the
C standard? IIRC, the IEEE 754R specification goes beyond merely
specifying base 10. Are the other details compatible with the C floating
point model?

Dan
 

Dan Pop

In said:
Morris Dovey said:
[1] Assuming the standard were changed to include FP-10, will the
compiler producers consider the new standard irrelevant, given
that it covers hardware not now available anywhere in the world?

Yes... where appropriate. I.e., FP-10 will never become available on many
architectures. However, I think it is likely to become common on many
others. Therefore compilers will support it.

There's no question of supporting it in the compiler if the hardware
supports it. The question is about the prospects of a C standard
mandating unconditional support for it, on platforms with no hardware
support for it.

For the time being, the prospects of C99 as an industry standard are
not particularly bright...

Dan
 

Hans-Bernhard Broeker

You really think so?

Yes. Been there, done that. It was a big high-energy physics
experiment, with a total project time of more than a decade, and still
counting. Lots of computations go on between the actual raw data
taking and the output of published results. At least 3 generations of
computer hardware were involved over the time the experiment has been
running, and they want to be sure that changing the FPU doesn't affect
the results. Not even minimally. The result was that they decided to
re-configure the Intel FPUs to turn off their "excess" precision.
These guys would be *very* upset if a compiler came out that no longer
supported binary FP.
Most `scientists' I know are content to have their FP calculations
produce results that look more or less reasonable to, say, a dozen
decimal places.

Then maybe you only know `scientists' (including the quotes), but no
actual scientists.

I see no problem adding new features to the language. But the day you
start removing features is when you may be causing real trouble for
people out there.

Actually, if the plan were just to use decimal FP *instead* of the now
common binary FP, there would be nothing for the committee to decide
about, as far as I can see. A platform with FLT_RADIX==10 should be
perfectly compliant right now. It might enrage
some potential users and steer them away from such a platform, but
that's an economic risk for its vendors to worry about rather than a
concern for the C standardization committee.
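For what it's worth, the base is already visible to a program through <float.h>; a minimal sketch:

#include <stdio.h>
#include <float.h>

int main(void)
{
    /* FLT_RADIX is the radix of all three FP types; a decimal-FP-only
       platform would simply report 10 here. */
    printf("FLT_RADIX    = %d\n", FLT_RADIX);
    printf("DBL_MANT_DIG = %d\n", DBL_MANT_DIG); /* digits, in that radix */
    return 0;
}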

The only thing the current standard(s) don't support, and which would
thus require an actual committee decision, is having more than one
FP base available on the same platform, to be used within the same
program. And once you support more than one, you have to make essentially
three substantial decisions:

1) Whether to prescribe which of them is used as good old float,
double & surrounding tools, or leave that to the implementors
2) If so, which one to prescribe.
3) Whether to make support for the known-base types optional or mandatory
 

Kevin Bracey

In message <[email protected]>
Fergus Henderson said:
The problem is that C doesn't have any support for user-defined operators,
operator overloading, or indeed any user-defined infix notation, so
user-defined numeric types feel decidedly like second-class citizens in
comparison with the built-in types.

E.g. compare

decimal_t x, y, z, a, b, c;
z += x * y * z + a * b * (c - 1);

with

decimal_t x, y, z, a, b, c;
decimal_add_assign(&z, decimal_add(
    decimal_mult(decimal_mult(x, y), z),
    decimal_mult(decimal_mult(a, b),
                 decimal_sub(c, int_to_decimal(1)))));

The former is a lot more readable.

Which strikes me as a very good argument for using C++. I mean, that's
the language with user-defined operators, parameterised classes and all
that jazz. If one really wants float<2> and float<10> etc., surely C++
is the way to go.

Create an analogue of Annex F for implementations with FLT_RADIX==10 by all
means, but the added complexity of yet another class of base types? The
complex stuff is hairy enough. I've certainly not been able to tackle
implementing any of that yet, and it's not clear that my users have any
particular interest in it. Any base 10 stuff will be similar.
 

P.J. Plauger

Can this be done without changing the current floating point model of the
C standard? IIRC, the IEEE 754R specification goes beyond merely
specifying base 10. Are the other details compatible with the C floating
point model?
From what I've seen so far, and I have studied 754R more than casually,
the answer is yes. 754R adds a few functions that are quite useful
for (possibly denormalized) base 10 arithmetic, but none of those
functions are completely silly when applied to (usually normalized)
binary arithmetic either.

Luckily, we put in C89 the possibility that FLT_RADIX could be other
than 2. Mostly that was to accommodate S/370, which is base 16. But
we also had an eye to a few machines that did decimal floating point
in software. So the basic C model is not severely stressed by 754R.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 

P.J. Plauger

Yes. Been there, done that. It was a big high-energy physics
experiment, with a total project time of more than a decade, and still
counting. Lots of computations go on between the actual raw data
taking and the output of published results. At least 3 generations of
computer hardware were involved over the time the experiment has been
running, and they want to be sure that changing the FPU doesn't affect
the results. Not even minimally. The result was that they decided to
re-configure the Intel FPUs to turn off their "excess" precision.
These guys would be *very* upset if a compiler came out that no longer
supported binary FP.

This is the kind of simplistic thinking I was referring to when I
wrote (and you clipped):

: Perhaps you're thinking of the constituency who think the `right'
: answer is the one that reproduces all the noise digits exactly
: from the original test run. Not much help for them.

I remember when Princeton University upgraded their IBM 7090 to an
IBM 7094, which would trap on a zero divide instead of effectively
dividing by one. After one week of production code bombing regularly,
the user community *demanded* that the divide check trap be left
permanently disabled. They just didn't want to know...

I have trouble stifling the evolution of a programming language
standard to accommodate people who `solve' problems this way.
Then maybe you only know `scientists' (including the quotes), but no
actual scientists.

Uh, I spent most of a decade in cyclotron laboratories while earning
an AB and a PhD in nuclear physics. I worked my way through school
writing computer programs for both theoretical calculations and
data reduction. I've spent a good part of the past 40 years writing
and rewriting math functions that only a small fraction of our
clientele cares much about. Many of them consider themselves
`scientists'. I do too, to the extent they favor rational thinking
over dogma and/or officiousness.
I see no problem adding new features to the language. But the day you
start removing features is when you may be causing real trouble for
people out there.

Who's talking about removing features? Standard C has *never* promised
that floating-point arithmetic will be done in binary. And it's damned
hard to write a program that can determine whether it is.
Actually, if the plan were just to use decimal FP *instead* of the now
common binary FP, there would be nothing for the committee to decide
about, as far as I can see. A platform with FLT_RADIX==10 should be
perfectly compliant right now. It might enrage
some potential users and steer them away from such a platform, but
that's an economic risk for its vendors to worry about rather than a
concern for the C standardization committee.

You're mostly correct. The one added piece of work would be to add
the handful of functions recommended by IEEE 754R. I believe those
can and should be overhauled to be useful for floating-point of
*any* base.
The only thing the current standard(s) don't support, and which would
thus require an actual committee decision, is having more than one
FP base available on the same platform, to be used within the same
program. And once you support more than one, you have to make essentially
three substantial decisions:

1) Whether to prescribe which of them is used as good old float,
double & surrounding tools, or leave that to the implementors
2) If so, which one to prescribe.
3) Whether to make support for the known-base types optional or mandatory

Exactly. That's *so* much more work than simply fleshing out good
support for 754R *if present* that it's really worth avoiding, if
at all possible. I want to explore thoroughly the implications of *not*
having Standard C support multiple floating-point formats simultaneously,
before we commit to adding all that complexity to C.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 

Sheldon Simms

This is trivially handled by defining the scaling factor as a macro.
The precision of an IEEE 754 double allows enough space for that, without
limiting the range of precise representations too much for the needs of
the usual financial application.
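A minimal sketch of the macro scheme being described (the names are invented for illustration):

#include <stdio.h>

/* Work in pennies, carried in a double. An IEEE 754 double represents
   integers exactly up to 2^53, so integer penny amounts stay exact up
   to roughly $90 trillion. */
#define SCALE 100.0

int main(void)
{
    double price = 311.0;             /* $3.11, held as 311 pennies */
    double total = price + price;     /* integer-valued, hence exact */
    printf("$%.2f\n", total / SCALE); /* prints $6.22 */
    return 0;
}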

And what is the exact result of dividing 1 penny by 3, when using
decimal floating point arithmetic? Or is division a prohibited operation
for this class of applications?

For that particular calculation you'll have to use the new trinary
arithmetic annex being added to support such things.
 

Douglas A. Gwyn

P.J. Plauger said:
... I want to explore thoroughly the implications of *not*
having Standard C support multiple floating-point formats simultaneously,
before we commit to adding all that complexity to C.

The main implication would be that applications requiring
the properties of decimal f.p. would not be portable to C
implementations that use binary f.p. as the sole flavor
of f.p. (and if there are any depending on the properties
of binary f.p. they would not be portable in the other
direction). It could be that software emulation of the
other flavor of f.p. would be adequate in many cases, and
requiring support would thus enhance portability. We
need to determine a reliable estimate for the number of
applications in this category.

In past committee discussions, representatives of the
numerical analysis community have assured us that there
are important algorithms in use where the exact behavior
of the lowest-order bits significantly affects the
outcome of f.p. computation. Thus that community would
presumably care whether a binary or a decimal radix was
used, and we should get their feedback also.

I'll also remark that this newsgroup discussion isn't a
very effective way to proceed. In several days of
discussion so far, no point has been made that wasn't
dealt with within a few minutes at the evening session
during the recent Kona C meeting.
 

Hans-Bernhard Broeker

In comp.lang.c.moderated P.J. Plauger said:
[...]
running, and they want to be sure that changing the FPU doesn't affect
the results. Not even minimally. Result was that they decided to
re-configure the Intel FPUs to turn off their "excess" precision.
These guys would be *very* upset if a compiler came out that no longer
supported binary FP.
This is the kind of simplistic thinking I was referring to when I
wrote (and you clipped):
: Perhaps you're thinking of the constituency who think the `right'
: answer is the one that reproduces all the noise digits exactly
: from the original test run. Not much help for them.

Going over your arguments again, I concede your point. Sorry if I
came across as being mule-headed. Germans from the region of the
country I come from _do_ have a tendency to be quite stubborn...
I remember when Princeton University upgraded their IBM 7090 to an
IBM 7094, which would trap on a zero divide instead of effectively
dividing by one. After one week of production code bombing regularly,
the user community *demanded* that the divide check trap be left
permanently disabled. They just didn't want to know...

Well, the new hardware should have offered to silently return infinity
instead ...
Who's talking about removing features? Standard C has *never* promised
that floating-point arithmetic will be done in binary.

You're right of course, C never promised that. It's the hardware
designers who effectively did that. I'm not aware of any architecture
in active use right now that has FLT_RADIX != 2. In fact, there seems
to be hardly anything else but IEEE 754 (with its extensions) out
there these days.

Given that, programmers did start to rely on this de-facto standard,
and would at least somewhat justifiably be opposed to any change if
that caused extra work for them. Since that change would only come to
them with the next generation of hardware, though, users can still
vote against it with their hardware investments, so no harm done.
And it's damned hard to write a program that can determine whether
it is.

... that is ;-)

P.J. Plauger said:
Exactly. That's *so* much more work than simply fleshing out good
support for 754R *if present* that it's really worth avoiding, if at
all possible. I want to explore thoroughly the implications of *not*
having Standard C support multiple floating-point formats
simultaneously, before we commit to adding all that complexity to C.

I agree. So the issue boils down to this question:

How likely is it that a single program will really need both decimal and
binary FP?

If and only if the answer to that is "essentially never", then there's
nothing to do --- let the compiler vendors and platform ABI designers
make the decision on a per-program basis, and thus outside the realm
of standard C.
 

Mike Cowlishaw

Dan said:
This is trivially handled by defining the scaling factor as a macro.
The precision of an IEEE 754 double allows enough space for that,
without limiting the range of precise representations too much for
the needs of the usual financial application.

I suggest you try it; it is not trivial at all.
And what is the exact result of dividing 1 penny by 3, when using
decimal floating point arithmetic? Or is division a prohibited
operation for this class of applications?

This is exactly why floating-point is the right solution. Whenever
the result is Inexact, you get the best possible approximation to
the 'ideal' result -- that is, full precision. With a fixed scaling
factor you cannot make full use of the precision unless you
know how many digits there will be to the left of the radix
point.
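The Inexact condition Mike mentions is visible from C99 through <fenv.h>; a minimal sketch (using binary FP, since that's what's widely available, though 754R decimal behaves analogously):

#include <stdio.h>
#include <fenv.h>

int main(void)
{
    #pragma STDC FENV_ACCESS ON
    volatile double penny = 1.0, n = 3.0;  /* volatile: force a runtime divide */
    feclearexcept(FE_INEXACT);
    double share = penny / n;              /* best available approximation */
    if (fetestexcept(FE_INEXACT))
        printf("inexact: %.17g\n", share); /* 0.33333333333333331 */
    return 0;
}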

Mike Cowlishaw
 

Dan Pop

In said:
And you wish to lay that all at the feet of decimal floating
point? PL/I also had semicolons to end statements and a number of
other things that C has.

How about we carefully consider any proposed changes instead,
regardless of whether the features have been used before?

How about *you* carefully consider the meaning of the emoticon ending my
previous post?

Dan
 

Mike Cowlishaw

P.J. Plauger said:
Who's talking about removing features? Standard C has *never* promised
that floating-point arithmetic will be done in binary. And it's damned
hard to write a program that can determine whether it is.

This one should be quite reliable:

#include <stdio.h>

int main(void) {
    double x = 0.1;
    /* In binary, 0.1 is inexact: x*8 just shifts the exponent and is
       exact, while the repeated addition accumulates rounding error.
       In decimal, 0.1 is exact and the two sides are equal. */
    if (x*8 != x+x+x+x+x+x+x+x)
        printf("Binary");
    else
        printf("Decimal");
    return 0;
}

(Though it's harder to distinguish if there are more than just the two
possibilities, of course.)

Mike Cowlishaw
 

Dan Pop

In said:
PL/I did many things wrong (some of its coercions were/are quite
extraordinary). But given that it was designed in the 1960s it
actually was a significant step forward in many areas. It
certainly wasn't a failure, and it is still widely used today.

It failed to achieve the goal of its designers: to be the ultimate
programming language. It never achieved the popularity of the languages
it was supposed to render obsolete: FORTRAN and COBOL.

Dan
 

Mike Cowlishaw

P.J. Plauger said:
Exactly. That's *so* much more work than simply fleshing out good
support for 754R *if present* that it's really worth avoiding, if
at all possible. I want to explore thoroughly the implications of
*not* having Standard C support multiple floating-point formats
simultaneously, before we commit to adding all that complexity to C.

This is a very valid, and important, question, which we (IBM)
spent some time considering before proposing the addition of
new types. Here's a slightly edited extract from a note Raymond
Mak wrote describing some of the main points [additional comments
by me are in square brackets]:

... there was a question about using a pragma to switch the
meaning of the floating-point types [between base 10 and base
2].

Yes, in principle it can be done, and on the surface it might
seem it would limit complexity. But after some code
prototyping, and thinking it through more carefully, using a pragma
has a number of disadvantages.

Below is a quick summary of the main points:

1/ The fact that there are two sets of floating-point types in
itself does not mean the language would become more complex.

The complexity question should be answered from the
perspective of the user's program - that is, do the new data
types add complexity to the user's code? My answer is no,
except for the issues surrounding implicit conversions, which
I will address below. For a program that uses only binary
floating-point [FP] types, or uses only decimal FP types,
the programmer is still working with at most three FP
types. We are not making the program more difficult to
write, understand, or maintain.

2/ Implicit conversions can be handled by simply disallowing them
(except maybe for cases that involve literals). If we do this,
for compilation units that have both binary and decimal FP types, the
code is still clean and easy to understand. In a large
source file, with a std pragma flipping the meaning of the
types back and forth, the code is actually a field of land
mines for the maintenance programmer, who might not be
immediately aware of the context of the piece of code.

[For example, if a piece of code expected to be doing
'safe' exact decimal calculations were accidentally
switched to use binary, the change could be very hard to
detect, or only cause occasional failure.]

3/ Giving two meanings to one data type hurts type safety. A
program may bind by mistake to the wrong library, causing
runtime errors that are difficult to trace. It is always
preferable to detect errors at compile time. Overloading
the meaning of a data type makes the language more
complicated, not simpler.

4/ A related advantage of using separate types is that it
facilitates the use of source checking/scanning utilities (or
scripts). They can easily detect which FP types are used
in a piece of code with just local processing. If a std
pragma can change the representation of a type, the use of
grep, for example, as an aid to understanding and searching
program text would become very difficult.

Comparatively speaking, this is not a technical issue for the
implementation, as it might seem on the surface initially --
i.e., it might seem easier to just attach new meaning to
existing types -- but is an issue about usability for the
programmer. The meaning of a piece of code can become
obscure if we reuse the float/double/long double types.
Also, I feel that we have a chance here to bind the C
behavior directly with [the new] IEEE types, reducing the
number of variations among implementations. This would help
programmers writing portable code, with one source tree
building on multiple platforms. Using a new set of data
types is the cleanest way to achieve this.

To this I would add (at least) a few more problems with
the 'overloading' approach:

5/ There would be no way for a programmer in a 'decimal'
program to invoke routines in existing (binary) libraries.
Every existing routine and library would need to be
rewritten for decimal floating-point, whereas in many
(most?) cases the binary value from an existing library
would have been perfectly adequate.

6/ Similarly, any new routine that was written using decimal FP
would be inaccessible to programmers writing programs which
primarily used binary FP.

7/ There would be no way to modify existing programs (using
binary FP calculation) to cleanly access data in the new
IEEE 754 decimal formats.

8/ There would be no way to have both binary and decimal
FP variables in the same data structure.

9/ Debuggers would have no way of detecting whether a FP number
is decimal or binary and so would be unable to display the
value in a human-readable form. The datatypes need to be
distinguished at the language level and below.

The new decimal types are true primitives, which will exist at
the hardware level. Unlike compound types (such as Complex),
which are built from existing primitives, they are first class
primitives in their own right. As such, they are in the same
category as ints and doubles, and should be treated similarly and
distinctly.
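A minimal sketch of what the separate-types approach might look like to the programmer, assuming a compiler that implements the proposed types (spelled _Decimal64 in the draft, with the dd literal suffix likewise from the proposal):

#include <stdio.h>

int main(void)
{
    _Decimal64 price = 3.11dd;  /* decimal significand: 3.11 is exact */
    double     rate  = 0.05;    /* binary: 0.05 is not exact */

    /* With implicit conversions disallowed, mixing the two worlds
       requires an explicit cast, so the intent is visible in the code. */
    _Decimal64 tax = price * (_Decimal64)rate;

    /* Converting back to double keeps this sketch within C99 printf. */
    printf("tax ~= %f\n", (double)tax);
    return 0;
}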

Mike Cowlishaw
 

CBFalconer

P.J. Plauger said:
..... snip ...

This is the kind of simplistic thinking I was referring to when I
wrote (and you clipped):

: Perhaps you're thinking of the constituency who think the `right'
: answer is the one that reproduces all the noise digits exactly
: from the original test run. Not much help for them.

I remember when Princeton University upgraded their IBM 7090 to an
IBM 7094, which would trap on a zero divide instead of effectively
dividing by one. After one week of production code bombing regularly,
the user community *demanded* that the divide check trap be left
permanently disabled. They just didn't want to know...

I have trouble stifling the evolution of a programming language
standard to accommodate people who `solve' problems this way.

And the entire episode reminds me of those who complain(ed) about
Pascal's strong typing and range checking "getting in their way".

USE worldnet address!
 

Brian Inglis

This is the kind of simplistic thinking I was referring to when I
wrote (and you clipped):

: Perhaps you're thinking of the constituency who think the `right'
: answer is the one that reproduces all the noise digits exactly
: from the original test run. Not much help for them.

I remember when Princeton University upgraded their IBM 7090 to an
IBM 7094, which would trap on a zero divide instead of effectively
dividing by one. After one week of production code bombing regularly,
the user community *demanded* that the divide check trap be left
permanently disabled. They just didn't want to know...

I have trouble stifling the evolution of a programming language
standard to accommodate people who `solve' problems this way.


Uh, I spent most of a decade in cyclotron laboratories while earning
an AB and a PhD in nuclear physics. I worked my way through school
writing computer programs for both theoretical calculations and
data reduction. I've spent a good part of the past 40 years writing
and rewriting math functions that only a small fraction of our
clientele cares much about. Many of them consider themselves
`scientists'. I do too, to the extent they favor rational thinking
over dogma and/or officiousness.


Who's talking about removing features? Standard C has *never* promised
that floating-point arithmetic will be done in binary. And it's damned
hard to write a program that can determine whether it is.


You're mostly correct. The one added piece of work would be to add
the handful of functions recommended by IEEE 754R. I believe those
can and should be overhauled to be useful for floating-point of
*any* base.


Exactly. That's *so* much more work than simply fleshing out good
support for 754R *if present* that it's really worth avoiding, if
at all possible. I want to explore thoroughly the implications of *not*
having Standard C support multiple floating-point formats simultaneously,
before we commit to adding all that complexity to C.

Oh good -- "spirit of C" thinking -- and don't the IBM C compilers
currently select between binary and hex FP instructions with options?
Are they proposing to support anything more than a decimal FP
instruction option?
 

Brian Inglis

Morris Dovey wrote:

This would mean that one could never have both binary and decimal
FP data in the same program/structure.

This is a very good idea -- mixing binary, decimal (and hex) FP
formats in a structure or a set of related modules is a very bad idea
-- unless the radix can be detected at the hardware level.
A pragma which could be used
inside a program would be especially dangerous (consider the
base being switched inside an #include).
Nooooooooo!

The entire existing base
of binary FP functions could not be used from a program which
selected base 10.

Current IBM 390/z compilers and platforms don't appear to support any
mixing of binary and hex FP instructions and functions in the same
code -- Linux supports IEEE functions and requires IEEE instructions
-- native OSes support hex functions and require hex instructions.

I think the FP functions can be adequately handled by the tgmath.h
(generic math function names) additions, with the extra effort of
developing decimal FP functions borne solely by platforms where that is a
compiler option.
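For the binary types, <tgmath.h> already picks the right function from the argument's type; a minimal sketch of the mechanism being referred to:

#include <stdio.h>
#include <tgmath.h>

int main(void)
{
    /* The generic name sqrt dispatches on the argument's type. */
    float       f = sqrt(2.0f);  /* resolves to sqrtf */
    double      d = sqrt(2.0);   /* resolves to sqrt  */
    long double l = sqrt(2.0L);  /* resolves to sqrtl */
    printf("%f %f %Lf\n", f, d, l);
    return 0;
}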

Modules either have an explicit radix dependency or not -- those with
one require special coding or compile options -- those without either
don't care, or have to take precautions assuming the worst case --
typically the minimum limits documented in float.h.
 

Gene Wirchenko

How about *you* carefully consider the meaning of the emoticon ending my
previous post?

Ah, yes, but what about the fact that PL/I did fail big time? I think you
may have been more sincere than you intended.

Sincerely,

Gene Wirchenko

Computerese Irregular Verb Conjugation:
I have preferences.
You have biases.
He/She has prejudices.
 

P.J. Plauger

The main implication would be that applications requiring
the properties of decimal f.p. would not be portable to C
implementations that use binary f.p. as the sole flavor
of f.p. (and if there are any depending on the properties
of binary f.p. they would not be portable in the other
direction). It could be that software emulation of the
other flavor of f.p. would be adequate in many cases, and
requiring support would thus enhance portability. We
need to determine a reliable estimate for the number of
applications in this category.
Agreed.

In past committee discussions, representatives of the
numerical analysis community have assured us that there
are important algorithms in use where the exact behavior
of the lowest-order bits significantly affects the
outcome of f.p. computation. Thus that community would
presumably care whether a binary or a decimal radix was
used, and we should get their feedback also.

Also agreed.
I'll also remark that this newsgroup discussion isn't a
very effective way to proceed. In several days of
discussion so far, no point has been made that wasn't
dealt with within a few minutes at the evening session
during the recent Kona C meeting.

But it has educated a wider audience to some of the issues,
and that's an important part of proceeding.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 
