why still use C?


Thad Smith

Mike said:
Indeed, but there is the huge class of 'interesting' calculations
where using base 10 FP will yield exact results where binary
cannot.

The only time that you get an exact result with decimal and not with
binary is when you are dividing by a power of 5, optionally combined
with a power of 2.

If you have a daily rate and divide by 8 hours to get an hourly rate --
no benefit. If you have a weekly water usage and divide by 7 to get a
daily average -- no benefit. If you evaluate a transcendental, like
sine or take a square root -- no benefit.

When you have an approximation from any of these calculations and then
round to a given precision for presentation, binary usually gives the
same or closer results, given the same number of bits for the
calculation. If you use an IEEE-754 64-bit binary floating-point format,
you get in excess of 15 digits of precision. Which applications find
that insufficient, and why?
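
A minimal C sketch of the distinction, assuming IEEE-754 doubles (the
digits in the comments are what a typical implementation prints):

    #include <stdio.h>

    int main(void)
    {
        /* Division by a power of 2 is exact in binary floating point
           whenever the result is in range: 103/8 = 12.875 has a
           finite binary expansion. */
        printf("%.20g\n", 103.0 / 8.0);     /* 12.875 */

        /* Division by 10 usually is not: 10.3 has no finite binary
           expansion, so the nearest double is printed instead. */
        printf("%.20g\n", 103.0 / 10.0);    /* 10.300000000000000711 */
        return 0;
    }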

Thad
 

Gene Wirchenko

Eric Backus said:
You also get the best possible approximation to the 'ideal' result if you
use binary floating-point. I'm sure you know this, but it's worth
emphasizing anyway: decimal floating point does *not* get rid of the inexact
nature of floating-point operations for the majority of calculations.

I have not seen that anyone is saying that it does. It does give
less astonishing results, though (as in the Law of Least Astonishment).

Eric Backus said:
It seems to me that the *only* thing decimal floating point gets you is
conformance to the conventions used by the financial community for how to
deal with that inexactness. You can do that today with a library, so the
only thing that hardware support gets you is faster computations for those
calculations where this is a requirement.

The same would be true of binary floating point. Some have
raised the point that decimal floating point would raise the bar for
some embedded applications. Let us remove all floating point from the
language and handle it with libraries. If that does not seem too
appetising to you, consider how the people who want decimal float
feel.

I have long felt that the lack of decimal floating point is a
severe shortcoming in many languages.

Eric Backus said:
Note that this library could be implemented in assembly on those platforms
that have hardware support, so you don't really even need native C support
for decimal floating point in order to get the faster computations.

Replace "native" with "any".

Eric Backus said:
Of course, using such a library is inconvenient compared to having native
C support, but that inconvenience is paid only by those who need this
capability.

That is quite right. Many of the programs that I write do not
use floating point at all. Let us get rid of the baggage of floating
point. I mean, I could consider others, but a little inconvenience
will, ah, strengthen them.

Eric Backus said:
Is that group of people large enough that inconveniencing them
justifies changing the language?

You will never know for sure until it is available. If I need
decimal floating point in an app and C does not have it, C is out of
the running. I will not make elaborate complaints; I will simply not
consider a language that is inadequate for my requirements.

I understand that some people think that C should be inadequate
in this regard. It might be different if their ox were being gored.

Sincerely,

Gene Wirchenko

Computerese Irregular Verb Conjugation:
I have preferences.
You have biases.
He/She has prejudices.
 

James Kuyper

cody said:
When you have a C++ library you can only call C-style functions from that
library. You cannot export classes/methods from that library.

He said "call", not "export", and there's a lot more to the C++ Library
than Classes and Methods. For instance, to take a ridiculous (but AFAIK,
legal) case, memcpy() could be implemented as a wrapper for
std::copy<unsigned char>().
 

Brian Inglis

glen herrmannsfeldt said:
There are reasons for that unrelated to the language itself, such as how
long it took to write the compiler, and how slow it ran on machines that
existed at the time.

It does seem hard to replace an existing language with an improved one.

Not long ago a discussion in another newsgroup related to assembler
programming reminded me of ALP, which is a preprocessor for a certain
assembler that offers free format, structured programming, and some
other features, but it never got popular.

Ratfor, and a few different versions of MORTRAN, were Fortran
preprocessors, again with improvements over the original, but they never
got very popular.

The original PL/I compiler supplied conversion programs to convert
Fortran, COBOL, and maybe ALGOL to PL/I.

Maybe more relevant here, C++ was an improved version of C, possibly
with the intention of converting C programmers to C++ programmers, yet C
is still reasonably popular.

I think you're just demonstrating programmer inertia -- programmers
want to be able to write the same old code, and *possibly* learn the
best ways to use the new features. IME a lot of "C" programmers were
never very happy using pointers, except to modify function arguments,
preferring Pascal style array indices over C pointers, and Pascal
style I/O processing over C loops with function calls. I suspect a lot
of C/Pascal style code is being written in C++ and Java.
 

CBFalconer

Eric said:
..... snip ...

You also get the best possible approximation to the 'ideal' result
if you use binary floating-point. I'm sure you know this, but it's
worth emphasizing anyway: decimal floating point does *not* get rid
of the inexact nature of floating-point operations for the majority
of calculations.

AFAICT only a binary system has the advantage of 'assumed leading
bit one', allowing replacement by a sign bit. This means a binary
system can always provide better accuracy than any other built on
the same word size.
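
The effect is visible from <float.h>; a minimal sketch, assuming
IEEE-754 doubles:

    #include <stdio.h>
    #include <float.h>

    int main(void)
    {
        /* An IEEE-754 double stores 52 fraction bits, yet
           DBL_MANT_DIG reports 53 significand bits: normalized
           values carry an implicit leading 1 that is never stored. */
        printf("FLT_RADIX    = %d\n", FLT_RADIX);     /* 2 */
        printf("DBL_MANT_DIG = %d\n", DBL_MANT_DIG);  /* 53 = 52 + 1 hidden */
        printf("DBL_DIG      = %d\n", DBL_DIG);       /* 15 decimal digits */
        return 0;
    }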

USE worldnet address!
 

glen herrmannsfeldt

Dan Pop wrote:

(snip)
None of which still applies today. Yet, the usage of PL/I is still
marginal.

(I wrote)
In many parts of life, you only get one chance to make it.

Not at all. FORTRAN IV had little difficulty replacing FORTRAN II and
F77 had little difficulty replacing F66. C89 replaced K&R C in a couple
of years.

Hmmm, I didn't say that right, though you snipped the part about RATFOR
and MORTRAN. It is hard to replace a language with an improved
language that isn't mostly backward compatible. MORTRAN, and I believe
also RATFOR, use free-format, semicolon-terminated statements. Other
improvements replace hard-to-use features of Fortran.

Is FORTRAN IV a new language, or a new version of an old language?

Is Fortran 2000 a new language, or an improved version of Fortran II?

It does seem that Fortran-77 hasn't been replaced yet.
This is the opposite of what you've been arguing above, i.e. that it is
difficult for the improved language to become popular.

I don't know if it is opposite or not. How many C programmers converted
to C++ programmers, how many stayed C only programmers, and how many
started as C++ programmers without learning C first? It doesn't seem
that C++ replaced C, though.

C++ is significantly different, yet allows most C constructs to be used.

Though, as someone else pointed out, I should have said that C++ was
intended to be an improved version of C.
Both FORTRAN and COBOL remained popular long after the introduction and
even after the de facto death of the improved PL/I.

-- glen
 

Francis Glassborow

Thad Smith said:
The only time that you get an exact result with decimal and not with
binary is when you are dividing by a power of 5, optionally combined
with a power of 2.

True (though the actual proposed form of decimal float has other features
that are useful in some circumstances), but the commercial/financial
world makes extensive use of percentages, and those inherently involve
division by powers of five. In addition, we have unit pricing that often
involves small fractions of the smallest unit of currency. IOW, we live
in a world where commerce often specifies computations that will be
exact if done in decimal though they will not be in binary.
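
A short C sketch of the percentage point, assuming IEEE-754 doubles
(the digits in the comment are typical; exact output may vary):

    #include <stdio.h>

    int main(void)
    {
        /* A percentage is a division by 100 = 2*2*5*5, so a result
           such as 5% of 3.00 (exactly 0.15) is exact in decimal but
           has no finite binary expansion; the nearest double is
           stored and printed instead. */
        double pct = 3.00 * 5.0 / 100.0;
        printf("%.20f\n", pct);    /* e.g. 0.14999999999999999445 */
        return 0;
    }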
 

Dan Pop

In said:
It gets conformance with the results people get on pocket calculators,
or when they do long division by hand on paper.

Why is such conformance (up to the last digit) important?
People learn early that one third is a repeating decimal, and one tenth
is not.

People programming computers learn early that this property is neither
true nor relevant when performing floating point computations. Regardless
of the base, floating point numbers are approximations of a subset of the
real numbers (the [-type_MAX, type_MAX] interval).
In the days of billions of transistors on a chip, only a small
percentage need be allocated to decimal floating point hardware.

A small percentage that would be better used as an additional binary
floating point execution unit.

Dan
 

James Kuyper

Thad said:
Mike Cowlishaw wrote: ....

The only time that you get an exact result with decimal and not with
binary is when you are dividing by a power of 5, optionally combined
with a power of 2.

What about multiplication by integers, addition, and subtraction? Those
are very common operations, especially in the financial world, and
they're all exact (except when they overflow) when performed in decimal
arithmetic, and inexact when performed in binary floating point. I'm
assuming here that the floating point numbers being multiplied, added,
and subtracted are numbers that binary float can't represent exactly,
but which decimal floating point can, such as 1.10. Such numbers are the
rule, not the exception, in financial calculations.
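
A small C sketch of the accumulation problem; the commented results
assume IEEE-754 double arithmetic with round-to-nearest-even and no
extended intermediate precision:

    #include <stdio.h>

    int main(void)
    {
        /* Ten items at $1.10 each. The double nearest to 1.10 is
           slightly above it, and the repeated additions round, so
           the sum lands just below 11.0 rather than exactly on it. */
        double sum = 0.0;
        int i;
        for (i = 0; i < 10; i++)
            sum += 1.10;
        printf("%.17g\n", sum);         /* 10.999999999999998 */
        printf("%d\n", sum == 11.0);    /* 0 (false) */
        return 0;
    }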
 

Dan Pop

James Kuyper said:
What about multiplication by integers, addition, and subtraction? Those
are very common operations, especially in the financial world, and
they're all exact (except when they overflow) when performed in decimal
arithmetic, and inexact when performed in binary floating point. I'm
assuming here that the floating point numbers being multiplied, added,
and subtracted are numbers that binary float can't represent exactly,
but which decimal floating point can, such as 1.10. Such numbers are the
rule, not the exception, in financial calculations.

This has already been rehashed to death. Use an appropriate scaling
factor and all these computations can be performed exactly using binary
floating point, or, even better, long long arithmetic. Before C99, the
usual portable solution was double precision, but 64-bit integer
arithmetic is even more appropriate to this kind of application. Maybe
even more appropriate than decimal floating point arithmetic (depending
on the size of the mantissa and the scaling factor imposed by the
application).
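
A sketch of that scaling-factor approach in C99, with made-up amounts
held as integer cents so that every step below is exact:

    #include <stdio.h>

    int main(void)
    {
        long long price = 110;                      /* $1.10 in cents */
        long long subtotal = 3 * price;             /* $3.30 */
        long long tax = (subtotal * 5 + 50) / 100;  /* 5%, rounded: 17 */
        long long total = subtotal + tax;           /* $3.47 */

        printf("total: %lld.%02lld\n", total / 100, total % 100);
        return 0;
    }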

Dan
 

Dan Pop

glen herrmannsfeldt said:
Dan Pop wrote:

(snip)
(I wrote)


In many parts of life, you only get one chance to make it.

If PL/I had any compelling merits, I'm sure it would have caught on,
sooner or later. It was certainly not lack of availability of
implementations that caused its failure.
Hmmm, I didn't say that right, though you snipped the part about RATFOR
and MORTRAN. It is hard to replace a language with an improved
language that isn't mostly backward compatible.

Yet, C brilliantly succeeded. If the improved language has enough merits
on its own, programmers are always willing to make the effort of learning
it. Another example in the scripting languages category is Perl.

The replaced language(s) will be kept alive only by the legacy
applications that still need maintenance, but their usage for new
applications will be marginal. Non-legacy applications are reimplemented
in the new language, to be easier to maintain and enhance.
It does seem that Fortran-77 hasn't been replaced yet.

What people call F77 today is Fortran-77 with a ton of extensions, mostly
inherited from VAX Fortran. I can't remember ever seeing a (non-trivial)
program written in pure ANSI F77.

Dan
 

lawrence.jones

In comp.std.c Brian Inglis said:
I think you're just demonstrating programmer inertia -- programmers
want to be able to write the same old code, and *possibly* learn the
best ways to use the new features. IME a lot of "C" programmers were
never very happy using pointers, except to modify function arguments,
preferring Pascal style array indices over C pointers, and Pascal
style I/O processing over C loops with function calls. I suspect a lot
of C/Pascal style code is being written in C++ and Java.

As they say, you can write FORTRAN code in any language.

-Larry Jones

I can do that! It's a free country! I've got my rights! -- Calvin
 

CBFalconer

Brian said:
I think you're just demonstrating programmer inertia --
programmers want to be able to write the same old code, and
*possibly* learn the best ways to use the new features. IME a lot
of "C" programmers were never very happy using pointers, except
to modify function arguments, preferring Pascal style array
indices over C pointers, and Pascal style I/O processing over C
loops with function calls. I suspect a lot of C/Pascal style code
is being written in C++ and Java.

I consider C++ a marvelous marketing ploy, in that it coaxed C
programmers over by using virtually all their known grammar etc.
The C++ language is probably actually quite competent, but loses
it entirely (IMO) by shoehorning new constructs on top of the
already overly sparse C constructs. In the process it has made
things even more context dependent, which I consider to be bad.

There is no reason for such awkward constructs as "::". The
syntactical structure of a Pascal case statement is much superior
to that of a C (or C++) switch statement. (The existence of fall
through is not part of that structure.)

It (C++) is really a development experiment, implemented with
macros, awaiting a real definition of reserved words, etc. Again,
IMO. Ratfor hid obfuscated Fortran constructs behind a
self-consistent language, while C++ operates in the opposite
direction.

USE worldnet address!
 

glen herrmannsfeldt

Dan Pop wrote:
Why is such conformance (up to the last digit) important?
People programming computers learn early that this property is neither
true nor relevant when performing floating point computations. Regardless
of the base, floating point numbers are approximations of a subset of the
real numbers (the [-type_MAX, type_MAX] interval).

How about a requirement that only people who have passed a high school
calculus class are allowed to use floating point arithmetic? Then issue
licenses to people who have proved that they understand the effects of
rounding and truncation in floating point arithmetic. Only holders of
such a license can purchase and use programs that use floating point.

I believe that processors should now have float decimal, though I don't
know that I would ever use it. It would almost be worthwhile not to
have to read newsgroup posts from people who don't understand binary
floating point.

Dan Pop wrote:
A small percentage that would be better used as an additional binary
floating point execution unit.

With SMT, processors are going the other way. One floating point unit
used by two processors. The additional logic to generate a decimal ALU
relative to a binary ALU is pretty small. As I understand the
proposals, they store numbers in memory with each three digits stored
into 10 bits, though in registers they are BCD. The memory format is
only slightly less efficient than binary, partly made up since fewer
exponent bits are required for the desired exponent range.
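
The arithmetic behind "only slightly less efficient", as a small C99
sketch (link with -lm on POSIX systems):

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        /* Packing 3 decimal digits (1000 values) into 10 bits (1024
           codes) costs 10/3 bits per digit; the theoretical minimum
           is log2(10) bits, and plain BCD costs 4. */
        printf("ideal:   %.3f bits/digit\n", log2(10.0));  /* 3.322 */
        printf("3-in-10: %.3f bits/digit\n", 10.0 / 3.0);  /* 3.333 */
        printf("BCD:     %.3f bits/digit\n", 4.0);         /* 4.000 */
        return 0;
    }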

I will guess that it increases the logic between 10.000% and 20.000%.

An additional binary floating point unit requires the logic to schedule
operations between the two, and eliminate conflicts between them.

-- glen
 

Joe Wright

CBFalconer said:
AFAICT only a binary system has the advantage of 'assumed leading
bit one', allowing replacement by a sign bit. This means a binary
system can always provide better accuracy than any other built on
the same word size.
The 'assumed leading bit one' of the mantissa is the low order of the
exponent. The sign bit is the high order bit of the float or double. But
you're right. The assumption gives you an extra virtual bit of
precision. My 32-bit float has 24 bits of mantissa, 8 bits of exponent
and a sign bit. Count 'em. My 64-bit double has 53, 11 and 1. Like a
Baker's dozen (13 instead of 12). :=)
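
A sketch that pulls the three fields out of a float, assuming it is an
IEEE-754 single (32 bits):

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    int main(void)
    {
        /* 1.5f is binary 1.1: sign 0, biased exponent 127, stored
           fraction 0x400000. The leading 1 of the 24-bit significand
           is implied, not stored, so only 23 fraction bits appear. */
        float f = 1.5f;
        uint32_t u;
        memcpy(&u, &f, sizeof u);

        printf("sign     = %lu\n", (unsigned long)(u >> 31));
        printf("exponent = %lu (biased)\n", (unsigned long)((u >> 23) & 0xFF));
        printf("fraction = 0x%06lX\n", (unsigned long)(u & 0x7FFFFF));
        return 0;
    }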
 

Gene Wirchenko

[snip]
How about a requirement that only people who have passed a high school
calculus class are allowed to use floating point arithmetic? Then issue
licenses to people who have proved that they understand the effects of
rounding and truncation in floating point arithmetic. Only holders of
such a license can purchase and use programs that use floating point.

I believe that processors should now have float decimal, though I don't
know that I would ever use it. It would almost be worthwhile not to
have to read newsgroup posts from people who don't understand binary
floating point.

While we are at it, let us do the same for integer arithmetic.
That way, we need never again see "I think my compiler has a bug. My
program keeps giving wrong answers. It says that 5/3 equals 1."

[snip]

Sincerely,

Gene Wirchenko

Computerese Irregular Verb Conjugation:
I have preferences.
You have biases.
He/She has prejudices.
 

Douglas A. Gwyn

Dan said:
If PL/I had any compelling merits, I'm sure it would have caught on,
sooner or later. It was certainly not lack of availability of
implementations that caused its failure.

PL/I was moderately successful; for example MULTICS was
implemented in PL/I, and a lot of IBM systems programming
was done in PL/I. Just because other languages, designed
using "lessons learned" from previous languages including
PL/I, caught on doesn't make PL/I a failure, any more
than FORTRAN, COBOL, BASIC, LISP, etc. were failures.
Some of them evolved and are still widely used, others
fell into disuse; however, they had a significant impact
for their time and were used to implement many valuable
applications. That's not "failure".

Dan said:
What people call F77 today is Fortran-77 with a ton of extensions, mostly
inherited from VAX Fortran. I can't remember ever seeing a (non-trivial)
program written in pure ANSI F77.

I have seen literally hundreds, many of them written by
scientists and engineers on PDP-11 Unix systems, where
the native Fortran was Fortran-77 with essentially no
extensions. (There was also a port of DEC's F4P that
some used because of faster execution.)
 

CBFalconer

glen said:
..... snip ...

With SMT, processors are going the other way. One floating point
unit used by two processors. The additional logic to generate a
decimal ALU relative to a binary ALU is pretty small. As I
understand the proposals, they store numbers in memory with each
three digits stored into 10 bits, though in registers they are
BCD. The memory format is only slightly less efficient than
binary, partly made up since fewer exponent bits are required for
the desired exponent range.

Think about how you would normalize such a value. That would not
be a decimal format, that would be a base 1000 format. The count
of significant digits would jitter over a range of 3!

USE worldnet address!
 

Dan Pop

glen herrmannsfeldt said:
Why is such conformance (up to the last digit) important?
People programming computers learn early that this property is neither
true nor relevant when performing floating point computations. Regardless
of the base, floating point numbers are approximations of a subset of the
real numbers (the [-type_MAX, type_MAX] interval).

How about a requirement that only people who have passed a high school
calculus class are allowed to use floating point arithmetic?

No need for that in order to understand floating point arithmetic.
The basic concepts are within the grasp of a junior high school student.
Then issue
licenses to people who have proved that they understand the effects of
rounding and truncation in floating point arithmetic. Only holders of
such a license can purchase and use programs that use floating point.

The user doesn't need any clue, only the programmer. If the programmer
does his job well, the user will get the expected result.

What we see in practice is clueless *programmers* who don't get the
results they expect from their programs. And changing one aspect of
floating point won't make it work as expected by the clueless ones, e.g.
largeval + 1 == largeval will still evaluate to true, baffling the
ignorant. No, decimal floating point is not the right cure for
ignorance.
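
A minimal demonstration of that property, assuming IEEE-754 doubles:

    #include <stdio.h>

    int main(void)
    {
        /* 1e20 is far beyond 2^53, where consecutive doubles are
           already more than 1 apart, so adding 1 rounds straight
           back to the same representable value. */
        double largeval = 1e20;
        printf("%d\n", largeval + 1 == largeval);   /* 1 (true) */
        return 0;
    }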
I believe that processors should now have float decimal, though I don't
know that I would ever use it. It would almost be worthwhile not to
have to read newsgroup posts from people who don't understand binary
floating point.

Do you *really* think anyone in the industry would care about this
argument? Do you really think it is worth the loss of precision (for the
same bit count)?
With SMT, processors are going the other way. One floating point unit
used by two processors.

Intel's flagship processor in terms of high performance computing has
multiple FP execution units.
The additional logic to generate a decimal ALU
relative to a binary ALU is pretty small. As I understand the
proposals, they store numbers in memory with each three digits stored
into 10 bits, though in registers they are BCD. The memory format is
only slightly less efficient than binary, partly made up since fewer
exponent bits are required for the desired exponent range.

I will guess that it increases the logic between 10.000% and 20.000%.

For *what* redeeming benefits?
An additional binary floating point unit requires the logic to schedule
operations between the two, and eliminate conflicts between them.

Which is paid off by increasing the FP throughput by a factor of 2.
Which explains why most modern architectures have chosen to go this way.

Dan
 
