ISO C standard - which features need to be removed?


Marco

On a previous thread many folks lamented that the C99 standard is not
fully implemented and that they still reference the C95 version. Maybe
ISO needs to reconsider the C99 features that are not universally
implemented in C compilers and make them optional, or deprecate
them. The C99 standard has a lot of good changes, like the portable
fixed-width integers of <stdint.h>, but these tend to be overshadowed
by the unimplemented features.

Any suggestions?


[ example snippet from other thread
In theory, yes. In practice, conforming C90 compilers are still
much more common than conforming C99 compilers. Many compilers
implement large parts of C99, but others, still in common use,
implement almost none of it (more precisely, almost none of the
differences between C90 and C99).
end snippet]
 

jacob navia

Marco wrote:
On a previous thread many folks lamented that the C99 standard is not
fully implemented and that they still reference the C95 version. Maybe
ISO needs to reconsider the C99 features that are not universally
implemented in C compilers and make them optional, or deprecate
them. The C99 standard has a lot of good changes, like the portable
fixed-width integers of <stdint.h>, but these tend to be overshadowed
by the unimplemented features.

Any suggestions?


[ example snippet from other thread
In theory, yes. In practice, conforming C90 compilers are still
much more common than conforming C99 compilers. Many compilers
implement large parts of C99, but others, still in common use,
implement almost none of it (more precisely, almost none of the
differences between C90 and C99).
end snippet]


Complex arithmetic should be replaced with operator overloading, which
gives a better and more general solution to the problem of new numeric
types than changing the language each time to add one.

Take for instance the situation now, where decimal floating point,
fixed point, and some other formats have all been proposed as Technical
Reports to the standards committee. We can't accommodate them all in the
language.

The solution is to allow users to develop their own types, as many other
languages do, from venerable FORTRAN to C# and many others.


This solution lets users have that extension to the numeric types if
they want it, and does not force them to swallow a predefined
solution that will be wrong in most cases.

Even for relatively simple things like complex division, many users will
have different needs than the proposed built-in solution, since some will
favor accuracy, others will need more speed, and so on!
 

jacob navia

Malcolm McLean wrote:
I can't agree with you here. There's a relatively short list of common
things you need to do (huge integers, arbitrary-precision floating point,
complex numbers, fixed point). A standard solution will be right for most
people, most of the time.

Sure.

How huge must those "huge integers" be so that everyone is satisfied?

Should complex numbers favor speed or accuracy?

Must arbitrary-precision floating point offer 352 bits of precision?
Or rather 512? Or maybe 1024?

You can't have them ALL.
The problem with roll your own is that you recreate the bool problem.
Everyone defines "complex" in a slightly different way, maybe using r and j

Yes, that notation is not going to die anytime soon because it is needed
in many contexts! C99 decided on cartesian coordinates, which means that
polar coordinates must be done outside the standard notation anyway.
if an engineer, r and i if a mathematician, real and imag if a programmer,
x and y if slightly eccentric. Then pieces of code can't talk to each other
without huge wodges of conversion routines, all to convert between types
which are essentially the same.

They are the same in the sense that 1.0L is the same as 1ULL or 0x1, but
you will agree with me that those representations of the number one in C
are all different, even if they denote the same number. This is an old
problem, and it surely doesn't help to favor one over the other!
 

Ian Collins

Malcolm said:
I can't agree with you here. There's a relatively short list of common
things you need to do (huge integers, arbitrary-precision floating point,
complex numbers, fixed point). A standard solution will be right for most
people, most of the time.

The problem with roll your own is that you recreate the bool problem.
Everyone defines "complex" in a slightly different way, maybe using r and j
if an engineer, r and i if a mathematician, real and imag if a programmer,
x and y if slightly eccentric. Then pieces of code can't talk to each other
without huge wodges of conversion routines, all to convert between types
which are essentially the same.

The solution is simple - do what C++ did and put them in the standard
library.
 

Nick Keighley

I can't agree with you here. There's a relatively short list of common
things you need to do (huge integers, arbitrary-precision floating point,
complex numbers, fixed point).

rationals, quaternions


<snip>
 

Nobody

How huge must be those "huge integers" so that everyone is satisfied?
Arbitrary precision floating point must present 352 bits precision?
Or rather 512? or maybe 1024?

In general, "big" (i.e. arbitrary-size) numbers are limited only by
available resources; i.e. they must fit into memory, their size must fit
into a machine word, and if they're so large that primitive arithmetic
operations exceed available memory then you lose.

Most modern high-level languages include big integers as a primitive type
(typically via either the BSD MP or GNU MP libraries). Some also include
arbitrary-precision rational and/or floating-point numbers (typically via
GNU MP), either as primitive types or as standard libraries.
 

jacob navia

Nobody wrote:
In general, "big" (i.e. arbitrary-size) numbers are limited only by
available resources; i.e. they must fit into memory, their size must fit
into a machine word, and if they're so large that primitive arithmetic
operations exceed available memory then you lose.

If you do a simple loop of (say) 50 iterations, each time multiplying
a bignum by another, does the result's bit count grow exponentially?

Or does it stay fixed at some value?

If you choose solution (1) you can't do multiplications in a loop because
the result would have around 2^50 bits precision.

If you choose solution (2) you have to ask the user at what precision
the bignums should stop growing.

If we have operator overloading, each user can choose the solution he/she
needs. Note that the code using this solution is MUCH more portable than
what is possible now since code like:
c = (b+c)/n;
will stay THE SAME independently of which solution is preferred. Now you
would have to write:
c = divide(add(b, c), n);
where "divide" and "add" have to be replaced by specific library calls.

Lcc-win implements operator overloading in the context of C, and the big
number library furnished by lcc-win can be replaced by another library
like GNU MP without making any modifications to the user code.
Most modern high-level languages include big integers as a primitive type
(typically via either the BSD MP or GNU MP libraries).

True. Lcc-win provides those too.
Some also include
arbitrary-precision rational and/or floating-point numbers (typically via
GNU MP), either as primitive types or as standard libraries.

With operator overloading it is easy to implement a rational or quaternion package.

By avoiding complex numbers as a primitive type we make the language smaller
and easier to implement. The operator overloading solution makes it possible
to implement all kinds of numbers where needed, without imposing constraints
on compilers for embedded systems, where complex numbers are (in general) not
used much.
 
N

Nobody

If you do a simple loop of (say) 50 iterations, each time multiplying
a bignum by another, does the result's bit count grow exponentially?

Or does it stay fixed at some value?

Are you talking about integers/rationals or floats?

Integers and rationals grow, using as much space as is necessary to
represent the result (or until you run out of memory, in which case the
result simply isn't representable; while it's often convenient to treat
computers as if they were Turing machines, they're really only finite
automata).

Arbitrary-precision floating-point usually requires the precision to be
specified, rather than being dictated by the operands (otherwise, how many
bits should be used for e.g. 1/3?)
 

jacob navia

Nobody wrote:
Are you talking about integers/rationals or floats?

Integers and rationals grow, using as much space as is necessary to
represent the result (or until you run out of memory, in which case the
result simply isn't representable; while it's often convenient to treat
computers as if they were Turing machines, they're really only finite
automata).

This is completely impractical, since with just 40-50 multiplications
you find yourself with gigabyte-sized numbers that are unusable.

lccwin lets you specify the maximum size of bignums, and then they are
fixed.

But your way is better in other applications of course. And this proves
that each application should be using the bignums it needs, using
operator overloading.

Arbitrary-precision floating-point usually requires the precision to be
specified, rather than being dictated by the operands (otherwise, how many
bits should be used for e.g. 1/3?)

The same problems will appear here. There is no "one size fits all"
solution.

The true solution is to let the user specify the number type he/she
needs. You get some basic types, then you can add your own.
 

Ben Bacarisse

jacob navia said:
Nobody wrote:

This is completely impractical, since with just 40-50 multiplications
you find yourself with gigabyte-sized numbers that are unusable.

You've said this twice now, but I can't see what you mean. At first I
thought you'd simply mistyped what you intended to say but it seems
not. Multiplying a bignum by (for example) 1024 adds 10 bits to the
required length. Doing that 50 times adds 500 bits. 500+ bit numbers
are common in many applications of bignums. Even multiplying a
1000-bit number by another 1000 bit number 50 times makes a 51,000-bit
number. Not at all unmanageable.

<snip>
 

bartc

Ben Bacarisse said:
You've said this twice now, but I can't see what you mean. At first I
thought you'd simply mistyped what you intended to say but it seems
not. Multiplying a bignum by (for example) 1024 adds 10 bits to the
required length. Doing that 50 times adds 500 bits. 500+ bit numbers
are common in many applications of bignums. Even multiplying a
1000-bit number by another 1000 bit number 50 times makes a 51,000-bit
number. Not at all unmanageable.

Multiplying a number by itself will approximately double the number of bits.
Repeat that process, and the number of bits increases exponentially.

But I agree, a lot of useful work can still be done with variable width
bignums without overflowing memory.

Applying a fixed width (if that is the alternative) is harder to work with
(how do you know how many bits will be needed), and wastes resources when
the numbers are smaller.
 

Ian Collins

bartc said:
Multiplying a number by itself will approximately double the number of
bits. Repeat that process, and the number of bits increases exponentially.

But I agree, a lot of useful work can still be done with variable width
bignums without overflowing memory.

Applying a fixed width (if that is the alternative) is harder to work
with (how do you know how many bits will be needed), and wastes
resources when the numbers are smaller.

How do you know if an expression overflows?
 

bartc

Ian Collins said:
How do you know if an expression overflows?

Of variable width bigints? Expressions don't overflow, although memory can
get tight.

With fixed width bigints: I've never used these, I assume there's some
mechanism to find out. But it's another mark against them.
 

Ben Bacarisse

bartc said:
Multiplying a number by itself will approximately double the number of
bits. Repeat that process, and the number of bits increases
exponentially.

Of course. I was responding to what Jacob wrote originally.
"Multiplying a bignum by another" does not sound like he meant
multiplying a number by itself. I did not comment at first, but he then
repeated the remark with even more general language ("just 40-50
multiplications"), so I thought it best to clear the matter up.

<snip>
 

Ian Collins

bartc said:
Of variable width bigints? Expressions don't overflow, although memory
can get tight.

With fixed width bigints: I've never used these, I assume there's some
mechanism to find out. But it's another mark against them.

That highlights one of the problems with adding operator overloading to
C: how to report errors?
 

Nobody

This is completely impractical, since with just 40-50 multiplications
you find yourself with gigabyte-sized numbers that are unusable.

If an integer is so large that it requires a gigabyte of memory to store
it, then it requires a gigabyte of memory to store it. If you don't have
that much memory, then you may as well simply terminate with an
out-of-memory condition. There is no advantage to continuing using an
incorrect (i.e. wrapped) value. Neither approach will give you the correct
answer, but at least the former isn't likely to lead anyone astray.

If you only need an approximate answer, then you use floating point.
The same problems will appear here. There is no "one size fits all"
solution.

The true solution is to let the user specify the number type he/she
needs. You get some basic types, then you can add your own.

This approach becomes inconvenient when you move beyond stand-alone
programs and need to use libraries, as the application has to mediate
between the various formats (assuming that this is even possible).
 

bartc

Ian Collins said:
That highlights one of the problems with adding operator overloading to C:
how to report errors?

You mean because using functions instead allows more arguments to be used?

The problems are no different to the difficulties of detecting errors with
the current built-in operators.

And how do floating point operations (many of which *are* implemented as
functions) deal with it?
 

Keith Thompson

bartc said:
You mean because using functions instead allows more arguments to be used?

The problems are no different to the difficulties of detecting errors
with the current built-in operators.

And how do floating point operations (many of which *are* implemented
as functions) deal with it?

Clumsily.
 

Keith Thompson

Malcolm McLean said:
That's exactly the problem. Error handling is the same problem. Whilst
almost all programs will want to flag an attempt to divide by zero, the
question is exactly how to convey the message to the user. The C standard
solution, printing a message to stderr, isn't appropriate for many programs.
(There's also the question of whether to terminate or to return infinity).

Printing a message to stderr isn't "The C standard solution".
Division by zero invokes undefined behavior. (The behavior might be
defined in some cases if you have IEC 60559 support. I'm too lazy
to look up the details, but I'm sure it doesn't involve printing
a message to stderr, unless your program does it explicitly.)
 

Marco

variable length arrays.

good choice - not many compilers have implemented it
Also, the different integer types have a huge drawback, which is that the
exact type has to be passed by indirection. The more types you have, the
less likely it is that the type you are using in the caller matches the type
demanded by callee.

not sure what you mean here - the fixed-width types should be used
where necessary, such as when interfacing to hardware registers.
For most algorithm use I would just use an "int" or "long" type, with
an assert if the caller did not conform on the particular platform
that the code was compiled on.

you think the bad old days (C89), where every project rolled their own
32-, 16-, 8-bit etc. unsigned integer types, are the way to go??

I mostly do embedded work
 
