which tutorial to use?


Ben Pfaff

Richard Heathfield said:
But the real world /does/ matter, and there are always (or, at
least, for some time to come) going to be some platforms that
are going to struggle to provide 64-bit integers.

It is a lot more work and a lot more code, in my experience, to
implement floating-point types on a processor that doesn't have
built-in support for them, than it is to implement integer types
longer than a processor's natural word size. But I cannot recall
anyone here complaining that C89 made floating-point types
mandatory. So why the complaints that C99 has made 64-bit types
mandatory?
 

Philip Potter

santosh said:
Which would break a lot of currently conforming implementations.

You asked about how to avoid breaking existing code rather than
implementations; and besides, introducing long long broke *all*
conforming implementations.

Having thought about this, while this change could not affect any
strictly conforming program, it could still break C code in general. But
that's the case with pretty much any change you make.
IMHO, it seems even worse than the solution represented by long long.

Here we certainly agree. :)
 

santosh

Ian said:
I did that in my last job (swap M$ for Solaris and Linux), I made all
the developers happy spending the license budget on better equipment!

Over here you'll raise eyebrows a mile high for suggesting the use of
anything other than MS products, unless the company is oriented around
UNIX or networking. And if you suggest this after a recent MS
deployment you can pretty much expect that month's paycheck to be your
last from that company.
 

Richard Heathfield

santosh said:
Richard Heathfield wrote:

This same argument could probably have been made during the
standardisation process of C89/C90 with regards to 32 bits for long.
Agreed.


Even with emulation? If so, they could be non-conforming.

Well, yes, obviously they would be non-conforming if the Standard requires
64-bit ints that they can't supply. That's kind of my point - that
unrealistic demands cannot be met. 64-bit ints aren't unrealistic for big
iron or even micros, of course, but I would be surprised if there weren't
at least one common embedded environment on which implementing 64-bit ints
is going to be a stumbling block.
Not having a guaranteed 64-bit integer type in ISO C would mean that
programs using values of this size would either have to use a compiler
extension or a bignum library, both of which reduce portability, or be
open to the possibility of not compiling on some set of systems.

Right - so the portability argument bites both ways.
There is an argument to be made that 64-bit integers are not often
required by most computations. Indeed, by this line of reasoning an
even stronger case can be made against the inclusion of complex and
imaginary types, as well as the whole of fenv.h's facilities.
Agreed.

IOW you seem to be saying that C99 was essentially unnecessary. Would
you say the same for C1x?

I don't know what's in it. But if C1x does require features that are hard
for some implementations to provide, at least let them be *exciting*
features! :)
 

Richard Heathfield

Ben Pfaff said:
It is a lot more work and a lot more code, in my experience, to
implement floating-point types on a processor that doesn't have
built-in support for them, than it is to implement integer types
longer than a processor's natural word size. But I cannot recall
anyone here complaining that C89 made floating-point types
mandatory. So why the complaints that C99 has made 64-bit types
mandatory?

Okay, I'll bite.

Oi! C89 folks! WHY did you make floating-point types mandatory?

Happy now? :)
 

Ben Pfaff

Richard Heathfield said:
Ben Pfaff said:

Okay, I'll bite.

Oi! C89 folks! WHY did you make floating-point types mandatory?

Well, it's a funny answer, and I do appreciate humor. But I
think I've made a reasonably good point. Do you have a
reasonably good response?
 

Ian Collins

santosh said:
Over here you'll raise eyebrows a mile high for suggesting the use of
anything other than MS products, unless the company is oriented around
UNIX or networking. And if you suggest this after a recent MS
deployment you can pretty much expect that month's paycheck to be your
last from that company.
Others controlled the purse strings, but I controlled the requirements.
I had requirements MS products couldn't meet.
 

santosh

Ben said:
Well, it's a funny answer, and I do appreciate humor. But I
think I've made a reasonably good point. Do you have a
reasonably good response?

Well, it would be unreasonable to implement long long in implementations
for things like microcontrollers, but those are not fully conforming
anyway. On almost all other systems it could be implemented by
emulation, if the system lacks a native type, much like how
implementations for the 8086 dealt with long.

It seems that things like VLAs, variadic macros, fenv.h, all the
additional math functions and their macros in tgmath.h, mixed
declarations and code, extended identifiers, universal character names,
and complex and imaginary types would have all been far harder to
implement than long long.
 

Richard Heathfield

Ben Pfaff said:
Well, it's a funny answer, and I do appreciate humor. But I
think I've made a reasonably good point. Do you have a
reasonably good response?

No, not really. I *was* going to say that it's a bit late to complain about
float and double - but then it's a bit late to complain about long long,
too, isn't it?
 

santosh

Richard said:
Ben Pfaff said:


No, not really. I *was* going to say that it's a bit late to complain
about float and double - but then it's a bit late to complain about
long long, too, isn't it?

The Committee seems to have recognised some of the causes for C99's poor
uptake. Apparently one of its guiding principles for C1x is no new
inventions. OTOH, it seems that they want to standardise common
extensions. A proposal has even been submitted for including a
threading API.

<http://www.open-std.org/jtc1/sc22/wg14/www/docs/PreDelft.htm>
 

CBFalconer

user923005 said:
.... snip ...


At 4GHz, how long does it take to exhaust a 32 bit integer? Not
very long. 50 years from now, imagine the pain that will result
from fixing all the code broken by short-sighted decisions.

Exhaust? What does that mean? A 32 bit integer can hold values
between roughly -2E9 and +2E9 (or 0 to about 4E9 if unsigned), to
an accuracy of 1. If you want to measure 1 nanosec. ticks since
the birth of Christ, that is probably too small. If you want to
measure 1 day ticks over the same period, it suffices.

You are also allowed to use two 32 bit integers to measure
something. You may lose something due to the difficulties of
portably detecting overflow.
 

CBFalconer

One way would be to guarantee that long was 64 bits. This is
not necessarily the best solution, but it is a solution.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Note the underlining above. So that is not any solution.
 

CBFalconer

Richard said:
.... snip ...

If the real world weren't important, ISO could simply have made
every int type, qualified or not, 1048576 bits wide and be done
with the whole issue forever. But the real world /does/ matter,
and there are always (or, at least, for some time to come) going
to be some platforms that are going to struggle to provide 64-bit
integers.

Now look at the way Pascal, to either ISO7185 or ISO10206, does
it. It defines a single value, maxint. Integral values can range
from -maxint to +maxint. ANY other range can be defined
for any variable, using a type definition:

CONST
  MAXFOO = 1234;

TYPE
  myfoo  = -MAXFOO .. MAXFOO;
  posfoo = 0 .. MAXFOO;

VAR
  foo  : myfoo;
  ufoo : posfoo;

and the var is placed in some object known only to the compiler.

This is not feasible for C, due to the usual problem of 'existing
code'. However, note how easy it makes range checking. Any
constant expression can immediately be evaluated for legitimacy.
Any expression in variables can be evaluated for maximum (and
minimum) values, and again this can be done at compile time, so
that run time checks are not needed. This has been shown to reduce
runtime checks by about 75 to 90%.

Pascallers don't design 'checking code'. Instead, they design
types. Note that the popular Pascals, such as Borland, Turbo,
Delphi, FreePascal, don't follow the Pascal standards, so this note
doesn't apply to them.

Ada is very similar. You probably know all this, but the wide
world doesn't.
 

CBFalconer

Ben said:
Well, it's a funny answer, and I do appreciate humor. But I
think I've made a reasonably good point. Do you have a
reasonably good response?

Probably for the same general reason as Stallman's claim that all C
systems should be on 32 bit machines (or bigger). If the machine
has that level of complexity, floating point ability is not an
excessive requirement. I don't agree with either.
 

Ioannis Vranos

santosh said:
If we want a clean, elegant language with unlimited width types we have
plenty to choose from right now. C was meant to be practical, not
infinitely flexible (though in practice it has turned out to be very
adaptable!) or elegant or whatever.


C was meant to be practical, but also a general purpose programming
language.

Or maybe pragmatism. How else could WG14 have introduced a guaranteed 64
bits type without breaking existing code?


Why did it need to introduce a guaranteed 64-bit type? I am a bit old
for playing games, but 128-bit game consoles have been around for some
time now. Is the "practical approach" to add new types indefinitely,
or to "shift" the existing built in types from the old, marginal
sizes to larger ones?
 

lawrence.jones

santosh said:
The Committee seems to have recognised some of the causes for C99's poor
uptake. Apparently one of its guiding principles for C1x is no new
inventions. OTOH, it seems that they want to standardise common
extensions.

Note that long long *was* a common extension that the committee
standardized; it was not a new invention. In fact, most of the
committee didn't really like it either, but it would have been a
disservice to leave it non-standard when it was nearly ubiquitous.

-Larry Jones

Start tying the sheets together. We'll go out the window. -- Calvin
 

Ben Pfaff

CBFalconer said:
Probably for the same general reason as Stallman's claim that all C
systems should be on 32 bit machines (or bigger).

I am not aware that RMS has said that. I do know that he said
that GNU targets 32-bit (and larger) systems. As the GNU coding
standards say, "...don't make any effort to cater to the
possibility that an `int' will be less than 32 bits. We don't
support 16-bit machines in GNU."
 
