Who can explain this bug?


Tim Rentsch

Seebs said:
However, for floating point types, including in particular
the type double, the register is wider than the place in
memory where 'tmp' resides. So the register can retain
extra information which could not be retained if the value
were put into the 'tmp' memory location. So re-using the
value in the register can behave differently than storing
and reloading.

There was a really lovely example of this I encountered recently.
Someone had come up with a test case which failed only on one or
two CPUs at a particular optimization level. [...]

Nice example -- thanks.
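
A minimal sketch of the pattern being discussed (hypothetical code, borrowing the tmp and ymax names used in the thread; on a conforming implementation the assignment must narrow tmp to double, but with x87-style excess precision left in a register the comparison may go the other way):

#include <stdio.h>

/* If the compiler keeps 'tmp' in an 80-bit register instead of narrowing it
 * to a 64-bit double, 'tmp <= *ymax' may compare the wide value against the
 * stored double and behave differently than a store-and-reload would. */
static void check(const double *ymax, double a, double b)
{
    double tmp = a * b;      /* per 6.3.1.8p2 the assignment should narrow tmp */
    if (tmp <= *ymax)
        puts("tmp <= *ymax, as the stored doubles would compare");
    else
        puts("tmp > *ymax: tmp apparently retained excess precision");
}

int main(void)
{
    double ymax = 0.1 * 0.3;  /* stored in an object, so rounded to double */
    check(&ymax, 0.1, 0.3);   /* recomputes the same product at run time */
    return 0;
}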
 

88888 Dihedral

glen herrmannsfeldt wrote on Friday, 19 April 2013 at 7:27:39 AM UTC+8:
Tim Rentsch said:
(e-mail address removed) (Gordon Burditt) writes:

So, what sort of "optimization" is it that gives the wrong result for
the "<=" condition when *ymax and tmp are equal????

An optimization that keeps excess precision in a calculation.

A typical floating point unit uses registers that are 80 bits wide,
and only narrows it down to 64 bits when you store in a double.
The code may not bother storing the result, then re-loading it
before doing a comparison. [snip]


Only in non-conforming implementations. What happens in a
conforming implementation must be as if the value were actually
stored and re-fetched, and the semantics of assignment includes
removing any extra range and precision (footnote, 6.3.1.8 p2).
What gcc does is not a legitimate optimization but non-conforming
behavior.



I believe that is correct. Fortran is more lenient in what it allows
for floating point expression evaluation. But storing every intermediate
value can slow things down a lot!

Well, they should go to cache instead of all the way to main memory,
but it is still slow.

Does gcc always store intermediate values with -O0?

-- glen

The C-language standard does specify requirements for the
precision of float and double.

On ARM or MIPS, doubles are not the same as the 80-bit Intel
extended format, which has had to maintain backward
compatibility since the 1990s.

Bounding floating point rounding errors across several
operations in a C program requires some discipline.

This is not an easy problem.
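
One example of such a discipline is compensated (Kahan) summation, which keeps the accumulated rounding error of a long sum small; a small sketch, not from the thread:

#include <stddef.h>

/* Kahan (compensated) summation: carry a correction term for the low-order
 * bits lost at each addition, so the error of the final sum stays at a few
 * ulps instead of growing with n.  (Aggressive optimisation such as
 * -ffast-math, or the excess precision discussed above, can change how this
 * kind of code behaves.) */
static double kahan_sum(const double *a, size_t n)
{
    double sum = 0.0;
    double c = 0.0;                 /* running compensation */
    for (size_t i = 0; i < n; i++)
    {
        double y = a[i] - c;        /* apply the previous correction */
        double t = sum + y;         /* low-order bits of y may be lost here */
        c = (t - sum) - y;          /* recover what was lost */
        sum = t;
    }
    return sum;
}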
 

James Kuyper

On 05/03/2013 03:53 AM, David Brown wrote:
.....
I think when I have used the term "conforming", I have meant "strictly
conforming" (and usually without making clear distinction between
compiler implementations and programs).

I'd like to point out one thing that you might or might not be aware of:
the standard assigns a meaning to the term "strictly conforming" only
when applied to programs. A C implementation either conforms or doesn't
conform. My first draft of this paragraph said "There's only one level
of conformance for implementations", but that's not true: it can conform
as either a freestanding or hosted implementation, and the latest
version of the standard made many features optional, so there are several
levels of conformance - but none of them is labeled "strict".
But it strikes me that "strictly conforming" is so tight that it is
close to useless, if you can't write real-world programs that are
strictly conforming. Certainly you can't do anything useful in the
embedded world (where I work) while remaining strictly conforming -
there are always parts of the code that are specific to the target
and/or particular compiler, so that on other conforming compilers you
would get compiler errors or at least different run-time behaviour.

Strictly conforming code has some uses; failure to accept such code can
be used to prove an implementation non-conforming. Any new feature that
would break even strictly conforming code has to be very
strongly motivated in order for the committee to be willing to accept it.
On the other hand, "conforming" is so loose that it is close to useless.

As applied to code, it is exactly useless; there's nothing "close" about it.
If I write a compiler that accepts Pascal as well as C, but is
otherwise a conforming C compiler, then Pascal programs suddenly become
conforming C code.

A compiler that has an option to process either Pascal or C code does
not, in itself, make that Pascal code qualify as conforming C code. Only
if the compiler accepts Pascal code when invoked in a mode where it is a
conforming implementation of C would it have that effect. In most cases,
a Pascal program will violate either C's syntax rules or constraints; if
so, a conforming implementation of C must generate at least one
diagnostic message - that message could be "Pascal code detected -
switching to Pascal mode". I don't know if it's possible to write a
Pascal program that could also be parsed as a correct C program, but if
so, the behavior required upon translating and accepting such a program
must be the C behavior, even if that's incompatible with the required
Pascal behavior.

....
This is one of my mistakes. Basically, I thought an implementation
/had/ to follow IEEE fully (including all the awkward bits like NaNs) to
be "conforming". But I suppose the requirements for conforming floating
point behaviour are looser than that.

An implementation can pre-#define __STDC_IEC_559__, in which case it
must satisfy all the requirements of Annex F, which are based on and
very close to (but not identical with) those of IEC 60559 (== IEEE 754).
Otherwise, there are VERY few meaningful requirements on floating point
operations in C - those requirements are not, for instance, strong
enough to guarantee that 1.0-1.0 != DBL_MAX.
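
A small sketch (not from the thread) of testing for that claim at translation time:

#include <stdio.h>

int main(void)
{
#ifdef __STDC_IEC_559__
    /* The implementation claims Annex F conformance, so the basic operations
     * follow IEC 60559 (IEEE 754) semantics. */
    puts("Annex F / IEC 60559 floating point claimed");
#else
    /* No claim: the accuracy of +, -, *, / and of the <math.h> functions is
     * implementation-defined and may even be stated as unknown. */
    puts("no Annex F claim; very few accuracy guarantees apply");
#endif
    return 0;
}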
 

James Kuyper

....
That makes a little more sense. A conforming compiler not only has to
accept all strictly conforming code, it also has to reject syntactically
incorrect C code with a diagnostic message.

Close, but not quite: it must diagnose syntactically incorrect C code,
but it need not reject it. The only kind of program that a conforming
implementation is required to reject is one that contains a #error
directive that survives conditional compilation (#if, #ifdef, etc.).
Ironically, this implies, among other things, that in order to guarantee
that a program must be rejected, the code must NOT contain any syntax
errors or constraint violations that apply during translation phases
1-4, since translation phase 4 is the one where conditional compilation
and implementation of #error directives occur.
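
A short illustration (not from the thread) of the one construct that does force rejection:

#include <limits.h>

/* If CHAR_BIT is not 8, this #error survives conditional compilation in
 * translation phase 4, and a conforming implementation must refuse to
 * translate the program.  If CHAR_BIT is 8, the directive is skipped and
 * no diagnostic is required. */
#if CHAR_BIT != 8
#error "this program assumes 8-bit bytes"
#endif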

That's what makes the definition of "conforming" code so useless: code
is conforming if there is any conforming implementation, anywhere, which
can accept it. Therefore, the only kind of program that is guaranteed to
be non-conforming is code which contains a #error directive that is
guaranteed to survive conditional compilation.

....
So as long as the compiler does not define __STDC_IEC_559__ (and thus
claim to follow the Annex F rules), it can be "conforming" despite
doing almost anything it wants with floating point?

There are some requirements: conversions of floating point constants and
strings to floating point numbers, and back again, are tightly
constrained. However, there are not enough guarantees to make any
practical use of floating point.
Some embedded compilers I have used have 4-byte doubles (identical to their
single-precision floats), which is not unreasonable for 8-bit
microcontrollers. Can such a compiler be "conforming", or must the
doubles be bigger? I'm getting the impression that a conforming
compiler would need 8-byte doubles, but could throw away the extra bits
and do the calculations as floats.

The <float.h> standard header must #define a constant named DBL_EPSILON,
which is "the difference between 1 and the least value greater than 1
that is representable in the given floating point type", and that
difference cannot be greater than 1e-9 (5.2.4.2.2p13). The standard
imposes no accuracy requirements on the basic arithmetic operations +,
-, *, and /, nor on the floating point functions from <math.h> and
<complex.h> (5.2.4.2.2p6), which means it's not possible to reliably
test DBL_EPSILON by calculating the defining difference: nextafter(1.0,
2.0)-1.0.

However, it is possible to test it indirectly. The most extreme possible
case occurs if DBL_EPSILON is exactly 1e-9, which requires that
FLT_RADIX be a power of 10. In that case the floating point constant
1.000000002 could be represented exactly, but the standard does not
require it to be. It could be converted into a floating point value as
low as 1.000000001, but no lower, or as high as 1.000000003, but no
higher (6.4.4.2p3). The comparison operators <, >, <=, >=, ==, and !=
are not covered by 5.2.4.2.2p6, so you can use them to reliably test
whether or not this requirement has been met: 1.00000000 <= 1.000000002
&& 1.000000002 <= 1.000000004.
 

James Kuyper

On 05/03/2013 10:09 AM, James Kuyper wrote:
....
However, it is possible to test it indirectly. The most extreme possible
case occurs if DBL_EPSILON is exactly 1e-9, which requires that
FLT_RADIX be a power of 10. In that case the floating point constant
1.000000002 could be represented exactly, but the standard does not
require it to be. It could be converted into a floating point value as
low as 1.000000001, but no lower, or as high as 1.000000003, but no
higher (6.4.4.2p3). The comparison operators <, >, <=, >=, ==, and !=
are not covered by 5.2.4.2.2p6, so you can use them to reliably test
whether or not this requirement has been met: 1.00000000 <= 1.000000002
&& 1.000000002 <= 1.000000004.

I just realized that comparisons that could come up equal don't properly
test conformance. I need comparisons for which inequality is guaranteed,
despite the inaccurate conversions allowed by 6.4.4.2p3. Therefore, go for
1.000000000 < 1.000000003 && 1.000000003 < 1.000000006
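
Written out as a program (a sketch using the constants above; the relational operators are not covered by 5.2.4.2.2p6, so a conforming implementation has no slack here):

#include <stdio.h>

int main(void)
{
    /* Even with the worst-case constant conversions allowed by 6.4.4.2p3,
     * both comparisons must hold if DBL_EPSILON really is no greater than
     * the 1e-9 ceiling required by 5.2.4.2.2. */
    if (1.000000000 < 1.000000003 && 1.000000003 < 1.000000006)
        puts("consistent with DBL_EPSILON <= 1e-9");
    else
        puts("double is provably less precise than the standard requires");
    return 0;
}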
 

Marcin Grzegorczyk

David Brown wrote:
[...]
Some embedded compilers I have used have 4-byte doubles (identical to their
single-precision floats), which is not unreasonable for 8-bit
microcontrollers. Can such a compiler be "conforming", or must the
doubles be bigger? I'm getting the impression that a conforming
compiler would need 8-byte doubles, but could throw away the extra bits
and do the calculations as floats.

Indeed, the minimum requirements given in 5.2.4.2.2 for DBL_DIG,
DBL_MIN_10_EXP and DBL_MAX_10_EXP cannot be met by an implementation
that stores double in 32 bits. However, 8 bytes are not necessary; 6
are enough. (If both the exponent and the significand are stored in
binary, then (unless I made a mistake in the calculations) the exponent
requires min. 8 bits and the significand requires min. 34 bits, of which
the most significant one need not be stored. Together with the sign
bit, that gives 42 bits total.)

(Side note: IIRC, at least one C implementation for Atari ST (which did
not have an FPU) did use 6-byte doubles. Unlike the IEEE formats, the
exponent was stored after the significand, so that they could be
separated for further processing with only AND instructions, no shifts.)

Still, as James Kuyper wrote in this thread, an implementation could
pretend (via the <float.h> constants) to provide more precision than it
really did, and probably still claim conformance. It would be a rather
perverse thing to do, though.
 

glen herrmannsfeldt

(snip, someone wrote)
(snip)
To be pedantic, IEEE-754 does not specify the internal representation
of the float. Field order, bit order, and the (non)presence of
internal padding are all unspecified. There is a specified
*interchange* format (which uses the common sign/exponent/mantissa
format), but there is no requirement that a CPU implement that as its
internal format. It would be perfectly acceptable for an
implementation of doubles to put the exponent in the last eleven bits, if
the ISA architects found that convenient.

Yes. Well, normally you can't say much about the actual order of bits
inside the processor. There is no reason to give them an order until
they are written to memory.

As an example, the internal format of the x87 is the 80 bit temporary
real format. The order is known, as it is possible to write them out
in that form.
OTOH, I've not seen an actual implementation that didn't use the
interchange format (endian issues aside) internally, although I could
see a software implementation taking some liberties for the same
reasons the Atari implementation did (perhaps on an embedded C
implementation where external communication of binary float images is
unlikely).

It would complicate things. You would expect, for example, the Fortran
UNFORMATTED I/O to use the interchange format, though normally it is
expected to just copy the bits. It is even less obvious that fwrite()
could write the bits in a different order than they were stored in
memory.

-- glen
 

Malcolm McLean

It's neither required to store the interchange format in memory, nor to
write it to a file. But your odds of reading it from a file on a
different IEEE-754 supporting system are better if you do write the
interchange format. And even that's of somewhat limited value, as
most people prefer a text format for exchanging data between systems
(not that I necessarily agree with that, but that seems to be the
world's preference).


I wrote this function to write IEEE floats portably.

#include <stdio.h>   /* FILE, fputc, ferror */
#include <float.h>   /* DBL_MAX */

/*
 * write a double to a stream in ieee754 format regardless of host
 * encoding.
 * x - number to write
 * fp - the stream
 * bigendian - set to write big bytes first, else write little bytes
 * first
 * Returns: 0 or EOF on error
 * Notes: different NaN types and negative zero not preserved.
 * if the number is too big to represent it will become infinity
 * if it is too small to represent it will become zero.
 */
static int fwriteieee754(double x, FILE *fp, int bigendian)
{
    int shift;
    unsigned long sign, exp, hibits, hilong, lowlong;
    double fnorm, significand;
    int expbits = 11;
    int significandbits = 52;

    /* zero (can't handle signed zero) */
    if(x == 0)
    {
        hilong = 0;
        lowlong = 0;
        goto writedata;
    }
    /* infinity */
    if(x > DBL_MAX)
    {
        hilong = 1024 + ((1<<(expbits-1)) - 1);
        hilong <<= (31 - expbits);
        lowlong = 0;
        goto writedata;
    }
    /* -infinity */
    if(x < -DBL_MAX)
    {
        hilong = 1024 + ((1<<(expbits-1)) - 1);
        hilong <<= (31-expbits);
        hilong |= (1 << 31);
        lowlong = 0;
        goto writedata;
    }
    /* NaN - dodgy because many compilers optimise out this test, but
     * there is no portable isnan() */
    if(x != x)
    {
        hilong = 1024 + ((1<<(expbits-1)) - 1);
        hilong <<= (31 - expbits);
        lowlong = 1234;
        goto writedata;
    }

    /* get the sign */
    if(x < 0) {sign = 1; fnorm = -x;}
    else {sign = 0; fnorm = x;}

    /* get the normalized form of f and track the exponent */
    shift = 0;
    while(fnorm >= 2.0) { fnorm /= 2.0; shift++; }
    while(fnorm < 1.0) { fnorm *= 2.0; shift--; }

    /* check for denormalized numbers */
    if(shift < -1022)
    {
        while(shift < -1022) {fnorm /= 2.0; shift++;}
        shift = -1023;
    }
    /* out of range. Set to infinity */
    else if(shift > 1023)
    {
        hilong = 1024 + ((1<<(expbits-1)) - 1);
        hilong <<= (31-expbits);
        hilong |= (sign << 31);
        lowlong = 0;
        goto writedata;
    }
    else
        fnorm = fnorm - 1.0; /* take the significant bit off mantissa */

    /* calculate the integer form of the significand */
    /* hold it in a double for now */
    significand = fnorm * ((1LL<<significandbits) + 0.5f);

    /* get the biased exponent */
    exp = shift + ((1<<(expbits-1)) - 1); /* shift + bias */

    /* put the data into two longs (for convenience) */
    hibits = (long) ( significand / 4294967296);
    hilong = (sign << 31) | (exp << (31-expbits) ) | hibits;
    x = significand - hibits * 4294967296;
    lowlong = (unsigned long) (significand - hibits * 4294967296);

writedata:
    /* write the bytes out to the stream */
    if(bigendian)
    {
        fputc( (hilong >> 24) & 0xFF, fp);
        fputc( (hilong >> 16) & 0xFF, fp);
        fputc( (hilong >> 8) & 0xFF, fp);
        fputc( hilong & 0xFF, fp);

        fputc( (lowlong >> 24) & 0xFF, fp);
        fputc( (lowlong >> 16) & 0xFF, fp);
        fputc( (lowlong >> 8) & 0xFF, fp);
        fputc( lowlong & 0xFF, fp);
    }
    else
    {
        fputc( lowlong & 0xFF, fp);
        fputc( (lowlong >> 8) & 0xFF, fp);
        fputc( (lowlong >> 16) & 0xFF, fp);
        fputc( (lowlong >> 24) & 0xFF, fp);

        fputc( hilong & 0xFF, fp);
        fputc( (hilong >> 8) & 0xFF, fp);
        fputc( (hilong >> 16) & 0xFF, fp);
        fputc( (hilong >> 24) & 0xFF, fp);
    }
    return ferror(fp);
}
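
A possible usage sketch (hypothetical file name, assuming it sits in the same translation unit as the function above):

#include <stdio.h>

int main(void)
{
    FILE *fp = fopen("doubles.bin", "wb");
    if (fp == NULL)
        return 1;
    /* write two doubles, big bytes first */
    fwriteieee754(3.141592653589793, fp, 1);
    fwriteieee754(-0.5, fp, 1);
    return fclose(fp) == 0 ? 0 : 1;
}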
 

Ike Naar

I wrote this function to write IEEE floats portably.

[snip]
/* put the data into two longs (for convenience) */
hibits = (long) ( significand / 4294967296);
hilong = (sign << 31) | (exp << (31-expbits) ) | hibits;
x = significand - hibits * 4294967296;

What's the purpose of the above statement?
It looks like x is not used in the rest of the function.
 

Ian Collins

Malcolm said:
I wrote this function to write IEEE floats portably.

Handy.

I would make "writedata" a function and remove the gotos!

[snip]
/* infinity */
if(x > DBL_MAX)
{
hilong = 1024 + ((1<<(expbits-1)) - 1);
hilong <<= (31 - expbits);

Sun cc flags this as an integer overflow, which looks to be correct.
 

Tim Rentsch

David Brown said:
[snipping a lot]

But it strikes me that "strictly conforming" is so tight that it
is close to useless, if you can't write real-world programs that
are strictly conforming. [...] On the other hand, "conforming"
is so loose that it is close to useless.

The point of these terms (and their definitions) is to aid
discussion of implementations, not of programs. Sometimes
people forget that the main reason for having a C standard
is to define requirements for implementors, not to help
developers.
 

Tim Rentsch

David Brown said:
[..snip..]

Some embedded compilers I have used have 4-byte doubles (identical to their
single-precision floats), which is not unreasonable for 8-bit
microcontrollers. Can such a compiler be "conforming", or must
the doubles be bigger? I'm getting the impression that a
conforming compiler would need 8-byte doubles, but could throw
away the extra bits and do the calculations as floats.

You'll get better answers to questions like this, and a better
understanding of what is being said, if you first read what
the Standard says and try to discover the answer yourself:

http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1256.pdf
http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1570.pdf

For this question you should look at what's written in section
5.2.4.2.2 'Characteristics of floating types'.
 

Tim Rentsch

Marcin Grzegorczyk said:
David Brown wrote:
[...]
Some embedded compilers I have used have 4-byte doubles (identical to their
single-precision floats), which is not unreasonable for 8-bit
microcontrollers. Can such a compiler be "conforming", or must the
doubles be bigger? I'm getting the impression that a conforming
compiler would need 8-byte doubles, but could throw away the extra bits
and do the calculations as floats.

Indeed, the minimum requirements given in 5.2.4.2.2 for DBL_DIG,
DBL_MIN_10_EXP and DBL_MAX_10_EXP cannot be met by an implementation
that stores double in 32 bits. However, 8 bytes are not necessary; 6
are enough. (If both the exponent and the significand are stored in
binary, then (unless I made a mistake in the calculations) the
exponent requires min. 8 bits and the significand requires min. 34
bits, of which the most significant one need not be stored. Together
with the sign bit, that gives 42 bits total.)

The formula given for DBL_DIG in 5.2.4.2.2 p11 implies, for b == 2,
a lower bound of 35 significand bits, giving 43 bits altogether (ie,
the rest of the calculation is right).
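
A quick numeric check of that bound (a sketch, not from the thread, using the 5.2.4.2.2p11 formula for b == 2 and the 8-bit exponent figure from above):

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* smallest binary precision p with floor((p-1) * log10(2)) >= 10,
     * i.e. the smallest p that can give DBL_DIG >= 10 */
    int p = 1;
    while ((int)floor((p - 1) * log10(2.0)) < 10)
        p++;
    printf("minimum significand bits: %d\n", p);   /* prints 35 */
    printf("stored: %d significand + 8 exponent + 1 sign = %d bits\n",
           p - 1, (p - 1) + 8 + 1);                /* 34 + 8 + 1 = 43 */
    return 0;
}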

However, if b == 10 is used, and the exponent and significand are
encoded into a single (binary) numeric quantity, this can be
reduced to just 41 bits. This representation also gives a
noticeably larger exponent range (though entailing, of course, a
loss of almost two bits of precision, plus an unmentionably
large cost converting to/from the stored representation to do
floating point arithmetic).
(Side note: IIRC, at least one C implementation for Atari ST (which
did not have an FPU) did use 6-byte doubles. Unlike the IEEE formats,
the exponent was stored after the significand, so that they could be
separated for further processing with only AND instructions, no
shifts.)

Still, as James Kuyper wrote in this thread,

Either I missed this posting, or I understood its contents as
saying something different from the rest of the sentence.
an implementation could pretend (via the <float.h> constants) to
provide more precision than it really did, and probably still
claim conformance. It would be a rather perverse thing to do,
though.

Are you saying that, for example, an implementation could define
DBL_DIG to be 10, even when the stored representation would make
it impossible to convert any 10 decimal digit floating point
number to the stored representation and back again (so as to get
the original 10 decimal digits)? AFAICS such an implementation
could no more be conforming than one that defined INT_MAX as 15.
Have I misunderstood what you're saying? If not, can you explain
how you arrive at this conclusion, or what interpretation might
allow such a result?
 

Keith Thompson

Tim Rentsch said:
David Brown said:
[snipping a lot]

But it strikes me that "strictly conforming" is so tight that it
is close to useless, if you can't write real-world programs that
are strictly conforming. [...] On the other hand, "conforming"
is so loose that it is close to useless.

The point of these terms (and their definitions) is to aid
discussion of implementations, not of programs. Sometimes
people forget that the main reason for having a C standard
is to define requirements for implementors, not to help
developers.

I'm not sure I agree with that last statement. I see a language
standard as a contract between implementers and developers, applying
equally to both.
 

James Kuyper

On 05/04/2013 06:55 PM, Marcin Grzegorczyk wrote:
....
Still, as James Kuyper wrote in this thread, an implementation could
pretend (via the <float.h> constants) to provide more precision than it
really did, and probably still claim conformance. It would be a rather
perverse thing to do, though.

I didn't really intend to imply that, though there's a certain amount of
truth to it. The standard does fail to impose any accuracy requirements
whatsoever on floating point arithmetic or on the functions in the
<math.h> and <complex.h> headers (5.2.4.2.2p6). That fact makes it difficult to
write any code involving floating point operations that is guaranteed by
the standard to do anything useful. As a special case of that general
statement, it makes it difficult to prove that a defective <float.h> is
in fact defective.

However, it was not my point that <float.h> could be defective. In fact,
what I basically demonstrated is that there's a limit on how much
imprecision can be covered up by this means. The inaccuracy allowed by
the standard for conversion of floating point constants to floating
point values is sufficient to make it difficult to prove non-conformance
if the correct value for DBL_EPSILON were as large as twice the
standard's maximum value. However, anything more than twice that maximum
would be provably non-conforming, despite what it says in 5.2.4.2.2p6.
That's not enough to fake conformance with the standard's requirements
while using 32-bit floating point for double, which was the issue I was
addressing.
 

Marcin Grzegorczyk

Tim said:
The formula given for DBL_DIG in 5.2.4.2.2 p11 implies, for b == 2,
a lower bound of 35 significand bits, giving 43 bits altogether (ie,
the rest of the calculation is right).

Checked that again, you're right.
I did make a mistake in the calculations, then :)
 

Tim Rentsch

Keith Thompson said:
Tim Rentsch said:
David Brown said:
[snipping a lot]

But it strikes me that "strictly conforming" is so tight that it
is close to useless, if you can't write real-world programs that
are strictly conforming. [...] On the other hand, "conforming"
is so loose that it is close to useless.

The point of these terms (and their definitions) is to aid
discussion of implementations, not of programs. Sometimes
people forget that the main reason for having a C standard
is to define requirements for implementors, not to help
developers.

I'm not sure I agree with that last statement. I see a
language standard as a contract between implementers and
developers, applying equally to both.

I don't disagree on your fundamental point, but there's a
subtle distinction between what the two statements are
considering. The main reason for having C be standardized is
just as you say, to form a contract between implementors and
developers, and this is of interest to both. However, once the
terms of the contract are decided, the main reason for having a
defining document is to serve as a reference for implementors.
To be sure, some developers will read the defining document,
and even benefit from reading it, but it isn't necessary to be
a successful C developer, whereas reading and understanding the
C Standard _is_ pretty much necessary to write a conforming
implementation. So I don't think it's wrong to say the main
reason for having a C standard -- in the sense of the actual
formal document -- is to define requirements for implementors.
C developers do very much benefit from having the language
be standardized, but for the most part they do not benefit
(directly) from the actual formal document.
 

glen herrmannsfeldt

Tim Rentsch said:
I don't disagree on your fundamental point, but there's a
subtle distinction between what the two statements are
considering. The main reason for having C be standardized is
just as you say, to form a contract between implementors and
developers, and this is of interest to both. However, once the
terms of the contract are decided, the main reason for having a
defining document is to serve as a reference for implementors.
To be sure, some developers will read the defining document,
and even benefit from reading it, but it isn't necessary to be
a successful C developer, whereas reading and understanding the
C Standard _is_ pretty much necessary to write a conforming
implementation.

Well, to me a good language can be understood by developers
using a simpler set of rules, rarely having to look up the
fine details in the standard.

PL/I includes features for scientific, commercial, and systems
programming, and people from one group will rarely need ones
specific for the others. (Many features are used by more than
one group.)

As for precision and range of data types, often the limitations
of an implementation are passed along to users of the program.
That might be more true in floating point, but still true
in fixed point.

One consequence of that is that undetected overflow may surprise
the user of a program.
So I don't think it's wrong to say the main
reason for having a C standard -- in the sense of the actual
formal document -- is to define requirements for implementors.
C developers do very much benefit from having the language
be standardized, but for the most part they do not benefit
(directly) from the actual formal document.

I agree.

-- glen
 
