C99 compiler access


Chris Hills

Douglas A. Gwyn said:
No, there aren't "a lot of problems with the maths".
There were a couple of minor issues with the
specification of some functions which were addressed
by technical corrigenda. The actual mathematics is
okay.

We will have to agree to differ, as I do not know the maths well enough,
but I know several experts who are still complaining about the C99 maths.
(I think you know Nick.)
You appear to have a personal agenda that includes
torpedoing the C standard.

On the contrary. I want a C standard that is used widely. It fills me
with great sadness that ISO-C compliant compilers are not seen as a
requirement by most programmers.

How do we go about creating the climate where ISO-C (i.e. the current
version) is automatically considered necessary?
Could this be related to
your notion that we should have allowed type int to
be only 8 bits wide?

No. Why have an int that is 8 bits? That would not be as much use
generally; unsigned and signed char are fine for 8 bits.

/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills Staffs England /\/\/\/\/\
/\/\/ (e-mail address removed) www.phaedsys.org \/\/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
 

Chris Hills

Douglas A. Gwyn said:
Really? If they can't follow the spec for even such
a simple feature, why would you trust those compilers
at all?


Because both compilers hold an 80% market share in their own markets
(different platforms), and both are the "standard" compiler for their
market.

I had a problem where someone was trying out some code in one development
environment that he wanted to move to the other.

The problem (I will see if I can dig it out) involved multi-line macros
and // style comments. I am not sure which compiler was closer to the
standard.

The problem is that in both development areas (worldwide) both groups
of developers are happy with their system, and for neither group is
"ISO-C compliant" generally a major point.


I have been told on more than one occasion by a compiler user that "the
ISO-C committee has got it wrong" if the standard does not behave the
way their "industry standard" compiler does.


This is the problem. How do we change the perception of programmers so
that they insist on their compiler conforming to the current ISO-C
standard, rather than saying the standard is wrong for not tracking
their compiler?

It is all well and good creating a technically "Good Standard", but it
is a commercial world.

Who is responsible for ensuring that it is a required standard? ISO?
And, locally, BSI, ANSI, DIN?

Or the professional bodies: IEE, IEEE, BCS, etc.?

Or governments?

I don't think it fits the UN's field somehow.


 

Chris Hills

Note: for "ISO-C" below, read "the current version of ISO-C, whatever
that may be at the point in question".

CBFalconer said:
Never heard of TT.

Tasking, unlike Comeau and Dinkumware, is a major compiler company
with development offices in several countries. They do a wide range of
embedded cross compilers and are in the top two or three in most of the
areas they produce compilers for.

The Infineon TriCore is a 32-bit MCU that is widely used in the
automotive sector for ECUs, and I have come across it in image
processing.
I believe the Dinkum effort is a library, not
a compiler.

Just out of curiosity, can a C90 compiler compile C99 libraries?
Comeau uses it. I think Comeau's system outputs C90
tuned to specific compilers, not executable code.

So this means that only the Tasking TriCore compiler is claiming C99
compliance for a complete compiler suite? Mind you, I have not
tested this claim...

Which brings me on to another point... there seems to be no interest in
the industry in validated ISO-C compilers.

The industry does not really care if the compiler is ISO-C (90/99 or
abc), GNU C or splat C, as long as "the product" gets out the door on
time and makes a profit... (OK, there are exceptions some of us could
name, but I am talking generally.)

How do we change this?
It is down, partly, to individual engineers and programmers.
You are using C because that is what programmers wanted. Managers,
lawyers, accountants, marketing people don't care what language you use.

So now we need to create the environment where programmers and engineers
start insisting on ISO-C compilers. If they do, compiler writers will
start producing them.


A technically good standard that no one uses is pointless.
How do you get programmers at large (and not just the infinitesimally
small group who read this NG) to think about and want ISO-C compilers
and tools?



 

Joseph Myers

That makes absolutely no sense. I don't believe in quantum
creation of bits on demand.

6.3.1.5 says

[#1] When a float is promoted to double or long double, or a
double is promoted to long double, its value is unchanged.

[#2] When a double is demoted to float, a long double is
demoted to double or float, or a value being represented in
greater precision and range than required by its semantic
type (see 6.3.1.8) is explicitly converted to its semantic
type, if the value being converted can be represented
exactly in the new type, it is unchanged. If the value
being converted is in the range of values that can be
represented but cannot be represented exactly, the result is
either the nearest higher or nearest lower representable
value, chosen in an implementation-defined manner. If the
value being converted is outside the range of values that
can be represented, the behavior is undefined.

So if FLT_EVAL_METHOD is 2, and x and y are of type float, then (x*y)
has type float but is represented with the range and precision of long
double. (float)(x*y) is represented with the range and precision of
float; any excess bits are removed by the explicit cast. But
(double)(x*y) is represented with the range and precision of long
double; it may have excess bits beyond those in the precision of
double (i.e. be a value not exactly representable as a double).
However (double)(double)(x*y) has only the range and precision of a
double.
 

Joseph Myers

I read all of the document but didn't find anything that really
contradicted my interpretation. Even the parts you pointed out don't
clearly state anything one way or the other. Is there a place where these
exceptions are documented? Making this information more prominent may
prevent this type of confusion in the future.

http://gcc.gnu.org/bugzilla/
(specifically the dependencies of bugs 16620 and 16989)

http://gcc.gnu.org/c99status.html

It would be a bit odd to have documentation that alternates between
saying "complete" and "not complete" whenever conformance bugs are
found and fixed.
 

Mabden

Chris Hills said:
Yes, there is a lot of competition in the embedded world. From 4 to 128
bit. There is also Gcc and many other free tools for most embedded
platforms.

However, whilst they are "rushing" to make their library code MISRA-C
compliant (in as much as it can be), they are not rushing to make their
compilers C99 compliant...

This is strange. MISRA-C is "just a coding guideline" (NOTE: "guideline",
not "standard"), whereas ISO-C is THE standard for their compiler.

The problem is that the ISO-C standard is not seen by the industry as
important, or as a prerequisite for a compiler.

How do we change this perception? How do we get the industry to demand
ISO-C compliant compilers?


Part of the problem is that a large part of the industry uses GCC because
"it's free and you get the source" rather than for any engineering
reasons.

I suspect that until the law or the insurance companies REQUIRE ISO-C
compliant compilers they will not become essential in the industry.

So I can go back to my world of K&R2 and tell Keith and Dan to stuff it?
;-)
 

CBFalconer

Joseph said:
CBFalconer said:
That makes absolutely no sense. I don't believe in quantum
creation of bits on demand.

6.3.1.5 says

[#1] When a float is promoted to double or long double, or a
double is promoted to long double, its value is unchanged.

[#2] When a double is demoted to float, a long double is
demoted to double or float, or a value being represented in
greater precision and range than required by its semantic
type (see 6.3.1.8) is explicitly converted to its semantic
type, if the value being converted can be represented
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
exactly in the new type, it is unchanged. If the value
being converted is in the range of values that can be
represented but cannot be represented exactly, the result is
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
either the nearest higher or nearest lower representable
value, chosen in an implementation-defined manner. If the
value being converted is outside the range of values that
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
can be represented, the behavior is undefined.

So if FLT_EVAL_METHOD is 2, and x and y are of type float, then (x*y)
has type float but is represented with the range and precision of long
double. (float)(x*y) is represented with the range and precision of
float; any excess bits are removed by the explicit cast. But
(double)(x*y) is represented with the range and precision of long
double; it may have excess bits beyond those in the precision of
double (i.e. be a value not exactly representable as a double).
However (double)(double)(x*y) has only the range and precision of a
double.

See the underlined clauses above. Also the following from
5.2.4.2.2 (N869):

14) The floating-point model is intended to clarify the
description of each floating-point characteristic and
does not require the floating-point arithmetic of the
implementation to be identical.
 

CBFalconer

Chris said:
.... snip ...

I suspect that until the law or the insurance companies REQUIRE ISO-C
compliant compilers they will not become essential in the industry.

That in turn requires a freely (in practice) available standard
test suite for evaluation purposes. Then every purchaser can
check compliance.

The suite should have various levels. To do anything it must use
at least getchar and putchar with stdin and stdout. After that
levels can check compliance with each possible #include of
standard headers.

The suite could be developed in much the same manner as the late
Pascal test suite, circa 1980, was created. Unfortunately that
collection was handed over to a firm for maintenance, with rights,
and is now lost. Each test monitored compliance with an
identifiable (via test number) clause in the standard and was a
complete program. For example,
p2-3-4t5 was a program implementing the 5th test on clause 2.3.4.
 

CBFalconer

Chris said:
.... snip ...

Who is responsible for ensuring that it is a required standard?
ISO? and locally BSI? ANSI?, DIN?

or the professional bodies IEE, IEEE, BCS etc

or governments?

I don't think it fits the UN field somehow.

If you can implant the idea in GWB's head, he can then incarcerate
any who fail to comply in the Gitmo gulag forever. You have about
2 months. :)

Seriously, it means some form of legality. Such as allowing
conformance with the standard to be a defence against certain
liabilities. That also requires insisting that software vendors,
contractors, etc. assume liability for their products. Similarly
hardware. My head hurts.
 

Robert Gamble

http://gcc.gnu.org/bugzilla/
(specifically the dependencies of bugs 16620 and 16989)

http://gcc.gnu.org/c99status.html

It would be a bit odd to have documentation that alternates between
saying "complete" and "not complete" whenever conformance bugs are
found and fixed.

If gcc is believed to be C90 conformant aside from the bugs that pop up
from time to time, I would try to make this fact clearer in the
documentation and point the user to the Bugzilla site for the latest
possible conformance issues.

If there are known conformance problems when a new major version is
released, this should be documented. Maybe a small table similar to the
c99status page that just lists the areas where the last x releases do not
conform to the C90 standard?

Rob Gamble
 

Keith Thompson

Mabden said:
So I can go back to my world of K&R2 and tell Keith and Dan to stuff it?
;-)

What the hell are you talking about?

An insult with a smiley is still an insult, you know, and if there's
no apparent motivation for it (why would you want to tell me to "stuff
it" in this context?), it's just bizarre.

Yes, it's common to engage in good-natured insults among friends and
acquaintances, but based on what I've seen here you don't seem to have
the knack for it. On several occasions, you've written something that
was apparently intended as harmless teasing, but it's come across as a
seriously offensive insult. I mean no offense; you could be perfectly
charming and witty in person.

A grossly exaggerated example might be, "So-and-so is an ax-murdering
pedophile -- just kidding!" (No, you're not nearly that bad.)

You might consider just avoiding irony and sarcasm when posting to
technical newsgroups. Or just avoid posting anything with a smiley
that you wouldn't post without one.
 

lawrence.jones

In comp.std.c Joseph Myers said:
So if FLT_EVAL_METHOD is 2, and x and y are of type float, then (x*y)
has type float but is represented with the range and precision of long
double. (float)(x*y) is represented with the range and precision of
float; any excess bits are removed by the explicit cast. But
(double)(x*y) is represented with the range and precision of long
double; it may have excess bits beyond those in the precision of
double (i.e. be a value not exactly representable as a double).

I think you've found a bug in the standard -- the intent was that casts
(and assignments) to a narrower type than the representation should
scrape off the extra bits.

-Larry Jones

You know how Einstein got bad grades as a kid? Well MINE are even WORSE!
-- Calvin
 

Fred J. Tydeman

Joseph said:
So if FLT_EVAL_METHOD is 2, and x and y are of type float, then (x*y)
has type float but is represented with the range and precision of long
double. (float)(x*y) is represented with the range and precision of
float; any excess bits are removed by the explicit cast. But
(double)(x*y) is represented with the range and precision of long
double; it may have excess bits beyond those in the precision of
double (i.e. be a value not exactly representable as a double).
However (double)(double)(x*y) has only the range and precision of a
double.

I understand your example and see why you have that interpretation
(since Standard C is talking about types, not representations, in
most cases). I think Defect Report 290, which addresses a similar
case, needs to be revisited with this example in mind.
---
Fred J. Tydeman Tydeman Consulting
(e-mail address removed) Programming, testing, numerics
+1 (775) 287-5904 Vice-chair of J11 (ANSI "C")
Sample C99+FPCE tests: ftp://jump.net/pub/tybor/
Savers sleep well, investors eat well, spenders work forever.
 

David Hopwood

Chris said:
How do we change this?
It is down, partly to individual Engineers and Programmers.

Why would a competent engineer/programmer introduce an unnecessary dependency
of a project on C99 when compilers for C99 are so thin on the ground? At most,
they will use C99 features that are supported by the compiler they want to use
for other reasons.
So now we need to create the environment where programmers and engineers
start insisting on ISO-C compilers. If they do compiler writers will
start producing them.

C99 doesn't provide enough benefit over C90 for that to happen.
It's not *just* a "marketing" issue, it's a technical issue as well.

To break the chicken-and-egg cycle, someone has to create a conforming C99
implementation *despite* the fact that programmers are not asking for it.
For example, someone could fund GNU to solve gcc's remaining C99
conformance issues.

Incidentally, this is exactly why IETF don't standardize things that haven't
been implemented.
 

Allin Cottrell

Douglas said:
How do you know there is a "lack of interest"? Have you taken
a scientifically valid poll?

Anyway, until C99 compliance is sufficiently widely available,
it is unlikely to be a project requirement. That alone would
be sufficient to explain any apparent "lack of interest".

Surely this is putting the chicken before the horse.

The following is my analysis. It is not based on scientifically
validated evidence and may be wrong.

Around the time when the first C standard was introduced, C was a
very popular programming language. The C world had a strong reason
to avoid the proliferation of dialects and the standard was
welcome. Compiler vendors had a strong incentive to produce
standard-compatible compilers.

By the time C99 came out, C had lost a fair amount of ground to
C++ and Java (and now it, and/or the latter languages, are losing
ground to C#). Microsoft has no particular reason to invest a lot
of resources in a C99 compiler, since it has hitched its fortunes
to C# and .NET.

Nonetheless, C remains the lingua franca of open-source programming.
I suspect that the great bulk of newly-written C code is open
source. In that context, people are mostly compiling it with gcc,
so the relationship between gcc and C99 is of key importance.
But as Dan has said, C99 to a large extent reinvents wheels that
were already available as gcc extensions. Hence C99 is in
trouble.

Just how much of this was foreseeable prior to 1999, I'm not sure.

Allin Cottrell
 

Douglas A. Gwyn

Chris said:
It is a coding guideline.
My point was that most of the world's embedded compiler suites are making
their library code MISRA-C compliant because they think it is a Good
Idea (commercially).
Along the same lines, very few of the world's compiler writers are making
their compilers C99 compliant (with any real effort), despite the fact
they knew it was coming and despite the fact that in theory it is the
specification for their compilers.

You aren't making sense. You agree that MISRA C is a coding
guideline. Apparently it assumes that such code will be
compiled etc. using a C90 conforming implementation. The
same code can be compiled etc. unchanged using a C99
implementation.
So the question is: how do we create an environment in the industry in
general where ISO-C is automatically considered a prerequisite for a
compiler?

Standard conformance is normally a contractual requirement
such as found in the U.S. FIPS. If you don't specify anything
for a product you buy, then you get whatever the vendor wants
to deliver.
"Several" "features" is not good enough. We need "most" and "full
compliance".

My point was that the 1999 standard is playing a role in the
evolution of C compilers. It is unrealistic to expect fully
conforming implementations on day 1 of the standard.
 

Douglas A. Gwyn

CBFalconer said:
That in turn requires a freely (in practice) available standard
test suite for evaluation purposes. Then every purchaser can
check compliance.

No, in fact historically compiler validation has been
done by professional validation services, and not for
free. The U.S. government purchased a validation
suite for use in validating Federal C compiler
procurements against the initial C FIPS.
 

Douglas A. Gwyn

Chris said:
I have been told on more than one occasion by a compiler user that "the
ISO-C committee has got it wrong" if the standard does not behave the
way their "industry standard" compiler does.

Many people would find that grounds for switching to
another vendor with a better appreciation of standards.
 

Douglas A. Gwyn

David said:
Not true. Because auto variables are constant size and recursion can be
limited, it may be the case on a particular platform that a given program
will *never* have undefined behaviour due to a stack overflow. This is
very unlikely to be true if the program uses VLAs, because the main point
of using VLAs is to allocate objects of arbitrary sizes.

Actually it is to allow *parametric* array sizes,
not *arbitrarily large* sizes. Even if the entire
app were coded using constant, largest supported
sizes for every array, in general you still wouldn't
know whether the stack will overflow at run time,
unless you do careful analysis (and happen to have
an algorithm that is not too dynamic).
 

Douglas A. Gwyn

Richard said:
I mean malloc() may return a non-null value and then fail when you
try to actually use the memory. Presumably you already know about this.

I know that such an implementation is badly broken.
For me, the behaviour of malloc() is not the only consideration in
choosing a platform.

What, reliable execution of carefully written programs
is not important?
 
