So what Standard are we working off?

Richard Heathfield

Chris Hills said:
Poelstra <[email protected]> writes

You mean C95. C99 conformance is a major issue in some areas.

I think he meant that, because we already have an implemented C Standard,
the existence of an *un*implemented standard is of little consequence from
the perspective of people whose goal is to be able to port their programs
with little or no rewriting. For such people, to use the Standard that is
already implemented across the board is a very easy decision to make.
 
Mark F. Haigh

Mark said:
C is a language whose problems are paramount and staring everyone in
the face. But C99 basically addresses none of these problems over C89.
The rise of the importance of cryptography as a field points out two
problems with the C language: 1) No simple way to express a high-word
multiply operation (even though the vast majority of CPU architectures
do support this with direct, and often highly accelerated hardware
support), [...]

Practically speaking, this is a non-issue. People just insert the
machine instruction into the code via platform-specific inline assembly
support.

Uh ... right, that's what they do. But then it's not C code anymore --
it's assembly.

Spare me the theatrics. The code is simply platform-specific C. The
inclusion of assembly fragments into a C codebase does not make it
suddenly all assembly code. That's a rather ridiculous thing to say.
So why don't Python and Java simply call out to an
external library that is outside of their language (through native
methods or shell calls or whatever) to do this same thing? Obviously,
these are things that are naturally covered by those languages
directly (in their embodiment of bignums, of course).

I'm not sure what you're asking. If you want to definitively know why
Python and Java do what they do, you're going to have to ask the
authors about it. Python and Java are both built upon hundreds of
thousands of lines of C and C++ code, some of it very platform
specific. It's really a testament to the understated portability,
efficiency, and ubiquity of the C language family that somebody would
accuse C itself of being so inferior to languages implemented with it!
Oh! You were so close. You meant to say: "Of course it does". I
personally endorse the inclusion of pop-count operations in the C
standard library as well. Also bit scanning instructions. Following
the impeccable reasoning you just presented.

WG14 is certainly smart enough to know that this type of approach is
fools' gold. Use additional libraries.

Well, they manage to do so with a lot of extra redundant work. Unless
they are an exact integer multiple of 32 bits, they are also going to
end up being a lot slower because of it.

Am I reading that right? You're claiming my systems with a 64-bit long
do "a lot of redundant work"? Or am I magically exempt because they're
"an exact integer multiple of 32 bits"? I'd like to hear your
reasoning on that.

I was referring to what C89 was. I specifically point out that C99
"fixes" the situation by making it explicit from a syntactical point of
view (adding restrict.)

No. C99 enables __further__ optimization. It does not fix the
situation, unless the situation is "Hey! r8 doesn't need to be spilled
and later reloaded because the access through the pointer at r9 can't
possibly change it. But how can we tell the compiler that..."
But in real effect, compilers don't behave
differently, except for being able to go whole hog on no-alias
optimization whenever they see "restrict".

Aliasing is also an issue in terms of *correctness*. strcat(p,p) has a
very obvious intuitive meaning that doesn't match up with its actual
meaning (which is nothing, since it invokes undefined behaviour).

That's the thing with C. There's no such thing as "intuitive meaning".
Actual meaning is the only relevant meaning.
But even now that we
have this explicit syntax, we don't see any of these compilers
enforcing compile-time checking for aliasing anyway. In general it
requires runtime checking to be fully enforced anyhow. Once you do
that, however, it becomes just as easy to go ahead and break out the
aliasing case and make it function in some well-defined way.

You mean other than these types of warnings?

foo.c:11: warning: dereferencing type-punned pointer will break
strict-aliasing rules


Mark F. Haigh
 
Chris Hills

Richard Heathfield said:
Chris Hills said:


I think he meant that, because we already have an implemented C Standard,
the existence of an *un*implemented standard is of little consequence from
the perspective of people whose goal is to be able to port their programs
with little or no rewriting. For such people, to use the Standard that is
already implemented across the board is a very easy decision to make.

I agree.

The problem is that in some safety-critical areas there is a requirement
to use a "conforming compiler". Legally, the C standard is C99. This is
where the problem is caused. It is for this reason, AFAIK, that the few
C99 compilers that exist are C99 compliant. Not for any other
[technical] reason.

BTW you keep going on about portability. There are some areas where this
is useful (libraries, RTOSes, graphics libraries, etc.) but not for the
majority.

Portable across several parts in the same MCU family is one thing but
not from architecture to architecture.

That said you do want the non architecture specific code to behave in a
consistent manner.
 
Keith Thompson

Keith said:
Keith Thompson wrote:
(e-mail address removed) writes: [...]
and 2) Non-determinable integer scalar sizes, that are not
enforced to 2s complement.

I.e., the standard can be implemented on architectures that don't use
2's complement.

Right; and there was a time where this might have made sense. It was
certainly long before 1999.

*I* as a programmer will never be in a position to test the
validity of code on a non-2s-complement machine. I can't realistically
buy such a machine. I can't gain access to one, and I don't personally
know of anyone who has access to such a machine. And 99.999% of all
programmers are in the same position. Cryptographers have decided they
don't know how to deal with those 0.001% and so they just push on ahead
assuming 2s complement so they can do useful things like cryptography.

There's nothing stopping you from writing pure C code that assumes a
2's-complement representation. Just add something like this:

#include <limits.h>
#if INT_MIN == -INT_MAX
#error "This code only works on two's-complement systems."
#endif

in one of your application's header files. (I can imagine the test
not working properly in some bizarre circumstances, but it seems
unlikely.)

How do I know this? As I said I don't have a machine where I can test
this.

Neither do I, but I know enough about integer representations to be
reasonably sure of it.
[...] Or, if you're really paranoid, do a test at run time that
examines the bits of a negative integer, and abort the program if it
fails (you can also check for padding bits if necessary). You're now
effectively programming in a subset of C that requires 2's-complement,
and there was no need to change the standard to forbid implementations
that use other representations.

Right, or I could do nothing and watch nobody complain. BTW, which
test should I use to eliminate all other number representation systems?
Because I have no idea what all the alternatives are.

In C99, the alternatives are 2's complement, ones' complement, and
signed magnitude. The C90 standard isn't as specific, but I've never
heard of a C implementation that used any other representation. In my
opinion, it's safe to assume that those are the only possibilities.
It's *almost* safe to assume 2's-complement, but making the assumption
explicit is easy and doesn't hurt anything.
How would I know this? I use right shift, wrap-around, mixing exclusive
or with addition, etc. as just a natural way of doing things. I know
that some of these things rely on the representation, but I don't
know what would fail on other systems.

It depends on what you're doing, I suppose.
On 64-bit systems, long is typically 64 bits.

Perhaps on marginal older 64-bit systems.
No.
[...] These are not "marginal
platforms", and they're becoming less marginal all the time. You can
buy an x86-64 system at your local computer store, and you can install
any of a number of mainstream operating systems on it.

On x86-64 systems long is *32 bits*. This is because de facto standards
are far more compelling than unadopted ones. I'm pretty sure that
64-bit UltraSparc is the same way, and I'd bet that 64-bit PPC is also
the same.

No. On an x86-64 system running Red Hat Linux and gcc 3.2.3, long is
64 bits. On a SPARC Solaris 9 system, using either "gcc -m64" or
"cc -xtarget=ultra -xarch=v9", long is 64 bits. On a 64-bit IBM PPC
AIX system using "xlc -q64", long is 64 bits.
Tell that to Apple, Sun and DEC.

I don't currently have access to an Apple system. Sun, as you can
see, already knows this. DEC no longer exists; they were absorbed by
Compaq, which was then absorbed by HP.

Assuming that long is exactly 32 bits is neither safe, nor correct,
nor necessary.
 
Richard Heathfield

Chris Hills said:

BTW you keep going on about portability. There are some areas where this
is useful Libraries, RTOS, graphics libraries etc but not for the
majority.

If portability doesn't matter, why bother using C? I can write a Win32
program a darn sight quicker using C++ Builder than I can using C. Super
Mario clicks the mouse to select the button to rescue the princess, then
double-clicks the button to write the code to find the treasure. Lots of
fun. :)

But if I want my program to run on *your* computer, without knowing who
*you* are, I have to make sure it's portable. It has to work on Win32,
MS-DOS, Linux, BSD, VM/CMS, MVS, MacOS, AmigaDOS, TOS... in short, any
hosted environment at all.
Portable across several parts in the same MCU family is one thing but
not from architecture to architecture.

I would guess you haven't worked much in a mainframe environment? In the
Lost World, so to speak, it is very common for C code to be written and
tested on the PC, and then moved up to the mainframe right at the last
minute (or "fortnight" as they are sometimes called) for integration
testing. So the code has to be portable between two very, very different
architectures, even though it will only be /used/ in one of them.
 
Joe Wright

Keith said:
Certainly.

#include <stdio.h>

main(void)                   /* C99 removes implicit int */
{
    int restrict = 5;        /* restrict and inline are new keywords */
    int inline = 10 //*
    /* subtle effect of "//" comments */
    restrict;
    printf("inline = %d\n", inline);
    return;                  /* C99 requires return <expr> for a non-void function */
}
In C90 I wouldn't be using restrict or inline, nor variable-length
arrays. Long habit has me using explicit int as the type for those
functions that want it and always returning neatly.

I should probably get the latest gcc and turn on the c99 flag and see if
any of my current code has any problems with it.
 
santosh

Richard said:
Chris Hills said:


I would guess you haven't worked much in a mainframe environment? In the
Lost World, so to speak, it is very common for C code to be written and
tested on the PC, and then moved up to the mainframe right at the last
minute (or "fortnight" as they are sometimes called) for integration
testing. So the code has to be portable between two very, very different
architectures, even though it will only be /used/ in one of them.

But I suppose, practically speaking, most serious programs don't
contain one hundred percent portable code. They'll probably have some
"interface" code, specific to the platform, while the rest of the
program (as much as is feasible) would be in standard C.

I guess, currently, programs using C99 features are in some ways like
that.

If I want to do more than what is possible with standard C, I'll have to
use implementation-specific code. At this point the tendency arises to
go ahead and make most of the program use compiler/OS extensions, to
get the most bang per line of code. Resisting this temptation, however,
and retaining as much standard C code as possible, using non-standard
code in a decoupled manner and in the 'correct' places, takes some
thinking and practice.

The more the program depends on compiler/platform specific functions,
the more the temptation to forget about standards and portability and
just throw your money behind a specific platform. :)
 
Chris Hills

Richard Heathfield said:
Chris Hills said:



If portability doesn't matter, why bother using C? I can write a Win32
program a darn sight quicker using C++ Builder than I can using C.

Then use C++ builder. I would.
Super
Mario clicks the mouse to select the button to rescue the princess, then
double-clicks the button to write the code to find the treasure. Lots of
fun. :)

But if I want my program to run on *your* computer, without knowing who
*you* are, I have to make sure it's portable.

But how many apps need to do this? Most seem to be Win2K and XP these
days, or ME, 2K & XP; they don't even cover Win9* any more, let alone
any of the other OSes you mention. Most of the SW I have is PC-only.
Only a few are PC or Mac.
It has to work on Win32,
MS-DOS, Linux, BSD, VM/CMS, MVS, MacOS, AmigaDOS, TOS... in short, any
hosted environment at all.

Maybe.

I would guess you haven't worked much in a mainframe environment?

A couple of times but most is embedded.
In the
Lost World, so to speak, it is very common for C code to be written and
tested on the PC, and then moved up to the mainframe right at the last
minute (or "fortnight" as they are sometimes called) for integration
testing. So the code has to be portable between two very, very different
architectures, even though it will only be /used/ in one of them.

Fair enough but I bet it won't run on an 8051? :)

There is portability and there is portability....

I can write code that is completely portable, but most of the stuff I
write is not, because it is for embedded micros where portable code is
larger and slower. If you have code that is larger and slower it can
cost a fortune if it needs a slightly larger memory.... adds 1 USD to a
unit..... 50,000 units a year.


As I keep saying though code which is not machine specific should be
portable and behave in the same way on similar machines...
 
Frederick Gotham

Richard Heathfield posted:
In the Lost World, so to speak, it is very common for C code to be
written and tested on the PC, and then moved up to the mainframe right
at the last minute (or "fortnight" as they are sometimes called) for
integration testing.


A "fortnight" is exactly two weeks where I'm from.
 
Richard Heathfield

Chris Hills said:
<[email protected]> writes

Then use C++ builder. I would.

If I'm writing a Win32 program, then most of the time that's /precisely/
what I do. But I'm not always writing Win32 programs. In fact, hardly ever.

but how many apps need to do this?

Few *applications*, it's true - if you are using that in the sense of "big
program", so to speak. Mostly, application writers know their target
platform. Even then, if it's a good application, one day it'll have to be
ported. If it is written with portability planned-in, that task will be
easier.

And then there are libraries. I'd far rather use /one/ library, regardless
of which platform I'm using, than have to learn a separate library for each
platform. So if libraries can be written portably, obviously that benefits
platform-specific software development on /all/ platforms, which is good,
right?
Most seem to be WIn2K and XP these
days or ME, 2K & XP, they don't even cover win9* anymore let alone any
of the other OS you mention. Most of the SW I have is PC or PC.

Sorry, I thought this was comp.lang.c - I didn't realise it was
comp.lang.softwarethatChrisbuys :)

I have four different OSs running right here, only one of which is in the
Win2K/XP category. I doubt very much whether I'm the only one who runs
several OSs, either. If I want my computers to be able to do <foo>, I think
it makes sense to write *one* program to do <foo>, rather than write a
WinXPFoo, a LinuxFoo, and so on.


Well, that's precisely why the code posted by clc regulars is portable - we
don't know, don't ask, and don't care what platform the OP is using. Such
minor details are left for other newsgroups to deal with.

And the same goes for exegetic programs (of which I write far too many). How
would the Unix world have reacted if K&R had begun their book on C by
writing an 80-line "Hello world" with a WinMain entry point?

Fair enough but I bet it won't run on an 8051? :)

Depends. Is it a hosted environment? If so, probably the code would work
just fine.
There is portability and there is portability....
Agreed.

I can write code that is completely portable

...and that's what we do here in comp.lang.c ...
but most of the stuff I write is not

...and that's what we do in other newsgroups.
because it is for embedded micros where portable code is
large and slower. If you have code that is large and slower it can
cost a fortune if it needs a slightly larger memory.... adds 1 USD to a
unit..... 50,000 units a year.

50,000 bucks is nothing compared to the money you save by being able to port
your half-million-line browser/mailclient to a new platform in four
person-weeks rather than a year or so, over and over again as more STB
suppliers become interested in your product. One of my former clients made
precisely this investment, and for precisely this reason, and it paid off
in hard cash savings.
As I keep saying though code which is not machine specific should be
portable and behave in the same way on similar machines...

Precisely so. And code that /is/ machine-specific can be isolated into
self-contained modules.
 
Keith Thompson

Joe Wright said:
In C90 I wouldn't be using restrict nor inline.
[...]

Do you mean you wouldn't be using those identifiers (which are
perfectly valid in C90), or that you wouldn't be using those features
(which are new in C99)?

You asked about correct C90 code that can't be compiled with a C99
compiler.
 
santosh

Richard said:
Chris Hills said:



If portability doesn't matter, why bother using C?
[snip]

This paper seems to be a good summary of C99 and its availability:
<http://www-128.ibm.com/developerworks/linux/library/l-c99.html>

It appears to vindicate what Richard has said: stick to C90 if
portability is to be ensured; if partial portability is acceptable,
then C99 can be considered.

Personally I like the integer types and snprintf(), as well as flexible
array members.
 
Stephen Sprunk

santosh said:
But I suppose, practically speaking, most serious programs don't
contain hundred percent portable code. They'll probably have some
"interface" code, specific to the platform, while the rest of the
program, (as much as this is feasible), would be in standard C.

Correct. Most programs intended to be portable have the application logic
written in 100% portable C that calls into implementation-specific wrappers
which are disgustingly non-portable. It's probably 90% former and 10%
latter on any given platform, but _all_ of the wrappers probably constitute
30-50% of the project given how many of them you need (consider a GUI
program that's supposed to work on Win32, Motif, OS/X, etc).
I guess, currently, programs using C99 features are in some ways like
that.

What I've seen in OSS projects is that someone will submit code that is
subtly non-portable, and a few weeks later one of the port maintainers for
"uncommon" platforms will complain that the code no longer works,
investigate things, and cleverly rewrite the code to be portable without
losing performance on "common" systems.

C99 is pretty much rejected in the "portable" sections because there's
always someone, somewhere who is still using a C90 compiler. I learned that
the hard way; I use a lot of C99isms and GNUisms in my own code for
convenience, and before I learned what they were, I found when I'd submit
patches to OSS projects, what actually got committed by the maintainers was
rather different because they'd rewrite sections to make them C90.

Some projects are explicitly only supported on GCC, e.g. the Linux kernel,
so GNUisms and many C99isms are allowed. This is similar to the large
number of projects that explicitly require POSIX, though many of those still
require C90 since not all compilers on POSIX systems support C99 yet.

That said, I'm not aware of any project that would allow "new" or "restrict"
as a variable/function name. Code breaking when compiled as C99 or C++ is
just as bad as code breaking when compiled as C90.

S
 
Stephen Sprunk

Richard Heathfield said:
Chris Hills said:

50,000 bucks is nothing compared to the money you save by being
able to port your half-million-line browser/mailclient to a new platform
in four person-weeks rather than a year or so, over and over again as
more STB suppliers become interested in your product. One of my
former clients made precisely this investment, and for precisely this
reason, and it paid off in hard cash savings.

Exactly. I've worked for several companies that develop embedded products,
and the CPU tends to change every few years but we need the same code to
work on all of them so we can keep the feature set consistent and
maintenance manageable.

My present employer spent nearly a year getting their code to work on the
_second_ CPU they used. When it came time for the third CPU, they just
recompiled, ran it through QA, and called it done -- and profit margins are
up because that third CPU costs less than the first two plus requires no
other chips on the board. Without portable code, we'd be a year late to
market (which did happen with products using the second CPU); with portable
code, we have 50 man-years of programmer time available to write cool new
features that will sell more widgets.

S
 
Keith Thompson

Stephen Sprunk said:
Some projects are explicitly only supported on GCC, e.g. the Linux
kernel, so GNUisms and many C99isms are allowed. This is similar to
the large number of projects that explicitly require POSIX, though
many of those still require C90 since not all compilers on POSIX
systems support C99 yet.

That said, I'm not aware of any project that would allow "new" or
"restrict" as a variable/function name. Code breaking when compiled
as C99 or C++ is just as bad as code breaking when compiled as C90.

I can certainly understand avoiding "restrict" (and "inline"), but
"new"? Much well-written C code *should* break when compiled as C++.
In particular, good C code should not cast the result of malloc(),
something that C++ requires.

Would an OSS project really reject C code that contains this?

int *ptr;
...
ptr = malloc(COUNT * sizeof *ptr);
 
Ben Pfaff

Stephen Sprunk said:
That said, I'm not aware of any project that would allow "new" or "restrict"
as a variable/function name. Code breaking when compiled as C99 or C++ is
just as bad as code breaking when compiled as C90.

A fair amount of GNU code uses "new" as a variable name, with no
reported fallout.
 
Stephen Sprunk

Keith Thompson said:
I can certainly understand avoiding "restrict" (and "inline"), but
"new"? Much well-written C code *should* break when compiled as C++.
In particular, good C code should not cast the result of malloc(),
something that C++ requires.

At a minimum, C headers should be digestible by a C++ compiler (with extern
"C" {...} wrapped around them) so that C++ programs can link with C
libraries. This greatly expands the number of people that can use your code
and costs virtually nothing.

As far as the internals in the .c file, I wouldn't say anyone goes to pains
to make the code into the common subset of C and C++, but one shouldn't go
out of their way to make code uncompilable as C++ (e.g. using new or class
as identifiers). This is one reason you see a lot of casts in "C" code that
aren't needed -- someone put them in so they could compile as C++ too.

S
 
Al Balmer

Few *applications*, it's true - if you are using that in the sense of "big
program", so to speak. Mostly, application writers know their target
platform. Even then, if it's a good application, one day it'll have to be
ported. If it is written with portability planned-in, that task will be
easier.

Often code must be "ported" to a later version of the same system. I'm
currently maintaining a huge system which still has lots of
pre-standard code, and the compilers available for current hardware
won't compile it. To make it even more interesting, we may have to go
from a 32-bit to a 64-bit environment. Of course, the original
programmer "knew" how big an integer was, and that pointers and
integers were pretty much interchangeable ...

One bright side - when making an old program ISO compatible, the
improved compile-time checking often finds subtle bugs that customers
have been occasionally running into for years.
 
