Arthur J. O'Dwyer said:
[Malcolm wrote]
Things don't work as nicely on non-Unix boxes.
I guess you must use Unix a lot -- I've been a MS-DOS/Windows
person for most of my life, and I can't say I've noticed anything
about C or its myriad implementations that could reasonably be
described as "not nice."
I used to use UNIX a lot, but now I use a Windows PC mostly.
Several things are "not nice" about C on a PC.
Your mileage obviously varies from mine. I've been using C on
PC clones since Turbo C, and thence through DJGPP, and I've never
seen anything "not nice." If you want sympathy or advice, you'll
have to be much more specific in your posts, I guess.
But the C library functions, like printf(), need to call the OS at some
level.
On PC clones, that level is usually the level of character I/O.
I don't have enough experience to say whether the same is true of
Unix, but I doubt it -- the kernel-level abstractions are highly
different.
Try getting printf("Hello world\n") to work under Windows 3.1.
Done. What's so complicated about 'printf'? It can be (and
IME usually is) implemented as a loop over 'putchar', which is a
special case of 'putc', which is <OT>a system call on both DOS
and Unix</OT>.
[Malcolm wrote]
Most of the time an ANSI C stdin/stdout program will compile and run. The
'pascal' keywords and other nonsense come in when you need to access more
sophisticated services.
...Such as?
[re: DOS doesn't have pipes as a kernel-level concept]
No, but it treats I/O as streams, which was influenced by UNIX, because this
model makes sense on a UNIX box.
True -- and historically C and Unix were closely interdeveloped (if
you know what I mean). So I can't debate that C makes sense on Unix;
however, don't most post-Cobol languages use stream I/O, even those
primarily developed on non-Unix systems?
Wait. Don't answer that. I just remembered that whether C makes
sense or not, that doesn't affect its portability to non-Unix systems.
So it's a moot point.
Most hosted implementations *do* provide sensible semantics for
things like 'stdin', 'stdout', 'arg[cv]', 'fopen', and so on.
They do work, but they don't work as well as they do on UNIX platforms.
Speak for yourself.

IME most Unix-targeted C programs work
just fine on DOS, modulo some nits like the direction of slashes in
pathnames.
A UNIX shell will expand filenames, for instance,
Here's one source of your confusion. You think there's something
privileged about the concept of "shell program" on Unix. Go to
comp.unix.programmer [i.e., somewhere more appropriate] and ask
the people there, "I don't understand shells, but a friend advised
me to write a simple shell program using fork and exec. Pointers,
please?" I bet they'll be happy to oblige you.
"a DOS shell" != "all DOS shells." COMMAND.COM and CMD.EXE won't,
but if I recall correctly there does exist a bash port for MS-DOS
(either part of Cygwin or part of DJGPP's gnutils port). Anyway,
this isn't an issue with the QoI of C compilers, it's an issue with
the QoI of command parsers.
prevent you from implementing del *.o in ANSI C.
One simple way is to use a C implementation that expands wildcards
for you; DJGPP will, for instance. Another way is to write a little
non-portable C function to expand wildcards by looking at the
contents of the current directory [in a non-portable manner]. A
third, and sillier, way is to use the standard 'system' function:
/* Del utility */
#include <stdlib.h>
#include <string.h>
int main(int argc, char **argv)
{
    char *p;
    if (argc != 2)
        return EXIT_FAILURE;
    p = malloc(4 + strlen(argv[1]));  /* "rm " + pattern + '\0' */
    if (p == NULL)
        return EXIT_FAILURE;
    strcpy(p, "rm ");
    strcat(p, argv[1]);
    system(p);
    free(p);
    return 0;
}
Anyway, go write that shell program [assuming you still have access
to a *nix system for debugging purposes], and maybe lurk in some of
the system-specific groups for a while, and I bet you'll learn a
lot. Then, if you still feel confused, we can continue this
discussion.
-Arthur