How does C work on non-unix?

R

Ryan M

I've been programming for a while, but most of my experience is on unix.
How do C compilers work on operating systems that weren't written in C?
And that have no libc?

Compiling C on unix seems so easy. Everything in the code either goes
right to machine code, or links to a C library (often libc) or links to
the kernel. Are there libc equivalents on non-unix OSes?
 
M

Mike Wahler

Ryan M said:
I've been programming for a while, but most of my experience is on unix.
How do C compilers work on operating systems that weren't written in C?
And that have no libc?

Compiling C on unix seems so easy. Everything in the code either goes
right to machine code, or links to a C library (often libc) or links to
the kernel. Are there libc equivalents on non-unix OSes?

C (as discussed here) is a platform-independent language, defined
by an international standard.

See:
http://www.angelfire.com/ms3/bchambless0/welcome_to_clc.html
and
http://www.eskimo.com/~scs/C-faq/top.html

-Mike
 
S

Seth Morecraft

Ryan,
Before someone says this is outside the topic of this group, my humble
answer is:
If you are talking about something like an embedded system, you would
usually use a cross-compiler to generate machine code for your specific
target, then run that program on the target's OS.
Kernels and libs do not have to be written in C for you to use C to
write a program- although it might not be *standard* C.
What target processor/OS are you looking to compile a C program for?
There might be a C compiler out there.
Hope that helps.

Seth

Ryan said:
I've been programming for a while, but most of my experience is on
unix. How do C compilers work on operating systems that weren't written
in C? And that have no libc?

Compiling C on unix seems so easy. Everything in the code either goes
right to machine code, or links to a C library (often libc) or links to
the kernel. Are there libc equivalents on non-unix OSes?

~ Let us linux ~
 
K

Kevin Goodsell

Ryan said:
I've been programming for a while, but most of my experience is on
unix. How do C compilers work on operating systems that weren't written
in C? And that have no libc?

Compiling C on unix seems so easy. Everything in the code either goes
right to machine code, or links to a C library (often libc) or links to
the kernel. Are there libc equivalents on non-unix OSes?

Why would it be any different? On all hosted implementations (Unix and
otherwise), the OS (or something similar) provides some basic
functionality. The C library may be part of that functionality, or it
may be written separately by making use of that functionality.

Windows happens to provide a dynamic library (msvcrt.dll or something like
that) that implements the C library. Cygwin does something similar. This
is basically the same thing Unix does, as far as I know. Compilers for
systems that don't have a 'built in' C library usually ship with their
own implementation.

-Kevin
 
K

Kevin Goodsell

Seth said:
Ryan,
Before someone says this is outside the topic of this group, my humble
answer is:

If you've been reading this group long enough to expect that answer, you
should have also caught on that top-posting is rude. Please don't do it
anymore.

-Kevin
 
S

Seth Morecraft

Ok, thanks.

Seth

Kevin said:
If you've been reading this group long enough to expect that answer, you
should have also caught on that top-posting is rude. Please don't do it
anymore.

-Kevin

~ Let us linux ~
 
R

Richard Heathfield

Ryan said:
I've been programming for a while, but most of my experience is on unix.
How do C compilers work on operating systems that weren't written in C?

They accept the source code, preprocess it, translate it, and produce object
files. Just like on Unix!
And that have no libc?

The compiler typically isn't interested in libraries. That's the linker's
job. :)
Compiling C on unix seems so easy.

Then why use anything else? ;-)

Seriously, though, C is remarkably portable, and you should have little
difficulty obtaining an easy-to-use C compiler for any modern desktop
operating system.
 
J

J. J. Farrell

Ryan M said:
I've been programming for a while, but most of my experience is on unix.
How do C compilers work on operating systems that weren't written in C?

In more or less exactly the same way as they work on OSes that were
written in C. Why should the language the OS was written in make any
difference? There are compilers for COBOL, FORTRAN, ADA, and countless
other languages on UNIX that manage perfectly well even though the
OS wasn't written in that language.
And that have no libc?

The C library is a C thing, not a UNIX thing. If a system offers a
hosted C implementation, it must include the functionality defined
by the C library in whatever way is appropriate for the compiler and
linker tools (or equivalent) on the OS. If it only offers a non-hosted
C implementation (such as is used on some embedded systems, or for
building the UNIX kernel for example) then the main difference is
that it doesn't offer the full C library functionality. In that case
you can't use the C library routines, simple as that.
Compiling C on unix seems so easy. Everything in the code either goes
right to machine code, or links to a C library (often libc) or links to
the kernel.

Just the same as on most other OSes that provide hosted C.
Are there libc equivalents on non-unix OSes?

Yes - it's a C thing, nothing to do with UNIX.

It sounds like you have some misconception about the relationship
between C and the OS. If you explain why you think there might be a
problem using C on OSes that aren't written in C, someone should be
able to help clear it up for you.
 
R

Richard Heathfield

J. J. Farrell said:
It sounds like you have some misconception about the relationship
between C and the OS.

This is understandable, given the truism that Unix is written in C, and C is
written in Unix. :)
 
K

Kevin Goodsell

Seth said:
Ok, thanks.

You've been asked not to top-post, and informed that it's rude, yet you
continue to do it. Is this your way of requesting to be killfiled? Or
are you just ignorant?

Please stop top-posting. If you don't know what that means, find out
before posting again.

-Kevin
 
M

Malcolm

J. J. Farrell said:
It sounds like you have some misconception about the relationship
between C and the OS. If you explain why you think there might be a
problem using C on OSes that aren't written in C, someone should be
able to help clear it up for you.
Things don't work as nicely on non-Unix boxes. For instance, C has a
printf() function, which means that arguments have to be passed from left to
right on the stack. If the OS is written in a language using a different
convention, either you have to have layers of wrapping, or you need a
keyword like "pascal" to make it all work together.

Similarly, Unix treats a printer like a file, so output can be piped
directly to it. Other OSes work in different ways, and stdin and stdout
can't always be piped simply.

Sometimes it isn't even obvious how to provide a stdout / stderr. For
instance, graphics and stderr don't mix on some systems.
 
A

Arthur J. O'Dwyer

Things don't work as nicely on non-Unix boxes.

I guess you must use Unix a lot -- I've been a MS-DOS/Windows
person for most of my life, and I can't say I've noticed anything
about C or its myriad implementations that could reasonably be
described as "not nice." :)
For instance, C has a
printf() function, which means that arguments have to be passed from
left to right on the stack.

Assuming one exists. ;) Either that, or the implementation has to
make some sort of [ad hoc] protocol to deal with variadic functions:
for example, it could pass doubles on the FPU stack, structs in a
buffer area in RAM,... you get the idea. Anyway, IMLE the vast
majority of compilers targeted at OSes targeted at desktop users use
the stack for passing arguments, so there's nothing special about
*nix in that regard.
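
And strictly conforming C code never even sees the mechanism; <stdarg.h>
hides it completely. A little sketch of my own (no particular ABI or OS
assumed):

#include <stdarg.h>
#include <stdio.h>

/* Sum 'count' ints passed after the count. The va_* macros hide
   whether the implementation passes arguments on a stack, in
   registers, or some other way entirely. */
static int sum_ints(int count, ...)
{
    va_list ap;
    int total = 0;
    va_start(ap, count);
    while (count-- > 0)
        total += va_arg(ap, int);
    va_end(ap);
    return total;
}

int main(void)
{
    printf("%d\n", sum_ints(3, 10, 20, 12));    /* prints 42 */
    return 0;
}

The same source compiles unchanged whether arguments go right-to-left on
a stack, left-to-right, or in registers.
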
If the OS is written in a language using a different
convention, either you have to have layers of wrapping, or you need a
keyword like "pascal" to make it all work together.

Nonsense. The OS has nothing to do with the C library, in most
cases. Sure, Unix has some clever bits that let different programs
share the same object libraries [IANA Unix expert], but that's
not relevant to C programming any more than is the fact that Unix
also allows multiple users at a time.
DOS compilers like Borland, and many other compilers [DJGPP for
example], simply include their own "libc" implementations in the
compiler package itself. You write a program that wants to use
'printf'; the code for 'printf' gets linked into your executable.
No funky "pascal" keywords involved.
In fact, the only compiler family I know of that uses "pascal"
is Borland, and the purpose of their "pascal" keyword is to allow
linking of code between different languages -- to link C code
with Pascal code, e.g. That's completely different and OT here,
where we discuss only pure C code, not C-plus-Pascal-plus-Fortran
or whatever.
Similarly, Unix treats a printer like a file, so output can be piped
directly to it. Other OSes work in different ways, and stdin and stdout
can't always be piped simply.

The C language doesn't understand the concept of "pipe" anyway;
that's a shell-level concept (which in Unix is built into the
kernel, and AFAIK in DOS is simulated directly at the level of
the shell), not a language-level concept.
Sometimes it isn't even obvious how to provide a stdout / stderr. For
instance, graphics and stderr don't mix on some systems.
          ^^^^^^^^ Neither here [ISO C] nor there [Unix]. :)

True, but then we get into hosted versus freestanding implementations,
and we'll be here all night with the embedded-systems folks telling
us again about how 'stdout' on ToastMaster 2000 is connected to the
brownness setting, blah blah blah. ;-)
Freestanding implementations aren't required to provide a lot of
the C library's functionality; the Standard covers all that, but I
don't know it off the top of my head.
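
If memory serves (worth checking the Standard), a freestanding
implementation only has to supply <float.h>, <limits.h>, <stdarg.h>,
and <stddef.h>, with C99 adding <iso646.h>, <stdbool.h>, and <stdint.h>.
Here's a rough sketch of a translation unit that sticks to those, so it
should translate even with no C library behind it:

/* Uses only headers a freestanding implementation is required to
   provide; no stdio, no malloc, no OS services assumed. */
#include <stddef.h>
#include <limits.h>
#include <stdint.h>     /* required for freestanding only as of C99 */

struct packet {
    uint16_t id;
    unsigned char payload[32];
};

/* Where the payload sits inside the struct, computed portably. */
size_t payload_offset(void)
{
    return offsetof(struct packet, payload);
}

/* How wide the widest integer type is, in bits. */
int widest_int_bits(void)
{
    return (int)(sizeof(intmax_t) * CHAR_BIT);
}

/* In a real freestanding environment the program's entry point is
   implementation-defined; main() is here only so the sketch also
   builds and runs under an ordinary hosted compiler. */
int main(void)
{
    return (payload_offset() != 0 && widest_int_bits() != 0) ? 0 : 1;
}
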
Most hosted implementations *do* provide sensible semantics for
things like 'stdin', 'stdout', 'arg[cv]', 'fopen', and so on.

-Arthur
 
M

Malcolm

Arthur J. O'Dwyer said:
I guess you must use Unix a lot -- I've been a MS-DOS/Windows
person for most of my life, and I can't say I've noticed anything
about C or its myriad implementations that could reasonably be
described as "not nice." :)
I used to use UNIX a lot, but now I use a Windows PC mostly.
Several things are "not nice" about C on a PC.
Anyway, IMLE the vast majority of compilers targeted at OSes
targeted at desktop users use the stack for passing arguments, so
there's nothing special about *nix in that regard.

The OS has nothing to do with the C library, in most
cases.
But the C library functions, like printf(), need to call the OS at some
level. Try getting printf("Hello world\n") to work under windows 3.1.
No funky "pascal" keywords involved.
Most of the time an ANSI C stdin/stdout program will compile and run. The
pascal keywords and other nonsense comes in when you need to access more
sophisticated services. On UNIX this is all neat and tidy.
In fact, the only compiler family I know of that uses "pascal"
is Borland, and the purpose of their "pascal" keyword is to allow
linking of code between different languages -- to link C code
with Pascal code, e.g. That's completely different and OT here,
where we discuss only pure C code, not C-plus-Pascal-plus-Fortran
or whatever.
Yes sure, we don't discuss the minutiae of the "pascal" keyword on many
platforms. However we can note its existence, and the lack of a need for it
on UNIX.
The C language doesn't understand the concept of "pipe" anyway;
that's a shell-level concept (which in Unix is built into the
kernel, and AFAIK in DOS is simulated directly at the level of
the shell), not a language-level concept.
No but it treats IO as streams, which was influenced by UNIX, because this
model makes sense on a UNIX box.
Most hosted implementations *do* provide sensible semantics for
things like 'stdin', 'stdout', 'arg[cv]', 'fopen', and so on.
They do work, but they don't work as well as they do on UNIX platforms. A
UNIX shell will expand filenames, for instance; a DOS shell won't, preventing
you from implementing del *.o in ANSI C.
 
D

Dik T. Winter

> Things don't work as nicely on non-Unix boxes. For instance, C has a
> printf() function, which means that arguments have to be passed from left to
> right on the stack.

Where do you find that requirement? It is not in the standard. Moreover,
I have used various Unix systems doing it in quite a few different ways.
> Similarly, Unix treats a printer like a file, so output can be piped
> directly to it. Other OSes work in different ways, and stdin and stdout
> can't always be piped simply.

But that is *not* a C issue. C does not know anything about printers
(or files), it only knows about streams. You have exactly the same
problem with Pascal and Fortran, to name a few.
> Sometimes it isn't even obvious how to provide a stdout / stderr. For
> instance, graphics and stderr don't mix on some systems.

But Unix has exactly the same problem. If you are using curses for
graphics, you will see problems if you write something to stderr.
So I do not understand where you see the distinction.
 
P

pete

Dik said:
But that is *not* a C issue. C does not know anything about printers
(or files), it only knows about streams.

Actually, C knows something of files,
and if your standard output stream goes to a printer,
then in C, that printer is a file.

N869
7.19.3 Files
[#1] A stream is associated with an external file (which may
be a physical device) by opening a file, which may involve
creating a new file. Creating an existing file causes its
former contents to be discarded, if necessary. If a file
can support positioning requests (such as a disk file, as
opposed to a terminal), then ...
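
So, to illustrate (the device names below are platform conventions I'm
assuming, not anything ISO C defines: "PRN" on DOS/Windows, "/dev/lp0"
on many Unix systems), once the fopen succeeds the printer really is
just another stream:

#include <stdio.h>

int main(void)
{
    /* Device names are platform conventions, not ISO C. */
    FILE *printer = fopen("PRN", "w");          /* DOS/Windows style */
    if (printer == NULL)
        printer = fopen("/dev/lp0", "w");       /* common Unix style */
    if (printer == NULL) {
        fprintf(stderr, "no printer device found\n");
        return 1;
    }
    /* From here on it is just another FILE, like any disk file. */
    fprintf(printer, "Hello from standard C stream I/O\n");
    fclose(printer);
    return 0;
}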
 
A

Arthur J. O'Dwyer

Arthur J. O'Dwyer said:
[Malcolm wrote]
Things don't work as nicely on non-Unix boxes.

I guess you must use Unix a lot -- I've been a MS-DOS/Windows
person for most of my life, and I can't say I've noticed anything
about C or its myriad implementations that could reasonably be
described as "not nice." :)

I used to use UNIX a lot, but now I use a Windows PC mostly.
Several things are "not nice" about C on a PC.

Your mileage obviously varies from mine. I've been using C on
PC clones since Turbo C, and thence through DJGPP, and I've never
seen anything "not nice." If you want sympathy or advice, you'll
have to be much more specific in your posts, I guess.

But the C library functions, like printf(), need to call the OS at some
level.

On PC clones, that level is usually the level of character I/O.
I don't have enough experience to say whether the same is true of
Unix, but I doubt it -- the kernel-level abstractions are highly
different.
Try getting printf("Hello world\n") to work under windows 3.1.

Done. What's so complicated about 'printf'? It can be (and
IME usually is) implemented as a loop over 'putchar', which is a
special case of 'putc', which is <OT>a system call on both DOS
and Unix</OT>.
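
To sketch the idea (a toy of my own that handles only %c, %s, %d and
%%, nothing like a real library implementation):

#include <stdarg.h>
#include <stdio.h>

/* Toy printf: walks the format string and hands every character,
   one way or another, to putchar(). */
static void tiny_printf(const char *fmt, ...)
{
    va_list ap;
    va_start(ap, fmt);
    for (; *fmt != '\0'; fmt++) {
        if (*fmt != '%') {
            putchar(*fmt);
            continue;
        }
        fmt++;
        if (*fmt == '\0')       /* stray '%' at the very end */
            break;
        switch (*fmt) {
        case 'c':
            putchar(va_arg(ap, int));
            break;
        case 's': {
            const char *s = va_arg(ap, const char *);
            while (*s != '\0')
                putchar(*s++);
            break;
        }
        case 'd': {
            int n = va_arg(ap, int);
            unsigned int u;
            char digits[3 * sizeof(int) + 1];
            int i = 0;
            if (n < 0) {
                putchar('-');
                u = (unsigned int)(-(n + 1)) + 1u;  /* safe even for INT_MIN */
            } else {
                u = (unsigned int)n;
            }
            do {
                digits[i++] = (char)('0' + u % 10);
                u /= 10;
            } while (u != 0);
            while (i > 0)
                putchar(digits[--i]);
            break;
        }
        default:    /* '%%' and anything unrecognized: echo the character */
            putchar(*fmt);
            break;
        }
    }
    va_end(ap);
}

int main(void)
{
    tiny_printf("%s, %c%d%% done\n", "Hello world", '-', 42);
    return 0;
}

All it asks of the system underneath is a working putchar().
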
Most of the time an ANSI C stdin/stdout program will compile and run. The
pascal keywords and other nonsense comes in when you need to access more
sophisticated services.

...Such as?

[re: DOS doesn't have pipes as a kernel-level concept]
No but it treats IO as streams, which was influenced by UNIX, because this
model makes sense on a UNIX box.

True -- and historically C and Unix were closely interdeveloped (if
you know what I mean). So I can't debate that C makes sense on Unix;
however, don't most post-Cobol languages use stream I/O, even those
primarily developed on non-Unix systems?
Wait. Don't answer that. I just remembered that whether C makes
sense or not, that doesn't affect its portability to non-Unix systems.
So it's a moot point.

Most hosted implementations *do* provide sensible semantics for
things like 'stdin', 'stdout', 'arg[cv]', 'fopen', and so on.

They do work, but they don't work as well as they do on UNIX platforms.

Speak for yourself. :) IME most Unix-targeted C programs work
just fine on DOS, modulo some nits like the direction of slashes in
pathnames.
A UNIX shell will expand filenames, for instance,

Here's one source of your confusion. You think there's something
privileged about the concept of "shell program" on Unix. Go to
comp.unix.programmer [i.e., somewhere more appropriate] and ask
the people there, "I don't understand shells, but a friend advised
me to write a simple shell program using fork and exec. Pointers,
please?" I bet they'll be happy to oblige you.
a DOS shell won't,

"a DOS shell" != "all DOS shells." COMMAND.COM and CMD.EXE won't,
but if I recall correctly there does exist a bash port for MS-DOS
(either part of Cygwin or part of DJGPP's gnutils port). Anyway,
this isn't an issue with the QoI of C compilers, it's an issue with
the QoI of command parsers.
prevent
you from implementing del *.o in ANSI C.

One simple way is to use a C implementation that expands wildcards
for you; DJGPP will, for instance. Another way is to write a little
non-portable C function to expand wildcards by looking at the
contents of the current directory [in a non-portable manner]. A
third, and sillier, way is to use the standard 'system' function:

/* Del utility */
#include <stdlib.h>
#include <string.h>
int main(int argc, char **argv)
{
    char *p;
    if (argc < 2)
        return EXIT_FAILURE;
    p = malloc(4 + strlen(argv[1]));
    if (p == NULL)
        return EXIT_FAILURE;
    strcpy(p, "rm ");
    strcat(p, argv[1]);
    system(p);
    free(p);
    return 0;
}
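
And for the curious, the non-portable directory-scanning approach
mentioned above might look roughly like this on a POSIX-ish system
(opendir, readdir and fnmatch are POSIX, not ISO C, so strictly an
aside here):

/* Non-portable sketch: remove files in the current directory that
   match a shell-style pattern, e.g. "*.o". */
#include <stdio.h>
#include <dirent.h>
#include <fnmatch.h>

int main(int argc, char **argv)
{
    DIR *dir;
    struct dirent *entry;

    if (argc != 2) {
        fprintf(stderr, "usage: %s pattern\n", argv[0]);
        return 1;
    }
    dir = opendir(".");
    if (dir == NULL) {
        perror("opendir");
        return 1;
    }
    while ((entry = readdir(dir)) != NULL) {
        if (fnmatch(argv[1], entry->d_name, 0) == 0) {
            if (remove(entry->d_name) != 0)
                perror(entry->d_name);
        }
    }
    closedir(dir);
    return 0;
}

Quote the pattern (del '*.o') when running it under a Unix shell so the
shell doesn't expand it first; COMMAND.COM will leave it alone anyway.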

Anyway, go write that shell program [assuming you still have access
to a *nix system for debugging purposes], and maybe lurk in some of
the system-specific groups for a while, and I bet you'll learn a
lot. Then, if you still feel confused, we can continue this
discussion.

-Arthur
 
J

John Bode

Ryan M said:
I've been programming for a while, but most of my experience is on unix.
How do C compilers work on operating systems that weren't written in C?
And that have no libc?

Pretty much the same way that they work on Unix; they translate source
code to native machine code.

I've written C on VMS, multiple flavors of Unix (including linux),
multiple flavors of Windows, MPE, MacOS (7/8/9), and a couple of
others you'll never hear of if you're lucky. All compile code pretty
much the same way. All provide an equivalent of libc.
Compiling C on unix seems so easy. Everything in the code either goes
right to machine code, or links to a C library (often libc) or links to
the kernel. Are there libc equivalents on non-unix OSes?

Yes.

System calls on non-Unix systems may or may not look much like Unix
system calls. VMS had a funky naming convention, and some data
(notably strings) had to be converted to a "descriptor" format before
passing it to a VMS library call. MacOS (pre-X) system calls required
some data massaging since the MacOS Toolbox was originally written in
Pascal, so sometimes a "pascal" keyword was needed to indicate a
different calling convention. And then with older versions of Windows
you had to mess with the "near" and "far" keywords when talking about
pointers.

So I'd say if you had to do a lot of interaction with the underlying
OS, working with Unix is often quite a bit more straightforward, but
that doesn't imply that other systems are any less well-supported.
 
C

Chris Torek

System calls on non-Unix systems may or may not look much like Unix
system calls. VMS had a funky naming convention, and some data
(notably strings) had to be converted to a "descriptor" format before
passing it to a VMS library call. MacOS (pre-X) system calls required
some data massaging since the MacOS Toolbox was originally written in
Pascal, so sometimes a "pascal" keyword was needed to indicate a
different calling convention.

I would say not "needed" but rather "used for convenience".

Consider, for instance, the old Borland-related problem of calling
an actual Pascal procedure named f() that, in Pascal, had been
written as:

procedure f(x, y : integer; var z : double) ...

There is no reason you could not just do this in "Pure C":

extern void f_shim(int, int, double *);
...
f_shim(x, y, zp);

along with an assembly-coded function "f_shim" that reverses the
arguments and makes a call that assumes that f() itself will do a
"RET 6" to add 6 to the PC's %sp (not %esp) register. It was simply
more *convenient* (for users) to skip the shim by teaching the C
compiler to do handstands. (The fact that it also runs faster is
just part of "convenience", also known as Quality of Implementation,
as far as Standard C is concerned.)
And then with older versions of Windows
you had to mess with the "near" and "far" keywords when talking about
pointers.

Again, this is more of a "convenience" issue -- the "right" (but
very slow) way to do it would have been to run all C code in "huge
model" ALL THE TIME, and use assembly shims if/when needed to
interact with "small model" or "mixed model" non-C code. Had that
been done, all the C code would immediately have moved to 32-bit
systems with simple recompilation, where it would run as fast as
fast can be. The folks who wrote those C compilers preferred short
cuts, however, which -- as any LotR fan knows -- make for long
delays. :)
So I'd say if you had to do a lot of interaction with the underlying
OS, working with Unix is often quite a bit more straightforward, but
that doesn't imply that other systems are any less well-supported.

Any time you use the language(s) that the majority of the pieces
you intend to interact with also use, you remove translation layer
problems. VMS had "translation layer" problems (solveable without
any C extensions, but they added extensions anyway for convenience)
because so much of it was written in BLISS and assembly code and
because they wanted to support Fortran directly (so even constants
were often passed via pointers -- of course, the Fortran they
supported was DEC's heavily-modified VMS Fortran, rather than ANSI
F77). Borland had "translation layer" problems because they wanted
to support their own Pascal code. Windows had (and has) "translation
layer" problems mainly because it is just a big pile of hacks.
 
M

Malcolm

Arthur J. O'Dwyer said:
Done. What's so complicated about 'printf'? It can be (and
IME usually is) implemented as a loop over 'putchar', which is a
special case of 'putc', which is <OT>a system call on both DOS
and Unix</OT>.
Maybe you'll explain how you did it. I never got stdout to work under
Windows 3.1.
The pascal keywords and other nonsense comes in when you need
to access more sophisticated services.

...Such as?

[re: DOS doesn't have pipes as a kernel-level concept]
Windows, buttons and other GUI items, printers, audio, clipboard or other
inter-process communication. All stuff that can't be coded in pure C, but
you often need.
True -- and historically C and Unix were closely interdeveloped (if
you know what I mean). So I can't debate that C makes sense on
Unix; however, don't most post-Cobol languages use stream I/O, even
those primarily developed on non-Unix systems?
I'm a games programmer. Very little data comes in streams - even files from
a CD will generally be DMAed directly into memory.
Wait. Don't answer that. I just remembered that whether C makes
sense or not, that doesn't affect its portability to non-Unix systems.
So it's a moot point.
Really it's an issue with the standard library rather than the C language.
If it's designed for Unix then you can expect things to be less good on
non-Unix systems. For instance, take this program:

#include <stdio.h>

int main(int argc, char **argv)
{
    FILE *fp;
    fp = fopen(argv[1], "w");
    if (fp)
    {
        fprintf(fp, "Hello World\n");
        fclose(fp);
    }
    else
        printf("File not opened\n");
    return 0;
}

Let's say we want the program to be a NOP - i.e. run but produce absolutely
nothing. How would you achieve this in Unix? How in DOS?
Here's one source of your confusion. You think there's something
privileged about the concept of "shell program" on Unix.
There's something privileged about command.com on DOS. On UNIX you have a
choice of shells, though you might be forced to use one by the system
administrator.
 
A

Alexander Bartolich

followup to Malcolm:
Maybe you'll explain how you did it. I never got stdout to work
under Windows 3.1.

Both 16-bit Windows and the original MacOS have no concept of
pseudo-terminal or a similar text-only environment. This feature
was emulated by the compiler vendor's run time library though.
First access to stdin or stdout opened a window looking like
an xterm. One remaining deficiency was the lack of a shell that
could handle redirections or pipes.

Anyway, these omissions have been fixed.
The way of the Unix prevailed.
Windows, buttons and other GUI items, printers, audio, clipboard
or other inter-process communication. All stuff that can't be
coded in pure C, but you often need.

Well, it's no different under X11.
I'm a games programmer. Very little data comes in streams - even
files from a CD will generally be DMAed directly into memory.

Nonsense. Memory mapped files work on IDE, SCSI, floppy or network
shares, regardless of what particular access mode the hardware/driver/OS
prefers.
[...] Lets say we want the program to be a NOP - ie run but produce
absolutely nothing. How would you achieve this in Unix? How in DOS?

By providing either /dev/null or nul: as the first argument.
The paranoid might redirect output to this device as well.
There's something privileged about command.com on DOS.

Not really. There is even a famous replacement shell, 4dos.
On UNIX you have a choice of shells, though you might be forced to
use one by the system administrator.

exec other_shell

should take care of that.
 
