Which lean C compiler for 32-bit OS development


James Harris

Consider this: why would the designers of an OS ABI choose an
inefficient calling convention?

There is far more to this than an ABI!
They are also frequently up for grabs in embedded systems, where most C
programmers operate these days.

OK.

Why? Is it that the embedded development market in C is so large that
it dwarfs anything done elsewhere, or is it that C has been replaced
with something else for development under Unix and Windows? The latter
would be sad!
You could say the same for any programming language.  C does allow you
to specify exact-width types, but they are provided to enable C to
interface with low-level entities.  Most C programs won't use them.

Sure. Those types are ideal for what I want to do.
How an instance of a type is manipulated is very much dependent on
optimisation.  A declared variable may not even exist in optimised code.
The machine instructions emitted by an optimising compiler often bear
little resemblance to the code written by the programmer. Functions get
inlined, instructions get reordered, and so on.

I understand where you were coming from now - and others have said
similar. But this is missing the point. I don't care about reorganised
code or elided operations. In terms of operations, the value of C is in
being able to keep to low-level concepts in the source - and be
virtually guaranteed that the generated code will be at least as
simple/fundamental.

James
 

James Harris

On 11/12/11 00:10, James Harris wrote:




Gcc is too big for what you want. You need a compiler that you can
possibly understand or have someone that can help you out with a
question, etc.

<valid points about the difficulty of modifying gcc source snipped>

Gcc is not perfect but nothing is. Remember the list of requirements:

---
* to compile standard, portable, C to object code
* simple object file output (not linked)
* Intel-format 32-bit asm output would be good
* selectable subroutine linkage options (calling conventions)
* simple demands on the OS so relatively easily ported to another OS
* works under various environments - Dos, Unix, Windows (without
Cygwin etc)
* reasonably effective at optimisation

and *above all*, a compiler that gets in the way as little as
possible. It should not compile for a certain target OS but merely
produce simple, lean, unencumbered, OS-agnostic object code, i.e.
object code that does just what the source code says and nothing
more.
---

Well, gcc fits most of these. It only really falls down badly on the
one you have highlighted, being easily ported - but that is just one
failing. As I said, nothing is perfect.

How to deal with that failing? Easy. Stick to standard, portable, C.
(That was the first requirement.) Then, if and when I get to that
point, port the smallest and simplest compiler I can find. As long as
the compiler compiles standard C it should get the system working. I
can worry about efficiency later.

James
 

Seebs

However, given enough time, enthusiasm and a refusal to do something
just because "that's what is done", it is surprising what one can come
up with.

Yes. But not always *good* surprising. :)
That's OK. The main thing is to keep the source to just simple
operations, the data types to bytes and multiples of machine words,
and the structures to things which will be predictably translated into
assembler. Then there shouldn't be any unwanted surprises.

I've used a machine with 32-bit words and 80-bit floats. :)

I think the fact is, there's always gonna be surprises with a good compiler,
especially with decent optimization.

-s
 

Seebs

I'm a little confused by the implications of that last requirement. It
seems to me that there's generally no reason for a compiler to produce
object code that is anything other than OS-agnostic, except when making
use of OS facilities, in which case OS-agnostic code is impossible.
Could you please give a detailed real-world example of a compiler that
generates object code which is unnecessarily OS-dependent?

ABI is often specified by OS, and two OSs might specify incompatible ABIs.

I don't think that's unnecessary, and it's not *really* OS-dependent, but
the net result is that you can't exchange binary code between conflicting
models.

-s
 

James Harris

Hmm.  Well, it's an interesting experiment, anyway.  I'm not sure I'd
expect it to yield significant advantages.

One has to be realistic. Many innovations are devised by teams at
universities or of professionals. They have a pool of brainpower to
call on and to interact with.

However, given enough time, enthusiasm and a refusal to do something
just because "that's what is done", it is surprising what one can come
up with.
I have rarely *wanted* to.  Interactions with stuff is what makes things
interesting.

Oh, there are *plenty* of interactions still! It's just that one
starts with a clean slate.

I understand your point, though.
For an obvious example that someone recently brought up:  Many ARM processors
lack a divider in hardware.

So there's often, for even the most primitive operations, substantial code
generation going on behind the scenes.

That's OK. The main thing is to keep the source to just simple
operations, the data types to bytes and multiples of machine words,
and the structures to things which will be predictably translated into
assembler. Then there shouldn't be any unwanted surprises.

James
 

James Harris

On 12/10/2011 02:29 PM, James Harris wrote:
...


Not all machines have opcodes that match every C operation, and not all
C operations have corresponding single opcodes on every machine. Many
operations have long been emulated on some machines rather than being
directly supported: 32-bit integers on 8-bit machines, 8-bit
types on 32-bit machines, or floating-point operations on machines with
no FPU.

Also, optimization has always rendered the relationship less than
perfectly "clear, direct, and simple". Compare the assembly code
generated by the highest optimization level for a typical compiler with
your original C code; if the relationship is at all "clear, direct, and
simple", it's probably a compiler whose optimization capabilities are
substantially inferior to the current state of the art.

That's OK. I was saying C was the most suitable language I know
because the source can use the simple concepts necessary to carry out
what I want to do. Other languages generally have programmer-friendly
features that, while convenient, could be translated into something
very inefficient or practically impossible to interface with. A C
optimiser rearranging things doesn't change the essential simplicity
of the correspondence between the C source and the CPU the machine
code will run on.

James
 

Seebs

There's very little mandated by the CPU that I can think of.

Not mandated by the CPU, but defined and documented by the CPU vendor.

Look, you can make any calling convention you want. If you don't use
the standard ABI defined by the MIPS architecture, though, no one will
interoperate with your code, or care about it.
which registers are caller-save and which are callee-save
and how return addresses are stored when a sub-sub-routine is called
etc. are generally just conventions, aren't they?

Sure.

But if you ignore those conventions, you have just eliminated any reason
for anyone to try to interoperate with your code, support your platform,
or otherwise interact with you at all.

I have two choices: I can spend half an hour adding flawless support for
someone who followed published standards, or I can spend months adding
buggy support for someone who ignored them. Which of these choices is going
to give me a better return on my time?
Besides, since we are focussing on the ABI, I think the mechanism
would probably be relevant to the CPU type, not to the OS as a whole.
In other words, one OS but each platform could have its own calling
conventions - the ones that were most appropriate for that hardware.

Why, yes. And that's the thing. If you target 64-bit mips, but don't use
the standard 64-bit ABI, you have chosen irrelevance preemptively. You're
basically guaranteed not to get any support or buy-in, and without that,
it's a purely academic research project.
I think I may have misled people as that's not what I had in mind. As
explained in reply to others I can predict the basics of what C will
do with my code on a given architecture. If it makes things better
than I thought of that's fine. The key point is that it won't
(normally) do nasty things that I don't expect. I can't say that of
any other HLL.

I guess it depends on what you expect, doesn't it?

Consider:

int x, y = 3;
x = y;

Assume that int is 32 bits. How many instructions will be used to perform
this operation, and how much data will be loaded to or from registers or
memory?

Answers I've seen include:

* No instructions, because the optimizer discarded one or both of the
variables.
* No instructions, because the optimizer arranged for them to have the
right values in advance.
* Both are in registers.
* Neither is in a register.
* They're both in memory. To perform the operation, the compiler generates
code to load a 16-byte vector starting with y into one register, load
another 16-byte vector starting with x into another register, load an
immediate value into another register, use the immediate value to mask
the first four bytes of y into the first four bytes of x, then save that
16-byte chunk of memory back to main memory.
You may be right. The trouble is I don't know of anything better. Do
you?

I don't think there exists a good tool for developing a competing ABI in
the absence of a really, really, compelling argument that a better ABI is
needed.

-s
 

James Kuyper

On 12/12/2011 02:46 PM, James Harris wrote:
....
possible. It should not compile for a certain target OS but merely
produce simple, lean, unencumbered, OS-agnostic object code,

I'm a little confused by the implications of that last requirement. It
seems to me that there's generally no reason for a compiler to produce
object code that is anything other than OS-agnostic, except when making
use of OS facilities, in which case OS-agnostic code is impossible.
Could you please give a detailed real-world example of a compiler that
generates object code which is unnecessarily OS-dependent?

By "detailed", I mean the following: please provide a simple piece of
portable C source code. Identify the specific compiler you're talking
about, and specify the command line options used to invoke it. Provide a
listing of the generated object code, and identify the specific ways in
which the generated object code is unnecessarily OS-dependent. To make
it clearer what you're looking for, please also provide corresponding
object code that would meet your specification.
 

James Harris

I would point out that ABIs are often defined by the architecture in some
way.  There is a "MIPS ABI" (well, there's several) which exists independent
of your choice of operating system.

There's very little mandated by the CPU that I can think of. What did
you have in mind? A given CPU may have an instruction pointer
register, a stack pointer register and/or dedicated registers for use
in call and return instructions but the definitions of how parameters
are passed, which registers are caller-save and which are callee-save
and how return addresses are stored when a sub-sub-routine is called
etc. are generally just conventions, aren't they?

Besides, since we are focussing on the ABI, I think the mechanism
would probably be relevant to the CPU type, not to the OS as a whole.
In other words, one OS but each platform could have its own calling
conventions - the ones that were most appropriate for that hardware.
Well, strictly speaking, it never was.  Consider:

        long a, b, c;
        a = get_a_number();
        b = get_a_number();
        c = a * b;

I'm pretty sure this has been implemented in software on at least some C
implementations since the 70s, because there have been 32-bit longs on 16-bit
systems.

Similarly, 64-bit values on 32-bit systems, floating point math on machines
with no FPU...  The list goes on, and on, and on.

C hasn't really been much for 1:1 correspondences in a very long time.  I
think the most obvious cutoff, if you wanted one, would be struct assignment.

I think I may have misled people as that's not what I had in mind. As
explained in reply to others I can predict the basics of what C will
do with my code on a given architecture. If it makes things better
than I thought of that's fine. The key point is that it won't
(normally) do nasty things that I don't expect. I can't say that of
any other HLL.
If you don't use a standard ABI for each CPU, you will have a much harder
time.
True.


I think C is the wrong tool for this, then, as a great deal of the benefit of
C is a mature ecosystem for using standard ABIs.

You may be right. The trouble is I don't know of anything better. Do
you?

James
 

James Harris

Not mandated by the CPU, but defined and documented by the CPU vendor.
Exactly.

Look, you can make any calling convention you want.  If you don't use
the standard ABI defined by the MIPS architecture, though, no one will
interoperate with your code, or care about it.

Now I understand what you are thinking of. That's not my intention.
Remember I spoke of not wanting to be constrained by existing systems?
The calling convention is a case in point. If I followed existing
conventions across the board it would influence the design *far* too
much. I've already looked at that.

Instead the approach is to design the convention for the OS internals
with reference only to what's best for the internals themselves and
the desired execution environment and ignore (or, better still, not
even find out) what everyone else does. When that is done, tested,
reviewed and stabilised is the right time to look at making shims (my
term, AFAIK) to allow linkage to/from other code as appropriate.

This principle applies in many areas of the OS design. Get the
internals right. Only once done look at interfacing to traditional
systems where necessary.

....
I guess it depends on what you expect, doesn't it?

Consider:

        int x, y = 3;
        x = y;

Assume that int is 32 bits.  How many instructions will be used to perform
this operation, and how much data will be loaded to or from registers or
memory?

Answers I've seen include:

* No instructions, because the optimizer discarded one or both of the
  variables.
* No instructions, because the optimizer arranged for them to have the
  right values in advance.
* Both are in registers.
* Neither is in a register.
* They're both in memory.  To perform the operation, the compiler generates
  code to load a 16-byte vector starting with y into one register, load
  another 16-byte vector starting with x into another register, load an
  immediate value into another register, use the immediate value to mask
  the first four bytes of y into the first four bytes of x, then save that
  16-byte chunk of memory back to main memory.

This is not relevant to my argument. (Though I accept it may be to
yours.) I'll try to explain what I mean by giving some reasons why C
is good. Maybe that will explain the scope and show why the assignment
you showed is a non sequitur.

C provides sized integers, it provides both signed and unsigned
integers. It provides pointers that can refer to almost anything
including functions and I can rely on the size of those pointers (on a
given compiler). It does not do any automatic reference counting. It
provides no autonomous memory management. It provides structuring I
can rely on. It provides operations that I can mentally map to CPU
opcodes. Where a CPU does not support certain instructions like divide
I can rely on the compiler making suitable choices, etc, etc.

Of course, I can do all the above in assembler. One key thing C adds
is CPU-independence (and it should be easier to use).

James
 

James Harris

On 12/12/2011 02:46 PM, James Harris wrote:
...


I'm a little confused by the implications of that last requirement. It
seems to me that there's generally no reason for a compiler to produce
object code that is anything other than OS-agnostic, except when making
use of OS facilities, in which case OS-agnostic code is impossible.
Could you please give a detailed real-world example of a compiler that
generates object code which is unnecessarily OS-dependent?

By "detailed", I mean the following: please provide a simple piece of
portable C source code. Identify the specific compiler you're talking
about, and specify the command line options used to invoke it. Provide a
listing of the generated object code, and identify the specific ways in
which the generated object code is unnecessarily OS-dependent. To make
it clearer what you're looking for, please also provide corresponding
object code that would meet your specification.

I got an answer to my original question a while ago. I'm happy to
respond to questions or challenges but can't quite summon up the
motivation to do what you ask for. Thanks, if you were trying to help,
though.

James
 

James Kuyper

I got an answer to my original question a while ago. I'm happy to
respond to questions or challenges but can't quite summon up the
motivation to do what you ask for. Thanks, if you were trying to help,
though.

No, I wasn't trying to help; I was trying to figure out what you were
talking about. What you appear to be saying is incompatible with my
mental model of how compilers and operating systems interact. Either I'm
misunderstanding you, or my model is incorrect, or what you're saying
doesn't make sense; I'm not sure which, and I wanted to find out.

If what you're saying doesn't make sense, then proving that fact to your
satisfaction would be helpful to you. However, if either of the other
two options apply, you won't derive any benefit from resolving the issue
for me, and I can't claim that you have any duty to do so.

Though my career has been in scientific computing, my formal education
was in advanced theoretical physics, not computer science. I've learned
a lot about how compilers work in the course of my career, but I've
never even taken the kind of course, which I gather is standard for Comp
Sci students, in which you actually write a simple compiler.

I only have a limited range of actual experience with operating systems.
The vast majority of my computer programming has been on a variety of
Unix-like operating systems - mostly, and most recently, several
different distributions of Linux. Most of the rest of my experience was
a long time ago on MS-DOS machines, plus one summer I spent writing code
for a VMS system.

Therefore, it's entirely possible that my model of compiler/OS
interactions is incorrect, or at least incomplete.
 

Keith Thompson

James Harris said:
That's OK. I was saying C was the most suitable language I know
because the source can use the simple concepts necessary to carry out
what I want to do. Other languages generally have programmer-friendly
features that, while convenient, could be translated into something
very inefficient or practically impossible to interface with. A C
optimiser rearranging things doesn't change the essential simplicity
of the correspondence between the C source and the CPU the machine
code will run on.

I wonder if C-- would suit your purposes better.
<http://www.goosee.com/cmm/>.
 

Seebs

Now I understand what you are thinking of. That's not my intention.
Remember I spoke of not wanting to be constrained by existing systems?
The calling convention is a case in point. If I followed existing
conventions across the board it would influence the design *far* too
much. I've already looked at that.

Then there are two possibilities:
1. You are DRAMATICALLY smarter than everyone who has ever worked on this.
2. What you are doing is absolutely doomed to irrelevance from the start.

I can't see it as remotely possible that messing with calling conventions
is going to provide sufficiently enormous benefits to make a system which
invents its own of any interest. The cost of interoperability will be
huge; unless you have transcendently gigantic benefits on the table, that
means it's not interesting.

So basically, this is an interesting amusement, but that design decision
rules out from the beginning any chance of it yielding anything useful
except experience.
C provides sized integers, it provides both signed and unsigned
integers. It provides pointers that can refer to almost anything
including functions and I can rely on the size of those pointers (on a
given compiler).

There can be *at least* three distinct classes of pointers. Real systems
have existed on which pointer-to-function, pointer-to-struct, and
pointer-to-void were three different kinds of objects, and not
interchangeable. I suspect at least a few are still in use.
It does not do any automatic reference counting. It
provides no autonomous memory management. It provides structuring I
can rely on. It provides operations that I can mentally map to CPU
opcodes. Where a CPU does not support certain instructions like divide
I can rely on the compiler making suitable choices, etc, etc.

Except the mental mapping to CPU opcodes is liable to be wrong. That
makes it, I'd think, sort of a negative.

-s
 

Ian Collins

There's very little mandated by the CPU that I can think of. What did
you have in mind? A given CPU may have an instruction pointer
register, a stack pointer register and/or dedicated registers for use
in call and return instructions but the definitions of how parameters
are passed, which registers are caller-save and which are callee-save
and how return addresses are stored when a sub-sub-routine is called
etc. are generally just conventions, aren't they?

Not just conventions in cases such as SPARC, where register windows are
used for in/local/out registers.
Besides, since we are focussing on the ABI, I think the mechanism
would probably be relevant to the CPU type, not to the OS as a whole.
In other words, one OS but each platform could have its own calling
conventions - the ones that were most appropriate for that hardware.

Which is how things are normally done.
 

James Kuyper

Obviously general calling conventions impact things like debuggers as
well as the obvious ability to call other code, compilers generate
system-dependent stack probes on some systems, some things may depend
on the assumed state of certain resources (consider inlining cos() on
x86 - the required code depends on what state the FPU is in), the
presence of threading might impact generated code, exactly how the
system loads code and data areas, and which areas are actually
writeable. I'm sure there are others.

But is there any compiler that unnecessarily does such things? I would
think that generating such code is a necessity, in most contexts; and in
the unlikely case that it's not a necessity, I would expect the compiler
to have, at least as an option, the ability to not generate such code.
I know, in particular, that inlining cos() calls has been optional on
every compiler I've used where it was a) possible to inline them and b)
reasonable, under some circumstances, to not do so.
 

Eric Sosman

[...]
There's very little mandated by the CPU that I can think of. What did
you have in mind? A given CPU may have an instruction pointer
register, a stack pointer register and/or dedicated registers for use
in call and return instructions but the definitions of how parameters
are passed, which registers are caller-save and which are callee-save
and how return addresses are stored when a sub-sub-routine is called
etc. are generally just conventions, aren't they?

You're probably right that the CPU doesn't "mandate" subroutine
linkage conventions. You're free to ignore that tempting stack, those
automatically-turned register windows, and whatever other doodads the
CPU provides for your convenience. If you feel like it, you can write
all the arguments to a disk file and have the subroutine read them
back again.
Besides, since we are focussing on the ABI, I think the mechanism
would probably be relevant to the CPU type, not to the OS as a whole.
In other words, one OS but each platform could have its own calling
conventions - the ones that were most appropriate for that hardware.

If you plan to write everything yourself, and to make no use of
tools or components developed in the outside world, this is viable.
But if you want to leverage independent work, you must give thought to
doing things in a way that will ease, or at least not impede, the
integration of the parts.

Real-world example: In a former life I worked for the late, great
Sun Microsystems. One continuing and exasperating problem was getting
good device drivers for desirable gadgets dreamed up by third parties.
Company X has a super-duper-speed fibre channel adapter, Company Y has
a video card that is all the rage, and so on. The first thing the
companies do is provide device drivers for a couple flavors of Windows,
then maybe Linux -- but because Solaris had its own device driver
framework, we often wound up with no driver, or paid Companies X, Y and
Z for slapdash, inferior drivers ... because we were "better."

Arthur Clarke wrote a story called "Superiority." Read it, if
you can find it somewhere.
 

John Tsiombikas

Then there are two possibilities:
1. You are DRAMATICALLY smarter than everyone who has ever worked on this.
2. What you are doing is absolutely doomed to irrelevance from the start.

I'm not saying this discussion doesn't have some interesting posts here
and there, but I don't understand why everyone is trying so passionately
to convince the OP not to try out his ideas about different calling
conventions in his own operating system. If he feels like it, I think
that's justification enough for me.

P.S. I'm also writing an operating system currently, and I'm following
the established calling conventions mostly because I don't think I would
have anything to gain by not doing it, but also because I'm too lazy to
start modifying compilers, linkers and debuggers on top of everything
else.
 

Seebs

I'm not saying this discussion doesn't have some interesting posts here
and there, but I don't understand why everyone is trying so passionately
to convince the OP not to try out his ideas about different calling
conventions in his own operating system. If he feels like it, I think
that's justification enough for me.

You have a point here. I guess... The impression I get is that he has
underestimated how important standard calling conventions are for
interoperability, and how unimportant they are for performance. I could
be wrong. But it's nearly always the case that when someone starts
talking about a new ABI for existing hardware with a stable ABI, it's
going to be a bad idea.

-s
 
