(e-mail address removed) wrote:
[snip]
Look, I just do not understand where you want to go.
It is impossible to explain object files in a few words, i.e. in a
newsgroup posting, while mentioning ALL the possible forms an object
file can have.
That is why I explained the concept of separate compilation,
and the three parts that an object file must have, when separate
compilation is done.
Your example of the MSVC switch doesn't apply here because there is,
precisely, no separate compilation. There is no linking and there is
no linker, since all the intermediate code is just thrown into a back
end.
I should have answered you that first.
OK, I type:
cl -GL program1.c
cl -GL program2.c
link program1.obj program2.obj
You're claiming that this is somehow *not* separate compilation as
normally understood? Even though the invocations of CL fully parse
and error-check the input programs, and do who knows how much
analysis?
I'm sorry, but that's a really silly assertion. The implementation
details of what happens when and where in the process are just that:
implementation details. The fact that some part (*part*) of what's
traditionally considered the compilation process happens a bit later
than usual makes no difference.
And if I remove the "-GL" from the command lines, I get a program that
does exactly the same thing, just a little slower. Again, so what?
The fact that the vendor did something odd and funky under the hood
that allows the final program to run faster is not relevant to the
process.
And what would be your dividing line between what functions are
allowed in a linker before it becomes something other than a linker?
Plenty of linkers have done all sorts of things in the process of
generating executables. Plenty of linkers can patch up branches so
that a shorter form is used if the target is in range. Some linkers
can add overlay processing to a program. Some linkers can search for
missing code in libraries and conditionally add it to the executable.
Heck, what happens when code gets patched up at *runtime*? Is the
compiler now in the runtime somewhere?
Then, what does "conceptually" mean?
It means that I did not want to get very precise as to what is
actually in each of the formats of object files around because they
can change a lot.
That is why I insisted only on the general concepts of what must
be inside:
o Symbol Table
o Relocations
o Sections
And even if the object files vary a lot, there isn't
any object file (that supports separate compilation) that
doesn't have at least that inside.
While you might have such things internally in the linker in some
cases, are you really trying to argue that the transitory existence of
some internal structure is a fundamental part of C?
Obviously, you feel an urge for polemic, irony, etc.
Maybe because you have still not swallowed the fact that
the non-existent stack is universally present.
No. You've said "an object file has..." I think that's overbroad, to
the point of being an outright error. "Commonly, an object file
has..." would be rather more reasonable. You're trying to argue the
former, even when I've given you a simple example where it's not
correct.
In essence you're trying to apply the as-if rule to the behavior of
the compiler/linker system in regards to the stuff in the output from
a translation of a translation unit. That's reasonable, except that
you've got the wrong basis for your as-if. Program1 has a reference
of some type to program2. That has to be resolved in the linking
step, but it does *not* require the traditional object file structures
to do that.
As a practical example, most systems that support link-time code
generation do not actually produce anything, internal or external,
that looks like a traditional object file for each translation unit.
It simply never exists. Rather they *do* typically produce an object
file (often internally, or as a temporary file - same thing), but
*one* object file, that is the combination of all the programs tossed
into the back end. In fact the object file looks like what would
happen if you pasted the source code for all the programs together and
then compiled them as a unit. In practice, the common technique is to
paste together the parse trees, after patching up some names so you
don't get collisions and the like.
That one object file is then typically run through the traditional
link process to deal with libraries and whatnot (and other objects
from different compilers).
Now, you get into philosophy.
EPROMs are executables?
They certainly have a different format, but more to the point they
lack an essential characteristic of executable files.
Executable files are files that are RELOCATABLE. They
can be loaded at different addresses in memory. Even if on
many systems that use virtual memory that address is fixed,
shared objects (DLLs) are truly relocatable.
Files in EPROM/ROM are NOT relocatable, therefore they are NOT
executable files in the normal sense.
They can be executed of course, but their organization is completely
different because there is no loader.
Good god I must be old. I remember that relocating linkers were once
the option or *upgrade* from the "normal" form. Heck, once upon a
time we worried that the extra overhead from the relocated versions of
the executables cost too much. Relocation is *not* a characteristic
attribute of executable files.
But I'm confused. It sounds like *you're* arguing that executables
are not really required. Which doesn't sound like you...