C/C++ calling convention

  • Thread starter Stanisław Findeisen
  • Start date

Goran Pusic

is there any "particular" reason why people should not use COM interfaces
from C?...
this sounds like personal bias, rather than an objective statement, FWIW....

Yes, of course, it's personal opinion on one hand.

On the other, it boils down to: why would you want to bother? It's so
much easier in C++ and you gain strictly nothing by sticking to C.
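
For instance, a rough sketch of the same QueryInterface call both ways
(assuming the Windows SDK headers and linking against ole32.lib and
uuid.lib; the shell link object is just a convenient example):

#include <windows.h>
#include <shlobj.h>

int main()
{
    CoInitialize(0);

    IUnknown* pUnk = 0;
    HRESULT hr = CoCreateInstance(CLSID_ShellLink, 0, CLSCTX_INPROC_SERVER,
                                  IID_IUnknown, (void**)&pUnk);
    if (SUCCEEDED(hr)) {
        IShellLink* pLink = 0;
        // C++: the compiler does the vtable indirection for you.
        hr = pUnk->QueryInterface(IID_IShellLink, (void**)&pLink);
        // C: you spell the vtable out and pass the interface pointer yourself:
        //   hr = pUnk->lpVtbl->QueryInterface(pUnk, &IID_IShellLink,
        //                                     (void**)&pLink);
        if (SUCCEEDED(hr))
            pLink->Release();
        pUnk->Release();
    }
    CoUninitialize();
    return 0;
}

The C version works, it's just that you write the lpVtbl hop and the extra
"this" argument on every single call, and nothing stops you from passing the
wrong interface pointer as that first argument.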

Goran.
 

Goran Pusic

In theory, we really have to start by defining what we mean by
"OS".  I'm an old timer---for me, the OS is the part of the code
which has to be executed in kernel mode: things like COM or the
GUI aren't directly part of the OS.

I bet you can beat that, too: OS are software interrupts that are used
to call into... Well, OS :).

But more seriously, if you take e.g. Telephony API on Windows, sure,
you can use it without COM, but how would you not consider it a part
of the OS? (And there are probably Windows APIs that are strictly COM-
dependent, that is, there is no lower level C interface for them at
all).

And of course, as good people here noted, why would you not consider
assembly?

Goran.
 

Goran Pusic

That ABI uses a sort of Pascal way to call functions, if I remember
correctly. I haven't looked into an MS header in a long time, but I think
these are still that "stdcall".

That's just a leftover from the past: 8086 code is faster (or shorter,
don't remember which) with ret x (where "x" is the number of bytes to
pop off the stack when returning), which is possible to do when number
of arguments and their sizes are known up front, and which is what
Pascal compilers on 8086 used to do since Pascal did not know of
variable number of arguments (inhales).

As opposed to C with f(TYPE, ...).
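
A small sketch of the difference (__stdcall/__cdecl here are the
Microsoft-style keywords, and only mean something on 32-bit x86; elsewhere
they are typically ignored):

#include <cstdarg>
#include <cstdio>

// Callee cleans up ("ret 8" on x86): the argument byte count is fixed at
// compile time.
int __stdcall add2(int a, int b) { return a + b; }

// Caller cleans up: required as soon as the argument list is variable.
int __cdecl sum(int count, ...)
{
    va_list ap;
    va_start(ap, count);
    int total = 0;
    for (int i = 0; i < count; ++i)
        total += va_arg(ap, int);
    va_end(ap);
    return total;
}

int main()
{
    std::printf("%d %d\n", add2(1, 2), sum(3, 1, 2, 3));
    return 0;
}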

BTW, I remember reading that first Windows code used to be compiled
with quite a bit of Pascal (+asm) code.

Goran.
 

James Kanze

I bet you can beat that, too: OS are software interrupts that
are used to call into... Well, OS :).
But more seriously, if you take e.g. Telephony API on Windows,
sure, you can use it without COM, but how would you not
consider it a part of the OS?

What is the telephony API? I've never heard of it, and I've
been doing some pretty low level Windows programming lately.
And if the name is anywhere relevant, how could you consider it
part of the OS?
(And there are probably Windows APIs that are strictly COM-
dependent, that is, there is no lower level C interface for
them at all).

There are certainly programs under Windows whose API is strictly
COM. Or something else completely.
And of course, as good people here noted, why would you not
consider assembly?

Because Windows doesn't define the assembler level API.
 

Alf P. Steinbach /Usenet

* James Kanze, on 30.08.2010 17:05:
What is the telephony API? I've never heard of it, and I've
been doing some pretty low level Windows programming lately.

You might look up RAS.

And if the name is anywhere relevant, how could you consider it
part of the OS?

In Windows even the GUI is part of the OS. So much so that at least still in
Windows 2000, as I recall, the thread scheduler interacted with window message
loops; you could affect performance by sprinkling in some window message queue
checking. Since that code reportedly is very very hairy spaghetti it's probably
still that way.

One big annoyance is that DCOM (Distributed COM, sort of remote procedure calls
in a slightly object-oriented way) is so very much part of the OS that if you
stop that service then the OS reboots.

The reason that it's a big annoyance is that much malware has been targeted
directly at DCOM -- and you can't turn it off.

There are certainly programs under Windows whose API is strictly
COM. Or something else completely.

E.g. the shell API is mostly COM based.

Because Windows doesn't define the assembler level API.

Uh, it does. :)

That's what an ABI means.

Defining that level.


Cheers & hth.,

- Alf
 

BGB / cr88192

is there any "particular" reason why people should not use COM interfaces
from C?...
this sounds like personal bias, rather than an objective statement,
FWIW...

<--
Yes, of course, it's personal opinion on one hand.

On the other, it boils down to: why would you want to bother? It's so
much easier in C++ and you gain strictly nothing by sticking to C.
-->

sticking to C is easier, though, if one does a lot of code-processing via
automated tools, and those tools are not smart enough to really deal with C++
syntax or concepts (such as namespaces, ...), whereas things like scoping via
naming convention are generally easier to handle within a dumb tool.
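
for example (made-up names), the sort of thing a dumb text tool can pick
apart with a regex, vs. what it would need a real C++ front end for:

/* C, scoping by naming convention -- trivial for a dumb tool to process: */
int  myapp_snd_Init(void);
void myapp_snd_PlayClip(int clipId);

/* C++ equivalent -- now the tool has to actually understand the language: */
/* namespace myapp { namespace snd {                                       */
/*     int  Init();                                                        */
/*     void PlayClip(int clipId);                                          */
/* } }                                                                     */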


but, anyways, it all depends a lot on the region of code.

for example, a project may be divided into different regions (or layers),
each with different rules:
for example, lower layers may disallow using C++, and enforce certain
practices WRT modularity and internal dependency management, ...;
middle layer code may be written in C++, and may relax modularity
requirements;
frontend code may even allow other languages, such as C# or Java, ...

so, the bigger factor may well be where in the larger codebase the code is
located.


and, it may so happen that some of the places where there is the greatest
need to make use of COM-like systems are also the areas where the most
rigid practices are in place, which may reasonably exclude things like using
C++ (just as they may enforce modularity practices, disallow
cyclic dependencies between subsystems, ...).


similarly, coding practices which work well in the <100 kloc range may not
work out as well in the >100 kloc range, and turn disastrous in the Mloc
range, and so these factors need to be considered as well, ...
 

BGB / cr88192

Alf P. Steinbach /Usenet said:
* James Kanze, on 30.08.2010 17:05:

You might look up RAS.



In Windows even the GUI is part of the OS. So much so that at least still
in Windows 2000, as I recall, the thread scheduler interacted with window
message loops; you could affect performance by sprinkling in some window
message queue checking. Since that code reportedly is very very hairy
spaghetti it's probably still that way.

One big annoyance is that DCOM (Distributed COM, sort of remote procedure
calls in a slightly object-oriented way) is so very much part of the OS
that if you stop that service then the OS reboots.

The reason that it's a big annoyance is that much malware has been
targeted directly at DCOM -- and you can't turn it off.



E.g. the shell API is mostly COM based.



Uh, it does. :)

That's what an ABI means.

Defining that level.

if there is one thing I think Windows got right better than most Unix
variants and friends did, it is that they tend to define things down to the
binary level, so although it may be painful to interop at these levels in
many cases, it is at least possible...

in nearly all forms of Unix, most binary details are left up to the
implementation (including flag bit constants, numerical magic values, ...).
only really source-level compatibility is considered, and even then it is a
bit hit or miss as a lot is still only vaguely defined and tends to differ
from one system to another.

granted, there may be practical reasons for doing this, but it doesn't help
matters much, more so that no comprehensive binary level specifications have
been undertaken.

for example, with Linux (theoretically a single OS), it is a bit hit or miss
if binary code will even work between distros, or between different versions
of the same distro (since many of the library writers see no problem in
making changes to library headers and API's which tend to break binary code,
seeing things like changing structure layouts or magic numbers as "innocent
changes"...).

ideally, people could get all this crap nailed down, but at this point it
seems unlikely...
 

Ian Collins

if there is one thing I think Windows got right better than most Unix
variants and friends did, it is that they tend to define things down to the
binary level, so although it may be painful to interop at these levels in
many cases, it is at least possible...

in nearly all forms of Unix, most binary details are left up to the
implementation (including flag bit constants, numerical magic values, ...).
only really source-level compatibility is considered, and even then it is a
bit hit or miss as a lot is still only vaguely defined and tends to differ
from one system to another.

Which is why Solaris and its derivatives are such a breeze to release
code for. Build on the oldest version you want to support and it
will work on all newer versions. I agree it's a shame Linux didn't
follow that path.
 

BGB / cr88192

Ian Collins said:
Which is why Solaris and its derivatives are such a breeze to release code
for. Build on the oldest version you want to support and it will work
on all newer versions. I agree it's a shame Linux didn't follow that
path.

yep.

would have been nice...

as is, some of the benefit of open source is lost when all software needs to
be compiled for a particular system...
 

Nick Keighley

Any compiler provides you with a way to specify what calling
convention you want from a list of calling conventions chosen by that
compiler (look for e.g. __cdecl).

you mean *all* compilers provide you with a way of specifying calling
conventions? I find that surprising.
 

Goran Pusic

you mean *all* compilers provide you with a way of specifying calling
conventions? I find that surprising.

Heh, you're probably right :). Or at least, I see no reason why there
would not be compilers that have calling convention set in stone.

But normally, compilers are prepared for eventualities, so that you
can use a calling convention that fits the HW/SW platform.

Goran.
 

James Kanze

* James Kanze, on 30.08.2010 17:05:
You might look up RAS.

Looks like an in house version of SNMP. In other words, not
really part of the OS (but interacting with it).
In Windows even the GUI is part of the OS. So much so that at
least still in Windows 2000, as I recall, the thread scheduler
interacted with window message loops; you could affect
performance by sprinkling in some window message queue
checking. Since that code reportedly is very very hairy
spaghetti it's probably still that way.

The GUI is a borderline case. Regardless of the system, it does
need some OS support; you can't have just any user process
writing anywhere on the screen (or capturing keystrokes or mouse
clicks that weren't intended for it). From what little I've
seen, it's clear that the boundary between the OS and things
like displaying a button in a window is much higher under
Windows than under Unix (at least Unix with X---I don't know
about Mac). But things like adding a button to a pane in a
Window still aren't really part of the OS. They're part of a
library which uses whatever lower level interfaces the system
provides.
One big annoyance is that DCOM (Distributed COM, sort of
remote procedure calls in a slightly object-oriented way) is
so very much part of the OS that if you stop that service then
the OS reboots.

:). Well, it's fairly easy to write a program like that for
Unix, at least if you have root privileges to install it.
(Many of the applications I've worked on, in fact, did this.)
That doesn't really make them part of the OS.

[...]
Uh, it does. :)
That's what an ABI means.
Defining that level.

Yes and no. An ABI does involve defining a number of low level
things, like how struct's are laid out (today---it wasn't always
the case). On the other hand, I've yet to see any Windows
documentation concerning how to call CreateFile (for example)
other than in C.
 

James Kanze

[...]
if there is one thing I think Windows got right better than
most Unix variants and friends did, it is that they tend to
define things down to the binary level, so although it may be
painful to interop at these levels in many cases, it is at
least possible...

I don't see what that buys you. However...
in nearly all forms of Unix, most binary details are left up
to the implementation (including flag bit constants, numerical
magic values, ...). only really source-level compatibility is
considered, and even then it is a bit hit or miss as a lot is
still only vaguely defined and tends to differ from one system
to another.
granted, there may be practical reasons for doing this, but
it doesn't help matters much, more so that no comprehensive
binary level specifications have been undertaken.
for example, with Linux (theoretically a single OS), it is a
bit hit or miss if binary code will even work between distros,
or between different versions of the same distro (since many
of the library writers see no problem in making changes to
library headers and API's which tend to break binary code,
seeing things like changing structure layouts or magic numbers
as "innocent changes"...).

This is a real problem. Technically, it's not a problem with the
OS, since most of the libraries are third party, and not part of
the OS, but if you're trying to get something to work, it really
doesn't matter.

Note that Windows is even worse, however. You have to ensure
that all of the libraries were compiled with the same compiler,
using the same options. (The distributed binaries for Boost
didn't work with our system. Nor did those created by another
group in the company, using the same compiler, but slightly
different options.)
ideally, people could get all this crap nailed down, but at
this point it seems unlikely...

I fear you're right.

Most commercial libraries seem to realize the importance of
backward compatibility; if your code worked with version x, it
will work with version x+1. So if some new code requires a
newer version of the library, you don't break all of the older
code. I've not found this to be the case with most free
libraries, however. (But there are exceptions both ways.)
 

Alf P. Steinbach /Usenet

* James Kanze, on 31.08.2010 18:46:
Yes and no. An ABI does involve defining a number of low level
things, like how struct's are laid out (today---it wasn't always
the case). On the other hand, I've yet to see any Windows
documentation concerning how to call CreateFile (for example)
other than in C.

It's stdcall convention, which is well defined for the things used in the API
(it doesn't extend to C++ RVO, though).
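
For example, you only have to spell the convention out yourself when you
write the pointer type by hand; a rough sketch fetching CreateFileW through
GetProcAddress:

#include <windows.h>
#include <cstdio>

// WINAPI in <windows.h> is just a macro for __stdcall; in a hand-written
// pointer type you write it yourself.
typedef HANDLE (__stdcall *CreateFileW_t)(LPCWSTR, DWORD, DWORD,
                                          LPSECURITY_ATTRIBUTES,
                                          DWORD, DWORD, HANDLE);

int main()
{
    CreateFileW_t pCreateFileW = reinterpret_cast<CreateFileW_t>(
        GetProcAddress(GetModuleHandleW(L"kernel32.dll"), "CreateFileW"));
    if (!pCreateFileW)
        return 1;

    HANDLE h = pCreateFileW(L"NUL", GENERIC_READ, 0, 0,
                            OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, 0);
    if (h != INVALID_HANDLE_VALUE) {
        std::printf("opened\n");
        CloseHandle(h);
    }
    return 0;
}

Declare that pointer type __cdecl instead and, on 32-bit x86, caller and
callee will disagree about who pops the arguments.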


Cheers & hth.,

- Alf
 

James Kanze

On 08/31/10 04:05 AM, BGB / cr88192 wrote:

[...]
Which is why Solaris and its derivatives are such a breeze to release
code for. Build on the oldest version you want to support and it
will work on all newer versions. I agree it's a shame Linux didn't
follow that path.

I think it's moving that way. At least, I've had fewer problems
in the last couple of years than previously.

At least with regards to the OS. A lot of the software under
Linux (and to some degree under Solaris and Windows as well) is
third party freeware, which depends on a specific version of
some other third party freeware, ad infinitum.
 

BGB / cr88192

you mean *all* compilers provide you with a way of specifying calling
conventions? I find that surprising.

<--
Heh, you're probably right :). Or at least, I see no reason why there
would not be compilers that have calling convention set in stone.

But normally, compilers are prepared for eventualities, so that you
can use a calling convention that fits the HW/SW platform.
-->

yes, this is probably extra true of compilers which may target multiple OSes
and CPU architectures, where many things will be set as "sane defaults" for
a target architecture, but may not be internally fixed within the compiler,
and can be affected by all manner of extension keywords...

consider a compiler which targets Win32 and Linux, which may internally
support all of Win32's conventions on Linux, even though Linux doesn't use
them.

now, what if the compiler is ported to Win64 and Linux x86-64? likely the
keywords are still there, and may still do "something", although not
necessarily what is intended (the compiler could complain, ignore it, or
silently do something different and unexpected, such as use a different
calling convention, produce mutilated code, ...).

and if more targets are supported, many other features may be added.

and, if the compiler is extended to support multiple source languages as
well, even more bizarre possibilities exist (the risk of inter-language
syntax/semantic bleed-over), ... for example, several different languages
could use the same parser and compiler machinery (say, with an internal
"lang" modifier or similar to keep them separate and effect other
source-language-specific behavior), with that border being crossed if one
writes code outside the usual bounds of a given source language, ...

example:
__kw_function foo(x):int
{ return((int)(&x)); }
for example, the code above exploiting compiler internals to mix C and
JavaScript syntax...
say, the compiler in question uses '__kw_' as a prefix to escape keywords
(and '__type_' to escape new base-types, '__builtin_' for builtins, ...),
and exploits the JS 'function' keyword and parse logic (the parser not
knowing of any other use for this keyword), and the compiler not checking
that this use is sane, ... (and whether or not the backend blows up
is another issue...).


but, granted, single-source-language, single-target compilers are likely
not to have any of these sorts of "features".
 

James Kanze

"James Kanze" <[email protected]> wrote in message
<--
By "most machines", I meant most types of machines. Windows is
a bit of an exception, although even here, it's only really an
exception when using VC++, who decided not to use the standard
mechanism. (cdecl and stdcall resolve to ``extern "C"'' and
``extern "Pascal"'', I think. And thiscall is an extension of
cdecl, only relevant for ``extern "C++"''. The language also
allows a compiler to define something like ``extern "Fast"'',
although why one would use a slow call when a fast one is
available is beyond me.)
-->

most compilers I have seen have used the "__callconv" type keywords,

The only one I've seen which does this is the Microsoft
compiler.
although granted my experience is mostly limited to Windows (where this
convention would likely be default as MS uses it),

Most of my experience is Unix. I've only really approached Windows
in the last year (and only with the Visual Studios).
and Linux (where generally there is only a single calling
convention in use, hence no real need to use it...).

I've not seen this under Linux. G++ does have a __attribute__
keyword, but this covers a lot more than linkage. It allows
declaring, for example, that a function never returns, or is
pure. Things so useful for the optimizer that the next version
of the standard will provide a similar mechanism.
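
For example (g++ syntax; the first one is what the next standard spells
[[noreturn]]):

#include <cstdlib>

__attribute__((noreturn)) void die(const char* msg);  // promises to never return
__attribute__((pure)) int hash(const char* s);        // result depends only on its
                                                       // arguments and readable globals

void die(const char* msg) { (void)msg; std::abort(); }

int hash(const char* s)
{
    int h = 0;
    while (*s)
        h = h * 31 + *s++;
    return h;
}

int main() { return hash("abc") % 7; }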

Under Windows, this technique is used for things like dllimport
and dllexport as well (since it is necessary), and on Intel
processors, there is an attribute cdecl, which will override any
compile line option telling the compiler to use a different
calling convention (which no one in their right mind would use,
since it means that you can't link with any number of other
programs---although by all rights, the "optional" form should be
the default for C++).
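
The usual pattern is something along these lines (names made up; __declspec
is the MSVC/MinGW spelling):

// mylib.h -- built with MYLIB_BUILD defined inside the DLL, without it in
// client code.
#ifdef MYLIB_BUILD
#  define MYLIB_API __declspec(dllexport)
#else
#  define MYLIB_API __declspec(dllimport)
#endif

extern "C" MYLIB_API int mylib_version(void);
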
this notation makes a little more sense IMO than the 'extern
"lang" ' notation in many cases, as it allows defining the
calling convention of function pointers, ... which the latter
can't do effectively,

Why not? Function pointers do have language binding as part of
their type; you can't assign the address of a C++ function to a
function pointer with ``extern "C"'' in its type. (At least two
compilers are broken in this regard, however, and don't enforce
the rule: Microsoft VC++ and g++.)
however, the 'extern "lang" ' notation does have the
usefulness that it can be applied to an entire block of
definitions, rather than having to be endlessly repeated for
each declaration.
hence:
void (__stdcall *foo)(int x, double y);
and:
extern "C"
{
...

}

Or also:
extern "C" void foo(void* (*)(void*));
Which can only be called with a pointer to an ``extern "C"''
function.
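
A sketch of that rule (the commented-out line is the one the standard
rejects):

extern "C" typedef void (*c_callback)();  // function type with C language linkage

extern "C" void from_c() {}
void from_cpp() {}                        // plain C++ linkage

int main()
{
    c_callback p = from_c;                // fine
    // c_callback q = from_cpp;           // ill-formed: linkage is part of the
    //                                    // type, though VC++ and g++ let it slide
    p();
    return 0;
}
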
<--
As is the case under Solaris, HP/UX and AIX. And probably every
other system around.
-->
most non-Windows systems on x86-64 AFAIK use AMD64.

Only those running on AMD 64 bit hardware and the Intel clones
of them. Sparc has completely different conventions, as does
HP/UX on HP's PA based machines. IIRC, 32 bit Linux uses an
adaptation of Intel's Itanium 64 bit conventions.
admittedly, I personally like Win64's design a little more, as
AMD64 seems a bit complicated and over-engineered and likely
to actually reduce performance slightly in common use cases vs
Win64's design.
<--
On 32-bit Linux, I've never used anything.
-->
on 32-bit Linux, there is only a single calling convention in
use, so no one needs to... doesn't mean the calling
convention is not there, only that it is not needed to specify
it.

Rather that the default is universal. I think the difference is
that under Windows, the default may be something like cdecl, but
many (most) of the OS interface functions use something else.
<--
Yes and no. All of the C compilers I tried did the same thing.
All of the C++ compilers were different. But that's the case
almost universally today as well.
-->
there were differences. many C compilers used OMF (as the
object format), and some used others (including COFF, but this
was typically for DPMI-based compilers, as well as I think I
remember there being 32-bit OMF, ...); there were a few others
as well IIRC.
although, all this was long ago, and my memory is faded.

Most of the compilers I used under MS-DOS used Microsoft's
object format, which was originally based on Intel's. The one
exception, Intel's own compiler, used the Intel format, but
could link the Microsoft object format as well.

[...]
but, Windows and x86 (or Windows and x64 / x86-64) represent
the vast majority of total systems (desktop and laptop at
least) in use...

You notice that you have to qualify it. I'd be surprised if
there were more Windows than Symbian or VxWorks, in terms of
numbers of machines running the system. And of course, Unix
still dominates servers and large scale embedded systems
(network management, telecoms, etc.).
Linux and OSX (on x86 or x86-64) are most of the rest.
ARM and PPC are used in many embedded systems. (not sure the
OS popularity distribution for embedded systems, from what I
have seen I would guess: Linux, FreeDOS, and various
proprietary OS's...).

VxWorks dominates, I think. Except on portable phones, where
Symbian dominates.
most other architectures and operating systems can be largely
safely ignored...

Unless you're doing something important: a large scale server,
network management, etc. I've done far more work under Solaris
than under Windows.
 

James Kanze

you mean *all* compilers provide you with a way of specifying
calling conventions? I find that surprising.

They're required to by the standard. All compilers are required
to support both ``extern "C"'' and ``extern "C++"''. Most do
this in a standard conformant manner, and don't support anything
else, however.

But it depends on the platform. The Intel architecture has
hardware support for cleaning up the stack when returning from a
function. To use it, however, the function must always be
called with the same number of bytes as arguments; it doesn't
work with varargs. The C standard added the requirement that
when calling a varargs function, a declaration for it must be in
scope, so that one could use the hardware support most of the
time, and skip it when the function used varargs. Historically,
however, this came too late. The intelligent solution is to use
the vararg convention for C, and the hardware support for C++
(which requires a function declaration to be in scope for all
functions), but all modern compilers follow Microsoft rather
than doing this. (The old Zortech C++ compiler did this, using
a different calling convention for C and for C++. And passing
the address of a C++ function to a function expecting a pointer
to a C function would cause a crash.)
 

James Kanze

On Aug 25, 9:42 am, Stanisław Findeisen <[email protected]> wrote:
[...]
I'd say that for a given compiler (version) on a given
platform, there's no difference between a C and a C++
function, except that you need extern "C" to turn off name
mangling that happens with a C++ compiler. So that's why
you have extern "C".
And you'd be wrong. At least one compiler I've used used
different calling conventions for C and for C++.
Ugh. Which one and why (if you can be bothered)?

The old Zortech compiler. Because C is more broken than C++ :).
Intel has hardware support for cleaning up the stack, provided
the function in question has a fixed number of bytes as
arguments. In pre-standard C, it was usual to call functions
like printf without a declaration in scope, so the compiler had
to assume that all functions were varargs (and thus used the
less efficient convention). C++ has always required a function
declaration to be in scope in order to call the function, so the
compiler used the hardware support to clean up the stack. (The
results were slightly smaller and slightly faster.)
To be honest, I can't think of a reason why any compiler would
match calling conventions between C and C++, but that seems a
most reasonable (only sensible, really) thing to do when using
extern "C".

When a function is declared ``extern "C"'', it must match the
calling conventions of C.
 

BGB / cr88192

James Kanze said:
Alf P. Steinbach /Usenet said:
* James Kanze, on 30.08.2010 17:05:
[...]
And of course, as good people here noted, why would you not
consider assembly?
Because Windows doesn't define the assembler level API.
Uh, it does. :)
That's what an ABI means.
Defining that level.
if there is one thing I think Windows got right better than
most Unix variants and friends did, it is that they tend to
define things down to the binary level, so although it may be
painful to interop at these levels in many cases, it is at
least possible...

I don't see what that buys you. However...
in nearly all forms of Unix, most binary details are left up
to the implementation (including flag bit constants, numerical
magic values, ...). only really source-level compatibility is
considered, and even then it is a bit hit or miss as a lot is
still only vaguely defined and tends to differ from one system
to another.
granted, there may be practical reasons for doing this, but
it doesn't help matters much, more so that no comprehensive
binary level specifications have been undertaken.
for example, with Linux (theoretically a single OS), it is a
bit hit or miss if binary code will even work between distros,
or between different versions of the same distro (since many
of the library writers see no problem in making changes to
library headers and API's which tend to break binary code,
seeing things like changing structure layouts or magic numbers
as "innocent changes"...).

This is a real problem. Techically, it's not a problem with the
OS, since most of the libraries are third party, and not part of
the OS, but if you're trying to get something to work, it really
doesn't matter.

Note that Windows is even worse, however. You have to ensure
that all of the libraries were compiled with the same compiler,
using the same options. (The distributed binaries for Boost
didn't work with our system. Nor did those created by another
group in the company, using the same compiler, but slightly
different options.)

this usually only pops up when using C++ across library borders though...

if one only uses C-level APIs across these borders, these problems are
greatly reduced, as the "which compiler with which options" issue,
typically, mostly disappears.

but, even then, one has to be fairly rigid about coding practices to avoid
many of the subtle issues which tend to pop up with more "casual" API-design
practices (even things as simple as physically passing and returning
structs, vs sending them as pointers, may foul things up as different
compilers seem to disagree over things like how to pass or return various
types of struct, or over things like how to pad misaligned struct members,
exact size/alignment for struct arrays, ...).

CC-A: expects a pointer as a hidden first arg for returning a given struct;
CC-B: expects this struct to be put in registers (say, the < 12-byte
EAX/ECX/EDX interpretation).

CC-A: expects to pass an internal reference to a struct on the stack;
CC-B: expects the whole struct to be placed on the stack.
....
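
so the defensive style at such a border ends up looking roughly like this
(made-up names), with nothing but pointers and plain ints actually crossing
it:

#include <stdio.h>

typedef struct fooimg_Info { int width, height, bpp; } fooimg_Info;

/* risky across a DLL border:  fooimg_Info fooimg_GetInfo(void);  (by-value) */
/* safer: the caller owns the memory, only a pointer crosses over            */
static int fooimg_GetInfo(fooimg_Info* out)
{
    out->width = 640; out->height = 480; out->bpp = 32;
    return 0;  /* 0 == success */
}

int main(void)
{
    fooimg_Info info;
    if (fooimg_GetInfo(&info) == 0)
        printf("%dx%dx%d\n", info.width, info.height, info.bpp);
    return 0;
}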

but, otherwise, at the C level things tend to be much more solid, whereas
the C++ level is an inter-compiler mess of sorts.


except WRT Cygwin, which adds its own bizarreness, which IMO as a matter of
policy should not be used to compile DLL's... (MinGW and MSVC are fairly
safe though IME...).

however, most open-source code is difficult to get to build even on
Cygwin, much less MinGW, and MSVC typically requires a lot of internal
tweaking (as, sadly, even most "portable" OSS code tends to use the
occasional GCC'ism here or there...).

I fear you're right.

Most commercial libraries seem to realize the importance of
backward compatibility; if your code worked with version x, it
will work with version x+1. So if some new code requires a
newer version of the library, you don't break all of the older
code. I've not found this to be the case with most free
libraries, however. (But there are exceptions both ways.)

yes.

the problem comes down a lot to API design...
to keep everything from breaking requires fairly careful API design and
maintenance, which many/most OSS libraries don't seem to care about
bothering with...

not everyone wants to write libraries, say, with the look and feel of
OpenGL, although GL is a good example of a library/system which has done
notably well at avoiding versioning issues...

DirectX doesn't do as well, since an app usually has to consider issues
related to the particular version of the library they are developing
against, API versions, ..., and in a few cases DX has dropped little-used
features, potentially breaking any (likely rare) apps which may have
depended on them.

but, even then, DX is still a lot better than many OSS libraries in these
regards (many which make little real effort to address the matter of
versioning, ...).


but, yes, keeping one's bit flags, magic values, struct layouts, ... "set in
stone" (or avoiding them altogether), is a little more effort (and many OSS
libs don't bother).

directly using C++ across a library boundary is a practice I personally
think is nearly the opposite extreme, as it provides almost no protection
from the matters of compiler-dependent features and from versioning (since
something as "innocent" as adding a new method to a class may essentially
change the vtable layouts of both the class and any other class which
inherits from it, thus breaking binary interop, ...).
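
to illustrate (hypothetical interface): the "innocent" edit below silently
renumbers every vtable slot that follows it, so clients built against v1
start calling the wrong function:

// v1, as shipped -- clients were compiled against this layout:
struct IRenderer {
    virtual void Draw()    = 0;   // vtable slot 0
    virtual void Release() = 0;   // vtable slot 1
};

// v2, after the "innocent" change:
// struct IRenderer {
//     virtual void Draw()     = 0;   // slot 0
//     virtual void DrawFast() = 0;   // slot 1 -- new
//     virtual void Release()  = 0;   // slot 2 -- old binaries still call slot 1
// };

int main() { return 0; }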


even in naively designed codebases, this issue may pop up: say, changing
something in a header and rebuilding may leave much of the rest of the
codebase "stale" and cause bugs, requiring either that code be rebuilt
whenever headers change (an extreme time-wasting hassle, especially for Mloc
codebases...), or doing a "clean" build (deleting all objects and binary
code) if any significant changes are made.

however, recently with my coding practices, I have largely avoided both of
the above (I neither track header changes, nor usually need to bother with
clean builds).


or such...
 
