C/C++ calling convention

  • Thread starter Stanisław Findeisen

BGB / cr88192

"James Kanze" <[email protected]> wrote in message
<--
By "most machines", I meant most types of machines. Windows is
a bit of an exception, although even here, it's only really an
exception when using VC++, who decided not to use the standard
mechanism. (cdecl and stdcall resolve to ``extern "C"'' and
``extern "Pascal"'', I think. And thiscall is an extension of
cdecl, only relevant for ``extern "C++"''. The language also
allows a compiler to define something like ``extern "Fast"'',
although why one would use a slow call when a fast one is
available is beyond me.)
-->

most compilers I have seen have used the "__callconv" type keywords,

<--
The only one I've seen which does this is the Microsoft
compiler.
-->

GCC and most other compilers which support Windows also do this.
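
for illustration, roughly how the two spellings line up (a minimal sketch; the MY_STDCALL wrapper macro is made up, and stdcall only means anything on 32-bit x86):

#ifdef _MSC_VER
#define MY_STDCALL __stdcall                  /* MSVC-style keyword */
#else
#define MY_STDCALL __attribute__((stdcall))   /* GCC-style attribute */
#endif

int MY_STDCALL my_callback(int x);   /* same callee-pops convention either way */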

although granted my experience is mostly limited to Windows (where this
convention would likely be the default, as MS uses it),

<--
Most of my experience is Unix. I've only really tackled Windows
in the last year (and only with Visual Studio).
-->

yeah.
I develop on both Windows and Linux, but these days mostly on Windows, since
that is where most of the potential users are.

however, I use an "unusual" build setup, typically mixing MSVC and the GNU
toolchain (via Cygwin), which I worry may sometimes hinder others from
using the code (since they would have to figure out how to get this
build setup to work).

but the problem is that building from the command line is notably less
"nice" with MS's tools than with the GNU tools (MS's nmake sucks, ...).

also lame is that I often don't get around to keeping all of the
Linux-specific (and non-MSVC) code and Makefiles up to date, so when I do
get around to it, getting a Linux build to work again usually takes several
hours or more of getting everything to build and run.

and Linux (where generally there is only a single calling
convention in use, hence no real need to use it...).

<--
I've not seen this under Linux. G++ does have a __attribute__
keyword, but this covers a lot more than linkage. It allows
declaring, for example, that a function never returns, or is
pure. Things so useful for the optimizer that the next version
of the standard will provide a similar mechanism.
-->

"no real need to use it" also implies "will rarely/never be seen" or "can be
assumed not to exist".
one could try and see what happens if they try to use __stdcall on 32-bit
Linux, and whether or not the compiler will accept it or what the exact
result will be, I don't know...
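
(FWIW, GCC does at least parse these spellings on 32-bit x86 Linux; a small
sketch of the attributes mentioned above, not something I have tested on
every target:)

void die(const char *msg) __attribute__((noreturn));  /* declared as never returning */
int square(int x) __attribute__((pure));              /* result depends only on its inputs */
int __attribute__((stdcall)) cb(int x);               /* callee-pops convention, 32-bit x86 only */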

<--
Under Windows, this technique is used for things like dllimport
and dllexport as well (since it is necessary), and on Intel
processors, there is an attribute cdecl, which will override any
compile line option telling the compiler to use a different
calling convention (which no one in their right mind would use,
since it means that you can't link with any number of other
programs---although by all rights, the "optional" form should be
the default for C++).
-->

yes.

there is also __declspec, which MSVC uses.

my compiler ended up having to support both __declspec and __attribute__
modifiers, since it partly emulates both MSVC and GCC (this itself is a bit
of a mess...). this is partly because many headers will not do anything
"sane" if not emulating a compiler they know about.

#if defined(__GNUC__)
....
#elif defined(_MSC_VER)
....
#elif defined(__WATCOMC__)
....
#else
#error "GASP! What is going on here!"
#endif

and, going through all of these headers to add cases for a new compiler
would be a problem.

so, it is easier to try to emulate whatever other compiler is being used for
building the code (using my own defines to identify my compiler in "emulation
mode"), and then try to sort out the mountains of crud which typically result
from this (IOW, the "side compiler" strategy...).
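
concretely, the emulation-mode defines amount to something like this (the
macro names and version numbers here are hypothetical, not my compiler's
actual ones):

/* pretend to be GCC so that system headers take a code path they know... */
#define __GNUC__ 3
#define __GNUC_MINOR__ 4
/* ...while still identifying the real compiler to code that cares */
#define __MYCC__ 1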

this notation makes a little more sense IMO than the 'extern
"lang"' notation in many cases, as it allows defining the
calling convention of function pointers, ... which the latter
can't do effectively,

<--
Why not? Function pointers do have language binding as part of
their type; you can't assign the address of a C++ function to a
function pointer with ``extern "C"'' in its type. (At least two
compilers are broken in this regard, however, and don't enforce
the rule: Microsoft VC++ and g++.)
-->

possibly, but I was unaware of this working in the typical (inline) case, as
I would have thought it would require either defining them as proper
variables or as typedefs...

example:
fptr = ((void *(__stdcall *)(HANDLE, LPCSTR))GetProcAddress(windll,
    "GetProcAddress"))(windll, "CreateWindowEx");

although a contrived example, this sort of usage pattern does pop up
sometimes (usually in nasty code which is better off wrapped).
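
a somewhat cleaner spelling of the same thing (reusing the same hypothetical
fptr/windll variables) is to typedef the pointer type first:

typedef void *(__stdcall *GetProcAddress_t)(HANDLE, LPCSTR);
GetProcAddress_t pGetProcAddress;
pGetProcAddress = (GetProcAddress_t)GetProcAddress(windll, "GetProcAddress");
fptr = pGetProcAddress(windll, "CreateWindowEx");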

most commonly in my case, this sort of usage pattern pops up when calling
from statically compiled code into dynamically compiled code, since for
(likely fairly obvious) reasons, one can't call directly from a
statically-compiled piece of code into code which does not exist until
run-time.

this is also one of the few nasty places where I need glue code to interface
with script languages (another place being exposing the contents of C
structs to dynamically-typed languages, ...), as the machinery for doing all
this transparently is still not complete.

actually, implicit C -> dynamically-typed-language calls present a few other
hassles, such as needing to identify another function/prototype/... as a
"template" for the function pointer to return (although many wrappers can
use themselves as the template, presuming they exist and are C functions).

admittedly, I don't really trust the code which does all this, and it is not
well tested, and so the strategy of using an API call to perform the
function call is preferable, rather than trying to call the thing via a
function pointer.

t = dyCall2("someDynamicFunction", x, y); // safer, as it avoids internal thunk-generation nastiness

the above also works in wrappers, but requires extra glue-code if the
objective is a function pointer to use as a callback or similar. current
"best practice" is to not try to use callbacks with dynamically typed code,
or at least until better "proven safe", or at least "proven to generally
work"...

however, the 'extern "lang" ' notation does have the
usefulness that it can be applied to an entire block of
definitions, rather than having to be endlessly repeated for
each declaration.
hence:
void (__stdcall *foo)(int x, double y);
and:
extern "C"
{
...
}

<--
Or also:
extern "C" void foo(void* (*)(void*));
Which can only be called with a pointer to an ``extern "C"''
function.
-->

yes, the "declare a proper variable" case from before.
I have no idea if this works from casts though, or what the exact notation
would be.
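
FWIW, the usual trick (a sketch from memory, and as noted above some
compilers don't enforce linkage on function-pointer types anyway) is to wrap
a typedef in an extern "C" block, since the linkage can't be spelled
directly inside a cast:

extern "C" {
typedef void *(*c_func_t)(void *);   /* pointer to a function with C language linkage */
}
/* the typedef'd name can then appear in a cast: */
c_func_t p = (c_func_t)some_address;   /* 'some_address' is a stand-in, not a real API */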

<--
As is the case under Solaris, HP/UX and AIX. And probably every
other system around.
-->
most non-Windows systems on x86-64 AFAIK use the AMD64 (SysV) calling convention.

<--
Only those running on AMD 64 bit hardware and the Intel clones
of them. Sparc has completely different conventions, as does
HP/UX on HP's PA based machines. IIRC, 32 bit Linux uses an
adaptation of Intel's Itanium 64 bit conventions.
-->

I did say "on x86-64" here, which naturally excludes things like SPARC, ...

IIRC, g++ on Win32 uses the same C++ ABI as on Linux, hence it doesn't play
well with MSVC for C++ code, creating a wall if one is using libraries
compiled with both compilers together...

admittedly, I personally like Win64's design a little more, as
the AMD64 SysV convention seems a bit complicated and over-engineered,
and likely to actually reduce performance slightly in common use cases
versus Win64's.
<--
On 32-bit Linux, I've never used anything.
-->
on 32-bit Linux, there is only a single calling convention in
use, so no one needs to specify it... that doesn't mean the calling
convention is not there, only that there is no need to spell
it out.
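
(for what it's worth, with GCC on 32-bit x86 the "compile line option" from
the earlier quote is -mrtd, which flips the default to a callee-pops
convention; a declaration can pin itself back explicitly, as a sketch:)

/* built with: gcc -m32 -mrtd ...   (default becomes callee-pops) */
int plain(int x);                               /* affected by -mrtd */
int __attribute__((cdecl)) stays_cdecl(int x);  /* pinned to cdecl regardless */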

<--
Rather that the default is universal. I think the difference is
that under Windows, the default may be something like cdecl, but
many (most) of the OS interface functions use something else.
-->

yep.
<--
Yes and no. All of the C compilers I tried did the same thing.
All of the C++ compilers were different. But that's the case
almost universally today as well.
-->
there were differences. many C compilers used OMF (as the object format),
and some used others (including COFF, though this was typically for
DPMI-based compilers; I also seem to remember there being a 32-bit OMF,
...); there were a few others as well IIRC.
although, all this was long ago, and my memory is faded.

<--
Most of the compilers I used under MS-DOS used Microsoft's
object format, which was originally based on Intel's. The one
exception, Intel's own compiler, used the Intel format, but
could link the Microsoft object format as well.
-->

fair enough...



[...]
but, Windows and x86 (or Windows and x64 / x86-64) represent
the vast majority of total systems (desktop and laptop at
least) in use...

<--
You notice that you have to qualify it. I'd be surprised if
there were more Windows than Symbian or VxWorks, in terms of
numbers of machines running the system. And of course, Unix
still dominates servers and large scale embedded systems
(network management, telecoms, etc.).
-->

servers and embedded systems represent different domains...

a lot depends on if one is intending the eventual target use of the app to
be:
on a server somewhere;
being used by an end-user;
on someone's cellphone;
in their microwave;
....

most of my experience is with desktop/end-user targeted software, and here
Windows is dominant...

Linux and OSX (on x86 or x86-64) are most of the rest.
ARM and PPC are used in many embedded systems. (I'm not sure of the
OS popularity distribution for embedded systems; from what I
have seen, I would guess Linux, FreeDOS, and various
proprietary OSs...)

<--
VxWorks dominates, I think. Except on portable phones, where
Symbian dominates.
-->

fair enough.

most of my (limited) exposure to embedded systems has been with things like
Linksys routers and with Mizu and some other PRC-manufactured devices (which
often use Linux AFAICT).

granted, I am not sure where most devices are manufactured, or what the
largest manufacturing statistics tend to be.

(just as a wild guess from personal experience, I would think the PRC
manufactures most of the devices I see around, and what little I have heard
implies that a stripped-down Linux kernel is most popular there, but really
I don't know... and most of these devices are not terribly convenient to
just go and look at and try to figure out what sort of OS or HW they are
running...)

most other architectures and operating systems can be largely
safely ignored...

<--
Unless you're doing something important: a large scale server,
network management, etc. I've done far more work under Solaris
than under Windows.
-->

fair enough, but I have never really done much related to larger-scale
systems, since these are generally the sole property of people/companies/...
who actually have money...

otherwise, one may run a server which is basically just a Win XP laptop or
similar which is left always running and is maybe rebooted every so often,
like if it starts bogging down or crashes or whatever...

luckily, most often if XP crashes it will reboot anyways, minimizing the
need for manual intervention most of the time (except if it gets stuck on a
blue-screen or similar...).
 

Jorgen Grahn

yep.

would have been nice...

as is, some of the benefit of open source is lost when all software needs to
be compiled for a particular system...

[I'm not sure this is always correct -- at least my systems come
with legacy libc/libstdc++ versions, and if your packages say they
require them, they will be installed. That should cover a few
years.]

I'd say one benefit of open source is that there *can be* different
systems. You have *BSDs and Linuxes for all possible tastes and
architectures.

Windows seems so stable partly because they're locking you to an
obsolete architecture (Intel x86). They were late with AMD64 and my
company still runs new machines in 32-bit mode -- presumably because
too much third-party closed-source software sees that as "the Windows
ABI".

/Jorgen
 

Ian Collins

would have been nice...

as is, some of the benefit of open source is lost when all software needs to
be compiled for a particular system...

[I'm not sure this is always correct -- at least my systems come
with legacy libc/libstdc++ versions, and if your packages say they
require them, they will be installed. That should cover a few
years.]

I'd say one benefit of open source is that there *can be* different
systems. You have *BSDs and Linuxes for all possible tastes and
architectures.

You have missed the point of this off topic ramble. The gripe was
incompatibilities between versions of the same OS, not between different
OSs.
 

BGB / cr88192

Ian Collins said:
On 08/31/10 04:05 AM, BGB / cr88192 wrote: ....
if there is one thing I think Windows got right better than most Unix
variants and friends did, it is that they tend to define things down to
the binary level, so although it may be painful to interop at these
levels in many cases, it is at least possible...

in nearly all forms of Unix, most binary details are left up to the
implementation (including flag bit constants, numerical magic values,
...). only really source-level compatibility is considered, and even
then it is a bit hit or miss, as a lot is still only vaguely defined and
tends to differ from one system to another.

Which is why Solaris and its derivatives are such a breeze to release
code for. Build on the oldest version you want to support and it will
work on all newer versions. I agree it's a shame Linux didn't follow
that path.

would have been nice...

as is, some of the benefit of open source is lost when all software
needs to be compiled for a particular system...

[I'm not sure this is always correct -- at least my systems come
with legacy libc/libstdc++ versions, and if your packages say they
require them, they will be installed. That should cover a few
years.]

I'd say one benefit of open source is that there *can be* different
systems. You have *BSDs and Linuxes for all possible tastes and
architectures.

You have missed the point of this off topic ramble. The gripe was
incompatibilities between versions of the same OS, not between different
OSs.

yeah...

WinXP can run software all the way back to the DOS era, and Vista/Win7 can run
most software back to the Win95 timeframe (and still older software, but that
requires an emulator).


and, yes, customization is good.
however, needlessly breaking old code, effectively requiring it to be
recompiled for a particular version of a particular distro, is limiting.

this essentially takes away much of the advantage of rapidly distributing
binary code, which is about the only real way most end users are likely to
understand how to install software. this effectively puts much of the weight
of distributing binary packages into the hands of the people creating and
maintaining the distributions (hence the rise of people using "package
managers" as their main way of getting software).

another problem (even for people who understand the OS) is that many larger
packages are a *PAIN* to get built, and worse yet, if one builds and installs
a new version of something, it also carries the risk of essentially breaking
the OS due to versioning issues, ... the strategy then is to wait for the
next version of the distro, and then "upgrade".

so, from the POV of an end user, they may almost as well just be using an
iPhone or iPad, where all software has to be obtained from the App Store and
so on...


most of the blame, I suspect, is best put on many of the end-developers, many
of whom don't follow even basic versioning practices:
don't change anything in the public API unless it is critical to do so;

don't add fields to structs or move them around to make them nicer (or,
OTOH, avoid using structs or complex data types in public APIs);
don't needlessly change numerical values (reorganizing flag bit patterns,
changing around magic numbers, ...);
don't notably change the behavior of API calls (or, worse yet, change their
function signatures...);
.... (a sketch of one defensive pattern for the struct case follows below)
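
one defensive pattern for the struct case (a sketch; the names here are
hypothetical, but the idea is the same one Win32 uses with its cbSize
fields): have callers stamp the struct with its compile-time size, so the
library can tell which fields a given, possibly older, binary actually has:

#include <stddef.h>

typedef struct FooParams {
    size_t size;   /* caller sets this to sizeof(FooParams) */
    int    flags;
    /* new fields are only ever appended, never inserted or reordered */
} FooParams;

int foo_init(const FooParams *p)
{
    int flags = 0;
    /* only read 'flags' if the caller's struct is new enough to contain it */
    if (p->size >= offsetof(FooParams, flags) + sizeof(int))
        flags = p->flags;
    /* ... initialize using 'flags' ... */
    return 0;
}

/* caller side: FooParams fp = {0}; fp.size = sizeof(fp); fp.flags = 1; foo_init(&fp); */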


but, I suspect much of this is made harder for many libraries, as their
public API is often just the innards of the library directly exposed,
rather than a (separate) external API designed for the thing (with all of
the library internals kept essentially hidden behind an impassable wall).

but, in a sense, this impassable-wall design is needed to help avoid binary
versioning issues.
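
a sketch of the impassable-wall style (hypothetical names; the point is just
that the struct layout never appears in the public header, so the internals
can change freely between versions):

/* public header: only an opaque handle and functions cross the wall */
typedef struct Widget Widget;    /* the definition lives only in the .c file */
Widget *widget_create(void);
void    widget_set_flags(Widget *w, int flags);
void    widget_destroy(Widget *w);

clients can't sizeof(Widget) or poke at its fields, so appending or
reordering internals never breaks previously compiled callers.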

part may also be the developer mindset that "open source means one can
rebuild from source", which is taken to relegate the importance of binary
compatibility to being a concern only for "closed-source" systems.


for example, Win7 is architecturally *very* different from Win95, yet Win95
software still generally works.
MS is probably doing something right...


or such...
 

Jorgen Grahn

On 08/31/10 04:05 AM, BGB / cr88192 wrote: ....
if there is one thing I think Windows got right better than most Unix
variants and friends did, it is that they tend to define things down to
the binary level, so although it may be painful to interop at these
levels in many cases, it is at least possible...

in nearly all forms of Unix, most binary details are left up to the
implementation (including flag bit constants, numerical magic values,
...). only really source-level compatibility is considered, and even
then it is a bit hit or miss, as a lot is still only vaguely defined and
tends to differ from one system to another.

Which is why Solaris and its derivatives are such a breeze to release
code for. Build on the oldest version you want to support and it will
work on all newer versions. I agree it's a shame Linux didn't follow
that path.

would have been nice...

as is, some of the benefit of open source is lost when all software needs to
be compiled for a particular system...

[I'm not sure this is always correct -- at least my systems come
with legacy libc/libstdc++ versions, and if your packages say they
require them, they will be installed. That should cover a few
years.]

I'd say one benefit of open source is that there *can be* different
systems. You have *BSDs and Linuxes for all possible tastes and
architectures.

You have missed the point of this off topic ramble. The gripe was
incompatibilities between versions of the same OS, not between different
OSs.

I don't think I really missed it -- I counted all the different
releases of the different Linux distributions as "versions of the same
OS".

I should not have brought in the BSDs, perhaps.

/Jorgen
 
