Is C++ really portable?


Jakob Bieling

DeMarcus said:
Thank all of you for all your answers, I really
appreciate it. But there's one very important
thing that I might have been a little vague about,
and that is the importance of not having to
recompile the application because of a library
change. Therefore #ifdef won't do.

But will you not have to recompile anyway, since you cannot use the
Linux executable on a Windows system?
DeMarcus said:
Suppose you sell your application to someone and
this guy in turn buys a plugin to it from a third
party vendor, where this plugin only supports
"strangeThread" but not the app's default pthread.
Then it would be neat to just send him a new
strangeThread.so or .dll to put in a specific
directory and suddenly it works.

Well, think about it: that plugin is written after you created your app,
so who would ever create a plugin that does not work with the app? Sure, you
do restrict plugin writers to a specific lib, but that is life ;)
DeMarcus said:
It's feasible with the design patterns 'abstract
factory' or 'bridge', but both of them result in
an overhead with pointer referencing and memory
allocation. I just hoped C++ could do it
anyway, but the only solution I can see right now
is this one.

As I mentioned in another post, you just create an interface and let each
library contain a class derived from that interface that does all the
implementation-specific stuff. Then you have the dynamic lib export two
functions: create_thread and destroy_thread. In the former you allocate the
derived class with new and return the pointer, in the latter you destroy it
again. The overhead of allocating from the heap will surely be negligible,
because it is rather unlikely that you will create hundreds of threads ..
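
Just as a rough sketch (the names IThread, StrangeThread and the file names
are made up here, and the actual export macros and calling conventions depend
on your platform), it could look something like this:

// thread_interface.h -- shipped with the application
class IThread
{
public:
    virtual ~IThread() {}
    virtual void start() = 0;
    virtual void join() = 0;
};

// the two functions every thread library has to export
extern "C" IThread* create_thread();
extern "C" void destroy_thread(IThread* t);

// strange_thread.cpp -- compiled into strangeThread.so / strangeThread.dll
class StrangeThread : public IThread
{
public:
    void start() { /* call into the strangeThread library here */ }
    void join()  { /* ... */ }
};

extern "C" IThread* create_thread()
{
    return new StrangeThread;      // allocated inside the library
}

extern "C" void destroy_thread(IThread* t)
{
    delete t;                      // freed by the same library that created it
}

The application only ever sees IThread*, so shipping a different .so or .dll
never forces a recompile.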

hth
 

Siemel Naran

Gianni Mariani said:
Also, on some systems, a virtual function call is just as fast
as a regular call.

How can this be? You have to look up the address of a function in the
virtual table and then call it, so logic suggests it has to be slower. But
maybe I'm overlooking something?
 

Siemel Naran

DeMarcus said:
I really like that pool allocation idea! That's an area where I want to
improve my programming skills. Do you know where to find exhaustive
information about creating rock solid buffer pools?



That's something I didn't know about. Do you know where I can read
more about it?

Try google, google newsgroups, browse the boost libraries, etc. Sorry, have
to catch a flight now.
 

Daniel T.

DeMarcus said:
Let's say we're gonna make a system library and have a class with
two prerequisites.

* It will be used _a_lot_, and therefore need to be really efficient.
* It can have two different implementations. (e.g. Unix/Windows)

I feel stuck. The only solution I've seen so far is using the
design pattern 'abstract factory' that gives me a pointer to a pure
virtual interface (which can have whatever implementation). But that
forces me to make a memory allocation every time I need an instance
of that class! That's anything but efficient.

Do I have to live with this? Or do I have to make some kind of

class SystemThing
{
    ...
private:
    strange_unix_var uv;
    strange_win_var wv;
};

and then use the appropriate variable in the library implementation?

How is this problem commonly solved?

Part of my job has been to ensure that the code written by the rest of
the people on the team is portable between Windows and MacOS X. Here is
how I solve the problem:

class GenericThingThatsImplementedDifferently {
    class Impl;
    Impl* pimpl;
public:
    void genericFunction();
};

Then I will write two or three source files: one with the Windows-specific
code, one with the Mac-specific code, and an optional third with the
non-specific code.
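
As a rough sketch (the names are made up, and I've added a constructor and
destructor, which you need anyway to create and delete the Impl), the header
plus the Windows source file might look roughly like this; the Mac file would
define the same three members in its own way:

// GenericThing.h -- the only header clients ever see
class GenericThingThatsImplementedDifferently {
    class Impl;     // defined only in the platform-specific source files
    Impl* pimpl;
public:
    GenericThingThatsImplementedDifferently();
    ~GenericThingThatsImplementedDifferently();
    void genericFunction();
};

// GenericThing_win.cpp -- only this file is compiled on Windows
#include "GenericThing.h"
#include <windows.h>

class GenericThingThatsImplementedDifferently::Impl {
public:
    void doIt() { /* Win32 calls go here */ }
};

GenericThingThatsImplementedDifferently::GenericThingThatsImplementedDifferently()
    : pimpl(new Impl) {}

GenericThingThatsImplementedDifferently::~GenericThingThatsImplementedDifferently()
{
    delete pimpl;
}

void GenericThingThatsImplementedDifferently::genericFunction()
{
    pimpl->doIt();  // forward to the platform-specific implementation
}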
 

Gianni Mariani

Siemel said:
How can this be? You have to look up the address of a function in the
virtual table and then call it, so logic suggests it has to be slower. But
maybe I'm overlooking something?

CPUs today are very advanced, and many things that were true even 10
years ago no longer are. The biggest effects come from multiple issue,
speculative execution, cache effects and branch prediction.

For example - look at this code:


class foo
{
public:
    virtual bool X() const = 0;
};

bool Y( void * v );    // an ordinary, non-virtual function

int DY( void ** v )
{
    Y( *v );           // regular call
    return 0;
}

int Dfoo( const foo ** v )
{
    (*v)->X();         // virtual call through the vtable
    return 0;
}


$ g++ -mtune=pentium4 -c -o xx.o xx.cpp -O3
$ objdump --disassemble xx.o

xx.o: file format elf32-i386

Disassembly of section .text:

00000000 <_Z2DYPPv>:
0: 55 push %ebp
1: 89 e5 mov %esp,%ebp
3: 83 ec 08 sub $0x8,%esp
6: 8b 55 08 mov 0x8(%ebp),%edx
9: 8b 02 mov (%edx),%eax
b: 89 04 24 mov %eax,(%esp,1)
e: e8 fc ff ff ff call f <_Z2DYPPv+0xf>
13: 31 c0 xor %eax,%eax
15: c9 leave
16: c3 ret
17: 90 nop

00000018 <_Z4DfooPPK3foo>:
18: 55 push %ebp
19: 89 e5 mov %esp,%ebp
1b: 83 ec 08 sub $0x8,%esp
1e: 8b 4d 08 mov 0x8(%ebp),%ecx
21: 8b 01 mov (%ecx),%eax
23: 8b 10 mov (%eax),%edx
25: 89 04 24 mov %eax,(%esp,1)
28: ff 12 call *(%edx)
2a: 31 c0 xor %eax,%eax
2c: c9 leave
2d: c3 ret

Note that the code size for the virtual call version is *shorter* than
the non-virtual call. On a multiple issue machine, both would execute
in the same number of cycles.


On an Athlon64 machine:

$ g++ -m64 -c -o xx.o xx.cpp -O3
$ objdump --disassemble xx.o

xx.o: file format elf64-x86-64

Disassembly of section .text:

0000000000000000 <_Z2DYPPv>:
0: 48 83 ec 08 sub $0x8,%rsp
4: 48 8b 3f mov (%rdi),%rdi
7: e8 00 00 00 00 callq c <_Z2DYPPv+0xc>
c: 48 83 c4 08 add $0x8,%rsp
10: 31 c0 xor %eax,%eax
12: c3 retq
13: 90 nop
14: 66 data16
15: 66 data16
16: 66 data16
17: 90 nop
18: 66 data16
19: 66 data16
1a: 66 data16
1b: 90 nop
1c: 66 data16
1d: 66 data16
1e: 66 data16
1f: 90 nop

0000000000000020 <_Z4DfooPPK3foo>:
20: 48 83 ec 08 sub $0x8,%rsp
24: 48 8b 3f mov (%rdi),%rdi
27: 48 8b 07 mov (%rdi),%rax
2a: ff 10 callq *(%rax)
2c: 48 83 c4 08 add $0x8,%rsp
30: 31 c0 xor %eax,%eax
32: c3 retq

Note that again the code size for the virtual call version is *shorter*
than the non-virtual call. Note also that the compiler generated a 32-bit
instruction for the non-virtual call. On some other architectures, a
64-bit call means loading two 32-bit values into two separate registers
and combining them to create a 64-bit address (MIPS n64 is an example).

I don't have access to an Itanium compiler but I suspect that it is even
more telling.

In summary, it depends on your code and the architecture, but there are
examples where it can be faster to use virtual functions compared to
non-virtual functions.

The only time it's interesting to use non-virtual functions is where
it's possible to use inlining. As for the OP, inlining is out of the
question.
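
To illustrate the inlining point with a trivial made-up example: the
non-virtual accessor below can be inlined down to a plain memory load,
while the virtual one normally stays an indirect call through the vtable
(unless the compiler can prove the dynamic type):

struct Point
{
    int x_;
    int x() const { return x_; }           // non-virtual: can be inlined away
    virtual int vx() const { return x_; }  // virtual: normally a real call
};

int sum(const Point& p)
{
    return p.x() + p.x();    // typically no call instruction at all
}

int vsum(const Point& p)
{
    return p.vx() + p.vx();  // two indirect calls, unless devirtualized
}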
 

DeMarcus

Daniel said:
Part of my job has been to ensure that the code written by the rest of
the people on the team is portable between Windows and MacOS X. Here is
how I solve the problem:

class GenericThingThatsImplementedDifferently {
    class Impl;
    Impl* pimpl;
public:
    void genericFunction();
};

Then I will write two or three source files: one with the Windows-specific
code, one with the Mac-specific code, and an optional third with the
non-specific code.

Yes, I think this pattern is what they call a 'bridge' and it looks like
it will be the solution I go for.
 

Daniel T.

DeMarcus said:
Part of my job has been to ensure that the code written by the rest of
the people on the team is portable between Windows and MacOS X. Here is
how I solve the problem:

class GenericThingThatsImplementedDifferently {
    class Impl;
    Impl* pimpl;
public:
    void genericFunction();
};

Then I will write two or three source files: one with the Windows-specific
code, one with the Mac-specific code, and an optional third with the
non-specific code.

Yes, I think this pattern is what they call a 'bridge' and it looks like
it will be the solution I go for.

It looks like a bridge, but it really isn't. In the Bridge pattern, the
Implementor can be different for different Abstraction objects within
the same program (in fact, the Implementor can be changed out for a
particular Abstraction object at runtime), whereas in the above all the
Abstraction objects must use the same Implementor.
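
For comparison, in the Bridge pattern the Abstraction holds a pointer to a
polymorphic Implementor that is chosen per object (and could even be swapped
at runtime), roughly like this made-up sketch:

class ThreadImpl                    // Implementor interface
{
public:
    virtual ~ThreadImpl() {}
    virtual void start() = 0;
};

class PthreadImpl : public ThreadImpl
{
public:
    void start() { /* pthread_create(...) */ }
};

class Win32Impl : public ThreadImpl
{
public:
    void start() { /* CreateThread(...) */ }
};

class Thread                        // Abstraction
{
    ThreadImpl* impl;
public:
    explicit Thread(ThreadImpl* i) : impl(i) {}  // each object picks its Implementor
    void setImpl(ThreadImpl* i) { impl = i; }    // can even change it at runtime
    void start() { impl->start(); }
};

With the pimpl approach above, by contrast, which Impl you get is fixed once
for the whole program when you choose which source file to compile and link.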
 
