peter said:
(e-mail address removed) wrote:
[...]
I certainly hope not. That wouldn't be conforming. (I presume, of
course, that the compiler will only do so if the objects are
PODs as well. Otherwise, the code won't be identical.)
I believe you are wrong.
I'm sure I'm not. The issue has been discussed before.
But you compare two different types. I do know that for two
objects of the same type, if their addresses compare equal, the
addresses must refer to the same object.
This is true for objects of different types as well, as long as
they are complete objects (e.g. not base classes or members of a
class or union). Except, of course, objects with different
complete types must be different objects.
But this is not the same situation: you can't directly compare
a void f(int) and a void f(long). Without really knowing (or
bothering to check!) the standard in this respect, I am quite
confident that to compare them you need some quite heavy casting.
You need some casting, yes. In the case of objects, you don't,
because pointers to objects convert implicitly to void*. In the
case of pointers to function and all pointers to members, you
need some casting. But the guarantees still hold.
About the only case this might be useful in practice, I think,
is in some very advanced forms of template metaprogramming,
where you don't know the types to begin with (except that they
are e.g. pointers to a member function). And even then, I'm not
sure that it would be that useful. But the standard explicitly
guarantees it.
I believe you are wrong. This certainly is not by accident as
there are different mechanisms involved.
You're right about that. The first depends on "weak
references"; the linker ignores any weak references if the
extern has already been resolved. Even if the functions aren't
identical. (That could be a result of undefined behavior, but
it could also be because you compiled some of the instances with
optimization, and others without.)
To remove different occurrences of the same function, you
typically mark the function as discardable: when, during
linking, a second definition of the function shows up, you
discard the new function instead of giving an error message.
For removing different functions that generate the same code,
you probably calculate a hash value of the code. When you
find two functions with the same hash value, you compare the
individual opcodes: if they are identical, one function is, as
above, discarded. Notice that a true duplication of code
(define a non-inline function in a header and include that
header in more than one compilation unit) still gives
linker errors.
Yes. It's also possible to arrange things so that the addresses
are different (e.g. by inserting varying numbers of no-ops before
the functions). But I've never seen a compiler which did this, and
given the size of memory today, I doubt that it's an
optimization with a particularly high priority. (It also has
the problem that it involves the linker, which means that there
can be political problems involved in implementing it.)
In the end, of course, the reason you factor out the common
behavior today is generally because you don't want complex code
to be generic (more difficult to debug and maintain), and you
definitely don't want it in a header (and most compilers still
don't implement export).