STL objects and binary compatibility


osama178

Hi,

What does it mean for an object to be binary compatible? And why
aren't STL objects binary compatible? Any insights, links, resources
for further reading are greatly appreciated.

Thanks.
 

Gianni Mariani

Hi,

What does it mean for an object to be binary compatible? And why
aren't STL objects binary compatible? Any insights, links, resources
for further reading are greatly appreciated.

It sounds like you're a little confused.

Binary compatibility is usually associated with a context.

If you build with different versions of the STL or with various options
that result in binary incompatibility, then this is your choice.

Please elaborate on what you're really concerned about.
 

James Kanze

(e-mail address removed) wrote:
It sounds like you're a little confused.

Between C++ objects and object files? Whether two object files
are binary compatible makes sense as a question.
Binary compatibility is usually associated with a context.

It is usually associated with compiler output: object files and
libraries. If two files are binary compatible, they can be
linked together and will work as expected. (I know you know
that, but I think it's the answer to the question the original
poster tried to ask.)
If you build with different versions of the STL or with
various options that result in binary incompatibility, then this
is your choice.

If you're using third party libraries, you don't always have a
choice. You have to compile with whatever options are necessary
to ensure binary compatibility with what they furnish. (For
starters, this usually means compiling with CC, and not g++.)
 

osama178

further reading are greatly appreciated.
It sounds like you're a little confused.

Binary compatibility is usually associated with a context.

If you build with different versions of the STL or with various options
that result in binary incompatibility, then this is your choice.

Please elaborate on what you're really concerned about.

Sorry, my question was not very clear. In other words, can STL objects
cross DLL boundaries? For example, can we pass STL objects between
different binaries by reference safely?
 

Gianni Mariani

further reading are greatly appreciated.

Sorry, my question was not very clear. In other words, can STL objects
cross DLL boundaries? For example, can we pass STL objects between
different binaries by reference safely?

Only if you're using the same STL library (or binary compatible
library). This usually means you're stuck with making sure that the DLL
versions are using the same compiler major revision.
 

Ian Collins

further reading are greatly appreciated.

Sorry, my question was not very clear. In other words, can STL objects
cross DLL boundaries? For example, can we pass STL objects between
different binaries by reference safely?

That question goes beyond C++; you'd better ask it in a Windows group.
 

James Kanze

Only if you're using the same STL library (or binary
compatible library). This usually means you're stuck with
making sure that the DLL versions are using the same compiler
major revision.

And the same compiler options. At least some of them can also
affect the binary compatibility. Using g++, try linking code
compiled with -D_GLIBCXX_DEBUG, and code compiled without it.
(Actually, it links just fine---it just crashes at runtime.)
And while I've not actually tried it, I would expect similar
problems with VC++ using mixtures of /Gd, /Gr and /Gz or
different values of /vm<x> or /Zc:wchar_t. (In practice,
without /vmg, you'll get binary incompatibility anyway.)
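
To make that concrete, here's a minimal sketch (file and type names are
invented). Widget's size and layout change with _GLIBCXX_DEBUG because of
its std::vector member, but none of its mangled names do, so the two
object files link cleanly and the damage only shows up at run time:

// ---- widget.h (shared header) ----
#include <cstddef>
#include <vector>

struct Widget {
    Widget();                       // defined in widget.cpp
    std::size_t count() const;      // defined in widget.cpp
    std::vector<int> data;          // layout depends on _GLIBCXX_DEBUG
};

// ---- widget.cpp (compiled WITHOUT -D_GLIBCXX_DEBUG) ----
#include "widget.h"

Widget::Widget() : data(3, 42) {}
std::size_t Widget::count() const { return data.size(); }

// ---- main.cpp (compiled WITH -D_GLIBCXX_DEBUG) ----
#include <iostream>
#include "widget.h"

int main()
{
    Widget w;                  // constructed by the non-debug code
    w.data.push_back(7);       // inline debug-mode code, wrong layout: UB
    std::cout << w.count() << "\n";
    return 0;
}                              // the implicit ~Widget() here is debug code too

// g++ -c widget.cpp
// g++ -c -D_GLIBCXX_DEBUG main.cpp
// g++ widget.o main.o        // links without complaint; the program is broken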

Note too (to the original poster) that whether you're using
dynamic linking or not is really irrelevant. If the object or
library files are not binary compatible, you can't statically
link them either (or the results of the static link will crash).
 

Joe Greer

(e-mail address removed) wrote:
further reading are greatly appreciated.

Sorry, my question was not very clear. In other words, can STL objects
cross DLL boundaries? For example, can we pass STL objects between
different binaries by reference safely?

Yes, no, maybe. Since you use the term DLL, I will assume that you are
talking about Windows. The problems you can have under Windows have to do
with memory allocation. If you pass a collection to a DLL and the DLL adds
an object to it, and that addition causes the collection to allocate
memory, then the allocation is done by code compiled into the DLL, so the
memory comes from the DLL's heap and is handed back to the main program.
This can cause problems with memory accounting when the collection is
later freed, especially if the DLL is unloaded before the collection is
destroyed. This effect can be reduced if you use the DLL version of the
CRT for all DLLs and the exe, because the memory for everything then comes
from the CRT DLL rather than from the individual DLL/exe. Along those same
lines, if you plan on unloading your DLLs, make very sure that every
object allocated from that DLL has been destroyed first, or you will get a
fault when you eventually do get around to destroying those objects.
Something about the destructors not being loaded into memory any more
makes the system get upset. :)
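
A bare-bones sketch of that scenario (the names are invented; assume the
DLL and the exe are each linked against their own, static CRT):

// ---- plugin.cpp, built into plugin.dll ----
#include <string>
#include <vector>

__declspec(dllexport) void add_name(std::vector<std::string>& names)
{
    // If this push_back has to grow the vector, the new buffer is
    // allocated by code living in the DLL, i.e. from the DLL's heap.
    names.push_back("added inside the dll");
}

// ---- main.cpp, built into app.exe ----
#include <string>
#include <vector>

__declspec(dllimport) void add_name(std::vector<std::string>& names);

int main()
{
    std::vector<std::string> names;   // created in the exe
    add_name(names);                  // may reallocate on the DLL's heap
    return 0;
}   // destroyed here: the exe's CRT frees memory the DLL's CRT allocated,
    // which is exactly the accounting problem described above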

HTH,
joe
 

osama178

Thank you all for responding. Your input helped clear things a lot.
Along those same lines, if you plan on
unloading your DLLs, make very sure that every object allocated from that
DLL has been destroyed first, or you will get a fault when you eventually
do get around to destroying those objects. Something about the destructors
not being loaded into memory any more makes the system get upset. :)

This excerpt from Effective C++, Item 18 adds to what you said:

"An especially nice feature of tr1::shared_ptr is that it
automatically uses its per-pointer deleter to eliminate another
potential client error, the "cross-DLL problem." This problem crops
up when an object is created using new in one dynamically linked
library (DLL) but is deleted in a different DLL. On many platforms,
such cross-DLL new/delete pairs lead to runtime errors.
tr1::shared_ptr avoids the problem, because its default deleter uses
delete from the same DLL where the tr1::shared_ptr is created."
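
In code, the pattern from that excerpt looks roughly like this (a sketch;
Widget and createWidget are invented names, and the header providing
std::tr1::shared_ptr varies by compiler):

// ---- exported from widgets.dll ----
#include <memory>    // or <tr1/memory>, depending on the implementation

class Widget { /* ... */ };

__declspec(dllexport) std::tr1::shared_ptr<Widget> createWidget()
{
    // The default deleter is created here, inside the DLL, so the
    // eventual delete runs in the same DLL that performed the new.
    return std::tr1::shared_ptr<Widget>(new Widget);
}

// ---- in the client ----
// std::tr1::shared_ptr<Widget> w = createWidget();
// ...
// When the last copy of w goes away, the Widget is deleted by the
// DLL's code, not the client's, so there is no cross-DLL new/delete pair.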
 

Joe Greer

(e-mail address removed) wrote in

Thank you all for responding. Your input helped clear things a lot.


This excerpt from Effective C++, Item 18 adds to what you said:

"An especially nice feature of tr1::shared_ptr is that it
automatically uses its per-pointer deleter to eliminate another
potential client error, the "cross-DLL problem." This problem crops
up when an object is created using new in one dynamically linked
library (DLL) but is deleted in a different DLL. On many platforms,
such cross-DLL new/delete pairs lead to runtime errors.
tr1::shared_ptr avoids the problem, because its default deleter uses
delete from the same DLL where the tr1::shared_ptr is created."

And that is true. The problem with STL collections is that they can sneak
an allocation in on you for one of their internal structures and the
current versions don't take any steps to guarantee that the internal
structure gets deleted where it was allocated. AFAIK, this is a problem
only for classes where the implementation is all in the header, allowing
for the inlining of methods that may allocate memory.

joe
 

James Kanze

James Kanze said:
(e-mail address removed) wrote: [...]
Sorry, my question was not very clear. In other words, can
STL objects cross DLL boundaries? For example, can we pass
STL objects between different binaries by reference safely?
Only if you're using the same STL library (or binary
compatible library). This usually means you're stuck with
making sure that the DLL versions are using the same compiler
major revision.
And the same compiler options. At least some of them can also
affect the binary compatibility. Using g++, try linking code
compiled with -D_GLIBCXX_DEBUG, and code compiled without it.
(Actually, it links just fine---it just crashes at runtime.)
And while I've not actually tried it, I would expect similar
problems with VC++ using mixtures of /Gd, /Gr and /Gz or
different values of /vm<x> or /Zc:wchar_t. (In practice,
without /vmg, you'll get binary incompatibility anyway.)
The current annoyance in VC++ is the _SECURE_SCL thing; it is
on by default in both Debug and Release builds.

I don't think so. You define whether it is on or off when you
invoke the compiler; you can turn it off in debug builds, and
leave it on in release builds, if that's what you want.
Typically, of course, you'll want it on in both. Typically, of
course, you'll want exactly the same compiler options in your
debug builds as in your release builds, so that your unit tests
test the code that is actually delivered.
IMHO it should be switched off in optimized builds (what
would be the point of optimization otherwise?).

Well, obviously, if the profiler shows you have performance
problems due to the extra checks, then you'll have to turn it
off. I'm not familiar enough with VC++ to be sure, but I rather
suspect that you have to compile everything with the same value
for it; in other words, that it affects binary compatibility.
There was a recent thread in this NG about C++ and Java speed
wars where C++ honour could only be saved by turning
_SECURE_SCL off...

There are a couple of recent threads in this NG that are full of
nonsense. Who cares?
However, if some DLL-s happen to be compiled with different
settings and are trying to expose or exchange STL containers
or iterators, the program will crash at run-time with
mysterious error messages.

In other words, it affects binary compatibility.

From what I've been led to believe, if you use the default
settings for release and debug builds in Visual Studios, you
don't have binary compatibility between the two. I'll admit
that I only have hearsay for that, however, since I've yet to
find a compiler where the default settings were usable for
anything serious, VC++ included. I systematically set things as
I need them.
Note that this is a preprocessor definition which can reside
in an header file as well, so even ensuring the same compiler
options does not guarantee binary compatibility. In a quality
implementation binarily incompatible object files should not
link, but it seems this is not the case in practice.

Yes. This is exactly the problem I was talking about with g++.
And you've got a good point about something being defined (or
undef'ed) in a header: _GLIBCXX_DEBUG is also, formally, a
macro.

Of course, both _GLIBCXX_DEBUG and _SECURE_SCL are squarely in
the implementation namespace. Any code which does anything with
them is invoking undefined behavior. So the answer to that is:
don't do it (and don't use any third party code which is so
poorly written as to do it).
 

James Kanze

(e-mail address removed):

[concerning the _SECURE_SCL option in VC++...]
I'm developing a kind of library which will be used and reused
in yet unknown situations. Performance is often the issue. So
if I can gain any speed in a way as simple as turning off a
nearly useless (at least for me - this feature has produced
only false alarms for me so far) compiler feature, I will
certainly want to do that.

Can you point out an example of when it produces a false alarm?
I've almost no experience with it (since I develop almost
exclusively under Unix), but I know that the various checking
options in g++ have turned up a few errors in my own code
(mostly the concept checking), and have made others (which would
have been caught by the unit tests) far easier to localize and
fix. And I've never seen a false alarm from it.

Which means that you'll probably have to either deliver the
sources, and let the user compile it with whatever options he
uses, or provide several versions of it---how many I don't know.
In this case, the Checked Iterators are enabled by default in
both builds,

Which is simply false. Checked iterators are enabled if you ask
for them; you can enable them in all of the builds you do,
disable them in all, or use any combination you find
appropriate. (I enable them in all my builds.) You choose the
options you compile with; there's nothing which forces you one
way or the other.
thus probably being binary compatible (never tried).

There are a number of different things which can affect binary
compatibility. This is just one. As I mentioned earlier, the
only sure answer is to compile everything with exactly the same
options (which rather argues for delivering sources).
I would say this would be the only justification of this
feature (to be included in the Release build). OTOH, in order
to mix Debug and Release builds in such a way one has to solve
the conflicts appearing from multiple runtime libraries first,
which is not an easy task (and not solvable by command-line
switches AFAIK).

What libraries you link with very definitely is controlled by
the command line. How else could they be controlled, since the
only way to invoke the compiler is by the command line? (All
Visual Studios does is create the equivalent of a command line
invocation.) Look at the /M options.

[...]
This is UB only formally, only from POV of C++ standard. These
macros are documented in the compilers' documentation, so any
code which uses them has implementation-defined behavior.

Yes, but doesn't the documentation more or less say (or at least
imply) that they should be set in the command line, and not by
means of #define's/#undef's in the code. I can imagine that
including some headers, then changing their settings, and
including others, could result in binary incompatibilities
within a single source file. In other words, the implementation
has defined this "undefined behavior", but only in certain
cases.
Of course, VC++ documentation fails to note the binary
incompatibility issues, so the implementation-defined behavior
is actually not very well defined here...

That is, regretfully, a general problem. For many compilers,
just finding the documentation is a challenge, independently of
the quality of that which you do manage to find.
Unfortunately, twiddling the macros is exactly what I'm trying
to do (#define _SECURE_SCL 0 in the Release build). I will
need much more convincing before giving it up.

Whatever you do with _SECURE_SCL, do it in the command line,
not in source code. But unless there are some very strong
counter arguments, I'd recommend delivering source code, and
letting the client choose his options. (And yes, I realize that
this means additional maintenance work, since sooner or later,
you'll get a bug report concerning a combination of options you
didn't test.) Alternatively, you'll probably want a fair number
of versions of the library: full debugging support, normal
release version (with all run-time checks still in, of course),
an optimized version with run-time checks removed, a version
compiled for profiling, and possibly other variants as well, all
for both multi-threaded and single threaded, for use in a DLL or
in the main, with or without the Boehm collector, etc., etc.
 

Triple-DES

Maybe I should have been more precise. The alarms are not "false" in the
sense that the code has UB by the standard. On the other hand, the code
appears to have defined meaning and guaranteed behavior by the same
implementation which raises the alarms. Example 1:

#include <vector>
#include <iostream>
#include <ostream>

double summate(const double* from, const double* to) {
        double sum=0.0;
        for(const double* p=from; p!=to; ++p) {
                sum += *p;
        }
        return sum;

}

int main() {
        std::vector<double> v;
        v.push_back(3.1415926);
        v.push_back(2.7182818);
        size_t n=v.size();
        std::cout << summate(&v[0], &v[n]) << "\n";

}

In real life, summate() is some legacy function accepting double*
pointers, which has to be interfaced with a new std::vector array used
for better memory management. The expression &v[n] is formally UB. VC++
on XP crashes the application with the message "test.exe has encountered
a problem and needs to close.  We are sorry for the inconvenience.". If
_SECURE_SCL is defined to 0, the program runs nicely and produces the
expected results.

I think the problem is not only formal in this case, because the
expression v[n], or v.operator[](n) dereferences an invalid pointer.
Okay, the program may happen to work, but the checked iterator alarm/
crash is justified by the fact that it exposes a potential problem
that could easily be avoided by using (&v[0] + n) instead of &v[n].
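
For reference, here is Example 1's main() with the one-past-the-end
pointer formed from a valid element instead of &v[n] (the includes and
summate() stay exactly as above):

int main() {
        std::vector<double> v;
        v.push_back(3.1415926);
        v.push_back(2.7182818);
        size_t n=v.size();
        std::cout << summate(&v[0], &v[0] + n) << "\n";  // no v[n] access
}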
 

James Kanze

in news:[email protected]:
(e-mail address removed):
[concerning the _SECURE_SCL option in VC++...]
IMHO it should be switched off in optimized builds (what
would be the point of optimization otherwise?).
Well, obviously, if the profiler shows you have performance
problems due to the extra checks, then you'll have to turn it
I'm developing a kind of library which will be used and reused
in yet unknown situations. Performance is often the issue. So
if I can gain any speed in a way as simple as turning off a
nearly useless (at least for me - this feature has produced
only false alarms for me so far) compiler feature, I will
certainly want to do that.
Can you point out an example of when it produces a false alarm?
I've almost no experience with it (since I develop almost
exclusively under Unix), but I know that the various checking
options in g++ have turned up a few errors in my own code
(mostly the concept checking), and have made others (which would
have been caught by the unit tests) far easier to localize and
fix. And I've never seen a false alarm from it.
Maybe I should have been more precise. The alarms are not
"false" in the sense that the code has UB by the standard. On
the other hand, the code appears to have defined meaning and
guaranteed behavior by the same implementation which raises
the alarms.

That is obviously false, since if it raises the alarm, it is
expressly saying that the behavior isn't defined.
Example 1:
#include <vector>
#include <iostream>
#include <ostream>
double summate(const double* from, const double* to) {
    double sum=0.0;
    for(const double* p=from; p!=to; ++p) {
        sum += *p;
    }
    return sum;
}
int main() {
    std::vector<double> v;
    v.push_back(3.1415926);
    v.push_back(2.7182818);
    size_t n=v.size();
    std::cout << summate(&v[0], &v[n]) << "\n";
}
In real life, summate() is some legacy function accepting
double* pointers, which has to be interfaced with a new
std::vector array used for better memory management. The
expression &v[n] is formally UB.

Not just formally. I'm not aware of any implementation which
defines it.
VC++ on XP crashes the application with the message "test.exe
has encountered a problem and needs to close. We are sorry
for the inconvenience.". If _SECURE_SCL is defined to 0, the
program runs nicely and produces the expected results.

By sheer chance, maybe. It's definitely something that you
shouldn't be doing.
Example 2:
#include <vector>
#include <iostream>
#include <ostream>
double summate(const double* arr, size_t n) {
    double sum=0.0;
    for(size_t i=0; i<n; ++i) {
        sum += arr[i];
    }
    return sum;
}

int main() {
    std::vector<double> v;
    // ...
    std::cout << summate(&v[0], v.size()) << "\n";
}
This fails in the similar fashion, but only on an important
customer demo, where the size of array happens to be zero.

And? It's undefined behavior, and there's no reason to expect
it to work.
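
For what it's worth, a call that stays defined even when the vector is
empty might look like this (a sketch; summate() as in Example 2 above):

#include <iostream>
#include <ostream>
#include <vector>

double summate(const double* arr, size_t n);     // as defined above

int main() {
        std::vector<double> v;                   // possibly empty
        const double* p = v.empty() ? 0 : &v[0]; // never index an empty vector
        std::cout << summate(p, v.size()) << "\n";
}
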
It appears that VC++ is warning me against potential problems
which might happen on exotic hardware (or on common hardware
in exotic modes), but does this in the most unpleasant way,
crashing at run time an application which would have run
nicely otherwise.

Or not. It certainly wasn't guaranteed to run nicely.
Providing the sources is out of the question by company rules.
Providing several versions is out of the question because of a
lack of resources.

So you have to impose restrictions on the user. If they accept,
fine. If they prefer to find a different supplier who doesn't
impose such restrictions, that could be a problem.
Cite from http://msdn.microsoft.com/en-us/library/aa985965.aspx

"Checked iterators apply to release builds and debug builds.
...

Maybe, but I manage to turn them off or on at will. (I
currently have them turned on in both release and debug builds,
at least at present, but I rather suspect that I'll change this
policy in the long. Basically, I use a common build for both
release and debug, and offer an "optimized" build for the
special cases where performance is an issue. And it does make
sense to turn them off in the optimized build.)
The default value for _SECURE_SCL is 1, meaning checked
iterators are enabled by default."

The default value is only used if _SECURE_SCL hasn't been
previously defined. If I invoke VC++ with -D_SECURE_SCL=0,
there's no checking.
Yes that's true in theory, but not in practice. It is possible
to link with the run-time libraries meant for another build.

Certainly. You compile for one build, and you tell the compiler
to link with the libraries for another, and the compiler will do
exactly what you tell it to.
And no, this is not simpler. For starters, VC++ wizards
generate code (can be deleted of course) to redefine the "new"
keyword in Debug builds, which would cause link errors when
attempting to link to the Release-build run-time library.

In other words, no one in his right mind uses the wizards.
Because the last thing you want is funny things happening with
your keywords.
All modules exchanging dynamically allocated objects should
better be linked to the same run-time library - if the choice
is not the default one it is very hard to achieve. And by
abandoning the debug library one abandons also the debugging
features it offers.

But all objects will be linked to the same run-time library,
since you only link once. I think you're confusing issues
somewhat: what you mean, I think, is that all of the modules
must be compiled with the same options (at least with regards to
those which can affect binary compatibility), and that the
application must then be linked with a run-time library which
was also compiled with those options.
Compared with the simplicity and flexibility of the Linux/ELF
dynamic linking process (cf. libefence) it's a shame. I know
your answer - use static linking and monolithic applications.
Unfortunately this is not always an option.

Not always, I know. And when it's not, I've run into exactly
the same problem under Linux, with g++. For that matter, I've
run into it when statically linking as well---the problem is
really independent of whether you link statically or
dynamically. If sizeof( std::vector<double> ) is 12 in the
calling function, and 28 in the library routine which receives a
reference to it, problems will ensue.

Which brings us back to the start of the thread: binary
compatibility. Binary compatibility means not just using the
same version of the same compiler on the same platform, it also
means using the same compiler options, at least for some of
those options.
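
Since the mismatch shows up in something as blunt as sizeof, a cheap
run-time check on both sides of the boundary can at least turn the crash
into a diagnostic. A sketch (the names are invented, and it only catches
gross size differences, not every possible incompatibility):

#include <cstddef>
#include <string>
#include <vector>

struct LayoutProbe {
    std::size_t vec_double;
    std::size_t str;
};

// compiled into the library:
LayoutProbe libraryLayout()
{
    LayoutProbe p;
    p.vec_double = sizeof(std::vector<double>);  // the library's idea of it
    p.str        = sizeof(std::string);
    return p;
}

// compiled into the client, called before any containers cross over:
bool layoutsAgree()
{
    LayoutProbe lib = libraryLayout();
    return lib.vec_double == sizeof(std::vector<double>)  // the client's idea
        && lib.str        == sizeof(std::string);
}
// If this returns false, refuse to go any further: the two sides were
// built with incompatible options.
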
[...]
Of course, both _GLIBCXX_DEBUG and _SECURE_SCL are squarely in
the implementation namespace. Any code which does anything with
them is invoking undefined behavior. So the answer to that is:
don't do it (and don't use any third party code which is so
poorly written as to do it).
This is UB only formally, only from POV of C++ standard. These
macros are documented in the compilers' documentation, so any
code which uses them has implementation-defined behavior.
Yes, but doesn't the documentation more or less say (or at least
imply) that they should be set in the command line, and not by
means of #define's/#undef's in the code. I can imagine that
Cite from http://msdn.microsoft.com/en-us/library/aa985896.aspx
To enable checked iterators, set _SECURE_SCL to 1:
#define _SECURE_SCL 1
To disable checked iterators, set _SECURE_SCL to 0:
#define _SECURE_SCL 0

Yuck. They really need to fix that. (What happens if you
define _SECURE_SCL to 0 after including <vector>?)

At any rate, I've just verified: /D_SECURE_SCL=0 in the command
line does the trick. But you very definitely must be sure that
all of the code using your library is compiled with this option.
If you're selling a library to outside customers, this could be
considered a very constraining requirement.
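
If the macro route is used at all, the definition has to come before the
first standard header in every translation unit that will exchange these
objects -- for example via a small header that everything includes first
(a sketch; the /D form above remains the safer choice):

// checked_iterators.h -- must be the very first include everywhere
#ifndef CHECKED_ITERATORS_H
#define CHECKED_ITERATORS_H

#ifndef _SECURE_SCL
#define _SECURE_SCL 0          // must have the same value in every TU and DLL
#endif

#endif

// some_file.cpp
#include "checked_iterators.h" // before <vector> and friends
#include <vector>
// ...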
 

kwikius

----- Original Message -----
From: "Paavo Helde" <[email protected]>
Newsgroups: comp.lang.c++
Sent: Tuesday, May 13, 2008 6:27 PM
Subject: Re: STL objects and binary compatibility

Now I have to deliver the final application (Release mode) to the
customer. The question is whether to keep the checked iterators feature
on or not. I have the following choices:

a) Keep the checked iterators on: this makes the program somewhat slower,
and if a corner case of this UB is encountered, the probability of a
customer support problem appearing is 100%.

b) Switch them off: the code is faster, and if this corner case of UB is
encountered, the probability to have a customer support problem is below
100%

It's obvious what you *should* do...

:)

regards
Andy Little
 

ajalkane

And the same compiler options. At least some of them can also
affect the binary compatibility. Using g++, try linking code
compiled with -D_GLIBCXX_DEBUG, and code compiled without it.
(Actually, it links just fine---it just crashes at runtime.)
And while I've not actually tried it, I would expect similar
problems with VC++ using mixtures of /Gd, /Gr and /Gz or
different values of /vm<x> or /Zc:wchar_t. (In practice,
without /vmg, you'll get binary incompatibility anyway.)

How does this affect exceptions? If I read correctly,
does it mean that it is a bad idea to throw exceptions from
a library function because the library might be compiled
with different compile options than the application using
the library?
 

Ian Collins

ajalkane said:
How does this affect exceptions? If I read correctly,
does it mean that it is a bad idea to throw exceptions from
a library function because the library might be compiled
with different compile options than the application using
the library?

Typically this isn't a problem. Consider your environment's run-time
library; it will throw exceptions as specified by the standard.
 

James Kanze

(e-mail address removed):
27g2000hsf.googlegroups.com:
[concerning the _SECURE_SCL option in VC++...]
[...]
VC++ on XP crashes the application with the message "test.exe
has encountered a problem and needs to close. We are sorry
for the inconvenience.". If _SECURE_SCL is defined to 0, the
program runs nicely and produces the expected results.
By sheer chance, maybe. It's definitely something that you
shouldn't be doing.
Example 2:
#include <vector>
#include <iostream>
#include <ostream>
double summate(const double* arr, size_t n) {
    double sum=0.0;
    for(size_t i=0; i<n; ++i) {
        sum += arr[i];
    }
    return sum;
}
int main() {
    std::vector<double> v;
    // ...
    std::cout << summate(&v[0], v.size()) << "\n";
}
This fails in the similar fashion, but only on an important
customer demo, where the size of array happens to be zero.

And? It's undefined behavior, and there's no reason to expect
it to work.
It appears that VC++ is warning me against potential problems
which might happen on exotic hardware (or on common hardware
in exotic modes), but does this in the most unpleasant way,
crashing at run time an application which would have run
nicely otherwise.
Or not. It certainly wasn't guaranteed to run nicely.

Programming is kind of engineering. In engineering you have to
make trade-offs in order to achieve your goals. Here, in this
case I have a large application which may contain UB similar
to above cases. I have tried hard to find and fix all UB, but
one never can be sure. The UB of the above sort does arise
only if the size of vector is zero, which usually does not
happen. The code is complex and I'm not sure all corner cases
are correctly covered by unit tests. In other words, I believe
that the code is correct, but I am not 100% sure.

That's one of the nice things about UB:). You can never be
100% sure.
Now I have to deliver the final application (Release mode) to
the customer. The question is whether to keep the checked
iterators feature on or not. I have the following choices:
a) Keep the checked iterators on: this makes the program
somewhat slower, and if a corner case of this UB is
encountered, the probability of a customer support problem
appearing is 100%.
b) Switch them off: the code is faster, and if this corner
case of UB is encountered, the probability to have a customer
support problem is below 100% (0% by empiric evidence, but I
will not stress that). The probability of producing slightly
wrong results is somewhat higher than 0, but also close to
zero by my estimation, as the operation involved in UB case
most probably does not write anything to memory.
Of course, the decision also depends on the application
domain. In a safety-critical work you cannot live on
probabilities. In our case my favorite is clearly b).

Maybe. You explain a bit more about the commercial context
later, which may mean that you don't really have much choice.
Of course they can be turned off. In an earlier message in
this thread you said: "Any code which does anything with them
(_GLIBCXX_DEBUG and _SECURE_SCL) is invoking undefined
behavior." Or do you want to claim it is not UB if specified
on command-line?

Well, most correctly: an implementation may define undefined
behavior. If you conform to the "definitions" in the
documentation, you're OK. I had simply supposed that the
documentation would say to define it on the command line,
because it seems to be the only thing which makes sense.
Judging from the little bit you quoted, however, there seems to
be a problem in the documentation as well. (I'm pretty sure
that *if* you use a #define in the code, it has to be before any
of the includes. Personally, I think it easier to manage
this---and the potential portability issues---by putting it in
the command line.)

(Technically, of course, the command line is implementation
defined, totally. But I don't want to get into playing word
games.)
Yep, that's what the word "default" means.

Yes. And the fact that it is a "default" generally implies that
other values are possible. Whether you use it or not is up to
you.
[...]
But all objects will be linked to the same run-time library,
since you only link once. I think you're confusing issues
I have about 20-30 different DLL-s, which are all first linked
once when they are built, and loaded in the running
application dynamically when needed, thus completing the
linking process. Different DLL-s may well happen to be linked
to different runtime libraries (as is the case for static libs
as well, but this can be controlled much better).

Because you do the linking:). In general, avoid DLL's unless
they are absolutely necessary, because you don't know what
you're going to get. However...

[...]
We are selling a complete application containing many
libraries. In principle the customer can develop and add their
own libraries. OTOH, they can also create custom applications
using our libraries. Yes, it might be appropriate to offer
different versions. However, as you can see we have hard times
sometimes even to get the single version consistent...

So your commercial constraints say you need the DLL's. And
binary compatibility with code which you don't control. It's a
difficult problem, since you really don't have any control over
what the customer does. At the very least, however, I think you
have to deliver two versions, corresponding to the two default
command lines that Visual Studio uses. (Not that these
defaults are really useful for anything, but a lot of people
will probably use them.) Beyond that, I can well see an
interest in providing other versions as well. The problem is,
of course, documenting this, and getting your customers to
manage it correctly. I'm not familiar enough with the Windows
world to make any concrete suggestions, but under Unix, I'd
start by putting each version in a separate directory, and very
carefully documenting the compiler flags which can't be changed
for each, probably providing a set of script files which can be
sourced to define shell variables to be used when invoking the
compiler (or in the makefile). This also would work for me
under Windows, but I'm pretty sure that I'm not a typical
Windows developer. (I use bash/vim/GNU make, rather than Visual
Studios:).) I don't know how you'd integrate this so it would
be easy to import under Visual Studios (but I rather suspect
that finding a way is a commercial necessity for you).
 

James Kanze

Typically this isn't a problem. Consider your environment's
run time library, it will throw exceptions as specified by the
standard.

It depends. It may also throw exceptions in case of undefined
behavior. And binary compatibility will affect any exceptions
you throw as well.

And I really should have mentioned the /EH options as well. If
you compile some code without them, you may have binary
compatibility problems if other code is compiled with one of
them. I tend to forget them, however, since if you use the
default (no /EH options, i.e. no exception support), you get
compiler errors as soon as you include any of the standard
headers (which do use exceptions); in practice, /EHs is about
the only thing that makes sense (sort of like /vmg---I don't
even understand why the option is there, much less why the
default is not to use the only reasonable value for it).
 
