On Apr 14, 8:12 pm, Tony Strauss <[email protected]> wrote:
On Apr 13, 10:10 pm, Tony Strauss wrote:
On Apr 13, 2:37 pm, peter koch wrote:
[...]
One additional caveat that I've found with shared
libraries and C++ in particular is that it's easy to make
a change that requires recompilation of all clients of the
library without being aware of it.
Anytime you change anything in a header file, all client
code must be recompiled. Formally, if all you change are
comments, you're OK, and there are a few other things you
can change, but you can't change a single token in a class
definition. And in practice, the simple rule is: header
file changed => all client code must be recompiled. (Which is
the way makefiles work, anyway.)
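To make the makefile side of that concrete, here's a minimal
sketch (file names invented for illustration): because client.o
lists widget.h as a prerequisite, touching the header forces make
to recompile the client, whether or not the change was
"significant".

    # A sketch, not anyone's actual makefile; recipe lines must be
    # tab-indented in a real one.  client.o lists the header as a
    # prerequisite, so any change to widget.h recompiles the client.
    client.o: client.cpp widget.h
            g++ -c client.cpp -o client.o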
You make it sound so simple, but at the particular place
where I was working, the rules did not end up being as clear
as you're laying out (naturally).
The rules I just explained have nothing to do with where you
might work. They're part of the language. Something called the
one definition rule, section 3.2 of the standard.
I was talking about the feasibility of ensuring that all
client code was recompiled when a header changed, not about
the correctness of doing so. The feasibility does of course
depend on various 'local' factors.
It has to be feasible, since it is necessary. If the local
factors don't make it feasible, then you can't use C++. The
standard is quite clear about this.
[...]
This isn't the forum to discuss this, but I have to disagree.
I think that recursive builds are inherently flawed (although,
of course, they can be perfectly functional). You may well
already have read this and come to your own conclusions, but I
tend to agree with much of what Peter Miller says in his
classic paper on non-recursive build
systems:
http://aegis.sourceforge.net/auug97.pdf
I've read it. The author doesn't know anything about software
engineering; recursive make doesn't work well if the project is
poorly engineered. But then, nor does anything else. (Read the
list of problems in section 2. They all start from a
fundamental assumption: that the system wasn't designed
properly, that in fact it wasn't designed at all. I've worked
in a lot of different companies, at different maturity levels,
but even the worst never had these problems.)
My brain must have been asleep when I wrote my prior post,
because both recursive and non-recursive build systems have
issues with shared libraries.
Yes. The problem is shared libraries (or more correctly,
dynamically linked objects; they're not really libraries, and
they don't have to be shared), not recursive make. The answer
to this problem is to use static linking. Dynamic linking
increases the management overhead---basically, the "header
files" (the interface specification) of a dynamic library must
be fixed at the start, and anytime it's changed (which should be
very, very rarely, if ever), *everything* must be recompiled.
Or you design some sort of version management into the system.
(I'm not sure how this works on other systems, but Unix does
support this somewhat; if you link against libxxx.so.4.3, that's
the dynamic library it will load. Even if there is a
libxxx.so.4.4 available.)
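For what it's worth, a sketch of what that looks like with the
GNU toolchain (names invented): the linker copies the library's
embedded soname into the program, so the program keeps loading
exactly the version it was linked against:

    $ g++ -shared -fPIC -Wl,-soname,libxxx.so.4.3 xxx.cpp -o libxxx.so.4.3
    $ ln -s libxxx.so.4.3 libxxx.so      # what -lxxx resolves to at link time
    $ g++ main.cpp -L. -lxxx -o program  # records libxxx.so.4.3 as NEEDED
    $ # Installing libxxx.so.4.4 later changes nothing for this program:
    $ # at load time the dynamic loader still asks for libxxx.so.4.3.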
In retrospect, I'm not sure why I got on my 'recursive build
systems are evil' rant and confused the point that I was
trying to make. To illustrate the issues that shared
libraries can introduce with any build system, consider:
library1/Makefile
library1/library1.h   => library1.so
library1/library1.cpp
program1/Makefile
program1/program1.cpp => program1 (which depends on library1.so)
program2/Makefile
program2/program2.cpp => program2 (which depends on library1.so)
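To flesh that out, here's roughly what the two makefiles might
contain (the contents are my invention; only the layout above is
real). The point is that each directory builds independently;
nothing in library1's makefile knows that program1 and program2
exist:

    # library1/Makefile (a sketch)
    library1.so: library1.cpp library1.h
            g++ -shared -fPIC library1.cpp -o library1.so

    # program1/Makefile (a sketch; program2's is analogous;
    # runtime library search path details elided)
    # Rebuilding library1.so does nothing here until someone
    # runs make in this directory.
    program1: program1.cpp ../library1/library1.so
            g++ program1.cpp ../library1/library1.so -o program1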
Leaving aside SCM and just talking about a local working copy
of the source tree,
There shouldn't be one. A good version management system acts
as a file server, presenting different views to each programmer.
But it doesn't matter.
suppose a developer:
1.) Changes library1.
2.) Rebuilds library1 (cd library1 && make)
3.) Rebuilds program1 (cd program1 && make)
Shared libraries introduce the complication that changing and
rebuilding library1 invalidates any programs that depend on library1
(program2 may no longer work correctly).
If the changes in the library don't affect the header files,
there should be no problem. If they do, then the version number
changes; programs linked against the old version will continue
to load it (at least under Unix).
You're right that this is not a flaw in the build system, but
it does put a burden on the programmer to remember to rebuild
program2 before moving it. If the system in question had been
using static libraries, this rebuild of program2 would not be
necessary (assuming that the functional change to library1 was
not required in program2).
If you've versioned the shared objects, and haven't deleted the
old ones, the rebuild won't be necessary either.
Since, however, the example system is using shared libraries,
the programmer is responsible for remembering to rebuild any
programs that depend on library1 before he moves library1.
At my former workplace, this proved to be a drag on
development efficiency (there never were any problems in
production because the system as a whole was built and moved
to production).
It is usual to do a complete, clean build before moving into
production (and often, once a week, over the weekend). If for
no other reason than to not have to deliver several versions of
each shared library.