No unanswered question

  • Thread starter Alf P. Steinbach /Usenet

Ian Collins

Ok, I give. I misunderstood, or you were unclear; either way, my fault,
and I persisted in this without further clarifying exactly what you
have done.

In effect, it sounds like in your system there are some rules for
building certain kinds of code, like C++, which have been pre-vetted
and rarely change, very much like the $(eval $(value ...)) approach.

Yes. I think all "make" solutions have a hierarchical structure with
fixed system-wide rules (how to make a .o from a .c, etc.) and
progressively more flexible local conventions.
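(For reference, a minimal sketch of that shape in GNU Make syntax -- my
illustration, not a quote of either poster's system: one fixed
system-wide rule, plus a pre-vetted per-library template instantiated
with the $(eval $(value ...)) idiom, so local makefiles only supply
arguments.

   # Fixed system-wide rule: how to make a .o from a .c.
   %.o : %.c ; $(CC) $(CFLAGS) -c $< -o $@

   # Pre-vetted per-library template.
   define lib_impl
   lib$1.a : $2 ; $$(AR) rcs $$@ $$^
   all : lib$1.a
   endef
   lib = $(eval $(value lib_impl))

   $(call lib,foo,foo.o bar.o)   # local convention: one line per library
)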
As I said earlier, I'm much more partial to this approach. In fact, my
initial prototype was very much something like this. However, at least
my implementation on GNU Make was quite slow. Some profiling showed
that it spent a large portion of its time in string manipulation,
process creation for the cat and echo processes, and I/O, on Windows.

I can imagine. Trying to get too smart with make rules will do that; it
is, after all, just another interpreted scripting language.
Still though, I'm curious exactly how yours is implemented.

We/I just use Sun's dmake.
Is it
portable across operating systems, like HP-UX, zLinux, Windows Itanium,
and more? Does it support cross compilation, which is required for MSVC
on Windows Itanium? What exact make does it use? What other tools does
it use? Anything fancy, or just cat, echo, gcc (or whatever C++
compiler), etc.?

None of the above! The tools run on Solaris and Linux flavours only.
> You said it was created by an IDE initially, which?

An old Sun product called Workshop.
 

Keith H Duggar

So, developers never specified new rules? What if the developer added
new source code which was to be in a new library? I'm confused.
Presumably the developers had to define new make rules. I can only
assume that's what you meant by the addition of new targets. In which
case, do you even track header dependencies? If not, your system is
laughably not incremental. If you do track header file dependencies,
and the developer has to add the rules to track header file
dependencies every time he adds a new library, then there's plenty of
room for error. (In addition to all of my other points.) Also typos.
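(For reference, the standard GNU Make idiom for tracking header
dependencies looks roughly like the following -- my sketch, not a claim
about either poster's system. The compiler emits .d dependency
fragments, which are re-included on every run:

   %.o : %.cpp ; $(CXX) -MMD -MP -c $< -o $@
   -include $(wildcard *.d)
)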

Let me show you the exact commands one would use to create a new
library in my system, assuming that $LIBROOT is the root library
path (i.e. the dir that contains top-level libraries) and newlib is
the new library:

$ cd $LIBROOT
$ mkdir newlib
$ cp $LIBROOT/makes/makefile.lib newlib/makefile
$ cd newlib
$ gencode -ch fileone filetwo
$ gvim fileone.hpp fileone.cpp filetwo.hpp filetwo.cpp ...
... edit/write the new code for fileone.hpp etc ...
... check code into depot if desired ...

that's it; job is done. Now you want to build the new library?
Ok assume $OBJROOT is the directory in which you have previously
built the libraries and now you want to build this new library:

$ cd $OBJROOT
$ gmake -f $LIBROOT/makefile

end of story; the library collection is incrementally built which
in this case includes a clean build of this new library. Did the
developer specify new rules? No. All they needed to do was create
directories, copy files, and write code.

Now I have a question. Do you know how to implement a build system
that provides the above simple support for adding new libraries?

KHD
 

Keith H Duggar

Yes.  I think all "make" solutions have a hierarchical structure with
fixed system-wide rules (how to make a .o from a .c, etc.) and
progressively more flexible local conventions.


I can imagine.  Trying to get too smart with make rules will do that;
it is, after all, just another interpreted scripting language.

Or simply not knowing when to use := instead of =, or when to
let a script/program do the work instead, etc. You know, all the
usual noob mistakes. The kinds of things that a decent book such
as "Managing Projects with GNU Make" by Robert Mecklenburg would
teach you about. I wonder if the OP has read that book?
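(To illustrate the classic mistake being referred to -- a sketch of
mine, not an example from the book: with recursive "=", the
$(shell ...) below re-runs every time SRCS is expanded; with ":=" it
runs exactly once, at assignment.

   SRCS  = $(shell find . -name '*.cpp')   # find re-runs on every expansion
   SRCS := $(shell find . -name '*.cpp')   # find runs exactly once
)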

KHD
 

Joshua Maurice

Let me show you the exact commands one would use to create a new
library in my system, assuming that $LIBROOT is the root library
path (i.e. the dir that contains top-level libraries) and newlib is
the new library:

   $ cd $LIBROOT
   $ mkdir newlib
   $ cp $LIBROOT/makes/makefile.lib newlib/makefile
   $ cd newlib
   $ gencode -ch fileone filetwo
   $ gvim fileone.hpp fileone.cpp filetwo.hpp filetwo.cpp ...
   ... edit/write the new code for fileone.hpp etc ...
   ... check code into depot if desired ...

that's it; job is done. Now you want to build the new library?
Ok assume $OBJROOT is the directory in which you have previously
built the libraries and now you want to build this new library:

   $ cd $OBJROOT
   $ gmake -f $LIBROOT/makefile

end of story; the library collection is incrementally built which
in this case includes a clean build of this new library. Did the
developer specify new rules? No. All they needed to do was create
directories, copy files, and write code.

I'm sorry. I was specifically attacking the idiomatic usage of make,
where I thought that meant developers specifying rules explicitly, à la
foo.o : foo.cpp
or some use of wildcards, or whatever. Using prebuilt macros does not
appear to be idiomatic usage as described in the manual or half of the
books which I have read.

Your system is pretty good. I presume that it uses file system
wildcards to get the list of cpp files in that directory, sets the
include path to automatically include the relevant dir (or you specify
all includes relative to the base dir of the project), and it uses the
lib-root dir name as the soname of the output library. Overall, pretty
good. I'd still expect that the users will have to modify the makefile
for any real usage though, such as for link dependencies, preprocessor
defines, and possibly include path dirs.

Then there are also situations where you might need to tweak the
compiler options for a particular platform. In my company, there are a
lot of places where this is the case, to work around compiler bugs, to
get acceptable compile times, etc. For one particularly complex
template file, on one platform we had to turn down the optimization
level for regular builds; otherwise the compile time would be many
hours for just that one file.
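(In GNU Make, the usual way to express that kind of per-file tweak is a
target-specific variable -- a sketch, with an invented file name:

   CXXFLAGS := -O3
   complex_template.o : CXXFLAGS := -O1   # only this object builds at -O1
)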

With your system, there are also the cases of new file creation hiding
other files on search paths, file deletion (detected with file system
wildcards) triggering appropriate cleans, and changes to compile
options, such as command line preprocessor defines, triggering cleans.
I'd be curious if you handle those, in which case your system is very
good.

As I just mentioned, I have no real problems with such a system
besides
1- its very poor speed compared to a solution not written in an
interpreted language,
2- and that it can't work for other compilation models such as Java,
which is a sticking point for my use case.
So, if its speed is fast enough and/or it's on a small enough project,
and you don't need to support other compilation models, I really like
it (conditional on a couple of other minor aspects).

Still, I think we're really abusing GNU Make (or whatever make flavor)
when we do this. This is definitely not what was intended when make
was originally written. Make was written with the intent of users
specifying rules to build stuff, possibly with variables like $(CC)
etc. It wasn't really written with a pure macro aka task system in
mind. Instead, this is much closer in spirit to Maven or Ant. It's
just that the (GNU) Make engine is pretty general purpose, portable,
and reusable, so we reused it to solve the problem in a very
unorthodox approach from the perspective of the original authors of
Make. Also, I would be a little happier if there were a publicly
available /standard/ implementation of these makefile scripts instead
of each company and/or team rolling their own, with their own
conventions. My complaint, I guess, is simply that GNU Make is a build
system framework, but I would really like a standardized build system
usable out of the box instead of having to roll my own.
Now I have a question. Do you know how to implement a build system
that provides the above simple support for adding new libraries?

Yes.

Hell, I wrote a Make clone myself, to provide built-ins for echo to
file, cat, etc., to see if I could get acceptable performance. This
was before I decided to abandon the Make model entirely. I know some
won't believe me, but on some rather uncontrived, large makefiles, my
own implementation outperformed GNU Make by quite a large factor, in
the area of 25 to 50x faster runtimes for an up to date build on
~25,000 source files on a recent Linux install with 8 cores. (Haven't
got legal to let me open source it yet. Yes, both were compiled with
standard optimization compiler options to gcc 4 something on Linux. No
cheating.) I'll get around to profiling later to figure out how this
is the case. I really don't know why. I just wrote mine in a
simplistic, straightforward C++ approach, whereas the GNU Make
internals are crazy paranoid about string copies. You should see the
hackery they do for the built-in functions and how they pass a large
string buffer around in an attempt to avoid extra string copies.
Perhaps it's because I don't keep information around about the
definition location of variables like GNU Make does, but I doubt that
could be a significant performance hit.
 

Joshua Maurice

Or simply not knowing when to use := instead of =, or when to
let a script/program do the work instead, etc. You know, all the
usual noob mistakes. The kinds of things that a decent book such
as "Managing Projects with GNU Make" by Robert Mecklenburg would
teach you about. I wonder if the OP has read that book?

Partially. I admit I have only read the sections available on
http://oreilly.com/catalog/make3/book/index.csp

Still, I would note that I am not the run-of-the-mill Make noob.

It's just that I really don't like the contortions I had to go through
to get an incrementally correct build for C++ in Make. The biggest
problem with Make is its dependency graph model. Specifically, if one
node is considered out of date, all nodes downstream are considered
out of date. This makes it exceptionally hard, or I think impossible
(? correct me if I'm wrong please), to have "code" execute on every
invocation, in parallel, that can mark a node out of date without
itself forcing out-of-date-ness downstream.

An example is dealing with when a cpp source file has been removed. In
this case, you want to force a relink of its library, and preferably
delete the "orphaned" object file. This can be handled in pass 1, but
then it's done by a single thread only. You could try to move such
logic to rule evaluation, but rules only run when the node is out of
date, and there's no way AFAIK to conditionally mark another node out
of date in a rule's command. On top of that, GNU Make has some
interesting caching of file timestamps, so you can't simply delete a
file which has an order-only dependency on yourself and expect that to
work. I've tried; it doesn't.

This is also one of the reasons I wrote my own make clone. In my
clone, for nodes A and B where A depends order-only on B, if B's
command deletes file A, then A will be considered out of date, and
otherwise it behaves as usual. The result is that you can conditionally
execute portions of the graph downstream. Again, you can get this same
behavior by writing this logic outside of rule commands so it will be
done in pass 1, but pass 1 is single threaded.
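(A sketch of that pass 1 approach in GNU Make syntax -- mine, assuming
a single source directory and an invented library name: orphaned
objects are deleted while the makefile is still being read, which
forces the relink, but this work runs single-threaded on every
invocation.

   SRCS  := $(wildcard *.cpp)
   OBJS  := $(SRCS:.cpp=.o)
   STALE := $(filter-out $(OBJS),$(wildcard *.o))
   ifneq ($(STALE),)
     $(shell rm -f $(STALE) libfoo.so)   # runs at parse time, in pass 1
   endif
)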
 

Keith H Duggar

I'm sorry. I was specifically attacking the idiomatic usage of make,
where I thought that meant developers specifying rules explicitly, à la
    foo.o : foo.cpp
or some use of wildcards, or whatever. Using prebuilt macros does not
appear to be idiomatic usage as described in the manual or half of the
books which I have read.

I don't understand why you would say that. The make manuals I've
seen discuss macros (or just "variables" as GNU calls them) and
both the make books I've read discuss them at length.

Anyhow, who cares what is "idiomatic" or not? I would've thought
our shared interest is in solving problems, not arguing about what
is "idiomatic". It seems instead that you are trying to limit the
discussion to a narrow practice where your complaints are justified
rather than discussing how to effectively use make.
Your system is pretty good. I presume that it uses file system
wildcards to get the list of cpp files in that directory, sets the

Correct. There are also a couple of scripts that find sets of
files in a way slightly more sophisticated than simple wildcards.
include path to automatically include the relevant dir (or you specify
all includes relative to the base dir of the project), and it uses the

Correct. We specify all includes relative to the base dir (well,
to a small set of "base" dirs actually).
lib-root dir name as the soname of the output library. Overall, pretty

That is the default. Of course it can be overridden easily but
so far nobody has seen the need to. And it would need a decent
justification to pass review.
good. I'd still expect that the users will have to modify the makefile
for any real usage though, such as for link dependencies, preprocessor
defines, and possibly include path dirs.

Link dependencies are automatically computed by a script that
examines the app's includes and a database (also automatically
generated) that relates and partially orders inter-library
dependencies.

We specify any app specific defines and if necessary link
dependency overrides in the individual application makefile.

So far we have not had the need to compile internal libraries
with idiosyncratic defines. However, if we did I suspect we
would put those in the library specific make files (similar to
how we handle 3rd party configurations) but I haven't thought
that through yet.
Then there are also situations where you might need to tweak the
compiler options for a particular platform. In my company, there are a
lot of places where this is the case, to work around compiler bugs, to
get acceptable compile times, etc. For one particularly complex
template file, on one platform we had to turn down the optimization
level for regular builds; otherwise the compile time would be many
hours for just that one file.

Yeah, for a while we compiled on 3 platforms (now just 2) so
there were various platform defines. Those are centralized in
a single included makefile and a few header files.
With your system, there are also the cases of new file creation hiding
other files on search paths, file deletion (detected with file system
wildcards) triggering appropriate cleans, and changes to compile
options, such as command line preprocessor defines, triggering cleans.
I'd be curious if you handle those, in which case your system is very
good.

Yes, all are handled with a caveat: command line defines are
captured only if one runs make with the forwarding script:

runmake -f $LIBROOT/makefile ...

which also does some other stuff. Ordinary developers are taught
to use (and do use) this command. However, there is sometimes
a use for the "raw" command I gave earlier, i.e. gmake -f ...

A good example is the NDEBUG argument. Specifying NDEBUG would
be captured by runmake and could (depending on which production
library cache you are linking against) trigger a total world
rebuild. Often when you are debugging you might want to define
NDEBUG=0 only for your module while still linking against the
NDEBUG=1 library cache. To do that, one can use the raw gmake
command.

This has never been a problem and if it became one we would
just alias gmake -> runmake in everyone's environment and
restrict direct access to gmake.
As I just mentioned, I have no real problems with such a system
besides
1- its very poor speed compared to a solution not written in an
interpreted language,

Not in our case. The time between invoking gmake and the
first g++ invocation (by which point all the hard work has
been done to synthesize the global make file) is just several
seconds. And that is a project having nearly 80,000 files,
scores of libraries, and hundreds of apps. The bulk of that
time is just file system access, not the "interpretation" you
are worried about.
2- and it can't work for other compilation models such as Java, which
is a sticking point for my use case.

Well, you say that, but I don't know if that is true. There is
Java code compiled/jarred/whatever in this system; it's just that
I wasn't involved in specifying the rules for it. That's why I
cannot speak with any certainty, but so far those guys haven't
complained to me about the core system. What specifically is
your Java-specific difficulty?
So, if its speed is fast enough and/or it's on a small enough project,
and you don't need to support other compilation models, I really like
it (conditional on a couple of other minor aspects).

Our project seems to be larger than yours by far, so I guess
the question is whether several seconds is too long for you.
Still, I think we're really abusing GNU Make (or whatever make flavor)
when we do this. This is definitely not what was intended when make
was originally written. Make was written with the intent of users
specifying rules to build stuff, possibly with variables like $(CC)
etc. It wasn't really written with a pure macro aka task system in
mind. Instead, this is much closer in spirit to Maven or Ant. It's
just that the (GNU) Make engine is pretty general purpose, portable,
and reusable, so we reused it to solve the problem in a very
unorthodox approach from the perspective of the original authors of

Oh boy. That is the kind of ranting where we part ways. I
don't personally know what the original authors of make think/
feel about how make should be used. And I don't care. make
is a /tool/ (contrary to your repeated assertions that it
is a "framework" or a "system", which it is not). As a tool
it performs a job and does so well enough that it can form
the basis of a nice build /system/.
Make. Also, I would be a little happier if there were a publicly
available /standard/ implementation of these makefile scripts instead
of each company and/or team rolling their own, with their own

Yes, that would be very nice (if there isn't already such
a thing). But that's up to the community, not make.
conventions. My complaint I guess is simply that GNU Make is a build
system framework, but I would really like a standardized build system
usable out of the box instead of having to roll my own.

I don't think we agree on what the words "framework" and "system"
mean. Maybe that is partly why you are so angry at make: you are
thinking of it as a framework or a system and thus are expecting
too much from it. Try to get this: make is a TOOL.
Yes.

Hell, I wrote a Make clone myself, to provide built-ins for echo to
file, cat, etc., to see if I could get acceptable performance. This

The problem I'm having is that I find your claims incongruous
with seemingly naive questions like:

I mean come on, seriously. Yeah yeah, I know you just said
that /you/ were talking/thinking only about using make in a
certain gimped way. Well, that wasn't clear at all. So such
questions as the above came across either as ignorant or
"playing stupid". Especially when paired with all the many
many paragraphs of ranting/complaining that came before.

KHD
 

Joshua Maurice

On Jul 23, 3:45 pm, Joshua Maurice <[email protected]> wrote:
[snip the discussion of Keith's good build system]
Not in our case. The time between invoking gmake and the
first g++ invocation (by which point all the hard work has
been done to synthesize the global make file) is just several
seconds. And that is a project having nearly 80,000 files,
scores of libraries, and hundreds of apps. The bulk of that
time is just file system access, not the "interpretation" you
are worried about.

Odd. GNU Make 3.81? What kind of box are you running this on?

Here's mine.

psflor.informatica.com ~$ uname -a
Linux psflor.informatica.com 2.6.18-128.el5 #1 SMP Wed Dec 17 11:41:38
EST 2008 x86_64 x86_64 x86_64 GNU/Linux
psflor.informatica.com ~$ free
             total       used       free     shared    buffers     cached
Mem:      16366100   14678028    1688072          0     471320   11997828
-/+ buffers/cache:    2208880   14157220
Swap:     33551744    1830912   31720832
psflor.informatica.com ~$ grep "model name" /proc/cpuinfo
model name : Quad-Core AMD Opteron(tm) Processor 2387
model name : Quad-Core AMD Opteron(tm) Processor 2387
model name : Quad-Core AMD Opteron(tm) Processor 2387
model name : Quad-Core AMD Opteron(tm) Processor 2387
model name : Quad-Core AMD Opteron(tm) Processor 2387
model name : Quad-Core AMD Opteron(tm) Processor 2387
model name : Quad-Core AMD Opteron(tm) Processor 2387
model name : Quad-Core AMD Opteron(tm) Processor 2387

For a very simple makefile containing roughly 20,000 rules in a
balanced-tree dependency graph with branching factor 75, it took the
regular GNU Make install (which defaults to building with -O3 and
such, IIRC) several minutes to run. My make clone took seconds.

For my own sanity, let me go rerun this test. I'll get back today
hopefully. I'll put up the source code of the test generator at least
to see if you think it's a somewhat fair test.
Well, you say that, but I don't know if that is true. There is
Java code compiled/jarred/whatever in this system; it's just that
I wasn't involved in specifying the rules for it. That's why I
cannot speak with any certainty, but so far those guys haven't
complained to me about the core system. What specifically is
your Java-specific difficulty?


Our project seems to be larger than yours by far, so I guess
the question is whether several seconds is too long for you.

Very odd.

I'd imagine that it can't be incremental for the Java, though your
system has surprised me thus far. I would be very impressed if it did
do incremental Java compilation.

The problem is that Java's compilation model is very different from
C++'s. In C++, each unit can be compiled in isolation from the other
units, and in the end a "minimal" linking step is done. Java's
compilation model is much closer to compiling a C++ source file to a
DLL, a shared library, which is not allowed to have unresolved
external symbols. Each compilation cannot be done independently. When
I change a cpp source file, I only need to recompile that object file
and do some relinking. When I change a Java file, a naive solution is
to recompile that Java file, all directly dependent Java files, and
all of their dependents, and so on. This is how Make operates: any
change to a node in the graph forces a full recompile of all nodes
downstream. There is no easy termination condition to the rebuild
cascade across the dependency graph.

Simple example:
A.java uses type name B, but does not use type name C.
B.java uses type name C.
C.java is there too.

When a change is made to the internal implementation of a function in
C.java, there is no need to recompile B or A. When there is a change
to the implementation of C.java, you do need to recompile B, but there
is no need to recompile A. Does your system do this automatically?
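(The cascade, written out in make terms -- my illustration: touching
C.java remakes C.class, which remakes B.class, whose fresh timestamp
then forces A.class to rebuild, even though A never uses C.

   A.class : A.java B.class ; javac A.java
   B.class : B.java C.class ; javac B.java
   C.class : C.java ; javac C.java
)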

Then, does your system handle ghost dependencies?
http://www.jot.fm/issues/issue_2004_12/article4.pdf
I would congratulate you for having one of the few correct Java
incremental build systems in existence if this is the case.
Oh boy. That is the kind of ranting where we part ways. I
don't personally know what the original authors of make think/
feel about how make should be used. And I don't care. make
is a /tool/ (contrary to your repeated assertions that it
is a "framework" or a "system", which it is not). As a tool
it performs a job and does so well enough that it can form
the basis of a nice build /system/.

It's a good tool for what it claims to do. It's fast, portable,
extensible, etc. However, I still claim that there could be a better
tool. (I'm in the process of writing something which I hope will be
that.) Make's overall design of a rebuild cascading down the
dependency tree without termination is not a good one. When I
implemented a correct incremental build system on top of it, I found
myself frequently fighting against it instead. I wasn't able to use
what I thought were idiomatic and standard styles, like (modulo typos,
simplified for explanation)
%.o : %.cpp ; $(CC) $(OPT_FLAGS) -c $< -o $@
Sorry that you consider it ranting. I was merely trying to raise
consciousness that it actually is not ideal, and that its foundation
is flawed. You still manage to make quite good use out of it though.
Yes, that would be very nice (if there isn't already such
a thing). But that's up to the community, not make.


I don't think we agree on what the words "framework" and "system"
mean. Maybe that is partly why you are so angry at make: you are
thinking of it as a framework or a system and thus are expecting
too much from it. Try to get this: make is a TOOL.

Well, all I'm trying to say is that make is not usable out of the box.
Instead, you need heavy customization to get it anywhere near usable.
I think this is a bad state of affairs. I think most developers would
much prefer some sort of standard build tool which is usable out of
the box. It's not to say make is a bad tool. I just wish instead that
developers didn't have to write their own build system for each
company.
The problem I'm having is that I find your claims incongruous
with seemingly naive questions like:


I mean come on, seriously. Yeah yeah, I know you just said
that /you/ were talking/thinking only about using make in a
certain gimped way. Well, that wasn't clear at all. So such
questions as the above came across either as ignorant or
"playing stupid". Especially when paired with all the many
many paragraphs of ranting/complaining that came before.

I'm sorry. I haven't really seen this in practice. I don't have that
much corporate experience, and in what little I have, no one in my
company has done anything like that. I also haven't found any public
reference implementations using macros like you do. I assumed such
usage was rather rare. I apologize for grouping you with them.
 

Joshua Maurice

Simple example:
A.java uses type name B, but does not use type name C.
B.java uses type name C.
C.java is there too.

When a change is made to the internal implementation of a function in
C.java, there is no need to recompile B or A. When there is a change
to the implementation of C.java, you do need to recompile B, but there
is no need to recompile A. Does your system do this automatically?

Ack, hit submit accidentally. Small typo. That should read:

When there is a change to the /interface/ of C.java, you do need to
recompile B, but there is no need to recompile A.
 

Keith H Duggar

[snip the discussion of Keith's good build system]
Not in our case. The time between invoking gmake and the
first g++ invocation (by which point all the hard work has
been done to synthesize the global make file) is just several
seconds. And that is a project having nearly 80,000 files,
scores of libraries, and hundreds of apps. The bulk of that
time is just file system access, not the "interpretation" you
are worried about.

Odd. GNU Make 3.81?
Yes.

What kind of box are you running this on?

Here's mine.

psflor.informatica.com ~$ uname -a
Linux psflor.informatica.com 2.6.18-128.el5 #1 SMP Wed Dec 17 11:41:38
EST 2008 x86_64 x86_64 x86_64 GNU/Linux
psflor.informatica.com ~$ free
             total       used       free     shared    buffers     cached
Mem:      16366100   14678028    1688072          0     471320   11997828
-/+ buffers/cache:    2208880   14157220
Swap:     33551744    1830912   31720832
psflor.informatica.com ~$ grep "model name" /proc/cpuinfo
model name      : Quad-Core AMD Opteron(tm) Processor 2387
model name      : Quad-Core AMD Opteron(tm) Processor 2387
model name      : Quad-Core AMD Opteron(tm) Processor 2387
model name      : Quad-Core AMD Opteron(tm) Processor 2387
model name      : Quad-Core AMD Opteron(tm) Processor 2387
model name      : Quad-Core AMD Opteron(tm) Processor 2387
model name      : Quad-Core AMD Opteron(tm) Processor 2387
model name      : Quad-Core AMD Opteron(tm) Processor 2387

$ uname -a
Linux [redacted] 2.6.18-164.11.1.el5 #1 SMP Wed Jan 6 13:26:04 EST
2010 x86_64 x86_64 x86_64 GNU/Linux
$ free
             total       used       free     shared    buffers     cached
Mem:      65875328    4239496   61635832          0     441532    3411480
-/+ buffers/cache:     386484   65488844
Swap:     33551744          0   33551744
$ grep "model name" /proc/cpuinfo
model name : Intel(R) Xeon(R) CPU X7460 @ 2.66GHz
model name : Intel(R) Xeon(R) CPU X7460 @ 2.66GHz
model name : Intel(R) Xeon(R) CPU X7460 @ 2.66GHz
model name : Intel(R) Xeon(R) CPU X7460 @ 2.66GHz
model name : Intel(R) Xeon(R) CPU X7460 @ 2.66GHz
model name : Intel(R) Xeon(R) CPU X7460 @ 2.66GHz
model name : Intel(R) Xeon(R) CPU X7460 @ 2.66GHz
model name : Intel(R) Xeon(R) CPU X7460 @ 2.66GHz
model name : Intel(R) Xeon(R) CPU X7460 @ 2.66GHz
model name : Intel(R) Xeon(R) CPU X7460 @ 2.66GHz
model name : Intel(R) Xeon(R) CPU X7460 @ 2.66GHz
model name : Intel(R) Xeon(R) CPU X7460 @ 2.66GHz
model name : Intel(R) Xeon(R) CPU X7460 @ 2.66GHz
model name : Intel(R) Xeon(R) CPU X7460 @ 2.66GHz
model name : Intel(R) Xeon(R) CPU X7460 @ 2.66GHz
model name : Intel(R) Xeon(R) CPU X7460 @ 2.66GHz
model name : Intel(R) Xeon(R) CPU X7460 @ 2.66GHz
model name : Intel(R) Xeon(R) CPU X7460 @ 2.66GHz
model name : Intel(R) Xeon(R) CPU X7460 @ 2.66GHz
model name : Intel(R) Xeon(R) CPU X7460 @ 2.66GHz
model name : Intel(R) Xeon(R) CPU X7460 @ 2.66GHz
model name : Intel(R) Xeon(R) CPU X7460 @ 2.66GHz
model name : Intel(R) Xeon(R) CPU X7460 @ 2.66GHz
model name : Intel(R) Xeon(R) CPU X7460 @ 2.66GHz
For a very simple makefile containing roughly 20,000 rules in a
balanced-tree dependency graph with branching factor 75, it took the
regular GNU Make install (which defaults to building with -O3 and
such, IIRC) several minutes to run. My make clone took seconds.

Please explain: several minutes until the first g++ (or other
compiler) call? Or several minutes to complete an entire --dry-run,
incremental, or full build?

I'm thinking now it is somewhat difficult to actually measure the
performance of the build system itself. I've always used time to
first compiler execution because, even though I would prefer to do
something like a --dry-run, a --dry-run will obviously not trigger
all the dependencies that would be triggered in a real run, since
the targets of multiple-output rules are not understood by make
and will not actually be modified.

Furthermore, I wasn't making any effort to sporadically touch
multiple files, which could affect the analysis time. Though I have
not thought that through, and given your experience in writing make
clones you would be in a much better position to comment on that
possibility?

However, there is still a chance that this could be as simple as
a poor choice of = when := would have done.
For my own sanity, let me go rerun this test. I'll get back today
hopefully. I'll put up the source code of the test generator at least
to see if you think it's a somewhat fair test.

Yeah, the tests are probably not equivalent. You might be running
a much more stressful test than I. I just touched a single library
.cpp file and ran the build. Is that a gimped performance test?
Very odd.

I'd imagine that it can't be incremental for the Java, though your
system has surprised me thus far. I would be very impressed if it did
do incremental Java compilation.

It may not be. I truly don't know. I know they often have trouble
with Java deployments but maybe that is just "ordinary" jar-hell.
The problem is that Java's compilation model is very different from
C++'s. In C++, each unit can be compiled in isolation from the other
units, and in the end a "minimal" linking step is done. Java's
compilation model is much closer to compiling a C++ source file to a
DLL, a shared library, which is not allowed to have unresolved
external symbols. Each compilation cannot be done independently. When
I change a cpp source file, I only need to recompile that object file
and do some relinking. When I change a Java file, a naive solution is
to recompile that Java file, all directly dependent Java files, and
all of their dependents, and so on. This is how Make operates: any
change to a node in the graph forces a full recompile of all nodes
downstream. There is no easy termination condition to the rebuild
cascade across the dependency graph.

Simple example:
A.java uses type name B, but does not use type name C.
B.java uses type name C.
C.java is there too.

When a change is made to the internal implementation of a function in
C.java, there is no need to recompile B or A. When there is a change
to the interface of C.java, you do need to recompile B, but there
is no need to recompile A.

So in Java B.java depends on C.class instead of C.java? As if a
B.cpp depended on C.obj instead of C.hpp? Is this partly because
Java does not separate interface and implementation into separate
files (i.e. .hpp, .cpp), or no?
Does your system do this automatically?

I don't know. As I said I did not write the Java specific rules.
However, if it is the case that B.java depends on C.class instead
of C.java then it does seem like a pretty nasty situation.
It's a good tool for what it claims to do. It's fast, portable,
extensible, etc. However, I still claim that there could be a better
...
Well, all I'm trying to say is that make is not usable out of the box.
Instead, you need heavy customization to get it anywhere near usable.
I think this is a bad state of affairs. I think most developers would
much prefer some sort of standard build tool which is usable out of
the box. It's not to say make is a bad tool. I just wish instead that
developers didn't have to write their own build system for each
company.

Ok and agreed. However, let's please be honest here; that latest
statement is significantly different from your previous statements
that make is fundamentally fubar, horrific, etc. Agreed? If so I
think we have come far in understanding each other. At least I now
understand what you are saying and I agree that:

1) make is a limited tool and does not provide out-of-the-box
an incrementally correct build /system/. It must be augmented
to provide that.

2) make does have subtle annoyances and may be "slow".

3) it would be very, very nice if there was an open source build
/system/ of script augmentations etc for make.
tool. (I'm in the process of writing something which I hope will be
that.) Make's overall design of a rebuild cascading down the
dependency tree without termination is not a good one. When I

Ok well that is a point of disagreement still. I don't understand
why "cascading down the dependency tree without termination" is a
fundamentally bad design. What mathematical model do you propose
to replace a partially ordered directed acyclic graph analysis
such as the make model?

KHD
 

Joshua Maurice

Please explain: several minutes until the first g++ (or other
compiler) call? Or several minutes to complete an entire --dry-run,
incremental, or full build?

I'm thinking now it is somewhat difficult to actually measure the
performance of the build system itself. I've always used time to
first compiler execution because, even though I would prefer to do
something like a --dry-run, a --dry-run will obviously not trigger
all the dependencies that would be triggered in a real run, since
the targets of multiple-output rules are not understood by make
and will not actually be modified.

Furthermore, I wasn't making any effort to sporadically touch
multiple files, which could affect the analysis time. Though I have
not thought that through, and given your experience in writing make
clones you would be in a much better position to comment on that
possibility?

However, there is still a chance that this could be as simple as
a poor choice of = when := would have done.

Tests are running. Finally got some free time. Will get back shortly.
So in Java B.java depends on C.class instead of C.java? As if a
B.cpp depended on C.obj instead of C.hpp?

Well, it's actually much more perverse than that. At least in my
company, for organizational reasons, each team has its own Java source
directory. In addition, for packaging reasons, they're further split
into more source directories. Each of these source directories is
built with a single javac invocation. For a clean build of that Java
source dir, a Java file depends on the other Java source files in that
source directory, and it depends on the class files external to that
source directory. For an "incremental build", javac will check file
timestamps somewhat, and selectively use the "up to date" (not
determined accurately) class files or recompile the "out of date" (not
determined accurately) Java source files in this Java source
directory. It's really different.

The short version is: for the dependency chain A depends on B, B
depends on C, where A does not directly use C, a change to C's source
should not trigger a rebuild of A. If you cascade the rebuild without
termination, then a single change to a Java file near the beginning of
the dependency graph could trigger a rebuild of the whole graph under
GNU Make's model, but it should not, because almost all of the
recompiles would be unnecessary.
Is this partly because
Java does not separate interface and implementation into separate
files (i.e. .hpp, .cpp), or no?

Yeah. This is basically because interface and implementation are in
the same file. Luckily, at least, there are no automatic transitive
compile dependencies like you get when headers include headers. That
allows you to terminate the cascading rebuild at the right spot and
still get a good, small / fast, correct incremental build.
It's [Make]
a good tool for what it claims to do. It's fast, portable,
extensible, etc. However, I still claim that there could be a better
...
Well, all I'm trying to say is that make is not usable out of the box.
Instead, you need heavy customization to get it anywhere near usable.
I think this is a bad state of affairs. I think most developers would
much prefer some sort of standard build tool which is usable out of
the box. It's not to say make is a bad tool. I just wish instead that
developers didn't have to write their own build system for each
company.

Ok and agreed. However, let's please be honest here; that latest
statement is significantly different from your previous statements
that make is fundamentally fubar, horrific, etc. Agreed? If so I
think we have come far in understanding each other. At least I now
understand what you are saying and I agree that:

Perhaps. I did come off too strongly. I just honestly did not expect
people to actually be doing that judging from the experience of the
build team specialists in my company, the architects, etc. I was
surprised. GNU Make can be made to work quite well for a simple C++
build. However, it really can't for other kinds of builds, such as
Java, and "1 to many" or "many to many" C++ code generation. See below
for an example.
   1) make is a limited tool and does not provide out-of-the-box
      an incrementally correct build /system/. It must be augmented
      to provide that.

   2) make does have subtle annoyances and may be "slow".

   3) it would be very, very nice if there was an open source build
      /system/ of script augmentations etc for make.


Ok well that is a point of disagreement still. I don't understand
why "cascading down the dependency tree without termination" is a
fundamentally bad design. What mathematical model do you propose
to replace a partially ordered directed acyclic graph analysis
such as the make model?

Doesn't work in the general case. For example, Java.

For example, there's some makefile code which you want to run on every
invocation. Such code includes checking whether there's a stale object
which no longer has its corresponding source file. If such a stale
object file is found, it should be deleted, along with its
corresponding lib. You can run this code in phase 1, but this has
several limitations:

1- Not parallelizable.

2- The source code must be known in phase 1. My company, for example,
generates C++ code from a model to facilitate serialization between
Java and C++. Thus, there is some C++ code which does not exist in
phase 1. This plays havoc with the entire Make scheme. You need to
check for stale object files /after/ code generation, but /before/
deciding if the shared library is up to date, and this is quite
difficult and masochistic to do in GNU Make.

I thought I could do it with order-only dependencies. I tried a phony
dependency, which depends on the C++ code generation, and which is an
order-only dependency of the C++ compilation. Its command checks to
see if there's a stale object file, and if there is, kills the shared
lib and the stale object file(s). However, GNU Make does fun stuff
with caching file timestamps during phase 2, so any modification to a
file in phase 2 may not be picked up. In fact, I found that such a
scheme broke more often than it worked - GNU Make would not detect
that the shared lib was just deleted by the phony order-only
dependency, and it would conclude from the cached file timestamp that
the shared lib was up to date. (I wouldn't rely on undocumented
behavior anyway in a production system. There's a couple of threads
about this on some of the GNU Make mailing lists. I recall one where a
GNU Make implementer basically said this is intended behavior.) I
think this is the thing which finally drove me to write my make clone,
to give the guarantee that make will only check a file's timestamp in
phase 2 after all out-of-date dependencies have had their commands
run.
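(A sketch of the scheme just described, with invented names, showing
what GNU Make's timestamp caching defeats: the phony order-only
prerequisite runs first and may delete the library, but make can still
conclude from the cached stat that the library is up to date.

   .PHONY : check-stale
   check-stale : run-codegen ; rm -f $(STALE_OBJS) libfoo.so
   libfoo.so : $(OBJS) | check-stale ; $(CXX) -shared -o $@ $(OBJS)
)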

PS: I realized just now that you could put the logic to check for
stale object files in the make command of the C++ code generation.
However, then the C++ code generation command needs to know about the
C++ compilation object dir. I don't think such an approach would scale
well either; it tightly couples what should be two independent
"macros".

PPS: You could also abandon entirely letting GNU Make track
out-of-date-ness. That's the other way I see to solve this problem in
GNU Make and get parallelization.
 

Joshua Maurice

Please explain: several minutes until the first g++ (or other
compiler) call? Or several minutes to complete an entire --dry-run,
incremental, or full build?

I'm thinking now it is somewhat difficult to actually measure the
performance of the build system itself. I've always used time to
first compiler execution because, even though I would prefer to do
something like a --dry-run, a --dry-run will obviously not trigger
all the dependencies that would be triggered in a real run, since
the targets of multiple-output rules are not understood by make
and will not actually be modified.

I've included several metrics, most of the relevant ones. I did not
include any good testing of an actual incremental build, as that would
be more involved than I have time for right now, but I believe that it
should lie somewhere between the times for "all build from all build"
and "all build from clean".
Furthermore, I wasn't making any effort to sporadically touch
multiple files, which could affect the analysis time. Though I have
not thought that through, and given your experience in writing make
clones you would be in a much better position to comment on that
possibility?

However, there is still a chance that this could be as simple as
a poor choice of = when := would have done.

As I promised earlier, here is some speed testing between my make
clone and GNU Make.

The source code for the test generator, and the script used to run the
tests, is at the end of this post.

The short version of what my tests were is: the dependency graph is a
balanced tree with roughly 25,000 nodes, with a single root, where
each node has 75 children except for the leaves. I figured that this
is somewhat indicative of the dependency graph of C++ source code. (At
least, without header dependencies. That could get a lot more complex
to model. I figure that this is still a relatively indicative test of
real world usage, though by no means exhaustive or foolproof.) I hope
I have no glaring mistakes or oversights. I'm happy to rerun it or
redesign the test if you find any problems.

The command to "build" each node is simply touch. Rules for cleaning
are also present. It's only a skeleton implementation of a correct
incremental build system, but hopefully it is indicative of a fully
fleshed out build system. I think it is. This testing here also fits
with my previous experience implementing such a fully fleshed out
build system and comparing the time with GNU Make and my make clone.
Tests were run on the same machine listed in my above post. I ran the
tests twice, and the numbers were reasonably close.

Results:

my make clone using new built-in no-process-spawn touch and rm
** notarget -- ~1 second
** all from clean -- 55 seconds overall -- ~1 second in phase 1
** all from all -- ~3 seconds
** all from single leaf out of date -- ~2 seconds
** clean from all -- 24 seconds overall -- ~1 second in phase 1
** clean from clean -- 13 seconds overall -- ~1 second in phase 1

my make clone using $(process touch ...) and $(process rm ...)
** notarget -- ~1 second
** all from clean -- 225 seconds overall -- ~1 second in phase 1
** all from all -- ~2 seconds
** all from single leaf out of date -- ~1 second in phase 1
** clean from all -- 136 seconds overall -- ~1 second in phase 1
** clean from clean -- 125 seconds overall -- ~1 second in phase 1

my make clone using $(shell touch ...) and $(shell rm ...)
** notarget -- ~1 second
** all from clean -- 150 seconds overall -- ~1 second in phase 1
** all from all -- ~3 seconds
** all from single leaf out of date -- ~2 seconds
** clean from all -- 134 seconds overall -- ~1 second in phase 1
** clean from clean -- 137 seconds overall -- ~1 second in phase 1

GNU Make 3.81 using $(shell touch ...) and $(shell rm ...)
** notarget -- 32 seconds overall -- 19 seconds spent in phase 1
** all from clean -- 95 seconds overall -- 20 seconds spent in phase 1
** all from all -- 44 seconds overall -- 20 seconds spent in phase 1
** all from single leaf out of date -- 44 seconds overall -- 20 seconds spent in phase 1
** clean from all -- 63 seconds overall -- 20 seconds spent in phase 1
** clean from clean -- 46 seconds overall -- 20 seconds spent in phase 1

Let me talk about a couple of things about the tests.

The first set of tests for my make clone does not spawn processes. It
uses the C APIs of POSIX and WIN32 directly. The next set uses my new
built-in $(process ), which directly spawns the process without a
shell in between. The last set of tests of my make clone uses
$(shell ), which uses an intermediary shell to spawn the process, just
like Make is advertised to do. (See gen_tests.cpp in the appendix for
a more complete description.)

One thing to note: I remember that GNU Make is optimized so that if it
sees a single "simple" command, then it bypasses the shell entirely
and directly spawns the process. Similarly, my $(process ) primitive
does not spawn a shell and spawns the process directly. However, it
appears that my underlying process library, used to implement
$(shell ) and $(process ), written by me, is rather inefficient. I
need to look at it. If I were doing this again, I would just use
Boost. (However, I am not pursuing a Make clone any more, for the
aforementioned reasons.)

I just now noticed this inefficiency in my process spawning because I
wasn't really planning on using $(shell ) and $(process ) all that
often. I was planning on using built-ins to avoid a lot of process
spawning. These built-ins are one of the major motivating factors
behind me writing my make clone. The performance of using an external
shell for "read from file" and "write to file" was especially bad on
Windows. (That, and I really did not feel like learning in depth the C
code for GNU Make. It has some particularly annoying hacks and styles
devoted to avoiding string copies.) I also did it as a learning
exercise. I never expected that I would so outperform GNU Make on
Linux.

One can also see that my parsing code and string interpreter code is
in the neighborhood of 20 times faster than GNU Make's. Why? I do not
know. Figuring that out would involve profiling and looking at the GNU
Make source code in depth, and one of my explicit goals in writing
this clone has been to avoid looking at GNU Make's source in depth.
From what I recall of an actual production-ready makefile
implementation for incremental C++, this number was about the same.

Similarly, we can compare the "all from all" tests, and we see that my
phase 2 code (including dependency graph walking, where we stat files,
decide if a node's commands need to be run, etc.) is also in the
neighborhood of 10 to 20 times faster. Why? Again, I do not know.

The rest of this post is just an explanation of what my make clone is
and why I think this is a fair comparison (and appendix).

My make clone is in many ways a near drop-in replacement for GNU Make.
It's written in portable C++ and uses only the C++ standard, POSIX,
and WIN32 headers. It still interprets the makefiles; it doesn't
compile anything at runtime or do anything else fancy. It supports
recursive and simple variables. It supports a large portion of the GNU
Make built-in functions, including:
and, abspath, addprefix, addsuffix, basename, call, delete-files, dir,
error, eval, filter, filter-out, findstring, firstword, foreach, if,
index, info, lastword, notdir, or, patsubst, shell, sort, strip,
subst, suffix, value, warning, wildcard, word, wordlist, words.

I also support the additional new built-in functions:
append-to-file [especially useful to get around command line length
limitations with $(shell echo )], cat, cwd, def-recursive-var, eq,
eq-ci [equals case insensitive ASCII], eval-value [equivalent to eval
and value, but allows use of line numbers in the stack trace from the
original variable definition given to $(eval-value ...)],
foreach-line, index-ci, lowercase [ASCII], makedirs, namespace, neq,
neq-ci, not, print-to-file, process [allows spawning a process
directly without a shell], recursive-wildcard, remove-dirs, sleep,
sort, sort-ci, touch, uppercase [ASCII].

However, I do honestly admit that I do not have all of the
functionality of GNU Make, some of which skews the results in my
make's favor. Things which immediately jump out at me are:

1- I don't support vpath or VPATH. (I don't think this is a
significant contributor to the slowness when all the files of the test
are in a single directory, though.)

2- I do not have $(origin ) nor $(flavor ), which would make GNU Make
slower than mine. I don't believe this is a significant contributor to
the difference in observed speed, though.

3- My make does not have any implicit rules, nor pattern rules, nor
any other kind of rule which isn't the simple explicit kind. (However,
I don't think that these features would impose a significant penalty
when they are not used, as in this test.) (My clone, however, does
support rules with multiple targets made by a single command
invocation, which GNU Make does not.)
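(The GNU Make limitation mentioned in point 3, sketched with an
invented generator tool: a rule listing two targets is really two
independent rules sharing a recipe, so under -j the command may run
twice, once for each target, rather than once producing both files.

   a.cpp a.hpp : a.model ; codegen a.model
)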

#### ####
#### test_driver.sh contents

#! /bin/sh

date | tee foo.txt



echo '---- ls | wc' | tee -a foo.txt ; ls | wc | tee -a foo.txt ; date | tee -a foo.txt

echo ---- infamake -f infamakefile1.mk -j=8 -d=phase notarget | tee -a foo.txt ; \
  infamake -f infamakefile1.mk -j=8 -d=phase notarget 2>&1 | tee -a foo.txt ; \
  date | tee -a foo.txt

echo '---- ls | wc' | tee -a foo.txt ; ls | wc | tee -a foo.txt ; date | tee -a foo.txt

echo ---- infamake -f infamakefile1.mk -j=8 -d=phase all | tee -a foo.txt ; \
  infamake -f infamakefile1.mk -j=8 -d=phase all 2>&1 | tee -a foo.txt ; \
  date | tee -a foo.txt

echo '---- ls | wc' | tee -a foo.txt ; ls | wc | tee -a foo.txt ; date | tee -a foo.txt

echo ---- infamake -f infamakefile1.mk -j=8 -d=phase all | tee -a foo.txt ; \
  infamake -f infamakefile1.mk -j=8 -d=phase all 2>&1 | tee -a foo.txt ; \
  date | tee -a foo.txt

echo '---- ls | wc' | tee -a foo.txt ; ls | wc | tee -a foo.txt ; date | tee -a foo.txt

echo '---- echo "" > aasuaa' | tee -a foo.txt ; echo "" > aasuaa

echo ---- infamake -f infamakefile1.mk -j=8 -d=phase all | tee -a foo.txt ; \
  infamake -f infamakefile1.mk -j=8 -d=phase all 2>&1 | tee -a foo.txt ; \
  date | tee -a foo.txt

echo '---- ls | wc' | tee -a foo.txt ; ls | wc | tee -a foo.txt ; date | tee -a foo.txt

echo ---- infamake -f infamakefile1.mk -j=8 -d=phase clean | tee -a foo.txt ; \
  infamake -f infamakefile1.mk -j=8 -d=phase clean 2>&1 | tee -a foo.txt ; \
  date | tee -a foo.txt

echo '---- ls | wc' | tee -a foo.txt ; ls | wc | tee -a foo.txt ; date | tee -a foo.txt

echo ---- infamake -f infamakefile1.mk -j=8 -d=phase clean | tee -a foo.txt ; \
  infamake -f infamakefile1.mk -j=8 -d=phase clean 2>&1 | tee -a foo.txt ; \
  date | tee -a foo.txt


echo '---- ls | wc' | tee -a foo.txt ; ls | wc | tee -a foo.txt ; date | tee -a foo.txt

echo ---- infamake -f infamakefile2.mk -j=8 -d=phase notarget | tee -a foo.txt ; \
  infamake -f infamakefile2.mk -j=8 -d=phase notarget 2>&1 | tee -a foo.txt ; \
  date | tee -a foo.txt

echo '---- ls | wc' | tee -a foo.txt ; ls | wc | tee -a foo.txt ; date | tee -a foo.txt

echo ---- infamake -f infamakefile2.mk -j=8 -d=phase all | tee -a foo.txt ; \
  infamake -f infamakefile2.mk -j=8 -d=phase all 2>&1 | tee -a foo.txt ; \
  date | tee -a foo.txt

echo '---- ls | wc' | tee -a foo.txt ; ls | wc | tee -a foo.txt ; date | tee -a foo.txt

echo ---- infamake -f infamakefile2.mk -j=8 -d=phase all | tee -a foo.txt ; \
  infamake -f infamakefile2.mk -j=8 -d=phase all 2>&1 | tee -a foo.txt ; \
  date | tee -a foo.txt

echo '---- ls | wc' | tee -a foo.txt ; ls | wc | tee -a foo.txt ; date | tee -a foo.txt

echo '---- echo "" > aasuaa' | tee -a foo.txt ; echo "" > aasuaa

echo ---- infamake -f infamakefile2.mk -j=8 -d=phase all | tee -a foo.txt ; \
  infamake -f infamakefile2.mk -j=8 -d=phase all 2>&1 | tee -a foo.txt ; \
  date | tee -a foo.txt

echo '---- ls | wc' | tee -a foo.txt ; ls | wc | tee -a foo.txt ; date | tee -a foo.txt

echo ---- infamake -f infamakefile2.mk -j=8 -d=phase clean | tee -a foo.txt ; \
  infamake -f infamakefile2.mk -j=8 -d=phase clean 2>&1 | tee -a foo.txt ; \
  date | tee -a foo.txt

echo '---- ls | wc' | tee -a foo.txt ; ls | wc | tee -a foo.txt ; date | tee -a foo.txt

echo ---- infamake -f infamakefile2.mk -j=8 -d=phase clean | tee -a foo.txt ; \
  infamake -f infamakefile2.mk -j=8 -d=phase clean 2>&1 | tee -a foo.txt ; \
  date | tee -a foo.txt


echo '---- ls | wc' | tee -a foo.txt ; ls | wc | tee -a foo.txt ; date | tee -a foo.txt

echo ---- infamake -f infamakefile3.mk -j=8 -d=phase notarget | tee -a foo.txt ; \
  infamake -f infamakefile3.mk -j=8 -d=phase notarget 2>&1 | tee -a foo.txt ; \
  date | tee -a foo.txt

echo '---- ls | wc' | tee -a foo.txt ; ls | wc | tee -a foo.txt ; date | tee -a foo.txt

echo ---- infamake -f infamakefile3.mk -j=8 -d=phase all | tee -a foo.txt ; \
  infamake -f infamakefile3.mk -j=8 -d=phase all 2>&1 | tee -a foo.txt ; \
  date | tee -a foo.txt

echo '---- ls | wc' | tee -a foo.txt ; ls | wc | tee -a foo.txt ; date | tee -a foo.txt

echo ---- infamake -f infamakefile3.mk -j=8 -d=phase all | tee -a foo.txt ; \
  infamake -f infamakefile3.mk -j=8 -d=phase all 2>&1 | tee -a foo.txt ; \
  date | tee -a foo.txt

echo '---- ls | wc' | tee -a foo.txt ; ls | wc | tee -a foo.txt ; date | tee -a foo.txt

echo '---- echo "" > aasuaa' | tee -a foo.txt ; echo "" > aasuaa

echo ---- infamake -f infamakefile3.mk -j=8 -d=phase all | tee -a foo.txt ; \
  infamake -f infamakefile3.mk -j=8 -d=phase all 2>&1 | tee -a foo.txt ; \
  date | tee -a foo.txt

echo '---- ls | wc' | tee -a foo.txt ; ls | wc | tee -a foo.txt ; date | tee -a foo.txt

echo ---- infamake -f infamakefile3.mk -j=8 -d=phase clean | tee -a foo.txt ; \
  infamake -f infamakefile3.mk -j=8 -d=phase clean 2>&1 | tee -a foo.txt ; \
  date | tee -a foo.txt

echo '---- ls | wc' | tee -a foo.txt ; ls | wc | tee -a foo.txt ; date | tee -a foo.txt

echo ---- infamake -f infamakefile3.mk -j=8 -d=phase clean | tee -a foo.txt ; \
  infamake -f infamakefile3.mk -j=8 -d=phase clean 2>&1 | tee -a foo.txt ; \
  date | tee -a foo.txt


echo '---- ls | wc' | tee -a foo.txt ; ls | wc | tee -a foo.txt ; date | tee -a foo.txt

echo ---- make -j8 notarget | tee -a foo.txt ; make -j8 notarget 2>&1 | tee -a foo.txt ; date | tee -a foo.txt

echo '---- ls | wc' | tee -a foo.txt ; ls | wc | tee -a foo.txt ; date | tee -a foo.txt

echo ---- make -j8 all | tee -a foo.txt ; make -j8 all 2>&1 | tee -a foo.txt ; date | tee -a foo.txt

echo '---- ls | wc' | tee -a foo.txt ; ls | wc | tee -a foo.txt ; date | tee -a foo.txt

echo ---- make -j8 all | tee -a foo.txt ; make -j8 all 2>&1 | tee -a foo.txt ; date | tee -a foo.txt

echo '---- ls | wc' | tee -a foo.txt ; ls | wc | tee -a foo.txt ; date | tee -a foo.txt

echo '---- echo "" > aasuaa' | tee -a foo.txt ; echo "" > aasuaa

echo ---- make -j8 all | tee -a foo.txt ; make -j8 all 2>&1 | tee -a foo.txt ; date | tee -a foo.txt

echo '---- ls | wc' | tee -a foo.txt ; ls | wc | tee -a foo.txt ; date | tee -a foo.txt

echo ---- make -j8 clean | tee -a foo.txt ; make -j8 clean 2>&1 | tee -a foo.txt ; date | tee -a foo.txt

echo '---- ls | wc' | tee -a foo.txt ; ls | wc | tee -a foo.txt ; date | tee -a foo.txt

echo ---- make -j8 clean | tee -a foo.txt ; make -j8 clean 2>&1 | tee -a foo.txt ; date | tee -a foo.txt

echo '---- ls | wc' | tee -a foo.txt ; ls | wc | tee -a foo.txt ; date | tee -a foo.txt

#### ####
#### gen_tests.cpp

#include <iostream>
#include <fstream>
#include <string>
#include <vector>
#include <queue>
#include <memory>   // for auto_ptr
using namespace std;

struct Node
{ Node() : parent(0) {}
~Node() { for (int i=0; i<children.size(); ++i) delete children[i]; }
string name;
Node * parent; //does not own
vector<Node*> children; //this object owns these
private:
Node(Node const& ); //not copyable
Node& operator= (Node const& ); //not copyable
};

string& makeNextName(string& name)
{ while (name.size() < 2)
    name.push_back('a');
  for (int i=2; ; )
  { if (i >= name.size())
    { name.push_back('a');
      return name;
    }
    if (name[i] == 'z')   // carry into the next position
    { name[i] = 'a';
      ++i;
      continue;
    }
    ++name[i];
    return name;
  }
}

string getChildrenNameList(Node const* n)
{ string x;
for (int i=0; i<n->children.size(); ++i)
{ x += ' ';
x += n->children[i]->name;
}
return x;
}

ostream& print(ostream& out, Node* n)
{ out << "$(call macro," << n->name << "," << getChildrenNameList(n)
<< ")" "\n";
for (int i=0; i<n->children.size(); ++i)
print(out, n->children[i]);
return out;
}

int main()
{
int const cap = 25 * 1000;
int const branchingFactor = 75;

string name;

Node base;
base.name = makeNextName(name);

queue<Node*> q;
q.push(&base);

int numNodes = 1;
while (numNodes < cap)
{ Node * const next = q.front();
q.pop();
for (int i=0; i<branchingFactor; ++i)
{ auto_ptr<Node> child(new Node);
child->name = makeNextName(name);

++numNodes;
q.push(child.get());

next->children.push_back(child.get());
child.release();
}
}

ofstream gnumakefile("makefile");
ofstream infamakefile1("infamakefile1.mk");
ofstream infamakefile2("infamakefile2.mk");
ofstream infamakefile3("infamakefile3.mk");

gnumakefile <<
"all : ; $(info all) \n"
".PHONY : all " "\n"
"\n"
"clean : ; $(info clean) \n"
".PHONY : clean " "\n"
"\n"
"notarget : ; $(info notarget) \n"
".PHONY : notarget " "\n"
"\n"
"macro = $(eval $(value macro_impl))" "\n"
"define macro_impl" "\n"
" # $1 name of file" "\n"
" # $2 list of dependencies" "\n"
" " "\n"
" $1 : $2 ; @touch $@" "\n"
" " "\n"
" all : $1" "\n"
" " "\n"
" $1.clean : ; @rm -f $(basename $@)" "\n"
" .PHONY : $1.clean " "\n"
" clean : $1.clean " "\n"
"endef" "\n"
"\n"
;
infamakefile1 <<
".PHONY.all : ; $(info all)" "\n"
".PHONY.clean : ; $(info clean)" "\n"
".PHONY.notarget : ; $(info notarget)" "\n"
"\n"
"macro = $(eval $(value macro_impl))" "\n"
"define macro_impl" "\n"
" # $1 name of file" "\n"
" # $2 list of dependencies" "\n"
" " "\n"
" $1 : $2 ; $(touch $@)" "\n"
" " "\n"
" .PHONY.all : $1" "\n"
" " "\n"
" .PHONY.$1.clean : ; $(delete-files $(basename $@))" "\n"
" .PHONY.clean : .PHONY.$1.clean " "\n"
"endef" "\n"
"\n"
;

infamakefile2 <<
".PHONY.all : ; $(info all)" "\n"
".PHONY.clean : ; $(info clean)" "\n"
".PHONY.notarget : ; $(info notarget)" "\n"
"\n"
"macro = $(eval $(value macro_impl))" "\n"
"define macro_impl" "\n"
" # $1 name of file" "\n"
" # $2 list of dependencies" "\n"
" " "\n"
" $1 : $2 ; $(process touch $@)" "\n"
" " "\n"
" .PHONY.all : $1" "\n"
" " "\n"
" .PHONY.$1.clean : ; $(process rm $(basename $@))" "\n"
" .PHONY.clean : .PHONY.$1.clean " "\n"
"endef" "\n"
"\n"
;

infamakefile3 <<
".PHONY.all : ; $(info all)" "\n"
".PHONY.clean : ; $(info clean)" "\n"
".PHONY.notarget : ; $(info notarget)" "\n"
"\n"
"macro = $(eval $(value macro_impl))" "\n"
"define macro_impl" "\n"
" # $1 name of file" "\n"
" # $2 list of dependencies" "\n"
" " "\n"
" $1 : $2 ; $(shell touch $@)" "\n"
" " "\n"
" .PHONY.all : $1" "\n"
" " "\n"
" .PHONY.$1.clean : ; $(shell rm $(basename $@))" "\n"
" .PHONY.clean : .PHONY.$1.clean " "\n"
"endef" "\n"
"\n"
;

print(gnumakefile, &base);
print(infamakefile1, &base);
print(infamakefile2, &base);
print(infamakefile3, &base);

gnumakefile << "$(info $(shell date) -- done with phase 1)" "\n";
infamakefile1 << "$(info $(shell date) -- done with phase 1)" "\n";
infamakefile2 << "$(info $(shell date) -- done with phase 1)" "\n";
infamakefile3 << "$(info $(shell date) -- done with phase 1)" "\n";

gnumakefile.close();
if (!gnumakefile)
{ cerr << "error" << endl;
return 1;
}
infamakefile1.close();
if (!infamakefile1)
{ cerr << "error" << endl;
return 1;
}
infamakefile2.close();
if (!infamakefile2)
{ cerr << "error" << endl;
return 1;
}
infamakefile3.close();
if (!infamakefile3)
{ cerr << "error" << endl;
return 1;
}
}
 
J

Joshua Maurice

string& makeNextName(string& name)
{ while (name.size() < 2)
    name.push_back('a');
  for (int i=2; ; )
  { if (i >= name.size())
    { name.push_back('a');
      return name;
    }
    if (name[i] == 'z')
    { name[i] = 'a';
      ++i;
      continue;
    }
    ++name[i];
    return name;
  }
}


Also, yes, I know this works only for ASCII and similar encodings.
It's a throwaway program, and I was lazy. Fix it up if you want to use
it on a non-ASCII system.
 
J

Jorgen Grahn

I've been close enough to it to see your point ... the extra
limitations that come from "multi-site" are indeed scary.

But I've used ClearCase a lot, and while there *are* things that are
delegated, it's mostly the invisible housekeeping[1] (and the guru/
troubleshooter role). The daily use of the system has been a pure
project thing.

Yes, but you need someone (a) close enough to notice, (b) knowing it
doesn't have to be like that, and (c) with enough authority to slap
them without causing even more problems.
Can you come slap mine please? We have two parallel build machines,
one full clean, one this hacked version of incremental I set up. Every
such incremental build changes the version.h (and a couple other
version files like version.java), updating the build number, which has
the result that my incremental build tends to rebuild like 40% of all
of the code on every streaming incremental build because these version
files were changed. This was noted to management, but no time was
allocated to fix this.

In those situations I sometimes find an hour or two somewhere
(overtime, or even unpaid[2]) and fix it. You can probably come up
with various fixes, for example:
- rename version.h version.cpp
- make it contain a const char* build_number()
- replace all references to the build number with
      extern const char* build_number();
      foo(..., build_number(), ...);
- Show it to people and offer to merge it into mainline.

You need a bit of cred to pull it off, and the right kind of
workplace. And you better not introduce bugs or force people to change
their work habits.

/Jorgen

[1] Not trying to belittle what it takes to keep ClearCase
up and reasonably fast.
[2] Sometimes removing a source of daily frustration is worth
a few hours of your free time.
 
J

Jorgen Grahn

They'll argue that 95% incremental correctness is acceptable, just as
someone else has else-thread.

I think it was me, but the other 5% was the make "loopholes" we've
been talking about (source files being removed etc), not Makefiles
that are plain incorrect.
If you allow a build system where the
developer can incorrectly specify a build script, but it works most of
the time, management will not see a need to spend developer time
fixing it. That's why I want it near impossible for a developer to be
able to break incremental correctness short of maliciousness.

That's a realistic description of many organizations :-/ The costs
(broken builds, long compile-edit cycles, bored developers ...) are
hidden; if a build has always taken 2 hours, it's hard for management
to imagine that it could really take 2 minutes.

I guess I see that as the same fight as for type safety, const
correctness, thread safety ... whatever makes the C++ code safe to
work with, but has an initial cost and doesn't visibly contribute to
productivity.
I'm working on it. See else-thread for a description of my build
system.

I don't see how it's possible (technically, and also have it accepted
and so widely spread that it matters to people like me). But I'm not
exactly known for embracing new ideas, so don't listen to me!

/Jorgen
 
J

Joshua Maurice

I don't see how it's possible (technically, and also have it accepted
and so widely spread that it matters to people like me).  But I'm not
exactly known for embracing new ideas, so don't listen to me!

Keith H Duggar has done it quite well, I think. He hasn't specified
it fully enough for me to replicate it, but he has specified enough
that I'm convinced it meets all of my criteria for his use case.

Developers do not modify makefiles for their normal activities. It has
prebuilt macros (not make evil-value macros, but whole prebuilt
makefiles). (He said something about a database for handling link
dependencies. Sounds interesting. At the very least, there could just
be a single file describing link dependencies and nothing else, so
presumably it would be quite difficult either way for developers to
break it, and quite easy to fix if they did.)

Once you have everyone using instantiations of verified macros, the
rest is child's play. Keith's macros seem to handle most or all of my
corner cases, or at the very least could be made to handle all of
them, including includes hiding includes and removed cpp files causing
a relink of its output library.

First, it's not very portable. That's a big problem for my company
when we support basically every desktop and mainframe known to man (or
at least a large portion of them).

Second, from all of my testing, using GNU Make would be unacceptably
slow. As I have shown, other make clones could greatly outperform GNU
Make, but even then, they would be outperformed by a solution written
more in a compiled language and not an interpreted language. At least,
the last time I tried using makefile for a small section of my
company's codebase, the build system overhead was in the minutes to
check "out of date" aka phase 1.

Also, it's a little light on customization. He basically said that
their developers never need to add custom command line preprocessor
defines, nor change any other build options. My company's makefiles
have the occasional "hack" to overcome a compiler bug, to turn down
optimizations in this one particular file on this one system otherwise
the compile of that file would take hours alone, and so on. (Of
specific note is the handling of shared libraries for windows. AFAIK,
the standard solution is for each project to have its own special
preprocessor define, which is defined when compiling that shared lib
only, to allow correct usage of __declspec(dllimport) and
__declspec(dllexport). Then again, I suppose the scripts could auto
define this preprocessor define based on the soname, so actually
never mind on this point.)

Finally, most importantly, it only works well for C++ or similar
compilation models. As soon as you throw in Java, C++ code generation,
or anything else which doesn't fit the make model, then you lose
correct, fast incremental.

So, how do you fix this? See my earlier posts. The short version is to
write a domain specific language, a build specific language, where all
the normal developer can do is say "I have some cpp source files here
which I want turned into a shared lib here", "I have some java source
files here, with this classpath, which I want turned into a jar here",
etc. Do not give them a Turing complete programming language. Just
give them a simple declarative language where they can declare
instances of a predefined build type and specify the particular build
options (ex: includes, preprocessor defines, dependencies) to the
instantiation of the macro. Then, just have an engine into which you
can plug these macros, and then write all the macros you need.
Sometimes a developer will have a need not met by the current list of
macros, so he can write his own, or if it comes to that I'll have to
provide a general-purpose "exec" macro. (I've been thinking about it,
and I've been thinking whether I could expose such a thing and have
some level of
guarantees with incremental correctness. I don't think I can. As soon
as I expose the shell, they have their Turing complete programming
language, and they will inevitably break incremental correctness
through hackery and stupidity [ignorance], myself included.)

There are two insights. First is that file level dependency graphs
don't work all that well, but a "macro level" dependency graph would
work wonders, something very much like Ant. (Except less stupid. I am
particularly annoyed that you cannot just hit a switch and parallelize
all Ant tasks as much as possible within dependency constraints ala
GNU Make -j[#]. Instead, you have to identify beforehand the
particular sets of tasks which are to be executed in parallel. Far
less powerful, far more cumbersome, and error-prone.) Second, give a
bunch of prebuilt tasks, or macros, for the developer to use, which
have been vetted for incremental correctness when used together.
Again, very much like Ant at first glance, except the large majority
of Ant's actual tasks are not incrementally correct, whereas I'm
suggesting you write actually incrementally correct macros / tasks.
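
To make the first insight concrete, here is a minimal sketch (mine,
and the task names are invented; it is not code from my build tool) of
a macro-level dependency graph runner. It is just Kahn's topological
algorithm applied to whole tasks instead of files; a real engine would
hand the ready queue to a worker pool to get -j style parallelism:

// task_graph.cpp -- sketch of a "macro level" dependency graph runner:
// Kahn's algorithm over whole tasks (compile-lib, build-jar, ...) rather
// than over individual files. All task names below are invented.
#include <iostream>
#include <map>
#include <queue>
#include <set>
#include <string>
using namespace std;

int main()
{
    map<string, set<string> > deps;          // task -> tasks it depends on
    deps["codegen:model"];                   // no prerequisites
    deps["compile:foo"].insert("codegen:model");
    deps["jar:app.jar"].insert("codegen:model");
    deps["link:libfoo.so"].insert("compile:foo");

    // Count unfinished prerequisites and invert the edges.
    map<string, int> pending;
    map<string, set<string> > dependents;
    for (map<string, set<string> >::iterator t = deps.begin();
         t != deps.end(); ++t)
    { pending[t->first] = t->second.size();
      for (set<string>::iterator d = t->second.begin();
           d != t->second.end(); ++d)
        dependents[*d].insert(t->first);
    }

    queue<string> ready;                     // a worker pool in a real engine
    for (map<string, int>::iterator t = pending.begin();
         t != pending.end(); ++t)
      if (t->second == 0) ready.push(t->first);

    int ran = 0;
    while (!ready.empty())
    { string task = ready.front(); ready.pop();
      cout << "run " << task << "\n";        // here: execute the vetted macro
      ++ran;
      set<string>& ds = dependents[task];
      for (set<string>::iterator d = ds.begin(); d != ds.end(); ++d)
        if (--pending[*d] == 0) ready.push(*d);
    }
    if (ran != (int)pending.size())
      cerr << "cycle detected among tasks" << endl;
}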
 
K

Keith H Duggar

Keith H Duggar has done it quite well, I think. He hasn't specified
it fully enough for me to replicate it, but he has specified enough
that I'm convinced it meets all of my criteria for his use case.

Developers do not modify makefiles for their normal activities. It has
prebuilt macros (not make evil-value macros, but whole prebuilt
makefiles). (He said something about a database for handling link
dependencies. Sounds interesting. At the very least, there could just
be a single file describing link dependencies and nothing else, so
presumably it would be quite difficult either way for developers to
break it, and quite easy to fix if they did.)

The database is flat text in a simple format that looks like
this (some names redacted):

% DEPTYPE TGTPATH DEPPATH
- link apps/abc/abc libs/libbase.a
- link apps/abc/abc libs/libmath.a
...
- link libs/libmath.a libs/libbase.a

There are many such text databases ("tdb") housing the majority
of our data (not only build related but business related). They
are accessed through a set of command line tools. For example,
the dependency database can be dumped to stdout with the command:

tdbcat build.deps

That command processes one or more actual files to generate the
requested view. The files include override files that allow for
easy manual correction or augmentation.

The build system essentially runs

tdbcat build.deps | tdbtransclose

to project the transitive closure of dependencies, and processes
that to generate an ordered list of libraries for an app's linker
command, i.e. variables that look like:

LIBDEPS := libs/libprop.a libs/libmath.a libs/libbase.a

Note that we do not allow circular link dependencies, so if the
transitive closure results in any a -> a entries the build is
halted.
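
To illustrate the semantics (this is only a toy sketch of what the
closure step computes, not our actual tdbtransclose), a filter that
reads "- link TGTPATH DEPPATH" lines, closes the relation, and fails
on any a -> a entry might look like:

// transclose.cpp -- illustration only. Reads "- link TGT DEP" lines on
// stdin, prints the transitive closure in the same format, and exits
// nonzero if the closure contains a self dependency (a cycle).
#include <iostream>
#include <map>
#include <set>
#include <sstream>
#include <string>
using namespace std;

int main()
{
    map<string, set<string> > deps;          // tgt -> direct + derived deps
    string line;
    while (getline(cin, line))
    { istringstream in(line);
      string dash, type, tgt, dep;
      if (in >> dash >> type >> tgt >> dep && dash == "-" && type == "link")
        deps[tgt].insert(dep);
    }

    // Naive closure: keep folding in the deps of deps until a fixpoint.
    for (bool changed = true; changed; )
    { changed = false;
      for (map<string, set<string> >::iterator t = deps.begin();
           t != deps.end(); ++t)
      { set<string> grown = t->second;
        for (set<string>::iterator d = t->second.begin();
             d != t->second.end(); ++d)
        { map<string, set<string> >::iterator i = deps.find(*d);
          if (i != deps.end())
            grown.insert(i->second.begin(), i->second.end());
        }
        if (grown.size() != t->second.size())
        { t->second.swap(grown);
          changed = true;
        }
      }
    }

    for (map<string, set<string> >::iterator t = deps.begin();
         t != deps.end(); ++t)
      for (set<string>::iterator d = t->second.begin();
           d != t->second.end(); ++d)
      { if (*d == t->first)
        { cerr << "cycle: " << t->first << " -> " << *d << endl;
          return 1;
        }
        cout << "- link " << t->first << " " << *d << "\n";
      }
}

Generating the LIBDEPS ordering is then just a topological sort of the
closed graph.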

KHD
 
J

Jorgen Grahn

I don't see how it's possible (technically, and also have it accepted
and so widely spread that it matters to people like me). But I'm not
exactly known for embracing new ideas, so don't listen to me!

And I should say, when you have something and want to share it,
announce it on comp.software.config-mgmt. That's one group where I
know build tools are on topic.

/Jorgen
 
J

Jorgen Grahn

Second, from all of my testing, using GNU Make would be unacceptably
slow. As I have shown, other make clones could greatly outperform GNU
Make,

I have not seen that shown (I have missed a few postings), but is that
really so, and which clones?
but even then, they would be outperformed by a solution written
more in a compiled language and not an interpreted language.

I doubt it. I believe any sane make's performance is limited by
stat()ing files and running external programs. Once you have minimized
the first one, there is not much more power to squeeze out of it.
At least,
the last time I tried using makefile for a small section of my
company's codebase, the build system overhead was in the minutes to
check "out of date" aka phase 1.

Also surprising, unless you're on Windows or have some other (to me)
unusual thing in your environment. I once wrote a sane Makefile
(replaced insane ones) for a fairly big project (maybe 500 translation
units and 1000 header files), and had it run on a *really* slow file
system (ClearCase dynamic view). That overhead was a fraction of a
second -- I expected much more.
Finally, most importantly, it only works well for C++ or similar
compilation models. As soon as you throw in Java, C++ code generation,
or anything else which doesn't fit the make model, then you lose
correct, fast incremental.

I'm not sure why you say C++ code generation doesn't fit the make model.
Make even has builtin rules for tools like lex and yacc. Is it because
you cannot find that part of the dependency graph without actually
generating the code?
So, how do you fix this? See my earlier posts. The short version is to
write a domain specific language, a build specific language, where all
the normal developer can do is say "I have some cpp source files here
which I want turned into a shared lib here", "I have some java source
files here, with this classpath, which I want turned into a jar here",
etc. Do not give them a Turing complete programming language. Just
give them a simple declarative language where they can declare
instances of a predefined build type and specify the particular build
options (ex: includes, preprocessor defines, dependencies) to the
instantiation of the macro. Then, just have an engine into which you
can plug these macros, and then write all the macros you need.
Sometimes a developer will have a need not met by the current list of
macros, so he can write his own, or if it comes to that I'll have to
provide a general-purpose "exec" macro. (I've been thinking about it,
and I've been thinking whether I could expose such a thing and have
some level of
guarantees with incremental correctness. I don't think I can. As soon
as I expose the shell, they have their Turing complete programming
language, and they will inevitably break incremental correctness
through hackery and stupidity [ignorance], myself included.)

Most people (at least me) expect a build tool to be able to run
programs which the build tool has never heard of. I often have a Perl
script generate some table, a shell script formatting the documentation,
or some tool prepare the distributable stuff (e.g. RPMs for some
Linuxes, or some odd format for loading into PROM).

But perhaps some or all of those steps can be isolated from the
compilation and linking ... into a trivial Makefile.

/Jorgen
 
J

Joshua Maurice

I have not seen that shown (I have missed a few postings), but is that
really so, and which clones?  

Sorry. I quoted some tests run by me else-thread. Please see that.
It's not shown all that reliably because I can't post all of the
source, so I guess you'll just have to trust me. I'm working on
getting my company to open source it, but no guarantees on time.
I doubt it. I believe any sane make's performance is limited by
stat()ing files and running external programs. Once you have minimized
the first one, there is not much more power to squeeze out of it.

Please see my tests else-thread. Here's the google groups link:
http://groups.google.com/group/comp.lang.c++/msg/843d5f7230cccc00
Also surprising, unless you're on Windows or have some other (to me)
unusual thing in your environment.  I once wrote a sane Makefile
(replaced insane ones) for a fairly big project (maybe 500 translation
units and 1000 header files), and had it run on a *really* slow file
system (ClearCase dynamic view). That overhead was a fraction of a
second -- I expected much more.

My company builds on windows and most other common Unix-like OSs and a
lot of mainframes. The particular product which I work on has about
25,000 source files in its build, and growing.
Finally, most importantly, it only works well for C++ or similar
compilation models. As soon as you throw in Java, C++ code generation,
or anything else which doesn't fit the make model, then you lose
correct, fast incremental.

I'm not sure why you say C++ code generation doesn't fit the make model.
Make even has builtin rules for tools like lex and yacc. Is it because
you cannot find that part of the dependency graph without actually
generating the code?
So, how do you fix this? See my earlier posts. The short version is to
write a domain specific language, a build specific language, where all
the normal developer can do is say "I have some cpp source files here
which I want turned into a shared lib here", "I have some java source
files here, with this classpath, which I want turned into a jar here",
etc. Do not give them a Turing complete programming language. Just
give them a simple declarative language where they can declare
instances of a predefined build type and specify the particular build
options (ex: includes, preprocessor defines, dependencies) to the
instantiation of the macro. Then, just have an engine into which you
can plug these macros, and then write all the macros you need.
Sometimes a developer will have a need not met by the current list of
macros, so he can write his own, or if it comes to that I'll have to
provide a general-purpose "exec" macro. (I've been thinking about it,
and I've been thinking whether I could expose such a thing and have
some level of
guarantees with incremental correctness. I don't think I can. As soon
as I expose the shell, they have their Turing complete programming
language, and they will inevitably break incremental correctness
through hackery and stupidity [ignorance], myself included.)

Most people (at least me) expect a build tool to be able to run
programs which the build tool has never heard of. I often have a Perl
script generate some table, a shell script formatting the documentation,
or some tool prepare the distributable stuff (e.g. RPMs for some
Linuxes, or some odd format for loading into PROM).

But perhaps some or all of those steps can be isolated from the
compilation and linking ... into a trivial Makefile.

Let me repeat what I have said else-thread. A "true" correct
incremental build system should do only correct incremental builds. It
should not be capable of doing incorrect incremental builds; that is,
all incremental builds should produce output equivalent to full clean
builds.

This includes: when a developer removes a cpp source file, it should
remove its corresponding stale obj file, and it should relink its
corresponding lib.
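
The check itself needs very little logic. A sketch, assuming a flat
layout where foo.cpp compiles to foo.o (a real tree needs the
source-to-object mapping, which the build system already has):

// stale_objs.cpp -- sketch of the "removed source => remove its obj and
// relink" rule. Assumes srcdir/foo.cpp compiles to objdir/foo.o.
#include <dirent.h>
#include <sys/stat.h>
#include <cstdio>
#include <string>
using namespace std;

// Removes .o files whose .cpp is gone; returns true if a relink is needed.
bool removeStaleObjects(string const& srcdir, string const& objdir)
{
    bool mustRelink = false;
    DIR* dir = opendir(objdir.c_str());
    if (!dir) return false;
    while (dirent* e = readdir(dir))
    { string name = e->d_name;
      if (name.size() < 2 || name.substr(name.size() - 2) != ".o")
        continue;
      string src = srcdir + "/" + name.substr(0, name.size() - 2) + ".cpp";
      struct stat st;
      if (stat(src.c_str(), &st) != 0)       // the source no longer exists
      { string obj = objdir + "/" + name;
        fprintf(stderr, "stale: %s (removing)\n", obj.c_str());
        remove(obj.c_str());
        mustRelink = true;                   // a stale member forces a relink
      }
    }
    closedir(dir);
    return mustRelink;
}

int main(int argc, char** argv)
{
    if (argc != 3)
    { fprintf(stderr, "usage: stale_objs srcdir objdir\n");
      return 2;
    }
    return removeStaleObjects(argv[1], argv[2]) ? 1 : 0;
}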

I believe that this is rather impractical when everyday developer
actions involve modifying the build scripts when the build scripts are
written in a Turing complete language like GNU Make. As Keith has
shown, you can use GNU Make to do a good incremental build system, but
in so doing, developers do not generally modify the makefiles
themselves.

For the specific question, C++ code generation. I specifically
mentioned else-thread that it does not work well if the output of the
code generation is not known at makefile parse time. For example, my
company does codegen from Rose model files to cpp source files, and to
java source files. This is done to facilitate serialization of object
graphs between the cpp and java. This is a one to many build step: a
single Rose file can have many cpp files as output. Thus, when make is
parsing the makefile, it cannot tell which obj files are out of date
and should be deleted. No one knows which obj files are out of date
until actually running the code gen. The exact order of events needs
to be "generate cpp code", "check for out of date obj files and lib",
and "relink if necessary the lib". I don't see a particularly
intuitive way to do this in GNU Make. As mentioned else-thread, I
suppose I could put the logic for detecting stale object files in the
same make rule command as the Rose code generation, but this doesn't
seem like the best of ideas. Also, suppose that the Rose file itself
was removed, but other checked-in cpp source code for that lib was
still there: I don't see offhand how I could check for stale obj files
if the developer just removes the Rose file.

Then we also have the difficulty of checking if a new header file hid
an old header file on some search path. This you really could not just
add to the Rose code generation command. The logic to check for header
file hiding needs to be at the point of compilation, not at the header
file generation.
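
To make the hiding check concrete: record, per translation unit, which
file each #include resolved to on the previous build; if the same name
now resolves to an earlier directory on the search path, a new header
has hidden the old one and the unit must rebuild. A sketch of just the
resolve-and-compare step (all paths below are invented):

// header_hiding.cpp -- sketch: resolve an include name against the
// search path, in order, and compare with the path recorded last build.
#include <sys/stat.h>
#include <iostream>
#include <string>
#include <vector>
using namespace std;

// First directory on the path containing 'name', or "" if none.
string resolveInclude(vector<string> const& searchPath, string const& name)
{
    for (size_t i = 0; i < searchPath.size(); ++i)
    { string candidate = searchPath[i] + "/" + name;
      struct stat st;
      if (stat(candidate.c_str(), &st) == 0)
        return candidate;
    }
    return "";
}

// Out of date if the include now resolves somewhere new -- which covers
// the case of a new header appearing earlier on the path.
bool includeMoved(vector<string> const& searchPath,
                  string const& name, string const& recordedPath)
{
    return resolveInclude(searchPath, name) != recordedPath;
}

int main()
{
    vector<string> path;
    path.push_back("local/include");
    path.push_back("thirdparty/include");
    cout << includeMoved(path, "util.hpp", "thirdparty/include/util.hpp")
         << endl;   // 1 if local/include/util.hpp now hides the old one
}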

The short version is that you need some logic to run on every build,
after certain incremental build steps but before others, and GNU Make
does not support this at all.
Most people (at least me) expect a build tool to be able to run
programs which the build tool has never heard of. I often have a Perl
script generate some table, a shell script formatting the documentation,
or some tool prepare the distributable stuff (e.g. RPMs for some
Linuxes, or some odd format for loading into PROM).

But no build tool actually does this. At best, they just provide a
framework for the new compilation step. GNU Make just provides a
framework. The build tool which I'm writing just provides a framework.
However, for actual incremental correctness, and for applicability to
lots of different kinds of build steps like my company uses, the make
model - file level dependency graph with cascading rebuilds without
termination conditions - does not work well at all, at least for my
company's uses.
 
J

Jorgen Grahn

....

For the specific question, C++ code generation. I specifically
mentioned else-thread that it does not work well if the output of the
code generation is not known at makefile parse time. For example, my
company does codegen from Rose model files to cpp source files, and to
java source files. This is done to facilitate serialization of object
graphs between the cpp and java. This is a one to many build step: a
single Rose file can have many cpp files as output. Thus, when make is
parsing the makefile, it cannot tell which obj files are out of date
and should be deleted. No one knows which obj files are out of date
until actually running the code gen. The exact order of events needs
to be "generate cpp code", "check for out of date obj files and lib",
and "relink if necessary the lib". I don't see a particularly
intuitive way to do this in GNU Make.

I tend to blame the company developing the tool in cases like these. I
know Rational Rose (not Rose Realtime, which I hear is different and
better), and I have always loathed it for its lack of support for sane
version control, parallel development etc.

It never occurred to me because I have never used it for code
generation, but I suppose that in the same way it lacks support for
building. Perhaps when you generate a lot of source files at random,
you should at the same time generate a Makefile fragment describing
the dependencies between them. Perhaps the code generator should
follow the same policy as a Makefile build and not touch a generated
header file unless it is actually changed.

So I'm defining tools that break Make as bad tools ... which of course
doesn't help people who are stuck with them :-/

Maybe it would help somewhat to wrap the code generator in a shell
script which only touches the regenerated .cpp/.h files which have
actually changed (and removes all of them if the generation fails).
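
The "only touch what changed" part of such a wrapper is a few lines in
any language; a sketch in C++ for consistency with the rest of the
thread, though cmp + mv in a shell script does the same job:

// write_if_changed.cpp -- replace 'dest' with freshly generated output
// only if the contents differ, so make never sees a new timestamp on
// identical generated code.
#include <cstdio>
#include <fstream>
#include <iostream>
#include <iterator>
#include <string>
using namespace std;

string slurp(char const* path)
{
    ifstream in(path, ios::binary);
    return string(istreambuf_iterator<char>(in),
                  istreambuf_iterator<char>());
}

int main(int argc, char** argv)
{
    if (argc != 3)
    { cerr << "usage: write_if_changed fresh dest" << endl;
      return 2;
    }
    ifstream destIn(argv[2], ios::binary);
    if (destIn && slurp(argv[1]) == slurp(argv[2]))
    { remove(argv[1]);                  // identical: keep the old timestamp
      return 0;
    }
    remove(argv[2]);                    // changed (or new): move fresh over
    return rename(argv[1], argv[2]) == 0 ? 0 : 1;
}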
As mentioned else-thread, I
suppose I could put the logic for detecting stale object files in the
same make rule command as the Rose code generation, but this doesn't
seem like the best of ideas. Also, suppose that the Rose file itself
was removed, but other checked-in cpp source code for that lib was
still there: I don't see offhand how I could check for stale obj files
if the developer just removes the Rose file.

Then we also have the difficulty of checking if a new header file hid
an old header file on some search path. This you really could not just
add to the Rose code generation command. The logic to check for header
file hiding needs to be at the point of compilation, not at the header
file generation.

My assertion early in this thread was that such things (the "make
loopholes") can be avoided (don't use many include search paths)
and/or detected manually when they happen (rebuild from scratch when
files disappear from version control). I don't think I saw you
explaining why that isn't good enough, or did I miss that?

....

[About being able to specify other actions than compiling and linking]
But no build tool actually does this. At best, they just provide a
framework for the new compilation step. GNU Make just provides a
framework. The build tool which I'm writing just provides a framework.

Yes, but it seemed to me you considered *not* providing that. That's
why I pointed out that it's important to many of us.


It seems to me that you have a pretty narrow focus and don't want to
listen to objections a lot. Actually, I think that's fine. That's what
*I* do when I have an idea and want to summon the energy to do
something about it.

/Jorgen
 
