No unanswered question

  • Thread starter Alf P. Steinbach /Usenet

Alf P. Steinbach /Usenet

Occasionally I fire up Thunderbird and look in [comp.lang.c++] to see if perhaps
there is some question that I could answer.

But no.

There are always new postings, but even if a question was asked only a
minute ago, it has already been answered!

Argh.

OK, I'll ask a question myself: what is a good way to implement something where
C++ event handlers can be specified in an XML definition of a GUI?


Cheers,

- Alf
 

Andrea

Alf said:
> Occasionally I fire up Thunderbird and look in [comp.lang.c++] to see if
> perhaps there is some question that I could answer.
>
> But no.
>
> There are always new postings, but even if a question was asked only a
> minute ago, it has already been answered!

there's one about patterns, give it a shot

> Argh.
>
> OK, I'll ask a question myself: what is a good way to implement something
> where C++ event handlers can be specified in an XML definition of a GUI?

I don't understand your question
 

Jonathan Lee

> OK, I'll ask a question myself: what is a good way to implement something
> where C++ event handlers can be specified in an XML definition of a GUI?

I used to use XUL with JavaScript to accomplish something like this.
I suppose you could define a mapping between XUL widgets and the
host GUI, and between javascript events and C++. I'm sure that's
effectively what happens anyway.

--Jonathan
 

Alf P. Steinbach /Usenet

* Andrea, on 08.07.2010 23:36:
> Alf said:
>> Occasionally I fire up Thunderbird and look in [comp.lang.c++] to see if
>> perhaps there is some question that I could answer.
>>
>> But no.
>>
>> There are always new postings, but even if a question was asked only a
>> minute ago, it has already been answered!
>
> there's one about patterns, give it a shot

Well, OK, done, but really it isn't any fun replying to malformed questions.

> I don't understand your question

If you had used XUL you would have. :)


Cheers,

- Alf
 

Joshua Maurice

> * Andrea, on 08.07.2010 23:36:
> [snip: Alf and Andrea's exchange, quoted in full above]
>
> If you had used XUL you would have. :)

At great risk of being off-topic (though I would argue it's quite
important to using the C++ language, and thus on topic), here's a fun
question for you. (Well, a series of questions.) Why is there no
incrementally correct build system, or easy to use incrementally
correct build framework, for C, C++, and Java? As far as I can tell,
all publicly available solutions fail basic incremental correctness
tests.

Let me further define the problem.

A build is the act of following a set of steps, a plan, a process, of
creating executable "files" from source "files".

Let me try to define incremental. A user has a codebase, a local view
of source control, a bunch of source files on disk. The source
includes the build script files as well, such as the makefiles. The
user does a full clean build. The user then makes some changes to the
source, such as adding, removing, or editing source files (which
include build script files). The user then does another build which
selectively skips (some) portions of the full build that are
unnecessary because they would produce output equivalent to the
already existing files. This partial build is called an incremental
build.

A correct incremental build is an incremental build which produces
output files equivalent to what a full clean build would produce. An
incremental build process, or incremental build system, is
(incrementally) correct if it can only produce correct incremental
builds, that is, if it will always produce output equivalent to a full
clean build.

An incremental build can be done by hand. A build system is just a
build process, a plan to do a build, a set of actionable items to do a
build. The dependency analysis can be done manually. However, such
analysis tends to take longer than just a full clean build, and
mistakes can be made by the human doing the analysis, so it's not really
a correct build system either. Thus any correct incremental build must
automate the tracking of dependencies.

Under that definition, all the build systems and build frameworks
known to me are not incrementally correct, to varying degrees.

The common GNU Make solution described in Recursive Make Considered
Harmful for C and C++ is the closest, but still misses out on corner
cases, including:
1- Removing a C++ source file when using wildcards will not result
in a relink of its library or executable.
2- Adding a new include file which "hides" another include file on an
include path will not result in a recompile of all corresponding
object files.
3- A change to a makefile itself won't always be caught, such as a
change to a command-line preprocessor macro definition.
4- Other various changes to the makefiles which might inadvertently
break incremental correctness.

One might argue that 3, and to a larger extent 4, are outside the
scope of a build system. I disagree with that assessment. When I'm
working on my company's large codebase, and I do a sync to changelist,
this includes changing makefiles which I know nothing about. I want
the build system to correctly incrementally build affected files
without requiring a full clean build. However, with the common GNU
Make solution described in Recursive Make Considered Harmful, this
will not happen; the build can be incorrect.

I'll skip the in-depth discussion of various other publicly available
build systems, but as far as I can tell, they are all incorrect under
this set of criteria.

So, my question is: is there some build system for C, C++, and Java,
extensible to other sane programming languages, which is
incrementally correct, and which I somehow glossed over?

Why is there no incrementally correct build system out there? I would
argue that incremental builds represent the most effective approach to
decreasing build times. If your build is taking too long, you can
throw hardware at it, parallelization at it, distributed systems on a
grid at it, throw better faster compilers at it, etc., but all of
these approaches are just taking off a coefficient of the build.
Incremental builds tend to result in build times which are
proportional to the size of the change, not the size of the code base,
which makes them asymptotically much faster than any other possible
change to the build. (pimpl is an interesting exception: pimpl does
improve full clean build times by more than just some coefficient by
reducing the size of the effective source given to the compiler.
However, pimpl also helps incremental builds by decreasing the number
of dependencies in the build.)

Finally, if I manage to get my company's legal to let me open source
my own little build tool which I've been writing myself in my spare
time, what would I license it under? (I was leaning toward the GNU GPL.) Where
would I put it up to get people to actually consider using it?

Perhaps more generally, who should I bug about the huge shortcoming in
all modern build technology?
 

Öö Tiib

> OK, I'll ask a question myself: what is a good way to implement something
> where C++ event handlers can be specified in an XML definition of a GUI?

How is that XML used? If the GUI is generated from it, then the event
handling code can also be generated. If the XML is loaded at run time, then
there should be some sort of common event handling interface that accepts
strings and ints.
 

Vladimir Jovic

Alf said:
> Occasionally I fire up Thunderbird and look in [comp.lang.c++] to see if
> perhaps there is some question that I could answer.
>
> But no.
>
> There are always new postings, but even if a question was asked only a
> minute ago, it has already been answered!
>
> Argh.

Tough luck. Press that "Get Mail" button faster ;)

> OK, I'll ask a question myself: what is a good way to implement something
> where C++ event handlers can be specified in an XML definition of a GUI?

Take a look at this library :
http://code.google.com/p/pococapsule/
 

Gil

<connections>
 <connection>
  <sender>Form</sender>
  <signal>customContextMenuRequested(Point)</signal>
  <receiver>Form</receiver>
  <slot>showContextMenu()</slot>
  <hints>
   <hint type="sourcelabel">
    <x>199</x>
    <y>149</y>
   </hint>
   <hint type="destinationlabel">
    <x>199</x>
    <y>149</y>
   </hint>
  </hints>
 </connection>
 <connection>
  <sender>horizontalSlider</sender>
  <signal>sliderReleased()</signal>
  <receiver>calendarWidget</receiver>
  <slot>showMonth()</slot>
  <hints>
   <hint type="sourcelabel">
    <x>190</x>
    <y>197</y>
   </hint>
   <hint type="destinationlabel">
    <x>141</x>
    <y>95</y>
   </hint>
  </hints>
 </connection>
</connections>
 

cpp4ever

> [snip: Alf, Andrea, and Joshua Maurice's exchange, quoted in full above]

You do know Linus Torvalds, (the original Linux guy), created his own
version control system, (Git), because nothing met his needs. I wish you
the best of luck with your build system, but as folks become ever more
sophisticated, no doubt the wish list will change. Hopefully, the useful
C/C++ code base that has built up over the years will not become
obsolete too soon, (although I suspect some businesses like it that way).
 

Joshua Maurice

[Snipping my incremental build rant]
> You do know Linus Torvalds, (the original Linux guy), created his own
> version control system, (Git), because nothing met his needs. I wish you
> the best of luck with your build system, but as folks become ever more
> sophisticated, no doubt the wish list will change. Hopefully, the useful
> C/C++ code base that has built up over the years will not become
> obsolete too soon, (although I suspect some businesses like it that way).

All I want is what every developer wants: to be able to make an
arbitrary change to "the source", and get an always-correct
incremental build. I disagree that there has been this "ever more
sophisticated" trend. This simple requirement existed back in the
first days of make, and it still exists now. It's just that the
original make author either:
- purposefully punted because he decided that makefiles are not source
and instead part of the implementation of the build system (but now,
makefiles effectively are source in a large project as no single
person understands all of the makefiles in a 10,000 source file
project),
- or purposefully punted on some deltas because he considered being
100% correct too hard,
- or as I think more likely, he just didn't realize that a build
system based strictly on a file dependency DAG is insufficient for
incremental correctness.

Most people don't actually realize that all build systems out there
are not 100% incrementally correct. Some think that theirs actually is
correct. I'm wondering why this is the case, and why developers put up
with this sad state of affairs. While I'm not the first to realize
this, judging from various papers it's not commonly known. Even
Recursive Make Considered Harmful claims / assumes that a file
dependency DAG is sufficient for incremental correctness (and it's
not).
http://miller.emu.id.au/pmiller/books/rmch/

I have stumbled across a paper which actually does address these
concerns, and notes that all current build systems fail incremental
correctness tests. Oddly enough, it is in the context of Java, though
a lot of its ideas also apply to C++ builds: "Capturing Ghost
Dependencies In Java",
http://www.jot.fm/issues/issue_2004_12/article4.pdf
 

Joshua Maurice

> Hopefully, the useful
> C/C++ code base that has built up over the years will not become
> obsolete too soon, (although I suspect some businesses like it that way).

Oh yes. I am not advocating changing C++ itself at all. I am merely
advocating abandoning GNU Make and all other incorrect incremental
build systems in favor of correct incremental build systems. Old
projects which use Make can continue to use Make, but new projects
would hopefully have build scripts of an incrementally correct build
system.
 

cpp4ever

> [snip: Joshua Maurice's reply to cpp4ever, quoted in full above]

You are correct about current build systems not handling all incremental
changes correctly; I've been programming long enough to have come across
that problem. Without more thought on this topic I'm not entirely sure
how to ensure incremental changes are correctly handled, but it sounds
like you'd need to generate some sort of relationship graph. As long as
the overhead in doing this is not too time consuming, it is worthwhile.
Then again, if the relationships are becoming that complex, perhaps the
code design is poor and needs to be redone. Again, I wish you every
success in your endeavours with this.

cpp4ever
 

Joshua Maurice

> You are correct about current build systems not handling all incremental
> changes correctly; I've been programming long enough to have come across
> that problem. Without more thought on this topic I'm not entirely sure
> how to ensure incremental changes are correctly handled, but it sounds
> like you'd need to generate some sort of relationship graph. As long as
> the overhead in doing this is not too time consuming, it is worthwhile.
> Then again if the relationships are becoming that complex, perhaps the
> code design is poor and needs to be redesigned.

I'm leaning towards the following design, and I've implemented
something based on this design. My design is a bastard mix of concepts
from Ant, Maven, and GNU Make.

First, my goal, using the terms defined above, is to guarantee
incremental correctness over all possible deltas which can be checked
into source control.

This specifically declares some things to be outside the scope of
incremental correctness: a correct and bug-free OS, correct versions
of gcc and other compilers, and correctly installed "third party
libraries" which are installed apart from the project in question and
the project's source control system. This also excludes the build
system implementation itself. That is, most makefiles (or their
equivalent) will be checked into source control, so they must be
covered by the incremental correctness guarantees, but the make
implementation itself is not checked into the same source control as
the project in question, so the make implementation is outside the
guarantee of correctness; the incremental correctness guarantee is
contingent on the correctness of the build system implementation.

With that out of the way, it follows pretty quickly that a free-form
language like GNU Make's makefile language is unsuitable for this
purpose. Apart from the design considerations of a file-level
dependency graph, the Turing-complete nature of GNU Make means that we
cannot guarantee incremental correctness when developers can
arbitrarily modify makefiles.

What is needed is a very strict and narrow build script language à la
idiomatic Maven, and to a lesser degree idiomatic Ant. The goal is a
build script language where the developer "instantiates" a template or
a macro from a prebuilt list of macros. A macro would correspond to a
"build type", like "cpp dll" or "java jar". The macro definition
itself would not be in the source control repository of the project in
question, and thus the incremental correctness guarantee would be
contingent on the correctness of the macro implementations. Great care
must be taken when changing or adding new macros as it could break
incremental correctness, but this would not be a normal developer
activity.

This system also has the desirable property that it removes a lot of
clutter from the build scripts. In my company's current build scripts,
especially the vcproj files, there is a very low "information
content". There is much duplication of settings. Sometimes the
settings differ. It is difficult, if not nearly impossible, to find
the variations, and nearly impossible to determine why one project's
build differs from another's.

With these macros, all of the common configuration would be moved to
a common place. This allows for easier auditing of current build
options, as only the deltas from the default would be in the build
script files (delta used here in a different sense than the deltas of
incremental builds). It also allows for useful and easy extensions and
rewrites. A simple example: suppose you want to publish Javadocs. In an
Ant system, you would have to add a new Javadoc task to each build
script, whereas if you used Ant macros, or my macros, you could just
add a new piece to the macro which all the build scripts use. I go one
step further than Ant and require all developers to use the prebuilt,
presumably correct, macros. (Ant's prebuilt tasks are overwhelmingly
not incrementally correct, including its depends task.)
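For illustration, a build script in such a system might look something like this; the syntax and element names are entirely hypothetical, invented here to show the shape of "instantiate a macro, supply only the deltas", since the post does not show the actual format:

```xml
<!-- hypothetical build script: one macro instantiation per "build type" -->
<build-script>
  <instantiate macro="cpp-dll" name="networklib">
    <include-path>../common/include</include-path>
    <define>NETWORK_EXPORTS</define>
    <depends-on>utillib</depends-on>
    <!-- everything not stated here comes from the macro's defaults -->
  </instantiate>
</build-script>
```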

Each build script would give arguments to the macros, things such as
include path, library dependencies, preprocessor defines, etc. The
build system would parse it all, get all of the instantiated macros,
and then create tasks for each macro. Ex: a macro might be "java files
to jar". (Sorry I didn't use C++ as an example, but the C++ case is
actually more complex.) Each macro invocation would create two tasks,
one task to call javac, and one task to call jar on the produced class
files. In the future, this macro might be edited to include another
task, a Javadoc task. The macro implementation contains information to
associate execution time dependencies between these tasks. Once all
the tasks are created, the build engine can then execute these tasks
in DAG order, parallelizing as possible. It's up to each individual
task to guarantee incremental correctness.

GNU Make has all of the incremental logic hidden away in its
implementation, in its dependency graph logic. However, this is
insufficient. In my system, this logic is encoded in the task
implementations. There will be a very small number of very stable task
implementations. It would not be usual practice for a developer to
modify them. Thus there is some hope that they could be kept correct.

Let me also note that I don't see any particular reason that this
should be much slower than GNU Make's approach of keeping all of the
incremental logic isolated to one place. Sure, the implementation code
is a little more complex, but the actual actions, such as filesystem
accesses, will be of the same order. As an example, for a real portion
of code of my company's product, roughly 4300 java files, on a
standard 4 core desktop, this dependency analysis finishes in less
than 5 seconds. An optimization already in place which skips detailed
analysis of each javac unit when all dependencies have not changed
reduces that to about 2 seconds. For the same codebase, my build
system does a full clean build in about 200 seconds to 340 seconds
depending on the state of the caches of the hard disk (aka hot vs cold
start), and Maven does it in about 700 seconds. (However, this
comparison is somewhat cheating, as my build system parallelizes the
build whereas Maven does not.) When I picked files which I expected to
have the worst case incremental build time, the worst I managed was
about 60 seconds for a single file modification. (No time given for
Maven, as Maven isn't incrementally correct, so the number is
meaningless.) As noted else-thread, I expect this time to remain
relatively independent of the size of the code base, though
particulars do matter. This is also Java dependency analysis, which
when done correctly, is a lot more complex than the dependency
analysis required for C++ compilation, so I would expect it to be even
faster for the equivalent amount of C++ source files.

A slightly longer explanation of my build system for java: The java
code is broken up into different directories which will end up in
different jars. There are about 100 jars for those Java files. The
dependency cascade goes without termination to the boundary of this
javac task, taking the conservative approach, then executes javac. It
begins the analysis anew for the next javac task, in effect allowing a
termination to the dependency cascade. This is required because of the
possible circular references in the java code, because developers
frequently do not specify java to java file dependency information,
and because javac is quite expensive to invoke so the cost is
amortized over lots of java files. This seems to result in a good
degree of parallelization and a good degree of incremental while still
having a fast build.

C++ has an entirely different model. Each cpp dir has its own macro
invocation, each of which creates a single task. When that task
executes during dependency graph traversal, it analyzes which cpp
files are out of date, then creates new tasks for the out of date cpp
files, and adds those tasks to the dependency graph. To be clear, I am
modifying the graph during graph traversal. This allows cool things
like creating cpp files from a black box process (such as unzipping)
and compiling those in parallel (something which would be quite
difficult in GNU Make while still preserving a global cap on the
number of active build jobs), and aggregating multiple cpp files into
a single compiler invocation. I've read that some compilers, IIRC
notably Microsoft's Visual C++ compiler, have a large startup cost,
so if more than one cpp file is given to the compiler, say in
batches of 5, this greatly speeds up the build.

While implementing some common tasks (such as compile cpp, link obj
files, compile java files, make jar, make zip, unzip, download a file,
etc.), I have been seeing some commonalities between tasks. I still
don't see a good way to factor out some of these commonalities, but
I'm trying. Still, the logic, the hard implementation stuff, is
centralized to a small set of tasks instead of spread out across all
the build scripts of a project.

Let me emphasize that with this system, when the macros are correctly
implemented, it will be quite difficult for a developer to break
incremental correctness short of outright maliciousness. Examples
include: modifying the state file saved between runs of the build
system, adding malicious code in automated tests, setting timestamps
of source files to be the past (a rather notable one as this could
happen from an unzip tool or from a sync from a source control
system), modifying the timestamps of output files at all (though these
are hidden away in a parallel folder structure, so less prone to
accidents).
 

joe

Alf P. Steinbach /Usenet said:
> Occasionally I fire up Thunderbird and look in [comp.lang.c++] to see if
> perhaps there is some question that I could answer.
>
> But no.
>
> There are always new postings, but even if a question was asked only a
> minute ago, it has already been answered!

Did you really mean "answered" or simply "responded to"? I tend to ask
"big picture" "questions", so I like to get multiple answers/responses
(yes, even from the naysayers). If you want to really "answer" some
questions, dive into the on-going and even tangential threads of
discussion and help out those who are going back-n-forth,
round-in-circles, talking-past-each other (if you don't do that already
that is, I'm not in here enough to know what goes on and who is who),
etc. I remember you doing a fine job of summarizing the "to unsigned or
not to unsigned" debate: I actually kept that post for future reference,
and I didn't need the whole thread then. I think I remember that it was
all one long paragraph though, so consider using multiple paragraphs if
that was/is the case with you.
 

joe

Alf P. Steinbach /Usenet said:
> [snip: Alf and Andrea's exchange, quoted in full above]
>
> If you had used XUL you would have. :)

Some "IDL-ish" kind of thing was your original question? Can't help ya.
An aside though: how many people here have used Excel to generate C++
code for them? Think about it: given a dataset and a bit of VBA, the
code is guaranteed correct as the dataset changes (if you've coded your
VBA correctly!).
 
J

joe

Joshua said:
* Andrea, on 08.07.2010 23:36:
Alf P. Steinbach /Usenet wrote:
Occasionally I fire up Thunderbird and look in [comp.lang.c++] if
perhaps there should be some question that I could answer.
There's always some new postings, but even if a question has been
asked only a minute ago, it has already been answered!
there's one about patterns, give it a shot

Well, OK, done, but really it isn't any fun replying to malformed
questions.
OK, I'll ask a question myself: what is a good way to implement
some thing where C++ event handlers can be specified in an XML
definition of a GUI?
don't understand your question

If you had used XUL you would have. :)

At great risk of being offtopic (though I would argue it's quite
relevant to using the C++ language, and thus on topic), here's a fun
question for you. (Well, a series of questions.) Why is there no
incrementally correct build system, or easy to use incrementally
correct build framework, for C, C++, and Java? As far as I can tell,
all publicly available solutions fail basic incremental correctness
tests.

Wow, deja vu: this thread is going off into tangential chaos. My last
post in this thread ended in a completely tangential question, and here
you are doing the exact same thing! Uncanny, for sure. I stopped
believing in witches years ago, so I'm sure that isn't it. :)
 
J

joe

cpp4ever said:
[Snipping my incremental build rant]
You do know Linus Torvalds (the original Linux guy) created his
own version control system, Git, because nothing met his needs. I
wish you the best of luck with your build system, but as folks
become ever more sophisticated, no doubt the wish list will change.
Hopefully, the useful C/C++ code base that has built up over the
years will not become obsolete too soon (although I suspect some
businesses like it that way).

All I want is what every developer wants: to be able to make an
arbitrary change to "the source", and have an always correct
incremental build. I disagree that there has been this "ever more
sophisticated" trend. This simple requirement existed back in the
first days of make, and it still exists now. It's just that the
original make author either:
- purposefully punted because he decided that makefiles are not
source and instead part of the implementation of the build system
(but now, makefiles effectively are source in a large project as no
single person understands all of the makefiles in a 10,000 source
file project),
- or purposefully punted on some deltas because he considered being
100% correct is too hard,
- or as I think more likely, he just didn't realize that a build
system based strictly on a file dependency DAG is insufficient for
incremental correctness.
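To make the last point concrete, here is a toy Python model (not any real build tool; the directory names and helper functions are made up for illustration) of why a pure file dependency DAG misses "ghost dependencies": a brand-new header that shadows an existing one on the include search path changes what a compile would produce, yet the new file is invisible to the DAG because it was never recorded as a node.

```python
import os
import tempfile
import time

def resolve(header, search_path):
    """Return the first match for `header` along the include search
    path, mimicking how a preprocessor resolves #include "header"."""
    for d in search_path:
        candidate = os.path.join(d, header)
        if os.path.exists(candidate):
            return candidate
    raise FileNotFoundError(header)

def up_to_date(target_mtime, dep_files):
    """make's rule: the target is up to date iff no *known*
    dependency is newer than it."""
    return all(os.path.getmtime(d) <= target_mtime for d in dep_files)

root = tempfile.mkdtemp()
local = os.path.join(root, "local")
system = os.path.join(root, "sys")
os.makedirs(local)
os.makedirs(system)
with open(os.path.join(system, "foo.h"), "w") as f:
    f.write("#define N 1\n")

search_path = [local, system]

# First build: the DAG records sys/foo.h as the only dependency.
deps = [resolve("foo.h", search_path)]
build_time = time.time() + 1      # pretend the target was just built

# A ghost dependency appears: a new header shadowing the old one.
with open(os.path.join(local, "foo.h"), "w") as f:
    f.write("#define N 2\n")

# The DAG still only checks sys/foo.h, which hasn't changed ...
print(up_to_date(build_time, deps))              # True: "nothing to do"
# ... yet a fresh compile would now pick up a different file entirely.
print(resolve("foo.h", search_path) == deps[0])  # False
```

The point is that the question "is any recorded dependency newer?" cannot catch files whose mere *existence* changes the build; the checker would also have to re-ask "would dependency resolution come out the same today?", which is exactly what mtime-only DAG walkers skip.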

Most people don't actually realize that all build systems out there
are not 100% incrementally correct. Some think that theirs actually
is correct. I'm wondering why this is the case, and why developers
put up with this sad state of affairs. While I'm not the first to
realize this, it's not commonly known; even papers such as
Recursive Make Considered Harmful claim / assume that a file
dependency DAG is sufficient for incremental correctness (and it's
not).
http://miller.emu.id.au/pmiller/books/rmch/

I have stumbled across a paper which actually does address these
concerns, and notes that all current build systems fail incremental
correctness tests. Oddly, it is in the context of Java, though
a lot of its ideas also apply to C++ builds. Capturing Ghost
Dependencies In Java
http://www.jot.fm/issues/issue_2004_12/article4.pdf

You are correct about current build systems not handling all
incremental changes correctly, I've been programming long enough to
have come across that problem. Without more thought on this topic I'm
not entirely sure how to ensure incremental changes are correctly
handled, but it sounds like you'd need to generate some sort of
relationship graph. As long as the overhead in doing this is not too
time consuming, it is worthwhile. Then again if the relationships are
becoming that complex, perhaps the code design is poor and needs to
be redesigned. Again I wish you every success in your endeavours with
this.

Was he "endeavoring"? I thought he was just noting the sad state of
affairs in building C++ systems (yes, I said "systems"). I know where
this line of discussion goes: header files vs. modules. Gotta fix the
underlying before finding fresh air.
 
J

Jorgen Grahn

(If you had used a proper subject line and started a new thread,
instead of replying in Alf's unfortunately named one, I would have
commented much sooner.)

All I want is what every developer wants: to be able to make an
arbitrary change to "the source", and have an always correct
incremental build. I disagree that there has been this "ever more
sophisticated" trend. This simple requirement existed back in the
first days of make, and it still exists now. It's just that the
original make author either:
- purposefully punted because he decided that makefiles are not source
and instead part of the implementation of the build system (but now,
makefiles effectively are source in a large project as no single
person understands all of the makefiles in a 10,000 source file
project),

A good insight. Makefiles are part of the source code.
- or purposefully punted on some deltas because he considered being
100% correct is too hard,
- or as I think more likely, he just didn't realize that a build
system based strictly on a file dependency DAG is insufficient for
incremental correctness.

I understand your arguments, but I still think the best approach
(easiest, with the greatest chance of success) is to:

- Insist on one, complete, Makefile a la "recursive make considered
...". It doesn't have to be split into fragments, by the way -- I
find that even for hundreds of source files in dozens of directories,
one big Makefile at the top is quite clear and readable.

- Accept its shortcomings, and learn (teach others) in which
situations you have to issue a "make clean" to be on the safe side.
Having 95% of all builds be incremental ones is pretty good!
Most people don't actually realize that all build systems out there
are not 100% incrementally correct. Some think that theirs actually is
correct. I'm wondering why this is the case, and why developers put up
with this sad state of affairs.

In my experience, many programmers put up with having 0% of their
builds being incremental ones, or with the other abnormal situations
that the "recursive make considered harmful" paper describes in such
detail. They'd be much better off with a 95% solution /now/ than with
100% in the future.
While I'm not the first to realize this, it's not commonly known;
even papers such as Recursive Make Considered Harmful claim / assume
that a file dependency DAG is sufficient for incremental correctness
(and it's not).
http://miller.emu.id.au/pmiller/books/rmch/
....

The most noticeable problem with make's approach to me is the one I
don't think you mentioned: timestamp changes due to version control
tools. E.g. switching from seeing foo.cc version 5 to version 4.
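That failure mode fits in a few lines of Python (a toy mtime check standing in for make; the file names are invented): if checking out an older revision restores the file's original, older timestamp, the object file still looks newer than its source, so a DAG walk concludes there is nothing to do even though the object was built from a different version of the source.

```python
import os
import tempfile

src = tempfile.NamedTemporaryFile(suffix=".cc", delete=False).name
obj = tempfile.NamedTemporaryFile(suffix=".o", delete=False).name

def needs_rebuild(target, source):
    """make's test: rebuild iff the source is newer than the target."""
    return os.path.getmtime(source) > os.path.getmtime(target)

# Build state: foo.cc at version 5, foo.o compiled from it afterwards.
os.utime(src, (1000, 1000))   # version 5 checked out at t=1000
os.utime(obj, (2000, 2000))   # compiled at t=2000

# Switch back to version 4; some tools restore the old commit time.
os.utime(src, (500, 500))     # version 4's timestamp, t=500

# foo.o is still "newer", so make does nothing: a silently stale build.
print(needs_rebuild(obj, src))   # False
```

A content hash or a recorded version identifier would catch this case where a timestamp comparison cannot, since "newer" is simply the wrong question once files can move backwards in time.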

/Jorgen
 
