That calculation is quite contrived.
Contrived? Well, yes, in the sense that any large project is likely
to have much *more* than 1500 #include statements. For example, I just
ran a count against the trn4 source (which is less than 1 megabyte
when gzip'd), and it has 1659 #include statements. openssl 0.9.7e
has 4679 #include statements (it's about 3 megabytes gzip'd).
I wonder how you would do changes
in your code base besides removing an #include, not to speak of
refactoring.
You seem to have forgotten that you yourself proposed,
"Automate that and you have the requested tool!" in response to my
saying, "It might be easier just to start commenting out #include's".
When I indicated that it is more complex than that and that
comparing object code is necessary (not just looking for
compile errors), you said,
"You have to test your application after code changes anyway."
Taken in context, your remark about testing after code changes
must be considered to apply to the *automated* tool you proposed.
And the difficulty with automated tools along these lines is that they
are necessarily dumb: if removing #include file1.h gives you a
compile error, then the tool cannot assume that file1.h is a -necessary-
dependency (an assumption that would let the tool test in linear time):
the tool would have to allow for the possibility that removing file1.h
only gave an error because of something in file2.h --- and yes,
there can be backwards dependencies, in which file1.h is needed to
complete something included -before- that point. Thus, in this kind
of automated tool that doesn't know how to parse the C code itself,
full dependency checking can only be done by trying every -possible-
combination of #include files, which is a 2^N process.
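A minimal sketch of such a backwards dependency, using two hypothetical
headers (file1.h and file2.h here are illustrative stand-ins, not files
from any real project):

    /* file2.h -- hypothetical: forward-declares an incomplete type */
    struct counter;
    struct counter *new_counter(void);

    /* file1.h -- hypothetical: completes the type from file2.h */
    struct counter { unsigned long value; };

    /* main.c */
    #include "file2.h"
    #include "file1.h"   /* removing this breaks the sizeof below,
                            even though main.c never directly names
                            anything declared in file1.h */
    #include <stdio.h>

    int main(void)
    {
        printf("%lu\n", (unsigned long)sizeof(struct counter));
        return 0;
    }

A tool that only reads compiler errors sees the failure inside code
using file2.h's type and has no way to know, without trying the
combinations, that restoring file1.h is the fix.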
Do you feel that 1 second to "test your application after code changes"
is significantly longer than is realistic? It probably takes longer
than that just to compile and link the source each time.
I wonder how you would do changes
in your code base besides removing an #include, not to speak of
refactoring.
I don't mechanically automate the code change and test process.
... and if it compiles but produces different object code then you
have found an include order dependency bug ;-)
Include order dependencies are not bugs unless the prevailing
development paradigm for the project has declared them to be so.
Once you get beyond standard C into POSIX or system dependencies,
it is *common* for #include files to be documented as being order
dependent upon something else. Better system developers hide
that by #include'ing the dependencies and ensuring, as far as
is reasonable, that each system include file has guards against
multiple inclusion, but that's a matter of Quality of Implementation,
not part of the standards.
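As a sketch of what that Quality of Implementation looks like (the
header name is made up for illustration), such a header guards itself
against multiple inclusion and #include's its own dependency rather
than documenting an ordering requirement:

    /* sys_counter.h -- hypothetical well-behaved system header */
    #ifndef SYS_COUNTER_H
    #define SYS_COUNTER_H

    #include <stddef.h>   /* supplies size_t itself, instead of
                             requiring callers to include it first */

    size_t counter_read(void);

    #endif /* SYS_COUNTER_H */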
Still, it is true that in the case of multiple source files that
together have 1500 #includes, you would not need to do pow(2,1500)
application tests, if you are using a compiler that supports
independent compilation and later linking. If you do have independent
compilation, then within each source file it is a 2^N process
to find all the #include combinations that will compile, but most of
the combinations will not. Only the versions that do compile need
to go into the pool for experimental linkage; the number of linkage
experiments would be the product of the number of eligible compilations
for each source file. Only the linkages that survived would need to go
on for testing. The number of cases that will make it to testing is not
possible to estimate without statistical information about the
probability that any given #include might turn out to be unneeded.
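To make the arithmetic concrete (numbers purely illustrative): ten
source files of 150 #includes each cost at most 10 * pow(2,150) compile
trials rather than pow(2,1500), and if, say, three variants of each file
compiled, the linkage experiments would number pow(3,10) = 59049. The
per-file search itself might look something like this rough sketch
(the file names and the compile command are assumptions, and N is kept
tiny because the loop really is exponential):

    /* subset_test.c -- rough sketch of the per-file 2^N search.
       Writes each subset of the #include lines to candidate.c,
       followed by the rest of the source, and counts the subsets
       that still compile. */
    #include <stdio.h>
    #include <stdlib.h>

    #define N 4   /* #include lines under test => 2^N candidates */

    static const char *incs[N] = {
        "#include \"file1.h\"",
        "#include \"file2.h\"",
        "#include \"file3.h\"",
        "#include \"file4.h\"",
    };

    int main(void)
    {
        unsigned long mask, survivors = 0;

        for (mask = 0; mask < (1UL << N); mask++) {
            FILE *f = fopen("candidate.c", "w");
            int i;

            if (f == NULL)
                return EXIT_FAILURE;
            for (i = 0; i < N; i++)
                if (mask & (1UL << i))
                    fprintf(f, "%s\n", incs[i]);
            /* body.c is assumed to hold the rest of the source */
            fprintf(f, "#include \"body.c\"\n");
            fclose(f);
            /* -c: compile only; survivors join the linkage pool */
            if (system("cc -c candidate.c -o candidate.o") == 0)
                survivors++;
        }
        printf("%lu of %lu combinations compiled\n",
               survivors, 1UL << N);
        return 0;
    }

A real version would also have to compare the resulting object code,
for the reasons given earlier, not just check the compiler's exit
status.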