On 29 Sep 2004 05:34:35 -0700, "Ramesh Natarajan" wrote (note that a
Tandem-specific group is more appropriate for most of this and may have
better ideas):
> The problem with this lot of unwanted #includes is that my development
> platform is Tandem and, as I understand it, file open and close are
> the most expensive operations on the platform. Unfortunately we don't
> use a cross compiler and depend on a native compiler that needs to be
> run on the Tandem!! In general the compilation is pretty slow, and
> with these header problems it takes forever to compile!!
From your example names I guess you are using OSS, the POSIX
"personality" or "subsystem". It's been a while since I've used it,
but as I recall OSS file opens (or more precisely lookups) are unusually
expensive because the Unix-like filesystem must be emulated on the
real (Guardian) filesystem. Plus on Tandem almost all I/O is somewhat
more expensive because the OS is (and must be) message-based.
However, I would still expect opening and reading a few hundred files
(as long as none of them are absurdly large) to take only a few
seconds, maybe 5-10 at worst. If you have many -I directories that
need to be checked, maybe a few times that. If you are seeing worse,
it might be that the system is not well configured for what you're
doing -- there are a _lot_ of tuning "knobs" on Tandem and very few of
them are automatic. You might ask your system manager if s/he has
measured and
tuned for your type of workload. (Unless you are compiling on a
production system; then the response will probably be that the system
is tuned for production and if development suffers tough noogies.)
In fact if you have a large number of -I directories and can just
reduce them significantly, it will probably help. Maybe just create
one directory that contains a link or (I believe now) symlink to each
real file; you can do that with a few shell commands.
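As a minimal sketch of that (every directory and header name below is
made up for illustration), the symlink farm really is only a couple of
shell commands; here two fabricated include directories are collapsed
into one:

```shell
# Collapse several include directories into one flat directory of
# symlinks, so the compiler needs only a single -I.
# Fabricate two sample source directories to demonstrate:
mkdir -p proj/inc lib1/inc allhdrs
echo '/* sample */' > proj/inc/app.h
echo '/* sample */' > lib1/inc/util.h

for d in proj/inc lib1/inc; do         # your current -I directories
    for h in "$d"/*.h; do
        ln -sf "$(pwd)/$h" allhdrs/    # one symlink per real header
    done
done

ls allhdrs
# afterwards compile with:  cc -Iallhdrs ...
# instead of:               cc -Iproj/inc -Ilib1/inc ...
```

Note that this assumes all the header names are distinct; two headers
with the same name in different directories would collide in the flat
directory.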
If a substantial number, perhaps most, of your included files are (or
can be) in one (single-level) directory or a few such, with names of
at most 7 alphanumeric characters plus the .h, you might try putting
them in a Guardian subvolume and putting /G/somevol/mysubvol early in
your include path -- that _may_ bypass the emulation and go directly
to the disc process, I'm not sure.
Even more kludgily, the Tandem compilers have a nonstandard option to
#include only named sections of a file. You could combine all or at
least many of your current files into a single file with sections, and
change from a list of #include's to a single #include with a list of
sections; then you only have one open. But this won't help if one
"file" (section) #include's another, so you have to do the "all
includes at top level" style to really benefit. Also, regardless of
the order you specify the section names in the #include, they are
included in the order they appear in the file; you must make sure that
is consistent with any dependencies -- and if there are circular
dependencies that may be impossible. And of course this isn't
portable, although you could easily write a few lines of awk or
similar to convert it back when needed, or #if it.
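The converter really is only a few lines of awk. Since I don't have
the exact sectioned-#include syntax to hand, the directive form below
is a made-up placeholder (check the Tandem compiler manual for the
real one), and the assumption that each section name maps to a
same-named .h file is likewise hypothetical:

```shell
# Fabricate a sample input using the placeholder directive syntax
#   #include "allhdrs" (secta, sectb)
cat > sectioned.c <<'EOF'
int x;
#include "allhdrs" (secta, sectb)
EOF

# Rewrite each sectioned #include into one standard #include per
# section name, leaving all other lines untouched.
awk '
    /^#include[ \t]+".*"[ \t]*\(/ {
        s = $0
        sub(/^[^(]*\(/, "", s)        # drop everything up to the "("
        sub(/\).*$/, "", s)           # drop the ")" and anything after
        n = split(s, names, /[ \t]*,[ \t]*/)
        for (i = 1; i <= n; i++)
            printf "#include \"%s.h\"\n", names[i]
        next
    }
    { print }
' sectioned.c > portable.c

cat portable.c
```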
All of these address only the symptom; if many of the #include's are
in fact unneeded as you said, it's obviously preferable to eliminate
them, for other systems and human readers as well. In addition to the
generic suggestions from others, I have one Tandem idea that _might_
help. The Tandem compilers used to have options to generate
cross-reference listings, although AFAICT I can't find them in current
manuals. If that option still exists, and possibly only if it includes
macros (#define's) which I don't recall, you could write a simple
program to go through such a listing, tally used symbols by file id,
and report any file ids not having any such symbol.
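That tally-and-report program is a few lines of awk. The sketch below
is hypothetical on two counts: the real Tandem listing format will
differ (adjust the field positions), and the sample data is fabricated.
It assumes includes.txt lists every #include'd file id, one per line,
and xref.lst has one "SYMBOL FILEID" line per referenced symbol; it
prints the file ids that contributed no symbol, i.e. removal candidates:

```shell
# Fabricated sample data standing in for the compiler's xref listing:
printf '%s\n' a.h b.h c.h > includes.txt
printf '%s\n' 'foo a.h' 'bar b.h' > xref.lst

awk '
    NR == FNR      { wanted[$1] = 1; next }  # pass 1: every include id
    ($2 in wanted) { delete wanted[$2] }     # pass 2: id supplied a symbol
    END            { for (f in wanted) print f }
' includes.txt xref.lst
```

On the sample data this reports c.h, the only header no referenced
symbol came from.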
- David.Thompson1 at worldnet.att.net