Boost Workshop at OOPSLA 2004

Walter

Hyman Rosen said:
No, because users may define any non-reserved name as an
arbitrary macro, so any code forced in by the inclusion
model can conflict in that way. Furthermore, any headers
needed by the template implementation are also forced
into the user's code, causing more potential conflict.

This can be addressed by adding scoped macros to C++, which is a couple
orders of magnitude easier than implementing export. Furthermore, export is
certainly not a general solution to the macro scope problem, and so cannot
be justified by that.
And template implementations can't use techniques based
upon anonymous namespaces under the inclusion model for
fear of violating the ODR.

So use a named namespace. Like std:: is for the standard library.
 
Walter

Gabriel Dos Reis said:
| > No. I just meant that I believe that in most template code,
| > you are not going to find calls to free non-dependent functions
| > that vary by instantiation context. But export lets you write
| > the template method definitions without requiring them to be
| > safe for bodily inclusion into arbitrary compilation units.
| > For example, library writers can write their template code
| > without having to prefix every name in sight with '__' to avoid
| > potential name clashes with user code.
|
| Or they could use namespaces, which were specifically designed to fix that
| problem.

Macros don't obey scope rules, therefore anything namespaces are doing
is irrelevant to macro name leakage.

Macro name pollution is a real problem, and is not vanquished by export
because it affects a lot of things besides just templates. Why not instead
of export, add scoped macros to the standard? Isn't there a proposal before
the committee to do that?
 
Walter

Hyman Rosen said:
Can these operators be member templates? If not, you've
thrown away the possibility of building a units system
as described by Barton & Nackman.

I am unfamiliar with the work of Barton & Nackman. However, D does not
support implicit instantiation of function templates, although it is
possible to add that. Also, there was a person working on a D library to do
units with templates, so I don't think that implicit function template
instantiation is always required for that to work.
What about overloading operators on enumeration types?

Not unless one of the operands is a reference to a class or struct.
What is the "reverse" of subtraction and division?

A "reverse" operator overload handles cases like:
2 / p
where p is a reference to a class object with a "reverse" operator overload.
All it really means is the 'this' for the operator overload is taken from
the right operand rather than the left.
 
David B. Held

Hyman said:
The standard mandated lots of things not demanded by the market, such as
STL.

And yet, compiler and library vendors did not find it unduly difficult
to implement, and the value thereof was apparent to most. On the other
hand, export has had critics more or less from its very inception.
Export is no albatross.

Did you read N1426? What are your rebuttals to the numerous claims
made therein?
From the user point of view, it presents a perfectly simple concept -
put your declarations in a header file and your code in an
implementation file, and compile. It's the inclusion model which is
nasty and broken and counter-intuitive.

Only when you think of templates as functions. And that's exactly why
export doesn't work. Because it lulls the programmer into thinking
that "separate compilation" means "like you have for functions", when,
in fact, it really means "like you have for metafunctions". That's
because templates are not functions, they are metafunctions, or
function-generators. You don't compile them to object code, you compile
them to meta-object code. Or, if you're EDG, you don't even do that.
You just compile-on-demand, because meta-object-code is too big to be
useful. You can make libraries of functions and object files, but you
cannot make libraries of metafunctions and meta-object files with any
product on the market. Nobody thinks of a "compiled macro library",
but templates are really just glorified macros, right? They are code
generators, and it is perfectly normal for us to include macros
wherever they are needed. So from that perspective, the inclusion
model makes perfect sense and gives no false analogy to real functions.
[...]
I find it hard to believe that writing export is harder than doing
all the unwinding correctly when an exception is thrown.

Take it up with EDG. They're the ones that say how hard it is to
implement export, saying that it took as long as "any three other
features in C++ combined"...no exceptions mentioned (such as exception
handling).
The reason vendors haven't implemented export is that users can muddle
along with the inclusion model, so there's no pressure on them to get it
working.

It's worse than that...[from N1426]

John Spicer notes: "Any program written using export can be
rearranged trivially into a program that doesn't use export."

Not only that, but export literally changes language semantics, and was
so poorly understood that many issues were left unspecified in the
standard, so EDG basically had to make up a reasonable set of rules,
which no other vendor is technically obliged to respect.
The only reason templates are in as good a shape as they are is
_Modern C++ Design_ - that library drove enough user demand that
implementors felt that they had to get this part of the standard
implemented. Otherwise they just can't be bothered. For God's sake,
until the recent version 7, Microsoft's exception handling was totally
broken. Maybe that was an albatross too?

I think the fact that other vendors did a better job of exception
handling in the mean time speaks volumes. While MC++D was surely
influential in making template support a priority, I would argue that
templates were becoming more popular on several fronts as people began
to see their utility in metaprogramming. On the other hand, supporters
of export are less busy demonstrating the utility of export than they are
apologizing for why it does not do what the naive programmer thinks it
ought to. I mean, c'mon! The convener of the C++ committee is lobbying
against it! What other feature has that ignominious honor?

I think that export is ill-conceived because it confuses metaprogramming
with programming. You can only compile meta-source code to meta-object
code, and C++'s template mechanism doesn't lend itself to a nice,
concise meta-object file format. Part of that is because the template
engine was not conceived as a first-class metaprogramming facility.
That is not to disparage its design, since it was certainly cutting
edge in some respects. But what we do have is more like a glorified
type-safe macro system, which is why the inclusion model is so natural
to implement. I think "export" is a feature that belongs in a different
language...a language that has fundamental support for metaprogramming
and does not even inadvertently convey the expectation that metacode
can be treated like concrete code.

Dave
 
Peter C. Chapin

That would solve the clunky name problem. It would do nothing about
allowing anonymous namespaces, or avoiding the need to include all of
the template implementation's header files into the body of every file
that uses the template, or about the need to recompile the template
instantiation code for every compilation unit that uses the template.

I'm not sure the anonymous namespace issue is that big a deal but
certainly I agree that it seems ungainly to process all those template
bodies in each translation unit that wants to use even a single template
declared in a particular header. However, it seems like even with
export, the instantiation context would have to be recompiled---or at
least re-examined---each time a template body was modified. The only
difference is that with export the compiler is responsible for locating
the relevant instantiations rather than, for example, the programmer's
Makefile (not necessarily a bad thing; Makefile maintenance can be a
problem at times). In either case basically the same amount of compile-
time work needs to be done. Am I misunderstanding something here?

Peter
 
Jerry Coffin

Walter said:
Compare an EDG-based compiler using export vs DMC++ using precompiled
headers.

By itself, that would produce more or less meaningless results -- we'd
not only get the effects of export vs. inclusion, but also of the
basic speeds of the compilers.

I've used both Comeau and Intel C++ on Windows, and even when
compiling non-template code, they're both normally quite slow
(generally substantially slower than VC++, for example). Consider the
compilers though: Intel's intent is to produce the best code they can,
at great cost in compilation speed. Comeau, OTOH, compiles from C++ to
C, and then invokes another compiler to translate that to object code.

As such, it's no surprise that both of these compilers are generally
quite a bit slower than many others, and export really has nothing to
do with that one way or the other.

I suppose if you really wanted to isolate the effects of export, you
could do something like starting with some code that didn't use
templates at all, and compare the speed of Comeau to Digital Mars (or
VC++, gcc, etc.)

Then you'd compile some templated code, using export with Comeau and
inclusion with Digital Mars, and use the results from the first test
to weight the results of the second.

Which of these makes more sense depends on what question you want to
answer. If you're managing a programming project and want to find how
practical export is right now, then export implies using Comeau, and
comparing its speed directly to whatever other compiler you'd use
without export would make sense. In this case, however, I feel obliged
to point out that simple speed of compilation is a poor indicator of
productivity -- quality of error messages (for one example) can play a
HUGE role in overall productivity. A compiler that gives a poor error
message in N seconds may be much less productive than one that tells
you the exact problem, even if it takes ten times as long to do so.

OTOH, for you (Walter, I mean) or any other compiler writer, the
relevant question is somewhat different: what benefit could be
expected from adding export to your compiler?

In that case, I think the second comparison would produce much more
meaningful results. Comeau is slow, but adding export to your compiler
would not imply changing your entire compilation model to the one
Comeau uses.
 
Gabriel Dos Reis

| > | > No. I just meant that I believe that in most template code,
| > | > you are not going to find calls to free non-dependent functions
| > | > that vary by instantiation context. But export lets you write
| > | > the template method definitions without requiring them to be
| > | > safe for bodily inclusion into arbitrary compilation units.
| > | > For example, library writers can write their template code
| > | > without having to prefix every name in sight with '__' to avoid
| > | > potential name clashes with user code.
| > |
| > | Or they could use namespaces, which were specifically designed to fix
| > | that problem.
| >
| > Macros don't obey scope rules, therefore anything namespaces are doing
| > is irrelevant to macro name leakage.
|
| Macro name pollution is a real problem, and is not vanquished by export
| because it affects a lot of things besides just templates. Why not instead
| of export, add scoped macros to the standard?

But scoped macros don't supersede export semantics.

| Isn't there a proposal before the committee to do that?

In 2004, but not before.
 
Jean-Marc Bourguet

Walter said:
Compiler vendors answer to their customers, and
by and large do what their customers want them to do.

The only input vendors can get from customers is desiderata.
There is hopefully a correlation between desiderata and
needs, but they are not the same thing, sometimes by a
large margin, especially in large corporations where there
are usually several layers (sometimes non-technical, or having
lost contact with the state of the art) between the people who
could best express the needs and the one who is in contact
with the providers (each layer introducing some bias).

Then in the vendor's organization, there are several layers
between the customers and the person who prioritizes the
features. The customers' needs extrapolated from the
desiderata he receives are one thing he considers (and note that
if he thinks they provide another solution than the one asked
for, he may well drop a customer's desideratum). Company
resources and global strategy are the main other considerations,
and they most probably take precedence.
I suspect that Microsoft was under a lot of customer
pressure to produce something like C++/CLI.

I'm not so sure. They may have decided to produce C++/CLI
because they perceived it was necessary to the success of
.NET, whatever relative importance the customers gave to .NET
compared to other things.

<paranoia ON>
And one may wonder if some actions are not PR trying to
convince customers that "export" is not an answer to their
needs. If that is the case, it means that there is
demand... and it is one way of answering the customer.
</paranoia>

Yours,
 
Jean-Marc Bourguet

Walter said:
No, between different compilers on the same machine and
the same OS.

I can't provide that unless you provide me with DMC++ for
Linux.
This can be dealt with by adjusting the dependencies
specified in the makefile.

I'm sorry, dependencies are generated automatically (in
preference order: by ClearMake, the compiler, or other
tools). With the inclusion model the only way to reduce the
dependencies is not providing the implementation to some
compilation units, and that was an intractable problem in the
one case I know of where it was seriously tried in a non-trivial
program(*).
Such could also theoretically be added to precompiled
headers, but nobody has done that to the best of my
knowledge.

I don't understand. If an implementation changes in the
inclusion model, every compilation unit including this code
must be recompiled. How can a precompiled header change
that?

I can see how a system providing incremental compilation
could track dependencies in sub-file units and so provide
even better reduction of dependencies. But that would be
more complex than export to implement.
I agree that would be a problem, but Comeau's compiler is
available on Windows, so a direct comparison is possible.

The code for that benchmark is not available for Windows.
If DMC++ can, using precompiled headers, compile faster
than a compiler optimized with export,

I hope nobody pretends that como -- especially using gcc as
a back end -- is a compiler optimized for compilation
speed. DMC++ seems to be (that's the second point in your
features list).
then that shows that exported templates are not needed for
boosting compile speed.

Who says that was needed? But export is the only viable
method I know of to reduce dependencies on the template
instantiation.

(*) Dropping the requirement that explicit instantiation generate
all members (and so allowing its use when some members are
not instantiable for some types) would help with that
approach.

A+
 
Walter

Jean-Marc Bourguet said:
The only input vendors can get from customers is desiderata.
There is hopefully a correlation between desiderata and
needs, but they are not the same thing, sometimes by a
large margin, especially in large corporations where there
are usually several layers (sometimes non-technical, or having
lost contact with the state of the art) between the people who
could best express the needs and the one who is in contact
with the providers (each layer introducing some bias).

Digital Mars has no such layers, and if you surf the DMC newsgroups, you'll
see I talk directly to the engineers who use it. So I have a 50-yardline
seat on what they're using it for and the kinds of problems they're having.
Sometimes you can take a leap and do something you think they'll like rather
than what they ask for, like D, but that's taking a big risk. I seriously
doubt if I worked for a huge corporation they'd ever let me take a gamble
like D.

<paranoia ON>
And one may wonder if some actions are not PR trying to
convince customers that "export" is not an answer to their
needs. If that is the case, it means that there is
demand... and it is one way of answering the customer.
</paranoia>

I invite you to the DM newsgroups (news.digitalmars.com) and you can see for
yourself what they're asking for <g>. You can also surf the microsoft
newsgroups, or the borland ones. Given the sometimes very unkind messages
posted there, I doubt any of them are censored by their respective PR
departments. You won't see such issues here because they are OT for this
forum.
 
Walter

Jerry Coffin said:
By itself, that would produce more or less meaningless results -- we'd
not only get the effects of export vs. inclusion, but also of the
basic speeds of the compilers.

If DMC++ proves to be still faster than another compiler using export,
then that demonstrates that export is NOT required for fast compiles. I'd
say that was a very meaningful result.

As such, it's no surprise that both of these compilers are generally
quite a bit slower than many others, and export really has nothing to
do with that one way or the other.

Compile speed is listed as one of the big three reasons why export is
needed. If other compilers are faster without needing export, or if export
really has nothing to do with compile speed, then there goes that reason.
OTOH, for you (Walter, I mean) or any other compiler writer, the
relevant question is somewhat different: what benefit could be
expected from adding export to your compiler?

I don't know.
 
Jerry Coffin

[ ... ]
The only input vendors can get from customers is desiderata.
There is hopefully a correlation between desiderata and
needs, but they are not the same thing, sometimes by a
large margin, especially in large corporations where there
are usually several layers (sometimes non-technical, or having
lost contact with the state of the art) between the people who
could best express the needs and the one who is in contact
with the providers (each layer introducing some bias).

This really isn't accurate at all -- quite a few people who work
directly on compilers at various vendors monitor newsgroups
extensively; you'll see posts from people at EDG, Microsoft, Dinkumware, etc.
These aren't non-technical intermediaries either -- quite a few of them
are the people who write code for these companies. Of course, in some
of those cases (e.g. EDG and Dinkumware) the companies are small
enough that there ARE hardly any non-technical people there. Even in
the case of Microsoft (about as big as software companies get)
technical people are easy to find on newsgroups. Most of them tend
more toward MS-specific newsgroups, but at least IMO, that's not a
particular surprise.

In any case, the bottom line is that in quite a few cases compiler
vendors get input directly from customers, and the people who work
directly on the compiler often receive that input _quite_ directly.
 
Walter

Jean-Marc Bourguet said:
I'm sorry, dependencies are generated automatically (in
preference order: by ClearMake, the compiler, or other
tools). With the inclusion model the only way to reduce the
dependencies is not providing the implementation to some
compilation units, and that was an intractable problem in the
one case I know of where it was seriously tried in a non-trivial
program(*).

I agree that would be impractical if you're using automated dependency
generation, unless the dependency generation tool was updated (or the
compiler generated it).
I don't understand. If an implementation changes in the
inclusion model, every compilation unit including this code
must be recompiled. How can a precompiled header change
that?

It can keep track of which symbols in a source file are references to which
header. From that, and from the datestamps, it can deduce the dependencies.
Whether a compiler did this or not would be a quality of implementation
issue, and wouldn't need to be addressed by the standard. I'm hard pressed
to believe that export is the most practical way to achieve this, and
furthermore, it only works for templates. It isn't a general solution for
dependencies (consider inline functions, for example).
Who says that was needed?

Daveed Vandevoorde wrote in this thread:
-----------------------------------------------------------------
The intent of the feature was to protect template definitions from
"name leakage" (I think that's the term that was used at the time;
it refers to picking up unwanted declaration due to excessive
#inclusion). export certainly fulfills that.

export also allows code to be compiled faster. (I'm seeing gains
without even using an export-aware back end.)

export also allows the distribution of templates in compiled form
(as opposed to source form).
 
Jerry Coffin

Francis Glassborow said:
I may be wrong but I thought Intel was based on Comeau.

I suppose that's possible, but if so it's the first time I've heard of
it. I have heard (and can easily believe) that both are based on the
EDG front-end, but I've never previously heard of any relationship
beyond that.
 
llewelly

I suppose if you really wanted to isolate the effects of export, you
could do something like starting with some code that didn't use
templates at all, and compare the speed of Comeau to Digital Mars (or
VC++, gcc, etc.)
[snip]

That wouldn't be my approach at all.

I'd estimate the compilation effects of export by producing two
codebases which differed only in their use of export; one would
use export for all function templates, and place all function
template definitions in files separate from their declarations,
while the other would use the inclusion model.

I'd compile both the export-using version of the code and the
inclusion-using version of the code with the same compiler. That
way, compiler issues other than export would not come into play.
 
Jean-Marc Bourguet

Jean-Marc Bourguet said:
With como, you'll need dependencies on the implementation file of an
exported template for every compilation unit which is responsible for
providing an instantiation. As far as I remember, in my tests all
the dependencies (even on exported template implementations) were
generated automatically by como.

I didn't remember correctly. Everything was automatic -- which was
what was important to me -- but automatically generated dependencies
took into account only what the preprocessor knew. The recompilations
needed after modifying the implementation of a template were done by
the pre-linker (which is there to trigger recompilation in some other
cases, like a missing template instantiation).

Yours,
 
kanze

It is also fair to point out that said company only develops front
ends. It doesn't develop optimizers, code generators, linkers,
librarians, runtime libraries, debuggers, or IDEs, all of which are
part of a compiler product, and all of which consume a lot of
resources to create, enhance, and maintain, and all of which customers
ask for improvements in.

I know. They also don't sell to end users, which probably saves them
more in support effort than all of the points you mention.

Still, as I said, the difference in some cases (I wasn't thinking of
Digital Mars) is several orders of magnitude. Are you trying to tell me
that all of these bits multiply the required effort by a thousand or more?
 
kanze

Peter C. Chapin said:
In addition to allowing anonymous namespaces, export reduces the chances
of an accidental violation of the ODR.
I'm not sure the anonymous namespace issue is that big a deal but
certainly I agree that it seems ungainly to process all those template
bodies in each translation unit that wants to use even a single
template declared in a particular header. However, it seems like even
with export, the instantiation context would have to be
recompiled---or at least re-examined---each time a template body was
modified.

From what I understand of the EDG implementation, at least one of the
instantiation contexts would have to be recompiled. I find it not
unusual to have templates which are instantiated with the same arguments
in many files. (The most obvious example would be std::basic_string, I
think.) In such cases, with export, only one of the sources with the
instantiation context needs to be recompiled; without export, the
makefile will cause all of them to be recompiled.
The only difference is that with export the compiler is responsible
for locating the relevant instantiations rather than, for example, the
programmer's Makefile (not necessarily a bad thing; Makefile
maintenance can be a problem at times). In either case basically the
same amount of compile-time work needs to be done. Am I
misunderstanding something here?

Basically, you're missing that the compiler understands C++, and the
implications of a given change, much better than make does. In theory,
even without export, if you modified the implementation of the template,
the compiler could recognize that this modification only required the
recompilation of a single source, and not of every source which included
the header.

In theory... In practice, such compilers are even rarer than compilers
implementing export.
 
