Boost Workshop at OOPSLA 2004


Jeremy Siek

CALL FOR PAPERS/PARTICIPATION

C++, Boost, and the Future of C++ Libraries
Workshop at OOPSLA
October 24-28, 2004
Vancouver, British Columbia, Canada
http://tinyurl.com/4n5pf


Submissions

Each participant will be expected to develop a position paper
describing a particular library or category of libraries that is
lacking in the current C++ standard library and Boost. The participant
should explain why the library or libraries would advance the state of
C++ programming. Ideally, the paper should sketch the proposed library
interface and concepts. This will be a unique opportunity to critique
and review library proposals. Alternatively, a participant might
describe the strengths and weaknesses of existing libraries and how
they might be modified to fill the need.

Form of Submissions

Submissions should consist of a 3-10 page paper that gives at least
the motivation for and an informal description of the proposal. This
may be augmented by source or other documentation of the proposed
libraries, if available. Preferred form of submission is a PDF file.

Important Dates

• Submission deadline for early registration: September 10, 2004
• Early Notification of selection: September 15, 2004
• OOPSLA early registration deadline: September 16, 2004
• OOPSLA conference: October 24-28, 2004

Contact committee (e-mail address removed)

Program Committee
Jeff Garland
Nicolai Josuttis
Kevlin Henney
Jeremy Siek
 

Andrei Alexandrescu (See Website for Email)

Jeremy Siek said:
CALL FOR PAPERS/PARTICIPATION

C++, Boost, and the Future of C++ Libraries
Workshop at OOPSLA
October 24-28, 2004
Vancouver, British Columbia, Canada
http://tinyurl.com/4n5pf
[snip]

I wonder if the submitters follow this post's trail or I should email
them... anyway, here goes.

I am not sure if I'll ever get around to writing it, so I said I'd post the
idea here; maybe someone will pursue it. In short, I think a proposal for a
replacement of C++'s preprocessor would be welcome.

Today Boost uses a "preprocessor library", which in turn (please correct me
if my understanding is wrong) relies on a program to generate a great many big
macros up to a fixed "maximum" to overcome the preprocessor's inability to
deal with a variable number of arguments.

Also, please correct me if I'm wrong (because I haven't really looked deep
into it), but my understanding is that people around Boost see the PP
library as a necessary but unpleasantly-smelling beast that makes things
around it smelly as well. [Reminds me of the Romanian story: there was a guy
called Pepelea (pronounced Peh-Peh-leah) who was poor but had inherited a
beautiful house. A rich man wanted to buy it, and Pepelea sold it on one
condition: that Pepelea owns a nail in the living room's wall, on which he
can hang whatever he wanted. Now when the rich man was having guests and
whatnot, Pepelea would drop by and embarrassingly hang a dirty old coat. Of
course in the end the rich man got so exasperated that he gave Pepelea the
house back for free. Ever since that story, "Pepelea's nail" refers to
something like... like what the preprocessor is to the C++ language.]

That would be reason one to create a new C++ preprocessor. (And when I say
"new," that's not as in "yet another standard C++ preprocessor". I have
been happy to see my suggestion on the Boost mailing list followed, in that
the WAVE preprocessor was built using Boost's own parser generator library,
Spirit.) What I am talking about now is "a backwards-INcompatible C++ preprocessor
aimed at displacing the existing preprocessor forever and replacing it with
a better one".

If backed by the large Boost community, the new preprocessor could easily
gain popularity and be used in new projects instead of the old one. To avoid
inheriting the past's mistakes, the new preprocessor doesn't need to be
syntax-compatible in any way with the old preprocessor, but only
functionally compatible, in that it can do all that can be done with the
existing preprocessor, only that it has new means to do things more safely
and better.

I think that would be great. Because if we all stop coding for a second and
think about it, what's the ugliest scar on C++'s face - what is Pepelea's nail?
Maybe "export", which is so broken and so useless and so abusive that its
implementers have developed Stockholm syndrome during the long years it
took them to implement it? Maybe namespaces, which are so badly designed
you'd think they were inherited from C? I'd say they are good contenders
against each other, but none of them holds a candle to the preprocessor.

So, a proposal for a new preprocessor would be great. Here's a short wish
list:

* Does what the existing one does (although some of those coding patterns
will be discouraged);

* Supports one-time file inclusion and multiple file inclusion, without the
need for guards (yes, there are subtle issues related to that... let's at
least handle a well-defined subset of the cases);

* Allows defining "hygienic" macros - macros that expand to the same text
independently of the context in which they are expanded (see the sketch just
after this list);

* Allows defining scoped macros - macros visible only within the current
scope;

* Has recursion and possibly iteration;

* Has a simple, clear expansion model (negative examples abound - NOT like
m4, NOT like TeX... :o))

* Supports a variable number of arguments. I won't venture into thinking of
more cool support a la Scheme or Dylan or Java extender macros.
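
A minimal editorial sketch of the hygiene problem the third bullet refers to
(the macro and function names here are made up for illustration): today's
macros capture whatever names happen to be in scope at the point of expansion.

#define SQUARE_INTO(dst, x) { int tmp = (x); (dst) = tmp * tmp; }

void f() {
    int tmp = 7, result;
    SQUARE_INTO(result, tmp);
    // expands to { int tmp = (tmp); (result) = tmp * tmp; } -- the macro's
    // own tmp shadows, and is initialized from, itself, so result is garbage.
    // A hygienic macro system would rename the macro-internal tmp automatically.
}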


Andrei
 

P.M.

"Andrei Alexandrescu \(See Website for Email\)"
replacement of C++'s preprocessor...

If you're going to build a better text-substitution layer, or even a
true Lisp-ish macro system (although one which was restricted to
compile-time), why not go further and clean up more of the language via
this new parser? How about embracing and simplifying template
meta-programming with a better template system that is a true
compile-time functional language in its own right, with unlimited
recursion, clear error messages, etc.? In fact, it may be possible to
unify both this new uber macro system and the template
expansion/meta-programming syntax.

* C++ improvements: Dare to dream, but wear asbestos underpants just
in case.
 

David Abrahams

Andrei Alexandrescu (See Website for Email) said:
Jeremy Siek said:
CALL FOR PAPERS/PARTICIPATION

C++, Boost, and the Future of C++ Libraries
Workshop at OOPSLA
October 24-28, 2004
Vancouver, British Columbia, Canada
http://tinyurl.com/4n5pf
[snip]

I wonder if the submitters follow this post's trail or I should email
them... anyway, here goes.

I am not sure if I'll ever get around to writing it, so I said I'd post the
idea here; maybe someone will pursue it. In short, I think a proposal for a
replacement of C++'s preprocessor would be welcome.

Hard to see how this is going to be about C++ libraries, but I'll
follow along.
Today Boost uses a "preprocessor library", which in turn (please
correct me if my understanding is wrong) relies on a program to
generate a great many big macros up to a fixed "maximum" to overcome
the preprocessor's inability to deal with a variable number of
arguments.

That's a pretty jumbled understanding of the situation.

The preprocessor library is a library of headers and macros that allow
you to generate C/C++ code by writing programs built out of macro
invocations. You can see the sample appendix at
http://www.boost-consulting.com/mplbook for a reasonably gentle
introduction.

In the preprocessor library's _implementation_, there are lots of
boilerplate program-generated macros, but that's an implementation
detail that's only needed because so many preprocessors are badly
nonconforming. In fact, the library's maintainer, Paul Mensonides,
has a _much_ more elegantly-implemented PP library
(http://sourceforge.net/projects/chaos-pp/) that has almost no
boilerplate, but it only works on a few compilers (GCC among them).

There is no way to "overcome" the PP's incapability to deal with
variable number of arguments other than by using PP data structures as
described in http://boost-consulting.com/mplbook/preprocessor.html to
pass multiple items as a single macro argument, or by extending the PP
to support variadic macros a la C99, as the committee is poised to do.
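
As an editorial illustration of the first workaround (not code from the post;
DECLARE_COUNTER and count_ are made-up names), several items can be packed
into one macro argument as a Boost.PP sequence and then walked:

#include <boost/preprocessor/cat.hpp>
#include <boost/preprocessor/seq/for_each.hpp>

// Invoked once per element; `prefix` is the data slot, `name` the element.
#define DECLARE_COUNTER(r, prefix, name) int BOOST_PP_CAT(prefix, name) = 0;

// The whole list (x)(y)(z) travels through the macro as ONE argument.
BOOST_PP_SEQ_FOR_EACH(DECLARE_COUNTER, count_, (x)(y)(z))
// expands to: int count_x = 0; int count_y = 0; int count_z = 0;

#undef DECLARE_COUNTER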

The PP library is _often_ used to overcome C++'s inability to support
typesafe function (template)s with variable numbers of arguments, by
writing PP programs that generate overloaded function (template)s.
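
A small editorial sketch of such a PP program (mine, not from the library or
the post; `apply` is a hypothetical function): one definition drives the
generation of an overload per arity.

#include <boost/preprocessor/repetition/repeat_from_to.hpp>
#include <boost/preprocessor/repetition/enum_params.hpp>
#include <boost/preprocessor/repetition/enum_binary_params.hpp>

// Invoked once per arity n; expands to apply(F f, A0 a0, ..., An-1 an-1).
#define DECLARE_APPLY(z, n, unused)                        \
    template <class F, BOOST_PP_ENUM_PARAMS(n, class A)>   \
    void apply(F f, BOOST_PP_ENUM_BINARY_PARAMS(n, A, a))  \
    { f(BOOST_PP_ENUM_PARAMS(n, a)); }

BOOST_PP_REPEAT_FROM_TO(1, 4, DECLARE_APPLY, ~)  // arities 1 through 3
#undef DECLARE_APPLY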
Also, please correct me if I'm wrong (because I haven't really
looked deep into it), but my understanding is that people around
Boost see the PP library as a necessary but unpleasantly-smelling
beast that makes things around it smelly as well.

I don't see it that way, although I wish there were ways to avoid
using it in some of the more common cases (variadic template). Maybe
some others do see it like that.
[Reminds me of the Romanian story: there was a guy called Pepelea
(pronounced Peh-Peh-leah) who was poor but had inherited a beautiful
house. A rich man wanted to buy it, and Pepelea sold it on one
condition: that Pepelea owns a nail in the living room's wall, in
which he can hang whatever he wanted. Now when the rich man was
having guests and whatnot, Pepelea would drop by and embarraisingly
hang a dirty old coat. Of course in the end the rich man got so
exasperated that he gave Pepelea the house back for free. Ever since
that story, "Pepelea's nail" is referred to as something
like... like what the preprocessor is to the C++ language.]
Cute.

That would be reason one to create a new C++ preprocessor. (And when
I say "new," that's not like in "yet another standard C++
preprocessor". I have been happy to see my suggestion on the Boost
mailing list followed in that the WAVE preprocessor was built using
Boost's own parser generator library, Spirit.) What I am talking about now
is "a backwards-INcompatible C++ preprocessor aimed at displacing
the existing preprocessor forever and replacing it with a better
one".

Bjarne's plan for that is to gradually make the capabilities of the
existing PP redundant by introducing features in the core
language... and then, finally, deprecate it.
If backed by the large Boost community, the new preprocessor could
easily gain popularity and be used in new projects instead of the
old one.

I doubt even with Boost backing that the community at large is likely
to easily accept integrating another tool into its build processes.
The big advantage of the C++ PP is that it's built-in... and that's
one of the biggest reasons that the PP _lib_ is better for my purposes
than any of the ad hoc code generators I've written/used in the past.
To avoid inheriting the past's mistakes, the new preprocessor
doesn't need to be syntax-compatible in any way with the old
preprocessor, but only functionally compatible, in that it can do
all that can be done with the existing preprocessor, only that it
has new means to do things safer and better.

I think Bjarne's approach is the best way to do that sort of
replacement. As long as the PP's functionality is really being
replaced by a textual preprocessor (or a token-wise one as we have
today) it's going to suffer many of the same problems. Much of those
jobs should be filled by a more robust metaprogramming system that's
fully integrated into the language and not just a processing phase.
I think that would be great. Because if we all stop coding for a
second and think about it, what's the ugliest scar on C++'s face - what
is Pepelea's nail? Maybe "export", which is so broken and so useless
and so abusive that its implementers have developed Stockholm
syndrome during the long years it took them to implement it?

That's slander ;->. Export could be used to optimize template
metaprograms, for example (compile the templates to executable code
that does instantiations). It may not have been a good idea, but
those who suffered through implementing it now think it has some
potential utility.
Maybe namespaces that are so badly designed, you'd think they are
inherited from C?

Wow, I'm impressed; that's going to piss off both the hardcore C _and_
C++ people!

I've never seen a serious proposal for better namespaces, other than
http://boost-consulting.com/writing/qn.html, which seems to have been
generally ignored. Have you got any ideas?
I'd say they are good contenders against each other, but none of
them holds a candle to the preprocessor.

So, a proposal for a new preprocessor would be great.

If that's your point, I think it's an interesting one, but somehow I
still don't get how it could be appropriate for a workshop on C++
libraries.
 

Bob Hairgrove

On 11 Aug 2004 16:19:01 -0400, David Abrahams wrote:

[snip]
That's slander ;->. Export could be used to optimize template
metaprograms, for example (compile the templates to executable code
that does instantiations). It may not have been a good idea, but
those who suffered through implementing it now think it has some
potential utility.

Has anyone except Comeau actually implemented it? I think it is a
great idea WRT hiding the implementation, and probably (I have never actually
used this feature) also toward eliminating the code bloat typical of
heavily templated code.

Here I'd have to vote for function throw specs, not export.
 

Andrei Alexandrescu (See Website for Email)

David Abrahams said:
"Andrei Alexandrescu \(See Website for Email\)"

That's a pretty jumbled understanding of the situation.

The preprocessor library is a library of headers and macros that allow
you to generate C/C++ code by writing programs built out of macro
invocations. You can see the sample appendix at
http://www.boost-consulting.com/mplbook for a reasonably gentle
introduction.

OK, it's a half-jumbled understanding of the situation, coupled with a
half-jumbled expression of my half-jumbled understanding :o).

First, I've looked in my Boost installation to see things like
BOOST_PP_REPEAT_1_0 to BOOST_PP_REPEAT_1_256 and then BOOST_PP_REPEAT_2_0 to
BOOST_PP_REPEAT_2_256 and so on. My understanding (which I tried to convey
in my post) is that such macros are generated by a program. That program is
admittedly not part of the library as distributed (I believe it is part of
the maintenance process), but I subjectively consider it a witness that a
more elegant approach would be welcome.

Then I've looked again over the PP library (this time through the link
you've sent), and honestly it reminds me of TeX macro tricks more than any
example of elegant programming. As such, I'd find it hard to defend it with
a straight face, and I am frankly surprised you do. But then I understand
the practical utility, as you point out below.
Bjarne's plan for that is to gradually make the capabilities of the
existing PP redundant by introducing features in the core
language... and then, finally, deprecate it.

It's hard to introduce the ability to define syntactic replacement (which
many people consider useful) in the core language.
I doubt even with Boost backing that the community at large is likely
to easily accept integrating another tool into its build processes.
The big advantage of the C++ PP is that it's built-in... and that's
one of the biggest reasons that the PP _lib_ is better for my purposes
than any of the ad hoc code generators I've written/used in the past.

Practicality, and not elegance or suitability, is about the only reason that
I could agree with.
I think Bjarne's approach is the best way to do that sort of
replacement. As long as the PP's functionality is really being
replaced by a textual preprocessor (or a token-wise one as we have
today) it's going to suffer many of the same problems. Much of those
jobs should be filled by a more robust metaprogramming system that's
fully integrated into the language and not just a processing phase.

I think we are talking about different things here. One path to pursue is indeed to
provide better means for template programming, and another is to provide
syntactic manipulation. To me, they are different and complementary
techniques.
That's slander ;->. Export could be used to optimize template
metaprograms, for example (compile the templates to executable code
that does instantiations). It may not have been a good idea, but
those who suffered through implementing it now think it has some
potential utility.

Sure. Similarly, they discovered that the expensive air filters for the
space shuttle can be used (only) as coffee filters for the team on the
ground :o).
Wow, I'm impressed; that's going to piss off both the hardcore C _and_
C++ people!

Heh heh... I knew this was going to be taken that way :o). What I meant was,
many shortcomings of C++ are rooted in the need for compatibility with C. With
namespaces and export, there's no C to blame :o).
I've never seen a serious proposal for better namespaces, other than
http://boost-consulting.com/writing/qn.html, which seems to have been
generally ignored. Have you got any ideas?

That's a good doc, solidly motivated; I am sorry it does not get the
attention it deserves.
If that's your point, I think it's an interesting one, but somehow I
still don't get how it could be appropriate for a workshop on C++
libraries.

OK, I'll drop it. May I still bicker about it on the Usenet? :o)


Andrei
 

Paul Mensonides

That's a pretty jumbled understanding of the situation.

The preprocessor library is a library of headers and macros that allow
you to generate C/C++ code by writing programs built out of macro
invocations. You can see the sample appendix at
http://www.boost-consulting.com/mplbook for a reasonably gentle
introduction.

In the preprocessor library's _implementation_, there are lots of
boilerplate program-generated macros, but that's an implementation
detail that's only needed because so many preprocessors are badly
nonconforming. In fact, the library's maintainer, Paul Mensonides,
has a _much_ more elegantly-implemented PP library
(http://sourceforge.net/projects/chaos-pp/) that has almost no
boilerplate, but it only works on a few compilers (GCC among them).

Yes; for example, it would be relatively easy to construct a macro that would
(given enough memory) run for 10,000 years generating billions upon trillions of
results. Obviously that isn't useful; I'm merely pointing out that many of the
limits Andrei refers to above aren't really limits.
There is no way to "overcome" the PP's incapability to deal with
variable number of arguments other than by using PP data structures as
described in http://boost-consulting.com/mplbook/preprocessor.html to
pass multiple items as a single macro argument, or by extending the PP
to support variadic macros a la C99, as the committee is poised to do.

Incidentally, variadics make for highly efficient data structures--basically
because they can be unrolled. Given variadics, it is possible to tell whether
there are at least a certain number of elements in constant time. This allows
unrolled processing in batch.
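
For readers unfamiliar with the trick, here is an editorial illustration (not
Chaos code) of constant-time detection with variadics: the padded tail shifts
the answer into a fixed parameter slot, so no per-element processing happens.

/* Counts 1..4 arguments; real libraries extend the padding much further. */
#define PP_COUNT(...) PP_COUNT_I(__VA_ARGS__, 4, 3, 2, 1, ~)
#define PP_COUNT_I(a, b, c, d, n, ...) n

/* PP_COUNT(x)          -> 1
   PP_COUNT(x, y)       -> 2
   PP_COUNT(x, y, z, w) -> 4 */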
The PP library is _often_ used to overcome C++'s inability to support
typesafe function (template)s with variable numbers of arguments, by
writing PP programs that generate overloaded function (template)s.
Yes.


I don't see it that way, although I wish there were ways to avoid
using it in some of the more common cases (variadic template). Maybe
some others do see it like that.

In some ways, with Chaos more so than Boost PP, preprocessor-based code
generation is very elegant. It should be noted also that well-designed code
generation via the preprocessor typically yields more type-safe code that is
less error-prone and more maintainable than the alternatives.
Bjarne's plan for that is to gradually make the capabilities of the
existing PP redundant by introducing features in the core
language... and then, finally, deprecate it.
I doubt even with Boost backing that the community at large is likely
to easily accept integrating another tool into its build processes.
The big advantage of the C++ PP is that it's built-in... and that's
one of the biggest reasons that the PP _lib_ is better for my purposes
than any of the ad hoc code generators I've written/used in the past.

Safer? In what way? Name clashes? Multiple evaluation?

I have probably written more macros than any other person. Chaos alone has
nearly two thousand *interface* (i.e. not including implementation) macros. The
extent is not so great in the pp-lib, but it is large nonetheless, and the
pp-lib is widely used even if not directly. However, there have been no cases
of name collisions that I am aware of--simply because the library follows simple
guidelines on naming conventions. The fact that users of Boost need not even be
aware of the preprocessor-generation used within Boost is a further testament of
the elegance of the solutions--even in spite of the limitations and hacks
imposed by non-conforming preprocessors.

Consider the recent CUJ issue with the Matlab article, which has unprefixed,
non-all-caps macro definitions *on the cover of the magazine*. Though the code
of which that is a part may well be good overall and serve a useful function,
those macros are simply bad coding--and nothing can prevent bad coding and
wanton disregard for the consequences of actions.

As far as multiple evaluation is concerned, that is a result of viewing a macro
as a function--which it is not. Macros expand to code--they have nothing
specifically to do with function calls or any other language abstraction. Even
today people recommend, for example, that macros that expand to statements (or
similar) should leave out the trailing semicolon so that the invocation looks
like a normal function call. In general, that is a terrible strategy. Macro
invocations are not function calls, do not have the semantics of function calls,
and should not be intentionally made to *act* like function calls. The code
that a macro expands to is the functional result of that macro and should be
documented as such--not just what that code does.
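
An editorial sketch (not Paul's code; MAX, SWAP, and order are made-up names)
of the two pitfalls behind this point:

/* Multiple evaluation: the argument text is substituted -- and evaluated --
   twice; MAX(i++, j) expands to ((i++) > (j) ? (i++) : (j)). */
#define MAX(a, b) ((a) > (b) ? (a) : (b))

/* A macro that expands to a compound statement, not a call: */
#define SWAP(T, a, b) { T tmp = (a); (a) = (b); (b) = tmp; }

/* Used "like a function call" it breaks:
       if (x > y)
           SWAP(int, x, y);   <- expands to "{ ... };"; the stray ';'
       else                   <- detaches this else: a syntax error
   Documenting what SWAP expands to -- a compound statement that needs no
   semicolon -- is the kind of documentation argued for above. */
void order(int* x, int* y)
{
    if (*x > *y)
        SWAP(int, *x, *y)   /* no ';': it is a statement, not a call */
    else
        (void)0;
}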
I think Bjarne's approach is the best way to do that sort of
replacement. As long as the PP's functionality is really being
replaced by a textual preprocessor (or a token-wise one as we have
today) it's going to suffer many of the same problems. Much of those
jobs should be filled by a more robust metaprogramming system that's
fully integrated into the language and not just a processing phase.

This is a fundamental issue. It would indeed be great to have a more advanced
preprocessor capable of doing many of the things that Boost PP (or Chaos) is
designed to enable. However, there will *always* be a need to manipulate source
without the semantic attachment of the underlying language's syntactic and
semantic rules. In many cases those rules lead to generation code that is
significantly more obtuse than it actually needs to be because the restrictions
imposed by syntax are fundamentally at odds with the creation of that syntax (or
the equivalent semantic effect). If there were another metaprogramming layer in
the compilation process (which would be fine), the preprocessor would just be
used to generate that also--for the basic reason that the syntax of the
generated language just gets in the way.

The ability to manipulate the core language without that attachment is one of
the preprocessor's greatest strengths. It is also one of the preprocessor's
greatest weaknesses. Just like any other language feature, particularly in C
and C++, it must be used with care because just like any other language feature,
it can be easily abused. The preprocessor enables very elegant and good
solutions when used well.

Without resorting to arbitrary rhetoric such as "macros are evil", what is ugly
about the preprocessor? Certain uses of the preprocessor have in the past
caused (and still cause) problems. However, labeling macros as ugly because
they can be misused is taking the easy way out. It represents a failure to
isolate and understand how those problems surface and how they should be avoided
through specific guidelines (instead of gross generalizations). This has been
happening (and is still ongoing) with the underlying language for some time.
You avoid pitfalls in languages like C and C++ through understanding.
Guidelines themselves are not truly effective unless they are merely reminders
of the reasoning behind the guidelines. Otherwise, they just lead to
brain-dead, in-the-box programming, and inhibit progress.
That's slander ;->. Export could be used to optimize template
metaprograms, for example (compile the templates to executable code
that does instantiations). It may not have been a good idea, but
those who suffered through implementing it now think it has some
potential utility.

I agree.

Regards,
Paul Mensonides
 

Andrei Alexandrescu (See Website for Email)

Paul Mensonides said:
In some ways, with Chaos more so than Boost PP, preprocessor-based code
generation is very elegant. It should be noted also that well-designed code
generation via the preprocessor typically yields more type-safe code that is
less error-prone and more maintainable than the alternatives.

It would be interesting to see some examples of that around here. I would be
grateful if you posted some.
Safer? In what way? Name clashes? Multiple evaluation?

I have probably written more macros than any other person.

I think you mean "I have probably written more C++ macros than any other
person." That detail is important. I'm not one to claim having written lots
of macros in any language, and I apologize if the amendment above sounds
snooty. I just think it's reasonable to claim that the C++ preprocessor
compares very unfavorably with many other languages' means for syntactic
abstractions.

I totally agree with everything you wrote, but my point was, I believe,
misunderstood. Yes, "macros are evil" is an easy cop-out. But I never said
that. My post says what amounts to "the C++ preprocessor sucks". It
sucks because it is not a powerful-enough tool. That's why.

So let me restate my point. Macros are great. I love macros. Syntactic
abstractions have their place in any serious language, as you very nicely
point out. And yes, they are distinct from other means of abstraction. And
yes, they can be very useful, and shouldn't be banned just because they can
be misused.

(I'll make a parenthetical remark here that I think is important. I believe the
worst thing the C/C++ preprocessor has ever done is to steer an entire huge
community away from the power of syntactic abstractions.)

So, to conclude, my point was that the preprocessor is too primitive a tool
for implementing syntactic abstractions with.

Let's think about wishes. You've done a great many good things with the
preprocessor, so you are definitely the one to be asked. What features do
you think would have made it easier for you and your library's clients?


Andrei
 

Paul Mensonides

First, I've looked in my boost implementation to see things like
BOOST_PP_REPEAT_1_0 to BOOST_PP_REPEAT_1_256 and then BOOST_PP_REPEAT_2_0 to
BOOST_PP_REPEAT_2_256 and so on. My understanding (which I tried to convey
in my post) is that such macros are generated by a program. That program is
admittedly not part of the library as distributed (I believe it is part of
the maintenance process), but I subjectively consider it a witness that a
more elegant approach would be welcome.

Agreed, that implementation is junk and is the result of poor preprocessor
conformance.
Then I've looked again over the PP library (this time through the link
you've sent),

(I believe that the link Dave posted was to Chaos--which is distinct from Boost
Preprocessor.)
and honestly it reminds me of TeX macro tricks more than any
example of elegant programming. As such, I'd find it hard to defend it with
a straight face, and I am frankly surprised you do. But then I understand
the practical utility, as you point out below.

What it reminds you of is irrelevant. You know virtually nothing about how it
works--you've never taken the time. Without that understanding, you cannot
critique its elegance or lack thereof.
It's hard to introduce the ability to define syntactic replacement (which
many people consider useful) in the core language.

I agree--but that doesn't mean that we can't take steps in that direction.
Practicality, and not elegance or suitability, is about the only reason that
I could agree with.

Once again, a quick glance is wholly insufficient. You have not taken the time
to learn the idioms involved. The solutions that Chaos uses *internally* are
indeed far more elegant than you realize. Likewise, the solutions that Chaos
(or Boost PP) engenders through client code are more elegant than you realize.
You simply don't know enough about it to weigh the pros and cons.

Regards,
Paul Mensonides
 

David Abrahams

Andrei Alexandrescu (See Website for Email) said:
OK, it's a half-jumbled understanding of the situation, coupled with a
half-jumbled expression of my half-jumbled understanding :o).

First, I've looked in my Boost installation to see things like
BOOST_PP_REPEAT_1_0 to BOOST_PP_REPEAT_1_256 and then BOOST_PP_REPEAT_2_0 to
BOOST_PP_REPEAT_2_256 and so on. My understanding (which I tried to convey
in my post) is that such macros are generated by a program.

Yes, but as I mentioned none of that is required in std C++.
http://sourceforge.net/projects/chaos-pp/ doesn't use any
program-generated macros.
That program is admittedly not part of the library as distributed (I
believe it is part of the maintenance process), but I subjectively
consider it a witness that a more elegant approach would be welcome.

Yeah, I'd rather be using Chaos everywhere instead of the current
Boost PP lib. Too bad it isn't portable in real life.
Then I've looked again over the PP library (this time through the link
you've sent), and honestly it reminds me of TeX macro tricks more than any
example of elegant programming.

Where are the similarities with TeX macro tricks?
As such, I'd find it hard to defend it with a straight face, and I
am frankly surprised you do.

You're surprised I defend the PP library based on the fact that it
reminds _you_ of TeX macros?

The PP lib provides me with an expressive programming system for code
generation using well-understood functional programming idioms. In
the domain of generating C++ from token fragments, it's hard to
imagine what more one could want other than some syntactic sugar and
scoping.
But then I understand the practical utility, as you point out below.

It's hard to introduce the ability to define syntactic replacement
(which many people consider useful) in the core language.

Right. I personally think the PP will always have a role. That
said, I think its role could be substantially reduced.
Practicality, and not elegance or suitability, is about the only
reason that I could agree with.

Practicality in this case is elegance. My users can adjust
code-generation parameters by putting -Dwhatever on their
command-line.

FWIW, I designed a sophisticated purpose-built C++ code generation
language using Python and eventually scrapped it. Ultimately the
programs I'd written were harder to understand than those using the PP
lib. That isn't to say someone else can't do better... I'd like to
see a few ideas if you have any.
I think here we talk about different things. One path to pursue is
indeed to provide better means for template programming and another
is to provide syntactic manipulation. To me, they are different and
complementary techniques.

Metaprogramming != template programming. In meta-Haskell, they
actually manipulate ASTs in the core language. As I understand the
XTI project, it's going in that sort of direction, though a key link
for metaprogramming is missing.
Heh heh... I knew this was going to be taken that way :o). What I meant was,
many shortcomings of C++ are rooted in the need for compatibility with C. With
namespaces and export, there's no C to blame :o).

I don't know about that. Isn't C's inclusion model a big part of the
reason that namespaces are not more like modules?
That's a good doc, solidly motivated; I am sorry it does not get the
attention it deserves.

Thanks. Maybe I should re-submit it.
Ok, I'll drop it.

If you have PP library ideas, by all means bring those up.
May I still bicker about it on the Usenet? :o)

It's your dime ;-)
 

Andrei Alexandrescu (See Website for Email)

Paul Mensonides said:
(I believe that the link Dave posted was to Chaos--which is distinct from
Boost Preprocessor.)

I've looked at the existing PP library, not at Chaos.
What it reminds you of is irrelevant. You know virtually nothing about how it
works--you've never taken the time. Without that understanding, you cannot
critique its elegance or lack thereof.

It seems like my comments have annoyed you, and for a good reason. Please
accept my apologies.

FWIW, what I looked at were usage samples, not how it works (either Boost
PP or Chaos). Those *usage* examples I deemed wanting.
Once again, a quick glance is wholly insufficient. You have not taken the time
to learn the idioms involved. The solutions that Chaos uses *internally* are
indeed far more elegant than you realize. Likewise, the solutions that Chaos
(or Boost PP) engenders through client code are more elegant than you realize.
You simply don't know enough about it to weigh the pros and cons.

Again, I am sorry if I have caused annoyance. I still believe, however, that
you yourself would be happier, and could provide more abstractions, if better
facilities were available to you than what the preprocessor currently offers.
That is what I think would be interesting to discuss.


Andrei
 

Daniel R. James

Since no one else has pointed this out: it does this to overcome the
preprocessor's lack of recursion; it has nothing to do with variable
arguments.
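
An editorial illustration of that lack of recursion (COUNTDOWN is a made-up
macro): a macro is disabled while it is being expanded, so a self-reference is
simply left alone.

#define COUNTDOWN(n) n, COUNTDOWN(n)
// COUNTDOWN(3) expands to: 3, COUNTDOWN(3)
// The inner COUNTDOWN(3) is never rescanned as a macro call, which is why
// Boost.PP simulates recursion with families of numbered macros
// (BOOST_PP_REPEAT_1, BOOST_PP_REPEAT_2, ...) and Chaos with its
// recursion-state (EXPR_S) machinery.
#undef COUNTDOWN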

David Abrahams said:
In the preprocessor library's _implementation_, there are lots of
boilerplate program-generated macros, but that's an implementation
detail that's only needed because so many preprocessors are badly
nonconforming. In fact, the library's maintainer, Paul Mensonides,
has a _much_ more elegantly-implemented PP library
(http://sourceforge.net/projects/chaos-pp/) that has almost no
boilerplate, but it only works on a few compilers (GCC among them).

Unless I'm missing something, that link goes to an empty SourceForge
project. Which is a pity, because I remember seeing some old Chaos
code somewhere, and it looked ace.

Daniel
 

Paul Mensonides

"Andrei Alexandrescu (See Website for Email)"
It would be interesting to see some examples of that around here. I would be
grateful if you posted some.

Do you mean code examples or just general examples? As far as general
examples go, the inability to manipulate the syntax of the language leads to
either replication (which is error-prone, dramatically increases the number of
maintenance points, and obscures the abstraction represented by the totality of
the replicated code) or the rejection of implementation strategies that would
otherwise be superior. The ability to adapt, to deal with variability, is often
implemented with less type-safe, runtime-based solutions simply because the
metalanguage doesn't allow a simpler way to get from conception to
implementation.

Regarding actual code examples, here's a Chaos-based version of the old
TYPELIST_1, TYPELIST_2, etc., macros. Note that this example uses variadics,
which are likely to be added in C++0x. (It is also a case in point of why
variadics are important.)

#include <chaos/preprocessor/control/iif.h>
#include <chaos/preprocessor/detection/is_empty.h>
#include <chaos/preprocessor/facilities/encode.h>
#include <chaos/preprocessor/facilities/split.h>
#include <chaos/preprocessor/limits.h>
#include <chaos/preprocessor/recursion/basic.h>
#include <chaos/preprocessor/recursion/expr.h>

#define TYPELIST(...) TYPELIST_BYPASS(CHAOS_PP_LIMIT_EXPR, __VA_ARGS__)
#define TYPELIST_BYPASS(s, ...) \
    CHAOS_PP_EXPR_S(s)(TYPELIST_I( \
        CHAOS_PP_OBSTRUCT(), CHAOS_PP_PREV(s), __VA_ARGS__, \
    )) \
    /**/
#define TYPELIST_INDIRECT() TYPELIST_I
#define TYPELIST_I(_, s, ...) \
    CHAOS_PP_IIF _(CHAOS_PP_IS_EMPTY_NON_FUNCTION(__VA_ARGS__))( \
        Loki::NilType, \
        Loki::TypeList< \
            CHAOS_PP_DECODE _(CHAOS_PP_SPLIT _(0, __VA_ARGS__)), \
            CHAOS_PP_EXPR_S _(s)(TYPELIST_INDIRECT _()( \
                CHAOS_PP_OBSTRUCT _(), CHAOS_PP_PREV(s), \
                CHAOS_PP_SPLIT _(1, __VA_ARGS__) \
            )) \
        > \
    ) \
    /**/

The TYPELIST macro takes the place of all of the TYPELIST_x macros (and more) at
one time, has facilities to handle types with open commas (e.g. std::pair<int,
int>), and this is more-or-less doing it by hand in Chaos. If you used
facilities already available, you could do the same with one macro:

#include <chaos/preprocessor/facilities/encode.h>
#include <chaos/preprocessor/lambda/ops.h>
#include <chaos/preprocessor/punctuation/comma.h>
#include <chaos/preprocessor/recursion/expr.h>
#include <chaos/preprocessor/tuple/for_each.h>

#define TYPELIST(...) \
    CHAOS_PP_EXPR( \
        CHAOS_PP_TUPLE_FOR_EACH( \
            CHAOS_PP_LAMBDA(Loki::TypeList<) \
            CHAOS_PP_DECODE_(CHAOS_PP_ARG(1)) CHAOS_PP_COMMA_(), \
            (__VA_ARGS__) \
        ) \
        Loki::NilType \
        CHAOS_PP_TUPLE_FOR_EACH( \
            CHAOS_PP_LAMBDA(>), (__VA_ARGS__) \
        ) \
    ) \
    /**/

This implementation can process up to ~5000 types and there is no list of 5000
macros anywhere in Chaos. (There are also other, more advanced methods capable
of processing trillions upon trillions of types.)

This example is particularly motivating because it is an example of code used by
clients that is itself a client of Chaos. In this case, its primary purpose is to
produce facilities for type manipulation (i.e. Loki, MPL, etc.), which raises the
level of abstraction for clients without sacrificing any type safety whatsoever.
I think you mean "I have probably written more C++ macros than any other
person." That detail is important.

Yes, it is. I was referring to C and C++ macros.
I'm not one to claim having written lots
of macros in any language, and I apologize if the amendment above sounds
snooty. I just think it's reasonable to claim that the C++ preprocessor
compares very unfavorably with many other languages' means for syntactic
abstractions.
Yes.

I totally agree with everything you wrote, but my point was, I believe,
misunderstood. Yes, "macros are evil" is an easy cop-out. But I never said
that. My post says what tantamounts to "The C++ preprocessor sucks". It
sucks because it is not a powerful-enough tool. That's why.

It is a powerful enough tool, but it could be easier to employ than it is.
So let me restate my point. Macros are great. I love macros. Syntactic
abstractions have their place in any serious language, as you very nicely
point out. And yes, they are distinct from other means of abstraction. And
yes, they can be very useful, and shouldn't be banned just because they can
be misused.

(I'll make a parenthesis here that I think is important. I believe the worst
thing that the C/C++ preprocessor has ever done is to steer an entire huge
community away from the power of syntactic abstractions.)

That is an *extremely* good point.
So, to conclude, my point was that the preprocessor is too primitive a tool
for implementing syntactic abstractions with.

It could be better, by all means, but it is plenty powerful enough to implement
syntactic abstractions--it is more powerful than most people realize. For
example, the first snippet above is using generalized recursion--recursion
itself can be a shareable, extensible, library facility.
Let's think wishes. You've done a great many good things with the
preprocessor, so you are definitely the one to be asked. What features do
you think would have made it easier for you and your library's clients?

The most fundamental thing would be the ability to separate the first arbitrary
preprocessing token (or whitespace separation) from those that follow it in a
sequence of tokens and be able to classify it in some way (i.e. determine what
kind of token it is and what its value is). The second thing would be the
ability to take a single preprocessing token and deconstruct it into characters.
I can do everything else, but can only do those things in limited ways.

Regards,
Paul Mensonides
 

Paul Mensonides

Yes, but as I mentioned none of that is required in std C++.
http://sourceforge.net/projects/chaos-pp/ doesn't use any
program-generated macros.

It does use some, but not for algorithmic constructs. E.g. the closest
equivalent (i.e. as feature-lacking as possible) to BOOST_PP_REPEAT under Chaos
is:

#include <chaos/preprocessor/arithmetic/dec.h>
#include <chaos/preprocessor/control/when.h>
#include <chaos/preprocessor/recursion/basic.h>
#include <chaos/preprocessor/recursion/expr.h>

#define REPEAT(count, macro, data) \
    REPEAT_S(CHAOS_PP_STATE(), count, macro, data) \
    /**/
#define REPEAT_S(s, count, macro, data) \
    REPEAT_I( \
        CHAOS_PP_OBSTRUCT(), CHAOS_PP_NEXT(s), \
        count, macro, data \
    ) \
    /**/
#define REPEAT_INDIRECT() REPEAT_I
#define REPEAT_I(_, s, count, macro, data) \
    CHAOS_PP_WHEN _(count)( \
        CHAOS_PP_EXPR_S _(s)(REPEAT_INDIRECT _()( \
            CHAOS_PP_OBSTRUCT _(), CHAOS_PP_NEXT(s), \
            CHAOS_PP_DEC(count), macro, data \
        )) \
        macro _(s, CHAOS_PP_DEC(count), data) \
    ) \
    /**/

Regards,
Paul Mensonides
 

Paul Mensonides

"Andrei Alexandrescu (See Website for Email)"
I've looked at the existing PP library, not at Chaos.

In that case, I agree. Internally, Boost PP is a mess--but a mess caused by
lackluster conformance.
It seems like my comments have annoyed you, and for a good reason. Please
accept my apologies.

I don't mind the comments. I do mind preconceptions. With the preprocessor
there are a great many preconceptions about what it can and cannot do.

Regards,
Paul Mensonides
 

David Abrahams

Paul Mensonides said:
It does use some, but not for algorithmic constructs. E.g. the closest
equivalent (i.e. as feature-lacking as possible) to BOOST_PP_REPEAT under Chaos
is:

#include <chaos/preprocessor/arithmetic/dec.h>
#include <chaos/preprocessor/control/when.h>
#include <chaos/preprocessor/recursion/basic.h>
#include <chaos/preprocessor/recursion/expr.h>

#define REPEAT(count, macro, data) \
REPEAT_S(CHAOS_PP_STATE(), count, macro, data) \
/**/
#define REPEAT_S(s, count, macro, data) \
REPEAT_I( \
CHAOS_PP_OBSTRUCT(), CHAOS_PP_NEXT(s), \
count, macro, data \
) \
/**/
#define REPEAT_INDIRECT() REPEAT_I
#define REPEAT_I(_, s, count, macro, data) \
CHAOS_PP_WHEN _(count)( \
CHAOS_PP_EXPR_S _(s)(REPEAT_INDIRECT _()( \
CHAOS_PP_OBSTRUCT _(), CHAOS_PP_NEXT(s), \
CHAOS_PP_DEC(count), macro, data \
)) \
macro _(s, CHAOS_PP_DEC(count), data) \
) \
/**/

Confused. I don't see anything here that looks like a
program-generated macro.
 

Paul Mensonides

David said:
Confused. I don't see anything here that looks like a
program-generated macro.

That was the point. REPEAT is an algorithmic construct that uses recursion, but
it doesn't require macro repetition. However, some lower-level abstractions,
like recursion itself (e.g. EXPR_S) and saturation arithmetic (e.g. DEC),
require macro repetition. For something like recursion, which is not naturally
present in macro expansion, some form of macro repetition will always be
necessary. The difference is that that repetition is hidden behind an
abstraction, and a set of N macros need not imply only N steps. As
Andrei mentioned, BOOST_PP_REPEAT requires (at least) N macros to repeat N
things. Similarly, BOOST_PP_FOR, BOOST_PP_WHILE, etc., all require (at least) N
macros to perform N steps. That is not the case with Chaos.

#include <chaos/preprocessor/control/inline_when.h>
#include <chaos/preprocessor/recursion/basic.h>
#include <chaos/preprocessor/recursion/expr.h>

#define FOR(pred, op, macro, state) \
    FOR_S(CHAOS_PP_STATE(), pred, op, macro, state) \
    /**/
#define FOR_S(s, pred, op, macro, state) \
    FOR_I( \
        CHAOS_PP_OBSTRUCT(), CHAOS_PP_NEXT(s), \
        pred, op, macro, state \
    ) \
    /**/
#define FOR_INDIRECT() FOR_I
#define FOR_I(_, s, pred, op, macro, state) \
    CHAOS_PP_INLINE_WHEN _(pred _(s, state))( \
        macro _(s, state) \
        CHAOS_PP_EXPR_S _(s)(FOR_INDIRECT _()( \
            CHAOS_PP_OBSTRUCT _(), CHAOS_PP_NEXT(s), \
            pred, op, macro, op _(s, state) \
        )) \
    ) \
    /**/

#include <chaos/preprocessor/control/iif.h>
#include <chaos/preprocessor/recursion/basic.h>
#include <chaos/preprocessor/recursion/expr.h>

#define WHILE(pred, op, state) \
    WHILE_S(CHAOS_PP_STATE(), pred, op, state) \
    /**/
#define WHILE_S(s, pred, op, state) \
    WHILE_I( \
        CHAOS_PP_OBSTRUCT(), CHAOS_PP_NEXT(s), \
        pred, op, state \
    ) \
    /**/
#define WHILE_INDIRECT() WHILE_I
#define WHILE_I(_, s, pred, op, state) \
    CHAOS_PP_IIF _(pred _(s, state))( \
        CHAOS_PP_EXPR_S _(s)(WHILE_INDIRECT _()( \
            CHAOS_PP_OBSTRUCT _(), CHAOS_PP_NEXT(s), \
            pred, op, op _(s, state) \
        )), \
        state \
    ) \
    /**/
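
For contrast, an editorial sketch (not the actual Boost code) of the
repetition-based shape that BOOST_PP_REPEAT and friends are forced into -- one
numbered macro per step, up to a fixed maximum:

#define BOUNDED_REPEAT(count, m, d) BOUNDED_REPEAT_ ## count(m, d)
#define BOUNDED_REPEAT_0(m, d)
#define BOUNDED_REPEAT_1(m, d) m(0, d)
#define BOUNDED_REPEAT_2(m, d) BOUNDED_REPEAT_1(m, d) m(1, d)
#define BOUNDED_REPEAT_3(m, d) BOUNDED_REPEAT_2(m, d) m(2, d)
// ... one macro for every step the library will ever support, which is why
// the shipped headers contain long program-generated macro lists.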

Regards,
Paul Mensonides
 

Paul Mensonides

Daniel R. James said:
Unless I'm missing something, that link goes to an empty SourceForge
project. Which is a pity, because I remember seeing some old Chaos
code somewhere, and it looked ace.

The project is definitely not empty; it just hasn't made any "official"
releases.

Regards,
Paul Mensonides
 

Andrei Alexandrescu (See Website for Email)

Paul Mensonides said:
Regarding actual code examples, here's a Chaos-based version of the old
TYPELIST_1, TYPELIST_2, etc., macros. Note that this example uses variadics,
which are likely to be added in C++0x. (It is also a case in point of why
variadics are important.)

Cool. Before continuing the discussion, I have a simple question - how does
your implementation cope with commas in template types, for example:

TYPELIST(vector<int, my_allocator<int> >, vector<float>)

would it correctly create a typelist of two elements? If not, what steps do I
need to take to create such a typelist (aside from a typedef)?


Andrei
 

galathaea

Andrei Alexandrescu (See Website for Email) said:
Jeremy Siek said:
CALL FOR PAPERS/PARTICIPATION

C++, Boost, and the Future of C++ Libraries
Workshop at OOPSLA
October 24-28, 2004
Vancouver, British Columbia, Canada
http://tinyurl.com/4n5pf
[snip]

I wonder if the submitters follow this post's trail or I should email
them... anyway, here goes.

I am not sure if I'll ever get around to writing it, so I said I'd post the
idea here; maybe someone will pursue it. In short, I think a proposal for a
replacement of C++'s preprocessor would be welcome.

There is only one replacement for the c++ preprocessor which I would
consider truly up to c++'s potential as a competitive language into
the near future: metacode. Full metacode capabilities, not just a
minor update to template capabilities.

By this I mean the capability to walk the parse tree at compile time
and perform transformations in a meta-type safe manner (analogous to
full second-order lambda capability such as System F). Vandevoorde's
metacode seems a good step in this direction, but I really think that
such a proposal must be as complete as possible (and not a library
proposal but a full language extension).

Consider some of the things I've seen talked about recently on the
newsgroups. Injecting a function call after all constructors have
executed is certainly one of those things the language should allow,
but currently we must work 'around' the language by forcing the use of
factories in such cases or leaving the two-stage use to clients (never
a good idea). In terms of functional relationships, though (even in
the presence of exceptions), such a task is a simple injection in
terms of functional orderings of the ctors and we are currently made
to fight with the language definitions enforced by the compiler.

Or consider the place where I currently use the Boost preprocessing
library the most: serialisation of classes. If we were allowed to
walk the member list for a class, walk its inheritance graph, and
stringise the class names (or produce better unique identifiers),
serialisation would be a cakewalk. Unfortunately, it is made much
more difficult and requires a more difficult object definition model
if any of the tasks of serialisation are to be automated by a library.

And of course there is all that control over the exception path
process, pattern generation, and general aspect functional
relationship injection that programmers have been crying about for
years.
Today Boost uses a "preprocessor library", which in turn (please correct me
if my understanding is wrong) relies on a program to generate a great many big
macros up to a fixed "maximum" to overcome the preprocessor's inability to
deal with a variable number of arguments.

Others have pointed out that this is much more a nonconformance issue
than it is an inherent preprocessor limitation, but I'd like to stress
that a fully recursive code generation system in c++ would not present
such problems.
Also, please correct me if I'm wrong (because I haven't really looked deep
into it), but my understanding is that people around Boost see the PP
library as a necessary but unpleasantly-smelling beast that makes things
around it smelly as well. [Reminds me of the Romanian story: there was a guy
called Pepelea (pronounced Peh-Peh-leah) who was poor but had inherited a
beautiful house. A rich man wanted to buy it, and Pepelea sold it on one
condition: that Pepelea owns a nail in the living room's wall, in which he
can hang whatever he wanted. Now when the rich man was having guests and
whatnot, Pepelea would drop by and embarrassingly hang a dirty old coat. Of
course in the end the rich man got so exasperated that he gave Pepelea the
house back for free. Ever since that story, "Pepelea's nail" is referred to
as something like... like what the preprocessor is to the C++ language.]

You are certainly a storyteller, but I'd give the c++ preprocessor
more credit. It has been the only method to work around certain
limitations (shortfalls in complete second-order lambda
expressiveness) when needed by the coder. Indeed, if you take one of
the coder's duties as minimising the updates needed by future feature
revisions of other coders, the preprocessor has always had its place
secure from typed language features. Again, serialisation has been my
major use, but other reasons for, for instance, type to string
conversion include interception of APIs through lookup in the
import/export lists of the modules. A full metacode capability would
make that obsolete (walking the symbol table should be as easy as
walking the parse tree itself).
That would be reason one to create a new C++ preprocessor. (And when I say
"new," that's not like in "yet another standard C++ preprocessor". I have
been happy to see my suggestion on the Boost mailing list followed in that
the WAVE preprocessor was built using Boost's own parser generator library,
Spirit.) What I am talking about now is "a backwards-INcompatible C++ preprocessor
aimed at displacing the existing preprocessor forever and replacing it with
a better one".

Certainly full metacoding capabilities would make this obsolete...
If backed by the large Boost community, the new preprocessor could easily
gain popularity and be used in new projects instead of the old one. To avoid
inheriting the past's mistakes, the new preprocessor doesn't need to be
syntax-compatible in any way with the old preprocessor, but only
functionally compatible, in that it can do all that can be done with the
existing preprocessor, only that it has new means to do things safer and
better.

I think that would be great. Because if we all stop coding for a second and
think about it, what's the ugliest scar on C++'s face - what is Pepelea's nail?
Maybe "export", which is so broken and so useless and so abusive that its
implementers have developed Stockholm syndrome during the long years it
took them to implement it? Maybe namespaces that are so badly designed
you'd think they are inherited from C? I'd say they are good contenders
against each other, but none of them holds a candle to the preprocessor.

If we extend the idea of metacode to all of the translation process,
in other words if the programmer were to have control points inserted
into all parts of the code generation process, then export would never
have been a problem to begin with. Unfortunately, the c++
standardisation community feels that processes like linking are
sacrilege and not to be touched by regulation. If a full parse tree
walk were to include the ability to load other translation units and
manipulate their trees, then we wouldn't find export a 'scar' or in
any way difficile.

You know, if the c++ committee had the cojones to make standardisation
over the full translation process, we might even see dynamic linking a
possibility for the next language revision.
So, a proposal for a new preprocessor would be great. Here's a short wish
list:

* Does what the existing one does (although some of those coding patterns
will be discouraged);

* Supports one-time file inclusion and multiple file inclusion, without the
need for guards (yes, there are subtle issues related to that... let's at
least handle a well-defined subset of the cases);

* Allows defining "hygienic" macros - macros that expand to the same text
independently of the context in which they are expanded;

* Allows defining scoped macros - macros visible only within the current
scope;

* Has recursion and possibly iteration;

* Has a simple, clear expansion model (negative examples abound - NOT like
m4, NOT like TeX... :o))

* Supports a variable number of arguments. I won't venture into thinking of
more cool support a la Scheme or Dylan or Java extender macros.

I'm not a big fan of textual macros when typed completion gives the
same computational capability. I really think that metacoding,
architecture generation, and all of those great things we come to look
for in AOP and generative intentional programming is what the c++
standards committee should focus on. With that type of capability, a
"pre"-processor is superfluous.

-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

galathaea: prankster, fablist, magician, liar
 
