Boost Workshop at OOPSLA 2004


tom_usenet

Fundamentally, templates are sensitive to the point where they are
instantiated, and not to the parameters alone. That's a classic PL design
mistake because it undermines modularity. The effects of such a mistake are
easily visible :o). They've been visible in a couple of early languages as
well.

The alternative is to simply not have any kind of separate compilation
of templates. So we could drop two phase name lookup, "template" and
"typename" disambiguators, and possibly some other nasty features,
basically moving to the basic template inclusion model implemented by
old Borland and Microsoft compilers (except without the bugs!)

Hmm. It was simpler back then; I think two phase name lookup is still
extremely badly understood, as are the merits and otherwise of export.
(How many people realise that point-of-instantiation lookup of
function names uses ADL only, and not ordinary lookup?)

Tom
 

Gabriel Dos Reis

| > "Andrei Alexandrescu (See Website for Email)" wrote:
| > | > [...]
| > | > > Maybe "export" which is so broken and so useless and so abusive
| > | > > that its implementers have developed Stockholm syndrome during
| > | > > the long years that it took them to implement it?
| > | >
| > | > How is "export" useless and broken?
| > | >
| > | > Have you used it for any project? I find it very pleasant
| > | > to work with in practice.
| > |
| > | Haven't used export, and not because I didn't wanna.
| >
| > Very interesting.
|
| Well, it's very banal really. I've only had the chance to use compilers
| that don't implement export.

What I found very interesting is not the fact that you used compilers
that don't implement export. What I found interesting is that you
made such strong statements based on no actual experience, as you
confessed. I'm not saying it is bad. Just very interesting.
 

Gabriel Dos Reis

[...]

| Hmm. It was simpler back then; I think two phase name lookup is still
| extremely badly understood, as are the merits and otherwise of export.

I just wanted to point out that two-phase name lookup was being
discussed long before "export" came into the picture.
 

Walter

tom_usenet said:
The alternative is to simply not have any kind of separate compilation
of templates. So we could drop two phase name lookup, "template" and
"typename" disambiguators, and possibly some other nasty features,
basically moving to the basic template inclusion model implemented by
old Borland and Microsoft compilers (except without the bugs!)

Hmm. It was simpler back then; I think two phase name lookup is still
extremely badly understood, as are the merits and otherwise of export.
(How many people realise that point-of-instantiation lookup of
function names uses ADL only, and not ordinary lookup?)

How it works in D is pretty simple. The template arguments are looked up in
the context of the point of instantiation. The symbols inside the template
body are looked up in the context of the point of definition. It's how you'd
intuitively expect it to work, as it works analogously to ordinary function
calls.

I say "point" of instantiation/definition, but since D symbols can be
forward referenced, it's more accurate to say "scope" of
instantiation/definition. Isn't it odd that C++ class scopes can forward
reference member symbols, but that doesn't work at file scope?
 

Rob Williscroft

tom_usenet wrote in comp.lang.c++:
The alternative is to simply not have any kind of separate compilation
of templates.
So we could drop two phase name lookup,

2 phase lookup allows us to write determinate code (*). I really don't
see what it has to do with separate compilation; if anything, it's a
feature that makes export harder (without it, the exported code wouldn't
need to remember the declaration context).

*) By this I mean code that does what the author of the code
intended and is subject to the minimum possible reinterpretation
at the point of instantiation.
"template" and

Which "template"? Do you mean .template? If so, this is a
parsing problem: the parser prefers < to mean "less-than" over
"template-argument-list". It could be solved with some backtracking
(I think :), but we would lose: "if ( 0 < i > 0 ) ... ".

Or do you want code that is sometimes a member template and sometimes
a non-template member?
"typename" disambiguators,

Again AIUI typename is about writing determinate code.
and possibly some other nasty features,
basically moving to the basic template inclusion model implemented by
old Borland and Microsoft compilers (except without the bugs!)

Hmm. It was simpler back then; I think two phase name lookup is still
extremely badly understood, as are the merits and otherwise of export.

It was simpler, but also very confusing: the behaviour of your code
depended on when you instantiated your templates. Everything worked until
it stopped working, and then things just got strange (at least that's how
I remember it :).

(How many people realise that point-of-instantiation lookup of
function names uses ADL only, and not ordinary lookup?)

Not enough, but we haven't had many compilers that get this right,
so I think that will change.

Rob.
 

Gabriel Dos Reis

[...]

| I say "point" of instantiation/definition, but since D symbols can be
| forward referenced, it's more accurate to say "scope" of
| instantiation/definition. Isn't it odd that C++ class scopes can forward
| reference member symbols, but that doesn't work at file scope?

Why should I find that odd?
 

David Abrahams

Hyman Rosen said:
There have been environments in which the linker did whole-program
optimization, inlining routines out of object files into the call
sites. I think you are mistaken in concept when you try to peer under
the hood of the compiler to call some of its operations "true" and
some not. Step back and think of nothing but the point of view of
the user. Also consider that while extremely complex cases of
instantiation must be handled correctly by the compiler, most actual
templates that people write don't have weird name-capturing problems.

Doesn't that tend to argue that export isn't solving a real problem?
 

Walter

Gabriel Dos Reis said:
[...]

| I say "point" of instantiation/definition, but since D symbols can be
| forward referenced, it's more accurate to say "scope" of
| instantiation/definition. Isn't it odd that C++ class scopes can forward
| reference member symbols, but that doesn't work at file scope?

Why should I find that odd?

It's inconsistent. If I can do:

class Foo
{
    int abc() { return x; } // fwd reference ok
    int x;
};

why can't I do in C++:

int abc() { return x; } // error, x is undefined
int x;

?? (You can do that in D.) Furthermore, if forward references worked at file
scope, some of the confusing arcana of template instantiation lookup rules
would be unnecessary.
 

Paul Mensonides

Andrei said:
Now it only remains for me to convince you that that's a disadvantage
:o).

I'll rephrase slightly. The preprocessor (macro expansion in particular) is a
wholly different language. The primitives used are low-level library elements
used to implement the solution directly. It is possible to make the resulting
syntax cleaner by using higher-level constructs.

That said, along some lines I agree that having a different language is a
disadvantage. Along others I don't. A different language exercises the brain
and promotes new ways of doing things.
I disagree that it's only syntactic cleanliness. Lack of syntactic
cleanliness is the CHAOS_PP_ prefix that you need to prepend to most of your
library's symbols. But let me pull the code again:

#define REPEAT(count, macro, data) \
    REPEAT_S(CHAOS_PP_STATE(), count, macro, data) \
    /**/
#define REPEAT_S(s, count, macro, data) \
    REPEAT_I( \
        CHAOS_PP_OBSTRUCT(), CHAOS_PP_NEXT(s), \
        count, macro, data \
    ) \
    /**/
#define REPEAT_INDIRECT() REPEAT_I
#define REPEAT_I(_, s, count, macro, data) \
    CHAOS_PP_WHEN _(count)( \
        CHAOS_PP_EXPR_S _(s)(REPEAT_INDIRECT _()( \
            CHAOS_PP_OBSTRUCT _(), CHAOS_PP_NEXT(s), \
            CHAOS_PP_DEC(count), macro, data \
        )) \
        macro _(s, CHAOS_PP_DEC(count), data) \
    ) \
    /**/

As far as I understand, REPEAT, REPEAT_S, REPEAT_INDIRECT, REPEAT_I,
and the out-of-sight CHAOS_PP_STATE, CHAOS_PP_OBSTRUCT,
CHAOS_PP_EXPR_S are dealing with the preprocessor alone and have zero
relevance to the task.

First, REPEAT and REPEAT_S are interfaces, one being lower-level than the other.
REPEAT_INDIRECT and REPEAT_I are implementation macros of the REPEAT interface.
REPEAT_INDIRECT isn't even necessary, I used it to be clearer. CHAOS_PP_STATE,
CHAOS_PP_OBSTRUCT, and CHAOS_PP_EXPR_S are primitives used to implement
bootstrapped recursion.
The others implement an idiom for looping that
I'm sure one can learn, but is far from familiar to a C++ programmer.

Yes, it is far from familiar--which is both good and bad.
To say that that's just a syntactic cleanliness thing is a bit of a
stretch IMHO. By the same argument, any Turing complete language will
do at the cost of "some" syntactic cleanliness.

For the most part, the difference is syntactic cleanliness. Without the
boilerplate required for recursion, the primary implementation macro becomes:

#define REPEAT_I(count, macro, data) \
    CHAOS_PP_WHEN(count)( \
        REPEAT_I(CHAOS_PP_DEC(count), macro, data) \
        macro(CHAOS_PP_DEC(count), data) \
    ) \
    /**/

Which is really not much different than any higher-order construct:

(defun repeat (count function data)
  (unless (zerop count)
    (repeat (1- count) function data)
    (funcall function (1- count) data)))

; e.g.
(repeat 10
        #'(lambda (count data)
            (unless (zerop count)
              (format t ", "))
            (format t "~a~a" data count))
        "class T")
I maintain my opinion that we're talking about more than syntactic
cleanliness here. I didn't say the preprocessor is "incapable" of
the task. But I do believe (and your code strengthened my belief)
that it is "inadequate". Now I looked on www.m-w.com and I saw that
adequate means "sufficient for a specific requirement" and "lawfully
and reasonably sufficient". I guess I meant it as a negation of the
last meaning, and even that is a bit too strong. Obviously the
preprocessor is "capable", because hey, there's the code, but it's
not, let me rephrase - very "fit" for the task.

The preprocessor is not designed for the task. Obviously it isn't ideal. The
difference is not just syntactic. There are also idioms that go with it. Even
so, those idioms are straightforward when you're familiar with the language and
become a relatively small amount of boilerplate and syntactic clutter.
Wouldn't it be nicer if you just had one mechanism (true recursion or
iteration) that does it all in one shot?

Yes, it would, but it isn't at the top of my list because I can already simulate
it in a plethora of ways. There are other things that I cannot do that are more
important to me.

Regards,
Paul Mensonides
 

Andrei Alexandrescu (See Website for Email)

Paul said:
That said, along some lines I agree that having a different language is a
disadvantage. Along others I don't. A different language exercises the brain
and promotes new ways of doing things.

I'd be the first to agree with that. But isn't it preferable to have
a good new language to start with and think up from that, instead of
having a poor new language that asks you to do little miracles to get
the most basic things done?
For the most part, the difference is syntactic cleanliness. Without the
boilerplate required for recursion, the primary implementation macro becomes:

#define REPEAT_I(count, macro, data) \
    CHAOS_PP_WHEN(count)( \
        REPEAT_I(CHAOS_PP_DEC(count), macro, data) \
        macro(CHAOS_PP_DEC(count), data) \
    ) \
    /**/

Well if one removes some boilerplate code, C can do virtuals and
templates. That doesn't prove a lot.

I saw only convoluted code in the URL that Dave forwarded. I asked for
a good example here on the Usenet. You gave me an example. I commented
on the example saying that it drowns in details. Now you're telling me
that "if you remove the details in which the example is drowning" it
doesn't drown in them anymore. Well ok :o).
The preprocessor is not designed for the task. Obviously it isn't ideal. The
difference is not just syntactic. There are also idioms that go with it. Even
so, those idioms are straightforward when you're familiar with the language and
become a relatively small amount of boilerplate and syntactic clutter.


Yes, it would, but it isn't at the top of my list because I can already simulate
it in a plethora of ways. There are other things that I cannot do that are more
important to me.

That's reasonable. I just tried to give that as an example. Given that
there *are* things that you cannot do, I was hoping to increase your
(and others') motivation to look into ideas for a new preprocessor.


Andrei
 

llewelly

Hyman Rosen said:
Jerry Coffin wrote:
[snip]
I'm not on the committee myself, but I've certainly conversed with a
number of committee members, and ALL of them I've talked to have
admitted that export has turned out to be substantially more difficult
to implement than was expected.

And yet, they shot down Herb's proposal to remove it, 28 in favor of
keeping it, 8 against. (See
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2003/n1459.html)

Those in favor of keeping it included major implementors that don't
yet support it.
So far, few people who've really used export seem to have found
tremendous benefits to it either.

By contrast, nobody seems to have been terribly surprised by the
amount of work it has taken to implement exception handling, and most
people seem to believe that its benefits justify the work.

The area where exception handling may have surprised some people
and/or had unjustified costs is not in its implementation, but the
burden for exception safety that's placed on the user. Though few
people seem to be aware of it (yet), export has a similar effect -- it
affects name lookup in ways most people don't seem to expect, and can
substantially increase the burden on the user.

I think most of those effects come not from export but from two-phase
lookup.
So, if you'd prefer, I'd rephrase the question: as I recall, it's
claimed that EDG put something like three man-years of labor into
implementing export, and that's not all it takes to implement it
either. So far even if we take ALL the users into account, I've yet to
see an indication that anybody has saved three man-years of labor
because export was there.
[snip]

If export took 1095 days to implement, and there are 10000 users of
export, each of them need only save one hour, and export is a net
win for the community.

I don't know if there are 10000 users of export, but one thing people
seem to keep ignoring in all this cost/benefit discussion is the
sheer size of the C++ community; a feature can be extraordinarily
expensive for implementors, and yet be a net win even if it is only
a small savings for each individual user.
 

Paul Mensonides

"Andrei Alexandrescu (See Website for Email)" wrote:
I'd be the first to agree with that. But isn't it preferable to have
a good new language to start with and think up from that, instead of
having a poor new language that asks you to do little miracles to get
the most basic things done?

From an immediate productivity standpoint, yes. However, there are a lot of
ways that people think about code (in C++ for example) explicitly because of a
lack of direct language facility. Look at template metaprogramming for example
or SFINAE manipulation. Those things could be implemented better with direct
support from the language, but something beyond that or something completely
different won't be. Because of the kind of thinking that those existing
solutions engender, those other things may not be unreachable. Removing
boilerplate or indirect hacks by adding language features is a neverending
story. Granted, something like recursion is very basic, but then, there are
things that can be done because of the lack of recursion (e.g. I use it to
compare identifiers among other things). As I'm sure that you understand, doing
much with little is also satisfying in its own right. Besides, the "little
miracles" required can be hidden behind a library interface, as Chaos does. I
told you that that was basically doing it by hand. Here's a more high-level
variant:

#define TYPELIST(...) \
    CHAOS_PP_EXPR(CHAOS_PP_TUPLE_FOLD_RIGHT( \
        TYPELIST_OP, (__VA_ARGS__), Loki::NilType \
    )) \
    /**/
#define TYPELIST_OP(s, type, ...) \
    Loki::Typelist<CHAOS_PP_DECODE(type), __VA_ARGS__> \
    /**/

Though there is some, there isn't a lot of boilerplate here. Note also that
this itself is a library interface, and the ultimate boilerplate clutter is all
but gone when users use the interface. The only relic is that you have to
parenthesize types that contain open commas--which is insignificant.

TYPELIST(int, double, char)

The fundamental point is this: the implementation, along with reusable
library abstractions, regardless of boilerplate, requires only about ten
lines of code and completely replaces at least fifty lines of really ugly
repetition (i.e. the TYPELIST_x macros). Not only does it replace that
repetition, it expands the upper limit from fifty types to about five
thousand types--effectively removing any impact from the limit.
Well if one removes some boilerplate code, C can do virtuals and
templates. That doesn't prove a lot.

No, it doesn't, nor is it meant to. I'm merely pointing out that the
boilerplate is simple when you know what you are doing. It doesn't even get
close to "drowning in details".
I saw only convoluted code in the URL that Dave forwarded.

Which URL was that? If Boost, then the code is drowning in workarounds. If
Chaos, then much of the code you saw is doing much more advanced things than
users need to ever see or do.
I asked for
a good example here on the Usenet. You gave me an example. I commented
on the example saying that it drowns in details. Now you're telling me
that "if you remove the details in which the example is drowning" it
doesn't drown in them anymore. Well ok :eek:).

What I was saying is that the details it "drowns in", as you put it, are
nothing but boilerplate. Boilerplate is basically syntactic clutter that you
relatively easily learn to ignore when reading and writing code.
That's reasonable. I just tried to give that as an example. Given that
there *are* things that you cannot do, I was hoping to increase your
(and others') motivation to look into ideas for a new preprocessor.

To be more accurate, there is no code that cannot be generated. It is only a
question of how clean it is and whether it is clean enough to be an acceptable
solution.

As far as a new preprocessor is concerned, I just don't think that it will
happen. I do like the idea of a new kind of macro that can be recursive. That
is the main limitation preventing your imaginary sample syntax. Lack of
backslashes or overloading is insignificant compared to that. Even so, the
ability to process tokens individually, regardless of the type of preprocessing
token, is far more important than any of those, because with that ability you
can make interpreters for any syntax that you like--including advanced
domain-specific languages. (For example, consider a parser generator that
operates directly on grammar productions rather than encoded grammar
productions.) Such an ability would promote C++ to a near intentional
environment.

Regards,
Paul Mensonides
 

tom_usenet

Rob Williscroft wrote in comp.lang.c++:

2 phase lookup allows us to write determinate code (*). I really don't
see what it has to do with separate compilation; if anything, it's a
feature that makes export harder (without it, the exported code wouldn't
need to remember the declaration context).
*) By this I mean code that does what the author of the code
intended and is subject to the minimum possible reinterpretation
at the point of instantiation.

It is necessary for export though.

int helper(); // internal to definition.cpp

export template<class T>
void f()
{
    helper(); // want to lookup in definition context!
}

That doesn't work without two-phase name lookup. I don't think two
phase name lookup is vital otherwise, since if the inclusion model is
used, names used in the template definitions can be made visible
simply by putting the definitions before the point of instantiation
(as is usually the case anyway).

The determinate code is a secondary issue, and I think a rare problem;
impl namespaces are generally employed to make sure internal template
gubbins won't be replaced by stuff from the point of instantiation.
Which "template"? Do you mean .template? If so, this is a
parsing problem: the parser prefers < to mean "less-than" over
"template-argument-list". It could be solved with some backtracking
(I think :), but we would lose: "if ( 0 < i > 0 ) ... ".

Or do you want code that is sometimes a member template and sometimes
a non-template member?


Again AIUI typename is about writing determinate code.

It's also about allowing the compiler to pre-parse and syntax check
templates even if they aren't instantiated. I don't think it is a
common problem that someone passes something that evaluates to a
static member where a type was expected! template and typename are
annoying at best, and give people new to templates unnecessary (if
export isn't used) headaches.
It was simpler, but also very confusing: the behaviour of your code
depended on when you instantiated your templates. Everything worked until
it stopped working, and then things just got strange (at least that's how
I remember it :).

This is the case even with two phase lookup, although I agree that
there is slightly less freedom for the template to change meaning at
instantiation time. But I really don't think that this was a common
cause of problems, compared to the errors people get through two-phase
name lookup related issues, was it?

Tom
 

Gabriel Dos Reis

(e-mail address removed) (Jerry Coffin) writes:

[...]

| So, if you'd prefer, I'd rephrase the question: as I recall, it's
| claimed that EDG put something like three man-years of labor into
| implementing export, and that's not all it takes to implement it
| either.

But during the period EDG implemented export, they also implemented a
complete Java front-end. It is not as though implementing export was
the only thing they were doing.
 

kanze

How it works in D is pretty simple. The template arguments are looked
up in the context of the point of instantiation. The symbols inside
the template body are looked up in the context of the point of
definition. It's how you'd intuitively expect it to work, as it works
analogously to ordinary function calls.

I'm not sure I understand this. Do you mean that in something like:

template< typename T >
void
f( T const& t )
{
g( t ) ;
}

g will be looked up in the context of the template definition, where the
actual type of its parameter is not known? And if so, what about:

template< typename Base >
class Derived : public Base
{
public:
    void f()
    {
        this->g() ;
    }
} ;

?

I'd say that templates must have the concept of dependent names, which
are looked up at the point of instantiation.

When people argue against two-phase lookup in C++, they are saying first
of all that all names should be considered dependent. And of course,
that dependent name lookup should work exactly like any other name
lookup, and not use special rules.
 

Jerry Coffin

[ ... ]
And yet, they shot down Herb's proposal to remove it, 28 in favor of
keeping it, 8 against.

That's not too surprising, at least to me. First of all, removing a
keyword, feature, etc., from a language is a major step, and the
majority of the committee would have to be convinced that there was a
_major_ benefit from doing so before it would pass.

The reality is that most of them clearly consider the current
situation perfectly acceptable: the standard requires export, but
virtually everybody ignores the requirement.
Those in favor of keeping it included major implementors that don't
yet support it.

That's not a major surprise, at least to me. It appears to me that
compiler vendors mostly fall into two camps: those who have already
implemented export, and those who have no plan to do so.

Those who've already implemented export are obviously motivated to
keep it.

Those who haven't mostly don't seem to care and have no plans to
implement it anyway.

The only vendors to whom it would be a major issue would be those who
have not implemented it, but figure they'll have to do so if it
remains in the standard. The vote more or less confirms my opinion
that this group is quite small.

The cost isn't primarily to the vendors -- it's to the users. The big
problem is that export is just the beginning of the proverbial
slippery slope. If full compliance appears achievable, most vendors
will try to achieve it, even if it means implementing a few things
they don't really value.

If full compliance appears unachievable, or at least totally
unrealistic, then they're left to their own judgement about what
features to leave out. In this case, those last few features they'd
have implemented for full compliance are likely to be left out.

The result is that for most people, not only is export itself
unusable, but (if they care at all about portability) quite a few
other features are rendered unusable as well.

[ ... ]
I think most of those effects come not from export but from two-phase
lookup.

I'd agree, to some extent -- it just happens that in an inclusion
model, the effects of two-phase name lookup seem relatively natural,
but in an export model they start to seem quite unnatural.

[ ... ]
If export took 1095 days to implement, and there are 10000 users of
export, each of them need only save one hour, and export is a net
win for the community.

First of all, I think this model of the cost is wrong to start with
(about which, see below). Second, even if the model was correct, the
numbers would still almost certainly be wrong: it assumes that the
people who write C++ compilers (specifically those who implement
export) are merely average C++ programmers.

I suspect only a small number of the very best programmers are capable
of implementing export at all. This means considerably _more_ time
needs to be saved than expended to reach the break-even point.
I don't know if there are 10000 users of export, but one thing people
seem to keep ignoring in all this cost/benefit discussion is the
sheer size of the C++ community; a feature can be extraordinarily
expensive for implementors, and yet be a net win even if it is only
a small savings for each individual user.

Not really -- to the user, the real cost of export isn't directly
measured in the number of hours it took to implement. The real cost is
the other features that could have been implemented with the same
effort.

As such, for export to work out as a net benefit to the user, we have
to assume that the compiler is close enough to perfect otherwise that
implementing export is the single most efficient use of the
implementors' time.

I doubt that's the case right now, and I don't think I can predict
that it will ever be the case either.
 

Walter

llewelly said:
I don't know if there are 10000 users of export, but one thing people
seem to keep ignoring in all this cost/benefit discussion is the
sheer size of the C++ community; a feature can be extraordinarily
expensive for implementors, and yet be a net win even if it is only
a small savings for each individual user.

In determining the benefit to users, one must also consider the improvements
to the compiler that are *not done* because of the heavy diversion of
resources to implementing export. A 2 to 3 man-year implementation effort
should realistically result in a killer feature to be worth it.

Is export really the only improvement you want out of your existing C++
compiler?
 

Andrei Alexandrescu \(See Website for Email\)

[snip closing points that I agree with]

In an attempt to try to lure you into a discussion on would-be nice
features, here's a snippet from an email exchanged with a friend. It has to
do with the preprocessor distinguishing among tokens and more complex
expressions:

In my vision, a good macro system understands not only tokens, but also
other grammar nonterminals (identifiers, expressions, if-statements,
function-call-expressions, ...)

For example, let's think of writing a nice "min" macro. It should avoid
double evaluation by distinguishing atomic tokens (identifiers or integral
constants) from other expressions:

$define min(token a, token b) {
    (a < b ? a : b)
}

$define min(a, b) {
    min_fun(a, b)
}

Next we think, how about accepting any number of arguments. So we write:

$define min(token a, b $rest c) {
    min(b, min(a, c))
}

$define min(a, b $rest c) {
    min(a, min(b, c))
}

The code tries to find the most clever combination of inline operators ?:
and function calls to accommodate any number of arguments. In doing so, it
occasionally moves an identifier around in the hope of catching as many
identifier pairs as possible. That's an example of a nice syntactic
transformation.


Andrei
 

Hyman Rosen

Walter said:
In determining the benefit to users, one must also consider the improvements
to the compiler that are *not done* because of the heavy diversion of
resources to implementing export.

And pray tell, what massive improvements did we see from the vendors who
didn't implement export?
 

Gabriel Dos Reis

[...]

| For example, let's think of writing a nice "min" macro. It should avoid
| double evaluation by distinguishing atomic tokens (identifiers or integral
| constants) from other expressions:
|
| $define min(token a, token b) {
| (a < b ? a : b)
| }

By the end of the day, you may rediscover C++ templates... ;-p
 
