Index a #define string


Paul Mensonides

Ioannis said:
And again, I would rather see those applications done with non-macro
code if possible.

I think you are missing what I'm getting at. I'm not talking about something
trivial like min/max. I'm also not talking about something that has an obvious
and easy solution as either a macro or a template (or whatever).

As an example, it is certainly possible to define a closure mechanism by
manually (i.e. copy-and-paste) writing a bunch of overloads/specializations with
varying arities--say up to 20 arguments as a maximum. The point is that doing
that is ridiculous when the preprocessor provides the facilities needed to do
that for you. When you make 20 specializations, you introduce 20 (actually
probably some multiple of 20) maintenance points into your source. A slight
step up is to use macros purely to reduce that burden in an isolated way, where
you factor out the things that change into macro arguments (changing arity is
not a good example here without variadic macros). A significant step up, OTOH,
is to generalize the concept of varying repetitions, so that it can be used in
many other places as well.

What it comes down to is that, fundamentally, you aren't doing anything with
macros that you can't directly write out. So, it isn't a question of whether or
not you can do something without macros (there are only two major categories of
things that you can only do with macros--alternate source blocks in code
subjected to more than one environment and include guards). Even the
stringizing operator (#) is unnecessary for assertions--you can simply rewrite
the expression in quotes. The question then becomes: where is it *better* to use
the preprocessor and where is it not? In some cases, the answer is simple; in
others it is not, but in no cases can you apply the generalization "avoid macros
as much as possible" and expect to get the best possible solution in all cases.
It is simply too general. As I said before, you need to break that
generalization down into constituent elements that are actually worthwhile.

Regards,
Paul Mensonides
 

Ioannis Vranos

Paul Mensonides said:
You are certainly correct regarding the STL and sequences, etc., and their
utility in general. Those things are not lost on me. However, consider, for
example, that the standard library has a notion of an "adaptable" function
object, which is one that has the appropriate typedefs. In particular, pointers
to functions and references to functions are not, according to the standard
library, adaptable. In reality, they are, but the implementation requires
repetition--a great deal of it.


Before getting into details, I would say that if it is not possible to do it
otherwise, I agree on the use of macros.

Now to the details. I did not understand what you meant by "adaptable". Please
give an example:






Ioannis Vranos
 

Ioannis Vranos

Paul Mensonides said:
I pointed to examples used in real practical code, which I thought would be
better examples of utility than toy examples. From your mention of Spirit,
which is indeed a great library, I thought that examination of Spirit itself,
given your respect for it, would be more likely to alter your perception than
anything that I might say here. That is why I refrained from giving toy
examples, and why I pointed to other usages. It is always easy to provide toy
examples of something, but not always easy to prove their viability in practice.
As a preprocessor metaprogramming expert, one of very few, I'm constantly
having to deal with this kind of blind adherence (not specifically directed at
you, but in general), and sometimes it wears on me and I get frustrated.


Still I can't understand. If you had the choice between doing something with
templates and doing it with macros, would you choose macros?






Ioannis Vranos
 

Ioannis Vranos

Paul Mensonides said:
No, not for constants. Use all caps identifiers for one thing and one thing
only--macro names. Constants, which I assume you to mean constant variables or
enumerators, should definitely *not* be all caps--otherwise you just reintroduce
the likelihood of a name clash.


And now we are at the beginning of YARW (yet another "religious" war). I
define all my constants in upper case, macros or not, so that it is evident
in the code that I am dealing with a constant.


E.g.:

const float MAXDAYS = 40;

// ...

vector<int> amounts(MAXDAYS);

// ...






Ioannis Vranos
 

Paul Mensonides

Ioannis said:
Before getting into details, I would say that if it is not possible to do
it otherwise, I agree on the use of macros.

Now to the details. I did not understand what you meant by "adaptable".
Please give an example:

An adaptable function object is one that has nested typedefs that say what the
return type and parameter types are. They are called "adaptable" because other
function objects can put them together or bind some of the arguments to produce
a new function object. An example might be that you have some binary (i.e. two-
parameter) function object, and you also have some other template that takes a
binary function object and a value that it binds to one of the two parameters.
As a result, it produces a new unary function object. The standard library
already has several of these facilities. Regarding the above, a
pointer or reference to function does not have the prerequisite typedefs to do
this, but with repetition, those types can be deduced and extracted from the
function type in order to achieve this kind of higher-order functionality even
without the typedefs.

Regards,
Paul Mensonides

[As an aside, much of generic programming revolves around the proper use of
names. As such, having different names for similar things (such as
"first_argument_type" and "second_argument_type") does not scale well. Instead,
it is better to have a unified interface that is parameterized (maybe something
like "argument_type<1>::type" instead of "first_argument_type" and
"argument_type<2>::type" instead of "second_argument_type"). That scales better
on a variety of fronts.]
 

Ioannis Vranos

Old Wolf said:
The main objection raised is what to do with the following:
int a;
short b;
max(a, b); // your template ver. won't compile



template<class T>
inline const T& max(const T& a, const T& b) { return a > b ? a : b; }


int main()
{
    int a = 1;
    short b = 2;

    max<int>(a, b);
}






Ioannis Vranos
 

Paul Mensonides

Ioannis said:
Still I can't understand. If you had the choice between doing something
with templates and doing it with macros, would you choose macros?

I would evaluate the specific circumstance to determine which was the better
choice. Obviously, experience with both helps in this regard and makes this
process much quicker. If I could make the solution with macros that had all the
same benefits as templates but with drastically less physical code, I would
definitely use macros. What is more often the case, however, is a mixture of
templates and macros (e.g. macros that produce templates, etc.).

(IMO, all of the most interesting things regarding metaprogramming of any kind
happen right at the border between "metaprogramming" and "programming". In C++,
there is a layered model that is more or less like this: macro code -> template
code -> runtime code. In each case, I find the most interesting solutions being
the ones that employ more than one level (or all three) of this model
simultaneously.)

Regards,
Paul Mensonides
 

Ioannis Vranos

Paul Mensonides said:
An adaptable function object is one that has nested typedefs that say what the
return type and parameter types are. They are called "adaptable" because other
function objects can put them together or bind some of the arguments to produce
a new function object. An example might be that you have some binary (i.e. two-
parameter) function object, and you also have some other template that takes a
binary function object and a value that it binds to one of the two parameters.


You are talking about adapters and binders.


As a result, it produces a new unary function object.



Yes, you can define this stuff with templates.


The standard
library already has several of these facilities. Regarding the above, a
pointer or reference to function does not have the prerequisite typedefs to do
this, but with repetition, those types can be deduced and extracted from the
function type in order to achieve this kind of higher-order functionality even
without the typedefs.


I am not sure I am following you on this. Can you give a small, concrete
example?






Ioannis Vranos
 

Paul Mensonides

Ioannis said:
And now we are at the beginning of YARW (yet another "religious"
war). I define all my constants in upper case, macros or not, so that
it is evident in the code that I am dealing with a constant.

This isn't a religious issue. It is a practical one. Macro names should be all
caps to distinguish them from everything else--for a very good reason. When you
use all caps for constants, you introduce a loophole into the protection
techniques that would otherwise never fail (at least not silently). If it
weren't for the (valid) conventions regarding macro naming, I'd say that it is
merely a matter of subjective preference (such as "int *x" vs. "int* x"), but it
isn't. It is a practical issue.

Regards,
Paul Mensonides
 

Ioannis Vranos

Paul Mensonides said:
I think you are missing what I'm getting at. I'm not talking about something
trivial like min/max. I'm also not talking about something that has an obvious
and easy solution as either a macro or a template (or whatever).

As an example, it is certainly possible to define a closure mechanism by
manually (i.e. copy-and-paste) writing a bunch of overloads/specializations with
varying arities--say up to 20 arguments as a maximum. The point is that doing
that is ridiculous when the preprocessor provides the facilities needed to do
that for you. When you make 20 specializations, you introduce 20 (actually
probably some multiple of 20) maintenance points into your source. A slight
step up is to use macros purely to reduce that burden in an isolated way, where
you factor out the things that change into macro arguments (changing arity is
not a good example here without variadic macros). A significant step up, OTOH,
is to generalize the concept of varying repetitions, so that it can be used in
many other places as well.


We are discussing this too generally. Also, void somefun(int a=0, int b=0, ..., int z=0);
is possible.


What it comes down to is that, fundamentally, you aren't doing anything with
macros that you can't directly write out. So, it isn't a question of whether or
not you can do something without macros (there are only two major categories of
things that you can only do with macros--alternate source blocks in code
subjected to more than one environment and include guards). Even the
stringizing operator (#) is unnecessary for assertions--you can simply rewrite
the expression in quotes. The question then becomes: where is it *better* to use
the preprocessor and where is it not? In some cases, the answer is simple; in
others it is not, but in no cases can you apply the generalization "avoid macros
as much as possible" and expect to get the best possible solution in all cases.
It is simply too general. As I said before, you need to break that
generalization down into constituent elements that are actually
worthwhile.



OK, I can rephrase it to "Do not use macros unless it is unavoidable, or
their use yields significant benefits".






Ioannis Vranos
 

Ioannis Vranos

Paul Mensonides said:
This isn't a religious issue. It is a practical one. Macro names should be all
caps to distinguish them from everything else--for a very good reason. When you
use all caps for constants, you introduce a loophole into the protection
techniques that would otherwise never fail (at least not silently).


Well, let's get into this war anyway. :) First of all, I do not usually use
macros in my daily code, but that's another matter. Standard library macros
like assert() are in lowercase; only constants like NULL are capitals. The
same convention applies to both macros and non-macros. Only constants should
be uppercase, and doing this improves the visibility of the code. After all,
whether a library facility is a macro or not should make no difference to the
end user, who does not have to know about the implementation details.






Ioannis Vranos
 

Paul Mensonides

Ioannis said:
I am not sure I am following you on this. Can you give a small,
concrete example?

Many STL functions require some user-defined operation, which can usually be a
regular pointer or reference to function. However, you cannot adapt them using
standard library facilities because they don't have the typedefs (obviously).
However, given a type that is a pointer or reference to function, you can easily
extract the types necessary to make it adaptable:

template<class> struct function_traits;

template<class R, class A, class B> struct function_traits<R (*)(A, B)> {
    // typedefs for R, A, and B:
    typedef R result_type;
    typedef A first_argument_type;
    typedef B second_argument_type;
};

It is similar to the iterator_traits class that works with pointers and
class/struct iterators. However, with something like the above, you have to
define a lot more specializations than just one or two if you want to have a
truly general solution--instead of just unary and binary--and the same goes for
the adaptors and compose* elements as well. Arities ranging from 0 to 2, while
useful, are not enough and are not general.

Regards,
Paul Mensonides
 

Paul Mensonides

Ioannis said:
Well, let's get into this war anyway. :) First of all, I do not usually
use macros in my daily code, but that's another matter. Standard library
macros like assert() are in lowercase; only constants like NULL are
capitals. The same convention applies to both macros and non-macros.

Those standard library macros come from C--where name clashing issues are not
nearly as significant as they are in C++ (at least as far as cutting across all
underlying language scoping mechanisms goes).
Only constants should be uppercase, and doing this improves the
visibility of the code.

I disagree; I think that it clutters it, and more importantly makes it less
obvious when macros are involved. However, that just goes to show the
subjectiveness of this sort of thing. A subjective choice is perfectly fine
when there is no solid objective contradiction, but in this case, there is.
After all, whether a library facility is a macro or not
should make no difference to the end user, who does not have to know
about the implementation details.

No, it definitely does make a difference--consider the min/max macro case for
example. One of the arguments is always evaluated twice. That is an important
distinction that is a product of its macro nature. That is why the C standard
has to make so many guarantees about that sort of thing: just about everything
in the standard library can be a macro so long as a function version is also
provided.

Regards,
Paul Mensonides
 

Paul Mensonides

Julie said:
What we _should_ say is:

The preprocessor, specifically including macro expansion, is a
language construct that should be fully understood prior to using in
production-quality code.

That's it -- it isn't good, it isn't bad, it just is.

Just so you don't feel left out, I think that's very well said.

Regards,
Paul Mensonides
 

Paul Mensonides

Ioannis said:
We discuss too general. Also void somefun(int a=0, int b=0, ..., int
z=0); is possible.

Not when dealing with higher-order programming--like closures, and it can be
significantly less efficient otherwise.
Ok, i can rephrase it to "Do not use macros unless it is unavoidable,
or their use yields significant benefits".

I can live with that. :)

Regards,
Paul Mensonides
 
