David said:
Well, I think the conversion to 0 _is_ bad. Given that you can use
#ifdef or #if defined() for things such as:
#ifdef __cplusplus
how can the implicit conversion to 0 of an undefined symbol be a good
thing? Why is it better than issuing an error?
Because it is a "reasonable default." Reasonable defaults make code less
verbose. This happens with templates also:
template<class T> void f(int reactorType)
{
    // No definition of REACTOR_NEW_MODEL given
    if (reactorType == T::REACTOR_NEW_MODEL)
    {
        // ...
    }
}
The compiler will pass this with no problem, even though it still parses the
expression. The reasonable default here is "non-type". The point is that the
language has to deal with unknown names in various places and has to make
assumptions about what they mean.
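As a rough sketch of the same default seen from the other direction (the member
name value_type is made up for illustration), you have to override the
"non-type" assumption explicitly when you actually mean a type:
template<class T> void g()
{
    // Parsed as a multiplication of two non-types, because the
    // dependent name T::value_type defaults to "non-type":
    //
    //     T::value_type * p;
    //
    // To declare a pointer instead, the default must be overridden:
    typename T::value_type* p = 0;
}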
That is just the way it is. You know what the behavior is, so it is up to you
to use the language in safe ways. C and C++ certainly don't protect you from
unsafe usage in many areas; why should they do so here?
If you changed the behavior to an error, how would you do this in a non-verbose
way:
# if !__cplusplus && __STDC_VERSION__ >= 199901L
You'd have to do something really annoying because you cannot use any
conditional test that uses the name outside the defined operator. You can't
even do this:
#if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
...because that constitutes an error under your model if __STDC_VERSION__ is not
defined. You'd have to separate the test for definition from the conditional
expression:
# if !defined __cplusplus && defined __STDC_VERSION__
#    if __STDC_VERSION__ >= 199901L
#        // 1
#    else
#        // 2
#    endif
# else
#    // 2
# endif
...and that is a code doubler for point 2.
If you changed the behavior to expanding to nil instead of 0, you'd have silent
changes in other ways. You'd also end up seeing a lot of "hacks" like this:
# if !(__cplusplus+0) && (__STDC_VERSION__+0) >= 199901L
...in order to simulate the common scenario that is already built into the
preprocessor.
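For concreteness, a sketch of how that hack would play out under the
hypothetical nil-expansion model:
# if !(__cplusplus+0)
    // __cplusplus undefined:        expands to !(+0), i.e. !0 -> true
    // __cplusplus == 199711L:       !(199711L+0)           -> false
# endif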
I agree. That's why I'd like it to work _safely_.
It does work safely if used correctly. I have said it already: the #if and
#elif directives are not designed to implicitly perform the kind of
verification that you want--because that kind of verification, if done by
default, is downright annoying.
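If you do want that verification in a particular place, you can ask for it
explicitly. A minimal sketch, reusing the REACTOR_TYPE name from above:
# if !defined(REACTOR_TYPE)
#    error REACTOR_TYPE is not defined (missing include?)
# endif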
Further, the root problem here is 1) forgetting to include a file, or 2) a
design error. Assuming that it is just a case of forgetting to include the file
that defines the symbols, there are many ways in which a program can silently
change meaning in C++ by not including a file (e.g. silently choosing different
function overloads or different template specializations).
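As a sketch of that kind of silent change (the file and function names are made
up):
// reactor.h declares:     void configure(long);
// reactor_new.h declares: void configure(int);

#include "reactor.h"
// #include "reactor_new.h"   // forgotten include

void client()
{
    configure(1);   // calls configure(long); if reactor_new.h were
                    // included, this would silently call
                    // configure(int) instead -- no diagnostic
}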
Yes, but I want to do it safely. I do not want the outcome of an #if
to be one of these two possibilities:
1. The result of the expression of previously defined symbols.
It is totally ill-conceived. You can do what you want reasonably, but you
cannot do what it already does reasonably. You already have the option to do
what you want:
#if defined(REACTOR_TYPE) \
    && defined(REACTOR_NEW_MODEL) \
    && REACTOR_TYPE == REACTOR_NEW_MODEL
You can't go back the other way.
2. A programmer's mistake in forgetting to include the defined
symbols.
This is inherently unsafe.
No, it isn't _inherently_ unsafe. It can be unsafe in certain contexts, and you
have to be aware of that when you write code. However, the alternative is much
worse. You can simulate what you want with a small amount of code; you cannot
simulate what it already does with a small amount of code.
The possibility of no. 2 is the reason
that C++ insists on all function definitions being present and that
there is a suitable match for every argument. Does no one else here
think that this is a problem?
C++ does not insist that every function declaration you've written across a
group of files be present at each overload resolution--which can cause silent
differences in which overload is selected, etc.
Regards,
Paul Mensonides