To bean or not to bean

Steven T. Hatton

E. Robert Tisdale said:
Steven T. Hatton wrote:

Yes. The invariant is trivial.
And I think that your understanding of invariance is consistent
with the way that Stroustrup uses the term.
Notice that, in the example above, there is no way to construct
an invalid SocialSecurityNumber and no assignment is defined
except the default assignment from another valid SocialSecurityNumber.

Ironically, your SSN class is very close to one I created while playing around
with developing an OO parser/validator for C++. My class is Identifier.

/***************************************************************************
* Copyright (C) 2004 by Steven T. Hatton *
* (e-mail address removed) *
* *
* This program is free software; you can redistribute it and/or modify *
* it under the terms of the GNU General Public License as published by *
* the Free Software Foundation; either version 2 of the License, or *
* (at your option) any later version. *
* *
* This program is distributed in the hope that it will be useful, *
* but WITHOUT ANY WARRANTY; without even the implied warranty of *
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the *
* GNU General Public License for more details. *
* *
* You should have received a copy of the GNU General Public License *
* along with this program; if not, write to the *
* Free Software Foundation, Inc., *
* 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. *
***************************************************************************/
#ifndef STH_CLASSBUILDERIDENTIFIER_H
#define STH_CLASSBUILDERIDENTIFIER_H

#include <string>
#include <cctype>

#include "InvalidIdentifier_Exception.h"

namespace sth
{
  namespace ClassBuilder
  {
    /**
      @author Steven T. Hatton
    */
    class Identifier
    {
    public:
      Identifier();
      Identifier(const std::string& value_)
          throw(InvalidIdentifier_Exception);
      ~Identifier();

      /**
       * Convenience function to support input validation.
       * @param c character to test
       * @return true if c may appear anywhere in an identifier
       */
      static bool is_valid_char(const char& c)
      {
        // Cast to unsigned char avoids undefined behavior for negative chars.
        return isalnum(static_cast<unsigned char>(c)) || c == '_';
      }

      /**
       * Convenience function to support input validation.
       * @param c character to test
       * @return true if c may begin an identifier
       */
      static bool is_valid_first_char(const char& c)
      {
        return Identifier::is_valid_char(c)
               && !isdigit(static_cast<unsigned char>(c));
      }

      static bool is_valid(const std::string& s);

      operator std::string()
      {
        return this->value;
      }

    private:
      std::string value;
    };
  }
}

#endif

#include "Identifier.h"

namespace sth
{
  namespace ClassBuilder
  {
    Identifier::Identifier()
    {}

    Identifier::Identifier(const std::string& value_)
        throw(InvalidIdentifier_Exception)
        : value(value_)
    {
      if(!is_valid(this->value))
      {
        throw InvalidIdentifier_Exception();
      }
    }

    Identifier::~Identifier()
    {}
  }
}

/*!
  \fn sth::ClassBuilder::Identifier::is_valid(const std::string& s)
*/
bool sth::ClassBuilder::Identifier::is_valid(const std::string& s)
{
  if((s.length() == 0) || !is_valid_first_char(s[0]))
  {
    return false;
  }
  for(size_t i = 1; i < s.length(); i++)
  {
    if(!is_valid_char(s[i]))  // was is_valid_char(s), which passed the whole string; test each character
    {
      return false;
    }
  }
  return true;
}


I finally realized what I am really questioning is the relationship between
things like property lists and the concept of invariants. I had already
come to the conclusion that it is reasonable for a class to have some
invariants which are preserved under a given set of operations. For
example, a rectangle should preserve its right angles and side lengths under
translations and rotations. But the length of the sides would likely be
adjustable under other operations. A subset of these operations could be
defined so that the ratios of the side lengths are preserved.

I don't know exactly what Stroustrup's position is on that aspect of
invariance. This is the article that really helped me understand what he
was getting at with invariants.

http://www.research.att.com/~bs/eh_brief.pdf

Consider this list of properties describing a simple box that displays text
with specified colors, size, shape and location, as well as some other
information about its role in a collection of other objects of this class:
bool is_leaf;
bool is_vertical;
std::string text;
RgbColor bg_color;
RgbColor text_color;
RgbColor edge_color;
double h;
double w;
double border_w;
double x;
double y;
int bg_z;
int text_z;
The data members were not chosen because I recognized an invariant and
identified them as essential in maintaining that invariant. They were
chosen because they are what I need in order to implement my design, and
they are all essential to the one aspect of the program which involves
presenting the computed results. The variables such as RgbColor type
members are designed to preserve their own invariants. And that invariance
is a result of using unsigned char for the member data. RgbColor is a
struct.

One thing I've been wondering about is the notion of distributed invariants.
That is, in the case of my property objects, it has become reasonable to
employ an observer pattern that involves notifying some objects that the
properties have changed. Looking at the data I have listed above, it seems
fairly orthogonal. Any one member can be changed without modifying the
others. The one exception is that the text position needs to be calculated
when any of the other geometric data is modified.

There is, however, a requirement that different objects in the system remain
synchronized with the state of the properties object. I'm wondering how
useful it is to extend the notion of class invariants to a notion of
distributed state with multiple participants.
 

Paul Mensonides

Perhaps you could simply provide a definition for the term. My attempting
to extract a definition from an example has the significant potential for
my arriving at a different definition than the one you intend.

Stringizing is a macro operator that converts a macro argument to a string
literal. E.g.

#define STR(x) #x
STR(abc) // "abc"
Now, back to the topic at hand. I changed the subject field in the heading
to reflect the fork this thread has taken. For the moment, forget any of
my own comments regarding the Cpp, and explain to the news group where
Bjarne Stroustrup is in error regarding the opinions expressed in the
following:

Sure. The main factors, by far, that have led to the lack of more sophisticated
development environments (in C++) are templates (or more specifically,
second-phase lookup) and (in C and C++) context-dependent parsing. For a tool
to do an analysis that is anywhere near complete, it has to do a nearly complete
parse of the language, *and* it has to be able to deal with intermediate stages
of construct completion (i.e. as you type in an IDE; it can't just bail like a
compiler can). The preprocessor is almost insignificant in comparison.

Regards,
Paul Mensonides
 

Steven T. Hatton

Paul said:
Stringizing is a macro operator that converts a macro argument to a string
literal. E.g.

#define STR(x) #x
STR(abc) // "abc"

And that buys you what? I don't see the profound usefulness of this trick.
Sure. The main factors, by far, that have led to the lack of more
sophisticated development environments (in C++) are templates (or more
specifically, second-phase lookup) and (in C and C++) context-dependent
parsing. For a tool to do an analysis that is anywhere near complete, it has
to do a nearly complete parse of the language, *and* it has to be able to
deal with intermediate stages of construct completion (i.e. as you type in
an IDE; it can't just bail like a compiler can). The preprocessor is almost
insignificant in comparison.

Regards,
Paul Mensonides

I've given this issue some consideration, and I've also observed the
behavior of KDevelop. I agree that templates throw the IDE a curve ball.
I suspect this is (in part) what Stroustrup was addressing when he
mentioned incremental compilation and a C++ interpreter. There are at
least two options for dealing with templates and edit-time error detection
and code completion. They are not mutually exclusive.

One is to present the 'raw' interface of the template to the user. That is,
you just parse the template and show the user code completion options in
the 'raw' form, e.g., if the user is creating a deque the following
completion options could be presented:
....
void push_front(const T& x);
void push_back(const T& x);
iterator insert(iterator position, const T& x);
void insert(iterator position, size_type n, const T& x);
....

Clearly some level of error detection could be implemented at that level as
well. For example, if the user attempted to invoke push_font(const T& x),
the checker could detect that error.

Your observation regarding two phase lookup, if I understand your meaning,
is relevant, but not a show-stopper. This relates back to my early comment
about KDevelop. I don't know how they do it, but they are providing code
completion for classes instantiated from templates. That requires that the
template code has been compiled. To that I say 'big deal'! It's that way
with JBuilder and Java. JBuilder balks at providing error detection and
code completion in some cases in which the code has not been compiled, and
it gets it wrong sometimes when the code has changed, and has not been
recompiled.

One might argue that the same could be done with the Cpp. My response is
that the Cpp is far less structured, and trying to predict what someone
might have done to his code with it is beyond what I would expect a
reasonable programmer to care to address.

This is a perfect example of how the CPP undermines the integrity of C++.
It's from the xerces C++ samples:
CreateDOMDocument.cpp
http://tinyurl.com/3wt4x

#include <xercesc/util/PlatformUtils.hpp>
#include <xercesc/util/XMLString.hpp>
#include <xercesc/dom/DOM.hpp>
#if defined(XERCES_NEW_IOSTREAMS)
#include <iostream>
#else
#include <iostream.h>
#endif

XERCES_CPP_NAMESPACE_USE
....

#define X(str) XStr(str).unicodeForm()

....

unsigned int elementCount = doc->getElementsByTagName(X("*"))->getLength();
XERCES_STD_QUALIFIER cout << "The tree just created contains: "
<< elementCount << " elements." << XERCES_STD_QUALIFIER endl;
....

Using JBuilder, and the Java counterpart, I can fly through XML coding.
It's a pleasure to work with. The above excerpt is a mangled and tangled
maze of inelegant kludgery. There are historical reasons for the code
being the way it is. But there is no justification for this kind of
anti-programming in any code written with the future in mind.

I pretty much figured out what all the macro mangling does in the above
sample. But by that time I was so disgusted, I lost interest in using the
library. There are countless examples of such anti-code in most of the C++
libraries I've seen.

I should be able to write "using xercesc::XMLString" and the compiler should
pull in what it needs, and ONLY WHAT IT NEEDS, to provide that class. I
should not need to #include a file that #includes a file that #includes a
file that provides the definition of the namespace macro, and another three
or more levels of copy and paste to get the bloody class declaration, and
then pray that it can find the corresponding definition.

For anybody who wants to tell me that the reason that C++ code is this hard
to understand is that C++ is more powerful, please explain to me why
IBM, the company that wrote the Xerces C++ code, is using the Java
counterpart to serve out IBM's latest online C++ language and library
documentation.

http://publib.boulder.ibm.com/infocenter/comphelp/index.jsp\
?topic=/com.ibm.vacpp7a.doc/language/ref/clrc00about_this_reference.htm

Years ago I read Kernighan and Ritchie's TCPL(2E). I've read all of
TC++PL(SE) except for the appendices, from which I've read selections. I
read the core language sections twice. I've also read much other material
on the language, and I've been writing a good deal of code. Additionally,
I've read The Java Programming Language, 3rd Edition, and have considerable
experience working with Java. From that foundation I have formed the
opinion, shared by the creator of C++, that the preprocessor undermines the
integrity of the core C++ programming language and is the main reason for
the comparative ease of use Java provides. The Cpp is not a strength, it is
a liability.
 

Paul Mensonides

Steven said:
And that buys you what? I don't see the profound usefulness of this
trick.

It buys you meaningful assertions without replicating code, among other things.
I've given this issue some consideration, and I've also observed the
behavior of KDevelop. I agree that templates throw the IDE a curve
ball.
I suspect this is (in part) what Stroustrup was addressing when he
mentioned incremental compilation and a C++ interpreter. There are at
least two options for dealing with templates and edit-time error
detection and code completion. They are not mutually exclusive.

One is to present the 'raw' interface of the template to the user.
That is, you just parse the template and show the user code
completion options in the 'raw' form, e.g., if the user is creating a
deque the following completion options could be presented:

The 'raw' interface of which specialization of the template? The point is that
a tool, to be at all effective in modern programming, has to do a near-full
parse of the code and semantic analysis. With C++ especially, that is *far*
from trivial.
Your observation regarding two phase lookup, if I understand your
meaning, is relevant, but not a show-stopper.

Actually, it pretty much is a show-stopper, more so than anything else. A tool
can trivially preprocess a file, but a tool cannot do any meaningful code
completion inside a template. E.g. what options might a tool give me at this
point:

template<class T> void f(T) {
    X<T>:: /* here */
}

The answer is "nothing" because it cannot know what T is, and therefore cannot
tell what specialization of X is chosen, nor can it even tell what X
specializations there might be. In generic programming, there is very little
code in a template body that is not dependent on template parameters. This is
second phase lookup (a very useful feature) at work. What it boils down to is
that code analysis such as completion is fundamentally useless inside template
code, and that is precisely the place where it would be most useful.
This relates back to my
early comment about KDevelop. I don't know how they do it, but they
are providing code completion for classes instantiated from
templates. That requires that the template code has been compiled.
To that I say 'big deal'!

Okay, I'll follow this line (even though "partially" compiling C++ as you type
would be incredibly expensive). If you say 'big deal' to that, then what is the
problem with the preprocessor? Tools can preprocess source code far more
easily than they can parse the underlying language itself.
It's that way with JBuilder and Java.
JBuilder balks at providing error detection and code completion in
some cases in which the code has not been compiled, and it gets it
wrong sometimes when the code has changed, and has not been
recompiled.

One might argue that the same could be done with the Cpp. My response
is that the Cpp is far less structured, and trying to predict what
someone might have done to his code with it, is beyond what I would
expect a reasonable programmer to care to address.

What on earth are you talking about? Cpp *is* well-structured, and more
importantly, it is a straight-line process. If a tool cares to look at the
code resulting from the preprocessor, it merely has to preprocess it. To
that I say 'big deal'.
This is a perfect example of how the CPP undermines the integrity of
C++.

No, it is an example of code that *may* be misusing the preprocessor. Even if
it is, so what? If I wanted to produce asinine and unreadable code, use of the
preprocessor is not required. In fact, it is possible in *any* language with
*any* feature-set.
It's from the xerces C++ samples:
CreateDOMDocument.cpp
http://tinyurl.com/3wt4x

#include <xercesc/util/PlatformUtils.hpp>
#include <xercesc/util/XMLString.hpp>
#include <xercesc/dom/DOM.hpp>
#if defined(XERCES_NEW_IOSTREAMS)
#include <iostream>
#else
#include <iostream.h>
#endif

XERCES_CPP_NAMESPACE_USE
...

#define X(str) XStr(str).unicodeForm()

...

unsigned int elementCount = doc->getElementsByTagName(X("*"))->getLength();
XERCES_STD_QUALIFIER cout << "The tree just created contains: "
<< elementCount << " elements." << XERCES_STD_QUALIFIER endl;
...

Using JBuilder, and the Java counterpart, I can fly through XML
coding. It's a pleasure to work with. The above excerpt is a mangled
and tangled maze of inelegant kludgery. There are historical reasons
for the code being the way it is. But there is no justification for
this kind of anti-programming in any code written with the future in
mind.

Actually, there is. Because the underlying language is so complex, there are
numerous bugs and missing features in every single existing C++ compiler (more
so for some than others). Even looking at the tiny snippet of code above, which
I know nothing about as a whole, I can tell that it is working around that exact
issue. In this example, the preprocessor merely makes it possible to do what
otherwise could not be done.
I pretty much figured out what all the macro mangling does in the
above sample. But by that time I was so disgusted, I lost interest
in using the library. There are countless examples of such anti-code
in most of the C++ libraries I've seen.

There are plenty of ways to misuse the preprocessor, just as there are plenty of
ways to misuse any language feature in any language. So what? Further, the
above code is 'anti-code' only in the sense that it is working around flaws in
the compiler, standard library, or isolating inherent platform-related
dependencies.
I should be able to write "using xercesc::XMLString" and the compiler
should pull in what it needs, and ONLY WHAT IT NEEDS, to provide that
class. I should not need the #include a file that #includes a file
that #includes a file that provides the definition of the namespace
macro, and another three or more levels of copy and paste to get the
bloody class declaration, and then pray that it can find the
corresponding definition.

What does this have to do with the C or C++ preprocessor? C++ does not have a
module system. You're talking about fundamental changes to the language, not
problems with the preprocessor. Further, there are things called "good
organization" and "good code structure" that make dealing with these issues
trivial (e.g. to use interface X include file Y and link to Z--well designed
headers can be blackboxes).
For anybody who wants to tell me that the reason that C++ code is
this hard to understand is that C++ is more powerful, please explain
to me why IBM-the company that wrote the Xerces C++ code-is using the
Java counterpart to serve out IBM's latest online C++ language and
library documentation.

Because they felt like it? Honestly, who cares? In my opinion, which is what
matters to me, Java sucks on so many levels it is unusable.

Regarding C++ (in general)... C++ is more powerful--that is unquestionable.
Whether all that power is necessary to accomplish some specific task is another
question altogether.
http://publib.boulder.ibm.com/infocenter/comphelp/index.jsp\
?topic=/com.ibm.vacpp7a.doc/language/ref/clrc00about_this_reference.htm

Years ago I read Kernighan and Ritchie's TCPL(2E). I've read all of
TC++PL(SE) except for the appendices, from which I've read
selections. I read the core language sections twice. I've also read
much other material on the language, and I've been writing a good
deal of code. Additionally, I've read The Java Programming language
3rd Edition, and have considerable experience working with Java.
From that foundation I have formed the opinion, shared by the creator
of C++,

So what? Bjarne is not the ultimate authority on what is good and bad.
that the preprocessor undermines the integrity of the core
C++ programming language and is the main reason for the comparative
ease of use Java provides. The Cpp is not a strength, it is a
liability.

Your opinion, which you are entitled to, is simply wrong. The preprocessor is
not to blame; the complexity of the language as a whole is. For an experienced
C++ programmer, using Java is like tying your hands behind your back. The
language (Java) not only promotes, but enforces bad design as a result of a
seriously lacking feature set--with the excuse that it is protecting you from
yourself. C++ doesn't make such decisions for us; instead it gives us the tools
to make abstractions that do it for us. In other words, it isn't the result of
arrogance propelled by limited vision.

Regards,
Paul Mensonides
 

Steven T. Hatton

Paul said:
Steven T. Hatton wrote:

It buys you meaningful assertions with replicating code among other
things.

Perhaps this is useful. I have never tried using assertions. When I read
about them in TC++PL(SE) it basically went: 'Check this assertion thing
out. Pretty cool, eh? They're macros. They suck!'

There are places where retaining some of the Cpp for the implementation to
use seems to make sense. For example the various __LINE__, __FILE__,
__DATE__, etc. are clearly worth having around so debuggers and other tools
can use them. They should not be considered part of the core language to
be used by application programmers.

The 'raw' interface of which specialization of the template?

That seems like an ambiguity that could be resolved by examining the
parameter when necessary.
The point is
that a tool, to be at all effective in modern programming, has to do near
full parse
of the code and semantic analysis. With C++ especially, that is *far*
from trivial.

I agree. However, the presence of the CPP complicates this issue to the
point where it seems to degenerate into an exercise in absurdity. I will
observe that many Java IDEs do this rather successfully.
Actually, it pretty much is a show-stopper, more so than anything else. A
tool can trivially preprocess a file, but a tool cannot do any meaningful
code
completion inside a template. E.g. what options might a tool give me at
this point:

template<class T> void f(T) {
X<T>:: /* here */
}

Whatever can be extracted from X. If there are specializations of X, then it
seems reasonable to provide the superset of options with an indication of
which are specific to a given specialization.
The answer is "nothing" because it cannot know what T is, and therefore
cannot tell what specialization of X is chosen, nor can it even tell what
X specializations there might be.

Why can't it tell what specializations of X exist?
In generic programming, there is very little
code in a template body that is not dependent on template parameters.
This is second phase lookup (a very useful feature) at work. What it
boils down to is that code analysis such as completion is fundamentally
useless inside template code, and that is precisely the place where it
would be most useful.

I'm not convinced that either of these assertions is correct. It may be
more difficult to provide meaningful code completion inside a template, but
I believe it is fundamentally possible to provide a significant amount of
information in that context. Furthermore, I don't accept the notion that
code completion inside a template is where it would be most useful. By
their nature, templates are abstractions, and their use implies a certain
amount of selective ignorance that would preclude your knowing the details
of how specific template parameters would affect the context.
Okay, I'll follow this line (even though "partially" compiling C++ as you
type would be incredibly expensive). If you say 'big deal' to that, then
what is the problem with the preprocessor? Tools can preprocess source code
far more easily than they can parse the underlying language itself.

Well, to some extent there is syntax checking going on with KDevelop. It's
not at the level I would like, but it continues to improve. As for the
cost of compiling C++, I'm not convinced that the preprocessor and the
compilation mechanisms it supports and encourages aren't a significant part
of the problem. The current approach seems rather monolithic. I suspect a
more compartmentalized strategy would prove far more efficient. I have to
admit that g++ can take an absurd amount of time to compile fairly simple
programs. My impression is that this is due to the recompilation of units
that invariably produce identical results.
What on earth are you talking about? Cpp *is* well-structured, and more
importantly, it is a straight-line process. If a tool cares to look at the
code resulting from the preprocessor, it merely has to preprocess it. To
that I say 'big deal'.

But now you are talking about something editing your code at the same time
you are, but not displaying the results. What I should have said is that
the result of using the Cpp is far less structured than the result of using
a programming language.
Actually, there is. Because the underlying language is so complex, there
are numerous bugs and missing features in every single existing C++
compiler (more
so for some than others). Even looking at the tiny snippet of code above,
which I know nothing about as a whole, I can tell that it is working
around that exact
issue. In this example, the preprocessor merely makes it possible to do
what otherwise could not be done.

I see no reason to try to support a compiler that doesn't understand
namespaces. What significant platform is restricted to using such a
compiler?
There are plenty of ways to misuse the preprocessor, just as there are
plenty of
ways to misuse any language feature in any language. So what? Further,
the above code is 'anti-code' only in the sense that it is working around
flaws in the compiler, standard library, or isolating inherent
platform-related dependencies.

As I said, there are historical reasons for the code to be that way. That is
far from the only place where people used the CPP to rewrite code that
should be straightforward C++. There are arguments for generating code in
ways that currently rely on the Cpp. I use them all the time.
http://doc.trolltech.com/3.3/moc.html
What does this have to do with the C or C++ preprocessor? C++ does not
have a module system. You're talking about fundamental changes to the
language, not problems with the preprocessor.

"I suspect my dislike for the preprocessor is well known. Cpp is essential
in C programming, and still important in conventional C++ implementations,
but it is a hack, and so are most of the techniques that rely on it.
*_It_has_been_my_long-term_aim_to_make_Cpp_redundant_*."
Further, there are things called "good
organization" and "good code structure" that make dealing with these
issues trivial (e.g. to use interface X include file Y and link to Z--well
designed headers can be blackboxes).

I try hard to get things right. There are some aspects of Xerces which do
show a better face than what I previously introduced:
http://cvs.apache.org/viewcvs.cgi/xml-xerces/c/src/xercesc/dom/

The separation of interface and implementation is textbook proper C++. I
still find the added level of complexity involved in using headers
unnecessary, and a significant burden. I'm currently working on a tool that
will mitigate this drudgery by treating the class as a unit, rather than a
composite.
Because they felt like it? Honestly, who cares? In my opinion, which is
what matters to me, Java sucks on so many levels it is unusable.

That's a silly statement. I've seen it used to do lots of useful things.
This may be a fairly useless program, but I believe it demonstrates that
Java is capable of supporting the development of fairly sophisticated
programs. It's my second Java3D project. It was pretty much a limbering-up
exercise.
http://baldur.globalsymmetry.com//projects/math-3d/living-basis.html
Regarding C++ (in general)... C++ is more powerful--that is
unquestionable.

What do you mean by powerful? It seems clear that major players in the
industry do not consider C++ to be appropriate for many major applications.
Whether all that power is necessary to accomplish some
specific task is another question altogether.


So what? Bjarne is not the ultimate authority on what is good and bad.

No, but I find it interesting that his thoughts on this matter seem to
reflect my own - mostly independently formed - assessment. I believe it is
also significant that he does hold such a strong opinion about the matter.

He may not agree with me here, and even if he does, he may be unwilling to
say as much for the sake of politeness. The way the standard headers are
used is a logical mess. It causes the programmer to waste valuable time
searching for things that should be immediately accessible by name. The
same is true of most other libraries as well. There is no need for this
lack of structure other than the simple fact that no one has been able to
move the mindset of some C++ programmers out of the Nixon era.
You're opinion, which you are entitled to, is simply wrong. The
preprocessor is
not to blame, the complexity of the language as a whole is. For an
experienced
C++ programmer, using Java is like tying your hands behind your back. The
language (Java) not only promotes, but enforces bad design as a result of
seriously lacking feature set--with the excuse that it is protecting you
from
yourself.

In the case of the CPP, it isn't so much me I want to be protected from.
It's the people who think it's a good idea to use it for anything beyond
simple conditional compilation, some debugging, and perhaps for #including
header files. I will admit that I still find the use of header files
bothersome. They represented one of the biggest obstacles I encountered
when first learning C++.
C++ doesn't make such decisions for us, instead it gives us the tools
to make abstractions that do it for us. In other words, it isn't the
result of arrogance propelled by limited vision.

Leaving some things to choice serves no useful purpose beyond pleasing
multiple constituents. There are places where a lack of rules is not
empowering, it is restricting. I hear driving in Rome is not what one could
properly call a civilized affair. Personally, I like the idea that people
stop at stoplights, use turn signals appropriately, stay in one lane under
normal circumstances, etc.
 

Paul Mensonides

Steven said:
Perhaps this is useful. I have never tried using assertions. When I
read about them in TC++PL(SE) it basically went. 'Check this
assertion thing out. Pretty cool, eh? They're macros. They suck!'

Assertions are invaluable tools.
There are places where retaining some of the Cpp for the
implementation to use seems to make sense. For example the various
__LINE__, __FILE__, __DATE__, etc. are clearly worth having around so
debuggers and other tools can use them.

Debuggers? Macros don't exist after compilation, so I'm not sure that __DATE__
and __TIME__ would have any useful meaning at all, and debuggers (if source
information is available) already know the line and file.
They should not be
considered part of the core language to be used by application
programmers.

An immediate example comes to mind. What if I write a program that, when
executed with a --version option, outputs a copyright notice, version number,
and the build date and time? (This is a fairly common scenario, BTW.)
That seems like an ambiguity that could be resolved by examining the
parameter when necessary.

Yes, but that means that the IDE has to be able to parse and semantically
analyze the entire source code--including doing overload resolution, partial
ordering, template declaration instantiation, etc.
I agree. However, the presence of the CPP complicates this issue to
the point where it seems to degenerate into an exercise in absurdity.

I think you have a serious misunderstanding of the preprocessor. How does CPP
complicate this issue?
I will observe that many Java IDEs do this rather successfully.

Parsing Java is quite a bit simpler than parsing C++.
Whatever can be extracted from X. If there are specializations of X
then it seems reasonable to provide the superset of options with an
indication of which are specific to a given specialization.

It doesn't know all of the specializations. As a general rule, it only knows
about a few general specializations.
Why can't it tell what specializations of X exist?

Because they might not exist yet.
I'm not convinced that either of these assertions is correct. It
may be more difficult to provide meaningful code completion inside a
template, but I believe it is fundamentally possible to provide a
significant amount of information in that context. Furthermore, I
don't accept the notion that code completion inside a template is
where it would be most useful. By their nature, templates are
abstractions, and their use implies a certain amount of selective
ignorance that would preclude your knowing the details of how
specific template parameters would affect the context.

That's true, but (as a general rule) well-designed template code is also the
most complex code. Code completion outside of template code, while useful, is
only a small benefit.
Well, to some extent there is syntax checking going on with KDevelop.
It's not at the level I would like, but it continues to improve. As
for the cost of compiling C++, I'm not convinced that the
preprocessor and the compilation mechanisms it supports and
encourages aren't a significant part of the problem.

Look, if a tool author is willing to fully parse the underlying language,
preprocessing the source as it does so is trivial in comparison. If you
disagree, tell me why.
But now you are talking about something editing your code at the same
time you are, but not displaying the results. What I should have
said is that the result of using the Cpp is far less structured than
the result of using a programming language.

No, I'm not. C and C++ code can be preprocessed as it is parsed into a syntax
tree in a single pass. It's not like when you type something, the IDE goes and
tries to find it in the source code--that would be *incredibly* inefficient. In
essence, it is already rewriting the code into an internal format that is
designed for algorithmic navigation that also discards information that it
doesn't care about.
I see no reason to try to support a compiler that doesn't understand
namespaces. What significant platform is restricted to using such a
compiler?

I wish it was that simple. In many companies, there is a lot of inertia from a
compiler version. I.e. it often takes years to upgrade to new compilers (if at
all)--simply because the time required to make older code compatible with the
new compiler can be massively expensive. Thus, new code gets written for older
compilers all the time.
As I said, there are historical reason for the code to be that way.

There are current reasons as well.
That is far from the only place where people used the CPP to
rewrite code that should be straightforward C++.

The preprocessor does not 'rewrite' code--it expands macros which are *part* of
the code. In doing so, it can reap major benefits in readability and
maintenance.
"I suspect my dislike for the preprocessor is well known. Cpp is
essential in C programming, and still important in conventional C++
implementations, but it is a hack, and so are most of the techniques
that rely on it.
*_It_has_been_my_long-term_aim_to_make_Cpp_redundant_*."

More than anything else, Bjarne hates #define, not #include, BTW. The
preprocessor is not a hack, it is an incredibly useful tool that can, like every
other tool, be misused.
I try hard to get things right. There are some aspects of Xerces
which do show a better face than what I previously introduced:
http://cvs.apache.org/viewcvs.cgi/xml-xerces/c/src/xercesc/dom/

The separation of interface and implementation is textbook proper
C++. I still find the added level of complexity involved in using
headers unnecessary, and a significant burden.

What complexity are you talking about exactly? Separation of interface and
implementation is a cornerstone of separate compilation--which is one of the
fundamental reasons that C++ is able to scale to very large projects without
dropping efficiency.
I'm currently working
on a tool that will mitigate this drudgery by treating the class as a
unit, rather than a composite.

What do you mean by 'composite', separation of interface and implementation?
That's a silly statement. I've seen it used to do lots of useful
things.

That's not exactly what I meant. It is unusable to me because I am accustomed
to a much more complete toolset. With Java, you have to jump through a lot of
hoops to get around language-imposed limitations. Granted, there is a measure
of safety in some things, but then it is immediately lost by lack of generics
(real generics, not half-ass generics). Languages can be loosely classified by
how much safety they enforce by default. C++ is at one end. It allows many
things that could be unsafe because it allows access to the "details". Other
languages are at the other end, such as Scheme or Haskell. Such languages can
yield higher productivity because they rely much more on compiler optimization
of the details. Both are valid strategies. Java (and it isn't just Java) is
smack in the middle of the two approaches, which ends up providing only a small
fraction of the benefits of either. If I want control of details (for whatever
reason) I'll use a language (like C++) that doesn't actively work against me
controlling those details. If I want a higher-level language where I can
largely ignore many of those details, I'll use a real higher-level language
(like Haskell).
This may be a fairly useless program, but I believe it
demonstrates that Java is capable of supporting the development of
fairly sophisticated programs.

I agree that Java is capable.
It's my second Java3d project. It
was pretty much a limbering up exercise.
http://baldur.globalsymmetry.com//projects/math-3d/living-basis.html


What do you mean by powerful?

I mean that it gives you access to lower-level details while simultaneously
giving you tools to create higher-level abstractions.
It seems clear that major players in
the industry do not consider C++ to be appropriate for many major
applications.

Ha. The opposite is true.
No, but I find it interesting that his thoughts on this matter seem to
reflect my own - mostly independently formed - assessment. I believe
it is also significant that he does hold such a strong opinion about
the matter.

There are other people, also actively involved in the design of C++ and just as
authoritative, that have opposing viewpoints.
He may not agree with me here, and even if he does, he may be
unwilling to say as much for the sake of polity. The way the standard
headers are used is a logical mess. It causes the programmer to
waste valuable time searching for things that should be immediately
accessible by name.

That is what documentation is for, which you need anyway. (Further, good
documentation is significantly more involved that what can be trivially put in
header file comments.) As an aside, the standard library header structure is
not that well organized--mostly because some headers contain too many things
(e.g. <algorithm>).
This is true of most other libraries as well.
There is no need for this lack of structure other than the simple
fact that no one has been able to move the mindset of some C++
programmers out of the Nixon era.

An interface is specified in some file, which you include to access the
interface. The documentation tells you what file you need to include and what
you need to link to (if anything) to use the interface. That's pretty simple
and well-structured.
In the case of the CPP, it isn't so much me I want to be protected
from. It's the people who think it's a good idea to use it for
anything beyond simple conditional compilation, some debugging, and
perhaps for #including header files.

Give an example of why this protection is necessary. There is only one: name
conflicts introduced by unprefixed or lowercase macro definitions. That is just
bad design, and you need a whole lot more changes to the language to prevent
someone else's bad design from affecting you.
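That one hazard is easy to demonstrate; the lowercase `min` macro below is a hypothetical stand-in for the kind some old C headers really did define:

```cpp
#include <cassert>

// Hypothetical lowercase, unprefixed macro of the kind some old C
// headers define; the preprocessor ignores namespaces and scopes
// entirely when expanding it.
#define min(a, b) ((a) < (b) ? (a) : (b))

int smallest(int x, int y) {
    // This call is textually rewritten before the compiler ever
    // sees it. A call spelled std::min(x, y) in this file would be
    // mangled by the same expansion, and min(f(), g()) would
    // evaluate one of its arguments twice.
    return min(x, y);
}
// The usual defense is naming discipline: MYLIB_MIN, not min.
```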
I will admit that I still find
the use of header files bothersome. They represented one of the
biggest obstacles I encountered when first learning C++.

They are different than Java, but they aren't fundamentally complex.
Leaving some things to choice serves no useful purpose beyond pleasing
multiple constituents.

I'm not talking about providing two or more near identical features that all
have the same tradeoffs. Fundamentally, it's about being able to pick which
tradeoffs are worthwhile for a particular thing--and that serves an incredibly
useful purpose.
There are places where a lack of rule is not
empowering, it is restricting.
Example?

I hear driving in Rome is not what one
could properly call a civilized affair. Personally, I like the idea
that people stop at stoplights, use turn signals appropriately, stay in
one lane under normal circumstances, etc.

I really dislike analogies because they are so readily available to support any
argument, but there are some key concepts in this one. First, what you describe
is a system that is enforced by convention rather than by the streets,
stoplights, signs, vehicles, etc. (i.e. the "language"). Second, the ability to
deviate from the conventions is vital because there are unforeseeable variables
(or foreseeable variables that are too costly to handle) that enter the
system--such as a power outage or severe weather. Under the Java model, when
those circumstances occur (i.e. when you know better than the system enforced by
the language) you can't do anything about it. In C++, when you know better than
the system enforced by conventions, you can. (Note that both of these
situations occur in both Java and in C++, it isn't nearly as black-and-white as
this generalization. However, as a generalization, it is true. C++ gives you
more flexibility by moving the system from the language to convention moreso
than Java does.)

Regards,
Paul Mensonides
 
S

Steven T. Hatton

Paul said:
Assertions are invaluable tools.

Some people seem to think so. I read up on them in both Java and C++, and
was also aware of them in C. They never seemed to be much use. I'll grant
you, with the weak exception handling of C++ such a thing might be a bit
handy. Too bad C++ doesn't have printStackTrace. I can't even think of a
problem I've had where such a mechanism would be of much use.
Debuggers? Macros don't exist after compilation, so I'm not sure that
__DATE__ and __TIME__ would have any useful meaning at all, and debuggers
(if source information is available) already know the line and file.

Oh well, one less argument in favor of keeping the preprocessor.
An immediate example comes to mind. What if I write a program that, when
executed with a --version option, outputs a copyright notice, version
number,
and the build date and time? (This is a fairly common scenario, BTW.)

This kind of thing?

"GNU Emacs 21.3.50.2 (i686-pc-linux-gnu, X toolkit, Xaw3d scroll bars) of
2004-08-30 on ljosalfr"

Nothin' a bit of sed and date in the Makefile won't do for you.
Yes, but that means that the IDE has to be able to parse and semantically
analyze the entire source code--including doing overload resolution,
partial ordering, template declaration instantiation, etc..

No it doesn't. Even if it were necessary to do all you said, I don't
believe it is beyond the capabilities of existing technology. But, as I've
already explained, it isn't necessary to parse everything at edit time in
order for such a tool to be useful. They can often rely on the results of
a previous compilation.

Oh, and I just checked. KDevelop is giving me code completion on templates.
I think you have a serious misunderstanding of the preprocessor. How does
CPP complicate this issue?

Because it supports the antiquated technique of pasting together a
translation unit out of a bunch of different files, and it modifies the
source code between the time the program is edited and the time it is
actually compiled.
Parsing Java is quite a bit simpler than parsing C++.

Some of that is due to a simpler grammar, and some (much) of it is due to
the fact that Java uses a superior mechanism for locating resources
external to the actual file containing the source under development.
It doesn't know all of the specializations. As a general rule, it only
knows about a few general specializations.

What is "it"? I'm not following you here. The specializations are defined
somewhere in the code base, so they can be cached like anything else.
Because they might not exist yet.

What are you talking about? Either they do or they don't exist. It would
be damn hard for any tool to provide code completion based on code you have
yet to write.

That's true, but (as a general rule) well-designed template code is also
the
most complex code. Code completion outside of template code, while
useful, is only a small benefit.

There are a lot of darn simple templates in the Standard Library.
Look, if a tool author is willing to fully parse the underlying language,
preprocessing the source as it does so is trivial in comparison. If you
disagree, tell me why.

The problem is knowing what needs to be preprocessed. There is also the
problem that the preprocessor does not adhere to scoping rules, so the tool
cannot limit the scope under consideration without ignoring the potential
impact of the preprocessor.
No, I'm not. C and C++ code can be preprocessed as it is parsed into a
syntax
tree in a single pass. It's not like when you type something, the IDE
goes and
tries to find it in the source code--that would be *incredibly*
inefficient. In essence, it is already rewriting the code into an
internal format that is designed for algorithmic navigation that also
discards information that it doesn't care about.

Nonetheless, the AST is not going to directly coincide with what is in the
edit buffer. Adding one preprocessor directive can add dozens of source
files to the translation unit. That is unstructured, and unpredictable.
I wish it was that simple. In many companies, there is a lot of inertia
from a
compiler version. I.e. it often takes years to upgrade to new compilers
(if at all)--simply because the time required to make older code
compatible with the
new compiler can be massively expensive. Thus, new code gets written for
older compilers all the time.

I don't believe there is a compelling reason to introduce new libraries
intended for general use in forward-looking technology filled with ugly
kludges in order to try to be compatible with the least common denominator.
There are better ways of dealing with such issues. By bending over
backward to try to remain compatible with obsolete technology, you
compromise your own product and encourage the survival of technology that
was outdated for a reason.
There are current reasons as well.

And the result is that people don't use the product nearly as much as they
otherwise would.

The preprocessor does not 'rewrite' code--it expands macros which are
*part* of
the code. In doing so, it can reap major benefits in readability and
maintenance.

I've more often seen the opposite effect. Most use of macros makes code
less comprehensible, and trying to track down the point of definition can
be exasperating.
More than anything else, Bjarne hates #define, not #include, BTW. The
preprocessor is not a hack, it is an incredibly useful tool that can, like
every other tool, be misused.

I think its primary use is as a crutch that C++ can't do without, simply
because doing without it was never attempted.

What complexity are you talking about exactly? Separation of interface
and implementation is a cornerstone of separate compilation--which is one
of the fundamental reasons that C++ is able to scale to very large
projects without dropping efficiency.

Headers should not be the means of achieving the separation of
implementation and interface. The only place the ISO/IEC 14882:2003 even
mentions a header file is in the C compatibility appendix. The language
specification /should/ address this issue by providing a better solution
than that which currently exists.
What do you mean by 'composite', separation of interface and
implementation?

My actually having to maintain redundant constructs between these files.
Changing one member in a class can result in having to edit the location
where the member is defined, the parameter list in the constructor
declaration, the parameter list in constructor definition, the member
initialization list, and perhaps parameter lists in both the header and
source file of any functions involved. Additionally, I am likely to have
to remove a forward declaration and a #include. There may also be a
requirement to modify the destructor.
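A sketch of that redundancy, with hypothetical names and the file boundaries shown as comments:

```cpp
#include <cassert>

// --- Point.h (interface) ---------------------------------
class Point {
public:
    Point(double x, double y);   // (1) constructor declaration
    double x() const;
    double y() const;
private:
    double x_, y_;               // (2) member declarations
};

// --- Point.cpp (implementation) --------------------------
Point::Point(double x, double y) // (3) constructor definition
    : x_(x), y_(y)               // (4) member-initializer list
{}
double Point::x() const { return x_; }
double Point::y() const { return y_; }

// Adding a member 'z' means editing (1)-(4) plus every affected
// accessor, spread across two files; in Java the class body is a
// single unit.
```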

A header doesn't really give you a genuine interface, and the tradeoff in
trying to make a header devoid of anything but pointer definitions in order
to prevent dependencies can be a nuisance as well. Some of this is
inevitable no matter what. Some of it is beyond repair, though it could,
and perhaps should, have been done differently at the outset. Some of the
relative complexity compared to Java is due to the fact that C++ has
pointers, references, and (regular) variables. So my tool, if I ever get
it completed, is intended to mitigate more than could be solved by
eliminating the #include. I favor the falsely advertised feature described
here:

http://gcc.gnu.org/onlinedocs/gcc-3.4.1/gcc/C---Interface.html#C++ Interface

It don't work!
Java (and it isn't just Java)
is smack in the middle of the two approaches, which ends up providing only
a small
fraction of the benefits of either. If I want control of details (for
whatever reason) I'll use a language (like C++) that doesn't actively work
against me
controlling those details. If I want a higher-level language where I can
largely ignore many of those details, I'll use a real higher-level
language (like Haskell).

It's not the safety that makes Java useful. It is the fact that it
facilitates the location and leveraging of resources. Much of that has to
do with the superior mechanism of importing declarations. Java more
effectively separates interface from implementation than does C++. I
believe some of what Java does could be done in C++ without negatively
impacting the language.
I mean that it gives you access to lower-level details while
simultaneously giving you tools to create higher-level abstractions.

I agree that C++ does that. What it doesn't do well is facilitate the
location of resources, and the isolation of components.
Ha. The opposite is true.

I'm not saying no one is using C++. I am saying that there are major areas
where C++ is not the language of choice, and it is due to the problems I've
been discussing. It's a combination of many issues which individually seem
trivial, but when they are combined become genuine obstacles to making
progress. Sure, with a few years experience people can learn to compensate
for these defects. But a lot of people won't get the luxury of being only
moderately productive for that amount of time.

That is what documentation is for, which you need anyway.

No /that/ is what _interfaces_ are for! The programming language should be
the primary means of communication between the author and the reader. API
documentation can be very useful, and much of it can be generated by tools
such as JavaDoc and Doxygen.
(Further, good
documentation is significantly more involved than what can be trivially
put in header file comments.)

Sometimes it's nice to have more than just the autogenerated html, but even
that can be quite useful:
http://www.kdevelop.org/HEAD/doc/api/html/classTemplateParameterAST.html

Trolltech generates all their API documentation from the source code:

http://doc.trolltech.com/3.3/qstring.html

And of course the highly successful Java API documentation is likewise
generated directly from the source files:
http://java.sun.com/j2ee/1.4/docs/api/index.html
As an aside, the standard library header structure
is not that well organized--mostly because some headers contain too many
things (e.g. <algorithm>).

And the namespace is flat. There should be one mechanism for determining
the subset of the library responsible for any particular area of
functionality. As it stands you take the intersection of the namespace and
the header name. Since some headers include other headers, you often end
up with more than you specified. That is bad. It can introduce hidden
dependencies.
An interface is specified in some file, which you include to access the
interface. The documentation tells you what file you need to include and
what
you need to link to (if anything) to use the interface. That's pretty
simple and well-structured.

An interface should be self-describing, and should not introduce more into
an environment than is essential to serve the immediate purpose.
Give an example of why this protection is necessary. There is only one:
name
conflicts introduced by unprefixed or lowercase macro definitions. That
is just bad design, and you need a whole lot more changes to the language
to prevent someone else's bad design from affecting you.

I've already provided an example in the Xerces code. That's a friggin slap
in the face to a person who wants to read that code.
They are different than Java, but they aren't fundamentally complex.

No, but they can combine to create very unstructured complexity.
I'm not talking about providing two or more near identical features that
all
have the same tradeoffs. Fundamentally, it's about being able to pick
which tradeoffs are worthwhile for a particular thing--and that serves an
incredibly useful purpose.

I'm talking about the simple things like not specifying a file name
extension. Sure, it seems trivial, but it can be a PITA when switching
between tools which default to different conventions, or mixing libraries
that use different conventions. Some tools think .c files are C files and
go into C mode, not C++ mode unless you punch it a couple of times. And
worse is the .h file, because more people are likely to name their C++
header files that way. Others want to call everything .cpp which I find
annoying, but quite common. Others correctly prefer the .cc and .hh
extensions.
... However, as a generalization, it is true. C++ gives
you more flexibility by moving the system from the language to convention
moreso than Java does.)

The problem as I see it is a major structural deficiency in C++: certain
things, such as name resolution, are left unspecified, and the language
relies on the programmer to #include the file containing the declaration
rather than resolving fully qualified identifiers itself. I wish the
standard simply said 'given a fully qualified identifier, the
implementation shall resolve that name and make the declaration and/or
definition available to the compiler as needed'.
 
K

Kai-Uwe Bux

Steven said:
Some people seem to think so.

Some people, including me, *do* think so.
I read up on them in both Java and C++, and was also aware of them in C.
They never seemed to be much use.

"Read up on them"? Have you ever used this feature? Sometimes you realize
that something is useful not from reading about it, but from using it.
I'll grant you, with the weak exception handling of C++ such a thing
might be a bit handy.

Assertions and exceptions are completely different beasts. Assertions are
about aborting the program and printing a useful statement (useful
predominantly in debugging). Exceptions provide a flow construct to escape
from arbitrary levels of nesting (useful predominantly in postponing the
handling of error conditions).
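The contrast can be made concrete with a minimal sketch (the function names are hypothetical):

```cpp
#include <cassert>
#include <stdexcept>

// An assertion documents an invariant the programmer believes can
// never be false; violating it is a bug, and in a debug build the
// program aborts at the point of failure (compiled away if NDEBUG
// is defined).
double mean(const double* data, int n) {
    assert(n > 0);               // a programming error if violated
    double sum = 0;
    for (int i = 0; i < n; ++i) sum += data[i];
    return sum / n;
}

// An exception reports a runtime condition the caller may be able
// to handle, unwinding the stack to whatever level catches it.
double mean_checked(const double* data, int n) {
    if (n <= 0) throw std::invalid_argument("empty data set");
    return mean(data, n);
}
```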
Too bad C++ doesn't have printStackTrace. I can't even think of
problem I've had where such a mechanism would be of much use.

My debugger prints the stacktrace just fine.




I will just snip the rest of your post, as I have a more fundamental issue
with the direction of your reasoning. I have been following your posts for
quite some time now, starting with the rant about how the standard headers
are not faithfully represented in the header files of the implementation
you are using and about how that poses a difficulty for the IDE of your
dreams.

The overall structure of your approach to C++, as I understand it from the
collection of your posts, seems to be something like this:

a) Let us consider feature / mechanism X. It has the following uses that I
do approve of: <some list>.

b) Unfortunately, X also allows for the following that I do not approve of:
<some list>. These uses are bad because:

* I do not see how that could be useful to anybody.
* It will not allow me to have my IDE.
* If others can do something like that, I will have to adjust.
* Bjarne Stroustrup seems to say so.
* In Java you cannot do that; and Java rocks.
* ...

c) May I suggest to replace X by some feature set / mechanism that would
only allow for the uses listed in (a). [Specifics about the proposed
replacement are unfortunately missing at this time.]


In short, you want to enforce coding policies by language design. I,
however, like C++ precisely because it does not enforce policies but
provides mechanisms, and a lot. E.g., [try, throw, catch] to me is not
about error handling but about stack unwinding; and your suggestion that
throw() should only accept arguments derived from std::exception would
break some of my code. I like to explore the possibilities, and every once
in a while, I am really awestruck at how something can be done elegantly in
C++ in a way that I could not have fathomed.

I do not think that C++ is perfect, but I dislike the direction in which
you want to push it. To me, it looks as though you are about to cripple the
language.


Best

Kai-Uwe Bux
 
S

Steven T. Hatton

Kai-Uwe Bux said:
Some people, including me, *do* think so.

"Read up on them"? Have you ever used this feature? Sometimes you realize
that something is useful not from reading about it.

There has to be some need I have before I look for something to fill it.
The idea of aborting a program on failure is simply not something I believe
to be a good practice.
Assertions and exceptions are completely different beasts. Assertions are
about aborting the programm and printing a useful statement (useful
predominantly in debugging). Exceptions provide a flow construct to escape
from arbitrary levels of nesting (useful predominantly in postponing the
handling of error conditions).

Actually Stroustrup goes on to demonstrate an alternative form of assertion
which also failed to appeal to me. He uses a template that takes an
invariant as a parameter. And get this. It throws an exception rather
than aborting the program. In general that is the kind of thing I was
talking about, but I simply don't find cluttering my programs with
debugging code a good idea, nor, in general do I find it useful. I've
noticed C and C++ programmers new to Java tend to use stuff like
if (DEBUG) {/*...*/} until they realize it really isn't all that useful in
that context.
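The alternative described above looks roughly like this; a sketch from the description, not Stroustrup's exact code, with approximate names:

```cpp
#include <cassert>
#include <stdexcept>

// Compile-time switch: set to false and the checks vanish.
const bool ARG_CHECK = true;

struct Bad_arg : std::logic_error {
    Bad_arg() : std::logic_error("argument check failed") {}
};

// A templated assertion that throws X when the invariant passed as
// its argument is false, rather than aborting the program as the
// C-style assert macro does.
template<class X, class A>
inline void Assert(A assertion) {
    if (!assertion) throw X();
}

int divide(int num, int den) {
    if (ARG_CHECK) Assert<Bad_arg>(den != 0);
    return num / den;
}
```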
My debugger prints the stacktrace just fine.

When the code crashes? Will it print it to a web browser when your web
application aborts? How do you use something like abort in a loadable
module hosted by an application server?
* I do not see how that could be useful to anybody.

That is not something I really care about, as long as the feature doesn't
impact anything else.
* It will not allow me to have my IDE.

This is a very important issue. The availability of superior IDEs for C++
was one of the key features that distinguished it from the pack in the
early days. What I've seen done with Java IDEs has shown me that they can
be extremely powerful. Perhaps the more recent Microsoft IDEs for C++ are
similar in their capabilities to what JBuilder, Eclipse, NetBeans, etc
provide for Java. Something tells me that is not the case.
* If others can do something like that, I will have to adjust.

Some things I accept as unlikely to change. For example, I don't believe
there will ever be a file naming specification for C++. Some things seem
more significant. For example, the dependence on header files to include
resources. I've seen better ways of handling that situation, and I think
it is a serious impediment to providing far more powerful support to the
language.
* Bjarne Stroustrup seems to say so.

You may care to note that this thread was forked from a thread I started
with the purpose of questioning one of his more strongly voiced opinions.
* In Java you cannot do that; and Java rocks.

For the most part, I've pointed to things you /can/ do with Java and
you /can't/ do with C++. There are good ideas in Java. The people who
created the language are not stupid, and they had solid accomplishments
using C (and perhaps C++) before they started working on Java. To ignore
Java, as many C++ programmers would like to do, is an unwise approach to a
legitimately successful competitor to C++.
* ...

c) May I suggest to replace X by some feature set / mechanism that would
only allow for the uses listed in (a). [Specifics about the proposed
replacement are unfortunately missing at this time.]

Well, then you haven't read all of what I've posted. I've been very
specific about what I would like the exception handling to do. I also
posted a fairly extensive explanation of how I believe the Java library
model would work for C++. I also proposed the addition of a feature to
allow for user defined infix operators.

You may also care to notice that I have been persuaded on several occasions
that my proposals were not as viable as I originally thought.
In short, you want to enforce coding policies by language design. I,
however, like C++ precisely because it does not enforce policies but
provides mechanisms, and a lot. E.g., [try, throw, catch] to me is not
about error handling but about stack unwinding; and your suggestion that
throw() should only accept arguments derived from std::exception would
break some of my code.

Then you didn't read all of what I wrote about the topic. Either that, or
you chose to ignore it.
I like to explore the possibilites, and every once
in a while, I am really awestruck at how something can be done elegantly
in C++ in a way that I could not have fathomed.

I do not think that C++ is perfect, but I dislike the direction in which
you want to push it. To me, it looks as though you are about to cripple
the language.

How so? By introducing a more elegant, efficient, and effective means of
managing libraries? I know damn good and well that the Cpp is going to be
around for the foreseeable future. That doesn't mean I can't criticize its
use, and propose solutions for supporting the same functionality without
resorting to using it.
 
P

Phlip

Steven said:
There has to be some need I have before I look for something to fill it.
The idea of aborting a program on failure is simply not something I believe
to be a good practice.

In a softer language, array[x] throws an exception for you if x is out of
bounds. In a C language, you have the option to either cross your fingers
and do nothing, override [] and throw an exception, or override [] and
provide an assertion that only compiles without NDEBUG activated.
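The two checked options can be sketched with a toy fixed-size array (the class itself is hypothetical):

```cpp
#include <cassert>
#include <cstddef>
#include <stdexcept>

// A toy fixed-size array showing the two checked options above;
// the unchecked option is what a raw C array gives you.
template<class T, std::size_t N>
struct Array {
    T data[N];

    // Option 1: an assertion, compiled away when NDEBUG is defined.
    T& operator[](std::size_t i) {
        assert(i < N);
        return data[i];
    }

    // Option 2: an always-on check that throws an exception.
    T& at(std::size_t i) {
        if (i >= N) throw std::out_of_range("Array::at");
        return data[i];
    }
};
```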

The C languages need the ability to do nothing, or conditionally compile an
exception, to compete with assembly language.

(BTW, if you are not competing with assembly language, _don't use C++_...)

On the test side, here's an assertion:

#define CPPUNIT_ASSERT_EQUAL(sample, result) \
if ((sample) != (result)) { stringstream out; \
out << __FILE__ << "(" << __LINE__ << ") : "; \
out << #sample << "(" << (sample) << ") != "; \
out << #result << "(" << (result) << ")"; \
cout << out.str() << endl; \
OutputDebugStringA(out.str().c_str()); \
OutputDebugStringA("\n"); \
__asm { int 3 } }

The stringerizer, #, converts an expression into a string, and operator<<
formats the expression's value as a string. The macro inserts both these
strings into a stringstream object. Both cout and OutputDebugStringA() reuse
this object's value.

The result, at test failure time, is this line:

c:\...\project.cpp(56) : "Ignatz"(Ignatz) != name(Ignotz)

<F8> takes us directly to the failing test assert statement. The assertion
provides...

- fault navigation - the editor takes you to the failing line
- expression source reflected into the output
- expression values reflected into the output
- a breakpoint

On a platform descended from the Intel x86 architecture, if you run these
tests from a command line, not the editor, the OS will treat __asm { int 3 }
as a hard error, and it will offer to raise the default debugger.

When NDEBUG is off (what some folk call "Debug Mode"), tests can
aggressively exercise production code, making its internal assertions safer
to turn off.
Actually Stroustrup goes on to demonstrate an alternative form of assertion
which also failed to appeal to me. He uses a template that takes an
invariant as a parameter. And get this. It throws an exception rather
than aborting the program. In general that is the kind of thing I was
talking about, but I simply don't find cluttering my programs with
debugging code a good idea, nor, in general do I find it useful. I've
noticed C and C++ programmers new to Java tend to use stuff like
if (DEBUG) {/*...*/} until they realize it really isn't all that useful in
that context.

Aggressive testing tends to help code right-size its assertions, and
exceptions, and hide them behind minimal interfaces. For example, this loads
a library, pulls a function pointer out of it, and asserts it found the
function:

HMODULE libHandle = LoadLibrary("opengl32.dll");
void (APIENTRY *glBegin) (GLenum mode);
FARPROC farPoint = GetProcAddress(libHandle, "glBegin");
assert(farPoint);
glBegin = reinterpret_cast</something/ *> (farPoint);
// use glBegin() like a function
FreeLibrary(libHandle);

The assert is not the problem here - the /something/ is. It should be this:

glBegin = reinterpret_cast
<
void (APIENTRY *) (GLenum mode)
>
(farPoint);

To reduce the risk, and avoid writing the complete type of glBegin() and
every other function we must spoof more than once, we upgrade the situation
to use a template:

template<class funk>
void get(funk *& pointer, char const * name)
{
    FARPROC farPoint = GetProcAddress(libHandle, name);
    assert(farPoint);
    pointer = reinterpret_cast<funk *> (farPoint);
} // return the pointer by reference

That template resolves the high-risk /something/ for us. It collects
glBegin's target's type automatically, puts it into funk, and reinterprets
the return value of GetProcAddress() into funk's pointer's type. And it
calls assert() (which could be your exception) only once, behind the get()
interface.

Pushing the risk down into a relatively typesafe method permits a very clean
list of constructions for all our required function pointers:

void (APIENTRY *glBegin) (GLenum mode);
void (APIENTRY *glEnd) (void);
void (APIENTRY *glVertex3f) (GLfloat x, GLfloat y, GLfloat z);

get(glBegin , "glBegin" );
get(glEnd , "glEnd" );
get(glVertex3f , "glVertex3f" );

That strategy expresses each function pointer's type, the high-risk part, once
and only once. (And note how the template keyword converts statically typed
C++ into a pseudo-dynamic language.)
For the most part, I've pointed to things you /can/ do with Java and
you /can't/ do with C++.

And the things we can't do in Java (stringerization, token pasting,
conditional compilation, etc.) you decry as "bad C++ style".

Java was invented because too many programmers not bright enough to use C or
C++ were forced into it by schools and managers. They did not write
'new'-free code, and did not use smart pointers where they needed a 'new'.
So Java's inventors said, "Hey, let's tell everyone we don't permit
pointers. Yeah, that's the ticket, pointers are bad. Oh, also we need to
pass primitives by reference, and we need to store heterogeneous arrays, and
we need to declare exceptions at all interfaces," and they ended up doing a
zillion extra things to the language specification that C++ lets you do
_with_ the language.
How so? By introducing a more elegant, efficient, and effective means of
managing libraries? I know damn good and well that the Cpp is going to be
around for the foreseeable future. That doesn't mean I can't criticize its
use, and proposed solutions for supporting the same functionality without
resorting to using it.

Please base that criticism on your direct personal experience, not on
quoting Bjarne.
 
K

Kai-Uwe Bux

Steven said:
There has to be some need I have before I look for something to fill it.
The idea of aborting a program on failure is simply not something I
believe to be a good practice.

That is why assertions are preprocessed away in production code. And of
course, the preprocessor would allow you to write an assertion macro that
does not abort.

Actually Stroustrup goes on to demonstrate an alternative form of
assertion
which also failed to appeal to me. He uses a template that takes an
invariant as a parameter. And get this. It throws an exception rather
than aborting the program. In general that is the kind of thing I was
talking about, but I simply don't find cluttering my programs with
debugging code a good idea, nor, in general do I find it useful. I've
noticed C and C++ programmers new to Java tend to use stuff like
if (DEBUG) {/*...*/} until they realize it really isn't all that useful in
that context.

An assert( blah ) line is not cluttering your program with debug code but a
concise way of stating an invariant (e.g., within a loop, at the entry of a
block, or before a return statement).

When the code crashes? Will it print it to a web browser when your web
application aborts? How do you use something like abort in a loadable
module hosted by an application server?

Admittedly not. I was thinking of a debugging session to get my code right
so that it would not crash in the field.

That is not something I really care about, as long as the feature doesn't
impact anything else.

This is a very important issue. The availability of superior IDEs for C++
was one of the key features that distinguished it from the pack in the
early days. What I've seen done with Java IDEs has shown me that they can
be extremely powerful. Perhaps the more recent Microsoft IDEs for C++ are
similar in their capabilities to what JBuilder, Eclipse, NetBeans, etc.
provide for Java. Something tells me that is not the case.


Some things I accept as unlikely to change. For example, I don't believe
there will ever be a file naming specification for C++. Some things seem
more significant. For example, the dependence on header files to include
resources. I've seen better ways of handling that situation, and I think
it is a serious impediment to providing far more powerful support to the
language.


You may care to note that this thread was forked from a thread I started
with the purpose of questioning one of his more strongly voiced opinions.


For the most part, I've pointed to things you /can/ do with Java and
you /can't/ do with C++.

This is an interesting point. I would like to make a distinction. I think
you pointed to things you *can know* in Java but you *cannot know* in C++.
In C++, because of the preprocessor, you cannot really be sure that what
you read is not transformed into something entirely different. Because of
conditional compilation, you cannot know which headers are included in
which order. Because of the flexibility of throw(), you cannot be sure
about the type of the object thrown. All these things are things you cannot
know about, because of things you (or others) *can do*.

This is precisely what I was referring to farther down the post when I said
that you apparently want to enforce coding policy by language design. I am
not saying that that is necessarily a bad thing. I am, however, saying that
I like C++ because it does not do that.
There are good ideas in Java. The people who
created the language are not stupid, and they had solid accomplishments
using C (and perhaps C++) before they started working on Java. To ignore
Java, as many C++ programmers would like to do, is an unwise approach to a
legitimately successful competitor to C++.

I neither deny that Java is successful nor that Java has incorporated some
good ideas. And I never claimed Java to be designed by stupid people. I
just do not feel the need to make C++ more Java-like.
* ...

c) May I suggest to replace X by some feature set / mechanism that would
only allow for the uses listed in (a). [Specifics about the proposed
replacement are unfortunately missing at this time.]

Well, then you haven't read all of what I've posted.

I will note that it is very hard to read "all of what you've posted": you
are prolific. I am sure that I missed many of your points. I apologize.
I've been very
specific about what I would like the exception handling to do. I also
posted a fairly extensive explanation of how I believe the Java library
model would work for C++. I also proposed the addition of a feature to
allow for user defined infix operators.

You may also care to notice that I have been persuaded on several
occasions that my proposals were not as viable as I originally thought.
In short, you want to enforce coding policies by language design. I,
however, like C++ precisely because it does not enforce policies but
provides mechanisms, and a lot. E.g., [try, throw, catch] to me is not
about error handling but about stack unwinding; and your suggestion that
throw() should only accept arguments derived from std::exception would
break some of my code.

Then you didn't read all of what I wrote about the topic. Either that, or
you chose to ignore it.

It is a pity that you chose to pick on the example and did not address the
main point that I raised in the topic sentence. I apologize if I
mischaracterized your opinion on exceptions. I still feel that you prefer
the language to enforce policies rather than have it provide mechanisms.
How so? By introducing a more elegant, efficient, and effective means of
managing libraries? I know damn good and well that the Cpp is going to be
around for the foreseeable future. That doesn't mean I can't criticize
its use, and proposed solutions for supporting the same functionality
without resorting to using it.

Sure you can criticize the cpp. I was not denying any rights of yours. I
was pointing toward an underlying philosophy in your way of criticizing
various features of C++, including the preproccessor. It is that philosophy
that makes me uneasy about the direction of your proposals.

To take your last paragraph as an example: You start by talking about
library management when finally addressing the preprocessor in the
negative. This creates the subtext: "library management is *the* legitimate
use of the preprocessor, let's find a better way to that (so that hopefully
nobody will use the preprocessor anymore)."

Again, I am not saying that policy enforcing is a bad idea. I just prefer
to have at least one really powerful language around that does not do that.
And I like C++ to be that language.



Best

Kai-Uwe Bux
 
S

Steven T. Hatton

Phlip said:
Steven said:
There has to be some need I have before I look for something to fill it.
The idea of aborting a program on failure is simply not something I believe
to be a good practice.

In a softer language, array[x] throws an exception for you if x is out of
bounds. In a C language, you have the option to either cross your fingers
and do nothing, override [] and throw an exception, or override [] and
provide an assertion that only compiles without NDEBUG activated.

Or you can use the library correctly to set the bounds of your index.
There's no need to override [] when using the standard library containers.
They offer both checked and unchecked indexing. You also don't need to use
NDEBUG to eliminate the debugging code from the release build. You can use
a static const just as easily. If the compiler can determine that an
expression evaluates to false at compile time, it should omit that code
from what it emits. I therefore don't need to use #ifdef...#endif.
The C languages need the ability to do nothing, or conditionally compile
an exception, to compete with assembly language.

(BTW, if you are not competing with assembly language, _don't use C++_...)

On the test side, here's an assertion:

#define CPPUNIT_ASSERT_EQUAL(sample, result) \
if ((sample) != (result)) { stringstream out; \
out << __FILE__ << "(" << __LINE__ << ") : "; \
out << #sample << "(" << (sample) << ") != "; \
out << #result << "(" << (result) << ")"; \
cout << out.str() << endl; \
OutputDebugStringA(out.str().c_str()); \
OutputDebugStringA("\n"); \
__asm { int 3 } }

Why do I need a macro here? As has already been observed, if I have the
source, I don't need __LINE__ and __FILE__ information in order to locate
the error. It is unlikely I will be debugging code without having the
source available to me. An exception to that might be when using an
embedded implementation.

If I throw an exception, the debugger will take me to the exact location of
the origin, and it shows me all the context variables, and their values.
The stringerizer, #, converts an expression into a string, and operator<<
formats the expression's value as a string. The macro inserts both these
strings into a stringstream object. Both cout and OutputDebugStringA()
reuse this object's value.

The one thing that would currently not be doable with a template is to
instantiate it using a string or char*. I can do that with an exception,
however.
The result, at test failure time, is this line:

c:\...\project.cpp(56) : "Ignatz"(Ignatz) != name(Ignotz)

<F8> takes us directly to the failing test assert statement. The assertion
provides...

- fault navigation - the editor takes you to the failing line
- expression source reflected into the output
- expression values reflected into the output
- a breakpoint

On a platform descended from the Intel x86 architecture, if you run these
tests from a command line, not the editor, the OS will treat __asm { int 3
} as a hard error, and it will offer to raise the default debugger.

When NDEBUG is off (what some folk call "Debug Mode"), tests can
aggressively exercise production code, making its internal assertions
safer to turn off.

But I get all that without using assert.

[snip]

Interesting, but I don't see how it relates to the discussion. I didn't say
I don't believe in testing my code; I just don't like leaving a lot of stuff
behind which more often than not was put there to find a specific problem.
There are typically certain points in a program at which one test can
verify that many parameters are correct. That is where I am likely to
leave some kind of test code. Depending on the situation, I may leave it
on or off.
And the things we can't do in Java (stringerization, token pasting,
conditional compilation, etc.) you decry as "bad C++ style".

I never decried conditional compilation as bad. In many cases there are
better strategies than putting the conditional code in the main body of the
application.

http://www.mozilla.org/projects/nspr/reference/html/index.html
Java was invented because too many programmers not bright enough to use C
or C++ were forced into it by schools and managers. They did not write
'new'-free code, and did not use smart pointers where they needed a 'new'.
So Java's inventors said, "Hey, let's tell everyone we don't permit
pointers. Yeah, that's the ticket, pointers are bad. Oh, also we need to
pass primitives by reference, and we need to store heterogeneous arrays,
and we need to declare exceptions at all interfaces," and they ended up
doing a zillion extra things to the language specification that C++ lets
you do _with_ the language.

And you get threading, Unicode, effortless portability, incredibly smooth
refactoring, high-level abstraction with the tools to support it, great,
well organized documentation, easy to integrate libraries of all kinds, a
full suite of basic networking components, encryption, introspection,
remote method invocation, class loading, a reasonably functional GUI tool
kit, 2D graphics, XML support, a very nice I/O library, compression
libraries, an easy to use build system, etc., etc., most of it out of
the box.
Please base that criticism on your direct personal experience, not on
quoting Bjarne.

I already have.
 
S

Steven T. Hatton

Kai-Uwe Bux said:
Steven T. Hatton wrote:


That is why assertions are preproccessed away in production code. And of
course, the preprocessor would allow you to write an assertion macro that
does not abort.

It was really a philosophical point. I don't want my code to abort for any
but absolutely unrecoverable circumstances. I started working with
hardware in 1979. Failing to maintain an operational system was not merely
bad for my reputation, it was a threat to national security. My more
recent experience has been of the same nature. Add to that the fact that I
was developing code to run on servers, as services. Crashing the server
was just not a good development strategy. Especially because I was the
sysadmin.
An assert( blah ) line is not cluttering your program with debug code but
a concise way of stating an invariant (e.g., within a loop, at the entry
of a block, or before a return statement).

I believe I understand the strategy. In the kinds of development I've done
I haven't seen much need for that kind of thing. I'm more likely to want
to leave dumpers that will spit all the members of a class out to a stream.

Admittedly not. I was thinking of a debugging session to get my code right
so that it would not crash in the field.
But that's the nature of application servers. A single application should
not bring down the server. I'll admit I have very little experience with
C++ in that area, so there may be ways to call abort and only crash the
service, and not the server.
This is an interesting point. I would like to make a distinction. I think
you pointed to things you *can know* in Java but you *cannot know* in C++.
In C++, because of the preprocessor, you cannot really be sure that what
you read is not transformed into something entirely different. Because of
conditional compilation, you cannot know which headers are included in
which order. Because of the flexibility of throw(), you cannot be sure
about the type of the object thrown. All these things are things you
cannot know about, because of things you (or others) *can do*.

Yes. You are correct. Information technology is about information. Good
solid easily obtainable information is vital to design, to implementation,
to trouble shooting and to security.

In short, you want to enforce coding policies by language design. I,
however, like C++ precisely because it does not enforce policies but
provides mechanisms, and a lot. E.g., [try, throw, catch] to me is not
about error handling but about stack unwinding; and your suggestion that
throw() should only accept arguments derived from std::exception would
break some of my code.

Then you didn't read all of what I wrote about the topic. Either that,
or you chose to ignore it.

It is a pity that you chose to pick on the example and did not address
the main point that I raised in the topic sentence. I apologize if I
mischaracterized your opinion on exceptions. I still feel that you prefer
the language to enforce policies rather than have it provide mechanisms.

In the case of exceptions, I have solid experience that supports the
opinions I hold. What I suggested simply works better as a default. And
that is what I was referring to when I suggested you hadn't read everything
I wrote. I explicitly said that this behavior should be configurable through some
mechanism similar to the one currently used to switch out handlers. I'm
not sure how that might be accomplished, but I suspect someone in the C++
community is capable of finding a means.

To take your last paragraph as an example: You start by talking about
library management when finally addressing the preprocessor in the
negative. This creates the subtext: "library management is *the*
legitimate use of the preprocessor, let's find a better way to that (so
that hopefully nobody will use the preprocessor anymore)."

Again, I am not saying that policy enforcing is a bad idea. I just prefer
to have at least one really powerful language around that does not do
that. And I like C++ to be that language.

Actually, to a large extent, it's the other way around. I believe the
library management in C++ stinks. If there were a better system - which I
believe is highly doable - I would have far less to complain about
regarding the CPP. The CPP does go against certain principles of
encapsulation that are generally regarded as good in computer science. A
better library system would go a long way toward addressing that as well.
If people aren't #including 15 megs of source in every file, there is less
(virtually no) opportunity for hidden variables entering the environment.
 
C

Chris F Clark

Steven said:
There has to be some need I have before I look for something to fill
it. The idea of aborting a program on failure is simply not
something I believe to be a good practice.

Who says assertions have to abort your program? The assertions in our
code invoke a program specific interactive debugger that allows us to
trace what is going on at the level of user (and developer) visible
objects. Thus, if there is a problem connecting a "net" to a "gate"
via a "pin" (our code is used for circuit design) and an assertion
fires, the developer can inspect (graphically) the net, gate, and pin
and look at the diagram or user written source code that is involved,
and if necessary inspect the internal structure of each of the objects
(or look at other connected objects). The developer can also escape
into the normal C++ debugger if necessary. et cetera, et cetera, et
cetera.... And perhaps, most importantly, we can decide which
assertions represent errors that users can make (i.e. connecting a net
so that the net has two different drivers, which makes the net an
invalid circuit) or one that results from something unexpected
happening in the code (finding a net without any drivers after all the
nets have been checked to assure that they all have exactly one
driver), which means the code has some undetected flaw in it that
caused bad data to reach the point in question (or perhaps the code at
the current point is wrong).

Note, that nowhere did I say that our assertions abort the execution
of the program--assertions tell you where something went wrong, a good
assertion package helps you debug that point--a poor assertion
package simply "gives up" (with a message) at the point where the
code failed--however, even that is better than the code randomly doing
something further wrong and possibly silently producing incorrect
results or mysteriously crashing in some unrelated part of the
program.

However, in either case, assertions are invaluable in making certain
that the code does exactly what we say it does and that flaws are
caught as near to the source of the error as possible. As a result,
the team has been amazingly productive. It has really paid off in the
maintenance/enhancement phase, where we can quickly upgrade or change
the semantics of various pieces knowing that if we violate any
downstream assumptions, those problems will be caught by the developer
before finishing coding, "unit" testing, and checking in. If you
don't use assertions, your programs probably continue to do bad things
in the sections you haven't well tested and you have to infer the
problem from the tea-leaves at the point where you finally figure out
that the program has done something wrong (like crashed)--that doesn't
work so well on a 600K line program, where there are huge blocks of
code you have never read much less understand or wrote.

Now, to bring this back to the cpp. Our assertion code is a set of
cpp macros. Originally, they were just the macros supplied by the C++
compiler. However, because that C++ assertion support was written as
cpp macros, when we realized that we wanted something more
sophisticated than what our C++ vendor provided, we were able to
customize it for ourselves. Moreover, because we now have our own cpp
macros, we can use several different compilers and have our system
work the same way. (And, yes, just like the XERCES code you were
complaining about, not all of our target compilers are "modern" so we
have some macros that deal with broken compilers that we are REQUIRED
to support.)

(This by the way is a flaw in JAVA. If I go to a web-site that has
JAVA that is poorly written and only works with one version of the
spec and that isn't the version installed on my machine, the web-site
is broken. What does one do when one has two different web sites,
that both require different versions of the spec? In C++ the web site
author can write his code to be compatible not only with the newest
spec, but also with older versions (perhaps with less functionality),
so that the user can pick an old version of the compiler and have all
the code be happy--maybe not as "fancy" as one would like, but
working! BTW, I experience this problem on a daily basis, as I have
two web sites that I need to visit and they use incompatible JAVA
versions. I have two machines so that I can work around that
problem.)

And, what is the beauty of those macros (besides their portability),
is that our primary source code (the uses of those macros) is easy to
read. The syntax of the assert (and other macros) is quite simple and
obvious. That is only doable because they are macros. If they
weren't macros, some of the behind-the-scene-stuff like determining
the current object would have to be present in the source code of each
use. Without cpp, either the source code (at assertion use) would
have to be much uglier or the "assertion support" would have to be
built into the compiler. And if the assertion support were built into
the compiler, we would only get what the C++ compiler vendor provided
us, which would probably be a poor assertion package on most
platforms. Also note that the sophistication of our assertion support
has grown with our code. If it wasn't done via macros, the
behind-the-scenes stuff that cpp hides would have to have been changed
at every assertion use when we realized that there was "a better way" to
do things, as we have numerous times. In a language without cpp, we
simply wouldn't have bothered to do it, and we wouldn't have been
nearly as productive as a result.

So, that's the beauty of cpp, it lets one write simple obvious code
and put behind it something sophisticated that makes the code do the
right thing. It lets one do that in a compiler independent manner and
do so even in the presence of seriously broken compilers. And, it
lets us "the application developers" do it and does not make us
dependent on our compiler vendors.

Oh, and by the way, I think that by "cleverly" using function
prototypes that match our macros, we get nice code completion of the
macros in our IDE--the best of both worlds.

Note, somewhere you wrote that you can do some of the same things with
sed and make. If you don't like cpp, why would you want to impose an
external macro processor (that's what you are using sed for in that
scenario) on your code? At least with cpp, you can know that you have
something that is portable. For the longest time, I used systems that
didn't even have sed on them. I may still do so. I don't know. It
isn't something I would use. Moreover, with sed macros, your IDE has
no hope of knowing exactly how your code will be transformed before
being compiled. That is much worse than cpp. (I can just imagine how
our development environment would be obscure if to get assertions, we
had to preprocess all our code with sed scripts. Wouldn't that be a
joy???)

-Chris

*****************************************************************************
Chris Clark Internet : (e-mail address removed)
Compiler Resources, Inc. Web Site : http://world.std.com/~compres
23 Bailey Rd voice : (508) 435-5016
Berlin, MA 01503 USA fax : (978) 838-0263 (24 hours)
------------------------------------------------------------------------------
 
S

Steven T. Hatton

Chris said:
Who says assertions have to abort your program?

IIRC, that was the context not the necessary behavior. And I was really
intending the official <cassert> which is said to be documented in the C
standard documentation. I don't have that, but K&R tell me it aborts the
program. They don't tell me I can change that behavior.
Note, that nowhere did I say that our assertions abort the execution
of the program--assertions tell you where something went wrong, a good
assertion package helps you debug that point--a poor assertion
package simply "gives up" (with a message) at the point where the
code failed--however, even that is better than the code randomly doing
something further wrong and possibly silently producing incorrect
results or mysteriously crashing in some unrelated part of the
program.

However, in either case, assertions are invaluable in making certain
that the code does exactly what we say it does and that flaws are
caught as near to the source of the error as possible. As a result,
the team has been amazingly productive. It has really paid off in the
maintenance/enhancement phase, where we can quickly upgrade or change
the semantics of various pieces knowing that if we violate any
downstream assumptions, those problems will be caught by the developer
before finishing coding, "unit" testing, and checking in.

In the sense of JUnit, I have used that approach. In that case you create a
harness, and some use cases to run against your code, testing that
preconditions produce correct post conditions. That typically stands
outside the actual program code.
If you
don't use assertions, your programs probably continue to do bad things
in the sections you haven't well tested and you have to infer the
problem from the tea-leaves at the point where you finally figure out
that the program has done something wrong (like crashed)--that doesn't
work so well on a 600K line program, where there are huge blocks of
code you have never read much less understand or wrote.

It probably also depends on the nature of the product whether such things
are generally useful. You seem to have something akin to a huge Karnaugh
map. Probably much more suited to such structured evaluation than the
kinds of systems I've worked on. Also bear in mind that I do use
exceptions in similar ways.
Now, to bring this back to the cpp. Our assertion code is a set of
cpp macros. Originally, they were just the macros supplied by the C++
compiler. However, because that C++ assertion support was written as
cpp macros, when we realized that we wanted something more
sophisticated than what our C++ vendor provided, we were able to
customize it for ourselves. Moreover, because we now have our own cpp
macros, we can use several different compilers and have our system
work the same way. (And, yes, just like the XERCES code you were
complaining about, not all of our target compilers are "modern" so we
have some macros that deal with broken compilers that we are REQUIRED
to support.)

I understand that such things are required in some cases. There also seems
to be a tendency to imply 'we would have done it like this anyway' in many
cases. I understand that it can add flexibility to your code base if you
can systematically change your namespace name throughout the code base by
modifying a macro somewhere. There are other ways of accomplishing such
things which preserve and indeed exploit the integrity of the language. If
the code base has a predictable and regular structure, it is quite easy to
perform systematic global manipulations. I've done it in various
circumstances. Such a task is an anathema to the sensibilities of many C
and C++ programmers, for understandable reasons.
(This by the way is a flaw in JAVA. If I go to a web-site that has
JAVA that is poorly written and only works with one version of the
spec and that isn't the version installed on my machine, the web-site
is broken.
It sounds ancient. There was a significant change between the 1.x and 2.x
Java versions that impacts the GUI.
What does one do when one has two different web sites,
that both require different versions of the spec?

I believe there are ways of addressing that problem. Solaris used to have
three versions of Java installed, and was able to pick the right one for
every app somehow.
In C++ the web site
author can write his code to be compatible not only with the newest
spec, but also with older versions (perhaps with less functionality),
so that the user can pick an old version of the compiler and have all
the code be happy--maybe not as "fancy" as one would like, but
working!

That's a completely different issue. With Java, you are running an applet
on your local JVM. With a page served out with C++ you are either getting
HTML served to you, or you are running some kind of plugin that has to be
specifically compiled for your OS and hardware. If it's the former, Java
can do that in ways that are completely indistinguishable from what a C++
server will do. If you are talking about running a plugin, then I can
assure you with the utmost confidence there are _more_ compatibility issues
with C++ than with Java.

And, what is the beauty of those macros (besides their portability),
is that our primary source code (the uses of those macros) is easy to
read. The syntax of the assert (and other macros) is quite simple and
obvious. That is only doable because they are macros. If they
weren't macros, some of the behind-the-scene-stuff like determining
the current object would have to be present in the source code of each
use.

I don't follow here. How can a macro determine the current object without
some kind of intentional intervention? I'm not saying it can't be done.
Perhaps you are talking about something similar to Qt's moc?
Without cpp, either the source code (at assertion use) would
have to be much uglier or the "assertion support" would have to be
built into the compiler. And if the assertion support were built into
the compiler, we would only get what the C++ compiler vendor provided
us, which would probably be a poor assertion package on most
platforms. Also note that the sophistication of our assertion support
has grown with our code. If it wasn't done via macros, the
behind-the-scenes stuff that cpp hides would have to have been changed
at every assertion use when we realized that there was "a better way" to
do things, as we have numerous times.
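The pattern being described can be sketched in a few lines. The names here (MY_ASSERT, report_failure) are hypothetical, not the poster's actual macros; the point is what only a macro can do:

```cpp
#include <cassert>
#include <cstdio>
#include <string>

// The reporting policy lives in exactly one function; the macro below
// feeds it the stringized condition and the caller's location.
inline bool report_failure(const char* expr, const char* file, int line,
                           std::string* out) {
    char buf[256];
    std::snprintf(buf, sizeof(buf), "%s:%d: assertion failed: %s",
                  file, line, expr);
    if (out) *out = buf;                 // hand the message to a test hook
    else std::fprintf(stderr, "%s\n", buf);
    return false;
}

#ifdef NDEBUG
#define MY_ASSERT(expr, out) (true)      // release builds strip the check
#else
// #expr stringizes the condition; __FILE__/__LINE__ expand at the use site.
#define MY_ASSERT(expr, out) \
    ((expr) ? true : report_failure(#expr, __FILE__, __LINE__, (out)))
#endif
```

Because MY_ASSERT is a macro, the failing condition and the caller's file and line are captured with no effort at the use site, and changing the reporting policy later touches only report_failure; the #ifdef NDEBUG branch is the conditional-compilation hook.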

It sounds like macros are working well for you. And there certainly is
merit in 'if it ain't broke, don't fix it'. At the same time, I have to
wonder whether you would not have discovered an equally elegant approach
using other techniques. I also have to wonder what other internal features
of the language would have evolved to fill such needs. I can also
appreciate that macros may serve you better in supporting backward
compatibility. What I'm striving for is a way of doing things better if
you can afford a clean start.

Take careful note that I am not actually advocating abolishing the CPP, nor
am I advocating the abolishment of the #include. I simply want superior
means of doing some of the things the CPP is currently used for.
In a language without cpp, we
simply wouldn't have bothered to do it, and we wouldn't have been
nearly as productive as a result.

I'll have to take that as an opinion, not a conclusion. I've never used
them, but Java supports some kind of assertion. And as I mentioned earlier
there is JUnit as well.
So, that's the beauty of cpp, it lets one write simple obvious code
and put behind it something sophisticated that makes the code do the
right thing. It lets one do that in a compiler independent manner and
do so even in the presence of seriously broken compilers. And, it
lets us "the application developers" do it and does not make us
dependent on our compiler vendors.

Again, I have to wonder if the situation is as clear cut as you say it is.
I've seen 'pre-processing' done with Java, and I've seen some rather
inventive manipulations of Lisp which transcend the core language to add
functionality. I doubt it's likely to happen, but I suspect that if the
CPP were removed from the language, people would rediscover sed and awk in
a hurry.
Oh, and by the way, I think that by "cleverly" using function
prototypes that match our macros, we get nice code completion of the
macros in our IDE--the best of both worlds.

Simple code completion really isn't much of an issue. I am talking about
far more sophisticated features such as being able to locate any class and
import the fully qualified class name by typing a few characters of the
name, and hitting a key combo. Also virtually complete error detection
before you compile.
Note, somewhere you wrote that you can do some of the same things with
sed and make. If you don't like cpp, why would you want to impose an
external macro processor (that's what you are using sed for in that
scenario) on your code? At least with cpp, you can know that you have
something that is portable. For the longest time, I used systems that
didn't even have sed on them. I may still do so. I don't know. It
isn't something I would use. Moreover, with sed macros, your IDE has
no hope of knowing exactly how your code will be transformed before
being compiled. That is much worse than cpp. (I can just imagine how
obscure our development environment would be if, to get assertions, we
had to preprocess all our code with sed scripts. Wouldn't that be a
joy?)

I wasn't fully serious when I said that. OTOH, over the years, I've
encountered a good deal of that kind of thing. Also worth mentioning is
that asserts are really not something I see as a significant problem. At
worst I think they are a bit tacky.
 

Phlip

Steven said:
And you get threading, unicode, effortless portability, incredibly smooth
refactoring, high-level abstraction with the tools to support it, great,

Threading is good??
The one thing that would currently not be doable with a template is to
instantiate it using a string or char*. I can do that with an exception,
however.


I can do it with a scalable cluster of furbies:

http://www.trygve.com/furbeowulf.html ;-)
Why do I need a macro here?

So the IDE can automatically take you to a failing line in a test.

(No, I don't want a test case to throw an exception at failure time.)
As has already been observed, if I have the
source, I don't need __LINE__ and __FILE__ information in order to locate
the error. It is unlikely I will be debugging code without having the
source available to me. An exception to that might be when using an
embedded implementation.

If I throw an exception, the debugger will take me to the exact location of
the origin, and it shows me all the context variables, and their values.

There's a cycle of gain-saying going on here. I propose that the CPP offers

- stringization
- token pasting
- conditional compilation

Not only can't other C++ mechanisms supply those, but other languages have
a very hard time supplying them as well. However, whenever I give an example of
one of those solving a quite legitimate problem, you declaim how a similar
problem could be solved via a different technique.

We could have the same conversation regarding 'int'.
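For readers meeting the first two terms cold, a minimal illustration (the macro names are mine):

```cpp
#include <cassert>
#include <cstring>

// #x turns the argument's tokens into a string literal (stringization);
// a##b splices two tokens into a single new token (token pasting).
#define STRINGIZE(x) #x
#define PASTE(a, b) a##b

int PASTE(my_, counter) = 41;          // declares an int named my_counter
const char* kExpr = STRINGIZE(1 + 2);  // the string literal "1 + 2"
```

Neither templates nor functions can do either of these: both operate on tokens before the compiler proper ever sees them, which is exactly why the rest of the language cannot substitute for them.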
 

Steven T. Hatton

Phlip said:
Threading is good??

Huh? Even Microsoft eventually figured that out. Yes. Thread support is
crucial for creating any sophisticated application that can be expected to
do more than one thing at a time.
- stringization
- token pasting
- conditional compilation

Not only can't other C++ mechanisms supply those, but other languages have
a very hard time supplying them as well. However, whenever I give an example
of one of those solving a quite legitimate problem, you declaim how a
similar problem could be solved via a different technique.

You were presenting these examples in support of your claim that the CPP is
extremely valuable. You are also claiming these things can't be done in
other languages. Java code can be used to generate new classes at runtime,
and load them. Even JavaScript can perform self-modification, and much
more powerfully than the CPP. But the services these examples provide for
you, other than conditional compilation, are things that I have no need
for. There are other ways of achieving the same ends that don't require as
much effort on my part. That is not to say your macros are difficult to
write or to use. It's just that I have solutions to these problems that
require virtually no effort.

If you want to impress me, do this in 50 keystrokes, or fewer:
/* original text */
RgbColor text_color;
bool is_leaf;
bool is_vertical;
std::string text;
RgbColor bg_color;
RgbColor edge_color;
double border_w;
double h;
double w;
double x;
double y;
int bg_z;
int text_z;
std::string text;

/* regexp search and replace results*/

<< "text_color" << text_color << "\n"
<< "is_leaf" << is_leaf << "\n"
<< "is_vertical" << is_vertical << "\n"
<< "text" << text << "\n"
<< "bg_color" << bg_color << "\n"
<< "edge_color" << edge_color << "\n"
<< "border_w" << border_w << "\n"
<< "h" << h << "\n"
<< "w" << w << "\n"
<< "x" << x << "\n"
<< "y" << y << "\n"
<< "bg_z" << bg_z << "\n"
<< "text_z" << text_z << "\n"
<< "text" << text << "\n"
 

Sam Holden

Huh? Even Microsoft eventually figured that out. Yes. Thread support is
crucial for creating any sophisticated application that can be expected to
do more than one thing at a time.

Strangely enough I've been using non-threaded applications that
do more than one thing at a time and are reasonably sophisticated.

Where sophisticated means things like: makes multiple concurrent TCP
connections, encodes and decodes audio sending and receiving it over
UDP, displays GUI and so on - without a thread in sight (well one
thread for the pedants).

I even write them occasionally.

Threading throws away decades of work in creating systems with useful
protected memory spaces for processes. And lets the average programmer
meet all the problems and reinvent (poorly) all the solutions all over
again. Rather than using the solution implemented by the (hopefully
much more experienced and competent in the domain) OS authors.

Of course there's that vanishingly small percentage of problems that
are best solved with threads, but chances are you, me, and the
next guy aren't working on one of them.

And of course there's Windows and Java which make up a large chunk
of the platforms that are programmed for and need threads to do
anything more interesting than "Hello World" because the main
alternatives (non-blocking IO and processes) are brain damaged,
broken, slow, or all of the above.

But this has nothing to do with C++ :)
 

Phlip

Sam said:
Steven T. Hatton wrote:

Strangely enough I've been using non-threaded applications that
do more than one thing at a time and are reasonably sophisticated.

Where sophisticated means things like: makes multiple concurrent TCP
connections, encodes and decodes audio sending and receiving it over
UDP, displays GUI and so on - without a thread in sight (well one
thread for the pedants).

I even write them occasionally.

Threading throws away decades of work in creating systems with useful
protected memory spaces for processes. And lets the average programmer
meet all the problems and reinvent (poorly) all the solutions all over
again. Rather than using the solution implemented by the (hopefully
much more experienced and competent in the domain) OS authors.

Of course there's that vanishingly small percentage of problems that
are best solved with threads, but chances are you, me, and the
next guy aren't working on one of them.

And of course there's Windows and Java which make up a large chunk
of the platforms that are programmed for and need threads to do
anything more interesting than "Hello World" because the main
alternatives (non-blocking IO and processes) are brain damaged,
broken, slow, or all of the above.

I am aware that sometimes a program must eat a sandwich with one hand and
drive a car with the other.

I have never personally seen a situation improved by threads. If "Process A
takes too long, and we need Process B to run at the same time", this
indicates that A incorrectly couples to its event driver. Event driven
programs should respond to events and update their object model. They should
not go into a loop and stay in it for a while.

People thread when they are unaware of how select() or
MsgWaitForMultipleObjects() work. Then if they need inter-thread
communication, they add back the semaphores that their kernel would have
provided.
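A minimal POSIX sketch of the select() style being described (the helper name is mine, not Phlip's code): one blocking call watches any number of descriptors with a timeout, so no second thread is needed to avoid spinning.

```cpp
#include <cassert>
#include <sys/select.h>
#include <unistd.h>

// Wait up to `ms` milliseconds for `fd` to become readable.  A real event
// loop would put every input source into the fd_set and dispatch on
// whichever descriptor select() reports as ready.
bool readable_within(int fd, int ms) {
    fd_set rd;
    FD_ZERO(&rd);
    FD_SET(fd, &rd);
    timeval tv;
    tv.tv_sec = ms / 1000;
    tv.tv_usec = (ms % 1000) * 1000;
    return select(fd + 1, &rd, nullptr, nullptr, &tv) > 0;
}
```

An empty pipe times out; once a byte is written to the other end, the same call reports the read end ready immediately.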
You were presenting these examples in support of your claim that the CPP is
extremely valuable.

Not extremely. Just more valuable than mimicking Bjarne's more advanced
opinion.
If you want to impress me, do this in 50 keystrokes, or fewer:
/* original text */
RgbColor text_color;
bool is_leaf;
bool is_vertical;
std::string text;
RgbColor bg_color;
RgbColor edge_color;
double border_w;
double h;
double w;
double x;
double y;
int bg_z;
int text_z;
std::string text;

/* regexp search and replace results*/

<< "text_color" << text_color << "\n"
<< "is_leaf" << is_leaf << "\n"
<< "is_vertical" << is_vertical << "\n"
<< "text" << text << "\n"
<< "bg_color" << bg_color << "\n"
<< "edge_color" << edge_color << "\n"
<< "border_w" << border_w << "\n"
<< "h" << h << "\n"
<< "w" << w << "\n"
<< "x" << x << "\n"
<< "y" << y << "\n"
<< "bg_z" << bg_z << "\n"
<< "text_z" << text_z << "\n"
<< "text" << text << "\n"

So far as I understand the question, here:

http://www.codeproject.com/macro/metamacros.asp

CPP strikes again!
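The technique behind that article is usually called the X-macro idiom. A sketch of the idea (all names here are mine, and the field list is abbreviated): one list of fields drives both the declarations and the streaming code, so adding a field never requires touching the dump by hand.

```cpp
#include <cassert>
#include <sstream>
#include <string>

// The single source of truth: each entry is a (type, name) pair.
#define FIELDS(X)    \
    X(double, w)     \
    X(double, h)     \
    X(int, bg_z)

struct Node {
    // First expansion: declare the members.
#define DECLARE(type, name) type name;
    FIELDS(DECLARE)
#undef DECLARE

    // Second expansion: stream every field as "name = value".
    std::string dump() const {
        std::ostringstream os;
#define PRINT(type, name) os << #name << " = " << name << "\n";
        FIELDS(PRINT)
#undef PRINT
        return os.str();
    }
};
```

This is the sense in which cpp answers the 50-keystroke challenge: the field list is typed once, and stringization (#name) generates the labels that a regexp search-and-replace would otherwise have to produce.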
 

Kai-Uwe Bux

Steven said:
Kai-Uwe Bux said:
Steven T. Hatton wrote: [snip]
For the most part, I've pointed to things you /can/ do with Java and
you /can't/ do with C++.

This is an interesting point. I would like to make a distinction. I think
you pointed to things you *can know* in Java but you *cannot know* in
C++. In C++, because of the preprocessor, you cannot really be sure that
what you read is not transformed into something entirely different.
Because of conditional compilation, you cannot know which headers are
included in which order. Because of the flexibility of throw(), you
cannot be sure about the type of the object thrown. All these things are
things you cannot know about, because of things you (or others) *can do*.

Yes. You are correct. Information technology is about information. Good
solid easily obtainable information is vital to design, to implementation,
to trouble shooting and to security.

a) This is rhetoric. The information in "Information technology is about
information." is the information my program deals with. The information in
"Good solid easily obtainable information is vital to design, to
implementation, to trouble shooting and to security." is information about
the structure of my code. The second statement may be true, but it does not
follow from the first. You are using one term to reference two different
entities.

b) The information about my code is already available. After all, it must
be sufficient for the compiler to generate object code. However, I agree
that in C++ the information may be scattered around and is not as local as
the human mind (or some IDE) would like it to be.

c) I still see that there are trade-offs. If you increase what can be known
at coding time at the cost of what can be done, there is a price. Since,
apparently, I do not face the difficulties you are dealing with, I would
rather keep the power of what can be done.

In short, you want to enforce coding policies by language design. I,
however, like C++ precisely because it does not enforce policies but
provides mechanisms, and a lot. E.g., [try, throw, catch] to me is not
about error handling but about stack unwinding; and your suggestion
that throw() should only accept arguments derived from std::exception
would break some of my code.

Then you didn't read all of what I wrote about the topic. Either that,
or you chose to ignore it.

It is a pity that you chose to pick on the example and did not address
the main point that I raised in the topic sentence. I apologize if I
mischaracterized your opinion on exceptions. I still feel that you prefer
the language to enforce policies rather than have it provide mechanisms.

In the case of exceptions, I have solid experience that supports the
opinions I hold. What I suggested simply works better as a default. And
that is what I was referring to when I suggested you hadn't read
everything I wrote. I explicitly said that this should be configurable
through some mechanism similar to the one currently used to switch out
handlers. I'm not sure how that might be accomplished, but I suspect
someone in the C++ community is capable of finding a means.

I will drop exceptions. Obviously talking about them just gives you an
opportunity not to address the issue of enforcing policies versus providing
mechanisms.


Actually, to a large extent, it's the other way around. I believe the
library management in C++ stinks. If there were a better system - which I
believe is highly doable - I would have far less to complain about
regarding the CPP. The CPP does go against certain principles of
encapsulation that are generally regarded as good in computer science. A
better library system would go a long way toward addressing that as well.
If people aren't #including 15 megs of source in every file, there is less
(virtually no) opportunity for hidden variables entering the environment.

Maybe, if cpp were even more powerful and more convenient, superior
library management could be implemented using macros.


Best

Kai-Uwe Bux
 
