What has C++ become?

James Kanze

You can say the same for a change to any header.

Yes. Which is why you don't want to modify headers more often
than necessary. And why you ban as many implementation details
as possible from the headers, and use the compilation firewall
idiom rather regularly. And strictly limit the use of templates
and inline functions, since both require the implementation in
the header.
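
As a minimal sketch of the compilation firewall (pimpl) idiom (the
Widget/WidgetImpl names are hypothetical, chosen only for illustration):

    // widget.h -- the only file clients include. It exposes no
    // implementation details, so editing widget.cpp never forces
    // clients of this header to recompile.
    class WidgetImpl;                 // forward declaration only

    class Widget {
    public:
        Widget();
        ~Widget();
        void draw();
    private:
        WidgetImpl* impl;             // opaque pointer to the implementation
    };

    // widget.cpp -- all the details live here, behind the firewall.
    #include "widget.h"
    #include <iostream>

    class WidgetImpl {
    public:
        void draw() { std::cout << "drawing\n"; }
    };

    Widget::Widget() : impl(new WidgetImpl) {}
    Widget::~Widget() { delete impl; }
    void Widget::draw() { impl->draw(); }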
 
James Kanze

See the "Stable Dependencies Principle" and the "Stable Abstractions
Principle".

"Thus, the software that encapsulates the *high level design
model* of the system should be placed into stable packages."
- Emphasis added -
"[The Stable Abstractions Principle] says that a stable
package should also be abstract so that its stability does not
prevent it from being extended."
Robert C. Martin's article on stability principles pretty much
stands against everything you've said in this thread to date.

You've obviously not understood the article, or what I've been
saying. The abstraction and the design should be stable. Its
implementation won't necessarily be. The problem with C++
templates, as currently implemented by most compilers, is that
they require the implementation in places where logically, you
should only have the abstraction. And thus introduce
instability in places where you don't want it.
Templates are the epitome of abstraction.

I wouldn't go that far. They're a tool which can help to
implement certain types of abstraction.
Perhaps if you were not so anti-template

I'm not anti-template. I'm very much in favor of templates. So
much, in fact, that I'd actually like to see compilers
implement them in a way that was usable in practice (with
export). My complaints aren't with templates; they're with the
cruddy implementations I have of them.
you'd do some looking into how to make the best use of them
and you would not be arguing about changing templates causing
long builds; you'd be well aware that you simply don't change
templates that often.

That's true for things like the standard library, and lower
level code. On the other hand, if you're not changing your
application, then what are you doing when you program?

[...]
Of course, you need to go back and read about the other design
principles that Martin describes in order to see the entire
reasoning behind why you put the *high level code* in your
stable, abstract packages. I'm not appealing to authority;
Martin's stuff just happens to be very good and the reasoning
stands on its own.
The principles of OOD translate very well to Generic Programming.

I have a very high regard for Robert Martin; he's one of the
people who taught me C++. The problem is simply pragmatic;
templates don't really work with most compilers. For most
compilers, they're really just elaborate macros, with all the
problems macros entail.
 
James Kanze

I love when compilation takes more than a couple of seconds: I
have extra time to think! Sometimes it ends with killing the
compilation and doing something else, rather than trying the
result.

Interesting development process. I usually try to think before
editing, much less compiling.
 
Ian Collins

James said:
Yes. Which is why you don't want to modify headers more often
than necessary. And why you ban as many implementation details
as possible from the headers, and use the compilation firewall
idiom rather regularly. And strictly limit the use of templates
and inline functions, since both require the implementation in
the header.
Inline functions tend not to be too big an issue in practice. They tend
to be trivial and "write once".

As for templates, I leave that call to the team. By their nature,
templates tend to be used for utility functions which, like trivial
inline functions, tend to change less frequently than specific
application code.

Another common situation is a template introduced to avoid
unnecessary duplication; it is only used where the duplication would
otherwise occur. If the template wasn't there, the duplicated code
would have to change instead of the template. Without the template,
the cost in recompilation would be the same, but the cost of coding
changes would be greater.
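
As a minimal sketch of that kind of utility template (the clamp name
and its logic are hypothetical, just for illustration):

    // Without the template, each numeric type would need its own
    // hand-written copy of this logic, and a change to the logic
    // would mean editing every copy.
    template <typename T>
    T clamp(T value, T low, T high)
    {
        if (value < low)  return low;
        if (value > high) return high;
        return value;
    }

    // Used wherever the duplication would otherwise occur:
    //     int    i = clamp(x, 0, 255);
    //     double d = clamp(y, 0.0, 1.0);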

So I wouldn't go so far as to say strictly limit the use of templates
and inline functions; just treat them like any other tool.

Gratuitous use of templates is another matter, but peer pressure should
solve that problem.
 
J

Jerry Coffin

[ ... ]
(In the end, much advanced optimization involves visiting nodes
in a graph, and I think that there are ways to parallelize
this, although I don't know whether they are practical or only
theoretical.)

Yes and no. For example, a depth-first-search of a general graph has
been studied pretty extensively. I don't know of anything _proving_ that
it can't be done in parallel productively, but there are definitely some
pretty strong indications in that direction*.

OTOH, I believe for a compiler you're dealing primarily with DAGs. I'm
pretty sure a depth-first search of a DAG _can_ be done in parallel
productively -- at least if they're large enough for the savings from
parallelization to overcome communication overhead and such.
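
As a rough sketch of one parallel-friendly way to visit a DAG: not a
true depth-first search, but a Kahn-style level-by-level topological
traversal, assuming C++11 threads and a simple adjacency-list
representation (all the names here are hypothetical):

    #include <cstddef>
    #include <thread>
    #include <vector>

    // Node i has edges to the nodes in adj[i]. Every node in a level
    // has all of its predecessors already processed, so the nodes
    // within one level can be visited concurrently without conflicts.
    void visit_levels(const std::vector<std::vector<int> >& adj,
                      void (*process)(int))
    {
        std::size_t n = adj.size();
        std::vector<int> indegree(n, 0);
        for (std::size_t i = 0; i < n; ++i)
            for (std::size_t j = 0; j < adj[i].size(); ++j)
                ++indegree[adj[i][j]];

        std::vector<int> level;
        for (std::size_t i = 0; i < n; ++i)
            if (indegree[i] == 0)
                level.push_back(static_cast<int>(i));

        while (!level.empty()) {
            // One thread per node is only worthwhile when the per-node
            // work dwarfs the communication and startup overhead.
            std::vector<std::thread> workers;
            for (std::size_t i = 0; i < level.size(); ++i)
                workers.emplace_back(process, level[i]);
            for (std::size_t i = 0; i < workers.size(); ++i)
                workers[i].join();

            // The next level: nodes whose last predecessor just finished.
            std::vector<int> next;
            for (std::size_t i = 0; i < level.size(); ++i) {
                const std::vector<int>& out = adj[level[i]];
                for (std::size_t j = 0; j < out.size(); ++j)
                    if (--indegree[out[j]] == 0)
                        next.push_back(out[j]);
            }
            level.swap(next);
        }
    }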

I'm not sure whether a compiler typically generates DAGs that large or
not. I've written a few small compilers, but don't recall having
instrumented the size of graphs they worked with. My guess is that if
you only do function-level optimization, they're usually going to be too
small for it to help, but if you do global optimization, they might
easily become large enough -- but that's purely my feeling; I don't have
any solid data to support it, and we all know how undependable that is.

[ ... ]
And for the application headers, even farming the compiles out
to different machines (in parallel) may not work; since the
application headers will normally reside on one machine, you may
end up saturating the network. (I've seen this in real life.
The usual ethernet degrades rapidly when the number of
collisions gets too high.)

Does anybody really use hubs anymore? Using switched Ethernet,
collisions are quite rare, even when the network is _heavily_ loaded.
 
Jerry Coffin

[ ... ]
Yes and no. For example, a depth-first-search of a general graph has
been studied pretty extensively. I don't know of anything _proving_ that
it can't be done in parallel productively, but there are definitely some
pretty strong indications in that direction*.

Oops -- I left out the footnote I intended there:

J. H. Reif, _Depth-First Search Is Inherently Sequential_, Information
Processing Letters 20 (1985).
 
Noah Roberts

James said:
The problem is simply pragmatic;
templates don't really work with most compilers. For most
compilers, they're really just elaborate macros, with all the
problems macros entail.

LOL! And the story changes yet again.
 
Noah Roberts

James said:
You've obviously not understood the article, or what I've been
saying. The abstraction and the design should be stable. Its
implementation won't necessarily be. The problem with C++
templates, as currently implemented by most compilers, is that
they require the implementation in places where logically, you
should only have the abstraction. And thus introduce
instability in places where you don't want it.

I'm afraid it is you who have not understood the article. Perhaps you
are not familiar with the dependency inversion principle.

You're calling the STL "low level code" when it is, in fact, high level
code. The find_if algorithm, for instance, is a high level algorithm
that can be used with any data type that obeys the iterator abstraction.
The containers, for another example, are generic, high level
constructs that can contain any data type that implements the required
concepts. As that article states, stability and dependencies move
toward the high level code, and templates tend to be very independent,
with few dependencies of their own.

As high level code, the internals of templates simply do not change very
often. For example, there's a limited number of ways to do a qsort.
Once the concepts required to do such a sort are established, the
implementation of the *high level* algorithm does not need to change.
What has to change is any object that wants to depend upon the qsort
template: it must implement the concepts that the qsort interface
imposes upon its clients.

In other words, the dependency in the relationship between an STL
container and its contents is not the container depending on the
contents, but the other way around. The STL container depends on no
concrete object, imposes an interface on all its contents, and works at
a more abstract level than the objects it contains. It is an abstract
kind of object and this is exactly what *high level* code is.
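
A small illustration of that dependency direction (Employee, HasId,
and find_employee are hypothetical names, not from this thread):
std::find_if names no concrete type; the application type merely
satisfies the requirements the algorithm imposes.

    #include <algorithm>
    #include <string>
    #include <vector>

    // Application-level type. It conforms to what find_if requires;
    // find_if itself knows nothing about Employee.
    struct Employee {
        std::string name;
        int id;
    };

    // Predicate imposed by the algorithm's interface: a callable
    // taking an element and returning bool.
    struct HasId {
        int wanted;
        explicit HasId(int w) : wanted(w) {}
        bool operator()(const Employee& e) const { return e.id == wanted; }
    };

    Employee* find_employee(std::vector<Employee>& staff, int id)
    {
        std::vector<Employee>::iterator it =
            std::find_if(staff.begin(), staff.end(), HasId(id));
        return it == staff.end() ? 0 : &*it;
    }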

I simply do not know why you are calling these things "low level code",
and you've spent no time defending that assertion whatsoever. Nor is it
out of unawareness of my disagreement, for I've expressed it several
times. Robert Martin says exactly the same thing I've just expressed in
the article I've cited and in his article on the Dependency Inversion
Principle.

Furthermore, as expressed in this later article, his statements
apply to the implementation of higher level objects and not just
their interfaces. This is implied by his Open/Closed Principle just
like all the other principles he talks about. That principle states
that an object should be open to extension, but *closed to
modification*. Any time you change the implementation of any function
or object you've violated that principle.
 
Ian Collins

Walter said:
I think Symantec C++ was the first to do distributed builds with their
"netbuild" feature in the early 90's. The problem was, as you said, the
network congestion of transmitting all the header files around more than
ate up the time saved.

Not too big a deal with a modern OS and a big enough bucket of RAM to
cache them.
 
coal

Walter said:
I think Symantec C++ was the first to do distributed builds with their
"netbuild" feature in the early 90's. The problem was, as you said, the
network congestion of transmitting all the header files around more than
ate up the time saved.

Did they use any compression? It should be used, in my opinion.

Brian Wood
Ebenezer Enterprises
www.webEbenezer.net
 
Pascal J. Bourguignon

Noah Roberts said:
You're calling the STL "low level code" when it is, in fact, high
level code. The find_if algorithm, for instance, is a high level
algorithm that can be used with any data type that obeys the iterator
abstraction.

No. find_if is low level.

Compare:

    Key k;
    std::vector<Element> v;
    std::vector<Element>::iterator end = v.end();
    std::vector<Element>::iterator found;
    found = find_if(v.begin(), end,
                    boost::bind(&Key::equal, k,
                                boost::bind(&Element::getKey, _1)));
    if (found == end) {
        doSomething(v, NULL);
    } else {
        doSomething(v, *found);
    }

vs.

(lambda (k v) (do-something v (find-if (lambda (e) (equal k (get-key e))) v)))


And again, here I already used boost::bind; if I had to use the STL
instead, it would have tripled the number of lines.

And again, here I transposed directly. In this specific example, Lisp
is even higher level (i.e. more concise):

(lambda (k v) (do-something v (find k v :key (function get-key) :test (function equal))))


You just cannot say that the STL is high level. You can only say that
it is slightly higher level than pure C code. But it doesn't yet
qualify as "high level".
 
James Kanze

I think Symantec C++ was the first to do distributed builds
with their "netbuild" feature in the early 90's. The problem
was, as you said, the network congestion of transmitting all
the header files around more than ate up the time saved.

It's useful for the lower level libraries, which tend to depend
only on system headers (which can be copied and held locally
on each machine, since they "never" change).

Our experience in the mid 90's was that distributed compiling
did speed things up considerably (something like 10 times),
despite all of our application source files being served from a
single server. On the other hand, we organized things so that
linking was always done on the server; the object files and
libraries were big, and the linker didn't really use that much
CPU, so most of the time linking elsewhere was due to network
traffic.

As Jerry has pointed out, network technologies have evolved. On
the other hand, so has network use---back in the mid 90's, I
didn't have to compete for network bandwidth with 20 other users
downloading a lot of .gifs :-). All I can say is that I can
still see a big difference here between working on a local disk
and working over the network. (Not systematically, but often enough to be
bothersome.)
 
James Kanze

Not too big a deal with a modern OS and a big enough bucket of
RAM to cache them.

That would only be true if the RAM were shared. In the past, we
used our workstations as a build farm---typically, each machine
would only compile one or two sources anyway, but there would be
several hundred machines downloading the headers from the
server, all at the same time.
 
James Kanze

No. find_if is low level.

This is, of course, totally ridiculous. find_if is part of the
compiler---by definition, you can't get any lower.

    Key k;
    std::vector<Element> v;
    std::vector<Element>::iterator end = v.end();
    std::vector<Element>::iterator found;
    found = find_if(v.begin(), end,
                    boost::bind(&Key::equal, k,
                                boost::bind(&Element::getKey, _1)));
    if (found == end) {
        doSomething(v, NULL);
    } else {
        doSomething(v, *found);
    }

(lambda (k v) (do-something v (find-if (lambda (e) (equal k (get-key e))) v)))
And again, here I already used boost::bind; if I had to use the STL
instead, it would have tripled the number of lines.
And again, here I transposed directly. In this specific example, Lisp
is even higher level (i.e. more concise):
(lambda (k v) (do-something v (find k v :key (function get-key) :test (function equal))))
You just cannot say that the STL is high level. You can only
say that it is slightly higher level than pure C code. But it
doesn't yet qualify as "high level".

That's a different argument. The STL doesn't attempt to furnish
application level abstractions, nor should it. By definition,
you can't have a higher level without a lower level; you need
something to build on. The STL may not be particularly well
designed for what it does, either, but that's orthogonal to the
question.
 
Noah Roberts

Pascal said:
No. find_if is low level.

Compare:

    Key k;
    std::vector<Element> v;
    std::vector<Element>::iterator end = v.end();
    std::vector<Element>::iterator found;
    found = find_if(v.begin(), end,
                    boost::bind(&Key::equal, k,
                                boost::bind(&Element::getKey, _1)));
    if (found == end) {
        doSomething(v, NULL);
    } else {
        doSomething(v, *found);
    }

vs.

(lambda (k v) (do-something v (find-if (lambda (e) (equal k (get-key e))) v)))


And again, here I already used boost::bind; if I had to use the STL
instead, it would have tripled the number of lines.

And again, here I transposed directly. In this specific example, Lisp
is even higher level (i.e. more concise):

(lambda (k v) (do-something v (find k v :key (function get-key) :test (function equal))))


You just cannot say that the STL is high level. You can only say that
it is slightly higher level than pure C code. But it doesn't yet
qualify as "high level".

I'm fairly confident that the distinction between low and high level
code has nothing to do with whether it was written in Lisp.
 
Noah Roberts

James said:
find_if is part of the
compiler

LOL!!! No it's not.

I knew you were talking some serious nonsense but I never for a moment
thought you didn't know the difference between a compiler and a library
component.
 
Ian Collins

James said:
That would only be true if the RAM were shared. In the past, we
used our workstations as a build farm---typically, each machine
would only compile one or two sources anyway, but there would be
several hundred machines downloading the headers from the
server, all at the same time.

I've done the same, on older hardware. You also get the barrage of near
simultaneous writes back to the server.

With modern hardware capable of running multiple compiles on each node
you use fewer boxes and get more sharing. There's still a lot of
traffic going back, but trunked Ethernet goes a long way to coping with
that.

Like all things computer related, you have to track the technology and
tune your solution to fit.
 
James Kanze

LOL!!! No it's not.
I knew you were talking some serious nonsense but I never for
a moment thought you didn't know the difference between a
compiler and a library component.

It is according to the standard, and it is part of all of the
compilers I use (g++, Sun CC).
 
Pascal J. Bourguignon

Noah Roberts said:
I'm fairly confident that the distinction between low and high level
code has nothing to do with whether it was written in Lisp.

Indeed. It is theoretically possible to write a high level library in
C++. But strangely enough, it looks like it's so hard to do that it's
not done often... Ok, just nagging, there's Lpp.
http://www.interhack.net/projects/lpp/
 
Pascal J. Bourguignon

Walter Bright said:
I'm not sure I agree with your definition of higher level being more
concise. Doesn't higher level mean more abstract?

Yes, and there's a good correlation between abstraction and conciseness.
 
