Linux programming, is there any C++?

  • Thread starter Tomás Ó hÉilidhe

Tomás Ó hÉilidhe

I'm kind of new to Linux and I've started downloading applications and
installing them. Applications are distributed as source code in a
".tar.gz" file. You unzip the file, navigate to the directory, run
"configure", then run "make", then run "make install".

Anyway, in all the programs I've seen, there's only ever been .c files, and
it looks like only gcc has been invoked. Do people in the Linux community
not use C++ and distribute their source in ".tar.gz"?
 

Rolf Magnus

Tomás Ó hÉilidhe wrote:

I'm kind of new to Linux and I've started downloading applications and
installing them. Applications are distributed as source code in a
".tar.gz" file. You unzip the file, navigate to the directory, run
"configure", then run "make", then run "make install".

Anyway, in all the programs I've seen, there's only ever been .c files,
and it looks like only gcc has been invoked. Do people in the Linux
community not use C++ and distribute their source in ".tar.gz"?

They do. Many libraries are written in C so that they can be used more
easily from other languages.
 

Jeff Schwab

Tomás Ó hÉilidhe said:
I'm kind of new to Linux and I've started downloading applications and
installing them. Applications are distributed as source code in a
".tar.gz" file. You unzip the file, navigate to the directory, run
"configure", then run "make", then run "make install".

Anyway, in all the programs I've seen, there's only ever been .c files, and
it looks like only gcc has been invoked. Do people in the Linux community
not use C++ and distribute their source in ".tar.gz"?

C and Unix have a lot of history together. C was invented specifically
to write Unix, and remains the lingua franca for linking object files
compiled from different source languages.
 

Jeff Schwab

Roland said:
C++ never had much weight in the Unix world because it violates the
basic Unix philosophy:
http://www.catb.org/~esr/writings/taoup/html/ch01s07.html

I don't think C++ violates KISS. C++ is certainly a more complicated
language than C, but that does not imply that applications will be
correspondingly more complex; in fact, the reverse seems to be true most
of the time. C++ provides language features that help keep client code
simple.

C++ happened to hit the scene a little later than C, but was developed
at Bell Labs mainly for use on Unix. Plenty of modern Unix applications
are written in C++, including those developed by Unix industry
consortia. Dominant desktop environments, including CDE and KDE, are
written in C++. Some of the most popular languages used on Unix servers
today, including Java and Ruby, are written in a sort of hybrid between
C and C++.

I personally love both Unix and C++. I use C for device drivers, but
prefer C++ for user-space applications. What's great about C is that
the language is so simple (for the amount of power it gives you) that it
is often the first HLL for which compilers are available on new
platforms. What's great about C++ is that it plays well with very
low-level code, but provides excellent support for arbitrarily
high-level abstractions.
 

Ian Collins

Jeff said:
I personally love both Unix and C++. I use C for device drivers, but
prefer C++ for user-space applications.

Why make that distinction? Provided you avoid language features that
require run time support, C++ is an excellent language for drivers.
 

Jeff Schwab

Ian said:
Why make that distinction? Provided you avoid language features that
require run time support, C++ is an excellent language for drivers.

If I ever gave a client a Unix device driver written in C++, I'd be told
to re-write it. (I know this for a fact, since I've suggested it.) The
problem is that the overwhelming majority of people who professionally
write Unix device drivers are far more comfortable with C than C++, so
maintenance of a C++ driver on Unix is potentially much more expensive.

This isn't to say, btw, that the C++ language is inherently unsuitable
for device drivers. I've jumped through plenty of hoops to make C do
things that would be trivial in C++. At the top of my list is the
common practice in C of using unadorned ints to represent all kinds of
codes, sets of flags, and other bundles of information. Given an int
that semantically represents a specific type of information, code has to
be very careful to pass the int to type-appropriate functions. I would
vastly prefer that such ints be at least wrapped in separate static
types, with conceptually similar functions overloaded for the different
static types. When debugging, trying to figure out the meaning of some
bundle of bits can get awfully frustrating, especially if variables are
poorly named. Which brings me to namespaces...
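Something along these lines (all the names here are made up for
illustration) is what I have in mind: distinct wrapper types for distinct
kinds of codes, so the compiler rejects mix-ups that a plain int silently
accepts.

#include <iostream>

// Hypothetical sketch: one static type per kind of code.
struct error_code   { int value; explicit error_code(int v)   : value(v) {} };
struct status_flags { int value; explicit status_flags(int v) : value(v) {} };

void report(error_code e)   { std::cout << "error " << e.value << '\n'; }
void report(status_flags f) { std::cout << "flags " << f.value << '\n'; }

int main() {
    report(error_code(3));       // resolves to the error_code overload
    report(status_flags(0x10));  // resolves to the status_flags overload
    // report(42);               // would not compile: no conversion from int
}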
 

Phil Endecott

Tomás Ó hÉilidhe said:
I'm kind of new to Linux and I've started downloading applications and
installing them. Applications are distributed as source code in a
".tar.gz" file. You unzip the file, navigate to the directory, run
"configure", then run "make", then run "make install".

Anyway, in all the programs I've seen, there's only ever been .c files, and
it looks like only gcc has been invoked. Do people in the Linux community
not use C++ and distribute their source in ".tar.gz"?

Go to freshmeat.net, click "browse", then "programming language" under
"browse by" [at this point you'll notice that it reports ~8000 C
projects and ~4000 C++ projects], click C++, then "pick a filter to
add", "operating system :: POSIX :: linux". (Note that much so-called
Linux software will actually run on many platforms, so it may not be
catalogued as precisely as the Linux category here.)

Most peculiarly, the first item in the results is the Linux kernel!
Someone is having a joke there... Then you'll see the expected projects
like OpenOffice and a few thousand more.


Phil.
 

Ian Collins

Jeff said:
If I ever gave a client a Unix device driver written in C++, I'd be told
to re-write it. (I know this for a fact, since I've suggested it.) The
problem is that the overwhelming majority of people who professionally
write Unix device drivers are far more comfortable with C than C++, so
maintenance of a C++ driver on Unix is potentially much more expensive.

Fair enough, I've only ever had to supply binary drivers. A large
proportion of the C drivers I've seen (especially NIC drivers) tend to be
written in pseudo-OO C.
 

Jeff Schwab

Ian said:
Fair enough, I've only ever had to supply binary drivers. A large
proportion of the C drivers I've seen (especially NIC drivers) tend to be
written in pseudo-OO C.

Yep. Have you seen the C-language object model used by the BSD kernel?
There's so much preprocessor magic, it's almost like learning a new
language. It's hard to believe that it wouldn't be easier just to make
the jump to C++.

http://www.freebsd.org/doc/en_US.ISO8859-1/books/arch-handbook/kernel-objects-using.html
 

peter koch

Matthias said:
provides excellent support for arbitrarily high-level abstractions
                               ^^^^^^^^^^^^^^^^^^^^^^

Please elaborate.

Jeff would have to respond to the "arbitrarily" part, but just take a
look at, e.g., Boost or the standard C++ library, where quite
complex constructs exist to make life simple for the developer.
For one example, the collections in the standard library, together with
the algorithms, give quite a rich environment to work in (a small
illustration follows below).
Another example is a library such as Blitz, which allows numerical code
to be written in a notation that corresponds almost perfectly to the
mathematical one, but still gives optimal performance.
My third and last example is Spirit, which allows you to write parsers
directly in the source code, describing the language in something that
is very close to BNF.
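As a small, made-up illustration of that first point, containers and
algorithms compose with very little code:

#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> v;
    v.push_back(3);
    v.push_back(1);
    v.push_back(2);

    std::sort(v.begin(), v.end());   // a generic algorithm over the container

    for (std::vector<int>::size_type i = 0; i != v.size(); ++i)
        std::cout << v[i] << '\n';   // prints 1, 2, 3
}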

/Peter
 

Jeff Schwab

Matthias said:
provides excellent support for arbitrarily high-level abstractions
                               ^^^^^^^^^^^^^^^^^^^^^^

Please elaborate.

C++ is a language that relies heavily on, and provides correspondingly
great support for, libraries. No matter how tricky code gets, it can
almost always be safely encapsulated behind a clean, easy-to-use
interface. When you first start writing C++ code, you can use
goodies like the standard string class and the standard container types
right away, with very little understanding of how they work. As your
own code becomes more complicated, you can provide easy access to it
through simple, intuitive APIs, without hurting performance. Even the
standard library is written in plain old C++ code, without any voodoo
you couldn't use in your own code.

Complicated ideas can be represented directly in C++ code through the
static type system. The ideas are then combined using a relatively
small set of syntactic constructs that have been assigned particular
meanings, either by convention or formal standards. Once you understand
the syntax and the conventions, you can mix and match all kinds of
different ideas from different people, and -- this is the amazing part
-- the compiler can help determine whether particular ways of connecting
ideas actually make sense. That's not to say you don't have to think;
on the contrary, writing good C++ code often requires more thought than
writing a "good enough" program in (say) Python. The C++ compiler,
though, can help you in ways no other compiler can, or at least none
I've ever seen. For example:

http://www.boost.org/libs/concept_check/concept_check.htm

I think the proof, for many of us, was the STL. Before I had seen STL
containers, I struggled briefly with MFC. If I wanted to store objects
in an MFC container, I had to inherit them from a particular class,
override virtual methods, and generally jump through a bunch of
artificial hoops.

The developers of the STL took a different approach: They wrote
containers that could hold objects of any type supporting particular
syntax and semantics. The syntax they chose was the syntax supported by
the primitive types inherited from C. Container elements had to be
assignable, copyable, constructible without arguments, etc. Most of my
object types already supported those concepts, using the expected
syntax, so they just worked with the STL containers, right out of the
box. Then I found out that the same algorithm, literally a single piece
of source code, could work with any container type that supported a
particular set of concepts. Most of the standard library algorithms
don't even accept containers, but instead take iterators, which are in
turn classified according to the sorts of ideas they represent, all
using the same old pointer-style syntax inherited from C. Then I found
out that I had been wasting my time writing my own string class, because
the one in the standard library supported more functionality with better
syntax: Array-style indexed access to characters, concatenation using
operator+, random access iterators... all the other things I liked
about plain char*, without the headaches.
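To give a rough idea (a toy example, not from any real project), the very
same std::count call works on a vector and a list, because it asks only
for the iterator concepts each container provides:

#include <algorithm>
#include <iostream>
#include <list>
#include <vector>

int main() {
    const int raw[] = { 1, 2, 2, 3 };
    std::vector<int> v(raw, raw + 4);
    std::list<int>   l(raw, raw + 4);

    // One algorithm, two container types, zero changes to the source.
    std::cout << std::count(v.begin(), v.end(), 2) << '\n';   // prints 2
    std::cout << std::count(l.begin(), l.end(), 2) << '\n';   // prints 2
}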

The same approach continues to be useful; in fact, some of the neater
items commonly used in modern C++ are smart pointers, which support the
same old syntax as raw pointers, but actually encapsulate potentially
complicated ideas. Dereferencing a smart pointer can, for example,
automatically obtain and release a Mutex lock. Boost Shared Pointers
(or std::tr1::shared_ptr), using the same syntax, support automatic,
reference-counted garbage collection.

http://www.boost.org/libs/smart_ptr/shared_ptr.htm
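A tiny sketch of the reference-counting behavior (assuming Boost is
installed; std::tr1::shared_ptr behaves the same way):

#include <iostream>
#include <boost/shared_ptr.hpp>

struct widget {
    ~widget() { std::cout << "widget destroyed\n"; }
};

int main() {
    boost::shared_ptr<widget> a(new widget);
    {
        boost::shared_ptr<widget> b = a;      // count goes to 2
        std::cout << a.use_count() << '\n';   // prints 2
    }                                         // b goes away, count back to 1
    std::cout << a.use_count() << '\n';       // prints 1
}                                             // count hits 0, widget destroyed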

The last time I had to deal directly with garbage collection in C was to
extend Tcl, and I remember staring at the screen, walking through the
code, trying to convince myself that I had called the right increment
and decrement functions, on the right structures, at exactly the right
places... Man, I don't *ever* want to go back there.

But that's all just the beginning. Moving forward, it turns out you can
actually implement a significant portion of most programs' functionality
long before you get any run-time data, and thereby get a tremendous
amount of help from the compiler during development. Getting the old
syntax to support these new "meta-programming" techniques can make for
cryptic code, but because of C++'s incredible support for encapsulation,
the functionality is still easily accessed from client code through
libraries:

http://en.wikipedia.org/wiki/Template_meta-programming
http://www.boost.org/libs/mpl/doc/index.html
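The classic toy example (nothing more than a sketch) is a factorial
computed entirely by the compiler, before the program ever runs:

template <unsigned N>
struct factorial {
    static const unsigned long value = N * factorial<N - 1>::value;
};

template <>
struct factorial<0> {
    static const unsigned long value = 1;
};

int main() {
    // Usable anywhere a compile-time constant is required, e.g. an array bound.
    char buffer[factorial<5>::value];   // 120 bytes
    return sizeof buffer == 120 ? 0 : 1;
}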

Even features not "natively" supported by C++ can usually be implemented
by easy-to-use libraries. Sure, reference-counted smart pointers can be
used for simple garbage collection, but do you want to see something
really cool? Here's a library I haven't started using in production
code yet, but I'm itching to try:

http://www.boost.org/doc/html/lambda.html
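For the curious, the canonical sort of use (assuming Boost is installed)
looks something like this:

#include <algorithm>
#include <iostream>
#include <vector>
#include <boost/lambda/lambda.hpp>

int main() {
    std::vector<int> v;
    v.push_back(1);
    v.push_back(2);
    v.push_back(3);

    using namespace boost::lambda;

    // _1 stands for the current element; the expression builds an unnamed
    // function object at compile time.
    std::for_each(v.begin(), v.end(), std::cout << _1 << ' ');
}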

This library supports the kinds of expressions I usually use in
scripting languages, but with all the compile-time goodness I've
come to expect from C++: static concept checks, verification that
function argument types match their declarations, etc. Python doesn't
do any of that for me. I still have to write just as many unit tests
for the C++ version as for the Python version, but the tests don't fail
nearly as often, and all kinds of design issues are caught earlier with
C++ than with Python. I like Ruby a lot because the code is
aesthetically beautiful, but when I'm actually writing code to get
something done, I stick with C++.

C++ is a complicated language, but the wonder of it is that you can get
started with less training than it takes to write decent C code. The
more you learn, the more C++ rewards you. I remember someone I used to
work with, who had a morbid fear of C++, taking one look at a typical
C++ reference book and laughing derisively (yes, derisively, just like
an arrogant Bond villain). "How do they expect anybody to learn all
that?" he asked. The answer is that you don't have to learn it all
before you can use it. When somebody says they "know" C++, I always
wonder what they mean. It's like a ship's captain saying he "knows" the
ocean. C++ is my Desert Island Language, and if you invest some time in
it, I can almost guarantee that you'll be glad you did.
 

Bo Persson

Jeff said:
C++ is a language that relies heavily on, and provides
correspondingly great support for, libraries. [...] C++ is my Desert
Island Language, and if you invest some time in it, I can almost
guarantee that you'll be glad you did.

Wow, a true hallelujah moment. And so right.

Thank you!


Bo Persson
 

James Kanze

[...]
I think the proof, for many of us, was the STL. Before I had
seen STL containers, I struggled briefly with MFC.

Why? There were plenty of better libraries around, even back then.
If I wanted to store objects in an MFC container, I had to
inherit them from a particular class, override virtual
methods, and generally jump through a bunch of artificial
hoops.

A technique which went out of favor even before I learned C++,
around 1990.
The developers of the STL took a different approach: They wrote
containers that could hold objects of any type supporting particular
syntax and semantics.

That was more or less the standard approach at the time the STL
was being developed. The STL probably structured it more than
most, e.g. by requiring typedef's for things like value_type,
but that aspect of the STL was largely standard practice by the
time the STL came along: see the USL library, Booch components,
etc. (Even before templates were added to the language, people
were simulating them with macros.)
The syntax they chose was the syntax supported by the
primitive types inherited from C. Container elements had to
be assignable, copyable, constructible without arguments, etc.

Actually, explicitly avoiding any requirement for a default
constructor was probably an innovation of the STL. I think most
earlier libraries required it. (A more important innovation of
the STL was documenting such requirements in the form of
concepts.)
Most of my object types already supported those concepts,
using the expected syntax, so they just worked with the STL
containers, right out of the box.

Again, that's true of every component library I've ever seen.
Then I found out that the same algorithm, literally a single
piece of source code, could work with any container type that
supported a particular set of concepts.

Another innovation of the STL (although I think that the idea
was already "in the air" at the time) is precisely that
algorithms don't work with containers, but with sequences
(iterators).
Most of the standard library algorithms don't even accept
containers, but instead take iterators, which are in turn
classified according to the sorts of ideas they represent, all
using the same old pointer-style syntax inherited from C.

Which is, of course, the major flaw in the STL---requiring two
iterators instead of one causes no end of problems, making
filtering iterators incredibly difficult, and hindering the
normal nesting of function calls.
Then I found out that I had been wasting my time writing my
own string class, because the one in the standard library
supported more functionality

First, std::string was never part of the STL. And std::string
supports far less functionality than any other string class I
know: no trim or pad, no case manipulation, etc. (I'm not sure
that that's a defect, however.)
with better syntax: Array-style indexed access to characters,
concatenation using operator+,

Interesting. All of the string classes I've ever seen support
this. I'm fairly convinced that array-style indexing, at least
for modification, should not be part of an abstraction which is
supposed to represent text---you never change a single
character, but replace one substring with another. (And of
course, the [] operator of std::string gives you access to the
underlying bytes, not the characters.) And while I gave up
arguing against it years ago, + is NOT the right operator for
concatenation. At least to me, + implies commutativity, and
concatenation definitely isn't commutative. On the other hand,
+ is universally established for concatenation of strings---as I
said, I've never seen a string class in C++ (or a string type in
any other language, except AWK) which used anything else.
random access iterators...

Random access iterators are a misnomer (and in many ways, a
mis-feature---if you need a random access iterator, you aren't
iterating, but operating directly on the container).
all the other things I liked about plain char*, without the
headaches.
The same approach continues to be useful; in fact, some of the
neater items commonly used in modern C++ are smart pointers,
which support the same old syntax as raw pointers, but
actually encapsulate potentially complicated ideas.
Dereferencing a smart pointer can, for example, automatically
obtain and release a Mutex lock.

I'd like to see how that works. Or rather I wouldn't, since it
definitely sounds like a bad idea. Don't you mean rather that
smart pointers can manage the lock, holding it over the lifetime
of the pointer (rather than acquiring and releasing it with each
dereference)?

[...]
Even features not "natively" supported by C++ can usually be implemented
by easy-to-use libraries. Sure, reference-counted smart pointers can be
used for simple garbage collection, but do you want to see something
really cool? Here's a library I haven't started using in production
code yet, but I'm itching to try:

This is more an example of the limitations of what you can do
within the current language. There are a number of subtle
issues, and you sometimes (frequently, in my experience) have to
use special constructions to make it work.

On the whole, lambda expressions/classes are something that
needs real language support. Boost::lambda does as much as is
possible without direct language support, but it still isn't
enough to make lambda truly effective.
C++ is a complicated language, but the wonder of it is that
you can get started with less training than it takes to write
decent C code.

C++ is a very complicated language. It's designed to solve
complicated problems. The complication is there, and can't be
avoided. The only question is whether you want it in the
language (which you have to learn once) or in your application
(which you have to master for each application).
The more you learn, the more C++ rewards you. I remember
someone I used to work with, who had a morbid fear of C++,
taking one look at a typical C++ reference book and laughing
derisively (yes, derisively, just like an arrogant Bond
villain). "How do they expect anybody to learn all that?" he
asked. The answer is that you don't have to learn it all
before you can use it.

But there's no real point in using it otherwise. The point is
that while it may be hard to learn, once you've learned it, the
time invested in doing so is paid back enormously in increased
productivity. (In many ways, it's like the widespread
debate between vim/emacs and simpler editors. It may take
more time to learn vim or emacs, but once you do, your
productivity improves enormously.)
 

Matthias Buelow

C++ is a language that relies heavily on, and provides correspondingly
great support for, libraries. No matter how tricky code gets, it can
almost always be safely encapsulated behind a clean, easy-to-use
interface.

That is fine but it isn't quite what I would call "arbitrarily
high-level". "Arbitrarily high-level" is, for me, a dynamic system that
can completely change itself, even at runtime. Something like Lisp, for
example. C++ can't do that, it is essentially just a complicated and
feature-stuffed assembler preprocessor. Once the code is assembled,
there is nothing left (the simplistic RTTI notwithstanding). I agree
with many of the things you wrote but to claim that C++ is "arbitrarily
high-level" is giving a rather fallacious idea of the language. No
offence intended, of course.
automatically obtain and release a Mutex lock. Boost Shared Pointers
(or std::tr1::shared_ptr), using the same syntax, support automatic,
reference-counted garbage collection.

Except that reference counting isn't a very good mechanism for
implementing automatic memory management. Shared pointers are - in a
class of situations - better than having to do everything manually, but
if you want GC, there are better mechanisms available. (I've heard the
Boehm GC, which is mostly transparent to the programmer, works rather
well with C++ and uses a conservative live-data tracking method.)
 

Thomas J. Gritzan

James said:
On Feb 18, 8:18 pm, Jeff Schwab wrote:
And while I gave up
arguing against it years ago, + is NOT the right operator for
concatenation. At least to me, + implies commutativity, and
concatenation definitely isn't commutative. On the other hand,
+ is universally established for concatenation of strings---as I
said, I've never seen a string class in C++ (or a string type in
any other language, except AWK) which used anything else.

In PHP, strings use the dot operator for concatenation.

Since the dot "." is also used for concatenation in some other contexts
(concatenation of object names and member names in C and C++, and function
composition in maths, which is written with a small circle), it would be the
right operator. But it's not possible because the dot operator is not
overloadable, and it would be ambiguous anyway.
 

Jeff Schwab

James said:
Matthias said:
Jeff Schwab wrote:
[...]
I think the proof, for many of us, was the STL. Before I had
seen STL containers, I struggled briefly with MFC.

Why? There were plenty of better libraries around, even back then.

That may be true, but I had not (and have not) seen them.

A technique which went out of favor even before I learned C++,
around 1990.

I'm talking mid to late nineties, 10-12 years ago.

That was more or less the standard approach at the time the STL
was being developed. The STL probably structured it more than
most, e.g. by requiring typedef's for things like value_type,
but that aspect of the STL was largely standard practice by the
time the STL came along: see the USL library, Booch components,
etc.

Thanks, I'll take a look at those. Are they still worth using?

(Even before templates were added to the language, people
were simulating them with macros.)

I don't know about "most people," but there was a relatively advanced
technique that I have used in C called XInclude:

#define ELEM_T int
#include "list.h"
#undef ELEM_T

It's a far cry from what C++ templates give you. Googling XInclude just
turns up something related to XML processing. Googling XInclude
-XML also fails to turn up the XInclude pattern. It's still not an
especially well-known practice.
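For what it's worth, a hypothetical list.h written in that style might
look roughly like this (the extra pasting macros are needed because ##
only works inside a macro expansion, and the trick only works for
single-token element types like int):

/* list.h -- hypothetical sketch; the including file defines ELEM_T first. */
#define LIST_PASTE2(a, b) a##b
#define LIST_PASTE(a, b)  LIST_PASTE2(a, b)
#define LIST_NODE LIST_PASTE(ELEM_T, _node)

struct LIST_NODE {
    ELEM_T value;
    struct LIST_NODE *next;
};

#undef LIST_NODE
#undef LIST_PASTE
#undef LIST_PASTE2

Including it once per element type then stamps out int_node, double_node,
and so on, much as a template instantiation would.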

Actually, explicitly avoiding any requirement for a default
constructor was probably an innovation of the STL. I think most
earlier libraries required it. (A more important innovation of
the STL was documenting such requirements in the form of
concepts.)


Again, that's true of every component library I've ever seen.

You never saw MFC? Things have gotten better, but even today, most of
the container types in 3rd-party libraries I've seen are nothing like as
sophisticated as the STL. Take the Qt container types, for example; why
is there a QStringList, instead of just a QList specialized for strings?
If I want an object to communicate with Qt at all, it gets even more
complicated: I not only have to subclass their base QObject class, but
also have to implement a bunch of library-specific preprocessor nonsense
called "signal" and "slots." They're insidious; for example, given:

class first_class : public QObject {
    Q_OBJECT
signals:
    void foo(some_namespace::some_type);
};

using some_namespace::some_type;

class second_class : public QObject {
    Q_OBJECT
public slots:
    void foo(some_type);
};

Qt will not recognize that the two methods have equivalent (or even
compatible) signatures. This is what happens when you try to work
outside the language, rather than within it.

Another innovation of the STL (although I think that the idea
was already "in the air" at the time) is precisely that
algorithms don't work with containers, but with sequences
(iterators).

It may have been in the air, but I didn't smell it. The only other
"iterators" I was using at the time were hateful little C-style things
that were intended to work like this:

some_lib_iter* iter = some_lib_create_iter(some_lib_some_container);

while (!some_lib_iter_done(iter)) {
    some_item* item = (some_item*) some_lib_iter_next(iter);
    // ...
}

By the way, I'm currently using a recently written, professional,
industry-specific C++ library that supports almost the same idiom, and I
still don't like it.

Which is, of course, the major flaw in the STL---requiring two
iterators instead of one causes no end of problems, making
filtering iterators incredibly difficult, and hindering the
normal nesting of function calls.

It hasn't been a problem for me. Maybe I've just been spoiled by being
a client, rather than an implementer, of the STL.

First, std::string was never part of the STL.

That depends whom you ask. I always considered it a part of the
standard library that was "retrofitted" to be STL-like. Scott Meyers
includes it as part of the STL in his book, Effective STL, even though
he does not include (for example) the iostreams. SGI seems to claim
std::basic_string as part of the STL:

http://www.sgi.com/tech/stl/stl_introduction.html
And std::string
supports far less functionality than any other string class I
know: no trim or pad, no case manipulation, etc. (I'm not sure
that that's a defect, however.)

The fact that those aren't member functions does not mean they're
difficult; in fact, they're trivial to write. Case-insensitive compare
is covered in plenty of introductory C++ texts, because it's one of the
easiest things to show people.
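For instance, a rough sketch of a free function doing case-insensitive
equality (iequals is just a name I made up) could be as simple as:

#include <algorithm>
#include <cctype>
#include <string>

namespace {
    // Compare two narrow characters without regard to case.
    bool ichar_equal(char a, char b) {
        return std::tolower(static_cast<unsigned char>(a)) ==
               std::tolower(static_cast<unsigned char>(b));
    }
}

bool iequals(const std::string& a, const std::string& b) {
    return a.size() == b.size() &&
           std::equal(a.begin(), a.end(), b.begin(), ichar_equal);
}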

Interesting. All of the string classes I've ever seen support
this. I'm fairly convinced that array-style indexing, at least
for modification, should not be part of an abstraction which is
supposed to represent text---you never change a single
character, but replace one substring with another.

Er, I don't?
(And of
course, the [] operator of std::string gives you access to the
underlying bytes, not the characters.)

That's a sometimes-true but fundamentally misleading statement. If you
have a character type that serves better than char or wchar_t, you're
free to instantiate basic_string with it, specialize char_traits for it,
and generally define your own character type. The lack of a real
Unicode character type in the standard library is a valid weakness, but
not a fundamental limitation of std::basic_string.

And by the way, I was relating my own experience. At the time I first
used std::string, the characters I needed to represent fit very
comfortably into bytes, and the [] operator did provide correct access
to them.

And while I gave up
arguing against it years ago, + is NOT the right operator for
concatenation. At least to me, + implies commutativity, and
concatenation definitely isn't commutative.

That's a valid point.
On the other hand,
+ is universally established for concatenation of strings---as I
said, I've never seen a string class in C++ (or a string type in
any other language, except AWK) which used anything else.

I like operator+ as a concatenator. What I think is more confusing
(until you get used to it) is that the same operator is valid for
individual chars, but with completely different meaning.

#include <iostream>

int main() {
    char c = 'c', d = 'd';

    /* "199" on ASCII platforms. */
    std::cout << (c + d) << '\n';
}

The standard library types behave so much like primitive types most of
the time, that I find it jarring when they are different.
Random access iterators are a misnomer (and in many ways, a
mis-feature---if you need a random access iterator, you aren't
iterating, but operating directly on the container).

That difference is correct, but I find it a natural extension of
iteration. Anyway, what the STL calls iterators are really more like
"pointer-like objects that may be used for iterating, but may sometimes
also be used for other stuff."

I'd like to see how that works.

See Andrei's book Modern C++ Design. I think there's an implementation
in Loki.
Or rather I wouldn't, since it
definitely sounds like a bad idea.

It's definitely open to abuse.
Don't you mean rather that
smart pointers can manage the lock, holding it over the lifetime
of the pointer (rather than acquiring and releasing it with each
dereference)?

No, I mean acquiring and releasing with each dereference. You first
create a type that acquires the lock in its constructor and releases in
its destructor. The smart pointer creates a temporary of that type on
each dereference. This implements the extreme case of fine granularity,
acquiring the locking the mutex for an absolute minimum amount of time,
but with potentially frequent calls to the locking code. Accessing an
object only through such a locking smart-pointer is similar to using a
Java object with only "synchronized" methods, where every time you call
a method, you first have to get a lock on the object.
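A bare-bones sketch of that mechanism (every name here is invented, and
the mutex is just a stand-in that prints instead of really locking) might
look like this:

#include <iostream>

class mutex {                        // stand-in for a real mutex type
public:
    void lock()   { std::cout << "lock\n"; }
    void unlock() { std::cout << "unlock\n"; }
};

template <typename T>
class locking_ptr {
public:
    locking_ptr(T& object, mutex& m) : object_(&object), mutex_(&m) {}

    // Temporary returned by operator->: locks in its constructor, unlocks
    // in its destructor, so the lock is held only for that one access.
    class proxy {
    public:
        proxy(T* object, mutex* m) : object_(object), mutex_(m) { mutex_->lock(); }
        proxy(const proxy& other)            // transfer ownership, so a copy
            : object_(other.object_),        // cannot unlock a second time
              mutex_(other.mutex_) { other.mutex_ = 0; }
        ~proxy() { if (mutex_) mutex_->unlock(); }
        T* operator->() const { return object_; }
    private:
        T* object_;
        mutable mutex* mutex_;
    };

    proxy operator->() const { return proxy(object_, mutex_); }

private:
    T* object_;
    mutex* mutex_;
};

struct account {
    void deposit(int) { std::cout << "deposit\n"; }
};

int main() {
    account a;
    mutex m;
    locking_ptr<account> p(a, m);
    p->deposit(10);                  // prints: lock, deposit, unlock
}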
[...]
Even features not "natively" supported by C++ can usually be implemented
by easy-to-use libraries. Sure, reference-counted smart pointers can be
used for simple garbage collection, but do you want to see something
really cool? Here's a library I haven't started using in production
code yet, but I'm itching to try:

This is more an example of the limitations of what you can do
within the current language. There are a number of subtle
issues, and you sometimes (frequently, in my experience) have to
use special constructions to make it work.

Thanks for the heads up.
On the whole, lambda expressions/classes are something that
needs real language support. Boost::lambda does as much as is
possible without direct language support, but it still isn't
enough to make lambda truly effective.

So you're saying, point blank, flat out, that nobody will ever be able
to write a C++ library that supports lambdas to your satisfaction? That
seems like a pretty sweeping statement. When folks say "you can't do
that without special language support," C++ seems to prove them wrong a
lot; see D&E. I'm not saying you're wrong; I just don't buy into
blanket "that can't be done" statements without some real proof.

C++ is a very complicated language. It's designed to solve
complicated problems. The complication is there, and can't be
avoided. The only question is whether you want it in the
language (which you have to learn once) or in your application
(which you have to master for each application).

Agreed completely.

But there's no real point in using it otherwise.

Huh? Do you really think you know every nook and cranny of the standard
off the top of your head, including the standard libraries? It's enough
to have a fundamental grasp of the items you use regularly, and know
where to look to get more information. I do not ever expect to have the
whole thing memorized. Even if I were intimately familiar with the
current standard, I'd still have to update my knowledge every 5 years or
so, which seems to go against your "learn it once" philosophy.

The point is
that while it may be hard to learn, once you've learned it, the
time invested in doing so is paid back enormously in increased
productivity. (In many ways, it's like the widespread
debate between vim/emacs and simpler editors. It may take
more time to learn vim or emacs, but once you do, your
productivity improves enormously.)

Yep.
 

Jeff Schwab

Matthias said:
That is fine but it isn't quite what I would call "arbitrarily
high-level". "Arbitrarily high-level" is, for me, a dynamic system that
can completely change itself, even at runtime.

That's not providing "high level abstractions," it's making the language
mutable. By "abstraction" I mean the representation of potentially
complicated ideas by simpler ones. Iterating over a std::set, for
example, might involve some fancy left-child/self/right-child recursive
algorithm, but from the standpoint of client code, it's just increment
and dereference, increment and dereference...
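A trivial example of what I mean:

#include <iostream>
#include <set>

int main() {
    std::set<int> s;
    s.insert(3);
    s.insert(1);
    s.insert(2);

    // Client code sees only increment and dereference; the balanced-tree
    // traversal is hidden behind the iterator abstraction.
    for (std::set<int>::const_iterator it = s.begin(); it != s.end(); ++it)
        std::cout << *it << '\n';    // prints 1, 2, 3
}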
Something like Lisp, for
example.

You mean with forms (which I take to be similar to C++ macros)? I'm not
a Lisp expert, but the little I know of it does not seem to provide far more
support for abstraction than C++ does. How do you get Lisp to glue syntax to
concepts, and thereby check for conceptual errors a priori? The only
practical way I know to test programs written in dynamic languages is to
run the program down every possible path, and look for runtime errors.

C++ can't do that, it is essentially just a complicated and
feature-stuffed assembler preprocessor.

By that logic, every compiler is an "assembler preprocessor." :)
Once the code is assembled,
there is nothing left (the simplistic RTTI notwithstanding).

That's not true. You can have all the run-time logic you want: checks
for values being in range (std::vector::at), I/O errors
(std::ios_base::fail), or whatever else you need.
I agree
with many of the things you wrote but to claim that C++ is "arbitrarily
high-level" is giving a rather fallacious idea of the language.

We seem to mean exactly opposite things by "high-level." You take it to
mean that processing can be done as late as possible, whereas I mean that
processing can be done as early as possible.
No
offence intended, of course.

None taken. :)
Except that reference counting isn't a very good mechanism for
implementing automatic memory management. Shared pointers are - in a
class of situations - better than having to do everything manually, but
if you want GC, there are better mechanisms available. (I've heard the
Boehm GC, which is mostly transparent to the programmer, works rather
well with C++ and uses a conservative live-data tracking method.)

I haven't used them, but I hear good things.
 

Matthias Buelow

James said:
the
time invested in doing so is paid back enormously in increased
productivity.

Hmm. Actually, I was a lot more productive in C than I am now in C++
(which I do for money). I also have been more productive with the C++ of
15 years ago. This is unsurprising. In trying to fix the language, it
gets more and more broken, piling workaround upon workaround, making it
a discombobulated mess that is exceptionally hard to use and boggles
the mind with every new detail investigated. I don't think the language
can be fixed at all. Best to bury it in a quiet part of the garden.
Anyway, back to programming.
 
