Linux programming, is there any C++?

  • Thread starter Tomás Ó hÉilidhe
  • Start date

Matthias Buelow

Jeff said:
You mean with forms (which I take to be similar to C++ macros)?

Lisp macros aren't limited like C macros or C++ templates, they allow
full reprogramming of the forms they get as arguments for expansion. I
don't want to go into further details since this is not a Lisp newsgroup
and the issue is well explained on the Web for the interested reader.
I'm not a lisp expert, but the little I know does not seem to provide far less
support for abstraction than C++.

That's a misconception.
The only practical way I know to test programs written in dynamic languages is to
run the program down every possible path, and look for runtime errors.

Yes but this is the normal way with any program. Compilers can only
check for simplistic errors anyway. This is the debate about dynamic
(run-time) typing vs. static (compile-time) typing, discussions (often heated) of
which can be found on the net aplenty, with both sides presenting their
arguments.
By that logic, every compiler is an "assembler prepreprocessor." :)

The difference is whether the compiler (or evaluator, in a generalized
way) is available at runtime. And whether the full language is
available at runtime, allowing one to extend, inspect, debug and change
the application. I'm always frustrated at how limited debugging C++ is,
even with relatively powerful debuggers like gdb. It is very
frustrating and cumbersome.
That's not true. You can have all the run-time logic you want: checks
for values being in range (std::vector::at), I/O errors
(std::ios_base::fail), or whatever else you need.

You generally cannot replace functions, extend objects (classes),
compile new functions from user input, generate new types and classes,
etc. All this information is lost at runtime (except for debug info and
RTTI) and there's no way to change or extend the program. (Writing out a
C++ source file, running the compiler on it and loading shared objects
obviously isn't what I mean, and it's not portable anyway.)
We seem to mean exactly opposite things by "high-level." You take it to
mean that processing can be done as late as possible, whereas I mean that
processing can be done as early as possible.

I like Alan Perlis's definition:
"A programming language is low level when its programs require attention
to the irrelevant."
Having to think about when to pass a reference, a pointer, or whether
the value will be copied (and if a suitable copy constructor is
available) is nothing but a nuisance when you just want to express an
algorithm. I frequently run into the difference between pointers and
iterators, where the latter are supposed to work somewhat like the former
in a few simple cases but are, generally, something completely
different. Too much artificial bureaucracy, too much typing, too much
program text for essentially simple things.
 

Jeff Schwab

Matthias said:
Hmm. Actually, I was a lot more productive in C than I am now in C++
(which I do for money). I also have been more productive with the C++ of
15 years ago.

What is it that you could do then, that you can't feasibly do now? I
believe what you're saying, but I'm curious what "broke" for you.
 

Matthias Buelow

Jeff said:
What is it that you could do then, that you can't feasibly do now? I
believe what you're saying, but I'm curious what "broke" for you.

One example is running into error messages like:

blah.h:13: error: conversion from '<unresolved overloaded function type>'
to non-scalar type '__gnu_cxx::__normal_iterator<foo*,
std::vector<foo, std::allocator<foo> > >' requested

This is what makes it so slow and frustrating. Too many WTFs/minute. gcc
produces enormous, often barely intelligible output for really minor
offences. Templates have really obfuscated the type system.
(Actually, the above error message is still relatively harmless; I just
couldn't find anything worse off-hand.) Of course one learns to
recognize certain patterns, but sometimes it's still too much
head-scratching before one knows what's going wrong.

Another thing is debugging. I know my way around with gdb fairly well,
so that isn't the problem. The problem is the combination of an
obfuscating static type system (see above), too little runtime
information about said type system, and manual memory
management in (real-life) programs that do not necessarily follow
bandaid principles like RAII and that are often event-driven
(like modern GUI applications). Sensory overload, coredump.
A dynamic language with good runtime support and automatic memory
management would be a big improvement here (I'm not talking about
Java/C#. Maybe Objective-C, never used it but it looks promising at
first glance. Maybe a modern version of Lisp. Or something Smalltalky.
Things like these.)
 

James Kanze

James said:
Matthias Buelow wrote:
Jeff Schwab wrote:
[...]
If I wanted to store objects in an MFC container, I had to
inherit them from a particular class, override virtual
methods, and generally jump through a bunch of artificial
hoops.
A technique which went out of favor even before I learned C++,
around 1990.
I'm talking mid to late nineties, 10-12 years ago.

Microsoft always was a little behind the times.
Thanks, I'll take a look at those. Are they still worth using?

For the most part, they aren't even still around. I don't know
what happened with USL (which was the half-standard library for
a long time), but Unix System Laboratories has been gone for ages.
The Booch components were bought up by Rogue Wave, so that they
wouldn't have to compete with them. (The Booch components were
particularly well designed.)

You might want to take a look at OSE
(http://ose.sourceforge.net). IMHO, a lot better designed and
easier to use than the STL. Above all, a different approach.
I tend to use the STL mainly as low-level tools, over which I
build the library I actually use. OSE is usable directly.
I don't know about "most people," but there was a relatively advanced
technique that I have used in C called XInclude:
#define ELEM_T int
# include "list.h"
#endif
It's a far cry from what C++ templates give you. Googling XInclude just
turns up XML-related processing. Googling XInclude
-XML also fails to turn up the XInclude pattern. It's still not an
especially well-known practice.

Try googling for <generic.h>:).

Some people today claim that templates are too complicated.
They've never used <generic.h>. (Actually, this is a good
example of where language complexity makes life easier for the user.)
You never saw MFC?

No. It wasn't available on the machines I worked on.
Things have gotten better, but even today, most of the
container types in 3rd-party libraries I've seen are nothing
like as sophisticated as the STL.

There are different ways of being sophisticated. There are some
things that are very sophisticated, and above all, very complete
in the STL. There are also some serious design flaws, which make it
difficult to do even the simplest operations (a filtering
iterator, for example).
Take the Qt container types,

I'm afraid I don't know Qt. As soon as I heard that they needed
to preprocess, rather than using straight C++, I dropped it from
consideration. (Not that it makes much difference---the
machines I target typically don't have any terminal attached,
graphic or otherwise, so a GUI isn't really an issue.)

[...]
This is what happens when you try to work
outside the language, rather than within it.

Agreed. That's why I didn't bother looking any further when I
heard you needed a special pre-processor.

If I were going to write something with a GUI, I'd probably give
wxWidgets a trial. On the other hand, GUI's are something that
Java actually does quite well. (More because Swing is well
designed, than because of anything in the language itself.)
It may have been in the air, but I didn't smell it.

ADT's have a long, long history. I first heard about them in
the late 1970's. They were common practice in languages that
supported them (Lisp dialects, Ada) by the mid 1980's.
The only other "iterators" I was using at the time were
hateful little C-style things that were intended to work like
this:
some_lib_iter* iter = some_lib_create_iter(some_lib_some_container);

while (!some_lib_iter_done(iter)) {
    some_item* item = (some_item*) some_lib_iter_next(iter);
    // ...
}
By the way, I'm currently using a recently written,
professional, industry-specific C++ library that supports
almost the same idiom, and I still don't like it.

It's very close to the USL idiom:). And the Java one. And
yes, combining advancing and accessing in a single function is
NOT a good idea. But one iterator still beats two, when you
have one function providing the range for another, or when you
want a filtering iterator. (Look at all the hoops
boost::iterator has to jump through to make their stuff work.)
It hasn't been a problem for me. Maybe I've just been spoiled
by being a client, rather than an implementer, of the STL.

So how do you write a function which returns a range, and use
the return value of that function as the argument to a function
which takes a range? Or how do you use the decorator pattern on
an iterator, to provide a filtering iterator?

My usual policy today is to write my iterators according to the
GoF pattern, with three functions: isDone(), next() and
element(), adding isEqual() if at all possible. And to have
them inherit from IteratorOperators<>, which uses the Barton and
Nackman trick to provide the STL interface. When it makes no
difference otherwise, I'll use the STL interface, on the grounds
that that's what most C++ programmers will be most familiar
with. But the clean interface is there if I need it, e.g. for
chaining, filtering, etc.
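A minimal sketch of what such a base might look like (this IteratorOperators<> is a reconstruction for illustration, not James's actual class; the value type is passed as a second template parameter to keep it simple):

#include <iostream>

// Maps the GoF-style interface -- next(), element(), isEqual() -- onto the
// operators STL code expects.  The friend definitions are the Barton and
// Nackman part: they are found through argument-dependent lookup.
template <typename Derived, typename Value>
class IteratorOperators {
public:
    Derived& operator++()    { self().next(); return self(); }
    Derived  operator++(int) { Derived tmp(self()); self().next(); return tmp; }
    Value    operator*() const { return self().element(); }

    friend bool operator==(Derived const& lhs, Derived const& rhs)
        { return lhs.isEqual(rhs); }
    friend bool operator!=(Derived const& lhs, Derived const& rhs)
        { return !lhs.isEqual(rhs); }

private:
    Derived&       self()       { return static_cast<Derived&>(*this); }
    Derived const& self() const { return static_cast<Derived const&>(*this); }
};

// A trivial client: an iterator over the ints in [current, end).
class IntRangeIterator
    : public IteratorOperators<IntRangeIterator, int> {
public:
    IntRangeIterator(int current, int end) : current_(current), end_(end) {}
    bool isDone() const  { return current_ == end_; }
    void next()          { ++current_; }
    int  element() const { return current_; }
    bool isEqual(IntRangeIterator const& other) const
        { return current_ == other.current_; }
private:
    int current_, end_;
};

int main() {
    IntRangeIterator it(0, 5), end(5, 5);
    for ( ; it != end; ++it)
        std::cout << *it << ' ';    // prints: 0 1 2 3 4
    std::cout << '\n';
}

The derived class only writes the GoF-style members; the STL-facing operators fall out of the base.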
That depends whom you ask. I always considered it a part of
the standard library that was "retrofitted" to be STL-like.

Not really. The STL was a library developed by Stepanov while
he was at HP, and it didn't contain a string type. But when a
large part of the STL was adopted into the standard, the string
class that was there was retrofitted to sort of conform.
Scott Meyers includes it as part of the STL in his book,
Effective STL, even though he does not include (for example)
the iostreams. SGI seems to claim std::basic_string as part
of the STL:

SGI has extended the STL to include hash tables, strings, and
probably a few other things.
The fact that those aren't member functions does not mean
they're difficult; in fact, they're trivial to write.

But you have to write them:).

Seriously, the problem with std::string is that it is sort of a
bastard---it's too close to an STL container of charT to be an
effective abstraction of text, and it adds a bit too much which
is text oriented to be truly an STL container.

In its defense: even today, I'm not sure what a good abstraction
of text should support.
Case-insensitive compare is covered in plenty of introductory
C++ texts, because it's one of the easiest things to show
people.

Case insensitive compare is one of the most difficult problems I
know. Just specifying it is extremely difficult.
Er, I don't?

Then your code is probably wrong. (Note that you're certainly
not alone in this. The toupper and tolower functions in C and
in C++ all suppose a one to one mapping, which doesn't
correspond to the real world, and every time I integrated my
pre-standard string class into a project, I had to add a
non-const []---although the class supported an lvalue substring
replace:
s.substring( 3, 5 ) = "abcd" ;
was the equivalent of
s = s.replace( 3, 5, "abcd" ) ;
.)
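A rough sketch of such an lvalue substring, layered over std::string purely for illustration (SubstringRef and the free substring() function are made-up names, not the pre-standard class being described):

#include <cstddef>
#include <iostream>
#include <string>

// Proxy returned by substring(); assigning to it forwards to replace(),
// so the replacement text may be longer or shorter than the range.
class SubstringRef {
public:
    SubstringRef(std::string& s, std::size_t pos, std::size_t len)
        : s_(s), pos_(pos), len_(len) {}
    SubstringRef& operator=(std::string const& text) {
        s_.replace(pos_, len_, text);
        return *this;
    }
    operator std::string() const { return s_.substr(pos_, len_); }
private:
    std::string& s_;
    std::size_t pos_, len_;
};

SubstringRef substring(std::string& s, std::size_t pos, std::size_t len) {
    return SubstringRef(s, pos, len);
}

int main() {
    std::string s("0123456789");
    substring(s, 3, 5) = "abcd";    // same effect as s.replace(3, 5, "abcd")
    std::cout << s << '\n';         // prints: 012abcd89
}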
(And of course, the [] operator of std::string gives you
access to the underlying bytes, not the characters.)
That's a sometimes-true but fundamentally misleading
statement. If you have a character type that serves better
than char or wchar_t, you're free to instantiate basic_string
with it, specialize char_traits for it, and generally define
your own character type.

Are you kidding? Have you ever tried this?

But that's not the problem. I usually use UTF-8, which fits
nicely in a char. But [] won't return a character.
The lack of a real Unicode character type in the standard
library is a valid weakness, but not a fundamental limitation
of the std::basic_string.

Even char32_t will sometimes require two char32_t for a single
character: say a q with a hacek.
And by the way, I was relating my own experience. At the time
I first used std::string, the characters I needed to represent
fit very comfortably into bytes, and the [] operator did
provide correct access to them.

Take a look at my .sig. It should be obvious that this is not
the case for me.

But even in English, if you're dealing with text, how often do
you replace a single letter, rather than a word?
That's a valid point.

I suggested / (in C++, & has the wrong precedence), but that
went over like a lead balloon.
I like operator+ as a concatenator. What I think is more
confusing (until you get used to it) is that the same operator
is valid for individual chars, but with completely different
meaning.

That's because char isn't a character type, but a small integral
type. That's because we don't have a character type. (As
mentioned above, I'm not sure that we have enough practice,
even today, to know what we would really need in a character
type, so perhaps the current situation is the best we can do.
Or it would be, if char were required to be unsigned.)
#include <iostream>

int main() {
    char c = 'c', d = 'd';
    /* "199" on ASCII platforms. */
    std::cout << (c + d) << '\n';
}
The standard library types behave so much like primitive types
most of the time, that I find it jarring when they are
different.
That difference is correct, but I find it a natural extension
of iteration. Anyway, what the STL calls iterators are really
more like "pointer-like objects that may be used for
iterating, but may sometimes also be used for other stuff."

Exactly. Which is both their force and their weakness.

[...]
No, I mean acquiring and releasing with each dereference. You first
create a type that acquires the lock in its constructor and releases in
its destructor. The smart pointer creates a temporary of that type on
each dereference. This implements the extreme case of fine granularity,
locking the mutex for an absolute minimum amount of time,
but with potentially frequent calls to the locking code. Accessing an
object only through such a locking smart-pointer is similar to using a
Java object with only "synchronized" methods, where every time you call
a method, you first have to get a lock on the object.
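A minimal sketch of the kind of locking smart pointer being described, using std::mutex purely for illustration (LockingPtr and LockedProxy are made-up names):

#include <mutex>

// Each use of operator-> creates a temporary LockedProxy; its unique_lock
// member locks the mutex on construction and unlocks on destruction, and
// the temporary lives until the end of the full expression, so the mutex
// is held for the duration of the single member call.
template <typename T>
class LockingPtr {
public:
    LockingPtr(T& object, std::mutex& mutex) : object_(&object), mutex_(&mutex) {}

    class LockedProxy {
    public:
        LockedProxy(T* object, std::mutex& mutex) : lock_(mutex), object_(object) {}
        T* operator->() const { return object_; }
    private:
        std::unique_lock<std::mutex> lock_;
        T* object_;
    };

    LockedProxy operator->() const { return LockedProxy(object_, *mutex_); }

private:
    T* object_;
    std::mutex* mutex_;
};

// Usage sketch:
//     Account a;  std::mutex m;  LockingPtr<Account> p(a, m);
//     p->deposit(10);    // locked for exactly this one call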

That much I understood. One of the things that I found out
quickly in Java is that synchronized methods were useless, and
could only be considered a misfeature. (But I guess it's nice
to say that in C++, you can emulate even the misfeatures of
another language.)

Of course, I'd thought about the technique you describe above,
and rejected it because it doesn't work in some frequent cases,
e.g.:

f( smartPtr->get1(), smartPtr->get2() ) ;
[...]
On the whole, lambda expressions/classes are something that
needs real language support. Boost::lambda does as much as is
possible without direct language support, but it still isn't
enough to make lambda truly effective.
So you're saying, point blank, flat out, that nobody will ever
be able to write a C++ library that supports lambdas to your
satisfaction? That seems like a pretty sweeping statement.
When folks say "you can't do that without special language
support," C++ seems to prove them wrong a lot; see D&E. I'm
not saying you're wrong; I just don't buy into blanket "that
can't be done" statements without some real proof.

Well, where lambda is concerned, a lot of really intelligent
people have been trying. Boost::lambda is amazing in what it
does, but it still encounters limits.

[...]
Huh? Do you really think you know every nook and cranny of
the standard off the top of your head, including the standard
libraries?

Not every nook and cranny. But I do expect anyone using C++ to
have at least an awareness of what it can do. You don't have to
know the details, but you do have to know that the possibility
exists, and where you should start looking in the documentation.

My point is just that if your goal is to just learn a minimum,
and start hacking code, C++ probably isn't the language for you.
The minimum necessary is a good deal more than for a lot of
other languages. But... the effort you invest won't be wasted,
because once you do have a good grasp of the language, your
productivity will be considerably higher than in any other
language. (At least, that's been my experience. But of course,
I've not tried every other possible language---maybe there's one
out there in which I'd be even more productive. If so, however,
I'm willing to bet that it will be just as complicated as C++,
if not more so.)
It's enough to have a fundamental grasp of the items you use
regularly, and know where to look to get more information. I
do not ever expect to have the whole thing memorized. Even if
I were intimately familiar with the current standard, I'd
still have to update my knowledge every 5 years or so, which
seems to go against your "learn it once" philosophy.

It's a fact of life that in this profession, we can't afford to
stop learning. And unlearning. (About six months ago, I was
cleaning out some old directories, and came across some code
I'd written almost 20 years ago. I wouldn't consider such code
acceptable today, but back then, my employers thought very
highly of it.)
 

James Kanze

Hmm. Actually, I was a lot more productive in C than I am now in C++
(which I do for money). I also have been more productive with the C++ of
15 years ago. This is unsurprising. In trying to fix the language, it
gets more and more broken, piling workaround upon workaround, making it
a discombobulated mess that is exceptionally hard to use and boggles
the mind with every new detail investigated. I don't think the language
can be fixed at all. Best to bury it in a quiet part of the garden.
Anyway, back to programming.

That's not been my experience at all. Nor that of any of the
places I've worked which have actually measured productivity.
 

Linonut

* Tomás Ó hÉilidhe peremptorily fired off this memo:
I'm kind of new to Linux and I've started downloading applications and
installing them. Applications are distributed as source code in a
".tar.gz" file. You unzip the file, navigate to the directory, run
"configure", then run "make", then run "make install".

Anyway, in all the programs I've seen, there's only ever been .c files, and
it looks like only gcc has been invoked. Do people in the Linux community
not use C++ and distribute their source in ".tar.gz"?

fluxbox (the window manager) is one C++ project.

There are many others.
 

kamisamanou

I'm kind of new to Linux and I've started downloading applications and
installing them. Applications are distributed as source code in a
".tar.gz" file. You unzip the file, navigate to the directory, run
"configure", then run "make", then run "make install".

Anyway, in all the programs I've seen, there's only ever been .c files, and
it looks like only gcc has been invoked. Do people in the Linux community
not use C++ and distribute their source in ".tar.gz"?

If I remember correctly, Songbird (songbirdnest.com) has several .cpp
source files. I believe LMMS (lmms.sourceforge.net) does too.
 

Jeff Schwab

James Kanze wrote:

(liberally snipped post follows...)
You might want to take a look at OSE
(http://ose.sourceforge.net). IMHO, a lot better designed and
easier to use than the STL. Above all, a different approach.
I tend to use the STL mainly as low-level tools, over which I
build the library I actually use. OSE is usable directly.

Seems like it has special support for Python. Speaking of stuff that's
"in the air," it sure seems like Python is rapidly becoming the de facto
standard scripting/dynamic language for interfacing to programs written
in C++. Now I just have to convince the clients that they don't really
want all that legacy code they've written in a half dozen other
scripting languages, and that it's time to learn yet another...


s/endif/undef ELEM_T/ (my bad)

Try googling for <generic.h>:).

Lots of <generic.h>s, but none that look like precursors to templates.
I was thinking of headers that defined a bunch of type-specific
structures and functions by being included multiple times, each time
with a set of macros representing a different static type. They mention
it briefly here:

http://en.wikipedia.org/wiki/C_preprocessor#Token_Concatenation
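Spelled out, the technique looks roughly like this (list.h and the PASTE helper are illustrative names, not a real library): the header is written once in terms of an ELEM_T macro, and each inclusion pastes the element type into the generated identifiers.

/* list.h -- include once per element type, with ELEM_T defined first. */
#define PASTE2(a, b) a##_##b
#define PASTE(a, b)  PASTE2(a, b)

struct PASTE(list, ELEM_T) {
    ELEM_T value;
    struct PASTE(list, ELEM_T)* next;
};

#undef PASTE
#undef PASTE2
#undef ELEM_T

/* A client translation unit: */
#define ELEM_T int
#include "list.h"      /* declares struct list_int    */
#define ELEM_T double
#include "list.h"      /* declares struct list_double */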

If I were going to write something with a GUI, I'd probably give
wxWidgets a trial. On the other hand, GUI's are something that
Java actually does quite well. (More because Swing is well
designed, than because of anything in the language itself.)

I like Swing, too, although the handful of GUI experts I know still seem
wary of it. I haven't used wxWidgets, but I hear mixed reviews.

It's very close to the USL idiom:). And the Java one. And
yes, combining advancing and accessing in a single function is
NOT a good idea.

Do you use istream_iterator?
But one iterator still beats two, when you
have one function providing the range for another, or when you
want a filtering iterator. (Look at all the hoops
boost::iterator has to jump through to make their stuff work.)

I'll take a look.
So how do you write a function which returns a range,

I don't think I've ever needed to. If I did, I'd probably follow the
STL approach of returning a std::pair (like std::equal_range).
and use
the return value of that function as the argument to a function
which takes a range? Or how do you use the decorator pattern on
an iterator, to provide a filtering iterator?

That, I've done, and with some success. I didn't come across any
particular problems (or if I did, they're so subtle I still don't see
them). You have the outer, decorating iterator, and the inner iterator
whose type is a template parameter. Intercept all
increment/dereference/etc. calls, and provide whatever delegation and
decoration are necessary. No fuss, no muss. Clean, simple client code.
I guess you're not a big fan of STL-style iterators, but I still love
them.
My usual policy today is to write my iterators according to the
GoF pattern, with three functions: isDone(), next() and
element(), adding isEqual() if at all possible. And to have
them inherit from IteratorOperators<>, which uses the Barton and
Nackman trick to provide the STL interface. When it makes no
difference otherwise, I'll use the STL interface, on the grounds
that that's what most C++ programmers will be most familiar
with.

Please continue to provide that interface. :)
But the clean interface is there if I need it, e.g. for
chaining, filtering, etc.

If that's really what you want, then more power to you.

But you have to write them:).

Well, yes, once. Or use the freely available, existing implementations,
which are probably good enough for the typical new C++ developer's purposes.

Seriously, the problem with std::string is that it is sort of a
bastard---it's too close to an STL container of charT to be an
effective abstraction of text, and it adds a bit too much which
is text oriented to be truly an STL container.

I don't see those as contradictory goals. Any representation of text is
effectively a container of characters.

In its defense: even today, I'm not sure what a good abstraction
of text should support.

Right, there still does not seem to be any widespread agreement on that.
It's probably a good idea to keep the C++ standard string class
interface minimal, until C++ developers know what they really want.

Case insensitive compare is one of the most difficult problems I
know. Just specifying it is extremely difficult.

Depends what you mean by it. What most new C++ developers mean by it is
a pretty simple idea, and a FAQ. Plenty of string classes that have
alleged case-insensitive comparison functions actually provide only the
toupper-each-char implementation.
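The toupper-each-char comparison being described is roughly this; it is adequate for plain ASCII, and it is exactly what falls apart for 'ß', combining accents and locale-specific collation:

#include <algorithm>
#include <cctype>
#include <string>

// Naive case-insensitive equality: upper-case each char and compare.
bool ci_equal(std::string const& a, std::string const& b) {
    if (a.size() != b.size())
        return false;
    return std::equal(a.begin(), a.end(), b.begin(),
                      [](char x, char y) {
                          return std::toupper(static_cast<unsigned char>(x))
                              == std::toupper(static_cast<unsigned char>(y));
                      });
}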

If you're talking about an industrial-strength, portable implementation,
then of course it gets complicated, as do all natural-language related
issues. If you have a copy of Effective STL handy: The simple case is
covered by Item 35, and the complicated case is Appendix A, which is the
Matt Austern article from the May 2000 C++ Report.

http://lafstern.org/matt/col2_new.pdf

This article is getting a little long in the tooth; has anything really
changed? The only new info I've seen is library-specific documentation
(ICU and Qt).

Then your code is probably wrong.

I don't believe it is wrong. But you're entitled to your opinion.

(Note that you're certainly
not alone in this. The toupper and tolower functions in C and
in C++ all suppose a one to one mapping, which doesn't
correspond to the real world, and every time I integrated my
pre-standard string class into a project, I had to add a
non-const []---although the class supported an lvalue substring
replace:
s.substring( 3, 5 ) = "abcd" ;
was the equivalent of
s = s.replace( 3, 5, "abcd" ) ;
.)

Whether toupper and tolower are correct is a completely orthogonal issue
to whether it makes sense for the string class to have array-style
character indexing.

(And of course, the [] operator of std::string gives you
access to the underlying bytes, not the characters.)

But that makes sense for that particular abstraction, because
std::string is a typedef meant to represent the common case of
characters that fit within bytes. If the idea of a character is too
complex to be represented by a char or wchar_t, then it merits its own,
dedicated type, with support for conversions, normalization, etc.

Are you kidding? Have you ever tried this?

Yes, and it seemed to work well. It never got released in production
code though, because there just wasn't any need for it.

But that's not the problem. I usually use UTF-8, which fits
nicely in a char. But [] won't return a character.

What do you mean? std::basic_string::operator[] returns a
reference-to-character, as defined by the character and traits types
with which basic_string was instantiated.

Even char32_t will sometimes require two char32_t for a single
character: say a q with a hacek.

I'll take your word for that example. :) Characters just aren't all the
same size anymore.

http://www.joelonsoftware.com/articles/Unicode.html

And by the way, I was relating my own experience. At the time
I first used std::string, the characters I needed to represent
fit very comfortably into bytes, and the [] operator did
provide correct access to them.

Take a look at my .sig. It should be obvious that this is not
the case for me.

Your sig looks fine to me, accented characters and all. It's actually a
nice proof of concept, since it includes three different (Western)
languages.

But even in English, if you're dealing with text, how often do
you replace a single letter, rather than a word?

Admittedly, not often. It's just not something that comes up a lot. If
I'm accessing an individual character, chances are good that I'm
actually iterating over the characters in a string. This kind of code
is usually just buried in low-level library functions. If a library is
going to support strings and substrings, then some code somewhere has to
work at this level. There's no getting around it.

Even if the standard library provided lots of Unicode-friendly string
support, indexed character access would still be important. There will
always be per-character functionality needed by client code, but not
provided directly by the library.

I suggested / (in C++, & has the wrong precedence), but that
went over like a lead balloon.



That's because char isn't a character type, but a small integral
type.

And a badly misnamed one, at that.
That's because we don't have a character type. (As
mentioned above, I'm not sure that we have enough practice,
even today, to know what we would really need in a character
type, so perhaps the current situation is the best we can do.
Or it would be, if char were required to be unsigned.)
Yup.
#include <iostream>

int main() {
    char c = 'c', d = 'd';
    /* "199" on ASCII platforms. */
    std::cout << (c + d) << '\n';
}
The standard library types behave so much like primitive types
most of the time, that I find it jarring when they are
different.
That difference is correct, but I find it a natural extension
of iteration. Anyway, what the STL calls iterators are really
more like "pointer-like objects that may be used for
iterating, but may sometimes also be used for other stuff."

Exactly. Which is both their force and their weakness.

[...]
No, I mean acquiring and releasing with each dereference. You first
create a type that acquires the lock in its constructor and releases in
its destructor. The smart pointer creates a temporary of that type on
each dereference. This implements the extreme case of fine granularity,
locking the mutex for an absolute minimum amount of time,
but with potentially frequent calls to the locking code. Accessing an
object only through such a locking smart-pointer is similar to using a
Java object with only "synchronized" methods, where every time you call
a method, you first have to get a lock on the object.

That much I understood. One of the things that I found out
quickly in Java is that synchronized methods were useless, and
could only be considered a misfeature. (But I guess it's nice
to say that in C++, you can emulate even the misfeatures of
another language.)

Right. :)
Of course, I'd thought about the technique you describe above,
and rejected it because it doesn't work in some frequent cases,
e.g.:

f( smartPtr->get1(), smartPtr->get2() ) ;

Good call.

Not every nook and cranny. But I do expect anyone using C++ to
have at least an awareness of what it can do.

In my experience, most C++ developers have no idea what the language can
do. They use it as a sort of "C with classes," replacing
function-pointers with virtual functions, but otherwise writing
glorified C code.

You don't have to
know the details, but you do have to know that the possibility
exists, and where you should start looking in the documentation.

Agreed. If you're not at least vaguely aware that a feature might
exist, you're not going to benefit from it.

My point is just that if your goal is to just learn a minimum,
and start hacking code, C++ probably isn't the language for you.

Oh, I think it is. Suppose you start with <insert language-of-the-month
here>. "Wow," you say, "this is really neat! LotM lets me print 'hello
world' with just a single line!" Or (this one is in vogue now): "Look
how much stuff I can do with Excel macros! I'm going to implement all
my business logic using them. Instead of writing applications, I'll
give everybody macro-heavy spreadsheets to fill in."

Sooner or later, that person needs to write a real, non-trivial program,
at which point the knowledge they gleaned from "Learn Language X in 24
Seconds" becomes worse than useless. It becomes baggage. Writing very
small programs in C++ is harder than writing them in some other
languages, but the point of newbie hacking isn't just to get something
working, but to lay the groundwork for harder tasks that lie ahead.

I'm sure you've seen this. I agree with it:
http://www.research.att.com/~bs/learn.html

The minimum necessary is a good deal more than for a lot of
other languages. But... the effort you invest won't be wasted,
because once you do have a good grasp of the language, your
productivity will be considerably higher than in any other
language.

Probably true.

(At least, that's been my experience. But of course,
I've not tried every other possible language---maybe there's one
out there in which I'd be even more productive. If so, however,
I'm willing to bet that it will be just as complicated as C++,
if not more so.)

I would have liked to see a more Smalltalk-heavy industry. All modern
dynamic languages seem to me like convoluted imitations of Smalltalk.
I'm not a Smalltalk expert, and it doesn't seem to have much of a fan-base
anymore (like the Lisp cult), but the syntax was so clean, and you could
port it to a new bare-hardware platform in a summer. What happened?
Was it the licensing? Why is Java the server-side "safe bet," rather
than Smalltalk?

It's a fact of life that in this profession, we can't afford to
stop learning. And unlearning. (About six months ago, I was
cleaning out some old directories, and came across some code
I'd written almost 20 years ago. I wouldn't consider such code
acceptable today, but back then, my employers thought very
highly of it.)

The mark of the true professional, as opposed to the mere practitioner,
is that he keeps improving.
 

Jim Langston

Tomás Ó hÉilidhe said:
I'm kind of new to Linux and I've started downloading applications and
installing them. Applications are distributed as source code in a
".tar.gz" file. You unzip the file, navigate to the directory, run
"configure", then run "make", then run "make install".

Anyway, in all the programs I've seen, there's only ever been .c
files, and it looks like only gcc has been invoked. Do people in the
Linux community not use C++ and distribute their source in ".tar.gz"?

I read an interesting article one time on the issues with using C++ as the
Linux OS code. Unfortunately I can't find it, only references such as:
http://coding.derkeiler.com/Archive/C_CPP/comp.lang.c/2006-01/msg01825.html

The main issue, as I remember it, is that using C++ for the core operating system
had big problems with new. new dynamically allocates memory, but when
you're writing the memory-management modules of an operating system, how is new
supposed to work? There would be times when new would be fine, and
times when it wouldn't.

The issue wasn't as much that it couldn't be done, but there was a real
concern that maintenance programmers would use new when they shouldn't and
introduce bugs.

I think that because C++ is not used for the operating system it's become a
matter of, well, if it's not good enough for the operating system, then it's
not good enough for my program, although that wasn't the real issue.
 

James Kanze

James Kanze wrote:
Seems like it has special support for Python. Speaking of stuff that's
"in the air," it sure seems like Python is rapidly becoming the de facto
standard scripting/dynamic language for interfacing to programs written
in C++. Now I just have to convince the clients that they don't really
want all that legacy code they've written in a half dozen other
scripting languages, and that it's time to learn yet another...

I've heard a lot of good things about Python. On the other
hand, I learned scripting back before even perl existed. Since
scripting is not an important enough part of my activity to
justify effort to continuously learn new things, and what I know
suffices for what I do, I still use mainly grep, awk and sed.
s/endif/undef ELEM_T/ (my bad)

Lots of <generic.h>s, but none that look like precursors to templates.
I was thinking of headers that defined a bunch of type-specific
structures and functions by being included multiple times, each time
with a set of macros representing a different static type. They mention
it briefly here:

Is it that long ago that no one still has explanations of how
to use it? Basically, <generic.h> (part of the standard library
which came with CFront) defined macros which allowed you to
write things like:

#define MyClassdeclare(T) \
...
#define MyClassdefine(T) \
...

The user then wrote:
declare( MyClass, T )
and got the declaration for MyClass for type T, and
define( MyClass, T )
to get the implementation. (It may have been implement, rather
than define. It's been awhile.)
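A rough reconstruction of the scheme, for readers who never saw it (the real <generic.h> predates ## and used different pasting machinery, so the details here are approximate):

#define name2(a, b) a##b
#define declare(Class, T)   name2(Class, declare)(T)
#define implement(Class, T) name2(Class, implement)(T)

/* The container author writes the per-type expansions by hand: */
#define Listdeclare(T)              \
    class name2(List, T) {          \
    public:                         \
        void push(T value);         \
        /* ... */                   \
    };

/* declare(List, int) then expands to Listdeclare(int), which declares
   class Listint -- one copy of the source per element type. */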
I like Swing, too, although the handful of GUI experts I know
still seem wary of it. I haven't used wxWidgets, but I hear
mixed reviews.

I've only taken a quick glance, and didn't particularly like
what I saw, but it wasn't enough to fairly judge. The fact
remains that in practice, it and Qt are the only widely used
libraries, and Qt requires a pre-processor.
Do you use istream_iterator?

At times. Most of the time, however, my input requires somewhat
more complex parsing than you can get from an istream_iterator.

[...]
I don't think I've ever needed to.

It would seem to occur naturally fairly often as a result of
functional decomposition. I was using the GoF iterator pattern
long before I'd heard of the STL, with filtering iterators and
functions returning custom iterators as part of the package.
If I did, I'd probably follow the STL approach of returning a
std::pair (like std::equal_range).

Which can't be used directly as an argument for the next
function, so you can't chain.
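For instance, with a hypothetical middle_third() returning a std::pair of iterators, the caller has to name and unpack the pair before passing it on; the calls do not nest:

#include <algorithm>
#include <iostream>
#include <utility>
#include <vector>

typedef std::vector<int>::const_iterator Iter;

// Returns the middle third of the vector as an STL-style [first, second) pair.
std::pair<Iter, Iter> middle_third(std::vector<int> const& v) {
    return std::make_pair(v.begin() + v.size() / 3,
                          v.begin() + 2 * v.size() / 3);
}

int main() {
    std::vector<int> v(9, 5);
    // std::count(middle_third(v), 5) does not compile; the pair must be split:
    std::pair<Iter, Iter> r = middle_third(v);
    std::cout << std::count(r.first, r.second, 5) << '\n';   // prints 3
}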
That, I've done, and with some success. I didn't come across
any particular problems (or if I did, they're so subtle I
still don't see them). You have the outer, decorating
iterator, and the inner iterator whose type is a template
parameter. Intercept all increment/dereference/etc. calls,
and provide whatever delegation and decoration are necessary.
No fuss, no muss. Clean, simple client code.

Except that the incrementation operator will typically want to
increment the decorated iterator more than once, and needs to
know the end, to avoid real problems.

Try writing an iterator which will iterate over the odd values
in a container of int, for example. Or one that will iterate
over the values outside a given range in a container of double.
In general, a filtering iterator must contain both the current
and the end iterators of what it's iterating over.
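A bare-bones sketch of the odd-values case, just to show why the end iterator has to come along (OddIterator is only an illustration, not a complete STL-conforming iterator):

#include <iostream>
#include <vector>

// Skips even values; advancing may step past several elements, so the
// iterator must know where the underlying sequence ends.
template <typename It>
class OddIterator {
public:
    OddIterator(It current, It end) : current_(current), end_(end) { skipEven(); }
    OddIterator& operator++() { ++current_; skipEven(); return *this; }
    int operator*() const { return *current_; }
    bool operator!=(OddIterator const& other) const
        { return current_ != other.current_; }
private:
    void skipEven() {
        while (current_ != end_ && *current_ % 2 == 0)
            ++current_;
    }
    It current_, end_;
};

int main() {
    std::vector<int> v;
    for (int i = 1; i <= 7; ++i)
        v.push_back(i);
    OddIterator<std::vector<int>::const_iterator> it(v.begin(), v.end());
    OddIterator<std::vector<int>::const_iterator> end(v.end(), v.end());
    for ( ; it != end; ++it)
        std::cout << *it << ' ';    // prints: 1 3 5 7
    std::cout << '\n';
}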
I guess you're not a big fan of STL-style iterators, but I
still love them.

I guess if I'd never known anything else, they wouldn't seem so
bad.

[...]
I don't see those as contradictory goals. Any representation
of text is effectively a container of characters.

And there is no basic type in C++ which represents a character.

Text is hard. Very hard, since it was designed by and for
humans, not machines. And humans are a lot more flexible than
machines. (There's also the fact that text is in two
dimensions, rather than one, and that it is graphical. I'm not
sure to what degree a string class should take that into
account, however.)
Right, there still does not seem to be any widespread agreement on that.
It's probably a good idea to keep the C++ standard string class
interface minimal, until C++ developers know what they really want.

Agreed. My real complaint about std::string is that it is too
heavy, not that it is missing features. I'd rather see it as
"just" an STL container. But then, what separates it from
std::vector<char>? Suppose we provide an overloaded operator+=,
operator+ and a replace function for vector, and all of the rest
of the functionality as external functions. (If I consider my
pre-standard string class, only two functions---other than
construction, assignment and destruction, of course---weren't
implemented in terms of other functions. Everything I did with
the string was defined in terms of replace or extract.)
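A sketch of what that might look like (the Text typedef and the choice of free functions are only for illustration):

#include <cstddef>
#include <vector>

typedef std::vector<char> Text;

Text& operator+=(Text& lhs, Text const& rhs) {
    lhs.insert(lhs.end(), rhs.begin(), rhs.end());
    return lhs;
}

Text operator+(Text lhs, Text const& rhs) {
    lhs += rhs;
    return lhs;
}

// Replace [pos, pos + len) with the given text; the length may change.
void replace(Text& s, std::size_t pos, std::size_t len, Text const& with) {
    s.erase(s.begin() + pos, s.begin() + pos + len);
    s.insert(s.begin() + pos, with.begin(), with.end());
}

Everything else -- searching, trimming, case conversion -- would then sit on top as ordinary external functions.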
Depends what you mean by it. What most new C++ developers
mean by it is a pretty simple idea, and a FAQ. Plenty of
string classes that have alleged case-insensitive comparison
functions actually provide only the toupper-each-char
implementation.

The problem is that toupper-each-char isn't implementable. At
least for any usable definition of toupper. What's toupper('ß')
supposed to return?

The real problem with case insensitive comparison, of course, is
that it isn't defined. You can't write a function to implement
it, because you don't know what that function really should do.
(And of course, what it should do depends on the locale. In
France, 'ä' compares equal to 'A', in Germany, it should collate
as "AE". Except, of course, that in France, it would compare
greater than 'A' if the two strings were otherwise equal. And
in Germany, there are actually several different standards for
ordering.)
If you're talking about an industrial-strength, portable
implementation, then of course it gets complicated, as do all
natural-language related issues.

As you say: natural-language related issues. That's the
problem.
If you have a copy of Effective STL handy: The simple case is
covered by Item 35, and the complicated case is Appendix A,
which is the Matt Austern article from the May 2000 C++
Report.

This article is getting a little long in the tooth; has
anything really changed? The only new info I've seen is
library-specific documentation (ICU and Qt).

Well, Matt does seem to ignore the fact that toupper and tolower
not only aren't bijections, but they aren't one to one. As I
said, in German, toupper( 'ß' ) must return a two-character
sequence. It also ignores the fact that many characters require
two units to be represented---even in char32_t (32 bit Unicode).
And that frequently, a single character will have several
possible representations, using different numbers of units: in
Unicode, "\u00D4" and "\u006F\u0302" must compare equal. (Both
represent a capital O with a circumflex accent.)

[...]
(Note that you're certainly
not alone in this. The toupper and tolower functions in C and
in C++ all suppose a one to one mapping, which doesn't
correspond to the real world, and every time I integrated my
pre-standard string class into a project, I had to add a
non-const []---although the class supported an lvalue substring
replace:
s.substring( 3, 5 ) = "abcd" ;
was the equivalent of
s = s.replace( 3, 5, "abcd" ) ;
.)
Whether toupper and tolower are correct is a completely
orthogonal issue to whether it makes sense for the string
class to have array-style character indexing.

The question is: when could you use a non-const [] on a string,
if even for case conversions, it's wrong? Is there ever a case
where you can guarantee that replacing a single char with
another single char is correct? (There may be a few, e.g.
replacing the characters in a password---required to be US
ASCII---with '*'s. But they're very few.)
(And of course, the [] operator of std::string gives you
access to the underlying bytes, not the characters.)
But that makes sense for that particular abstraction, because
std::string is a typedef meant to represent the common case of
characters that fit within bytes.

It's such a common case that it doesn't exist in the real world.
If the idea of a character is too complex to be represented by
a char or wchar_t, then it merits its own, dedicated type,
with support for conversions, normalization, etc.

You said it above: it's a natural-language related issue. Thus,
by definition, extremely difficult and complicated.
Yes, and it seemed to work well. It never got released in
production code though, because there just wasn't any need for
it.

You mean you redefined everything necessary, all of the facets
in locale, etc., and everything necessary for iostream to work?
But that's not the problem. I usually use UTF-8, which fits
nicely in a char. But [] won't return a character.
What do you mean? std::basic_string::operator[] returns a
reference-to-character, as defined by the character and traits types
with which basic_string was instantiated.

No. basic_string::operator[] returns a reference to charT.
(With the requirement that charT be either char, wchar_t or a
user defined POD type.) A character is something more
complicated than that.
I'll take your word for that example. :) Characters just
aren't all the same size anymore.

(Someone else who's only scratched the surface of the
problem:). You might want to look at the technical reports at
the Unicode site, or get Haralambous' book.)
And by the way, I was relating my own experience. At the time
I first used std::string, the characters I needed to represent
fit very comfortably into bytes, and the [] operator did
provide correct access to them.
Take a look at my .sig. It should be obvious that this is not
the case for me.
Your sig looks fine to me, accented characters and all. It's
actually a nice proof of concept, since it includes three
different (Western) languages.

Except that in Unicode, some of the characters in it have
several different representations, some of which require a
sequence of code points. (I actually referred to it simply as an
indication that I do have to deal with multiple languages and
non-ASCII characters, on a daily basis.)
Admittedly, not often. It's just not something that comes up
a lot. If I'm accessing an individual character, chances are
good that I'm actually iterating over the characters in a
string. This kind of code is usually just buried in low-level
library functions. If a library is going to support strings
and substrings, then some code somewhere has to work at this
level. There's no getting around it.
Even if the standard library provided lots of Unicode-friendly
string support, indexed character access would still be
important.

Note that I'm not against it for read-only access. You often
have to scan, code point by code point, to find something. But
it's almost always a mistake to replace single code points,
without the provision for changing the number of code points.
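A small sketch of what the read-only, code-point-by-code-point scan looks like with UTF-8 in a std::string (assuming well-formed input; continuation bytes have the form 10xxxxxx):

#include <cstddef>
#include <string>

// Index of the start of the next code point after the one beginning at i.
std::size_t next_code_point(std::string const& s, std::size_t i) {
    ++i;
    while (i < s.size()
           && (static_cast<unsigned char>(s[i]) & 0xC0) == 0x80)
        ++i;
    return i;
}

std::size_t count_code_points(std::string const& s) {
    std::size_t n = 0;
    for (std::size_t i = 0; i < s.size(); i = next_code_point(s, i))
        ++n;
    return n;
}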
[...]
The more you learn, the more C++ rewards you. I remember
someone I used to work with, who had a morbid fear of C++,
taking one look at a typical C++ reference book and laughing
derisively (yes, derisively, just like an arrogant Bond
villain). "How do they expect anybody to learn all that?" he
asked. The answer is that you don't have to learn it all
before you can use it.
But there's no real point in using it otherwise.
Huh? Do you really think you know every nook and cranny of
the standard off the top of your head, including the standard
libraries?
Not every nook and cranny. But I do expect anyone using C++ to
have at least an awareness of what it can do.
In my experience, most C++ developers have no idea what the
language can do. They use it as a sort of "C with classes,"
replacing function-pointers with virtual functions, but
otherwise writing glorified C code.

In which case, they'd probably be better off in Java.

I've encountered developers like that, but I've also worked in
shops that insisted on quality code.
Oh, I think it is. Suppose you start with <insert
language-of-the-month here>. "Wow," you say, "this is really
neat! LotM lets me print 'hello world' with just a single
line!" Or (this one is in vogue now): "Look how much stuff I
can do with Excel macros! I'm going to implement all my
business logic using them. Instead of writing applications,
I'll give everybody macro-heavy spreadsheets to fill in."
Sooner or later, that person needs to write a real,
non-trivial program, at which point the knowledge they gleaned
from "Learn Language X in 24 Seconds" becomes worse than
useless. It becomes baggage. Writing very small programs in
C++ is harder than writing them in some other languages, but
the point of newbie hacking isn't just to get something
working, but to lay the groundwork for harder tasks that lie
ahead.

The problem is that C++ has enough gotcha's that code written
without some basic understanding will contain subtle errors.

Note that my personal opinion is that programming is a complex
profession, that you can't learn in a week or two.
Independently of the language. I don't consider the effort
needed to learn the "necessary minimum" in C++ excessive.
Although it's probably more than is needed for the necessary
minimum in Java (for example), the fact is that in both cases,
it's only a small percentage of everything you need to know in
order to write correct programs.

[...]
I would have liked to see a more Smalltalk-heavy industry.
All modern dynamic languages seem to me like convoluted
imitations of Smalltalk. I'm not a Smalltalk expert, and it
doesn't seem to have much of a fan-base anymore (like the Lisp
cult), but the syntax was so clean, and you could port it to a
new bare-hardware platform in a summer. What happened? Was
it the licensing? Why is Java the server-side "safe bet,"
rather than Smalltalk?

Smalltalk got a bad reputation for performance. And of course,
static type checking (a la C++ or Java) does improve program
reliability, by a couple of orders of magnitude.
 
