Beginner help!


Luke Meyers

Bo said:
The slight advantage of using [] is that not checking the index saves
you some (very small) amount of time.

There's also an argument to be made in terms of clarity.
If your program runs millions of
times a day, that might be worth the risk!

Disagree. If your program runs millions of times a day, that
(efficiency gain) might be worth the *effort* of rigorously verifying
the correctness of the access so that the extra checking provided by
at() is redundant. Any code running millions of times, day in and day
out, is probably too important to accept additional *risk* of undefined
behavior.

A good design and clean implementation by someone who knows what
they're doing (not someone in an introductory programming course, mind
you) should be able to provide convincingly correct invariants when
necessary.

Luke
 

Alf P. Steinbach

* Luke Meyers:
Bo said:
The slight advantage of using [] is that not checking the index saves
you some (very small) amount of time.

There's also an argument to be made in terms of clarity.
If your program runs millions of
times a day, that might be worth the risk!

Disagree. If your program runs millions of times a day, that
(efficiency gain) might be worth the *effort* of rigorously verifying
the correctness of the access so that the extra checking provided by
at() is redundant. Any code running millions of times, day in and day
out, is probably too important to accept additional *risk* of undefined
behavior.

A good design and clean implementation by someone who knows what
they're doing (not someone in an introductory programming course, mind
you) should be able to provide convincingly correct invariants when
necessary.

I'm sorry, but that's not meaningful in any sense.

IBM did an experiment in coding on paper only, I think that was in
Australia. They'll never do that again.

That's why we have checked operations such as at(), and that's why we
have testing.
 

osmium

Luke Meyers said:
Bo said:
The slight advantage of using [] is that not checking the index saves
you some (very small) amount of time.

There's also an argument to be made in terms of clarity.
If your program runs millions of
times a day, that might be worth the risk!

Disagree. If your program runs millions of times a day, that
(efficiency gain) might be worth the *effort* of rigorously verifying
the correctness of the access so that the extra checking provided by
at() is redundant. Any code running millions of times, day in and day
out, is probably too important to accept additional *risk* of undefined
behavior.

A good design and clean implementation by someone who knows what
they're doing (not someone in an introductory programming course, mind
you) should be able to provide convincingly correct invariants when
necessary.

The [] is much clearer than at(). A well-designed language would check for
bounds errors when using the [] notation. It would also have the option of
turning off checking on production builds. All problems solved, nothing to
discuss.
 

Luke Meyers

Alf said:
* Luke Meyers:
Bo said:
The slight advantage of using [] is that not checking the index saves
you some (very small) amount of time.

There's also an argument to be made in terms of clarity.
If your program runs millions of
times a day, that might be worth the risk!

Disagree. If your program runs millions of times a day, that
(efficiency gain) might be worth the *effort* of rigorously verifying
the correctness of the access so that the extra checking provided by
at() is redundant. Any code running millions of times, day in and day
out, is probably too important to accept additional *risk* of undefined
behavior.

A good design and clean implementation by someone who knows what
they're doing (not someone in an introductory programming course, mind
you) should be able to provide convincingly correct invariants when
necessary.

I'm sorry, but that's not meaningful in any sense.

IBM did an experiment in coding on paper only, I think that was in
Australia. They'll never do that again.

I think you mistook my meaning. I did not say to produce logical
proofs of program correctness (is that the meaning you took?). I said
"convincingly correct," by which I mean that the invariant "all
unchecked indices are valid," where useful in the context of a specific
time optimization, can generally be provided to an acceptable degree of
confidence within a sufficiently local context. Is that more clear?
Close scrutiny and thorough testing can suffice, when truly expedient.
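
For instance, a fragment along these lines (purely illustrative, not
anyone's production code) establishes the needed invariant in the loop
condition itself, so it can be verified at a glance:

    #include <vector>

    // Reverse the elements in place. The loop condition alone guarantees
    // that every index handed to operator[] is valid, so at()'s per-access
    // check would only repeat work the loop has already done.
    void reverse_in_place(std::vector<int>& v)
    {
        if (v.empty()) {
            return;
        }
        std::vector<int>::size_type lo = 0;
        std::vector<int>::size_type hi = v.size() - 1;
        while (lo < hi) {
            int tmp = v[lo];
            v[lo] = v[hi];
            v[hi] = tmp;
            ++lo;
            --hi;
        }
    }
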
That's why we have checked operations such as at(), and that's why we
have testing.

Yes, but in C++, there is an important principle that "you don't pay
for what you don't use." So that's why the unchecked operation is
available.

Luke
 

Luke Meyers

osmium said:
The [] is much clearer than at().

A reasonable subjective argument can be made for that (or for the
contrary). I would imagine it depends a great deal on the individual
reading the code.
A well designed language would check for
bounds errors when using the [] notation.

This too is very subjective. What do you mean by "well-designed?"
Given that a programming language is created with certain design
principles involved, certain goals to accomplish, is it fair to call
something poorly designed if you disagree with the selection of
principles and goals? In C++, you don't pay (in terms of time and
space cost at runtime) for what you don't use. There's no free lunch
-- you can't have checked access and have it be as fast as unchecked
access. Inside the body of a tight loop, or another
performance-critical context, this overhead may be both unnecessary and
unacceptably expensive.
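
To make that concrete, the two accessors below (a sketch only; the
actual cost depends on the implementation) differ only in whether the
bounds test happens on every single call:

    #include <cstddef>
    #include <vector>

    int unchecked_get(const std::vector<int>& v, std::size_t i)
    {
        return v[i];    // no bounds test; an out-of-range i is undefined behavior
    }

    int checked_get(const std::vector<int>& v, std::size_t i)
    {
        return v.at(i); // bounds test on every call; throws std::out_of_range
    }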

You might argue that, while unchecked access is a good facility to have
available, checked access should be the common case, and therefore
provide what you see as the clearest syntax. However, remember that
another important design goal of C++ has always been maximal
compatibility with C; it is because C++ is compatible with C that it
has come into such wide use. The vast majority of programming
languages, "well-designed" or not, languish in obscurity. The []
element access syntax comes from arrays, and for arrays it is an
unchecked operation. Hence, the most consistent scheme is to have the
unchecked operations for other containers use the same syntax.

So, with this in mind, do you really feel it's a bad design, or just
disagree with the priorities involved in choosing design criteria?
It would also have the option of
turning off checking on production builds.

That negates much of the benefit. Non-production code is only run on
whatever test cases you invent (and take the time to implement).
Production use is typically far more exhaustive, and as such may
uncover cases not reached in testing. If the access is unchecked in
those cases, you're no better off for having had checked access in your
tests. When you're in a nice, safe testing sandbox, undefined behavior
may take longer to diagnose than a handled error, but for the most part
the result is the same. In production, undefined behavior can be
catastrophic -- for a lot of C++ code, lives depend on not invoking
undefined behavior.

Better to use checked access in general, and make careful use of
unchecked access when a specific optimization need is discovered during
performance analysis. Fortunately, for those who use the STL
effectively and as intended, it's rare to have to use operator[] or
at() in any case -- that's why we have iterators and standard
algorithms. You can bet that your local for_each implementation isn't
using checked access, but it's small and tight and closely-scrutinized,
and it works great 100% of the time.
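
For reference, a typical for_each boils down to something like this
simplified sketch (not any particular vendor's code):

    // The iterators delimit the work; there is nothing to check per element.
    template <class InputIterator, class Function>
    Function my_for_each(InputIterator first, InputIterator last, Function f)
    {
        for (; first != last; ++first) {
            f(*first);  // unchecked dereference, kept safe by the loop itself
        }
        return f;
    }
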
All problems solved, nothing to discuss.

That's naive (and rather arrogant). You made your points, based on
your subjective opinion; that doesn't entitle you to declare the
discussion closed and take a victory lap.

Luke
 

roberts.noah

Thomas said:
This seems to be a misunderstanding. The OP had already started to
program, but just posted his assignment. Since he did not quote the
context, you thought he was asking to have his homework done.

Well, he has been given several ways to solve his original problem.
Steinbach gave him one right off the bat. So I don't understand why he
is still posting his assignment; it seems to me he should have tried
the solution given and be well on his way to finishing.

His original question was quite reasonable...
 

osmium

Luke Meyers said:
osmium said:
The [] is much clearer than at().

A reasonable subjective argument can be made for that (or for the
contrary). I would imagine it depends a great deal on the individual
reading the code.

Really? Do you actually know a person who finds at() clearer?
A well designed language would check for
bounds errors when using the [] notation.

This too is very subjective. What do you mean by "well-designed?"
Given that a programming language is created with certain design
principles involved, certain goals to accomplish, is it fair to call
something poorly designed if you disagree with the selection of
principles and goals? In C++, you don't pay (in terms of time and
space cost at runtime) for what you don't use. There's no free lunch
-- you can't have checked access and have it be as fast as unchecked
access. Inside the body of a tight loop, or another
performance-critical context, this overhead may be both unnecessary and
unacceptably expensive.

You might argue that, while unchecked access is a good facility to have
available, checked access should be the common case, and therefore
provide what you see as the clearest syntax. However, remember that
another important design goal of C++ has always been maximal
compatibility with C; it is because C++ is compatible with C that it
has come into such wide use. The vast majority of programming
languages, "well-designed" or not, languish in obscurity. The []
element access syntax comes from arrays, and for arrays it is an
unchecked operation. Hence, the most consistent scheme is to have the
unchecked operations for other containers use the same syntax.

So, with this in mind, do you really feel it's a bad design, or just
disagree with the priorities involved in choosing design criteria?
It would also have the option of
turning off checking on production builds.

That negates much of the benefit. Non-production code is only run on
whatever test cases you invent (and take the time to implement).
Production use is typically far more exhaustive, and as such may
uncover cases not reached in testing. If the access is unchecked in
those cases, you're no better off for having had checked access in your
tests. When you're in a nice, safe testing sandbox, undefined behavior
may take longer to diagnose than a handled error, but for the most part
the result is the same. In production, undefined behavior can be
catastrophic -- for a lot of C++ code, lives depend on not invoking
undefined behavior.

What is it you don't understand about optional?
Better to use checked access in general, and make careful use of
unchecked access when a specific optimization need is discovered during
performance analysis. Fortunately, for those who use the STL
effectively and as intended, it's rare to have to use operator[] or
at() in any case -- that's why we have iterators and standard
algorithms.

Do you realize that people actually write mathematical programs and other
programs using random access (games) in C++?
 

Stephan Brönnimann

osmium said:
The [] is much clearer than at(). A well-designed language would check for
bounds errors when using the [] notation. It would also have the option of
turning off checking on production builds. All problems solved, nothing to
discuss.

An all-or-nothing bounds check is not desirable. How would you
efficiently implement the two (nonsense) functions below?
With at() and operator[] I can choose what fits my needs.

#include <exception>
#include <string>

char first(const std::string& s) throw (std::exception)
{
    return s.at(0);
}

void addOne(std::string& s)
{
    typedef std::string::size_type Size;
    for (Size e = s.size(), i = 0; i < e; ++i) {
        s[i] += 1; // *)
    }
}

*) Certainly Alf agrees to use the operator[] here :)
Overflow check omitted.

Regards, Stephan
(e-mail address removed)
Open source rating and billing engine for communication networks.
 

Alf P. Steinbach

* Luke Meyers:
Alf said:
* Luke Meyers:
Bo Persson wrote:
The slight advantage of using [] is that not checking the index saves
you some (very small) amount of time.

There's also an argument to be made in terms of clarity.

If your program runs millions of
times a day, that might be worth the risk!

Disagree. If your program runs millions of times a day, that
(efficiency gain) might be worth the *effort* of rigorously verifying
the correctness of the access so that the extra checking provided by
at() is redundant. Any code running millions of times, day in and day
out, is probably too important to accept additional *risk* of undefined
behavior.

A good design and clean implementation by someone who knows what
they're doing (not someone in an introductory programming course, mind
you) should be able to provide convincingly correct invariants when
necessary.

I'm sorry, but that's not meaningful in any sense.

IBM did an experiment in coding on paper only, I think that was in
Australia. They'll never do that again.

I think you mistook my meaning. I did not say to produce logical
proofs of program correctness (is that the meaning you took?). I said
"convincingly correct," by which I mean that the invariant "all
unchecked indices are valid," where useful in the context of a specific
time optimization, can generally be provided to an acceptable degree of
confidence within a sufficiently local context. Is that more clear?
Close scrutiny and thorough testing can suffice, when truly expedient.

First rule of optimization: don't do it.

Second rule: don't do it yet.

Third rule: measure first.

IMO teaching irrelevant and mostly problematic low-level optimization
techniques to novices, instead of proper engineering, is very bad.

At least, if unchecked operations are to be recommended, also recommend
measurement to see whether they actually have any positive effect.

Yes, but in C++, there is an important principle that "you don't pay
for what you don't use." So that's why the unchecked operation is
available.

The unchecked operations are available for the cases where you either
don't care about correctness, or guarantee it in other ways.

Those are not reasonable requirements for a novice.
 

roberts.noah

Luke said:
osmium wrote:

[snip - no point arguing missing sections since they are correct]
It would also have the option of
turning off checking on production builds.

That negates much of the benefit. Non-production code is only run on
whatever test cases you invent (and take the time to implement).
Production use is typically far more exhaustive, and as such may
uncover cases not reached in testing. If the access is unchecked in
those cases, you're no better off for having had checked access in your
tests. When you're in a nice, safe testing sandbox, undefined behavior
may take longer to diagnose than a handled error, but for the most part
the result is the same. In production, undefined behavior can be
catastrophic -- for a lot of C++ code, lives depend on not invoking
undefined behavior.

Better to use checked access in general, and make careful use of
unchecked access when a specific optimization need is discovered during
performance analysis. Fortunately, for those who use the STL
effectively and as intended, it's rare to have to use operator[] or
at() in any case -- that's why we have iterators and standard
algorithms. You can bet that your local for_each implementation isn't
using checked access, but it's small and tight and closely-scrutinized,
and it works great 100% of the time.

While much of what you say is true, I do find providing
non-production checks to be very beneficial. Sometimes when you are in
the middle of development you are not fully aware of all the
consequences of your actions. Placing asserts that are compiled out in
release mode helps other developers find the areas that get broken by
code they change or add. For instance, in this case you might find that
something you did added length to a vector without your being aware of
that consequence. Consequently you run into a buffer overrun, and
without a debug assert telling you what happened and where, it could
take you hours or days to find it.
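
For example (just a sketch, with made-up names), a debug assert on the
index costs nothing in a release build but stops you right at the
overrun in a debug build:

    #include <cassert>
    #include <cstddef>
    #include <vector>

    int get_element(const std::vector<int>& v, std::size_t i)
    {
        // assert() compiles away when NDEBUG is defined (release builds),
        // but halts immediately in a debug build if the index is bad.
        assert(i < v.size() && "index out of range");
        return v[i];
    }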

Also, your use of iterators as an illustration is questionable, since
iterators are unchecked. In a situation where you want to be sure never
to overrun, it may be more beneficial to use at() than an iterator that
can go past end() into no-man's land.

That said, I never use at(). For one thing, it would be a major issue
where I work because the lead dev doesn't like the STL to begin with and
thinks it's slow and does bounds checking all the time (I know), but also
because he doesn't like exceptions and at() generates exceptions. But
to tell the truth I never used it anyway, since it is just so easy to
check for yourself and most of the time you have to anyway. IMHO even
when you don't, you should, since exceptions are for exceptional
situations and shouldn't be used for bounds checking.
 

Luke Meyers

osmium said:
Luke Meyers said:
osmium said:
The [] is much clearer than at().

A reasonable subjective argument can be made for that (or for the
contrary). I would imagine it depends a great deal on the individual
reading the code.

Really? Do you actually know a person who finds at() clearer?

Anyone used to C would instinctively read [] as unchecked access. An
inconsistency in semantics (operator[] meaning different things for
different types) would make programs less clear to such individuals (of
which there are many).
What is it you don't understand about optional?

I take exception to your tone, and to the presumption that I'm failing
to understand. Someone asked a more detailed version of this question
elsewhere in this thread; I'll make my response there.
Better to use checked access in general, and make careful use of
unchecked access when a specific optimization need is discovered during
performance analysis. Fortunately, for those who use the STL
effectively and as intended, it's rare to have to use operator[] or
at() in any case -- that's why we have iterators and standard
algorithms.

Do you realize that people actually write mathematical programs and other
programs using random access (games) in C++?

Obviously. There are certainly situations where it's called for. But
it's far more common, in a good design that makes wise use of standard
libraries, to treat the contents of containers systematically (e.g.
iterating over the whole set) rather than interacting with numeric
indices. I use the STL extensively, and neither operator[] nor at()
figure heavily into my usage patterns. It's nice to know they're
there, but I don't find them appearing often in good designs.
Obviously, there are exceptions peculiar to specific domains.

Luke
 

Luke Meyers

Luke said:
osmium wrote:

[snip - no point arguing missing sections since they are correct]

Cheers. ;)
It would also have the option of
turning off checking on production builds.

That negates much of the benefit. Non-production code is only run on
whatever test cases you invent (and take the time to implement).
Production use is typically far more exhaustive, and as such may
uncover cases not reached in testing. If the access is unchecked in
those cases, you're no better off for having had checked access in your
tests. When you're in a nice, safe testing sandbox, undefined behavior
may take longer to diagnose than a handled error, but for the most part
the result is the same. In production, undefined behavior can be
catastrophic -- for a lot of C++ code, lives depend on not invoking
undefined behavior.

Better to use checked access in general, and make careful use of
unchecked access when a specific optimization need is discovered during
performance analysis. Fortunately, for those who use the STL
effectively and as intended, it's rare to have to use operator[] or
at() in any case -- that's why we have iterators and standard
algorithms. You can bet that your local for_each implementation isn't
using checked access, but it's small and tight and closely-scrutinized,
and it works great 100% of the time.

While much of what you say is true, I do find providing
non-production checks to be very beneficial. Sometimes when you are in
the middle of development you are not fully aware of all the
consequences of your actions. Placing asserts that are compiled out in
release mode helps other developers find the areas that get broken by
code they change or add. For instance, in this case you might find that
something you did added length to a vector without your being aware of
that consequence. Consequently you run into a buffer overrun, and
without a debug assert telling you what happened and where, it could
take you hours or days to find it.

Oh, please don't misunderstand -- optionally-enabled runtime assertions
are an invaluable tool. But optionality isn't free. The fact that
(for std::vector) operator[] is unchecked and at() is checked is an
invariant. Users of at() would be disappointed if they wrote code
reliant on the checking provided by at(), only to find that this
checking was swept out from under them in production.
Optionally-checked random access would be a helpful feature, but it's
by no means an acceptable substitute for *guaranteed* checking.
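
For instance (purely illustrative), code like the following depends on
the guarantee that at() always checks; if that check could be compiled
out, the fallback path would silently turn into undefined behavior:

    #include <cstddef>
    #include <stdexcept>
    #include <vector>

    // Return the element at index i, or a caller-supplied fallback if i is
    // out of range. This only works because at() is *guaranteed* to throw.
    int value_or(const std::vector<int>& v, std::size_t i, int fallback)
    {
        try {
            return v.at(i);
        } catch (const std::out_of_range&) {
            return fallback;
        }
    }
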
Also, your use of iterators as an illustration is questionable, since
iterators are unchecked. In a situation where you want to be sure never
to overrun, it may be more beneficial to use at() than an iterator that
can go past end() into no-man's land.

Iterators are more tightly typed than integer indices. Integers can
easily pass to and fro through wide swathes of program code. Iterators
naturally restrict themselves to more limited scopes, closely
associated with the container they pertain to. As such, I think in
general the pedigree (validity) of an iterator is far easier to
ascertain than that of an integer used as an index. An integer can
mean anything, and is involved in countless operations having nothing
to do with your container -- iterators only participate in logic
pertinent to the container.

The point I was really trying to make, though, is that with iterators
one can rely on proven algorithms rather than constantly, redundantly
performing low-level manipulation of individual elements.
std::for_each operates on a whole container, not just a single element.
It's correct, it's optimal, it's reliable. Hand-crufted for loops are
far more prone to error and inefficiency. It's far easier to pass valid
iterators to std::for_each than to write a correct for loop.
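
A small example of what I mean (the functor is invented for
illustration):

    #include <algorithm>
    #include <iostream>
    #include <vector>

    // A trivial functor; without lambdas, a small struct does the job.
    struct Print
    {
        void operator()(int x) const { std::cout << x << '\n'; }
    };

    void print_all(const std::vector<int>& v)
    {
        // No indices and no bounds to get wrong: begin() and end()
        // delimit the work.
        std::for_each(v.begin(), v.end(), Print());
    }
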
That said, I never use at().

Me neither, for the most part. Same with operator[] (for std::vector).
For one thing, it would be a major issue
where I work because the lead dev doesn't like the STL to begin with and
thinks it's slow and does bounds checking all the time (I know)

You have my sympathy. However, those statements are provably false on
technical grounds. If your lead is so unreasonable that he will not
listen to such an argument, I wonder why you're wasting your time on a
team so severely handicapped by such deficient and misguided
leadership.
but also
because he doesn't like exceptions and at() generates exceptions.

Exceptions carry a lot of subtleties with them, but they're a valuable
language feature and should not be shunned. Professional software
developers should be expected to take the time to learn important tools
and how to use them properly. Put another way: the alternative to an
exception is undefined behavior. If your lead doesn't like exceptions,
how does he like segmentation faults and core dumps?
But
to tell the truth I never used it anyway since it is just so easy to
check for yourself and most of the time you have to anyway.

Most of the time it's moot because a standard algorithm is perfectly
suitable.
IMHO even
when you don't, you should, since exceptions are for exceptional
situations and shouldn't be used for bounds checking.

One valid interpretation is that exceptions can represent invariant
violations. Out-of-bounds access surely meets that criterion.

Luke
 

roberts.noah

Luke said:
You have my sympathy. However, those statements are provably false on
technical grounds. If your lead is so unreasonable that he will not
listen to such an argument, I wonder why you're wasting your time on a
team so severely handicapped by such deficient and misguided
leadership.

Oh, I like where I work; I just butt heads with the lead on some issues.

I think I've shown well enough that using the STL isn't as costly as he
thinks, so I don't hear about it too often anymore. I'm introducing
new ideas and such into the group, as it should be, and vice versa; I
am learning things too, so I feel the relationship is mutually productive.
You shouldn't judge the crew based on my complaints; there are some
very smart people here.
Exceptions carry a lot of subtleties with them, but they're a valuable
language feature and should not be shunned. Professional software
developers should be expected to take the time to learn important tools
and how to use them properly.

Definitely. I'm always buying books to improve my abilities and learn
different techniques. I don't always use them, but I always get
something out of the reading. To me this is fundamental; in fact, it
is how 90% of my learning has been accomplished.

In this case, however, it isn't about not knowing how to use a language
feature (though since they don't, there is some of that); it is that
exceptions are perceived as too costly and as such are explicitly
disallowed. In other words, someone actually decided that they would
not use them; it's not just laziness about learning.

Put another way: the alternative to an
exception is undefined behavior. If your lead doesn't like exceptions,
how does he like segmentation faults and core dumps?

The alternative to exceptions is return codes. I myself think that is
a much less efficient way to deal with exceptional situations, but I'm
not ready to go there. So we use return codes and debug-build asserts.
I agree that we should be using exceptions, but people lived a long
time without them and it is possible to continue doing so.
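
Just to illustrate the difference in shape (the function and its limits
are invented for this example):

    #include <cstdlib>
    #include <stdexcept>
    #include <string>

    // Return-code style: the caller has to remember to test the result.
    bool parse_port_rc(const std::string& text, int& port_out)
    {
        int value = std::atoi(text.c_str());
        if (value < 1 || value > 65535) {
            return false;
        }
        port_out = value;
        return true;
    }

    // Exception style: a failure cannot be silently ignored.
    int parse_port_ex(const std::string& text)
    {
        int value = std::atoi(text.c_str());
        if (value < 1 || value > 65535) {
            throw std::invalid_argument("not a valid port: " + text);
        }
        return value;
    }
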
Most of the time it's moot because a standard algorithm is perfectly
suitable.

I often iterate over vectors instead of calling for_each. Often it is
some simple thing I want to do, and it's easier just to write a for loop
than a separate function or functor. It might be nice to use Boost
Lambda or Phoenix expressions, but I can't... at least not yet... no Boost.
One valid interpretation is that exceptions can represent invariant
violations. Out-of-bounds access surely meets that criterion.

Out-of-bounds access, yes, but checks should always be done beforehand,
either explicitly or through the nature of the algorithm itself. An
out-of-bounds exception should indicate that the algorithm has failed
in a fundamental way.
 

aaron

aha...it's so simple!!!

use the substr function
firstName.substr(0,1)
middleName.substr(0,1)
lastName.substr(0,1)
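
Put together, it comes out to something like this (the names are just
made up):

    #include <iostream>
    #include <string>

    int main()
    {
        std::string firstName = "Ada";
        std::string middleName = "Augusta";
        std::string lastName = "Lovelace";

        // substr(0, 1) is the one-character substring starting at index 0,
        // i.e. the initial (this assumes the names are non-empty).
        std::string initials = firstName.substr(0, 1)
                             + middleName.substr(0, 1)
                             + lastName.substr(0, 1);

        std::cout << initials << '\n';  // prints "AAL"
        return 0;
    }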

Thanks everyone!
 
