Question on vector at()

  • Thread starter Alexander Dong Back Kim

James Kanze

[...]
Any exception that is thrown _should_ be specified in the
contract. Otherwise, why should a caller provide a catch()
clause? Uncaught exceptions are more or less equivalent to
asserts in that they unwind the stack completely.

I think that there are system level exceptions which should be
considered "part of the contract" unless explicitly specified
otherwise. Things like std::bad_alloc, for example. The usual
rule is to document exceptions I guarantee to throw, along with
the conditions under which I guarantee to throw them, and to
document the cases where I guarantee to not even throw "system"
exceptions. Most functions, however, say nothing about things
like std::bad_alloc in their explicit contract. (They might,
however, say something like "throw()": the function guarantees to
throw no exceptions whatever, not even system level exceptions.)
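
To make that concrete, here is a minimal sketch of such a contract
(the class and its functions are invented for illustration):

    #include <cstddef>
    #include <string>

    class Registry
    {
    public:
        // Throws: std::out_of_range if there is no entry for key.
        // May also propagate "system" exceptions such as std::bad_alloc.
        std::string lookup(std::string const& key) const;

        // Throws: nothing, not even system-level exceptions.
        std::size_t size() const throw();
    };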

FWIW: I find very little use for exceptions other than system
level exceptions, except in special cases like constructors (or
some functions which return objects, particularly overloaded
operators). But that may just be related to my application
domains.
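
One reason constructors are special: they have no return value in
which to report failure, so an exception is the only real channel. A
sketch (Connection and its helper are hypothetical):

    #include <stdexcept>
    #include <string>

    class Connection
    {
    public:
        // Either establishes the connection or throws; there is no
        // return value in which to report failure.
        explicit Connection(std::string const& host)
        {
            if (!open(host))
                throw std::runtime_error("cannot connect to " + host);
        }

    private:
        bool open(std::string const&) { return false; }  // stub for the sketch
    };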
 

James Kanze

I couldn't have responded better than James did. One note, though:
I honestly can't understand this opinion.
Developers are humans, and humans make mistakes. No matter how
experienced you are at developing, even if you are the best
C++ developer in the world and your code has the best quality
in the world, you will make small mistakes here and there from
time to time.

Totally agreed up until there.
The vast majority of these small mistakes can be found with a
proper debugger almost immediately.

That's not really my experience. First, you need exhaustive
unit tests, regardless. And typically, if you understand the
code you've written, and the unit tests are well designed, and
address all possible failure modes explicitly, then you'll know
pretty much what the error is just from the unit tests---you
don't need the debugger. (As a general rule. There are
exceptions, and as I posted, I encountered a case where the
debugger really helped just a couple of days ago. But it's the
first time in about 20 years.)

Secondly, there's strength in numbers. While everyone makes
mistakes, typically, each person will overlook something
different. So the people doing code review will catch those
that you missed. And catch the cases you forgot to test in your
unit tests. My experience is that when code review is done
correctly, you typically get something like one error per
100,000 lines of code going into integration. (It's possible to
get even fewer, but at some point, trying to eliminate that very
last error ceases to be cost effective, unless you're dealing
with truly critical software.)
You simply have to see the line where the program crashed in
order to immediately see your own mistake and fix it.

OK. When people talk about "using a debugger", I generally
think in terms of interactive execution of the program. I agree
that it's very useful for post-mortems.
Without the aid of a debugger it could take tens of minutes,
or even hours at worst. ("Segmentation fault" is not a very
informative error message. Having to find the place where the
fault happens by adding debugging prints to your program can
be a real pain.)

A segmentation fault leaves a core dump, which can be used in a
post-mortem analysis. (At least it does on the machines I work on.)
If the error is not exactly at the line where the crash
happened, the debugger is even more useful, exactly because of
the stack trace. You can navigate the call stack down until
you find the actual place where the error happened, you can
easily examine the values of the variables at that place, and
in the majority of the cases you will immediately see what the
problem was (usually a small typo or thought mistake).

So are you talking about a post-mortem, or executing the program
under the debugger?
It is, of course, possible to have really nasty bugs which are
extremely hard, or even impossible, to find with just a
debugger. However, that doesn't mean that the debugger isn't
useful.
If I had to develop C++ programs without any kind of debugger,
it would be a real pain.

I've had to do it with C. It's not that bad. The real problem
is the absence of the ability to do a post-mortem.
Each small mistake, each small typo, would require a long and
laborious task of manually adding debug prints to try to
binary-search the location of the error...

It does mean rerunning the program with full logging activated.
Horrible.
(I have actually had to do this in the past, when developing
programs in a Unix environment using gcc and without gdb or
any other kind of debugger. It was not nice.)

One important point: when there is an error in delivered code,
your first job is to reproduce it in the unit tests. Normally,
if the program is adequately instrumented (logging, etc.), you
should be able to do this uniquely from the log data. And once
you've done it, we fall into the case I first cited, above: the
way the unit test fails pretty much indicates where the error
is. Whatever you do, of course, you do NOT correct an error in
the code until you have a unit test which reveals it.
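
A minimal sketch of that rule (the function and the failure are
invented; the point is that the test reproduces the field report
before the fix is made):

    #include <cassert>
    #include <cstdlib>

    // Function under test: returns the port number, or -1 on bad input.
    // The reported failure: an empty string silently became port 0.
    int parsePort(char const* text)
    {
        if (text == 0 || *text == '\0')
            return -1;                  // the correction for the reported bug
        return std::atoi(text);
    }

    int main()
    {
        assert(parsePort("8080") == 8080);
        assert(parsePort("") == -1);    // reproduces the field report first
        return 0;
    }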

Of course, all of my comments above are general rules. As any
experienced programmer knows, there will be special cases which
just don't fit in, and stubborn errors which slip through code
review, and which require use of the debugger just to find the
failure mode, so you can test it. Such cases are, or should be,
very rare, however, and one should resist the temptation to run
the program under the debugger just because you don't understand
anything. (As a general rule: unless you can state precisely
what you want to look at, and what you should see, or expect to
see, it's too early to break out the debugger.)
 

James Kanze

Ian Collins wrote:
There is a school of thought that I am becoming more aligned
with as my TDD skills improve that says using a debugger on
new code is a sign of flaws in your development process.
I try to avoid the debugger wherever possible; it simply slows me
down. Better to make a small change, run the tests. If a
test fails, revert the change and do it again another way.
Debugging with a debugger is one of those intense processes
that can suck up a lot of time before you realise, and it
disrupts the development flow.

I don't think you really mean it that way, but it sounds like
you're suggesting making stabs in the dark.

I think the important point is to understand what you are going
to write before writing it. I've seen a lot of second rate
programmers using the debugger to try to understand what they've
just written. That's wrong, and never gives good results. But
just writing random code, and then making random changes until
it passes the test suite isn't any better. The only general
rule that works is think first, then act. Don't write a single
line of code without knowing why---what the purpose of that line
is, what the state of the program is before the line is
executed, and what it will be after, etc.
 

Ian Collins

James said:
One important point: when there is an error in delivered code,
your first job is to reproduce it in the unit tests. Normally,
if the program is adequately instrumented (logging, etc.), you
should be able to do this uniquely from the log data. And once
you've done it, we fall into the case I first cited, above: the
way the unit test fails pretty much indicates where the error
is. Whatever you do, of course, you do NOT correct an error in
the code until you have a unit test which reveals it.
I couldn't agree more. The test will also guard against the
reintroduction of the same bug.

If more people worked like this, we'd have a lot less buggy software.
 

Ian Collins

James said:
I don't think you really mean it that way, but it sounds like
you're suggesting making stabs in the dark.
I hope not!
I think the important point is to understand what you are going
to write before writing it. I've seen a lot of second rate
programmers using the debugger to try to understand what they've
just written. That's wrong, and never gives good results. But
just writing random code, and then making random changes until
it passes the test suite isn't any better.

I agree, the point I was trying to make is it's often quicker to redo a
change that breaks unit tests than it is to debug it. If this isn't
the case, the change was probably too big, or the tests too crude.
 

Jeff Schwab

Juha said:
If I had to develop C++ programs without any kind of debugger, it
would be a real pain. Each small mistake, each small typo, would require
a long and laborious task of manually adding debug prints to try to
binary-search the location of the error... Horrible.

Why would you disregard the benefit of someone else's experience? I'm
telling you I rarely need the debugger, thoroughly enjoy writing C++
code, and do seem (at the risk of tooting my own horn) to write code of
very high quality, yet you're convinced that such a thing is impossible.
I can't speak for anyone else, but I will note that the best books on
software development include a good deal more coverage of how to use the
compiler and unit tests effectively, than of "how you can track down
problems in the debugger." FWIW, I find debuggers, ICEs, and logic
probes invaluable when doing very low-level programming, but only
because the code is so intimately tied to the hardware that independent
testing is very difficult.

You frequently use phrases like "I can't even begin to imagine why" and
"I honestly can't understand this opinion." You come across as
argumentative. I'm trying to share what I've learned, and to continue
learning by hearing and evaluating what others have to say. I don't
always agree, but arriving at that conclusion is an interesting process
in itself. Sometimes I even change my mind. :)

I'm going to give this one more stab, and then I think I'm through
adding to this thread (though I'll keep lurking).

(1) The most effective way to produce correct code is for it to be
correct by construction. Design, then code. Look before you leap.

(2) The earlier problems are caught, the better. It is better to catch
a bug at compile time than at run-time, and better to have a failing
unit test than a failing product in the field.

(3) When an error does occur in the wild, the program should handle it
as gracefully as possible. If an exception can be thrown, it is
preferable to a crash, unless there is an explicit reason that even a
stack unwind would be unacceptable.

(4) Different concepts should have different names. As an example, the
name "assert" has already been tasked with identifying conditions that
are known, not just expected, to be true. Assertions are therefore
suitable for documenting and verifying class invariants, but not for
verifying prerequisites.
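
To illustrate the distinction with a sketch (Buffer is invented for
the purpose): the invariant is something the class itself guarantees,
so assert() documents and verifies it; the prerequisite comes from the
caller, so it is checked explicitly and reported.

    #include <cassert>
    #include <cstddef>
    #include <stdexcept>
    #include <vector>

    class Buffer
    {
        std::vector<char> data_;

        // Invariant: known to hold, or there is a bug in this class.
        bool invariant() const { return data_.size() <= data_.capacity(); }

    public:
        char at(std::size_t i) const
        {
            assert(invariant());            // verifies the class invariant
            if (i >= data_.size())          // prerequisite on the caller
                throw std::out_of_range("Buffer::at");
            return data_[i];
        }
    };
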
(I have actually had to do this in the past, when developing programs
in a Unix environment using gcc and without gdb or any other kind of
debugger. It was not nice.)

Is it possible that the problem was not the lack of a debugger, but the
fact that you were already more comfortable with the debugger than with
alternative solutions? Almost all of my coding for the past six months
has been on Linux, using gcc. I have rarely used gdb, and then mostly
as an "archaeological" tool to understand code that had been evolved,
rather than designed.
 

Juha Nieminen

Ian said:
There is a school of thought that I am becoming more aligned with as my
TDD skills improve that says using a debugger on new code is a sign of
flaws in your development process.

Well, I don't see the debugger telling me "your program crashed here"
as any different from a compiler telling me "you have a syntax error here".

Imagine if C++ compilers didn't tell you which line has the
syntax error, only that there is a syntax error somewhere.
That wouldn't make much sense.
 

Fred Zwarts

Juha Nieminen said:
Perhaps a different design could be better here?

Usually indexing out of boundaries means one of two things:

1) It's an error, i.e. a bug in the code. In this case an assert() is
usually better than throwing (because you can get the whole call stack
easily with a debugger).

A good debugger can also show the whole call stack when an exception occurs.
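
For readers following the thread, a minimal sketch of the two
alternatives being compared (the functions are invented for
illustration):

    #include <cassert>
    #include <cstddef>
    #include <vector>

    // at() reports a bad index by throwing std::out_of_range; a debugger
    // can be set to stop at the point where the exception is raised.
    int checked(std::vector<int> const& v, std::size_t i)
    {
        return v.at(i);
    }

    // assert() reports a bad index by calling abort(), which on Unix
    // leaves a core dump with the full call stack for a post-mortem.
    int asserted(std::vector<int> const& v, std::size_t i)
    {
        assert(i < v.size());
        return v[i];
    }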
 

James Kanze

James Kanze wrote:

[...]
I agree, the point I was trying to make is it's often quicker
to redo a change that breaks unit tests than it is to debug
it. If this isn't the case, the change was probably too big,
or the tests too crude.

I'm not sure what you mean by "redo" the change. Do you mean go
back to the previous state, and start from scratch making the
change? That seems a bit weird to me.

In my experience, there are really two distinct types of errors.
On one hand, there are typos and related types of errors. Your
basic approach to the problem was correct, but somehow, you
slipped up implementing it. Which of the unit tests failed, and
how they failed, should usually give a pretty good idea as to
where you slipped up; either a quick glance at the code reveals
it to you immediately, or you'll never see it (because you
generally see what you meant to write, not what is actually
there). In the latter case, redoing the change could be a valid
solution, but generally, having someone else glance at the code
is even more effective---someone who's not looked at it before,
and doesn't know what you meant to write, and so really does see
what is actually there.

The other type is where there is a fundamental error in the way
you thought the problem out. The code correctly implements an
incorrect "design". In such cases, you probably will have to
back out and redo the change---trying to hack the code to change
the design generally results in a horrible mess. But before
doing so, you really have to think enough to be sure you know
why the design was wrong. (And again, talking it through with a
colleague who isn't so involved as to have preconceptions can be
remarkably effective.)

All in all, I find the debugger almost useless in the second
case. In the first case, if the unit test crashed somewhere
deep in a system function, a debugger can be very useful for
doing a post-mortem, to find out where in your code you were
when it crashed. And every once in a while (once in about
twenty years, in my case), using it interactively may help in
isolating where what you wrote wasn't what you thought you
wrote. (Another case where the debugger is useful is in
understanding third party software which isn't well documented,
or even well written, and for which you don't have any unit
tests. But of course, having the third party deliver well
written, well documented code with unit tests would be a far
better solution.)

The important thing, with the debugger or with any other tool,
is to think before you act. I've seen far too many cases where
the programmer will write some code, and then use the debugger
to try to understand what he just wrote. Whereas he should
understand what he's going to write before writing it. I don't
use the debugger much myself, but if a programmer sits down, and
first works out what he wants to look at with it, and what he
expects to see, then I have no problem with it. If the attitude
is rather: the code doesn't work, and I don't know why, so I'll
step through it line by line in the debugger, hoping that will
give me some inspiration, it's the programmer you have to
change, before changing the code.
 

James Kanze

Well, I don't see the debugger telling me "your program
crashed here" as any different from a compiler telling me "you
have a syntax error here".

Time. In order for the debugger to tell me anything, I have to
generate an executable, and run it under the debugger. The
compiler tells me a lot earlier.

But that's not really the point here. If the program crashes,
you do use the debugger on the core dump for a post-mortem, to
tell you where (supposing that the crash wasn't due to an
assertion failure, or the assertion failure was too deep down in
lower level code to tell you where you were in your code). A
lot of the time, however, the error will be detected by some
other type of unit test failure. And usually, the way the unit
test failed should give enough information to isolate the error
in less time than it takes to prepare a debugging session; in
fact, in most cases (not all), I find that the work necessary to
prepare a debugging session (deciding where to put the break
points, etc.) will reveal the error before you actually get to
the point of invoking the debugger.
Imagine if C++ compilers didn't tell you which line has the
syntax error, only that there is a syntax error somewhere.
That wouldn't make much sense.

I'm not sure, but I wonder if you and Ian aren't talking about
different things. I know that I use the debugger for
post-mortems, to find out which line in my unit tests triggered
the crash. I also know, however, that using it interactively,
on a running program, is rarely an effective means of debugging.
And that far too many people seem to use it instead of thinking,
which is never very effective.
 

James Kanze

I couldn't agree more. The test will also guard against the
reintroduction of the same bug.

It is, nonetheless, standard procedure in every software
development process I've ever seen.

Another standard procedure is to evaluate why the error
propagated so far. What could you have done differently to
ensure that it was caught by code review, or the original unit
tests? Or even, so that the programmer wouldn't have introduced
it in the first place? Any error downstream from development is
indicative of an "error" in the development process.

But of course, you can fix the error in the process after you've
fixed the error in the code, whereas you can't fix the error in
the code until you have a unit test case which reveals it.
(Otherwise, how are you sure you've fixed it?)
If more people worked like this, we'd have a lot less buggy
software.

You've heard of the millions of monkeys typing away, and writing
Shakespeare. That's probably an accurate description of the
software development process in a lot of large companies.
 

James Kanze

[...]
(3) When an error does occur in the wild, the program should handle it
as gracefully as possible. If an exception can be thrown, it is
preferable to a crash, unless there is an explicit reason that even a
stack unwind would unacceptable.

And this is where you seem to have misunderstood something.
Throwing an exception is NOT an acceptable way of handling an
error in the wild, since it means that the error might go
unnoticed. Once the error has occurred (even before it has been
detected), the program is functionally incorrect. And
gracefully incorrect is still incorrect---the most important
thing is to ensure that the surrounding environment knows that
the program is incorrect, so that it can take the appropriate
actions. Nothing else really matters. (There are special
cases, e.g. like games, where it is preferable to attempt to
"hide" the error, but I don't think that they're that common.)

Stop and think about it for a minute. Getting a core dump (an
assertion failure) from your editor is certainly not a pleasant
experience. But would you prefer that it stumble on, and
perhaps overwrite all of your file with wrong data, rather than
getting an ugly error message, and probably losing everything
you've done since the last checkpoint---with a modern editor,
say in the last 5 or 10 seconds?
(4) Different concepts should have different names. As an example, the
name "assert" has already been tasked with identifying conditions that
are known, not just expected, to be true. Assertions are therefore
suitable for documenting and verifying class invariants, but not for
verifying prerequisites.

I'm not sure what you mean by "prerequisites" here. In the
context of the program, you "know" that no one will call your
function with an illegal argument, since the contract you give
forbids it. In the same way you "know" that no one will
overwrite your object with memcpy, thus violating your class
invariants. In both cases, you use assert() to detect the
error, since it will terminate the program most rapidly, and in
a way that the surrounding environment cannot possibly consider
"correct". (At least, that's the intent of assert(). And the
way it works under Unix, and I'm pretty sure Windows.)
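
A small sketch of that use of assert() (the function and contract are
invented): the contract forbids the bad argument, so a violation is a
bug in the caller, and assert() ends the program through abort() in a
way the environment cannot mistake for success.

    #include <cassert>

    struct Account { long balance; };

    // Contract: amount is non-negative and no larger than the balance.
    // A violation is a bug in the caller, not a run-time condition to
    // handle, so assert() terminates the program via abort().
    void withdraw(Account& a, long amount)
    {
        assert(amount >= 0 && amount <= a.balance);
        a.balance -= amount;
    }
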
Is it possible that the problem was not the lack of a
debugger, but the fact that you were already more comfortable
with the debugger than with alternative solutions? Almost all
of my coding for the past six months has been on Linux, using
gcc. I have rarely used gdb, and then mostly as an
"archaeological" tool to understand code that had been
evolved, rather than designed.

There are different ways to use the debugger. I have no
hesitation about using it for a post-mortem. I also have no
problem with its use *if* the programmer, reflecting on the
code, comes to a point where he feels that 1) the variable in
question must have such and such a value, but 2) if it did, the
observed results aren't possible. (In that case, there's
obviously an error in his thinking---I can see using the
debugger to decide whether the error is upstream or downstream
of this critical point.) In practice, I personally find this
second case very, very rare, however. Individual components
should be simple enough that such problems don't occur within
them, and of course, you log at the interface between
components, so you can get that information from the process
logs. (This probably doesn't hold for shrinkwrapped software.)

That doesn't justify people who use the debugger instead of
thinking. And I know that in a learning environment, until
someone had learned to write at least small applications
correctly, I wouldn't allow access to a debugger. But I have no
problem with an experienced, professional programmer using it
for post-mortems, or even occasionally interactively. While
programs should be correct by construction, none of us are
perfect, and all software development processes do provide means
for double-checking the programmer's work. Systematically.
 

Ian Collins

James said:
I'm not sure what you mean by "redo" the change. Do you mean go
back to the previous state, and start from scratch making the
change? That seems a bit weird to me.
If the problem is not obvious by inspection, it is often faster to revert
the change and redo it in smaller steps, running the tests at each step,
than it is to dive in with a debugger. I really got into the habit of
doing this with PHP, where I didn't have a debugger!
The important thing, with the debugger or with any other tool,
is to think before you act. I've seen far too many cases where
the programmer will write some code, and then use the debugger
to try to understand what he just wrote. Whereas he should
understand what he's going to write before writing it.

I agree. When working test first, the action of designing the test
forces one to think ahead.
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

Has something changed with your character set? Your postings used to
use ISO-8859-1, but this one claimed UTF-8, mangling the accented characters.
 

Ian Collins

James said:
But of course, you can fix the error in the process after you've
fixed the error in the code, whereas you can't fix the error in
the code until you have a unit test case which reveals it.
(Otherwise, how are you sure you've fixed it?)
The big question is, how do you introduce a failing test into the process?
 

James Kanze

Ian Collins wrote:
Has something changed with your character set? Your postings
used to use ISO-8859-1, but this one claimed UTF-8, mangling
the accented characters.

Not that I know of, but I'm posting (here at least) through a
firewall over which I have no control (and via Google groups),
and it wouldn't be the first time strange things happened
downstream. We've been migrating a lot of things to Linux
lately---the default encoding for Linux is usually UTF-8, but
under Solaris, I was using ISO-8859-1 (because the UTF-8 locales
weren't installed---although that seems to no longer be the
case). It wouldn't surprise me if someone migrated the firewall
without recognizing that it was doing something with the
encoding fields. (But it could also be Google who changed
something.)

If the postings are claiming UTF-8, I'll switch to that. (I've
now got the appropriate locale under Solaris. It may take some
time, however, since the versions of vim that they provide here
seem to have the fileencoding compiled in, rather than
configurable.)
 

Ian Collins

James said:
If the postings are claiming UTF-8, I'll switch to that. (I've
now got the appropriate locale under Solaris. It may take some
time, however, since the versions of vim that they provide here
seem to have the fileencoding compiled in, rather than
configurable.)
You're back to ISO-8859-1!

Content-Type: text/plain; charset=ISO-8859-1
 

James Kanze

You're back to ISO-8859-1!
Content-Type: text/plain; charset=ISO-8859-1

I think I've found the problem. :) Because Firefox was crashing
so frequently on me under Linux, I was occasionally posting from
Windows. I think that it's the Windows configuration which sets
charset=UTF-8. And the problem was that they upgraded my
Windows machine not too long ago, and I reinstalled my
environment, copying it from the Unix shared disks. So my UTF-8
.sig was replaced by an ISO-8859-1 one.

Since my previous posting, Firefox on Linux has crashed again,
so I'm back on Windows (but I've fixed the .sig file). So this
will probably show up as UTF-8, but hopefully correctly.

(FWIW: does anyone know why Firefox crashes so often under
Linux? Curiously enough, under completely different conditions
at work and on my home system---at work, it's always when
viewing Google Groups---which admittedly is some of the worst
HTML I've ever seen---; at home, only when I want a print
preview, but then, even with the simplest HTML.)
 

Jeff Schwab

James said:
(FWIW: does anyone know why Firefox crashes so often under
Linux? Curiously enough, under completely different conditions
at work and on my home system---at work, it's always when
viewing Google Groups---which admittedly is some of the worst
HTML I've ever seen---; at home, only when I want a print
preview, but then, even with the simplest HTML.)

It hasn't crashed on me in months (years?). I'm using Firefox 2 on Suse
10. Are you on Firefox 3? It may still have some stability problems,
but I don't know; installing it will require upgrading my pango
libraries, and I haven't found time to dig into that yet.
 

Lionel B

(FWIW: does anyone know why Firefox crashes so often under Linux?
Curiously enough, under completely different conditions at work and on
my home system---at work, it's always when viewing Google Groups---which
admittedly is some of the worst HTML I've ever seen---; at home, only
when I want a print preview, but then, even with the simplest HTML.)

Strange; I can't ever recall Firefox crashing on me, probably over
several years. I run Gentoo at home, RHEL at work (both on x86_64) and
several versions of Firefox, 64-bit and 32-bit, pre-built and self-built.
 
