Seriously struggling with C


Richard G. Riley

This is very nice for limited amounts of possible input.
But in this case, even simple unit test schemes in combination
with logging suffice.

Possibly. My own preference, and that of the teams I have worked with,
is to give the code a run-through with the debugger before submitting
to unit tests. If you can get away without that then great.
[This is different, as far as I can tell, from Chris Hill's description
of automated testing using ICEs etc.]

I *always* use a debugger to run through any meaningful critical
code. It enables me to cast an eye over memory, stacks, locals etc. It
is an added safety barrier beyond my own smug ability to write
error-free code :)

Well, I don't know what you're doing; I've just never been in a situation
where this would be helpful. I'm under no illusions about my ability to
write error-free code, it's just that using a debugger doesn't give me
value for money.

We must come from different schools of thought. I and every programmer
I have ever worked with routinely step through code alone or with a
colleague to check boundary conditions, memory initialisations etc. It
is a bedrock of any development I have done. Using break expressions
means I can put in weird and wonderful parameters and have the
debugger break when a function is suddenly passed something it doesn't
know how to deal with. We are, after all, fallible.

Every serious project with >= 0.5 MLoC I was ever involved in
had its own resource management to track down certain kinds of
errors.

I don't doubt this and have used automated systems too where appropriate.
In addition, every single complex feature can be switched off
and internal states and debug indices can be made visible in
the output. Without them, debugging would be sheer madness.

A good logging system guarded by switches is always invaluable. Again,
no disagreement here.
Only exception: Paranoia checks with debug mode "asserts".
Unit tests should suffice to feed all kinds of "weird and
wonderful" parameters to a module. Regression tests make the
whole thing "round" -- customers and other colleagues tend to
find things one could not imagine when stepping through in the
debugger.

Again, fine: but using a debugger to examine code while developing
module(s) can do no harm and, for me, frequently raises issues with
regard to sensible program flow and highlights unnecessary
loop depths and other such quirks which can be optimised out.
The trace and logging outputs are just builtin "printf()
debugging". When working with huge amounts of data, this may
be the only way to get a first idea what is happening. Without
this idea, you typically don't know what is going wrong, let
alone where.

I would never personally use printfs but a system-specific log
function which may, or may not, end up using printf or some other form
of information provision.
What you describe sounds perfectly sensible - but I wouldn't describe
it as "using a debugger"; I think this is the disconnect.

[I don't know if I'd call the tools you mention "debuggers", either, but
it's too late to know for sure whether I wouldn't have /before/ this
discussion.]

Debugger. Eclipse "debugger". gdb. All debuggers.

debuggers (and why is the Eclipse debugger awarded scare-quotes?) ...

All code development tools.

... not (necessarily) debuggers: there's a difference here.

Debugging is part of development in my world. Maybe we are talking
about nomenclature differences here?

Mmmh. I don't know:

- Feature Specification and System Design
<-> Product Test based on the Spec, developed at the same time
by someone who is not the author of the Spec.
- One to several levels of component specifications and design
documents
<-> Automated Developer Tests, external and internal.
- Regular Automated Regression Tests incorporating Product and
Developer Tests as well as a large base of simple and complex
input
- Tracking system to track all requirements and limitations
through the different levels.
- Source control and configuration management.
During Specification and Design Phases: Several reviewers.
Occasional Code Reviews.

Sources of Bugs:
1) "Holes" in the design or specification documents. Most of
the time caught by the test specification process.
2) Implementation errors. Usually caught by the developer
tests.

2) is my "debugging" phase, generally.
Debugging: Mostly necessary in legacy code written under time
pressure or circumventing such a process.

All code has bugs :-;
For smaller projects: An adapted version of the above.



Design better test drivers and frameworks. It pays.

If everything were so perfect it would. Even with designs, code reads,
automated testing I just find it better and more profitable to step
through my code at the earliest stage to be sure things are going the
right way and that nothing silly is going to waste time & money by
forcing the code to be thrown back at me or someone else at a later stage.
Cheers
Michael

thanks.
 

Ian Collins

Richard said:
Do you write standalone SW that only you maintain?

I find it incredible that, as a programmer, a debugger isn't a very
important tool on your list.

No one is suggesting that good design and functional breakdown are not
important. What is indisputable, though, is that a debugger provides a
programmer with an easy, flexible way to perform run-time checks and
manipulation of his code. ESPECIALLY when adding to or modifying a legacy
system. Frequently you may need to call a poorly documented external
function and need to bounds-check it to be sure you can handle the
data and that it doesn't fall over with your "perfectly sound" input
data.

Adopt Test Driven Development and you will find yourself using the
debugger less and less. I hardly ever use mine these days, if a test
breaks, I just back out my last change and redo it.
 

Mark McIntyre

Well, I don't know what you're doing; I've just never been in a situation
where this would be helpful.

Whoa there boys.
It sounds to me like Richard isn't talking about a debugger at all,
but about automated tools like purify, quantify, lint etc. These
remove bugs, but they're not debuggers as I (and everyone I have ever
worked with) think of them. A debugger is an IDE you fire up to step
through the code so you can examine variables manually. The former I
would always advocate using on any production code. The latter is a
development tool, not a testing tool.
Mark McIntyre
 

Rod Pemberton

Richard G. Riley said:
Heh, and ties in nicely with your .sig :-;

One of my favorites:

"...there are three classes of intellects: one which comprehends by itself;
another which appreciates what others comprehend; and a third which neither
comprehends by itself nor by the showing of others; the first is the most
excellent, the second is good, the third is useless."

Machiavelli, The Prince, Chapter 22


Rod Pemberton
 

Rod Pemberton

Richard G. Riley said:
Do you write standalone SW that only you maintain?

I find it incredible that, as a programmer, a debugger isn't a very
important tool on your list.

I wrote my first program in 1981 (or was it '79?). Anyway, in all that time
through maybe fourteen languages, I've only had to use a debugger twice.
Once for a compiler issue and the other to track data between multiple
processes. Printf's or its equivalent is sufficient. Since I've
essentially never needed it, I have no choice but to consider the use of a
debugger to be a serious indicator that you're doing something wrong.
Perhaps you need to develop a set of coding rules or a style guide to help
correct the errors you're encountering?


Rod Pemberton
 

Dik T. Winter

> Richard G. Riley wrote: ....
>
> You seem to completely ignore the world of deeply embedded devices.

I did not see the original, but there is more to it. Recently I have
written quite a few C programs that either gave incorrect output (blatantly
incorrect) or crashed with a "segmentation fault". Debuggers are in these
cases generally useless (I found). In both cases the problem was with the
logic of the program, not with errors with respect to C. In the second
case the debugger proved to be of no use at all; the only thing the
debugger (gdb) said when the error occurred was that almost none of the
variables could be accessed (the problem proved to be overly deep
recursion due to a problem with the logic). And the first was almost
always due to a problem with the logic. That is the kind of thing a
debugger will not help with. (The programs all were about combinatorial
problems.)

For instance, in one case I declared an array of int's rather than
double's. How is a debugger going to help me to find the problem?
 

August Karlstrom

Rod said:
One of my favorites:

"...there are three classes of intellects: one which comprehends by itself;
another which appreciates what others comprehend; and a third which neither
comprehends by itself nor by the showing of others; the first is the most
excellent, the second is good, the third is useless."

Machiavelli, The Prince, Chapter 22

I think the first is excellent only if it incorporates the second. No
one wants to hang out with a besserwisser. ;-)


August
 

Dik T. Winter

Again I have not received the original; that is why I respond to this.

>
> Oh, I /am/ defending it. In my (limited) C experience, I have not
> needed to resort frequently to a C debugger. I would expect to need
> one less nowadays than I used to, as well.

What I am missing here is that debugging code using breakpoints can be
*more* time-consuming than using printf's to show the state. I once
had to debug a program I had written (80k+ lines of code). On some
machine it did not work. It appeared that on the umpteenth occurrence
of some call to some routine something was wrong. It is impossible
to detect such a thing using breakpoints or watchpoints. Using proper
printf's and scrutinising the output will get you the answer to why
it did not work much faster.
 

Dik T. Winter

> I'd write
>
> for (i = 0; check(i, i + 2); i++);
>
> There are cases (though I can't think of any right now!) where the
> relationship between control variables is too complex (or expensive) to
> capture, but when you can, I think having one is clearer.

for(i = 0, j = 1; check(i, j); i++, j = (j * 2) % n)?

And, yes, I do use such loops on occasion.
 

Ian Collins

Richard said:
Littering code with printfs or the equivalent plain sucks unless it's
for logging purposes and is well #ifdef'd out: even then it can
unnecessarily break up the flow and readability of the code.
I agree.
It's why debuggers exist. Only the most trivial
or tiny code can be maintained or properly examined with messy and
time-consuming printfs. printfs only show what you *think* you need to
know: not the true state of memory, locals, stacks, memory blocks.
Even the most complex code can be maintained with the aid of
comprehensive unit tests, no need for printfs. If these are done well,
you will seldom, if ever, have to use your debugger.
If you can get away with it, fine. It's certainly not something that
would generally be encouraged in any programming environment I have
been involved in.
What is 'it' in this context?

It must be a personal thing. For me the debugger is as crucial a part
as the editor: I would normally always step through with the
debugger just to sniff out any issues with uninitialised stuff,
pointer run-throughs etc. It's why IDEs put so much effort into the
debugger part these days.
That just shows you don't have tests.....
 

David Holland

> It's not very high up on my list, either. I use printf() and
> deep thought more often. I find stack traces useful
> sometimes, but I can get those without a debugger. I also find a
> debugger useful for viewing core dumps.
>
> It's not as though I never do "low-level" programming either.
> One of my projects, for example, is an entire (educational)
> operating system. In developing that, I don't think I've used a
> debugger more than once or twice (and I do have one that works
> with it).

I'd have to second this; I wrote nearly all the guts of my instructional
operating system before the debugger was ready. Other kernels I've
worked on I've never bothered with more than a stack trace.

The debugger isn't much help on the hard problems anyhow. If you have
a well-designed and well-tested system, all the obvious bugs you can
find by simple inspection with the debugger have already been shaken
out, and what's left are more subtle interactions between subsystems,
concurrency bugs, and unexpected UB. A conventional debugger doesn't
give much leverage with these. One could envision debuggers that
might, but that'd be a research project.
 

Richard G. Riley

I agree.

Even the most complex code can be maintained with the aid of
comprehensive unit tests, no need for printfs. If these are done well,
you will seldom, if ever, have to use your debugger.

Tell me: when any code of any reasonable level of complexity (maybe
long code, maybe lots of calls, maybe clever optimised bit
manipulations, maybe clever equations for image manipulation, maybe
calls to less robust areas of a legacy system) goes wrong, what do you
do?

Do you, as the OP said, just "think the problem away"? Me, and
especially with C, I like to step through and get a feeling: no
amount of unit tests can remove the benefit of that.

There seems to be a trend in this thread from some to suggest that
formal testing, automated testing, big designs are all a panacea for
the debugger-shy programmer. I guess this is where we would differ:
it is of paramount importance for a programmer to be fully
comfortable with his code and to see it in action (IMO of course). No
amount of specialised framework testing will remove the benefit of that.
That just shows you don't have tests.....

No it doesn't. It shows that I test as I develop, using the debugger as
an additional aid to writing code that is as close to bug-free as I can
make it. Tweaking run-time parameters, examining memory, checking loop
counts, etc.

cheers,
 

Richard G. Riley

I wrote my first program in 1981 (or was it '79?). Anyway, in all that time
through maybe fourteen languages, I've only had to use a debugger
twice.

I am truly astonished.
Once for a compiler issue and the other to track data between multiple
processes. Printf's or its equivalent is sufficient. Since I've
essentially never needed it, I have no choice but to consider the use of a
debugger to be a serious indicator that you're doing something
wrong.

I am even more astonished.
Perhaps you need to develop a set of coding rules or a style guide to help
correct the errors you're encountering?


Rod Pemberton

So you write hundreds and hundreds, maybe thousands, of lines of code
and you never need a debugger to find a problem? You can keep the
entire working set in your head in complicated situations?

I must say I have never met someone able to work for that long without
using a debugger.

Maybe we mean something different?
 

Michael Mair

Richard said:
Tell me: when any code of any reasonable level of complexity (maybe
long code, maybe lots of calls, maybe clever optimised bit
manipulations, maybe clever equations for image manipulation, maybe
calls to less robust areas of a legacy system) goes wrong, what do you
do?

Switch on logging and trace functionality, find the module
corrupting the data, run the tests for this module.
Then I know at which of thousands of objects it goes wrong
during which phase and can pinpoint the routine.
Looking at the code usually suffices to see what is going
wrong.
If the above is not sufficient, I may step into the whole
thing with the debugger to see whether one of the "this
point cannot be reached" points has been reached. printf
style debugging gives me the same info. Then, I discuss
with colleagues why the preconditions were not sufficient.
This is most of the time a design flaw or a problem of
unexpected consequences (which are the same). If I had to use
the debugger, the best course of action is to write
better tests or tracing.

Do you, as the OP said, just "think the problem away"? Me, and
especially with C, I like to step through and get a feeling: no
amount of unit tests can remove the benefit of that.

I have done that for years and found that I wasted valuable
time that way.
Using the debugger is nice for stepping through undocumented
legacy systems if you need one specific piece of information, but
eventually you are better off documenting the thing or throwing
it out.
Whenever I would really have needed a debugger, the situation
was too complex or unstable ("Heisenbugs") to actually use one.

There seems to be a trend in this thread from some to suggest that
formal testing, automated testing, big designs are all a panacea for
the debugger-shy programmer. I guess this is where we would differ:
it is of paramount importance for a programmer to be fully
comfortable with his code and to see it in action (IMO of course). No
amount of specialised framework testing will remove the benefit of that.

No. I am by no means debugger shy but having to use the debugger
means that I am down to the last resort.
In truth, I aim to have to look up the more involved functions
of my debugger every time -- because the intervals are sufficiently
long.

No it doesn't. It shows that I test as I develop, using the debugger as
an additional aid to writing code that is as close to bug-free as I can
make it. Tweaking run-time parameters, examining memory, checking loop
counts, etc.

None of us claim to be superprogrammers.
With the right sort of experience, you just learn that
there is a way of working and programming which makes the
debugger not entirely superfluous but minimizes the need
for one.
I, for one, get done more this way.


Cheers
Michael
 

Richard G. Riley

I did not see the original, but there is more to it. Recently I have
written quite a few C programs that either gave incorrect output (blatantly
incorrect) or crashed with a "segmentation fault". Debuggers are in these
cases generally useless (I found). In both cases the problem was
with the

Find a good tutorial on using one. This is where debuggers can be
invaluable.
logic of the program, not with errors with respect to C. In the
second

Debuggers are not there to find problems with "the language". They are
there to find problems with data assignments, logic and program flow.
case the debugger proved to be of no use at all; the only thing the
debugger (gdb) said when the error occurred was that almost none of the
variables could be accessed (the problem proved to be overly deep
recursion due to a problem with the logic). And the first was
almost

And a good debugger would highlight this very quickly if you set the
right watchpoints.
always due to a problem with the logic. That is the kind of thing a
debugger will not help with. (The programs all were about combinatorial
problems.)

I would find a debugger useful here.
For instance, in one case I declared an array of int's rather than
double's. How is a debugger going to help me to find the problem?

It can and it can't. It can show you where the resulting cast/assign
mismatches go wrong as you examine the data. In this case, though, assuming
you hadn't cast everything to death, the compiler should have been a help.
 

Richard G. Riley

Again I have not received the original; that is why I respond to this.



What I am missing here is that debugging code using breakpoints can be
*more* time-consuming than using printf's to show the state. I once

Nothing is set in stone. All techniques can be more time-consuming if
not chosen wisely. But I can honestly say that using printfs has never
been quicker, except in the most trivial cases.
had to debug a program I had written (80k+ lines of code). On some
machine it did not work. It appeared that on the umpteenth occurrence
of some call to some routine something was wrong. It is impossible
to detect such a thing using breakpoints or watchpoints. Using proper
printf's

This is simply not true. Since you must have some idea where the
problem is in order to insert the "printf", you have some idea where to
set your breakpoint to detect "naughty data"; then you can do a stack
trace to see where this data originated.
and scrutinising the output will get you the answer to why it did not
work much faster.

I must admit this would be slower for me.
 

Ian Collins

Richard said:
Tell me: when any code of any reasonable level of complexity (maybe
long code, maybe lots of calls, maybe clever optimised bit
manipulations, maybe clever equations for image manipulation, maybe
calls to less robust areas of a legacy system) goes wrong, what do you
do?
Fix the test that breaks, or failing that, add a test that breaks then
fix the problem. If you do TDD well, you will have as near to 100% test
coverage as you can get.
Do you, as the OP said, just "think the problem away"? Me, and
especially with C, I like to step through and get a feeling: no
amount of unit tests can remove the benefit of that.
If you have developed test-first, you know and trust your code. Each
line has been added to pass a test, so you don't have to step through it.
When I started TDD, I used to run my tests from within the debugger so
I could step through if required. Over time I found I wasn't using the
debugger at all, so I stopped using it.
There seems to be a trend in this thread from some to suggest that
formal testing, automated testing, big designs are all a panacea for
the debugger-shy programmer. I guess this is where we would differ:
it is of paramount importance for a programmer to be fully
comfortable with his code and to see it in action (IMO of course). No
amount of specialised framework testing will remove the benefit of that.
The best way for a programmer, and more importantly his customer, to be
fully comfortable with the code is to have a complete set of automated
tests. Nothing else gives you the confidence to refactor code.
No it doesn't. It shows that I test as I develop, using the debugger as
an additional aid to writing code that is as close to bug-free as I can
make it. Tweaking run-time parameters, examining memory, checking loop
counts, etc.
These aren't what I'd call tests, my tests must be automatic.
 

Richard G. Riley

Fix the test that breaks, or failing that, add a test that breaks then
fix the problem. If you do TDD well, you will have as near to 100% test
coverage as you can get.

If you have developed test-first, you know and trust your code. Each
line has been added to pass a test, so you don't have to step through it.
When I started TDD, I used to run my tests from within the debugger so
I could step through if required. Over time I found I wasn't using the
debugger at all, so I stopped using it.

Hmm. Personal thing. I could never do this. Anything half complex with
optimised pointer usage would always see me examining every last thing
for potential "x+1" overruns.
The best way for a programmer, and more importantly his customer, to be
fully comfortable with the code is to have a complete set of automated
tests. Nothing else gives you the confidence to refactor code.

I disagree on this, I must say. I find automated tests tend to give a
false sense of security. They do contribute, of course: but they are
often seen as an indicator of infallibility. Sometimes better to stick a
monkey at the keyboard with a BIG hammer! :)
 

Chris Dollin

Richard G. Riley wrote:

(Replying to Rod Pemberton)
So you write hundreds and hundreds, maybe thousands, of lines of code
and you never need a debugger to find a problem? You can keep the
entire working set in your head in complicated situations?

I can't, of course, speak for Rod, but I currently work within a codebase
of over 100,000 lines (of Java, so the issues are somewhat different)
which is co-maintained by four other people and used by a significant number
of users (I'd be more specific but it's open-source so how can you tell
who's using it?).

While there /have/ been a couple of times when I've had to resort to
the (Eclipse) debugger to track down a problem, it's by no means
routine. As for "keep the entire working set in your head in complicated
situations", well, I dunno. Is that what I'd need to do? I try not to
end up in complicated situations.

I suspect we have a domain issue as well as a style issue.
 

Ian Collins

Richard said:
Hmm. Personal thing. I could never do this. Anything half complex with
optimised pointer usage would always see me examining every last thing
for potential "x+1" overruns.
I guess you've never tried TDD?
I disagree on this, I must say. I find automated tests tend to give a
false sense of security. They do contribute, of course: but they are
often seen as an indicator of infallibility. Sometimes better to stick a
monkey at the keyboard with a BIG hammer! :)
The sense isn't false if the tests are good and written first. In my
opinion, tests added after the code is written are second rate.
 
