How do debuggers work?


William Pursell

The debugger needs to be notified by the OS when something happens in the
inferior, and obviously it needs to gain control before the inferior
program starts, so that breakpoints can be established at startup.

It is quite common to attach a debugger
to a process that is already executing.
 

jacob navia

William said:
It is quite common to attach a debugger
to a process that is already executing.

Yes. In this case it is the operating system that
stops the program and gives control to the debugger.

Under Windows, you have "just-in-time debugging",
which is very similar.

When an exception occurs, the operating system looks up a registry
key to see if a debugger is registered. If there is one, the
OS automatically starts the debugger, handing it the context
of the crash.

The registry key in question is called "AeDebug".
The full path is:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\AeDebug
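For illustration, the values under that key look roughly like this (the debugger path and flags below are an example, not a fixed requirement; the %ld placeholders are filled in by the OS with the process ID and an event handle, and "Auto"="1" means launch without prompting):

```
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\AeDebug]
"Debugger"="\"C:\\Program Files\\Debugging Tools for Windows\\windbg.exe\" -p %ld -e %ld -g"
"Auto"="1"
```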

This feature is extremely useful because it saves you the
time needed to restart the program and reproduce the bug.


Under Unix, the system writes a file containing a memory image
of the program at the point of failure. These days, with all that
RAM available, it is not very useful, since writing a 1GB file takes
QUITE a lot of time. There is, on some Unix systems, an environment
variable to specify that this file should not be created, as far
as I remember. Maybe some Unix expert could enlighten us more about
this.

I have never seen under Linux/Unix a feature like the Windows
"just in time" debugging, though. Normally you run gdb with the
"core" (dump) file as its argument.
 

Kenneth Brody

jacob navia wrote:
[...]
Under Unix, the system writes a file containing a memory image
of the program at the point of failure. These days, with all that
RAM available, it is not very useful since writing a 1GB file takes
QUITE a lot of time. There is in some Unix systems an environment
variable to specify that this file should not be created, as far
as I remember. Maybe some Unix expert could enlighten us more about
this.
[...]

<OT>
I've never seen a 1GB core file. I doubt I've even run a program
on Unix that requires 1GB of memory, though I'm sure they're out
there.

Remember -- a core file is not a dump of all memory in the system.
It only contains the non-code memory used by the current process.
(Plus other state information, such as the CPU registers.)
</OT>

--
+-------------------------+--------------------+-----------------------+
| Kenneth J. Brody        | www.hvcomputer.com | #include              |
| kenbrody/at\spamcop.net | www.fptech.com     | <std_disclaimer.h>    |
+-------------------------+--------------------+-----------------------+
Don't e-mail me at: <mailto:[email protected]>
 

Richard Tobin

jacob navia said:
I have never seen under linux/unix a feature like the windows
"just in time" debugging though. Normally you run gdb with the
"core" (dump) file as its argument.

Most often you just run the program again under the debugger!
These days, core dumps are usually disabled. You might enable
them if you have been having hard-to-reproduce problems.

Some versions of dbx allow you to run a program with no debugger
interaction unless and until an error occurs. I don't think gdb has
that option.

-- Richard
 

Mark Bluemel

Kenneth said:
jacob navia wrote:
[...]
Under Unix, the system writes a file containing a memory image
of the program at the point of failure. These days, with all that
RAM available, it is not very useful

Of course it can be useful... See below.

Under certain conditions, debugging the crash which occurred only
after the banking system had been running for 3 months is vitally
important but cannot be attempted without a core file to analyse.

Asking the customer to rerun for 3 months with your customer service
representative on-site operating the debugger is unlikely to be a good
move.

<OT>
I think Jacob might mean using the ulimit command
[...]

<OT>
I've never seen a 1GB core file.

I have, many times...
I doubt I've even run a program
on Unix that requires 1GB of memory, though I'm sure they're out
there.

Try a JVM with a large Java Heap...
Remember -- a core file is not a dump of all memory in the system.
It only contains the non-code memory used by the current process.

Which can be huge....
 

Richard

Morris Dovey said:
This could be a useful series. Will you also address
non-interactive debug programs such as those who identify
dynamically-allocated memory drains, etc for post-execution
analysis?


will you address debuggers used in non-hosted embedded
environments and those that can also be used to debug hardware,

When you address an issue do you then address every other issue with C?

Give it a break Morris.
such as those used to provide JTAG/EJTAG manipulation for
examining and modifying the states of internal CPU signals,
latches, and registers?

I would expect that a reasonably complete, high-quality effort
will keep you fairly busy for at least a week or so...

Why do you not do it since you seem to be so knowledgeable about it?
 

Richard

Yes, he can sometimes. But that cannot really be taught. No one is
interested in how clever you and some of the other regs claim to be.

We can ALL fix bugs from looking at a printout. Sometimes. But it's not
particularly easy to teach - it comes from experience. Using a good
debugger can be taught, however.
I have to agree with Mr. Heathfield on this one, falling pretty close to
group (c), above. (AKA "BTDT".)

While most ("nearly all"?) debugging is more involved, I, too, have, on
more than one occasion, debugged someone else's code without ever seeing
it, based solely on the symptoms.

So what? What has this to do with how useful real debuggers are in the C
world?

It is. In the real world. With real code. With real deadlines. With real
people.

Yes it is.
Well, even though some debugging can be done over the phone / via e-mail,
without seeing any source, that's certainly the exception rather than the
rule. (Consider the case where a database index corruption is detected
150,000 records into a report, and the corruption occurred some 20,000
records earlier. I'd hate to think about tracking that one down without
a debugger. "Could" it have been done without one? Yes, which I guess
technically means the debugger wasn't "essential". Then again, you could
say that a C compiler isn't "essential" either, as an experienced
programmer could hand-compile the code without one.)

Well said. Sorry, but Heathfield is blowing hot air again.

Total baloney. You would not pass any interview with your superior
attitude. A good debugger is absolutely essential in the "general case"
of large programs on multi-programmer projects which span multiple
versions.
 

Richard

Dik T. Winter said:
Indeed. In one instance the output from a test program was sufficient
to determine what the bug was. I may note that the bug was in a

Of course it CAN be sufficient. So what?
proprietary part of the software, so source was not even available,
neither was there a possibility to use a debugger.

Well, then that makes your comment even less applicable to a thread
about how to use a good debugger to save time and effort.
 

Richard

Keep in mind that McLean is religious. That kinda binds him to GWB in a
way that most rational people simply cannot grok.

Heathfield is a born again Christian too isn't he? Religious types
always give off this "holier and smarter than thou" odor.
 

Kenny McCormack

Heathfield is a born again Christian too isn't he? Religious types
always give off this "holier and smarter than thou" odor.

I didn't/don't know that. If true, it would explain some things.

But I disagree that they give off the attitude that you attribute.
Rather, I've always found that religious types give off an odor of
"dumb as a post" stupidity (Well, gee, look at the crap they profess to
believe...). To go further, an attitude of "I'm dumb, but I don't know it".
 

dj3vande

Kenneth Brody said:
(Consider the case where a database index corruption is detected
150,000 records into a report, and the corruption occurred some 20,000
records earlier. I'd hate to think about tracking that one down without
a debugger. "Could" it have been done without one? Yes, which I guess
technically means the debugger wasn't "essential". Then again, you could
say that a C compiler isn't "essential" either, as an experienced
programmer could hand-compile the code without one.)

If you can pin down where it's happening and take a look at that, a
single execution trace combined with a desk-check is often enough to
find the problem without a debugger.
If all you have is a post-mortem dump, I've yet to come across a
debugger that will let you step backwards over 20K records' worth of
execution (though I'm not going to claim that they don't exist).
So I'm not convinced that this is the best example of where a debugger
is essential.


War story:
The last time I encountered a comparable problem, the debugging tool of
choice turned out to be Excel.
It was a program that was supposed to compare records in the outputs of
multiple processes running on the same input data and combine results
that came from the same feature in the input; on occasion it would fail
to combine inputs when it should have.
Using a debugger to find the problem would have required either an
independent implementation of that module (to trap when the results
disagreed; this would have been the easiest way to programmatically
detect an error) or single-stepping through a few hours worth of
recorded data until we could eyeball a "should have been combined"
output that wasn't combined.
(If we had a dataset that exhibited the bug at a known point, we could
have done an offline run and interrupted it just before we got to that
point, but finding that point would have required the pre-debugger
analysis we ended up doing anyways, and once that analysis was done
setting up the test run under the debugger (never mind actually running
it) would have taken at least as long as the route we actually took from
there.)

Then somebody decided to take an inputs-and-outputs dump from a large
run, and plot that with Excel; looking at the graphs let us find a
relatively isolated data point that exhibited the problem. Filtering
the (same) execution trace data to only look at what that data point
was doing turned up an anomaly that led to finding and fixing the bug
within about five minutes. (Total elapsed time between importing the
data into Excel and arranging a test run for the fix was under two
hours.)

(This particular bug involved a global property of the interactions
between the data structures the module used internally and the
distribution of the input data, so running on a reduced dataset (which
would have made an interactive debugging session rather less painful)
would've hidden the bug, even though it was deterministic for a given
input stream. It also took a while to come up with a unit test that
would have caught it, given the dependence that had on internal tuning
thresholds.)


dave
 
