Andreas said:
Evertjan. said:
...set temporary breakpoints, like alert(1), alert(2), etc,
and replace the source string with known temporary [objects],
to determine where the error occurs.
That is the good old-fashioned way of debugging.
What would be a bad new-fashioned way of debugging?
console.log, or searching through anonymous functions in the error
stack trace?
alert-type debugging in today's runtime environments and today's Web
applications is sometimes feasible, but as a general approach it is highly
questionable. Consider, for example, debugging an event listener for the
`focus' event: the element in question can easily lose focus when alert()
is called and regain it after the alert() is dismissed, which calls the
listener, which calls alert() again, and so on. Been there, seen that.
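The feedback loop described above can be sketched roughly as follows. Since alert() and DOM focus exist only in browsers, this is a simulation: fakeAlert() and the recursion cap are made up for illustration, standing in for the real blocking dialog that steals and then returns focus.

```javascript
// Simulated demonstration of the alert()-in-a-focus-listener loop.
let calls = 0;

function fakeAlert(element) {
  // A real alert() takes focus away from the page; dismissing it
  // refocuses the element, which fires the `focus' listener again.
  // The cap stands in for the user eventually clicking elsewhere.
  if (calls < 5) onFocus(element);
}

function onFocus(element) {
  calls++;                // the listener we are "debugging"
  fakeAlert(element);     // alert-type debugging re-triggers the event
}

onFocus("input#name");    // one real focus event...
console.log(calls);       // ...turns into 5 listener invocations
```

In a real browser the loop only ends when the user manages to move focus elsewhere, which is exactly why this technique breaks down for focus-sensitive code.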
console.log() does not have that disadvantage (though it has others, at
least in WebKit), and neither does the ES5 `debugger' statement; however,
both require changing the source code, which shifts line numbers and can
alter timing, potentially heisenbugging the problem and introducing more
bugs. There are enough debuggers for any widely distributed browser (and
therefore for its ECMAScript and/or DOM implementation) with which you can
set breakpoints (even conditional ones) and watch values and the stack
trace, so that changing the source is very rarely needed to find a bug,
if ever.
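For what it's worth, the `debugger' statement can at least emulate a conditional breakpoint without an alert() or log call; a minimal sketch (the function name and item shape are invented for illustration):

```javascript
// The `debugger' statement pauses execution only when a debugger is
// attached; otherwise it is a no-op, so the code stays runnable.
function suspiciousTotal(items) {
  let total = 0;
  for (const item of items) {
    if (item.price < 0) {
      debugger; // acts like a conditional breakpoint on bad data
    }
    total += item.price;
  }
  return total;
}

console.log(suspiciousTotal([{ price: 2 }, { price: 3 }])); // 5
```

A breakpoint set in the browser's own debugger, with the condition `item.price < 0', achieves the same thing without touching the source at all, which is the point made above.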
However, in my experience surprisingly few developers seem to be proficient
enough to use a debugger, let alone professional enough to set one up. For
an off-topic example, I have seen people *refuse* to install Zend Debugger
or Xdebug (both PHP debuggers) on their development server, resorting
instead to a rather tedious echo/print_r/var_dump/exit approach, even
though remote debugging is so very convenient with Eclipse PDT. I still
don't get it.
But it also appears to be true that the art of debugging is seldom taught
in computer science classes; that should change.
PointedEars