Keith Thompson
Paul Hsieh said: [...]
And if you think that a sorting routine is simple enough that there's
no significant risk of getting it wrong, consider this (Jon Bentley,
"Programming Pearls"):
While the first binary search was published in 1946, the first
binary search that works correctly for all values of n did not
appear until 1962.
The most reliable code is the code that isn't there.
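Bentley's remark can be made concrete. The classic way to get binary search wrong is the midpoint computation: `(lo + hi) / 2` can overflow for large arrays, and sloppy boundary handling breaks small values of n. Here's a minimal sketch (my own illustration, not from the thread) of a version that handles all n, including n == 0:

```c
#include <stddef.h>

/* Search a sorted int array for key; return its index, or -1 if absent.
 * The invariant is the half-open range [lo, hi).  The midpoint is
 * computed as lo + (hi - lo) / 2 rather than (lo + hi) / 2, which can
 * overflow when the indices are large. */
static long bsearch_int(const int *a, size_t n, int key)
{
    size_t lo = 0, hi = n;
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (a[mid] < key)
            lo = mid + 1;       /* key, if present, is in (mid, hi) */
        else if (a[mid] > key)
            hi = mid;           /* key, if present, is in [lo, mid) */
        else
            return (long)mid;
    }
    return -1;                  /* range is empty; key not present */
}
```

Note that the loop terminates for every n because the range [lo, hi) strictly shrinks on each iteration -- exactly the kind of invariant reasoning Bentley says took 16 years to pin down.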
So you think compiler vendors implement qsort() without any code?
Of course not -- but it's code that *I* don't have to write. A bug in
a sort routine that I write (and test as much as I have time for) is
more likely than a bug in a vendor's qsort() routine that's been used
by zillions [*] of other programmers. That's not to say that a bug in a
vendor's code is impossible, of course.
I've never heard of a buggy qsort() in a released implementation of
the C standard library. Have you?
qsort() also mashes things through void * pointers, which means you
*LOSE* type safety -- so this introduces other risk to the code.
A valid point. You have to be careful in how you invoke qsort(), and
in how you write the comparison routine. I still think that using
qsort() is less error-prone than writing your own specialized sorting
routine (but sometimes the risk is worth it).
All this is beside the point that you just equated sorting with
binary search.
That was not my intent. I'd say that sorting and binary search are
tasks of comparable complexity. The observation that it took 16 years
to get binary search right was an illustration, not a proof.
[*] 1.21 zillion, on average, according to a recent survey.