Assuming that's really true, not an "urban legend",
It's certainly an urban legend. Scarcely any of the code
in those days was ever published; hundreds of people must have
written that algorithm during that decade, and many of them were
quite bright.... Basically, there's *no way* of knowing when
the first "correct" implementation was written, only when the
first correct implementation was published.
In any case, correctness is somewhat in the eye of the
beholder. If in a particular application it is known that the
array is not empty, or that the sought-for element is present,
then you can write slightly simpler code (as in the sketch
below) that would fail some of the stress testing that a more
general code needs. People in those days got jobs done; they
weren't concerned with formal correctness according to some
abstract spec. [I don't claim that as good, bad or indifferent,
it was just the way things were.]
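For instance, a minimal sketch in C of what that "simpler" code
might look like -- my own illustration, assuming the caller
guarantees a non-empty ascending array that really does contain
the target; nothing here is quoted from anybody:

    /* Hypothetical sketch: relies on the caller's promise that the
     * target is present in a non-empty ascending array.  There is
     * no lo > hi test and no not-found return, so an absent target
     * loops forever and an empty array reads out of bounds --
     * acceptable for its narrow job, fatal under stress testing. */
    int find_assumed_present(const int a[], int n, int target)
    {
        int lo = 0, hi = n - 1;
        for (;;) {
            int mid = lo + (hi - lo) / 2;
            if (a[mid] == target)
                return mid;
            if (a[mid] < target)
                lo = mid + 1;
            else
                hi = mid - 1;
        }
    }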
I don't understand why they had so much trouble getting the
algorithm correct.
You're talking about a period that was about a decade
before the first undergraduate CS courses, and long before
anyone but the odd fanatic worried about "correctness", as
opposed to "working". Even the notion of an algorithm was
little discussed. You learned programming by attending a
two-day course on the Autocode of a particular computer, and
then just doing it. There were no books about computing, no
published code, no standards; and professors got very uppity
if some spotty oik from the Computing Service tried to tell
them how to write programs.
Nowadays, that particular algorithm is one of the
common early tasks in an algorithms module. You tell the
class what you want their code to do, and give them a few
minutes to write it. Then you find out whether their code
works if the array is empty, if there is only one element,
if the target is larger/smaller than the last/first element,
and so on. It doesn't, so that gives you the chance to talk
about invariants, pre- and post-conditions, program structure
in general, blah-de-blah, *after* which:
[...] I remember the algorithm was rather
obvious: [...]
Quite so! Lots of science in general, and maths/CS in
particular, is rather obvious. The job of education is to make
it so. *Before* you've been taught how to make things obvious,
even the simplest concepts are obscure.
If LO>HI then your range is empty, which means you've already
bottomed out without finding the target, so you immediately
return a not-found indication, for example -1.
You've already hit a design problem! What will you do if
-1 is a permitted index? Do you want the complexity of returning
a structure consisting of a Boolean and an integer [if your chosen
language permits that]? Or do you want the caller to have to test
whether the returned index is in range?
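If the language has records, the first option might look like
the sketch below (C, with names of my own choosing; it isn't
taken from the original posts):

    #include <stdbool.h>

    /* Option 1: return a flag alongside the index, so that no index
     * value has to be reserved to mean "not found". */
    struct search_result {
        bool found;
        int  index;     /* valid only when found is true */
    };

    /* Option 2: return an index that is deliberately out of range
     * (say, n, one past the end) and oblige every caller to test it
     * before using it. */

The trade-off is the usual one: the record makes the "absent"
case impossible to ignore, while the sentinel keeps the
interface down to a single integer.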
Otherwise you compute MID = (LO+HI)/2 (doesn't matter which way
you round, you can even be inconsistent from moment to moment,
the algorithm works in any case).
[The algorithm works, of course, as long as LO <= MID <= HI;
it's just more efficient if MID is somewhere around the middle. But
if we're picking nits, then you're in trouble if LO+HI > Maxint, or
in various other similar potential overflow cases.]
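The usual defence against that particular nit, in C-like
notation (my illustration, not something from the thread), is to
compute the offset from LO rather than the sum:

    /* (lo + hi) / 2 can overflow once lo + hi exceeds the largest
     * representable integer, but hi - lo never exceeds the array
     * length, so computing the offset from lo first is safe for
     * any valid range. */
    static int midpoint(int lo, int hi)
    {
        return lo + (hi - lo) / 2;
    }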
Test target against element at MID.
- If target smaller, tail-recurse with LO, MID-1, or if you're
doing it iteratively then do assignment HI := MID-1. [...]
In the old days, this wasn't necessarily as easy as you
make it seem. For integer elements, OK. For reals, there was the
permitted error to worry about. Few languages had proper strings
before the mid-'60s, so if you had an application like trying to
see if a word was in a list, there was a lot of encoding and
decoding to do here. Many languages didn't permit recursion;
and even "iteration" was discouraged -- "while (lo >= hi) {...}"
is a relatively late invention -- so you got spaghetti code using
"goto" to emulate the iteration.