Gregor said:
Interesting. I had then replaced the built-in sort with my "own"
quicksort function and it was only slightly slower. Since the sorting
itself wasn't an issue when compared to the "speed" of the required DOM
operations, I didn't investigate any further and stuck to the built-in
algorithm (according to the various Wikipedia articles quicksort,
mergesort and minsort don't differ too much in terms of speed).
For the amount of data you're likely to be sorting in an ECMAScript
program, implementation will probably matter far more than algorithm.
Quicksort and Mergesort are both generally described as O(n lg n)
algorithms[1], while the "min sort" (basically just a Selection Sort)
is, as Lasse said, O(n**2). This is all with complexity measured as
average number of comparisons in non-pathological cases.
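To make the comparison-count claim concrete, here is a minimal selection-sort ("min sort") sketch, using a comparator in the style of Array.prototype.sort (the function name is my own):

```javascript
// Selection sort: on each pass, find the minimum of the unsorted
// tail and swap it into place. The inner loop does one comparison
// per element, so the total is always n*(n-1)/2 comparisons,
// regardless of input order - hence O(n**2).
function selectionSort(a, cmp) {
  for (var i = 0; i < a.length - 1; i++) {
    var min = i;
    for (var j = i + 1; j < a.length; j++) {
      if (cmp(a[j], a[min]) < 0) min = j;
    }
    if (min !== i) {
      var t = a[i]; a[i] = a[min]; a[min] = t;
    }
  }
  return a;
}
```

Note that, unlike Quicksort or Mergesort, an already-sorted input buys it nothing: the comparison count is the same either way.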
But the average number of comparisons is obviously only part of the story.
A naive Quicksort, for example, degrades to O(n**2) if the input is
already sorted. There are strategies to avoid this (e.g. median-of-three
pivot selection, or Introsort's trick of detecting runaway recursion
and switching to Heapsort), but they mean additional code complexity.
And Quicksort is defined
recursively, but for optimal performance on datasets of significant
size it's usually much better to implement it iteratively - again,
complicating the code, and leading to more work in the inner loop,
possible cache misses, etc.
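For illustration, a median-of-three pivot selection might look like the sketch below (the helper names are mine, not from any particular engine). The idea is that on sorted or reverse-sorted input the median of the first, middle, and last elements is a near-ideal pivot, so the degenerate partitions that cause O(n**2) behavior are avoided:

```javascript
// Sort a[lo], a[mid], a[hi] among themselves so that a[mid] holds
// their median; return mid as the pivot index. cmp is an
// Array.prototype.sort style comparator.
function medianOfThree(a, lo, hi, cmp) {
  var mid = lo + ((hi - lo) >> 1);
  if (cmp(a[mid], a[lo]) < 0) swap(a, lo, mid);
  if (cmp(a[hi], a[lo]) < 0) swap(a, lo, hi);
  if (cmp(a[hi], a[mid]) < 0) swap(a, mid, hi);
  return mid;
}

function swap(a, i, j) { var t = a[i]; a[i] = a[j]; a[j] = t; }
```

Note that it costs up to three extra comparisons per partition, which is exactly the kind of added inner-loop work mentioned above.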
Mergesort is straightforward, but it tends to have lousy locality of
reference once the large merges start. It's handy for sorts that have
to spill to disk (or slower media), but often not ideal for in-memory
sorting. And again, the obvious recursive implementation, though
elegant in source, will likely lose to an iterative one.
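A bottom-up Mergesort is one way to get the iterative version; a rough sketch (my own formulation, with a scratch buffer the same size as the input) merges runs of width 1, 2, 4, and so on, with no recursion at all:

```javascript
// Bottom-up mergesort: each pass merges adjacent runs of `width`
// elements from `a` into `buf`, then the two arrays swap roles.
// Returns the sorted array (which may be the scratch buffer).
function mergeSort(a, cmp) {
  var n = a.length, buf = new Array(n);
  for (var width = 1; width < n; width *= 2) {
    for (var lo = 0; lo < n; lo += 2 * width) {
      var mid = Math.min(lo + width, n), hi = Math.min(lo + 2 * width, n);
      var i = lo, j = mid, k = lo;
      while (i < mid && j < hi)
        buf[k++] = cmp(a[j], a[i]) < 0 ? a[j++] : a[i++]; // stable merge
      while (i < mid) buf[k++] = a[i++];
      while (j < hi) buf[k++] = a[j++];
    }
    var t = a; a = buf; buf = t;
  }
  return a;
}
```

The extra O(n) buffer is the price of the merge, and the sequential scans through `a` and `buf` are where the locality-of-reference problem shows up once the runs get large.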
Then there are all the usual optimization concerns: memory
consumption, hoisting code out of the inner loop, and so forth.
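As a trivial example of hoisting (the function names here are just for illustration), compare re-reading a property on every iteration with caching it once before the loop:

```javascript
// Property lookup repeated on every pass through the loop:
function indexOfSlow(a, x) {
  for (var i = 0; i < a.length; i++)   // a.length re-evaluated each time
    if (a[i] === x) return i;
  return -1;
}

// Loop-invariant lookup hoisted out of the loop:
function indexOfHoisted(a, x) {
  for (var i = 0, n = a.length; i < n; i++)  // length read once
    if (a[i] === x) return i;
  return -1;
}
```

Whether this actually wins depends on the engine, which is rather the point of the paragraph above.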
(Looking at the implementation of "min sort" in WebKit that Lasse
linked to, I can't help but wonder if it wouldn't be better to do more
preprocessing and get rid of some of those tests in the inner loop.
Yikes.)
And for sorts that are actually implemented in ES, a lot will depend
on how the underlying interpreter works. Interpreting tokenized code
at runtime, for example, is relatively expensive because it ends up
doing indirection through some sort of lookup table. That makes the
path length of the sort's inner loop important for ES engines that are
true interpreters. On the other hand, an ES engine that does JIT
compilation would not be sensitive to this effect, and if the
inner-loop code fit in the CPU's instruction cache, the additional
complexity could very well pay off.
[1] We can leave out the well-known complaints about casual abuse of
complexity notation, etc.