Dann Corbit
txr@x- said:
websnarf said:
The important thing: Measure it. If you are not willing to spend the
time to measure the speed of your code, that on its own proves that
you shouldn't waste your time optimising it.
I agree with this wholeheartedly.
There is one thing though, and it has to do with initial creation of the
code.
If you know that a particular algorithm is asymptotically superior and
you know that it is possible for the data set to grow, choosing the more
efficient algorithm is usually the safer choice. Asymptotically
superior algorithms can definitely be inferior with smaller data sets,
but they won't go into the crapper when the data blows up into a giant
set that you were not expecting.
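A concrete illustration of that tradeoff (a sketch in Python; the function names are mine, not from the thread): a plain linear scan often beats binary search on tiny inputs because of its lower constant overhead, but only the O(log n) search stays reasonable when n blows up.

```python
from bisect import bisect_left

def linear_search(items, key):
    """O(n) scan: low overhead, often fastest on small inputs."""
    for i, x in enumerate(items):
        if x == key:
            return i
    return -1

def binary_search(sorted_items, key):
    """O(log n) on sorted data: the safer choice if the data may grow."""
    i = bisect_left(sorted_items, key)
    if i < len(sorted_items) and sorted_items[i] == key:
        return i
    return -1
```

Timing both with a profiler or `timeit` at various sizes would show the crossover point, which depends on the machine and data; the point is that the asymptotically superior version degrades gracefully.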
So (IMO) the initial design phase is a good place for some thought in
this regard.
After the code is written, never try to optimize without measuring
first.
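To make "measure first" concrete, here is a minimal sketch using Python's standard `timeit` module to compare two ways of building a string, rather than assuming which one is faster (the helper names are mine):

```python
import timeit

def join_concat(words):
    # Repeated concatenation: looks quadratic on paper
    s = ""
    for w in words:
        s += w
    return s

def join_builtin(words):
    # Single pass with the built-in join
    return "".join(words)

words = ["x"] * 1000
t1 = timeit.timeit(lambda: join_concat(words), number=200)
t2 = timeit.timeit(lambda: join_builtin(words), number=200)
print(f"+=   : {t1:.4f}s")
print(f"join : {t2:.4f}s")
```

The results vary by implementation (CPython, for instance, special-cases in-place string concatenation in some situations), which is exactly why measuring beats guessing.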
I have two main comments on this:
1) You can *measure* performance before you even write code. [snip]
It's not possible to measure the performance of anything if the thing
to be measured doesn't exist. You can analyze what you expect the
performance might be, or you can measure the performance of something
else, but you can't measure the performance (or anything else) of a
piece of code until the code has been written.
O(f(n)) is a measure.
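One way to support that claim without any wall-clock timing: count the abstract operations directly and check that they grow as the analysis predicts. A sketch (my own instrumentation, not from the thread) that counts comparisons in insertion sort:

```python
def insertion_sort_comparisons(a):
    """Sort a copy of a by insertion sort, returning the comparison count."""
    a = list(a)
    count = 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            count += 1
            if a[j - 1] > a[j]:
                a[j - 1], a[j] = a[j], a[j - 1]
                j -= 1
            else:
                break
    return count

# Worst case (reverse-sorted input): n(n-1)/2 comparisons, matching O(n^2)
for n in (10, 20, 40):
    print(n, insertion_sort_comparisons(range(n, 0, -1)))
```

Doubling n roughly quadruples the count on reverse-sorted input, which is the O(n^2) behavior showing up as a measurement before any production code exists.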