And furthermore, speed optimization should only occur
after the program is profiled (i.e., use a profiler or
logic analyzer or something else to see where the
most time is spent in a program).
I always thought that if you need to profile to determine whether the program
needs optimization, you don't need to profile. That is, for software mainly
targeted at single-user (desktop) systems.. ;-)
In practice I have found that when something takes far too long to
process, it is extremely rare that you, as the designer, don't know where the
time is spent. Roughly generalized, code is either fast enough or it isn't. The
catch with this generalization is that the problem is rarely the code itself; the
code can be perfect. It just happens to be the wrong code for the task.
An example follows. You are building a vector of unique objects, where each
object is assigned a unique identifier, in this case its index. When a new object
is added to the vector, the trivial approach is to check it against every single
object already in the vector. At the 100,000-object mark, on a current mainstream
desktop system, the processing time can already be measured in minutes. If,
however, the appending of new objects is done through a simple hash table or
map, the same case suddenly runs in seconds. Since this is supposed to be an
example case, let's say 8 minutes for the brute-force linear search and 3.7
seconds for the logarithmic map lookup, for 100,000 objects being indexed.
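A minimal sketch of the two approaches, assuming the objects are std::strings
for brevity (the function names here are mine, invented for illustration):

#include <cstddef>
#include <map>
#include <string>
#include <vector>

// Brute force: scan the whole vector on every insertion.
// O(n) per lookup, O(n^2) to index n objects.
std::size_t index_of_linear(std::vector<std::string>& objects,
                            const std::string& obj)
{
    for (std::size_t i = 0; i < objects.size(); ++i)
        if (objects[i] == obj)
            return i;                  // already indexed
    objects.push_back(obj);            // new object gets the next index
    return objects.size() - 1;
}

// Map-assisted: keep a side index from object to position.
// O(log n) per lookup, O(n log n) to index n objects.
std::size_t index_of_map(std::vector<std::string>& objects,
                         std::map<std::string, std::size_t>& lookup,
                         const std::string& obj)
{
    std::map<std::string, std::size_t>::const_iterator it = lookup.find(obj);
    if (it != lookup.end())
        return it->second;             // already indexed
    objects.push_back(obj);
    lookup[obj] = objects.size() - 1;
    return objects.size() - 1;
}

Same observable behaviour, and the code is "perfect" in both cases; only the
choice of data structure differs, and that is where the minutes go.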
The profiler will tell us what we already know.. that calling the indexing is
really, really slow. Of course, if we build a complete system without ever
compiling and running and doing unit tests and so on, so that a complete system
materializes out of thin air and we suddenly have to start unwinding it, a
profiler might come in handy. Virtually the only times I have ever had to rely
on a profiler (and benefited from it) have been situations where I suddenly
begin to maintain a codebase I am not previously familiar with. Say, if I am one
day given the job of taking a look at a particular piece of source code and
finding ways of improving it.. I could take stabs at the source code and do
local optimizations, but really, making optimizations that are worth the effort
means I must be familiar with the design. In that situation it is more pragmatic
to let the profiler analyze the runtime characteristics and then take appropriate
measures: redesign the system if the need and resources are there.
That said:
- I rarely (read: never) needed a profiler to accomplish performance goals
- I have witnessed a profiler being put to good use more than once (if you want
to be pedantic, twice)
- The C++ Standard doesn't know about profilers at all, so who the hell
cares?
- I'm trolling, so feel free to announce to everyone that you shall promptly
proceed to killfile me
VTune, anyone? ;---o