Execution time of code?

mlt

I have some code that implements various search and sorting algorithms. I
would like to get some kind of time measure for various parts of the
algorithm, like:

public myAlgo() {
    ....
    ....

    float timer = // start measurement timer
    for (...)
    {
        // do various calculations.
    }
    std::cout << "time spent = " << timer;

    float timer2 = // start measurement timer
    for (...)
    {
        // do some other calculations.
    }
    std::cout << "time2 spent = " << timer2;

    ....
    ....
}

Is there some built-in function in C++ that is designed for this kind of
purpose? I am also interested in knowing if there exists some performance
measuring framework for this kind of task.
 
mlt

Victor Bazarov said:
There is 'clock()', but know that it's the last function you actually want
to use to measure the performance of your code. Look into what your OS
provides. Windows has 'QueryPerformanceCounter'. UNIX undoubtedly has
something similar.

Or simply get yourself a profiler. Trust me, your code and your customers
will love you for getting the performance where it should be.

I have googled 'C++ profiler' and got a lot of different hits. Most of them
deal with analysing where calls are made, and not so much with how long a
block of code takes to execute.

Are there any specific profiling tools I should search for to get this kind
of time measurement functionality?
 
Dennis Jones

mlt said:
I have some code that implements various search and sorting algorithms. I
would like to get some kind of time measure for various parts of the
algorithm, like:

Is there some built-in function in C++ that is designed for this kind of
purpose? I am also interested in knowing if there exists some performance
measuring framework for this kind of task.

If you are on Windows, I'll second Victor's recommendation for AQtime by
AutomatedQA, particularly if you want to get high-resolution timing results
on a function-by-function basis.

If you don't necessarily care about high-resolution timing and don't mind
writing some code, you can roll your own. You can do something similar to
what I did. I wrote an RAII class that measures the lifetime of objects of
the class. I followed the pattern of Alexandrescu and Marginean's ScopeGuard
to let me do something like this:

void SomeFunction()
{
MEASURE_SCOPE();

// do a bunch of stuff
}

The MEASURE_SCOPE macro simply creates an object of my RAII scope
measurement class. When the object is destroyed, it logs the object's
lifetime (along with the function name and line number where the object was
created). It does require me to add the macro wherever I want to do
measurements, and it doesn't provide line-by-line timing, but if I need
that, I'll use AQtime. I used Petru Marginean's logging class in the
implementation of my RAII scope measurement class, so I can turn the logging
on and off at runtime with almost no runtime penalty when it is off, thereby
eliminating the need to comment out or disable the macro when I don't need
it. In Windows, my accuracy is dependent on the resolution of the clock
(about 18ms). I could probably re-write it to use a performance counter,
but I haven't had any reason to do that.

So anyway, there's a couple of ideas for you.

- Dennis
 
Alf P. Steinbach

* Victor Bazarov:
There is 'clock()', but know that it's the last function you actually
want to use to measure the performance of your code.

Why do you think that?

'clock' is extremely easy to use, and it's always available.

Hence, I'd say it's the /first/ you should try, absolutely not the last (if
other more heavy instruments are brought to bear, then 'clock' adds nothing).

To use 'clock', call the relevant piece of code an appropriate number of times
to get well within 'clock' resolution, and check how it fared.

This will often be enough to form a good opinion about rough performance, and
usually that's the best one can hope for anyway, no matter how sophisticated
instruments are employed (because detailed performance depends on data, machine
load, usage patterns, and factors that one could never imagine offhand).

Using 'clock' involves some work in adding intrusive code and/or factoring out
the relevant code to be measured.

Using a heavier instrument, unless one already has everything set up for that
instrument (which makes the question of choosing moot), involves even more work.


Cheers & hth.,

- Alf
 
Alf P. Steinbach

* Victor Bazarov:
Alf said:
* Victor Bazarov:
[..]
There is 'clock()', but know that it's the last function you actually
want to use to measure the performance of your code.
Why do you think that?

I don't "think" that. I know that. From experience.

Sorry, all that means is that your experience indicates that for *you* 'clock'
is ungood. Perhaps you have used it incorrectly. Or perhaps you always have an
expensive tool set-up geared towards profiling (which is indicated by your
strong focus on micro-efficiency, so wouldn't surprise me!).

Cheers & hth.,

- Alf
 
Alf P. Steinbach

* Victor Bazarov:
Alf said:
* Victor Bazarov:
Alf P. Steinbach wrote:
* Victor Bazarov:
[..]
There is 'clock()', but know that it's the last function you
actually want to use to measure the performance of your code.
Why do you think that?
I don't "think" that. I know that. From experience.
Sorry, all that means is that your experience indicates that for
*you* 'clock' is ungood.

Yes, absolutely. What source of information do you use when you
claim 'clock's suitability? Marketing hype?

When you claim that 'clock' should be the last one tries, it is just a silly
claim until you have substantiated it somehow with facts and/or logic.

Which would be rather difficult since the claim, it seems, was purely a personal
one, referring to yourself as "you". :-o

Perhaps. Or perhaps on all systems I've had experience with,
the 'clock' was implemented inadequately. Or was relying on
some rather inadequate hardware mechanism.


My "strong focus on micro efficiency"? What gave you that idea?

Recent threads including this one.

And why are you so inclined to try to insult people this fine
morning? I am speaking from experience when I say that dealing
with return values is faster than using exceptions. It's what
my experience indicates.

Most serious investigations of that have yielded the opposite conclusion.

As an example of a serious investigation, the international C++ standardization
committee's Technical Report 18015:2006 on performance, available at <url:
http://www.open-std.org/jtc1/sc22/wg21/docs/TR18015.pdf>, quotes one compiler
vendor as reporting a 6% overhead for the "code" approach to implementing
exceptions (how the compiler does it internally), and asserts 0% for normal case
code for the "data" approach. Since normal case code then avoids having to check
for error cases everywhere it can result in total speed-up. YMMV, of course. :)

Plus, more importantly, as mentioned (but it seems it can't be mentioned often
enough), micro-efficiency is entirely the wrong aspect to elevate to Most
Important Criterion -- e.g. correctness and programmer time are more important.


I don't try to simply convey somebody
else's viewpoint I've read somewhere. And, yes, when it comes
to efficiency, good tools are expensive. Not as expensive as
our customer's time, though.

'clock' just doesn't cut it on Windows, for example. Machines
nowadays are so fast and the software is so complex that time
measurement with the granularity of 20 milliseconds is just not
suitable for measuring time on a function level.

Have you considered calling your routine in a loop (as mentioned in the parts
you snipped from my posting)? <g>

I've not had any problems using 'clock' in Windows.

It doesn't, sorry.

Perhaps it might help other readers, though.


Cheers, & again, hth.,

- Alf
 
Alf P. Steinbach

* Victor Bazarov:
Alf said:
* Victor Bazarov:
Alf P. Steinbach wrote:
* Victor Bazarov:
Alf P. Steinbach wrote:
* Victor Bazarov:
[..]
There is 'clock()', but know that it's the last function you
actually want to use to measure the performance of your code.
Why do you think that?
I don't "think" that. I know that. From experience.
Sorry, all that means is that your experience indicates that for
*you* 'clock' is ungood.
Yes, absolutely. What source of information do you use when you
claim 'clock's suitability? Marketing hype?
When you claim that 'clock' should be the last one tries, it is just
a silly claim until you have substantiated it somehow with facts
and/or logic.
Which would be rather difficult since the claim, it seems, was purely
a personal one, referring to yourself as "you". :-o

Perhaps you have used it incorrectly.
Perhaps. Or perhaps on all systems I've had experience with,
the 'clock' was implemented inadequately. Or was relying on
some rather inadequate hardware mechanism.

Or
perhaps you always have an expensive tool set-up geared towards
profiling (which is indicated by your strong focus on
micro-efficiency, so wouldn't surprise me!).
My "strong focus on micro efficiency"? What gave you that idea?
Recent threads including this one.

And why are you so inclined to try to insult people this fine
morning? I am speaking from experience when I say that dealing
with return values is faster than using exceptions. It's what
my experience indicates.
Most serious investigations of that have yielded the opposite
conclusion. [..]

Just as I suspected. Reading somebody else's reports... It has
to count somewhere, at least in a newsgroup.

Noted, you discount the C++ standardization committee's Technical Report on
performance when discussing C++ performance.

And in addition resort to stupid personal insinuations (counting the one quoted
above, plus the one about insulting people, you're up to 2 so far).

A discussion here can not be fruitful on those terms: completely discounting the
technical facts, referring to unspecified personal experience, and accentuating
the personal aspect.


Oh well...

[..]
Have you considered calling your routine in a loop (as mentioned in
the parts you snipped from my posting)? <g>

No. I do not consider calling any routine in a loop unless the
logic of our multi-million LOC application requires it. Figuring
out how long in micro- or nano-seconds any particular function
would execute is not a good use of anybody's time. Or even the CPU
time, for that matter. It is only good in a project with a few
scores of functions, well, a few hundreds, maybe. When the count
of files/classes/projects goes beyond a number of the fingers of
the entire team's hands (and feet), performance of a single function
is of no consequence. On a toy project, 'clock' would definitely
suffice.

If you can't, in most cases, call the routine(s) in a loop, then you have a very
serious spaghetti problem. :)

That said, there are some special cases where some small routine is called
zillions of times from zillions of places.

But such cases are rare.


Cheers & hth.,

- Alf
 
Kai-Uwe Bux

Alf said:
* Victor Bazarov: [snip]
And in addition resort to stupid personal insinuations (counting the one
quoted above, plus the one about insulting people, you're up to 2 so far).
[snip]

I don't understand your count. It appears that you are discounting remarks
of your own like:

"Perhaps you have used it incorrectly." [with regard to std::clock()]

which _is_ an unveiled insinuation of incompetence (since you have never
seen the code you talk about and engage just in speculation).


[..]
Have you considered calling your routine in a loop (as mentioned in
the parts you snipped from my posting)? <g>

No. I do not consider calling any routine in a loop unless the
logic of our multi-million LOC application requires it. Figuring
out how long in micro- or nano-seconds any particular function
would execute is not a good use of anybody's time. Or even the CPU
time, for that matter. It is only good in a project with a few
scores of functions, well, a few hundreds, maybe. When the count
of files/classes/projects goes beyond a number of the fingers of
the entire team's hands (and feet), performance of a single function
is of no consequence. On a toy project, 'clock' would definitely
suffice.

If you can't, in most cases, call the routine(s) in a loop, then you have
a very serious spaghetti problem. :)

You do it again. You don't know the code in question. On top of that, you
misrepresent the point: Victor did not say that he _can't_ call the routine
in a loop but that he did not consider that because other measurements
yield way more meaningful data. If that, to you, can only be explained in
terms of a spaghetti code problem, the reason can as well be a lack of
imagination on your part. In any case, to speculate about the quality of
unseen code and to call its quality into question based on essentially no
evidence is _rude_. (And it does not add anything to the technical merits
of the discussion.)

[snip]


Best

Kai-Uwe Bux
 
Alf P. Steinbach

* Kai-Uwe Bux:
Alf said:
* Victor Bazarov:
Alf P. Steinbach wrote:
* Victor Bazarov:
Alf P. Steinbach wrote:
* Victor Bazarov:
Alf P. Steinbach wrote:
* Victor Bazarov:
[snip]
And in addition resort to stupid personal insinuations (counting the one
quoted above, plus the one about insulting people, you're up to 2 so far).
[snip]

I don't understand your count. It appears that you are discounting remarks
of your own like:

"Perhaps you have used it incorrectly." [with regard to std::clock()]

which _is_ an unveiled insinuation of incompetence (since you have never
seen the code you talk about and engage just in speculation).

No, that is an unfounded insinuating speculation that I have insinuated something.

Jeez.

When someone states in this newsgroup that they have problems using the 'clock'
routine, hinting about something to do with Windows, one naturally queries for
some concrete example.

That's just being helpful.

Otherwise, anybody could (and considering the above, /can/) state that they or
someone else are being the victims of veiled malevolent insinuation simply by
(1) stating there is a problem using, say, 'strcat', and then when respondents
list among a number of possible reasons that perhaps they're using the routine
incorrectly, respond in turn that (2) hey you're insinuating I'm incompetent,
thereby (3) insinuating something rather more nasty about the respondent.

[..]
Have you considered calling your routine in a loop (as mentioned in
the parts you snipped from my posting)? <g>
No. I do not consider calling any routine in a loop unless the
logic of our multi-million LOC application requires it. Figuring
out how long in micro- or nano-seconds any particular function
would execute is not a good use of anybody's time. Or even the CPU
time, for that matter. It is only good in a project with a few
scores of functions, well, a few hundreds, maybe. When the count
of files/classes/projects goes beyond a number of the fingers of
the entire team's hands (and feet), performance of a single function
is of no consequence. On a toy project, 'clock' would definitely
suffice.
If you can't, in most cases, call the routine(s) in a loop, then you have
a very serious spaghetti problem. :)

You do it again. You don't know the code in question.

You do again what you did above, insinuating by trying to give the impression
that someone has insinuated something -- that only exists in your fantasy.

If the code in question is a single example, then it has no power as argument
and is simply noise inserted into the discussion, e.g. to divert attention from
the technical matter discussed. Which I can readily believe because Victor
discounted and snipped all reference to C++ committee's report on performance
and instead added an insinuation. It seems all about diverting attention and
obscuring the subject matter, and I'm not insinuating anything when I state very
openly that in my opinion, what I'm thinking, that's exactly what happened.

If the code in question is, on the other hand, meant as a general argument, then
talking about such code in general, as I did above, is appropriate, and carries
no insinuation about any concrete manifestation of the problem.

On top of that, you
misrepresent the point: Victor did not say that he _can't_ call the routine
in a loop but that he did not consider that because other measurements
yield way more meaningful data. If that, to you, can only be explained in
terms of a spaghetti code problem, the reason can as well be a lack of
imagination on your part.

Yeah. If so then some concrete examples would be nice. But the concrete is
severely lacking here, even to the degree of snipping away facts and references
(We Shall Have No Facts, they're so bothersome), and the personal is very much
present, I'm sad to observe.

In any case, to speculate about the quality of
unseen code and to call its quality into question based on essentially no
evidence is _rude_.

Oh God, help me. It's rude to discuss the quality of code? Here?

(And it does not add anything to the technical merits
of the discussion.)

Since I'm the only one who has discussed the technical here, with Victor and you
resorting to /snipping away/ the technical and going, via vague implications,
for the personal (can it really hurt so much being confronted on a technical
issue?), such a statement -- misleading about intentions (yes, it's
insinuating) and technically meaningless -- well, it doesn't surprise me.


- Alf
 
coal

* Victor Bazarov:
Alf said:
* Victor Bazarov:
Alf P. Steinbach wrote:
* Victor Bazarov:
Alf P. Steinbach wrote:
* Victor Bazarov:
[..]
There is 'clock()', but know that it's the last function you
actually want to use to measure the performance of your code.
Why do you think that?
I don't "think" that.  I know that.  From experience.
Sorry, all that means is that your experience indicates that for
*you* 'clock' is ungood.
Yes, absolutely.  What source of information do you use when you
claim 'clock's suitability?  Marketing hype?
When you claim that 'clock' should be the last one tries, it is just
a silly claim until you have substantiated it somehow with facts
and/or logic.
Which would be rather difficult since the claim, it seems, was purely
a personal one, referring to yourself as "you". :-o
Perhaps you have used it incorrectly.
Perhaps.  Or perhaps on all systems I've had experience with,
the 'clock' was implemented inadequately.  Or was relying on
some rather inadequate hardware mechanism.
Or
perhaps you always have an expensive tool set-up geared towards
profiling (which is indicated by your strong focus on
micro-efficiency, so wouldn't surprise me!).
My "strong focus on micro efficiency"?  What gave you that idea?
Recent threads including this one.
And why are you so inclined to try to insult people this fine
morning?  I am speaking from experience when I say that dealing
with return values is faster than using exceptions.  It's what
my experience indicates.
Most serious investigations of that have yielded the opposite
conclusion. [..]
Just as I suspected.  Reading somebody else's reports...  It has
to count somewhere, at least in a newsgroup.

Noted, you discount the C++ standardization committee's Technical Report on
performance when discussing C++ performance.

And in addition resort to stupid personal insinuations (counting the one quoted
above, plus the one about insulting people, you're up to 2 so far).

A discussion here can not be fruitful on those terms: completely discounting the
technical facts, referring to unspecified personal experience, and accentuating
the personal aspect.

I'm not sure if you responded to this very well:
'clock' just doesn't cut it on Windows, for example. Machines
nowadays are so fast and the software is so complex that time
measurement with the granularity of 20 milliseconds is just not
suitable for measuring time on a function level.

I've done some performance testing on Windows and Linux --
www.webEbenezer.net/comparison.html. On Windows I use clock
and on Linux I use gettimeofday. From what I can tell
gettimeofday gives more accurate results than clock on Linux.
Depending on how this thread works out, I may start using the
function Victor mentioned on Windows.


I'm interested in trading links with people on webEbenezer.net.
I don't care if your site doesn't get a lot of hits. I've been
there and done that and know it can be tough.

Brian Wood
Ebenezer Enterprises
www.webEbenezer.net
 
Alf P. Steinbach

* (e-mail address removed):
Oh. Well, the question we were talking about was a timer-thing for measuring the
performance of various parts of an algorithm. 'clock' is eminently usable for
that; it's trivial to accumulate results, and/or adjust argument values for the
measured thing, to get into the resolution range, and I described that in
concrete in my first response in this thread. Not that it's necessarily how I
would do it (Windows' GetTickCount API routine comes to mind... ;-)).

But I'm taking issue with Victor's statement that 'clock' is the last thing you
should try for this.

That is so far just a silly assertion that he's failed to back up in any way,
veering instead into general profiling of massive applications, adding in
various personal perspectives, snipping facts and references, etc.

I've done some performance testing on Windows and Linux --
www.webEbenezer.net/comparison.html. On Windows I use clock
and on Linux I use gettimeofday. From what I can tell
gettimeofday gives more accurate results than clock on Linux.
Depending on how this thread works out, I may start using the
function Victor mentioned on Windows.

Performance counters in Windows can be great for general profiling yes.

And (OFF-TOPIC for clc++) you can even access all that data without any special
tools, just importing it into nearest spreadsheet.

But for just measuring an algorithm, the OP's problem, that approach can be and
IME (although I have not very much experience with the performance counters)
usually is massive overkill... ;-)

I'm interested in trading links with people on webEbenezer.net.
I don't care if your site doesn't get a lot of hits. I've been
there and done that and know it can be tough.

Thanks. But I'm not really into link trading. It's just that the free Norwegian
hosting I've used is being terminated (for all thousands of homepages) in May,
so I had to find some new free hosting, and they require 10 hits per month,
otherwise the site is deemed inactive and is removed. I didn't know how much
traffic I had. As it turned out it seems I have 30-40 hits per day (unique
visitors), so I should be safe against the 10 visitors per month criterion. :)


Cheers,

- Alf
 
Kai-Uwe Bux

Alf said:
* Kai-Uwe Bux:
Alf said:
* Victor Bazarov:
Alf P. Steinbach wrote:
* Victor Bazarov:
Alf P. Steinbach wrote:
* Victor Bazarov:
Alf P. Steinbach wrote:
* Victor Bazarov: [snip] [snip]
[..]
Have you considered calling your routine in a loop (as mentioned in
the parts you snipped from my posting)? <g>
No. I do not consider calling any routine in a loop unless the
logic of our multi-million LOC application requires it. Figuring
out how long in micro- or nano-seconds any particular function
would execute is not a good use of anybody's time. Or even the CPU
time, for that matter. It is only good in a project with a few
scores of functions, well, a few hundreds, maybe. When the count
of files/classes/projects goes beyond a number of the fingers of
the entire team's hands (and feet), performance of a single function
is of no consequence. On a toy project, 'clock' would definitely
suffice.
If you can't, in most cases, call the routine(s) in a loop, then you
have a very serious spaghetti problem. :)

You do it again. You don't know the code in question.

You do again what you did above, insinuating by trying to give the
impression
that someone has insinuated something -- that only exists in your
fantasy.

I think, it exists in your post and not just in my fantasy. But we shall
see. At least, I claim that the way I understood you is a viable
interpretation, which you could have anticipated.
If the code in question is a single example, then it has no power as
argument and is simply noise inserted into the discussion, e.g. to divert
attention from the technical matter discussed. Which I can readily believe
because Victor discounted and snipped all reference to C++ committee's
report on performance and instead added an insinuation. It seems all about
diverting attention and obscuring the subject matter, and I'm not
insinuating anything when I state very openly that in my opinion, what I'm
thinking, that's exactly what happened.

So you think, the code in question is a single example.
If the code in question is, on the other hand, meant as a general
argument, then talking about such code in general, as I did above, is
appropriate, and carries no insinuation about any concrete manifestation
of the problem.

This paragraph is weird, then: it seems that you (like me) think the code
that Victor was talking about is a single example (the if-clause of the
previous paragraph). But in this paragraph, you say that in responding, you
responded as if it is not a single example but "code in general", wherefore
your response carries no insinuation. When I read your post, that escaped
me because it appears clear from the quote that Victor is talking about a
specific, though large, piece of code: the "multi-million LOC application".
I took your response to be also talking about this specific piece of code
since there was no indication that the perspective changed to a more
generic point of view.

You may be right that Victor's specific example does not carry weight in the
discussion. But instead of making that point, you called the quality of the
piece of code into question by asserting that it suffers from a spaghetti
problem without having seen it. This is not in my mind, this is in your
post.

[snip]
Oh God, help me. It's rude to discuss the quality of code? Here?

I never claimed that discussing the code per se is rude. I maintain, though,
that speculating about unseen code and calling its quality into question is
rude. I am sure, you see the difference.

Since I'm the only one who has discussed the technical here, Victor and
you resorting to /snipping away/ the technical

As for me, I snip the technical parts since I was _only_ interested in your
way of counting that gets Victor "up two". That is a non-technical issue.
and going, via vague implications,

I don't think, what I write is vague.
for the personal (can it really hurt so much being confronted on a
technical issue?), such a statement that is misleading about
intentions -- yes, it's insinuating -- and technically meaningless,
well it doesn't surprise me.

Since I am not interested in this particular technical problem, I focus
entirely on your way of counting. For the same reason, I am not being
confronted on a technical issue.


Best

Kai-Uwe Bux
 
James Kanze

* Victor Bazarov:

[...]
Why do you think that?
'clock' is extremely easy to use, and it's always available.
Hence, I'd say it's the /first/ you should try, absolutely not
the last (if other more heavy instruments are brought to bear,
then 'clock' adds nothing).

I tend to agree, but it's important to understand what clock()
actually measures on your system. According to the C standard,
it should measure CPU time used, if this is available. In VC++,
the last time I checked, it was broken, and returned elapsed
time.

On all of the Unix based systems I've used, it is at least as
good as anything else for CPU time. But of course, do you want
to count the time spent handling a page fault, or not? Or maybe
elapsed time is what you want (but what does that mean on a
machine that is running other programs at the same time).
Still, for all of the benchmarking I've done, I've just used
clock.

As you mentioned in the parts I've cut, the results won't
really be exact, because exact really doesn't exist in a
multi-process environment with virtual memory and who knows what
all else. But at least on Unix based machines, they've always
been close enough for my purposes. And even under Windows, if
the function is pure CPU, and I'm not doing anything else on the
machine. Just make sure you do a number of runs, and eliminate
the outliers. (I'll always do a first execution before starting
measurements, to ensure that the code being measured is actually
loaded.)
 
James Kanze

On Mar 5, 11:55 pm, "Alf P. Steinbach" <[email protected]> wrote:

[...]
I've done some performance testing on Windows and Linux
--www.webEbenezer.net/comparison.html. On Windows I use clock
and on Linux I use gettimeofday. From what I can tell
gettimeofday gives more accurate results than clock on Linux.
Depending on how this thread works out, I may start using the
function Victor mentioned on Windows.

On Unix based machines, clock() and gettimeofday() measure
different things. I use clock() when I want what clock()
measures, and gettimeofday() when I want what gettimeofday()
measures. For comparing algorithms to see which is more
effective, this means clock().

Victor is right about one thing: the implementation of clock()
in VC++ is broken, in the sense that it doesn't conform to the
specification of the C standard, e.g. that "The clock function
returns the implementation's best approximation to the processor
time used by the program since the beginning of an
implementation-defined era related only to the program
invocation." The last time I checked, the clock() function in
VC++ returned elapsed time, and not processor time. (Of course,
if you run enough trials, on a quiescent machine, the functions
involved are pure CPU, and the goal is just to compare, not to
obtain absolute values, the information obtained is probably
adequate anyway.)

Of course, neither the C standard nor Posix are very precise
about what is meant by "processor time". Depending on what you
are trying to do, the function times() or some of the timer_...
functions might be more appropriate, at least under Unix (but I
presume that Windows also has something similar). But I
wouldn't bother until I'd determined that clock() wasn't
sufficient. (The Unix command time, for example, will probably
use gettimeofday for the real time, and times for the user and
sys time.)

For the rest: if you have a large application which is running
slow, you need a profiler, to determine where it is running
slow. Having found the critical function, however, if often
makes sense to write up a quick benchmark harness to compare
different possible implementations of the function, in order to
determine which one is best without having to rebuild and
remeasure the entire application each time.
 
Martin Eisenberg

Kai-Uwe Bux said:
I think, it exists in your post and not just in my fantasy. But
we shall see. At least, I claim that the way I understood you is
a viable interpretation, which you could have anticipated.

One would think that most regulars have interacted with Alf for
enough years to take these arguefests in stride...


Martin
 
Lionel B

[...]
I've done some performance testing on Windows and Linux
--www.webEbenezer.net/comparison.html. On Windows I use clock and on
Linux I use gettimeofday. From what I can tell gettimeofday gives more
accurate results than clock on Linux. Depending on how this thread
works out, I may start using the function Victor mentioned on Windows.

On Unix based machines, clock() and gettimeofday() measure different
things. I use clock() when I want what clock() measures, and
gettimeofday() when I want what gettimeofday() measures. For comparing
algorithms to see which is more effective, this means clock().

FWIW, on Linux at least, there is also 'clock_gettime()' which can access
a variety of clocks including CLOCK_PROCESS_CPUTIME_ID, described as a
"High resolution per-process timer". As far as I can make out, this
measures something similar to 'clock()' but at higher resolution. It does
have issues, though, on some SMP systems since it may access the CPU's
built-in timer and CPU timers on SMP systems are not guaranteed to be in
sync. It can thus potentially give bogus results if e.g. a process
migrates to another CPU. I'm not sure, but I'd imagine that something
similar may apply to high-resolution timers on Windows.

[...]
 
A

Alf P. Steinbach

* Jeff Schwab:
And after all, who *doesn't* fantasize about C++ flamewars?

I don't, but it's amusing: after a flamefest of transparent insinuations from
Victor (consistently snipping away the technical, changing context, and so on),
then from Kai-Uwe, Martin adds one more and that's /all/ he manages to utter.

It seems you guys think this is a social engineering group, where technical
matters can be decided by girlish put-downing, pouting, posturing and suchlike.


- Alf
 
C

coal

On Mar 5, 11:55 pm, "Alf P. Steinbach" <[email protected]> wrote:

    [...]
I've done some performance testing on Windows and Linux
-- www.webEbenezer.net/comparison.html.  On Windows I use clock
and on Linux I use gettimeofday.  From what I can tell
gettimeofday gives more accurate results than clock on Linux.
Depending on how this thread works out, I may start using the
function Victor mentioned on Windows.

On Unix based machines, clock() and gettimeofday() measure
different things.  I use clock() when I want what clock()
measures, and gettimeofday() when I want what gettimeofday()
measures.  For comparing algorithms to see which is more
effective, this means clock().

I've just retested the test that saves/sends a list<int> using
clock on Linux. The range of ratios from the Boost version to
my version was between 1.4 and 4.5. The thing about clock is
it returns values like 10,000, 20,000, 30,000, 50,000, 60,000, etc.
I would be more comfortable with it if I could get it to round
its results less. The range of results with gettimeofday for the
same test is not so wide -- between 2.0 and 2.8. I don't run
other programs while I'm testing besides a shell/vi and firefox.
I definitely don't start or stop any of those between the tests,
so I'm of the opinion that the elapsed time results are meaningful.

Victor is right about one thing: the implementation of clock()
in VC++ is broken, in the sense that it doesn't conform to the
specification of the C standard, e.g. that "The clock function
returns the implementation's best approximation to the processor
time used by the program since the beginning of an
implementation-defined era related only to the program
invocation."  The last time I checked, the clock() function in
VC++ returned elapsed time, and not processor time.  (Of course,
if you run enough trials, on a quiescent machine, the functions
involved are pure CPU, and the goal is just to compare, not to
obtain absolute values, the information obtained is probably
adequate anyway.)

Except for the part about the functions being purely CPU, this
describes my approach/intent.


Brian Wood
Ebenezer Enterprises
www.webEbenezer.net
 
A

Alf P. Steinbach

* (e-mail address removed):
The thing about clock is
it returns values like 10,000, 20,000, 30,000, 50,000, 60,000, etc.
I would be more comfortable with it if I could get it to round
its results less.

For a difference between 'clock' results, i.e. a time interval expressed in
'clock' units, convert to 'double' and then divide by CLOCKS_PER_SEC.

But note: do that /after/ any subtraction of end time from start time.

That's part of using 'clock' correctly as alluded to earlier in the thread
(another part of that is James' observation about wall time versus processor time).


Cheers & hth.,

- Alf
 
C

coal

* (e-mail address removed):


For a difference between 'clock' results, i.e. a time interval expressed in
'clock' units, convert to 'double' and then divide by CLOCKS_PER_SEC.

But note: do that /after/ any subtraction of end time from start time.

I'm aware of that, but don't see the point here. Both the Boost and
Ebenezer numbers would be divided by the same constant. It is simpler,
I think, to just add up the times from clock for each version and then
figure out the ratio. (I could document the results from clock, but
for now I just document the ratio.) I use semicolons within a shell
to run each version 3 times in a row. I execute that command twice.
The second group starts up right on the heels of the first. So the
test is run 6 times total. I ignore the first 3 runs/times and do
those just to get the machine ready for the next 3. Anyway, my
impression, and it seemed like Victor has a similar impression, is
the output from clock isn't as precise as it could be. The range
I got earlier from clock, 1.4 - 4.5, leaves quite a bit of room for
manipulation if that is a person's goal.
That's part of using 'clock' correctly as alluded to earlier in the thread
(another part of that is James' observation about wall time versus processor time).

I agree with James' point and plan to head in that direction.
I'm not sure if I'll use clock or platform specific APIs on Linux,
but on Windows it probably won't involve clock.


Brian Wood
Ebenezer Enterprises
www.webEbenezer.net
 
