get the time needed to process one function.

Daffyduck

hi,
is it possible to get, with a precision of 0.0000 seconds, the time taken
by one function?
i checked clock() but i'm getting 0 with this:

1 #include <stdio.h>
2 #include <time.h>
3
4 int
5 main(){
6 float time1, time2;
7 int a;
8 time1 = clock();
9 for(a=0;a<100; a++){
10 printf("a");
11 }
12 time2 = clock();
13 printf("seconds: %f\n",((float)tempo2 - (float)tempo1) /
CLOCKS_PER_SEC);
14 }
 
Barry Schwarz

hi,
is it possible to get, with a precision of 0.0000 seconds, the time taken
by one function?
i checked clock() but i'm getting 0 with this:

1 #include <stdio.h>
2 #include <time.h>
3
4 int
5 main(){
6 float time1, time2;
7 int a;
8 time1 = clock();

clock returns a value of type clock_t, which is implementation-defined.
Converting to a float as you do here, or even to a double, could cause
it to lose significant digits. (On my system clock_t is some flavor
of long and I do get a diagnostic stating this exact point.)
9 for(a=0;a<100; a++){
10 printf("a");
11 }

What makes you think this loop consumes a measurable amount of time as
clock_t and CLOCKS_PER_SEC are defined on your system?
12 time2 = clock();
13 printf("seconds: %f\n",((float)tempo2 - (float)tempo1) /
CLOCKS_PER_SEC);

There are no variables named tempo2 and tempo1 in your program. How
did you get it to even compile? Alternately, where is your real code?
(Hint: in the future use cut and paste; don't try to retype.)

Since both clock_t and CLOCKS_PER_SEC are implementation defined, you
will have to look at your documentation to determine the best way to
compute with them and the best way to print the results.

By the way, to get the precision you asked for, CLOCKS_PER_SEC on your
system must be at least 10,000.
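
For example, a minimal sketch of the portable approach (the busy-work
loop here is arbitrary; the point is to keep clock_t values in their
native type and convert to double only for the final division):

#include <stdio.h>
#include <time.h>

int main(void)
{
    volatile long sink = 0;   /* volatile discourages optimising the loop away */
    long i;
    clock_t start, end;

    start = clock();
    for (i = 0; i < 10000000L; i++)
        sink += i;            /* arbitrary busy work */
    end = clock();

    /* Subtract in clock_t; convert to double only for the division. */
    printf("seconds: %f\n", (double)(end - start) / CLOCKS_PER_SEC);
    return 0;
}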
 
Richard Tobin

Daffyduck said:
is it possible to get, with a precision of 0.0000 seconds, the time taken
by one function?

I assume you mean to a precision of 0.0001 seconds, or something like
that.
9 for(a=0;a<100; a++){
10 printf("a");
11 }

Bear in mind that this may well take much less than 0.0001 seconds on
a modern computer.

The clock() function is not guaranteed to be precise enough for this,
and quite likely isn't. You could see if your platform has a timing
function with finer precision, but even if it appears to give the
time in microseconds it may not really be that precise - it might
just convert 60ths of a second to microseconds.
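
On POSIX systems, for example, clock_gettime() reports time in
nanoseconds - though again, the reported resolution may be finer than
the real one. A minimal sketch, assuming a POSIX platform:

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec a, b;

    clock_gettime(CLOCK_MONOTONIC, &a);
    /* ... code under test ... */
    clock_gettime(CLOCK_MONOTONIC, &b);

    printf("%f seconds\n",
           (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9);
    return 0;
}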

The obvious thing to do is to perform a lot more iterations, so that
the total time taken is of the order of a second or more. But this
has its own complications. In your example, the stdio buffer probably
won't get flushed (because it's bigger than 100 bytes), but if you
increase the number of iterations it will. What was it you wanted to
measure - the time to put a character in the buffer, or the average
time to output characters?
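
One way to make the timed region well defined is to flush explicitly
on both sides of it - a sketch (the iteration count and the fflush()
placement are choices you'd adapt to what you want to measure):

#include <stdio.h>
#include <time.h>

int main(void)
{
    long i, n = 1000000L;
    clock_t t0, t1;

    fflush(stdout);           /* start from an empty buffer */
    t0 = clock();
    for (i = 0; i < n; i++)
        putchar('a');
    fflush(stdout);           /* charge the final flush to the timed region */
    t1 = clock();

    /* Report on stderr so it doesn't mix with the timed output. */
    fprintf(stderr, "avg per char: %g seconds\n",
            (double)(t1 - t0) / CLOCKS_PER_SEC / n);
    return 0;
}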

-- Richard
 
Daffyduck

hi!
sorry for the late reply,
my code in the first post was wrong..
now i've fixed it and i have this:


$./a.out
time is : 0.010000
$
..
..

time1 = clock();
one_huge_function();
time2 = clock();
printf("time is : %f\n", ((float)time2 - (float)time1) /
CLOCKS_PER_SEC);
..
..
..


i think it's ok, but am i losing precision?
i mean i get 0.010000 as the result, but it's impossible for this printf
to give something like 0.001 or 0.0001, right?

Thanks a lot
 
Eric Sosman

Daffyduck said:
hi,
is it possible to get, with a precision of 0.0000 seconds, the time taken
by one function?
i checked clock() but i'm getting 0 with this:

As an aside to what other responders have observed, I'd
like to point out that you are *not* getting a result of zero
from the code you showed.

Next time you want help with a piece of code, please
Please PLEASE show the actual code your question is about,
and not some half-baked inaccurate uncompilable Other.
 
Eric Sosman

pete said:
In posted code, such as that to which you are referring,
line numbers really don't help.
Without them
I could just copy and test the program,
but with them, it's tedious.

The line numbers aren't the only problem. Even after
they're eliminated, the posted code would not produce the
result claimed.

Yes, we can all guess what the actual code was like.
There are two likely possibilities, and either would lead
to the same responses to the O.P.'s question. But it's
not always so clear: Sometimes there are several rewritings
at similar "edit distances" from the posted junk, and we
wind up pondering a problem that has nothing to do with
what the poster was interested in. That's why I'm scolding
the O.P. here: On this occasion he didn't waste a lot of
time and trouble, but it'd be good to cure him of bad habits
before they grow worse.
 
Bartc

Daffyduck said:
hi,
is it possible to get, with a precision of 0.0000 seconds, the time taken
by one function?
i checked clock() but i'm getting 0 with this:

clock() isn't a good way of measuring small timings. I normally measure such
things like this:

#include <stdio.h>
#include <time.h>

int testfn(int a, int b) {
    return a*b + a + b;
}

int main(void) {
    int t;               /* Or the type returned by clock() */
    int N = 100000000;
    int i;

    t = clock();

    for (i = 0; i < N; ++i) {
        testfn(i, i+1);
    }

    t = clock() - t;

    printf("Total time: %d msec\n", t);   /* Assume milliseconds */
    printf("Per iteration: %f ns\n", ((double)t*1000000)/N);
}

This will include the loop overhead; this can be measured separately and
adjusted for, or unroll the loop a little (so 10 calls to testfn, executed
N/10 times).
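
For example, a sketch of the measure-and-subtract variant (same testfn;
note an optimising compiler may delete the empty loop outright, which is
another reason to keep optimisation turned down):

#include <stdio.h>
#include <time.h>

int testfn(int a, int b) {
    return a*b + a + b;
}

int main(void) {
    long i, N = 100000000L;
    volatile int sink = 0;        /* keeps the calls from being optimised out */
    clock_t t, looptime, calltime;

    t = clock();
    for (i = 0; i < N; ++i) { }   /* empty loop: pure overhead */
    looptime = clock() - t;

    t = clock();
    for (i = 0; i < N; ++i)
        sink = testfn((int)i, (int)i + 1);
    calltime = clock() - t - looptime;   /* net cost of the calls */

    printf("Per call: %f ns\n",
           (double)calltime * 1e9 / CLOCKS_PER_SEC / N);
    return 0;
}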

Adjust N so that total time is at least 1 second, and preferably at least 10
(you might need ints above 32 bits).

You should minimise compiler optimisations because they mess with the code
too much (you are measuring /your code/ not the compiler writers').

But repeated operations on the same code and data can also be optimised by
the processor, so the figures you get could be misleading, if the actual
running code will be a different mix. On the other hand, sometimes you are
only interested in relative timings, when you are tweaking code for example.
 
CBFalconer

pete said:
.... snip ...

I think that
printf("a\n");
would be better than
printf("a");

With the newline character there is a stream flush with every
function call. Without it, there may be a flush every once
in a while, or at the end of the program.

Just use:

puts("a");

and get identical results. puts is safe, gets is not.
 
MisterE

Daffyduck said:
hi,
is it possible to get, with a precision of 0.0000 seconds, the time taken
by one function?
i checked clock() but i'm getting 0 with this:

1 #include <stdio.h>
2 #include <time.h>
3
4 int
5 main(){
6 float time1, time2;
7 int a;
8 time1 = clock();
9 for(a=0;a<100; a++){
10 printf("a");
11 }
12 time2 = clock();
13 printf("seconds: %f\n",((float)tempo2 - (float)tempo1) /
CLOCKS_PER_SEC);
14 }

If you are running on Windows, look up the performance counter in the
Windows API. In fact, any modern PC processor has a counter which ticks
over with every clock cycle, giving you extreme accuracy; you will have
to look up the documentation on how to read it. As for standard portable
C, there is basically no way you can be assured of getting precision
anywhere near microseconds.
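
For example, a minimal sketch using the Windows high-resolution counter
(QueryPerformanceCounter; Windows-only, not portable C):

#include <stdio.h>
#include <windows.h>

int main(void)
{
    LARGE_INTEGER freq, start, end;

    QueryPerformanceFrequency(&freq);   /* counts per second */
    QueryPerformanceCounter(&start);
    /* ... code under test ... */
    QueryPerformanceCounter(&end);

    printf("%f seconds\n",
           (double)(end.QuadPart - start.QuadPart) / freq.QuadPart);
    return 0;
}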
 
