timer in milliseconds

S_at_work

hello,



Can anybody give me a piece of code, or a hint, on how I can measure the
time I need for connect() to another host, in milliseconds?



My problem is that I want to test the performance of IPsec, so I need to
know, in milliseconds, how long it takes to connect to another host
over IPsec.



1) I tried clock(), but clock() doesn't work, because connect() doesn't use
CPU time like sleep(), so I always get 0 milliseconds, but the
connect takes about 2 seconds (I tested this with a sniffer).



Is there any way to get the time in milliseconds?



Thanks a lot and best regards, Stefan
 
Dan Pop

Can anybody give me a piece of code, or a hint, on how I can measure the
time I need for connect() to another host, in milliseconds?

If you post this question to a newsgroup dedicated to programming on
your platform, I'm reasonably sure someone can. There is no portable
solution to your problem; the best you can get from standard C for
measuring real time intervals is time(), and it typically works with
second resolution, i.e. three orders of magnitude worse than what you
need.
1) I tried clock(), but clock() doesn't work, because connect() doesn't use
CPU time like sleep(), so I always get 0 milliseconds, but the

Where did you get the idea that sleep() uses CPU time from?
connect takes about 2 seconds (I tested this with a sniffer).

Yup, forget about clock(), it's not what you need, regardless of any
resolution issues: it does NOT measure real time intervals.
Is there any way to get the time in milliseconds?

If you're on a Unix platform and if microseconds are good enough, check
gettimeofday(2).
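
A minimal sketch of the idea (POSIX only, untested; the address and port
below are placeholders for your IPsec peer, and error checking is omitted):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/time.h>        /* gettimeofday(), struct timeval */
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    struct timeval before, after;
    struct sockaddr_in peer;
    double ms;
    int sock = socket(AF_INET, SOCK_STREAM, 0);

    memset(&peer, 0, sizeof peer);
    peer.sin_family = AF_INET;
    peer.sin_port = htons(80);                      /* placeholder port */
    peer.sin_addr.s_addr = inet_addr("192.0.2.1");  /* placeholder host */

    gettimeofday(&before, NULL);
    connect(sock, (struct sockaddr *) &peer, sizeof peer);
    gettimeofday(&after, NULL);

    ms = (after.tv_sec - before.tv_sec) * 1000.0
       + (after.tv_usec - before.tv_usec) / 1000.0;
    printf("connect() took %.3f ms\n", ms);

    close(sock);
    return 0;
}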

Dan
 
Glen Herrmannsfeldt

Can anybody give me a piece of code, or a hint, on how I can measure the
time I need for connect() to another host, in milliseconds?

Many processors now have a register that counts clock cycles. If yours
does, this probably will do what you want. (You also need to know the clock
rate, though.)

It takes a two-instruction assembly program on x86 (off topic), for example.
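
With gcc on x86, for instance, it can be read with inline assembly along
these lines (a sketch only, compiler- and processor-specific, so take the
details to a platform newsgroup):

#include <stdio.h>

/* Read the x86 time-stamp counter (gcc inline asm). */
static unsigned long long read_tsc(void)
{
    unsigned int lo, hi;
    __asm__ __volatile__ ("rdtsc" : "=a" (lo), "=d" (hi));
    return ((unsigned long long) hi << 32) | lo;
}

int main(void)
{
    unsigned long long start, end;

    start = read_tsc();
    /* ... code being timed ... */
    end = read_tsc();

    /* Cycles elapsed; divide by the CPU clock rate (in Hz) to get seconds. */
    printf("elapsed cycles: %llu\n", end - start);
    return 0;
}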

-- glen
 
Keith Thompson

In <[email protected]>, S_at_work writes:


Where did you get the idea that sleep() uses CPU time from?

I think he meant that connect(), like sleep(), doesn't use CPU time.
(It uses some, of course, but not enough to make it useful to measure
it.)
 
Randy Howard

Many processors now have a register that counts clock cycles. If yours
does, this probably will do what you want. (You also need to know the clock
rate, though.)

It takes a two-instruction assembly program on x86 (off topic), for example.

Please don't post this OT stuff, especially since it is likely to produce
bad results. For example, the solution using rdtsc for x86 is horribly
broken on SMP systems.
 
Glen Herrmannsfeldt

Randy Howard said:
Please don't post this OT stuff, especially since it is likely to produce
bad results. For example, the solution using rdtsc for x86 is horribly
broken on SMP systems.

Certainly it is up to the user to understand the system in use.

I have used rdtsc on an SMP system without considering the problems it
might cause. As far as I remember, it worked fine, but I can see that
it could cause problems. I think the results will still be better than
clock(), though.

-- glen
 
S_at_work

Thanks for the help, now it works.

Here is the code:

#include <stdio.h>
#include <time.h>

#define MAIN
//#define DEBUG

/* First call (with a zeroed struct timeval) stores the current time in
   *time1 and returns 0.0; later calls return the seconds elapsed since
   then, as a float. */
float GetTimeDelta(struct timeval *time1)
{
    float timereturn;
    struct timeval time2;

    time2.tv_usec = 0;
    time2.tv_sec = 0;

    if (time1->tv_usec == 0 && time1->tv_sec == 0) {
        // first call: just record the start time
        timereturn = 0.0;
        gettimeofday(time1, NULL);
    }
    else {
        gettimeofday(&time2, NULL);
        if (time1->tv_usec > time2.tv_usec) {
            // borrow one second from the seconds part
            timereturn = (1000000.0 + (time2.tv_usec - time1->tv_usec)) / 1000000.0;
            time2.tv_sec -= 1;
        }
        else {
            timereturn = (time2.tv_usec - time1->tv_usec) / 1000000.0;
        }
        timereturn += (time2.tv_sec - time1->tv_sec);
    }

#ifdef DEBUG
    printf("time1 seconds:%ld\n", (long)time1->tv_sec);
    printf("time1 microseconds:%ld\n", (long)time1->tv_usec);
    printf("time2 seconds:%ld\n", (long)time2.tv_sec);
    printf("time2 microseconds:%ld\n", (long)time2.tv_usec);
#endif /* DEBUG */

    return timereturn;
}

#ifdef MAIN
int main(void)
{
    struct timeval time;
    int i = 100000;
    int a = 0;

    time.tv_sec = 0;
    time.tv_usec = 0;

    printf("start time %f\n", GetTimeDelta(&time));

    // time consuming part
    sleep(4);
    while (i--)
        a = i % 12;
    // end time consuming part

    printf("end time %f\n", GetTimeDelta(&time));

    return 0;
}
#endif
 
Dan Pop

Certainly it is up to the user to understand the system in use.

I have used rdtsc on an SMP system without considering the problems it
might cause. As far as I remember, it worked fine, but I can see that
it could cause problems. I think the results will still be better than
clock(), though.

If the solution has to be platform-specific, anyway, why not use a *clean*
solution provided by the implementation, as an extension? Especially
since the OP only needs millisecond resolution.
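
For example, on a POSIX system that provides it, clock_gettime() is such an
extension, and its resolution is far more than the OP needs (a sketch, not
standard C; linking may need -lrt on some systems):

#include <stdio.h>
#include <time.h>       /* clock_gettime(), struct timespec -- POSIX extension */

int main(void)
{
    struct timespec start, stop;
    double ms;

    clock_gettime(CLOCK_REALTIME, &start);
    /* ... the operation being timed, e.g. connect() ... */
    clock_gettime(CLOCK_REALTIME, &stop);

    ms = (stop.tv_sec - start.tv_sec) * 1000.0
       + (stop.tv_nsec - start.tv_nsec) / 1.0e6;
    printf("elapsed: %.3f ms\n", ms);
    return 0;
}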

Dan
 
Dan Pop

Thanks for the help, now it works.

Here is the code:

Why did you feel compelled to post non-portable C code to this newsgroup?

BTW, the *correct* usage of gettimeofday requires the inclusion of
<sys/time.h>.
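
That is, the timing code should start with something like:

#include <sys/time.h>   /* gettimeofday(), struct timeval */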

Dan
 
Randy Howard

Certainly it is up to the user to understand the system in use.

Absolutely.

I have used rdtsc on an SMP system without considering the problems it
might cause. As far as I remember, it worked fine, but I can see that
it could cause problems.

The problem is that the tsc data is not synched between CPUs, meaning that
your results are only accurate as long as your process (or thread) is
pinned to a specific CPU. That's usually not the case (nor ideal).
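
If you really must use rdtsc, you'd have to pin the process to one CPU
first. On Linux, for example, something like this (a sketch; glibc/Linux
specific, and the exact prototype has varied between glibc versions):

#define _GNU_SOURCE
#include <sched.h>      /* sched_setaffinity(), cpu_set_t -- Linux-specific */
#include <stdio.h>

int main(void)
{
    cpu_set_t mask;

    CPU_ZERO(&mask);
    CPU_SET(0, &mask);      /* restrict this process to CPU 0 */

    if (sched_setaffinity(0, sizeof mask, &mask) != 0) {
        perror("sched_setaffinity");
        return 1;
    }

    /* ... rdtsc readings taken from here on all come from the same CPU,
       so they cannot be skewed against each other ... */
    return 0;
}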
 
Glen Herrmannsfeldt

Randy Howard said:
The problem is that the tsc data is not synched between CPUs, meaning that
your results are only accurate as long as your process (or thread) is
pinned to a specific CPU. That's usually not the case (nor ideal).

I have posted to comp.sys.intel to see if anyone there knows. I know the TSC
is zero at reset, and if all CPUs leave reset at the same time, they may be
synchronized. I will see if anyone answers there.

The one time I did it on a four-way SMP machine was in Java, using a JVM
native method, which means C. I took a simple C program, compiled it
with -S, modified the result with an RDTSC instruction, assembled it, and it
pretty much worked. I didn't need nanosecond resolution, but millisecond
may not have been good enough. There are enough problems with the way Java
native method calls work that I didn't worry too much about the SMP problem.

I will see what they say on comp.sys.intel, though. I have a dual 350MHz P2
system at home, but I never tried it on that one.

-- glen
 
Randy Howard

I have posted to comp.sys.intel to see if anyone there knows. I know the TSC
is zero at reset, and if all CPUs leave reset at the same time, they may be
synchronized. I will see if anyone answers there.

Been there, done that. The folks I asked at Intel admit that is the case.
It's probably documented as such formally. I believe it was discussed
recently on the threading forum at Intel's website and verified as
"unreliable" by Intel employees there. It has certainly been discussed in
plenty of places. Please let me know if you get a different answer, but I
have seen it give erratic results (i.e. negative TSC differences well inside
of wraparound times) due to TSC skew between processors. You may need to
have a lot of dynamic load on the box, such that the scheduler is moving
processes around between CPUs, before it becomes obvious.
 
