Oops, word slip. My bad, I meant first.
The C source tells us that on Windows, the first call reads the
performance counter to establish the start reference, and each call
thereafter reads it again to get the number of seconds since that
first call.
Yes, I suspected as much. You are presumably referring to
-----------------
if (divisor == 0.0) {
    LARGE_INTEGER freq;
    QueryPerformanceCounter(&ctrStart);
    if (!QueryPerformanceFrequency(&freq) || freq.QuadPart == 0) {
        /* Unlikely to happen - this works on all intel
           machines at least! Revert to clock() */
        return PyFloat_FromDouble(clock());
    }
    divisor = (double)freq.QuadPart;
}
QueryPerformanceCounter(&now);
diff = (double)(now.QuadPart - ctrStart.QuadPart);
return PyFloat_FromDouble(diff / divisor);
-----------------
which IMO really should be the following (untested), so that the very
first call returns exactly 0.0 rather than the tiny interval between
the two back-to-back counter reads:
-----------------
if (divisor == 0.0) {
    LARGE_INTEGER freq;
    QueryPerformanceCounter(&ctrStart);
    if (!QueryPerformanceFrequency(&freq) || freq.QuadPart == 0) {
        /* Unlikely to happen - this works on all intel
           machines at least! Revert to clock() */
        return PyFloat_FromDouble(clock());
    }
    divisor = (double)freq.QuadPart;
    now = ctrStart;
} else {
    QueryPerformanceCounter(&now);
}
diff = (double)(now.QuadPart - ctrStart.QuadPart);
return PyFloat_FromDouble(diff / divisor);
-----------------
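To make the difference concrete, here is a minimal standalone sketch
(my own illustration; the names clock_orig/clock_fixed are invented
for the demo, and the frequency-failure fallback is omitted). With the
original logic the very first call returns a small nonzero value, the
cost of the gap between the two back-to-back counter reads; with the
fix it returns exactly 0.0:
-----------------
#include <stdio.h>
#include <windows.h>

/* Mimics the original logic: the first call still performs a
   second counter read, so it returns a small nonzero value. */
static double clock_orig(void)
{
    static LARGE_INTEGER ctrStart;
    static double divisor = 0.0;
    LARGE_INTEGER now;

    if (divisor == 0.0) {
        LARGE_INTEGER freq;
        QueryPerformanceCounter(&ctrStart);
        QueryPerformanceFrequency(&freq);
        divisor = (double)freq.QuadPart;
    }
    QueryPerformanceCounter(&now);
    return (double)(now.QuadPart - ctrStart.QuadPart) / divisor;
}

/* Mimics the proposed fix: on the first call, now is simply the
   start value, so the result is exactly 0.0. */
static double clock_fixed(void)
{
    static LARGE_INTEGER ctrStart;
    static double divisor = 0.0;
    LARGE_INTEGER now;

    if (divisor == 0.0) {
        LARGE_INTEGER freq;
        QueryPerformanceCounter(&ctrStart);
        QueryPerformanceFrequency(&freq);
        divisor = (double)freq.QuadPart;
        now = ctrStart;
    } else {
        QueryPerformanceCounter(&now);
    }
    return (double)(now.QuadPart - ctrStart.QuadPart) / divisor;
}

int main(void)
{
    printf("original first call: %.9f\n", clock_orig());
    printf("fixed    first call: %.9f\n", clock_fixed());
    return 0;
}
-----------------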
Good point, though I think the 9x variants also had 0.01-second
resolution.
I don't recall, but the old ~55 ms tick (the 18.2 Hz DOS timer
interrupt) saw a lot of use back then.
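FWIW, the effective tick is easy to measure: spin on clock() until the
value changes twice and report the step. A minimal sketch (my own,
standard C, untested on the old systems in question):
-----------------
#include <stdio.h>
#include <time.h>

int main(void)
{
    /* Wait for the first transition, then time the next full
       step; that step is the effective clock() granularity. */
    clock_t t0 = clock(), t1, t2;
    while ((t1 = clock()) == t0)
        ;
    while ((t2 = clock()) == t1)
        ;
    printf("tick: %.6f seconds\n", (double)(t2 - t1) / CLOCKS_PER_SEC);
    return 0;
}
-----------------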
Now you are talking about external system loads and how they affect
timings. That wasn't the question.
True.
Regards,
Bengt Richter