Double Clock Experiment

sal

int main()
{
    clock_t c0, c1, c2; /* clock_t is defined in <time.h> and
                           <sys/types.h> as int */

c0 and c2 are uninitialized variables. They'll get whatever garbage
happened to be on the stack when the program started. When you take their
difference you're including the difference between two essentially random
values.

Try initializing them to zero and see what you get.

c1's uninitialized too, of course, but you never use its initial value so
that's OK.
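For illustration, here is a minimal sketch of what an initialized version
of such a timing loop might look like; the busy loop, iteration count, and
output labels are my assumptions, since only a fragment of the original
program is quoted above:

#include <stdio.h>
#include <time.h>

int main(void)
{
    clock_t c0 = 0, c1 = 0, c2 = 0;   /* initialized, not stack garbage */
    volatile long sink = 0;
    long i;

    c0 = clock();                      /* CPU time before the work */
    for (i = 0; i < 10000000L; i++)    /* arbitrary busy loop */
        sink += i;
    c1 = clock();                      /* CPU time after the work */
    c2 = clock();                      /* a second reading, immediately after */

    printf("end (CPU);  %ld\n", (long)(c1 - c0));
    printf("end (CPU2); %ld\n", (long)(c2 - c0));
    return 0;
}

Even with everything initialized, the CPU-time figures will still jitter
from run to run, for the reasons below.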

Aside from that, the time every operation will take will vary all over the
place, as the system spends a varying percentage of its time doing what
you want it to do versus doing other stuff, like taking page faults,
checking for cron jobs, processing requests from other users (if it's
timesharing), dealing with random garbage coming in from the ethernet
card, and so forth. So you'll never see a consistent result from a
program like this.

(If it's a Windows box you also need to consider the time it takes it to
phone Bill Gates and tell him what you're doing today.)
 
CoreyWhite

slebetman said:
The problem is you're running a program that expects perfect
measurement of time on a general purpose OS where time measurement is
only approximate. I ran your code on an embedded platform running a
real-time non-preemptive OS with interrupts turned off but replaced
your clock() with reading from a free running hardware counter (1kHz -
much finer resolution than a typical Unix tick). The result I got is
exactly what you expected:
First run:

>end (CPU); 5002992
>end (CPU2); 5008000
>end (NOW); 10016

Second run:

>end (CPU); 5002992
>end (CPU2); 5008000
>end (NOW); 10016

As you can see, both runs produced identical results. This is because
the system I'm running on is perfectly deterministic - no matter how
many times I run it I will get the same result. Also, you'll notice
that NOW/2 = 5008 which is the exact difference between CPU2 and CPU
times. This is what you get when nothing interferes with your
"experiment" - no interrupts, no task switching etc. which causes time
readings to be approximated. It also helps a little that my CPU is a
simple microcontroller with no pipelining or branch prediction or
out-of-order execution or instruction caching which may cause code to
take different amounts of time to execute depending on the CPU state.

Hey okay, that really helps and thank you. Where can I get a system
like this to run some tests on?
 
Al Balmer

He's pandering. He's got some heroes here in this NG, and because
that's their style and he knows your post is the kind of thing THEY
would trash, he tries to beat them to the punch. In doing so, he
hopes to show that he's almost as smart as they are, but in any case,
he hopes it shows that he's one among them.

It's as transparent as it gets, and oh so pathetic.
Sock puppets are pretty transparent, also.
 
slebetman

Hey okay, that really helps and thank you. Where can I get a system
like this to run some tests on?

For the CPU I used a PIC microcontroller. Almost any microcontroller
will do: AVR, 8051 etc. Although a microprocessor generally executes
code faster, processors tend to use statistical techniques like branch
prediction and caching to do this, hence they are not as deterministic
as microcontrollers.

For the clock I simply fed a 1kHz clock through two cascaded 74HC4040
12-bit counters, giving me a 24-bit value which turns out to be enough
for the experiment. The counters are interfaced with the CPU via three
octal tristate buffers which allow me to read the 24-bit value through
a single 8-bit port.
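As a rough sketch of how reading those buffers might look from C (the
port addresses, the active-low select scheme, and the names here are my
assumptions, not details from the post):

#include <stdint.h>

/* Hypothetical memory-mapped ports -- the actual PIC registers and the
   wiring of the buffers' /OE lines are not given above. */
#define PORT_DATA   (*(volatile uint8_t *)0x0F80u)  /* 8-bit input port  */
#define PORT_SELECT (*(volatile uint8_t *)0x0F81u)  /* buffer /OE control */

static uint32_t read_counter24(void)
{
    uint32_t value = 0;
    int byte;

    for (byte = 0; byte < 3; byte++) {
        PORT_SELECT = (uint8_t)~(1u << byte);        /* enable one octal buffer (active low) */
        value |= (uint32_t)PORT_DATA << (8 * byte);  /* read 8 of the 24 counter bits */
    }
    return value;                                    /* free-running 24-bit count */
}

Since the counter keeps running while the three bytes are read, a carry
can ripple mid-read, so a real routine would typically read twice and
compare.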

As to where you can get/buy the components, I personally get lots of my
stuff at the shops along Pasar Road in Kuala Lumpur. For uncommon parts
like the 74HC4040 I get them from http://www.farnell.com. As Malaysia
is probably quite far from where you are and I don't really know
about your local distributors I'd suggest going straight to Farnell.

And since this is getting a bit off topic I'd suggest dropping this
thread from comp.lang.c.
 
CoreyWhite

I've just installed an RTOS that runs on PCs called OnTime. I'm going
to do some experimenting, but don't you think that given enough time
the program will still continue to perform as it does on a general OS?
Try increasing the size of the loops, and leave it running over night.
Do you think you could do that for me and tell me how it performs? I
need to know if it works or not.
 
Mark McIntyre

If you had a more objective way of measuring time then you could tell
me which of the final times in the program was the better of the two
approximations. Because there is no more objective way to measure time
than this, both approximations are equally accurate. In a subjective
way they are both entirely real since whatever we use to measure time
will not be perfectly accurate.

I think radioactive decay rates are considered a pretty good objective
way of measuring time. Given that this is how it's defined...
Mark McIntyre
 
Ed Prochak

Mark said:
I think radioactive decay rates are considered a pretty good objective
way of measuring time. Given that this is how it's defined...
Mark McIntyre
--
"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it."
--Brian Kernighan


Sorry, but the time standard is based on FREQUENCY of radiation, not on
radioactive decay. Decay is better for measuring longer periods of time
(e.g. Carbon dating).
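(Specifically, the SI second is defined as 9,192,631,770 periods of the
radiation corresponding to the hyperfine transition of the caesium-133
atom, i.e. a frequency, not a decay rate.)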

hth,
ed
 
Logodox

Corey,
Your post is kinda fun and interesting, irrespective of its accuracy.
If you get ahold of an English version of "On the Electrodynamics of
Moving Bodies" (1905) by Albert Einstein, and peruse it for its
equations and explanations, you will be able to generalize your idea
into more interesting vistas. Even if you had ideal clocks running
under ideal conditions, there still exists a finite amount of time for
"signal-sync" or communication. Also, since there (scientifically)
exists NO simultaneous NOW and the infinitely small is as vast as the
infinitely large, the idea of time travel becomes a bit illusory. This
is because there could be NO ABSOLUTE moment for multiple observers
(or yourself at multiple different "times") and "revisiting" the
so-called past would really be a new time for the observer.
 
Walter Roberson

If you had a more objective way of measuring time then you could tell
me which of the final times in the program was the better of the two
approximations. Because there is no more objective way to measure time
than this, both approximations are equally accurate. In a subjective
way they are both entirely real since whatever we use to measure time
will not be perfectly accurate.

There was a recent article in either Scientific American or
American Scientist (I forget which), which indicated that clocks
are now approaching sufficient precision that it would be impossible
to synchronize any two of the ultra-precision clocks. Apparently
on those timescales, any movement of the clocks has noticeable
relativistic effects.

On the other hand, the resolution of the clock() call is not high
enough for such matters to be worth considering.
 
Keith Thompson

There was a recent article in either Scientific American or
American Scientist (I forget which), which indicated that clocks
are now approaching sufficient precision that it would be impossible
to synchronize any two of the ultra-precision clocks. Apparently
on those timescales, any movement of the clocks has noticeable
relativistic effects.

On the other hand, the resolution of the clock() call is not high
enough for such matters to be worth considering.

In any case, clock() measures CPU time, not real time (to make this at
least *vaguely* topical in at least one of these newsgroups).
 
Bill Hobba

Walter Roberson said:
There was a recent article in either Scientific American or
American Scientist (I forget which), which indicated that clocks
are now approaching sufficient precision that it would be impossible
to synchronize any two of the ultra-precision clocks. Apparently
on those timescales, any movement of the clocks has noticeable
relativistic effects.

Interesting. I do know that it is predicted modern ultra precision clocks
will demonstrate relativistic effects just by driving them around in a car.

Thanks
Bill
 
Bill Hobba

Keith Thompson said:
In any case, clock() measures CPU time, not real time (to make this at
least *vaguely* topical in at least one of these newsgroups).

'Real time'????? In physics, especially relativistic physics, time is what
a clock reads. Clock accuracy is a statistical thing based on comparisons
with other clocks, astronomical data etc. At present atomic clocks are the
most accurate.

Thanks
Bill
 
slebetman

I've just installed an RTOS that runs on PCs called OnTime. I'm going
to do some experimenting, but don't you think that given enough time
the program will still continue to perform as it does on a general OS?
Try increasing the size of the loops, and leave it running over night.
Do you think you could do that for me and tell me how it performs? I
need to know if it works or not.

A 24-bit counter running at 1kHz will overflow after about 4 1/2 hours.
In my case the maximum error you can get is +/- 1 tick. I guess it was
my luck that both runs produced the same result. The +/- 1 tick error
is due to the possibility of sampling the counter in between
transitions. Say for example you sample the counter just when it is
incrementing from 1000 to 1001. In which case you have a 50% chance of
getting either 1000 or 1001. But just like your Unix experiment this
says nothing about time travelling but more about sampling theory.
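(For reference, the overflow figure follows directly from the counter
width and the tick rate: 2^24 ticks / 1000 ticks per second = 16,777
seconds, or roughly 4 hours 40 minutes.)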

I can actually construct a set-up that can guarantee the same result
for every run simply by using the same clock source to drive both the
counter and the CPU. In which case the CPU is running in-sync with the
clock regardless of the accuracy of the clock source. Such a setup even
works if you keep varying the clock frequency because the CPU executes
instructions synchronously with the clock.

Think of it this way. If the CPU needs to execute exactly 100
instructions for each round of loop and each instruction executes in
exactly 2 clock cycles then each round of the loop will execute in
exactly 200 clock cycles. Now, when talking about 'clock' here we are
talking about the square wave used to drive the CPU. If we now use this
same square wave as the basis for the CPU to measure time then of
course the CPU will never disagree with its time measurement assuming
nothing else introduces delays or jitter to our instruction stream such
as interrupts.

If, like my experiment above, we use two different clock sources: one to
drive the CPU and another to drive the counter then what you are
measuring is not "time travel" but simply the relative accuracy between
the square waveforms which can indeed be seen visually if the two
square waves are fed into an oscilloscope. In this case an error can
occur if you happen to sample the counter at a harmonic interval
between the two square waves:

(view using fixed width font or the alignment will be all wrong)

clockA 000111000111000111

clockB 00001111000011110000
                   ^
                   |
                   if you happen to sample here
                   then you may get clockB's reading
                   as either 0 or 1

When interrupts come into play then the reading may be delayed by as
much time as it takes for the interrupt service routine to complete. So
if you're going to use OnTime's RTOS make sure you're not using
preemptive multitasking or time-sliced multitasking. And make sure
you're not using the real-time kernel. Use simple cooperative
multitasking and turn off all interrupts. Actually for the best result
use DOS and a DOS compiler like DJGPP.

Now finally, from a physical standpoint, what exactly *is* time
travelling? Your CPU? There is only one CPU, what is it travelling to
or away from, itself? This experiment does not show the CPU time
travelling but rather the software running on the CPU to be "time
travelling". In which case you need to understand that software is not
physical at all so all bets are off. Software is just like words coming
out of my mouth. If I say:

The quick brown fox jumps over the lazy dog.

and then later say:

The quick brown dog fox jumps over the lazy.

then did the word "dog" time travel in the second instance since it now
appears before the word "fox". Of course not. It is just how I decided
to utter the string of words. Just like how a CPU decides which
instruction to execute on a modern PC. On a modern PC groups of
instructions are scheduled pre-emptively with higher priority groups
being able to interrupt those with lower priority and instructions
themselves are often executed out of order.

You can conduct the same experiment like your code using a human
instead of a CPU. Ask your friend to say "The quick brown fox jumps
over the lazy dog" and measure the time between the word "fox" and
"dog". Each run will give out slightly different results not because
you measured time inaccurately, and not because the word "dog" time
travelled into the past or future, but because your friend takes
different amounts of time to utter the sentence with different length
of pauses between words and different things distracting him. This is
exactly what happens in a multitasking OS like Unix or Windows.
 
Keith Thompson

Bill Hobba said:
'Real time'????? In physics, especially relativistic physics, time is what
a clock reads. Clock accuracy is a statistical thing based on comparisons
with other clocks, astronomical data etc. At present atomic clocks are the
most accurate.

Ok, the clock() function is intended to measure the CPU time consumed
by a program, rather than (any approximation of) the time that might
be measured by a clock that measures so-called "real time" (such as an
atomic clock, sun dial, or whatever).
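To make the distinction concrete, a minimal sketch contrasting the two
standard C facilities (the busy loop is just an arbitrary workload, not
code from the original post):

#include <stdio.h>
#include <time.h>

int main(void)
{
    clock_t cpu_start  = clock();       /* processor time used by this program */
    time_t  wall_start = time(NULL);    /* calendar ("wall clock") time */
    volatile unsigned long sink = 0;
    unsigned long i;

    for (i = 0; i < 100000000UL; i++)   /* arbitrary workload */
        sink += i;

    printf("CPU seconds:  %f\n",
           (double)(clock() - cpu_start) / CLOCKS_PER_SEC);
    printf("Wall seconds: %f\n", difftime(time(NULL), wall_start));
    return 0;
}

On a loaded system, or one that sleeps, the two figures diverge.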

And this is why cross-posts between comp.lang.c and
sci.physics.relativity are a bad idea. (I have no idea why
comp.lang.c has been getting cross-posts from alt.magick lately.)
 
Hexenmeister

| In article <[email protected]>,
| >If you had a more objective way of measuring time then you could tell
| >me which of the final times in the program was the better of the two
| >approximations. Because there is no more objective way to measure time
| >than this, both approximations are equally accurate. In a subjective
| >way they are both entirely real since whatever we use to measure time
| >will not be perfectly accurate.
|
| There was a recent article in either Scientific American or
| American Scientist (I forget which), which indicated that clocks
| are now approaching sufficient precision that it would be impossible
| to synchronize any two of the ultra-precision clocks. Apparently
| on those timescales, any movement of the clocks has noticeable
| relativistic effects.
|
| On the other hand, the resolution of the clock() call is not high
| enough for such matters to be worth considering.
| --
| "law -- it's a commodity"
| -- Andrew Ryan (The Globe and Mail, 2005/11/26)

There was a recent article in either the New York Times, the Chicago
Tribune, the London Times or the National Enquirer (I forget which)
which indicated that the Pope was an ardent relativist who believed
prayers could reach the throne of God (i9.0 light years away) no
faster than the speed of light. Apparently this speed limit was
imposed by St. Einstein who will be canonised as soon as he is
accepted as the one and only true God by fuckin' idiots everywhere.

Androcles.
 
Bill Hobba

Keith Thompson said:
Ok, the clock() function is intended to measure the CPU time consumed
by a program, rather than (any approximation of) the time that might
be measured by a clock that measures so-called "real time" (such as an
atomic clock, sun dial, or whatever).

Its accuracy is not that good - but accuracy is not what defines a clock -
at least in physics.
And this is why cross-posts between comp.lang.c and
sci.physics.relativity are a bad idea. (I have no idea why
comp.lang.c has been getting cross-posts from alt.magick lately.)

If you look at Corey White's posts, AKA a number of other people such as
Virtual Adepts, you will see he trolls across a number of groups including
those you mentioned. And yes he is a troll without question:
http://groups.google.com/group/alt.magick/msg/c7b39607eb0c30b5

Posting to a number of unrelated newsgroups is a typical troll tactic.
Except for this post I will try to remove computing forums in my responses
in future.

Best of luck with your newsgroup. Hope you don't have the trouble with
troll/cranks we at sci.physics.relativity do which is why most of its
legitimate posters have long ago developed their own way of handling them -
with varying degrees of success.

One regular poster here maintains a site of the worst - always good for a
laugh;
http://users.pandora.be/vdmoortel/dirk/Physics/ImmortalFumbles.html

Thanks
Bill
 
pete

Keith said:
Ok, the clock() function is intended to measure the CPU time consumed
by a program, rather than (any approximation of) the time that might
be measured by a clock that measures so-called "real time" (such as an
atomic clock, sun dial, or whatever).

And this is why cross-posts between comp.lang.c and
sci.physics.relativity are a bad idea. (I have no idea why
comp.lang.c has been getting cross-posts from alt.magick lately.)

Yes. "Real time" actually means something else in programming.

http://www.cs.york.ac.uk/rts/RTSBookThirdEdition.html
 
