Time api


Seth7TS

Hi,
I want to make a trial version of my software, but if I read the system
clock, the user can modify it... how can I prevent this?
I thought of reading the time from the internet, but I don't know if
there are APIs to do this...

Thanks, everybody

Seth
 

Richard Bos

Seth7TS wrote:
> I want to make a trial version of my software, but if I read the
> system clock, the user can modify it... how can I prevent this?

You cannot, and any attempt to do so that results in messing with the
normal system clock - for _trial_ software, no less, not for something
I'm actually using - will result in extreme animosity.
> I thought of reading the time from the internet, but I don't know if
> there are APIs to do this...

So now I _have_ to dial in to my provider only so I can use your
program? No way, José!

Anyway, no, there are no such APIs that are portable, and nothing even
approaching it in ISO C.

Richard
 

Keith Thompson

Richard Bos wrote:
> You cannot, and any attempt to do so that results in messing with the
> normal system clock - for _trial_ software, no less, not for something
> I'm actually using - will result in extreme animosity.
>
> So now I _have_ to dial in to my provider only so I can use your
> program? No way, José!

Well, if the application requires Internet access anyway, that
shouldn't be a problem.

> Anyway, no, there are no such APIs that are portable, and nothing even
> approaching it in ISO C.

Agreed. There are system-specific APIs for this purpose; try a
system-specific newsgroup.
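
For illustration, a minimal SNTP query over UDP might look something
like the sketch below. It uses POSIX sockets, so it's not ISO C and
strictly off-topic here; pool.ntp.org is just a well-known public
server pool, and a real client would add timeouts, retries, and sanity
checks on the reply.

#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <time.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netdb.h>
#include <sys/socket.h>

int main(void)
{
    /* A 48-byte SNTP request: LI=0, VN=3, Mode=3 (client). */
    unsigned char pkt[48] = { 0x1B };
    struct addrinfo hints, *ai;
    int fd;

    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_INET;
    hints.ai_socktype = SOCK_DGRAM;
    if (getaddrinfo("pool.ntp.org", "123", &hints, &ai) != 0)
        return 1;

    fd = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
    if (fd < 0)
        return 1;

    if (sendto(fd, pkt, sizeof pkt, 0, ai->ai_addr, ai->ai_addrlen) < 0
        || recvfrom(fd, pkt, sizeof pkt, 0, NULL, NULL) < 0)
        return 1;

    /* The server's transmit timestamp (seconds since 1900) sits at
       byte offset 40; subtracting 2208988800 converts it to seconds
       since the Unix epoch (1970). */
    {
        uint32_t secs1900;
        time_t t;
        memcpy(&secs1900, pkt + 40, 4);
        t = (time_t)(ntohl(secs1900) - 2208988800UL);
        printf("server time: %s", ctime(&t));
    }

    close(fd);
    freeaddrinfo(ai);
    return 0;
}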
 

Joe Wright

Keith Thompson said:
> Well, if the application requires Internet access anyway, that
> shouldn't be a problem.
>
> Agreed. There are system-specific APIs for this purpose; try a
> system-specific newsgroup.

In the day the question was "Hey Joe, what time is it?". Valid answers
(depending on context) would include "a quarter to twelve" or "lunch
time" or "eleven forty six and ten seconds".

Thirty years ago (1978) I had the chance to buy a Seiko Digital
wristwatch in Japan for only $250 or so. A beautiful thing in stainless
steel with accuracy approaching cesium decay.

I would 'shoot' my left shirtcuff such that everyone could see my watch.
I was so proud. Now, "Hey Joe, what time is it?" would get the hours,
minutes and seconds response from me. More than was asked for. "Almost
twelve" would have sufficed.

Then I noticed that even though I set the Watch according to the Time
signal of my favorite radio station, it was not quite the same Time as
the signal from my favorite TV station. Using the Watch to compare the
two times (radio and TV), deciding which was closest so as to set the
Watch was taking over my life.

Some years ago, I determined not to wear a watch. I don't need
it. Time is on my desktop, on my cellphone, and pretty much wherever I
look. I don't need it on my wrist anymore. Also, asking "what time is
it?" is more likely to get you "bedtime, I'm right behind you" than
anything involving seconds since whenever.

Thanks for sticking with me so far. The Unix time_t is 32 bits wide.
The reason it 'runs out' in 2038 is that it is historically an int,
and incrementing beyond INT_MAX is UB. When time_t reaches INT_MAX we
will be at "Tue Jan 19 03:14:07 2038", 68 years and change after the
Epoch.

If we made time_t unsigned and used the full 32 bits, we would extend
the range until "Sun Feb 7 06:28:15 2106", some 136 years after the
Epoch.
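
Both of those dates are easy to reproduce. A quick sketch, assuming a
host where time_t is a 64-bit signed type so that both values are
representable:

#include <stdio.h>
#include <stdint.h>
#include <time.h>

int main(void)
{
    /* On a signed 32-bit time_t, the first value is the last
       representable second and the second doesn't fit at all. */
    time_t t31 = (time_t)INT32_MAX;       /* 2147483647 */
    time_t t32 = (time_t)UINT32_MAX;      /* 4294967295 */
    printf("%s", asctime(gmtime(&t31)));  /* Tue Jan 19 03:14:07 2038 */
    printf("%s", asctime(gmtime(&t32)));  /* Sun Feb  7 06:28:15 2106 */
    return 0;
}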
 

Keith Thompson

Joe Wright said:
> Thanks for sticking with me so far. The Unix time_t is 32 bits wide.
> The reason it 'runs out' in 2038 is that it is historically an int,
> and incrementing beyond INT_MAX is UB. When time_t reaches INT_MAX we
> will be at "Tue Jan 19 03:14:07 2038", 68 years and change after the
> Epoch.
>
> If we made time_t unsigned and used the full 32 bits, we would extend
> the range until "Sun Feb 7 06:28:15 2106", some 136 years after the
> Epoch.

The Unix time_t is traditionally some integer type, with values other
than (time_t)-1 representing seconds since the epoch (1970). There is
no requirement for it to be only 32 bits, and there are plenty of Unix
and Unix-like systems today on which time_t is 64 bits (two such
systems are within arm's reach as I type this).

In my opinion, switching from signed 32-bit time_t to unsigned 32-bit
time_t would be a serious mistake. It would extend the tail end of
the range from 2038 to 2106, but at the cost of cutting off all times
prior to 1970.
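
A signed time_t covers those earlier times naturally, as negative
values; a quick sketch, again assuming a 64-bit signed time_t and a
gmtime() that accepts negative arguments (as common implementations
do):

#include <stdio.h>
#include <time.h>

int main(void)
{
    /* Negative values count backward from the epoch; an unsigned
       time_t cannot represent these at all. */
    time_t t = (time_t)-86400;            /* one day before the epoch */
    printf("%s", asctime(gmtime(&t)));    /* Wed Dec 31 00:00:00 1969 */
    return 0;
}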

64-bit systems are taking over; even 32-bit systems are capable of
manipulating 64-bit values without much trouble. We're barely more
than halfway from (time_t)0 to (time_t)2**31-1; the halfway point was
January 10, 2004. How many computer systems from 1970 are still
running? How many 32-bit systems from today will still be running in
2038? How many systems running in 2038 won't be able to use 64-bit
time_t? (I believe the answer to all three questions is "too few to
worry about".)

Remember that we're not talking about embedded systems here, or in C
standard terms, freestanding implementations. A freestanding
implementation needn't support <time.h> at all. So in effect, we're
only concerned with hosted implementations: workstations, laptops,
servers, and similar systems.

With a signed 64-bit time_t, with 1-second resolution, we have a
representable range of over 584 billion years. Plenty of systems
*already* support this, and there's more than enough time between now
and 2038 to make sure they *all* do.
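
The figure is easy to check: 2**63 - 1 seconds on each side of the
epoch, at roughly 31.6 million seconds per year:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* About 31557600 seconds (365.25 days) in a year. */
    double years = (double)INT64_MAX / (86400.0 * 365.25);
    printf("about %.0f billion years each way\n", years / 1e9);
    return 0;
}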

Incidentally, The Open Group's specification (IEEE Std 1003.1, 2004
Edition, "POSIX") merely says:

time_t and clock_t shall be integer or real-floating types.

and that time_t is "Used for time in seconds". (I've argued against
using floating-point.)

To steer this back to topicality, *if* a future C standard specifies
the characteristics of time_t more tightly, I suggest that it should
*not* permit it to be an unsigned type.

My proposal (and I'm sure there are a lot of holes to be shot in it):

Require time_t to be a typedef for a signed integer type of at least
64 bits. (Or define it in terms of a minimum required range. We
definitely want more than 136 years. We don't really need 584
billion years, but we pretty much get that range for free once we
exceed 32 bits.)

A time_t value represents the number of seconds since the epoch,
which is 1970-01-01 00:00:00 GMT. (I could be persuaded to leave
the epoch unspecified, I suppose.) Leap seconds are handled by
mounting really big rockets on the equator, ensuring that a solar
day is exactly 86400 seconds. (Don't laugh, that's probably the
easiest way to get everyone to agree.) (time_t)-1 is simply one
second before the epoch; use TIME_T_MAX or TIME_T_MIN to indicate
an error. Oh yeah, let's define TIME_T_MAX and TIME_T_MIN while
we're at it.

For precision better than 1 second, provide something like the
POSIX gettimeofday() function, which gives you a structure
containing a time_t and another integer representing microseconds
(we can make it nanoseconds, or attoseconds, or whatever).
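
For reference, the existing POSIX interface looks like this
(gettimeofday() is POSIX, not ISO C):

#include <stdio.h>
#include <sys/time.h>   /* POSIX, not ISO C */

int main(void)
{
    struct timeval tv;  /* time_t tv_sec plus microseconds tv_usec */
    if (gettimeofday(&tv, NULL) == 0)
        printf("%lld.%06ld seconds since the epoch\n",
               (long long)tv.tv_sec, (long)tv.tv_usec);
    return 0;
}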

Another possibility is to make time_t a 64-bit signed integer
representing nanoseconds since the epoch; this gives us a range of
584 years. We can then drop the separate gettimeofday()-like
function (unless we want to support better than nanosecond
resolution).

Make sure that all the conversion and formatting functions work
correctly with *all* possible time_t values. Unless we limit
ourselves to the above-mentioned 584-year range, this means years
can be more than 4 digits. Solve the Y10K bug nearly 8000 years
early. Your descendants will thank you (or wonder why you
bothered; "we outgrew computers 7500 years ago!").
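
For instance, a 64-bit time_t pointing into year 10000 is perfectly
representable today; it's the conversion and formatting layer that has
to cope. A sketch, assuming this host's gmtime() accepts such values
(glibc's does); note that asctime()'s fixed-size result buffer is
exactly the kind of four-digit-year assumption that breaks:

#include <stdio.h>
#include <time.h>

int main(void)
{
    /* Roughly 8030 average years after 1970: some time in 10000. */
    time_t t = (time_t)8030 * 31557600;
    struct tm *tm = gmtime(&t);
    if (tm != NULL)
        printf("year %d\n", tm->tm_year + 1900);  /* five digits */
    return 0;
}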

And to avoid breaking existing code, we'd have to put all this
into a new header and call the type something other than time_t.
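
Something like this, perhaps; a sketch only, with the header and all
of the names invented for illustration:

/* Hypothetical <xtime.h> -- illustrative, not any actual standard. */
#include <stdint.h>

typedef int64_t xtime_t;      /* seconds since 1970-01-01 00:00:00 GMT */

#define XTIME_MIN INT64_MIN
#define XTIME_MAX INT64_MAX   /* XTIME_MIN/XTIME_MAX can flag errors */

/* Sub-second companion, in the spirit of POSIX gettimeofday(). */
struct xtimespec {
    xtime_t sec;              /* whole seconds since the epoch */
    int32_t nsec;             /* 0 .. 999999999 nanoseconds    */
};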
 
