float? double?


Keith Thompson

Joe Wright said:
Aside: Why was time_t defined as a 32-bit signed integer? What was
supposed to happen when time_t assumes LONG_MAX + 1? Why was there no
time to be considered before the Epoch? Arrogance of young men, I
assume.

Arrogance would have been assuming that the system they were designing
would still be in use 68 years later.

The C standard only says that time_t is a numeric type.
The double type would have been a much better choice for time_t.

I disagree. If you want 1-second resolution, a 64-bit signed integer
gives you more than enough range. If you use a floating-point type,
you get very fine resolution near the epoch, and relatively poor
resolution farther away, which doesn't seem particularly useful.
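To put rough numbers on that, here is a minimal sketch (assuming
IEEE-754 doubles; compile with -lm) that prints the gap between
adjacent representable values near the epoch and near 2^31 seconds:

  #include <stdio.h>
  #include <math.h>

  int main(void)
  {
      double near_epoch = 1.0;          /* one second after the epoch */
      double year_2038 = 2147483648.0;  /* about 2^31 seconds */

      /* nextafter() gives the adjacent representable double, so the
         difference is the local resolution of a double-based time_t. */
      printf("resolution near epoch: %g s\n",
             nextafter(near_epoch, INFINITY) - near_epoch);
      printf("resolution near 2038:  %g s\n",
             nextafter(year_2038, INFINITY) - year_2038);
      return 0;
  }

On an IEEE-754 system the second figure is about a quarter of a
microsecond: still fine, but 2^31 times coarser than the first.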

<OT>Assuming a Unix-style time_t (a signed integer type with 1-second
resolution with 0 representing 1970-01-01 00:00:00 GMT), there's
plenty of time before 2038 to expand it to 64 bits; it's already
happened on many systems.</OT>
 

Chris Torek

Um, are you sure about that? 16 bits with 1-second resolution only
covers about 18 hours. Even 1-minute resolution only covers about a
month and a half.

Oops, you are correct that it was not 16 bits (it was 32), but I was
correct about the moved epochs. See
<http://www.tuhs.org/Archive/PDP-11/Distributions/research/Dennis_v3/Readme.nsys>.

(In any case, the "real" point -- that time_t is not specified as
32-bit, or even signed -- still stands. Unsigned 32-bit takes one
to a bit beyond 2100, and of course signed or unsigned 64-bit is
better.)
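For anyone checking the arithmetic in this exchange, a quick sketch:

  #include <stdio.h>

  int main(void)
  {
      /* 16-bit seconds, 16-bit minutes, and 32-bit seconds from 1970 */
      printf("2^16 seconds = %.1f hours\n", 65536.0 / 3600.0);
      printf("2^16 minutes = %.1f days\n", 65536.0 / (60.0 * 24.0));
      printf("2^32 seconds = %.1f years, i.e. into the year %.0f\n",
             4294967296.0 / (365.2425 * 86400.0),
             1970.0 + 4294967296.0 / (365.2425 * 86400.0));
      return 0;
  }

This agrees with the figures above: about 18 hours, about a month and
a half, and a bit beyond 2100 (the year 2106).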
 

CBFalconer

Keith said:
.... snip ...

<OT>Assuming a Unix-style time_t (a signed integer type with 1-second
resolution with 0 representing 1970-01-01 00:00:00 GMT), there's
plenty of time before 2038 to expand it to 64 bits; it's already
happened on many systems.</OT>

Having the epoch start in 1970, or even 1978 (Digital Research) is
foolish, when 1968 or 1976 would simplify leap year calculations.
This is also an argument for using 1900.
 

Keith Thompson

CBFalconer said:
Having the epoch start in 1970, or even 1978 (Digital Research) is
foolish, when 1968 or 1976 would simplify leap year calculations.
This is also an argument for using 1900.

It's not really an argument for 1900, which *wasn't* a leap year.
(I've seen systems that use 1901 because of this.)

But IMHO the leap year issue just isn't that big a deal. The
calculations aren't that hard, and it's a solved problem.
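For reference, the whole Gregorian rule fits in one line of C; a
minimal sketch:

  #include <stdio.h>

  /* Gregorian rule: every 4th year is a leap year, except centuries,
     except centuries divisible by 400. */
  static int is_leap_year(int year)
  {
      return (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
  }

  int main(void)
  {
      printf("1900 -> %d\n", is_leap_year(1900));  /* 0: not a leap year */
      printf("2000 -> %d\n", is_leap_year(2000));  /* 1: is a leap year */
      return 0;
  }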
 

Eric Sosman

CBFalconer wrote on 08/21/06 04:36:
Keith Thompson wrote:
... snip ...

Having the epoch start in 1970, or even 1978 (Digital Research) is
foolish, when 1968 or 1976 would simplify leap year calculations.
This is also an argument for using 1900.

If concerns about leap year are to govern the choice,
the zero point should be xxxx-03-01 00:00:00, where xxxx
is a multiple of 400.

However, leap year calculations are not so important
that they should govern such a choice. Everyone who's
anyone already knows that the Right Thing To Do is to
define the zero point as 1858-11-17 00:00:00.
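The attraction of a March 1 zero point, for what it's worth, is that
it pushes the leap day to the very end of the counting year, so month
lengths follow a fixed pattern. A sketch of the standard trick
(days_from_march is a hypothetical helper, not a library function):

  #include <stdio.h>

  /* Days from 0000-03-01, with March as month 0 so Feb 29, when
     present, is the last day of the counting year. */
  static long days_from_march(int year, int month, int day)
  {
      int y = year - (month < 3);   /* Jan and Feb belong to the prior year */
      int m = (month + 9) % 12;     /* March = 0, ..., February = 11 */
      long days = 365L * y + y / 4 - y / 100 + y / 400;
      return days + (153 * m + 2) / 5 + day - 1;
  }

  int main(void)
  {
      /* 2000-03-01 to 2001-03-01 spans no leap day: prints 365 */
      printf("%ld\n", days_from_march(2001, 3, 1)
                    - days_from_march(2000, 3, 1));
      return 0;
  }

The (153*m + 2)/5 expression reproduces the fixed 31,30,31,30,31,...
month-length pattern of the March-based year.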
 

Clark S. Cox III

CBFalconer said:
Keith Thompson wrote:
.... snip ...

Having the epoch start in 1970, or even 1978 (Digital Research) is
foolish, when 1968 or 1976 would simplify leap year calculations.
This is also an argument for using 1900.

It would be an even better argument for using 1600-03-01.
 

av

You can often get away with much worse. 25 years ago I had a
system with 24 bit floats, which yielded 4.8 digits precision, but
was fast (for its day) and rounded properly. This was quite

I think it is not useful to round numbers; the only place where
rounding numbers seems to be an issue is at input/output.
 

William Hughes

av said:
I think it is not useful to round numbers

Then don't use floating point (you think e.g. 1/3 has an exact
floating-point representation on your machine?).
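A minimal demonstration of the same point, using 0.1 (also not
exactly representable in binary, on typical IEEE-754 systems):

  #include <stdio.h>

  int main(void)
  {
      double sum = 0.0;
      int i;

      for (i = 0; i < 10; i++)
          sum += 0.1;          /* each 0.1 is already rounded */
      printf("%.17g\n", sum);  /* 0.99999999999999989, not 1 */
      return 0;
  }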

However, it will come as a shock to many people to learn that floating
point is not useful.

-William Hughes
 

Eric Sosman

av wrote on 08/21/06 13:38:
I think it is not useful to round numbers; the only place where
rounding numbers seems to be an issue is at input/output.

double d = sqrt(2.0);

Assuming the machine has a finite amount of memory, how
do you propose to carry out this calculation without some
kind of rounding?
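To make the point concrete, a sketch (assuming IEEE-754 doubles;
compile with -lm):

  #include <stdio.h>
  #include <math.h>

  int main(void)
  {
      double d = sqrt(2.0);    /* sqrt(2) rounded to the nearest double */
      printf("d*d - 2.0 = %g\n", d * d - 2.0);  /* tiny, but not zero */
      return 0;
  }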
 

Keith Thompson

Clark S. Cox III said:
It would be an even better argument for using 1600-03-01.

I think that choosing a date when much of the world was still using
the Julian calendar would not be a good thing. (Britain and its
possessions, including what became the US, didn't switch until 1752.)

Choosing 1900-03-01 gives you a nearly 200-year range in which the
leap years are at regular 4-year intervals; 1904-01-01 might make some
calculations simpler. But following the Gregorian leap-year rules is
*trivial*, and choosing an epoch that avoids them over some finite
span just isn't worth the trouble.

This is all somewhat off-topic, of course, since the C standard
doesn't specify either the epoch or the resolution, or even the
representation of time_t. But the most widespread implementation is
as a signed integer representing seconds since 1970-01-01 00:00:00
GMT. If a future C standard were to tighten the specification, that
would not be an unreasonable basis for doing so.
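Until then, portable code shouldn't bake that representation in. A
minimal sketch of the standard-sanctioned way to take a difference in
seconds, difftime(), which works whatever time_t actually is:

  #include <stdio.h>
  #include <time.h>

  int main(void)
  {
      time_t start = time(NULL);
      /* ... do some work ... */
      time_t later = time(NULL);

      /* difftime() returns the difference in seconds as a double,
         without assuming time_t's type, resolution, or epoch. */
      printf("elapsed: %.0f seconds\n", difftime(later, start));
      return 0;
  }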

David R. Tribble has a proposal for time support in C 200X at
<http://david.tribble.com/text/c0xlongtime.html>. It's been discussed
in comp.std.c (and any further discussion of it should probably take
place there).
 

CBFalconer

Eric said:
av wrote on 08/21/06 13:38:

double d = sqrt(2.0);

Assuming the machine has a finite amount of memory, how
do you propose to carry out this calculation without some
kind of rounding?

av is a known troll, and firmly in my PLONK list.
 

Joe Wright

Eric said:
CBFalconer wrote on 08/21/06 04:36:

If concerns about leap year are to govern the choice,
the zero point should be xxxx-03-01 00:00:00, where xxxx
is a multiple of 400.

However, leap year calculations are not so important
that they should govern such a choice. Everyone who's
anyone already knows that the Right Thing To Do is to
define the zero point as 1858-11-17 00:00:00.

Hear, hear. The Beginning of Time for DEC VMS. When was the End of Time? :)
 

Gordon Burditt

Joe Wright said (replying to the claim "There's hardly any application
where an accuracy of 1 in 16 million is not ..."):
You're making all this up, aren't you? Posix time today is somewhere
around 1,156,103,121 seconds since the Epoch. We are therefore a little
over half way to the end of Posix time in early 2038. Total Posix
seconds are 2^31 or 2,147,483,648 seconds. I would not expect to treat
such a number with a lowly float with only a 24-bit mantissa.

You wouldn't since you are aware of the issues. Some people
unfamiliar with computer floating point but too familiar with math
would figure that if you can represent 1.0e+38 and 0.25 exactly,
you can also represent the sum of those two exactly.
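A two-line sketch of that absorption effect, assuming an IEEE-754
single-precision float with its 24-bit mantissa:

  #include <stdio.h>

  int main(void)
  {
      float big = 16777216.0f;  /* 2^24: the edge of a 24-bit mantissa */
      float sum = big + 0.25f;  /* 0.25 is below one ULP here: absorbed */

      printf("big + 0.25 == big: %s\n", sum == big ? "yes" : "no");
      return 0;
  }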

Posix time is certainly an example of how precision of 1 in 16 million
is *NOT* good enough.

Joe Wright said:
I do point out that double has a 53-bit mantissa and is very much up
to the task.

When someone decides to represent time as the number of picoseconds
since the beginning of the universe (and it's his problem, not mine, to
prove that he accurately knows when that is) in 512 bits, 53 bits will
not be up to the task.

Joe Wright said:
Aside: Why was time_t defined as a 32-bit signed integer? What was
supposed to happen when time_t assumes LONG_MAX + 1? Why was there no
time to be considered before the Epoch? Arrogance of young men, I
assume.

One purpose of storing the time is file time stamps. It isn't that
surprising to not have files older than the creation of the system
on the system. time_t was really not designed to store historical
or genealogical dates (which are often dates with no times or time
zones attached: Christmas is on Dec. 25 regardless of time zone).

Why is the tm_year member of a struct tm not defined as long long
or intmax_t? And asctime() has a very bad Y10K problem (and Y0K
problem), even if you don't try to use it with time_t.

Joe Wright said:
The double type would have been a much better choice for time_t.

But it probably gives the most precision for a time you know least.
 

Keith Thompson

(e-mail address removed) (Gordon Burditt) writes:
You wouldn't since you are aware of the issues. Some people
unfamiliar with computer floating point but too familiar with math
would figure that if you can represent 1.0e+38 and 0.25 exactly,
you can also represent the sum of those two exactly.

Posix time is certainly an example of how precision of 1 in 16 million
is *NOT* good enough.

The above was posted by (e-mail address removed) (Gordon Burditt).
You can tell this by the attribution line at the top of this post.

Gordon, please either stop snipping attribution lines, or stop posting
here. You have been told many many times how rude this is. Your
excuses for doing so are, at best, lame.
 

Clark S. Cox III

Keith said:
I think that choosing a date when much of the world was still using
the Julian calendar would not be a good thing. (Britain and its
possessions, including what became the US, didn't switch until 1752.)

Choosing 1900-03-01 gives you a nearly 200-year range in which the
leap years are at regular 4-year intervals; 1904-01-01 might make some
calculations simpler. But following the Gregorian leap-year rules is
*trivial*, and choosing an epoch that avoids them over some finite
span just isn't worth the trouble.

My point was just that. Choosing an epoch to simplify leap year
calculations leads to absurd results. Maybe I should have inserted an
emoticon.
 

av

Eric Sosman said:

double d = sqrt(2.0);

Assuming the machine has a finite amount of memory, how
do you propose to carry out this calculation without some
kind of rounding?

That is an input example: the user says "d must be sqrt(2.0)", and so
"rounding numbers seems to be an issue at input/output" applies there
too.
 

William Hughes

av said:
That is an input example: the user says "d must be sqrt(2.0)", and so
"rounding numbers seems to be an issue at input/output" applies there
too.

That's right. Anything that might require rounding must be relegated
to input. Any function that might lead to rounding must be banned. So
no sqrt, sin, log, division ...

-William Hughes
 

Jordan Abel

On 2006-08-21, Chris Torek wrote:
The original Unix time was a 16-bit type.

Incorrect. It was 32-bit but with substantially higher resolution (1/60
second instead of 1 second), which was the reason the epoch kept getting
moved. Before "long" was added, it was an array of two ints - ever
wonder why all the time_t functions unnecessarily use pointers?
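Both calling styles still work today; a minimal illustration:

  #include <stdio.h>
  #include <time.h>

  int main(void)
  {
      time_t t1 = time(NULL);  /* modern style: use the return value */
      time_t t2;

      time(&t2);               /* legacy style: fill in via the pointer */
      printf("difference: %.0f seconds\n", difftime(t2, t1));
      return 0;
  }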
 
