ctime library - is there anything better in c++?


Lars Uffmann

Is the ctime library really that poorly designed, or is it just me
looking to do things the wrong way?

The whole concept of struct tm seems way too high-level for my taste -
this is something I would implement myself if needed.

I started programming C++ long before I did Visual Basic, and even
before Visual Basic was around. However, one has to give credit where
credit is due: what's wrong with the (imo really good) VB approach of
storing dates as a floating point value, with the whole number being
the days since day X and the fraction being the part of the day
elapsed? I.e. 0.5 = 12:00 hours, 1/24 = 1 hour, and so on?

If I need any more accurate times, I'll use the timer that starts when
the computer is turned on, not the system clock!

Then there's the tm struct's definition:
http://www.cplusplus.com/reference/clibrary/ctime/tm.html
All the values are seconds, minutes, hours, days, months *SINCE* some
point in time, starting with 0, but tm_mday all of a sudden is "day of
the month" (values 1-31). Whatever happened to consistency?

Now if I want to do such a simple thing as read a user's text input of
a year and a "day of year" and convert it to an ISO date, I
first: have to create and initialize a tm struct with tm_year =
userInput, tm_mon = 0, tm_mday = 1, then
second: have to convert that into a time_t value, using the mktime()
function, which also manipulates my input(!) value,
third: add the user-entered "day of year" (minus one) to the time_t,
multiplied by 86400 seconds per day of course, and finally
fourth: convert the time_t back to a tm struct so I am able to output
it for the user.

Not to mention that the tm struct should be a class that initializes in
its constructor to the calendar's starting date, instead of holding
random values (or at least, if that isn't done for speed reasons, an
initialization routine should be part of the library - it's a
one-liner, but it's annoying, as a user, to have to include a function
that should be part of the library in every single program that needs
it).

Then why on earth does the strftime function take a struct tm as an
argument, and not the (much more sensible) time_t format?

In my eyes, the ctime library is really the poorest piece of
programming I have ever seen in an official C standard library. The
only reason I'm forced to use it at the moment is that I do not know
how to reliably calculate leap years, leap seconds and the like, and I
really do not feel like creating my own date/time library. Not to
mention that if I did, it would be nowhere near as fast as the standard
library probably is.

So - after being done with my rant, can someone point me to a better
solution in C++?

Best Regards,

Lars

PS: What were they thinking???
 

mail.dsp

Lars Uffmann wrote:
Is the ctime library really that poorly designed, or is it just me
looking to do things the wrong way?
[...]
So - after being done with my rant, can someone point me to a better
solution in C++?

Use the Boost library. It is open source. Download it from
www.boost.org
 

Juha Nieminen

Lars said:
What's wrong with the (imo really good) VB approach to
store dates as a floating point value with the whole numbers being the
days since day X and the fraction being the part of the day elapsed?

Accuracy comes to mind. For example the value 0.1 cannot be
represented accurately with base-2 floating point numbers (for the exact
same reason as 1/3 cannot be represented accurately with base-10 decimal
numbers).

If you add 0.1 to itself 10 times, you won't get 1.0 (you get a value
which is extremely close to it, but not exactly 1.0). That might not be
what one wants.
 

Lars Uffmann

Juha said:
Accuracy comes to mind. For example the value 0.1 cannot be
represented accurately with base-2 floating point numbers (for the exact
same reason as 1/3 cannot be represented accurately with base-10 decimal
numbers).

Okay, that is a valid point. However, the same point holds true for the
time_t format - just at a finer level of detail, with exactly the
example of 1/3 that you mentioned. :)

Best Regards,

Lars
 

Lars Uffmann

Yannick said:
Sounds poor to me...

YMMD - it makes for very easy date calculation when you have to handle
lots and lots of date routines (where speed isn't as big an issue as
variety).
Euh, ask the people who invented the time system. In most cultures I
have been exposed to, a day starts at 00:00:00 in hours, minutes and
seconds (some use 12:00:00 AM, but most understand both). Nowhere that
I am aware of starts its day at 01:01:01.

Well - then why is the ctime implementation of struct tm also counting
months from zero? And the weekday (tm_wday) and day of year (tm_yday)
as well as the year (ok, actually that is counted from 1970 - also a
big limitation of this time format: due to the unnecessarily high
accuracy, you cannot express any dates prior to 1902-01-01...)
So the C library when creating a human referenceable structure simply
followed accepted human patterns.
Not really ;)
No, I was not aware of it, but that was the purpose of my OP - I wanted
to learn about alternatives :) So thanks for that!

Best Regards,

Lars
 

James Kanze

Accuracy comes to mind. For example the value 0.1 cannot be
represented accurately with base-2 floating point numbers (for
the exact same reason as 1/3 cannot be represented accurately
with base-10 decimal numbers).
If you add 0.1 to itself 10 times, you won't get 1.0 (you get
a value which is extremely close to it, but not exactly 1.0).
That might not be what one wants.

Just a nit, but you *might* get 1.0. More accurately, you probably
can't add 0.1 to itself, because you can't have the value 0.1, just a
value which is extremely close to it. Although the standard allows base
10 floating point arithmetic, I don't know of a modern implementation
that uses anything other than base 2, 8 or 16, and 0.1 isn't
representable in any of those bases. (And of course, if adding 0.1 ten
times does happen to work, just choose some other value - whatever the
representation, adding 1/N to itself N times will fail to give 1 for
some N, because 1/N isn't representable.)

But the basic objection remains: you can't exactly represent
seconds, and the results of any calculations which should result
in seconds may be off by some very small amount. Which could
cause problems if you want to compare two times. Not to mention
that you're likely to end up with fractions of a second when you
don't want them. All in all, using floating point here is
probably the worst possible solution (but the standard allows
it: time_t may be a typedef to double).
 

James Kanze

YMMD - it makes for very easy date calculation when you have
to handle lots and lots of date routines (where speed isn't as
big an issue as variety).
Well - then why is the ctime implementation of struct tm also
counting months starting from zero? And the weekday (tm_wday),

Because these values are used as indexes into tables of strings; adding
numbers to strings doesn't work very well, but adding a number to an
index into a table of strings does. And in C, indexes start at 0.
day of year (tm_yday) as well as the year (ok, actually that
is being counted from 1970

In time_t. In tm, it's from 1900. Format the tm_year field with %02d
and you get a two-digit year, exactly as people back in the 1970s (when
time.h was designed) would expect it.
- also a big limitation of this time format: due to the
unnecessarily high accuracy, you can not express any dates
prior to 1902-01-01...

Again, it's implementation defined. But most implementations do
use the Unix conventions, or something similar (in particular,
with time_t being a 32 bit integer). And again: this format was
designed mainly for file timestamps; the inventors weren't
worried about files which were modified before 1970, since they
couldn't exist.

For a general purpose library, 32 bits aren't enough, even for a
resolution of seconds. (A 64 bit value representing nanoseconds
is good for over 200,000 years, and representing microseconds,
for well over the expected age of the universe. Which should
cover most needs.)
Not really ;)

Very really, within the context.
No, I was not aware of it, but that was the purpose of my OP -
I wanted to learn about alternatives :) So thanks for that!

Note that it's a Unix standard function, not necessarily
available everywhere.
 

osmium

Lars Uffmann said:
Is the ctime library really that poorly designed, or is it just me looking
to do things the wrong way?

I haven't followed this thread, but the linked article is germane to
the general discussion. It says 28 GIs were killed because of a
rounding/truncation error. I had heard about the problem quite some
time ago, but it took me till now to get around to looking for a clip
of some kind.

http://www.ima.umn.edu/~arnold/disasters/patriot.html
 

Lars Uffmann

James said:
In time_t. In tm, its from 1900.

Ouch! So that is where that inconsistency comes from. I've been
wondering why, in my conversion routine, the date would sometimes be
1970-01-01 when I try to convert a date lower than the minimum tm value
to a time_t. By the way, the minimum for tm seems to be 1902-01-01 -
though I'd have to do some debugging on my code to be sure the observed
behaviour of my mini-program is indeed the behaviour of the underlying
conversion functions of ctime.

with time_t being a 32 bit integer). And again: this format was
designed mainly for file timestamps; the inventors weren't
worried about files which were modified before 1970, since they
couldn't exist.

Oh - okay, that makes a little more sense. I thought it was meant for
calendar calculations and the like...
For a general purpose library, 32 bits aren't enough, even for a
resolution of seconds. (A 64 bit value representing nanoseconds
is good for over 200,000 years, and representing microseconds,
for well over the expected age of the universe. Which should
cover most needs.)
I'm not so sure... What about the next universe? And the one after that?
Suppose I want to write a letter to my 10^913th generation son? :D

Note that it's a Unix standard function, not necessarily
available everywhere.
K, all the same to someone developing with MinGW ;)

Thanks again!

Lars
 

Lars Uffmann

Lars said:
"Specifically, the time in tenths of second as measured by the system's
internal clock was multiplied by 1/10 to produce the time in seconds."
- now that would make any rounding errors moot, if it were true *G*

Disregard, I'm stupid :/
 

Howard Hinnant

Sorry James, going into nitpick mode. :)

For a general purpose library, 32 bits aren't enough, even for a
resolution of seconds.  (A 64 bit value representing nanoseconds
is good for over 200,000 years,

Actually it is good for over 200,000 days, or about 584.6 years (if
you're doing signed representations that's +- 292 years).
and representing microseconds,
for well over the expected age of the universe.  Which should
cover most needs.)

Well, depending on who you believe for the age of the universe. ;-)
64 bits of microseconds covers about 584,000 years.

</nitpick>

Fwiw, Beman Dawes has just put <chrono> into a boost sandbox:

http://svn.boost.org/svn/boost/sandbox/chrono

<chrono> is a library largely based on and inspired by a subset of Jeff
Garland's Boost date-time. However, <chrono> is strictly about time
durations, time points, and clocks for retrieving time points. It is
completely ignorant of calendars (it knows nothing of days, months and
years). <chrono> is somewhat documented in this proposal:

http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2661.htm

and is currently included in the C++0X working draft:

http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2798.pdf

(see 20.9 Time utilities [time])

Also the utilities in <chrono> (combined with <ratio> documented in
the same paper) can be used to *exactly* represent 1/10 of a second
(or any rational number of seconds). Example code:

#include <chrono>
#include <cstdio>

int main()
{
    typedef std::chrono::duration<int, std::ratio<1, 10> > one_tenth;
    for (one_tenth t(0); t <= std::chrono::seconds(1); t += one_tenth(1))
        std::printf("t = %d / %lld seconds\n",
                    t.count(),
                    static_cast<long long>(one_tenth::period::den));
}

t = 0 / 10 seconds
t = 1 / 10 seconds
t = 2 / 10 seconds
t = 3 / 10 seconds
t = 4 / 10 seconds
t = 5 / 10 seconds
t = 6 / 10 seconds
t = 7 / 10 seconds
t = 8 / 10 seconds
t = 9 / 10 seconds
t = 10 / 10 seconds

---

t is just holding an integral count of 1/10 of a second (in an int).
You can compare it to another unit which is holding an integral count
of seconds (std::chrono::seconds(1)), and the comparison will be
exact, even after adding 1/10 to t ten times.

-Howard
 

James Kanze

Ouch! So that is where that inconsistency comes from. I've
been wondrin why in my conversion routine, the date would
sometimes be 1970-01-01 when I try to convert a date lower
than the minimum tm value to a time_t. By the way, the
minimum for tm seems to be 1902-01-01 - though I'd have to do
some debugging on my code to be sure the observed behaviour of
my mini-program is indeed the behaviour of the underlying
conversion functions of ctime.

On most systems (or at least most Unix systems) today, time_t is a 32
bit signed integer, measuring seconds from midnight, Jan. 1, 1970. If
you do a little arithmetic, you'll find that this allows you to
represent roughly 1902 to 2038, which establishes the bounds for all of
the time functions in libraries which use this representation. After
all the noise about the Y2K problem, implementations have gradually
been moving to 64 bits, but it doesn't happen overnight. (We've got a
lot of code which stores time_t on files. Luckily, we adopted a text
format.)
Oh - okay, that makes a little more sense. I thought it was
meant for calendar calculations and the likes...

It wasn't intentionally designed to prevent them, but the most
important issue was file timestamps and such. Beyond that, I imagine
that they looked at what they were doing at the time: it works very
well for scheduling cron jobs and at requests, for example, or for
measuring elapsed program runtime. I'm fairly certain, however, that no
consideration was given to the possibility of using it for historical
dates, and maybe not really too much, if any, to various bookkeeping
uses (which can be complicated by the fact that for the purposes of
calculating interest, a month is 1/12 of a year regardless, a day is
1/30 of a month, and nothing under a day counts).
I'm not so sure... What about the next universe? And the one
after that? Suppose I want to write a letter to my 10^913th
generation son? :D

Design a machine, or a data support, that will last that long,
and we'll talk about it:).
 

James Kanze

Sorry James, going into nitpick mode. :)

Don't be sorry about it; I like to see nits picked, even if I'm
not the one doing the picking.
Actually it is good for over 200,000 days, or about 584.6
years (if you're doing signed representations that's +- 292
years).
Well, depending on who you believe for the age of the
universe. ;-) 64 bits of microseconds covers about 584,000
years.

Only. I must have miscalculated (or misread) something
somewhere. That wouldn't even suffice for some paleontologists.
But of course, nanoseconds are sufficient for some physics. I
guess using a single scale for everyone will require even more
bits.
</nitpick>
Fwiw, Beman Dawes has just put <chrono> into a boost sandbox:
<chrono> is a library largely based/inspired upon a subset of
Jeff Garland's boost date-time.  However <chrono> is strictly
about time durations, time points, and clocks for retrieving
time points.  It is completely ignorant of calendars (knows
nothing of days, months and years).  <chrono> is somewhat
documented in this proposal:

and is currently included in the C++0X working draft:

Yes. I'd missed that this had been added to the standard.
(What little time I have for standardization activities is
normally dedicated to core issues.) Purely by chance, I noticed
the chapter in the draft about an hour after having posted.
(see 20.9 Time utilities [time])
Also the utilities in <chrono> (combined with <ratio>
documented in the same paper) can be used to *exactly*
represent 1/10 of a second (or any rational number of
seconds).

Why is it that every time I perceive a need, someone has already
implemented it:)? (Usually a lot better than I could have
done, too.)
 

Lars Uffmann

James said:
Design a machine, or a data support, that will last that long,
and we'll talk about it:).


Well - maybe not me, but my favourite SF series (Perry Rhodan) made
mention of a journey back in time to a point some infinite stretch of
time prior to the big bang, where huge machines were roaming the
non-space - the remnants of the technology of the most highly evolved
civilizations from the last universe (it *did* sound a bit like Douglas
Adams, and the respective novels weren't written until 1984, so they
might have plagiarized just a little).

And many of the "inventions" of PR have come to pass already - so who
knows? *g*

Have a nice day!

Lars
 
