Date difference in days

Paul E. Schoen

I have been trying to get the number of days from 2010-09-13 to today, and
I'm getting strange results. I have consulted the FAQ and the Date::Calc
qw(Delta_Days) and Time::localtime modules, as well as
http://docstore.mik.ua/orelly/perl/cookbook/ch03_04.htm and
http://docstore.mik.ua/orelly/perl/cookbook/ch03_06.htm.

Here is the pertinent code:

#!/usr/bin/perl
use Date::Calc qw(Delta_Days);
use Time::localtime;

my $tm = localtime($time);
my $yr = $tm->year+1900;
my $mo = $tm->mon+1;
my $dy = $tm->mday;

my @dtStart = (2010, 9, 13);
my @dtNow = ($yr, $mo, $dy);

my $difference = Delta_Days(@dtStart, @dtNow);

my $hitRate = $count/$difference; #count is number of hits
print "Content-type: text/html\n\n";
print "Visitors in $difference days since Sept 13, 2010:
$count or $hitRate per day $yr, $mo, $dy";

The text in the website that uses this reads:

Visitors in -14866 days since Sept 13, 2010: 00359 or
-0.0241490649804924 per day 1969, 12, 31

I've tried so many different things that I'm totally confused. There seem to
be many ways to do this and it should be a simple task.

TIA,

Paul
 
Paul E. Schoen

J. Gleixner said:
Paul E. Schoen wrote:
[...]
The script can be seen in action as the hit counter in:
[...]

perldoc -q "I still don't get locking."

Yes, I read that (although I had to log in to my server with telnet, which
is inconvenient from a Win Vista machine).

This is the relevant code.

open (LOG, '<', "$logpath");
my @file = <LOG>; # an array of the file contents
close(LOG);

my $count = $file[0]; # the count value is the first line in the file, i.e. $file[0]
$count++; # increments the counter value

open (LOG, ">$logpath"); # opens the log file for writing
flock(LOG, 2); # file lock set
print LOG "$count\n"; # prints out the new counter value to the file
flock(LOG, 8); # file lock unset
close(LOG);

This was from an existing script. I suppose I should check for return values
and use descriptive constants, but otherwise I don't see a problem. The
other information I found was:
http://perldoc.perl.org/functions/flock.html
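
(For what it's worth, the descriptive constants for those numeric flock()
arguments come from Fcntl; a minimal sketch of the same two calls with names
instead of 2 and 8, reusing the script's own LOG handle and variables:)

use Fcntl qw(:flock);   # imports LOCK_SH, LOCK_EX, LOCK_NB, LOCK_UN

flock(LOG, LOCK_EX) or die "cannot lock $logpath: $!";  # exclusive lock (was 2)
print LOG "$count\n";
flock(LOG, LOCK_UN);                                    # release the lock (was 8)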

BTW this webpage gets about 12 hits per day. Not much chance of a collision,
and probably not much damage if it happens. Losing count of a hit is not a
real problem, and I can accept a few "false" hits from bots. What would be
really useful would be to access the server log and sort out hits that don't
"count" such as those I create by testing, or those from automated web
crawlers or multiple hits from the same source.
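
(A rough sketch of that idea, assuming an Apache combined-format access log
and made-up paths and addresses: count hits on one page while skipping my own
test address, obvious crawlers, and repeat visits from the same source:)

#!/usr/bin/perl
use strict;
use warnings;

# All of these are placeholders -- adjust for the real server
my $access_log = '/var/log/apache2/access_log';
my $page       = '/index.html';    # the page being counted
my $my_ip      = '203.0.113.5';    # my own address, used for testing
my %seen;                          # remembers which addresses already hit
my $hits = 0;

open my $log, '<', $access_log or die "cannot open $access_log: $!";
while (<$log>) {
    # combined format: host ident user [date] "request" status bytes "referer" "agent"
    my ($ip, $request, $agent) =
        m{^(\S+) \S+ \S+ \[[^\]]*\] "([^"]*)" \d+ \S+ "[^"]*" "([^"]*)"} or next;
    next unless $request =~ m{^GET \Q$page\E[ ?]};  # only the page of interest
    next if $ip eq $my_ip;                          # skip my own test hits
    next if $agent =~ /bot|crawl|spider/i;          # skip obvious crawlers
    next if $seen{$ip}++;                           # count each address once
    $hits++;
}
close $log;
print "$hits distinct visitors\n";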

And I understand that the hit counter is not perfect, but it does work to
some extent and at least we can get a general idea of the usage of the
website by our members, so we can determine if it is worthwhile to put more
effort into it at all. I implemented an event submission form and I have
only received notification from one member who used it as a trial. I also
have the framework for implementing the Google Calendar, but it will take a
lot of work to allow members to enter, edit, and delete information. Or I
could just give them my login and password for that account (which I opened
specifically for that purpose).

I think the argument against hit counters is really splitting hares, and
that's pretty rough on the rabbits.

Paul
 
Tina Müller

Paul E. Schoen said:
BTW this webpage gets about 12 hits per day. Not much chance of a collision,
and probably not much damage if it happens. Losing count of a hit is not a
real problem,

That's not the biggest problem.
This can happen if
process 1 opens file and reads counter n, increments n
process 2 opens file and reads counter n, increments n
process 1 writes file and writes n+1
process 2 writes file and writes n+1

So the hit from the second process gets lost. You say: OK, but
that's just not very probable, so who cares if there are some
lost hits.

The *real* problem is that this code can truncate your counter to zero.
(That happened to me a long time ago with my first counter. =)

process 1 opens file and reads counter n, increments n
process 1 opens file with ">" mode, file is truncated
process 2 opens file and reads - nothing. oops! n=0, increments n
process 1 gets lock, writes n+1, closes
process 2 writes file with a count of 1

That's because, in between the open (which truncates) and the flock,
another process can open the file.
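
A quick way to see the first half of that for yourself: the truncation
happens at open(), before flock() is ever reached. A hypothetical
demonstration (not part of the counter script):

use Fcntl qw(:flock);

open my $fh, '>', 'counter.txt' or die $!;   # the file is emptied right here
sleep 30;                                    # any hit arriving now reads an empty counter
flock $fh, LOCK_EX or die $!;                # the lock comes far too late to help
print {$fh} "42\n";
close $fh;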

hth,
tina
 
Jürgen Exner

Paul E. Schoen said:
Yes, I read that (although I had to log in to my server with telnet, which
is inconvenient from a Win Vista machine).

How so? Perl, including perldoc, runs just fine on Vista.

jue
 
J. Gleixner

Paul said:
J. Gleixner said:
Paul E. Schoen wrote:
[...]
The script can be seen in action as the hit counter in:
[...]

perldoc -q "I still don't get locking."

Yes, I read that (although I had to log in to my server with telnet,
which is inconvenient from a Win Vista machine).

What does using Telnet have to do with anything???

Perl can be installed on virtually any OS and it comes with the
documentation.

[...]
I think the argument against hit counters is really splitting hares, and
that's pretty rough on the rabbits.

Yes. The reference was to how useless it is to display
counters.

You can get all the statistics from the logs, or even by using Google
Analytics, without any of your "customers" seeing how little the site is
being used. A counter on a Web page screams, "Hey, this is my first web
page and I just started reading an HTML book from 1996."

If you want a less amateur-looking site, drop the counter. Oh, and
before you get to the page in the book on blinking text, avoid that
one too. :)
 
Peter J. Holzer

Paul E. Schoen said:
J. Gleixner said:
Paul E. Schoen wrote:
[...]
The script can be seen in action as the hit counter in:
[...]

perldoc -q "I still don't get locking."

Yes, I read that (although I had to log in to my server with telnet, which
is inconvenient from a Win Vista machine).

This is the relevant code.

open (LOG, '<', "$logpath");
my @file = <LOG>; # an array of the file contents
close(LOG);

my $count = $file[0]; # the count value is the first line in the file, i.e. $file[0]
$count++; # increments the counter value

open (LOG, ">$logpath"); # opens the log file for writing
flock(LOG, 2); # file lock set
print LOG "$count\n"; # prints out the new counter value to the file
flock(LOG, 8); # file lock unset
close(LOG);

This was from an existing script.

Wherever you got that script from, don't get any more scripts from
there. That's just awful.

As Tina mentioned, the flock there is useless. It doesn't protect against
the two obvious race conditions. All it protects is a single print,
which almost certainly doesn't change the file anyway (the count is only
written out at close, *after* you release the lock).

There are two ways to do this.

The safe way:

use IO::Handle;             # for flush
use Fcntl qw(:flock :seek); # for LOCK_EX, LOCK_UN, SEEK_SET

open (my $log_fh, '+<', $logpath) or die "cannot open $logpath: $!";

# the file is now open for reading and writing, so we can lock it
flock($log_fh, LOCK_EX) or die "cannot lock $logpath: $!";

# read current counter and increment it
my $count = <$log_fh>;
$count++;

# rewind to the beginning of the file and write the new counter
seek($log_fh, 0, SEEK_SET);
print $log_fh $count;
$log_fh->flush() or die "cannot flush $logpath: $!";

# after flush we know that the file counter has been written
# (at least to the OS disk cache, not necessarily the disk),
# so we can release the lock

flock($log_fh, LOCK_UN);

# done - close the file
close($log_fh) or die "cannot close $logpath: $!";

(actually, in this simple case, flush and flock($log_fh, LOCK_UN) are
redundant - close will automatically flush any pending writes and unlock
the file (in this order)).

Paul E. Schoen said:
BTW this webpage gets about 12 hits per day. Not much chance of a collision,
and probably not much damage if it happens.

If you don't mind losing a hit every now and then you can do it safely
without locks:

open (my $log_fh, '<', $logpath) or die "cannot open $logpath: $!";
my $count = <$log_fh>;
$count++;
close($log_fh);
open (my $tmp_fh, '>', "$logpath.$$") or die "cannot open $logpath.$$: $!";
print $tmp_fh $count;
close($tmp_fh) or die "cannot close $logpath.$$: $!";
rename("$logpath.$$", $logpath) or die "cannot rename $logpath.$$ to $logpath: $!";

If a second hit happens between the open and the rename, it won't be
counted. But the counter will never be accidentally reset to zero.

hp
 
Paul E. Schoen

Peter J. Holzer said:
[...]
Wherever you got that script from, don't get any more scripts from
there. That's just awful.
[...]
If a second hit happens between the open and the rename, it won't be
counted. But the counter will never be accidentally reset to zero.

This is one source (but not the one I used):
http://www.comptechdoc.org/independent/web/cgi/perlmanual/perlhit.html
I think this is it:
http://www.akamarketing.com/simple-hit-counter-with-perl.html

There are many other scripts, and some have safeguards against "false" hits,
and some keep track of user data. Thanks for the many explanations and
suggestions. I may try searching the logs for information as a learning
exercise. I have viewed them manually, but only a small portion of them is of
interest.

Thanks,

Paul
 
