Perl script timeout problem


sipitai

Hi everyone,

I have written a Perl script that allows a user to download a file,
but only if they have a valid key to download that file.

The idea being that instead of the user just clicking on a link to
download a file (i.e. http://www.domain.com/file.zip), they click on a
link to the script, which contains the file name and the key (i.e.
http://www.domain.com/script.cgi/xxxxx/file.zip with xxxxx being the
key), and if the key is valid, the script sends back the file for them
to download, otherwise it blocks the request.
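
In case it helps, here is a stripped-down sketch of the part of the
script that serves the file (the key check and the real file lookup
are omitted, and fetch_file() below is just a stand-in):

#!/usr/bin/perl
use strict;
use warnings;

# PATH_INFO arrives as "/xxxxx/file.zip" - split it into key and name.
my ($key, $filename) = ($ENV{PATH_INFO} || '') =~ m{^/([^/]+)/(.+)$};

# ... validate $key against the database here ...

my $data = fetch_file($filename);

binmode STDOUT;
print "Content-Type: application/octet-stream\r\n";
print "Content-Length: ", length($data), "\r\n\r\n";
print $data;

sub fetch_file { return "dummy file contents" }    # stand-in only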

Which all works fine; the problem I'm running into is that the file
download is timing out after 300 seconds. More specifically, if the
file takes longer than 300 seconds to download, regardless of how many
KB have been downloaded or how many KB/sec it is transferring at, the
file stops downloading and the browser displays "Download Complete".

It's worth noting that the amount of time it takes to time out is
directly related to the Timeout value in the /etc/apache/httpd.conf
file; for example, if I change it to 30 seconds then the file download
times out in 30 seconds, and so on.

From what I can tell, the server doesn't appear to recognise that the
script is still running, and is closing the connection. But then again,
that's just my assumption based on what I've seen so far.

So having said all that, does anyone know how to fix this problem?

Without modifying the /etc/apache/httpd.conf Timeout value, that is.

Thanks!
 

Brian McCauley

sipitai said:
I have written a Perl script that allows a user to download a file,
but only if they have a valid key to download that file.

The idea being that instead of the user just clicking on a link to
download a file (i.e. http://www.domain.com/file.zip), they click on a
link to the script, which contains the file name and the key (i.e.
http://www.domain.com/script.cgi/xxxxx/file.zip with xxxxx being the
key), and if the key is valid, the script sends back the file for them
to download, otherwise it blocks the request.

Which all works fine; the problem I'm running into is that the file
download is timing out after 300 seconds. More specifically, if the
file takes longer than 300 seconds to download, regardless of how many
KB have been downloaded or how many KB/sec it is transferring at, the
file stops downloading and the browser displays "Download Complete".

It's worth noting that the amount of time it takes to time out is
directly related to the Timeout value in the /etc/apache/httpd.conf
file; for example, if I change it to 30 seconds then the file download
times out in 30 seconds, and so on.

From what I can tell, the server doesn't appear to recognise that the
script is still running, and is closing the connection. But then again,
that's just my assumption based on what I've seen so far.

So having said all that, does anyone know how to fix this problem?

I shall assume this is a stealth CGI question.

Also, this does not appear to be a Perl question at all.

Ask yourself this: would you expect the answer to be any different if
your CGI script were in Python, C, Pascal, Bash...?

There are newsgroups that deal with CGI, you know.
Without modifying the /etc/apache/httpd.conf Timeout value, that is.

Do not have the script send the file. Have it perform an internal
redirect to the file. You may or may not be able to configure your web
server to prevent people bypassing the script, but even if you can't,
you can just make sure that the directory name is obscure.

This will not only fix your problem but also has other benefits that
are OT in this newsgroup. However, there's usually at least one thread
discussing this at any given time in the CGI newsgroups.
 

Gregory Toomey

sipitai said:
Hi everyone,

I have written a Perl script that allows a user to download a file,
but only if they have a valid key to download that file.

The idea being that instead of the user just clicking on a link to
download a file (i.e. http://www.domain.com/file.zip), they click on a
link to the script, which contains the file name and the key (i.e.
http://www.domain.com/script.cgi/xxxxx/file.zip with xxxxx being the
key), and if the key is valid, the script sends back the file for them
to download, otherwise it blocks the request.

Which all works fine; the problem I'm running into is that the file
download is timing out after 300 seconds. More specifically, if the
file takes longer than 300 seconds to download, regardless of how many
KB have been downloaded or how many KB/sec it is transferring at, the
file stops downloading and the browser displays "Download Complete".

It's worth noting that the amount of time it takes to time out is
directly related to the Timeout value in the /etc/apache/httpd.conf
file; for example, if I change it to 30 seconds then the file download
times out in 30 seconds, and so on.

So fix httpd.conf. Why bother us?

# Timeout: The number of seconds before receives and sends time out.
Timeout 1800

# KeepAlive: Whether or not to allow persistent connections (more than
# one request per connection). Set to "Off" to deactivate.
KeepAlive On
From what I can tell, the server doesn't appear to recognise that the
script is still running, and is closing the connection. But then again,
that's just my assumption based on what I've seen so far.

So having said all that, does anyone know how to fix this problem?

Without modifying the /etc/apache/httpd.conf Timeout value, that is.

Err, why?
gtoomey
 

sipitai

Brian McCauley wrote...
I shall assume this is a stealth CGI question.

Also, this does not appear to be a Perl question at all.

Ask yourself this: would you expect the answer to be any different if
your CGI script were in Python, C, Pascal, Bash...?

Maybe, maybe not. I figure this problem could originate from either
the script itself or the environment it's being executed in.

Although if anyone could help me narrow this down, I would greatly
appreciate it.
Do not have the script send the file. Have it perform an internal
redirect to the file. You may or may not be able to configure your web
server to prevent people bypassing the script, but even if you can't,
you can just make sure that the directory name is obscure.

Unfortunately there are a number of reasons why this wouldn't work, one
of which is that the "key" for the file needs to be able to expire.

Although thanks anyway for the feedback.
 

sipitai

Gregory Toomey wrote...
So fix httpd.conf. Why bother us?

# Timeout: The number of seconds before receives and sends time out.
Timeout 1800

# KeepAlive: Whether or not to allow persistent connections (more than
# one request per connection). Set to "Off" to deactivate.
KeepAlive On

I would like to think of that sort of solution as a last resort. I
would prefer to find out why it is timing out in the first place and
then fix the problem.
Err, why?

Because the Timeout value is there for a reason, and increasing it
just to hide the problem isn't what I would call a solution.

Although thanks anyway for the feedback.
 

Jürgen Exner

sipitai said:
Brian McCauley wrote...


Maybe, maybe not. I figure this problem could originate from either
the script itself, or the environment its being executed in.

Are you using alarm() in your program?
If not then it's the environment that is triggering the timeout.
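
For example, a single call like the one below anywhere in the script
(or in a module it loads) would make Perl abort when the timer fires,
since SIGALRM is fatal unless a handler is installed:

alarm(300);    # deliver SIGALRM to this process after 300 seconds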

jue
 

Brian McCauley

sipitai said:
Brian McCauley wrote...




Unfortunately there are a number of reasons why this wouldn't work,

Are any of them valid?
one of which is that the "key" for the file needs to be able to expire.

That one, for example, is not a valid reason. I suspect you misread
"internal" as "external".

Pseudo-code:

if ( key_has_expired ) {
    display_error_page;
} else {
    perform_internal_redirect;    # Client never sees the real URL
}
 

sipitai

For anyone who isn't sure of the exact cause of the problem, but would
like to suggest an alternative method of achieving the same result,
the following is a description of what I'm trying to do.

The script is part of a shopping cart designed specifically for
purchasing downloadable software. It is used on the "download" page
that is displayed after the customer has successfully completed the
payment process. On this page the links to download the software point
to the script, which then serves up the file to be downloaded, instead
of just pointing to the file directly. This is mainly to minimise the
opportunity for piracy if a customer publishes the link for other
people to use.

The way I have implemented it is as follows:

- When a successful purchase is made, a security record is written to
the database, consisting of a unique key that is generated for each
file as well as all the other relevant details.
- Then the "download" page is displayed, containing links to download
the software using the security record details.
- When the script is run, it only allows the file to be downloaded if
a) the security details it has been provided match the security record
in the database, b) the purchase was made less than X days ago, and
c) the file has been downloaded less than X times (a sketch of these
checks appears below).
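
Roughly, the check looks like this (just a sketch: the table and
column names are invented, $dbh is an already-connected DBI handle,
and $key, $filename and $max_downloads are set earlier in the script):

my ($count) = $dbh->selectrow_array(q{
    SELECT download_count
      FROM purchases
     WHERE file_key  = ?
       AND filename  = ?
       AND purchased > NOW() - INTERVAL 14 DAY  -- 14 standing in for X
}, undef, $key, $filename);

if (defined $count && $count < $max_downloads) {
    # serve the file and increment download_count
}
else {
    # refuse the request
}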

It's also worth noting that the files themselves are stored in the
database, rather than directly on the hard disk (although this is also
for reasons other than those specified above).

I hope this description helps clarify a few things.

So if you can think of a better way of implementing it, by all means
let me know. Your input would be greatly appreciated!
 

sipitai

Brian McCauley wrote...
Are any of them valid?

After reading your comment below, maybe, maybe not.
That one, for example, is not a valid reason. I suspect you misread
"internal" as "external".

Pseudo-code:

if ( key_has_expired ) {
    display_error_page;
} else {
    perform_internal_redirect;    # Client never sees the real URL
}

My bad, I did misread what you wrote.

Although now I'm not entirely sure what you mean by performing an
internal redirect to the file. If you could explain this in a bit more
detail, or point me in the direction of a website that covers this
subject, I would greatly appreciate it.
 

Jürgen Exner

sipitai said:
For anyone who isn't sure of the exact cause of the problem, but would
like to suggest an alternative method of achieving the same result,
the following is a description of what I'm trying to do.

The script is part of a shopping cart designed specifically for [...]
So if you can think of a better way of implementing it, by all means
let me know. Your input would be greatly appreciated!

Do you have a Perl question, too?
Otherwise I would suggest you head over to the CGI NGs, because your
question has _very_little_ to do with Perl but a _whole_lot_ to do with
CGI and web authoring.

jue
 

Brian McCauley

sipitai said:
Brian McCauley wrote...




Although now I'm not entirely sure what you mean by performing an
internal redirect to the file.

You put the file in a directory where it would be directly accessible
by a URL, but don't tell anyone the directory name.

Your CGI script then sends a CGI response to the web server that tells
it to generate an HTTP response as if it had received a simple GET
request for the file.

You may even be able to configure your web server so that requests to
the secret directory that do not come via an internal redirect are
declined. That way you wouldn't need to keep it secret.

(None of this, of course, has the slightest thing to do with Perl.)
If you could explain this in a bit more detail,

One way for a Perl script to send an internal redirect CGI response
would be:

print "Location: /some/non/published/path/$file\n\n";

Note that an internal redirect does not have the
'http://someserver.example.com/' prefix but starts at the '/' that is
the root of the current (virtual) server.

You could possibly use the redirect() method from CGI.pm rather than a
raw print() but this also generates a "Status:" header and I suspect
this may confuse some web servers into generating an external redirect.

You need to ensure that your web server software is configured not to
include 'helpful' Content-Location headers that contain the true URL of
HTTP entities that resolve via internal redirects.
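
Putting that together, a minimal sketch (the path and key_is_valid()
are placeholders for your own code):

#!/usr/bin/perl
use strict;
use warnings;

my ($key, $file) = ($ENV{PATH_INFO} || '') =~ m{^/([^/]+)/([^/]+)$};

if (key_is_valid($key, $file)) {
    # Local redirect: no scheme or host, so the server handles the
    # request internally and the client never sees the real URL.
    print "Location: /some/non/published/path/$file\n\n";
}
else {
    print "Content-Type: text/html\n\n";
    print "<html><body>This download link is not valid.</body></html>\n";
}

sub key_is_valid { return 1 }    # stand-in for the real check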
...or point me in the direction of a website that covers this
subject, I would greatly appreciate it.

Any website containing CGI reference or tutorial documentation should
cover this.
 

sipitai

...
Are you using alarm() in your program?
If not then it's the environment that is triggering the timeout.

I'm not using alarm() anywhere in the script.

And I do realise it's the environment that's actually triggering the
timeout, rather than the script. What I was trying to ask in my
original post was: is it more likely that there is a problem with the
environment that requires some sort of fix, or that there is a problem
with the script that requires some sort of fix? (For example, I figure
the script might need to be updated to somehow let the server know
that it is transferring data, or that it is still running, etc., to
prevent the timeout from occurring.)
 

sipitai

...
Do you have a Perl question, too?
Otherwise I would suggest you head over to the CGI NGs, because your
question has _very_little_ to do with Perl but a _whole_lot_ to do with
CGI and web authoring.

Yes, check out my original post. I just added this description for
those people who wanted to do a bit of lateral thinking and suggest
some alternative methods of implementing a solution.
 

sipitai

Brian McCauley wrote...
You put the file in a directory where it would be directly accessible
by a URL, but don't tell anyone the directory name.

Your CGI script then sends a CGI response to the web server that tells
it to generate an HTTP response as if it had received a simple GET
request for the file.

You may even be able to configure your web server so that requests to
the secret directory that do not come via an internal redirect are
declined. That way you wouldn't need to keep it secret.

(None of this, of course, has the slightest thing to do with Perl.)

One way for a Perl script to send an internal redirect CGI response
would be:

print "Location: /some/non/published/path/$file\n\n";

Note that an internal redirect does not have the
'http://someserver.example.com/' prefix but starts at the '/' that is
the root of the current (virtual) server.

You could possibly use the redirect() method from CGI.pm rather than a
raw print() but this also generates a "Status:" header and I suspect
this may confuse some web servers into generating an external redirect.

You need to ensure that your web server software is configured not to
include 'helpful' Content-Location headers that contain the true URL of
HTTP entities that resolve via internal redirects.

That is a fairly good idea, although it requires the files to be
located directly on the hard disk, whereas in this case the files are
stored in a blob field in a MySQL database.

So as far as I know, I'm still going to require a script to allow the
user to download the files, unless I start writing temporary files to
the hard disk, which is something I would like to avoid doing. But it's
possible I might have missed something here.

Perhaps another way of looking at this problem would be to ask: what
is the most reliable way of allowing a user to download a file stored
in a blob field in a MySQL database, without having to write a
temporary file to the hard disk?
 

Brian McCauley

sipitai said:
Perhaps another way of looking at this problem would be to ask: what
is the most reliable way of allowing a user to download a file stored
in a blob field in a MySQL database, without having to write a
temporary file to the hard disk?

When serving up entities of non-trivial size there are all sorts of
considerations about entity tags and partial responses if you want to
do it properly. None of this relates to Perl, and it is oft discussed
in newsgroups where it is on-topic.
 

Brian McCauley

sipitai said:
Perhaps another way of looking at this problem would be to ask: what
is the most reliable way of allowing a user to download a file stored
in a blob field in a MySQL database, without having to write a
temporary file to the hard disk?

Are you slurping the whole BLOB into a Perl variable? Perhaps you are
having problems with your CGI process going swap-bound. Try reading it
in chunks. (How you'd do this is SQL-dialect specific).
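
With MySQL, for example, you could pull the BLOB back a piece at a
time with SUBSTRING() instead of fetching the whole column in one go.
A sketch (the table and column names are invented):

use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:mysql:database=shop', 'user', 'password',
                       { RaiseError => 1 });

my $id    = 42;           # id of the row holding the file
my $chunk = 64 * 1024;    # bytes per round trip
my $pos   = 1;            # SUBSTRING() positions are 1-based

my ($length) = $dbh->selectrow_array(
    'SELECT LENGTH(data) FROM downloads WHERE id = ?', undef, $id);

binmode STDOUT;
print "Content-Type: application/octet-stream\r\n";
print "Content-Length: $length\r\n\r\n";

while ($pos <= $length) {
    my ($piece) = $dbh->selectrow_array(
        'SELECT SUBSTRING(data, ?, ?) FROM downloads WHERE id = ?',
        undef, $pos, $chunk, $id);
    print $piece;
    $pos += $chunk;
}

That way the process only ever holds one chunk in memory rather than
the whole file.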
 
