Question About Design Strategy



I have written an app in C#/ASP.NET 2.0 that is built to handle a large
number of scenarios. Part of that system involves allowing users to
download large files. As part of my original design strategy, I chose to
locate these downloads in a directory separate from the website file
structure.

The two primary purposes for this were: (1) it is more secure because
users cannot link directly to the files and (2) it is modular,
allowing for complete refreshes/updates of the application code
without having to worry about deleting this directory and files.

To accommodate this, I created an appSettings key/value pair (along
with many others) in my web.config file to hold the path/location of
this external directory. Then I added a .cs file to my App_Code
directory that contains a static class called "Website" that reads all
the values and makes them available to the application during run
time. For example, after the app starts, a code behind page might get
the needed value with a call like "Website.DownloadsDirectory".
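A minimal sketch of such a static settings class, assuming the appSettings key is also named "DownloadsDirectory" (the key name and class shape here are illustrative, not the poster's actual code):

```csharp
using System.Configuration;

// Lives in App_Code; exposes web.config appSettings values to the app.
public static class Website
{
    // Backed by: <appSettings><add key="DownloadsDirectory" value="..." /></appSettings>
    public static string DownloadsDirectory
    {
        get { return ConfigurationManager.AppSettings["DownloadsDirectory"]; }
    }
}
```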

Next I created a web form whose only purpose is to "fetch" the file,
copy it to a temporary directory (within the website file structure),
and then Response.Redirect to it. The temporary directory itself is
purged periodically by the app, so these temporary files don't remain
in there forever.
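A rough sketch of what that fetch page's code-behind might look like, under my assumptions about the names involved (the `file` query parameter and the `~/Temp` folder are stand-ins):

```csharp
using System;
using System.IO;
using System.Web.UI;

// Code-behind for Fetch.aspx: copies the requested file into the site,
// then redirects the browser to it so IIS serves the download.
public partial class Fetch : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // GetFileName strips any path segments from the user-supplied name.
        string fileName = Path.GetFileName(Request.QueryString["file"]);
        string source = Path.Combine(Website.DownloadsDirectory, fileName);
        string dest = Path.Combine(Server.MapPath("~/Temp"), fileName);

        File.Copy(source, dest, true);   // this is the slow step for 1 GB files
        Response.Redirect("~/Temp/" + fileName);
    }
}
```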

Anyway, this all works great for small files... then we started trying
to do it with big files. When I say big, I mean over 1 GB. As you might
imagine, we are now having problems. Here's what happens: (1) the user
clicks the Download link; (2) the "fetch" page kicks off and begins
copying the file into the temporary directory, during which the screen
remains the same; (3) finally, a "Save File" dialog box appears.

The problem is occurring after the user clicks the download link. The
fetch page takes up to five minutes in some cases to copy/stage the
file before redirecting to it for download. During this time the user
might think the app is broken. He or she might click the Download link
again (and again...).

As you can imagine, this can be confusing for the user, and this is
definitely not what I want! As part of my solution, I want to avoid
moving the downloads directory within the website file structure.

How would you guys go about doing this? What can I do to improve on
the design?




Do you need the temp directory? Why not stream the file directly from
the external directory?

But yeah, 1 GB files!? Those are huge. You might have issues with that
regardless.


Cowboy (Gregory A. Beamer)

You can consider streaming out the bits, but you still have a potential
problem with the size. To do this, you set the mime type and then use a
streaming engine (generally an ASPX page).
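Streaming the bits out of an ASPX page might look something like this sketch; `Website.DownloadsDirectory` and the `file` query parameter are assumed names, and `Response.TransmitFile` is used because it writes the file without buffering all of it in memory, which matters at 1 GB:

```csharp
using System;
using System.IO;
using System.Web.UI;

// Download.aspx code-behind: streams the file straight from the
// external directory, with no temporary copy inside the site.
public partial class Download : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        string fileName = Path.GetFileName(Request.QueryString["file"]);
        string fullPath = Path.Combine(Website.DownloadsDirectory, fileName);

        // Set the mime type and suggest a file name to the browser.
        Response.ContentType = "application/octet-stream";
        Response.AddHeader("Content-Disposition",
            "attachment; filename=\"" + fileName + "\"");
        Response.TransmitFile(fullPath);   // streams without loading into memory
        Response.End();
    }
}
```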

If these files are used by internal users, you can consider a Windows
Forms application that makes a socket connection for the file. This can
also work externally with ClickOnce-style deployment. With large files,
you should have better luck with a socket connection than with a web
link. If this is still a problem, you can code a download manager that
can pick back up after a failure, but I would see how things work before
heading in this direction.

Depending on the nature of the docs, you may be going a bit overboard on
your security, as users have to know where a doc resides to get to it.
If the documents need to be highly secure (containing user financial or
personal information, for example), you will need some form of
intervention in the download process.
Can you set up the app to use some form of login? If so, you can secure the
docs via configuration of some sort. The intervention, for a small number of
docs, can be entries in the web.config file for that directory. For a larger
number of docs, you can create an HTTP Handler or Module.

I do not find the copy-and-link approach to be a good option, however.


An idea:

When the code runs to copy the file, don't have the user "wait".

Send them to a polling page that checks (every 10 seconds) to see if
the file they requested has finished being copied.

You can do this with a small database (flags that say "DownloadStarted",
"DownloadComplete"), or with something in Session state.
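A bare-bones version of the polling idea, using Session state and a Refresh header (the page name, session keys, and redirect target are all illustrative):

```csharp
using System;
using System.Web.UI;

// Status.aspx: the user lands here right after requesting a download.
// It refreshes itself every 10 seconds until the staging copy finishes.
public partial class Status : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        object done = Session["DownloadComplete"];   // set by the copy worker

        if (done != null && (bool)done)
        {
            // Copy finished: send the browser to the staged file.
            Response.Redirect((string)Session["StagedFileUrl"]);
        }
        else
        {
            // Still copying: ask the browser to poll again in 10 seconds.
            Response.AddHeader("Refresh", "10");
        }
    }
}
```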

I use this scenario with reports (for like 4 years now).
I actually let the user input report parameters, and then a Windows
service picks up the request and creates a .html file.
The html file may or may not be big, but the report processing time may
be long. Hence the "don't make me wait" strategy.

I actually (when I copy the file) use a guid/guid solution.

There are two GUIDs there: one for a directory, one for a file. I figure
the odds of guessing a filename with two GUIDs in it are pretty low.
P.S. Guid.ToString("N") comes in handy here.
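The guid/guid naming might be built like this (a sketch; the root path and extension handling are my assumptions, not the poster's code):

```csharp
using System;
using System.IO;

static class TempPaths
{
    // Builds tempRoot\<guid-dir>\<guid-name><ext>, using the "N" format
    // (32 hex digits, no dashes) for both the directory and file name.
    public static string Make(string tempRoot, string extension)
    {
        string dir = Guid.NewGuid().ToString("N");
        string file = Guid.NewGuid().ToString("N") + extension;
        return Path.Combine(Path.Combine(tempRoot, dir), file);
    }
}
```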

Naturally, you have to have a cleanup service. I check the
last-modified date attribute of the file and have a configurable value
like "4" (for 4 hours) after which I delete them.
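The cleanup pass could be as simple as this sketch, where the 4-hour cutoff mirrors the configurable value mentioned above and the guid-directory layout is assumed:

```csharp
using System;
using System.IO;

static class TempCleanup
{
    // Deletes staged files whose last-modified time is older than
    // maxAgeHours, then removes any guid directories left empty.
    public static void Purge(string tempRoot, int maxAgeHours)
    {
        DateTime cutoff = DateTime.Now.AddHours(-maxAgeHours);

        foreach (string dir in Directory.GetDirectories(tempRoot))
        {
            foreach (string file in Directory.GetFiles(dir))
            {
                if (File.GetLastWriteTime(file) < cutoff)
                    File.Delete(file);
            }
            if (Directory.GetFileSystemEntries(dir).Length == 0)
                Directory.Delete(dir);
        }
    }
}
```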

So that's an idea.

You have to figure out how strict your security scenario is.

With the above solution, someone can send a link to the file, and it
will work for 4 hours. Maybe that's OK, maybe that's not OK.

In a banking scenario, you probably want to stream the image every
single time... like for a check image or something.

Separately, I would strongly suggest looking at Jeff Prosise's example
on Asynchronous ASP.NET:

NorthwindImageGrabber : IHttpHandler

The project first shows how to use an IHttpHandler instead of an .aspx
page. (This was Jeff's big NO-NO at TechEd 2007: don't use an .aspx
page to stream images.)

It doesn't play into my solution above necessarily; it's just a new
thing to consider when figuring out a solution.
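For reference, streaming through an IHttpHandler rather than an .aspx page looks roughly like this sketch; the handler name, the `file` query parameter, and the `Website.DownloadsDirectory` setting are assumed, not taken from Jeff's project:

```csharp
using System.IO;
using System.Web;

// Registered in web.config under <httpHandlers>; skips the full page
// lifecycle of an .aspx when all you need to do is push bytes.
public class FileStreamHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        // GetFileName strips any path segments from the user-supplied name.
        string fileName = Path.GetFileName(context.Request.QueryString["file"]);
        string fullPath = Path.Combine(Website.DownloadsDirectory, fileName);

        context.Response.ContentType = "application/octet-stream";
        context.Response.TransmitFile(fullPath);  // streams without buffering
    }
}
```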
