Manish said:
In that case the file system should be controlled, and people/processes
should be well aware of the consequences of moving the files around. If
you do want to take the blob route, make sure to load-test the system,
as I am quite sure blob selects will slow your system down
considerably, depending on the image size, access frequency and access
mechanism (filtered queries vs. direct lookup).
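A rough sketch of what such a load test might look like: time N repeated fetches through a pluggable supplier which, in the real test, would execute the blob SELECT over JDBC. All names here are illustrative, not from any actual setup.

```java
import java.util.function.Supplier;

// Micro-benchmark sketch for the "load test the system" advice above.
// The Supplier stands in for the real blob fetch (e.g. a JDBC SELECT).
public class BlobLoadTest {

    // Returns the mean latency in nanoseconds over `iterations` fetches.
    public static long meanFetchNanos(Supplier<byte[]> fetch, int iterations) {
        long total = 0;
        for (int i = 0; i < iterations; i++) {
            long start = System.nanoTime();
            byte[] blob = fetch.get();   // real test: SELECT image FROM ... WHERE id = ?
            total += System.nanoTime() - start;
            if (blob == null) {
                throw new IllegalStateException("empty blob");
            }
        }
        return total / iterations;
    }
}
```

Running this once with small images and once with large ones, at realistic access frequencies, would show how much the blob route actually costs.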
Yup, but, case in point: I have an application that does this (images in
the file system, references in the database) which has been in use on a
fair number of sites since 2000. When I distribute a new version, the
easiest way to do an update is to delete the application directory and drop
the new WAR file in place. However, this doesn't work, because I store
uploaded images, etc., in a subdirectory of the application directory. This
may not be the best thing to do, but it mostly works and I haven't fixed it.
So to update, one either needs to copy all the upload subdirectories out of
the application, stop and drop the application, install the new WAR, wait
for it to unpack, and copy the contents of the upload directories back
(which can be done under Tomcat with no server restart); or else unpack the
new WAR into the old application directory, overwriting its contents, which
(a) doesn't work if you are replacing jar files in WEB-INF/lib with new
ones with different names (which I usually am), and (b) definitely does
require a server restart.
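For what it's worth, the copy-out/copy-back dance can be scripted. A minimal sketch using java.nio.file, assuming the uploads live in an "uploads" subdirectory of the application directory (the directory names are my assumption, not the actual layout):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.stream.Stream;

// Sketch of the "copy uploads out, redeploy, copy back" procedure.
public class UploadPreserver {

    // Recursively copy src into dst, creating directories as needed.
    static void copyTree(Path src, Path dst) throws IOException {
        try (Stream<Path> paths = Files.walk(src)) {
            for (Path p : (Iterable<Path>) paths::iterator) {
                Path target = dst.resolve(src.relativize(p).toString());
                if (Files.isDirectory(p)) {
                    Files.createDirectories(target);
                } else {
                    Files.createDirectories(target.getParent());
                    Files.copy(p, target, StandardCopyOption.REPLACE_EXISTING);
                }
            }
        }
    }

    // Before dropping the app: stash the uploads outside the app directory.
    static Path stashUploads(Path appDir) throws IOException {
        Path stash = Files.createTempDirectory("uploads-stash");
        copyTree(appDir.resolve("uploads"), stash);
        return stash;
    }

    // After the new WAR has unpacked: restore the stashed uploads.
    static void restoreUploads(Path stash, Path appDir) throws IOException {
        copyTree(stash, appDir.resolve("uploads"));
    }
}
```

It still leaves a window where the app is down while the WAR unpacks, which is exactly why keeping the content in the database is tempting.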
Every time I produce a new release this issue comes up; if I kept the
actual image data (and XSL, and other uploaded content) in the database it
wouldn't happen, and sooner or later I'm going to get round to making that
change. I appreciate there may be performance issues, which I shall
probably work around by caching.
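A minimal sketch of what that caching might look like: a small LRU cache in front of the blob lookup, with the loader standing in for the SELECT-by-id (class and method names are illustrative, not from the actual application):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Function;

// In-memory LRU cache for image blobs, keyed by the database reference.
public class BlobCache {
    private final int maxEntries;
    private final Map<String, byte[]> cache;

    public BlobCache(int maxEntries, Function<String, byte[]> loader) {
        this.maxEntries = maxEntries;
        // access-order LinkedHashMap: removeEldestEntry evicts the LRU entry
        this.cache = new LinkedHashMap<String, byte[]>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, byte[]> eldest) {
                return size() > BlobCache.this.maxEntries;
            }
        };
        this.loader = loader;
    }

    private final Function<String, byte[]> loader;

    // Returns the cached blob, loading (e.g. via JDBC) on a miss.
    public synchronized byte[] get(String id) {
        return cache.computeIfAbsent(id, loader);
    }
}
```

Bounding the cache by entry count is the simplest policy; bounding by total bytes would suit large images better.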