Well, if the directory is large, I bet shelling out to chown is *faster*
than doing it in Perl. Perl's built-in chown (like the underlying system
call) doesn't handle the -R functionality, and I bet the C implementation
of traversing a directory tree in chown is faster than most, if not every,
Perl solution using File::Find.
Of course, the strongest argument is: programmer time is more important
than CPU time. The programmer time saved by just writing
system "chown -R $uid:$gid $dir";
utterly dwarfs the overhead of starting another process. (And note that
unless '$uid', '$gid' and '$dir' contain characters that are special to
the shell, no shell will be started by perl; perl will exec chown directly.)
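If you want that no-shell guarantee regardless of what the variables contain, the list form of system() always bypasses the shell. Here's a minimal sketch; the uid, gid and directory values are made up for illustration, and the script chowns a scratch directory to the current user's own uid/gid so it can run without root:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Temp qw(tempdir);

my $dir = tempdir(CLEANUP => 1);            # scratch directory for the demo
my $uid = $<;                               # current real uid
my $gid = (split ' ', $()[0];               # current real gid

# List form: perl exec()s chown directly, so a shell is never started
# and metacharacters in $dir are passed to chown literally.
my $status = system('chown', '-R', "$uid:$gid", $dir);
die "chown failed: $?" if $status != 0;
```

With the single-string form, perl only skips the shell when the string looks "simple"; the list form removes the guesswork entirely.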
Thanks for the alternative angle, excellent stuff. I'm considering using
the original system() function now because of what you said about speed.
This is part of a background script on a webserving machine that takes
instructions from a file written to by a CGI script, so the time each
request takes to execute is of the utmost importance: we want to keep the
user waiting as little as possible for the result of the request. I've
yet to find a way to use the sleep() function to sleep for less than a
second (although I'm sure there's a module that will do this), so even
though using the system() function might only save a millisecond, that
millisecond could cause the CGI script to sleep for a whole second more
than it really needs to. Well, just under a second more than it *needs*
to, but...
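On the sub-second sleep: Time::HiRes has shipped with the Perl core since 5.8, so no extra install is needed. A quick sketch:

```perl
use strict;
use warnings;
use Time::HiRes qw(usleep gettimeofday tv_interval);

my $t0 = [gettimeofday];
usleep(50_000);                     # sleep for 50 milliseconds (microsecond units)
my $elapsed = tv_interval($t0);     # elapsed time in seconds, as a float
printf "slept for %.3f s\n", $elapsed;
```

Time::HiRes also exports a drop-in sleep() that accepts fractional seconds, e.g. Time::HiRes::sleep(0.05), if you'd rather keep the familiar name.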
unless '$uid', '$gid' and '$dir' contain characters that are special to shell...
This is also good to know as it will only ever be passed variables from
the calling CGI script like so:
system("chown -R 1100:80 /home/username/public_html");
The $uid value is taken directly from a passwd file, $gid is a constant
of 80, and if $username is not alphanumeric the CGI script returns an
error.
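That alphanumeric check is the piece doing the real security work before anything reaches system(). A minimal sketch of such a check, assuming "alphanumeric" means ASCII letters and digits only (valid_username is a hypothetical helper, not from the actual CGI script):

```perl
use strict;
use warnings;

# Hypothetical helper mirroring the CGI script's rule: reject any
# username that is not purely ASCII-alphanumeric, so path tricks
# like "../" can never end up in the chown command line.
sub valid_username {
    my ($name) = @_;
    return defined $name && $name =~ /^[A-Za-z0-9]+$/;
}

print valid_username('alice42')  ? "ok\n" : "rejected\n";   # ok
print valid_username('../etc')   ? "ok\n" : "rejected\n";   # rejected
```

Anchoring the regex at both ends matters; a bare /[A-Za-z0-9]+/ would happily accept "foo;rm -rf" because it matches anywhere in the string.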
Tim