george.e.sullivan
In a closed thread, Mr. John W. Kahn posted a script that adds up directory
usage per user and produces simple output such as:
userA 112345
userB 57389293
userC 323
and so forth
Here is Mr. Kahn's script:
perl -MFile::Find -le '($m) = stat( $d = shift ); find( sub{ @s =
lstat; $m == $s[0] and $u{ getpwuid $s[4] } += $s[7]}, $d ); printf
"%-5s %d\n", $_, $u{$_} for sort { $u{$b} <=> $u{$a} } keys %u' .
The above is a cut and paste from there.
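To make sure I am reading it correctly, here is my own expansion of the one-liner into a plain script with comments. It should behave the same way; the variable names (%u, @s, $d, $m) are kept from the original, and the two "//" fallbacks are my additions, not Mr. Kahn's:

#!/usr/bin/perl
# Expanded reading of the one-liner above (my annotation, not from the thread).
use strict;
use warnings;
use File::Find;

my $d = shift // '.';        # starting directory ("." at the end of the one-liner)
my ($m) = stat $d;           # field 0 of stat: device number of the start directory

my %u;                       # bytes accumulated per user name

find(
    sub {
        my @s = lstat $_ or return;       # lstat, so symlinks are not followed
        return unless $m == $s[0];        # skip entries on other filesystems
        my $owner = getpwuid( $s[4] ) // $s[4];   # user name, or numeric UID if unknown (my addition)
        $u{$owner} += $s[7];              # field 7 is st_size, the apparent size in bytes
    },
    $d,
);

# report users sorted by descending usage
printf "%-5s %d\n", $_, $u{$_} for sort { $u{$b} <=> $u{$a} } keys %u;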
On one of my larger directories, the output shows almost a 1 gigabyte
difference between this script and the du -ks command:
du -ks = 37,928,180,000
script = 38,641,548,183
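Taking the two figures as printed, the difference is 38,641,548,183 - 37,928,180,000 = 713,368,183, so a bit over 700 MB.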
Is there some subtle error in the script that would cause this, or is the
script actually reading deeper into the file/directory structure and
accounting for unused blocks on the hard drive or other kinds of overhead?
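My own guess, in case it helps anyone answer: du and the one-liner are not measuring quite the same thing. du adds up allocated disk blocks (st_blocks, field 12 of lstat, normally in 512-byte units) and counts a hard-linked file only once, while the one-liner adds up field 7 (the apparent size in bytes) and counts every link it visits. Here is a variant of the same one-liner that I think follows du's accounting more closely; this is my sketch, not code from the original thread, and it assumes the filesystem reports a meaningful block count in field 12:

perl -MFile::Find -le '
  ($m) = stat( $d = shift );
  find( sub {
      @s = lstat or return;
      return unless $m == $s[0];                       # same filesystem only, as before
      return if $s[3] > 1 && $seen{"$s[0].$s[1]"}++;   # count each hard-linked inode once, like du
      $u{ getpwuid $s[4] } += $s[12] * 512;            # allocated 512-byte blocks instead of st_size
  }, $d );
  printf "%-5s %d\n", $_, $u{$_} for sort { $u{$b} <=> $u{$a} } keys %u
' .

The totals are still in bytes, like the original script, so they would need dividing by 1024 before comparing with du -k.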
Thanks to all.