Avoiding fragmentation

Roedy Green

I just noticed that when I create files with Java, they are often badly
fragmented, with ten or more fragments.

This could be avoided with a simple strategy. When you open the file
for write, you specify the estimated size of the file. The OS then
looks for a contiguous hunk of disk space to satisfy the request. If
you guess too big, you return the tail to the free-space pool. If you
guess too small, it adds another extent or two.
 
Owen Jacobson

> I just noticed that when I create files with Java, they are often badly
> fragmented, with ten or more fragments.
>
> This could be avoided with a simple strategy. When you open the file
> for write, you specify the estimated size of the file. The OS then
> looks for a contiguous hunk of disk space to satisfy the request. If
> you guess too big, you return the tail to the free-space pool. If you
> guess too small, it adds another extent or two.

You can do that, or something much like it, by building up larger
chunks of the file in memory before writing them to disk (using NIO,
RandomAccessFile, your own buffering, or BufferedOutputStream). There's
no guarantee that the filesystem will fragment less given larger
writes, but a large block written at once at least propagates some size
information to the OS, whereas a small write to a stream indicates
nearly nothing about the volume of data following it.
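As a minimal sketch of the buffering approach (the buffer size of 1 MiB and the record size here are arbitrary choices for illustration), thousands of small record writes can be coalesced in memory so the OS sees only a handful of large requests:

```java
import java.io.BufferedOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class BufferedWriteDemo {
    public static void main(String[] args) throws IOException {
        Path out = Files.createTempFile("bufdemo", ".bin");
        // A large buffer (1 MiB here) coalesces many small writes into a
        // few big ones, so the OS receives larger requests it can try to
        // place in fewer extents.
        try (BufferedOutputStream bos = new BufferedOutputStream(
                new FileOutputStream(out.toFile()), 1 << 20)) {
            byte[] record = new byte[128];
            for (int i = 0; i < 10_000; i++) {
                bos.write(record);   // accumulates in memory, not on disk yet
            }
        }                            // close() flushes whatever remains
        System.out.println(Files.size(out)); // 10_000 * 128 = 1280000 bytes
        Files.delete(out);
    }
}
```

As Owen says, this only propagates size information indirectly; the filesystem is still free to fragment the large writes.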

This isn't really a Java issue, though: the lowest common denominator
for file creation is POSIX-like, which assumes infinitely appendable
files with no fixed size (or size hints). Blame Bell, basically. ;)

-o
 
Lawrence D'Oliveiro

> Actually, this may come as a shock to you, but Linux filesystems can
> and do have fragmentation problems.

And yet I have never known any of my customers’ Linux servers—or my own
Linux machines, for that matter—to slow down over time.
 
Joshua Cranmer

> And yet I have never known any of my customers’ Linux servers—or my own
> Linux machines, for that matter—to slow down over time.

And I have sped up cold startup by an order of magnitude just by
defragging my hard drive. Note that Linux's filesystems are geared more
towards server use than desktop use.
 
Lew

I am going to undo your presumption in eliding vital parts of Joshua's post,
to revert the effect of your intellectual dishonesty, "Lawrence".

Not surprising. Also not surprising that you use your own very, very limited
personal experience to try to countervail the documented, technical evidence
that Joshua presented, then *omit* the reference lest the blinding light of
truth reveal your pathetic ignorance, "Lawrence".

To paraphrase Joshua, "But, hey, 'Lawrence', why let truth get in the way of
your ignorance?"

> And I have sped up cold startup by an order of magnitude just by
> defragging my hard drive. Note that Linux's filesystems are geared more
> towards server use than desktop use.

So Joshua not only has facts, dear boy, but personal experience that supports
the truth and not your narrow, blindered and inaccurate view of the world,
"Lawrence". So, "Larry"-boy, how 'bout you accept truth and allow your world
view to expand? It's not in your own best interest to fight facts, "Lar".
 
Lawrence D'Oliveiro

> And I have sped up cold startup by an order of magnitude just by
> defragging my hard drive.

On a Linux system?

> Note that Linux's filesystems are geared more towards server use than
> desktop use.

What does that mean?
 
Roedy Green

> This isn't really a Java issue, though: the lowest common denominator
> for file creation is POSIX-like.

If the various Java stream-open methods gained an optional size
parameter, that hint could either be passed to the OS for optimisation,
or Java could explicitly allocate that much space, as with random I/O,
and then discard the excess on close.

I remember discovering that writing a 0-byte record in DOS was the way
to truncate a file. At first I thought it was a bug, then a joke.
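The allocate-then-discard pattern Roedy describes can be approximated today with RandomAccessFile.setLength: grow the file to the estimate up front, write, then cut it back to the actual size before closing. One caveat: on many filesystems, growing a file with setLength merely creates a sparse (zero-filled) file rather than reserving a contiguous run of blocks, so this is a sketch of the pattern, not a guaranteed anti-fragmentation fix. The 10 MiB estimate and 3 MiB payload below are arbitrary illustration values.

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.file.Files;
import java.nio.file.Path;

public class PreallocateDemo {
    public static void main(String[] args) throws IOException {
        Path p = Files.createTempFile("prealloc", ".bin");
        long estimate = 10L << 20;               // guess: 10 MiB
        try (RandomAccessFile raf = new RandomAccessFile(p.toFile(), "rw")) {
            raf.setLength(estimate);             // grow up front; the FS may
                                                 // (or may not) reserve space
            raf.seek(0);
            byte[] data = new byte[3 << 20];     // we actually wrote 3 MiB
            raf.write(data);
            raf.setLength(raf.getFilePointer()); // return the tail: discard
                                                 // the excess on close
        }
        System.out.println(Files.size(p));       // 3 MiB = 3145728 bytes
        Files.delete(p);
    }
}
```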
 
Roedy Green

> This could be avoided with a simple strategy. When you open the file
> for write, you specify the estimated size of the file. The OS then
> looks for a contiguous hunk of disk space to satisfy the request. If
> you guess too big, you return the tail to the free-space pool. If you
> guess too small, it adds another extent or two.

The current approach is after-the-fact defragging. I have written
extensive reviews of Windows defraggers.

See http://mindprod.com/jgloss/defragger.html

This does you no good at all for temp files or short-lived files.
 
