Toward a generic Disk Sort for Java

Roedy Green

I was thinking how you might go about writing a sort that could handle
more data than could fit in RAM. I handled the problem in Abundance by
checkpointing the app to disk to free up maximum RAM, then spawning a
copy of Opt-Tech Sort. My records were roughly like DataOutputStream
would produce, so I could automatically generate the command script to
sort the fields in any way I wanted.

I thought you might pull it off in Java this way.

1. You write a comparator as if you were going to sort Objects in an
ArrayList.

2. The external sort has an add method that also takes collections.

It accepts a "chunk" of records, and sorts them using Sun's sort.

Then it writes them out as SERIALISED objects in a heavily buffered
stream. There may be some way to do a partial reset after each object
to speed it up.

Then you repeat collecting, sorting and writing another batch to
another file.

When you have created N files, you recycle, appending (optimal N to be
determined by experiment). Ideally each file would be on a different
physical drive.

Then when all the records have been added, you start merging chunks
into longer chunks and writing out the longer chunks. Each N-way merge
cuts the number of chunks to 1/N and increases the length of the
chunks N times.

The final merge pass does not happen until the user invokes the
Iterator to hand over the resulting records.
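A rough cut of the scheme, just as an unproven sketch: the class and
method names below are invented, records have to be Serializable, and
the merge is a single N-way pass rather than the multi-file recycling
described above. ObjectOutputStream.reset() would be the hook for the
partial reset.

import java.io.*;
import java.util.*;

public class DiskSort<T extends Serializable> {
    private final Comparator<T> comp;
    private final int chunkSize;   // records per in-RAM chunk; tune by experiment
    private final List<File> runs = new ArrayList<>();
    private final List<T> chunk = new ArrayList<>();

    public DiskSort(Comparator<T> comp, int chunkSize) {
        this.comp = comp;
        this.chunkSize = chunkSize;
    }

    // Step 2: accepts a collection of records, spilling sorted chunks to disk.
    public void add(Collection<T> records) throws IOException {
        chunk.addAll(records);
        if (chunk.size() >= chunkSize) spill();
    }

    // Sort the current chunk with the built-in sort and write it as one run file.
    private void spill() throws IOException {
        if (chunk.isEmpty()) return;
        chunk.sort(comp);
        File run = File.createTempFile("sortrun", ".bin");
        try (ObjectOutputStream out = new ObjectOutputStream(
                new BufferedOutputStream(new FileOutputStream(run), 1 << 16))) {
            for (T t : chunk) {
                out.writeObject(t);
                out.reset();   // the "partial reset": drop the back-reference table
            }
        }
        chunk.clear();
        runs.add(run);
    }

    // The merge happens lazily, when the caller asks for the iterator.
    public Iterator<T> iterator() throws IOException {
        spill();   // flush whatever is still in RAM
        PriorityQueue<Source> pq =
                new PriorityQueue<>((a, b) -> comp.compare(a.head, b.head));
        for (File run : runs) {
            Source s = new Source(run);
            if (s.advance()) pq.add(s);
        }
        return new Iterator<T>() {
            public boolean hasNext() { return !pq.isEmpty(); }
            public T next() {
                if (pq.isEmpty()) throw new NoSuchElementException();
                Source s = pq.poll();
                T result = s.head;
                if (s.advance()) pq.add(s);   // refill from the run we just drew from
                return result;
            }
        };
    }

    // One open run file plus its current front record.
    private class Source {
        final ObjectInputStream in;
        T head;

        Source(File f) throws IOException {
            in = new ObjectInputStream(
                    new BufferedInputStream(new FileInputStream(f), 1 << 16));
        }

        @SuppressWarnings("unchecked")
        boolean advance() {
            try {
                head = (T) in.readObject();
                return true;
            } catch (EOFException end) {
                try { in.close(); } catch (IOException ignored) { }
                return false;
            } catch (IOException | ClassNotFoundException e) {
                throw new RuntimeException(e);
            }
        }
    }
}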

Another way it might be done: the records to be sorted must be byte
arrays, effectively chunks produced by DataOutputStream. You specify
offset, length and key type, e.g.
int, byte, short, float, double, String.

This would require a detailed knowledge of the bit structure of the
records, as you did it in the olden days of assembler and C.

This would be clumsier to use, but would avoid the overhead of
pickling and reconstituting records on every pass.
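A fragment of what the key descriptions might look like, assuming
fixed-length records; the names and the offsets in the usage comment
are invented for illustration. DataOutputStream writes primitives
big-endian, which is also ByteBuffer's default order, so a key can be
read straight out of the raw record without reconstituting an object.

import java.nio.ByteBuffer;
import java.util.Comparator;

public final class RecordKeys {
    // Compare records on an int key at a fixed byte offset.
    static Comparator<byte[]> intKeyAt(final int offset) {
        return (a, b) -> Integer.compare(
                ByteBuffer.wrap(a).getInt(offset),
                ByteBuffer.wrap(b).getInt(offset));
    }

    // Compare records on a double key at a fixed byte offset.
    static Comparator<byte[]> doubleKeyAt(final int offset) {
        return (a, b) -> Double.compare(
                ByteBuffer.wrap(a).getDouble(offset),
                ByteBuffer.wrap(b).getDouble(offset));
    }
}

// Usage, with made-up offsets: major key an int at offset 4,
// minor key a double at offset 12.
//   Comparator<byte[]> order =
//       RecordKeys.intKeyAt(4).thenComparing(RecordKeys.doubleKeyAt(12));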

Then of course, there is the possibility someone has already solved
this and done it well.

The universe has a sneaky habit. Problems start out small, and it
looks like a purely in-RAM solution is perfectly adequate. Then they
grow and grow, bit by bit, and start pushing the limits of the RAM
solution. Suddenly you are faced with a major redesign.
 
Joshua Cranmer

Roedy said:
I was thinking how you might go about writing a sort that could handle
more data than could fit in RAM. I handled the problem in Abundance by
checkpointing the app to disk to free up maximum RAM, then spawning a
copy of Opt-Tech Sort. My records were roughly like DataOutputStream
would produce, so I could automatically generate the command script to
sort the fields in any way I wanted.

Knuth's The Art of Computer Programming, Volume 3 (Searching and
Sorting) has long discussions on optimizing searching and sorting for
tape drives that could be very relevant here.
 
Arne Vajhøj

Roedy said:
I was thinking how you might go about writing a sort that could handle
more data than could fit in RAM. I handled the problem in Abundance by
checkpointing the app to disk to free up maximum RAM, then spawning a
copy of Opt-Tech Sort. My records were roughly like DataOutputStream
would produce, so I could automatically generate the command script to
sort the fields in any way I wanted.

[...]

Plenty has been written about out-of-memory sorts.

Start at:

http://en.wikipedia.org/wiki/External_sorting

Arne
 
Arne Vajhøj

Joshua said:
Knuth's The Art of Computer Programming, Volume 3 (Searching and
Sorting) has long discussions on optimizing searching and sorting for
tape drives that could be very relevant here.

Disks are random access whereas tapes are sequential access, and that
could change things.

But I don't know if it actually does.

Arne
 
Roedy Green

Arne Vajhøj said:
Disks are random access whereas tapes are sequential access, and that
could change things.

But I don't know if it actually does.

If your intermediate files are on the same physical disk drive, the
arms will hop back and forth between them as you merge. However, if
you have a big enough buffer (better still, double buffering), it
would not be such a problem.

One of the tweaking parameters is how big you should make your buffers,
and how many records you should sort at a time for each chunk. It gets
more complicated with today's virtual memory. I suppose you just have
to benchmark it various ways.
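At the least the knobs could be pulled out where a benchmark harness
can get at them. A sketch; the property names and defaults here are
pure guesses:

import java.io.*;

public class SortTuning {
    static final int BUFFER_BYTES =
            Integer.getInteger("disksort.buffer.bytes", 1 << 20);   // 1 MB per run file
    static final int CHUNK_RECORDS =
            Integer.getInteger("disksort.chunk.records", 100_000);  // records per in-RAM sort

    // One big buffer per run file, to cut down the arm-hopping during the merge.
    static InputStream openRun(File runFile) throws IOException {
        return new BufferedInputStream(new FileInputStream(runFile), BUFFER_BYTES);
    }
}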

I helped write a tape sort back in the early 60s, in Fortran for the
7044. Buffering then was quite limited. The tricky part was making the
tape sort come out so that the final pass went to the drive with the
output tape mounted on it. You might have 4 to 6 intermediate drives.
Tapes were typically on the same channel, so you could not do
simultaneous I/O.

I don't have a feel for how sorts would behave now compared with then.
Everything is much faster and bigger, but I don't know in what
proportion.
 
Mark Space

Roedy said:
The universe has a sneaky habit. Problems start out small, and it
looks like a purely in-RAM solution is perfectly adequate. Then they
grow and grow, bit by bit, and start pushing the limits of the RAM
solution. Suddenly you are faced with a major redesign.

Without doing any research, my gut instinct would be to start big with
an actual general-purpose merge sort, writing out temp files if needed.
And I'd favor DataInput/OutputStream heavily over serialization.

The merge sort would degenerate to an in-memory shell sort below some
limen, configurable externally to the program (read from a config file,
for example).
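Something like this, say, with the cutoff in a properties file. The
file name, property key and default below are placeholders, and I have
used the library sort where a hand-rolled shell sort would also do:

import java.io.*;
import java.util.*;

public class SortDispatch {
    // Read the in-memory cutoff from a config file.
    static int loadLimit() throws IOException {
        Properties p = new Properties();
        try (Reader r = new FileReader("sort.properties")) {   // placeholder file name
            p.load(r);
        }
        return Integer.parseInt(p.getProperty("inMemoryLimit", "1000000"));
    }

    static <T> void sort(List<T> records, Comparator<T> comp, int limit) {
        if (records.size() <= limit) {
            records.sort(comp);   // small enough: stay in memory
        } else {
            // hand off to the external merge sort with temp files
        }
    }
}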
 
Martin Gregorie

Mark Space said:
Without doing any research, my gut instinct would be to start big with
an actual general-purpose merge sort, writing out temp files if needed.
And I'd favor DataInput/OutputStream heavily over serialization.

The merge sort would degenerate to an in-memory shell sort below some
limen, configurable externally to the program (read from a config file,
for example).

I'd go with that. As I'm sure you know, that is how traditional
mainframe tape sorts used to work. Years back I implemented exactly
that on an 8/16-bit CPU (the MC 6809) with 48K RAM and floppy disks.
It was quite easy to do in PL/9 (a clone of PL/M) and worked
surprisingly fast.

It's described, and the code provided, in Kernighan & Plauger's
"Software Tools in Pascal", though if you don't already have a copy
you'll probably have to try used bookstores.
 
Roedy Green

Martin Gregorie said:
It's described, and the code provided, in Kernighan & Plauger's
"Software Tools in Pascal", though if you don't already have a copy
you'll probably have to try used bookstores.

In the old days, you created a string of bytes to sort, and you told
the sort the offsets, lengths and formats of the keys embedded in the
record.

We have been spoiled by in-RAM sorts where we can write a comparator
that uses field names, and where the object need not be pulled
together into a string of bytes.

Part of the challenge is to figure out how to provide the convenience
of an in-RAM comparator as the only thing you need to specify to use
the external sort. Somehow the externalness should be transparent,
just kicking in when RAM is tight.
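One crude way the externalness could kick in on its own: the add
method watches the heap headroom and spills the current chunk to disk
only when it gets tight. A sketch; the 25% fraction is an arbitrary
placeholder:

public class HeapWatch {
    // True when heap headroom has dropped below a quarter of the maximum.
    static boolean ramIsTight() {
        Runtime rt = Runtime.getRuntime();
        long used = rt.totalMemory() - rt.freeMemory();
        long headroom = rt.maxMemory() - used;
        return headroom < rt.maxMemory() / 4;
    }
}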
 
John B. Matthews

Martin Gregorie said:
I'd go with that. As I'm sure you know, that is how traditional
mainframe tape sorts used to work. Years back I implemented exactly
that on an 8/16-bit CPU (the MC 6809) with 48K RAM and floppy disks.
It was quite easy to do in PL/9 (a clone of PL/M) and worked
surprisingly fast.

It's described, and the code provided, in Kernighan & Plauger's
"Software Tools in Pascal", though if you don't already have a copy
you'll probably have to try used bookstores.

Excellent! Or Wirth's "Algorithms + Data Structures = Programs"; seen
here in Oberon, p. 64:

<http://www-old.oberon.ethz.ch/WirthPubl/AD.pdf>

Or, as the third shift operator once told me: throw it into a database
and let Codd sort it out ;-)
 
Roedy Green

John B. Matthews said:
Or, as the third shift operator once told me: throw it into a database
and let Codd sort it out ;-)

It sounds like a joke, but I suspect that is how most big sorts are
done. Back in the olden days external sorts were pretty well the
first software to go commercial. Nearly everything else you wrote
yourself. You just don't see ads for them like you used to.
Opt-Tech is still around, but very quiet.
 
John B. Matthews

Roedy Green said:
It sounds like a joke, but I suspect that is how most big sorts are
done. Back in the olden days external sorts were pretty well the
first software to go commercial. Nearly everything else you wrote
yourself. You just don't see ads for them like you used to.
Opt-Tech is still around, but very quiet.

Yes, a terrible pun on Codd and God, but you're right about the olden
days. At the same time, I wonder if a JDBC-enabled database may be a
useful abstraction for a generic, flat-file disk sort.
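It sketches easily enough, assuming an embedded engine such as H2 on
the classpath; the JDBC URL, schema and record shape below are
invented for the example:

import java.sql.*;

public class CoddSort {
    public static void main(String[] args) throws SQLException {
        try (Connection c = DriverManager.getConnection("jdbc:h2:./sortscratch")) {
            try (Statement s = c.createStatement()) {
                s.execute("CREATE TABLE rec (k INT, payload VARCHAR(255))");
            }
            // Tip the records in...
            try (PreparedStatement ins =
                     c.prepareStatement("INSERT INTO rec VALUES (?, ?)")) {
                for (int i = 0; i < 1_000; i++) {
                    ins.setInt(1, (int) (Math.random() * 1_000_000));
                    ins.setString(2, "record " + i);
                    ins.addBatch();
                }
                ins.executeBatch();
            }
            // ...and let the database sort them out.
            try (Statement s = c.createStatement();
                 ResultSet rs = s.executeQuery("SELECT * FROM rec ORDER BY k")) {
                while (rs.next()) {
                    process(rs.getInt("k"), rs.getString("payload"));
                }
            }
        }
    }

    static void process(int k, String payload) { /* consume in sorted order */ }
}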

Yes, I know the project is already out of hand :)
 
Arne Vajhøj

Roedy said:
It sounds like a joke, but I suspect that is how most big sorts are
done. Back in the olden days external sorts were pretty well the
first software to go commercial. Nearly everything else you wrote
yourself. You just don't see ads for them like you used to.
Opt-Tech is still around, but very quiet.

Data manipulation is done in databases or in memory today.

The use of large flat files for data is rare now, so there is little
demand for sort utilities.

Arne
 
Martin Gregorie

John B. Matthews said:
At the same time, I wonder if a JDBC-enabled database may be a
useful abstraction for a generic, flat-file disk sort.

It would be interesting to see how tipping records into an indexed
table and then reading them out again compares for speed with the
traditional in-memory sorted pre-string phase followed by n-way merge
passes. I don't think I'd lay bets either way.

Of course, if your system has enough swap space you could just substitute
a TreeMap for JDBC+database.
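Roughly like this, with one caveat worth noting: a TreeMap holds a
single value per key, so records with duplicate keys must share a
bucket. The key extraction below is a placeholder:

import java.util.*;

public class TreeMapSort {
    static List<String> sortByKey(List<String> records) {
        TreeMap<String, List<String>> byKey = new TreeMap<>();
        for (String r : records) {
            String key = r.substring(0, Math.min(10, r.length()));  // placeholder key extraction
            byKey.computeIfAbsent(key, k -> new ArrayList<>()).add(r);
        }
        List<String> out = new ArrayList<>();
        for (List<String> bucket : byKey.values()) {
            out.addAll(bucket);   // comes out in key order, leaning on swap
        }
        return out;
    }
}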
 
