Making subfoos not leak memory

John Ersatznom

It's come up several times lately: the subfoo. Besides String's
substring method, a Subset class was proposed here recently that would
provide a subset view backed by a Set, and a recent mention of image
cropping naturally suggests representing a cropped image by using the
source image as a backer, so that the only added memory consumption is
the bounds.

The naive implementations (including that of String's substring method)
tend to have a problem: the parent object cannot be garbage collected as
long as the child object is still floating around.

I've come up with a fix for that, but it has a cost -- the object has to
be wrapped by one that has a finalizer, which impacts GC performance.
Fortunately, the usual need arises with larger images, strings, and the
like, which are generally long-lived.

* First, don't make the subfoo implementation an inner class of the
backer, or at least make it a static nested class. A true inner
class instance won't let its enclosing instance be garbage collected.
* Next, have the subfoo hold only a weak reference to its parent.
* Now comes the tricky part.

import java.lang.ref.WeakReference;

public class WeakSubFoo {
    private class Holder {
        public Foo foo;
        public Holder (Foo foo) { this.foo = foo; }
        protected void finalize () {
            backerDead(foo);
        }
    }
    private WeakReference<Holder> backer;
    // Written by the finalizer thread, read by callers, hence volatile.
    private volatile Foo replacement = null;
    // whatever else
    public WeakSubFoo (Foo backer /*, other args */) {
        this.backer = new WeakReference<Holder>(new Holder(backer));
        // whatever else
    }
    void backerDead (Foo foo) {
        this.replacement = <actually copy the relevant part of foo>;
        // Adjust bounds, e.g. from left=13, top=17, width=32,
        // height=32, parentWidth=1024 to left=0, top=0,
        // width=32, height=32, parentWidth=32, or from
        // start=17, end=42 to end -= start, start=0.
        backer = null;
    }
    private Foo getBackingFoo () {
        WeakReference<Holder> ref = backer;
        if (ref != null) {
            Holder h = ref.get();
            if (h != null) return h.foo;
        }
        // Block until the Holder's finalizer installs the replacement.
        while (replacement == null) {
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                // Ignore.
            }
        }
        return replacement;
    }
    // Other logic, using getBackingFoo
}

The major issue I see is that getBackingFoo has to block if it gets
called after the reference is cleared but before the backer is finalized
by the GC. In theory that could be a while. I don't see any easy way to
avoid this, because I don't see any way to actually get a reference to a
weakly-reachable object as soon as it's discovered to be only
weakly-reachable by the GC.

If the backing object is of interface type, this alternative exists:

import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;

public final class BackingFoo implements Foo {
    private final Foo myFoo;
    public BackingFoo (Foo foo) { myFoo = foo; }
    public Foo getFoo () { return myFoo; }
    // Punt implementation of Foo interface to myFoo
}

public class WeakSubFoo {
    private WeakReference<BackingFoo> backerRef;
    private ReferenceQueue<BackingFoo> q
        = new ReferenceQueue<BackingFoo>();
    private Foo backer;
    // whatever else
    public WeakSubFoo (BackingFoo backer /*, other args */) {
        this.backer = backer.getFoo();
        this.backerRef
            = new WeakReference<BackingFoo>(backer, q);
        // whatever else
    }
    private Foo getBackingFoo () {
        if (backerRef != null && q.poll() == backerRef) {
            backerRef = null;
            q = null; // make these reclaimable
            backer = <copy of the relevant subset of backer>;
            // Adjust bounds, e.g. from left=13, top=17,
            // width=32, height=32, parentWidth=1024 to
            // left=0, top=0, width=32, height=32,
            // parentWidth=32, or from start=17, end=42
            // to end -= start, start=0.
        }
        return backer;
    }
    // Other logic, using getBackingFoo
}

The key to making this work is for client code to use BackingFoo and
WeakSubFoo but never a raw Foo. All raw Foos should be immediately
wrapped in a BackingFoo. When a BackingFoo with a WeakSubFoo is no
longer strongly reachable outside WeakSubFoos, it has in fact become
only weakly reachable. When getBackingFoo is eventually called on a
WeakSubFoo, it finds the reference backerRef on q and responds by
copying the relevant part of backer, which holds the underlying Foo
itself -- e.g. a stringalike's 17-to-42 substring becomes a new
stringalike holding only those 25 characters, with the substring
position data changed appropriately. The strong reference to the parent
Foo is now gone, and as long as there are no others (they were held only
through references to the single BackingFoo that is now only weakly
reachable), the Foo itself is now only weakly reachable and is
reclaimable.
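
For instance, client code might follow this discipline (a minimal
sketch; FooImpl and the extra WeakSubFoo constructor arguments are
hypothetical):

// Hypothetical client discipline: wrap the raw Foo at once and never
// keep a direct reference to it afterwards.
Foo raw = new FooImpl(/* ... */);        // FooImpl is hypothetical
BackingFoo shared = new BackingFoo(raw);
raw = null;                              // drop the only raw reference

WeakSubFoo sub = new WeakSubFoo(shared, /* e.g. */ 17, 42);

shared = null;  // Once no strong references to the BackingFoo remain
                // anywhere, the next getBackingFoo() inside 'sub' finds
                // backerRef on q, copies just the 17..42 range, and the
                // parent Foo becomes reclaimable.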

Besides the need to use exactly one BackingFoo per Foo and to avoid
holding direct references to the underlying Foo, there is another
caveat: if a WeakSubFoo goes unused for a long time, the backing Foo
doesn't become reclaimable for a long time. This suggests having
WeakSubFoos register themselves with a private static thread that
periodically polls its registrants by invoking getBackingFoo on them,
discarding the return value. Run once a minute or so, this will clear
out dead Foos sooner. Of course the thread has to hold its registrants
through WeakReferences and drop any that have been cleared.
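
A minimal sketch of such a reaper (the class and registration scheme are
hypothetical, and getBackingFoo would have to be made package-private so
the reaper can call it):

import java.lang.ref.WeakReference;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

final class SubFooReaper {
    // Registrants are held weakly so the reaper never keeps a
    // WeakSubFoo (and hence its backer) alive by itself.
    private static final List<WeakReference<WeakSubFoo>> registrants
            = new CopyOnWriteArrayList<WeakReference<WeakSubFoo>>();

    static void register (WeakSubFoo s) {
        registrants.add(new WeakReference<WeakSubFoo>(s));
    }

    static {
        Thread t = new Thread(new Runnable() {
            public void run () {
                for (;;) {
                    try {
                        Thread.sleep(60 * 1000); // once a minute or so
                    } catch (InterruptedException e) {
                        return;
                    }
                    for (WeakReference<WeakSubFoo> ref : registrants) {
                        WeakSubFoo s = ref.get();
                        if (s == null) {
                            registrants.remove(ref); // dead registrant
                        } else {
                            s.getBackingFoo(); // result deliberately unused
                        }
                    }
                }
            }
        });
        t.setDaemon(true);
        t.start();
    }
}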

Substring copying example with a hypothetical nonfinal MyString:

    backerRef = null;
    q = null;
    backer = new MyString(backer.toString().substring(start, end));
    end -= start;
    start = 0;
Calling methods expect the WeakMySubstring's value to be
getBackingMySubstring from index "start" to index "end", an invariant
that is preserved by the above copy-on-parent-doomed routine. Note that
this also works if "end" points one past the end. Note also that the
copying cannot use any WeakMySubstring methods on "this", for fear of
endless recursion.
 

Tom Hawtin

John said:
It's come up several times lately: the subfoo. Besides String's
substring method, a Subset class was proposed here recently that would
provide a subset view backed by a Set, and a recent mention of image
cropping naturally suggests representing a cropped image by using the
source image as a backer, so that the only added memory consumption is
the bounds.

The naive implementations (including that of String's substring method)
tend to have a problem: the parent object cannot be garbage collected as
long as the child object is still floating around.

I've come up with a fix for that, but it has a cost -- the object has to
be wrapped by one that has a finalizer, which impacts GC performance.
Fortunately, the usual need arises with larger images, strings, and the
like, which are generally long-lived.

You really want to avoid finalizers if at all possible. Even
WeakReferences are slow and large relative to Strings.

A problem with what you are trying to do is that subfoos should usually
keep in sync with one another. They need the same backing data. You may
be able to trim some of it, but making private copies is wrong.

Tom Hawtin
 

John Ersatznom

Tom said:
A problem with what you are trying to do is that subfoos should usually
keep in sync with one another. They need the same backing data. You may
be able to trim some of it, but making private copies is wrong.

I'm only suggesting making a private copy when the backing data is no
longer referenced anywhere else. I came up with a method that doesn't
use finalizers (only a WeakReference).
 

Tom Hawtin

John said:
I'm only suggesting making a private copy when the backing data is no
longer referenced anywhere else. I came up with a method that doesn't
use finalizers (only a WeakReference).

In your example, what happens if you have two WeakSubFoos created from a
single BackingFoo...?

Tom Hawtin
 

John Ersatznom

Tom said:
In your example, what happens if you have two WeakSubFoos created from a
single BackingFoo...?

They've become independent, of course. This isn't a semantic problem if
the objects are immutable (which is where you'd normally want this as an
efficiency measure anyway, since otherwise there are aliasing problems).
It's only an efficiency problem if the WeakSubFoos overlap substantially
and the BackingFoo wasn't substantially larger than their union.

This does suggest a more general method, which is to store ALL your foos
(strings, images, whatever) as size-bounded segments with weak links
among them, in either a flat or a tree structure; the objects visible to
using code hold strong references to all of the segments they use, plus
indexing information (begin and end, cliprect, whatever). A subfoo is
just created as another of these objects, referring only to the segments
it uses. Algorithms that respect the indexing bounds may follow the weak
links, but in doing so will never leave the set of segments the
container holds strongly. Or the weak links may be omitted entirely. In
the latter case it's possible to avoid Reference objects *and*
finalizers, at the cost of a bit more complexity. Either way, segments
no longer reachable get collected. A string, for example, could take the
form of segments and logical-string objects, the latter containing an
array (or List) of consecutive segments. A substring gets constructed
with an array of just the segments that overlap with it. This gives one
area of actual decreased complexity, in that "full" strings and
substrings share the same representation (only substrings may begin or
end in the middle of their first or last segment).
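
A minimal sketch of the flat (array) variant for strings, with
hypothetical names (Segment, SegString); it needs no Reference objects
or finalizers, since a dropped segment is just ordinary garbage:

// Hypothetical segmented string. A substring shares the overlapping
// segments instead of copying characters or pinning the whole parent.
final class Segment {
    final char[] chars;
    Segment (char[] chars) { this.chars = chars; }
}

final class SegString {
    private final Segment[] segs; // consecutive pieces of the text
    private final int offset;     // start position within segs[0]
    private final int length;     // characters represented

    SegString (Segment[] segs, int offset, int length) {
        this.segs = segs;
        this.offset = offset;
        this.length = length;
    }

    static SegString of (String s, int segSize) {
        int n = (s.length() + segSize - 1) / segSize;
        Segment[] segs = new Segment[n];
        for (int i = 0; i < n; i++) {
            int from = i * segSize;
            int to = Math.min(s.length(), from + segSize);
            segs[i] = new Segment(s.substring(from, to).toCharArray());
        }
        return new SegString(segs, 0, s.length());
    }

    int length () { return length; }

    char charAt (int i) {
        int pos = offset + i;
        for (Segment s : segs) {
            if (pos < s.chars.length) return s.chars[pos];
            pos -= s.chars.length;
        }
        throw new IndexOutOfBoundsException();
    }

    // O(number of segments) pointer copying, no character copying;
    // segments entirely outside [from, to) are not retained.
    SegString substring (int from, int to) {
        int pos = offset + from;
        int first = 0;
        while (pos >= segs[first].chars.length) {
            pos -= segs[first].chars.length;
            first++;
        }
        int last = first;
        int covered = segs[first].chars.length - pos;
        while (covered < to - from) {
            last++;
            covered += segs[last].chars.length;
        }
        Segment[] sub = new Segment[last - first + 1];
        System.arraycopy(segs, first, sub, 0, sub.length);
        return new SegString(sub, pos, to - from);
    }
}

For example, SegString.of(someText, 1024).substring(17, 42) strongly
holds only the one or two 1024-char segments that range overlaps; the
rest of the parent becomes collectable.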

The main tuning parameter is segment size: a bigger segment means
subfoos carry more "baggage", but a smaller segment makes the array
copying in subfoo creation more substantial. The limiting case is a
segment that's a minimal element, in which case you've just got a string
as char array and a substring as copied subset, which is back to square
zero. The other limiting case is a segment that's the whole string, and
a substring prevents the whole string being collected, which is back to
square one. Intermediate segmenting gives intermediate performance
characteristics.

A fixed upper bound on segment size limits the wasted memory to at most
about two segments' worth per subfoo for strings or the like. The worst
case is that only one minimal element (character or whatever) is used
from the first and one from the last segment. With images it can still
be pretty bad -- the worst case amount of wasted memory is O(perimeter)
in general, so O(sqrt(N)) for 2D images and other 2D stuff, where N is
the minimum data size of the subfoo. Still better than O(N) or even
unlimited (O(size of parent)).

Tree structures are hairier to design and test, but give you multiple
levels of granularity. Subfoo creation is O(log N) instead of O(1)
(memory-wasting implementation) or O(N) (copying) in the size of the
subfoo. Memory wastage is more complex: the leaf-segment size is the
segment size limiting wastage in subfoos that outlive their parents, but
the amount of storage needed to maintain the tree structure itself is
nontrivial and O(total number of segments), unlike arrays (O(1)). Parent
links in the tree need to be either absent or weak, but child links
should be strong, and foos should strongly reference the deepest common
ancestor of any segment the foo intersects. Segment size is now a triple
tradeoff. Smaller makes wastage per subfoo less, copying greater (but
logarithmically instead of linearly), and overhead greater (n log n).
Larger makes wastage per subfoo more, copying less (but only
logarithmically), and overhead less (n log n). Trees make sense if
segments are fairly large, but the parent objects may be enormous --
30kx30k pixel images, multimeg disk files read in as strings, and the like.
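
Under those rules, a node type might look like this (a minimal sketch
with hypothetical names; since the tree is built bottom-up, the weak
parent link is filled in by the parent's constructor):

import java.lang.ref.WeakReference;

// Hypothetical tree node: strong child links, weak parent link. A
// subfoo strongly references the deepest common ancestor of the
// segments it intersects, keeping exactly that subtree alive.
final class SegNode {
    WeakReference<SegNode> parent; // weak, or omitted entirely
    final SegNode[] children;      // strong; null at a leaf
    final char[] leafChars;        // payload; null at interior nodes
    final int length;              // characters covered by this subtree

    SegNode (char[] leafChars) {   // leaf
        this.leafChars = leafChars;
        this.children = null;
        this.length = leafChars.length;
    }

    SegNode (SegNode[] children) { // interior node
        this.leafChars = null;
        this.children = children;
        int n = 0;
        for (SegNode c : children) {
            c.parent = new WeakReference<SegNode>(this);
            n += c.length;
        }
        this.length = n;
    }
}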

Array or tree structures like the above are also useful under the hood
for handling large objects memory-efficiently by storing them as
temporary files on disk while keeping in memory only the pieces in
immediate use. Pieces not being used must be garbage collectable, so the
node object should hold a weak reference to the piece and a strong
reference to the information needed to load the piece from disk. The
getter dereferences the weak reference and, if the result is non-null,
returns it; otherwise it loads the data from disk, refreshes the weak
reference, and returns a strong reference. A soft reference may be used
in place of a weak one if recently accessed data is likely to be
traversed again. This kind of framework also supports persisting the
object (use a *non*-temporary file) and loading into RAM only the subset
being worked on at a given time. This granular disk storage is useful if
the data gets into the hundreds of megs or even the gigs (30kx30k images
at 24/32bpp, for instance). Data that is even larger (terabytes and
above) with complex internal structure should of course be persisted in
a database, and worked on in tiny pieces at a time. And any persistent
data requiring real transactional integrity needs a real database
regardless of size.
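
A minimal sketch of such a node for byte data (hypothetical throughout;
shown with the soft-reference variant, which swaps in directly for the
weak one):

import java.io.IOException;
import java.io.RandomAccessFile;
import java.lang.ref.SoftReference;

// Hypothetical disk-backed piece: the bytes themselves are only softly
// reachable, while the coordinates needed to reload them are strong.
final class DiskPiece {
    private final RandomAccessFile file; // backing (possibly temp) file
    private final long position;         // where this piece's bytes start
    private final int size;              // how many bytes it covers
    private SoftReference<byte[]> cache
            = new SoftReference<byte[]>(null);

    DiskPiece (RandomAccessFile file, long position, int size) {
        this.file = file;
        this.position = position;
        this.size = size;
    }

    // Return the piece, reloading it from disk if the GC reclaimed it.
    synchronized byte[] get () throws IOException {
        byte[] data = cache.get();
        if (data != null) return data;
        data = new byte[size];
        file.seek(position);
        file.readFully(data);
        cache = new SoftReference<byte[]>(data);
        return data;
    }
}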
 
