Scott W Gifford
Another question about SoftReference objects and the garbage
collector. As I mentioned in my previous post, I'm experimenting with
implementing a cache using SoftReference objects.
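For context, the kind of cache I'm experimenting with looks roughly like
this (just a sketch, not my actual code; SoftCache and the key/value types
are placeholder names):

import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;

public class SoftCache<K, V> {
    // Each value is held only through a SoftReference, so the GC may
    // clear it whenever it decides memory is tight.
    private final Map<K, SoftReference<V>> map =
        new HashMap<K, SoftReference<V>>();

    // Returns the cached value, or null if it was never cached or the
    // GC has already cleared the SoftReference.
    public synchronized V get(K key) {
        SoftReference<V> ref = map.get(key);
        return (ref == null) ? null : ref.get();
    }

    public synchronized void put(K key, V value) {
        map.put(key, new SoftReference<V>(value));
    }
}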
I noticed that the strategy Sun's JVM uses seems to be "when there is
any memory pressure, clear all SoftReference objects". For example,
with the appended code I see this:
Allocated 1 blocks; 1 still alive.
[...]
Allocated 64 blocks; 64 still alive.
Allocated 65 blocks; 1 still alive.
[...]
Allocated 128 blocks; 64 still alive.
Allocated 129 blocks; 1 still alive.
[...]
Allocated 192 blocks; 64 still alive.
Allocated 193 blocks; 1 still alive.
[...]
Allocated 256 blocks; 64 still alive.
I was surprised by this; I expected to see only some of the
SoftReferences cleared, perhaps the least-recently-used ones or some
approximation of that.
This seems to make SoftReference objects inappropriate for a cache, since
when the cache grows too large (which will inevitably happen), the
whole thing is discarded at once. If the items in the cache are
expensive to calculate, that means a surge in CPU use to re-calculate
everything still needed in the cache, only for all of it to eventually
be thrown away again.
Is the behavior I'm seeing normal and expected, or is it an artifact
of the microbenchmark I'm playing with? Is there any way to affect
this or tune it?
What I like about using SoftReference objects for my cache is that the
cache will automatically be sized based on available memory; I will
have quite a few of these caches (hundreds, one for each client) in my
application, and giving each one an appropriately sized cache will be
quite a balancing act. Is there any way to get similar memory-
conscious behavior in my own cache implementation?
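The alternative I'd otherwise fall back on is a fixed-size LRU cache per
client, along these lines (a sketch using LinkedHashMap's access-order
mode and its removeEldestEntry hook; maxEntries is the number I'd have to
guess separately for each client):

import java.util.LinkedHashMap;
import java.util.Map;

public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public LruCache(int maxEntries) {
        super(16, 0.75f, true);  // accessOrder=true gives LRU eviction order
        this.maxEntries = maxEntries;
    }

    // Evict the least-recently-used entry once the cache exceeds maxEntries.
    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries;
    }
}

That works, but it needs a per-client size chosen up front, which is
exactly the balancing act I'd like to avoid.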
Thanks!
---ScottG.
import java.util.*;
import java.lang.ref.*;

public class SoftRef {
    final static int NUM_BLOCKS = 256;
    final static int BLOCK_SIZE = 1000000;

    // Holds roughly BLOCK_SIZE bytes so each instance puts real pressure
    // on the heap.
    private final static class MemoryHog {
        byte[] arr;

        public MemoryHog(int size) {
            arr = new byte[size];
            Arrays.fill(arr, (byte) 3);
        }
    }

    // Counts how many references have not yet been cleared by the GC.
    public static int liveCount(List<? extends Reference<?>> list) {
        int count = 0;
        for (Reference<?> e : list) {
            if (e.get() != null)
                count++;
        }
        return count;
    }

    public static void main(String[] args) throws Exception {
        ArrayList<SoftReference<MemoryHog>> hogPen =
            new ArrayList<SoftReference<MemoryHog>>(NUM_BLOCKS);
        for (int i = 1; i <= NUM_BLOCKS; i++) {
            hogPen.add(new SoftReference<MemoryHog>(new MemoryHog(BLOCK_SIZE)));
            System.out.println("Allocated " + i + " blocks; "
                               + liveCount(hogPen) + " still alive.");
        }
    }
}