Vince Darley
An application we've built makes very extensive use of
java.util.HashMap to store the results of the tens of thousands of
calculations it performs. Each calculation involves reading
from/writing to a HashMap, eventually filling it with, say,
10000-100000 entries. Once complete, the information is copied out
into a fixed matrix, and we can safely discard the HashMap (and all
its entries of course).
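
Schematically, each calculation looks something like this (the types,
names, and sizes here are made up for illustration; the real code is
more involved):

    import java.util.HashMap;
    import java.util.Iterator;
    import java.util.Map;

    // Illustrative only -- the real key/value types and names differ.
    class CalculationSketch {
        double[] matrix = new double[100000];   // the fixed output matrix

        void runOneCalculation() {
            // A fresh map per calculation, ending up with 10000-100000 entries.
            Map results = new HashMap();
            for (int key = 0; key < 50000; key++) {
                Double prev = (Double) results.get(new Integer(key));    // read
                double v = (prev == null ? 0.0 : prev.doubleValue()) + key;
                results.put(new Integer(key), new Double(v)); // write: allocates a HashMap$Entry
            }
            // Once complete, copy the results out into the fixed matrix...
            for (Iterator it = results.entrySet().iterator(); it.hasNext(); ) {
                Map.Entry e = (Map.Entry) it.next();
                matrix[((Integer) e.getKey()).intValue()]
                    = ((Double) e.getValue()).doubleValue();
            }
            // ...after which the map and all its entries become garbage.
        }
    }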
Now, the problem with this is that the code spends 40%+ of its time
allocating new HashMap$Entry objects (according to the profiling
we've done), and allocates some 33 million of them in total (over
1 GB of memory), although certainly no more than half a million are
live at any one time (say 10-20 MB, according to the profiler).
So, my question is: are there any suggestions for how we can
restructure this code, or its use of HashMap, to avoid such a mad
allocation/deallocation frenzy? Presumably it would all run more
quickly if we could avoid this problem.
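
To make "restructure" concrete, one direction we've been wondering
about (just a sketch, untested, with made-up names, and assuming our
keys can be encoded as longs) is an open-addressed primitive map:
keys and values live in flat arrays, so no per-entry objects are
ever allocated, and the whole table can be cleared and reused
between calculations:

    import java.util.Arrays;

    // Sketch of an open-addressed long->double table. No per-entry
    // objects; clear() makes it reusable across calculations.
    class LongDoubleMap {
        private long[] keys;
        private double[] vals;
        private boolean[] used;
        private int size;

        LongDoubleMap(int expectedEntries) {
            int cap = 16;                      // power-of-two table size,
            while (cap < expectedEntries * 2)  // kept at most half full
                cap <<= 1;
            keys = new long[cap];
            vals = new double[cap];
            used = new boolean[cap];
        }

        void put(long key, double value) {
            if (size * 2 >= keys.length) grow();
            int i = indexOf(key);
            if (!used[i]) { used[i] = true; keys[i] = key; size++; }
            vals[i] = value;
        }

        double get(long key, double missing) {
            int i = indexOf(key);
            return used[i] ? vals[i] : missing;
        }

        void clear() {                  // reuse between calculations: nothing
            Arrays.fill(used, false);   // is freed, so nothing is reallocated
            size = 0;
        }

        private int indexOf(long key) {
            int mask = keys.length - 1;
            int i = (int) (key ^ (key >>> 32)) & mask;
            while (used[i] && keys[i] != key)
                i = (i + 1) & mask;     // linear probing
            return i;
        }

        private void grow() {           // double the arrays and re-insert
            long[] ok = keys; double[] ov = vals; boolean[] ou = used;
            keys = new long[ok.length * 2];
            vals = new double[ov.length * 2];
            used = new boolean[ou.length * 2];
            size = 0;
            for (int i = 0; i < ok.length; i++)
                if (ou[i]) put(ok[i], ov[i]);
        }
    }

We haven't measured anything like this, though, and perhaps there is
a simpler answer within java.util itself.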
Any ideas?
Vince.