Memory leak

Ross

I appear to have some sort of a memory leak. Even though I can't see
where objects are being retained, the memory requirements of my
program seem way out of kilter with what is actually being stored. I
have one method which does 99% of the running, which has a single
array of size 400 of objects, none of which should be very large. But
I have to give the program tens of megabytes of heap space or it
crashes.

I'm calling System.gc() once per loop, just in case garbage wasn't
being collected, but, as expected, no luck.

During each loop of the method, I copy the contents of the array into
another (locally declared) array, then swap the arrays. So, I could
have one copy of the array with out of date objects in it, but this
should be wiped next loop through.
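For what it's worth, a minimal self-contained sketch of that copy-and-swap pattern (illustrative, not the original code; 'int' stands in for the real element type, and the +1 is a placeholder for the real per-element work):

```java
// Minimal sketch of the copy-and-swap loop: one iteration copies into a
// locally declared array, then swaps. After the swap, nothing references
// the previous array, so it should be collectable.
class SwapDemo {
    static int[] current = {1, 2, 3};

    static void iterate() {
        int[] next = new int[current.length];
        for (int i = 0; i < current.length; i++) {
            next[i] = current[i] + 1; // placeholder for the real work
        }
        current = next; // the previous array becomes unreachable here
    }

    public static void main(String[] args) {
        iterate();
        System.out.println(java.util.Arrays.toString(current)); // [2, 3, 4]
    }
}
```

As written, nothing keeps the old array alive after the swap, so if the old objects are retained anyway, something else must still be referencing them.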

Looking at the Runtime methods totalMemory() and freeMemory(), the
amount of memory in use does go up and down, but eventually I burn
through 120M of heap space - shouldn't happen.
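(A small sketch of how those two Runtime methods combine into a "used heap" figure; this is illustrative, not the program in question.)

```java
// Used heap = committed heap (totalMemory) minus the free portion of it.
// Note totalMemory() is what the JVM has claimed so far, which can grow
// up to maxMemory(), so "used" can rise without any leak being present.
class HeapSampler {
    static long usedBytes() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.out.printf("used=%dM committed=%dM max=%dM%n",
                usedBytes() >> 20,
                rt.totalMemory() >> 20,
                rt.maxMemory() >> 20);
    }
}
```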

Any hints as to what I could look for to see what is taking up so much
space?
 
Daniel Pitts

Ross said:
I appear to have some sort of a memory leak. Even though I can't see
where objects are being retained, the memory requirements of my
program seem way out of kilter with what is actually being stored. I
have one method which does 99% of the running, which has a single
array of size 400 of objects, none of which should be very large. But
I have to give the program tens of megabytes of heap space or it
crashes.
Giving tens of megabytes seems reasonable. Many of my programs require
256M+ of heap space to run properly.
I'm calling System.gc() once per loop, just in case garbage wasn't
being collected, but, as expected, no luck.
Yeah, not likely to help, except to possibly cause a change in the delay
structure (if anything).
During each loop of the method, I copy the contents of the array into
another (locally declared) array, then swap the arrays. So, I could
have one copy of the array with out of date objects in it, but this
should be wiped next loop through.

Looking at the Runtime methods totalMemory() and freeMemory(), the
amount of memory in use does go up and down, but eventually I burn
through 120M of heap space - shouldn't happen.
Does it keep going up, or is that some sort of equilibrium?
Any hints as to what I could look for to see what is taking up so much
space?
Download a profiler and look at the heap dump; it will tell you which
objects are hanging around.

I use JProfiler, but it is a commercial product. NetBeans also has a
free profiler, and there are many other profilers as well.


Hope this helps.
 
Andreas Leitgeb

Ross said:
I appear to have some sort of a memory leak. Even though I can't see
where objects are being retained, the memory requirements of my
program seem way out of kilter with what is actually being stored.

Have a look at the jvisualvm utility (it should be in your JDK's bin
folder, provided you have a 1.6.x version of the JDK).

With that you can attach to your program, and see how many instances of
each class really "live". This may give a hint towards which objects are
actually "live-leaking".

Apart from that, perhaps your objects reference each other, so you end up
keeping a linked list of all the objects ....
 
markspace

Ross said:
Any hints as to what I could look for to see what is taking up so much
space?


Yeah, what the others said: get a profiler. The NetBeans profiler has
a feature where you can make a jpeg of the charts and graphs that it
creates. This has a big advantage because you can then post those jpegs
on the 'net and ask folks to take a look at them. This seems much
easier on everyone and more likely to yield an answer for you than
trying to verbally explain each thing you see in the profiling results.

The trick to debugging memory leaks is to look for objects that survive
generations of garbage collection. Any object which is being "held"
won't be garbage collected and therefore will eventually get a very high
generation number. Looking for objects that you have a lot of doesn't
work. If you have a lot of objects, and they have low generation
numbers, those objects are not leaking. They're being made and then
removed, like they should be.

Also, setting your memory requirements higher would be good. Maybe you
really need the memory. And if you do have a leak, more memory will
allow your program to run longer, and then you'll really see a high
generation number on leaked objects. Makes spotting them easier.

Good luck, and let us know how it works out. I don't have a lot of
experience debugging esoteric leaks, so I wouldn't mind helping you out
since I might learn some stuff. Get a profiler, post (jpeg!) some
results, let's have a look.
 
Andreas Leitgeb

Daniel Pitts said:
Yeah, not likely to help, except to possibly cause a change in the delay
structure (if anything).

I know of at least one of my programs where well-placed calls to System.gc()
boosted performance radically.

The JVM just failed to spot those points in the program where about 50% of
all objects could suddenly be freed in a single sweep. Collecting later, as
it otherwise did, resulted in the job being partially paged out by the OS.
I had to make the max heap size larger than physical memory for some peak
memory usages of the algorithm. - That's probably exactly the situation
where the JVM's memory handling is (or was back then) insufficient without
programmatic help.

Another trick was to apply one of the jdk/bin/* utilities to the running
program. One of them triggered a System.gc() call in the target, thereby
pulling it out of OS paging.

PS: But, of course, a System.gc() never helps against a real leak.
 
Ross

Thanks for all the advice. I'm using Java 1.6, so I should have access
to the heap visualisation tool.

Looking through everything written, and thinking, I added a static int
to some of my classes. Then, I incremented that value in constructors,
and decremented it in finalize(). The number of objects in the basic
array, measured after the call to System.gc(), sticks at 200, when the
basic array size is 100. But each of these objects (of class Member*)
has an array of other objects (class Rule). There is supposedly a
maximum of ten Rule objects per Member. However, while the number of
members doesn't go above 200, the number of Rule objects rapidly rises
way past the theoretical 2000 maximum and continues up. Way, way, up.

Hmmm....... At least I've some sort of a clue what might be wrong now.
More work needed. Next, I'm going to check that the number of Rule
objects actually stored doesn't go over the theoretical maximum.

*I'm writing a genetic algorithm of sorts; "Member" is a member of the
population, "Rule" is part of the behaviour that I'm evolving.
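A sketch of that counting trick (the class name is from the post, the rest is illustrative). One caveat worth folding in: finalize() runs on the JVM's Finalizer thread, not the thread that ran the constructor, so an AtomicInteger is safer than a plain static int:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Instance-counting sketch: incremented in the constructor, decremented in
// finalize(). AtomicInteger avoids lost updates, since finalize() runs on
// the Finalizer thread rather than the constructing thread.
class Member {
    static final AtomicInteger LIVE = new AtomicInteger();

    Member() {
        LIVE.incrementAndGet();
    }

    @Override
    protected void finalize() throws Throwable {
        try {
            LIVE.decrementAndGet();
        } finally {
            super.finalize();
        }
    }

    public static void main(String[] args) {
        Member[] held = new Member[3];
        for (int i = 0; i < held.length; i++) {
            held[i] = new Member();
        }
        // The held objects are still reachable, so none have been finalized.
        System.out.println("live: " + Member.LIVE.get());
    }
}
```

Note the count only falls when finalizers actually run, which may take several GC passes, so treat it as an upper bound on live instances.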

........ a fair bit later :) ...... Got it. The members of my
population are supposed to only have ten rules each. But that was only
enforced when they were created (sort of "born"), when they "mutate",
but NOT when they "crossover". So, if one Member with 9 rules
reproduces with another with 10 rules, then the results could be one
member with 1 rule, and another with 18, exceeding the maximum. Then,
for some reason, "evolution" preferred Members with more rules, and
the numbers just went up and up and up.

That's one of the weirder bugs that I've had. I just presumed it was a
low level problem like a linked list or something, but it was an error
at a more abstract level.
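In case it helps anyone else, the fix can be sketched like this ("Member" and "Rule" are from the post; the particular crossover scheme is made up): enforce the cap in crossover too, not only at creation and mutation.

```java
import java.util.ArrayList;
import java.util.List;

class Rule {}

class Member {
    static final int MAX_RULES = 10;
    final List<Rule> rules = new ArrayList<Rule>();

    // The child takes a prefix of parent a and a suffix of parent b.
    // Without the clamp below, cuts like (9, 1) on parents with 9 and 10
    // rules would produce an 18-rule child, exceeding the cap.
    static Member crossover(Member a, int cutA, Member b, int cutB) {
        Member child = new Member();
        child.rules.addAll(a.rules.subList(0, cutA));
        child.rules.addAll(b.rules.subList(cutB, b.rules.size()));
        if (child.rules.size() > MAX_RULES) {
            // The missing enforcement: clamp the child back to the cap.
            child.rules.subList(MAX_RULES, child.rules.size()).clear();
        }
        return child;
    }

    public static void main(String[] args) {
        Member a = new Member();
        for (int i = 0; i < 9; i++) a.rules.add(new Rule());
        Member b = new Member();
        for (int i = 0; i < 10; i++) b.rules.add(new Rule());
        Member child = crossover(a, 9, b, 1);
        System.out.println(child.rules.size()); // 10, not 18
    }
}
```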
 
Tom Anderson

....... a fair bit later :) ...... Got it. The members of my
population are supposed to only have ten rules each. But that was only
enforced when they were created (sort of "born"), when they "mutate",
but NOT when they "crossover". So, if one Member with 9 rules
reproduces with another with 10 rules, then the results could be one
member with 1 rule, and another with 18, exceeding the maximum. Then,
for some reason, "evolution" preferred Members with more rules, and
the numbers just went up and up and up.

Ah - so it wasn't a bug at all, it was a discovery!

tom
 
Lew

Ross said:
Looking through everything written, and thinking, I added a static int
to some of my classes. Then, I incremented that value in constructors,
and decremented in finalize().

So your application is single-threaded, then.
 
John B. Matthews

Tom Anderson said:
Ah - so it wasn't a bug at all, it was a discovery!

I propose the systematic name _Bureaucratus administrivia_, an obligate
rule-o-phile.
 
Bill McCleary

Andreas said:
PS: But, of course, a System.gc() never helps against a real leak.

Well ... it *might* help track one down. In a single-threaded app, you
might stick a System.gc() call in some method that's called not
infrequently, let it run awhile with memory profiling and debugging on,
then slap a breakpoint on the line following the System.gc() call and
wait for it to halt. There should be only three broad categories of
objects in the memory profile at the instant the breakpoint got hit:

1. Local objects in the methods in the call chain leading to the
System.gc() call (and you can find all such methods by using the
debugger to execute "new Exception().printStackTrace()" after the
breakpoint trips).
2. Objects that are supposed to be long-lived. (Open windows and their
dependents; startup stuff; assorted constants, likely including lots
of Strings and Class objects; open documents or equivalent; possibly
DB connections, sockets, file streams, or whatever, depending on the
type of app; etc.)
3. Whatever's getting packratted.
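The breakpoint placement might look like this (a sketch; the method and its call rate are hypothetical):

```java
// Sketch of the gc-then-breakpoint trick: call tick() from some method
// that runs reasonably often, with memory profiling and debugging on.
class GcBreakpoint {
    static long tick() {
        System.gc(); // request a collection
        Runtime rt = Runtime.getRuntime();
        long used = rt.totalMemory() - rt.freeMemory();
        // <-- set the debugger breakpoint on the line above. When it trips,
        // the heap should hold only: locals in the current call chain,
        // intentionally long-lived objects, and whatever is being packratted.
        // To list the methods in the call chain from the debugger:
        //     new Exception().printStackTrace();
        return used;
    }

    public static void main(String[] args) {
        System.out.println("used after gc: " + tick() + " bytes");
    }
}
```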

An idea for a useful future tool also occurs to me. It is based on how
GC works. Pointers from old objects to newer objects require tracking
and modification during GC in generational collectors, and packratting
more or less requires packratted objects to be referenced by older
objects. So a profiling tool that spots such links and notes what
classes the referrer and target are could help identify not only what's
being packratted, but where. If the profiler shows the referrer for a
lot of such cases to be HashMap, for example, it points to one or more
HashMaps somewhere. If the tool integrates with the debugger and lets
you inspect a suspect HashMap, you might see from its contents what
HashMap it is in your program -- "Oh looky, the main Foo to Bar mapping
table is filling up with Foos for some reason. Oh crap, the Foo hashCode
is broken so all the Foos that should have replaced each other in the
Map are filling it up instead! And that's why the updateBar method
doesn't seem to do anything, I was wracking my brain all last night
staring at its code ... gah." :)
 
Lew

Eric said:
No. (Hint: Does finalize() run on the same thread that
runs the constructors?)

Then one wonders how he avoided synchronization issues on the static instance
counter.
 
Alessio Stalla

     Just a suspicion: He may not have realized he *has*
synchronization issues.  (He's not the first to overlook
such a possibility.  I deal regularly with a third-party
application that keeps track of the number of user sessions
that are active, and sometimes when they all log off it
happily reports -1 sessions ...  Can't you just *smell* an
unsynchronized counter that lost an up-tick?)

     Easiest solution is probably to change the counter from
a plain `int' to an AtomicInteger.  Note that `volatile' is
*not* a solution; increment or decrement needs both a read
and a write, and `volatile' will not glue the two operations
together inseparably.

But in this case, 100% accuracy isn't needed. If every 1000 increments/
decrements, two of them overlap, that's no big deal. Statistically it
will make no difference, because the sum of errors tends to be zero.
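For what it's worth, the AtomicInteger half of that is easy to demonstrate (a sketch: 8 threads doing 100000 increments each; with a plain or volatile int the final count can come up short, with incrementAndGet() it is exact):

```java
import java.util.concurrent.atomic.AtomicInteger;

class CounterDemo {
    static final int N = 8;      // threads
    static final int M = 100000; // increments per thread
    static final AtomicInteger count = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[N];
        for (int i = 0; i < N; i++) {
            threads[i] = new Thread(new Runnable() {
                public void run() {
                    // incrementAndGet() makes the read-modify-write atomic,
                    // so no tick is lost even under heavy contention.
                    for (int j = 0; j < M; j++) {
                        count.incrementAndGet();
                    }
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        System.out.println(count.get()); // 800000, every time
    }
}
```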
 
Lew

Alessio said:
But in this case, 100% accuracy isn't needed. If every 1000 increments/
decrements, two of them overlap, that's no big deal. Statistically it
will make no difference, because the sum of errors tends to be zero.

Does it?

Since synchronization issues are non-deterministic, and finalizers
don't even always run, there is a finite probability of the errors
summing to a number far from zero under quite normal conditions.

It might not matter to you in this particular case, because you don't
need perfect accuracy, but you shouldn't base your confidence on a
superstition like "the sum of errors tends to be zero".
 
Kevin McMurtrie

Eric Sosman said:
No. (Hint: Does finalize() run on the same thread that
runs the constructors?)

It should also be noted that overriding finalize() may consume a
significant amount of memory. Each instance of an object overriding
finalize() has a matching Finalizer reference put into a ReferenceQueue.
A thread named "Finalizer" pulls from the queue to execute the
finalize() methods. Multiple GC passes may be needed to eventually free
all of the memory. If any finalize() method stalls, GC of all objects
overriding finalize() will stall too.
 
