pylab/matplotlib large plot memory management - bug? or tuning parameter needed?


bdb112

Summary:

It is not straightforward to avoid memory leaks/consumption in pylab.
If we define
x = arange(1e6)   # adjust size to make the increment visible, yet fast enough to plot
# then repetition of
plot(x, hold=0)   # consumes increasing memory according to the Ubuntu system monitor

Details:
# versions: Ubuntu 9.04 standard (py 2.6.2, ipython 0.91, matplotlib 0.98.5.2) (results here)
# or win32: matplotlib 0.98.3, python 2.5 (similar, although memory usage plateaus and is reused after 3-4 plot()s)
First, turn off output caching in ipython, because it consumes memory (as it should):

ipython -pylab -cs 0

plot(x,hold=0)
plot(x,hold=0)
plot(x,hold=0)

# closing the window doesn't help much, and neither does close() or any of the lines below individually
# antidote
plot(hold=0); gcf().clf(); close() # first often doesn't help!
plot(hold=0); gcf().clf(); close() # need all three!
plot(hold=0); gcf().clf(); close()

As stated above, the Windows version apparently starts more aggressive
garbage collection after 3-4 plots. The Ubuntu version reaches my
system memory limit (1 GB) without reclaiming memory - i.e. memory
usage just keeps growing until swap space is used; with a 2e6-element
array it consumes about 100 MB per plot. For 1e6 elements, memory usage
grows in about ten 50 MB steps, and then some garbage collection seems
to happen, although more can be freed with the triple line above.
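For what it's worth, the growth can also be measured from inside the process instead of via the system monitor, e.g. with the standard-library resource module (POSIX only). A minimal sketch, with the plot call replaced by a bare array allocation so it runs without a GUI; the peak_rss_kb helper is just for illustration:

```python
import resource

import numpy as np

def peak_rss_kb():
    # Peak resident set size of this process; Linux reports kilobytes
    # (macOS reports bytes), so treat the units as platform-dependent.
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

before = peak_rss_kb()
x = np.arange(2e6)       # ~16 MB of float64, like the 2e6-element case above
after = peak_rss_kb()
print(after >= before)   # peak RSS never shrinks; the delta shows the cost
```

Comparing the peak before and after each plot() call would show whether the memory really is retained or merely cached by the allocator.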

1/ I am running under VMware, so maybe VMware isn't reporting the
correct physical memory size to Ubuntu/Python - how can I check this?

2/ Possible bug - why doesn't closing the plot window release all the
memory it uses? Especially when this approaches the machine's memory size.

3/ Are there Python/matplotlib memory management tuning parameters I can tweak?
 

Johan Grönqvist

bdb112 wrote:
Summary:

It is not straightforward to avoid memory leaks/consumption in pylab.
If we define
x = arange(1e6)   # adjust size to make the increment visible, yet fast enough to plot
# then repetition of
plot(x, hold=0)   # consumes increasing memory according to the Ubuntu system monitor
[...]

I do not know what closing the window does, but in my programs, running
on Debian and openSUSE, I found that explicitly calling close() solved
my memory leaks with matplotlib.
3/ Are there Python/matplotlib memory management tuning parameters I can tweak?

You could try importing gc and then calling gc.collect() after each call to close().

docs at: http://docs.python.org/library/gc.html
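Note that gc.collect() only reclaims objects trapped in reference cycles; ordinary objects are freed as soon as their reference count hits zero. A minimal sketch of the kind of garbage the collector finds (pure Python, no matplotlib needed; the Node class is just for illustration):

```python
import gc

class Node(object):
    def __init__(self):
        self.other = None

a, b = Node(), Node()
a.other, b.other = b, a   # a cycle: neither refcount can reach zero by itself
del a, b                  # now unreachable, but not yet freed

freed = gc.collect()      # the cycle detector reclaims them
print(freed >= 2)         # at least the two Node objects were collected
```

If close() followed by gc.collect() does free the memory, the figure objects were presumably being kept alive by cycles rather than by a lingering external reference.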

Hope it helps

/ johan
 

Carl Banks

bdb112 wrote:
Summary:

It is not straightforward to avoid memory leaks/consumption in pylab.
[...]
As stated above, the windows version apparently starts more aggressive
garbage collection after 3-4 plots.
[...]
2/ possible bug - why doesn't closing the plot window release all
memory it uses? Especially when this approaches machine memory size.

3/ Are there python/matplotlib memory management tuning parameters I can tweak?


First off, let's clear up a couple of misconceptions.

1. CPython garbage collection is not triggered by physical or OS memory
constraints. Objects are garbage collected in CPython for one reason:
their reference count goes to zero. Python on Windows doesn't "start
more aggressive garbage collection", since there is no aggressive
garbage collection. (There is such a thing as cycle detection; however,
it is also not triggered by physical memory. I doubt cycles are the
issue for you.)

2. References to objects can be kept around for various reasons, thus
keeping their reference counts above zero. So even if you del the
variable the object was bound to, there might be another reference
somewhere, in which case the object won't be garbage collected right
away. The fact that memory wasn't freed right away when you closed the
pylab window doesn't mean there is a bug in Python, or even in pylab.
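That lingering-reference effect is easy to demonstrate with sys.getrefcount; a minimal sketch (the absolute counts can vary between interpreter versions, so only the difference is meaningful):

```python
import sys

data = [0] * 1000
base = sys.getrefcount(data)   # counts 'data' plus getrefcount's own argument

alias = data                   # a second reference, e.g. hidden inside a GUI object
held = sys.getrefcount(data)
print(held - base)             # 1 -- so 'del data' alone would not free the list

del alias                      # only once every reference is gone can it be freed
```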

Somehow, on Ubuntu, pylab is either keeping references to old data
around (what I like to call a memory clog, as opposed to a memory
leak), or is allocating memory in such a way that it can't unwind the
heap and reclaim the memory from a large array that was freed. The
latter seems unlikely to me, because any large memory allocation is
probably mmapped under the covers (which means it gets its own pages
and the OS can move it around).
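When hunting for such a clog, the gc module can also report who still holds a reference to an object; a minimal sketch, with a plain dict standing in for whatever pylab structure might retain the data:

```python
import gc

holder = {}                  # stand-in for a figure/canvas attribute dict
big = list(range(100000))    # stand-in for the large plotted array
holder['data'] = big         # the hidden reference that keeps 'big' alive

# gc.get_referrers lists the containers that still point at 'big'
print(any(r is holder for r in gc.get_referrers(big)))   # True
```

Walking the referrer chain from a big array upward usually reveals which object is keeping it alive.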

If it is a bug, it's probably in pylab, but you never know.


Carl Banks
 
