Looking for memory tips for interactive use

I'm writing batch plotting code in the "traditional" sloppy
way: typing commands at the ipython prompt until I get what
I want, then stringing the lines together in a file. Then I
put the whole thing in a loop and use run from ipython. Now
I can batch-process a bunch of .csv files into nice plots.
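
Schematically, the loop looks something like this
(simplified; the file pattern and the plotting body are
stand-ins for the real thing):

import glob
import pylab

pylab.ioff()                        # no interactive drawing
for name in glob.glob('*.csv'):
    fig = pylab.figure()
    # ... load the data, draw several subplots and labels ...
    pylab.savefig(name.replace('.csv', '.png'))
    pylab.close('all')              # supposed to free the figure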

I have two problems though:

1. Memory use in python increases by about 5-10 MB/sec
during processing. I call pylab.ioff() and put a
pylab.close('all') in the loop to try to close the figures
and release memory, yet processing 24 files leaves ~190 MB
in use, and it keeps increasing when I run again. I'm
basically drawing the figure with several subplots and
labels and then using savefig() to save it to a .png file.
Is there some way to release memory explicitly? I'm using
WinXP and the TkAgg backend. I've tried things like gca()
and clf() at the beginning of the script to try to reuse
the canvas, but it's not clear if it helps. Sometimes if I
wait long enough the memory use goes down, so I suspect
it's not a memory leak but a garbage-collection problem.
Unfortunately the wait can be very long. General as well as
specific tips are welcome.

2. If I use show() or ion(), the new plot window pops up
wherever it likes, usually on top of my other windows. Is
there a way to control this better? This may be a FAQ, but
I didn't see it. In environments like IDL there is a
"window" command for this. I'm curious what Matlab does and
whether there is a platform-independent pylab equivalent.

-- David


D Brown wrote:

> 1. Memory use in python increases by about 5-10 MB/sec
> during processing. [...] Is there some way to release
> memory explicitly? I'm using WinXP and the TkAgg backend.
> [...] Sometimes if I wait long enough the memory use goes
> down, so I suspect it's not a memory leak but a
> garbage-collection problem.

You can try:

import gc
gc.collect()

This forces the garbage collector to kick in, which sometimes helps.
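
For example, you might call it once per file at the bottom
of your loop (plot_one_file here is just a placeholder for
your actual plotting and savefig() code):

import gc
import glob
import pylab

for name in glob.glob('*.csv'):
    plot_one_file(name)   # placeholder: draw subplots, savefig()
    pylab.close('all')    # drop matplotlib's handle on the figure
    gc.collect()          # then force a collection pass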

Best,

f