Memory usage with matplotlib

The question is:

    > why isn't the memory collected after closing the figure?
    > The real program I developed builds 5~15 graphic windows
    > and the required memory is very large.

It is, or it should be. Look at the module _pylab_helpers,
particularly this function which is called when a window is destroyed

    def destroy(num):
        if not Gcf.has_fignum(num): return
        figManager = Gcf.figs[num]

        oldQue = Gcf._activeQue[:]
        Gcf._activeQue = []
        for f in oldQue:
            if f != figManager: Gcf._activeQue.append(f)

        del Gcf.figs[num]
        #print len(Gcf.figs.keys()), len(Gcf._activeQue)
        figManager.destroy()
        gc.collect()

i.e., we make an explicit call to the garbage collector when a figure
is destroyed from within pylab. I'm not sure why you aren't seeing
the memory freed up; exactly when and where Python's garbage
collector reclaims memory is a bit of a mystery.
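One reason the explicit gc.collect() call is needed at all: figure
objects typically sit in reference cycles (e.g. a figure and its
canvas holding references to each other), and CPython's reference
counting alone cannot free a cycle. Here is a toy sketch of that
situation; the Figure/Canvas class names are hypothetical stand-ins,
not matplotlib's actual classes:

```python
import gc

class Canvas:
    def __init__(self, figure):
        self.figure = figure          # back-reference to the figure

class Figure:
    def __init__(self):
        self.canvas = Canvas(self)    # forward reference: a cycle

gc.disable()        # simulate the cyclic collector not having run yet
fig = Figure()
del fig             # refcounts never hit zero: the cycle keeps both alive
unreachable = gc.collect()   # an explicit pass frees the cycle
gc.enable()
```

Until gc.collect() runs (explicitly, or whenever the collector decides
to on its own), the cycle lingers, which is why destroy() forces a
collection pass rather than waiting.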

I tend to rely on a script called unit/memleak_hawaii3.py, which is
in matplotlib CVS, to test for memory leaks. Unfortunately it works
only on linux and friends because it uses ps to collect memory usage.
Typically we like to see total memory asymptote out at around 10 to 30
figures and cease climbing. If it climbs monotonically with figure
number, it's indicative of a leak. For reasons beyond me, the memory
consumption doesn't stabilize for the first N figures, where N is an
arbitrary but smallish number. I don't think this has to do with
matplotlib as much as with the python garbage collector.
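For anyone who wants to roll their own check, the ps-based
measurement that script relies on boils down to something like the
sketch below (report_memory is a hypothetical helper written for this
post, not the script itself; linux/unix only, as noted above):

```python
import os

def report_memory(pid):
    # Hypothetical helper: ask ps for the resident set size (in KB)
    # of the given process. "rss=" suppresses the ps header line.
    with os.popen('ps -p %d -o rss=' % pid) as p:
        return int(p.read())

# Sample memory for the current process; in a leak test you would
# call this after creating/closing each figure and watch whether the
# numbers keep climbing or level off.
rss_kb = report_memory(os.getpid())
print(rss_kb)
```

Plotting rss_kb against figure count makes the asymptote (or the
monotonic climb that indicates a leak) easy to spot.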

If you get a chance to test this script on linux, I would be
interested to hear what you find. If someone else knows more about
python's gc, please chime in.

JDH