Memory leak with pcolor

I can confirm that memory leaks are indeed not a problem with a CVS build
on Debian. I can't seem to restore the pre-built Debian stable 0.82
package, so I haven't tested that version.

However, the problem remains that pcolor is too slow to use
interactively on a plot with a half-dozen 200x608 warped grids, even
with shading='flat' and no antialiasing.

Would you consider accepting a 'structured grid' as a primitive patch
type? What consequences would this have for your various backends?
Presumably someone will want triangular meshes as well if they are
doing serious FEM work.

I created a prototype app using OpenGL and quad strips. The performance
with this is acceptable, but I need a lot more 2D graphing features.
I would much rather make an existing product better than rewrite from
scratch.

In my particular case the grid warping function can be expressed
analytically. Is it reasonable to consider warping a 2D image directly
using an AGG filter function? Could this be embedded in an existing
matplotlib graph, above some objects and below others?
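
To make the question concrete, here is a rough sketch of the kind of
prefiltering I have in mind -- plain nearest-neighbour array lookup rather
than AGG, and the warp function and grid sizes below are placeholders, not
my real data:

  from pylab import *

  # placeholder analytic warp: maps output (u, v) back to source coordinates
  def warp(u, v):
      return u, v + 0.1*sin(2*pi*u)

  img = rand(200, 608)                      # stand-in for one data grid
  ny, nx = img.shape

  # evaluate the warp on a regular output grid, then pull pixels across
  # by nearest-neighbour lookup
  u, v = meshgrid(arange(nx)/float(nx), arange(ny)/float(ny))
  su, sv = warp(u, v)
  ix = clip((su*nx).astype(int), 0, nx - 1)
  iy = clip((sv*ny).astype(int), 0, ny - 1)
  warped = img[iy, ix]

  imshow(warped, origin='lower')            # draw as an ordinary image

Done this way the warp is paid once per data set, and redraws only have to
blit an ordinary image.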

Thanks in advance,

Paul Kienzle
pkienzle@...582...


On Tue, Nov 01, 2005 at 09:41:35PM -0600, John Hunter wrote:

    > Thanks for your advice with installing matplotlib on
    > cygwin. I downloaded and installed the windows binaries
    > and it worked. Anyway, the reason that I didn't want
    > to use binaries in the first place was because I wanted
    > to modify the matplotlib source code. But it seems like
    > even with the binaries, if I change the source code
    > then it will still affect the operation of the program
    > when I run it, which is what I want.

    > In particular, I am looking to speed up the pcolor()
    > function because it runs exceedingly slow with large
    > mesh sizes. I believe the reason it is running slow is
    > because of a memory leak. When I do the following:

    > from pylab import *
    > n = 200
    > [x,y] = meshgrid(arange(n+1)*1./n, arange(n+1)*1./n)
    > z = sin(x**2 + y**2)

    > and then do

    > pcolor(x,y,z)

    > repeatedly, the memory usage increases by about 15 MB
    > each time, and it runs progressively slower.

At least with matplotlib CVS (and I don't think it's a CVS vs 0.84
issue) the memory consumption is rock solid with your example (see
below for my test script). What is your default "hold" setting in rc?
If True, you will be overlaying plots and will get the behavior you
describe. In the example below, I make sure to "close" the figure
each time -- a plain clear with clf should suffice though. My guess
is that you are repeatedly calling pcolor with hold : True and are
simply overlaying umpteen pcolors. To test for this, print the length
of the collections list:

  ax = gca()
  print len(ax.collections)

If this length is growing, you've found your problem. A simple

  pcolor(x,y,z,hold=False)

should suffice.

You can also change the default hold setting in your config file
http://matplotlib.sf.net/matplotlibrc
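
A minimal test along those lines -- a sketch of the approach rather than
the exact script, creating and closing the figure on every pass -- would be:

  from pylab import *

  n = 200
  x, y = meshgrid(arange(n+1)*1./n, arange(n+1)*1./n)
  z = sin(x**2 + y**2)

  for i in range(20):
      figure()          # fresh figure each pass
      pcolor(x, y, z)
      draw()            # force a render
      close()           # release the figure so memory stays flat

Watching the process size while this runs, it should stay flat; leave hold
on and skip the close and it will climb as you describe.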

JDH

I posted a CVS patch to the devel list a while ago that implemented an
alternate image class able to plot data on a stretched (rectangular) grid
much faster than pcolor can. The patch isn't entirely complete, as I'm
unsure what the user interface should be, but it is usable, adds no extra
dependencies, and does what you seem to want, with fast zooming and
panning once the data is loaded. I've tested it with data up to 2048x2048
in size and it's quite usable on my laptop.
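
The core idea can be sketched in a few lines of pylab -- this is only an
illustration of the data flow, not the patch itself, and stretched_image
is a made-up helper name:

  from pylab import *

  def stretched_image(x, y, z, width=400, height=400):
      # x, y: monotonically increasing cell edges; z: (len(y)-1, len(x)-1) values
      px = linspace(x[0], x[-1], width)       # regular pixel positions
      py = linspace(y[0], y[-1], height)
      ix = clip(searchsorted(x, px) - 1, 0, len(x) - 2)
      iy = clip(searchsorted(y, py) - 1, 0, len(y) - 2)
      return z[iy][:, ix]                     # plain cell lookup, no quads drawn

  x = cumsum(rand(201) + 0.1)                 # uneven but increasing edges
  y = cumsum(rand(201) + 0.1)
  z = rand(200, 200)
  imshow(stretched_image(x, y, z), origin='lower', interpolation='nearest')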

If you are interested I can give you a patch against the current CVS.

Nich


On Thu, 2005-11-03 at 11:10 -0500, Paul Kienzle wrote:

    > However, the problem remains that pcolor is too slow to use
    > interactively on a plot with a half-dozen 200x608 warped grids, even
    > with shading='flat' and no antialiasing.