Patch suggestion


I'm a fairly heavy user of matplotlib (to plot results from plasma
physics simulations) and my use requires the display of fairly large
images.

Having done some testing I've discovered (after bypassing anything slow
in the Python code) that for large images, where the image size
approaches the available memory, the main performance bottleneck seems
to be the conversion of the raw data to the _image.Image class. The way
in which the conversion takes place - with data being taken
non-sequentially from many points in a floating point source array and
then converted to a 1-byte integer - is slow, and slower still if
swapping becomes involved.
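To make the two paths concrete, here is a small numpy sketch of the
difference (the grey colormap and the array sizes are illustrative
stand-ins, not matplotlib's actual colormapping code): the slow path
maps every float through a colormap to RGBA bytes, while the proposed
path simply copies an already-built 4-byte-per-pixel buffer
sequentially.

```python
import numpy as np

h, w = 64, 64                 # small stand-in for a 1024x2048 array
data = np.random.rand(h, w)   # floating point source array

# Slow path: map each float to a 1-byte value per channel, gathering
# from the float array (a simple grey ramp stands in for a colormap).
grey = (data * 255).astype(np.uint8)
rgba_from_float = np.dstack([grey, grey, grey,
                             np.full((h, w), 255, np.uint8)])

# Proposed path: the producer (e.g. Fortran code) emits the RGBA bytes
# directly, so the consumer only performs one sequential copy.
buf = rgba_from_float.tobytes()                       # contiguous bytes
rgba_from_buffer = np.frombuffer(buf, np.uint8).reshape(h, w, 4)

assert (rgba_from_buffer == rgba_from_float).all()
```

The byte buffer is also a quarter the size of a float64 source array
per channel, which matters once the data no longer fits in memory.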

To overcome this problem I suggest implementing C++ code to allow
creation of the image from a buffer (with each RGBA pixel as 4 bytes)
rather than from a floating point array. Where image data is being
generated elsewhere (in my case in Fortran code) it's trivial to output
in a different format, and doing so means that the input data can be
significantly smaller and that the source array is accessed
sequentially (a compiler is also likely to optimise a copy of this
data more effectively). The image can then be scaled and overplotted
like any existing image.
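For the Python side, something along these lines is what I have in
mind; the name image_from_buffer and its signature are hypothetical
(the real entry point would sit in _image.cpp), but it shows the
intended contract: a flat 4-byte-per-pixel RGBA buffer plus its
dimensions, validated and reshaped without touching any floats.

```python
import numpy as np

def image_from_buffer(buf, width, height):
    """Hypothetical wrapper: build an RGBA image straight from a
    4-byte-per-pixel buffer, with no float conversion involved."""
    arr = np.frombuffer(buf, dtype=np.uint8)
    if arr.size != width * height * 4:
        raise ValueError("buffer size does not match width*height*4")
    return arr.reshape(height, width, 4)

# A producer (Fortran, in my case) writes the RGBA bytes directly:
w, h = 8, 4
buf = bytes([255, 0, 0, 255]) * (w * h)   # solid red test image
img = image_from_buffer(buf, w, h)
```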

I've attempted to implement this code myself (see attached patch to
src/_image.cpp) but I'm not a regular C++ or even C programmer, so it's
fairly likely there are memory leaks in the code. For a 1024x2048
array using the GTKAgg backend and with plenty of memory free, this
change reduces the time taken by show() from >4.6s to <0.7s; if memory
is short and swapping becomes involved, the improvement is much more
noticeable. I haven't written any decent Python wrapping code yet, but
would be happy to do so if someone familiar with C++ could tidy up my
patch.
Hope this is useful to others,

Nicholas Young

patch (2.47 KB)