I'm using matplotlib for various tasks beautifully... but on some occasions
I have to visualize large datasets (in the range of 10M data points, using
imshow or regular plots), and the system starts to choke a bit at that point.
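For context, the crude workaround I use today is decimating the data before handing it to plot(). A min/max decimation sketch (the function and names here are my own, just to illustrate; plain slicing like y[::factor] would lose spikes):

```python
import numpy as np

def decimate(y, factor):
    """Min/max decimation: keep the extremes of each bin of `factor`
    samples, so spikes survive the downsampling."""
    n = (len(y) // factor) * factor      # drop the ragged tail, if any
    blocks = y[:n].reshape(-1, factor)
    lo = blocks.min(axis=1)
    hi = blocks.max(axis=1)
    # interleave min and max so the plotted line still spans each bin
    out = np.empty(2 * len(blocks), dtype=y.dtype)
    out[0::2] = lo
    out[1::2] = hi
    return out

y = np.random.randn(10_000_000)
small = decimate(y, 1000)   # 10M points down to 20k, extremes preserved
```

This keeps the visual envelope of the signal intact, but it is a per-plot hack rather than something the renderer does for me.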
I'd like to stay consistent and not switch to different tools for basically
similar tasks... so I'd appreciate some pointers regarding rendering
performance, as I'd be interested in getting involved in development if there
is something to be done.
To the active developers: what's the general feeling, does matplotlib have
room to spare in its rendering performance, or is it pretty much tied to the
speed of Agg right now?
Is there something to gain from using the multiprocessing module, now
included by default in Python 2.6? Or even from going as far as something
like pyGPU for fast vectorized operations?
I've seen previous discussions around here about OpenGL as a possible backend
in some form. Would it really stand up against the current backends? Are
there any clues about that right now?
Thanks for any input!