can't find pygtk


------------------------------------------------------------------------------
Everyone hates slow websites. So do we.
Make your web apps faster with AppDynamics
Download AppDynamics Lite for free today:
http://p.sf.net/sfu/appdyn_sfd2d_oct
_______________________________________________
Matplotlib-users mailing list
Matplotlib-users@lists.sourceforge.net
matplotlib-users List Signup and Options


I am using matplotlib 1.1.0 that came with the current EPD, which in
turn comes without pygtk.

However, the linux system I am using this on (CentOS6) has pygtk installed:

/usr/lib64/pygtk/2.0

Is there any chance I can marry those two? Currently, when I try to
matplotlib.use('gtk')
I get an error
ImportError("Gtk* backend requires pygtk to be installed.")

Or do I need to recompile it into this matplotlib?

Yes, you need to recompile. It will need to compile _backend_gdk.c,
which needs to be able to find pygtk.h.

The plain (non-agg) gtk backend is basically unmaintained and its use is
discouraged.
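As a quick sanity check, one can test from the EPD interpreter whether pygtk is importable at all; if it isn't, no backend setting will help. A minimal sketch (the `has_module` helper is our own name, not an mpl or EPD API; note the GTK backends also need the compiled _backend_gdk extension, which this does not test):

```python
import importlib

def has_module(name):
    """Return True if `name` can be imported from this interpreter."""
    try:
        importlib.import_module(name)
        return True
    except ImportError:
        return False

# The system pygtk lives in the system python's site-packages; the EPD
# python has its own sys.path, so this will typically print False there.
print(has_module("pygtk"))
```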

And the GTKAgg backend would have the same constraints as my current
WxAgg, correct?

Correct. I take it you have tried with WxAgg, and run into the limitation?

Are you sure there isn't a reasonably easy way to do what
you need with qt4agg, for example? How do you want to visualize your
million points?

Obviously there isn't room to display 1 million points, so I would
expect the backend to do averaging/rebinning/down-sampling of my data
depending on the current zoom level; when I zoom in, it should repeat
the averaging/rebinning/downsampling, optimized for the currently
displayed data range. I'm aware of, and very willing to accept, the
delays this implies for display, but it would still be so much more
comfortable than writing my own downsampling routines.
I believe many users would settle for even the simplest averaging
routines, if only it were possible to display large data sets at all.
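For what it's worth, the zoom-dependent rebinning described above can be done in user code today by re-downsampling inside the axes' 'xlim_changed' callback. A minimal sketch, assuming sorted x data; the function name, MAX_POINTS, and the block-averaging scheme are ours, not a matplotlib feature:

```python
import numpy as np

MAX_POINTS = 2000  # points actually handed to the renderer

def downsample(x, y, lo, hi, max_points=MAX_POINTS):
    """Slice the visible range [lo, hi] out of (x, y), assuming x is
    sorted, and block-average it down to at most `max_points` points."""
    i0, i1 = np.searchsorted(x, [lo, hi])
    xs, ys = x[i0:i1], y[i0:i1]
    if len(xs) <= max_points:
        return xs, ys
    step = len(xs) // max_points          # samples per output point
    n = step * max_points                 # trim to a whole number of bins
    return (xs[:n].reshape(-1, step).mean(axis=1),
            ys[:n].reshape(-1, step).mean(axis=1))

# Hooked up to a plot, this re-downsamples on every zoom/pan, e.g.:
#   line, = ax.plot(*downsample(x, y, x[0], x[-1]))
#   ax.callbacks.connect('xlim_changed',
#       lambda ax: line.set_data(*downsample(x, y, *ax.get_xlim())))
```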

Mpl does some of this, for some plot types, at two levels. One is path simplification, in which points on a line that don't change the way the line would be displayed are deleted before being fed to agg. The second is slicing in the x domain when plotting a small range of a long time series. Both of these things are quite general and impose little or no penalty.
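For reference, the path simplification Eric describes is controlled through rcParams. A sketch of tuning it (it is on by default for solid lines; the threshold knob may not be available in every version):

```python
import matplotlib

# Drop points that would not visibly change a line before handing the
# path to Agg (on by default for solid lines).
matplotlib.rcParams['path.simplify'] = True
# How aggressively to simplify, in display units; higher is faster but
# may visibly alter the plot.
matplotlib.rcParams['path.simplify_threshold'] = 0.1
```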

We are certainly open to suggestions for additional ways of handling large data sets, but finding methods that are general, fast, and guaranteed not to change the plot in a visually detectable way is not trivial.

I suspect a solution might be to provide a hook for a data subsetting callable, which could be supplied via a kwarg. This would allow the user to set the resolution versus speed tradeoff, to choose averaging versus subsampling, etc. mpl might then provide a few such callables, probably as classes with a __call__ method, and the user would be free to provide a custom callable optimized for a particular type of data and plot.
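A user-side sketch of what such a subsetting callable might look like; 'MeanDecimator' is a hypothetical name, and the proposed kwarg does not exist in mpl:

```python
import numpy as np

class MeanDecimator:
    """A subsetting callable of the kind proposed: reduces an array
    pair (x, y) to at most `max_points` points by block averaging."""

    def __init__(self, max_points):
        self.max_points = max_points

    def __call__(self, x, y):
        if len(x) <= self.max_points:
            return x, y
        step = len(x) // self.max_points
        n = step * self.max_points        # drop the ragged tail
        return (x[:n].reshape(-1, step).mean(axis=1),
                y[:n].reshape(-1, step).mean(axis=1))

# Usable today:  ax.plot(*MeanDecimator(1000)(x, y))
# As proposed:   ax.plot(x, y, subset=MeanDecimator(1000))  # hypothetical kwarg
```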

Eric

···

On 2012/10/18 8:54 AM, Michael Aye wrote:

On 2012-10-18 05:58:46 +0000, Eric Firing said:

On 2012/10/17 6:13 PM, Michael Aye wrote:

Michael

Eric

Thanks for your help!

Michael

PS.: The reason I want to try GTK is actually that there are reports
of it being able to cope with 1 million data points, something all the
other Agg-related backends apparently cannot do. (My linux server is
definitely not the limit ;-))