size of axes in pixels

    > Hello, I have been unable to discover in the docs a method
    > for discovering the exact size in pixels of an axes.

    > The only way I have thought of is to get the size of the
    > canvas via FigureCanvas.get_width_height() and then multiply
    > by the results of axes.get_position(), but really I want to
    > have the exact size in pixels.

In [1]: ax = subplot(111)

In [2]: left, bottom, width, height = ax.bbox.get_bounds()

In [3]: print left, bottom, width, height

80.0 48.0 496.0 384.0
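For what it's worth, in later matplotlib releases the get_bounds() method is gone, but the same pixel geometry is available from the bounds property of the axes bbox. A minimal sketch, assuming a current release (the Agg backend is only used here to keep the snippet headless):

```python
import matplotlib
matplotlib.use("Agg")  # any backend works; Agg keeps this runnable headless
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
# ax.bbox is the axes bounding box in display (pixel) coordinates
left, bottom, width, height = ax.bbox.bounds
print(left, bottom, width, height)
```

ax.get_window_extent() returns the same box as a Bbox object.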

However, this question looks like one that would be better answered by
describing what you are trying to do. There may be a more elegant
solution using some of matplotlib's built-in coordinate systems which
would prevent you from having to do raw pixel calculations, which is
not usually what you want. So you may want to describe your actual
problem rather than your solution <wink>

JDH

Happy to do so, and I'm glad you asked.

I have recently been working on a project where I need to show a
spectrogram of some large data sets (200 million complex samples).
Directly using the specgram function is not a good idea for data this
size, and I have come to the conclusion that the traditional method for
generating spectrograms is inherently flawed. That is, the MATLAB
approach of generating a picture from the psd's and then displaying it
with imshow is a hack.

I arrived at this conclusion through my work-around to generating
spectrograms for these large files. What I did was to choose a number
of "data slice" to use, extract those at regular intervals throughout
the file, and then set overlap=0.

For example, if I wanted to use a 1024-point FFT, and if my axes is
about 500 pixels wide, then I would seek() through the file, reading
1024 contiguous samples, skipping ahead by 1/500th of the total
samples, and reading the next slice. I would end up with 500*1024
points, which I pass to specgram() with overlap=0.
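That seek-and-slice scheme can be sketched like this (a rough sketch, not my actual code; the helper name, the complex64 dtype, and the defaults are my own placeholders):

```python
import os
import numpy as np

def read_slices(path, n_slices=500, nfft=1024, dtype=np.complex64):
    """Read n_slices blocks of nfft contiguous samples, taken at evenly
    spaced offsets through the file, and concatenate them."""
    itemsize = np.dtype(dtype).itemsize
    total = os.path.getsize(path) // itemsize   # samples in the file
    # evenly spaced start offsets, each leaving room for a full block
    starts = np.linspace(0, total - nfft, n_slices).astype(int)
    out = np.empty(n_slices * nfft, dtype=dtype)
    with open(path, "rb") as f:
        for i, start in enumerate(starts):
            f.seek(start * itemsize)
            block = np.frombuffer(f.read(nfft * itemsize), dtype=dtype)
            out[i * nfft:(i + 1) * nfft] = block
    return out
```

The concatenated result is what gets handed to specgram() with NFFT=1024 and zero overlap.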

Now, of course, there is a lot of information that was discarded, but it
made the implementation tractable.

I would argue that the information lost with this method is comparable
to what is lost when the entire data set is processed and an
image-resize algorithm (hardly appropriate for this type of data, IMHO)
then averages away a tremendous amount of the computation that was
performed.

As I started to think about it, I concluded that the same reasoning
applies at the other extreme: for short data sets, it is much more
appropriate to have the overlap increase automatically than to rely on
an image interpolation function.

The case of operating on large data sets then corresponds to a negative
overlap.

I recall one technical review I was in where the presenter was
displaying a spectrogram in MATLAB. He pointed out a visible feature
and then zoomed in on it. It was a large data set, and when it finished
drawing, the feature was no longer visible -- very strange, and
frustrating to the presenter! I then began to wonder about the
appropriateness of treating a spectrogram like a picture.

Not to imply that there wouldn't be anomalies like this with this
"auto-overlap" approach, but certainly it seems (to me) like a more
rational approach to this signal processing operation.

So, I'm hoping to find time on my current project to implement this type
of functionality. In my mind the FFT size would still be selected as a
parameter, so that power-of-2 implementations are used. Granted, there
would still be averaging along the vertical axis, but I propose that
resampling along one axis is better than along two: the number of psd's
performed would correspond exactly to the horizontal dimension of the
drawing area, so no resampling would be required along that axis.

When the axes is resized, psd's would have to be recomputed, but what is
displayed on the screen would correspond more closely to the result of
the transforms performed.

Zooming would also necessitate recomputation of the psd's. My idea for
smooth zooming (dragging with right mouse button in pylab) was to keep
the existing functionality, until the mouse is released, at which time
the psd's would be recomputed, but only for the segment of the data that
corresponds to the visible portion of the horizontal axis. Same thing
for panning around this zoomed image: don't recalculate anything until
the user releases the mouse button.
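A sketch of that defer-until-release behavior using matplotlib's event machinery (the recompute step is a placeholder and the names are mine; a real interactive backend would replace Agg):

```python
import matplotlib
matplotlib.use("Agg")  # stand-in; any interactive backend in practice
import matplotlib.pyplot as plt

fig, ax = plt.subplots()

def recompute_psds(x0, x1):
    """Placeholder: re-run the psd's only over the samples that map
    onto the visible interval [x0, x1]."""
    pass

def on_release(event):
    # the zoom or pan drag is over: recompute for the visible span only
    x0, x1 = ax.get_xlim()
    recompute_psds(x0, x1)
    fig.canvas.draw_idle()

cid = fig.canvas.mpl_connect("button_release_event", on_release)
```

Until the release event fires, the existing image-based pan/zoom would carry on untouched.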

This would obviously be a more complicated implementation, and I'm not
suggesting that the current specgram implementation is useless. This
alternate approach has served me well so far, and being able to extract
the size of the axes will make things more efficient.

Thanks for listening and for the tip.

Best Regards,
Glen Mabey

···

On Tue, Oct 24, 2006 at 08:49:19AM -0500, John Hunter wrote:
