In much the same way that Basemap can take an image in a Plate Carree map
projection (e.g. Blue Marble) and transform it onto another projection in a
non-affine way, I would like to be able to apply a non-affine transformation
to an image, using only the proper matplotlib Transform framework.

To me, this means that I should not have to pre-compute the projected image
before adding it to the axes; instead, I should be able to pass in the source
image and have the Transform stack take care of transforming (warping) it for
me (just as I can with a Path).

As far as I can tell, there is no current matplotlib functionality to do this
(as I understand it, some backends can cope with affine image transformations,
but this has not been plumbed in in the same style as the Transform handling
of paths, and is instead done in the Image classes themselves).

(Note: I am aware that there is some code to do affine transforms in certain
backends -
http://matplotlib.sourceforge.net/examples/api/demo_affine_image.html
- which is currently broken [I have a fix for this], but it doesn't fit into
the Transform framework at present.)

I have code which will do the actual warping for my particular case, and all
I need to do is hook it in nicely...

I was thinking of adding a method to the Transform class which implements
this functionality; pseudo-code stubs are included:

class Transform:
    ...
    def transform_image(self, image):
        return self.transform_image_affine(
            self.transform_image_non_affine(image))

    def transform_image_non_affine(self, image):
        if not self.is_affine:
            raise NotImplementedError('This is the hard part.')
        return image

    ...

    def transform_image_affine(self, image):
        # Could easily handle scales & translations (by changing the
        # extent), but not rotations...
        raise NotImplementedError('Need to do this. But rule out '
                                  'rotations completely.')
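To make the scale-and-translation case of transform_image_affine concrete, here is a minimal runnable sketch of the idea mentioned in the comment above: rather than resampling any pixels, map the image's extent corners through the transform and hand the unchanged pixel array back with a new extent. The helper name transform_extent_affine is illustrative, not existing matplotlib API.

```python
# Sketch: handle scale + translation by transforming the extent only.
# (A rotation would mix the x and y axes and cannot be expressed as a
# mere extent change, which is why the stub above rules it out.)
import numpy as np
from matplotlib.transforms import Affine2D

def transform_extent_affine(extent, transform):
    """Map an image extent (x0, x1, y0, y1) through an axis-aligned
    affine transform (scale + translation only)."""
    x0, x1, y0, y1 = extent
    corners = np.array([[x0, y0], [x1, y1]], dtype=float)
    (nx0, ny0), (nx1, ny1) = transform.transform(corners)
    return (nx0, nx1, ny0, ny1)

# Example: scale x by 2, then shift y by 10.
t = Affine2D().scale(2.0, 1.0).translate(0.0, 10.0)
print(transform_extent_affine((0, 1, 0, 1), t))  # (0.0, 2.0, 10.0, 11.0)
```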

This could then be used by the Image artist to do something like:

class Image(Artist, ...):
    ...
    def draw(self, renderer, *args, **kwargs):
        transform = self.get_transform()
        timg = transform.transform_image_non_affine(self)
        affine = transform.get_affine()
        ...
        renderer.draw_image(timg, ..., affine)

And the backends could implement:

class Renderer*:
    ...
    def draw_image(self, ..., img, ..., transform=None):
        # transform must be an affine transform
        if transform.is_affine and i_can_handle_affines:
            ...  # convert the Transform into the backend's transform form
        else:
            timage = transform.transform_image(img)

The warping mechanism itself would be fairly simple: assign coordinate values
to each pixel in the source cs (coordinate system), transform those points
into the target cs, and identify their bounding box. That bbox is then
treated as the bbox of the target (warped) image, which is given an arbitrary
resolution. Finally, the target image's pixel coordinates are computed and
their associated pixel values are calculated by interpolating from the source
image (using the target cs pixel values).
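The steps above can be sketched roughly as follows. My actual code uses scipy.interpolate.NearestNDInterpolator (as mentioned below); here a brute-force nearest-neighbour lookup stands in so the example needs only numpy, and the toy transform and function names are purely illustrative.

```python
# Rough sketch of the forward-mapping warp: source pixel coords ->
# target cs -> bbox -> regular target grid -> interpolate from source.
import numpy as np

def warp_image(src, transform, out_shape):
    """src: (H, W) array; transform: maps (N, 2) source-cs points to
    (N, 2) target-cs points; out_shape: (rows, cols) of the result."""
    h, w = src.shape
    # 1. Assign coordinate values to each pixel in the source cs.
    ys, xs = np.mgrid[0:h, 0:w]
    src_pts = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
    # 2. Transform those points into the target cs.
    tgt_pts = transform(src_pts)
    # 3. Their bounding box becomes the bbox of the warped image,
    #    sampled at an arbitrary resolution (out_shape).
    (x0, y0), (x1, y1) = tgt_pts.min(axis=0), tgt_pts.max(axis=0)
    rows, cols = out_shape
    gy, gx = np.mgrid[0:rows, 0:cols]
    grid = np.column_stack([
        x0 + (x1 - x0) * gx.ravel() / (cols - 1),
        y0 + (y1 - y0) * gy.ravel() / (rows - 1),
    ])
    # 4. Fill each target pixel from its nearest transformed source
    #    pixel (O(N*M) search -- fine for a sketch, not for real use).
    d2 = ((grid[:, None, :] - tgt_pts[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.argmin(axis=1)
    return src.ravel()[nearest].reshape(out_shape)

# Toy non-affine transform: x stretched by a factor that depends on y.
def shear_like(pts):
    out = pts.copy()
    out[:, 0] = pts[:, 0] * (1.0 + 0.1 * pts[:, 1])
    return out

src = np.arange(16.0).reshape(4, 4)
warped = warp_image(src, shear_like, (6, 6))
```

With an identity transform and matching output shape, the target grid lands exactly on the source pixels and the image passes through unchanged, which is a handy sanity check.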

As mentioned, I have already written the image warping code successfully (for
my higher-dimensional coordinate system case, using
scipy.interpolate.NearestNDInterpolator), so the main motivations for this
mail are:

* To get a feel for whether anyone else would find this functionality
useful. Where else could it be used, and in what ways?

* To get feedback on the proposed change to the Transform class, whether
such a change would be acceptable, and what pitfalls lie ahead.

* To hear alternative approaches to solving the same problem.

* To make sure I haven't missed a concept that already exists in the Image
module (there are 6 different "image" classes in there, 4 of which are
undocumented).

* To find out if anyone else wants to collaborate on making the required
change.

Thanks in advance for your time,