backend_gtk and expose_event

    > I think the easiest solution might be to use nans in the
    > transform module rather than raise an exception when
    > transforming an array. And the clients of the transform, eg
    > the legend auto-scaling code, can ignore the nans when
    > deciding where to place the legend.

I just modified the transforms module's numerix_x_y function to
insert nan, instead of raising, when the transformation of an element
of the sequence fails:

    In [8]: x, y = randn(2,10)

    In [9]: xt, yt = trans.numerix_x_y(x,y)

    In [11]: y
    Out[11]: [-0.09215005,-0.08206097, 0.92980313,-0.22293784, 0.83486353,
    -1.66880057,-0.1844854 , 0.3235668 ,-0.08853855,]

    In [12]: yt
    Out[12]: [ nan, nan,428.96553625,
    nan, nan,384.95653999, nan,]

    In [13]: nx.isnan(yt)
    Out[13]: [1,1,0,1,0,0,1,1,0,1,]

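With these semantics, a client such as the legend auto-scaling code can simply skip the nans when computing data limits. A minimal sketch (the finite_limits helper is hypothetical, not matplotlib's actual legend code):

```python
def finite_limits(values):
    """Return (min, max) over the non-nan entries of values,
    or None if every entry is nan."""
    finite = [v for v in values if v == v]  # nan != nan (IEEE 754)
    if not finite:
        return None
    return min(finite), max(finite)

yt = [float('nan'), 428.9, float('nan'), 384.9]
lims = finite_limits(yt)  # (384.9, 428.9)
```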
Do people think this is the desired behavior? It would probably make
these functions easier to use for backend writers, who currently have
to fall back on transforming individual elements in try/except blocks:

        def drawone(x, y, skip):
            try:
                if skip: raise ValueError
                xt, yt = transform.xy_tup((x, y))
                ret = '%g %g %c' % (xt, yt, drawone.state)
            except ValueError:
                drawone.state = 'm'
            else:
                drawone.state = 'l'
                return ret

which is probably a good bit slower.
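With nan-on-failure semantics, a backend could instead transform the whole sequence in one call and branch on isnan afterwards. A sketch of what that might look like (using numpy in place of numerix; draw_lines is a hypothetical name, not an existing backend method):

```python
import numpy as np

def draw_lines(xt, yt):
    """Emit 'x y m' / 'x y l' path commands from pre-transformed
    coordinates, restarting the path (moveto) after any point whose
    transformation produced nan."""
    cmds = []
    state = 'm'
    bad = np.isnan(xt) | np.isnan(yt)  # one vectorized check, no try/except
    for x, y, skip in zip(xt, yt, bad):
        if skip:
            state = 'm'  # next good point starts a new segment
            continue
        cmds.append('%g %g %s' % (x, y, state))
        state = 'l'
    return cmds
```

For example, draw_lines(np.array([0.0, np.nan, 2.0]), np.array([0.0, 1.0, 2.0])) yields ['0 0 m', '2 2 m']: the nan point is dropped and the following point starts a new segment.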

Should all the transform methods have these semantics (nan on failure
rather than raise), or should the methods that transform single points
raise and the methods that transform sequences insert nans when
individual points fail?

I used the standard C++ numeric_limits quiet_NaN, which worked with
the MPL_isnan test from numerix on Linux with Numeric, numarray and
numpy. It would probably be worth testing on other platforms:

  from pylab import subplot, nx
  ax = subplot(111)
  ax.semilogy(nx.arange(10), nx.mlab.rand(10))
  trans = ax.transData
  x,y = nx.mlab.randn(2,10)
  xt,yt = trans.numerix_x_y(x,y)
  print yt
  print nx.isnan(yt)
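For a quick platform-independent sanity check of the quiet-NaN handling itself, one could also rebuild the usual IEEE 754 quiet-NaN bit pattern by hand and confirm that isnan recognizes it. A sketch (modern numpy stands in for numerix here; the 0x7ff8000000000000 pattern is the conventional double quiet NaN, which is what numeric_limits quiet_NaN typically produces, though the exact bits are implementation-defined):

```python
import struct
import numpy as np

# A quiet NaN has all exponent bits set and a nonzero mantissa;
# 0x7ff8000000000000 is the conventional double encoding.
qnan = struct.unpack('<d', struct.pack('<Q', 0x7ff8000000000000))[0]

assert qnan != qnan                         # IEEE 754: nan is unequal to itself
assert np.isnan(qnan)                       # scalar check
mask = np.isnan(np.array([qnan, 1.0]))      # vectorized check
assert mask[0] and not mask[1]
```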