> I think the easiest solution might be to use nans in the
> transform module rather than raise an exception when
> transforming an array. And the clients of the transform, eg
> the legend auto-scaling code, can ignore the nans when
> deciding where to place the legend.
I just modified the transforms numerix_x_y function to insert nan
instead of raising when the transformation of an element of the
input sequence fails:
In : x, y = randn(2,10)
In : xt, yt = trans.numerix_x_y(x,y)
In : y
Out: [-0.09215005, -0.08206097,  0.92980313, -0.22293784,  0.83486353,
      -1.66880057, -0.1844854 ,  0.3235668 , -0.08853855,]
In : yt
Out: [ nan, nan, 428.96553625,
      nan, nan, 384.95653999, nan,]
In : nx.isnan(yt)
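Client code like the legend auto-scaling mentioned above could then just mask out the failed points instead of special-casing exceptions. A minimal sketch in plain numpy (the post uses the older numerix layer; `np.isnan` plays the role of `nx.isnan` here, and `finite_limits` is a hypothetical helper, not a matplotlib function):

```python
import numpy as np

def finite_limits(vals):
    """Return (min, max) of vals, ignoring nan entries from failed transforms."""
    good = vals[~np.isnan(vals)]
    if len(good) == 0:
        return None  # every point failed to transform
    return float(good.min()), float(good.max())

yt = np.array([np.nan, np.nan, 428.96553625, np.nan, 384.95653999])
print(finite_limits(yt))  # -> (384.95653999, 428.96553625)
```

The caller never sees an exception; a fully-failed sequence just yields no limits.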
Do people think this is the desired behavior? It will probably make
these functions easier to use for backend writers, who currently have
to fall back on transforming individual elements in try/except blocks:
def drawone(x, y, skip):
    try:
        if skip: raise ValueError
        xt, yt = transform.xy_tup((x, y))
        ret = '%g %g %c' % (xt, yt, drawone.state)
    except ValueError:
        drawone.state = 'm'   # failed point: restart the path with a moveto
    else:
        drawone.state = 'l'   # good point: continue the path with a lineto
        return ret
which is probably a good bit slower.
Should all the transform methods have these semantics (nan on fail
rather than raise) or should the methods that transform single points
raise and the methods that transform sequences insert nans when
individual points fail?
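For comparison, here is a toy sketch of that hybrid semantics (scalar methods raise, sequence methods insert nan). It uses a hypothetical log10 "transform" in plain numpy rather than the real matplotlib transform classes, purely to illustrate the two flavors:

```python
import numpy as np

def transform_point(v):
    """Scalar flavor: raise when a single value cannot be transformed."""
    if v <= 0:
        raise ValueError('cannot take log10 of %r' % v)
    return float(np.log10(v))

def transform_seq(vals):
    """Sequence flavor: insert nan where individual points fail."""
    out = np.empty(len(vals))
    for i, v in enumerate(vals):
        try:
            out[i] = transform_point(v)
        except ValueError:
            out[i] = np.nan
    return out

print(transform_seq([100.0, -1.0, 10.0]))  # 100 -> 2.0, -1 -> nan, 10 -> 1.0
```

Callers that loop over single points still get the exception; vectorized callers get nans they can mask out afterwards.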
I used the std C++ numeric_limits quiet_NaN,
which worked with the MPL_isnan test from numerix on linux with
Numeric, numarray, and numpy. It would probably be worth testing on
other platforms:
from pylab import subplot, nx
ax = subplot(111)
trans = ax.transData
x,y = nx.mlab.randn(2,10)
xt,yt = trans.numerix_x_y(x,y)
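To check the result of a snippet like the one above without eyeballing the arrays, one can assert that the failed entries really are nan and the rest are finite. A standalone sketch in plain numpy (no matplotlib required; the yt values are made up to stand in for transformed data):

```python
import numpy as np

# stand-in for a transformed array containing failed points
yt = np.array([np.nan, 428.96553625, np.nan, 384.95653999])

bad = np.isnan(yt)
# nan != nan by definition, so isnan (MPL_isnan at the C level) is the
# reliable way to find the failed points, not an equality comparison
assert not np.any(yt[bad] == yt[bad])
assert np.all(np.isfinite(yt[~bad]))
print('%d of %d points failed to transform' % (bad.sum(), len(yt)))
```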