> I think there are two issues here. The postscript
> interpreter doesn't like to stroke long paths. If I break
> the path into 50-point chunks instead of 1000-point
> chunks, I see an improvement.
That's odd, because we have never seen an issue stroking long paths
before, or at least no one has ever reported one.
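For anyone who wants to experiment with the chunking idea, here is a
sketch (this is not the actual backend_ps code; the helper name and
chunk size are made up). Successive chunks share an endpoint so the
stroked segments join up:

```python
def chunked(points, size=50):
    """Split a vertex list into consecutive chunks of at most
    size+1 points, where each chunk repeats the last point of the
    previous one so the stroked pieces connect seamlessly."""
    chunks = []
    i = 0
    while i < len(points) - 1:
        chunks.append(points[i:i + size + 1])
        i += size
    return chunks

pts = list(range(101))            # stand-in for 101 path vertices
parts = chunked(pts, size=50)
print([len(p) for p in parts])    # two chunks of 51 points each
```

Each chunk would then be emitted and stroked as its own PostScript
subpath.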
> Also, if I do the transform in backend_ps, instead of
> passing it to the postscript interpreter, I see a big
> speedup at render time. Right now I am doing this
> transform by hand:
> a, b, c, d, tx, ty = vec6
> xo = a*x + c*y + tx
> yo = b*x + d*y + ty
> x, y = xo, yo
> Is there a better way to do this? I thought I could simply
> call numerix_x_y, but that function is not compatible with
> nonlinear transforms (returns a domain error if one of the
> axes is log-scaled).
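Incidentally, the hand transform above can be done in one vectorized
step on whole arrays. A sketch with numpy (assuming vec6 is ordered
(a, b, c, d, tx, ty) as in your snippet; the function name is made up):

```python
import numpy as np

def affine_xy(vec6, x, y):
    # Apply the 2x2 affine [[a, c], [b, d]] plus translation (tx, ty)
    # to arrays of x and y values in one shot.
    a, b, c, d, tx, ty = vec6
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return a * x + c * y + tx, b * x + d * y + ty

# identity matrix with a translation of (1, -1)
xo, yo = affine_xy((1, 0, 0, 1, 1, -1), [0.0, 2.0], [0.0, 3.0])
print(xo, yo)
```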
You can transform the xy values point by point. Instead of separating
them into their nonlinear and affine components, as we are currently
doing in backend_ps, you can call trans.xy_tup, which will do both. If
it succeeds, it returns the transformed xy; if it fails, it raises a
ValueError, and you can reset the moveto/lineto state accordingly.
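In code, that point-by-point loop might look like the following sketch.
Here trans_xy is a toy stand-in for trans.xy_tup (log10 on x, identity
on y) rather than the real matplotlib transform object:

```python
import math

def trans_xy(xy):
    # Toy stand-in for trans.xy_tup: log10 on x, identity on y.
    x, y = xy
    if x <= 0:
        raise ValueError("x out of domain for log10")
    return math.log10(x), y

def to_path_ops(points):
    """Emit (op, x, y) tuples, restarting the subpath with a
    'moveto' after any point the transform rejects."""
    ops = []
    op = 'moveto'
    for xy in points:
        try:
            xt, yt = trans_xy(xy)
        except ValueError:
            op = 'moveto'      # next good point starts a new subpath
            continue
        ops.append((op, xt, yt))
        op = 'lineto'
    return ops

print(to_path_ops([(1, 2), (10, 3), (-1, 4), (100, 5)]))
```

The rejected point (-1, 4) is simply skipped, and the point after it
begins a fresh subpath with a moveto.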
But I am still confused by your previous post: you said that when you
tried to load the simple plot example PS file in gs, the line rendered
quickly, and then there was the interminable pause. This suggests
that it is not the transformation in gs that is the bottleneck, but
something that happens after it.
In any case, here is an example script creating a semilogx transform.
The first transformation succeeds; the second raises a ValueError:
from matplotlib.transforms import SeparableTransformation, \
Point, Value, Bbox, LOG10, IDENTITY, Func
import matplotlib.numerix as nx
# make some random bbox transforms
def rand_point():
    x, y = nx.mlab.rand(2)
    return Point(Value(x), Value(y))

def rand_bbox():
    ll = rand_point()
    ur = rand_point()
    return Bbox(ll, ur)

b1 = rand_bbox()
b2 = rand_bbox()
funcx = Func(LOG10)
funcy = Func(IDENTITY)
trans = SeparableTransformation(b1, b2, funcx, funcy)
print trans.xy_tup((1, 2))   # ok
print trans.xy_tup((-1, 2))  # raises ValueError