I think there are two issues here. The postscript interpreter doesn't like to
stroke long paths. If I break the path into 50-point chunks instead of
1000-point chunks, I see an improvement.
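For reference, the chunking I have in mind looks roughly like the sketch
below (a hypothetical helper, not the actual backend_ps code): split the
point list into small pieces, overlapping each piece by one point so the
stroked subpaths stay visually connected.

```python
def chunked(points, chunk_size=50):
    """Yield successive chunks of a point list, overlapping by one
    point so consecutive stroked subpaths join up seamlessly.

    Hypothetical helper for illustration; names and sizes are
    assumptions, not the real backend_ps implementation.
    """
    step = chunk_size - 1
    for i in range(0, len(points) - 1, step):
        yield points[i:i + chunk_size]

# Each chunk would then be emitted as its own moveto/lineto/stroke
# sequence in the postscript output.
```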
Also, if I do the transform in backend_ps instead of passing it to the
postscript interpreter, I see a big speedup at render time. Right now I am
doing this transform by hand:
xo = a*x + c*y + tx
yo = b*x + d*y + ty
x, y = xo, yo
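Vectorized over coordinate arrays, that hand-rolled affine looks something
like this (a sketch with an assumed helper name and PostScript-style matrix
entries a, b, c, d, tx, ty; not the actual backend code):

```python
import numpy as np

def affine_transform(x, y, a, b, c, d, tx, ty):
    """Apply the affine matrix [[a, c, tx], [b, d, ty]] to coordinate
    arrays in the backend, so the postscript interpreter only ever
    sees pre-transformed points.

    Hypothetical illustration of the by-hand transform above.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xo = a * x + c * y + tx
    yo = b * x + d * y + ty
    return xo, yo
```

Doing this once in numeric code, rather than per point in the interpreter,
is where the render-time speedup comes from.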
Is there a better way to do this? I thought I could simply call numerix_x_y,
but that function is not compatible with nonlinear transforms (it raises a
domain error if one of the axes is log-scaled).
On Monday 03 April 2006 11:16 am, John Hunter wrote:
> I'm having second thoughts about the wisdom of having
> postscript handle the transforms. With the new API, I run
> backend_driver.py and get a file called
> axes_demo_PS.ps. On my machine, it takes about 10 seconds
> to open this file if it was created with the new API. If I
> mask draw_markers and recreate the postscript file, it
> loads instantly.
So you think the performance hit comes from gs (or whatever viewer you are
using) having its postscript engine do the transformations? It surprises me
that this would be so inefficient.
We have the option of doing the transformations in the ps backend...