Why does .transData.transform() return display coordinates as floating point numbers?

Hello! As per the title: why, when converting data coordinates to display coordinates, does .transData.transform() return a numpy.ndarray of numpy.float64 objects?

The display coordinate unit should be the pixel and you can’t have a fraction of a pixel, so that’s kind of confusing to me?
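For example (a minimal sketch using the Agg backend; the exact numbers depend on figure size and dpi):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, just for a reproducible example
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.set_xlim(0, 10)
ax.set_ylim(0, 10)

# Convert the data point (5, 5) into display ("pixel") coordinates.
display = ax.transData.transform((5, 5))
print(display, display.dtype)  # an ndarray of float64, not integers
```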

Any help much appreciated!

Without knowing the internals of the plotting backend, the transformed coords are just another coordinate space; you may well want to go back and forth between the rendering coords and the data coords. If you rounded to an integer in one direction, the round trip would be very lossy.
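You can see the lossiness directly (a sketch with the Agg backend and an arbitrary test point; the size of the error depends on figure geometry):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, just for a reproducible example
import matplotlib.pyplot as plt
import numpy as np

fig, ax = plt.subplots()
ax.set_xlim(0, 1)
ax.set_ylim(0, 1)

pt = (0.123456, 0.654321)
disp = ax.transData.transform(pt)

# Exact round trip: inverting the float display coords recovers the data point.
back = ax.transData.inverted().transform(disp)

# Lossy round trip: snapping to whole pixels first throws away precision.
back_rounded = ax.transData.inverted().transform(np.round(disp))
print(back, back_rounded)  # back matches pt; back_rounded does not
```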

There’s no need to convert to pixels until things are actually drawn on some kind of image. And there are things like antialiasing and subpixel dithering where pixels are… shaded because something partially crosses that pixel - think of a diagonal line of a given width - if you drew it as fully coloured pixels where the line touches a pixel or fully uncoloured for untouched pixels you get jagged stepped effects.
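You can verify the antialiasing point by inspecting the rendered buffer (a sketch with the Agg backend; `buffer_rgba` is Agg-specific):

```python
import matplotlib
matplotlib.use("Agg")  # Agg backend, so we can grab the rendered RGBA buffer
import matplotlib.pyplot as plt
import numpy as np

fig, ax = plt.subplots()
ax.plot([0, 1], [0, 1], color="black", antialiased=True)
fig.canvas.draw()

buf = np.asarray(fig.canvas.buffer_rgba())
# An antialiased diagonal produces partially shaded (grey) pixels, not just
# fully black line pixels on a fully white background.
greys = np.unique(buf[..., 0])
print(len(greys))  # many intermediate intensity levels, not just 2
```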

In short, coordinates are done in floating point as that most closely approximates their ideal “real number” coordinates. Rendering to pixels happens very late - until then think of it as an ideal piece of paper, not a grid of pixels.


This is only true when rendered, and render space is not the same as display space. For example, render space may be an arbitrary scale of display space for a PDF or SVG, depending on what size the viewer is using. Also, if you are on a HiDPI screen, the render space may be 2 or even 3 times the display space, so you can have “half pixels” there as well.
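One way to see that the “pixel” coordinates aren’t tied to a fixed grid is to change the figure’s dpi (a sketch with the Agg backend; the default dpi of 100 is an assumption of matplotlib’s defaults):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, just for a reproducible example
import matplotlib.pyplot as plt
import numpy as np

fig, ax = plt.subplots()
ax.set_xlim(0, 1)
ax.set_ylim(0, 1)

lo = ax.transData.transform((0.5, 0.5))

# Doubling the dpi doubles the display coordinates of the same data point:
# display space scales with the output resolution, it isn't a fixed pixel grid.
fig.set_dpi(200)
hi = ax.transData.transform((0.5, 0.5))
print(lo, hi)
```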
