sample canvas object code

> Hi,
>
> I'm attaching the canvas object code I've been playing with.
>
> The API is still incomplete, but in the spirit of release early,
> release often, I'll put it out there for people to comment on.

Hey Paul, this is really cool stuff. There is a minor bug -- the onAdd
callback in bindertest.py on line 26 should be onAppend, I think.

onAppend. That's a brown paper bagger :-(

Also, I wanted to know if you've looked at the copy_region/blit
stuff we have.

I'm aware of the blit stuff but haven't looked in detail. So far I
don't need it because my graphs are simple. Later when I define
widgets operating on meshes I will look more closely.
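
For reference, the blit pattern in today's matplotlib looks roughly like
the sketch below (method names like copy_from_bbox/restore_region are the
current Agg canvas API, which may differ from the copy_region interface
being discussed, so treat this as illustrative only):

  import matplotlib.pyplot as plt
  import numpy as np

  fig, ax = plt.subplots()
  x = np.linspace(0, 2 * np.pi, 200)
  line, = ax.plot(x, np.sin(x), animated=True)   # excluded from normal draws

  plt.show(block=False)
  plt.pause(0.1)                                 # let the window appear and draw once

  background = fig.canvas.copy_from_bbox(ax.bbox)  # save the static background

  for phase in np.linspace(0, 2 * np.pi, 50):
      fig.canvas.restore_region(background)      # put the saved background back
      line.set_ydata(np.sin(x + phase))
      ax.draw_artist(line)                       # redraw only the changing artist
      fig.canvas.blit(ax.bbox)                   # push the updated region to screen
      fig.canvas.flush_events()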

BTW, poly_editor was broken but I just fixed it so you can check it
out of svn if you want to take a look.

Yup, it works now! Even when it was broken it was still a good source
of examples.

Right now I don't have any answers or comments to the questions you
raise in the code, eg on keyboard vs mouse focus handling -- you are
much deeper in this stuff than I am -- so it's probably best if
you just keep thinking through these things as you go.

I'll be away for a few weeks but will continue working with it when
I return.

I already know that I need to use keyboard focus handling. My real
application can move objects around on the screen with the keyboard
for finer control, and this will only work if the keyboard focus stays
with the object even as it moves away from the mouse.
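
As an aside, the keyboard-nudging part can be sketched with the existing
key_press_event machinery; the focus question is exactly what this sketch
does not solve, and the key bindings and step size here are just
placeholders:

  import matplotlib.pyplot as plt

  fig, ax = plt.subplots()
  point, = ax.plot([0.5], [0.5], 'o', markersize=12)
  ax.set_xlim(0, 1)
  ax.set_ylim(0, 1)

  STEP = 0.01                      # placeholder step size for fine control
  MOVES = {'left': (-STEP, 0), 'right': (STEP, 0),
           'up': (0, STEP), 'down': (0, -STEP)}

  def on_key(event):
      if event.key not in MOVES:
          return
      dx, dy = MOVES[event.key]
      xdata, ydata = point.get_data()
      point.set_data([xdata[0] + dx], [ydata[0] + dy])
      fig.canvas.draw_idle()       # request a redraw after the nudge

  fig.canvas.mpl_connect('key_press_event', on_key)
  plt.show()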

As for your traits question, you are absolutely right about the need
for a common callback framework. I have been cleaning up the
transformations in mpl1 and the callbacks and properties on the
affines are tremendously useful (eg xlim is just a property-based view
into the affine, and one can connect to events on affine changes). I
don't have a GUI backend layer yet in which one can begin playing
around with interaction, but I am close, with a few more changes, to
having a serviceable first cut at the transformations, artist
hierarchy, and renderer layer.
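
Very loosely, the pattern I mean looks like the toy sketch below -- an
illustrative observable property, not the actual mpl1 classes:

  class Affine:
      """Toy observable transform: observers register callbacks that fire
      whenever the limits change; xlim is just a property view of the state."""
      def __init__(self, xlim=(0.0, 1.0)):
          self._xlim = tuple(xlim)
          self._observers = []

      def connect(self, func):
          self._observers.append(func)

      @property
      def xlim(self):
          return self._xlim

      @xlim.setter
      def xlim(self, value):
          self._xlim = tuple(value)
          for func in self._observers:
              func(self)            # notify on every change

  a = Affine()
  a.connect(lambda aff: print('xlim is now', aff.xlim))
  a.xlim = (2.0, 8.0)               # prints: xlim is now (2.0, 8.0)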

I'm looking forward to seeing it.

FYI, every artist does have a callback mechanism built in which you
can easily extend to support additional events (right now I have been
adding them on an as-needed basis -- what traits does so nicely is
that they are there for every traited instance).

Eg to connect to the Axes xlim:

  ax.callbacks.connect('xlim_changed', func)

and func will be called with the signature func(ax). Eg see
examples/shared_axis_across_figures.py which utilizes the callbacks to
couple xlim across figure instances.
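
A slightly fuller version of that example, assuming current pyplot
conveniences rather than the mpl1 API, would be:

  import matplotlib.pyplot as plt

  def func(ax):
      # the registry calls back with the Axes whose limits changed
      print('new xlim:', ax.get_xlim())

  fig, ax = plt.subplots()
  ax.plot(range(10))
  ax.callbacks.connect('xlim_changed', func)
  ax.set_xlim(2, 8)                # fires 'xlim_changed' -> func(ax)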

How much control is there over which callbacks are called, what order
they are called in, and when effects show up on the screen?

My first implementation sends the event to the artist, and if the artist
returns false it goes to the axes, then the figure. This is much weaker
than the Tk/BLT canvas model, which allows the programmer to control the
order, and also has class-based dispatch.
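
Roughly, that propagation looks like the sketch below (the handler name
and the return-value convention are just illustrative, not the real API):

  def dispatch(event, artist):
      """Offer the event to the artist, then its Axes, then the Figure,
      stopping as soon as some handler reports that it consumed the event."""
      for target in (artist, artist.axes, artist.figure):
          if target is None:
              continue
          handler = getattr(target, 'handle_event', None)
          if handler is not None and handler(event):
              return True           # handled; stop propagating
      return False                  # fell through every level unhandled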

Also, I'm nervous about this from a performance perspective -- I don't want
others to have a slower graphics engine just because I need callbacks
for what I'm doing.

Has anyone done profiling to see where the current bottlenecks are?

I know we will be looking into this; the user experience was so
awkward with four plotting panels in a wx.aui frame that we went
back to a straight wx.lib.plot backend.
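
For a first pass, profiling a single draw with the standard library's
cProfile would show where the time goes; the snippet below is a generic
sketch, not a measurement from this thread:

  import cProfile
  import pstats

  import matplotlib
  matplotlib.use('Agg')             # time the drawing itself, not the GUI
  import matplotlib.pyplot as plt
  import numpy as np

  fig, ax = plt.subplots()
  for _ in range(4):                # mimic several busy panels
      ax.plot(np.random.rand(10000))

  cProfile.run('fig.canvas.draw()', 'draw.prof')
  pstats.Stats('draw.prof').sort_stats('cumulative').print_stats(15)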

- Paul


On Sat, Jul 21, 2007 at 02:39:52PM -0500, John Hunter wrote:

On 7/21/07, Paul Kienzle <pkienzle@...537...> wrote: