Ken McIvor wrote:
>> a wxBitmap is the same format as the native rendering
>> system. While most systems use 24b RGB (or 32b RGBA), people
>> can still run displays at 16bpp or whatever, so it's still
> I can understand why it's still necessary, although it's nice
> to sometimes pretend that everyone's running 24-bit color
> displays. I hope I didn't sound too judgemental!
>>> I don't think we're going to be able to get performance
>>> similar to that of the accelerator using straight Python code
>> But whether it's Python or C++, you still need to do the
>> Image->Bitmap conversion -- so if we can get rid of the data
>> copying from Agg buffer to wxImage in Python, we don't need
> I think we got some wires crossed at some point in the
> conversation, although it could be that I'm wearing the
> Stupid Hat today. I was talking about the
> image-from-a-buffer business not helping us with WX 2.4/2.6
> due to the RGBA to RGB conversion.
>> And it has. For wxPython 2.7 (and now in CVS) there are methods
>> for dumping 32 bit RGBA data directly into a wxBitmap with no
>> copying, if the data source is a Python Buffer object. I think
>> I posted a note about this here yesterday.
> Yes, you did mention it. I agree completely with this
> analysis of the situation. When I replied I wasn't thinking
> in terms of wxPython 2.7.
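For anyone following along, the RGBA-to-RGB conversion being discussed for wx 2.4/2.6 is just a repacking that drops every fourth byte before handing the buffer to wxImage. A minimal pure-Python sketch of that extra copy (illustrative only, not wx API; the wxPython 2.7 buffer methods mentioned above are what let us skip this step):

```python
def rgba_to_rgb(rgba):
    """Strip the alpha byte from a packed 32-bit RGBA buffer,
    yielding the 24-bit RGB buffer that older wxImage expects.
    This per-pixel repacking is the copy we'd like to avoid."""
    npix = len(rgba) // 4
    out = bytearray(npix * 3)
    for i in range(npix):
        out[3 * i:3 * i + 3] = rgba[4 * i:4 * i + 3]
    return bytes(out)
```

In C++ or with numpy striding this is cheap, but done pixel-by-pixel in Python it is exactly the kind of overhead the zero-copy wxBitmap path removes.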
>> To really get it to work, the 24bit RGB Agg buffer needs to be
>> a Python Buffer object -- is it now? I'm sorry I don't have the
>> time to mess with this now -- maybe some day.
> I guess Guido lets John borrow his time machine, because
> RendererAgg appears to already have a buffer_rgba() method.
Guido has been very generous with us.
>> You can alpha composite into a non-alpha background. You just
>> lose the alpha there, so that the background couldn't be
>> alpha-composited onto anything else -- but does it ever need to
> I thought that the buffer's accumulated alpha played a role
> in compositing new pixels onto it, but I apparently
It does: here is Agg's RGBA pixel-blending routine:
static AGG_INLINE void blend_pix(value_type* p,
                                 unsigned cr, unsigned cg, unsigned cb,
                                 unsigned alpha)
{
    calc_type r = p[Order::R];
    calc_type g = p[Order::G];
    calc_type b = p[Order::B];
    calc_type a = p[Order::A];
    p[Order::R] = (value_type)(((cr - r) * alpha + (r << base_shift)) >> base_shift);
    p[Order::G] = (value_type)(((cg - g) * alpha + (g << base_shift)) >> base_shift);
    p[Order::B] = (value_type)(((cb - b) * alpha + (b << base_shift)) >> base_shift);
    p[Order::A] = (value_type)((alpha + a) - ((alpha * a + base_mask) >> base_shift));
}
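In pure Python the same fixed-point arithmetic looks like this (a sketch for illustration, assuming base_shift = 8 and base_mask = 255 as in Agg's 8-bit pixel formats, not code from the Agg sources). Note the last line: the destination's accumulated alpha `a` does feed into the new alpha value, which is the point under discussion.

```python
BASE_SHIFT = 8                        # 8 bits of fixed-point precision
BASE_MASK = (1 << BASE_SHIFT) - 1     # 255 for 8-bit channels

def blend_pix(dst, src_rgb, alpha):
    """Blend an RGB source pixel with coverage `alpha` onto an RGBA
    destination pixel, mirroring Agg's integer lerp above."""
    r, g, b, a = dst
    cr, cg, cb = src_rgb
    return (
        ((cr - r) * alpha + (r << BASE_SHIFT)) >> BASE_SHIFT,
        ((cg - g) * alpha + (g << BASE_SHIFT)) >> BASE_SHIFT,
        ((cb - b) * alpha + (b << BASE_SHIFT)) >> BASE_SHIFT,
        (alpha + a) - ((alpha * a + BASE_MASK) >> BASE_SHIFT),
    )
```

With alpha = 0 the destination pixel is untouched, and a fully opaque blend drives the stored alpha to 255, so coverage accumulates in the buffer's alpha channel as pixels are painted.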
> Images. Anyway, if the buffer's alpha channel isn't used,
> then the whole situation does seem a bit odd. Could the
> information be retained for PNGs or something?
It is useful to store the final pixel buffer (e.g. in a PNG) as RGBA,
because some people like to leave parts of their figure transparent
so that the figure can be composited with other images.
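As a concrete illustration (a floating-point sketch of Porter-Duff "over" compositing, not Agg or matplotlib code): the same half-transparent pixel composites to different colors over different backgrounds, which is exactly the information a flattened 24-bit RGB buffer throws away.

```python
def over(src, dst):
    """Porter-Duff 'over': composite RGBA src onto RGBA dst.
    Channels are floats in [0, 1]."""
    sr, sg, sb, sa = src
    dr, dg, db, da = dst
    oa = sa + da * (1.0 - sa)          # resulting alpha
    if oa == 0.0:
        return (0.0, 0.0, 0.0, 0.0)
    blend = lambda s, d: (s * sa + d * da * (1.0 - sa)) / oa
    return (blend(sr, dr), blend(sg, dg), blend(sb, db), oa)

# A 50%-opaque red pixel saved with its alpha can still be laid
# over any background later:
half_red = (1.0, 0.0, 0.0, 0.5)
on_white = over(half_red, (1.0, 1.0, 1.0, 1.0))  # pink
on_black = over(half_red, (0.0, 0.0, 0.0, 1.0))  # dark red
```

Once the figure is flattened onto an opaque background, only one of these results survives; keeping the RGBA PNG defers that choice to whoever composites the figure.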
On 08/31/06 13:43, Christopher Barker wrote: