On Feb 21, 2013, at 7:15 PM, Graham Cox wrote:

> I have a simple requirement: take a buffer I malloc myself and get that 
> rendered as an image using CGImage.
> 
> I'm creating a data provider using CGDataProviderCreateWithData(), then using 
> that to create a CGImage of the desired dimensions and format. This is later 
> drawn using CGContextDrawImage().
> 
> The original buffer is updated directly by writing pixel values into the 
> buffer

You're not allowed to do this.  Once you create a data provider from a buffer, 
that buffer must not be modified until the releaseData callback is called.  If 
you look at the docs for CGDataProviderReleaseDataCallback, it says, "When 
Quartz no longer needs direct access to your provider data, your function is 
called. You may safely modify, move, or release your provider data at this 
time."  The implication is that you may not *modify*, move, or release your 
data before then (emphasis added).
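
For illustration, the ownership handoff looks roughly like this (just a sketch; the function and callback names are mine):

    #include <CoreGraphics/CoreGraphics.h>
    #include <stdlib.h>

    /* Quartz calls this when it no longer needs direct access to the
       buffer; only at this point is it safe to modify, move, or free it. */
    static void MyReleaseData(void *info, const void *data, size_t size)
    {
        free((void *)data);
    }

    static CGDataProviderRef MakeProvider(void *buffer, size_t size)
    {
        /* From here until MyReleaseData fires, treat `buffer` as
           belonging to Quartz; don't write into it. */
        return CGDataProviderCreateWithData(NULL, buffer, size, MyReleaseData);
    }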

> then the appropriate part of the view is invalidated which causes the image 
> to redraw. All I get is a black image - the data I write to the buffer 
> appears not to affect anything.
> 
> Is this the right way to handle this?

No.

> I understand that the whole system with data providers and so on is intended 
> for efficient image rendering and decoding, but in this case I want to get 
> back to almost the most basic situation possible. So maybe CGImage is caching 
> the content of my buffer once and not re-reading from it each time, which 
> would explain my results?

Yes.  Generally, a CGImage does lots of caching, including moving the image to 
VRAM when appropriate.

> How can I set up an image that just blits my buffer and doesn't try anything 
> clever like caching?

I don't know that you can.  You should probably create a fresh CGImage from the 
data for each render, unless you know that you'll render repeatedly with the 
same CGImage (which implies "with the same image content").  I believe that 
creating a new CGImage and drawing with that _is_ the way of telling the 
frameworks "I have updated pixel data and I want to draw it as efficiently as 
possible".

You could look into using an NSBitmapImageRep (which finally brings this 
on-topic for Cocoa-Dev), but it uses a CGImage internally.  Also, each time you 
modify the pixel buffer, the rep will probably have to unpack the pixel data 
from that CGImage and then, when you draw with the image rep, repack it and 
possibly ship it to the GPU again.  So it's actually more efficient to just 
create a new CGImage from the pixel buffer each time, since that's only a 
one-way transfer of data.
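
If you do try the image rep route anyway, wrapping your existing buffer would look roughly like this (again just a sketch, with the same assumed RGBA layout; note that the rep keeps a pointer to the planes you hand it rather than copying them, so your buffer has to outlive the rep):

    #import <AppKit/AppKit.h>

    /* Wrap an existing 32-bit RGBA buffer in an NSBitmapImageRep.
       The rep references `pixels` directly; it does not copy it. */
    static NSBitmapImageRep *WrapBuffer(unsigned char *pixels,
                                        NSInteger width, NSInteger height)
    {
        unsigned char *planes[1] = { pixels };
        return [[NSBitmapImageRep alloc]
            initWithBitmapDataPlanes:planes
                          pixelsWide:width
                          pixelsHigh:height
                       bitsPerSample:8
                     samplesPerPixel:4
                            hasAlpha:YES
                            isPlanar:NO
                      colorSpaceName:NSCalibratedRGBColorSpace
                         bytesPerRow:width * 4
                        bitsPerPixel:32];
    }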

I strongly encourage everyone who works with images to read the relevant 
sections of the Snow Leopard AppKit release notes.[1]  There's a trove of 
important information there.  Search for "NSBitmapImageRep: CoreGraphics 
impedance matching and performance notes".  Note especially the statement 
"CGImages are not mutable".  You might also want to read the preceding sections 
about NSImage.
[1] 
https://developer.apple.com/library/mac/releasenotes/Cocoa/AppKitOlderNotes.html#X10_6Notes

Regards,
Ken

