I'm trying to apply CIFilters to images (using CALayer) and movies (using 
QTMovieLayer). However, some of these filters produce an output image larger 
than the original content image/movie, such as a custom composite reflection 
filter I've written.
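
For context, here's roughly how the filters are attached in the plain CALayer 
case (a minimal sketch; a stock CIGaussianBlur stands in for my custom filter, 
and the cgImage variable is a placeholder):

    // Attach a Core Image filter to a layer's filters array.
    // CIGaussianBlur stands in for my custom composite reflection filter.
    CALayer *imageLayer = [CALayer layer];
    imageLayer.contents = (id)cgImage;   // placeholder CGImageRef
    CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
    [blur setDefaults];
    [blur setValue:[NSNumber numberWithDouble:10.0]
            forKey:kCIInputRadiusKey];
    [blur setName:@"blur"];              // names the filter for key paths
    imageLayer.filters = [NSArray arrayWithObject:blur];

    // Changing a filter property later goes through the layer:
    [imageLayer setValue:[NSNumber numberWithDouble:20.0]
              forKeyPath:@"filters.blur.inputRadius"];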

My current solution is to render the images and movies using a CAOpenGLLayer 
subclass. This works fine for images, and actually redraws faster than a plain 
CALayer when filter properties change. But when drawing the frames of a 
QTMovie, the performance is terrible.
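
The drawing side of the subclass looks roughly like this (a sketch; ciContext 
and filter are ivars, and setting the filter's input image is elided):

    // Create a CIContext bound to the layer's CGL context once,
    // then draw the filtered image each time the layer renders.
    - (void)drawInCGLContext:(CGLContextObj)ctx
                 pixelFormat:(CGLPixelFormatObj)pf
                forLayerTime:(CFTimeInterval)t
                 displayTime:(const CVTimeStamp *)ts
    {
        if (ciContext == nil) {
            ciContext = [[CIContext contextWithCGLContext:ctx
                                              pixelFormat:pf
                                               colorSpace:NULL
                                                  options:nil] retain];
        }
        CIImage *output = [filter valueForKey:kCIOutputImageKey];
        [ciContext drawImage:output
                     atPoint:CGPointZero
                    fromRect:[output extent]];
        [super drawInCGLContext:ctx pixelFormat:pf
                   forLayerTime:t displayTime:ts];
    }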

I'm using -frameImageAtTime:withAttributes:error: to extract each frame as a 
CIImage, but profiling showed that the method first renders the frame to 
produce the CIImage. Since that rendering is a known (and measured) expensive 
operation, I tried requesting other frame image types 
(QTMovieFrameImageTypeCVPixelBufferRef and 
QTMovieFrameImageTypeCVOpenGLTextureRef); however, these returned NULL, 
leaving me back where I started.
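
For reference, the extraction call looks like this (sketch); substituting the 
pixel-buffer or texture type constant for QTMovieFrameImageTypeCIImage is what 
yields NULL:

    // Request a movie frame as a CIImage. Swapping in
    // QTMovieFrameImageTypeCVPixelBufferRef or
    // QTMovieFrameImageTypeCVOpenGLTextureRef returns NULL for me.
    NSDictionary *attrs = [NSDictionary dictionaryWithObjectsAndKeys:
        QTMovieFrameImageTypeCIImage, QTMovieFrameImageType, nil];
    NSError *error = nil;
    CIImage *frame = (CIImage *)[movie frameImageAtTime:[movie currentTime]
                                         withAttributes:attrs
                                                  error:&error];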

I'm looking for suggestions on how to efficiently render video through Core 
Image filters using only the 64-bit QTKit. I've found plenty of sample 
projects, but they all use functions deprecated or removed in the 64-bit 
version of the framework. Alternatively, I'd like to know how to suitably 
transform the output image of a filter chain so that it lies within the bounds 
of the CALayer rendering it.
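
To be concrete about that second point, this is the kind of transform I mean: 
shifting and cropping the oversized output back into the layer's coordinate 
space (the geometry here is a placeholder; my reflection filter's actual 
offsets differ):

    // Shift the filter's oversized output so its origin matches the
    // layer's, then crop away whatever still falls outside the bounds.
    CIImage *output = [filter valueForKey:kCIOutputImageKey];
    CGRect extent = [output extent];
    CGAffineTransform shift =
        CGAffineTransformMakeTranslation(-extent.origin.x,
                                         -extent.origin.y);
    CIImage *shifted = [output imageByApplyingTransform:shift];
    CIImage *cropped = [shifted imageByCroppingToRect:[layer bounds]];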

As a last resort, I've considered writing a separate 32-bit process that uses 
the functions removed from 64-bit QTKit to render into an IOSurface, though 
I'd like to avoid that complexity.

Thanks,
Keith