Hello,
  I would like a little help designing my application before I veer
off in the wrong direction. I am writing an application that captures
input from a webcam using QTKit Capture, runs each frame through a few
filters (background subtraction, contrast, high-pass), and then runs
the filtered image through a blob-detection algorithm. I will then
show the final image, with some additional drawing overlaid to
represent the detected blobs, in a QTCaptureView. I am wondering where
in this pipeline I should apply the filters to the video. In the
QTRecorder sample code, the filters are applied just before the image
is displayed, in the delegate method:
- (CIImage *)view:(QTCaptureView *)view willDisplayImage:(CIImage *)image
Is this what I should use too? I am still reading about Core Image and
Core Video, but the QTRecorder example does not use Core Video at all.
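Here is roughly what I have in mind for that delegate method. This is
just a sketch of my current plan, not anything from the QTRecorder
sample: the backgroundImage instance variable (a stored reference
frame) and the particular filter choices are my own placeholders, and
it assumes QTKit.framework and QuartzCore.framework are linked.

- (CIImage *)view:(QTCaptureView *)view willDisplayImage:(CIImage *)image
{
    // Background subtraction: difference-blend the live frame against
    // a previously captured reference frame.
    CIFilter *subtract = [CIFilter filterWithName:@"CIDifferenceBlendMode"];
    [subtract setValue:image forKey:@"inputImage"];
    [subtract setValue:backgroundImage forKey:@"inputBackgroundImage"];
    CIImage *result = [subtract valueForKey:@"outputImage"];

    // Contrast boost via CIColorControls (default inputContrast is 1.0).
    CIFilter *contrast = [CIFilter filterWithName:@"CIColorControls"];
    [contrast setValue:result forKey:@"inputImage"];
    [contrast setValue:[NSNumber numberWithFloat:2.0f] forKey:@"inputContrast"];
    result = [contrast valueForKey:@"outputImage"];

    // ...high-pass filtering and blob detection would go here, with the
    // blob outlines drawn on top before returning...
    return result;
}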
Should I avoid it as well? How do Core Video and QTKit Capture relate?
Your advice will help me study in the right direction. I am trying to
rewrite the application OpenTouch in Cocoa by May, in time for the
science fair, so I am fairly pressed for time.
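From what I have read so far, Core Video seems to come in if you
attach a QTCaptureDecompressedVideoOutput to the capture session
instead of (or alongside) the view's delegate: its delegate callback
hands you each frame as a Core Video buffer. A rough sketch of my
understanding, which may well be off:

- (void)captureOutput:(QTCaptureOutput *)captureOutput
  didOutputVideoFrame:(CVImageBufferRef)videoFrame
     withSampleBuffer:(QTSampleBuffer *)sampleBuffer
       fromConnection:(QTCaptureConnection *)connection
{
    // Each decompressed frame arrives as a Core Video image buffer. It
    // can be wrapped in a CIImage to reuse the same Core Image filter
    // chain, or its pixels can be read directly for blob detection.
    CIImage *frame = [CIImage imageWithCVImageBuffer:videoFrame];
    // ...filter, detect blobs, then hand the result to the main thread...
}

If that is correct, the view delegate would be the simpler route for
display-time filtering, while the decompressed output would give
per-frame access independent of the view.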

 Thank You,
   Bridger Maxwell