libavcodec currently supports hardware-accelerated decoding but not encoding, 
and libavcodec+libavfilter+ffmpeg provide no way to build a 
decode->filter->encode pipeline that avoids copying buffers back and forth 
between the video card and system memory, which cuts out a significant part of 
the gain that hardware acceleration provides in the first place. It'd be 
useful to have a way to leave buffers on the GPU when possible, and copy them 
back and forth only when a filter can't be run on the GPU.
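Very roughly, the flow I have in mind looks like the sketch below. Every type 
and function name in it is made up for illustration -- none of this exists in 
lavc/lavfi today; the point is only where the copies would (and would not) 
happen.

    /* Purely illustrative sketch -- all names below are invented. */
    #include <stdbool.h>
    #include <stddef.h>

    typedef struct GpuFrame GpuFrame;  /* opaque surface on the device  */
    typedef struct SwFrame  SwFrame;   /* ordinary frame in system RAM  */

    /* Trivial stubs standing in for hypothetical hw-aware entry points. */
    static GpuFrame *hw_decode_next(void)             { return NULL; }
    static bool      graph_accepts_gpu_frames(void)   { return true; }
    static GpuFrame *filter_on_gpu(GpuFrame *in)      { return in; }
    static SwFrame  *download_to_sysmem(GpuFrame *in) { (void)in; return NULL; }
    static SwFrame  *filter_in_software(SwFrame *in)  { return in; }
    static void      hw_encode(GpuFrame *in)          { (void)in; }
    static void      upload_and_encode(SwFrame *in)   { (void)in; }

    static void transcode(void)
    {
        GpuFrame *gf;
        while ((gf = hw_decode_next())) {
            if (graph_accepts_gpu_frames()) {
                /* zero-copy path: surfaces never leave the device */
                hw_encode(filter_on_gpu(gf));
            } else {
                /* fallback: download once, filter in software, upload once */
                upload_and_encode(filter_in_software(download_to_sysmem(gf)));
            }
        }
    }

    int main(void) { transcode(); return 0; }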
Some filters could even run without any copying at all: scaling (with some 
scalers), overlays, cropping, drawtext/subtitles (the drawing component, at 
least), deinterlacing, trim, and some post-processing could likely be 
implemented relatively easily on a number of GPUs, and others could follow 
with additional work.
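To make the "copy only at the boundaries" idea concrete, here is a toy sketch 
of how lavfi could walk a filter chain and decide where a download or upload 
has to happen. The capability flag and the per-filter values are invented for 
the example, not a claim about what any existing filter can do.

    #include <stdio.h>

    enum FrameLocation { LOC_SYSMEM, LOC_GPU };

    typedef struct FilterCaps {
        const char *name;
        int can_take_gpu_frames;  /* hypothetical per-filter capability */
    } FilterCaps;

    /* Walk the chain and report where frames would change location. */
    static void plan_copies(const FilterCaps *chain, int n)
    {
        enum FrameLocation loc = LOC_GPU;  /* hw decoder output is on GPU */
        for (int i = 0; i < n; i++) {
            if (!chain[i].can_take_gpu_frames && loc == LOC_GPU) {
                printf("download to system memory before '%s'\n",
                       chain[i].name);
                loc = LOC_SYSMEM;
            } else if (chain[i].can_take_gpu_frames && loc == LOC_SYSMEM) {
                printf("upload back to the GPU before '%s'\n",
                       chain[i].name);
                loc = LOC_GPU;
            }
        }
    }

    int main(void)
    {
        /* Capabilities here are made up just for the example. */
        const FilterCaps chain[] = {
            { "scale",           1 },
            { "some_cpu_filter", 0 },
            { "overlay",         1 },
        };
        plan_copies(chain, 3);
        return 0;
    }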
This would probably require significant changes to AVFrame, to various 
lavc/lavfi structs and APIs, and to ffmpeg.c, but it could yield significant 
improvements in both speed and power consumption on systems that can run the 
full decode->filter->encode pipeline on the GPU.
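For AVFrame, the kind of addition I'm imagining is something along these 
lines. None of these fields or types exist; this is only meant to make the 
direction concrete, not to propose the actual layout.

    #include <stdint.h>

    /* Hypothetical: a refcounted pool that owns the device handle and
     * the surfaces allocated from it. */
    typedef struct HWFramePool HWFramePool;

    /* Hypothetical reference a GPU-resident frame could carry instead
     * of (or alongside) mapped data planes. */
    typedef struct HWFrameRef {
        HWFramePool *pool;     /* keeps the device alive while referenced */
        uintptr_t    surface;  /* opaque driver surface id in the pool    */
        /* copy to system memory only when a consumer actually needs it */
        int (*download)(struct HWFrameRef *hw,
                        uint8_t *dst_data[4], int dst_linesize[4]);
    } HWFrameRef;

    /* A software-only consumer would then request a sysmem view lazily. */
    int get_sysmem_view(HWFrameRef *hw, uint8_t *data[4], int linesize[4])
    {
        if (!hw)
            return 0;                        /* already an ordinary frame */
        return hw->download(hw, data, linesize);  /* copy back on demand  */
    }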

Thoughts on feasibility and/or implementation details?