On 20/09/2024 20:41, Carlos Ruiz wrote:
> The native hevc codec doesn't support resizing, so you decode video at
> full 4K on the GPU, which means allocating something like 5-10 surfaces
> at 3840x2160, i.e. roughly 250MB of GPU memory. Then you immediately have
> to take all of those frames, pass them through a filterchain, scale them
> down to e.g. 640x360, and waste CUDA cores instead of leveraging the
> dedicated video downsizing inside the NVDEC chip. Now do that for 50
> camera streams and you'll quickly run out of GPU memory with GPU
> utilization under 10%, haha.
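
(As a sanity check on the quoted figure: a 3840x2160 surface in 10-bit P010
is about 3840 * 2160 * 3 bytes ≈ 25 MB, so ten decoder surfaces do come to
roughly 250 MB; with 8-bit NV12 it would be about half that.)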

NVDEC does not implement fixed-function downscaling; in fact, none of the
desktop cards have any hardware dedicated to that.
As far as I know, scaling, deinterlacing, and generally all post-processing
is done on the compute engine via CUDA. This is still pretty efficient,
since the data can be shared between the decode and compute engines without
a copy.
Tegra chips are the only ones that come with a VIC (Video Image Compositor)
engine, which can do scaling, deinterlacing, spatial/temporal filtering,
and basic compositing.
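
For what it's worth, a minimal sketch of that desktop-GPU pipeline (assuming
an FFmpeg build with CUDA hwaccel, scale_cuda and hevc_nvenc enabled; file
names and the target size are placeholders):

  ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i input_4k.mp4 \
         -vf scale_cuda=640:360 -c:v hevc_nvenc output_360p.mp4

The decoded surfaces stay in CUDA device memory, scale_cuda runs the
downscale on the compute engine, and nothing is copied back to system
memory until the encoded bitstream comes out.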