On 14/02/17 19:44, Daniel Oberhoff wrote:
> filter strictly “halves” the image efficiently, which is often exactly what
> is needed
> likely much faster than using scale
Did you benchmark this? How?

$ time ./ffmpeg -f lavfi -i allyuv -vf 'scale=iw/2:ih/2' -vframes 400 -f null -
...
frame=  400 fps= 26 q=-0.0 Lsize=N/A time=00:00:16.00 bitrate=N/A speed=1.05x
...
real    0m15.365s
user    0m11.092s
sys     0m4.272s

$ time ./ffmpeg -f lavfi -i allyuv -vf 'halve' -vframes 400 -f null -
...
frame=  400 fps= 22 q=-0.0 Lsize=N/A time=00:00:16.00 bitrate=N/A speed=0.873x
...
real    0m18.392s
user    0m46.280s
sys     0m3.656s

So it uses four times as much CPU as swscale to be marginally slower?
(Skylake 6300; I admit the SMT could well be making it look a bit worse than
it actually is on the CPU time.)

On a more general note, components that duplicate existing functionality are
unlikely to be accepted without a compelling use-case for them to be included.
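For reference, on an 8-bit plane such a filter presumably boils down to a
plain 2x2 box average, roughly like the sketch below. This is illustrative
only, not taken from the patch, and the name halve_plane is made up here;
the point is that scale=iw/2:ih/2 already covers this operation.

#include <stddef.h>
#include <stdint.h>

/* Illustrative only: halve an 8-bit plane by averaging each 2x2 block
 * into one output pixel, with rounding. Odd trailing rows/columns are
 * simply dropped to keep the sketch short. */
static void halve_plane(uint8_t *dst, ptrdiff_t dst_linesize,
                        const uint8_t *src, ptrdiff_t src_linesize,
                        int src_w, int src_h)
{
    for (int y = 0; y + 1 < src_h; y += 2) {
        const uint8_t *s0 = src + y * src_linesize;
        const uint8_t *s1 = s0 + src_linesize;
        uint8_t       *d  = dst + (y / 2) * dst_linesize;
        for (int x = 0; x + 1 < src_w; x += 2)
            d[x / 2] = (s0[x] + s0[x + 1] + s1[x] + s1[x + 1] + 2) >> 2;
    }
}

That is exactly the kind of thing swscale already handles, hence the
duplication concern above.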