Hello!

I'm working on implementing lossy GIF encoding, and I'd like to hear your 
opinion on how best to integrate it with FFmpeg in a way that would be 
acceptable for inclusion in the project.
Lossy encoding works by letting the LZW compressor use approximate matching, 
and with the right input it can halve GIF file sizes. I've previously 
implemented it for gifsicle: https://kornel.ski/lossygif

The main problem I have now is that the existing GIF codec receives frames 
already converted to AV_PIX_FMT_RGB8 or a similar 8-bit format. When working 
with heavily dithered 8bpp input, the lossy compression struggles to find runs 
of similar pixels, so it isn't very effective.

For proper, high-quality lossy encoding I would need access to the 
full-quality input, ideally something like 24/32-bit RGB/RGBA. However, I don't 
know whether that is possible within the existing GIF codec implementation.

libavcodec/gif.c declares the pixel formats it accepts in 
ff_gif_encoder.pix_fmts. Is this list fixed, or could it vary depending on 
flags? I'm thinking that if the user sets a flag opting into lossy encoding, 
the codec could request AV_PIX_FMT_RGBA input instead of the 8-bit formats (or 
both at the same time, if that's possible).
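
For reference, the declaration in libavcodec/gif.c looks roughly like this
(quoted from memory, so the exact field list may differ slightly from the
current sources) -- note that the array is a static compile-time list:

```c
/* Approximate excerpt from libavcodec/gif.c (not verbatim): */
AVCodec ff_gif_encoder = {
    .name     = "gif",
    .type     = AVMEDIA_TYPE_VIDEO,
    .id       = AV_CODEC_ID_GIF,
    /* ... init/encode2/close callbacks ... */
    .pix_fmts = (const enum AVPixelFormat[]){
        AV_PIX_FMT_RGB8, AV_PIX_FMT_BGR8,
        AV_PIX_FMT_RGB4_BYTE, AV_PIX_FMT_BGR4_BYTE,
        AV_PIX_FMT_GRAY8, AV_PIX_FMT_PAL8,
        AV_PIX_FMT_NONE
    },
};
```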

Alternatively, would it be OK to create a separate codec just for lossy 
compression? I would create a new GIF codec, define it under a new name, and 
declare that it always takes AV_PIX_FMT_RGB24 or AV_PIX_FMT_RGBA. Is that a 
good approach?
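
Concretely, I imagine it looking something like the sketch below -- the name
"gif_lossy" and the reuse of the existing gif.c entry points are purely my
assumptions for illustration, not a worked-out patch:

```c
/* Hypothetical sketch only; "gif_lossy" and the reused callbacks are
 * assumptions, not existing FFmpeg symbols beyond standard AVCodec fields. */
AVCodec ff_gif_lossy_encoder = {
    .name           = "gif_lossy",
    .long_name      = NULL_IF_CONFIG_SMALL("GIF (lossy)"),
    .type           = AVMEDIA_TYPE_VIDEO,
    .id             = AV_CODEC_ID_GIF,
    .priv_data_size = sizeof(GIFContext),
    .init           = gif_encode_init,   /* shared with ff_gif_encoder */
    .encode2        = gif_encode_frame,
    .close          = gif_encode_close,
    .pix_fmts       = (const enum AVPixelFormat[]){
        AV_PIX_FMT_RGB24, AV_PIX_FMT_RGBA, AV_PIX_FMT_NONE
    },
};
```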

-- 
Kind regards, Kornel



_______________________________________________
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
