Quoting Michael Niedermayer (2020-12-14 00:52:06)
> On Sun, Dec 13, 2020 at 06:22:08PM +0100, Anton Khirnov wrote:
> > Quoting Michael Niedermayer (2020-12-13 15:03:19)
> > > On Sun, Dec 13, 2020 at 02:02:33PM +0100, Anton Khirnov wrote:
> > > > Quoting Paul B Mahol (2020-12-13 13:40:15)
> > > > > Why? Is it so hard to fix them to work with the latest API?
> > > >
> > > > It is not exactly obvious, since coded_frame is gone. I suppose you
> > > > could instantiate an encoder and a decoder to work around that, but it
> > > > all seems terribly inefficient. Lavfi seems to have some ME code, so
> > > > perhaps that could be used for mcdeint. Or if not, maybe someone could
> > > > get motivated to port something from AviSynth or VapourSynth. Similarly
> > > > for uspp, surely one can do a snow-like blur without requiring a whole
> > > > encoder.
> > > >
> > > > In any case, this seems to me like a good opportunity to find out whether
> > > > anyone cares enough about those filters to keep them alive. I don't
> > > > think we should keep code that nobody is willing to maintain.
> > >
> > > I might do the minimal changes needed to keep these working when I
> > > find the time, if no one else does. Certainly I would not be sad
> > > if someone else did it before me ;)
> > >
> > > Also, if a redesign happens, what looks interesting to me would be
> > > being able to export the needed information from encoders.
> > > Factoring code out of one specific encoder so that only it can be used
> > > is less general, but could of course be done too.
> > >
> > > If, OTOH, encoders in general could export their internal buffers for
> > > filters or debugging, that seems more interesting.
> >
> > TBH I am very skeptical that this can be done in a clean and
> > maintainable way.
>
> Why?
> One could simply attach the decoded frame bitmap as side data to the
> packet. On the surface at least, this does not seem to require anything
> anywhere else.
> It's just like any other side data, except that it
> would be done only when requested by the user.
> I imagine this might be little more than a single call in an encoder
> with the AVFrame and AVPacket as arguments ...
>
> > Splitting off individual pieces and making them
> > reusable is a better approach.
>
> Better for these two specific filters, yes, but that also makes it harder
> to switch them to a different encoder or even different encoder settings.
>
> As the filters are currently, it would be reasonably easy to switch them to
> a different encoder, experiment with them, and things like that.
I am not convinced that passing video through an entire encoder is a meaningful filtering method, if one wants specific and well-defined results. Not to mention it will most likely be incredibly slow.

-- 
Anton Khirnov

_______________________________________________
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".