On 10/31/2017 2:25 AM, Michael Niedermayer wrote:
> (though as said, this fix is not ideal or complete, I would very much
> prefer if this would be fixed by using a single buffer or any other
> solution that avoids the speedloss.)
Using a single buffer would be marginally faster, but it does not solve the underlying problem, which is that the NAL "cache" (nals_allocated) is cumulative, and each buffer in it grows to the largest NAL size ever observed at that position. Consider a crafted stream that contains, in order:

- Packet 1: 1999 tiny NALs, followed by one 10 MiB NAL.
- Packet 2: 1998 tiny NALs, followed by one 10 MiB NAL.
  ...
- Packet 1500: 500 tiny NALs, followed by one 10 MiB NAL.

And so forth. The result would be that we end up with 2000 10 MiB buffers allocated in the NAL memory "pool" (nals_allocated == 2000), which persist until the decoder is deinitialized.

Am I missing something here?

P.S. I see Kieran mailed the same thing as I wrote this.

- Derek
_______________________________________________
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel