On 06.12.2015 22:48, Michael Niedermayer wrote:
> On Sun, Dec 06, 2015 at 08:26:41PM +0100, Andreas Cadhalpun wrote:
>> On 05.12.2015 03:12, Michael Niedermayer wrote:
>>> On Fri, Dec 04, 2015 at 10:28:35PM +0100, Andreas Cadhalpun wrote:
>>>> On 03.12.2015 23:09, Michael Niedermayer wrote:
>>>>> diff --git a/libavcodec/golomb.h b/libavcodec/golomb.h
>>>>> index d30bb6b..323665d 100644
>>>>> --- a/libavcodec/golomb.h
>>>>> +++ b/libavcodec/golomb.h
>>>>> @@ -72,7 +72,7 @@ static inline int get_ue_golomb(GetBitContext *gb)
>>>>>              av_log(NULL, AV_LOG_ERROR, "Invalid UE golomb code\n");
>>>>>              return AVERROR_INVALIDDATA;
>>>>>          }
>>>>> -        buf >>= log;
>>>>> +        buf >>= log & 31;
>>>>>          buf--;
>>>>>
>>>>>          return buf;
>>>>>
>>>>
>>>> While that certainly fixes the undefined behavior, I'm wondering
>>>> what's the relation to commit fd165ac. In other words, why not just
>>>> remove the CONFIG_FTRAPV from the error check above?
>>>
>>> the & 31 is a hack really (and chosen because at least clang
>>> optimizes it out)
>>> the "correct" way would be to test, print a warning and return an
>>> error code. That way, if a valid (non-fuzzed) file triggers it, we
>>> know that the range of get_ue_golomb() is potentially too small.
>>> With the & 31 no information is shown; before this patch, ubsan
>>> would show it as well, without the normal case being slower.
>>> So it's all perfect already ... except ... that it's wrong, because
>>> it's undefined behavior
>>
>> So you're only worried that the check makes this slower?
>> This check is only needed in the less-likely/less-optimized branch
>> of get_ue_golomb, so it shouldn't matter that much.
>
> well, this is an inlined function in a header
> adding this check might cause the compiler to stop inlining it
> or the larger code size might lead to more cache misses
>
>>> maybe someone has a better idea ...
>>
>> Actually, I think the correct error check would be 'log < 7', because
>> UPDATE_CACHE only guarantees 25 meaningful bits. Thus I propose:
>>
>> --- a/libavcodec/golomb.h
>> +++ b/libavcodec/golomb.h
>> @@ -68,7 +68,7 @@ static inline int get_ue_golomb(GetBitContext *gb)
>>          int log = 2 * av_log2(buf) - 31;
>>          LAST_SKIP_BITS(re, gb, 32 - log);
>>          CLOSE_READER(re, gb);
>> -        if (CONFIG_FTRAPV && log < 0) {
>> +        if (log < 7) {
>>              av_log(NULL, AV_LOG_ERROR, "Invalid UE golomb code\n");
>>              return AVERROR_INVALIDDATA;
>>          }
>>
>> I've benchmarked this with the fate-cavs sample (which triggers the
>> error) cat'ed together 100 times, and it isn't any slower than without
>> this change.
>
> my concern is more on h264 (CAVLC) and hevc speed
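First, to spell out where the 7 comes from: a UE golomb code with z
leading zeros is 2*z + 1 bits long in total, and since av_log2(buf)
is 31 - z, the log computed above is 31 - 2*z. The 25 meaningful bits
guaranteed by UPDATE_CACHE therefore mean z <= 12, which is exactly
log >= 7. A tiny standalone program checking that arithmetic (just an
illustration, not code from golomb.h):

#include <stdio.h>

int main(void)
{
    for (int z = 0; z <= 15; z++) {
        int log = 31 - 2 * z; /* the value get_ue_golomb() computes */
        int len = 2 * z + 1;  /* total bits the golomb code needs   */
        printf("z=%2d log=%3d len=%2d %s\n", z, log, len,
               log < 7 ? "rejected (needs more than the 25 cached bits)"
                       : "ok");
    }
    return 0;
}

Regarding speed: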
I tested with 444_8bit_cavlc.h264, concatenated 100 times using the
concat demuxer, and couldn't see a measurable speed difference caused
by this error check.

Also, ff_h264_decode_mb_cavlc mainly uses get_ue_golomb_31, which isn't
affected by the change, and the hevc decoder uses get_ue_golomb_long,
so it is not affected either.

Best regards,
Andreas
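PS: In case it helps to see why get_ue_golomb_31 is out of the picture
entirely: it is table-driven, peeking a fixed 9 bits and looking up
value and code length, so there is no data-dependent shift that could
go negative. A self-contained sketch of how such tables can be built
(my own names; the real ff_golomb_vlc_len/ff_ue_golomb_vlc_code tables
in golomb.c differ in details):

#include <stdint.h>

/* ue(v) for v <= 30 needs at most 9 bits, so a 512-entry table indexed
 * by the top 9 cache bits yields value and code length directly. */
static uint8_t ue_val[512], ue_len[512];

static void build_ue_tables(void)
{
    for (unsigned v = 0; v <= 30; v++) {
        unsigned code = v + 1, bits = 0;
        for (unsigned t = code; t; t >>= 1)
            bits++;                  /* bits == av_log2(code) + 1 */
        unsigned len = 2 * bits - 1; /* zeros + stop bit + info   */
        /* every 9-bit pattern starting with this code decodes to v */
        for (unsigned fill = 0; fill < (1u << (9 - len)); fill++) {
            ue_val[(code << (9 - len)) | fill] = v;
            ue_len[(code << (9 - len)) | fill] = len;
        }
    }
    /* indices 0..15 (5 or more leading zeros, i.e. codes longer than
     * 9 bits) stay zero; a real decoder routes those to a slow path */
}

Decoding then indexes with the top 9 cache bits, skips ue_len[idx] bits
and returns ue_val[idx] -- no shift by a possibly negative log anywhere.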