On Fri, 2020-07-10 at 15:48 +0200, Andreas Rheinhardt wrote:
> This happened in get_ue_golomb() if the cached bitstream reader was
> in use, because there was no check to handle the case of the read
> value not being in the range 0..8190.
> 
> Signed-off-by: Andreas Rheinhardt <andreas.rheinha...@gmail.com>
> ---
>  libavcodec/golomb.h | 4 ++++
>  1 file changed, 4 insertions(+)
> 
> diff --git a/libavcodec/golomb.h b/libavcodec/golomb.h
> index 7fd46a91bd..5bfcfe085f 100644
> --- a/libavcodec/golomb.h
> +++ b/libavcodec/golomb.h
> @@ -66,6 +66,10 @@ static inline int get_ue_golomb(GetBitContext *gb)
>          return ff_ue_golomb_vlc_code[buf];
>      } else {
>          int log = 2 * av_log2(buf) - 31;
> +        if (log < 0) {
> +            av_log(NULL, AV_LOG_ERROR, "Invalid UE golomb code\n");
> +            return AVERROR_INVALIDDATA;
> +        }

How come invalid codes can even end up here like this? Checking for
them on every call seems inefficient. Or is the calling decoder wrong?
Maybe Michael wants to chime in here, since he wrote the original
implementation.
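
For reference, if I read the cached path right, buf holds the next 32
bits of the stream left-aligned, so (with clz() as a shorthand for the
number of leading zero bits):

    av_log2(buf) = 31 - clz(buf)        (index of the highest set bit)
    log = 2*av_log2(buf) - 31
        = 31 - 2*clz(buf)

which only goes negative once buf starts with 16 or more zero bits,
i.e. once the whole code cannot fit in the 32-bit cache, and that
negative log then ends up used as a shift/bit count further down.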

Reading a bit about Exp-Golomb, it seems counting the leading zeroes
would be the way to go (the zero prefix comes first in the stream, so
with a left-aligned cache it is the leading, not trailing, zeroes you
want). There's even a builtin for it, __builtin_clz(). Alternatively a
table-driven approach could be used; the short-code path above already
does something like that via ff_ue_golomb_vlc_code.
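
A rough sketch of what I mean, assuming a 32-bit cache with the next
bit in the MSB, as show_bits_long(gb, 32) gives you (read_ue_clz is a
made-up name, and the range check your patch adds is still needed, see
below):

    #include <stdint.h>

    /* Sketch only: decode one ue(v) Exp-Golomb code from a 32-bit
     * left-aligned cache.  The code is n zero bits, a 1 bit, then n
     * suffix bits, and decodes to that (2n+1)-bit number minus 1.
     * The caller must afterwards skip 2*n + 1 bits. */
    static unsigned read_ue_clz(uint32_t buf)
    {
        int n = __builtin_clz(buf);        /* length of the zero prefix */
        return (buf >> (31 - 2 * n)) - 1;  /* top 2n+1 bits, minus 1    */
    }

Note this has exactly the failure mode your patch guards against: for
n > 15 the shift count goes negative, and __builtin_clz(0) is outright
undefined, so the range of n still has to be checked first.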

/Tomas
