Nicolas Gaullier:
> Avoid an infinite 'retry' loop in output_packet when flushing.
> 
> Signed-off-by: Nicolas Gaullier <nicolas.gaullier@cji.paris>
> ---
>  libavformat/mpegenc.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/libavformat/mpegenc.c b/libavformat/mpegenc.c
> index e0955a7d33..e113a42867 100644
> --- a/libavformat/mpegenc.c
> +++ b/libavformat/mpegenc.c
> @@ -1002,7 +1002,7 @@ static int output_packet(AVFormatContext *ctx, int flush)
>      MpegMuxContext *s = ctx->priv_data;
>      AVStream *st;
>      StreamInfo *stream;
> -    int i, avail_space = 0, es_size, trailer_size;
> +    int i, has_avail_data = 0, avail_space = 0, es_size, trailer_size;
>      int best_i = -1;
>      int best_score = INT_MIN;
>      int ignore_constraints = 0;
> @@ -1028,6 +1028,7 @@ retry:
>          if (avail_data == 0)
>              continue;
>          av_assert0(avail_data > 0);
> +        has_avail_data = 1;
>  
>          if (space < s->packet_size && !ignore_constraints)
>              continue;
> @@ -1048,6 +1049,8 @@ retry:
>          int64_t best_dts = INT64_MAX;
>          int has_premux = 0;
>  
> +        if (!has_avail_data)
> +            return 0;
>          for (i = 0; i < ctx->nb_streams; i++) {
>              AVStream *st = ctx->streams[i];
>              StreamInfo *stream = st->priv_data;

In case of errors, the context is left in an inconsistent state: the
PacketDesc linked list claims that there is data in the FIFO although
there is none. I always prefer avoiding such scenarios over fixing them
up later on. In this case, avoiding them would mean growing the FIFO
before allocating the new PacketDesc (if the FIFO needs growing at all);
see the rough sketch below. Shall I do this, or do you want to?
(In any case, thanks for reporting this issue.)
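
Roughly like this (just a sketch of the ordering I mean, untested and
written from memory of mpeg_mux_write_packet, so the names, locals and
the exact grow amount are assumptions rather than a drop-in patch):

    /* Make room in the FIFO first, while nothing new has been committed;
     * 'size' is the payload size about to be queued for this stream. */
    if (av_fifo_can_write(stream->fifo) < (size_t)size) {
        int ret = av_fifo_grow2(stream->fifo,
                                (size_t)size - av_fifo_can_write(stream->fifo));
        if (ret < 0)
            return ret;
    }

    /* Only now allocate and link the new PacketDesc; if av_mallocz()
     * fails here, the PacketDesc list and the FIFO still agree. */
    pkt_desc = av_mallocz(sizeof(*pkt_desc));
    if (!pkt_desc)
        return AVERROR(ENOMEM);
    if (!stream->predecode_packet)
        stream->predecode_packet = pkt_desc;
    else
        stream->last_packet->next = pkt_desc;
    stream->last_packet = pkt_desc;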

- Andreas