Danivy:
> Using data size and nAvgBytesPerSec to calculate the duration will be
> more accurate
> ---
>  libavformat/wavdec.c | 7 +++++--
>  1 file changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/libavformat/wavdec.c b/libavformat/wavdec.c
> index d2fb81ca7f..50b1f81e62 100644
> --- a/libavformat/wavdec.c
> +++ b/libavformat/wavdec.c
> @@ -185,7 +185,8 @@ static int wav_parse_fmt_tag(AVFormatContext *s, int64_t size, AVStream *st)
>
>      st->internal->need_parsing = AVSTREAM_PARSE_FULL_RAW;
>
> -    avpriv_set_pts_info(st, 64, 1, st->codecpar->sample_rate);
> +    uint64_t nAvgBytesPerSec = st->codecpar->bit_rate / 8;
> +    avpriv_set_pts_info(st, 64, 1, nAvgBytesPerSec);
>
>      return 0;
>  }
> @@ -637,7 +638,9 @@ break_loop:
>                         /
>                         (st->codecpar->channels *
>                          (uint64_t)av_get_bits_per_sample(st->codecpar->codec_id));
>
> -    if (sample_count)
> +    if (data_size)
> +        st->duration = data_size;
> +    else if (sample_count)
>          st->duration = sample_count;
>
>      if (st->codecpar->codec_id == AV_CODEC_ID_PCM_S32LE &&

1. What is the point of using a finer timestamp scale than the sample rate?

2. There are scenarios where this leads to a coarser timestamp scale, e.g. for mono ADPCM Yamaha files.

3. Parts of the code apparently still presume that the timebase is the sample rate; therefore FATE (our regression testing system) fails with your patch: https://patchwork.ffmpeg.org/check/36501/
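To make point 2 concrete, here is a rough standalone sketch (made-up
numbers, not part of the patch): for a hypothetical mono ADPCM Yamaha
stream at 44100 Hz, av_get_bits_per_sample() reports 4 bits per coded
sample, so nAvgBytesPerSec = 44100 * 1 * 4 / 8 = 22050, and the patch
would set a 1/22050 timebase, coarser than the current 1/44100:

    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t sample_rate = 44100; /* hypothetical mono stream */
        uint64_t channels    = 1;
        uint64_t bits        = 4;     /* ADPCM Yamaha: 4 bits/sample */

        /* nAvgBytesPerSec as the patch would derive it */
        uint64_t avg_bytes_per_sec = sample_rate * channels * bits / 8;

        printf("current timebase: 1/%"PRIu64"\n", sample_rate);       /* 1/44100 */
        printf("patched timebase: 1/%"PRIu64"\n", avg_bytes_per_sec); /* 1/22050 */
        return 0;
    }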
- Andreas