Hi,

On Tue, 20 Jul 2021, 胡玮文 wrote:

After compiling and running this code, run this command to inspect the dts 
values (which come from the out-of-sync tfdt):

ffprobe -show_packets bug.mp4 | grep dts=

The output is:

dts=0
dts=1
dts=2
dts=2
dts=3
dts=4

With this patch applied, the output is:

dts=0
dts=1
dts=2
dts=10
dts=11
dts=12

Thanks for the repro case, and sorry for the delay in looking at it.

I do see the issue, but I disagree with your suggested solution. While your patch does write the correct, intended value in tfdt, it creates a file where the dts calculated by adding up the previous sample durations differs from what's written in tfdt. So depending on whether a demuxer accumulates durations or reads tfdt, it will produce different timestamps.
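
To make the divergence concrete, here is a small standalone C sketch (none of this is movenc code, and the fragment contents are made up for illustration) that derives dts both ways from the same data:

/* Standalone illustration (not movenc code): the same fragmented file
 * yields two different dts sequences depending on whether the demuxer
 * trusts tfdt or just accumulates sample durations. The fragment
 * contents below are made up for illustration. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

struct fragment {
    int64_t tfdt;         /* baseMediaDecodeTime from the tfdt box */
    int64_t durations[3]; /* per-sample durations from the trun box */
    int nb_samples;
};

int main(void)
{
    /* The second fragment's tfdt (10) does not match the sum of the
     * first fragment's durations (1 + 1 + 1 = 3), which is the state
     * the proposed patch would create. */
    struct fragment frags[] = {
        {  0, { 1, 1, 1 }, 3 },
        { 10, { 1, 1, 1 }, 3 },
    };
    int64_t dts_from_durations = 0;

    for (int i = 0; i < 2; i++) {
        int64_t dts_from_tfdt = frags[i].tfdt;
        for (int j = 0; j < frags[i].nb_samples; j++) {
            printf("sample %d.%d: dts from tfdt = %2" PRId64
                   ", dts from durations = %2" PRId64 "\n",
                   i, j, dts_from_tfdt, dts_from_durations);
            dts_from_tfdt      += frags[i].durations[j];
            dts_from_durations += frags[i].durations[j];
        }
    }
    return 0;
}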

I guess it can be argued that when a demuxer reads a fragmented file, tfdt should be authoritative over summed durations, and that any reader doing otherwise is buggy; but nevertheless, the code as it stands is designed to keep tfdt consistent with the sum of durations.

It seems the same issue can be fixed differently, though: by not adjusting track_duration and end_pts when autoflushing, if there are no samples in the track that are going to be flushed. That way, we retain the muxer's existing intended logic while avoiding the divergence.
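
Roughly, the shape of the change is something like this standalone model (the struct fields and the function are simplified stand-ins for the movenc internals, not the actual patch):

/* Standalone model of the guard described above; the field and
 * function names are simplified stand-ins for the movenc internals,
 * not the actual patch. When an autoflush is triggered, a track's
 * duration/end_pts only advances if that track actually has queued
 * samples to flush. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

struct track {
    int     entry;          /* number of samples queued in this track */
    int64_t start_dts;
    int64_t track_duration; /* the next fragment's tfdt derives from this */
    int64_t end_pts;
};

/* Called on every track when some other track triggers an autoflush. */
static void autoflush_track(struct track *trk, int64_t flush_dts)
{
    if (trk->entry > 0) {
        /* There are samples to write; account for them as before. */
        trk->track_duration = flush_dts - trk->start_dts;
        trk->end_pts        = flush_dts;
        trk->entry          = 0;
    }
    /* With no queued samples, track_duration/end_pts stay put, so the
     * next fragment's tfdt remains consistent with the summed sample
     * durations. */
}

int main(void)
{
    /* A track whose samples so far end at dts 3, with nothing queued. */
    struct track trk = { .entry = 0, .start_dts = 0,
                         .track_duration = 3, .end_pts = 3 };

    autoflush_track(&trk, 10); /* empty track: nothing should move */
    printf("tfdt base for next fragment: %" PRId64 "\n", trk.track_duration);
    return 0;
}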

With my movenc modification, your repro example produces this dts sequence:

dts=0
dts=1
dts=2
dts=2
dts=11
dts=12

This is, of course, less nice than what we had before, but after flushing the fragment containing tfdt=2, duration=0, the only consistent choice we have is to start the next fragment at tfdt/dts=2.


However, I'm open to adding an option to ignore the end of the previous fragment and make the new fragment start at the exact desired timestamp.

// Martin