Dear FFmpeg contributors,

I'm new to the FFmpeg code base and to the audio/video sync world, so forgive me in advance if my questions are a bit dumb.
I have a project where I need to synchronize multiple RTSP cameras with other network sensors (synchronized via NTP or PTP). I use FFmpeg to decode the RTSP streams and pipe the raw video to stdout. While looking through rtpdec, I found several timestamps: PTS, DTS and also the PRFT (Producer Reference Time). For my use case the PRFT looks like the right one. After several tests and some digging, I found that the AV_PKT_DATA_PRFT side data produced by the RTSP demuxer does not seem to be forwarded to the decoder/encoder, nor to the final muxer.

So I have a few questions:
- Is forwarding AV_PKT_DATA_PRFT through the pipeline the correct approach?
- I also saw that dashenc and movenc use this side data, but how do they get it?
- Right now I have a dirty hack to output the PRFT on stdout; is there something "more standard" to communicate it between FFmpeg and a Python script?

Thanks for your help,
Clément

Clément Péron (3):
  frame: decode: propagate PRFT side data packet to frame
  avcodec: rawenc: Forward PRFT frame data to packet
  HACK: avformat: rawenc: allow to output a raw PRFT

 libavcodec/decode.c      |   1 +
 libavcodec/rawenc.c      |  12 ++++
 libavfilter/f_sidedata.c |   1 +
 libavformat/rawenc.c     | 122 +++++++++++++++++++++++++++++++++++++++
 libavutil/frame.c        |   1 +
 libavutil/frame.h        |   4 ++
 6 files changed, 141 insertions(+)

-- 
2.42.0
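P.S. For reference, here is a minimal, untested sketch (public libavformat/libavcodec API only, current FFmpeg headers, placeholder RTSP URL) of pulling AV_PKT_DATA_PRFT straight off the demuxed packets; this is the side data I would like to see propagated further down the chain:

#include <stdio.h>
#include <inttypes.h>
#include <libavformat/avformat.h>
#include <libavcodec/packet.h>

int main(void)
{
    /* Placeholder URL: replace with the real camera address. */
    const char *url = "rtsp://camera.example/stream";
    AVFormatContext *ic = NULL;
    AVPacket *pkt = av_packet_alloc();

    if (!pkt || avformat_open_input(&ic, url, NULL, NULL) < 0)
        return 1;
    if (avformat_find_stream_info(ic, NULL) < 0)
        return 1;

    while (av_read_frame(ic, pkt) >= 0) {
        size_t size;
        const uint8_t *sd = av_packet_get_side_data(pkt, AV_PKT_DATA_PRFT, &size);

        if (sd && size >= sizeof(AVProducerReferenceTime)) {
            const AVProducerReferenceTime *prft = (const AVProducerReferenceTime *)sd;
            /* wallclock is a UTC timestamp in microseconds since the Unix epoch */
            printf("stream %d pts %"PRId64" prft wallclock %"PRId64" us\n",
                   pkt->stream_index, pkt->pts, prft->wallclock);
        }
        av_packet_unref(pkt);
    }

    av_packet_free(&pkt);
    avformat_close_input(&ic);
    return 0;
}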