That is really interesting feedback, guys. I have been thinking about this 
mostly in terms of a MOV-style independent timecode track (or tracks), but I 
know OMF, MXF, AAF, etc. handle it more analogously to packet/frame side data.

Usually ffmpeg implements a superset of functionality for any one concept so 
that it can handle all the various forms and implementations. I don't really 
see that for timecode, though I don't know what that would look like either, 
especially given the compromises you both pointed out.

In my case it turned out that our DeckLink Duo 2 was in a duplex state that 
caused the first few frames to be dropped, so timing-wise we missed the 
opening of the output format. That is why it appeared to fail to set the 
timecode in the output container. We don't really need that duplex mode (or 
the DeckLink at all, for that matter), so I think we are set for now.

I will keep my thinking cap on about ffmpeg and timecode, though. What I need 
may just be a matter of teaching mov.c and movenc.c to handle densely 
populated independent timecode tracks.
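
(For context on the non-dense case: if I am reading movenc.c right, it will 
already synthesize a tmcd track from a "timecode" metadata tag on the output, 
but that only carries a start value, not per-frame values:)

    // Start-value-only timecode: movenc.c picks up the "timecode"
    // metadata tag (stream or container level) and writes a tmcd track.
    // Densely populated tracks are the part that is missing.
    av_dict_set(&ofmt_ctx->metadata, "timecode", "01:00:00:00", 0);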

Thanks,
Jon

> On Jul 16, 2018, at 6:32 AM, Devin Heitmueller <dheitmuel...@ltnglobal.com> 
> wrote:
> 
> Hi Marton,
> 
>> 
>> In the current implementation, per-frame timecode is stored as 
>> AV_PKT_DATA_STRINGS_METADATA side data; when AVPackets become AVFrames, 
>> the AV_PKT_DATA_STRINGS_METADATA is automatically converted to entries in 
>> the AVFrame->metadata AVDictionary. The dictionary key is "timecode".
>> 
>> There is no "standard" way to store per-frame timecode, neither in packets, 
>> nor in frames (other than the frame side data AV_FRAME_DATA_GOP_TIMECODE, 
>> but that is too specific to MPEG). Using AVFrame->metadata for this is also 
>> non-standard, but it allows us to implement the feature without worrying too 
>> much about defining / documenting it.
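
(To make sure I follow: a consumer of this would look roughly like the 
sketch below, the key being the "timecode" string you mention?)

    #include <libavutil/dict.h>
    #include <libavutil/frame.h>

    /* After decoding, the per-frame timecode (if any) is an ordinary
     * metadata entry on the frame with the key "timecode". */
    static const char *get_frame_timecode(const AVFrame *frame)
    {
        AVDictionaryEntry *e =
            av_dict_get(frame->metadata, "timecode", NULL, 0);
        return e ? e->value : NULL; /* e.g. "01:00:00:12" */
    }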
> 
> For what it’s worth, I’ve got timecode support implemented here where I 
> turned the uint32 defined in libavutil/timecode.h into a new frame side data 
> type.  I’ve got the H.264 decoder extracting timecodes from SEI and creating 
> these, which are then fed to the decklink output, where they get converted 
> into the appropriate VANC packets.  It seems to be working pretty well, 
> although there are still a couple of edge cases to iron out with interlaced 
> content and PAFF streams.
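
(Sketching what I understand Devin's approach to be -- the side data type 
name below is a placeholder, since the patch is not upstream:)

    #include <libavutil/frame.h>
    #include <libavutil/timecode.h>

    /* AV_FRAME_DATA_SMPTE_TIMECODE is hypothetical here; it stands in
     * for the new frame side data type Devin describes, carrying the
     * SMPTE uint32 from libavutil/timecode.h. */
    static int attach_timecode(AVFrame *frame, AVRational rate, int framenum)
    {
        AVTimecode tc;
        AVFrameSideData *sd;
        int ret = av_timecode_init(&tc, rate, 0, 0, NULL);
        if (ret < 0)
            return ret;
        sd = av_frame_new_side_data(frame, AV_FRAME_DATA_SMPTE_TIMECODE,
                                    sizeof(uint32_t));
        if (!sd)
            return AVERROR(ENOMEM);
        *(uint32_t *)sd->data =
            av_timecode_get_smpte_from_framenum(&tc, framenum);
        return 0;
    }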
> 
>> 
>> Also it is worth mentioning that the frame metadata is lost when encoding, 
>> so the muxers won't have access to it, unless the encoders export it in some 
>> way, such as packet metadata or side data (they currently don't).
> 
> Since for the moment I’m focused on the decoding case, I’ve changed the V210 
> encoder to convert the AVFrame side data into AVPacket side data (so the 
> decklink output can get access to the data), and when I hook in the decklink 
> capture support I will be submitting patches for the H.264 and HEVC encoders.
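
(Presumably the V210 change amounts to something like this in the encoder -- 
again, both side data type names are placeholders:)

    #include <string.h>
    #include <libavcodec/avcodec.h>
    #include <libavutil/frame.h>

    /* In the encoder: copy timecode side data from the input AVFrame to
     * the output AVPacket so downstream muxers/outputs can see it. */
    static int copy_timecode(const AVFrame *frame, AVPacket *pkt)
    {
        AVFrameSideData *sd =
            av_frame_get_side_data(frame, AV_FRAME_DATA_SMPTE_TIMECODE);
        if (sd) {
            uint8_t *dst = av_packet_new_side_data(pkt,
                AV_PKT_DATA_SMPTE_TIMECODE, sd->size);
            if (!dst)
                return AVERROR(ENOMEM);
            memcpy(dst, sd->data, sd->size);
        }
        return 0;
    }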
> 
>>> 
>>> 2) Is there any reason not to make a valid timecode track (ala 
>>> AVMEDIA_TYPE_DATA AVStream) with timecode packets? Would that conflict with 
>>> the side data approach currently implemented?
>> 
>> I see no conflict, you might implement a timecode "track", but I don't see 
>> why that would make your life any easier.
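
(The mechanics of the "track" version would presumably be something like the 
fragment below -- the codec id is the open question, since nothing suitable 
exists today:)

    /* Sketch: create a data stream to carry one small timecode packet
     * per video frame.  AV_CODEC_ID_NONE is a placeholder; a real
     * implementation would need a dedicated codec id (or codec_tag
     * 'tmcd' for MOV).  ofmt_ctx/video_st assumed from context. */
    AVStream *st = avformat_new_stream(ofmt_ctx, NULL);
    if (!st)
        return AVERROR(ENOMEM);
    st->codecpar->codec_type = AVMEDIA_TYPE_DATA;
    st->codecpar->codec_id   = AV_CODEC_ID_NONE;
    st->time_base            = video_st->time_base;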
> 
> The whole notion of carrying this via a stream versus side data is a 
> long-standing issue.  It impacts not just timecodes but also stuff like 
> closed captions, SCTE-104 triggers, and teletext.  In some cases like MOV 
> it’s carried in the container as a separate stream; in other cases like 
> MPEG2/H.264/HEVC it’s carried in the video stream.
> 
> At least for captions and timecodes the side data approach works fine in 
> the video stream case, but it's problematic when data carried as side data 
> needs to be extracted into a stream.  The only way I could think of doing it
> was to insert a split filter on the video stream and feed both the actual 
> video encoder and a second encoder instance which throws away the video 
> frames and just acts on the side data to create the caption stream.
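
(Something like the following, if I understand the split idea -- the second 
encoder name is invented, just to show the shape of it:)

    # "sidedata_to_cc" does not exist; it stands in for a second encoder
    # instance that discards the pixels and emits a caption stream from
    # the frame side data.
    ffmpeg -i in.ts -filter_complex "[0:v]split[v1][v2]" \
        -map "[v1]" -c:v libx264 \
        -map "[v2]" -c:v sidedata_to_cc out.mov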
> 
> And of course you have the same problem in the other direction - if you 
> receive the timecodes/captions via a stream, how do you get them into side 
> data so they can be encoded by the video encoder.
> 
> ---
> Devin Heitmueller - LTN Global Communications
> dheitmuel...@ltnglobal.com

_______________________________________________
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
