On 9/27/18 2:44 AM, Marton Balint wrote:
>
>
> On Tue, 25 Sep 2018, Jeyapal, Karthick wrote:
>
>>
>> On 9/24/18 7:42 PM, Devin Heitmueller wrote:
>>> Hello Karthick,
>>>
>>>
>>>> On Sep 24, 2018, at 7:49 AM, Karthick J <kjeya...@akamai.com> wrote:
>>>>
>>>> From: Karthick Jeyapal <kjeya...@akamai.com>
>>>>
>>>> This option is useful for maintaining input synchronization across N
>>>> different hardware devices deployed for 'N-way' redundancy.
>>>> The system time of the different hardware devices should be synchronized
>>>> using a protocol such as NTP or PTP before using this option.
>>>
>>> I can certainly see the usefulness of such a feature, but is the decklink 
>>> module really the right place for this?  This feels like something that 
>>> should be done through a filter (either as a multimedia filter or a BSF). 
>> Hi Devin,
>>
>> Thank you very much for the feedback. I agree with you that if this can be
>> done through a filter, then that is certainly a better place for it. But as
>> far as I understand, it can't be implemented reliably in a filter without
>> imposing additional restrictions and/or added complexity. This is primarily
>> because frames may take different amounts of time to pass through the
>> pipeline threads on each piece of hardware, and so reach the filter function
>> at different times, losing some synchronization w.r.t. system time. In other
>> words, some modules in the pipeline contain CPU-intensive code (such as
>> video decoding) that runs before the frame reaches the filter function. The
>> thread that performs this check should be very lightweight, without any
>> CPU-intensive operations, and for better reliability the check needs to
>> happen as soon as the frame is received from the driver.
>>
>> For example, a video frame captured by a decklink device could take a
>> different amount of time to pass through the V210 decoder due to HW
>> differences and/or CPU load from other encoder threads. This unpredictable
>> decoder delay effectively rules out multimedia filters for this kind of
>> operation. A bitstream filter (BSF) can mitigate the issue to some extent,
>> as it sits before the decoder, but we would still need to insert a thread
>> (and associated buffering) into the BSF so that the decoder is decoupled
>> from this time-sensitive thread. Even then, it provides no guarantee against
>> CPU-intensive operations performed in the capture plugin. For example, the
>> Decklink plugin performs some VANC processing, which could be CPU-intensive
>> on a low-end 2-core Intel processor. And even if we assume the Decklink
>> plugin doesn't perform any CPU-intensive operations, we cannot guarantee the
>> same for other capture device plugins.
>>
>> Another option for implementing this in filters would be to use "copyts" and
>> drop frames based on PTS/DTS values instead of system time. But that imposes
>> the restriction that "copyts" must always be used; if somebody needs this
>> feature without "copyts", it won't work. My understanding of ffmpeg is
>> limited, so the above explanation may not be entirely correct. Please feel
>> free to correct me.
>
> How about adding such an option to ffmpeg.c? You can still use wallclock 
> timestamps in decklink, and then drop the frames (packets) in ffmpeg.c before 
> the timestamps are touched.
Yes, that's true. But I will have to set the decklink options abs_wallclock 
and decklink_copyts for it to work correctly. That is the restriction this 
option would impose on its users.
If I have to set those two decklink options anyway, then an additional copyts 
option along with the f_select expression you suggested below will do the job 
without any code changes. For example, I used the select expression 
"lte(mod(pts*1000000*tb, 6*1000000), 41666)+selected_n", where 6 is the 
segment size in seconds and 41666 is the frame duration (in microseconds) of 
24 fps video. This seems like the easiest workaround for this issue. Thank you 
very much for suggesting the usage of f_select.
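To make that concrete, here is a minimal sketch of the full command line
implied above, for a 24 fps input and a 6-second segment size. The device
name, encoder and output are placeholders, and the decklink timestamping
options (-video_pts/-audio_pts abs_wallclock) are assumptions based on this
discussion; the decklink-side copyts option mentioned above is left out, since
its exact CLI spelling may vary between FFmpeg versions:

    # Select nothing until the first frame whose wallclock-derived PTS falls
    # within one frame duration (41666 us ~= 1/24 s) of a 6-second boundary;
    # after that, selected_n is non-zero, so every subsequent frame passes.
    ffmpeg -copyts \
           -video_pts abs_wallclock -audio_pts abs_wallclock \
           -f decklink -i 'DeckLink Duo (1)' \
           -vf "select='lte(mod(pts*1000000*tb, 6*1000000), 41666)+selected_n'" \
           -c:v libx264 -f mpegts out.ts

Running the same command on each of the N machines (with their clocks kept in
sync via NTP/PTP) should make all of them start emitting frames at the same
6-second wallclock boundary, which is the synchronization this thread is about.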
>
> Another approach might be to store the wallclock frame time as some kind of 
> metadata (as it is done for "timecode") and then add the possibility to 
> f_select to drop based on this. However, the evaluation engine has no concept 
> of complex objects (like frames or frame metadata), so this probably needs 
> additional work.
This involves a lot of extra work for a feature that can be implemented very 
easily in the capture plugin. And other capture plugins would still have to add 
the relevant metadata/side data for this feature to work for them. If you still 
think that the decklink plugin is not the right place to add this feature, then 
I respect that decision; I will live with the f_select solution, with the extra 
restrictions on the timestamping options.
Thanks again for your valuable suggestions.

Regards,
Karthick
>
> Regards,
> Marton
