Jan 23, 2021, 23:12 by c...@passwd.hu:

>
>
> On Sat, 23 Jan 2021, Lynne wrote:
>
>> Jan 23, 2021, 21:42 by c...@passwd.hu:
>>
>>>
>>>
>>> On Sat, 23 Jan 2021, Lynne wrote:
>>>
>>>> Jan 23, 2021, 21:04 by c...@passwd.hu:
>>>>
>>>>>
>>>>>
>>>>> On Sat, 23 Jan 2021, Lynne wrote:
>>>>>
>>>>>> This is an RFC about the upcoming additions to the AVPacket structure
>>>>>> (whose size is still part of the ABI, so we need to plan for any 
>>>>>> changes).
>>>>>>
>>>>>> The current RFC patch adds 3 fields:
>>>>>>     - "void *opaque;" for the user to use as they wish, same as 
>>>>>> AVFrame.opaque
>>>>>>     - "void *opaque_ref;" for more permanent and propagating user data, 
>>>>>> same as AVFrame.opaque_ref
>>>>>>
>>>>>
>>>>> These seem legit.
>>>>>
>>>>>>     - "AVRational time_base;" which will be set to indicate the time 
>>>>>> base of the packet's timestamps
>>>>>>
>>>>>
>>>>> Why? It seems redundant, and it will not be clear when to use the 
>>>>> stream/bsf/etc. time base and when to use the embedded AVPacket time 
>>>>> base. So I don't think this is a good idea.
>>>>>
>>>>
>>>> I'd like to switch to using this to avoid the dance you have to do at
>>>> avformat init time, where you have to give it your packet's time base in 
>>>> the stream's time_base field, then init, which overwrites that same field 
>>>> with the time base it actually chose, and then you have to rescale your 
>>>> packets' timestamps to that time base.
>>>>
>>>
>>> That is by design, as far as I know: you set the time base to the one you 
>>> want; if the format supports it, you are happy, and if not, you convert.
>>>
>>
>> You can still keep the mechanism, since it's init time, but what's
>> the problem with letting lavf convert the timestamps for you if they don't
>> match?
>>
>
> And why do you need per-AVPacket time bases for that if your packets are all 
> in a fixed time base anyway?
>

If we don't change anything in the lavf API, that is exactly why this field 
was added: so you can rescale the packet timestamps without having to match 
them up to a stream. In my code, the muxer loop just gets packets from 
encoders via a FIFO and has to match each one to whatever stream the encoder 
was registered to use, then look up that stream's time base and rescale from 
it. Currently I just use some leftover fields as a hack, and while I could 
carry that info in an opaque_ref, it would be neater to have all the 
information needed to rescale every packet's timestamps in the packet itself.
_______________________________________________
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
