>Ah, so it's time and not phase? Renaming the #defines to make that clearer 
>would be nice. Other than that, this isn't a huge issue
I assume the start of the packet is synced with the start of "a" video 
frame; I don't think there is any use case where PCM and video are not 
synced, at least where Dolby E is concerned.
So DOLBY_E_PHASE_MIN/MAX are indeed phases and, I think, well named.
Concerning S337M_PHASE_PROBE_MIN, which I renamed to 
S337M_PROBE_GUARDBAND_MIN_BYTES: this is still a phase, a kind of safeguard 
phase, meant to make a wrong detection even more unlikely. The question is 
whether this phase should be specified in milliseconds or in raw bytes (or, 
one could also say, in a number of samples, whether they are 16 or 24 bits 
wide). There may be other approaches, but if we consider strictly the risk 
of a statistical match, that risk is related to the number of raw bytes, 
which is why I chose raw bytes in the end; the name 
S337M_PROBE_GUARDBAND_MIN_BYTES therefore looks good to me.
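To illustrate why milliseconds would be the more ambiguous unit: the same 
guard band duration maps to a different number of raw bytes depending on the 
word size, while the statistical-match exposure scales directly with bytes. 
A minimal sketch (my own illustration, not code from the patch; 48 kHz 
stereo is an assumption):

#include <stdio.h>

/* Convert a guard band given in milliseconds to raw bytes for a stereo
 * SMPTE 337M carrier.  Illustration only: 48 kHz stereo is assumed, and
 * bytes_per_sample is 2 for 16-bit words or 3 for 24-bit words. */
static int guardband_ms_to_bytes(int ms, int bytes_per_sample)
{
    const int sample_rate = 48000;
    const int channels    = 2;
    return ms * sample_rate / 1000 * channels * bytes_per_sample;
}

int main(void)
{
    /* The same 1 ms guard band covers a different amount of raw data
     * depending on the word size. */
    printf("1 ms @ 16-bit: %d bytes\n", guardband_ms_to_bytes(1, 2)); /* 192 */
    printf("1 ms @ 24-bit: %d bytes\n", guardband_ms_to_bytes(1, 3)); /* 288 */
    return 0;
}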
Furthermore, this S337M_PROBE_GUARDBAND_MIN_BYTES is zero for now (so the 
unit does not matter yet), and in my experience that is generally required, 
as some programs do have a zero-phase Dolby E (and, by the way, it appears 
some software doesn't like it). Taking the worst-case scenario (16 bits), 
the sync code takes 4 bytes, so the raw probability of a false match is only 
1/2^32 (and even less in practice, since broadcast programs usually start 
with a little bit of silence). So maybe I took too much care over this, but 
at least it is explicit in the code, I think.
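For reference, a rough estimate of the false-detection risk over a probe 
window, assuming the 16-bit preamble words Pa = 0xF872 / Pb = 0x4E1F and 
treating each word-aligned position as an independent 1/2^32 trial (a 
simplification, and my own illustration rather than code from the patch):

#include <stdio.h>

int main(void)
{
    /* Probability that one random 4-byte, word-aligned position matches the
     * 16-bit S337M preamble (Pa = 0xF872, Pb = 0x4E1F): 1 / 2^32. */
    const double p_match = 1.0 / 4294967296.0;

    /* Expected number of spurious matches over a probe window of N bytes,
     * scanning at 2-byte word alignment: roughly (N / 2) * p_match. */
    int window_bytes = 1 << 20;                  /* e.g. a 1 MiB probe buffer */
    double expected  = (window_bytes / 2) * p_match;

    printf("expected false syncs in %d bytes: %g\n", window_bytes, expected);
    return 0;
}

The expected count grows linearly with the number of raw bytes scanned, 
which is the sense in which the statistical-match risk is "related to the 
number of raw bytes".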

Nicolas