Morning,

> Regarding 'progressive_frame', ffmpeg has 'interlaced_frame' in lieu of 
> 'progressive_frame'. I think that 'interlaced_frame' = !'progressive_frame' 
> but I'm not sure. Confirming it as a fact is a side project that I work on 
> only occasionally. H.242 defines "interlace" as solely the condition of PAL & 
> NTSC scan-fields (i.e. field period == (1/2)(1/FPS)), but I don't want to 
> pursue that further because I don't want to be perceived as a troll. :-)

I'm not entirely sure what is being discussed, but progressive_frame = 
!interlaced_frame kind of sent me back a bit. I do remember the discrepancy you 
noted in some telecined material, so I'll quickly paraphrase what we looked 
into before; hopefully it'll be relevant.

The AVFrame interlaced_frame flag isn't completely unrelated to MPEG 
progressive_frame, but it's not a simple inverse either; it's very 
context-dependent. With MPEG video, a frame is flagged interlaced_frame when it 
is not progressive_frame, and it should never be set where MPEG 
progressive_sequence is set.
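
If it helps, here's a minimal sketch of how you'd read those flags off a 
decoded frame. (How you obtain the frame is up to your decode loop, and the 
function name is mine, not an ffmpeg API.)

#include <libavcodec/avcodec.h>
#include <stdio.h>

/* Sketch: report what the interlacing metadata on a decoded AVFrame
 * claims. Hypothetical helper, not part of ffmpeg. */
static void report_interlacing(const AVFrame *frame)
{
    if (frame->interlaced_frame)
        printf("interlaced content, %s field first\n",
               frame->top_field_first ? "top" : "bottom");
    else
        printf("progressive (or at least not flagged interlaced)\n");
}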

Basically, the most you can generalize from that is that the frame stores 
interlaced video. (Yes, interlaced_frame means the frame holds interlaced 
material.) Not much help at all... but I don't think it can be helped, since an 
AVFrame accommodates many more kinds of video frame data than just the 
generations of MPEG codecs.

It used to be said (not as much anymore) that "FFmpeg doesn't output fields", 
and I think this is at least part of the reason. At the visually essential 
level, there is the "picture": a single instance in a sequence of 
frames/fields/lines or what have you, depending on the format and technology; 
the image that you actually see.

But that's a visual projection of the decoded and rendered video, or, if 
you're encoding, it's what you want to see when you decode and render your 
output. I think the term itself has a rather abstract nuance. The picture seen 
at a given presentation timestamp has either been decoded from, or can be 
encoded as, a frame picture or a pair of field pictures.
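
For what it's worth, the declared coding arrangement is exposed per stream in 
the demuxed parameters. A rough sketch, assuming an AVFormatContext you've 
already opened and probed (the function name is again just illustrative):

#include <libavformat/avformat.h>
#include <stdio.h>

/* Sketch: report the declared field order of the first video stream. */
static void print_field_order(const AVFormatContext *fmt_ctx)
{
    for (unsigned i = 0; i < fmt_ctx->nb_streams; i++) {
        const AVCodecParameters *par = fmt_ctx->streams[i]->codecpar;
        if (par->codec_type != AVMEDIA_TYPE_VIDEO)
            continue;
        switch (par->field_order) {
        case AV_FIELD_PROGRESSIVE:
            puts("progressive"); break;
        case AV_FIELD_TT:
            puts("interlaced, top coded and displayed first"); break;
        case AV_FIELD_BB:
            puts("interlaced, bottom coded and displayed first"); break;
        case AV_FIELD_TB:
            puts("interlaced, top coded first, bottom displayed first"); break;
        case AV_FIELD_BT:
            puts("interlaced, bottom coded first, top displayed first"); break;
        default:
            puts("unknown"); break;
        }
        return; /* first video stream only */
    }
}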

Both are stored in "frames", which is a red herring in the terminology, imo. 
The AVFrame that ffmpeg deals with isn't necessarily a "frame" in the sense of 
a rectangular picture frame with a width and height; it's closer to how the 
data is temporally "framed", e.g. in packets with header data, where one 
AVFrame holds one video frame (picture). The image data may well be scanned by 
macroblock rather than line by line, unless you are playing actual videotape.
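
In the current send/receive API the packet-to-frame plumbing looks roughly 
like this. (A sketch with setup and error handling omitted; fmt_ctx, dec_ctx 
and video_idx are assumed to have been set up already.)

#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
#include <inttypes.h>
#include <stdio.h>

/* Sketch: compressed data arrives framed in AVPackets; the decoder
 * hands back one AVFrame per picture. */
static void decode_all(AVFormatContext *fmt_ctx, AVCodecContext *dec_ctx,
                       int video_idx)
{
    AVPacket *pkt = av_packet_alloc();
    AVFrame *frame = av_frame_alloc();
    if (!pkt || !frame)
        goto end;

    while (av_read_frame(fmt_ctx, pkt) >= 0) {
        if (pkt->stream_index == video_idx &&
            avcodec_send_packet(dec_ctx, pkt) >= 0) {
            while (avcodec_receive_frame(dec_ctx, frame) >= 0)
                printf("picture pts=%" PRId64 " %dx%d\n",
                       frame->pts, frame->width, frame->height);
        }
        av_packet_unref(pkt);
    }
end:
    av_frame_free(&frame);
    av_packet_free(&pkt);
}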

So when interlace-scanned fields are stored in frames, it's more than both 
fields and frames being generalized into a single structure called a "frame" 
for both kinds of pictures: AVFrames, as the prefix might suggest, are also 
audio frames. And though it's not a very good analogy to field-based video, 
multiple channels of sound can be interleaved too.
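
The same struct carries either payload; which fields are populated tells you 
what's inside. Another quick sketch:

#include <libavutil/frame.h>
#include <stdio.h>

/* Sketch: one AVFrame type for both video pictures and audio frames. */
static void describe_frame(const AVFrame *frame)
{
    if (frame->width > 0 && frame->height > 0)
        printf("video picture: %dx%d\n", frame->width, frame->height);
    else if (frame->nb_samples > 0)
        printf("audio frame: %d samples per channel at %d Hz\n",
               frame->nb_samples, frame->sample_rate);
}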

I apologize, that was a poor job of quickly paraphrasing, but if there was any 
conflation of packet-like frames with picture-like frames, or of interlaced 
line scanning with macroblock scanning, I hope the above can shift your footing 
and give you another perspective, even if it's not 100% accurate.

Regards,
Ted Park
