Hi Anamitra,

On Sat, Feb 29, 2020 at 04:50:23AM +0000, Anamitra Ghorui wrote:
Hello,
I have been reading through the parsing API and other things and here's what
I've managed to gather (I will be ignoring overruns in these functions for now).
Please tell me if I am right or wrong:

1. As long as the parse function determines next == END_NOT_FOUND,
  ff_combine_frame will keep increasing the ParseContext index by buf_size.
  Once next is no longer END_NOT_FOUND, buf_size will be set to index + next.

  The bytes from the input chunks are copied into the buffer of the ParseContext
  during this process.

  While next == END_NOT_FOUND and the data being decoded is a video stream, we
  cannot determine the end of the frame yet, and hence poutbuf is set to NULL
  and poutbuf_size to zero by the function. However, this doesn't really matter
  for still images since they have a single frame.

2. av_parser_parse2 will check whether poutbuf_size is greater than zero.
  If it is, the next frame start offset will be advanced, and the frame offset
  field will be set to the previous value of the next frame offset in
  AVCodecParserContext.

3. In https://ffmpeg.org/doxygen/trunk/decode_video_8c-example.html
  pkt->size will stay zero as long as a complete frame has not been returned,
  so decode() will not be triggered until a frame has been found.

Yes, this is all correct. Good work looking at different parsers to understand this.
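
To make the mechanism concrete, here is a minimal sketch of how a typical parse() callback wires this together (modeled on existing parsers; flif16_parse() and find_frame_end() are hypothetical placeholders, and error handling is omitted):

    /* Sketch only: assumes a codec-specific find_frame_end() that returns
     * the offset of the next frame boundary, or END_NOT_FOUND. */
    #include "parser.h"

    static int flif16_parse(AVCodecParserContext *s, AVCodecContext *avctx,
                            const uint8_t **poutbuf, int *poutbuf_size,
                            const uint8_t *buf, int buf_size)
    {
        ParseContext *pc = s->priv_data;
        int next = find_frame_end(pc, buf, buf_size);

        if (ff_combine_frame(pc, next, &buf, &buf_size) < 0) {
            /* No complete frame yet: the chunk was appended to pc->buffer
             * and pc->index grew by buf_size; return nothing for now. */
            *poutbuf      = NULL;
            *poutbuf_size = 0;
            return buf_size;
        }

        /* Frame boundary found: buf/buf_size now cover index + next bytes. */
        *poutbuf      = buf;
        *poutbuf_size = buf_size;
        return next;
    }

On the caller side, the loop in decode_video.c then only calls decode() when av_parser_parse2() hands back a non-zero pkt->size, exactly as you described in point 3.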


Now, regarding FLIF16:
1. The pixels of the image are stored in this format (non-interlaced):
(see https://flif.info/spec.html#_part_4_pixel_data)
     _______________________________________________
    |     _________________________________________ |
    |    |     ___________________________________ ||
    |    |    |     _____________________________ |||
    |    |    |    |                             ||||
    |    |    | f1 | x1 x2 x3 ..... xw           ||||
    |    |    |    |                             ||||
    |    | y1 |    |_____________________________||||
    | c1 |    |                ...                |||
    |    |    |     _____________________________ |||
    |    |    |    |                             ||||
    |    |    | fn | x1 x2 x3 ..... xw           ||||
    |    |    |    |                             ||||
    |    |    |    |_____________________________||||
    |    |    |                                   |||
    |    |    |___________________________________|||
    |    |                 ...                     ||
    |    |     ___________________________________ ||
    |    |    |     _____________________________ |||
    |    |    |    |                             ||||
    |    |    | f1 | x1 x2 x3 ..... xw           ||||
    |    |    |    |                             ||||
    |    | yh |    |_____________________________||||
    |    |    |               ...                 |||
    |    |    |     _____________________________ |||
    |    |    |    |                             ||||
    |    |    | fn | x1 x2 x3 ..... xw           ||||
    |    |    |    |                             ||||
    |    |    |    |_____________________________||||
    |    |    |                                   |||
    |    |    |___________________________________|||
    |    |_________________________________________||
    |                                               |
    |                      ...                      |
    | cn                                            |
    |_______________________________________________|

where: ci: color channel
      yi: pixel row
      fi: frame number
      xi: individual pixel

Ah, FLIF is a bit wacky. I can see why this might be helpful for decoding partial images on the fly, but I don't think it will be easy, or even possible, to do with the current AVFrame API.
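
To spell out what your diagram implies, the decoder effectively walks the pixel data in this loop order (a sketch of the ordering only, not actual FLIF code; num_channels, height, num_frames, width, pixel[] and decode_next_value() are placeholders):

    for (int c = 0; c < num_channels; c++)        /* c1 .. cn */
        for (int y = 0; y < height; y++)          /* y1 .. yh */
            for (int f = 0; f < num_frames; f++)  /* f1 .. fn */
                for (int x = 0; x < width; x++)   /* x1 .. xw */
                    pixel[f][c][y][x] = decode_next_value();

so the data belonging to a single frame is scattered across the whole pixel-data section rather than stored contiguously.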


As you can see, the frames are not stored in a contiguous manner. How should I
be getting a frame here? It doesn't seem possible without either putting the
whole pixel data chunk in memory, or allocating space for all the frames at
once and then filling them in.

I guess the parser will then have to either return the whole file as a single
buffer to the decoder function, or manage frames by itself through its own
data structures and helper functions.

What should I be doing here?

For now, go with the approach of reading all the data into a single AVPacket. This does mean the parser isn't splitting frames. We can figure out how to do progressive decoding, as FLIF intends, later.
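
Concretely, the "single AVPacket" approach amounts to never reporting a frame boundary from the parse callback: ff_combine_frame() keeps buffering, and on the final flush call (buf_size == 0) it hands everything back in one piece. The relevant part of the (hypothetical) flif16_parse() from above then reduces to roughly this:

    int next = END_NOT_FOUND;   /* never split; ff_combine_frame() flushes
                                 * the whole buffer when buf_size == 0     */

    if (ff_combine_frame(pc, next, &buf, &buf_size) < 0) {
        *poutbuf      = NULL;   /* still buffering */
        *poutbuf_size = 0;
        return buf_size;
    }
    *poutbuf      = buf;        /* at EOF: the entire file in one packet */
    *poutbuf_size = buf_size;
    return next;                /* av_parser_parse2() clamps negatives to 0 */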


2. The FLIF format spec refers to a thing known as the 24-bit RAC. Is it an
  abbreviation for 24-bit RAnge Coding?
  (https://en.wikipedia.org/wiki/Range_encoding)
  What does the "24-bit" mean? Is it the size of each symbol that is processed
  by the range coder?


Yes, RAC refers to Range Coding [1]. You can try to match what the reference codec does in [2] with the explanation in [1].

"24 bit" here is the working range of the entropy coder.

In range coding, the sequence of all coded symbols is effectively one arbitrarily long integer, which cannot be held in working memory, so we define a range (16-24 bits in FLIF's case) within which we always keep our working variable. Whenever it falls out of that range during encoding, we write the settled top bits to the stream and shift the variable to bring it back into range (renormalization).

12 bits is the precision with which the probabilities are stored here.
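
If it helps to visualize, here is a rough sketch of the decoding side of such a coder (illustrative only; the actual FLIF RAC in [2] differs in its details, and initialization of the state is omitted):

    #include <stdint.h>

    #define RAC_MIN   (1u << 16)  /* renormalize when range drops below this */
    #define PROB_BITS 12          /* probabilities ("chances") are 12-bit    */

    typedef struct {
        uint32_t range, low;      /* kept inside the 24-bit working window */
        const uint8_t *bytestream;
    } Rac;

    /* Decode one binary symbol whose probability of being 0 is prob/4096. */
    static int rac_get_bit(Rac *rc, uint16_t prob /* 1..4095 */)
    {
        /* split the current interval in proportion to prob */
        uint32_t split = (uint32_t)(((uint64_t)rc->range * prob) >> PROB_BITS);
        int bit;

        if (rc->low >= split) {   /* value lies in the upper sub-interval */
            rc->low   -= split;
            rc->range -= split;
            bit = 1;
        } else {                  /* lower sub-interval */
            rc->range  = split;
            bit = 0;
        }

        /* renormalize: pull in new bytes until range is back above 2^16 */
        while (rc->range < RAC_MIN) {
            rc->range <<= 8;
            rc->low    = (rc->low << 8) | *rc->bytestream++;
        }
        return bit;
    }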

For the sake of your qualification task, just use a pseudo-function RAC() wherever you feel the need, as Yasiru should be working on its implementation.

I started going through the reference implementation of FLIF. I'll see what I
can make out of it. The decoder itself is under the Apache license, so we could
refer to it or borrow some things from it: https://github.com/FLIF-hub/FLIF.

Thanks


Cheers

[1]: https://people.xiph.org/~tterribe/notes/range.html
[2]: https://github.com/FLIF-hub/FLIF/blob/master/src/maniac/rac.hpp

--
Jai (darkapex)