On 02/27/2016 07:51 PM, Michael Niedermayer wrote:
On Sat, Feb 27, 2016 at 05:45:57PM +0100, Reimar Döffinger wrote:
On Sat, Feb 27, 2016 at 04:15:10PM +0100, Mats Peterson wrote:
On 02/27/2016 04:13 PM, Mats Peterson wrote:
On 02/27/2016 04:08 PM, Mats Peterson wrote:
On 02/27/2016 04:07 PM, Mats Peterson wrote:
On 02/27/2016 04:00 PM, Reimar Döffinger wrote:
On Sat, Feb 27, 2016 at 03:57:06PM +0100, Mats Peterson wrote:
On 02/27/2016 03:37 PM, Mats Peterson wrote:
I suppose this is what you mean, Reimar: treating the palette, if a
packet contains one at the end of the video data, as being exactly
AVPALETTE_SIZE bytes.
Well, actually not really.
If the palette is part of the input frame, it should be sent as side
data.
I am not sure where this variant comes from.
It might be that it should just be written as is.
Or, even if the palette needs to be split, it might be
necessary to auto-detect the palette size via
packet size - (width * height * bits per pixel) / 8.
But as said, I am fairly unclear on what case that
code is supposed to handle.
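In code, that detection might look roughly like this (a sketch only;
detect_trailing_palette and its parameters are illustrative names, not
anything in the tree):

#include <libavcodec/avcodec.h>

/* Hypothetical helper: guess how many bytes of palette, if any,
 * follow the pixel data in a raw video packet, per the formula
 * packet size - (width * height * bits per pixel) / 8. */
static int detect_trailing_palette(const AVPacket *pkt,
                                   int width, int height,
                                   int bits_per_pixel)
{
    int64_t video_size = (int64_t)width * height * bits_per_pixel / 8;

    if (pkt->size <= video_size)
        return 0;                          /* nothing appended */
    return (int)(pkt->size - video_size);  /* assumed palette size */
}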
I myself agree that it should normally be stored in a side data
packet, and that this is a somewhat weird construction. It probably
originates with the nut format, which stores raw palettized data after
the video data in the packets. Anyway, I have accepted the facts. For
the record, the new ff_reshuffle_raw_rgb() function written by Michael
in lavf/rawutils.c, which aligns strides properly for AVI and
QuickTime, will set a CONTAINS_PAL flag if the packet size is larger
than the actual video data. He has hardcoded the palette size to 1024
bytes in that file.
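Roughly, that check amounts to something like the following sketch
(packet_contains_pal, stride and height are illustrative names, not
the actual code in lavf/rawutils.c):

#include <libavcodec/avcodec.h>

/* Sketch: a packet "contains a palette" when it is larger than the
 * pixel data proper; the trailing AVPALETTE_SIZE (1024) bytes are
 * then taken to be the palette. */
static int packet_contains_pal(const AVPacket *pkt, int stride,
                               int height)
{
    return pkt->size > (int64_t)stride * height;
}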
Mats
The nut format stores the PALETTE after the video data in the packets,
nothing else :)
In any case, on muxing, the packets will have the palette after the
video data, whether it's AVI or QuickTime. Neither avienc.c nor
movenc.c uses any side data packets for the palette. Michael's
intention has been to enable palette switching in the middle of the
stream, hence the storage of the palette in each packet; AVI supports
it by using the 'xxpc' chunks in the video data. This is also
implemented by now.
Mats
Not that it couldn't be done with side data packets, though.
If it doesn't support side data, then the muxers are plain broken.
If the nut muxer stores the palette by appending it to the frames,
then the demuxer should split it out into side data.
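For illustration, such a split could look roughly like this on the
demuxer side (a sketch; split_trailing_palette is a made-up helper,
though av_packet_new_side_data() and av_shrink_packet() are the real
APIs):

#include <string.h>
#include <libavcodec/avcodec.h>

/* Hypothetical helper: move a palette appended after the frame data
 * into AV_PKT_DATA_PALETTE side data and shrink the packet. */
static int split_trailing_palette(AVPacket *pkt)
{
    uint8_t *pal;

    if (pkt->size <= AVPALETTE_SIZE)
        return 0; /* no room for pixel data plus a palette */

    pal = av_packet_new_side_data(pkt, AV_PKT_DATA_PALETTE,
                                  AVPALETTE_SIZE);
    if (!pal)
        return AVERROR(ENOMEM);

    memcpy(pal, pkt->data + pkt->size - AVPALETTE_SIZE, AVPALETTE_SIZE);
    av_shrink_packet(pkt, pkt->size - AVPALETTE_SIZE);
    return 0;
}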
Note that I am absolutely not a fan of this side data stuff,
but since we already decided to do it like that, that's
the way we need to go; not randomly doing it one way in one
place and differently in another, as that just makes for an
unusable API.
The only reasons to support "palette appended to data" are
1) There are some existing users of the FFmpeg API that rely on it.
Ideally we should then change it so it works for all muxers, or,
failing that, warn that this is a deprecated way of doing things.
2) There are file formats that store it that way and we cannot easily
split it into side data. Not sure that can really happen.
Palettes are a bit annoying; there are quite a few things to consider.
The chain generally is

demuxer -----> decoder -----> encoder -----> muxer
OR
demuxer -----------------------------------> muxer

Thus there are 2 interfaces: the demuxer -> muxer interface and the
decoder -> encoder interface.

For the decoder -> encoder interface, the palette is in AVFrame.data[1]
and the 8-bit indexes are in AVFrame.data[0] as a width (stride) x
height array. That part is still easy.
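Concretely, a PAL8 AVFrame at this interface looks roughly like this
(a sketch; walk_pal8_frame is just an illustrative name):

#include <stdint.h>
#include <libavutil/frame.h>

/* Sketch: reading a PAL8 AVFrame as decoders hand it to encoders. */
static void walk_pal8_frame(const AVFrame *frame)
{
    /* data[1] holds 256 packed 32-bit ARGB palette entries,
     * i.e. AVPALETTE_SIZE bytes. */
    const uint32_t *palette = (const uint32_t *)frame->data[1];

    for (int y = 0; y < frame->height; y++) {
        /* data[0] holds 8-bit palette indexes, one row per
         * linesize[0] bytes of stride. */
        const uint8_t *row = frame->data[0] + y * frame->linesize[0];
        for (int x = 0; x < frame->width; x++) {
            uint32_t argb = palette[row[x]];
            (void)argb; /* use the pixel ... */
        }
    }
}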
The demuxer -> muxer interface is more complex.
In the case of non-raw, that is compressed, codecs, the palette can be
in a codec-specific and inseparable format in AVPacket.data, together
with the rest of the compressed image.
But it's also possible that there is no palette in AVPacket.data and
that it's instead stored in the AVPacket's side data, which would be
filled from container-specific chunks like AVI's 'xxpc' chunks, or in
the global extradata.
So even without rawvideo, both the side data and the non-side-data
cases already exist.
Additionally, keyframe AVPackets must, together with the global
extradata, contain a full palette to be decodable.
Some containers support storing "partial palettes"; for example,
AVI's 'xxpc' chunks can do that. So one should store a full palette
at keyframes, but subsequent non-keyframes should only store the part
that differs from the previous palette.
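Computing such a partial update could look roughly like this (a
sketch; find_palette_diff is a made-up helper, and the actual 'xxpc'
chunk layout is not shown):

#include <stdint.h>
#include <libavutil/pixfmt.h> /* AVPALETTE_COUNT == 256 */

/* Hypothetical helper: find the contiguous range of palette entries
 * that differ between two 256-entry palettes.
 * Returns 0 if nothing changed, 1 otherwise. */
static int find_palette_diff(const uint32_t prev[AVPALETTE_COUNT],
                             const uint32_t cur[AVPALETTE_COUNT],
                             int *first, int *count)
{
    int lo = 0, hi = AVPALETTE_COUNT - 1;

    while (lo <= hi && prev[lo] == cur[lo])
        lo++;
    if (lo > hi)
        return 0;             /* palettes are identical */
    while (prev[hi] == cur[hi])
        hi--;

    *first = lo;
    *count = hi - lo + 1;     /* only these entries need storing */
    return 1;
}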
Container-specific compression like the 'xxpc' chunks would
semantically fit best into the muxer.
Should the palette of rawvideo AVPackets use data[] or side data?
Honestly, I do not know, but I don't think it makes a big difference;
even supporting both likely only adds 3-5 lines of code or so.
It's more a philosophical question:
is the palette, like chroma or alpha, part of the frame?
Why is alpha not side data if the palette is?
Or:
the palette is side data for a few (not many) compressed pal8 formats,
so it can be for rawvideo too.
There are arguments both ways; I am not a philosopher, so I don't
really have an opinion on this ...
About the existing API: I suspect there aren't many applications that
use FFmpeg's demuxers without the decoders for raw pal8. I might of
course be wrong, but this seems a rather uncommon case of an uncommon
case. And on the muxer side it was all broken before Mats ...
[...]
Thanks for chiming in, even if I didn't quite grasp all of it ;)
--
Mats Peterson
http://matsp888.no-ip.org/~mats/
_______________________________________________
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel