On 28.03.2015 11:53, Nicolas George wrote:
On septidi, 7 Germinal, year CCXXIII, Lukasz Marek wrote:
But this time I don't understand your comments; could you elaborate?
What's wrong, and what can I do?

What I am saying is that there are a lot of different cases where we want to
read archives (not only zip, see my previous mail, but that does not matter
for now) on the fly with FFmpeg, and I am not sure your proposal is
convenient for all these use cases.

You implement it as a protocol, to access a single file in the archive. If
it was gzip, that would be fine, because it is a stream compression format.
In fact, I now realize we should have stackable protocols for all common
stream compression tools.
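
For illustration, stacking on the command line could look like this (the gzip: protocol below is hypothetical; the existing cache: protocol already nests over another protocol in the same way):

./ffplay cache:http://example.com/recording.mkv
./ffplay gzip:http://example.com/capture.ts.gz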

But a zip file is not just a compressed stream: it contains structure and
several files. In the FFmpeg architecture, that may map better to a demuxer:
each file exposed as a stream or as an attachment.

This is yet another case where the distinction between protocols and formats
is not entirely clear. If you think about it, an input protocol is just a
demuxer that outputs a single stream of AVMEDIA_TYPE_DATA packets. Of
course, for "normal" protocols and formats, like reading a Matroska file
from a plain file, the separation makes sense. But with more complex and
tied-in protocols and formats, it makes things actually harder. See the RTP
issues for example.
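
To make the parallel concrete, here is a minimal C sketch (not code from FFmpeg; the datawrap_* names are made up, and st->codecpar would be st->codec in 2015-era trees) of a demuxer whose only job is to forward the bytes of the underlying protocol as AVMEDIA_TYPE_DATA packets:

#include <libavformat/avformat.h>

#define DATAWRAP_PKT_SIZE 4096

/* Expose the whole byte stream of the underlying protocol as one data stream. */
static int datawrap_read_header(AVFormatContext *s)
{
    AVStream *st = avformat_new_stream(s, NULL);
    if (!st)
        return AVERROR(ENOMEM);
    st->codecpar->codec_type = AVMEDIA_TYPE_DATA; /* st->codec->codec_type on older trees */
    return 0;
}

/* Each read_packet call just forwards a chunk of raw bytes from s->pb. */
static int datawrap_read_packet(AVFormatContext *s, AVPacket *pkt)
{
    return av_get_packet(s->pb, pkt, DATAWRAP_PKT_SIZE);
}

Anything beyond this, i.e. any structure in the bytes, is exactly what makes zip look more like a format than a protocol.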

I have not yet looked closely enough at it, but I suspect the directory
listing API that you have just landed may be the start of a bridge between
the two: a protocol may no longer be just an API for accessing a single
stream but a whole filesystem. Then we can have demuxers that use it. I
suppose one of the most pressing tasks would be to have the image2 demuxer
use the directory listing API, would it not?
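
For reference, here is a minimal sketch of how a caller could walk a listing through the new API (assuming the avio_open_dir()/avio_read_dir()/avio_close_dir() entry points from the patches that just landed; error handling trimmed):

#include <inttypes.h>
#include <stdio.h>
#include <libavformat/avio.h>

static int list_url(const char *url)
{
    AVIODirContext *dir = NULL;
    AVIODirEntry *entry = NULL;
    int ret;

    /* Works for any protocol that implements the directory callbacks. */
    ret = avio_open_dir(&dir, url, NULL);
    if (ret < 0)
        return ret;

    /* avio_read_dir() hands back a NULL entry once the listing is exhausted. */
    while ((ret = avio_read_dir(dir, &entry)) >= 0 && entry) {
        printf("%s\t%"PRId64" bytes\n", entry->name, entry->size);
        avio_free_directory_entry(&entry);
    }

    avio_close_dir(&dir);
    return ret;
}

An image2 demuxer built on top of this would essentially replace its sequence/glob scanning with such a loop.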

I know the line between protocol and format is very thin.

I will try this libarchive first and do some tests. Your approach may fall apart if the compression libraries don't support parallel compression/decompression (I mean writing or reading several files of a single archive at the same time). I would be very surprised if at least writing did not work, but I will test it; there is no point in guessing here. Of course, making it a protocol doesn't solve that potential issue either, but it may be less confusing for the user.
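
For what it's worth, libarchive's read API is strictly sequential: you iterate entry headers in archive order and can only read the data of the entry you are currently positioned on, roughly like this (a sketch, not tested against any FFmpeg integration):

#include <stdio.h>
#include <archive.h>
#include <archive_entry.h>

static int dump_entries(const char *path)
{
    struct archive *a = archive_read_new();
    struct archive_entry *entry;
    char buf[8192];

    archive_read_support_filter_all(a);
    archive_read_support_format_all(a);
    if (archive_read_open_filename(a, path, sizeof(buf)) != ARCHIVE_OK) {
        archive_read_free(a);
        return -1;
    }

    /* Entries come back one by one, in archive order. */
    while (archive_read_next_header(a, &entry) == ARCHIVE_OK) {
        printf("entry: %s\n", archive_entry_pathname(entry));
        /* The data of the current entry must be consumed (or skipped)
         * before moving on to the next header. */
        while (archive_read_data(a, buf, sizeof(buf)) > 0)
            ;
    }

    archive_read_free(a);
    return 0;
}

So reading several entries of one archive truly in parallel would need several independent archive handles on the same file (or custom read callbacks), which is exactly the kind of thing worth testing before deciding.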

I wonder if there is another solution: zip could remain a protocol, as it is now; that lets it benefit from the listing API and gives other demuxers the flexibility to benefit from it too. There could also be a "directory" demuxer which would use that API as well and could possibly serve the entries as streams, the way you describe. That demuxer could also handle directories over any protocol that supports the listing API.
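
To illustrate the split (the "dir" demuxer name and the second command below are purely hypothetical):

./ffplay zip://archive.zip/clip.avi
./ffplay -f dir zip://archive.zip

The first keeps zip as a protocol and opens one entry directly; the second would force the hypothetical directory demuxer, which would list the entries through the directory API and expose each one as a stream.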

Personally I don't favor either approach, but if I had to decide, I would probably pick the protocol.

So my actual proposal about this patch is: keep it near at hand, but do not
apply it; rather, use it as a test case to see how much we can do with the
new APIs.

(Well, I do not oppose actually applying it. But if so, let us make it very
clear that this is something really experimental. Not experimental as in "it
probably works poorly", but experimental as in "we may change it completely
tomorrow because we had another idea; we will not bother AT ALL with
compatibility for now".)

I am in no rush to merge this ASAP. At the very least, libarchive is worth trying.

I think you misunderstood this. There is no documentation yet, but reading
files by index is only a fallback for when the user doesn't specify a file
explicitly. For example:

Thanks for correcting me, I really missed that.

./ffplay zip://zipfile.zip/aaa.avi

Ok, but that leads me to another question: what does this do:

ffplay zip:///tmp/outer.zip/tmp/inner.zip/tmp/data.bin

libzip can't handle it (the same way it can't open files through other protocols); maybe libarchive will do better.
