On octidi, 8 Germinal, year CCXXIII, Lukasz Marek wrote:
> I will try to use libarchive first and do some tests. Your approach may
> collapse if the compression libraries don't support parallel
> compression/decompression (I mean writing or reading several files from a
> single archive file). I would be much surprised if at least writing did
> not work.

This is a likely issue, but fortunately it would not prevent all use cases.

> I wonder if there is another solution: zip could stay a protocol as it is
> now; that lets it benefit from the list API and gives other demuxers the
> flexibility to benefit from it too. There could also be a "directory"
> demuxer which would use the same API and could possibly serve streams your
> way. That demuxer could also handle directories over any protocol that
> supports that API.

That was the kind of idea that I had. But I believe that to get that working
reliably, we will need to extend the directory listing callbacks to allow a
URL context to create new URL contexts, so that remote files can be opened
without establishing a new connection (which will also be necessary for
network servers). Some kind of VFS API, then.

> >ffplay zip:///tmp/outer.zip/tmp/inner.zip/tmp/data.bin
> libzip can't handle it (the same way it cannot handle files accessed via
> protocols); maybe libarchive will do better.

I think you misunderstood the question. I was not asking whether it would be
able to decode nested files, but how your code splits nested paths: would it
try to find /tmp/inner.zip/tmp/data.bin inside /tmp/outer.zip, or
/tmp/data.bin inside /tmp/outer.zip/tmp/inner.zip (assuming someone was
stupid enough to name a directory dot-zip)?

Regards,

-- 
  Nicolas George


_______________________________________________
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
