On 2011-08-04, Torsten Curdt wrote:

>> ZipFile relies on RandomAccessFile, so an archive can't be bigger than
>> the maximum size supported by RandomAccessFile. In particular, the seek
>> method expects a long as its argument, so the hard limit would be an
>> archive size of 2^63-1 bytes. In practice I expect RandomAccessFile not
>> to support files that big on many platforms.

> Yeah ... let's cross that bridge when people complain ;)

With that I can certainly live.
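For the record, the ceiling falls straight out of the method signature.
A minimal, throwaway snippet (file name made up, this is not code from
Compress):

    import java.io.IOException;
    import java.io.RandomAccessFile;

    public class SeekLimitDemo {
        public static void main(String[] args) throws IOException {
            // Long.MAX_VALUE == 2^63 - 1 == 9223372036854775807
            System.out.println("hard limit in bytes: " + Long.MAX_VALUE);

            RandomAccessFile raf = new RandomAccessFile("archive.zip", "r");
            try {
                // seek takes a long, so an offset beyond 2^63 - 1 cannot
                // even be expressed, let alone read.
                raf.seek(0L);
            } finally {
                raf.close();
            }
        }
    }

Anything layered on top of seek(long) inherits that limit, ZIP64 or not.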
>> For the streaming mode offsets are currently stored as longs but that
>> could be changed to BigIntegers easily so we could reach 2^64-1 at the
>> expense of memory consumption and maybe even some performance issues
>> (the offsets are not really used in calculations so I don't expect any
>> major impact).

> No insights on the implementation but that might be worth changing so
> it's in line with the ZipFile impl.

ZipFile is already limited to longs via RandomAccessFile.

>> I'm confident that even I would manage to write an efficient singly
>> linked list that is only ever appended to and that is iterated over
>> exactly once from head to tail.

> +1 for that then :)

Lasse's post showing that I'd need 100+ GB of RAM to take advantage of my
bigger LinkedList made me drop that plan 8-)

If anybody is really dealing with archives that big, they likely don't use
Commons Compress, and if they do, then support for archives split into
multiple files might be more important.

Stefan
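P.S. For the curious, the kind of list I had in mind was roughly the
following (an untested sketch with made-up names, not code from
Compress):

    /**
     * Append-only singly linked list of entry offsets: O(1) append via
     * a tail reference, one forward pass from head to tail, nothing else.
     */
    final class OffsetList {

        private static final class Node {
            final long offset;
            Node next;

            Node(long offset) {
                this.offset = offset;
            }
        }

        private Node head;
        private Node tail;

        /** Appends in constant time; the list is never modified otherwise. */
        void add(long offset) {
            Node node = new Node(offset);
            if (head == null) {
                head = node;
            } else {
                tail.next = node;
            }
            tail = node;
        }

        /** Callback for the single head-to-tail pass. */
        interface OffsetVisitor {
            void visit(long offset);
        }

        void forEach(OffsetVisitor visitor) {
            for (Node node = head; node != null; node = node.next) {
                visitor.visit(node.offset);
            }
        }
    }

The arithmetic is still unforgiving, though: on a typical 64-bit JVM each
node costs roughly 24 to 32 bytes (object header, the long, the next
reference, padding), so holding more than 2^32 offsets needs on the order
of 100 GB of heap, which is exactly the wall Lasse pointed out.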