Hi Samuel,

> * I see that the Archive interface
>   (http://commons.apache.org/sandbox/compress/apidocs/org/apache/commons/compress/Archive.html)
>   and the ArchiverFactory
>   (http://commons.apache.org/sandbox/compress/apidocs/org/apache/commons/compress/ArchiverFactory.html)
>   deal a lot with files. Wouldn't it be better for those APIs to rely only (or mostly)
>   on I/O streams? I believe the API shouldn't assume that a file system is available
>   (where that makes sense). To cite one of my colleagues, "files are evil" ;-)
>   For example, I have a particular use case where I would like to extract a zip file
>   from within a war file. I obviously can't rely on files being available, as nothing
>   tells me whether the war is deployed exploded or not. Getting a stream to the given
>   zip file, however, wouldn't be an issue. What do you think?
>
> * The Archive interface has the getEntryIterator() method
>   (http://commons.apache.org/sandbox/compress/apidocs/org/apache/commons/compress/Archive.html#getEntryIterator()).
>   However, this method will not iterate through the entries of an existing archive,
>   only through added entries. I believe it should somehow allow me to iterate through
>   all entries, existing and added. Again, what do you think?
We already thought about that too and have started to redesign the API. You can take a look at the new API via Git here: http://projects.grobmeier.de/compress-redesign.git/

At the moment this API only uses streams. Later we may discuss adding the file-based classes as some kind of helper somewhere. If so, it's worth thinking about your getEntryIterator idea. I will put it on my TODO list :-)

Best regards and thanks for your comments,
Chris.
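P.S.: For the nested-zip use case you describe, plain streams are already enough today. Below is a minimal sketch using only java.util.zip from the JDK (so independent of the sandbox or redesigned API); the class and method names are just placeholders, and the war stream could come from e.g. ServletContext.getResourceAsStream.

import java.io.IOException;
import java.io.InputStream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;

public class NestedZipSketch {

    // Lists the entries of a zip that is itself stored as an entry inside a
    // war, working purely on streams (no file system access needed).
    // 'warStream' could come from ServletContext.getResourceAsStream(...),
    // 'nestedName' is the path of the zip inside the war, e.g. "WEB-INF/data.zip".
    public static void listNestedZip(InputStream warStream, String nestedName) throws IOException {
        ZipInputStream war = new ZipInputStream(warStream);
        try {
            ZipEntry entry;
            while ((entry = war.getNextEntry()) != null) {
                if (entry.getName().equals(nestedName)) {
                    // A second ZipInputStream layered on the first reads the
                    // nested zip's entries straight from the outer entry's bytes.
                    ZipInputStream nested = new ZipInputStream(war);
                    ZipEntry inner;
                    while ((inner = nested.getNextEntry()) != null) {
                        System.out.println(inner.getName());
                    }
                    return;
                }
            }
        } finally {
            war.close();
        }
    }
}

The getNextEntry() loop above is also roughly what iterating over all existing entries of an archive looks like when everything is stream based, which touches on your getEntryIterator point as well.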