On 23 May 2005 09:28:15 -0700, "Marcus Lowland" <[EMAIL PROTECTED]>
wrote:
Thanks for the detailed reply, John! I guess it turned out to be a bit
tougher than I originally thought :-)
Reading over your links, I think I'd better not attempt rewriting the
zipfile.py program... a little over my head :-). The best solution,
from everything I read, seems to be calling an unzip
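[Shelling out to an external unzipper is a reasonable workaround, since the child process, not Python, then holds the decompressed data. A minimal sketch in modern Python, assuming Info-ZIP's `unzip` is on the PATH; the tool name and flags are assumptions, not something given in the thread:]

```python
import shutil
import subprocess

def unzip_command(archive, dest):
    # -o: overwrite existing files without prompting
    # -d: directory to extract into
    return ["unzip", "-o", archive, "-d", dest]

def external_unzip(archive, dest):
    """Extract `archive` into `dest` with an external unzip tool, so the
    Python process never buffers the decompressed data itself."""
    if shutil.which("unzip") is None:
        raise OSError("Info-ZIP 'unzip' not found on PATH")
    subprocess.check_call(unzip_command(archive, dest))
```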
On 20 May 2005 18:04:22 -0700, "Lorn" <[EMAIL PROTECTED]> wrote:
>Ok, I'm not sure if this helps any, but in debugging it a bit I see the
>script stalls on:
>
>newFile.write (zf.read (zfilename))
>
>The memory error generated references line 357 of the zipfile.py
>program at the point of decompression.
Hi,
I made this test:
- create 12 .txt files of 100 MB each (exactly 102 400 000 bytes)
- create the file "tst.zip", which contains these 12 files (but the
resulting file is only 1 095 965 bytes...)
- delete the 12 .txt files
- try your code
And... it's OK for me.
But: the compress
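[A compression ratio like the one above, 12 × ~100 MB shrinking to about 1 MB, is only possible because the test files are highly repetitive; deflate collapses long runs of identical bytes almost completely, which also means the compressed archive size says nothing about the memory needed to decompress it. A scaled-down sketch of the same experiment; the file count matches the test, but the member size here is illustrative:]

```python
import zipfile

def build_test_zip(path, n_files=12, size=1024):
    """Write n_files identical, highly compressible members to a zip.
    `size` is scaled down from the 100 MB used in the test above."""
    with zipfile.ZipFile(path, "w", zipfile.ZIP_DEFLATED) as zf:
        for i in range(n_files):
            # All-zero content compresses to a few bytes per member.
            zf.writestr("file%02d.txt" % i, "0" * size)
```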
Ok, I'm not sure if this helps any, but in debugging it a bit I see the
script stalls on:
newFile.write (zf.read (zfilename))
The memory error generated references line 357 of the zipfile.py
program at the point of decompression:
elif zinfo.compress_type == ZIP_DEFLATED:
    if not zlib:
        r
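[The stall makes sense: `zf.read(zfilename)` returns the entire decompressed member as a single in-memory string, so a member that uncompresses to over 1 GB exhausts RAM no matter how small the archive itself is. Later Python releases added `ZipFile.open()`, which exposes a member as a file-like object that can be copied to disk in fixed-size chunks; a minimal sketch of that approach, not of the code discussed in the thread:]

```python
import shutil
import zipfile

def extract_member_streaming(zip_path, member, dest_path, chunk=64 * 1024):
    """Copy one archive member to disk in fixed-size chunks instead of
    calling ZipFile.read(), which builds the whole decompressed file
    in memory at once."""
    with zipfile.ZipFile(zip_path) as zf:
        with zf.open(member) as src, open(dest_path, "wb") as dst:
            # copyfileobj reads/writes `chunk` bytes at a time, so peak
            # memory use stays near `chunk` regardless of member size.
            shutil.copyfileobj(src, dst, chunk)
```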
Is there a limitation in Python's zipfile module that limits the
size of a file that can be extracted? I'm currently trying to extract
125 MB zip files whose members uncompress to > 1 GB, and I am
receiving memory errors. Indeed, my RAM gets maxed out during
extraction and then the script quits. I