On Tue, 2004-12-14 at 02:39, Rohit wrote:
> If I change fileText = fileLike.read() to fileText =
> fileLike.readlines(), it works for a while before it gets killed from
> running out of memory.
>
> These are huge files. My goal is to analyze the content of the gzip
> file in the tar file without having to un-gzip it, if that is possible.
As far as I know, gzip is a stream compression format that can't be
decompressed in arbitrary small blocks. That is, I don't think you can
seek 500k into a 1MB file and decompress the next 100k. You'll have to
read the file progressively from the beginning, processing and
discarding data as you go. It looks like a no-brainer to me -- see
zlib.decompressobj.

Note that you _do_ have to ungzip the data; you just don't have to
store the whole decompressed result in memory or on disk at once. If
you need to do anything that requires the entire file to be loaded, or
anything that means seeking around in the file, I'd say you're out of
luck.

--
Craig Ringer

--
http://mail.python.org/mailman/listinfo/python-list
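A minimal sketch of the progressive approach described above, reading the
gzipped member out of the tar and feeding it through zlib.decompressobj
in small chunks. The function name, tar path, member name, and chunk size
are all illustrative assumptions, not anything from the original posts;
wbits=32+MAX_WBITS tells zlib to auto-detect the gzip header:

```python
import tarfile
import zlib


def stream_gzipped_member(tar_path, member_name, chunk_size=64 * 1024):
    """Yield decompressed chunks of a gzipped file stored inside a tar,
    without ever holding the whole decompressed content in memory."""
    with tarfile.open(tar_path, "r") as tar:
        fileobj = tar.extractfile(member_name)
        # 32 + MAX_WBITS makes zlib accept gzip (or zlib) headers.
        decomp = zlib.decompressobj(32 + zlib.MAX_WBITS)
        while True:
            compressed = fileobj.read(chunk_size)
            if not compressed:
                break
            chunk = decomp.decompress(compressed)
            if chunk:
                yield chunk
        # Flush any data buffered inside the decompressor.
        tail = decomp.flush()
        if tail:
            yield tail
```

You'd then analyze each chunk as it arrives (count lines, match patterns,
and so on) and discard it, instead of calling read() on the whole thing.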