Couldn't you wrap your loop below in an outer loop that calls
file.read([size]) (or readline(), or readlines([size])),
reading the file one chunk at a time and then running your re on
each chunk?
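
A rough sketch of what that chunked approach might look like (untested
against your data; finditer_chunked and its size parameter are names I
made up). The one subtlety is a word that straddles a chunk boundary,
which you can handle by holding back any trailing partial word and
prepending it to the next chunk:

```python
import re

WORD = re.compile(r'\w+')

def finditer_chunked(path, size=64 * 1024):
    """Yield \\w+ matches from a file, reading one chunk at a time.

    A run of word characters at the end of a chunk might continue in
    the next chunk, so it is carried over instead of matched early.
    """
    tail = ''
    with open(path) as f:
        while True:
            chunk = f.read(size)
            if not chunk:
                break
            buf = tail + chunk
            # Hold back a trailing partial word, if any.
            m = re.search(r'\w+\Z', buf)
            if m:
                tail = buf[m.start():]
                buf = buf[:m.start()]
            else:
                tail = ''
            for w in WORD.finditer(buf):
                yield w.group()
    if tail:
        # End of file: the leftover run is a complete word.
        yield tail
```

Memory use stays bounded by the chunk size rather than the file size,
at the cost of the small carry-over dance at each boundary.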

-ej


"Erick" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> Ack, typo. What I meant was this:
> cat a b c > blah
>
> >>> import re
> >>> for m in re.finditer('\w+', file('blah')):
>
> ...   print m.group()
> ...
> Traceback (most recent call last):
> File "<stdin>", line 1, in ?
> TypeError: buffer object expected
>
> Of course, this works fine, but it loads the file completely into
> memory (right?):
> >>> for m in re.finditer('\w+', file('blah').read()):
> ...   print m.group()
> ...
> a
> b
> c
>


-- 
http://mail.python.org/mailman/listinfo/python-list