Steven D'Aprano <steve+comp.lang.pyt...@pearwood.info> writes:

> while b:
>     buffer.append(b)
This looks bad because of the per-element overhead of a list, and also
because it reads one character at a time.  If it's bytes that you're
reading, try using bytearray instead of list:

    def chunkiter(f, delim):
        buf = bytearray()
        bufappend = buf.append  # avoid an attribute lookup on each call
        fread = f.read          # similar
        while True:
            c = fread(1)
            if not c:           # EOF: emit any leftover partial chunk
                                # (also avoids looping forever, since
                                # '' in delim is True for a string delim)
                if buf:
                    yield str(buf)
                return
            bufappend(c)
            if c in delim:
                yield str(buf)
                del buf[:]

(That's Python 2 code; on Python 3, fread(1) returns bytes, so you'd
append c[0] and yield bytes(buf) instead.)

If that's still not fast enough, you could do the hackier thing of
reading large blocks of input at once (f.read(4096) or whatever),
splitting each block on the delimiter set with re.split, yielding the
resulting chunks, and refilling the buffer when you don't find any more
delimiters; see the sketch below.  That doesn't tell you which
delimiter actually matched: do you need that?  Maybe there is a nicer
way to get at it than adding up the lengths of the chunks to index into
the buffer.  How large do you expect the chunks to be?
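Here's a rough, untested sketch of that chunked approach, assuming a
text-mode file and single-character delimiters (the name chunkiter2 and
the blocksize parameter are just made up for illustration).  Wrapping
the character class in a capturing group makes re.split keep the
delimiters in its output, which also answers the "which delimiter
matched" question without any index arithmetic:

    import re

    def chunkiter2(f, delims, blocksize=4096):
        # One regex matching any single delimiter character; the
        # parentheses make re.split return the delimiters as well.
        pat = re.compile('([%s])' % re.escape(delims))
        pending = ''
        while True:
            block = f.read(blocksize)
            if not block:
                break
            parts = pat.split(pending + block)
            # The last element is data with no trailing delimiter yet;
            # hold it back until the next read refills the buffer.
            pending = parts.pop()
            # parts now alternates chunk, delimiter, chunk, delimiter...
            for chunk, delim in zip(parts[::2], parts[1::2]):
                yield chunk + delim
        if pending:
            yield pending  # trailing data with no final delimiter

With delims=',\n' and input 'a,b\nc' that should yield 'a,', 'b\n' and
finally 'c', the same chunks (delimiter included) that chunkiter above
produces.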