Well, I've done a benchmark.

>>> timeit.timeit("tail('/home/marco/small.txt')", globals={"tail": tail}, number=100000)
1.5963431186974049
>>> timeit.timeit("tail('/home/marco/lorem.txt')", globals={"tail": tail}, number=100000)
2.5240604374557734
>>> timeit.timeit("tail('/home/marco/lorem.txt', chunk_size=1000)", globals={"tail": tail}, number=100000)
1.8944984432309866
small.txt is a 1.3 KB text file. lorem.txt is a 1.2 GB lorem ipsum file. The performance seems good, thanks to the chunk suggestion. But the time of Linux tail surprises me:

marco@buzz:~$ time tail lorem.txt
[text]

real    0m0.004s
user    0m0.003s
sys     0m0.001s

It's strange that it's so slow. I thought it was because it decodes and prints the result, but I timed

timeit.timeit("print(tail('/home/marco/lorem.txt').decode('utf-8'))", globals={"tail": tail}, number=100000)

and got ~36 seconds. That seems quite strange to me. Maybe I got the benchmarks wrong at some point?

-- 
https://mail.python.org/mailman/listinfo/python-list