On Sun, 8 May 2022 at 22:02, Chris Angelico <ros...@gmail.com> wrote:
>
> Absolutely not. As has been stated multiple times in this thread, a
> fully general approach is extremely complicated, horrifically
> unreliable, and hopelessly inefficient.
Well, my implementation is quite general now. It's neither complicated
nor inefficient. About reliability, I can't say anything without a test
case.

> The ONLY way to make this sort of thing any good whatsoever is to
> know your own use-case and code to exactly that. Given the size of
> files you're working with, for instance, a simple approach of just
> reading the whole file would make far more sense than the complex
> seeking you're doing. For reading a multi-gigabyte file, the choices
> will be different.

Apart from the fact that it's very, very simple to optimize for small
files: this is, IMHO, a premature optimization. The code is quite fast
even if the file is small. Can it be faster? Of course, but that
depends on the use case. Every optimization in CPython must pass the
benchmark suite; if there's little or no gain, the optimization is
usually rejected.

> No, this does NOT belong in the core language.

I respect your opinion, but IMHO you think the task is more complicated
than it really is. It seems to me that the method can be quite simple
and fast.
-- 
https://mail.python.org/mailman/listinfo/python-list
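P.S. Since my actual patch isn't quoted above, here is a rough sketch of
the kind of block-wise backwards seek I mean. The name `tail_lines` and
its parameters are illustrative, not the real implementation: the point
is that a small file is consumed in a single block (no special-casing
needed), while a multi-gigabyte file is only partially read.

```python
import io
import os

def tail_lines(path, n, block_size=io.DEFAULT_BUFFER_SIZE):
    """Return the last n lines of a file as bytes, seeking backwards.

    Reads fixed-size blocks from the end of the file until enough
    newlines have been collected. A file smaller than block_size is
    read whole in one pass, so small files need no separate fast path.
    """
    with open(path, "rb") as f:
        f.seek(0, os.SEEK_END)
        pos = f.tell()
        data = b""
        # Keep reading while we may still be missing the start of the
        # oldest requested line (count <= n, not < n, because the first
        # line in the buffer may be truncated mid-line).
        while pos > 0 and data.count(b"\n") <= n:
            step = min(block_size, pos)
            pos -= step
            f.seek(pos)
            data = f.read(step) + data
    return data.splitlines(keepends=True)[-n:]
```

Decoding is deliberately left to the caller: seeking backwards through
a multi-byte encoding is exactly the kind of complication this sidesteps
by working on bytes and splitting only on b"\n".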