On Jul 12, 2019, at 06:27, haael <[email protected]> wrote:
>
> with tuple(open(str(_n) + '.tmp', 'w') for _n in range(1000)) as f:
>     for n, fn in enumerate(f):
>         f.write(str(n))
Another thought here: There may be good examples for what you want—although I
suspect every such example will be much better handled by ExitStack—but the one
you gave is actually an argument against the idea, not for it. We don’t want to
make it easier to write code like this, because people shouldn’t be writing
code like this.
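For reference, if you really did need all 1000 files open at once, a minimal
ExitStack sketch (reusing the same '.tmp' naming from your example) would look
something like this:

from contextlib import ExitStack

with ExitStack() as stack:
    # enter_context() registers each file with the stack, so every one is
    # closed when the with block exits, even if a later open or write raises
    files = [stack.enter_context(open(str(n) + '.tmp', 'w'))
             for n in range(1000)]
    for n, fn in enumerate(files):
        fn.write(str(n))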
Having 1000 files open at a time may exhaust your resource limits. Writing to
another 999 files before closing the first will confuse the OS cache and slow
everything down. If someone trips over the power cable in the middle, you have
up to 1000 written but unflushed files that are in an indeterminate state. It
forces you to come up with two names for each variable, leading to mistakes
like the one at the end, where you try to write to the tuple. When all of this
is necessary, you have to deal with those problems, but it usually isn’t, and
it isn’t here. You’re just writing to one file, then writing to the next, etc.;
there’s no reason not to close each one immediately, which you can just write
like this:
for n in range(1000):
    with open(str(n) + '.tmp', 'w') as f:
        f.write(str(n))