On 29 December 2016 at 18:35, Chris Angelico <[email protected]> wrote:
> On Thu, Dec 29, 2016 at 7:20 PM, Steven D'Aprano <[email protected]> wrote:
>> I'd rather add a generator to the itertools module:
>>
>>     itertools.iterhash(iterable)  # yield incremental hashes
>>
>> or, copying the API of itertools.chain, add a method to hash:
>>
>>     hash.from_iterable(iterable)  # return hash calculated incrementally
>
> The itertools module is mainly designed to be consumed lazily. The
> hash has to be calculated eagerly, so it's not really a good fit for
> itertools.

I understood the "iterhash" suggestion to be akin to itertools.accumulate:

    >>> for value, tally in enumerate(accumulate(range(10))):
    ...     print(value, tally)
    ...
    0 0
    1 1
    2 3
    3 6
    4 10
    5 15
    6 21
    7 28
    8 36
    9 45

However, I think including Ned's recipe (or something like it) in
https://docs.python.org/3/reference/datamodel.html#object.__hash__ as a
tool for avoiding large temporary tuple allocations may be a better way
to start off, as:

1. It's applicable to all currently released versions of Python, not
   just to 3.7+
2. It provides more scope for people to experiment with their own
   variants of the idea before committing to a *particular* version
   somewhere in the standard library

Cheers,
Nick.

-- 
Nick Coghlan   |   [email protected]   |   Brisbane, Australia
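[For readers following along: a minimal sketch of the kind of
accumulate-style incremental hash generator being discussed. The name
`iterhash` and the combining step are hypothetical -- this is not an
actual stdlib API, and the mixing formula is only loosely modeled on
CPython's pre-3.8 tuple hashing, not a faithful reproduction of it.]

```python
def iterhash(iterable):
    """Yield a running hash after each element, analogous to
    itertools.accumulate yielding running sums.

    Hypothetical sketch only; the seed and multiplier below echo
    CPython's old tuple-hashing constants for illustration.
    """
    h = 0x345678  # arbitrary starting value
    for item in iterable:
        # Fold each element's hash into the running value.
        h = (1000003 * h) ^ hash(item)
        h &= (1 << 64) - 1  # keep the running value bounded
        yield h

# Usage: consume lazily, or keep only the last value as the final hash.
hashes = list(iterhash(["spam", "eggs", 42]))
```

Because each yielded value depends only on the elements consumed so far,
the caller can stop early or compare prefixes without materializing a
tuple of all the items.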
_______________________________________________
Python-ideas mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/
