On 2022-03-12 at 21:45:56 +0100,
Marco Sulla <marco.sulla.pyt...@gmail.com> wrote:

[ ... ]

> So if I do not cache the failure when the object is unhashable, I save
> a little memory per object (one int) and I get a better error message
> every time.

> On the other hand, if I leave things as they are, testing the
> unhashability of the object multiple times is faster. The code:
> 
> try:
>     hash(o)
> except TypeError:
>     pass
> 
> executes in nanoseconds when called more than once, even if o is not
> hashable. Not sure if this is a big advantage.
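
For concreteness, here is a rough sketch of the two strategies being
weighed, assuming a frozendict-like class that hashes its items as a
tuple. The class names and the -1 sentinel are mine, for illustration
only; this is not frozendict's actual code:

    class FrozenMapCachedFailure:
        """Caches the hash, including the failure case (as -1)."""

        def __init__(self, items):
            self._items = tuple(items)
            self._hash = None          # None means "not computed yet"

        def __hash__(self):
            if self._hash is None:
                try:
                    self._hash = hash(self._items)
                except TypeError:
                    # hash() never returns -1, so it can serve as a
                    # "known unhashable" sentinel.
                    self._hash = -1
            if self._hash == -1:
                # Fast on repeated calls, but the original message
                # naming the offending value is lost.
                raise TypeError("unhashable item in frozen mapping")
            return self._hash

    class FrozenMapNoFailureCache:
        """Caches only a successful hash; a failure is recomputed."""

        def __init__(self, items):
            self._items = tuple(items)
            self._hash = None

        def __hash__(self):
            if self._hash is None:
                # If an item is unhashable, the original TypeError
                # propagates with its full message, and nothing is
                # cached, so the cost is paid again on the next call.
                self._hash = hash(self._items)
            return self._hash

With the second variant, every failed attempt recomputes the hash but
re-raises the original, more informative TypeError.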

Once hashing an object fails, why would an application try again?  I can
see an application using a hashable value in a hashing context again
and again and again (i.e., taking advantage of the cache), but what's
the use case for *repeatedly* trying to use an unhashable value again
and again and again (i.e., taking advantage of a cached failure)?

So I think that caching the failure is a lot of extra work for no
benefit.
