Nick Coghlan added the comment:

I realised that PEP 487's __set_name__ can be used to detect the `__slots__` 
conflict at class definition time rather than on first lookup:

    def __set_name__(self, owner, name):
        try:
            slots = owner.__slots__
        except AttributeError:
            return
        if "__dict__" not in slots:
        msg = (f"'__dict__' attribute required on {owner.__name__!r} "
               f"instances to cache {name!r} property.")
            raise TypeError(msg)
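To see the check fire at class definition time, here's a minimal runnable sketch (the `checked_property` wrapper class is hypothetical, invented just to host the `__set_name__` method above):

```python
class checked_property:
    """Hypothetical descriptor hosting the __set_name__ check above."""

    def __init__(self, func):
        self.func = func

    def __set_name__(self, owner, name):
        try:
            slots = owner.__slots__
        except AttributeError:
            return  # no __slots__, instances have a dict, nothing to check
        if "__dict__" not in slots:
            msg = (f"'__dict__' attribute required on {owner.__name__!r} "
                   f"instances to cache {name!r} property.")
            raise TypeError(msg)

try:
    class Point:
        __slots__ = ("x", "y")  # no '__dict__' entry, so no instance dict

        @checked_property
        def norm(self):
            return (self.x ** 2 + self.y ** 2) ** 0.5
except (TypeError, RuntimeError) as exc:
    # Pythons before 3.12 wrap __set_name__ errors in a RuntimeError
    print(exc)
```

The error surfaces while the class body is being evaluated, rather than on the first attribute lookup.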

It also occurred to me that, at the expense of one level of indirection in the 
runtime lookup, PEP 487's __set_name__ hook and a specified naming convention 
already permit a basic, albeit inefficient, "cached_slot" implementation:

from threading import RLock

class cached_slot:
    def __init__(self, func):
        self.func = func
        self.cache_slot = func.__name__ + "_cache"
        self.__doc__ = func.__doc__
        self.lock = RLock()

    def __set_name__(self, owner, name):
        try:
            slots = owner.__slots__
        except AttributeError:
            msg = f"cached_slot requires '__slots__' on {owner!r}"
            raise TypeError(msg) from None
        if self.cache_slot not in slots:
            msg = f"cached_slot requires {self.cache_slot!r} slot on {owner!r}"
            raise TypeError(msg) from None

    def __get__(self, instance, cls=None):
        if instance is None:
            return self
        try:
            return getattr(instance, self.cache_slot)
        except AttributeError:
            # Cache not initialised yet, so proceed to double-checked locking
            pass
        with self.lock:
            # check if another thread filled cache while we awaited lock
            try:
                return getattr(instance, self.cache_slot)
            except AttributeError:
                # Cache still not initialised yet, so initialise it
                setattr(instance, self.cache_slot, self.func(instance))
        return getattr(instance, self.cache_slot)

    def __set__(self, instance, value):
        setattr(instance, self.cache_slot, value)

    def __delete__(self, instance):
        delattr(instance, self.cache_slot)
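Using the descriptor would then look something like this (a sketch, with the descriptor restated in condensed form so the example runs standalone; the `Circle` class and the "<name>_cache" backing slot follow the naming convention described above):

```python
from threading import RLock

class cached_slot:
    """Condensed copy of the descriptor above, so this sketch is self-contained."""

    def __init__(self, func):
        self.func = func
        self.cache_slot = func.__name__ + "_cache"
        self.lock = RLock()

    def __set_name__(self, owner, name):
        if self.cache_slot not in getattr(owner, "__slots__", ()):
            raise TypeError(f"cached_slot requires {self.cache_slot!r} slot on {owner!r}")

    def __get__(self, instance, cls=None):
        if instance is None:
            return self
        try:
            return getattr(instance, self.cache_slot)
        except AttributeError:
            pass  # cache slot not set yet, fall through to the locked path
        with self.lock:
            try:
                return getattr(instance, self.cache_slot)
            except AttributeError:
                setattr(instance, self.cache_slot, self.func(instance))
        return getattr(instance, self.cache_slot)

    def __set__(self, instance, value):
        setattr(instance, self.cache_slot, value)

    def __delete__(self, instance):
        delattr(instance, self.cache_slot)

class Circle:
    __slots__ = ("radius", "area_cache")  # backing slot per the convention

    def __init__(self, radius):
        self.radius = radius

    @cached_slot
    def area(self):
        return 3.14159265 * self.radius ** 2

c = Circle(2.0)
print(c.area)  # first access computes the value and fills 'area_cache'
print(c.area)  # later accesses read straight from the slot
del c.area     # clears the cache; the next access recomputes
```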

It can't be done as a non-data descriptor though (and can't be done efficiently 
in pure Python), so I don't think it makes sense to try to make cached_property 
itself work implicitly with both normal attributes and slot entries - instead, 
cached_property can handle the common case as simply and efficiently as 
possible, and the cached_slot case can either be handled separately or else not 
at all.

The "don't offer cached_slot at all" argument would be that, given slots are 
used for memory efficiency when handling large numbers of objects and lazy 
initialisation is used to avoid unnecessary computation, a "lazily initialised 
slot" can be viewed as "64 bits of frequently wasted space", and hence we can 
expect demand for the feature to be incredibly low (and the community 
experience to date bears out that expectation).
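The memory framing can be made concrete with a quick comparison (illustrative, not from the original comment): slotted instances carry no per-instance dict at all, which is exactly the saving that a rarely-filled cache slot eats into.

```python
import sys

class PointDict:
    def __init__(self):
        self.x = 1.0
        self.y = 2.0

class PointSlots:
    __slots__ = ("x", "y")

    def __init__(self):
        self.x = 1.0
        self.y = 2.0

d, s = PointDict(), PointSlots()
# Regular instance: the object itself plus its attribute dict
print(sys.getsizeof(d) + sys.getsizeof(d.__dict__))
# Slotted instance: fixed-size layout, no __dict__ at all
print(sys.getsizeof(s))
```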

----------

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue21145>
_______________________________________