Date: Thursday, January 23, 2014 at 4:17 PM
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [oslo] memoizer aka cache
Yes! There is a reason Keystone has a very small footprint of
caching/invalidation done so far. It really needs to be correct when it
comes to proper invalidation logic. I am happy to offer some help in
determining logic for caching/invalidation with Dogpile.cache in mind as we
get it into oslo …
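As an illustration of what that could look like, here is a minimal sketch
using dogpile.cache with its in-memory backend; the project lookup and data
are made up, and the region configuration Keystone/oslo would actually use
may differ:

    from dogpile.cache import make_region

    # Hypothetical stand-in for a real database lookup.
    _PROJECTS = {'abc123': {'name': 'demo'}}

    region = make_region().configure(
        'dogpile.cache.memory',    # a memcached/redis backend drops in here
        expiration_time=60,        # seconds before a cached entry goes stale
    )

    @region.cache_on_arguments()
    def get_project_name(project_id):
        # First call hits the "database"; repeats come from the region.
        return _PROJECTS[project_id]['name']

    get_project_name('abc123')             # 'demo', now cached
    _PROJECTS['abc123']['name'] = 'renamed'
    get_project_name('abc123')             # still 'demo' -- stale!
    # Explicit invalidation is the part that has to be done correctly:
    get_project_name.invalidate('abc123')
    get_project_name('abc123')             # 'renamed'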
Sure, I'm not dismissing cases of conscious usage, but we need to be careful
here and make sure it's really appropriate. Caching and invalidation
techniques are right up there in terms of problems that appear easy and
simple to do and use at first, but doing it correctly is really, really hard
(especially at …
Ok, I see. Thanks, good to know.
Renat Akhmerov
@ Mirantis Inc.
On 23 Jan 2014, at 14:33, Doug Hellmann wrote:
> The fact that it is already in the requirements list makes it a top contender
> in my mind, unless we find some major issue with it.
>
> Doug
>
>
> On Thu, Jan 23, 2014 at 4:56 PM, Morgan Fainberg wrote: …
The fact that it is already in the requirements list makes it a top
contender in my mind, unless we find some major issue with it.
Doug
On Thu, Jan 23, 2014 at 4:56 PM, Morgan Fainberg wrote:
> Keystone uses dogpile.cache and I am making an effort to add it into the
> oslo incubator cache library that was recently merged.
Keystone uses dogpile.cache and I am making an effort to add it into the
oslo incubator cache library that was recently merged.
Cheers,
--Morgan
On Thu, Jan 23, 2014 at 1:35 PM, Renat Akhmerov wrote:
>
> On 23 Jan 2014, at 08:41, Joshua Harlow wrote:
>
> > So to me, memoizing is typically a premature optimization in a lot of
> > cases. …
On 23 Jan 2014, at 08:41, Joshua Harlow wrote:
> So to me, memoizing is typically a premature optimization in a lot of cases.
> And doing it incorrectly leads to overfilling the Python process's memory
> (your global dict will have objects in it that can't be garbage collected,
> and with enough keys+values being stored it will act just like a memory
> leak). …
Hi,
First, I think common routines are great. More DRY is always good.
Second, my personal feeling is that when you see a hard-coded in-memory
cache like this, it's probably something that should be moved behind a
more generic caching framework that allows for different backends such
as …
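A rough, hypothetical sketch of the kind of pluggable interface meant here
(the real oslo cache library's API may well differ): callers code against a
small get/set/invalidate surface, and the backend behind it is swappable:

    import time

    class MemoryBackend(object):
        """In-process dict backend; a memcached or Redis backend would
        expose the same get/set/invalidate surface, so callers never
        hard-code a dictionary."""

        def __init__(self):
            self._data = {}

        def get(self, key, default=None):
            value, expires_at = self._data.get(key, (default, None))
            if expires_at is not None and expires_at < time.time():
                self._data.pop(key, None)
                return default
            return value

        def set(self, key, value, ttl=None):
            expires_at = time.time() + ttl if ttl else None
            self._data[key] = (value, expires_at)

        def invalidate(self, key):
            self._data.pop(key, None)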
So to me, memoizing is typically a premature optimization in a lot of cases. And
doing it incorrectly leads to overfilling the Python process's memory (your
global dict will have objects in it that can't be garbage collected, and with
enough keys+values being stored it will act just like a memory leak). …
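A minimal sketch of the pattern being warned about (all names are made up):
every distinct argument tuple pins a value in a module-level dict for the
life of the process, which behaves like a slow memory leak:

    _CACHE = {}   # module-level: entries are never evicted

    def memoize(func):
        def wrapper(*args):
            if args not in _CACHE:
                _CACHE[args] = func(*args)   # value is now pinned forever
            return _CACHE[args]
        return wrapper

    @memoize
    def lookup_instance(instance_id):
        return {'id': instance_id, 'payload': 'x' * 1024}

    # Each new key grows the dict and nothing can be garbage collected,
    # so a long-running service accumulates memory without bound.
    for i in range(100000):
        lookup_instance(str(i))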
Top posting to point out that:
In Python 3 there is a generic memoizer in functools called lru_cache.
And here is a backport to Python 2.7:
https://pypi.python.org/pypi/functools32
That leaves Python 2.6. Maybe some clever wrapping in Oslo can make it
available to all versions?
On Thu, Jan 23, …
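A sketch of the kind of wrapping suggested here, assuming functools32 is
installed on Python 2.7 (the decorated function is just an example):

    try:
        from functools import lru_cache      # Python 3.2+
    except ImportError:
        from functools32 import lru_cache    # pip install functools32

    @lru_cache(maxsize=1024)                 # bounded, unlike a bare dict
    def flavor_name(flavor_id):
        # Placeholder body; a real caller would fetch this remotely.
        return 'flavor-%s' % flavor_id

    flavor_name('42')
    flavor_name('42')                        # served from the LRU cache
    print(flavor_name.cache_info())          # hits=1, misses=1, ...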
I would like to have us adopt a memoizing caching library of some kind
for use with OpenStack projects. I have no strong preference at this
time and I would like suggestions on what to use.
I have seen a number of patches where people have begun to implement
their own caches in dictionaries. This …