Hi Phil,

> On 12 Nov 2014, at 12:54, p...@highoctane.be wrote:
> 
> Looking at LRUCache tests, I've found this:
> 
> testFibonacci
>     "After an idea by Jan Vrany.
>     Recursively enter the cache and its access protection"
>     
>     | fibCache |
>     fibCache := self newCache.
>     fibCache 
>         maximumWeight: 32;
>         beThreadSafe;
>         factory: [ :key |
>             key < 2
>                 ifTrue: [ key ]
>                 ifFalse: [ (fibCache at: key - 1) + (fibCache at: key - 2) ] ].
>     self assert: (fibCache at: 40) equals: 102334155
> 
> the factory thing basically solves the problem.

Good ;-)

> Is there more documentation about the caching somewhere? I read the code, comments, 
> and unit tests, but I would still value an explanation.

I tried to comment everything; the class comments (check AbstractCache too) are 
quite extensive. What else do you want to know?

> Keying by object works, but with symbols, it isn't behaving. I think I'm missing 
> something here.

I don't understand; AFAIK you can use any object as both key and value, and most 
tests use Symbols. Can you elaborate?
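
For what it is worth, here is a quick workspace sketch that sticks to the protocol 
your test above already uses (LRUCache new, maximumWeight:, factory:, at:), just 
with Symbol keys:

    | cache |
    cache := LRUCache new.
    cache
        maximumWeight: 32;
        factory: [ :key | key size ].
    cache at: #foo.    "first access: 3 is computed by the factory and stored"
    cache at: #foo     "second access: 3 now comes straight from the cache"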

> Is the cache size set at a given limit?

I think you are looking for CacheWeight (read the class comment).

Basically, a cache has a weight; if that weight is exceeded, the cache starts removing 
items. How you compute the weight is up to you: the default is that each entry has a 
weight of 1, so you end up counting the entries. But you could use something like the 
file or byte size of the value.
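
For example, something along these lines. This is a rough sketch from memory: the exact 
selector for setting the weight block is documented in CacheWeight's class comment, so 
computeWeight: below may not be its actual name.

    | cache |
    cache := LRUCache new.
    cache
        maximumWeight: 1024 * 1024;                "start evicting above ~1 MB in total"
        computeWeight: [ :value | value size ];    "weigh each value by its byte count"
        factory: [ :key | key asString asByteArray ].
    cache at: #alpha    "adds the ByteArray's byte size, not 1, to the cache's weight"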

> I see keyIndex is Dictionary new. What if I do have a lot of keys? Is the 
> dictionary being copied over and over to get to the right size?

Yes, of course it is a dictionary that grows when needed; what else would you want? 
Do you want to predefine it at a certain size?
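
A plain Dictionary can be preallocated if you know you will hold many entries: 
Dictionary new: n reserves room for roughly n elements up front, so it does not have 
to grow and rehash while you fill it. Whether that is worth exposing on the cache's 
keyIndex is another question, but the underlying behaviour is just this:

    | d |
    d := Dictionary new: 100000.    "preallocated, no intermediate growing/rehashing"
    1 to: 100000 do: [ :i | d at: i put: i * i ].
    d size    "=> 100000"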

Sven

PS: BTW, the implementation is slightly more complicated than what you would 
expect: it uses a doubly linked list plus an index dictionary to achieve proper 
O(1) performance in its common operations (accessing, adding & evicting 
entries).
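
If it helps, here is a toy workspace sketch of the idea (not the real code): an index 
dictionary for lookups plus a recency list for the eviction order. To keep it short it 
uses an OrderedCollection, so moving a key to the front is O(n) here; the real 
implementation uses a doubly linked list of nodes precisely so that this step is O(1) too.

    | index order maxSize fetch |
    index := Dictionary new.
    order := OrderedCollection new.    "most recently used key sits at the front"
    maxSize := 3.
    fetch := [ :key | | value |
        value := index at: key ifAbsent: [ nil ].
        value
            ifNotNil: [ order remove: key; addFirst: key ]    "hit: mark as most recently used"
            ifNil: [
                value := key * key.    "stand-in for the factory block"
                index at: key put: value.
                order addFirst: key.
                order size > maxSize ifTrue: [
                    index removeKey: order removeLast ] ].    "evict the least recently used key"
        value ].
    (1 to: 5) collect: [ :i | fetch value: i ].    "fills the cache, evicting keys 1 and 2"
    fetch value: 5.                                "a hit, served from the index"
    index keys sorted                              "=> #(3 4 5)"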

> Phil
> 
> On Tue, Nov 11, 2014 at 7:52 PM, p...@highoctane.be <p...@highoctane.be> 
> wrote:
> I should ask more questions :-).
> 
> Now, this still doesn't answer the question on how to use this to memoize 
> object methods.
> 
> I think that what is closest to what I am looking for is this:
> 
> http://wiki.tcl.tk/10779
> http://wiki.tcl.tk/10981
> 
> 
> I am sure we can do something like that in Pharo, with some memoize method in 
> Object playing around with thisContext.
> 
> SomeClass>>someMethodWith: aSomething and: aSomethingElse
> 
> 
> 
> Like how self haltIf: aSymbol will break if aSymbol is in the call stack.
> 
> Any guru having a way to do that?
> 
> Phil
> 
> On Tue, Nov 11, 2014 at 7:21 PM, Torsten Bergmann <asta...@gmx.de> wrote:
> Werner Kassens wrote:
> >Hi,
> >now this is a nice idea, especially the memoizedUsing:cache idea. I would
> >really appreciate it if that would be MIT licensed.
> >werner
> 
> It already is, thanks to the permission of author John Cromartie who answered
> us today. Both methods are in a slice already:
> 
> Details in https://pharo.fogbugz.com/f/cases/14458
> 
> Thx
> T.
> 

