On Friday, August 16, 2013 1:49:17 PM UTC+2, Ykä Marjanen wrote:
> The example was a bit simplified. In the real process I fetch data from the database, calculate scores and rankings, and then return the result. So it's not just for database queries, but all calculations.
>
> The reason I'm using cache_property is that many methods in the class use the same data, so if I call "classinstance.participant_ranking" and "classinstance.all_participant_rankings", they both use the same cached data in the class (you cannot calculate the ranking of one participant without calculating it for all). Otherwise I would have to manually check whether the query and calculation have already been done, to avoid duplicating the work.
>
> I've understood that the caches (cache.ram, cache.disk, memcache and redis) are server-level caches. So if my data changes per user, I cannot really use them(?)
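(For context, the pattern described above is roughly the following; a minimal sketch, where the RankingReport class, the participant table and the compute_rankings helper are made up, and cached_property stands in for whatever caching decorator is actually in use, e.g. functools.cached_property in newer Pythons.)

    from functools import cached_property

    class RankingReport(object):
        def __init__(self, db):
            self.db = db

        @cached_property
        def all_participant_rankings(self):
            # one expensive query + calculation, computed once per instance
            rows = self.db(self.db.participant.id > 0).select()
            return compute_rankings(rows)  # hypothetical scoring helper

        def participant_ranking(self, participant_id):
            # reuses the cached result instead of re-running the query
            return self.all_participant_rankings[participant_id]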
Ehm... if you use the cache= argument of select(), the key is the query itself, so you can use it for whatever scheme your app may have (in fact, two identical queries populating two totally different modules of your own would share the cached result and save you the extra roundtrip). If you need to cache the data and figure out the key on your own, just include the user's id as part of the key, assuming that's the "granularity" you need.
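For instance (a minimal sketch of both approaches in a web2py model or controller; db, cache and auth are the usual web2py objects, while the participant table and the compute_rankings helper are made up for illustration):

    # 1) Let the DAL handle the key: the query itself is the cache key,
    #    so two identical selects anywhere in the app share one cached result.
    rows = db(db.participant.id > 0).select(
        cache=(cache.ram, 3600),  # (cache model, expiry in seconds)
        cacheable=True,
    )

    # 2) Build the key yourself and include the user id for per-user granularity.
    rankings = cache.ram(
        'rankings:%s' % auth.user_id,  # user id as part of the key
        lambda: compute_rankings(db(db.participant.id > 0).select()),
        time_expire=300,
    )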