I think we need a cache.request that works per request. I will
implement it in a couple of days.
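Roughly, a per-request cache could look something like the sketch below. This is only an illustration of the idea, not the actual web2py implementation; the class name and the `clear()` hook are hypothetical, though the call signature mirrors how `cache.ram(key, f, time_expire)` is used.

```python
class RequestCache:
    """Hypothetical sketch: a cache whose entries live for one request only."""

    def __init__(self):
        self._store = {}

    def __call__(self, key, func, expiration=None):
        # expiration is ignored: everything dies with the request anyway
        if key not in self._store:
            self._store[key] = func()
        return self._store[key]

    def clear(self):
        # would be called by the framework at the end of each request
        self._store.clear()
```

Usage would then be the same as with cache.ram, e.g. `rows = db(...).select(..., cache=(cache.request, 0))`, with no expiration bookkeeping needed.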



On Sep 18, 9:06 pm, Jurgis Pralgauskis <jurgis.pralgaus...@gmail.com>
wrote:
> at first I thought I needed this "caching" **per request**,
> but maybe it would not be a problem to set it for several seconds
> or even minutes
>
> A)
> if I want to use **dict[key]** notation (for clearer code),
> I could do like this:
>
> for row in db(...).select(...,cache=(cache.ram,expiration)):
>     data.MyTable[row.id] = row
>
> B) or I could inherit Storage and overload like this?
>
> def __getitem__(self, key):
>     return self.rows.find(lambda r: r.id == key).first()
>
> Question: which variant would be better?
> I am not sure, but it seems to me - A?
>
> On 19 Rugs, 01:32, mdipierro <mdipie...@cs.depaul.edu> wrote:
>
> > Something like that is there:
>
> > expiration=3600 # seconds
> > rows=db(...).select(...,cache=(cache.ram,expiration))
>
> > reset with
>
> > rows=db(...).select(...,cache=(cache.ram,0))
>
> > On Sep 18, 5:27 pm, Jurgis Pralgauskis <jurgis.pralgaus...@gmail.com>
> > wrote:
>
> > > Hello,
>
> > > I think I got a bit trapped by DAL in GAE :)
> > > maybe because of the often-used  db.MyTable[i].somefield
>
> > > /default/topic_examples/27001/view 200 2721ms 5145cpu_ms
> > > 2906api_cpu_ms
>
> > > service.call    #RPCs   real time       api time
> > > datastore_v3.Get        62      712ms   516ms
> > > datastore_v3.RunQuery   35      706ms   2390ms
> > > memcache.Get    3       27ms    0ms
> > > memcache.Set    2       12ms    0ms
>
> > > Though my tables are still quite small: ~ 5 main tables with ~ 100
> > > records in each.
>
> > > in order to decrease db requests, I am thinking of making a singleton-like
> > > interface, so I could ask
> > > data.MyTable[i].somefield
>
> > > instead of
> > > db.MyTable[i].somefield
>
> > > data would hold Storage copies of the tables which I need to access often
> > > by index
>
> > > and if data doesn't have a particular table or its record, it could
> > > call db.MyTable.. then (and optionally cache the result the Singleton
> > > way)
>
> > > maybe this could be called  select_cache.
> > > By the way, would find() perhaps be more appropriate than a
> > > separate select in such cases?
>
> > > this is just a plan now, am I on the right track?
> > > how big (approximately) would the tables have to grow for my mechanism to
> > > become useless?
>
>
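Regarding the A-vs-B question above: a rough illustration using plain dicts in place of DAL Rows shows the trade-off. Variant A pays once to build an index and then looks up by key in O(1); variant B rescans the cached rows on every access, which is O(n) per lookup. (The row data below is made up for the example.)

```python
# Illustrative stand-in for cached DAL rows; not web2py internals.
rows = [{'id': i, 'somefield': 'value%d' % i} for i in range(100)]

# Variant A: build a dict once, then O(1) lookups by id
by_id = {}
for row in rows:
    by_id[row['id']] = row

# Variant B: scan the cached rows on every access, O(n) per lookup
# (this is what Rows.find(lambda r: r.id == key) does under the hood)
def find_by_id(key):
    for r in rows:
        if r['id'] == key:
            return r
    return None

# Both return the same record; A just does it without scanning.
assert by_id[42] is find_by_id(42)
```

At ~100 records per table the difference is negligible, but variant A's cost stays flat as tables grow.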
