Hi,

I am serving, via a @service.jsonrpc controller, some data which is 
expensive to compute. I want to cache the result in RAM, using 
something like:

@cache(request.env.path_info, time_expire=1000, cache_model=cache.ram)

My data will not change very often, so I want a big TTL (actually, I would 
like the TTL to be "forever"; is this possible?).
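
To make the question concrete, this is roughly what I have in mind (just a 
sketch: 'mydata' is a placeholder key, expensive_compute() stands in for the 
call into the third-party library, and I am assuming that time_expire=None 
means "never expire"):

    # controllers/default.py
    from gluon.tools import Service
    service = Service()

    @service.jsonrpc
    def get_data():
        # cache the expensive result in RAM until it is explicitly cleared
        return cache.ram('mydata',
                         lambda: expensive_compute(),
                         time_expire=None)

    def call():
        return service()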

But my main problem is this: in my architecture, web2py does not know when 
new data is available (the data lives in a third-party database, completely 
outside web2py's control). Changes to the data happen asynchronously, 
according to business processes.

I have three pieces:

   1. web2py serving the data to the clients, via jsonrpc
   2. a small script which learns (via long-polling) when changes occur in 
   the third-party database
   3. a library to process the information in the third-party database. 
   Calling this must be avoided at all costs, *except* when we have actual 
   changes (which is why I want to add the cache)
   
The easiest implementation for me would be the ability to invalidate the 
web2py cache from the external script whenever I detect changes relevant 
to the active sessions (not the whole cache, just the part related to my 
jsonrpc controller).

Is this possible? Maybe by calling a web2py controller which is in charge 
of invalidating the cache?
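
Something along these lines is what I am imagining (again just a sketch: the 
token check is only a placeholder for real authentication, and I am assuming 
that passing None instead of a function to cache.ram drops that key):

    # in the same controller -- called by the external script
    def invalidate_cache():
        # naive shared-secret check, just to sketch the idea
        if request.vars.token != 'some-shared-secret':
            raise HTTP(403)
        # passing None should remove the cached entry for this key;
        # cache.ram.clear(regex=...) could drop a group of keys instead
        cache.ram('mydata', None)
        return 'ok'

And from the external script, once the long-poll reports a change 
(the URL assumes an app named 'myapp' and the default.py controller):

    import requests
    requests.get('http://localhost:8000/myapp/default/invalidate_cache',
                 params={'token': 'some-shared-secret'})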

What about the session? I assume the web2py cache is session-related. My 
script would then have to call the web2py controller with the correct 
session, but I do not know how to handle that.

Thanks,
Daniel
