> I've looked deeper into this, and the web2py doc
> (http://www.web2py.com/examples/static/epydoc/web2py.gluon.cache.CacheInRam-class.html)
> mentions: "This is implemented as global (per process, shared by all
> threads) dictionary. A mutex-lock mechanism avoid conflicts."
>
> Does this mean that when each request thread is accessing and modifying
> the content of the cache (e.g. a dictionary in my case), every other request
> thread is blocked and has to wait until the current request thread finishes with
> it? If so, it seems to me that the race condition we fear as above should
> not happen. Please correct me if I get this wrong. Thanks.
Well, because cache.ram returns a reference to the object rather than a copy, I think you're OK in terms of avoiding conflicts when updating the dict (as long as you're just adding new keys). However, you do have to think about what happens when you hit the required number of entries. What if one request comes in with the final entry, but before that request has finished processing (and cleared the cache), another request arrives -- what do you do with that new item? It may be workable, but you'll have to think carefully about how it should behave.

Anthony
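For what it's worth, here's a minimal sketch of one way to handle that threshold case, assuming the goal is to accumulate items in a dict cached with cache.ram and process them as a batch once a required count is reached. The names REQUIRED_ENTRIES, CACHE_KEY, add_item, and process_batch are purely illustrative, not anything from your app. The point is that CacheInRam's internal mutex only protects individual cache.ram calls, not a multi-step check-and-clear sequence, so that sequence gets its own lock here:

```python
import threading

# Illustrative assumptions -- adjust to your app.
REQUIRED_ENTRIES = 100
CACHE_KEY = 'pending_items'

# One process-wide lock to guard the check-and-flush step. CacheInRam's
# own lock covers each individual cache operation, not a read-modify-write
# sequence spanning several statements.
_flush_lock = threading.Lock()


def add_item(cache, item_key, item_value):
    """Add an item to the shared dict cached in RAM; flush when full.

    `cache` is web2py's cache object (available in controllers); the other
    names here are hypothetical.
    """
    # cache.ram returns a reference to the same dict for every request in
    # this process; time_expire=None keeps it until explicitly cleared.
    shared = cache.ram(CACHE_KEY, lambda: {}, time_expire=None)

    with _flush_lock:
        shared[item_key] = item_value
        if len(shared) >= REQUIRED_ENTRIES:
            # Snapshot and clear in place, so items added by other threads
            # after this point simply start the next batch.
            batch = dict(shared)
            shared.clear()
        else:
            batch = None

    if batch is not None:
        process_batch(batch)  # hypothetical handler for a full batch


def process_batch(batch):
    # Placeholder: whatever should happen once the required number of
    # entries has accumulated.
    pass
```

Processing the batch outside the lock keeps the critical section short, so other requests are only blocked for the dict update itself, and an item that arrives while a batch is being processed just lands in the next batch.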