On Monday, 19 September 2016 at 17:40:03 UTC+2, Cédric Krier wrote:
>
> On 2016-09-19 08:19, Ali Kefia wrote: 
> > the issue with cache on multi workers is that invalidation does not 
> > propagate. 
>
> Could you prove your statement? 
>

Context: multi-process.
I may be missing something (that is why I am asking):

   - cache.clear: empties the in-memory cache and updates ir.cache in the 
   database
   - cache.get: reads from memory only (it does not check ir.cache)

=> No synchronization between workers to invalidate the cache horizontally
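
A minimal sketch of what I mean, as I understand the behaviour (the class 
and names below are illustrative, not the actual trytond.cache code):

    # Illustrative model of the current behaviour, one instance per
    # worker process; NOT the actual trytond.cache implementation.
    import time

    class WorkerCache:
        def __init__(self, db):
            self._mem = {}  # per-process memory, invisible to other workers
            self._db = db   # stands in for the ir.cache table

        def get(self, key):
            # reads memory only: never consults ir.cache
            return self._mem.get(key)

        def set(self, key, value):
            self._mem[key] = value

        def clear(self):
            # empties *this* process's memory and stamps ir.cache
            self._mem.clear()
            self._db['timestamp'] = time.time()

    db = {}                       # shared database stand-in
    a, b = WorkerCache(db), WorkerCache(db)
    a.set('k', 1); b.set('k', 1)
    a.clear()                     # worker A invalidates
    print(b.get('k'))             # worker B still returns 1: stale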
 

>
> > And since using the db runs counter to the caching principle, we took 
> > Redis. 
>
> I do not understand the reasoning. 
>

The supposed solution would be:

   - every time we call cache.get, it checks the db (ir.cache) for validity
   - that means one db call for every cache.get

=> That makes no sense, since we cache data precisely to avoid db calls
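
That is why we took Redis instead: all workers read and write one shared 
store, so a single clear is visible everywhere. A rough sketch, assuming 
the redis-py client (the key namespace is illustrative):

    # Sketch of the Redis approach; PREFIX is an illustrative namespace.
    import pickle
    import redis

    r = redis.Redis(host='localhost', port=6379)
    PREFIX = 'tryton:cache:'

    def cache_get(name, key):
        raw = r.hget(PREFIX + name, key)
        return pickle.loads(raw) if raw is not None else None

    def cache_set(name, key, value):
        r.hset(PREFIX + name, key, pickle.dumps(value))

    def cache_clear(name):
        # one DEL invalidates the entry for all workers at once:
        # no ir.cache polling, no db round trip per cache.get
        r.delete(PREFIX + name)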
 

>
> > side effect advantages were: 
> > 
> >    - less locks on Python code 
>
> On a single thread it should not change anything. 


Agreed that in both cases we wait for a response (assuming that a Python 
Lock has no cost).
I will run a test and send you the result.
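
Something along these lines (a rough micro-benchmark of an uncontended 
threading.Lock; the exact numbers will of course vary):

    import threading
    import timeit

    lock = threading.Lock()

    def with_lock():
        # acquire and release an uncontended lock on each call
        with lock:
            pass

    def without_lock():
        pass

    n = 1000000
    print('with lock   :', timeit.timeit(with_lock, number=n))
    print('without lock:', timeit.timeit(without_lock, number=n))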
 

>
> >    - less memory usage (shared memory) 
>
> Agreed, but at the cost of network communication. 
>
> >    - faster worker startup (since cache is already up and loaded) 
>
> except if using threaded workers. 
>

We chose the multi-process model (because of the GIL) and to be more 
scalable overall.
We ran a benchmark on 3.8 and it was much more comfortable/stable with 
many workers.
=> We will give werkzeug a try (if you have a document that helps with 
the configuration, please share it)
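
For reference, this is roughly what I have in mind; I am assuming the 
WSGI application object lives at trytond.wsgi.app, please correct me if 
it is elsewhere:

    # Serving Tryton with werkzeug's built-in server.
    from werkzeug.serving import run_simple
    from trytond.wsgi import app  # assumption about the application path

    # threaded=True -> one process, many threads, one shared cache
    # processes=N   -> forked workers, each with its own cache copy
    run_simple('0.0.0.0', 8000, app, threaded=True)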
 

>
> -- 
> Cédric Krier - B2CK SPRL 
> Email/Jabber: cedric...@b2ck.com 
> Tel: +32 472 54 46 59 
> Website: http://www.b2ck.com/ 
>
