Thank you both for the clarifications. I'll stick with my first setup:
x MTAs + x policyd instances (one for each MTA) + 1 database shared between
all policyd instances.

Thanks again,
Fabio


On 14 Jun 2012, at 13:20, Nigel Kukard wrote:

> On 14/06/2012 11:10, Simon Hobson wrote:
>> Fabio Sangiovanni wrote:
>> 
>>> That's true, but since we're using quota matching on individual SASL
>>> users, the quotas_tracking table could be read and subsequently
>>> updated by 2 MTAs independently, leading to possible miscalculations
>>> of the LastUpdate and Counter fields.
>>> 
>>> Example:
>>> mta1: select from quotas_tracking
>>> mta2: select from quotas_tracking
>>> mta1: Counter += size
>>> mta2: Counter += size
>>> mta1: update quotas_tracking
>>> mta2: update quotas_tracking
>>> 
>>> Result: Counter value is overwritten by the last update from mta2;
>>> size of message from mta1 is lost.
>>> 
>>> Is this scenario possible?
>> Yes, that is possible, but it would require that the same user sent
>> two messages, via two different relays, within a very small window.
>> 
>> On the other hand, I think PolicyD itself is multithreaded - so
>> there's the same scope for missing an update with a single server
>> (and even a single server with a single MTA).
>> 
>> Personally, I think it's going to be really hard for this to make a
>> significant difference in throughput.
> 
> The write to the table should block the next write at row level, if I'm not 
> mistaken. Both updates will be recorded correctly.
> 
> The "size" is actually a delta value: the difference between the current 
> value and the new value.
> 
> There is no chance of a race condition with the above, as far as I can 
> determine from your use case.
> 
> -N
> 
> _______________________________________________
> Users mailing list
> [email protected]
> http://lists.policyd.org/mailman/listinfo/users
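For what it's worth, the difference between the two update styles discussed above can be sketched in Python against SQLite. This is a sketch only, not PolicyD's actual code: the TrackKey column, the key value, and the sizes 10/20 are assumed purely for illustration.

```python
import sqlite3

# A shared database, as in the setup above; SQLite stands in for MySQL/PostgreSQL.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE quotas_tracking (TrackKey TEXT, Counter REAL)")
db.execute("INSERT INTO quotas_tracking VALUES ('sasl_user', 0)")

def counter():
    return db.execute(
        "SELECT Counter FROM quotas_tracking WHERE TrackKey = 'sasl_user'"
    ).fetchone()[0]

# --- Read-modify-write, interleaved as in the example in the thread ---
c_mta1 = counter()          # mta1: select from quotas_tracking
c_mta2 = counter()          # mta2: select from quotas_tracking
db.execute("UPDATE quotas_tracking SET Counter = ? WHERE TrackKey = 'sasl_user'",
           (c_mta1 + 10,))  # mta1 writes back its own total
db.execute("UPDATE quotas_tracking SET Counter = ? WHERE TrackKey = 'sasl_user'",
           (c_mta2 + 20,))  # mta2 overwrites it; mta1's 10 is lost
print(counter())            # 20.0, not the expected 30.0

# --- Delta update: the increment happens inside the UPDATE statement,
# --- so the database's row-level locking serialises the two writes.
db.execute("UPDATE quotas_tracking SET Counter = Counter + ? WHERE TrackKey = 'sasl_user'",
           (10,))
db.execute("UPDATE quotas_tracking SET Counter = Counter + ? WHERE TrackKey = 'sasl_user'",
           (20,))
print(counter())            # 50.0: both deltas recorded
```

The second form matches what Nigel describes: because the delta is applied inside the UPDATE itself, the lost-update interleaving cannot occur regardless of how many MTAs or policyd threads hit the row.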

