On Dec 14, 2010, at 2:29 AM, Brandon Williams wrote:

> On Mon, Dec 13, 2010 at 6:43 PM, Daniel Doubleday <daniel.double...@gmx.net> wrote:
>
>> Oh - well, but I see that the coordinator is actually using its own score for ordering. I was only concerned that dropped messages are ignored when calculating latencies, but that seems to be the case for local and remote responses. And even then, I guess you can assume that enough slow messages arrive to destroy the score.
>
> That's odd, since it should only be tracking READ_RESPONSE messages... I'm not sure how a node would send one to itself.
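The point being discussed — that a locally delivered response still feeds the snitch's latency score, so the coordinator ends up scoring itself alongside remote replicas — can be sketched with a toy latency tracker. This is a hypothetical simplification, not Cassandra's actual code: the class and method names are illustrative, and the real DynamicEndpointSnitch uses decaying statistics rather than a plain running average.

```java
import java.util.*;

// Toy model: every response completion (local or remote) is reported to the
// same tracker, which is then used to order replicas by score.
public class SnitchSketch {
    // running average latency per endpoint (simplification of the real snitch)
    static final Map<String, Double> avgLatency = new HashMap<>();
    static final Map<String, Long> count = new HashMap<>();

    // analogous to ResponseVerbHandler notifying the snitch on each response,
    // regardless of whether transport was short-circuited for a local message
    static void receiveResponse(String endpoint, double latencyMs) {
        long n = count.merge(endpoint, 1L, Long::sum);
        avgLatency.merge(endpoint, latencyMs, (old, x) -> old + (x - old) / n);
    }

    // order replicas best (lowest average latency) first
    static List<String> sortedByScore(Collection<String> endpoints) {
        List<String> out = new ArrayList<>(endpoints);
        out.sort(Comparator.comparingDouble(e -> avgLatency.getOrDefault(e, 0.0)));
        return out;
    }

    public static void main(String[] args) {
        receiveResponse("local", 0.3);    // local delivery, latency still recorded
        receiveResponse("replicaB", 2.0);
        receiveResponse("replicaC", 9.0);
        System.out.println(sortedByScore(List.of("local", "replicaB", "replicaC")));
        // the coordinator's own (local) score participates in the ordering
    }
}
```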
As far as I understand it, the MessagingService is always used in the strong read path. Local messages are short-circuited transport-wise, but MessageDeliveryTask will still be used, which in turn calls the ResponseVerbHandler, which notifies the snitch about latencies. It's only in the weak read path that the MessagingService is not used at all; there the coordinator will always use the local data and (I think) latencies are not recorded.

>> Maybe I misunderstand, but that would not really lead to less load, right? I don't think that inconsistency / read repairs are the problem leading to high I/O load, but rather the digest requests. Turning off read repair would also lead to inconsistent reads, which invalidates the whole point of quorum reads (at least in 0.6; I think the rr probability has no effect on strong reads in 0.7). Again, assuming I am not misinterpreting the code.
>
> Ah, I see what you want to do: take a chance that you pick the two replicas (at RF=3, at least) that should agree, and only send the last checksum request if you lose (at the price of latency).

Yes, exactly. I want to use the two endpoints with the best score (according to the dynamic snitch). As soon as I have test results I'll post them here.

Thanks,
Daniel
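The optimization described above — bet on the two best-scored replicas agreeing, and only pay for a request to the third replica if their digests mismatch — can be sketched as follows. This is a hypothetical illustration, not Cassandra's implementation: replica names and the `read` helper are invented, digests are simulated with hash codes, and real Cassandra resolves conflicts by column timestamps rather than by majority.

```java
import java.util.*;

// Sketch of an optimistic QUORUM read at RF=3: full data from the best-scored
// replica, a digest from the second best, and a third request only on mismatch.
public class OptimisticQuorumRead {
    // replicaData maps replica name -> value it would return;
    // byScore lists replicas best-scored first (as the dynamic snitch would)
    static String read(Map<String, String> replicaData, List<String> byScore) {
        String data = replicaData.get(byScore.get(0));                    // full data read
        int digest = Objects.hashCode(replicaData.get(byScore.get(1)));   // digest read
        if (digest == Objects.hashCode(data))
            return data;            // the two replicas agree: quorum met, no third request
        // digests mismatch: fall back to the third replica, paying an extra
        // round trip (the "price of latency" if the bet is lost)
        String third = replicaData.get(byScore.get(2));
        if (Objects.equals(third, data))
            return data;            // third agrees with the data read
        return third;               // otherwise the third replica breaks the tie
    }

    public static void main(String[] args) {
        // all replicas consistent: resolved with only two requests
        Map<String, String> consistent = Map.of("a", "v1", "b", "v1", "c", "v1");
        System.out.println(read(consistent, List.of("a", "b", "c")));
        // best-scored replica is stale: resolved after the fallback request
        Map<String, String> stale = Map.of("a", "v0", "b", "v1", "c", "v1");
        System.out.println(read(stale, List.of("a", "b", "c")));
    }
}
```

The win here is that in the common case (replicas in sync) only two of the three replicas serve the read at all, which is exactly the digest-request I/O load the thread is trying to reduce.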