> 1) If using CL > 1, then using the dynamic snitch will result in a data read
> from the node with the lowest latency (slightly simplified), even if the
> proxy node contains the data but has higher latency than other possible
> nodes, which means that it is not necessary to do load-based balancing on
> the client side.
>
> 2) If using CL = 1, then the proxy node will always return the data itself,
> even when there is another node with less load.
>
> 3) Digest requests will be sent to all other living peer nodes for that key
> and will result in a data read on all nodes to calculate the digest. The
> only difference is that the data is not sent back, but IO-wise it is just
> as expensive.

I think I may just be completely misunderstanding something, but I'm
not really sure to what extent you're describing the current
situation and to what extent you're suggesting changes. I'm not sure
about (1) and (2), though my knee-jerk reaction is that I would expect
the read path to be mostly agnostic w.r.t. which node happens to be
taking the RPC call (e.g. the "latency" may be due to disk I/O, and
preferring the local node has lots of potential to be detrimental,
while forwarding is only slightly more expensive).

(3) sounds like what's happening with read repair.
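
For what it's worth, here is roughly how I picture the digest path.
This is only an illustrative sketch -- the class and method names are
made up and this is not the actual read verb handler code -- but it
shows why (3) is just as expensive IO-wise: a digest request performs
the same local read as a data request and only differs in what goes
back over the wire (I believe the digest is an MD5 hash, but treat
that as an assumption):

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;

    // Illustrative only -- not Cassandra's real read path.
    public class DigestReadSketch {

        // Stand-in for the local read (memtable/sstable lookup, merge, ...).
        static byte[] readRow(String key) {
            return ("row-for-" + key).getBytes(StandardCharsets.UTF_8);
        }

        // Data request: full read, full payload returned.
        static byte[] handleDataRequest(String key) {
            return readRow(key);
        }

        // Digest request: the same full read, but only a hash is returned,
        // so the saving is purely in bytes sent back, not in I/O or CPU.
        static byte[] handleDigestRequest(String key) throws Exception {
            return MessageDigest.getInstance("MD5").digest(readRow(key));
        }

        public static void main(String[] args) throws Exception {
            byte[] data = handleDataRequest("k1");
            byte[] digest = handleDigestRequest("k1");
            // The coordinator compares the digest of the data response with
            // the peer digests; a mismatch is what triggers read repair.
            boolean consistent = MessageDigest.isEqual(
                    MessageDigest.getInstance("MD5").digest(data), digest);
            System.out.println("replicas consistent: " + consistent);
        }
    }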

> The next one goes a little further:
>
> We read / write with quorum / rf = 3.
>
> It seems to me that it wouldn't be hard to patch the StorageProxy to send
> only one read request and one digest request. Only if one of the requests
> fails would we have to query the remaining node. We don't need read repair
> because we have to repair once a week anyway and quorum guarantees
> consistency. This way we could reduce read load significantly, which should
> compensate for the latency increase caused by failing reads. Am I missing
> something?

Am I interpreting you correctly that you want to switch the default
read mode in the case of QUORUM to optimistically assume that data is
consistent, read from one node, and only perform digest reads on the
rest?
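
If so, something like the following is how I read the proposal. This
is just a sketch with hypothetical names (Replica, quorumRead, etc.),
not the actual StorageProxy code, and it glosses over how a digest
mismatch would really have to be resolved:

    import java.security.MessageDigest;
    import java.util.List;

    // Sketch of the proposed RF=3 / QUORUM read: one data request, one
    // digest request, and the third replica only on failure or mismatch.
    public class OptimisticQuorumReadSketch {

        interface Replica {
            byte[] read(String key) throws Exception;        // data request
            byte[] readDigest(String key) throws Exception;  // digest request
        }

        static byte[] digestOf(byte[] data) throws Exception {
            return MessageDigest.getInstance("MD5").digest(data);
        }

        static byte[] quorumRead(String key, List<Replica> replicas) throws Exception {
            Replica first = replicas.get(0);
            Replica second = replicas.get(1);
            Replica fallback = replicas.get(2);

            byte[] data;
            byte[] digest;
            try {
                data = first.read(key);
                digest = second.readDigest(key);
            } catch (Exception e) {
                // One of the two requests failed: pay the extra round trip
                // to the remaining replica.
                return fallback.read(key);
            }
            if (MessageDigest.isEqual(digestOf(data), digest))
                return data;  // two replicas agree, quorum satisfied
            // Digest mismatch: a real implementation would have to fetch
            // full data and resolve by timestamp (i.e. do what read repair
            // does); this sketch just falls back to the third replica.
            return fallback.read(key);
        }
    }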

What's the goal here? The only thing saved by digest reads at QUORUM
seems to me to be the throughput saved by not sending the data back.
You're still taking the reads in terms of potential disk I/O, you
still have to wait for the response, and you're still taking almost
all of the CPU hit (still reading and checksumming, just not sending
back). For highly contended data the need to fall back to real reads
would significantly increase average latency.
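
As a back-of-envelope illustration of that last point (the numbers
below are made up; only the shape matters), the extra round trip on
fallback pushes expected latency up linearly with the mismatch rate,
which is exactly what you'd expect to see on hot keys:

    // Toy model: expected latency = base read latency + P(fallback) * extra
    // round trip. Assumed numbers, purely to show the trend.
    public class FallbackLatencySketch {
        public static void main(String[] args) {
            double readLatencyMs = 5.0;      // assumed single-replica read latency
            double fallbackPenaltyMs = 5.0;  // assumed extra round trip on mismatch/failure

            for (double pFallback : new double[] { 0.0, 0.05, 0.2, 0.5 }) {
                double expectedMs = readLatencyMs + pFallback * fallbackPenaltyMs;
                System.out.printf("fallback rate %3.0f%% -> expected latency %.2f ms%n",
                        pFallback * 100, expectedMs);
            }
        }
    }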

-- 
/ Peter Schuller
